WorldWideScience

Sample records for maximum absolute error

  1. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    Science.gov (United States)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

    Prediction using a forecasting method is one of the most important activities of an organization. The selection of an appropriate forecasting method is also important, but the percentage error of a method matters even more if decision makers are to reach the right conclusions. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the percentage of mistakes of the least-squares method resulted in a percentage error of 9.77%, and it was decided that the least-squares method works well for time series and trend data.
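
    To make the two metrics named in this record concrete, here is a minimal Python sketch; the series values are illustrative, not the paper's data.

      # Minimal sketch: MAD and MAPE for a forecast series (numbers illustrative).
      import numpy as np

      actual = np.array([112.0, 118.0, 132.0, 129.0, 121.0])
      forecast = np.array([110.0, 120.0, 128.0, 131.0, 119.0])  # e.g. a least-squares trend fit

      errors = actual - forecast
      mad = np.mean(np.abs(errors))                    # Mean Absolute Deviation
      mape = np.mean(np.abs(errors / actual)) * 100.0  # Mean Absolute Percentage Error, %

      print(f"MAD  = {mad:.2f}")
      print(f"MAPE = {mape:.2f}%")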

  2. Optimal quantum error correcting codes from absolutely maximally entangled states

    Science.gov (United States)

    Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio

    2018-02-01

    Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT-correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed form expressions for AME states of n parties with local dimension ...

  3. Study of errors in absolute flux density measurements of Cassiopeia A

    International Nuclear Information System (INIS)

    Kanda, M.

    1975-10-01

    An error analysis for absolute flux density measurements of Cassiopeia A is discussed. The lower-bound quadrature-accumulation error for state-of-the-art measurements of the absolute flux density of Cas A around 7 GHz is estimated to be 1.71% for 3 sigma limits. The corresponding practicable error for a careful but not state-of-the-art measurement is estimated to be 4.46% for 3 sigma limits.

  4. Sub-nanometer periodic nonlinearity error in absolute distance interferometers

    Science.gov (United States)

    Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang

    2015-05-01

    Periodic nonlinearity, which can cause errors at the nanometer scale, has become a main problem limiting absolute distance measurement accuracy. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed, which eliminates frequency and/or polarization mixing. Furthermore, the strict requirement on the laser source polarization is greatly relaxed. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated so that their optical paths do not overlap. Thus the main causes of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and leakage of beams, are eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.

  5. Assessing energy forecasting inaccuracy by simultaneously considering temporal and absolute errors

    International Nuclear Information System (INIS)

    Frías-Paredes, Laura; Mallor, Fermín; Gastón-Romeo, Martín; León, Teresa

    2017-01-01

    Highlights:
    • A new method to match time series is defined to assess energy forecasting accuracy.
    • This method relies on a new family of step patterns that optimizes the MAE.
    • A new definition of the Temporal Distortion Index between two series is provided.
    • A parametric extension controls both the temporal distortion index and the MAE.
    • Pareto optimal transformations of the forecast series are obtained for both indexes.

    Abstract: Recent years have seen a growing trend in wind and solar energy generation globally, and it is expected that an important percentage of total energy production will come from these energy sources. However, they present inherent variability that implies fluctuations in energy generation that are difficult to forecast. Thus, forecasting errors have a considerable role in the impacts and costs of renewable energy integration, management, and commercialization. This study presents an important advance in the task of analyzing prediction models, in particular, in the timing component of prediction error, which improves previous pioneering results. A new method to match time series is defined in order to assess energy forecasting accuracy. This method relies on a new family of step patterns, an essential component of the algorithm to evaluate the temporal distortion index (TDI). This family minimizes the mean absolute error (MAE) of the transformation with respect to the reference series (the real energy series) and also allows detailed control of the temporal distortion entailed in the prediction series. The simultaneous consideration of temporal and absolute errors allows the use of Pareto frontiers as characteristic error curves. Real examples of wind energy forecasts are used to illustrate the results.
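
    The paper's step-pattern family and TDI are richer than what fits here, but a plain dynamic-programming alignment with absolute-difference cost conveys the core idea of matching a forecast to a reference series; this is a hedged sketch, not the authors' algorithm.

      # Hedged sketch: DP alignment with absolute-difference cost between a
      # reference (real energy) series and a forecast; returns a crude
      # MAE-like score of the best alignment.
      import numpy as np

      def align_mae(reference, forecast):
          n, m = len(reference), len(forecast)
          cost = np.abs(np.subtract.outer(reference, forecast))
          acc = np.full((n + 1, m + 1), np.inf)
          acc[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],      # insertion
                                                       acc[i, j - 1],      # deletion
                                                       acc[i - 1, j - 1])  # match
          return acc[n, m] / max(n, m)

      rng = np.random.default_rng(0)
      real = np.sin(np.linspace(0, 6, 200))                # "real energy" series
      pred = np.sin(np.linspace(0, 6, 200) - 0.3) + 0.05 * rng.standard_normal(200)
      print(f"aligned MAE-like score: {align_mae(real, pred):.4f}")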

  6. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    Science.gov (United States)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: When the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error of each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only do the two metrics measure the characteristics of the probability distributions of modeling errors differently, but the effects of these characteristics on the overall expected error are also different. Most notably, under SQ error all bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a ...
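
    A hedged Monte Carlo illustration of the decomposition being discussed (not the paper's analytical derivation): for squared error, bias, variance, and noise add up; mean absolute error at the same point has no such additive split.

      # Monte Carlo bias-variance decomposition at one evaluation point.
      import numpy as np

      rng = np.random.default_rng(1)
      truth = 2.0          # true value at the evaluation point
      noise_sd = 0.5       # observation noise
      model_bias = 0.3     # systematic model error
      model_sd = 0.4       # model sensitivity to training sets

      preds = truth + model_bias + model_sd * rng.standard_normal(100_000)
      obs = truth + noise_sd * rng.standard_normal(100_000)

      sq_err = np.mean((obs - preds) ** 2)
      abs_err = np.mean(np.abs(obs - preds))
      decomposed = model_bias**2 + model_sd**2 + noise_sd**2
      print(f"mean SQ error {sq_err:.4f} ~= bias^2 + var + noise {decomposed:.4f}")
      print(f"mean ABS error {abs_err:.4f} (no additive decomposition)")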

  7. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

    Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio ... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID, tag cardinality estimation, maximum likelihood, detection error ...

  8. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    Science.gov (United States)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
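
    The two statistics this record advocates are easy to compute from the empirical CDF of unsigned errors; a minimal sketch with a synthetic error sample follows (the threshold is illustrative).

      # P(|error| < threshold) and a high-confidence error amplitude from the
      # empirical CDF of unsigned errors; the sample here is synthetic.
      import numpy as np

      rng = np.random.default_rng(2)
      errors = np.abs(rng.normal(loc=0.5, scale=2.0, size=500))  # |model - reference|

      threshold = 1.0                          # illustrative accuracy threshold
      p_below = np.mean(errors < threshold)    # probability of an error below it
      q95 = np.quantile(errors, 0.95)          # amplitude not exceeded at 95% confidence
      print(f"P(|err| < {threshold}) = {p_below:.2f}")
      print(f"95th percentile of |err| = {q95:.2f}")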

  9. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    Science.gov (United States)

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
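
    A hedged sketch of the underlying machinery, caricaturing the record's three-grid scheme by comparing two resolutions: the five-point stencil is applied on the unit square with a known exact solution, so the maximum absolute error can be checked directly.

      # Five-point finite differences for -(u_xx + u_yy) = f on the unit square,
      # with u = sin(pi x) sin(pi y) as the exact solution; Jacobi iteration.
      import numpy as np

      def solve_poisson(n):
          h = 1.0 / (n + 1)
          x = np.linspace(h, 1 - h, n)
          X, Y = np.meshgrid(x, x, indexing="ij")
          f = 2 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)
          u = np.zeros((n + 2, n + 2))                 # zero boundary values
          for _ in range(20_000):                      # Jacobi sweeps
              u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                      + u[1:-1, :-2] + u[1:-1, 2:] + h * h * f)
          return u[1:-1, 1:-1], np.sin(np.pi * X) * np.sin(np.pi * Y)

      for n in (16, 32):                               # two resolutions show O(h^2) error
          u, exact = solve_poisson(n)
          print(f"n={n}: max absolute error = {np.max(np.abs(u - exact)):.2e}")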

  10. Error Budget for a Calibration Demonstration System for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    Science.gov (United States)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2013-01-01

    A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach on-orbit absolute accuracy that allows climate change observations to survive data gaps exist at NIST in the laboratory, but it still needs to be demonstrated that the advances can move successfully from the laboratory to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference and methods for laboratory-based, absolute calibration suitable for climate-quality data collections is given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise behavior. Methods for demonstrating this error budget are also presented.

  11. Mapping the absolute magnetic field and evaluating the quadratic Zeeman-effect-induced systematic error in an atom interferometer gravimeter

    Science.gov (United States)

    Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim

    2017-09-01

    Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10^-8 m/s^2 ≈ 10^-9 g). This paper reports on the experimental investigation of Raman spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to the quadratic Zeeman effect. We discuss Raman duration and frequency step-size-dependent magnetic field measurement uncertainty, present vector light shift and tensor light shift induced magnetic field measurement offset, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and reducing the quadratic Zeeman-effect-induced systematic error in Raman transition-based precision measurements, such as atom interferometer gravimeters.

  12. Errors of absolute methods of reactor neutron activation analysis caused by non-1/E epithermal neutron spectra

    International Nuclear Information System (INIS)

    Erdtmann, G.

    1993-08-01

    A sufficiently accurate characterization of the neutron flux and spectrum, i.e. the determination of the thermal flux, the flux ratio and the epithermal flux spectrum shape factor, α, is a prerequisite for all types of absolute and monostandard methods of reactor neutron activation analysis. A convenient method for these measurements is the bare triple monitor method. However, the results of this method are very imprecise, because there are high error propagation factors from the counting errors of the monitor activities. Procedures are described to calculate the errors of the flux parameters, the α-dependent cross-section ratios, and of the analytical results from the errors of the activities of the monitor isotopes. They are included in FORTRAN programs which also allow a graphical representation of the results. A great number of examples were calculated for ten different irradiation facilities in four reactors and for 28 elements. Plots of the results are presented and discussed. (orig./HP)

  13. Absolute transition probabilities in the NeI 3p-3s fine structure by beam-gas-dye laser spectroscopy

    International Nuclear Information System (INIS)

    Hartmetz, P.; Schmoranzer, H.

    1983-01-01

    The beam-gas-dye laser two-step excitation technique is further developed and applied to the direct measurement of absolute atomic transition probabilities in the NeI 3p-3s fine-structure transition array with a maximum experimental error of 5%. (orig.)

  14. Measurement error correction in the least absolute shrinkage and selection operator model when validation data are available.

    Science.gov (United States)

    Vasquez, Monica M; Hu, Chengcheng; Roe, Denise J; Halonen, Marilyn; Guerra, Stefano

    2017-01-01

    Measurement of serum biomarkers by multiplex assays may be more variable than by single biomarker assays. Measurement error in these data may bias parameter estimates in regression analysis, which could mask true associations of serum biomarkers with an outcome. The Least Absolute Shrinkage and Selection Operator (LASSO) can be used for variable selection in these high-dimensional data. Furthermore, when the distribution of measurement error is assumed to be known or estimated with replication data, a simple measurement error correction method can be applied to the LASSO method. However, in practice the distribution of the measurement error is unknown, and it is expensive to estimate through replication, both in monetary cost and in the need for a greater amount of sample, which is often limited in quantity. We adapt an existing bias correction approach by estimating the measurement error using validation data in which a subset of serum biomarkers are re-measured on a random subset of the study sample. We evaluate this method using simulated data and data from the Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD). We show that the bias in parameter estimation is reduced and variable selection is improved.

  15. Errors and limits in the determination of plasma electron density by measuring the absolute values of the emitted continuum radiation intensity

    International Nuclear Information System (INIS)

    Bilbao, L.; Bruzzone, H.; Grondona, D.

    1994-01-01

    The reliable determination of a plasma electron density structure requires a good knowledge of the errors affecting the employed technique. A technique based on measurements of the absolute light intensity emitted by travelling plasma structures in plasma focus devices has been used, but it can be easily adapted to other geometries and even to stationary plasma structures with time-varying plasma densities. The purpose of this work is to discuss in some detail the errors and limits of this technique. Three separate errors are shown: the minimum size of the density structure that can be resolved, an overall error in the measurements themselves, and an uncertainty in the shape of the density profile. (author)

  16. Rectangular maximum-volume submatrices and their applications

    KAUST Repository

    Mikhalev, Aleksandr; Oseledets, I.V.

    2017-01-01

    We introduce a definition of the volume of a general rectangular matrix, which is equivalent to the absolute value of the determinant for square matrices. We generalize results on square maximum-volume submatrices to the rectangular case, show a connection of the rectangular volume with optimal experimental design, and provide estimates for the growth of coefficients and the approximation error in spectral and Chebyshev norms. Three promising applications of such submatrices are presented: recommender systems, finding maximal elements in low-rank matrices and preconditioning of overdetermined linear systems. The code is available online.
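
    An illustrative greedy sketch (not the authors' algorithm) of selecting rows of a tall matrix to approximately maximize the rectangular volume defined in this record.

      # Greedy row selection maximizing sqrt(det of the Gram matrix on the
      # smaller dimension), i.e. the rectangular volume of the submatrix.
      import numpy as np

      def volume(S):
          G = S @ S.T if S.shape[0] <= S.shape[1] else S.T @ S
          return np.sqrt(max(np.linalg.det(G), 0.0))

      def greedy_maxvol_rows(A, k):
          remaining, chosen = list(range(A.shape[0])), []
          for _ in range(k):
              best = max(remaining, key=lambda r: volume(A[chosen + [r], :]))
              chosen.append(best)
              remaining.remove(best)
          return chosen

      rng = np.random.default_rng(3)
      A = rng.standard_normal((50, 4))
      print("chosen rows:", greedy_maxvol_rows(A, 6))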

  17. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    International Nuclear Information System (INIS)

    Bokanowski, Olivier; Picarelli, Athena; Zidani, Hasnaa

    2015-01-01

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.

  18. A Model of Self-Monitoring Blood Glucose Measurement Error.

    Science.gov (United States)

    Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2017-07-01

    A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant SD absolute error; zone 2 with constant SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology allows one to derive realistic models of the SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials to compare SMBG-based with nonadjunctive CGM-based insulin treatments.
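
    A hedged sketch of the fitting step for one zone, with synthetic errors in place of the OTU2/BCN data; the real study uses two zones plus an exponential model for outliers.

      # Maximum-likelihood skew-normal fit to synthetic zone-1 errors (mg/dL),
      # followed by a goodness-of-fit check.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      errors = stats.skewnorm.rvs(a=3.0, loc=-2.0, scale=8.0, size=800,
                                  random_state=rng)

      a, loc, scale = stats.skewnorm.fit(errors)       # ML estimates
      ks = stats.kstest(errors, "skewnorm", args=(a, loc, scale))
      print(f"fitted a={a:.2f}, loc={loc:.2f}, scale={scale:.2f}")
      print(f"Kolmogorov-Smirnov p-value: {ks.pvalue:.3f}")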

  1. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and the allocation rate to the treatment arms are modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
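
    A much simpler adaptive rule than the worst-case strategies analyzed here still shows the basic phenomenon: reusing the fixed-sample critical value after a data-driven interim decision inflates the type 1 error. The Monte Carlo below is an illustration, not the paper's computation.

      # Stop at the interim if the fixed critical value is already crossed,
      # otherwise extend and test again with the same critical value.
      import numpy as np

      rng = np.random.default_rng(5)
      reps, n1, n2, z_crit = 400_000, 50, 50, 1.96
      z1 = rng.standard_normal(reps)              # interim z-statistic under H0
      z2 = rng.standard_normal(reps)              # independent second-stage increment
      z_final = (np.sqrt(n1) * z1 + np.sqrt(n2) * z2) / np.sqrt(n1 + n2)
      reject = (z1 > z_crit) | (z_final > z_crit)
      print(f"empirical type 1 error: {reject.mean():.4f} (nominal 0.0250)")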

  2. Effect of error in crack length measurement on maximum load fracture toughness of Zr-2.5Nb pressure tube material

    International Nuclear Information System (INIS)

    Bind, A.K.; Sunil, Saurav; Singh, R.N.; Chakravartty, J.K.

    2016-03-01

    Recently it was found that the maximum load toughness (J_max) for Zr-2.5Nb pressure tube material was practically unaffected by error in Δa. To check the sensitivity of J_max to error in the Δa measurement, J_max was calculated assuming no crack growth up to the maximum load (P_max) for as-received and hydrogen-charged Zr-2.5Nb pressure tube material. For loads up to P_max, the J values calculated assuming no crack growth (J_NC) were slightly higher than those calculated based on Δa measured using the DCPD technique (J_DCPD). In general, the error in the J calculation was found to increase exponentially with Δa. The error in the J_max calculation increased with an increase in Δa and a decrease in J_max. Based on the deformation theory of J, an analytic criterion was developed to check the insensitivity of J_max to error in Δa. A very good linear relation was found between the J_max calculated based on Δa measured using the DCPD technique and the J_max calculated assuming no crack growth. This relation will be very useful for calculating J_max without measuring the crack growth during fracture tests, especially for irradiated material. (author)

  3. Effects of errors in velocity tilt on maximum longitudinal compression during neutralized drift compression of intense beam pulses: I. general description

    Energy Technology Data Exchange (ETDEWEB)

    Kaganovich, Igor D.; Massidda, Scott; Startsev, Edward A.; Davidson, Ronald C.; Vay, Jean-Luc; Friedman, Alex

    2012-06-21

    Neutralized drift compression offers an effective means for particle beam pulse compression and current amplification. In neutralized drift compression, a linear longitudinal velocity tilt (head-to-tail gradient) is applied to the non-relativistic beam pulse, so that the beam pulse compresses as it drifts in the focusing section. The beam current can increase by more than a factor of 100 in the longitudinal direction. We have performed an analytical study of how errors in the velocity tilt acquired by the beam in the induction bunching module limit the maximum longitudinal compression. It is found that the compression ratio is determined by the relative errors in the velocity tilt. That is, one-percent errors may limit the compression to a factor of one hundred. However, a part of the beam pulse where the errors are small may compress to much higher values, which are determined by the initial thermal spread of the beam pulse. It is also shown that sharp jumps in the compressed current density profile can be produced due to overlaying of different parts of the pulse near the focal plane. Examples of slowly varying and rapidly varying errors compared to the beam pulse duration are studied. For beam velocity errors given by a cubic function, the compression ratio can be described analytically. In this limit, a significant portion of the beam pulse is located in the broad wings of the pulse and is poorly compressed. The central part of the compressed pulse is determined by the thermal spread. The scaling law for maximum compression ratio is derived. In addition to a smooth variation in the velocity tilt, fast-changing errors during the pulse may appear in the induction bunching module if the voltage pulse is formed by several pulsed elements. Different parts of the pulse compress nearly simultaneously at the target and the compressed profile may have many peaks. The maximum compression is a function of both thermal spread and the velocity errors. The effects of the

  4. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    Science.gov (United States)

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  5. THE FAST DECLINING TYPE Ia SUPERNOVA 2003gs, AND EVIDENCE FOR A SIGNIFICANT DISPERSION IN NEAR-INFRARED ABSOLUTE MAGNITUDES OF FAST DECLINERS AT MAXIMUM LIGHT

    International Nuclear Information System (INIS)

    Krisciunas, Kevin; Marion, G. H.; Suntzeff, Nicholas B.

    2009-01-01

    We obtained optical photometry of SN 2003gs on 49 nights, from 2 to 494 days after T(B_max). We also obtained near-IR photometry on 21 nights. SN 2003gs was the first fast-declining Type Ia SN that has been well observed since SN 1999by. While it was subluminous in optical bands compared to more slowly declining Type Ia SNe, it was not subluminous at maximum light in the near-IR bands. There appears to be a bimodal distribution in the near-IR absolute magnitudes of Type Ia SNe at maximum light. Those that peak in the near-IR after T(B_max) are subluminous in all bands. Those that peak in the near-IR prior to T(B_max), such as SN 2003gs, have effectively the same near-IR absolute magnitudes at maximum light regardless of the decline rate Δm_15(B). Near-IR spectral evidence suggests that opacities in the outer layers of SN 2003gs are reduced much earlier than for normal Type Ia SNe. That may allow γ rays that power the luminosity to escape more rapidly and accelerate the decline rate. This conclusion is consistent with the photometric behavior of SN 2003gs in the IR, which indicates a faster than normal decline from approximately normal peak brightness.

  6. Improving Bayesian credibility intervals for classifier error rates using maximum entropy empirical priors.

    Science.gov (United States)

    Gustafsson, Mats G; Wallman, Mikael; Wickenberg Bolin, Ulrika; Göransson, Hanna; Fryknäs, M; Andersson, Claes R; Isaksson, Anders

    2010-06-01

    Successful use of classifiers that learn to make decisions from a set of patient examples requires robust methods for performance estimation. Recently many promising approaches for determination of an upper bound for the error rate of a single classifier have been reported, but the Bayesian credibility interval (CI) obtained from a conventional holdout test still delivers one of the tightest bounds. The conventional Bayesian CI becomes unacceptably large in real-world applications where the test set sizes are less than a few hundred. The source of this problem is the fact that the CI is determined exclusively by the result on the test examples. In other words, there is no information at all provided by the uniform prior density distribution employed, which reflects complete lack of prior knowledge about the unknown error rate. Therefore, the aim of the study reported here was to investigate a maximum entropy (ME) based approach to improved prior knowledge and Bayesian CIs, demonstrating its relevance for biomedical research and clinical practice. It is demonstrated how a refined non-uniform prior density distribution can be obtained by means of the ME principle using empirical results from a few designs and tests using non-overlapping sets of examples. Experimental results show that ME-based priors improve the CIs when employed on four quite different simulated and two real-world data sets. An empirically derived ME prior seems promising for improving the Bayesian CI for the unknown error rate of a designed classifier. Copyright 2010 Elsevier B.V. All rights reserved.
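
    For reference, the baseline being improved upon is easy to reproduce; this minimal sketch uses a uniform Beta(1, 1) prior, which the paper replaces with a maximum-entropy empirical prior.

      # Bayesian credibility interval for a classifier error rate from k errors
      # in n holdout tests, with a uniform Beta(1, 1) prior.
      from scipy import stats

      k, n = 7, 40                                   # errors observed in holdout test
      posterior = stats.beta(1 + k, 1 + n - k)       # Beta prior -> Beta posterior
      lo, hi = posterior.ppf(0.025), posterior.ppf(0.975)
      print(f"95% credibility interval for error rate: ({lo:.3f}, {hi:.3f})")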

  7. Time-varying block codes for synchronisation errors: maximum a posteriori decoder and practical issues

    Directory of Open Access Journals (Sweden)

    Johann A. Briffa

    2014-06-01

    In this study, the authors consider time-varying block (TVB) codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP) decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.

  8. Maximum error-bounded Piecewise Linear Representation for online stream approximation

    KAUST Repository

    Xie, Qing; Pang, Chaoyi; Zhou, Xiaofang; Zhang, Xiangliang; Deng, Ke

    2014-01-01

    Given a time series data stream, the generation of an error-bounded Piecewise Linear Representation (error-bounded PLR) is to construct a number of consecutive line segments to approximate the stream, such that the approximation error does not exceed a prescribed error bound. In this work, we consider the error bound in the L∞ norm as the approximation criterion, which constrains the approximation error on each corresponding data point, and aim at designing algorithms to generate the minimal number of segments. In the literature, optimal approximation algorithms are effectively designed based on a transformed space other than the time-value space, while desirable optimal solutions based on the original time domain (i.e., time-value space) are still lacking. In this article, we propose two linear-time algorithms to construct error-bounded PLR for data streams based on the time domain, named OptimalPLR and GreedyPLR, respectively. OptimalPLR is an optimal algorithm that generates the minimal number of line segments for the stream approximation, and GreedyPLR is an alternative solution for the requirements of high efficiency and resource-constrained environments. In order to evaluate the superiority of OptimalPLR, we theoretically analyzed and compared OptimalPLR with the state-of-the-art optimal solution in the transformed space, which also achieves linear complexity. We successfully proved the theoretical equivalence between the time-value space and such a transformed space, and also discovered the superiority of OptimalPLR in processing efficiency in practice. The extensive results of empirical evaluation support and demonstrate the effectiveness and efficiency of our proposed algorithms.
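
    A simplified greedy sketch of error-bounded PLR in the L∞ norm; unlike the paper's linear-time OptimalPLR/GreedyPLR, this naive version refits the line on every extension, so it is quadratic in the worst case.

      # Grow each segment while a least-squares line stays within eps in L-inf,
      # then start a new segment.
      import numpy as np

      def greedy_plr(t, y, eps):
          segments, start = [], 0
          while start < len(t):
              end, coef = start + 1, np.array([0.0, y[start]])
              while end < len(t):
                  cand = np.polyfit(t[start:end + 1], y[start:end + 1], 1)
                  resid = np.polyval(cand, t[start:end + 1]) - y[start:end + 1]
                  if np.max(np.abs(resid)) > eps:
                      break
                  coef, end = cand, end + 1
              segments.append((t[start], t[end - 1], coef))
              start = end
          return segments

      t = np.linspace(0, 10, 300)
      y = np.sin(t) + 0.01 * np.random.default_rng(6).standard_normal(300)
      print(len(greedy_plr(t, y, eps=0.05)), "segments within eps = 0.05")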

  9. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    Science.gov (United States)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
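
    A hedged sketch of the general time-domain recipe described here (not Langbein's actual code): the power-law covariance is built from the fractional-differencing filter and the log-likelihood is evaluated via a Cholesky factorization.

      # Log-likelihood of a white + power-law noise model; alpha = 2 makes the
      # power-law component a random walk.
      import numpy as np

      def powerlaw_cov(n, alpha):
          h = np.ones(n)
          for k in range(1, n):
              h[k] = h[k - 1] * (k - 1 + alpha / 2.0) / k  # fractional integration filter
          L = np.zeros((n, n))
          for k in range(n):
              L[k:, k] = h[:n - k]                         # lower-triangular Toeplitz
          return L @ L.T

      def neg_log_likelihood(params, y):
          sig_w, sig_pl, alpha = params
          C = sig_w**2 * np.eye(len(y)) + sig_pl**2 * powerlaw_cov(len(y), alpha)
          Lc = np.linalg.cholesky(C)
          z = np.linalg.solve(Lc, y)
          return 0.5 * (z @ z) + np.sum(np.log(np.diag(Lc)))

      rng = np.random.default_rng(7)
      y = rng.standard_normal(200) + np.cumsum(0.1 * rng.standard_normal(200))
      print(f"-lnL at (1.0, 0.1, 2.0): {neg_log_likelihood((1.0, 0.1, 2.0), y):.2f}")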

  10. Partial sums of arithmetical functions with absolutely convergent ...

    Indian Academy of Sciences (India)

    For an arithmetical function f with an absolutely convergent Ramanujan expansion, we derive an asymptotic formula for the partial sum ∑_{n ≤ N} f(n) with an explicit error term. As a corollary, we obtain new results about sum-of-divisors functions and Jordan's totient functions.

  11. Maximum solid concentrations of coal water slurries predicted by neural network models

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Jun; Li, Yanchang; Zhou, Junhu; Liu, Jianzhong; Cen, Kefa

    2010-12-15

    Nonlinear back-propagation (BP) neural network models were developed to predict the maximum solid concentration of coal water slurry (CWS), which is a substitute for oil fuel, based on the physicochemical properties of 37 typical Chinese coals. The Levenberg-Marquardt algorithm was used to train five BP neural network models with different input factors. The data pretreatment method, learning rate and hidden neuron number were optimized by training the models. It is found that the Hardgrove grindability index (HGI), moisture and coalification degree of the parent coal are 3 indispensable factors for the prediction of CWS maximum solid concentration. Each BP neural network model gives a more accurate prediction than the traditional polynomial regression equation. The BP neural network model with the 3 input factors of HGI, moisture and oxygen/carbon ratio gives the smallest mean absolute error of 0.40%, which is much lower than the 1.15% given by the traditional polynomial regression equation. (author)
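
    A hedged sketch using scikit-learn's MLPRegressor as a stand-in for the paper's Levenberg-Marquardt BP network (sklearn offers L-BFGS, not LM); the coal-property data below are synthetic placeholders, not the 37-coal dataset.

      # 3-input neural network regression: HGI, moisture, oxygen/carbon ratio.
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(8)
      X = np.column_stack([rng.uniform(40, 110, 37),     # HGI
                           rng.uniform(1, 12, 37),       # moisture, %
                           rng.uniform(0.05, 0.25, 37)]) # oxygen/carbon ratio
      cws = (72 - 0.6 * X[:, 1] - 40 * X[:, 2] + 0.05 * X[:, 0]
             + rng.normal(0, 0.3, 37))                   # max solid concentration, %

      Xs = StandardScaler().fit_transform(X)
      net = MLPRegressor(hidden_layer_sizes=(6,), solver="lbfgs",
                         max_iter=5000, random_state=0).fit(Xs, cws)
      mae = np.mean(np.abs(net.predict(Xs) - cws))
      print(f"training mean absolute error: {mae:.2f}%")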

  12. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if, in addition, a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate in any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. On Selection of the Probability Distribution for Representing the Maximum Annual Wind Speed in East Cairo, Egypt

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh. I.; El-Hemamy, S.T.

    2013-01-01

    The main objective of this paper is to identify an appropriate probability model and the best plotting position formula to represent the maximum annual wind speed in east Cairo. This model can be used to estimate the extreme wind speed and return period at a particular site, as well as to determine the radioactive release distribution in case of an accident at a nuclear power plant. Wind speed probabilities can be estimated by using probability distributions. An accurate determination of the probability distribution for maximum wind speed data is very important in estimating the extreme value. The probability plots of the maximum annual wind speed (MAWS) in east Cairo are fitted to six major statistical distributions, namely Gumbel, Weibull, Normal, Log-Normal, Logistic and Log-Logistic, while eight plotting positions of Hosking and Wallis, Hazen, Gringorten, Cunnane, Blom, Filliben, Benard and Weibull are used for determining the exceedance probabilities. A proper probability distribution for representing the MAWS is selected by statistical test criteria in frequency analysis. Therefore, the best plotting position formula which can be used to select the appropriate probability model representing the MAWS data must be determined. The statistical test criteria, represented by the probability plot correlation coefficient (PPCC), the root mean square error (RMSE), the relative root mean square error (RRMSE) and the maximum absolute error (MAE), are used to select the appropriate plotting position and distribution. The data obtained show that the maximum annual wind speed in east Cairo varies from 44.3 km/h to 96.1 km/h over a duration of 39 years. The Weibull plotting position combined with the Normal distribution gave the highest fit and the most reliable and accurate predictions of the wind speed in the study area, having the highest value of PPCC and the lowest values of RMSE, RRMSE and MAE.
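
    A hedged sketch of the selection procedure: candidate distributions are fitted to synthetic annual-maximum wind speeds and ranked by the PPCC computed with Weibull plotting positions i/(n+1), alongside RMSE; the real study also uses RRMSE, MAE and seven other plotting positions.

      # Fit candidate distributions and rank by PPCC and RMSE.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(9)
      winds = stats.gumbel_r.rvs(loc=60, scale=9, size=39, random_state=rng)  # km/h

      x = np.sort(winds)
      p = np.arange(1, len(x) + 1) / (len(x) + 1.0)     # Weibull plotting positions
      for name in ("gumbel_r", "weibull_min", "norm", "lognorm", "logistic", "fisk"):
          dist = getattr(stats, name)                   # fisk = log-logistic
          params = dist.fit(x)
          quantiles = dist.ppf(p, *params)
          ppcc = np.corrcoef(x, quantiles)[0, 1]
          rmse = np.sqrt(np.mean((x - quantiles) ** 2))
          print(f"{name:12s} PPCC={ppcc:.4f} RMSE={rmse:.2f}")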

  14. Land Use in LCIA: an absolute scale proposal for Biotic Production Potential

    DEFF Research Database (Denmark)

    Saez de Bikuna Salinas, Koldo; Ibrom, Andreas; Hauschild, Michael Zwicky

    ... the present study proposes a single absolute scale for the midpoint impact category (MIC) of Biotic Production Potential (BPP). It is hypothesized that, for an ecosystem in equilibrium (where NPP equals decay), such an ecosystem has reached the maximum biotic throughput subject to site-specific conditions ... and no externally added inputs. The original ecosystem (or Potential Natural Vegetation) of a certain land then gives the maximum BPP with no additional, downstream or upstream, impacts. This Natural BPP is proposed as the maximum BPP in a hypothetical Absolute Scale for LCA's Land Use framework. It is argued ... that this maximum BPP is Nature's optimal solution through evolution-adaptation mechanisms, which provides the maximum matter throughput subject to the rest of the environmental constraints (without further impacts). As a consequence, this scale raises a Land Use Optimality Point that suggests the existence of a limit...

  15. Absolute magnitudes by statistical parallaxes

    International Nuclear Information System (INIS)

    Heck, A.

    1978-01-01

    The author describes an algorithm for stellar luminosity calibrations (based on the principle of maximum likelihood) which allows the calibration of relations of the type: M_i = Σ_{j=1}^{N} q_j C_{ij}, i = 1, ..., n, where n is the size of the sample at hand, M_i are the individual absolute magnitudes, C_{ij} are observational quantities (j = 1, ..., N), and q_j are the coefficients to be determined. If one puts N = 1 and C_{iN} = 1, one has q_1 = M(mean), the mean absolute magnitude of the sample. As additional output, the algorithm also provides the dispersion in magnitude of the sample σ_M, the mean solar motion (U, V, W) and the corresponding velocity ellipsoid (σ_u, σ_v, σ_w). The use of this algorithm is illustrated. (Auth.)
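
    A minimal sketch of the calibration relation itself (ordinary least squares in place of the full maximum-likelihood machinery with kinematic terms), on synthetic data: the coefficients q_j of M_i = Σ_j q_j C_ij are recovered from the design matrix C.

      # Solve M_i = sum_j q_j C_ij for the calibration coefficients q_j.
      import numpy as np

      rng = np.random.default_rng(11)
      C = np.column_stack([np.ones(60),                 # C_i1 = 1: mean-magnitude term
                           rng.uniform(0.2, 1.8, 60)])  # C_i2: e.g. a colour index
      q_true = np.array([1.5, 2.0])
      M = C @ q_true + rng.normal(0, 0.3, 60)           # magnitudes with scatter sigma_M

      q_hat, *_ = np.linalg.lstsq(C, M, rcond=None)
      print("estimated coefficients:", q_hat)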

  16. Accurate Maximum Power Tracking in Photovoltaic Systems Affected by Partial Shading

    Directory of Open Access Journals (Sweden)

    Pierluigi Guerriero

    2015-01-01

    A maximum power tracking algorithm exploiting operating point information gained on individual solar panels is presented. The proposed algorithm recognizes the presence of multiple local maxima in the power-voltage curve of a shaded solar field and evaluates the coordinates of the absolute maximum. The effectiveness of the proposed approach is evidenced by means of circuit-level simulation and experimental results. Experiments evidenced that, in comparison with a standard perturb and observe algorithm, we achieve faster convergence in normal operating conditions (when the solar field is uniformly illuminated) and we accurately locate the absolute maximum power point in partial shading conditions, thus avoiding convergence on local maxima.
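
    An illustrative sketch of the failure mode and the remedy (not the paper's panel-level algorithm): on a double-peaked P-V curve, a coarse global scan followed by local refinement finds the absolute maximum, where a hill-climbing perturb and observe tracker could settle on the lower local peak.

      # Global scan + refinement on a toy partially-shaded P-V curve.
      import numpy as np

      def pv_power(v):
          # two local maxima, near 14 V and 30 V
          return np.maximum(0.0, 3.0 * v * np.exp(-((v - 14) / 6.0) ** 2)
                                 + 2.0 * v * np.exp(-((v - 30) / 4.0) ** 2))

      v_scan = np.linspace(0, 40, 81)                   # coarse sweep
      v0 = v_scan[np.argmax(pv_power(v_scan))]
      v_fine = np.linspace(v0 - 1, v0 + 1, 201)         # local refinement
      v_mpp = v_fine[np.argmax(pv_power(v_fine))]
      print(f"absolute MPP near V = {v_mpp:.2f} V, P = {pv_power(v_mpp):.1f} W")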

  17. Regional compensation for statistical maximum likelihood reconstruction error of PET image pixels

    International Nuclear Information System (INIS)

    Forma, J; Ruotsalainen, U; Niemi, J A

    2013-01-01

    In positron emission tomography (PET), there is an increasing interest in studying not only the regional mean tracer concentration, but also its variation arising from local differences in physiology, the tissue heterogeneity. However, in reconstructed images this physiological variation is shadowed by a large reconstruction error, which is caused by noisy data and the inversion of the tomographic problem. We present a new procedure which can quantify the error variation in regional reconstructed values for a given PET measurement, and reveal the remaining tissue heterogeneity. The error quantification is made by creating and reconstructing noise realizations of virtual sinograms, which are statistically similar to the measured sinogram. Tests with physical phantom data show that the characterization of error variation and the true heterogeneity is possible, despite the existing model error when real measurements are considered. (paper)

  18. Rational functions with maximal radius of absolute monotonicity

    KAUST Repository

    Loczi, Lajos

    2014-05-19

    We study the radius of absolute monotonicity R of rational functions with numerator and denominator of degree s that approximate the exponential function to order p. Such functions arise in the application of implicit s-stage, order p Runge-Kutta methods for initial value problems and the radius of absolute monotonicity governs the numerical preservation of properties like positivity and maximum-norm contractivity. We construct a function with p=2 and R>2s, disproving a conjecture of van de Griend and Kraaijevanger. We determine the maximum attainable radius for functions in several one-parameter families of rational functions. Moreover, we prove earlier conjectured optimal radii in some families with 2 or 3 parameters via uniqueness arguments for systems of polynomial inequalities. Our results also prove the optimality of some strong stability preserving implicit and singly diagonally implicit Runge-Kutta methods. Whereas previous results in this area were primarily numerical, we give all constants as exact algebraic numbers.

  1. Definition of correcting factors for absolute radon content measurement formula

    International Nuclear Information System (INIS)

    Ji Changsong; Xiao Ziyun; Yang Jianfeng

    1992-01-01

    The absolute method of radon content measurement is based on the Thomas radon measurement formula. It was found experimentally that a systematic error exists in radon content measurements made by means of the Thomas formula. By analysis of the behaviour of radon daughters, five factors, including filter efficiency, detector construction factor, self-absorbance, energy spectrum factor, and gravity factor, were introduced into the Thomas formula, so that the systematic error was eliminated. The measuring methods for the five factors are given.

  2. Error Analysis of Determining Airplane Location by Global Positioning System

    OpenAIRE

    Hajiyev, Chingiz; Burat, Alper

    1999-01-01

    This paper studies the error analysis of determining airplane location by the global positioning system (GPS) using a statistical testing method. The Newton-Raphson method positions the airplane at the intersection point of four spheres. Absolute errors, relative errors and standard deviation have been calculated. The results show that the positioning error of the airplane varies with the coordinates of the GPS satellites and the airplane.

  3. Lyman alpha SMM/UVSP absolute calibration and geocoronal correction

    Science.gov (United States)

    Fontenla, Juan M.; Reichmann, Edwin J.

    1987-01-01

    Lyman alpha observations from the Ultraviolet Spectrometer Polarimeter (UVSP) instrument of the Solar Maximum Mission (SMM) spacecraft were analyzed and provide instrumental calibration details. Specific values of the instrument quantum efficiency, Lyman alpha absolute intensity, and correction for geocoronal absorption are presented.

  4. Absolute GPS Positioning Using Genetic Algorithms

    Science.gov (United States)

    Ramillien, G.

    A new inverse approach for restoring the absolute coordinates of a ground-based station from three or four observed GPS pseudo-ranges is proposed. This stochastic method is based on simulations of natural evolution named genetic algorithms (GA). These iterative procedures provide fairly good and robust estimates of the absolute positions in the Earth's geocentric reference system. For comparison/validation, GA results are compared to the ones obtained using the classical linearized least-squares scheme for the determination of the XYZ location proposed by Bancroft (1985), which is strongly limited by the number of available observations (i.e. here, the number of input pseudo-ranges must be four). The r.m.s. accuracy of the non-linear cost function reached by this latter method is typically ~10^-4 m^2, corresponding to ~300-500-m accuracies for each geocentric coordinate. However, GA can provide more acceptable solutions (r.m.s. errors < 10^-5 m^2), even when only three instantaneous pseudo-ranges are used, such as after a loss of lock during a GPS survey. Tuned GA parameters used in different simulations are N=1000 starting individuals, as well as Pc=60-70% and Pm=30-40% for the crossover probability and mutation rate, respectively. Statistical tests on the ability of GA to recover acceptable coordinates in the presence of significant levels of noise are made by simulating nearly 3000 random samples of erroneous pseudo-ranges. Here, two main sources of measurement errors are considered in the inversion: (1) typical satellite-clock errors and/or 300-metre variance atmospheric delays, and (2) Geometrical Dilution of Precision (GDOP) due to the particular GPS satellite configuration at the time of acquisition. Extracting valuable information even from low-quality starting range observations, GA offer an interesting alternative for high-precision GPS positioning.
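
    A hedged toy version of the approach: a small genetic algorithm searching for receiver coordinates that minimize squared pseudo-range residuals. The population size echoes the abstract's N=1000; the truncation selection, blend crossover, decaying mutation, simplified satellite geometry and omitted receiver-clock bias are my assumptions, not the paper's design.

      # GA point-positioning from four noisy pseudo-ranges (clock bias ignored).
      import numpy as np

      rng = np.random.default_rng(10)
      sats = 1e7 * np.array([[1.56, 0.75, 2.02], [-1.21, 1.89, 2.10],
                             [0.12, -1.74, 1.95], [1.80, -0.20, 2.23]])  # m
      receiver = np.array([2.0e6, 1.0e6, 3.0e6])        # ground truth to recover
      ranges = np.linalg.norm(sats - receiver, axis=1) + rng.normal(0, 5.0, 4)

      def cost(pop):                                    # summed squared residuals
          d = np.linalg.norm(sats[None, :, :] - pop[:, None, :], axis=2)
          return np.sum((d - ranges) ** 2, axis=1)

      pop = rng.uniform(-5e6, 5e6, size=(1000, 3))      # N = 1000 individuals
      for gen in range(200):
          parents = pop[np.argsort(cost(pop))[:200]]    # truncation selection + elitism
          mates = parents[rng.integers(0, 200, size=(800, 2))]
          w = rng.random((800, 1))
          children = w * mates[:, 0] + (1 - w) * mates[:, 1]   # blend crossover
          hit = rng.random(800) < 0.35                          # mutation, Pm ~ 35%
          children[hit] += rng.normal(0, 1e5 * 0.95 ** gen, (int(hit.sum()), 3))
          pop = np.vstack([parents, children])
      best = pop[np.argmin(cost(pop))]
      print(f"position error: {np.linalg.norm(best - receiver):.1f} m")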

  5. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    Science.gov (United States)

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  7. Absolute measurement of 152Eu

    International Nuclear Information System (INIS)

    Baba, Hiroshi; Baba, Sumiko; Ichikawa, Shinichi; Sekine, Toshiaki; Ishikawa, Isamu

    1981-08-01

    A new method for the absolute measurement of 152Eu was established based on the 4πβ-γ spectroscopic anti-coincidence method. It is a coincidence counting method consisting of a 4πβ counter and a Ge(Li) γ-ray detector, in which the effective counting efficiencies of the 4πβ counter for β-rays, conversion electrons, and Auger electrons were obtained by taking the intensity ratios of certain γ-rays between the singles spectrum and the spectrum coincident with pulses from the 4πβ counter. First, to verify the method, three different methods of absolute measurement were performed with a prepared 60Co source, and excellent agreement was found among the results they gave. Next, the 4πβ-γ spectroscopic coincidence measurement was applied to 152Eu sources prepared by irradiating an enriched 151Eu target in a reactor. The result was compared with that obtained by γ-ray spectrometry using a 152Eu standard source supplied by LMRI; the two agreed within an error of 2%. (author)
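
    The arithmetic behind 4πβ-γ coincidence counting is compact enough to sketch. The helper below mirrors the abstract's use of γ-line intensity ratios to obtain an effective β efficiency, followed by the classic idealized activity relation; dead-time, background and decay-scheme corrections are deliberately omitted.

```python
def beta_efficiency(n_gamma_coinc, n_gamma_single):
    """Effective 4pi-beta efficiency from the ratio of a gamma line's count
    rate in the beta-coincident spectrum to its rate in the singles spectrum."""
    return n_gamma_coinc / n_gamma_single

def coincidence_activity(n_beta, n_gamma, n_coinc):
    """Idealized 4pi-beta-gamma coincidence counting: N0 = Nb * Ng / Nc
    (no dead-time, background or decay-scheme corrections applied)."""
    return n_beta * n_gamma / n_coinc
```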

  8. Maximum Safety Regenerative Power Tracking for DC Traction Power Systems

    Directory of Open Access Journals (Sweden)

    Guifu Du

    2017-02-01

    Direct current (DC) traction power systems are widely used in metro transport systems, with running rails usually being used as return conductors. When traction current flows through the running rails, a potential known as "rail potential" is generated between the rails and ground. Currently, abnormal rises of rail potential occur in many railway lines during the operation of railway systems. Excessively high rail potentials pose a threat to human life and to devices connected to the rails. In this paper, the effect of regenerative power distribution on rail potential is analyzed. Maximum safety regenerative power tracking is proposed for the control of the maximum absolute rail potential and energy consumption during the operation of DC traction power systems. The dwell time of multiple trains at each station and the trigger voltage of the regenerative energy absorbing device (READ) are optimized based on an improved particle swarm optimization (PSO) algorithm to manage the distribution of regenerative power. In this way, the maximum absolute rail potential and energy consumption of DC traction power systems can be reduced. The operation data of Guangzhou Metro Line 2 are used in the simulations, and the results show that the scheme can reduce the maximum absolute rail potential and energy consumption effectively and guarantee safety while saving energy in DC traction power systems.
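
    The optimization loop itself is standard PSO over the dwell times and the READ trigger voltage. Below is a generic global-best PSO sketch with a smooth toy objective standing in for the traction-network simulation; the bounds, weights and five-station example are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    """Stand-in for the traction-network simulation: in the paper this would
    return a weighted sum of the maximum absolute rail potential and the
    energy consumption for dwell times x[:-1] and READ trigger voltage x[-1]."""
    return float(np.sum((x - 1.0) ** 2))          # smooth toy surface

def pso(obj, lo, hi, n=30, iters=100, w=0.72, c1=1.49, c2=1.49):
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, (n, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([obj(p) for p in x])
    gbest = pbest[np.argmin(pval)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                # respect operational bounds
        val = np.array([obj(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[np.argmin(pval)].copy()
    return gbest, pval.min()

# e.g. dwell times (s) at five stations plus a READ trigger voltage (V):
best, best_cost = pso(objective, lo=[20] * 5 + [600], hi=[60] * 5 + [900])
```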

  9. Online absolute pose compensation and steering control of industrial robot based on six degrees of freedom laser measurement

    Science.gov (United States)

    Yang, Juqing; Wang, Dayong; Fan, Baixing; Dong, Dengfeng; Zhou, Weihu

    2017-03-01

    In-situ intelligent manufacturing of large-volume equipment requires industrial robots with high absolute positioning accuracy and orientation steering control. Conventional robots mainly employ offline calibration technology to identify and compensate key robotic parameters. However, the dynamic and static parameters of a robot change nonlinearly. It is not possible to acquire a robot's actual parameters and control its absolute pose with high accuracy within a large workspace by offline calibration in real time. This study proposes a real-time online absolute pose steering control method for an industrial robot based on six degrees of freedom laser tracking measurement, which adopts comprehensive compensation and correction of differential movement variables. First, the pose steering control system and the robot kinematics error model are constructed, and then the pose error compensation mechanism and algorithm are introduced in detail. By accurately measuring the position and orientation of the robot end-tool, computing the Jacobian matrix of the joint variables and correcting the joint variables, real-time online absolute pose compensation for an industrial robot is implemented in simulations and experimental tests. The average positioning error is 0.048 mm and the orientation accuracy is better than 0.01 deg. The results demonstrate that the proposed method is feasible, and the online absolute accuracy of a robot is sufficiently enhanced.
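
    The core of such a correction loop is: measure the pose error with the tracker, map it through the Jacobian, and update the joints. The sketch below uses a toy 2-link planar arm in place of the industrial robot's kinematics; the structure of the loop, not the model, is the point.

```python
import numpy as np

def fk(q, l1=0.5, l2=0.4):
    """Forward kinematics of a toy 2-link planar arm (stand-in robot model)."""
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jac(q, l1=0.5, l2=0.4):
    """Analytic Jacobian d(pose)/d(joints) of the toy arm."""
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def correct_pose(q, target, tol=1e-9, iters=50):
    """Measure the pose error (a laser tracker in the paper), map it through
    the Jacobian, and update the joint variables until the error vanishes."""
    q = np.asarray(q, float)
    for _ in range(iters):
        err = target - fk(q)                      # measured pose error
        if np.linalg.norm(err) < tol:
            break
        q += np.linalg.solve(jac(q), err)         # dq = J^-1 * dx
    return q

print(correct_pose([0.3, 0.8], target=np.array([0.6, 0.5])))
```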

  10. Pseudo-absolute quantitative analysis using gas chromatography – Vacuum ultraviolet spectroscopy – A tutorial

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Ling [Department of Chemistry & Biochemistry, The University of Texas at Arlington, Arlington, TX (United States); Smuts, Jonathan; Walsh, Phillip [VUV Analytics, Inc., Cedar Park, TX (United States); Qiu, Changling [Department of Chemistry & Biochemistry, The University of Texas at Arlington, Arlington, TX (United States); McNair, Harold M. [Department of Chemistry, Virginia Tech, Blacksburg, VA (United States); Schug, Kevin A., E-mail: kschug@uta.edu [Department of Chemistry & Biochemistry, The University of Texas at Arlington, Arlington, TX (United States)

    2017-02-08

    The vacuum ultraviolet detector (VUV) is a new non-destructive mass sensitive detector for gas chromatography that continuously and rapidly collects full wavelength range absorption between 120 and 240 nm. In addition to conventional methods of quantification (internal and external standard), gas chromatography - vacuum ultraviolet spectroscopy has the potential for pseudo-absolute quantification of analytes based on pre-recorded cross sections (well-defined absorptivity across the 120–240 nm wavelength range recorded by the detector) without the need for traditional calibration. The pseudo-absolute method was used in this research to experimentally evaluate the sources of sample loss and gain associated with sample introduction into a typical gas chromatograph. Standard samples of benzene and natural gas were used to assess precision and accuracy for the analysis of liquid and gaseous samples, respectively, based on the amount of analyte loaded on-column. Results indicate that injection volume, split ratio, and sampling times for splitless analysis can all contribute to inaccurate, yet precise sample introduction. For instance, an autosampler can very reproducibly inject a designated volume, but there are significant systematic errors (here, a consistently larger volume than that designated) in the actual volume introduced. The pseudo-absolute quantification capability of the vacuum ultraviolet detector provides a new means for carrying out system performance checks and potentially for solving challenging quantitative analytical problems. For practical purposes, an internal standardized approach to normalize systematic errors can be used to perform quantitative analysis with the pseudo-absolute method. - Highlights: • Gas chromatography diagnostics and quantification using VUV detector. • Absorption cross-sections for molecules enable pseudo-absolute quantitation. • Injection diagnostics reveal systematic errors in hardware settings. • Internal

  12. Absolute gravity measurements at three sites characterized by different environmental conditions using two portable ballistic gravimeters

    Science.gov (United States)

    Greco, Filippo; Biolcati, Emanuele; Pistorio, Antonio; D'Agostino, Giancarlo; Germak, Alessandro; Origlia, Claudio; Del Negro, Ciro

    2015-03-01

    The performance of two absolute gravimeters at three different sites in Italy between 2009 and 2011 is presented. The measurements of the gravity acceleration g were performed using the absolute gravimeters Micro-g LaCoste FG5#238 and the INRiM prototype IMGC-02, which represent the state of the art in ballistic gravimeter technology (relative uncertainty of a few parts in 10^9). For the comparison, the measured g values were reported at the same height by means of the vertical gravity gradient estimated at each site with relative gravimeters. The consistency and reliability of the gravity observations, as well as the performance and efficiency of the instruments, were assessed by measurements made at sites characterized by different logistics and environmental conditions. Furthermore, the various factors affecting the measurements and their uncertainty were thoroughly investigated. The measurements showed good agreement, with the minimum and maximum differences being 4.0 and 8.3 μGal. The normalized errors are well below 1, ranging between 0.06 and 0.45, confirming the compatibility of the results. This excellent agreement can be attributed to several factors, including the good working order of the gravimeters and the correct setup and use of the instruments under different conditions. These results can contribute to the standardization of absolute gravity surveys, largely for applications in geophysics, volcanology and other branches of geosciences, allowing a good trade-off between uncertainty and efficiency of gravity measurements.
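
    The compatibility statistic quoted above is the normalized error, En = |g1 - g2| / sqrt(U1^2 + U2^2), with En < 1 indicating agreement within the expanded uncertainties. A one-line sketch; the uncertainty values in the example are invented for illustration:

```python
def normalized_error(g1, u1, g2, u2):
    """En = |g1 - g2| / sqrt(U1^2 + U2^2); En < 1 means compatible results."""
    return abs(g1 - g2) / (u1**2 + u2**2) ** 0.5

# an 8.3 uGal difference with two (invented) 13 uGal expanded uncertainties:
print(normalized_error(0.0, 13.0, 8.3, 13.0))   # ~0.45
```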

  13. Practical application of the theory of errors in measurement

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the practical application of the theory of errors in measurement. The topics of the chapter include fixing on a maximum desired error, selecting a maximum error, the procedure for limiting the error, utilizing a standard procedure, setting specifications for a standard procedure, and selecting the number of measurements to be made
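
    A worked instance of "selecting the number of measurements to be made": if single measurements scatter with standard deviation sigma and the mean must meet a maximum desired error E at confidence level z, the smallest sufficient n follows from z * sigma / sqrt(n) <= E. A sketch of that standard rule (the chapter's own procedure may differ in detail):

```python
from math import ceil

def n_measurements(sigma, max_error, z=1.96):
    """Smallest n such that z * sigma / sqrt(n) <= max_error (95% by default)."""
    return ceil((z * sigma / max_error) ** 2)

# e.g. single-measurement scatter of 0.5 units and a desired error of 0.2 units:
print(n_measurements(sigma=0.5, max_error=0.2))   # -> 25
```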

  14. Fringe order correction for the absolute phase recovered by two selected spatial frequency fringe projections in fringe projection profilometry.

    Science.gov (United States)

    Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun

    2017-08-01

    The performance of two-selected-spatial-frequency phase unwrapping methods is limited by a phase error bound beyond which errors occur in the fringe order, leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct the wrong fringe orders. Two constraints are introduced during the fringe order determination of the two selected spatial frequency phase unwrapping methods. A strategy to detect and correct the wrong fringe orders is also described. Compared with existing methods, we do not need to estimate a threshold associated with absolute phase values to determine the fringe order error, which makes the method more reliable and avoids a search procedure when detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by experimental results.
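
    For context, the baseline two-frequency unwrapping step that such methods protect looks as follows: the low-frequency phase (assumed already absolute) predicts the high-frequency phase, and the rounded difference is the fringe order. The residual check with a pi threshold is a generic illustration, not the two constraints proposed in the paper.

```python
import numpy as np

def unwrap_two_freq(phi_lo, phi_hi, f_lo, f_hi):
    """Temporal unwrapping with two selected spatial frequencies; phi_lo is
    assumed absolute (a single fringe spans the field of view)."""
    scale = f_hi / f_lo
    k = np.round((scale * phi_lo - phi_hi) / (2 * np.pi))   # fringe order
    phi_abs = phi_hi + 2 * np.pi * k
    # a residual above pi flags a likely wrong fringe order at that pixel
    bad = np.abs(scale * phi_lo - phi_abs) > np.pi
    return phi_abs, k, bad
```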

  15. Absolute beam-charge measurement for single-bunch electron beams

    International Nuclear Information System (INIS)

    Suwada, Tsuyoshi; Ohsawa, Satoshi; Furukawa, Kazuro; Akasaka, Nobumasa

    2000-01-01

    The absolute beam charge of a single-bunch electron beam with a pulse width of 10 ps and that of a short-pulsed electron beam with a pulse width of 1 ns were measured with a Faraday cup in a beam test for the KEK B-Factory (KEKB) injector linac. It is strongly desired to obtain a precise beam-injection rate to the KEKB rings, and to estimate the amount of beam loss. A wall-current monitor was also recalibrated within an error of ±2%. This report describes the new results for an absolute beam-charge measurement for single-bunch and short-pulsed electron beams, and recalibration of the wall-current monitors in detail. (author)

  16. Absolute method of measuring magnetic susceptibility

    Science.gov (United States)

    Thorpe, A.; Senftle, F.E.

    1959-01-01

    An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results for a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.

  17. A new accuracy measure based on bounded relative error for time series forecasting.

    Science.gov (United States)

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made of the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation of the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with a user-selectable benchmark, performs as well as or better than other measures on the selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on the geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
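
    UMBRAE is simple to compute from the paper's definition: bound each relative absolute error against a benchmark forecast, average, then unscale. A sketch (the nan handling for ties at zero error is our own convenience choice, and the naive benchmark in the example is one common user-selectable option):

```python
import numpy as np

def umbrae(y, yhat, yhat_bench):
    """Unscaled Mean Bounded Relative Absolute Error (Chen et al., 2017).
    Values < 1 mean the forecast beats the benchmark; > 1 means it is worse."""
    e, eb = np.abs(y - yhat), np.abs(y - yhat_bench)
    t = e / (e + eb)                    # bounded relative absolute error in [0, 1]
    mbrae = np.nanmean(t)               # ties at zero error give nan; skipped here
    return mbrae / (1.0 - mbrae)

y = np.array([10.0, 12.0, 11.0, 13.0])
yhat = np.array([10.5, 11.5, 11.2, 12.6])
print(umbrae(y[1:], yhat[1:], y[:-1]))   # benchmark: previous observation
```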

  18. Error estimation in plant growth analysis

    Directory of Open Access Journals (Sweden)

    Andrzej Gregorczyk

    2014-01-01

    A scheme is presented for the calculation of errors of dry matter values which occur during the approximation of data with growth curves, determined by the analytical method (logistic function) and by the numerical method (Richards function). Formulae are then shown which describe the absolute errors of the growth characteristics: growth rate (GR), relative growth rate (RGR), unit leaf rate (ULR) and leaf area ratio (LAR). Calculation examples concerning the growth course of oat and maize plants are given. A critical analysis of the estimation of the obtained results has been carried out. The purposefulness of the joint application of statistical methods and error calculus in plant growth analysis has been ascertained.

  19. Twice cutting method reduces tibial cutting error in unicompartmental knee arthroplasty.

    Science.gov (United States)

    Inui, Hiroshi; Taketomi, Shuji; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae

    2016-01-01

    Bone cutting error can be one of the causes of malalignment in unicompartmental knee arthroplasty (UKA). The amount of cutting error in total knee arthroplasty has been reported; however, none have investigated cutting error in UKA. The purpose of this study was to reveal the amount of cutting error in UKA when an open cutting guide was used, and to clarify whether cutting the tibia horizontally twice using the same cutting guide reduced the cutting errors in UKA. We measured the alignment of the tibial cutting guides, the first-cut cutting surfaces and the second-cut cutting surfaces using a navigation system in 50 UKAs. Cutting error was defined as the angular difference between the cutting guide and the cutting surface. The mean absolute first-cut cutting error was 1.9° (1.1° varus) in the coronal plane and 1.1° (0.6° anterior slope) in the sagittal plane, whereas the mean absolute second-cut cutting error was 1.1° (0.6° varus) in the coronal plane and 1.1° (0.4° anterior slope) in the sagittal plane. Cutting the tibia horizontally twice reduced the cutting errors in the coronal plane significantly. These results suggest that cutting the tibia horizontally twice using the same cutting guide reduces cutting error in the coronal plane. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Left-hemisphere activation is associated with enhanced vocal pitch error detection in musicians with absolute pitch

    Science.gov (United States)

    Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A.; Larson, Charles R.

    2014-01-01

    The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test the hypothesis whether the neural mechanisms of the left-hemisphere enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right-hemisphere for both AP and RP musicians compared with the NM group. However, the left-hemisphere P2 component activation was greater in AP and RP musicians compared with NMs and also for the AP compared with RP musicians. The NM group was slower in generating compensatory vocal reactions to feedback pitch perturbation compared with musicians, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left-hemisphere is more active during the processing of auditory feedback for vocal motor control and seems to involve specialized mechanisms that facilitate pitch processing in the AP compared with RP musicians. These findings indicate that the left hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing. PMID:24355545

  1. STAR barrel electromagnetic calorimeter absolute calibration using 'minimum ionizing particles' from collisions at RHIC

    International Nuclear Information System (INIS)

    Cormier, T.M.; Pavlinov, A.I.; Rykov, M.V.; Rykov, V.L.; Shestermanov, K.E.

    2002-01-01

    The procedure for STAR Barrel Electromagnetic Calorimeter (BEMC) absolute calibrations, using penetrating charged particle hits (MIP-hits) from physics events at RHIC, is presented, and its systematic and statistical errors are evaluated. It is shown that, using this technique, the equalization and transfer of the absolute scale from the test beam can be done to percent-level accuracy in a reasonable amount of time for the entire STAR BEMC. MIP-hits would also be an effective tool for continuously monitoring variations of the BEMC towers' gains, virtually without interference with STAR's main physics program. The method does not rely on simulations for anything other than geometric and some other small corrections, and for estimation of the systematic errors. It directly transfers measured test-beam responses to operations at RHIC.

  2. Ciliates learn to diagnose and correct classical error syndromes in mating strategies.

    Science.gov (United States)

    Clark, Kevin B

    2013-01-01

    Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by "rivals" and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via "power" or "refrigeration" cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social

  4. An absolute distance interferometer with two external cavity diode lasers

    International Nuclear Information System (INIS)

    Hartmann, L; Meiners-Hagen, K; Abou-Zeid, A

    2008-01-01

    An absolute interferometer for length measurements in the range of several metres has been developed. The use of two external cavity diode lasers allows the implementation of a two-step procedure which combines a length measurement with a variable synthetic wavelength and its interpolation with a fixed synthetic wavelength. This synthetic wavelength is obtained at ≈42 µm by a modulation-free stabilization of both lasers to Doppler-reduced rubidium absorption lines. A stable reference interferometer is used as the length standard. Different contributions to the total measurement uncertainty are discussed. It is shown that the measurement uncertainty can be considerably reduced by correcting for the influence of vibrations on the measurement result and by applying linear regression to the quadrature signals of the absolute interferometer and the reference interferometer. The comparison of the absolute interferometer with a counting interferometer for distances up to 2 m results in a linearity error of 0.4 µm, in good agreement with an estimate of the measurement uncertainty.
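
    The arithmetic behind the two-step scheme is short enough to sketch: the synthetic wavelength follows from the two optical wavelengths, and the fixed-wavelength interpolation only has to pin down an integer fringe order from the coarse result. The rubidium D-line wavelengths below are illustrative, not the instrument's exact locked values, and the double-pass factor is an assumption about the optical layout.

```python
import math

def synthetic_wavelength(lam1, lam2):
    """Lambda = lam1 * lam2 / |lam1 - lam2|."""
    return lam1 * lam2 / abs(lam1 - lam2)

# Rb lines near 780 nm (D2) and 795 nm (D1) give Lambda of roughly 42 um:
lam_s = synthetic_wavelength(780.24e-9, 794.98e-9)   # ~4.2e-5 m

def interpolate(L_coarse, phase, lam_eff):
    """Fix the integer order m from the coarse length, then refine:
    L = (m + phase/2pi) * lam_eff, with lam_eff = Lambda/2 for a double pass."""
    m = round(L_coarse / lam_eff - phase / (2 * math.pi))
    return (m + phase / (2 * math.pi)) * lam_eff
```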

  5. Globular Clusters: Absolute Proper Motions and Galactic Orbits

    Science.gov (United States)

    Chemel, A. A.; Glushkova, E. V.; Dambis, A. K.; Rastorguev, A. S.; Yalyalieva, L. N.; Klinichev, A. D.

    2018-04-01

    We cross-match objects from several different astronomical catalogs to determine the absolute proper motions of stars within the 30-arcmin radius fields of 115 Milky-Way globular clusters with an accuracy of 1-2 mas yr^-1. The proper motions are based on positional data recovered from the USNO-B1, 2MASS, URAT1, ALLWISE, UCAC5, and Gaia DR1 surveys with up to ten positions spanning an epoch difference of up to about 65 years, and are reduced to the Gaia DR1 TGAS frame using UCAC5 as the reference catalog. Cluster members are photometrically identified by selecting horizontal- and red-giant-branch stars on color-magnitude diagrams, and the mean absolute proper motions of the clusters, with a typical formal error of about 0.4 mas yr^-1, are computed by averaging the proper motions of the selected members. The inferred absolute proper motions of the clusters are combined with available radial-velocity data and heliocentric distance estimates to compute the cluster orbits in terms of Galactic potential models based on a Miyamoto-Nagai disk, Hernquist spheroid, and modified isothermal dark-matter halo (axisymmetric model without a bar), and the same model plus a rotating Ferrers bar (non-axisymmetric). Five distant clusters have higher-than-escape velocities, most likely due to large errors in the computed transversal velocities, whereas the computed orbits of all other clusters remain bound to the Galaxy. Unlike previously published results, we find the bar to substantially affect the orbits of most of the clusters, even those at large Galactocentric distances, bringing appreciable chaotization, especially in the portions of the orbits close to the Galactic center, and stretching out the orbits of some of the thick-disk clusters.

  6. Maximum Power Point Tracking in Variable Speed Wind Turbine Based on Permanent Magnet Synchronous Generator Using Maximum Torque Sliding Mode Control Strategy

    Institute of Scientific and Technical Information of China (English)

    Esmaeil Ghaderi; Hossein Tohidi; Behnam Khosrozadeh

    2017-01-01

    The present study was carried out in order to track the maximum power point in a variable-speed turbine by minimizing electromechanical torque changes using a sliding mode control strategy. In this strategy, first, the rotor speed is set at an optimal point for different wind speeds, as a result of which the tip-speed ratio reaches its optimal value, the mechanical power coefficient is maximized, and the wind turbine produces its maximum power and mechanical torque. Then, the maximum mechanical torque is tracked using electromechanical torque. In this technique, the integral of the tracking error of the maximum mechanical torque, the error, and the derivative of the error are used as state variables. During changes in wind speed, the sliding mode control is designed to absorb the maximum energy from the wind and minimize the response time of maximum power point tracking (MPPT). In this method, the actual control input signal is formed from a second-order integral operation of the original sliding mode control input signal. The result of the second-order integral in this model includes control signal integrity, full chattering attenuation, and prevention of large fluctuations in the generator power output. The simulation results, calculated using MATLAB/m-file software, have shown the effectiveness of the proposed control strategy for wind energy systems based on the permanent magnet synchronous generator (PMSG).
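
    The MPPT reference generation described here reduces to holding the tip-speed ratio at its optimum. A sketch of that computation with made-up turbine parameters follows; the sliding-mode tracking loop itself, which drives the electromechanical torque to this reference, is omitted.

```python
import numpy as np

RHO, R, LAM_OPT, CP_MAX = 1.225, 35.0, 7.0, 0.44    # illustrative turbine data

def mppt_reference(v_wind):
    """Optimal rotor speed and torque reference for a given wind speed."""
    w_opt = LAM_OPT * v_wind / R                    # keeps the tip-speed ratio optimal
    p_max = 0.5 * RHO * np.pi * R**2 * CP_MAX * v_wind**3
    return w_opt, p_max / w_opt                     # torque the controller must track

print(mppt_reference(10.0))   # ~ (2.0 rad/s, ~5.2e5 N*m)
```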

  7. Estimating distribution parameters of annual maximum streamflows in Johor, Malaysia using TL-moments approach

    Science.gov (United States)

    Mat Jan, Nur Amalina; Shabri, Ani

    2017-01-01

    The TL-moments approach has been used in an analysis to identify the best-fitting distributions to represent the annual series of maximum streamflow data over seven stations in Johor, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the three-parameter lognormal (LN3) and Pearson Type III (P3) distributions. The main objective of this study is to derive the TL-moments (t1, 0), t1 = 1, 2, 3, 4, methods for the LN3 and P3 distributions. The performance of the TL-moments (t1, 0), t1 = 1, 2, 3, 4, methods was compared with L-moments through Monte Carlo simulation and streamflow data from a station in Johor, Malaysia. The absolute error is used to test the influence of the TL-moments methods on the estimated probability distribution functions. For the cases in this study, the results show that TL-moments with the four smallest values trimmed from the conceptual sample (TL-moments [4, 0]) of the LN3 distribution was the most appropriate for most of the stations of the annual maximum streamflow series in Johor, Malaysia.
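
    Sample TL-moments can be computed directly from the Elamir-Seheult unbiased estimator of expected order statistics. A sketch for the (t1, 0) trimming used in the abstract, computing only the first two TL-moments (fitting LN3/P3 from them is a further step not shown; the streamflow values are invented):

```python
import numpy as np
from math import comb

def e_order_stat(x_sorted, j, m):
    """Unbiased estimate of E[X_{j:m}] from an ordered sample of size n."""
    n = len(x_sorted)
    w = np.array([comb(i - 1, j - 1) * comb(n - i, m - j)
                  for i in range(1, n + 1)], float)
    return float(w @ x_sorted) / comb(n, m)

def tl_moment(x, r, t1, t2=0):
    """Sample TL-moment of order r with trimming (t1, t2)."""
    xs = np.sort(np.asarray(x, float))
    m = r + t1 + t2
    s = sum((-1) ** k * comb(r - 1, k) * e_order_stat(xs, r + t1 - k, m)
            for k in range(r))
    return s / r

# TL-mean and TL-scale with the (4, 0) trimming favoured in the study:
flows = np.array([120.0, 95.0, 210.0, 160.0, 310.0, 180.0, 140.0, 250.0, 130.0, 170.0])
lam1, lam2 = tl_moment(flows, 1, 4), tl_moment(flows, 2, 4)
```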

  8. Genomic DNA-based absolute quantification of gene expression in Vitis.

    Science.gov (United States)

    Gambetta, Gregory A; McElrone, Andrew J; Matthews, Mark A

    2013-07-01

    Many studies in which gene expression is quantified by polymerase chain reaction represent the expression of a gene of interest (GOI) relative to that of a reference gene (RG). Relative expression is founded on the assumptions that RG expression is stable across samples, treatments, organs, etc., and that reaction efficiencies of the GOI and RG are equal; assumptions which are often faulty. The true variability in RG expression and actual reaction efficiencies are seldom determined experimentally. Here we present a rapid and robust method for absolute quantification of expression in Vitis where varying concentrations of genomic DNA were used to construct GOI standard curves. This methodology was utilized to absolutely quantify and determine the variability of the previously validated RG ubiquitin (VvUbi) across three test studies in three different tissues (roots, leaves and berries). In addition, in each study a GOI was absolutely quantified. Data sets resulting from relative and absolute methods of quantification were compared and the differences were striking. VvUbi expression was significantly different in magnitude between test studies and variable among individual samples. Absolute quantification consistently reduced the coefficients of variation of the GOIs by more than half, often resulting in differences in statistical significance and in some cases even changing the fundamental nature of the result. Utilizing genomic DNA-based absolute quantification is fast and efficient. Through eliminating error introduced by assuming RG stability and equal reaction efficiencies between the RG and GOI this methodology produces less variation, increased accuracy and greater statistical power. © 2012 Scandinavian Plant Physiology Society.
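
    The mechanics of such genomic-DNA-based absolute quantification are straightforward to sketch: convert the mass of gDNA in each standard to copy numbers, regress Ct on log10(copies), and read unknowns off the curve. The genome size and the 660 g/mol per base pair figure below are generic assumptions, not values taken from the paper.

```python
import numpy as np

def gdna_copies(mass_ng, genome_bp=487e6):
    """Genome copies in a gDNA standard: mass / (genome size * 660 g/mol/bp).
    487 Mbp is an approximate Vitis genome size, used here as an assumption."""
    return mass_ng * 1e-9 * 6.022e23 / (genome_bp * 660.0)

def fit_standard_curve(ct, copies):
    """Linear regression Ct = m * log10(copies) + b over the gDNA dilutions."""
    m, b = np.polyfit(np.log10(copies), np.asarray(ct, float), 1)
    return m, b

def absolute_copies(ct, m, b):
    """Read an unknown sample's copy number off the standard curve."""
    return 10.0 ** ((ct - b) / m)
```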

  9. Binomial Distribution Sample Confidence Intervals Estimation 7. Absolute Risk Reduction and ARR-like Expressions

    Directory of Open Access Journals (Sweden)

    Andrei ACHIMAŞ CADARIU

    2004-08-01

    Assessments of a controlled clinical trial require the interpretation of some key parameters, such as the control event rate, experimental event rate, relative risk, absolute risk reduction, relative risk reduction, and number needed to treat, when the effects of the treatment are dichotomous variables. Defined as the difference in the event rate between the treatment and control groups, the absolute risk reduction is the parameter that allows computing the number needed to treat. The absolute risk reduction is computed when the experimental treatment reduces the risk of an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence intervals, the method used is the asymptotic one, even if it is well known that it may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions. Computer implementations of the methods use the PHP language. The comparison of methods uses the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance in computing confidence intervals for the absolute risk reduction.
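
    For reference, the asymptotic (Wald) interval that the paper criticizes is the textbook construction below; the proposed alternatives replace its standard-error term. The counts in the usage line are invented for illustration.

```python
from math import sqrt

def arr_wald_ci(events_c, n_c, events_e, n_e, z=1.96):
    """Absolute risk reduction, ARR = CER - EER, with the asymptotic (Wald)
    interval whose shortcomings motivate the paper's alternative methods."""
    cer, eer = events_c / n_c, events_e / n_e
    arr = cer - eer
    se = sqrt(cer * (1 - cer) / n_c + eer * (1 - eer) / n_e)
    return arr, (arr - z * se, arr + z * se)

# invented counts for illustration; NNT = 1/ARR when ARR > 0:
arr, ci = arr_wald_ci(events_c=30, n_c=100, events_e=18, n_e=100)
```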

  10. Absolute risk, absolute risk reduction and relative risk

    Directory of Open Access Journals (Sweden)

    Jose Andres Calvache

    2012-12-01

    This article illustrates the epidemiological concepts of absolute risk, absolute risk reduction and relative risk through a clinical example. In addition, it emphasizes the usefulness of these concepts in clinical practice, clinical research and the health decision-making process.

  11. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflective coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
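
    The Toeplitz/Levinson step is the computable heart of the method. The sketch below solves the Toeplitz normal equations of time-domain deconvolution with SciPy's Levinson-based solver; the white-noise damping is a simple stand-in for the paper's maximum-entropy extrapolation of the correlation functions, not a reproduction of it.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def receiver_function(z, r, nlags=200, damp=1e-3):
    """Time-domain deconvolution of the radial trace r by the vertical z:
    solve the Toeplitz normal equations R g = c via Levinson recursion."""
    n = len(z)
    acf = np.correlate(z, z, "full")[n - 1 : n - 1 + nlags].copy()
    acf[0] *= 1.0 + damp                          # stabilize the inversion
    ccf = np.correlate(r, z, "full")[n - 1 : n - 1 + nlags]
    return solve_toeplitz(acf, ccf)               # the filter is the receiver function
```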

  12. Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction

    Directory of Open Access Journals (Sweden)

    Tianzhou Chen

    2013-09-01

    Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity and atmospheric pressure. In order to solve the problem of inaccurate measurement, which is significant in industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC) trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the relationship of the topological structure of sensor arrays, is implemented to compensate for erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes quickly. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients of neighboring sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of this method, which means that the proposed method performs extremely well. The optimized method of distance measurement we propose, with the capability of NEC, would bring a significant advantage for intelligent industrial automation.

  13. The AFGL (Air Force Geophysics Laboratory) Absolute Gravity System’s Error Budget Revisted.

    Science.gov (United States)

    1985-05-08

    Systematic biases can also be induced by equipment not associated with the system; a systematic bias of 68 μGal was observed by the Istituto di Metrologia "G. Colonnetti" (IMGC), Torino, Italy. Supporting relative measurements were made with three Model D and three Model G LaCoste-Romberg gravity meters operated by several cooperating agencies.

  14. A Gridded Daily Min/Max Temperature Dataset With 0.1° Resolution for the Yangtze River Valley and its Error Estimation

    Science.gov (United States)

    Xiong, Qiufen; Hu, Jianglin

    2013-05-01

    The minimum/maximum (Min/Max) temperature in the Yangtze River valley is decomposed into a climatic mean and an anomaly component. A spatial interpolation is developed which combines a 3D thin-plate spline scheme for the climatological mean with a 2D Barnes scheme for the anomaly component to create a daily Min/Max temperature dataset. The climatic mean field is obtained by the 3D thin-plate spline scheme because the decrease of Min/Max temperature with elevation is robust and reliable on long time-scales. The characteristics of the anomaly field are only weakly related to elevation variation, and the anomaly component is adequately analyzed by the 2D Barnes procedure, which is computationally efficient and readily tunable. With this hybridized interpolation method, a daily Min/Max temperature dataset covering the domain from 99°E to 123°E and from 24°N to 36°N with 0.1° longitudinal and latitudinal resolution is obtained by utilizing daily Min/Max temperature data from three kinds of station observations (national reference climatological stations, basic meteorological observing stations and ordinary meteorological observing stations) in 15 provinces and municipalities in the Yangtze River valley from 1971 to 2005. The error of the gridded dataset is assessed by examining cross-validation statistics. The results show that the daily Min/Max temperature interpolation not only has a high correlation coefficient (0.99) and interpolation efficiency (0.98), but also a mean bias error of 0.00 °C. For the maximum temperature, the root mean square error is 1.1 °C and the mean absolute error is 0.85 °C. For the minimum temperature, the root mean square error is 0.89 °C and the mean absolute error is 0.67 °C. Thus, the new dataset provides the distribution of Min/Max temperature over the Yangtze River valley as realistic, continuous gridded data with 0.1° × 0.1° spatial resolution.
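
    The anomaly half of the hybrid scheme is a Gaussian-weighted objective analysis. A single-pass Barnes sketch follows; the length-scale parameter kappa and the single pass are simplifications (operational Barnes analyses usually add a correction pass), and the function names are our own.

```python
import numpy as np

def barnes_anomaly(grid_xy, obs_xy, obs_anom, kappa=0.5):
    """Single-pass Barnes analysis of the anomaly field: Gaussian distance
    weighting w = exp(-d^2 / kappa); kappa sets the smoothing length scale."""
    grid_xy, obs_xy = np.atleast_2d(grid_xy), np.atleast_2d(obs_xy)
    out = np.empty(len(grid_xy))
    for i, g in enumerate(grid_xy):
        w = np.exp(-np.sum((obs_xy - g) ** 2, axis=1) / kappa)
        out[i] = np.sum(w * obs_anom) / np.sum(w)   # weighted mean of anomalies
    return out
```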

  15. Improvements in absolute seismometer sensitivity calibration using local earth gravity measurements

    Science.gov (United States)

    Anthony, Robert E.; Ringler, Adam; Wilson, David

    2018-01-01

    The ability to determine both absolute and relative seismic amplitudes is fundamentally limited by the accuracy and precision with which scientists are able to calibrate seismometer sensitivities and characterize their response. Currently, across the Global Seismic Network (GSN), errors in midband sensitivity exceed 3% at the 95% confidence interval and are the least‐constrained response parameter in seismic recording systems. We explore a new methodology utilizing precise absolute Earth gravity measurements to determine the midband sensitivity of seismic instruments. We first determine the absolute sensitivity of Kinemetrics EpiSensor accelerometers to 0.06% at the 99% confidence interval by inverting them in a known gravity field at the Albuquerque Seismological Laboratory (ASL). After the accelerometer is calibrated, we install it in its normal configuration next to broadband seismometers and subject the sensors to identical ground motions to perform relative calibrations of the broadband sensors. Using this technique, we are able to determine the absolute midband sensitivity of the vertical components of Nanometrics Trillium Compact seismometers to within 0.11% and Streckeisen STS‐2 seismometers to within 0.14% at the 99% confidence interval. The technique enables absolute calibrations from first principles that are traceable to National Institute of Standards and Technology (NIST) measurements while providing nearly an order of magnitude more precision than step‐table calibrations.
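
    The first step of the method, calibrating an accelerometer by inverting it in a known gravity field, reduces to a flip test: the output difference between the +1 g and -1 g orientations spans exactly 2 g. A sketch (the local g value and the voltages are illustrative, not ASL's measured figures):

```python
def accel_sensitivity(v_up, v_down, g_local=9.79204):
    """Flip test: outputs at +1 g and -1 g differ by exactly 2 g, so the
    sensitivity is S = (V_up - V_down) / (2 g), here in V per m/s^2."""
    return (v_up - v_down) / (2.0 * g_local)

# e.g. +2.4503 V and -2.4489 V readings give S ~ 0.2502 V/(m/s^2):
print(accel_sensitivity(2.4503, -2.4489))
```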

  16. Simplified fringe order correction for absolute phase maps recovered with multiple-spatial-frequency fringe projections

    International Nuclear Information System (INIS)

    Ding, Yi; Peng, Kai; Lu, Lei; Zhong, Kai; Zhu, Ziqi

    2017-01-01

    Various kinds of fringe order errors may occur in the absolute phase maps recovered with multi-spatial-frequency fringe projections. In existing methods, multiple successive pixels corrupted by fringe order errors are detected and corrected pixel-by-pixel with repeating searches, which is inefficient for applications. To improve the efficiency of multiple successive fringe order corrections, in this paper we propose a method to simplify the error detection and correction by the stepwise increasing property of fringe order. In the proposed method, the numbers of pixels in each step are estimated to find the possible true fringe order values, repeating the search in detecting multiple successive errors can be avoided for efficient error correction. The effectiveness of our proposed method is validated by experimental results. (paper)

  17. WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice

    Energy Technology Data Exchange (ETDEWEB)

    Kry, S; Dromgoole, L; Alvarez, P; Lowenstein, J; Molineu, A; Taylor, P; Followill, D [UT MD Anderson Cancer Center, Houston, TX (United States)

    2015-06-15

    Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000-present were abstracted for recommendations, including type of recommendation and magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions) with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7% although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. Physicists should be cautious, particularly

  19. Absolute measurement of the βα decay of 16N

    CERN Multimedia

    We propose to study the β-decay of 16N at ISOLDE with the aim of determining the branching ratio for βα decay on an absolute scale. There are indications that the previously measured branching ratio is in error by an amount significantly larger than the quoted uncertainty. This limits the precision with which the S-factor of the astrophysically important 12C(α, γ)16O reaction can be determined.

  20. Encasing the Absolutes

    Directory of Open Access Journals (Sweden)

    Uroš Martinčič

    2014-05-01

    The paper explores the issue of structure and case in English absolute constructions, whose subjects are deduced by several descriptive grammars as being in the nominative case due to its supposed neutrality in terms of register. This deduction is countered by systematic accounts presented within the framework of the Minimalist Program which relate the case of absolute constructions to specific grammatical factors. Each proposal is shown to be an attempt at analysing absolute constructions as basic predication structures, either full clauses or small clauses. I argue in favour of the small clause approach due to its minimal reliance on transformations and unique stipulations. Furthermore, I propose that small clauses project a singular category, and show that the use of two cases in English absolute constructions can be accounted for if they are analysed as depictive phrases, possibly selected by prepositions. The case of the subject in absolutes is shown to be a result of syntactic and non-syntactic factors. I thus argue, in accordance with Minimalist goals, that syntactic case does not exist, attributing its role in absolutes to other mechanisms.

  1. Micro ionization chamber dosimetry in IMRT verification: Clinical implications of dosimetric errors in the PTV

    International Nuclear Information System (INIS)

    Sanchez-Doblado, Francisco; Capote, Roberto; Rosello, Joan V.; Leal, Antonio; Lagares, Juan I.; Arrans, Rafael; Hartmann, Guenther H.

    2005-01-01

    Background and purpose: Absolute dose measurement for Intensity Modulated Radiotherapy (IMRT) beamlets is difficult due to the lack of lateral electron equilibrium. Recently we found that absolute dosimetry in the penumbra region of an IMRT beamlet can suffer from significant errors (Capote et al., Med Phys 31 (2004) 2416-2422). The goal of this work is to estimate the error made when the Planning Target Volume's (PTV) absolute dose is measured by a micro ionization chamber (μIC) in a typical IMRT treatment. The dose error comes from the assumption that the dosimetric parameters determining the absolute dose are the same as under reference conditions. Materials and methods: Two IMRT treatment plans for a common prostate carcinoma case, derived by forward and inverse optimisation, were considered. Detailed geometrical simulation of the μIC and the dose verification set-up was performed. The Monte Carlo (MC) simulation allows us to calculate the dose delivered to water and the dose delivered to the active volume of the ion chamber. However, the measured dose in water is usually derived from chamber readings assuming reference conditions. The MC simulation provides the correction factors needed for ion chamber dosimetry under non-reference conditions. Results: Dose calculations were carried out for some representative beamlets, a combination of segments, and the delivered IMRT treatments. We observe that the largest dose errors (i.e. the largest correction factors) correspond to the IMRT beamlets contributing least to the total dose delivered to the ionization chamber within the PTV. Conclusion: The clinical impact of the calculated dose error on the measured PTV dose was found to be negligible for the studied IMRT treatments.

  2. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...
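
    As a concrete instance of the single-step criterion, the sketch below evaluates the Gaussian maximum-likelihood cost from Kalman-filter innovations for a given parameter vector; an outer optimizer over theta (not shown) completes a prediction-error method. The build_model callback and the array shapes are our own conventions, not the paper's notation.

```python
import numpy as np

def pem_cost(theta, ys, build_model):
    """Single-step prediction-error (Gaussian ML) cost for parameters theta.
    build_model(theta) -> (A, C, Q, R) of x+ = A x + w, y = C x + v."""
    A, C, Q, R = build_model(theta)
    n = A.shape[0]
    x, P = np.zeros((n, 1)), np.eye(n)
    cost = 0.0
    for yk in ys:
        yk = np.reshape(yk, (-1, 1))
        e = yk - C @ x                          # innovation (prediction error)
        S = C @ P @ C.T + R                     # innovation covariance
        cost += float(e.T @ np.linalg.solve(S, e)) + np.log(np.linalg.det(S))
        K = P @ C.T @ np.linalg.inv(S)          # Kalman gain
        x = A @ (x + K @ e)                     # measurement + time update
        P = A @ (P - K @ C @ P) @ A.T + Q
    return 0.5 * cost
```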

  3. A novel capacitive absolute positioning sensor based on time grating with nanometer resolution

    Science.gov (United States)

    Pu, Hongji; Liu, Hongzhong; Liu, Xiaokang; Peng, Kai; Yu, Zhicheng

    2018-05-01

    The present work proposes a novel capacitive absolute positioning sensor based on time grating. The sensor includes a fine incremental-displacement measurement component combined with a coarse absolute-position measurement component to obtain high-resolution absolute positioning measurements. A single row type sensor was proposed to achieve fine displacement measurement, which combines the two electrode rows of a previously proposed double-row type capacitive displacement sensor based on time grating into a single row. To achieve absolute positioning measurement, the coarse measurement component is designed as a single-row type displacement sensor employing a single spatial period over the entire measurement range. In addition, this component employs a rectangular induction electrode and four groups of orthogonal discrete excitation electrodes with half-sinusoidal envelope shapes, which were formed by alternately extending the rectangular electrodes of the fine measurement component. The fine and coarse measurement components are tightly integrated to form a compact absolute positioning sensor. A prototype sensor was manufactured using printed circuit board technology for testing and optimization of the design in conjunction with simulations. Experimental results show that the prototype sensor achieves a ±300 nm measurement accuracy with a 1 nm resolution over a displacement range of 200 mm when employing error compensation. The proposed sensor is an excellent alternative to presently available long-range absolute nanometrology sensors owing to its low cost, simple structure, and ease of manufacturing.
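
    The coarse/fine fusion principle described above can be sketched generically: the coarse absolute channel selects the period index, while the fine channel supplies the sub-period position. The sketch below assumes illustrative parameter values (pitch, range), not the prototype's actual design values:

```python
import numpy as np

# Generic coarse/fine combination used by absolute sensors of this type.
# The coarse channel (one spatial period over the full range) must only be
# accurate to within half a fine pitch; the fine channel then supplies the
# high resolution. Parameter values are illustrative.

RANGE_MM = 200.0      # full measurement range
PITCH_MM = 1.0        # fine-channel spatial period

def absolute_position(coarse_mm, fine_phase_rad):
    """Fuse a coarse absolute estimate with a fine within-period phase."""
    fine_mm = (fine_phase_rad / (2.0 * np.pi)) * PITCH_MM   # within-period position
    n = np.round((coarse_mm - fine_mm) / PITCH_MM)          # period index
    return n * PITCH_MM + fine_mm

print(absolute_position(coarse_mm=123.4, fine_phase_rad=1.57))  # ~123.25 mm
```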

  4. Methods for Estimation of Radiation Risk in Epidemiological Studies Accounting for Classical and Berkson Errors in Doses

    KAUST Repository

    Kukush, Alexander

    2011-01-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor with pr(Y=1 | D) = R/(1+R), R = λ0 + EAR·D, where λ0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^mes = f_i · Q_i^mes / M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t_mes, M_i^mes is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr · V_i^Q (a classical measurement error model) and M_i^tr = M_i^mes · V_i^M (a Berkson measurement error model). Here, Q_i^tr is the true content of radioactivity in the thyroid gland, and M_i^tr is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ0 and EAR. A simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were given the values from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.

  5. Methods for estimation of radiation risk in epidemiological studies accounting for classical and Berkson errors in doses.

    Science.gov (United States)

    Kukush, Alexander; Shklyar, Sergiy; Masiuk, Sergii; Likhtarov, Illya; Kovgan, Lina; Carroll, Raymond J; Bouville, Andre

    2011-02-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor with pr(Y=1 | D) = R/(1+R), R = λ0 + EAR·D, where λ0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^mes = f_i · Q_i^mes / M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t_mes, M_i^mes is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr · V_i^Q (a classical measurement error model) and M_i^tr = M_i^mes · V_i^M (a Berkson measurement error model). Here, Q_i^tr is the true content of radioactivity in the thyroid gland, and M_i^tr is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ0 and EAR. A simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were given the values from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.
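
    Since the two records above hinge on the distinction between classical and Berkson error structures, a small simulation sketch may help; the notation follows the abstract, but all distributions and parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Sketch contrasting the two error structures in the dose model
# D = f * Q / M (notation follows the abstract; values are illustrative).
q_true = rng.lognormal(mean=3.0, sigma=0.5, size=n)   # true thyroid activity
m_meas = rng.lognormal(mean=1.0, sigma=0.3, size=n)   # measured thyroid mass

# Classical error: the *measurement* scatters around the true value.
q_meas = q_true * rng.lognormal(mean=0.0, sigma=0.4, size=n)  # Q_mes = Q_tr * V_Q

# Berkson error: the *true* value scatters around the measurement.
m_true = m_meas * rng.lognormal(mean=0.0, sigma=0.4, size=n)  # M_tr = M_mes * V_M

f = 1.0                                                # normalizing multiplier
d_calc = f * q_meas / m_meas                           # calculated dose
d_true = f * q_true / m_true                           # true dose
print("correlation(calculated, true):", np.corrcoef(d_calc, d_true)[0, 1])
```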

  6. The effect of biomechanical variables on force sensitive resistor error: Implications for calibration and improved accuracy.

    Science.gov (United States)

    Schofield, Jonathon S; Evans, Katherine R; Hebert, Jacqueline S; Marasco, Paul D; Carey, Jason P

    2016-03-21

    Force Sensitive Resistors (FSRs) are commercially available thin-film polymer sensors commonly employed in a multitude of biomechanical measurement environments. Reasons for such widespread usage lie in the versatility, small profile, and low cost of these sensors. Yet FSRs have limitations. It is commonly accepted that temperature, curvature and biological tissue compliance may impact sensor conductance and the resulting force readings. The effect of these variables and the degree to which they interact has yet to be comprehensively investigated and quantified. This work systematically assesses varying levels of temperature, sensor curvature and surface compliance using a full factorial design-of-experiments approach. Three models of Interlink FSRs were evaluated. Calibration equations under 12 unique combinations of temperature, curvature and compliance were determined for each sensor. Root mean squared error, mean absolute error, and maximum error were quantified as measures of the impact these thermo-mechanical factors have on sensor performance. It was found that all three variables have the potential to affect FSR calibration curves. The FSR model and corresponding sensor geometry are sensitive to these three mechanical factors at varying levels. Experimental results suggest that reducing sensor error requires calibration of each sensor in an environment as close to its intended use as possible, and that if multiple FSRs are used in a system, they must be calibrated independently. Copyright © 2016 Elsevier Ltd. All rights reserved.
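
    The three error measures quantified in the study are straightforward to compute; the sketch below uses hypothetical calibrated-versus-reference force values, not the paper's data:

```python
import numpy as np

# The three error measures from the study, computed for a hypothetical
# calibrated-vs-reference force series (values are illustrative).
def error_metrics(reference, measured):
    err = np.asarray(measured) - np.asarray(reference)
    rmse = np.sqrt(np.mean(err**2))       # root mean squared error
    mae = np.mean(np.abs(err))            # mean absolute error
    max_err = np.max(np.abs(err))         # maximum absolute error
    return rmse, mae, max_err

ref = np.array([0.0, 5.0, 10.0, 20.0, 40.0])    # N, reference load cell
fsr = np.array([0.2, 5.4, 9.5, 21.1, 38.7])     # N, FSR after calibration
print("RMSE=%.3f  MAE=%.3f  max=%.3f (N)" % error_metrics(ref, fsr))
```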

  7. Absolute advantage

    NARCIS (Netherlands)

    J.G.M. van Marrewijk (Charles)

    2008-01-01

    textabstractA country is said to have an absolute advantage over another country in the production of a good or service if it can produce that good or service using fewer real resources. Equivalently, using the same inputs, the country can produce more output. The concept of absolute advantage can

  8. A digital, constant-frequency pulsed phase-locked-loop instrument for real-time, absolute ultrasonic phase measurements

    Science.gov (United States)

    Haldren, H. A.; Perey, D. F.; Yost, W. T.; Cramer, K. E.; Gupta, M. C.

    2018-05-01

    A digitally controlled instrument for conducting single-frequency and swept-frequency ultrasonic phase measurements has been developed based on a constant-frequency pulsed phase-locked-loop (CFPPLL) design. This instrument uses a pair of direct digital synthesizers to generate an ultrasonically transceived tone-burst and an internal reference wave for phase comparison. Real-time, constant-frequency phase tracking in an interrogated specimen is possible with a resolution of 0.000 38 rad (0.022°), and swept-frequency phase measurements can be obtained. Using phase measurements, absolute thickness measurements in borosilicate glass are presented to show the instrument's efficacy, and these results are compared to conventional ultrasonic pulse-echo time-of-flight (ToF) measurements. The newly developed instrument predicted the thickness with a mean error of -0.04 μm and a standard deviation of error of 1.35 μm. Additionally, the CFPPLL instrument shows a lower measured phase error in the absence of changing temperature and couplant thickness than high-resolution cross-correlation ToF measurements at a similar signal-to-noise ratio. By showing higher accuracy and precision than conventional pulse-echo ToF measurements and lower phase errors than cross-correlation ToF measurements, the new digitally controlled CFPPLL instrument provides high-resolution absolute ultrasonic velocity or path-length measurements in solids or liquids, as well as tracking of material property changes with high sensitivity. The ability to obtain absolute phase measurements allows for many applications not possible with previous ultrasonic pulsed phase-locked-loop instruments. In addition to improved resolution, swept-frequency phase measurements add useful capability in measuring properties of layered structures, such as bonded joints, or materials which exhibit non-linear frequency-dependent behavior, such as dispersive media.
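
    As a hedged sketch of how an absolute (unwrapped) phase maps to thickness in pulse-echo geometry: the round trip 2d produces a phase φ = 2πf(2d/v), so d = φv/(4πf). The values below are illustrative assumptions, not the instrument's calibration:

```python
import numpy as np

# Pulse-echo geometry: the round trip 2*d introduces a phase
# phi = 2*pi*f*(2*d/v), so d = phi * v / (4*pi*f). The total (unwrapped)
# phase must be known absolutely, which is what the CFPPLL approach
# provides. All values below are illustrative.

def thickness_from_phase(phi_rad, f_hz, v_m_per_s):
    return phi_rad * v_m_per_s / (4.0 * np.pi * f_hz)

v_glass = 5640.0            # m/s, longitudinal velocity in borosilicate (approx.)
f = 5.0e6                   # Hz, interrogation frequency
phi = 2.0 * np.pi * 17.7    # rad, absolute (unwrapped) phase
print("thickness = %.4f mm" % (thickness_from_phase(phi, f, v_glass) * 1e3))
```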

  9. The Absolute Stability Analysis in Fuzzy Control Systems with Parametric Uncertainties and Reference Inputs

    Science.gov (United States)

    Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei

    This study analyzes the absolute stability of P and PD type fuzzy logic control systems with both certain and uncertain linear plants. The stability analysis includes the reference input, actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibria of the error) in P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady-state errors of fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability based on Lur'e systems is also applied to the stability analysis of P type fuzzy control systems with uncertain plants. The PD type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. Unlike previous works, our absolute stability analysis of fuzzy control systems covers a non-zero reference input and an uncertain linear plant via the parametric robust Popov criterion. Moreover, a fuzzy current-controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is examined from the viewpoint of various equilibrium points in the simulation example. Finally, comparisons are also given to show the effectiveness of the analysis method.

  10. Routine surveillance of environmental radioactivity in the influence area of the institute during 1998

    International Nuclear Information System (INIS)

    Breban, D.C.; Dumitru, R.O.

    1999-01-01

    The radioactivity measurements were performed according to the Monitoring Concept for nuclear units in IFIN-HH, which specifies the type, the locations and the sampling frequency of the environmental factors that were analyzed. Samples of water, sediment, soil, vegetation and aerosols have been analyzed for both gross beta and gamma activity, as follows: 1030 samples of potentially radioactive, surface, drinking and underground water; 21 sediment and 90 soil samples; 27 samples of spontaneous vegetation; and 9 samples of milk, cereals and vegetables. Aerosol samples have been monitored twice a month. We analyzed 119 samples of radioactive liquid effluents from the two nuclear units, CPR and STDR, for gross beta and gamma activity. The maximum values of gross beta activity for drinking water were 0.71 Bq/l (absolute error: 0.14 Bq/l) for the sample collected from the village well near the reactor canal and 0.71 Bq/l (absolute error: 0.15 Bq/l) for the water collected from the village well near the IFA canal. For all the results reported in this paper the confidence level is 95%. Gross beta values measured daily for sewage and surface water varied generally between 0.3 and 0.9 Bq/l. The maximum value of the gross beta activity recorded for the river water downstream of the sewage spill flow was 1.32 Bq/l (absolute error: 0.20 Bq/l), which is below the maximum allowed level (1.8 Bq/l). Gamma spectroscopy analyses and radiochemical separations performed on annual composite samples of sewage water showed average activity concentrations of 5 mBq/l (absolute error: 1 mBq/l) for Cs-137 and 35 mBq/l (absolute error: 12 mBq/l) for Sr-90, values similar to those determined for the surface water samples. Co-60 was also detected in the sewage water and in the sediment collected at the sewage spill flow, with activity concentrations of 8 mBq/l (absolute error: 2 mBq/l) and 31 Bq/kg (absolute error: 5 Bq/kg), respectively. For the surface water and sediment samples collected

  11. Numerical evaluation of magnetic absolute measurements with arbitrarily distributed DI-fluxgate theodolite orientations

    Science.gov (United States)

    Brunke, Heinz-Peter; Matzka, Jürgen

    2018-01-01

    At geomagnetic observatories the absolute measurements are needed to determine the calibration parameters of the continuously recording vector magnetometer (variometer). Absolute measurements are indispensable for determining the vector of the geomagnetic field over long periods of time. A standard DI (declination, inclination) measuring scheme for absolute measurements is the established routine at magnetic observatories. The traditional measuring scheme uses a fixed number of eight orientations (Jankowski et al., 1996). We present a numerical method allowing for the evaluation of an arbitrary number (a minimum of five, as there are five independent parameters) of telescope orientations. Our method provides D, I and Z base values and calculated error bars for them. A general approach has significant advantages. Additional measurements may be seamlessly incorporated for higher accuracy. Individual erroneous readings are identified and can be discarded without invalidating the entire data set. A priori information can be incorporated. We expect the general method to also ease requirements for automated DI-flux measurements. The method can reveal certain properties of the DI theodolite which are not captured by the conventional method. Based on the alternative evaluation method, a new, faster and less error-prone measuring scheme is presented. It avoids the need to calculate the magnetic meridian prior to the inclination measurements. Measurements in the vicinity of the magnetic equator are possible with theodolites and without a zenith ocular. The implementation of the method in MATLAB is available as source code at the GFZ Data Center (Brunke, 2017).
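
    The evaluation idea, redundant orientations feeding an overdetermined system whose least-squares solution yields base values plus error bars, can be sketched schematically. The design matrix below is a random stand-in for the linearized DI model, which the sketch does not reproduce:

```python
import numpy as np

# Schematic: with more telescope orientations than the five independent
# parameters, the base values follow from an overdetermined (linearized)
# system A @ p = y solved by least squares, and the residuals give error
# bars. A and y below are stand-ins, not the actual DI model.

def solve_with_error_bars(A, y):
    p, res, rank, _ = np.linalg.lstsq(A, y, rcond=None)
    dof = A.shape[0] - A.shape[1]                  # degrees of freedom
    sigma2 = res[0] / dof                          # residual variance
    cov = sigma2 * np.linalg.inv(A.T @ A)          # parameter covariance
    return p, np.sqrt(np.diag(cov))                # estimates, 1-sigma bars

rng = np.random.default_rng(2)
A = rng.normal(size=(12, 5))                       # 12 orientations, 5 params
p_true = np.array([1.0, -0.5, 2.0, 0.3, -1.2])
y = A @ p_true + rng.normal(scale=0.01, size=12)   # noisy readings
p_hat, p_err = solve_with_error_bars(A, y)
print(p_hat, p_err)
```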

  12. Numerical evaluation of magnetic absolute measurements with arbitrarily distributed DI-fluxgate theodolite orientations

    Directory of Open Access Journals (Sweden)

    H.-P. Brunke

    2018-01-01

    Full Text Available At geomagnetic observatories the absolute measurements are needed to determine the calibration parameters of the continuously recording vector magnetometer (variometer). Absolute measurements are indispensable for determining the vector of the geomagnetic field over long periods of time. A standard DI (declination, inclination) measuring scheme for absolute measurements is the established routine at magnetic observatories. The traditional measuring scheme uses a fixed number of eight orientations (Jankowski et al., 1996). We present a numerical method allowing for the evaluation of an arbitrary number (a minimum of five, as there are five independent parameters) of telescope orientations. Our method provides D, I and Z base values and calculated error bars for them. A general approach has significant advantages. Additional measurements may be seamlessly incorporated for higher accuracy. Individual erroneous readings are identified and can be discarded without invalidating the entire data set. A priori information can be incorporated. We expect the general method to also ease requirements for automated DI-flux measurements. The method can reveal certain properties of the DI theodolite which are not captured by the conventional method. Based on the alternative evaluation method, a new, faster and less error-prone measuring scheme is presented. It avoids the need to calculate the magnetic meridian prior to the inclination measurements. Measurements in the vicinity of the magnetic equator are possible with theodolites and without a zenith ocular. The implementation of the method in MATLAB is available as source code at the GFZ Data Center (Brunke, 2017).

  13. Determining the effect of grain size and maximum induction upon coercive field of electrical steels

    Science.gov (United States)

    Landgraf, Fernando José Gomes; da Silveira, João Ricardo Filipini; Rodrigues-Jr., Daniel

    2011-10-01

    Although theoretical models have already been proposed, experimental data is still lacking to quantify the influence of grain size upon the coercivity of electrical steels. Some authors consider a linear inverse proportionality, while others suggest a square-root inverse proportionality. Results also differ with regard to the slope of the coercive field versus reciprocal grain size relation for a given material. This paper discusses two aspects of the problem: the maximum induction used for determining the coercive force and the possible effect of lurking variables such as the breadth of the grain size distribution and the crystallographic texture. Electrical steel sheets containing 0.7% Si, 0.3% Al and 24 ppm C were cold-rolled and annealed in order to produce different grain sizes (ranging from 20 to 150 μm). The coercive field was measured along the rolling direction and found to depend linearly on the reciprocal of grain size, with a slope of approximately 0.9 (A/m)mm at 1.0 T induction. A general relation for the coercive field as a function of grain size and maximum induction was established, yielding an average absolute error below 4%. Through measurement of B50 and image analysis of micrographs, the effects of crystallographic texture and grain size distribution breadth were qualitatively discussed.
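
    The reported linear dependence on reciprocal grain size can be written as H_c = H_0 + k/GS at fixed maximum induction; the sketch below uses the quoted slope of 0.9 (A/m)·mm at 1.0 T but an assumed placeholder intercept:

```python
# Sketch of the reported linear dependence of coercive field on reciprocal
# grain size, H_c = H_0 + k / GS, at fixed maximum induction. The slope
# k ≈ 0.9 (A/m)·mm at 1.0 T comes from the abstract; the intercept H_0 is
# a placeholder that would be fitted from data.

def coercive_field(grain_size_mm, h0_a_per_m, k_am_mm=0.9):
    return h0_a_per_m + k_am_mm / grain_size_mm

for gs_um in (20, 50, 150):                 # grain sizes from the study range
    gs_mm = gs_um / 1000.0
    print(gs_um, "um ->", round(coercive_field(gs_mm, h0_a_per_m=10.0), 1), "A/m")
```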

  14. Absolutely relative or relatively absolute: violations of value invariance in human decision making.

    Science.gov (United States)

    Teodorescu, Andrei R; Moran, Rani; Usher, Marius

    2016-02-01

    Making decisions based on relative rather than absolute information processing is tied to choice optimality via the accumulation of evidence differences, and to canonical neural processing via the accumulation of evidence ratios. These theoretical frameworks predict invariance of decision latencies to absolute intensities that maintain differences and ratios, respectively. While information about the absolute values of the choice alternatives is not necessary for choosing the best alternative, it may nevertheless hold valuable information about the context of the decision. To test the sensitivity of human decision making to absolute values, we manipulated the intensities of brightness stimuli pairs while preserving either their differences or their ratios. Although asked to choose the brighter alternative relative to the other, participants responded faster to higher absolute values. Thus, our results provide empirical evidence for human sensitivity to task-irrelevant absolute values, indicating a hard-wired mechanism that precedes executive control. Computational investigations of several modelling architectures reveal two alternative accounts for this phenomenon, which combine absolute and relative processing. One account involves accumulation of differences with activation-dependent processing noise and the other emerges from accumulation of absolute values subject to the temporal dynamics of lateral inhibition. The potential adaptive role of such choice mechanisms is discussed.

  15. Standard Error Computations for Uncertainty Quantification in Inverse Problems: Asymptotic Theory vs. Bootstrapping.

    Science.gov (United States)

    Banks, H T; Holm, Kathleen; Robbins, Danielle

    2010-11-01

    We computationally investigate two approaches for uncertainty quantification in inverse problems for nonlinear parameter-dependent dynamical systems. We compare the bootstrapping and asymptotic theory approaches for problems involving data with several noise forms and levels. We consider both constant-variance absolute error data and relative error data, which produce non-constant variance, in our parameter estimation formulations. We compare and contrast parameter estimates, standard errors, confidence intervals, and computational times for both bootstrapping and asymptotic theory methods.
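
    A minimal sketch contrasting the two standard-error routes, reduced to the simplest possible estimator (a sample mean) rather than a dynamical system model, is:

```python
import numpy as np

# Minimal illustration of the two standard-error routes compared in the
# paper, for a sample mean rather than a dynamical model: asymptotic
# theory vs. bootstrap resampling. Data are synthetic.

rng = np.random.default_rng(3)
data = rng.normal(loc=5.0, scale=2.0, size=50)

# Asymptotic theory: SE = s / sqrt(n)
se_asym = data.std(ddof=1) / np.sqrt(len(data))

# Bootstrap: resample with replacement, take the spread of the estimates
boot = [rng.choice(data, size=len(data), replace=True).mean()
        for _ in range(5000)]
se_boot = np.std(boot, ddof=1)

print(f"asymptotic SE = {se_asym:.4f}, bootstrap SE = {se_boot:.4f}")
```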

  16. The effects of a pilates-aerobic program on maximum exercise capacity of adult women

    Directory of Open Access Journals (Sweden)

    Milena Mikalački

    Full Text Available ABSTRACT Introduction: Physical exercise such as the Pilates method offers clinical benefits on the aging process. Likewise, physiologic parameters may be improved through aerobic exercise. Methods: In order to compare the effects of a Pilates-aerobic intervention program on physiologic parameters such as maximum heart rate (HRmax), relative maximal oxygen consumption (relative VO2max) and absolute maximal oxygen consumption (absolute VO2max), maximum heart rate during maximal oxygen consumption (VO2max-HRmax), maximum minute volume (VE) and forced vital capacity (FVC), a total of 64 adult women (active group = 48.1 ± 6.7 years; control group = 47.2 ± 7.4 years) participated in the study. The physiological parameters, the maximal speed and the total duration of the test were measured by maximum exercise capacity testing with the Bruce protocol. The HRmax was calculated by cardio-ergometric software. Pulmonary function tests, maximal speed and total time during the physical test were performed on a treadmill (Medisoft, model 870c). Likewise, spirometry analyzed the impact on oxygen uptake parameters, including FVC and VE. Results: The VO2max (relative and absolute), VE (all P<0.001), VO2max-HRmax (P<0.05) and maximal speed of the treadmill test (P<0.001) showed significant differences in the active group after the physical exercise intervention program. Conclusion: The present study indicates that Pilates exercises through a continuous training program might significantly improve the cardiovascular system. Hence, mixing strength and aerobic exercises into a training program is considered the optimal mechanism for healthy aging.

  17. A kinetic-based sigmoidal model for the polymerase chain reaction and its application to high-capacity absolute quantitative real-time PCR

    Directory of Open Access Journals (Sweden)

    Stewart Don

    2008-05-01

    Full Text Available Abstract Background Based upon defining a common reference point, current real-time quantitative PCR technologies compare relative differences in amplification profile position. As such, absolute quantification requires construction of target-specific standard curves that are highly resource intensive and prone to introducing quantitative errors. Sigmoidal modeling using nonlinear regression has previously demonstrated that absolute quantification can be accomplished without standard curves; however, quantitative errors caused by distortions within the plateau phase have impeded effective implementation of this alternative approach. Results Recognition that amplification rate is linearly correlated to amplicon quantity led to the derivation of two sigmoid functions that allow target quantification via linear regression analysis. In addition to circumventing quantitative errors produced by plateau distortions, this approach allows the amplification efficiency within individual amplification reactions to be determined. Absolute quantification is accomplished by first converting individual fluorescence readings into target quantity expressed in fluorescence units, followed by conversion into the number of target molecules via optical calibration. Founded upon expressing reaction fluorescence in relation to amplicon DNA mass, a seminal element of this study was to implement optical calibration using lambda gDNA as a universal quantitative standard. Not only does this eliminate the need to prepare target-specific quantitative standards, it relegates establishment of quantitative scale to a single, highly defined entity. The quantitative competency of this approach was assessed by exploiting "limiting dilution assay" for absolute quantification, which provided an independent gold standard from which to verify quantitative accuracy. This yielded substantive corroborating evidence that absolute accuracies of ± 25% can be routinely achieved. Comparison
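
    A hedged sketch of the general sigmoidal-modelling step, fitting a four-parameter logistic to fluorescence versus cycle number, is shown below; the paper derives its own sigmoid functions, and the data here are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of sigmoidal modelling of a real-time PCR amplification profile:
# fit a four-parameter logistic to fluorescence vs. cycle number. The data
# are synthetic; the paper's own sigmoid functions differ in detail.

def logistic(n, f_max, n_half, k, f_b):
    return f_max / (1.0 + np.exp(-(n - n_half) / k)) + f_b

cycles = np.arange(1, 41, dtype=float)
rng = np.random.default_rng(4)
truth = logistic(cycles, f_max=1000.0, n_half=24.0, k=1.7, f_b=50.0)
fluor = truth + rng.normal(scale=8.0, size=cycles.size)       # add noise

popt, pcov = curve_fit(logistic, cycles, fluor,
                       p0=(fluor.max(), 20.0, 2.0, fluor.min()))
print("Fmax=%.1f  n1/2=%.2f  k=%.2f  Fb=%.1f" % tuple(popt))
```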

  18. Fluctuation theorems in feedback-controlled open quantum systems: Quantum coherence and absolute irreversibility

    Science.gov (United States)

    Murashita, Yûto; Gong, Zongping; Ashida, Yuto; Ueda, Masahito

    2017-10-01

    The thermodynamics of quantum coherence has attracted growing attention recently, where the thermodynamic advantage of quantum superposition is characterized in terms of quantum thermodynamics. We investigate the thermodynamic effects of quantum coherent driving in the context of the fluctuation theorem. We adopt a quantum-trajectory approach to investigate open quantum systems under feedback control. In these systems, the measurement backaction in the forward process plays a key role, and therefore the corresponding time-reversed quantum measurement and postselection must be considered in the backward process, in sharp contrast to the classical case. The state reduction associated with quantum measurement, in general, creates a zero-probability region in the space of quantum trajectories of the forward process, which causes singularly strong irreversibility with divergent entropy production (i.e., absolute irreversibility) and hence makes the ordinary fluctuation theorem break down. In the classical case, the error-free measurement ordinarily leads to absolute irreversibility, because the measurement restricts classical paths to the region compatible with the measurement outcome. In contrast, in open quantum systems, absolute irreversibility is suppressed even in the presence of the projective measurement due to those quantum rare events that go through the classically forbidden region with the aid of quantum coherent driving. This suppression of absolute irreversibility exemplifies the thermodynamic advantage of quantum coherent driving. Absolute irreversibility is shown to emerge in the absence of coherent driving after the measurement, especially in systems under time-delayed feedback control. We show that absolute irreversibility is mitigated by increasing the duration of quantum coherent driving or decreasing the delay time of feedback control.

  19. Practical, Reliable Error Bars in Quantum Tomography

    OpenAIRE

    Faist, Philippe; Renner, Renato

    2015-01-01

    Precise characterization of quantum devices is usually achieved with quantum tomography. However, most methods which are currently widely used in experiments, such as maximum likelihood estimation, lack a well-justified error analysis. Promising recent methods based on confidence regions are difficult to apply in practice or yield error bars which are unnecessarily large. Here, we propose a practical yet robust method for obtaining error bars. We do so by introducing a novel representation of...

  20. Maximum entropy reconstructions for crystallographic imaging; Cristallographie et reconstruction d'images par maximum d'entropie

    Energy Technology Data Exchange (ETDEWEB)

    Papoular, R

    1997-07-01

    The Fourier Transform is of central importance to Crystallography since it allows the visualization in real space of tridimensional scattering densities pertaining to physical systems from diffraction data (powder or single-crystal diffraction, using x-rays, neutrons, electrons or else). In turn, this visualization makes it possible to model and parametrize these systems, the crystal structures of which are eventually refined by Least-Squares techniques (e.g., the Rietveld method in the case of Powder Diffraction). The Maximum Entropy Method (sometimes called MEM or MaxEnt) is a general imaging technique, related to solving ill-conditioned inverse problems. It is ideally suited for tackling underdetermined systems of linear equations (for which the number of variables is much larger than the number of equations). It is already being applied successfully in Astronomy, Radioastronomy and Medical Imaging. The advantages of using Maximum Entropy over conventional Fourier and 'difference Fourier' syntheses stem from the following facts: MaxEnt takes the experimental error bars into account; MaxEnt incorporates Prior Knowledge (e.g., the positivity of the scattering density in some instances); MaxEnt allows density reconstructions from incompletely phased data, as well as from overlapping Bragg reflections; MaxEnt substantially reduces truncation errors to which conventional experimental Fourier reconstructions are usually prone. The principles of Maximum Entropy imaging as applied to Crystallography are first presented. The method is then illustrated by a detailed example specific to Neutron Diffraction: the search for protons in solids. (author). 17 refs.

  1. The evaluation of the feasibility about prostate SBRT by analyzing interfraction errors of internal organs

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Soon Gi; Son, Sang Joon; Moon, Joon Gi; Kim, Bo Kyum; Lee, Je Hee [Dept. of Radiation Oncology, Seoul National University Hospital, Seoul (Korea, Republic of)

    2016-12-15

    To determine whether a treatment plan for the rectum, bladder and prostate, organs subject to large interfraction errors, satisfies dosimetric limits without an adaptive plan, by analyzing MR images. This study was based on 5 prostate cancer patients who underwent IMRT (total dose: 70 Gy) using the ViewRay MRIdian System (ViewRay Inc., Cleveland, OH, USA). The treatment plans were made on the same CT images to compare the plan quality with and without adaptive planning, and Eclipse (Ver 10.0.42, Varian, USA) was used. After registering the 5 treatment MR images to the planning CT images to analyze the interfraction changes of the organs, we measured the dose-volume histogram and the changes of the absolute volume for each organ by applying the first treatment plan to each image. Over 5 fractions, the dose requirement for the PTV was V36.25Gy ≥ 95%. To confirm that the prescription dose satisfies the SBRT dose limit for prostate, we measured V100%, V95%, V90% for the CTV and V100%, V90%, V80%, V50% for the rectum and bladder. All average dose values for the CTV, rectum and bladder satisfied the dose limits, but there were cases that exceeded a dose limit in one or more of the individual treatment images. After measuring the changes of absolute volume by comparing the MR image of the first treatment plan with those of the subsequent fractions, the difference reached a maximum of 1.72 times for the rectum and 2.0 times for the bladder. For the rectum, the expected values were planned under the dose limit, on average, V100%=0.32%, V90%=3.33%, V80%=7.71%, V50%=23.55% in the first treatment plan. For the rectum, the average absolute volume in the first plan was 117.9 cc, whereas the average actually treated volume was 79.2 cc. For the CTV, the 100% prescription dose area was not satisfied even though the margin for the PTV was 5 mm, because of the variation of rectal and bladder volume. There was no case that the value from average

  2. The evaluation of the feasibility about prostate SBRT by analyzing interfraction errors of internal organs

    International Nuclear Information System (INIS)

    Hong, Soon Gi; Son, Sang Joon; Moon, Joon Gi; Kim, Bo Kyum; Lee, Je Hee

    2016-01-01

    To determine whether a treatment plan for the rectum, bladder and prostate, organs subject to large interfraction errors, satisfies dosimetric limits without an adaptive plan, by analyzing MR images. This study was based on 5 prostate cancer patients who underwent IMRT (total dose: 70 Gy) using the ViewRay MRIdian System (ViewRay Inc., Cleveland, OH, USA). The treatment plans were made on the same CT images to compare the plan quality with and without adaptive planning, and Eclipse (Ver 10.0.42, Varian, USA) was used. After registering the 5 treatment MR images to the planning CT images to analyze the interfraction changes of the organs, we measured the dose-volume histogram and the changes of the absolute volume for each organ by applying the first treatment plan to each image. Over 5 fractions, the dose requirement for the PTV was V36.25Gy ≥ 95%. To confirm that the prescription dose satisfies the SBRT dose limit for prostate, we measured V100%, V95%, V90% for the CTV and V100%, V90%, V80%, V50% for the rectum and bladder. All average dose values for the CTV, rectum and bladder satisfied the dose limits, but there were cases that exceeded a dose limit in one or more of the individual treatment images. After measuring the changes of absolute volume by comparing the MR image of the first treatment plan with those of the subsequent fractions, the difference reached a maximum of 1.72 times for the rectum and 2.0 times for the bladder. For the rectum, the expected values were planned under the dose limit, on average, V100%=0.32%, V90%=3.33%, V80%=7.71%, V50%=23.55% in the first treatment plan. For the rectum, the average absolute volume in the first plan was 117.9 cc, whereas the average actually treated volume was 79.2 cc. For the CTV, the 100% prescription dose area was not satisfied even though the margin for the PTV was 5 mm, because of the variation of rectal and bladder volume. There was no case that the value from the average of five fractions is over the

  3. Real-Time and Meter-Scale Absolute Distance Measurement by Frequency-Comb-Referenced Multi-Wavelength Interferometry.

    Science.gov (United States)

    Wang, Guochao; Tan, Lilong; Yan, Shuhua

    2018-02-07

    We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He-Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10^-8 versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions.
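
    The range-extension step rests on the synthetic-wavelength relation Λ = λ₁λ₂/|λ₁ − λ₂|; a one-line sketch with illustrative wavelengths (not the instrument's actual ones):

```python
# Sketch of the synthetic-wavelength idea used to extend the non-ambiguous
# range: two optical wavelengths lam1, lam2 define a much longer synthetic
# wavelength Lam = lam1*lam2/|lam1 - lam2|. Values are illustrative.

lam1 = 1530.0e-9                      # m
lam2 = 1531.0e-9                      # m
synthetic = lam1 * lam2 / abs(lam1 - lam2)
print(f"synthetic wavelength = {synthetic * 1e3:.3f} mm")   # ~2.342 mm
```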

  4. ABSOLUTE NEUTRINO MASSES

    DEFF Research Database (Denmark)

    Schechter, J.; Shahid, M. N.

    2012-01-01

    We discuss the possibility of using experiments timing the propagation of neutrino beams over large distances to help determine the absolute masses of the three neutrinos.

  5. Auto-calibration of Systematic Odometry Errors in Mobile Robots

    DEFF Research Database (Denmark)

    Bak, Martin; Larsen, Thomas Dall; Andersen, Nils Axel

    1999-01-01

    This paper describes the phenomenon of systematic errors in odometry models in mobile robots and looks at various ways of avoiding it by means of auto-calibration. The systematic errors considered are incorrect knowledge of the wheel base and the gains from encoder readings to wheel displacement....... By auto-calibration we mean a standardized procedure which estimates the uncertainties using only on-board equipment such as encoders, an absolute measurement system and filters; no intervention by operator or off-line data processing is necessary. Results are illustrated by a number of simulations...... and experiments on a mobile robot....

  6. Data error effects on net radiation and evapotranspiration estimation

    International Nuclear Information System (INIS)

    Llasat, M.C.; Snyder, R.L.

    1998-01-01

    The objective of this paper is to evaluate the potential error in estimating the net radiation and reference evapotranspiration resulting from errors in the measurement or estimation of weather parameters. A methodology for estimating the net radiation using hourly weather variables measured at a typical agrometeorological station (e.g., solar radiation, temperature and relative humidity) is presented. Then the error propagation analysis is made for net radiation and for reference evapotranspiration. Data from the Raimat weather station, which is located in the Catalonia region of Spain, are used to illustrate the error relationships. The results show that temperature, relative humidity and cloud cover errors have little effect on the net radiation or reference evapotranspiration. A 5°C error in estimating surface temperature leads to errors as big as 30 W m^-2 at high temperature. A 4% solar radiation (Rs) error can cause a net radiation error as big as 26 W m^-2 when Rs ≈ 1000 W m^-2. However, the error is less when cloud cover is calculated as a function of the solar radiation. The absolute error in reference evapotranspiration (ETo) equals the product of the net radiation error and the radiation term weighting factor [W = Δ/(Δ+γ)] in the ETo equation. Therefore, the ETo error varies between 65 and 85% of the Rn error as air temperature increases from about 20° to 40°C. (author)
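
    The quoted ETo error relationship can be reproduced directly: the absolute ETo error equals the Rn error times W = Δ/(Δ+γ). The sketch below uses common textbook approximations for Δ and γ, which are assumptions rather than the paper's exact formulation:

```python
import numpy as np

# The stated error relationship: ETo error = W * Rn error, with
# W = Delta/(Delta + gamma). Delta (slope of the saturation vapour
# pressure curve) and gamma (psychrometric constant) use common
# approximations here; values are illustrative.

def delta_kpa_per_c(t_c):
    """Slope of the saturation vapour pressure curve (kPa/degC)."""
    es = 0.6108 * np.exp(17.27 * t_c / (t_c + 237.3))
    return 4098.0 * es / (t_c + 237.3) ** 2

gamma = 0.066                                   # kPa/degC, near sea level
for t in (20.0, 30.0, 40.0):
    w = delta_kpa_per_c(t) / (delta_kpa_per_c(t) + gamma)
    print(f"T={t:.0f} C: W={w:.2f}, ETo error = {w:.2f} x Rn error")
# Output spans roughly W = 0.69 to 0.86, matching the quoted 65-85% range.
```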

  7. Absolute nuclear material assay

    Science.gov (United States)

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2010-07-13

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  8. Real-Time and Meter-Scale Absolute Distance Measurement by Frequency-Comb-Referenced Multi-Wavelength Interferometry

    Directory of Open Access Journals (Sweden)

    Guochao Wang

    2018-02-01

    Full Text Available We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He–Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10−8 versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions.

  9. Thermodynamics of negative absolute pressures

    International Nuclear Information System (INIS)

    Lukacs, B.; Martinas, K.

    1984-03-01

    The authors show that the possibility of negative absolute pressure can be incorporated into axiomatic thermodynamics, analogously to negative absolute temperature. There are examples of such systems (GUT, QCD) possessing negative absolute pressure in domains where it can be expected from thermodynamical considerations. (author)

  10. Reduction of weighing errors caused by tritium decay heating

    International Nuclear Information System (INIS)

    Shaw, J.F.

    1978-01-01

    The deuterium-tritium source gas mixture for laser targets is formulated by weight. Experiments show that the maximum weighing error caused by tritium decay heating is 0.2% for a 10^4-cm^3 mix vessel. Air cooling the vessel reduces the weighing error by 90%.

  11. Adaptive Unscented Kalman Filter using Maximum Likelihood Estimation

    DEFF Research Database (Denmark)

    Mahmoudi, Zeinab; Poulsen, Niels Kjølstad; Madsen, Henrik

    2017-01-01

    The purpose of this study is to develop an adaptive unscented Kalman filter (UKF) by tuning the measurement noise covariance. We use the maximum likelihood estimation (MLE) and the covariance matching (CM) method to estimate the noise covariance. The multi-step prediction errors generated...

  12. Errors and Correction of Precipitation Measurements in China

    Institute of Scientific and Technical Information of China (English)

    REN Zhihua; LI Mingqin

    2007-01-01

    In order to discover the range of various errors in Chinese precipitation measurements and seek a correction method, 30 precipitation evaluation stations were set up countrywide before 1993. All the stations are reference stations in China. To seek a correction method for wind-induced error, a precipitation correction instrument called the "horizontal precipitation gauge" was devised beforehand. Field intercomparison observations covering 29,000 precipitation events have been conducted using one pit gauge, two elevated operational gauges and one horizontal gauge at the above 30 stations. The range of precipitation measurement errors in China is obtained by analysis of the intercomparison measurement results. The distributions of random errors and systematic errors in precipitation measurements are studied in this paper. A correction method, especially for wind-induced errors, is developed. The results show that a power-function correlation exists between the precipitation amount caught by the horizontal gauge and the absolute difference between observations from the operational gauge and the pit gauge. The correlation coefficient is 0.99. For operational observations, precipitation correction can be carried out simply by parallel observation with a horizontal precipitation gauge. The precipitation accuracy after correction approaches that of the pit gauge. The correction method developed is simple and feasible.
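
    A sketch of fitting such a power-function correction in log-log space follows; the data points are synthetic placeholders, not the 29,000-event intercomparison data:

```python
import numpy as np

# Sketch of fitting the reported power-function relation between the
# horizontal-gauge catch P_h and the wind-induced loss dP (pit minus
# operational gauge): dP = a * P_h**b, fitted in log-log space.
# The data points are synthetic placeholders.

p_h = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # mm, horizontal gauge
dp = np.array([0.12, 0.21, 0.40, 0.73, 1.38])    # mm, measured loss

b, log_a = np.polyfit(np.log(p_h), np.log(dp), 1)
a = np.exp(log_a)
print(f"dP = {a:.3f} * P_h^{b:.3f}")             # correction equation
corrected = 3.0 + a * 3.0**b                      # apply to a 3.0 mm event
print(f"corrected precipitation: {corrected:.2f} mm")
```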

  13. Superselective renal artery embolization with lipiodol and absolute alcohol emulsion for renal tumor

    International Nuclear Information System (INIS)

    Yu Miao; Li Jiakai; Sun Minglu; Wang Huixian

    2008-01-01

    Objective: To evaluate the efficacy of renal arterial embolization with lipiodol and absolute alcohol emulsion in the treatment of renal tumors. Methods: Superselective renal arterial embolization by coaxial catheterization with infusion of a lipiodol and absolute alcohol (in a proportion of 2:1) emulsion was performed in twenty patients with malignant and benign kidney tumors. Four weeks later, renal arteriography was performed routinely, with repeat embolization performed when necessary, and follow-up was carried out periodically. Results: The imaging findings showed thorough tumor necrosis and interruption of the feeding vessels in 18 cases after one session of treatment. The volume of the tumors decreased by more than a half in 13 patients (82.25%, 13/18), associated with well-distributed lipiodol inside the tumors. A second session of treatment was performed in the other 2 patients and the clinical symptoms were obviously relieved. Conclusions: Superselective renal artery embolization with lipiodol and absolute alcohol emulsion can permanently embolize all tumor-feeding arteries at the capillary level with maximum preservation of renal function, providing definite efficacy, and is worth recommending widely. (authors)

  14. The correction of vibration in frequency scanning interferometry based absolute distance measurement system for dynamic measurements

    Science.gov (United States)

    Lu, Cheng; Liu, Guodong; Liu, Bingguo; Chen, Fengdong; Zhuang, Zhitao; Xu, Xinke; Gan, Yu

    2015-10-01

    Absolute distance measurement systems are of significant interest in the field of metrology; they could improve the manufacturing efficiency and accuracy of large assemblies in fields such as aircraft construction, automotive engineering, and the production of modern windmill blades. Frequency scanning interferometry demonstrates noticeable advantages as an absolute distance measurement system: it has high precision and doesn't depend on a cooperative target. In this paper, the influence of inevitable vibration in the frequency scanning interferometry based absolute distance measurement system is analyzed. The distance spectrum is broadened by the Doppler effect caused by vibration, which introduces a measurement error more than 10^3 times bigger than the change in the optical path difference. In order to decrease the influence of vibration, the changes of the optical path difference are monitored by a frequency-stabilized laser, which runs parallel to the frequency scanning interferometry. The experiment has verified the effectiveness of this method.

  15. Parameters and error of a theoretical model

    International Nuclear Information System (INIS)

    Moeller, P.; Nix, J.R.; Swiatecki, W.

    1986-09-01

    We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs
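
    One standard way to realize this idea, treating residuals as Gaussian with combined variance σ_exp² + σ_th² and choosing σ_th by maximum likelihood, is sketched below on simulated data; the paper's exact formulation may differ:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Sketch of assigning an intrinsic error to a model: assume residuals
# r_i = x_exp_i - x_th_i are Gaussian with combined variance
# sig_exp_i**2 + sig_th**2 and choose sig_th by maximum likelihood.
# Data are simulated, echoing the paper's numerical check.

rng = np.random.default_rng(5)
n = 500
sig_exp = np.full(n, 0.1)                         # known experimental errors
sig_th_true = 0.3                                  # model error to recover
r = rng.normal(scale=np.sqrt(sig_exp**2 + sig_th_true**2))

def neg_log_like(sig_th):
    var = sig_exp**2 + sig_th**2
    return 0.5 * np.sum(np.log(var) + r**2 / var)

fit = minimize_scalar(neg_log_like, bounds=(1e-6, 10.0), method="bounded")
print("estimated model error:", round(fit.x, 3))   # ~0.3
```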

  16. Maximum Temperature Detection System for Integrated Circuits

    Science.gov (United States)

    Frankiewicz, Maciej; Kos, Andrzej

    2015-03-01

    The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.

  17. Integrated Navigation System Design for Micro Planetary Rovers: Comparison of Absolute Heading Estimation Algorithms and Nonlinear Filtering

    Science.gov (United States)

    Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-01-01

    This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level. PMID:27223293

  18. Relative and absolute test-retest reliabilities of pressure pain threshold in patients with knee osteoarthritis.

    Science.gov (United States)

    Srimurugan Pratheep, Neeraja; Madeleine, Pascal; Arendt-Nielsen, Lars

    2018-04-25

    Pressure pain threshold (PPT) and PPT maps are commonly used to quantify and visualize mechanical pain sensitivity. Although PPTs have frequently been reported from patients with knee osteoarthritis (KOA), the absolute and relative reliability of PPT assessments remain to be determined. Thus, the purpose of this study was to evaluate the test-retest relative and absolute reliability of PPT in KOA. For that purpose, the intra- and interclass correlation coefficients (ICC) as well as the standard error of measurement (SEM) and the minimal detectable change (MDC) were determined within eight anatomical locations covering the most painful knee of KOA patients. Twenty KOA patients participated in two sessions, 2 weeks ± 3 days apart. PPTs were assessed over eight anatomical locations covering the knee and two remote locations over the tibialis anterior and brachioradialis. The patients rated their maximum pain intensity during the past 24 h and prior to the recordings on a visual analog scale (VAS), and completed the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) and PainDetect surveys. The ICC, SEM and MDC between the sessions were assessed. Individual variability was expressed as the coefficient of variation (CV). Bland-Altman plots were used to assess potential bias in the dataset. The ICC ranged from 0.85 to 0.96 for all the anatomical locations, which is considered "almost perfect". CV was lowest in session 1 and ranged from 44.2 to 57.6%. SEM for comparison ranged between 34 and 71 kPa, and MDC ranged between 93 and 197 kPa, with mean PPT ranging from 273.5 to 367.7 kPa in session 1 and 268.1-331.3 kPa in session 2. The analysis of the Bland-Altman plots showed no systematic bias. PPT maps showed that the patients had lower thresholds in session 2, but no significant difference was observed for the comparison between the sessions for PPT or VAS. No correlations were seen between PainDetect and PPT and PainDetect and WOMAC
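
    The two absolute-reliability quantities follow from the ICC and the between-subject spread via SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM; the sketch below uses illustrative numbers chosen to fall near the reported ranges:

```python
import numpy as np

# SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM, the two
# absolute-reliability quantities used in the study. Numbers below are
# illustrative, chosen to land near the reported ranges.

def sem_and_mdc(sd_kpa, icc):
    sem = sd_kpa * np.sqrt(1.0 - icc)
    mdc95 = 1.96 * np.sqrt(2.0) * sem
    return sem, mdc95

sem, mdc = sem_and_mdc(sd_kpa=160.0, icc=0.90)
print(f"SEM = {sem:.0f} kPa, MDC95 = {mdc:.0f} kPa")
```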

  19. Automated absolute activation analysis with californium-252 sources

    International Nuclear Information System (INIS)

    MacMurdo, K.W.; Bowman, W.W.

    1978-09-01

    A 100-mg 252Cf neutron activation analysis facility is used routinely at the Savannah River Laboratory for multielement analysis of many solid and liquid samples. An absolute analysis technique converts counting data directly to elemental concentration without the use of classical comparative standards and flux monitors. With the totally automated pneumatic sample transfer system, cyclic irradiation-decay-count regimes can be pre-selected for up to 40 samples, and samples can be analyzed with the facility unattended. An automatic data control system starts and stops a high-resolution gamma-ray spectrometer and/or a delayed-neutron detector; the system also stores data and controls output modes. Gamma-ray data are reduced by three main programs in the IBM 360/195 computer: the 4096-channel spectrum and pertinent experimental timing, counting, and sample data are stored on magnetic tape; the spectrum is then reduced to a list of significant photopeak energies, integrated areas, and their associated statistical errors; and the third program assigns gamma-ray photopeaks to the appropriate neutron activation product(s) by comparing photopeak energies to tabulated gamma-ray energies. Photopeak areas are then converted to elemental concentration by using experimental timing and sample data, calculated elemental neutron capture rates, absolute detector efficiencies, and absolute spectroscopic decay data. Calculational procedures have been developed so that fissile material can be analyzed by cyclic neutron activation and delayed-neutron counting procedures. These calculations are based on a 6-half-life group model of delayed neutron emission; calculations include corrections for delayed-neutron interference from 17O. Detection sensitivities for 239Pu were demonstrated with 15-g samples at a throughput of up to 140 per day. Over 40 elements can be detected at the sub-ppm level

  20. Danish Towns during Absolutism

    DEFF Research Database (Denmark)

    This anthology, No. 4 in the Danish Urban Studies Series, presents in English recent significant research on Denmark's urban development during the Age of Absolutism, 1660-1848, and features 13 articles written by leading Danish urban historians. The years of Absolutism were marked by a general...

  1. Multiwavelength Absolute Phase Retrieval from Noisy Diffractive Patterns: Wavelength Multiplexing Algorithm

    Directory of Open Access Journals (Sweden)

    Vladimir Katkovnik

    2018-05-01

    Full Text Available We study the problem of multiwavelength absolute phase retrieval from noisy diffraction patterns. The system is lensless, with multiwavelength coherent input light beams and random phase masks applied for wavefront modulation. The light beams are formed by light sources radiating all wavelengths simultaneously. A sensor equipped with a Color Filter Array (CFA) is used for spectral measurement registration. The developed algorithm, targeted at optimal phase retrieval from noisy observations, is based on the maximum likelihood technique. The algorithm is specified for Poissonian and Gaussian noise distributions. One of the key elements of the algorithm is an original sparse modeling of the multiwavelength complex-valued wavefronts based on complex-domain block-matching 3D filtering. The presented numerical experiments are restricted to noisy Poissonian observations. They demonstrate that the developed algorithm leads to effective solutions, explicitly using sparsity for noise suppression and enabling accurate reconstruction of absolute phase with high dynamic range.

  2. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of implementing accuracy compensation are closely related to the choice of sampling points. Therefore, based on the error-similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps of a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.
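
    A hedged sketch of error-similarity-style compensation, predicting the error at a target pose from nearby grid samples (here with simple inverse-distance weighting, a simplification of the paper's method), is:

```python
import numpy as np

# Sketch of error-similarity-style compensation: the positioning error at
# a target point is predicted from errors measured at nearby grid sample
# points, here with simple inverse-distance weighting. Grid data are
# synthetic placeholders, not the Kuka KR-210 measurements.

def predict_error(target, sample_pts, sample_errs, power=2.0):
    d = np.linalg.norm(sample_pts - target, axis=1)
    if np.any(d < 1e-9):                       # target coincides with a sample
        return sample_errs[np.argmin(d)]
    w = 1.0 / d**power
    return (w[:, None] * sample_errs).sum(axis=0) / w.sum()

pts = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0],
                [0.0, 100.0, 0.0], [100.0, 100.0, 0.0]])   # mm, grid nodes
errs = np.array([[0.3, -0.1, 0.2], [0.5, 0.0, 0.1],
                 [0.2, -0.2, 0.3], [0.4, -0.1, 0.2]])      # mm, measured errors
e = predict_error(np.array([40.0, 60.0, 0.0]), pts, errs)
print("predicted error (mm):", e, "-> offset the command target by -e")
```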

  3. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    Science.gov (United States)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  4. Alternatives to accuracy and bias metrics based on percentage errors for radiation belt modeling applications

    Energy Technology Data Exchange (ETDEWEB)

    Morley, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-01

    This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
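
    Under the assumption that observations and predictions are strictly positive, the three quantities discussed (MAPE, the median log accuracy ratio as a bias measure, and the median symmetric accuracy) can be computed as follows; the sample arrays are placeholders:

        import numpy as np

        def mape(obs, pred):
            """Mean absolute percentage error (undefined when any obs == 0)."""
            return 100.0 * np.mean(np.abs((pred - obs) / obs))

        def median_log_accuracy_ratio(obs, pred):
            """Median of log(pred/obs); ~0 when unbiased, sign gives bias direction."""
            return np.median(np.log(pred / obs))

        def median_symmetric_accuracy(obs, pred):
            """100*(exp(median|log(pred/obs)|) - 1): symmetric, percentage-like accuracy."""
            return 100.0 * (np.exp(np.median(np.abs(np.log(pred / obs)))) - 1.0)

        obs = np.array([1.0, 2.0, 4.0, 8.0])
        pred = np.array([1.2, 1.8, 5.0, 7.0])
        print(mape(obs, pred),
              median_log_accuracy_ratio(obs, pred),
              median_symmetric_accuracy(obs, pred))

    Unlike MAPE, the ratio-based measures treat over- and under-prediction symmetrically and remain meaningful for data spanning several orders of magnitude.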

  5. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data, and higher contrast results were obtained.
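
    The core of such a scheme can be caricatured as regularised least squares with an entropy prior. The sketch below shows only that step (gradient descent on chi-squared minus a scaled entropy against a flat default model); it omits the thesis's iterative error re-estimation from predicted Poisson means and its total-count preservation, and all parameter values are illustrative:

        import numpy as np

        def maxent_deconvolve(data, psf_fft, n_iter=500, alpha=0.05, lr=0.2):
            """Maximum-entropy deconvolution sketch: minimize chi^2(f) - alpha*S(f)
            with S(f) = -sum f log(f/m), m a flat default model (data mean > 0)."""
            m = np.full(data.size, data.mean())
            f = m.copy()
            for _ in range(n_iter):
                resid = np.fft.ifft(psf_fft * np.fft.fft(f)).real - data
                grad_chi2 = 2.0 * np.fft.ifft(np.conj(psf_fft) * np.fft.fft(resid)).real
                grad_S = -(np.log(f / m) + 1.0)   # d/df of -sum f log(f/m)
                f = np.clip(f - lr * (grad_chi2 - alpha * grad_S), 1e-8, None)
            return f

        # Demo: blur a spike train with a Gaussian PSF, add noise, restore
        x = np.zeros(128); x[[30, 70, 75]] = [5.0, 3.0, 4.0]
        psf = np.exp(-0.5 * ((np.arange(128) - 64) / 3.0)**2); psf /= psf.sum()
        psf_fft = np.fft.fft(np.roll(psf, -64))
        blurred = np.fft.ifft(psf_fft * np.fft.fft(x)).real
        noisy = blurred + 0.01 * np.random.default_rng(2).normal(size=128)
        restored = maxent_deconvolve(noisy, psf_fft)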

  6. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    Science.gov (United States)

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machines (SVM) are often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to raise its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors.
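
    A minimal sketch of the general approach, with plain PSO tuning an SVR's hyperparameters by cross-validated error; the natural-selection and simulated-annealing refinements of NAPSO are omitted, and the search box and PSO coefficients are assumptions:

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_score

        def pso_svr(X, y, n_particles=12, n_iter=25, seed=0):
            """Plain PSO over (log10 C, log10 gamma): a simplified stand-in for NAPSO."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])  # log10 search box
            pos = rng.uniform(lo, hi, (n_particles, 2))
            vel = np.zeros_like(pos)

            def fitness(p):
                model = SVR(C=10**p[0], gamma=10**p[1])
                return -cross_val_score(model, X, y, cv=3,
                                        scoring="neg_mean_absolute_error").mean()

            best_p = pos.copy()
            best_f = np.array([fitness(p) for p in pos])
            g = best_p[best_f.argmin()].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random((2, n_particles, 1))
                vel = 0.7 * vel + 1.5 * r1 * (best_p - pos) + 1.5 * r2 * (g - pos)
                pos = np.clip(pos + vel, lo, hi)
                f = np.array([fitness(p) for p in pos])
                improved = f < best_f
                best_p[improved], best_f[improved] = pos[improved], f[improved]
                g = best_p[best_f.argmin()].copy()
            return SVR(C=10**g[0], gamma=10**g[1]).fit(X, y)

    Calling pso_svr(X, y) on past error records returns a fitted predictor with the best (C, gamma) found by the swarm.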

  7. DI3 - A New Procedure for Absolute Directional Measurements

    Directory of Open Access Journals (Sweden)

    A Geese

    2011-06-01

    Full Text Available The standard observatory procedure for determining a geomagnetic field's declination and inclination absolutely is the DI-flux measurement. The instrument consists of a non-magnetic theodolite equipped with a single-axis fluxgate magnetometer. Additionally, a scalar magnetometer is needed to provide all three components of the field. Using only 12 measurement steps, all systematic errors can be accounted for, but if only one of the readings is wrong, the whole measurement has to be rejected. We use a three-component sensor on top of the theodolite's telescope. By performing more measurement steps, we gain much better control of the whole procedure: as the magnetometer can be fully calibrated by rotating about two independent directions, every combined reading of magnetometer output and theodolite angles provides the absolute field vector. We predefined a set of angle positions that the observer has to try to achieve. To further simplify the measurement procedure, the observer is guided by a pocket PC, on which he has only to confirm the theodolite position. The magnetic field is then stored automatically, together with the horizontal and vertical angles. The DI3 measurement is periodically performed at the Niemegk Observatory, allowing for a direct comparison with the traditional measurements.

  8. INVESTIGATION OF INFLUENCE OF ENCODING FUNCTION COMPLEXITY ON DISTRIBUTION OF ERROR MASKING PROBABILITY

    Directory of Open Access Journals (Sweden)

    A. B. Levina

    2016-03-01

    Full Text Available Error detection codes are mechanisms that enable robust delivery of data over unreliable communication channels and devices. Unreliable channels and devices are error-prone objects; error detection codes allow detecting such errors. There are two classes of error detecting codes - classical codes and security-oriented codes. The classical codes have a high percentage of detected errors; however, they have a high probability of missing an error under algebraic manipulation. In turn, security-oriented codes are codes with a small Hamming distance and high protection against algebraic manipulation. The probability of error masking is a fundamental parameter of security-oriented codes. A detailed study of this parameter allows analyzing the behavior of the error-correcting code in the case of error injection into the encoding device. In turn, the complexity of the encoding function plays an important role in security-oriented codes. Encoding functions with less computational complexity and a low probability of masking are the best protection of the encoding device against malicious acts. This paper investigates the influence of encoding function complexity on the error masking probability distribution. It will be shown that a more complex encoding function reduces the maximum of the error masking probability. It is also shown in the paper that increasing the function complexity changes the error masking probability distribution. In particular, increasing the computational complexity decreases the difference between the maximum and average values of the error masking probability. Our results have shown that functions with greater complexity have smoothed maxima of error masking probability, which significantly complicates the analysis of the error-correcting code by an attacker. As a result, in the case of a complex encoding function the probability of successful algebraic manipulation is reduced. The paper discusses an approach to measuring the error masking probability.
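
    For tiny codes the error masking probability can be computed by brute force: an error pattern e is masked when it maps some codeword onto another codeword. The sketch below does this for a 3-bit message with an even parity bit; because that code is linear, every nonzero error is masked either always or never, which is exactly the weakness of classical codes noted above:

        from itertools import product  # (unused here, kept for extending to q-ary codes)

        def masking_profile(encode, k, n):
            """For encode: k-bit message -> n-bit codeword, compute the masking
            probability Q(e) = |{x : encode(x) XOR e is a codeword}| / 2^k
            for every nonzero error e, by exhaustive enumeration (small k, n only)."""
            codewords = {encode(x) for x in range(2**k)}
            profile = {}
            for e in range(1, 2**n):
                masked = sum((encode(x) ^ e) in codewords for x in range(2**k))
                profile[e] = masked / 2**k
            return profile

        def parity_encode(x):
            """3-bit message plus even parity bit: a classical linear code."""
            p = bin(x).count("1") & 1
            return (x << 1) | p

        prof = masking_profile(parity_encode, k=3, n=4)
        print(max(prof.values()), sum(prof.values()) / len(prof))  # max and mean masking

    A security-oriented (nonlinear) encoding function would spread Q(e) so that no error pattern is masked with probability 1, at the cost of a more complex encoder.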

  9. A New Error Analysis and Accuracy Synthesis Method for Shoe Last Machine

    Directory of Open Access Journals (Sweden)

    Bian Xiangjuan

    2014-05-01

    Full Text Available In order to improve the manufacturing precision of the shoe last machine, a new error-computing model has been put forward. First, based on the special topological structure of the shoe last machine and multi-rigid-body system theory, a spatial error-calculating model of the system was built; then, the law of error distribution in the whole workspace was discussed, and the maximum-error position of the system was found; finally, the sensitivities of the error parameters were analyzed at the maximum-error position and accuracy synthesis was conducted by using the Monte Carlo method (see the sketch below). Taking the error sensitivity analysis into account, the accuracy of the main parts was allocated. Results show that the probability that the maximal volume error is less than 0.05 mm was improved from 0.6592 for the old scheme to 0.7021 for the new scheme, so the precision of the system was improved markedly. The model can be used for the error analysis and accuracy synthesis of complex multi-embranchment motion chain systems, and to improve system manufacturing precision.
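
    In its simplest form, a Monte Carlo accuracy synthesis of this kind reduces to sampling the toleranced parameters and propagating them through (a linearization of) the error model; everything below, tolerances and sensitivities included, is hypothetical:

        import numpy as np

        rng = np.random.default_rng(1)
        N = 100_000

        # Hypothetical error budget: per-parameter tolerances (mm, 3-sigma) and a
        # stand-in sensitivity vector mapping parameter errors to volume error.
        tolerances = np.array([0.02, 0.03, 0.01, 0.04])
        sensitivities = np.array([0.8, 0.5, 1.2, 0.4])

        draws = rng.normal(0.0, tolerances / 3.0, size=(N, 4))  # 3-sigma -> sigma
        vol_err = np.abs(draws @ sensitivities)                 # linearized propagation

        print("P(|volume error| < 0.05 mm) =", np.mean(vol_err < 0.05))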

  10. Tinker-OpenMM: Absolute and relative alchemical free energies using AMOEBA on GPUs.

    Science.gov (United States)

    Harger, Matthew; Li, Daniel; Wang, Zhi; Dalby, Kevin; Lagardère, Louis; Piquemal, Jean-Philip; Ponder, Jay; Ren, Pengyu

    2017-09-05

    The capabilities of the polarizable force fields for alchemical free energy calculations have been limited by the high computational cost and complexity of the underlying potential energy functions. In this work, we present a GPU-based general alchemical free energy simulation platform for polarizable potential AMOEBA. Tinker-OpenMM, the OpenMM implementation of the AMOEBA simulation engine has been modified to enable both absolute and relative alchemical simulations on GPUs, which leads to a ∼200-fold improvement in simulation speed over a single CPU core. We show that free energy values calculated using this platform agree with the results of Tinker simulations for the hydration of organic compounds and binding of host-guest systems within the statistical errors. In addition to absolute binding, we designed a relative alchemical approach for computing relative binding affinities of ligands to the same host, where a special path was applied to avoid numerical instability due to polarization between the different ligands that bind to the same site. This scheme is general and does not require ligands to have similar scaffolds. We show that relative hydration and binding free energy calculated using this approach match those computed from the absolute free energy approach. © 2017 Wiley Periodicals, Inc.

  11. Proton spectroscopic imaging of polyacrylamide gel dosimeters for absolute radiation dosimetry

    International Nuclear Information System (INIS)

    Murphy, P.S.; Schwarz, A.J.; Leach, M.O.

    2000-01-01

    Proton spectroscopy has been evaluated as a method for quantifying radiation induced changes in polyacrylamide gel dosimeters. A calibration was first performed using BANG-type gel samples receiving uniform doses of 6 MV photons from 0 to 9 Gy in 1 Gy intervals. The peak integral of the acrylic protons belonging to acrylamide and methylenebisacrylamide normalized to the water signal was plotted against absorbed dose. Response was approximately linear within the range 0-7 Gy. A large gel phantom irradiated with three coplanar 3×3 cm square fields to 5.74 Gy at isocentre was then imaged with an echo-filter technique to map the distribution of monomers directly. The image, normalized to the water signal, was converted into an absolute dose map. At the isocentre the measured dose was 5.69 Gy (SD = 0.09), which was in good agreement with the planned dose. The measured dose distribution elsewhere in the sample shows greater errors. A T2-derived dose map demonstrated a better relative distribution but gave an overestimate of the dose at isocentre of 18%. The data indicate that MR measurements of monomer concentration can complement T2-based measurements and can be used to verify absolute dose. Compared with the more usual T2 measurements for assessing gel polymerization, monomer concentration analysis is less sensitive to parameters such as gel pH and temperature, which can cause ambiguous relaxation time measurements and erroneous absolute dose calculations. (author)

  12. Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    S. Radhika

    2016-04-01

    Full Text Available Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC-based adaptive filter with variable step size in order to obtain improved performance in terms of both convergence rate and steady-state error, with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the Mean Square Deviation (MSD) error from one iteration to the next. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and lower steady-state error than conventional MCC-based adaptive filters.
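
    The flavor of such a filter can be sketched as an LMS-style loop whose update is weighted by the Gaussian correntropy kernel, so large impulsive errors barely move the weights. The variable step size below is a simple surrogate driven by a running error-power estimate, not the paper's MSD-minimizing rule:

        import numpy as np

        def vss_mcc_filter(x, d, order=8, mu0=0.05, sigma=1.0):
            """MCC-LMS sketch with a simple variable step size. The kernel
            exp(-e^2 / 2 sigma^2) suppresses updates on impulsive errors; the base
            step also decays with a robustified running error-power estimate."""
            w = np.zeros(order)
            p = 1.0                                 # running error-power estimate
            y = np.zeros(len(x))
            for n in range(order, len(x)):
                u = x[n - order:n][::-1]            # regressor (most recent first)
                e = d[n] - w @ u
                p = 0.99 * p + 0.01 * min(e * e, 100.0)
                mu = mu0 / (1.0 + p)                # variable step size
                w += mu * np.exp(-e * e / (2 * sigma**2)) * e * u  # MCC update
                y[n] = w @ u
            return w, y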

  13. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    Science.gov (United States)

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be applied later, while methodological advances are needed in the multi-pollutant setting.

  14. Optimal design of the absolute positioning sensor for a high-speed maglev train and research on its fault diagnosis.

    Science.gov (United States)

    Zhang, Dapeng; Long, Zhiqiang; Xue, Song; Zhang, Junge

    2012-01-01

    This paper studies an absolute positioning sensor for a high-speed maglev train and its fault diagnosis method. The absolute positioning sensor is an important sensor for the high-speed maglev train to accomplish its synchronous traction. It is used to calibrate the error of the relative positioning sensor, which provides the magnetic phase signal. On the basis of an analysis of the principle of the absolute positioning sensor, the paper describes the design of the sending and receiving coils and realizes the hardware and software for the sensor. In order to enhance the reliability of the sensor, a support vector machine is used to recognize the fault characteristics, and the signal flow method is used to locate the faulty parts. The diagnosis information not only can be sent to an upper central control computer to evaluate the reliability of the sensors, but also enables on-line diagnosis for debugging and quick detection when the maglev train is off-line. The absolute positioning sensor we study has been used in the actual project.

  15. Optimal Design of the Absolute Positioning Sensor for a High-Speed Maglev Train and Research on Its Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Junge Zhang

    2012-08-01

    Full Text Available This paper studies an absolute positioning sensor for a high-speed maglev train and its fault diagnosis method. The absolute positioning sensor is an important sensor for the high-speed maglev train to accomplish its synchronous traction. It is used to calibrate the error of the relative positioning sensor, which provides the magnetic phase signal. On the basis of an analysis of the principle of the absolute positioning sensor, the paper describes the design of the sending and receiving coils and realizes the hardware and software for the sensor. In order to enhance the reliability of the sensor, a support vector machine is used to recognize the fault characteristics, and the signal flow method is used to locate the faulty parts. The diagnosis information not only can be sent to an upper central control computer to evaluate the reliability of the sensors, but also enables on-line diagnosis for debugging and quick detection when the maglev train is off-line. The absolute positioning sensor we study has been used in the actual project.

  16. Is adult gait less susceptible than paediatric gait to hip joint centre regression equation error?

    Science.gov (United States)

    Kiernan, D; Hosking, J; O'Brien, T

    2016-03-01

    Hip joint centre (HJC) regression equation error during paediatric gait has recently been shown to have clinical significance. In relation to adult gait, it has been inferred that comparable errors with children in absolute HJC position may in fact result in less significant kinematic and kinetic error. This study investigated the clinical agreement of three commonly used regression equation sets (Bell et al., Davis et al. and Orthotrak) for adult subjects against the equations of Harrington et al. The relationship between HJC position error and subject size was also investigated for the Davis et al. set. Full 3-dimensional gait analysis was performed on 12 healthy adult subjects with data for each set compared to Harrington et al. The Gait Profile Score, Gait Variable Score and GDI-kinetic were used to assess clinical significance while differences in HJC position between the Davis and Harrington sets were compared to leg length and subject height using regression analysis. A number of statistically significant differences were present in absolute HJC position. However, all sets fell below the clinically significant thresholds (GPS <1.6°, GDI-Kinetic <3.6 points). Linear regression revealed a statistically significant relationship for both increasing leg length and increasing subject height with decreasing error in anterior/posterior and superior/inferior directions. Results confirm a negligible clinical error for adult subjects suggesting that any of the examined sets could be used interchangeably. Decreasing error with both increasing leg length and increasing subject height suggests that the Davis set should be used cautiously on smaller subjects. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Long term observation on absolute lymphocyte counts in the adult health study sample, Hiroshima and Nagasaki

    International Nuclear Information System (INIS)

    Oesterle, S.N.; Norman, J.E. Jr.

    1980-01-01

    Total peripheral blood lymphocytes were evaluated by age and exposure status in the Adult Health Study population during three examination cycles between 1958 and 1972. No radiation effect was observed, but a significant drop in the absolute lymphocyte counts of those aged 70 years and over and a corresponding maximum for persons aged 50-59 was observed. (author)

  18. An Empirical Analysis for the Prediction of a Financial Crisis in Turkey through the Use of Forecast Error Measures

    Directory of Open Access Journals (Sweden)

    Seyma Caliskan Cavdar

    2015-08-01

    Full Text Available In this study, we examine whether the forecast errors obtained by ANN models affect the breakout of financial crises. Additionally, we investigate how much the asymmetric information and forecast errors are reflected in the output values. In our study, we used the exchange rate of USD/TRY (USD), the Borsa Istanbul 100 Index (BIST), and the gold price (GP) as the output variables of our Artificial Neural Network (ANN) models. We observe that the predicted ANN model has a strong explanatory capability for the 2001 and 2008 crises. Our calculations of symmetry measures such as the mean absolute percentage error (MAPE), symmetric mean absolute percentage error (sMAPE), and Shannon entropy (SE) clearly demonstrate the degree of asymmetric information and the deterioration of the financial system prior to, during, and after the financial crisis. We found that the asymmetric information prior to a crisis is larger than in other periods. This situation can be interpreted as an early warning signal before potential crises. This evidence seems to favor an asymmetric information view of financial crises.
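
    For completeness alongside the MAPE shown earlier, the two additional measures named here can be computed as follows (this is one common sMAPE variant among several; the entropy is taken over a histogram of the forecast errors, and the bin count is an assumption):

        import numpy as np

        def smape(obs, pred):
            """Symmetric MAPE, in percent."""
            denom = (np.abs(obs) + np.abs(pred)) / 2.0
            return 100.0 * np.mean(np.abs(pred - obs) / denom)

        def shannon_entropy(errors, bins=10):
            """Shannon entropy (bits) of the empirical forecast-error distribution."""
            counts, _ = np.histogram(errors, bins=bins)
            p = counts[counts > 0] / counts.sum()
            return -np.sum(p * np.log2(p))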

  19. Quantum states and their marginals. From multipartite entanglement to quantum error-correcting codes

    International Nuclear Information System (INIS)

    Huber, Felix Michael

    2017-01-01

    At the heart of the curious phenomenon of quantum entanglement lies the relation between the whole and its parts. In my thesis, I explore different aspects of this theme in the multipartite setting by drawing connections to concepts from statistics, graph theory, and quantum error-correcting codes: first, I address the case when joint quantum states are determined by their few-body parts and by Jaynes' maximum entropy principle. This can be seen as an extension of the notion of entanglement, with less complex states already being determined by their few-body marginals. Second, I address the conditions for certain highly entangled multipartite states to exist. In particular, I present the solution of a long-standing open problem concerning the existence of an absolutely maximally entangled state on seven qubits. This sheds light on the algebraic properties of pure quantum states, and on the conditions that constrain the sharing of entanglement amongst multiple particles. Third, I investigate Ulam's graph reconstruction problems in the quantum setting, and obtain legitimacy conditions of a set of states to be the reductions of a joint graph state. Lastly, I apply and extend the weight enumerator machinery from quantum error correction to investigate the existence of codes and highly entangled states in higher dimensions. This clarifies the physical interpretation of the weight enumerators and of the quantum MacWilliams identity, leading to novel applications in multipartite entanglement.

  20. Absolute negative mobility in the anomalous diffusion

    Science.gov (United States)

    Chen, Ruyin; Chen, Chongyang; Nie, Linru

    2017-12-01

    Transport of an inertial Brownian particle driven by the multiplicative Lévy noise was investigated here. Numerical results indicate that: (i) The Lévy noise is able to induce absolute negative mobility (ANM) in the system, while disappearing in the deterministic case; (ii) the ANM can occur in the region of superdiffusion while disappearing in the region of normal diffusion, and the appropriate stable index of the Lévy noise makes the particle move along the opposite direction of the bias force to the maximum degree; (iii) symmetry breaking of the Lévy noise also causes the ANM effect. In addition, the intrinsic physical mechanism and conditions for the ANM to occur are discussed in detail. Our results have the implication that the Lévy noise plays an important role in the occurrence of the ANM phenomenon.

  1. The importance of intra-hospital pharmacovigilance in the detection of medication errors

    Science.gov (United States)

    Villegas, Francisco; Figueroa-Montero, David; Barbero-Becerra, Varenka; Juárez-Hernández, Eva; Uribe, Misael; Chávez-Tapia, Norberto; González-Chon, Octavio

    2018-01-01

    Hospitalized patients are susceptible to medication errors, which rank between the fourth and the sixth leading causes of death. The department of intra-hospital pharmacovigilance intervenes in the entire medication process with the purpose of preventing, repairing, and assessing damage. To analyze medication errors reported by the Mexican Fundación Clínica Médica Sur pharmacovigilance system and their impact on patients. Prospective study carried out from 2012 to 2015, in which the medication prescriptions given to patients were recorded. Owing to heterogeneity, data were described as absolute numbers on a logarithmic scale. 292 932 prescriptions of 56 368 patients were analyzed, and medication errors were identified in 8.9%. The treating physician was responsible for 83.32% of medication errors, residents for 6.71% and interns for 0.09%. No error caused permanent damage or death. This is the pharmacovigilance study with the largest sample size reported. Copyright: © 2018 Secretaría de Salud.

  2. Absolute beam current monitoring in endstation c

    International Nuclear Information System (INIS)

    Bochna, C.

    1995-01-01

    The first few experiments at CEBAF require approximately 1% absolute measurements of beam currents expected to range from 10-25 μA. This represents errors of 100-250 nA. The initial complement of beam current monitors is of the non-intercepting type. The CEBAF accelerator division has provided a stripline monitor and a cavity monitor, and the authors have installed an Unser monitor (parametric current transformer or PCT). After calibrating the Unser monitor with a precision current reference, the authors plan to transfer this calibration using CW beam to the stripline and cavity monitors. It is important that this be done fairly rapidly because, while the gain of the Unser monitor is quite stable, the offset may drift on the order of 0.5 μA per hour. A summary of what the authors have learned about the linearity, zero drift, and gain drift of each type of current monitor will be presented

  3. Reducing Dose Uncertainty for Spot-Scanning Proton Beam Therapy of Moving Tumors by Optimizing the Spot Delivery Sequence

    International Nuclear Information System (INIS)

    Li, Heng; Zhu, X. Ronald; Zhang, Xiaodong

    2015-01-01

    Purpose: To develop and validate a novel delivery strategy for reducing the respiratory motion–induced dose uncertainty of spot-scanning proton therapy. Methods and Materials: The spot delivery sequence was optimized to reduce dose uncertainty. The effectiveness of the delivery sequence optimization was evaluated using measurements and patient simulation. One hundred ninety-one 2-dimensional measurements using different delivery sequences of a single-layer uniform pattern were obtained with a detector array on a 1-dimensional moving platform. Intensity modulated proton therapy plans were generated for 10 lung cancer patients, and dose uncertainties for different delivery sequences were evaluated by simulation. Results: Without delivery sequence optimization, the maximum absolute dose error can be up to 97.2% in a single measurement, whereas the optimized delivery sequence results in a maximum absolute dose error of ≤11.8%. In patient simulation, the optimized delivery sequence reduces the mean of fractional maximum absolute dose error compared with the regular delivery sequence by 3.3% to 10.6% (32.5-68.0% relative reduction) for different patients. Conclusions: Optimizing the delivery sequence can reduce dose uncertainty due to respiratory motion in spot-scanning proton therapy, assuming the 4-dimensional CT is a true representation of the patients' breathing patterns.

  4. Near threshold absolute TDCS: First results

    International Nuclear Information System (INIS)

    Roesel, T.; Schlemmer, P.; Roeder, J.; Frost, L.; Jung, K.; Ehrhardt, H.

    1992-01-01

    A new method, and first results for an impact energy 2 eV above the threshold of ionisation of helium, are presented for the measurement of absolute triple differential cross sections (TDCS) in a crossed beam experiment. The method is based upon measurement of beam/target overlap densities using known absolute total ionisation cross sections and of detection efficiencies using known absolute double differential cross sections (DDCS). For the present work the necessary absolute DDCS for 1 eV electrons had also to be measured. Results are presented for several different coplanar kinematics and are compared with recent DWBA calculations. (orig.)

  5. Absolute entropy of ions in methanol

    International Nuclear Information System (INIS)

    Abakshin, V.A.; Kobenin, V.A.; Krestov, G.A.

    1978-01-01

    By measuring the initial thermo-electromotive forces of cells with bromo-silver electrodes in tetraalkylammonium bromide solutions, the absolute entropy of the bromide ion in methanol is determined over the 298.15-318.15 K range. The value S°(Br-) = 9.8 entropy units is used for the calculation of the absolute partial molar entropies of alkali metal ions and halide ions. It has been found that the absolute entropy of Cs+ is 12.0 entropy units and that of I- is 14.0 entropy units. The obtained absolute ion entropies in methanol at 298.15 K agree with published data to within 1-2 entropy units.

  6. Novel Downhole Electromagnetic Flowmeter for Oil-Water Two-Phase Flow in High-Water-Cut Oil-Producing Wells.

    Science.gov (United States)

    Wang, Yanjun; Li, Haoyu; Liu, Xingbin; Zhang, Yuhui; Xie, Ronghua; Huang, Chunhui; Hu, Jinhai; Deng, Gang

    2016-10-14

    First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is described. Third, dynamic experiments with two EMFs in oil-water two-phase flow are carried out. The experimental errors are analyzed in detail. The experimental results show that the maximum absolute value of the full-scale errors is better than 5% when the total flowrate is 5-60 m³/d and the water-cut is higher than 60%, and better than 7% when the total flowrate is 2-60 m³/d and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and the possible reasons for the errors in the onsite experiments are analyzed. It is found that the EMF can provide an effective technology for measuring downhole oil-water two-phase flow.

  7. Novel Downhole Electromagnetic Flowmeter for Oil-Water Two-Phase Flow in High-Water-Cut Oil-Producing Wells

    Directory of Open Access Journals (Sweden)

    Yanjun Wang

    2016-10-01

    Full Text Available First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is described. Third, dynamic experiments with two EMFs in oil-water two-phase flow are carried out. The experimental errors are analyzed in detail. The experimental results show that the maximum absolute value of the full-scale errors is better than 5% when the total flowrate is 5–60 m3/d and the water-cut is higher than 60%, and better than 7% when the total flowrate is 2–60 m3/d and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and the possible reasons for the errors in the onsite experiments are analyzed. It is found that the EMF can provide an effective technology for measuring downhole oil-water two-phase flow.

  8. Effects of errors in velocity tilt on maximum longitudinal compression during neutralized drift compression of intense beam pulses: II. Analysis of experimental data of the Neutralized Drift Compression eXperiment-I (NDCX-I)

    International Nuclear Information System (INIS)

    Massidda, Scott; Kaganovich, Igor D.; Startsev, Edward A.; Davidson, Ronald C.; Lidia, Steven M.; Seidl, Peter; Friedman, Alex

    2012-01-01

    Neutralized drift compression offers an effective means for particle beam focusing and current amplification with applications to heavy ion fusion. In the Neutralized Drift Compression eXperiment-I (NDCX-I), a non-relativistic ion beam pulse is passed through an inductive bunching module that produces a longitudinal velocity modulation. Due to the applied velocity tilt, the beam pulse compresses during neutralized drift. The ion beam pulse can be compressed by a factor of more than 100; however, errors in the velocity modulation affect the compression ratio in complex ways. We have performed a study of how the longitudinal compression of a typical NDCX-I ion beam pulse is affected by the initial errors in the acquired velocity modulation. Without any voltage errors, an ideal compression is limited only by the initial energy spread of the ion beam, ΔE_b. In the presence of large voltage errors, δU ≫ ΔE_b, the maximum compression ratio is found to be inversely proportional to the geometric mean of the relative error in velocity modulation and the relative intrinsic energy spread of the beam ions. Although small parts of a beam pulse can achieve high local values of compression ratio, the acquired velocity errors cause these parts to compress at different times, limiting the overall compression of the ion beam pulse.
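
    In symbols, the large-error scaling stated above reads as follows, with C_max the maximum compression ratio, δU/U the relative velocity-modulation (voltage) error, and ΔE_b/E_b the relative intrinsic energy spread; this transcribes the abstract's statement rather than the paper's full derivation:

        \[
        C_{\max} \;\propto\; \left[\frac{\delta U}{U}\cdot\frac{\Delta E_b}{E_b}\right]^{-1/2},
        \qquad \delta U \gg \Delta E_b .
        \]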

  9. The systematic error of temperature noise correlation measurement method and self-calibration

    International Nuclear Information System (INIS)

    Tian Hong; Tong Yunxian

    1993-04-01

    The turbulent transport behavior of fluid noise and the nature of the noise effect on the velocity measurement system have been studied. The systematic error of the velocity measurement system is analyzed. A theoretical calibration method is proposed, which makes the time-correlation velocity measurement an absolute measurement method. The theoretical results are in good agreement with experiments.

  10. Optimization of sample absorbance for quantitative analysis in the presence of pathlength error in the IR and NIR regions

    International Nuclear Information System (INIS)

    Hirschfeld, T.; Honigs, D.; Hieftje, G.

    1985-01-01

    Optimal optical absorbance levels for quantitative analysis in the presence of photometric error have been described in the past. In newer instrumentation, such as FT-IR and NIRA spectrometers, the photometric error is no longer limiting. In these instruments, pathlength error due to cell or sampling irreproducibility is often a major concern. One can derive the optimal absorbance by taking both pathlength and photometric errors into account. This paper analyzes the cases of pathlength error >> photometric error (trivial) and various cases in which the pathlength error and the photometric error are of the same order: adjustable concentration (trivial until dilution errors are considered), constant relative pathlength error (trivial), and constant absolute pathlength error. The latter, in particular, is analyzed in detail to give the behavior of the error, the behavior of the optimal absorbance in its presence, and the total error levels attainable
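
    A toy numerical version of this kind of trade-off: under Beer's law A = εbc, an absolute transmittance error propagates into relative concentration error as s_T/(T·A·ln 10), which alone is minimized near A ≈ 0.43, while a pathlength error contributes an A-independent floor. The error levels below are assumptions chosen only to illustrate the quadrature combination:

        import numpy as np

        # Relative concentration error vs. absorbance under Beer's law (A = e*b*c),
        # combining a transmittance error s_T with a pathlength error s_b.
        A = np.linspace(0.05, 3.0, 500)
        T = 10.0**(-A)
        s_T, s_b, b = 0.005, 0.01, 1.0            # assumed error levels, cell length (cm)

        photometric = s_T / (T * A * np.log(10))  # from dA = -dT/(T ln 10), dc/c = dA/A
        pathlength = np.full_like(A, s_b / b)     # constant relative contribution
        total = np.hypot(photometric, pathlength) # independent errors add in quadrature

        print("optimal absorbance ~", A[np.argmin(total)])  # ~0.43 for these settings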

  11. Assessment of the uncertainty associated with systematic errors in digital instruments: an experimental study on offset errors

    International Nuclear Information System (INIS)

    Attivissimo, F; Giaquinto, N; Savino, M; Cataldo, A

    2012-01-01

    This paper deals with the assessment of the uncertainty due to systematic errors, particularly in A/D conversion-based instruments. The problem of defining and assessing systematic errors is briefly discussed, and the conceptual scheme of gauge repeatability and reproducibility is adopted. A practical example regarding the evaluation of the uncertainty caused by the systematic offset error is presented. The experimental results, obtained under various ambient conditions, show that modelling the variability of systematic errors is more problematic than suggested by the ISO 5725 norm. Additionally, the paper demonstrates the substantial difference between the type B uncertainty evaluation, obtained via the maximum entropy principle applied to manufacturer's specifications, and the type A (experimental) uncertainty evaluation, which reflects actually observable reality. Although it is reasonable to assume a uniform distribution of the offset error, experiments demonstrate that the distribution is not centred and that a correction must be applied. In such a context, this work motivates a more pragmatic and experimental approach to uncertainty, with respect to the directions of supplement 1 of GUM. (paper)

  12. Error-Free Software

    Science.gov (United States)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  13. Data Analysis & Statistical Methods for Command File Errors

    Science.gov (United States)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
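
    One standard maximum-likelihood formulation for count data of this kind is Poisson regression: error counts are modeled as Poisson with a rate proportional to the number of files radiated and log-linear in a covariate such as workload. The sketch below is generic, not the paper's actual model, and every number in it is a fabricated placeholder:

        import numpy as np
        from scipy.optimize import minimize

        # errors_i ~ Poisson(files_i * exp(b0 + b1 * workload_i)); placeholder data
        files = np.array([10, 25, 40, 15, 30, 50], dtype=float)
        workload = np.array([0.2, 0.5, 0.9, 0.3, 0.6, 1.0])
        errors = np.array([0, 1, 4, 0, 2, 6], dtype=float)

        def nll(b):
            """Poisson negative log-likelihood, up to a data-only constant."""
            lam = files * np.exp(b[0] + b[1] * workload)
            return np.sum(lam - errors * np.log(lam))

        fit = minimize(nll, x0=np.zeros(2), method="Nelder-Mead")
        print("MLE (b0, b1):", fit.x)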

  14. Pertinence analysis of intensity-modulated radiation therapy dosimetry error and parameters of beams

    International Nuclear Information System (INIS)

    Chi Zifeng; Liu Dan; Cao Yankun; Li Runxiao; Han Chun

    2012-01-01

    Objective: To study the relationships among parameter settings in intensity-modulated radiation therapy (IMRT) planning in order to explore the effect of the parameters on absolute dose verification. Methods: Forty-three esophageal carcinoma cases were optimized with Pinnacle 7.6c by an experienced physicist using appropriate optimization parameters and dose constraints, with a number of iterations to meet the clinical acceptance criteria. The plans were copied to a water phantom, and a 0.13 cc Farmer ion chamber and DOSE1 dosimeter were used to measure the absolute dose. The statistics of the beam parameters for the 43 cases were collected, and the relationships among them were analyzed. The statistics of the dosimetry error were collected, and a comparative analysis was made of the relation between the beam parameters and the ion chamber absolute dose verification results. Results: The beam parameters were correlated with each other. A clear association existed between dose accuracy and the parameter settings. When the number of beam segments in an IMRT plan was more than 80, the dose deviation would be greater than 3%; if the number of segments was less than 80, the dose deviation was smaller than 3%. When the number of segments was more than 100, part of the dose deviation of the plan was greater than 4%; on the contrary, if the number of segments was less than 100, the dose deviation was definitely smaller than 4%. Conclusions: In order to decrease the absolute dose verification error, fewer beam angles and fewer beam segments are needed, and the number of beam segments should be controlled to within 80. (authors)

  15. Error studies of Halbach Magnets

    Energy Technology Data Exchange (ETDEWEB)

    Brooks, S. [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2017-03-02

    These error studies were done on the Halbach magnets for the CBETA “First Girder” as described in note [CBETA001]. The CBETA magnets have since changed slightly to the lattice in [CBETA009]; however, this is not a large enough change to significantly affect the results here. The QF and BD arc FFAG magnets are considered. For each assumed set of error distributions and each ideal magnet, 100 random magnets with errors are generated. These are then run through an automated version of the iron wire multipole cancellation algorithm. The maximum wire diameter allowed is 0.063” as in the proof-of-principle magnets. Initially, 32 wires (2 per Halbach wedge) are tried; then, if this does not achieve 1e-4 level accuracy in the simulation, 48 and then 64 wires. By “1e-4 accuracy”, it is meant that the FOM defined by √(Σ_{n≥sextupole} (a_n² + b_n²)) is less than 1 unit, where the multipoles are taken at the maximum nominal beam radius, R = 23 mm for these magnets. The algorithm initially uses 20 convergence iterations. If 64 wires do not achieve 1e-4 accuracy, this is increased to 50 iterations to check for slowly converging cases. There are also classifications for magnets that do not achieve 1e-4 but do achieve 1e-3 (FOM ≤ 10 units). This is technically within the spec discussed in the Jan 30, 2017 review; however, there will be errors in practical shimming not dealt with in the simulation, so it is preferable to do much better than the spec in the simulation.

  16. The existence of negative absolute temperatures in Axelrod’s social influence model

    Science.gov (United States)

    Villegas-Febres, J. C.; Olivares-Rivas, W.

    2008-06-01

    We introduce the concept of temperature as an order parameter in the standard Axelrod social influence model. It is defined as the relation between suitably defined entropy and energy functions, T = (∂S/∂E)⁻¹. We show that at the critical point, where the order/disorder transition occurs, this absolute temperature changes sign. At this point, which corresponds to the homogeneous/heterogeneous culture transition, the entropy of the system shows a maximum. We discuss the relationship between the temperature and other properties of the model in terms of cultural traits.

  17. Design, performance, and calculated error of a Faraday cup for absolute beam current measurements of 600-MeV protons

    International Nuclear Information System (INIS)

    Beck, S.M.

    1975-04-01

    A mobile self-contained Faraday cup system for beam current measurements of nominal 600-MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated to be the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600-MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV

  18. Design, performance, and calculated error of a Faraday cup for absolute beam current measurements of 600-MeV protons

    International Nuclear Information System (INIS)

    Beck, S.M.

    1975-04-01

    A mobile self-contained Faraday cup system for beam current measurements of nominal 600 MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated to be the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600 MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV. (auth)

  19. Output Error Analysis of Planar 2-DOF Five-bar Mechanism

    Science.gov (United States)

    Niu, Kejia; Wang, Jun; Ting, Kwun-Lon; Tao, Fen; Cheng, Qunchao; Wang, Quan; Zhang, Kaiyang

    2018-03-01

    Aiming at the mechanism error caused by the joint clearances of the planar 2-DOF five-bar mechanism, the method of treating the clearance of a kinematic pair as an equivalent virtual link is applied. The structural error model of revolute joint clearance is established based on the N-bar rotation laws and the concept of joint rotation space. The influence of the joint clearances on the output error of the mechanism is studied, and the calculation method and basis for the maximum error are given. The error rotation space of the mechanism under the influence of joint clearance is obtained. The results show that this method can accurately calculate the error rotation space, which provides a new way to analyze the errors of planar parallel mechanisms caused by joint clearance.

  20. First Absolutely Calibrated Localized Measurements of Ion Velocity in the MST in Locked and Rotating Plasmas

    Science.gov (United States)

    Baltzer, M.; Craig, D.; den Hartog, D. J.; Nornberg, M. D.; Munaretto, S.

    2015-11-01

    An Ion Doppler Spectrometer (IDS) is used on MST for high time-resolution passive and active measurements of impurity ion emission. Absolutely calibrated measurements of flow are difficult because the spectrometer records data within 0.3 nm of the C+5 line of interest, and commercial calibration lamps do not produce lines in this narrow range. A novel optical system was designed to absolutely calibrate the IDS. The device uses a UV LED to produce a broad emission curve in the desired region. A Fabry-Perot etalon filters this light, cutting transmittance peaks into the pattern of the LED emission. An optical train of fused silica lenses focuses the light into the IDS at f/4. A holographic diffuser blurs the light cone to increase homogeneity. Using this light source, the absolute Doppler shift of ion emissions can be measured in MST plasmas. In combination with charge exchange recombination spectroscopy, localized ion velocities can now be measured. Previously, a time-averaged measurement along the chord bisecting the poloidal plane was used to calibrate the IDS; the quality of these central-chord calibrations can be characterized with our absolute calibration. Calibration errors may also be quantified and minimized by optimizing the curve-fitting process. Preliminary measurements of toroidal velocity in locked and rotating plasmas will be shown. This work has been supported by the US DOE.

  1. ACCESS, Absolute Color Calibration Experiment for Standard Stars: Integration, Test, and Ground Performance

    Science.gov (United States)

    Kaiser, Mary Elizabeth; Morris, Matthew; Aldoroty, Lauren; Kurucz, Robert; McCandliss, Stephan; Rauscher, Bernard; Kimble, Randy; Kruk, Jeffrey; Wright, Edward L.; Feldman, Paul; Riess, Adam; Gardner, Jonathon; Bohlin, Ralph; Deustua, Susana; Dixon, Van; Sahnow, David J.; Perlmutter, Saul

    2018-01-01

    Establishing improved spectrophotometric standards is important for a broad range of missions and is relevant to many astrophysical problems. Systematic errors associated with astrophysical data used to probe fundamental astrophysical questions, such as SNe Ia observations used to constrain dark energy theories, now exceed the statistical errors associated with merged databases of these measurements. ACCESS, the “Absolute Color Calibration Experiment for Standard Stars”, is a series of rocket-borne sub-orbital missions and ground-based experiments designed to enable improvements in the precision of the astrophysical flux scale through the transfer of absolute laboratory detector standards from the National Institute of Standards and Technology (NIST) to a network of stellar standards, with a calibration accuracy of 1% and a spectral resolving power of 500 across the 0.35-1.7 μm bandpass. To achieve this goal, ACCESS (1) observes HST/Calspec stars (2) above the atmosphere, to eliminate telluric spectral contaminants (e.g., OH), (3) using a single optical path and (HgCdTe) detector (4) that is calibrated to NIST laboratory standards and (5) monitored on the ground and in flight using an on-board calibration monitor. The observations are (6) cross-checked and extended through the generation of stellar atmosphere models for the targets. The ACCESS telescope and spectrograph have been designed, fabricated, and integrated. Subsystems have been tested. Performance results for the subsystems, operations testing, and the integrated spectrograph will be presented. NASA sounding rocket grant NNX17AC83G supports this work.

  2. Projective absoluteness for Sacks forcing

    NARCIS (Netherlands)

    Ikegami, D.

    2009-01-01

    We show that Σ¹₃-absoluteness for Sacks forcing is equivalent to the nonexistence of a Δ¹₂ Bernstein set. We also show that Sacks forcing is the weakest forcing notion among all of the preorders that add a new real with respect to Σ¹₃ forcing absoluteness.

  3. Maximum-Entropy Inference with a Programmable Annealer

    Science.gov (United States)

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-03-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
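
    The distinction between the two decoders can be made concrete by exhaustive enumeration on a toy Ising instance small enough to sum exactly; the annealer's role in the paper is to supply Boltzmann samples physically when enumeration is impossible. The instance parameters below are arbitrary:

        import numpy as np
        from itertools import product

        rng = np.random.default_rng(3)
        n = 10
        J = {(i, j): rng.normal() for i in range(n)
             for j in range(i + 1, n) if rng.random() < 0.3}
        h = rng.normal(size=n)

        def energy(s):
            """Ising cost function with couplings J and fields h."""
            return -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()) - h @ s

        states = np.array(list(product([-1, 1], repeat=n)))
        E = np.array([energy(s) for s in states])

        # Maximum-likelihood decoding: the ground-state configuration
        ml = states[E.argmin()]

        # Maximum-entropy decoding: sign of Boltzmann-averaged spins at beta = 1
        beta = 1.0
        w = np.exp(-beta * (E - E.min()))
        marginals = (w[:, None] * states).sum(0) / w.sum()
        maxent = np.sign(marginals)
        print(ml, maxent)

    Whenever a marginal disagrees in sign with the ground-state spin, the two decoders return different bits; the paper's observation is that the finite-temperature choice can yield the lower bit-error rate on noisy instances.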

  4. Logical error rate scaling of the toric code

    International Nuclear Information System (INIS)

    Watson, Fern H E; Barrett, Sean D

    2014-01-01

    To date, a great deal of attention has focused on characterizing the performance of quantum error correcting codes via their thresholds, the maximum correctable physical error rate for a given noise model and decoding strategy. Practical quantum computers will necessarily operate below these thresholds meaning that other performance indicators become important. In this work we consider the scaling of the logical error rate of the toric code and demonstrate how, in turn, this may be used to calculate a key performance indicator. We use a perfect matching decoding algorithm to find the scaling of the logical error rate and find two distinct operating regimes. The first regime admits a universal scaling analysis due to a mapping to a statistical physics model. The second regime characterizes the behaviour in the limit of small physical error rate and can be understood by counting the error configurations leading to the failure of the decoder. We present a conjecture for the ranges of validity of these two regimes and use them to quantify the overhead—the total number of physical qubits required to perform error correction. (paper)

  5. Measuring worst-case errors in a robot workcell

    International Nuclear Information System (INIS)

    Simon, R.W.; Brost, R.C.; Kholwadwala, D.K.

    1997-10-01

    Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors

  6. Novel isotopic N, N-Dimethyl Leucine (iDiLeu) Reagents Enable Absolute Quantification of Peptides and Proteins Using a Standard Curve Approach

    Science.gov (United States)

    Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun

    2015-01-01

    Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N, N-dimethyl leucine (iDiLeu). These labels contain an amine-reactive group (triazine ester), are cost-effective because of their synthetic simplicity, and increase throughput compared with previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods are validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard curve creation and analyte quantification in one run (<8% error).
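    A schematic of the one-run standard-curve idea (illustrative numbers, not the paper's data): four iDiLeu channels carry known amounts of the standard peptide, a line is fit to their peak areas, and the analyte channel's area is read back through the fit.

        import numpy as np

        amounts = np.array([10.0, 50.0, 100.0, 500.0])    # standards (fmol), illustrative
        areas = np.array([1.1e5, 5.3e5, 1.0e6, 5.2e6])    # measured peak areas, illustrative

        slope, intercept = np.polyfit(amounts, areas, 1)  # 4-point standard curve

        sample_area = 2.4e6                               # analyte channel
        sample_amount = (sample_area - intercept) / slope # invert the curve
        print(f"estimated amount: {sample_amount:.0f} fmol")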

  7. FEL small signal gain reduction due to phase error of undulator

    International Nuclear Information System (INIS)

    Jia Qika

    2002-01-01

    The effects of undulator phase errors on the Free Electron Laser small signal gain are analyzed and discussed. The gain reduction factor due to the phase error is given analytically for low-gain regimes; it shows that the degradation of the gain is similar to that of the spontaneous radiation, with a simple exponential relation to the square of the rms phase error, and that the linearly varying part of the phase error shifts the position of maximum gain. The result also shows that Madey's theorem still holds in the presence of phase error. The gain reduction factor due to the phase error for high-gain regimes can also be given in a simple way
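    The "simple exponential relation with the square of the rms phase error" quoted above can be sketched numerically; the sketch assumes a reduction factor of the form R = exp(-sigma_phi**2) with sigma_phi in radians, since the exact prefactor depends on the conventions used.

        import numpy as np

        def gain_reduction(sigma_phi_rad):
            # Assumed low-gain reduction factor R = exp(-sigma_phi^2).
            return np.exp(-np.asarray(sigma_phi_rad) ** 2)

        for deg in (5, 10, 20):                      # rms phase errors in degrees
            r = gain_reduction(np.radians(deg))
            print(f"rms phase error {deg:2d} deg -> gain retained {r:.3f}")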

  8. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    International Nuclear Information System (INIS)

    Johnson, Sarah J; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Lance, Andrew M; Symul, Thomas; Ralph, T C

    2017-01-01

    The maximum operational range of continuous variable quantum key distribution protocols has been shown to improve when high-efficiency forward error correction codes are employed. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater than unity efficiency codes implies that it is possible to achieve a positive secret key over an entanglement breaking channel—an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for key rate if we constrain the forward error correction codes to operate at low word error rates. (paper)
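    In schematic form, the secret key model the paper critiques multiplies the asymptotic rate by the fraction of frames that decode. A sketch with illustrative numbers, where beta is the reconciliation efficiency, i_ab the mutual information, chi_be the Holevo bound, and wer the word error rate; the paper shows this model breaks down once beta > 1 is admitted.

        def key_rate(beta, i_ab, chi_be, wer):
            # Schematic secret key rate per symbol: (1 - WER) * (beta*I_AB - chi_BE).
            return (1.0 - wer) * (beta * i_ab - chi_be)

        print(key_rate(beta=0.95, i_ab=0.50, chi_be=0.40, wer=0.10))  # illustrative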

  9. Reliability and Measurement Error of Tensiomyography to Assess Mechanical Muscle Function: A Systematic Review.

    Science.gov (United States)

    Martín-Rodríguez, Saúl; Loturco, Irineu; Hunter, Angus M; Rodríguez-Ruiz, David; Munguia-Izquierdo, Diego

    2017-12-01

    Martín-Rodríguez, S, Loturco, I, Hunter, AM, Rodríguez-Ruiz, D, and Munguia-Izquierdo, D. Reliability and measurement error of tensiomyography to assess mechanical muscle function: A systematic review. J Strength Cond Res 31(12): 3524-3536, 2017-Interest in studying mechanical skeletal muscle function through tensiomyography (TMG) has increased in recent years. This systematic review aimed to (a) report the reliability and measurement error of all TMG parameters (i.e., maximum radial displacement of the muscle belly [Dm], contraction time [Tc], delay time [Td], half-relaxation time [½ Tr], and sustained contraction time [Ts]) and (b) provide critical reflection on how to perform accurate and appropriate measurements for informing clinicians, exercise professionals, and researchers. A comprehensive literature search was performed of the Pubmed, Scopus, Science Direct, and Cochrane databases up to July 2017. Eight studies were included in this systematic review. Meta-analysis could not be performed because of the low quality of the evidence of some of the studies evaluated. Overall, the review of the included studies, involving 158 participants, revealed high relative reliability (intraclass correlation coefficient [ICC]) for Dm (0.91-0.99); moderate-to-high ICC for Ts (0.80-0.96), Tc (0.70-0.98), and ½ Tr (0.77-0.93); and low-to-high ICC for Td (0.60-0.98), independently of the evaluated muscles. In addition, absolute reliability (coefficient of variation [CV]) was low for all TMG parameters except ½ Tr (CV > 20%), for which the measurement error indexes were high. In conclusion, this study indicates that three of the TMG parameters (Dm, Td, and Tc) are highly reliable, whereas ½ Tr demonstrates insufficient reliability and thus should not be used in future studies.

  10. Impact and quantification of the sources of error in DNA pooling designs.

    Science.gov (United States)

    Jawaid, A; Sham, P

    2009-01-01

    The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.

  11. Influence of Ephemeris Error on GPS Single Point Positioning Accuracy

    Science.gov (United States)

    Lihua, Ma; Wang, Meng

    2013-09-01

    The Global Positioning System (GPS) user makes use of the navigation message transmitted from GPS satellites to determine its location. Because the receiver uses the satellite's location in position calculations, an ephemeris error, a difference between the expected and actual orbital position of a GPS satellite, reduces user accuracy. The extent of the influence is determined by the precision of the broadcast ephemeris uploaded from the control station. Simulation analysis with the Yuma almanac shows that the maximum positioning error occurs when the ephemeris error lies along the line-of-sight (LOS) direction. Meanwhile, the error depends on the geometry between the observer and the spatial constellation during the period of observation.
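    The LOS result above follows because only the component of the satellite position error along the receiver-satellite direction enters the pseudorange; a minimal sketch with made-up vectors:

        import numpy as np

        def range_error(sat_pos, sat_err, rcv_pos):
            # Pseudorange error from an ephemeris error vector: its projection
            # onto the line-of-sight unit vector.
            los = sat_pos - rcv_pos
            los = los / np.linalg.norm(los)
            return np.dot(sat_err, los)

        sat = np.array([15600e3, 7540e3, 20140e3])    # illustrative positions (m)
        rcv = np.array([6371e3, 0.0, 0.0])
        along = 5.0 * (sat - rcv) / np.linalg.norm(sat - rcv)   # 5 m along LOS
        cross = np.array([0.0, 5.0, 0.0])                       # 5 m, mostly cross-track
        print(range_error(sat, along, rcv), range_error(sat, cross, rcv))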

  12. Research on the Method of Noise Error Estimation of Atomic Clocks

    Science.gov (United States)

    Song, H. J.; Dong, S. W.; Li, W.; Zhang, J. H.; Jing, Y. J.

    2017-05-01

    Simulation methods for the different noise types of atomic clocks are given. The frequency flicker noise of an atomic clock is studied by using Markov process theory. The method for estimating the maximum interval error of frequency white noise is studied by using Wiener process theory. Based on the operation of 9 cesium atomic clocks in the time frequency reference laboratory of NTSC (National Time Service Center), the noise coefficients of the power-law spectrum model are estimated, and simulations are carried out according to the noise models. Finally, maximum interval error estimates for the frequency white noise generated by the 9 cesium atomic clocks have been obtained.
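    For white frequency noise the accumulated time error x(t) is a Wiener process, so the maximum interval error over a prediction span can be studied by simulating Brownian paths; the diffusion coefficient below is an assumed placeholder, not an NTSC clock parameter.

        import numpy as np

        rng = np.random.default_rng(1)
        q, tau = 1e-26, 10.0            # assumed diffusion coefficient; 10 s steps
        steps, paths = 8640, 500        # one day per path, 500 Monte Carlo paths

        # x(t) is a Wiener process: independent increments ~ N(0, q*tau).
        increments = rng.normal(0.0, np.sqrt(q * tau), size=(paths, steps))
        x = np.cumsum(increments, axis=1)             # time-error paths (s)
        mie = np.abs(x).max(axis=1)                   # maximum interval error per path
        print(f"95th percentile MIE over one day: {np.percentile(mie, 95):.2e} s")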

  13. Error and objectivity: cognitive illusions and qualitative research.

    Science.gov (United States)

    Paley, John

    2005-07-01

    Psychological research has shown that cognitive illusions, of which visual illusions are just a special case, are systematic and pervasive, raising epistemological questions about how error in all forms of research can be identified and eliminated. The quantitative sciences make use of statistical techniques for this purpose, but it is not clear what the qualitative equivalent is, particularly in view of widespread scepticism about validity and objectivity. I argue that, in the light of cognitive psychology, the 'error question' cannot be dismissed as a positivist obsession, and that the concepts of truth and objectivity are unavoidable. However, they constitute only a 'minimal realism', which does not necessarily bring a commitment to 'absolute' truth, certainty, correspondence, causation, reductionism, or universal laws in its wake. The assumption that it does reflects a misreading of positivism and, ironically, precipitates a 'crisis of legitimation and representation', as described by constructivist authors.

  14. Optical losses due to tracking error estimation for a low concentrating solar collector

    International Nuclear Information System (INIS)

    Sallaberry, Fabienne; García de Jalón, Alberto; Torres, José-Luis; Pujol-Nadal, Ramón

    2015-01-01

    Highlights: • A solar thermal collector with low concentration and one-axis tracking was tested. • A quasi-dynamic testing procedure for the IAM was defined for the tracking collector. • The adequacy of the concentrator optics with regard to the tracking was checked. • The maximum and long-term optical losses due to tracking error were calculated. - Abstract: The determination of the accuracy of a solar tracker used in domestic hot water solar collectors is not yet standardized. However, when using optical concentration devices, it is important to use a solar tracker with adequate precision with regard to the specific optical concentration factor. Otherwise, the concentrator would sustain high optical losses due to inadequate focusing of the solar radiation onto its receiver, despite having good optical quality. This study focuses on the estimation of the long-term optical losses due to the tracking error of a low-temperature collector using low-concentration optics. For this purpose, a testing procedure for the incidence angle modifier on the tracking plane is proposed to determine the acceptance angle of its concentrator, even with different longitudinal incidence angles along the focal line plane. Then, the impact of the maximum tracking error angle upon the optical efficiency has been determined. Finally, the calculation of the long-term optical error due to the tracking errors, using the design angular tracking error declared by the manufacturer, is carried out. The maximum tracking error calculated for this collector implies an optical loss of about 8.5%, which is high, but the average long-term optical loss calculated for one year was about 1%, which is reasonable for such collectors used for domestic hot water

  15. The approach of Bayesian model indicates media awareness of medical errors

    Science.gov (United States)

    Ravichandran, K.; Arulchelvan, S.

    2016-06-01

    This study examines the factors behind the increase in medical malpractice in the present-day Indian subcontinent and the impact of television media awareness on it. Increased media reporting of medical malpractice and errors leads to hospitals taking corrective action and improving the quality of the medical services they provide. The model of Cultivation Theory can be used to measure the influence of media in creating awareness of medical errors. Patients' perceptions of various errors committed by the medical industry, gathered from different parts of India, were taken up for this study. A Bayesian method was used for the data analysis; it gives absolute values that indicate satisfaction of the recommended values. The study also explores the impact of a family doctor maintaining a family's medical records online in reducing medical malpractice, which underscores the importance of service quality in the medical industry achieved through ICT.

  16. Minimum Tracking Error Volatility

    OpenAIRE

    Luca RICCETTI

    2010-01-01

    Investors assign part of their funds to asset managers that are given the task of beating a benchmark. The risk management department usually imposes a maximum value of the tracking error volatility (TEV) in order to keep the risk of the portfolio near to that of the selected benchmark. However, risk management does not establish a rule on TEV which enables us to understand whether the asset manager is really active or not and, in practice, asset managers sometimes follow passively the corres...

  17. Variance computations for functional of absolute risk estimates.

    Science.gov (United States)

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  18. Absolute determination of the deuterium content of heavy water, measurement of absolute density

    International Nuclear Information System (INIS)

    Ceccaldi, M.; Riedinger, M.; Menache, M.

    1975-01-01

    The absolute density of two heavy water samples rich in deuterium (with a grade higher than 99.9%) was determined with the hydrostatic method. The exact isotopic composition of this water (hydrogen and oxygen isotopes) was very carefully studied. A theoretical estimate enabled us to get the absolute density value of isotopically pure D₂¹⁶O. This value was found to be 1104.750 kg m⁻³ at t₆₈ = 22.3 °C and under the pressure of one atmosphere. (orig.)

  19. Absolute Position Sensing Based on a Robust Differential Capacitive Sensor with a Grounded Shield Window

    Directory of Open Access Journals (Sweden)

    Yang Bai

    2016-05-01

    Full Text Available A simple differential capacitive sensor is presented in this paper to measure the absolute positions of length measuring systems. By utilizing a shield window inside the differential capacitor, the measurement range and linearity range of the sensor can reach several millimeters. What is more interesting is that this differential capacitive sensor is only sensitive to movement along one translational degree of freedom (DOF), and is immune to vibration along the other two translational DOFs. In the experiment, we used a novel circuit based on an AC capacitance bridge to directly measure the differential capacitance value. The experimental result shows that this differential capacitive sensor has a sensitivity of 2 × 10−4 pF/μm with 0.08 μm resolution. The measurement range of this differential capacitive sensor is 6 mm, and the linearity error is less than 0.01% over the whole absolute position measurement range.

  20. The absolute environmental performance of buildings

    DEFF Research Database (Denmark)

    Brejnrod, Kathrine Nykjær; Kalbar, Pradip; Petersen, Steffen

    2017-01-01

    Our paper presents a novel approach for absolute sustainability assessment of a building's environmental performance. It is demonstrated how the absolute sustainable share of the earth carrying capacity of a specific building type can be estimated using carrying capacity based normalization factors. A building is considered absolute sustainable if its annual environmental burden is less than its share of the earth environmental carrying capacity. Two case buildings – a standard house and an upcycled single-family house located in Denmark – were assessed according to this approach and both were found to exceed the target values of three (almost four) of the eleven impact categories included in the study. The worst-case excess was for the case building, representing prevalent Danish building practices, which utilized 1563% of the Climate Change carrying capacity. Four paths to reach absolute...

  1. Absolute Summ

    Science.gov (United States)

    Phillips, Alfred, Jr.

    Summ means the entirety of the multiverse. It seems clear, from the inflation theories of A. Guth and others, that the creation of many universes is plausible. We argue that Absolute cosmological ideas, not unlike those of I. Newton, may be consistent with dynamic multiverse creations. As suggested in W. Heisenberg's uncertainty principle, and with the Anthropic Principle defended by S. Hawking, et al., human consciousness, buttressed by findings of neuroscience, may have to be considered in our models. Predictability, as A. Einstein realized with Invariants and General Relativity, may be required for new ideas to be part of physics. We present here a two postulate model geared to an Absolute Summ. The seedbed of this work is part of Akhnaton's philosophy (see S. Freud, Moses and Monotheism). Most important, however, is that the structure of human consciousness, manifest in Kenya's Rift Valley 200,000 years ago as Homo sapiens, who were the culmination of the six million year co-creation process of Hominins and Nature in Africa, allows us to do the physics that we do.

  2. Adaptive color halftoning for minimum perceived error using the blue noise mask

    Science.gov (United States)

    Yu, Qing; Parker, Kevin J.

    1997-04-01

    Color halftoning using a conventional screen requires careful selection of screen angles to avoid Moire patterns. An obvious advantage of halftoning using a blue noise mask (BNM) is that no conventional screen angle or Moire patterns are produced. However, a simple strategy of employing the same BNM on all color planes is unacceptable in cases where a small registration error can cause objectionable color shifts. In a previous paper by Yao and Parker, strategies were presented for shifting or inverting the BNM as well as using mutually exclusive BNMs for different color planes. In this paper, the above schemes are studied in CIE-LAB color space in terms of root mean square error and variance for the luminance and chrominance channels, respectively. We demonstrate that the dot-on-dot scheme results in minimum chrominance error but maximum luminance error, the 4-mask scheme results in minimum luminance error but maximum chrominance error, and the shift scheme falls in between. Based on this study, we propose a new adaptive color halftoning algorithm that takes colorimetric color reproduction into account by applying 2 mutually exclusive BNMs on two different color planes and applying an adaptive scheme on the other planes to reduce color error. We show that by having one adaptive color channel, we obtain increased flexibility to manipulate the output so as to reduce colorimetric error while permitting customization to specific printing hardware.
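    The channel-wise comparison described above reduces to computing RMS errors separately on the luminance (L*) and chrominance (a*, b*) planes of the CIELAB difference image; a sketch assuming the reference and halftoned images are already available as Lab arrays:

        import numpy as np

        def lab_channel_errors(lab_ref, lab_test):
            # RMS luminance and chrominance errors between two CIELAB images
            # of shape (H, W, 3) with channels (L*, a*, b*).
            diff = lab_test - lab_ref
            lum_rmse = np.sqrt(np.mean(diff[..., 0] ** 2))
            chrom_rmse = np.sqrt(np.mean(diff[..., 1] ** 2 + diff[..., 2] ** 2))
            return lum_rmse, chrom_rmse

        ref = np.random.rand(64, 64, 3) * 100                  # placeholder Lab data
        test = ref + np.random.normal(0, 1.5, ref.shape)
        print(lab_channel_errors(ref, test))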

  3. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    Energy Technology Data Exchange (ETDEWEB)

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Bianchini, Federico; Reichardt, Christian L. [School of Physics, University of Melbourne, 313 David Caro building, Swanston St and Tin Alley, Parkville VIC 3010 (Australia); Baxter, Eric J. [Department of Physics and Astronomy, University of Pennsylvania, 209 S. 33rd Street, Philadelphia, PA 19104 (United States); Bleem, Lindsey E. [Argonne National Laboratory, High-Energy Physics Division, 9700 S. Cass Avenue, Argonne, IL 60439 (United States); Crawford, Thomas M. [Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Holder, Gilbert P. [Department of Astronomy and Department of Physics, University of Illinois, 1002 West Green St., Urbana, IL 61801 (United States); Manzotti, Alessandro, E-mail: srinivasan.raghunathan@unimelb.edu.au, E-mail: s.patil2@student.unimelb.edu.au, E-mail: ebax@sas.upenn.edu, E-mail: federico.bianchini@unimelb.edu.au, E-mail: bleeml@uchicago.edu, E-mail: tcrawfor@kicp.uchicago.edu, E-mail: gholder@illinois.edu, E-mail: manzotti@uchicago.edu, E-mail: christian.reichardt@unimelb.edu.au [Department of Astronomy and Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States)

    2017-08-01

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.

  4. Absolute flux scale for radioastronomy

    International Nuclear Information System (INIS)

    Ivanov, V.P.; Stankevich, K.S.

    1986-01-01

    The authors propose and provide support for a new absolute flux scale for radio astronomy, which is not encumbered with the inadequacies of the previous scales. In constructing it the method of relative spectra was used (a powerful tool for choosing reference spectra). A review is given of previous flux scales. The authors compare the AIS scale with the scale they propose. Both scales are based on absolute measurements by the ''artificial moon'' method, and they are practically coincident in the range from 0.96 to 6 GHz. At frequencies above 6 GHz and below 0.96 GHz, the AIS scale is overestimated because of incorrect extrapolation of the spectra of the primary and secondary standards. The major results which have emerged from this review of absolute scales in radio astronomy are summarized

  5. A global algorithm for estimating Absolute Salinity

    Science.gov (United States)

    McDougall, T. J.; Jackett, D. R.; Millero, F. J.; Pawlowicz, R.; Barker, P. M.

    2012-12-01

    The International Thermodynamic Equation of Seawater - 2010 has defined the thermodynamic properties of seawater in terms of a new salinity variable, Absolute Salinity, which takes into account the spatial variation of the composition of seawater. Absolute Salinity more accurately reflects the effects of the dissolved material in seawater on the thermodynamic properties (particularly density) than does Practical Salinity. When a seawater sample has standard composition (i.e. the ratios of the constituents of sea salt are the same as those of surface water of the North Atlantic), Practical Salinity can be used to accurately evaluate the thermodynamic properties of seawater. When seawater is not of standard composition, Practical Salinity alone is not sufficient and the Absolute Salinity Anomaly needs to be estimated; this anomaly is as large as 0.025 g kg-1 in the northernmost North Pacific. Here we provide an algorithm for estimating Absolute Salinity Anomaly for any location (x, y, p) in the world ocean. To develop this algorithm, we used the Absolute Salinity Anomaly that is found by comparing the density calculated from Practical Salinity to the density measured in the laboratory. These estimates of Absolute Salinity Anomaly however are limited to the number of available observations (namely 811). In order to provide a practical method that can be used at any location in the world ocean, we take advantage of approximate relationships between Absolute Salinity Anomaly and silicate concentrations (which are available globally).
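    In outline, the algorithm corrects Reference Salinity (Practical Salinity rescaled to g/kg) with an anomaly inferred from silicate. The sketch below uses a purely illustrative proportionality constant; the published algorithm fits regionally varying coefficients and is distributed in the GSW Oceanographic Toolbox.

        def absolute_salinity(sp, silicate_umol_kg, coeff=1.0e-4):
            # Schematic Absolute Salinity estimate (g/kg).
            # sp:    Practical Salinity (dimensionless)
            # coeff: hypothetical anomaly per unit silicate (g/kg per umol/kg)
            reference_salinity = sp * (35.16504 / 35.0)   # TEOS-10 rescaling, g/kg
            delta_sa = coeff * silicate_umol_kg           # illustrative only
            return reference_salinity + delta_sa

        print(absolute_salinity(sp=34.5, silicate_umol_kg=150.0))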

  6. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works well.

  7. Relative and Absolute Reliability of Timed Up and Go Test in Community Dwelling Older Adult and Healthy Young People

    Directory of Open Access Journals (Sweden)

    Farhad Azadi

    2014-01-01

    Full Text Available Objectives: Relative and absolute reliability are psychometric properties of a test on which many clinical decisions are based. In many cases, only relative reliability is taken into consideration, while absolute reliability is also very important. Methods & Materials: Eleven community-dwelling older adults aged 65 years and older (69.64±3.58) and 20 healthy young people in the age range 20 to 35 years (28.80±4.15) were evaluated twice, with an interval of 2 to 5 days, using three versions of the Timed Up and Go test. Results: Generally, when the non-homogeneous study population was stratified to increase the Intra-class Correlation Coefficient (ICC), this coefficient was greater in the elderly than in the young and was reduced with a secondary task. In this study, absolute reliability indices computed using different data sources and equations led to more or less similar results. In general, in test-retest situations, scores in the elderly must change more than in the young for the change to be interpreted as real rather than random. The random error contribution is slightly greater in the elderly than in the young and is increased with a secondary task. It seems that heterogeneity leads to moderation of the absolute reliability indices. Conclusion: In relative reliability studies, researchers and clinicians should pay attention to factors such as the homogeneity of the population. In addition, absolute reliability, alongside relative reliability, is needed and necessary for clinical decision making.
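    One common way to pair the relative and absolute reliability discussed above (a generic convention, not necessarily this study's exact computation) is an ICC(2,1) from a two-way ANOVA together with the standard error of measurement SEM = SD * sqrt(1 - ICC):

        import numpy as np

        def icc_2_1(x):
            # ICC(2,1) for an (n subjects x k sessions) score matrix.
            n, k = x.shape
            grand = x.mean()
            ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)
            ms_cols = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)
            resid = (x - x.mean(axis=1, keepdims=True)
                     - x.mean(axis=0, keepdims=True) + grand)
            ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
            return (ms_rows - ms_err) / (
                ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

        # Illustrative test-retest times (s): rows are participants, columns sessions.
        x = np.array([[9.8, 10.1], [12.3, 12.9], [8.7, 8.5], [11.0, 11.4], [10.2, 9.9]])
        icc = icc_2_1(x)
        sem = x.std(ddof=1) * np.sqrt(1 - icc)    # standard error of measurement
        print(f"ICC = {icc:.2f}, SEM = {sem:.2f} s")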

  8. EIT Imaging of admittivities with a D-bar method and spatial prior: experimental results for absolute and difference imaging.

    Science.gov (United States)

    Hamilton, S J

    2017-05-22

    Electrical impedance tomography (EIT) is an emerging imaging modality that uses harmless electrical measurements taken on electrodes at a body's surface to recover information about the internal electrical conductivity and/or permittivity. The image reconstruction task of EIT is a highly nonlinear inverse problem that is sensitive to noise and modeling errors, making image reconstruction challenging. D-bar methods solve the nonlinear problem directly, bypassing the need for detailed and time-intensive forward models, to provide absolute (static) as well as time-difference EIT images. Coupling the D-bar methodology with the inclusion of high-confidence a priori data results in a noise-robust regularized image reconstruction method. In this work, the a priori D-bar method for complex admittivities is demonstrated to be effective on experimental tank data for absolute imaging for the first time. Additionally, the method is adjusted for, and tested on, time-difference imaging scenarios. The ability of the method to be used for conductivity, permittivity, and absolute as well as time-difference imaging provides the user with great flexibility without a high computational cost.

  9. An Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Jesper; Larsson, Stig; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2015-01-01

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading-order term consisting of an error density that is computable from symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading-error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations. The performance is illustrated by numerical tests.
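    For a separable Hamiltonian H(p, q) = p**2/2 + V(q), the symplectic Euler scheme referred to above updates the momentum first and then the position with the new momentum; a minimal sketch on the harmonic oscillator (illustrative, not the paper's control problem):

        import numpy as np

        def symplectic_euler(grad_v, p, q, h, steps):
            # Symplectic Euler for H(p,q) = p^2/2 + V(q):
            # p_{n+1} = p_n - h*V'(q_n);  q_{n+1} = q_n + h*p_{n+1}.
            traj = [(p, q)]
            for _ in range(steps):
                p = p - h * grad_v(q)
                q = q + h * p
                traj.append((p, q))
            return np.array(traj)

        traj = symplectic_euler(grad_v=lambda q: q, p=0.0, q=1.0, h=0.01, steps=1000)
        energy = 0.5 * traj[:, 0] ** 2 + 0.5 * traj[:, 1] ** 2
        # Energy oscillates but does not drift, a hallmark of symplectic schemes.
        print(f"energy spread over run: {energy.max() - energy.min():.1e}")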

  10. Dimensional Error in Rapid Prototyping with Open Source Software and Low-cost 3D-printer.

    Science.gov (United States)

    Rendón-Medina, Marco A; Andrade-Delgado, Laura; Telich-Tarriba, Jose E; Fuente-Del-Campo, Antonio; Altamirano-Arcos, Carlos A

    2018-01-01

    Rapid prototyping models (RPMs) have been extensively used in craniofacial and maxillofacial surgery, especially in areas such as orthognathic surgery, posttraumatic or oncological reconstructions, and implantology. Economic limitations are higher in developing countries such as Mexico, where resources dedicated to health care are limited, therefore restricting the use of RPMs to a few selected centers. This article aims to determine the dimensional error of a low-cost fused deposition modeling 3D printer (Tronxy P802MA, Shenzhen, Tronxy Technology Co), with open source software. An ordinary dry human mandible was scanned with a computed tomography device. The data were processed with open software to build a rapid prototype with a fused deposition machine. Linear measurements were performed to find the mean absolute and relative differences. The mean absolute and relative differences were 0.65 mm and 1.96%, respectively (P = 0.96). Low-cost FDM machines and open source software are excellent options to manufacture RPMs, with the benefit of low cost and a relative error similar to other, more expensive technologies.

  11. Dimensional Error in Rapid Prototyping with Open Source Software and Low-cost 3D-printer

    Directory of Open Access Journals (Sweden)

    Marco A. Rendón-Medina

    2018-01-01

    Full Text Available Summary: Rapid prototyping models (RPMs) had been extensively used in craniofacial and maxillofacial surgery, especially in areas such as orthognathic surgery, posttraumatic or oncological reconstructions, and implantology. Economic limitations are higher in developing countries such as Mexico, where resources dedicated to health care are limited, therefore limiting the use of RPM to few selected centers. This article aims to determine the dimensional error of a low-cost fused deposition modeling 3D printer (Tronxy P802MA, Shenzhen, Tronxy Technology Co), with Open source software. An ordinary dry human mandible was scanned with a computed tomography device. The data were processed with open software to build a rapid prototype with a fused deposition machine. Linear measurements were performed to find the mean absolute and relative difference. The mean absolute and relative difference was 0.65 mm and 1.96%, respectively (P = 0.96). Low-cost FDM machines and Open Source Software are excellent options to manufacture RPM, with the benefit of low cost and a relative error similar to other more expensive technologies.

  12. A global algorithm for estimating Absolute Salinity

    Directory of Open Access Journals (Sweden)

    T. J. McDougall

    2012-12-01

    Full Text Available The International Thermodynamic Equation of Seawater – 2010 has defined the thermodynamic properties of seawater in terms of a new salinity variable, Absolute Salinity, which takes into account the spatial variation of the composition of seawater. Absolute Salinity more accurately reflects the effects of the dissolved material in seawater on the thermodynamic properties (particularly density) than does Practical Salinity.

    When a seawater sample has standard composition (i.e. the ratios of the constituents of sea salt are the same as those of surface water of the North Atlantic), Practical Salinity can be used to accurately evaluate the thermodynamic properties of seawater. When seawater is not of standard composition, Practical Salinity alone is not sufficient and the Absolute Salinity Anomaly needs to be estimated; this anomaly is as large as 0.025 g kg−1 in the northernmost North Pacific. Here we provide an algorithm for estimating Absolute Salinity Anomaly for any location (x, y, p) in the world ocean.

    To develop this algorithm, we used the Absolute Salinity Anomaly that is found by comparing the density calculated from Practical Salinity to the density measured in the laboratory. These estimates of Absolute Salinity Anomaly however are limited to the number of available observations (namely 811). In order to provide a practical method that can be used at any location in the world ocean, we take advantage of approximate relationships between Absolute Salinity Anomaly and silicate concentrations (which are available globally).

  13. The systematic and random errors determination using realtime 3D surface tracking system in breast cancer

    International Nuclear Information System (INIS)

    Kanphet, J; Suriyapee, S; Sanghangthum, T; Kumkhwao, J; Wisetrintong, M; Dumrongkijudom, N

    2016-01-01

    The purpose of this study was to determine the patient setup uncertainties in deep inspiration breath-hold (DIBH) radiation therapy for left breast cancer patients using a real-time 3D surface tracking system. Six breast cancer patients treated by 6 MV photon beams from a TrueBeam linear accelerator were selected. The patient setup errors and motion during treatment were observed and calculated for interfraction and intrafraction motions. The systematic and random errors were calculated in the vertical, longitudinal and lateral directions. From 180 images tracked before and during treatment, the maximum systematic errors of interfraction and intrafraction motions were 0.56 mm and 0.23 mm, and the maximum random errors of interfraction and intrafraction motions were 1.18 mm and 0.53 mm, respectively. The interfraction motion was more pronounced than the intrafraction motion, while the systematic error had less impact than the random error. In conclusion, the intrafraction motion error from patient setup uncertainty is about half of the interfraction motion error, and it has less impact owing to the stability of organ movement achieved with DIBH. The systematic reproducibility is also half of the random error because the high efficiency of a modern linac can reduce the systematic uncertainty effectively, while the random errors are uncontrollable. (paper)
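    A sketch of the usual convention for splitting such data into systematic and random components (a generic recipe, not necessarily the authors' exact pipeline): the systematic error is the spread of per-patient mean displacements and the random error is the root mean square of the per-patient standard deviations.

        import numpy as np

        def setup_errors(displacements):
            # displacements: one 1D array of session displacements (mm) per patient.
            means = np.array([d.mean() for d in displacements])
            sds = np.array([d.std(ddof=1) for d in displacements])
            systematic = means.std(ddof=1)            # spread of per-patient means
            random_err = np.sqrt(np.mean(sds ** 2))   # RMS of per-patient SDs
            return systematic, random_err

        rng = np.random.default_rng(2)                # illustrative data, six patients
        patients = [rng.normal(rng.normal(0, 0.5), 1.0, size=30) for _ in range(6)]
        print(setup_errors(patients))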

  14. Analysis of the "naming game" with learning errors in communications.

    Science.gov (United States)

    Lou, Yang; Chen, Guanrong

    2015-07-16

    The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctly increase the requirement for memory of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of learning errors beyond which convergence is impaired. The new findings may help to better understand the role of learning errors in the naming game as well as in human language development from a network science perspective.
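    A minimal naming-game step with a uniform learning-error rate, in the spirit of the model above; the fully mixed topology, corruption rule, and parameters are simplified placeholders rather than the NGLE specification.

        import random

        def naming_game(n_agents=30, error_rate=0.01, max_rounds=100000, seed=0):
            # Fully mixed naming game; a hearer mis-learns the spoken word
            # (receives a fresh corrupted variant) with probability error_rate.
            rng = random.Random(seed)
            inventories = [set() for _ in range(n_agents)]
            new_word = iter(range(10 ** 9))
            for t in range(max_rounds):
                s, h = rng.sample(range(n_agents), 2)        # speaker, hearer
                if not inventories[s]:
                    inventories[s].add(next(new_word))        # invent a name
                word = rng.choice(sorted(inventories[s]))
                if rng.random() < error_rate:                 # learning error
                    word = next(new_word)
                if word in inventories[h]:                    # success: collapse
                    inventories[s] = {word}
                    inventories[h] = {word}
                else:
                    inventories[h].add(word)                  # failure: memorize
                if all(inv == inventories[0] and len(inv) == 1 for inv in inventories):
                    return t                                  # consensus round
            return None                                       # convergence impaired

        print(naming_game())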

  15. Modeling the Error of the Medtronic Paradigm Veo Enlite Glucose Sensor.

    Science.gov (United States)

    Biagi, Lyvia; Ramkissoon, Charrise M; Facchinetti, Andrea; Leal, Yenny; Vehi, Josep

    2017-06-12

    Continuous glucose monitors (CGMs) are prone to inaccuracy due to time lags, sensor drift, calibration errors, and measurement noise. The aim of this study is to derive the model of the error of the second generation Medtronic Paradigm Veo Enlite (ENL) sensor and compare it with the Dexcom SEVEN PLUS (7P), G4 PLATINUM (G4P), and advanced G4 for Artificial Pancreas studies (G4AP) systems. An enhanced version of a previously employed methodology was utilized to dissect the sensor error into several components. The dataset used included 37 inpatient sessions in 10 subjects with type 1 diabetes (T1D), in which CGMs were worn in parallel and blood glucose (BG) samples were analyzed every 15 ± 5 min. Calibration error and sensor drift of the ENL sensor were best described by a linear relationship related to the gain and offset. The mean time lag estimated by the model is 9.4 ± 6.5 min. The overall average mean absolute relative difference (MARD) of the ENL sensor was 11.68 ± 5.07%. Calibration error had the highest contribution to total error in the ENL sensor. This was also reported for the 7P, G4P, and G4AP. The model of the ENL sensor error will be useful to test the in silico performance of CGM-based applications, i.e., the artificial pancreas, employing this kind of sensor.
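    MARD, the headline accuracy figure quoted above, is simply the mean of |CGM - reference| / reference; a minimal sketch with made-up paired samples:

        import numpy as np

        def mard(cgm, ref):
            # Mean absolute relative difference, in percent.
            cgm, ref = np.asarray(cgm, float), np.asarray(ref, float)
            return 100.0 * np.mean(np.abs(cgm - ref) / ref)

        ref = [90, 120, 180, 250, 70]      # blood glucose samples (mg/dL), illustrative
        cgm = [99, 110, 195, 230, 78]      # paired sensor readings, illustrative
        print(f"MARD = {mard(cgm, ref):.1f}%")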

  16. NDE errors and their propagation in sizing and growth estimates

    International Nuclear Information System (INIS)

    Horn, D.; Obrutsky, L.; Lakhan, R.

    2009-01-01

    The accuracy attributed to eddy current flaw sizing determines the amount of conservatism required in setting tube-plugging limits. Several sources of error contribute to the uncertainty of the measurements, and the way in which these errors propagate and interact affects the overall accuracy of the flaw size and flaw growth estimates. An example of this calculation is the determination of an upper limit on flaw growth over one operating period, based on the difference between two measurements. Signal-to-signal comparison involves a variety of human, instrumental, and environmental error sources; of these, some propagate additively and some multiplicatively. In a difference calculation, specific errors in the first measurement may be correlated with the corresponding errors in the second; others may be independent. Each of the error sources needs to be identified and quantified individually, as does its distribution in the field data. A mathematical framework for the propagation of the errors can then be used to assess the sensitivity of the overall uncertainty to each individual error component. This paper quantifies error sources affecting eddy current sizing estimates and presents analytical expressions developed for their effect on depth estimates. A simple case study is used to model the analysis process. For each error source, the distribution of the field data was assessed and propagated through the analytical expressions. While the sizing error obtained was consistent with earlier estimates and with deviations from ultrasonic depth measurements, the error on growth was calculated to be significantly smaller than that obtained assuming uncorrelated errors. An interesting result of the sensitivity analysis in the present case study is the quantification of the error reduction available from post-measurement compensation of magnetite effects. With the absolute and difference error equations, variance-covariance matrices, and partial derivatives developed in
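    The point above about correlated errors partially cancelling in a difference is the covariance term in the variance of a difference; a sketch with illustrative numbers:

        import numpy as np

        def growth_variance(var1, var2, correlation):
            # Variance of (depth2 - depth1) when the two sizing errors share a
            # correlated component: var1 + var2 - 2*rho*sqrt(var1*var2).
            return var1 + var2 - 2.0 * correlation * np.sqrt(var1 * var2)

        v = 0.04                                  # illustrative sizing variance (mm^2)
        for rho in (0.0, 0.5, 0.9):
            sd = np.sqrt(growth_variance(v, v, rho))
            print(f"rho = {rho}: growth SD = {sd:.3f} mm")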

  17. Error analysis of the phase-shifting technique when applied to shadow moire

    International Nuclear Information System (INIS)

    Han, Changwoon; Han Bongtae

    2006-01-01

    An exact solution for the intensity distribution of shadow moire fringes produced by a broad-spectrum light source is presented. A mathematical study quantifies errors in the fractional fringe orders determined by the phase-shifting technique, and its validity is corroborated experimentally. The errors vary cyclically as the distance between the reference grating and the specimen increases. The amplitude of the maximum error is approximately 0.017 fringe, which defines the theoretical limit of resolution enhancement offered by the phase-shifting technique

  18. Dosimetric verification and evaluation of segmental multileaf collimator (SMLC)-IMRT for quality assurance. The second report. Absolute dose

    International Nuclear Information System (INIS)

    Tateoka, Kunihiko; Hareyama, Masato; Oouchi, Atsushi; Nakata, Kensei; Nagase, Daiki; Saikawa, Tsunehiko; Shimizume, Kazunari; Sugimoto, Harumi; Waka, Masaaki

    2003-01-01

    Intensity-modulated radiation therapy (IMRT) was developed to irradiate the target area more conformally while sparing organs at risk (OARs). Since the beams are sequentially delivered by many small, irregular, and off-center fields in IMRT, dosimetric quality assurance (QA) is an extremely important issue. QA is performed by verifying both the dose distribution and the doses at arbitrary points. In this work, we describe the verification of doses at arbitrary points in our hospital for segmental multileaf collimator (SMLC)-IMRT. In general, verification of the absolute doses for IMRT is performed by comparing the doses calculated with a radiation treatment planning system (RTP) and the doses measured with a small-volume ionization chamber at arbitrary points in relatively flat regions of the dose gradients. However, no clear definitions of the dose gradients and the flat regions have yet been reported. We carried out verification by comparing the measured doses with the average dose and the central point dose, calculated using the RTP, in a virtual Farmer-type ionization chamber (V-F) and a virtual PinPoint ionization chamber (V-P) whose volumes equal those of the physical Farmer-type and PinPoint ionization chambers. Furthermore, we defined the dose gradient as the deviation of the maximum dose from the minimum dose within the virtual ionization chamber volume. In IMRT, the dose gradients may be as high as 80% or more over the virtual ionization chamber volume. Therefore, it is thought that the effective center of the ionization chamber varies by segment for IMRT fields (i.e., the ionization chamber replacement effect varies). Additionally, in regions with a higher dose gradient, uncertainty in the measured doses is influenced by variations in the ionization chamber replacement effect and by ionization chamber positioning error. We more objectively examined the verification method for the absolute dose in IMRT using the virtual ionization chamber

  19. Poster - 49: Assessment of Synchrony respiratory compensation error for CyberKnife liver treatment

    International Nuclear Information System (INIS)

    Liu, Ming; Cygler, Joanna; Vandervoort, Eric

    2016-01-01

    The goal of this work is to quantify respiratory motion compensation errors for liver tumor patients treated by the CyberKnife system with Synchrony tracking, to identify patients with the smallest tracking errors, and eventually to help coach patients' breathing patterns to minimize dose delivery errors. The accuracy of CyberKnife Synchrony respiratory motion compensation was assessed for 37 patients treated for liver lesions by analyzing data from system logfiles. A predictive model is used to modulate the direction of individual beams during dose delivery based on the positions of internally implanted fiducials determined using an orthogonal x-ray imaging system and the current location of LED external markers. For each x-ray pair acquired, system logfiles report the prediction error, the difference between the measured and predicted fiducial positions, and the delivery error, which is an estimate of the statistical error in the model overcoming the latency between x-ray acquisition and robotic repositioning. The total error was calculated at the time of each x-ray pair, over the number of treatment fractions and the number of patients, giving the average respiratory motion compensation error in three dimensions. The 99th percentile for the total radial error is 3.85 mm, with the highest contribution of 2.79 mm in the superior/inferior (S/I) direction. The absolute mean compensation error is 1.78 mm radially, with a 1.27 mm contribution in the S/I direction. Regions of high total error may provide insight into features predicting groups of patients with larger or smaller total errors.

  20. Poster - 49: Assessment of Synchrony respiratory compensation error for CyberKnife liver treatment

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Ming [Carleton University (Canada); Cygler, Joanna [The Ottawa Hospital Cancer Centre, Carleton University, Ottawa University (Canada); Vandervoort, Eric [The Ottawa Hospital Cancer Centre, Ottawa University (Canada)

    2016-08-15

    The goal of this work is to quantify respiratory motion compensation errors for liver tumor patients treated by the CyberKnife system with Synchrony tracking, to identify patients with the smallest tracking errors and to eventually help coach patient’s breathing patterns to minimize dose delivery errors. The accuracy of CyberKnife Synchrony respiratory motion compensation was assessed for 37 patients treated for liver lesions by analyzing data from system logfiles. A predictive model is used to modulate the direction of individual beams during dose delivery based on the positions of internally implanted fiducials determined using an orthogonal x-ray imaging system and the current location of LED external markers. For each x-ray pair acquired, system logfiles report the prediction error, the difference between the measured and predicted fiducial positions, and the delivery error, which is an estimate of the statistical error in the model overcoming the latency between x-ray acquisition and robotic repositioning. The total error was calculated at the time of each x-ray pair, for the number of treatment fractions and the number of patients, giving the average respiratory motion compensation error in three dimensions. The 99th percentile for the total radial error is 3.85 mm, with the highest contribution of 2.79 mm in superior/inferior (S/I) direction. The absolute mean compensation error is 1.78 mm radially with a 1.27 mm contribution in the S/I direction. Regions of high total error may provide insight into features predicting groups of patients with larger or smaller total errors.

  1. Full-Field Calibration of Color Camera Chromatic Aberration using Absolute Phase Maps.

    Science.gov (United States)

    Liu, Xiaohong; Huang, Shujun; Zhang, Zonghua; Gao, Feng; Jiang, Xiangqian

    2017-05-06

    The refractive index of a lens varies with the wavelength of light, and thus the same incident light at different wavelengths exits the lens along different paths. This characteristic of lenses causes images captured by a color camera to display chromatic aberration (CA), which seriously reduces image quality. Based on an analysis of the distribution of CA, a full-field calibration method based on absolute phase maps is proposed in this paper. Red, green, and blue closed sinusoidal fringe patterns are generated, consecutively displayed on an LCD (liquid crystal display), and captured by a color camera from the front viewpoint. The phase information of each color fringe is obtained using a four-step phase-shifting algorithm and an optimum fringe number selection method. CA causes the unwrapped phases of the three channels to differ. These pixel deviations can be computed by comparing the unwrapped phase data of the red, blue, and green channels in polar coordinates. CA calibration is then accomplished in Cartesian coordinates. The systematic errors introduced by the LCD are analyzed and corrected. Simulated results show the validity of the proposed method, and experimental results demonstrate that the proposed full-field calibration method based on absolute phase maps will be useful for practical software-based CA calibration.
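    The four-step phase-shifting algorithm mentioned above recovers the wrapped phase from four fringe images shifted by pi/2; a minimal sketch on synthetic data (the optimum fringe number selection and unwrapping steps are omitted):

        import numpy as np

        def four_step_phase(i1, i2, i3, i4):
            # Wrapped phase from four images with pi/2 shifts: atan2(I4-I2, I1-I3).
            return np.arctan2(i4 - i2, i1 - i3)

        phi_true = np.linspace(-np.pi, np.pi, 256, endpoint=False)
        frames = [100 + 50 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
        print(np.allclose(four_step_phase(*frames), phi_true))   # True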

  2. Invariant and Absolute Invariant Means of Double Sequences

    Directory of Open Access Journals (Sweden)

    Abdullah Alotaibi

    2012-01-01

    Full Text Available We examine some properties of the invariant mean, define the concepts of strong σ-convergence and absolute σ-convergence for double sequences, and determine the associated sublinear functionals. We also define the absolute invariant mean through which the space of absolutely σ-convergent double sequences is characterized.

  3. The stars: an absolute radiometric reference for the on-orbit calibration of PLEIADES-HR satellites

    Science.gov (United States)

    Meygret, Aimé; Blanchet, Gwendoline; Mounier, Flore; Buil, Christian

    2017-09-01

    The accurate on-orbit radiometric calibration of optical sensors has become a challenge for space agencies, which pool their efforts through international working groups such as CEOS/WGCV or GSICS with the objective of ensuring the consistency of space measurements and of reaching an absolute accuracy compatible with more and more demanding scientific needs. Different targets are traditionally used for calibration depending on the sensor or spacecraft specificities: from on-board calibration systems to ground targets, they all take advantage of our capacity to characterize and model them. But achieving the in-flight stability of a diffuser panel is always a challenge, while calibration over ground targets is often limited by their BRDF characterization and the variability of the atmosphere. Thanks to their agility, some satellites have the capability to view extra-terrestrial targets such as the moon or stars. The moon is widely used for calibration, and its albedo is known through the ROLO (RObotic Lunar Observatory) USGS model, but with a poor absolute accuracy, limiting its use to sensor drift monitoring or cross-calibration. Although the spectral irradiance of some stars is known with very high accuracy, it had not really been shown that they could provide an absolute reference for the calibration of remote sensors. This paper shows that high resolution optical sensors can be calibrated with high absolute accuracy using stars. The agile-body PLEIADES 1A satellite is used for this demonstration. The star-based calibration principle is described and the results are provided for different stars, each one being acquired several times. These results are compared to the official calibration provided by ground targets, and the main error contributors are discussed.

  4. An absolute calibration system for millimeter-accuracy APOLLO measurements

    Science.gov (United States)

    Adelberger, E. G.; Battat, J. B. R.; Birkmeier, K. J.; Colmenares, N. R.; Davis, R.; Hoyle, C. D.; Huang, L. R.; McMillan, R. J.; Murphy, T. W., Jr.; Schlerman, E.; Skrobol, C.; Stubbs, C. W.; Zach, A.

    2017-12-01

    Lunar laser ranging provides a number of leading experimental tests of gravitation—important in our quest to unify general relativity and the standard model of physics. The apache point observatory lunar laser-ranging operation (APOLLO) has for years achieved median range precision at the  ∼2 mm level. Yet residuals in model-measurement comparisons are an order-of-magnitude larger, raising the question of whether the ranging data are not nearly as accurate as they are precise, or if the models are incomplete or ill-conditioned, motivating continued work on model capabilities. This paper describes a new absolute calibration system (ACS) intended both as a tool for exposing and eliminating sources of systematic error, and also as a means to directly calibrate ranging data in situ. The system consists of a high-repetition-rate (80 MHz) laser emitting short pulses. The ACS provides the means to deliver APOLLO data both accurate and precise below the 2 mm level.

  5. Absolute measurement of a tritium standard

    International Nuclear Information System (INIS)

    Hadzisehovic, M.; Mocilnik, I.; Buraei, K.; Pongrac, S.; Milojevic, A.

    1978-01-01

    For the determination of a tritium absolute activity standard, a method of internal gas counting has been used. The procedure involves water reduction by uranium and zinc, followed by measurement of the absolute disintegration rate of tritium per unit effective volume of the counter by a compensation method. Criteria for the choice of methods and procedures concerning the determination and measurement of the gaseous ³H yield, the parameters of the gaseous hydrogen, the sample mass of HTO and the absolute disintegration rate of tritium are discussed. In order to obtain gaseous sources of ³H (and ²H), the same reversible chemical reaction was used, namely, the water - uranium hydride - hydrogen system. This reaction was proved to be quantitative above 500 deg C by measuring the yield of the gas obtained and the absolute activity of an HTO standard. A brief description of the measuring apparatus is given, as well as a critical discussion of the quality of the brass counter and the possibility of obtaining equal working conditions at the counter ends. (T.G.)

  6. Cryogenic, Absolute, High Pressure Sensor

    Science.gov (United States)

    Chapman, John J. (Inventor); Shams. Qamar A. (Inventor); Powers, William T. (Inventor)

    2001-01-01

    A pressure sensor is provided for cryogenic, high pressure applications. A highly doped silicon piezoresistive pressure sensor is bonded to a silicon substrate in an absolute pressure sensing configuration. The absolute pressure sensor is bonded to an aluminum nitride substrate. Aluminum nitride has an appropriate coefficient of thermal expansion for use with highly doped silicon at cryogenic temperatures. A group of sensors, either two sensors on two substrates or four sensors on a single substrate, are packaged in a pressure vessel.

  7. Temperature and SAR measurement errors in the evaluation of metallic linear structures heating during MRI using Fluoroptic® probes

    Energy Technology Data Exchange (ETDEWEB)

    Mattei, E [Department of Technologies and Health, Italian National Institute of Health, Rome (Italy); Triventi, M [Department of Technologies and Health, Italian National Institute of Health, Rome (Italy); Calcagnini, G [Department of Technologies and Health, Italian National Institute of Health, Rome (Italy); Censi, F [Department of Technologies and Health, Italian National Institute of Health, Rome (Italy); Kainz, W [Center for Devices and Radiological Health, Food and Drug Administration, Rockville, MD (United States); Bassen, H I [Center for Devices and Radiological Health, Food and Drug Administration, Rockville, MD (United States); Bartolini, P [Department of Technologies and Health, Italian National Institute of Health, Rome (Italy)

    2007-03-21

    The purpose of this work is to evaluate the error associated with temperature and SAR measurements using Fluoroptic® temperature probes on pacemaker (PM) leads during magnetic resonance imaging (MRI). We performed temperature measurements on pacemaker leads excited with 25, 64, and 128 MHz currents. The PM lead tip heating was measured with a Fluoroptic® thermometer (Luxtron, Model 3100, USA). Different contact configurations between the pigmented portion of the temperature probe and the PM lead tip were investigated to find the contact position minimizing the temperature and SAR underestimation. A computer model was used to estimate the error made by Fluoroptic® probes in temperature and SAR measurement. Transversal contact between the pigmented portion of the temperature probe and the PM lead tip minimizes the underestimation of temperature and SAR and also yields the lowest temperature and SAR error. For other contact positions, the maximum temperature error can be as high as -45%, whereas the maximum SAR error can be as high as -54%. MRI heating evaluations with temperature probes should use a contact position minimizing the maximum error, need to be accompanied by a thorough uncertainty budget, and should specify the temperature and SAR errors.

  8. A developmental study of latent absolute pitch memory.

    Science.gov (United States)

    Jakubowski, Kelly; Müllensiefen, Daniel; Stewart, Lauren

    2017-03-01

    The ability to recall the absolute pitch level of familiar music (latent absolute pitch memory) is widespread in adults, in contrast to the rare ability to label single pitches without a reference tone (overt absolute pitch memory). The present research investigated the developmental profile of latent absolute pitch (AP) memory and explored individual differences related to this ability. In two experiments, 288 children from 4 to 12 years of age performed significantly above chance at recognizing the absolute pitch level of familiar melodies. No age-related improvement or decline was found, nor any effect of musical training, gender, or familiarity with the stimuli on latent AP task performance. These findings suggest that latent AP memory is a stable ability that develops as early as age 4 and persists into adulthood.

  9. Comparative Study of Regional Estimation Methods for Daily Maximum Temperature (A Case Study of the Isfahan Province)

    Directory of Open Access Journals (Sweden)

    Ghamar Fadavi

    2016-02-01

    data (24 days for each year and 2 days for each month) were used for the different interpolation methods. Using difference measures, viz. Root Mean Square Error (RMSE), Mean Bias Error (MBE), Mean Absolute Error (MAE), and Correlation Coefficient (r), the performance and accuracy of each model were tested to select the best method. Results and Discussion: The normality of the data was assessed using the Kolmogorov-Smirnov test at the ninety-five percent (95%) level of significance in Minitab software. The results show that the distribution of daily maximum temperature data had no significant difference from the normal distribution for either year. The inverse distance weighting method was used to estimate daily maximum temperature; for this purpose, the root mean square error (RMSE) was calculated for different powers (1 to 5) and numbers of stations (5, 10, 15, and 20). According to the minimum RMSE, a power of 2 with 15 stations in 2007 and a power of 1 with 5 stations in 1992 were obtained as the optimum power and number of stations. The results also show that in the regression equation the correlation coefficient was more than 0.8 for most of the days. The regression coefficients of elevation (h) and latitude (y) were almost always negative for all months, and the regression coefficient of longitude (x) was positive, showing that temperature decreases with increasing elevation and increases with increasing longitude. The results revealed that for the Kriging method the Gaussian model had the best semivariogram for 2007, with the spherical and exponential models next in order. In 1992, the spherical and Gaussian models had the better semivariograms. Elevation was the best auxiliary variable for improving the Co-Kriging method, such that the correlation coefficient between temperature and elevation was more than 0.5 for all days. The results also show that for the Co-Kriging method the spherical model had the best semivariogram and
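
    To make the record's model-selection workflow concrete, the sketch below implements inverse distance weighting and the four difference measures in Python. It is an illustrative sketch, not the authors' code: the array names, the `power` and `n_stations` arguments, and the grid search implied in the closing comment are hypothetical.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2, n_stations=15):
    """Inverse distance weighting from (n, 2) station coordinates and
    (n,) observed temperatures to (m, 2) query points."""
    z_hat = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        idx = np.argsort(d)[:n_stations]              # nearest stations only
        w = 1.0 / np.maximum(d[idx], 1e-12) ** power  # avoid division by zero
        z_hat[i] = np.sum(w * z_known[idx]) / np.sum(w)
    return z_hat

def difference_measures(obs, est):
    """RMSE, MBE, MAE, and correlation coefficient r, as used in the record."""
    err = est - obs
    return {"RMSE": float(np.sqrt(np.mean(err**2))),
            "MBE": float(np.mean(err)),
            "MAE": float(np.mean(np.abs(err))),
            "r": float(np.corrcoef(obs, est)[0, 1])}

# Selecting the optimum power and station count is then a loop over
# idw(..., power=p, n_stations=k), minimizing difference_measures(...)["RMSE"].
```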

  10. Per-pixel bias-variance decomposition of continuous errors in data-driven geospatial modeling: A case study in environmental remote sensing

    Science.gov (United States)

    Gao, Jing; Burt, James E.

    2017-12-01

    This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially-explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method that originated in machine learning and has not previously been examined for ERS applications. Demonstrated with a showcase regression tree model mapping land imperviousness (0-100%) using Landsat images, our results showed that BVD can reveal sources of estimation errors, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity; BVD variance maps can indicate potential effects of ensemble methods (e.g. bagging) and inform efficient training sample allocation: training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than regions covering larger areas. Through examining the relationships between model characteristics and their effects on estimation accuracy revealed by BVD for both absolute and squared errors (i.e. error is the absolute or the squared value of the difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to effectively steer performance through model adjustments.
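
    For squared error, the per-pixel decomposition can be approximated by retraining the model on bootstrap resamples. The sketch below is a minimal illustration under that assumption (a single regression tree, echoing the record's showcase model, but with hypothetical array names); it is not the authors' implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.utils import resample

def per_pixel_bvd(X_train, y_train, X_test, y_test, n_boot=100, seed=0):
    """Per-pixel decomposition of mean squared error into bias^2 + variance,
    estimated over an ensemble of bootstrap-trained regression trees."""
    rng = np.random.RandomState(seed)
    preds = np.empty((n_boot, len(y_test)))
    for b in range(n_boot):
        Xb, yb = resample(X_train, y_train, random_state=rng)
        preds[b] = DecisionTreeRegressor().fit(Xb, yb).predict(X_test)
    bias = preds.mean(axis=0) - y_test           # per-pixel bias
    variance = preds.var(axis=0)                 # per-pixel variance
    mse = ((preds - y_test) ** 2).mean(axis=0)   # equals bias**2 + variance
    return bias, variance, mse
```

    Mapping `bias` and `variance` back onto the image grid yields the per-pixel maps the record discusses.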

  11. A Simultaneously Calibration Approach for Installation and Attitude Errors of an INS/GPS/LDS Target Tracker

    Directory of Open Access Journals (Sweden)

    Jianhua Cheng

    2015-02-01

    Full Text Available To obtain the absolute position of a target is one of the basic problems in non-cooperative target tracking. In this paper, we present a simultaneous calibration method for an Inertial Navigation System (INS)/Global Position System (GPS)/Laser Distance Scanner (LDS) integrated-system-based target positioning approach. The INS/GPS integrated system provides the attitude and position of the observer, and the LDS offers the distance between the observer and the target. The two most significant errors are jointly considered and analyzed: (1) the attitude measurement error of the INS/GPS; (2) the installation error between the INS/GPS and LDS subsystems. Consequently, an INS/GPS/LDS-based target positioning approach considering these two errors is proposed. In order to improve the performance of this approach, a novel calibration method is designed to simultaneously estimate and compensate these two main errors. Finally, simulations are conducted to assess the performance of the proposed target positioning approach and the designed simultaneous calibration method.

  12. A simultaneously calibration approach for installation and attitude errors of an INS/GPS/LDS target tracker.

    Science.gov (United States)

    Cheng, Jianhua; Chen, Daidai; Sun, Xiangyu; Wang, Tongda

    2015-02-04

    To obtain the absolute position of a target is one of the basic problems in non-cooperative target tracking. In this paper, we present a simultaneous calibration method for an Inertial Navigation System (INS)/Global Position System (GPS)/Laser Distance Scanner (LDS) integrated-system-based target positioning approach. The INS/GPS integrated system provides the attitude and position of the observer, and the LDS offers the distance between the observer and the target. The two most significant errors are jointly considered and analyzed: (1) the attitude measurement error of the INS/GPS; (2) the installation error between the INS/GPS and LDS subsystems. Consequently, an INS/GPS/LDS-based target positioning approach considering these two errors is proposed. In order to improve the performance of this approach, a novel calibration method is designed to simultaneously estimate and compensate these two main errors. Finally, simulations are conducted to assess the performance of the proposed target positioning approach and the designed simultaneous calibration method.
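
    The geometry both versions of this record build on can be written in a few lines. Below is a minimal sketch, with hypothetical names and a simple ZYX Euler convention, of how an observer pose plus an LDS range maps to an absolute target position; the attitude error and the installation error (boresight and lever arm) enter exactly through the `attitude`, `boresight`, and `lever_arm` inputs that the papers calibrate jointly.

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Body-to-navigation rotation from Euler angles (radians), ZYX order."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def target_position(p_observer, attitude, lds_range, boresight, lever_arm):
    """Absolute target position: observer position plus the attitude-rotated
    lever arm and range vector (boresight is a unit vector in the body frame)."""
    R = rotation_matrix(*attitude)
    return p_observer + R @ (lever_arm + lds_range * np.asarray(boresight))
```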

  13. Minimizing the Levelized Cost of Energy in Single-Phase Photovoltaic Systems with an Absolute Active Power Control

    DEFF Research Database (Denmark)

    Yang, Yongheng; Koutroulis, Eftichios; Sangwongwanich, Ariya

    2015-01-01

    The Absolute Active Power Control (AAPC) strategy can effectively solve overloading issues by limiting the maximum possible PV power to a certain level (i.e., the power limitation) and also benefits the inverter reliability. However, its feasibility is challenged by the energy loss compared to the conventional PV inverter operating only in the maximum power point tracking mode. An increase of the inverter lifetime and a reduction of the energy yield can alter the cost of energy, demanding an optimization of the power limitation. Therefore, aiming at minimizing the Levelized Cost of Energy (LCOE), the power limit is optimized for the AAPC strategy in this paper. In the presented case study, the minimum LCOE is achieved when the power limit is optimized to a certain level of the designed maximum feed-in power (i.e., 3 kW). In addition, the proposed …
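
    The trade-off the record optimizes can be illustrated with a toy LCOE grid search. Everything below is hypothetical (the cost figures and the yield/lifetime curves are placeholders, not the paper's mission-profile model); it only shows why an interior power limit can minimize LCOE.

```python
import numpy as np

def lcoe(capex, opex_per_year, annual_energy_kwh, lifetime_years, rate=0.05):
    """Levelized cost of energy: discounted lifetime cost / discounted energy."""
    years = np.arange(1, int(lifetime_years) + 1)
    disc = (1.0 + rate) ** -years
    return (capex + np.sum(opex_per_year * disc)) / np.sum(annual_energy_kwh * disc)

limits = np.linspace(2.0, 6.0, 41)                # candidate power limits (kW)
yields = 9000.0 - 250.0 * (6.0 - limits) ** 1.5   # placeholder: yield drops as limit tightens
lifetimes = 10.0 + 1.5 * (6.0 - limits)           # placeholder: lifetime grows as limit tightens
costs = [lcoe(3000.0, 50.0, y, t) for y, t in zip(yields, lifetimes)]
best_limit = limits[int(np.argmin(costs))]        # interior optimum, analogous to the 3 kW case
```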

  14. Advancing Absolute Calibration for JWST and Other Applications

    Science.gov (United States)

    Rieke, George; Bohlin, Ralph; Boyajian, Tabetha; Carey, Sean; Casagrande, Luca; Deustua, Susana; Gordon, Karl; Kraemer, Kathleen; Marengo, Massimo; Schlawin, Everett; Su, Kate; Sloan, Greg; Volk, Kevin

    2017-10-01

    We propose to exploit the unique optical stability of the Spitzer telescope, along with that of IRAC, to (1) transfer the accurate absolute calibration obtained with MSX on very bright stars directly to two reference stars within the dynamic range of the JWST imagers (and of other modern instrumentation); (2) establish a second accurate absolute calibration based on the absolutely calibrated spectrum of the sun, transferred onto the astronomical system via alpha Cen A; and (3) provide accurate infrared measurements for the 11 (of 15) highest priority stars with no such data but with accurate interferometrically measured diameters, allowing us to optimize determinations of effective temperatures using the infrared flux method and thus to extend the accurate absolute calibration spectrally. This program is integral to plans for an accurate absolute calibration of JWST and will also provide a valuable Spitzer legacy.

  15. On the mean squared error of the ridge estimator of the covariance and precision matrix

    NARCIS (Netherlands)

    van Wieringen, Wessel N.

    2017-01-01

    For a suitably chosen ridge penalty parameter, the ridge regression estimator uniformly dominates the maximum likelihood regression estimator in terms of the mean squared error. Analogous results for the ridge maximum likelihood estimators of covariance and precision matrix are presented.
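
    As a concrete (and deliberately simplified) illustration of why shrinkage can dominate the MLE in mean squared error, the sketch below compares the ML covariance estimator with a linear shrinkage toward a scaled identity. This stands in for the paper's ridge estimators, whose exact penalized form differs, so treat it as a sketch of the phenomenon rather than the paper's method.

```python
import numpy as np

def shrunk_covariance(X, alpha):
    """Linear shrinkage of the ML covariance toward a scaled identity target:
    (1 - alpha) * S + alpha * (trace(S)/p) * I. alpha = 0 recovers the MLE."""
    p = X.shape[1]
    S = np.cov(X, rowvar=False, bias=True)
    return (1.0 - alpha) * S + alpha * (np.trace(S) / p) * np.eye(p)

rng = np.random.default_rng(1)
p, n, true_cov = 8, 10, np.eye(8)             # few samples relative to dimension
mse = lambda est: float(np.mean((est - true_cov) ** 2))
errs_ml, errs_shrunk = [], []
for _ in range(500):
    X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)
    errs_ml.append(mse(np.cov(X, rowvar=False, bias=True)))
    errs_shrunk.append(mse(shrunk_covariance(X, 0.5)))
print(np.mean(errs_ml), np.mean(errs_shrunk))  # shrinkage wins clearly in this regime
```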

  16. Verification of surface minimum, mean, and maximum temperature forecasts in Calabria for summer 2008

    Directory of Open Access Journals (Sweden)

    S. Federico

    2011-02-01

    Full Text Available Since 2005, one-hour temperature forecasts for the Calabria region (southern Italy), modelled by the Regional Atmospheric Modeling System (RAMS), have been issued by CRATI/ISAC-CNR (Consortium for Research and Application of Innovative Technologies/Institute for Atmospheric and Climate Sciences of the National Research Council) and are available online at http://meteo.crati.it/previsioni.html (every six hours). Beginning in June 2008, the horizontal resolution was enhanced to 2.5 km. In the present paper, forecast skill and accuracy are evaluated out to four days for the 2008 summer season (from 6 June to 30 September, 112 runs). For this purpose, gridded high horizontal resolution forecasts of minimum, mean, and maximum temperatures are evaluated against gridded analyses at the same horizontal resolution (2.5 km).

    Gridded analysis is based on Optimal Interpolation (OI) and uses the RAMS first-day temperature forecast as the background field. Observations from 87 thermometers are used in the analysis system. The analysis error is introduced to quantify the effect of using the RAMS first-day forecast as the background field in the OI analyses and to define the forecast error unambiguously, while spatial interpolation (SI) analysis is considered to quantify the statistics' sensitivity to the verifying analysis and to show the quality of the OI analyses for different background fields.

    Two case studies, the first one with a low (less than the 10th percentile) root mean square error (RMSE) in the OI analysis, the second with the largest RMSE of the whole period in the OI analysis, are discussed to show the forecast performance under two different conditions. Cumulative statistics are used to quantify forecast errors out to four days. Results show that maximum temperature has the largest RMSE, while minimum and mean temperature errors are similar. For the period considered

  17. Measurement Model Specification Error in LISREL Structural Equation Models.

    Science.gov (United States)

    Baldwin, Beatrice; Lomax, Richard

    This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…

  18. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing between partially aware errors (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Modified Moment, Maximum Likelihood and Percentile Estimators for the Parameters of the Power Function Distribution

    Directory of Open Access Journals (Sweden)

    Azam Zaka

    2014-10-01

    Full Text Available This paper is concerned with modifications of the maximum likelihood, moment, and percentile estimators of the two-parameter power function distribution. The sampling behavior of the estimators is assessed by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moment, and percentile estimators with respect to bias, mean square error, and total deviation.
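
    The Monte Carlo protocol in the record is easy to reproduce for the baseline (unmodified) MLE; the modified estimators themselves are not reproduced here. A sketch, assuming the scale parameter b is known and using inverse-transform sampling:

```python
import numpy as np

def mle_shape(x, b):
    """MLE of the shape a of the power function distribution
    f(x) = a * x**(a - 1) / b**a on (0, b), with scale b known."""
    return len(x) / np.sum(np.log(b / x))

rng = np.random.default_rng(0)
a_true, b, n, reps = 2.0, 1.0, 30, 10_000
# If U ~ Uniform(0, 1), then b * U**(1/a) follows the power function distribution.
est = np.array([mle_shape(b * rng.uniform(size=n) ** (1.0 / a_true), b)
                for _ in range(reps)])
bias = est.mean() - a_true
mse = np.mean((est - a_true) ** 2)   # compare estimators on bias and MSE, as in the record
```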

  20. Effects of systematic phase errors on optimized quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Zhang Yu-Chao; Bao Wan-Su; Wang Xiang; Fu Xiang-Qun

    2015-01-01

    This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. (paper)

  1. PERBANDINGAN ANALISIS LEAST ABSOLUTE SHRINKAGE AND SELECTION OPERATOR DAN PARTIAL LEAST SQUARES (Studi Kasus: Data Microarray

    Directory of Open Access Journals (Sweden)

    KADEK DWI FARMANI

    2012-09-01

    Full Text Available Linear regression analysis is one of the parametric statistical methods which utilize the relationship between two or more quantitative variables. In linear regression analysis there are several assumptions that must be met: the errors are normally distributed, the errors are uncorrelated, and the error variance is constant and homogeneous. Several constraints can prevent these assumptions from being met, for example correlation between the independent variables (multicollinearity) or limits on the number of observations relative to the number of independent variables. When the number of samples obtained is less than the number of independent variables, the data are called microarray data. Least Absolute Shrinkage and Selection Operator (LASSO) and Partial Least Squares (PLS) are statistical methods that can be used to handle microarray data, overfitting, and multicollinearity. From the above description, a study comparing the LASSO and PLS methods is warranted. This study uses coronary heart disease and stroke patient data, which are microarray data containing multicollinearity. For these data, in which most independent variables are weakly correlated, the LASSO method produces a better model than PLS, as judged by the lower RMSEP.
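
    A comparison of this kind is straightforward to set up with scikit-learn. The sketch below uses synthetic microarray-like data (far fewer samples than predictors); the data, the five informative predictors, and the component count are hypothetical stand-ins for the record's patient data.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))                 # n = 60 samples, p = 500 predictors
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.5, size=60)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def rmsep(model):
    """Root mean square error of prediction on the held-out set."""
    return float(np.sqrt(mean_squared_error(y_te, np.ravel(model.predict(X_te)))))

lasso = LassoCV(cv=5).fit(X_tr, y_tr)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
print(f"RMSEP LASSO: {rmsep(lasso):.3f}   RMSEP PLS: {rmsep(pls):.3f}")
```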

  2. Absolute pitch among students at the Shanghai Conservatory of Music: a large-scale direct-test study.

    Science.gov (United States)

    Deutsch, Diana; Li, Xiaonuo; Shen, Jing

    2013-11-01

    This paper reports a large-scale direct-test study of absolute pitch (AP) in students at the Shanghai Conservatory of Music. Overall note-naming scores were very high, with high scores correlating positively with early onset of musical training. Students who had begun training at age ≤5 yr scored 83% correct not allowing for semitone errors and 90% correct allowing for semitone errors. Performance levels were higher for white key pitches than for black key pitches. This effect was greater for orchestral performers than for pianists, indicating that it cannot be attributed to early training on the piano. Rather, accuracy in identifying notes of different names (C, C#, D, etc.) correlated with their frequency of occurrence in a large sample of music taken from the Western tonal repertoire. There was also an effect of pitch range, so that performance on tones in the two-octave range beginning on Middle C was higher than on tones in the octave below Middle C. In addition, semitone errors tended to be on the sharp side. The evidence also ran counter to the hypothesis, previously advanced by others, that the note A plays a special role in pitch identification judgments.

  3. An A Posteriori Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Peer Jesper

    2015-01-07

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns Symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading order term consisting of an error density that is computable from Symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations.
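
    For reference, the integrator the estimate concerns is simple to state. Below is a minimal symplectic Euler sketch for a separable Hamiltonian (the paper's adaptive error-density machinery is not reproduced); the function names and the harmonic-oscillator test are illustrative.

```python
import numpy as np

def symplectic_euler(dH_dq, dH_dp, q0, p0, dt, n_steps):
    """Symplectic Euler for H(q, p): update p with the old q, then q with the
    new p. Explicit when H is separable, H(q, p) = T(p) + V(q)."""
    q, p = float(q0), float(p0)
    path = [(q, p)]
    for _ in range(n_steps):
        p -= dt * dH_dq(q)    # p_{n+1} = p_n - dt * dV/dq(q_n)
        q += dt * dH_dp(p)    # q_{n+1} = q_n + dt * dT/dp(p_{n+1})
        path.append((q, p))
    return np.array(path)

# Harmonic oscillator H = (p**2 + q**2) / 2: the energy error stays bounded,
# the structural property that makes the scheme attractive for long horizons.
path = symplectic_euler(lambda q: q, lambda p: p, 1.0, 0.0, dt=0.1, n_steps=1000)
energy = 0.5 * (path[:, 0] ** 2 + path[:, 1] ** 2)   # oscillates near 0.5, no drift
```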

  4. Discretization error estimates in maximum norm for convergent splittings of matrices with a monotone preconditioning part

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Karátson, J.

    2017-01-01

    Roč. 210, January 2017 (2017), s. 155-164 ISSN 0377-0427 Institutional support: RVO:68145535 Keywords: finite difference method * error estimates * matrix splitting * preconditioning Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.357, year: 2016 http://www.sciencedirect.com/science/article/pii/S0377042716301492?via%3Dihub

  6. A self-consistent, absolute isochronal age scale for young moving groups in the solar neighbourhood

    OpenAIRE

    Bell, Cameron P. M.; Mamajek, Eric E.; Naylor, Tim

    2015-01-01

    We present a self-consistent, absolute isochronal age scale for young (< 200 Myr), nearby (< 100 pc) moving groups in the solar neighbourhood based on homogeneous fitting of semi-empirical pre-main-sequence model isochrones using the τ² maximum-likelihood fitting statistic of Naylor & Jeffries in the M_V, V-J colour-magnitude diagram. The final adopted ages for the groups are: 149 (+51, -19) Myr for the AB Dor moving group, 24 ± 3 Myr for the β Pic moving group (BPMG), 45 (+11, -7) Myr for the...

  7. NGS Absolute Gravity Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NGS Absolute Gravity data (78 stations) was received in July 1993. Principal gravity parameters include Gravity Value, Uncertainty, and Vertical Gradient. The...

  8. THE ABSOLUTE MAGNITUDE OF RRc VARIABLES FROM STATISTICAL PARALLAX

    International Nuclear Information System (INIS)

    Kollmeier, Juna A.; Burns, Christopher R.; Thompson, Ian B.; Preston, George W.; Crane, Jeffrey D.; Madore, Barry F.; Morrell, Nidia; Prieto, José L.; Shectman, Stephen; Simon, Joshua D.; Villanueva, Edward; Szczygieł, Dorota M.; Gould, Andrew; Sneden, Christopher; Dong, Subo

    2013-01-01

    We present the first definitive measurement of the absolute magnitude of RR Lyrae c-type variable stars (RRc) determined purely from statistical parallax. We use a sample of 242 RRc variables selected from the All Sky Automated Survey for which high-quality light curves, photometry, and proper motions are available. We obtain high-resolution echelle spectra for these objects to determine radial velocities and abundances as part of the Carnegie RR Lyrae Survey. We find that M_V(RRc) = 0.59 ± 0.10 at a mean metallicity of [Fe/H] = -1.59. This is to be compared with previous estimates for RRab stars (M_V(RRab) = 0.76 ± 0.12) and the only direct measurement of an RRc absolute magnitude (RZ Cephei, M_V(RRc) = 0.27 ± 0.17). We find the bulk velocity of the halo relative to the Sun to be (W_π, W_θ, W_z) = (12.0, -209.9, 3.0) km s⁻¹ in the radial, rotational, and vertical directions, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (150.4, 106.1, 96.0) km s⁻¹. For the disk, we find (W_π, W_θ, W_z) = (13.0, -42.0, -27.3) km s⁻¹ relative to the Sun, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (67.7, 59.2, 54.9) km s⁻¹. Finally, as a byproduct of our statistical framework, we are able to demonstrate that UCAC2 proper-motion errors are significantly overestimated, as verified by UCAC4.

  9. Analysis of error functions in speckle shearing interferometry

    International Nuclear Information System (INIS)

    Wan Saffiey Wan Abdullah

    2001-01-01

    Electronic Speckle Pattern Shearing Interferometry (ESPSI) or shearography has successfully been used in NDT for slope (∂w/∂x and/or ∂w/∂y) measurement, while strain measurement (∂u/∂x, ∂v/∂y, ∂u/∂y and ∂v/∂x) is still under investigation. This method is well accepted in industrial applications, especially in the aerospace industry. Demand for this method is increasing due to the complexity of the test materials and objects. ESPSI has been successful in NDT only for qualitative measurement, whilst quantitative measurement is the current aim of many manufacturers. Industrial use of such equipment proceeds without considering the errors arising from numerous sources, including wavefront divergence. The majority of commercial systems are operated with diverging object illumination wavefronts without considering the curvature of the object illumination wavefront or the object geometry when calculating the interferometer fringe function and quantifying data. This thesis reports a novel approach to quantified maximum phase change difference analysis for derivative out-of-plane (OOP) and in-plane (IP) cases that propagate from a divergent illumination wavefront compared to collimated illumination. The theoretical maximum phase difference is formulated by means of several dependent variables: the object distance, illuminated diameter, centre of the illuminated area, camera distance, and illumination angle. The relative maximum phase change difference that may contribute to the error in the measurement analysis within the scope of this research is defined as the difference between the maximum phase difference value measured with a divergent illumination wavefront and the maximum phase difference value for a collimated illumination wavefront, taken at the edge of the illuminated area. Experimental validation using test objects for derivative out-of-plane and derivative in-plane deformation, using a single illumination wavefront

  10. Visual error augmentation enhances learning in three dimensions.

    Science.gov (United States)

    Sharp, Ian; Huang, Felix; Patton, James

    2011-09-02

    Because recent preliminary evidence points to the use of error augmentation (EA) for motor learning enhancement, we visually enhanced deviations from a straight-line path while subjects practiced a sensorimotor reversal task, similar to laparoscopic surgery. Our study asked 10 healthy subjects in two groups to perform targeted reaching in a simulated virtual reality environment, where the transformation of the hand position matrix was a complete reversal--rotated 180 degrees about an arbitrary axis (hence 2 of the 3 coordinates are reversed). Our data showed that after 500 practice trials, error-augmented-trained subjects reached the desired targets more quickly and with lower error (differences of 0.4 seconds and 0.5 cm Maximum Perpendicular Trajectory deviation) when compared to the control group. Furthermore, the manner in which subjects practiced was influenced by the error augmentation, resulting in more continuous motions for this group and smaller errors. Even with the extreme sensory discordance of a reversal, these data further support that distorted reality can promote more complete adaptation/learning when compared to regular training. Lastly, upon removing the flip, all subjects quickly returned to baseline within 6 trials.

  11. Visual error augmentation enhances learning in three dimensions

    Directory of Open Access Journals (Sweden)

    Huang Felix

    2011-09-01

    Full Text Available Abstract Because recent preliminary evidence points to the use of error augmentation (EA) for motor learning enhancement, we visually enhanced deviations from a straight-line path while subjects practiced a sensorimotor reversal task, similar to laparoscopic surgery. Our study asked 10 healthy subjects in two groups to perform targeted reaching in a simulated virtual reality environment, where the transformation of the hand position matrix was a complete reversal--rotated 180 degrees about an arbitrary axis (hence 2 of the 3 coordinates are reversed). Our data showed that after 500 practice trials, error-augmented-trained subjects reached the desired targets more quickly and with lower error (differences of 0.4 seconds and 0.5 cm Maximum Perpendicular Trajectory deviation) when compared to the control group. Furthermore, the manner in which subjects practiced was influenced by the error augmentation, resulting in more continuous motions for this group and smaller errors. Even with the extreme sensory discordance of a reversal, these data further support that distorted reality can promote more complete adaptation/learning when compared to regular training. Lastly, upon removing the flip, all subjects quickly returned to baseline within 6 trials.

  12. Conically scanning lidar error in complex terrain

    Directory of Open Access Journals (Sweden)

    Ferhat Bingöl

    2009-05-01

    Full Text Available Conically scanning lidars assume the flow to be homogeneous in order to deduce the horizontal wind speed. However, in mountainous or complex terrain this assumption is not valid implying a risk that the lidar will derive an erroneous wind speed. The magnitude of this error is measured by collocating a meteorological mast and a lidar at two Greek sites, one hilly and one mountainous. The maximum error for the sites investigated is of the order of 10 %. In order to predict the error for various wind directions the flows at both sites are simulated with the linearized flow model, WAsP Engineering 2.0. The measurement data are compared with the model predictions with good results for the hilly site, but with less success at the mountainous site. This is a deficiency of the flow model, but the methods presented in this paper can be used with any flow model.

  13. Absolute isotopic abundances of Ti in meteorites

    International Nuclear Information System (INIS)

    Niederer, F.R.; Papanastassiou, D.A.; Wasserburg, G.J.

    1985-01-01

    The absolute isotope abundance of Ti has been determined in Ca-Al-rich inclusions from the Allende and Leoville meteorites and in samples of whole meteorites. The absolute Ti isotope abundances differ by a significant mass-dependent isotope fractionation transformation from the previously reported abundances, which were normalized for fractionation using ⁴⁶Ti/⁴⁸Ti. Therefore, the absolute compositions define distinct nucleosynthetic components from those previously identified or reflect the existence of significant mass-dependent isotope fractionation in nature. We provide a general formalism for determining the possible isotope compositions of the exotic Ti from the measured composition, for different values of isotope fractionation in nature and for different mixing ratios of the exotic and normal components. The absolute Ti and Ca isotopic compositions still support the correlation of ⁵⁰Ti and ⁴⁸Ca effects in the FUN inclusions and imply contributions from neutron-rich equilibrium or quasi-equilibrium nucleosynthesis. The present identification of endemic effects at ⁴⁶Ti, for the absolute composition, implies a shortfall of an explosive-oxygen component or reflects significant isotope fractionation. Additional nucleosynthetic components are required by ⁴⁷Ti and ⁴⁹Ti effects. Components are also defined in which ⁴⁸Ti is enhanced. Results are given and discussed. (author)

  14. Statistical analysis of lifetime determinations in the presence of large errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1984-01-01

    The lifetimes of the new particles are very short, and most of the experiments which measure decay times are subject to measurement errors which are not negligible compared with the decay times themselves. Bartlett has analyzed the problem of lifetime estimation when the error on each event is small or zero. For the case of non-negligible measurement errors, σᵢ, on each event, we are interested in a few basic questions: How well does maximum likelihood work? That is, (a) are the errors reasonable, (b) is the answer unbiased, and (c) are there other estimators with superior performance? We concentrate on the results of our Monte Carlo investigation for the case in which the experiment is sensitive over all times -∞ < xᵢ < ∞
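
    When each measured time xᵢ is a true exponential decay time smeared by a Gaussian resolution σᵢ, the likelihood has a closed form (an exponentially modified Gaussian), and the kind of Monte Carlo study described above can be sketched as follows. The code is a numerically naive illustration, not the report's program:

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import minimize_scalar

def neg_log_like(tau, x, sigma):
    """-log L for x_i = t_i + e_i with t_i ~ Exp(mean tau), e_i ~ N(0, sigma_i^2);
    the density is valid for x on the whole real line. Naive: erfc can underflow."""
    lam = 1.0 / tau
    z = (lam * sigma**2 - x) / (np.sqrt(2.0) * sigma)
    log_f = np.log(lam / 2.0) + lam * (lam * sigma**2 / 2.0 - x) + np.log(erfc(z))
    return -np.sum(log_f)

rng = np.random.default_rng(2)
tau_true, n = 1.0, 2000
sigma = rng.uniform(0.3, 0.8, size=n)                 # per-event resolutions
x = rng.exponential(tau_true, size=n) + rng.normal(0.0, sigma)
fit = minimize_scalar(neg_log_like, bounds=(0.05, 10.0), args=(x, sigma),
                      method="bounded")               # fit.x estimates tau
```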

  15. Error Immune Logic for Low-Power Probabilistic Computing

    Directory of Open Access Journals (Sweden)

    Bo Marr

    2010-01-01

    design for the maximum amount of energy savings for a given error rate. Spice simulation results using a commercially available and well-tested 0.25 μm technology are given, verifying the ultra-low-power probabilistic full-adder designs. Further, close to 6X energy savings is achieved for a probabilistic full-adder over the deterministic case.

  16. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    Science.gov (United States)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a
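
    The screening-and-spread procedure is simple enough to sketch. The record applies the ±50% screen on a zonal-mean basis; the sketch below applies it per grid cell for brevity, so it is an approximation of the described method, with hypothetical array shapes:

```python
import numpy as np

def bias_error_estimate(base, others):
    """GPCP-style bias error: keep alternative products that fall within
    +/- 50% of the base estimate, then take the standard deviation across
    the retained products as the estimated systematic (bias) error.
    base: (n_cells,) mean precipitation; others: (k, n_cells)."""
    stack = [np.asarray(base, float)]
    for prod in others:
        keep = np.abs(prod - base) <= 0.5 * np.abs(base)
        stack.append(np.where(keep, prod, np.nan))   # excluded values ignored below
    return np.nanstd(np.vstack(stack), axis=0)       # s, per grid cell

# The relative bias error s/m then follows by dividing by the base mean m.
```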

  17. Selective segmental sclerotherapy of the liver by transportal absolute ethanol injection: an animal experimental study

    International Nuclear Information System (INIS)

    Pan Jie; Yang Ning; Zhang Zhongzhong; Hu Libin; Chen Jie; Wu Wei; Jin Zhengyu; Liu Wei

    2003-01-01

    Objective: To evaluate the safety and efficacy of selective segmental sclerotherapy (SSS) of the liver by transportal absolute ethanol injection in an animal experimental study, and to discuss several technical points involved in this method. Methods: Thirty dogs received SSS of the liver by transportal absolute ethanol injection at doses of 0.2-1.0 ml/kg; repeated examinations of blood ethanol level, WBC, and liver function were performed, and CT and pathological examinations of the liver were carried out. Results: All dogs treated with SSS survived during the study. The maximum elevation of blood ethanol occurred in group F; its average value was (1.603 ± 0.083) mg/ml, far below the lethal level. Transient elevations of blood WBC and ALT were seen; the average values of WBC and ALT were (46.36 ± 7.28) × 10⁹ and (827.36 ± 147.25) U/L, respectively. CT and pathological examinations proved that the dogs given SSS by transportal absolute ethanol injection at doses of 0.3-1.0 ml/kg had a complete wedge-shaped necrosis in the liver. Conclusion: Selective segmental sclerotherapy of the liver by transportal ethanol injection is quite safe and effective if the proper dose of ethanol is injected. SSS may be useful in the treatment of HCC

  18. Performance of Different Light Sources for the Absolute Calibration of Radiation Thermometers

    Science.gov (United States)

    Martín, M. J.; Mantilla, J. M.; del Campo, D.; Hernanz, M. L.; Pons, A.; Campos, J.

    2017-09-01

    The evolving mise en pratique for the definition of the kelvin (MeP-K) [1, 2] will, in its forthcoming edition, encourage the realization and dissemination of the thermodynamic temperature either directly (primary thermometry) or indirectly (relative primary thermometry) via fixed points with assigned reference thermodynamic temperatures. In recent years, the Centro Español de Metrología (CEM), in collaboration with the Instituto de Óptica of Consejo Superior de Investigaciones Científicas (IO-CSIC), has developed several setups for the absolute calibration of standard radiation thermometers using the radiance method, to allow CEM the direct dissemination of the thermodynamic temperature and the assignment of thermodynamic temperatures to several fixed points. Different calibration facilities based on a monochromator and/or a laser and an integrating sphere have been developed to calibrate CEM's standard radiation thermometers (KE-LP2 and KE-LP4) and filter radiometer (FIRA2). This system is based on the one described in [3] placed at IO-CSIC. Different light sources have been tried and tested for measuring absolute spectral radiance responsivity: a 500 W Xe-Hg lamp, a supercontinuum laser (NKT SuperK-EXR20), and a diode laser emitting at 647.3 nm with a typical maximum power of 120 mW. Their advantages and disadvantages have been studied, such as sensitivity to interferences generated by the laser inside the filter, the stability of the flux generated by the radiant sources, and so forth. This paper describes the setups used, the uncertainty budgets, and the results obtained for the absolute temperatures of the Cu, Co-C, Pt-C and Re-C fixed points, measured with the three thermometers with central wavelengths around 650 nm.

  19. A Miniature Four-Hole Probe for Measurement of Three-Dimensional Flow with Large Gradients

    Directory of Open Access Journals (Sweden)

    Ravirai Jangir

    2014-01-01

    Full Text Available A miniature four-hole probe with a sensing area of 1.284 mm², intended to minimise the measurement errors due to the large pressure and velocity gradients that occur in highly three-dimensional turbomachinery flows, is designed, fabricated, calibrated, and validated. The probe has good spatial resolution in two directions, thus minimising spatial and flow-gradient errors. The probe is calibrated in an open-jet calibration tunnel at a velocity of 50 m/s over a yaw and pitch angle range of ±40 degrees at intervals of 5 degrees. The calibration coefficients are defined, determined, and presented. Sensitivity coefficients are also calculated and presented. A lookup-table method is used to determine the four unknown quantities, namely the total and static pressures and the two flow angles. The maximum absolute errors in yaw and pitch angles are 2.4 and 1.3 deg., respectively. The maximum absolute errors in total, static, and dynamic pressures are 3.4, 3.9, and 4.9% of the dynamic pressure, respectively. Measurements made with this probe, a conventional five-hole probe, and a miniature Pitot probe across a calibration section demonstrated that the errors due to gradients and surface proximity for this probe are considerably reduced compared to the five-hole probe.

  20. Computational fluid dynamics analysis and experimental study of a low measurement error temperature sensor used in climate observation.

    Science.gov (United States)

    Yang, Jie; Liu, Qingquan; Dai, Wei

    2017-02-01

    To improve the air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may be helpful to provide a relatively accurate air temperature result.

  1. Absolute earthquake locations using 3-D versus 1-D velocity models below a local seismic network: example from the Pyrenees

    Science.gov (United States)

    Theunissen, T.; Chevrot, S.; Sylvander, M.; Monteiller, V.; Calvet, M.; Villaseñor, A.; Benahmed, S.; Pauchet, H.; Grimaud, F.

    2018-03-01

    Local seismic networks are usually designed so that earthquakes are located inside them (primary azimuthal gap … 180° and distance to the first station higher than 15 km). Errors in velocity models and the accuracy of absolute earthquake locations are assessed based on a reference data set made of active seismic shots, quarry blasts, and passive temporary experiments. Solutions and uncertainties are estimated using the probabilistic approach of the NonLinLoc (NLLoc) software, based on the Equal Differential Time (EDT) formulation. Some updates have been added to NLLoc to better focus on the final solution (outlier exclusion, multiscale grid search, S-phase weighting). Errors in the probabilistic approach are defined to take into account errors in velocity models and in arrival times. The seismicity in the final 3-D catalogue is located with a horizontal uncertainty of about 2.0 ± 1.9 km and a vertical uncertainty of about 3.0 ± 2.0 km.
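
    The Equal Differential Time idea is compact: the unknown origin time cancels when arrival times are compared pairwise. Below is a toy L1 grid-search sketch; NLLoc's probabilistic formulation builds a full posterior rather than a single minimum, and `travel_time` here is a hypothetical forward model.

```python
import numpy as np
from itertools import combinations

def edt_locate(t_obs, travel_time, grid_points):
    """Grid-search hypocentre with an Equal Differential Time (EDT) misfit:
    for every station pair (i, j), compare the observed arrival-time
    difference with the predicted travel-time difference."""
    pairs = list(combinations(range(len(t_obs)), 2))
    misfit = np.empty(len(grid_points))
    for k, point in enumerate(grid_points):
        t_pred = travel_time(point)     # predicted travel times, one per station
        misfit[k] = sum(abs((t_obs[i] - t_obs[j]) - (t_pred[i] - t_pred[j]))
                        for i, j in pairs)
    return grid_points[int(np.argmin(misfit))], misfit
```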

  2. Investigating Absolute Value: A Real World Application

    Science.gov (United States)

    Kidd, Margaret; Pagni, David

    2009-01-01

    Making connections between various representations is important in mathematics. In this article, the authors discuss the numeric, algebraic, and graphical representations of sums of absolute values of linear functions. The initial explanations are accessible to all students who have experience graphing and who understand that absolute value simply…
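
    As a concrete instance of the connection the article explores (this example is mine, not taken from the article), a sum of absolute values of linear functions is piecewise linear, with slope changes at the corner points:

```latex
f(x) = |x - 1| + |x + 2| =
\begin{cases}
-2x - 1, & x < -2, \\
3,       & -2 \le x \le 1, \\
2x + 1,  & x > 1,
\end{cases}
```

    so the graph is flat between the corners at x = -2 and x = 1 and rises with slope ±2 outside them, which ties the algebraic, numeric, and graphical views together.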

  3. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    Fritsch, Daniel S.; Raghavan, Suraj; Boxwala, Aziz; Earnhart, Jon; Tracton, Gregg; Cullip, Timothy; Chaney, Edward L.

    1997-01-01

    Purpose: The purpose of this investigation was to develop methods and software for computing realistic digitally reconstructed electronic portal images with known setup errors for use as benchmark test cases for evaluation and intercomparison of computer-based methods for image matching and detecting setup errors in electronic portal images. Methods and Materials: An existing software tool for computing digitally reconstructed radiographs was modified to compute simulated megavoltage images. An interface was added to allow the user to specify which setup parameter(s) will contain computer-induced random and systematic errors in a reference beam created during virtual simulation. Other software features include options for adding random and structured noise, Gaussian blurring to simulate geometric unsharpness, histogram matching with a 'typical' electronic portal image, specifying individual preferences for the appearance of the 'gold standard' image, and specifying the number of images generated. The visible male computed tomography data set from the National Library of Medicine was used as the planning image. Results: Digitally reconstructed electronic portal images with known setup errors have been generated and used to evaluate our methods for automatic image matching and error detection. Any number of different sets of test cases can be generated to investigate setup errors involving selected setup parameters and anatomic volumes. This approach has proved to be invaluable for determination of error detection sensitivity under ideal (rigid body) conditions and for guiding further development of image matching and error detection methods. Example images have been successfully exported for similar use at other sites. Conclusions: Because absolute truth is known, digitally reconstructed electronic portal images with known setup errors are well suited for evaluation of computer-aided image matching and error detection methods. High-quality planning images, such as

  4. Application of Joint Error Maximal Mutual Compensation to hexapod robots

    DEFF Research Database (Denmark)

    Veryha, Yauheni; Petersen, Henrik Gordon

    2008-01-01

    A good practice to ensure high-positioning accuracy in industrial robots is to use joint error maximum mutual compensation (JEMMC). This paper presents an application of JEMMC for positioning of hexapod robots to improve end-effector positioning accuracy. We developed an algorithm and simulation ...

  5. A novel setup for the determination of absolute cross sections for low-energy electron induced strand breaks in oligonucleotides - The effect of the radiosensitizer 5-fluorouracil

    International Nuclear Information System (INIS)

    Rackwitz, J.; Rankovic, M.L.; Milosavljevic, A.R.; Bald, I.

    2017-01-01

    Low-energy electrons (LEEs) play an important role in DNA radiation damage. Here we present a method to quantify LEE-induced strand breakage in well-defined oligonucleotide single strands in terms of absolute cross sections. An LEE irradiation setup covering electron energies <500 eV is constructed and optimized to irradiate DNA origami triangles carrying well-defined oligonucleotide target strands. Measurements are presented for 10.0 and 5.5 eV for different oligonucleotide targets. The determination of absolute strand break cross sections is performed by atomic force microscopy analysis. An accurate fluence determination ensures small margins of error in the determined absolute single strand break cross sections σ_SSB. In this way, the influence of sequence modification with the radiosensitizer 5-fluorouracil (5FU) is studied using an absolute and a relative data analysis. We demonstrate an increase in the strand break yields of 5FU-containing oligonucleotides by a factor of 1.5 to 1.6 compared with non-modified oligonucleotide sequences when irradiated with 10 eV electrons. (authors)
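
    Schematically, and with symbols that belong to this sketch rather than to the paper's notation, an absolute strand-break cross section is the break count normalized by the number of exposed targets and the electron fluence:

```latex
\sigma_{\mathrm{SSB}} = \frac{N_{\mathrm{SSB}}}{N_{\mathrm{T}}\, F},
\qquad F = \frac{N_{e}}{A} \quad \text{(electrons per unit area)}
```

    which is why the record stresses accurate fluence determination: any relative error in F propagates one-to-one into σ_SSB.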

  6. Dependence of absolute magnitudes (energies) of flares on the cluster age containing flare stars

    International Nuclear Information System (INIS)

    Parsamyan, Eh.S.

    1976-01-01

    Dependences between Δm_u and m_u are given for the Orion, NGC 7000, Pleiades, and Praesepe aggregations. Maximum absolute flare values have been calculated for stars of different luminosities. It has been shown that the flare values can be bounded by a straight line, which represents the distribution of maximum amplitude values for stars of different luminosities in an aggregation. The parameters k and m₀ characterizing these lines for the Orion, NGC 7000, Pleiades, and Praesepe aggregations, and their dependence on the age T, are presented. From the dependence between k (the angular coefficient of the straight lines) and log T for aggregations of known T, the age of aggregations containing a large number of flare stars can be found. The age of flare stars in the neighbourhood of the Sun has been determined. The age of UV Ceti has been shown to exceed that of the other stars by an order of magnitude

  7. Approach To Absolute Zero

    Indian Academy of Sciences (India)

    more and more difficult to remove heat as one approaches absolute zero. This is the … A new and active branch of engineering … This temperature is called the critical temperature, T_c. For sulfur dioxide the critical … adsorbent charcoal.

  8. Power and sample size calculations in the presence of phenotype errors for case/control genetic association studies

    Directory of Open Access Journals (Sweden)

    Finch Stephen J

    2005-04-01

    Full Text Available Abstract Background Phenotype error causes a reduction in the power to detect genetic association. We present a quantification of the effect of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors, together with equivalent formulas for misclassification cost (the percentage increase in the minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) as a control (respectively, case). Power is verified by computer simulation. Results Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001, and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected as a case becomes infinitely large, while the cost of misclassifying an affected as a control approaches 0. Conclusion Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
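
    Once the non-centrality parameter is in hand, the asymptotic power calculation itself is a one-liner with scipy. The record derives that parameter as a specific function of genotype frequencies, prevalence, and misclassification rates; that derivation is not reproduced here, so the values below are placeholders:

```python
from scipy.stats import chi2, ncx2

def chi2_power(ncp, df=1, alpha=0.05):
    """Asymptotic power of the Pearson chi-square test: tail probability of the
    non-central chi-square beyond the central critical value."""
    return float(ncx2.sf(chi2.ppf(1.0 - alpha, df), df, ncp))

# Misclassification shrinks the non-centrality parameter, and the power with it.
for ncp in (10.0, 7.5, 5.0):   # placeholder values, e.g. for growing error rates
    print(f"ncp = {ncp:4.1f}  power = {chi2_power(ncp):.3f}")
```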

  9. Reducing errors benefits the field-based learning of a fundamental movement skill in children.

    Science.gov (United States)

    Capio, C M; Poolton, J M; Sit, C H P; Holmstrom, M; Masters, R S W

    2013-03-01

    Proficient fundamental movement skills (FMS) are believed to form the basis of more complex movement patterns in sports. This study examined the development of the FMS of overhand throwing in children through either an error-reduced (ER) or error-strewn (ES) training program. Students (n = 216), aged 8-12 years (M = 9.16, SD = 0.96), practiced overhand throwing in either a program that reduced errors during practice (ER) or one that was error-strewn (ES). The ER program reduced errors by incrementally raising the task difficulty, while the ES program incrementally lowered the task difficulty. Process-oriented assessment of throwing movement form (Test of Gross Motor Development-2) and product-oriented assessment of throwing accuracy (absolute error) were performed. Changes in performance were examined among children in the upper and lower quartiles of the pretest throwing accuracy scores. ER training participants showed greater gains in movement form and accuracy, and performed throwing more effectively with a concurrent secondary cognitive task. Movement form improved among girls, while throwing accuracy improved among children with low ability. Reduced performance errors in FMS training resulted in greater learning than a program that did not restrict errors. The reduced cognitive processing costs (effective dual-task performance) associated with such an approach suggest its potential benefits for children with developmental conditions. © 2011 John Wiley & Sons A/S.

  10. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and on the speed of the elastic waves, research on the estimation of time delay has been one of the key issues in leak locating with the time-arrival-difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimate of the time delay. Experiments have shown that this method provides much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing has been presented. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which applies a weighting to the significant frequencies.
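
    The core of any time-arrival-difference locator is the cross-correlation peak; the maximum likelihood window of the record then weights the spectrum before the peak search. The sketch below shows only the unweighted baseline, with a toy delayed-noise signal (all names hypothetical):

```python
import numpy as np

def time_delay(sig_a, sig_b, fs):
    """Delay D (seconds) such that sig_b(t) ~ sig_a(t - D), from the peak of
    the full cross-correlation of the mean-removed signals."""
    xc = np.correlate(sig_a - sig_a.mean(), sig_b - sig_b.mean(), mode="full")
    lag = int(np.argmax(xc)) - (len(sig_b) - 1)   # lag k maximising sum a[n+k]*b[n]
    return -lag / fs

rng = np.random.default_rng(3)
s = np.convolve(rng.normal(size=4000), np.ones(8) / 8.0, mode="same")  # band-limited noise
d = time_delay(s[25:], s[:-25], fs=1000.0)   # sig_b lags sig_a: d ~ +0.025 s
# With sensor spacing L and wave speed c, the leak sits at (L - c * d) / 2 from sensor A.
```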

  11. Absolute spectrophotometry of Nova Cygni 1975

    International Nuclear Information System (INIS)

    Kontizas, E.; Kontizas, M.; Smyth, M.J.

    1976-01-01

    Radiometric photoelectric spectrophotometry of Nova Cygni 1975 was carried out on 1975 August 31, September 2, 3. α Lyr was used as reference star and its absolute spectral energy distribution was used to reduce the spectrophotometry of the nova to absolute units. Emission strengths of Hα, Hβ, Hγ (in W cm⁻²) were derived. The Balmer decrement Hα:Hβ:Hγ was compared with theory, and found to deviate less than had been reported for an earlier nova. (author)

  12. Maximum likelihood estimation of the parameters of nonminimum phase and noncausal ARMA models

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The well-known prediction-error-based maximum likelihood (PEML) method can only handle minimum phase ARMA models. This paper presents a new method known as the back-filtering-based maximum likelihood (BFML) method, which can handle nonminimum phase and noncausal ARMA models. The BFML method...... is identical to the PEML method in the case of a minimum phase ARMA model, and it turns out that the BFML method incorporates a noncausal ARMA filter with poles outside the unit circle for estimation of the parameters of a causal, nonminimum phase ARMA model...

  13. The Pragmatics of "Unruly" Dative Absolutes in Early Slavic

    Directory of Open Access Journals (Sweden)

    Daniel E. Collins

    2011-08-01

    Full Text Available This chapter examines some uses of the dative absolute in Old Church Slavonic and in early recensional Slavonic texts that depart from notions of how Indo-European absolute constructions should behave, either because they have subjects coreferential with the (putative main-clause subjects or because they function as if they were main clauses in their own right. Such "noncanonical" absolutes have generally been written off as mechanistic translations or as mistakes by scribes who did not understand the proper uses of the construction. In reality, the problem is not with literalistic translators or incompetent scribes but with the definition of the construction itself; it is quite possible to redefine the Early Slavic dative absolute in a way that accounts for the supposedly deviant cases. While the absolute is generally dependent semantically on an adjacent unit of discourse, it should not always be regarded as subordinated syntactically. There are good grounds for viewing some absolutes not as dependent clauses but as independent sentences whose collateral character is an issue not of syntax but of the pragmatics of discourse.

  14. Excited-state structure and electronic dephasing time of Nile blue from absolute resonance Raman intensities

    Science.gov (United States)

    Lawless, Mary K.; Mathies, Richard A.

    1992-06-01

    Absolute resonance Raman cross sections are measured for Nile blue 690 perchlorate dissolved in ethylene glycol with excitation at 514, 531, and 568 nm. These values and the absorption spectrum are modeled using a time-dependent wave packet formalism. The excited-state equilibrium geometry changes are quantitated for 40 resonance Raman active modes, seven of which (590, 1141, 1351, 1429, 1492, 1544, and 1640 cm⁻¹) carry 70% of the total resonance Raman intensity. This demonstrates that in addition to the prominent 590 and 1640 cm⁻¹ modes, a large number of vibrational degrees of freedom are Franck-Condon coupled to the electronic transition. After exposure of the explicit vibrational progressions, the residual absorption linewidth is separated into its homogeneous [350 cm⁻¹ half-width at half-maximum (HWHM)] and inhomogeneous (313 cm⁻¹ HWHM) components through an analysis of the absolute Raman cross sections. The value of the electronic dephasing time derived from this study (25 fs) compares well to previously published results. These data should be valuable in multimode modeling of femtosecond experiments on Nile blue.

  15. Maximum Power Point Tracking of Photovoltaic System for Traffic Light Application

    Directory of Open Access Journals (Sweden)

    Riza Muhida

    2013-07-01

    Full Text Available A photovoltaic traffic light system is a significant application of a renewable energy source. Local authorities develop such systems as an alternative to paying fees to a power supplier whose power comes from conventional energy sources. Since photovoltaic (PV) modules still have relatively low conversion efficiency, a maximum power point tracking (MPPT) control method is applied to the traffic light system. MPPT is intended to capture the maximum power during the daytime in order to charge the battery at the maximum rate, so that the stored energy can be used at night or on cloudy days. The MPPT is implemented as a DC-DC converter that can step the voltage up or down to achieve the maximum power, using pulse width modulation (PWM) control. In experiments, the operating voltage under MPPT was 16.454 V, an error of 2.6% compared with the maximum power point voltage of the PV module, 16.9 V. Based on this result, the MPPT control works successfully to deliver power from the PV module to the battery at close to the maximum rate.
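
    To make the duty-ratio search concrete, here is a minimal perturb-and-observe loop, a common MPPT variant; the paper's controller is not necessarily this one, and the PV curve below is a toy model chosen only so that its maximum power point sits near the 16.9 V quoted above.

```python
def pv_power(v):
    """Toy PV curve for an ~17 V module: current falls off steeply near
    the open-circuit voltage, putting the power peak near 16.9 V."""
    i_sc, v_oc = 5.0, 21.0                 # hypothetical module ratings
    i = i_sc * (1.0 - (v / v_oc) ** 12)
    return max(v * i, 0.0)

def perturb_and_observe(v=12.0, step=0.05, iters=300):
    """Classic P&O MPPT: keep stepping the operating voltage (via the
    converter duty ratio) in whichever direction last increased power."""
    p_prev, direction = pv_power(v), +1
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:                     # power dropped: reverse step
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"operating point ≈ {v_mpp:.2f} V, {p_mpp:.1f} W")
```

    In a real converter the voltage step would be applied by nudging the PWM duty ratio rather than by setting v directly.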

  16. Absolutyzm i pluralizm (Absolutism and Pluralism)

    Directory of Open Access Journals (Sweden)

    Renata Ziemińska

    2005-06-01

    Full Text Available Alethic absolutism is the thesis that propositions cannot be more or less true, that they are true or false forever (if true at all), and that their truth is independent of any circumstances of their assertion. In its negative version, which is easier to defend, alethic absolutism claims that the very same proposition cannot be both true and false relative to the circumstances of its assertion. Simple alethic pluralism is the thesis that we have many concepts of truth. It is a very good way to dissolve the controversy between alethic relativism and absolutism. The many philosophical concepts of truth are the best reason for such pluralism. If a concept is the meaning of a name, we have many concepts of truth because the name 'truth' has been understood in many ways. The variety of meanings, however, can be superficial. Beneath it we can find one idea of truth expressed in the correspondence truism or schema (T). The content of the truism is too poor to be the content of any one concept of truth, so it is usually connected with some picture of the world (an ontology), and we have as many concepts of truth as pictures of the world. The author proposes a hierarchical pluralism with a privileged classic (or, in a weak sense, correspondence) concept of truth as an absolute property.

  17. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...... symmetric non-linear error correction considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes....

  19. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  20. Introducing the Mean Absolute Deviation "Effect" Size

    Science.gov (United States)

    Gorard, Stephen

    2015-01-01

    This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…
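
    For readers who want the arithmetic: one common reading of this effect size is the difference in group means divided by a mean absolute deviation, which the sketch below implements (scaling by the control group's MAD is our choice, not necessarily Gorard's exact formulation).

```python
import numpy as np

def mad(x):
    """Mean absolute deviation about the mean."""
    x = np.asarray(x, dtype=float)
    return float(np.mean(np.abs(x - x.mean())))

def mad_effect_size(treatment, control):
    """Mean difference scaled by the control group's mean absolute
    deviation (the choice of scaling group is ours)."""
    return (np.mean(treatment) - np.mean(control)) / mad(control)

rng = np.random.default_rng(1)
treated = rng.normal(10.5, 2.0, 200)
control = rng.normal(10.0, 2.0, 200)
print(round(mad_effect_size(treated, control), 2))
```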

  1. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  2. THE ABSOLUTE MAGNITUDE OF RRc VARIABLES FROM STATISTICAL PARALLAX

    Energy Technology Data Exchange (ETDEWEB)

    Kollmeier, Juna A.; Burns, Christopher R.; Thompson, Ian B.; Preston, George W.; Crane, Jeffrey D.; Madore, Barry F.; Morrell, Nidia; Prieto, José L.; Shectman, Stephen; Simon, Joshua D.; Villanueva, Edward [Observatories of the Carnegie Institution of Washington, 813 Santa Barbara Street, Pasadena, CA 91101 (United States); Szczygieł, Dorota M.; Gould, Andrew [Department of Astronomy, The Ohio State University, 4051 McPherson Laboratory, Columbus, OH 43210 (United States); Sneden, Christopher [Department of Astronomy, University of Texas at Austin, TX 78712 (United States); Dong, Subo [Institute for Advanced Study, 500 Einstein Drive, Princeton, NJ 08540 (United States)

    2013-09-20

    We present the first definitive measurement of the absolute magnitude of RR Lyrae c-type variable stars (RRc) determined purely from statistical parallax. We use a sample of 242 RRc variables selected from the All Sky Automated Survey for which high-quality light curves, photometry, and proper motions are available. We obtain high-resolution echelle spectra for these objects to determine radial velocities and abundances as part of the Carnegie RR Lyrae Survey. We find that M_{V,RRc} = 0.59 ± 0.10 at a mean metallicity of [Fe/H] = –1.59. This is to be compared with previous estimates for RRab stars (M_{V,RRab} = 0.76 ± 0.12) and the only direct measurement of an RRc absolute magnitude (RZ Cephei, M_{V,RRc} = 0.27 ± 0.17). We find the bulk velocity of the halo relative to the Sun to be (W_π, W_θ, W_z) = (12.0, –209.9, 3.0) km s⁻¹ in the radial, rotational, and vertical directions with dispersions (σ_{W_π}, σ_{W_θ}, σ_{W_z}) = (150.4, 106.1, 96.0) km s⁻¹. For the disk, we find (W_π, W_θ, W_z) = (13.0, –42.0, –27.3) km s⁻¹ relative to the Sun with dispersions (σ_{W_π}, σ_{W_θ}, σ_{W_z}) = (67.7, 59.2, 54.9) km s⁻¹. Finally, as a byproduct of our statistical framework, we are able to demonstrate that UCAC2 proper-motion errors are significantly overestimated as verified by UCAC4.

  3. Propagation of angular errors in two-axis rotation systems

    Science.gov (United States)

    Torrington, Geoffrey K.

    2003-10-01

    Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates to other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.
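
    The root-sum-of-squares budget described here is easy to express in code. The tolerances and sensitivity weights below are hypothetical placeholders, not the paper's tabulated values.

```python
import math

def rss_budget(sources, k=1.0):
    """Root-sum-of-squares combination of independent error sources.
    `sources` maps a name to (tolerance, sensitivity weight); `k` is the
    coverage factor (k=1 ~ 67% confidence, k=2 ~ 95%)."""
    return k * math.sqrt(sum((w * tol) ** 2 for tol, w in sources.values()))

# Hypothetical tolerances (arcsec) and sensitivity weights.
sources = {
    "axis non-orthogonality": (10.0, 0.8),
    "encoder error":          (5.0, 1.0),
    "mounting-plate tilt":    (8.0, 0.5),
}
print(f"67% budget: {rss_budget(sources, k=1):.1f} arcsec")
print(f"95% budget: {rss_budget(sources, k=2):.1f} arcsec")
```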

  4. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Absolute calibration in vivo measurement systems

    International Nuclear Information System (INIS)

    Kruchten, D.A.; Hickman, D.P.

    1991-02-01

    Lawrence Livermore National Laboratory (LLNL) is currently investigating a new method for obtaining absolute calibration factors for radiation measurement systems used to measure internally deposited radionuclides in vivo. Absolute calibration of in vivo measurement systems will eliminate the need to generate a series of human surrogate structures (i.e., phantoms) for calibrating in vivo measurement systems. The absolute calibration of in vivo measurement systems utilizes magnetic resonance imaging (MRI) to define physiological structure, size, and composition. The MRI image provides a digitized representation of the physiological structure, which allows for any mathematical distribution of radionuclides within the body. Using Monte Carlo transport codes, the emission spectrum from the body is predicted. The in vivo measurement equipment is calibrated using the Monte Carlo code and adjusting for the intrinsic properties of the detection system. The calibration factors are verified using measurements of existing phantoms and previously obtained measurements of human volunteers. 8 refs

  6. A novel simultaneous streak and framing camera without principle errors

    Science.gov (United States)

    Jingzhen, L.; Fengshan, S.; Ningwen, L.; Xiangdong, G.; Bin, H.; Qingyang, W.; Hongyi, C.; Yi, C.; Xiaowei, L.

    2018-02-01

    A novel simultaneous streak and framing camera with continuous access, whose complete information is far more important for the exact interpretation and precise evaluation of many detonation events and shockwave phenomena, has been developed. The camera, with a maximum imaging frequency of 2 × 10⁶ fps and a maximum scanning velocity of 16.3 mm/μs, has fine imaging properties: an eigen resolution of over 40 lp/mm in the temporal direction and over 60 lp/mm in the spatial direction with a framing frequency principle error of zero for the framing record, and a maximum time resolving power of 8 ns with a scanning velocity nonuniformity of 0.136%~-0.277% for the streak record. The test data have verified the performance of the camera quantitatively. This camera, which simultaneously records frames and streaks with parallax-free and identical time bases, is characterized by a plane optical system at oblique incidence (different from a space system), an innovative camera obscura without principle errors, and a high-velocity motor-driven beryllium-like rotating mirror made of high-strength aluminum alloy with a cellular lateral structure. Experiments demonstrate that the camera is very useful and reliable for taking high quality pictures of detonation events.

  7. Incorrect Weighting of Absolute Performance in Self-Assessment

    Science.gov (United States)

    Jeffrey, Scott A.; Cozzarin, Brian

    Students spend much of their life in an attempt to assess their aptitude for numerous tasks. For example, they expend a great deal of effort to determine their academic standing given a distribution of grades. This research finds that students use their absolute performance, or percentage correct as a yardstick for their self-assessment, even when relative standing is much more informative. An experiment shows that this reliance on absolute performance for self-evaluation causes a misallocation of time and financial resources. Reasons for this inappropriate responsiveness to absolute performance are explored.

  8. Absolute instrumental neutron activation analysis at Lawrence Livermore Laboratory

    International Nuclear Information System (INIS)

    Heft, R.E.

    1977-01-01

    The Environmental Science Division at Lawrence Livermore Laboratory has in use a system of absolute Instrumental Neutron Activation Analysis (INAA). Basically, absolute INAA is dependent upon the absolute measurement of the disintegration rates of the nuclides produced by neutron capture. From such disintegration rate data, the amount of the target element present in the irradiated sample is calculated by dividing the observed disintegration rate for each nuclide by the expected value for the disintegration rate per microgram of the target element that produced the nuclide. In absolute INAA, the expected value for disintegration rate per microgram is calculated from nuclear parameters and from measured values of both thermal and epithermal neutron fluxes which were present during irradiation. Absolute INAA does not depend on the concurrent irradiation of elemental standards but does depend on the values for thermal and epithermal neutron capture cross-sections for the target nuclides. A description of the analytical method is presented
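
    The division at the heart of absolute INAA can be sketched as follows; the activation model (thermal capture plus epithermal resonance capture with a saturation factor) and all numeric inputs are illustrative assumptions, not LLNL's actual procedure.

```python
import math

N_A = 6.022e23  # Avogadro's number

def expected_rate_per_ug(atomic_mass, abundance, sigma_th_b, i0_b,
                         phi_th, phi_epi, half_life_s, t_irr_s):
    """Expected disintegration rate (Bq) per microgram of target element
    at the end of irradiation, for a single activation product."""
    atoms_per_ug = 1e-6 / atomic_mass * N_A * abundance
    # Capture rate per atom: thermal flux x thermal cross section plus
    # epithermal flux x resonance integral (cross sections in barns).
    capture = (phi_th * sigma_th_b + phi_epi * i0_b) * 1e-24
    lam = math.log(2.0) / half_life_s
    return atoms_per_ug * capture * (1.0 - math.exp(-lam * t_irr_s))

def element_mass_ug(measured_rate_bq, **nuclear_data):
    """The central division of absolute INAA: observed disintegration
    rate over the expected rate per microgram."""
    return measured_rate_bq / expected_rate_per_ug(**nuclear_data)

# Hypothetical fluxes and nuclear data, for shape only (roughly Mn-55).
print(element_mass_ug(250.0, atomic_mass=55.0, abundance=1.0,
                      sigma_th_b=13.3, i0_b=14.0, phi_th=1e13,
                      phi_epi=5e11, half_life_s=9284.0, t_irr_s=3600.0))
```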

  9. Determining and monitoring of maximum permissible power for HWRR-3

    International Nuclear Information System (INIS)

    Jia Zhanli; Xiao Shigang; Jin Huajin; Lu Changshen

    1987-01-01

    The operating power of a reactor is an important parameter to be monitored. This report briefly describes the determination and monitoring of maximum permissible power for HWRR-3. The calculation method is described, and the results of the calculation and an analysis of error are also given. On-line calculation and real-time monitoring have been realized at the heavy water reactor, providing a real-time and reliable supervision of the reactor. This makes operation convenient and increases reliability.

  10. Absolute Navigation Information Estimation for Micro Planetary Rovers

    Directory of Open Access Journals (Sweden)

    Muhammad Ilyas

    2016-03-01

    Full Text Available This paper provides algorithms to estimate absolute navigation information, e.g., absolute attitude and position, by using low power, weight and volume Microelectromechanical Systems-type (MEMS sensors that are suitable for micro planetary rovers. Planetary rovers appear to be easily navigable robots due to their extreme slow speed and rotation but, unfortunately, the sensor suites available for terrestrial robots are not always available for planetary rover navigation. This makes them difficult to navigate in a completely unexplored, harsh and complex environment. Whereas the relative attitude and position can be tracked in a similar way as for ground robots, absolute navigation information, unlike in terrestrial applications, is difficult to obtain for a remote celestial body, such as Mars or the Moon. In this paper, an algorithm called the EASI algorithm (Estimation of Attitude using Sun sensor and Inclinometer is presented to estimate the absolute attitude using a MEMS-type sun sensor and inclinometer, only. Moreover, the output of the EASI algorithm is fused with MEMS gyros to produce more accurate and reliable attitude estimates. An absolute position estimation algorithm has also been presented based on these on-board sensors. Experimental results demonstrate the viability of the proposed algorithms and the sensor suite for low-cost and low-weight micro planetary rovers.

  11. Dosimetric Implications of Residual Tracking Errors During Robotic SBRT of Liver Metastases

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Mark [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Tuen Mun Hospital, Hong Kong (China); Grehn, Melanie [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); Cremers, Florian [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Siebert, Frank-Andre [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Wurster, Stefan [Saphir Radiosurgery Center Northern Germany, Güstrow (Germany); Department for Radiation Oncology, University Medicine Greifswald, Greifswald (Germany); Huttenlocher, Stefan [Saphir Radiosurgery Center Northern Germany, Güstrow (Germany); Dunst, Jürgen [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Department for Radiation Oncology, University Clinic Copenhagen, Copenhagen (Denmark); Hildebrandt, Guido [Department for Radiation Oncology, University Medicine Rostock, Rostock (Germany); Schweikard, Achim [Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); Rades, Dirk [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Ernst, Floris [Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); and others

    2017-03-15

    Purpose: Although the metric precision of robotic stereotactic body radiation therapy in the presence of breathing motion is widely known, we investigated the dosimetric implications of breathing phase–related residual tracking errors. Methods and Materials: In 24 patients (28 liver metastases) treated with the CyberKnife, we recorded the residual correlation, prediction, and rotational tracking errors from 90 fractions and binned them into 10 breathing phases. The average breathing phase errors were used to shift and rotate the clinical tumor volume (CTV) and planning target volume (PTV) for each phase to calculate a pseudo 4-dimensional error dose distribution for comparison with the original planned dose distribution. Results: The median systematic directional correlation, prediction, and absolute aggregate rotation errors were 0.3 mm (range, 0.1-1.3 mm), 0.01 mm (range, 0.00-0.05 mm), and 1.5° (range, 0.4°-2.7°), respectively. Dosimetrically, 44%, 81%, and 92% of all voxels differed by less than 1%, 3%, and 5% of the planned local dose, respectively. The median coverage reduction for the PTV was 1.1% (range in coverage difference, −7.8% to +0.8%), significantly depending on correlation (P=.026) and rotational (P=.005) error. With a 3-mm PTV margin, the median coverage change for the CTV was 0.0% (range, −1.0% to +5.4%), not significantly depending on any investigated parameter. In 42% of patients, the 3-mm margin did not fully compensate for the residual tracking errors, resulting in a CTV coverage reduction of 0.1% to 1.0%. Conclusions: For liver tumors treated with robotic stereotactic body radiation therapy, a safety margin of 3 mm is not always sufficient to cover all residual tracking errors. Dosimetrically, this translates into only small CTV coverage reductions.

  13. Oligomeric models for estimation of polydimethylsiloxane-water partition ratios with COSMO-RS theory: impact of the combinatorial term on absolute error.

    Science.gov (United States)

    Parnis, J Mark; Mackay, Donald

    2017-03-22

    A series of 12 oligomeric models for polydimethylsiloxane (PDMS) were evaluated for their effectiveness in estimating the PDMS-water partition ratio, K_PDMS-w. Models ranging in size and complexity from the –Si(CH₃)₂–O– model previously published by Goss in 2011 to octadeca-methyloctasiloxane (CH₃–(Si(CH₃)₂–O–)₈CH₃) were assessed based on their RMS error with 253 experimental measurements of log K_PDMS-w from six published works. The lowest RMS error for log K_PDMS-w (0.40 in log K) was obtained with the cyclic oligomer decamethyl-cyclo-penta-siloxane (D5), (–Si(CH₃)₂–O–)₅, with the mixing-entropy associated combinatorial term included in the chemical potential calculation. The presence or absence of terminal methyl groups on linear oligomer models is shown to have significant impact only for oligomers containing 1 or 2 –Si(CH₃)₂–O– units. Removal of the combinatorial term resulted in a significant increase in the RMS error for most models, with the smallest increase associated with the largest oligomer studied. The importance of inclusion of the combinatorial term in the chemical potential for liquid oligomer models is discussed.

  14. A Maximum Likelihood Approach to Determine Sensor Radiometric Response Coefficients for NPP VIIRS Reflective Solar Bands

    Science.gov (United States)

    Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong

    2011-01-01

    Optical sensors aboard Earth orbiting satellites such as the next generation Visible/Infrared Imager/Radiometer Suite (VIIRS) assume that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
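
    A minimal sketch of the weighted least-squares core is given below; the weight model (DN noise plus a quantization term, both propagated through the current fit slope) is a stand-in for the paper's full maximum-likelihood error model, and all names and numbers are ours.

```python
import numpy as np

def ml_quadratic_fit(radiance, dn, dn_noise, quant_step=1.0, iters=5):
    """Fit radiance = c0 + c1*dn + c2*dn^2 by iteratively reweighted least
    squares. The weights combine DN noise and digitization error, both
    propagated to radiance through the current fit slope."""
    c = np.polyfit(dn, radiance, 2)[::-1]        # c0, c1, c2
    A = np.vstack([np.ones_like(dn), dn, dn ** 2]).T
    for _ in range(iters):
        slope = c[1] + 2.0 * c[2] * dn           # d(radiance)/d(DN)
        var = (slope * dn_noise) ** 2 + (slope * quant_step) ** 2 / 12.0
        Aw = A / var[:, None]                    # weighted design matrix
        c = np.linalg.solve(A.T @ Aw, Aw.T @ radiance)
    return c

# Synthetic check with a known quadratic response.
rng = np.random.default_rng(2)
dn = np.linspace(100.0, 3000.0, 25)
true = 0.5 + 2e-3 * dn + 3e-8 * dn ** 2
dn_noise = 0.5 + 0.01 * np.sqrt(dn)              # noise in DN counts
obs = true + 2e-3 * dn_noise * rng.standard_normal(dn.size)
print(ml_quadratic_fit(obs, dn, dn_noise))       # ~ [0.5, 2e-3, 3e-8]
```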

  15. Absolute-magnitude distributions of supernovae

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, Dean; Wright, John [Department of Physics, Xavier University of Louisiana, New Orleans, LA 70125 (United States); Jenkins III, Robert L. [Applied Physics Department, Richard Stockton College, Galloway, NJ 08205 (United States); Maddox, Larry, E-mail: drichar7@xula.edu [Department of Chemistry and Physics, Southeastern Louisiana University, Hammond, LA 70402 (United States)

    2014-05-01

    The absolute-magnitude distributions of seven supernova (SN) types are presented. The data used here were primarily taken from the Asiago Supernova Catalogue, but were supplemented with additional data. We accounted for both foreground and host-galaxy extinction. A bootstrap method is used to correct the samples for Malmquist bias. Separately, we generate volume-limited samples, restricted to events within 100 Mpc. We find that the superluminous events (M_B < –21) make up only about 0.1% of all SNe in the bias-corrected sample. The subluminous events (M_B > –15) make up about 3%. The normal Ia distribution was the brightest with a mean absolute blue magnitude of –19.25. The IIP distribution was the dimmest at –16.75.

  16. Calibration with Absolute Shrinkage

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Madsen, Henrik; Thyregod, Poul

    2001-01-01

    In this paper, penalized regression using the L-1 norm on the estimated parameters is proposed for chemometric calibration. The algorithm is of the lasso type, introduced by Tibshirani in 1996 as a linear regression method with a bound on the absolute length of the parameters, but a modification...
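
    For illustration, a lasso-type calibration can be run in a few lines with scikit-learn; this is a generic L1-penalized fit, not the specific modification the abstract refers to, and the spectra below are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
# Hypothetical spectra: 50 samples x 200 wavelengths, 5 informative bands.
X = rng.standard_normal((50, 200))
beta = np.zeros(200)
beta[[10, 40, 95, 150, 180]] = [2.0, -1.5, 1.0, 0.8, -0.6]
y = X @ beta + 0.1 * rng.standard_normal(50)

# The L1 bound shrinks most coefficients exactly to zero, selecting a
# handful of wavelengths -- the property exploited in calibration.
model = Lasso(alpha=0.05).fit(X, y)
print("nonzero coefficients:", int(np.sum(model.coef_ != 0)))
```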

  17. ERROR REDUCTION IN DUCT LEAKAGE TESTING THROUGH DATA CROSS-CHECKS

    Energy Technology Data Exchange (ETDEWEB)

    ANDREWS, J.W.

    1998-12-31

    One way to reduce uncertainty in scientific measurement is to devise a protocol in which more quantities are measured than are absolutely required, so that the result is over constrained. This report develops a method for so combining data from two different tests for air leakage in residential duct systems. An algorithm, which depends on the uncertainty estimates for the measured quantities, optimizes the use of the excess data. In many cases it can significantly reduce the error bar on at least one of the two measured duct leakage rates (supply or return), and it provides a rational method of reconciling any conflicting results from the two leakage tests.
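
    In its simplest form, the over-constrained protocol reduces to an inverse-variance weighted combination of independent estimates of the same leakage rate, as in this sketch (numbers hypothetical):

```python
def combine(estimates):
    """Inverse-variance weighted combination of (value, sigma) pairs.
    Returns the reconciled value and its (smaller) uncertainty."""
    weights = [1.0 / s ** 2 for _, s in estimates]
    value = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
    sigma = (1.0 / sum(weights)) ** 0.5
    return value, sigma

# Supply-leakage rate (cfm) measured by two different test protocols.
print(combine([(120.0, 15.0), (105.0, 10.0)]))   # -> (~109.6, ~8.3)
```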

  18. A multicentre audit of HDR/PDR brachytherapy absolute dosimetry in association with the INTERLACE trial (NCT015662405)

    Science.gov (United States)

    Díez, P.; Aird, E. G. A.; Sander, T.; Gouldstone, C. A.; Sharpe, P. H. G.; Lee, C. D.; Lowe, G.; Thomas, R. A. S.; Simnor, T.; Bownes, P.; Bidmead, M.; Gandon, L.; Eaton, D.; Palmer, A. L.

    2017-12-01

    A UK multicentre audit to evaluate HDR and PDR brachytherapy has been performed using alanine absolute dosimetry. This is the first national UK audit performing an absolute dose measurement at a clinically relevant distance (20 mm) from the source. It was performed in both INTERLACE (a phase III multicentre trial in cervical cancer) and non-INTERLACE brachytherapy centres treating gynaecological tumours. Forty-seven UK centres (including the National Physical Laboratory) were visited. A simulated line source was generated within each centre’s treatment planning system and dwell times calculated to deliver 10 Gy at 20 mm from the midpoint of the central dwell (representative of Point A of the Manchester system). The line source was delivered in a water-equivalent plastic phantom (Barts Solid Water) encased in blocks of PMMA (polymethyl methacrylate) and charge measured with an ion chamber at 3 positions (120° apart, 20 mm from the source). Absorbed dose was then measured with alanine at the same positions and averaged to reduce source positional uncertainties. Charge was also measured at 50 mm from the source (representative of Point B of the Manchester system). Source types included 46 HDR and PDR 192Ir sources, (7 Flexisource, 24 mHDR-v2, 12 GammaMed HDR Plus, 2 GammaMed PDR Plus, 1 VS2000) and 1 HDR 60Co source, (Co0.A86). Alanine measurements when compared to the centres’ calculated dose showed a mean difference (±SD) of  +1.1% (±1.4%) at 20 mm. Differences were also observed between source types and dose calculation algorithm. Ion chamber measurements demonstrated significant discrepancies between the three holes mainly due to positional variation of the source within the catheter (0.4%-4.9% maximum difference between two holes). This comprehensive audit of absolute dose to water from a simulated line source showed all centres could deliver the prescribed dose to within 5% maximum difference between measurement and calculation.

  19. MODEL PREDICTIVE CONTROL FOR PHOTOVOLTAIC STATION MAXIMUM POWER POINT TRACKING SYSTEM

    Directory of Open Access Journals (Sweden)

    I. Elzein

    2015-01-01

    Full Text Available The purpose of this paper is to present an alternative maximum power point tracking (MPPT) algorithm for a photovoltaic module (PVM) to produce the maximum power, Pmax, using the optimal duty ratio, D, for different types of converters and load matching. We present a state-based approach to the design of the maximum power point tracker for a stand-alone photovoltaic power generation system. The system under consideration consists of a solar array with nonlinear time-varying characteristics and a step-up converter with an appropriate filter. The proposed algorithm has the advantages of maximizing the efficiency of the power utilization, can be integrated with other MPPT algorithms without affecting the PVM performance, is excellent for real-time applications, and is a robust analytical method, different from the traditional MPPT algorithms, which are based more on trial and error or on comparisons between present and past states. The procedure to calculate the optimal duty ratio for buck, boost, and buck-boost converters, to transfer the maximum power from a PVM to a load, is presented in the paper. Additionally, the existence and uniqueness of the optimal internal impedance, to transfer the maximum power from a photovoltaic module using load matching, is proved.

  20. Maximum Power Point Tracking Based on Sliding Mode Control

    Directory of Open Access Journals (Sweden)

    Nimrod Vázquez

    2015-01-01

    Full Text Available Solar panels, which have become a good choice, are used to generate and supply electricity in commercial and residential applications. The generated power starts with the solar cells, which exhibit a complex relationship between solar irradiation, temperature, and output power. For this reason a tracking of the maximum power point is required. Traditionally, this has been done by considering only the current and voltage conditions at the photovoltaic panel; however, temperature also influences the process. In this paper the voltage, current, and temperature in the PV system are considered to be part of a sliding surface for the proposed maximum power point tracking; this means a sliding mode controller is applied. The obtained results gave a good dynamic response, in contrast to traditional schemes, which are based only on computational algorithms. A traditional MPPT algorithm was added in order to ensure a low steady-state error.

  1. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research that has addressed their error causation in construction projects they still remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  2. Evaluation of inter-fraction error during prostate radiotherapy

    International Nuclear Information System (INIS)

    Komiyama, Takafumi; Nakamura, Koji; Motoyama, Tsuyoshi; Onishi, Hiroshi; Sano, Naoki

    2008-01-01

    The purpose of this study was to evaluate the inter-fraction error (inter-fraction set-up error + inter-fraction internal organ motion) between treatment planning and delivery during radiotherapy for localized prostate cancer. Twenty-three prostate cancer patients underwent image-guided radical irradiation with the CT-linac system. All patients were treated in the supine position. After set-up with external skin markers, pretherapy CT images were obtained using the CT-linac system and the isocenter displacement was measured. The mean displacement of the isocenter was 1.8 mm, 3.3 mm, and 1.7 mm in the left-right, ventral-dorsal, and cranial-caudal directions, respectively. The maximum displacement of the isocenter was 7 mm, 12 mm, and 9 mm in the left-right, ventral-dorsal, and cranial-caudal directions, respectively. The mean interquartile range of the displacement of the isocenter was 1.8 mm, 3.7 mm, and 2.0 mm in the left-right, ventral-dorsal, and cranial-caudal directions, respectively. In radiotherapy for localized prostate cancer, the inter-fraction error was largest in the ventral-dorsal direction. Errors in the ventral-dorsal direction influence both local control and late adverse effects. Our study suggested that set-up with external skin markers alone is not sufficient for radical radiotherapy for localized prostate cancer, and that a system such as a CT-linac is required for correction of the inter-fraction error. (author)

  3. Dependence of fluence errors in dynamic IMRT on leaf-positional errors varying with time and leaf number

    International Nuclear Information System (INIS)

    Zygmanski, Piotr; Kung, Jong H.; Jiang, Steve B.; Chin, Lee

    2003-01-01

    In d-MLC based IMRT, leaves move along a trajectory that lies within a user-defined tolerance (TOL) about the ideal trajectory specified in a d-MLC sequence file. The MLC controller measures leaf positions multiple times per second and corrects them if they deviate from ideal positions by a value greater than TOL. The magnitude of leaf-positional errors resulting from finite mechanical precision depends on the performance of the MLC motors executing leaf motions and is generally larger if leaves are forced to move at higher speeds. The maximum value of leaf-positional errors can be limited by decreasing TOL. However, due to the inherent time delay in the MLC controller, this may not happen at all times. Furthermore, decreasing the leaf tolerance results in a larger number of beam hold-offs, which in turn leads to a longer delivery time and, paradoxically, to higher chances of leaf-positional errors (≤TOL). On the other hand, the magnitude of leaf-positional errors depends on the complexity of the fluence map to be delivered. Recently, it has been shown that it is possible to determine the actual distribution of leaf-positional errors either by imaging moving MLC apertures with a digital imager or by analyzing a MLC log file saved by the MLC controller. This leads to an important question: what is the relation between the distribution of leaf-positional errors and fluence errors? In this work, we introduce an analytical method to determine this relation in dynamic IMRT delivery. We model MLC errors as Random-Leaf-Positional (RLP) errors described by a truncated normal distribution defined by two characteristic parameters: a standard deviation σ and a cut-off value Δx₀ (Δx₀ ∼ TOL). We quantify fluence errors for two cases: (i) Δx₀ >> σ (unrestricted normal distribution) and (ii) Δx₀ << σ (Δx₀-limited normal distribution). We show that the average fluence error of an IMRT field is proportional to (i) σ/ALPO and (ii) Δx₀/ALPO, respectively, where ALPO is the average leaf-pair opening.
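
    The truncated-normal RLP model is simple to simulate. The sketch below, with made-up parameters, reproduces the proportionality of the mean fluence error to σ/ALPO in the unrestricted regime; the paper's analytical treatment is not reproduced.

```python
import numpy as np

def truncated_normal(sigma, cut, size, rng):
    """Draw leaf-positional errors from N(0, sigma) truncated at +/-cut."""
    out = np.empty(size)
    filled = 0
    while filled < size:
        draw = rng.normal(0.0, sigma, size)
        draw = draw[np.abs(draw) <= cut]
        take = min(draw.size, size - filled)
        out[filled:filled + take] = draw[:take]
        filled += take
    return out

def mean_fluence_error(sigma, cut, alpo, n=100_000, seed=0):
    """Relative fluence error when both edges of an aperture carry RLP
    errors: mean |gap error| over the average leaf-pair opening (ALPO)."""
    rng = np.random.default_rng(seed)
    left = truncated_normal(sigma, cut, n, rng)
    right = truncated_normal(sigma, cut, n, rng)
    return float(np.mean(np.abs(right - left)) / alpo)

for sigma in (0.2, 0.4, 0.8):        # mm; cut-off 2 mm, ALPO 20 mm
    print(sigma, round(mean_fluence_error(sigma, 2.0, 20.0), 4))
```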

  4. Efficacy of intrahepatic absolute alcohol in unresectable hepatocellular carcinoma

    International Nuclear Information System (INIS)

    Farooqi, J.I.; Hameed, K.; Khan, I.U.; Shah, S.

    2001-01-01

    To determine the efficacy of intrahepatic absolute alcohol injection in unresectable hepatocellular carcinoma. A randomized, controlled, experimental and interventional clinical trial. Gastroenterology Department, PGMI, Hayatabad Medical Complex, Peshawar, during the period from June 1998 to June 2000. Thirty patients were treated by percutaneous intrahepatic absolute alcohol injections in repeated sessions; 33 patients were not treated with alcohol, to serve as controls. Both groups were comparable for age, sex and other baseline characteristics. Absolute alcohol therapy significantly improved the quality of life of patients, reduced the tumor size and mortality, and showed significantly better results regarding survival (P< 0.05) than the control group. We conclude that absolute alcohol is a beneficial and safe palliative treatment measure in advanced hepatocellular carcinoma (HCC). (author)

  5. Absolute and Relative Reliability of the Timed 'Up & Go' Test and '30second Chair-Stand' Test in Hospitalised Patients with Stroke

    DEFF Research Database (Denmark)

    Lyders Johansen, Katrine; Derby Stistrup, Rikke; Skibdal Schjøtt, Camilla

    2016-01-01

    OBJECTIVE: The timed 'Up & Go' test and '30second Chair-Stand' test are simple clinical outcome measures widely used to assess functional performance. The reliability of both tests in hospitalised stroke patients is unknown. The purpose was to investigate the relative and absolute reliability...... of both tests in patients admitted to an acute stroke unit. METHODS: Sixty-two patients (men, n = 41) attended two test sessions separated by a one-hour rest. Intraclass correlation coefficients (ICC2,1) were calculated to assess relative reliability. Absolute reliability was expressed as the Standard Error...... of Measurement (with 95% certainty, SEM95) and the Smallest Real Difference (SRD), and as percentages of their respective means if heteroscedasticity was observed in Bland-Altman plots (SEM95% and SRD%). RESULTS: ICC values for interrater reliability were 0.97 and 0.99 for the timed 'Up & Go' test and 0.88 and 0...

  6. Planck absolute entropy of a rotating BTZ black hole

    Science.gov (United States)

    Riaz, S. M. Jawwad

    2018-04-01

    In this paper, the Planck absolute entropy and the Bekenstein-Smarr formula of the rotating Banados-Teitelboim-Zanelli (BTZ) black hole are presented via a complex thermodynamical system contributed by its inner and outer horizons. The redefined entropy approaches zero as the temperature of the rotating BTZ black hole tends to absolute zero, satisfying the Nernst formulation of a black hole. Hence, it can be regarded as the Planck absolute entropy of the rotating BTZ black hole.

  7. Absolute nuclear material assay using count distribution (LAMBDA) space

    Science.gov (United States)

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2012-06-05

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  8. Maximum entropy method approach to the θ term

    International Nuclear Information System (INIS)

    Imachi, Masahiro; Shinno, Yasuhiko; Yoneyama, Hiroshi

    2004-01-01

    In Monte Carlo simulations of lattice field theory with a θ term, one confronts the complex weight problem, or the sign problem. This is circumvented by performing the Fourier transform of the topological charge distribution P(Q). This procedure, however, causes a flattening phenomenon of the free energy f(θ), which makes study of the phase structure unfeasible. In order to treat this problem, we apply the maximum entropy method (MEM) to a Gaussian form of P(Q), which serves as a good example to test whether the MEM can be applied effectively to the θ term. We study the case with flattening as well as that without flattening. In the latter case, the results of the MEM agree with those obtained from the direct application of the Fourier transform. For the former, the MEM gives a smoother f(θ) than that of the Fourier transform. Among the various default models investigated, the images which yield the least error do not show flattening, although some others cannot be excluded given the uncertainty related to statistical error. (author)

  9. THE DISKMASS SURVEY. II. ERROR BUDGET

    International Nuclear Information System (INIS)

    Bershady, Matthew A.; Westfall, Kyle B.; Verheijen, Marc A. W.; Martinsson, Thomas; Andersen, David R.; Swaters, Rob A.

    2010-01-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ_*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface-brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ_*^disk), and disk maximality (F_{*,max}^disk ≡ V_{*,max}^disk / V_c). Random and systematic errors in these quantities for individual galaxies will be ∼25%, while survey precision for sample quartiles are reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  10. Efficient algorithms for maximum likelihood decoding in the surface code

    Science.gov (United States)

    Bravyi, Sergey; Suchara, Martin; Vargo, Alexander

    2014-09-01

    We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n²), where n is the number of code qubits. Our implementation uses a reduction from MLD to simulation of matchgate quantum circuits. This reduction however requires a special noise model with independent bit-flip and phase-flip errors. Secondly, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(nχ³), where χ is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization-group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with a bond dimension χ to approximate the sequence of tensors arising in the course of contraction. We benchmark the MPS-based decoder against the standard minimum weight matching decoder, observing a significant reduction of the logical error probability for χ ≥ 4.

  11. Tests for detecting overdispersion in models with measurement error in covariates.

    Science.gov (United States)

    Yang, Yingsi; Wong, Man Yu

    2015-11-30

    Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.
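
    As a baseline for comparison, a classical score test for Poisson overdispersion can be sketched with statsmodels; no measurement-error correction is applied here, since that adjustment is the paper's actual contribution.

```python
import numpy as np
import statsmodels.api as sm

def overdispersion_score_test(y, X):
    """Dean-style score statistic for Poisson overdispersion; compare to
    a standard normal. Measurement error in X is ignored here."""
    fit = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
    mu = fit.fittedvalues
    return np.sum((y - mu) ** 2 - y) / np.sqrt(2.0 * np.sum(mu ** 2))

# Overdispersed synthetic counts: negative binomial, mean exp(0.3 + 0.5x).
rng = np.random.default_rng(4)
x = rng.standard_normal(500)
lam = np.exp(0.3 + 0.5 * x)
y = rng.negative_binomial(2, 2.0 / (2.0 + lam))
print(overdispersion_score_test(y, x))   # large positive => overdispersion
```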

  12. Potential errors in optical density measurements due to scanning side in EBT and EBT2 Gafchromic film dosimetry.

    Science.gov (United States)

    Desroches, Joannie; Bouchard, Hugo; Lacroix, Frédéric

    2010-04-01

    The purpose of this study is to determine the effect on the measured optical density of scanning on either side of a Gafchromic EBT and EBT2 film using an Epson (Epson Canada Ltd., Toronto, Ontario) 10000XL flat bed scanner. Calibration curves were constructed using EBT2 film scanned in landscape orientation in both reflection and transmission mode on an Epson 10000XL scanner. Calibration curves were also constructed using EBT film. Potential errors due to an optical density difference from scanning the film on either side ("face up" or "face down") were simulated. Scanning the film face up or face down on the scanner bed while keeping the film angular orientation constant affects the measured optical density when scanning in reflection mode. In contrast, no statistically significant effect was seen when scanning in transmission mode. This effect can significantly affect relative and absolute dose measurements. As an application example, the authors demonstrate potential errors of 17.8% by inverting the film scanning side on the gamma index for 3%-3 mm criteria on a head and neck intensity modulated radiotherapy plan, and errors in absolute dose measurements ranging from 10% to 35% between 2 and 5 Gy. Process consistency is the key to obtaining accurate and precise results in Gafchromic film dosimetry. When scanning in reflection mode, care must be taken to place the film consistently on the same side on the scanner bed.

  13. Approach to Absolute Zero

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 2, Issue 10, October 1997, pp. 8-16. Approach to Absolute Zero Below 10 milli-Kelvin. R Srinivasan. Series Article. Permanent link: https://www.ias.ac.in/article/fulltext/reso/002/10/0008-0016

  14. Absolute total and one and two electron transfer cross sections for Ar8+ on Ar as a function of energy

    International Nuclear Information System (INIS)

    Vancura, J.; Kostroun, V.O.

    1992-01-01

    The absolute total and one and two electron transfer cross sections for Ar⁸⁺ on Ar were measured as a function of projectile laboratory energy from 0.090 to 0.550 keV/amu. The effective one electron transfer cross section dominates above 0.32 keV/amu, while below this energy, the effective two electron transfer starts to become appreciable. The total cross section varies by a factor over the energy range explored. The overall error in the cross section measurement is estimated to be ± 15%

  15. Population-based absolute risk estimation with survey data

    Science.gov (United States)

    Kovalchik, Stephanie A.; Pfeiffer, Ruth M.

    2013-01-01

    Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614
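
    To make the hazard decomposition concrete, here is a sketch of the cumulative incidence (absolute risk) of one cause under piecewise-constant cause-specific hazards, the piecewise exponential baseline option mentioned above; the hazard values and intervals are invented for illustration:

    ```python
    import numpy as np

    def absolute_risk(breaks, hazards, cause=0):
        """Cumulative incidence of one cause under piecewise-constant
        cause-specific hazards. breaks: interval endpoints; hazards: (K, J)."""
        hazards = np.asarray(hazards, float)
        widths = np.diff(np.concatenate(([0.0], breaks)))
        total = hazards.sum(axis=0)                    # overall hazard per interval
        H_end = np.cumsum(total * widths)              # cumulative hazard at interval ends
        S_start = np.exp(-np.concatenate(([0.0], H_end[:-1])))
        # Closed form of the integral of h_k(u) * S(u) over each interval
        with np.errstate(invalid="ignore", divide="ignore"):
            frac = np.where(total > 0, hazards[cause] / total, 0.0)
        inc = frac * S_start * (1.0 - np.exp(-total * widths))
        return float(inc.sum())

    # Two competing causes, yearly intervals over 10 years
    breaks = np.arange(1, 11, dtype=float)
    hazards = np.array([[0.01] * 10,     # cause of interest
                        [0.02] * 10])    # competing cause
    print(f"10-year absolute risk: {absolute_risk(breaks, hazards):.4f}")
    ```

    The survey setting adds individualized relative risks, sampling weights, and the influence-based variance estimation described above on top of this basic calculation.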

  16. Direct comparison of phase-sensitive vibrational sum frequency generation with maximum entropy method: case study of water.

    Science.gov (United States)

    de Beer, Alex G F; Samson, Jean-Sebastièn; Hua, Wei; Huang, Zishuai; Chen, Xiangke; Allen, Heather C; Roke, Sylvie

    2011-12-14

    We present a direct comparison of phase-sensitive sum-frequency generation experiments with phase reconstruction obtained by the maximum entropy method. We show that both methods lead to the same complex spectrum. Furthermore, we discuss the strengths and weaknesses of each of these methods, analyzing possible sources of experimental and analytical errors. A simulation program for maximum entropy phase reconstruction is available at: http://lbp.epfl.ch/. © 2011 American Institute of Physics

  17. Absolute marine gravimetry with matter-wave interferometry.

    Science.gov (United States)

    Bidel, Y; Zahzam, N; Blanchard, C; Bonnin, A; Cadoret, M; Bresson, A; Rouxel, D; Lequentrec-Lalancette, M F

    2018-02-12

    Measuring gravity from an aircraft or a ship is essential in geodesy, geophysics, mineral and hydrocarbon exploration, and navigation. Today, only relative sensors are available for onboard gravimetry. This is a major drawback because of the calibration and drift estimation procedures, which lead to important operational constraints. Atom interferometry is a promising technology for obtaining an onboard absolute gravimeter. But, despite the high performance obtained under static conditions, no precise measurements had been reported under dynamic conditions. Here, we present absolute gravity measurements from a ship with a sensor based on atom interferometry. Despite rough sea conditions, we obtained a precision below 10⁻⁵ m s⁻². The atom gravimeter was also compared with a commercial spring gravimeter and showed better performance. This demonstration opens the way to the next generation of inertial sensors (accelerometer, gyroscope) based on atom interferometry, which should provide high-precision absolute measurements from a moving platform.

  18. [Absolute and relative strength-endurance of the knee flexor and extensor muscles: a reliability study using the IsoMed 2000-dynamometer].

    Science.gov (United States)

    Dirnberger, J; Wiesinger, H P; Stöggl, T; Kösters, A; Müller, E

    2012-09-01

    Isokinetic devices are highly rated in strength-related performance diagnostics. A few years ago, the broad variety of existing products was extended by the IsoMed 2000 dynamometer. For an isokinetic device to be clinically useful, the reliability of specific applications must be established. Although there have already been single studies on this topic for the IsoMed 2000 concerning maximum strength measurements, there has so far been no study on the assessment of strength-endurance. The aim of the present study was to establish the reliability of various methods for quantifying strength-endurance using the IsoMed 2000. A sample of 33 healthy young subjects (age: 23.8 ± 2.6 years) participated in one familiarisation and two testing sessions, 3-4 days apart. Testing consisted of a series of 30 full-effort concentric extension-flexion cycles of the right knee muscles at an angular velocity of 180°/s. Based on the parameters Peak Torque and Work for each repetition, indices of absolute (KADabs) and relative (KADrel) strength-endurance were derived. KADabs was calculated as the mean value of all testing repetitions; KADrel was determined in two ways: on the one hand, as the percentage decrease between the first and the last 5 repetitions (KADrelA), and on the other, as the negative slope derived from the linear regression equation over all repetitions (KADrelB). Detection of systematic errors was performed using paired-sample t-tests; relative and absolute reliability were examined using the intraclass correlation coefficient (ICC 2.1) and the standard error of measurement (SEM%), respectively. In general, for extension measurements concerning KADabs and - in a weakened form - KADrel, high ICC values of 0.76-0.89 combined with clinically acceptable SEM% values of 1.2-5.9 % could be found. For flexion measurements this only applies to KADabs, whereas results for KADrel turned out to be clearly weaker with ICC- and SEM% values of 0.42-0.62 and 9
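
    The three endurance indices are simple to state in code. A sketch with simulated per-repetition values (the decline rate and noise are invented, not the study's data):

    ```python
    import numpy as np

    def endurance_indices(values):
        """values: peak torque or work for each of the 30 repetitions."""
        v = np.asarray(values, float)
        kad_abs = v.mean()                                        # KADabs: mean over all reps
        kad_rel_a = 100 * (v[:5].mean() - v[-5:].mean()) / v[:5].mean()  # % drop, first vs last 5
        kad_rel_b = -np.polyfit(np.arange(v.size), v, 1)[0]       # negative regression slope
        return kad_abs, kad_rel_a, kad_rel_b

    reps = 120 - 1.5 * np.arange(30) + np.random.default_rng(1).normal(0, 3, 30)
    print(endurance_indices(reps))
    ```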

  19. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Absolute absorption cross-section and photolysis rate of I2

    Directory of Open Access Journals (Sweden)

    A. Saiz-Lopez

    2004-01-01

    Full Text Available Following recent observations of molecular iodine (I2) in the coastal marine boundary layer (MBL) (Saiz-Lopez and Plane, 2004), it has become important to determine the absolute absorption cross-section of I2 at reasonably high resolution, and also to evaluate the rate of photolysis of the molecule in the lower atmosphere. The absolute absorption cross-section (σ) of gaseous I2 at room temperature and pressure (295 K, 760 Torr) was therefore measured between 182 and 750 nm using a Fourier transform spectrometer at a resolution of 4 cm⁻¹ (0.1 nm at λ = 500 nm). The maximum absorption cross-section in the visible region was observed at λ = 533.0 nm to be σ = (4.24±0.50)×10⁻¹⁸ cm² molecule⁻¹. The spectrum is available as supplementary material accompanying this paper. The photo-dissociation rate constant (J) of gaseous I2 was also measured directly in a solar simulator, yielding J(I2) = 0.12±0.03 s⁻¹ for the lower troposphere. This is in excellent agreement with the value of 0.12±0.015 s⁻¹ calculated using the measured absorption cross-section, the terrestrial solar flux for clear-sky conditions, and an assumed photo-dissociation yield of unity. A two-stream radiative transfer model was then used to determine the variation in photolysis rate with solar zenith angle (SZA), from which an analytic expression is derived for use in atmospheric models. Photolysis appears to be the dominant loss process for I2 during daytime, and hence an important source of iodine atoms in the lower atmosphere.
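
    The consistency check between the measured J and the cross-section amounts to evaluating J = ∫ σ(λ) φ(λ) F(λ) dλ with unit photo-dissociation yield φ. A sketch of that integral with a crude Gaussian stand-in for the measured spectrum and a flat placeholder actinic flux (the real calculation uses the published σ and a radiative transfer model):

    ```python
    import numpy as np
    from scipy.integrate import trapezoid

    # Placeholder grids: wavelength (nm), cross-section (cm^2), actinic flux
    # (photons cm^-2 s^-1 nm^-1); all values are illustrative assumptions
    wl = np.linspace(400.0, 750.0, 351)
    sigma = 4.24e-18 * np.exp(-((wl - 533.0) / 60.0) ** 2)  # Gaussian stand-in
    flux = 1.5e14 * np.ones_like(wl)
    quantum_yield = 1.0

    J = trapezoid(sigma * quantum_yield * flux, wl)          # s^-1
    print(f"J(I2) ~ {J:.3f} s^-1")
    ```

    Even this crude stand-in lands within the order of magnitude of the reported 0.12 s⁻¹, which is the point of the consistency argument.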

  1. Relative and absolute risk in epidemiology and health physics

    International Nuclear Information System (INIS)

    Goldsmith, R.; Peterson, H.T. Jr.

    1983-01-01

    The health risk from ionizing radiation commonly is expressed in two forms: (1) the relative risk, which is the percentage increase in natural disease rate and (2) the absolute or attributable risk which represents the difference between the natural rate and the rate associated with the agent in question. Relative risk estimates for ionizing radiation generally are higher than those expressed as the absolute risk. This raises the question of which risk estimator is the most appropriate under different conditions. The absolute risk has generally been used for radiation risk assessment, although mathematical combinations such as the arithmetic or geometric mean of both the absolute and relative risks, have also been used. Combinations of the two risk estimators are not valid because the absolute and relative risk are not independent variables. Both human epidemiologic studies and animal experimental data can be found to illustrate the functional relationship between the natural cancer risk and the risk associated with radiation. This implies that the radiation risk estimate derived from one population may not be appropriate for predictions in another population, unless it is adjusted for the difference in the natural disease incidence between the two populations

  2. Redetermination and absolute configuration of atalaphylline

    Directory of Open Access Journals (Sweden)

    Hoong-Kun Fun

    2010-02-01

    Full Text Available The title acridone alkaloid [systematic name: 1,3,5-trihydroxy-2,4-bis(3-methylbut-2-enyl)acridin-9(10H)-one], C23H25NO4, has previously been reported as crystallizing in the chiral orthorhombic space group P212121 [Chantrapromma et al. (2010). Acta Cryst. E66, o81–o82], but the absolute configuration could not be determined from data collected with Mo radiation. The absolute configuration has now been determined by refinement of the Flack parameter with data collected using Cu radiation. All features of the molecule and its crystal packing are similar to those previously described.

  3. Uncertainty of angular displacement measurement with a MEMS gyroscope integrated in a smartphone

    International Nuclear Information System (INIS)

    De Campos Porath, Maurício; Dolci, Ricardo

    2015-01-01

    Low-cost inertial sensors have recently gained popularity and are now widely used in electronic devices such as smartphones and tablets. In this paper we present the results of a set of experiments aiming to assess the angular displacement measurement errors of a gyroscope integrated in a smartphone of a recent model. The goal is to verify whether these sensors could substitute dedicated electronic inclinometers for the measurement of angular displacement. We estimated a maximum error of 0.3° (sum of expanded uncertainty and maximum absolute bias) for the roll and pitch axes, for a measurement time without referencing up to 1 h. (paper)
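
    The reported figure combines bias and noise accumulated while integrating the gyroscope's rate output. A toy sketch of how such an angular error builds up over an hour without referencing (the bias and noise levels are assumed for illustration, not taken from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 100.0                           # sample rate, Hz
    t = np.arange(0, 3600.0, 1.0 / fs)   # 1 h without referencing
    true_rate = np.zeros_like(t)         # device held still

    bias = 0.05 / 3600.0                 # assumed residual bias, deg/s
    noise = rng.normal(0, 0.02, t.size)  # assumed rate noise, deg/s

    measured = true_rate + bias + noise
    angle = np.cumsum(measured) / fs     # integrate rate -> angular displacement
    print(f"angle error after 1 h: {angle[-1]:.3f} deg")
    ```

    The bias term grows linearly with time while the noise contributes a random walk, which is why the measurement time without referencing has to be bounded.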

  4. Ac-dc converter firing error detection

    International Nuclear Information System (INIS)

    Gould, O.L.

    1996-01-01

    Each of the twelve Booster Main Magnet Power Supply modules consists of two three-phase, full-wave rectifier bridges in series to provide a 560 VDC maximum output. The harmonic contents of the twelve-pulse ac-dc converter output are multiples of the 60 Hz ac power input, with a predominant 720 Hz signal greater than 14 dB in magnitude above the closest harmonic components at maximum output. The 720 Hz harmonic is typically greater than 20 dB below the 500 VDC output signal under normal operation. Extracting specific harmonics from the rectifier output signal of a 6, 12, or 24 pulse ac-dc converter allows the detection of SCR firing angle errors or complete misfires. A bandpass filter provides the input signal to a frequency-to-voltage converter. Comparing the output of the frequency-to-voltage converter to a reference voltage level provides an indication of the magnitude of the harmonics in the ac-dc converter output signal.
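
    A software analogue of this detection chain (bandpass filtering followed by a magnitude comparison) can be sketched as follows; the sample rate, amplitudes, and the choice of a 360 Hz component as the misfire signature are assumptions for illustration only:

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs = 20000.0                           # assumed sample rate, Hz
    t = np.arange(0, 1.0, 1 / fs)
    dc = 500.0                             # converter output, V
    ripple = 3.0 * np.sin(2 * np.pi * 720 * t)    # normal 720 Hz ripple
    misfire = 8.0 * np.sin(2 * np.pi * 360 * t)   # assumed subharmonic from a misfire
    signal = dc + ripple + misfire

    def band_rms(x, lo, hi):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        return float(np.sqrt(np.mean(sosfiltfilt(sos, x) ** 2)))

    # A rising off-nominal component relative to 720 Hz flags SCR firing errors
    print(f"720 Hz rms: {band_rms(signal, 680, 760):.2f} V")
    print(f"360 Hz rms: {band_rms(signal, 320, 400):.2f} V")
    ```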

  5. Absolute calibration of sniffer probes on Wendelstein 7-X

    International Nuclear Information System (INIS)

    Moseev, D.; Laqua, H. P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V.; Gellert, F.; Oosterbeek, J. W.

    2016-01-01

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of the Wendelstein 7-X empty vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients that are obtained by the direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels up to 340 kW/m² per MW injected beam power. Furthest away from the launcher, i.e., half a toroidal turn, still 90 kW/m² per MW injected beam power is measured.

  6. Absolute calibration of sniffer probes on Wendelstein 7-X

    Science.gov (United States)

    Moseev, D.; Laqua, H. P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V.; Gellert, F.; Oosterbeek, J. W.

    2016-08-01

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of the Wendelstein 7-X empty vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients that are obtained by the direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels up to 340 kW/m² per MW injected beam power. Furthest away from the launcher, i.e., half a toroidal turn, still 90 kW/m² per MW injected beam power is measured.

  7. Absolute calibration of sniffer probes on Wendelstein 7-X

    Energy Technology Data Exchange (ETDEWEB)

    Moseev, D., E-mail: dmitry.moseev@ipp.mpg.de; Laqua, H. P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V. [Max-Planck-Institut für Plasmaphysik, Greifswald (Germany); Gellert, F. [Max-Planck-Institut für Plasmaphysik, Greifswald (Germany); Ernst-Moritz-Arndt-Universität Greifswald, Greifswald (Germany); Oosterbeek, J. W. [Eindhoven University of Technology, Eindhoven (Netherlands)

    2016-08-15

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of the Wendelstein 7-X empty vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients that are obtained by the direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels up to 340 kW/m² per MW injected beam power. Furthest away from the launcher, i.e., half a toroidal turn, still 90 kW/m² per MW injected beam power is measured.

  8. Evaluation of the geometric stability and the accuracy potential of digital cameras — Comparing mechanical stabilisation versus parameterisation

    Science.gov (United States)

    Rieke-Zapp, D.; Tecklenburg, W.; Peipe, J.; Hastedt, H.; Haig, Claudia

    Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accomplished accuracies in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accordance with a German guideline for the evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems-Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ring flashes, which was considered a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ring flash to the camera instead resulted in a large improvement of accuracy in object space. For standard calibration, the best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens where the focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive, resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ring flash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introducing an image-variant interior orientation in the calibration process improved results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space). Extending the parameter model with Fi

  9. Approximation for maximum pressure calculation in containment of PWR reactors

    International Nuclear Information System (INIS)

    Souza, A.L. de

    1989-01-01

    A correlation was developed to estimate the maximum pressure in the dry containment of a PWR following a Loss-of-Coolant Accident (LOCA). The proposed expression is a function of the total energy released to the containment by the primary circuit, of the free volume of the containment building, and of the total surface area of the heat-conducting structures. The results show good agreement with those presented in the Final Safety Analysis Reports (FSAR) of several PWR plants. The errors are on the order of ±12%. (author)

  10. Noncircular features in Saturn's rings IV: Absolute radius scale and Saturn's pole direction

    Science.gov (United States)

    French, Richard G.; McGhee-French, Colleen A.; Lonergan, Katherine; Sepersky, Talia; Jacobson, Robert A.; Nicholson, Philip D.; Hedman, Mathew M.; Marouf, Essam A.; Colwell, Joshua E.

    2017-07-01

    We present a comprehensive solution for the geometry of Saturn's ring system, based on orbital fits to an extensive set of occultation observations of 122 individual ring edges and gaps. We begin with a restricted set of very high quality Cassini VIMS, UVIS, and RSS measurements for quasi-circular features in the C and B rings and the Cassini Division, and then successively add suitably weighted additional Cassini and historical occultation measurements (from Voyager, HST and the widely-observed 28 Sgr occultation of 3 Jul 1989) for additional non-circular features, to derive an absolute radius scale applicable across the entire classical ring system. As part of our adopted solution, we determine first-order corrections to the spacecraft trajectories used to determine the geometry of individual occultation chords. We adopt a simple linear model for Saturn's precession, and our favored solution yields a precession rate on the sky of ṅP = 0.207 ± 0.006″ yr⁻¹, equivalent to an angular rate of polar motion ΩP = 0.451 ± 0.014″ yr⁻¹. The 3% formal uncertainty in the fitted precession rate is approaching the point where it can provide a useful constraint on models of Saturn's interior, although realistic errors are likely to be larger, given the linear approximation of the precession model and possible unmodeled systematic errors in the spacecraft ephemerides. Our results are largely consistent with independent estimates of the precession rate based on historical RPX times (Nicholson et al., 1999 AAS/Division for Planetary Sciences Meeting Abstracts #31 31, 44.01) and from theoretical expectations that account for Titan's 700-yr precession period (Vienne and Duriez 1992, Astronomy and Astrophysics 257, 331-352). The fitted precession rate based on Cassini data only is somewhat lower, which may be an indication of unmodeled shorter term contributions to Saturn's polar motion from other satellites, or perhaps the result of inconsistencies in the assumed

  11. Neural Modeling of Fuzzy Controllers for Maximum Power Point Tracking in Photovoltaic Energy Systems

    Science.gov (United States)

    Lopez-Guede, Jose Manuel; Ramos-Hernanz, Josean; Altın, Necmi; Ozdemir, Saban; Kurt, Erol; Azkune, Gorka

    2018-06-01

    One field in which electronic materials have an important role is energy generation, especially within the scope of photovoltaic energy. This paper deals with one of the most relevant enabling technologies within that scope, i.e., the algorithms for maximum power point tracking implemented in the direct current to direct current converters, and its modeling through artificial neural networks (ANNs). More specifically, as a proof of concept, we have addressed the problem of modeling a fuzzy logic controller that has shown its performance in previous works, and more specifically the dimensionless duty cycle signal that controls a quadratic boost converter. We achieved a very accurate model since the obtained mean squared error is 3.47 × 10⁻⁶, the maximum error is 16.32 × 10⁻³ and the regression coefficient R is 0.99992, all for the test dataset. This neural implementation has obvious advantages such as a higher fault tolerance and a simpler implementation, dispensing with all the complex elements needed to run a fuzzy controller (fuzzifier, defuzzifier, inference engine and knowledge base) because, ultimately, ANNs are sums and products.

  12. Regional absolute conductivity reconstruction using projected current density in MREIT

    International Nuclear Information System (INIS)

    Sajib, Saurav Z K; Kim, Hyung Joong; Woo, Eung Je; Kwon, Oh In

    2012-01-01

    slice and the reconstructed regional projected current density, we propose a direct non-iterative algorithm to reconstruct the absolute conductivity in the ROI. The numerical simulations in the presence of various degrees of noise, as well as a phantom MRI imaging experiment, showed that the proposed method reconstructs the regional absolute conductivity in a ROI within a subject including the defective regions. In the simulation experiment, the relative L² errors of the reconstructed regional and global conductivities were 0.79 and 0.43, respectively, using a noise level of 50 dB in the defective region. (paper)

  13. Coordinated joint motion control system with position error correction

    Science.gov (United States)

    Danko, George L.

    2016-04-05

    Disclosed are an articulated hydraulic machine supporting, control system and control method for same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.

  14. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  15. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
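
    The paper optimizes the mutual-information term jointly with the classification loss via entropy estimation and gradient descent; the sketch below only shows how such a regularizer can be evaluated for a fixed linear classifier by discretizing its responses. The dataset and binning scheme are placeholders, not the authors' setup:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import mutual_info_score

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    clf = LogisticRegression().fit(X, y)

    # Discretize the classifier response and estimate I(response; label)
    scores = clf.decision_function(X)
    bins = np.digitize(scores, np.quantile(scores, [0.25, 0.5, 0.75]))
    mi = mutual_info_score(y, bins)          # in nats
    print(f"I(response; label) ~ {mi:.3f} nats")
    ```

    A classifier whose responses carry more information about the true labels scores higher on this term, which is exactly what the regularization rewards during training.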

  16. Accurate and fast methods to estimate the population mutation rate from error prone sequences

    Directory of Open Access Journals (Sweden)

    Miyamoto Michael M

    2009-08-01

    Full Text Available Abstract Background The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error-prone data such as expressed sequence tags, low coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results This strategy is implemented under an infinite sites model that focuses on only the internal branches of the sample genealogy where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error-prone sequences. It is then used to modify the recent, full, maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion In light of these results, we recommend the use of these three new methods for the determination of θ from error-prone sequences. In particular, we advocate the new maximum likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
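
    The singleton-free Watterson-type estimator follows from E[ξ_i] = θ/i under the infinite sites model: dropping the i = 1 frequency class gives E[S − ξ₁] = θ(a_n − 1). A minimal sketch (the site counts are invented for illustration):

    ```python
    import numpy as np

    def watterson_no_singletons(num_seg, num_singletons, n):
        """Watterson-type estimate of theta that ignores singleton sites.
        a_n is the usual harmonic sum; removing the i = 1 class removes
        exactly 1 from it, so E[S - xi_1] = theta * (a_n - 1)."""
        a_n = float(np.sum(1.0 / np.arange(1, n)))
        return (num_seg - num_singletons) / (a_n - 1.0)

    # 40 sequences, 25 segregating sites of which 6 are singletons
    print(f"theta ~ {watterson_no_singletons(25, 6, 40):.3f}")
    ```

    Discarding the singleton class sacrifices some information but removes exactly the frequency class in which isolated sequencing errors accumulate.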

  17. Correcting electrode modelling errors in EIT on realistic 3D head models.

    Science.gov (United States)

    Jehl, Markus; Avery, James; Malone, Emma; Holder, David; Betcke, Timo

    2015-12-01

    Electrical impedance tomography (EIT) is a promising medical imaging technique which could aid differentiation of haemorrhagic from ischaemic stroke in an ambulance. One challenge in EIT is the ill-posed nature of the image reconstruction, i.e., that small measurement or modelling errors can result in large image artefacts. It is therefore important that reconstruction algorithms are improved with regard to stability to modelling errors. We identify that wrongly modelled electrode positions constitute one of the biggest sources of image artefacts in head EIT. Therefore, the use of the Fréchet derivative on the electrode boundaries in a realistic three-dimensional head model is investigated, in order to reconstruct electrode movements simultaneously to conductivity changes. We show a fast implementation and analyse the performance of electrode position reconstructions in time-difference and absolute imaging for simulated and experimental voltages. Reconstructing the electrode positions and conductivities simultaneously increased the image quality significantly in the presence of electrode movement.

  18. Strongly nonlinear theory of rapid solidification near absolute stability

    Science.gov (United States)

    Kowal, Katarzyna N.; Altieri, Anthony L.; Davis, Stephen H.

    2017-10-01

    We investigate the nonlinear evolution of the morphological deformation of a solid-liquid interface of a binary melt under rapid solidification conditions near two absolute stability limits. The first of these involves the complete stabilization of the system to cellular instabilities as a result of large enough surface energy. We derive nonlinear evolution equations in several limits in this scenario and investigate the effect of interfacial disequilibrium on the nonlinear deformations that arise. In contrast to the morphological stability problem in equilibrium, in which only cellular instabilities appear and only one absolute stability boundary exists, in disequilibrium the system is prone to oscillatory instabilities and a second absolute stability boundary involving attachment kinetics arises. Large enough attachment kinetics stabilize the oscillatory instabilities. We derive a nonlinear evolution equation to describe the nonlinear development of the solid-liquid interface near this oscillatory absolute stability limit. We find that strong asymmetries develop with time. For uniform oscillations, the evolution equation for the interface reduces to the simple form f'' + (βf')² + f = 0, where β is the disequilibrium parameter. Lastly, we investigate a distinguished limit near both absolute stability limits in which the system is prone to both cellular and oscillatory instabilities and derive a nonlinear evolution equation that captures the nonlinear deformations in this limit. Common to all these scenarios is the emergence of larger asymmetries in the resulting shapes of the solid-liquid interface with greater departures from equilibrium and larger morphological numbers. The disturbances additionally sharpen near the oscillatory absolute stability boundary, where the interface becomes deep-rooted. The oscillations are time-periodic only for small-enough initial amplitudes and their frequency depends on a single combination of physical parameters, including the
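
    The reduced equation f'' + (βf')² + f = 0 is straightforward to integrate numerically; the sketch below does so with a standard ODE solver, with β and the initial amplitude chosen purely for illustration:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    beta = 0.8            # disequilibrium parameter (illustrative value)

    def rhs(t, y):
        f, fp = y
        return [fp, -(beta * fp) ** 2 - f]   # f'' = -(beta f')^2 - f

    sol = solve_ivp(rhs, (0.0, 40.0), [0.1, 0.0], dense_output=True, rtol=1e-9)
    t = np.linspace(0, 40, 2000)
    f = sol.sol(t)[0]
    # Small initial amplitudes give bounded, asymmetric oscillation cycles
    print(f"min {f.min():.4f}, max {f.max():.4f}")
    ```

    The quadratic term (βf')² is what breaks the up-down symmetry of the cycles, consistent with the asymmetries described above.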

  19. PM10 Analysis for Three Industrialized Areas using Extreme Value

    International Nuclear Information System (INIS)

    Hasfazilah Ahmat; Ahmad Shukri Yahaya; Nor Azam Ramli; Hasfazilah Ahmat

    2015-01-01

    One of the concerns of air pollution studies is to compute the concentrations of one or more pollutant species in space and time in relation to independent variables, for instance emissions into the atmosphere, meteorological factors and parameters. One of the most significant statistical disciplines developed for the applied sciences and many other disciplines over the last few decades is extreme value theory (EVT). This study assesses the fit of the extreme value distributions of the two-parameter Gumbel, two- and three-parameter Weibull, Generalized Extreme Value (GEV) and two- and three-parameter Generalized Pareto Distribution (GPD) to the maximum concentrations of daily PM10 data recorded in the years 2010-2012 in Pasir Gudang, Johor; Bukit Rambai, Melaka; and Nilai, Negeri Sembilan. Parameters for all distributions are estimated using the Method of Moments (MOM) and the Maximum Likelihood Estimator (MLE). Six performance indicators, namely the accuracy measures, which include predictive accuracy (PA), Coefficient of Determination (R²) and Index of Agreement (IA), and the error measures, which consist of Root Mean Square Error (RMSE), Mean Absolute Error (MAE) and Normalized Absolute Error (NAE), are used to assess the goodness-of-fit of the distributions. The best distribution is selected based on the highest accuracy measures and the smallest error measures. The results showed that the GEV is the best fit for the daily maximum concentration of PM10 at all monitoring stations. The analysis also demonstrates that the estimated numbers of days on which the concentration of PM10 exceeded the Malaysian Ambient Air Quality Guidelines (MAAQG) of 150 μg/m³ are between ½ and 1½ days. (author)
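
    A sketch of the core of such an analysis, an MLE fit of the GEV to daily maxima followed by the paper's error measures and the exceedance count, using synthetic PM10 maxima in place of the monitoring data:

    ```python
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(0)
    pm10_max = genextreme.rvs(c=-0.1, loc=70, scale=20, size=365, random_state=rng)

    # Maximum likelihood fit of the GEV to the daily maxima
    c, loc, scale = genextreme.fit(pm10_max)

    # Compare empirical and fitted quantiles with the paper's error measures
    obs = np.sort(pm10_max)
    p = (np.arange(1, obs.size + 1) - 0.5) / obs.size
    pred = genextreme.ppf(p, c, loc, scale)
    mae = np.mean(np.abs(pred - obs))
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    nae = np.sum(np.abs(pred - obs)) / np.sum(obs)
    exceed_days = 365 * genextreme.sf(150, c, loc, scale)
    print(f"MAE {mae:.2f}, RMSE {rmse:.2f}, NAE {nae:.4f}, "
          f"days over 150 ug/m3: {exceed_days:.1f}")
    ```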

  20. Human errors evaluation for muster in emergency situations applying the human error probability index (HEPI), in the oil company warehouse in Hamadan City

    Directory of Open Access Journals (Sweden)

    2012-12-01

    Full Text Available Introduction: Emergency situations are among the factors that influence human error. The aim of this research was to evaluate human error in an emergency involving fire and explosion at the oil company warehouse in Hamadan city, applying the human error probability index (HEPI). Material and Method: First, a scenario for a fire-and-explosion emergency at the oil company warehouse was designed, and a corresponding drill was performed. The scaled muster questionnaire for the drill was completed in the next stage. The collected data were analyzed to calculate the probability of success for the 18 actions required in an emergency, from the start of the muster to the final action of reaching a temporary safe shelter. Results: The highest probability of error occurrence was related to making the workplace safe (evaluation phase), at 32.4%, and the lowest to detecting the alarm (awareness phase), at 1.8%. The severity of error was highest in the evaluation phase and lowest in the awareness and recovery phases. The maximum risk level was related to evaluating exit routes, selecting one route, and choosing an alternative exit route; the minimum risk level was related to the four evaluation phases. Conclusion: To reduce the risk of errors in the exit phases of an emergency, the following actions are recommended based on the findings of this study: periodic evaluation of the exit phases and their modification if necessary, and conducting more drills and analyzing their results, with sufficient feedback to the employees.

  1. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data from the past. We find that, using these estimated error rates, the probability of error correction failure can be reduced significantly, by a factor that increases with the code distance.
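
    A minimal sketch of the estimation idea, Gaussian process regression over past error-rate estimates followed by prediction of the near-future rate, using an invented drifting error rate (the paper's protocol extracts these estimates from the error correction data itself):

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 50)[:, None]                  # time, arbitrary units
    true_rate = 0.01 + 0.004 * np.sin(0.8 * t.ravel())   # slowly drifting error rate
    observed = true_rate + rng.normal(0, 5e-4, t.size)   # noisy rate estimates

    kernel = RBF(length_scale=2.0) + WhiteKernel(noise_level=2.5e-7)
    gp = GaussianProcessRegressor(kernel=kernel).fit(t, observed)

    mean, std = gp.predict(np.array([[10.5]]), return_std=True)
    print(f"predicted error rate: {mean[0]:.4f} +/- {std[0]:.4f}")
    ```

    The predictive uncertainty comes for free with the GP posterior, which is what makes the approach suitable for tracking time-dependent noise without interrupting the decoder.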

  2. Forcing absoluteness and regularity properties

    NARCIS (Netherlands)

    Ikegami, D.

    2010-01-01

    For a large natural class of forcing notions, we prove general equivalence theorems between forcing absoluteness statements, regularity properties, and transcendence properties over L and the core model K. We use our results to answer open questions from set theory of the reals.

  3. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    OpenAIRE

    Rahnamaei, Z.; Nematollahi, N.; Farnoosh, R.

    2012-01-01

    We introduce an alternative skew-slash distribution by using the scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameter by Maximum Likelihood and Bayesian methods. By a simulation study we compute the mentioned estimators and their mean square errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.

  4. A NEW METHOD TO QUANTIFY AND REDUCE THE NET PROJECTION ERROR IN WHOLE-SOLAR-ACTIVE-REGION PARAMETERS MEASURED FROM VECTOR MAGNETOGRAMS

    Energy Technology Data Exchange (ETDEWEB)

    Falconer, David A.; Tiwari, Sanjiv K.; Moore, Ronald L. [NASA Marshall Space Flight Center, Huntsville, AL 35812 (United States); Khazanov, Igor, E-mail: David.a.Falconer@nasa.gov [Center for Space Plasma and Aeronomic Research, University of Alabama in Huntsville, Huntsville, AL 35899 (United States)

    2016-12-20

    Projection errors limit the use of vector magnetograms of active regions (ARs) far from the disk center. In this Letter, for ARs observed up to 60° from the disk center, we demonstrate a method for measuring and reducing the projection error in the magnitude of any whole-AR parameter that is derived from a vector magnetogram that has been deprojected to the disk center. The method assumes that the center-to-limb curve of the average of the parameter’s absolute values, measured from the disk passage of a large number of ARs and normalized to each AR’s absolute value of the parameter at central meridian, gives the average fractional projection error at each radial distance from the disk center. To demonstrate the method, we use a large set of large-flux ARs and apply the method to a whole-AR parameter that is among the simplest to measure: whole-AR magnetic flux. We measure 30,845 SDO/Helioseismic and Magnetic Imager vector magnetograms covering the disk passage of 272 large-flux ARs, each having whole-AR flux >10²² Mx. We obtain the center-to-limb radial-distance run of the average projection error in measured whole-AR flux from a Chebyshev fit to the radial-distance plot of the 30,845 normalized measured values. The average projection error in the measured whole-AR flux of an AR at a given radial distance is removed by multiplying the measured flux by the correction factor given by the fit. The correction is important for both the study of the evolution of ARs and for improving the accuracy of forecasts of an AR’s major flare/coronal mass ejection productivity.
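
    A sketch of the correction procedure: fit a Chebyshev curve to whole-AR flux normalized to its central-meridian value as a function of radial distance, then rescale measured fluxes by the fitted factor. The data here are random placeholders standing in for the 30,845 magnetogram measurements:

    ```python
    import numpy as np
    from numpy.polynomial import Chebyshev

    # r: radial distance from disk center (fraction of the solar radius);
    # flux_norm: whole-AR flux normalized to its central-meridian value
    rng = np.random.default_rng(0)
    r = rng.uniform(0.0, 0.87, 3000)                   # up to ~60 deg from center
    flux_norm = 1.0 - 0.35 * r**2 + rng.normal(0, 0.05, r.size)

    fit = Chebyshev.fit(r, flux_norm, deg=4)           # average projection-error curve

    def corrected_flux(measured_flux, radial_distance):
        """Remove the average fractional projection error at this distance."""
        return measured_flux / fit(radial_distance)

    print(f"correction factor at r = 0.7: {1.0 / fit(0.7):.3f}")
    ```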

  5. Absolute calibration of sniffer probes on Wendelstein 7-X

    NARCIS (Netherlands)

    Moseev, D.; Laqua, H.P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V.; Gellert, F.J.; Oosterbeek, J.W.

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of

  6. Absolute tense forms in Tswana | Pretorius | Journal for Language ...

    African Journals Online (AJOL)

    These views were compared in an attempt to put forth an applicable framework for the classification of the tenses in Tswana and to identify the absolute tenses of Tswana. Keywords: tense; simple tenses; compound tenses; absolute tenses; relative tenses; aspect; auxiliary verbs; auxiliary verbal groups; Tswana Opsomming

  7. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  8. Medication prescribing errors in a public teaching hospital in India: A prospective study.

    Directory of Open Access Journals (Sweden)

    Pote S

    2007-03-01

    Full Text Available Background: To prevent medication errors in prescribing, one needs to know their types and relative occurrence. Such errors are a great cause of concern as they have the potential to cause patient harm. The aim of this study was to determine the nature and types of medication prescribing errors in an Indian setting. Methods: The medication errors were analyzed in a prospective observational study conducted in 3 medical wards of a public teaching hospital in India. The medication errors were analyzed by means of the Micromedex Drug-Reax database. Results: Out of 312 patients, only 304 were included in the study. Of the 304 cases, 103 (34%) had at least one error. The total number of errors found was 157. Drug-drug interactions were the most frequently occurring type of error (68.2%), followed by incorrect dosing interval (12%) and dosing errors (9.5%). The medication classes involved most were antimicrobial agents (29.4%), cardiovascular agents (15.4%), GI agents (8.6%) and CNS agents (8.2%). Moderate errors contributed the most (61.8%) to the total errors when compared to major (25.5%) and minor (12.7%) errors. The results showed that the number of errors increases with age and the number of medicines prescribed. Conclusion: The results point to the need to establish medication error reporting at each hospital and to share the data with other hospitals. The role of the clinical pharmacist in this situation appears to be a strong intervention; the clinical pharmacist could initially confine their role to the identification of medication errors.

  9. Measurement Rounding Errors in an Assessment Model of Project Led Engineering Education

    Directory of Open Access Journals (Sweden)

    Francisco Moreira

    2009-11-01

    Full Text Available This paper analyzes the rounding errors that occur in the assessment of an interdisciplinary Project-Led Education (PLE) process implemented in the Integrated Master degree on Industrial Management and Engineering (IME) at the University of Minho. PLE is an innovative educational methodology which makes use of active learning, promoting higher levels of motivation and students’ autonomy. The assessment model is based on multiple evaluation components with different weights. Each component can be evaluated by several teachers involved in different Project Supporting Courses (PSC). This model can be affected by different types of errors, namely: (1) rounding errors, and (2) non-uniform criteria for rounding the grades. A rigorous analysis of the assessment model was made, and the rounding errors involved in each project component were characterized and measured. This resulted in a global maximum error of 0.308 on the individual student project grade, on a 0-to-100 scale. This analysis intended to improve not only the reliability of the assessment results, but also teachers’ awareness of this problem. Recommendations are also made in order to improve the assessment model and reduce the rounding errors as much as possible.

  10. Comments on a derivation and application of the 'maximum entropy production' principle

    International Nuclear Information System (INIS)

    Grinstein, G; Linsker, R

    2007-01-01

    We show that (1) an error invalidates the derivation (Dewar 2005 J. Phys. A: Math. Gen. 38 L371) of the maximum entropy production (MaxEP) principle for systems far from equilibrium, for which the constitutive relations are nonlinear; and (2) the claim (Dewar 2003 J. Phys. A: Math. Gen. 36 631) that the phenomenon of 'self-organized criticality' is a consequence of MaxEP for slowly driven systems is unjustified. (comment)

  11. Probative value of absolute and relative judgments in eyewitness identification.

    Science.gov (United States)

    Clark, Steven E; Erickson, Michael A; Breneman, Jesse

    2011-10-01

    It is well-accepted that eyewitness identification decisions based on relative judgments are less accurate than identification decisions based on absolute judgments. However, the theoretical foundation for this view has not been established. In this study relative and absolute judgments were compared through simulations of the WITNESS model (Clark, Appl Cogn Psychol 17:629-654, 2003) to address the question: Do suspect identifications based on absolute judgments have higher probative value than suspect identifications based on relative judgments? Simulations of the WITNESS model showed a consistent advantage for absolute judgments over relative judgments for suspect-matched lineups. However, simulations of same-foils lineups showed a complex interaction based on the accuracy of memory and the similarity relationships among lineup members.

  12. Novel high accurate sensorless dual-axis solar tracking system controlled by maximum power point tracking unit of photovoltaic systems

    International Nuclear Information System (INIS)

    Fathabadi, Hassan

    2016-01-01

    Highlights: • A novel, highly accurate sensorless dual-axis solar tracker. • It has the advantages of both sensor-based and sensorless solar trackers. • It does not have the disadvantages of sensor-based or sensorless solar trackers. • A tracking error of only 0.11°, less than the tracking errors of other trackers. • An increase of 28.8–43.6% in energy efficiency, depending on the season. - Abstract: In this study, a novel, highly accurate sensorless dual-axis solar tracker controlled by the maximum power point tracking unit available in almost all photovoltaic systems is proposed. The maximum power point tracking controller continuously calculates the maximum output power of the photovoltaic module/panel/array, and uses the deviations of the altitude and azimuth angles to track the sun direction at which the greatest maximum output power is extracted. Unlike all other sensorless solar trackers, the proposed solar tracking system is a closed-loop system, meaning it uses the actual direction of the sun at any time to track the sun direction; this is the contribution of this work. The proposed solar tracker has the advantages of both sensor-based and sensorless dual-axis solar trackers, without their disadvantages. All other sensorless solar trackers are open loop, i.e., they rely on offline estimates of the sun's path in the sky obtained from solar map equations, so low accuracy, cloudy skies, and the need for new data at each new location are their problems. A photovoltaic system has been built, and it is experimentally verified that the proposed solar tracking system tracks the sun direction with a tracking error of 0.11°, which is less than the tracking errors of other sensor-based and sensorless solar trackers. An increase of 28.8–43.6% in energy efficiency, depending on the season, is the main advantage of utilizing the proposed solar tracking system.
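
    The closed-loop idea can be sketched as a two-axis perturb-and-observe loop driven by the MPPT's measured maximum power. Here read_max_power is a hypothetical interface to the MPPT unit, and the toy power model merely stands in for a real panel:

    ```python
    def track_step(alt, az, read_max_power, step=0.5):
        """One perturb-and-observe iteration per axis: keep a trial move only
        if the MPPT reports more power, so tracking follows the actual sun."""
        p0 = read_max_power(alt, az)
        for axis in ("alt", "az"):
            for delta in (+step, -step):
                trial_alt = alt + delta if axis == "alt" else alt
                trial_az = az + delta if axis == "az" else az
                if read_max_power(trial_alt, trial_az) > p0:
                    alt, az = trial_alt, trial_az
                    p0 = read_max_power(alt, az)
                    break
        return alt, az

    # Toy power model with its maximum at alt = 45, az = 180
    power = lambda alt, az: -((alt - 45) ** 2 + (az - 180) ** 2)
    alt, az = 40.0, 175.0
    for _ in range(30):
        alt, az = track_step(alt, az, power)
    print(f"converged to alt={alt:.1f}, az={az:.1f}")
    ```

    Because the feedback signal is the actually measured power rather than a precomputed sun-path table, the loop stays closed even when the sky conditions change.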

  13. Positioning, alignment and absolute pointing of the ANTARES neutrino telescope

    International Nuclear Information System (INIS)

    Fehr, F; Distefano, C

    2010-01-01

    A precise detector alignment and absolute pointing is crucial for point-source searches. The ANTARES neutrino telescope utilises an array of hydrophones, tiltmeters and compasses for the relative positioning of the optical sensors. The absolute calibration is accomplished by long-baseline low-frequency triangulation of the acoustic reference devices in the deep-sea with a differential GPS system at the sea surface. The absolute pointing can be independently verified by detecting the shadow of the Moon in cosmic rays.

  14. Does Absolute Synonymy exist in Owere-Igbo? | Omego | AFRREV ...

    African Journals Online (AJOL)

    Among Igbo linguistic researchers, determining whether absolute synonymy exists in Owere–Igbo, a dialect of the Igbo language predominantly spoken by the people of Owerri, Imo State, Nigeria, has become a thorny issue. While some linguistic scholars strive to establish that absolute synonymy exists in the lexical ...

  15. Three-dimensional patient setup errors at different treatment sites measured by the Tomotherapy megavoltage CT

    Energy Technology Data Exchange (ETDEWEB)

    Hui, S.K.; Lusczek, E.; Dusenbery, K. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Dept. of Therapeutic Radiology - Radiation Oncology; DeFor, T. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Biostatistics and Informatics Core; Levitt, S. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Dept. of Therapeutic Radiology - Radiation Oncology; Karolinska Institutet, Stockholm (Sweden). Dept. of Onkol-Patol

    2012-04-15

    Reduction of interfraction setup uncertainty is vital for assuring the accuracy of conformal radiotherapy. We report a systematic study of setup error to assess patients' three-dimensional (3D) localization at various treatment sites. Tomotherapy megavoltage CT (MVCT) images were scanned daily in 259 patients from 2005-2008. We analyzed 6,465 MVCT images to measure setup error for head and neck (H and N), chest/thorax, abdomen, prostate, legs, and total marrow irradiation (TMI). Statistical comparisons of the absolute displacements across sites and time were performed in rotation (R), lateral (x), craniocaudal (y), and vertical (z) directions. The global systematic errors were measured to be less than 3 mm in each direction with increasing order of errors for different sites: H and N, prostate, chest, pelvis, spine, legs, and TMI. The differences in displacements in the x, y, and z directions, and 3D average displacement between treatment sites were significant (p < 0.01). Overall improvement in patient localization with time (after 3-4 treatment fractions) was observed. Large displacement (> 5 mm) was observed in the 75th percentile of the patient groups for chest, pelvis, legs, and spine in the x and y direction in the second week of the treatment. MVCT imaging is essential for determining 3D setup error and to reduce uncertainty in localization at all anatomical locations. Setup error evaluation should be performed daily for all treatment regions, preferably for all treatment fractions. (orig.)

  16. Subtropical Arctic Ocean temperatures during the Palaeocene/Eocene thermal maximum

    Science.gov (United States)

    Sluijs, A.; Schouten, S.; Pagani, M.; Woltering, M.; Brinkhuis, H.; Damste, J.S.S.; Dickens, G.R.; Huber, M.; Reichart, G.-J.; Stein, R.; Matthiessen, J.; Lourens, L.J.; Pedentchouk, N.; Backman, J.; Moran, K.; Clemens, S.; Cronin, T.; Eynaud, F.; Gattacceca, J.; Jakobsson, M.; Jordan, R.; Kaminski, M.; King, J.; Koc, N.; Martinez, N.C.; McInroy, D.; Moore, T.C.; O'Regan, M.; Onodera, J.; Palike, H.; Rea, B.; Rio, D.; Sakamoto, T.; Smith, D.C.; St John, K.E.K.; Suto, I.; Suzuki, N.; Takahashi, K.; Watanabe, M. E.; Yamamoto, M.

    2006-01-01

    The Palaeocene/Eocene thermal maximum, ~55 million years ago, was a brief period of widespread, extreme climatic warming that was associated with massive atmospheric greenhouse gas input. Although aspects of the resulting environmental changes are well documented at low latitudes, no data were available to quantify simultaneous changes in the Arctic region. Here we identify the Palaeocene/Eocene thermal maximum in a marine sedimentary sequence obtained during the Arctic Coring Expedition. We show that sea surface temperatures near the North Pole increased from ~18 °C to over 23 °C during this event. Such warm values imply the absence of ice and thus exclude the influence of ice-albedo feedbacks on this Arctic warming. At the same time, sea level rose while anoxic and euxinic conditions developed in the ocean's bottom waters and photic zone, respectively. Increasing temperature and sea level match expectations based on palaeoclimate model simulations, but the absolute polar temperatures that we derive before, during and after the event are more than 10 °C warmer than those predicted by the models. This suggests that higher-than-modern greenhouse gas concentrations must have operated in conjunction with other feedback mechanisms - perhaps polar stratospheric clouds or hurricane-induced ocean mixing - to amplify early Palaeogene polar temperatures. © 2006 Nature Publishing Group.

  17. Probabilistic error bounds for reduced order modeling

    Energy Technology Data Exchange (ETDEWEB)

    Abdo, M.G.; Wang, C.; Abdel-Khalik, H.S., E-mail: abdo@purdue.edu, E-mail: wang1730@purdue.edu, E-mail: abdelkhalik@purdue.edu [Purdue Univ., School of Nuclear Engineering, West Lafayette, IN (United States)

    2015-07-01

    Reduced order modeling has proven to be an effective tool when repeated execution of reactor analysis codes is required. ROM operates on the assumption that the intrinsic dimensionality of the associated reactor physics models is sufficiently small when compared to the nominal dimensionality of the input and output data streams. By employing a truncation technique with roots in linear algebra matrix decomposition theory, ROM effectively discards all components of the input and output data that have negligible impact on reactor attributes of interest. This manuscript introduces a mathematical approach to quantify the errors resulting from the discarded ROM components. As supported by numerical experiments, the introduced analysis proves that the contribution of the discarded components could be upper-bounded with an overwhelmingly high probability. The reverse of this statement implies that the ROM algorithm can self-adapt to determine the level of the reduction needed such that the maximum resulting reduction error is below a given tolerance limit that is set by the user. (author)
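
    As a hedged illustration of such a self-adapting truncation, the sketch below builds a reduced basis from a synthetic snapshot matrix and grows its rank until a probabilistic error estimate, the worst relative reconstruction error over random probe states, falls below a user tolerance. The matrix, probe count, and tolerance are illustrative assumptions, not the authors' implementation:

        import numpy as np

        def truncation_error_bound(snapshots, rank, n_probes=20, seed=0):
            # Worst relative reconstruction error of the rank-r basis over random
            # probes; upper-bounds the true error with high probability.
            rng = np.random.default_rng(seed)
            U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
            Ur = U[:, :rank]
            errs = []
            for _ in range(n_probes):
                w = snapshots @ rng.standard_normal(snapshots.shape[1])
                errs.append(np.linalg.norm(w - Ur @ (Ur.T @ w)) / np.linalg.norm(w))
            return max(errs)

        # Self-adaptation: grow the basis until the estimated bound meets a tolerance.
        rng = np.random.default_rng(1)
        U0, _ = np.linalg.qr(rng.standard_normal((200, 50)))
        A = U0 * 0.5 ** np.arange(50)     # synthetic snapshots, fast spectral decay
        rank = 1
        while truncation_error_bound(A, rank) > 1e-3:
            rank += 1
        print("rank needed for tolerance 1e-3:", rank)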

  18. Quantile-based Bayesian maximum entropy approach for spatiotemporal modeling of ambient air quality levels.

    Science.gov (United States)

    Yu, Hwa-Lung; Wang, Chih-Hsin

    2013-02-05

    Understanding the daily changes in ambient air quality concentrations is important for assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations. This is because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present in not only the averaged pollution levels, but also the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among the ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method allows researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strong nonhomogeneous variances across space. In addition, the epistemic framework allows researchers to assimilate site-specific secondary information where observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.

  19. Bounds on absolutely maximally entangled states from shadow inequalities, and the quantum MacWilliams identity

    Science.gov (United States)

    Huber, Felix; Eltschka, Christopher; Siewert, Jens; Gühne, Otfried

    2018-04-01

    A pure multipartite quantum state is called absolutely maximally entangled (AME) if all reductions obtained by tracing out at least half of its parties are maximally mixed. Maximal entanglement is then present across every bipartition. The existence of such states is in many cases unclear. With the help of the weight enumerator machinery known from quantum error correction and the shadow inequalities, we obtain new bounds on the existence of AME states in dimensions larger than two. To complete the treatment of the weight enumerator machinery, the quantum MacWilliams identity is derived in the Bloch representation. Finally, we consider AME states whose subsystems have different local dimensions, and present an example for a 2×3×3×3 system that shows maximal entanglement across every bipartition.

  20. Resection plane-dependent error in computed tomography volumetry of the right hepatic lobe in living liver donors

    Directory of Open Access Journals (Sweden)

    Heon-Ju Kwon

    2018-03-01

    Full Text Available Background/Aims Computed tomography (CT) hepatic volumetry is currently accepted as the most reliable method for preoperative estimation of graft weight in living donor liver transplantation (LDLT). However, several factors can cause inaccuracies in CT volumetry compared to real graft weight. The purpose of this study was to determine the frequency and degree of resection plane-dependent error in CT volumetry of the right hepatic lobe in LDLT. Methods Forty-six living liver donors underwent CT before donor surgery and on postoperative day 7. Prospective CT volumetry (VP) was measured via the assumptive hepatectomy plane. Retrospective liver volume (VR) was measured using the actual plane by comparing preoperative and postoperative CT. Compared with intraoperatively measured weight (W), errors in percentage (%) VP and VR were evaluated. Plane-dependent error in VP was defined as the absolute difference between VP and VR. % plane-dependent error was defined as follows: |VP-VR|/W∙100. Results Mean VP, VR, and W were 761.9 mL, 755.0 mL, and 696.9 g. Mean and % errors in VP were 73.3 mL and 10.7%. Mean error and % error in VR were 64.4 mL and 9.3%. Mean plane-dependent error in VP was 32.4 mL. Mean % plane-dependent error was 4.7%. Plane-dependent error in VP exceeded 10% of W in approximately 10% of the subjects in our study. Conclusions There was approximately 5% plane-dependent error in liver VP on CT volumetry. Plane-dependent error in VP exceeded 10% of W in approximately 10% of LDLT donors in our study. This error should be considered, especially when CT volumetry is performed by a less experienced operator who is not well acquainted with the donor hepatectomy plane.

  1. Resection plane-dependent error in computed tomography volumetry of the right hepatic lobe in living liver donors.

    Science.gov (United States)

    Kwon, Heon-Ju; Kim, Kyoung Won; Kim, Bohyun; Kim, So Yeon; Lee, Chul Seung; Lee, Jeongjin; Song, Gi Won; Lee, Sung Gyu

    2018-03-01

    Computed tomography (CT) hepatic volumetry is currently accepted as the most reliable method for preoperative estimation of graft weight in living donor liver transplantation (LDLT). However, several factors can cause inaccuracies in CT volumetry compared to real graft weight. The purpose of this study was to determine the frequency and degree of resection plane-dependent error in CT volumetry of the right hepatic lobe in LDLT. Forty-six living liver donors underwent CT before donor surgery and on postoperative day 7. Prospective CT volumetry (VP) was measured via the assumptive hepatectomy plane. Retrospective liver volume (VR) was measured using the actual plane by comparing preoperative and postoperative CT. Compared with intraoperatively measured weight (W), errors in percentage (%) VP and VR were evaluated. Plane-dependent error in VP was defined as the absolute difference between VP and VR. % plane-dependent error was defined as follows: |VP-VR|/W∙100. Mean VP, VR, and W were 761.9 mL, 755.0 mL, and 696.9 g. Mean and % errors in VP were 73.3 mL and 10.7%. Mean error and % error in VR were 64.4 mL and 9.3%. Mean plane-dependent error in VP was 32.4 mL. Mean % plane-dependent error was 4.7%. Plane-dependent error in VP exceeded 10% of W in approximately 10% of the subjects in our study. There was approximately 5% plane-dependent error in liver VP on CT volumetry. Plane-dependent error in VP exceeded 10% of W in approximately 10% of LDLT donors in our study. This error should be considered, especially when CT volumetry is performed by a less experienced operator who is not well acquainted with the donor hepatectomy plane.
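
    To make the arithmetic above concrete: the plane-dependent error is the per-donor absolute difference |VP-VR| (so its mean, 32.4 mL, is not simply |761.9-755.0|), and the percentage form divides by the measured graft weight. A short Python sketch with hypothetical per-donor values:

        import numpy as np

        V_P = np.array([720.0, 810.5, 755.2])   # prospective volumetry, assumed plane (mL)
        V_R = np.array([690.3, 845.1, 760.0])   # retrospective volumetry, actual plane (mL)
        W   = np.array([660.0, 770.2, 700.5])   # intraoperative graft weight (g)

        plane_dependent_error = np.abs(V_P - V_R)        # mL, per donor
        pct_error = plane_dependent_error / W * 100.0    # |VP - VR| / W * 100
        print(plane_dependent_error.mean(), pct_error.mean())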

  2. Relationships between GPS-signal propagation errors and EISCAT observations

    Directory of Open Access Journals (Sweden)

    N. Jakowski

    1996-12-01

    Full Text Available When travelling through the ionosphere the signals of space-based radio navigation systems such as the Global Positioning System (GPS) are subject to modifications in amplitude, phase and polarization. In particular, phase changes due to refraction lead to propagation errors of up to 50 m for single-frequency GPS users. If both the L1 and the L2 frequencies transmitted by the GPS satellites are measured, first-order range error contributions of the ionosphere can be determined and removed by difference methods. The ionospheric contribution is proportional to the total electron content (TEC) along the ray path between satellite and receiver. Using about ten European GPS receiving stations of the International GPS Service for Geodynamics (IGS), the TEC over Europe is estimated within the geographic ranges -20°≤ λ ≤40°E and 32.5°≤ Φ ≤70°N in longitude and latitude, respectively. The derived TEC maps over Europe contribute to the study of horizontal coupling and transport processes during significant ionospheric events. Due to their comprehensive information about the high-latitude ionosphere, EISCAT observations may help to study the influence of ionospheric phenomena upon propagation errors in GPS navigation systems. Since there are still some accuracy limiting problems to be solved in TEC determination using GPS, data comparison of TEC with vertical electron density profiles derived from EISCAT observations is valuable to enhance the accuracy of propagation-error estimations. This is evident both for absolute TEC calibration as well as for the conversion of ray-path-related observations to vertical TEC. The combination of EISCAT data and GPS-derived TEC data enables a better understanding of large-scale ionospheric processes.
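
    As a reader's sketch of the dual-frequency difference method mentioned above (the standard first-order relation, not the IGS processing chain), TEC follows from the inter-frequency pseudorange difference; the pseudorange values below are hypothetical:

        # First-order ionospheric group delay is 40.3 * TEC / f**2 (SI units), so
        # differencing the L1/L2 pseudoranges isolates the slant TEC.
        F1, F2 = 1575.42e6, 1227.60e6       # GPS carrier frequencies (Hz)

        def slant_tec(p1, p2):
            """Slant TEC (electrons/m^2) from dual-frequency pseudoranges (m)."""
            return (p2 - p1) * F1**2 * F2**2 / (40.3 * (F1**2 - F2**2))

        def range_error_l1(tec):
            """First-order L1 range error (m) for a given TEC."""
            return 40.3 * tec / F1**2

        tec = slant_tec(p1=20_000_000.0, p2=20_000_008.0)   # 8 m differential delay
        print(tec / 1e16, "TECU;", range_error_l1(tec), "m on L1")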

  3. The DiskMass Survey. II. Error Budget

    Science.gov (United States)

    Bershady, Matthew A.; Verheijen, Marc A. W.; Westfall, Kyle B.; Andersen, David R.; Swaters, Rob A.; Martinsson, Thomas

    2010-06-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ_*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface-brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ_*^disk), and disk maximality (F_*,max^disk ≡ V_*,max^disk/V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  4. Moral absolutism and ectopic pregnancy.

    Science.gov (United States)

    Kaczor, C

    2001-02-01

    If one accepts a version of absolutism that excludes the intentional killing of any innocent human person from conception to natural death, ectopic pregnancy poses vexing difficulties. Given that the embryonic life almost certainly will die anyway, how can one retain one's moral principle and yet adequately respond to a situation that gravely threatens the life of the mother and her future fertility? The four options of treatment most often discussed in the literature are non-intervention, salpingectomy (removal of tube with embryo), salpingostomy (removal of embryo alone), and use of methotrexate (MTX). In this essay, I review these four options and introduce a fifth (the milking technique). In order to assess these options in terms of the absolutism mentioned, it will also be necessary to discuss various accounts of the intention/foresight distinction. I conclude that salpingectomy, salpingostomy, and the milking technique are compatible with absolutist presuppositions, but not the use of methotrexate.

  5. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    KAUST Repository

    Carroll, Raymond J.

    2010-05-01

    This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.

  6. A Fourier analysis on the maximum acceptable grid size for discrete proton beam dose calculation

    International Nuclear Information System (INIS)

    Li, Haisen S.; Romeijn, H. Edwin; Dempsey, James F.

    2006-01-01

    We developed an analytical method for determining the maximum acceptable grid size for discrete dose calculation in proton therapy treatment plan optimization, so that the accuracy of the optimized dose distribution is guaranteed in the phase of dose sampling and superfluous computational work is avoided. The accuracy of dose sampling was judged by the criterion that the continuous dose distribution could be reconstructed from the discrete dose within a 2% error limit. To keep the error caused by discrete dose sampling under this 2% limit, the dose grid size cannot exceed a maximum acceptable value. The method was based on Fourier analysis and the Shannon-Nyquist sampling theorem as an extension of our previous analysis for photon beam intensity modulated radiation therapy [J. F. Dempsey, H. E. Romeijn, J. G. Li, D. A. Low, and J. R. Palta, Med. Phys. 32, 380-388 (2005)]. The proton beam model used for the analysis was a nearly monoenergetic (of width about 1% of the incident energy) and monodirectional infinitesimal (nonintegrated) pencil beam in a water medium. By monodirectional, we mean that the protons travel in the same direction before entering the water medium; the various scattering prior to entrance into the water is not taken into account. In intensity modulated proton therapy, the elementary intensity modulation entity is either an infinitesimal or a finite sized beamlet. Since a finite sized beamlet is a superposition of infinitesimal pencil beams, the maximum acceptable grid size obtained with infinitesimal pencil beams also applies to finite sized beamlets. The analytic Bragg curve function proposed by Bortfeld [T. Bortfeld, Med. Phys. 24, 2024-2033 (1997)] was employed. The lateral profile was approximated by a depth dependent Gaussian distribution. The model included the spreads of the Bragg peak and the lateral profiles due to multiple Coulomb scattering. The dependence of the maximum acceptable dose grid size on the
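
    A rough numerical analogue of this Fourier/Shannon-Nyquist argument can be sketched as follows; the Gaussian "Bragg-like" profile and the translation of the 2% reconstruction criterion into a spectral-energy cutoff are illustrative stand-ins for the paper's analytic treatment:

        import numpy as np

        dz = 0.01                                    # cm, fine sampling of the curve
        z = np.arange(0.0, 20.0, dz)
        dose = np.exp(-0.5 * ((z - 15.0) / 0.3)**2)  # toy peaked depth-dose profile

        spectrum = np.abs(np.fft.rfft(dose))**2
        freqs = np.fft.rfftfreq(dose.size, d=dz)     # cycles/cm
        cum = np.cumsum(spectrum) / spectrum.sum()
        f_max = freqs[np.searchsorted(cum, 1.0 - 0.02**2)]  # keep ~2% L2 error

        max_grid = 1.0 / (2.0 * f_max)               # Shannon-Nyquist limit
        print(f"maximum acceptable grid size ~ {max_grid:.3f} cm")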

  7. Technical Note: Potential errors in optical density measurements due to scanning side in EBT and EBT2 Gafchromic film dosimetry

    International Nuclear Information System (INIS)

    Desroches, Joannie; Bouchard, Hugo; Lacroix, Frederic

    2010-01-01

    Purpose: The purpose of this study is to determine the effect on the measured optical density of scanning on either side of a Gafchromic EBT and EBT2 film using an Epson (Epson Canada Ltd., Toronto, Ontario) 10000XL flat bed scanner. Methods: Calibration curves were constructed using EBT2 film scanned in landscape orientation in both reflection and transmission mode on an Epson 10000XL scanner. Calibration curves were also constructed using EBT film. Potential errors due to an optical density difference from scanning the film on either side ("face up" or "face down") were simulated. Results: Scanning the film face up or face down on the scanner bed while keeping the film angular orientation constant affects the measured optical density when scanning in reflection mode. In contrast, no statistically significant effect was seen when scanning in transmission mode. This effect can significantly affect relative and absolute dose measurements. As an application example, the authors demonstrate potential errors of 17.8% in the gamma index (3%/3 mm criteria) on a head and neck intensity modulated radiotherapy plan caused by inverting the film scanning side, and errors in absolute dose measurements ranging from 10% to 35% between 2 and 5 Gy. Conclusions: Process consistency is the key to obtaining accurate and precise results in Gafchromic film dosimetry. When scanning in reflection mode, care must be taken to place the film consistently on the same side on the scanner bed.
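
    For readers unfamiliar with the gamma test invoked above, a minimal one-dimensional sketch of the gamma index with global 3%/3 mm criteria follows; clinical implementations work in 2D/3D with dose interpolation, so this is only illustrative:

        import numpy as np

        def gamma_1d(ref, ref_x, meas, meas_x, dose_tol=0.03, dist_tol=3.0):
            """Minimal 1D gamma index (global dose criterion, positions in mm)."""
            d_norm = dose_tol * ref.max()
            gammas = np.empty_like(ref)
            for i, (d_r, x_r) in enumerate(zip(ref, ref_x)):
                dist2 = ((meas_x - x_r) / dist_tol) ** 2
                dose2 = ((meas - d_r) / d_norm) ** 2
                gammas[i] = np.sqrt((dist2 + dose2).min())
            return gammas

        x = np.linspace(0.0, 100.0, 201)                       # mm
        ref = np.exp(-0.5 * ((x - 50.0) / 10.0) ** 2)
        meas = 1.02 * np.exp(-0.5 * ((x - 51.0) / 10.0) ** 2)  # 2% hotter, 1 mm shift
        g = gamma_1d(ref, x, meas, x)
        print("gamma pass rate:", (g <= 1.0).mean())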

  8. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    Directory of Open Access Journals (Sweden)

    Z. Rahnamaei

    2012-01-01

    Full Text Available We introduce an alternative skew-slash distribution by using the scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameters by maximum likelihood and Bayesian methods. In a simulation study we compute these estimators and their mean square errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.

  9. Absolute and Relative Socioeconomic Health Inequalities across Age Groups.

    Science.gov (United States)

    van Zon, Sander K R; Bültmann, Ute; Mendes de Leon, Carlos F; Reijneveld, Sijmen A

    2015-01-01

    The magnitude of socioeconomic health inequalities differs across age groups. It is less clear whether socioeconomic health inequalities differ across age groups by other factors that are known to affect the relation between socioeconomic position and health, like the indicator of socioeconomic position, the health outcome, gender, and whether socioeconomic health inequalities are measured in absolute or in relative terms. The aim is to investigate whether absolute and relative socioeconomic health inequalities differ across age groups by indicator of socioeconomic position, health outcome and gender. The study sample was derived from the baseline measurement of the LifeLines Cohort Study and consisted of 95,432 participants. Socioeconomic position was measured as educational level and household income. Physical and mental health were measured with the RAND-36. Age was divided into eleven 5-year age groups. Absolute inequalities were examined by comparing means. Relative inequalities were examined by comparing Gini-coefficients. Analyses were performed for both health outcomes by both educational level and household income, for all age groups, and stratified by gender. Absolute and relative socioeconomic health inequalities differed across age groups by indicator of socioeconomic position, health outcome, and gender. Absolute inequalities were most pronounced for mental health by household income. They were larger in younger than older age groups. Relative inequalities were most pronounced for physical health by educational level. Gini-coefficients were largest in young age groups and smallest in older age groups. Absolute and relative socioeconomic health inequalities differed cross-sectionally across age groups by indicator of socioeconomic position, health outcome and gender. Researchers should critically consider the implications of choosing a specific age group, in addition to the indicator of socioeconomic position and health outcome.
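
    Since the relative inequalities above are summarized with Gini coefficients, a small sketch of that computation may help; the health-score arrays are invented for illustration:

        import numpy as np

        def gini(values):
            """Gini coefficient of a non-negative array (0 = perfect equality)."""
            v = np.sort(np.asarray(values, dtype=float))
            n = v.size
            i = np.arange(1, n + 1)
            # Equivalent to the mean-absolute-difference definition.
            return np.sum((2 * i - n - 1) * v) / (n * v.sum())

        young = [40.0, 55.0, 70.0, 85.0, 95.0]   # e.g. RAND-36 physical scores
        old   = [30.0, 38.0, 45.0, 50.0, 55.0]
        print(gini(young), gini(old))            # relative inequality per age group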

  10. Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors

    Science.gov (United States)

    Erkmen, Baris I.; Moision, Bruce E.

    2010-01-01

    Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived for the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparison of the analytic model predictions with those obtained via simulations verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations of time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
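
    A minimal simulation of this ML arrival-time estimation is sketched below, assuming a Gaussian pulse intensity plus a dark-count rate and dropping the tau-independent integral term of the Poisson log-likelihood (valid when the pulse lies fully inside the observation window); all rates are illustrative:

        import numpy as np

        rng = np.random.default_rng(1)

        def lam(t, peak=2e5, width=1e-3, dark=100.0):
            # Assumed Gaussian pulse intensity (counts/s) plus dark-count rate.
            return peak * np.exp(-0.5 * (t / width) ** 2) + dark

        # Simulate photon arrivals on a fine grid via Bernoulli thinning.
        tau_true, T = 0.042, 0.1
        t = np.linspace(0.0, T, 100_000)
        dt = t[1] - t[0]
        arrivals = t[rng.random(t.size) < lam(t - tau_true) * dt]

        # ML estimate: maximize sum_i log lam(t_i - tau) over a grid of taus.
        taus = np.linspace(0.0, T, 2001)
        loglik = [np.sum(np.log(lam(arrivals - tau))) for tau in taus]
        print("true:", tau_true, "ML estimate:", taus[int(np.argmax(loglik))])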

  11. Theoretical Calculation of Absolute Radii of Atoms and Ions. Part 1. The Atomic Radii

    Directory of Open Access Journals (Sweden)

    Raka Biswas

    2002-02-01

    Full Text Available A set of theoretical atomic radii corresponding to the principal maximum in the radial distribution function, 4πr²R², for the outermost orbital has been calculated for the ground state of 103 elements of the periodic table using Slater orbitals. The set of theoretical radii is found to reproduce the periodic law and Lothar Meyer's atomic volume curve, and reproduces the expected vertical and horizontal trends of variation in atomic size in the periodic table. The d-block and f-block contractions are distinct in the calculated sizes. The computed sizes qualitatively correlate with size dependent properties like the ionization potentials and electronegativities of the elements. The radii are used to calculate a number of size dependent periodic physical properties of isolated atoms, viz., the diamagnetic part of the atomic susceptibility, the atomic polarizability and the chemical hardness. The calculated global hardness and atomic polarizability of a number of atoms are found to be close to the available experimental values, and the profiles of the physical properties computed in terms of the theoretical atomic radii exhibit their inherent periodicity. A simple method of computing the absolute size of atoms has been explored and a large body of known material has been brought together to reveal how many different properties correlate with atomic size.
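
    To make the procedure concrete: the radial distribution function of a Slater orbital varies as r^(2n*) exp(-2 zeta r) and so peaks at r = n*/zeta (atomic units), with zeta = (Z - S)/n* from Slater's screening rules. A reader's sketch for carbon follows (an illustration, not the paper's code):

        # Carbon 2p example of the principal maximum of the radial
        # distribution function for a Slater orbital.
        Z = 6
        S = 0.35 * 3 + 0.85 * 2      # screening: three other n=2 electrons, two 1s
        n_star = 2.0                  # effective principal quantum number for n = 2
        zeta = (Z - S) / n_star

        r_max_au = n_star / zeta      # position of the RDF maximum (bohr)
        print(r_max_au * 0.529177, "angstrom")   # roughly 0.65 A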

  12. Some things ought never be done: moral absolutes in clinical ethics.

    Science.gov (United States)

    Pellegrino, Edmund D

    2005-01-01

    Moral absolutes have little or no moral standing in our morally diverse modern society. Moral relativism is far more palatable for most ethicists and to the public at large. Yet, when pressed, every moral relativist will finally admit that there are some things which ought never be done. It is the rarest of moral relativists that will take rape, murder, theft, child sacrifice as morally neutral choices. In general ethics, the list of those things that must never be done will vary from person to person. In clinical ethics, however, the nature of the physician-patient relationship is such that certain moral absolutes are essential to the attainment of the good of the patient - the end of the relationship itself. These are all derivatives of the first moral absolute of all morality: Do good and avoid evil. In the clinical encounter, this absolute entails several subsidiary absolutes - act for the good of the patient, do not kill, keep promises, protect the dignity of the patient, do not lie, avoid complicity with evil. Each absolute is intrinsic to the healing and helping ends of the clinical encounter.

  13. Subdivision Error Analysis and Compensation for Photoelectric Angle Encoder in a Telescope Control System

    Directory of Open Access Journals (Sweden)

    Yanrui Su

    2015-01-01

    Full Text Available As the position sensor, the photoelectric angle encoder affects the accuracy and stability of the telescope control system (TCS). A TCS-based subdivision error compensation method for the encoder is proposed. First, six types of subdivision error sources are extracted from the mathematical expressions of the subdivision signals. Then the relationships between the period lengths of the subdivision signals and the subdivision errors are deduced. An error compensation algorithm utilizing only the shaft position of the TCS is put forward, along with two control models: in Model I the algorithm is applied only to the speed loop of the TCS, while in Model II it is applied to both the speed loop and the position loop. In the context of an actual project, the elevation jitter of the telescope is discussed to decide whether DC-type subdivision error compensation is necessary. Low-speed elevation performance before and after error compensation is compared, indicating that Model II is preferred. In contrast to the original performance, the maximum position error of the elevation with DC subdivision error compensation is reduced by approximately 47.9%, from 1.42″ to 0.74″, and elevation jitter decreases markedly. This method can compensate the encoder subdivision errors effectively and improve the stability of the TCS.

  14. Relativistic Absolutism in Moral Education.

    Science.gov (United States)

    Vogt, W. Paul

    1982-01-01

    Discusses Emile Durkheim's "Moral Education: A Study in the Theory and Application of the Sociology of Education," which holds that morally healthy societies may vary in culture and organization but must possess absolute rules of moral behavior. Compares this moral theory with current theory and practice of American educators. (MJL)

  15. Error quantification of osteometric data in forensic anthropology.

    Science.gov (United States)

    Langley, Natalie R; Meadows Jantz, Lee; McNulty, Shauna; Maijanen, Heli; Ousley, Stephen D; Jantz, Richard L

    2018-04-10

    This study evaluates the reliability of osteometric data commonly used in forensic case analyses, with specific reference to the measurements in Data Collection Procedures 2.0 (DCP 2.0). Four observers took a set of 99 measurements four times on a sample of 50 skeletons (each measurement was taken 200 times by each observer). Two-way mixed ANOVAs and repeated measures ANOVAs with pairwise comparisons were used to examine interobserver (between-subjects) and intraobserver (within-subjects) variability. Relative technical error of measurement (TEM) was calculated for measurements with significant ANOVA results to examine the error among a single observer repeating a measurement multiple times (e.g. repeatability or intraobserver error), as well as the variability between multiple observers (interobserver error). Two general trends emerged from these analyses: (1) maximum lengths and breadths have the lowest error across the board (TEM …). Each measurement was examined carefully to determine the likely source of the error (e.g. data input, instrumentation, observer's method, or measurement definition). For several measurements (e.g. anterior sacral breadth, distal epiphyseal breadth of the tibia) only one observer differed significantly from the remaining observers, indicating a likely problem with the measurement definition as interpreted by that observer; these definitions were clarified in DCP 2.0 to eliminate this confusion. Other measurements were taken from landmarks that are difficult to locate consistently (e.g. pubis length, ischium length); these measurements were omitted from DCP 2.0. This manual is available for free download online (https://fac.utk.edu/wp-content/uploads/2016/03/DCP20_webversion.pdf), along with an accompanying instructional video (https://www.youtube.com/watch?v=BtkLFl3vim4). Copyright © 2018 Elsevier B.V. All rights reserved.
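
    For reference, the relative TEM used above can be computed in a few lines (two-trial form; the femur-length values are hypothetical):

        import numpy as np

        def relative_tem(trial1, trial2):
            """TEM = sqrt(sum(d^2) / 2N) for two repeated trials, reported as a
            percentage of the grand mean (relative TEM)."""
            t1, t2 = np.asarray(trial1, float), np.asarray(trial2, float)
            tem = np.sqrt(np.sum((t1 - t2) ** 2) / (2 * t1.size))
            return 100.0 * tem / np.concatenate([t1, t2]).mean()

        print(relative_tem([452.0, 467.5, 431.0], [452.5, 467.0, 432.0]))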

  16. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse; Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  17. Cryogenic Current Comparator for Absolute Measurement of the Dark Current of the Superconducting Cavities for Tesla

    CERN Document Server

    Knaack, K; Wittenburg, K

    2003-01-01

    A new high-performance SQUID-based measurement system for detecting the dark currents generated by superconducting cavities for TESLA is proposed. It makes use of the Cryogenic Current Comparator principle and senses dark currents in the nA range with a small-signal bandwidth of 70 kHz. Reaching the maximum possible energy in the TESLA project is a strong motivation for pushing the gradients of the superconducting cavities closer to the physical limit of 50 MV/m. The field emission of electrons (the so-called dark current) of the superconducting cavities at strong fields may limit the maximum gradient. The absolute measurement of the dark current in correlation with the gradient will give a proper value with which to compare and classify the cavities. This contribution describes a Cryogenic Current Comparator (CCC) as an excellent and useful tool for this purpose. The most important component of the CCC is a high performance DC SQUID system which is able to measure extremely low magnetic fields, e.g. caused by the extracted ...

  18. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat error in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently Chakraborty proposed a simple technique called the packet combining scheme, in which error is corrected at the receiver from the erroneous copies. The Packet Combining (PC) scheme fails (i) when the bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both of these cases have recently been addressed by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported which, in combination with PRPC, offer higher throughput. (author)
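
    A toy sketch of the packet combining idea is given below, assuming a CRC-32 integrity check as the acceptance test (an assumption for illustration; the published schemes are protocol-level). It also exhibits the failure case noted above, when both copies err at the same bit positions:

        from itertools import product
        import zlib

        def combine(copy1, copy2, crc):
            # XOR the two erroneous copies to locate the disagreeing bits, then
            # try flip combinations at those positions until the CRC matches.
            diff_bits = [i for i in range(len(copy1) * 8)
                         if (copy1[i // 8] ^ copy2[i // 8]) >> (7 - i % 8) & 1]
            for flips in product((0, 1), repeat=len(diff_bits)):
                candidate = bytearray(copy1)
                for bit, flip in zip(diff_bits, flips):
                    if flip:
                        candidate[bit // 8] ^= 1 << (7 - bit % 8)
                if zlib.crc32(bytes(candidate)) == crc:
                    return bytes(candidate)
            return None   # fails if both copies are corrupted at identical positions

        packet = b"maximum absolute error"
        c1 = bytearray(packet); c1[2] ^= 0x10   # one bit error in copy 1
        c2 = bytearray(packet); c2[7] ^= 0x01   # a different bit error in copy 2
        print(combine(bytes(c1), bytes(c2), zlib.crc32(packet)))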

  19. Effekten af absolut kumulation

    DEFF Research Database (Denmark)

    Kyvsgaard, Britta; Klement, Christian

    2012-01-01

    As part of the 2011 Finance Act, the Danish government and the parties to the agreement decided to examine the rules on sentencing when several criminal offences are adjudicated at the same time, and in that connection to assess the consequences of changing the current rules for the capacity needs of the Prison and Probation Service... total fine under absolute cumulation compared with the moderated cumulation that is currently in force.

  20. Some absolutely effective product methods

    Directory of Open Access Journals (Sweden)

    H. P. Dikshit

    1992-01-01

    Full Text Available It is proved that the product method A(C,1), where (C,1) is the Cesàro arithmetic mean matrix, is totally effective under certain conditions concerning the matrix A. This general result is applied to study the absolute Nörlund summability of Fourier series and other related series.

  1. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Besides this, a combination of component failure and human error is often found in spectacular events. The Rasmussen Report and the German Risk Assessment Study, in particular, show for pressurised water reactors that human error must not be underestimated. Although operator errors as a form of human error can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. Contrary to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  2. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    Science.gov (United States)

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously-proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.

  3. Absolute measurement method of environment radon content

    International Nuclear Information System (INIS)

    Ji Changsong

    1989-11-01

    A portable environmental radon content device with a 40 liter decay chamber, based on the Thomas double-filter method for absolute radon measurement, has been developed. The correctness of the Thomas double-filter absolute measurement method has been verified by experiments measuring radon gas samples whose theoretical density was known. In addition, the intrinsic uncertainty of this method was determined in the experiments. The confidence of this device is about 95%, the sensitivity is better than 0.37 Bq·m⁻³ and the intrinsic uncertainty is less than 10%. The results show that the selected measurement and structure parameters are reasonable and the experimental methods are acceptable. In this method, the influence on the measured values of the radioactive equilibrium between radon and its daughters, the ratio of combined daughters to total daughters, and the fraction of charged particles is excluded in both the theory and the experimental methods. The formula of the Thomas double-filter absolute radon measurement is applicable to the cylindrical decay chamber, and its applicability is also verified when the diameter of the exit filter is much smaller than the diameter of the inlet filter

  4. Study on absolute humidity influence of NRL-1 measuring apparatus for radon

    International Nuclear Information System (INIS)

    Shan Jian; Xiao Detao; Zhao Guizhi; Zhou Qingzhi; Liu Yan; Qiu Shoukang; Meng Yecheng; Xiong Xinming; Liu Xiaosong; Ma Wenrong

    2014-01-01

    The effects of absolute humidity and temperature on the NRL-1 measuring apparatus for radon were studied in this paper. By controlling the radon activity concentration in the radon laboratory at the University of South China and improving the temperature and humidity adjustment strategy, correction factor values under different absolute humidities were obtained, along with a correction curve between 1.90 and 14.91 g/m³. The results show that when the absolute humidity is less than 2.4 g/m³, the collection efficiency of the NRL-1 measuring apparatus tends to be constant and the absolute humidity correction factor is close to 1. Above this value, however, the correction factor increases nonlinearly with absolute humidity. (authors)

  5. Performance evaluations of continuous glucose monitoring systems: precision absolute relative deviation is part of the assessment.

    Science.gov (United States)

    Obermaier, Karin; Schmelzeisen-Redeker, Günther; Schoemaker, Michael; Klötzer, Hans-Martin; Kirchsteiger, Harald; Eikmeier, Heino; del Re, Luigi

    2013-07-01

    Even though a Clinical and Laboratory Standards Institute proposal exists on the design of studies and performance criteria for continuous glucose monitoring (CGM) systems, it has not yet led to a consistent evaluation of different systems, as no consensus has been reached on the reference method to evaluate them or on acceptance levels. As a consequence, performance assessment of CGM systems tends to be inconclusive, and a comparison of the outcome of different studies is difficult. Published information and available data (as presented in this issue of Journal of Diabetes Science and Technology by Freckmann and coauthors) are used to assess the suitability of several frequently used methods [International Organization for Standardization, continuous glucose error grid analysis, mean absolute relative deviation (MARD), precision absolute relative deviation (PARD)] when assessing performance of CGM systems in terms of accuracy and precision. The combined use of MARD and PARD seems to allow for better characterization of sensor performance. The use of different quantities for calibration and evaluation, e.g., capillary blood using a blood glucose (BG) meter versus venous blood using a laboratory measurement, introduces an additional error source. Using BG values measured in more or less large intervals as the only reference leads to a significant loss of information in comparison with the continuous sensor signal and possibly to an erroneous estimation of sensor performance during swings. Both can be improved using data from two identical CGM sensors worn by the same patient in parallel. Evaluation of CGM performance studies should follow an identical study design, including sufficient swings in glycemia. At least a part of the study participants should wear two identical CGM sensors in parallel. All data available should be used for evaluation, both by MARD and PARD, a good PARD value being a precondition to trust a good MARD value. Results should be analyzed and
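
    For concreteness, the two deviation measures discussed above can be sketched as follows (hypothetical glucose values in mg/dL):

        import numpy as np

        def mard(cgm, ref):
            """Mean absolute relative difference vs. a reference method (%)."""
            cgm, ref = np.asarray(cgm, float), np.asarray(ref, float)
            return 100.0 * np.mean(np.abs(cgm - ref) / ref)

        def pard(sensor_a, sensor_b):
            """Precision absolute relative difference between two identical
            sensors worn in parallel (%)."""
            a, b = np.asarray(sensor_a, float), np.asarray(sensor_b, float)
            return 100.0 * np.mean(np.abs(a - b) / ((a + b) / 2.0))

        bg = [90.0, 130.0, 180.0, 220.0]    # reference blood glucose
        s1 = [95.0, 122.0, 170.0, 231.0]    # CGM sensor 1
        s2 = [92.0, 126.0, 176.0, 225.0]    # CGM sensor 2, same patient
        print(mard(s1, bg), pard(s1, s2))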

  6. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    Science.gov (United States)

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  7. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    Science.gov (United States)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules, when used as optimization criteria, should locate the same unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
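
    The comparison can be illustrated with a minimal sketch that fits a Gaussian predictive distribution by each criterion, using the closed-form CRPS of a normal distribution; an intercept-only model stands in for a full non-homogeneous regression with covariates:

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import minimize

        def crps_gaussian(params, y):
            # Closed-form CRPS of N(mu, sigma) averaged over observations y.
            mu, log_sigma = params
            sigma = np.exp(log_sigma)
            z = (y - mu) / sigma
            return np.mean(sigma * (z * (2 * norm.cdf(z) - 1)
                                    + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi)))

        def neg_loglik(params, y):
            mu, log_sigma = params
            return -np.mean(norm.logpdf(y, mu, np.exp(log_sigma)))

        y = np.random.default_rng(0).normal(2.0, 1.5, size=500)  # "observations"
        for score in (crps_gaussian, neg_loglik):
            fit = minimize(score, x0=[0.0, 0.0], args=(y,))
            print(score.__name__, fit.x[0], np.exp(fit.x[1]))   # both near (2.0, 1.5)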

  8. Absolutely minimal extensions of functions on metric spaces

    International Nuclear Information System (INIS)

    Milman, V A

    1999-01-01

    Extensions of a real-valued function from the boundary ∂X₀ of an open subset X₀ of a metric space (X,d) to X₀ are discussed. For the broad class of initial data coming under discussion (linearly bounded functions) locally Lipschitz extensions to X₀ that preserve localized moduli of continuity are constructed. In the set of these extensions an absolutely minimal extension is selected, which was considered before by Aronsson for Lipschitz initial functions in the case X₀ ⊂ Rⁿ. An absolutely minimal extension can be regarded as an ∞-harmonic function, that is, a limit of p-harmonic functions as p→+∞. The proof of the existence of absolutely minimal extensions in a metric space with intrinsic metric is carried out by the Perron method. To this end, ∞-subharmonic, ∞-superharmonic, and ∞-harmonic functions on a metric space are defined and their properties are established

  9. An improved approach to reduce partial volume errors in brain SPET

    International Nuclear Information System (INIS)

    Hatton, R.L.; Hatton, B.F.; Michael, G.; Barnden, L.; QUT, Brisbane, QLD; The Queen Elizabeth Hospital, Adelaide, SA

    1999-01-01

    Full text: Limitations in SPET resolution give rise to significant partial volume error (PVE) in small brain structures. We have investigated a previously published method (Muller-Gartner et al., J Cereb Blood Flow Metab 1992;16:650-658) to correct PVE in grey matter using MRI. An MRI is registered and segmented to obtain a grey matter tissue volume, which is then smoothed to a resolution matched to the corresponding SPET. By dividing the original SPET by this correction map, structures can be corrected for PVE on a pixel-by-pixel basis. Since this approach is limited by space-invariant filtering, a modification was made by estimating projections for the segmented MRI and reconstructing these using parameters identical to those of the SPET reconstruction. The methods were tested on simulated brain scans reconstructed with the ordered subsets EM algorithm (8, 16, 32, 64 equivalent EM iterations). The new method provided better recovery visually. For 32 EM iterations, recovery coefficients were calculated for grey matter regions. The effects of potential errors in the method were examined. Mean recovery was unchanged with a one pixel registration error, the maximum error found in most registration programs. Segmentation errors > 2 pixels result in loss of accuracy for small structures. The method promises to be useful for reducing PVE in brain SPET
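
    A space-invariant version of the grey-matter correction described above can be sketched in a few lines (toy 2D example; the paper's modification would replace the Gaussian smoothing with projection and OSEM reconstruction of the segmented MRI):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def pve_correct(spet, gm_mask, fwhm_vox, threshold=0.25):
            # Smooth the segmented grey-matter map to the SPET resolution and
            # divide the SPET image by it voxel-wise.
            sigma = fwhm_vox / 2.3548
            corr = gaussian_filter(gm_mask.astype(float), sigma)
            return np.where(corr > threshold, spet / np.maximum(corr, 1e-6), 0.0)

        gm = np.zeros((64, 64)); gm[28:36, 28:36] = 1.0    # toy grey-matter ROI
        spet = gaussian_filter(gm * 100.0, 8.0 / 2.3548)   # simulated blurred uptake
        print(spet[31, 31], pve_correct(spet, gm, 8.0)[31, 31])  # recovery toward 100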

  10. Absolute carrier phase effects in the two-color excitation of dipolar molecules

    International Nuclear Information System (INIS)

    Brown, Alex; Meath, W.J.; Kondo, A.E.

    2002-01-01

    The pump-probe excitation of a two-level dipolar (d≠0) molecule, where the pump frequency is tuned to the energy level separation while the probe frequency is extremely small, is examined theoretically as an example of absolute phase control of excitation processes. The state populations depend on the probe field's absolute carrier phase but are independent of the pump field's absolute carrier phase. Interestingly, the absolute phase effects occur for pulse durations much longer and field intensities much weaker than those required to see such effects in single pulse excitation

  11. Analyzing the installation angle error of a SAW torque sensor

    International Nuclear Information System (INIS)

    Fan, Yanping; Ji, Xiaojun; Cai, Ping

    2014-01-01

    When a torque is applied to a shaft, the normal strain oriented at ±45° to the shaft axis is at its maximum, which requires two one-port SAW resonators to be bonded to the shaft at ±45° to the shaft axis. In order to make the SAW torque sensitivity high enough, the installation angle error of the two SAW resonators must be confined within ±5° according to our design requirement. However, few studies have been devoted to the installation angle analysis of SAW torque sensors, and the angle error has usually been obtained by a manual method. Hence, we propose an approximation method to analyze the angle error. First, according to the sensing mechanism of the SAW device to torque, the SAW torque sensitivity is deduced based on the linear piezoelectric constitutive equation and perturbation theory. Then, the stress condition of two SAW resonators mounted at an angle deviating from ±45° to the shaft axis, when a torque is applied to the tested shaft, is analyzed. The angle error is obtained by means of the torque sensitivities of the two orthogonal SAW resonators. Finally, the torque measurement system is constructed and loading and unloading experiments are performed twice. The torque sensitivities of the two SAW resonators are obtained by applying averaging and the least squares method to the experimental results. Based on the derived angle error estimation function, the angle error is estimated to be about 3.447°, which is close to the actual angle error of 2.915°. The difference between the estimated and actual angles is discussed. The validity of the proposed angle error analysis method is confirmed by the experimental results. (technical design note)

  12. Determination of absolute detection efficiencies for detectors of interest in homeland security

    International Nuclear Information System (INIS)

    Ayaz-Maierhafer, Birsen; DeVol, Timothy A.

    2007-01-01

    The absolute total and absolute peak detection efficiencies of the gamma ray detector materials NaI:Tl, CdZnTe, HPGe, HPXe, LaBr3:Ce and LaCl3:Ce were simulated and compared to those of polyvinyltoluene (PVT). The dimensions of the PVT detector were 188.82 cm × 60.96 cm × 5.08 cm, which is a typical size for a single-panel portal monitor. The absolute total and peak detection efficiencies of these detector materials for point, line and spherical source geometries of 60Co (1332 keV), 137Cs (662 keV) and 241Am (59.5 keV) were simulated at various source-to-detector distances using the Monte Carlo N-Particle software (MCNP5-V1.30). The comparison of the absolute total detection efficiencies for point, line and spherical source geometries of 60Co and 137Cs at different source-to-detector distances showed that the absolute detection efficiency for PVT is higher relative to the other detectors of typical dimensions for each material. However, the absolute peak detection efficiencies of some of these detectors are higher relative to PVT; for example, the absolute peak detection efficiencies of NaI:Tl (7.62 cm diameter × 7.62 cm long), HPGe (7.62 cm diameter × 7.62 cm long), HPXe (11.43 cm diameter × 60.96 cm long), and LaCl3:Ce (5.08 cm diameter × 5.08 cm long) are all greater than that of a 188.82 cm × 60.96 cm × 5.08 cm PVT detector for 60Co and 137Cs in all geometries studied. The absolute total and absolute peak detection efficiencies of a right circular cylinder of NaI:Tl with various diameters and thicknesses were determined for a point source. The effect of changing the solid angle on the NaI:Tl detectors showed that the absolute efficiency increases with increasing solid angle and detector thickness. This work establishes a common basis for differentiating detector materials for passive portal monitoring of gamma ray radiation
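
    The solid-angle dependence noted at the end can be illustrated with the textbook geometric efficiency of a disk-faced detector viewed by an on-axis point source (a reader's sketch, separate from the MCNP calculations reported above):

        import numpy as np

        def geometric_efficiency(distance_cm, radius_cm):
            # Fractional solid angle Omega/4pi subtended by the circular face
            # of a detector at an on-axis point source.
            return 0.5 * (1.0 - distance_cm / np.hypot(distance_cm, radius_cm))

        for d in (5.0, 10.0, 30.0):   # source-to-detector distances (cm)
            print(d, geometric_efficiency(d, radius_cm=3.81))  # 7.62 cm dia NaI:Tl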

  13. Regional and site-specific absolute humidity data for use in tritium dose calculations

    International Nuclear Information System (INIS)

    Etnier, E.L.

    1980-01-01

    Due to the potential variability in average absolute humidity over the continental U.S., and the dependence of atmospheric 3H specific activity on absolute humidity, the availability of regional absolute humidity data is of value in estimating the radiological significance of 3H releases. Most climatological data are in the form of relative humidity, which must be converted to absolute humidity for dose calculations. Absolute humidity was calculated for 218 points across the U.S., using the 1977 annual summary of U.S. Climatological Data, and is given in a table. Mean regional values are shown on a map. (author)
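
    For readers converting climatological relative humidity to the absolute humidity needed for such estimates, a standard Magnus-formula sketch follows (the constants are the common approximation, not values taken from this report):

        import numpy as np

        def absolute_humidity(temp_c, rh_percent):
            """Absolute humidity (g/m^3) from temperature (C) and relative humidity (%)."""
            e_sat = 6.112 * np.exp(17.62 * temp_c / (243.12 + temp_c))  # hPa
            e = rh_percent / 100.0 * e_sat                              # hPa
            return 216.7 * e / (273.15 + temp_c)   # ideal-gas law for water vapour

        # The same relative humidity implies very different absolute humidity,
        # hence different atmospheric 3H specific activity, at different sites:
        print(absolute_humidity(30.0, 70.0))   # ~21 g/m^3 (warm, humid site)
        print(absolute_humidity(10.0, 70.0))   # ~6.6 g/m^3 (cool site)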

  14. Sensitivity of dose-finding studies to observation errors.

    Science.gov (United States)

    Zohar, Sarah; O'Quigley, John

    2009-11-01

    The purpose of Phase I designs is to estimate the MTD (maximum tolerated dose; in practice, a dose with some given acceptable rate of toxicity) while, at the same time, minimizing the number of patients treated at doses too far removed from the MTD. Our purpose here is to investigate the sensitivity of conclusions from dose-finding designs to recording or observation errors. Certain toxicities may go undetected and, conversely, certain non-toxicities may be incorrectly recorded as dose-limiting toxicities. Recording inaccuracies would be expected to influence final and within-trial recommendations, and in this paper we study this question in greater depth. We focus, in particular, on three designs in current use: the standard '3+3' design, the grouped up-and-down design [M. Gezmu, N. Flournoy, Group up-and-down designs for dose finding. Journal of Statistical Planning and Inference 2006; 136 (6): 1749-1764.] and the continual reassessment method (CRM, [J. O'Quigley, M. Pepe, L. Fisher, Continual reassessment method: a practical design for phase 1 clinical trials in cancer. Biometrics 1990; 46 (1): 33-48.]). A non-toxicity incorrectly recorded as a toxicity (error of the first kind) has a greater influence in general than the converse (error of the second kind). These results are illustrated via figures which suggest that the standard '3+3' design in particular is sensitive to errors of the second kind. Such errors can have a very important impact on drug development in that, if carried through to the Phase 2 and Phase 3 studies, they can significantly increase the probability of failure to detect efficacy as a result of having delivered an inadequate dose.
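
    The kind of sensitivity studied here can be explored with a toy Monte Carlo of the standard '3+3' design; the dose-toxicity curve, the error rates, and the simplified de-escalation bookkeeping below are all illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(0)
        TRUE_TOX = [0.05, 0.10, 0.20, 0.35, 0.50]   # assumed dose-toxicity curve

        def observed_dlts(dose, n, e1, e2):
            # Bernoulli toxicities, then misrecording: a true toxicity is missed
            # with probability e1; a non-toxicity is recorded as a DLT with e2.
            tox = rng.random(n) < TRUE_TOX[dose]
            flip = rng.random(n) < np.where(tox, e1, e2)
            return int(np.sum(tox ^ flip))

        def three_plus_three(e1=0.0, e2=0.0):
            dose = 0
            while dose < len(TRUE_TOX):
                dlt = observed_dlts(dose, 3, e1, e2)
                if dlt == 0:
                    dose += 1; continue
                if dlt == 1:
                    if dlt + observed_dlts(dose, 3, e1, e2) <= 1:  # expand to 6
                        dose += 1; continue
                return dose - 1        # MTD is the dose below (-1: none tolerated)
            return len(TRUE_TOX) - 1

        runs = [three_plus_three(e2=0.10) for _ in range(2000)]
        print("mean recommended dose level with 10% false DLTs:", np.mean(runs))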

  15. System for simultaneously measuring 6DOF geometric motion errors using a polarization maintaining fiber-coupled dual-frequency laser.

    Science.gov (United States)

    Cui, Cunxing; Feng, Qibo; Zhang, Bin; Zhao, Yuqiong

    2016-03-21

    A novel method for simultaneously measuring six degree-of-freedom (6DOF) geometric motion errors is proposed in this paper, and the corresponding measurement instrument is developed. Simultaneous measurement of 6DOF geometric motion errors using a polarization maintaining fiber-coupled dual-frequency laser is accomplished for the first time to the best of the authors' knowledge. Dual-frequency laser beams that are orthogonally linearly polarized were adopted as the measuring datum. Positioning error measurement was achieved by heterodyne interferometry, and the other five geometric motion errors were obtained by fiber collimation measurement. A series of experiments was performed to verify the effectiveness of the developed instrument. The experimental results showed that the stability and accuracy of the positioning error measurement are 31.1 nm and 0.5 μm, respectively. For the straightness error measurements, the stability and resolution are 60 and 40 nm, respectively, and the maximum deviation of repeatability is ± 0.15 μm in the x direction and ± 0.1 μm in the y direction. For pitch and yaw measurements, the stabilities are 0.03″ and 0.04″, the maximum deviations of repeatability are ± 0.18″ and ± 0.24″, and the accuracies are 0.4″ and 0.35″, respectively. The stability and resolution of the roll measurement are 0.29″ and 0.2″, respectively, and the accuracy is 0.6″.

  16. Absolute decay parametric instability of high-temperature plasma

    International Nuclear Information System (INIS)

    Zozulya, A.A.; Silin, V.P.; Tikhonchuk, V.T.

    1986-01-01

    A new absolute decay parametric instability having a wide spatial localization region is shown to be possible near the critical plasma density. Its excitation is conditioned by the distributed feedback of counter-running Langmuir waves occurring during parametric decay of the incident and reflected pumping wave components. In a hot plasma with a temperature of the order of a kiloelectronvolt, its threshold is lower than that of the known convective decay parametric instability. The minimum absolute instability threshold is shown to be realized under conditions of spatial parametric resonance of higher orders

  17. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
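
    A toy numerical comparison can make the unisim/multisim trade-off concrete. The sketch below assumes a linear model in which the observable shifts by a known coefficient per one-standard-deviation change in each systematic parameter; the coefficients, noise level and sample sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
c = np.array([0.5, 0.3, 0.2, 0.1])   # effect of each systematic per 1 sigma
sigma_mc = 0.4                       # statistical error of one MC run
print("true systematic variance:", np.sum(c**2))

def unisim(n_rep=5000):
    est = []
    for _ in range(n_rep):
        y0 = rng.normal(0.0, sigma_mc)                       # baseline run
        shifts = c + rng.normal(0.0, sigma_mc, c.size) - y0  # one run per param
        # each squared shift is biased upward by ~2*sigma_mc**2, which
        # dominates here: the regime in which the multisim method wins
        est.append(np.sum(shifts**2))
    return np.mean(est)

def multisim(n_runs=100, n_rep=5000):
    est = []
    for _ in range(n_rep):
        y = c @ rng.normal(size=(c.size, n_runs)) \
            + rng.normal(0.0, sigma_mc, n_runs)
        est.append(np.var(y, ddof=1) - sigma_mc**2)  # subtract known MC noise
    return np.mean(est)

print("unisim estimate:  ", round(unisim(), 3))
print("multisim estimate:", round(multisim(), 3))
```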

  19. An Improved Minimum Error Interpolator of CNC for General Curves Based on FPGA

    Directory of Open Access Journals (Sweden)

    Jiye HUANG

    2014-05-01

    Full Text Available This paper presents an improved minimum-error interpolation algorithm for general curve generation in computer numerical control (CNC). Compared with conventional interpolation algorithms such as the By-Point Comparison method, the Minimum-Error method and the Digital Differential Analyzer (DDA) method, the proposed improved Minimum-Error interpolation algorithm can find a balance between accuracy and efficiency. The new algorithm is applicable to linear, circular, elliptic and parabolic curves. The proposed algorithm is realized on a field programmable gate array (FPGA) in the Verilog HDL language, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe. The algorithm has the following advantages: firstly, the maximum interpolation error is only half of the minimum step size; and secondly, the computing time is only two clock cycles of the FPGA. Simulations and actual tests have proved the high accuracy and efficiency of the algorithm, which shows that it is highly suited for real-time applications.
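
    A minimal sketch of the stepping idea behind minimum-error interpolation (here for a circle, in Python rather than Verilog HDL, and greatly simplified relative to the paper's FPGA design):

```python
def interpolate_circle(r):
    """Toy minimum-error stepping along the first quadrant of
    x^2 + y^2 = r^2 in unit steps: at each step take the single-axis move
    that minimizes |F(x, y)| = |x^2 + y^2 - r^2|, so the path never
    strays more than about half a step from the true curve."""
    x, y = 0, r
    points = [(x, y)]
    while y > 0:
        candidates = [(x + 1, y), (x, y - 1)]   # clockwise arc: +x or -y
        x, y = min(candidates, key=lambda p: abs(p[0]**2 + p[1]**2 - r*r))
        points.append((x, y))
    return points

print(interpolate_circle(5))
```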

  20. Confidence-Accuracy Calibration in Absolute and Relative Face Recognition Judgments

    Science.gov (United States)

    Weber, Nathan; Brewer, Neil

    2004-01-01

    Confidence-accuracy (CA) calibration was examined for absolute and relative face recognition judgments as well as for recognition judgments from groups of stimuli presented simultaneously or sequentially (i.e., simultaneous or sequential mini-lineups). When the effect of difficulty was controlled, absolute and relative judgments produced…

  1. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    Science.gov (United States)

    Knobles, D P; Sagers, J D; Koch, R A

    2012-02-01

    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
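
    The canonical form described here, p(m) proportional to exp(-beta E(m)) with beta fixed by the expected value of the error function, can be sketched on a one-dimensional parameter grid. The misfit surface and constraint value below are invented for illustration.

```python
import numpy as np
from scipy.optimize import brentq

grid = np.linspace(1.0, 2.0, 201)        # e.g. seabed sound-speed ratio
E = (grid - 1.35)**2 / 0.01 + 1.0        # hypothetical misfit (error) surface
E_bar = 2.0                              # constraint: expected error <E> = E_bar

def expected_E(beta):
    w = np.exp(-beta * (E - E.min()))    # canonical weights, shifted for safety
    p = w / w.sum()
    return np.sum(p * E)

# The sensitivity factor beta is fixed by the expectation-value constraint
beta = brentq(lambda b: expected_E(b) - E_bar, 1e-6, 1e3)
p = np.exp(-beta * (E - E.min()))
p /= p.sum()
print("beta =", round(beta, 3))
print("posterior mean of the parameter:", round(float(np.sum(p * grid)), 4))
```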

  2. Auditory working memory predicts individual differences in absolute pitch learning.

    Science.gov (United States)

    Van Hedger, Stephen C; Heald, Shannon L M; Koch, Rachelle; Nusbaum, Howard C

    2015-07-01

    Absolute pitch (AP) is typically defined as the ability to label an isolated tone as a musical note in the absence of a reference tone. At first glance the acquisition of AP note categories seems like a perceptual learning task, since individuals must assign a category label to a stimulus based on a single perceptual dimension (pitch) while ignoring other perceptual dimensions (e.g., loudness, octave, instrument). AP, however, is rarely discussed in terms of domain-general perceptual learning mechanisms. This is because AP is typically assumed to depend on a critical period of development, in which early exposure to pitches and musical labels is thought to be necessary for the development of AP precluding the possibility of adult acquisition of AP. Despite this view of AP, several previous studies have found evidence that absolute pitch category learning is, to an extent, trainable in a post-critical period adult population, even if the performance typically achieved by this population is below the performance of a "true" AP possessor. The current studies attempt to understand the individual differences in learning to categorize notes using absolute pitch cues by testing a specific prediction regarding cognitive capacity related to categorization - to what extent does an individual's general auditory working memory capacity (WMC) predict the success of absolute pitch category acquisition. Since WMC has been shown to predict performance on a wide variety of other perceptual and category learning tasks, we predict that individuals with higher WMC should be better at learning absolute pitch note categories than individuals with lower WMC. Across two studies, we demonstrate that auditory WMC predicts the efficacy of learning absolute pitch note categories. These results suggest that a higher general auditory WMC might underlie the formation of absolute pitch categories for post-critical period adults. Implications for understanding the mechanisms that underlie the

  3. 2-Step Maximum Likelihood Channel Estimation for Multicode DS-CDMA with Frequency-Domain Equalization

    Science.gov (United States)

    Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation.
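
    A rough numerical sketch of the two-step idea follows, with the caveat that the second step below is a generic time-domain denoising stand-in rather than the authors' decision-feedback MLCE; all system parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
Nc = 64                                    # subcarriers
h = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(8)  # 4-tap channel
H = np.fft.fft(h, Nc)                      # true frequency response
snr = 10.0                                 # linear per-subcarrier SNR

pilot = 1.0 - 2.0 * rng.integers(0, 2, Nc)            # +/-1 pilot symbols
noise = (rng.normal(size=Nc) + 1j * rng.normal(size=Nc)) / np.sqrt(2 * snr)
Y = H * pilot + noise                      # received pilot block

# Step 1: pilot-assisted, MMSE-style regularized per-subcarrier estimate
H1 = Y * np.conj(pilot) / (np.abs(pilot) ** 2 + 1.0 / snr)

# Step 2 stand-in: exploit the short delay spread by keeping only the first
# few time-domain taps (a denoising refinement, not the paper's exact MLCE)
h1 = np.fft.ifft(H1)
h1[8:] = 0.0
H2 = np.fft.fft(h1)

print("step-1 MSE:", round(float(np.mean(np.abs(H1 - H) ** 2)), 4))
print("step-2 MSE:", round(float(np.mean(np.abs(H2 - H) ** 2)), 4))
```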

  4. Effect of asymmetrical transfer coefficients of a non-polarizing beam splitter on the nonlinear error of the polarization interferometer

    Science.gov (United States)

    Zhao, Chen-Guang; Tan, Jiu-Bin; Liu, Tao

    2010-09-01

    The mechanism of a non-polarizing beam splitter (NPBS) with asymmetrical transfer coefficients causing the rotation of the polarization direction is explained in principle, and the measurement nonlinear error caused by the NPBS is analyzed based on Jones matrix theory. Theoretical calculations show that the nonlinear error changes periodically, and that the error period and peak values increase with the deviation between the transmissivities of the p-polarization and s-polarization states. When the transmissivity of p-polarization is 53% and that of s-polarization is 48%, the maximum error reaches 2.7 nm. The imperfection of the NPBS is one of the main error sources in a simultaneous phase-shifting polarization interferometer, and its influence cannot be neglected in nanoscale ultra-precision measurement.
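
    The periodic error can be reproduced with a toy Jones-style leakage model. The 'mix' fraction below is a hypothetical polarization-mixing parameter standing in for the rotation caused by asymmetric transfer coefficients; this is not the paper's full derivation, but it shows the characteristic periodic, nanometre-scale error.

```python
import numpy as np

def measured_phase(phi, Tp=0.53, Ts=0.48, mix=0.02):
    """Heterodyne phase recovered when an NPBS has unequal p/s intensity
    transmissivities (Tp, Ts) and a small fraction 'mix' of each beam
    leaks into the orthogonal channel (toy model, hypothetical parameter)."""
    tp, ts = np.sqrt(Tp), np.sqrt(Ts)
    # Ideal beat term tp*ts*exp(i*phi) plus a phase-independent leakage term
    s = tp * ts * np.exp(1j * phi) + mix * (tp**2 + ts**2) / 2.0
    return np.angle(s)

phi = np.linspace(0.0, 4.0 * np.pi, 4000)
err_rad = np.unwrap(measured_phase(phi)) - phi
lam = 633e-9                                   # He-Ne wavelength
err_nm = err_rad * lam / (4.0 * np.pi) * 1e9   # double-pass convention
print("periodic error peak: %.2f nm" % np.abs(err_nm).max())
```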

  5. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  6. El problema de la conciencia en Los errores de José Revueltas

    Directory of Open Access Journals (Sweden)

    Evodio Escalante Betancourt

    2014-07-01

    Full Text Available Los errores is the great novel that José Revueltas was destined to write. It concentrates his period of intellectual, philosophical and literary maturity. "Man is an erroneous being, and therein lies his tragic condition," says Jacobo Ponce, a character in the novel and Revueltas's alter ego. To reduce error to the width of the thinnest hair is, in cosmic dimensions, to reveal an abyss when set against the category of absolute knowledge that G. W. F. Hegel prefigures in his Phenomenology of Spirit. This brief essay gives an account of Revueltas's intellectual reflections on Hegel's postulates concerning self-consciousness and absolute knowledge.

  7. Calibrating the absolute amplitude scale for air showers measured at LOFAR

    International Nuclear Information System (INIS)

    Nelles, A.; Hörandel, J. R.; Karskens, T.; Krause, M.; Corstanje, A.; Enriquez, J. E.; Falcke, H.; Rachen, J. P.; Rossetto, L.; Schellart, P.; Buitink, S.; Erdmann, M.; Krause, R.; Haungs, A.; Hiller, R.; Huege, T.; Link, K.; Schröder, F. G.; Norden, M. J.; Scholten, O.

    2015-01-01

    Air showers induced by cosmic rays create nanosecond pulses detectable at radio frequencies. These pulses have been measured successfully in the past few years at the LOw-Frequency ARray (LOFAR) and are used to study the properties of cosmic rays. For a complete understanding of this phenomenon and the underlying physical processes, an absolute calibration of the detecting antenna system is needed. We present three approaches that were used to check and improve the antenna model of LOFAR and to provide an absolute calibration of the whole system for air shower measurements. Two methods are based on calibrated reference sources and one on a calibration approach using the diffuse radio emission of the Galaxy, optimized for short data-sets. An accuracy of 19% in amplitude is reached. The absolute calibration is also compared to predictions from air shower simulations. These results are used to set an absolute energy scale for air shower measurements and can be used as a basis for an absolute scale for the measurement of astronomical transients with LOFAR

  8. Minimization of the effect of errors in approximate radiation view factors

    International Nuclear Information System (INIS)

    Clarksean, R.; Solbrig, C.

    1993-01-01

    The maximum temperature of irradiated fuel rods in storage containers was investigated, taking credit only for radiation heat transfer. Estimating view factors is often easy, yet many references emphasize calculating the quadruple integrals exactly. Selecting different view factors in the view factor matrix as independent yields somewhat different view factor matrices. In this study, ten to twenty percent error in the view factors produced small errors in the temperature, well within the uncertainty due to the surface emissivities. However, the enclosure and reciprocity principles must be strictly observed, or large errors in the temperatures and wall heat flux were observed (up to a factor of 3). More than being an aid for calculating the dependent view factors, satisfying these principles, particularly reciprocity, is more important than the calculation accuracy of the view factors. Comparison to experiment showed that the result of the radiation calculation was definitely conservative, as desired, in spite of the approximations to the view factors
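
    The practical lesson here, enforce reciprocity and closure even on rough view factors, can be sketched as a simple iterative fix-up. This is one of several possible schemes, not necessarily the one used in the study.

```python
import numpy as np

def enforce_reciprocity_and_closure(F, A, iters=100):
    """Iteratively push an approximate view-factor matrix toward
    A_i F_ij = A_j F_ji (reciprocity) and sum_j F_ij = 1 (enclosure)
    by alternately symmetrizing A_i*F_ij and rescaling rows."""
    F = F.copy()
    for _ in range(iters):
        G = F * A[:, None]                  # G_ij = A_i F_ij, ideally symmetric
        G = 0.5 * (G + G.T)                 # enforce reciprocity
        F = G / A[:, None]
        F /= F.sum(axis=1, keepdims=True)   # enforce closure row by row
    return F

A = np.array([1.0, 2.0, 3.0])               # areas of a 3-surface enclosure
F = np.array([[0.00, 0.45, 0.50],           # rough view-factor estimates
              [0.20, 0.00, 0.75],
              [0.18, 0.52, 0.25]])
Ffix = enforce_reciprocity_and_closure(F, A)
G = Ffix * A[:, None]
print(np.round(Ffix, 3))
print("closure:", np.round(Ffix.sum(axis=1), 6))
print("reciprocity residual:", round(float(np.abs(G - G.T).max()), 6))
```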

  9. New design and facilities for the International Database for Absolute Gravity Measurements (AGrav): A support for the Establishment of a new Global Absolute Gravity Reference System

    Science.gov (United States)

    Wziontek, Hartmut; Falk, Reinhard; Bonvalot, Sylvain; Rülke, Axel

    2017-04-01

    After about 10 years of successful joint operation by BGI and BKG, the International Database for Absolute Gravity Measurements "AGrav" (see references hereafter) has undergone a major revision. The outdated web interface was replaced by a responsive, high-level web application framework based on Python and built on top of Pyramid. Functionality was added, such as interactive time-series plots and a report generator, and the interactive map-based station overview was completely updated, now comprising clustering and the classification of stations. Furthermore, the database backend was migrated to PostgreSQL for better support of the application framework and long-term availability. As comparisons of absolute gravimeters (AG) become essential to realize a precise and uniform gravity standard, the database was extended to document the results at international and regional level, including those performed at monitoring stations equipped with superconducting gravimeters (SGs). By this it will be possible to link different AGs and to trace their equivalence back to the key comparisons under the auspices of the International Committee for Weights and Measures (CIPM) as the best metrological realization of the absolute gravity standard. In this way the new AGrav database accommodates the demands of the new Global Absolute Gravity Reference System as recommended by IAG Resolution No. 2 adopted in Prague in 2015. The new database will be presented with a focus on the new user interface and new functionality, calling on all institutions involved in absolute gravimetry to participate and contribute their information to build up as complete a picture as possible of high-precision absolute gravimetry and to improve its visibility. A Digital Object Identifier (DOI) will be provided by BGI to contributors to give better traceability and facilitate the referencing of their gravity surveys. Links and references: BGI mirror site: http://bgi.obs-mip.fr/data-products/Gravity-Databases/Absolute-Gravity-data/ BKG mirror site: http

  10. Absolute nutrient concentration measurements in cell culture media: 1H q-NMR spectra and data to compare the efficiency of pH-controlled protein precipitation versus CPMG or post-processing filtering approaches

    Directory of Open Access Journals (Sweden)

    Luca Goldoni

    2016-09-01

    Full Text Available The NMR spectra and data reported in this article refer to the research article titled “A simple and accurate protocol for absolute polar metabolite quantification in cell cultures using q-NMR” [1]. We provide the 1H q-NMR spectra of cell culture media (DMEM after removal of serum proteins, which show the different efficiency of various precipitating solvents, the solvent/DMEM ratios, and pH of the solution. We compare the data of the absolute nutrient concentrations, measured by PULCON external standard method, before and after precipitation of serum proteins and those obtained using CPMG (Carr-Purcell-Meiboom-Gill sequence or applying post-processing filtering algorithms to remove, from the 1H q-NMR spectra, the proteins signal contribution. For each of these approaches, the percent error in the absolute value of every measurement for all the nutrients is also plotted as accuracy assessment. Keywords: 1H NMR, pH-controlled serum removal, PULCON, Accuracy, CPMG, Deconvolution

  11. The good, the bad and the outliers: automated detection of errors and outliers from groundwater hydrographs

    Science.gov (United States)

    Peterson, Tim J.; Western, Andrew W.; Cheng, Xiang

    2018-03-01

    Suspicious groundwater-level observations are common and can arise for many reasons ranging from an unforeseen biophysical process to bore failure and data management errors. Unforeseen observations may provide valuable insights that challenge existing expectations and can be deemed outliers, while monitoring and data handling failures can be deemed errors, and, if ignored, may compromise trend analysis and groundwater model calibration. Ideally, outliers and errors should be identified but to date this has been a subjective process that is not reproducible and is inefficient. This paper presents an approach to objectively and efficiently identify multiple types of errors and outliers. The approach requires only the observed groundwater hydrograph, requires no particular consideration of the hydrogeology, the drivers (e.g. pumping) or the monitoring frequency, and is freely available in the HydroSight toolbox. Herein, the algorithms and time-series model are detailed and applied to four observation bores with varying dynamics. The detection of outliers was most reliable when the observation data were acquired quarterly or more frequently. Outlier detection where the groundwater-level variance is nonstationary or the absolute trend increases rapidly was more challenging, with the former likely to result in an under-estimation of the number of outliers and the latter an overestimation in the number of outliers.

  12. Absolute cross sections from the ''boomerang model'' for resonant electron-molecule scattering

    International Nuclear Information System (INIS)

    Dube, L.; Herzenberg, A.

    1979-01-01

    The boomerang model is used to calculate absolute cross sections near the ²Πg shape resonance in e-N₂ scattering. The calculated cross sections are shown to satisfy detailed balancing. The exchange of electrons is taken into account. A parametrized complex-potential curve for the intermediate N₂⁻ ion is determined from a small part of the experimental data, and then used to calculate other properties. The calculations are in good agreement with the absolute cross sections for vibrational excitation from the ground state, the absolute cross section for v = 1 → 2, and the absolute total cross section

  13. Artificial Intelligence as a Business Forecasting and Error Handling Tool

    OpenAIRE

    Md. Tabrez Quasim; Rupak Chattopadhyay

    2015-01-01

     Any business enterprise must rely heavily on how well it can predict future happenings. To cope with modern global customer demand, technological challenges, market competition, etc., any organization is compelled to foresee the future with maximum impact and the least chance of error. The traditional forecasting approaches have some limitations. That is why the business world is adopting the modern Artificial Intelligence-based forecasting techniques. This paper has tried to presen...

  14. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  15. Reverse Transcription Errors and RNA-DNA Differences at Short Tandem Repeats.

    Science.gov (United States)

    Fungtammasan, Arkarachai; Tomaszkiewicz, Marta; Campos-Sánchez, Rebeca; Eckert, Kristin A; DeGiorgio, Michael; Makova, Kateryna D

    2016-10-01

    Transcript variation has important implications for organismal function in health and disease. Most transcriptome studies focus on assessing variation in gene expression levels and isoform representation. Variation at the level of transcript sequence is caused by RNA editing and transcription errors, and leads to nongenetically encoded transcript variants, or RNA-DNA differences (RDDs). Such variation has been understudied, in part because its detection is obscured by reverse transcription (RT) and sequencing errors. It has only been evaluated for intertranscript base substitution differences. Here, we investigated transcript sequence variation for short tandem repeats (STRs). We developed the first maximum-likelihood estimator (MLE) to infer RT error and RDD rates, taking next generation sequencing error rates into account. Using the MLE, we empirically evaluated RT error and RDD rates for STRs in a large-scale DNA and RNA replicated sequencing experiment conducted in a primate species. The RT error rates increased exponentially with STR length and were biased toward expansions. The RDD rates were approximately 1 order of magnitude lower than the RT error rates. The RT error rates estimated with the MLE from a primate data set were concordant with those estimated with an independent method, barcoded RNA sequencing, from a Caenorhabditis elegans data set. Our results have important implications for medical genomics, as STR allelic variation is associated with >40 diseases. STR nonallelic transcript variation can also contribute to disease phenotype. The MLE and empirical rates presented here can be used to evaluate the probability of disease-associated transcripts arising due to RDD. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
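
    As an illustration of the kind of estimator involved, here is a toy binomial maximum-likelihood estimate of a per-read error rate given a known sequencing error rate. It is a stand-in for the paper's MLE, which models STR-length-dependent expansions and contractions; the flip model and all numbers below are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mle_rt_error(k, n, seq_err):
    """Toy MLE of a per-read RT error rate e.

    k of n reads show a non-genomic STR allele; a read is altered by RT
    with probability e or by sequencing with probability seq_err (known),
    so under a simple flip model P(altered) = e + seq_err - 2*e*seq_err.
    """
    def nll(e):
        p = e + seq_err - 2.0 * e * seq_err
        p = min(max(p, 1e-12), 1.0 - 1e-12)
        return -(k * np.log(p) + (n - k) * np.log(1.0 - p))
    return minimize_scalar(nll, bounds=(0.0, 0.5), method="bounded").x

# With 120/1000 altered reads and a 1% sequencing error rate, e ~ 0.112
print(round(mle_rt_error(k=120, n=1000, seq_err=0.01), 4))
```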

  16. A highly accurate absolute gravimetric network for Albania, Kosovo and Montenegro

    Science.gov (United States)

    Ullrich, Christian; Ruess, Diethard; Butta, Hubert; Qirko, Kristaq; Pavicevic, Bozidar; Murat, Meha

    2016-04-01

    The objective of this project is to establish a basic gravity network in Albania, Kosovo and Montenegro to enable further investigations of geodetic and geophysical issues. For the first time in history, absolute gravity measurements were therefore performed in these countries. The Norwegian mapping authority Kartverket is assisting the national mapping authorities in Kosovo (KCA) (Kosovo Cadastral Agency - Agjencia Kadastrale e Kosovës), Albania (ASIG) (Autoriteti Shtetëror i Informacionit Gjeohapësinor) and Montenegro (REA) (Real Estate Administration of Montenegro - Uprava za nekretnine Crne Gore) in improving their geodetic frameworks. The gravity measurements are funded by Kartverket. The absolute gravimetric measurements were performed by BEV (Federal Office of Metrology and Surveying) with the absolute gravimeter FG5-242. As a national metrology institute (NMI), the Metrology Service of the BEV maintains the national standards for the realisation of the legal units of measurement and ensures their international equivalence and recognition. The laser and clock of the absolute gravimeter were calibrated before and after the measurements. The absolute gravimetric survey was carried out from September to October 2015, and all 8 scheduled stations were successfully measured: three stations are located in Montenegro, two in Kosovo and three in Albania. The stations are distributed over the countries to establish a gravity network for each country. The vertical gradients were measured at all 8 stations with the relative gravimeter Scintrex CG5. The high quality of some of the absolute gravity stations can be used for gravity monitoring activities in the future. The measurement uncertainties of the absolute gravity measurements are around 2.5 μGal at all stations (1 μGal = 10⁻⁸ m/s²). In Montenegro the large gravity difference of 200 mGal between the stations Zabljak and Podgorica can even be used for the calibration of relative gravimeters.

  17. Absolute Hugoniot measurements from a spherically convergent shock using x-ray radiography

    Science.gov (United States)

    Swift, Damian C.; Kritcher, Andrea L.; Hawreliak, James A.; Lazicki, Amy; MacPhee, Andrew; Bachmann, Benjamin; Döppner, Tilo; Nilsen, Joseph; Collins, Gilbert W.; Glenzer, Siegfried; Rothman, Stephen D.; Kraus, Dominik; Falcone, Roger W.

    2018-05-01

    The canonical high pressure equation of state measurement is to induce a shock wave in the sample material and measure two mechanical properties of the shocked material or shock wave. For accurate measurements, the experiment is normally designed to generate a planar shock which is as steady as possible in space and time, and a single state is measured. A converging shock strengthens as it propagates, so a range of shock pressures is induced in a single experiment. However, equation of state measurements must then account for spatial and temporal gradients. We have used x-ray radiography of spherically converging shocks to determine states along the shock Hugoniot. The radius-time history of the shock, and thus its speed, was measured by radiographing the position of the shock front as a function of time using an x-ray streak camera. The density profile of the shock was then inferred from the x-ray transmission at each instant of time. Simultaneous measurement of the density at the shock front and the shock speed determines an absolute mechanical Hugoniot state. The density profile was reconstructed using the known, unshocked density which strongly constrains the density jump at the shock front. The radiographic configuration and streak camera behavior were treated in detail to reduce systematic errors. Measurements were performed on the Omega and National Ignition Facility lasers, using a hohlraum to induce a spatially uniform drive over the outside of a solid, spherical sample and a laser-heated thermal plasma as an x-ray source for radiography. Absolute shock Hugoniot measurements were demonstrated for carbon-containing samples of different composition and initial density, up to temperatures at which K-shell ionization reduced the opacity behind the shock. Here we present the experimental method using measurements of polystyrene as an example.

  18. Initial Results of Using Daily CT Localization to Correct Portal Error in Prostate Cancer

    International Nuclear Information System (INIS)

    Lattanzi, Joseph; McNeely, Shawn; Barnes, Scott; Das, Indra; Schultheiss, Timothy E; Hanks, Gerald E.

    1997-01-01

    (surface marker shifts) error and absolute prostate motion relative to the bony pelvis. Prostate organ motion for the group is summarized in Table I. The maximum daily A/P shift was 7 mm, observed in patient no. 4. Motion was less than 5 mm in the remaining patients, and the overall mean change was only 2.9 mm. The overall variability is quantified by a pooled standard deviation of only 1.5 mm. The maximum lateral shifts were less than 3 mm for all patients. With careful attention to proper patient positioning technique, set-up error can be reduced to 2 mm or less. Conclusion: In our experience, prostate motion after 50 Gy was significantly less than has been previously reported and may reflect early physiologic changes due to radiation which restrict prostate motion. This observation is being tested in a separate study. Intrapatient and overall population variance was minimal. With daily isocenter correction we can nearly eliminate set-up and organ motion error and therefore reduce the prostate PTV margins to 0 cm posteriorly and 0.25 cm anteriorly/laterally. We believe this will facilitate dose escalation beyond 75 Gy in high-risk patients with minimal risk of increased morbidity. This technique may also be beneficial in low-risk patients by sparing more normal surrounding tissue

  19. ERF/ERFC, Calculation of Error Function, Complementary Error Function, Probability Integrals

    International Nuclear Information System (INIS)

    Vogel, J.E.

    1983-01-01

    1 - Description of problem or function: ERF and ERFC are used to compute values of the error function and complementary error function for any real number. They may be used to compute other related functions such as the normal probability integrals. 4. Method of solution: The error function and complementary error function are approximated by rational functions. Three such rational approximations are used, depending on the range in which x lies. In the first region the error function is computed directly and the complementary error function is computed via the identity erfc(x)=1.0-erf(x). In the other two regions the complementary error function is computed directly and the error function is computed from the identity erf(x)=1.0-erfc(x). The error function and complementary error function are real-valued functions of any real argument. The range of the error function is (-1,1). The range of the complementary error function is (0,2). 5. Restrictions on the complexity of the problem: The user is cautioned against using ERF to compute the complementary error function via the identity erfc(x)=1.0-erf(x). This subtraction may cause partial or total loss of significance for certain values of x
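
    The record does not give the routine's actual coefficients or region boundaries, but the classic Abramowitz and Stegun 7.1.26 fit illustrates the kind of rational approximation described, including computing erfc directly for large positive arguments to avoid the cancellation warned about above.

```python
import math

P, A = 0.3275911, (0.254829592, -0.284496736, 1.421413741,
                   -1.453152027, 1.061405429)

def erf_as(x: float) -> float:
    """erf(x) via the A&S 7.1.26 rational fit (|error| < 1.5e-7)."""
    sign = 1.0 if x >= 0 else -1.0
    x = abs(x)
    t = 1.0 / (1.0 + P * x)
    poly = t * (A[0] + t * (A[1] + t * (A[2] + t * (A[3] + t * A[4]))))
    return sign * (1.0 - poly * math.exp(-x * x))

def erfc_as(x: float) -> float:
    """erfc(x) computed directly for x >= 0, avoiding 1.0 - erf(x)."""
    if x < 0:
        return 2.0 - erfc_as(-x)
    t = 1.0 / (1.0 + P * x)
    poly = t * (A[0] + t * (A[1] + t * (A[2] + t * (A[3] + t * A[4]))))
    return poly * math.exp(-x * x)

print(erf_as(1.0), math.erf(1.0))    # ~0.8427
print(erfc_as(3.0), math.erfc(3.0))  # direct evaluation keeps significance
```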

  20. The Adaptive-Clustering and Error-Correction Method for Forecasting Cyanobacteria Blooms in Lakes and Reservoirs

    Directory of Open Access Journals (Sweden)

    Xiao-zhe Bai

    2017-01-01

    Full Text Available Globally, cyanobacteria blooms frequently occur, and effective prediction of cyanobacteria blooms in lakes and reservoirs could constitute an essential proactive strategy for water-resource protection. However, cyanobacteria blooms are very complicated because of the internal stochastic nature of the system evolution and the external uncertainty of the observation data. In this study, an adaptive-clustering algorithm is introduced to obtain some typical operating intervals. In addition, the number of nearest neighbors used for modeling was optimized by particle swarm optimization. Finally, a fuzzy linear regression method based on error correction was used to revise the model dynamically near the operating point. We found that the combined method can characterize the evolutionary track of cyanobacteria blooms in lakes and reservoirs. The model constructed in this paper is compared to other cyanobacteria-bloom forecasting methods (e.g., phase space reconstruction and traditional-clustering linear regression), and then the average relative error and average absolute error are used to compare the accuracies of these models. The results suggest that the proposed model is superior. As such, the newly developed approach achieves more precise predictions, which can be used to prevent the further deterioration of the water environment.

  1. Absolute and relative dosimetry for ELIMED

    Energy Technology Data Exchange (ETDEWEB)

    Cirrone, G. A. P.; Schillaci, F.; Scuderi, V. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Institute of Physics Czech Academy of Science, ELI-Beamlines project, Na Slovance 2, Prague (Czech Republic); Cuttone, G.; Candiano, G.; Musumarra, A.; Pisciotta, P.; Romano, F. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania (Italy); Carpinelli, M. [INFN Sezione di Cagliari, c/o Dipartimento di Fisica, Università di Cagliari, Cagliari (Italy); Leonora, E.; Randazzo, N. [INFN-Sezione di Catania, Via Santa Sofia 64, Catania (Italy); Presti, D. Lo [INFN-Sezione di Catania, Via Santa Sofia 64, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy); Raffaele, L. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and INFN-Sezione di Catania, Via Santa Sofia 64, Catania (Italy); Tramontana, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy); Cirio, R.; Sacchi, R.; Monaco, V. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino, Italy and Università di Torino, Dipartimento di Fisica, Via P.Giuria, 1 10125 Torino (Italy); Marchetto, F.; Giordanengo, S. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino (Italy)

    2013-07-26

    The definition of detectors, methods and procedures for the absolute and relative dosimetry of laser-driven proton beams is a crucial step toward the clinical use of this new kind of beam. Hence, one of the ELIMED tasks will be the definition of procedures aiming to obtain an absolute dose measurement at the end of the transport beamline with an accuracy as close as possible to that required for clinical applications (i.e., of the order of 5% or less). Relative dosimetry procedures must be established as well: they are necessary in order to determine and verify the beam dose distributions and to monitor the beam fluence and the energy spectra during irradiations. Radiochromic films, CR39, a Faraday cup, a Secondary Emission Monitor (SEM) and a transmission ionization chamber will be considered, designed and studied in order to perform a full dosimetric characterization of the ELIMED proton beam.

  2. Simple method for absolute calibration of geophones, seismometers, and other inertial vibration sensors

    International Nuclear Information System (INIS)

    Kann, Frank van; Winterflood, John

    2005-01-01

    A simple but powerful method is presented for calibrating geophones, seismometers, and other inertial vibration sensors, including passive accelerometers. The method requires no cumbersome or expensive fixtures such as shaker platforms and can be performed using a standard instrument commonly available in the field. An absolute calibration is obtained using the reciprocity property of the device, based on the standard mathematical model for such inertial sensors. It requires only a simple electrical measurement of the impedance of the sensor as a function of frequency to determine the parameters of the model and hence the sensitivity function. The method is particularly convenient if one of these parameters, namely the suspended mass, is known. In this case, no additional mechanical apparatus is required and only a single set of impedance measurements yields the desired calibration function. Moreover, this measurement can be made with the device in situ. However, the novel and most powerful aspect of the method is its ability to accurately determine the effective suspended mass. For this, the impedance measurement is made with the device hanging from a simple spring or flexible cord (depending on the orientation of its sensitive axis). To complete the calibration, the device is weighed to determine its total mass. All the required calibration parameters, including the suspended mass, are then determined from a least-squares fit to the impedance as a function of frequency. A demonstration using both a 4.5 Hz geophone and a 1 Hz seismometer shows that the method can yield accurate absolute calibrations with an error of 0.1% or better, assuming no a priori knowledge of any parameters
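
    A sketch of the parameter fit at the heart of the method: fit the conventional coil-plus-motional-impedance geophone model to measured Z(f), with the suspended mass treated as known. The parameter values below are invented for the demonstration.

```python
import numpy as np
from scipy.optimize import least_squares

def geophone_impedance(f, R, G, f0, Q, m):
    """Electrical impedance model: coil resistance plus the motional term
    from electromagnetic coupling (transduction constant G) of the coil
    to the suspended mass m, with resonance f0 and quality factor Q."""
    w, w0 = 2 * np.pi * f, 2 * np.pi * f0
    return R + (G**2 / m) * 1j * w / (w0**2 - w**2 + 1j * w * w0 / Q)

# Synthetic "measured" impedance; the true values are invented
f = np.linspace(1.0, 30.0, 200)
true = dict(R=375.0, G=28.0, f0=4.5, Q=2.0, m=0.011)   # a 4.5 Hz geophone
rng = np.random.default_rng(0)
z = geophone_impedance(f, **true)
z += 0.5 * (rng.normal(size=f.size) + 1j * rng.normal(size=f.size))

def resid(p):
    R, G, f0, Q = p
    d = geophone_impedance(f, R, G, f0, Q, true["m"]) - z  # m assumed known
    return np.concatenate([d.real, d.imag])

fit = least_squares(resid, x0=[300.0, 20.0, 4.0, 1.5])
print(dict(zip(["R", "G", "f0", "Q"], np.round(fit.x, 3))))
# The sensitivity (V per m/s) then follows from G; weighing the sensor
# gives the total mass, as in the method described above.
```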

  3. Error of the slanted edge method for measuring the modulation transfer function of imaging systems.

    Science.gov (United States)

    Xie, Xufen; Fan, Hongda; Wang, Hongyuan; Wang, Zebin; Zou, Nianyu

    2018-03-01

    The slanted edge method is a basic approach for measuring the modulation transfer function (MTF) of imaging systems; however, its measurement accuracy is limited in practice. Theoretical analysis of the slanted edge MTF measurement method performed in this paper reveals that inappropriate edge angles and random noise reduce this accuracy. The error caused by edge angles is analyzed using sampling and reconstruction theory. Furthermore, an error model combining noise and edge angles is proposed. We verify the analyses and model with respect to (i) the edge angle, (ii) a statistical analysis of the measurement error, (iii) the full width at half-maximum of a point spread function, and (iv) the error model. The experimental results verify the theoretical findings. This research can be referential for applications of the slanted edge MTF measurement method.
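
    A one-dimensional sketch of the slanted-edge pipeline (edge spread function, then line spread function, then MTF). A real implementation would first project the two-dimensional edge image along the estimated edge direction to build a super-resolved ESF, which is exactly where the edge-angle errors analyzed in the paper enter; that step is omitted here and all numbers are synthetic.

```python
import numpy as np
from scipy.special import erf

x = np.arange(-32, 32, 0.25)                   # 4x oversampled pixel grid
sigma = 1.2                                    # Gaussian system blur (pixels)
esf = 0.5 * (1.0 + erf(x / (sigma * np.sqrt(2.0))))        # edge spread
esf += np.random.default_rng(1).normal(0.0, 1e-3, x.size)  # sensor noise

lsf = np.gradient(esf, x)                      # line spread function
lsf *= np.hanning(lsf.size)                    # window to tame the noise
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                  # normalize to 1 at DC
freq = np.fft.rfftfreq(lsf.size, d=0.25)       # cycles per pixel

# Sanity check: a Gaussian blur has MTF = exp(-2*(pi*f*sigma)^2)
i = np.searchsorted(freq, 0.1)
print(f"MTF at {freq[i]:.3f} cy/px: measured {mtf[i]:.3f}, "
      f"analytic {np.exp(-2 * (np.pi * freq[i] * sigma) ** 2):.3f}")
```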

  4. The maximum entropy method of moments and Bayesian probability theory

    Science.gov (United States)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissue in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments is reviewed, along with some of its problems and the conditions under which it fails. In later sections, the functional form of the maximum entropy method of moments probability distribution is incorporated into Bayesian probability theory. It is shown that Bayesian probability theory solves all of the problems of the maximum entropy method of moments: one obtains posterior probabilities for the Lagrange multipliers and, finally, one can put error bars on the resulting estimated density function.
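
    A minimal sketch of the method of moments itself: the maximum entropy density subject to moment constraints is an exponential family, and the Lagrange multipliers can be found by minimizing the convex dual. The Bayesian treatment advocated in the abstract would go further and place a posterior over these multipliers; the grid and target moments below are invented.

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-4.0, 4.0, 401)            # support grid
powers = np.array([1, 2])                  # constrain <x> and <x^2>
moments = np.array([0.0, 1.0])             # target moment values

def dual(lam):
    # Convex dual of the maxent problem: log Z(lam) + lam . mu
    logw = -(x[:, None] ** powers) @ lam
    m = logw.max()
    logZ = m + np.log(np.trapz(np.exp(logw - m), x))
    return logZ + lam @ moments

lam = minimize(dual, x0=np.zeros(2), method="BFGS").x
p = np.exp(-(x[:, None] ** powers) @ lam)
p /= np.trapz(p, x)
print("multipliers:", np.round(lam, 3))    # ~[0, 0.5]: the standard normal
print("<x^2> check:", round(float(np.trapz(p * x**2, x)), 3))
```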

  5. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Science.gov (United States)

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  6. Estimates of error introduced when one-dimensional inverse heat transfer techniques are applied to multi-dimensional problems

    International Nuclear Information System (INIS)

    Lopez, C.; Koski, J.A.; Razani, A.

    2000-01-01

    A study of the errors introduced when one-dimensional inverse heat conduction techniques are applied to problems involving two-dimensional heat transfer effects was performed. The geometry used for the study was a cylinder with similar dimensions as a typical container used for the transportation of radioactive materials. The finite element analysis code MSC P/Thermal was used to generate synthetic test data that was then used as input for an inverse heat conduction code. Four different problems were considered including one with uniform flux around the outer surface of the cylinder and three with non-uniform flux applied over 360°, 180°, and 90° sections of the outer surface of the cylinder. The Sandia One-Dimensional Direct and Inverse Thermal (SODDIT) code was used to estimate the surface heat flux of all four cases. The error analysis was performed by comparing the results from SODDIT and the heat flux calculated based on the temperature results obtained from P/Thermal. Results showed an increase in error of the surface heat flux estimates as the applied heat became more localized. For the uniform case, SODDIT provided heat flux estimates with a maximum error of 0.5% whereas for the non-uniform cases, the maximum errors were found to be about 3%, 7%, and 18% for the 360°, 180°, and 90° cases, respectively

  7. Philosophy as Inquiry Aimed at the Absolute Knowledge

    Directory of Open Access Journals (Sweden)

    Ekaterina Snarskaya

    2017-09-01

    Full Text Available Philosophy as absolute knowledge has been studied from two different but closely related approaches: historical and logical. The first approach exposes four main stages in the history of European metaphysics that marked out types of "philosophical absolutism": the evolution of philosophy brought to light metaphysics of being, of method, of morals and of logic, associated with the names of Aristotle, Bacon/Descartes, Kant and Hegel. These forms are then considered in the second approach, which defines them as the subject-matter of philosophy as such. Due to their overall, comprehensive character, the focus of philosophy on them justifies its claim to absoluteness, insofar as philosophy aims at comprehending the world's unity regardless of the philosopher's background, values and other preferences. And that is its prerogative, since no other form of consciousness sets itself this kind of aim. Thus, philosophy is defined as an everlasting attempt to succeed in conceiving the world in all its manifold manifestations. This article attempts to clarify the claim of philosophy to absolute knowledge.

  8. Estimation of heading gyrocompass error using a GPS 3DF system: Impact on ADCP measurements

    Directory of Open Access Journals (Sweden)

    Simón Ruiz

    2002-12-01

    Full Text Available Traditionally the horizontal orientation of a ship (heading) has been obtained from a gyrocompass. This instrument is still used on research vessels but has an estimated error of about 2-3 degrees, inducing a systematic error in the cross-track velocity measured by an Acoustic Doppler Current Profiler (ADCP). The three-dimensional positioning system (GPS 3DF) provides an independent heading measurement with an accuracy better than 0.1 degree. The Spanish research vessel BIO Hespérides has been operating with this new system since 1996. For the first time on this vessel, the data from this new instrument are used to estimate the gyrocompass error. The methodology we use follows the scheme developed by Griffiths (1994), which compares data from the gyrocompass and the GPS system in order to obtain an interpolated error function. In the present work we apply this methodology to mesoscale surveys performed during the observational phase of the OMEGA project, in the Alboran Sea. The heading-dependent gyrocompass error dominated. Errors in gyrocompass heading of 1.4-3.4 degrees have been found, which give a maximum error in the measured cross-track ADCP velocity of 24 cm s⁻¹.
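
    A sketch of the comparison step: model the heading-dependent error as a low-order Fourier series in heading and fit it to GPS-minus-gyro differences. The error shape and noise level below are invented, and Griffiths (1994) uses an interpolated error function rather than this particular parametric form.

```python
import numpy as np

rng = np.random.default_rng(7)
gyro = rng.uniform(0.0, 360.0, 500)                       # gyro headings, deg
true_err = 2.0 + 1.2 * np.sin(np.radians(gyro + 40.0))    # invented error shape
gps = gyro + true_err + rng.normal(0.0, 0.1, gyro.size)   # GPS 3DF, ~0.1 deg

h = np.radians(gyro)
A = np.column_stack([np.ones_like(h), np.sin(h), np.cos(h)])
coef, *_ = np.linalg.lstsq(A, gps - gyro, rcond=None)
print("bias, sin, cos terms:", np.round(coef, 3))

def corrected_heading(g):
    hr = np.radians(g)
    return g + coef[0] + coef[1] * np.sin(hr) + coef[2] * np.cos(hr)

# The cross-track ADCP velocity bias is roughly ship_speed * sin(error),
# so a 2-3 degree heading error on a 5 m/s ship is ~20-25 cm/s.
```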

  9. SU-E-P-21: Impact of MLC Position Errors On Simultaneous Integrated Boost Intensity-Modulated Radiotherapy for Nasopharyngeal Carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Chengqiang, L; Yin, Y; Chen, L [Shandong Cancer Hospital and Institute, 440 Jiyan Road, Jinan, 250117 (China)

    2015-06-15

    Purpose: To investigate the impact of MLC position errors on simultaneous integrated boost intensity-modulated radiotherapy (SIB-IMRT) for patients with nasopharyngeal carcinoma. Methods: To compare the dosimetric differences between the simulated plans and the clinical plans, ten patients with locally advanced NPC treated with SIB-IMRT were enrolled in this study. All plans were calculated with an inverse planning system (Pinnacle3, Philips Medical Systems). Random errors (−2 mm to 2 mm), shift errors (2 mm, 1 mm and 0.5 mm) and systematic extension/contraction errors (±2 mm, ±1 mm and ±0.5 mm) of the MLC leaf position were introduced respectively into the original plans to create the simulated plans. Dosimetry factors were compared between the original and the simulated plans. Results: The dosimetric impact of the random and systematic shift errors of MLC position was insignificant within 2 mm; the maximum changes in D95% of PGTV, PTV1 and PTV2 were −0.92±0.51%, 1.00±0.24% and 0.62±0.17%; the maximum changes in the D0.1cc of the spinal cord and brainstem were 1.90±2.80% and −1.78±1.42%; and the maximum changes in the Dmean of the parotids were 1.36±1.23% and −2.25±2.04%. However, the impact of MLC extension or contraction errors was found to be significant. For 2 mm leaf extension errors, the average changes in D95% of PGTV, PTV1 and PTV2 were 4.31±0.67%, 4.29±0.65% and 4.79±0.82%; the average D0.1cc to the spinal cord and brainstem increased by 7.39±5.25% and 6.32±2.28%; and the average mean dose to the left and right parotids increased by 12.75±2.02% and 13.39±2.17%, respectively. Conclusion: The dosimetric effect was insignificant for random MLC leaf position errors up to 2 mm. There was a high sensitivity of the dose distribution to MLC extension or contraction errors. Attention should be paid to anatomic changes in target organs and anatomical structures during the course of treatment; individualized adaptive radiotherapy is recommended to ensure adequate doses.

  10. Statistical Diagnosis of the Best Weibull Methods for Wind Power Assessment for Agricultural Applications

    Directory of Open Access Journals (Sweden)

    Abul Kalam Azad

    2014-05-01

    Full Text Available The best Weibull distribution methods for the assessment of wind energy potential at different altitudes in desired locations are statistically diagnosed in this study. Seven different methods, namely the graphical method (GM), method of moments (MOM), standard deviation method (STDM), maximum likelihood method (MLM), power density method (PDM), modified maximum likelihood method (MMLM) and equivalent energy method (EEM), were used to estimate the Weibull parameters, and six statistical tools, namely relative percentage error, root mean square error (RMSE), mean percentage error, mean absolute percentage error, chi-square error and analysis of variance, were used to precisely rank the methods. The statistical fits of the measured and calculated wind speed data are assessed to judge the performance of the methods. The capacity factor and total energy generated by a small model wind turbine are calculated by numerical integration using trapezoidal sums and Simpson's rule. The results show that MOM and MLM are the most efficient methods for determining the values of k and c to fit Weibull distribution curves.
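
    Two of the seven estimators are easy to sketch. The MOM version below uses the common power-law approximation for the shape factor k; the MLM version solves the one-dimensional likelihood score equation for k and then computes the scale c in closed form. The test data are synthetic.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

def weibull_mom(v):
    """Method-of-moments fit via the common approximation
    k = (std/mean)^-1.086, then c from the mean."""
    mean, std = v.mean(), v.std(ddof=1)
    k = (std / mean) ** -1.086
    c = mean / gamma(1.0 + 1.0 / k)
    return k, c

def weibull_mlm(v):
    """Maximum-likelihood fit: solve the score equation for k, then c."""
    lnv = np.log(v)
    def score(k):
        w = v ** k
        return (w * lnv).sum() / w.sum() - 1.0 / k - lnv.mean()
    k = brentq(score, 0.1, 20.0)
    c = np.mean(v ** k) ** (1.0 / k)
    return k, c

v = np.random.default_rng(2).weibull(2.0, 5000) * 7.0  # k=2, c=7 synthetic winds
print("MOM:", np.round(weibull_mom(v), 3))
print("MLM:", np.round(weibull_mlm(v), 3))
```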

  11. Absolute pitch: a case study.

    Science.gov (United States)

    Vernon, P E

    1977-11-01

    The auditory skill known as 'absolute pitch' is discussed, and it is shown that this differs greatly in accuracy of identification or reproduction of musical tones from ordinary discrimination of 'tonal height' which is to some extent trainable. The present writer possessed absolute pitch for almost any tone or chord over the normal musical range, from about the age of 17 to 52. He then started to hear all music one semitone too high, and now at the age of 71 it is heard a full tone above the true pitch. Tests were carried out under controlled conditions, in which 68 to 95 per cent of notes were identified as one semitone or one tone higher than they should be. Changes with ageing seem more likely to occur in the elasticity of the basilar membrane mechanisms than in the long-term memory which is used for aural analysis of complex sounds. Thus this experience supports the view that some resolution of complex sounds takes place at the peripheral sense organ, and this provides information which can be incorrect, for interpretation by the cortical centres.

  12. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    Science.gov (United States)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.

    2015-04-01

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr⁻¹ in the 1960s to 0.3 Pg C yr⁻¹ in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr⁻¹ in the 1960s to almost 1.0 Pg C yr⁻¹ during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net global C uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO₂ emissions from the atmosphere.
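
    The effect of temporally correlated random error on a decadal mean can be sketched with an AR(1) noise model; the parameter values below are invented and the model is far simpler than the paper's full framework.

```python
import numpy as np

def decadal_sigma(n_years=10, sigma=0.5, rho=0.95, n_sim=20000):
    """2-sigma uncertainty of a decadal-mean flux when annual estimate
    errors (std dev sigma, in Pg C/yr) carry AR(1) temporal correlation
    rho. Highly correlated errors barely average out; independent ones do."""
    rng = np.random.default_rng(0)
    e = rng.normal(size=(n_sim, n_years))
    for t in range(1, n_years):
        e[:, t] = rho * e[:, t - 1] + np.sqrt(1.0 - rho**2) * e[:, t]
    return 2.0 * sigma * e.mean(axis=1).std()

for rho in (0.0, 0.5, 0.95):
    print(f"rho={rho}: 2-sigma of decadal mean = "
          f"{decadal_sigma(rho=rho):.2f} Pg C/yr")
```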

  13. PERFORMANCE OF OPPORTUNISTIC SPECTRUM ACCESS WITH SENSING ERROR IN COGNITIVE RADIO AD HOC NETWORKS

    Directory of Open Access Journals (Sweden)

    N. ARMI

    2012-04-01

    Full Text Available Sensing in opportunistic spectrum access (OSA) has the responsibility of detecting the available channel by performing a binary hypothesis test for busy or idle states. If the channel is busy, the secondary user (SU) cannot access it and refrains from data transmission. The SU is allowed to access the channel when the primary user (PU) does not use it (idle state). However, the channel is sensed over an imperfect communication link: fading, noise and obstacles can cause sensing errors in PU signal detection. A false alarm detects an idle state as a busy channel, while miss-identification detects a busy state as an idle channel. False detection makes the SU refrain from transmission and reduces the number of bits transmitted; miss-identification, on the other hand, causes the SU to collide with the PU transmission. This paper studies the performance of OSA based on the greedy approach with sensing errors, under a restriction on the maximum collision probability (collision threshold) allowed by the PU network. The throughput of the SU and a spectrum capacity metric are used to evaluate OSA performance and to make comparisons with the error-free case as a function of slot number under the greedy approach. The relations between throughput and signal-to-noise ratio (SNR) for different collision probabilities, as well as false detection for different SNRs, are presented. The results show that CR users can gain reward from previous slots both with and without sensing errors, as indicated by the throughput improvement with increasing slot number. However, sensing an imperfect channel with sensing errors degrades the throughput performance. The throughput of the SU and the spectrum capacity also improve with an increasing maximum collision probability allowed by the PU network; due to frequent collisions with the PU, however, they decrease beyond a certain value of the collision threshold. Computer simulation is used to evaluate and validate this work.
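
    The false-alarm/miss-identification trade-off described above can be captured in a per-slot model; the sketch below is a simplification with hypothetical probabilities, not the paper's greedy slot-selection scheme.

```python
def su_throughput(p_idle, p_fa, p_md, rate=1.0):
    """Expected SU throughput and collision probability per slot.

    p_idle: probability the PU channel is idle
    p_fa:   false-alarm probability (idle sensed as busy -> SU stays silent)
    p_md:   miss-identification probability (busy sensed as idle -> collision)
    """
    useful = p_idle * (1 - p_fa) * rate  # transmissions on truly idle slots
    collision = (1 - p_idle) * p_md      # collisions with the PU
    return useful, collision

useful, collision = su_throughput(p_idle=0.7, p_fa=0.1, p_md=0.05)
print(f"throughput = {useful:.3f}, collision probability = {collision:.3f}")
```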

  14. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  15. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
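
    A toy rendering of the two schemes (assuming, purely for illustration, a linear response of the observable to three unit-normal systematic parameters):

```python
import numpy as np

rng = np.random.default_rng(2)
SENSITIVITIES = np.array([0.3, -0.2, 0.5])  # hypothetical linear sensitivities

def observable(params, n_events=10_000):
    """One toy MC run: mean of events shifted by the systematic parameters."""
    events = rng.normal(params @ SENSITIVITIES, 1.0, n_events)
    return events.mean()

nominal = observable(np.zeros(3))

# Unisim: one MC run per parameter, each shifted by +1 standard deviation.
unisim_var = sum((observable(np.eye(3)[i]) - nominal) ** 2 for i in range(3))

# Multisim: every run draws all parameters from their (unit normal) priors.
multisim_var = np.var([observable(rng.normal(size=3)) for _ in range(200)], ddof=1)

print(f"unisim sigma_sys ~ {np.sqrt(unisim_var):.3f}, "
      f"multisim sigma_sys ~ {np.sqrt(multisim_var):.3f}")  # true value ~0.616
```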

  16. Absolute calibration technique for spontaneous fission sources

    International Nuclear Information System (INIS)

    Zucker, M.S.; Karpf, E.

    1984-01-01

    An absolute calibration technique for a spontaneously fissioning nuclide (which involves no arbitrary parameters) allows unique determination of the detector efficiency for that nuclide, hence of the fission source strength

  17. Absolute luminosity measurements with the LHCb detector at the LHC

    CERN Document Server

    Aaij, R; Adinolfi, M; Adrover, C; Affolder, A; Ajaltouni, Z; Albrecht, J; Alessio, F; Alexander, M; Alkhazov, G; Alvarez Cartelle, P; Alves, A A; Amato, S; Amhis, Y; Anderson, J; Appleby, R B; Aquines Gutierrez, O; Archilli, F; Arrabito, L; Artamonov, A; Artuso, M; Aslanides, E; Auriemma, G; Bachmann, S; Back, J J; Bailey, D S; Balagura, V; Baldini, W; Barlow, R J; Barschel, C; Barsuk, S; Barter, W; Bates, A; Bauer, C; Bauer, Th; Bay, A; Bediaga, I; Belous, K; Belyaev, I; Ben-Haim, E; Benayoun, M; Bencivenni, G; Benson, S; Benton, J; Bernet, R; Bettler, M-O; van Beuzekom, M; Bien, A; Bifani, S; Bizzeti, A; Bjørnstad, P M; Blake, T; Blanc, F; Blanks, C; Blouw, J; Blusk, S; Bobrov, A; Bocci, V; Bondar, A; Bondar, N; Bonivento, W; Borghi, S; Borgia, A; Bowcock, T J V; Bozzi, C; Brambach, T; van den Brand, J; Bressieux, J; Brett, D; Brisbane, S; Britsch, M; Britton, T; Brook, N H; Brown, H; Büchler-Germann, A; Burducea, I; Bursche, A; Buytaert, J; Cadeddu, S; Caicedo Carvajal, J M; Callot, O; Calvi, M; Calvo Gomez, M; Camboni, A; Campana, P; Carbone, A; Carboni, G; Cardinale, R; Cardini, A; Carson, L; Carvalho Akiba, K; Casse, G; Cattaneo, M; Charles, M; Charpentier, Ph; Chiapolini, N; Ciba, K; Cid Vidal, X; Ciezarek, G; Clarke, P E L; Clemencic, M; Cliff, H V; Closier, J; Coca, C; Coco, V; Cogan, J; Collins, P; Constantin, F; Conti, G; Contu, A; Cook, A; Coombes, M; Corti, G; Cowan, G A; Currie, R; D'Almagne, B; D'Ambrosio, C; David, P; De Bonis, I; De Capua, S; De Cian, M; De Lorenzi, F; De Miranda, J M; De Paula, L; De Simone, P; Decamp, D; Deckenhoff, M; Degaudenzi, H; Deissenroth, M; Del Buono, L; Deplano, C; Deschamps, O; Dettori, F; Dickens, J; Dijkstra, H; Diniz Batista, P; Donleavy, S; Dordei, F; Dosil Suárez, A; Dossett, D; Dovbnya, A; Dupertuis, F; Dzhelyadin, R; Eames, C; Easo, S; Egede, U; Egorychev, V; Eidelman, S; van Eijk, D; Eisele, F; Eisenhardt, S; Ekelhof, R; Eklund, L; Elsasser, Ch; d'Enterria, D G; Esperante Pereira, D; Estève, L; Falabella, A; Fanchini, E; Färber, C; Fardell, G; Farinelli, C; Farry, S; Fave, V; Fernandez Albor, V; Ferro-Luzzi, M; Filippov, S; Fitzpatrick, C; Fontana, M; Fontanelli, F; Forty, R; Frank, M; Frei, C; Frosini, M; Furcas, S; Gallas Torreira, A; Galli, D; Gandelman, M; Gandini, P; Gao, Y; Garnier, J-C; Garofoli, J; Garra Tico, J; Garrido, L; Gaspar, C; Gauvin, N; Gersabeck, M; Gershon, T; Ghez, Ph; Gibson, V; Gligorov, V V; Göbel, C; Golubkov, D; Golutvin, A; Gomes, A; Gordon, H; Grabalosa Gándara, M; Graciani Diaz, R; Granado Cardoso, L A; Graugés, E; Graziani, G; Grecu, A; Gregson, S; Gui, B; Gushchin, E; Guz, Yu; Gys, T; Haefeli, G; Haen, C; Haines, S C; Hampson, T; Hansmann-Menzemer, S; Harji, R; Harnew, N; Harrison, J; Harrison, P F; He, J; Heijne, V; Hennessy, K; Henrard, P; Hernando Morata, J A; van Herwijnen, E; Hicks, E; Hofmann, W; Holubyev, K; Hopchev, P; Hulsbergen, W; Hunt, P; Huse, T; Huston, R S; Hutchcroft, D; Hynds, D; Iakovenko, V; Ilten, P; Imong, J; Jacobsson, R; Jaeger, A; Jahjah Hussein, M; Jans, E; Jansen, F; Jaton, P; Jean-Marie, B; Jing, F; John, M; Johnson, D; Jones, C R; Jost, B; Kandybei, S; Karacson, M; Karbach, T M; Keaveney, J; Kerzel, U; Ketel, T; Keune, A; Khanji, B; Kim, Y M; Knecht, M; Koblitz, S; Koppenburg, P; Kozlinskiy, A; Kravchuk, L; Kreplin, K; Kreps, M; Krocker, G; Krokovny, P; Kruse, F; Kruzelecki, K; Kucharczyk, M; Kukulak, S; Kumar, R; Kvaratskheliya, T; La Thi, V N; Lacarrere, D; Lafferty, G; Lai, A; Lambert, D; Lambert, R W; Lanciotti, E; Lanfranchi, G; Langenbruch, C; Latham, T; Le Gac, R; 
van Leerdam, J; Lees, J-P; Lefèvre, R; Leflat, A; Lefrançois, J; Leroy, O; Lesiak, T; Li, L; Li Gioi, L; Lieng, M; Liles, M; Lindner, R; Linn, C; Liu, B; Liu, G; Lopes, J H; Lopez Asamar, E; Lopez-March, N; Luisier, J; Machefert, F; Machikhiliyan, I V; Maciuc, F; Maev, O; Magnin, J; Malde, S; Mamunur, R M D; Manca, G; Mancinelli, G; Mangiafave, N; Marconi, U; Märki, R; Marks, J; Martellotti, G; Martens, A; Martin, L; Martín Sánchez, A; Martinez Santos, D; Massafferri, A; Matev, R; Mathe, Z; Matteuzzi, C; Matveev, M; Maurice, E; Maynard, B; Mazurov, A; McGregor, G; McNulty, R; Mclean, C; Meissner, M; Merk, M; Merkel, J; Messi, R; Miglioranzi, S; Milanes, D A; Minard, M-N; Monteil, S; Moran, D; Morawski, P; Mountain, R; Mous, I; Muheim, F; Müller, K; Muresan, R; Muryn, B; Musy, M; Mylroie-Smith, J; Naik, P; Nakada, T; Nandakumar, R; Nardulli, J; Nasteva, I; Nedos, M; Needham, M; Neufeld, N; Nguyen-Mau, C; Nicol, M; Nies, S; Niess, V; Nikitin, N; Oblakowska-Mucha, A; Obraztsov, V; Oggero, S; Ogilvy, S; Okhrimenko, O; Oldeman, R; Orlandea, M; Otalora Goicochea, J M; Owen, P; Pal, B; Palacios, J; Palutan, M; Panman, J; Papanestis, A; Pappagallo, M; Parkes, C; Parkinson, C J; Passaleva, G; Patel, G D; Patel, M; Paterson, S K; Patrick, G N; Patrignani, C; Pavel-Nicorescu, C; Pazos Alvarez, A; Pellegrino, A; Penso, G; Pepe Altarelli, M; Perazzini, S; Perego, D L; Perez Trigo, E; Pérez-Calero Yzquierdo, A; Perret, P; Perrin-Terrin, M; Pessina, G; Petrella, A; Petrolini, A; Pie Valls, B; Pietrzyk, B; Pilar, T; Pinci, D; Plackett, R; Playfer, S; Plo Casasus, M; Polok, G; Poluektov, A; Polycarpo, E; Popov, D; Popovici, B; Potterat, C; Powell, A; du Pree, T; Prisciandaro, J; Pugatch, V; Puig Navarro, A; Qian, W; Rademacker, J H; Rakotomiaramanana, B; Rangel, M S; Raniuk, I; Raven, G; Redford, S; Reid, M M; dos Reis, A C; Ricciardi, S; Rinnert, K; Roa Romero, D A; Robbe, P; Rodrigues, E; Rodrigues, F; Rodriguez Perez, P; Rogers, G J; Roiser, S; Romanovsky, V; Rouvinet, J; Ruf, T; Ruiz, H; Sabatino, G; Saborido Silva, J J; Sagidova, N; Sail, P; Saitta, B; Salzmann, C; Sannino, M; Santacesaria, R; Santamarina Rios, C; Santinelli, R; Santovetti, E; Sapunov, M; Sarti, A; Satriano, C; Satta, A; Savrie, M; Savrina, D; Schaack, P; Schiller, M; Schleich, S; Schmelling, M; Schmidt, B; Schneider, O; Schopper, A; Schune, M -H; Schwemmer, R; Sciubba, A; Seco, M; Semennikov, A; Senderowska, K; Sepp, I; Serra, N; Serrano, J; Seyfert, P; Shao, B; Shapkin, M; Shapoval, I; Shatalov, P; Shcheglov, Y; Shears, T; Shekhtman, L; Shevchenko, O; Shevchenko, V; Shires, A; Silva Coutinho, R; Skottowe, H P; Skwarnicki, T; Smith, A C; Smith, N A; Sobczak, K; Soler, F J P; Solomin, A; Soomro, F; Souza De Paula, B; Spaan, B; Sparkes, A; Spradlin, P; Stagni, F; Stahl, S; Steinkamp, O; Stoica, S; Stone, S; Storaci, B; Straticiuc, M; Straumann, U; Styles, N; Subbiah, V K; Swientek, S; Szczekowski, M; Szczypka, P; Szumlak, T; T'Jampens, S; Teodorescu, E; Teubert, F; Thomas, C; Thomas, E; van Tilburg, J; Tisserand, V; Tobin, M; Topp-Joergensen, S; Tran, M T; Tsaregorodtsev, A; Tuning, N; Ubeda Garcia, M; Ukleja, A; Urquijo, P; Uwer, U; Vagnoni, V; Valenti, G; Vazquez Gomez, R; Vazquez Regueiro, P; Vecchi, S; Velthuis, J J; Veltri, M; Vervink, K; Viaud, B; Videau, I; Vilasis-Cardona, X; Visniakov, J; Vollhardt, A; Voong, D; Vorobyev, A; Voss, H; Wacker, K; Wandernoth, S; Wang, J; Ward, D R; Webber, A D; Websdale, D; Whitehead, M; Wiedner, D; Wiggers, L; Wilkinson, G; Williams, M P; Williams, M; Wilson, F F; Wishahi, J; Witek, M; 
Witzeling, W; Wotton, S A; Wyllie, K; Xie, Y; Xing, F; Yang, Z; Young, R; Yushchenko, O; Zavertyaev, M; Zhang, F; Zhang, L; Zhang, W C; Zhang, Y; Zhelezov, A; Zhong, L; Zverev, E; Zvyagin, A

    2012-01-01

    Absolute luminosity measurements are of general interest for colliding-beam experiments at storage rings. These measurements are necessary to determine the absolute cross-sections of reaction processes and are valuable to quantify the performance of the accelerator. LHCb has applied two methods to determine the absolute scale of its luminosity measurements for proton-proton collisions at the LHC with a centre-of-mass energy of 7 TeV. In addition to the classic "van der Meer scan" method a novel technique has been developed which makes use of direct imaging of the individual beams using beam-gas and beam-beam interactions. This beam imaging method is made possible by the high resolution of the LHCb vertex detector and the close proximity of the detector to the beams, and allows beam parameters such as positions, angles and widths to be determined. The results of the two methods have comparable precision and are in good agreement. Combining the two methods, an overall precision of 3.5% in the absolute lumi...

  18. SU-E-T-195: Gantry Angle Dependency of MLC Leaf Position Error

    Energy Technology Data Exchange (ETDEWEB)

    Ju, S; Hong, C; Kim, M; Chung, K; Kim, J; Han, Y; Ahn, S; Chung, S; Shin, E; Shin, J; Kim, H; Kim, D; Choi, D [Department of Radiation Oncology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of)

    2014-06-01

    Purpose: The aim of this study was to investigate the gantry angle dependency of the multileaf collimator (MLC) leaf position error. Methods: An automatic MLC quality assurance system (AutoMLCQA) was developed to evaluate the gantry angle dependency of the MLC leaf position error using an electronic portal imaging device (EPID). To eliminate the EPID position error due to gantry rotation, we designed a reference marker (RM) that could be inserted into the wedge mount. After setting up the EPID, a reference image was taken of the RM using an open field. Next, an EPID-based picket-fence test (PFT) was performed without the RM. These procedures were repeated at 45° intervals of the gantry angle. A total of eight reference images and PFT image sets were analyzed using in-house software. The average MLC leaf position error was calculated at five pickets (-10, -5, 0, 5, and 10 cm) in accordance with general PFT guidelines. This test was carried out for four linear accelerators. Results: The average MLC leaf position errors were within the set criterion of <1 mm (actual errors ranged from -0.7 to 0.8 mm) for all gantry angles, but significant gantry angle dependency was observed in all machines. The error was smaller at a gantry angle of 0° but increased toward the positive direction with gantry angle increments in the clockwise direction. The error reached a maximum value at a gantry angle of 90° and then gradually decreased until 180°. In the counter-clockwise rotation of the gantry, the same pattern of error was observed but the error increased in the negative direction. Conclusion: The AutoMLCQA system was useful to evaluate the MLC leaf position error for various gantry angles without the EPID position error. The gantry angle dependency should be considered during MLC leaf position error analysis.

  19. Inherent Error in Asynchronous Digital Flight Controls.

    Science.gov (United States)

    1980-02-01

    operation will be eliminated. If T* is close to T, the inherent error (eA) is a small value. Then the deficiency of the basic model, which is de... indicate the channel failure. To reduce this deficiency, the new model computes a tolerance value equal to the maximum steady-state sample covariance of the

  20. Absolute calibration of TFTR helium proportional counters

    International Nuclear Information System (INIS)

    Strachan, J.D.; Diesso, M.; Jassby, D.; Johnson, L.; McCauley, S.; Munsat, T.; Roquemore, A.L.; Loughlin, M.

    1995-06-01

    The TFTR helium proportional counters are located in the central five (5) channels of the TFTR multichannel neutron collimator. These detectors were absolutely calibrated using a 14 MeV neutron generator positioned at the horizontal midplane of the TFTR vacuum vessel. The neutron generator position was scanned in centimeter steps to determine the collimator aperture width to 14 MeV neutrons and the absolute sensitivity of each channel. Neutron profiles were measured for TFTR plasmas with time resolution between 5 msec and 50 msec depending upon count rates. The He detectors were used to measure the burnup of 1 MeV tritons in deuterium plasmas, the transport of tritium in trace tritium experiments, and the residual tritium levels in plasmas following 50:50 DT experiments

  1. A note on estimating errors from the likelihood function

    International Nuclear Information System (INIS)

    Barlow, Roger

    2005-01-01

    The points at which the log likelihood falls by 1/2 from its maximum value are often used to give the 'errors' on a result, i.e. the 68% central confidence interval. The validity of this is examined for two simple cases: a lifetime measurement and a Poisson measurement. Results are compared with the exact Neyman construction and with the simple Bartlett approximation. It is shown that the accuracy of the log likelihood method is poor, and the Bartlett construction explains why it is flawed
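
    For a concrete instance of the prescription under scrutiny, the sketch below (scipy assumed, count purely illustrative) finds the two points where a Poisson log likelihood falls by 1/2 from its maximum:

```python
import numpy as np
from scipy.optimize import brentq

n = 9  # observed counts; lnL(mu) = n*ln(mu) - mu + const, maximized at mu = n

def delta_lnL(mu):
    return (n * np.log(mu) - mu) - (n * np.log(n) - n)  # lnL relative to its maximum

# 68% central interval: points where the log likelihood has fallen by 1/2.
lo = brentq(lambda mu: delta_lnL(mu) + 0.5, 1e-9, n)
hi = brentq(lambda mu: delta_lnL(mu) + 0.5, n, 10 * n)
print(f"mu = {n} -{n - lo:.2f} +{hi - n:.2f}")  # compare with symmetric sqrt(n) = 3
```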

  2. Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements

    Science.gov (United States)

    Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.

    2014-01-01

    Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005-2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are 0.6 hPa in the free troposphere, with nearly a third 1.0 hPa at 26 km, where the 1.0 hPa error represents 5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within 5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (30 km) can exceed 10 percent (25 percent of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profiles by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with the addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when

  3. Daily CT localization for correcting portal errors in the treatment of prostate cancer

    International Nuclear Information System (INIS)

    Lattanzi, Joseph; McNeely, Shawn; Hanlon, Alexandra; Das, Indra; Schultheiss, Timothy E.; Hanks, Gerald E.

    1998-01-01

    -Gy baseline CT scans, the isocenter changes were quantified to reflect the contribution of positional (surface marker shifts) error and absolute prostate motion relative to the bony pelvis. The maximum daily A/P shift was 7.3 mm. Motion was less than 5 mm in the remaining patients and the overall mean magnitude change was 2.9 mm. The overall variability was quantified by a pooled standard deviation of 1.7 mm. The maximum lateral shifts were less than 3 mm for all patients. With careful attention to patient positioning, maximal portal placement error was reduced to 3 mm. Conclusion: In our experience, prostate motion after 50 Gy was significantly less than previously reported. This may reflect early physiologic changes due to radiation, which restrict prostate motion. This observation is being tested in a separate study. Intrapatient and overall population variance was minimal. With daily isocenter correction of setup and organ motion errors by CT imaging, PTV margins can be significantly reduced or eliminated. We believe this will facilitate further dose escalation in high-risk patients with minimal risk of increased morbidity. This technique may also be beneficial in low-risk patients by sparing more normal surrounding tissue

  4. Stimulus Probability Effects in Absolute Identification

    Science.gov (United States)

    Kent, Christopher; Lamberts, Koen

    2016-01-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of…

  5. Absolute gravity measurements in California

    Science.gov (United States)

    Zumberge, M. A.; Sasagawa, G.; Kappus, M.

    1986-08-01

    An absolute gravity meter that determines the local gravitational acceleration by timing a freely falling mass with a laser interferometer has been constructed. The instrument has made measurements at 11 sites in California, four in Nevada, and one in France. The uncertainty in the results is typically 10 microgal. Repeated measurements have been made at several of the sites; only one shows a substantial change in gravity.
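
    The measurement principle reduces to fitting a quadratic to (time, position) samples of the falling mass; a minimal sketch with synthetic data (local g, drop parameters and nanometre-level noise all hypothetical):

```python
import numpy as np

g_true = 9.7995                 # m/s^2, illustrative local gravity
t = np.linspace(0.0, 0.2, 50)   # sample times during the drop (s)
z = (0.001 + 0.05 * t + 0.5 * g_true * t**2
     + np.random.default_rng(3).normal(0, 2e-9, t.size))  # interferometer noise

# Least-squares fit of z = z0 + v0*t + (g/2)*t^2; g is twice the leading coefficient.
g_fit = 2 * np.polyfit(t, z, 2)[0]
print(f"g = {g_fit:.6f} m/s^2")
```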

  6. Enhanced Pedestrian Navigation Based on Course Angle Error Estimation Using Cascaded Kalman Filters.

    Science.gov (United States)

    Song, Jin Woo; Park, Chan Gook

    2018-04-21

    An enhanced pedestrian dead reckoning (PDR) based navigation algorithm, which uses two cascaded Kalman filters (TCKF) for the estimation of course angle and navigation errors, is proposed. The proposed algorithm uses a foot-mounted inertial measurement unit (IMU), waist-mounted magnetic sensors, and a zero velocity update (ZUPT) based inertial navigation technique with TCKF. The first stage filter estimates the course angle error of a human, which is closely related to the heading error of the IMU. In order to obtain the course measurements, the filter uses magnetic sensors and a position-trace based course angle. For preventing magnetic disturbance from contaminating the estimation, the magnetic sensors are attached to the waistband. Because the course angle error is mainly due to the heading error of the IMU, and the characteristic error of the heading angle is highly dependent on that of the course angle, the estimated course angle error is used as a measurement for estimating the heading error in the second stage filter. At the second stage, an inertial navigation system-extended Kalman filter-ZUPT (INS-EKF-ZUPT) method is adopted. As the heading error is estimated directly by using course-angle error measurements, the estimation accuracy for the heading and yaw gyro bias can be enhanced, compared with the ZUPT-only case, which eventually enhances the position accuracy more efficiently. The performance enhancements are verified via experiments, and the way-point position error for the proposed method is compared with those for the ZUPT-only case and with other cases that use ZUPT and various types of magnetic heading measurements. The results show that the position errors are reduced by a maximum of 90% compared with the conventional ZUPT based PDR algorithms.

  7. Triaxial Accelerometer Error Coefficients Identification with a Novel Artificial Fish Swarm Algorithm

    Directory of Open Access Journals (Sweden)

    Yanbin Gao

    2015-01-01

    Full Text Available Artificial fish swarm algorithm (AFSA) is one of the state-of-the-art swarm intelligence techniques, which is widely utilized for optimization purposes. Triaxial accelerometer error coefficients are relatively unstable with environmental disturbances and aging of the instrument. Therefore, identifying triaxial accelerometer error coefficients accurately and at low cost is of great importance to improving the overall performance of a triaxial accelerometer-based strapdown inertial navigation system (SINS). In this study, a novel artificial fish swarm algorithm (NAFSA) is first introduced that eliminates the demerits of AFSA (failure to use artificial fishes' previous experiences, lack of balance between exploration and exploitation, and high computational cost). In NAFSA, the functional behaviors and overall procedure of AFSA have been improved with some parameter variations. Second, a hybrid accelerometer error coefficient identification algorithm is proposed based on NAFSA and Monte Carlo simulation (MCS) approaches. This combination leads to maximum utilization of the involved approaches for triaxial accelerometer error coefficient identification. Furthermore, the NAFSA-identified coefficients are tested with a 24-position verification experiment and a triaxial accelerometer-based SINS navigation experiment. The merits of MCS-NAFSA are compared with those of the conventional calibration method and optimal AFSA. Finally, both experiment results demonstrate the high efficiency of MCS-NAFSA for triaxial accelerometer error coefficient identification.

  8. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    This review article explains medication errors neatly and legibly, with proper tables that are easy to understand: their definition, the medication error problem, types of medication errors, common causes of medication errors, monitoring of medication errors, consequences of medication errors, prevention of medication errors, and management of medication errors.

  9. Relational versus absolute representation in categorization.

    Science.gov (United States)

    Edwards, Darren J; Pothos, Emmanuel M; Perlman, Amotz

    2012-01-01

    This study explores relational-like and absolute-like representations in categorization. Although there is much evidence that categorization processes can involve information about both the particular physical properties of studied instances and abstract (relational) properties, there has been little work on the factors that lead to one kind of representation as opposed to the other. We tested 370 participants in 6 experiments, in which participants had to classify new items into predefined artificial categories. In 4 experiments, we observed a predominantly relational-like mode of classification, and in 2 experiments we observed a shift toward an absolute-like mode of classification. These results suggest 3 factors that promote a relational-like mode of classification: fewer items per group, more training groups, and the presence of a time delay. Overall, we propose that less information about the distributional properties of a category or weaker memory traces for the category exemplars (induced, e.g., by having smaller categories or a time delay) can encourage relational-like categorization.

  10. High-fidelity target sequencing of individual molecules identified using barcode sequences: de novo detection and absolute quantitation of mutations in plasma cell-free DNA from cancer patients.

    Science.gov (United States)

    Kukita, Yoji; Matoba, Ryo; Uchida, Junji; Hamakawa, Takuya; Doki, Yuichiro; Imamura, Fumio; Kato, Kikuya

    2015-08-01

    Circulating tumour DNA (ctDNA) is an emerging field of cancer research. However, current ctDNA analysis is usually restricted to one or a few mutation sites due to technical limitations. In the case of massively parallel DNA sequencers, the number of false positives caused by a high read error rate is a major problem. In addition, the final sequence reads do not represent the original DNA population due to the global amplification step during the template preparation. We established a high-fidelity target sequencing system for individual molecules identified in plasma cell-free DNA using barcode sequences; this system consists of the following two steps. (i) A novel target sequencing method that adds barcode sequences by adaptor ligation. This method uses linear amplification to eliminate the errors introduced during the early cycles of polymerase chain reaction. (ii) The monitoring and removal of erroneous barcode tags. This process involves identifying the individual molecules that have been sequenced and absolutely quantitating the number of mutations. Using plasma cell-free DNA from patients with gastric or lung cancer, we demonstrated that the system achieved near-complete elimination of false positives and enabled de novo detection and absolute quantitation of mutations in plasma cell-free DNA. © The Author 2015. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.

  11. Implication of spot position error on plan quality and patient safety in pencil-beam-scanning proton therapy

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Juan; Beltran, Chris J., E-mail: beltran.chris@mayo.edu; Herman, Michael G. [Division of Medical Physics, Department of Radiation Oncology, Mayo Clinic, Rochester, Minnesota 55905 (United States)

    2014-08-15

    Purpose: To quantitatively and systematically assess dosimetric effects induced by spot positioning error as a function of spot spacing (SS) on intensity-modulated proton therapy (IMPT) plan quality and to facilitate evaluation of safety tolerance limits on spot position. Methods: Spot position errors (PE) ranging from 1 to 2 mm were simulated. Simple plans were created on a water phantom, and IMPT plans were calculated on two pediatric patients with a brain tumor of 28 and 3 cc, respectively, using a commercial planning system. For the phantom, a uniform dose was delivered to targets located at different depths from 10 to 20 cm with various field sizes from 2 × 2 to 15 × 15 cm². Two nominal spot sizes, 4.0 and 6.6 mm (1σ in water at isocenter), were used for treatment planning. The SS ranged from 0.5σ to 1.5σ, which is 2-6 mm for the small spot size and 3.3-9.9 mm for the large spot size. Various perturbation scenarios of a single spot error and systematic and random multiple spot errors were studied. To quantify the dosimetric effects, percent dose error (PDE) depth profiles and the value of percent dose error at the maximum dose difference (PDE[ΔDmax]) were used for evaluation. Results: A pair of hot and cold spots was created per spot shift. PDE[ΔDmax] is found to be a complex function of PE, SS, spot size, depth, and global spot distribution that can be well defined in simple models. For volumetric targets, the PDE[ΔDmax] is not noticeably affected by the change of field size or target volume within the studied ranges. In general, reducing SS decreased the dose error. For the facility studied, given a single spot error with a PE of 1.2 mm and for both spot sizes, a SS of 1σ resulted in a 2% maximum dose error; a SS larger than 1.25σ substantially increased the dose error and its sensitivity to PE. A similar trend was observed in multiple spot errors (both systematic and random errors). Systematic PE can lead to noticeable hot

  12. Fast maximum likelihood estimation of mutation rates using a birth-death process.

    Science.gov (United States)

    Wu, Xiaowei; Zhu, Hongxiao

    2015-02-07

    Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inference about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates some desirable properties such as consistency and lower mean squared error. However, its application in real experimental data is often hindered by slow computation of likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves substantial improvement on computational speed and is applicable to arbitrarily large number of mutants. In addition, it still retains good accuracy on point estimation. Published by Elsevier Ltd.

  13. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used...

  14. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,

  15. Error exponents for entanglement concentration

    International Nuclear Information System (INIS)

    Hayashi, Masahito; Koashi, Masato; Matsumoto, Keiji; Morikoshi, Fumiaki; Winter, Andreas

    2003-01-01

    Consider entanglement concentration schemes that convert n identical copies of a pure state into a maximally entangled state of a desired size with success probability being close to one in the asymptotic limit. We give the distillable entanglement, the number of Bell pairs distilled per copy, as a function of an error exponent, which represents the rate of decrease in failure probability as n tends to infinity. The formula fills the gap between the least upper bound of distillable entanglement in probabilistic concentration, which is the well-known entropy of entanglement, and the maximum attained in deterministic concentration. The method of types in information theory enables the detailed analysis of the distillable entanglement in terms of the error rate. In addition to the probabilistic argument, we consider another type of entanglement concentration scheme, where the initial state is deterministically transformed into a (possibly mixed) final state whose fidelity to a maximally entangled state of a desired size converges to one in the asymptotic limit. We show that the same formula as in the probabilistic argument is valid for the argument on fidelity by replacing the success probability with the fidelity. Furthermore, we also discuss entanglement yield when optimal success probability or optimal fidelity converges to zero in the asymptotic limit (strong converse), and give the explicit formulae for those cases

  16. Evaluation of probable maximum snow accumulation: Development of a methodology for climate change studies

    Science.gov (United States)

    Klein, Iris M.; Rousseau, Alain N.; Frigon, Anne; Freudiger, Daphné; Gagnon, Patrick

    2016-06-01

    Probable maximum snow accumulation (PMSA) is one of the key variables used to estimate the spring probable maximum flood (PMF). A robust methodology for evaluating the PMSA is imperative so the ensuing spring PMF is a reasonable estimation. This is of particular importance in times of climate change (CC) since it is known that solid precipitation in Nordic landscapes will in all likelihood change over the next century. In this paper, a PMSA methodology based on simulated data from regional climate models is developed. Moisture maximization represents the core concept of the proposed methodology; precipitable water being the key variable. Results of stationarity tests indicate that CC will affect the monthly maximum precipitable water and, thus, the ensuing ratio to maximize important snowfall events. Therefore, a non-stationary approach is used to describe the monthly maximum precipitable water. Outputs from three simulations produced by the Canadian Regional Climate Model were used to give first estimates of potential PMSA changes for southern Quebec, Canada. A sensitivity analysis of the computed PMSA was performed with respect to the number of time-steps used (so-called snowstorm duration) and the threshold for a snowstorm to be maximized or not. The developed methodology is robust and a powerful tool to estimate the relative change of the PMSA. Absolute results are in the same order of magnitude as those obtained with the traditional method and observed data; but are also found to depend strongly on the climate projection used and show spatial variability.

  17. Effect of Absolute From Hibiscus syriacus L. Flower on Wound Healing in Keratinocytes

    Science.gov (United States)

    Yoon, Seok Won; Lee, Kang Pa; Kim, Do-Yoon; Hwang, Dae Il; Won, Kyung-Jong; Lee, Dae Won; Lee, Hwan Myung

    2017-01-01

    Background: Proliferation and migration of keratinocytes are essential for the repair of cutaneous wounds. Hibiscus syriacus L. has been used in Asian medicine; however, research on keratinocytes is inadequate. Objective: To establish the dermatological properties of absolute from Hibiscus syriacus L. flower (HSF) and to provide fundamental research for alternative medicine. Materials and Methods: We identified the composition of HSF absolute using gas chromatography-mass spectrometry analysis. We also examined the effect of HSF absolute in HaCaT cells using the XTT assay, Boyden chamber assay, sprout-out growth assay, and western blotting. We conducted an in-vivo wound healing assay in rat tail-skin. Results: Ten major active compounds were identified from HSF absolute. As determined by the XTT assay, Boyden chamber assay, and sprout-out growth assay results, HSF absolute exhibited similar effects as that of epidermal growth factor on the proliferation and migration patterns of keratinocytes (HaCaT cells), which were significantly increased after HSF absolute treatment. The expression levels of the phosphorylated signaling proteins relevant to proliferation, including extracellular signal-regulated kinase 1/2 (Erk 1/2) and Akt, were also determined by western blot analysis. Conclusion: These results of our in-vitro and ex-vivo studies indicate that HSF absolute induced cell growth and migration of HaCaT cells by phosphorylating both Erk 1/2 and Akt. Moreover, we confirmed the wound-healing effect of HSF on injury of the rat tail-skin. Therefore, our results suggest that HSF absolute is promising for use in cosmetics and alternative medicine. SUMMARY Hibiscus syriacus L. flower absolute increases HaCaT cell migration and proliferation. Hibiscus syriacus L. flower absolute regulates phosphorylation of Erk 1/2 and Akt in HaCaT cells. Treatment with Hibiscus syriacus L. flower absolute induced sprout outgrowth. The wound in the tail-skin of rats was reduced by Hibiscus syriacus

  18. Absolute and convective instability of a liquid sheet with transverse temperature gradient

    International Nuclear Information System (INIS)

    Fu, Qing-Fei; Yang, Li-Jun; Tong, Ming-Xi; Wang, Chen

    2013-01-01

    Highlights: • The spatial–temporal instability of a liquid sheet with thermal effects was studied. • The flow can transit to absolutely unstable with certain flow parameters. • The effects of non-dimensional parameters on the transition were studied. -- Abstract: The spatial–temporal instability behavior of a viscous liquid sheet with temperature difference between the two surfaces was investigated theoretically. The practical situation motivating this investigation is liquid sheet heated by ambient gas, usually encountered in industrial heat transfer and liquid propellant rocket engines. The existing dispersion relation was used, to explore the spatial–temporal instability of viscous liquid sheets with a nonuniform temperature profile, by setting both the wave number and frequency complex. A parametric study was performed in both sinuous and varicose modes to test the influence of dimensionless numbers on the transition between absolute and convective instability of the flow. For a small value of liquid Weber number, or a great value of gas-to-liquid density ratio, the flow was found to be absolutely unstable. The absolute instability was enhanced by increasing the liquid viscosity. It was found that variation of the Marangoni number hardly influenced the absolute instability of the sinuous mode of oscillations; however it slightly affected the absolute instability in the varicose mode

  19. Identifying Lattice, Orbit, And BPM Errors in PEP-II

    International Nuclear Information System (INIS)

    Decker, F.-J.; SLAC

    2005-01-01

    The PEP-II B-Factory is delivering peak luminosities of up to 9.2 · 10³³ cm⁻² s⁻¹. This is very impressive, especially considering our poor understanding of the lattice, absolute orbit and beam position monitor (BPM) system. A few simple MATLAB programs were written to get lattice information, like the betatron functions in a coupled machine (four altogether) and the two dispersions, from the current machine and compare it with the design. Big orbit deviations in the Low Energy Ring (LER) could be explained not by bad BPMs (only 3), but by many strong correctors (on average, one corrector to fix four BPMs). Additionally these programs helped to uncover a sign error in the third-order correction of the BPM system. Further analysis of the current information of the BPMs (sum of all buttons) indicates that there might be still more problematic BPMs

  20. The fading American dream: Trends in absolute income mobility since 1940.

    Science.gov (United States)

    Chetty, Raj; Grusky, David; Hell, Maximilian; Hendren, Nathaniel; Manduca, Robert; Narang, Jimmy

    2017-04-28

    We estimated rates of "absolute income mobility"-the fraction of children who earn more than their parents-by combining data from U.S. Census and Current Population Survey cross sections with panel data from de-identified tax records. We found that rates of absolute mobility have fallen from approximately 90% for children born in 1940 to 50% for children born in the 1980s. Increasing Gross Domestic Product (GDP) growth rates alone cannot restore absolute mobility to the rates experienced by children born in the 1940s. However, distributing current GDP growth more equally across income groups as in the 1940 birth cohort would reverse more than 70% of the decline in mobility. These results imply that reviving the "American dream" of high rates of absolute mobility would require economic growth that is shared more broadly across the income distribution. Copyright © 2017, American Association for the Advancement of Science.

  1. Dosimetric Changes Resulting From Patient Rotational Setup Errors in Proton Therapy Prostate Plans

    International Nuclear Information System (INIS)

    Sejpal, Samir V.; Amos, Richard A.; Bluett, Jaques B.; Levy, Lawrence B.; Kudchadker, Rajat J.; Johnson, Jennifer; Choi, Seungtaek; Lee, Andrew K.

    2009-01-01

    Purpose: To evaluate the dose changes to the target and critical structures from rotational setup errors in prostate cancer patients treated with proton therapy. Methods and Materials: A total of 70 plans were analyzed for 10 patients treated with parallel-opposed proton beams to a dose of 7,600 ⁶⁰Co-cGy-equivalent (CcGE) in 200 CcGE fractions to the clinical target volume (i.e., prostate and proximal seminal vesicles). Rotational setup errors of +3°, -3°, +5°, and -5° (to simulate pelvic tilt) were generated by adjusting the gantry. Horizontal couch shifts of +3° and -3° (to simulate longitudinal setup variability) were also generated. Verification plans were recomputed, keeping the same treatment parameters as the control. Results: All changes shown are for 38 fractions. The mean clinical target volume dose was 7,780 CcGE. The mean change in the clinical target volume dose in the worst-case scenario for all shifts was 2 CcGE (absolute range in the worst-case scenario, 7,729-7,848 CcGE). The mean changes in the critical organ dose in the worst-case scenario were 6 CcGE (bladder), 18 CcGE (rectum), 36 CcGE (anterior rectal wall), and 141 CcGE (femoral heads) for all plans. In general, the percentage change in the worst-case scenario for all shifts to the critical structures was <5%. Deviations in the absolute percentage of organ volume receiving 45 and 70 Gy for the bladder and rectum were <2% for all plans. Conclusion: Patient rotational movements of 3° and 5° and horizontal couch shifts of 3° in prostate proton planning did not confer clinically significant dose changes to the target volumes or critical structures.

  2. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
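
    The TER/LER distinction amounts to two competing delta rules, which a few lines make explicit (learning rate and trial structure hypothetical; this is a sketch of the rules, not the authors' full models). With two cues always reinforced in compound, TER weights share the outcome while LER weights each approach it:

```python
import numpy as np

def train(rule, trials, n_cues=2, alpha=0.1):
    """Delta-rule learning; `trials` is a list of (cue_vector, outcome) pairs."""
    w = np.zeros(n_cues)
    for x, lam in trials:
        x = np.asarray(x, float)
        if rule == "TER":  # total error: outcome minus the compound prediction
            w += alpha * x * (lam - w @ x)
        else:              # LER: each weight sees only its own cue's prediction error
            w += alpha * x * (lam - w * x)
    return w

trials = [([1, 1], 1.0)] * 100        # cues A and B always reinforced together
print("TER:", train("TER", trials))   # weights converge to ~0.5 each
print("LER:", train("LER", trials))   # each weight converges to ~1.0
```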

  3. Isotherms and thermodynamics by linear and non-linear regression analysis for the sorption of methylene blue onto activated carbon: Comparison of various error functions

    International Nuclear Information System (INIS)

    Kumar, K. Vasanth; Porkodi, K.; Rocha, F.

    2008-01-01

    A comparison of the linear and non-linear regression methods for selecting the optimum isotherm was made using the experimental equilibrium data of methylene blue sorption by activated carbon. The r² was used to select the best-fit linear theoretical isotherm. In the case of the non-linear regression method, six error functions, namely the coefficient of determination (r²), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), average relative error (ARE), sum of the errors squared (ERRSQ) and sum of the absolute errors (EABS), were used to predict the parameters involved in the two- and three-parameter isotherms and also to predict the optimum isotherm. For the two-parameter isotherms, MPSD was found to be the best error function in minimizing the error distribution between the experimental equilibrium data and predicted isotherms. In the case of the three-parameter isotherms, r² was found to be the best error function to minimize the error distribution structure between experimental equilibrium data and theoretical isotherms. The present study showed that the size of the error function alone is not a deciding factor in choosing the optimum isotherm. In addition to the size of the error function, the theory behind the predicted isotherm should be verified with the help of experimental data while selecting the optimum isotherm. A coefficient of non-determination, K², was explained and was found to be very useful in identifying the best error function while selecting the optimum isotherm
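
    For reference, five of the six error functions named above (r² aside) have standard closed forms in the isotherm-fitting literature; a sketch with hypothetical uptake data:

```python
import numpy as np

def error_functions(q_exp, q_cal, n_params):
    """Common error functions for isotherm fitting (q: sorbed amounts)."""
    q_exp, q_cal = np.asarray(q_exp, float), np.asarray(q_cal, float)
    n, p = q_exp.size, n_params
    resid = q_exp - q_cal
    return {
        "ERRSQ":  np.sum(resid**2),                          # sum of squared errors
        "EABS":   np.sum(np.abs(resid)),                     # sum of absolute errors
        "ARE":    100 / n * np.sum(np.abs(resid / q_exp)),   # average relative error
        "HYBRID": 100 / (n - p) * np.sum(resid**2 / q_exp),  # hybrid fractional error
        "MPSD":   100 * np.sqrt(np.sum((resid / q_exp)**2) / (n - p)),  # Marquardt's % std dev
    }

# e.g. model-predicted vs measured methylene blue uptake (hypothetical numbers):
print(error_functions([12.1, 18.4, 22.0, 25.3], [11.8, 18.9, 21.5, 25.9], n_params=2))
```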

  4. Influence of calculation error of total field anomaly in strongly magnetic environments

    Science.gov (United States)

    Yuan, Xiaoyu; Yao, Changli; Zheng, Yuanman; Li, Zelin

    2016-04-01

    An assumption made in many magnetic interpretation techniques is that ΔTact (total field anomaly - the measurement given by total field magnetometers, after we remove the main geomagnetic field, T0) can be approximated mathematically by ΔTpro (the projection of the anomalous field vector in the direction of the earth's normal field). In order to meet the demand for high-precision processing of magnetic prospecting, the approximation error E between ΔTact and ΔTpro is studied in this research. Generally speaking, the error E is extremely small for anomalies not greater than about 0.2 T0. However, the error E may be large in highly magnetic environments. This leads to significant effects on subsequent quantitative inference. Therefore, we investigate the error E through numerical experiments on high-susceptibility bodies. A systematic error analysis was made by using a 2-D elliptic cylinder model. The error analysis shows that the magnitude of ΔTact is usually larger than that of ΔTpro. This implies that a theoretical anomaly computed without accounting for the error E overestimates the anomaly associated with the body. It is demonstrated through numerical experiments that the error E is obvious and should not be ignored. It is also shown that the curves of ΔTpro and the error E have a certain symmetry when the directions of magnetization and the geomagnetic field change. To be more specific, Emax (the maximum of the error E) appeared above the center of the magnetic body when the magnetic parameters were fixed. Some other characteristics of the error E are discovered. For instance, the curve of Emax with respect to latitude is symmetrical on both sides of the magnetic equator, the extremum of Emax can always be found in the mid-latitudes, and so on. It is also demonstrated that the error E has great influence on magnetic processing transformations and inversion results. It is concluded that when the bodies have high magnetic susceptibilities, the error E can
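
    In the usual notation (T0 the main geomagnetic field vector, Ta the anomalous field vector, and t̂0 the unit vector along T0), the two quantities compared above, and their difference, can be written compactly; the inequality is a sketch consistent with the abstract's observation that ΔTact usually exceeds ΔTpro, and holds whenever the total field does not reverse direction relative to T0:

```latex
\Delta T_{\mathrm{act}} = \left|\mathbf{T}_0 + \mathbf{T}_a\right| - \left|\mathbf{T}_0\right|,
\qquad
\Delta T_{\mathrm{pro}} = \mathbf{T}_a \cdot \hat{\mathbf{t}}_0,
\qquad
E = \Delta T_{\mathrm{act}} - \Delta T_{\mathrm{pro}} \ge 0
```

    with E negligible while |Ta| stays below roughly 0.2 |T0| but growing rapidly for highly magnetic bodies.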

  5. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    Science.gov (United States)

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and patients aged <18 years. Of these error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  6. Error Detection, Factorization and Correction for Multi-View Scene Reconstruction from Aerial Imagery

    Energy Technology Data Exchange (ETDEWEB)

    Hess-Flores, Mauricio [Univ. of California, Davis, CA (United States)

    2011-11-10

    Scene reconstruction from video sequences has become a prominent computer vision research area in recent years, due to its large number of applications in fields such as security, robotics and virtual reality. Despite recent progress in this field, there are still a number of issues that manifest as incomplete, incorrect or computationally-expensive reconstructions. The engine behind achieving reconstruction is the matching of features between images, where common conditions such as occlusions, lighting changes and texture-less regions can all affect matching accuracy. Subsequent processes that rely on matching accuracy, such as camera parameter estimation, structure computation and non-linear parameter optimization, are also vulnerable to additional sources of error, such as degeneracies and mathematical instability. Detection and correction of errors, along with robustness in parameter solvers, are a must in order to achieve a very accurate final scene reconstruction. However, error detection is in general difficult due to the lack of ground-truth information about the given scene, such as the absolute position of scene points or GPS/IMU coordinates for the camera(s) viewing the scene. In this dissertation, methods are presented for the detection, factorization and correction of error sources present in all stages of a scene reconstruction pipeline from video, in the absence of ground-truth knowledge. Two main applications are discussed. The first set of algorithms derive total structural error measurements after an initial scene structure computation and factorize errors into those related to the underlying feature matching process and those related to camera parameter estimation. A brute-force local correction of inaccurate feature matches is presented, as well as an improved conditioning scheme for non-linear parameter optimization which applies weights on input parameters in proportion to estimated camera parameter errors. Another application is in

  7. Financial impact of errors in business forecasting: a comparative study of linear models and neural networks

    Directory of Open Access Journals (Sweden)

    Claudimar Pereira da Veiga

    2012-08-01

    The importance of demand forecasting as a management tool is a well documented issue. However, it is difficult to measure costs generated by forecasting errors and to find a model that adequately assimilates the detailed operation of each company. In general, when linear models fail in the forecasting process, more complex nonlinear models are considered. Although some studies comparing traditional models and neural networks have been conducted in the literature, the conclusions are usually contradictory. In this sense, the objective was to compare the accuracy of linear methods and neural networks with the current method used by the company. The results of this analysis also served as input to evaluate the influence of errors in demand forecasting on the financial performance of the company. The study was based on historical data from five groups of food products, from 2004 to 2008. In general, one can affirm that all models tested presented good results (much better than the current forecasting method used), with mean absolute percent error (MAPE) around 10%. The total financial impact for the company was 6.05% of annual sales.

  8. Prediction-error variance in Bayesian model updating: a comparative study

    Science.gov (United States)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical to the robustness of structural model updating, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model

  9. Absolute total cross sections for noble gas systems

    International Nuclear Information System (INIS)

    Kam, P. van der.

    1981-01-01

    This thesis deals with experiments on the elastic scattering of Ar, Kr and Xe, using the molecular beam technique. The aim of this work was the measurement of the absolute value of the total cross section and the behaviour of the total cross section, Q, as a function of the relative velocity g of the scattering partners. The author gives an extensive analysis of the glory structure in the total cross section and parametrizes the experimental results using a semiclassical model function. This allows a detailed comparison of the phase and amplitude of the predicted and measured glory undulations. He indicates how the depth and position of the potential well should be changed in order to arrive at an optimum description of the glory structure. With this model function he has also been able to separate the glory and attractive contributions to Q, and using the results from the extrapolation measurements he has obtained absolute values for Qₐ. From these absolute values he has calculated the parameter C₆ that determines the strength of the attractive region of the potential. In two of the four investigated gas combinations the obtained values lie outside the theoretical bounds. (Auth.)

  10. An Absolute Valve for the ITER Neutral Beam Injector

    International Nuclear Information System (INIS)

    Jones, Ch.; Chuilon, B.; Michael, W.

    2006-01-01

    In the ITER reference design a fast shutter was included to limit tritium migration into the beamline vacuum enclosures. The need was recently identified to extend the functionality of the fast shutter to that of an absolute valve in order to facilitate injector maintenance procedures and to satisfy safety requirements in case of an in-vessel loss of coolant event. Three concepts have been examined satisfying the ITER requirements for speed of actuation, sealing performance over the required lifetime, and pressure differential in fault scenarios, namely: a rectangular closure section; a circular cross section; and a rotary JET-type valve. The rectangular section represents the most efficient usage of the available space envelope and leads to a minimum-mass system, although it requires greater total force for a given load per unit length of seal. However, a metallic seal of the 'hard/hard' type, where the seal relies on the elastic properties of the material and does not utilise any type of spring device, can provide the required seal performance with typical loading of 200 kg/cm. The conceptual design of the proposed absolute valve will be presented. The aperture dimensions are 1.45 m high by 0.6 m wide, with a minimum achievable leak rate of 1 × 10⁻⁹ mbar·l/s and a maximum pressure differential of 3 bar across the valve. Sealing force is provided using two seal plates, linked by a 3 mm thick 'omega' diaphragm, by pressurisation of the interspace to 8 bar; this allows for a relative movement of the plates of 2 mm. Movement of the device perpendicular to the beam direction is carried out using a novel magnetic drive in order to transmit the motive force across the vacuum boundary, similar to that demonstrated on a test-rig in an earlier study. The conceptual design includes provision of all the services such as pneumatics and water cooling to cope with the heat loads from neutral beams in quasi steady-state operation and from the ITER plasma. A future programme

  11. Stochastic modelling of the monthly average maximum and minimum temperature patterns in India 1981-2015

    Science.gov (United States)

    Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.

    2018-04-01

    The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal auto-regressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all the years, while this is not true for the minimum temperature series, so the two series are modelled separately. The candidate SARIMA model has been chosen by inspecting the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)₁₂ model is selected for the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are obtained using the maximum-likelihood method with the help of the standard error of residuals. The adequacy of the selected model is determined using correlation diagnostic checking through the ACF, PACF, IACF, and p values of the Ljung-Box test statistic of residuals, and using normal diagnostic checking through the kernel and normal density curves of the histogram and Q-Q plot. Finally, a forecast of the monthly maximum and minimum temperature patterns of India for the next 3 years is presented with the help of the selected model.
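
    To make the modelling step concrete, the sketch below fits a SARIMA (1, 0, 0) × (0, 1, 1)₁₂ model to a log-transformed monthly series with statsmodels and inspects BIC; the series itself is a synthetic placeholder, not the Indian temperature data, so treat it only as an illustration of the workflow described above.

```python
# Minimal sketch, assuming statsmodels is available; `temps` is a synthetic
# stand-in for the monthly average maximum-temperature series (1981-2015).
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

idx = pd.date_range("1981-01", periods=420, freq="MS")
temps = pd.Series(30 + 5 * np.sin(2 * np.pi * idx.month / 12.0)
                  + np.random.normal(0, 0.5, 420), index=idx)

# Log-transform as in the study, fit by maximum likelihood, and use BIC
# for model selection among candidate orders.
fit = SARIMAX(np.log(temps), order=(1, 0, 0),
              seasonal_order=(0, 1, 1, 12)).fit(disp=False)
print(fit.bic)

# Forecast the next 3 years (36 months) and undo the log transform.
print(np.exp(fit.get_forecast(steps=36).predicted_mean).head())
```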

  12. Joint maximum-likelihood magnitudes of presumed underground nuclear test explosions

    Science.gov (United States)

    Peacock, Sheila; Douglas, Alan; Bowers, David

    2017-08-01

    Body-wave magnitudes (mb) of 606 seismic disturbances caused by presumed underground nuclear test explosions at specific test sites between 1964 and 1996 have been derived from station amplitudes collected by the International Seismological Centre (ISC), by a joint inversion for mb and station-specific magnitude corrections. A maximum-likelihood method was used to reduce the upward bias of network mean magnitudes caused by data censoring, where arrivals at stations that do not report arrivals are assumed to be hidden by the ambient noise at the time. Threshold noise levels at each station were derived from the ISC amplitudes using the method of Kelly and Lacoss, which fits to the observed magnitude-frequency distribution a Gutenberg-Richter exponential decay truncated at low magnitudes by an error function representing the low-magnitude threshold of the station. The joint maximum-likelihood inversion is applied to arrivals from the sites: Semipalatinsk (Kazakhstan) and Novaya Zemlya, former Soviet Union; Singer (Lop Nor), China; Mururoa and Fangataufa, French Polynesia; and Nevada, USA. At sites where eight or more arrivals could be used to derive magnitudes and station terms for 25 or more explosions (Nevada, Semipalatinsk and Mururoa), the resulting magnitudes and station terms were fixed and a second inversion carried out to derive magnitudes for additional explosions with three or more arrivals. 93 more magnitudes were thus derived. During processing for station thresholds, many stations were rejected for sparsity of data, obvious errors in reported amplitude, or great departure of the reported amplitude-frequency distribution from the expected left-truncated exponential decay. Abrupt changes in monthly mean amplitude at a station apparently coincide with changes in recording equipment and/or analysis method at the station.

  13. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. The test of the model with unused data within the range of the input parameters shows that the maximum absolute error for the model is about 20%, and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
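
    As an illustration of the kind of network described, the following sketch trains a small back-propagation regressor on synthetic mix-proportion features and reports the same two headline statistics quoted in the abstract; the data, column meanings, and network size are assumptions, not the paper's.

```python
# Minimal sketch with scikit-learn; features and strengths are synthetic
# placeholders for mix proportions, MAS and slump gathered from literature.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 6))                        # cement, w/c, MAS, slump, ...
y = 20 + 40 * (1 - X[:, 1]) + rng.normal(0, 2, 200)   # stand-in strengths (MPa)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
net.fit(scaler.transform(X_tr), y_tr)

# Maximum absolute percent error, and share of predictions with error < 10%.
ape = np.abs(net.predict(scaler.transform(X_te)) - y_te) / y_te * 100.0
print(ape.max(), (ape < 10).mean())
```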

  14. Estimation of Power Consumption in the Circular Sawing of Stone Based on Tangential Force Distribution

    Science.gov (United States)

    Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng

    2018-04-01

    Circular sawing is an important method for the processing of natural stone. The ability to predict sawing power is important in the optimisation, monitoring and control of the sawing process. In this paper, a predictive model (PFD) of sawing power, which is based on the tangential force distribution at the sawing contact zone, was proposed, experimentally validated and modified. With regard to the influence of sawing speed on tangential force distribution, the modified PFD (MPFD) performed with high predictive accuracy across a wide range of sawing parameters, including sawing speed. The mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. The practicability of predicting sawing power with the MPFD from few initial experimental samples was proved in case studies. On the premise of high sample measurement accuracy, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters while lowering the energy consumption of the sawing system was validated. The case study shows that energy use was reduced by 28% by optimising the sawing parameters. The MPFD model can be used to predict sawing power, optimise sawing parameters and control energy.

  15. Comparison of artificial intelligence methods and empirical equations to estimate daily solar radiation

    Science.gov (United States)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2016-08-01

    In the present research, three artificial intelligence methods, including Gene Expression Programming (GEP), Artificial Neural Networks (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), as well as 48 empirical equations (10, 12 and 26 equations were temperature-based, sunshine-based and meteorological parameters-based, respectively) were used to estimate daily solar radiation in Kerman, Iran over the period 1992-2009. To develop the GEP, ANN and ANFIS models, depending on the empirical equations used, various combinations of minimum air temperature, maximum air temperature, mean air temperature, extraterrestrial radiation, actual sunshine duration, maximum possible sunshine duration, sunshine duration ratio, relative humidity and precipitation were considered as inputs in the mentioned intelligent methods. To compare the accuracy of the empirical equations and intelligent models, the root mean square error (RMSE), mean absolute error (MAE), mean absolute relative error (MARE) and determination coefficient (R²) indices were used. The results showed that, in general, the sunshine-based and meteorological parameters-based scenarios in the ANN and ANFIS models presented higher accuracy than the mentioned empirical equations. Moreover, the most accurate method in the studied region was the ANN11 scenario with five inputs. The values of the RMSE, MAE, MARE and R² indices for the mentioned model were 1.850 MJ m⁻² day⁻¹, 1.184 MJ m⁻² day⁻¹, 9.58% and 0.935, respectively.
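
    The four comparison indices are straightforward to compute; a minimal sketch follows, where `obs` and `est` are hypothetical arrays of observed and estimated daily radiation, and R² is taken as one minus the normalized squared error, which is one common reading of the determination coefficient.

```python
# Minimal sketch of the RMSE / MAE / MARE / R2 indices used for comparison.
import numpy as np

def indices(obs, est):
    err = est - obs
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mare = 100.0 * np.mean(np.abs(err) / obs)     # in percent
    r2 = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    return rmse, mae, mare, r2

obs = np.array([18.2, 21.5, 24.9, 26.3])          # MJ m-2 day-1 (hypothetical)
est = np.array([17.0, 22.1, 24.0, 27.2])
print(indices(obs, est))
```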

  16. High-resolution spatial databases of monthly climate variables (1961-2010) over a complex terrain region in southwestern China

    Science.gov (United States)

    Wu, Wei; Xu, An-Ding; Liu, Hong-Bin

    2015-01-01

    Climate data in gridded format are critical for understanding climate change and its impact on the eco-environment. The aim of the current study is to develop spatial databases for three climate variables (maximum temperature, minimum temperature, and relative humidity) over a large region with complex topography in southwestern China. Five widely used approaches, including inverse distance weighting, ordinary kriging, universal kriging, co-kriging, and thin-plate smoothing spline, were tested. Root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) showed that the thin-plate smoothing spline with latitude, longitude, and elevation outperformed the other models. Average RMSE, MAE, and MAPE of the best models were 1.16 °C, 0.74 °C, and 7.38 % for maximum temperature; 0.826 °C, 0.58 °C, and 6.41 % for minimum temperature; and 3.44, 2.28, and 3.21 % for relative humidity, respectively. Spatial datasets of annual and monthly climate variables with 1-km resolution covering the period 1961-2010 were then obtained using the best-performing methods. A comparative study showed that the current outcomes were in good agreement with public datasets. Based on the gridded datasets, changes in the temperature variables were investigated across the study area. Future study might be needed to capture the uncertainty induced by environmental conditions through remote sensing and knowledge-based methods.
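
    SciPy's RBFInterpolator with a thin-plate-spline kernel gives a rough stand-in for the winning method; treating elevation as a third coordinate, as below, is a simplification of the partial-spline formulations typically used for climate surfaces, and the station data are synthetic placeholders.

```python
# Minimal sketch: thin-plate smoothing spline over (lat, lon, elev).
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
lat = rng.uniform(28.0, 32.0, 100)
lon = rng.uniform(105.0, 110.0, 100)
elev = rng.uniform(200.0, 2500.0, 100)
tmax = 25.0 - 0.0065 * elev + rng.normal(0, 0.5, 100)   # lapse-rate toy signal

spline = RBFInterpolator(np.column_stack([lat, lon, elev]), tmax,
                         kernel="thin_plate_spline", smoothing=1.0)

# Evaluate at grid nodes (two example points standing in for a 1-km grid).
print(spline(np.array([[30.0, 107.0, 400.0], [29.5, 108.2, 1800.0]])))
```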

  17. Estimation of perspective errors in 2D2C-PIV measurements for 3D concentrated vortices

    Science.gov (United States)

    Ma, Bao-Feng; Jiang, Hong-Gang

    2018-06-01

    Two-dimensional planar PIV (2D2C) is still extensively employed in flow measurement owing to its availability and reliability, although more advanced PIVs have been developed. It has long been recognized that there exist perspective errors in velocity fields when employing the 2D2C PIV to measure three-dimensional (3D) flows, the magnitude of which depends on out-of-plane velocity and geometric layouts of the PIV. For a variety of vortex flows, however, the results are commonly represented by vorticity fields, instead of velocity fields. The present study indicates that the perspective error in vorticity fields relies on gradients of the out-of-plane velocity along a measurement plane, instead of the out-of-plane velocity itself. More importantly, an estimation approach to the perspective error in 3D vortex measurements was proposed based on a theoretical vortex model and an analysis on physical characteristics of the vortices, in which the gradient of out-of-plane velocity is uniquely determined by the ratio of the maximum out-of-plane velocity to maximum swirling velocity of the vortex; meanwhile, the ratio has upper limits for naturally formed vortices. Therefore, if the ratio is imposed with the upper limits, the perspective error will only rely on the geometric layouts of PIV that are known in practical measurements. Using this approach, the upper limits of perspective errors of a concentrated vortex can be estimated for vorticity and other characteristic quantities of the vortex. In addition, the study indicates that the perspective errors in vortex location, vortex strength, and vortex radius can be all zero for axisymmetric vortices if they are calculated by proper methods. The dynamic mode decomposition on an oscillatory vortex indicates that the perspective errors of each DMD mode are also only dependent on the gradient of out-of-plane velocity if the modes are represented by vorticity.

  18. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    Science.gov (United States)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.

  19. Absolute instabilities of travelling wave solutions in a Keller-Segel model

    Science.gov (United States)

    Davis, P. N.; van Heijster, P.; Marangell, R.

    2017-11-01

    We investigate the spectral stability of travelling wave solutions in a Keller-Segel model of bacterial chemotaxis with a logarithmic chemosensitivity function and a constant, sublinear, and linear consumption rate. Linearising around the travelling wave solutions, we locate the essential and absolute spectrum of the associated linear operators and find that all travelling wave solutions have parts of the essential spectrum in the right half plane. However, we show that in the case of constant or sublinear consumption there exists a range of parameters such that the absolute spectrum is contained in the open left half plane and the essential spectrum can thus be weighted into the open left half plane. For the constant and sublinear consumption rate models we also determine critical parameter values for which the absolute spectrum crosses into the right half plane, indicating the onset of an absolute instability of the travelling wave solution. We observe that this crossing always occurs off of the real axis.

  20. Absolute photonic band gap in 2D honeycomb annular photonic crystals

    International Nuclear Information System (INIS)

    Liu, Dan; Gao, Yihua; Tong, Aihong; Hu, Sen

    2015-01-01

    Highlights: • A two-dimensional honeycomb annular photonic crystal (PC) is proposed. • The absolute photonic band gap (PBG) is studied. • Annular PCs show larger PBGs than usual air-hole PCs for high refractive index. • Annular PCs with anisotropic rods show large PBGs for low refractive index. • There exist optimal parameters to open largest band gaps. - Abstract: Using the plane wave expansion method, we investigate the effects of structural parameters on absolute photonic band gap (PBG) in two-dimensional honeycomb annular photonic crystals (PCs). The results reveal that the annular PCs possess absolute PBGs that are larger than those of the conventional air-hole PCs only when the refractive index of the material from which the PC is made is equal to 4.5 or larger. If the refractive index is smaller than 4.5, utilization of anisotropic inner rods in honeycomb annular PCs can lead to the formation of larger PBGs. The optimal structural parameters that yield the largest absolute PBGs are obtained

  1. Absolute Distance Measurements with Tunable Semiconductor Laser

    Czech Academy of Sciences Publication Activity Database

    Mikel, Břetislav; Číp, Ondřej; Lazar, Josef

    T118, - (2005), s. 41-44 ISSN 0031-8949 R&D Projects: GA AV ČR(CZ) IAB2065001 Keywords : tunable laser * absolute interferometer Subject RIV: BH - Optics, Masers, Lasers Impact factor: 0.661, year: 2004

  2. MEAN OF MEDIAN ABSOLUTE DERIVATION TECHNIQUE MEAN ...

    African Journals Online (AJOL)

    eobe

    development of mean of median absolute derivation technique based on .... of noise mean to estimate the speckle noise variance. Noise mean property ..... Foraging Optimization," International Journal of Advanced ...

  3. Comparison of computer workstation with film for detecting setup errors

    International Nuclear Information System (INIS)

    Fritsch, D.S.; Boxwala, A.A.; Raghavan, S.; Coffee, C.; Major, S.A.; Muller, K.E.; Chaney, E.L.

    1997-01-01

    Purpose/Objective: Workstations designed for portal image interpretation by radiation oncologists provide image displays and image processing and analysis tools that differ significantly from the standard clinical practice of inspecting portal films on a light box. An implied but unproved assumption associated with the clinical implementation of workstation technology is that patient care is improved, or at least not adversely affected. The purpose of this investigation was to conduct observer studies to test the hypothesis that radiation oncologists can detect setup errors using a workstation at least as accurately as when following standard clinical practice. Materials and Methods: A workstation, PortFolio, was designed for radiation oncologists to display and inspect digital portal images for setup errors. PortFolio includes tools to enhance images; align cross-hairs, field edges, and anatomic structures on reference and acquired images; measure distances and angles; and view registered images superimposed on one another. In a well designed and carefully controlled observer study, nine radiation oncologists, including attendings and residents, used PortFolio to detect setup errors in realistic digitally reconstructed portal (DRPR) images computed from the NLM visible human data using a previously described approach. Compared with actual portal images, where absolute truth is ill defined or unknown, the DRPRs contained known translation or rotation errors in the placement of the fields over target regions in the pelvis and head. Twenty DRPRs with randomly induced errors were computed for each site. The induced errors were constrained to a plane at the isocenter of the target volume and perpendicular to the central axis of the treatment beam. Images used in the study were also printed on film. Observers interpreted the film-based images using standard clinical practice. The images were reviewed in eight sessions. During each session five images were

  4. Relative and Absolute Reliability of the Professionalism in Physical Therapy Core Values Self-Assessment Tool.

    Science.gov (United States)

    Furgal, Karen E; Norris, Elizabeth S; Young, Sonia N; Wallmann, Harvey W

    2018-01-01

    Development of professional behaviors in Doctor of Physical Therapy (DPT) students is an important part of professional education. The American Physical Therapy Association (APTA) has developed the Professionalism in Physical Therapy Core Values Self-Assessment (PPTCV-SA) tool to increase awareness of personal values in practice. The PPTCV-SA has been used to measure growth in professionalism following a clinical or educational experience. There are few studies reporting the psychometric properties of the PPTCV-SA. The purpose of this study was to establish properties of relative reliability (intraclass correlation coefficient, ICC) and absolute reliability (standard error of measurement, SEM; minimal detectable change, MDC) of the PPTCV-SA. In this project, 29 first-year students in a DPT program were administered the PPTCV-SA on two occasions, 2 weeks apart. Paired t-tests were used to examine stability in PPTCV-SA scores on the two occasions. ICCs were calculated as a measure of relative reliability and for use in the calculation of the absolute reliability measures of SEM and MDC. Results of paired t-tests indicated differences in the subscale scores between times 1 and 2 were non-significant, except for three subscales: Altruism (p=0.01), Excellence (p=0.05), and Social Responsibility (p=0.02). ICCs for test-retest reliability were moderate-to-good for all subscales, with SEMs ranging from 0.30 to 0.62, and MDC₉₅ ranging from 0.83 to 1.71. These results can guide educators and researchers when determining the likelihood of true change in professionalism following a professional development activity.

  5. The relative and absolute speed of radiographic screen - film systems

    International Nuclear Information System (INIS)

    Lee, In Ja; Huh, Joon

    1993-01-01

    Recently, a large number of new screen-film systems have become available for use in diagnostic radiology. These new screens are made of materials generally known as rare-earth phosphors, which have high x-ray absorption and high x-ray-to-light conversion efficiency compared to calcium tungstate phosphors. The major advantage of these new systems is the reduction of patient exposure due to their high speed or high sensitivity. However, a system with excessively high speed can result in a significant degradation of radiographic image quality. Therefore, speed is an important parameter for users of these systems. Our aim in this work was to determine accurately and precisely the absolute and relative speeds of both new and conventional screen-film systems. We determined the absolute speed under BRH phantom beam quality conditions, and the relative speeds were measured by a split-screen technique under BRH and ANSI phantom beam quality conditions. The absolute and relative speeds were determined for 8 kinds of screen and 4 kinds of film in the regular system, and 7 kinds of screen and 7 kinds of film in the ortho system. In this study we found that New Rx and T-MAT G have the highest film speeds, and that the green system's standard deviation of relative speed is larger than that of the blue system. No relationship was found between the absolute speed and the relative speed in either the ortho or the regular system.

  6. Analysis of neutron reflectivity data: maximum entropy, Bayesian spectral analysis and speckle holography

    International Nuclear Information System (INIS)

    Sivia, D.S.; Hamilton, W.A.; Smith, G.S.

    1991-01-01

    The analysis of neutron reflectivity data to obtain nuclear scattering length density profiles is akin to the notorious phaseless Fourier problem, well known in many fields such as crystallography. Current methods of analysis culminate in the refinement of a few parameters of a functional model, and are often preceded by a long and laborious process of trial and error. We start by discussing the use of maximum entropy for obtaining 'free-form' solutions of the density profile, as an alternative to the trial and error phase when a functional model is not available. Next we consider a Bayesian spectral analysis approach, which is appropriate for optimising the parameters of a simple (but adequate) type of model when the number of parameters is not known. Finally, we suggest a novel experimental procedure, the analogue of astronomical speckle holography, designed to alleviate the ambiguity problems inherent in traditional reflectivity measurements. (orig.)

  7. Absolute continuity of autophage measures on finite-dimensional vector spaces

    Energy Technology Data Exchange (ETDEWEB)

    Raja, C R.E. [Stat-Math Unit, Indian Statistical Institute, Bangalore (India); [Abdus Salam International Centre for Theoretical Physics, Trieste (Italy)]. E-mail: creraja@isibang.ac.in

    2002-06-01

    We consider a class of measures called autophage which was introduced and studied by Szekely for measures on the real line. We show that the autophage measures on finite-dimensional vector spaces over the reals or Q_p are infinitely divisible without idempotent factors and are absolutely continuous with bounded continuous density. We also show that certain semistable measures on such vector spaces are absolutely continuous. (author)

  8. Det demokratiske argument for absolut ytringsfrihed

    DEFF Research Database (Denmark)

    Lægaard, Sune

    2014-01-01

    The article discusses the claim that absolute freedom of expression is a necessary precondition for democratic legitimacy, taking as its starting point a reconstruction of an argument put forward by Ronald Dworkin. The question is why freedom of expression should be a precondition for democratic legitimacy, and why...

  9. Thin-film magnetoresistive absolute position detector

    NARCIS (Netherlands)

    Groenland, J.P.J.

    1990-01-01

    The subject of this thesis is the investigation of a digital absolute position-detection system, which is based on a position-information carrier (i.e. a magnetic tape) with one single code track on the one hand, and an array of magnetoresistive sensors for the detection of the information on the

  10. Error studies for SNS Linac. Part 1: Transverse errors

    International Nuclear Information System (INIS)

    Crandall, K.R.

    1998-01-01

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).

  11. Error evaluation of inelastic response spectrum method for earthquake design

    International Nuclear Information System (INIS)

    Paz, M.; Wong, J.

    1981-01-01

    Two-story, four-story and ten-story shear building-type frames subjected to earthquake excitation were analyzed at several levels of their yield resistance. These frames were subjected at their base to the motion recorded for the north-south component of the 1940 El Centro earthquake, and to an artificial earthquake which would produce the response spectral charts recommended for design. The frames were first subjected to 25% or 50% of the intensity level of these earthquakes. The resulting maximum relative displacement for each story of the frames was assumed to be the yield resistance for the subsequent analyses at 100% of the excitation intensity. The frames analyzed were uniform along their height, with the stiffness adjusted so as to result in a fundamental period of 0.20 seconds for the two-story frame, 0.40 seconds for the four-story frame and 1.0 second for the ten-story frame. Results of the study provided the following conclusions: (1) The percentage error in floor displacement for linear behavior was less than 10%; (2) The percentage error in floor displacement for inelastic (elastoplastic) behavior could be as high as 100%; (3) In most of the cases analyzed, the error increased with damping in the system; (4) As a general rule, the error increased as the modal yield resistance decreased; (5) The error was lower for the structures subjected to the 1940 El Centro earthquake than for the same structures subjected to an artificial earthquake generated from the design response spectra. (orig./HP)

  12. DOES ABSOLUTE SYNONYMY EXIST IN OWERE-IGBO?

    African Journals Online (AJOL)

    USER

    The researcher also interviewed native speakers of the dialect. The study ... The word 'synonymy' means sameness of meaning, i.e., a relationship in which more ... whether absolute synonymy exists in Owere–Igbo or not. ..... 'close this book'.

  13. Absolute atomic oxygen and nitrogen densities in radio-frequency driven atmospheric pressure cold plasmas: Synchrotron vacuum ultra-violet high-resolution Fourier-transform absorption measurements

    International Nuclear Information System (INIS)

    Niemi, K.; O'Connell, D.; Gans, T.; Oliveira, N. de; Joyeux, D.; Nahon, L.; Booth, J. P.

    2013-01-01

    Reactive atomic species play a key role in emerging cold atmospheric pressure plasma applications, in particular, in plasma medicine. Absolute densities of atomic oxygen and atomic nitrogen were measured in a radio-frequency driven non-equilibrium plasma operated at atmospheric pressure using vacuum ultra-violet (VUV) absorption spectroscopy. The experiment was conducted on the DESIRS synchrotron beamline using a unique VUV Fourier-transform spectrometer. Measurements were carried out in plasmas operated in helium with air-like N₂/O₂ (4:1) admixtures. A maximum in the O-atom concentration of (9.1 ± 0.7) × 10²⁰ m⁻³ was found at admixtures of 0.35 vol. %, while the N-atom concentration exhibits a maximum of (5.7 ± 0.4) × 10¹⁹ m⁻³ at 0.1 vol. %.

  14. A Novel Artificial Fish Swarm Algorithm for Recalibration of Fiber Optic Gyroscope Error Parameters

    Directory of Open Access Journals (Sweden)

    Yanbin Gao

    2015-05-01

    The artificial fish swarm algorithm (AFSA) is one of the state-of-the-art swarm intelligent techniques, which is widely utilized for optimization purposes. Fiber optic gyroscope (FOG) error parameters such as scale factors, biases and misalignment errors are relatively unstable, especially with environmental disturbances and the aging of fiber coils. These uncalibrated error parameters are the main reason that the precision of FOG-based strapdown inertial navigation systems (SINS) degrades. This research is mainly on the application of a novel artificial fish swarm algorithm (NAFSA) to FOG error coefficient recalibration/identification. First, the NAFSA avoids the demerits (e.g., lack of use of artificial fishes' previous experiences, lack of balance between exploration and exploitation, and high computational cost) of the standard AFSA during the optimization process. To address these weak points, the functional behaviors and overall procedures of the AFSA have been improved, with some parameters eliminated and several supplementary parameters added. Second, a hybrid FOG error coefficient recalibration algorithm has been proposed based on the NAFSA and Monte Carlo simulation (MCS) approaches. This combination leads to maximum utilization of the involved approaches for FOG error coefficient recalibration. After that, the NAFSA is verified with simulation and experiments and its merits are compared with those of the conventional calibration method and the optimal AFSA. Results demonstrate the high efficiency of the NAFSA on FOG error coefficient recalibration.

  15. A novel artificial fish swarm algorithm for recalibration of fiber optic gyroscope error parameters.

    Science.gov (United States)

    Gao, Yanbin; Guan, Lianwu; Wang, Tingjun; Sun, Yunlong

    2015-05-05

    The artificial fish swarm algorithm (AFSA) is one of the state-of-the-art swarm intelligent techniques, which is widely utilized for optimization purposes. Fiber optic gyroscope (FOG) error parameters such as scale factors, biases and misalignment errors are relatively unstable, especially with environmental disturbances and the aging of fiber coils. These uncalibrated error parameters are the main reason that the precision of FOG-based strapdown inertial navigation systems (SINS) degrades. This research is mainly on the application of a novel artificial fish swarm algorithm (NAFSA) to FOG error coefficient recalibration/identification. First, the NAFSA avoids the demerits (e.g., lack of use of artificial fishes' previous experiences, lack of balance between exploration and exploitation, and high computational cost) of the standard AFSA during the optimization process. To address these weak points, the functional behaviors and overall procedures of the AFSA have been improved, with some parameters eliminated and several supplementary parameters added. Second, a hybrid FOG error coefficient recalibration algorithm has been proposed based on the NAFSA and Monte Carlo simulation (MCS) approaches. This combination leads to maximum utilization of the involved approaches for FOG error coefficient recalibration. After that, the NAFSA is verified with simulation and experiments and its merits are compared with those of the conventional calibration method and the optimal AFSA. Results demonstrate the high efficiency of the NAFSA on FOG error coefficient recalibration.
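
    For orientation, here is a heavily simplified prey-and-move skeleton of a standard AFSA applied to a toy four-parameter fit; it is not the paper's NAFSA, and it omits the swarm/follow behaviors and the Monte Carlo stage entirely.

```python
# Minimal AFSA-style sketch; the objective is a toy stand-in for the
# FOG error-coefficient recalibration cost, not the actual SINS model.
import numpy as np

def cost(x):
    return np.sum(x ** 2)             # minimum at the "true" coefficients (0)

rng = np.random.default_rng(0)
fish = rng.uniform(-5.0, 5.0, size=(30, 4))    # 30 fish, 4 coefficients
visual, step, tries = 2.0, 0.5, 5

for _ in range(300):
    for i in range(fish.shape[0]):
        best = fish[i]
        for _ in range(tries):                  # prey behavior: random probes
            cand = fish[i] + rng.uniform(-visual, visual, 4)
            if cost(cand) < cost(best):
                best = cand
        d = best - fish[i]
        n = np.linalg.norm(d)
        if n > 0.0:
            fish[i] = fish[i] + step * d / n    # move toward the better probe

print(min(cost(f) for f in fish))
```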

  16. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements

  17. Towards absolute neutrino masses

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, Petr [Kellogg Radiation Laboratory 106-38, Caltech, Pasadena, CA 91125 (United States)

    2007-06-15

    Various ways of determining the absolute neutrino masses are briefly reviewed and their sensitivities compared. The apparent tension between the announced but unconfirmed observation of the 0νββ decay and the neutrino mass upper limit based on observational cosmology is used as an example of what could happen eventually. The possibility of a 'nonstandard' mechanism of the 0νββ decay is stressed and the ways of deciding which of the possible mechanisms is actually operational are described. The importance of the 0νββ nuclear matrix elements is discussed and their uncertainty estimated.

  18. Absolute migration of Pacific basin mid-ocean ridges since 85 Ma ...

    African Journals Online (AJOL)

    Mid-ocean ridges are major physiographic features that dominate the world seafloor. Their absolute motion and tectonics are recorded in the magnetic lineations they created. The absolute migration of mid-ocean ridges in the Pacific basin since 85 Ma and its tectonic implications were investigated in this work and the results ...

  19. Error-related anterior cingulate cortex activity and the prediction of conscious error awareness

    Directory of Open Access Journals (Sweden)

    Catherine eOrr

    2012-06-01

    Research examining the neural mechanisms associated with error awareness has consistently identified dorsal anterior cingulate cortex (ACC) activity as necessary but not predictive of conscious error detection. Two recent studies (Steinhauser and Yeung, 2010; Wessel et al., 2011) have found a contrary pattern of greater dorsal ACC activity (in the form of the error-related negativity) during detected errors, but suggested that the greater activity may instead reflect task influences (e.g., response conflict, error probability) and/or individual variability (e.g., statistical power). We re-analyzed fMRI BOLD data from 56 healthy participants who had previously been administered the Error Awareness Task, a motor Go/No-go response inhibition task in which subjects make errors of commission of which they are aware (Aware errors) or unaware (Unaware errors). Consistent with previous data, the activity in a number of cortical regions was predictive of error awareness, including bilateral inferior parietal and insula cortices; however, in contrast to previous studies, including our own smaller-sample studies using the same task, error-related dorsal ACC activity was significantly greater during aware errors when compared to unaware errors. While the significantly faster RT for aware errors (compared to unaware) was consistent with the hypothesis of higher response conflict increasing ACC activity, we could find no relationship between dorsal ACC activity and the error RT difference. The data suggest that individual variability in error awareness is associated with error-related dorsal ACC activity, and therefore this region may be important to conscious error detection, but it remains unclear what task and individual factors influence error awareness.

  20. A large-area, spatially continuous assessment of land cover map error and its impact on downstream analyses.

    Science.gov (United States)

    Estes, Lyndon; Chen, Peng; Debats, Stephanie; Evans, Tom; Ferreira, Stefanus; Kuemmerle, Tobias; Ragazzo, Gabrielle; Sheffield, Justin; Wolf, Adam; Wood, Eric; Caylor, Kelly

    2018-01-01

    Land cover maps increasingly underlie research into socioeconomic and environmental patterns and processes, including global change. It is known that map errors impact our understanding of these phenomena, but quantifying these impacts is difficult because many areas lack adequate reference data. We used a highly accurate, high-resolution map of South African cropland to assess (1) the magnitude of error in several current generation land cover maps, and (2) how these errors propagate in downstream studies. We first quantified pixel-wise errors in the cropland classes of four widely used land cover maps at resolutions ranging from 1 to 100 km, and then calculated errors in several representative "downstream" (map-based) analyses, including assessments of vegetative carbon stocks, evapotranspiration, crop production, and household food security. We also evaluated maps' spatial accuracy based on how precisely they could be used to locate specific landscape features. We found that cropland maps can have substantial biases and poor accuracy at all resolutions (e.g., at 1 km resolution, up to ∼45% underestimates of cropland (bias) and nearly 50% mean absolute error (MAE, describing accuracy); at 100 km, up to 15% underestimates and nearly 20% MAE). National-scale maps derived from higher-resolution imagery were most accurate, followed by multi-map fusion products. Constraining mapped values to match survey statistics may be effective at minimizing bias (provided the statistics are accurate). Errors in downstream analyses could be substantially amplified or muted, depending on the values ascribed to cropland-adjacent covers (e.g., with forest as adjacent cover, carbon map error was 200%-500% greater than in input cropland maps, but ∼40% less for sparse cover types). The average locational error was 6 km (600%). These findings provide deeper insight into the causes and potential consequences of land cover map error, and suggest several recommendations for land
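
    The pixel-wise scores are simple to reproduce; the sketch below computes bias and MAE for a mapped-versus-reference cropland fraction, with the arrays as hypothetical placeholders rather than the South African data.

```python
# Minimal sketch of the bias / MAE scoring of a cropland map against a
# high-accuracy reference, both expressed as cropland fraction per cell.
import numpy as np

def map_error(mapped, reference):
    diff = mapped - reference
    return diff.mean(), np.abs(diff).mean()       # (bias, MAE)

mapped = np.array([0.10, 0.40, 0.00, 0.75])       # hypothetical map values
reference = np.array([0.25, 0.45, 0.10, 0.80])    # hypothetical ground truth
bias, mae = map_error(mapped, reference)
print(f"bias={bias:+.3f} (negative = underestimate), MAE={mae:.3f}")
```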

  1. The application of a Grey Markov Model to forecasting annual maximum water levels at hydrological stations

    Science.gov (United States)

    Dong, Sheng; Chi, Kun; Zhang, Qiyi; Zhang, Xiangdong

    2012-03-01

    Compared with traditional real-time forecasting, this paper proposes a Grey Markov Model (GMM) to forecast the maximum water levels at hydrological stations in the estuary area. The GMM combines the Grey System and Markov theory into a higher-precision model. The GMM takes advantage of the Grey System to predict the trend values and uses the Markov theory to forecast fluctuation values, and thus gives forecast results involving two aspects of information. The procedure for forecasting annual maximum water levels with the GMM contains five main steps: 1) establish the GM(1, 1) model based on the data series; 2) estimate the trend values; 3) establish a Markov Model based on the relative error series; 4) modify the relative errors caused in step 2, and then obtain the relative errors of the second-order estimation; 5) compare the results with measured data and estimate the accuracy. The historical water level records (from 1960 to 1992) at Yuqiao Hydrological Station in the estuary area of the Haihe River near Tianjin, China are utilized to calibrate and verify the proposed model according to the above steps. Every 25 years' data are regarded as a hydro-sequence. Eight groups of simulated results show reasonable agreement between the predicted values and the measured data. The GMM is also applied to the 10 other hydrological stations in the same estuary. The forecast results for all of the hydrological stations are good or acceptable. The feasibility and effectiveness of this new forecasting model have been proved in this paper.
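
    Steps 1-2 of the procedure (the GM(1,1) trend estimate) are compact enough to sketch; the Markov correction of the relative-error series (steps 3-5) is omitted here, and the input levels are hypothetical annual maxima, not the Yuqiao records.

```python
# Minimal GM(1,1) sketch for the trend-value stage of the Grey Markov Model.
import numpy as np

def gm11_forecast(x0, horizon):
    """Fit GM(1,1) to x0 and return fitted values plus `horizon` forecasts."""
    x1 = np.cumsum(x0)                             # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                  # background values
    B = np.column_stack([-z1, np.ones(z1.size)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(x0.size + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])   # back to original scale

levels = np.array([3.1, 3.4, 3.2, 3.6, 3.5, 3.8])  # hypothetical annual maxima (m)
print(gm11_forecast(levels, horizon=3))            # last 3 values are forecasts
```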

  2. Absolute humidity and the seasonal onset of influenza in the continental United States.

    Directory of Open Access Journals (Sweden)

    Jeffrey Shaman

    2010-02-01

    Much of the observed wintertime increase of mortality in temperate regions is attributed to seasonal influenza. A recent reanalysis of laboratory experiments indicates that absolute humidity strongly modulates the airborne survival and transmission of the influenza virus. Here, we extend these findings to the human population level, showing that the onset of increased wintertime influenza-related mortality in the United States is associated with anomalously low absolute humidity levels during the prior weeks. We then use an epidemiological model, in which observed absolute humidity conditions temper influenza transmission rates, to successfully simulate the seasonal cycle of observed influenza-related mortality. The model results indicate that direct modulation of influenza transmissibility by absolute humidity alone is sufficient to produce this observed seasonality. These findings provide epidemiological support for the hypothesis that absolute humidity drives seasonal variations of influenza transmission in temperate regions.
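
    A minimal humidity-forced SIRS sketch conveys the mechanism; the functional form tying specific humidity to R0 and all of the constants below are illustrative assumptions, not the paper's fitted values.

```python
# Minimal SIRS sketch with transmission modulated by specific humidity q(t).
import numpy as np

def simulate(q, N=1e6, D=4.0, L=4 * 365.0):
    """q: daily specific humidity (kg/kg); returns daily new infections."""
    S, I = 0.6 * N, 1e-4 * N
    incidence = []
    for qt in q:
        R0 = 1.2 + 1.2 * np.exp(-180.0 * qt)   # drier air -> higher R0 (assumed)
        new_inf = (R0 / D) * S * I / N
        R = N - S - I
        S += R / L - new_inf                   # immunity loss minus infection
        I += new_inf - I / D                   # infection minus recovery
        incidence.append(new_inf)
    return np.array(incidence)

days = np.arange(3 * 365)
q = 0.01 + 0.006 * np.sin(2 * np.pi * (days - 200) / 365.0)  # seasonal humidity
print(simulate(q).argmax() % 365)              # peak falls in the low-q season
```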

  3. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  4. Maximum likelihood unit rooting test in the presence GARCH: A new test with increased power

    OpenAIRE

    Cook , Steve

    2008-01-01

    The literature on testing the unit root hypothesis in the presence of GARCH errors is extended. A new test based upon the combination of local-to-unity detrending and joint maximum likelihood estimation of the autoregressive parameter and GARCH process is presented. The finite sample distribution of the test is derived under alternative decisions regarding the deterministic terms employed. Using Monte Carlo simulation, the newly proposed ML t-test is shown to exhibit incre...

  5. Overspecification of colour, pattern, and size: Salience, absoluteness, and consistency

    OpenAIRE

    Sammie eTarenskeen; Mirjam eBroersma; Mirjam eBroersma; Bart eGeurts

    2015-01-01

    The rates of overspecification of colour, pattern, and size are compared, to investigate how salience and absoluteness contribute to the production of overspecification. Colour and pattern are absolute attributes, whereas size is relative and less salient. Additionally, a tendency towards consistent responses is assessed. Using a within-participants design, we find similar rates of colour and pattern overspecification, which are both higher than the rate of size overspecification. Using a bet...

  6. Overspecification of color, pattern, and size: salience, absoluteness, and consistency

    OpenAIRE

    Tarenskeen, S.L.; Broersma, M.; Geurts, B.

    2015-01-01

    The rates of overspecification of color, pattern, and size are compared, to investigate how salience and absoluteness contribute to the production of overspecification. Color and pattern are absolute and salient attributes, whereas size is relative and less salient. Additionally, a tendency toward consistent responses is assessed. Using a within-participants design, we find similar rates of color and pattern overspecification, which are both higher than the rate of size overspecification. Usi...

  7. Absolute photoionization cross-section of the methyl radical.

    Science.gov (United States)

    Taatjes, Craig A; Osborn, David L; Selby, Talitha M; Meloni, Giovanni; Fan, Haiyan; Pratt, Stephen T

    2008-10-02

    The absolute photoionization cross-section of the methyl radical has been measured using two completely independent methods. The CH3 photoionization cross-section was determined relative to that of acetone and methyl vinyl ketone at photon energies of 10.2 and 11.0 eV by using a pulsed laser-photolysis/time-resolved synchrotron photoionization mass spectrometry method. The time-resolved depletion of the acetone or methyl vinyl ketone precursor and the production of methyl radicals following 193 nm photolysis are monitored simultaneously by using time-resolved synchrotron photoionization mass spectrometry. Comparison of the initial methyl signal with the decrease in precursor signal, in combination with previously measured absolute photoionization cross-sections of the precursors, yields the absolute photoionization cross-section of the methyl radical; σ(CH3)(10.2 eV) = (5.7 ± 0.9) × 10⁻¹⁸ cm² and σ(CH3)(11.0 eV) = (6.0 ± 2.0) × 10⁻¹⁸ cm². The photoionization cross-section for the vinyl radical determined by photolysis of methyl vinyl ketone is in good agreement with previous measurements. The methyl radical photoionization cross-section was also independently measured relative to that of the iodine atom by comparison of ionization signals from CH3 and I fragments following 266 nm photolysis of methyl iodide in a molecular-beam ion-imaging apparatus. These measurements gave a cross-section of (5.4 ± 2.0) × 10⁻¹⁸ cm² at 10.460 eV, (5.5 ± 2.0) × 10⁻¹⁸ cm² at 10.466 eV, and (4.9 ± 2.0) × 10⁻¹⁸ cm² at 10.471 eV. The measurements allow relative photoionization efficiency spectra of the methyl radical to be placed on an absolute scale and will facilitate quantitative measurements of methyl concentrations by photoionization mass spectrometry.

  8. A semi-empirical formula on the pre-neutron-emission fragment mass distribution in nuclear fission

    International Nuclear Information System (INIS)

    Wang Fucheng; Hu Jimin

    1988-03-01

    A five-Gaussian semi-empirical formula for the pre-neutron-emission fragment mass distribution is given. The absolute standard deviation and maximum departure between calculated values and experimental data for (n,f) and (n,n'f) fission reactions from ²³²Th to ²⁴⁵Cm are approximately 0.4% and 0.8%, respectively. The error grows if the formula is used at higher excitation energies.
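
    The shape of such a representation is easy to illustrate; in the sketch below the weights, centres, and widths are arbitrary placeholders, not the fitted coefficients of the formula.

```python
# Minimal sketch of a five-Gaussian fragment mass distribution Y(A).
import numpy as np

def yield_curve(A, components):
    """components: iterable of (weight, centre, sigma) for the five Gaussians."""
    y = np.zeros_like(A, dtype=float)
    for w, mu, sig in components:
        y += w * np.exp(-0.5 * ((A - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    return y

A = np.arange(60, 181, dtype=float)
components = [(0.35, 95, 5), (0.35, 140, 5),      # main asymmetric humps
              (0.12, 105, 8), (0.12, 130, 8),     # inner shoulders
              (0.06, 118, 12)]                    # symmetric valley component
print(yield_curve(A, components).sum())           # ~1: total yield normalization
```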

  9. Simulation model of ANN based maximum power point tracking controller for solar PV system

    Energy Technology Data Exchange (ETDEWEB)

    Rai, Anil K.; Singh, Bhupal [Department of Electrical and Electronics Engineering, Ajay Kumar Garg Engineering College, Ghaziabad 201009 (India); Kaushika, N.D.; Agarwal, Niti [School of Research and Development, Bharati Vidyapeeth College of Engineering, A-4 Paschim Vihar, New Delhi 110063 (India)

    2011-02-15

    In this paper the simulation model of an artificial neural network (ANN) based maximum power point tracking controller has been developed. The controller consists of an ANN tracker and the optimal control unit. The ANN tracker estimates the voltages and currents corresponding to the maximum power delivered by the solar PV (photovoltaic) array for variable cell temperature and solar radiation. The cell temperature is considered as a function of ambient air temperature, wind speed and solar radiation. The tracker is trained employing a set of 124 patterns using the back-propagation algorithm. The mean square error of the tracker output against the target values is set to be of the order of 10⁻⁵, and the learning process converges successfully in 1281 epochs. The accuracy of the ANN tracker has been validated by employing different test data sets. The control unit uses the estimates of the ANN tracker to adjust the duty cycle of the chopper to the optimum value needed for maximum power transfer to the specified load. (author)
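
    A reduced sketch of the tracker stage is shown below; the input/output mapping, the synthetic stand-ins for the 124 training patterns, and the toy cell model are assumptions standing in for the paper's trained network and chopper control law.

```python
# Minimal sketch of the ANN tracker: (radiation, ambient temp, wind) -> (Vmp, Imp).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.uniform(size=(124, 3)) * [1000.0, 45.0, 10.0]   # G (W/m2), Ta (C), wind
V_mp = 30.0 - 0.08 * X[:, 1] + 0.002 * X[:, 0]          # toy cell model
I_mp = 0.008 * X[:, 0]
scale = X.max(axis=0)

tracker = MLPRegressor(hidden_layer_sizes=(12,), max_iter=5000, random_state=2)
tracker.fit(X / scale, np.column_stack([V_mp, I_mp]))

v, i = tracker.predict(np.array([[800.0, 30.0, 3.0]]) / scale)[0]
print(v, i, v * i)   # the control unit would set the chopper duty cycle
                     # from these estimates for maximum power transfer
```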

  10. The Impact of Error-Management Climate, Error Type and Error Originator on Auditors’ Reporting Errors Discovered on Audit Work Papers

    NARCIS (Netherlands)

    A.H. Gold-Nöteberg (Anna); U. Gronewold (Ulfert); S. Salterio (Steve)

    2010-01-01

    We examine factors affecting the auditor’s willingness to report their own or their peers’ self-discovered errors in working papers subsequent to detailed working paper review. Prior research has shown that errors in working papers are detected in the review process; however, such

  11. Plasma radiation dynamics with the upgraded Absolute Extreme Ultraviolet tomographical system in the Tokamak à Configuration Variable

    Energy Technology Data Exchange (ETDEWEB)

    Tal, B.; Nagy, D.; Veres, G. [Institute for Particle and Nuclear Physics, Wigner Research Centre for Physics, Hungarian Academy of Sciences, Association EURATOM, P. O. Box 49, H-1525 Budapest (Hungary); Labit, B.; Chavan, R.; Duval, B. [Ecole Polytechnique Fédérale de Lausanne (EPFL), Centre de Recherches en Physique des Plasmas, Association EURATOM-Confédération Suisse, EPFL SB CRPP, Station 13, CH-1015 Lausanne (Switzerland)

    2013-12-15

    We introduce an upgraded version of a tomographical system built from Absolute Extreme Ultraviolet-type (AXUV) detectors and installed on the Tokamak à Configuration Variable (TCV). The system is suitable for the investigation of fast radiative processes usually observed in magnetically confined high-temperature plasmas. The upgrade consists of detector protection by movable shutters, modifications to correct original design errors, and improved data evaluation techniques. The short-term sensitivity degradation of the detectors, caused by the plasma radiation itself, has been monitored and found to be severe. The results provided by the system are consistent with measurements obtained with the usual plasma radiation diagnostics installed on TCV. Additionally, the coupling between core plasma radiation and plasma-wall interaction is revealed; this was not possible with the other diagnostics available on TCV.

  12. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, provided the agent keeps a memory of his errors, an acceptable solution is asymptotically reached under mild assumptions. Moreover, big errors can be exploited for faster learning.
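
    A toy sketch of the mechanism, under one possible reading of the model: a discrete action space, an arbitrary payoff and acceptance threshold, and a fixed rejection neighborhood. All names and values are illustrative assumptions.

      import random

      # An action is an "error" if its payoff falls below the threshold.
      # On an error, the agent rejects that action and its neighbors and
      # remembers them, shrinking the remaining search set.
      actions = list(range(100))
      payoff = lambda a: -abs(a - 73)   # unknown to the agent; best action is 73
      threshold = -5                    # payoffs at or above this are acceptable
      radius = 3                        # neighborhood rejected around an error

      rejected = set()                  # memory of errors
      random.seed(0)
      while True:
          candidates = [a for a in actions if a not in rejected]
          a = random.choice(candidates)
          if payoff(a) >= threshold:
              print(f"acceptable action found: {a}")
              break
          # A bigger error could justify rejecting a larger neighborhood,
          # which is the source of the faster learning noted above.
          rejected.update(range(a - radius, a + radius + 1))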

  13. Dynamic Modeling Accuracy Dependence on Errors in Sensor Measurements, Mass Properties, and Aircraft Geometry

    Science.gov (United States)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively degraded and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for the maximum allowable errors in sensor measurements, mass properties, and aircraft geometry needed to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.
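
    A schematic of this kind of Monte Carlo error study: generic least-squares estimation of two hypothetical stability and control derivatives from progressively noisier measurements. The model, parameter names, and noise levels are illustrative assumptions, not those of the NASA study.

      import numpy as np

      rng = np.random.default_rng(2)

      # Truth model: pitching-moment coefficient linear in angle of attack
      # and elevator deflection (an illustrative stand-in for the dynamics).
      true_theta = np.array([-0.8, -1.2])            # Cm_alpha, Cm_delta_e
      alpha = rng.uniform(-0.1, 0.1, 500)            # rad, over a maneuver
      delta = rng.uniform(-0.2, 0.2, 500)
      X = np.column_stack([alpha, delta])
      y_clean = X @ true_theta

      # Progressively deteriorate the "sensor measurements" and re-estimate
      # the derivatives many times at each noise level.
      for noise in [0.001, 0.005, 0.02]:
          errs = []
          for _ in range(200):                       # Monte Carlo trials
              y = y_clean + rng.normal(0, noise, y_clean.size)
              theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
              errs.append(np.max(np.abs(theta_hat - true_theta)))
          print(f"noise sd {noise:.3f}: mean max-abs parameter error "
                f"{np.mean(errs):.4f}")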

  14. Absolute dissipative drift-wave instabilities in tokamaks

    International Nuclear Information System (INIS)

    Chen, L.; Chance, M.S.; Cheng, C.Z.

    1979-07-01

    Contrary to previous theoretical predictions, it is shown that the dissipative drift-wave instabilities are absolute in tokamak plasmas. The existence of unstable eigenmodes is shown to be associated with a new eigenmode branch induced by the finite toroidal couplings.

  15. Internal descriptions of absolute Borel classes

    Czech Academy of Sciences Publication Activity Database

    Holický, P.; Pelant, Jan

    2004-01-01

    Vol. 141, No. 1 (2004), pp. 87-104. ISSN 0166-8641. R&D Projects: GA ČR GA201/00/1466; GA ČR GA201/03/0933. Institutional research plan: CEZ:AV0Z1019905. Keywords: absolute Borel class * complete sequence of covers * open map. Subject RIV: BA - General Mathematics. Impact factor: 0.364, year: 2004

  16. Absolute photoionization cross-section of the propargyl radical

    Energy Technology Data Exchange (ETDEWEB)

    Savee, John D.; Welz, Oliver; Taatjes, Craig A.; Osborn, David L. [Sandia National Laboratories, Combustion Research Facility, Livermore, California 94551 (United States); Soorkia, Satchin [Institut des Sciences Moleculaires d' Orsay, Universite Paris-Sud 11, Orsay (France); Selby, Talitha M. [Department of Chemistry, University of Wisconsin, Washington County Campus, West Bend, Wisconsin 53095 (United States)

    2012-04-07

    Using synchrotron-generated vacuum-ultraviolet radiation and multiplexed time-resolved photoionization mass spectrometry we have measured the absolute photoionization cross-section for the propargyl (C{sub 3}H{sub 3}) radical, {sigma}{sub propargyl}{sup ion}(E), relative to the known absolute cross-section of the methyl (CH{sub 3}) radical. We generated a stoichiometric 1:1 ratio of C{sub 3}H{sub 3} : CH{sub 3} from 193 nm photolysis of two different C{sub 4}H{sub 6} isomers (1-butyne and 1,3-butadiene). Photolysis of 1-butyne yielded values of {sigma}{sub propargyl}{sup ion}(10.213 eV)=(26.1{+-}4.2) Mb and {sigma}{sub propargyl}{sup ion}(10.413 eV)=(23.4{+-}3.2) Mb, whereas photolysis of 1,3-butadiene yielded values of {sigma}{sub propargyl}{sup ion}(10.213 eV)=(23.6{+-}3.6) Mb and {sigma}{sub propargyl}{sup ion}(10.413 eV)=(25.1{+-}3.5) Mb. These measurements place our relative photoionization cross-section spectrum for propargyl on an absolute scale between 8.6 and 10.5 eV. The cross-section derived from our results is approximately a factor of three larger than previous determinations.
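
    Under the stoichiometric 1:1 assumption above, and ignoring mass-discrimination corrections (a leading-order sketch, not the full published analysis), the relative measurement reduces to a ratio of ion signals:

      \sigma_{\mathrm{propargyl}}^{\mathrm{ion}}(E) \approx
        \sigma_{\mathrm{CH_3}}^{\mathrm{ion}}(E)\,
        \frac{S_{\mathrm{C_3H_3}}(E)}{S_{\mathrm{CH_3}}(E)},

    where S denotes the background-subtracted ion signal of each 193 nm photolysis product.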

  17. Absolute Standardization Of 90Sr Using a 4π (PC) Detector

    International Nuclear Information System (INIS)

    Pujadi; Wardiyanto, Gatot; Wijaya I, Nazar; Sudarsono

    2000-01-01

    Absolute standardization of 90Sr using a 4π proportional detector has been carried out. The counting efficiency for beta particles depends mainly on self-absorption and VYNS-foil absorption. The self-absorption corrections were determined by means of the beta maximum energy vs. absorption curve. The self-absorption value for 90Sr (Emax = 0.55 MeV) is 2.5%, and that for 90Y (Emax = 2.28 MeV) is 0.5%. The VYNS absorptions were determined by counting with varying VYNS thicknesses; the corrections are 0.70% at Ebeta max = 0.55 MeV and 0.1% at Ebeta max = 2.28 MeV for every VYNS sheet with a thickness of 15 μg/cm^2. The samples were made from the source without dilution and with dilution factors of 6.9 and 9.9. The activities of the samples were determined by counting using two methods, with and without variation of the VYNS thickness. The results obtained with varying dilution and VYNS thickness agree fairly well; the difference is 1.68%. The 90Sr activities measured with these methods also agree with the activity of the master solution, with discrepancies from 0.5% to 1.16%.
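
    A minimal sketch of how such absorption corrections enter the activity determination, assuming a simple multiplicative efficiency model and ignoring the 90Y daughter contribution; the function and count rate below are illustrative, and only the percentages are taken from the abstract.

      # Assumed multiplicative correction model for a 4-pi proportional
      # counter: each absorption loss reduces the counting efficiency.
      def corrected_activity(count_rate, self_abs, vyns_abs_per_sheet, n_sheets):
          """Return activity (Bq) from a net count rate (s^-1), applying
          self-absorption and per-sheet VYNS absorption corrections."""
          efficiency = (1 - self_abs) * (1 - vyns_abs_per_sheet) ** n_sheets
          return count_rate / efficiency

      # Values quoted above: 2.5% self-absorption for the 0.55 MeV 90Sr
      # branch and 0.70% VYNS absorption per sheet at that energy.
      A_sr90 = corrected_activity(1000.0, self_abs=0.025,
                                  vyns_abs_per_sheet=0.0070, n_sheets=2)
      print(f"90Sr activity: {A_sr90:.1f} Bq")  # 1000 s^-1 -> ~1040 Bq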

  18. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historical overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed.

  19. Characterizing absolute piezoelectric microelectromechanical system displacement using an atomic force microscope

    International Nuclear Information System (INIS)

    Evans, J.; Chapman, S.

    2014-01-01

    Piezoresponse Force Microscopy (PFM) is a popular tool for the study of ferroelectric and piezoelectric materials at the nanometer level. Progress in the development of piezoelectric MEMS fabrication is highlighting the need to characterize absolute displacement at the nanometer and Ångstrom scales, something Atomic Force Microscopy (AFM) might do but PFM cannot. Absolute displacement is measured by executing a polarization measurement of the ferroelectric or piezoelectric capacitor in question while monitoring the absolute vertical position of the sample surface with a stationary AFM cantilever. Two issues dominate the execution and precision of such a measurement: (1) the small amplitude of the electrical signal from the AFM at the Ångstrom level and (2) calibration of the AFM. The authors have developed a calibration routine and test technique for mitigating the two issues, making it possible to use an atomic force microscope to measure both the movement of a capacitor surface as well as the motion of a micro-machine structure actuated by that capacitor. The theory, procedures, pitfalls, and results of using an AFM for absolute piezoelectric measurement are provided
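
    One way such a measurement can be reduced to numbers, as a sketch: convert the calibrated cantilever deflection signal to absolute displacement and normalize by the drive voltage. The sensitivity, drive amplitude, and mock signal below are illustrative assumptions, not the authors' calibration.

      import numpy as np

      sensitivity_nm_per_V = 25.0   # from AFM calibration, e.g. a force curve
      drive_voltage_V = 5.0         # amplitude applied to the capacitor under test

      # Deflection signal of the stationary cantilever during the polarization
      # measurement (mock sinusoidal response plus noise, in volts).
      t = np.linspace(0, 1e-2, 1000)
      rng = np.random.default_rng(3)
      signal_V = 4e-3 * np.sin(2 * np.pi * 1e3 * t) + rng.normal(0, 2e-4, t.size)

      # Amplitude via rms (a sinusoid's amplitude is sqrt(2) times its rms).
      amp_V = np.sqrt(2) * np.std(signal_V)
      displacement_nm = amp_V * sensitivity_nm_per_V
      print(f"surface displacement amplitude: {displacement_nm:.3f} nm")
      print(f"effective d33: {displacement_nm / drive_voltage_V * 1e3:.1f} pm/V")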

  20. Characterizing absolute piezoelectric microelectromechanical system displacement using an atomic force microscope

    Energy Technology Data Exchange (ETDEWEB)

    Evans, J., E-mail: radiant@ferrodevices.com; Chapman, S., E-mail: radiant@ferrodevices.com [Radiant Technologies, Inc., 2835C Pan American Fwy NE, Albuquerque, New Mexico 87107 (United States)

    2014-08-14

    Piezoresponse Force Microscopy (PFM) is a popular tool for the study of ferroelectric and piezoelectric materials at the nanometer level. Progress in the development of piezoelectric MEMS fabrication is highlighting the need to characterize absolute displacement at the nanometer and Ångstrom scales, something Atomic Force Microscopy (AFM) might do but PFM cannot. Absolute displacement is measured by executing a polarization measurement of the ferroelectric or piezoelectric capacitor in question while monitoring the absolute vertical position of the sample surface with a stationary AFM cantilever. Two issues dominate the execution and precision of such a measurement: (1) the small amplitude of the electrical signal from the AFM at the Ångstrom level and (2) calibration of the AFM. The authors have developed a calibration routine and test technique for mitigating the two issues, making it possible to use an atomic force microscope to measure both the movement of a capacitor surface as well as the motion of a micro-machine structure actuated by that capacitor. The theory, procedures, pitfalls, and results of using an AFM for absolute piezoelectric measurement are provided.