WorldWideScience

Sample records for absolute percentage error

  1. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    Science.gov (United States)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

    Prediction using a forecasting method is one of the most important activities in an organization. Selecting an appropriate forecasting method matters, but the percentage error of a method matters even more if decision makers are to act on the forecasts. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the error of the least-squares method gave a percentage error of 9.77%, and it was concluded that the least-squares method is suitable for time series and trend data.
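    Both metrics are simple to compute. A minimal sketch in Python (the data here are illustrative, not the paper's):

```python
import numpy as np

def mean_absolute_deviation(actual, forecast):
    """MAD: average magnitude of the forecast errors, in the data's units."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs(actual - forecast))

def mean_absolute_percentage_error(actual, forecast):
    """MAPE: average |error| relative to the actual value, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Illustrative series and a trend forecast for it
actual = [112, 118, 132, 129, 121]
forecast = [110, 120, 128, 133, 119]
print(mean_absolute_deviation(actual, forecast))         # 2.8
print(mean_absolute_percentage_error(actual, forecast))  # ~2.25 %
```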

  2. Alternatives to accuracy and bias metrics based on percentage errors for radiation belt modeling applications

    Energy Technology Data Exchange (ETDEWEB)

    Morley, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-01

    This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
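    Following the definitions summarized above, both recommended metrics are short computations over the log of the accuracy ratio; a sketch (valid only for strictly positive observed and predicted values, such as particle fluxes):

```python
import numpy as np

def median_log_accuracy_ratio(observed, predicted):
    """Bias measure: median of ln(predicted/observed); 0 means no bias,
    positive values mean systematic over-prediction."""
    q = np.asarray(predicted, float) / np.asarray(observed, float)
    return np.median(np.log(q))

def median_symmetric_accuracy(observed, predicted):
    """Accuracy measure: a percentage error that penalizes over- and
    under-prediction by the same factor equally."""
    q = np.asarray(predicted, float) / np.asarray(observed, float)
    return 100.0 * (np.exp(np.median(np.abs(np.log(q)))) - 1.0)
```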

  3. Assessing energy forecasting inaccuracy by simultaneously considering temporal and absolute errors

    International Nuclear Information System (INIS)

    Frías-Paredes, Laura; Mallor, Fermín; Gastón-Romeo, Martín; León, Teresa

    2017-01-01

    Highlights:
    • A new method to match time series is defined to assess energy forecasting accuracy.
    • This method relies on a new family of step patterns that optimizes the MAE.
    • A new definition of the Temporal Distortion Index between two series is provided.
    • A parametric extension controls both the temporal distortion index and the MAE.
    • Pareto optimal transformations of the forecast series are obtained for both indexes.
    Abstract: Recent years have seen a growing trend in wind and solar energy generation globally, and it is expected that an important percentage of total energy production will come from these energy sources. However, they present inherent variability that implies fluctuations in energy generation that are difficult to forecast. Thus, forecasting errors play a considerable role in the impacts and costs of renewable energy integration, management, and commercialization. This study presents an important advance in the task of analyzing prediction models, in particular the timing component of prediction error, improving previous pioneering results. A new method to match time series is defined in order to assess energy forecasting accuracy. This method relies on a new family of step patterns, an essential component of the algorithm that evaluates the temporal distortion index (TDI). This family minimizes the mean absolute error (MAE) of the transformation with respect to the reference series (the real energy series) and also allows detailed control of the temporal distortion entailed in the prediction series. The simultaneous consideration of temporal and absolute errors allows the use of Pareto frontiers as characteristic error curves. Real examples of wind energy forecasts are used to illustrate the results.

  4. Study of errors in absolute flux density measurements of Cassiopeia A

    International Nuclear Information System (INIS)

    Kanda, M.

    1975-10-01

    An error analysis for absolute flux density measurements of Cassiopeia A is discussed. The lower-bound quadrature-accumulation error for state-of-the-art measurements of the absolute flux density of Cas A around 7 GHz is estimated to be 1.71% at the 3-sigma limit. The corresponding practicable error for a careful but not state-of-the-art measurement is estimated to be 4.46% at the 3-sigma limit.
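    "Quadrature accumulation" combines independent error components by root-sum-of-squares. A sketch with a hypothetical component budget (not Kanda's actual numbers):

```python
import math

def quadrature_accumulation(component_errors_pct):
    """Root-sum-of-squares combination of independent error components."""
    return math.sqrt(sum(e ** 2 for e in component_errors_pct))

# Hypothetical 3-sigma component errors, in percent, assumed independent
components = [1.0, 0.8, 0.9, 0.5]
print(f"{quadrature_accumulation(components):.2f} %")  # 1.64 %
```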

  5. Sub-nanometer periodic nonlinearity error in absolute distance interferometers

    Science.gov (United States)

    Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang

    2015-05-01

    Periodic nonlinearity, which can produce errors at the nanometer scale, has become a main problem limiting absolute distance measurement accuracy. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed, which eliminates frequency and/or polarization mixing. Furthermore, the strict requirement on the laser source polarization is greatly relaxed. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated so that their optical paths do not overlap. Thus the main causes of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and leakage of beam, are eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.

  6. Corrected Lymphocyte Percentages Reduce the Differences in Absolute CD4+ T Lymphocyte Counts between Dual-Platform and Single-Platform Flow Cytometric Approaches.

    Science.gov (United States)

    Noulsri, Egarit; Abudaya, Dinar; Lerdwana, Surada; Pattanapanyasat, Kovit

    2018-03-13

    To determine whether a corrected lymphocyte percentage could reduce bias in the absolute cluster of differentiation (CD)4+ T lymphocyte counts obtained via dual-platform (DP) vs standard single-platform (SP) flow cytometry. The correction factor (CF) for the lymphocyte percentages was calculated at 6 laboratories. The absolute CD4+ T lymphocyte counts in 300 blood specimens from patients infected with human immunodeficiency virus (HIV) were determined using the DP and SP methods. Applying the CFs revealed that 4 sites showed a decrease in the mean bias of absolute CD4+ T lymphocyte counts determined via DP vs standard SP (-109 vs -84 cells/μL, -80 vs -58 cells/μL, -52 vs -45 cells/μL, and -32 vs 1 cells/μL). However, 2 participating laboratories showed an increase in the mean bias (-42 vs -49 cells/μL and -20 vs -69 cells/μL). Use of the corrected lymphocyte percentage shows potential for decreasing the difference in CD4 counts between DP and the standard SP method.
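    The dual-platform count multiplies a hematology-analyzer WBC count by the flow-cytometric lymphocyte and CD4+ fractions; a corrected lymphocyte percentage rescales the middle factor. A sketch assuming a simple multiplicative CF (the study's exact CF derivation is not reproduced here, and all numbers are hypothetical):

```python
def dp_cd4_count(wbc_per_ul, lymph_pct, cd4_pct_of_lymph, cf=1.0):
    """Dual-platform absolute CD4 count: WBC x lymphocyte fraction
    (optionally rescaled by a site-specific correction factor) x CD4 fraction."""
    return wbc_per_ul * (lymph_pct * cf / 100.0) * (cd4_pct_of_lymph / 100.0)

# Hypothetical: 6000 WBC/uL, 30% lymphocytes, 20% CD4+ among lymphocytes
print(dp_cd4_count(6000, 30, 20))           # 360 cells/uL, uncorrected
print(dp_cd4_count(6000, 30, 20, cf=0.93))  # ~335 cells/uL with CF = 0.93
```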

  7. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    Science.gov (United States)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: When the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error of each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only do the two metrics measure the characteristics of the probability distributions of modeling errors differently, but the effects of these characteristics on the overall expected error also differ. Most notably, under SQ error all bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
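    The additive decomposition under SQ error, and its absence under ABS error, can be checked numerically. A Monte Carlo sketch with assumed bias, model-variance, and noise components (illustrative, not the study's remote sensing data):

```python
import numpy as np

rng = np.random.default_rng(0)
truth, bias, model_sd, noise_sd = 2.0, 0.5, 0.3, 0.4  # assumed components
n = 1_000_000

pred = truth + bias + rng.normal(0.0, model_sd, n)  # model varies around a biased mean
obs = truth + rng.normal(0.0, noise_sd, n)          # observations are noisy truth

sq_err = np.mean((pred - obs) ** 2)
abs_err = np.mean(np.abs(pred - obs))

# SQ error decomposes additively into bias^2 + variance + noise variance
print(sq_err, bias**2 + model_sd**2 + noise_sd**2)  # both ~0.50
# ABS error admits no such purely additive decomposition
print(abs_err)
```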

  8. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    Science.gov (United States)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, because the distributions of model errors are neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
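    Both recommended statistics read directly off the empirical cumulative distribution of unsigned errors. A sketch with hypothetical benchmarking errors:

```python
import numpy as np

def p_abs_error_below(errors, threshold):
    """Probability that a new calculation has |error| below the threshold."""
    return np.mean(np.abs(errors) < threshold)

def error_amplitude_at_confidence(errors, level=0.95):
    """Maximal error amplitude expected at the chosen confidence level."""
    return np.quantile(np.abs(errors), level)

# Hypothetical, skewed, non-zero-centered model errors (units arbitrary)
rng = np.random.default_rng(1)
errors = rng.gamma(2.0, 0.5, 500) - 0.3

print(p_abs_error_below(errors, 1.0))               # e.g. ~0.7
print(error_amplitude_at_confidence(errors, 0.95))  # 95th percentile of |error|
```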

  9. Optimal quantum error correcting codes from absolutely maximally entangled states

    Science.gov (United States)

    Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio

    2018-02-01

    Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT-correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed form expressions for AME states of n parties with local dimension \

  10. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    Science.gov (United States)

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
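    For reference, a minimal sketch of the five-point discretization the algorithm builds on, solved with plain Jacobi iteration on a single grid (the paper's three-grid error-control scheme is not reproduced here):

```python
import numpy as np

def solve_poisson_5pt(f, h, n_iter=5000):
    """Jacobi iteration for u_xx + u_yy = f on the unit square, u = 0 on the
    boundary, using the five-point finite-difference stencil."""
    u = np.zeros_like(f)
    for _ in range(n_iter):
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                + u[1:-1, 2:] + u[1:-1, :-2]
                                - h * h * f[1:-1, 1:-1])
    return u

n = 33
h = 1.0 / (n - 1)
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
f = -2.0 * np.pi ** 2 * np.sin(np.pi * x) * np.sin(np.pi * y)
u = solve_poisson_5pt(f, h)

exact = np.sin(np.pi * x) * np.sin(np.pi * y)
print(np.max(np.abs(u - exact)))  # absolute error; shrinks as O(h^2)
```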

  11. Error Budget for a Calibration Demonstration System for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    Science.gov (United States)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2013-01-01

    A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach on-orbit absolute accuracy that allows climate change observations to survive data gaps exist at NIST in the laboratory, but it still needs to be demonstrated that these advances can move successfully from NIST to NASA and/or instrument-vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference, and methods for laboratory-based absolute calibration suitable for climate-quality data collections, is given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise behavior. Methods for demonstrating this error budget are also presented.

  12. Mapping the absolute magnetic field and evaluating the quadratic Zeeman-effect-induced systematic error in an atom interferometer gravimeter

    Science.gov (United States)

    Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim

    2017-09-01

    Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10⁻⁸ m/s² ≈ 10⁻⁹ g). This paper reports on the experimental investigation of Raman spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to the quadratic Zeeman effect. We discuss the dependence of the magnetic field measurement uncertainty on Raman pulse duration and frequency step size, present the vector and tensor light-shift-induced magnetic field measurement offsets, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and for reducing the quadratic Zeeman-effect-induced systematic error in Raman transition-based precision measurements, such as atom interferometer gravimeters.

  13. A new accuracy measure based on bounded relative error for time series forecasting.

    Science.gov (United States)

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, particular attention is paid to the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation of the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with a user-selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on the geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
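    A sketch of sMAPE and of UMBRAE as constructed in the paper: each relative absolute error is bounded into [0, 1) against a benchmark error, averaged, then unscaled. The naive random-walk benchmark used here is an illustrative choice:

```python
import numpy as np

def smape(actual, forecast):
    """Symmetric MAPE (the 0-200% variant reviewed in the paper)."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(2.0 * np.abs(f - a) / (np.abs(a) + np.abs(f)))

def umbrae(actual, forecast, benchmark):
    """Unscaled Mean Bounded Relative Absolute Error.
    UMBRAE < 1 means the forecast beats the benchmark on average.
    (Assumes the candidate and benchmark errors never vanish together.)"""
    a = np.asarray(actual, float)
    e = np.abs(a - np.asarray(forecast, float))    # candidate errors
    eb = np.abs(a - np.asarray(benchmark, float))  # benchmark errors
    brae = e / (e + eb)                            # bounded into [0, 1)
    mbrae = brae.mean()
    return mbrae / (1.0 - mbrae)                   # unscale

actual = np.array([10.0, 12.0, 15.0, 14.0, 13.0])
forecast = np.array([11.0, 12.5, 14.0, 14.5, 12.0])
naive = np.concatenate(([actual[0]], actual[:-1]))  # random-walk benchmark
print(smape(actual, forecast), umbrae(actual, forecast, naive))
```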

  14. An Empirical Analysis for the Prediction of a Financial Crisis in Turkey through the Use of Forecast Error Measures

    Directory of Open Access Journals (Sweden)

    Seyma Caliskan Cavdar

    2015-08-01

    In this study, we examine whether the forecast errors obtained by ANN models affect the outbreak of financial crises. Additionally, we investigate how much the asymmetric information and forecast errors are reflected in the output values. We used the exchange rate of USD/TRY (USD), the Borsa Istanbul 100 Index (BIST), and the gold price (GP) as the output variables of our Artificial Neural Network (ANN) models. We observe that the fitted ANN model has strong explanatory power for the 2001 and 2008 crises. Our calculations of error measures such as the mean absolute percentage error (MAPE), the symmetric mean absolute percentage error (sMAPE), and Shannon entropy (SE) clearly demonstrate the degree of asymmetric information and the deterioration of the financial system prior to, during, and after the financial crisis. We found that the asymmetric information prior to a crisis is larger than in other periods. This can be interpreted as an early warning signal before potential crises. This evidence seems to favor an asymmetric-information view of financial crises.
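    Of the three measures, Shannon entropy is the least standardized in this use; a generic histogram-based estimate over a series (the bin count is an assumption, and the paper's exact estimator may differ):

```python
import numpy as np

def shannon_entropy(series, bins=20):
    """Shannon entropy (in bits) of a series' empirical distribution."""
    counts, _ = np.histogram(series, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]  # drop empty bins: 0 * log(0) is taken as 0
    return -np.sum(p * np.log2(p))
```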

  15. Errors of absolute methods of reactor neutron activation analysis caused by non-1/E epithermal neutron spectra

    International Nuclear Information System (INIS)

    Erdtmann, G.

    1993-08-01

    A sufficiently accurate characterization of the neutron flux and spectrum, i.e. the determination of the thermal flux, the flux ratio and the epithermal flux spectrum shape factor, α, is a prerequisite for all types of absolute and monostandard methods of reactor neutron activation analysis. A convenient method for these measurements is the bare triple monitor method. However, the results of this method are very imprecise, because of high error propagation factors from the counting errors of the monitor activities. Procedures are described to calculate the errors of the flux parameters, the α-dependent cross-section ratios, and the analytical results from the errors of the activities of the monitor isotopes. They are included in FORTRAN programs which also allow a graphical representation of the results. A great number of examples were calculated for ten different irradiation facilities in four reactors and for 28 elements. Plots of the results are presented and discussed. (orig./HP)

  16. Measurement error correction in the least absolute shrinkage and selection operator model when validation data are available.

    Science.gov (United States)

    Vasquez, Monica M; Hu, Chengcheng; Roe, Denise J; Halonen, Marilyn; Guerra, Stefano

    2017-01-01

    Measurement of serum biomarkers by multiplex assays may be more variable as compared to single biomarker assays. Measurement error in these data may bias parameter estimates in regression analysis, which could mask true associations of serum biomarkers with an outcome. The Least Absolute Shrinkage and Selection Operator (LASSO) can be used for variable selection in these high-dimensional data. Furthermore, when the distribution of measurement error is assumed to be known or estimated with replication data, a simple measurement error correction method can be applied to the LASSO method. However, in practice the distribution of the measurement error is unknown and is expensive to estimate through replication, both in monetary cost and in the need for a greater amount of sample, which is often limited in quantity. We adapt an existing bias correction approach by estimating the measurement error using validation data, in which a subset of serum biomarkers are re-measured on a random subset of the study sample. We evaluate this method using simulated data and data from the Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD). We show that the bias in parameter estimation is reduced and variable selection is improved.

  17. Errors and limits in the determination of plasma electron density by measuring the absolute values of the emitted continuum radiation intensity

    International Nuclear Information System (INIS)

    Bilbao, L.; Bruzzone, H.; Grondona, D.

    1994-01-01

    The reliable determination of a plasma electron structure requires a good knowledge of the errors affecting the employed technique. A technique based on the measurements of the absolute light intensity emitted by travelling plasma structures in plasma focus devices has been used, but it can be easily modified to other geometries and even to stationary plasma structures with time-varying plasma densities. The purpose of this work is to discuss in some detail the errors and limits of this technique. Three separate errors are shown: the minimum size of the density structure that can be resolved, an overall error in the measurements themselves, and an uncertainty in the shape of the density profile. (author)

  18. Incorrect Weighting of Absolute Performance in Self-Assessment

    Science.gov (United States)

    Jeffrey, Scott A.; Cozzarin, Brian

    Students spend much of their lives attempting to assess their aptitude for numerous tasks. For example, they expend a great deal of effort to determine their academic standing given a distribution of grades. This research finds that students use their absolute performance, or percentage correct, as a yardstick for their self-assessment, even when relative standing is much more informative. An experiment shows that this reliance on absolute performance for self-evaluation causes a misallocation of time and financial resources. Reasons for this inappropriate responsiveness to absolute performance are explored.

  19. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    Science.gov (United States)

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization algorithm (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has the best prediction precision and the smallest prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors.

  20. Power and sample size calculations in the presence of phenotype errors for case/control genetic association studies

    Directory of Open Access Journals (Sweden)

    Finch Stephen J

    2005-04-01

    Background: Phenotype error causes reduction in power to detect genetic association. We present a quantification of the effect of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors, together with equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) individual as a control (respectively, case). Power is verified by computer simulation. Results: Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001, and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected individual as a case becomes infinitely large, while the cost of misclassifying an affected individual as a control approaches 0. Conclusion: Our work enables researchers to quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
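    The power computation described above reduces to a non-central chi-square tail probability once the non-centrality parameter (which absorbs sample sizes, genotype frequencies, prevalence, and misclassification rates) is known. A sketch using SciPy, with an illustrative ncp value:

```python
from scipy.stats import chi2, ncx2

def power_chi2(ncp, df=1, alpha=0.05):
    """Asymptotic power of the Pearson chi-square test: the probability
    that a non-central chi-square variate exceeds the null critical value."""
    crit = chi2.ppf(1.0 - alpha, df)
    return ncx2.sf(crit, df, ncp)

print(power_chi2(ncp=7.85))  # ~0.80 at alpha = 0.05 with df = 1
```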

  1. Absolute and Relative Reliability of the Timed 'Up & Go' Test and '30-second Chair-Stand' Test in Hospitalised Patients with Stroke

    DEFF Research Database (Denmark)

    Lyders Johansen, Katrine; Derby Stistrup, Rikke; Skibdal Schjøtt, Camilla

    2016-01-01

    OBJECTIVE: The timed 'Up & Go' test and '30-second Chair-Stand' test are simple clinical outcome measures widely used to assess functional performance. The reliability of both tests in hospitalised stroke patients is unknown. The purpose was to investigate the relative and absolute reliability of both tests in patients admitted to an acute stroke unit. METHODS: Sixty-two patients (men, n = 41) attended two test sessions separated by a one-hour rest. Intraclass correlation coefficients (ICC2,1) were calculated to assess relative reliability. Absolute reliability was expressed as Standard Error of Measurement (with 95% certainty, SEM95) and Smallest Real Difference (SRD), and as percentages of their respective means if heteroscedasticity was observed in Bland-Altman plots (SEM95% and SRD%). RESULTS: ICC values for interrater reliability were 0.97 and 0.99 for the timed 'Up & Go' test and 0.88 and 0...
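    SEM and SRD follow from the between-subject SD and the ICC; a sketch using the common definitions (the study reports a 95% variant, SEM95, which scales SEM by a constant factor):

```python
import math

def sem(sd, icc):
    """Standard Error of Measurement from between-subject SD and ICC."""
    return sd * math.sqrt(1.0 - icc)

def srd(sd, icc):
    """Smallest Real Difference: the 95% bound for a real change between
    two measurements, 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem(sd, icc)

# Hypothetical timed 'Up & Go' data: between-subject SD = 10 s, ICC = 0.97
print(sem(10.0, 0.97))  # ~1.73 s
print(srd(10.0, 0.97))  # ~4.80 s
```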

  2. Laboratory and field evaluation of the Partec CyFlow miniPOC for absolute and relative CD4 T-cell enumeration.

    Directory of Open Access Journals (Sweden)

    Djibril Wade

    Full Text Available A new CD4 point-of-care instrument, the CyFlow miniPOC, which provides absolute and percentage CD4 T-cells, used for screening and monitoring of HIV-infected patients in resource-limited settings, was introduced recently. We assessed the performance of this novel instrument in a reference laboratory and in a field setting in Senegal.A total of 321 blood samples were obtained from 297 adults and 24 children, all HIV-patients attending university hospitals in Dakar, or health centers in Ziguinchor. Samples were analyzed in parallel on CyFlow miniPOC, FACSCount CD4 and FACSCalibur to assess CyFlow miniPOC precision and accuracy.At the reference lab, CyFlow miniPOC, compared to FACSCalibur, showed an absolute mean bias of -12.6 cells/mm3 and a corresponding relative mean bias of -2.3% for absolute CD4 counts. For CD4 percentages, the absolute mean bias was -0.1%. Compared to FACSCount CD4, the absolute and relative mean biases were -31.2 cells/mm3 and -4.7%, respectively, for CD4 counts, whereas the absolute mean bias for CD4 percentages was 1.3%. The CyFlow miniPOC was able to classify HIV-patients eligible for ART with a sensitivity of ≥ 95% at the different ART-initiation thresholds (200, 350 and 500 CD4 cells/mm3. In the field lab, the room temperature ranged from 30 to 35°C during the working hours. At those temperatures, the CyFlow miniPOC, compared to FACSCount CD4, had an absolute and relative mean bias of 7.6 cells/mm3 and 2.8%, respectively, for absolute CD4 counts, and an absolute mean bias of 0.4% for CD4 percentages. The CyFlow miniPOC showed sensitivity equal or greater than 94%.The CyFlow miniPOC showed high agreement with FACSCalibur and FACSCount CD4. The CyFlow miniPOC provides both reliable absolute CD4 counts and CD4 percentages even under the field conditions, and is suitable for monitoring HIV-infected patients in resource-limited settings.

  3. Partial sums of arithmetical functions with absolutely convergent ...

    Indian Academy of Sciences (India)

    For an arithmetical function f with absolutely convergent Ramanujan expansion, we derive an asymptotic formula for the partial sum $\sum_{n \le N} f(n)$ with an explicit error term. As a corollary we obtain new results about sum-of-divisors functions and Jordan's totient functions.

  4. Dosimetric Changes Resulting From Patient Rotational Setup Errors in Proton Therapy Prostate Plans

    International Nuclear Information System (INIS)

    Sejpal, Samir V.; Amos, Richard A.; Bluett, Jaques B.; Levy, Lawrence B.; Kudchadker, Rajat J.; Johnson, Jennifer; Choi, Seungtaek; Lee, Andrew K.

    2009-01-01

    Purpose: To evaluate the dose changes to the target and critical structures from rotational setup errors in prostate cancer patients treated with proton therapy. Methods and Materials: A total of 70 plans were analyzed for 10 patients treated with parallel-opposed proton beams to a dose of 7,600 60Co-cGy-equivalent (CcGE) in 200-CcGE fractions to the clinical target volume (i.e., prostate and proximal seminal vesicles). Rotational setup errors of +3°, -3°, +5°, and -5° (to simulate pelvic tilt) were generated by adjusting the gantry. Horizontal couch shifts of +3° and -3° (to simulate longitudinal setup variability) were also generated. Verification plans were recomputed, keeping the same treatment parameters as the control. Results: All changes shown are for 38 fractions. The mean clinical target volume dose was 7,780 CcGE. The mean change in the clinical target volume dose in the worst-case scenario for all shifts was 2 CcGE (absolute range in the worst-case scenario, 7,729-7,848 CcGE). The mean changes in the critical organ doses in the worst-case scenario were 6 CcGE (bladder), 18 CcGE (rectum), 36 CcGE (anterior rectal wall), and 141 CcGE (femoral heads) for all plans. In general, the percentage change in the worst-case scenario for all shifts to the critical structures was <5%. Deviations in the absolute percentage of organ volume receiving 45 and 70 Gy for the bladder and rectum were <2% for all plans. Conclusion: Patient rotational movements of 3° and 5° and horizontal couch shifts of 3° in prostate proton planning did not confer clinically significant dose changes to the target volumes or critical structures.

  5. Relative and absolute risk in epidemiology and health physics

    International Nuclear Information System (INIS)

    Goldsmith, R.; Peterson, H.T. Jr.

    1983-01-01

    The health risk from ionizing radiation commonly is expressed in two forms: (1) the relative risk, which is the percentage increase in the natural disease rate, and (2) the absolute or attributable risk, which represents the difference between the natural rate and the rate associated with the agent in question. Relative risk estimates for ionizing radiation generally are higher than those expressed as the absolute risk. This raises the question of which risk estimator is the most appropriate under different conditions. The absolute risk has generally been used for radiation risk assessment, although mathematical combinations such as the arithmetic or geometric mean of both the absolute and relative risks have also been used. Combinations of the two risk estimators are not valid because the absolute and relative risk are not independent variables. Both human epidemiologic studies and animal experimental data can be found to illustrate the functional relationship between the natural cancer risk and the risk associated with radiation. This implies that the radiation risk estimate derived from one population may not be appropriate for predictions in another population, unless it is adjusted for the difference in the natural disease incidence between the two populations.
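    A worked toy example of the two estimators (hypothetical rates) shows why they transfer differently between populations:

```python
natural_rate = 100 / 100_000  # hypothetical baseline incidence
exposed_rate = 150 / 100_000  # hypothetical incidence with the agent

relative_risk = exposed_rate / natural_rate  # 1.5, i.e. a 50% increase
absolute_risk = exposed_rate - natural_rate  # 50 excess cases per 100,000

# The same relative risk applied to a population with double the baseline
# incidence predicts 100 excess cases per 100,000 -- twice the absolute
# risk above, which is the transferability problem the abstract describes.
print(relative_risk, absolute_risk * 100_000)
```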

  6. Definition of correcting factors for absolute radon content measurement formula

    International Nuclear Information System (INIS)

    Ji Changsong; Xiao Ziyun; Yang Jianfeng

    1992-01-01

    The absolute method of radon content measurement is based on the Thomas radon measurement formula. It was found experimentally that a systematic error exists in radon content measurements made by means of the Thomas formula. By analysing the behaviour of radon daughters, five factors (filter efficiency, detector construction factor, self-absorbance, energy spectrum factor, and gravity factor) were introduced into the Thomas formula, so that the systematic error was eliminated. The measuring methods for the five factors are given.

  7. Error Analysis of Determining Airplane Location by Global Positioning System

    OpenAIRE

    Hajiyev, Chingiz; Burat, Alper

    1999-01-01

    This paper studies the error analysis of determining airplane location by the global positioning system (GPS) using a statistical testing method. The Newton-Raphson method positions the airplane at the intersection point of four spheres. Absolute errors, relative errors and standard deviations have been calculated. The results show that the positioning error of the airplane varies with the coordinates of the GPS satellites and the airplane.

  8. Absolute GPS Positioning Using Genetic Algorithms

    Science.gov (United States)

    Ramillien, G.

    A new inverse approach for restoring the absolute coordinates of a ground-based station from three or four observed GPS pseudo-ranges is proposed. This stochastic method is based on simulations of natural evolution named genetic algorithms (GA). These iterative procedures provide fairly good and robust estimates of the absolute positions in the Earth's geocentric reference system. For comparison/validation, GA results are compared to those obtained using the classical linearized least-squares scheme for the determination of the XYZ location proposed by Bancroft (1985), which is strongly limited by the number of available observations (i.e. here, the number of input pseudo-ranges must be four). The r.m.s. accuracy of the non-linear cost function reached by this latter method is typically ~10⁻⁴ m², corresponding to ~300-500 m accuracies for each geocentric coordinate. However, GA can provide more acceptable solutions (r.m.s. errors < 10⁻⁵ m²), even when only three instantaneous pseudo-ranges are used, such as after a loss of lock during a GPS survey. Tuned GA parameters used in different simulations are N=1000 starting individuals, with Pc=60-70% and Pm=30-40% for the crossover probability and mutation rate, respectively. Statistical tests on the ability of GA to recover acceptable coordinates in the presence of significant levels of noise are made by simulating nearly 3000 random samples of erroneous pseudo-ranges. Here, two main sources of measurement error are considered in the inversion: (1) typical satellite-clock errors and/or 300-metre-variance atmospheric delays, and (2) Geometrical Dilution of Precision (GDOP) due to the particular GPS satellite configuration at the time of acquisition. Extracting valuable information even from low-quality starting range observations, GA offer an interesting alternative for high-precision GPS positioning.

  9. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    Science.gov (United States)

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  10. Systematic errors of EIT systems determined by easily-scalable resistive phantoms

    International Nuclear Information System (INIS)

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-01-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design

  11. Absolute measurement of 152Eu

    International Nuclear Information System (INIS)

    Baba, Hiroshi; Baba, Sumiko; Ichikawa, Shinichi; Sekine, Toshiaki; Ishikawa, Isamu

    1981-08-01

    A new method for the absolute measurement of 152Eu was established, based on the 4πβ-γ spectroscopic anti-coincidence method. It is a coincidence counting method consisting of a 4πβ-counter and a Ge(Li) γ-ray detector, in which the effective counting efficiencies of the 4πβ-counter for β-rays, conversion electrons, and Auger electrons were obtained by taking the intensity ratios of certain γ-rays between the single spectrum and the spectrum coincident with the pulses from the 4πβ-counter. First, in order to verify the method, three different methods of absolute measurement were performed with a prepared 60Co source, and excellent agreement was found among their results. Next, the 4πβ-γ spectroscopic coincidence measurement was applied to 152Eu sources prepared by irradiating an enriched 151Eu target in a reactor. The result was compared with that obtained by γ-ray spectrometry using a 152Eu standard source supplied by LMRI. They agreed with each other within an error of 2%. (author)

  12. Online absolute pose compensation and steering control of industrial robot based on six degrees of freedom laser measurement

    Science.gov (United States)

    Yang, Juqing; Wang, Dayong; Fan, Baixing; Dong, Dengfeng; Zhou, Weihu

    2017-03-01

    In-situ intelligent manufacturing for large-volume equipment requires industrial robots with high-accuracy absolute positioning and orientation steering control. Conventional robots mainly employ offline calibration technology to identify and compensate key robotic parameters. However, the dynamic and static parameters of a robot change nonlinearly, so it is not possible to acquire a robot's actual parameters and control its absolute pose with high accuracy within a large workspace by offline calibration in real time. This study proposes a real-time online absolute pose steering control method for an industrial robot based on six-degrees-of-freedom laser tracking measurement, which adopts comprehensive compensation and correction of the differential movement variables. First, the pose steering control system and robot kinematics error model are constructed, and then the pose error compensation mechanism and algorithm are introduced in detail. By accurately acquiring the position and orientation of the robot end-tool, mapping the computed Jacobian matrix of the joint variables, and correcting the joint variables, real-time online absolute pose compensation for an industrial robot is accurately implemented in simulations and experimental tests. The average positioning error is 0.048 mm and the orientation accuracy is better than 0.01°. The results demonstrate that the proposed method is feasible, and the online absolute accuracy of a robot is sufficiently enhanced.

  13. Pseudo-absolute quantitative analysis using gas chromatography – Vacuum ultraviolet spectroscopy – A tutorial

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Ling [Department of Chemistry & Biochemistry, The University of Texas at Arlington, Arlington, TX (United States); Smuts, Jonathan; Walsh, Phillip [VUV Analytics, Inc., Cedar Park, TX (United States); Qiu, Changling [Department of Chemistry & Biochemistry, The University of Texas at Arlington, Arlington, TX (United States); McNair, Harold M. [Department of Chemistry, Virginia Tech, Blacksburg, VA (United States); Schug, Kevin A., E-mail: kschug@uta.edu [Department of Chemistry & Biochemistry, The University of Texas at Arlington, Arlington, TX (United States)

    2017-02-08

    The vacuum ultraviolet detector (VUV) is a new non-destructive mass sensitive detector for gas chromatography that continuously and rapidly collects full wavelength range absorption between 120 and 240 nm. In addition to conventional methods of quantification (internal and external standard), gas chromatography - vacuum ultraviolet spectroscopy has the potential for pseudo-absolute quantification of analytes based on pre-recorded cross sections (well-defined absorptivity across the 120–240 nm wavelength range recorded by the detector) without the need for traditional calibration. The pseudo-absolute method was used in this research to experimentally evaluate the sources of sample loss and gain associated with sample introduction into a typical gas chromatograph. Standard samples of benzene and natural gas were used to assess precision and accuracy for the analysis of liquid and gaseous samples, respectively, based on the amount of analyte loaded on-column. Results indicate that injection volume, split ratio, and sampling times for splitless analysis can all contribute to inaccurate, yet precise sample introduction. For instance, an autosampler can very reproducibly inject a designated volume, but there are significant systematic errors (here, a consistently larger volume than that designated) in the actual volume introduced. The pseudo-absolute quantification capability of the vacuum ultraviolet detector provides a new means for carrying out system performance checks and potentially for solving challenging quantitative analytical problems. For practical purposes, an internal standardized approach to normalize systematic errors can be used to perform quantitative analysis with the pseudo-absolute method.
    Highlights:
    • Gas chromatography diagnostics and quantification using VUV detector.
    • Absorption cross-sections for molecules enable pseudo-absolute quantitation.
    • Injection diagnostics reveal systematic errors in hardware settings.
    • Internal

  14. Pseudo-absolute quantitative analysis using gas chromatography – Vacuum ultraviolet spectroscopy – A tutorial

    International Nuclear Information System (INIS)

    Bai, Ling; Smuts, Jonathan; Walsh, Phillip; Qiu, Changling; McNair, Harold M.; Schug, Kevin A.

    2017-01-01

    The vacuum ultraviolet detector (VUV) is a new non-destructive mass sensitive detector for gas chromatography that continuously and rapidly collects full wavelength range absorption between 120 and 240 nm. In addition to conventional methods of quantification (internal and external standard), gas chromatography - vacuum ultraviolet spectroscopy has the potential for pseudo-absolute quantification of analytes based on pre-recorded cross sections (well-defined absorptivity across the 120–240 nm wavelength range recorded by the detector) without the need for traditional calibration. The pseudo-absolute method was used in this research to experimentally evaluate the sources of sample loss and gain associated with sample introduction into a typical gas chromatograph. Standard samples of benzene and natural gas were used to assess precision and accuracy for the analysis of liquid and gaseous samples, respectively, based on the amount of analyte loaded on-column. Results indicate that injection volume, split ratio, and sampling times for splitless analysis can all contribute to inaccurate, yet precise sample introduction. For instance, an autosampler can very reproducibly inject a designated volume, but there are significant systematic errors (here, a consistently larger volume than that designated) in the actual volume introduced. The pseudo-absolute quantification capability of the vacuum ultraviolet detector provides a new means for carrying out system performance checks and potentially for solving challenging quantitative analytical problems. For practical purposes, an internal standardized approach to normalize systematic errors can be used to perform quantitative analysis with the pseudo-absolute method.
    Highlights:
    • Gas chromatography diagnostics and quantification using VUV detector.
    • Absorption cross-sections for molecules enable pseudo-absolute quantitation.
    • Injection diagnostics reveal systematic errors in hardware settings.
    • Internal

  15. Fringe order correction for the absolute phase recovered by two selected spatial frequency fringe projections in fringe projection profilometry.

    Science.gov (United States)

    Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun

    2017-08-01

    The performance of the two selected spatial frequency phase unwrapping methods is limited by a phase error bound, beyond which errors will occur in the fringe order, leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct wrong fringe orders. Two constraints are introduced during the fringe order determination of the two selected spatial frequency phase unwrapping methods. A strategy to detect and correct wrong fringe orders is also described. Compared with existing methods, ours does not need to estimate a threshold associated with absolute phase values to determine the fringe order error, which makes it more reliable and avoids the search procedure in detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by experimental results.
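    For context, in generic two-frequency temporal phase unwrapping (not necessarily the authors' exact formulation) the fringe order comes from a rounding step, which is where a bounded phase error can flip the result:

```python
import numpy as np

def fringe_order(phi_high, phi_low_unwrapped, freq_ratio):
    """Scale the unwrapped low-frequency phase up to the high frequency and
    round to the nearest integer fringe order. Phase noise amplified by
    freq_ratio can push the argument past a half-fringe and corrupt k."""
    return np.rint((freq_ratio * phi_low_unwrapped - phi_high) / (2.0 * np.pi))

def absolute_phase(phi_high, phi_low_unwrapped, freq_ratio):
    """Recover the absolute phase from the wrapped high-frequency phase."""
    k = fringe_order(phi_high, phi_low_unwrapped, freq_ratio)
    return phi_high + 2.0 * np.pi * k
```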

  16. Absolute beam-charge measurement for single-bunch electron beams

    International Nuclear Information System (INIS)

    Suwada, Tsuyoshi; Ohsawa, Satoshi; Furukawa, Kazuro; Akasaka, Nobumasa

    2000-01-01

    The absolute beam charge of a single-bunch electron beam with a pulse width of 10 ps and that of a short-pulsed electron beam with a pulse width of 1 ns were measured with a Faraday cup in a beam test for the KEK B-Factory (KEKB) injector linac. It is strongly desired to obtain a precise beam-injection rate to the KEKB rings, and to estimate the amount of beam loss. A wall-current monitor was also recalibrated within an error of ±2%. This report describes the new results for an absolute beam-charge measurement for single-bunch and short-pulsed electron beams, and recalibration of the wall-current monitors in detail. (author)

  17. Absolute method of measuring magnetic susceptibility

    Science.gov (United States)

    Thorpe, A.; Senftle, F.E.

    1959-01-01

    An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.

  18. Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction

    Directory of Open Access Journals (Sweden)

    Tianzhou Chen

    2013-09-01

    Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity and atmospheric pressure. In order to solve the problem of inaccurate measurement, which is significant in industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC) trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the relationship of the topological structure of sensor arrays, is implemented to compensate for erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes at top speed. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients for neighboring sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of this method, which means that the proposed method performs extremely well. The optimized method of distance measurement we propose, with the capability of NEC, would bring a significant advantage for intelligent industrial automation.

  19. Uncertainties in pipeline water percentage measurement

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Bentley N.

    2005-07-01

    Measurement of the quantity, density, average temperature and water percentage in petroleum pipelines has been an issue of prime importance. The methods of measurement have been investigated and have seen continued improvement over the years. Questions are being asked as to the reliability of the measurement of water in the oil through sampling systems originally designed and tested for a narrow range of densities. Today most facilities' sampling systems handle vastly increased ranges of density and types of crude oils. Issues of pipeline integrity, product loss and production balances are placing further demands on accurate measurement. Water percentage is one area that has not received the attention necessary to understand the many factors involved in making a reliable measurement. A previous paper [1] discussed the issues of uncertainty of the measurement from a statistical perspective. This paper outlines many of the issues of where the errors lie in the manual and automatic methods in use today. A routine to use the data collected by the analyzers in the on-line system for validation of the measurements is described. (author)

  20. The input ambiguity hypothesis and case blindness: an account of cross-linguistic and intra-linguistic differences in case errors.

    Science.gov (United States)

    Pelham, Sabra D

    2011-03-01

    English-acquiring children frequently make pronoun case errors, while German-acquiring children rarely do. Nonetheless, German-acquiring children frequently make article case errors. It is proposed that when child-directed speech contains a high percentage of case-ambiguous forms, case errors are common in child language; when percentages are low, case errors are rare. Input to English and German children was analyzed for percentage of case-ambiguous personal pronouns on adult tiers of corpora from 24 English-acquiring and 24 German-acquiring children. Also analyzed for German was the percentage of case-ambiguous articles. Case-ambiguous pronouns averaged 63·3% in English, compared with 7·6% in German. The percentage of case-ambiguous articles in German was 77·0%. These percentages align with the children's errors reported in the literature. It appears children may be sensitive to levels of ambiguity such that low ambiguity may aid error-free acquisition, while high ambiguity may blind children to case distinctions, resulting in errors.

  1. Error estimation in plant growth analysis

    Directory of Open Access Journals (Sweden)

    Andrzej Gregorczyk

    2014-01-01

    The scheme is presented for calculating the errors of dry matter values which occur during approximation of data with growth curves, determined by the analytical method (logistic function) and by the numerical method (Richards function). Formulae are then given which describe the absolute errors of the growth characteristics: growth rate (GR), relative growth rate (RGR), unit leaf rate (ULR) and leaf area ratio (LAR). Calculation examples concerning the growth of oat and maize plants are given. A critical analysis of the estimates obtained has been carried out, and the value of jointly applying statistical methods and error calculus in plant growth analysis has been confirmed.
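    As an illustration of such error formulas, the relative growth rate and a first-order propagated absolute error can be sketched as follows (hypothetical data; not the paper's exact formulae):

```python
import math

def rgr(w1, w2, t1, t2):
    """Relative growth rate between two harvests of dry matter w1, w2."""
    return (math.log(w2) - math.log(w1)) / (t2 - t1)

def rgr_abs_error(w1, dw1, w2, dw2, t1, t2):
    """First-order propagated absolute error of RGR, assuming independent
    dry-matter errors dw1 and dw2."""
    return math.hypot(dw1 / w1, dw2 / w2) / (t2 - t1)

# Hypothetical oats data: 2.0 +/- 0.1 g at day 10, 5.0 +/- 0.2 g at day 20
print(rgr(2.0, 5.0, 10, 20))                      # ~0.0916 per day
print(rgr_abs_error(2.0, 0.1, 5.0, 0.2, 10, 20))  # ~0.0064 per day
```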

  2. Resection plane-dependent error in computed tomography volumetry of the right hepatic lobe in living liver donors

    Directory of Open Access Journals (Sweden)

    Heon-Ju Kwon

    2018-03-01

    Background/Aims: Computed tomography (CT) hepatic volumetry is currently accepted as the most reliable method for preoperative estimation of graft weight in living donor liver transplantation (LDLT). However, several factors can cause inaccuracies in CT volumetry compared to real graft weight. The purpose of this study was to determine the frequency and degree of resection plane-dependent error in CT volumetry of the right hepatic lobe in LDLT. Methods: Forty-six living liver donors underwent CT before donor surgery and on postoperative day 7. Prospective CT volumetry (VP) was measured via the assumptive hepatectomy plane. Retrospective liver volume (VR) was measured using the actual plane by comparing preoperative and postoperative CT. Compared with intraoperatively measured weight (W), percentage errors in VP and VR were evaluated. Plane-dependent error in VP was defined as the absolute difference between VP and VR. Percentage plane-dependent error was defined as |VP-VR|/W × 100. Results: Mean VP, VR, and W were 761.9 mL, 755.0 mL, and 696.9 g. Mean error and % error in VP were 73.3 mL and 10.7%. Mean error and % error in VR were 64.4 mL and 9.3%. Mean plane-dependent error in VP was 32.4 mL. Mean % plane-dependent error was 4.7%. Plane-dependent error in VP exceeded 10% of W in approximately 10% of the subjects in our study. Conclusions: There was approximately 5% plane-dependent error in liver VP on CT volumetry. Plane-dependent error in VP exceeded 10% of W in approximately 10% of LDLT donors in our study. This error should be considered, especially when CT volumetry is performed by a less experienced operator who is not well acquainted with the donor hepatectomy plane.

  3. Resection plane-dependent error in computed tomography volumetry of the right hepatic lobe in living liver donors.

    Science.gov (United States)

    Kwon, Heon-Ju; Kim, Kyoung Won; Kim, Bohyun; Kim, So Yeon; Lee, Chul Seung; Lee, Jeongjin; Song, Gi Won; Lee, Sung Gyu

    2018-03-01

    Computed tomography (CT) hepatic volumetry is currently accepted as the most reliable method for preoperative estimation of graft weight in living donor liver transplantation (LDLT). However, several factors can cause inaccuracies in CT volumetry compared to real graft weight. The purpose of this study was to determine the frequency and degree of resection plane-dependent error in CT volumetry of the right hepatic lobe in LDLT. Forty-six living liver donors underwent CT before donor surgery and on postoperative day 7. Prospective CT volumetry (VP) was measured via the assumptive hepatectomy plane. Retrospective liver volume (VR) was measured using the actual plane by comparing preoperative and postoperative CT. Compared with intraoperatively measured weight (W), errors in percentage (%) of VP and VR were evaluated. Plane-dependent error in VP was defined as the absolute difference between VP and VR. % plane-dependent error was defined as follows: |VP-VR|/W∙100. Mean VP, VR, and W were 761.9 mL, 755.0 mL, and 696.9 g. Mean and % errors in VP were 73.3 mL and 10.7%. Mean error and % error in VR were 64.4 mL and 9.3%. Mean plane-dependent error in VP was 32.4 mL. Mean % plane-dependent error was 4.7%. Plane-dependent error in VP exceeded 10% of W in approximately 10% of the subjects in our study. There was approximately 5% plane-dependent error in liver VP on CT volumetry. Plane-dependent error in VP exceeded 10% of W in approximately 10% of LDLT donors in our study. This error should be considered, especially when CT volumetry is performed by a less experienced operator who is not well acquainted with the donor hepatectomy plane.

  4. Twice cutting method reduces tibial cutting error in unicompartmental knee arthroplasty.

    Science.gov (United States)

    Inui, Hiroshi; Taketomi, Shuji; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae

    2016-01-01

    Bone cutting error can be one of the causes of malalignment in unicompartmental knee arthroplasty (UKA). The amount of cutting error in total knee arthroplasty has been reported. However, none have investigated cutting error in UKA. The purpose of this study was to reveal the amount of cutting error in UKA when an open cutting guide was used, and to clarify whether cutting the tibia horizontally twice using the same cutting guide reduced the cutting errors in UKA. We measured the alignment of the tibial cutting guides, the first-cut cutting surfaces and the second-cut cutting surfaces using the navigation system in 50 UKAs. Cutting error was defined as the angular difference between the cutting guide and the cutting surface. The mean absolute first-cut cutting error was 1.9° (1.1° varus) in the coronal plane and 1.1° (0.6° anterior slope) in the sagittal plane, whereas the mean absolute second-cut cutting error was 1.1° (0.6° varus) in the coronal plane and 1.1° (0.4° anterior slope) in the sagittal plane. Cutting the tibia horizontally twice reduced the cutting errors in the coronal plane significantly (P < 0.05). In conclusion, cutting the tibia horizontally twice using the same cutting guide reduced cutting error in the coronal plane. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Left-hemisphere activation is associated with enhanced vocal pitch error detection in musicians with absolute pitch

    Science.gov (United States)

    Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A.; Larson, Charles R.

    2014-01-01

    The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test whether left-hemisphere neural mechanisms enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right hemisphere for both AP and RP musicians compared with the NM group. However, the left-hemisphere P2 component activation was greater in AP and RP musicians compared with NMs, and also for AP compared with RP musicians. The NM group was slower in generating compensatory vocal reactions to feedback pitch perturbation compared with musicians, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left hemisphere is more active during the processing of auditory feedback for vocal motor control and seems to involve specialized mechanisms that facilitate pitch processing in AP compared with RP musicians. These findings indicate that the left-hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing. PMID:24355545

  6. STAR barrel electromagnetic calorimeter absolute calibration using 'minimum ionizing particles' from collisions at RHIC

    International Nuclear Information System (INIS)

    Cormier, T.M.; Pavlinov, A.I.; Rykov, M.V.; Rykov, V.L.; Shestermanov, K.E.

    2002-01-01

    The procedure for the STAR Barrel Electromagnetic Calorimeter (BEMC) absolute calibrations, using penetrating charged particle hits (MIP-hits) from physics events at RHIC, is presented. Its systematic and statistical errors are evaluated. It is shown that, using this technique, the equalization and transfer of the absolute scale from the test beam can be done to percent-level accuracy in a reasonable amount of time for the entire STAR BEMC. MIP-hits would also be an effective tool for continuously monitoring the variations of the BEMC towers' gains, virtually without interference with STAR's main physics program. The method does not rely on simulations for anything other than geometric and some other small corrections, and for estimations of the systematic errors. It directly transfers measured test beam responses to operations at RHIC.

  7. Valuation Biases, Error Measures, and the Conglomerate Discount

    NARCIS (Netherlands)

    I. Dittmann (Ingolf); E.G. Maug (Ernst)

    2006-01-01

    We document the importance of the choice of error measure (percentage vs. logarithmic errors) for the comparison of alternative valuation procedures. We demonstrate for several multiple valuation methods (averaging with the arithmetic mean, harmonic mean, median, geometric mean) that the
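
    The distinction this record turns on is easy to see numerically: percentage errors are asymmetric (overvaluation is unbounded above, undervaluation is floored at -100%), while logarithmic errors treat over- and undervaluation by the same factor symmetrically. A small illustrative sketch, not code from the paper:

    ```python
    import math

    def percentage_error(estimate: float, actual: float) -> float:
        """(estimate - actual) / actual: asymmetric, unbounded above, floored at -1."""
        return (estimate - actual) / actual

    def log_error(estimate: float, actual: float) -> float:
        """ln(estimate / actual): symmetric for over-/undervaluation by the same factor."""
        return math.log(estimate / actual)

    # Overvaluing by a factor of 2 vs. undervaluing by a factor of 2:
    print(percentage_error(200, 100), percentage_error(50, 100))  # 1.0 vs -0.5
    print(log_error(200, 100), log_error(50, 100))                # 0.693 vs -0.693
    ```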

  8. An absolute distance interferometer with two external cavity diode lasers

    International Nuclear Information System (INIS)

    Hartmann, L; Meiners-Hagen, K; Abou-Zeid, A

    2008-01-01

    An absolute interferometer for length measurements in the range of several metres has been developed. The use of two external cavity diode lasers allows the implementation of a two-step procedure which combines the length measurement with a variable synthetic wavelength and its interpolation with a fixed synthetic wavelength. This synthetic wavelength is obtained at ≈42 µm by a modulation-free stabilization of both lasers to Doppler-reduced rubidium absorption lines. A stable reference interferometer is used as the length standard. Different contributions to the total measurement uncertainty are discussed. It is shown that the measurement uncertainty can be considerably reduced by correcting for the influence of vibrations on the measurement result and by applying linear regression to the quadrature signals of the absolute interferometer and the reference interferometer. The comparison of the absolute interferometer with a counting interferometer for distances up to 2 m results in a linearity error of 0.4 µm, in good agreement with an estimate of the measurement uncertainty.
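
    In two-wavelength interferometry the synthetic wavelength follows from the standard relation Λ = λ₁λ₂/|λ₁ − λ₂|. The abstract does not state which rubidium lines are used, but as an illustration, stabilizing one laser near the Rb D2 line and the other near the Rb D1 line reproduces the quoted ≈42 µm (the specific line choice is my assumption):

    ```python
    # Synthetic wavelength (standard two-wavelength relation, not from the paper):
    # Lambda = lam1 * lam2 / |lam1 - lam2|
    lam_d2 = 780.24e-9   # m, near the Rb D2 line (assumed for illustration)
    lam_d1 = 794.98e-9   # m, near the Rb D1 line (assumed for illustration)

    synthetic = lam_d1 * lam_d2 / abs(lam_d1 - lam_d2)
    print(f"synthetic wavelength: {synthetic * 1e6:.1f} um")  # ~42.1 um
    ```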

  9. Globular Clusters: Absolute Proper Motions and Galactic Orbits

    Science.gov (United States)

    Chemel, A. A.; Glushkova, E. V.; Dambis, A. K.; Rastorguev, A. S.; Yalyalieva, L. N.; Klinichev, A. D.

    2018-04-01

    We cross-match objects from several different astronomical catalogs to determine the absolute proper motions of stars within the 30-arcmin radius fields of 115 Milky-Way globular clusters with an accuracy of 1-2 mas yr⁻¹. The proper motions are based on positional data recovered from the USNO-B1, 2MASS, URAT1, ALLWISE, UCAC5, and Gaia DR1 surveys with up to ten positions spanning an epoch difference of up to about 65 years, and reduced to the Gaia DR1 TGAS frame using UCAC5 as the reference catalog. Cluster members are photometrically identified by selecting horizontal- and red-giant branch stars on color-magnitude diagrams, and the mean absolute proper motions of the clusters with a typical formal error of about 0.4 mas yr⁻¹ are computed by averaging the proper motions of selected members. The inferred absolute proper motions of clusters are combined with available radial-velocity data and heliocentric distance estimates to compute the cluster orbits in terms of Galactic potential models based on a Miyamoto-Nagai disk, Hernquist spheroid, and modified isothermal dark-matter halo (axisymmetric model without a bar) and the same model plus a rotating Ferrers bar (non-axisymmetric). Five distant clusters have higher-than-escape velocities, most likely due to large errors of the computed transversal velocities, whereas the computed orbits of all other clusters remain bound to the Galaxy. Unlike previously published results, we find the bar to affect substantially the orbits of most of the clusters, even those at large Galactocentric distances, bringing appreciable chaotization, especially in the portions of the orbits close to the Galactic center, and stretching out the orbits of some of the thick-disk clusters.

  10. A review on Black-Scholes model in pricing warrants in Bursa Malaysia

    Science.gov (United States)

    Gunawan, Nur Izzaty Ilmiah Indra; Ibrahim, Siti Nur Iqmal; Rahim, Norhuda Abdul

    2017-01-01

    This paper studies the accuracy of the Black-Scholes (BS) model and the dilution-adjusted Black-Scholes (DABS) model for pricing some warrants traded in the Malaysian market. Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE) are used to compare the two models. Results show that the DABS model is more accurate than the BS model for the selected data.
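
    A sketch of how such a comparison could be scored, with a plain Black-Scholes call price and MAE/MAPE over hypothetical market quotes (the DABS dilution adjustment is not shown, and all numbers are made up):

    ```python
    import math

    def bs_call(s, k, t, r, sigma):
        """Plain Black-Scholes European call price (no dilution adjustment)."""
        n = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
        d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
        d2 = d1 - sigma * math.sqrt(t)
        return s * n(d1) - k * math.exp(-r * t) * n(d2)

    def mae(model, market):
        return sum(abs(m - a) for m, a in zip(model, market)) / len(market)

    def mape(model, market):
        return 100 * sum(abs((m - a) / a) for m, a in zip(model, market)) / len(market)

    # Hypothetical warrant quotes vs. model prices:
    market = [1.25, 0.40, 2.10]
    model = [bs_call(10, 10, 0.5, 0.03, 0.4),
             bs_call(10, 12, 0.5, 0.03, 0.4),
             bs_call(10, 8, 0.5, 0.03, 0.4)]
    print(f"MAE = {mae(model, market):.3f}, MAPE = {mape(model, market):.1f}%")
    ```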

  11. A Model of Self-Monitoring Blood Glucose Measurement Error.

    Science.gov (United States)

    Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2017-07-01

    A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing insulin therapies in silico. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1, with constant-SD absolute error, and zone 2, with constant-SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology allows the derivation of realistic models of the SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials comparing SMBG-based with nonadjunctive CGM-based insulin treatments.
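
    A minimal sketch of the per-zone fitting step described here, using SciPy's maximum-likelihood fit of a skew-normal PDF to synthetic error data (the data, the zone, and the use of a KS test are illustrative assumptions, not the paper's protocol):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Synthetic "zone 1" absolute errors in mg/dL: skewed around a small bias.
    errors = stats.skewnorm.rvs(a=3.0, loc=-2.0, scale=7.0, size=500, random_state=rng)

    # Maximum-likelihood fit of a skew-normal PDF, one zone at a time.
    a_hat, loc_hat, scale_hat = stats.skewnorm.fit(errors)

    # Goodness-of-fit check against the fitted model.
    ks = stats.kstest(errors, "skewnorm", args=(a_hat, loc_hat, scale_hat))
    print(f"shape={a_hat:.2f}, loc={loc_hat:.2f}, scale={scale_hat:.2f}, KS p={ks.pvalue:.3f}")
    ```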

  12. The percentage of nosocomial-related out of total hospitalizations for rotavirus gastroenteritis and its association with hand hygiene compliance.

    Science.gov (United States)

    Waisbourd-Zinman, Orith; Ben-Ziony, Shiri; Solter, Ester; Chodick, Gabriel; Ashkenazi, Shai; Livni, Gilat

    2011-03-01

    Because the absolute numbers of both community-acquired and nosocomial rotavirus gastroenteritis (RVGE) vary, we studied the percentage of hospitalizations for RVGE that were transmitted nosocomially as an indicator of in-hospital acquisition of the infection. In a 4-year prospective study, the percentage of nosocomial RVGE declined steadily, from 20.3% in 2003 to 12.7% in 2006 (P = .001). Concomitantly, the rate of compliance with hand hygiene increased from 33.7% to 49% (P = .012), with a significant inverse correlation between the two. Copyright © Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.

  13. Genomic DNA-based absolute quantification of gene expression in Vitis.

    Science.gov (United States)

    Gambetta, Gregory A; McElrone, Andrew J; Matthews, Mark A

    2013-07-01

    Many studies in which gene expression is quantified by polymerase chain reaction represent the expression of a gene of interest (GOI) relative to that of a reference gene (RG). Relative expression is founded on the assumptions that RG expression is stable across samples, treatments, organs, etc., and that reaction efficiencies of the GOI and RG are equal; assumptions which are often faulty. The true variability in RG expression and actual reaction efficiencies are seldom determined experimentally. Here we present a rapid and robust method for absolute quantification of expression in Vitis where varying concentrations of genomic DNA were used to construct GOI standard curves. This methodology was utilized to absolutely quantify and determine the variability of the previously validated RG ubiquitin (VvUbi) across three test studies in three different tissues (roots, leaves and berries). In addition, in each study a GOI was absolutely quantified. Data sets resulting from relative and absolute methods of quantification were compared and the differences were striking. VvUbi expression was significantly different in magnitude between test studies and variable among individual samples. Absolute quantification consistently reduced the coefficients of variation of the GOIs by more than half, often resulting in differences in statistical significance and in some cases even changing the fundamental nature of the result. Utilizing genomic DNA-based absolute quantification is fast and efficient. Through eliminating error introduced by assuming RG stability and equal reaction efficiencies between the RG and GOI this methodology produces less variation, increased accuracy and greater statistical power. © 2012 Scandinavian Plant Physiology Society.

  14. Binomial Distribution Sample Confidence Intervals Estimation 7. Absolute Risk Reduction and ARR-like Expressions

    Directory of Open Access Journals (Sweden)

    Andrei ACHIMAŞ CADARIU

    2004-08-01

    Full Text Available Assessment of a controlled clinical trial requires interpreting key parameters such as the control event rate, experimental event rate, relative risk, absolute risk reduction, relative risk reduction, and number needed to treat when the effects of the treatment are dichotomous variables. Defined as the difference in event rate between the treatment and control groups, the absolute risk reduction is the parameter from which the number needed to treat is computed. The absolute risk reduction is computed when the experimental treatment reduces the risk of an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence intervals, the method used is the asymptotic one, even though it is well known that it may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions. Computer implementations of the methods use the PHP language. The comparison of methods uses the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for the absolute risk reduction and ARR-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance in computing confidence intervals for the absolute risk reduction.
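
    For context, the asymptotic (Wald) confidence interval criticized in this record is the textbook construction below; the paper's better-performing ADAC variants are not reproduced here. A sketch with hypothetical trial counts:

    ```python
    import math

    def arr_wald_ci(events_c, n_c, events_e, n_e, z=1.96):
        """Absolute risk reduction with the standard asymptotic (Wald) interval."""
        cer, eer = events_c / n_c, events_e / n_e  # control / experimental event rates
        arr = cer - eer
        se = math.sqrt(cer * (1 - cer) / n_c + eer * (1 - eer) / n_e)
        return arr, (arr - z * se, arr + z * se)

    # Hypothetical trial: 30/100 events under control, 15/100 under treatment.
    arr, (lo, hi) = arr_wald_ci(30, 100, 15, 100)
    nnt = 1 / arr  # number needed to treat
    print(f"ARR = {arr:.2f}, 95% CI = ({lo:.2f}, {hi:.2f}), NNT = {nnt:.1f}")
    ```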

  15. Absolute risk, absolute risk reduction and relative risk

    Directory of Open Access Journals (Sweden)

    Jose Andres Calvache

    2012-12-01

    Full Text Available This article illustrates the epidemiological concepts of absolute risk, absolute risk reduction and relative risk through a clinical example. In addition, it emphasizes the usefulness of these concepts in clinical practice, clinical research and health decision-making process.

  16. Changes in relative and absolute concentrations of plasma phospholipid fatty acids observed in a randomized trial of Omega-3 fatty acids supplementation in Uganda.

    Science.gov (United States)

    Song, Xiaoling; Diep, Pho; Schenk, Jeannette M; Casper, Corey; Orem, Jackson; Makhoul, Zeina; Lampe, Johanna W; Neuhouser, Marian L

    2016-11-01

    Expressing circulating phospholipid fatty acids (PLFAs) in relative concentrations has some limitations: the totals of all fatty acids are summed to 100%, so the values of individual fatty acids are not independent. In this study we examined whether both relative and absolute metrics could effectively measure changes in circulating PLFA concentrations in an intervention trial. 66 HIV- and HHV8-infected patients in Uganda were randomized to take 3 g/d of either long-chain omega-3 fatty acids (1856 mg EPA and 1232 mg DHA) or high-oleic safflower oil in a 12-week double-blind trial. Plasma samples were collected at baseline and at the end of the trial. Relative weight percentages and absolute concentrations of 41 plasma PLFAs were measured using gas chromatography. Total cholesterol was also measured. Intervention-effect changes in concentrations were calculated as differences between the end of the 12-week trial and baseline. Pearson correlations of relative and absolute concentration changes in individual PLFAs were high (>0.6) for 37 of the 41 PLFAs analyzed. In the intervention arm, 17 PLFAs changed significantly in relative concentration and 16 in absolute concentration, 15 of which were identical. The absolute concentration of total PLFAs decreased 95.1 mg/L (95% CI: 26.0, 164.2; P = 0.0085), but total cholesterol did not change significantly in the intervention arm. No significant change was observed in any of the measurements in the placebo arm. Both relative weight percentage and absolute concentrations could effectively measure changes in plasma PLFA concentrations, and EPA and DHA supplementation changes the concentrations of multiple plasma PLFAs besides EPA and DHA. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. The AFGL (Air Force Geophysics Laboratory) Absolute Gravity System’s Error Budget Revisted.

    Science.gov (United States)

    1985-05-08

    …can also be induced by equipment not associated with the system. A systematic bias of 68 µgal was observed by the Istituto di Metrologia "G. Colonnetti" (IMGC), Torino, Italy. … Measurements were made with three Model D and three Model G LaCoste-Romberg gravity meters; these instruments were operated by several agencies.

  18. [Errors in Peruvian medical journals references].

    Science.gov (United States)

    Huamaní, Charles; Pacheco-Romero, José

    2009-01-01

    References are fundamental in our studies; an adequate selection is as important as an adequate description. To determine the number of errors in a sample of references found in Peruvian medical journals, we reviewed 515 scientific paper references selected by systematic randomized sampling and corroborated reference information with the original document or its citation in PubMed, LILACS or SciELO-Peru. We found errors in 47.6% (245) of the references, identifying 372 types of errors; the most frequent were errors in presentation style (120), authorship (100) and title (100), mainly due to spelling mistakes (91). The percentage of reference errors was high, and the errors were varied and multiple. We suggest systematic revision of references in the editorial process, as well as extending the discussion of this theme. Keywords: references, periodicals, research, bibliometrics.

  19. Improvements in absolute seismometer sensitivity calibration using local earth gravity measurements

    Science.gov (United States)

    Anthony, Robert E.; Ringler, Adam; Wilson, David

    2018-01-01

    The ability to determine both absolute and relative seismic amplitudes is fundamentally limited by the accuracy and precision with which scientists are able to calibrate seismometer sensitivities and characterize their response. Currently, across the Global Seismic Network (GSN), errors in midband sensitivity exceed 3% at the 95% confidence interval and are the least‐constrained response parameter in seismic recording systems. We explore a new methodology utilizing precise absolute Earth gravity measurements to determine the midband sensitivity of seismic instruments. We first determine the absolute sensitivity of Kinemetrics EpiSensor accelerometers to 0.06% at the 99% confidence interval by inverting them in a known gravity field at the Albuquerque Seismological Laboratory (ASL). After the accelerometer is calibrated, we install it in its normal configuration next to broadband seismometers and subject the sensors to identical ground motions to perform relative calibrations of the broadband sensors. Using this technique, we are able to determine the absolute midband sensitivity of the vertical components of Nanometrics Trillium Compact seismometers to within 0.11% and Streckeisen STS‐2 seismometers to within 0.14% at the 99% confidence interval. The technique enables absolute calibrations from first principles that are traceable to National Institute of Standards and Technology (NIST) measurements while providing nearly an order of magnitude more precision than step‐table calibrations.

  20. Short-Term Forecasting of Loads and Wind Power for Latvian Power System: Accuracy and Capacity of the Developed Tools

    Directory of Open Access Journals (Sweden)

    Radziukynas V.

    2016-04-01

    Full Text Available The paper analyses the performance results of the recently developed short-term forecasting suite for the Latvian power system. The system load and wind power are forecasted using ANN and ARIMA models, respectively, and the forecasting accuracy is evaluated in terms of errors, mean absolute errors and mean absolute percentage errors. The investigation of the influence of additional input variables on load forecasting errors is performed. The interplay of hourly load and wind power forecasting errors is also evaluated for the Latvian power system with historical loads (the year 2011) and planned wind power capacities (the year 2023).

  1. Short-Term Forecasting of Loads and Wind Power for Latvian Power System: Accuracy and Capacity of the Developed Tools

    Science.gov (United States)

    Radziukynas, V.; Klementavičius, A.

    2016-04-01

    The paper analyses the performance results of the recently developed short-term forecasting suite for the Latvian power system. The system load and wind power are forecasted using ANN and ARIMA models, respectively, and the forecasting accuracy is evaluated in terms of errors, mean absolute errors and mean absolute percentage errors. The investigation of the influence of additional input variables on load forecasting errors is performed. The interplay of hourly load and wind power forecasting errors is also evaluated for the Latvian power system with historical loads (the year 2011) and planned wind power capacities (the year 2023).

  2. Simplified fringe order correction for absolute phase maps recovered with multiple-spatial-frequency fringe projections

    International Nuclear Information System (INIS)

    Ding, Yi; Peng, Kai; Lu, Lei; Zhong, Kai; Zhu, Ziqi

    2017-01-01

    Various kinds of fringe order errors may occur in absolute phase maps recovered with multi-spatial-frequency fringe projections. In existing methods, multiple successive pixels corrupted by fringe order errors are detected and corrected pixel by pixel with repeated searches, which is inefficient for applications. To improve the efficiency of correcting multiple successive fringe order errors, in this paper we propose a method that simplifies error detection and correction by exploiting the stepwise increasing property of the fringe order. In the proposed method, the numbers of pixels in each step are estimated to find the possible true fringe order values, so that repeated searches in detecting multiple successive errors can be avoided and errors can be corrected efficiently. The effectiveness of our proposed method is validated by experimental results. (paper)

  3. Reliability and error analysis on xenon/CT CBF

    International Nuclear Information System (INIS)

    Zhang, Z.

    2000-01-01

    This article provides a quantitative error analysis of a simulation model of xenon/CT CBF in order to investigate the behavior and effect of different types of errors, such as CT noise, motion artifacts, a lower percentage of xenon supply, lower tissue enhancements, etc. A mathematical model is built to simulate these errors. By adjusting the initial parameters of the simulation model, we can scale the Gaussian noise, control the percentage of xenon supply, and change the tissue enhancement with different kVp settings. The motion artifact is treated separately by geometrically shifting the sequential CT images. The input function is chosen from an end-tidal xenon curve of a practical study. Four values of cerebral blood flow, 10, 20, 50, and 80 cc/100 g/min, are examined under different error environments, and the corresponding CT images are generated following the currently popular timing protocol. The simulated studies are fed to a regular xenon/CT CBF system for calculation and evaluation. A quantitative comparison is given to reveal the behavior and effect of individual error sources. Mixed-error testing is also provided to inspect the combined effect of errors. The experiment shows that CT noise is still a major error source. The motion artifact affects the CBF results more geometrically than quantitatively. A lower xenon supply has a lesser effect on the results, but reduces the signal-to-noise ratio. Lower xenon enhancement lowers the flow values in all areas of the brain. (author)

  4. WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice

    Energy Technology Data Exchange (ETDEWEB)

    Kry, S; Dromgoole, L; Alvarez, P; Lowenstein, J; Molineu, A; Taylor, P; Followill, D [UT MD Anderson Cancer Center, Houston, TX (United States)

    2015-06-15

    Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000-present were abstracted for recommendations, including type of recommendation and magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions) with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7% although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. Physicists should be cautious, particularly

  5. WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice

    International Nuclear Information System (INIS)

    Kry, S; Dromgoole, L; Alvarez, P; Lowenstein, J; Molineu, A; Taylor, P; Followill, D

    2015-01-01

    Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000-present were abstracted for recommendations, including type of recommendation and magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions) with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7% although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. Physicists should be cautious, particularly

  6. Absolute measurement of the βα decay of ¹⁶N

    CERN Multimedia

    We propose to study the β decay of ¹⁶N at ISOLDE with the aim of determining the branching ratio for βα decay on an absolute scale. There are indications that the previously measured branching ratio is in error by an amount significantly larger than the quoted uncertainty. This limits the precision with which the S-factor of the astrophysically important ¹²C(α,γ)¹⁶O reaction can be determined.

  7. The Korean version of relative and absolute reliability of gait and balance assessment tools for patients with dementia in day care center and nursing home.

    Science.gov (United States)

    Lee, Han Suk; Park, Sun Wook; Chung, Hyung Kuk

    2017-11-01

    [Purpose] This study aimed to determine the relative and absolute reliability of the Korean versions of the Berg Balance Scale (BBS), the Timed Up and Go (TUG), the Four-Meter Walking Test (4MWT) and the Groningen Meander Walking Test (GMWT) in patients with dementia. [Subjects and Methods] A total of 53 patients with dementia were tested on the TUG, BBS, 4MWT and GMWT with a prospective cohort methodological design. Intra-class correlation coefficients (ICCs) were calculated to assess relative reliability, and the standard error of measurement (SEM), minimal detectable change (MDC95) and its percentage (MDC%) to analyze absolute reliability. [Results] Inter-rater reliability (ICC(2,3)) of the TUG, BBS and GMWT was 0.99, and that of the 4MWT was 0.82. Inter-rater reliability was high for the TUG, BBS and GMWT, with low SEM, MDC95, and MDC%; it was low for the 4MWT, with high SEM, MDC95, and MDC%. Test-retest reliability (ICC(2,3)) of the TUG, BBS and GMWT was 0.96-0.99, and that of the 4MWT was 0.85. Test-retest reliability was high for the TUG, BBS and GMWT, with low SEM, MDC95, and MDC%, but low for the 4MWT, with high SEM, MDC95, and MDC%. [Conclusion] Relative reliability was high for all the assessment tools. Absolute reliability had a reasonable level of stability except for the 4MWT.
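
    The absolute-reliability statistics used in this record follow standard definitions: SEM = SD·√(1 − ICC), MDC95 = 1.96·√2·SEM, and MDC% = 100·MDC95/mean. A sketch with made-up TUG numbers, not the study's data:

    ```python
    import math

    def sem(sd: float, icc: float) -> float:
        """Standard error of measurement from between-subject SD and reliability."""
        return sd * math.sqrt(1.0 - icc)

    def mdc95(sem_value: float) -> float:
        """Minimal detectable change at the 95% confidence level."""
        return 1.96 * math.sqrt(2.0) * sem_value

    # Illustrative values: TUG scores with SD 4.0 s, ICC 0.99, mean 12.0 s.
    s = sem(4.0, 0.99)
    m = mdc95(s)
    print(f"SEM = {s:.2f} s, MDC95 = {m:.2f} s, MDC% = {100 * m / 12.0:.1f}%")
    ```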

  8. Comparison of a mobile application to estimate percentage body fat to other non-laboratory based measurements

    Directory of Open Access Journals (Sweden)

    Shaw Matthew P.

    2017-02-01

    Full Text Available Study aim: The measurement of body composition is important from a population perspective as it is a variable associated with a person’s health, and also from a sporting perspective as it can be used to evaluate training. This study aimed to examine the reliability of a mobile application that estimates body composition by digitising a two-dimensional image. Materials and methods: Thirty participants (15 men and 15 women volunteered to have their percentage body fat (%BF estimated via three different methods (skinfold measurements, SFM; bio-electrical impedance, BIA; LeanScreenTM mobile application, LSA. Intra-method reproducibility was assessed using intra-class correlation coefficients (ICC, coefficient of variance (CV and typical error of measurement (TEM. The average measurement for each method were also compared. Results: There were no significant differences between the methods for estimated %BF (p = 0.818 and the reliability of each method as assessed via ICC was good (≥0.974. However the absolute reproducibility, as measured by CV and TEM, was much higher in SFM and BIA (≤1.07 and ≤0.37 respectively compared with LSA (CV 6.47, TEM 1.6. Conclusion: LSA may offer an alternative to other field-based measures for practitioners, however individual variance should be considered to develop an understanding of minimal worthwhile change, as it may not be suitable for a one-off measurement.

  9. Encasing the Absolutes

    Directory of Open Access Journals (Sweden)

    Uroš Martinčič

    2014-05-01

    Full Text Available The paper explores the issue of structure and case in English absolute constructions, whose subjects are deduced by several descriptive grammars as being in the nominative case due to its supposed neutrality in terms of register. This deduction is countered by systematic accounts presented within the framework of the Minimalist Program which relate the case of absolute constructions to specific grammatical factors. Each proposal is shown as an attempt of analysing absolute constructions as basic predication structures, either full clauses or small clauses. I argue in favour of the small clause approach due to its minimal reliance on transformations and unique stipulations. Furthermore, I propose that small clauses project a singular category, and show that the use of two cases in English absolute constructions can be accounted for if they are analysed as depictive phrases, possibly selected by prepositions. The case of the subject in absolutes is shown to be a result of syntactic and non-syntactic factors. I thus argue in accordance with Minimalist goals that syntactic case does not exist, attributing its role in absolutes to other mechanisms.

  10. Micro ionization chamber dosimetry in IMRT verification: Clinical implications of dosimetric errors in the PTV

    International Nuclear Information System (INIS)

    Sanchez-Doblado, Francisco; Capote, Roberto; Rosello, Joan V.; Leal, Antonio; Lagares, Juan I.; Arrans, Rafael; Hartmann, Guenther H.

    2005-01-01

    Background and purpose: Absolute dose measurement for Intensity Modulated Radiotherapy (IMRT) beamlets is difficult due to the lack of lateral electron equilibrium. Recently we found that absolute dosimetry in the penumbra region of an IMRT beamlet can suffer from significant errors (Capote et al., Med Phys 31 (2004) 2416-2422). The goal of this work is to estimate the error made when the absolute dose to the Planning Target Volume (PTV) is measured by a micro ionization chamber (μIC) in a typical IMRT treatment. The dose error comes from the assumption that the dosimetric parameters determining the absolute dose are the same as for the reference conditions. Materials and methods: Two IMRT treatment plans for a common prostate carcinoma case, derived by forward and inverse optimisation, were considered. A detailed geometrical simulation of the μIC and the dose verification set-up was performed. The Monte Carlo (MC) simulation allows us to calculate both the dose delivered to water and the dose delivered to the active volume of the ion chamber. However, the measured dose in water is usually derived from chamber readings assuming reference conditions. The MC simulation provides the correction factors needed for ion chamber dosimetry in non-reference conditions. Results: Dose calculations were carried out for some representative beamlets, a combination of segments, and the delivered IMRT treatments. We observe that the largest dose errors (i.e. the largest correction factors) correspond to the IMRT beamlets contributing least to the total dose delivered to the ionization chamber within the PTV. Conclusion: The clinical impact of the calculated dose error in the PTV measured dose was found to be negligible for the studied IMRT treatments.

  11. Identifiability of Baranyi model and comparison with empirical ...

    African Journals Online (AJOL)

    In addition, the performance of the Baranyi model was compared with those of the empirical modified Gompertz, logistic and Huang models. Higher values of R² and modeling efficiency, and lower absolute values of mean bias error, root mean square error, mean percentage error and chi-square, were obtained with ...
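
    The comparison statistics listed in this record are standard goodness-of-fit metrics; a sketch of textbook forms (the paper's exact definitions may differ slightly):

    ```python
    import numpy as np

    def fit_metrics(pred, obs):
        """Common goodness-of-fit metrics for comparing growth models."""
        pred, obs = np.asarray(pred, float), np.asarray(obs, float)
        resid = obs - pred
        ss_res = np.sum(resid**2)
        ss_tot = np.sum((obs - obs.mean())**2)
        return {
            "R2": 1.0 - ss_res / ss_tot,           # coefficient of determination
            "MBE": resid.mean(),                   # mean bias error
            "RMSE": np.sqrt(np.mean(resid**2)),    # root mean square error
            "MPE": 100.0 * np.mean(resid / obs),   # mean percentage error
            "chi2": np.sum(resid**2 / pred),       # chi-square statistic
        }

    # Hypothetical observed vs. predicted log-counts from a growth model:
    obs = [3.0, 4.1, 5.6, 6.9, 7.8]
    print(fit_metrics([2.9, 4.2, 5.5, 7.1, 7.7], obs))
    ```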

  12. A novel capacitive absolute positioning sensor based on time grating with nanometer resolution

    Science.gov (United States)

    Pu, Hongji; Liu, Hongzhong; Liu, Xiaokang; Peng, Kai; Yu, Zhicheng

    2018-05-01

    The present work proposes a novel capacitive absolute positioning sensor based on time grating. The sensor includes a fine incremental-displacement measurement component combined with a coarse absolute-position measurement component to obtain high-resolution absolute positioning measurements. A single row type sensor was proposed to achieve fine displacement measurement, which combines the two electrode rows of a previously proposed double-row type capacitive displacement sensor based on time grating into a single row. To achieve absolute positioning measurement, the coarse measurement component is designed as a single-row type displacement sensor employing a single spatial period over the entire measurement range. In addition, this component employs a rectangular induction electrode and four groups of orthogonal discrete excitation electrodes with half-sinusoidal envelope shapes, which were formed by alternately extending the rectangular electrodes of the fine measurement component. The fine and coarse measurement components are tightly integrated to form a compact absolute positioning sensor. A prototype sensor was manufactured using printed circuit board technology for testing and optimization of the design in conjunction with simulations. Experimental results show that the prototype sensor achieves a ±300 nm measurement accuracy with a 1 nm resolution over a displacement range of 200 mm when employing error compensation. The proposed sensor is an excellent alternative to presently available long-range absolute nanometrology sensors owing to its low cost, simple structure, and ease of manufacturing.

  13. The impact of a closed-loop electronic prescribing and administration system on prescribing errors, administration errors and staff time: a before-and-after study.

    Science.gov (United States)

    Franklin, Bryony Dean; O'Grady, Kara; Donyai, Parastou; Jacklin, Ann; Barber, Nick

    2007-08-01

    To assess the impact of a closed-loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Before-and-after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Closed-loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs) and percentage given without checking patient identity. Time spent prescribing and providing a ward pharmacy service. Nursing time on medication tasks. Prescribing errors were identified in 3.8% of 2450 medication orders pre-intervention and 2.0% of 2353 orders afterwards (p < 0.05). Medical staff required 15 s to prescribe a regular inpatient drug pre-intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre-intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; chi(2) test). A closed-loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before administration. Time spent on medication-related tasks increased.

  14. Absolute advantage

    NARCIS (Netherlands)

    J.G.M. van Marrewijk (Charles)

    2008-01-01

    A country is said to have an absolute advantage over another country in the production of a good or service if it can produce that good or service using fewer real resources. Equivalently, using the same inputs, the country can produce more output. The concept of absolute advantage can

  15. Wechsler Adult Intelligence Scale-Revised Block Design broken configuration errors in nonpenetrating traumatic brain injury.

    Science.gov (United States)

    Wilde, M C; Boake, C; Sherer, M

    2000-01-01

    Final broken configuration errors on the Wechsler Adult Intelligence Scale-Revised (WAIS-R; Wechsler, 1981) Block Design subtest were examined in 50 moderate and severe nonpenetrating traumatically brain injured adults. Patients were divided into left (n = 15) and right hemisphere (n = 19) groups based on a history of unilateral craniotomy for treatment of an intracranial lesion and were compared to a group with diffuse or negative brain CT scan findings and no history of neurosurgery (n = 16). The percentage of final broken configuration errors was related to injury severity, Benton Visual Form Discrimination Test (VFD; Benton, Hamsher, Varney, & Spreen, 1983) total score and the number of VFD rotation and peripheral errors. The percentage of final broken configuration errors was higher in the patients with right craniotomies than in the left or no craniotomy groups, which did not differ. Broken configuration errors did not occur more frequently on designs without an embedded grid pattern. Right craniotomy patients did not show a greater percentage of broken configuration errors on nongrid designs as compared to grid designs.

  16. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service checked medication either returned or missing from the unidosis carts, both in the pharmacy and in the wards. Results: Unrevised unidosis carts showed a 0.9% rate of medication errors (264) versus 0.6% (154) for carts previously revised. In carts not revised, 70.83% of the errors arose when the unidosis carts were set up. The rest were due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%) or boxes that had not been emptied previously (0.76%). The errors found in the units corresponded to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not provided by nurses (14.09%), medication withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: We conclude that unidosis carts need to be revised and that a computerized prescription system would avoid transcription errors. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are checked before being sent to hospitalization units, the error diminishes to 0.3%.

  17. A digital, constant-frequency pulsed phase-locked-loop instrument for real-time, absolute ultrasonic phase measurements

    Science.gov (United States)

    Haldren, H. A.; Perey, D. F.; Yost, W. T.; Cramer, K. E.; Gupta, M. C.

    2018-05-01

    A digitally controlled instrument for conducting single-frequency and swept-frequency ultrasonic phase measurements has been developed based on a constant-frequency pulsed phase-locked-loop (CFPPLL) design. This instrument uses a pair of direct digital synthesizers to generate an ultrasonically transceived tone-burst and an internal reference wave for phase comparison. Real-time, constant-frequency phase tracking in an interrogated specimen is possible with a resolution of 0.000 38 rad (0.022°), and swept-frequency phase measurements can be obtained. Using phase measurements, an absolute thickness measurement in borosilicate glass is presented to show the instrument's efficacy, and these results are compared to conventional ultrasonic pulse-echo time-of-flight (ToF) measurements. The newly developed instrument predicted the thickness with a mean error of -0.04 μm and a standard deviation of error of 1.35 μm. Additionally, the CFPPLL instrument shows a lower measured phase error in the absence of changing temperature and couplant thickness than high-resolution cross-correlation ToF measurements at a similar signal-to-noise ratio. By showing higher accuracy and precision than conventional pulse-echo ToF measurements and lower phase errors than cross-correlation ToF measurements, the new digitally controlled CFPPLL instrument provides high-resolution absolute ultrasonic velocity or path-length measurements in solids or liquids, as well as tracking of material property changes with high sensitivity. The ability to obtain absolute phase measurements allows many new applications not possible with previous ultrasonic pulsed phase-locked-loop instruments. In addition to improved resolution, swept-frequency phase measurements add useful capability in measuring properties of layered structures, such as bonded joints, or materials which exhibit non-linear frequency-dependent behavior, such as dispersive media.
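
    Once the total (unwrapped) phase of the echo is known absolutely, thickness follows from the round-trip relation 2d = v·φ/(2πf); resolving the integer number of cycles is what the swept-frequency mode helps with. A sketch with assumed values, not the paper's data:

    ```python
    import math

    def thickness_from_phase(phi_total: float, freq: float, velocity: float) -> float:
        """Round-trip relation 2*d = velocity * phi_total / (2*pi*freq)."""
        return velocity * phi_total / (4.0 * math.pi * freq)

    # Assumed values: longitudinal velocity ~5640 m/s in borosilicate glass,
    # 10 MHz tone burst, total unwrapped phase of 50 rad (integer cycles
    # already resolved, e.g. via the swept-frequency measurement).
    d = thickness_from_phase(50.0, 10e6, 5640.0)
    print(f"thickness: {d * 1e3:.3f} mm")  # ~2.244 mm
    ```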

  18. Demand forecasting of electricity in Indonesia with limited historical data

    Science.gov (United States)

    Dwi Kartikasari, Mujiati; Rohmad Prayogi, Arif

    2018-03-01

    Demand forecasting of electricity is an important activity for electrical agents, giving a picture of future electricity demand. Prediction of electricity demand can be done using time series models. In this paper, the double moving average model, Holt's exponential smoothing model, and the grey model GM(1,1) are used to predict electricity demand in Indonesia under the condition of limited historical data. The results show that the grey model GM(1,1) has the smallest values of MAE (mean absolute error), MSE (mean squared error), and MAPE (mean absolute percentage error).
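
    GM(1,1) is attractive with limited history because it fits only two parameters to the accumulated series. A minimal sketch of the model and of the MAPE criterion used above, with made-up demand figures (units and values are assumptions):

    ```python
    import numpy as np

    def gm11_forecast(x0, horizon):
        """Grey model GM(1,1) fit and forecast for a short, positive series."""
        x0 = np.asarray(x0, float)
        x1 = np.cumsum(x0)                         # accumulated generating operation
        z1 = 0.5 * (x1[1:] + x1[:-1])              # background (mean) values
        B = np.column_stack([-z1, np.ones(len(z1))])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
        k = np.arange(len(x0) + horizon)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        x0_hat = np.empty_like(x1_hat)
        x0_hat[0] = x0[0]
        x0_hat[1:] = np.diff(x1_hat)               # restore by inverse accumulation
        return x0_hat

    def mape(actual, forecast):
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        return 100.0 * np.mean(np.abs((actual - forecast) / actual))

    # Hypothetical annual demand figures (TWh):
    demand = [190.0, 201.5, 214.2, 228.0, 243.1]
    fit = gm11_forecast(demand, horizon=2)
    print("in-sample MAPE:", round(mape(demand, fit[:5]), 2), "%")
    print("2-step forecast:", np.round(fit[5:], 1))
    ```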

  19. The Absolute Stability Analysis in Fuzzy Control Systems with Parametric Uncertainties and Reference Inputs

    Science.gov (United States)

    Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei

    This study analyzes the absolute stability in P and PD type fuzzy logic control systems with both certain and uncertain linear plants. Stability analysis includes the reference input, actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibriums of error) in P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady state errors of fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability based on Lur'e systems is also applied to the stability analysis of P type fuzzy control systems with uncertain plants. The PD type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. In our work, the absolute stability analysis of fuzzy control systems is given with respect to a non-zero reference input and an uncertain linear plant with the parametric robust Popov criterion unlike previous works. Moreover, a fuzzy current controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is specified with various equilibrium points of view in the simulation example. Finally, the comparisons are also given to show the effectiveness of the analysis method.

  20. Numerical evaluation of magnetic absolute measurements with arbitrarily distributed DI-fluxgate theodolite orientations

    Science.gov (United States)

    Brunke, Heinz-Peter; Matzka, Jürgen

    2018-01-01

    At geomagnetic observatories, absolute measurements are needed to determine the calibration parameters of the continuously recording vector magnetometer (variometer). Absolute measurements are indispensable for determining the vector of the geomagnetic field over long periods of time. A standard DI (declination, inclination) measuring scheme for absolute measurements establishes routines in magnetic observatories. The traditional measuring scheme uses a fixed number of eight orientations (Jankowski et al., 1996). We present a numerical method allowing for the evaluation of an arbitrary number (a minimum of five, as there are five independent parameters) of telescope orientations. Our method provides D, I and Z base values and their calculated error bars. A general approach has significant advantages. Additional measurements may be seamlessly incorporated for higher accuracy. Individual erroneous readings are identified and can be discarded without invalidating the entire data set. A priori information can be incorporated. We expect the general method also to ease requirements for automated DI-flux measurements. The method can reveal certain properties of the DI theodolite which are not captured by the conventional method. Based on the alternative evaluation method, a new, faster and less error-prone measuring scheme is presented. It avoids the need to calculate the magnetic meridian prior to the inclination measurements. Measurements in the vicinity of the magnetic equator are possible with theodolites without a zenith ocular. The implementation of the method in MATLAB is available as source code at the GFZ Data Center (Brunke, 2017).

  1. Numerical evaluation of magnetic absolute measurements with arbitrarily distributed DI-fluxgate theodolite orientations

    Directory of Open Access Journals (Sweden)

    H.-P. Brunke

    2018-01-01

    Full Text Available At geomagnetic observatories, absolute measurements are needed to determine the calibration parameters of the continuously recording vector magnetometer (variometer). Absolute measurements are indispensable for determining the vector of the geomagnetic field over long periods of time. A standard DI (declination, inclination) measuring scheme for absolute measurements establishes routines in magnetic observatories. The traditional measuring scheme uses a fixed number of eight orientations (Jankowski et al., 1996). We present a numerical method allowing for the evaluation of an arbitrary number (a minimum of five, as there are five independent parameters) of telescope orientations. Our method provides D, I and Z base values and their calculated error bars. A general approach has significant advantages. Additional measurements may be seamlessly incorporated for higher accuracy. Individual erroneous readings are identified and can be discarded without invalidating the entire data set. A priori information can be incorporated. We expect the general method also to ease requirements for automated DI-flux measurements. The method can reveal certain properties of the DI theodolite which are not captured by the conventional method. Based on the alternative evaluation method, a new, faster and less error-prone measuring scheme is presented. It avoids the need to calculate the magnetic meridian prior to the inclination measurements. Measurements in the vicinity of the magnetic equator are possible with theodolites without a zenith ocular. The implementation of the method in MATLAB is available as source code at the GFZ Data Center (Brunke, 2017).

  2. Phonological errors predominate in Arabic spelling across grades 1-9.

    Science.gov (United States)

    Abu-Rabia, Salim; Taha, Haitham

    2006-03-01

    Most spelling error analysis has been conducted in Latin orthographies, and rarely in other orthographies such as Arabic. Two hundred and eighty-eight students in grades 1-9 participated in the study. They were presented with nine lists of words to test their spelling skills, and their spelling errors were analyzed by error category. The most frequent errors were phonological. The results did not indicate any significant differences in the percentages of phonological errors across grades one to nine. Thus, phonology probably presents the greatest challenge to students developing spelling skills in Arabic.

  3. Absolutely relative or relatively absolute: violations of value invariance in human decision making.

    Science.gov (United States)

    Teodorescu, Andrei R; Moran, Rani; Usher, Marius

    2016-02-01

    Making decisions based on relative rather than absolute information processing is tied to choice optimality via the accumulation of evidence differences, and to canonical neural processing via the accumulation of evidence ratios. These theoretical frameworks predict invariance of decision latencies to absolute intensities that maintain differences and ratios, respectively. While information about the absolute values of the choice alternatives is not necessary for choosing the best alternative, it may nevertheless hold valuable information about the context of the decision. To test the sensitivity of human decision making to absolute values, we manipulated the intensities of brightness stimulus pairs while preserving either their differences or their ratios. Although asked to choose the brighter alternative relative to the other, participants responded faster to higher absolute values. Thus, our results provide empirical evidence for human sensitivity to task-irrelevant absolute values, indicating a hard-wired mechanism that precedes executive control. Computational investigations of several modelling architectures reveal two alternative accounts for this phenomenon, which combine absolute and relative processing. One account involves accumulation of differences with activation-dependent processing noise and the other emerges from accumulation of absolute values subject to the temporal dynamics of lateral inhibition. The potential adaptive role of such choice mechanisms is discussed.

  4. Standard Error Computations for Uncertainty Quantification in Inverse Problems: Asymptotic Theory vs. Bootstrapping.

    Science.gov (United States)

    Banks, H T; Holm, Kathleen; Robbins, Danielle

    2010-11-01

    We computationally investigate two approaches for uncertainty quantification in inverse problems for nonlinear parameter-dependent dynamical systems. We compare the bootstrapping and asymptotic theory approaches for problems involving data with several noise forms and levels. We consider both constant-variance absolute error data and relative error, which produces non-constant-variance data, in our parameter estimation formulations. We compare and contrast parameter estimates, standard errors, confidence intervals, and computational times for both bootstrapping and asymptotic theory methods.
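
    As a concrete illustration of the residual-bootstrap idea compared in this record, the sketch below estimates a parameter standard error for a simple nonlinear model and sets it against the asymptotic-theory value; the exponential model, noise level and sample sizes are invented for illustration and are not from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, k):
    # simple exponential decay, a stand-in for a parameter-dependent dynamical system
    return np.exp(-k * t)

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 50)
y = model(t, 0.8) + rng.normal(0, 0.02, t.size)  # constant-variance absolute error

k_hat, cov = curve_fit(model, t, y, p0=[1.0])
se_asymptotic = np.sqrt(cov[0, 0])               # asymptotic-theory standard error

# residual bootstrap: resample residuals, rebuild data, refit
residuals = y - model(t, k_hat[0])
boot = []
for _ in range(1000):
    y_star = model(t, k_hat[0]) + rng.choice(residuals, size=t.size, replace=True)
    k_star, _ = curve_fit(model, t, y_star, p0=[1.0])
    boot.append(k_star[0])
se_bootstrap = np.std(boot, ddof=1)

print(se_asymptotic, se_bootstrap)  # the two estimates should be comparable here
```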

  5. Implementation of Automatic Clustering Algorithm and Fuzzy Time Series in Motorcycle Sales Forecasting

    Science.gov (United States)

    Rasim; Junaeti, E.; Wirantika, R.

    2018-01-01

    Accurate forecasting of product sales depends on the forecasting method used. The purpose of this research is to build a motorcycle sales forecasting application using the Fuzzy Time Series method combined with interval determination using an automatic clustering algorithm. Forecasting is done using motorcycle sales data from the last ten years. The error rate of the forecast is then measured using the Mean Percentage Error (MPE) and Mean Absolute Percentage Error (MAPE). The forecasting results for the one-year period obtained in this study fall within good accuracy.
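
    For reference, MPE and MAPE are straightforward to compute; the sketch below uses made-up sales figures, not the study's data.

```python
import numpy as np

def mpe(actual, forecast):
    """Mean Percentage Error: signed, so over- and under-forecasts can cancel."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean((actual - forecast) / actual)

def mape(actual, forecast):
    """Mean Absolute Percentage Error: magnitudes only, the usual accuracy score."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

actual = [120, 135, 150, 160]       # hypothetical monthly sales
forecast = [110, 140, 145, 170]
print(mpe(actual, forecast), mape(actual, forecast))
```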

  6. A kinetic-based sigmoidal model for the polymerase chain reaction and its application to high-capacity absolute quantitative real-time PCR

    Directory of Open Access Journals (Sweden)

    Stewart Don

    2008-05-01

    Full Text Available Abstract Background Based upon defining a common reference point, current real-time quantitative PCR technologies compare relative differences in amplification profile position. As such, absolute quantification requires construction of target-specific standard curves that are highly resource intensive and prone to introducing quantitative errors. Sigmoidal modeling using nonlinear regression has previously demonstrated that absolute quantification can be accomplished without standard curves; however, quantitative errors caused by distortions within the plateau phase have impeded effective implementation of this alternative approach. Results Recognition that amplification rate is linearly correlated to amplicon quantity led to the derivation of two sigmoid functions that allow target quantification via linear regression analysis. In addition to circumventing quantitative errors produced by plateau distortions, this approach allows the amplification efficiency within individual amplification reactions to be determined. Absolute quantification is accomplished by first converting individual fluorescence readings into target quantity expressed in fluorescence units, followed by conversion into the number of target molecules via optical calibration. Founded upon expressing reaction fluorescence in relation to amplicon DNA mass, a seminal element of this study was to implement optical calibration using lambda gDNA as a universal quantitative standard. Not only does this eliminate the need to prepare target-specific quantitative standards, it relegates establishment of quantitative scale to a single, highly defined entity. The quantitative competency of this approach was assessed by exploiting "limiting dilution assay" for absolute quantification, which provided an independent gold standard from which to verify quantitative accuracy. This yielded substantive corroborating evidence that absolute accuracies of ± 25% can be routinely achieved. Comparison
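
    The paper's two derived sigmoid functions are not reproduced in the abstract; as a generic illustration of sigmoidal modeling of an amplification profile, the sketch below fits a four-parameter logistic to synthetic fluorescence readings. The functional form, parameter names and data are assumptions, not the authors' model.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(c, f_max, c_half, k, f_bg):
    # generic 4-parameter logistic: fluorescence vs. cycle number
    return f_bg + f_max / (1.0 + np.exp(-(c - c_half) / k))

cycles = np.arange(1, 41, dtype=float)
true = logistic(cycles, 100.0, 22.0, 1.8, 2.0)
rng = np.random.default_rng(1)
readings = true + rng.normal(0, 0.5, cycles.size)   # synthetic qPCR profile

params, _ = curve_fit(logistic, cycles, readings, p0=[90, 20, 2, 1])
print(dict(zip(["f_max", "c_half", "k", "f_bg"], params)))
```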

  7. The impact of a closed‐loop electronic prescribing and administration system on prescribing errors, administration errors and staff time: a before‐and‐after study

    Science.gov (United States)

    Franklin, Bryony Dean; O'Grady, Kara; Donyai, Parastou; Jacklin, Ann; Barber, Nick

    2007-01-01

    Objectives To assess the impact of a closed‐loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Design, setting and participants Before‐and‐after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Intervention Closed‐loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Main outcome measures Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs) and percentage given without checking patient identity. Time spent prescribing and providing a ward pharmacy service. Nursing time on medication tasks. Results Prescribing errors were identified in 3.8% of 2450 medication orders pre‐intervention and 2.0% of 2353 orders afterwards. Medical staff required 15 s to prescribe a regular inpatient drug pre‐intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre‐intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; χ2 test). Conclusions A closed‐loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before administration. Time spent on medication‐related tasks increased. PMID:17693676

  8. Fluctuation theorems in feedback-controlled open quantum systems: Quantum coherence and absolute irreversibility

    Science.gov (United States)

    Murashita, Yûto; Gong, Zongping; Ashida, Yuto; Ueda, Masahito

    2017-10-01

    The thermodynamics of quantum coherence has attracted growing attention recently, where the thermodynamic advantage of quantum superposition is characterized in terms of quantum thermodynamics. We investigate the thermodynamic effects of quantum coherent driving in the context of the fluctuation theorem. We adopt a quantum-trajectory approach to investigate open quantum systems under feedback control. In these systems, the measurement backaction in the forward process plays a key role, and therefore the corresponding time-reversed quantum measurement and postselection must be considered in the backward process, in sharp contrast to the classical case. The state reduction associated with quantum measurement, in general, creates a zero-probability region in the space of quantum trajectories of the forward process, which causes singularly strong irreversibility with divergent entropy production (i.e., absolute irreversibility) and hence makes the ordinary fluctuation theorem break down. In the classical case, the error-free measurement ordinarily leads to absolute irreversibility, because the measurement restricts classical paths to the region compatible with the measurement outcome. In contrast, in open quantum systems, absolute irreversibility is suppressed even in the presence of the projective measurement due to those quantum rare events that go through the classically forbidden region with the aid of quantum coherent driving. This suppression of absolute irreversibility exemplifies the thermodynamic advantage of quantum coherent driving. Absolute irreversibility is shown to emerge in the absence of coherent driving after the measurement, especially in systems under time-delayed feedback control. We show that absolute irreversibility is mitigated by increasing the duration of quantum coherent driving or decreasing the delay time of feedback control.
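
    For context, fluctuation theorems modified by absolute irreversibility are usually quoted in the following integral form; this is taken from the general literature on absolute irreversibility and is an assumption here, since the abstract gives no formula.

```latex
% Integral fluctuation theorem modified by absolute irreversibility:
% \sigma is the entropy production along a trajectory and
% \lambda is the total probability of absolutely irreversible events.
\left\langle e^{-\sigma} \right\rangle = 1 - \lambda, \qquad 0 \le \lambda \le 1 .
% \lambda = 0 recovers the ordinary identity \langle e^{-\sigma}\rangle = 1;
% by Jensen's inequality, \langle \sigma \rangle \ge -\ln(1-\lambda) \ge 0.
```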

  9. Wind power error estimation in resource assessments.

    Directory of Open Access Journals (Sweden)

    Osvaldo Rodríguez

    Full Text Available Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, based on 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.

  10. Wind power error estimation in resource assessments.

    Science.gov (United States)

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, based on 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
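
    As a schematic of this kind of propagation (not the authors' 28 Lagrange-fitted curves), the sketch below pushes a 10% wind-speed error through a generic turbine power curve by Monte Carlo sampling; the cubic-ramp curve and all numbers are assumptions.

```python
import numpy as np

def power_curve(v, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0, p_rated=2000.0):
    """Generic turbine power curve (kW): cubic ramp between cut-in and rated."""
    v = np.asarray(v, float)
    p = np.where((v >= v_cut_in) & (v < v_rated),
                 p_rated * ((v - v_cut_in) / (v_rated - v_cut_in)) ** 3, 0.0)
    return np.where((v >= v_rated) & (v < v_cut_out), p_rated, p)

rng = np.random.default_rng(7)
v_measured = rng.weibull(2.0, 100_000) * 8.0        # synthetic wind-speed record
p_nominal = power_curve(v_measured).mean()

# perturb each speed by a 10% (1-sigma) relative measurement error
v_noisy = v_measured * (1.0 + rng.normal(0.0, 0.10, v_measured.size))
p_noisy = power_curve(v_noisy).mean()

print(100.0 * abs(p_noisy - p_nominal) / p_nominal)  # propagated power error, %
```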

  11. Comparing absolute and normalized indicators in scientific collaboration: a study in Environmental Science in Latin America

    Energy Technology Data Exchange (ETDEWEB)

    Cabrini-Grácio, M.C.; Oliveira, E.F.T.

    2016-07-01

    This paper aims to conduct a comparative analysis of scientific collaboration proximity trends generated from absolute indicators and from indicators of collaboration intensity in the field of Environmental Sciences in Latin America (LA), in order to identify possible biases in the absolute indicators of international cooperation due to the magnitude of these countries' scientific production in mainstream science. More specifically, the objective is to compare absolute and normalized co-authorship values among Latin American countries and their main collaborators, in order to observe the similarities and differences expressed by the two frequency indexes with respect to scientific collaboration trends in LA countries. In addition, we aim to visualize and analyze scientific collaboration networks built from absolute and SC-normalized co-authorship indexes among Latin American countries and their collaborators, comparing the proximity evidenced by the two resulting collaboration networks. Data collection comprised a period of 10 years (2006-2015) for the LA countries Brazil, Mexico, Argentina, Chile and Colombia, as they produced 94% of total output, a percentage considered representative and significant for this study. We then verified the co-authorship frequencies among the five countries and their key collaborators, built the matrix of co-authorship indexes normalized through SC, and generated two egocentric networks of scientific collaboration (absolute frequencies and SC-normalized frequencies) using Pajek software. From the results, we observed that both absolute and normalized indicators are needed to describe the scientific collaboration phenomenon thoroughly, since the two provide complementary information. (Author)
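
    The abstract never expands "SC"; in bibliometric studies, co-authorship intensities are commonly normalized with Salton's cosine, and assuming that is the measure intended here, the normalization reads:

```latex
% Salton's cosine normalization of a co-authorship count:
% C_{ij} is the number of papers co-authored by countries i and j,
% N_i and N_j are their total paper counts.
SC_{ij} = \frac{C_{ij}}{\sqrt{N_i \, N_j}}, \qquad 0 \le SC_{ij} \le 1 .
```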

  12. Real-Time and Meter-Scale Absolute Distance Measurement by Frequency-Comb-Referenced Multi-Wavelength Interferometry.

    Science.gov (United States)

    Wang, Guochao; Tan, Lilong; Yan, Shuhua

    2018-02-07

    We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He-Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10−8 versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions.
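
    As background, the non-ambiguous range extension relies on the standard synthetic-wavelength relation; this is a textbook identity, and the example wavelengths below are assumptions rather than the paper's values.

```latex
% Synthetic wavelength generated by two optical wavelengths:
\Lambda = \frac{\lambda_1 \lambda_2}{\lvert \lambda_1 - \lambda_2 \rvert} .
% A small wavelength separation yields a large \Lambda and hence a large
% non-ambiguous range (\Lambda/2): e.g. \lambda_1 = 1550\,\mathrm{nm} and
% \lambda_2 = 1550.4\,\mathrm{nm} give \Lambda \approx 6\,\mathrm{mm}.
```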

  13. ABSOLUTE NEUTRINO MASSES

    DEFF Research Database (Denmark)

    Schechter, J.; Shahid, M. N.

    2012-01-01

    We discuss the possibility of using experiments timing the propagation of neutrino beams over large distances to help determine the absolute masses of the three neutrinos.

  14. Forecasting Nord Pool day-ahead prices with an autoregressive model

    International Nuclear Information System (INIS)

    Kristiansen, Tarjei

    2012-01-01

    This paper presents a model to forecast Nord Pool hourly day-ahead prices. The model builds on an existing autoregressive specification, reduced in terms of estimation parameters (from 24 sets to 1) and modified to include Nordic demand and Danish wind power as exogenous variables. We model prices across all hours in the analysis period rather than each single hour of the 24 separately. By applying three model variants to Nord Pool data, we achieve a weekly mean absolute percentage error (WMAE) of around 6–7% and an hourly mean absolute percentage error (MAPE) ranging from 8% to 11%. Out-of-sample results yield a WMAE and an hourly MAPE of around 5%. The models enable analysts and traders to forecast hourly day-ahead prices accurately. Moreover, the models are relatively straightforward and user-friendly to implement. They can be set up in any trading organization. - Highlights: ► Forecasting Nord Pool day-ahead prices with an autoregressive model. ► The model builds on an existing specification, with the set of parameters reduced from 24 to 1. ► The model includes Nordic demand and Danish wind power as exogenous variables. ► Hourly mean absolute percentage error ranges from 8% to 11%. ► Out-of-sample results yield a WMAE and an hourly MAPE of around 5%.
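
    A minimal sketch of this kind of autoregressive model with exogenous demand and wind variables, fitted by ordinary least squares, is shown below; the lag choice, variable names and synthetic data are assumptions for illustration, not the paper's specification.

```python
import numpy as np

def fit_arx(price, demand, wind, lags=(1, 2, 24)):
    """OLS fit of price_t on lagged prices plus exogenous demand and wind."""
    p_max = max(lags)
    rows, targets = [], []
    for t in range(p_max, len(price)):
        rows.append([price[t - l] for l in lags] + [demand[t], wind[t], 1.0])
        targets.append(price[t])
    X, y = np.asarray(rows), np.asarray(targets)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, X @ coef          # coefficients and in-sample fit

def mape(actual, forecast):
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# synthetic hourly series standing in for Nord Pool data
rng = np.random.default_rng(3)
n = 24 * 60
demand = 50 + 10 * np.sin(2 * np.pi * np.arange(n) / 24) + rng.normal(0, 1, n)
wind = np.clip(rng.normal(20, 5, n), 0, None)
price = 30 + 0.5 * demand - 0.3 * wind + rng.normal(0, 2, n)

coef, fitted = fit_arx(price, demand, wind)
print(mape(price[24:], fitted))    # in-sample hourly MAPE, %
```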

  15. Auto-calibration of Systematic Odometry Errors in Mobile Robots

    DEFF Research Database (Denmark)

    Bak, Martin; Larsen, Thomas Dall; Andersen, Nils Axel

    1999-01-01

    This paper describes the phenomenon of systematic errors in odometry models in mobile robots and looks at various ways of avoiding it by means of auto-calibration. The systematic errors considered are incorrect knowledge of the wheel base and the gains from encoder readings to wheel displacement. By auto-calibration we mean a standardized procedure which estimates the uncertainties using only on-board equipment such as encoders, an absolute measurement system and filters; no intervention by operator or off-line data processing is necessary. Results are illustrated by a number of simulations and experiments on a mobile robot.
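
    To make the error model concrete, the sketch below implements differential-drive odometry in which the wheel base and the two encoder gains are the calibration parameters in question; the pose-update equations are standard dead reckoning, and the specific parameter values are invented.

```python
import numpy as np

def odometry_step(pose, ticks_l, ticks_r, gain_l, gain_r, wheel_base):
    """Dead-reckoning update; gain_* convert encoder ticks to wheel travel.
    Systematic errors enter through wrong gains or a wrong wheel base."""
    x, y, theta = pose
    d_l, d_r = gain_l * ticks_l, gain_r * ticks_r
    d = 0.5 * (d_l + d_r)                 # distance travelled by centre point
    d_theta = (d_r - d_l) / wheel_base    # change of heading
    return np.array([x + d * np.cos(theta + 0.5 * d_theta),
                     y + d * np.sin(theta + 0.5 * d_theta),
                     theta + d_theta])

# drive a nominally straight 100-step path with slightly miscalibrated gains
pose_true = np.zeros(3)
pose_est = np.zeros(3)
for _ in range(100):
    pose_true = odometry_step(pose_true, 100, 100, 1.00e-3, 1.00e-3, 0.5)
    pose_est = odometry_step(pose_est, 100, 100, 1.00e-3, 1.02e-3, 0.5)

print(np.linalg.norm(pose_true[:2] - pose_est[:2]))  # drift from a 2% gain error
```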

  16. Data error effects on net radiation and evapotranspiration estimation

    International Nuclear Information System (INIS)

    Llasat, M.C.; Snyder, R.L.

    1998-01-01

    The objective of this paper is to evaluate the potential error in estimating the net radiation and reference evapotranspiration resulting from errors in the measurement or estimation of weather parameters. A methodology for estimating the net radiation using hourly weather variables measured at a typical agrometeorological station (e.g., solar radiation, temperature and relative humidity) is presented. Then the error propagation analysis is made for net radiation and for reference evapotranspiration. Data from the Raimat weather station, which is located in the Catalonia region of Spain, are used to illustrate the error relationships. The results show that temperature, relative humidity and cloud cover errors have little effect on the net radiation or reference evapotranspiration. A 5°C error in estimating surface temperature leads to errors as big as 30 W m−2 at high temperature. A 4% solar radiation (Rs) error can cause a net radiation error as big as 26 W m−2 when Rs ≈ 1000 W m−2. However, the error is less when cloud cover is calculated as a function of the solar radiation. The absolute error in reference evapotranspiration (ETo) equals the product of the net radiation error and the radiation term weighting factor [W = Δ/(Δ + γ)] in the ETo equation. Therefore, the ETo error varies between 65 and 85% of the Rn error as air temperature increases from about 20° to 40°C. (author)
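
    In equation form, with a short numeric illustration (the values of Δ and γ below are rounded, typical magnitudes, not figures from the paper):

```latex
% Error in reference evapotranspiration from a net-radiation error:
\varepsilon_{ET_o} \approx W \, \varepsilon_{R_n},
\qquad W = \frac{\Delta}{\Delta + \gamma},
% where \Delta is the slope of the saturation vapour pressure curve and
% \gamma the psychrometric constant. Near 20 C, \Delta \approx 0.145 kPa/C and
% \gamma \approx 0.066 kPa/C give W \approx 0.69; near 40 C, \Delta \approx 0.33
% kPa/C gives W \approx 0.83 -- hence the 65-85% range quoted above.
```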

  17. Absolute nuclear material assay

    Science.gov (United States)

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2010-07-13

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  18. Real-Time and Meter-Scale Absolute Distance Measurement by Frequency-Comb-Referenced Multi-Wavelength Interferometry

    Directory of Open Access Journals (Sweden)

    Guochao Wang

    2018-02-01

    Full Text Available We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He–Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10−8 versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions.

  19. "First, know thyself": cognition and error in medicine.

    Science.gov (United States)

    Elia, Fabrizio; Aprà, Franco; Verhovez, Andrea; Crupi, Vincenzo

    2016-04-01

    Although error is an integral part of the world of medicine, physicians have always been little inclined to take their own mistakes into account, and the extraordinary technological progress observed in the last decades does not seem to have resulted in a significant reduction in the percentage of diagnostic errors. The failure to reduce diagnostic errors, notwithstanding the considerable investment in human and economic resources, has paved the way to new strategies made available by the development of cognitive psychology, the branch of psychology that aims at understanding the mechanisms of human reasoning. This new approach led us to realize that we are not fully rational agents able to take decisions on the basis of logical and probabilistically appropriate evaluations. In us, two different and mostly independent modes of reasoning coexist: a fast or non-analytical mode, which tends to be largely automatic and fast-reactive, and a slow or analytical mode, which permits rationally founded answers. One of the features of the fast mode of reasoning is the employment of standardized rules, termed "heuristics." Heuristics lead physicians to correct choices in a large percentage of cases. Unfortunately, cases exist wherein the heuristic triggered fails to fit the target problem, so that the fast mode of reasoning can lead us to unreflectively perform actions exposing us and others to variable degrees of risk. Cognitive errors arise as a result of these cases. Our review illustrates how cognitive errors can cause diagnostic problems in clinical practice.

  20. Absolute transition probabilities in the NeI 3p-3s fine structure by beam-gas-dye laser spectroscopy

    International Nuclear Information System (INIS)

    Hartmetz, P.; Schmoranzer, H.

    1983-01-01

    The beam-gas-dye laser two-step excitation technique is further developed and applied to the direct measurement of absolute atomic transition probabilities in the NeI 3p-3s fine-structure transition array with a maximum experimental error of 5%. (orig.)

  1. Thermodynamics of negative absolute pressures

    International Nuclear Information System (INIS)

    Lukacs, B.; Martinas, K.

    1984-03-01

    The authors show that the possibility of negative absolute pressure can be incorporated into axiomatic thermodynamics, analogously to negative absolute temperature. There are examples of such systems (GUT, QCD) possessing negative absolute pressure in domains where it can be expected from thermodynamical considerations. (author)

  2. Determination of heat capacity of ionic liquid based nanofluids using group method of data handling technique

    Science.gov (United States)

    Sadi, Maryam

    2018-01-01

    In this study, a group method of data handling (GMDH) model has been successfully developed to predict the heat capacity of ionic liquid based nanofluids, taking the reduced temperature, acentric factor and molecular weight of the ionic liquids, and the nanoparticle concentration as input parameters. To accomplish the modeling, 528 experimental data points extracted from the literature were divided into training and testing subsets. The training set was used to estimate the model coefficients and the testing set was applied for model validation. The ability and accuracy of the developed model were evaluated by comparing model predictions with experimental values using different statistical parameters such as the coefficient of determination, mean square error and mean absolute percentage error. The mean absolute percentage errors of the developed model for the training and testing sets are 1.38% and 1.66%, respectively, which indicates excellent agreement between model predictions and experimental data. The results estimated by the developed GMDH model also exhibit higher accuracy than the available theoretical correlations.

  3. Errors and Correction of Precipitation Measurements in China

    Institute of Scientific and Technical Information of China (English)

    REN Zhihua; LI Mingqin

    2007-01-01

    In order to discover the range of various errors in Chinese precipitation measurements and seek a correction method, 30 precipitation evaluation stations were set up countrywide before 1993. All the stations are reference stations in China. To seek a correction method for wind-induced error, a precipitation correction instrument called the "horizontal precipitation gauge" was devised beforehand. Field intercomparison observations covering 29,000 precipitation events have been conducted using one pit gauge, two elevated operational gauges and one horizontal gauge at the above 30 stations. The range of precipitation measurement errors in China is obtained by analysis of the intercomparison results. The distributions of random errors and systematic errors in precipitation measurements are studied in this paper. A correction method, especially for wind-induced errors, is developed. The results prove that a power-function correlation exists between the precipitation amount caught by the horizontal gauge and the absolute difference between the observations of the operational gauge and the pit gauge, with a correlation coefficient of 0.99. For operational observations, precipitation correction can be carried out simply by parallel observation with a horizontal precipitation gauge. The precipitation accuracy after correction approaches that of the pit gauge. The correction method developed is simple and feasible.
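
    A minimal sketch of fitting such a power-function relation, ΔP = a·P_h^b, by linearizing in log space is given below; the synthetic data and coefficients are placeholders, not the station measurements.

```python
import numpy as np

# synthetic example: horizontal-gauge catch vs. (operational - pit) difference
rng = np.random.default_rng(5)
p_horizontal = rng.uniform(0.5, 20.0, 200)                 # mm per event
delta = 0.15 * p_horizontal ** 0.9 * rng.lognormal(0, 0.05, 200)

# fit log(delta) = log(a) + b * log(p_horizontal)
b, log_a = np.polyfit(np.log(p_horizontal), np.log(delta), 1)
a = np.exp(log_a)

def corrected_error(p):
    """Wind-induced error estimate to add back to an operational reading."""
    return a * p ** b

r = np.corrcoef(np.log(p_horizontal), np.log(delta))[0, 1]
print(a, b, r, corrected_error(10.0))   # r is high by construction here
```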

  4. The correction of vibration in frequency scanning interferometry based absolute distance measurement system for dynamic measurements

    Science.gov (United States)

    Lu, Cheng; Liu, Guodong; Liu, Bingguo; Chen, Fengdong; Zhuang, Zhitao; Xu, Xinke; Gan, Yu

    2015-10-01

    Absolute distance measurement systems are of significant interest in the field of metrology; they can improve the manufacturing efficiency and accuracy of large assemblies in fields such as aircraft construction, automotive engineering, and the production of modern windmill blades. Frequency scanning interferometry demonstrates noticeable advantages as an absolute distance measurement system: it has high precision and does not depend on a cooperative target. In this paper, the influence of inevitable vibration in a frequency scanning interferometry based absolute distance measurement system is analyzed. The distance spectrum is broadened by the Doppler effect caused by vibration, which introduces a measurement error more than 10³ times larger than the change in optical path difference. To decrease the influence of vibration, the changes of the optical path difference are monitored by a frequency-stabilized laser which runs parallel to the frequency scanning interferometry. The experiment has verified the effectiveness of this method.
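
    The amplification quoted above is commonly expressed through the ratio of the optical frequency to the swept bandwidth; assuming that standard FSI relation (not reproduced in the abstract), a drift δx of the target during the sweep corrupts the measured distance roughly as:

```latex
% Vibration error amplification in frequency scanning interferometry:
% \nu_c - centre optical frequency, \Delta\nu - swept frequency range.
\delta R \approx \frac{\nu_c}{\Delta\nu}\,\delta x .
% e.g. \nu_c \approx 193\,\mathrm{THz} (1550 nm) and \Delta\nu = 100\,\mathrm{GHz}
% give \nu_c/\Delta\nu \approx 2\times10^{3}, consistent with an error more
% than 10^3 times the actual optical-path change.
```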

  5. Integrated Navigation System Design for Micro Planetary Rovers: Comparison of Absolute Heading Estimation Algorithms and Nonlinear Filtering

    Science.gov (United States)

    Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-01-01

    This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level. PMID:27223293
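
    As a toy illustration of fusing a drifting relative sensor with sparse absolute fixes (a scalar stand-in for the paper's EKF/UKF attitude filters; all noise values are invented):

```python
import numpy as np

def kalman_heading(gyro_rates, abs_fixes, dt=0.1, q=1e-4, r=0.05):
    """Scalar Kalman filter: integrate a gyro (relative sensor, drifts) and
    correct with occasional absolute heading fixes (step index -> radians)."""
    theta, p = 0.0, 1.0
    track = []
    for k, omega in enumerate(gyro_rates):
        theta += omega * dt          # predict with the relative (rate) sensor
        p += q                       # drift grows the uncertainty
        if k in abs_fixes:           # correct with the absolute measurement
            gain = p / (p + r)
            theta += gain * (abs_fixes[k] - theta)
            p *= (1.0 - gain)
        track.append(theta)
    return np.array(track)

rng = np.random.default_rng(8)
true_rate = 0.05                                   # rad/s constant turn
rates = true_rate + rng.normal(0, 0.01, 600)       # noisy gyro readings
fixes = {k: true_rate * 0.1 * k + rng.normal(0, 0.2) for k in range(0, 600, 50)}

est = kalman_heading(rates, fixes)
print(abs(est[-1] - true_rate * 0.1 * 600))        # final heading error, rad
```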

  6. Automated absolute activation analysis with californium-252 sources

    International Nuclear Information System (INIS)

    MacMurdo, K.W.; Bowman, W.W.

    1978-09-01

    A 100-mg 252Cf neutron activation analysis facility is used routinely at the Savannah River Laboratory for multielement analysis of many solid and liquid samples. An absolute analysis technique converts counting data directly to elemental concentration without the use of classical comparative standards and flux monitors. With the totally automated pneumatic sample transfer system, cyclic irradiation-decay-count regimes can be pre-selected for up to 40 samples, and samples can be analyzed with the facility unattended. An automatic data control system starts and stops a high-resolution gamma-ray spectrometer and/or a delayed-neutron detector; the system also stores data and controls output modes. Gamma ray data are reduced by three main programs in the IBM 360/195 computer: the 4096-channel spectrum and pertinent experimental timing, counting, and sample data are stored on magnetic tape; the spectrum is then reduced to a list of significant photopeak energies, integrated areas, and their associated statistical errors; and the third program assigns gamma ray photopeaks to the appropriate neutron activation product(s) by comparing photopeak energies to tabulated gamma ray energies. Photopeak areas are then converted to elemental concentration by using experimental timing and sample data, calculated elemental neutron capture rates, absolute detector efficiencies, and absolute spectroscopic decay data. Calculational procedures have been developed so that fissile material can be analyzed by cyclic neutron activation and delayed-neutron counting procedures. These calculations are based on a 6 half-life group model of delayed neutron emission; calculations include corrections for delayed neutron interference from 17O. Detection sensitivities of 239Pu were demonstrated with 15-g samples at a throughput of up to 140 per day. Over 40 elements can be detected at the sub-ppm level.

  7. Danish Towns during Absolutism

    DEFF Research Database (Denmark)

    This anthology, No. 4 in the Danish Urban Studies Series, presents in English recent significant research on Denmark's urban development during the Age of Absolutism, 1660-1848, and features 13 articles written by leading Danish urban historians. The years of Absolutism were marked by a general...

  8. Automated drug dispensing system reduces medication errors in an intensive care setting.

    Science.gov (United States)

    Chapuis, Claire; Roustit, Matthieu; Bal, Gaëlle; Schwebel, Carole; Pansu, Pascal; David-Tchouda, Sandra; Foroni, Luc; Calop, Jean; Timsit, Jean-François; Allenet, Benoît; Bosson, Jean-Luc; Bedouch, Pierrick

    2010-12-01

    We aimed to assess the impact of an automated dispensing system on the incidence of medication errors related to picking, preparation, and administration of drugs in a medical intensive care unit. We also evaluated the clinical significance of such errors and user satisfaction. Preintervention and postintervention study involving a control and an intervention medical intensive care unit. Two medical intensive care units in the same department of a 2,000-bed university hospital. Adult medical intensive care patients. After a 2-month observation period, we implemented an automated dispensing system in one of the units (study unit) chosen randomly, with the other unit being the control. The overall error rate was expressed as a percentage of total opportunities for error. The severity of errors was classified according to National Coordinating Council for Medication Error Reporting and Prevention categories by an expert committee. User satisfaction was assessed through self-administered questionnaires completed by nurses. A total of 1,476 medications for 115 patients were observed. After automated dispensing system implementation, we observed a reduced percentage of total opportunities for error in the study unit compared to the control unit (13.5% and 18.6%, respectively; 20.4% and 13.5% pre-intervention). Analysis by error type showed a significant impact of the automated dispensing system in reducing preparation errors. Most errors caused no harm (National Coordinating Council for Medication Error Reporting and Prevention category C). The automated dispensing system did not reduce errors causing harm. Finally, the mean score for working conditions improved from 1.0±0.8 to 2.5±0.8 on the four-point Likert scale. The implementation of an automated dispensing system reduced overall medication errors related to picking, preparation, and administration of drugs in the intensive care unit. Furthermore, most nurses favored the new drug dispensation organization.

  9. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of implementing accuracy compensation are closely related to the choice of sampling points. Therefore, based on the error-similarity compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps for a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.
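
    Error-similarity compensation typically predicts the positioning error at a new target from errors measured at nearby grid points; the inverse-distance-weighted sketch below illustrates that idea under our own simplifying assumptions (it is not the paper's algorithm, and the Kuka-specific grid steps are omitted).

```python
import numpy as np

def predict_error(target, sample_points, sample_errors, power=2.0):
    """Inverse-distance-weighted estimate of the robot's positioning error
    at `target`, from errors measured at calibration grid points."""
    d = np.linalg.norm(sample_points - target, axis=1)
    if np.any(d < 1e-9):                      # target coincides with a sample
        return sample_errors[np.argmin(d)]
    w = 1.0 / d ** power
    return (w[:, None] * sample_errors).sum(axis=0) / w.sum()

# toy calibration grid: measured (x, y, z) errors at 8 cube corners, in mm
grid = np.array([[x, y, z] for x in (0, 1000)
                 for y in (0, 1000) for z in (0, 1000)], float)
rng = np.random.default_rng(2)
errors = rng.normal(0.5, 0.1, (8, 3))         # smooth, similar errors nearby

target = np.array([250.0, 600.0, 400.0])
compensation = -predict_error(target, grid, errors)  # subtract predicted error
print(compensation)
```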

  10. Solving Problems with the Percentage Bar

    Science.gov (United States)

    van Galen, Frans; van Eerde, Dolly

    2013-01-01

    At the end of primary school all children more or less know what a percentage is, but they often struggle with percentage problems. This article describes a study in which students of 13 and 14 years old were given a written test with percentage problems and a week later were interviewed about the way they solved some of these problems. In a…

  11. Using total quality management approach to improve patient safety by preventing medication error incidences*.

    Science.gov (United States)

    Yousef, Nadin; Yousef, Farah

    2017-09-04

    Since one of the predominant causes of medication errors is drug administration error, and a previous study related to our investigations and reviews estimated an incidence of 6.7 medication errors per 100 administered medication doses, we aimed, using a six sigma approach, to propose a way to reduce these errors to fewer than 1 per 100 administered doses by improving healthcare professional education and producing clearer handwritten prescriptions. The study was held in a general government hospital. First, we systematically studied the current medication use process. Second, we used the six sigma approach, utilizing the five-step DMAIC process (Define, Measure, Analyze, Improve, Control), to find the real reasons behind such errors and to work out a useful solution for avoiding medication error incidences in daily healthcare professional practice. A data sheet was used as the data tool and Pareto diagrams as the analysis tool. In our investigation, we identified the real cause behind administered medication errors: as the Pareto diagrams used in our study showed, the fault percentage in the administration phase was 24.8%, while the percentage of errors related to the prescribing phase was 42.8%, 1.7-fold higher. This means that mistakes in the prescribing phase, especially poor handwritten prescriptions (17.6% of errors in that phase), are responsible for the consequent mistakes later in the treatment process. Therefore, we proposed in this study an effective low-cost strategy based on the behavior of healthcare workers, in the form of guideline recommendations to be followed by physicians. This method can serve as a prior caution to decrease errors in the prescribing phase, which may reduce administered medication error incidences to less than 1%. This behavioral improvement can be efficient in improving handwritten prescriptions and decreasing the consequent errors related to administered medication.

  12. DI3 - A New Procedure for Absolute Directional Measurements

    Directory of Open Access Journals (Sweden)

    A Geese

    2011-06-01

    Full Text Available The standard observatory procedure for determining a geomagnetic field's declination and inclination absolutely is the DI-flux measurement. The instrument consists of a non-magnetic theodolite equipped with a single-axis fluxgate magnetometer. Additionally, a scalar magnetometer is needed to provide all three components of the field. Using only 12 measurement steps, all systematic errors can be accounted for, but if only one of the readings is wrong, the whole measurement has to be rejected. We use a three-component sensor on top of the theodolite's telescope. By performing more measurement steps, we gain much better control of the whole procedure: as the magnetometer can be fully calibrated by rotating about two independent directions, every combined reading of magnetometer output and theodolite angles provides the absolute field vector. We predefined a set of angle positions that the observer has to try to achieve. To further simplify the measurement procedure, the observer is guided by a pocket PC, on which he has only to confirm the theodolite position. The magnetic field is then stored automatically, together with the horizontal and vertical angles. The DI3 measurement is periodically performed at the Niemegk Observatory, allowing for a direct comparison with the traditional measurements.

  13. Influence of slice thickness of computed tomography and type of rapid prototyping on the accuracy of 3-dimensional medical models

    Energy Technology Data Exchange (ETDEWEB)

    Um, Ki Doo; Lee, Byung Do [Wonkwang University College of Medicine, Iksan (Korea, Republic of)

    2004-03-15

    This study evaluated the influence of the slice thickness of computed tomography (CT) and the type of rapid prototyping (RP) on the accuracy of 3-dimensional medical models. Transaxial CT data of a human dry skull were taken with a multi-detector spiral CT. Slice thicknesses were 1, 2, 3 and 4 mm. Three-dimensional image model reconstruction using 3-D visualization medical software (V-works 3.0) and RP model fabrication followed. The two RP models were a 3D printing model (Z402, Z Corp., Burlington, USA) and a stereolithographic apparatus (SLA) model. Linear measurements of anatomical landmarks on the dry skull, the 3-D image model, and the two RP models were compared according to slice thickness and RP model type. The relative error percentages in absolute value between linear measurements of the dry skull and the image models of 1, 2 and 3 mm slice thickness were 0.97, 1.98 and 3.83, respectively. The relative error percentage in absolute value between linear measurements of the dry skull and the SLA model was 0.79, and that for the 3D printing model was 2.52. These results indicate that a 3-dimensional image model of thin slice thickness and a stereolithographic RP model show relatively high accuracy.
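
    The relative error percentage used in these comparisons is presumably the standard definition (our reading, since the abstract does not spell it out):

```latex
% Relative error percentage between a model measurement L_m and the
% corresponding dry-skull measurement L_s:
E_{rel} = \frac{\lvert L_m - L_s \rvert}{L_s} \times 100\% .
```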

  14. Influence of slice thickness of computed tomography and type of rapid prototyping on the accuracy of 3-dimensional medical models

    International Nuclear Information System (INIS)

    Um, Ki Doo; Lee, Byung Do

    2004-01-01

    This study evaluated the influence of the slice thickness of computed tomography (CT) and the type of rapid prototyping (RP) on the accuracy of 3-dimensional medical models. Transaxial CT data of a human dry skull were taken with a multi-detector spiral CT. Slice thicknesses were 1, 2, 3 and 4 mm. Three-dimensional image model reconstruction using 3-D visualization medical software (V-works 3.0) and RP model fabrication followed. The two RP models were a 3D printing model (Z402, Z Corp., Burlington, USA) and a stereolithographic apparatus (SLA) model. Linear measurements of anatomical landmarks on the dry skull, the 3-D image model, and the two RP models were compared according to slice thickness and RP model type. The relative error percentages in absolute value between linear measurements of the dry skull and the image models of 1, 2 and 3 mm slice thickness were 0.97, 1.98 and 3.83, respectively. The relative error percentage in absolute value between linear measurements of the dry skull and the SLA model was 0.79, and that for the 3D printing model was 2.52. These results indicate that a 3-dimensional image model of thin slice thickness and a stereolithographic RP model show relatively high accuracy.

  15. Prediction of monthly mean daily global solar radiation using ...

    Indian Academy of Sciences (India)

    A 4-layer MLFF network was developed and the average value of the mean absolute percentage error … and sunshine hours to estimate the monthly mean … The outputs of the layers are computed using equations (1) and (2).

  16. Tinker-OpenMM: Absolute and relative alchemical free energies using AMOEBA on GPUs.

    Science.gov (United States)

    Harger, Matthew; Li, Daniel; Wang, Zhi; Dalby, Kevin; Lagardère, Louis; Piquemal, Jean-Philip; Ponder, Jay; Ren, Pengyu

    2017-09-05

    The capabilities of polarizable force fields for alchemical free energy calculations have been limited by the high computational cost and complexity of the underlying potential energy functions. In this work, we present a GPU-based general alchemical free energy simulation platform for the polarizable potential AMOEBA. Tinker-OpenMM, the OpenMM implementation of the AMOEBA simulation engine, has been modified to enable both absolute and relative alchemical simulations on GPUs, which leads to a ∼200-fold improvement in simulation speed over a single CPU core. We show that free energy values calculated using this platform agree with the results of Tinker simulations for the hydration of organic compounds and binding of host-guest systems within the statistical errors. In addition to absolute binding, we designed a relative alchemical approach for computing relative binding affinities of ligands to the same host, where a special path was applied to avoid numerical instability due to polarization between the different ligands that bind to the same site. This scheme is general and does not require ligands to have similar scaffolds. We show that relative hydration and binding free energies calculated using this approach match those computed from the absolute free energy approach. © 2017 Wiley Periodicals, Inc.
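
    As background, relative binding free energies from such paired alchemical transformations rest on a standard thermodynamic cycle (a textbook identity, not a result specific to this paper):

```latex
% Relative binding free energy of ligands A and B to the same host,
% from two alchemical transformations (A -> B in complex and in solvent):
\Delta\Delta G_{bind}(A \to B)
  = \Delta G_{complex}(A \to B) - \Delta G_{solvent}(A \to B)
  = \Delta G_{bind}(B) - \Delta G_{bind}(A) .
```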

  17. Proton spectroscopic imaging of polyacrylamide gel dosimeters for absolute radiation dosimetry

    International Nuclear Information System (INIS)

    Murphy, P.S.; Schwarz, A.J.; Leach, M.O.

    2000-01-01

    Proton spectroscopy has been evaluated as a method for quantifying radiation induced changes in polyacrylamide gel dosimeters. A calibration was first performed using BANG-type gel samples receiving uniform doses of 6 MV photons from 0 to 9 Gy in 1 Gy intervals. The peak integral of the acrylic protons belonging to acrylamide and methylenebisacrylamide, normalized to the water signal, was plotted against absorbed dose. Response was approximately linear within the range 0-7 Gy. A large gel phantom irradiated with three coplanar 3x3 cm square fields to 5.74 Gy at isocentre was then imaged with an echo-filter technique to map the distribution of monomers directly. The image, normalized to the water signal, was converted into an absolute dose map. At the isocentre the measured dose was 5.69 Gy (SD = 0.09), in good agreement with the planned dose. The measured dose distribution elsewhere in the sample shows greater errors. A T2-derived dose map demonstrated a better relative distribution but gave an overestimate of the dose at isocentre of 18%. The data indicate that MR measurements of monomer concentration can complement T2-based measurements and can be used to verify absolute dose. Compared with the more usual T2 measurements for assessing gel polymerization, monomer concentration analysis is less sensitive to parameters such as gel pH and temperature, which can cause ambiguous relaxation time measurements and erroneous absolute dose calculations. (author)

  18. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    Science.gov (United States)

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. The limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be applied later, while methodological advances are needed in the multi-pollutant setting.
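
    Regression calibration, the simplest of the listed corrections, replaces the error-prone exposure with its conditional expectation estimated from validation data; a minimal sketch on invented data follows (real studies, as noted above, often lack such validation samples).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
x_true = rng.normal(10, 2, n)                 # true pollutant exposure
x_model = x_true + rng.normal(0, 1.5, n)      # model-predicted exposure (with error)
health = 0.3 * x_true + rng.normal(0, 1, n)   # outcome; true effect = 0.3

# naive estimate: regress the outcome directly on the error-prone exposure
beta_naive = np.polyfit(x_model, health, 1)[0]

# regression calibration: learn E[x_true | x_model] on a validation subset,
# then regress the outcome on the calibrated exposure
val = slice(0, 500)                           # pretend only 500 have true values
slope, intercept = np.polyfit(x_model[val], x_true[val], 1)
x_calibrated = intercept + slope * x_model
beta_rc = np.polyfit(x_calibrated, health, 1)[0]

print(beta_naive, beta_rc)   # beta_rc should sit closer to the true 0.3
```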

  19. Optimal design of the absolute positioning sensor for a high-speed maglev train and research on its fault diagnosis.

    Science.gov (United States)

    Zhang, Dapeng; Long, Zhiqiang; Xue, Song; Zhang, Junge

    2012-01-01

    This paper studies an absolute positioning sensor for a high-speed maglev train and its fault diagnosis method. The absolute positioning sensor is an important sensor for the high-speed maglev train to accomplish its synchronous traction. It is used to calibrate the error of the relative positioning sensor which is used to provide the magnetic phase signal. On the basis of the analysis for the principle of the absolute positioning sensor, the paper describes the design of the sending and receiving coils and realizes the hardware and the software for the sensor. In order to enhance the reliability of the sensor, a support vector machine is used to recognize the fault characters, and the signal flow method is used to locate the faulty parts. The diagnosis information not only can be sent to an upper center control computer to evaluate the reliability of the sensors, but also can realize on-line diagnosis for debugging and the quick detection when the maglev train is off-line. The absolute positioning sensor we study has been used in the actual project.

  20. Optimal Design of the Absolute Positioning Sensor for a High-Speed Maglev Train and Research on Its Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Junge Zhang

    2012-08-01

    Full Text Available This paper studies an absolute positioning sensor for a high-speed maglev train and its fault diagnosis method. The absolute positioning sensor is an important sensor for the high-speed maglev train to accomplish its synchronous traction. It is used to calibrate the error of the relative positioning sensor which is used to provide the magnetic phase signal. On the basis of the analysis for the principle of the absolute positioning sensor, the paper describes the design of the sending and receiving coils and realizes the hardware and the software for the sensor. In order to enhance the reliability of the sensor, a support vector machine is used to recognize the fault characters, and the signal flow method is used to locate the faulty parts. The diagnosis information not only can be sent to an upper center control computer to evaluate the reliability of the sensors, but also can realize on-line diagnosis for debugging and the quick detection when the maglev train is off-line. The absolute positioning sensor we study has been used in the actual project.

  1. Is adult gait less susceptible than paediatric gait to hip joint centre regression equation error?

    Science.gov (United States)

    Kiernan, D; Hosking, J; O'Brien, T

    2016-03-01

    Hip joint centre (HJC) regression equation error during paediatric gait has recently been shown to have clinical significance. In relation to adult gait, it has been inferred that comparable errors with children in absolute HJC position may in fact result in less significant kinematic and kinetic error. This study investigated the clinical agreement of three commonly used regression equation sets (Bell et al., Davis et al. and Orthotrak) for adult subjects against the equations of Harrington et al. The relationship between HJC position error and subject size was also investigated for the Davis et al. set. Full 3-dimensional gait analysis was performed on 12 healthy adult subjects with data for each set compared to Harrington et al. The Gait Profile Score, Gait Variable Score and GDI-kinetic were used to assess clinical significance while differences in HJC position between the Davis and Harrington sets were compared to leg length and subject height using regression analysis. A number of statistically significant differences were present in absolute HJC position. However, all sets fell below the clinically significant thresholds (GPS <1.6°, GDI-Kinetic <3.6 points). Linear regression revealed a statistically significant relationship for both increasing leg length and increasing subject height with decreasing error in anterior/posterior and superior/inferior directions. Results confirm a negligible clinical error for adult subjects suggesting that any of the examined sets could be used interchangeably. Decreasing error with both increasing leg length and increasing subject height suggests that the Davis set should be used cautiously on smaller subjects. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Application of a soft computing technique in predicting the percentage of shear force carried by walls in a rectangular channel with non-homogeneous roughness.

    Science.gov (United States)

    Khozani, Zohreh Sheikh; Bonakdari, Hossein; Zaji, Amir Hossein

    2016-01-01

    Two new soft computing models, namely genetic programming (GP) and a genetic artificial algorithm (GAA) neural network (a combination of modified genetic algorithm and artificial neural network methods), were developed to predict the percentage of shear force carried by the walls in a rectangular channel with non-homogeneous roughness. The ability of these methods to estimate the percentage of shear force was investigated. Moreover, the effectiveness of the independent parameters in predicting the percentage of shear force was determined using sensitivity analysis. According to the results, the GP model demonstrated superior performance to the GAA model. A comparison was also made between the GP program, determined as the best model, and five equations obtained in prior research. The GP model, with the lowest error values (root mean square error (RMSE) of 0.0515), performed best compared with the other equations presented for rough and smooth channels as well as smooth ducts. The equation proposed for rectangular channels with rough boundaries (RMSE of 0.0642) outperformed the prior equations for smooth boundaries.

  3. Error evaluation of inelastic response spectrum method for earthquake design

    International Nuclear Information System (INIS)

    Paz, M.; Wong, J.

    1981-01-01

    Two-story, four-story and ten-story shear building-type frames subjected to earthquake excitation were analyzed at several levels of their yield resistance. These frames were subjected at their base to the motion recorded for the north-south component of the 1940 El Centro earthquake, and to an artificial earthquake which would produce the response spectral charts recommended for design. The frames were first subjected to 25% or 50% of the intensity level of these earthquakes. The resulting maximum relative displacement for each story of the frames was assumed to be the yield resistance for the subsequent analyses at 100% of the excitation intensity. The frames analyzed were uniform along their height, with the stiffness adjusted to give a fundamental period of 0.20 seconds for the two-story frame, 0.40 seconds for the four-story frame and 1.0 second for the ten-story frame. Results of the study provided the following conclusions: (1) the percentage error in floor displacement for linear behavior was less than 10%; (2) the percentage error in floor displacement for inelastic behavior (elastoplastic) could be as high as 100%; (3) in most of the cases analyzed, the error increased with damping in the system; (4) as a general rule, the error increased as the modal yield resistance decreased; (5) the error was lower for the structures subjected to the 1940 El Centro earthquake than for the same structures subjected to an artificial earthquake generated from the design response spectra. (orig./HP)

  4. The importance of intra-hospital pharmacovigilance in the detection of medication errors

    Science.gov (United States)

    Villegas, Francisco; Figueroa-Montero, David; Barbero-Becerra, Varenka; Juárez-Hernández, Eva; Uribe, Misael; Chávez-Tapia, Norberto; González-Chon, Octavio

    2018-01-01

    Hospitalized patients are susceptible to medication errors, which rank between the fourth and the sixth leading causes of death. The department of intra-hospital pharmacovigilance intervenes in the entire medication process with the purpose of preventing, repairing and assessing damage. To analyze medication errors reported by the Mexican Fundación Clínica Médica Sur pharmacovigilance system and their impact on patients. Prospective study carried out from 2012 to 2015, in which medication prescriptions given to patients were recorded. Owing to heterogeneity, data were described as absolute numbers on a logarithmic scale. 292 932 prescriptions of 56 368 patients were analyzed, and medication errors were identified in 8.9%. The treating physician was responsible for 83.32% of medication errors, residents for 6.71% and interns for 0.09%. No error caused permanent damage or death. This is the pharmacovigilance study with the largest sample size reported. Copyright: © 2018 Secretaría de Salud.

  5. Absolute beam current monitoring in endstation c

    International Nuclear Information System (INIS)

    Bochna, C.

    1995-01-01

    The first few experiments at CEBAF require approximately 1% absolute measurements of beam currents expected to range from 10-25 μA. This represents errors of 100-250 nA. The initial complement of beam current monitors is of the non-intercepting type. The CEBAF accelerator division has provided a stripline monitor and a cavity monitor, and the authors have installed an Unser monitor (parametric current transformer, or PCT). After calibrating the Unser monitor with a precision current reference, the authors plan to transfer this calibration using CW beam to the stripline and cavity monitors. It is important that this be done fairly rapidly because, while the gain of the Unser monitor is quite stable, the offset may drift on the order of 0.5 μA per hour. A summary of what the authors have learned about the linearity, zero drift, and gain drift of each type of current monitor will be presented.

  6. Near threshold absolute TDCS: First results

    International Nuclear Information System (INIS)

    Roesel, T.; Schlemmer, P.; Roeder, J.; Frost, L.; Jung, K.; Ehrhardt, H.

    1992-01-01

    A new method, and first results for an impact energy 2 eV above the threshold of ionisation of helium, are presented for the measurement of absolute triple differential cross sections (TDCS) in a crossed beam experiment. The method is based upon measurement of beam/target overlap densities using known absolute total ionisation cross sections and of detection efficiencies using known absolute double differential cross sections (DDCS). For the present work the necessary absolute DDCS for 1 eV electrons had also to be measured. Results are presented for several different coplanar kinematics and are compared with recent DWBA calculations. (orig.)

  7. Absolute entropy of ions in methanol

    International Nuclear Information System (INIS)

    Abakshin, V.A.; Kobenin, V.A.; Krestov, G.A.

    1978-01-01

    By measuring the initial thermo-electromotive forces of cells with silver bromide electrodes in tetraalkylammonium bromide solutions, the absolute entropy of the bromide ion in methanol is determined in the 298.15-318.15 K range. The value S⁰(Br⁻) = 9.8 entropy units is used for calculation of the absolute partial molar entropy of alkali metal ions and halide ions. It has been found that the absolute entropy of Cs⁺ is 12.0 entropy units and that of I⁻ is 14.0 entropy units. The obtained absolute ion entropies in methanol at 298.15 K agree with published data to within 1-2 entropy units

  8. Percentage Energy from Fat Screener: Overview

    Science.gov (United States)

    A short assessment instrument to estimate an individual's usual intake of percentage energy from fat. The foods asked about on the instrument were selected because they were the most important predictors of variability in percentage energy from fat.

  9. The systematic error of temperature noise correlation measurement method and self-calibration

    International Nuclear Information System (INIS)

    Tian Hong; Tong Yunxian

    1993-04-01

    The turbulent transport behavior of fluid noise and the effect of noise on the velocity measurement system have been studied. The systematic error of the velocity measurement system is analyzed. A theoretical calibration method is proposed, which makes the time-correlation velocity measurement an absolute measurement method. The theoretical results are in good agreement with experiments

  10. Optimization of sample absorbance for quantitative analysis in the presence of pathlength error in the IR and NIR regions

    International Nuclear Information System (INIS)

    Hirschfeld, T.; Honigs, D.; Hieftje, G.

    1985-01-01

    Optimal absorbance levels for quantitative analysis in the presence of photometric error have been described in the past. In newer instrumentation, such as FT-IR and NIRA spectrometers, the photometric error is no longer limiting. In these instruments, pathlength error due to cell or sampling irreproducibility is often a major concern. One can derive the optimal absorbance by taking both pathlength and photometric errors into account. This paper analyzes the case of pathlength error >> photometric error (trivial) and various cases in which the pathlength and photometric errors are of the same order: adjustable concentration (trivial until dilution errors are considered), constant relative pathlength error (trivial), and constant absolute pathlength error. The latter, in particular, is analyzed in detail to give the behavior of the error, the behavior of the optimal absorbance in its presence, and the total error levels attainable

  11. The effect of insulin resistance and exercise on the percentage of CD16(+) monocyte subset in obese individuals.

    Science.gov (United States)

    de Matos, Mariana A; Duarte, Tamiris C; Ottone, Vinícius de O; Sampaio, Pâmela F da M; Costa, Karine B; de Oliveira, Marcos F Andrade; Moseley, Pope L; Schneider, Suzanne M; Coimbra, Cândido C; Brito-Melo, Gustavo E A; Magalhães, Flávio de C; Amorim, Fabiano T; Rocha-Vieira, Etel

    2016-06-01

    Obesity is a low-grade chronic inflammation condition, and macrophages, and possibly monocytes, are involved in the pathological outcomes of obesity. Physical exercise is a low-cost strategy to prevent and treat obesity, probably because of its anti-inflammatory action. We evaluated the percentage of CD16(-) and CD16(+) monocyte subsets in obese insulin-resistant individuals and the effect of an exercise bout on the percentage of these cells. Twenty-seven volunteers were divided into three experimental groups: lean insulin sensitive, obese insulin sensitive and obese insulin resistant. Venous blood samples collected before and 1 h after an aerobic exercise session on a cycle ergometer were used for determination of monocyte subsets by flow cytometry. Insulin-resistant obese individuals have a higher percentage of CD16(+) monocytes (14.8 ± 2.4%) than the lean group (10.0 ± 1.3%). A positive correlation of the percentage of CD16(+) monocytes with body mass index and fasting plasma insulin levels was found. One bout of moderate exercise reduced the percentage of CD16(+) monocytes by 10% in all the groups evaluated. Also, the absolute monocyte count, as well as all other leukocyte populations, in lean and obese individuals, increased after exercise. This fact may partially account for the observed reduction in the percentage of CD16(+) cells in response to exercise. Insulin-resistant, but not insulin-sensitive obese individuals, have an increased percentage of CD16(+) monocytes that can be slightly modulated by a single bout of moderate aerobic exercise. These findings may be clinically relevant to the population studied, considering the involvement of CD16(+) monocytes in the pathophysiology of obesity. Copyright © 2016 John Wiley & Sons, Ltd. Obesity is now considered to be an inflammatory condition associated with many pathological consequences, including insulin resistance. It is proposed that insulin resistance contributes to the aggravation of the

  12. Pertinence analysis of intensity-modulated radiation therapy dosimetry error and parameters of beams

    International Nuclear Information System (INIS)

    Chi Zifeng; Liu Dan; Cao Yankun; Li Runxiao; Han Chun

    2012-01-01

    Objective: To study the relationships among beam parameter settings in intensity-modulated radiation therapy (IMRT) planning in order to explore their effect on absolute dose verification. Methods: Forty-three esophageal carcinoma cases were optimized with Pinnacle 7.6c by an experienced physicist using appropriate optimization parameters and dose constraints, with a number of iterations to meet the clinical acceptance criteria. The plans were copied to a water phantom, and a 0.13 cc Farmer ion chamber with a DOSE1 dosimeter was used to measure the absolute dose. The statistical data on the beam parameters for the 43 cases were collected, and the relationships among them were analyzed. The statistical data on the dosimetry error were also collected, and a comparative analysis was made of the relation between the beam parameters and the ion chamber absolute dose verification results. Results: The beam parameters were correlated with each other. A clear association existed between dose accuracy and parameter settings. When the beam segment number of an IMRT plan was more than 80, the dose deviation could be greater than 3%; if the beam segment number was less than 80, the dose deviation was smaller than 3%. When the beam segment number was more than 100, part of the dose deviation was greater than 4%; conversely, if the beam segment number was less than 100, the dose deviation was definitely smaller than 4%. Conclusions: In order to decrease the absolute dose verification error, fewer beam angles and fewer beam segments are needed, and the beam segment number should be kept within 80. (authors)

  13. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections.

    Science.gov (United States)

    Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D

    2018-01-01

    Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using ±5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not

  14. [Medication errors in Spanish intensive care units].

    Science.gov (United States)

    Merino, P; Martín, M C; Alonso, A; Gutiérrez, I; Alvarez, J; Becerril, F

    2013-01-01

    To estimate the incidence of medication errors in Spanish intensive care units. Post hoc study of the SYREC trial. A longitudinal observational study carried out during 24 hours in patients admitted to the ICU. Spanish intensive care units. Patients admitted to the intensive care unit participating in the SYREC trial during the study period. Risk, individual risk, and rate of medication errors. The final study sample consisted of 1017 patients from 79 intensive care units; 591 (58%) were affected by one or more incidents. Of these, 253 (43%) had at least one medication-related incident. The total number of incidents reported was 1424, of which 350 (25%) were medication errors. The risk of suffering at least one incident was 22% (IQR: 8-50%) while the individual risk was 21% (IQR: 8-42%). The medication error rate was 1.13 medication errors per 100 patient-days of stay. Most incidents occurred in the prescription (34%) and administration (28%) phases, 16% resulted in patient harm, and 82% were considered "totally avoidable". Medication errors are among the most frequent types of incidents in critically ill patients, and are more common in the prescription and administration stages. Although most such incidents have no clinical consequences, a significant percentage prove harmful for the patient, and a large proportion are avoidable. Copyright © 2012 Elsevier España, S.L. and SEMICYUC. All rights reserved.

  15. Comparative study of Holt-Winters triple exponential smoothing and seasonal ARIMA: Forecasting short-term seasonal car sales in South Africa

    Directory of Open Access Journals (Sweden)

    Katleho Daniel Makatjane

    2016-02-01

    Full Text Available In this paper, both Seasonal ARIMA and Holt-Winters models are developed to predict monthly car sales in South Africa using data for the period January 1994 to December 2013. The purpose of this study is to choose an optimal model suited to the sector. Three error metrics, namely mean absolute error, mean absolute percentage error and root mean square error, were used in making this choice. Upon realizing that the three forecast errors could not provide a concrete basis for a conclusion, the power test was calculated for each model, showing Holt-Winters to have about 0.3% more predictive power. Empirical results also indicate that the Holt-Winters model produced more precise short-term seasonal forecasts. The findings also revealed a structural break in April 2009, implying that the car industry was significantly affected by the 2008 and 2009 US financial crisis
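
    As a rough illustration of how the three error metrics compared in this record are computed, the Python sketch below evaluates MAE, MAPE and RMSE on a toy series; the sales figures are hypothetical, not the study's data.

```python
import numpy as np

def mae(actual, forecast):
    """Mean absolute error."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return np.mean(np.abs(actual - forecast))

def mape(actual, forecast):
    """Mean absolute percentage error (undefined where actual == 0)."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def rmse(actual, forecast):
    """Root mean square error."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return np.sqrt(np.mean((actual - forecast) ** 2))

# Hypothetical monthly car-sales figures and forecasts
actual   = [42000, 45500, 39800, 47200]
forecast = [43100, 44200, 41000, 46500]
print(mae(actual, forecast), mape(actual, forecast), rmse(actual, forecast))
```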

  16. Design, performance, and calculated error of a Faraday cup for absolute beam current measurements of 600-MeV protons

    International Nuclear Information System (INIS)

    Beck, S.M.

    1975-04-01

    A mobile self-contained Faraday cup system for beam current measurements of nominal 600-MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated to be the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600-MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV

  18. First Absolutely Calibrated Localized Measurements of Ion Velocity in the MST in Locked and Rotating Plasmas

    Science.gov (United States)

    Baltzer, M.; Craig, D.; den Hartog, D. J.; Nornberg, M. D.; Munaretto, S.

    2015-11-01

    An Ion Doppler Spectrometer (IDS) is used on MST for high time-resolution passive and active measurements of impurity ion emission. Absolutely calibrated measurements of flow are difficult because the spectrometer records data within 0.3 nm of the C+5 line of interest, and commercial calibration lamps do not produce lines in this narrow range. A novel optical system was designed to absolutely calibrate the IDS. The device uses a UV LED to produce a broad emission curve in the desired region. A Fabry-Perot etalon filters this light, cutting transmittance peaks into the pattern of the LED emission. An optical train of fused silica lenses focuses the light into the IDS at f/4. A holographic diffuser blurs the light cone to increase homogeneity. Using this light source, the absolute Doppler shift of ion emissions can be measured in MST plasmas. In combination with charge exchange recombination spectroscopy, localized ion velocities can now be measured. Previously, a time-averaged measurement along the chord bisecting the poloidal plane was used to calibrate the IDS; the quality of these central-chord calibrations can be characterized with our absolute calibration. Calibration errors may also be quantified and minimized by optimizing the curve-fitting process. Preliminary measurements of toroidal velocity in locked and rotating plasmas will be shown. This work has been supported by the US DOE.

  19. ACCESS, Absolute Color Calibration Experiment for Standard Stars: Integration, Test, and Ground Performance

    Science.gov (United States)

    Kaiser, Mary Elizabeth; Morris, Matthew; Aldoroty, Lauren; Kurucz, Robert; McCandliss, Stephan; Rauscher, Bernard; Kimble, Randy; Kruk, Jeffrey; Wright, Edward L.; Feldman, Paul; Riess, Adam; Gardner, Jonathon; Bohlin, Ralph; Deustua, Susana; Dixon, Van; Sahnow, David J.; Perlmutter, Saul

    2018-01-01

    Establishing improved spectrophotometric standards is important for a broad range of missions and is relevant to many astrophysical problems. Systematic errors associated with astrophysical data used to probe fundamental astrophysical questions, such as SNeIa observations used to constrain dark energy theories, now exceed the statistical errors associated with merged databases of these measurements. ACCESS, "Absolute Color Calibration Experiment for Standard Stars", is a series of rocket-borne sub-orbital missions and ground-based experiments designed to enable improvements in the precision of the astrophysical flux scale through the transfer of absolute laboratory detector standards from the National Institute of Standards and Technology (NIST) to a network of stellar standards with a calibration accuracy of 1% and a spectral resolving power of 500 across the 0.35-1.7 μm bandpass. To achieve this goal ACCESS (1) observes HST/Calspec stars (2) above the atmosphere to eliminate telluric spectral contaminants (e.g. OH) (3) using a single optical path and (HgCdTe) detector (4) that is calibrated to NIST laboratory standards and (5) monitored on the ground and in-flight using an on-board calibration monitor. The observations are (6) cross-checked and extended through the generation of stellar atmosphere models for the targets. The ACCESS telescope and spectrograph have been designed, fabricated, and integrated. Subsystems have been tested. Performance results for subsystems, operations testing, and the integrated spectrograph will be presented. NASA sounding rocket grant NNX17AC83G supports this work.

  20. Projective absoluteness for Sacks forcing

    NARCIS (Netherlands)

    Ikegami, D.

    2009-01-01

    We show that Σ¹₃-absoluteness for Sacks forcing is equivalent to the nonexistence of a Δ¹₂ Bernstein set. We also show that Sacks forcing is the weakest forcing notion among all of the preorders that add a new real with respect to Σ¹₃ forcing absoluteness.

  1. Making Sense of Fractions and Percentages

    Science.gov (United States)

    Whitin, David J.; Whitin, Phyllis

    2012-01-01

    Because fractions and percentages can be difficult for children to grasp, connecting them whenever possible is beneficial. Linking them can foster representational fluency as children simultaneously see the part-whole relationship expressed numerically (as a fraction and as a percentage) and visually (as a pie chart). NCTM advocates these…

  2. PyForecastTools

    Energy Technology Data Exchange (ETDEWEB)

    2017-09-22

    The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias) and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g. forecast error, scaled error) of each metric are also provided. To compare models the package provides a generic skill score and "percent better". Robust measures of scale, including median absolute deviation, robust standard deviation, robust coefficient of variation and the Sn estimator, are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, and bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
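
    The bias and accuracy measures named above are straightforward to compute directly. The sketch below is an independent Python implementation of the accuracy ratio, median log accuracy ratio and median symmetric accuracy, following their usual definitions; it does not reproduce the package's actual API or function names.

```python
import numpy as np

def log_accuracy_ratio(obs, pred):
    """Log accuracy ratio ln(Q), Q = predicted / observed (both must be > 0)."""
    return np.log(np.asarray(pred, float) / np.asarray(obs, float))

def median_log_accuracy(obs, pred):
    """Median log accuracy ratio: a robust, symmetric measure of bias."""
    return np.median(log_accuracy_ratio(obs, pred))

def median_symmetric_accuracy(obs, pred):
    """Median symmetric accuracy in percent: 100 * (exp(median|ln Q|) - 1)."""
    return 100.0 * (np.exp(np.median(np.abs(log_accuracy_ratio(obs, pred)))) - 1.0)

# Hypothetical observations and predictions
obs  = [1.0, 2.0, 4.0, 8.0]
pred = [1.1, 1.8, 4.4, 7.0]
print(median_log_accuracy(obs, pred), median_symmetric_accuracy(obs, pred))
```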

  3. Statistical Diagnosis of the Best Weibull Methods for Wind Power Assessment for Agricultural Applications

    Directory of Open Access Journals (Sweden)

    Abul Kalam Azad

    2014-05-01

    Full Text Available The best Weibull distribution methods for the assessment of wind energy potential at different altitudes in desired locations are statistically diagnosed in this study. Seven different methods, namely the graphical method (GM), method of moments (MOM), standard deviation method (STDM), maximum likelihood method (MLM), power density method (PDM), modified maximum likelihood method (MMLM) and equivalent energy method (EEM), were used to estimate the Weibull parameters, and six statistical tools, namely relative percentage of error, root mean square error (RMSE), mean percentage of error, mean absolute percentage of error, chi-square error and analysis of variance, were used to precisely rank the methods. The statistical fittings of the measured and calculated wind speed data are assessed to justify the performance of the methods. The capacity factor and total energy generated by a small model wind turbine are calculated by numerical integration using trapezoidal sums and Simpson's rule. The results show that MOM and MLM are the most efficient methods for determining the values of k and c to fit Weibull distribution curves.
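
    A minimal sketch of one of the seven methods, the method of moments (MOM), follows. It uses the widely quoted k ≈ (σ/μ)^(-1.086) approximation for the shape parameter, which is an assumption on our part rather than necessarily the exact estimator used in the study.

```python
import numpy as np
from scipy.special import gamma

def weibull_mom(speeds):
    """Estimate Weibull shape k and scale c from wind speeds by the
    method of moments, using the common k ~ (sigma/mu)**-1.086
    approximation (reasonable for roughly 1 <= k <= 10)."""
    v = np.asarray(speeds, dtype=float)
    mu, sigma = v.mean(), v.std(ddof=1)
    k = (sigma / mu) ** -1.086       # shape (dimensionless)
    c = mu / gamma(1.0 + 1.0 / k)    # scale (same units as v)
    return k, c

# Synthetic hourly wind speeds in m/s drawn from a known Weibull
rng = np.random.default_rng(0)
sample = rng.weibull(2.0, 1000) * 7.0   # true k = 2, c = 7
print(weibull_mom(sample))
```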

  4. Novel isotopic N, N-Dimethyl Leucine (iDiLeu) Reagents Enable Absolute Quantification of Peptides and Proteins Using a Standard Curve Approach

    Science.gov (United States)

    Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun

    2015-01-01

    Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N, N-dimethyl leucine (iDiLeu). These labels contain an amine-reactive group (a triazine ester), are cost-effective because of their synthetic simplicity, and increase throughput compared with previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods are validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard curve creation and analyte quantification in one run (<8% error).
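
    The one-run standard-curve idea can be illustrated with a simple linear fit: peak areas of the labeled standards are regressed against their known spike-in amounts, and the unknown channel is read off the fitted line. All numbers below are hypothetical.

```python
import numpy as np

# Four channels carry known standard amounts (a standard curve in one
# run); a fifth channel carries the unknown. Values are hypothetical.
std_amount = np.array([5.0, 25.0, 100.0, 400.0])     # fmol spiked per channel
std_area   = np.array([1.1e4, 5.3e4, 2.1e5, 8.6e5])  # measured peak areas

# Least-squares line through the standards: area = slope * amount + intercept
slope, intercept = np.polyfit(std_amount, std_area, 1)

unknown_area = 3.0e5
unknown_amount = (unknown_area - intercept) / slope
print(f"estimated analyte amount: {unknown_amount:.1f} fmol")
```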

  5. Meniscal tear. Diagnostic errors in MR imaging

    International Nuclear Information System (INIS)

    Barrera, M. C.; Recondo, J. A.; Gervas, C.; Fernandez, E.; Villanua, J. A.M.; Salvador, E.

    2003-01-01

    To analyze diagnostic discrepancies found between magnetic resonance (MR) imaging and arthroscopy, and to determine the reasons why they occur. Two hundred and forty-eight MR knee explorations were retrospectively reviewed. Forty of these showed diagnostic discrepancies between MR and arthroscopy. Two radiologists independently re-analyzed the images from 29 of the 40 studies without knowing which diagnosis had resulted from which of the two techniques. Their interpretations were correlated with the initial MR diagnosis, MR images and arthroscopic results. Initial errors in MR imaging were classified as either unavoidable, interpretive, or secondary to equivocal findings. Eleven MR examinations could not be reviewed since their corresponding images could not be located. Of the 34 errors found in the original diagnoses, 12 (35.3%) were classified as unavoidable, 14 (41.2%) as interpretive and 8 (23.5%) as secondary to equivocal findings. 41.2% of the errors were avoided in the retrospective study, probably due to our department having greater experience in interpreting MR images; 25.5% were unavoidable even in the retrospective study. A small percentage of diagnostic errors were due to the presence of subtle equivocal findings. (Author) 15 refs

  6. Quantifying geocode location error using GIS methods

    Directory of Open Access Journals (Sweden)

    Gardner Bennett R

    2007-04-01

    Full Text Available Abstract Background The Metropolitan Atlanta Congenital Defects Program (MACDP) collects maternal address information at the time of delivery for infants and fetuses with birth defects. These addresses have been geocoded by two independent agencies: (1) the Georgia Division of Public Health Office of Health Information and Policy (OHIP) and (2) a commercial vendor. Geographic information system (GIS) methods were used to quantify uncertainty in the two sets of geocodes using orthoimagery and tax parcel datasets. Methods We sampled 599 infants and fetuses with birth defects delivered during 1994–2002 with maternal residence in either Fulton or Gwinnett County. Tax parcel datasets were obtained from the tax assessor's offices of Fulton and Gwinnett County. High-resolution orthoimagery for these counties was acquired from the U.S. Geological Survey. For each of the 599 addresses we attempted to locate the tax parcel corresponding to the maternal address. If the tax parcel was identified, the distance and the angle between the geocode and the residence were calculated. We used simulated data to characterize the impact of geocode location error. In each county 5,000 geocodes were generated and assigned their corresponding Census 2000 tract. Each geocode was then displaced at a random angle by a random distance drawn from the distribution of observed geocode location errors. The census tract of the displaced geocode was determined. We repeated this process 5,000 times and report the percentage of geocodes that resolved into incorrect census tracts. Results Median location error was less than 100 meters for both OHIP and commercial vendor geocodes; the distribution of angles appeared uniform. Median location error was approximately 35% larger in Gwinnett (a suburban county) relative to Fulton (a county with urban and suburban areas). Location error occasionally caused the simulated geocodes to be displaced into incorrect census tracts; the median percentage
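
    The displacement simulation described above is easy to reproduce in outline. The Python sketch below displaces simulated geocodes at a random angle by a distance resampled from an observed error distribution; the lognormal error model and all constants are illustrative assumptions, and the census-tract assignment step is only indicated in a comment.

```python
import numpy as np

rng = np.random.default_rng(42)

def displace(x, y, error_sample, n):
    """Displace geocodes at a random angle by a distance resampled from
    an observed location-error distribution (meters)."""
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    d = rng.choice(error_sample, size=n, replace=True)
    return x + d * np.cos(theta), y + d * np.sin(theta)

# Hypothetical observed location errors and simulated geocodes on a grid
observed_errors = rng.lognormal(mean=4.0, sigma=0.8, size=599)  # median ~55 m
x0 = rng.uniform(0.0, 10_000.0, 5_000)
y0 = rng.uniform(0.0, 10_000.0, 5_000)
x1, y1 = displace(x0, y0, observed_errors, n=x0.size)
# In the study, each displaced geocode is then re-assigned to a census
# tract to count how many resolve into an incorrect tract.
```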

  7. Measurement-based analysis of error latency. [in computer operating system

    Science.gov (United States)

    Chillarege, Ram; Iyer, Ravishankar K.

    1987-01-01

    This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.

  8. The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors

    International Nuclear Information System (INIS)

    Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter

    2010-01-01

    Purpose: Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. Methods: 9 head and neck (H and N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (±1 mm in two banks, ±0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. Results: The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H and N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. Conclusions: None of the techniques or criteria tested is sufficiently sensitive, with the population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.
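
    For illustration, a simplified composite dose-difference/DTA check on 1-D profiles might look like the sketch below. A point passes if it meets either the absolute dose criterion or the distance-to-agreement criterion; this is a toy reduction of the 2-D analysis performed with MAPCHECK or EPID data, not the commercial algorithm.

```python
import numpy as np

def composite_pass_rate(measured, calculated, spacing_mm,
                        dose_tol=0.03, dta_mm=3.0):
    """Percentage of points on a 1-D profile passing a composite 3%/3 mm
    check: a point passes on absolute dose difference (relative to the
    calculated maximum) or on distance to agreement (nearest calculated
    point whose dose matches the measurement within the dose tolerance)."""
    m = np.asarray(measured, dtype=float)
    c = np.asarray(calculated, dtype=float)
    x = np.arange(c.size) * spacing_mm
    tol = dose_tol * c.max()
    dose_ok = np.abs(m - c) <= tol
    dta = np.full(m.size, np.inf)
    for i in range(m.size):
        close = np.abs(c - m[i]) <= tol
        if close.any():
            dta[i] = np.min(np.abs(x[close] - x[i]))
    return 100.0 * np.mean(dose_ok | (dta <= dta_mm))

# Toy profiles: calculated dose and a measurement with a 2% scale error
calc = 200.0 * np.sin(np.linspace(0.0, np.pi, 101))
meas = 1.02 * calc + np.random.default_rng(3).normal(0.0, 1.0, 101)
print(composite_pass_rate(meas, calc, spacing_mm=1.0))
```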

  10. Impact and quantification of the sources of error in DNA pooling designs.

    Science.gov (United States)

    Jawaid, A; Sham, P

    2009-01-01

    The analysis of genome-wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome-wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts, using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential to estimate allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.
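
    The adjustment for differential allelic amplification mentioned above is commonly done with a correction factor estimated from heterozygous individuals. A minimal sketch under that assumption follows; the formula shown is the standard k-correction, not necessarily the exact estimator of this paper.

```python
def corrected_pool_frequency(a_signal, b_signal, k_het):
    """Pooled allele-A frequency corrected for differential allelic
    amplification. k_het is the mean A/B signal ratio measured in known
    heterozygous individuals (expected A:B of 1:1)."""
    return a_signal / (a_signal + k_het * b_signal)

# Hypothetical peak heights from a pooled assay
raw = 0.62 / (0.62 + 0.38)                       # naive estimate: 0.62
adj = corrected_pool_frequency(0.62, 0.38, 1.3)  # corrected for k = 1.3
print(raw, adj)
```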

  11. Error and objectivity: cognitive illusions and qualitative research.

    Science.gov (United States)

    Paley, John

    2005-07-01

    Psychological research has shown that cognitive illusions, of which visual illusions are just a special case, are systematic and pervasive, raising epistemological questions about how error in all forms of research can be identified and eliminated. The quantitative sciences make use of statistical techniques for this purpose, but it is not clear what the qualitative equivalent is, particularly in view of widespread scepticism about validity and objectivity. I argue that, in the light of cognitive psychology, the 'error question' cannot be dismissed as a positivist obsession, and that the concepts of truth and objectivity are unavoidable. However, they constitute only a 'minimal realism', which does not necessarily bring a commitment to 'absolute' truth, certainty, correspondence, causation, reductionism, or universal laws in its wake. The assumption that it does reflects a misreading of positivism and, ironically, precipitates a 'crisis of legitimation and representation', as described by constructivist authors.

  12. The approach of Bayesian model indicates media awareness of medical errors

    Science.gov (United States)

    Ravichandran, K.; Arulchelvan, S.

    2016-06-01

    This research study brings out the factors behind the increase in medical malpractice in the Indian subcontinent in the present-day environment, and the impact of television media awareness on it. Increased media reporting of medical malpractice and errors leads hospitals to take corrective action and improve the quality of the medical services they provide. The model of Cultivation Theory can be used to measure the influence of media in creating awareness of medical errors. Patients' perceptions of various errors rendered by the medical industry in different parts of India were taken up for this study. A Bayesian method was used for data analysis; it gives absolute values that indicate satisfaction of the recommended values. The study also considers the impact of the family doctor maintaining a family's medical records online in reducing medical malpractice, which underlines the importance of service quality in the medical industry through ICT.

  13. Variance computations for functionals of absolute risk estimates.

    Science.gov (United States)

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  14. Measuring the Accuracy of Simple Evolving Connectionist System with Varying Distance Formulas

    Science.gov (United States)

    Al-Khowarizmi; Sitompul, O. S.; Suherman; Nababan, E. B.

    2017-12-01

    Simple Evolving Connectionist System (SECoS) is a minimal implementation of Evolving Connectionist Systems (ECoS) in artificial neural networks. The three-layer network architecture of the SECoS can be built based on the given input. In this study, the activation value for the SECoS learning process, which is commonly calculated using the normalized Hamming distance, is also calculated using the normalized Manhattan distance and the normalized Euclidean distance in order to compare the smallest error value and best learning rate obtained. The accuracy of the measurements resulting from the three distance formulas is calculated using the mean absolute percentage error. In the training phase, with several parameters such as sensitivity threshold, error threshold, first learning rate, and second learning rate, it was found that the normalized Euclidean distance is more accurate than both the normalized Hamming distance and the normalized Manhattan distance. In the case of beta fibrinogen gene -455 G/A polymorphism patients used as training data, the highest mean absolute percentage error value is obtained with the normalized Manhattan distance compared to the normalized Euclidean distance and the normalized Hamming distance. However, the differences are so small that it can be concluded that the three distance formulas used in SECoS do not have a significant effect on the accuracy of the training results.
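
    The three distances can be written compactly. The sketch below assumes inputs already scaled to [0, 1] and uses one common normalization for each distance (the exact normalizations in the ECoS literature vary), with the activation taken as 1 minus the distance.

```python
import numpy as np

def norm_hamming(a, b):
    """Mean absolute difference over the vector length."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sum(np.abs(a - b)) / a.size

def norm_manhattan(a, b):
    """Absolute difference normalized by the total magnitude."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sum(np.abs(a - b)) / np.sum(np.abs(a) + np.abs(b))

def norm_euclidean(a, b):
    """Euclidean distance scaled by sqrt of the vector length."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sqrt(np.sum((a - b) ** 2)) / np.sqrt(a.size)

# Example input vector x and connection-weight vector w
x, w = [0.2, 0.9, 0.4], [0.3, 0.7, 0.4]
for d in (norm_hamming, norm_manhattan, norm_euclidean):
    print(d.__name__, 1.0 - d(x, w))  # activation = 1 - distance
```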

  15. Absolute determination of the deuterium content of heavy water, measurement of absolute density

    International Nuclear Information System (INIS)

    Ceccaldi, M.; Riedinger, M.; Menache, M.

    1975-01-01

    The absolute density of two heavy water samples rich in deuterium (with a grade higher than 99.9%) was determined with the hydrostatic method. The exact isotopic composition of this water (hydrogen and oxygen isotopes) was very carefully studied. A theoretical estimate enabled us to obtain the absolute density value of isotopically pure D₂¹⁶O. This value was found to be 1104.750 kg·m⁻³ at t₆₈ = 22.3 °C and under a pressure of one atmosphere. (orig.) [de]

  16. Assessment of the possibility of using data mining methods to predict sorption isotherms of selected organic compounds on activated carbon

    Directory of Open Access Journals (Sweden)

    Dąbek Lidia

    2017-01-01

    Full Text Available The paper analyses the use of four data mining methods (Support Vector Machines, Cascade Neural Networks, Random Forests and Boosted Trees) to predict sorption on activated carbons. The input data for the statistical models included the activated carbon parameters, organic substances and equilibrium concentrations in the solution. The assessment of the predictive abilities of the developed models was made with the use of the mean absolute error (MAE), mean absolute percentage error (MAPE), and root mean squared error (RMSE). The computations proved that the data mining methods considered in the study can be applied to predict the sorption of selected organic compounds on activated carbon. The lowest sorption prediction errors were obtained with the Cascade Neural Networks method (MAE = 1.23 g/g; MAPE = 7.90% and RMSE = 1.81 g/g), while the highest error values were produced by the Boosted Trees method (MAE = 14.31 g/g; MAPE = 39.43% and RMSE = 27.76 g/g).

  17. Absolute Position Sensing Based on a Robust Differential Capacitive Sensor with a Grounded Shield Window

    Directory of Open Access Journals (Sweden)

    Yang Bai

    2016-05-01

    Full Text Available A simple differential capacitive sensor is presented in this paper to measure the absolute positions of length measuring systems. By utilizing a shield window inside the differential capacitor, the measurement range and linearity range of the sensor can reach several millimeters. What is more interesting is that this differential capacitive sensor is only sensitive to movement along one translational degree of freedom (DOF), and immune to vibration along the other two translational DOFs. In the experiment, we used a novel circuit based on an AC capacitance bridge to directly measure the differential capacitance value. The experimental result shows that this differential capacitive sensor has a sensitivity of 2 × 10⁻⁴ pF/μm with 0.08 μm resolution. The measurement range of this differential capacitive sensor is 6 mm, and the linearity error is less than 0.01% over the whole absolute position measurement range.

  18. Comparing different error conditions in filmdosemeter evaluation

    International Nuclear Information System (INIS)

    Roed, H.; Figel, M.

    2005-01-01

    Full text: In the evaluation of a film used as a personal dosemeter it may be necessary to mark the dosemeters when possible error conditions are recognized. These are errors that might influence the ability to make a correct evaluation of the dose value, and include broken, contaminated or improperly handled dosemeters. In this project we have examined how two services (NIRH, GSF), from two different countries within the EU, mark their dosemeters. The services differ greatly in size, customer composition and issuing period, but both use film as their primary dosemeter. The possible error conditions examined here include dosemeters being contaminated, dosemeters exposed to moisture or light, and missing filters in the dosemeter badges, among others. The data were collected for the year 2003, in which NIRH evaluated approximately 50 thousand and GSF about one million film dosemeters. For each error condition the percentage of film dosemeters affected is calculated, as well as the distribution among different employee categories, i.e. industry, medicine, research, veterinary and other. For some error conditions we see a common pattern, while for others there is a large discrepancy between the services. The differences and possible explanations are discussed. The results of the investigation may motivate further comparisons between the different monitoring services in Europe. (author)

  19. Prevalence of Pre-Analytical Errors in Clinical Chemistry Diagnostic Labs in Sulaimani City of Iraqi Kurdistan.

    Science.gov (United States)

    Najat, Dereen

    2017-01-01

    Laboratory testing is roughly divided into three phases: a pre-analytical phase, an analytical phase and a post-analytical phase. Most analytical errors have been attributed to the analytical phase. However, recent studies have shown that up to 70% of analytical errors reflect the pre-analytical phase. The pre-analytical phase comprises all processes from the time a laboratory request is made by a physician until the specimen is analyzed at the lab. Generally, the pre-analytical phase includes patient preparation, specimen transportation, specimen collection and storage. In the present study, we report the first comprehensive assessment of the frequency and types of pre-analytical errors at the Sulaimani diagnostic labs in Iraqi Kurdistan. Over 2 months, 5500 venous blood samples were observed in 10 public diagnostic labs of Sulaimani City. The percentages of rejected samples and types of sample inappropriateness were evaluated. The percentage of each of the following pre-analytical errors was recorded: delay in sample transportation, clotted samples, expired reagents, hemolyzed samples, samples not on ice, incorrect sample identification, insufficient sample, tube broken in centrifuge, request procedure errors, sample mix-ups, communication conflicts, misinterpreted orders, lipemic samples, contaminated samples and missed physician's request orders. The difference between the relative frequencies of errors observed in the hospitals considered was tested using a proportional Z test. In particular, the survey aimed to discover whether analytical errors were recorded and to examine the types of platforms used in the selected diagnostic labs. The analysis showed a high prevalence of improper sample handling during the pre-analytical phase. The percentage of inappropriate samples was as high as 39%. The major reasons for rejection were hemolyzed samples (9%), incorrect sample identification (8%) and clotted samples (6%). Most quality control schemes at Sulaimani

  1. The absolute environmental performance of buildings

    DEFF Research Database (Denmark)

    Brejnrod, Kathrine Nykjær; Kalbar, Pradip; Petersen, Steffen

    2017-01-01

    Our paper presents a novel approach for absolute sustainability assessment of a building's environmental performance. It is demonstrated how the absolute sustainable share of the earth's carrying capacity for a specific building type can be estimated using carrying-capacity-based normalization factors. A building is considered absolutely sustainable if its annual environmental burden is less than its share of the earth's environmental carrying capacity. Two case buildings – a standard house and an upcycled single-family house located in Denmark – were assessed according to this approach, and both were found to exceed the target values for three (almost four) of the eleven impact categories included in the study. The worst-case excess was for the case building representing prevalent Danish building practices, which utilized 1563% of the Climate Change carrying capacity. Four paths to reach absolute

  2. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    the data plotted on the CEG and PEG produced risk estimates that were more granular and reflective of a continuously increasing risk scale. The SEG is a modern metric for clinical risk assessments of BG monitor errors that assigns a unique risk score to each monitor data point when compared to a reference value. The SEG allows the clinical accuracy of a BG monitor to be portrayed in many ways, including as the percentages of data points falling into custom-defined risk zones. For modeled data the SEG, compared with the CEG and PEG, allows greater precision for quantifying risk, especially when the risks are low. This tool will be useful to allow regulators and manufacturers to monitor and evaluate glucose monitor performance in their surveillance programs. © 2014 Diabetes Technology Society.

  3. Absolute Summ

    Science.gov (United States)

    Phillips, Alfred, Jr.

    Summ means the entirety of the multiverse. It seems clear, from the inflation theories of A. Guth and others, that the creation of many universes is plausible. We argue that Absolute cosmological ideas, not unlike those of I. Newton, may be consistent with dynamic multiverse creations. As suggested in W. Heisenberg's uncertainty principle, and with the Anthropic Principle defended by S. Hawking, et al., human consciousness, buttressed by findings of neuroscience, may have to be considered in our models. Predictability, as A. Einstein realized with Invariants and General Relativity, may be required for new ideas to be part of physics. We present here a two-postulate model geared to an Absolute Summ. The seedbed of this work is part of Akhnaton's philosophy (see S. Freud, Moses and Monotheism). Most important, however, is that the structure of human consciousness, manifest in Kenya's Rift Valley 200,000 years ago as Homo sapiens, who were the culmination of the six-million-year co-creation process of Hominins and Nature in Africa, allows us to do the physics that we do.

  4. Growth models for morphological traits of sunn hemp

    Directory of Open Access Journals (Sweden)

    Cláudia Marques de Bem

    2017-10-01

    Full Text Available The objective of the present study was to fit Gompertz and Logistic nonlinear models to describe morphological traits of sunn hemp. Two uniformity trials were conducted and the crops received identical treatment in the entire experimental area. Sunn hemp seeds were sown in rows 0.5 m apart with a plant density of 20 plants per row meter in a usable area of 52 m × 50 m. The following morphological traits were evaluated: plant height (PH), number of leaves (NL), stem diameter (SD), and root length (RL). These traits were assessed daily during two sowing periods: seeds were sown on October 22, 2014 (first period) and December 3, 2014 (second period). Four plants were randomly collected daily, beginning 7 days after sowing in the first period and 13 days after sowing in the second period, totaling 94 and 76 evaluation days, respectively. The Gompertz model was fitted with the equation y_i = a·exp(-exp(b - c·x_i)) and the Logistic model with the equation y_i = a/(1 + exp(-(b + c·x_i))). The inflection points of the Gompertz and Logistic models were calculated and the goodness of fit was quantified using the adjusted coefficient of determination, Akaike information criterion, standard deviation of residuals, mean absolute deviation, mean absolute percentage error, and mean prediction error. Differences were observed between the Gompertz and Logistic models and between the experimental periods in the parameter estimates for all morphological traits measured. Satisfactory growth curve fittings were achieved for plant height, number of leaves, and stem diameter in both models according to the evaluation criteria: coefficient of determination (R²), Akaike information criterion (AIC), standard deviation of residuals (SDR), mean absolute deviation (MAD), mean absolute percentage error (MAPE), and mean prediction error (MPE).
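
    Fitting these two models is a routine nonlinear least-squares task. The sketch below fits the reconstructed Gompertz and Logistic equations with SciPy on synthetic plant-height data; the data, starting values and parameter values are illustrative, not the study's.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(x, a, b, c):
    """Gompertz growth: y = a * exp(-exp(b - c*x)); a is the asymptote."""
    return a * np.exp(-np.exp(b - c * x))

def logistic(x, a, b, c):
    """Logistic growth: y = a / (1 + exp(-(b + c*x)))."""
    return a / (1.0 + np.exp(-(b + c * x)))

# Synthetic plant-height series (cm) over days after sowing
days = np.arange(7.0, 95.0, 7.0)
rng = np.random.default_rng(1)
height = gompertz(days, 180.0, 2.0, 0.06) + rng.normal(0.0, 3.0, days.size)

popt_g, _ = curve_fit(gompertz, days, height, p0=[180.0, 2.0, 0.05], maxfev=10000)
popt_l, _ = curve_fit(logistic, days, height, p0=[180.0, -4.0, 0.08], maxfev=10000)
print("Gompertz a, b, c:", popt_g)  # inflection at x = b/c, y = a/e
print("Logistic a, b, c:", popt_l)  # inflection at x = -b/c, y = a/2
```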

  5. Medical Errors in Cyprus: The 2005 Eurobarometer Survey

    Directory of Open Access Journals (Sweden)

    Andreas Pavlakis

    2012-01-01

    Full Text Available Background: Medical errors have been highlighted in recent years by different agencies, scientific bodies and research teams alike. We sought to explore the issue of medical errors in Cyprus using data from the Eurobarometer survey. Methods: Data from the special Eurobarometer survey conducted in 2005 across all European Union countries (EU-25) and the acceding countries were obtained from the corresponding EU office. Statistical analyses including logistic regression models were performed using SPSS. Results: A total of 502 individuals participated in the Cyprus survey. About 90% reported that they had often or sometimes heard about medical errors, while 22% reported that they or a family member had suffered a serious medical error in a local hospital. In addition, 9.4% reported a serious problem from a prescribed medicine. We also found statistically significant differences across ages and gender and between rural and urban residents. Finally, using multivariable-adjusted logistic regression models, we found that residents of rural areas were more likely to have suffered a serious medical error in a local hospital or from a prescribed medicine. Conclusion: Our study shows that the vast majority of residents in Cyprus, in parallel with other Europeans, worry about medical errors, and a significant percentage report having suffered a serious medical error at a local hospital or from a prescribed medicine. The results of our study could help the medical community in Cyprus and society at large to enhance its vigilance with respect to medical errors in order to improve medical care.

  6. Absolute flux scale for radioastronomy

    International Nuclear Information System (INIS)

    Ivanov, V.P.; Stankevich, K.S.

    1986-01-01

    The authors propose and provide support for a new absolute flux scale for radio astronomy, which is not encumbered with the inadequacies of the previous scales. In constructing it, the method of relative spectra was used (a powerful tool for choosing reference spectra). A review is given of previous flux scales. The authors compare the AIS scale with the scale they propose. Both scales are based on absolute measurements by the "artificial moon" method, and they are practically coincident in the range from 0.96 to 6 GHz. At frequencies above 6 GHz and below 0.96 GHz, the AIS scale is overestimated because of incorrect extrapolation of the spectra of the primary and secondary standards. The major results which have emerged from this review of absolute scales in radio astronomy are summarized.

  7. A global algorithm for estimating Absolute Salinity

    Science.gov (United States)

    McDougall, T. J.; Jackett, D. R.; Millero, F. J.; Pawlowicz, R.; Barker, P. M.

    2012-12-01

    The International Thermodynamic Equation of Seawater - 2010 has defined the thermodynamic properties of seawater in terms of a new salinity variable, Absolute Salinity, which takes into account the spatial variation of the composition of seawater. Absolute Salinity more accurately reflects the effects of the dissolved material in seawater on the thermodynamic properties (particularly density) than does Practical Salinity. When a seawater sample has standard composition (i.e. the ratios of the constituents of sea salt are the same as those of surface water of the North Atlantic), Practical Salinity can be used to accurately evaluate the thermodynamic properties of seawater. When seawater is not of standard composition, Practical Salinity alone is not sufficient and the Absolute Salinity Anomaly needs to be estimated; this anomaly is as large as 0.025 g kg-1 in the northernmost North Pacific. Here we provide an algorithm for estimating Absolute Salinity Anomaly for any location (x, y, p) in the world ocean. To develop this algorithm, we used the Absolute Salinity Anomaly that is found by comparing the density calculated from Practical Salinity to the density measured in the laboratory. These estimates of Absolute Salinity Anomaly however are limited to the number of available observations (namely 811). In order to provide a practical method that can be used at any location in the world ocean, we take advantage of approximate relationships between Absolute Salinity Anomaly and silicate concentrations (which are available globally).

  8. Relative and Absolute Reliability of Timed Up and Go Test in Community Dwelling Older Adult and Healthy Young People

    Directory of Open Access Journals (Sweden)

    Farhad Azadi

    2014-01-01

    Full Text Available Objectives: Relative and absolute reliability are psychometric properties of a test on which many clinical decisions are based. In many cases, only relative reliability is taken into consideration, although absolute reliability is also very important. Methods & Materials: Eleven community-dwelling older adults aged 65 years and older (69.64±3.58) and 20 healthy young adults aged 20 to 35 years (28.80±4.15) were evaluated twice, with an interval of 2 to 5 days, using three versions of the Timed Up and Go test. Results: The Intra-class Correlation Coefficient (ICC) increases when a non-homogeneous study population is stratified; this coefficient is greater in elderly people than in young people and is reduced with a secondary task. In this study, absolute reliability indices computed from different data sources and equations led to more or less similar results. In general, in test-retest situations, scores of elderly people must change more than those of young people to be interpreted as a real change rather than random variation. The random error contribution is slightly greater in the elderly than in the young and increases with a secondary task. It seems that heterogeneity moderates the absolute reliability indices. Conclusion: In relative reliability studies, researchers and clinicians should pay attention to factors such as the homogeneity of the population. Moreover, absolute reliability alongside relative reliability is necessary for clinical decision making.

  9. Mathematical model for body fat percentage of children with cerebral palsy

    Directory of Open Access Journals (Sweden)

    Eduardo Borba Neves

    Full Text Available Abstract Introduction The aim of this study was to develop a specific mathematical model to estimate the body fat percentage (BF%) of children with cerebral palsy, based on a Brazilian population of patients with this condition. Method This is a descriptive cross-sectional study. The study included 63 Caucasian children with cerebral palsy, both males and females, aged between three and ten years old. Participants were assessed for functional motor impairment using the Gross Motor Function Classification System (GMFCS), dual energy x-ray absorptiometry (DXA) and skinfold thickness. Total body mass (TBM) and skinfold thicknesses from the triceps (Tr), biceps (Bi), suprailiac (Si), medium thigh (Th), abdominal (Ab), medial calf (Ca) and subscapular (Se) sites were collected. Fat mass (FM) was estimated by dual energy x-ray absorptiometry (gold standard). Results The model was built from multivariate linear regression; FM was set as the dependent variable, and the other anthropometric variables, age and sex were set as independent variables. The final model was established as BF% = ((0.433×TBM + 0.063×Th + 0.167×Si − 6.768) ÷ TBM) × 100; the R² value was 0.950, the adjusted R² was 0.948 and the standard error of the estimate was 1.039 kg. Conclusion This method was shown to be valid for estimating the body fat percentage of children with cerebral palsy. Also, the measurement of skinfolds on both sides of the body showed good results in this modelling.
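
    For illustration, the published equation can be wrapped as a small function; the input units (kg for total body mass, mm for skinfolds) and the example values are assumptions for demonstration only.

    ```python
    def body_fat_percentage(tbm_kg: float, thigh_mm: float, suprailiac_mm: float) -> float:
        """BF% model from the abstract: fat mass estimated from total body mass (TBM)
        and the medium-thigh (Th) and suprailiac (Si) skinfolds, as a percentage of TBM."""
        fat_mass = 0.433 * tbm_kg + 0.063 * thigh_mm + 0.167 * suprailiac_mm - 6.768
        return fat_mass / tbm_kg * 100.0

    # Hypothetical example: 25 kg child, 12 mm thigh and 8 mm suprailiac skinfolds.
    print(f"BF% = {body_fat_percentage(25.0, 12.0, 8.0):.1f}")  # -> BF% = 24.6
    ```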

  10. EIT Imaging of admittivities with a D-bar method and spatial prior: experimental results for absolute and difference imaging.

    Science.gov (United States)

    Hamilton, S J

    2017-05-22

    Electrical impedance tomography (EIT) is an emerging imaging modality that uses harmless electrical measurements taken on electrodes at a body's surface to recover information about the internal electrical conductivity and/or permittivity. The image reconstruction task of EIT is a highly nonlinear inverse problem that is sensitive to noise and modeling errors, making reconstruction challenging. D-bar methods solve the nonlinear problem directly, bypassing the need for detailed and time-intensive forward models, to provide absolute (static) as well as time-difference EIT images. Coupling the D-bar methodology with the inclusion of high-confidence a priori data results in a noise-robust regularized image reconstruction method. In this work, the a priori D-bar method for complex admittivities is demonstrated effective on experimental tank data for absolute imaging for the first time. Additionally, the method is adjusted for, and tested on, time-difference imaging scenarios. The ability of the method to be used for conductivity, permittivity, absolute as well as time-difference imaging provides the user with great flexibility without a high computational cost.

  11. Errors in the Determination of Actions [Errores en la determinación de acciones]

    Directory of Open Access Journals (Sweden)

    González Valle, E.

    1979-12-01

    Full Text Available This article analyses the causes of building pathology due to errors in the determination of actions in the following types of construction: flats, industrial buildings, retaining walls, bridges, and tanks and silos. Based on the report by Bureau Securitas and Secotec, which investigated 2,979 accidents and events, these errors would be the primary cause in fewer than 10% of the cases, but this percentage would be considerably higher if their effects were also evaluated.


  12. Dimensional Error in Rapid Prototyping with Open Source Software and Low-cost 3D-printer.

    Science.gov (United States)

    Rendón-Medina, Marco A; Andrade-Delgado, Laura; Telich-Tarriba, Jose E; Fuente-Del-Campo, Antonio; Altamirano-Arcos, Carlos A

    2018-01-01

    Rapid prototyping models (RPMs) have been extensively used in craniofacial and maxillofacial surgery, especially in areas such as orthognathic surgery, posttraumatic or oncological reconstructions, and implantology. Economic limitations are higher in developing countries such as Mexico, where resources dedicated to health care are limited, therefore limiting the use of RPM to few selected centers. This article aims to determine the dimensional error of a low-cost fused deposition modeling 3D printer (Tronxy P802MA, Shenzhen, Tronxy Technology Co) with open source software. An ordinary dry human mandible was scanned with a computed tomography device. The data were processed with open software to build a rapid prototype with a fused deposition machine. Linear measurements were performed to find the mean absolute and relative difference. The mean absolute and relative differences were 0.65 mm and 1.96%, respectively (P = 0.96). Low-cost FDM machines and open source software are excellent options to manufacture RPMs, with the benefit of low cost and a relative error similar to that of other more expensive technologies.

  13. Dimensional Error in Rapid Prototyping with Open Source Software and Low-cost 3D-printer

    Directory of Open Access Journals (Sweden)

    Marco A. Rendón-Medina

    2018-01-01

    Full Text Available Summary: Rapid prototyping models (RPMs) have been extensively used in craniofacial and maxillofacial surgery, especially in areas such as orthognathic surgery, posttraumatic or oncological reconstructions, and implantology. Economic limitations are higher in developing countries such as Mexico, where resources dedicated to health care are limited, therefore limiting the use of RPM to few selected centers. This article aims to determine the dimensional error of a low-cost fused deposition modeling 3D printer (Tronxy P802MA, Shenzhen, Tronxy Technology Co) with open source software. An ordinary dry human mandible was scanned with a computed tomography device. The data were processed with open software to build a rapid prototype with a fused deposition machine. Linear measurements were performed to find the mean absolute and relative difference. The mean absolute and relative differences were 0.65 mm and 1.96%, respectively (P = 0.96). Low-cost FDM machines and open source software are excellent options to manufacture RPMs, with the benefit of low cost and a relative error similar to that of other more expensive technologies.

  14. A global algorithm for estimating Absolute Salinity

    Directory of Open Access Journals (Sweden)

    T. J. McDougall

    2012-12-01

    Full Text Available The International Thermodynamic Equation of Seawater – 2010 has defined the thermodynamic properties of seawater in terms of a new salinity variable, Absolute Salinity, which takes into account the spatial variation of the composition of seawater. Absolute Salinity more accurately reflects the effects of the dissolved material in seawater on the thermodynamic properties (particularly density) than does Practical Salinity.

    When a seawater sample has standard composition (i.e. the ratios of the constituents of sea salt are the same as those of surface water of the North Atlantic), Practical Salinity can be used to accurately evaluate the thermodynamic properties of seawater. When seawater is not of standard composition, Practical Salinity alone is not sufficient and the Absolute Salinity Anomaly needs to be estimated; this anomaly is as large as 0.025 g kg−1 in the northernmost North Pacific. Here we provide an algorithm for estimating Absolute Salinity Anomaly for any location (x, y, p) in the world ocean.

    To develop this algorithm, we used the Absolute Salinity Anomaly that is found by comparing the density calculated from Practical Salinity to the density measured in the laboratory. These estimates of Absolute Salinity Anomaly however are limited to the number of available observations (namely 811). In order to provide a practical method that can be used at any location in the world ocean, we take advantage of approximate relationships between Absolute Salinity Anomaly and silicate concentrations (which are available globally).

  15. An Analysis of Students Error In Solving PISA 2012 And Its Scaffolding

    Directory of Open Access Journals (Sweden)

    Yurizka Melia Sari

    2017-08-01

    Full Text Available Based on the PISA survey in 2012, Indonesia placed 64th out of 65 participating countries. The survey suggests that students' abilities in reasoning, spatial orientation, and problem solving are lower compared with other participating countries, especially in South East Asia. Nevertheless, the PISA results do not clearly show where students fail in solving PISA problems, such as the location and the types of students' errors. Therefore, analyzing students' errors in solving PISA problems is an essential countermeasure to help students solve mathematics problems and to develop scaffolding. Based on the data analysis, it was found that the subjects made 5 types of error: reading errors, comprehension errors, transformation errors, process skill errors, and encoding errors. The most common mistake the subjects made was encoding errors, with a percentage of 26%, while reading errors were the fewest, at only 12%. The scaffolding given consisted of explaining the problem carefully, making a summary of new words and finding their meanings, restructuring problem-solving strategies, and reviewing the results of the solution.

  16. Modeling the Error of the Medtronic Paradigm Veo Enlite Glucose Sensor.

    Science.gov (United States)

    Biagi, Lyvia; Ramkissoon, Charrise M; Facchinetti, Andrea; Leal, Yenny; Vehi, Josep

    2017-06-12

    Continuous glucose monitors (CGMs) are prone to inaccuracy due to time lags, sensor drift, calibration errors, and measurement noise. The aim of this study is to derive the model of the error of the second-generation Medtronic Paradigm Veo Enlite (ENL) sensor and compare it with the Dexcom SEVEN PLUS (7P), G4 PLATINUM (G4P), and advanced G4 for Artificial Pancreas studies (G4AP) systems. An enhanced methodology based on a previously employed technique was utilized to dissect the sensor error into several components. The dataset used included 37 inpatient sessions in 10 subjects with type 1 diabetes (T1D), in which CGMs were worn in parallel and blood glucose (BG) samples were analyzed every 15 ± 5 min. Calibration error and sensor drift of the ENL sensor were best described by a linear relationship related to the gain and offset. The mean time lag estimated by the model is 9.4 ± 6.5 min. The overall average mean absolute relative difference (MARD) of the ENL sensor was 11.68 ± 5.07%. Calibration error had the highest contribution to total error in the ENL sensor. This was also reported in the 7P, G4P, and G4AP. The model of the ENL sensor error will be useful to test the in silico performance of CGM-based applications, i.e., the artificial pancreas, employing this kind of sensor.
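
    As a minimal illustration of the headline accuracy metric, the snippet below computes MARD from paired CGM and reference readings; the sample values are invented and the formula is the standard MARD definition, not code from the study.

    ```python
    import numpy as np

    def mard(cgm: np.ndarray, ref_bg: np.ndarray) -> float:
        """Mean absolute relative difference (%) between paired CGM readings
        and reference blood glucose samples."""
        return 100.0 * np.mean(np.abs(cgm - ref_bg) / ref_bg)

    # Hypothetical paired readings (mg/dL) from one monitoring session.
    ref = np.array([90.0, 130.0, 180.0, 110.0, 70.0])
    cgm = np.array([84.0, 142.0, 171.0, 118.0, 78.0])
    print(f"MARD = {mard(cgm, ref):.1f}%")
    ```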

  17. NDE errors and their propagation in sizing and growth estimates

    International Nuclear Information System (INIS)

    Horn, D.; Obrutsky, L.; Lakhan, R.

    2009-01-01

    The accuracy attributed to eddy current flaw sizing determines the amount of conservativism required in setting tube-plugging limits. Several sources of error contribute to the uncertainty of the measurements, and the way in which these errors propagate and interact affects the overall accuracy of the flaw size and flaw growth estimates. An example of this calculation is the determination of an upper limit on flaw growth over one operating period, based on the difference between two measurements. Signal-to-signal comparison involves a variety of human, instrumental, and environmental error sources; of these, some propagate additively and some multiplicatively. In a difference calculation, specific errors in the first measurement may be correlated with the corresponding errors in the second; others may be independent. Each of the error sources needs to be identified and quantified individually, as does its distribution in the field data. A mathematical framework for the propagation of the errors can then be used to assess the sensitivity of the overall uncertainty to each individual error component. This paper quantifies error sources affecting eddy current sizing estimates and presents analytical expressions developed for their effect on depth estimates. A simple case study is used to model the analysis process. For each error source, the distribution of the field data was assessed and propagated through the analytical expressions. While the sizing error obtained was consistent with earlier estimates and with deviations from ultrasonic depth measurements, the error on growth was calculated as significantly smaller than that obtained assuming uncorrelated errors. An interesting result of the sensitivity analysis in the present case study is the quantification of the error reduction available from post-measurement compensation of magnetite effects. With the absolute and difference error equations, variance-covariance matrices, and partial derivatives developed in
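
    To see why correlated errors shrink the growth estimate's uncertainty, consider the standard propagation formula for the difference of two measurements; this generic sketch is not the paper's full variance-covariance treatment.

    ```python
    import math

    # For growth d = m2 - m1 with per-measurement standard deviations s1, s2
    # and correlation rho: Var(d) = s1^2 + s2^2 - 2*rho*s1*s2.
    def growth_error(s1: float, s2: float, rho: float) -> float:
        return math.sqrt(s1**2 + s2**2 - 2 * rho * s1 * s2)

    s1 = s2 = 0.10  # hypothetical per-measurement depth errors (fraction of wall)
    print(f"uncorrelated errors:        {growth_error(s1, s2, 0.0):.3f}")
    print(f"correlated errors (rho=.8): {growth_error(s1, s2, 0.8):.3f}")  # much smaller
    ```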

  18. The Standard Error of a Proportion for Different Scores and Test Length.

    Directory of Open Access Journals (Sweden)

    David A. Walker

    2005-06-01

    Full Text Available This paper examines Smith's (2003) proposed standard error of a proportion index associated with the idea of reliability as sufficiency of information. A detailed table indexing all of the standard error values affiliated with assessments that range from 5 to 100 items, where students scored as low as 50% correct and 50% incorrect to as high as 95% correct and 5% incorrect, calculated in increments of 1 percentage point, is presented, along with distributional qualities. Examples using this measure for classroom teachers and higher education instructors of assessment are provided.
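
    Assuming the index reduces to the classical binomial standard error of a proportion, sqrt(p(1−p)/n) (an assumption here, since Smith (2003) defines the index in his own terms), a few cells of such a table can be reproduced as follows.

    ```python
    import math

    def se_proportion(p_correct: float, n_items: int) -> float:
        # Classical binomial standard error of a proportion.
        return math.sqrt(p_correct * (1.0 - p_correct) / n_items)

    # Tabulate the SE for a few test lengths and percent-correct scores.
    for n in (5, 25, 50, 100):
        row = ", ".join(f"p={p:.2f}: {se_proportion(p, n):.3f}" for p in (0.50, 0.75, 0.95))
        print(f"n={n:>3} -> {row}")
    ```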

  19. Compensating additional optical power in the central zone of a multifocal contact lens for minimization of the shrinkage error of the shell mold in the injection molding process.

    Science.gov (United States)

    Vu, Lien T; Chen, Chao-Chang A; Lee, Chia-Cheng; Yu, Chia-Wei

    2018-04-20

    This study aims to develop a compensating method to minimize the shrinkage error of the shell mold (SM) in the injection molding (IM) process to obtain uniform optical power in the central optical zone of soft, axially symmetric multifocal contact lenses (CLs). The Z-shrinkage error along the Z (axial) axis of the anterior SM, corresponding to the anterior surface of a dry contact lens in the IM process, can be minimized by optimizing IM process parameters and then by compensating with additional (Add) power in the central zone of the original lens design. First, the shrinkage error is minimized by optimizing three levels of four IM parameters, including mold temperature, injection velocity, packing pressure, and cooling time, in 18 IM simulations based on an orthogonal array L18 (2¹ × 3⁴). Then, based on the Z-shrinkage error from the IM simulation, three new contact lens designs are obtained by increasing the Add power in the central zone of the original multifocal CL design to compensate for the optical power errors. Results obtained from the IM process simulations and the optical simulations show that the new CL design with a 0.1 D increase in Add power has the closest shrinkage profile to the original anterior SM profile, with a 55% reduction in absolute Z-shrinkage error and more uniform power in the central zone than in the other two cases. Moreover, actual IM experiments of SMs for casting soft multifocal CLs have been performed. The final product of wet CLs has been completed for the original design and the new design. Results of the optical performance have verified the improvement of the compensated design of CLs. The feasibility of this compensating method has been proven based on the measurement results of the produced soft multifocal CLs of the new design. Results of this study can be further applied to predict or compensate for the total optical power errors of soft multifocal CLs.

  20. [Prediction of schistosomiasis infection rates of population based on ARIMA-NARNN model].

    Science.gov (United States)

    Ke-Wei, Wang; Yu, Wu; Jin-Ping, Li; Yu-Yu, Jiang

    2016-07-12

    To explore the performance of the autoregressive integrated moving average model-nonlinear auto-regressive neural network (ARIMA-NARNN) model in predicting schistosomiasis infection rates of a population. The ARIMA model, NARNN model and ARIMA-NARNN model were established based on monthly schistosomiasis infection rates from January 2005 to February 2015 in Jiangsu Province, China. The fitting and prediction performances of the three models were compared. Compared to the ARIMA model and NARNN model, the mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) of the ARIMA-NARNN model were the least, with values of 0.0111, 0.0900 and 0.2824, respectively. The ARIMA-NARNN model could effectively fit and predict schistosomiasis infection rates of a population, which might have a great application value for the prevention and control of schistosomiasis.
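
    A minimal sketch of the hybrid idea (fit a linear ARIMA first, then model its residuals with a small neural network) is shown below on an invented seasonal series, using statsmodels and scikit-learn in place of the paper's implementation.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from sklearn.neural_network import MLPRegressor

    # Hypothetical monthly infection-rate series with annual seasonality.
    rng = np.random.default_rng(0)
    y = 1.0 + 0.3 * np.sin(np.arange(120) * 2 * np.pi / 12) + rng.normal(0, 0.05, 120)

    # Stage 1: linear structure captured by ARIMA.
    arima = ARIMA(y, order=(1, 0, 1)).fit()
    resid = arima.resid

    # Stage 2: nonlinear autoregression on the residuals (the NARNN part).
    lags = 3
    X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
    nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    nn.fit(X, resid[lags:])

    # Hybrid fit = ARIMA fitted values + network-predicted residuals.
    hybrid = arima.fittedvalues[lags:] + nn.predict(X)
    mape = 100 * np.mean(np.abs((y[lags:] - hybrid) / y[lags:]))
    print(f"hybrid in-sample MAPE = {mape:.2f}%")
    ```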

  1. PREDICTED PERCENTAGE DISSATISFIED (PPD) MODEL ...

    African Journals Online (AJOL)

    HOD

    their low power requirements, are relatively cheap and are environment friendly. ... PREDICTED PERCENTAGE DISSATISFIED MODEL EVALUATION OF EVAPORATIVE COOLING ... The performance of direct evaporative coolers is a.

  2. Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.

    Science.gov (United States)

    de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo

    2018-03-01

    Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal handgrip and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables, and their accuracy was determined using the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to changing from the training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model for the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among elite level performances.
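
    The comparison can be sketched generically: fit a linear model and a small feed-forward network to invented predictor/start-time data and score both by MAPE; none of the variables, sizes, or settings below are the study's.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    # Hypothetical kinematic/kinetic predictors and 5 m start times (s).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(80, 4))
    y = 2.5 + 0.1 * X[:, 0] - 0.05 * X[:, 1] ** 2 + rng.normal(0, 0.02, 80)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

    def mape(y_true, y_pred):
        return 100 * np.mean(np.abs((y_true - y_pred) / y_true))

    for name, model in [("linear", LinearRegression()),
                        ("ANN", MLPRegressor(hidden_layer_sizes=(10,),
                                             max_iter=5000, random_state=1))]:
        pred = model.fit(X_tr, y_tr).predict(X_te)
        print(f"{name}: validation MAPE = {mape(y_te, pred):.2f}%")
    ```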

  3. Identification and Assessment of Human Errors in Postgraduate Endodontic Students of Kerman University of Medical Sciences by Using the SHERPA Method

    Directory of Open Access Journals (Sweden)

    Saman Dastaran

    2016-03-01

    Full Text Available Introduction: Human errors are the cause of many accidents, including industrial and medical ones; therefore, finding an approach for identifying and reducing them is very important. Since no study had been done on human errors in the dental field, this study aimed to identify and assess human errors in postgraduate endodontic students of Kerman University of Medical Sciences by using the SHERPA method. Methods: This cross-sectional study was performed during 2014. Data were collected by task observation and by interviewing postgraduate endodontic students. Overall, 10 critical tasks, which were most likely to cause harm to patients, were determined. Next, Hierarchical Task Analysis (HTA) was conducted and human errors in each task were identified using the Systematic Human Error Reduction and Prediction Approach (SHERPA) technique worksheets. Results: After analyzing the SHERPA worksheets, 90 human errors were identified, including action errors (67.7%), checking errors (13.3%), selection errors (8.8%), retrieval errors (5.5%) and communication errors (4.4%); action errors were thus the most common and communication errors the least common. Conclusions: The results of the study showed that the highest percentage of errors and the highest level of risk were associated with action errors; therefore, to reduce the occurrence of such errors and limit their consequences, control measures including periodical training on work procedures, provision of work check-lists, development of guidelines and establishment of a systematic and standardized reporting system should be put in place. Regarding the results of this study, the control of recovery errors, with the highest percentage of undesirable risk, and action errors, with the highest frequency, should be the priority of control measures.

  4. Poster - 49: Assessment of Synchrony respiratory compensation error for CyberKnife liver treatment

    International Nuclear Information System (INIS)

    Liu, Ming; Cygler, Joanna; Vandervoort, Eric

    2016-01-01

    The goal of this work is to quantify respiratory motion compensation errors for liver tumor patients treated by the CyberKnife system with Synchrony tracking, to identify patients with the smallest tracking errors and to eventually help coach patients' breathing patterns to minimize dose delivery errors. The accuracy of CyberKnife Synchrony respiratory motion compensation was assessed for 37 patients treated for liver lesions by analyzing data from system logfiles. A predictive model is used to modulate the direction of individual beams during dose delivery based on the positions of internally implanted fiducials determined using an orthogonal x-ray imaging system and the current location of LED external markers. For each x-ray pair acquired, system logfiles report the prediction error, the difference between the measured and predicted fiducial positions, and the delivery error, which is an estimate of the statistical error in the model overcoming the latency between x-ray acquisition and robotic repositioning. The total error was calculated at the time of each x-ray pair, over the number of treatment fractions and the number of patients, giving the average respiratory motion compensation error in three dimensions. The 99th percentile for the total radial error is 3.85 mm, with the highest contribution of 2.79 mm in the superior/inferior (S/I) direction. The absolute mean compensation error is 1.78 mm radially, with a 1.27 mm contribution in the S/I direction. Regions of high total error may provide insight into features predicting groups of patients with larger or smaller total errors.

  5. Poster - 49: Assessment of Synchrony respiratory compensation error for CyberKnife liver treatment

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Ming [Carleton University (Canada); Cygler, Joanna [The Ottawa Hospital Cancer Centre, Carleton University, Ottawa University (Canada); Vandervoort, Eric [The Ottawa Hospital Cancer Centre, Ottawa University (Canada)

    2016-08-15

    The goal of this work is to quantify respiratory motion compensation errors for liver tumor patients treated by the CyberKnife system with Synchrony tracking, to identify patients with the smallest tracking errors and to eventually help coach patients' breathing patterns to minimize dose delivery errors. The accuracy of CyberKnife Synchrony respiratory motion compensation was assessed for 37 patients treated for liver lesions by analyzing data from system logfiles. A predictive model is used to modulate the direction of individual beams during dose delivery based on the positions of internally implanted fiducials determined using an orthogonal x-ray imaging system and the current location of LED external markers. For each x-ray pair acquired, system logfiles report the prediction error, the difference between the measured and predicted fiducial positions, and the delivery error, which is an estimate of the statistical error in the model overcoming the latency between x-ray acquisition and robotic repositioning. The total error was calculated at the time of each x-ray pair, over the number of treatment fractions and the number of patients, giving the average respiratory motion compensation error in three dimensions. The 99th percentile for the total radial error is 3.85 mm, with the highest contribution of 2.79 mm in the superior/inferior (S/I) direction. The absolute mean compensation error is 1.78 mm radially, with a 1.27 mm contribution in the S/I direction. Regions of high total error may provide insight into features predicting groups of patients with larger or smaller total errors.

  6. Full-Field Calibration of Color Camera Chromatic Aberration using Absolute Phase Maps.

    Science.gov (United States)

    Liu, Xiaohong; Huang, Shujun; Zhang, Zonghua; Gao, Feng; Jiang, Xiangqian

    2017-05-06

    The refractive index of a lens varies for different wavelengths of light, and thus the same incident light with different wavelengths has different outgoing light. This characteristic of lenses causes images captured by a color camera to display chromatic aberration (CA), which seriously reduces image quality. Based on an analysis of the distribution of CA, a full-field calibration method based on absolute phase maps is proposed in this paper. Red, green, and blue closed sinusoidal fringe patterns are generated, consecutively displayed on an LCD (liquid crystal display), and captured by a color camera from the front viewpoint. The phase information of each color fringe is obtained using a four-step phase-shifting algorithm and optimum fringe number selection method. CA causes the unwrapped phase of the three channels to differ. These pixel deviations can be computed by comparing the unwrapped phase data of the red, blue, and green channels in polar coordinates. CA calibration is accomplished in Cartesian coordinates. The systematic errors introduced by the LCD are analyzed and corrected. Simulated results show the validity of the proposed method and experimental results demonstrate that the proposed full-field calibration method based on absolute phase maps will be useful for practical software-based CA calibration.
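
    The four-step phase-shifting step mentioned above has a standard closed form: with quarter-period shifts I_k = A + B·cos(φ + kπ/2), the wrapped phase is φ = atan2(I₃ − I₁, I₀ − I₂). A minimal sketch on synthetic fringes (all values invented):

    ```python
    import numpy as np

    def four_step_phase(i0, i1, i2, i3):
        """Wrapped phase from four fringe images shifted by pi/2 each:
        I_k = A + B*cos(phi + k*pi/2)  =>  phi = atan2(I3 - I1, I0 - I2)."""
        return np.arctan2(i3 - i1, i0 - i2)

    # Synthetic fringe images for one color channel.
    x = np.linspace(0, 4 * np.pi, 512)
    true_phi = np.tile(x, (512, 1))
    imgs = [100 + 50 * np.cos(true_phi + k * np.pi / 2) for k in range(4)]

    wrapped = four_step_phase(*imgs)  # in (-pi, pi]; unwrap before comparing channels
    ```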

  7. Invariant and Absolute Invariant Means of Double Sequences

    Directory of Open Access Journals (Sweden)

    Abdullah Alotaibi

    2012-01-01

    Full Text Available We examine some properties of the invariant mean, define the concepts of strong σ-convergence and absolute σ-convergence for double sequences, and determine the associated sublinear functionals. We also define the absolute invariant mean through which the space of absolutely σ-convergent double sequences is characterized.

  8. The stars: an absolute radiometric reference for the on-orbit calibration of PLEIADES-HR satellites

    Science.gov (United States)

    Meygret, Aimé; Blanchet, Gwendoline; Mounier, Flore; Buil, Christian

    2017-09-01

    The accurate on-orbit radiometric calibration of optical sensors has become a challenge for space agencies, which pool their efforts through international working groups such as CEOS/WGCV or GSICS with the objective of ensuring the consistency of space measurements and reaching an absolute accuracy compatible with more and more demanding scientific needs. Different targets are traditionally used for calibration depending on the sensor or spacecraft specificities: from on-board calibration systems to ground targets, they all take advantage of our capacity to characterize and model them. But achieving the in-flight stability of a diffuser panel is always a challenge, while the calibration over ground targets is often limited by their BRDF characterization and the atmosphere variability. Thanks to their agility, some satellites have the capability to view extra-terrestrial targets such as the moon or stars. The moon is widely used for calibration and its albedo is known through the ROLO (RObotic Lunar Observatory) USGS model, but with a poor absolute accuracy limiting its use to sensor drift monitoring or cross-calibration. Although the spectral irradiance of some stars is known with very high accuracy, it had not really been shown that they could provide an absolute reference for remote sensor calibration. This paper shows that high resolution optical sensors can be calibrated with a high absolute accuracy using stars. The agile-body PLEIADES 1A satellite is used for this demonstration. The star-based calibration principle is described and the results are provided for different stars, each one being acquired several times. These results are compared to the official calibration provided by ground targets and the main error contributors are discussed.

  9. An absolute calibration system for millimeter-accuracy APOLLO measurements

    Science.gov (United States)

    Adelberger, E. G.; Battat, J. B. R.; Birkmeier, K. J.; Colmenares, N. R.; Davis, R.; Hoyle, C. D.; Huang, L. R.; McMillan, R. J.; Murphy, T. W., Jr.; Schlerman, E.; Skrobol, C.; Stubbs, C. W.; Zach, A.

    2017-12-01

    Lunar laser ranging provides a number of leading experimental tests of gravitation—important in our quest to unify general relativity and the standard model of physics. The apache point observatory lunar laser-ranging operation (APOLLO) has for years achieved median range precision at the  ∼2 mm level. Yet residuals in model-measurement comparisons are an order-of-magnitude larger, raising the question of whether the ranging data are not nearly as accurate as they are precise, or if the models are incomplete or ill-conditioned. This paper describes a new absolute calibration system (ACS) intended both as a tool for exposing and eliminating sources of systematic error, and also as a means to directly calibrate ranging data in situ. The system consists of a high-repetition-rate (80 MHz) laser emitting short (motivating continued work on model capabilities. The ACS provides the means to deliver APOLLO data both accurate and precise below the 2 mm level.

  10. Absolute measurement of a tritium standard

    International Nuclear Information System (INIS)

    Hadzisehovic, M.; Mocilnik, I.; Buraei, K.; Pongrac, S.; Milojevic, A.

    1978-01-01

    For the determination of a tritium absolute activity standard, a method of internal gas counting has been used. The procedure involves water reduction by uranium and zinc, followed by the measurement of the absolute disintegration rate of tritium per unit of the effective volume of the counter by a compensation method. Criteria for the choice of methods and procedures concerning the determination and measurement of the gaseous ³H yield, the parameters of gaseous hydrogen, the sample mass of HTO and the absolute disintegration rate of tritium are discussed. In order to obtain gaseous sources of ³H (and ²H), the same reversible chemical reaction was used, namely, the water - uranium hydride - hydrogen system. This reaction was proved to be quantitative above 500 deg C by measuring the yield of the gas obtained and the absolute activity of an HTO standard. A brief description of the measuring apparatus is given, as well as a critical discussion of the brass counter quality and the possibility of obtaining equal working conditions at the counter ends. (T.G.)

  11. Cryogenic, Absolute, High Pressure Sensor

    Science.gov (United States)

    Chapman, John J. (Inventor); Shams. Qamar A. (Inventor); Powers, William T. (Inventor)

    2001-01-01

    A pressure sensor is provided for cryogenic, high pressure applications. A highly doped silicon piezoresistive pressure sensor is bonded to a silicon substrate in an absolute pressure sensing configuration. The absolute pressure sensor is bonded to an aluminum nitride substrate. Aluminum nitride has an appropriate coefficient of thermal expansion for use with highly doped silicon at cryogenic temperatures. A group of sensors, either two sensors on two substrates or four sensors on a single substrate, is packaged in a pressure vessel.

  12. A developmental study of latent absolute pitch memory.

    Science.gov (United States)

    Jakubowski, Kelly; Müllensiefen, Daniel; Stewart, Lauren

    2017-03-01

    The ability to recall the absolute pitch level of familiar music (latent absolute pitch memory) is widespread in adults, in contrast to the rare ability to label single pitches without a reference tone (overt absolute pitch memory). The present research investigated the developmental profile of latent absolute pitch (AP) memory and explored individual differences related to this ability. In two experiments, 288 children from 4 to 12 years of age performed significantly above chance at recognizing the absolute pitch level of familiar melodies. No age-related improvement or decline, nor effects of musical training, gender, or familiarity with the stimuli were found in regard to latent AP task performance. These findings suggest that latent AP memory is a stable ability that is developed from as early as age 4 and persists into adulthood.

  13. Percentage of Fast-Track Receipts

    Data.gov (United States)

    Social Security Administration — The dataset provides the percentage of fast-track receipts by state during the reporting fiscal year. Fast-tracked cases consist of those cases identified as Quick...

  14. Per-pixel bias-variance decomposition of continuous errors in data-driven geospatial modeling: A case study in environmental remote sensing

    Science.gov (United States)

    Gao, Jing; Burt, James E.

    2017-12-01

    This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially-explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method originating from machine learning that has not been examined for ERS applications. Demonstrated with a showcase regression tree model mapping land imperviousness (0-100%) using Landsat images, our results showed that BVD can reveal sources of estimation errors, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity; BVD variance maps can indicate potential effects of ensemble methods (e.g. bagging), and inform efficient training sample allocation - training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than regions covering larger areas. Through examining the relationships between model characteristics and their effects on estimation accuracy revealed by BVD for both absolute and squared errors (i.e. error is the absolute or the squared value of the difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to effectively steer performance through model adjustments.
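
    A per-pixel decomposition can be approximated by refitting the model on bootstrap resamples and decomposing the predictions at each location; the sketch below does this for an invented imperviousness-like target with a regression tree, using the observed values as the truth proxy (the paper's exact procedure may differ).

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.utils import resample

    rng = np.random.default_rng(2)
    X = rng.uniform(0, 1, (500, 2))
    y = 100 * X[:, 0] * X[:, 1] + rng.normal(0, 5, 500)  # imperviousness-like (%)
    X_te, y_te = X[:100], y[:100]  # the "pixels" to diagnose

    preds = []
    for b in range(50):  # bootstrap replicates of the training step
        Xb, yb = resample(X[100:], y[100:], random_state=b)
        preds.append(DecisionTreeRegressor(max_depth=5).fit(Xb, yb).predict(X_te))
    preds = np.array(preds)  # shape (replicates, pixels)

    bias2 = (preds.mean(axis=0) - y_te) ** 2  # per-pixel squared bias
    var = preds.var(axis=0)                   # per-pixel variance
    print(f"mean bias^2 = {bias2.mean():.2f}, mean variance = {var.mean():.2f}")
    ```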

  15. A Simultaneously Calibration Approach for Installation and Attitude Errors of an INS/GPS/LDS Target Tracker

    Directory of Open Access Journals (Sweden)

    Jianhua Cheng

    2015-02-01

    Full Text Available To obtain the absolute position of a target is one of the basic topics for non-cooperated target tracking problems. In this paper, we present a simultaneous calibration method for an Inertial navigation system (INS)/Global position system (GPS)/Laser distance scanner (LDS) integrated system based target positioning approach. The INS/GPS integrated system provides the attitude and position of the observer, and the LDS offers the distance between the observer and the target. The two most significant errors are jointly taken into consideration and analyzed: (1) the attitude measurement error of the INS/GPS; (2) the installation error between the INS/GPS and LDS subsystems. Consequently, an INS/GPS/LDS based target positioning approach considering these two errors is proposed. In order to improve the performance of this approach, a novel calibration method is designed to simultaneously estimate and compensate these two main errors. Finally, simulations are conducted to assess the performance of the proposed target positioning approach and the designed simultaneous calibration method.

  16. A simultaneously calibration approach for installation and attitude errors of an INS/GPS/LDS target tracker.

    Science.gov (United States)

    Cheng, Jianhua; Chen, Daidai; Sun, Xiangyu; Wang, Tongda

    2015-02-04

    To obtain the absolute position of a target is one of the basic topics for non-cooperated target tracking problems. In this paper, we present a simultaneous calibration method for an Inertial navigation system (INS)/Global position system (GPS)/Laser distance scanner (LDS) integrated system based target positioning approach. The INS/GPS integrated system provides the attitude and position of the observer, and the LDS offers the distance between the observer and the target. The two most significant errors are jointly taken into consideration and analyzed: (1) the attitude measurement error of the INS/GPS; (2) the installation error between the INS/GPS and LDS subsystems. Consequently, an INS/GPS/LDS based target positioning approach considering these two errors is proposed. In order to improve the performance of this approach, a novel calibration method is designed to simultaneously estimate and compensate these two main errors. Finally, simulations are conducted to assess the performance of the proposed target positioning approach and the designed simultaneous calibration method.

  17. Validity of Garmin Vívofit and Polar Loop for measuring daily step counts in free-living conditions in adults

    Directory of Open Access Journals (Sweden)

    Adam Šimůnek

    2016-09-01

    Full Text Available Background: Wrist activity trackers (WATs) are becoming popular and widely used for the monitoring of physical activity. However, the validity of many WATs in measuring steps remains unknown. Objective: To determine the validity of the following WATs: Garmin Vívofit (Vívofit) and Polar Loop (Loop), by comparing them with well-validated devices, the Yamax Digiwalker SW-701 pedometer (Yamax) and the hip-mounted ActiGraph GT3X+ accelerometer (ActiGraph), in healthy adults. Methods: In free-living conditions, adult volunteers (N = 20) aged 25 to 52 years wore the two WATs (Vívofit and Loop) together with Yamax and ActiGraph simultaneously over a 7 day period. The validity of Vívofit and Loop was assessed by comparing each device with the Yamax and ActiGraph, using a paired samples t-test, mean absolute percentage errors, intraclass correlation coefficients (ICC) and Bland-Altman plots. Results: The differences between average steps per day were significant for all devices, except the difference between Vívofit and Yamax (p = .06; d = 0.2). Compared with Yamax and ActiGraph, the mean absolute percentage errors of Vívofit were -4.0% and 12.5%, respectively. For Loop, the mean absolute percentage error was 8.9% compared with Yamax and 28.0% compared with ActiGraph. Vívofit showed a very strong correlation with both Yamax and ActiGraph (ICC = .89). Loop showed a very strong correlation with Yamax (ICC = .89) and a strong correlation with ActiGraph (ICC = .70). Conclusions: Vívofit showed higher validity than Loop in measuring daily step counts in free-living conditions. Loop appears to overestimate the daily number of steps in individuals who take more steps during a day.
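
    The two headline agreement statistics, MAPE against a criterion device and the Bland-Altman bias with 95% limits of agreement, can be computed as below; the week of step counts is invented.

    ```python
    import numpy as np

    # Hypothetical daily step counts for one participant: WAT vs criterion device.
    wat = np.array([8200, 10150, 7600, 12300, 9100, 11050, 6900], dtype=float)
    criterion = np.array([8500, 9800, 8000, 12800, 9000, 11500, 7400], dtype=float)

    mape = 100 * np.mean(np.abs(wat - criterion) / criterion)

    diff = wat - criterion
    bias = diff.mean()                      # Bland-Altman mean difference
    loa = (bias - 1.96 * diff.std(ddof=1),  # 95% limits of agreement
           bias + 1.96 * diff.std(ddof=1))
    print(f"MAPE = {mape:.1f}%, bias = {bias:.0f} steps, "
          f"LoA = [{loa[0]:.0f}, {loa[1]:.0f}]")
    ```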

  18. Systematic Review of Errors in Inhaler Use

    DEFF Research Database (Denmark)

    Sanchis, Joaquin; Gich, Ignasi; Pedersen, Søren

    2016-01-01

    A systematic search for articles reporting direct observation of inhaler technique by trained personnel covered the period from 1975 to 2014. Outcomes were the nature and frequencies of the three most common errors; the percentage of patients demonstrating correct, acceptable, or poor technique; and variations in these outcomes over these 40 years and when partitioned into years 1 to 20 and years 21 to 40. Analyses were conducted in accordance with recommendations from Preferred Reporting Items for Systematic Reviews and Meta-Analyses and Strengthening the Reporting of Observational Studies in Epidemiology. Results: Data…

  19. Advancing Absolute Calibration for JWST and Other Applications

    Science.gov (United States)

    Rieke, George; Bohlin, Ralph; Boyajian, Tabetha; Carey, Sean; Casagrande, Luca; Deustua, Susana; Gordon, Karl; Kraemer, Kathleen; Marengo, Massimo; Schlawin, Everett; Su, Kate; Sloan, Greg; Volk, Kevin

    2017-10-01

    We propose to exploit the unique optical stability of the Spitzer telescope, along with that of IRAC, to (1) transfer the accurate absolute calibration obtained with MSX on very bright stars directly to two reference stars within the dynamic range of the JWST imagers (and of other modern instrumentation); (2) establish a second accurate absolute calibration based on the absolutely calibrated spectrum of the sun, transferred onto the astronomical system via alpha Cen A; and (3) provide accurate infrared measurements for the 11 (of 15) highest priority stars with no such data but with accurate interferometrically measured diameters, allowing us to optimize determinations of effective temperatures using the infrared flux method and thus to extend the accurate absolute calibration spectrally. This program is integral to plans for an accurate absolute calibration of JWST and will also provide a valuable Spitzer legacy.

  20. aCNViewer: Comprehensive genome-wide visualization of absolute copy number and copy neutral variations.

    Directory of Open Access Journals (Sweden)

    Victor Renault

    Full Text Available Copy number variations (CNV) include net gains or losses of part or whole chromosomal regions. They differ from copy neutral loss of heterozygosity (cn-LOH) events, which do not induce any net change in the copy number and are often associated with uniparental disomy. These phenomena have long been reported to be associated with diseases, particularly cancer. Losses/gains of genomic regions are often correlated with lower/higher gene expression. On the other hand, loss of heterozygosity (LOH) and cn-LOH are common events in cancer and may be associated with the loss of a functional tumor suppressor gene. Therefore, identifying recurrent CNV and cn-LOH events can be important as they may highlight common biological components and give insights into the development or mechanisms of a disease. However, no currently available tools allow a comprehensive whole-genome visualization of recurrent CNVs and cn-LOH in groups of samples providing absolute quantification of the aberrations, leading to the loss of potentially important information. To overcome these limitations, we developed aCNViewer (Absolute CNV Viewer), a visualization tool for absolute CNVs and cn-LOH across a group of samples. aCNViewer proposes three graphical representations: dendrograms, bi-dimensional heatmaps showing chromosomal regions sharing similar abnormality patterns, and quantitative stacked histograms facilitating the identification of recurrent absolute CNVs and cn-LOH. We illustrated aCNViewer using publicly available hepatocellular carcinoma (HCC) Affymetrix SNP Array data (Fig 1A). Regions 1q and 8q present a similar percentage of total gains but significantly different copy number gain categories (p-value of 0.0103 with a Fisher exact test), validated by another cohort of HCCs (p-value of 5.6e-7) (Fig 2B). aCNViewer is implemented in Python and R and is available under a GNU GPLv3 license on GitHub https://github.com/FJD-CEPH/aCNViewer and Docker https://hub.docker.com/r/fjdceph/acnviewer/. Contact: aCNViewer@cephb.fr.

  1. aCNViewer: Comprehensive genome-wide visualization of absolute copy number and copy neutral variations.

    Science.gov (United States)

    Renault, Victor; Tost, Jörg; Pichon, Fabien; Wang-Renault, Shu-Fang; Letouzé, Eric; Imbeaud, Sandrine; Zucman-Rossi, Jessica; Deleuze, Jean-François; How-Kit, Alexandre

    2017-01-01

    Copy number variations (CNV) include net gains or losses of part or whole chromosomal regions. They differ from copy neutral loss of heterozygosity (cn-LOH) events which do not induce any net change in the copy number and are often associated with uniparental disomy. These phenomena have long been reported to be associated with diseases, particularly cancer. Losses/gains of genomic regions are often correlated with lower/higher gene expression. On the other hand, loss of heterozygosity (LOH) and cn-LOH are common events in cancer and may be associated with the loss of a functional tumor suppressor gene. Therefore, identifying recurrent CNV and cn-LOH events can be important as they may highlight common biological components and give insights into the development or mechanisms of a disease. However, no currently available tools allow a comprehensive whole-genome visualization of recurrent CNVs and cn-LOH in groups of samples providing absolute quantification of the aberrations leading to the loss of potentially important information. To overcome these limitations, we developed aCNViewer (Absolute CNV Viewer), a visualization tool for absolute CNVs and cn-LOH across a group of samples. aCNViewer proposes three graphical representations: dendrograms, bi-dimensional heatmaps showing chromosomal regions sharing similar abnormality patterns, and quantitative stacked histograms facilitating the identification of recurrent absolute CNVs and cn-LOH. We illustrated aCNViewer using publicly available hepatocellular carcinomas (HCCs) Affymetrix SNP Array data (Fig 1A). Regions 1q and 8q present a similar percentage of total gains but significantly different copy number gain categories (p-value of 0.0103 with a Fisher exact test), validated by another cohort of HCCs (p-value of 5.6e-7) (Fig 2B). aCNViewer is implemented in Python and R and is available with a GNU GPLv3 license on GitHub https://github.com/FJD-CEPH/aCNViewer and Docker https

  2. Medical Error Types and Causes Made by Nurses in Turkey

    Directory of Open Access Journals (Sweden)

    Dilek Kucuk Alemdar

    2013-06-01

    Full Text Available AIM: This study was carried out as a descriptive study in order to determine the types, causes and prevalence of medical errors made by nurses in Turkey. METHOD: Seventy-eight (78) nurses who worked in a hospital randomly selected from five hospitals in Giresun city centre were enrolled in the study. The data were collected by the researchers using the 'Information Form for Nurses' and 'Medical Error Form'. The Medical Error Form consists of 2 parts and 40 items covering types and causes of medical errors. Nurses' socio-demographic variables, medical error types and causes were evaluated using percentage distributions and means. RESULTS: The mean age of the nurses was 25.5 years, with a standard deviation of 6.03 years. 50% of the nurses in the study had graduated from a health professional high school. 53.8% of the nurses were single, 63.1% had worked for 1-5 years, 71.8% worked day and night shifts, and 42.3% worked in medical clinics. The most common types of medical errors were hospital infections (15.4%), diagnostic errors (12.8%), and needle or cutting tool injuries and problems related to drugs with side effects (10.3% each). In the study, 38.5% of the nurses reported that they thought the main cause of medical errors was tiredness, 36.4% increased workload and 34.6% long working hours. CONCLUSION: As a result of the present study, nurses mentioned hospital infections, diagnostic errors, and needle or cutting tool injuries as the most common medical errors, and fatigue, work overload and long working hours as the most common reasons for medical errors. [TAF Prev Med Bull 2013; 12(3): 307-314]

  3. Neural network versus classical time series forecasting models

    Science.gov (United States)

    Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam

    2017-05-01

    Artificial neural networks (ANN) have an advantage in time series forecasting as they have the potential to solve complex forecasting problems. This is because ANN is a data-driven approach which can be trained to map past values of a time series. In this study, the forecast performance of a neural network and a classical time series forecasting method, namely the seasonal autoregressive integrated moving average model, was compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. Forecast accuracy was evaluated using the mean absolute deviation, root mean square error and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when the Box-Cox transformation was used as data preprocessing.
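
    A rough sketch of the preprocessing comparison: train the same small network on raw and on Box-Cox-transformed lagged values of an invented price series, then compare MAPE after back-transforming. The architecture and lag choices are assumptions.

    ```python
    import numpy as np
    from scipy.stats import boxcox
    from scipy.special import inv_boxcox
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    price = 100 + np.cumsum(rng.normal(1.0, 1.0, 300))  # hypothetical gold prices

    def lagged(series, lags=4):  # lagged design matrix and one-step-ahead target
        X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
        return X, series[lags:]

    def mape(a, p):
        return 100 * np.mean(np.abs((a - p) / a))

    # Without preprocessing.
    X, y = lagged(price)
    raw = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=3)
    raw.fit(X[:-50], y[:-50])
    print(f"raw MAPE     = {mape(y[-50:], raw.predict(X[-50:])):.2f}%")

    # With a Box-Cox transformation as preprocessing.
    bc, lam = boxcox(price)
    Xb, yb = lagged(bc)
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=3)
    net.fit(Xb[:-50], yb[:-50])
    pred = inv_boxcox(net.predict(Xb[-50:]), lam)  # back to the original scale
    print(f"Box-Cox MAPE = {mape(y[-50:], pred):.2f}%")
    ```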

  4. Comparison of INAR(1)-Poisson model and Markov prediction model in forecasting the number of DHF patients in west java Indonesia

    Science.gov (United States)

    Ahdika, Atina; Lusiyana, Novyan

    2017-02-01

    The World Health Organization (WHO) noted Indonesia as the country with the highest dengue hemorrhagic fever (DHF) cases in Southeast Asia. There is no vaccine and no specific treatment for DHF. One of the efforts which can be made by both government and residents is prevention. In statistics, there are several methods to predict the number of DHF cases that can be used as a reference for preventing DHF cases. In this paper, a discrete time series model, the INAR(1)-Poisson model in particular, and a Markov prediction model (MPM) are used to predict the number of DHF patients in West Java, Indonesia. The result shows that MPM is the best model since it has the smallest values of MAE (mean absolute error) and MAPE (mean absolute percentage error).
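
    For concreteness, an INAR(1)-Poisson series can be simulated by binomial thinning plus Poisson innovations, and its one-step conditional-mean forecast scored with MAE and MAPE; the parameters below are invented, not the paper's fitted values.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    alpha, lam, n = 0.6, 2.0, 200  # thinning probability, innovation mean, length

    # Simulate INAR(1)-Poisson: X_t = alpha o X_{t-1} + eps_t, eps_t ~ Poisson(lam).
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))  # start near the stationary mean
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)  # binomial thinning
        x[t] = survivors + rng.poisson(lam)

    # One-step-ahead conditional mean: E[X_t | X_{t-1}] = alpha*X_{t-1} + lam.
    forecast = alpha * x[:-1] + lam
    actual = x[1:]
    mae = np.mean(np.abs(actual - forecast))
    mape = 100 * np.mean(np.abs(actual - forecast) / np.maximum(actual, 1))  # guard zeros
    print(f"MAE = {mae:.2f}, MAPE = {mape:.1f}%")
    ```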

  5. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict.

  6. COMPARISON OF LEAST ABSOLUTE SHRINKAGE AND SELECTION OPERATOR AND PARTIAL LEAST SQUARES ANALYSES (Case Study: Microarray Data)

    Directory of Open Access Journals (Sweden)

    KADEK DWI FARMANI

    2012-09-01

    Linear regression analysis is one of the parametric statistical methods that exploit the relationship between two or more quantitative variables. In linear regression analysis, several assumptions must be met: the errors are normally distributed, uncorrelated, and have constant (homogeneous) variance. Some conditions prevent these assumptions from being met, for example correlation between the independent variables (multicollinearity) or constraints on the number of observations relative to the number of independent variables. When the number of samples obtained is smaller than the number of independent variables, the data are called microarray data. Least Absolute Shrinkage and Selection Operator (LASSO) and Partial Least Squares (PLS) are statistical methods that can be used to deal with microarray data, overfitting, and multicollinearity. This study therefore compares the LASSO and PLS methods, using data on coronary heart disease and stroke patients that are microarray data and contain multicollinearity. With these two data characteristics, most independent variables being only weakly correlated with one another, the LASSO method produced a better model than PLS as judged by the RMSEP.
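
    A sketch of this kind of comparison using scikit-learn, with a synthetic p > n ("microarray-like") data set standing in for the coronary heart and stroke data; RMSEP is computed on a held-out split:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
n, p = 40, 200                      # fewer samples than variables (microarray-like)
X = rng.normal(size=(n, p))
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.5, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

for name, model in [("LASSO", Lasso(alpha=0.1)),
                    ("PLS", PLSRegression(n_components=5))]:
    model.fit(X_tr, y_tr)
    # root mean square error of prediction on the held-out set
    rmsep = mean_squared_error(y_te, model.predict(X_te).ravel()) ** 0.5
    print(name, round(rmsep, 3))
```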

  7. Absolute pitch among students at the Shanghai Conservatory of Music: a large-scale direct-test study.

    Science.gov (United States)

    Deutsch, Diana; Li, Xiaonuo; Shen, Jing

    2013-11-01

    This paper reports a large-scale direct-test study of absolute pitch (AP) in students at the Shanghai Conservatory of Music. Overall note-naming scores were very high, with high scores correlating positively with early onset of musical training. Students who had begun training at age ≤5 yr scored 83% correct not allowing for semitone errors and 90% correct allowing for semitone errors. Performance levels were higher for white key pitches than for black key pitches. This effect was greater for orchestral performers than for pianists, indicating that it cannot be attributed to early training on the piano. Rather, accuracy in identifying notes of different names (C, C#, D, etc.) correlated with their frequency of occurrence in a large sample of music taken from the Western tonal repertoire. There was also an effect of pitch range, so that performance on tones in the two-octave range beginning on Middle C was higher than on tones in the octave below Middle C. In addition, semitone errors tended to be on the sharp side. The evidence also ran counter to the hypothesis, previously advanced by others, that the note A plays a special role in pitch identification judgments.

  8. NGS Absolute Gravity Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NGS Absolute Gravity data (78 stations) was received in July 1993. Principal gravity parameters include Gravity Value, Uncertainty, and Vertical Gradient. The...

  9. THE ABSOLUTE MAGNITUDE OF RRc VARIABLES FROM STATISTICAL PARALLAX

    International Nuclear Information System (INIS)

    Kollmeier, Juna A.; Burns, Christopher R.; Thompson, Ian B.; Preston, George W.; Crane, Jeffrey D.; Madore, Barry F.; Morrell, Nidia; Prieto, José L.; Shectman, Stephen; Simon, Joshua D.; Villanueva, Edward; Szczygieł, Dorota M.; Gould, Andrew; Sneden, Christopher; Dong, Subo

    2013-01-01

    We present the first definitive measurement of the absolute magnitude of RR Lyrae c-type variable stars (RRc) determined purely from statistical parallax. We use a sample of 242 RRc variables selected from the All Sky Automated Survey for which high-quality light curves, photometry, and proper motions are available. We obtain high-resolution echelle spectra for these objects to determine radial velocities and abundances as part of the Carnegie RR Lyrae Survey. We find that M_V,RRc = 0.59 ± 0.10 at a mean metallicity of [Fe/H] = –1.59. This is to be compared with previous estimates for RRab stars (M_V,RRab = 0.76 ± 0.12) and the only direct measurement of an RRc absolute magnitude (RZ Cephei, M_V,RRc = 0.27 ± 0.17). We find the bulk velocity of the halo relative to the Sun to be (W_π, W_θ, W_z) = (12.0, –209.9, 3.0) km s⁻¹ in the radial, rotational, and vertical directions, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (150.4, 106.1, 96.0) km s⁻¹. For the disk, we find (W_π, W_θ, W_z) = (13.0, –42.0, –27.3) km s⁻¹ relative to the Sun, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (67.7, 59.2, 54.9) km s⁻¹. Finally, as a byproduct of our statistical framework, we are able to demonstrate that UCAC2 proper-motion errors are significantly overestimated, as verified by UCAC4.

  10. A Comparison of the American Society of Cataract and Refractive Surgery post-myopic LASIK/PRK Intraocular Lens (IOL) calculator and the Ocular MD IOL calculator

    Directory of Open Access Journals (Sweden)

    Hsu M

    2011-09-01

    David L DeMill,1 Majid Moshirfar,1 Marcus C Neuffer,1 Maylon Hsu,1 Shameema Sikder2; 1John A Moran Eye Center, University of Utah, Salt Lake City, UT, USA; 2Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD, USA. Background: To compare the average values of the American Society of Cataract and Refractive Surgery (ASCRS) and Ocular MD intraocular lens (IOL) calculators to assess their accuracy in predicting IOL power in patients with prior laser-in-situ keratomileusis (LASIK) or photorefractive keratectomy. Methods: In this retrospective study, data from 21 eyes with previous LASIK or photorefractive keratectomy for myopia and subsequent cataract surgery were used in an IOL calculator comparison. The predicted IOL powers of the Ocular MD SRK/T, Ocular MD Haigis, and ASCRS averages were compared. The Ocular MD average (composed of an average of Ocular MD SRK/T and Ocular MD Haigis) and the all-calculator average (composed of an average of Ocular MD SRK/T, Ocular MD Haigis, and ASCRS) were also compared. Primary outcome measures were mean arithmetic and absolute IOL prediction error, variance in mean arithmetic IOL prediction error, and the percentage of eyes within ±0.50 and ±1.00 D. Results: The Ocular MD SRK/T and Ocular MD Haigis averages produced mean arithmetic IOL prediction errors of 0.57 and –0.61 diopters (D), respectively, which were significantly larger than errors from the ASCRS, Ocular MD, and all-calculator averages (0.11, –0.02, and 0.02 D, respectively; all P < 0.05). There was no statistically significant difference between the methods in absolute IOL prediction error, variance, or the percentage of eyes with outcomes within ±0.50 and ±1.00 D. Conclusion: The ASCRS average was more accurate in predicting IOL power than the Ocular MD SRK/T and Ocular MD Haigis averages alone. Our methods using combinations of these averages, when compared with the individual averages, showed a trend of decreased mean arithmetic IOL

  11. Quantitative structure activity relationship model for predicting the depletion percentage of skin allergic chemical substances of glutathione

    International Nuclear Information System (INIS)

    Si Hongzong; Wang Tao; Zhang Kejun; Duan Yunbo; Yuan Shuping; Fu Aiping; Hu Zhide

    2007-01-01

    A quantitative model was developed with gene expression programming (GEP) to predict the depletion percentage of glutathione (DPG) by skin-allergenic chemical substances. Each compound was represented by several calculated structural descriptors covering constitutional, topological, geometrical, electrostatic and quantum-chemical features. The GEP method produced a nonlinear, five-descriptor quantitative model with a mean error and a correlation coefficient of 10.52 and 0.94 for the training set and 22.80 and 0.85 for the test set, respectively. The GEP predictions are in good agreement with the experimental values, and better than those of the heuristic method.

  12. Maximizing percentage depletion in solid minerals

    International Nuclear Information System (INIS)

    Tripp, J.; Grove, H.D.; McGrath, M.

    1982-01-01

    This article develops a strategy for maximizing percentage depletion deductions when extracting uranium or other solid minerals. The goal is to avoid losing percentage depletion deductions by staying below the 50% limitation on taxable income from the property. The article is divided into two major sections. The first section comprises depletion calculations that illustrate the problem and corresponding solutions. The last section deals with the feasibility of applying the strategy while complying with the Internal Revenue Code and appropriate regulations. Three separate strategies for appropriate situations are developed and illustrated. 13 references, 3 figures, 7 tables

  13. Absolute isotopic abundances of Ti in meteorites

    International Nuclear Information System (INIS)

    Niederer, F.R.; Papanastassiou, D.A.; Wasserburg, G.J.

    1985-01-01

    The absolute isotope abundance of Ti has been determined in Ca-Al-rich inclusions from the Allende and Leoville meteorites and in samples of whole meteorites. The absolute Ti isotope abundances differ by a significant mass dependent isotope fractionation transformation from the previously reported abundances, which were normalized for fractionation using ⁴⁶Ti/⁴⁸Ti. Therefore, the absolute compositions define distinct nucleosynthetic components from those previously identified or reflect the existence of significant mass dependent isotope fractionation in nature. We provide a general formalism for determining the possible isotope compositions of the exotic Ti from the measured composition, for different values of isotope fractionation in nature and for different mixing ratios of the exotic and normal components. The absolute Ti and Ca isotopic compositions still support the correlation of ⁵⁰Ti and ⁴⁸Ca effects in the FUN inclusions and imply contributions from neutron-rich equilibrium or quasi-equilibrium nucleosynthesis. The present identification of endemic effects at ⁴⁶Ti, for the absolute composition, implies a shortfall of an explosive-oxygen component or reflects significant isotope fractionation. Additional nucleosynthetic components are required by ⁴⁷Ti and ⁴⁹Ti effects. Components are also defined in which ⁴⁸Ti is enhanced. Results are given and discussed. (author)

  14. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    Science.gov (United States)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a
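
    A toy version of the inclusion-and-spread procedure described above, assuming the products are already on a common grid and skipping the separate land/ocean treatment; products whose zonal mean falls within ±50% of the base estimate contribute, and their standard deviation s is the bias-error estimate:

```python
import numpy as np

def bias_error_estimate(base, products, tol=0.5):
    """base: (lat, lon) mean precipitation; products: (k, lat, lon).
    A product contributes where its zonal mean is within ±tol of base's."""
    zonal_base = base.mean(axis=1, keepdims=True)             # (lat, 1)
    zonal_prod = products.mean(axis=2, keepdims=True)         # (k, lat, 1)
    ok = np.abs(zonal_prod - zonal_base) <= tol * zonal_base  # inclusion mask
    masked = np.where(ok, products, np.nan)
    return np.nanstd(masked, axis=0)                          # s, per grid box

# synthetic stand-ins for the GPCP base field and five input products
rng = np.random.default_rng(2)
base = rng.random((10, 20)) + 1.0
products = base + rng.normal(scale=0.2, size=(5, 10, 20))
s = bias_error_estimate(base, products)
print((s / base).mean())   # mean relative bias error s/m
```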

  15. Computational fluid dynamics analysis and experimental study of a low measurement error temperature sensor used in climate observation.

    Science.gov (United States)

    Yang, Jie; Liu, Qingquan; Dai, Wei

    2017-02-01

    To improve the air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may be helpful to provide a relatively accurate air temperature result.
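
    The paper fits its correction equation to CFD results with a genetic algorithm; a rough analogue using SciPy's differential evolution (an evolutionary optimizer) is sketched below. The correction form err ≈ a·S^b·v^c in solar radiation S and wind speed v, and all data, are made up for illustration:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(3)
S = rng.uniform(100, 1000, 50)   # solar radiation, W/m^2 (synthetic)
v = rng.uniform(0.5, 5.0, 50)    # wind speed, m/s (synthetic)
# stand-in for CFD-computed temperature errors, with a little noise
err_obs = 0.002 * S**0.8 * v**-0.5 + rng.normal(0, 0.02, 50)

def sse(params):
    """Sum of squared residuals of the assumed correction form."""
    a, b, c = params
    return np.sum((a * S**b * v**c - err_obs) ** 2)

result = differential_evolution(sse, bounds=[(0, 0.1), (0, 2), (-2, 0)], seed=3)
print("fitted correction coefficients:", result.x)
```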

  16. Absolute earthquake locations using 3-D versus 1-D velocity models below a local seismic network: example from the Pyrenees

    Science.gov (United States)

    Theunissen, T.; Chevrot, S.; Sylvander, M.; Monteiller, V.; Calvet, M.; Villaseñor, A.; Benahmed, S.; Pauchet, H.; Grimaud, F.

    2018-03-01

    Local seismic networks are usually designed so that earthquakes are located inside them (primary azimuthal gap below 180° and distance to the first station no higher than 15 km). Errors in velocity models and the accuracy of absolute earthquake locations are assessed based on a reference data set made of active seismic, quarry blasts and passive temporary experiments. Solutions and uncertainties are estimated using the probabilistic approach of the NonLinLoc (NLLoc) software based on Equal Differential Time. Some updates have been added to NLLoc to better focus on the final solution (outlier exclusion, multiscale grid search, S-phase weighting). Errors in the probabilistic approach are defined to take into account errors in velocity models and in arrival times. The seismicity in the final 3-D catalogue is located with a horizontal uncertainty of about 2.0 ± 1.9 km and a vertical uncertainty of about 3.0 ± 2.0 km.

  17. Investigating Absolute Value: A Real World Application

    Science.gov (United States)

    Kidd, Margaret; Pagni, David

    2009-01-01

    Making connections between various representations is important in mathematics. In this article, the authors discuss the numeric, algebraic, and graphical representations of sums of absolute values of linear functions. The initial explanations are accessible to all students who have experience graphing and who understand that absolute value simply…

  18. Improving Papanicolaou test quality and reducing medical errors by using Toyota production system methods.

    Science.gov (United States)

    Raab, Stephen S; Andrew-Jaja, Carey; Condel, Jennifer L; Dabbs, David J

    2006-01-01

    The objective of the study was to determine whether the Toyota production system process improves Papanicolaou test quality and patient safety. An 8-month nonconcurrent cohort study that included 464 case and 639 control women who had a Papanicolaou test was performed. Office workflow was redesigned using Toyota production system methods by introducing a 1-by-1 continuous flow process. We measured the frequency of Papanicolaou tests without a transformation zone component, follow-up and Bethesda System diagnostic frequency of atypical squamous cells of undetermined significance, and diagnostic error frequency. After the intervention, the percentage of Papanicolaou tests lacking a transformation zone component decreased from 9.9% to 4.7% (P = .001). The percentage of Papanicolaou tests with a diagnosis of atypical squamous cells of undetermined significance decreased from 7.8% to 3.9% (P = .007). The frequency of error per correlating cytologic-histologic specimen pair decreased from 9.52% to 7.84%. The introduction of the Toyota production system process resulted in improved Papanicolaou test quality.

  19. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    Fritsch, Daniel S.; Raghavan, Suraj; Boxwala, Aziz; Earnhart, Jon; Tracton, Gregg; Cullip, Timothy; Chaney, Edward L.

    1997-01-01

    Purpose: The purpose of this investigation was to develop methods and software for computing realistic digitally reconstructed electronic portal images with known setup errors for use as benchmark test cases for evaluation and intercomparison of computer-based methods for image matching and detecting setup errors in electronic portal images. Methods and Materials: An existing software tool for computing digitally reconstructed radiographs was modified to compute simulated megavoltage images. An interface was added to allow the user to specify which setup parameter(s) will contain computer-induced random and systematic errors in a reference beam created during virtual simulation. Other software features include options for adding random and structured noise, Gaussian blurring to simulate geometric unsharpness, histogram matching with a 'typical' electronic portal image, specifying individual preferences for the appearance of the 'gold standard' image, and specifying the number of images generated. The visible male computed tomography data set from the National Library of Medicine was used as the planning image. Results: Digitally reconstructed electronic portal images with known setup errors have been generated and used to evaluate our methods for automatic image matching and error detection. Any number of different sets of test cases can be generated to investigate setup errors involving selected setup parameters and anatomic volumes. This approach has proved to be invaluable for determination of error detection sensitivity under ideal (rigid body) conditions and for guiding further development of image matching and error detection methods. Example images have been successfully exported for similar use at other sites. Conclusions: Because absolute truth is known, digitally reconstructed electronic portal images with known setup errors are well suited for evaluation of computer-aided image matching and error detection methods. High-quality planning images, such as

  20. Comparing different error-conditions in film dosemeter evaluation

    International Nuclear Information System (INIS)

    Roed, H.; Figel, M.

    2007-01-01

    In the evaluation of a film used as a personal dosemeter it may be necessary to mark the dosemeters when possible error-conditions are recognised, i.e. conditions that affect the ability to make a correct evaluation of the dose value. In this project a comparison was carried out to examine how two individual monitoring services (IMS) from two different EU countries, the National Institute of Radiation Hygiene (Denmark) (NIRH) and the National Research Centre for Environment and Health (Germany) (GSF), mark their dosemeters. The IMS differ in size, type of customers and issuing period, but both use films as their primary dosemeters. The error-conditions examined are dosemeters exposed to moisture or light, contaminated dosemeters, films exposed outside the badge, missing filters in the badge, films inserted incorrectly in the badge, and dosemeters not returned or returned too late to the IMS. The data were collected for the year 2003, in which NIRH evaluated ∼50,000 and GSF ∼1.4 million film dosemeters. The percentage of film dosemeters is calculated for each error-condition, as well as the distribution among eight different employee categories, i.e. medicine, nuclear medicine, nuclear industry, industry, radiography, laboratories, veterinary and others. It turned out that incorrect insertion of the film in the badge was the most common error-condition observed at both IMS and that the veterinary category generally had the highest number of errors. NIRH has a significantly higher relative number of dosemeters in most error-conditions than GSF, which perhaps reflects that such a comparison is difficult owing to systematic and methodological differences between the IMS and the countries, e.g. regulations and monitoring programmes. The absence of a common categorisation method for employee categories also makes a comparison like this difficult. (authors)

  1. A novel setup for the determination of absolute cross sections for low-energy electron induced strand breaks in oligonucleotides - The effect of the radiosensitizer 5-fluorouracil

    International Nuclear Information System (INIS)

    Rackwitz, J.; Rankovic, M.L.; Milosavljevic, A.R.; Bald, I.

    2017-01-01

    Low-energy electrons (LEEs) play an important role in DNA radiation damage. Here we present a method to quantify LEE-induced strand breakage in well-defined oligonucleotide single strands in terms of absolute cross sections. An LEE irradiation setup covering electron energies <500 eV is constructed and optimized to irradiate DNA origami triangles carrying well-defined oligonucleotide target strands. Measurements are presented for 10.0 and 5.5 eV for different oligonucleotide targets. The determination of absolute strand break cross sections is performed by atomic force microscopy analysis. An accurate fluence determination ensures small margins of error for the determined absolute single strand break cross sections σ_SSB. In this way, the influence of sequence modification with the radiosensitizer 5-fluorouracil (5FU) is studied using an absolute and relative data analysis. We demonstrate an increase in the strand break yields of 5FU-containing oligonucleotides by a factor of 1.5 to 1.6 compared with non-modified oligonucleotide sequences when irradiated with 10 eV electrons. (authors)

  2. [Absolute and relative strength-endurance of the knee flexor and extensor muscles: a reliability study using the IsoMed 2000-dynamometer].

    Science.gov (United States)

    Dirnberger, J; Wiesinger, H P; Stöggl, T; Kösters, A; Müller, E

    2012-09-01

    Isokinetic devices are highly rated in strength-related performance diagnosis. A few years ago, the broad variety of existing products was extended by the IsoMed 2000 dynamometer. For an isokinetic device to be clinically useful, the reliability of specific applications must be established. Although there have already been single studies on this topic for the IsoMed 2000 concerning maximum strength measurements, there has been no study regarding the assessment of strength-endurance so far. The aim of the present study was to establish the reliability of various methods of quantifying strength-endurance using the IsoMed 2000. A sample of 33 healthy young subjects (age: 23.8 ± 2.6 years) participated in one familiarisation and two testing sessions, 3-4 days apart. Testing consisted of a series of 30 full-effort concentric extension-flexion cycles of the right knee muscles at an angular velocity of 180 °/s. Based on the parameters peak torque and work for each repetition, indices of absolute (KADabs) and relative (KADrel) strength-endurance were derived. KADabs was calculated as the mean value of all testing repetitions; KADrel was determined in two ways: on the one hand, as the percentage decrease between the first and the last 5 repetitions (KADrelA), and on the other, as the negative slope derived from the linear regression equation over all repetitions (KADrelB). Detection of systematic errors was performed using paired-sample t-tests; relative and absolute reliability were examined using the intraclass correlation coefficient (ICC 2.1) and the standard error of measurement (SEM%), respectively. In general, for extension measurements concerning KADabs and - in a weakened form - KADrel, high ICC values of 0.76-0.89 combined with clinically acceptable SEM% values of 1.2-5.9 % could be found. For flexion measurements this only applies to KADabs, whereas results for KADrel turned out to be clearly weaker, with ICC and SEM% values of 0.42-0.62 and 9
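
    The three endurance indices defined above translate directly into code; a sketch, assuming a vector of per-repetition peak torque (or work) values (the synthetic series below mimics a fatiguing decline):

```python
import numpy as np

def endurance_indices(reps):
    """reps: per-repetition peak torque or work across the 30 cycles."""
    reps = np.asarray(reps, float)
    kad_abs = reps.mean()                            # KADabs: mean of all repetitions
    first5, last5 = reps[:5].mean(), reps[-5:].mean()
    kad_rel_a = 100 * (first5 - last5) / first5      # KADrelA: % decrease, first vs last 5
    slope = np.polyfit(np.arange(reps.size), reps, 1)[0]
    kad_rel_b = -slope                               # KADrelB: negative regression slope
    return kad_abs, kad_rel_a, kad_rel_b

rng = np.random.default_rng(0)
torques = 150 - 1.2 * np.arange(30) + rng.normal(0, 3, 30)  # synthetic decline
print(endurance_indices(torques))
```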

  3. Approach To Absolute Zero

    Indian Academy of Sciences (India)

    …more and more difficult to remove heat as one approaches absolute zero. This is the … A new and active branch of engineering … This temperature is called the critical temperature, Tc. For sulfur dioxide the critical … adsorbent charcoal.

  4. Reducing errors benefits the field-based learning of a fundamental movement skill in children.

    Science.gov (United States)

    Capio, C M; Poolton, J M; Sit, C H P; Holmstrom, M; Masters, R S W

    2013-03-01

    Proficient fundamental movement skills (FMS) are believed to form the basis of more complex movement patterns in sports. This study examined the development of the FMS of overhand throwing in children through either an error-reduced (ER) or error-strewn (ES) training program. Students (n = 216), aged 8-12 years (M = 9.16, SD = 0.96), practiced overhand throwing in either a program that reduced errors during practice (ER) or one that was error-strewn (ES). The ER program reduced errors by incrementally raising the task difficulty, while the ES program incrementally lowered the task difficulty. Process-oriented assessment of throwing movement form (Test of Gross Motor Development-2) and product-oriented assessment of throwing accuracy (absolute error) were performed. Changes in performance were examined among children in the upper and lower quartiles of the pretest throwing accuracy scores. ER training participants showed greater gains in movement form and accuracy, and performed throwing more effectively with a concurrent secondary cognitive task. Movement form improved among girls, while throwing accuracy improved among children with low ability. Reducing performance errors in FMS training resulted in greater learning than a program that did not restrict errors. The reduced cognitive processing costs (effective dual-task performance) associated with such an approach suggest its potential benefits for children with developmental conditions. © 2011 John Wiley & Sons A/S.

  5. 7 CFR 982.41 - Free and restricted percentages.

    Science.gov (United States)

    2010-01-01

    ... percentages in effect at the end of the previous marketing year shall be applicable. [51 FR 29548, Aug. 19... Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing... WASHINGTON Order Regulating Handling Marketing Policy § 982.41 Free and restricted percentages. The free and...

  6. Absolute spectrophotometry of Nova Cygni 1975

    International Nuclear Information System (INIS)

    Kontizas, E.; Kontizas, M.; Smyth, M.J.

    1976-01-01

    Radiometric photoelectric spectrophotometry of Nova Cygni 1975 was carried out on 1975 August 31 and September 2-3. α Lyr was used as the reference star, and its absolute spectral energy distribution was used to reduce the spectrophotometry of the nova to absolute units. Emission strengths of Hα, Hβ, Hγ (in W cm⁻²) were derived. The Balmer decrement Hα:Hβ:Hγ was compared with theory and found to deviate less than had been reported for an earlier nova. (author)

  7. The Pragmatics of "Unruly" Dative Absolutes in Early Slavic

    Directory of Open Access Journals (Sweden)

    Daniel E. Collins

    2011-08-01

    This chapter examines some uses of the dative absolute in Old Church Slavonic and in early recensional Slavonic texts that depart from notions of how Indo-European absolute constructions should behave, either because they have subjects coreferential with the (putative) main-clause subjects or because they function as if they were main clauses in their own right. Such "noncanonical" absolutes have generally been written off as mechanistic translations or as mistakes by scribes who did not understand the proper uses of the construction. In reality, the problem is not with literalistic translators or incompetent scribes but with the definition of the construction itself; it is quite possible to redefine the Early Slavic dative absolute in a way that accounts for the supposedly deviant cases. While the absolute is generally dependent semantically on an adjacent unit of discourse, it should not always be regarded as subordinated syntactically. There are good grounds for viewing some absolutes not as dependent clauses but as independent sentences whose collateral character is an issue not of syntax but of the pragmatics of discourse.

  8. ASSOCIATION BETWEEN REFRACTIVE ERRORS AND SENILE CATARACT IN RURAL AREA OF WESTERN MAHARASHTRA

    Directory of Open Access Journals (Sweden)

    Chaudhari Sagar V, Shelke Sanjay T, Bangal Surekha V, Bhandari Akshay J, Kulkarni Ameya A

    2015-04-01

    Purpose: To study the association between refractive errors and senile cataract in a rural area of western Maharashtra. Materials & Methods: A prospective cross-sectional study was carried out on 420 eyes of 210 patients with senile cataract. The age and sex of the patient and the grade and refractive status of the cataractous eyes were recorded. The grade of the cataract was recorded by the LOCS III (Lens Opacities Classification System, version III). Refractive status was measured subjectively using a retinoscope, and the refractive error for each eye was converted into spherical equivalent units. Results: The age variation in the study was between 60-85 years. The maximum number of patients was in the age group of 60-65 years. The spherical equivalent ranged between -3.0 D and +4.25 D. 45.95% of the study population had a spherical equivalent between -2 and -1 D. 73.81% of the study population had a myopic refraction; 20% had a hypermetropic refraction. The percentage of patients with a score of nuclear opalescence and colour between 1.0-2.0 was 41.90%, between 2.1-3.0 was 26.67%, and above 3.0 was 31.43%. The percentage of patients with a score of cortical cataract between 0.1-1.0 was 69.76% and with a grade between 2.1-3.0 was 26.91%. The percentage of patients with a score of posterior subcapsular cataract between 0.1-1.0 was 53.57% and with a grade between 2.1-3.0 was 39.05%. Conclusion: Myopic refraction was associated with nuclear, cortical and posterior subcapsular cataract, and this association was statistically significant.

  9. Absolutyzm i pluralizm (ABSOLUTISM AND PLURALISM)

    Directory of Open Access Journals (Sweden)

    Renata Ziemińska

    2005-06-01

    Alethic absolutism is the thesis that propositions cannot be more or less true, that they are true or false forever (if true at all), and that their truth is independent of any circumstances of their assertion. In its negative version, which is easier to defend, alethic absolutism claims that the very same proposition cannot be both true and false relative to the circumstances of its assertion. Simple alethic pluralism is the thesis that we have many concepts of truth. It is a very good way to dissolve the controversy between alethic relativism and absolutism. The many philosophical concepts of truth are the best reason for such pluralism. If a concept is the meaning of a name, we have many concepts of truth because the name 'truth' has been understood in many ways. The variety of meanings, however, can be superficial: under it we can find one idea of truth expressed in the correspondence truism or schema (T). The content of the truism is too poor to be the content of any one concept of truth, so it is usually connected with some picture of the world (ontology), and we have as many concepts of truth as pictures of the world. The authoress proposes a hierarchical pluralism with a privileged classical (or, in a weak sense, correspondence) concept of truth as an absolute property.

  10. 78 FR 33757 - Rural Determination and Financing Percentage

    Science.gov (United States)

    2013-06-05

    ... Agency for determining what percentage of a project is eligible for RUS financing if the Rural Percentage... defined as rural. As the Agency investigates financing options for projects owned by entities other than... inability to fund 100 percent of the financing needs of a given project has undermined the Agency's effort...

  11. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  12. Introducing the Mean Absolute Deviation "Effect" Size

    Science.gov (United States)

    Gorard, Stephen

    2015-01-01

    This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…

  13. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^−(dn−1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  14. Absolute gravity measurements at three sites characterized by different environmental conditions using two portable ballistic gravimeters

    Science.gov (United States)

    Greco, Filippo; Biolcati, Emanuele; Pistorio, Antonio; D'Agostino, Giancarlo; Germak, Alessandro; Origlia, Claudio; Del Negro, Ciro

    2015-03-01

    The performance of two absolute gravimeters at three different sites in Italy between 2009 and 2011 is presented. The measurements of the gravity acceleration g were performed using the absolute gravimeters Micro-g LaCoste FG5#238 and the INRiM prototype IMGC-02, which represent the state of the art in ballistic gravimeter technology (relative uncertainty of a few parts in 10⁹). For the comparison, the measured g values were reported at the same height by means of the vertical gravity gradient estimated at each site with relative gravimeters. The consistency and reliability of the gravity observations, as well as the performance and efficiency of the instruments, were assessed by measurements made at sites characterized by different logistics and environmental conditions. Furthermore, the various factors affecting the measurements and their uncertainty were thoroughly investigated. The measurements showed good agreement, with the minimum and maximum differences being 4.0 and 8.3 μGal. The normalized errors are very much lower than 1, ranging between 0.06 and 0.45, confirming the compatibility between the results. This excellent agreement can be attributed to several factors, including the good working order of the gravimeters and the correct setup and use of the instruments in different conditions. These results can contribute to the standardization of absolute gravity surveys, largely for applications in geophysics, volcanology and other branches of geosciences, allowing a good trade-off between uncertainty and efficiency of gravity measurements to be achieved.
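
    The "normalized errors" quoted above follow the usual metrology comparison statistic E_n = |g1 − g2| / √(u1² + u2²), with compatibility indicated by E_n < 1; a one-line sketch (the offsets and uncertainties below are illustrative, not the paper's data):

```python
import math

def normalized_error(g1, u1, g2, u2):
    """E_n = |g1 - g2| / sqrt(u1^2 + u2^2); E_n < 1 indicates agreement."""
    return abs(g1 - g2) / math.sqrt(u1 ** 2 + u2 ** 2)

# illustrative values in μGal, offsets from a common reference (invented)
print(normalized_error(4.0, 6.0, 8.3, 7.0))  # ≈ 0.47
```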

  15. THE ABSOLUTE MAGNITUDE OF RRc VARIABLES FROM STATISTICAL PARALLAX

    Energy Technology Data Exchange (ETDEWEB)

    Kollmeier, Juna A.; Burns, Christopher R.; Thompson, Ian B.; Preston, George W.; Crane, Jeffrey D.; Madore, Barry F.; Morrell, Nidia; Prieto, José L.; Shectman, Stephen; Simon, Joshua D.; Villanueva, Edward [Observatories of the Carnegie Institution of Washington, 813 Santa Barbara Street, Pasadena, CA 91101 (United States); Szczygieł, Dorota M.; Gould, Andrew [Department of Astronomy, The Ohio State University, 4051 McPherson Laboratory, Columbus, OH 43210 (United States); Sneden, Christopher [Department of Astronomy, University of Texas at Austin, TX 78712 (United States); Dong, Subo [Institute for Advanced Study, 500 Einstein Drive, Princeton, NJ 08540 (United States)

    2013-09-20

    We present the first definitive measurement of the absolute magnitude of RR Lyrae c-type variable stars (RRc) determined purely from statistical parallax. We use a sample of 242 RRc variables selected from the All Sky Automated Survey for which high-quality light curves, photometry, and proper motions are available. We obtain high-resolution echelle spectra for these objects to determine radial velocities and abundances as part of the Carnegie RR Lyrae Survey. We find that M_V,RRc = 0.59 ± 0.10 at a mean metallicity of [Fe/H] = –1.59. This is to be compared with previous estimates for RRab stars (M_V,RRab = 0.76 ± 0.12) and the only direct measurement of an RRc absolute magnitude (RZ Cephei, M_V,RRc = 0.27 ± 0.17). We find the bulk velocity of the halo relative to the Sun to be (W_π, W_θ, W_z) = (12.0, –209.9, 3.0) km s⁻¹ in the radial, rotational, and vertical directions, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (150.4, 106.1, 96.0) km s⁻¹. For the disk, we find (W_π, W_θ, W_z) = (13.0, –42.0, –27.3) km s⁻¹ relative to the Sun, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (67.7, 59.2, 54.9) km s⁻¹. Finally, as a byproduct of our statistical framework, we are able to demonstrate that UCAC2 proper-motion errors are significantly overestimated, as verified by UCAC4.

  16. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Errors in translation made by English major students: A study on types and causes

    Directory of Open Access Journals (Sweden)

    Pattanapong Wongranu

    2017-05-01

    Many Thai English-major students have problems when they translate Thai texts into English, as numerous errors can be found. Therefore, a study of translation errors is needed to find solutions to these problems. The objectives of this research were: 1) to examine the types of translation errors in translation from Thai into English, 2) to determine the types of translation errors that are most common, and 3) to find possible explanations for the causes of errors. The results of this study will be used to improve translation teaching and the course "Translation from Thai into English". The participants were 26 third-year English-major students at Kasetsart University. The data were collected from the students' exercises and examinations. Interviews and stimulated recall were also used to determine translation problems and causes of errors. The data were analyzed by considering frequency and percentage, and by content analysis. The results show that the most frequent translation errors were syntactic errors (65%), followed by semantic errors (26.5%) and miscellaneous errors (8.5%), respectively. The causes of errors found in this study included translation procedures, carelessness, low self-confidence, and anxiety. It is recommended that more class time be spent addressing the problematic points. In addition, more authentic translation and group work should be implemented to increase self-confidence and decrease anxiety.

  18. Absolute calibration in vivo measurement systems

    International Nuclear Information System (INIS)

    Kruchten, D.A.; Hickman, D.P.

    1991-02-01

    Lawrence Livermore National Laboratory (LLNL) is currently investigating a new method for obtaining absolute calibration factors for radiation measurement systems used to measure internally deposited radionuclides in vivo. Absolute calibration of in vivo measurement systems will eliminate the need to generate a series of human surrogate structures (i.e., phantoms) for calibrating in vivo measurement systems. The absolute calibration of in vivo measurement systems utilizes magnetic resonance imaging (MRI) to define physiological structure, size, and composition. The MRI image provides a digitized representation of the physiological structure, which allows for any mathematical distribution of radionuclides within the body. Using Monte Carlo transport codes, the emission spectrum from the body is predicted. The in vivo measurement equipment is calibrated using the Monte Carlo code and adjusting for the intrinsic properties of the detection system. The calibration factors are verified using measurements of existing phantoms and previously obtained measurements of human volunteers. 8 refs

  19. Solar radiation estimation using sunshine hour and air pollution index in China

    International Nuclear Information System (INIS)

    Zhao, Na; Zeng, Xiaofan; Han, Shumin

    2013-01-01

    Highlights: • Aerosol can affect the coefficients of the A–P equation used to estimate solar radiation. • The logarithmic model performed best according to MBE, MABE, MPE, MAPE, RMSE and NSE. • The parameters of the A–P model can be adjusted by API, geographical position and altitude. • A general equation to estimate solar radiation was established for China. - Abstract: The Angström–Prescott (A–P) equation is the most widely used empirical relationship for estimating global solar radiation from sunshine hours. A new approach based on Air Pollution Index (API) data is introduced in this study to adjust the coefficients of the A–P equation. Based on daily solar radiation, sunshine hours and API data at nine meteorological stations from 2001 to 2011 in China, linear, exponential and logarithmic models are developed and validated. When evaluated with the performance indicators mean bias error, mean absolute bias error, mean percentage error, mean absolute percentage error, root mean square error, and Nash–Sutcliffe efficiency, the logarithmic model performed better than the other models. Empirical coefficients for the three models are then given for each station; the variations of these coefficients are affected by API, geographical position and altitude. This indicates that aerosol can play an important role in estimating solar radiation from sunshine hours, especially in highly polluted regions. Finally, a countrywide general equation is established based on sunshine hour data, API and geographical parameters, which can be used to estimate daily solar radiation in areas where radiation data are not available.
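
    The Angström–Prescott relation estimates daily global radiation H from sunshine duration as H/H0 = a + b·(n/N). A sketch with a hypothetical logarithmic dependence of the coefficients on the Air Pollution Index, mirroring the form of the paper's best model; the coefficient values are invented, not the fitted ones:

```python
import numpy as np

def angstrom_prescott(H0, n, N, api, a0=0.18, a1=0.02, b0=0.55, b1=-0.04):
    """H = H0 * (a(API) + b(API) * n/N), with log-adjusted coefficients.
    a0..b1 are illustrative values, not fitted results from the paper."""
    a = a0 + a1 * np.log(api)   # hypothetical logarithmic API adjustment
    b = b0 + b1 * np.log(api)
    return H0 * (a + b * n / N)

# one synthetic day: extraterrestrial radiation 30 MJ/m^2,
# 8 of 12 possible sunshine hours, API of 80
print(angstrom_prescott(H0=30.0, n=8.0, N=12.0, api=80))
```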

  20. Artificial neural network modelling of a large-scale wastewater treatment plant operation.

    Science.gov (United States)

    Güçlü, Dünyamin; Dursun, Sükrü

    2010-11-01

    Artificial Neural Networks (ANNs), a method of artificial intelligence, provide effective predictive models for complex processes. Three independent ANN models trained with the back-propagation algorithm were developed to predict effluent chemical oxygen demand (COD), suspended solids (SS) and aeration tank mixed liquor suspended solids (MLSS) concentrations of the Ankara central wastewater treatment plant. The appropriate architecture of the ANN models was determined through several steps of training and testing of the models. The ANN models yielded satisfactory predictions. Results of the root mean square error, mean absolute error and mean absolute percentage error were 3.23, 2.41 mg/L and 5.03% for COD; 1.59, 1.21 mg/L and 17.10% for SS; and 52.51, 44.91 mg/L and 3.77% for MLSS, respectively, indicating that the developed models could be used efficiently. The results overall also confirm that the ANN modelling approach may have great implementation potential for simulation, precise performance prediction and process control of wastewater treatment plants.
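
    A minimal scikit-learn analogue of the back-propagation models described above; the plant inputs and effluent COD values here are synthetic stand-ins for the real operating data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 6))   # stand-ins for influent COD, flow, temperature, ...
y = 40 + X @ rng.normal(size=6) + rng.normal(scale=2, size=300)  # synthetic effluent COD

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=4)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000,
                     random_state=4).fit(X_tr, y_tr)  # back-propagation training

pred = model.predict(X_te)
mape = 100 * np.mean(np.abs((y_te - pred) / y_te))
print("MAPE %:", round(mape, 2))
```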

  1. Applications and Comparisons of Four Time Series Models in Epidemiological Surveillance Data

    Science.gov (United States)

    Young, Alistair A.; Li, Xiaosong

    2014-01-01

    Public health surveillance systems provide valuable data for reliable prediction of future epidemic events. This paper describes a study that used nine types of infectious disease data collected through a national public health surveillance system in mainland China to evaluate and compare the performances of four time series methods, namely, two decomposition methods (regression and exponential smoothing), autoregressive integrated moving average (ARIMA) and support vector machine (SVM). The data obtained from 2005 to 2011 and in 2012 were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The accuracy of the statistical models in forecasting future epidemics proved their effectiveness in epidemiological surveillance. Although the comparisons found that no single method is completely superior to the others, the present study indeed highlighted that the SVM outperforms the ARIMA model and the decomposition methods in most cases. PMID:24505382
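
    One common way to apply an SVM to surveillance counts, as in the comparison above, is support vector regression on lagged values; a sketch with a synthetic monthly series (the lag length and hyperparameters are illustrative):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)
t = np.arange(96)
# synthetic monthly case counts with seasonality and noise
series = 50 + 20 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 4, 96)

lags = 12
X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
y = series[lags:]                 # predict next month from the last 12

split = 72                        # earlier months for fitting, later for testing
model = SVR(C=10.0, gamma="scale").fit(X[:split], y[:split])
pred = model.predict(X[split:])

mae = np.mean(np.abs(y[split:] - pred))
mape = 100 * np.mean(np.abs((y[split:] - pred) / y[split:]))
print(round(mae, 2), round(mape, 2))
```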

  2. Evaluation of Multiple Linear Regression-Based Limited Sampling Strategies for Enteric-Coated Mycophenolate Sodium in Adult Kidney Transplant Recipients.

    Science.gov (United States)

    Brooks, Emily K; Tett, Susan E; Isbel, Nicole M; McWhinney, Brett; Staatz, Christine E

    2018-04-01

    Although multiple linear regression-based limited sampling strategies (LSSs) have been published for enteric-coated mycophenolate sodium, none have been evaluated for the prediction of subsequent mycophenolic acid (MPA) exposure. This study aimed to examine the predictive performance of the published LSSs for the estimation of future MPA area under the concentration-time curve from 0 to 12 hours (AUC0-12) in renal transplant recipients. Total MPA plasma concentrations were measured in 20 adult renal transplant patients on 2 occasions a week apart. All subjects received concomitant tacrolimus and were approximately 1 month after transplant. Samples were taken at 0, 0.33, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 6, and 8 hours and 0, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2, 3, 4, 6, 9, and 12 hours after dose on the first and second sampling occasions, respectively. Predicted MPA AUC0-12 was calculated using 19 published LSSs and data from the first or second sampling occasion for each patient and compared with the second-occasion full MPA AUC0-12 calculated using the linear trapezoidal rule. Bias (median percentage prediction error) and imprecision (median absolute prediction error) were determined. Accurate prediction of full MPA AUC0-12 with a multiple linear regression-based LSS was not possible without concentrations up to at least 8 hours after the dose.
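
    The full AUC0-12 above is the linear trapezoidal area under the concentration-time curve, and bias and imprecision are medians of the (absolute) percentage prediction errors; a sketch with invented concentrations and hypothetical LSS-predicted AUCs:

```python
import numpy as np

def auc_trapezoid(times, conc):
    """Linear trapezoidal AUC over the sampled interval."""
    t, c = np.asarray(times, float), np.asarray(conc, float)
    return float(np.sum(np.diff(t) * (c[1:] + c[:-1]) / 2))

def prediction_error_stats(predicted, observed):
    """Bias = median percentage prediction error; imprecision = median absolute PE."""
    predicted, observed = np.asarray(predicted, float), np.asarray(observed, float)
    pe = 100 * (predicted - observed) / observed
    return np.median(pe), np.median(np.abs(pe))

times = [0, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2, 3, 4, 6, 9, 12]                # h
conc = [1.9, 8.2, 12.5, 11.0, 9.8, 8.0, 6.5, 5.1, 3.9, 3.0, 2.2, 1.5, 1.1]  # mg/L, invented
full_auc = auc_trapezoid(times, conc)

lss_pred = [full_auc * f for f in (0.93, 1.06, 0.88)]  # hypothetical LSS predictions
print(full_auc, prediction_error_stats(lss_pred, [full_auc] * 3))
```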

  3. Absolute high-resolution Se+ photoionization cross-section measurements with Rydberg-series analysis

    International Nuclear Information System (INIS)

    Esteves, D. A.; Bilodeau, R. C.; Sterling, N. C.; Phaneuf, R. A.; Kilcoyne, A. L. D.; Red, E. C.; Aguilar, A.

    2011-01-01

    Absolute single photoionization cross-section measurements for Se⁺ ions were performed at the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory using the photo-ion merged-beams technique. Measurements were made at a photon energy resolution of 5.5 meV from 17.75 to 21.85 eV, spanning the 4s²4p³ ⁴S°₃/₂ ground-state ionization threshold and the ²P°₃/₂, ²P°₁/₂, ²D°₅/₂, and ²D°₃/₂ metastable-state thresholds. Extensive analysis of the complex resonant structure in this region identified numerous Rydberg series of resonances and obtained the Se²⁺ 4s²4p² ³P₂ and 4s²4p² ¹S₀ state energies. In addition, particular attention was given to removing significant effects in the measurements due to a small percentage of higher-order undulator radiation.

  4. Absolute instrumental neutron activation analysis at Lawrence Livermore Laboratory

    International Nuclear Information System (INIS)

    Heft, R.E.

    1977-01-01

    The Environmental Science Division at Lawrence Livermore Laboratory has in use a system of absolute Instrumental Neutron Activation Analysis (INAA). Basically, absolute INAA is dependent upon the absolute measurement of the disintegration rates of the nuclides produced by neutron capture. From such disintegration rate data, the amount of the target element present in the irradiated sample is calculated by dividing the observed disintegration rate for each nuclide by the expected value for the disintegration rate per microgram of the target element that produced the nuclide. In absolute INAA, the expected value for disintegration rate per microgram is calculated from nuclear parameters and from measured values of both thermal and epithermal neutron fluxes which were present during irradiation. Absolute INAA does not depend on the concurrent irradiation of elemental standards but does depend on the values for thermal and epithermal neutron capture cross-sections for the target nuclides. A description of the analytical method is presented

  5. Nitrate leaching from a potato field using fuzzy inference system combined with genetic algorithm

    DEFF Research Database (Denmark)

    Shekofteh, Hosein; Afyuni, Majid M; Hajabbasi, Mohammad-Ali

    2012-01-01

    The conventional application of nitrogen fertilizers via irrigation is likely to be responsible for the increased nitrate concentration in groundwater of areas dominated by irrigated agriculture. This requires appropriate water and nutrient management to minimize groundwater pollution… …in the MFIS were tuned by a genetic algorithm. The correlation coefficient, normalized root mean square error and relative mean absolute error percentage between the data obtained by HYDRUS-2D and the values estimated with the MFIS model were 0.986, 0.086 and 2.38, respectively. It appears that MFIS can predict…

  6. Absolute Navigation Information Estimation for Micro Planetary Rovers

    Directory of Open Access Journals (Sweden)

    Muhammad Ilyas

    2016-03-01

    This paper provides algorithms to estimate absolute navigation information, e.g., absolute attitude and position, by using low-power, low-weight and low-volume microelectromechanical systems (MEMS)-type sensors that are suitable for micro planetary rovers. Planetary rovers appear to be easily navigable robots due to their extremely slow speed and rotation but, unfortunately, the sensor suites available for terrestrial robots are not always available for planetary rover navigation. This makes them difficult to navigate in a completely unexplored, harsh and complex environment. Whereas the relative attitude and position can be tracked in a similar way as for ground robots, absolute navigation information, unlike in terrestrial applications, is difficult to obtain for a remote celestial body, such as Mars or the Moon. In this paper, an algorithm called EASI (Estimation of Attitude using Sun sensor and Inclinometer) is presented to estimate the absolute attitude using only a MEMS-type sun sensor and an inclinometer. Moreover, the output of the EASI algorithm is fused with MEMS gyros to produce more accurate and reliable attitude estimates. An absolute position estimation algorithm is also presented based on these on-board sensors. Experimental results demonstrate the viability of the proposed algorithms and the sensor suite for low-cost and low-weight micro planetary rovers.

  7. Ciliates learn to diagnose and correct classical error syndromes in mating strategies.

    Science.gov (United States)

    Clark, Kevin B

    2013-01-01

    Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by "rivals" and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via "power" or "refrigeration" cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social

  8. Ciliates learn to diagnose and correct classical error syndromes in mating strategies

    Directory of Open Access Journals (Sweden)

    Kevin Bradley Clark

    2013-08-01

    Full Text Available Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by rivals and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via power or refrigeration cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and nonmodal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in

  9. Dosimetric Implications of Residual Tracking Errors During Robotic SBRT of Liver Metastases

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Mark [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Tuen Mun Hospital, Hong Kong (China); Grehn, Melanie [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); Cremers, Florian [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Siebert, Frank-Andre [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Wurster, Stefan [Saphir Radiosurgery Center Northern Germany, Güstrow (Germany); Department for Radiation Oncology, University Medicine Greifswald, Greifswald (Germany); Huttenlocher, Stefan [Saphir Radiosurgery Center Northern Germany, Güstrow (Germany); Dunst, Jürgen [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Department for Radiation Oncology, University Clinic Copenhagen, Copenhagen (Denmark); Hildebrandt, Guido [Department for Radiation Oncology, University Medicine Rostock, Rostock (Germany); Schweikard, Achim [Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); Rades, Dirk [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Ernst, Floris [Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); and others

    2017-03-15

    Purpose: Although the metric precision of robotic stereotactic body radiation therapy in the presence of breathing motion is widely known, we investigated the dosimetric implications of breathing phase–related residual tracking errors. Methods and Materials: In 24 patients (28 liver metastases) treated with the CyberKnife, we recorded the residual correlation, prediction, and rotational tracking errors from 90 fractions and binned them into 10 breathing phases. The average breathing phase errors were used to shift and rotate the clinical tumor volume (CTV) and planning target volume (PTV) for each phase to calculate a pseudo 4-dimensional error dose distribution for comparison with the original planned dose distribution. Results: The median systematic directional correlation, prediction, and absolute aggregate rotation errors were 0.3 mm (range, 0.1-1.3 mm), 0.01 mm (range, 0.00-0.05 mm), and 1.5° (range, 0.4°-2.7°), respectively. Dosimetrically, 44%, 81%, and 92% of all voxels differed by less than 1%, 3%, and 5% of the planned local dose, respectively. The median coverage reduction for the PTV was 1.1% (range in coverage difference, −7.8% to +0.8%), significantly depending on correlation (P=.026) and rotational (P=.005) error. With a 3-mm PTV margin, the median coverage change for the CTV was 0.0% (range, −1.0% to +5.4%), not significantly depending on any investigated parameter. In 42% of patients, the 3-mm margin did not fully compensate for the residual tracking errors, resulting in a CTV coverage reduction of 0.1% to 1.0%. Conclusions: For liver tumors treated with robotic stereotactic body radiation therapy, a safety margin of 3 mm is not always sufficient to cover all residual tracking errors. Dosimetrically, this translates into only small CTV coverage reductions.

  10. Dosimetric Implications of Residual Tracking Errors During Robotic SBRT of Liver Metastases

    International Nuclear Information System (INIS)

    Chan, Mark; Grehn, Melanie; Cremers, Florian; Siebert, Frank-Andre; Wurster, Stefan; Huttenlocher, Stefan; Dunst, Jürgen; Hildebrandt, Guido; Schweikard, Achim; Rades, Dirk; Ernst, Floris

    2017-01-01

    Purpose: Although the metric precision of robotic stereotactic body radiation therapy in the presence of breathing motion is widely known, we investigated the dosimetric implications of breathing phase–related residual tracking errors. Methods and Materials: In 24 patients (28 liver metastases) treated with the CyberKnife, we recorded the residual correlation, prediction, and rotational tracking errors from 90 fractions and binned them into 10 breathing phases. The average breathing phase errors were used to shift and rotate the clinical tumor volume (CTV) and planning target volume (PTV) for each phase to calculate a pseudo 4-dimensional error dose distribution for comparison with the original planned dose distribution. Results: The median systematic directional correlation, prediction, and absolute aggregate rotation errors were 0.3 mm (range, 0.1-1.3 mm), 0.01 mm (range, 0.00-0.05 mm), and 1.5° (range, 0.4°-2.7°), respectively. Dosimetrically, 44%, 81%, and 92% of all voxels differed by less than 1%, 3%, and 5% of the planned local dose, respectively. The median coverage reduction for the PTV was 1.1% (range in coverage difference, −7.8% to +0.8%), significantly depending on correlation (P=.026) and rotational (P=.005) error. With a 3-mm PTV margin, the median coverage change for the CTV was 0.0% (range, −1.0% to +5.4%), not significantly depending on any investigated parameter. In 42% of patients, the 3-mm margin did not fully compensate for the residual tracking errors, resulting in a CTV coverage reduction of 0.1% to 1.0%. Conclusions: For liver tumors treated with robotic stereotactic body radiation therapy, a safety margin of 3 mm is not always sufficient to cover all residual tracking errors. Dosimetrically, this translates into only small CTV coverage reductions.

  11. Oligomeric models for estimation of polydimethylsiloxane-water partition ratios with COSMO-RS theory: impact of the combinatorial term on absolute error.

    Science.gov (United States)

    Parnis, J Mark; Mackay, Donald

    2017-03-22

    A series of 12 oligomeric models for polydimethylsiloxane (PDMS) were evaluated for their effectiveness in estimating the PDMS-water partition ratio, K_PDMS-w. Models ranging in size and complexity from the -Si(CH3)2-O- model previously published by Goss in 2011 to octadeca-methyloctasiloxane (CH3-(Si(CH3)2-O-)8CH3) were assessed based on their RMS error with 253 experimental measurements of log K_PDMS-w from six published works. The lowest RMS error for log K_PDMS-w (0.40 in log K) was obtained with the cyclic oligomer, decamethyl-cyclo-penta-siloxane (D5), (-Si(CH3)2-O-)5, with the mixing-entropy-associated combinatorial term included in the chemical potential calculation. The presence or absence of terminal methyl groups on linear oligomer models is shown to have a significant impact only for oligomers containing 1 or 2 -Si(CH3)2-O- units. Removal of the combinatorial term resulted in a significant increase in the RMS error for most models, with the smallest increase associated with the largest oligomer studied. The importance of inclusion of the combinatorial term in the chemical potential for liquid oligomer models is discussed.
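
    The model-selection criterion above is the RMS error in log K between predicted and measured partition ratios. A minimal Python sketch of that computation follows; the values are hypothetical stand-ins, not the study's 253 measurements.

```python
import numpy as np

# Hypothetical predicted vs. experimental log K_PDMS-w values for a few solutes;
# the study's actual dataset held 253 measurements from six published works.
log_k_pred = np.array([2.10, 3.45, 1.80, 4.20, 2.95])
log_k_exp  = np.array([2.35, 3.10, 1.95, 4.55, 2.60])

# RMS error in log K, the statistic used to rank the 12 oligomeric models.
rms = np.sqrt(np.mean((log_k_pred - log_k_exp) ** 2))
print(f"RMS error in log K: {rms:.2f}")
```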

  12. Percentage Retail Mark-Ups

    OpenAIRE

    Thomas von Ungern-Sternberg

    1999-01-01

    A common assumption in the literature on the double marginalization problem is that the retailer can set his mark-up only in the second stage of the game after the producer has moved. To the extent that the sequence of moves is designed to reflect the relative bargaining power of the two parties it is just as plausible to let the retailer move first. Furthermore, retailers frequently calculate their selling prices by adding a percentage mark-up to their wholesale prices. This allows a retaile...

  13. Absolute-magnitude distributions of supernovae

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, Dean; Wright, John [Department of Physics, Xavier University of Louisiana, New Orleans, LA 70125 (United States); Jenkins III, Robert L. [Applied Physics Department, Richard Stockton College, Galloway, NJ 08205 (United States); Maddox, Larry, E-mail: drichar7@xula.edu [Department of Chemistry and Physics, Southeastern Louisiana University, Hammond, LA 70402 (United States)

    2014-05-01

    The absolute-magnitude distributions of seven supernova (SN) types are presented. The data used here were primarily taken from the Asiago Supernova Catalogue, but were supplemented with additional data. We accounted for both foreground and host-galaxy extinction. A bootstrap method is used to correct the samples for Malmquist bias. Separately, we generate volume-limited samples, restricted to events within 100 Mpc. We find that the superluminous events (M_B < –21) make up only about 0.1% of all SNe in the bias-corrected sample. The subluminous events (M_B > –15) make up about 3%. The normal Ia distribution was the brightest with a mean absolute blue magnitude of –19.25. The IIP distribution was the dimmest at –16.75.
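
    As a rough illustration of the bootstrap step mentioned above, the sketch below resamples a synthetic magnitude sample to attach an uncertainty to its mean; the paper's actual procedure additionally reweights events to correct for Malmquist bias, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic extinction-corrected absolute blue magnitudes for one SN type.
mags = rng.normal(-19.25, 0.5, size=200)

# Bootstrap resampling of the sample mean.
boot_means = np.array([rng.choice(mags, size=mags.size, replace=True).mean()
                       for _ in range(5000)])
print(f"mean M_B = {boot_means.mean():.2f} +/- {boot_means.std(ddof=1):.2f}")
```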

  14. Calibration with Absolute Shrinkage

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Madsen, Henrik; Thyregod, Poul

    2001-01-01

    In this paper, penalized regression using the L-1 norm on the estimated parameters is proposed for chemometric calibration. The algorithm is of the lasso type, introduced by Tibshirani in 1996 as a linear regression method with a bound on the absolute length of the parameters, but a modification...
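
    A lasso-type estimator bounds the L1 norm of the coefficients, shrinking many of them exactly to zero. A minimal sketch on synthetic calibration-style data using scikit-learn's Lasso (the data, dimensions, and alpha value are illustrative, not from the paper):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Synthetic calibration problem: 40 samples, 100 collinear channels,
# only a handful of channels truly informative.
X = rng.normal(size=(40, 100))
true_coef = np.zeros(100)
true_coef[:4] = [1.5, -2.0, 0.8, 1.1]
y = X @ true_coef + rng.normal(scale=0.1, size=40)

# alpha controls the strength of the L1 penalty on the coefficients.
model = Lasso(alpha=0.05).fit(X, y)
print("nonzero coefficients:", int(np.sum(model.coef_ != 0)))
```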

  15. ERROR REDUCTION IN DUCT LEAKAGE TESTING THROUGH DATA CROSS-CHECKS

    Energy Technology Data Exchange (ETDEWEB)

    ANDREWS, J.W.

    1998-12-31

    One way to reduce uncertainty in scientific measurement is to devise a protocol in which more quantities are measured than are absolutely required, so that the result is overconstrained. This report develops a method for so combining data from two different tests for air leakage in residential duct systems. An algorithm, which depends on the uncertainty estimates for the measured quantities, optimizes the use of the excess data. In many cases it can significantly reduce the error bar on at least one of the two measured duct leakage rates (supply or return), and it provides a rational method of reconciling any conflicting results from the two leakage tests.
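
    The report's algorithm depends on its specific uncertainty model for the two tests, but the general principle of exploiting overconstrained data can be sketched with inverse-variance weighting of two independent estimates; all numbers below are hypothetical.

```python
import math

# Two independent estimates of the same duct leakage rate (hypothetical, in CFM),
# each with a 1-sigma uncertainty.
leak_a, sigma_a = 120.0, 30.0   # from test 1
leak_b, sigma_b = 95.0, 15.0    # from test 2

# Inverse-variance weighting: the combined estimate carries a smaller
# error bar than either input measurement.
w_a, w_b = 1.0 / sigma_a**2, 1.0 / sigma_b**2
combined = (w_a * leak_a + w_b * leak_b) / (w_a + w_b)
sigma_c = math.sqrt(1.0 / (w_a + w_b))
print(f"combined leakage: {combined:.1f} +/- {sigma_c:.1f} CFM")
```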

  16. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research that has addressed their error causation in construction projects they still remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  17. Coral Reef Coverage Percentage on Binor Paiton-Probolinggo Seashore

    Directory of Open Access Journals (Sweden)

    Dwi Budi Wiyanto

    2016-01-01

    Full Text Available The coral reef damage in the Probolinggo region is thought to be caused by several factors. The first is local fishing practice that exploits the fishery using cyanide and explosives. The second is the extraction of coral, which is used for decoration or as construction material. A further factor is likely the presence of large industry on the seashore, such as the Paiton Electric Steam Power Plant (PLTU) and similar facilities. For the development of the coral reef ecosystem, accurate data are crucially needed to support future policy, so surveys of coral reef coverage percentage need to be conducted continuously. The aim of this research is to collect biological data on the coral reef and to identify the coral reef coverage percentage, in order to construct baseline data on coral reef condition on the Binor seashore, Paiton, Probolinggo regency. The method used is the Line Intercept Transect (LIT) method, which determines the benthic community on a coral reef from its percentage cover, recording the benthic categories along a transect line. The percentage of living coral coverage at 3 meters depth on the Binor Paiton seashore is 57.65%, which may be categorized as good condition; the rest is dead coral at only 1.45%, other life forms at 23.2%, and non-life forms at 17.7%. The good condition of the coral reef results from coral transplantation on the seashore, so the reef is dominated by Acropora Branching. The Mortality Index (IM) of the coral reef is 24.5%. Observation and calculation show the reef is dominated by hard coral as Acropora Branching (ACB) with a coverage percentage of 39%, Coral Massive (CM) at 2.85%, Coral Foliose (CF) at 1.6%, and Coral Mushroom (CRM) at 8.5%. Observation in 10 meters depth

  18. Coral Reef Coverage Percentage on Binor Paiton-Probolinggo Seashore

    Directory of Open Access Journals (Sweden)

    Dwi Budi Wiyanto

    2016-02-01

    Full Text Available The coral reef damage in the Probolinggo region is thought to be caused by several factors. The first is local fishing practice that exploits the fishery using cyanide and explosives. The second is the extraction of coral, which is used for decoration or as construction material. A further factor is likely the presence of large industry on the seashore, such as the Paiton Electric Steam Power Plant (PLTU) and similar facilities. For the development of the coral reef ecosystem, accurate data are crucially needed to support future policy, so surveys of coral reef coverage percentage need to be conducted continuously. The aim of this research is to collect biological data on the coral reef and to identify the coral reef coverage percentage, in order to construct baseline data on coral reef condition on the Binor seashore, Paiton, Probolinggo regency. The method used is the Line Intercept Transect (LIT) method, which determines the benthic community on a coral reef from its percentage cover, recording the benthic categories along a transect line. The percentage of living coral coverage at 3 meters depth on the Binor Paiton seashore is 57.65%, which may be categorized as good condition; the rest is dead coral at only 1.45%, other life forms at 23.2%, and non-life forms at 17.7%. The good condition of the coral reef results from coral transplantation on the seashore, so the reef is dominated by Acropora Branching. The Mortality Index (IM) of the coral reef is 24.5%. Observation and calculation show the reef is dominated by hard coral as Acropora Branching (ACB) with a coverage percentage of 39%, Coral Massive (CM) at 2.85%, Coral Foliose (CF) at 1.6%, and Coral Mushroom (CRM) at 8.5%. Observation in 10 meters depth

  19. Efficacy of intrahepatic absolute alcohol in unresectable hepatocellular carcinoma

    International Nuclear Information System (INIS)

    Farooqi, J.I.; Hameed, K.; Khan, I.U.; Shah, S.

    2001-01-01

    To determine the efficacy of intrahepatic absolute alcohol injection in unresectable hepatocellular carcinoma. A randomized, controlled, experimental and interventional clinical trial. Gastroenterology Department, PGMI, Hayatabad Medical Complex, Peshawar, during the period from June 1998 to June 2000. Thirty patients were treated by percutaneous intrahepatic absolute alcohol injections in repeated sessions; 33 patients were not treated with alcohol and served as controls. Both groups were comparable for age, sex and other baseline characteristics. Absolute alcohol therapy significantly improved the quality of life of patients, reduced tumor size and mortality, and showed significantly better results regarding survival (P < 0.05) than the control group. We conclude that absolute alcohol is a beneficial and safe palliative treatment measure in advanced hepatocellular carcinoma (HCC). (author)

  20. Planck absolute entropy of a rotating BTZ black hole

    Science.gov (United States)

    Riaz, S. M. Jawwad

    2018-04-01

    In this paper, the Planck absolute entropy and the Bekenstein-Smarr formula of the rotating Banados-Teitelboim-Zanelli (BTZ) black hole are presented via a complex thermodynamical system contributed by its inner and outer horizons. The redefined entropy approaches zero as the temperature of the rotating BTZ black hole tends to absolute zero, satisfying the Nernst formulation of a black hole. Hence, it can be regarded as the Planck absolute entropy of the rotating BTZ black hole.

  1. Absolute nuclear material assay using count distribution (LAMBDA) space

    Science.gov (United States)

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2012-06-05

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  2. Error Analysis Of Students Working About Word Problem Of Linear Program With NEA Procedure

    Science.gov (United States)

    Santoso, D. A.; Farid, A.; Ulum, B.

    2017-06-01

    Evaluation and assessment are an important part of learning. In the evaluation of learning, written tests are still commonly used, but they are usually not followed up by further evaluation: the process stops at the grading stage and does not examine the processes and errors behind students' work. Yet if a student shows a pattern of errors or process errors, remedial action can be focused on the fault and on why it happens. The NEA procedure provides a way for educators to evaluate student progress more comprehensively. In this study, students' mistakes in working on word problems about linear programming have been analyzed. The results show that the errors students most often make occur in the modeling (transformation) phase and in process skills, with overall percentages of 20% and 15%, respectively. According to the observations, these errors occur most commonly due to students' lack of precision in modeling and to haste in calculation. Through error analysis with students on this material, it is expected that educators can determine or use the right way to address it in the next lesson.

  3. Verification of setup errors in external beam radiation therapy using electronic portal imaging

    International Nuclear Information System (INIS)

    Krishna Murthy, K.; Al-Rahbi, Zakiya; Sivakumar, S.S.; Davis, C.A.; Ravichandran, R.

    2008-01-01

    The objective of this study was to conduct an audit on QA aspects of treatment delivery by verifying the treatment field positions on different days, to document the efficiency of immobilization methods and the reproducibility of treatment. A retrospective study was carried out on 60 patients, 20 each treated for head and neck, breast, and pelvic sites; a total of 506 images obtained by an electronic portal imaging device (EPID) were analyzed. The portal images acquired using the EPID systems attached to the Varian linear accelerators were superimposed on the reference images. The anatomy-matching software (Varian Portal Vision 6.0) was used, and the displacements in two dimensions and rotation were noted for each treated field to study the patient setup errors. The percentages of mean deviations of more than 3 mm in the lateral (X) and longitudinal (Y) directions were 17.5%, 11.25%, and 7.5% for breast, pelvis, and head and neck cases, respectively. In all cases, the percentage of mean deviations with more than 5 mm error was 0.83%. The maximum average mean deviation in all cases was 1.87. The average mean SD along the X and Y directions in all cases was less than 2.65. The results revealed that the ranges of setup errors are site specific and that immobilization methods improve reproducibility. The observed variations were well within the limits. The study confirmed the accuracy and quality of the treatments delivered to the patients. (author)

  4. Day-Ahead Probabilistic Model for Scheduling the Operation of a Wind Pumped-Storage Hybrid Power Station: Overcoming Forecasting Errors to Ensure Reliability of Supply to the Grid

    Directory of Open Access Journals (Sweden)

    Jakub Jurasz

    2018-06-01

    Full Text Available Variable renewable energy sources (VRES), such as solar photovoltaic (PV) and wind turbines (WT), are starting to play a significant role in several energy systems around the globe. To overcome the problem of their non-dispatchable and stochastic nature, several approaches have been proposed so far. This paper describes a novel mathematical model for scheduling the operation of a wind-powered pumped-storage hydroelectricity (PSH) hybrid for 25 to 48 h ahead. The model is based on mathematical programming and wind speed forecasts for the next 1 to 24 h, along with predicted upper reservoir occupancy for the 24th hour ahead. The results indicate that by coupling a 2-MW conventional wind turbine with a PSH of energy storing capacity equal to 54 MWh, it is possible to significantly reduce the intraday energy generation coefficient of variation from 31% for a pure wind turbine to 1.15% for a wind-powered PSH. The scheduling errors calculated based on mean absolute percentage error (MAPE) are significantly smaller for such a coupling than those seen for wind generation forecasts, at 2.39% and 27%, respectively. This is emphasized even more strongly by the fact that those for wind generation were calculated for forecasts made for the next 1 to 24 h, while those for scheduled generation were calculated for forecasts made for the next 25 to 48 h. The results clearly show that the proposed scheduling approach ensures the high reliability of the WT-PSH energy source.
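
    The headline result above is a reduction of the intraday coefficient of variation (CV) from 31% to 1.15%. A sketch of that statistic on synthetic hourly profiles follows; the flat "PSH-buffered" profile is a stand-in, not the paper's scheduling optimization.

```python
import numpy as np

def coeff_of_variation(power):
    """Intraday coefficient of variation of a generation profile, in percent."""
    return 100.0 * power.std() / power.mean()

rng = np.random.default_rng(2)

# Synthetic hourly generation (MW): volatile wind vs. a PSH-smoothed output.
wind = np.clip(1.0 + 0.4 * rng.normal(size=24), 0.1, None)
smoothed = wind.mean() * (1.0 + 0.01 * rng.normal(size=24))

print(f"CV, wind alone   : {coeff_of_variation(wind):.1f}%")
print(f"CV, wind with PSH: {coeff_of_variation(smoothed):.1f}%")
```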

  5. THE DISKMASS SURVEY. II. ERROR BUDGET

    International Nuclear Information System (INIS)

    Bershady, Matthew A.; Westfall, Kyle B.; Verheijen, Marc A. W.; Martinsson, Thomas; Andersen, David R.; Swaters, Rob A.

    2010-01-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface-brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ*_disk), and disk maximality (F*,max_disk ≡ V*,max_disk / V_c). Random and systematic errors in these quantities for individual galaxies will be ∼25%, while survey precision for sample quartiles are reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  6. Writing Skill and Categorical Error Analysis: A Study of First Year Undergraduate University Students

    Directory of Open Access Journals (Sweden)

    Adnan Satariyan

    2014-09-01

    Full Text Available This study identifies and analyses the common errors in the writing skill of first-year students of Azad University of South Tehran Branch in relation to their first language (L1), the type of high school they graduated from, and their exposure to media and technology in order to learn English. It also determines the categories in which the errors are committed (content, organisation/discourse, vocabulary, mechanics, or syntax) and whether or not there is a significant difference in the percentage of errors committed across these categories. Participants of this study are 190 first-year students who are asked to write an essay. An error analysis model adapted from Brown (2001) and Gayeta (2002) is then used to evaluate the essay writings in terms of content, organisation, vocabulary, mechanics, and syntax or language use. The results of the study show that the students have greater difficulties in organisation, content, and vocabulary and less difficulty in mechanics and syntax.

  7. Reduction in specimen labeling errors after implementation of a positive patient identification system in phlebotomy.

    Science.gov (United States)

    Morrison, Aileen P; Tanasijevic, Milenko J; Goonan, Ellen M; Lobo, Margaret M; Bates, Michael M; Lipsitz, Stuart R; Bates, David W; Melanson, Stacy E F

    2010-06-01

    Ensuring accurate patient identification is central to preventing medical errors, but it can be challenging. We implemented a bar code-based positive patient identification system for use in inpatient phlebotomy. A before-after design was used to evaluate the impact of the identification system on the frequency of mislabeled and unlabeled samples reported in our laboratory. Labeling errors fell from 5.45 in 10,000 before implementation to 3.2 in 10,000 afterward (P = .0013). An estimated 108 mislabeling events were prevented by the identification system in 1 year. Furthermore, a workflow step requiring manual preprinting of labels, which was accompanied by potential labeling errors in about one quarter of blood "draws," was removed as a result of the new system. After implementation, a higher percentage of patients reported having their wristband checked before phlebotomy. Bar code technology significantly reduced the rate of specimen identification errors.
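
    To see how such a before/after difference in rare-event rates can be tested, here is a two-proportion z-test sketch. The published rates are 5.45 and 3.2 per 10,000; the sample sizes below are assumptions chosen only to produce rates of that magnitude, so the resulting p-value is illustrative.

```python
import math
from scipy.stats import norm

# Hypothetical counts consistent with the published rates per 10,000 labels.
errors_before, n_before = 109, 200_000   # ~5.45 per 10,000
errors_after,  n_after  = 64,  200_000   # ~3.2 per 10,000

p1, p2 = errors_before / n_before, errors_after / n_after
p_pool = (errors_before + errors_after) / (n_before + n_after)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_before + 1 / n_after))
z = (p1 - p2) / se
p_value = 2 * norm.sf(abs(z))   # two-sided
print(f"z = {z:.2f}, p = {p_value:.4f}")
```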

  8. Potential errors in optical density measurements due to scanning side in EBT and EBT2 Gafchromic film dosimetry.

    Science.gov (United States)

    Desroches, Joannie; Bouchard, Hugo; Lacroix, Frédéric

    2010-04-01

    The purpose of this study is to determine the effect on the measured optical density of scanning on either side of a Gafchromic EBT and EBT2 film using an Epson (Epson Canada Ltd., Toronto, Ontario) 10000XL flat bed scanner. Calibration curves were constructed using EBT2 film scanned in landscape orientation in both reflection and transmission mode on an Epson 10000XL scanner. Calibration curves were also constructed using EBT film. Potential errors due to an optical density difference from scanning the film on either side ("face up" or "face down") were simulated. Scanning the film face up or face down on the scanner bed while keeping the film angular orientation constant affects the measured optical density when scanning in reflection mode. In contrast, no statistically significant effect was seen when scanning in transmission mode. This effect can significantly affect relative and absolute dose measurements. As an application example, the authors demonstrate potential errors of 17.8% by inverting the film scanning side on the gamma index for 3%-3 mm criteria on a head and neck intensity modulated radiotherapy plan, and errors in absolute dose measurements ranging from 10% to 35% between 2 and 5 Gy. Process consistency is the key to obtaining accurate and precise results in Gafchromic film dosimetry. When scanning in reflection mode, care must be taken to place the film consistently on the same side on the scanner bed.

  9. Modeling of electricity consumption in the Asian gaming and tourism center - Macao SAR, People's Republic of China

    Energy Technology Data Exchange (ETDEWEB)

    Lai, T.M.; To, W.M. [School of Business, Macao Polytechnic Institute, Macao SAR (China); Lo, W.C. [Department of Electrical Engineering, Hong Kong Polytechnic University, Hong Kong SAR (China); Choy, Y.S. [Department of Mechanical Engineering, Hong Kong Polytechnic University, Hong Kong SAR (China)

    2008-05-15

    The use of electricity is indispensable to modern life. As Macao Special Administrative Region becomes a gaming and tourism center in Asia, modeling the consumption of electricity is critical to Macao's economic development. The purposes of this paper are to conduct an extensive literature review on modeling of electricity consumption, and to identify key climatic, demographic, economic and/or industrial factors that may affect the electricity consumption of a country/city. It was identified that the five factors, namely temperature, population, the number of tourists, hotel room occupancy and days per month, could be used to characterize Macao's monthly electricity consumption. Three selected approaches including multiple regression, artificial neural network (ANN) and wavelet ANN were used to derive mathematical models of the electricity consumption. The accuracy of these models was assessed by using the mean squared error (MSE), the mean squared percentage error (MSPE) and the mean absolute percentage error (MAPE). The error analysis shows that wavelet ANN has a very promising forecasting capability and can reveal the periodicity of electricity consumption. (author)
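
    Of the three approaches named, the multiple-regression variant is easy to sketch: regress monthly consumption on the five identified factors. Everything below (units, coefficients, sample size) is synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 48  # four years of synthetic monthly observations

# The five factors identified in the study (all values synthetic).
temperature = rng.uniform(15, 32, n)       # monthly mean, deg C
population  = rng.uniform(0.50, 0.56, n)   # millions
tourists    = rng.uniform(1.5, 2.8, n)     # millions per month
occupancy   = rng.uniform(0.60, 0.95, n)   # hotel room occupancy fraction
days        = rng.choice([28.0, 30.0, 31.0], n)

X = np.column_stack([np.ones(n), temperature, population, tourists, occupancy, days])
consumption = X @ np.array([50.0, 8.0, 300.0, 20.0, 60.0, 4.0]) + rng.normal(0, 15, n)

# Ordinary least-squares fit of the multiple-regression model.
coef, *_ = np.linalg.lstsq(X, consumption, rcond=None)
print("fitted coefficients:", np.round(coef, 1))
```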

  10. Approach to Absolute Zero

    Indian Academy of Sciences (India)

    Approach to Absolute Zero: Below 10 milli-Kelvin. R Srinivasan. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 10, October 1997, pp 8-16. Permanent link: https://www.ias.ac.in/article/fulltext/reso/002/10/0008-0016

  11. Absolute total and one and two electron transfer cross sections for Ar8+ on Ar as a function of energy

    International Nuclear Information System (INIS)

    Vancura, J.; Kostroun, V.O.

    1992-01-01

    The absolute total and one and two electron transfer cross sections for Ar8+ on Ar were measured as a function of projectile laboratory energy from 0.090 to 0.550 keV/amu. The effective one electron transfer cross section dominates above 0.32 keV/amu, while below this energy the effective two electron transfer starts to become appreciable. The total cross section varies by a factor over the energy range explored. The overall error in the cross section measurement is estimated to be ±15%.

  12. Population-based absolute risk estimation with survey data

    Science.gov (United States)

    Kovalchik, Stephanie A.; Pfeiffer, Ruth M.

    2013-01-01

    Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614

  13. Absolute marine gravimetry with matter-wave interferometry.

    Science.gov (United States)

    Bidel, Y; Zahzam, N; Blanchard, C; Bonnin, A; Cadoret, M; Bresson, A; Rouxel, D; Lequentrec-Lalancette, M F

    2018-02-12

    Measuring gravity from an aircraft or a ship is essential in geodesy, geophysics, mineral and hydrocarbon exploration, and navigation. Today, only relative sensors are available for onboard gravimetry. This is a major drawback because of the calibration and drift estimation procedures, which lead to important operational constraints. Atom interferometry is a promising technology for obtaining an onboard absolute gravimeter. But, despite the high performance obtained in static conditions, no precise measurements had been reported in dynamic conditions. Here, we present absolute gravity measurements from a ship with a sensor based on atom interferometry. Despite rough sea conditions, we obtained precision below 10⁻⁵ m s⁻². The atom gravimeter was also compared with a commercial spring gravimeter and showed better performance. This demonstration opens the way to the next generation of inertial sensors (accelerometer, gyroscope) based on atom interferometry, which should provide high-precision absolute measurements from a moving platform.

  14. Determination of percentage of caffeine content

    African Journals Online (AJOL)

    userpc

    ABSTRACT. Two methods were employed for the determination of the percentage caffeine content in three brands of analgesic tablets: extraction using only water as a solvent, and extraction using both water and chloroform as solvents, with a watch glass used as the weighing apparatus. The percentages of caffeine determined using only water for ..., Boska, and Panadol Extra were 7.40%, 5.60% ..., and the percentages of caffeine using both water and chloroform ...

  15. Spelling errors among children with ADHD symptoms: the role of working memory.

    Science.gov (United States)

    Re, Anna Maria; Mirandola, Chiara; Esposito, Stefania Sara; Capodieci, Agnese

    2014-09-01

    Research has shown that children with attention deficit/hyperactivity disorder (ADHD) may present a series of academic difficulties, including spelling errors. Given that correct spelling is supported by the phonological component of working memory (PWM), the present study examined whether or not the spelling difficulties of children with ADHD are emphasized when children's PWM is overloaded. A group of 19 children with ADHD symptoms (between 8 and 11 years of age), and a group of typically developing children matched for age, schooling, gender, rated intellectual abilities, and socioeconomic status, were administered two dictation texts: one under typical conditions and one under a pre-load condition that required the participants to remember a series of digits while writing. The results confirmed that children with ADHD symptoms have spelling difficulties, produce a higher percentage of errors compared to the control-group children, and that these difficulties are enhanced under a higher load of PWM. An analysis of errors showed that this holds true especially for phonological errors. The increase in errors in the PWM condition was not due to a tradeoff between working memory and writing, as children with ADHD also performed more poorly on the PWM task. The theoretical and practical implications are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. A nonlinear model of gold production in Malaysia

    Science.gov (United States)

    Ramli, Norashikin; Muda, Nora; Umor, Mohd Rozi

    2014-06-01

    Malaysia is a country rich in natural resources, and one of them is gold. Gold has already become an important national commodity. This study is conducted to determine a model that fits well the gold production in Malaysia over the years 1995-2010. Five nonlinear models are considered: the Logistic, Gompertz, Richard, Weibull and Chapman-Richard models. These models are used to fit the cumulative gold production in Malaysia, and the best model is then selected based on performance. The performance of the fitted models is measured by the sum of squared errors, root mean squared error, coefficient of determination, mean relative error, mean absolute error and mean absolute percentage error. This study found that the Weibull model significantly outperforms the other models. To confirm that Weibull is the best model, the latest data were fitted to the model; once again, the Weibull model gave the lowest values on all error measures. We conclude that future gold production in Malaysia can be predicted according to the Weibull model, and this could be an important finding for Malaysia in planning its economic activities.
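
    A sketch of the winning step: fitting a Weibull-type growth curve to cumulative production and scoring it with RMSE. The functional form a(1 - exp(-b·t^c)) and all data are assumptions for illustration, not the study's series.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_growth(t, a, b, c):
    """Weibull-type growth curve for cumulative production."""
    return a * (1.0 - np.exp(-b * t**c))

# Synthetic cumulative gold production (tonnes) for 16 years (1995-2010).
t = np.arange(1, 17, dtype=float)
y = weibull_growth(t, 75.0, 0.02, 1.8) + np.random.default_rng(4).normal(0, 0.5, 16)

params, _ = curve_fit(weibull_growth, t, y, p0=[80.0, 0.05, 1.5], maxfev=10_000)
rmse = np.sqrt(np.mean((y - weibull_growth(t, *params)) ** 2))
print(f"fitted (a, b, c) = {np.round(params, 3)}, RMSE = {rmse:.2f}")
```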

  17. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. 26 CFR 1.613-1 - Percentage depletion; general rule.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 7 2010-04-01 2010-04-01 true Percentage depletion; general rule. 1.613-1... TAX (CONTINUED) INCOME TAXES (CONTINUED) Natural Resources § 1.613-1 Percentage depletion; general rule. (a) In general. In the case of a taxpayer computing the deduction for depletion under section 611...

  19. Determination of percentage of caffeine content in some analgesic ...

    African Journals Online (AJOL)

    Two methods were employed for the determination of percentage Caffeine content in three brands of analgesic tablets which are; Extraction using only water as a solvent and Extraction using both water and chloroform as solvents, watch glass has been used as the weighing apparatus and the percentage of Caffeine ...

  20. Redetermination and absolute configuration of atalaphylline

    Directory of Open Access Journals (Sweden)

    Hoong-Kun Fun

    2010-02-01

    Full Text Available The title acridone alkaloid [systematic name: 1,3,5-trihydroxy-2,4-bis(3-methylbut-2-enyl)acridin-9(10H)-one], C23H25NO4, has previously been reported as crystallizing in the chiral orthorhombic space group P212121 [Chantrapromma et al. (2010). Acta Cryst. E66, o81–o82], but the absolute configuration could not be determined from data collected with Mo radiation. The absolute configuration has now been determined by refinement of the Flack parameter with data collected using Cu radiation. All features of the molecule and its crystal packing are similar to those previously described.

  1. Absolute calibration of sniffer probes on Wendelstein 7-X

    International Nuclear Information System (INIS)

    Moseev, D.; Laqua, H. P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V.; Gellert, F.; Oosterbeek, J. W.

    2016-01-01

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of the Wendelstein 7-X empty vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients that are obtained by the direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels up to 340 kW/m2 per MW injected beam power. Furthest away from the launcher, i.e., half a toroidal turn, still 90 kW/m2 per MW injected beam power is measured.

  2. Absolute calibration of sniffer probes on Wendelstein 7-X

    Science.gov (United States)

    Moseev, D.; Laqua, H. P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V.; Gellert, F.; Oosterbeek, J. W.

    2016-08-01

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of the Wendelstein 7-X empty vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients that are obtained by the direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels up to 340 kW/m2 per MW injected beam power. Furthest away from the launcher, i.e., half a toroidal turn, still 90 kW/m2 per MW injected beam power is measured.

  3. Absolute calibration of sniffer probes on Wendelstein 7-X

    Energy Technology Data Exchange (ETDEWEB)

    Moseev, D., E-mail: dmitry.moseev@ipp.mpg.de; Laqua, H. P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V. [Max-Planck-Institut für Plasmaphysik, Greifswald (Germany); Gellert, F. [Max-Planck-Institut für Plasmaphysik, Greifswald (Germany); Ernst-Moritz-Arndt-Universität Greifswald, Greifswald (Germany); Oosterbeek, J. W. [Eindhoven University of Technology, Eindhoven (Netherlands)

    2016-08-15

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of the Wendelstein 7-X empty vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients that are obtained by the direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels up to 340 kW/m2 per MW injected beam power. Furthest away from the launcher, i.e., half a toroidal turn, still 90 kW/m2 per MW injected beam power is measured.

  4. Absolute magnitudes by statistical parallaxes

    International Nuclear Information System (INIS)

    Heck, A.

    1978-01-01

    The author describes an algorithm for stellar luminosity calibrations (based on the principle of maximum likelihood) which allows the calibration of relations of the type: M_i = Σ_{j=1..N} q_j C_{ij}, i = 1, ..., n, where n is the size of the sample at hand, M_i are the individual absolute magnitudes, C_{ij} are observational quantities (j = 1, ..., N), and q_j are the coefficients to be determined. If one puts N = 1 and C_{iN} = 1, one has q_1 = M(mean), the mean absolute magnitude of the sample. As additional output, the algorithm also provides the dispersion in magnitude of the sample σ_M, the mean solar motion (U, V, W) and the corresponding velocity ellipsoid (σ_u, σ_v, σ_w). The use of this algorithm is illustrated. (Auth.)
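
    For the special case where the kinematic terms are ignored, the calibration relation is linear in the q_j and can be sketched with ordinary least squares standing in for the paper's maximum-likelihood algorithm (which also estimates the solar motion and velocity ellipsoid); the data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
n, N = 300, 3   # stars in the sample, observational quantities per star

# Synthetic observational quantities C_ij (a constant term plus two indices)
# and "true" coefficients q_j.
C = np.column_stack([np.ones(n), rng.normal(0.5, 0.2, n), rng.normal(0.0, 1.0, n)])
q_true = np.array([-1.5, 2.0, 0.3])
M = C @ q_true + rng.normal(0, 0.25, n)   # absolute magnitudes with scatter sigma_M

# Least-squares stand-in for the maximum-likelihood fit of M_i = sum_j q_j C_ij.
q_hat, *_ = np.linalg.lstsq(C, M, rcond=None)
print("recovered q_j:", np.round(q_hat, 2))
```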

  5. Noncircular features in Saturn's rings IV: Absolute radius scale and Saturn's pole direction

    Science.gov (United States)

    French, Richard G.; McGhee-French, Colleen A.; Lonergan, Katherine; Sepersky, Talia; Jacobson, Robert A.; Nicholson, Philip D.; Hedman, Mathew M.; Marouf, Essam A.; Colwell, Joshua E.

    2017-07-01

    We present a comprehensive solution for the geometry of Saturn's ring system, based on orbital fits to an extensive set of occultation observations of 122 individual ring edges and gaps. We begin with a restricted set of very high quality Cassini VIMS, UVIS, and RSS measurements for quasi-circular features in the C and B rings and the Cassini Division, and then successively add suitably weighted additional Cassini and historical occultation measurements (from Voyager, HST and the widely-observed 28 Sgr occultation of 3 Jul 1989) for additional non-circular features, to derive an absolute radius scale applicable across the entire classical ring system. As part of our adopted solution, we determine first-order corrections to the spacecraft trajectories used to determine the geometry of individual occultation chords. We adopt a simple linear model for Saturn's precession, and our favored solution yields a precession rate on the sky of ṅ_P = 0.207 ± 0.006″ yr⁻¹, equivalent to an angular rate of polar motion Ω_P = 0.451 ± 0.014″ yr⁻¹. The 3% formal uncertainty in the fitted precession rate is approaching the point where it can provide a useful constraint on models of Saturn's interior, although realistic errors are likely to be larger, given the linear approximation of the precession model and possible unmodeled systematic errors in the spacecraft ephemerides. Our results are largely consistent with independent estimates of the precession rate based on historical RPX times (Nicholson et al., 1999, AAS/Division for Planetary Sciences Meeting Abstracts #31, 44.01) and from theoretical expectations that account for Titan's 700-yr precession period (Vienne and Duriez 1992, Astronomy and Astrophysics 257, 331-352). The fitted precession rate based on Cassini data only is somewhat lower, which may be an indication of unmodeled shorter term contributions to Saturn's polar motion from other satellites, or perhaps the result of inconsistencies in the assumed

  6. A Hierarchical Approach Using Machine Learning Methods in Solar Photovoltaic Energy Production Forecasting

    Directory of Open Access Journals (Sweden)

    Zhaoxuan Li

    2016-01-01

    Full Text Available We evaluate and compare two common methods, artificial neural networks (ANN) and support vector regression (SVR), for predicting energy production from a solar photovoltaic (PV) system in Florida 15 min, 1 h and 24 h ahead of time. A hierarchical approach is proposed based on the machine learning algorithms tested. The production data used in this work correspond to 15 min averaged power measurements collected from 2014. The accuracy of the model is determined using error statistics such as mean bias error (MBE), mean absolute error (MAE), root mean square error (RMSE), relative MBE (rMBE), mean percentage error (MPE) and relative RMSE (rRMSE). This work provides findings on how forecasts from individual inverters can improve the total solar power generation forecast of the PV system.
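
    The six statistics listed above are straightforward to compute once forecasts and measurements are aligned. A sketch with assumed definitions (relative metrics normalized by the mean measured power) on synthetic 15-min PV data:

```python
import numpy as np

def forecast_errors(measured, predicted):
    """Error statistics of the kind listed in the abstract (definitions assumed)."""
    e = predicted - measured
    mbe, mae = e.mean(), np.abs(e).mean()
    rmse = np.sqrt((e ** 2).mean())
    mean_obs = measured.mean()
    return {"MBE": mbe, "MAE": mae, "RMSE": rmse,
            "rMBE%": 100 * mbe / mean_obs,
            "MPE%": 100 * (e / measured).mean(),
            "rRMSE%": 100 * rmse / mean_obs}

rng = np.random.default_rng(6)
measured = rng.uniform(5, 50, 96)              # one day of 15-min PV power, kW
predicted = measured + rng.normal(0, 2.5, 96)  # hypothetical forecast
print({k: round(v, 2) for k, v in forecast_errors(measured, predicted).items()})
```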

  7. Regional absolute conductivity reconstruction using projected current density in MREIT

    International Nuclear Information System (INIS)

    Sajib, Saurav Z K; Kim, Hyung Joong; Woo, Eung Je; Kwon, Oh In

    2012-01-01

    slice and the reconstructed regional projected current density, we propose a direct non-iterative algorithm to reconstruct the absolute conductivity in the ROI. The numerical simulations in the presence of various degrees of noise, as well as a phantom MRI imaging experiment, showed that the proposed method reconstructs the regional absolute conductivity in a ROI within a subject including the defective regions. In the simulation experiment, the relative L2 errors of the reconstructed regional and global conductivities were 0.79 and 0.43, respectively, using a noise level of 50 dB in the defective region. (paper)
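
    The relative L2 error quoted above compares a reconstructed conductivity map against the reference in the L2 norm. A minimal sketch on a synthetic map (the exact norm and normalization used in the paper are assumed):

```python
import numpy as np

def relative_l2_error(sigma_true, sigma_rec):
    """Relative L2 (Frobenius) error between reference and reconstruction."""
    return np.linalg.norm(sigma_rec - sigma_true) / np.linalg.norm(sigma_true)

rng = np.random.default_rng(7)
sigma_true = np.ones((64, 64))                          # homogeneous phantom, S/m
sigma_rec = sigma_true + rng.normal(0, 0.1, (64, 64))   # noisy reconstruction
print(f"relative L2 error: {relative_l2_error(sigma_true, sigma_rec):.3f}")
```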

  8. Coordinated joint motion control system with position error correction

    Science.gov (United States)

    Danko, George L.

    2016-04-05

    Disclosed are an articulated hydraulic machine supporting, control system and control method for same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.

  9. Thermo-kinetic modeling and optimization of the sulfur recovery unit thermal stage

    International Nuclear Information System (INIS)

    Zarei, Samane; Ganji, Hamid; Sadi, Maryam; Rashidzadeh, Mehdi

    2016-01-01

    Highlights: • The Claus reaction furnace was modeled using the corrected Gibbs energy minimization. • Using the corrected model, a significant error reduction from 33.50 to 7.86% occurred. • The waste heat boiler was modeled using plant data and a new H2S decomposition rate. • The combined model could reasonably predict the experimental data with 6.50% error. • An optimization was carried out to control the operating variables of an existing plant. - Abstract: In this study, the reaction furnace of the Claus process was modeled using the Gibbs free energy minimization method, which involved new parameters in the correlations of thermodynamic properties. Using the new parameters, a significant error reduction from 33.50% to 7.86% occurred in the prediction of the molar flow rates of components. Subsequently, the waste heat boiler attached to the reaction furnace was modeled using experimental plant data and a new hydrogen sulfide decomposition rate. Utilizing this new rate expression, the capability of the model in H2 molar flow rate prediction was enhanced, and the mean absolute percentage errors of the model for the H2 and H2S species reached 12.94% and 9.43%, respectively. The combined model, including the corrected equilibrium model for the reaction furnace and the corrected kinetic model for the waste heat boiler, could reasonably predict the experimental data, with the mean absolute percentage error reaching 6.50%. An optimization study was carried out to examine the operating condition of the Claus reaction furnace and the waste heat boiler in order to maximize sulfur production and minimize COS emission while maintaining the H2S to SO2 flow ratio at a constant value of 2.

  10. INVESTIGATION OF INFLUENCE OF ENCODING FUNCTION COMPLEXITY ON DISTRIBUTION OF ERROR MASKING PROBABILITY

    Directory of Open Access Journals (Sweden)

    A. B. Levina

    2016-03-01

    Full Text Available Error detection codes are mechanisms that enable robust delivery of data over unreliable communication channels and devices. Unreliable channels and devices are error-prone objects, and error detection codes allow such errors to be detected. There are two classes of error detecting codes - classical codes and security-oriented codes. The classical codes have a high percentage of detected errors; however, they have a high probability of missing an error caused by algebraic manipulation. In contrast, security-oriented codes are codes with a small Hamming distance and high protection against algebraic manipulation. The probability of error masking is a fundamental parameter of security-oriented codes. A detailed study of this parameter allows analyzing the behavior of the error-correcting code in the case of error injection into the encoding device. In turn, the complexity of the encoding function plays an important role in security-oriented codes. Encoding functions with lower computational complexity and a low probability of masking are the best protection of the encoding device against malicious acts. This paper investigates the influence of encoding function complexity on the error masking probability distribution. It will be shown that a more complex encoding function reduces the maximum of the error masking probability. It is also shown in the paper that increasing the function complexity changes the error masking probability distribution. In particular, increasing the computational complexity decreases the difference between the maximum and average values of the error masking probability. Our results have shown that functions of greater complexity have smoothed maxima of the error masking probability, which significantly complicates the analysis of the error-correcting code by an attacker. As a result, in the case of a complex encoding function the probability of algebraic manipulation is reduced. The paper discusses an approach how to measure the error masking

  11. 7 CFR 51.308 - Methods of sampling and calculation of percentages.

    Science.gov (United States)

    2010-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation of Percentages § 51.308 Methods of sampling and calculation of percentages. (a) When the numerical... 7 Agriculture 2 2010-01-01 2010-01-01 false Methods of sampling and calculation of percentages. 51...

  12. Correcting electrode modelling errors in EIT on realistic 3D head models.

    Science.gov (United States)

    Jehl, Markus; Avery, James; Malone, Emma; Holder, David; Betcke, Timo

    2015-12-01

    Electrical impedance tomography (EIT) is a promising medical imaging technique which could aid differentiation of haemorrhagic from ischaemic stroke in an ambulance. One challenge in EIT is the ill-posed nature of the image reconstruction, i.e., that small measurement or modelling errors can result in large image artefacts. It is therefore important that reconstruction algorithms are improved with regard to stability against modelling errors. We identify that wrongly modelled electrode positions constitute one of the biggest sources of image artefacts in head EIT. Therefore, the use of the Fréchet derivative on the electrode boundaries in a realistic three-dimensional head model is investigated, in order to reconstruct electrode movements simultaneously with conductivity changes. We show a fast implementation and analyse the performance of electrode position reconstructions in time-difference and absolute imaging for simulated and experimental voltages. Reconstructing the electrode positions and conductivities simultaneously increased the image quality significantly in the presence of electrode movement.

  13. Strongly nonlinear theory of rapid solidification near absolute stability

    Science.gov (United States)

    Kowal, Katarzyna N.; Altieri, Anthony L.; Davis, Stephen H.

    2017-10-01

    We investigate the nonlinear evolution of the morphological deformation of a solid-liquid interface of a binary melt under rapid solidification conditions near two absolute stability limits. The first of these involves the complete stabilization of the system to cellular instabilities as a result of large enough surface energy. We derive nonlinear evolution equations in several limits in this scenario and investigate the effect of interfacial disequilibrium on the nonlinear deformations that arise. In contrast to the morphological stability problem in equilibrium, in which only cellular instabilities appear and only one absolute stability boundary exists, in disequilibrium the system is prone to oscillatory instabilities and a second absolute stability boundary involving attachment kinetics arises. Large enough attachment kinetics stabilize the oscillatory instabilities. We derive a nonlinear evolution equation to describe the nonlinear development of the solid-liquid interface near this oscillatory absolute stability limit. We find that strong asymmetries develop with time. For uniform oscillations, the evolution equation for the interface reduces to the simple form f'' + (βf')² + f = 0, where β is the disequilibrium parameter. Lastly, we investigate a distinguished limit near both absolute stability limits in which the system is prone to both cellular and oscillatory instabilities and derive a nonlinear evolution equation that captures the nonlinear deformations in this limit. Common to all these scenarios is the emergence of larger asymmetries in the resulting shapes of the solid-liquid interface with greater departures from equilibrium and larger morphological numbers. The disturbances additionally sharpen near the oscillatory absolute stability boundary, where the interface becomes deep-rooted. The oscillations are time-periodic only for small-enough initial amplitudes and their frequency depends on a single combination of physical parameters, including the
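
    For readers who want to see the asymmetry described above, here is a sketch that numerically integrates the quoted uniform-oscillation equation f'' + (βf')² + f = 0; the value of β and the initial amplitude are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta = 0.5  # disequilibrium parameter; illustrative value only

def rhs(t, y):
    # y = [f, f']; the quoted equation gives f'' = -(beta*f')**2 - f
    f, fp = y
    return [fp, -(beta * fp) ** 2 - f]

sol = solve_ivp(rhs, (0.0, 40.0), [0.1, 0.0], dense_output=True, rtol=1e-9)
f = sol.sol(np.linspace(0.0, 40.0, 2000))[0]

# The quadratic term damps upward and downward motion unequally, so the
# oscillation is asymmetric about zero (compare the two extremes).
print(f.min(), f.max())
```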

  14. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
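
    The full protocol is not reproduced here, but the underlying idea, regressing a time-dependent error rate with a Gaussian process and extrapolating it with an uncertainty estimate, can be sketched with scikit-learn; the error-rate data below are invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical error-correction data: empirical error-rate estimates
# observed over time (all numbers invented for illustration).
t = np.linspace(0, 10, 40)[:, None]          # time, arbitrary units
true_rate = 0.01 + 0.004 * np.sin(0.6 * t.ravel())
observed = true_rate + rng.normal(0, 0.001, t.shape[0])

kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=1e-6)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, observed)

# Predict (and extrapolate) the time-dependent error rate with uncertainty.
t_new = np.linspace(0, 12, 60)[:, None]
mean, std = gp.predict(t_new, return_std=True)
print(mean[-1], std[-1])
```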

  15. Forcing absoluteness and regularity properties

    NARCIS (Netherlands)

    Ikegami, D.

    2010-01-01

    For a large natural class of forcing notions, we prove general equivalence theorems between forcing absoluteness statements, regularity properties, and transcendence properties over L and the core model K. We use our results to answer open questions from set theory of the reals.

  16. A NEW METHOD TO QUANTIFY AND REDUCE THE NET PROJECTION ERROR IN WHOLE-SOLAR-ACTIVE-REGION PARAMETERS MEASURED FROM VECTOR MAGNETOGRAMS

    Energy Technology Data Exchange (ETDEWEB)

    Falconer, David A.; Tiwari, Sanjiv K.; Moore, Ronald L. [NASA Marshall Space Flight Center, Huntsville, AL 35812 (United States); Khazanov, Igor, E-mail: David.a.Falconer@nasa.gov [Center for Space Plasma and Aeronomic Research, University of Alabama in Huntsville, Huntsville, AL 35899 (United States)

    2016-12-20

    Projection errors limit the use of vector magnetograms of active regions (ARs) far from the disk center. In this Letter, for ARs observed up to 60° from the disk center, we demonstrate a method for measuring and reducing the projection error in the magnitude of any whole-AR parameter that is derived from a vector magnetogram that has been deprojected to the disk center. The method assumes that the center-to-limb curve of the average of the parameter’s absolute values, measured from the disk passage of a large number of ARs and normalized to each AR’s absolute value of the parameter at central meridian, gives the average fractional projection error at each radial distance from the disk center. To demonstrate the method, we use a large set of large-flux ARs and apply the method to a whole-AR parameter that is among the simplest to measure: whole-AR magnetic flux. We measure 30,845 SDO/Helioseismic and Magnetic Imager vector magnetograms covering the disk passage of 272 large-flux ARs, each having whole-AR flux >10^22 Mx. We obtain the center-to-limb radial-distance run of the average projection error in measured whole-AR flux from a Chebyshev fit to the radial-distance plot of the 30,845 normalized measured values. The average projection error in the measured whole-AR flux of an AR at a given radial distance is removed by multiplying the measured flux by the correction factor given by the fit. The correction is important for both the study of the evolution of ARs and for improving the accuracy of forecasts of an AR’s major flare/coronal mass ejection productivity.
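
    A minimal sketch of the fitting-and-correction step with NumPy's Chebyshev utilities follows. The stand-in data for the normalized flux measurements are invented, and dividing by the fitted normalized curve is read here as equivalent to multiplying by the reciprocal correction factor described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the 30,845 normalized whole-AR flux values:
# r is radial distance from disk center (sin 60 deg is about 0.87), and
# ratio is measured flux normalized to the AR's central-meridian value.
r = rng.uniform(0.0, 0.87, 2000)
ratio = 1.0 - 0.35 * r ** 2 + rng.normal(0.0, 0.05, r.size)

# Chebyshev fit to the center-to-limb run of the normalized measurements.
coeffs = np.polynomial.chebyshev.chebfit(r, ratio, deg=4)

def corrected_flux(measured_flux, radial_distance):
    """Remove the average projection error by dividing by the fitted
    normalized curve (i.e. multiplying by its reciprocal)."""
    return measured_flux / np.polynomial.chebyshev.chebval(radial_distance, coeffs)

print(corrected_flux(8.5e21, 0.6))  # Mx, illustrative input
```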

  17. Identification of Sea Depth (Bathymetry) Based on Sea Surface Color in Satellite Imagery Using the ANFIS Method

    Directory of Open Access Journals (Sweden)

    Diwan Mukti Pambuko

    2013-10-01

    ...knowing the surface color at that position, a system can be built that identifies the sea depth at a particular position from the color of the sea surface. The system combines manually measured sea-depth data with satellite image data for the same positions. A learning process is then carried out using a Neuro-Fuzzy technique with the ANFIS (Adaptive Neuro-Fuzzy Inference System) method, with the performance of the identification model assessed by its MAPE (Mean Absolute Percentage Error) and MSE (Mean Square Error). The resulting identification model performs very well, with a test MAPE of 9.0024% and an MSE of 0.0034. Keywords: bathymetry, satellite imagery, neuro-fuzzy, ANFIS

  18. Absolute calibration of sniffer probes on Wendelstein 7-X

    NARCIS (Netherlands)

    Moseev, D.; Laqua, H.P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V.; Gellert, F.J.; Oosterbeek, J.W.

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of

  19. Absolute tense forms in Tswana | Pretorius | Journal for Language ...

    African Journals Online (AJOL)

    These views were compared in an attempt to put forth an applicable framework for the classification of the tenses in Tswana and to identify the absolute tenses of Tswana. Keywords: tense; simple tenses; compound tenses; absolute tenses; relative tenses; aspect; auxiliary verbs; auxiliary verbal groups; Tswana Opsomming

  20. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  1. Is a shift from research on individual medical error to research on health information technology underway? A 40-year analysis of publication trends in medical journals.

    Science.gov (United States)

    Erlewein, Daniel; Bruni, Tommaso; Gadebusch Bondio, Mariacarla

    2018-06-07

    In 1983, McIntyre and Popper underscored the need for more openness in dealing with errors in medicine. Since then, much has been written on individual medical errors. Furthermore, at the beginning of the 21st century, researchers and medical practitioners increasingly approached individual medical errors through health information technology. Hence, the question arises whether the attention of biomedical researchers shifted from individual medical errors to health information technology. We ran a study to determine publication trends concerning individual medical errors and health information technology in medical journals over the last 40 years. We used the Medical Subject Headings (MeSH) taxonomy in the database MEDLINE. Each year, we analyzed the percentage of relevant publications to the total number of publications in MEDLINE. The trends identified were tested for statistical significance. Our analysis showed that the percentage of publications dealing with individual medical errors increased from 1976 until the beginning of the 21st century but began to drop in 2003. Both the upward and the downward trends were statistically significant (P < 0.05). The percentage of publications dealing with health information technology doubled between 2003 and 2015. The upward trend was statistically significant (P < 0.05) ... health information technology in the USA and the UK. © 2018 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.

  2. Probative value of absolute and relative judgments in eyewitness identification.

    Science.gov (United States)

    Clark, Steven E; Erickson, Michael A; Breneman, Jesse

    2011-10-01

    It is well-accepted that eyewitness identification decisions based on relative judgments are less accurate than identification decisions based on absolute judgments. However, the theoretical foundation for this view has not been established. In this study relative and absolute judgments were compared through simulations of the WITNESS model (Clark, Appl Cogn Psychol 17:629-654, 2003) to address the question: Do suspect identifications based on absolute judgments have higher probative value than suspect identifications based on relative judgments? Simulations of the WITNESS model showed a consistent advantage for absolute judgments over relative judgments for suspect-matched lineups. However, simulations of same-foils lineups showed a complex interaction based on the accuracy of memory and the similarity relationships among lineup members.

  3. An Ensemble Learning for Predicting Breakdown Field Strength of Polyimide Nanocomposite Films

    Directory of Open Access Journals (Sweden)

    Hai Guo

    2015-01-01

    Full Text Available Using the method of Stochastic Gradient Boosting, ten SMO-SVR base learners are combined into a strong prediction model (the SGBS model) that is efficient in predicting the breakdown field strength. Adopting the method of in situ polymerization, thirty-two samples of nanocomposite films with different percentage compositions, components, and thicknesses were prepared. Then, the breakdown field strength was tested using voltage test equipment. From the test results, the correlation coefficient (CC), the mean absolute error (MAE), the root mean squared error (RMSE), the relative absolute error (RAE), and the root relative squared error (RRSE) are 0.9664, 14.2598, 19.684, 22.26%, and 25.01% with the SGBS model. The result indicates that the predicted values fit well with the measured ones. Comparisons between models such as linear regression, BP, GRNN, SVR, and SMO-SVR have also been made under the same conditions. They show that the CC of the SGBS model is higher than those of the other models, while the MAE, RMSE, RAE, and RRSE of the SGBS model are lower than those of the other models. This demonstrates that the SGBS model is better than the other models in predicting the breakdown field strength of polyimide nanocomposite films.
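
    RAE and RRSE are less common than MAE and RMSE; a short sketch of both, normalized against the mean predictor as usually defined, follows. The measured and predicted values are invented for illustration.

```python
import numpy as np

def rae(y, yhat):
    """Relative absolute error: total absolute error normalized by the
    error of always predicting the mean (reported as a percentage)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return 100.0 * np.sum(np.abs(y - yhat)) / np.sum(np.abs(y - y.mean()))

def rrse(y, yhat):
    """Root relative squared error, as a percentage."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return 100.0 * np.sqrt(np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2))

# Invented breakdown-field-strength values (kV/mm) and model predictions.
measured = [210.0, 195.5, 240.2, 188.9, 225.7]
predicted = [205.3, 201.0, 232.8, 194.1, 219.9]
print(rae(measured, predicted), rrse(measured, predicted))
```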

  4. Estimation of monthly global solar radiation in the eastern Mediterranean region in Turkey by using artificial neural networks

    International Nuclear Information System (INIS)

    Sahan, Muhittin; Yakut, Emre

    2016-01-01

    In this study, an artificial neural network (ANN) model was used to estimate the monthly average global solar radiation on a horizontal surface for 5 selected locations in the Mediterranean region for a period of 18 years (1993-2010). Meteorological and geographical data were taken from the Turkish State Meteorological Service. The ANN architecture designed is a feed-forward back-propagation model with one hidden layer containing 21 neurons with the hyperbolic tangent sigmoid as the transfer function, and one output layer utilizing a linear transfer function (purelin). The training algorithm used in the ANN model was the Levenberg-Marquardt back-propagation algorithm (trainlm). Results obtained from the ANN model were compared with measured meteorological values by using statistical methods. A correlation coefficient of 97.97% (~98%) was obtained with a root mean square error (RMSE) of 0.852 MJ/m², a mean square error (MSE) of 0.725 MJ/m², a mean absolute bias error (MABE) of 10.659 MJ/m², and a mean absolute percentage error (MAPE) of 4.8%. Results show good agreement between the estimated and measured values of global solar radiation. We suggest that the developed ANN model can be used to predict solar radiation at other locations and under other conditions
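
    scikit-learn offers no Levenberg-Marquardt trainer, so the sketch below substitutes L-BFGS while keeping the reported architecture (one hidden layer of 21 tanh units); the input features and radiation values are invented stand-ins for the meteorological and geographical data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Invented stand-ins for inputs (e.g. month, latitude, sunshine duration,
# temperature) and monthly global radiation in MJ/m^2.
X = rng.uniform(size=(200, 4))
y = 15 + 8 * X[:, 0] + 3 * X[:, 1] ** 2 + rng.normal(0, 0.5, 200)

# One hidden layer of 21 tanh units, as in the abstract; L-BFGS stands in
# for the 'trainlm' (Levenberg-Marquardt) trainer, which sklearn lacks.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(21,), activation="tanh",
                 solver="lbfgs", max_iter=2000, random_state=0),
).fit(X, y)

pred = model.predict(X)
print(100 * np.mean(np.abs((y - pred) / y)))  # in-sample MAPE, %
```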

  5. AN ANALYSIS OF GRAMMATICAL ERRORS IN SPEECH AT THE STUDENTS OF ENGLISH EDUCATION STUDY PROGRAM OF MUHAMMADIYAH UNIVERSITY OF METRO ACADEMIC YEAR 2013/2014

    Directory of Open Access Journals (Sweden)

    Septian Dwi Sondiana

    2017-02-01

    Full Text Available The objectives of the research are to find out the types of grammatical errors in English students' speech; to find out the percentage of grammatical errors in English students' speech; and to find out the factors influencing English students' grammatical errors in their speech. Based on the data, the students have problems in producing verb groups, errors in subject-verb agreement, errors in the use of articles, errors in the use of prepositions, errors in noun pluralization, errors in the use of pronouns, and errors in the use of conjunctions. The data show that Anisa made eleven sentences in 2 minutes 28 seconds, with eight errors. Dewi made seven sentences in 1 minute 57 seconds, with five errors. Fatika made sixteen sentences in 4 minutes 14 seconds, with eight errors. Fitri made sixteen sentences in 4 minutes 23 seconds, with seven errors. Ibnu made ten sentences in 2 minutes 18 seconds, with eight errors. Linda made fifteen sentences in 3 minutes 7 seconds, with eight errors. Musli made fourteen sentences in 2 minutes 39 seconds, with six errors. Nyoman made twelve sentences in 3 minutes 43 seconds, with nine errors. Pera made ten sentences in 2 minutes 23 seconds, with seven errors. Sri made fourteen sentences in 6 minutes 34 seconds, with eleven errors. The percentages of errors are: Anisa, 72.73%; Dewi, 71.4%; Fatika, 50%; Fitri, 43.75%; Ibnu, 80%; Linda, 53.3%; Musli, 42.8%; Nyoman, 75%; Pera, 70%; Sri, 78.57%. Based on the interviews, several factors can be identified that influence the students' grammatical errors in their speech. The internal factors are as follows. First, the students struggle with their feelings, for example a lack of confidence or feeling scared, when they are speaking in public. Second, the students have not mastered

  6. Positioning, alignment and absolute pointing of the ANTARES neutrino telescope

    International Nuclear Information System (INIS)

    Fehr, F; Distefano, C

    2010-01-01

    A precise detector alignment and absolute pointing is crucial for point-source searches. The ANTARES neutrino telescope utilises an array of hydrophones, tiltmeters and compasses for the relative positioning of the optical sensors. The absolute calibration is accomplished by long-baseline low-frequency triangulation of the acoustic reference devices in the deep-sea with a differential GPS system at the sea surface. The absolute pointing can be independently verified by detecting the shadow of the Moon in cosmic rays.

  7. Does Absolute Synonymy exist in Owere-Igbo? | Omego | AFRREV ...

    African Journals Online (AJOL)

    Among Igbo linguistic researchers, determining whether absolute synonymy exists in Owere–Igbo, a dialect of the Igbo language predominantly spoken by the people of Owerri, Imo State, Nigeria, has become a thorny issue. While some linguistic scholars strive to establish that absolute synonymy exists in the lexical ...

  8. Three-dimensional patient setup errors at different treatment sites measured by the Tomotherapy megavoltage CT

    Energy Technology Data Exchange (ETDEWEB)

    Hui, S.K.; Lusczek, E.; Dusenbery, K. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Dept. of Therapeutic Radiology - Radiation Oncology; DeFor, T. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Biostatistics and Informatics Core; Levitt, S. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Dept. of Therapeutic Radiology - Radiation Oncology; Karolinska Institutet, Stockholm (Sweden). Dept. of Onkol-Patol

    2012-04-15

    Reduction of interfraction setup uncertainty is vital for assuring the accuracy of conformal radiotherapy. We report a systematic study of setup error to assess patients' three-dimensional (3D) localization at various treatment sites. Tomotherapy megavoltage CT (MVCT) images were scanned daily in 259 patients from 2005-2008. We analyzed 6,465 MVCT images to measure setup error for head and neck (H and N), chest/thorax, abdomen, prostate, legs, and total marrow irradiation (TMI). Statistical comparisons of the absolute displacements across sites and time were performed in rotation (R), lateral (x), craniocaudal (y), and vertical (z) directions. The global systematic errors were measured to be less than 3 mm in each direction with increasing order of errors for different sites: H and N, prostate, chest, pelvis, spine, legs, and TMI. The differences in displacements in the x, y, and z directions, and 3D average displacement between treatment sites were significant (p < 0.01). Overall improvement in patient localization with time (after 3-4 treatment fractions) was observed. Large displacement (> 5 mm) was observed in the 75th percentile of the patient groups for chest, pelvis, legs, and spine in the x and y direction in the second week of the treatment. MVCT imaging is essential for determining 3D setup error and to reduce uncertainty in localization at all anatomical locations. Setup error evaluation should be performed daily for all treatment regions, preferably for all treatment fractions. (orig.)

  9. Lunch-time food choices in preschoolers: relationships between absolute and relative intake of different food categories, and appetitive characteristics and weight

    Science.gov (United States)

    Carnell, S; Pryor, K; Mais, LA; Warkentin, S; Benson, L; Cheng, R

    2016-01-01

    Children’s appetitive characteristics measured by parent-report questionnaires are reliably associated with body weight, as well as behavioral tests of appetite, but relatively little is known about relationships with food choice. As part of a larger preloading study, we served 4-5y olds from primary school classes five school lunches at which they were presented with the same standardized multi-item meal. Parents completed Child Eating Behavior Questionnaire (CEBQ) sub-scales assessing satiety responsiveness (CEBQ-SR), food responsiveness (CEBQ-FR) and enjoyment of food (CEBQ-EF), and children were weighed and measured. Despite differing preload conditions, children showed remarkable consistency of intake patterns across all five meals with day-to-day intra-class correlations in absolute and percentage intake of each food category ranging from .78 to .91. Higher CEBQ-SR was associated with lower mean intake of all food categories across all five meals, with the weakest association apparent for snack foods. Higher CEBQ-FR was associated with higher intake of white bread and fruits and vegetables, and higher CEBQ-EF was associated with greater intake of all categories, with the strongest association apparent for white bread. Analyses of intake of each food group as a percentage of total intake, treated here as an index of the child’s choice to consume relatively more or relatively less of each different food category when composing their total lunch-time meal, further suggested that children who were higher in CEBQ-SR ate relatively more snack foods and relatively less fruits and vegetables, while children with higher CEBQ-EF ate relatively less snack foods and relatively more white bread. Higher absolute intakes of white bread and snack foods were associated with higher BMI z score. CEBQ sub-scale associations with food intake variables were largely unchanged by controlling for daily metabolic needs. However, descriptive comparisons of lunch intakes with

  10. Comparative study of four time series methods in forecasting typhoid fever incidence in China.

    Science.gov (United States)

    Zhang, Xingyu; Liu, Yuanyuan; Yang, Min; Zhang, Tao; Young, Alistair A; Li, Xiaosong

    2013-01-01

    Accurate incidence forecasting of infectious disease is critical for early prevention and for better government strategic planning. In this paper, we present a comprehensive study of different forecasting methods based on the monthly incidence of typhoid fever. The seasonal autoregressive integrated moving average (SARIMA) model and three different models inspired by neural networks, namely, back propagation neural networks (BPNN), radial basis function neural networks (RBFNN), and Elman recurrent neural networks (ERNN) were compared. The differences as well as the advantages and disadvantages, among the SARIMA model and the neural networks were summarized and discussed. The data obtained for 2005 to 2009 and for 2010 from the Chinese Center for Disease Control and Prevention were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The results showed that RBFNN obtained the smallest MAE, MAPE and MSE in both the modeling and forecasting processes. The performances of the four models ranked in descending order were: RBFNN, ERNN, BPNN and the SARIMA model.
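
    A minimal statsmodels sketch of the SARIMA baseline used in this comparison might look as follows; the incidence series and the (1,0,1)x(1,1,1,12) order are illustrative assumptions, not the paper's fitted specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)

# Invented monthly incidence series standing in for the 2005-2009 data.
idx = pd.date_range("2005-01", periods=60, freq="MS")
season = 10 + 4 * np.sin(2 * np.pi * idx.month.to_numpy() / 12)
y = pd.Series(season + rng.normal(0, 1, 60), index=idx)

# Fit an illustrative seasonal specification and forecast 12 months ahead
# (the role played by the 2010 forecasting sample in the paper).
fit = SARIMAX(y, order=(1, 0, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
forecast = fit.forecast(steps=12)

actual = season[:12] + rng.normal(0, 1, 12)  # pretend 2010 observations
mape = 100 * np.mean(np.abs((actual - forecast.values) / actual))
print(f"MAPE = {mape:.2f} %")
```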

  11. Comparative study of four time series methods in forecasting typhoid fever incidence in China.

    Directory of Open Access Journals (Sweden)

    Xingyu Zhang

    Full Text Available Accurate incidence forecasting of infectious disease is critical for early prevention and for better government strategic planning. In this paper, we present a comprehensive study of different forecasting methods based on the monthly incidence of typhoid fever. The seasonal autoregressive integrated moving average (SARIMA) model and three different models inspired by neural networks, namely, back propagation neural networks (BPNN), radial basis function neural networks (RBFNN), and Elman recurrent neural networks (ERNN) were compared. The differences as well as the advantages and disadvantages, among the SARIMA model and the neural networks were summarized and discussed. The data obtained for 2005 to 2009 and for 2010 from the Chinese Center for Disease Control and Prevention were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The results showed that RBFNN obtained the smallest MAE, MAPE and MSE in both the modeling and forecasting processes. The performances of the four models ranked in descending order were: RBFNN, ERNN, BPNN and the SARIMA model.

  12. Multivariate Time Series Forecasting of Crude Palm Oil Price Using Machine Learning Techniques

    Science.gov (United States)

    Kanchymalay, Kasturi; Salim, N.; Sukprasert, Anupong; Krishnan, Ramesh; Raba'ah Hashim, Ummi

    2017-08-01

    The aim of this paper was to study the correlation between the crude palm oil (CPO) price, selected vegetable oil prices (soybean oil, coconut oil, olive oil, rapeseed oil and sunflower oil), the crude oil price and the monthly exchange rate. Comparative analysis was then performed on CPO price forecasting results using machine learning techniques. Monthly CPO prices, selected vegetable oil prices, crude oil prices and monthly exchange rate data from January 1987 to February 2017 were utilized. Preliminary analysis showed a positive and high correlation between the CPO price and the soybean oil price, and also between the CPO price and the crude oil price. Experiments were conducted using multi-layer perceptron, support vector regression and Holt-Winters exponential smoothing techniques. The results were assessed using the criteria of root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and directional accuracy (DA). Among these three techniques, support vector regression (SVR) with the Sequential Minimal Optimization (SMO) algorithm showed relatively better results compared to the multi-layer perceptron and the Holt-Winters exponential smoothing method.
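
    scikit-learn's SVR is built on libsvm, which trains with an SMO-type solver, so a rough analogue of the winning method can be sketched as follows; the price series, lag structure, and hyperparameters are invented for illustration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(4)

# Invented monthly series standing in for crude oil, soybean oil and CPO
# prices (120 months).
n = 120
crude = 50 + np.cumsum(rng.normal(0, 1, n))
soy = 0.8 * crude + rng.normal(0, 2, n)
cpo = 0.5 * crude + 0.3 * soy + rng.normal(0, 1.5, n)

# One-step-ahead regression: predict CPO at month t from drivers at t-1.
X = np.column_stack([crude[:-1], soy[:-1], cpo[:-1]])
y = cpo[1:]
split = 100

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
print(100 * np.mean(np.abs((y[split:] - pred) / y[split:])))  # MAPE, %
```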

  13. Bounds on absolutely maximally entangled states from shadow inequalities, and the quantum MacWilliams identity

    Science.gov (United States)

    Huber, Felix; Eltschka, Christopher; Siewert, Jens; Gühne, Otfried

    2018-04-01

    A pure multipartite quantum state is called absolutely maximally entangled (AME), if all reductions obtained by tracing out at least half of its parties are maximally mixed. Maximal entanglement is then present across every bipartition. The existence of such states is in many cases unclear. With the help of the weight enumerator machinery known from quantum error correction and the shadow inequalities, we obtain new bounds on the existence of AME states in dimensions larger than two. To complete the treatment on the weight enumerator machinery, the quantum MacWilliams identity is derived in the Bloch representation. Finally, we consider AME states whose subsystems have different local dimensions, and present an example for a 2×3×3×3 system that shows maximal entanglement across every bipartition.

  14. Relationships between GPS-signal propagation errors and EISCAT observations

    Directory of Open Access Journals (Sweden)

    N. Jakowski

    1996-12-01

    Full Text Available When travelling through the ionosphere the signals of space-based radio navigation systems such as the Global Positioning System (GPS) are subject to modifications in amplitude, phase and polarization. In particular, phase changes due to refraction lead to propagation errors of up to 50 m for single-frequency GPS users. If both the L1 and the L2 frequencies transmitted by the GPS satellites are measured, first-order range error contributions of the ionosphere can be determined and removed by difference methods. The ionospheric contribution is proportional to the total electron content (TEC) along the ray path between satellite and receiver. Using about ten European GPS receiving stations of the International GPS Service for Geodynamics (IGS), the TEC over Europe is estimated within the geographic ranges -20° ≤ λ ≤ 40°E and 32.5° ≤ Φ ≤ 70°N in longitude and latitude, respectively. The derived TEC maps over Europe contribute to the study of horizontal coupling and transport processes during significant ionospheric events. Due to their comprehensive information about the high-latitude ionosphere, EISCAT observations may help to study the influence of ionospheric phenomena upon propagation errors in GPS navigation systems. Since there are still some accuracy-limiting problems to be solved in TEC determination using GPS, data comparison of TEC with vertical electron density profiles derived from EISCAT observations is valuable to enhance the accuracy of propagation-error estimations. This is evident both for absolute TEC calibration as well as for the conversion of ray-path-related observations to vertical TEC. The combination of EISCAT data and GPS-derived TEC data enables a better understanding of large-scale ionospheric processes.

  15. The DiskMass Survey. II. Error Budget

    Science.gov (United States)

    Bershady, Matthew A.; Verheijen, Marc A. W.; Westfall, Kyle B.; Andersen, David R.; Swaters, Rob A.; Martinsson, Thomas

    2010-06-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ_*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface-brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ_*^disk), and disk maximality (F_{*,max}^disk ≡ V_{*,max}^disk / V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles are reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  16. Moral absolutism and ectopic pregnancy.

    Science.gov (United States)

    Kaczor, C

    2001-02-01

    If one accepts a version of absolutism that excludes the intentional killing of any innocent human person from conception to natural death, ectopic pregnancy poses vexing difficulties. Given that the embryonic life almost certainly will die anyway, how can one retain one's moral principle and yet adequately respond to a situation that gravely threatens the life of the mother and her future fertility? The four options of treatment most often discussed in the literature are non-intervention, salpingectomy (removal of tube with embryo), salpingostomy (removal of embryo alone), and use of methotrexate (MXT). In this essay, I review these four options and introduce a fifth (the milking technique). In order to assess these options in terms of the absolutism mentioned, it will also be necessary to discuss various accounts of the intention/foresight distinction. I conclude that salpingectomy, salpingostomy, and the milking technique are compatible with absolutist presuppositions, but not the use of methotrexate.

  17. 13 CFR 126.701 - Can these subcontracting percentages requirements change?

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Can these subcontracting percentages requirements change? 126.701 Section 126.701 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION HUBZONE PROGRAM Contract Performance Requirements § 126.701 Can these subcontracting percentages...

  18. Technical Note: Potential errors in optical density measurements due to scanning side in EBT and EBT2 Gafchromic film dosimetry

    International Nuclear Information System (INIS)

    Desroches, Joannie; Bouchard, Hugo; Lacroix, Frederic

    2010-01-01

    Purpose: The purpose of this study is to determine the effect on the measured optical density of scanning on either side of a Gafchromic EBT and EBT2 film using an Epson (Epson Canada Ltd., Toronto, Ontario) 10000XL flat bed scanner. Methods: Calibration curves were constructed using EBT2 film scanned in landscape orientation in both reflection and transmission mode on an Epson 10000XL scanner. Calibration curves were also constructed using EBT film. Potential errors due to an optical density difference from scanning the film on either side ("face up" or "face down") were simulated. Results: Scanning the film face up or face down on the scanner bed while keeping the film angular orientation constant affects the measured optical density when scanning in reflection mode. In contrast, no statistically significant effect was seen when scanning in transmission mode. This effect can significantly affect relative and absolute dose measurements. As an application example, the authors demonstrate potential errors of 17.8% by inverting the film scanning side on the gamma index for 3%/3 mm criteria on a head and neck intensity modulated radiotherapy plan, and errors in absolute dose measurements ranging from 10% to 35% between 2 and 5 Gy. Conclusions: Process consistency is the key to obtaining accurate and precise results in Gafchromic film dosimetry. When scanning in reflection mode, care must be taken to place the film consistently on the same side on the scanner bed.

  19. Percentage compensation arrangements: suspect, but not illegal.

    Science.gov (United States)

    Fedor, F P

    2001-01-01

    Percentage compensation arrangements, in which a service is outsourced to a contractor that is paid in accordance with the level of its performance, are widely used in many business sectors. The HHS Office of Inspector General (OIG) has shown concern that these arrangements in the healthcare industry may offer incentives for the performance of unnecessary services or cause false claims to be made to Federal healthcare programs in violation of the antikickback statute and the False Claims Act. Percentage compensation arrangements can work and need not run afoul of the law as long as the healthcare organization carefully oversees the arrangement and sets specific safeguards in place. These safeguards include screening contractors, carefully evaluating their compliance programs, and obligating them contractually to perform within the limits of the law.

  20. The Language of Comparisons: Communicating about Percentages

    Directory of Open Access Journals (Sweden)

    Jessica Polito

    2014-01-01

    Full Text Available While comparisons between percentages or rates appear frequently in journalism and advertising, and are an essential component of quantitative writing, many students fail to understand precisely what percentages mean, and lack fluency with the language used for comparisons. After reviewing evidence demonstrating this weakness, this experience-based perspective lays out a framework for teaching the language of comparisons in a structured way, and illustrates it with several authentic examples that exemplify mistaken or misleading uses of such numbers. The framework includes three common types of erroneous or misleading quantitative writing: the missing comparison, where a key number is omitted; the apples-to-pineapples comparison, where two subtly incomparable rates are presented; and the implied fallacy, where an invalid quantitative conclusion is left to the reader to infer.

  1. Two-Stage Electricity Demand Modeling Using Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Krzysztof Gajowniczek

    2017-10-01

    Full Text Available Forecasting of electricity demand has become one of the most important areas of research in the electric power industry, as it is a critical component of cost-efficient power system management and planning. In this context, accurate and robust load forecasting is supposed to play a key role in reducing generation costs and in maintaining the reliability of the power system. However, due to demand peaks in the power system, forecasts are inaccurate and prone to high numbers of errors. In this paper, our contributions comprise a proposed data-mining scheme for demand modeling through peak detection, as well as the use of this information to feed the forecasting system. For this purpose, we have taken a different approach from that of time series forecasting, representing it as a two-stage pattern recognition problem. We have developed a peak classification model followed by a forecasting model to estimate an aggregated demand volume. We have utilized a set of machine learning algorithms to benefit from both accurate detection of the peaks and precise forecasts, as applied to the Polish power system. The key finding is that the algorithms can detect 96.3% of electricity peaks (load value equal to or above the 99th percentile of the load distribution) and deliver accurate forecasts, with a mean absolute percentage error (MAPE) of 3.10% and a resistant mean absolute percentage error (r-MAPE) of 2.70% for the 24 h forecasting horizon.
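
    A compact sketch of the two ingredients quoted above, labeling peaks at the 99th load percentile and scoring forecasts with MAPE and a resistant MAPE, follows; the load data are synthetic, and reading "resistant" as median-based is an assumption, since the abstract does not define it.

```python
import numpy as np

def label_peaks(load, q=99):
    """Flag intervals whose load is at or above the q-th percentile,
    mirroring the peak definition quoted in the abstract."""
    return load >= np.percentile(load, q)

def mape(y, yhat):
    return 100.0 * np.mean(np.abs((y - yhat) / y))

def r_mape(y, yhat):
    # Median-based variant; an assumption, as the abstract gives no formula.
    return 100.0 * np.median(np.abs((y - yhat) / y))

rng = np.random.default_rng(5)
hours = np.arange(720)
load = 1000 + 200 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 30, 720)
forecast = load + rng.normal(0, 40, 720)

print(label_peaks(load).sum(), mape(load, forecast), r_mape(load, forecast))
```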

  2. Statistical variability comparison in MODIS and AERONET derived aerosol optical depth over Indo-Gangetic Plains using time series modeling.

    Science.gov (United States)

    Soni, Kirti; Parmar, Kulwinder Singh; Kapoor, Sangeeta; Kumar, Nishant

    2016-05-15

    Many studies of Aerosol Optical Depth (AOD) have been carried out using data derived from the Moderate Resolution Imaging Spectroradiometer (MODIS), but the accuracy of satellite data in comparison to ground data derived from the AErosol RObotic NETwork (AERONET) has always been questionable. To overcome this situation, a comparative study of comprehensive ground-based and satellite data for the period 2001-2012 is modeled. A time series model is used for accurate prediction of AOD, and the statistical variability is compared to assess the performance of the model in both cases. Root mean square error (RMSE), mean absolute percentage error (MAPE), stationary R-squared, R-squared, maximum absolute percentage error (MaxAPE), normalized Bayesian information criterion (NBIC) and Ljung-Box methods are used to check the applicability and validity of the developed ARIMA models, revealing significant precision in the model performance. It was found that it is possible to predict the AOD by statistical modeling using time series obtained from past MODIS and AERONET data as input. Moreover, the results show that MODIS data can be formed from AERONET data by adding 0.251627 ± 0.133589 and vice versa by subtracting. From the forecast of AODs for the next four years (2013-2017) using the developed ARIMA model, it is concluded that the forecasted ground AOD shows an increasing trend. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Prevalence and cost of hospital medical errors in the general and elderly United States populations.

    Science.gov (United States)

    Mallow, Peter J; Pandya, Bhavik; Horblyuk, Ruslan; Kaplan, Harold S

    2013-12-01

    The primary objective of this study was to quantify the differences in the prevalence rate and costs of hospital medical errors between the general population and an elderly population aged ≥65 years. Methods from an actuarial study of medical errors were modified to identify medical errors in the Premier Hospital Database using data from 2009. Visits with more than four medical errors were removed from the population to avoid over-estimation of cost. Prevalence rates were calculated based on the total number of inpatient visits. There were 3,466,596 total inpatient visits in 2009. Of these, 1,230,836 (36%) occurred in people aged ≥ 65. The prevalence rate was 49 medical errors per 1000 inpatient visits in the general cohort and 79 medical errors per 1000 inpatient visits for the elderly cohort. The top 10 medical errors accounted for more than 80% of the total in the general cohort and the 65+ cohort. The most costly medical error for the general population was postoperative infection ($569,287,000). Pressure ulcers were most costly ($347,166,257) in the elderly population. This study was conducted with a hospital administrative database, and assumptions were necessary to identify medical errors in the database. Further, there was no method to identify errors of omission or misdiagnoses within the database. This study indicates that prevalence of hospital medical errors for the elderly is greater than the general population and the associated cost of medical errors in the elderly population is quite substantial. Hospitals which further focus their attention on medical errors in the elderly population may see a significant reduction in costs due to medical errors as a disproportionate percentage of medical errors occur in this age group.

  4. Automated objective determination of percentage of malignant nuclei for mutation testing.

    Science.gov (United States)

    Viray, Hollis; Coulter, Madeline; Li, Kevin; Lane, Kristin; Madan, Aruna; Mitchell, Kisha; Schalper, Kurt; Hoyt, Clifford; Rimm, David L

    2014-01-01

    Detection of DNA mutations in tumor tissue can be a critical companion diagnostic test before prescription of a targeted therapy. Each method for detection of these mutations is associated with an analytic sensitivity that is a function of the percentage of tumor cells present in the specimen. Currently, tumor cell percentage is visually estimated, resulting in an ordinal and highly variant result for a biologically continuous variable. We proposed that this aspect of DNA mutation testing could be standardized by developing a computer algorithm capable of accurately determining the percentage of malignant nuclei in an image of a hematoxylin and eosin-stained tissue. Using inForm software, we developed an algorithm to calculate the percentage of malignant cells in histologic specimens of colon adenocarcinoma. A criterion standard was established by manually counting malignant and benign nuclei. Three pathologists also estimated the percentage of malignant nuclei in each image. Algorithm #9 had a median deviation from the criterion standard of 5.4% on the training set and 6.2% on the validation set. Compared with pathologist estimation, Algorithm #9 showed a similar ability to determine percentage of malignant nuclei. This method represents a potential future tool to assist in determining the percent of malignant nuclei present in a tissue section. Further validation of this algorithm or an improved algorithm may have value to more accurately assess percentage of malignant cells for companion diagnostic mutation testing.

  5. An Improved CO2-Crude Oil Minimum Miscibility Pressure Correlation

    Directory of Open Access Journals (Sweden)

    Hao Zhang

    2015-01-01

    Full Text Available Minimum miscibility pressure (MMP), which plays an important role in miscible flooding, is a key parameter in determining whether crude oil and gas are completely miscible. On the basis of 210 groups of CO2-crude oil system minimum miscibility pressure data, an improved CO2-crude oil system minimum miscibility pressure correlation was built by a modified conjugate gradient method and a global optimizing method. The new correlation is a uniform empirical correlation to calculate the MMP for both thin oil and heavy oil and is expressed as a function of reservoir temperature, C7+ molecular weight of crude oil, and mole fractions of volatile components (CH4 and N2) and intermediate components (CO2, H2S, and C2-C6) of crude oil. Compared to the eleven most popular and relatively high-accuracy CO2-oil system MMP correlations in the previous literature, using nine other groups of CO2-oil MMP experimental data that had not been used to develop the new correlation, it is found that the new empirical correlation provides the best reproduction of the nine groups of CO2-oil MMP experimental data, with a percentage average absolute relative error (%AARE) of 8% and a percentage maximum absolute relative error (%MARE) of 21%, respectively.
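
    The two error measures used to rank the correlations can be computed as follows; the experimental and calculated MMP values are invented for illustration.

```python
import numpy as np

def aare(measured, calculated):
    """Percentage average absolute relative error (%AARE)."""
    m, c = np.asarray(measured, float), np.asarray(calculated, float)
    return 100.0 * np.mean(np.abs((c - m) / m))

def mare(measured, calculated):
    """Percentage maximum absolute relative error (%MARE)."""
    m, c = np.asarray(measured, float), np.asarray(calculated, float)
    return 100.0 * np.max(np.abs((c - m) / m))

# Invented MMP values (MPa): experimental vs. correlation output.
mmp_exp = [12.4, 18.9, 25.3, 15.1, 21.7]
mmp_calc = [13.0, 17.6, 26.8, 14.6, 23.9]
print(aare(mmp_exp, mmp_calc), mare(mmp_exp, mmp_calc))
```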

  6. Absolute and Relative Socioeconomic Health Inequalities across Age Groups.

    Science.gov (United States)

    van Zon, Sander K R; Bültmann, Ute; Mendes de Leon, Carlos F; Reijneveld, Sijmen A

    2015-01-01

    The magnitude of socioeconomic health inequalities differs across age groups. It is less clear whether socioeconomic health inequalities differ across age groups by other factors that are known to affect the relation between socioeconomic position and health, like the indicator of socioeconomic position, the health outcome, gender, and whether socioeconomic health inequalities are measured in absolute or in relative terms. The aim is to investigate whether absolute and relative socioeconomic health inequalities differ across age groups by indicator of socioeconomic position, health outcome and gender. The study sample was derived from the baseline measurement of the LifeLines Cohort Study and consisted of 95,432 participants. Socioeconomic position was measured as educational level and household income. Physical and mental health were measured with the RAND-36. Age was divided into eleven 5-year age groups. Absolute inequalities were examined by comparing means. Relative inequalities were examined by comparing Gini-coefficients. Analyses were performed for both health outcomes by both educational level and household income. Analyses were performed for all age groups, and stratified by gender. Absolute and relative socioeconomic health inequalities differed across age groups by indicator of socioeconomic position, health outcome, and gender. Absolute inequalities were most pronounced for mental health by household income. They were larger in younger than older age groups. Relative inequalities were most pronounced for physical health by educational level. Gini-coefficients were largest in young age groups and smallest in older age groups. Absolute and relative socioeconomic health inequalities differed cross-sectionally across age groups by indicator of socioeconomic position, health outcome and gender. Researchers should critically consider the implications of choosing a specific age group, in addition to the indicator of socioeconomic position and health outcome
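
    Relative inequalities here are compared through Gini coefficients; a minimal sketch of that computation on invented RAND-36-style scores follows.

```python
import numpy as np

def gini(values):
    """Gini coefficient of a non-negative array (0 = perfect equality),
    via the standard sorted-index formulation."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    # G = sum_i (2i - n - 1) * x_i / (n * sum_i x_i), with x sorted ascending
    return np.sum((2 * np.arange(1, n + 1) - n - 1) * x) / (n * x.sum())

# Invented RAND-36 physical-health scores for two age groups.
young = [55, 60, 72, 80, 85, 90, 94, 96]
old = [30, 42, 50, 55, 61, 66, 70, 75]
print(gini(young), gini(old))
```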

  7. Adaptive finite element analysis of incompressible viscous flow using posteriori error estimation and control of node density distribution

    International Nuclear Information System (INIS)

    Yashiki, Taturou; Yagawa, Genki; Okuda, Hiroshi

    1995-01-01

    The adaptive finite element method based on an 'a posteriori error estimation' is known to be a powerful technique for analyzing practical engineering problems, since it excludes the intuitive aspect of the mesh subdivision and gives high accuracy with relatively low computational cost. In the adaptive procedure, both the error estimation and the mesh generation according to the error estimator are essential. In this paper, the adaptive procedure is realized by automatic mesh generation based on the control of node density distribution, which is decided according to the error estimator. The global percentage error, CPU time, the degrees of freedom and the accuracy of the solution of the adaptive procedure are compared with those of the conventional method using regular meshes. Numerical examples such as the driven cavity flows at various Reynolds numbers and the flows around a cylinder have shown the very high performance of the proposed adaptive procedure. (author)

  8. 78 FR 48789 - Loan Guaranty: Percentage to Determine Net Value

    Science.gov (United States)

    2013-08-09

    ... DEPARTMENT OF VETERANS AFFAIRS Loan Guaranty: Percentage to Determine Net Value AGENCY: Department... mortgage holders in the Department of Veterans Affairs (VA) loan guaranty program concerning the percentage to be used in calculating the purchase price of a property that secured a terminated loan. The new...

  9. Some things ought never be done: moral absolutes in clinical ethics.

    Science.gov (United States)

    Pellegrino, Edmund D

    2005-01-01

    Moral absolutes have little or no moral standing in our morally diverse modern society. Moral relativism is far more palatable for most ethicists and to the public at large. Yet, when pressed, every moral relativist will finally admit that there are some things which ought never be done. It is the rarest of moral relativists that will take rape, murder, theft, child sacrifice as morally neutral choices. In general ethics, the list of those things that must never be done will vary from person to person. In clinical ethics, however, the nature of the physician-patient relationship is such that certain moral absolutes are essential to the attainment of the good of the patient - the end of the relationship itself. These are all derivatives of the first moral absolute of all morality: Do good and avoid evil. In the clinical encounter, this absolute entails several subsidiary absolutes - act for the good of the patient, do not kill, keep promises, protect the dignity of the patient, do not lie, avoid complicity with evil. Each absolute is intrinsic to the healing and helping ends of the clinical encounter.

  10. Use of a urinary sugars biomarker to assess measurement error in self-reported sugars intake in the Nutrition and Physical Activity Assessment Study (NPAAS)

    Science.gov (United States)

    Tasevska, Natasha; Midthune, Douglas; Tinker, Lesley F.; Potischman, Nancy; Lampe, Johanna W.; Neuhouser, Marian L.; Beasley, Jeannette M.; Van Horn, Linda; Prentice, Ross L.; Kipnis, Victor

    2014-01-01

    Background Measurement error (ME) in self-reported sugars intake may be obscuring the association between sugars and cancer risk in nutritional epidemiologic studies. Methods We used 24-hour urinary sucrose and fructose as a predictive biomarker for total sugars, to assess ME in self-reported sugars intake. The Nutrition and Physical Activity Assessment Study (NPAAS) is a biomarker study within the Women’s Health Initiative (WHI) Observational Study, which includes 450 post-menopausal women aged 60–91. Food Frequency Questionnaires (FFQ), 4-day food records (4DFR) and three 24-h dietary recalls (24HRs) were collected along with sugars and energy dietary biomarkers. Results Using the biomarker, we found self-reported sugars to be substantially and roughly equally misreported across the FFQ, 4DFR and 24HR. All instruments were associated with considerable intake- and person-specific bias. Three 24HRs would provide the least attenuated risk estimate for sugars (attenuation factor, AF=0.57), followed by FFQ (AF=0.48), and 4DFR (AF=0.32), in studies of energy-adjusted sugars and disease risk. In calibration models, self-reports explained little variation in true intake (5–6% for absolute sugars; 7–18% for sugars density). Adding participants’ characteristics somewhat improved the percentage variation explained (16–18% for absolute sugars; 29–40% for sugars density). Conclusions None of the self-report instruments provided a good estimate of sugars intake, although overall 24HRs seemed to perform the best. Impact Assuming the calibrated sugars biomarker is unbiased, this analysis suggests that measuring the biomarker in a subsample of the study population for calibration purposes may be necessary for obtaining unbiased risk estimates in cancer association studies. PMID:25234237

  11. Relativistic Absolutism in Moral Education.

    Science.gov (United States)

    Vogt, W. Paul

    1982-01-01

    Discusses Emile Durkheim's "Moral Education: A Study in the Theory and Application of the Sociology of Education," which holds that morally healthy societies may vary in culture and organization but must possess absolute rules of moral behavior. Compares this moral theory with current theory and practice of American educators. (MJL)

  12. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  13. Design and validation of a portable, inexpensive and multi-beam timing light system using the Nintendo Wii hand controllers.

    Science.gov (United States)

    Clark, Ross A; Paterson, Kade; Ritchie, Callan; Blundell, Simon; Bryant, Adam L

    2011-03-01

    Commercial timing light systems (CTLS) provide precise measurement of athletes' running velocity; however, they are often expensive and difficult to transport. In this study an inexpensive, wireless and portable timing light system was created using the infrared camera in Nintendo Wii hand controllers (NWHC). System creation with gold-standard validation. A Windows-based software program using NWHC to replicate a dual-beam timing gate was created. Firstly, data collected during 2 m walking and running trials were validated against a 3D kinematic system. Secondly, data recorded during 5 m running trials at various intensities from standing or flying starts were compared to a single-beam CTLS and to the independent and average scores of three handheld stopwatch (HS) operators. Intraclass correlation coefficients and Bland-Altman plots were used to assess validity. Absolute error quartiles and the percentage of trials within absolute error threshold ranges were used to determine accuracy. The NWHC system was valid when compared against the 3D kinematic system (ICC=0.99, median absolute error (MAR)=2.95%). For the flying 5 m trials the NWHC system possessed excellent validity and precision (ICC=0.97, MAR=8%). A NWHC timing light system is inexpensive, portable and valid for assessing running velocity. Errors in the 5 m standing start trials may have been due to erroneous event detection by either the commercial or the NWHC-based timing light system. Copyright © 2010 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  14. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat error in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently Chakraborty proposed a simple technique called the packet combining scheme, in which error is corrected at the receiver from the erroneous copies. The Packet Combining (PC) scheme fails (i) when bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both cases have recently been addressed by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error-forecasting correction schemes are reported which, in combination with PRPC, offer higher throughput. (author)

  15. Short Term Prediction of Freeway Exiting Volume Based on SVM and KNN

    Directory of Open Access Journals (Sweden)

    Xiang Wang

    2015-09-01

    The model results indicate that the proposed algorithm is feasible and accurate: the Mean Absolute Percentage Error is under 10%. Compared with the results of the single KNN or SVM method, the combination of KNN and SVM improves the reliability of the prediction significantly. The proposed method can be implemented in the on-line application of exiting-volume prediction and is able to take different vehicle types into account.
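
    The record reports only that KNN and SVM were combined, not the exact combination rule; the sketch below (scikit-learn, synthetic data, illustrative settings) simply averages the two regressors' predictions and scores the result with MAPE.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.mean(np.abs((actual - predicted) / actual)) * 100.0

# Hypothetical data: features could be upstream volumes by vehicle type.
rng = np.random.default_rng(1)
X = rng.uniform(50.0, 500.0, size=(200, 3))
y = X @ np.array([0.4, 0.3, 0.2]) + rng.normal(0.0, 10.0, size=200)
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]

knn = KNeighborsRegressor(n_neighbors=5).fit(X_tr, y_tr)
svm = SVR(kernel="rbf", C=100.0).fit(X_tr, y_tr)
combined = 0.5 * (knn.predict(X_te) + svm.predict(X_te))

for name, pred in [("KNN", knn.predict(X_te)),
                   ("SVM", svm.predict(X_te)),
                   ("KNN+SVM", combined)]:
    print(f"{name}: MAPE = {mape(y_te, pred):.1f}%")
```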

  16. Effect of breed and non-genetic factors on percentage milk ...

    African Journals Online (AJOL)

    This study was done to determine the effect of breed and non-genetic factors on percentage milk composition of smallholders' dual-purpose cattle on-farm in the Ashanti Region. Fresh milk samples from various breeds of cows were assessed for percentage components of protein, fat, lactose, cholesterol, solidnon- fat and ...

  17. Effekten af absolut kumulation

    DEFF Research Database (Denmark)

    Kyvsgaard, Britta; Klement, Christian

    2012-01-01

    As part of the Finance Act for 2011, the government and the agreement parties decided to examine the rules on sentencing when several criminal offences are adjudicated at the same time and, in that connection, to assess the consequences of changing the current rules for the capacity needs of the Danish Prison and Probation Service (Kriminalforsorgen)… total fine under absolute cumulation as opposed to the moderated cumulation that currently applies…

  18. Some absolutely effective product methods

    Directory of Open Access Journals (Sweden)

    H. P. Dikshit

    1992-01-01

    Full Text Available It is proved that the product method A(C,1), where (C,1) is the Cesàro arithmetic mean matrix, is totally effective under certain conditions concerning the matrix A. This general result is applied to study absolute Nörlund summability of Fourier series and other related series.

  19. Comparison of Percentage of Syllables Stuttered With Parent-Reported Severity Ratings as a Primary Outcome Measure in Clinical Trials of Early Stuttering Treatment.

    Science.gov (United States)

    Onslow, Mark; Jones, Mark; O'Brian, Sue; Packman, Ann; Menzies, Ross; Lowe, Robyn; Arnott, Simone; Bridgman, Kate; de Sonneville, Caroline; Franken, Marie-Christine

    2018-04-17

    This report investigates whether parent-reported stuttering severity ratings (SRs) provide similar estimates of effect size as percentage of syllables stuttered (%SS) for randomized trials of early stuttering treatment with preschool children. Data sets from 3 randomized controlled trials of an early stuttering intervention were selected for analyses. Analyses included median changes and 95% confidence intervals per treatment group, Bland-Altman plots, analysis of covariance, and Spearman rho correlations. Both SRs and %SS showed large effect sizes from pretreatment to follow-up, although correlations between the 2 measures were moderate at best. Absolute agreement between the 2 measures improved as percentage reduction of stuttering frequency and severity increased, probably due to innate measurement limitations for participants with low baseline severity. Analysis of covariance for the 3 trials showed consistent results. There is no statistical reason to favor %SS over parent-reported stuttering SRs as primary outcomes for clinical trials of early stuttering treatment. However, there are logistical reasons to favor parent-reported stuttering SRs. We conclude that parent-reported rating of the child's typical stuttering severity for the week or month prior to each assessment is a justifiable alternative to %SS as a primary outcome measure in clinical trials of early stuttering treatment.

  20. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    At spectacular events, moreover, a combination of component failure and human error is often found. The Rasmussen Report and the German Risk Assessment Study in particular show, for pressurised water reactors, that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  1. Absolute measurement method of environment radon content

    International Nuclear Information System (INIS)

    Ji Changsong

    1989-11-01

    A portable environmental radon measurement device with a 40-litre decay chamber, based on the Thomas double-filter method for the absolute measurement of radon concentration, has been developed. The correctness of the Thomas double-filter absolute measurement method was verified by experiments measuring sampled gas whose radon density was known theoretically. In addition, the intrinsic uncertainty of the method was determined in these experiments. The confidence of the device is about 95%, the sensitivity is better than 0.37 Bq/m³, and the intrinsic uncertainty is less than 10%. The results show that the selected measurement and structural parameters are reasonable and the experimental methods are acceptable. In this method, the influence on the measured values of the radioactive equilibrium between radon and its daughters, of the ratio of combined daughters to total daughters, and of the fraction of charged particles is excluded both in theory and in the experimental procedure. The Thomas double-filter formula for the absolute measurement of radon is applicable to a cylindrical decay chamber, and its applicability is also verified when the diameter of the exit filter is much smaller than the diameter of the inlet filter.

  2. Study on absolute humidity influence of NRL-1 measuring apparatus for radon

    International Nuclear Information System (INIS)

    Shan Jian; Xiao Detao; Zhao Guizhi; Zhou Qingzhi; Liu Yan; Qiu Shoukang; Meng Yecheng; Xiong Xinming; Liu Xiaosong; Ma Wenrong

    2014-01-01

    The effects of absolute humidity and temperature on the NRL-1 radon measuring apparatus were studied in this paper. By controlling the radon activity concentration in the radon laboratory at the University of South China and improving the temperature and humidity adjustment strategy, correction factors under different absolute humidities were obtained, and a correction curve between 1.90 and 14.91 g/m³ was derived. The results show that when the absolute humidity is less than 2.4 g/m³, the collection efficiency of the NRL-1 measuring apparatus tends to be constant and the humidity correction factor is close to 1. Above this level, however, the correction factor increases nonlinearly with absolute humidity. (authors)

  3. Performance evaluations of continuous glucose monitoring systems: precision absolute relative deviation is part of the assessment.

    Science.gov (United States)

    Obermaier, Karin; Schmelzeisen-Redeker, Günther; Schoemaker, Michael; Klötzer, Hans-Martin; Kirchsteiger, Harald; Eikmeier, Heino; del Re, Luigi

    2013-07-01

    Even though a Clinical and Laboratory Standards Institute proposal exists on the design of studies and performance criteria for continuous glucose monitoring (CGM) systems, it has not yet led to a consistent evaluation of different systems, as no consensus has been reached on the reference method to evaluate them or on acceptance levels. As a consequence, performance assessment of CGM systems tends to be inconclusive, and a comparison of the outcome of different studies is difficult. Published information and available data (as presented in this issue of Journal of Diabetes Science and Technology by Freckmann and coauthors) are used to assess the suitability of several frequently used methods [International Organization for Standardization, continuous glucose error grid analysis, mean absolute relative deviation (MARD), precision absolute relative deviation (PARD)] when assessing performance of CGM systems in terms of accuracy and precision. The combined use of MARD and PARD seems to allow for better characterization of sensor performance. The use of different quantities for calibration and evaluation, e.g., capillary blood using a blood glucose (BG) meter versus venous blood using a laboratory measurement, introduces an additional error source. Using BG values measured in more or less large intervals as the only reference leads to a significant loss of information in comparison with the continuous sensor signal and possibly to an erroneous estimation of sensor performance during swings. Both can be improved using data from two identical CGM sensors worn by the same patient in parallel. Evaluation of CGM performance studies should follow an identical study design, including sufficient swings in glycemia. At least a part of the study participants should wear two identical CGM sensors in parallel. All data available should be used for evaluation, both by MARD and PARD, a good PARD value being a precondition to trust a good MARD value. Results should be analyzed and
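
    As a concrete reading of the two headline metrics, here is a minimal sketch of MARD (accuracy against a reference method) and PARD (precision between two identical sensors worn in parallel); normalizing the paired-sensor difference by the pair mean is one common convention, not necessarily the exact formula of the cited study.

```python
import numpy as np

def mard(cgm, reference):
    """Mean Absolute Relative Difference (%) of CGM readings against
    reference glucose values: an accuracy measure."""
    cgm, reference = np.asarray(cgm), np.asarray(reference)
    return np.mean(np.abs(cgm - reference) / reference) * 100.0

def pard(sensor_a, sensor_b):
    """Precision Absolute Relative Difference (%) between two identical
    sensors worn by the same patient: a precision measure (assumed here
    to be normalized by the pair mean)."""
    a, b = np.asarray(sensor_a), np.asarray(sensor_b)
    return np.mean(np.abs(a - b) / ((a + b) / 2.0)) * 100.0

ref = np.array([90.0, 120.0, 150.0, 180.0])   # mg/dl, illustrative
s1 = np.array([95.0, 113.0, 158.0, 171.0])
s2 = np.array([92.0, 117.0, 149.0, 176.0])
print(f"MARD = {mard(s1, ref):.1f}%, PARD = {pard(s1, s2):.1f}%")
```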

  4. A new hybrid support vector machine–wavelet transform approach for estimation of horizontal global solar radiation

    International Nuclear Information System (INIS)

    Mohammadi, Kasra; Shamshirband, Shahaboddin; Tong, Chong Wen; Arif, Muhammad; Petković, Dalibor; Ch, Sudheer

    2015-01-01

    Highlights: • Horizontal global solar radiation (HGSR) is predicted based on a new hybrid approach. • Support Vector Machines and Wavelet Transform algorithm (SVM–WT) are combined. • Different sets of meteorological elements are used to predict HGSR. • The precision of SVM–WT is assessed thoroughly against ANN, GP and ARMA. • SVM–WT would be an appealing approach to predict HGSR and outperforms others. - Abstract: In this paper, a new hybrid approach by combining the Support Vector Machine (SVM) with Wavelet Transform (WT) algorithm is developed to predict horizontal global solar radiation. The predictions are conducted on both daily and monthly mean scales for an Iranian coastal city. The proposed SVM–WT method is compared against other existing techniques to demonstrate its efficiency and viability. Three different sets of parameters are served as inputs to establish three models. The results indicate that the model using relative sunshine duration, difference between air temperatures, relative humidity, average temperature and extraterrestrial solar radiation as inputs shows higher performance than other models. The statistical analysis demonstrates that SVM–WT approach enjoys very good performance and outperforms other approaches. For the best SVM–WT model, the obtained statistical indicators of mean absolute percentage error, mean absolute bias error, root mean square error, relative root mean square error and coefficient of determination for daily estimation are 6.9996%, 0.8405 MJ/m², 1.4245 MJ/m², 7.9467% and 0.9086, respectively. Also, for monthly mean estimation the values are 3.2601%, 0.5104 MJ/m², 0.6618 MJ/m², 3.6935% and 0.9742, respectively. Based upon relative percentage error, for the best SVM–WT model, 88.70% of daily predictions fall within the acceptable range of −10% to +10%.

  5. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    Science.gov (United States)

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  6. Absolutely minimal extensions of functions on metric spaces

    International Nuclear Information System (INIS)

    Milman, V A

    1999-01-01

    Extensions of a real-valued function from the boundary ∂X₀ of an open subset X₀ of a metric space (X,d) to X₀ are discussed. For the broad class of initial data coming under discussion (linearly bounded functions) locally Lipschitz extensions to X₀ that preserve localized moduli of continuity are constructed. In the set of these extensions an absolutely minimal extension is selected, which was considered before by Aronsson for Lipschitz initial functions in the case X₀ ⊂ Rⁿ. An absolutely minimal extension can be regarded as an ∞-harmonic function, that is, a limit of p-harmonic functions as p→+∞. The proof of the existence of absolutely minimal extensions in a metric space with intrinsic metric is carried out by the Perron method. To this end, ∞-subharmonic, ∞-superharmonic, and ∞-harmonic functions on a metric space are defined and their properties are established.

  7. Absolute carrier phase effects in the two-color excitation of dipolar molecules

    International Nuclear Information System (INIS)

    Brown, Alex; Meath, W.J.; Kondo, A.E.

    2002-01-01

    The pump-probe excitation of a two-level dipolar (d≠0) molecule, where the pump frequency is tuned to the energy level separation while the probe frequency is extremely small, is examined theoretically as an example of absolute phase control of excitation processes. The state populations depend on the probe field's absolute carrier phase but are independent of the pump field's absolute carrier phase. Interestingly, the absolute phase effects occur for pulse durations much longer and field intensities much weaker than those required to see such effects in single pulse excitation

  8. Determination of absolute detection efficiencies for detectors of interest in homeland security

    International Nuclear Information System (INIS)

    Ayaz-Maierhafer, Birsen; DeVol, Timothy A.

    2007-01-01

    The absolute total and absolute peak detection efficiencies of gamma ray detector materials NaI:Tl, CdZnTe, HPGe, HPXe, LaBr₃:Ce and LaCl₃:Ce were simulated and compared to that of polyvinyltoluene (PVT). The dimensions of the PVT detector were 188.82 cm × 60.96 cm × 5.08 cm, which is a typical size for a single-panel portal monitor. The absolute total and peak detection efficiencies for these detector materials for the point, line and spherical source geometries of ⁶⁰Co (1332 keV), ¹³⁷Cs (662 keV) and ²⁴¹Am (59.5 keV) were simulated at various source-to-detector distances using the Monte Carlo N-Particle software (MCNP5-V1.30). The comparison of the absolute total detection efficiencies for point, line and spherical source geometries of ⁶⁰Co and ¹³⁷Cs at different source-to-detector distances showed that the absolute detection efficiency for PVT is higher relative to the other detectors of typical dimensions for that material. However, the absolute peak detection efficiency of some of these detectors is higher relative to PVT; for example, the absolute peak detection efficiencies of NaI:Tl (7.62 cm diameter × 7.62 cm long), HPGe (7.62 cm diameter × 7.62 cm long), HPXe (11.43 cm diameter × 60.96 cm long), and LaCl₃:Ce (5.08 cm diameter × 5.08 cm long) are all greater than that of a 188.82 cm × 60.96 cm × 5.08 cm PVT detector for ⁶⁰Co and ¹³⁷Cs for all geometries studied. The absolute total and absolute peak detection efficiencies of a right circular cylinder of NaI:Tl with various diameters and thicknesses were determined for a point source. The effect of changing the solid angle on the NaI:Tl detectors showed that with increasing solid angle and detector thickness, the absolute efficiency increases. This work establishes a common basis for differentiating detector materials for passive portal monitoring of gamma ray radiation.

  9. Regional and site-specific absolute humidity data for use in tritium dose calculations

    International Nuclear Information System (INIS)

    Etnier, E.L.

    1980-01-01

    Due to the potential variability in average absolute humidity over the continental U.S., and the dependence of atmospheric ³H specific activity on absolute humidity, availability of regional absolute humidity data is of value in estimating the radiological significance of ³H releases. Most climatological data are in the form of relative humidity, which must be converted to absolute humidity for dose calculations. Absolute humidity was calculated for 218 points across the U.S., using the 1977 annual summary of U.S. Climatological Data, and is given in a table. Mean regional values are shown on a map. (author)
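
    The conversion step mentioned here is standard meteorology; a minimal sketch, assuming the Magnus approximation for saturation vapor pressure (constants 6.112 hPa, 17.62, 243.12 °C) and the ideal gas law for water vapor:

```python
import math

def absolute_humidity(temp_c, rel_humidity_pct):
    """Convert relative humidity (%) at air temperature (deg C) into
    absolute humidity (g of water vapor per m^3 of air)."""
    # Saturation vapor pressure over water, hPa (Magnus approximation)
    e_sat = 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))
    e = (rel_humidity_pct / 100.0) * e_sat           # actual vapor pressure, hPa
    r_v = 461.5                                      # J/(kg K), water vapor gas constant
    rho = (e * 100.0) / (r_v * (temp_c + 273.15))    # kg/m^3 (hPa -> Pa)
    return rho * 1000.0                              # g/m^3

print(f"{absolute_humidity(20.0, 60.0):.1f} g/m^3")  # ~10.4 g/m^3 at 20 C, 60% RH
```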

  10. Absolute quantitation of gallium-67 citrate accumulation in the lungs and its importance for the evaluation of disease activity in pulmonary sarcoidosis

    International Nuclear Information System (INIS)

    Myslivecek, M.; Husak, V.; Budikova, M.; Koranda, P.; Kolek, V.

    1992-01-01

    Our modification of a method for the absolute quantification of gallium-67 uptake in lungs with a scintillation camera and computer is described. The uptake of ⁶⁷Ga in the lungs, expressed as a percentage of administered radioactivity, was determined by the transmission-emission method. We proved theoretically and experimentally that a ⁶⁷Ga planar source could be replaced with a ⁵⁷Co planar source. The performance of lung perfusion scans allows a more accurate delineation of the regions of interest on gallium scans. The method was applied to control subjects (n=27) and to patients (n=114) suffering from biopsy-proven pulmonary sarcoidosis (28 with inactive and 86 with active disease). The obtained results were compared with chest X-ray findings, the percentage of lymphocytes in the bronchoalveolar fluid (BAF-ly%), and serum angiotensin-converting enzyme (SACE) values. The method seems suitable for the assessment of disease activity in sarcoidosis. It is more accurate in detecting parenchymal involvement in lung sarcoidosis than the commonly used X-ray criteria. No correlation was found between ⁶⁷Ga uptake and the BAF-ly% and SACE values. (orig.)

  11. Absolute decay parametric instability of high-temperature plasma

    International Nuclear Information System (INIS)

    Zozulya, A.A.; Silin, V.P.; Tikhonchuk, V.T.

    1986-01-01

    A new absolute decay parametric instability with a wide spatial localization region is shown to be possible near the critical plasma density. Its excitation is conditioned by the distributed feedback of counter-propagating Langmuir waves that occurs during the parametric decay of the incident and reflected components of the pumping wave. In a hot plasma with a temperature of the order of a kiloelectronvolt, its threshold is lower than that of the known convective decay parametric instability. The minimum absolute instability threshold is shown to be realized under conditions of spatial parametric resonance of higher orders.

  12. Reliability of estimated glomerular filtration rate in patients treated with platinum containing therapy

    DEFF Research Database (Denmark)

    Lauritsen, Jakob; Gundgaard, Maria G; Mortensen, Mette S

    2014-01-01

    …(median percentage error), precision (median absolute percentage error) and accuracy (p10 and p30). The precision of carboplatin dosage based on eGFR was calculated. Data on mGFR, eGFR, and PCr were available in 390 patients, with a total of ∼1,600 measurements. Median PCr and mGFR decreased synchronously after chemotherapy, yielding high bias and low precision for most estimates. Post-chemotherapy, bias ranged from -0.2% (MDRD after four cycles) to 33.8% (CKD-EPI after five cycles+), precision ranged from 11.6% (MDRD after four cycles) to 33.8% (CKD-EPI after five cycles+) and accuracy (p30) ranged from 37.5% (CKD-EPI after five cycles+) to 86.9% (MDRD after four cycles). Although MDRD appeared acceptable after chemotherapy because of its high accuracy, this equation underestimated GFR in all other measurements. Before and years after treatment, Cockcroft-Gault and Wright offered the best results…
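
    For clarity, the three figures of merit quoted above can be computed as in this small sketch (my reading of the stated definitions, not the authors' code):

```python
import numpy as np

def gfr_performance(estimated, measured):
    """Bias = median percentage error, precision = median absolute
    percentage error, accuracy pN = share of estimates within +/- N%
    of the measured GFR."""
    est = np.asarray(estimated, dtype=float)
    meas = np.asarray(measured, dtype=float)
    pct_err = (est - meas) / meas * 100.0
    return {
        "bias_%": np.median(pct_err),
        "precision_%": np.median(np.abs(pct_err)),
        "p10_%": np.mean(np.abs(pct_err) <= 10.0) * 100.0,
        "p30_%": np.mean(np.abs(pct_err) <= 30.0) * 100.0,
    }

# Illustrative values only (ml/min per 1.73 m^2)
print(gfr_performance([55, 72, 90, 61], [60, 70, 80, 75]))
```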

  13. Forecasting the Number of Australian Tourists Visiting Bali Using a Time Varying Parameter (TVP) Model

    Directory of Open Access Journals (Sweden)

    I PUTU GEDE DIAN GERRY SUWEDAYANA

    2016-08-01

    Full Text Available The purpose of this research is to forecast the number of Australian tourist arrivals in Bali using a Time Varying Parameter (TVP) model, with the inflation rate of Indonesia and the AUD-to-IDR exchange rate from January 2010 to December 2015 as explanatory variables. The TVP model is specified as a state space model and estimated by the Kalman filter algorithm. The results show that the TVP model can be used to forecast the number of Australian tourist arrivals in Bali because it satisfies the assumptions that the residuals are normally distributed and that the residuals in the measurement and transition equations are uncorrelated. The estimated TVP model is . This model has a mean absolute percentage error (MAPE) equal to and a root mean square percentage error (RMSPE) equal to . The number of Australian tourist arrivals in Bali for the next five periods is predicted to be: ; ; ; ; and (January - May 2016).
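
    As an illustration of the estimation machinery (not the authors' exact specification), here is a minimal Kalman filter for a regression with random-walk coefficients; the noise variances, priors and data layout are placeholder assumptions.

```python
import numpy as np

def tvp_filter(y, X, state_var=1e-4, obs_var=1.0):
    """Kalman filter for the time-varying-parameter regression
    y_t = x_t' beta_t + v_t,  beta_t = beta_{t-1} + w_t.
    Returns the filtered coefficient paths (T x k)."""
    T, k = X.shape
    beta = np.zeros(k)            # state estimate
    P = np.eye(k) * 100.0         # state covariance (vague prior)
    Q = np.eye(k) * state_var     # transition noise covariance
    out = np.zeros((T, k))
    for t in range(T):
        x = X[t]
        P = P + Q                        # predict (random-walk transition)
        s = x @ P @ x + obs_var          # innovation variance
        gain = P @ x / s                 # Kalman gain
        beta = beta + gain * (y[t] - x @ beta)
        P = P - np.outer(gain, x @ P)    # update covariance
        out[t] = beta
    return out

# Columns of X might be [constant, inflation, AUD/IDR exchange rate]
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(72), rng.normal(5, 1, 72), rng.normal(10, 2, 72)])
y = X @ np.array([50.0, 2.0, 1.5]) + rng.normal(0, 1, 72)
print(tvp_filter(y, X)[-1])  # final filtered coefficients
```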

  14. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  15. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
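
    A toy numpy sketch of the two strategies described above, with a deliberately simple stand-in for the Monte Carlo analysis; the interaction term is there to show why the two estimates can differ.

```python
import numpy as np

def result(params):
    """Stand-in for the observed result of one full MC run."""
    a, b, c = params
    return 10.0 + 2.0 * a - 1.5 * b + 0.5 * a * c  # note the a*c interaction

nominal = np.zeros(3)
sigma = np.array([1.0, 0.5, 2.0])  # one standard deviation per systematic

# Unisim: vary one parameter at a time by +1 sigma, combine in quadrature.
shifts = [result(nominal + sigma[i] * np.eye(3)[i]) - result(nominal)
          for i in range(3)]
unisim = np.sqrt(np.sum(np.square(shifts)))

# Multisim: vary all parameters at once, drawn from normal distributions.
rng = np.random.default_rng(42)
draws = rng.normal(nominal, sigma, size=(10_000, 3))
multisim = np.std([result(p) for p in draws])

print(f"unisim: {unisim:.2f}, multisim: {multisim:.2f}")
```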

  16. Confidence-Accuracy Calibration in Absolute and Relative Face Recognition Judgments

    Science.gov (United States)

    Weber, Nathan; Brewer, Neil

    2004-01-01

    Confidence-accuracy (CA) calibration was examined for absolute and relative face recognition judgments as well as for recognition judgments from groups of stimuli presented simultaneously or sequentially (i.e., simultaneous or sequential mini-lineups). When the effect of difficulty was controlled, absolute and relative judgments produced…

  17. Auditory working memory predicts individual differences in absolute pitch learning.

    Science.gov (United States)

    Van Hedger, Stephen C; Heald, Shannon L M; Koch, Rachelle; Nusbaum, Howard C

    2015-07-01

    Absolute pitch (AP) is typically defined as the ability to label an isolated tone as a musical note in the absence of a reference tone. At first glance the acquisition of AP note categories seems like a perceptual learning task, since individuals must assign a category label to a stimulus based on a single perceptual dimension (pitch) while ignoring other perceptual dimensions (e.g., loudness, octave, instrument). AP, however, is rarely discussed in terms of domain-general perceptual learning mechanisms. This is because AP is typically assumed to depend on a critical period of development, in which early exposure to pitches and musical labels is thought to be necessary for the development of AP precluding the possibility of adult acquisition of AP. Despite this view of AP, several previous studies have found evidence that absolute pitch category learning is, to an extent, trainable in a post-critical period adult population, even if the performance typically achieved by this population is below the performance of a "true" AP possessor. The current studies attempt to understand the individual differences in learning to categorize notes using absolute pitch cues by testing a specific prediction regarding cognitive capacity related to categorization - to what extent does an individual's general auditory working memory capacity (WMC) predict the success of absolute pitch category acquisition. Since WMC has been shown to predict performance on a wide variety of other perceptual and category learning tasks, we predict that individuals with higher WMC should be better at learning absolute pitch note categories than individuals with lower WMC. Across two studies, we demonstrate that auditory WMC predicts the efficacy of learning absolute pitch note categories. These results suggest that a higher general auditory WMC might underlie the formation of absolute pitch categories for post-critical period adults. Implications for understanding the mechanisms that underlie the

  18. Knowing what to expect, forecasting monthly emergency department visits: A time-series analysis.

    Science.gov (United States)

    Bergs, Jochen; Heerinckx, Philipe; Verelst, Sandra

    2014-04-01

    To evaluate an automatic forecasting algorithm in order to predict the number of monthly emergency department (ED) visits one year ahead. We collected retrospective data of the number of monthly visiting patients for a 6-year period (2005-2011) from 4 Belgian Hospitals. We used an automated exponential smoothing approach to predict monthly visits during the year 2011 based on the first 5 years of the dataset. Several in- and post-sample forecasting accuracy measures were calculated. The automatic forecasting algorithm was able to predict monthly visits with a mean absolute percentage error ranging from 2.64% to 4.8%, indicating an accurate prediction. The mean absolute scaled error ranged from 0.53 to 0.68 indicating that, on average, the forecast was better compared with in-sample one-step forecast from the naïve method. The applied automated exponential smoothing approach provided useful predictions of the number of monthly visits a year in advance. Copyright © 2013 Elsevier Ltd. All rights reserved.
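
    A minimal sketch of this kind of pipeline, assuming statsmodels' additive Holt-Winters implementation and synthetic monthly counts in place of the hospital data; the MAPE and mean-absolute-scaled-error definitions follow the usual conventions.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def mape(y, f):
    return np.mean(np.abs((y - f) / y)) * 100.0

def mase(y, f, train):
    """Forecast MAE scaled by the in-sample one-step naive MAE;
    values below 1 beat the naive method on average."""
    return np.mean(np.abs(y - f)) / np.mean(np.abs(np.diff(train)))

# Synthetic monthly ED visits: 6 years, train on 5, forecast year 6.
rng = np.random.default_rng(7)
m = np.arange(72)
visits = 3000 + 10 * m + 300 * np.sin(2 * np.pi * m / 12) + rng.normal(0, 60, 72)
train, test = visits[:60], visits[60:]

fit = ExponentialSmoothing(train, trend="add", seasonal="add",
                           seasonal_periods=12).fit()
fc = fit.forecast(12)
print(f"MAPE = {mape(test, fc):.2f}%  MASE = {mase(test, fc, train):.2f}")
```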

  19. Depicting mass flow rate of R134a /LPG refrigerant through straight and helical coiled adiabatic capillary tubes of vapor compression refrigeration system using artificial neural network approach

    Science.gov (United States)

    Gill, Jatinder; Singh, Jagdev

    2018-07-01

    In this work, an experimental investigation is carried out with an R134a/LPG refrigerant mixture to characterize the mass flow rate through straight and helical coil adiabatic capillary tubes in a vapor compression refrigeration system. Various experiments were conducted under steady-state conditions, by changing capillary tube length, inner diameter, coil diameter and degree of subcooling. The results showed that the mass flow rate through the helical coil capillary tube was about 5-16% lower than through the straight capillary tube. A dimensionless correlation and Artificial Neural Network (ANN) models were developed to predict mass flow rate. Both the dimensionless correlation and the ANN model predictions agreed well with experimental results, yielding an absolute fraction of variance of 0.961 and 0.988, a root mean square error of 0.489 and 0.275, and a mean absolute percentage error of 4.75% and 2.31%, respectively. The results suggest that the ANN model gives better statistical predictions than the dimensionless correlation model.
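
    The paper's specific network architecture is not given in the abstract, so the sketch below uses a small scikit-learn multilayer perceptron on synthetic data whose inputs mirror the experimental factors named above; every number here is invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 300
X = np.column_stack([
    rng.uniform(1.0, 3.0, n),          # tube length, m
    rng.uniform(1.0, 2.0, n),          # inner diameter, mm
    rng.choice([0.0, 40.0, 80.0], n),  # coil diameter, mm (0 = straight)
    rng.uniform(2.0, 10.0, n),         # degree of subcooling, K
])
# Synthetic stand-in for measured mass flow rate (kg/h): helical coils
# (coil diameter > 0) flow roughly 10% less than straight tubes here.
y = 5.0 * X[:, 1] ** 2 / X[:, 0] * (1.0 - 0.1 * (X[:, 2] > 0)) + 0.2 * X[:, 3]

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=0))
model.fit(X[:250], y[:250])
pred = model.predict(X[250:])
print(f"MAPE = {np.mean(np.abs((y[250:] - pred) / y[250:])) * 100:.2f}%")
```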

  20. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  1. El problema de la conciencia en Los errores de José Revueltas

    Directory of Open Access Journals (Sweden)

    Evodio Escalante Betancourt

    2014-07-01

    Full Text Available Los errores is the great novel that José Revueltas was destined to write, the one that concentrates his period of intellectual, philosophical and literary maturity. "Man is an erroneous being, and therein lies his tragic condition," says Jacobo Ponce, a character in the novel and Revueltas's alter ego. Error, reduced to the thickness of a single hair yet set in cosmic dimensions, reveals itself as an abyss when placed against the category of absolute knowledge that G. W. F. Hegel prefigures in his Phenomenology of Spirit. This brief essay gives an account of Revueltas's intellectual reflections on Hegel's postulates of self-consciousness and absolute knowledge.

  2. Calibrating the absolute amplitude scale for air showers measured at LOFAR

    International Nuclear Information System (INIS)

    Nelles, A.; Hörandel, J. R.; Karskens, T.; Krause, M.; Corstanje, A.; Enriquez, J. E.; Falcke, H.; Rachen, J. P.; Rossetto, L.; Schellart, P.; Buitink, S.; Erdmann, M.; Krause, R.; Haungs, A.; Hiller, R.; Huege, T.; Link, K.; Schröder, F. G.; Norden, M. J.; Scholten, O.

    2015-01-01

    Air showers induced by cosmic rays create nanosecond pulses detectable at radio frequencies. These pulses have been measured successfully in the past few years at the LOw-Frequency ARray (LOFAR) and are used to study the properties of cosmic rays. For a complete understanding of this phenomenon and the underlying physical processes, an absolute calibration of the detecting antenna system is needed. We present three approaches that were used to check and improve the antenna model of LOFAR and to provide an absolute calibration of the whole system for air shower measurements. Two methods are based on calibrated reference sources and one on a calibration approach using the diffuse radio emission of the Galaxy, optimized for short data-sets. An accuracy of 19% in amplitude is reached. The absolute calibration is also compared to predictions from air shower simulations. These results are used to set an absolute energy scale for air shower measurements and can be used as a basis for an absolute scale for the measurement of astronomical transients with LOFAR

  3. AC Own Motion Percentage of Randomly Sampled Cases

    Data.gov (United States)

    Social Security Administration — Longitudinal report detailing the numbers and percentages of Appeals Council (AC) own motion review actions taken on un-appealed favorable hearing level decisions...

  4. New design and facilities for the International Database for Absolute Gravity Measurements (AGrav): A support for the Establishment of a new Global Absolute Gravity Reference System

    Science.gov (United States)

    Wziontek, Hartmut; Falk, Reinhard; Bonvalot, Sylvain; Rülke, Axel

    2017-04-01

    After about 10 years of successful joint operation by BGI and BKG, the International Database for Absolute Gravity Measurements "AGrav" (see references hereafter) underwent a major revision. The outdated web interface was replaced by a responsive, high-level web application framework based on Python and built on top of Pyramid. Functionality was added, such as interactive time series plots and a report generator, and the interactive map-based station overview was updated completely, now comprising clustering and the classification of stations. Furthermore, the database backend was migrated to PostgreSQL for better support of the application framework and for long-term availability. As comparisons of absolute gravimeters (AGs) become essential to realize a precise and uniform gravity standard, the database was extended to document their results at the international and regional level, including comparisons performed at monitoring stations equipped with SGs. In this way it will be possible to link different AGs and to trace their equivalence back to the key comparisons under the auspices of the International Committee for Weights and Measures (CIPM) as the best metrological realization of the absolute gravity standard. The new AGrav database thereby accommodates the demands of the new Global Absolute Gravity Reference System as recommended by IAG Resolution No. 2 adopted in Prague in 2015. The new database will be presented with a focus on the new user interface and new functionality, calling on all institutions involved in absolute gravimetry to participate and contribute their information, so as to build up as complete a picture as possible of high-precision absolute gravimetry and to improve its visibility. A Digital Object Identifier (DOI) will be provided by BGI to contributors, to give better traceability and to facilitate the referencing of their gravity surveys. Links and references: BGI mirror site: http://bgi.obs-mip.fr/data-products/Gravity-Databases/Absolute-Gravity-data/ BKG mirror site: http

  5. Interleaving cerebral CT perfusion with neck CT angiography. Pt. I. Proof of concept and accuracy of cerebral perfusion values

    Energy Technology Data Exchange (ETDEWEB)

    Oei, Marcel T.H.; Meijer, Frederick J.A.; Woude, Willem-Jan van der; Smit, Ewoud J.; Ginneken, Bram van; Prokop, Mathias; Manniesing, Rashindra [Radboud University Medical Center, Department of Radiology and Nuclear Medicine, P.O. Box 9101, Nijmegen (Netherlands)

    2017-06-15

    We present a novel One-Step-Stroke protocol for wide-detector CT scanners that interleaves cerebral CTP with volumetric neck CTA (vCTA). We evaluate whether the resulting time gap in CTP affects the accuracy of CTP values. Cerebral CTP maps were retrospectively obtained from 20 patients with suspected acute ischemic stroke and served as the reference standard. To simulate a 4-s gap for interleaving CTP with vCTA, we eliminated one acquisition at various time points of CTP starting from the bolus arrival time (BAT). Optimal timing of the vCTA was evaluated. At the time point with the fewest errors, we evaluated elimination of a second time point (6-s gap). Mean absolute percentage errors of all perfusion values remained below 10% in all patients when eliminating any one time point in the CTP sequence starting from the BAT. Acquiring the vCTA 2 s after reaching a threshold of 70 HU resulted in the lowest errors (mean < 3.0%). Eliminating a second time point still resulted in mean errors < 3.5%. CBF and CBV showed no significant differences in perfusion values; only MTT did. However, the percentage errors were always below 10% compared to the original protocol. Interleaving cerebral CTP with neck CTA is feasible, with minor effects on the perfusion values. (orig.)

  6. Absolute nutrient concentration measurements in cell culture media: 1H q-NMR spectra and data to compare the efficiency of pH-controlled protein precipitation versus CPMG or post-processing filtering approaches

    Directory of Open Access Journals (Sweden)

    Luca Goldoni

    2016-09-01

    Full Text Available The NMR spectra and data reported in this article refer to the research article titled “A simple and accurate protocol for absolute polar metabolite quantification in cell cultures using q-NMR” [1]. We provide the 1H q-NMR spectra of cell culture media (DMEM after removal of serum proteins, which show the different efficiency of various precipitating solvents, the solvent/DMEM ratios, and pH of the solution. We compare the data of the absolute nutrient concentrations, measured by PULCON external standard method, before and after precipitation of serum proteins and those obtained using CPMG (Carr-Purcell-Meiboom-Gill sequence or applying post-processing filtering algorithms to remove, from the 1H q-NMR spectra, the proteins signal contribution. For each of these approaches, the percent error in the absolute value of every measurement for all the nutrients is also plotted as accuracy assessment. Keywords: 1H NMR, pH-controlled serum removal, PULCON, Accuracy, CPMG, Deconvolution

  7. The good, the bad and the outliers: automated detection of errors and outliers from groundwater hydrographs

    Science.gov (United States)

    Peterson, Tim J.; Western, Andrew W.; Cheng, Xiang

    2018-03-01

    Suspicious groundwater-level observations are common and can arise for many reasons ranging from an unforeseen biophysical process to bore failure and data management errors. Unforeseen observations may provide valuable insights that challenge existing expectations and can be deemed outliers, while monitoring and data handling failures can be deemed errors, and, if ignored, may compromise trend analysis and groundwater model calibration. Ideally, outliers and errors should be identified but to date this has been a subjective process that is not reproducible and is inefficient. This paper presents an approach to objectively and efficiently identify multiple types of errors and outliers. The approach requires only the observed groundwater hydrograph, requires no particular consideration of the hydrogeology, the drivers (e.g. pumping) or the monitoring frequency, and is freely available in the HydroSight toolbox. Herein, the algorithms and time-series model are detailed and applied to four observation bores with varying dynamics. The detection of outliers was most reliable when the observation data were acquired quarterly or more frequently. Outlier detection where the groundwater-level variance is nonstationary or the absolute trend increases rapidly was more challenging, with the former likely to result in an under-estimation of the number of outliers and the latter an overestimation in the number of outliers.

  8. Forecasting of Water Consumptions Expenditure Using Holt-Winter’s and ARIMA

    Science.gov (United States)

    Razali, S. N. A. M.; Rusiman, M. S.; Zawawi, N. I.; Arbin, N.

    2018-04-01

    This study was carried out to forecast the water consumption expenditure of a Malaysian university, specifically Universiti Tun Hussein Onn Malaysia (UTHM). The proposed Holt-Winters and Auto-Regressive Integrated Moving Average (ARIMA) models were applied to forecast the water consumption expenditure, in Ringgit Malaysia, from 2006 to 2014. The two models were compared using the Mean Absolute Percentage Error (MAPE) and the Mean Absolute Deviation (MAD) as performance measures. The ARIMA model showed better forecast accuracy, with lower MAPE and MAD values, and the analysis indicates that an ARIMA(2,1,4) model provides a reasonable forecasting tool for university campus water usage.
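
    A hedged sketch of the ARIMA half of such a comparison, using statsmodels and synthetic monthly expenditures in place of the UTHM data; the (2,1,4) order is taken from the abstract, everything else is assumed.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def mape(y, f):
    return np.mean(np.abs((y - f) / y)) * 100.0

def mad(y, f):
    return np.mean(np.abs(y - f))

# Synthetic monthly water expenditure (RM), 2006-2014; hold out one year.
rng = np.random.default_rng(11)
t = np.arange(108)
expense = 50_000 + 100 * t + 5_000 * np.sin(2 * np.pi * t / 12) \
          + rng.normal(0.0, 1_500.0, 108)
train, test = expense[:96], expense[96:]

fc = ARIMA(train, order=(2, 1, 4)).fit().forecast(12)
print(f"MAPE = {mape(test, fc):.2f}%  MAD = {mad(test, fc):.0f} RM")
```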

  9. Absolute cross sections from the ''boomerang model'' for resonant electron-molecule scattering

    International Nuclear Information System (INIS)

    Dube, L.; Herzenberg, A.

    1979-01-01

    The boomerang model is used to calculate absolute cross sections near the ²Πg shape resonance in e–N₂ scattering. The calculated cross sections are shown to satisfy detailed balancing. The exchange of electrons is taken into account. A parametrized complex-potential curve for the intermediate N₂⁻ ion is determined from a small part of the experimental data, and then used to calculate other properties. The calculations are in good agreement with the absolute cross sections for vibrational excitation from the ground state, the absolute cross section for v = 1 → 2, and the absolute total cross section.

  10. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  11. Quantificação da falha na madeira em juntas coladas utilizando técnicas de visão artificial / Measuring wood failure percentage using a machine vision system

    Directory of Open Access Journals (Sweden)

    Christovão Pereira Abrahão

    2003-02-01

    measurement can replace the manual grid method. The proposed algorithms presented an average absolute error of 3%, as compared to the manual grid method.

  12. A highly accurate absolute gravimetric network for Albania, Kosovo and Montenegro

    Science.gov (United States)

    Ullrich, Christian; Ruess, Diethard; Butta, Hubert; Qirko, Kristaq; Pavicevic, Bozidar; Murat, Meha

    2016-04-01

    The objective of this project is to establish a basic gravity network in Albania, Kosovo and Montenegro to enable further investigations of geodetic and geophysical issues. To this end, absolute gravity measurements were performed in these countries for the first time in history. The Norwegian mapping authority Kartverket is assisting the national mapping authorities in Kosovo (KCA) (Kosovo Cadastral Agency - Agjencia Kadastrale e Kosovës), Albania (ASIG) (Autoriteti Shtetëror i Informacionit Gjeohapësinor) and Montenegro (REA) (Real Estate Administration of Montenegro - Uprava za nekretnine Crne Gore) in improving their geodetic frameworks. The gravity measurements are funded by Kartverket. The absolute gravimetric measurements were performed by BEV (Federal Office of Metrology and Surveying) with the absolute gravimeter FG5-242. As a national metrology institute (NMI), the Metrology Service of the BEV maintains the national standards for the realisation of the legal units of measurement and ensures their international equivalence and recognition. The laser and clock of the absolute gravimeter were calibrated before and after the measurements. The absolute gravimetric survey was carried out from September to October 2015, and all 8 scheduled stations were successfully measured: three stations in Montenegro, two in Kosovo and three in Albania. The stations are distributed over the countries so as to establish a gravity network for each country. The vertical gradients were measured at all 8 stations with the relative gravimeter Scintrex CG5. The high quality of some of the absolute gravity stations makes them usable for gravity monitoring activities in the future. The measurement uncertainties of the absolute gravity measurements are around 2.5 µGal at all stations (1 µGal = 10⁻⁸ m/s²). In Montenegro, the large gravity difference of 200 mGal between the stations Zabljak and Podgorica can even be used for the calibration of relative gravimeters.

  13. Absolute Hugoniot measurements from a spherically convergent shock using x-ray radiography

    Science.gov (United States)

    Swift, Damian C.; Kritcher, Andrea L.; Hawreliak, James A.; Lazicki, Amy; MacPhee, Andrew; Bachmann, Benjamin; Döppner, Tilo; Nilsen, Joseph; Collins, Gilbert W.; Glenzer, Siegfried; Rothman, Stephen D.; Kraus, Dominik; Falcone, Roger W.

    2018-05-01

    The canonical high pressure equation of state measurement is to induce a shock wave in the sample material and measure two mechanical properties of the shocked material or shock wave. For accurate measurements, the experiment is normally designed to generate a planar shock which is as steady as possible in space and time, and a single state is measured. A converging shock strengthens as it propagates, so a range of shock pressures is induced in a single experiment. However, equation of state measurements must then account for spatial and temporal gradients. We have used x-ray radiography of spherically converging shocks to determine states along the shock Hugoniot. The radius-time history of the shock, and thus its speed, was measured by radiographing the position of the shock front as a function of time using an x-ray streak camera. The density profile of the shock was then inferred from the x-ray transmission at each instant of time. Simultaneous measurement of the density at the shock front and the shock speed determines an absolute mechanical Hugoniot state. The density profile was reconstructed using the known, unshocked density which strongly constrains the density jump at the shock front. The radiographic configuration and streak camera behavior were treated in detail to reduce systematic errors. Measurements were performed on the Omega and National Ignition Facility lasers, using a hohlraum to induce a spatially uniform drive over the outside of a solid, spherical sample and a laser-heated thermal plasma as an x-ray source for radiography. Absolute shock Hugoniot measurements were demonstrated for carbon-containing samples of different composition and initial density, up to temperatures at which K-shell ionization reduced the opacity behind the shock. Here we present the experimental method using measurements of polystyrene as an example.
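
    For orientation, the link between the two measured quantities and a Hugoniot state follows from the standard Rankine-Hugoniot jump conditions (general shock physics, not a formula specific to this paper):

```latex
% 0 = unshocked state, 1 = shocked state,
% U_s = shock speed, u_p = particle speed behind the shock
\begin{align}
  \rho_0 U_s &= \rho_1 \,(U_s - u_p) && \text{(mass conservation)}\\
  P_1 - P_0  &= \rho_0 \, U_s \, u_p && \text{(momentum conservation)}
\end{align}
% Measuring U_s from the radius-time history and rho_1 from the
% radiographic density profile gives u_p from the first relation and
% then the pressure P_1 from the second: one absolute Hugoniot point.
```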

  14. ERF/ERFC, Calculation of Error Function, Complementary Error Function, Probability Integrals

    International Nuclear Information System (INIS)

    Vogel, J.E.

    1983-01-01

    1 - Description of problem or function: ERF and ERFC are used to compute values of the error function and complementary error function for any real number. They may be used to compute other related functions such as the normal probability integrals. 4. Method of solution: The error function and complementary error function are approximated by rational functions. Three such rational approximations are used, the choice depending on the magnitude of x (the outermost region being x .GE. 4.0). In the first region the error function is computed directly and the complementary error function is computed via the identity erfc(x)=1.0-erf(x). In the other two regions the complementary error function is computed directly and the error function is computed from the identity erf(x)=1.0-erfc(x). The error function and complementary error function are real-valued functions of any real argument. The range of the error function is (-1,1). The range of the complementary error function is (0,2). 5. Restrictions on the complexity of the problem: The user is cautioned against using ERF to compute the complementary error function via the identity erfc(x)=1.0-erf(x). This subtraction may cause partial or total loss of significance for certain values of x.
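
    The exact three-region fits used by ERF/ERFC are not reproduced in the abstract; as a sketch of the same idea, here is a classic single rational approximation (Abramowitz & Stegun 7.1.26, maximum absolute error about 1.5e-7) together with the identities the abstract describes.

```python
import math

def erf_approx(x):
    """erf(x) via the Abramowitz & Stegun 7.1.26 rational approximation,
    extended to negative arguments with erf(-x) = -erf(x)."""
    sign = -1.0 if x < 0.0 else 1.0
    x = abs(x)
    t = 1.0 / (1.0 + 0.3275911 * x)
    poly = t * (0.254829592 + t * (-0.284496736 + t * (1.421413741
               + t * (-1.453152027 + t * 1.061405429))))
    return sign * (1.0 - poly * math.exp(-x * x))

def erfc_approx(x):
    """erfc(x) = 1 - erf(x). As the abstract cautions, this subtraction
    loses significance for large x, which is why production routines
    compute erfc directly in the outer regions."""
    return 1.0 - erf_approx(x)

print(erf_approx(1.0), math.erf(1.0))  # both ~0.842700...
```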

  15. The Adaptive-Clustering and Error-Correction Method for Forecasting Cyanobacteria Blooms in Lakes and Reservoirs

    Directory of Open Access Journals (Sweden)

    Xiao-zhe Bai

    2017-01-01

    Full Text Available Globally, cyanobacteria blooms frequently occur, and effective prediction of cyanobacteria blooms in lakes and reservoirs could constitute an essential proactive strategy for water-resource protection. However, cyanobacteria blooms are very complicated because of the internal stochastic nature of the system evolution and the external uncertainty of the observation data. In this study, an adaptive-clustering algorithm is introduced to obtain some typical operating intervals. In addition, the number of nearest neighbors used for modeling was optimized by particle swarm optimization. Finally, a fuzzy linear regression method based on error-correction was used to revise the model dynamically near the operating point. We found that the combined method can characterize the evolutionary track of cyanobacteria blooms in lakes and reservoirs. The model constructed in this paper is compared to other cyanobacteria-bloom forecasting methods (e.g., phase space reconstruction and traditional-clustering linear regression), and the average relative error and average absolute error are then used to compare the accuracies of these models. The results suggest that the proposed model is superior. As such, the newly developed approach achieves more precise predictions, which can be used to prevent the further deterioration of the water environment.

  16. Absolute and relative dosimetry for ELIMED

    Energy Technology Data Exchange (ETDEWEB)

    Cirrone, G. A. P.; Schillaci, F.; Scuderi, V. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Institute of Physics Czech Academy of Science, ELI-Beamlines project, Na Slovance 2, Prague (Czech Republic); Cuttone, G.; Candiano, G.; Musumarra, A.; Pisciotta, P.; Romano, F. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania (Italy); Carpinelli, M. [INFN Sezione di Cagliari, c/o Dipartimento di Fisica, Università di Cagliari, Cagliari (Italy); Leonora, E.; Randazzo, N. [INFN-Sezione di Catania, Via Santa Sofia 64, Catania (Italy); Presti, D. Lo [INFN-Sezione di Catania, Via Santa Sofia 64, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy); Raffaele, L. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and INFN-Sezione di Catania, Via Santa Sofia 64, Catania (Italy); Tramontana, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy); Cirio, R.; Sacchi, R.; Monaco, V. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino, Italy and Università di Torino, Dipartimento di Fisica, Via P.Giuria, 1 10125 Torino (Italy); Marchetto, F.; Giordanengo, S. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino (Italy)

    2013-07-26

    The definition of detectors, methods and procedures for the absolute and relative dosimetry of laser-driven proton beams is a crucial step toward the clinical use of this new kind of beam. Hence, one of the ELIMED tasks will be the definition of procedures aiming to obtain an absolute dose measurement at the end of the transport beamline with an accuracy as close as possible to the one required for clinical applications (i.e., of the order of 5% or less). Relative dosimetry procedures must be established as well: they are necessary in order to determine and verify the beam dose distributions and to monitor the beam fluence and the energy spectra during irradiations. Radiochromic films, CR39, a Faraday cup, a Secondary Emission Monitor (SEM) and a transmission ionization chamber will be considered, designed and studied in order to perform a full dosimetric characterization of the ELIMED proton beam.

  17. Simple method for absolute calibration of geophones, seismometers, and other inertial vibration sensors

    International Nuclear Information System (INIS)

    Kann, Frank van; Winterflood, John

    2005-01-01

    A simple but powerful method is presented for calibrating geophones, seismometers, and other inertial vibration sensors, including passive accelerometers. The method requires no cumbersome or expensive fixtures such as shaker platforms and can be performed using a standard instrument commonly available in the field. An absolute calibration is obtained using the reciprocity property of the device, based on the standard mathematical model for such inertial sensors. It requires only a simple electrical measurement of the impedance of the sensor as a function of frequency to determine the parameters of the model and hence the sensitivity function. The method is particularly convenient if one of these parameters, namely the suspended mass, is known. In this case, no additional mechanical apparatus is required and only a single set of impedance measurements yields the desired calibration function. Moreover, this measurement can be made with the device in situ. However, the novel and most powerful aspect of the method is its ability to accurately determine the effective suspended mass. For this, the impedance measurement is made with the device hanging from a simple spring or flexible cord (depending on the orientation of its sensitive axis). To complete the calibration, the device is weighed to determine its total mass. All the required calibration parameters, including the suspended mass, are then determined from a least-squares fit to the impedance as a function of frequency. A demonstration using both a 4.5 Hz geophone and a 1 Hz seismometer shows that the method can yield accurate absolute calibrations with an error of 0.1% or better, assuming no a priori knowledge of any parameters.
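    The least-squares step can be sketched as follows. The sketch assumes a conventional second-order inertial-sensor impedance model — coil resistance R plus a motional term with lumped amplitude A = G²/m, resonance frequency f0 and quality factor Q — which is an assumption for illustration, not the authors' exact formulation; all numbers are invented. Complex impedance data are split into real and imaginary parts so that scipy's real-valued least-squares routine can fit them jointly:

```python
import numpy as np
from scipy.optimize import curve_fit

def impedance(f, R, A, f0, Q):
    """Generic inertial-sensor impedance sketch (an assumption, not the
    paper's exact equations): coil resistance R plus a motional term with
    lumped amplitude A = G^2/m, resonance f0 and quality factor Q."""
    w, w0 = 2 * np.pi * f, 2 * np.pi * f0
    return R + A * 1j * w / (w0**2 - w**2 + 1j * w * w0 / Q)

def stacked(f, R, A, f0, Q):
    """Stack real and imaginary parts so curve_fit sees real numbers."""
    z = impedance(f, R, A, f0, Q)
    return np.concatenate([z.real, z.imag])

# Synthetic "measured" impedance of a hypothetical 4.5 Hz geophone.
rng = np.random.default_rng(0)
freq = np.linspace(1.0, 100.0, 200)
true = (375.0, 7.0e4, 4.5, 2.0)          # R [ohm], A, f0 [Hz], Q (made up)
z = impedance(freq, *true)
z += rng.normal(0, 0.5, freq.size) + 1j * rng.normal(0, 0.5, freq.size)

popt, pcov = curve_fit(stacked, freq,
                       np.concatenate([z.real, z.imag]),
                       p0=(300.0, 5.0e4, 4.0, 1.5))
print("fitted R, A=G^2/m, f0, Q:", popt)
# With the suspended mass m known (weighed, per the paper), G follows
# from A = G^2/m, and the sensitivity function from the fitted model.
```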

  18. Predicted percentage dissatisfied with ankle draft.

    Science.gov (United States)

    Liu, S; Schiavon, S; Kabanshi, A; Nazaroff, W W

    2017-07-01

    Draft is unwanted local convective cooling. The draft risk model of Fanger et al. (Energy and Buildings 12, 21-39, 1988) estimates the percentage of people dissatisfied with air movement due to overcooling at the neck. There is no model for predicting draft at the ankles, which is more relevant to stratified air distribution systems such as underfloor air distribution (UFAD) and displacement ventilation (DV). We developed a model for the predicted percentage dissatisfied with ankle draft (PPD-AD) based on laboratory experiments with 110 college students. We assessed the effect on ankle draft of various combinations of air speed (nominal range: 0.1-0.6 m/s), temperature (nominal range: 16.5-22.5°C), turbulence intensity (at ankles), sex, and clothing insulation; thermal sensation and air speed at the ankles proved to be the dominant parameters affecting draft. The seated subjects accepted a vertical temperature difference of up to 8°C between the ankles (0.1 m) and the head (1.1 m) at neutral whole-body thermal sensation, 5°C more than the maximum difference recommended in existing standards. The developed ankle draft model can be implemented in thermal comfort and air diffuser testing standards. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  19. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Science.gov (United States)

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  20. Philosophy as Inquiry Aimed at the Absolute Knowledge

    Directory of Open Access Journals (Sweden)

    Ekaterina Snarskaya

    2017-09-01

    Full Text Available Philosophy as absolute knowledge has been studied from two different but closely related approaches: historical and logical. The first approach exposes four main stages in the history of European metaphysics that marked out types of "philosophical absolutism": the evolution of philosophy brought to light the metaphysics of being, of method, of morals and of logic. All of them are associated with the names of Aristotle, Bacon/Descartes, Kant and Hegel. These forms are then considered in the second approach, which defines them as the subject matter of philosophy as such. Due to their overall, comprehensive character, the focus of philosophy on them justifies its claim to absoluteness, insofar as philosophy aims at comprehending the world's unity regardless of the philosopher's background, values and other preferences. And that is its prerogative, since no other form of consciousness sets itself this kind of aim. Thus, philosophy is defined as an everlasting attempt to succeed in conceiving the world in all its multifold manifestations. This article attempts to clarify the claim of philosophy to absolute knowledge.

  1. Does Accrual Management Impair the Performance of Earnings-Based Valuation Models?

    OpenAIRE

    Lucie Courteau; Jennifer L. Kao; Yao Tian

    2013-01-01

    This study examines empirically how the presence of accrual management may affect firm valuation. We compare the performance of earnings-based and non-earnings-based valuation models, represented by Residual Income Model (RIM) and Discounted Cash Flow (DCF), respectively, based on the absolute percentage pricing and valuation errors for two subsets of US firms: “Suspect” firms that are likely to have engaged in accrual management and “Normal” firms matched on industry, year and size. Results ...

  2. Automated Quantification of the Landing Error Scoring System With a Markerless Motion-Capture System.

    Science.gov (United States)

    Mauntel, Timothy C; Padua, Darin A; Stanley, Laura E; Frank, Barnett S; DiStefano, Lindsay J; Peck, Karen Y; Cameron, Kenneth L; Marshall, Stephen W

    2017-11-01

    The Landing Error Scoring System (LESS) can be used to identify individuals with an elevated risk of lower extremity injury. The limitation of the LESS is that raters identify movement errors from video replay, which is time-consuming and, therefore, may limit its use by clinicians. A markerless motion-capture system may be capable of automating LESS scoring, thereby removing this obstacle.   To determine the reliability of an automated markerless motion-capture system for scoring the LESS.   Cross-sectional study.   United States Military Academy.   A total of 57 healthy, physically active individuals (47 men, 10 women; age = 18.6 ± 0.6 years, height = 174.5 ± 6.7 cm, mass = 75.9 ± 9.2 kg).   Participants completed 3 jump-landing trials that were recorded by standard video cameras and a depth camera. Their movement quality was evaluated by expert LESS raters (standard video recording) using the LESS rubric and by software that automates LESS scoring (depth-camera data). We recorded an error for a LESS item if it was present on at least 2 of 3 jump-landing trials. We calculated κ statistics, prevalence- and bias-adjusted κ (PABAK) statistics, and percentage agreement for each LESS item. Interrater reliability was evaluated between the 2 expert rater scores and between a consensus expert score and the markerless motion-capture system score.   We observed moderate reliability between the 2 expert LESS raters (average κ = 0.45 ± 0.35, average PABAK = 0.67 ± 0.34; percentage agreement = 0.83 ± 0.17). The markerless motion-capture system had similar reliability with consensus expert scores (average κ = 0.48 ± 0.40, average PABAK = 0.71 ± 0.27; percentage agreement = 0.85 ± 0.14). However, reliability was poor for 5 LESS items in both LESS score comparisons.   A markerless motion-capture system had the same level of reliability as expert LESS raters, suggesting that an automated system can accurately assess movement. Therefore, clinicians can use such an automated system to assess jump-landing movement quality without time-consuming video scoring.
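    For reference, the agreement statistics reported above can be computed per LESS item from two binary rating vectors (error present/absent). A minimal sketch — the function name and the toy counts are illustrative, not the study's data:

```python
import numpy as np

def agreement_stats(a, b):
    """Percentage agreement, Cohen's kappa, and prevalence- and
    bias-adjusted kappa (PABAK) for two binary rating vectors."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                  # observed agreement
    pa, pb = a.mean(), b.mean()           # marginal "error present" rates
    pe = pa * pb + (1 - pa) * (1 - pb)    # agreement expected by chance
    kappa = (po - pe) / (1 - pe) if pe < 1 else float("nan")
    pabak = 2 * po - 1                    # PABAK for the binary case
    return po, kappa, pabak

# Toy example: one LESS item scored for 20 participants by two raters.
rater1 = [1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0]
rater2 = [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0]
po, kappa, pabak = agreement_stats(rater1, rater2)
print(f"agreement={po:.2f}  kappa={kappa:.2f}  PABAK={pabak:.2f}")
```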

  3. Lunch-time food choices in preschoolers: Relationships between absolute and relative intakes of different food categories, and appetitive characteristics and weight.

    Science.gov (United States)

    Carnell, S; Pryor, K; Mais, L A; Warkentin, S; Benson, L; Cheng, R

    2016-08-01

    Children's appetitive characteristics measured by parent-report questionnaires are reliably associated with body weight, as well as with behavioral tests of appetite, but relatively little is known about relationships with food choice. As part of a larger preloading study, we served 4- to 5-year-olds from primary school classes five school lunches at which they were presented with the same standardized multi-item meal. Parents completed Child Eating Behavior Questionnaire (CEBQ) sub-scales assessing satiety responsiveness (CEBQ-SR), food responsiveness (CEBQ-FR) and enjoyment of food (CEBQ-EF), and children were weighed and measured. Despite differing preload conditions, children showed remarkable consistency of intake patterns across all five meals, with day-to-day intra-class correlations in absolute and percentage intake of each food category ranging from 0.78 to 0.91. Higher CEBQ-SR was associated with lower mean intake of all food categories across all five meals, with the weakest association apparent for snack foods. Higher CEBQ-FR was associated with higher intake of white bread and fruits and vegetables, and higher CEBQ-EF was associated with greater intake of all categories, with the strongest association apparent for white bread. Analyses of intake of each food group as a percentage of total intake, treated here as an index of the child's choice to consume relatively more or relatively less of each different food category when composing their total lunch-time meal, further suggested that children who were higher in CEBQ-SR ate relatively more snack foods and relatively less fruits and vegetables, while children with higher CEBQ-EF ate relatively less snack foods and relatively more white bread. Higher absolute intakes of white bread and snack foods were associated with higher BMI z score. CEBQ sub-scale associations with food intake variables were largely unchanged by controlling for daily metabolic needs. However, descriptive comparisons of lunch intakes with

  4. 7 CFR 981.47 - Method of establishing salable and reserve percentages.

    Science.gov (United States)

    2010-01-01

    ...) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF... effectuate the declared policy of the act, he shall designate such percentages. Except as provided in § 981... percentages, the Secretary shall give consideration to the ratio of estimated trade demand (domestic plus...

  5. Absolute pitch: a case study.

    Science.gov (United States)

    Vernon, P E

    1977-11-01

    The auditory skill known as 'absolute pitch' is discussed, and it is shown that this differs greatly in accuracy of identification or reproduction of musical tones from ordinary discrimination of 'tonal height' which is to some extent trainable. The present writer possessed absolute pitch for almost any tone or chord over the normal musical range, from about the age of 17 to 52. He then started to hear all music one semitone too high, and now at the age of 71 it is heard a full tone above the true pitch. Tests were carried out under controlled conditions, in which 68 to 95 per cent of notes were identified as one semitone or one tone higher than they should be. Changes with ageing seem more likely to occur in the elasticity of the basilar membrane mechanisms than in the long-term memory which is used for aural analysis of complex sounds. Thus this experience supports the view that some resolution of complex sounds takes place at the peripheral sense organ, and this provides information which can be incorrect, for interpretation by the cortical centres.

  6. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    Science.gov (United States)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.

    2015-04-01

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr⁻¹ in the 1960s to 0.3 Pg C yr⁻¹ in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr⁻¹ in the 1960s to almost 1.0 Pg C yr⁻¹ during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere.
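    At its simplest, the budget-audit arithmetic differences emission terms against the atmospheric growth rate and combines their independent uncertainties in quadrature. A toy sketch — the fossil fuel and growth-rate figures are the round decadal values quoted above, while the land-use uncertainty is an assumed round number, and the paper's full error model (with temporally correlated errors) is not reproduced here:

```python
import math

# Round 2-sigma decadal uncertainties (Pg C / yr); land use is assumed.
sigma_fossil_2000s = 1.0   # fossil fuel emission uncertainty (per abstract)
sigma_landuse      = 0.5   # assumed round figure for land-use emissions
sigma_growth_2000s = 0.3   # atmospheric growth-rate uncertainty (per abstract)

# Net global C uptake = (fossil + land use) - atmospheric growth.
# With independent errors, uncertainties add in quadrature.
sigma_uptake = math.sqrt(sigma_fossil_2000s**2 +
                         sigma_landuse**2 +
                         sigma_growth_2000s**2)
print(f"2-sigma uncertainty of net uptake ~ {sigma_uptake:.2f} Pg C/yr")
```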

  7. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while in the reverse case the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
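    A toy version of the comparison is easy to set up: take a linear observable with several systematic parameters, then estimate the total systematic variance either from one-at-a-time 1σ shifts (unisim) or from runs with all parameters drawn at random (multisim). Everything below — the observable, the sensitivities, the run counts — is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
sens = np.array([0.8, -0.5, 0.3, 0.2])   # invented sensitivities d(obs)/d(param)

def observable(params, n_events=10_000):
    """Linear toy observable plus MC statistical noise."""
    stat_noise = rng.normal(0, 1.0 / np.sqrt(n_events))
    return sens @ params + stat_noise

nominal = observable(np.zeros(4))

# Unisim: one MC run per parameter, shifted by +1 sigma; add in quadrature.
shifts = [observable(np.eye(4)[i]) - nominal for i in range(4)]
var_unisim = sum(s**2 for s in shifts)

# Multisim: many runs with all parameters drawn from N(0, 1).
runs = [observable(rng.normal(0, 1, 4)) for _ in range(200)]
var_multisim = np.var(runs, ddof=1)

print(f"true variance     : {np.sum(sens**2):.3f}")
print(f"unisim estimate   : {var_unisim:.3f}")
print(f"multisim estimate : {var_multisim:.3f}")
```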

  8. Absolute calibration technique for spontaneous fission sources

    International Nuclear Information System (INIS)

    Zucker, M.S.; Karpf, E.

    1984-01-01

    An absolute calibration technique for a spontaneously fissioning nuclide (which involves no arbitrary parameters) allows unique determination of the detector efficiency for that nuclide, hence of the fission source strength.

  9. A hybrid model for dissolved oxygen prediction in aquaculture based on multi-scale features

    Directory of Open Access Journals (Sweden)

    Chen Li

    2018-03-01

    Full Text Available To increase the prediction accuracy of dissolved oxygen (DO) in aquaculture, a hybrid model based on multi-scale features using ensemble empirical mode decomposition (EEMD) is proposed. Firstly, the original DO datasets are decomposed by EEMD into several components. Secondly, these components are used to reconstruct four terms: a high frequency term, an intermediate frequency term, a low frequency term and a trend term. Thirdly, because the high and intermediate frequency terms fluctuate violently, they are predicted using least squares support vector regression (LSSVR). The fluctuation of the low frequency term is gentle and periodic, so it is modeled by a BP neural network optimized by mind evolutionary computation (MEC-BP). The trend term is predicted using a grey model (GM) because it is nearly linear. Finally, the prediction values of the DO datasets are calculated as the sum of the forecast values of all terms. The experimental results demonstrate that our hybrid model outperforms the EEMD-ELM (extreme learning machine based on EEMD), EEMD-BP and MEC-BP models based on the mean absolute error (MAE), mean absolute percentage error (MAPE), mean square error (MSE) and root mean square error (RMSE). Our hybrid model is proven to be an effective approach to predict aquaculture DO.
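    These four accuracy metrics recur throughout this collection, so a compact reference implementation is worth spelling out. A minimal sketch (with the usual caveat that MAPE is undefined wherever an observed value is zero):

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """MAE, MAPE (%), MSE and RMSE between observed and predicted series."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / y_true))   # requires y_true != 0
    mse = np.mean(err**2)
    rmse = np.sqrt(mse)
    return {"MAE": mae, "MAPE": mape, "MSE": mse, "RMSE": rmse}

# Toy dissolved-oxygen series (mg/L) and a hypothetical model's forecast.
observed = [7.9, 8.1, 7.6, 7.2, 7.8, 8.4]
predicted = [7.7, 8.3, 7.5, 7.4, 7.6, 8.2]
print(forecast_metrics(observed, predicted))
```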

  10. Absolute luminosity measurements with the LHCb detector at the LHC

    CERN Document Server

    Aaij, R; Adinolfi, M; Adrover, C; Affolder, A; Ajaltouni, Z; Albrecht, J; Alessio, F; Alexander, M; Alkhazov, G; Alvarez Cartelle, P; Alves, A A; Amato, S; Amhis, Y; Anderson, J; Appleby, R B; Aquines Gutierrez, O; Archilli, F; Arrabito, L; Artamonov, A; Artuso, M; Aslanides, E; Auriemma, G; Bachmann, S; Back, J J; Bailey, D S; Balagura, V; Baldini, W; Barlow, R J; Barschel, C; Barsuk, S; Barter, W; Bates, A; Bauer, C; Bauer, Th; Bay, A; Bediaga, I; Belous, K; Belyaev, I; Ben-Haim, E; Benayoun, M; Bencivenni, G; Benson, S; Benton, J; Bernet, R; Bettler, M-O; van Beuzekom, M; Bien, A; Bifani, S; Bizzeti, A; Bjørnstad, P M; Blake, T; Blanc, F; Blanks, C; Blouw, J; Blusk, S; Bobrov, A; Bocci, V; Bondar, A; Bondar, N; Bonivento, W; Borghi, S; Borgia, A; Bowcock, T J V; Bozzi, C; Brambach, T; van den Brand, J; Bressieux, J; Brett, D; Brisbane, S; Britsch, M; Britton, T; Brook, N H; Brown, H; Büchler-Germann, A; Burducea, I; Bursche, A; Buytaert, J; Cadeddu, S; Caicedo Carvajal, J M; Callot, O; Calvi, M; Calvo Gomez, M; Camboni, A; Campana, P; Carbone, A; Carboni, G; Cardinale, R; Cardini, A; Carson, L; Carvalho Akiba, K; Casse, G; Cattaneo, M; Charles, M; Charpentier, Ph; Chiapolini, N; Ciba, K; Cid Vidal, X; Ciezarek, G; Clarke, P E L; Clemencic, M; Cliff, H V; Closier, J; Coca, C; Coco, V; Cogan, J; Collins, P; Constantin, F; Conti, G; Contu, A; Cook, A; Coombes, M; Corti, G; Cowan, G A; Currie, R; D'Almagne, B; D'Ambrosio, C; David, P; De Bonis, I; De Capua, S; De Cian, M; De Lorenzi, F; De Miranda, J M; De Paula, L; De Simone, P; Decamp, D; Deckenhoff, M; Degaudenzi, H; Deissenroth, M; Del Buono, L; Deplano, C; Deschamps, O; Dettori, F; Dickens, J; Dijkstra, H; Diniz Batista, P; Donleavy, S; Dordei, F; Dosil Suárez, A; Dossett, D; Dovbnya, A; Dupertuis, F; Dzhelyadin, R; Eames, C; Easo, S; Egede, U; Egorychev, V; Eidelman, S; van Eijk, D; Eisele, F; Eisenhardt, S; Ekelhof, R; Eklund, L; Elsasser, Ch; d'Enterria, D G; Esperante Pereira, D; Estève, L; Falabella, A; Fanchini, E; Färber, C; Fardell, G; Farinelli, C; Farry, S; Fave, V; Fernandez Albor, V; Ferro-Luzzi, M; Filippov, S; Fitzpatrick, C; Fontana, M; Fontanelli, F; Forty, R; Frank, M; Frei, C; Frosini, M; Furcas, S; Gallas Torreira, A; Galli, D; Gandelman, M; Gandini, P; Gao, Y; Garnier, J-C; Garofoli, J; Garra Tico, J; Garrido, L; Gaspar, C; Gauvin, N; Gersabeck, M; Gershon, T; Ghez, Ph; Gibson, V; Gligorov, V V; Göbel, C; Golubkov, D; Golutvin, A; Gomes, A; Gordon, H; Grabalosa Gándara, M; Graciani Diaz, R; Granado Cardoso, L A; Graugés, E; Graziani, G; Grecu, A; Gregson, S; Gui, B; Gushchin, E; Guz, Yu; Gys, T; Haefeli, G; Haen, C; Haines, S C; Hampson, T; Hansmann-Menzemer, S; Harji, R; Harnew, N; Harrison, J; Harrison, P F; He, J; Heijne, V; Hennessy, K; Henrard, P; Hernando Morata, J A; van Herwijnen, E; Hicks, E; Hofmann, W; Holubyev, K; Hopchev, P; Hulsbergen, W; Hunt, P; Huse, T; Huston, R S; Hutchcroft, D; Hynds, D; Iakovenko, V; Ilten, P; Imong, J; Jacobsson, R; Jaeger, A; Jahjah Hussein, M; Jans, E; Jansen, F; Jaton, P; Jean-Marie, B; Jing, F; John, M; Johnson, D; Jones, C R; Jost, B; Kandybei, S; Karacson, M; Karbach, T M; Keaveney, J; Kerzel, U; Ketel, T; Keune, A; Khanji, B; Kim, Y M; Knecht, M; Koblitz, S; Koppenburg, P; Kozlinskiy, A; Kravchuk, L; Kreplin, K; Kreps, M; Krocker, G; Krokovny, P; Kruse, F; Kruzelecki, K; Kucharczyk, M; Kukulak, S; Kumar, R; Kvaratskheliya, T; La Thi, V N; Lacarrere, D; Lafferty, G; Lai, A; Lambert, D; Lambert, R W; Lanciotti, E; Lanfranchi, G; Langenbruch, C; Latham, T; Le Gac, R; 
van Leerdam, J; Lees, J-P; Lefèvre, R; Leflat, A; Lefrançois, J; Leroy, O; Lesiak, T; Li, L; Li Gioi, L; Lieng, M; Liles, M; Lindner, R; Linn, C; Liu, B; Liu, G; Lopes, J H; Lopez Asamar, E; Lopez-March, N; Luisier, J; Machefert, F; Machikhiliyan, I V; Maciuc, F; Maev, O; Magnin, J; Malde, S; Mamunur, R M D; Manca, G; Mancinelli, G; Mangiafave, N; Marconi, U; Märki, R; Marks, J; Martellotti, G; Martens, A; Martin, L; Martín Sánchez, A; Martinez Santos, D; Massafferri, A; Matev, R; Mathe, Z; Matteuzzi, C; Matveev, M; Maurice, E; Maynard, B; Mazurov, A; McGregor, G; McNulty, R; Mclean, C; Meissner, M; Merk, M; Merkel, J; Messi, R; Miglioranzi, S; Milanes, D A; Minard, M-N; Monteil, S; Moran, D; Morawski, P; Mountain, R; Mous, I; Muheim, F; Müller, K; Muresan, R; Muryn, B; Musy, M; Mylroie-Smith, J; Naik, P; Nakada, T; Nandakumar, R; Nardulli, J; Nasteva, I; Nedos, M; Needham, M; Neufeld, N; Nguyen-Mau, C; Nicol, M; Nies, S; Niess, V; Nikitin, N; Oblakowska-Mucha, A; Obraztsov, V; Oggero, S; Ogilvy, S; Okhrimenko, O; Oldeman, R; Orlandea, M; Otalora Goicochea, J M; Owen, P; Pal, B; Palacios, J; Palutan, M; Panman, J; Papanestis, A; Pappagallo, M; Parkes, C; Parkinson, C J; Passaleva, G; Patel, G D; Patel, M; Paterson, S K; Patrick, G N; Patrignani, C; Pavel-Nicorescu, C; Pazos Alvarez, A; Pellegrino, A; Penso, G; Pepe Altarelli, M; Perazzini, S; Perego, D L; Perez Trigo, E; Pérez-Calero Yzquierdo, A; Perret, P; Perrin-Terrin, M; Pessina, G; Petrella, A; Petrolini, A; Pie Valls, B; Pietrzyk, B; Pilar, T; Pinci, D; Plackett, R; Playfer, S; Plo Casasus, M; Polok, G; Poluektov, A; Polycarpo, E; Popov, D; Popovici, B; Potterat, C; Powell, A; du Pree, T; Prisciandaro, J; Pugatch, V; Puig Navarro, A; Qian, W; Rademacker, J H; Rakotomiaramanana, B; Rangel, M S; Raniuk, I; Raven, G; Redford, S; Reid, M M; dos Reis, A C; Ricciardi, S; Rinnert, K; Roa Romero, D A; Robbe, P; Rodrigues, E; Rodrigues, F; Rodriguez Perez, P; Rogers, G J; Roiser, S; Romanovsky, V; Rouvinet, J; Ruf, T; Ruiz, H; Sabatino, G; Saborido Silva, J J; Sagidova, N; Sail, P; Saitta, B; Salzmann, C; Sannino, M; Santacesaria, R; Santamarina Rios, C; Santinelli, R; Santovetti, E; Sapunov, M; Sarti, A; Satriano, C; Satta, A; Savrie, M; Savrina, D; Schaack, P; Schiller, M; Schleich, S; Schmelling, M; Schmidt, B; Schneider, O; Schopper, A; Schune, M -H; Schwemmer, R; Sciubba, A; Seco, M; Semennikov, A; Senderowska, K; Sepp, I; Serra, N; Serrano, J; Seyfert, P; Shao, B; Shapkin, M; Shapoval, I; Shatalov, P; Shcheglov, Y; Shears, T; Shekhtman, L; Shevchenko, O; Shevchenko, V; Shires, A; Silva Coutinho, R; Skottowe, H P; Skwarnicki, T; Smith, A C; Smith, N A; Sobczak, K; Soler, F J P; Solomin, A; Soomro, F; Souza De Paula, B; Spaan, B; Sparkes, A; Spradlin, P; Stagni, F; Stahl, S; Steinkamp, O; Stoica, S; Stone, S; Storaci, B; Straticiuc, M; Straumann, U; Styles, N; Subbiah, V K; Swientek, S; Szczekowski, M; Szczypka, P; Szumlak, T; T'Jampens, S; Teodorescu, E; Teubert, F; Thomas, C; Thomas, E; van Tilburg, J; Tisserand, V; Tobin, M; Topp-Joergensen, S; Tran, M T; Tsaregorodtsev, A; Tuning, N; Ubeda Garcia, M; Ukleja, A; Urquijo, P; Uwer, U; Vagnoni, V; Valenti, G; Vazquez Gomez, R; Vazquez Regueiro, P; Vecchi, S; Velthuis, J J; Veltri, M; Vervink, K; Viaud, B; Videau, I; Vilasis-Cardona, X; Visniakov, J; Vollhardt, A; Voong, D; Vorobyev, A; Voss, H; Wacker, K; Wandernoth, S; Wang, J; Ward, D R; Webber, A D; Websdale, D; Whitehead, M; Wiedner, D; Wiggers, L; Wilkinson, G; Williams, M P; Williams, M; Wilson, F F; Wishahi, J; Witek, M; 
Witzeling, W; Wotton, S A; Wyllie, K; Xie, Y; Xing, F; Yang, Z; Young, R; Yushchenko, O; Zavertyaev, M; Zhang, F; Zhang, L; Zhang, W C; Zhang, Y; Zhelezov, A; Zhong, L; Zverev, E; Zvyagin, A

    2012-01-01

    Absolute luminosity measurements are of general interest for colliding-beam experiments at storage rings. These measurements are necessary to determine the absolute cross-sections of reaction processes and are valuable to quantify the performance of the accelerator. LHCb has applied two methods to determine the absolute scale of its luminosity measurements for proton-proton collisions at the LHC with a centre-of-mass energy of 7 TeV. In addition to the classic "van der Meer scan" method, a novel technique has been developed which makes use of direct imaging of the individual beams using beam-gas and beam-beam interactions. This beam imaging method is made possible by the high resolution of the LHCb vertex detector and the close proximity of the detector to the beams, and allows beam parameters such as positions, angles and widths to be determined. The results of the two methods have comparable precision and are in good agreement. Combining the two methods, an overall precision of 3.5% in the absolute lumi...

  11. Absolute calibration of TFTR helium proportional counters

    International Nuclear Information System (INIS)

    Strachan, J.D.; Diesso, M.; Jassby, D.; Johnson, L.; McCauley, S.; Munsat, T.; Roquemore, A.L.; Loughlin, M.

    1995-06-01

    The TFTR helium proportional counters are located in the central five (5) channels of the TFTR multichannel neutron collimator. These detectors were absolutely calibrated using a 14 MeV neutron generator positioned at the horizontal midplane of the TFTR vacuum vessel. The neutron generator position was scanned in centimeter steps to determine the collimator aperture width to 14 MeV neutrons and the absolute sensitivity of each channel. Neutron profiles were measured for TFTR plasmas with time resolution between 5 msec and 50 msec depending upon count rates. The He detectors were used to measure the burnup of 1 MeV tritons in deuterium plasmas, the transport of tritium in trace tritium experiments, and the residual tritium levels in plasmas following 50:50 DT experiments.

  12. Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements

    Science.gov (United States)

    Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.

    2014-01-01

    Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S; and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation: SPC; ENSCI-Droplet Measurement Technologies: DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets exceed 0.6 hPa in the free troposphere, with nearly a third exceeding 1.0 hPa at 26 km, where a 1.0 hPa error represents 5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within 5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (30 km) can exceed 10 percent (about 25 percent of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profiles by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with the addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when
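    The propagation itself is simple: an ECC ozonesonde measures an ozone partial pressure, and the mixing ratio divides it by ambient pressure, so a fractional pressure error maps one-to-one into a fractional mixing-ratio error of opposite sign. A minimal sketch with made-up numbers:

```python
# Ozone mixing ratio is partial pressure divided by ambient pressure,
# so a radiosonde pressure offset propagates directly into O3MR.
p_o3_mpa = 1.2                   # measured O3 partial pressure (mPa), made up
p_true_hpa = 20.0                # true ambient pressure at ~26 km (hPa)
p_meas_hpa = p_true_hpa + 1.0    # radiosonde reads 1.0 hPa high

o3mr_true = (p_o3_mpa * 1e-3) / (p_true_hpa * 1e2) * 1e6   # ppmv
o3mr_meas = (p_o3_mpa * 1e-3) / (p_meas_hpa * 1e2) * 1e6

pct_err = 100.0 * (o3mr_meas - o3mr_true) / o3mr_true
print(f"O3MR error from a 1 hPa offset at 20 hPa: {pct_err:.1f}%")
# ~ -4.8%: a 5% pressure error gives roughly a 5% mixing-ratio error.
```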

  13. Stimulus Probability Effects in Absolute Identification

    Science.gov (United States)

    Kent, Christopher; Lamberts, Koen

    2016-01-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of…

  14. Absolute gravity measurements in California

    Science.gov (United States)

    Zumberge, M. A.; Sasagawa, G.; Kappus, M.

    1986-08-01

    An absolute gravity meter that determines the local gravitational acceleration by timing a freely falling mass with a laser interferometer has been constructed. The instrument has made measurements at 11 sites in California, four in Nevada, and one in France. The uncertainty in the results is typically 10 microgal. Repeated measurements have been made at several of the sites; only one shows a substantial change in gravity.

  15. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    In this review article, the definition of medication errors, the scope of the medication error problem, the types of medication errors, their common causes, their monitoring, their consequences, and the prevention and management of medication errors are explained clearly, with tables that make the material easy to understand.

  16. Relational versus absolute representation in categorization.

    Science.gov (United States)

    Edwards, Darren J; Pothos, Emmanuel M; Perlman, Amotz

    2012-01-01

    This study explores relational-like and absolute-like representations in categorization. Although there is much evidence that categorization processes can involve information about both the particular physical properties of studied instances and abstract (relational) properties, there has been little work on the factors that lead to one kind of representation as opposed to the other. We tested 370 participants in 6 experiments, in which participants had to classify new items into predefined artificial categories. In 4 experiments, we observed a predominantly relational-like mode of classification, and in 2 experiments we observed a shift toward an absolute-like mode of classification. These results suggest 3 factors that promote a relational-like mode of classification: fewer items per group, more training groups, and the presence of a time delay. Overall, we propose that less information about the distributional properties of a category or weaker memory traces for the category exemplars (induced, e.g., by having smaller categories or a time delay) can encourage relational-like categorization.

  17. ERRORS IN NARRATIVE TEXT COMMITTED BY STUDENTS IN GRADE XI OF VOCATIONAL HIGH SCHOOL (SMK STATE 4 SURAKARTA

    Directory of Open Access Journals (Sweden)

    Eko Mulyono -

    2017-05-01

    Full Text Available This study aims to identify the types of errors in the students' writing, to determine the frequency of each type of error, and to investigate the causes of the errors. Three types of errors occurred in the students' writing, namely lexical errors, syntactical errors, and discourse errors. The errors can be categorized into twenty-four subcategories: wrong spelling (10.35%), wrong word selection (15.53%), omission of verb (0.74%), omission of v-ing after the preposition "for" (0.55%), addition of an unnecessary verb (0.74%), using the simple present tense to refer to the simple past (22.37%), using the simple future instead of the past future (2.40%), using an irregular past verb after a to-infinitive (2.40%), addition of final -ed after a to-infinitive (1.85%), addition of v-ing after a to-infinitive (1.11%), double marking of the verb (1.66%), omission of "to be" (11.65%), addition of "to be" (1.29%), omission of s/es in plural nouns (2.40%), addition of s in singular nouns (1.29%), omission of an article (6.47%), addition of an unnecessary article (1.66%), wrong article (1.11%), wrong subject pronoun (2.03%), wrong object pronoun (0.55%), wrong possessive pronoun (2.03%), generic structure (2.96%), reference (2.03%), and wrong conjunction selection (4.81%). The most dominant error is syntactical, i.e., using the simple present tense to refer to the simple past, at 22.37%. These errors are caused by four factors: overgeneralization, incomplete application of rules, ignorance of rule restrictions, and false concepts hypothesized.

  18. High-fidelity target sequencing of individual molecules identified using barcode sequences: de novo detection and absolute quantitation of mutations in plasma cell-free DNA from cancer patients.

    Science.gov (United States)

    Kukita, Yoji; Matoba, Ryo; Uchida, Junji; Hamakawa, Takuya; Doki, Yuichiro; Imamura, Fumio; Kato, Kikuya

    2015-08-01

    Circulating tumour DNA (ctDNA) is an emerging field of cancer research. However, current ctDNA analysis is usually restricted to one or a few mutation sites due to technical limitations. In the case of massively parallel DNA sequencers, the number of false positives caused by a high read error rate is a major problem. In addition, the final sequence reads do not represent the original DNA population due to the global amplification step during the template preparation. We established a high-fidelity target sequencing system for individual molecules identified in plasma cell-free DNA using barcode sequences; this system consists of the following two steps. (i) A novel target sequencing method that adds barcode sequences by adaptor ligation. This method uses linear amplification to eliminate the errors introduced during the early cycles of polymerase chain reaction. (ii) The monitoring and removal of erroneous barcode tags, which allows the individual molecules that have been sequenced to be identified and the number of mutations to be absolutely quantitated. Using plasma cell-free DNA from patients with gastric or lung cancer, we demonstrated that the system achieved near complete elimination of false positives and enabled de novo detection and absolute quantitation of mutations in plasma cell-free DNA. © The Author 2015. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.

  19. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are (0,1)-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,

  20. Dressing percentage and Carcass characteristics of four Indigenous ...

    African Journals Online (AJOL)

    Dressing percentage and carcass characteristics of four Indigenous cattle breeds in Nigeria. ... Nigerian Journal of Animal Production ... Their feed intake, live and carcass weights and the weights of their major carcass components and ...

  1. Adverse Drug Events and Medication Errors in African Hospitals: A Systematic Review.

    Science.gov (United States)

    Mekonnen, Alemayehu B; Alhawassi, Tariq M; McLachlan, Andrew J; Brien, Jo-Anne E

    2018-03-01

    Medication errors and adverse drug events are universal problems contributing to patient harm but the magnitude of these problems in Africa remains unclear. The objective of this study was to systematically investigate the literature on the extent of medication errors and adverse drug events, and the factors contributing to medication errors in African hospitals. We searched PubMed, MEDLINE, EMBASE, Web of Science and Global Health databases from inception to 31 August, 2017 and hand searched the reference lists of included studies. Original research studies of any design published in English that investigated adverse drug events and/or medication errors in any patient population in the hospital setting in Africa were included. Descriptive statistics including median and interquartile range were presented. Fifty-one studies were included; of these, 33 focused on medication errors, 15 on adverse drug events, and three studies focused on medication errors and adverse drug events. These studies were conducted in nine (of the 54) African countries. In any patient population, the median (interquartile range) percentage of patients reported to have experienced any suspected adverse drug event at hospital admission was 8.4% (4.5-20.1%), while adverse drug events causing admission were reported in 2.8% (0.7-6.4%) of patients but it was reported that a median of 43.5% (20.0-47.0%) of the adverse drug events were deemed preventable. Similarly, the median mortality rate attributed to adverse drug events was reported to be 0.1% (interquartile range 0.0-0.3%). The most commonly reported types of medication errors were prescribing errors, occurring in a median of 57.4% (interquartile range 22.8-72.8%) of all prescriptions and a median of 15.5% (interquartile range 7.5-50.6%) of the prescriptions evaluated had dosing problems. Major contributing factors for medication errors reported in these studies were individual practitioner factors (e.g. fatigue and inadequate knowledge

  2. Effect of Absolute From Hibiscus syriacus L. Flower on Wound Healing in Keratinocytes

    Science.gov (United States)

    Yoon, Seok Won; Lee, Kang Pa; Kim, Do-Yoon; Hwang, Dae Il; Won, Kyung-Jong; Lee, Dae Won; Lee, Hwan Myung

    2017-01-01

    Background: Proliferation and migration of keratinocytes are essential for the repair of cutaneous wounds. Hibiscus syriacus L. has been used in Asian medicine; however, research on keratinocytes is inadequate. Objective: To establish the dermatological properties of absolute from Hibiscus syriacus L. flower (HSF) and to provide fundamental research for alternative medicine. Materials and Methods: We identified the composition of HSF absolute using gas chromatography-mass spectrometry analysis. We also examined the effect of HSF absolute in HaCaT cells using the XTT assay, Boyden chamber assay, sprout-out growth assay, and western blotting. We conducted an in-vivo wound healing assay in rat tail-skin. Results: Ten major active compounds were identified from HSF absolute. As determined by the XTT assay, Boyden chamber assay, and sprout-out growth assay results, HSF absolute exhibited effects similar to those of epidermal growth factor on the proliferation and migration patterns of keratinocytes (HaCaT cells), which were significantly increased after HSF absolute treatment. The expression levels of the phosphorylated signaling proteins relevant to proliferation, including extracellular signal-regulated kinase 1/2 (Erk 1/2) and Akt, were also determined by western blot analysis. Conclusion: These results of our in-vitro and ex-vivo studies indicate that HSF absolute induced cell growth and migration of HaCaT cells by phosphorylating both Erk 1/2 and Akt. Moreover, we confirmed the wound-healing effect of HSF on injury of the rat tail-skin. Therefore, our results suggest that HSF absolute is promising for use in cosmetics and alternative medicine. SUMMARY Hibiscus syriacus L. flower absolute increases HaCaT cell migration and proliferation. Hibiscus syriacus L. flower absolute regulates phosphorylation of Erk 1/2 and Akt in HaCaT cells. Treatment with Hibiscus syriacus L. flower absolute induced sprout outgrowth. The wound in the tail-skin of rats was reduced by Hibiscus syriacus L. flower absolute.

  3. Absolute and convective instability of a liquid sheet with transverse temperature gradient

    International Nuclear Information System (INIS)

    Fu, Qing-Fei; Yang, Li-Jun; Tong, Ming-Xi; Wang, Chen

    2013-01-01

    Highlights: • The spatial–temporal instability of a liquid sheet with thermal effects was studied. • The flow can transition to absolutely unstable for certain flow parameters. • The effects of non-dimensional parameters on the transition were studied. -- Abstract: The spatial–temporal instability behavior of a viscous liquid sheet with a temperature difference between its two surfaces was investigated theoretically. The practical situation motivating this investigation is a liquid sheet heated by ambient gas, usually encountered in industrial heat transfer and liquid propellant rocket engines. The existing dispersion relation was used to explore the spatial–temporal instability of viscous liquid sheets with a nonuniform temperature profile, by treating both the wave number and the frequency as complex. A parametric study was performed in both sinuous and varicose modes to test the influence of dimensionless numbers on the transition between absolute and convective instability of the flow. For a small value of the liquid Weber number, or a great value of the gas-to-liquid density ratio, the flow was found to be absolutely unstable. The absolute instability was enhanced by increasing the liquid viscosity. It was found that variation of the Marangoni number hardly influenced the absolute instability of the sinuous mode of oscillations; however, it slightly affected the absolute instability in the varicose mode.

  4. Comparison of mathematical models and artificial neural networks for prediction of drying kinetics of mushroom in microwave vacuum dryer

    Directory of Open Access Journals (Sweden)

    Ghaderi A.

    2012-01-01

    Full Text Available Drying characteristics of button mushroom slices were determined using a microwave vacuum drier at various powers (130, 260, 380, 450 W) and absolute pressures (200, 400, 600, 800 mbar). To select a suitable mathematical model, 6 thin-layer drying models were fitted to the experimental data. The fit of each model was assessed based on three parameters: highest R², lowest chi-square (χ²) and lowest root mean square error (RMSE). In addition, using the experimental data, an ANN trained by the standard back-propagation algorithm was developed in order to predict moisture ratio (MR) and drying rate (DR) values based on the three input variables (drying time, absolute pressure, microwave power). Different activation functions and several rules were used to assess the percentage error between the desired and the predicted values. According to our findings, the Midilli et al. model showed a reasonable fit to the experimental data, while the ANN model showed a high capability to predict the MR and DR quite well, with determination coefficients (R²) of 0.9991, 0.9995 and 0.9996 for training, validation and testing, respectively. Furthermore, the corresponding prediction mean square errors were 0.00086, 0.00042 and 0.00052, respectively.
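    For context, thin-layer drying models like the one selected here are simple closed forms fitted to the moisture ratio curve; the Midilli et al. model is commonly written MR(t) = a·exp(−k·tⁿ) + b·t. A sketch of fitting it with scipy — the data points below are synthetic, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def midilli(t, a, k, n, b):
    """Midilli et al. thin-layer drying model: MR = a*exp(-k*t^n) + b*t."""
    return a * np.exp(-k * t**n) + b * t

# Synthetic drying curve (time in min, dimensionless moisture ratio).
t = np.array([0, 10, 20, 30, 45, 60, 90, 120], float)
mr = np.array([1.00, 0.78, 0.61, 0.48, 0.34, 0.24, 0.12, 0.06])

popt, _ = curve_fit(midilli, t, mr, p0=(1.0, 0.05, 1.0, 0.0),
                    bounds=([0.5, 1e-4, 0.1, -0.01], [1.5, 1.0, 3.0, 0.01]))
pred = midilli(t, *popt)

ss_res = np.sum((mr - pred) ** 2)
r2 = 1 - ss_res / np.sum((mr - mr.mean()) ** 2)
rmse = np.sqrt(np.mean((mr - pred) ** 2))
chi2 = ss_res / (len(t) - len(popt))          # reduced chi-square
print(f"a,k,n,b = {popt}\nR2={r2:.4f}  RMSE={rmse:.4f}  chi2={chi2:.2e}")
```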

  5. Airline Sustainability Modeling: A New Framework with Application of Bayesian Structural Equation Modeling

    Directory of Open Access Journals (Sweden)

    Hashem Salarzadeh Jenatabadi

    2016-11-01

    Full Text Available There are many factors which could influence the sustainability of airlines. The main purpose of this study is to introduce a framework for a financial sustainability index and model it based on structural equation modeling (SEM) with maximum likelihood and Bayesian predictors. The introduced framework includes economic performance, operational performance, cost performance, and financial performance. Based on both Bayesian SEM (Bayesian-SEM) and Classical SEM (Classical-SEM), it was found that economic performance, together with operational performance and cost performance, is significantly related to the financial performance index. The four mathematical indices employed to compare the efficiency of Bayesian-SEM and Classical-SEM in predicting airline financial performance are root mean square error, coefficient of determination, mean absolute error, and mean absolute percentage error. The outputs confirmed that the framework with Bayesian prediction delivered a good fit with the data, whereas the framework predicted with the Classical-SEM approach did not produce a well-fitting model. The reasons for this discrepancy between Classical and Bayesian predictions, as well as the potential advantages and caveats of applying the Bayesian approach in airline sustainability studies, are discussed.

  6. Hybrid methodology for tuberculosis incidence time-series forecasting based on ARIMA and a NAR neural network.

    Science.gov (United States)

    Wang, K W; Deng, C; Li, J P; Zhang, Y Y; Li, X Y; Wu, M C

    2017-04-01

    Tuberculosis (TB) affects people globally and is being reconsidered as a serious public health problem in China. Reliable forecasting is useful for the prevention and control of TB. This study proposes a hybrid model combining autoregressive integrated moving average (ARIMA) with a nonlinear autoregressive (NAR) neural network for forecasting the incidence of TB from January 2007 to March 2016. Prediction performance was compared between the hybrid model and the ARIMA model. The best-fit hybrid model combined an ARIMA (3,1,0) × (0,1,1)12 model with a NAR neural network with four delays and 12 neurons in the hidden layer. The ARIMA-NAR hybrid model, which exhibited lower mean square error, mean absolute error, and mean absolute percentage error of 0.2209, 0.1373, and 0.0406, respectively, in the modelling performance, could produce more accurate forecasting of TB incidence compared to the ARIMA model. This study shows that developing and applying the ARIMA-NAR hybrid model is an effective method to fit the linear and nonlinear patterns of time-series data, and this model could be helpful in the prevention and control of TB.
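    The hybrid recipe is: fit ARIMA to capture the linear structure, train a nonlinear autoregression on the ARIMA residuals, and add the two forecasts. A minimal sketch using statsmodels and scikit-learn, where an MLP stands in for the NAR network; the ARIMA order (seasonal part omitted), lag count, and data are illustrative, not the study's:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
# Illustrative monthly incidence series with trend, seasonality and noise.
t = np.arange(120)
y = 50 + 0.1 * t + 8 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, 120)

# Step 1: ARIMA captures the linear part.
arima = ARIMA(y, order=(3, 1, 0)).fit()
resid = arima.resid

# Step 2: a nonlinear autoregression on the residuals (4 delays).
delays = 4
X = np.column_stack([resid[i:len(resid) - delays + i] for i in range(delays)])
target = resid[delays:]
nar = MLPRegressor(hidden_layer_sizes=(12,), max_iter=5000,
                   random_state=0).fit(X, target)

# Step 3: one-step hybrid forecast = ARIMA forecast + NAR residual forecast.
lin_fc = arima.forecast(steps=1)[0]
nonlin_fc = nar.predict(resid[-delays:].reshape(1, -1))[0]
print(f"hybrid forecast: {lin_fc + nonlin_fc:.2f}")
```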

  7. Model assessment using a multi-metric ranking technique

    Science.gov (United States)

    Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.

    2017-12-01

    Validation comparisons of multiple models present challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation requires advanced validation metrics and identifying adeptness in extreme events, while maintaining simplicity for management decisions. Flexibility for operations is also an asset. This work postulates a weighted tally and consolidation technique which ranks results by multiple types of metrics. Variables include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's tau, reliability index, multiplicative gross error, and root mean squared differences. Other metrics, such as root mean square difference and rank correlation, were also explored but removed when their information was found to be generally duplicative of other metrics. While equal weights are applied, weights could be altered depending on preferred metrics. Two examples are shown comparing ocean models' currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts instead of distance, along-track, and cross-track errors is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process, but were found useful in an independent context, and will be briefly reported.
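    The weighted tally can be sketched in a few lines: score each model on each metric, convert scores to ranks (orienting every metric so that a lower rank is better), apply weights, and sum. The models, metric values, and weights below are invented for illustration:

```python
import numpy as np

# Rows: models A, B, C. Columns: MAE, |bias|, correlation (invented).
scores = np.array([[1.2, 0.30, 0.91],
                   [1.0, 0.45, 0.88],
                   [1.4, 0.10, 0.93]])
lower_is_better = np.array([True, True, False])
weights = np.array([1.0, 1.0, 1.0])      # equal weights, per the abstract

# Orient all metrics so that smaller values are better, then rank.
oriented = np.where(lower_is_better, scores, -scores)
ranks = oriented.argsort(axis=0).argsort(axis=0) + 1   # 1 = best per metric

tally = (ranks * weights).sum(axis=1)
for name, total in zip("ABC", tally):
    print(f"model {name}: weighted rank tally = {total:.1f} (lower is better)")
```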

  8. Evaluating the performance of the Lee-Carter method and its variants in modelling and forecasting Malaysian mortality

    Science.gov (United States)

    Zakiyatussariroh, W. H. Wan; Said, Z. Mohammad; Norazan, M. R.

    2014-12-01

    This study investigated the performance of the Lee-Carter (LC) method and its variants in modeling and forecasting Malaysian mortality. These include the original LC, the Lee-Miller (LM) variant and the Booth-Maindonald-Smith (BMS) variant. These methods were evaluated using Malaysia's mortality data, measured as age-specific death rates (ASDR) for 1971 to 2009 for the overall population, while data for 1980-2009 were used in separate models for the male and female populations. The performance of the variants was examined in terms of the goodness of fit of the models and forecasting accuracy. Comparison was made based on several criteria, namely mean square error (MSE), root mean square error (RMSE), mean absolute deviation (MAD) and mean absolute percentage error (MAPE). The results indicate that the BMS method performed best in in-sample fitting, both for the overall population and when the models were fitted separately for the male and female populations. However, in the case of out-of-sample forecast accuracy, the BMS method was best only when the data were fitted to the overall population. When the data were fitted separately for males and females, LCnone performed better for the male population and the LM method for the female population.
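    As background, the original LC model decomposes log age-specific death rates as log m(x,t) = a(x) + b(x)k(t) + ε, with a(x) the age-specific mean and (b, k) taken from the first singular vectors of the centered log-rate matrix. A minimal sketch on synthetic data (not Malaysian ASDR):

```python
import numpy as np

rng = np.random.default_rng(7)
ages, years = 18, 40
# Synthetic log death-rate surface with a downward mortality trend.
true_a = np.linspace(-6.0, -2.0, ages)
true_b = np.full(ages, 1.0 / ages)
true_k = np.linspace(20, -20, years)
log_m = (true_a[:, None] + true_b[:, None] * true_k[None, :]
         + rng.normal(0, 0.02, (ages, years)))

# Lee-Carter estimation: a(x) = row means; (b, k) from the rank-1 SVD.
a = log_m.mean(axis=1)
U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
b = U[:, 0] / U[:, 0].sum()              # normalize so sum(b) = 1
k = s[0] * Vt[0] * U[:, 0].sum()         # rescale k to compensate

# Forecast k with a random walk with drift, the standard LC choice.
drift = (k[-1] - k[0]) / (len(k) - 1)
k_next = k[-1] + drift
log_m_forecast = a + b * k_next          # next-year log rates
print("drift per year:", round(drift, 3))
```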

  9. Identifying Lattice, Orbit, And BPM Errors in PEP-II

    International Nuclear Information System (INIS)

    Decker, F.-J.; SLAC

    2005-01-01

    The PEP-II B-Factory is delivering peak luminosities of up to 9.2 × 10³³ cm⁻² s⁻¹. This is very impressive, especially considering our poor understanding of the lattice, absolute orbit, and beam position monitor (BPM) system. A few simple MATLAB programs were written to get lattice information, like betatron functions in a coupled machine (four altogether) and the two dispersions, from the current machine and compare it to the design. Big orbit deviations in the Low Energy Ring (LER) could be explained not by bad BPMs (only 3), but by many strong correctors (one corrector to fix four BPMs on average). Additionally, these programs helped to uncover a sign error in the third order correction of the BPM system. Further analysis of the current information of the BPMs (sum of all buttons) indicates that there might still be more problematic BPMs.

  10. The fading American dream: Trends in absolute income mobility since 1940.

    Science.gov (United States)

    Chetty, Raj; Grusky, David; Hell, Maximilian; Hendren, Nathaniel; Manduca, Robert; Narang, Jimmy

    2017-04-28

    We estimated rates of "absolute income mobility"-the fraction of children who earn more than their parents-by combining data from U.S. Census and Current Population Survey cross sections with panel data from de-identified tax records. We found that rates of absolute mobility have fallen from approximately 90% for children born in 1940 to 50% for children born in the 1980s. Increasing Gross Domestic Product (GDP) growth rates alone cannot restore absolute mobility to the rates experienced by children born in the 1940s. However, distributing current GDP growth more equally across income groups as in the 1940 birth cohort would reverse more than 70% of the decline in mobility. These results imply that reviving the "American dream" of high rates of absolute mobility would require economic growth that is shared more broadly across the income distribution. Copyright © 2017, American Association for the Advancement of Science.
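    The headline statistic is straightforward to compute once parent-child income pairs are in hand: it is the fraction of pairs in which the child's income exceeds the parent's. A toy sketch with invented pairs (not the Census/tax data used in the study):

```python
import numpy as np

rng = np.random.default_rng(3)
# Invented parent/child income pairs (inflation-adjusted dollars).
parent = rng.lognormal(mean=10.8, sigma=0.6, size=10_000)
child = parent * rng.lognormal(mean=0.0, sigma=0.5, size=10_000)

# Absolute income mobility: fraction of children earning more than parents.
mobility = np.mean(child > parent)
print(f"absolute mobility: {mobility:.1%}")
```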

  11. 26 CFR 1.1502-44 - Percentage depletion for independent producers and royalty owners.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 12 2010-04-01 2010-04-01 false Percentage depletion for independent producers...-44 Percentage depletion for independent producers and royalty owners. (a) In general. The sum of the percentage depletion deductions for the taxable year for all oil or gas property owned by all members, plus...

  12. Relative frequency of immature CD34+/CD90+ subset in peripheral blood following mobilization correlates closely and inversely with the absolute count of harvested stem cells in multiple myeloma patients

    Directory of Open Access Journals (Sweden)

    Balint Bela

    2017-01-01

    Full Text Available Background/Aim. Stem cells (SCs) guarantee complete, long-term bone marrow (BM) repopulation after SC transplants. The aim of the study was to evaluate the absolute count of total SCs (determined by the ISHAGE sequential-gating protocol; SCish) and the relative frequency of the immature CD34+/CD90+ (CD90+SCish) subset in peripheral blood (PB) as predictive factors of mobilization and apheresis product (AP) quality. Methods. Mobilization included chemotherapy and granulocyte colony-stimulating factor (G-CSF). Harvesting was performed by the Spectra-Optia-IDL system. The SCish were determined as a constitutional part of CD34+ cells in the "stem-cell region" using an FC-500 flow cytometer. In this study, the original ISHAGE sequential-gating protocol was modified by introducing an anti-CD90-PE monoclonal antibody into the analysis of CD90 expression on SCish (CD90+SCish). The results were presented as a percentage of SCish per nucleated-cell count, absolute SCish count per μL of PB or AP, percentage of CD90+SCish relative to SCish, and absolute CD90+SCish count per μL of PB or AP. Results. The absolute counts of total SCish and CD90+SCish were significantly higher (p = 0.0007 and p = 0.0266, respectively) in the AP than in the PB samples. The CD90+SCish/total SCish indexes from PB were higher than the indexes from the AP (p = 0.039). The relative frequency of CD90+SCish showed a highly significant inverse correlation with the absolute count of total SCish in both the PB and the AP (p = 0.0003 and p = 0.0013, respectively). The relative frequency of CD90+SCish from the PB also showed a significant (p = 0.0002) inverse relationship with total SCish count in the AP. Patients with less than 10% CD90+SCish in the PB had an evidently higher (p = 0.0025) total SCish count in the AP. Conclusion. We speculate that a lower CD90+SCish yield in the AP is not a consequence of inferior collection efficacy, but most likely the result of several still not fully resolved immature SC

  13. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Isotherms and thermodynamics by linear and non-linear regression analysis for the sorption of methylene blue onto activated carbon: Comparison of various error functions

    International Nuclear Information System (INIS)

    Kumar, K. Vasanth; Porkodi, K.; Rocha, F.

    2008-01-01

A comparison of linear and non-linear regression methods in selecting the optimum isotherm was made to the experimental equilibrium data of methylene blue sorption by activated carbon. The r² was used to select the best-fit linear theoretical isotherm. In the case of the non-linear regression method, six error functions, namely coefficient of determination (r²), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), average relative error (ARE), sum of the errors squared (ERRSQ) and sum of the absolute errors (EABS), were used to predict the parameters involved in the two- and three-parameter isotherms and also to predict the optimum isotherm. For the two-parameter isotherm, MPSD was found to be the best error function in minimizing the error distribution between the experimental equilibrium data and predicted isotherms. In the case of the three-parameter isotherm, r² was found to be the best error function to minimize the error distribution structure between experimental equilibrium data and theoretical isotherms. The present study showed that the size of the error function alone is not a deciding factor in choosing the optimum isotherm. In addition to the size of the error function, the theory behind the predicted isotherm should be verified with the help of experimental data while selecting the optimum isotherm. A coefficient of non-determination, K², was explained and was found to be very useful in identifying the best error function while selecting the optimum isotherm.
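
    Since the abstract names the error functions without reproducing them, a minimal sketch follows, assuming the common textbook definitions of ERRSQ, EABS, ARE, HYBRID, and MPSD; the array names and sample values are hypothetical, not the paper's data.

    import numpy as np

    def isotherm_error_functions(q_exp, q_calc, n_params):
        # Residual-based error functions in their common textbook forms;
        # the paper's exact normalizations may differ slightly.
        q_exp = np.asarray(q_exp, dtype=float)
        q_calc = np.asarray(q_calc, dtype=float)
        n = q_exp.size
        resid = q_exp - q_calc
        errsq = np.sum(resid ** 2)                                    # ERRSQ
        eabs = np.sum(np.abs(resid))                                  # EABS
        are = 100.0 / n * np.sum(np.abs(resid / q_exp))               # ARE (%)
        hybrid = 100.0 / (n - n_params) * np.sum(resid ** 2 / q_exp)  # HYBRID
        mpsd = 100.0 * np.sqrt(np.sum((resid / q_exp) ** 2) / (n - n_params))  # MPSD
        r2 = 1.0 - errsq / np.sum((q_exp - q_exp.mean()) ** 2)        # coefficient of determination
        return dict(ERRSQ=errsq, EABS=eabs, ARE=are, HYBRID=hybrid, MPSD=mpsd, r2=r2)

    # Invented equilibrium uptakes (mg/g) for a two-parameter isotherm fit.
    q_exp = [10.2, 15.6, 18.9, 21.4, 23.0, 24.1]
    q_calc = [9.8, 16.1, 19.4, 20.8, 22.7, 24.5]
    print(isotherm_error_functions(q_exp, q_calc, n_params=2))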

  15. Assessing the accuracy and reliability of ultrasonographic three-dimensional parathyroid volume measurement in a patient with secondary hyperparathyroidism: a comparison with the two-dimensional conventional method

    Energy Technology Data Exchange (ETDEWEB)

    You, Sung Hye; Son, Gyu Ri; Lee, Nam Joon [Dept. of Radiology, Korea University Anam Hospital, Seoul (Korea, Republic of); Suh, Sangil; Ryoo, In Seon; Seol, Hae Young [Dept. of Radiology, Korea University Guro Hospital, Seoul (Korea, Republic of); Lee, Young Hen; Seo, Hyung Suk [Dept. of Radiology, Korea University Ansan Hospital, Ansan (Korea, Republic of)

    2017-01-15

The purpose of this study was to investigate the accuracy and reliability of the semi-automated ultrasonographic volume measurement tool, virtual organ computer-aided analysis (VOCAL), for measuring the volume of parathyroid glands. Volume measurements for 40 parathyroid glands were performed in patients with secondary hyperparathyroidism caused by chronic renal failure. The volume of the parathyroid glands was measured twice by experienced radiologists by two-dimensional (2D) and three-dimensional (3D) methods using conventional sonograms and the VOCAL with 30° angle increments before parathyroidectomy. The specimen volume was also measured postoperatively. Intraclass correlation coefficients (ICCs) and the absolute percentage error were used for estimating the reproducibility and accuracy of the two different methods. The ICC value between two measurements of the 2D method and the 3D method was 0.956 and 0.999, respectively. The mean absolute percentage error of the 2D method and the 3D VOCAL technique was 29.56% and 5.78%, respectively. For accuracy and reliability, the plots of the 3D method showed a more compact distribution than those of the 2D method on the Bland-Altman graph. The rotational VOCAL method for measuring the parathyroid gland is more accurate and reliable than the conventional 2D measurement. This VOCAL method could be used as a more reliable follow-up imaging modality in a patient with hyperparathyroidism.

  16. Assessing the accuracy and reliability of ultrasonographic three-dimensional parathyroid volume measurement in a patient with secondary hyperparathyroidism: a comparison with the two-dimensional conventional method

    International Nuclear Information System (INIS)

    You, Sung Hye; Son, Gyu Ri; Lee, Nam Joon; Suh, Sangil; Ryoo, In Seon; Seol, Hae Young; Lee, Young Hen; Seo, Hyung Suk

    2017-01-01

The purpose of this study was to investigate the accuracy and reliability of the semi-automated ultrasonographic volume measurement tool, virtual organ computer-aided analysis (VOCAL), for measuring the volume of parathyroid glands. Volume measurements for 40 parathyroid glands were performed in patients with secondary hyperparathyroidism caused by chronic renal failure. The volume of the parathyroid glands was measured twice by experienced radiologists by two-dimensional (2D) and three-dimensional (3D) methods using conventional sonograms and the VOCAL with 30° angle increments before parathyroidectomy. The specimen volume was also measured postoperatively. Intraclass correlation coefficients (ICCs) and the absolute percentage error were used for estimating the reproducibility and accuracy of the two different methods. The ICC value between two measurements of the 2D method and the 3D method was 0.956 and 0.999, respectively. The mean absolute percentage error of the 2D method and the 3D VOCAL technique was 29.56% and 5.78%, respectively. For accuracy and reliability, the plots of the 3D method showed a more compact distribution than those of the 2D method on the Bland-Altman graph. The rotational VOCAL method for measuring the parathyroid gland is more accurate and reliable than the conventional 2D measurement. This VOCAL method could be used as a more reliable follow-up imaging modality in a patient with hyperparathyroidism.

  17. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    Science.gov (United States)

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and patients aged <18 years. Of these error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright © 2007 American Cancer Society.

  18. Error Detection, Factorization and Correction for Multi-View Scene Reconstruction from Aerial Imagery

    Energy Technology Data Exchange (ETDEWEB)

    Hess-Flores, Mauricio [Univ. of California, Davis, CA (United States)

    2011-11-10

Scene reconstruction from video sequences has become a prominent computer vision research area in recent years, due to its large number of applications in fields such as security, robotics and virtual reality. Despite recent progress in this field, there are still a number of issues that manifest as incomplete, incorrect or computationally-expensive reconstructions. The engine behind achieving reconstruction is the matching of features between images, where common conditions such as occlusions, lighting changes and texture-less regions can all affect matching accuracy. Subsequent processes that rely on matching accuracy, such as camera parameter estimation, structure computation and non-linear parameter optimization, are also vulnerable to additional sources of error, such as degeneracies and mathematical instability. Detection and correction of errors, along with robustness in parameter solvers, are a must in order to achieve a very accurate final scene reconstruction. However, error detection is in general difficult due to the lack of ground-truth information about the given scene, such as the absolute position of scene points or GPS/IMU coordinates for the camera(s) viewing the scene. In this dissertation, methods are presented for the detection, factorization and correction of error sources present in all stages of a scene reconstruction pipeline from video, in the absence of ground-truth knowledge. Two main applications are discussed. The first set of algorithms derive total structural error measurements after an initial scene structure computation and factorize errors into those related to the underlying feature matching process and those related to camera parameter estimation. A brute-force local correction of inaccurate feature matches is presented, as well as an improved conditioning scheme for non-linear parameter optimization which applies weights on input parameters in proportion to estimated camera parameter errors. Another application is in

  19. Financial impact of errors in business forecasting: a comparative study of linear models and neural networks

    Directory of Open Access Journals (Sweden)

    Claudimar Pereira da Veiga

    2012-08-01

Full Text Available The importance of demand forecasting as a management tool is a well-documented issue. However, it is difficult to measure costs generated by forecasting errors and to find a model that adequately assimilates the detailed operation of each company. In general, when linear models fail in the forecasting process, more complex nonlinear models are considered. Although some studies comparing traditional models and neural networks have been conducted in the literature, the conclusions are usually contradictory. In this sense, the objective was to compare the accuracy of linear methods and neural networks with the current method used by the company. The results of this analysis also served as input to evaluate the influence of errors in demand forecasting on the financial performance of the company. The study was based on historical data from five groups of food products, from 2004 to 2008. In general, one can affirm that all models tested presented good results (much better than the current forecasting method used), with a mean absolute percent error (MAPE) of around 10%. The total financial impact for the company was 6.05% of annual sales.
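
    For readers who want the accuracy measure quoted above in executable form, here is a small sketch of the MAPE computation; the demand and forecast values are invented for illustration, not taken from the study.

    import numpy as np

    def mape(actual, forecast):
        # Mean absolute percent error of a forecast against observed values.
        actual = np.asarray(actual, dtype=float)
        forecast = np.asarray(forecast, dtype=float)
        return 100.0 * np.mean(np.abs((actual - forecast) / actual))

    # Hypothetical monthly demand vs. forecast for one product group.
    demand   = [120.0, 135.0, 128.0, 150.0, 143.0]
    forecast = [110.0, 140.0, 125.0, 160.0, 138.0]
    print(f"MAPE = {mape(demand, forecast):.1f}%")   # ~4.9%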

  20. Absolute total cross sections for noble gas systems

    International Nuclear Information System (INIS)

    Kam, P. van der.

    1981-01-01

This thesis deals with experiments on the elastic scattering of Ar, Kr and Xe, using the molecular beam technique. The aim of this work was the measurement of the absolute value of the total cross section and the behaviour of the total cross section, Q, as a function of the relative velocity g of the scattering partners. The author gives an extensive analysis of the glory structure in the total cross section and parametrizes the experimental results using a semiclassical model function. This allows a detailed comparison of the phase and amplitude of the predicted and measured glory undulations. He indicates how the depth and position of the potential well should be changed in order to come to an optimum description of the glory structure. With this model function he has also been able to separate the glory and attractive contributions to Q, and using the results from the extrapolation measurements he has obtained absolute values for Q_a. From these absolute values he has calculated the parameter C₆ that determines the strength of the attractive region of the potential. In two of the four investigated gas combinations the obtained values lie outside the theoretical bounds. (Auth.)

  1. Body fat percentage of urban South African children: implications for health and fitness.

    Science.gov (United States)

    Goon, D T; Toriola, A L; Shaw, B S; Amusa, L O; Khoza, L B; Shaw, I

    2013-09-01

    To explore gender and racial profiling of percentage body fat of 1136 urban South African children attending public schools in Pretoria Central. This is a cross-sectional survey of 1136 randomly selected children (548 boys and 588 girls) aged 9-13 years in urban (Pretoria Central) South Africa. Body mass, stature, skinfolds (subscapular and triceps) were measured. Data were analysed using descriptive statistics (means and standard deviations). Differences in the mean body fat percentage were examined for boys and girls according to their age group/race, using independent t-test samples. Girls had a significantly (p = 0.001) higher percentage body fat (22.7 ± 5.7%, 95% CI = 22.3, 23.2) compared to boys (16.1 ± 7.7%, 95% CI = 15.5, 16.8). Percentage body fat fluctuated with age in both boys and girls. Additionally, girls had significantly (p = 0.001) higher percentage body fat measurements at all ages compared to boys. Viewed racially, black children (20.1 ± 7.5) were significantly (p = 0.010) fatter than white children (19.0 ± 7.4) with a mean difference of 4.0. Black children were fatter than white children at ages 9, 10, 12 and 13 years, with a significant difference (p = 0.009) observed at age 12 years. There was a considerably higher level of excessive percentage body fat among school children in Central Pretoria, South Africa, with girls having significantly higher percentage body fat compared to boys. Racially, black children were fatter than white children. The excessive percentage body fat observed among the children in this study has implications for their health and fitness. Therefore, an intervention programme must be instituted in schools to prevent and control possible excessive percentage body fat in this age group.

  2. High body fat percentage among adult women in Malaysia: the role ...

    African Journals Online (AJOL)

    Body fat percentage is regarded as an important measurement for diagnosis of obesity. The aim of this study is to determine the association of high body fat percentage (BF%) and lifestyle among adult women. The study was conducted on 327 women, aged 40-59 years, recruited during a health screening program. Data on ...

  3. 12 CFR Appendix A to Part 230 - Annual Percentage Yield Calculation

    Science.gov (United States)

    2010-01-01

The annual percentage yield measures the total amount of interest paid on an account... and is computed with the following simple formula: APY = 100 (Interest/Principal). Examples: (1) If an institution pays $61.68 in... the annual percentage yield is 5.39%, using the simple formula: APY = 100(134.75/2,500); APY = 5.39%. For $15,000, interest is...
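
    The appendix's simple formula is easy to check in code; the sketch below reproduces the one example that survives the excerpt intact ($134.75 in interest on $2,500 of principal).

    def annual_percentage_yield(interest, principal):
        # Simple formula from the appendix (365-day term): APY = 100 * (Interest / Principal).
        return 100.0 * interest / principal

    # Example given in the excerpt: $134.75 interest on $2,500 principal.
    print(round(annual_percentage_yield(134.75, 2500.0), 2))   # 5.39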

  4. Absolute instabilities of travelling wave solutions in a Keller-Segel model

    Science.gov (United States)

    Davis, P. N.; van Heijster, P.; Marangell, R.

    2017-11-01

    We investigate the spectral stability of travelling wave solutions in a Keller-Segel model of bacterial chemotaxis with a logarithmic chemosensitivity function and a constant, sublinear, and linear consumption rate. Linearising around the travelling wave solutions, we locate the essential and absolute spectrum of the associated linear operators and find that all travelling wave solutions have parts of the essential spectrum in the right half plane. However, we show that in the case of constant or sublinear consumption there exists a range of parameters such that the absolute spectrum is contained in the open left half plane and the essential spectrum can thus be weighted into the open left half plane. For the constant and sublinear consumption rate models we also determine critical parameter values for which the absolute spectrum crosses into the right half plane, indicating the onset of an absolute instability of the travelling wave solution. We observe that this crossing always occurs off of the real axis.

  5. Absolute photonic band gap in 2D honeycomb annular photonic crystals

    International Nuclear Information System (INIS)

    Liu, Dan; Gao, Yihua; Tong, Aihong; Hu, Sen

    2015-01-01

Highlights: • A two-dimensional honeycomb annular photonic crystal (PC) is proposed. • The absolute photonic band gap (PBG) is studied. • Annular PCs show larger PBGs than usual air-hole PCs for high refractive index. • Annular PCs with anisotropic rods show large PBGs for low refractive index. • There exist optimal parameters to open largest band gaps. - Abstract: Using the plane wave expansion method, we investigate the effects of structural parameters on the absolute photonic band gap (PBG) in two-dimensional honeycomb annular photonic crystals (PCs). The results reveal that the annular PCs possess absolute PBGs that are larger than those of the conventional air-hole PCs only when the refractive index of the material from which the PC is made is equal to 4.5 or larger. If the refractive index is smaller than 4.5, utilization of anisotropic inner rods in honeycomb annular PCs can lead to the formation of larger PBGs. The optimal structural parameters that yield the largest absolute PBGs are obtained.

  6. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    Science.gov (United States)

    Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A

    2013-01-01

Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("Shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "Shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (Mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g., mRS ≤1: 6.8%±2.89), indicating that the greater information transfer of the full-range "shift" approach comes with a decrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
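
    As a hedged illustration of the information-theoretic quantity underlying this approach, the sketch below computes Shannon entropy for a hypothetical mRS distribution and its dichotomized version; the probabilities are invented, not taken from the paper.

    import numpy as np

    def shannon_entropy(p):
        # Shannon entropy H(p) = -sum p_i log2 p_i, in bits (0*log 0 treated as 0).
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # Hypothetical trial distribution over the seven mRS categories (0-6)...
    mrs_full = np.array([0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.10])
    # ...and its dichotomization at mRS <= 1 (good vs. poor outcome).
    mrs_dichot = np.array([mrs_full[:2].sum(), mrs_full[2:].sum()])

    print(shannon_entropy(mrs_full))    # higher information content ("shift" analysis)
    print(shannon_entropy(mrs_dichot))  # lower, but less sensitive to rating noise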

  7. Absolute Distance Measurements with Tunable Semiconductor Laser

    Czech Academy of Sciences Publication Activity Database

    Mikel, Břetislav; Číp, Ondřej; Lazar, Josef

    T118, - (2005), s. 41-44 ISSN 0031-8949 R&D Projects: GA AV ČR(CZ) IAB2065001 Keywords : tunable laser * absolute interferometer Subject RIV: BH - Optics, Masers, Lasers Impact factor: 0.661, year: 2004

8. MEAN OF MEDIAN ABSOLUTE DERIVATION TECHNIQUE ...

    African Journals Online (AJOL)


development of a mean of median absolute derivation technique based on .... the noise mean to estimate the speckle noise variance. Noise mean property ..... Foraging Optimization," International Journal of Advanced ...

  9. Comparison of computer workstation with film for detecting setup errors

    International Nuclear Information System (INIS)

    Fritsch, D.S.; Boxwala, A.A.; Raghavan, S.; Coffee, C.; Major, S.A.; Muller, K.E.; Chaney, E.L.

    1997-01-01

Purpose/Objective: Workstations designed for portal image interpretation by radiation oncologists provide image displays and image processing and analysis tools that differ significantly compared with the standard clinical practice of inspecting portal films on a light box. An implied but unproved assumption associated with the clinical implementation of workstation technology is that patient care is improved, or at least not adversely affected. The purpose of this investigation was to conduct observer studies to test the hypothesis that radiation oncologists can detect setup errors using a workstation at least as accurately as when following standard clinical practice. Materials and Methods: A workstation, PortFolio, was designed for radiation oncologists to display and inspect digital portal images for setup errors. PortFolio includes tools to enhance images; align cross-hairs, field edges, and anatomic structures on reference and acquired images; measure distances and angles; and view registered images superimposed on one another. In a well designed and carefully controlled observer study, nine radiation oncologists, including attendings and residents, used PortFolio to detect setup errors in realistic digitally reconstructed portal (DRPR) images computed from the NLM visible human data using a previously described approach†. Compared with actual portal images where absolute truth is ill defined or unknown, the DRPRs contained known translation or rotation errors in the placement of the fields over target regions in the pelvis and head. Twenty DRPRs with randomly induced errors were computed for each site. The induced errors were constrained to a plane at the isocenter of the target volume and perpendicular to the central axis of the treatment beam. Images used in the study were also printed on film. Observers interpreted the film-based images using standard clinical practice. The images were reviewed in eight sessions. During each session five images were

  10. Relative and Absolute Reliability of the Professionalism in Physical Therapy Core Values Self-Assessment Tool.

    Science.gov (United States)

    Furgal, Karen E; Norris, Elizabeth S; Young, Sonia N; Wallmann, Harvey W

    2018-01-01

Development of professional behaviors in Doctor of Physical Therapy (DPT) students is an important part of professional education. The American Physical Therapy Association (APTA) has developed the Professionalism in Physical Therapy Core Values Self-Assessment (PPTCV-SA) tool to increase awareness of personal values in practice. The PPTCV-SA has been used to measure growth in professionalism following a clinical or educational experience. There are few studies reporting psychometric properties of the PPTCV-SA. The purpose of this study was to establish properties of relative reliability (intraclass correlation coefficient, ICC) and absolute reliability (standard error of measurement, SEM; minimal detectable change, MDC) of the PPTCV-SA. In this project, 29 first-year students in a DPT program were administered the PPTCV-SA on two occasions, 2 weeks apart. Paired t-tests were used to examine stability in PPTCV-SA scores on the two occasions. ICCs were calculated as a measure of relative reliability and for use in the calculation of the absolute reliability measures of SEM and MDC. Results of paired t-tests indicated differences in the subscale scores between times 1 and 2 were non-significant, except for three subscales: Altruism (p=0.01), Excellence (p=0.05), and Social Responsibility (p=0.02). ICCs for test-retest reliability were moderate-to-good for all subscales, with SEMs ranging from 0.30 to 0.62, and MDC95 ranging from 0.83 to 1.71. These results can guide educators and researchers when determining the likelihood of true change in professionalism following a professional development activity.
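
    The absolute reliability measures named here follow standard formulas (SEM = SD·sqrt(1−ICC); MDC95 = 1.96·sqrt(2)·SEM); a worked sketch with hypothetical inputs follows, since the abstract reports only the resulting ranges.

    import math

    def sem(sd, icc):
        # Standard error of measurement from a reliability coefficient.
        return sd * math.sqrt(1.0 - icc)

    def mdc95(sem_value):
        # Minimal detectable change at the 95% confidence level.
        return 1.96 * math.sqrt(2.0) * sem_value

    # Hypothetical subscale: between-subject SD of 1.2 points, test-retest ICC of 0.80.
    s = sem(1.2, 0.80)
    print(round(s, 2), round(mdc95(s), 2))   # SEM ~ 0.54, MDC95 ~ 1.49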

  11. The relative and absolute speed of radiographic screen - film systems

    International Nuclear Information System (INIS)

    Lee, In Ja; Huh, Joon

    1993-01-01

Recently, a large number of new screen-film systems have become available for use in diagnostic radiology. These new screens are made of materials generally known as rare-earth phosphors, which have high x-ray absorption and high x-ray-to-light conversion efficiency compared to calcium tungstate phosphors. The major advantage of these new systems is the reduction of patient exposure due to their high speed or high sensitivity. However, a system with excessively high speed can result in a significant degradation of radiographic image quality. Therefore, the speed is an important parameter for users of these systems. Our aim in this work was to determine accurately and precisely the absolute and relative speeds of both new and conventional screen-film systems. We determined the absolute speed under BRH phantom beam-quality conditions, and the relative speeds were measured by a split-screen technique under BRH and ANSI phantom beam-quality conditions. The absolute and relative speeds were determined for 8 kinds of screen and 4 kinds of film in the regular system, and 7 kinds of screen and 7 kinds of film in the ortho system. In this study we found that New Rx and T-MAT G had the highest film speeds, and that the green system's standard deviation of relative speed was larger than that of the blue system. There was no relationship between the absolute speed and the relative speed in either the ortho or the regular system.

  12. The effect of biomechanical variables on force sensitive resistor error: Implications for calibration and improved accuracy.

    Science.gov (United States)

    Schofield, Jonathon S; Evans, Katherine R; Hebert, Jacqueline S; Marasco, Paul D; Carey, Jason P

    2016-03-21

Force Sensitive Resistors (FSRs) are commercially available thin film polymer sensors commonly employed in a multitude of biomechanical measurement environments. Reasons for such widespread usage lie in the versatility, small profile, and low cost of these sensors. Yet FSRs have limitations. It is commonly accepted that temperature, curvature and biological tissue compliance may impact sensor conductance and resulting force readings. The effects of these variables and the degree to which they interact have yet to be comprehensively investigated and quantified. This work systematically assesses varying levels of temperature, sensor curvature and surface compliance using a full factorial design-of-experiments approach. Three models of Interlink FSRs were evaluated. Calibration equations under 12 unique combinations of temperature, curvature and compliance were determined for each sensor. Root mean squared error, mean absolute error, and maximum error were quantified as measures of the impact these thermo/mechanical factors have on sensor performance. It was found that all three variables have the potential to affect FSR calibration curves. The FSR model and corresponding sensor geometry are sensitive to these three mechanical factors at varying levels. Experimental results suggest that reducing sensor error requires calibration of each sensor in an environment as close to its intended use as possible and if multiple FSRs are used in a system, they must be calibrated independently. Copyright © 2016 Elsevier Ltd. All rights reserved.
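
    A small sketch of the three error measures reported for each calibration condition (root mean squared error, mean absolute error, maximum error); the force readings below are hypothetical, not from the study.

    import numpy as np

    def calibration_errors(applied_force, predicted_force):
        # RMSE, MAE, and maximum absolute error of a calibration fit.
        applied = np.asarray(applied_force, dtype=float)
        predicted = np.asarray(predicted_force, dtype=float)
        resid = predicted - applied
        return {
            "rmse": float(np.sqrt(np.mean(resid ** 2))),
            "mae": float(np.mean(np.abs(resid))),
            "max_error": float(np.max(np.abs(resid))),
        }

    # Hypothetical readings (N) from one temperature/curvature/compliance condition.
    applied   = [1.0, 2.0, 5.0, 10.0, 20.0]
    predicted = [1.1, 1.9, 5.4, 9.6, 21.2]
    print(calibration_errors(applied, predicted))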

  13. Absolute continuity of autophage measures on finite-dimensional vector spaces

    Energy Technology Data Exchange (ETDEWEB)

    Raja, C R.E. [Stat-Math Unit, Indian Statistical Institute, Bangalore (India); [Abdus Salam International Centre for Theoretical Physics, Trieste (Italy)]. E-mail: creraja@isibang.ac.in

    2002-06-01

We consider a class of measures called autophage which was introduced and studied by Szekely for measures on the real line. We show that the autophage measures on finite-dimensional vector spaces over the reals or Q_p are infinitely divisible without idempotent factors and are absolutely continuous with bounded continuous density. We also show that certain semistable measures on such vector spaces are absolutely continuous. (author)

  14. Det demokratiske argument for absolut ytringsfrihed

    DEFF Research Database (Denmark)

    Lægaard, Sune

    2014-01-01

The article discusses the claim that absolute freedom of expression is a necessary precondition for democratic legitimacy, taking as its point of departure a reconstruction of an argument put forward by Ronald Dworkin. The question is why freedom of expression should be a precondition for democratic legitimacy, and why...

  15. Thin-film magnetoresistive absolute position detector

    NARCIS (Netherlands)

    Groenland, J.P.J.

    1990-01-01

The subject of this thesis is the investigation of a digital absolute position-detection system, which is based on a position-information carrier (i.e. a magnetic tape) with one single code track on the one hand, and an array of magnetoresistive sensors for the detection of the information on the

  16. Error studies for SNS Linac. Part 1: Transverse errors

    International Nuclear Information System (INIS)

    Crandall, K.R.

    1998-01-01

The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).

  17. Pilotaje en la detección de errores de prescripción de citostáticos Pilot study in the detection of errors in cytostatics prescription

    Directory of Open Access Journals (Sweden)

    María Antonieta Arbesú Michelena

    2004-12-01

A pilot study for the detection of errors in cytostatic prescriptions was conducted at the Institute of Oncology and Radiobiology in 43 medical orders. The errors were divided into errors of omission (which hinder checking by the pharmacist) and errors of incorrectness (which may be potentially severe for the patient). There were 299 errors in all. The lack of the physician's signature in 43 prescriptions, as well as the use of abbreviations, acronyms and commercial names in 88.4% of them, were among the most common errors. As to the severe errors, it was observed that weight and height were not included in any medical order, the recorded body surface (BS) was above the real value in 15 cases (34.8%), underdosing occurred on 41 occasions (47.7%), and there was non-correspondence with the protocol according to the institutional norms in 17 mistakes. It was concluded that the occurrence of prescription errors is high at this service, which shows that it is important to protocolize the medical orders in order to reduce the percentage of errors detected in this pilot study and to examine this matter in greater depth.

  18. DOES ABSOLUTE SYNONYMY EXIST IN OWERE-IGBO?

    African Journals Online (AJOL)


    The researcher also interviewed native speakers of the dialect. The study ... The word 'synonymy' means sameness of meaning, i.e., a relationship in which more ... whether absolute synonymy exists in Owere–Igbo or not. ..... 'close this book'.

  19. Prevalence of absolute pitch: a comparison between Japanese and Polish music students.

    Science.gov (United States)

    Miyazaki, Ken'ichi; Makomaska, Sylwia; Rakowski, Andrzej

    2012-11-01

    Comparable large-scale surveys including an on-site pitch-naming test were conducted with music students in Japan and Poland to obtain more convincing estimates of the prevalence of absolute pitch (AP) and examine how musical experience relates to AP. Participants with accurate AP (95% correct identification) accounted for 30% of the Japanese music students, but only 7% of the Polish music students. This difference in the performance of pitch naming was related to the difference in musical experience. Participants with AP had begun music training at an earlier age (6 years or earlier), and the average year of commencement of musical training was more than 2 years earlier for the Japanese music students than for the Polish students. The percentage of participants who had received early piano lessons was 94% for the Japanese musically trained students but was 72% for the Polish music students. Approximately one-third of the Japanese musically trained students had attended the Yamaha Music School, where lessons on piano or electric organ were given to preschool children in parallel with fixed-do solfège singing training. Such early music instruction was not as common in Poland. The relationship of AP with early music training is discussed.

  20. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements

  1. Towards absolute neutrino masses

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, Petr [Kellogg Radiation Laboratory 106-38, Caltech, Pasadena, CA 91125 (United States)

    2007-06-15

Various ways of determining the absolute neutrino masses are briefly reviewed and their sensitivities compared. The apparent tension between the announced but unconfirmed observation of the 0νββ decay and the neutrino mass upper limit based on observational cosmology is used as an example of what could happen eventually. The possibility of a 'nonstandard' mechanism of the 0νββ decay is stressed and the ways of deciding which of the possible mechanisms is actually operational are described. The importance of the 0νββ nuclear matrix elements is discussed and their uncertainty estimated.

  2. Absolute migration of Pacific basin mid-ocean ridges since 85 Ma ...

    African Journals Online (AJOL)

    Mid-ocean ridges are major physiographic features that dominate the world seafloor. Their absolute motion and tectonics are recorded in magnetic lineations they created. The absolute migration of mid-ocean ridges in the Pacific basin since 85 Ma and their tectonic implications was investigated in this work and the results ...

  3. Error-related anterior cingulate cortex activity and the prediction of conscious error awareness

    Directory of Open Access Journals (Sweden)

Catherine Orr

    2012-06-01

Full Text Available Research examining the neural mechanisms associated with error awareness has consistently identified dorsal anterior cingulate cortex (ACC) activity as necessary but not predictive of conscious error detection. Two recent studies (Steinhauser and Yeung, 2010; Wessel et al., 2011) have found a contrary pattern of greater dorsal ACC activity (in the form of the error-related negativity) during detected errors, but suggested that the greater activity may instead reflect task influences (e.g., response conflict, error probability) and/or individual variability (e.g., statistical power). We re-analyzed fMRI BOLD data from 56 healthy participants who had previously been administered the Error Awareness Task, a motor Go/No-go response inhibition task in which subjects make errors of commission of which they are aware (Aware errors) or unaware (Unaware errors). Consistent with previous data, the activity in a number of cortical regions was predictive of error awareness, including bilateral inferior parietal and insula cortices; however, in contrast to previous studies, including our own smaller sample studies using the same task, error-related dorsal ACC activity was significantly greater during aware errors when compared to unaware errors. While the significantly faster RT for aware errors (compared to unaware) was consistent with the hypothesis of higher response conflict increasing ACC activity, we could find no relationship between dorsal ACC activity and the error RT difference. The data suggests that individual variability in error awareness is associated with error-related dorsal ACC activity, and therefore this region may be important to conscious error detection, but it remains unclear what task and individual factors influence error awareness.

  4. A large-area, spatially continuous assessment of land cover map error and its impact on downstream analyses.

    Science.gov (United States)

    Estes, Lyndon; Chen, Peng; Debats, Stephanie; Evans, Tom; Ferreira, Stefanus; Kuemmerle, Tobias; Ragazzo, Gabrielle; Sheffield, Justin; Wolf, Adam; Wood, Eric; Caylor, Kelly

    2018-01-01

    Land cover maps increasingly underlie research into socioeconomic and environmental patterns and processes, including global change. It is known that map errors impact our understanding of these phenomena, but quantifying these impacts is difficult because many areas lack adequate reference data. We used a highly accurate, high-resolution map of South African cropland to assess (1) the magnitude of error in several current generation land cover maps, and (2) how these errors propagate in downstream studies. We first quantified pixel-wise errors in the cropland classes of four widely used land cover maps at resolutions ranging from 1 to 100 km, and then calculated errors in several representative "downstream" (map-based) analyses, including assessments of vegetative carbon stocks, evapotranspiration, crop production, and household food security. We also evaluated maps' spatial accuracy based on how precisely they could be used to locate specific landscape features. We found that cropland maps can have substantial biases and poor accuracy at all resolutions (e.g., at 1 km resolution, up to ∼45% underestimates of cropland (bias) and nearly 50% mean absolute error (MAE, describing accuracy); at 100 km, up to 15% underestimates and nearly 20% MAE). National-scale maps derived from higher-resolution imagery were most accurate, followed by multi-map fusion products. Constraining mapped values to match survey statistics may be effective at minimizing bias (provided the statistics are accurate). Errors in downstream analyses could be substantially amplified or muted, depending on the values ascribed to cropland-adjacent covers (e.g., with forest as adjacent cover, carbon map error was 200%-500% greater than in input cropland maps, but ∼40% less for sparse cover types). The average locational error was 6 km (600%). These findings provide deeper insight into the causes and potential consequences of land cover map error, and suggest several recommendations for land
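
    As an illustration of the pixel-wise bias and mean absolute error (MAE) computations described here, a sketch with hypothetical cropland-fraction grids (values in [0, 1]); the reference/coarse arrays are invented, not the South African data.

    import numpy as np

    def map_bias_and_mae(reference, test):
        # Pixel-wise bias and MAE between two cropland-fraction maps on the same grid.
        ref = np.asarray(reference, dtype=float)
        tst = np.asarray(test, dtype=float)
        bias = float(np.mean(tst - ref))          # negative => underestimate of cropland
        mae = float(np.mean(np.abs(tst - ref)))   # magnitude of disagreement
        return bias, mae

    # Hypothetical 1-km cropland fractions: accurate reference map vs. a coarse product.
    reference = np.array([[0.9, 0.4], [0.0, 0.7]])
    coarse    = np.array([[0.6, 0.5], [0.1, 0.3]])
    print(map_bias_and_mae(reference, coarse))    # (-0.125, 0.225)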

  5. An empirical model for estimating solar radiation in the Algerian Sahara

    Science.gov (United States)

    Benatiallah, Djelloul; Benatiallah, Ali; Bouchouicha, Kada; Hamouda, Messaoud; Nasri, Bahous

    2018-05-01

The present work applies the empirical model R.sun to evaluate the solar radiation fluxes on a horizontal plane under clear-sky conditions for the city of Adrar, Algeria (27°18 N, 0°11 W), and compares the estimates with measurements at the site. The expected results of this comparison are of importance for the investment study of solar systems (solar power plants for electricity production, CSP) and also for the design and performance analysis of any system using solar energy. The statistical indicators used to evaluate the accuracy of the model were the mean bias error (MBE), root mean square error (RMSE), and coefficient of determination. The results show that for global radiation, the daily correlation coefficient is 0.9984. The mean absolute percentage error is 9.44%. The daily mean bias error is -7.94%. The daily root mean square error is 12.31%.
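
    The validation indicators named here can be sketched as follows, assuming the usual definitions with MBE and RMSE normalized by the measured mean; the irradiation values are hypothetical, not the Adrar measurements.

    import numpy as np

    def validation_stats(measured, modeled):
        # MBE and RMSE as % of the measured mean, plus MAPE and r^2
        # (common definitions; the paper's exact normalization is assumed).
        m = np.asarray(measured, dtype=float)
        p = np.asarray(modeled, dtype=float)
        mbe = 100.0 * np.mean(p - m) / np.mean(m)
        rmse = 100.0 * np.sqrt(np.mean((p - m) ** 2)) / np.mean(m)
        mape = 100.0 * np.mean(np.abs((p - m) / m))
        r = np.corrcoef(m, p)[0, 1]
        return mbe, rmse, mape, r ** 2

    # Hypothetical daily global irradiation (kWh/m^2): measured vs. modeled.
    measured = [5.9, 6.4, 7.1, 6.8, 7.4]
    modeled  = [5.6, 6.1, 6.9, 6.5, 7.0]
    print(validation_stats(measured, modeled))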

  6. Absolute humidity and the seasonal onset of influenza in the continental United States.

    Directory of Open Access Journals (Sweden)

    Jeffrey Shaman

    2010-02-01

    Full Text Available Much of the observed wintertime increase of mortality in temperate regions is attributed to seasonal influenza. A recent reanalysis of laboratory experiments indicates that absolute humidity strongly modulates the airborne survival and transmission of the influenza virus. Here, we extend these findings to the human population level, showing that the onset of increased wintertime influenza-related mortality in the United States is associated with anomalously low absolute humidity levels during the prior weeks. We then use an epidemiological model, in which observed absolute humidity conditions temper influenza transmission rates, to successfully simulate the seasonal cycle of observed influenza-related mortality. The model results indicate that direct modulation of influenza transmissibility by absolute humidity alone is sufficient to produce this observed seasonality. These findings provide epidemiological support for the hypothesis that absolute humidity drives seasonal variations of influenza transmission in temperate regions.

  7. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  8. Overspecification of colour, pattern, and size: Salience, absoluteness, and consistency

    OpenAIRE

Sammie Tarenskeen; Mirjam Broersma; Bart Geurts

    2015-01-01

    The rates of overspecification of colour, pattern, and size are compared, to investigate how salience and absoluteness contribute to the production of overspecification. Colour and pattern are absolute attributes, whereas size is relative and less salient. Additionally, a tendency towards consistent responses is assessed. Using a within-participants design, we find similar rates of colour and pattern overspecification, which are both higher than the rate of size overspecification. Using a bet...

  9. Overspecification of color, pattern, and size: salience, absoluteness, and consistency

    OpenAIRE

    Tarenskeen, S.L.; Broersma, M.; Geurts, B.

    2015-01-01

    The rates of overspecification of color, pattern, and size are compared, to investigate how salience and absoluteness contribute to the production of overspecification. Color and pattern are absolute and salient attributes, whereas size is relative and less salient. Additionally, a tendency toward consistent responses is assessed. Using a within-participants design, we find similar rates of color and pattern overspecification, which are both higher than the rate of size overspecification. Usi...

  10. Absolute photoionization cross-section of the methyl radical.

    Science.gov (United States)

    Taatjes, Craig A; Osborn, David L; Selby, Talitha M; Meloni, Giovanni; Fan, Haiyan; Pratt, Stephen T

    2008-10-02

The absolute photoionization cross-section of the methyl radical has been measured using two completely independent methods. The CH3 photoionization cross-section was determined relative to that of acetone and methyl vinyl ketone at photon energies of 10.2 and 11.0 eV by using a pulsed laser-photolysis/time-resolved synchrotron photoionization mass spectrometry method. The time-resolved depletion of the acetone or methyl vinyl ketone precursor and the production of methyl radicals following 193 nm photolysis are monitored simultaneously by using time-resolved synchrotron photoionization mass spectrometry. Comparison of the initial methyl signal with the decrease in precursor signal, in combination with previously measured absolute photoionization cross-sections of the precursors, yields the absolute photoionization cross-section of the methyl radical: σ(CH3)(10.2 eV) = (5.7 ± 0.9) × 10⁻¹⁸ cm² and σ(CH3)(11.0 eV) = (6.0 ± 2.0) × 10⁻¹⁸ cm². The photoionization cross-section for the vinyl radical determined by photolysis of methyl vinyl ketone is in good agreement with previous measurements. The methyl radical photoionization cross-section was also independently measured relative to that of the iodine atom by comparison of ionization signals from CH3 and I fragments following 266 nm photolysis of methyl iodide in a molecular-beam ion-imaging apparatus. These measurements gave a cross-section of (5.4 ± 2.0) × 10⁻¹⁸ cm² at 10.460 eV, (5.5 ± 2.0) × 10⁻¹⁸ cm² at 10.466 eV, and (4.9 ± 2.0) × 10⁻¹⁸ cm² at 10.471 eV. The measurements allow relative photoionization efficiency spectra of the methyl radical to be placed on an absolute scale and will facilitate quantitative measurements of methyl concentrations by photoionization mass spectrometry.

  11. The Impact of Error-Management Climate, Error Type and Error Originator on Auditors’ Reporting Errors Discovered on Audit Work Papers

    NARCIS (Netherlands)

    A.H. Gold-Nöteberg (Anna); U. Gronewold (Ulfert); S. Salterio (Steve)

    2010-01-01

    textabstractWe examine factors affecting the auditor’s willingness to report their own or their peers’ self-discovered errors in working papers subsequent to detailed working paper review. Prior research has shown that errors in working papers are detected in the review process; however, such

  12. Plasma radiation dynamics with the upgraded Absolute Extreme Ultraviolet tomographical system in the Tokamak à Configuration Variable

    Energy Technology Data Exchange (ETDEWEB)

    Tal, B.; Nagy, D.; Veres, G. [Institute for Particle and Nuclear Physics, Wigner Research Centre for Physics, Hungarian Academy of Sciences, Association EURATOM, P. O. Box 49, H-1525 Budapest (Hungary); Labit, B.; Chavan, R.; Duval, B. [Ecole Polytechnique Fédérale de Lausanne (EPFL), Centre de Recherches en Physique des Plasmas, Association EURATOM-Confédération Suisse, EPFL SB CRPP, Station 13, CH-1015 Lausanne (Switzerland)

    2013-12-15

We introduce an upgraded version of a tomographical system that is built from Absolute Extreme Ultraviolet-type (AXUV) detectors and has been installed on the Tokamak à Configuration Variable (TCV). The system is suitable for the investigation of fast radiative processes usually observed in magnetically confined high-temperature plasmas. The upgrade consists of detector protection by movable shutters, modifications to correct original design errors, and improvements in the data evaluation techniques. The short-term sensitivity degradation of the detectors, which is caused by the plasma radiation itself, has been monitored and found to be severe. The results provided by the system are consistent with the measurements obtained with the usual plasma radiation diagnostics installed on TCV. Additionally, the coupling between core plasma radiation and plasma-wall interaction is revealed. This was impossible with other available diagnostics on TCV.

  13. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.

  14. Absolute dissipative drift-wave instabilities in tokamaks

    International Nuclear Information System (INIS)

    Chen, L.; Chance, M.S.; Cheng, C.Z.

    1979-07-01

    Contrary to previous theoretical predictions, it is shown that the dissipative drift-wave instabilities are absolute in tokamak plasmas. The existence of unstable eigenmodes is shown to be associated with a new eigenmode branch induced by the finite toroidal couplings

  15. Internal descriptions of absolute Borel classes

    Czech Academy of Sciences Publication Activity Database

    Holický, P.; Pelant, Jan

    2004-01-01

    Roč. 141, č. 1 (2004), s. 87-104 ISSN 0166-8641 R&D Projects: GA ČR GA201/00/1466; GA ČR GA201/03/0933 Institutional research plan: CEZ:AV0Z1019905 Keywords : absolute Borel class * complete sequence of covers * open map Subject RIV: BA - General Mathematics Impact factor: 0.364, year: 2004

  16. The Role of Monocyte Percentage in Osteoporosis in Male Rheumatic Diseases.

    Science.gov (United States)

    Su, Yu-Jih; Chen, Chao Tung; Tsai, Nai-Wen; Huang, Chih-Cheng; Wang, Hung-Chen; Kung, Chia-Te; Lin, Wei-Che; Cheng, Ben-Chung; Su, Chih-Min; Hsiao, Sheng-Yuan; Lu, Cheng-Hsien

    2017-11-01

Osteoporosis is easily overlooked in male patients, especially in the field of rheumatic diseases, which are mostly prevalent among female patients, and its link to pathogenesis is still unclear. Attenuated monocyte apoptosis, identified in a transcriptome-wide expression study, illustrates the role of monocytes in osteoporosis. This study tested the hypothesis that the monocyte percentage among leukocytes could be a biomarker of osteoporosis in rheumatic diseases. Eighty-seven males with rheumatic diseases were evaluated in rheumatology outpatient clinics for bone mineral density (BMD) and surrogate markers, such as routine peripheral blood parameters and autoantibodies. Of the 87 patients included in this study, only 15 met the criteria for a diagnosis of osteoporosis. Both age and monocyte percentage remained independently associated with the presence of osteoporosis. Steroid dose (equivalent prednisolone dose) was negatively associated with BMD of the hip area, and platelet counts were negatively associated with BMD and T score of the spine area. Besides age, monocyte percentage meets the major requirements for a marker of osteoporosis in male rheumatic diseases. In male rheumatic disease patients with a higher monocyte percentage, aged over 50 years in this study, a BMD study should be considered in order to reduce the risk of osteoporosis-related fractures.

  17. MODEL PERAMALAN KONSUMSI BAHAN BAKAR JENIS PREMIUM DI INDONESIA DENGAN REGRESI LINIER BERGANDA

    Directory of Open Access Journals (Sweden)

    Farizal

    2014-12-01

Full Text Available Energy consumption forecasting, especially for premium gasoline, is an integral part of energy management. Premium is a fuel that receives a government subsidy. Unfortunately, the premium forecasts performed to date have had considerably high errors, making it difficult to reach the planned subsidy target and inflating the subsidy amount. In this study, forecasting was conducted using a multilinear regression (MLR) method with ten candidate predictor variables. The results show that only four variables drive premium consumption: inflation, the selling-price disparity between Pertamax and premium, the economic growth rate, and the number of cars. Analysis of the MLR model indicates that it has a considerably low error, with a mean absolute percentage error (MAPE) of 5.18%. The model was used to predict 2013 premium consumption with an error of 1.05%: it predicted 29.56 million kiloliters, while actual consumption was 29.26 million kiloliters.
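
    A sketch of the general approach (ordinary least squares on several predictors, scored by MAPE); every number below is hypothetical, and the predictor set merely mirrors the four variables named in the abstract.

    import numpy as np

    # Hypothetical yearly data: inflation (%), Pertamax-premium price gap (IDR/L),
    # economic growth rate (%), number of cars (millions).
    X = np.array([
        [5.1, 2500.0, 6.2,  8.9],
        [5.4, 3100.0, 6.0,  9.4],
        [4.3, 3600.0, 6.2, 10.1],
        [8.4, 4100.0, 5.6, 10.9],
        [5.0, 4300.0, 6.0, 11.5],
        [6.4, 4500.0, 5.8, 12.3],
    ])
    y = np.array([23.2, 24.5, 25.5, 27.3, 28.2, 29.3])   # consumption (million kL)

    # Multilinear regression by ordinary least squares with an intercept column.
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    fitted = A @ coef

    # In-sample MAPE; with so few invented points the fit is nearly exact,
    # so this illustrates the computation, not the paper's 5.18%.
    mape = 100.0 * np.mean(np.abs((y - fitted) / y))
    print(f"MAPE = {mape:.2f}%")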

  18. Absolute photoionization cross-section of the propargyl radical

    Energy Technology Data Exchange (ETDEWEB)

    Savee, John D.; Welz, Oliver; Taatjes, Craig A.; Osborn, David L. [Sandia National Laboratories, Combustion Research Facility, Livermore, California 94551 (United States); Soorkia, Satchin [Institut des Sciences Moleculaires d' Orsay, Universite Paris-Sud 11, Orsay (France); Selby, Talitha M. [Department of Chemistry, University of Wisconsin, Washington County Campus, West Bend, Wisconsin 53095 (United States)

    2012-04-07

    Using synchrotron-generated vacuum-ultraviolet radiation and multiplexed time-resolved photoionization mass spectrometry we have measured the absolute photoionization cross-section for the propargyl (C{sub 3}H{sub 3}) radical, {sigma}{sub propargyl}{sup ion}(E), relative to the known absolute cross-section of the methyl (CH{sub 3}) radical. We generated a stoichiometric 1:1 ratio of C{sub 3}H{sub 3} : CH{sub 3} from 193 nm photolysis of two different C{sub 4}H{sub 6} isomers (1-butyne and 1,3-butadiene). Photolysis of 1-butyne yielded values of {sigma}{sub propargyl}{sup ion}(10.213 eV)=(26.1{+-}4.2) Mb and {sigma}{sub propargyl}{sup ion}(10.413 eV)=(23.4{+-}3.2) Mb, whereas photolysis of 1,3-butadiene yielded values of {sigma}{sub propargyl}{sup ion}(10.213 eV)=(23.6{+-}3.6) Mb and {sigma}{sub propargyl}{sup ion}(10.413 eV)=(25.1{+-}3.5) Mb. These measurements place our relative photoionization cross-section spectrum for propargyl on an absolute scale between 8.6 and 10.5 eV. The cross-section derived from our results is approximately a factor of three larger than previous determinations.

  19. Characterizing absolute piezoelectric microelectromechanical system displacement using an atomic force microscope

    International Nuclear Information System (INIS)

    Evans, J.; Chapman, S.

    2014-01-01

    Piezoresponse Force Microscopy (PFM) is a popular tool for the study of ferroelectric and piezoelectric materials at the nanometer level. Progress in the development of piezoelectric MEMS fabrication is highlighting the need to characterize absolute displacement at the nanometer and Ångstrom scales, something Atomic Force Microscopy (AFM) might do but PFM cannot. Absolute displacement is measured by executing a polarization measurement of the ferroelectric or piezoelectric capacitor in question while monitoring the absolute vertical position of the sample surface with a stationary AFM cantilever. Two issues dominate the execution and precision of such a measurement: (1) the small amplitude of the electrical signal from the AFM at the Ångstrom level and (2) calibration of the AFM. The authors have developed a calibration routine and test technique for mitigating the two issues, making it possible to use an atomic force microscope to measure both the movement of a capacitor surface as well as the motion of a micro-machine structure actuated by that capacitor. The theory, procedures, pitfalls, and results of using an AFM for absolute piezoelectric measurement are provided

  1. Human Error Prediction and Countermeasures based on CREAM in Loading and Storage Phase of Spent Nuclear Fuel (SNF)

    International Nuclear Information System (INIS)

    Kim, Jae San; Kim, Min Su; Jo, Seong Youn

    2007-01-01

    With the steady demand for nuclear power energy in Korea, the amount of accumulated SNF has inevitably increased year by year. Thus far, SNF has been transported on-site from one unit to a nearby unit or to an on-site dry storage facility. In the near future, as the amount of SNF generated approaches the capacity of these facilities, a percentage of it will be transported to another SNF storage facility. In the process of transporting SNF, human interactions involve inspecting and preparing the cask and spent fuel, loading the cask, transferring the cask, and storing or monitoring the cask. Human actions therefore play a significant role in SNF transportation. In analyzing incidents that have occurred during transport operations, several recent studies have indicated that 'human error' is a primary cause. Therefore, the objectives of this study are to predict and identify possible human errors during the loading and storage of SNF. Furthermore, after evaluating the human error for each process, countermeasures to minimize human error are deduced.

  2. A vibration correction method for free-fall absolute gravimeters

    Science.gov (United States)

    Qian, J.; Wang, G.; Wu, K.; Wang, L. J.

    2018-02-01

    An accurate determination of the gravitational acceleration, usually approximated as 9.8 m s-2, has long played an important role in metrology, geophysics, and geodesy. Absolute gravimetry has experienced rapid development in recent years. Most absolute gravimeters today employ a free-fall method to measure the gravitational acceleration. Noise from ground vibration has become one of the most serious factors limiting measurement precision. Compared to vibration isolators, the vibration correction method is a simple and feasible way to reduce the influence of ground vibrations. A modified vibration correction method is proposed and demonstrated. A two-dimensional golden section search algorithm is used to search for the best parameters of the hypothetical transfer function. Experiments using a T-1 absolute gravimeter are performed. It is verified that, for an identical group of drop data, the modified method proposed in this paper achieves better correction with much less computation than previous methods. Compared to vibration isolators, the correction method applies to more hostile environments and even dynamic platforms, and is expected to be used in a wider range of applications.
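
    The abstract's search step can be illustrated with the one-dimensional golden-section building block below; the cost function is a stand-in (not the gravimeter's correction residual), and a two-dimensional search of the transfer-function parameters can be assembled, for example, by alternating such line searches over the two coordinates.

```python
import math

def golden_min(f, a, b, tol=1e-6):
    """Golden-section search for the minimum of a unimodal f on [a, b]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0           # ~0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while (b - a) > tol:
        if fc < fd:                                 # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                       # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

# Stand-in cost: a 2-D search could alternate over (gain, delay) coordinate-wise.
residual = lambda gain: (gain - 1.3) ** 2
print(f"best gain ~ {golden_min(residual, 0.0, 3.0):.4f}")
```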

  3. Wide-field absolute transverse blood flow velocity mapping in vessel centerline

    Science.gov (United States)

    Wu, Nanshou; Wang, Lei; Zhu, Bifeng; Guan, Caizhong; Wang, Mingyi; Han, Dingan; Tan, Haishu; Zeng, Yaguang

    2018-02-01

    We propose a wide-field absolute transverse blood flow velocity measurement method in vessel centerline based on absorption intensity fluctuation modulation effect. The difference between the light absorption capacities of red blood cells and background tissue under low-coherence illumination is utilized to realize the instantaneous and average wide-field optical angiography images. The absolute fuzzy connection algorithm is used for vessel centerline extraction from the average wide-field optical angiography. The absolute transverse velocity in the vessel centerline is then measured by a cross-correlation analysis according to instantaneous modulation depth signal. The proposed method promises to contribute to the treatment of diseases, such as those related to anemia or thrombosis.
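
    The centerline velocity step amounts to estimating a transit-time lag between two modulation-depth signals. The following is a generic sketch of lag-based velocity estimation, not the authors' implementation; the sampling rate, point spacing, and signals are hypothetical.

```python
import numpy as np

fs = 1000.0         # sampling rate of the modulation-depth signal (Hz), assumed
spacing = 50e-6     # distance between the two centerline points (m), assumed

rng = np.random.default_rng(0)
upstream = rng.standard_normal(2000)
true_lag = 25                                       # samples of transit delay
downstream = np.roll(upstream, true_lag) + 0.1 * rng.standard_normal(2000)

# The lag of the cross-correlation peak estimates the transit time
corr = np.correlate(downstream - downstream.mean(),
                    upstream - upstream.mean(), mode="full")
lag = int(corr.argmax()) - (len(upstream) - 1)
velocity = spacing / (lag / fs)                     # m/s
print(f"lag = {lag} samples, velocity = {velocity * 1e3:.2f} mm/s")
```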

  4. Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Louisiana State University; Balman, Mehmet; Kosar, Tevfik

    2010-10-27

    Data transfer in distributed environments is prone to frequent failures resulting from back-end system-level problems, such as connectivity failures, which are technically untraceable by users. Error messages are not logged efficiently, and sometimes are not relevant or useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. It is necessary to have well-defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.

  5. Vehicle speed prediction via a sliding-window time series analysis and an evolutionary least learning machine: A case study on San Francisco urban roads

    Directory of Open Access Journals (Sweden)

    Ladan Mozaffari

    2015-06-01

    Full Text Available The main goal of the current study is to take advantage of advanced numerical and intelligent tools to predict the speed of a vehicle using time series. It is clear that the uncertainty caused by the temporal behavior of the driver, as well as various external disturbances on the road, will affect the vehicle speed and thus the vehicle power demands. The prediction of upcoming power demands can be employed by the vehicle powertrain control systems to significantly improve fuel economy and emission performance. Therefore, it is important for systems design engineers and automotive industrialists to develop efficient numerical tools to overcome the risk of unpredictability associated with the vehicle speed profile on roads. In this study, the authors propose an intelligent tool called the evolutionary least learning machine (E-LLM) to forecast the vehicle speed sequence. To have a practical evaluation of the efficacy of E-LLM, the authors use the driving data collected on the San Francisco urban roads by a private Honda Insight vehicle. The concept of sliding window time series (SWTS) analysis is used to prepare the database for the speed forecasting process, as sketched below. To evaluate the performance of the proposed technique, a number of well-known approaches, such as the auto regressive (AR) method, back-propagation neural network (BPNN), evolutionary extreme learning machine (E-ELM), extreme learning machine (ELM), and radial basis function neural network (RBFNN), are considered. The performances of the rival methods are then compared in terms of the mean square error (MSE), root mean square error (RMSE), mean absolute percentage error (MAPE), median absolute percentage error (MDAPE), and absolute fraction of variances (R2) metrics. Through an exhaustive comparative study, the authors observed that E-LLM is a powerful tool for predicting the vehicle speed profiles. The outcomes of the current study can be of use for the engineers of automotive industry who have been
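
    The SWTS preparation step can be sketched as follows; the window length and the synthetic speed trace are hypothetical, as the abstract does not give the study's exact configuration.

```python
import numpy as np

def sliding_windows(series, window):
    """Rearrange a 1-D series into (X, y) pairs: each row of X holds `window`
    consecutive samples and y holds the sample that follows them."""
    series = np.asarray(series, dtype=float)
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

speed = 30.0 + 10.0 * np.sin(np.linspace(0.0, 20.0, 200))   # synthetic trace
X, y = sliding_windows(speed, window=5)
print(X.shape, y.shape)    # (195, 5) (195,)
```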

  6. On determining absolute entropy without quantum theory or the third law of thermodynamics

    Science.gov (United States)

    Steane, Andrew M.

    2016-04-01

    We employ classical thermodynamics to gain information about absolute entropy, without recourse to statistical methods, quantum mechanics or the third law of thermodynamics. The Gibbs-Duhem equation yields various simple methods to determine the absolute entropy of a fluid. We also study the entropy of an ideal gas and the ionization of a plasma in thermal equilibrium. A single measurement of the degree of ionization can be used to determine an unknown constant in the entropy equation, and thus determine the absolute entropy of a gas. It follows from all these examples that the value of entropy at absolute zero temperature does not need to be assigned by postulate, but can be deduced empirically.
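
    To make the Gibbs-Duhem route concrete, here is a sketch of one such relation (our illustration; the paper's full treatment, including the ideal gas and plasma examples, goes further): for a one-component fluid, holding the chemical potential fixed expresses the absolute entropy density through measurable quantities.

```latex
% Gibbs-Duhem relation for a one-component fluid; holding the chemical
% potential \mu fixed gives the absolute entropy per unit volume:
\[
  S\,\mathrm{d}T - V\,\mathrm{d}P + N\,\mathrm{d}\mu = 0
  \quad\Longrightarrow\quad
  \frac{S}{V} = \left(\frac{\partial P}{\partial T}\right)_{\mu}
\]
```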

  7. Fingerprints of flower absolutes using supercritical fluid chromatography hyphenated with high resolution mass spectrometry.

    Science.gov (United States)

    Santerre, Cyrille; Vallet, Nadine; Touboul, David

    2018-06-02

    Supercritical fluid chromatography hyphenated with high resolution mass spectrometry (SFC-HRMS) was developed for fingerprint analysis of different flower absolutes commonly used in the cosmetics field, especially in perfumes. The supercritical fluid chromatography-atmospheric pressure photoionization-high resolution mass spectrometry (SFC-APPI-HRMS) technique was employed to identify the components of the fingerprint. The samples were separated on a porous graphitic carbon (PGC) Hypercarb™ column (100 mm × 2.1 mm, 3 μm) by gradient elution using supercritical CO 2 and ethanol (0.0-20.0 min (2-30% B), 20.0-25.0 min (30% B), 25.0-26.0 min (30-2% B) and 26.0-30.0 min (2% B)) as the mobile phase at a flow rate of 1.5 mL/min. In order to compare the SFC fingerprints of five different flower absolutes (Jasminum grandiflorum, Jasminum sambac, Narcissus jonquilla, Narcissus poeticus, and Lavandula angustifolia) from different suppliers and batches, a chemometric procedure including principal component analysis (PCA) was applied to classify the samples according to their genus and species. Consistent results were obtained, showing that samples could be successfully discriminated. Copyright © 2018 Elsevier B.V. All rights reserved.
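
    A minimal sketch of the chemometric step, assuming the fingerprints are summarized as a peak-area feature matrix; the data, feature count, and three-class layout are hypothetical stand-ins for the real absolutes.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical fingerprint matrix: rows = absolute samples, columns = peak areas
fingerprints = np.vstack([rng.normal(loc, 1.0, size=(10, 40))
                          for loc in (0.0, 2.0, 4.0)])   # three species
labels = np.repeat(["J. grandiflorum", "J. sambac", "N. jonquilla"], 10)

scores = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(fingerprints))
for species in np.unique(labels):
    centroid = scores[labels == species].mean(axis=0)
    print(species, centroid.round(2))    # well-separated cluster centroids
```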

  8. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  9. Telling in-tune from out-of-tune: widespread evidence for implicit absolute intonation.

    Science.gov (United States)

    Van Hedger, Stephen C; Heald, Shannon L M; Huang, Alex; Rutstein, Brooke; Nusbaum, Howard C

    2017-04-01

    Absolute pitch (AP) is the rare ability to name or produce an isolated musical note without the aid of a reference note. One skill thought to be unique to AP possessors is the ability to provide absolute intonation judgments (e.g., classifying an isolated note as "in-tune" or "out-of-tune"). Recent work has suggested that absolute intonation perception among AP possessors is not crystallized in a critical period of development, but is dynamically maintained by the listening environment, in which the vast majority of Western music is tuned to a specific cultural standard. Given that all listeners of Western music are constantly exposed to this specific cultural tuning standard, our experiments address whether absolute intonation perception extends beyond AP possessors. We demonstrate that non-AP listeners are able to accurately judge the intonation of completely isolated notes. Both musicians and nonmusicians showed evidence for absolute intonation recognition when listening to familiar timbres (piano and violin). When testing unfamiliar timbres (triangle and inverted sine waves), only musicians showed weak evidence of absolute intonation recognition (Experiment 2). Overall, these results highlight a previously unknown similarity between AP and non-AP possessors' long-term musical note representations, including evidence of sensitivity to frequency.

  10. PERCENTAGE OF VIABLE SPERMATOZOA COLLECTED FROM THE EPIDIDYMES OF DEATH LOCAL DOG

    Directory of Open Access Journals (Sweden)

    I Nyoman Sulabda

    2012-11-01

    Full Text Available The purpose of this study was to determine the effect of postmortem time on the percentage of live epididymal sperm from postmortem dog caudae epididymides. A total of 9 dogs were used and divided into three groups: T0 was the control group, T1 was 3 hours postmortem, and T2 was 6 hours postmortem. This way, samples were obtained at different times postmortem. Sperm were extracted from the caudae epididymides by means of cuts. The results showed that the percentages of live sperm were 67.16 ± 5.67 (T0), 46.33 ± 5.60 (T1), and 24.00 ± 4.35 (T2), respectively. We could appreciate that the percentage of live sperm was affected by postmortem time; there was a significant decrease in live sperm recovered from the epididymides postmortem (P<0.01). In conclusion, epididymal sperm from dogs undergo a decrease in the percentage of live cells, but viability could stay acceptable within many hours postmortem. We interpreted these data to indicate that it may still be possible to obtain viable spermatozoa many hours later.

  11. Forecast models for suicide: Time-series analysis with data from Italy.

    Science.gov (United States)

    Preti, Antonio; Lentini, Gianluca

    2016-01-01

    The prediction of suicidal behavior is a complex task. To fine-tune targeted preventative interventions, predictive analytics (i.e. forecasting future risk of suicide) is more important than exploratory data analysis (pattern recognition, e.g. detection of seasonality in suicide time series). This study sets out to investigate the accuracy of forecasting models of suicide for men and women. A total of 101 499 male suicides and 39 681 female suicides, which occurred in Italy from 1969 to 2003, were investigated. In order to apply the forecasting model and test its accuracy, the time series were split into a training set (1969 to 1996; 336 months) and a test set (1997 to 2003; 84 months). The main outcome was the accuracy of forecasting models on the monthly number of suicides. The following measures of accuracy were used: mean absolute error; root mean squared error; mean absolute percentage error; mean absolute scaled error. In both male and female suicides a change in the trend pattern was observed, with an increase from 1969 onwards to a maximum around 1990 and a decrease thereafter. The variances attributable to the seasonal and trend components were, respectively, 24% and 64% in male suicides, and 28% and 41% in female ones. Both annual and seasonal historical trends of monthly data contributed to forecasting future trends of suicide with a margin of error around 10%. The finding is clearer in the male than in the female time series of suicide. The main conclusion of the study is that models taking seasonality into account seem to be able to derive information on deviation from the mean when it occurs as a zenith, but they fail to reproduce it when it occurs as a nadir. Preventative efforts should concentrate on the factors that influence the occurrence of increases above the main trend in both seasonal and cyclic patterns of suicides.
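
    Among the accuracy measures listed, the mean absolute scaled error is the least common; a minimal sketch with the study's 336/84-month split but a purely synthetic monthly series follows.

```python
import numpy as np

def mase(train, test, forecast, m=12):
    """Mean absolute scaled error: out-of-sample MAE scaled by the in-sample
    MAE of a seasonal-naive forecast with period m (Hyndman & Koehler)."""
    scale = np.mean(np.abs(train[m:] - train[:-m]))
    return np.mean(np.abs(test - forecast)) / scale

rng = np.random.default_rng(0)
months = np.arange(420)                               # 35 years of monthly data
series = 100.0 + 10.0 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 3, 420)
train, test = series[:336], series[336:]              # same 336/84 split
seasonal_naive = np.tile(train[-12:], 7)              # repeat last observed year
print(f"MASE = {mase(train, test, seasonal_naive):.2f}")
```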

  12. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

    The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...

  13. 7 CFR 987.44 - Free and restricted percentages.

    Science.gov (United States)

    2010-01-01

    ... Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing... applicable grade and size available to supply the trade demand for free dates of any variety is likely to be... effectuate the declared policy of the act, it shall recommend such percentages to the Secretary. If the...

  14. Relative and absolute test-retest reliabilities of pressure pain threshold in patients with knee osteoarthritis.

    Science.gov (United States)

    Srimurugan Pratheep, Neeraja; Madeleine, Pascal; Arendt-Nielsen, Lars

    2018-04-25

    Pressure pain threshold (PPT) and PPT maps are commonly used to quantify and visualize mechanical pain sensitivity. Although PPTs have frequently been reported for patients with knee osteoarthritis (KOA), the absolute and relative reliability of PPT assessments remain to be determined. Thus, the purpose of this study was to evaluate the test-retest relative and absolute reliability of PPT in KOA. For that purpose, the intraclass correlation coefficient (ICC) as well as the standard error of measurement (SEM) and the minimal detectable change (MDC) were measured within eight anatomical locations covering the most painful knee of KOA patients. Twenty KOA patients participated in two sessions 2 weeks ± 3 days apart. PPTs were assessed over eight anatomical locations covering the knee and two remote locations over the tibialis anterior and brachioradialis. The patients rated their maximum pain intensity during the past 24 h and prior to the recordings on a visual analog scale (VAS), and completed the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) and PainDetect surveys. The ICC, SEM and MDC between the sessions were assessed. The individual variability was expressed with the coefficient of variation (CV). Bland-Altman plots were used to assess potential bias in the dataset. The ICC ranged from 0.85 to 0.96 for all the anatomical locations, which is considered "almost perfect". CV was lowest in session 1 and ranged from 44.2 to 57.6%. SEM for comparison ranged between 34 and 71 kPa, and MDC ranged between 93 and 197 kPa, with a mean PPT ranging from 273.5 to 367.7 kPa in session 1 and 268.1-331.3 kPa in session 2. The analysis of the Bland-Altman plots showed no systematic bias. PPT maps showed that the patients had lower thresholds in session 2, but no significant difference was observed between the sessions for PPT or VAS. No correlations were seen between PainDetect and PPT or between PainDetect and WOMAC
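
    The reliability indices above are commonly related by SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM; the abstract does not state the exact estimators used, so the sketch below, with hypothetical inputs chosen to land in the reported ranges, is only illustrative.

```python
import math

def sem_mdc(sd, icc):
    """Classical estimators: SEM = SD * sqrt(1 - ICC);
    MDC95 = 1.96 * sqrt(2) * SEM."""
    sem = sd * math.sqrt(1.0 - icc)
    return sem, 1.96 * math.sqrt(2.0) * sem

# Hypothetical pooled SD and ICC, chosen to fall near the reported PPT ranges
sem, mdc = sem_mdc(sd=180.0, icc=0.90)
print(f"SEM = {sem:.0f} kPa, MDC95 = {mdc:.0f} kPa")
```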

  15. A Gridded Daily Min/Max Temperature Dataset With 0.1° Resolution for the Yangtze River Valley and its Error Estimation

    Science.gov (United States)

    Xiong, Qiufen; Hu, Jianglin

    2013-05-01

    The minimum/maximum (Min/Max) temperature in the Yangtze River valley is decomposed into a climatic mean and an anomaly component. A spatial interpolation is developed which combines the 3D thin-plate spline scheme for the climatological mean and the 2D Barnes scheme for the anomaly component to create a daily Min/Max temperature dataset. The climatic mean field is obtained by the 3D thin-plate spline scheme because the relationship between the decrease in Min/Max temperature and elevation is robust and reliable on a long time-scale. The characteristics of the anomaly field tend to be only weakly related to elevation variation, and the anomaly component is adequately analyzed by the 2D Barnes procedure, which is computationally efficient and readily tunable. With this hybridized interpolation method, a daily Min/Max temperature dataset covering the domain from 99°E to 123°E and from 24°N to 36°N with 0.1° longitudinal and latitudinal resolution is obtained by utilizing daily Min/Max temperature data from three kinds of station observations: national reference climatological stations, basic meteorological observing stations, and ordinary meteorological observing stations in 15 provinces and municipalities in the Yangtze River valley from 1971 to 2005. The error of the gridded dataset is assessed by examining cross-validation statistics. The results show that the daily Min/Max temperature interpolation not only has a high correlation coefficient (0.99) and interpolation efficiency (0.98), but also a mean bias error of 0.00 °C. For the maximum temperature, the root mean square error is 1.1 °C and the mean absolute error is 0.85 °C. For the minimum temperature, the root mean square error is 0.89 °C and the mean absolute error is 0.67 °C. Thus, the new dataset provides the distribution of Min/Max temperature over the Yangtze River valley with realistic, successive gridded data with 0.1° × 0.1° spatial resolution and
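
    As an illustration of the anomaly-interpolation step, here is a generic two-pass Barnes successive-correction scheme; the length-scale parameters, station layout, and anomaly values are hypothetical and not the paper's tuning.

```python
import numpy as np

def gauss_weights(a, b, kappa):
    """Row-normalized Gaussian weights between point sets a (n,2) and b (m,2)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / kappa)
    return w / w.sum(axis=1, keepdims=True)

def barnes_two_pass(grid_xy, obs_xy, obs_val, kappa=0.05, gamma=0.3):
    """Two-pass Barnes successive correction: a first smooth analysis plus a
    sharper (gamma * kappa) correction of the residuals at the stations."""
    first = gauss_weights(grid_xy, obs_xy, kappa) @ obs_val
    at_obs = gauss_weights(obs_xy, obs_xy, kappa) @ obs_val
    resid = obs_val - at_obs
    return first + gauss_weights(grid_xy, obs_xy, gamma * kappa) @ resid

rng = np.random.default_rng(0)
obs_xy = rng.uniform(0.0, 1.0, size=(50, 2))                     # stations
obs_val = np.sin(3.0 * obs_xy[:, 0]) + rng.normal(0, 0.05, 50)   # anomalies
gx, gy = np.meshgrid(np.linspace(0, 1, 11), np.linspace(0, 1, 11))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
print(barnes_two_pass(grid_xy, obs_xy, obs_val).shape)           # (121,)
```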

  16. The Absolute Immanence in Deleuze

    OpenAIRE

    Park, Daeseung

    2013-01-01

    The absolute immanence in Deleuze Daeseung Park Abstract The plane of immanence is not unique. Deleuze and Guattari suppose a multiplicity of planes. Each great philosopher draws new planes in his own way, and these planes constitute the "time of philosophy". We can, therefore, "present the entire history of philosophy from the viewpoint of the institution of a plane of immanence" or present the time of philosophy from the viewpoint of the superposition and of the coexistence of planes. Howev...

  17. On the absolute measure of Beta activities

    International Nuclear Information System (INIS)

    Sanchez del Rio, C.; Jimenez Reynaldo, O.; Rodriguez Mayquez, E.

    1956-01-01

    A new method for absolute beta counting of solid samples is given. The measurement is made with an internal Geiger-Muller tube of new construction. The backscattering correction when using an infinitely thick mounting is discussed and results for different materials are given. (Author)

  18. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    Directory of Open Access Journals (Sweden)

    Pitchaiah Mandava

    Full Text Available OBJECTIVE: Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("Shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "Shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. METHODS: We identified 35 randomized stroke trials that met the inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. RESULTS: Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1; 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. CONCLUSION: We show that when uncertainty in assessments is considered, the lowest error rates are obtained with dichotomization. While using the full range of mRS is conceptually appealing, a gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered since the sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We
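
    The information-theoretic ingredient is the Shannon entropy of the outcome distribution. A minimal sketch for a hypothetical mRS distribution (the trial distributions themselves are not reproduced in the abstract):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (bits) of a discrete outcome distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 * log 0 is taken as 0
    return -np.sum(p * np.log2(p))

# Hypothetical mRS distribution over scores 0-6 for one trial arm
mrs = np.array([0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.10])
print(f"H = {shannon_entropy(mrs):.2f} bits")
```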

  19. A geometrically exact beam element based on the absolute nodal coordinate formulation

    International Nuclear Information System (INIS)

    Gerstmayr, Johannes; Matikainen, Marko K.; Mikkola, Aki M.

    2008-01-01

    In this study, Reissner's classical nonlinear rod formulation, as implemented by Simo and Vu-Quoc by means of the large rotation vector approach, is implemented into the framework of the absolute nodal coordinate formulation. The implementation is accomplished in the planar case accounting for coupled axial, bending, and shear deformation. By employing the virtual work of elastic forces similarly to Simo and Vu-Quoc in the absolute nodal coordinate formulation, the numerical results of the formulation are identical to those of the large rotation vector formulation. It is noteworthy, however, that the material definition in the absolute nodal coordinate formulation can differ from the material definition used in Reissner's beam formulation. Based on an analytical eigenvalue analysis, it turns out that the high frequencies of cross section deformation modes in the absolute nodal coordinate formulation are only slightly higher than frequencies of common shear modes, which are present in the classical large rotation vector formulation of Simo and Vu-Quoc, as well. Thus, previous claims that the absolute nodal coordinate formulation is inefficient or would lead to ill-conditioned finite element matrices, as compared to classical approaches, could be refuted. In the introduced beam element, locking is prevented by means of reduced integration of certain parts of the elastic forces. Several classical large deformation static and dynamic examples as well as an eigenvalue analysis document the equivalence of classical nonlinear rod theories and the absolute nodal coordinate formulation for the case of appropriate material definitions. The results also agree highly with those computed in commercial finite element codes

  20. How does aging affect the types of error made in a visual short-term memory 'object-recall' task?

    Science.gov (United States)

    Sapkota, Raju P; van der Linde, Ian; Pardhan, Shahina

    2014-01-01

    This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76) and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real-world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. The non-target error rate was higher than the non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, the non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits.

  1. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    

    The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  2. SU-F-J-208: Prompt Gamma Imaging-Based Prediction of Bragg Peak Position for Realistic Treatment Error Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Xing, Y; Macq, B; Bondar, L [Universite catholique de Louvain, Louvain-la-Neuve (Belgium); Janssens, G [IBA, Louvain-la-Neuve (Belgium)

    2016-06-15

    Purpose: To quantify the accuracy of predicting the Bragg peak position using simulated in-room measurements of prompt gamma (PG) emissions for realistic treatment error scenarios that combine several sources of error. Methods: Prompt gamma measurements by a knife-edge slit camera were simulated using an experimentally validated analytical simulation tool. Simulations were performed, for 143 treatment error scenarios, on an anthropomorphic phantom and a pencil beam scanning plan for the nasal cavity. Three types of errors were considered: translation along each axis, rotation around each axis, and CT-calibration errors, with magnitudes ranging, respectively, between −3 and 3 mm, −5 and 5 degrees, and −5 and +5%. We investigated the correlation between the Bragg peak (BP) shift and the horizontal shift of the PG profiles. The shifts were calculated between the planned (reference) position and the position given by the error scenario. The prediction error for one spot was calculated as the absolute difference between the PG profile shift and the BP shift. Results: The PG shift was significantly and strongly correlated with the BP shift for 92% of the cases (p<0.0001, Pearson correlation coefficient R>0.8). Moderate but significant correlations were obtained for all cases that considered only CT-calibration errors and for 1 case that combined translation and CT errors (p<0.0001, R ranged between 0.61 and 0.8). The average prediction errors for the simulated scenarios ranged between 0.08±0.07 and 1.67±1.3 mm (grand mean 0.66±0.76 mm). The prediction error was moderately correlated with the value of the BP shift (p=0, R=0.64). For the simulated scenarios the average BP shift ranged between −8±6.5 mm and 3±1.1 mm. Scenarios that considered combinations of the largest treatment errors were associated with large BP shifts. Conclusion: Simulations of in-room measurements demonstrate that prompt gamma profiles provide reliable estimation of the Bragg peak position for

  3. Utilization of 1H NMR in the determination of absolute configuration of alcohols

    International Nuclear Information System (INIS)

    Barreiros, Marizeth L.; David, Jorge M.; David, Juceni P. . E-juceni@ufba.br

    2005-01-01

    This review reports the determination of absolute configuration of primary and secondary alcohols by 1 H NMR spectroscopy, using the Mosher method. This method consists in the derivatization of an alcohol possessing unknown absolute configuration with one or both enantiomers of an auxiliary reagent. The resulting diastereoisomer spectra are registered and compared, and the chemical shift differences (Δδ R,S = δ R - δ S ) are measured. The determination of the absolute configuration of the alcohol molecule is based on the correlation between its chiral center and the auxiliary reagent's chiral center. Therefore, the determination of the absolute configuration depends on aromatic ring shielding effects on the substituents of the alcohol as evidenced by the 1 H NMR spectrum. (author)

  4. A comparison of artificial neural networks with other statistical approaches for the prediction of true metabolizable energy of meat and bone meal.

    Science.gov (United States)

    Perai, A H; Nassiri Moghaddam, H; Asadpour, S; Bahrampour, J; Mansoori, Gh

    2010-07-01

    There has been considerable and continuing interest in developing equations for the rapid and accurate prediction of the ME of meat and bone meal. In this study, an artificial neural network (ANN), a partial least squares (PLS) model, and a multiple linear regression (MLR) model were used to predict the TME(n) of meat and bone meal based on its CP, ether extract, and ash content. The accuracy of the models was assessed by the R(2) value, mean square error, mean absolute percentage error, mean absolute deviation, bias, and Theil's U. The predictive ability of the ANN was compared with the PLS and MLR models using the same training data sets. The squared regression coefficients of prediction for the MLR, PLS, and ANN models were 0.38, 0.36, and 0.94, respectively. The results revealed that the ANN produced more accurate predictions of TME(n) than the PLS and MLR methods. Based on the results of this study, the ANN could be used as a promising approach for rapid prediction of the nutritive value of meat and bone meal.
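
    A minimal sketch of such a model comparison, assuming sklearn-style MLR and ANN regressors and a fabricated composition-to-TMEn relationship; the data, coefficients, and network size are hypothetical, not the study's.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical composition data: CP, ether extract, ash (%) -> TMEn (MJ/kg)
X = rng.uniform([45.0, 5.0, 20.0], [60.0, 15.0, 35.0], size=(200, 3))
y = (13.0 + 0.02 * X[:, 0] + 0.15 * X[:, 1] - 0.10 * X[:, 2]
     + rng.normal(0.0, 0.3, 200))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "MLR": LinearRegression(),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                                      max_iter=5000, random_state=0)),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mape = 100.0 * np.mean(np.abs((y_te - pred) / y_te))
    print(f"{name}: MAPE = {mape:.2f}%")
```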

  5. Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada

    Science.gov (United States)

    Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.

    2015-08-01

    model can help in the understanding of the posterior estimates and percentage errors. Stable and realistic sub-regional and monthly flux estimates can be obtained for the western region of AB/SK, but not for the eastern region of ON. This indicates that a real observation-based inversion for the annual provincial emissions is likely to work for the western region, whereas improvements to the current inversion setup are needed before a real inversion is performed for the eastern region.

  6. On the absolute meaning of motion

    Directory of Open Access Journals (Sweden)

    H. Edwards

    Full Text Available The present manuscript aims to clarify why motion causes matter to age slower in a comparable sense, and how this relates to relativistic effects caused by motion. A fresh analysis of motion, built on a first axiom, delivers a proof whose result yields significant new understanding and computational power. A review of experimental results demonstrates that unaccelerated motion causes matter to age slower in a comparable, observer-independent sense. While focusing on this absolute effect, the present manuscript clarifies its relationship to relativistic effects, detailing their connection and incorporating both into one consistent picture. The presented theoretical results make new predictions and are testable through a suggested experiment of a novel nature. The manuscript finally arrives at an experimental tool and methodology which, as far as motion in ungravitated space is concerned or gravity is appreciated, enables us to find the absolute, observer-independent picture of reality that is reflected in the comparable display of atomic clocks. The discussion of the theoretical results derives a physical, causal understanding of gravity, a mathematical formulation of which will be presented. Keywords: Kinematics, Gravity, Atomic clocks, Cosmic microwave background

  7. Linear ultrasonic motor for absolute gravimeter.

    Science.gov (United States)

    Jian, Yue; Yao, Zhiyuan; Silberschmidt, Vadim V

    2017-05-01

    Thanks to their compactness and suitability for vacuum applications, linear ultrasonic motors are considered as substitutes for classical electromagnetic motors as driving elements in absolute gravimeters. Still, their application is prevented by relatively low power output. To overcome this limitation and provide better stability, a V-type linear ultrasonic motor with a new clamping method is proposed for a gravimeter. In this paper, a mechanical model of stators with flexible clamping components is suggested, according to a design criterion for clamps of linear ultrasonic motors. After that, the effect of the tangential and normal rigidity of the clamping components on the mechanical output is studied. This is followed by discussion of a new clamping method with sufficient tangential rigidity and a capability to facilitate pre-load. Additionally, a prototype of the motor with the proposed clamping method was fabricated and performance tests in the vertical direction were implemented. Experimental results show that the suggested motor has structural stability and high dynamic performance, such as a no-load speed of 1.4 m/s and a maximal thrust of 43 N, meeting the requirements for absolute gravimeters. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Standardization of the cumulative absolute velocity

    International Nuclear Information System (INIS)

    O'Hara, T.F.; Jacobson, J.P.

    1991-12-01

    EPRI NP-5930, ''A Criterion for Determining Exceedance of the Operating Basis Earthquake,'' was published in July 1988. As defined in that report, the Operating Basis Earthquake (OBE) is exceeded when both a response spectrum parameter and a second damage parameter, referred to as the Cumulative Absolute Velocity (CAV), are exceeded. In the review process of the above report, it was noted that the calculation of CAV could be confounded by time history records of long duration containing low (nondamaging) acceleration. Therefore, it is necessary to standardize the method of calculating CAV to account for record length. This standardized methodology allows consistent comparisons between future CAV calculations and the adjusted CAV threshold value based upon applying the standardized methodology to the data set presented in EPRI NP-5930. The recommended method to standardize the CAV calculation is to window its calculation on a second-by-second basis for a given time history: a one-second interval contributes to the CAV only if the absolute acceleration exceeds 0.025g at some time during that interval. The earthquake records used in EPRI NP-5930 have been reanalyzed on this basis, and the adjusted threshold of damage for CAV was found to be 0.16 g-sec.
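
    A minimal sketch of the windowed calculation described above, assuming uniform sampling, rectangle-rule integration, and a synthetic acceleration record; it is not the EPRI implementation.

```python
import numpy as np

def standardized_cav(accel_g, dt, window=1.0, threshold=0.025):
    """Standardized CAV (g-sec): |a| is integrated only over those one-second
    windows whose peak absolute acceleration exceeds the 0.025 g threshold."""
    n = int(round(window / dt))                 # samples per window
    cav = 0.0
    for start in range(0, len(accel_g) - n + 1, n):
        w = accel_g[start:start + n]
        if np.max(np.abs(w)) > threshold:
            cav += np.sum(np.abs(w)) * dt       # rectangle-rule integration
    return cav

rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 2001)
accel = 0.05 * np.exp(-0.2 * t) * rng.standard_normal(t.size)  # synthetic record
print(f"CAV = {standardized_cav(accel, dt=0.01):.3f} g-sec")
```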

  9. Confirmation of the absolute configuration of (−)-aurantioclavine

    KAUST Repository

    Behenna, Douglas C.; Krishnan, Shyam; Stoltz, Brian M.

    2011-01-01

    We confirm our previous assignment of the absolute configuration of (-)-aurantioclavine as 7R by crystallographically characterizing an advanced 3-bromoindole intermediate reported in our previous synthesis. This analysis also provides additional

  10. Musical Activity Tunes Up Absolute Pitch Ability

    DEFF Research Database (Denmark)

    Dohn, Anders; Garza-Villarreal, Eduardo A.; Ribe, Lars Riisgaard

    2014-01-01

    Absolute pitch (AP) is the ability to identify or produce pitches of musical tones without an external reference. Active AP (i.e., pitch production or pitch adjustment) and passive AP (i.e., pitch identification) are considered to not necessarily coincide, although no study has properly compared...

  11. Leuconostoc mesenteroides growth in food products: prediction and sensitivity analysis by adaptive-network-based fuzzy inference systems.

    Directory of Open Access Journals (Sweden)

    Hue-Yu Wang

    Full Text Available BACKGROUND: An adaptive-network-based fuzzy inference system (ANFIS) was compared with an artificial neural network (ANN) in terms of accuracy in predicting the combined effects of temperature (10.5 to 24.5°C), pH level (5.5 to 7.5), sodium chloride level (0.25% to 6.25%) and sodium nitrite level (0 to 200 ppm) on the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. METHODS: The ANFIS and ANN models were compared in terms of six statistical indices calculated by comparing their prediction results with actual data: mean absolute percentage error (MAPE), root mean square error (RMSE), standard error of prediction percentage (SEP), bias factor (Bf), accuracy factor (Af), and absolute fraction of variance (R2). Graphical plots were also used for model comparison. CONCLUSIONS: The learning-based systems obtained encouraging prediction results. Sensitivity analyses of the four environmental factors showed that temperature and, to a lesser extent, NaCl had the most influence on accuracy in predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. The observed effectiveness of ANFIS for modeling microbial kinetic parameters confirms its potential use as a supplemental tool in predictive microbiology. Comparisons between growth rates predicted by ANFIS and actual experimental data also confirmed the high accuracy of the Gaussian membership function in ANFIS. Comparisons of the six statistical indices under both aerobic and anaerobic conditions also showed that the ANFIS model was better than all ANN models in predicting the four kinetic parameters. Therefore, the ANFIS model is a valuable tool for quickly predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions.
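
    Two of the indices above, the bias factor (Bf) and accuracy factor (Af), are ratio statistics from predictive microbiology. A minimal sketch assuming the usual log10-ratio definitions (Ross-style), with hypothetical growth rates:

```python
import numpy as np

def bias_accuracy_factors(observed, predicted):
    """Bias factor Bf = 10^mean(log10(pred/obs)) and accuracy factor
    Af = 10^mean(|log10(pred/obs)|), as used in predictive microbiology."""
    r = np.log10(np.asarray(predicted, float) / np.asarray(observed, float))
    return 10 ** r.mean(), 10 ** np.abs(r).mean()

obs = np.array([0.21, 0.35, 0.18, 0.44])    # hypothetical growth rates (1/h)
pred = np.array([0.23, 0.33, 0.20, 0.41])
bf, af = bias_accuracy_factors(obs, pred)
print(f"Bf = {bf:.3f}, Af = {af:.3f}")
```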

  12. Relative and Absolute Interrater Reliabilities of a Hand-Held Myotonometer to Quantify Mechanical Muscle Properties in Patients with Acute Stroke in an Inpatient Ward

    Directory of Open Access Journals (Sweden)

    Wai Leung Ambrose Lo

    2017-01-01

    Full Text Available Introduction. The reliability of using MyotonPRO to quantify muscles mechanical properties in a ward setting for the acute stroke population remains unknown. Aims. To investigate the within-session relative and absolute interrater reliability of MyotonPRO. Methods. Mechanical properties of biceps brachii, brachioradialis, rectus femoris, and tibialis anterior were recorded at bedside. Participants were within 1 month of the first occurrence of stroke. Relative reliability was assessed by intraclass correlation coefficient (ICC. Absolute reliability was assessed by standard error of measurement (SEM, SEM%, smallest real difference (SRD, SRD%, and the Bland-Altman 95% limits of agreement. Results. ICCs of all studied muscles ranged between 0.63 and 0.97. The SEM of all muscles ranged within 0.30–0.88 Hz for tone, 0.07–0.19 for decrement, 6.42–20.20 N/m for stiffness, and 0.04–0.07 for creep. The SRD of all muscles ranged within 0.70–2.05 Hz for tone, 0.16–0.45 for decrement, 14.98–47.15 N/m for stiffness, and 0.09–0.17 for creep. Conclusions. MyotonPRO demonstrated acceptable relative and absolute reliability in a ward setting for patients with acute stroke. However, results must be interpreted with caution, due to the varying level of consistency between different muscles, as well as between different parameters within a muscle.

  13. Recursive wind speed forecasting based on Hammerstein Auto-Regressive model

    International Nuclear Information System (INIS)

    Ait Maatallah, Othman; Achuthan, Ajit; Janoyan, Kerop; Marzocca, Pier

    2015-01-01

    Highlights: • Developed a new recursive WSF model for a 1–24 h horizon based on the Hammerstein model. • The nonlinear HAR model successfully captured the chaotic dynamics of wind speed time series. • Recursive WSF intrinsic error accumulation corrected by applying rotation. • Model verified for real wind speed data from two sites with different characteristics. • The HAR model outperformed both ARIMA and ANN models in terms of prediction accuracy. - Abstract: A new Wind Speed Forecasting (WSF) model, suitable for a short-term 1–24 h forecast horizon, is developed by adapting the Hammerstein model to an autoregressive approach. The model is applied to real data collected over a period of three years (2004–2006) from two different sites. The performance of the HAR model is evaluated by comparing its predictions with the classical Autoregressive Integrated Moving Average (ARIMA) model and a multi-layer perceptron Artificial Neural Network (ANN). Results show that the HAR model outperforms both the ARIMA model and the ANN model in terms of root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). When compared to the conventional models, the new HAR model can better capture various wind speed characteristics, including an asymmetric (non-Gaussian) wind speed distribution, a non-stationary time series profile, and chaotic dynamics. The new model is beneficial for various applications in the renewable energy area, particularly for power scheduling
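
    A Hammerstein structure chains a static nonlinearity into linear dynamics. The sketch below fits a heavily simplified Hammerstein-AR one-step predictor by least squares on a synthetic trace; the order, polynomial degree, and data are hypothetical, and the paper's recursive correction by rotation is not reproduced.

```python
import numpy as np

def fit_hammerstein_ar(y, order=4, degree=2):
    """Least-squares fit of a simplified Hammerstein-AR predictor: a static
    polynomial nonlinearity on lagged samples followed by a linear combiner."""
    rows, targets = [], []
    for t in range(order, len(y)):
        lags = y[t - order:t]
        rows.append(np.concatenate([lags ** d for d in range(1, degree + 1)]))
        targets.append(y[t])
    Phi, tgt = np.asarray(rows), np.asarray(targets)
    theta, *_ = np.linalg.lstsq(Phi, tgt, rcond=None)
    return theta, Phi, tgt

rng = np.random.default_rng(0)
wind = 8.0 + 2.0 * np.sin(np.arange(500) / 20.0) + rng.normal(0.0, 0.3, 500)
theta, Phi, tgt = fit_hammerstein_ar(wind)
mape = 100.0 * np.mean(np.abs((tgt - Phi @ theta) / tgt))
print(f"in-sample MAPE = {mape:.2f}%")
```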

  14. Human error and crew resource management failures in Naval aviation mishaps: a review of U.S. Naval Safety Center data, 1990-96.

    Science.gov (United States)

    Wiegmann, D A; Shappell, S A

    1999-12-01

    The present study examined the role of human error and crew-resource management (CRM) failures in U.S. Naval aviation mishaps. All tactical jet (TACAIR) and rotary wing Class A flight mishaps between fiscal years 1990-1996 were reviewed. Results indicated that over 75% of both TACAIR and rotary wing mishaps were attributable, at least in part, to some form of human error of which 70% were associated with aircrew human factors. Of these aircrew-related mishaps, approximately 56% involved at least one CRM failure. These percentages are very similar to those observed prior to the implementation of aircrew coordination training (ACT) in the fleet, suggesting that the initial benefits of the program have not persisted and that CRM failures continue to plague Naval aviation. Closer examination of these CRM-related mishaps suggest that the type of flight operations (preflight, routine, emergency) do play a role in the etiology of CRM failures. A larger percentage of CRM failures occurred during non-routine or extremis flight situations when TACAIR mishaps were considered. In contrast, a larger percentage of rotary wing CRM mishaps involved failures that occurred during routine flight operations. These findings illustrate the complex etiology of CRM failures within Naval aviation and support the need for ACT programs tailored to the unique problems faced by specific communities in the fleet.

  15. Random and Systematic Errors Share in Total Error of Probes for CNC Machine Tools

    Directory of Open Access Journals (Sweden)

    Adam Wozniak

    2018-03-01

    Full Text Available Probes for CNC machine tools, like every measurement device, have accuracy limited by random errors and by systematic errors. Random errors of these probes are described by a parameter called unidirectional repeatability. Manufacturers of probes for CNC machine tools usually specify only this parameter, while parameters describing systematic errors of the probes, such as pre-travel variation or triggering radius variation, are used rarely. Systematic errors of the probes, linked to the differences in pre-travel values for different measurement directions, can be corrected or compensated, but this is not a widely used procedure. In this paper, the shares of systematic errors and random errors in the total error of exemplary probes are determined. In the case of simple kinematic probes, systematic errors are much greater than random errors, so compensation would significantly reduce the probing error. Moreover, the results show that for kinematic probes the commonly specified unidirectional repeatability is significantly better than the 2D performance. However, in the case of the more precise strain-gauge probe, systematic errors are of the same order as random errors, which means that error correction or compensation would not yield any significant benefits in this case.

  16. Pantomime-grasping: Advance knowledge of haptic feedback availability supports an absolute visuo-haptic calibration

    Directory of Open Access Journals (Sweden)

    Shirin eDavarpanah Jazi

    2016-05-01

    Full Text Available An emerging issue in the movement neurosciences is whether haptic feedback influences the nature of the information supporting a simulated grasping response (i.e., pantomime-grasping). In particular, recent work by our group contrasted pantomime-grasping responses performed with (i.e., PH+ trials) and without (i.e., PH- trials) terminal haptic feedback in separate blocks of trials. Results showed that PH- trials were mediated via relative visual information. In contrast, PH+ trials showed evidence of an absolute visuo-haptic calibration – a finding attributed to an error signal derived from a comparison between expected and actual haptic feedback (i.e., an internal forward model). The present study examined whether advance knowledge of haptic feedback availability influences the aforementioned calibration process. To that end, PH- and PH+ trials were completed in separate blocks (i.e., the feedback schedule used in our group's previous study) and in a block wherein PH- and PH+ trials were randomly interleaved on a trial-by-trial basis (i.e., a random feedback schedule). In other words, the random feedback schedule precluded participants from predicting whether haptic feedback would be available at the movement goal location. We computed just-noticeable-difference (JND) values to determine whether responses adhered to, or violated, the relative psychophysical principles of Weber's law. Results for the blocked feedback schedule replicated our group's previous work, whereas in the random feedback schedule PH- and PH+ trials were supported via relative visual information. Accordingly, we propose that a priori knowledge of haptic feedback is necessary to support an absolute visuo-haptic calibration. Moreover, our results demonstrate that the presence and expectancy of haptic feedback is an important consideration in contrasting the behavioral and neural properties of natural and simulated (i.e., pantomime) grasping.

  17. FFT swept filtering: a bias-free method for processing fringe signals in absolute gravimeters

    Science.gov (United States)

    Křen, Petr; Pálinkáš, Vojtech; Mašika, Pavel; Val'ko, Miloš

    2018-05-01

    Absolute gravimeters, based on laser interferometry, are widely used for many applications in geoscience and metrology. Although the currently most accurate FG5 and FG5X gravimeters declare standard uncertainties at the level of 2-3 μGal, their inherent systematic errors affect the gravity reference determined by international key comparisons, which are based predominantly on the use of FG5-type instruments. The measurement results for FG5-215 and FG5X-251 clearly showed that the measured g-values depend on the size of the fringe signal and that this effect might be approximated by a linear regression with a slope of up to 0.030 μGal/mV. However, these empirical results do not enable one to identify the source of the effect or to determine a reasonable reference fringe level for correcting g-values in an absolute sense. Therefore, both gravimeters were equipped with new measuring systems (according to Křen et al. in Metrologia 53:27-40, 2016, https://doi.org/10.1088/0026-1394/53/1/27, as applied for the FG5), running in parallel with the original systems. The new systems use an analogue-to-digital converter HS5 to digitize the fringe signal and a new method of fringe signal analysis based on FFT swept bandpass filtering. We demonstrate that the source of the fringe size effect is connected to a distortion of the fringe signal due to the electronic components used in the FG5(X) gravimeters. To obtain a bias-free g-value, the FFT swept method should be applied for the determination of zero-crossings. A comparison of g-values obtained from the new and the original systems clearly shows that the original system might be biased by approximately 3-5 μGal due to improper processing of the distorted fringe signal.

  18. Absolute surface reconstruction by slope metrology and photogrammetry

    Science.gov (United States)

    Dong, Yue

    Developing the manufacture of aspheric and freeform optical elements requires an advanced metrology method which is capable of inspecting these elements with arbitrary freeform surfaces. In this dissertation, a new surface measurement scheme is investigated for such a purpose, which is to measure the absolute surface shape of an object under test through its surface slope information obtained by photogrammetric measurement. A laser beam propagating toward the object reflects on its surface while the vectors of the incident and reflected beams are evaluated from the four spots they leave on the two parallel transparent windows in front of the object. The spots' spatial coordinates are determined by photogrammetry. With the knowledge of the incident and reflected beam vectors, the local slope information of the object surface is obtained through vector calculus and finally yields the absolute object surface profile by a reconstruction algorithm. An experimental setup is designed and the proposed measuring principle is experimentally demonstrated by measuring the absolute surface shape of a spherical mirror. The measurement uncertainty is analyzed, and efforts for improvement are made accordingly. In particular, structured windows are designed and fabricated to generate uniform scattering spots left by the transmitted laser beams. Calibration of the fringe reflection instrument, another typical surface slope measurement method, is also reported in the dissertation. Finally, a method for uncertainty analysis of a photogrammetry measurement system by optical simulation is investigated.

  19. A proposal to measure absolute environmental sustainability in lifecycle assessment

    DEFF Research Database (Denmark)

    Bjørn, Anders; Margni, Manuele; Roy, Pierre-Olivier

    2016-01-01

    sustainable are therefore increasingly important. Such absolute indicators exist, but suffer from shortcomings such as incomplete coverage of environmental issues, varying data quality and varying or insufficient spatial resolution. The purpose of this article is to demonstrate that life cycle assessment (LCA...... in supporting decisions aimed at simultaneously reducing environmental impacts efficiently and maintaining or achieving environmental sustainability. We have demonstrated that LCA indicators can be modified from being relative to being absolute indicators of environmental sustainability. Further research should...

  20. A note on unique solvability of the absolute value equation

    Directory of Open Access Journals (Sweden)

    Taher Lotfi

    2014-05-01

    Full Text Available It is proved that applying sufficient regularity conditions to the interval matrix $[A-|B|,A+|B|]$, we can create a new unique solvability condition for the absolute value equation $Ax+B|x|=b$, since regularity of interval matrices implies unique solvability of their corresponding absolute value equation. This condition is formulated in terms of positive definiteness of a certain point matrix. Special case $B=-I$ is verified too as an application.
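
    For context, one standard way to solve such an equation numerically (not part of this paper, which concerns solvability conditions) is the Picard iteration below; the matrices are hypothetical and chosen so that the sufficient convergence condition holds.

```python
import numpy as np

def solve_ave(A, B, b, iters=100, tol=1e-10):
    """Picard iteration x_{k+1} = A^{-1}(b - B|x_k|) for Ax + B|x| = b.
    A sufficient condition for convergence is ||A^{-1}B|| < 1."""
    Ainv = np.linalg.inv(A)
    x = Ainv @ b
    for _ in range(iters):
        x_new = Ainv @ (b - B @ np.abs(x))
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

A = np.array([[4.0, 1.0], [0.0, 3.0]])      # hypothetical example
B = np.array([[0.5, 0.0], [0.0, 0.5]])
b = np.array([1.0, -2.0])
x = solve_ave(A, B, b)
print(x, A @ x + B @ np.abs(x) - b)          # residual should be ~0
```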