WorldWideScience

Sample records for absolute percentage error

  1. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    Science.gov (United States)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

    Prediction using a forecasting method is one of the most important activities for an organization. The selection of an appropriate forecasting method matters, but the percentage error of a method matters even more if decision makers are to act on the forecasts. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to quantify the error of the least squares method yielded a percentage error of 9.77%, and it was concluded that the least squares method is suitable for time series and trend data.
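
    The two error measures used in this record have simple closed forms: MAD is the mean of the absolute forecast errors (in the units of the data) and MAPE expresses that mean as a percentage of the observed values. A minimal sketch, with invented sample data that are not taken from the paper:

    ```python
    import numpy as np

    def mean_absolute_deviation(actual, forecast):
        """MAD: average magnitude of the forecast errors, in the units of the data."""
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        return np.mean(np.abs(actual - forecast))

    def mean_absolute_percentage_error(actual, forecast):
        """MAPE: average absolute error expressed as a percentage of the actual values.
        Undefined when any actual value is zero."""
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        return 100.0 * np.mean(np.abs((actual - forecast) / actual))

    # Illustrative data (not from the paper): demand vs. a least-squares trend forecast.
    actual   = [112, 118, 132, 129, 121, 135]
    forecast = [110, 120, 128, 131, 125, 130]
    print(f"MAD  = {mean_absolute_deviation(actual, forecast):.2f}")
    print(f"MAPE = {mean_absolute_percentage_error(actual, forecast):.2f}%")
    ```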

  2. Alternatives to accuracy and bias metrics based on percentage errors for radiation belt modeling applications

    Energy Technology Data Exchange (ETDEWEB)

    Morley, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-01

    This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
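
    The ratio-based metrics recommended in the report are easy to compute from paired observations and predictions. The sketch below follows the usual definitions (log accuracy ratio Q = ln(predicted/observed); bias as the median of Q; median symmetric accuracy as 100·(exp(median|Q|) - 1)); the flux-like values are invented for illustration.

    ```python
    import numpy as np

    def accuracy_ratio_metrics(observed, predicted):
        """Bias and accuracy metrics built on the accuracy ratio Q = predicted/observed.
        Both require strictly positive observed and predicted values."""
        obs = np.asarray(observed, float)
        pred = np.asarray(predicted, float)
        log_q = np.log(pred / obs)
        median_log_accuracy_ratio = np.median(log_q)                       # ~0 means unbiased
        median_symmetric_accuracy = 100.0 * (np.exp(np.median(np.abs(log_q))) - 1.0)
        return median_log_accuracy_ratio, median_symmetric_accuracy

    # Hypothetical flux-like predictions spanning several orders of magnitude.
    obs  = np.array([1e2, 3e3, 5e4, 2e5, 8e5])
    pred = np.array([1.4e2, 2.5e3, 6e4, 1.5e5, 1.1e6])
    bias, accuracy = accuracy_ratio_metrics(obs, pred)
    print(f"median log accuracy ratio = {bias:.3f}")
    print(f"median symmetric accuracy = {accuracy:.1f}%")
    ```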

  3. Sub-nanometer periodic nonlinearity error in absolute distance interferometers

    Science.gov (United States)

    Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang

    2015-05-01

    Periodic nonlinearity, which can introduce errors at the nanometer scale, has become a main factor limiting the accuracy of absolute distance measurement. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed, which removes frequency and/or polarization mixing and greatly relaxes the requirement on the polarization of the laser source. By combining a retro-reflector and an angle prism, the reference and measuring beams are spatially separated so that their optical paths do not overlap. The main causes of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and beam leakage, are thereby eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.

  4. Optimal quantum error correcting codes from absolutely maximally entangled states

    Science.gov (United States)

    Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio

    2018-02-01

    Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT-correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed form expressions for AME states of n parties with local dimension \

  5. Assessing energy forecasting inaccuracy by simultaneously considering temporal and absolute errors

    International Nuclear Information System (INIS)

    Frías-Paredes, Laura; Mallor, Fermín; Gastón-Romeo, Martín; León, Teresa

    2017-01-01

    Highlights: • A new method to match time series is defined to assess energy forecasting accuracy. • This method relies on a new family of step patterns that optimizes the MAE. • A new definition of the Temporal Distortion Index between two series is provided. • A parametric extension controls both the temporal distortion index and the MAE. • Pareto optimal transformations of the forecast series are obtained for both indexes. - Abstract: Recent years have seen a growing trend in wind and solar energy generation globally, and it is expected that an important percentage of total energy production will come from these energy sources. However, they present inherent variability that implies fluctuations in energy generation that are difficult to forecast. Thus, forecasting errors play a considerable role in the impacts and costs of renewable energy integration, management, and commercialization. This study presents an important advance in the task of analyzing prediction models, in particular in the timing component of prediction error, improving previous pioneering results. A new method to match time series is defined in order to assess energy forecasting accuracy. This method relies on a new family of step patterns, an essential component of the algorithm to evaluate the temporal distortion index (TDI). This family minimizes the mean absolute error (MAE) of the transformation with respect to the reference series (the real energy series) and also allows detailed control of the temporal distortion entailed in the prediction series. The simultaneous consideration of temporal and absolute errors allows the use of Pareto frontiers as characteristic error curves. Real examples of wind energy forecasts are used to illustrate the results.
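
    Why timing errors deserve separate treatment can be seen with a toy example: a forecast that reproduces the real series exactly but two steps late still shows a sizeable MAE, even though its only defect is temporal. The sketch below is illustrative only; it does not implement the paper's step-pattern alignment or the TDI.

    ```python
    import numpy as np

    # A smooth "energy" signal and a forecast that is identical but delayed by two steps.
    t = np.arange(100)
    actual = np.sin(2 * np.pi * t / 24.0) + 1.0      # synthetic daily-cycle series
    forecast = np.roll(actual, 2)                    # same shape, shifted in time

    mae_shifted = np.mean(np.abs(actual - forecast))
    mae_aligned = np.mean(np.abs(actual - actual))   # zero once the timing error is removed
    print(f"MAE with a 2-step timing error : {mae_shifted:.3f}")
    print(f"MAE after aligning the series  : {mae_aligned:.3f}")
    ```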

  6. Study of errors in absolute flux density measurements of Cassiopeia A

    International Nuclear Information System (INIS)

    Kanda, M.

    1975-10-01

    An error analysis for absolute flux density measurements of Cassiopeia A is discussed. The lower-bound quadrature-accumulation error for state-of-the-art measurements of the absolute flux density of Cas A around 7 GHz is estimated to be 1.71% for 3 sigma limits. The corresponding practicable error for a careful but not state-of-the-art measurement is estimated to be 4.46% for 3 sigma limits.
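
    The "quadrature accumulation" of independent error components referred to here is the usual root-sum-square combination; as a sketch, with symbols chosen for illustration:

    ```latex
    % Root-sum-square (quadrature) accumulation of k independent error components
    \sigma_{\mathrm{total}} = \sqrt{\sum_{i=1}^{k} \sigma_i^{2}},
    \qquad \text{3}\sigma\ \text{limit} = 3\,\sigma_{\mathrm{total}}
    ```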

  7. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    Science.gov (United States)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: when the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so it is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error at each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found not only that the two metrics measure the characteristics of the probability distributions of modeling errors differently, but also that the effects of these characteristics on the overall expected error are different. Most notably, under SQ error all of bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise, spatially explicit comparison of each error component showed that SQ error overstates all error components in comparison to ABS error, especially the variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
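
    For reference, the well-known bias-variance decomposition of expected squared error that the study compares against (the corresponding ABS-error decomposition derived in the paper is not reproduced here):

    ```latex
    % Bias-variance decomposition of the expected squared error at a point,
    % with y = f + \varepsilon, noise variance \sigma^2, and predictor \hat{f}
    \mathbb{E}\!\left[(y - \hat{f})^2\right]
      = \underbrace{\left(\mathbb{E}[\hat{f}] - f\right)^{2}}_{\text{bias}^2}
      + \underbrace{\mathbb{E}\!\left[\left(\hat{f} - \mathbb{E}[\hat{f}]\right)^{2}\right]}_{\text{variance}}
      + \underbrace{\sigma^{2}}_{\text{noise}}
    ```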

  8. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    Science.gov (United States)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
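
    Both advocated statistics can be read directly off the empirical cumulative distribution of unsigned errors. A minimal sketch; the error values, units, and threshold are invented and not taken from the paper:

    ```python
    import numpy as np

    def ecdf_statistics(errors, threshold, confidence=0.95):
        """Two ECDF-based summaries of unsigned (absolute) errors:
        (1) the empirical probability that a new calculation has |error| below `threshold`;
        (2) the error amplitude not exceeded with probability `confidence`."""
        abs_err = np.abs(np.asarray(errors, float))
        p_below = np.mean(abs_err < threshold)
        q_conf = np.quantile(abs_err, confidence)
        return p_below, q_conf

    # Hypothetical benchmark errors (kcal/mol); values are invented for illustration.
    errors = np.array([-0.8, 1.2, 0.3, -2.5, 0.9, 4.1, -0.2, 1.7, -1.1, 0.6])
    p, q95 = ecdf_statistics(errors, threshold=1.0)
    print(f"P(|error| < 1.0) = {p:.2f}")
    print(f"95th percentile of |error| = {q95:.2f}")
    ```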

  9. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    Science.gov (United States)

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…

  10. Corrected Lymphocyte Percentages Reduce the Differences in Absolute CD4+ T Lymphocyte Counts between Dual-Platform and Single-Platform Flow Cytometric Approaches.

    Science.gov (United States)

    Noulsri, Egarit; Abudaya, Dinar; Lerdwana, Surada; Pattanapanyasat, Kovit

    2018-03-13

    To determine whether a corrected lymphocyte percentage could reduce bias in the absolute cluster of differentiation (CD)4+ T lymphocyte counts obtained via dual-platform (DP) vs standard single-platform (SP) flow cytometry. The correction factor (CF) for the lymphocyte percentages was calculated at 6 laboratories. The absolute CD4+ T lymphocyte counts in 300 blood specimens from patients infected with human immunodeficiency virus (HIV) were determined using the DP and SP methods. Applying the CFs revealed that 4 sites showed a decrease in the mean bias of absolute CD4+ T lymphocyte counts determined via DP vs standard SP (-109 vs -84 cells/μL, -80 vs -58 cells/μL, -52 vs -45 cells/μL, and -32 vs 1 cells/μL). However, 2 participating laboratories showed an increase in the mean bias (-42 vs -49 cells/μL and -20 vs -69 cells/μL). Use of the corrected lymphocyte percentage shows potential for decreasing the difference in CD4 counts between the DP and the standard SP method.
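
    A minimal sketch of the dual-platform arithmetic and of where a lymphocyte-percentage correction factor would enter; the way the site-specific CFs were derived is not reproduced here, and all numbers below are invented:

    ```python
    def dual_platform_cd4(wbc_per_ul, lymph_pct, cd4_pct_of_lymph, lymph_cf=1.0):
        """Dual-platform absolute CD4+ count (cells/uL):
        WBC x lymphocyte fraction x CD4+ fraction of lymphocytes.
        `lymph_cf` is a site-specific correction factor applied to the
        haematology-analyser lymphocyte percentage (1.0 = no correction)."""
        corrected_lymph_pct = lymph_pct * lymph_cf
        return wbc_per_ul * (corrected_lymph_pct / 100.0) * (cd4_pct_of_lymph / 100.0)

    # Invented example values.
    print(dual_platform_cd4(wbc_per_ul=5600, lymph_pct=30.0, cd4_pct_of_lymph=25.0))                  # uncorrected
    print(dual_platform_cd4(wbc_per_ul=5600, lymph_pct=30.0, cd4_pct_of_lymph=25.0, lymph_cf=1.08))   # with CF
    ```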

  11. Error Budget for a Calibration Demonstration System for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    Science.gov (United States)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2013-01-01

    A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach on-orbit absolute accuracy sufficient for climate change observations to survive data gaps exist at NIST in the laboratory, but it still needs to be demonstrated that these advances can be transferred successfully to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference, and methods for laboratory-based absolute calibration suitable for climate-quality data collections, are given. Key components in the error budget are geometry differences between the solar and Earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise. Methods for demonstrating this error budget are also presented.

  12. Errors of absolute methods of reactor neutron activation analysis caused by non-1/E epithermal neutron spectra

    International Nuclear Information System (INIS)

    Erdtmann, G.

    1993-08-01

    A sufficiently accurate characterization of the neutron flux and spectrum, i.e. the determination of the thermal flux, the flux ratio and the epithermal flux spectrum shape factor α, is a prerequisite for all types of absolute and monostandard methods of reactor neutron activation analysis. A convenient method for these measurements is the bare triple monitor method. However, the results of this method are very imprecise, because there are high error propagation factors from the counting errors of the monitor activities. Procedures are described to calculate the errors of the flux parameters, the α-dependent cross-section ratios, and the analytical results from the errors of the activities of the monitor isotopes. They are included in FORTRAN programs which also allow a graphical representation of the results. A large number of examples were calculated for ten different irradiation facilities in four reactors and for 28 elements. Plots of the results are presented and discussed. (orig./HP)
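
    The "error propagation factors" mentioned here follow from the standard first-order propagation-of-uncertainty rule for a derived quantity; shown as a general sketch (the specific expressions for the flux parameters of the bare triple monitor method are not reproduced):

    ```latex
    % First-order propagation of independent uncertainties through f(x_1, ..., x_n)
    \sigma_f^{2} \approx \sum_{i=1}^{n}
      \left( \frac{\partial f}{\partial x_i} \right)^{\!2} \sigma_{x_i}^{2}
    ```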

  13. Mapping the absolute magnetic field and evaluating the quadratic Zeeman-effect-induced systematic error in an atom interferometer gravimeter

    Science.gov (United States)

    Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim

    2017-09-01

    Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10⁻⁸ m/s² ≈ 10⁻⁹ g). This paper reports on the experimental investigation of Raman-spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to the quadratic Zeeman effect. We discuss the dependence of the magnetic field measurement uncertainty on the Raman pulse duration and frequency step size, present the vector and tensor light-shift-induced magnetic field measurement offsets, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and for reducing the quadratic Zeeman-effect-induced systematic error in Raman-transition-based precision measurements, such as atom interferometer gravimeters.

  14. The AFGL (Air Force Geophysics Laboratory) Absolute Gravity System’s Error Budget Revisted.

    Science.gov (United States)

    1985-05-08

    … also be induced by equipment not associated with the system. A systematic bias of 68 μGal was observed by the Istituto di Metrologia "G. Colonnetti" (IMGC), Torino, Italy. The record also references JILA (Joint Institute for Laboratory Astrophysics, Univ. of Colorado, Boulder) and a table of absolute gravity values. Measurements were made with three Model D and three Model G LaCoste-Romberg gravity meters; these instruments were operated by the following agencies …

  15. Measurement error correction in the least absolute shrinkage and selection operator model when validation data are available.

    Science.gov (United States)

    Vasquez, Monica M; Hu, Chengcheng; Roe, Denise J; Halonen, Marilyn; Guerra, Stefano

    2017-01-01

    Measurement of serum biomarkers by multiplex assays may be more variable than measurement by single biomarker assays. Measurement error in these data may bias parameter estimates in regression analysis, which could mask true associations of serum biomarkers with an outcome. The Least Absolute Shrinkage and Selection Operator (LASSO) can be used for variable selection in these high-dimensional data. Furthermore, when the distribution of measurement error is assumed to be known or is estimated with replication data, a simple measurement error correction method can be applied to the LASSO method. However, in practice the distribution of the measurement error is unknown and is expensive to estimate through replication, both in monetary cost and in the need for a greater amount of sample, which is often limited in quantity. We adapt an existing bias correction approach by estimating the measurement error using validation data in which a subset of serum biomarkers are re-measured on a random subset of the study sample. We evaluate this method using simulated data and data from the Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD). We show that the bias in parameter estimation is reduced and variable selection is improved.
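
    The general idea, estimating the measurement-error variance from the re-measured validation subset and adjusting the predictors before running the LASSO, can be sketched with a simple regression-calibration-style correction on simulated data. This is a hedged illustration, not the bias-correction approach of the paper; all variable names and quantities are invented.

    ```python
    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(0)

    # Simulated data standing in for multiplex biomarker measurements.
    n, p = 300, 20
    X_true = rng.normal(size=(n, p))                     # unobserved true biomarker levels
    beta = np.zeros(p); beta[:3] = [1.0, -0.8, 0.5]      # only 3 biomarkers matter
    y = X_true @ beta + rng.normal(scale=1.0, size=n)
    sigma_u = 0.6                                        # measurement-error SD (unknown in practice)
    X_obs = X_true + rng.normal(scale=sigma_u, size=(n, p))

    # Validation data: a random subset is re-measured once.
    val_idx = rng.choice(n, size=60, replace=False)
    X_reval = X_true[val_idx] + rng.normal(scale=sigma_u, size=(len(val_idx), p))

    # Var(first - second measurement) = 2 * Var(error), assuming independent errors.
    var_u_hat = 0.5 * np.var(X_obs[val_idx] - X_reval, axis=0, ddof=1)

    # Regression-calibration-style shrinkage of each observed biomarker toward its mean,
    # followed by an ordinary cross-validated LASSO on the corrected predictors.
    var_x_obs = np.var(X_obs, axis=0, ddof=1)
    reliability = np.clip((var_x_obs - var_u_hat) / var_x_obs, 0.0, 1.0)
    X_corrected = X_obs.mean(axis=0) + (X_obs - X_obs.mean(axis=0)) * reliability

    naive = LassoCV(cv=5).fit(X_obs, y)
    corrected = LassoCV(cv=5).fit(X_corrected, y)
    print("naive coefficients    :", np.round(naive.coef_[:5], 2))
    print("corrected coefficients:", np.round(corrected.coef_[:5], 2))
    ```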

  16. Left-hemisphere activation is associated with enhanced vocal pitch error detection in musicians with absolute pitch

    Science.gov (United States)

    Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A.; Larson, Charles R.

    2014-01-01

    The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test the hypothesis that left-hemisphere neural mechanisms enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right hemisphere for both AP and RP musicians compared with the NM group. However, the left-hemisphere P2 component activation was greater in AP and RP musicians compared with NMs, and also in AP compared with RP musicians. The NM group was slower in generating compensatory vocal reactions to feedback pitch perturbation compared with musicians, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left hemisphere is more active during the processing of auditory feedback for vocal motor control and seems to involve specialized mechanisms that facilitate pitch processing in AP compared with RP musicians. These findings indicate that the left-hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing. PMID:24355545

  17. Errors and limits in the determination of plasma electron density by measuring the absolute values of the emitted continuum radiation intensity

    International Nuclear Information System (INIS)

    Bilbao, L.; Bruzzone, H.; Grondona, D.

    1994-01-01

    The reliable determination of a plasma electron structure requires a good knowledge of the errors affecting the employed technique. A technique based on the measurements of the absolute light intensity emitted by travelling plasma structures in plasma focus devices has been used, but it can be easily modified to other geometries and even to stationary plasma structures with time-varying plasma densities. The purpose of this work is to discuss in some detail the errors and limits of this technique. Three separate errors are shown: the minimum size of the density structure that can be resolved, an overall error in the measurements themselves, and an uncertainty in the shape of the density profile. (author)

  19. Design, performance, and calculated error of a Faraday cup for absolute beam current measurements of 600-MeV protons

    International Nuclear Information System (INIS)

    Beck, S.M.

    1975-04-01

    A mobile self-contained Faraday cup system for beam current measurements of nominal 600 MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated to be the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 +- 0.95 eV for nominal 600 MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV. (auth)

  20. Calculating Error Percentage in Using Water Phantom Instead of Soft Tissue Concerning 103Pd Brachytherapy Source Distribution via Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    OL Ahmadi

    2015-12-01

    Introduction: 103Pd is a low-energy source used in brachytherapy. According to the standards of the American Association of Physicists in Medicine, determination of the dosimetric parameters of brachytherapy sources before clinical application is considered very important. Therefore, the present study aimed to compare the dosimetric parameters of the source in a water phantom and in soft tissue. Methods: Following the TG-43U1 protocol, the dosimetric parameters around the 103Pd source were compared for a water phantom with a density of 0.998 g/cm3 and soft tissue with a density of 1.04 g/cm3, on the longitudinal and transverse axes, using the MCNP4C code, and the relative differences between the two media were calculated. Results: The simulation results indicated that the radial dose function and the anisotropy function in the water phantom and in soft tissue agreed well up to a distance of 1.5 cm. With increasing distance the difference grew, reaching 4% at 6 cm from the source. Conclusions: The soft tissue results differed from the water phantom results by up to 4% at a distance of 6 cm from the source. Therefore, water phantom results, with a maximum error of 4%, can be used in practical applications in place of soft tissue; alternatively, the differences obtained at each distance for the soft tissue phantom could be applied as corrections.

  1. Absolute advantage

    NARCIS (Netherlands)

    J.G.M. van Marrewijk (Charles)

    2008-01-01

    A country is said to have an absolute advantage over another country in the production of a good or service if it can produce that good or service using fewer real resources. Equivalently, using the same inputs, the country can produce more output. The concept of absolute advantage can

  2. Oligomeric models for estimation of polydimethylsiloxane-water partition ratios with COSMO-RS theory: impact of the combinatorial term on absolute error.

    Science.gov (United States)

    Parnis, J Mark; Mackay, Donald

    2017-03-22

    A series of 12 oligomeric models for polydimethylsiloxane (PDMS) were evaluated for their effectiveness in estimating the PDMS-water partition ratio, K PDMS-w . Models ranging in size and complexity from the -Si(CH 3 ) 2 -O- model previously published by Goss in 2011 to octadeca-methyloctasiloxane (CH 3 -(Si(CH 3 ) 2 -O-) 8 CH 3 ) were assessed based on their RMS error with 253 experimental measurements of log K PDMS-w from six published works. The lowest RMS error for log K PDMS-w (0.40 in log K) was obtained with the cyclic oligomer, decamethyl-cyclo-penta-siloxane (D5), (-Si(CH 3 ) 2 -O-) 5 , with the mixing-entropy associated combinatorial term included in the chemical potential calculation. The presence or absence of terminal methyl groups on linear oligomer models is shown to have significant impact only for oligomers containing 1 or 2 -Si(CH 3 ) 2 -O- units. Removal of the combinatorial term resulted in a significant increase in the RMS error for most models, with the smallest increase associated with the largest oligomer studied. The importance of inclusion of the combinatorial term in the chemical potential for liquid oligomer models is discussed.

  3. Absolute Summ

    Science.gov (United States)

    Phillips, Alfred, Jr.

    Summ means the entirety of the multiverse. It seems clear, from the inflation theories of A. Guth and others, that the creation of many universes is plausible. We argue that Absolute cosmological ideas, not unlike those of I. Newton, may be consistent with dynamic multiverse creations. As suggested in W. Heisenberg's uncertainty principle, and with the Anthropic Principle defended by S. Hawking, et al., human consciousness, buttressed by findings of neuroscience, may have to be considered in our models. Predictability, as A. Einstein realized with Invariants and General Relativity, may be required for new ideas to be part of physics. We present here a two postulate model geared to an Absolute Summ. The seedbed of this work is part of Akhnaton's philosophy (see S. Freud, Moses and Monotheism). Most important, however, is that the structure of human consciousness, manifest in Kenya's Rift Valley 200,000 years ago as Homo sapiens, who were the culmination of the six million year co-creation process of Hominins and Nature in Africa, allows us to do the physics that we do.

  4. PREDICTED PERCENTAGE DISSATISFIED (PPD) MODEL ...

    African Journals Online (AJOL)

    HOD

    … their low power requirements, are relatively cheap and are environmentally friendly. … The performance of direct evaporative coolers is a…

  5. Percentage Retail Mark-Ups

    OpenAIRE

    Thomas von Ungern-Sternberg

    1999-01-01

    A common assumption in the literature on the double marginalization problem is that the retailer can set his mark-up only in the second stage of the game after the producer has moved. To the extent that the sequence of moves is designed to reflect the relative bargaining power of the two parties it is just as plausible to let the retailer move first. Furthermore, retailers frequently calculate their selling prices by adding a percentage mark-up to their wholesale prices. This allows a retaile...

  6. Absolute nuclear material assay

    Science.gov (United States)

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2010-07-13

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  7. Danish Towns during Absolutism

    DEFF Research Database (Denmark)

    This anthology, No. 4 in the Danish Urban Studies Series, presents in English recent significant research on Denmark's urban development during the Age of Absolutism, 1660-1848, and features 13 articles written by leading Danish urban historians. The years of Absolutism were marked by a general...

  8. ABSOLUTE NEUTRINO MASSES

    DEFF Research Database (Denmark)

    Schechter, J.; Shahid, M. N.

    2012-01-01

    We discuss the possibility of using experiments timing the propagation of neutrino beams over large distances to help determine the absolute masses of the three neutrinos.

  9. Uncertainties in pipeline water percentage measurement

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Bentley N.

    2005-07-01

    Measurement of the quantity, density, average temperature and water percentage in petroleum pipelines has been an issue of prime importance. The methods of measurement have been investigated and have seen continued improvement over the years. Questions are being asked as to the reliability of the measurement of water in the oil through sampling systems originally designed and tested for a narrow range of densities. Today most facilities' sampling systems handle vastly increased ranges of density and types of crude oils. Issues of pipeline integrity, product loss and production balances are placing further demands on accurate measurement. Water percentage is one area that has not received the attention necessary to understand the many factors involved in making a reliable measurement. A previous paper [1] discussed the issues of uncertainty of the measurement from a statistical perspective. This paper outlines many of the issues of where the errors lie in the manual and automatic methods in use today. A routine that uses the data collected by the analyzers in the on-line system for validation of the measurements will be described. (author) (tk)

  10. NGS Absolute Gravity Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NGS Absolute Gravity data (78 stations) was received in July 1993. Principal gravity parameters include Gravity Value, Uncertainty, and Vertical Gradient. The...

  11. Approach To Absolute Zero

    Indian Academy of Sciences (India)

    … more and more difficult to remove heat as one approaches absolute zero. This is the … A new and active branch of engineering … This temperature is called the critical temperature, Tc. For sulfur dioxide the critical … adsorbent charcoal.

  12. Encasing the Absolutes

    Directory of Open Access Journals (Sweden)

    Uroš Martinčič

    2014-05-01

    The paper explores the issue of structure and case in English absolute constructions, whose subjects are deduced by several descriptive grammars as being in the nominative case due to its supposed neutrality in terms of register. This deduction is countered by systematic accounts presented within the framework of the Minimalist Program which relate the case of absolute constructions to specific grammatical factors. Each proposal is shown as an attempt at analysing absolute constructions as basic predication structures, either full clauses or small clauses. I argue in favour of the small clause approach due to its minimal reliance on transformations and unique stipulations. Furthermore, I propose that small clauses project a singular category, and show that the use of two cases in English absolute constructions can be accounted for if they are analysed as depictive phrases, possibly selected by prepositions. The case of the subject in absolutes is shown to be a result of syntactic and non-syntactic factors. I thus argue in accordance with Minimalist goals that syntactic case does not exist, attributing its role in absolutes to other mechanisms.

  13. Absolute measurement of 152Eu

    International Nuclear Information System (INIS)

    Baba, Hiroshi; Baba, Sumiko; Ichikawa, Shinichi; Sekine, Toshiaki; Ishikawa, Isamu

    1981-08-01

    A new method for the absolute measurement of 152Eu was established, based on the 4πβ-γ spectroscopic anti-coincidence method. It is a coincidence counting method consisting of a 4πβ-counter and a Ge(Li) γ-ray detector, in which the effective counting efficiencies of the 4πβ-counter for β-rays, conversion electrons, and Auger electrons are obtained by taking, for certain γ-rays, the intensity ratios between the singles spectrum and the spectrum coincident with pulses from the 4πβ-counter. First, to verify the method, three different absolute measurements were performed with a prepared 60Co source, and excellent agreement was found among their results. Next, the 4πβ-γ spectroscopic coincidence measurement was applied to 152Eu sources prepared by irradiating an enriched 151Eu target in a reactor. The result was compared with that obtained by γ-ray spectrometry using a 152Eu standard source supplied by LMRI. They agreed with each other within an error of 2%. (author)

  14. Calibration with Absolute Shrinkage

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Madsen, Henrik; Thyregod, Poul

    2001-01-01

    In this paper, penalized regression using the L-1 norm on the estimated parameters is proposed for chemometric calibration. The algorithm is of the lasso type, introduced by Tibshirani in 1996 as a linear regression method with a bound on the absolute length of the parameters, but a modification…

  15. Approach to Absolute Zero

    Indian Academy of Sciences (India)

    Approach to Absolute Zero: Below 10 milli-Kelvin. R Srinivasan. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 10, October 1997, pp 8-16. https://www.ias.ac.in/article/fulltext/reso/002/10/0008-0016

  16. Percentage Energy from Fat Screener: Overview

    Science.gov (United States)

    A short assessment instrument to estimate an individual's usual intake of percentage energy from fat. The foods asked about on the instrument were selected because they were the most important predictors of variability in percentage energy.

  17. Solving Problems with the Percentage Bar

    Science.gov (United States)

    van Galen, Frans; van Eerde, Dolly

    2013-01-01

    At the end of primary school all children more or less know what a percentage is, yet they often struggle with percentage problems. This article describes a study in which students aged 13 and 14 were given a written test with percentage problems and, a week later, were interviewed about the way they solved some of these problems. In a…

  18. Effekten af absolut kumulation

    DEFF Research Database (Denmark)

    Kyvsgaard, Britta; Klement, Christian

    2012-01-01

    As part of the 2011 Finance Act, the government and the parties to the agreement decided to examine the rules on sentencing when several criminal offences are adjudicated at the same time and, in that connection, to assess the consequences of changing the current rules for the capacity needs of the Danish Prison and Probation Service's... total fine under absolute cumulation compared with the moderated cumulation that currently applies.

  19. Towards absolute neutrino masses

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, Petr [Kellogg Radiation Laboratory 106-38, Caltech, Pasadena, CA 91125 (United States)

    2007-06-15

    Various ways of determining the absolute neutrino masses are briefly reviewed and their sensitivities compared. The apparent tension between the announced but unconfirmed observation of the 0νββ decay and the neutrino mass upper limit based on observational cosmology is used as an example of what could happen eventually. The possibility of a 'nonstandard' mechanism of the 0νββ decay is stressed and the ways of deciding which of the possible mechanisms is actually operational are described. The importance of the 0νββ nuclear matrix elements is discussed and their uncertainty estimated.

  20. Making Sense of Fractions and Percentages

    Science.gov (United States)

    Whitin, David J.; Whitin, Phyllis

    2012-01-01

    Because fractions and percentages can be difficult for children to grasp, connecting them whenever possible is beneficial. Linking them can foster representational fluency as children simultaneously see the part-whole relationship expressed numerically (as a fraction and as a percentage) and visually (as a pie chart). NCTM advocates these…

  1. Thermodynamics of negative absolute pressures

    International Nuclear Information System (INIS)

    Lukacs, B.; Martinas, K.

    1984-03-01

    The authors show that the possibility of negative absolute pressure can be incorporated into axiomatic thermodynamics, analogously to negative absolute temperature. There are examples of such systems (GUT, QCD) possessing negative absolute pressure in domains where it can be expected from thermodynamical considerations. (author)

  2. Absolute Gravimetry in Fennoscandia

    DEFF Research Database (Denmark)

    Pettersen, B. R; TImmen, L.; Gitlein, O.

    The Fennoscandian postglacial uplift has been mapped geometrically using precise levelling, tide gauges, and networks of permanent GPS stations. The results identify major uplift rates at sites located around the northern part of the Gulf of Bothnia. The vertical motions decay in all directions... motions) has its major axis in the direction of southwest to northeast and covers a distance of about 2000 km. Absolute gravimetry was carried out in Finland and Norway in 1976 with a rise-and-fall instrument. A decade later the number of gravity stations was expanded by JILAg-5, in Finland from 1988, in Norway... time series of several years are now available. Along the coast there are nearby tide gauge stations, many of which have time series of several decades. We describe the observing network, procedures, auxiliary observations, and discuss results obtained for selected sites. We compare the gravity results...

  3. Percentage of Fast-Track Receipts

    Data.gov (United States)

    Social Security Administration — The dataset provides the percentage of fast-track receipts by state during the reporting fiscal year. Fast-tracked cases consist of those cases identified as Quick...

  4. Incorrect Weighting of Absolute Performance in Self-Assessment

    Science.gov (United States)

    Jeffrey, Scott A.; Cozzarin, Brian

    Students spend much of their life in an attempt to assess their aptitude for numerous tasks. For example, they expend a great deal of effort to determine their academic standing given a distribution of grades. This research finds that students use their absolute performance, or percentage correct, as a yardstick for their self-assessment, even when relative standing is much more informative. An experiment shows that this reliance on absolute performance for self-evaluation causes a misallocation of time and financial resources. Reasons for this inappropriate responsiveness to absolute performance are explored.

  5. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    Science.gov (United States)

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors.
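
    A basic version of PSO-tuned SVR error prediction can be sketched as follows. This uses plain PSO rather than the NAPSO variant with natural selection and simulated annealing described in the paper, and the sensor signal, parameter bounds, and swarm settings are invented for illustration.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error

    rng = np.random.default_rng(1)

    # Synthetic stand-in for a sensor's dynamic measurement-error series.
    t = np.linspace(0, 10, 400)
    error_signal = 2.0 + 0.5 * np.sin(3 * t) + 0.1 * t + rng.normal(scale=0.05, size=t.size)
    X = t.reshape(-1, 1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, error_signal, test_size=0.3, random_state=0)

    def fitness(params):
        """Validation RMSE of an SVR with log10-scaled (C, gamma, epsilon)."""
        C, gamma, eps = 10.0 ** params
        model = SVR(C=C, gamma=gamma, epsilon=eps).fit(X_tr, y_tr)
        return np.sqrt(mean_squared_error(y_te, model.predict(X_te)))

    # Plain PSO over log10(C), log10(gamma), log10(epsilon); bounds are illustrative.
    low, high = np.array([-1.0, -2.0, -3.0]), np.array([3.0, 1.0, -0.5])
    n_particles, n_iter, w, c1, c2 = 20, 30, 0.7, 1.5, 1.5
    pos = rng.uniform(low, high, size=(n_particles, 3))
    vel = np.zeros_like(pos)
    pbest_pos = pos.copy()
    pbest_val = np.array([fitness(p) for p in pos])
    gbest_pos = pbest_pos[pbest_val.argmin()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest_pos - pos) + c2 * r2 * (gbest_pos - pos)
        pos = np.clip(pos + vel, low, high)
        vals = np.array([fitness(p) for p in pos])
        improved = vals < pbest_val
        pbest_pos[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest_pos = pbest_pos[pbest_val.argmin()].copy()

    C, gamma, eps = 10.0 ** gbest_pos
    pred = SVR(C=C, gamma=gamma, epsilon=eps).fit(X_tr, y_tr).predict(X_te)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    mape = 100.0 * mean_absolute_percentage_error(y_te, pred)
    print(f"tuned (C, gamma, epsilon) = ({C:.3g}, {gamma:.3g}, {eps:.3g})")
    print(f"RMSE = {rmse:.4f}, MAPE = {mape:.2f}%")
    ```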

  6. Absolute risk, absolute risk reduction and relative risk

    Directory of Open Access Journals (Sweden)

    Jose Andres Calvache

    2012-12-01

    This article illustrates the epidemiological concepts of absolute risk, absolute risk reduction and relative risk through a clinical example. In addition, it emphasizes the usefulness of these concepts in clinical practice, clinical research and the health decision-making process.
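
    A small worked example of the three quantities (with the number needed to treat added as a closely related measure); the trial numbers are invented and are not the clinical example used in the article:

    ```python
    # Hypothetical two-arm trial (numbers invented for illustration).
    events_control, n_control = 20, 200      # 10% absolute risk in the control group
    events_treated, n_treated = 12, 200      # 6% absolute risk in the treated group

    risk_control = events_control / n_control      # absolute risk, control
    risk_treated = events_treated / n_treated      # absolute risk, treatment
    arr = risk_control - risk_treated              # absolute risk reduction
    rr = risk_treated / risk_control               # relative risk
    rrr = 1.0 - rr                                 # relative risk reduction
    nnt = 1.0 / arr                                # number needed to treat (related quantity)

    print(f"absolute risk (control)   = {risk_control:.2%}")
    print(f"absolute risk (treatment) = {risk_treated:.2%}")
    print(f"absolute risk reduction   = {arr:.2%}")
    print(f"relative risk             = {rr:.2f}  (relative risk reduction = {rrr:.0%})")
    print(f"number needed to treat    = {nnt:.0f}")
    ```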

  7. Maximizing percentage depletion in solid minerals

    International Nuclear Information System (INIS)

    Tripp, J.; Grove, H.D.; McGrath, M.

    1982-01-01

    This article develops a strategy for maximizing percentage depletion deductions when extracting uranium or other solid minerals. The goal is to avoid losing percentage depletion deductions by staying below the 50% limitation on taxable income from the property. The article is divided into two major sections. The first section is comprised of depletion calculations that illustrate the problem and corresponding solutions. The last section deals with the feasibility of applying the strategy and complying with the Internal Revenue Code and appropriate regulations. Three separate strategies or appropriate situations are developed and illustrated. 13 references, 3 figures, 7 tables
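
    The 50% limitation discussed here can be illustrated with a toy computation: the tentative percentage depletion is capped at half of the taxable income from the property, so a low taxable income can cost part of the deduction. The rate and dollar figures below are placeholders, not statutory values or tax advice.

    ```python
    def percentage_depletion_deduction(gross_income, taxable_income, depletion_rate):
        """Allowed percentage depletion: the statutory rate applied to gross income
        from the property, capped at 50% of taxable income from the property.
        The rate and the cap structure are simplified for illustration."""
        tentative = depletion_rate * gross_income
        cap = 0.50 * taxable_income
        return min(tentative, cap)

    # Invented figures; the 22% rate is a placeholder, not the statutory rate.
    gross, taxable, rate = 1_000_000.0, 300_000.0, 0.22
    print(percentage_depletion_deduction(gross, taxable, rate))   # capped at 150,000
    ```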

  8. Absolute method of measuring magnetic susceptibility

    Science.gov (United States)

    Thorpe, A.; Senftle, F.E.

    1959-01-01

    An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.

  9. A new accuracy measure based on bounded relative error for time series forecasting.

    Science.gov (United States)

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made of the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation of the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with a user-selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on the geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
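
    A sketch of UMBRAE as the measure is usually defined: each error is bounded against a benchmark forecast's error, the bounded values are averaged, and the mean is then unscaled. The series and the naive benchmark below are invented for illustration.

    ```python
    import numpy as np

    def umbrae(actual, forecast, benchmark_forecast):
        """Unscaled Mean Bounded Relative Absolute Error.
        Each error is first bounded against a benchmark forecast's error:
            BRAE_t = |e_t| / (|e_t| + |e*_t|),
        then UMBRAE = MBRAE / (1 - MBRAE), where MBRAE is the mean BRAE.
        UMBRAE < 1 indicates the forecast beats the benchmark on average.
        (Periods where both errors are zero would need special handling.)"""
        a, f, b = (np.asarray(x, float) for x in (actual, forecast, benchmark_forecast))
        e, e_star = np.abs(a - f), np.abs(a - b)
        mbrae = np.mean(e / (e + e_star))
        return mbrae / (1.0 - mbrae)

    # Invented series; the benchmark is the naive (previous-value) forecast.
    actual = np.array([10.0, 12.0, 13.0, 12.5, 14.0, 15.5])
    forecast = np.array([10.5, 11.5, 13.2, 12.0, 14.5, 15.0])
    naive = np.concatenate(([actual[0]], actual[:-1]))
    print(f"UMBRAE vs naive benchmark = {umbrae(actual, forecast, naive):.3f}")
    ```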

  10. Projective absoluteness for Sacks forcing

    NARCIS (Netherlands)

    Ikegami, D.

    2009-01-01

    We show that Σ¹₃-absoluteness for Sacks forcing is equivalent to the nonexistence of a Δ¹₂ Bernstein set. We also show that Sacks forcing is the weakest forcing notion among all of the preorders that add a new real with respect to Σ¹₃ forcing absoluteness.

  11. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,
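
    The decoding setting described here, comparing a received (0,1)-word with each code word to obtain error vectors and picking the most plausible one, can be illustrated with a minimal minimum-distance decoder; the code and received word are invented.

    ```python
    import numpy as np

    # Toy binary code (not from the paper): four (0,1)-code words of length 6.
    code_words = np.array([
        [0, 0, 0, 0, 0, 0],
        [1, 1, 1, 0, 0, 0],
        [0, 0, 0, 1, 1, 1],
        [1, 1, 1, 1, 1, 1],
    ])

    received = np.array([1, 0, 1, 0, 0, 0])   # a received word with (hopefully few) bit errors

    # Comparing the received word with each code word yields a set of candidate error vectors;
    # minimum-distance decoding picks the code word whose error vector has the smallest weight.
    error_vectors = code_words ^ received      # bitwise XOR, one error vector per code word
    weights = error_vectors.sum(axis=1)        # Hamming weight of each error vector
    best = int(weights.argmin())

    print("error vectors:\n", error_vectors)
    print("decoded code word:", code_words[best], "with", weights[best], "bit error(s)")
    ```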

  12. The Language of Comparisons: Communicating about Percentages

    Directory of Open Access Journals (Sweden)

    Jessica Polito

    2014-01-01

    While comparisons between percentages or rates appear frequently in journalism and advertising, and are an essential component of quantitative writing, many students fail to understand precisely what percentages mean, and lack fluency with the language used for comparisons. After reviewing evidence demonstrating this weakness, this experience-based perspective lays out a framework for teaching the language of comparisons in a structured way, and illustrates it with several authentic examples that exemplify mistaken or misleading uses of such numbers. The framework includes three common types of erroneous or misleading quantitative writing: the missing comparison, where a key number is omitted; the apples-to-pineapples comparison, where two subtly incomparable rates are presented; and the implied fallacy, where an invalid quantitative conclusion is left to the reader to infer.

  13. Percentage compensation arrangements: suspect, but not illegal.

    Science.gov (United States)

    Fedor, F P

    2001-01-01

    Percentage compensation arrangements, in which a service is outsourced to a contractor that is paid in accordance with the level of its performance, are widely used in many business sectors. The HHS Office of Inspector General (OIG) has shown concern that these arrangements in the healthcare industry may offer incentives for the performance of unnecessary services or cause false claims to be made to Federal healthcare programs in violation of the antikickback statute and the False Claims Act. Percentage compensation arrangements can work and need not run afoul of the law as long as the healthcare organization carefully oversees the arrangement and sets specific safeguards in place. These safeguards include screening contractors, carefully evaluating their compliance programs, and obligating them contractually to perform within the limits of the law.

  14. Definition of correcting factors for absolute radon content measurement formula

    International Nuclear Information System (INIS)

    Ji Changsong; Xiao Ziyun; Yang Jianfeng

    1992-01-01

    The absolute method of radon content measurement is based on the Thomas radon measurement formula. It was found experimentally that a systematic error exists in radon content measurements made by means of the Thomas formula. From an analysis of the behaviour of radon daughters, five factors, namely filter efficiency, detector construction factor, self-absorbance, energy spectrum factor, and gravity factor, were introduced into the Thomas formula, so that the systematic error was eliminated. The methods for measuring these five factors are given.

  15. Cryogenic, Absolute, High Pressure Sensor

    Science.gov (United States)

    Chapman, John J. (Inventor); Shams. Qamar A. (Inventor); Powers, William T. (Inventor)

    2001-01-01

    A pressure sensor is provided for cryogenic, high-pressure applications. A highly doped silicon piezoresistive pressure sensor is bonded to a silicon substrate in an absolute pressure sensing configuration. The absolute pressure sensor is bonded to an aluminum nitride substrate. Aluminum nitride has an appropriate coefficient of thermal expansion for use with highly doped silicon at cryogenic temperatures. A group of sensors, either two sensors on two substrates or four sensors on a single substrate, is packaged in a pressure vessel.

  16. Partial sums of arithmetical functions with absolutely convergent ...

    Indian Academy of Sciences (India)

    For an arithmetical function f with an absolutely convergent Ramanujan expansion, we derive an asymptotic formula for the sum ∑_{n ≤ N} f(n) with an explicit error term. As a corollary we obtain new results about sum-of-divisors functions and Jordan's totient functions.

  17. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Moreover, at spectacular events a combination of component failure and human error is often found. The Rasmussen Report and the German Risk Assessment Study, in particular, show for pressurised water reactors that human error must not be underestimated. Although operator errors as a form of human error can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  18. Absolute GPS Positioning Using Genetic Algorithms

    Science.gov (United States)

    Ramillien, G.

    A new inverse approach for restoring the absolute coordinates of a ground-based station from three or four observed GPS pseudo-ranges is proposed. This stochastic method is based on simulations of natural evolution named genetic algorithms (GA). These iterative procedures provide fairly good and robust estimates of the absolute positions in the Earth's geocentric reference system. For comparison/validation, GA results are compared to the ones obtained using the classical linearized least-squares scheme for the determination of the XYZ location proposed by Bancroft (1985), which is strongly limited by the number of available observations (i.e., here, the number of input pseudo-ranges must be four). The r.m.s. accuracy of the non-linear cost function reached by this latter method is typically ~10-4 m2, corresponding to ~300-500-m accuracies for each geocentric coordinate. However, GA can provide more acceptable solutions (r.m.s. errors < 10-5 m2), even when only three instantaneous pseudo-ranges are used, such as after a loss of lock during a GPS survey. Tuned GA parameters used in different simulations are N=1000 starting individuals, as well as Pc=60-70% and Pm=30-40% for the crossover probability and mutation rate, respectively. Statistical tests on the ability of GA to recover acceptable coordinates in the presence of high levels of noise are made by simulating nearly 3000 random samples of erroneous pseudo-ranges. Here, two main sources of measurement error are considered in the inversion: (1) typical satellite-clock errors and/or 300-metre variance atmospheric delays, and (2) Geometrical Dilution of Precision (GDOP) due to the particular GPS satellite configuration at the time of acquisition. Extracting valuable information even from low-quality starting range observations, GA offer an interesting alternative for high-precision GPS positioning.
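
    A stripped-down GA of this kind can be sketched as follows. The satellite geometry, noise level, search box, and GA operators (tournament selection, arithmetic crossover, Gaussian mutation) are illustrative choices rather than the paper's implementation; the receiver clock bias is ignored to keep the example short, and the crossover and mutation rates are taken from the ranges quoted in the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Toy geometry (all values invented): four satellites and a true receiver position (m).
    satellites = np.array([
        [15_600e3,  7_540e3, 20_140e3],
        [18_760e3,  2_750e3, 18_610e3],
        [17_610e3, 14_630e3, 13_480e3],
        [19_170e3,    610e3, 18_390e3],
    ])
    true_pos = np.array([1_113e3, 6_200e3, 1_500e3])
    pseudo_ranges = np.linalg.norm(satellites - true_pos, axis=1) + rng.normal(scale=30.0, size=4)

    def cost(candidates):
        """Sum of squared pseudo-range residuals for each candidate position."""
        d = np.linalg.norm(satellites[None, :, :] - candidates[:, None, :], axis=2)
        return np.sum((d - pseudo_ranges) ** 2, axis=1)

    # Plain GA: tournament selection, arithmetic crossover, Gaussian mutation.
    n_pop, n_gen, p_cross, p_mut = 1000, 200, 0.65, 0.35     # rates follow the abstract's ranges
    low, high = true_pos - 50e3, true_pos + 50e3             # search box (a coarse a-priori guess in practice)
    pop = rng.uniform(low, high, size=(n_pop, 3))

    for gen in range(n_gen):
        fit = cost(pop)
        a, b = rng.integers(n_pop, size=(2, n_pop))                          # tournament selection
        parents = np.where((fit[a] < fit[b])[:, None], pop[a], pop[b])
        mates = np.roll(parents, 1, axis=0)                                  # arithmetic crossover
        alpha = rng.random((n_pop, 1))
        do_cross = rng.random(n_pop) < p_cross
        children = np.where(do_cross[:, None], alpha * parents + (1 - alpha) * mates, parents)
        do_mut = rng.random(n_pop) < p_mut                                   # Gaussian mutation
        step = 5e3 * (1.0 - gen / n_gen) + 1.0                               # slowly shrinking step
        children[do_mut] += rng.normal(scale=step, size=(do_mut.sum(), 3))
        children[0] = pop[fit.argmin()]                                      # elitism
        pop = np.clip(children, low, high)

    best = pop[cost(pop).argmin()]
    print("estimated position error (m):", np.round(best - true_pos, 1))
    ```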

  19. Predicted percentage dissatisfied with ankle draft.

    Science.gov (United States)

    Liu, S; Schiavon, S; Kabanshi, A; Nazaroff, W W

    2017-07-01

    Draft is unwanted local convective cooling. The draft risk model of Fanger et al. (Energy and Buildings 12, 21-39, 1988) estimates the percentage of people dissatisfied with air movement due to overcooling at the neck. There is no model for predicting draft at the ankles, which is more relevant to stratified air distribution systems such as underfloor air distribution (UFAD) and displacement ventilation (DV). We developed a model for the predicted percentage dissatisfied with ankle draft (PPD_AD) based on laboratory experiments with 110 college students. We assessed the effect on ankle draft of various combinations of air speed (nominal range: 0.1-0.6 m/s), temperature (nominal range: 16.5-22.5°C), turbulence intensity (at ankles), sex, and clothing insulation; thermal sensation and air speed at ankles were the dominant parameters affecting draft. The seated subjects accepted a vertical temperature difference of up to 8°C between ankles (0.1 m) and head (1.1 m) at neutral whole-body thermal sensation, 5°C more than the maximum difference recommended in existing standards. The developed ankle draft model can be implemented in thermal comfort and air diffuser testing standards. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  20. Absolute and Relative Reliability of the Timed 'Up & Go' Test and '30second Chair-Stand' Test in Hospitalised Patients with Stroke

    DEFF Research Database (Denmark)

    Lyders Johansen, Katrine; Derby Stistrup, Rikke; Skibdal Schjøtt, Camilla

    2016-01-01

    OBJECTIVE: The timed 'Up & Go' test and '30second Chair-Stand' test are simple clinical outcome measures widely used to assess functional performance. The reliability of both tests in hospitalised stroke patients is unknown. The purpose was to investigate the relative and absolute reliability of both tests in patients admitted to an acute stroke unit. METHODS: Sixty-two patients (men, n = 41) attended two test sessions separated by a one-hour rest. Intraclass correlation coefficients (ICC2,1) were calculated to assess relative reliability. Absolute reliability was expressed as the Standard Error of Measurement (with 95% certainty, SEM95) and the Smallest Real Difference (SRD), and as percentages of their respective means if heteroscedasticity was observed in Bland-Altman plots (SEM95% and SRD%). RESULTS: ICC values for interrater reliability were 0.97 and 0.99 for the timed 'Up & Go' test and 0.88 and 0...
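
    The absolute-reliability quantities named here are commonly computed from the ICC and the between-subject standard deviation; the formulas below are the usual ones (assumed, not quoted from the paper), and the input numbers are invented.

    ```python
    import math

    def absolute_reliability(sd, icc):
        """Common formulas for absolute reliability (assumed here, not quoted from the paper):
        SEM   = SD * sqrt(1 - ICC)
        SEM95 = 1.96 * SEM              (measurement error with 95% certainty)
        SRD   = 1.96 * sqrt(2) * SEM    (smallest real difference between two measurements)"""
        sem = sd * math.sqrt(1.0 - icc)
        sem95 = 1.96 * sem
        srd = 1.96 * math.sqrt(2.0) * sem
        return sem, sem95, srd

    # Invented numbers: between-subject SD of 10 s on the timed 'Up & Go' and ICC = 0.97.
    sem, sem95, srd = absolute_reliability(sd=10.0, icc=0.97)
    print(f"SEM = {sem:.2f} s, SEM95 = {sem95:.2f} s, SRD = {srd:.2f} s")
    ```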

  1. Error Analysis of Determining Airplane Location by Global Positioning System

    OpenAIRE

    Hajiyev, Chingiz; Burat, Alper

    1999-01-01

    This paper studies the error analysis of determining airplane location by the global positioning system (GPS) using a statistical testing method. The Newton-Raphson method positions the airplane at the intersection point of four spheres. Absolute errors, relative errors and standard deviations have been calculated. The results show that the positioning error of the airplane varies with the coordinates of the GPS satellites and the airplane.
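
    As a hedged illustration of the Newton-Raphson step described above (here in Gauss-Newton form), the sketch below intersects four spheres |x − s_i| = r_i for synthetic satellite positions and error-free ranges; it is not the authors' implementation, and the receiver clock bias is ignored for brevity.

    import numpy as np

    # Illustrative satellite positions (m) and a synthetic receiver location; not real data.
    sats = np.array([[15600e3,  7540e3, 20140e3],
                     [18760e3,  2750e3, 18610e3],
                     [17610e3, 14630e3, 13480e3],
                     [19170e3,   610e3, 18390e3]], dtype=float)
    truth = np.array([1917e3, 6029e3, 1274e3])
    ranges = np.linalg.norm(sats - truth, axis=1)        # error-free ranges to the four satellites

    def newton_position(sats, ranges, x0=np.zeros(3), iters=10):
        """Newton (Gauss-Newton) solution of the sphere equations f_i(x) = |x - s_i| - r_i = 0."""
        x = x0.astype(float)
        for _ in range(iters):
            diff = x - sats                              # (4, 3)
            dist = np.linalg.norm(diff, axis=1)          # predicted ranges
            residual = dist - ranges                     # f(x)
            jacobian = diff / dist[:, None]              # df_i/dx are unit vectors
            step, *_ = np.linalg.lstsq(jacobian, -residual, rcond=None)
            x = x + step
        return x

    est = newton_position(sats, ranges)
    print("absolute error per axis (m):", np.round(est - truth, 6))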

  2. Absolute flux scale for radioastronomy

    International Nuclear Information System (INIS)

    Ivanov, V.P.; Stankevich, K.S.

    1986-01-01

    The authors propose and provide support for a new absolute flux scale for radio astronomy, which is not encumbered with the inadequacies of the previous scales. In constructing it, the method of relative spectra was used (a powerful tool for choosing reference spectra). A review is given of previous flux scales. The authors compare the AIS scale with the scale they propose. Both scales are based on absolute measurements by the 'artificial moon' method, and they are practically coincident in the range from 0.96 to 6 GHz. At frequencies above 6 GHz and below 0.96 GHz, the AIS scale is overestimated because of incorrect extrapolation of the spectra of the primary and secondary standards. The major results which have emerged from this review of absolute scales in radio astronomy are summarized.

  3. An Empirical Analysis for the Prediction of a Financial Crisis in Turkey through the Use of Forecast Error Measures

    Directory of Open Access Journals (Sweden)

    Seyma Caliskan Cavdar

    2015-08-01

    Full Text Available In this study, we examine whether the forecast errors obtained by ANN models affect the breakout of financial crises. Additionally, we investigate how much asymmetric information and forecast errors are reflected in the output values. We used the exchange rate of USD/TRY (USD), the Borsa Istanbul 100 Index (BIST), and the gold price (GP) as the output variables of our Artificial Neural Network (ANN) models. We observe that the predicted ANN model has a strong explanation capability for the 2001 and 2008 crises. Our calculations of symmetry measures such as the mean absolute percentage error (MAPE), symmetric mean absolute percentage error (sMAPE), and Shannon entropy (SE) clearly demonstrate the degree of asymmetric information and the deterioration of the financial system prior to, during, and after the financial crisis. We found that the asymmetric information prior to a crisis is larger compared to other periods. This can be interpreted as an early warning signal before potential crises. This evidence seems to favor an asymmetric information view of financial crises.
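
    For reference, a minimal sketch of the two forecast-error measures named above, MAPE and one common convention for sMAPE; the exchange-rate observations and forecasts are made up and this is not the authors' code.

    import numpy as np

    def mape(actual, forecast):
        """Mean absolute percentage error, in percent."""
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        return 100.0 * np.mean(np.abs((actual - forecast) / actual))

    def smape(actual, forecast):
        """Symmetric MAPE (one common convention), in percent."""
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        return 100.0 * np.mean(2.0 * np.abs(forecast - actual)
                               / (np.abs(actual) + np.abs(forecast)))

    # Made-up observations and ANN forecasts, purely for illustration.
    observed = [1.35, 1.42, 1.57, 1.66]
    predicted = [1.30, 1.45, 1.50, 1.70]
    print(f"MAPE  = {mape(observed, predicted):.2f}%")
    print(f"sMAPE = {smape(observed, predicted):.2f}%")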

  4. Absolute beam current monitoring in endstation c

    International Nuclear Information System (INIS)

    Bochna, C.

    1995-01-01

    The first few experiments at CEBAF require approximately 1% absolute measurements of beam currents expected to range from 10 to 25 μA. This represents errors of 100-250 nA. The initial complement of beam current monitors is of the non-intercepting type. The CEBAF accelerator division has provided a stripline monitor and a cavity monitor, and the authors have installed an Unser monitor (parametric current transformer or PCT). After calibrating the Unser monitor with a precision current reference, the authors plan to transfer this calibration using a CW beam to the stripline monitors and cavity monitors. It is important that this be done fairly rapidly because, while the gain of the Unser monitor is quite stable, the offset may drift on the order of 0.5 μA per hour. A summary of what the authors have learned about the linearity, zero drift, and gain drift of each type of current monitor will be presented.

  5. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  6. Relativistic Absolutism in Moral Education.

    Science.gov (United States)

    Vogt, W. Paul

    1982-01-01

    Discusses Emile Durkheim's "Moral Education: A Study in the Theory and Application of the Sociology of Education," which holds that morally healthy societies may vary in culture and organization but must possess absolute rules of moral behavior. Compares this moral theory with current theory and practice of American educators. (MJL)

  7. Forcing absoluteness and regularity properties

    NARCIS (Netherlands)

    Ikegami, D.

    2010-01-01

    For a large natural class of forcing notions, we prove general equivalence theorems between forcing absoluteness statements, regularity properties, and transcendence properties over L and the core model K. We use our results to answer open questions from set theory of the reals.

  8. Some absolutely effective product methods

    Directory of Open Access Journals (Sweden)

    H. P. Dikshit

    1992-01-01

    Full Text Available It is proved that the product method A(C,1), where (C,1) is the Cesàro arithmetic mean matrix, is totally effective under certain conditions concerning the matrix A. This general result is applied to study absolute Nörlund summability of Fourier series and other related series.

  9. Hitting the Optimal Vaccination Percentage and the Risks of Error: Why to Miss Right.

    Science.gov (United States)

    Harvey, Michael J; Prosser, Lisa A; Messonnier, Mark L; Hutton, David W

    2016-01-01

    To determine the optimal level of vaccination coverage, defined as the level that minimizes total costs, and to explore how economic results change with marginal changes to this level of coverage. A susceptible-infected-recovered-vaccinated model designed to represent theoretical infectious diseases was created to simulate disease spread. Parameter inputs were defined to include ranges that could represent a variety of possible vaccine-preventable conditions. Costs included vaccine costs and disease costs. Health benefits were quantified as monetized quality-adjusted life years lost from disease. Primary outcomes were the number of infected people and the total costs of vaccination. Optimization methods were used to determine the population vaccination coverage that achieved a minimum cost given disease and vaccine characteristics. Sensitivity analyses explored the effects of changes in reproductive rates, costs and vaccine efficacies on the primary outcomes. Further analysis examined the additional cost incurred if the optimal coverage levels were not achieved. Results indicate that the relationship between vaccine and disease cost is the main driver of the optimal vaccination level. Under a wide range of assumptions, vaccination beyond the optimal level is less expensive than vaccination below the optimal level. This observation did not hold when the cost of the vaccine becomes approximately equal to the cost of disease. These results suggest that vaccination below the optimal level of coverage is more costly than vaccinating beyond the optimal level. This work helps provide information for assessing the impact of changes in vaccination coverage at a societal level.
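
    A schematic illustration only, not the authors' model: a toy SIR-type final-size calculation with hypothetical R0, vaccine efficacy and unit costs shows how total cost varies with coverage and why missing the cost-minimizing coverage on the high side is typically cheaper than missing it on the low side.

    import numpy as np

    def attack_rate(r0, coverage, efficacy, iters=2000):
        """Final epidemic size z from the SIR final-size relation with a fraction
        coverage*efficacy immunized: z = s0 * (1 - exp(-r0 * z)), s0 = 1 - coverage*efficacy."""
        s0 = 1.0 - coverage * efficacy
        z = 0.5
        for _ in range(iters):
            z = s0 * (1.0 - np.exp(-r0 * z))
        return z

    # Illustrative parameters (not from the paper): R0, vaccine efficacy, per-person costs.
    r0, efficacy = 2.5, 0.9
    cost_vaccine, cost_disease = 50.0, 2000.0

    coverage = np.linspace(0.0, 1.0, 101)
    total_cost = np.array([c * cost_vaccine + attack_rate(r0, c, efficacy) * cost_disease
                           for c in coverage])
    idx = int(np.argmin(total_cost))
    low, high = max(idx - 10, 0), min(idx + 10, len(coverage) - 1)
    print(f"cost-minimizing coverage ~ {coverage[idx]:.2f}")
    print(f"extra cost if coverage is 10 points too low:  {total_cost[low] - total_cost[idx]:.1f}")
    print(f"extra cost if coverage is 10 points too high: {total_cost[high] - total_cost[idx]:.1f}")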

  10. Comparison of a mobile application to estimate percentage body fat to other non-laboratory based measurements

    Directory of Open Access Journals (Sweden)

    Shaw Matthew P.

    2017-02-01

    Full Text Available Study aim: The measurement of body composition is important from a population perspective, as it is a variable associated with a person’s health, and also from a sporting perspective, as it can be used to evaluate training. This study aimed to examine the reliability of a mobile application that estimates body composition by digitising a two-dimensional image. Materials and methods: Thirty participants (15 men and 15 women) volunteered to have their percentage body fat (%BF) estimated via three different methods (skinfold measurements, SFM; bio-electrical impedance, BIA; LeanScreenTM mobile application, LSA). Intra-method reproducibility was assessed using intra-class correlation coefficients (ICC), coefficient of variance (CV) and typical error of measurement (TEM). The average measurement for each method was also compared. Results: There were no significant differences between the methods for estimated %BF (p = 0.818), and the reliability of each method as assessed via ICC was good (≥0.974). However, the absolute reproducibility, as measured by CV and TEM, was much higher in SFM and BIA (≤1.07 and ≤0.37, respectively) compared with LSA (CV 6.47, TEM 1.6). Conclusion: LSA may offer an alternative to other field-based measures for practitioners; however, individual variance should be considered to develop an understanding of the minimal worthwhile change, as it may not be suitable for a one-off measurement.

  11. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service controlled medication either returned or missing from the unidosis carts, both in the pharmacy and in the wards. Results: Uncorrected unidosis carts show a 0.9% medication error rate (264 errors) versus 0.6% (154 errors) in unidosis carts that had previously been revised. In carts not revised, the error is 70.83%, mainly caused when setting up the unidosis carts. The rest are due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%), or boxes that had not been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not provided by nurses (14.09%), medication withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: There is a need to revise the unidosis carts and to introduce a computerized prescription system to avoid transcription errors. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are checked before being sent to the hospitalization units, the error rate diminishes to 0.3%.

  12. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    the data plotted on the CEG and PEG produced risk estimates that were more granular and reflective of a continuously increasing risk scale. The SEG is a modern metric for clinical risk assessments of BG monitor errors that assigns a unique risk score to each monitor data point when compared to a reference value. The SEG allows the clinical accuracy of a BG monitor to be portrayed in many ways, including as the percentages of data points falling into custom-defined risk zones. For modeled data the SEG, compared with the CEG and PEG, allows greater precision for quantifying risk, especially when the risks are low. This tool will be useful to allow regulators and manufacturers to monitor and evaluate glucose monitor performance in their surveillance programs. © 2014 Diabetes Technology Society.

  13. Moral absolutism and ectopic pregnancy.

    Science.gov (United States)

    Kaczor, C

    2001-02-01

    If one accepts a version of absolutism that excludes the intentional killing of any innocent human person from conception to natural death, ectopic pregnancy poses vexing difficulties. Given that the embryonic life almost certainly will die anyway, how can one retain one's moral principle and yet adequately respond to a situation that gravely threatens the life of the mother and her future fertility? The four options of treatment most often discussed in the literature are non-intervention, salpingectomy (removal of tube with embryo), salpingostomy (removal of embryo alone), and use of methotrexate (MXT). In this essay, I review these four options and introduce a fifth (the milking technique). In order to assess these options in terms of the absolutism mentioned, it will also be necessary to discuss various accounts of the intention/foresight distinction. I conclude that salpingectomy, salpingostomy, and the milking technique are compatible with absolutist presuppositions, but not the use of methotrexate.

  14. Absolute magnitudes by statistical parallaxes

    International Nuclear Information System (INIS)

    Heck, A.

    1978-01-01

    The author describes an algorithm for stellar luminosity calibrations (based on the principle of maximum likelihood) which allows the calibration of relations of the type: M_i = Σ_{j=1}^{N} q_j C_{ij}, i = 1, ..., n, where n is the size of the sample at hand, M_i are the individual absolute magnitudes, C_{ij} are observational quantities (j = 1, ..., N), and q_j are the coefficients to be determined. If one puts N = 1 and C_{iN} = 1, one has q_1 = M(mean), the mean absolute magnitude of the sample. As additional output, the algorithm provides one also with the dispersion in magnitude of the sample σ_M, the mean solar motion (U, V, W) and the corresponding velocity ellipsoid (σ_u, σ_v, σ_w). The use of this algorithm is illustrated. (Auth.)
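
    As a toy illustration of the linear calibration model M_i = Σ_j q_j C_{ij} with synthetic data; the fit below uses ordinary least squares rather than the full maximum-likelihood treatment with solar motion and velocity ellipsoid, so it is a sketch of the model, not of the algorithm itself.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic data for the linear calibration model M_i = sum_j q_j * C_ij; the column of
    # ones corresponds to the special case C_iN = 1, whose coefficient is the sample mean M.
    n_stars, true_q = 200, np.array([2.5, -1.0, 0.7])
    C = np.column_stack([np.ones(n_stars),                 # constant term (C_iN = 1)
                         rng.uniform(0.0, 1.5, n_stars),   # e.g. a colour index
                         rng.uniform(-0.5, 0.5, n_stars)]) # e.g. a metallicity indicator
    M = C @ true_q + rng.normal(0.0, 0.3, n_stars)         # absolute magnitudes with scatter

    q_hat, *_ = np.linalg.lstsq(C, M, rcond=None)          # fitted coefficients q_j
    sigma_M = float(np.std(M - C @ q_hat))                 # dispersion of the sample
    print("fitted q_j:", np.round(q_hat, 3), " sigma_M:", round(sigma_M, 3))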

  15. Absolute gravity measurements in California

    Science.gov (United States)

    Zumberge, M. A.; Sasagawa, G.; Kappus, M.

    1986-08-01

    An absolute gravity meter that determines the local gravitational acceleration by timing a freely falling mass with a laser interferometer has been constructed. The instrument has made measurements at 11 sites in California, four in Nevada, and one in France. The uncertainty in the results is typically 10 microgal. Repeated measurements have been made at several of the sites; only one shows a substantial change in gravity.

  16. The Absolute Immanence in Deleuze

    OpenAIRE

    Park, Daeseung

    2013-01-01

    The absolute immanence in Deleuze Daeseung Park Abstract The plane of immanence is not unique. Deleuze and Guattari suppose a multiplicity of planes. Each great philosopher draws new planes on his own way, and these planes constitute the "time of philosophy". We can, therefore, "present the entire history of philosophy from the viewpoint of the institution of a plane of immanence" or present the time of philosophy from the viewpoint of the superposition and of the coexistence of planes. Howev...

  17. Relative and absolute risk in epidemiology and health physics

    International Nuclear Information System (INIS)

    Goldsmith, R.; Peterson, H.T. Jr.

    1983-01-01

    The health risk from ionizing radiation commonly is expressed in two forms: (1) the relative risk, which is the percentage increase in natural disease rate and (2) the absolute or attributable risk which represents the difference between the natural rate and the rate associated with the agent in question. Relative risk estimates for ionizing radiation generally are higher than those expressed as the absolute risk. This raises the question of which risk estimator is the most appropriate under different conditions. The absolute risk has generally been used for radiation risk assessment, although mathematical combinations such as the arithmetic or geometric mean of both the absolute and relative risks, have also been used. Combinations of the two risk estimators are not valid because the absolute and relative risk are not independent variables. Both human epidemiologic studies and animal experimental data can be found to illustrate the functional relationship between the natural cancer risk and the risk associated with radiation. This implies that the radiation risk estimate derived from one population may not be appropriate for predictions in another population, unless it is adjusted for the difference in the natural disease incidence between the two populations
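
    As a minimal numeric illustration of the two estimators described above (the rates are hypothetical, not taken from the text):

    baseline_rate = 0.020   # natural (spontaneous) disease incidence
    exposed_rate = 0.025    # incidence in the irradiated population (hypothetical)

    absolute_risk = exposed_rate - baseline_rate                         # attributable (excess) risk
    relative_increase = (exposed_rate - baseline_rate) / baseline_rate   # percentage increase in the natural rate

    print(f"absolute (attributable) risk: {absolute_risk:.3f}")
    print(f"relative risk increase:       {relative_increase:.0%} of the natural rate")
    # Because the relative estimator scales with the baseline, transferring it to a population
    # with a different natural incidence predicts a different excess than the absolute estimator.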

  18. Resection plane-dependent error in computed tomography volumetry of the right hepatic lobe in living liver donors

    Directory of Open Access Journals (Sweden)

    Heon-Ju Kwon

    2018-03-01

    Full Text Available Background/Aims Computed tomography (CT) hepatic volumetry is currently accepted as the most reliable method for preoperative estimation of graft weight in living donor liver transplantation (LDLT). However, several factors can cause inaccuracies in CT volumetry compared to real graft weight. The purpose of this study was to determine the frequency and degree of resection plane-dependent error in CT volumetry of the right hepatic lobe in LDLT. Methods Forty-six living liver donors underwent CT before donor surgery and on postoperative day 7. Prospective CT volumetry (VP) was measured via the assumptive hepatectomy plane. Retrospective liver volume (VR) was measured using the actual plane by comparing preoperative and postoperative CT. Compared with intraoperatively measured weight (W), errors in percentage (%) of VP and VR were evaluated. Plane-dependent error in VP was defined as the absolute difference between VP and VR. % plane-dependent error was defined as follows: |VP–VR|/W∙100. Results Mean VP, VR, and W were 761.9 mL, 755.0 mL, and 696.9 g. Mean and % errors in VP were 73.3 mL and 10.7%. Mean error and % error in VR were 64.4 mL and 9.3%. Mean plane-dependent error in VP was 32.4 mL. Mean % plane-dependent error was 4.7%. Plane-dependent error in VP exceeded 10% of W in approximately 10% of the subjects in our study. Conclusions There was approximately 5% plane-dependent error in liver VP on CT volumetry. Plane-dependent error in VP exceeded 10% of W in approximately 10% of LDLT donors in our study. This error should be considered, especially when CT volumetry is performed by a less experienced operator who is not well acquainted with the donor hepatectomy plane.

  19. Resection plane-dependent error in computed tomography volumetry of the right hepatic lobe in living liver donors.

    Science.gov (United States)

    Kwon, Heon-Ju; Kim, Kyoung Won; Kim, Bohyun; Kim, So Yeon; Lee, Chul Seung; Lee, Jeongjin; Song, Gi Won; Lee, Sung Gyu

    2018-03-01

    Computed tomography (CT) hepatic volumetry is currently accepted as the most reliable method for preoperative estimation of graft weight in living donor liver transplantation (LDLT). However, several factors can cause inaccuracies in CT volumetry compared to real graft weight. The purpose of this study was to determine the frequency and degree of resection plane-dependent error in CT volumetry of the right hepatic lobe in LDLT. Forty-six living liver donors underwent CT before donor surgery and on postoperative day 7. Prospective CT volumetry (VP) was measured via the assumptive hepatectomy plane. Retrospective liver volume (VR) was measured using the actual plane by comparing preoperative and postoperative CT. Compared with intraoperatively measured weight (W), errors in percentage (%) of VP and VR were evaluated. Plane-dependent error in VP was defined as the absolute difference between VP and VR. % plane-dependent error was defined as follows: |VP-VR|/W∙100. Mean VP, VR, and W were 761.9 mL, 755.0 mL, and 696.9 g. Mean and % errors in VP were 73.3 mL and 10.7%. Mean error and % error in VR were 64.4 mL and 9.3%. Mean plane-dependent error in VP was 32.4 mL. Mean % plane-dependent error was 4.7%. Plane-dependent error in VP exceeded 10% of W in approximately 10% of the subjects in our study. There was approximately 5% plane-dependent error in liver VP on CT volumetry. Plane-dependent error in VP exceeded 10% of W in approximately 10% of LDLT donors in our study. This error should be considered, especially when CT volumetry is performed by a less experienced operator who is not well acquainted with the donor hepatectomy plane.
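
    A small helper (a sketch, not the authors' code) reproducing the percentage definitions used above, applied to hypothetical single-donor values rather than the reported means, which average per-donor absolute errors:

    def volumetry_errors(v_p, v_r, w):
        """Percentage error of each CT volume against graft weight W, and the
        plane-dependent error |VP - VR| expressed as a percentage of W."""
        return {
            "%error_VP": abs(v_p - w) / w * 100.0,
            "%error_VR": abs(v_r - w) / w * 100.0,
            "%plane_dependent_error": abs(v_p - v_r) / w * 100.0,
        }

    # Hypothetical single donor: VP = 780 mL, VR = 745 mL, W = 700 g.
    print(volumetry_errors(v_p=780.0, v_r=745.0, w=700.0))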

  20. Android Apps for Absolute Beginners

    CERN Document Server

    Jackson, Wallace

    2011-01-01

    Anybody can start building simple apps for the Android platform, and this book will show you how! Android Apps for Absolute Beginners takes you through the process of getting your first Android applications up and running using plain English and practical examples. It cuts through the fog of jargon and mystery that surrounds Android application development, and gives you simple, step-by-step instructions to get you started.* Teaches Android application development in language anyone can understand, giving you the best possible start in Android development * Provides simple, step-by-step exampl

  1. Absolute pitch: a case study.

    Science.gov (United States)

    Vernon, P E

    1977-11-01

    The auditory skill known as 'absolute pitch' is discussed, and it is shown that this differs greatly in accuracy of identification or reproduction of musical tones from ordinary discrimination of 'tonal height' which is to some extent trainable. The present writer possessed absolute pitch for almost any tone or chord over the normal musical range, from about the age of 17 to 52. He then started to hear all music one semitone too high, and now at the age of 71 it is heard a full tone above the true pitch. Tests were carried out under controlled conditions, in which 68 to 95 per cent of notes were identified as one semitone or one tone higher than they should be. Changes with ageing seem more likely to occur in the elasticity of the basilar membrane mechanisms than in the long-term memory which is used for aural analysis of complex sounds. Thus this experience supports the view that some resolution of complex sounds takes place at the peripheral sense organ, and this provides information which can be incorrect, for interpretation by the cortical centres.

  2. Absolute measurement of 85Sr

    International Nuclear Information System (INIS)

    Miyahara, Hiroshi; Watanabe, Tamaki

    1978-01-01

    An extension of the 4πe.x-γ coincidence technique is described to measure the absolute disintegration rate of 85Sr. This nuclide shows electron capture-gamma decay, and the 514 keV level of 85Rb is a metastable state with a half-life of 0.958 μs. Therefore, the conventional 4πe.x-γ coincidence technique with a resolution time of about 1 μs cannot be applied to this nuclide. To measure its absolute disintegration rate, the delayed 4πe.x-γ coincidence technique with two different resolution times has been used. The disintegration rate was determined from four counting rates (electron-x ray, gamma ray, and two coincidences), and the true disintegration rate could be obtained by extrapolation of the electron-x ray detection efficiency to 1. The two resolution times appearing in the calculation formulas were determined from the chance coincidences between electron-x ray and delayed gamma-ray signals. When coincidence countings with three different resolution times were carried out with one coincidence circuit, the results calculated from all combinations did not agree with each other. However, when two coincidence circuits of the same type were used to fix the resolution times, a good coincidence absorption function was obtained and the disintegration rate was determined with an accuracy of ±0.5%. To evaluate the validity of the results, the disintegration rates were measured by two NaI(Tl) scintillation detectors whose gamma-ray detection efficiency had been determined previously, and both results agreed within an accuracy of ±0.5%. This method can be applied with nearly the same accuracy to beta-gamma decay nuclides possessing a metastable state with a half-life below about 10 μs. (auth.)

  3. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    In this review article, the definition of medication errors, the medication error problem, types of medication errors, common causes of medication errors, monitoring of medication errors, consequences of medication errors, and the prevention and management of medication errors are explained clearly, with proper tables that are easy to understand.

  4. Power and sample size calculations in the presence of phenotype errors for case/control genetic association studies

    Directory of Open Access Journals (Sweden)

    Finch Stephen J

    2005-04-01

    Full Text Available Abstract Background Phenotype error causes a reduction in power to detect genetic association. We present a quantification of the effect of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) individual as a control (respectively, case). Power is verified by computer simulation. Results Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001, and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected individual as a case becomes infinitely large, while the cost of misclassifying an affected individual as a control approaches 0. Conclusion Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
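
    A hedged sketch of the asymptotic power calculation from a non-centrality parameter, using SciPy's non-central chi-square distribution; the degrees of freedom and the non-centrality values are illustrative and are not taken from the paper.

    from scipy.stats import chi2, ncx2

    def chi2_power(ncp, df=2, alpha=0.05):
        """Asymptotic power of the Pearson chi-square test for a given non-centrality parameter."""
        critical = chi2.ppf(1.0 - alpha, df)        # rejection threshold under the null
        return ncx2.sf(critical, df, ncp)           # P(reject | alternative), non-central chi-square

    # Hypothetical illustration: phenotype misclassification shrinks the non-centrality
    # parameter, so the same sample sizes deliver less power (ncp values are made up).
    for ncp in (12.0, 8.0, 5.0):
        print(f"ncp = {ncp:5.1f}  ->  power = {chi2_power(ncp):.3f}")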

  5. The Korean version of relative and absolute reliability of gait and balance assessment tools for patients with dementia in day care center and nursing home.

    Science.gov (United States)

    Lee, Han Suk; Park, Sun Wook; Chung, Hyung Kuk

    2017-11-01

    [Purpose] This study aimed to determine the relative and absolute reliability of the Korean versions of the Berg Balance Scale (BBS), the Timed Up and Go (TUG), the Four-Meter Walking Test (4MWT) and the Groningen Meander Walking Test (GMWT) in patients with dementia. [Subjects and Methods] A total of 53 patients with dementia were tested on the TUG, BBS, 4MWT and GMWT with a prospective cohort methodological design. Intra-class Correlation Coefficients (ICCs) were calculated to assess relative reliability, and the standard error of measurement (SEM), minimal detectable change (MDC95) and its percentage (MDC%) were calculated to analyze the absolute reliability. [Results] Inter-rater reliability (ICC(2,3)) of the TUG, BBS and GMWT was 0.99 and that of the 4MWT was 0.82. Inter-rater reliability was high for the TUG, BBS and GMWT, with low SEM, MDC95, and MDC%, and low for the 4MWT, with high SEM, MDC95, and MDC%. Test-retest reliability (ICC(2,3)) of the TUG, BBS and GMWT was 0.96-0.99 and that of the 4MWT was 0.85. Test-retest reliability was high for the TUG, BBS and GMWT, with low SEM, MDC95, and MDC%, but low for the 4MWT, with high SEM, MDC95, and MDC%. [Conclusion] The relative reliability was high for all the assessment tools. The absolute reliability has a reasonable level of stability, except for the 4MWT.

  6. Error Budgeting

    Energy Technology Data Exchange (ETDEWEB)

    Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-04

    We calculate opacity from k(hν) = -ln[T(hν)]/(pL), where T(hν) is the transmission for photon energy hν, p is the sample density, and L is the path length through the sample. The density and path length are measured together by Rutherford backscatter. Δk = (∂k/∂T)ΔT + (∂k/∂(pL))Δ(pL). We can re-write this in terms of fractional error as Δk/k = Δln(T)/ln(T) + Δ(pL)/(pL). Transmission itself is calculated from T = (U-E)/(V-E) = B/B0, where B is the transmitted backlighter (BL) signal and B0 is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB0/B0, and consequently Δk/k = (1/ln T)(ΔB/B + ΔB0/B0) + Δ(pL)/(pL). Transmission is measured in the range of 0.2
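
    A small sketch of the fractional error budget above, evaluated for hypothetical transmission and signal errors (the numbers are illustrative, not from the report):

    import numpy as np

    def opacity_fractional_error(T, dB_over_B, dB0_over_B0, dpL_over_pL):
        """Fractional opacity error from the budget above:
        dk/k = (1/|ln T|) * (dB/B + dB0/B0) + d(pL)/(pL)."""
        return (dB_over_B + dB0_over_B0) / abs(np.log(T)) + dpL_over_pL

    # Hypothetical inputs: 50% transmission, 2% backlighter errors, 3% areal-density error.
    print(f"dk/k = {opacity_fractional_error(0.5, 0.02, 0.02, 0.03):.3f}")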

  7. AC Own Motion Percentage of Randomly Sampled Cases

    Data.gov (United States)

    Social Security Administration — Longitudinal report detailing the numbers and percentages of Appeals Council (AC) own motion review actions taken on un-appealed favorable hearing level decisions...

  8. Binomial Distribution Sample Confidence Intervals Estimation 7. Absolute Risk Reduction and ARR-like Expressions

    Directory of Open Access Journals (Sweden)

    Andrei ACHIMAŞ CADARIU

    2004-08-01

    Full Text Available Assessments of a controlled clinical trial require the interpretation of key parameters such as the control event rate, experimental event rate, relative risk, absolute risk reduction, relative risk reduction, and number needed to treat when the effects of the treatment are dichotomous variables. Defined as the difference in the event rate between treatment and control groups, the absolute risk reduction is the parameter that allows computing the number needed to treat. The absolute risk reduction is computed when the experimental treatment reduces the risk of an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence interval, the method used is the asymptotic one, even though it is well known that it may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions. Computer implementations of the methods use the PHP language. Methods comparison uses the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance in computing confidence intervals for the absolute risk reduction.
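
    A minimal sketch of the absolute risk reduction, its simple asymptotic (Wald) confidence interval, which is the method the abstract notes may be inadequate, and the number needed to treat; the trial counts are hypothetical and this is not the paper's PHP implementation.

    import math

    def arr_nnt_wald(events_control, n_control, events_treated, n_treated, z=1.96):
        """Absolute risk reduction, its asymptotic (Wald) confidence interval,
        and the number needed to treat."""
        cer = events_control / n_control            # control event rate
        eer = events_treated / n_treated            # experimental event rate
        arr = cer - eer
        se = math.sqrt(cer * (1 - cer) / n_control + eer * (1 - eer) / n_treated)
        nnt = math.inf if arr == 0 else 1.0 / arr
        return arr, (arr - z * se, arr + z * se), nnt

    # Hypothetical trial counts, for illustration only.
    arr, ci, nnt = arr_nnt_wald(30, 100, 18, 100)
    print(f"ARR = {arr:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), NNT ~ {nnt:.1f}")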

  9. Absolute entropy of ions in methanol

    International Nuclear Information System (INIS)

    Abakshin, V.A.; Kobenin, V.A.; Krestov, G.A.

    1978-01-01

    By measuring the initial thermo-electromotive forces of cells with silver-bromide electrodes in tetraalkylammonium bromide solutions, the absolute entropy of the bromide ion in methanol is determined in the 298.15-318.15 K range. The value S°(Br⁻) = 9.8 entropy units is used for the calculation of the absolute partial molar entropies of alkali metal ions and halide ions. It has been found that the absolute entropy of Cs⁺ is 12.0 entropy units and that of I⁻ is 14.0 entropy units. The obtained absolute ion entropies in methanol at 298.15 K agree with the published data to within 1-2 entropy units.

  10. Near threshold absolute TDCS: First results

    International Nuclear Information System (INIS)

    Roesel, T.; Schlemmer, P.; Roeder, J.; Frost, L.; Jung, K.; Ehrhardt, H.

    1992-01-01

    A new method, and first results for an impact energy 2 eV above the threshold of ionisation of helium, are presented for the measurement of absolute triple differential cross sections (TDCS) in a crossed beam experiment. The method is based upon measurement of beam/target overlap densities using known absolute total ionisation cross sections and of detection efficiencies using known absolute double differential cross sections (DDCS). For the present work the necessary absolute DDCS for 1 eV electrons had also to be measured. Results are presented for several different coplanar kinematics and are compared with recent DWBA calculations. (orig.)

  11. The percentage of nosocomial-related out of total hospitalizations for rotavirus gastroenteritis and its association with hand hygiene compliance.

    Science.gov (United States)

    Waisbourd-Zinman, Orith; Ben-Ziony, Shiri; Solter, Ester; Chodick, Gabriel; Ashkenazi, Shai; Livni, Gilat

    2011-03-01

    Because the absolute numbers of both community-acquired and nosocomial rotavirus gastroenteritis (RVGE) vary, we studied the percentage of hospitalizations for RVGE that were transmitted nosocomially as an indicator of in-hospital acquisition of the infection. In a 4-year prospective study, the percentage of nosocomial RVGE declined steadily, from 20.3% in 2003 to 12.7% in 2006 (P = .001). Concomitantly, the rate of compliance with hand hygiene increased from 33.7% to 49% (P = .012), with a significant (P Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.

  12. 26 CFR 1.613-1 - Percentage depletion; general rule.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 7 2010-04-01 2010-04-01 true Percentage depletion; general rule. 1.613-1... TAX (CONTINUED) INCOME TAXES (CONTINUED) Natural Resources § 1.613-1 Percentage depletion; general rule. (a) In general. In the case of a taxpayer computing the deduction for depletion under section 611...

  13. Determination of percentage of caffeine content in some analgesic ...

    African Journals Online (AJOL)

    Two methods were employed for the determination of the percentage caffeine content in three brands of analgesic tablets: extraction using only water as a solvent, and extraction using both water and chloroform as solvents. A watch glass was used as the weighing apparatus, and the percentage of caffeine ...

  14. 78 FR 48789 - Loan Guaranty: Percentage to Determine Net Value

    Science.gov (United States)

    2013-08-09

    ... DEPARTMENT OF VETERANS AFFAIRS Loan Guaranty: Percentage to Determine Net Value AGENCY: Department... mortgage holders in the Department of Veterans Affairs (VA) loan guaranty program concerning the percentage to be used in calculating the purchase price of a property that secured a terminated loan. The new...

  15. 7 CFR 982.41 - Free and restricted percentages.

    Science.gov (United States)

    2010-01-01

    ... percentages in effect at the end of the previous marketing year shall be applicable. [51 FR 29548, Aug. 19... Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing... WASHINGTON Order Regulating Handling Marketing Policy § 982.41 Free and restricted percentages. The free and...

  16. Determination of percentage of caffeine content

    African Journals Online (AJOL)

    userpc

    ABSTRACT. Two methods were employed for the determination of the percentage caffeine content in three brands of analgesic tablets: extraction using only water as a solvent, and extraction using both water and chloroform as solvents; a watch glass was used as the weighing apparatus. The percentages of caffeine obtained using only water for ..., Boska, and Panadol Extra were 7.40%, 5.60 ..., and the percentage of caffeine obtained using both water and chloroform i...

  17. 78 FR 33757 - Rural Determination and Financing Percentage

    Science.gov (United States)

    2013-06-05

    ... Agency for determining what percentage of a project is eligible for RUS financing if the Rural Percentage... defined as rural. As the Agency investigates financing options for projects owned by entities other than... inability to fund 100 percent of the financing needs of a given project has undermined the Agency's effort...

  18. Error estimation in plant growth analysis

    Directory of Open Access Journals (Sweden)

    Andrzej Gregorczyk

    2014-01-01

    Full Text Available A scheme is presented for the calculation of errors of dry matter values which occur during the approximation of data with growth curves, determined by the analytical method (logistic function) and by the numerical method (Richards function). Further formulae are shown which describe the absolute errors of growth characteristics: Growth rate (GR), Relative growth rate (RGR), Unit leaf rate (ULR) and Leaf area ratio (LAR). Calculation examples concerning the growth course of oat and maize plants are given. A critical analysis of the obtained estimates has been carried out. The usefulness of the joint application of statistical methods and error calculus in plant growth analysis has been ascertained.

  19. Introducing the Mean Absolute Deviation "Effect" Size

    Science.gov (United States)

    Gorard, Stephen

    2015-01-01

    This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…
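
    A short illustration, not taken from the paper: the mean absolute deviation and a simple difference-in-means effect size scaled by it, in the spirit of the measure discussed above (the group scores are made up).

    import statistics

    def mean_absolute_deviation(values):
        """Mean absolute deviation about the arithmetic mean."""
        m = statistics.fmean(values)
        return sum(abs(x - m) for x in values) / len(values)

    # Hypothetical scores from two groups; the effect size divides the difference in
    # means by the pooled mean absolute deviation.
    treatment = [12, 14, 15, 15, 16, 18, 21]
    control = [10, 11, 13, 13, 14, 15, 17]
    pooled_mad = (mean_absolute_deviation(treatment) + mean_absolute_deviation(control)) / 2
    effect = (statistics.fmean(treatment) - statistics.fmean(control)) / pooled_mad
    print(f"MAD-based effect size ~ {effect:.2f}")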

  20. Investigating Absolute Value: A Real World Application

    Science.gov (United States)

    Kidd, Margaret; Pagni, David

    2009-01-01

    Making connections between various representations is important in mathematics. In this article, the authors discuss the numeric, algebraic, and graphical representations of sums of absolute values of linear functions. The initial explanations are accessible to all students who have experience graphing and who understand that absolute value simply…

  1. Dressing percentage and Carcass characteristics of four Indigenous ...

    African Journals Online (AJOL)

    Dressing percentage and Carcass characteristics of four Indigenous cattle breeds in Nigeria. ... Nigerian Journal of Animal Production ... Their feed intake, live and carcasses weights and the weights of their major carcass components and ...

  2. An absolute distance interferometer with two external cavity diode lasers

    International Nuclear Information System (INIS)

    Hartmann, L; Meiners-Hagen, K; Abou-Zeid, A

    2008-01-01

    An absolute interferometer for length measurements in the range of several metres has been developed. The use of two external cavity diode lasers allows the implementation of a two-step procedure which combines the length measurement with a variable synthetic wavelength and its interpolation with a fixed synthetic wavelength. This synthetic wavelength is obtained at ≈42 µm by a modulation-free stabilization of both lasers to Doppler-reduced rubidium absorption lines. A stable reference interferometer is used as length standard. Different contributions to the total measurement uncertainty are discussed. It is shown that the measurement uncertainty can considerably be reduced by correcting the influence of vibrations on the measurement result and by applying linear regression to the quadrature signals of the absolute interferometer and the reference interferometer. The comparison of the absolute interferometer with a counting interferometer for distances up to 2 m results in a linearity error of 0.4 µm in good agreement with an estimation of the measurement uncertainty

  3. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^-(d^n-1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  4. A global algorithm for estimating Absolute Salinity

    Science.gov (United States)

    McDougall, T. J.; Jackett, D. R.; Millero, F. J.; Pawlowicz, R.; Barker, P. M.

    2012-12-01

    The International Thermodynamic Equation of Seawater - 2010 has defined the thermodynamic properties of seawater in terms of a new salinity variable, Absolute Salinity, which takes into account the spatial variation of the composition of seawater. Absolute Salinity more accurately reflects the effects of the dissolved material in seawater on the thermodynamic properties (particularly density) than does Practical Salinity. When a seawater sample has standard composition (i.e. the ratios of the constituents of sea salt are the same as those of surface water of the North Atlantic), Practical Salinity can be used to accurately evaluate the thermodynamic properties of seawater. When seawater is not of standard composition, Practical Salinity alone is not sufficient and the Absolute Salinity Anomaly needs to be estimated; this anomaly is as large as 0.025 g kg-1 in the northernmost North Pacific. Here we provide an algorithm for estimating Absolute Salinity Anomaly for any location (x, y, p) in the world ocean. To develop this algorithm, we used the Absolute Salinity Anomaly that is found by comparing the density calculated from Practical Salinity to the density measured in the laboratory. These estimates of Absolute Salinity Anomaly however are limited to the number of available observations (namely 811). In order to provide a practical method that can be used at any location in the world ocean, we take advantage of approximate relationships between Absolute Salinity Anomaly and silicate concentrations (which are available globally).

  5. Globular Clusters: Absolute Proper Motions and Galactic Orbits

    Science.gov (United States)

    Chemel, A. A.; Glushkova, E. V.; Dambis, A. K.; Rastorguev, A. S.; Yalyalieva, L. N.; Klinichev, A. D.

    2018-04-01

    We cross-match objects from several different astronomical catalogs to determine the absolute proper motions of stars within the 30-arcmin radius fields of 115 Milky-Way globular clusters with an accuracy of 1-2 mas yr-1. The proper motions are based on positional data recovered from the USNO-B1, 2MASS, URAT1, ALLWISE, UCAC5, and Gaia DR1 surveys with up to ten positions spanning an epoch difference of up to about 65 years, and reduced to the Gaia DR1 TGAS frame using UCAC5 as the reference catalog. Cluster members are photometrically identified by selecting horizontal- and red-giant branch stars on color-magnitude diagrams, and the mean absolute proper motions of the clusters with a typical formal error of about 0.4 mas yr-1 are computed by averaging the proper motions of selected members. The inferred absolute proper motions of the clusters are combined with available radial-velocity data and heliocentric distance estimates to compute the cluster orbits in terms of Galactic potential models based on a Miyamoto-Nagai disk, a Hernquist spheroid, and a modified isothermal dark-matter halo (an axisymmetric model without a bar) and the same model plus a rotating Ferrers bar (non-axisymmetric). Five distant clusters have higher-than-escape velocities, most likely due to large errors in the computed transversal velocities, whereas the computed orbits of all other clusters remain bound to the Galaxy. Unlike previously published results, we find that the bar substantially affects the orbits of most of the clusters, even those at large Galactocentric distances, bringing appreciable chaotization, especially in the portions of the orbits close to the Galactic center, and stretching out the orbits of some of the thick-disk clusters.

  6. The absolute environmental performance of buildings

    DEFF Research Database (Denmark)

    Brejnrod, Kathrine Nykjær; Kalbar, Pradip; Petersen, Steffen

    2017-01-01

    Our paper presents a novel approach for absolute sustainability assessment of a building's environmental performance. It is demonstrated how the absolute sustainable share of the earth carrying capacity of a specific building type can be estimated using carrying capacity based normalization factors....... A building is considered absolute sustainable if its annual environmental burden is less than its share of the earth environmental carrying capacity. Two case buildings – a standard house and an upcycled single-family house located in Denmark – were assessed according to this approach and both were found...... to exceed the target values of three (almost four) of the eleven impact categories included in the study. The worst-case excess was for the case building, representing prevalent Danish building practices, which utilized 1563% of the Climate Change carrying capacity. Four paths to reach absolute...

  7. Absolute calibration technique for spontaneous fission sources

    International Nuclear Information System (INIS)

    Zucker, M.S.; Karpf, E.

    1984-01-01

    An absolute calibration technique for a spontaneously fissioning nuclide (which involves no arbitrary parameters) allows unique determination of the detector efficiency for that nuclide, hence of the fission source strength

  8. MEAN OF MEDIAN ABSOLUTE DERIVATION TECHNIQUE MEAN ...

    African Journals Online (AJOL)

    eobe

    development of a mean of median absolute derivation technique based on .... of noise mean to estimate the speckle noise variance. Noise mean property ..... Foraging Optimization,” International Journal of Advanced ...

  9. Dosimetric Changes Resulting From Patient Rotational Setup Errors in Proton Therapy Prostate Plans

    International Nuclear Information System (INIS)

    Sejpal, Samir V.; Amos, Richard A.; Bluett, Jaques B.; Levy, Lawrence B.; Kudchadker, Rajat J.; Johnson, Jennifer; Choi, Seungtaek; Lee, Andrew K.

    2009-01-01

    Purpose: To evaluate the dose changes to the target and critical structures from rotational setup errors in prostate cancer patients treated with proton therapy. Methods and Materials: A total of 70 plans were analyzed for 10 patients treated with parallel-opposed proton beams to a dose of 7,600 60Co-cGy-equivalent (CcGE) in fractions of 200 CcGE to the clinical target volume (i.e., prostate and proximal seminal vesicles). Rotational setup errors of +3°, -3°, +5°, and -5° (to simulate pelvic tilt) were generated by adjusting the gantry. Horizontal couch shifts of +3° and -3° (to simulate longitudinal setup variability) were also generated. Verification plans were recomputed, keeping the same treatment parameters as the control. Results: All changes shown are for 38 fractions. The mean clinical target volume dose was 7,780 CcGE. The mean change in the clinical target volume dose in the worst-case scenario for all shifts was 2 CcGE (absolute range in the worst-case scenario, 7,729-7,848 CcGE). The mean changes in the critical organ doses in the worst-case scenario were 6 CcGE (bladder), 18 CcGE (rectum), 36 CcGE (anterior rectal wall), and 141 CcGE (femoral heads) for all plans. In general, the percentage change in the worst-case scenario for all shifts to the critical structures was <5%. Deviations in the absolute percentage of organ volume receiving 45 and 70 Gy for the bladder and rectum were <2% for all plans. Conclusion: Patient rotational movements of 3° and 5° and horizontal couch shifts of 3° in prostate proton planning did not confer clinically significant dose changes to the target volumes or critical structures.

  10. Absolute spectrophotometry of Nova Cygni 1975

    International Nuclear Information System (INIS)

    Kontizas, E.; Kontizas, M.; Smyth, M.J.

    1976-01-01

    Radiometric photoelectric spectrophotometry of Nova Cygni 1975 was carried out on 1975 August 31, September 2, 3. α Lyr was used as the reference star and its absolute spectral energy distribution was used to reduce the spectrophotometry of the nova to absolute units. Emission strengths of Hα, Hβ, Hγ (in W cm-2) were derived. The Balmer decrement Hα:Hβ:Hγ was compared with theory, and found to deviate less than had been reported for an earlier nova. (author)

  11. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    

 The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  12. A global algorithm for estimating Absolute Salinity

    Directory of Open Access Journals (Sweden)

    T. J. McDougall

    2012-12-01

    Full Text Available The International Thermodynamic Equation of Seawater – 2010 has defined the thermodynamic properties of seawater in terms of a new salinity variable, Absolute Salinity, which takes into account the spatial variation of the composition of seawater. Absolute Salinity more accurately reflects the effects of the dissolved material in seawater on the thermodynamic properties (particularly density) than does Practical Salinity.

    When a seawater sample has standard composition (i.e. the ratios of the constituents of sea salt are the same as those of surface water of the North Atlantic), Practical Salinity can be used to accurately evaluate the thermodynamic properties of seawater. When seawater is not of standard composition, Practical Salinity alone is not sufficient and the Absolute Salinity Anomaly needs to be estimated; this anomaly is as large as 0.025 g kg−1 in the northernmost North Pacific. Here we provide an algorithm for estimating Absolute Salinity Anomaly for any location (x, y, p) in the world ocean.

    To develop this algorithm, we used the Absolute Salinity Anomaly that is found by comparing the density calculated from Practical Salinity to the density measured in the laboratory. These estimates of Absolute Salinity Anomaly however are limited to the number of available observations (namely 811). In order to provide a practical method that can be used at any location in the world ocean, we take advantage of approximate relationships between Absolute Salinity Anomaly and silicate concentrations (which are available globally).

  13. Artificial neural networks for prediction of percentage of water ...

    Indian Academy of Sciences (India)

    have high compressive strengths in comparison with concrete specimens ... presenting a suitable model based on artificial neural networks (ANNs) to ... by experimental ones to evaluate the software power for predicting the ..... Figure 7. Correlation of measured and predicted percentage of water absorption values of.

  14. Quantitative trait locus (QTL) analysis of percentage grains ...

    African Journals Online (AJOL)

    user

    2011-03-28

    Mar 28, 2011 ... ATA/M-CGT; (B) AFLP results using primer E-AAA/M-CTC; (C) AFLP results using primer E-AAA/M-CTA. 1,. Minghui63; 2, Zhengshan97A; 3, low PGC bulk; 4, high PGC bulk. The arrow show linkage segments of percentage chalky grain in rice. Table 1. Chromosomal location of AFLP segments linked to ...

  15. 7 CFR 987.44 - Free and restricted percentages.

    Science.gov (United States)

    2010-01-01

    ... Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing... applicable grade and size available to supply the trade demand for free dates of any variety is likely to be... effectuate the declared policy of the act, it shall recommend such percentages to the Secretary. If the...

  16. 75 FR 35098 - Federal Employees' Retirement System; Normal Cost Percentages

    Science.gov (United States)

    2010-06-21

    ... normal cost percentages and requests for actuarial assumptions and data to the Board of Actuaries, care of Gregory Kissel, Actuary, Office of Planning and Policy Analysis, Office of Personnel Management... Regulations, regulates how normal costs are determined. Recently, the Board of Actuaries of the Civil Service...

  17. Artificial neural networks for prediction of percentage of water

    Indian Academy of Sciences (India)

    Artificial neural networks for prediction of percentage of water absorption of geopolymers produced by waste ashes. Ali Nazari. Bulletin of Materials Science, Volume 35, Issue 6, November 2012, pp 1019-1029 ...

  18. Coral Reef Coverage Percentage on Binor Paiton-Probolinggo Seashore

    Directory of Open Access Journals (Sweden)

    Dwi Budi Wiyanto

    2016-01-01

    Full Text Available The coral reef damage in the Probolinggo region was expected to be caused by several factors. The first one comes from its society that exploits fishery by using cyanide toxin and bombs. The second one goes to the extraction of coral reef, which is used as decoration or construction material. The other factor is likely caused by the existence of large industry on the seashore, such as the Electric Steam Power Plant (PLTU) Paiton and others alike. Related to the development of the coral reef ecosystem, the availability of accurate data is crucially needed to support future policy, so research on the coral reef coverage percentage needs to be conducted continuously. The aim of this research is to collect biological data on the coral reef and to identify the coral reef coverage percentage in the effort of constructing basic data on coral reef condition on the Binor, Paiton, Probolinggo regency seashore. The method used in this research is the Line Intercept Transect (LIT) method. The LIT method is used to determine the benthic community on the coral reef based on percentage growth, and to take note of the benthic quantity along the transect line. The percentage of living coral coverage at 3 meters depth on this Binor Paiton seashore, which may be categorized as in good condition, is 57,65%, while the rest are dead coral at only 1,45%, other life forms at 23,2%, and non-life forms at 17,7%. The good condition of the coral reef is caused by coral reef transplantation on the seashore, so this coral reef is dominated by Acropora Branching. On the other hand, the Mortality Index (IM) of the coral reef resulted in 24,5%. The result from observation and calculation of the coral reef is dominated by Hard Coral in Acropora Branching (ACB) with a coral reef coverage percentage of 39%, Coral Massive (CM) with a coral reef coverage percentage of 2,85%, Coral Foliose (CF) with a coral reef coverage percentage of 1,6%, and Coral Mushroom (CRM) with a coral reef coverage percentage of 8,5%. Observation in 10 meters depth

  19. Coral Reef Coverage Percentage on Binor Paiton-Probolinggo Seashore

    Directory of Open Access Journals (Sweden)

    Dwi Budi Wiyanto

    2016-02-01

    Full Text Available The coral reef damage in the Probolinggo region was expected to be caused by several factors. The first one comes from its society that exploits fishery by using cyanide toxin and bombs. The second one goes to the extraction of coral reef, which is used as decoration or construction material. The other factor is likely caused by the existence of large industry on the seashore, such as the Electric Steam Power Plant (PLTU) Paiton and others alike. Related to the development of the coral reef ecosystem, the availability of accurate data is crucially needed to support future policy, so research on the coral reef coverage percentage needs to be conducted continuously. The aim of this research is to collect biological data on the coral reef and to identify the coral reef coverage percentage in the effort of constructing basic data on coral reef condition on the Binor, Paiton, Probolinggo regency seashore. The method used in this research is the Line Intercept Transect (LIT) method. The LIT method is used to determine the benthic community on the coral reef based on percentage growth, and to take note of the benthic quantity along the transect line. The percentage of living coral coverage at 3 meters depth on this Binor Paiton seashore, which may be categorized as in good condition, is 57,65%, while the rest are dead coral at only 1,45%, other life forms at 23,2%, and non-life forms at 17,7%. The good condition of the coral reef is caused by coral reef transplantation on the seashore, so this coral reef is dominated by Acropora Branching. On the other hand, the Mortality Index (IM) of the coral reef resulted in 24,5%. The result from observation and calculation of the coral reef is dominated by Hard Coral in Acropora Branching (ACB) with a coral reef coverage percentage of 39%, Coral Massive (CM) with a coral reef coverage percentage of 2,85%, Coral Foliose (CF) with a coral reef coverage percentage of 1,6%, and Coral Mushroom (CRM) with a coral reef coverage percentage of 8,5%. Observation in 10 meters depth

  20. Analysis of the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE) in Assessing Rounding Model

    Science.gov (United States)

    Wang, Weijie; Lu, Yanmin

    2018-03-01

    Most existing Collaborative Filtering (CF) algorithms predict a rating, i.e. the preference of an active user for a given item, as a decimal fraction, whereas the actual ratings in most data sets are integers. In this paper, we discuss and demonstrate why rounding affects these two metrics differently, and we show that rounding the predicted ratings in post-processing is necessary, as it eliminates model prediction bias and improves prediction accuracy. In addition, we propose two new rounding approaches based on the predicted rating probability distribution, which round the predicted rating to an optimal integer rating and achieve better prediction accuracy than the Basic Rounding approach. Extensive experiments on different data sets validate the correctness of our analysis and the effectiveness of the proposed rounding approaches.
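
    As a concrete illustration of the two metrics discussed above, the following sketch computes MAE and RMSE for a set of predicted ratings before and after rounding to integers. The rating values and the simple rounding rule are illustrative assumptions, not data or the probabilistic rounding approaches from the paper.

      # Sketch: effect of rounding predicted ratings on MAE and RMSE.
      # The data below are illustrative, not taken from the paper.

      def mae(actual, predicted):
          """Mean Absolute Error."""
          return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

      def rmse(actual, predicted):
          """Root Mean Square Error."""
          return (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)) ** 0.5

      actual = [4, 3, 5, 2, 4, 1, 5, 3]                     # integer ratings, as in most data sets
      predicted = [3.6, 3.2, 4.4, 2.5, 4.1, 1.8, 4.6, 2.9]  # decimal predictions from a CF model
      rounded = [round(p) for p in predicted]               # "Basic Rounding" (Python rounds .5 ties to even)

      print("MAE  raw / rounded:", mae(actual, predicted), mae(actual, rounded))
      print("RMSE raw / rounded:", rmse(actual, predicted), rmse(actual, rounded))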

  1. Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction

    Directory of Open Access Journals (Sweden)

    Tianzhou Chen

    2013-09-01

    Full Text Available Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity and atmospheric pressure. In order to solve the problem of inaccurate measurement, which is significant within industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the relationship of the topological structure of sensor arrays, is implemented for the compensation of erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes at the top speed. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients for neighbor sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of this method, which means that the proposed method performs extremely well. The optimized method of distance measurement we propose, with the capability of NEC, would bring a significant advantage for intelligent industrial automation.
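
    The headline figure quoted above is a mean absolute percentage error. For reference, a minimal sketch of how MAPE is typically computed over measured versus true distances is given below; the distance values are illustrative only and are not data from the sensor array described in the paper.

      # Minimal sketch: mean absolute percentage error (MAPE) for distance readings.

      def mape(true_values, measured_values):
          """MAPE in percent; the true values must be non-zero."""
          n = len(true_values)
          return 100.0 / n * sum(abs((t - m) / t) for t, m in zip(true_values, measured_values))

      true_mm = [500.0, 1200.0, 2500.0, 4000.0]       # illustrative reference distances
      measured_mm = [504.5, 1191.0, 2512.0, 3988.0]   # illustrative sensor readings
      print(f"MAPE = {mape(true_mm, measured_mm):.2f}%")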

  2. Absolute isotopic abundances of Ti in meteorites

    International Nuclear Information System (INIS)

    Niederer, F.R.; Papanastassiou, D.A.; Wasserburg, G.J.

    1985-01-01

    The absolute isotope abundance of Ti has been determined in Ca-Al-rich inclusions from the Allende and Leoville meteorites and in samples of whole meteorites. The absolute Ti isotope abundances differ by a significant mass dependent isotope fractionation transformation from the previously reported abundances, which were normalized for fractionation using 46 Ti/ 48 Ti. Therefore, the absolute compositions define distinct nucleosynthetic components from those previously identified or reflect the existence of significant mass dependent isotope fractionation in nature. We provide a general formalism for determining the possible isotope compositions of the exotic Ti from the measured composition, for different values of isotope fractionation in nature and for different mixing ratios of the exotic and normal components. The absolute Ti and Ca isotopic compositions still support the correlation of 50 Ti and 48 Ca effects in the FUN inclusions and imply contributions from neutron-rich equilibrium or quasi-equilibrium nucleosynthesis. The present identification of endemic effects at 46 Ti, for the absolute composition, implies a shortfall of an explosive-oxygen component or reflects significant isotope fractionation. Additional nucleosynthetic components are required by 47 Ti and 49 Ti effects. Components are also defined in which 48 Ti is enhanced. Results are given and discussed. (author)

  3. [Errors in Peruvian medical journals references].

    Science.gov (United States)

    Huamaní, Charles; Pacheco-Romero, José

    2009-01-01

    References are fundamental to our studies; an adequate selection is as important as an adequate description. The objective was to determine the number of errors in a sample of references found in Peruvian medical journals. We reviewed 515 references from scientific papers, selected by systematic randomized sampling, and corroborated the reference information against the original document or its citation in PubMed, LILACS or SciELO-Peru. We found errors in 47.6% (245) of the references, identifying 372 errors in total; the most frequent were errors in presentation style (120), authorship (100) and title (100), mainly due to spelling mistakes (91). The percentage of reference errors was high, and the errors were varied and multiple. We suggest systematic revision of references in the editorial process, as well as extending the discussion of this topic. Keywords: references, periodicals, research, bibliometrics.

  4. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
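
    Although the chapter itself is not reproduced here, the probability ellipse it refers to can be sketched numerically: the semi-axes of the 1-sigma error ellipse are the square roots of the eigenvalues of the 2x2 position covariance matrix, and its orientation follows the eigenvectors. The covariance values below are assumptions chosen for illustration.

      # Sketch: 1-sigma error ellipse parameters from a 2x2 position covariance matrix.
      import numpy as np

      cov = np.array([[4.0, 1.5],      # variances/covariance in m^2 (illustrative values)
                      [1.5, 2.0]])

      eigvals, eigvecs = np.linalg.eigh(cov)           # eigh: for symmetric matrices
      semi_minor, semi_major = np.sqrt(eigvals)        # eigh returns eigenvalues in ascending order
      angle = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))  # orientation of the major axis

      print(f"semi-major axis: {semi_major:.2f} m")
      print(f"semi-minor axis: {semi_minor:.2f} m")
      print(f"orientation:     {angle:.1f} deg from the x-axis")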

  5. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
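
    As a small worked example of the kind of propagation described, first-order propagation for independent measured values combines the partial derivatives with the individual variances. The function and uncertainties below are assumptions for illustration, not the chapter's uranium hexafluoride example.

      # First-order (Gaussian) error propagation for z = x * y with independent x and y:
      #   sigma_z^2 = (dz/dx)^2 * sigma_x^2 + (dz/dy)^2 * sigma_y^2

      x, sigma_x = 10.0, 0.2   # illustrative measured value and standard uncertainty
      y, sigma_y = 5.0, 0.1

      z = x * y
      sigma_z = ((y * sigma_x) ** 2 + (x * sigma_y) ** 2) ** 0.5

      print(f"z = {z:.2f} +/- {sigma_z:.2f}")
      # For a product, relative errors add in quadrature:
      print(f"relative error: {sigma_z / z:.3%} "
            f"(check: {((sigma_x / x) ** 2 + (sigma_y / y) ** 2) ** 0.5:.3%})")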

  6. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.

  7. Absolute calibration in vivo measurement systems

    International Nuclear Information System (INIS)

    Kruchten, D.A.; Hickman, D.P.

    1991-02-01

    Lawrence Livermore National Laboratory (LLNL) is currently investigating a new method for obtaining absolute calibration factors for radiation measurement systems used to measure internally deposited radionuclides in vivo. Absolute calibration of in vivo measurement systems will eliminate the need to generate a series of human surrogate structures (i.e., phantoms) for calibrating in vivo measurement systems. The absolute calibration of in vivo measurement systems utilizes magnetic resonance imaging (MRI) to define physiological structure, size, and composition. The MRI image provides a digitized representation of the physiological structure, which allows for any mathematical distribution of radionuclides within the body. Using Monte Carlo transport codes, the emission spectrum from the body is predicted. The in vivo measurement equipment is calibrated using the Monte Carlo code and adjusting for the intrinsic properties of the detection system. The calibration factors are verified using measurements of existing phantoms and previously obtained measurements of human volunteers. 8 refs

  8. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years, Generalized Gaussian Error Calculus offers a rigorous, complete and self-consistent revision of the Gaussian error calculus. Once experimentalists realized that measurements are in general burdened by unknown systematic errors, the classical, widely used evaluation procedures, which scrutinize the consequences of random errors alone, turned out to be obsolete. Naturally, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are required to conform to what the author calls well-defined measuring conditions. The approach has the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  9. Redetermination and absolute configuration of atalaphylline

    Directory of Open Access Journals (Sweden)

    Hoong-Kun Fun

    2010-02-01

    Full Text Available The title acridone alkaloid [systematic name: 1,3,5-trihydroxy-2,4-bis(3-methylbut-2-enyl)acridin-9(10H)-one], C23H25NO4, has previously been reported as crystallizing in the chiral orthorhombic space group P212121 [Chantrapromma et al. (2010). Acta Cryst. E66, o81–o82], but the absolute configuration could not be determined from data collected with Mo radiation. The absolute configuration has now been determined by refinement of the Flack parameter with data collected using Cu radiation. All features of the molecule and its crystal packing are similar to those previously described.

  10. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  11. Automated absolute activation analysis with californium-252 sources

    International Nuclear Information System (INIS)

    MacMurdo, K.W.; Bowman, W.W.

    1978-09-01

    A 100-mg 252 Cf neutron activation analysis facility is used routinely at the Savannah River Laboratory for multielement analysis of many solid and liquid samples. An absolute analysis technique converts counting data directly to elemental concentration without the use of classical comparative standards and flux monitors. With the totally automated pneumatic sample transfer system, cyclic irradiation-decay-count regimes can be pre-selected for up to 40 samples, and samples can be analyzed with the facility unattended. An automatic data control system starts and stops a high-resolution gamma-ray spectrometer and/or a delayed-neutron detector; the system also stores data and controls output modes. Gamma ray data are reduced by three main programs in the IBM 360/195 computer: the 4096-channel spectrum and pertinent experimental timing, counting, and sample data are stored on magnetic tape; the spectrum is then reduced to a list of significant photopeak energies, integrated areas, and their associated statistical errors; and the third program assigns gamma ray photopeaks to the appropriate neutron activation product(s) by comparing photopeak energies to tabulated gamma ray energies. Photopeak areas are then converted to elemental concentration by using experimental timing and sample data, calculated elemental neutron capture rates, absolute detector efficiencies, and absolute spectroscopic decay data. Calculational procedures have been developed so that fissile material can be analyzed by cyclic neutron activation and delayed-neutron counting procedures. These calculations are based on a 6 half-life group model of delayed neutron emission; calculations include corrections for delayed neutron interference from 17 O. Detection sensitivities of 239 Pu were demonstrated with 15-g samples at a throughput of up to 140 per day. Over 40 elements can be detected at the sub-ppM level

  12. Absolutely relative or relatively absolute: violations of value invariance in human decision making.

    Science.gov (United States)

    Teodorescu, Andrei R; Moran, Rani; Usher, Marius

    2016-02-01

    Making decisions based on relative rather than absolute information processing is tied to choice optimality via the accumulation of evidence differences and to canonical neural processing via accumulation of evidence ratios. These theoretical frameworks predict invariance of decision latencies to absolute intensities that maintain differences and ratios, respectively. While information about the absolute values of the choice alternatives is not necessary for choosing the best alternative, it may nevertheless hold valuable information about the context of the decision. To test the sensitivity of human decision making to absolute values, we manipulated the intensities of brightness stimuli pairs while preserving either their differences or their ratios. Although asked to choose the brighter alternative relative to the other, participants responded faster to higher absolute values. Thus, our results provide empirical evidence for human sensitivity to task irrelevant absolute values indicating a hard-wired mechanism that precedes executive control. Computational investigations of several modelling architectures reveal two alternative accounts for this phenomenon, which combine absolute and relative processing. One account involves accumulation of differences with activation dependent processing noise and the other emerges from accumulation of absolute values subject to the temporal dynamics of lateral inhibition. The potential adaptive role of such choice mechanisms is discussed.

  13. Simplified fringe order correction for absolute phase maps recovered with multiple-spatial-frequency fringe projections

    International Nuclear Information System (INIS)

    Ding, Yi; Peng, Kai; Lu, Lei; Zhong, Kai; Zhu, Ziqi

    2017-01-01

    Various kinds of fringe order errors may occur in the absolute phase maps recovered with multi-spatial-frequency fringe projections. In existing methods, multiple successive pixels corrupted by fringe order errors are detected and corrected pixel-by-pixel with repeating searches, which is inefficient for applications. To improve the efficiency of multiple successive fringe order corrections, in this paper we propose a method to simplify the error detection and correction by the stepwise increasing property of fringe order. In the proposed method, the numbers of pixels in each step are estimated to find the possible true fringe order values, repeating the search in detecting multiple successive errors can be avoided for efficient error correction. The effectiveness of our proposed method is validated by experimental results. (paper)

  14. DI3 - A New Procedure for Absolute Directional Measurements

    Directory of Open Access Journals (Sweden)

    A Geese

    2011-06-01

    Full Text Available The standard observatory procedure for determining a geomagnetic field's declination and inclination absolutely is the DI-flux measurement. The instrument consists of a non-magnetic theodolite equipped with a single-axis fluxgate magnetometer. Additionally, a scalar magnetometer is needed to provide all three components of the field. Using only 12 measurement steps, all systematic errors can be accounted for, but if even one of the readings is wrong, the whole measurement has to be rejected. We use a three-component sensor on top of the theodolite's telescope. By performing more measurement steps, we gain much better control of the whole procedure: as the magnetometer can be fully calibrated by rotating about two independent directions, every combined reading of magnetometer output and theodolite angles provides the absolute field vector. We predefined a set of angle positions that the observer has to try to achieve. To further simplify the measurement procedure, the observer is guided by a pocket PC, on which he only has to confirm the theodolite position. The magnetic field is then stored automatically, together with the horizontal and vertical angles. The DI3 measurement is periodically performed at the Niemegk Observatory, allowing for a direct comparison with the traditional measurements.

  15. Fringe order correction for the absolute phase recovered by two selected spatial frequency fringe projections in fringe projection profilometry.

    Science.gov (United States)

    Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun

    2017-08-01

    The performance of the two selected spatial frequency phase unwrapping methods is limited by a phase error bound beyond which errors will occur in the fringe order leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct the wrong fringe orders. Two constraints are introduced during the fringe order determination of two selected spatial frequency phase unwrapping methods. A strategy to detect and correct the wrong fringe orders is also described. Compared with the existing methods, we do not need to estimate the threshold associated with absolute phase values to determine the fringe order error, thus making it more reliable and avoiding the procedure of search in detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by the experimental results.
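
    For context, the fringe order in two-frequency temporal phase unwrapping is usually obtained by scaling the unwrapped low-frequency phase up to the high-frequency one and rounding. The sketch below shows that generic computation; the frequencies and phase values are illustrative, and this is the standard formula, not the specific detection and correction strategy proposed in the paper.

      # Sketch: generic fringe order determination for two selected spatial frequencies.
      # phi_h: wrapped high-frequency phase in [0, 2*pi)
      # Phi_l: unwrapped (absolute) low-frequency phase
      # f_h, f_l: the two selected spatial frequencies
      import math

      def absolute_phase(phi_h, Phi_l, f_h, f_l):
          """Recover the absolute high-frequency phase via its fringe order."""
          k = round((Phi_l * f_h / f_l - phi_h) / (2 * math.pi))   # integer fringe order
          return phi_h + 2 * math.pi * k, k

      # Illustrative values: frequency ratio f_h/f_l = 8.
      phi_abs, k = absolute_phase(phi_h=5.901, Phi_l=5.45, f_h=64, f_l=8)
      print(f"fringe order k = {k}, absolute phase = {phi_abs:.3f} rad")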

  16. Absolute chronology and stratigraphy of Lepenski Vir

    Directory of Open Access Journals (Sweden)

    Borić Dušan

    2007-01-01

    meaningful and representative of two separate and defined phases of occupation at this locale. This early period would correspond with the phase that the excavator of Lepenski Vir defined as Proto-Lepenski Vir although his ideas about the spatial distribution of this phase, its interpretation, duration and relation to the later phase of trapezoidal buildings must be revised in the light of new AMS dates and other available data. The phase with trapezoidal buildings most likely starts only around 6200 cal BC and most of the trapezoidal buildings might have been abandoned by around 5900 cal BC. The absolute span of only two or three hundred years and likely even less, for the flourishing of building activity related to trapezoidal structures at Lepenski Vir significantly compresses Srejović's phase I. Thus, it is difficult to maintain the excavator's five subphases which, similarly to Ivana Radovanović's more recent re-phasing of Lepenski Vir into I-1-3, remain largely guess works before more extensive and systematic dating of each building is accomplished along with statistical modeling in order to narrow the magnitude of error. On the whole, new dates from these contexts better correspond with Srejović's stratigraphic logic of sequencing buildings to particular phases on the basis of their superimposing and cutting than with Radovanović's stylistic logic, i.e. her typology of hearth forms, ash-places, entrance platforms, and presence/absence of -supports around rectangular hearths used as reliable chronological indicators. The short chronological span for phase I also suggests that phase Lepenski Vir II is not realistic. This has already been shown by overlapping plans of the phase I buildings and stone outlines that the excavator of the site attributed to Lepenski Vir II phase. According to Srejović, Lepenski Vir phase II was characterized by buildings with stone walls made in the shape of trapezes, repeating the outline of supposedly earlier limestone floors of his

  17. Det demokratiske argument for absolut ytringsfrihed

    DEFF Research Database (Denmark)

    Lægaard, Sune

    2014-01-01

    The article discusses the claim that absolute freedom of expression is a necessary precondition for democratic legitimacy, taking as its starting point a reconstruction of an argument put forward by Ronald Dworkin. The question is why freedom of expression should be a precondition for democratic legitimacy, and why...

  18. Musical Activity Tunes Up Absolute Pitch Ability

    DEFF Research Database (Denmark)

    Dohn, Anders; Garza-Villarreal, Eduardo A.; Ribe, Lars Riisgaard

    2014-01-01

    Absolute pitch (AP) is the ability to identify or produce pitches of musical tones without an external reference. Active AP (i.e., pitch production or pitch adjustment) and passive AP (i.e., pitch identification) are considered to not necessarily coincide, although no study has properly compared...

  19. On the absolute measure of Beta activities

    International Nuclear Information System (INIS)

    Sanchez del Rio, C.; Jimenez Reynaldo, O.; Rodriguez Mayquez, E.

    1956-01-01

    A new method for absolute beta counting of solid samples is given. The measurement is made with an internal Geiger-Muller tube of new construction. The backscattering correction when using an infinitely thick mounting is discussed, and results for different materials are given. (Author)

  20. Absolute measurement of a tritium standard

    International Nuclear Information System (INIS)

    Hadzisehovic, M.; Mocilnik, I.; Buraei, K.; Pongrac, S.; Milojevic, A.

    1978-01-01

    For the determination of a tritium absolute activity standard, a method of internal gas counting has been used. The procedure involves water reduction by uranium and zinc and, further, the measurement of the absolute disintegration rate of tritium per unit effective volume of the counter by a compensation method. Criteria for the choice of methods and procedures concerning the determination and measurement of the gaseous 3 H yield, the parameters of the gaseous hydrogen, the sample mass of HTO and the absolute disintegration rate of tritium are discussed. In order to obtain gaseous sources of 3 H (and 2 H), the same reversible chemical reaction was used, namely the water - uranium hydride - hydrogen system. This reaction was proved to be quantitative above 500 deg C by measuring the yield of the gas obtained and the absolute activity of an HTO standard. A brief description of the measuring apparatus is given, as well as a critical discussion of the brass counter quality and the possibility of obtaining equal working conditions at the counter ends. (T.G.)

  1. Absolutyzm i pluralizm (ABSOLUTISM AND PLURALISM

    Directory of Open Access Journals (Sweden)

    Renata Ziemińska

    2005-06-01

    Full Text Available Alethic absolutism is the thesis that propositions cannot be more or less true, that they are true or false forever (if true at all), and that their truth is independent of the circumstances of their assertion. In its negative version, which is easier to defend, alethic absolutism claims that the very same proposition cannot be both true and false relative to the circumstances of its assertion. Simple alethic pluralism is the thesis that we have many concepts of truth. It is a very good way to dissolve the controversy between alethic relativism and absolutism. The many philosophical concepts of truth are the best reason for such pluralism. If a concept is the meaning of a name, we have many concepts of truth because the name 'truth' has been understood in many ways. The variety of meanings, however, can be superficial. Under it we can find one idea of truth expressed in the correspondence truism or schema (T). The content of the truism is too poor to be the content of any single concept of truth, so it is usually connected with some picture of the world (an ontology), and we have as many concepts of truth as pictures of the world. The author proposes a hierarchical pluralism with a privileged classic (or correspondence in a weak sense) concept of truth as an absolute property.

  2. Absolute Distance Measurements with Tunable Semiconductor Laser

    Czech Academy of Sciences Publication Activity Database

    Mikel, Břetislav; Číp, Ondřej; Lazar, Josef

    T118, - (2005), s. 41-44 ISSN 0031-8949 R&D Projects: GA AV ČR(CZ) IAB2065001 Keywords : tunable laser * absolute interferometer Subject RIV: BH - Optics, Masers, Lasers Impact factor: 0.661, year: 2004

  3. Thin-film magnetoresistive absolute position detector

    NARCIS (Netherlands)

    Groenland, J.P.J.

    1990-01-01

    The subject of this thesis is the investigation of a digital absolute posi- tion-detection system, which is based on a position-information carrier (i.e. a magnetic tape) with one single code track on the one hand, and an array of magnetoresistive sensors for the detection of the information on the

  4. Stimulus Probability Effects in Absolute Identification

    Science.gov (United States)

    Kent, Christopher; Lamberts, Koen

    2016-01-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of…

  5. Absolute tightness: the chemists hesitate to invest

    International Nuclear Information System (INIS)

    Anon.

    1996-01-01

    The safety requirements of industries such as nuclear plants and the strengthening of environmental regulations (particularly those related to volatile organic compounds) have led manufacturers to build absolute-tightness pumps. But this equipment does not answer all the problems and represents a high investment cost. As a consequence, the chemists hesitate to invest. (O.L.)

  6. Solving Absolute Value Equations Algebraically and Geometrically

    Science.gov (United States)

    Shiyuan, Wei

    2005-01-01

    The way in which students can improve their comprehension by understanding the geometrical meaning of algebraic equations or solving algebraic equation geometrically is described. Students can experiment with the conditions of the absolute value equation presented, for an interesting way to form an overall understanding of the concept.

  7. Data error effects on net radiation and evapotranspiration estimation

    International Nuclear Information System (INIS)

    Llasat, M.C.; Snyder, R.L.

    1998-01-01

    The objective of this paper is to evaluate the potential error in estimating the net radiation and reference evapotranspiration resulting from errors in the measurement or estimation of weather parameters. A methodology for estimating the net radiation using hourly weather variables measured at a typical agrometeorological station (e.g., solar radiation, temperature and relative humidity) is presented. Then the error propagation analysis is made for net radiation and for reference evapotranspiration. Data from the Raimat weather station, which is located in the Catalonia region of Spain, are used to illustrate the error relationships. The results show that temperature, relative humidity and cloud cover errors have little effect on the net radiation or reference evapotranspiration. A 5°C error in estimating surface temperature leads to errors as big as 30 W m^-2 at high temperature. A 4% solar radiation (R_s) error can cause a net radiation error as big as 26 W m^-2 when R_s ≈ 1000 W m^-2. However, the error is less when cloud cover is calculated as a function of the solar radiation. The absolute error in reference evapotranspiration (ET_o) equals the product of the net radiation error and the radiation term weighting factor [W = Δ/(Δ + γ)] in the ET_o equation. Therefore, the ET_o error varies between 65 and 85% of the R_n error as air temperature increases from about 20° to 40°C. (author)
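
    To make the weighting factor concrete, the short sketch below evaluates W = Δ/(Δ + γ) over a range of air temperatures and scales the quoted net radiation error by it. The Tetens-type expression for the slope of the saturation vapour pressure curve and the nominal psychrometric constant are common textbook forms used here as assumptions, not constants taken from the paper.

      # Sketch: radiation-term weighting factor W = delta / (delta + gamma) and the
      # corresponding ETo error for a given net radiation error.
      import math

      GAMMA = 0.066  # psychrometric constant in kPa per degC (approximate, near sea level)

      def slope_svp(t_celsius):
          """Slope of the saturation vapour pressure curve (kPa per degC), Tetens-type form."""
          es = 0.6108 * math.exp(17.27 * t_celsius / (t_celsius + 237.3))
          return 4098.0 * es / (t_celsius + 237.3) ** 2

      rn_error = 26.0  # net radiation error in W m^-2 (figure quoted in the abstract)
      for t in (20, 30, 40):
          w = slope_svp(t) / (slope_svp(t) + GAMMA)
          print(f"T = {t:2d} degC  W = {w:.2f}  ETo error ~ {w * rn_error:.1f} W m^-2 equivalent")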

  8. A novel capacitive absolute positioning sensor based on time grating with nanometer resolution

    Science.gov (United States)

    Pu, Hongji; Liu, Hongzhong; Liu, Xiaokang; Peng, Kai; Yu, Zhicheng

    2018-05-01

    The present work proposes a novel capacitive absolute positioning sensor based on time grating. The sensor includes a fine incremental-displacement measurement component combined with a coarse absolute-position measurement component to obtain high-resolution absolute positioning measurements. A single row type sensor was proposed to achieve fine displacement measurement, which combines the two electrode rows of a previously proposed double-row type capacitive displacement sensor based on time grating into a single row. To achieve absolute positioning measurement, the coarse measurement component is designed as a single-row type displacement sensor employing a single spatial period over the entire measurement range. In addition, this component employs a rectangular induction electrode and four groups of orthogonal discrete excitation electrodes with half-sinusoidal envelope shapes, which were formed by alternately extending the rectangular electrodes of the fine measurement component. The fine and coarse measurement components are tightly integrated to form a compact absolute positioning sensor. A prototype sensor was manufactured using printed circuit board technology for testing and optimization of the design in conjunction with simulations. Experimental results show that the prototype sensor achieves a ±300 nm measurement accuracy with a 1 nm resolution over a displacement range of 200 mm when employing error compensation. The proposed sensor is an excellent alternative to presently available long-range absolute nanometrology sensors owing to its low cost, simple structure, and ease of manufacturing.

  9. Valuation Biases, Error Measures, and the Conglomerate Discount

    NARCIS (Netherlands)

    I. Dittmann (Ingolf); E.G. Maug (Ernst)

    2006-01-01

    We document the importance of the choice of error measure (percentage vs. logarithmic errors) for the comparison of alternative valuation procedures. We demonstrate for several multiple valuation methods (averaging with the arithmetic mean, harmonic mean, median, geometric mean) that the

  10. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  11. Population-Attributable Risk Percentages for Racialized Risk Environments

    Science.gov (United States)

    Arriola, Kimberly Jacob; Haardörfer, Regine; McBride, Colleen M.

    2016-01-01

    Research about relationships between place characteristics and racial/ethnic inequities in health has largely ignored conceptual advances about race and place within the discipline of geography. Research has also almost exclusively quantified these relationships using effect estimates (e.g., odds ratios), statistics that fail to adequately capture the full impact of place characteristics on inequities and thus undermine our ability to translate research into action. We draw on geography to further develop the concept of “racialized risk environments,” and we argue for the routine calculation of race/ethnicity-specific population-attributable risk percentages. PMID:27552263
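
    The abstract argues for routinely reporting population-attributable risk percentages. As a reference, the sketch below applies Levin's standard formula, PAR% = Pe(RR − 1) / [Pe(RR − 1) + 1] × 100, to a few exposure prevalences and relative risks; the numerical values are illustrative assumptions, not results from the article.

      # Sketch: Levin's population-attributable risk percentage.
      #   PAR% = Pe*(RR - 1) / (Pe*(RR - 1) + 1) * 100
      # Pe: prevalence of exposure in the population; RR: relative risk.

      def par_percent(prevalence_exposed, relative_risk):
          excess = prevalence_exposed * (relative_risk - 1.0)
          return 100.0 * excess / (excess + 1.0)

      # Illustrative prevalence/relative-risk pairs.
      for pe, rr in [(0.20, 1.5), (0.40, 2.0), (0.60, 1.3)]:
          print(f"Pe = {pe:.0%}, RR = {rr}: PAR% = {par_percent(pe, rr):.1f}%")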

  12. The effect of insulin resistance and exercise on the percentage of CD16(+) monocyte subset in obese individuals.

    Science.gov (United States)

    de Matos, Mariana A; Duarte, Tamiris C; Ottone, Vinícius de O; Sampaio, Pâmela F da M; Costa, Karine B; de Oliveira, Marcos F Andrade; Moseley, Pope L; Schneider, Suzanne M; Coimbra, Cândido C; Brito-Melo, Gustavo E A; Magalhães, Flávio de C; Amorim, Fabiano T; Rocha-Vieira, Etel

    2016-06-01

    Obesity is a low-grade chronic inflammation condition, and macrophages, and possibly monocytes, are involved in the pathological outcomes of obesity. Physical exercise is a low-cost strategy to prevent and treat obesity, probably because of its anti-inflammatory action. We evaluated the percentage of CD16(-) and CD16(+) monocyte subsets in obese insulin-resistant individuals and the effect of an exercise bout on the percentage of these cells. Twenty-seven volunteers were divided into three experimental groups: lean insulin sensitive, obese insulin sensitive and obese insulin resistant. Venous blood samples collected before and 1 h after an aerobic exercise session on a cycle ergometer were used for determination of monocyte subsets by flow cytometry. Insulin-resistant obese individuals have a higher percentage of CD16(+) monocytes (14.8 ± 2.4%) than the lean group (10.0 ± 1.3%). A positive correlation of the percentage of CD16(+) monocytes with body mass index and fasting plasma insulin levels was found. One bout of moderate exercise reduced the percentage of CD16(+) monocytes by 10% in all the groups evaluated. Also, the absolute monocyte count, as well as all other leukocyte populations, in lean and obese individuals, increased after exercise. This fact may partially account for the observed reduction in the percentage of CD16(+) cells in response to exercise. Insulin-resistant, but not insulin-sensitive obese individuals, have an increased percentage of CD16(+) monocytes that can be slightly modulated by a single bout of moderate aerobic exercise. These findings may be clinically relevant to the population studied, considering the involvement of CD16(+) monocytes in the pathophysiology of obesity. Copyright © 2016 John Wiley & Sons, Ltd. Obesity is now considered to be an inflammatory condition associated with many pathological consequences, including insulin resistance. It is proposed that insulin resistance contributes to the aggravation of the

  13. Prescription Errors in Psychiatry

    African Journals Online (AJOL)

    Arun Kumar Agnihotri

    clinical pharmacists in detecting errors before they have a (sometimes serious) clinical impact should not be underestimated. Research on medication error in mental health care is limited. .... participation in ward rounds and adverse drug.

  14. Absolute calibration of TFTR helium proportional counters

    International Nuclear Information System (INIS)

    Strachan, J.D.; Diesso, M.; Jassby, D.; Johnson, L.; McCauley, S.; Munsat, T.; Roquemore, A.L.; Loughlin, M.

    1995-06-01

    The TFTR helium proportional counters are located in the central five (5) channels of the TFTR multichannel neutron collimator. These detectors were absolutely calibrated using a 14 MeV neutron generator positioned at the horizontal midplane of the TFTR vacuum vessel. The neutron generator position was scanned in centimeter steps to determine the collimator aperture width to 14 MeV neutrons and the absolute sensitivity of each channel. Neutron profiles were measured for TFTR plasmas with time resolution between 5 msec and 50 msec depending upon count rates. The He detectors were used to measure the burnup of 1 MeV tritons in deuterium plasmas, the transport of tritium in trace tritium experiments, and the residual tritium levels in plasmas following 50:50 DT experiments

  15. Absolute-magnitude distributions of supernovae

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, Dean; Wright, John [Department of Physics, Xavier University of Louisiana, New Orleans, LA 70125 (United States); Jenkins III, Robert L. [Applied Physics Department, Richard Stockton College, Galloway, NJ 08205 (United States); Maddox, Larry, E-mail: drichar7@xula.edu [Department of Chemistry and Physics, Southeastern Louisiana University, Hammond, LA 70402 (United States)

    2014-05-01

    The absolute-magnitude distributions of seven supernova (SN) types are presented. The data used here were primarily taken from the Asiago Supernova Catalogue, but were supplemented with additional data. We accounted for both foreground and host-galaxy extinction. A bootstrap method is used to correct the samples for Malmquist bias. Separately, we generate volume-limited samples, restricted to events within 100 Mpc. We find that the superluminous events (M_B < –21) make up only about 0.1% of all SNe in the bias-corrected sample. The subluminous events (M_B > –15) make up about 3%. The normal Ia distribution was the brightest with a mean absolute blue magnitude of –19.25. The IIP distribution was the dimmest at –16.75.

  16. Absolute and relative dosimetry for ELIMED

    Energy Technology Data Exchange (ETDEWEB)

    Cirrone, G. A. P.; Schillaci, F.; Scuderi, V. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Institute of Physics Czech Academy of Science, ELI-Beamlines project, Na Slovance 2, Prague (Czech Republic); Cuttone, G.; Candiano, G.; Musumarra, A.; Pisciotta, P.; Romano, F. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania (Italy); Carpinelli, M. [INFN Sezione di Cagliari, c/o Dipartimento di Fisica, Università di Cagliari, Cagliari (Italy); Leonora, E.; Randazzo, N. [INFN-Sezione di Catania, Via Santa Sofia 64, Catania (Italy); Presti, D. Lo [INFN-Sezione di Catania, Via Santa Sofia 64, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy); Raffaele, L. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and INFN-Sezione di Catania, Via Santa Sofia 64, Catania (Italy); Tramontana, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy); Cirio, R.; Sacchi, R.; Monaco, V. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino, Italy and Università di Torino, Dipartimento di Fisica, Via P.Giuria, 1 10125 Torino (Italy); Marchetto, F.; Giordanengo, S. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino (Italy)

    2013-07-26

    The definition of detectors, methods and procedures for the absolute and relative dosimetry of laser-driven proton beams is a crucial step toward the clinical use of this new kind of beam. Hence, one of the ELIMED tasks will be the definition of procedures aiming to obtain an absolute dose measurement at the end of the transport beamline with an accuracy as close as possible to that required for clinical applications (i.e. of the order of 5% or less). Relative dosimetry procedures must be established as well: they are necessary in order to determine and verify the beam dose distributions and to monitor the beam fluence and the energy spectra during irradiations. Radiochromic films, CR39, a Faraday cup, a Secondary Emission Monitor (SEM) and a transmission ionization chamber will be considered, designed and studied in order to perform a full dosimetric characterization of the ELIMED proton beam.

  17. Absolute spectrophotometry of the β Lyr

    International Nuclear Information System (INIS)

    Burnashev, V.I.; Skul'skij, M.Yu.

    1978-01-01

    In 1974 an absolute spectrophotometry of β Lyr was performed with the scanning spectrophotometer in the 3300-7400 A range. The energy distribution in the β Lyr spectrum is obtained. The β Lyr model is proposed. It is shown, that the continuous spectrum of the β Lyr radiation can be presented by the total radiation of the B8 3 and A5 3 two stars and of the gaseous envelope with Te =20000 K

  18. Absolute photoionization cross sections of atomic oxygen

    Science.gov (United States)

    Samson, J. A. R.; Pareek, P. N.

    1985-01-01

    The absolute values of photoionization cross sections of atomic oxygen were measured from the ionization threshold to 120 A. An auto-ionizing resonance belonging to the 2S2P4(4P)3P(3Do, 3So) transition was observed at 479.43 A and another line at 389.97 A. The experimental data is in excellent agreement with rigorous close-coupling calculations that include electron correlations in both the initial and final states.

  19. Absolute purchasing power parity in industrial countries

    OpenAIRE

    Zhang, Zhibai; Bian, Zhicun

    2015-01-01

    Different from popular studies that focus on relative purchasing power parity, we study absolute purchasing power parity (APPP) in 21 main industrial countries. Three databases are used. Both the whole period and the sub-period are analyzed. The empirical proof shows that the phenomenon that APPP holds is common, and the phenomenon that APPP does not hold is also common. In addition, some country pairs and the pooled country data indicate that the nearer the GDPPs of two countries are, the mo...

  20. Internal descriptions of absolute Borel classes

    Czech Academy of Sciences Publication Activity Database

    Holický, P.; Pelant, Jan

    2004-01-01

    Roč. 141, č. 1 (2004), s. 87-104 ISSN 0166-8641 R&D Projects: GA ČR GA201/00/1466; GA ČR GA201/03/0933 Institutional research plan: CEZ:AV0Z1019905 Keywords : absolute Borel class * complete sequence of covers * open map Subject RIV: BA - General Mathematics Impact factor: 0.364, year: 2004

  1. The absolute differential calculus calculus of tensors

    CERN Document Server

    Levi-Cività, Tullio

    1926-01-01

    Written by a towering figure of twentieth-century mathematics, this classic examines the mathematical background necessary for a grasp of relativity theory. Tullio Levi-Civita provides a thorough treatment of the introductory theories that form the basis for discussions of fundamental quadratic forms and absolute differential calculus, and he further explores physical applications.Part one opens with considerations of functional determinants and matrices, advancing to systems of total differential equations, linear partial differential equations, algebraic foundations, and a geometrical intro

  2. An absolute deviation approach to assessing correlation.

    OpenAIRE

    Gorard, S.

    2015-01-01

    This paper describes two possible alternatives to the more traditional Pearson’s R correlation coefficient, both based on using the mean absolute deviation, rather than the standard deviation, as a measure of dispersion. Pearson’s R is well-established and has many advantages. However, these newer variants also have several advantages, including greater simplicity and ease of computation, and perhaps greater tolerance of underlying assumptions (such as the need for linearity). The first alter...
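
    One natural way to build such a coefficient is to replace the standard deviations in Pearson's R with mean absolute deviations. The sketch below implements that plausible variant; it is an assumption about the general idea, not necessarily the exact definition used in the paper, and unlike Pearson's R it is not guaranteed to stay within [-1, 1].

      # Sketch: a correlation-like coefficient using the mean absolute deviation (MAD)
      # in place of the standard deviation. One plausible variant, not the paper's definition.

      def mean(xs):
          return sum(xs) / len(xs)

      def mad(xs):
          m = mean(xs)
          return sum(abs(x - m) for x in xs) / len(xs)

      def mad_correlation(xs, ys):
          mx, my = mean(xs), mean(ys)
          n = len(xs)
          cov_like = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
          return cov_like / (mad(xs) * mad(ys))

      x = [1, 2, 3, 4, 5, 6]                 # illustrative data
      y = [2.1, 2.9, 3.7, 5.2, 5.8, 7.1]
      print(f"MAD-based coefficient: {mad_correlation(x, y):.3f}")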

  3. Benzofuranoid and bicyclooctanoid neolignans:absolute configuration

    International Nuclear Information System (INIS)

    Alvarenga, M.A. de; Giesbrecht, A.M.; Gottlieb, O.R.; Yoshida, M.

    1977-01-01

    The naturally occurring benzofuranoid and bicyclo[3.2.1]octanoid neolignans have their relative configurations established by 1 H and 13 C NMR, including with the aid of the solvent shift technique. Interconversion of the benzofuranoid-type compounds, as well as conversion of a benzofuranoid to a bicyclooctanoid derivative, made ORD correlations, ultimately with (2S,3S)- and (2R,3R)-2,3-dihydrobenzofurans, possible and led to the absolute configurations of both series of neolignans [pt

  4. Least Squares Problems with Absolute Quadratic Constraints

    Directory of Open Access Journals (Sweden)

    R. Schöne

    2012-01-01

    Full Text Available This paper analyzes linear least squares problems with absolute quadratic constraints. We develop a generalized theory following Bookstein's conic-fitting and Fitzgibbon's direct ellipse-specific fitting. Under simple preconditions, it can be shown that a minimum always exists and can be determined by a generalized eigenvalue problem. This problem is numerically reduced to an eigenvalue problem by multiplications of Givens' rotations. Finally, four applications of this approach are presented.
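
    As an illustration of the general recipe (scatter matrix, quadratic constraint, generalized eigenvalue problem), the sketch below implements the classic direct ellipse-specific fit in the Fitzgibbon style with the constraint 4ac − b² = 1. It is a generic sketch under those assumptions, relying on NumPy/SciPy, and is not the generalized theory developed in the paper.

      # Sketch: least squares conic fit with the absolute quadratic constraint 4ac - b^2 = 1,
      # solved as a generalized eigenvalue problem (direct ellipse-specific fitting).
      import numpy as np
      from scipy.linalg import eig

      def fit_ellipse(x, y):
          # Design matrix with columns [x^2, xy, y^2, x, y, 1]
          D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
          S = D.T @ D                          # scatter matrix
          C = np.zeros((6, 6))                 # constraint matrix encoding 4ac - b^2 = 1
          C[0, 2] = C[2, 0] = 2.0
          C[1, 1] = -1.0
          eigvals, eigvecs = eig(S, C)         # generalized eigenproblem S a = lambda C a
          eigvals = eigvals.real
          # Feasible minimum: the eigenvector belonging to the (small) positive finite eigenvalue.
          candidates = np.where(np.isfinite(eigvals) & (eigvals > 1e-9))[0]
          best = candidates[np.argmin(eigvals[candidates])]
          return eigvecs[:, best].real

      # Noisy points on an ellipse (illustrative data).
      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 2.0 * np.pi, 60)
      x = 3.0 * np.cos(t) + 0.05 * rng.standard_normal(t.size)
      y = 1.5 * np.sin(t) + 0.05 * rng.standard_normal(t.size)
      print("conic coefficients [a, b, c, d, e, f]:", np.round(fit_ellipse(x, y), 4))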

  5. The correction of vibration in frequency scanning interferometry based absolute distance measurement system for dynamic measurements

    Science.gov (United States)

    Lu, Cheng; Liu, Guodong; Liu, Bingguo; Chen, Fengdong; Zhuang, Zhitao; Xu, Xinke; Gan, Yu

    2015-10-01

    Absolute distance measurement systems are of significant interest in the field of metrology, as they could improve the manufacturing efficiency and accuracy of large assemblies in fields such as aircraft construction, automotive engineering, and the production of modern windmill blades. Frequency scanning interferometry offers notable advantages as an absolute distance measurement technique, since it has high precision and does not depend on a cooperative target. In this paper, the influence of unavoidable vibration on the frequency scanning interferometry based absolute distance measurement system is analyzed. The distance spectrum is broadened by the Doppler effect caused by vibration, which introduces a measurement error more than 10^3 times larger than the change in optical path difference. In order to decrease the influence of vibration, the changes of the optical path difference are monitored by a frequency stabilized laser running in parallel with the frequency scanning interferometer. The experiment has verified the effectiveness of this method.

  6. Absolute measurement method of environment radon content

    International Nuclear Information System (INIS)

    Ji Changsong

    1989-11-01

    A portable environmental radon content device with a 40 liter decay chamber, based on the Thomas double-filter method for absolute measurement of radon content, has been developed. The correctness of the Thomas double-filter absolute measurement method has been verified by experiments measuring radon gas samples whose theoretical density was known. In addition, the intrinsic uncertainty of the method was determined in these experiments. The confidence level of the device is about 95%, the sensitivity is better than 0.37 Bq m^-3, and the intrinsic uncertainty is less than 10%. The results show that the selected measurement and structural parameters are reasonable and that the experimental methods are acceptable. In this method, the influence on the measured values of the radioactive equilibrium between radon and its daughters, of the ratio of combined daughters to total daughters, and of the fraction of charged particles is excluded by the theory and experimental procedure. The Thomas double-filter formula for absolute radon measurement is applicable to a cylindrical decay chamber, and its applicability is also verified when the diameter of the exit filter is much smaller than the diameter of the inlet filter

  7. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  8. CDC staging based on absolute CD4 count and CD4 percentage in an HIV-1-infected Indian population: treatment implications

    Science.gov (United States)

    Vajpayee, M; Kaushik, S; Sreenivas, V; Wig, N; Seth, P

    2005-01-01

    CD4+ T-cell levels are an important criterion for categorizing HIV-related clinical conditions according to the CDC classification system and are therefore important in the management of HIV by initiating antiretroviral therapy and prophylaxis for opportunistic infections due to HIV among HIV-infected individuals. However, it has been observed that the CD4 counts are affected by the geographical location, race, ethnic origin, age, gender and changes in total and differential leucocyte counts. In the light of this knowledge, we classified 600 HIV seropositive antiretroviral treatment (ART)-naïve Indian individuals belonging to different CDC groups A, B and C on the basis of CDC criteria of both CD4% and CD4 counts and receiver operating characteristic (ROC) curves were generated. Importantly, CDC staging on the basis of CD4% indicated significant clinical implications, requiring an early implementation of effective antiretroviral treatment regimen in HIV-infected individuals deprived of treatment when classified on the basis of CD4 counts. PMID:16045738
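
    For reference, the sketch below applies the commonly cited 1993 CDC immunologic categories (category 1: CD4 ≥ 500 cells/µL or ≥ 29%; category 2: 200–499 cells/µL or 14–28%; category 3: < 200 cells/µL or < 14%) to show how the two criteria can disagree for the same individual. The thresholds are quoted from the widely used CDC scheme and should be checked against the source; the sample values are illustrative, not data from the study above.

      # Sketch: CDC immunologic category by absolute CD4 count vs. CD4 percentage.
      # Thresholds follow the commonly cited 1993 CDC scheme (verify against the source).

      def category_by_count(cd4_cells_per_ul):
          if cd4_cells_per_ul >= 500:
              return 1
          return 2 if cd4_cells_per_ul >= 200 else 3

      def category_by_percent(cd4_percent):
          if cd4_percent >= 29:
              return 1
          return 2 if cd4_percent >= 14 else 3

      # Illustrative (count, percentage) pairs showing possible disagreement.
      for count, pct in [(520, 22), (310, 13), (450, 30)]:
          print(f"CD4 {count}/uL ({pct}%): by count -> {category_by_count(count)}, "
                f"by percentage -> {category_by_percent(pct)}")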

  9. The percentage of migration as indicator of femoral head position

    International Nuclear Information System (INIS)

    Ekloef, O.; Ringertz, H.; Samuelsson, L.; Karolinska Sjukhuset, Stockholm; Karolinska Sjukhuset, Stockholm

    1988-01-01

    In childhood, subluxation of one or both hips may develop rather insidiously. For lack of generally accepted objective methods of assessment, ambiguous interpretations of findings in serial examinations are common. Many subluxations are overlooked during the early stages. In order to overcome such disadvantages, determination of the percentage of migration seems to be a reasonably easy and reliable technique facilitating evaluation of impending dislocation. This investigation was carried out in order to establish norms applicable to patients in the pediatric age interval. The 98th percentile of migration increases with age from 16% in patients < 4 years of age to 24% in patients ≥ 12 years. Higher figures represent subluxation. If the migration exceeds 80%, a manifest luxation is present. A difference in migration between the two hips larger than 12% indicates abnormality calling for clinical and radiologic follow-up. (orig.)
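
    The migration percentage itself is a simple ratio of the uncovered femoral head width to the total head width. The sketch below computes it and flags values against the age-dependent limits quoted in the abstract; the measurements are illustrative, and the linear interpolation between the two quoted endpoints is an assumption.

      # Sketch: migration percentage of the femoral head and a flag against the
      # age-dependent 98th-percentile limits quoted in the abstract.

      def migration_percentage(uncovered_width_mm, total_head_width_mm):
          return 100.0 * uncovered_width_mm / total_head_width_mm

      def upper_limit(age_years):
          # The abstract gives only the endpoints: 16% below 4 years, 24% at 12 years and above.
          # Linear interpolation in between is an assumption for illustration.
          if age_years < 4:
              return 16.0
          if age_years >= 12:
              return 24.0
          return 16.0 + (age_years - 4) * (24.0 - 16.0) / (12 - 4)

      age = 8
      mp = migration_percentage(uncovered_width_mm=9.0, total_head_width_mm=45.0)
      print(f"migration = {mp:.1f}% (limit at age {age}: {upper_limit(age):.0f}%)")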

  10. Budgetary Approach to Project Management by Percentage of Completion Method

    Directory of Open Access Journals (Sweden)

    Leszek Borowiec

    2011-07-01

    Full Text Available An efficient and effective project management process is made possible by the use of project management methods and techniques. The aim of this paper is to present the problems of project management when using the Percentage of Completion (POC) method. The research material was gathered from experience in implementing this method at the Johnson Controls International Company. The article attempts to demonstrate the validity of the thesis that the POC project management method allows for effective implementation and monitoring of a project and is thus an effective tool for managing companies that use the budgetary approach. The study presents the planning process for the basic parameters affecting project effectiveness (such as costs, revenue and margin) and characterizes the primary measures used to evaluate it. The theme is illustrated with numerous examples showing the essence of the problems raised, and the results are presented using descriptive, graphical and tabular methods.
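
    To make the budgetary mechanics concrete, the sketch below applies the common cost-to-cost variant of the percentage-of-completion method. Both the variant and the figures are assumptions for illustration; the article itself describes the Johnson Controls implementation.

      # Sketch: cost-to-cost percentage-of-completion revenue recognition.
      # All figures are illustrative assumptions.

      contract_value = 1_000_000.0      # total contract revenue
      estimated_total_cost = 800_000.0  # budgeted cost at completion

      cost_to_date = 320_000.0
      revenue_recognized_to_date = 0.0

      percent_complete = cost_to_date / estimated_total_cost
      revenue_to_recognize = contract_value * percent_complete - revenue_recognized_to_date
      margin_to_date = contract_value * percent_complete - cost_to_date

      print(f"percentage of completion: {percent_complete:.0%}")
      print(f"revenue recognized this period: {revenue_to_recognize:,.0f}")
      print(f"margin to date: {margin_to_date:,.0f}")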

  11. Pseudo-absolute quantitative analysis using gas chromatography – Vacuum ultraviolet spectroscopy – A tutorial

    International Nuclear Information System (INIS)

    Bai, Ling; Smuts, Jonathan; Walsh, Phillip; Qiu, Changling; McNair, Harold M.; Schug, Kevin A.

    2017-01-01

    The vacuum ultraviolet detector (VUV) is a new non-destructive mass sensitive detector for gas chromatography that continuously and rapidly collects full wavelength range absorption between 120 and 240 nm. In addition to conventional methods of quantification (internal and external standard), gas chromatography - vacuum ultraviolet spectroscopy has the potential for pseudo-absolute quantification of analytes based on pre-recorded cross sections (well-defined absorptivity across the 120–240 nm wavelength range recorded by the detector) without the need for traditional calibration. The pseudo-absolute method was used in this research to experimentally evaluate the sources of sample loss and gain associated with sample introduction into a typical gas chromatograph. Standard samples of benzene and natural gas were used to assess precision and accuracy for the analysis of liquid and gaseous samples, respectively, based on the amount of analyte loaded on-column. Results indicate that injection volume, split ratio, and sampling times for splitless analysis can all contribute to inaccurate, yet precise sample introduction. For instance, an autosampler can very reproducibly inject a designated volume, but there are significant systematic errors (here, a consistently larger volume than that designated) in the actual volume introduced. The pseudo-absolute quantification capability of the vacuum ultraviolet detector provides a new means for carrying out system performance checks and potentially for solving challenging quantitative analytical problems. For practical purposes, an internal standardized approach to normalize systematic errors can be used to perform quantitative analysis with the pseudo-absolute method. - Highlights: • Gas chromatography diagnostics and quantification using VUV detector. • Absorption cross-sections for molecules enable pseudo-absolute quantitation. • Injection diagnostics reveal systematic errors in hardware settings. • Internal

  12. Pseudo-absolute quantitative analysis using gas chromatography – Vacuum ultraviolet spectroscopy – A tutorial

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Ling [Department of Chemistry & Biochemistry, The University of Texas at Arlington, Arlington, TX (United States); Smuts, Jonathan; Walsh, Phillip [VUV Analytics, Inc., Cedar Park, TX (United States); Qiu, Changling [Department of Chemistry & Biochemistry, The University of Texas at Arlington, Arlington, TX (United States); McNair, Harold M. [Department of Chemistry, Virginia Tech, Blacksburg, VA (United States); Schug, Kevin A., E-mail: kschug@uta.edu [Department of Chemistry & Biochemistry, The University of Texas at Arlington, Arlington, TX (United States)

    2017-02-08

    The vacuum ultraviolet detector (VUV) is a new non-destructive mass sensitive detector for gas chromatography that continuously and rapidly collects full wavelength range absorption between 120 and 240 nm. In addition to conventional methods of quantification (internal and external standard), gas chromatography - vacuum ultraviolet spectroscopy has the potential for pseudo-absolute quantification of analytes based on pre-recorded cross sections (well-defined absorptivity across the 120–240 nm wavelength range recorded by the detector) without the need for traditional calibration. The pseudo-absolute method was used in this research to experimentally evaluate the sources of sample loss and gain associated with sample introduction into a typical gas chromatograph. Standard samples of benzene and natural gas were used to assess precision and accuracy for the analysis of liquid and gaseous samples, respectively, based on the amount of analyte loaded on-column. Results indicate that injection volume, split ratio, and sampling times for splitless analysis can all contribute to inaccurate, yet precise sample introduction. For instance, an autosampler can very reproducibly inject a designated volume, but there are significant systematic errors (here, a consistently larger volume than that designated) in the actual volume introduced. The pseudo-absolute quantification capability of the vacuum ultraviolet detector provides a new means for carrying out system performance checks and potentially for solving challenging quantitative analytical problems. For practical purposes, an internal standardized approach to normalize systematic errors can be used to perform quantitative analysis with the pseudo-absolute method. - Highlights: • Gas chromatography diagnostics and quantification using VUV detector. • Absorption cross-sections for molecules enable pseudo-absolute quantitation. • Injection diagnostics reveal systematic errors in hardware settings. • Internal

  13. Comparing absolute and normalized indicators in scientific collaboration: a study in Environmental Science in Latin America

    Energy Technology Data Exchange (ETDEWEB)

    Cabrini-Grácio, M.C.; Oliveira, E.F.T.

    2016-07-01

    This paper conducts a comparative analysis of scientific collaboration proximity trends generated from absolute indicators and from indicators of collaboration intensity in the field of Environmental Sciences in Latin America (LA), in order to identify possible biases in the absolute indicators of international cooperation due to the magnitude of these countries' scientific production in mainstream science. More specifically, the objective is to analyze and compare absolute and normalized values of co-authorship among Latin American countries and their main collaborators, in order to observe similarities and differences expressed by the two frequency indexes in relation to scientific collaboration trends in LA countries. In addition, we aim to visualize and analyze scientific collaboration networks built with absolute and SC-normalized co-authorship indexes among Latin American countries and their collaborators, comparing the proximity evidenced by the two resulting collaboration networks. Data collection covered a period of 10 years (2006-2015) for five LA countries: Brazil, Mexico, Argentina, Chile and Colombia, which produced 94% of total output, a percentage considered representative and significant for this study. We then computed the co-authorship frequencies among the five countries and their key collaborators and built the matrix of co-authorship indexes normalized through SC. Two egocentric networks of scientific collaboration, one with absolute frequencies and one with SC-normalized frequencies, were generated using the Pajek software. From the results, we observed that both absolute and normalized indicators are needed to describe the scientific collaboration phenomenon more thoroughly, since these indicators provide complementary information. (Author)
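
    To make the normalization step concrete, the sketch below computes a cosine-type normalized co-authorship index of the kind commonly used for this purpose (Salton's cosine), on the assumption that "SC" refers to such a measure; the country names and counts are invented for illustration.

```python
import math

def salton_cosine(coauthored, total_a, total_b):
    """Cosine-normalized co-authorship intensity between two countries:
    joint papers divided by the geometric mean of their total outputs."""
    return coauthored / math.sqrt(total_a * total_b)

# Invented example counts of papers in Environmental Sciences, 2006-2015.
totals = {"Brazil": 12000, "USA": 90000}
coauthored = {("Brazil", "USA"): 1800}

for (a, b), c in coauthored.items():
    print(f"{a}-{b}: absolute = {c}, SC = {salton_cosine(c, totals[a], totals[b]):.4f}")
```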

  14. STAR barrel electromagnetic calorimeter absolute calibration using 'minimum ionizing particles' from collisions at RHIC

    International Nuclear Information System (INIS)

    Cormier, T.M.; Pavlinov, A.I.; Rykov, M.V.; Rykov, V.L.; Shestermanov, K.E.

    2002-01-01

    The procedure for the STAR Barrel Electromagnetic Calorimeter (BEMC) absolute calibrations, using penetrating charged particle hits (MIP-hits) from physics events at RHIC, is presented. Its systematic and statistical errors are evaluated. It is shown that, using this technique, the equalization and transfer of the absolute scale from the test beam can be done to a percent level accuracy in a reasonable amount of time for the entire STAR BEMC. MIP-hits would also be an effective tool for continuously monitoring the variations of the BEMC tower's gains, virtually without interference to STAR's main physics program. The method does not rely on simulations for anything other than geometric and some other small corrections, and also for estimations of the systematic errors. It directly transfers measured test beam responses to operations at RHIC

  15. Mathematical model for body fat percentage of children with cerebral palsy

    Directory of Open Access Journals (Sweden)

    Eduardo Borba Neves

    Full Text Available Abstract Introduction The aim of this study was to develop a specific mathematical model to estimate the body fat percentage (BF%) of children with cerebral palsy, based on a Brazilian population of patients with this condition. Method This is a descriptive cross-sectional study. The study included 63 Caucasian children with cerebral palsy, both males and females, aged between three and ten years old. Participants were assessed for functional motor impairment using the Gross Motor Function Classification System (GMFCS), dual energy x-ray absorptiometry (DXA) and skinfold thickness. Total body mass (TBM) and skinfold thicknesses from the triceps (Tr), biceps (Bi), suprailiac (Si), medium thigh (Th), abdominal (Ab), medial calf (Ca) and subscapular (Se) sites were collected. Fat mass (FM) was estimated by dual energy x-ray absorptiometry (gold standard). Results The model was built from multivariate linear regression; FM was set as the dependent variable and the other anthropometric variables, age and sex were set as independent variables. The final model was established as BF% = ((0.433×TBM + 0.063×Th + 0.167×Si − 6.768) ÷ TBM) × 100; the R² value was 0.950, adjusted R² = 0.948, and the standard error of estimate was 1.039 kg. Conclusion This method was shown to be valid for estimating the body fat percentage of children with cerebral palsy. Also, the measurement of skinfolds on both sides of the body showed good results in this modelling.
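
    A minimal implementation of the published equation is sketched below. The grouping of terms follows the reconstruction given above (a regression estimate of fat mass in kg divided by total body mass), which should be checked against the original article; the input values in the example are invented.

```python
def body_fat_percentage(tbm_kg, thigh_skinfold_mm, suprailiac_skinfold_mm):
    """BF% model for children with cerebral palsy, as reported above: estimated
    fat mass (kg) from TBM and two skinfolds, expressed as a percentage of
    total body mass. The grouping of terms is a reconstruction."""
    fat_mass_kg = (0.433 * tbm_kg + 0.063 * thigh_skinfold_mm
                   + 0.167 * suprailiac_skinfold_mm - 6.768)
    return fat_mass_kg / tbm_kg * 100.0

# Invented example: 30 kg child, 12 mm thigh skinfold, 8 mm suprailiac skinfold.
print(f"BF% = {body_fat_percentage(30.0, 12.0, 8.0):.1f}")
```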

  16. Absolute measurement of the $\\beta\\alpha$ decay of $^{16}$N

    CERN Multimedia

    We propose to study the $\\beta$-decay of $^{16}$N at ISOLDE with the aim of determining the branching ratio for $\\beta\\alpha$ decay on an absolute scale. There are indications that the previously measured branching ratio is in error by an amount significantly larger than the quoted uncertainty. This limits the precision with which the S-factor of the astrophysically important $^{12}$C($\\alpha, \\gamma)^{16}$O reaction can be determined.

  17. Wind power error estimation in resource assessments.

    Directory of Open Access Journals (Sweden)

    Osvaldo Rodríguez

    Full Text Available Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, the probability density function, and wind turbine power curves. This method uses actual wind speed data without prior statistical treatment, together with 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.

  18. Wind power error estimation in resource assessments.

    Science.gov (United States)

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, the probability density function, and wind turbine power curves. This method uses actual wind speed data without prior statistical treatment, together with 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
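
    The sketch below illustrates the general idea of propagating a wind-speed measurement error through a turbine power curve. It uses a simple cubic-region placeholder curve and a numerical derivative rather than the 28 Lagrange-fitted manufacturer curves used in the study, so the numbers are purely illustrative.

```python
import numpy as np

def power_curve(v, rated_kw=2000.0, v_in=3.0, v_rated=12.0, v_out=25.0):
    """Placeholder turbine power curve (kW): cubic between cut-in and rated
    speed, flat at rated power, zero outside the operating range."""
    v = np.asarray(v, dtype=float)
    p = np.where((v >= v_in) & (v < v_rated),
                 rated_kw * ((v - v_in) / (v_rated - v_in)) ** 3, 0.0)
    return np.where((v >= v_rated) & (v <= v_out), rated_kw, p)

def propagated_power_error(v, rel_speed_error=0.10, dv=1e-3):
    """First-order error propagation: dP = |dP/dv| * (relative error * v)."""
    dPdv = (power_curve(v + dv) - power_curve(v - dv)) / (2.0 * dv)
    return np.abs(dPdv) * rel_speed_error * v

# Example: one year of hourly wind speeds with a 10% measurement error.
speeds = np.random.default_rng(0).weibull(2.0, size=8760) * 8.0
power = power_curve(speeds)
error = propagated_power_error(speeds)
print(f"mean power {power.mean():.0f} kW, propagated error "
      f"{100.0 * error.sum() / power.sum():.1f} % of total output")
```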

  19. Improvements in absolute seismometer sensitivity calibration using local earth gravity measurements

    Science.gov (United States)

    Anthony, Robert E.; Ringler, Adam; Wilson, David

    2018-01-01

    The ability to determine both absolute and relative seismic amplitudes is fundamentally limited by the accuracy and precision with which scientists are able to calibrate seismometer sensitivities and characterize their response. Currently, across the Global Seismic Network (GSN), errors in midband sensitivity exceed 3% at the 95% confidence interval and are the least‐constrained response parameter in seismic recording systems. We explore a new methodology utilizing precise absolute Earth gravity measurements to determine the midband sensitivity of seismic instruments. We first determine the absolute sensitivity of Kinemetrics EpiSensor accelerometers to 0.06% at the 99% confidence interval by inverting them in a known gravity field at the Albuquerque Seismological Laboratory (ASL). After the accelerometer is calibrated, we install it in its normal configuration next to broadband seismometers and subject the sensors to identical ground motions to perform relative calibrations of the broadband sensors. Using this technique, we are able to determine the absolute midband sensitivity of the vertical components of Nanometrics Trillium Compact seismometers to within 0.11% and Streckeisen STS‐2 seismometers to within 0.14% at the 99% confidence interval. The technique enables absolute calibrations from first principles that are traceable to National Institute of Standards and Technology (NIST) measurements while providing nearly an order of magnitude more precision than step‐table calibrations.

  20. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
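
    To make the distinction concrete, here is a minimal toy contrasting a total-error-reduction update (Rescorla-Wagner style, in which all cues present on a trial share a single compound prediction error) with a local-error-reduction update (each cue compared against the outcome on its own). It is a schematic illustration, not the authors' specific models.

```python
import numpy as np

def train(cues, outcomes, rule="TER", lr=0.1):
    """Train associative weights over trials.

    cues:     (n_trials, n_cues) 0/1 matrix of which cues were present
    outcomes: (n_trials,) outcome magnitude on each trial
    rule:     'TER' uses the compound prediction error for every cue;
              'LER' uses each cue's own prediction error.
    """
    weights = np.zeros(cues.shape[1])
    for x, outcome in zip(cues, outcomes):
        if rule == "TER":
            error = outcome - np.dot(weights, x)   # one shared error term
            weights += lr * error * x
        else:
            errors = outcome - weights             # one error per cue
            weights += lr * errors * x
    return weights

# Toy overshadowing design: cues A and B always presented together with reward.
cues = np.tile([1, 1], (50, 1))
outcomes = np.ones(50)
print("TER weights:", train(cues, outcomes, "TER"))   # the cues share the outcome
print("LER weights:", train(cues, outcomes, "LER"))   # each weight approaches 1
```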

  1. THE ABSOLUTE MAGNITUDE OF RRc VARIABLES FROM STATISTICAL PARALLAX

    International Nuclear Information System (INIS)

    Kollmeier, Juna A.; Burns, Christopher R.; Thompson, Ian B.; Preston, George W.; Crane, Jeffrey D.; Madore, Barry F.; Morrell, Nidia; Prieto, José L.; Shectman, Stephen; Simon, Joshua D.; Villanueva, Edward; Szczygieł, Dorota M.; Gould, Andrew; Sneden, Christopher; Dong, Subo

    2013-01-01

    We present the first definitive measurement of the absolute magnitude of RR Lyrae c-type variable stars (RRc) determined purely from statistical parallax. We use a sample of 242 RRc variables selected from the All Sky Automated Survey for which high-quality light curves, photometry, and proper motions are available. We obtain high-resolution echelle spectra for these objects to determine radial velocities and abundances as part of the Carnegie RR Lyrae Survey. We find that M_V(RRc) = 0.59 ± 0.10 at a mean metallicity of [Fe/H] = –1.59. This is to be compared with previous estimates for RRab stars (M_V(RRab) = 0.76 ± 0.12) and the only direct measurement of an RRc absolute magnitude (RZ Cephei, M_V(RRc) = 0.27 ± 0.17). We find the bulk velocity of the halo relative to the Sun to be (W_π, W_θ, W_z) = (12.0, –209.9, 3.0) km s⁻¹ in the radial, rotational, and vertical directions, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (150.4, 106.1, 96.0) km s⁻¹. For the disk, we find (W_π, W_θ, W_z) = (13.0, –42.0, –27.3) km s⁻¹ relative to the Sun, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (67.7, 59.2, 54.9) km s⁻¹. Finally, as a byproduct of our statistical framework, we are able to demonstrate that UCAC2 proper-motion errors are significantly overestimated, as verified by UCAC4

  2. Achieving Climate Change Absolute Accuracy in Orbit

    Science.gov (United States)

    Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.; hide

    2013-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système Internationale (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.

  3. Errors in Neonatology

    OpenAIRE

    Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano

    2013-01-01

    Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...

  4. Used percentage veto for LIGO and virgo binary inspiral searches

    International Nuclear Information System (INIS)

    Isogai, Tomoki

    2010-01-01

    A challenge for ground-based gravitational wave detectors such as LIGO and Virgo is to understand the origin of non-astrophysical transients that contribute to the background noise, obscuring real astrophysically produced signals. To help this effort, there are a number of environmental and instrumental sensors around the site, recording data in 'channels'. We developed a method called the used percentage veto to eliminate corrupted data based on the statistical correlation between transients in the gravitational wave channel and in the auxiliary channels. The results are used to improve inspiral binary searches on LIGO and Virgo data. We also developed a way to apply this method to help find the physical origin of such transients for detector characterization. After identifying statistically correlated channels, a follow-up code clusters coincident events between the gravitational wave channel and auxiliary channels, and thereby classifies noise by correlated channels. For each selected event, the code also gathers and creates information that is helpful for further investigations. The method is contributing to identifying problems and improving data quality for the LIGO S6 and Virgo VSR2 science runs.
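
    As a schematic illustration of the statistical idea (not the exact LIGO/Virgo implementation), the sketch below counts how many transients in an auxiliary channel coincide, within a time window, with transients in the gravitational-wave channel, and reports that fraction as a "use percentage" for ranking candidate veto channels.

```python
import numpy as np

def use_percentage(aux_times, gw_times, window_s=0.1):
    """Fraction (in percent) of auxiliary-channel transients that have at least
    one gravitational-wave-channel transient within +/- window_s seconds.
    Schematic version of a used-percentage-style statistic."""
    aux_times = np.asarray(aux_times, dtype=float)
    gw_times = np.sort(np.asarray(gw_times, dtype=float))
    idx = np.searchsorted(gw_times, aux_times)
    left = np.abs(aux_times - gw_times[np.clip(idx - 1, 0, len(gw_times) - 1)])
    right = np.abs(aux_times - gw_times[np.clip(idx, 0, len(gw_times) - 1)])
    nearest = np.minimum(left, right)          # distance to nearest GW transient
    return 100.0 * np.mean(nearest <= window_s)

rng = np.random.default_rng(1)
gw = np.sort(rng.uniform(0.0, 1000.0, 200))                  # GW-channel times (s)
aux = np.concatenate([gw[:80] + rng.normal(0.0, 0.02, 80),   # correlated glitches
                      rng.uniform(0.0, 1000.0, 120)])        # accidental glitches
print(f"use percentage: {use_percentage(aux, gw):.1f} %")
```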

  5. Percentage depth dose evaluation in heterogeneous media using thermoluminescent dosimetry

    Science.gov (United States)

    da Rosa, L.A.R.; Campos, L.T.; Alves, V.G.L.; Batista, D.V.S.; Facure, A.

    2010-01-01

    The purpose of this study is to investigate the influence of lung heterogeneity inside a soft tissue phantom on percentage depth dose (PDD). PDD curves were obtained experimentally using LiF:Mg,Ti (TLD-100) thermoluminescent detectors and applying the Eclipse treatment planning system algorithms Batho, modified Batho (M-Batho or BMod), equivalent TAR (E-TAR or EQTAR), and the anisotropic analytical algorithm (AAA) for a 15 MV photon beam and field sizes of 1×1, 2×2, 5×5, and 10×10 cm². Monte Carlo simulations were performed using the DOSRZnrc user code of EGSnrc. The experimental results agree with Monte Carlo simulations for all irradiation field sizes. Comparisons with Monte Carlo calculations show that the AAA algorithm provides the best simulations of PDD curves for all field sizes investigated. However, even this algorithm cannot accurately predict PDD values in the lung for field sizes of 1×1 and 2×2 cm². An overdosage in the lung of about 40% and 20% is calculated by the AAA algorithm close to the soft tissue/lung interface for 1×1 and 2×2 cm² field sizes, respectively. It was demonstrated that differences of 100% between Monte Carlo results and the responses of the Batho, modified Batho, and equivalent TAR algorithms may exist inside the lung region for the 1×1 cm² field. PACS number: 87.55.kd

  6. Body Fat Percentage Prediction Using Intelligent Hybrid Approaches

    Directory of Open Access Journals (Sweden)

    Yuehjen E. Shao

    2014-01-01

    Full Text Available Excess body fat often leads to obesity. Obesity is typically associated with serious medical diseases, such as cancer, heart disease, and diabetes. Accordingly, knowing the body fat is an extremely important issue since it affects everyone's health. Although there are several ways to measure the body fat percentage (BFP), the accurate methods are often associated with hassle and/or high costs. Traditional single-stage approaches may use certain body measurements or explanatory variables to predict the BFP. Diverging from existing approaches, this study proposes new intelligent hybrid approaches to obtain fewer explanatory variables, and the proposed forecasting models are able to effectively predict the BFP. The proposed hybrid models consist of multiple regression (MR), artificial neural network (ANN), multivariate adaptive regression splines (MARS), and support vector regression (SVR) techniques. The first stage of the modeling includes the use of MR and MARS to obtain fewer but more important sets of explanatory variables. In the second stage, the remaining important variables serve as inputs for the other forecasting methods. A real dataset was used to demonstrate the development of the proposed hybrid models. The prediction results revealed that the proposed hybrid schemes outperformed the typical, single-stage forecasting models.
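
    A minimal two-stage sketch of this kind of hybrid scheme is given below using scikit-learn: a linear model first screens the explanatory variables, and the retained ones feed a support vector regressor. The data, importance threshold, and model settings are synthetic placeholders, not the study's dataset or its exact MR/ANN/MARS/SVR configuration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                                        # synthetic body measurements
y = 20.0 + 3.0 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(0, 1.0, 200)    # synthetic BFP

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: screen variables with a simple linear fit on standardized inputs.
stage1 = make_pipeline(StandardScaler(), LinearRegression()).fit(X_tr, y_tr)
coefs = np.abs(stage1.named_steps["linearregression"].coef_)
keep = coefs > 0.5 * coefs.max()               # crude importance threshold

# Stage 2: fit a support vector regressor on the retained variables only.
stage2 = make_pipeline(StandardScaler(), SVR(C=10.0)).fit(X_tr[:, keep], y_tr)
print("kept variables:", np.where(keep)[0],
      "test R^2:", round(stage2.score(X_te[:, keep], y_te), 3))
```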

  7. Utility of Immature Granulocyte Percentage in Pediatric Appendicitis

    Science.gov (United States)

    Mathews, Eleanor K.; Griffin, Russell L.; Mortellaro, Vincent; Beierle, Elizabeth A.; Harmon, Carroll M.; Chen, Mike K.; Russell, Robert T.

    2014-01-01

    Background Acute appendicitis is the most common cause of abdominal surgery in children. Adjuncts are utilized to help clinicians predict acute or perforated appendicitis, which may affect treatment decisions. Automated hematologic analyzers can perform more accurate automated differentials, including immature granulocyte percentages (IG%). Elevated IG% has demonstrated better accuracy for predicting sepsis in the neonatal population than traditional immature-to-total neutrophil count (I/T) ratios. We intended to assess the additional discriminatory ability of IG%, beyond traditionally assessed parameters, in the differentiation between acute and perforated appendicitis. Materials and Methods We identified all patients with appendicitis from July 2012 to June 2013 by ICD-9 code. Charts were reviewed for relevant demographic, clinical, and outcome data, which were compared between acute and perforated appendicitis groups using Fisher's exact test and the t-test for categorical and continuous variables, respectively. We used an adjusted logistic regression model incorporating clinical lab values to predict the odds of perforated appendicitis. Results 251 patients were included in the analysis. Those with perforated appendicitis had a higher white blood cell (WBC) count (p=0.0063) and higher C-reactive protein (CRP) than those with acute appendicitis. The c-statistic of the final model was 0.70, suggesting fair discriminatory ability in predicting perforated appendicitis. Conclusions IG% did not provide any additional benefit over elevated CRP and the presence of left shift in the differentiation between acute and perforated appendicitis. PMID:24793450

  8. Absolute measurement of environmental radon content

    International Nuclear Information System (INIS)

    Ji Changsong

    1987-01-01

    A transportable meter for environmental radon measurement with a 40-liter decay chamber is designed on the principle of Thomas' two-filter absolute measurement of radon content. The sensitivity is 0.37 Bq·m⁻³ with a 95% confidence level. This paper describes the experimental method of measurement and its intrinsic uncertainty. The typical intrinsic uncertainty (for a radon concentration of n × 3.7 Bq·m⁻³) is <10%. The parameter of exit filter efficiency is introduced into the formula, and a verification is done for the case when the diameter of the exit filter is much less than that of the inlet one

  9. Fractional order absolute vibration suppression (AVS) controllers

    Science.gov (United States)

    Halevi, Yoram

    2017-04-01

    Absolute vibration suppression (AVS) is a control method for flexible structures. The first step is an accurate, infinite dimension, transfer function (TF), from actuation to measurement. This leads to the collocated, rate feedback AVS controller that in some cases completely eliminates the vibration. In case of the 1D wave equation, the TF consists of pure time delays and low order rational terms, and the AVS controller is rational. In all other cases, the TF and consequently the controller are fractional order in both the delays and the "rational parts". The paper considers stability, performance and actual implementation in such cases.

  10. Online absolute pose compensation and steering control of industrial robot based on six degrees of freedom laser measurement

    Science.gov (United States)

    Yang, Juqing; Wang, Dayong; Fan, Baixing; Dong, Dengfeng; Zhou, Weihu

    2017-03-01

    In-situ intelligent manufacturing for large-volume equipment requires industrial robots with absolute high-accuracy positioning and orientation steering control. Conventional robots mainly employ an offline calibration technology to identify and compensate key robotic parameters. However, the dynamic and static parameters of a robot change nonlinearly. It is not possible to acquire a robot's actual parameters and control the absolute pose of the robot with a high accuracy within a large workspace by offline calibration in real-time. This study proposes a real-time online absolute pose steering control method for an industrial robot based on six degrees of freedom laser tracking measurement, which adopts comprehensive compensation and correction of differential movement variables. First, the pose steering control system and robot kinematics error model are constructed, and then the pose error compensation mechanism and algorithm are introduced in detail. By accurately achieving the position and orientation of the robot end-tool, mapping the computed Jacobian matrix of the joint variable and correcting the joint variable, the real-time online absolute pose compensation for an industrial robot is accurately implemented in simulations and experimental tests. The average positioning error is 0.048 mm and orientation accuracy is better than 0.01 deg. The results demonstrate that the proposed method is feasible, and the online absolute accuracy of a robot is sufficiently enhanced.
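
    The differential correction step described above can be sketched as follows: given a 6-DOF pose error of the end-effector measured by the laser tracker and the manipulator Jacobian at the current joint configuration, a damped least-squares update of the joint variables drives the pose toward the target. The Jacobian and numbers below are generic placeholders, not the kinematic model of any particular robot.

```python
import numpy as np

def joint_correction(jacobian, pose_error, damping=1e-3):
    """Damped least-squares joint update: dq = J^T (J J^T + lambda^2 I)^-1 dx,
    where dx is the 6-DOF pose error (3 position + 3 orientation components)."""
    J = np.asarray(jacobian, dtype=float)
    JJt = J @ J.T + damping**2 * np.eye(J.shape[0])
    return J.T @ np.linalg.solve(JJt, pose_error)

# Placeholder 6x6 Jacobian and a small measured pose error (mm and rad).
rng = np.random.default_rng(2)
J = rng.normal(size=(6, 6))
dx = np.array([0.05, -0.02, 0.03, 0.001, -0.0005, 0.0008])
dq = joint_correction(J, dx)
print("joint correction:", np.round(dq, 5))
```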

  11. Systematic Procedural Error

    National Research Council Canada - National Science Library

    Byrne, Michael D

    2006-01-01

    .... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail both empirically and through computational cognitive modeling...

  12. Human errors and mistakes

    International Nuclear Information System (INIS)

    Wahlstroem, B.

    1993-01-01

    Human errors make a major contribution to the risks of industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. In avoiding human errors it is necessary to adapt the systems to their operators. The complexity of modern industrial systems is, however, increasing the danger of system accidents. Models of the human operator have been proposed, but these models are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic efforts. The paper gives a brief summary of research on human error and concludes with suggestions for further work. (orig.)

  13. Regional absolute conductivity reconstruction using projected current density in MREIT

    International Nuclear Information System (INIS)

    Sajib, Saurav Z K; Kim, Hyung Joong; Woo, Eung Je; Kwon, Oh In

    2012-01-01

    slice and the reconstructed regional projected current density, we propose a direct non-iterative algorithm to reconstruct the absolute conductivity in the ROI. The numerical simulations in the presence of various degrees of noise, as well as a phantom MRI imaging experiment, showed that the proposed method reconstructs the regional absolute conductivity in a ROI within a subject including the defective regions. In the simulation experiment, the relative L²-mode errors of the reconstructed regional and global conductivities were 0.79 and 0.43, respectively, using a noise level of 50 dB in the defective region. (paper)

  14. Linear ultrasonic motor for absolute gravimeter.

    Science.gov (United States)

    Jian, Yue; Yao, Zhiyuan; Silberschmidt, Vadim V

    2017-05-01

    Thanks to their compactness and suitability for vacuum applications, linear ultrasonic motors are considered as substitutes for classical electromagnetic motors as driving elements in absolute gravimeters. Still, their application is prevented by relatively low power output. To overcome this limitation and provide better stability, a V-type linear ultrasonic motor with a new clamping method is proposed for a gravimeter. In this paper, a mechanical model of stators with flexible clamping components is suggested, according to a design criterion for clamps of linear ultrasonic motors. After that, the effect of tangential and normal rigidity of the clamping components on mechanical output is studied. It is followed by discussion of a new clamping method with sufficient tangential rigidity and a capability to facilitate pre-load. Additionally, a prototype of the motor with the proposed clamping method was fabricated and performance tests in the vertical direction were implemented. Experimental results show that the suggested motor has structural stability and high dynamic performance, such as a no-load speed of 1.4 m/s and a maximal thrust of 43 N, meeting the requirements for absolute gravimeters. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Relational versus absolute representation in categorization.

    Science.gov (United States)

    Edwards, Darren J; Pothos, Emmanuel M; Perlman, Amotz

    2012-01-01

    This study explores relational-like and absolute-like representations in categorization. Although there is much evidence that categorization processes can involve information about both the particular physical properties of studied instances and abstract (relational) properties, there has been little work on the factors that lead to one kind of representation as opposed to the other. We tested 370 participants in 6 experiments, in which participants had to classify new items into predefined artificial categories. In 4 experiments, we observed a predominantly relational-like mode of classification, and in 2 experiments we observed a shift toward an absolute-like mode of classification. These results suggest 3 factors that promote a relational-like mode of classification: fewer items per group, more training groups, and the presence of a time delay. Overall, we propose that less information about the distributional properties of a category or weaker memory traces for the category exemplars (induced, e.g., by having smaller categories or a time delay) can encourage relational-like categorization.

  16. On the absolute meaning of motion

    Directory of Open Access Journals (Sweden)

    H. Edwards

    Full Text Available The present manuscript aims to clarify why motion causes matter to age more slowly in a comparable sense, and how this relates to relativistic effects caused by motion. A fresh analysis of motion, built on a first axiom, delivers a proof whose result yields significant new understanding and computational power. A review of experimental results demonstrates that unaccelerated motion causes matter to age more slowly in a comparable, observer-independent sense. Whilst focusing on this absolute effect, the present manuscript clarifies its context with respect to relativistic effects, detailing their relationship and incorporating both into one consistent picture. The presented theoretical results make new predictions and are testable through a suggested experiment of a novel nature. The manuscript finally arrives at an experimental tool and methodology which, as far as motion in ungravitated space is concerned or gravity is appreciated, enables us to find the absolute, observer-independent picture of reality that is reflected in the comparable display of atomic clocks. The discussion of the theoretical results derives a physical, causal understanding of gravity, a mathematical formulation of which will be presented. Keywords: Kinematics, Gravity, Atomic clocks, Cosmic microwave background

  17. Standardization of the cumulative absolute velocity

    International Nuclear Information System (INIS)

    O'Hara, T.F.; Jacobson, J.P.

    1991-12-01

    EPRI NP-5930, ''A Criterion for Determining Exceedance of the Operating Basis Earthquake,'' was published in July 1988. As defined in that report, the Operating Basis Earthquake (OBE) is exceeded when both a response spectrum parameter and a second damage parameter, referred to as the Cumulative Absolute Velocity (CAV), are exceeded. In the review process of the above report, it was noted that the calculation of CAV could be confounded by time history records of long duration containing low (nondamaging) acceleration. Therefore, it is necessary to standardize the method of calculating CAV to account for record length. This standardized methodology allows consistent comparisons between future CAV calculations and the adjusted CAV threshold value, based upon applying the standardized methodology to the data set presented in EPRI NP-5930. The recommended method to standardize the CAV calculation is to window its calculation on a second-by-second basis for a given time history: a one-second interval contributes to the CAV only if the absolute acceleration exceeds 0.025 g at some time during that interval. The earthquake records used in EPRI NP-5930 have been reanalyzed on this basis, and the adjusted threshold of damage for CAV was found to be 0.16 g-sec
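
    A sketch of the windowed calculation described above is given below, under the reading that a one-second interval contributes to the CAV only if its peak absolute acceleration exceeds 0.025 g; the sampling rate and input acceleration record are placeholders.

```python
import numpy as np

def standardized_cav(accel_g, fs_hz, threshold_g=0.025):
    """Standardized cumulative absolute velocity (g-sec): integrate |a(t)| over
    each one-second window, but only for windows whose peak absolute
    acceleration exceeds the threshold (discarding non-damaging motion)."""
    accel_g = np.asarray(accel_g, dtype=float)
    n_per_window = int(fs_hz)
    cav = 0.0
    for k in range(len(accel_g) // n_per_window):
        window = accel_g[k * n_per_window:(k + 1) * n_per_window]
        if np.max(np.abs(window)) > threshold_g:
            cav += np.trapz(np.abs(window), dx=1.0 / fs_hz)
    return cav

# Placeholder record: 20 s of low-level noise with a 2 s stronger burst.
fs = 100.0
t = np.arange(0.0, 20.0, 1.0 / fs)
a = 0.005 * np.random.default_rng(3).normal(size=t.size)
a[900:1100] += 0.1 * np.sin(2.0 * np.pi * 5.0 * t[900:1100])
print(f"standardized CAV = {standardized_cav(a, fs):.3f} g-sec")
```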

  18. [Sedentary lifestyle: physical activity duration versus percentage of energy expenditure].

    Science.gov (United States)

    Cabrera de León, Antonio; Rodríguez-Pérez, María del C; Rodríguez-Benjumeda, Luis M; Anía-Lafuente, Basilio; Brito-Díaz, Buenaventura; Muros de Fuentes, Mercedes; Almeida-González, Delia; Batista-Medina, Marta; Aguirre-Jaime, Armando

    2007-03-01

    To compare different definitions of a sedentary lifestyle and to determine which is the most appropriate for demonstrating its relationship with the metabolic syndrome and other cardiovascular risk factors. A cross-sectional study of 5814 individuals was carried out. Comparisons were made between two definitions of a sedentary lifestyle: one based on active energy expenditure being less than 10% of total energy expenditure, and the other, on performing less than 25-30 minutes of physical activity per day. Reported levels of physical activity, anthropometric measurements, and biochemical markers of cardiovascular risk were recorded. The associations between a sedentary lifestyle and metabolic syndrome and other risk factors were adjusted for gender, age and tobacco use. The prevalence of a sedentary lifestyle was higher in women (70%) than in men (45-60%, according to the definition used). The definitions based on physical activity duration and on energy expenditure were equally useful: there were direct associations between a sedentary lifestyle and metabolic syndrome, body mass index, abdominal and pelvic circumferences, systolic blood pressure, heart rate, apolipoprotein B, and triglycerides, and inverse associations with high-density lipoprotein cholesterol and paraoxonase activity, which demonstrated the greatest percentage difference between sedentary and active individuals. An incidental finding was that both definitions of a sedentary lifestyle were more strongly associated with the metabolic syndrome as defined by International Diabetes Federation criteria than by Adult Treatment Panel III criteria. Given that it is relatively easy to determine whether a patient performs less than 25 minutes of physical activity per day, use of this definition of a sedentary lifestyle is recommended for clinical practice. The serum paraoxonase activity level could provide a useful marker for studying sedentary lifestyles.

  19. Absolute determination of the deuterium content of heavy water, measurement of absolute density

    International Nuclear Information System (INIS)

    Ceccaldi, M.; Riedinger, M.; Menache, M.

    1975-01-01

    The absolute density of two heavy water samples rich in deuterium (with a grade higher than 99.9%) was determined with the hydrostatic method. The exact isotopic composition of this water (hydrogen and oxygen isotopes) was very carefully studied. A theoretical estimate enabled us to get the absolute density value of isotopically pure D₂¹⁶O. This value was found to be 1104.750 kg·m⁻³ at t₆₈ = 22.3 °C and under the pressure of one atmosphere. (orig.) [de

  20. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  1. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  2. [Medication errors in Spanish intensive care units].

    Science.gov (United States)

    Merino, P; Martín, M C; Alonso, A; Gutiérrez, I; Alvarez, J; Becerril, F

    2013-01-01

    To estimate the incidence of medication errors in Spanish intensive care units. Post hoc study of the SYREC trial. A longitudinal observational study carried out during 24 hours in patients admitted to the ICU. Spanish intensive care units. Patients admitted to the intensive care unit participating in the SYREC during the period of study. Risk, individual risk, and rate of medication errors. The final study sample consisted of 1017 patients from 79 intensive care units; 591 (58%) were affected by one or more incidents. Of these, 253 (43%) had at least one medication-related incident. The total number of incidents reported was 1424, of which 350 (25%) were medication errors. The risk of suffering at least one incident was 22% (IQR: 8-50%) while the individual risk was 21% (IQR: 8-42%). The medication error rate was 1.13 medication errors per 100 patient-days of stay. Most incidents occurred in the prescription (34%) and administration (28%) phases, 16% resulted in patient harm, and 82% were considered "totally avoidable". Medication errors are among the most frequent types of incidents in critically ill patients, and are more common in the prescription and administration stages. Although most such incidents have no clinical consequences, a significant percentage prove harmful for the patient, and a large proportion are avoidable. Copyright © 2012 Elsevier España, S.L. and SEMICYUC. All rights reserved.

  3. Invariant and Absolute Invariant Means of Double Sequences

    Directory of Open Access Journals (Sweden)

    Abdullah Alotaibi

    2012-01-01

    Full Text Available We examine some properties of the invariant mean, define the concepts of strong σ-convergence and absolute σ-convergence for double sequences, and determine the associated sublinear functionals. We also define the absolute invariant mean through which the space of absolutely σ-convergent double sequences is characterized.

  4. Antimicrobial Resistance Percentages of Salmonella and Shigella in Seafood Imported to Jordan: Higher Percentages and More Diverse Profiles in Shigella.

    Science.gov (United States)

    Obaidat, Mohammad M; Bani Salman, Alaa E

    2017-03-01

    This study determined the prevalence and antimicrobial resistance of human-specific ( Shigella spp.) and zoonotic ( Salmonella enterica ) foodborne pathogens in internationally traded seafood. Sixty-four Salmonella and 61 Shigella isolates were obtained from 330 imported fresh fish samples from Egypt, Yemen, and India. The pathogens were isolated on selective media, confirmed by PCR, and tested for antimicrobial resistance. Approximately 79 and 98% of the Salmonella and Shigella isolates, respectively, exhibited resistance to at least one antimicrobial, and 8 and 49% exhibited multidrug resistance (resistance to three or more antimicrobial classes). Generally, Salmonella exhibited high resistance to amoxicillin-clavulanic acid, cephalothin, streptomycin, and ampicillin; very low resistance to kanamycin, tetracycline, gentamicin, chloramphenicol, nalidixic acid, sulfamethoxazole-trimethoprim, and ciprofloxacin; and no resistance to ceftriaxone. Meanwhile, Shigella spp. exhibited high resistance to tetracycline, amoxicillin-clavulanic acid, cephalothin, streptomycin, and ampicillin; low resistance to kanamycin, nalidixic acid, sulfamethoxazole-trimethoprim, and ceftriaxone; and very low resistance to gentamicin and ciprofloxacin. Salmonella isolates exhibited 14 resistance profiles, Shigella isolates 42. This study is novel in showing that a human-specific pathogen has higher antimicrobial resistance percentages and more diverse profiles than a zoonotic pathogen. Thus, the impact of antimicrobial use in humans is as significant as, if not more significant than, it is in animals in spreading antibiotic resistance through food. This study also demonstrates that locally derived antimicrobial resistance can spread and pose a public health risk worldwide through seafood trade and that high resistance would make a possible outbreak difficult to control. So, capacity building and monitoring harvest water areas are encouraged in fish producing countries.

  5. How is an absolute democracy possible?

    Directory of Open Access Journals (Sweden)

    Joanna Bednarek

    2011-01-01

    Full Text Available In the last part of the Empire trilogy, Commonwealth, Negri and Hardt ask about the possibility of the self-governance of the multitude. When answering, they argue that absolute democracy, understood as the political articulation of the multitude that does not entail its unification (construction of the people), is possible. As Negri states, this way of thinking about political articulation is rooted in the tradition of democratic materialism and constitutes the alternative to the dominant current of modern political philosophy that identifies political power with sovereignty. The multitude organizes itself politically by means of the constitutive power, identical with the ontological creativity or productivity of the multitude. To state the problem of political organization means to state the problem of class composition: political democracy is at the same time economic democracy.

  6. Absolute partial photoionization cross sections of ethylene

    Science.gov (United States)

    Grimm, F. A.; Whitley, T. A.; Keller, P. R.; Taylor, J. W.

    1991-07-01

    Absolute partial photoionization cross sections for ionization out of the first four valence orbitals to the X 2B3u, A 2B3g, B 2Ag and C 2B2u states of the C2H4+ ion are presented as a function of photon energy over the energy range from 12 to 26 eV. The experimental results have been compared to previously published relative partial cross sections for the first two bands at 18, 21 and 24 eV. Comparison of the experimental data with continuum multiple-scattering Xα calculations provides evidence for extensive autoionization to the X 2B3u state and confirms the predicted shape resonances in ionization to the A 2B3g and B 2Ag states. Identification of possible transitions for the autoionizing resonances has been made using multiple-scattering transition-state calculations on Rydberg excited states.

  7. Absolute negative mobility in the anomalous diffusion

    Science.gov (United States)

    Chen, Ruyin; Chen, Chongyang; Nie, Linru

    2017-12-01

    Transport of an inertial Brownian particle driven by the multiplicative Lévy noise was investigated here. Numerical results indicate that: (i) The Lévy noise is able to induce absolute negative mobility (ANM) in the system, while disappearing in the deterministic case; (ii) the ANM can occur in the region of superdiffusion while disappearing in the region of normal diffusion, and the appropriate stable index of the Lévy noise makes the particle move along the opposite direction of the bias force to the maximum degree; (iii) symmetry breaking of the Lévy noise also causes the ANM effect. In addition, the intrinsic physical mechanism and conditions for the ANM to occur are discussed in detail. Our results have the implication that the Lévy noise plays an important role in the occurrence of the ANM phenomenon.

  8. Compensating additional optical power in the central zone of a multifocal contact lens for minimization of the shrinkage error of the shell mold in the injection molding process.

    Science.gov (United States)

    Vu, Lien T; Chen, Chao-Chang A; Lee, Chia-Cheng; Yu, Chia-Wei

    2018-04-20

    This study aims to develop a compensating method to minimize the shrinkage error of the shell mold (SM) in the injection molding (IM) process in order to obtain uniform optical power in the central optical zone of soft, axially symmetric multifocal contact lenses (CLs). The Z-shrinkage error along the Z (axial) axis of the anterior SM, corresponding to the anterior surface of a dry contact lens in the IM process, can be minimized by optimizing IM process parameters and then compensating with additional (Add) power in the central zone of the original lens design. First, the shrinkage error is minimized by optimizing three levels of four IM parameters, including mold temperature, injection velocity, packing pressure, and cooling time, in 18 IM simulations based on an orthogonal array L18(2¹×3⁴). Then, based on the Z-shrinkage error from the IM simulation, three new contact lens designs are obtained by increasing the Add power in the central zone of the original multifocal CL design to compensate for the optical power errors. Results obtained from the IM process simulations and the optical simulations show that the new CL design with a 0.1 D increase in Add power has the closest shrinkage profile to the original anterior SM profile, with a 55% reduction in absolute Z-shrinkage error and more uniform power in the central zone than in the other two cases. Moreover, actual IM experiments of the SM for casting soft multifocal CLs have been performed. The final product of wet CLs has been completed for both the original design and the new design. Results of the optical performance have verified the improvement achieved with the compensated CL design. The feasibility of this compensating method has been proven based on the measurement results of the produced soft multifocal CLs of the new design. Results of this study can be further applied to predict or compensate for the total optical power errors of soft multifocal CLs.

  9. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  10. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  11. An absolute calibration system for millimeter-accuracy APOLLO measurements

    Science.gov (United States)

    Adelberger, E. G.; Battat, J. B. R.; Birkmeier, K. J.; Colmenares, N. R.; Davis, R.; Hoyle, C. D.; Huang, L. R.; McMillan, R. J.; Murphy, T. W., Jr.; Schlerman, E.; Skrobol, C.; Stubbs, C. W.; Zach, A.

    2017-12-01

    Lunar laser ranging provides a number of leading experimental tests of gravitation—important in our quest to unify general relativity and the standard model of physics. The Apache Point Observatory Lunar Laser-ranging Operation (APOLLO) has for years achieved median range precision at the ∼2 mm level. Yet residuals in model-measurement comparisons are an order of magnitude larger, raising the question of whether the ranging data are not nearly as accurate as they are precise, or if the models are incomplete or ill-conditioned. This paper describes a new absolute calibration system (ACS) intended both as a tool for exposing and eliminating sources of systematic error, and also as a means to directly calibrate ranging data in situ. The system consists of a high-repetition-rate (80 MHz) laser emitting short pulses […], motivating continued work on model capabilities. The ACS provides the means to deliver APOLLO data both accurate and precise below the 2 mm level.

  12. Optimization of sample absorbance for quantitative analysis in the presence of pathlength error in the IR and NIR regions

    International Nuclear Information System (INIS)

    Hirschfeld, T.; Honigs, D.; Hieftje, G.

    1985-01-01

    Optical absorbance levels for quantitative analysis in the presence of photometric error have been described in the past. In newer instrumentation, such as FT-IR and NIRA spectrometers, the photometric error is no longer limiting. In these instruments, pathlength error due to cell or sampling irreproducibility is often a major concern. One can derive the optimal absorbance by taking both pathlength and photometric errors into account. This paper analyzes the cases of pathlength error >> photometric error (trivial) and various cases in which the pathlength error and the photometric error are of the same order: adjustable concentration (trivial until dilution errors are considered), constant relative pathlength error (trivial), and constant absolute pathlength error. The latter, in particular, is analyzed in detail to give the behavior of the error, the behavior of the optimal absorbance in its presence, and the total error levels attainable
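
    To illustrate the kind of analysis the abstract describes, the sketch below numerically locates the absorbance that minimizes the relative concentration error when a constant absolute pathlength error and a constant photometric (transmittance) error act together. The error model, its quadrature combination, and the magnitudes are assumptions for illustration, not the paper's derivation.

```python
import numpy as np

def relative_concentration_error(A, dT=0.005, rel_path_error_at_A1=0.01):
    """Relative concentration error versus absorbance A, assuming (i) a constant
    transmittance error dT and (ii) a constant absolute pathlength error that
    corresponds to a 1% relative error at A = 1 and scales as 1/A when the
    absorbance is raised by lengthening the cell. Combined in quadrature."""
    photometric = dT * 10.0**A / (A * np.log(10.0))
    pathlength = rel_path_error_at_A1 / A
    return np.sqrt(photometric**2 + pathlength**2)

A = np.linspace(0.05, 2.5, 500)
err = relative_concentration_error(A)
print(f"optimal absorbance ~ {A[np.argmin(err)]:.2f} AU, "
      f"minimum relative error ~ {100.0 * err.min():.2f} %")
```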

  13. Preventing Errors in Laterality

    OpenAIRE

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2014-01-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...

  14. Errors and violations

    International Nuclear Information System (INIS)

    Reason, J.

    1988-01-01

    This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between errors and violations, and between active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggests that errors and violations have different psychological origins. The concluding part outlines a resident pathogen view of accident causation and seeks to identify the various system pathways along which errors and violations may be propagated

  15. Variance computations for functional of absolute risk estimates.

    Science.gov (United States)

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  16. Fluctuation theorems in feedback-controlled open quantum systems: Quantum coherence and absolute irreversibility

    Science.gov (United States)

    Murashita, Yûto; Gong, Zongping; Ashida, Yuto; Ueda, Masahito

    2017-10-01

    The thermodynamics of quantum coherence has attracted growing attention recently, where the thermodynamic advantage of quantum superposition is characterized in terms of quantum thermodynamics. We investigate the thermodynamic effects of quantum coherent driving in the context of the fluctuation theorem. We adopt a quantum-trajectory approach to investigate open quantum systems under feedback control. In these systems, the measurement backaction in the forward process plays a key role, and therefore the corresponding time-reversed quantum measurement and postselection must be considered in the backward process, in sharp contrast to the classical case. The state reduction associated with quantum measurement, in general, creates a zero-probability region in the space of quantum trajectories of the forward process, which causes singularly strong irreversibility with divergent entropy production (i.e., absolute irreversibility) and hence makes the ordinary fluctuation theorem break down. In the classical case, the error-free measurement ordinarily leads to absolute irreversibility, because the measurement restricts classical paths to the region compatible with the measurement outcome. In contrast, in open quantum systems, absolute irreversibility is suppressed even in the presence of the projective measurement due to those quantum rare events that go through the classically forbidden region with the aid of quantum coherent driving. This suppression of absolute irreversibility exemplifies the thermodynamic advantage of quantum coherent driving. Absolute irreversibility is shown to emerge in the absence of coherent driving after the measurement, especially in systems under time-delayed feedback control. We show that absolute irreversibility is mitigated by increasing the duration of quantum coherent driving or decreasing the delay time of feedback control.

  17. Genomic DNA-based absolute quantification of gene expression in Vitis.

    Science.gov (United States)

    Gambetta, Gregory A; McElrone, Andrew J; Matthews, Mark A

    2013-07-01

    Many studies in which gene expression is quantified by polymerase chain reaction represent the expression of a gene of interest (GOI) relative to that of a reference gene (RG). Relative expression is founded on the assumptions that RG expression is stable across samples, treatments, organs, etc., and that reaction efficiencies of the GOI and RG are equal; assumptions which are often faulty. The true variability in RG expression and actual reaction efficiencies are seldom determined experimentally. Here we present a rapid and robust method for absolute quantification of expression in Vitis where varying concentrations of genomic DNA were used to construct GOI standard curves. This methodology was utilized to absolutely quantify and determine the variability of the previously validated RG ubiquitin (VvUbi) across three test studies in three different tissues (roots, leaves and berries). In addition, in each study a GOI was absolutely quantified. Data sets resulting from relative and absolute methods of quantification were compared and the differences were striking. VvUbi expression was significantly different in magnitude between test studies and variable among individual samples. Absolute quantification consistently reduced the coefficients of variation of the GOIs by more than half, often resulting in differences in statistical significance and in some cases even changing the fundamental nature of the result. Utilizing genomic DNA-based absolute quantification is fast and efficient. Through eliminating error introduced by assuming RG stability and equal reaction efficiencies between the RG and GOI this methodology produces less variation, increased accuracy and greater statistical power. © 2012 Scandinavian Plant Physiology Society.
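
    A minimal sketch in Python of the standard-curve arithmetic behind this kind of genomic-DNA-based absolute quantification; the copy numbers, Cq values and the Cq-based formulation are illustrative assumptions, not data or code from the study.

```python
import numpy as np

# Hypothetical standard curve from a genomic-DNA dilution series:
# known template copies per reaction and the Cq values they produced.
copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2])
cq_std = np.array([15.1, 18.5, 21.9, 25.4, 28.8])

# Cq is linear in log10(copies); the slope also yields the reaction efficiency.
slope, intercept = np.polyfit(np.log10(copies), cq_std, 1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0      # 1.0 would mean perfect doubling

def absolute_copies(cq):
    """Convert a sample Cq into absolute template copies via the gDNA curve."""
    return 10.0 ** ((cq - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {100 * efficiency:.0f}%")
print(f"sample with Cq = 23.0 -> {absolute_copies(23.0):.2e} copies")
```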

  18. Help prevent hospital errors

    Science.gov (United States)

    ... this page: //medlineplus.gov/ency/patientinstructions/000618.htm Help prevent hospital errors ... in the hospital. If You Are Having Surgery, Help Keep Yourself Safe Go to a hospital you ...

  19. Pedal Application Errors

    Science.gov (United States)

    2012-03-01

    This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...

  20. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables
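
    A small Python simulation of the effect described above: with fine grouping the rounding contribution behaves like the classical w^2/12 term, while coarse grouping makes the rounding error strongly correlated with the weighing error and the simple variance sum breaks down. All numbers are illustrative; this is not the MERDA moment-estimation method itself.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_w = 0.4                    # weighing (scale) error, in display units
n = 200_000
weigh_err = rng.normal(0.0, sigma_w, n)

for width in (0.1, 0.5, 1.0, 2.0):            # rounding (display) resolution
    rounded = np.round(weigh_err / width) * width
    round_err = rounded - weigh_err
    total_var = rounded.var()
    # Sheppard-style approximation: var ~ sigma_w^2 + width^2/12, adequate only
    # when the grouping is fine relative to sigma_w.
    approx = sigma_w ** 2 + width ** 2 / 12.0
    corr = np.corrcoef(round_err, weigh_err)[0, 1]
    print(f"width={width:3.1f}  var={total_var:.3f}  "
          f"sigma^2 + w^2/12={approx:.3f}  corr(round, weigh)={corr:+.2f}")
```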

  1. Spotting software errors sooner

    International Nuclear Information System (INIS)

    Munro, D.

    1989-01-01

    Static analysis is helping to identify software errors at an earlier stage and more cheaply than conventional methods of testing. RTP Software's MALPAS system also has the ability to check that a code conforms to its original specification. (author)

  2. Errors in energy bills

    International Nuclear Information System (INIS)

    Kop, L.

    2001-01-01

    On request, the Dutch Association for Energy, Environment and Water (VEMW) checks the energy bills for its customers. It appeared that in the year 2000 many small, but also big, errors were discovered in the bills of 42 businesses

  3. Medical Errors Reduction Initiative

    National Research Council Canada - National Science Library

    Mutter, Michael L

    2005-01-01

    The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...

  4. Design for Error Tolerance

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1983-01-01

    An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.

  5. Numerical evaluation of magnetic absolute measurements with arbitrarily distributed DI-fluxgate theodolite orientations

    Science.gov (United States)

    Brunke, Heinz-Peter; Matzka, Jürgen

    2018-01-01

    At geomagnetic observatories the absolute measurements are needed to determine the calibration parameters of the continuously recording vector magnetometer (variometer). Absolute measurements are indispensable for determining the vector of the geomagnetic field over long periods of time. A standard DI (declination, inclination) measuring scheme for absolute measurements establishes routines in magnetic observatories. The traditional measuring schema uses a fixed number of eight orientations (Jankowski et al., 1996). We present a numerical method, allowing for the evaluation of an arbitrary number (minimum of five, as there are five independent parameters) of telescope orientations. Our method provides D, I and Z base values and calculated error bars of them. A general approach has significant advantages. Additional measurements may be seamlessly incorporated for higher accuracy. Individual erroneous readings are identified and can be discarded without invalidating the entire data set. A priori information can be incorporated. We expect the general method to also ease requirements for automated DI-flux measurements. The method can reveal certain properties of the DI theodolite which are not captured by the conventional method. Based on the alternative evaluation method, a new faster and less error-prone measuring schema is presented. It avoids needing to calculate the magnetic meridian prior to the inclination measurements. Measurements in the vicinity of the magnetic equator are possible with theodolites and without a zenith ocular. The implementation of the method in MATLAB is available as source code at the GFZ Data Center Brunke (2017).
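
    The core idea above, an arbitrary over-determined set of orientations solved by least squares with formal error bars, can be illustrated with a deliberately simplified linear toy model in which each reading is the projection of a fixed field vector onto a unit direction. This is not the actual DI-flux measurement model used by the authors; all values are made up.

```python
import numpy as np

rng = np.random.default_rng(42)
B_true = np.array([20500.0, 1200.0, 45000.0])    # fictitious field vector (nT)

# Arbitrary number of telescope orientations (unit vectors); more than three
# over-determines the three unknowns and yields formal error bars.
n_dir = 8
dirs = rng.normal(size=(n_dir, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

sigma = 2.0                                       # reading noise (nT)
readings = dirs @ B_true + rng.normal(0.0, sigma, n_dir)

# Linear least squares: readings = dirs @ B
B_hat, _, _, _ = np.linalg.lstsq(dirs, readings, rcond=None)
dof = n_dir - 3
sigma_hat2 = np.sum((readings - dirs @ B_hat) ** 2) / dof
cov = sigma_hat2 * np.linalg.inv(dirs.T @ dirs)

print("estimate:", np.round(B_hat, 1))
print("1-sigma error bars:", np.round(np.sqrt(np.diag(cov)), 1))
```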

  6. Numerical evaluation of magnetic absolute measurements with arbitrarily distributed DI-fluxgate theodolite orientations

    Directory of Open Access Journals (Sweden)

    H.-P. Brunke

    2018-01-01

    Full Text Available At geomagnetic observatories the absolute measurements are needed to determine the calibration parameters of the continuously recording vector magnetometer (variometer). Absolute measurements are indispensable for determining the vector of the geomagnetic field over long periods of time. A standard DI (declination, inclination) measuring scheme for absolute measurements establishes routines in magnetic observatories. The traditional measuring schema uses a fixed number of eight orientations (Jankowski et al., 1996). We present a numerical method, allowing for the evaluation of an arbitrary number (minimum of five, as there are five independent parameters) of telescope orientations. Our method provides D, I and Z base values and calculated error bars of them. A general approach has significant advantages. Additional measurements may be seamlessly incorporated for higher accuracy. Individual erroneous readings are identified and can be discarded without invalidating the entire data set. A priori information can be incorporated. We expect the general method to also ease requirements for automated DI-flux measurements. The method can reveal certain properties of the DI theodolite which are not captured by the conventional method. Based on the alternative evaluation method, a new faster and less error-prone measuring schema is presented. It avoids needing to calculate the magnetic meridian prior to the inclination measurements. Measurements in the vicinity of the magnetic equator are possible with theodolites and without a zenith ocular. The implementation of the method in MATLAB is available as source code at the GFZ Data Center Brunke (2017).

  7. The Absolute Stability Analysis in Fuzzy Control Systems with Parametric Uncertainties and Reference Inputs

    Science.gov (United States)

    Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei

    This study analyzes the absolute stability in P and PD type fuzzy logic control systems with both certain and uncertain linear plants. Stability analysis includes the reference input, actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibriums of error) in P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady state errors of fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability based on Lur'e systems is also applied to the stability analysis of P type fuzzy control systems with uncertain plants. The PD type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. In our work, the absolute stability analysis of fuzzy control systems is given with respect to a non-zero reference input and an uncertain linear plant with the parametric robust Popov criterion unlike previous works. Moreover, a fuzzy current controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is specified with various equilibrium points of view in the simulation example. Finally, the comparisons are also given to show the effectiveness of the analysis method.
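
    For reference, the Popov criterion invoked above, in its standard single-input Lur'e form (stated here from the classical control literature, not taken from the paper itself):

```latex
% Lur'e system: \dot{x} = Ax + bu, \; y = c^{\mathsf T}x, \; u = -\varphi(y),
% with A Hurwitz and \varphi a memoryless nonlinearity in the sector [0, k].
% The origin is absolutely stable if there exists q \ge 0 such that
\operatorname{Re}\!\left[(1 + j\omega q)\,G(j\omega)\right] + \frac{1}{k} > 0
\quad \text{for all } \omega \ge 0,
\qquad G(s) = c^{\mathsf T}(sI - A)^{-1}b .
```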

  8. Apologies and Medical Error

    Science.gov (United States)

    2008-01-01

    One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177

  9. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  10. Quantitative structure activity relationship model for predicting the depletion percentage of skin allergic chemical substances of glutathione

    International Nuclear Information System (INIS)

    Si Hongzong; Wang Tao; Zhang Kejun; Duan Yunbo; Yuan Shuping; Fu Aiping; Hu Zhide

    2007-01-01

    A quantitative model was developed to predict the depletion percentage of glutathione (DPG) compounds by gene expression programming (GEP). Each kind of compound was represented by several calculated structural descriptors involving constitutional, topological, geometrical, electrostatic and quantum-chemical features of compounds. The GEP method produced a nonlinear and five-descriptor quantitative model with a mean error and a correlation coefficient of 10.52 and 0.94 for the training set, 22.80 and 0.85 for the test set, respectively. It is shown that the GEP predicted results are in good agreement with experimental ones, better than those of the heuristic method
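
    The two reported figures of merit are straightforward to reproduce for any predicted-versus-experimental data set; a minimal Python sketch follows. The values are made up, and "mean error" is interpreted here as mean absolute error, which may differ from the authors' exact definition.

```python
import numpy as np

def mean_error(y_true, y_pred):
    """Mean absolute error between experimental and predicted DPG values."""
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

def correlation(y_true, y_pred):
    """Pearson correlation coefficient between experimental and predicted values."""
    return np.corrcoef(y_true, y_pred)[0, 1]

# Made-up experimental vs. predicted depletion percentages, for illustration only.
y_exp = np.array([12.0, 35.5, 60.2, 78.9, 20.1, 95.0])
y_pred = np.array([15.3, 30.8, 65.0, 70.5, 25.6, 88.2])

print(f"mean error = {mean_error(y_exp, y_pred):.2f}")
print(f"r = {correlation(y_exp, y_pred):.2f}")
```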

  11. First Absolutely Calibrated Localized Measurements of Ion Velocity in the MST in Locked and Rotating Plasmas

    Science.gov (United States)

    Baltzer, M.; Craig, D.; den Hartog, D. J.; Nornberg, M. D.; Munaretto, S.

    2015-11-01

    An Ion Doppler Spectrometer (IDS) is used on MST for high time-resolution passive and active measurements of impurity ion emission. Absolutely calibrated measurements of flow are difficult because the spectrometer records data within 0.3 nm of the C+5 line of interest, and commercial calibration lamps do not produce lines in this narrow range. A novel optical system was designed to absolutely calibrate the IDS. The device uses an UV LED to produce a broad emission curve in the desired region. A Fabry-Perot etalon filters this light, cutting transmittance peaks into the pattern of the LED emission. An optical train of fused silica lenses focuses the light into the IDS with f/4. A holographic diffuser blurs the light cone to increase homogeneity. Using this light source, the absolute Doppler shift of ion emissions can be measured in MST plasmas. In combination with charge exchange recombination spectroscopy, localized ion velocities can now be measured. Previously, a time-averaged measurement along the chord bisecting the poloidal plane was used to calibrate the IDS; the quality of these central chord calibrations can be characterized with our absolute calibration. Calibration errors may also be quantified and minimized by optimizing the curve-fitting process. Preliminary measurements of toroidal velocity in locked and rotating plasmas will be shown. This work has been supported by the US DOE.
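
    Once the line center is absolutely calibrated, the ion velocity follows from the non-relativistic Doppler relation; a minimal Python sketch with placeholder numbers (the rest wavelength shown is an assumption for illustration, not necessarily the exact C+5 line used on MST):

```python
C_LIGHT = 2.998e8  # speed of light, m/s

def doppler_velocity(lambda_measured_nm, lambda_rest_nm):
    """Non-relativistic line-of-sight velocity from a measured line-center shift."""
    return C_LIGHT * (lambda_measured_nm - lambda_rest_nm) / lambda_rest_nm

# Hypothetical: rest wavelength 343.400 nm, fitted line center 0.010 nm to the red.
print(f"v = {doppler_velocity(343.410, 343.400) / 1e3:.1f} km/s")
```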

  12. Comparing different error conditions in filmdosemeter evaluation

    International Nuclear Information System (INIS)

    Roed, H.; Figel, M.

    2005-01-01

    Full text: In the evaluation of a film used as a personal dosemeter it may be necessary to mark the dosemeters when possible error conditions are recognized. These are errors that might have an influence on the ability to make a correct evaluation of the dose value, and include broken, contaminated or improperly handled dosemeters. In this project we have examined how two services (NIRH, GSF), from two different countries within the EU, mark their dosemeters. The services have a large difference in size, customer composition and issuing period, but both use film as their primary dosemeters. The possible error conditions that are examined here are dosemeters being contaminated, dosemeters exposed to moisture or light, missing filters in the dosemeter badges among others. The data are collected for the year 2003 where NIRH evaluated approximately 50 thousand and GSF about one million filmdosemeters. For each error condition the percentage of filmdosemeters belonging hereto is calculated as well as the distribution among different employee categories, i.e. industry, medicine, research, veterinary and other. For some error conditions we see a common pattern, while for others there is a large discrepancy between the services. The differences and possible explanations are discussed. The results of the investigation may motivate further comparisons between the different monitoring services in Europe. (author)

  13. THE ABSOLUTE MAGNITUDE OF RRc VARIABLES FROM STATISTICAL PARALLAX

    Energy Technology Data Exchange (ETDEWEB)

    Kollmeier, Juna A.; Burns, Christopher R.; Thompson, Ian B.; Preston, George W.; Crane, Jeffrey D.; Madore, Barry F.; Morrell, Nidia; Prieto, José L.; Shectman, Stephen; Simon, Joshua D.; Villanueva, Edward [Observatories of the Carnegie Institution of Washington, 813 Santa Barbara Street, Pasadena, CA 91101 (United States); Szczygieł, Dorota M.; Gould, Andrew [Department of Astronomy, The Ohio State University, 4051 McPherson Laboratory, Columbus, OH 43210 (United States); Sneden, Christopher [Department of Astronomy, University of Texas at Austin, TX 78712 (United States); Dong, Subo [Institute for Advanced Study, 500 Einstein Drive, Princeton, NJ 08540 (United States)

    2013-09-20

    We present the first definitive measurement of the absolute magnitude of RR Lyrae c-type variable stars (RRc) determined purely from statistical parallax. We use a sample of 242 RRc variables selected from the All Sky Automated Survey for which high-quality light curves, photometry, and proper motions are available. We obtain high-resolution echelle spectra for these objects to determine radial velocities and abundances as part of the Carnegie RR Lyrae Survey. We find that M_V,RRc = 0.59 ± 0.10 at a mean metallicity of [Fe/H] = –1.59. This is to be compared with previous estimates for RRab stars (M_V,RRab = 0.76 ± 0.12) and the only direct measurement of an RRc absolute magnitude (RZ Cephei, M_V,RRc = 0.27 ± 0.17). We find the bulk velocity of the halo relative to the Sun to be (W_π, W_θ, W_z) = (12.0, –209.9, 3.0) km s⁻¹ in the radial, rotational, and vertical directions, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (150.4, 106.1, 96.0) km s⁻¹. For the disk, we find (W_π, W_θ, W_z) = (13.0, –42.0, –27.3) km s⁻¹ relative to the Sun, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (67.7, 59.2, 54.9) km s⁻¹. Finally, as a byproduct of our statistical framework, we are able to demonstrate that UCAC2 proper-motion errors are significantly overestimated, as verified by UCAC4.

  14. Absolute Lower Bound on the Bounce Action

    Science.gov (United States)

    Sato, Ryosuke; Takimoto, Masahiro

    2018-03-01

    The decay rate of a false vacuum is determined by the minimal action solution of the tunneling field: the bounce. In this Letter, we focus on models with scalar fields which have a canonical kinetic term in N (> 2)-dimensional Euclidean space, and derive an absolute lower bound on the bounce action. In the case of four-dimensional space, we show the bounce action is generically larger than 24/λ_cr, where λ_cr ≡ max[-4V(ϕ)/|ϕ|⁴], with the false vacuum at ϕ = 0 and V(0) = 0. We derive this bound on the bounce action without solving the equation of motion explicitly. Our bound is derived by a quite simple discussion, and it provides useful information even if it is difficult to obtain the explicit form of the bounce solution. Our bound offers a sufficient condition for the stability of a false vacuum, and it is useful as a quick check on the vacuum stability for given models. Our bound can be applied to a broad class of scalar potentials with any number of scalar fields. We also discuss a necessary condition for the bounce action taking a value close to this lower bound.

  15. Gyrokinetic statistical absolute equilibrium and turbulence

    International Nuclear Information System (INIS)

    Zhu Jianzhou; Hammett, Gregory W.

    2010-01-01

    A paradigm based on the absolute equilibrium of Galerkin-truncated inviscid systems to aid in understanding turbulence [T.-D. Lee, Q. Appl. Math. 10, 69 (1952)] is taken to study gyrokinetic plasma turbulence: a finite set of Fourier modes of the collisionless gyrokinetic equations are kept and the statistical equilibria are calculated; possible implications for plasma turbulence in various situations are discussed. For the case of two spatial and one velocity dimension, in the calculation with discretization also of velocity v with N grid points (where N+1 quantities are conserved, corresponding to an energy invariant and N entropy-related invariants), the negative temperature states, corresponding to the condensation of the generalized energy into the lowest modes, are found. This indicates a generic feature of inverse energy cascade. Comparisons are made with some classical results, such as those of Charney-Hasegawa-Mima in the cold-ion limit. There is a universal shape for statistical equilibrium of gyrokinetics in three spatial and two velocity dimensions with just one conserved quantity. Possible physical relevance to turbulence, such as ITG zonal flows, and to a critical balance hypothesis are also discussed.

  16. Gyrokinetic Statistical Absolute Equilibrium and Turbulence

    International Nuclear Information System (INIS)

    Zhu, Jian-Zhou; Hammett, Gregory W.

    2011-01-01

    A paradigm based on the absolute equilibrium of Galerkin-truncated inviscid systems to aid in understanding turbulence (T.-D. Lee, 'On some statistical properties of hydrodynamical and magnetohydrodynamical fields,' Q. Appl. Math. 10, 69 (1952)) is taken to study gyrokinetic plasma turbulence: A finite set of Fourier modes of the collisionless gyrokinetic equations are kept and the statistical equilibria are calculated; possible implications for plasma turbulence in various situations are discussed. For the case of two spatial and one velocity dimension, in the calculation with discretization also of velocity v with N grid points (where N + 1 quantities are conserved, corresponding to an energy invariant and N entropy-related invariants), the negative temperature states, corresponding to the condensation of the generalized energy into the lowest modes, are found. This indicates a generic feature of inverse energy cascade. Comparisons are made with some classical results, such as those of Charney-Hasegawa-Mima in the cold-ion limit. There is a universal shape for statistical equilibrium of gyrokinetics in three spatial and two velocity dimensions with just one conserved quantity. Possible physical relevance to turbulence, such as ITG zonal flows, and to a critical balance hypothesis are also discussed.

  17. Measurement of the 235 U absolute activity

    International Nuclear Information System (INIS)

    Bueno, C.C.; Santos, M.D.S.

    1993-01-01

    The absolute activity of 235U contained in a sample was measured utilizing a sum-coincidence circuit which selects only the alpha particles emitted simultaneously with the 143 keV gamma radiation from 231Th (the product nucleus). The alpha particles were detected by means of a new type of gas scintillation chamber, in which the light emitted by excitation of the gas atoms, due to the passage of a charged incoming particle, has its intensity increased by the action of an applied electric field. The gamma radiation was detected by means of a 1″ × 1 1/2″ NaI(Tl) scintillation detector. The value obtained for the half-life of 235U, (7.04 ± 0.01) × 10⁸ y, was compared with the data available from various observers who used different experimental techniques. It is shown that our results are in excellent agreement with the best data available on the subject. (author) 15 refs, 5 figs, 1 tab

  18. Auditory processing in absolute pitch possessors

    Science.gov (United States)

    McKetton, Larissa; Schneider, Keith A.

    2018-05-01

    Absolute pitch (AP) is a rare ability in classifying a musical pitch without a reference standard. It has been of great interest to researchers studying auditory processing and music cognition since it is seldom expressed and sheds light on influences pertaining to neurodevelopmental biological predispositions and the onset of musical training. We investigated the smallest frequency that could be detected or just noticeable difference (JND) between two pitches. Here, we report significant differences in JND thresholds in AP musicians and non-AP musicians compared to non-musician control groups at both 1000 Hz and 987.76 Hz testing frequencies. Although the AP-musicians did better than non-AP musicians, the difference was not significant. In addition, we looked at neuro-anatomical correlates of musicianship and AP using structural MRI. We report increased cortical thickness of the left Heschl's Gyrus (HG) and decreased cortical thickness of the inferior frontal opercular gyrus (IFO) and circular insular sulcus volume (CIS) in AP compared to non-AP musicians and controls. These structures may therefore be optimally enhanced and reduced to form the most efficient network for AP to emerge.

  19. [Tobacco and plastic surgery: An absolute contraindication?

    Science.gov (United States)

    Matusiak, C; De Runz, A; Maschino, H; Brix, M; Simon, E; Claudot, F

    2017-08-01

    Smoking increases perioperative risk regarding wound healing, infection rate and failure of microsurgical procedures. There is no present consensus about plastic and aesthetic surgical indications concerning smoking patients. The aim of our study is to analyze French plastic surgeons' practices concerning smokers. A questionnaire was sent by e-mail to French plastic surgeons in order to evaluate their own operative indications: patient information about smoking dangers, pre- and postoperative delay of smoking cessation, type of intervention carried out, smoking cessation supports, use of screening tests and the smoking threshold leading to refusal of surgery were studied. Statistical tests were used to compare results according to practitioner activity (liberal or public), own smoking habits and time of installation. In 148 questionnaires, only one surgeon did not explain smoking risk. Of the surgeons, 49.3% proposed smoking-cessation supports, more frequently with public practice (P=0.019). In total, 85.4% of surgeons did not use screening tests. Years of installation affected operative indication with smoking patients (P=0.02). Pre- and postoperative smoking cessation delays were on average 4 and 3 weeks, respectively, in accordance with the literature. Potential improvements could be proposed to smoking patients' care: smoking cessation assistance, screening tests, absolute contraindication of some procedures or a level of consumption still to be determined. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  20. Reliability and error analysis on xenon/CT CBF

    International Nuclear Information System (INIS)

    Zhang, Z.

    2000-01-01

    This article provides a quantitative error analysis of a simulation model of xenon/CT CBF in order to investigate the behavior and effect of different types of errors such as CT noise, motion artifacts, lower percentage of xenon supply, lower tissue enhancements, etc. A mathematical model is built to simulate these errors. By adjusting the initial parameters of the simulation model, we can scale the Gaussian noise, control the percentage of xenon supply, and change the tissue enhancement with different kVp settings. The motion artifact will be treated separately by geometrically shifting the sequential CT images. The input function is chosen from an end-tidal xenon curve of a practical study. Four kinds of cerebral blood flow, 10, 20, 50, and 80 cc/100 g/min, are examined under different error environments and the corresponding CT images are generated following the currently popular timing protocol. The simulated studies will be fed to a regular xenon/CT CBF system for calculation and evaluation. A quantitative comparison is given to reveal the behavior and effect of individual error sources. Mixed error testing is also provided to inspect the combined effect of errors. The experiment shows that CT noise is still a major error source. The motion artifact affects the CBF results more geometrically than quantitatively. Lower xenon supply has a lesser effect on the results, but will reduce the signal/noise ratio. The lower xenon enhancement will lower the flow values in all areas of brain. (author)
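
    A toy version of this kind of error study can be sketched in Python by assuming the single-compartment Kety model commonly used for xenon/CT CBF; the arterial curve, noise levels and parameter values below are illustrative and this is not the authors' simulation model.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

LAMBDA = 1.0      # assumed brain-blood partition coefficient
TAU = 1.0         # assumed end-tidal xenon build-up time constant (min)

def c_arterial(t):
    """Idealized end-tidal/arterial xenon curve (wash-in only)."""
    return 1.0 - np.exp(-t / TAU)

def tissue_curve(t, f):
    """Kety single-compartment model: dCt/dt = f*Ca(t) - (f/lambda)*Ct."""
    def rhs(ct, tt):
        return f * c_arterial(tt) - (f / LAMBDA) * ct
    return odeint(rhs, 0.0, t).ravel()

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 25)   # 25 CT samples over 4 minutes
f_true = 0.5                    # roughly 50 cc/100 g/min for lambda ~ 1

for sigma in (0.00, 0.02, 0.05, 0.10):        # increasing CT noise levels
    noisy = tissue_curve(t, f_true) + rng.normal(0.0, sigma, t.size)
    f_fit, _ = curve_fit(tissue_curve, t, noisy, p0=[0.3])
    err = 100.0 * (f_fit[0] - f_true) / f_true
    print(f"noise sigma={sigma:.2f}  fitted f={f_fit[0]:.3f}  error={err:+.1f}%")
```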

  1. Absolute Position Sensing Based on a Robust Differential Capacitive Sensor with a Grounded Shield Window

    Directory of Open Access Journals (Sweden)

    Yang Bai

    2016-05-01

    Full Text Available A simple differential capacitive sensor is provided in this paper to measure the absolute positions of length measuring systems. By utilizing a shield window inside the differential capacitor, the measurement range and linearity range of the sensor can reach several millimeters. What is more interesting is that this differential capacitive sensor is only sensitive to one translational degree of freedom (DOF) movement, and immune to the vibration along the other two translational DOFs. In the experiment, we used a novel circuit based on an AC capacitance bridge to directly measure the differential capacitance value. The experimental result shows that this differential capacitive sensor has a sensitivity of 2 × 10−4 pF/μm with 0.08 μm resolution. The measurement range of this differential capacitive sensor is 6 mm, and the linearity error is less than 0.01% over the whole absolute position measurement range.

  2. Absolute beam-charge measurement for single-bunch electron beams

    International Nuclear Information System (INIS)

    Suwada, Tsuyoshi; Ohsawa, Satoshi; Furukawa, Kazuro; Akasaka, Nobumasa

    2000-01-01

    The absolute beam charge of a single-bunch electron beam with a pulse width of 10 ps and that of a short-pulsed electron beam with a pulse width of 1 ns were measured with a Faraday cup in a beam test for the KEK B-Factory (KEKB) injector linac. It is strongly desired to obtain a precise beam-injection rate to the KEKB rings, and to estimate the amount of beam loss. A wall-current monitor was also recalibrated within an error of ±2%. This report describes the new results for an absolute beam-charge measurement for single-bunch and short-pulsed electron beams, and recalibration of the wall-current monitors in detail. (author)

  3. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est”, a well-known and widespread Latin proverb which states that to err is human, and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes, thus it is important to accept them, learn from them, discover the reason why they make them, improve and move on. The significance of studying errors is described by Corder as: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the importance and the aim of this paper lie in analyzing errors in the process of second language acquisition and the way we teachers can benefit from mistakes to help students improve themselves while giving the proper feedback.

  4. Compact disk error measurements

    Science.gov (United States)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross Interleaved - Reed - Solomon - Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  5. Standard Error Computations for Uncertainty Quantification in Inverse Problems: Asymptotic Theory vs. Bootstrapping.

    Science.gov (United States)

    Banks, H T; Holm, Kathleen; Robbins, Danielle

    2010-11-01

    We computationally investigate two approaches for uncertainty quantification in inverse problems for nonlinear parameter dependent dynamical systems. We compare the bootstrapping and asymptotic theory approaches for problems involving data with several noise forms and levels. We consider both constant variance absolute error data and relative error which produces non-constant variance data in our parameter estimation formulations. We compare and contrast parameter estimates, standard errors, confidence intervals, and computational times for both bootstrapping and asymptotic theory methods.
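
    A compact Python illustration of the comparison for a generic nonlinear model with constant-variance absolute error: asymptotic standard errors from the estimated covariance matrix versus a residual bootstrap. The model and all numbers are placeholders, not those used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, x0):
    """Simple logistic growth curve used as the example model."""
    return K / (1.0 + ((K - x0) / x0) * np.exp(-r * t))

rng = np.random.default_rng(7)
t = np.linspace(0, 10, 40)
theta_true = (10.0, 0.8, 0.5)
y = logistic(t, *theta_true) + rng.normal(0.0, 0.3, t.size)  # constant-variance error

# Asymptotic-theory standard errors from the estimated covariance matrix.
theta_hat, cov = curve_fit(logistic, t, y, p0=(8, 1, 1))
se_asym = np.sqrt(np.diag(cov))

# Residual bootstrap standard errors.
resid = y - logistic(t, *theta_hat)
boot = []
for _ in range(500):
    y_b = logistic(t, *theta_hat) + rng.choice(resid, size=resid.size, replace=True)
    th_b, _ = curve_fit(logistic, t, y_b, p0=theta_hat)
    boot.append(th_b)
se_boot = np.std(np.array(boot), axis=0, ddof=1)

print("estimates      :", np.round(theta_hat, 3))
print("asymptotic SEs :", np.round(se_asym, 3))
print("bootstrap SEs  :", np.round(se_boot, 3))
```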

  6. Laboratory and field evaluation of the Partec CyFlow miniPOC for absolute and relative CD4 T-cell enumeration.

    Directory of Open Access Journals (Sweden)

    Djibril Wade

    Full Text Available A new CD4 point-of-care instrument, the CyFlow miniPOC, which provides absolute and percentage CD4 T-cell counts, used for screening and monitoring of HIV-infected patients in resource-limited settings, was introduced recently. We assessed the performance of this novel instrument in a reference laboratory and in a field setting in Senegal. A total of 321 blood samples were obtained from 297 adults and 24 children, all HIV-patients attending university hospitals in Dakar, or health centers in Ziguinchor. Samples were analyzed in parallel on CyFlow miniPOC, FACSCount CD4 and FACSCalibur to assess CyFlow miniPOC precision and accuracy. At the reference lab, CyFlow miniPOC, compared to FACSCalibur, showed an absolute mean bias of -12.6 cells/mm3 and a corresponding relative mean bias of -2.3% for absolute CD4 counts. For CD4 percentages, the absolute mean bias was -0.1%. Compared to FACSCount CD4, the absolute and relative mean biases were -31.2 cells/mm3 and -4.7%, respectively, for CD4 counts, whereas the absolute mean bias for CD4 percentages was 1.3%. The CyFlow miniPOC was able to classify HIV-patients eligible for ART with a sensitivity of ≥ 95% at the different ART-initiation thresholds (200, 350 and 500 CD4 cells/mm3). In the field lab, the room temperature ranged from 30 to 35°C during the working hours. At those temperatures, the CyFlow miniPOC, compared to FACSCount CD4, had an absolute and relative mean bias of 7.6 cells/mm3 and 2.8%, respectively, for absolute CD4 counts, and an absolute mean bias of 0.4% for CD4 percentages. The CyFlow miniPOC showed sensitivity equal to or greater than 94%. The CyFlow miniPOC showed high agreement with FACSCalibur and FACSCount CD4. The CyFlow miniPOC provides both reliable absolute CD4 counts and CD4 percentages even under the field conditions, and is suitable for monitoring HIV-infected patients in resource-limited settings.
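
    The bias and eligibility-sensitivity statistics quoted above are simple to compute for any paired data set; a Python sketch with entirely synthetic CD4 counts follows (the generating distributions are assumptions for illustration only, not the study data).

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical paired CD4 counts (cells/mm3) from a reference analyser and a
# point-of-care device; numbers are made up for illustration only.
reference = rng.gamma(shape=4.0, scale=120.0, size=300)
poc = reference * rng.normal(0.97, 0.08, reference.size)   # slight negative bias

diff = poc - reference
abs_mean_bias = diff.mean()                                 # cells/mm3
rel_mean_bias = 100.0 * (diff / reference).mean()           # percent

def sensitivity(threshold):
    """Fraction of reference-eligible patients also flagged eligible by the POC device."""
    eligible_ref = reference <= threshold
    eligible_poc = poc <= threshold
    return (eligible_ref & eligible_poc).sum() / eligible_ref.sum()

print(f"absolute mean bias: {abs_mean_bias:+.1f} cells/mm3")
print(f"relative mean bias: {rel_mean_bias:+.1f}%")
for thr in (200, 350, 500):
    print(f"sensitivity at {thr} cells/mm3: {100 * sensitivity(thr):.1f}%")
```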

  7. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recent published papers in PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, diagnostics. Risk factors include patients’ size, prematurity, vulnerability and underlying disease conditions but also multidisciplinary teams, working conditions providing fatigue, a large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is likely possible to change the conditions under which they work. Voluntary error report systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy) Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of a continuous retraining for technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion national health policy indirectly influences risk for errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  8. Evaluation of the absolute regional temperature potential

    Directory of Open Access Journals (Sweden)

    D. T. Shindell

    2012-09-01

    Full Text Available The Absolute Regional Temperature Potential (ARTP) is one of the few climate metrics that provides estimates of impacts at a sub-global scale. The ARTP presented here gives the time-dependent temperature response in four latitude bands (90–28° S, 28° S–28° N, 28–60° N and 60–90° N) as a function of emissions based on the forcing in those bands caused by the emissions. It is based on a large set of simulations performed with a single atmosphere-ocean climate model to derive regional forcing/response relationships. Here I evaluate the robustness of those relationships using the forcing/response portion of the ARTP to estimate regional temperature responses to the historic aerosol forcing in three independent climate models. These ARTP results are in good accord with the actual responses in those models. Nearly all ARTP estimates fall within ±20% of the actual responses, though there are some exceptions for 90–28° S and the Arctic, and in the latter the ARTP may vary with forcing agent. However, for the tropics and the Northern Hemisphere mid-latitudes in particular, the ±20% range appears to be roughly consistent with the 95% confidence interval. Land areas within these two bands respond 39–45% and 9–39% more than the latitude band as a whole. The ARTP, presented here in a slightly revised form, thus appears to provide a relatively robust estimate for the responses of large-scale latitude bands and land areas within those bands to inhomogeneous radiative forcing and thus potentially to emissions as well. Hence this metric could allow rapid evaluation of the effects of emissions policies at a finer scale than global metrics without requiring use of a full climate model.

  9. Orion Absolute Navigation System Progress and Challenge

    Science.gov (United States)

    Holt, Greg N.; D'Souza, Christopher

    2012-01-01

    The absolute navigation design of NASA's Orion vehicle is described. It has undergone several iterations and modifications since its inception, and continues as a work-in-progress. This paper seeks to benchmark the current state of the design and some of the rationale and analysis behind it. There are specific challenges to address when preparing a timely and effective design for the Exploration Flight Test (EFT-1), while still looking ahead and providing software extensibility for future exploration missions. The primary onboard measurements in a Near-Earth or Mid-Earth environment consist of GPS pseudo-range and delta-range, but for future explorations missions the use of star-tracker and optical navigation sources need to be considered. Discussions are presented for state size and composition, processing techniques, and consider states. A presentation is given for the processing technique using the computationally stable and robust UDU formulation with an Agee-Turner Rank-One update. This allows for computational savings when dealing with many parameters which are modeled as slowly varying Gauss-Markov processes. Preliminary analysis shows up to a 50% reduction in computation versus a more traditional formulation. Several state elements are discussed and evaluated, including position, velocity, attitude, clock bias/drift, and GPS measurement biases in addition to bias, scale factor, misalignment, and non-orthogonalities of the accelerometers and gyroscopes. Another consideration is the initialization of the EKF in various scenarios. Scenarios such as single-event upset, ground command, and cold start are discussed as are strategies for whole and partial state updates as well as covariance considerations. Strategies are given for dealing with latent measurements and high-rate propagation using multi-rate architecture. The details of the rate groups and the data flow between the elements is discussed and evaluated.
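
    To make the UDU bookkeeping concrete, here is a minimal Python sketch: a hand-written U D U^T factorization and a rank-one update done naively by refactorizing. The actual Agee-Turner algorithm updates U and D in place without refactorizing; this sketch only shows what such an update must produce, and is not flight code.

```python
import numpy as np

def udu(P):
    """Factor symmetric positive-definite P as P = U @ diag(d) @ U.T,
    with U unit upper triangular (the form used in UDU^T filters)."""
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j] - np.sum(d[j + 1:] * U[j, j + 1:] ** 2)
        for i in range(j - 1, -1, -1):
            U[i, j] = (P[i, j] - np.sum(d[j + 1:] * U[i, j + 1:] * U[j, j + 1:])) / d[j]
    return U, d

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
P = A @ A.T + 5 * np.eye(5)          # a well-conditioned covariance matrix
U, d = udu(P)
assert np.allclose(U @ np.diag(d) @ U.T, P)

# Rank-one (process-noise style) update P <- P + c * a a^T.  Agee-Turner would
# propagate U and d directly; here we simply refactorize to show the target.
a = rng.normal(size=5)
c = 0.7
U_new, d_new = udu(P + c * np.outer(a, a))
assert np.allclose(U_new @ np.diag(d_new) @ U_new.T, P + c * np.outer(a, a))
print(np.round(d_new, 3))
```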

  10. LIBERTARISMO & ERROR CATEGORIAL

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    Full Text Available This article offers a defense of libertarianism against two accusations according to which it commits a category mistake. To that end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, even though certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis of the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of incurring them.

  11. Libertarismo & Error Categorial

    OpenAIRE

    PATARROYO G, CARLOS G

    2009-01-01

    This article offers a defense of libertarianism against two accusations according to which it commits a category mistake. To that end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, even though certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks in physicalist indeterminism the basis of the possibi...

  12. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  13. Planck absolute entropy of a rotating BTZ black hole

    Science.gov (United States)

    Riaz, S. M. Jawwad

    2018-04-01

    In this paper, the Planck absolute entropy and the Bekenstein-Smarr formula of the rotating Banados-Teitelboim-Zanelli (BTZ) black hole are presented via a complex thermodynamical system contributed by its inner and outer horizons. The redefined entropy approaches zero as the temperature of the rotating BTZ black hole tends to absolute zero, satisfying the Nernst formulation of a black hole. Hence, it can be regarded as the Planck absolute entropy of the rotating BTZ black hole.

  14. Positioning, alignment and absolute pointing of the ANTARES neutrino telescope

    International Nuclear Information System (INIS)

    Fehr, F; Distefano, C

    2010-01-01

    A precise detector alignment and absolute pointing is crucial for point-source searches. The ANTARES neutrino telescope utilises an array of hydrophones, tiltmeters and compasses for the relative positioning of the optical sensors. The absolute calibration is accomplished by long-baseline low-frequency triangulation of the acoustic reference devices in the deep-sea with a differential GPS system at the sea surface. The absolute pointing can be independently verified by detecting the shadow of the Moon in cosmic rays.

  15. Absolute nuclear material assay using count distribution (LAMBDA) space

    Science.gov (United States)

    Prasad, Manoj K [Pleasanton, CA]; Snyderman, Neal J [Berkeley, CA]; Rowland, Mark S [Alamo, CA]

    2012-06-05

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  16. Systematic Review of Errors in Inhaler Use

    DEFF Research Database (Denmark)

    Sanchis, Joaquin; Gich, Ignasi; Pedersen, Søren

    2016-01-01

    A systematic search for articles reporting direct observation of inhaler technique by trained personnel covered the period from 1975 to 2014. Outcomes were the nature and frequencies of the three most common errors; the percentage of patients demonstrating correct, acceptable, or poor technique; and variations in these outcomes over these 40 years and when partitioned into years 1 to 20 and years 21 to 40. Analyses were conducted in accordance with recommendations from Preferred Reporting Items for Systematic Reviews and Meta-Analyses and Strengthening the Reporting of Observational Studies in Epidemiology. Results Data...

  17. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ..... practical codes, storing such a table is infeasible, as it is generally too large.

  18. Error Correcting Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  19. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  20. 26 CFR 1.1502-44 - Percentage depletion for independent producers and royalty owners.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 12 2010-04-01 2010-04-01 false Percentage depletion for independent producers...-44 Percentage depletion for independent producers and royalty owners. (a) In general. The sum of the percentage depletion deductions for the taxable year for all oil or gas property owned by all members, plus...

  1. 7 CFR 51.308 - Methods of sampling and calculation of percentages.

    Science.gov (United States)

    2010-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation of Percentages § 51.308 Methods of sampling and calculation of percentages. (a) When the numerical... 7 Agriculture 2 2010-01-01 2010-01-01 false Methods of sampling and calculation of percentages. 51...

  2. Team errors: definition and taxonomy

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Reason, James

    1999-01-01

    In error analysis or error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition of team errors and its taxonomy. These notions are also applied to events that have occurred in the nuclear power industry, aviation industry and shipping industry. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication and resource/task management, excessive authority gradient, and excessive professional courtesy will cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors

  3. WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice

    Energy Technology Data Exchange (ETDEWEB)

    Kry, S; Dromgoole, L; Alvarez, P; Lowenstein, J; Molineu, A; Taylor, P; Followill, D [UT MD Anderson Cancer Center, Houston, TX (United States)

    2015-06-15

    Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000-present were abstracted for recommendations, including type of recommendation and magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions) with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7% although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. Physicists should be cautious, particularly

  4. WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice

    International Nuclear Information System (INIS)

    Kry, S; Dromgoole, L; Alvarez, P; Lowenstein, J; Molineu, A; Taylor, P; Followill, D

    2015-01-01

    Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000-present were abstracted for recommendations, including type of recommendation and magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions) with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7% although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. Physicists should be cautious, particularly

  5. [Absolute and relative strength-endurance of the knee flexor and extensor muscles: a reliability study using the IsoMed 2000-dynamometer].

    Science.gov (United States)

    Dirnberger, J; Wiesinger, H P; Stöggl, T; Kösters, A; Müller, E

    2012-09-01

    Isokinetic devices are highly rated in strength-related performance diagnosis. A few years ago, the broad variety of existing products was extended by the IsoMed 2000 dynamometer. In order for an isokinetic device to be clinically useful, the reliability of specific applications must be established. Although there have already been single studies on this topic for the IsoMed 2000 concerning maximum strength measurements, there has been no study so far regarding the assessment of strength-endurance. The aim of the present study was to establish the reliability of various methods of quantifying strength-endurance using the IsoMed 2000. A sample of 33 healthy young subjects (age: 23.8 ± 2.6 years) participated in one familiarisation and two testing sessions, 3-4 days apart. Testing consisted of a series of 30 full-effort concentric extension-flexion cycles of the right knee muscles at an angular velocity of 180°/s. Based on the parameters Peak Torque and Work for each repetition, indices of absolute (KADabs) and relative (KADrel) strength-endurance were derived. KADabs was calculated as the mean value of all testing repetitions; KADrel was determined in two ways: on the one hand, as the percentage decrease between the first and the last 5 repetitions (KADrelA) and, on the other, as the negative slope derived from the linear regression equation over all repetitions (KADrelB). Detection of systematic errors was performed using paired-sample t-tests; relative and absolute reliability were examined using the intraclass correlation coefficient (ICC 2.1) and the standard error of measurement (SEM%), respectively. In general, for extension measurements concerning KADabs and - in a weakened form - KADrel, high ICC values of 0.76-0.89 combined with clinically acceptable SEM% values of 1.2-5.9% could be found. For flexion measurements this only applies to KADabs, whereas results for KADrel turned out to be clearly weaker, with ICC and SEM% values of 0.42-0.62 and 9
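
    As a rough illustration of the endurance indices described above (the names KADabs, KADrelA and KADrelB follow the abstract; the torque values are invented), one might compute them from a series of per-repetition peak torques like this:

        import numpy as np

        # Invented peak-torque values (Nm) for 30 repetitions, decaying with fatigue.
        rng = np.random.default_rng(0)
        torque = 150 * np.exp(-0.02 * np.arange(30)) + rng.normal(0, 3, 30)

        # Absolute strength-endurance: mean over all testing repetitions.
        kad_abs = torque.mean()

        # Relative index A: percentage decrease between the first and last 5 repetitions.
        kad_rel_a = 100.0 * (torque[:5].mean() - torque[-5:].mean()) / torque[:5].mean()

        # Relative index B: negative slope of a linear regression over all repetitions.
        slope, intercept = np.polyfit(np.arange(30), torque, 1)
        kad_rel_b = -slope

        print(f"KADabs = {kad_abs:.1f} Nm, KADrelA = {kad_rel_a:.1f}%, KADrelB = {kad_rel_b:.2f} Nm/rep")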

  6. Auto-calibration of Systematic Odometry Errors in Mobile Robots

    DEFF Research Database (Denmark)

    Bak, Martin; Larsen, Thomas Dall; Andersen, Nils Axel

    1999-01-01

    This paper describes the phenomenon of systematic errors in odometry models in mobile robots and looks at various ways of avoiding it by means of auto-calibration. The systematic errors considered are incorrect knowledge of the wheel base and the gains from encoder readings to wheel displacement. By auto-calibration we mean a standardized procedure which estimates the uncertainties using only on-board equipment such as encoders, an absolute measurement system and filters; no intervention by operator or off-line data processing is necessary. Results are illustrated by a number of simulations and experiments on a mobile robot.
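
    A minimal sketch of how the two systematic error sources named above (encoder gain and wheel base) enter a differential-drive odometry update; the gain and wheel-base values are placeholders, not parameters from the paper:

        import math

        def odometry_step(x, y, theta, ticks_left, ticks_right,
                          gain_left, gain_right, wheel_base):
            # One dead-reckoning update; systematic errors appear as wrong
            # gains (ticks -> metres) or a wrong wheel_base (metres).
            d_left = gain_left * ticks_left
            d_right = gain_right * ticks_right
            d_centre = 0.5 * (d_left + d_right)
            d_theta = (d_right - d_left) / wheel_base
            x += d_centre * math.cos(theta + 0.5 * d_theta)
            y += d_centre * math.sin(theta + 0.5 * d_theta)
            return x, y, theta + d_theta

        # Placeholder calibration: 0.1 mm per encoder tick and a 0.35 m wheel base.
        pose = (0.0, 0.0, 0.0)
        for ticks in [(1000, 1020), (980, 1010)]:
            pose = odometry_step(*pose, *ticks, 1e-4, 1e-4, 0.35)
        print(pose)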

  7. Absolute density measurements in the middle atmosphere

    Directory of Open Access Journals (Sweden)

    M. Rapp

    2001-05-01

    Full Text Available In the last ten years a total of 25 sounding rockets employing ionization gauges have been launched at high latitudes (~70° N) to measure total atmospheric density and its small scale fluctuations in an altitude range between 70 and 110 km. While the determination of small scale fluctuations is unambiguous, the total density analysis has been complicated in the past by aerodynamical disturbances leading to densities inside the sensor which are enhanced compared to atmospheric values. Here, we present the results of both Monte Carlo simulations and wind tunnel measurements to quantify this aerodynamical effect. The comparison of the resulting ‘ram-factor’ profiles with empirically determined density ratios of ionization gauge measurements and falling sphere measurements provides excellent agreement. This demonstrates both the need, but also the possibility, to correct aerodynamical influences on measurements from sounding rockets. We have determined a total of 20 density profiles of the mesosphere-lower-thermosphere (MLT) region. Grouping these profiles according to season, a listing of mean density profiles is included in the paper. A comparison with density profiles taken from the reference atmospheres CIRA86 and MSIS90 results in differences of up to 40%. This reflects that current reference atmospheres are a significant potential error source for the determination of mixing ratios of, for example, trace gas constituents in the MLT region. Key words. Middle atmosphere (composition and chemistry; pressure, density, and temperature; instruments and techniques)

  8. Absolute density measurements in the middle atmosphere

    Directory of Open Access Journals (Sweden)

    M. Rapp

    Full Text Available In the last ten years a total of 25 sounding rockets employing ionization gauges have been launched at high latitudes (~70° N) to measure total atmospheric density and its small scale fluctuations in an altitude range between 70 and 110 km. While the determination of small scale fluctuations is unambiguous, the total density analysis has been complicated in the past by aerodynamical disturbances leading to densities inside the sensor which are enhanced compared to atmospheric values. Here, we present the results of both Monte Carlo simulations and wind tunnel measurements to quantify this aerodynamical effect. The comparison of the resulting ‘ram-factor’ profiles with empirically determined density ratios of ionization gauge measurements and falling sphere measurements provides excellent agreement. This demonstrates both the need, but also the possibility, to correct aerodynamical influences on measurements from sounding rockets. We have determined a total of 20 density profiles of the mesosphere-lower-thermosphere (MLT) region. Grouping these profiles according to season, a listing of mean density profiles is included in the paper. A comparison with density profiles taken from the reference atmospheres CIRA86 and MSIS90 results in differences of up to 40%. This reflects that current reference atmospheres are a significant potential error source for the determination of mixing ratios of, for example, trace gas constituents in the MLT region.

    Key words. Middle atmosphere (composition and chemistry; pressure, density, and temperature; instruments and techniques)

  9. Meniscal tear. Diagnostic errors in MR imaging

    International Nuclear Information System (INIS)

    Barrera, M. C.; Recondo, J. A.; Gervas, C.; Fernandez, E.; Villanua, J. A.M.; Salvador, E.

    2003-01-01

    To analyze diagnostic discrepancies found between magnetic resonance (MR) imaging and arthroscopy, and to determine the reasons they occur. Two hundred and forty-eight MR knee explorations were retrospectively checked. Forty of these showed diagnostic discrepancies between MR and arthroscopy. Two radiologists independently re-analyzed the images from 29 of the 40 studies without knowing which diagnosis had resulted from which of the two techniques. Their interpretations were correlated with the initial MR diagnosis, MR images and arthroscopic results. Initial errors in MR imaging were classified as either unavoidable, interpretive, or secondary to equivocal findings. Eleven MR examinations could not be checked since their corresponding imaging results could not be located. Of 34 errors found in the original diagnoses, 12 (35.5%) were classified as unavoidable, 14 (41.2%) as interpretive and 8 (23.5%) as secondary to equivocal findings. 41.2% of the errors were avoided in the retrospective study, probably due to our department having greater experience in interpreting MR images; 25.5% were unavoidable even in the retrospective study. A small percentage of diagnostic errors were due to the presence of subtle equivocal findings. (Author) 15 refs

  10. A Model of Self-Monitoring Blood Glucose Measurement Error.

    Science.gov (United States)

    Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2017-07-01

    A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant-SD absolute error; zone 2 with constant-SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology allows realistic models of the SMBG error PDF to be derived. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials to compare SMBG-based with nonadjunctive CGM-based insulin treatments.
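
    A small sketch of the fitting idea described above, using synthetic SMBG errors rather than the OTU2/BCN data: within one glucose zone, a skew-normal PDF is fitted to the errors by maximum likelihood and checked with a goodness-of-fit test.

        import numpy as np
        from scipy import stats

        # Synthetic relative errors (%) standing in for one constant-SD zone
        # of real SMBG data; the skewness here is invented for illustration.
        rng = np.random.default_rng(1)
        errors = stats.skewnorm.rvs(a=3.0, loc=-2.0, scale=6.0, size=2000, random_state=rng)

        # Maximum-likelihood fit of a skew-normal PDF to the zone's errors.
        a_hat, loc_hat, scale_hat = stats.skewnorm.fit(errors)

        # Goodness of fit: Kolmogorov-Smirnov test against the fitted model.
        ks_stat, p_value = stats.kstest(errors, "skewnorm", args=(a_hat, loc_hat, scale_hat))
        print(f"shape={a_hat:.2f} loc={loc_hat:.2f} scale={scale_hat:.2f}  KS p={p_value:.3f}")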

  11. Real-Time and Meter-Scale Absolute Distance Measurement by Frequency-Comb-Referenced Multi-Wavelength Interferometry

    Directory of Open Access Journals (Sweden)

    Guochao Wang

    2018-02-01

    Full Text Available We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He–Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10−8 versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions.

  12. Real-Time and Meter-Scale Absolute Distance Measurement by Frequency-Comb-Referenced Multi-Wavelength Interferometry.

    Science.gov (United States)

    Wang, Guochao; Tan, Lilong; Yan, Shuhua

    2018-02-07

    We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He-Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10⁻⁸ versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions.
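
    A back-of-the-envelope sketch of the synthetic-wavelength idea used above: two optical wavelengths lambda_1 and lambda_2 produce a synthetic wavelength Lambda = lambda_1 * lambda_2 / |lambda_1 - lambda_2|, which sets the non-ambiguous range. The wavelengths below are placeholders, not the instrument's actual values.

        # Synthetic wavelength from two single wavelengths (values are illustrative only).
        lambda_1 = 1530e-9   # m
        lambda_2 = 1531e-9   # m
        synthetic = lambda_1 * lambda_2 / abs(lambda_1 - lambda_2)

        # The coarse (synthetic) phase resolves distance within half the synthetic
        # wavelength; the fine optical phase then refines it to sub-wavelength level.
        print(f"synthetic wavelength = {synthetic * 1e3:.1f} mm, "
              f"non-ambiguous range = {synthetic / 2 * 1e3:.1f} mm")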

  13. Does Absolute Synonymy exist in Owere-Igbo? | Omego | AFRREV ...

    African Journals Online (AJOL)

    Among Igbo linguistic researchers, determining whether absolute synonymy exists in Owere–Igbo, a dialect of the Igbo language predominantly spoken by the people of Owerri, Imo State, Nigeria, has become a thorny issue. While some linguistic scholars strive to establish that absolute synonymy exists in the lexical ...

  14. Absolute tense forms in Tswana | Pretorius | Journal for Language ...

    African Journals Online (AJOL)

    These views were compared in an attempt to put forth an applicable framework for the classification of the tenses in Tswana and to identify the absolute tenses of Tswana. Keywords: tense; simple tenses; compound tenses; absolute tenses; relative tenses; aspect; auxiliary verbs; auxiliary verbal groups; Tswana Opsomming

  15. Absolute calibration of sniffer probes on Wendelstein 7-X

    NARCIS (Netherlands)

    Moseev, D.; Laqua, H.P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V.; Gellert, F.J.; Oosterbeek, J.W.

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of

  16. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  17. Absolute instrumental neutron activation analysis at Lawrence Livermore Laboratory

    International Nuclear Information System (INIS)

    Heft, R.E.

    1977-01-01

    The Environmental Science Division at Lawrence Livermore Laboratory has in use a system of absolute Instrumental Neutron Activation Analysis (INAA). Basically, absolute INAA is dependent upon the absolute measurement of the disintegration rates of the nuclides produced by neutron capture. From such disintegration rate data, the amount of the target element present in the irradiated sample is calculated by dividing the observed disintegration rate for each nuclide by the expected value for the disintegration rate per microgram of the target element that produced the nuclide. In absolute INAA, the expected value for disintegration rate per microgram is calculated from nuclear parameters and from measured values of both thermal and epithermal neutron fluxes which were present during irradiation. Absolute INAA does not depend on the concurrent irradiation of elemental standards but does depend on the values for thermal and epithermal neutron capture cross-sections for the target nuclides. A description of the analytical method is presented
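
    The central division described above (observed disintegration rate divided by the expected disintegration rate per microgram of target element) can be sketched as follows. The flux, cross-section, abundance and activity numbers are placeholders chosen only to look plausible, not the Livermore parameters.

        import math

        N_A = 6.02214076e23          # atoms per mole

        def expected_rate_per_ug(molar_mass, abundance, sigma_th_b, i0_epi_b,
                                 phi_th, phi_epi, half_life_s, t_irr_s, t_decay_s):
            # Expected disintegration rate (Bq) per microgram of target element.
            n_atoms = 1.0e-6 / molar_mass * N_A * abundance      # target atoms in 1 ug
            barn = 1.0e-24                                        # cm^2
            production = n_atoms * (phi_th * sigma_th_b * barn + phi_epi * i0_epi_b * barn)
            lam = math.log(2.0) / half_life_s
            return production * (1.0 - math.exp(-lam * t_irr_s)) * math.exp(-lam * t_decay_s)

        # Placeholder example loosely shaped like Na-23(n,gamma)Na-24; all numbers illustrative.
        expected = expected_rate_per_ug(molar_mass=22.99, abundance=1.0,
                                        sigma_th_b=0.53, i0_epi_b=0.31,
                                        phi_th=1.0e13, phi_epi=5.0e11,
                                        half_life_s=15.0 * 3600, t_irr_s=3600, t_decay_s=7200)
        observed_rate = 2.4e4                                     # Bq, hypothetical measurement
        print(f"estimated mass = {observed_rate / expected:.3f} ug")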

  18. A developmental study of latent absolute pitch memory.

    Science.gov (United States)

    Jakubowski, Kelly; Müllensiefen, Daniel; Stewart, Lauren

    2017-03-01

    The ability to recall the absolute pitch level of familiar music (latent absolute pitch memory) is widespread in adults, in contrast to the rare ability to label single pitches without a reference tone (overt absolute pitch memory). The present research investigated the developmental profile of latent absolute pitch (AP) memory and explored individual differences related to this ability. In two experiments, 288 children from 4 to 12 years of age performed significantly above chance at recognizing the absolute pitch level of familiar melodies. No age-related improvement or decline, nor effects of musical training, gender, or familiarity with the stimuli were found in regard to latent AP task performance. These findings suggest that latent AP memory is a stable ability that is developed from as early as age 4 and persists into adulthood.

  19. Advancing Absolute Calibration for JWST and Other Applications

    Science.gov (United States)

    Rieke, George; Bohlin, Ralph; Boyajian, Tabetha; Carey, Sean; Casagrande, Luca; Deustua, Susana; Gordon, Karl; Kraemer, Kathleen; Marengo, Massimo; Schlawin, Everett; Su, Kate; Sloan, Greg; Volk, Kevin

    2017-10-01

    We propose to exploit the unique optical stability of the Spitzer telescope, along with that of IRAC, to (1) transfer the accurate absolute calibration obtained with MSX on very bright stars directly to two reference stars within the dynamic range of the JWST imagers (and of other modern instrumentation); (2) establish a second accurate absolute calibration based on the absolutely calibrated spectrum of the sun, transferred onto the astronomical system via alpha Cen A; and (3) provide accurate infrared measurements for the 11 (of 15) highest priority stars with no such data but with accurate interferometrically measured diameters, allowing us to optimize determinations of effective temperatures using the infrared flux method and thus to extend the accurate absolute calibration spectrally. This program is integral to plans for an accurate absolute calibration of JWST and will also provide a valuable Spitzer legacy.

  20. Correction of refractive errors

    Directory of Open Access Journals (Sweden)

    Vladimir Pfeifer

    2005-10-01

    Full Text Available Background: Spectacles and contact lenses are the most frequently used, the safest and the cheapest way to correct refractive errors. The development of keratorefractive surgery has brought new opportunities for correction of refractive errors in patients who want to be less dependent on spectacles or contact lenses. Until recently, RK was the most commonly performed refractive procedure for nearsighted patients. Conclusions: The introduction of the excimer laser in refractive surgery has given new opportunities for remodelling the cornea. The laser energy can be delivered on the stromal surface, as in PRK, or deeper in the corneal stroma by means of lamellar surgery. In LASIK the flap is created with a microkeratome, in LASEK with ethanol, and in epi-LASIK the ultra-thin flap is created mechanically.

  1. Error-Free Software

    Science.gov (United States)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  2. Minimum Tracking Error Volatility

    OpenAIRE

    Luca RICCETTI

    2010-01-01

    Investors assign part of their funds to asset managers that are given the task of beating a benchmark. The risk management department usually imposes a maximum value of the tracking error volatility (TEV) in order to keep the risk of the portfolio near to that of the selected benchmark. However, risk management does not establish a rule on TEV which enables us to understand whether the asset manager is really active or not and, in practice, asset managers sometimes follow passively the corres...
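
    For readers unfamiliar with the quantity, tracking error volatility (TEV) is simply the standard deviation of the active return (portfolio minus benchmark). A small sketch with made-up daily return series:

        import numpy as np

        rng = np.random.default_rng(2)
        benchmark = rng.normal(0.0004, 0.010, 252)                 # made-up daily returns
        portfolio = benchmark + rng.normal(0.0001, 0.002, 252)     # mildly active manager

        active = portfolio - benchmark
        tev_daily = active.std(ddof=1)
        tev_annual = tev_daily * np.sqrt(252)                      # annualised TEV
        print(f"annualised tracking error volatility = {tev_annual:.2%}")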

  3. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  4. Satellite Photometric Error Determination

    Science.gov (United States)

    2015-10-18

    Tamara E. Payne, Philip J. Castro, Stephen A. Gregory (Applied Optimization). … advocate the adoption of new techniques based on in-frame photometric calibrations enabled by newly available all-sky star catalogs that contain highly … filter systems will likely be supplanted by the Sloan-based filter systems. The Johnson photometric system is a set of filters in the optical

  5. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  6. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  7. NDE errors and their propagation in sizing and growth estimates

    International Nuclear Information System (INIS)

    Horn, D.; Obrutsky, L.; Lakhan, R.

    2009-01-01

    The accuracy attributed to eddy current flaw sizing determines the amount of conservativism required in setting tube-plugging limits. Several sources of error contribute to the uncertainty of the measurements, and the way in which these errors propagate and interact affects the overall accuracy of the flaw size and flaw growth estimates. An example of this calculation is the determination of an upper limit on flaw growth over one operating period, based on the difference between two measurements. Signal-to-signal comparison involves a variety of human, instrumental, and environmental error sources; of these, some propagate additively and some multiplicatively. In a difference calculation, specific errors in the first measurement may be correlated with the corresponding errors in the second; others may be independent. Each of the error sources needs to be identified and quantified individually, as does its distribution in the field data. A mathematical framework for the propagation of the errors can then be used to assess the sensitivity of the overall uncertainty to each individual error component. This paper quantifies error sources affecting eddy current sizing estimates and presents analytical expressions developed for their effect on depth estimates. A simple case study is used to model the analysis process. For each error source, the distribution of the field data was assessed and propagated through the analytical expressions. While the sizing error obtained was consistent with earlier estimates and with deviations from ultrasonic depth measurements, the error on growth was calculated as significantly smaller than that obtained assuming uncorrelated errors. An interesting result of the sensitivity analysis in the present case study is the quantification of the error reduction available from post-measurement compensation of magnetite effects. With the absolute and difference error equations, variance-covariance matrices, and partial derivatives developed in
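
    A minimal numerical illustration (invented uncertainties, not the paper's values) of the point made above: when the errors of two sizing measurements are correlated, the uncertainty on their difference (the growth estimate) shrinks relative to the uncorrelated case.

        import math

        sigma_1 = 0.05    # uncertainty of first depth estimate (fraction of wall), invented
        sigma_2 = 0.05    # uncertainty of second depth estimate, invented
        rho = 0.8         # correlation between the two measurements' errors, invented

        # Variance of the difference d = m2 - m1 with correlated errors.
        var_uncorrelated = sigma_1**2 + sigma_2**2
        var_correlated = sigma_1**2 + sigma_2**2 - 2.0 * rho * sigma_1 * sigma_2

        print(f"sigma(growth), uncorrelated: {math.sqrt(var_uncorrelated):.3f}")
        print(f"sigma(growth), rho=0.8:      {math.sqrt(var_correlated):.3f}")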

  8. Changes in relative and absolute concentrations of plasma phospholipid fatty acids observed in a randomized trial of Omega-3 fatty acids supplementation in Uganda.

    Science.gov (United States)

    Song, Xiaoling; Diep, Pho; Schenk, Jeannette M; Casper, Corey; Orem, Jackson; Makhoul, Zeina; Lampe, Johanna W; Neuhouser, Marian L

    2016-11-01

    Expressing circulating phospholipid fatty acids (PLFAs) in relative concentrations has some limitations: all fatty acids are summed to 100%, so the values of individual fatty acids are not independent. In this study we examined whether both relative and absolute metrics could effectively measure changes in circulating PLFA concentrations in an intervention trial. 66 HIV- and HHV8-infected patients in Uganda were randomized to take 3 g/d of either long-chain omega-3 fatty acids (1856 mg EPA and 1232 mg DHA) or high-oleic safflower oil in a 12-week double-blind trial. Plasma samples were collected at baseline and at the end of the trial. Relative weight percentages and absolute concentrations of 41 plasma PLFAs were measured using gas chromatography. Total cholesterol was also measured. Intervention-effect changes in concentrations were calculated as differences between the end of the 12-week trial and baseline. Pearson correlations of relative and absolute concentration changes in individual PLFAs were high (>0.6) for 37 of the 41 PLFAs analyzed. In the intervention arm, 17 PLFAs changed significantly in relative concentration and 16 in absolute concentration, 15 of which were identical. The absolute concentration of total PLFAs decreased 95.1 mg/L (95% CI: 26.0, 164.2; P=0.0085), but total cholesterol did not change significantly in the intervention arm. No significant change was observed in any of the measurements in the placebo arm. Both relative weight percentage and absolute concentrations could effectively measure changes in plasma PLFA concentrations, and EPA and DHA supplementation changes the concentrations of multiple plasma PLFAs besides EPA and DHA. Copyright © 2016 Elsevier Ltd. All rights reserved.
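
    The relation between the two metrics compared above is just a normalisation: the relative weight percentage is each fatty acid's absolute concentration divided by the total. A toy sketch with three invented fatty acids (not the 41 measured PLFAs):

        import numpy as np

        # Invented absolute concentrations (mg/L) at baseline and end of trial.
        baseline = np.array([120.0, 40.0, 15.0])
        end      = np.array([110.0, 38.0, 30.0])   # third analyte rises with supplementation

        def relative_pct(absolute):
            return 100.0 * absolute / absolute.sum()

        change_abs = end - baseline
        change_rel = relative_pct(end) - relative_pct(baseline)
        r = np.corrcoef(change_abs, change_rel)[0, 1]
        print("absolute changes (mg/L):", change_abs)
        print("relative changes (wt%): ", np.round(change_rel, 2))
        print(f"Pearson r between the two change metrics: {r:.2f}")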

  9. Proton spectroscopic imaging of polyacrylamide gel dosimeters for absolute radiation dosimetry

    International Nuclear Information System (INIS)

    Murphy, P.S.; Schwarz, A.J.; Leach, M.O.

    2000-01-01

    Proton spectroscopy has been evaluated as a method for quantifying radiation induced changes in polyacrylamide gel dosimeters. A calibration was first performed using BANG-type gel samples receiving uniform doses of 6 MV photons from 0 to 9 Gy in 1 Gy intervals. The peak integral of the acrylic protons belonging to acrylamide and methylenebisacrylamide normalized to the water signal was plotted against absorbed dose. Response was approximately linear within the range 0-7 Gy. A large gel phantom irradiated with three coplanar 3 × 3 cm square fields to 5.74 Gy at isocentre was then imaged with an echo-filter technique to map the distribution of monomers directly. The image, normalized to the water signal, was converted into an absolute dose map. At the isocentre the measured dose was 5.69 Gy (SD = 0.09), which was in good agreement with the planned dose. The measured dose distribution elsewhere in the sample shows greater errors. A T2-derived dose map demonstrated a better relative distribution but gave an overestimate of the dose at isocentre of 18%. The data indicate that MR measurements of monomer concentration can complement T2-based measurements and can be used to verify absolute dose. Compared with the more usual T2 measurements for assessing gel polymerization, monomer concentration analysis is less sensitive to parameters such as gel pH and temperature, which can cause ambiguous relaxation time measurements and erroneous absolute dose calculations. (author)

  10. Tinker-OpenMM: Absolute and relative alchemical free energies using AMOEBA on GPUs.

    Science.gov (United States)

    Harger, Matthew; Li, Daniel; Wang, Zhi; Dalby, Kevin; Lagardère, Louis; Piquemal, Jean-Philip; Ponder, Jay; Ren, Pengyu

    2017-09-05

    The capabilities of the polarizable force fields for alchemical free energy calculations have been limited by the high computational cost and complexity of the underlying potential energy functions. In this work, we present a GPU-based general alchemical free energy simulation platform for the polarizable potential AMOEBA. Tinker-OpenMM, the OpenMM implementation of the AMOEBA simulation engine, has been modified to enable both absolute and relative alchemical simulations on GPUs, which leads to a ∼200-fold improvement in simulation speed over a single CPU core. We show that free energy values calculated using this platform agree with the results of Tinker simulations for the hydration of organic compounds and binding of host-guest systems within the statistical errors. In addition to absolute binding, we designed a relative alchemical approach for computing relative binding affinities of ligands to the same host, where a special path was applied to avoid numerical instability due to polarization between the different ligands that bind to the same site. This scheme is general and does not require ligands to have similar scaffolds. We show that relative hydration and binding free energy calculated using this approach match those computed from the absolute free energy approach. © 2017 Wiley Periodicals, Inc.

  11. Absolute gravity measurements at three sites characterized by different environmental conditions using two portable ballistic gravimeters

    Science.gov (United States)

    Greco, Filippo; Biolcati, Emanuele; Pistorio, Antonio; D'Agostino, Giancarlo; Germak, Alessandro; Origlia, Claudio; Del Negro, Ciro

    2015-03-01

    The performance of two absolute gravimeters at three different sites in Italy between 2009 and 2011 is presented. The measurements of the gravity acceleration g were performed using the absolute gravimeters Micro-g LaCoste FG5#238 and the INRiM prototype IMGC-02, which represent the state of the art in ballistic gravimeter technology (relative uncertainty of a few parts in 10⁹). For the comparison, the measured g values were reported at the same height by means of the vertical gravity gradient estimated at each site with relative gravimeters. The consistency and reliability of the gravity observations, as well as the performance and efficiency of the instruments, were assessed by measurements made in sites characterized by different logistics and environmental conditions. Furthermore, the various factors affecting the measurements and their uncertainty were thoroughly investigated. The measurements showed good agreement, with the minimum and maximum differences being 4.0 and 8.3 μGal. The normalized errors are well below 1, ranging between 0.06 and 0.45, confirming the compatibility between the results. This excellent agreement can be attributed to several factors, including the good working order of the gravimeters and the correct setup and use of the instruments in different conditions. These results can contribute to the standardization of absolute gravity surveys, largely for applications in geophysics, volcanology and other branches of geosciences, allowing a good trade-off between uncertainty and efficiency of gravity measurements to be achieved.
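
    The "normalized errors ... below 1" quoted above refer to the usual degree-of-equivalence statistic: the difference between the two instruments divided by the combined expanded uncertainty. A short sketch with placeholder numbers (not the actual FG5#238 / IMGC-02 results):

        import math

        def normalized_error(g_a, g_b, u_a, u_b):
            # En = |g_a - g_b| / sqrt(u_a**2 + u_b**2); compatible results give En < 1.
            return abs(g_a - g_b) / math.sqrt(u_a**2 + u_b**2)

        # Placeholder offsets and expanded uncertainties in microGal.
        en = normalized_error(g_a=0.0, g_b=6.0, u_a=10.0, u_b=12.0)
        print(f"En = {en:.2f}  ->  {'compatible' if en < 1 else 'discrepant'}")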

  12. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors

    International Nuclear Information System (INIS)

    Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter

    2010-01-01

    Purpose: Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. Methods: 9 head and neck (H and N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (±1 mm in two banks, ±0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. Results: The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H and N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. Conclusions: None of the techniques or criteria tested is sufficiently sensitive, with the population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.

  14. The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors

    Energy Technology Data Exchange (ETDEWEB)

    Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter [Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2, Canada and Department of Physics and Astronomy, University of Calgary, 2500 University Drive North West, Calgary, Alberta T2N 1N4 (Canada); Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada); Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada); Department of Physics and Astronomy, University of Calgary, 2500 University Drive NW, Calgary, Alberta T2N 1N4 (Canada) and Department of Oncology, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada)

    2010-07-15

    Purpose: Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. Methods: 9 head and neck (H and N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (±1 mm in two banks, ±0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. Results: The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H and N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. Conclusions: None of the techniques or criteria tested is sufficiently sensitive, with the population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.
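
    A simplified one-dimensional sketch of the gamma-index criterion used in the two records above (e.g. 3% dose / 3 mm DTA); real IMRT QC software evaluates this on 2-D or 3-D dose grids, and the profiles below are invented.

        import numpy as np

        def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.03, dta_mm=3.0):
            # Global gamma index for a 1-D dose profile (dose_tol as a fraction of max dose).
            d_norm = dose_tol * d_ref.max()
            gammas = []
            for xe, de in zip(x_eval, d_eval):
                term = np.sqrt(((x_ref - xe) / dta_mm) ** 2 + ((d_ref - de) / d_norm) ** 2)
                gammas.append(term.min())
            return np.array(gammas)

        x = np.linspace(-50, 50, 201)                       # mm
        planned = 2.0 * np.exp(-(x / 30.0) ** 4)            # invented planned profile (Gy)
        measured = 2.0 * np.exp(-((x - 1.0) / 30.0) ** 4)   # same profile shifted by 1 mm

        g = gamma_1d(x, planned, x, measured)
        print(f"gamma pass rate (gamma <= 1): {100.0 * np.mean(g <= 1.0):.1f}%")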

  15. Relative and Absolute Reliability of Timed Up and Go Test in Community Dwelling Older Adult and Healthy Young People

    Directory of Open Access Journals (Sweden)

    Farhad Azadi

    2014-01-01

    Full Text Available Objectives: Relative and absolute reliability are psychometric properties of a test on which many clinical decisions are based. In many cases, only relative reliability is taken into consideration, although absolute reliability is also very important. Methods & Materials: Eleven community-dwelling older adults aged 65 years and older (69.64±3.58 years) and 20 healthy young people aged 20 to 35 years (28.80±4.15 years) were evaluated twice, 2 to 5 days apart, using three versions of the Timed Up and Go test. Results: Generally, when the non-homogeneous study population was stratified to increase the intraclass correlation coefficient (ICC), this coefficient was greater in the elderly than in the young and was reduced by a secondary task. In this study, absolute reliability indices computed from different data sources and equations led to more or less similar results. In general, in test-retest situations, a larger change is required in the elderly than in the young before it can be interpreted as a real change rather than a random one. The random error contribution is slightly greater in the elderly than in the young and increases with a secondary task. Heterogeneity appears to moderate the absolute reliability indices. Conclusion: In relative reliability studies, researchers and clinicians should pay attention to factors such as the homogeneity of the population. Moreover, absolute reliability, alongside relative reliability, is needed for clinical decision making.
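
    The two reliability quantities used above are linked by a standard relation, SEM = SD * sqrt(1 - ICC), with SEM% expressing it as a percentage of the mean. A small sketch with invented test-retest scores and an assumed ICC value (computing ICC(2,1) itself would require a two-way ANOVA, omitted here):

        import numpy as np

        # Invented test-retest Timed Up and Go times (seconds) for 8 subjects.
        test = np.array([9.1, 10.4, 8.7, 11.2, 9.8, 12.0, 10.1, 9.5])
        retest = np.array([9.4, 10.1, 8.9, 11.6, 9.6, 11.7, 10.4, 9.3])

        icc = 0.88                                   # assumed ICC(2,1) from a separate analysis
        scores = np.concatenate([test, retest])
        sd = scores.std(ddof=1)
        sem = sd * np.sqrt(1.0 - icc)                # standard error of measurement
        sem_pct = 100.0 * sem / scores.mean()
        print(f"SEM = {sem:.2f} s, SEM% = {sem_pct:.1f}%")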

  16. Integrated Navigation System Design for Micro Planetary Rovers: Comparison of Absolute Heading Estimation Algorithms and Nonlinear Filtering

    Science.gov (United States)

    Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-01-01

    This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level. PMID:27223293

  17. Diagnostic errors in pediatric radiology

    International Nuclear Information System (INIS)

    Taylor, George A.; Voss, Stephan D.; Melvin, Patrice R.; Graham, Dionne A.

    2011-01-01

    Little is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean: 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case), all of which were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  18. Twice cutting method reduces tibial cutting error in unicompartmental knee arthroplasty.

    Science.gov (United States)

    Inui, Hiroshi; Taketomi, Shuji; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae

    2016-01-01

    Bone cutting error can be one of the causes of malalignment in unicompartmental knee arthroplasty (UKA). The amount of cutting error in total knee arthroplasty has been reported; however, no study has investigated cutting error in UKA. The purpose of this study was to reveal the amount of cutting error in UKA when an open cutting guide was used, and to clarify whether cutting the tibia horizontally twice using the same cutting guide reduces the cutting errors in UKA. We measured the alignment of the tibial cutting guides, the first-cut cutting surfaces and the second-cut cutting surfaces using the navigation system in 50 UKAs. Cutting error was defined as the angular difference between the cutting guide and the cutting surface. The mean absolute first-cut cutting error was 1.9° (1.1° varus) in the coronal plane and 1.1° (0.6° anterior slope) in the sagittal plane, whereas the mean absolute second-cut cutting error was 1.1° (0.6° varus) in the coronal plane and 1.1° (0.4° anterior slope) in the sagittal plane. Cutting the tibia horizontally twice significantly reduced the cutting errors in the coronal plane (P < 0.05). In conclusion, cutting the tibia horizontally twice using the same cutting guide reduced cutting error in the coronal plane. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

    This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners will also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using an MEE-like concept is also presented. Examples, tests, evaluation experiments and comparisons with similar machines using classic approaches complement the descriptions.
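
    A rough sketch of the idea behind the MEE criterion mentioned above: estimate Renyi's quadratic entropy of the classifier errors with a Parzen window and use it as the quantity to minimise during training. The data and kernel width below are invented.

        import numpy as np

        def quadratic_error_entropy(errors, sigma=0.5):
            # Parzen estimate of Renyi's quadratic entropy H2 = -log(information potential).
            e = np.asarray(errors, dtype=float)
            diff = e[:, None] - e[None, :]
            # Gaussian kernel of width sqrt(2)*sigma applied to all pairwise differences.
            kernel = np.exp(-diff**2 / (4.0 * sigma**2)) / (2.0 * sigma * np.sqrt(np.pi))
            information_potential = kernel.mean()
            return -np.log(information_potential)

        rng = np.random.default_rng(3)
        concentrated = rng.normal(0.0, 0.1, 200)      # small, tightly clustered errors
        spread_out = rng.normal(0.0, 1.0, 200)        # larger, more dispersed errors
        print(quadratic_error_entropy(concentrated), quadratic_error_entropy(spread_out))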

  20. Appeals to AC as a Percentage of Appealable Hearing Level Dispositions

    Data.gov (United States)

    Social Security Administration — Longitudinal report detailing the numbers and percentages of Requests for Review (RR) of hearing level decisions or dismissals filed with the Appeals Council (AC)...

  1. Quantifying geocode location error using GIS methods

    Directory of Open Access Journals (Sweden)

    Gardner Bennett R

    2007-04-01

    Full Text Available Abstract Background The Metropolitan Atlanta Congenital Defects Program (MACDP) collects maternal address information at the time of delivery for infants and fetuses with birth defects. These addresses have been geocoded by two independent agencies: (1) the Georgia Division of Public Health Office of Health Information and Policy (OHIP) and (2) a commercial vendor. Geographic information system (GIS) methods were used to quantify uncertainty in the two sets of geocodes using orthoimagery and tax parcel datasets. Methods We sampled 599 infants and fetuses with birth defects delivered during 1994–2002 with maternal residence in either Fulton or Gwinnett County. Tax parcel datasets were obtained from the tax assessor's offices of Fulton and Gwinnett County. High-resolution orthoimagery for these counties was acquired from the U.S. Geological Survey. For each of the 599 addresses we attempted to locate the tax parcel corresponding to the maternal address. If the tax parcel was identified, the distance and the angle between the geocode and the residence were calculated. We used simulated data to characterize the impact of geocode location error. In each county 5,000 geocodes were generated and assigned their corresponding Census 2000 tract. Each geocode was then displaced at a random angle by a random distance drawn from the distribution of observed geocode location errors. The census tract of the displaced geocode was determined. We repeated this process 5,000 times and report the percentage of geocodes that resolved into incorrect census tracts. Results Median location error was less than 100 meters for both OHIP and commercial vendor geocodes; the distribution of angles appeared uniform. Median location error was approximately 35% larger in Gwinnett (a suburban county) relative to Fulton (a county with urban and suburban areas). Location error occasionally caused the simulated geocodes to be displaced into incorrect census tracts; the median percentage
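
    A condensed sketch of the displacement simulation described above: each point is moved by a random distance (drawn from an empirical error distribution) at a uniformly random angle, and one then checks whether it lands in a different areal unit. Here the census-tract lookup is replaced by a crude grid-cell comparison, and the error distances are invented.

        import numpy as np

        rng = np.random.default_rng(4)

        # Invented empirical distribution of geocode location errors (metres).
        observed_errors_m = rng.lognormal(mean=4.0, sigma=0.8, size=599)

        # Random point locations within a 10 km x 10 km area, with 1 km grid cells.
        points = rng.uniform(0, 10_000, size=(5000, 2))

        def cell(xy):
            return (xy // 1000).astype(int)   # crude stand-in for a census-tract lookup

        distances = rng.choice(observed_errors_m, size=len(points))
        angles = rng.uniform(0, 2 * np.pi, size=len(points))
        displaced = points + np.column_stack([distances * np.cos(angles),
                                              distances * np.sin(angles)])

        changed = np.any(cell(points) != cell(displaced), axis=1)
        print(f"{100.0 * changed.mean():.1f}% of simulated geocodes fell into a different cell")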

  2. Efficacy of intrahepatic absolute alcohol in unresectable hepatocellular carcinoma

    International Nuclear Information System (INIS)

    Farooqi, J.I.; Hameed, K.; Khan, I.U.; Shah, S.

    2001-01-01

    To determine the efficacy of intrahepatic absolute alcohol injection in unresectable hepatocellular carcinoma. A randomized, controlled, experimental and interventional clinical trial. Gastroenterology Department, PGMI, Hayatabad Medical Complex, Peshawar, during the period from June 1998 to June 2000. Thirty patients were treated by percutaneous intrahepatic absolute alcohol injections in repeated sessions; 33 patients were not treated with alcohol and served as controls. Both groups were comparable for age, sex and other baseline characteristics. Absolute alcohol therapy significantly improved the quality of life of patients, reduced tumor size and mortality, and showed significantly better results regarding survival (P < 0.05) than the control group. We conclude that absolute alcohol is a beneficial and safe palliative treatment measure in advanced hepatocellular carcinoma (HCC). (author)

  3. DOES ABSOLUTE SYNONYMY EXIST IN OWERE-IGBO?

    African Journals Online (AJOL)

    USER

    The researcher also interviewed native speakers of the dialect. The study ... The word 'synonymy' means sameness of meaning, i.e., a relationship in which more ... whether absolute synonymy exists in Owere–Igbo or not. ... 'close this book'.

  4. Prognostic Value of Absolute versus Relative Rise of Blood ...

    African Journals Online (AJOL)

    maternal outcome than a relative rise in the systolic/diastolic blood pressure from mid pregnancy, which did not reach this absolute level. We conclude that in the Nigerian obstetric population, the practice of diagnosing pregnancy hypertension on ...

  5. Absolute calibration of sniffer probes on Wendelstein 7-X

    International Nuclear Information System (INIS)

    Moseev, D.; Laqua, H. P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V.; Gellert, F.; Oosterbeek, J. W.

    2016-01-01

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of the Wendelstein 7-X empty vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients that are obtained by the direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels up to 340 kW/m2 per MW injected beam power. Furthest away from the launcher, i.e., half a toroidal turn, still 90 kW/m2 per MW injected beam power is measured.

  6. Absolute calibration of sniffer probes on Wendelstein 7-X

    Science.gov (United States)

    Moseev, D.; Laqua, H. P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V.; Gellert, F.; Oosterbeek, J. W.

    2016-08-01

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of the Wendelstein 7-X empty vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients that are obtained by the direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels up to 340 kW/m2 per MW injected beam power. Furthest away from the launcher, i.e., half a toroidal turn, still 90 kW/m2 per MW injected beam power is measured.

  7. Absolute calibration of sniffer probes on Wendelstein 7-X

    Energy Technology Data Exchange (ETDEWEB)

    Moseev, D., E-mail: dmitry.moseev@ipp.mpg.de; Laqua, H. P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V. [Max-Planck-Institut für Plasmaphysik, Greifswald (Germany); Gellert, F. [Max-Planck-Institut für Plasmaphysik, Greifswald (Germany); Ernst-Moritz-Arndt-Universität Greifswald, Greifswald (Germany); Oosterbeek, J. W. [Eindhoven University of Technology, Eindhoven (Netherlands)

    2016-08-15

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of the Wendelstein 7-X empty vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients that are obtained by the direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels up to 340 kW/m2 per MW injected beam power. Furthest away from the launcher, i.e., half a toroidal turn, still 90 kW/m2 per MW injected beam power is measured.

  8. Probative value of absolute and relative judgments in eyewitness identification.

    Science.gov (United States)

    Clark, Steven E; Erickson, Michael A; Breneman, Jesse

    2011-10-01

    It is well-accepted that eyewitness identification decisions based on relative judgments are less accurate than identification decisions based on absolute judgments. However, the theoretical foundation for this view has not been established. In this study relative and absolute judgments were compared through simulations of the WITNESS model (Clark, Appl Cogn Psychol 17:629-654, 2003) to address the question: Do suspect identifications based on absolute judgments have higher probative value than suspect identifications based on relative judgments? Simulations of the WITNESS model showed a consistent advantage for absolute judgments over relative judgments for suspect-matched lineups. However, simulations of same-foils lineups showed a complex interaction based on the accuracy of memory and the similarity relationships among lineup members.
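
    A heavily simplified sketch of the contrast being simulated above (not the WITNESS model itself): an absolute rule identifies the best-matching lineup member only if its match strength exceeds a criterion, while a relative rule responds when the best match sufficiently exceeds the next-best. The match-strength distributions and criteria are invented.

        import numpy as np

        rng = np.random.default_rng(5)
        n_lineups, foils = 10_000, 5

        # Invented match strengths: the (guilty) suspect matches memory better than foils.
        suspect = rng.normal(1.0, 1.0, n_lineups)
        foil = rng.normal(0.0, 1.0, (n_lineups, foils))
        best_foil = foil.max(axis=1)

        abs_criterion, rel_criterion = 1.0, 0.8
        absolute_id = (suspect > best_foil) & (suspect > abs_criterion)
        relative_id = (suspect > best_foil) & (suspect - best_foil > rel_criterion)

        print(f"absolute-rule suspect IDs: {absolute_id.mean():.2%}")
        print(f"relative-rule suspect IDs: {relative_id.mean():.2%}")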

  9. Changes in Absolute Sea Level Along U.S. Coasts

    Data.gov (United States)

    U.S. Environmental Protection Agency — This map shows changes in absolute sea level from 1960 to 2016 based on satellite measurements. Data were adjusted by applying an inverted barometer (air pressure)...

  10. Confirmation of the absolute configuration of (−)-aurantioclavine

    KAUST Repository

    Behenna, Douglas C.; Krishnan, Shyam; Stoltz, Brian M.

    2011-01-01

    We confirm our previous assignment of the absolute configuration of (-)-aurantioclavine as 7R by crystallographically characterizing an advanced 3-bromoindole intermediate reported in our previous synthesis. This analysis also provides additional

  11. Standard Errors for Matrix Correlations.

    Science.gov (United States)

    Ogasawara, Haruhiko

    1999-01-01

    Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)

  12. A proposal to measure absolute environmental sustainability in lifecycle assessment

    DEFF Research Database (Denmark)

    Bjørn, Anders; Margni, Manuele; Roy, Pierre-Olivier

    2016-01-01

    sustainable are therefore increasingly important. Such absolute indicators exist, but suffer from shortcomings such as incomplete coverage of environmental issues, varying data quality and varying or insufficient spatial resolution. The purpose of this article is to demonstrate that life cycle assessment (LCA...... in supporting decisions aimed at simultaneously reducing environmental impacts efficiently and maintaining or achieving environmental sustainability. We have demonstrated that LCA indicators can be modified from being relative to being absolute indicators of environmental sustainability. Further research should...

  13. Overspecification of colour, pattern, and size: Salience, absoluteness, and consistency

    OpenAIRE

    Sammie eTarenskeen; Mirjam eBroersma; Mirjam eBroersma; Bart eGeurts

    2015-01-01

    The rates of overspecification of colour, pattern, and size are compared, to investigate how salience and absoluteness contribute to the production of overspecification. Colour and pattern are absolute attributes, whereas size is relative and less salient. Additionally, a tendency towards consistent responses is assessed. Using a within-participants design, we find similar rates of colour and pattern overspecification, which are both higher than the rate of size overspecification. Using a bet...

  14. Overspecification of color, pattern, and size: salience, absoluteness, and consistency

    OpenAIRE

    Tarenskeen, S.L.; Broersma, M.; Geurts, B.

    2015-01-01

    The rates of overspecification of color, pattern, and size are compared, to investigate how salience and absoluteness contribute to the production of overspecification. Color and pattern are absolute and salient attributes, whereas size is relative and less salient. Additionally, a tendency toward consistent responses is assessed. Using a within-participants design, we find similar rates of color and pattern overspecification, which are both higher than the rate of size overspecification. Usi...

  15. Absolute transition probabilities in the NeI 3p-3s fine structure by beam-gas-dye laser spectroscopy

    International Nuclear Information System (INIS)

    Hartmetz, P.; Schmoranzer, H.

    1983-01-01

    The beam-gas-dye laser two-step excitation technique is further developed and applied to the direct measurement of absolute atomic transition probabilities in the NeI 3p-3s fine-structure transition array with a maximum experimental error of 5%. (orig.)

  16. Phonological errors predominate in Arabic spelling across grades 1-9.

    Science.gov (United States)

    Abu-Rabia, Salim; Taha, Haitham

    2006-03-01

    Most spelling-error analysis has been conducted in Latin orthographies and rarely in other orthographies such as Arabic. Two hundred and eighty-eight students in grades 1-9 participated in the study. They were presented with nine lists of words to test their spelling skills. Their spelling errors were analyzed by error categories. The most frequent errors were phonological. The results did not indicate any significant differences in the percentages of phonological errors across grades one to nine. Thus, phonology probably presents the greatest challenge to students developing spelling skills in Arabic.

  17. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently Chakraborty has proposed a simple technique called the packet combining scheme, in which errors are corrected at the receiver from the erroneous copies. The Packet Combining (PC) scheme fails (i) when bit error locations in erroneous copies are the same and (ii) when multiple bit errors occur. Both these cases have been addressed recently by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error-forecasting correction schemes are reported, which in combination with PRPC offer higher throughput. (author)

  18. Evaluating a medical error taxonomy.

    OpenAIRE

    Brixey, Juliana; Johnson, Todd R.; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a stand...

  19. The Pragmatics of "Unruly" Dative Absolutes in Early Slavic

    Directory of Open Access Journals (Sweden)

    Daniel E. Collins

    2011-08-01

    This chapter examines some uses of the dative absolute in Old Church Slavonic and in early recensional Slavonic texts that depart from notions of how Indo-European absolute constructions should behave, either because they have subjects coreferential with the (putative) main-clause subjects or because they function as if they were main clauses in their own right. Such "noncanonical" absolutes have generally been written off as mechanistic translations or as mistakes by scribes who did not understand the proper uses of the construction. In reality, the problem is not with literalistic translators or incompetent scribes but with the definition of the construction itself; it is quite possible to redefine the Early Slavic dative absolute in a way that accounts for the supposedly deviant cases. While the absolute is generally dependent semantically on an adjacent unit of discourse, it should not always be regarded as subordinated syntactically. There are good grounds for viewing some absolutes not as dependent clauses but as independent sentences whose collateral character is an issue not of syntax but of the pragmatics of discourse.

  20. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  1. Error Patterns in Problem Solving.

    Science.gov (United States)

    Babbitt, Beatrice C.

    Although many common problem-solving errors within the realm of school mathematics have been previously identified, a compilation of such errors is not readily available within learning disabilities textbooks, mathematics education texts, or teacher's manuals for school mathematics texts. Using data on error frequencies drawn from both the Fourth…

  2. Performance, postmodernity and errors

    DEFF Research Database (Denmark)

    Harder, Peter

    2013-01-01

    speaker’s competency (note the –y ending!) reflects adaptation to the community langue, including variations. This reversal of perspective also reverses our understanding of the relationship between structure and deviation. In the heyday of structuralism, it was tempting to confuse the invariant system...... with the prestige variety, and conflate non-standard variation with parole/performance and class both as erroneous. Nowadays the anti-structural sentiment of present-day linguistics makes it tempting to confuse the rejection of ideal abstract structure with a rejection of any distinction between grammatical...... as deviant from the perspective of function-based structure and discuss to what extent the recognition of a community langue as a source of adaptive pressure may throw light on different types of deviation, including language handicaps and learner errors....

  3. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Quantificação da falha na madeira em juntas coladas utilizando técnicas de visão artificial [Measuring wood failure percentage using a machine vision system]

    Directory of Open Access Journals (Sweden)

    Christovão Pereira Abrahão

    2003-02-01

    measurement can replace the manual grid method. The proposed algorithms presented an average absolute error of 3%, as compared to the manual grid method.

  5. Effect of breed and non-genetic factors on percentage milk ...

    African Journals Online (AJOL)

    This study was done to determine the effect of breed and non-genetic factors on percentage milk composition of smallholders' dual-purpose cattle on-farm in the Ashanti Region. Fresh milk samples from various breeds of cows were assessed for percentage components of protein, fat, lactose, cholesterol, solidnon- fat and ...

  6. 12 CFR Appendix A to Part 230 - Annual Percentage Yield Calculation

    Science.gov (United States)

    2010-01-01

    ... following simple formula: APY=100 (Interest/Principal) Examples (1) If an institution pays $61.68 in... percentage yield is 5.39%, using the simple formula: APY=100(134.75/2,500) APY=5.39% For $15,000, interest is... Yield Calculation The annual percentage yield measures the total amount of interest paid on an account...
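
    The general formula of the appendix reduces, for an account with a 365-day term, exactly to the simple formula quoted in the excerpt, APY = 100 × (Interest/Principal). The sketch below simply re-applies that formula to the figures that survive in the excerpt ($134.75 of interest on a $2,500 principal); it is an illustration, not part of the regulatory text.

```python
def annual_percentage_yield(interest: float, principal: float) -> float:
    """Simple APY formula for a 365-day account term: APY = 100 * (interest / principal)."""
    return 100.0 * interest / principal

# Figures quoted in the excerpt: $134.75 of interest earned on a $2,500 principal.
print(f"APY = {annual_percentage_yield(134.75, 2500.00):.2f}%")  # -> APY = 5.39%
```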

  7. 7 CFR 981.47 - Method of establishing salable and reserve percentages.

    Science.gov (United States)

    2010-01-01

    ...) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF... effectuate the declared policy of the act, he shall designate such percentages. Except as provided in § 981... percentages, the Secretary shall give consideration to the ratio of estimated trade demand (domestic plus...

  8. High body fat percentage among adult women in Malaysia: the role ...

    African Journals Online (AJOL)

    Body fat percentage is regarded as an important measurement for diagnosis of obesity. The aim of this study is to determine the association of high body fat percentage (BF%) and lifestyle among adult women. The study was conducted on 327 women, aged 40-59 years, recruited during a health screening program. Data on ...

  9. 13 CFR 126.701 - Can these subcontracting percentages requirements change?

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Can these subcontracting percentages requirements change? 126.701 Section 126.701 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION HUBZONE PROGRAM Contract Performance Requirements § 126.701 Can these subcontracting percentages...

  10. Infants with Down Syndrome: Percentage and Age for Acquisition of Gross Motor Skills

    Science.gov (United States)

    Pereira, Karina; Basso, Renata Pedrolongo; Lindquist, Ana Raquel Rodrigues; da Silva, Louise Gracelli Pereira; Tudella, Eloisa

    2013-01-01

    The literature is bereft of information about the age at which infants with Down syndrome (DS) acquire motor skills and the percentage of infants that do so by the age of 12 months. Therefore, it is necessary to identify the difference in age, in relation to typical infants, at which motor skills were acquired and the percentage of infants with DS…

  11. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections.

    Science.gov (United States)

    Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D

    2018-01-01

    Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not
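
    A minimal sketch of the comparison logic described above: implement the same projection in parallel versions, take the concordant (error-free) output as the reference, and flag cells whose percentage difference exceeds a ±5% materiality threshold. The version names and numbers below are illustrative placeholders, not values from the study.

```python
# Hypothetical projections from three parallel implementations of the same model.
versions = {
    "named_single_cells":   {"in_care": 1030.0, "on_treatment": 870.0},
    "column_row_reference": {"in_care": 1230.0, "on_treatment": 910.0},
    "named_matrices":       {"in_care": 1000.0, "on_treatment": 850.0},
}

MATERIAL = 5.0  # percentage difference treated as a material error

def pct_diff(value: float, reference: float) -> float:
    """Percentage difference of a projection relative to the reference value."""
    return 100.0 * (value - reference) / reference

# Treat the concordant, error-free version as the reference.
reference = versions["named_matrices"]
for name, outputs in versions.items():
    for cell, value in outputs.items():
        d = pct_diff(value, reference[cell])
        flag = "MATERIAL" if abs(d) > MATERIAL else "ok"
        print(f"{name:22s} {cell:13s} {d:+6.1f}%  {flag}")
```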

  12. Medical Error Types and Causes Made by Nurses in Turkey

    Directory of Open Access Journals (Sweden)

    Dilek Kucuk Alemdar

    2013-06-01

    AIM: This study was carried out as a descriptive study in order to determine the types, causes and prevalence of medical errors made by nurses in Turkey. METHOD: Seventy-eight (78) nurses who worked in a hospital randomly selected from five hospitals in Giresun city centre were enrolled in the study. The data were collected by the researchers using the 'Information Form for Nurses' and the 'Medical Error Form'. The Medical Error Form consists of 2 parts and 40 items covering types and causes of medical errors. Nurses' socio-demographic variables, medical error types and causes were evaluated using percentage distributions and means. RESULTS: The mean age of the nurses was 25.5 years, with a standard deviation of 6.03 years. 50% of the nurses had graduated from a health professional high school. 53.8% of the nurses were single, 63.1% had worked between 1-5 years, 71.8% worked day and night shifts, and 42.3% worked in medical clinics. The most common types of medical errors were hospital infection (15.4%), diagnostic errors (12.8%), and needle or cutting-tool injuries and problems related to drugs with side effects (10.3%). In the study, 38.5% of the nurses reported that they thought the main cause of medical error was tiredness, 36.4% increased workload and 34.6% long working hours. CONCLUSION: As a result of the present study, nurses mentioned hospital infection, diagnostic errors, and needle or cutting-tool injuries as the most common medical errors, and fatigue, work overload and long working hours as the most common reasons for medical error. [TAF Prev Med Bull 2013; 12(3): 307-314]

  13. Errors and Correction of Precipitation Measurements in China

    Institute of Scientific and Technical Information of China (English)

    REN Zhihua; LI Mingqin

    2007-01-01

    In order to discover the range of various errors in Chinese precipitation measurements and seek a correction method, 30 precipitation evaluation stations were set up countrywide before 1993. All the stations are reference stations in China. To seek a correction method for wind-induced error, a precipitation correction instrument called the "horizontal precipitation gauge" was devised beforehand. Field intercomparison observations regarding 29,000 precipitation events have been conducted using one pit gauge, two elevated operational gauges and one horizontal gauge at the above 30 stations. The range of precipitation measurement errors in China is obtained by analysis of the intercomparison measurement results. The distribution of random errors and systematic errors in precipitation measurements is studied in this paper. A correction method, especially for wind-induced errors, is developed. The results prove that a power-function correlation exists between the precipitation amount caught by the horizontal gauge and the absolute difference between the observations of the operational gauge and the pit gauge. The correlation coefficient is 0.99. For operational observations, precipitation correction can be carried out by parallel observation with a horizontal precipitation gauge alone. The precipitation accuracy after correction approaches that of the pit gauge. The correction method developed is simple and feasible.
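
    The correction described above rests on a power-function relation between the horizontal-gauge catch and the absolute difference between the operational-gauge and pit-gauge observations. The sketch below fits such a relation, y = a·x^b, by least squares in log-log space on made-up numbers, since the station data are not reproduced here.

```python
import numpy as np

# Illustrative data only: horizontal-gauge catch (x) and the absolute
# difference between operational and pit gauges (y), both in mm.
x = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0])
y = np.array([0.05, 0.11, 0.21, 0.40, 0.78, 1.52])

# Fit y = a * x**b via linear regression in log-log space.
b, log_a = np.polyfit(np.log(x), np.log(y), 1)
a = np.exp(log_a)

y_hat = a * x**b
r = np.corrcoef(y, y_hat)[0, 1]
print(f"fitted relation: y = {a:.3f} * x^{b:.3f},  correlation = {r:.3f}")

def wind_correction(horizontal_catch_mm: float) -> float:
    """Correction (mm) to add to an operational reading, given the parallel horizontal-gauge catch."""
    return a * horizontal_catch_mm**b
```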

  14. THE DISKMASS SURVEY. II. ERROR BUDGET

    International Nuclear Information System (INIS)

    Bershady, Matthew A.; Westfall, Kyle B.; Verheijen, Marc A. W.; Martinsson, Thomas; Andersen, David R.; Swaters, Rob A.

    2010-01-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ_*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface-brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ_*^disk), and disk maximality (F_{*,max}^{disk} ≡ V_{*,max}^{disk} / V_c). Random and systematic errors in these quantities for individual galaxies will be ∼25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  15. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    errors are therefore needed. Development of definition: A definition of medication errors including an index of error types for each stage in the medication process was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary......Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J Title: Prioritising interventions against medication errors – the importance of a definition Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark Methods: Medication...... errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication...

  16. Social aspects of clinical errors.

    Science.gov (United States)

    Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave

    2009-08-01

    Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording and policy development to enhance quality service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed whilst the major errors resulting in damage and death to patients alarm both professionals and public with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review the healthcare professional strategies for managing such errors.

  17. Auditory working memory predicts individual differences in absolute pitch learning.

    Science.gov (United States)

    Van Hedger, Stephen C; Heald, Shannon L M; Koch, Rachelle; Nusbaum, Howard C

    2015-07-01

    Absolute pitch (AP) is typically defined as the ability to label an isolated tone as a musical note in the absence of a reference tone. At first glance the acquisition of AP note categories seems like a perceptual learning task, since individuals must assign a category label to a stimulus based on a single perceptual dimension (pitch) while ignoring other perceptual dimensions (e.g., loudness, octave, instrument). AP, however, is rarely discussed in terms of domain-general perceptual learning mechanisms. This is because AP is typically assumed to depend on a critical period of development, in which early exposure to pitches and musical labels is thought to be necessary for the development of AP precluding the possibility of adult acquisition of AP. Despite this view of AP, several previous studies have found evidence that absolute pitch category learning is, to an extent, trainable in a post-critical period adult population, even if the performance typically achieved by this population is below the performance of a "true" AP possessor. The current studies attempt to understand the individual differences in learning to categorize notes using absolute pitch cues by testing a specific prediction regarding cognitive capacity related to categorization - to what extent does an individual's general auditory working memory capacity (WMC) predict the success of absolute pitch category acquisition. Since WMC has been shown to predict performance on a wide variety of other perceptual and category learning tasks, we predict that individuals with higher WMC should be better at learning absolute pitch note categories than individuals with lower WMC. Across two studies, we demonstrate that auditory WMC predicts the efficacy of learning absolute pitch note categories. These results suggest that a higher general auditory WMC might underlie the formation of absolute pitch categories for post-critical period adults. Implications for understanding the mechanisms that underlie the

  18. "First, know thyself": cognition and error in medicine.

    Science.gov (United States)

    Elia, Fabrizio; Aprà, Franco; Verhovez, Andrea; Crupi, Vincenzo

    2016-04-01

    Although error is an integral part of the world of medicine, physicians have always been little inclined to take into account their own mistakes, and the extraordinary technological progress observed in the last decades does not seem to have resulted in a significant reduction in the percentage of diagnostic errors. The failure to reduce diagnostic errors, notwithstanding the considerable investment in human and economic resources, has paved the way to new strategies which were made available by the development of cognitive psychology, the branch of psychology that aims at understanding the mechanisms of human reasoning. This new approach led us to realize that we are not fully rational agents able to take decisions on the basis of logical and probabilistically appropriate evaluations. In us, two different and mostly independent modes of reasoning coexist: a fast or non-analytical reasoning, which tends to be largely automatic and fast-reactive, and a slow or analytical reasoning, which permits rationally founded answers. One of the features of the fast mode of reasoning is the employment of standardized rules, termed "heuristics." Heuristics lead physicians to correct choices in a large percentage of cases. Unfortunately, cases exist wherein the heuristic triggered fails to fit the target problem, so that the fast mode of reasoning can lead us to unreflectively perform actions exposing us and others to variable degrees of risk. Cognitive errors arise as a result of these cases. Our review illustrates how cognitive errors can cause diagnostic problems in clinical practice.

  19. Absolute earthquake locations using 3-D versus 1-D velocity models below a local seismic network: example from the Pyrenees

    Science.gov (United States)

    Theunissen, T.; Chevrot, S.; Sylvander, M.; Monteiller, V.; Calvet, M.; Villaseñor, A.; Benahmed, S.; Pauchet, H.; Grimaud, F.

    2018-03-01

    Local seismic networks are usually designed so that earthquakes are located inside them (primary azimuthal gap below 180° and distance to the first station higher than 15 km). Errors on velocity models and the accuracy of absolute earthquake locations are assessed based on a reference data set made of active seismic, quarry blasts and passive temporary experiments. Solutions and uncertainties are estimated using the probabilistic approach of the NonLinLoc (NLLoc) software based on Equal Differential Time. Some updates have been added to NLLoc to better focus on the final solution (outlier exclusion, multiscale grid search, S-phase weighting). Errors in the probabilistic approach are defined to take into account errors on velocity models and on arrival times. The seismicity in the final 3-D catalogue is located with a horizontal uncertainty of about 2.0 ± 1.9 km and a vertical uncertainty of about 3.0 ± 2.0 km.

  20. PERBANDINGAN ANALISIS LEAST ABSOLUTE SHRINKAGE AND SELECTION OPERATOR DAN PARTIAL LEAST SQUARES (Studi Kasus: Data Microarray

    Directory of Open Access Journals (Sweden)

    KADEK DWI FARMANI

    2012-09-01

    Linear regression analysis is one of the parametric statistical methods that model the relationship between two or more quantitative variables. Linear regression analysis rests on several assumptions: the errors are normally distributed, mutually uncorrelated, and have constant (homogeneous) variance. Several constraints can prevent these assumptions from being met, for example correlation between the independent variables (multicollinearity) and limits on the number of observations relative to the number of independent variables. When the number of samples obtained is smaller than the number of independent variables, the data are called microarray data. The Least Absolute Shrinkage and Selection Operator (LASSO) and Partial Least Squares (PLS) are statistical methods that can be used to deal with microarray data, overfitting, and multicollinearity. This study therefore compares the LASSO and PLS methods. It uses data on coronary heart disease and stroke patients, which are microarray data and contain multicollinearity. For these data, in which most of the correlations between independent variables are weak, the LASSO method produces a better model than PLS, as judged by the RMSEP.
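
    A hedged sketch of the kind of comparison the abstract describes, using scikit-learn's Lasso and PLSRegression on synthetic data with more predictors than samples, and judging each fit by RMSEP (root mean square error of prediction). It is not the authors' code, and the data are simulated.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic microarray-like data: fewer samples (n) than predictors (p).
n, p = 60, 200
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [2.0, -1.5, 1.0, 0.8, -0.6]          # only a few informative predictors
y = X @ beta + rng.normal(scale=0.5, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LASSO": Lasso(alpha=0.1, max_iter=10000),
    "PLS":   PLSRegression(n_components=5),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = np.ravel(model.predict(X_te))
    rmsep = np.sqrt(mean_squared_error(y_te, pred))   # root mean square error of prediction
    print(f"{name}: RMSEP = {rmsep:.3f}")
```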

  1. Automated objective determination of percentage of malignant nuclei for mutation testing.

    Science.gov (United States)

    Viray, Hollis; Coulter, Madeline; Li, Kevin; Lane, Kristin; Madan, Aruna; Mitchell, Kisha; Schalper, Kurt; Hoyt, Clifford; Rimm, David L

    2014-01-01

    Detection of DNA mutations in tumor tissue can be a critical companion diagnostic test before prescription of a targeted therapy. Each method for detection of these mutations is associated with an analytic sensitivity that is a function of the percentage of tumor cells present in the specimen. Currently, tumor cell percentage is visually estimated, resulting in an ordinal and highly variable result for a biologically continuous variable. We proposed that this aspect of DNA mutation testing could be standardized by developing a computer algorithm capable of accurately determining the percentage of malignant nuclei in an image of a hematoxylin and eosin-stained tissue. Using inForm software, we developed an algorithm to calculate the percentage of malignant cells in histologic specimens of colon adenocarcinoma. A criterion standard was established by manually counting malignant and benign nuclei. Three pathologists also estimated the percentage of malignant nuclei in each image. Algorithm #9 had a median deviation from the criterion standard of 5.4% on the training set and 6.2% on the validation set. Compared with pathologist estimation, Algorithm #9 showed a similar ability to determine the percentage of malignant nuclei. This method represents a potential future tool to assist in determining the percentage of malignant nuclei present in a tissue section. Further validation of this algorithm or an improved algorithm may have value in assessing the percentage of malignant cells more accurately for companion diagnostic mutation testing.
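
    As a back-of-the-envelope illustration of why the tumor-cell percentage matters for mutation testing, the sketch below computes the percentage of malignant nuclei from raw counts and checks it against an assay's minimum tumor-content requirement; the counts and the 20% threshold are placeholders, not values from the study.

```python
def malignant_percentage(malignant: int, benign: int) -> float:
    """Percentage of malignant nuclei among all counted nuclei."""
    total = malignant + benign
    return 100.0 * malignant / total if total else 0.0

# Illustrative counts and an assumed minimum tumor content for the assay.
pct = malignant_percentage(malignant=430, benign=1270)
MIN_TUMOR_CONTENT = 20.0  # percent; placeholder analytic-sensitivity limit
status = "adequate" if pct >= MIN_TUMOR_CONTENT else "below"
print(f"malignant nuclei: {pct:.1f}% ({status} assay threshold)")
```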

  2. Absolute Navigation Information Estimation for Micro Planetary Rovers

    Directory of Open Access Journals (Sweden)

    Muhammad Ilyas

    2016-03-01

    This paper provides algorithms to estimate absolute navigation information, e.g., absolute attitude and position, by using low-power, low-weight and low-volume microelectromechanical systems (MEMS) sensors that are suitable for micro planetary rovers. Planetary rovers appear to be easily navigable robots due to their extremely slow speed and rotation but, unfortunately, the sensor suites available for terrestrial robots are not always available for planetary rover navigation. This makes them difficult to navigate in a completely unexplored, harsh and complex environment. Whereas the relative attitude and position can be tracked in a similar way as for ground robots, absolute navigation information, unlike in terrestrial applications, is difficult to obtain for a remote celestial body, such as Mars or the Moon. In this paper, an algorithm called the EASI algorithm (Estimation of Attitude using Sun sensor and Inclinometer) is presented to estimate the absolute attitude using only a MEMS-type sun sensor and inclinometer. Moreover, the output of the EASI algorithm is fused with MEMS gyros to produce more accurate and reliable attitude estimates. An absolute position estimation algorithm has also been presented based on these on-board sensors. Experimental results demonstrate the viability of the proposed algorithms and the sensor suite for low-cost and low-weight micro planetary rovers.

  3. The relative and absolute speed of radiographic screen - film systems

    International Nuclear Information System (INIS)

    Lee, In Ja; Huh, Joon

    1993-01-01

    Recently, a large number of new screen-film systems have become available for use in diagnostic radiology. These new screens are made of materials generally known as rare-earth phosphors, which have high x-ray absorption and high x-ray-to-light conversion efficiency compared to calcium tungstate phosphors. The major advantage of these new systems is the reduction of patient exposure due to their high speed or high sensitivity. However, a system with excessively high speed can result in a significant degradation of radiographic image quality. Therefore, speed is an important parameter for users of these systems. Our aim in this study was to determine accurately and precisely the absolute and relative speeds of both new and conventional screen-film systems. The absolute speed was determined under BRH phantom beam quality conditions, and the relative speeds were measured by a split-screen technique under BRH and ANSI phantom beam quality conditions. The absolute and relative speeds were determined for 8 kinds of screen and 4 kinds of film in the regular system and 7 kinds of screen and 7 kinds of film in the ortho system. In this study we found that the New Rx and T-MAT G had the highest film speed, and that the standard deviation of relative speed was larger for the green system than for the blue system. No relationship was found between the absolute speed and the relative speed in either the ortho or the regular system.

  4. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  5. A digital, constant-frequency pulsed phase-locked-loop instrument for real-time, absolute ultrasonic phase measurements

    Science.gov (United States)

    Haldren, H. A.; Perey, D. F.; Yost, W. T.; Cramer, K. E.; Gupta, M. C.

    2018-05-01

    A digitally controlled instrument for conducting single-frequency and swept-frequency ultrasonic phase measurements has been developed based on a constant-frequency pulsed phase-locked-loop (CFPPLL) design. This instrument uses a pair of direct digital synthesizers to generate an ultrasonically transceived tone-burst and an internal reference wave for phase comparison. Real-time, constant-frequency phase tracking in an interrogated specimen is possible with a resolution of 0.00038 rad (0.022°), and swept-frequency phase measurements can be obtained. Using phase measurements, an absolute thickness in borosilicate glass is presented to show the instrument's efficacy, and these results are compared to conventional ultrasonic pulse-echo time-of-flight (ToF) measurements. The newly developed instrument predicted the thickness with a mean error of -0.04 μm and a standard deviation of error of 1.35 μm. Additionally, the CFPPLL instrument shows a lower measured phase error in the absence of changing temperature and couplant thickness than high-resolution cross-correlation ToF measurements at a similar signal-to-noise ratio. By showing higher accuracy and precision than conventional pulse-echo ToF measurements and lower phase errors than cross-correlation ToF measurements, the new digitally controlled CFPPLL instrument provides high-resolution absolute ultrasonic velocity or path-length measurements in solids or liquids, as well as tracking of material property changes with high sensitivity. The ability to obtain absolute phase measurements allows for many new applications beyond those possible with previous ultrasonic pulsed phase-locked-loop instruments. In addition to improved resolution, swept-frequency phase measurements add useful capability in measuring properties of layered structures, such as bonded joints, or materials which exhibit non-linear frequency-dependent behavior, such as dispersive media.
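
    As background for the swept-frequency measurement, the textbook pulse-echo relation (not necessarily the instrument's own algorithm) is that the phase accumulated over a round trip at frequency f is φ = 2πf·(2d/v), so the slope of phase versus frequency gives the round-trip delay and hence the thickness d. The sketch below applies this to synthetic data; the sound speed and noise level are assumptions.

```python
import numpy as np

# Assumed material properties (nominal longitudinal sound speed of borosilicate glass).
v = 5640.0          # m/s (assumption)
d_true = 5.000e-3   # true thickness in metres, for the synthetic example only

# Synthetic swept-frequency phase data: phi(f) = 2*pi*f*(2d/v) plus noise at
# roughly the 0.00038 rad resolution quoted in the abstract.
f = np.linspace(4.0e6, 6.0e6, 201)                                   # Hz
phi = 2 * np.pi * f * (2 * d_true / v)
phi += np.random.default_rng(1).normal(scale=4e-4, size=f.size)      # rad

# Round-trip delay from the slope of phase versus frequency, then thickness.
slope, _ = np.polyfit(f, phi, 1)      # rad per Hz
t_round_trip = slope / (2 * np.pi)    # seconds
d_est = v * t_round_trip / 2.0

print(f"estimated thickness: {d_est*1e3:.4f} mm "
      f"(error {abs(d_est - d_true)*1e6:.2f} um)")
```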

  6. Body fat percentage of urban South African children: implications for health and fitness.

    Science.gov (United States)

    Goon, D T; Toriola, A L; Shaw, B S; Amusa, L O; Khoza, L B; Shaw, I

    2013-09-01

    To explore gender and racial profiling of percentage body fat of 1136 urban South African children attending public schools in Pretoria Central. This is a cross-sectional survey of 1136 randomly selected children (548 boys and 588 girls) aged 9-13 years in urban (Pretoria Central) South Africa. Body mass, stature, skinfolds (subscapular and triceps) were measured. Data were analysed using descriptive statistics (means and standard deviations). Differences in the mean body fat percentage were examined for boys and girls according to their age group/race, using independent t-test samples. Girls had a significantly (p = 0.001) higher percentage body fat (22.7 ± 5.7%, 95% CI = 22.3, 23.2) compared to boys (16.1 ± 7.7%, 95% CI = 15.5, 16.8). Percentage body fat fluctuated with age in both boys and girls. Additionally, girls had significantly (p = 0.001) higher percentage body fat measurements at all ages compared to boys. Viewed racially, black children (20.1 ± 7.5) were significantly (p = 0.010) fatter than white children (19.0 ± 7.4) with a mean difference of 4.0. Black children were fatter than white children at ages 9, 10, 12 and 13 years, with a significant difference (p = 0.009) observed at age 12 years. There was a considerably higher level of excessive percentage body fat among school children in Central Pretoria, South Africa, with girls having significantly higher percentage body fat compared to boys. Racially, black children were fatter than white children. The excessive percentage body fat observed among the children in this study has implications for their health and fitness. Therefore, an intervention programme must be instituted in schools to prevent and control possible excessive percentage body fat in this age group.

  7. Errors in abdominal computed tomography

    International Nuclear Information System (INIS)

    Stephens, S.; Marting, I.; Dixon, A.K.

    1989-01-01

    Sixty-nine patients are presented in whom a substantial error was made on the initial abdominal computed tomography report. Certain features of these errors have been analysed. In 30 (43.5%) a lesion was simply not recognised (error of observation); in 39 (56.5%) the wrong conclusions were drawn about the nature of normal or abnormal structures (error of interpretation). The 39 errors of interpretation were more complex; in 7 patients an abnormal structure was noted but interpreted as normal, whereas in four a normal structure was thought to represent a lesion. Other interpretive errors included those where the wrong cause for a lesion had been ascribed (24 patients), and those where the abnormality was substantially under-reported (4 patients). Various features of these errors are presented and discussed. Errors were made just as often in relation to small and large lesions. Consultants made as many errors as senior registrar radiologists. It is likely that dual reporting is the best method of avoiding such errors and, indeed, this is widely practised in our unit. (Author). 9 refs.; 5 figs.; 1 tab

  8. Perceiving pitch absolutely: Comparing absolute and relative pitch possessors in a pitch memory task

    Directory of Open Access Journals (Sweden)

    Schlaug Gottfried

    2009-08-01

    Background: The perceptual-cognitive mechanisms and neural correlates of Absolute Pitch (AP) are not fully understood. The aim of this fMRI study was to examine the neural network underlying AP using a pitch memory experiment and contrasting two groups of musicians with each other, those that have AP and those that do not. Results: We found a common activation pattern for both groups that included the superior temporal gyrus (STG) extending into the adjacent superior temporal sulcus (STS), the inferior parietal lobule (IPL) extending into the adjacent intraparietal sulcus (IPS), the posterior part of the inferior frontal gyrus (IFG), the pre-supplementary motor area (pre-SMA), and superior lateral cerebellar regions. Significant between-group differences were seen in the left STS during the early encoding phase of the pitch memory task (more activation in AP musicians) and in the right superior parietal lobule (SPL)/intraparietal sulcus (IPS) during the early perceptual phase (ITP 0–3) and the later working memory/multimodal encoding phase of the pitch memory task (more activation in non-AP musicians). Non-significant between-group trends were seen in the posterior IFG (more in AP musicians) and the IPL (more anterior activations in the non-AP group and more posterior activations in the AP group). Conclusion: Since the increased activation of the left STS in AP musicians was observed during the early perceptual encoding phase and since the STS has been shown to be involved in categorization tasks, its activation might suggest that AP musicians involve categorization regions in tonal tasks. The increased activation of the right SPL/IPS in non-AP musicians indicates either an increased use of regions that are part of a tonal working memory (WM) network, or the use of a multimodal encoding strategy such as the utilization of a visual-spatial mapping scheme (i.e., imagining notes on a staff or using a spatial coding for their relative pitch height) for pitch

  9. Laboratory errors and patient safety.

    Science.gov (United States)

    Miligy, Dawlat A

    2015-01-01

    Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as setting specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the commonly encountered laboratory errors throughout our practice in laboratory work, their hazards for patient health care, and some measures and recommendations to minimize or eliminate these errors. The laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percentage distribution) in the laboratory department of one of the private hospitals in Egypt. Errors were classified according to the laboratory phases and according to their implication for patient health. Data obtained from 1,600 testing procedures revealed that the total number of encountered errors was 14 (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent, respectively, of total errors), while the number of test errors encountered in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implication for patient health, being detected before test reports had been submitted to the patients. On the other hand, the number of test errors that had already been submitted to patients and reached the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have had an impact on patient diagnosis. The findings of this study are concordant with those published from the USA and other countries. This proves that laboratory problems are universal and need general standardization and benchmarking measures. Original being the first data published from Arabic countries that

  10. Absolute marine gravimetry with matter-wave interferometry.

    Science.gov (United States)

    Bidel, Y; Zahzam, N; Blanchard, C; Bonnin, A; Cadoret, M; Bresson, A; Rouxel, D; Lequentrec-Lalancette, M F

    2018-02-12

    Measuring gravity from an aircraft or a ship is essential in geodesy, geophysics, mineral and hydrocarbon exploration, and navigation. Today, only relative sensors are available for onboard gravimetry. This is a major drawback because of the calibration and drift estimation procedures which lead to important operational constraints. Atom interferometry is a promising technology to obtain an onboard absolute gravimeter. But, despite high performances obtained in static conditions, no precise measurements were reported in dynamic conditions. Here, we present absolute gravity measurements from a ship with a sensor based on atom interferometry. Despite rough sea conditions, we obtained precision below 10⁻⁵ m s⁻². The atom gravimeter was also compared with a commercial spring gravimeter and showed better performances. This demonstration opens the way to the next generation of inertial sensors (accelerometer, gyroscope) based on atom interferometry which should provide high-precision absolute measurements from a moving platform.

  11. The systematic error of temperature noise correlation measurement method and self-calibration

    International Nuclear Information System (INIS)

    Tian Hong; Tong Yunxian

    1993-04-01

    The turbulent transport behavior of fluid noise and the effect of noise on the velocity measurement system have been studied. The systematic error of the velocity measurement system is analyzed. A theoretical calibration method is proposed, which makes the time-correlation velocity measurement an absolute measurement method. The theoretical results are in good agreement with experiments.

  12. Optimal Design of the Absolute Positioning Sensor for a High-Speed Maglev Train and Research on Its Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Junge Zhang

    2012-08-01

    This paper studies an absolute positioning sensor for a high-speed maglev train and its fault diagnosis method. The absolute positioning sensor is an important sensor for the high-speed maglev train to accomplish its synchronous traction. It is used to calibrate the error of the relative positioning sensor which is used to provide the magnetic phase signal. On the basis of the analysis for the principle of the absolute positioning sensor, the paper describes the design of the sending and receiving coils and realizes the hardware and the software for the sensor. In order to enhance the reliability of the sensor, a support vector machine is used to recognize the fault characters, and the signal flow method is used to locate the faulty parts. The diagnosis information not only can be sent to an upper center control computer to evaluate the reliability of the sensors, but also can realize on-line diagnosis for debugging and the quick detection when the maglev train is off-line. The absolute positioning sensor we study has been used in the actual project.
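
    A minimal sketch of the fault-recognition step described above: a support vector machine trained on features extracted from the sensor signals to separate normal operation from fault classes. The feature names, class labels and data are synthetic assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic feature vectors (e.g. signal amplitude, ripple, dropout rate of the
# receiving-coil signals); labels: 0 = normal, 1 = coil fault, 2 = cable fault.
def make_class(mean, n=100):
    return rng.normal(loc=mean, scale=0.3, size=(n, 3))

X = np.vstack([make_class([1.0, 0.1, 0.0]),
               make_class([0.4, 0.5, 0.1]),
               make_class([0.1, 0.2, 0.8])])
y = np.repeat([0, 1, 2], 100)

# Standardize the features, then train an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X, y)

# Classify a new measurement vector; this point lies near the class-1 cluster.
print(clf.predict([[0.35, 0.55, 0.12]]))
```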

  13. Optimal design of the absolute positioning sensor for a high-speed maglev train and research on its fault diagnosis.

    Science.gov (United States)

    Zhang, Dapeng; Long, Zhiqiang; Xue, Song; Zhang, Junge

    2012-01-01

    This paper studies an absolute positioning sensor for a high-speed maglev train and its fault diagnosis method. The absolute positioning sensor is an important sensor for the high-speed maglev train to accomplish its synchronous traction. It is used to calibrate the error of the relative positioning sensor which is used to provide the magnetic phase signal. On the basis of the analysis for the principle of the absolute positioning sensor, the paper describes the design of the sending and receiving coils and realizes the hardware and the software for the sensor. In order to enhance the reliability of the sensor, a support vector machine is used to recognize the fault characters, and the signal flow method is used to locate the faulty parts. The diagnosis information not only can be sent to an upper center control computer to evaluate the reliability of the sensors, but also can realize on-line diagnosis for debugging and the quick detection when the maglev train is off-line. The absolute positioning sensor we study has been used in the actual project.

  14. Application of a soft computing technique in predicting the percentage of shear force carried by walls in a rectangular channel with non-homogeneous roughness.

    Science.gov (United States)

    Khozani, Zohreh Sheikh; Bonakdari, Hossein; Zaji, Amir Hossein

    2016-01-01

    Two new soft computing models, namely genetic programming (GP) and a genetic artificial algorithm (GAA) neural network (a combination of modified genetic algorithm and artificial neural network methods), were developed in order to predict the percentage of shear force carried by the walls in a rectangular channel with non-homogeneous roughness. The ability of these methods to estimate the percentage of shear force was investigated. Moreover, the effectiveness of the independent parameters in predicting the percentage of shear force was determined using sensitivity analysis. According to the results, the GP model demonstrated superior performance to the GAA model. A comparison was also made between the GP program, determined to be the best model, and five equations obtained in prior research. The GP model, with the lowest error values (root mean square error, RMSE, of 0.0515), performed best compared with the other equations presented for rough and smooth channels as well as smooth ducts. The equation proposed for rectangular channels with rough boundaries (RMSE of 0.0642) outperformed the prior equations for smooth boundaries.
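
    The models above are ranked by RMSE; a minimal sketch of that comparison on placeholder predictions is shown below (the numbers are illustrative, not the study's data).

```python
import numpy as np

def rmse(observed, predicted):
    """Root mean square error between observed and predicted values."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return float(np.sqrt(np.mean((observed - predicted) ** 2)))

# Illustrative observed shear-force fractions and two sets of model predictions.
observed  = [0.42, 0.38, 0.55, 0.61, 0.47]
gp_model  = [0.40, 0.41, 0.52, 0.65, 0.45]
prior_eqn = [0.36, 0.44, 0.49, 0.70, 0.52]

print(f"GP model  RMSE = {rmse(observed, gp_model):.4f}")
print(f"prior eqn RMSE = {rmse(observed, prior_eqn):.4f}")
```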

  15. Dopamine reward prediction error coding.

    Science.gov (United States)

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
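
    A minimal Rescorla-Wagner style sketch of the prediction-error idea summarized above (δ = received reward − predicted reward, with the prediction nudged by α·δ); it illustrates the concept only and is not a model of the neuronal recordings.

```python
# Minimal Rescorla-Wagner style illustration of a reward prediction error:
# delta = received reward - predicted reward; the prediction moves by alpha * delta.
alpha = 0.2          # learning rate
value = 0.0          # predicted reward
rewards = [1.0, 1.0, 1.0, 1.0, 0.0, 0.0]   # reward delivered, then omitted

for trial, r in enumerate(rewards, 1):
    delta = r - value            # positive, zero, or negative prediction error
    value += alpha * delta
    print(f"trial {trial}: reward={r:.1f}  prediction error={delta:+.2f}  new value={value:.2f}")
```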

  16. Absolute and Relative Socioeconomic Health Inequalities across Age Groups.

    Science.gov (United States)

    van Zon, Sander K R; Bültmann, Ute; Mendes de Leon, Carlos F; Reijneveld, Sijmen A

    2015-01-01

    The magnitude of socioeconomic health inequalities differs across age groups. It is less clear whether socioeconomic health inequalities differ across age groups by other factors that are known to affect the relation between socioeconomic position and health, like the indicator of socioeconomic position, the health outcome, gender, and as to whether socioeconomic health inequalities are measured in absolute or in relative terms. The aim is to investigate whether absolute and relative socioeconomic health inequalities differ across age groups by indicator of socioeconomic position, health outcome and gender. The study sample was derived from the baseline measurement of the LifeLines Cohort Study and consisted of 95,432 participants. Socioeconomic position was measured as educational level and household income. Physical and mental health were measured with the RAND-36. Age concerned eleven 5-years age groups. Absolute inequalities were examined by comparing means. Relative inequalities were examined by comparing Gini-coefficients. Analyses were performed for both health outcomes by both educational level and household income. Analyses were performed for all age groups, and stratified by gender. Absolute and relative socioeconomic health inequalities differed across age groups by indicator of socioeconomic position, health outcome, and gender. Absolute inequalities were most pronounced for mental health by household income. They were larger in younger than older age groups. Relative inequalities were most pronounced for physical health by educational level. Gini-coefficients were largest in young age groups and smallest in older age groups. Absolute and relative socioeconomic health inequalities differed cross-sectionally across age groups by indicator of socioeconomic position, health outcome and gender. Researchers should critically consider the implications of choosing a specific age group, in addition to the indicator of socioeconomic position and health outcome

  17. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
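
    A toy sketch of the two schemes described above may help: for a linear observable, the quadrature sum of the unisim shifts and the variance of the multisim draws estimate the same systematic variance. The observable, parameter sensitivities, and sigma values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def observable(params):
    """Toy linear observable; a stand-in for a full MC prediction in one data bin."""
    return 10.0 + params @ np.array([0.5, -1.2, 0.8])

sigmas = np.array([1.0, 0.5, 2.0])     # assumed 1-sigma sizes of three systematic parameters
nominal = observable(np.zeros(3))

# Unisim: one MC run per parameter, each shifted by +1 sigma.
unisim_shifts = np.array([observable(sigmas * np.eye(3)[i]) - nominal for i in range(3)])
unisim_variance = np.sum(unisim_shifts ** 2)

# Multisim: every run draws all parameters from their (normal) distributions.
draws = rng.normal(0.0, sigmas, size=(10_000, 3))
multisim_variance = np.var([observable(p) for p in draws])

print(unisim_variance, multisim_variance)   # both estimate the same systematic variance
```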

  18. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  19. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  20. HANPP Collection: Human Appropriation of Net Primary Productivity as a Percentage of Net Primary Productivity

    Data.gov (United States)

    National Aeronautics and Space Administration — The Human Appropriation of Net Primary Productivity (HANPP) as a Percentage of Net Primary Product (NPP) portion of the HANPP Collection represents a map identifying...

  1. 12 CFR Appendix A to Part 707 - Annual Percentage Yield Calculation

    Science.gov (United States)

    2010-01-01

    ... percentage yield calculations for account disclosures and advertisements, while Part II discusses annual... number of days that would occur for any actual sequence of that many calendar months. If credit unions...

  2. Weight Percentage of Calcium Carbonate for 17 Equatorial Pacific Cores from Brown University

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Weight percentages of calcium carbonate in this file were compiled by J. Farrell and W. L. Prell of Brown University for 17 equatorial Pacific Ocean sediment cores....

  3. Owners of nuclear power plants: Percentage ownership of commercial nuclear power plants by utility companies

    International Nuclear Information System (INIS)

    Wood, R.S.

    1987-08-01

    The following list indicates percentage ownership of commercial nuclear power plants by utility companies as of June 1, 1987. The list includes all plants licensed to operate, under construction, docketed for NRC safety and environmental reviews, or under NRC antitrust review. It does not include those plants announced but not yet under review or those plants formally canceled. In many cases, ownership may be in the process of changing as a result of altered financial conditions, changed power needs, and other reasons. However, this list reflects only those ownership percentages of which the NRC has been formally notified. Part I lists plants alphabetically with their associated applicants/licensees and percentage ownership. Part II lists applicants/licensees alphabetically with their associated plants and percentage ownership. Part I also indicates which plants have received operating licenses (OL's). Footnotes for both parts appear at the end of this document

  4. Architecture design for soft errors

    CERN Document Server

    Mukherjee, Shubu

    2008-01-01

    This book provides a comprehensive description of the architectural techniques to tackle the soft error problem. It covers the new methodologies for quantitative analysis of soft errors as well as novel, cost-effective architectural techniques to mitigate them. To provide readers with a better grasp of the broader problem definition and solution space, this book also delves into the physics of soft errors and reviews current circuit and software mitigation techniques.

  5. Dopamine reward prediction error coding

    OpenAIRE

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less...

  6. Total Synthesis and Absolute Configuration of the Marine Norditerpenoid Xestenone

    Directory of Open Access Journals (Sweden)

    Hiroaki Miyaoka

    2009-11-01

    Full Text Available Xestenone is a marine norditerpenoid found in the northeastern Pacific sponge Xestospongia vanilla. The relative configuration of C-3 and C-7 in xestenone was determined by NOESY spectral analysis. However, the relative configuration of C-12 and the absolute configuration of this compound were not determined. The authors have now achieved the total synthesis of xestenone using their developed one-pot synthesis of cyclopentane derivatives employing allyl phenyl sulfone and an epoxy iodide as a key step. The relative and absolute configurations of xestenone were thus successfully determined by this synthesis.

  7. Absolute transition probabilities for 559 strong lines of neutral cerium

    Energy Technology Data Exchange (ETDEWEB)

    Curry, J J, E-mail: jjcurry@nist.go [National Institute of Standards and Technology, Gaithersburg, MD 20899-8422 (United States)

    2009-07-07

    Absolute radiative transition probabilities are reported for 559 strong lines of neutral cerium covering the wavelength range 340-880 nm. These transition probabilities are obtained by scaling published relative line intensities (Meggers et al 1975 Tables of Spectral Line Intensities (National Bureau of Standards Monograph 145)) with a smaller set of published absolute transition probabilities (Bisson et al 1991 J. Opt. Soc. Am. B 8 1545). All 559 new values are for lines for which transition probabilities have not previously been available. The estimated relative random uncertainty of the new data is ±35% for nearly all lines.

  8. Strongly nonlinear theory of rapid solidification near absolute stability

    Science.gov (United States)

    Kowal, Katarzyna N.; Altieri, Anthony L.; Davis, Stephen H.

    2017-10-01

    We investigate the nonlinear evolution of the morphological deformation of a solid-liquid interface of a binary melt under rapid solidification conditions near two absolute stability limits. The first of these involves the complete stabilization of the system to cellular instabilities as a result of large enough surface energy. We derive nonlinear evolution equations in several limits in this scenario and investigate the effect of interfacial disequilibrium on the nonlinear deformations that arise. In contrast to the morphological stability problem in equilibrium, in which only cellular instabilities appear and only one absolute stability boundary exists, in disequilibrium the system is prone to oscillatory instabilities and a second absolute stability boundary involving attachment kinetics arises. Large enough attachment kinetics stabilize the oscillatory instabilities. We derive a nonlinear evolution equation to describe the nonlinear development of the solid-liquid interface near this oscillatory absolute stability limit. We find that strong asymmetries develop with time. For uniform oscillations, the evolution equation for the interface reduces to the simple form f'' + (βf')² + f = 0, where β is the disequilibrium parameter. Lastly, we investigate a distinguished limit near both absolute stability limits in which the system is prone to both cellular and oscillatory instabilities and derive a nonlinear evolution equation that captures the nonlinear deformations in this limit. Common to all these scenarios is the emergence of larger asymmetries in the resulting shapes of the solid-liquid interface with greater departures from equilibrium and larger morphological numbers. The disturbances additionally sharpen near the oscillatory absolute stability boundary, where the interface becomes deep-rooted. The oscillations are time-periodic only for small-enough initial amplitudes and their frequency depends on a single combination of physical parameters, including the
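
    The reduced interface equation quoted above can be integrated numerically; a minimal sketch using SciPy, with an illustrative value of the disequilibrium parameter β and a small initial amplitude (the abstract notes the oscillations are time-periodic only for small enough amplitudes):

```python
import numpy as np
from scipy.integrate import solve_ivp

beta = 0.5          # disequilibrium parameter (illustrative value only)

def rhs(t, y):
    """State y = (f, f'); the interface equation f'' + (beta*f')**2 + f = 0."""
    f, fp = y
    return [fp, -(beta * fp) ** 2 - f]

sol = solve_ivp(rhs, (0.0, 20.0), [0.1, 0.0], max_step=0.01)
print(sol.y[0, -1])   # interface amplitude at the final time
```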

  9. A note on unique solvability of the absolute value equation

    Directory of Open Access Journals (Sweden)

    Taher Lotfi

    2014-05-01

    Full Text Available It is proved that applying sufficient regularity conditions to the interval matrix $[A-|B|,A+|B|]$, we can create a new unique solvability condition for the absolute value equation $Ax+B|x|=b$, since regularity of interval matrices implies unique solvability of their corresponding absolute value equation. This condition is formulated in terms of positive definiteness of a certain point matrix. Special case $B=-I$ is verified too as an application.
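
    For context, the absolute value equation itself can be solved numerically by a simple fixed-point iteration when the norm of A⁻¹B is below one; this sufficient condition is not the interval-regularity condition of the paper, and the matrices below are illustrative only:

```python
import numpy as np

def solve_ave(A, B, b, x0=None, tol=1e-10, max_iter=500):
    """Fixed-point iteration x_{k+1} = A^{-1} (b - B|x_k|) for Ax + B|x| = b.
    Converges when ||A^{-1} B|| < 1; this is only a sufficient condition and is
    not the interval-regularity condition discussed in the paper."""
    x = np.zeros_like(b, dtype=float) if x0 is None else np.asarray(x0, float)
    for _ in range(max_iter):
        x_new = np.linalg.solve(A, b - B @ np.abs(x))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

A = np.array([[4.0, 1.0], [0.0, 3.0]])
B = np.array([[1.0, 0.0], [0.5, 0.5]])
b = np.array([5.0, 2.0])
x = solve_ave(A, B, b)
print(x, A @ x + B @ np.abs(x) - b)   # residual should be near zero
```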

  10. Absolute decay parametric instability of high-temperature plasma

    International Nuclear Information System (INIS)

    Zozulya, A.A.; Silin, V.P.; Tikhonchuk, V.T.

    1986-01-01

    A new absolute decay parametric instability having wide spatial localization region is shown to be possible near critical plasma density. Its excitation is conditioned by distributed feedback of counter-running Langmuir waves occurring during parametric decay of incident and reflected pumping wave components. In a hot plasma with the temperature of the order of kiloelectronvolt its threshold is lower than that of a known convective decay parametric instability. Minimum absolute instability threshold is shown to be realized under conditions of spatial parametric resonance of higher orders

  11. Absolute analytical prediction of photonic crystal guided mode resonance wavelengths

    DEFF Research Database (Denmark)

    Hermannsson, Pétur Gordon; Vannahme, Christoph; Smith, Cameron

    2014-01-01

    numerically with methods such as rigorous coupled wave analysis. Here it is demonstrated how the absolute resonance wavelengths of such structures can be predicted by analytically modeling them as slab waveguides in which the propagation constant is determined by a phase matching condition. The model...... is experimentally verified to be capable of predicting the absolute resonance wavelengths to an accuracy of within 0.75 nm, as well as resonance wavelength shifts due to changes in cladding index within an accuracy of 0.45 nm across the visible wavelength regime in the case where material dispersion is taken...

  12. The bolometric, infrared and visual absolute magnitudes of Mira variables

    International Nuclear Information System (INIS)

    Robertson, B.S.C.; Feast, M.W.

    1981-01-01

    Statistical parallaxes, as well as stars with individually known distances are used to derive bolometric and infrared absolute magnitudes of Mira (Me) variables. The derived bolometric magnitudes are in the mean about 0.75 mag fainter than recent estimates. The problem of determining the pulsation constant is discussed. Miras with periods greater than 150 days probably pulsate in the first overtone. Those of shorter periods are anomalous and may be fundamental pulsators. It is shown that the absolute visual magnitudes at mean light of Miras with individually determined distances are consistent with values derived by Clayton and Feast from statistical parallaxes. (author)

  13. Identifying Error in AUV Communication

    National Research Council Canada - National Science Library

    Coleman, Joseph; Merrill, Kaylani; O'Rourke, Michael; Rajala, Andrew G; Edwards, Dean B

    2006-01-01

    Mine Countermeasures (MCM) involving Autonomous Underwater Vehicles (AUVs) are especially susceptible to error, given the constraints on underwater acoustic communication and the inconstancy of the underwater communication channel...

  14. Human Errors in Decision Making

    OpenAIRE

    Mohamad, Shahriari; Aliandrina, Dessy; Feng, Yan

    2005-01-01

    The aim of this paper was to identify human errors in the decision-making process. The study focused on the research question: what could be the human error, as a potential cause of decision failure, in the evaluation of the alternatives in the decision-making process. Two case studies were selected from the literature and analyzed to find the human errors that contribute to decision failure. The analysis of human errors was then linked with mental models in the evaluation-of-alternatives step. The results o...

  15. Finding beam focus errors automatically

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.

    1987-01-01

    An automated method for finding beam focus errors using an optimization program called COMFORT-PLUS is described. The steps involved in finding the correction factors with COMFORT-PLUS have been used to find the beam focus errors for two damping rings at the SLAC Linear Collider. The program is intended as an off-line tool to analyze actual measured data for any SLC system. One limitation on the application of this procedure is that it depends on the magnitude of the machine errors. Another is that the program is not totally automated, since the user must decide a priori where to look for errors

  16. Heuristic errors in clinical reasoning.

    Science.gov (United States)

    Rylander, Melanie; Guerrasio, Jeannette

    2016-08-01

    Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed amongst third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.

  17. Effect of Gene and Physical Activity Interaction on Trunk Fat Percentage Among the Newfoundland Population

    Directory of Open Access Journals (Sweden)

    Anthony Payne

    2014-01-01

    Full Text Available Objective To explore the effect of FTO gene and physical activity interaction on trunk fat percentage. Design and Methods Subjects are 3,004 individuals from Newfoundland and Labrador whose trunk fat percentage and physical activity were recorded, and who were genotyped for 11 single-nucleotide polymorphisms (SNPs) in the FTO gene. Subjects were stratified by gender. Multiple tests and multiple regressions were used to analyze the effects of physical activity, variants of FTO, age, and their interactions on trunk fat percentage. Dietary information and other environmental factors were not considered. Results Higher levels of physical activity tend to reduce trunk fat percentage in all individuals. Furthermore, in males, rs9939609 and rs1421085 were significant (α = 0.05) in explaining central body fat, but no SNPs were significant in females. For highly active males, trunk fat percentage varied significantly between variants of rs9939609 and rs1421085, but there is no significant effect among individuals with low activity. The other SNPs examined were not significant in explaining trunk fat percentage. Conclusions Homozygous male carriers of non-obesity risk alleles at rs9939609 and rs1421085 will have significant reduction in central body fat from physical activity in contrast to homozygous males of the obesity-risk alleles. The additive effect of these SNPs is found in males with high physical activity only.

  18. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

    The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...

  19. Bounds on absolutely maximally entangled states from shadow inequalities, and the quantum MacWilliams identity

    Science.gov (United States)

    Huber, Felix; Eltschka, Christopher; Siewert, Jens; Gühne, Otfried

    2018-04-01

    A pure multipartite quantum state is called absolutely maximally entangled (AME), if all reductions obtained by tracing out at least half of its parties are maximally mixed. Maximal entanglement is then present across every bipartition. The existence of such states is in many cases unclear. With the help of the weight enumerator machinery known from quantum error correction and the shadow inequalities, we obtain new bounds on the existence of AME states in dimensions larger than two. To complete the treatment on the weight enumerator machinery, the quantum MacWilliams identity is derived in the Bloch representation. Finally, we consider AME states whose subsystems have different local dimensions, and present an example for a 2×3×3×3 system that shows maximal entanglement across every bipartition.

  20. Error studies for SNS Linac. Part 1: Transverse errors

    International Nuclear Information System (INIS)

    Crandall, K.R.

    1998-01-01

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll)

  1. Full-Field Calibration of Color Camera Chromatic Aberration using Absolute Phase Maps.

    Science.gov (United States)

    Liu, Xiaohong; Huang, Shujun; Zhang, Zonghua; Gao, Feng; Jiang, Xiangqian

    2017-05-06

    The refractive index of a lens varies for different wavelengths of light, and thus the same incident light with different wavelengths has different outgoing light. This characteristic of lenses causes images captured by a color camera to display chromatic aberration (CA), which seriously reduces image quality. Based on an analysis of the distribution of CA, a full-field calibration method based on absolute phase maps is proposed in this paper. Red, green, and blue closed sinusoidal fringe patterns are generated, consecutively displayed on an LCD (liquid crystal display), and captured by a color camera from the front viewpoint. The phase information of each color fringe is obtained using a four-step phase-shifting algorithm and optimum fringe number selection method. CA causes the unwrapped phase of the three channels to differ. These pixel deviations can be computed by comparing the unwrapped phase data of the red, blue, and green channels in polar coordinates. CA calibration is accomplished in Cartesian coordinates. The systematic errors introduced by the LCD are analyzed and corrected. Simulated results show the validity of the proposed method and experimental results demonstrate that the proposed full-field calibration method based on absolute phase maps will be useful for practical software-based CA calibration.
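
    The four-step phase-shifting step mentioned above recovers the wrapped phase from four intensity images taken at phase shifts of 0, π/2, π, and 3π/2. A minimal single-pixel sketch with synthetic fringe values (the offset, modulation, and true phase are made up):

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Four-step phase-shifting: intensities captured at phase shifts of
    0, pi/2, pi and 3*pi/2 give the wrapped phase via an arctangent."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic fringe at a single pixel.
true_phase = 1.2
shots = [50 + 40 * np.cos(true_phase + k * np.pi / 2) for k in range(4)]
print(wrapped_phase(*shots), true_phase)   # both ~1.2 rad
```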

  2. ACCESS, Absolute Color Calibration Experiment for Standard Stars: Integration, Test, and Ground Performance

    Science.gov (United States)

    Kaiser, Mary Elizabeth; Morris, Matthew; Aldoroty, Lauren; Kurucz, Robert; McCandliss, Stephan; Rauscher, Bernard; Kimble, Randy; Kruk, Jeffrey; Wright, Edward L.; Feldman, Paul; Riess, Adam; Gardner, Jonathon; Bohlin, Ralph; Deustua, Susana; Dixon, Van; Sahnow, David J.; Perlmutter, Saul

    2018-01-01

    Establishing improved spectrophotometric standards is important for a broad range of missions and is relevant to many astrophysical problems. Systematic errors associated with astrophysical data used to probe fundamental astrophysical questions, such as SNeIa observations used to constrain dark energy theories, now exceed the statistical errors associated with merged databases of these measurements. ACCESS, “Absolute Color Calibration Experiment for Standard Stars”, is a series of rocket-borne sub-orbital missions and ground-based experiments designed to enable improvements in the precision of the astrophysical flux scale through the transfer of absolute laboratory detector standards from the National Institute of Standards and Technology (NIST) to a network of stellar standards with a calibration accuracy of 1% and a spectral resolving power of 500 across the 0.35‑1.7μm bandpass. To achieve this goal, ACCESS (1) observes HST/Calspec stars (2) above the atmosphere to eliminate telluric spectral contaminants (e.g. OH) (3) using a single optical path and (HgCdTe) detector (4) that is calibrated to NIST laboratory standards and (5) monitored on the ground and in-flight using an on-board calibration monitor. The observations are (6) cross-checked and extended through the generation of stellar atmosphere models for the targets. The ACCESS telescope and spectrograph have been designed, fabricated, and integrated. Subsystems have been tested. Performance results for subsystems, operations testing, and the integrated spectrograph will be presented. NASA sounding rocket grant NNX17AC83G supports this work.

  3. Performance evaluations of continuous glucose monitoring systems: precision absolute relative deviation is part of the assessment.

    Science.gov (United States)

    Obermaier, Karin; Schmelzeisen-Redeker, Günther; Schoemaker, Michael; Klötzer, Hans-Martin; Kirchsteiger, Harald; Eikmeier, Heino; del Re, Luigi

    2013-07-01

    Even though a Clinical and Laboratory Standards Institute proposal exists on the design of studies and performance criteria for continuous glucose monitoring (CGM) systems, it has not yet led to a consistent evaluation of different systems, as no consensus has been reached on the reference method to evaluate them or on acceptance levels. As a consequence, performance assessment of CGM systems tends to be inconclusive, and a comparison of the outcome of different studies is difficult. Published information and available data (as presented in this issue of Journal of Diabetes Science and Technology by Freckmann and coauthors) are used to assess the suitability of several frequently used methods [International Organization for Standardization, continuous glucose error grid analysis, mean absolute relative deviation (MARD), precision absolute relative deviation (PARD)] when assessing performance of CGM systems in terms of accuracy and precision. The combined use of MARD and PARD seems to allow for better characterization of sensor performance. The use of different quantities for calibration and evaluation, e.g., capillary blood using a blood glucose (BG) meter versus venous blood using a laboratory measurement, introduces an additional error source. Using BG values measured in more or less large intervals as the only reference leads to a significant loss of information in comparison with the continuous sensor signal and possibly to an erroneous estimation of sensor performance during swings. Both can be improved using data from two identical CGM sensors worn by the same patient in parallel. Evaluation of CGM performance studies should follow an identical study design, including sufficient swings in glycemia. At least a part of the study participants should wear two identical CGM sensors in parallel. All data available should be used for evaluation, both by MARD and PARD, a good PARD value being a precondition to trust a good MARD value. Results should be analyzed and
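
    A minimal sketch of the two metrics discussed above: MARD compares CGM readings with reference BG values, while PARD compares two identical sensors worn in parallel. The relative-to-the-pair-mean form of PARD and the glucose values are assumptions for illustration:

```python
import numpy as np

def mard(cgm, reference):
    """Mean absolute relative deviation of CGM readings against reference BG values (in %)."""
    cgm, reference = np.asarray(cgm, float), np.asarray(reference, float)
    return 100.0 * np.mean(np.abs(cgm - reference) / reference)

def pard(sensor_a, sensor_b):
    """Precision absolute relative deviation between two identical sensors worn in
    parallel; here the difference is taken relative to the mean of each paired reading."""
    a, b = np.asarray(sensor_a, float), np.asarray(sensor_b, float)
    return 100.0 * np.mean(np.abs(a - b) / ((a + b) / 2.0))

bg   = [90, 120, 150, 180]          # reference blood glucose, mg/dL (illustrative)
cgm1 = [95, 115, 160, 170]
cgm2 = [92, 118, 155, 175]
print(f"MARD = {mard(cgm1, bg):.1f}%  PARD = {pard(cgm1, cgm2):.1f}%")
```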

  4. aCNViewer: Comprehensive genome-wide visualization of absolute copy number and copy neutral variations.

    Directory of Open Access Journals (Sweden)

    Victor Renault

    Full Text Available Copy number variations (CNV) include net gains or losses of part or whole chromosomal regions. They differ from copy neutral loss of heterozygosity (cn-LOH) events which do not induce any net change in the copy number and are often associated with uniparental disomy. These phenomena have long been reported to be associated with diseases and particularly in cancer. Losses/gains of genomic regions are often correlated with lower/higher gene expression. On the other hand, loss of heterozygosity (LOH) and cn-LOH are common events in cancer and may be associated with the loss of a functional tumor suppressor gene. Therefore, identifying recurrent CNV and cn-LOH events can be important as they may highlight common biological components and give insights into the development or mechanisms of a disease. However, no currently available tools allow a comprehensive whole-genome visualization of recurrent CNVs and cn-LOH in groups of samples providing absolute quantification of the aberrations leading to the loss of potentially important information. To overcome these limitations, we developed aCNViewer (Absolute CNV Viewer), a visualization tool for absolute CNVs and cn-LOH across a group of samples. aCNViewer proposes three graphical representations: dendrograms, bi-dimensional heatmaps showing chromosomal regions sharing similar abnormality patterns, and quantitative stacked histograms facilitating the identification of recurrent absolute CNVs and cn-LOH. We illustrated aCNViewer using publically available hepatocellular carcinomas (HCCs) Affymetrix SNP Array data (Fig 1A). Regions 1q and 8q present a similar percentage of total gains but significantly different copy number gain categories (p-value of 0.0103 with a Fisher exact test), validated by another cohort of HCCs (p-value of 5.6e-7) (Fig 2B). aCNViewer is implemented in python and R and is available with a GNU GPLv3 license on GitHub https://github.com/FJD-CEPH/aCNViewer and Docker https://hub.docker.com/r/fjdceph/acnviewer/. aCNViewer@cephb.fr.

  5. aCNViewer: Comprehensive genome-wide visualization of absolute copy number and copy neutral variations.

    Science.gov (United States)

    Renault, Victor; Tost, Jörg; Pichon, Fabien; Wang-Renault, Shu-Fang; Letouzé, Eric; Imbeaud, Sandrine; Zucman-Rossi, Jessica; Deleuze, Jean-François; How-Kit, Alexandre

    2017-01-01

    Copy number variations (CNV) include net gains or losses of part or whole chromosomal regions. They differ from copy neutral loss of heterozygosity (cn-LOH) events which do not induce any net change in the copy number and are often associated with uniparental disomy. These phenomena have long been reported to be associated with diseases and particularly in cancer. Losses/gains of genomic regions are often correlated with lower/higher gene expression. On the other hand, loss of heterozygosity (LOH) and cn-LOH are common events in cancer and may be associated with the loss of a functional tumor suppressor gene. Therefore, identifying recurrent CNV and cn-LOH events can be important as they may highlight common biological components and give insights into the development or mechanisms of a disease. However, no currently available tools allow a comprehensive whole-genome visualization of recurrent CNVs and cn-LOH in groups of samples providing absolute quantification of the aberrations leading to the loss of potentially important information. To overcome these limitations, we developed aCNViewer (Absolute CNV Viewer), a visualization tool for absolute CNVs and cn-LOH across a group of samples. aCNViewer proposes three graphical representations: dendrograms, bi-dimensional heatmaps showing chromosomal regions sharing similar abnormality patterns, and quantitative stacked histograms facilitating the identification of recurrent absolute CNVs and cn-LOH. We illustrated aCNViewer using publically available hepatocellular carcinomas (HCCs) Affymetrix SNP Array data (Fig 1A). Regions 1q and 8q present a similar percentage of total gains but significantly different copy number gain categories (p-value of 0.0103 with a Fisher exact test), validated by another cohort of HCCs (p-value of 5.6e-7) (Fig 2B). aCNViewer is implemented in python and R and is available with a GNU GPLv3 license on GitHub https://github.com/FJD-CEPH/aCNViewer and Docker https

  6. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research that has addressed error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  7. Measured and modelled absolute gravity changes in Greenland

    DEFF Research Database (Denmark)

    Nielsen, Jens Emil; Forsberg, René; Strykowski, Gabriel

    2014-01-01

    in Greenland. The result is compared with the initial measurements of absolute gravity (AG) change at selected Greenland Network (GNET) sites. We find that observations are highly influenced by the direct attraction from the ice and ocean. This is especially evident in the measurements conducted at the GNET

  8. Absolute luminosity measurements with the LHCb detector at the LHC

    CERN Document Server

    Aaij, R; Adinolfi, M; Adrover, C; Affolder, A; Ajaltouni, Z; Albrecht, J; Alessio, F; Alexander, M; Alkhazov, G; Alvarez Cartelle, P; Alves, A A; Amato, S; Amhis, Y; Anderson, J; Appleby, R B; Aquines Gutierrez, O; Archilli, F; Arrabito, L; Artamonov, A; Artuso, M; Aslanides, E; Auriemma, G; Bachmann, S; Back, J J; Bailey, D S; Balagura, V; Baldini, W; Barlow, R J; Barschel, C; Barsuk, S; Barter, W; Bates, A; Bauer, C; Bauer, Th; Bay, A; Bediaga, I; Belous, K; Belyaev, I; Ben-Haim, E; Benayoun, M; Bencivenni, G; Benson, S; Benton, J; Bernet, R; Bettler, M-O; van Beuzekom, M; Bien, A; Bifani, S; Bizzeti, A; Bjørnstad, P M; Blake, T; Blanc, F; Blanks, C; Blouw, J; Blusk, S; Bobrov, A; Bocci, V; Bondar, A; Bondar, N; Bonivento, W; Borghi, S; Borgia, A; Bowcock, T J V; Bozzi, C; Brambach, T; van den Brand, J; Bressieux, J; Brett, D; Brisbane, S; Britsch, M; Britton, T; Brook, N H; Brown, H; Büchler-Germann, A; Burducea, I; Bursche, A; Buytaert, J; Cadeddu, S; Caicedo Carvajal, J M; Callot, O; Calvi, M; Calvo Gomez, M; Camboni, A; Campana, P; Carbone, A; Carboni, G; Cardinale, R; Cardini, A; Carson, L; Carvalho Akiba, K; Casse, G; Cattaneo, M; Charles, M; Charpentier, Ph; Chiapolini, N; Ciba, K; Cid Vidal, X; Ciezarek, G; Clarke, P E L; Clemencic, M; Cliff, H V; Closier, J; Coca, C; Coco, V; Cogan, J; Collins, P; Constantin, F; Conti, G; Contu, A; Cook, A; Coombes, M; Corti, G; Cowan, G A; Currie, R; D'Almagne, B; D'Ambrosio, C; David, P; De Bonis, I; De Capua, S; De Cian, M; De Lorenzi, F; De Miranda, J M; De Paula, L; De Simone, P; Decamp, D; Deckenhoff, M; Degaudenzi, H; Deissenroth, M; Del Buono, L; Deplano, C; Deschamps, O; Dettori, F; Dickens, J; Dijkstra, H; Diniz Batista, P; Donleavy, S; Dordei, F; Dosil Suárez, A; Dossett, D; Dovbnya, A; Dupertuis, F; Dzhelyadin, R; Eames, C; Easo, S; Egede, U; Egorychev, V; Eidelman, S; van Eijk, D; Eisele, F; Eisenhardt, S; Ekelhof, R; Eklund, L; Elsasser, Ch; d'Enterria, D G; Esperante Pereira, D; Estève, L; Falabella, A; Fanchini, E; Färber, C; Fardell, G; Farinelli, C; Farry, S; Fave, V; Fernandez Albor, V; Ferro-Luzzi, M; Filippov, S; Fitzpatrick, C; Fontana, M; Fontanelli, F; Forty, R; Frank, M; Frei, C; Frosini, M; Furcas, S; Gallas Torreira, A; Galli, D; Gandelman, M; Gandini, P; Gao, Y; Garnier, J-C; Garofoli, J; Garra Tico, J; Garrido, L; Gaspar, C; Gauvin, N; Gersabeck, M; Gershon, T; Ghez, Ph; Gibson, V; Gligorov, V V; Göbel, C; Golubkov, D; Golutvin, A; Gomes, A; Gordon, H; Grabalosa Gándara, M; Graciani Diaz, R; Granado Cardoso, L A; Graugés, E; Graziani, G; Grecu, A; Gregson, S; Gui, B; Gushchin, E; Guz, Yu; Gys, T; Haefeli, G; Haen, C; Haines, S C; Hampson, T; Hansmann-Menzemer, S; Harji, R; Harnew, N; Harrison, J; Harrison, P F; He, J; Heijne, V; Hennessy, K; Henrard, P; Hernando Morata, J A; van Herwijnen, E; Hicks, E; Hofmann, W; Holubyev, K; Hopchev, P; Hulsbergen, W; Hunt, P; Huse, T; Huston, R S; Hutchcroft, D; Hynds, D; Iakovenko, V; Ilten, P; Imong, J; Jacobsson, R; Jaeger, A; Jahjah Hussein, M; Jans, E; Jansen, F; Jaton, P; Jean-Marie, B; Jing, F; John, M; Johnson, D; Jones, C R; Jost, B; Kandybei, S; Karacson, M; Karbach, T M; Keaveney, J; Kerzel, U; Ketel, T; Keune, A; Khanji, B; Kim, Y M; Knecht, M; Koblitz, S; Koppenburg, P; Kozlinskiy, A; Kravchuk, L; Kreplin, K; Kreps, M; Krocker, G; Krokovny, P; Kruse, F; Kruzelecki, K; Kucharczyk, M; Kukulak, S; Kumar, R; Kvaratskheliya, T; La Thi, V N; Lacarrere, D; Lafferty, G; Lai, A; Lambert, D; Lambert, R W; Lanciotti, E; Lanfranchi, G; Langenbruch, C; Latham, T; Le Gac, R; 
van Leerdam, J; Lees, J-P; Lefèvre, R; Leflat, A; Lefrançois, J; Leroy, O; Lesiak, T; Li, L; Li Gioi, L; Lieng, M; Liles, M; Lindner, R; Linn, C; Liu, B; Liu, G; Lopes, J H; Lopez Asamar, E; Lopez-March, N; Luisier, J; Machefert, F; Machikhiliyan, I V; Maciuc, F; Maev, O; Magnin, J; Malde, S; Mamunur, R M D; Manca, G; Mancinelli, G; Mangiafave, N; Marconi, U; Märki, R; Marks, J; Martellotti, G; Martens, A; Martin, L; Martín Sánchez, A; Martinez Santos, D; Massafferri, A; Matev, R; Mathe, Z; Matteuzzi, C; Matveev, M; Maurice, E; Maynard, B; Mazurov, A; McGregor, G; McNulty, R; Mclean, C; Meissner, M; Merk, M; Merkel, J; Messi, R; Miglioranzi, S; Milanes, D A; Minard, M-N; Monteil, S; Moran, D; Morawski, P; Mountain, R; Mous, I; Muheim, F; Müller, K; Muresan, R; Muryn, B; Musy, M; Mylroie-Smith, J; Naik, P; Nakada, T; Nandakumar, R; Nardulli, J; Nasteva, I; Nedos, M; Needham, M; Neufeld, N; Nguyen-Mau, C; Nicol, M; Nies, S; Niess, V; Nikitin, N; Oblakowska-Mucha, A; Obraztsov, V; Oggero, S; Ogilvy, S; Okhrimenko, O; Oldeman, R; Orlandea, M; Otalora Goicochea, J M; Owen, P; Pal, B; Palacios, J; Palutan, M; Panman, J; Papanestis, A; Pappagallo, M; Parkes, C; Parkinson, C J; Passaleva, G; Patel, G D; Patel, M; Paterson, S K; Patrick, G N; Patrignani, C; Pavel-Nicorescu, C; Pazos Alvarez, A; Pellegrino, A; Penso, G; Pepe Altarelli, M; Perazzini, S; Perego, D L; Perez Trigo, E; Pérez-Calero Yzquierdo, A; Perret, P; Perrin-Terrin, M; Pessina, G; Petrella, A; Petrolini, A; Pie Valls, B; Pietrzyk, B; Pilar, T; Pinci, D; Plackett, R; Playfer, S; Plo Casasus, M; Polok, G; Poluektov, A; Polycarpo, E; Popov, D; Popovici, B; Potterat, C; Powell, A; du Pree, T; Prisciandaro, J; Pugatch, V; Puig Navarro, A; Qian, W; Rademacker, J H; Rakotomiaramanana, B; Rangel, M S; Raniuk, I; Raven, G; Redford, S; Reid, M M; dos Reis, A C; Ricciardi, S; Rinnert, K; Roa Romero, D A; Robbe, P; Rodrigues, E; Rodrigues, F; Rodriguez Perez, P; Rogers, G J; Roiser, S; Romanovsky, V; Rouvinet, J; Ruf, T; Ruiz, H; Sabatino, G; Saborido Silva, J J; Sagidova, N; Sail, P; Saitta, B; Salzmann, C; Sannino, M; Santacesaria, R; Santamarina Rios, C; Santinelli, R; Santovetti, E; Sapunov, M; Sarti, A; Satriano, C; Satta, A; Savrie, M; Savrina, D; Schaack, P; Schiller, M; Schleich, S; Schmelling, M; Schmidt, B; Schneider, O; Schopper, A; Schune, M -H; Schwemmer, R; Sciubba, A; Seco, M; Semennikov, A; Senderowska, K; Sepp, I; Serra, N; Serrano, J; Seyfert, P; Shao, B; Shapkin, M; Shapoval, I; Shatalov, P; Shcheglov, Y; Shears, T; Shekhtman, L; Shevchenko, O; Shevchenko, V; Shires, A; Silva Coutinho, R; Skottowe, H P; Skwarnicki, T; Smith, A C; Smith, N A; Sobczak, K; Soler, F J P; Solomin, A; Soomro, F; Souza De Paula, B; Spaan, B; Sparkes, A; Spradlin, P; Stagni, F; Stahl, S; Steinkamp, O; Stoica, S; Stone, S; Storaci, B; Straticiuc, M; Straumann, U; Styles, N; Subbiah, V K; Swientek, S; Szczekowski, M; Szczypka, P; Szumlak, T; T'Jampens, S; Teodorescu, E; Teubert, F; Thomas, C; Thomas, E; van Tilburg, J; Tisserand, V; Tobin, M; Topp-Joergensen, S; Tran, M T; Tsaregorodtsev, A; Tuning, N; Ubeda Garcia, M; Ukleja, A; Urquijo, P; Uwer, U; Vagnoni, V; Valenti, G; Vazquez Gomez, R; Vazquez Regueiro, P; Vecchi, S; Velthuis, J J; Veltri, M; Vervink, K; Viaud, B; Videau, I; Vilasis-Cardona, X; Visniakov, J; Vollhardt, A; Voong, D; Vorobyev, A; Voss, H; Wacker, K; Wandernoth, S; Wang, J; Ward, D R; Webber, A D; Websdale, D; Whitehead, M; Wiedner, D; Wiggers, L; Wilkinson, G; Williams, M P; Williams, M; Wilson, F F; Wishahi, J; Witek, M; 
Witzeling, W; Wotton, S A; Wyllie, K; Xie, Y; Xing, F; Yang, Z; Young, R; Yushchenko, O; Zavertyaev, M; Zhang, F; Zhang, L; Zhang, W C; Zhang, Y; Zhelezov, A; Zhong, L; Zverev, E; Zvyagin, A

    2012-01-01

    Absolute luminosity measurements are of general interest for colliding-beam experiments at storage rings. These measurements are necessary to determine the absolute cross-sections of reaction processes and are valuable to quantify the performance of the accelerator. LHCb has applied two methods to determine the absolute scale of its luminosity measurements for proton-proton collisions at the LHC with a centre-of-mass energy of 7 TeV. In addition to the classic "van der Meer scan" method a novel technique has been developed which makes use of direct imaging of the individual beams using beam-gas and beam-beam interactions. This beam imaging method is made possible by the high resolution of the LHCb vertex detector and the close proximity of the detector to the beams, and allows beam parameters such as positions, angles and widths to be determined. The results of the two methods have comparable precision and are in good agreement. Combining the two methods, an overall precision of 3.5% in the absolute lumi...

  9. Lyman alpha SMM/UVSP absolute calibration and geocoronal correction

    Science.gov (United States)

    Fontenla, Juan M.; Reichmann, Edwin J.

    1987-01-01

    Lyman alpha observations from the Ultraviolet Spectrometer Polarimeter (UVSP) instrument of the Solar Maximum Mission (SMM) spacecraft were analyzed and provide instrumental calibration details. Specific values of the instrument quantum efficiency, Lyman alpha absolute intensity, and correction for geocoronal absorption are presented.

  10. Absolute configurations of zingiberenols isolated from ginger (Zingiber officinale) rhizomes

    Science.gov (United States)

    The sesquiterpene alcohol zingiberenol, or 1,10-bisaboladien-3-ol, was isolated some time ago from ginger, Zingiber officinale, rhizomes, but its absolute configuration had not been determined. With three chiral centers present in the molecule, zingiberenol can exist in eight stereoisomeric forms. ...

  11. Fabricating the absolute fake: America in contemporary pop culture

    NARCIS (Netherlands)

    Kooijman, J.

    2008-01-01

    Our world is dominated by American pop culture. Fabricating the Absolute Fake examines the dynamics of Americanization through contemporary films, television programs, and pop stars that reflect on the question of what it means to be American in a global pop culture.

  12. Confirmation of the absolute configuration of (−)-aurantioclavine

    KAUST Repository

    Behenna, Douglas C.

    2011-04-01

    We confirm our previous assignment of the absolute configuration of (-)-aurantioclavine as 7R by crystallographically characterizing an advanced 3-bromoindole intermediate reported in our previous synthesis. This analysis also provides additional support for our model of enantioinduction in the palladium(II)-catalyzed oxidative kinetic resolution of secondary alcohols. © 2010 Elsevier Ltd. All rights reserved.

  13. Multipliers for the Absolute Euler Summability of Fourier Series

    Indian Academy of Sciences (India)

    In this paper, the author has investigated necessary and sufficient conditions for the absolute Euler summability of the Fourier series with multipliers. These conditions are weaker than those obtained earlier by some workers. It is further shown that the multipliers are best possible in certain sense.

  14. The Absolute and the Relative Dimensions of Constitutional Rights

    Czech Academy of Sciences Publication Activity Database

    Alexy, Robert

    2017-01-01

    Vol. 37, No. 1 (2017), pp. 31-47. ISSN 0143-6503. Keywords: constitutional rights * judicial review * proportionality. Subject RIV: AG - Legal Sciences. OECD field: Law. Impact factor: 1.242, year: 2016. https://academic.oup.com/ojls/article/37/1/31/2669583/The-Absolute-and-the-Relative-Dimensions-of

  15. Europe's Other Poverty Measures: Absolute Thresholds Underlying Social Assistance

    Science.gov (United States)

    Bavier, Richard

    2009-01-01

    The first thing many learn about international poverty measurement is that European nations apply a "relative" poverty threshold and that they also do a better job of reducing poverty. Unlike the European model, the "absolute" U.S. poverty threshold does not increase in real value when the nation's standard of living rises,…

  16. Absolute configuration of some dinorlabdanes from the copaiba oil

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Adriano L.; Baptistela, Lucia H.B.; Imamura, Paulo M. [Universidade Estadual de Campinas (UNICAMP), SP (Brazil). Inst. de Quimica], e-mail: imam@iqm.unicamp.br

    2009-07-01

    A novel ent-dinorlabdane ({iota})-13(R)-14,15-dinorlabd-8(17)-ene-3,13-diol was isolated from commercial copaiba oil along with two known dinorlabdanes. The absolute configuration of these dinorditerpenes was established for the first time through synthesis starting from known ({iota})-3-hydroxycopalic acid, which was also isolated from the same oleoresin. (author)

  17. Absolute Risk Aversion and the Returns to Education.

    Science.gov (United States)

    Brunello, Giorgio

    2002-01-01

    Uses 1995 Italian household income and wealth survey to measure individual absolute risk aversion of 1,583 married Italian male household heads. Uses this measure as an instrument for attained education in a standard-log earnings equation. Finds that the IV estimate of the marginal return to schooling is much higher than the ordinary least squares…

  18. Absolute differential yield of parametric x-ray radiation

    International Nuclear Information System (INIS)

    Shchagin, A.V.; Pristupa, V.I.; Khizhnyak, N.A.

    1993-01-01

    The results of measurements of the absolute differential yield of parametric X-ray radiation (PXR) in a thin single crystal are presented for the first time. It has been established that the experimental results are in good agreement with theoretical calculations according to kinematical theory. The influence of the density effect on PXR properties is discussed. (author). 19 refs., 7 figs

  19. Absolute measurements of chlorine Cl+ cation single photoionization cross section

    NARCIS (Netherlands)

    Hernandez, E. M.; Juarez, A. M.; Kilcoyne, A. L. D.; Aguilar, A.; Hernandez, L.; Antillon, A.; Macaluso, D.; Morales-Mori, A.; Gonzalez-Magana, O.; Hanstorp, D.; Covington, A. M.; Davis, V.; Calabrese, D.; Hinojosa, G.

    The photoionization of Cl+ leading to Cl2+ was measured in the photon energy range of 19.5-28.0 eV. A spectrum with a photon energy resolution of 15 meV normalized to absolute cross-section measurements is presented. The measurements were carried out by merging a Cl+ ion beam with a photon beam of

  20. Comments on the theory of absolute and convective instabilities

    International Nuclear Information System (INIS)

    Oscarsson, T.E.; Roennmark, K.

    1986-10-01

    The theory of absolute and convective instabilities is discussed and we argue that the basis of the theory is questionable, since it describes the linear development of instabilities by their behaviour in the time asymptotic limit. In order to make sensible predictions on the linear development of instabilities, the problem should be studied on the finite time scale implied by the linear approximation. (authors)

  1. Global Absolute Poverty: Behind the Veil of Dollars

    NARCIS (Netherlands)

    Moatsos, M.

    2017-01-01

    The widely applied “dollar-a-day” methodology identifies global absolute poverty as declining precipitously since the early 80’s throughout the developing world. The methodological underpinnings of the “dollar-a-day” approach have been questioned in terms of adequately representing equivalent

  2. Global Absolute Poverty: Behind the Veil of Dollars

    NARCIS (Netherlands)

    Moatsos, M.

    2015-01-01

    The global absolute poverty rates of the World Bank demonstrate a continued decline of poverty in developing countries between 1983 and 2012. However, the methodology applied to derive these results has received extensive criticism by scholars for requiring the application of PPP exchange rates and

  3. Population-based absolute risk estimation with survey data

    Science.gov (United States)

    Kovalchik, Stephanie A.; Pfeiffer, Ruth M.

    2013-01-01

    Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614
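
    A minimal sketch of a piecewise-exponential absolute-risk calculation with one competing event, in the spirit of the model described above (the hazards, age bands, and relative risk are invented; the survey weighting and variance estimation of the paper are not reproduced):

```python
import numpy as np

def absolute_risk(breaks, hazard_cause, hazard_competing, rel_risk=1.0):
    """Absolute risk of the cause-specific event over the full follow-up, with
    piecewise-constant baseline hazards on the intervals defined by `breaks` and an
    individual relative risk multiplying the cause-specific hazard."""
    risk, cum_total = 0.0, 0.0
    for i in range(len(breaks) - 1):
        dt = breaks[i + 1] - breaks[i]
        h1 = rel_risk * hazard_cause[i]          # cause of interest
        h2 = hazard_competing[i]                 # competing event
        h_tot = h1 + h2
        # Probability of the event of interest occurring in this interval,
        # conditional on being event-free at its start.
        risk += np.exp(-cum_total) * (h1 / h_tot) * (1.0 - np.exp(-h_tot * dt))
        cum_total += h_tot * dt
    return risk

# Hypothetical annual hazards on three age bands (per person-year).
breaks = [0, 5, 10, 15]
h_cause = [0.002, 0.004, 0.008]
h_comp  = [0.005, 0.010, 0.020]
print(absolute_risk(breaks, h_cause, h_comp, rel_risk=1.5))
```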

  4. Rational functions with maximal radius of absolute monotonicity

    KAUST Repository

    Loczi, Lajos; Ketcheson, David I.

    2014-01-01

    -Kutta methods for initial value problems and the radius of absolute monotonicity governs the numerical preservation of properties like positivity and maximum-norm contractivity. We construct a function with p=2 and R>2s, disproving a conjecture of van de Griend

  5. Absolute parametric instability in a nonuniform plane plasma ...

    Indian Academy of Sciences (India)

    The paper reports an analysis of the effect of spatial plasma nonuniformity on absolute parametric instability (API) of electrostatic waves in magnetized plane waveguides subjected to an intense high-frequency (HF) electric field using the separation method. In this case the effect of strong static magnetic field is ...

  6. Absolute parametric instability in a nonuniform plane plasma

    Indian Academy of Sciences (India)

    The paper reports an analysis of the effect of spatial plasma nonuniformity on absolute parametric instability (API) of electrostatic waves in magnetized plane waveguides subjected to an intense high-frequency (HF) electric field using the separation method. In this case the effect of strong static magnetic field is considered.

  7. Absolute configuration and antiprotozoal activity of minquartynoic acid

    DEFF Research Database (Denmark)

    Rasmussen, H B; Christensen, Søren Brøgger; Kvist, L P

    2000-01-01

    Minquartynoic acid (1) was isolated as an antimalarial and antileishmanial constituent of the Peruvian tree Minquartia guianensis and its absolute configuration at C-17 established to be (+)-S through conversion to the known (+)-(S)-17-hydroxystearic acid (2) and confirmed using Mosher's method....

  8. Relative versus Absolute Stimulus Control in the Temporal Bisection Task

    Science.gov (United States)

    de Carvalho, Marilia Pinhiero; Machado, Armando

    2012-01-01

    When subjects learn to associate two sample durations with two comparison keys, do they learn to associate the keys with the short and long samples (relational hypothesis), or with the specific sample durations (absolute hypothesis)? We exposed 16 pigeons to an ABA design in which phases A and B corresponded to tasks using samples of 1 s and 4 s,…

  9. Absolute intensity calibration for ECE measurements on EAST

    International Nuclear Information System (INIS)

    Liu Yong; Liu Xiang; Zhao Hailin

    2014-01-01

    In this proceeding, the results of the in-situ absolute intensity calibration for ECE measurements on EAST are presented. A 32-channel heterodyne radiometer system and a Michelson interferometer on EAST have been calibrated independently, and preliminary results from plasma operation indicate a good agreement between the electron temperature profiles obtained with different systems. (author)

  10. Absolute dissipative drift-wave instabilities in tokamaks

    International Nuclear Information System (INIS)

    Chen, L.; Chance, M.S.; Cheng, C.Z.

    1979-07-01

    Contrary to previous theoretical predictions, it is shown that the dissipative drift-wave instabilities are absolute in tokamak plasmas. The existence of unstable eigenmodes is shown to be associated with a new eigenmode branch induced by the finite toroidal couplings

  11. On quantum harmonic oscillator being subjected to absolute ...

    Indian Academy of Sciences (India)

    On quantum harmonic oscillator being subjected to absolute potential state. SWAMI NITYAYOGANANDA. Ramakrishna Mission Ashrama, R.K. Beach, Visakhapatnam 530 003, India. E-mail: nityayogananda@gmail.com. MS received 1 May 2015; accepted 6 May 2016; published online 3 December 2016. Abstract.

  12. Dual Processing and Diagnostic Errors

    Science.gov (United States)

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  13. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals, affiliated with Hamadan University of Medical Sciences, in Hamedan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), age of 50-40 years (67.6%), less-experienced personnel (58.7%), educational level of MSc (87.5%), and staff of radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component for patient safety enhancement.

  14. A theory of human error

    Science.gov (United States)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  15. Correcting AUC for Measurement Error.

    Science.gov (United States)

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
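
    As a concrete illustration of the problem this record addresses, the short simulation below estimates the empirical AUC (the Mann-Whitney statistic) for a biomarker with and without classical additive measurement error. It is only a hedged sketch of the attenuation effect that a correction method has to undo, not the authors' proposed estimator; the distributions and noise levels are assumptions chosen for the example.

```python
import numpy as np

def auc_mann_whitney(cases, controls):
    """Empirical AUC: probability that a randomly chosen case scores higher
    than a randomly chosen control (ties counted as one half)."""
    diff = cases[:, None] - controls[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

rng = np.random.default_rng(1)
n = 2000
controls = rng.normal(0.0, 1.0, n)   # true biomarker in non-diseased subjects
cases = rng.normal(1.0, 1.0, n)      # true biomarker in diseased subjects
print("true AUC:", round(auc_mann_whitney(cases, controls), 3))

# add classical measurement error and watch the AUC attenuate toward 0.5
for sd in (0.5, 1.0, 2.0):
    noisy_cases = cases + rng.normal(0.0, sd, n)
    noisy_controls = controls + rng.normal(0.0, sd, n)
    print(f"measured AUC (error sd = {sd}):",
          round(auc_mann_whitney(noisy_cases, noisy_controls), 3))
```

    As the error variance grows, the measured AUC drifts toward 0.5; this is the bias that a measurement-error correction has to undo.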

  16. Cognitive aspect of diagnostic errors.

    Science.gov (United States)

    Phua, Dong Haur; Tan, Nigel C K

    2013-01-01

    Diagnostic errors can result in tangible harm to patients. Despite our advances in medicine, the mental processes required to make a diagnosis exhibit shortcomings, causing diagnostic errors. Cognitive factors are found to be an important cause of diagnostic errors. With new understanding from psychology and social sciences, clinical medicine is now beginning to appreciate that our clinical reasoning can take the form of analytical reasoning or heuristics. Different factors like cognitive biases and affective influences can also impel unwary clinicians to make diagnostic errors. Various strategies have been proposed to reduce the effect of cognitive biases and affective influences when clinicians make diagnoses; however, evidence for the efficacy of these methods is still sparse. This paper aims to introduce the reader to the cognitive aspect of diagnostic errors, in the hope that clinicians can use this knowledge to improve diagnostic accuracy and patient outcomes.

  17. Absolute Gravity Datum in the Age of Cold Atom Gravimeters

    Science.gov (United States)

    Childers, V. A.; Eckl, M. C.

    2014-12-01

    The international gravity datum is defined today by the International Gravity Standardization Net of 1971 (IGSN-71). The data supporting this network were measured in the 1950s and 60s using pendulum and spring-based gravimeter ties (plus some new ballistic absolute meters) to replace the prior protocol of referencing all gravity values to the earlier Potsdam value. Since this time, gravimeter technology has advanced significantly with the development and refinement of the FG-5 (the current standard of the industry) and again with the soon-to-be-available cold atom interferometric absolute gravimeters. This latest development is anticipated to provide an improvement of roughly two orders of magnitude compared to the measurement accuracy of the technology utilized to develop IGSN-71. In this presentation, we will explore how the IGSN-71 might best be "modernized" given today's requirements and available instruments and resources. The National Geodetic Survey (NGS), along with other relevant US Government agencies, is concerned with establishing gravity control to establish and maintain high-order geodetic networks as part of the nation's essential infrastructure. The need to modernize the nation's geodetic infrastructure was highlighted in "Precise Geodetic Infrastructure, National Requirements for a Shared Resource" (National Academy of Sciences, 2010). The NGS mission, as dictated by Congress, is to establish and maintain the National Spatial Reference System, which includes gravity measurements. Absolute gravimeters measure the total gravity field directly and do not involve ties to other measurements. Periodic "intercomparisons" of multiple absolute gravimeters at reference gravity sites are used to constrain the behavior of the instruments to ensure that each would yield reasonably similar measurements of the same location (i.e. yield a sufficiently consistent datum when measured in disparate locales). New atomic interferometric gravimeters promise a significant

  18. Studies on the Effect of Type and Solarization Period on Germination Percentage of Four Weed Species

    Directory of Open Access Journals (Sweden)

    J. Rostam

    2011-01-01

    Full Text Available Abstract In order to study the effects of soil solarization on weed control, an experiment with a factorial arrangement in a randomized complete block design with four replications was conducted in a fallow farm in Daregaz in 2008. Factors included solarization duration (0, 2, 4 and 6 weeks) and soil moisture content (dry and moist). The soil seed bank was sampled (at two depths, 0-10 and 10-20 cm) prior to the experiment and immediately after applying the treatments, and the germination percentage of weed species was determined. Results of this study showed that seed germination percentage at the 10 cm soil depth was influenced by soil moisture, solarization and their interactions, while at the 20 cm soil depth only the solarization period affected weed seed germination. Germination percentage in moist soil was less than that in dry soil. Seed germination percentage declined further with increasing solarization duration, so that the greatest decline was obtained after 6 weeks of solarization. Solarization decreased germination percentage in moist soil more than in dry soil. Overall, the results of this experiment indicated that solarization of moist soil for 6 weeks was the most effective treatment in controlling common lambsquarters (Chenopodium album), common purslane (Portulaca oleracea), redroot pigweed (Amaranthus retroflexus), and wild mustard (Sinapis arvensis), while solarization of dry soil for 2 weeks was the least effective treatment for weed control. Keywords: Solarization, Soil moisture, Seed bank

  19. The percentage of macrophage numbers in rat model of sciatic nerve crush injury

    Directory of Open Access Journals (Sweden)

    Satrio Wicaksono

    2016-02-01

    Full Text Available ABSTRACT Excessive accumulation of macrophages in sciatic nerve fascicles inhibits regeneration of peripheral nerves. The aim of this study is to determine the percentage of macrophages inside and outside the fascicles at the proximal segment, at the site of injury and at the distal segment in a rat model of sciatic nerve crush injury. Thirty male 3-month-old Wistar rats weighing 200-230 g were divided into a sham-operation group and a crush injury group. Termination was performed on days 3, 7, and 14 after crush injury. Immunohistochemical examination was done using an anti-CD68 antibody. Counting of immunopositive and immunonegative cells was done on three representative fields for the extrafascicular and intrafascicular areas of the proximal, injury and distal segments. The data were presented as the percentage of immunopositive cells. The percentage of macrophages was significantly increased in the crush injury group compared to the sham-operated group in all segments of the peripheral nerves. While the percentage of macrophages outside the fascicles in all segments of the sciatic nerve, and within the fascicles in the proximal segment, reached its peak on day 3, the percentage of macrophages within the fascicles at the site of injury and in the distal segment reached its peak later, at day 7. In conclusion, accumulation of macrophages outside the nerve fascicles occurs at the beginning of the injury and is then followed by the accumulation of macrophages within the nerve fascicles.

  20. Quantitative Analysis of the Effect of Iterative Reconstruction Using a Phantom: Determining the Appropriate Blending Percentage

    Science.gov (United States)

    Kim, Hyun Gi; Lee, Young Han; Choi, Jin-Young; Park, Mi-Suk; Kim, Myeong-Jin; Kim, Ki Whang

    2015-01-01

    Purpose To investigate the optimal blending percentage of adaptive statistical iterative reconstruction (ASIR) at a reduced radiation dose while preserving a degree of image quality and texture similar to that of standard-dose computed tomography (CT). Materials and Methods The CT performance phantom was scanned with standard and dose-reduction protocols, including reduced mAs or kVp. Image quality parameters including noise, spatial resolution, and low-contrast resolution, as well as image texture, were quantitatively evaluated after applying various blending percentages of ASIR. The optimal blending percentage of ASIR that preserved image quality and texture compared to standard-dose CT was investigated for each radiation dose reduction protocol. Results As the percentage of ASIR increased, noise and spatial resolution decreased, whereas low-contrast resolution increased. In the texture analysis, an increasing percentage of ASIR resulted in an increase of angular second moment, inverse difference moment, and correlation, and in a decrease of contrast and entropy. The 20% and 40% dose reduction protocols with 20% and 40% ASIR blending, respectively, resulted in an optimal quality of images with preservation of the image texture. Conclusion Blending 40% ASIR with the 40% reduced tube-current product can maximize radiation dose reduction and preserve adequate image quality and texture. PMID:25510772
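
    The texture measures named in this abstract (angular second moment, inverse difference moment, correlation, contrast, entropy) are standard grey-level co-occurrence matrix (GLCM) features. The sketch below shows one way such features could be computed for a region of interest; it relies on scikit-image's graycomatrix/graycoprops (recent versions) and synthetic data, and is not the authors' analysis pipeline.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def texture_features(image, levels=64):
    """GLCM features of the kind listed in the abstract; 'homogeneity' is used
    as a stand-in for the inverse difference moment, and entropy is computed
    directly from the averaged co-occurrence matrix."""
    # quantize the image to `levels` grey levels
    bins = np.linspace(image.min(), image.max(), levels)
    quantized = (np.digitize(image, bins) - 1).astype(np.uint8)
    glcm = graycomatrix(quantized, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    p = glcm.mean(axis=(2, 3))                      # average over offsets
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return {
        "ASM": float(graycoprops(glcm, "ASM").mean()),
        "contrast": float(graycoprops(glcm, "contrast").mean()),
        "correlation": float(graycoprops(glcm, "correlation").mean()),
        "inv. diff. moment": float(graycoprops(glcm, "homogeneity").mean()),
        "entropy": float(entropy),
    }

# synthetic stand-ins for ROIs from a standard-dose image and a smoother blend
rng = np.random.default_rng(5)
standard = rng.normal(100.0, 20.0, (64, 64))
blended = 0.6 * standard + 0.4 * rng.normal(100.0, 8.0, (64, 64))
for name, roi in (("standard", standard), ("blended", blended)):
    print(name, {k: round(v, 3) for k, v in texture_features(roi).items()})
```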

  1. Error and objectivity: cognitive illusions and qualitative research.

    Science.gov (United States)

    Paley, John

    2005-07-01

    Psychological research has shown that cognitive illusions, of which visual illusions are just a special case, are systematic and pervasive, raising epistemological questions about how error in all forms of research can be identified and eliminated. The quantitative sciences make use of statistical techniques for this purpose, but it is not clear what the qualitative equivalent is, particularly in view of widespread scepticism about validity and objectivity. I argue that, in the light of cognitive psychology, the 'error question' cannot be dismissed as a positivist obsession, and that the concepts of truth and objectivity are unavoidable. However, they constitute only a 'minimal realism', which does not necessarily bring a commitment to 'absolute' truth, certainty, correspondence, causation, reductionism, or universal laws in its wake. The assumption that it does reflects a misreading of positivism and, ironically, precipitates a 'crisis of legitimation and representation', as described by constructivist authors.

  2. Evaluation of electron mobility in InSb quantum wells by means of percentage-impact

    International Nuclear Information System (INIS)

    Mishima, T. D.; Edirisooriya, M.; Santos, M. B.

    2014-01-01

    In order to quantitatively analyze the contribution of each scattering factor toward the total carrier mobility, we use a new, convenient figure of merit termed the percentage impact. The mobility limit due to a scattering factor, which is widely used to summarize a scattering analysis, has its own advantages; however, a mobility limit alone is not well suited to this purpose. A comprehensive understanding of how the many scattering factors differ in their contribution to the total carrier mobility can be obtained by evaluating their percentage impacts, which can be calculated straightforwardly from the mobility limits and the total mobility. Our percentage impact analysis shows that threading dislocations are one of the dominant scattering factors for electron transport in InSb quantum wells at room temperature
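
    The abstract does not give the formula, but one plausible reading, sketched below, is that the percentage impact of a scattering mechanism is its share of the total scattering rate obtained from Matthiessen's rule, so that the impacts of all mechanisms sum to 100%. The mechanism names and mobility values are illustrative assumptions, not data from the paper.

```python
# Hypothetical mobility limits (cm^2/Vs) for individual scattering mechanisms
# in an InSb quantum well; the numbers are placeholders for illustration only.
mobility_limits = {
    "threading dislocations":     6.0e4,
    "interface roughness":        2.0e5,
    "polar optical phonons":      8.0e4,
    "remote ionized impurities":  3.0e5,
}

# Matthiessen's rule: scattering rates (1/mu) add, so mobilities add reciprocally.
total_mobility = 1.0 / sum(1.0 / mu for mu in mobility_limits.values())

# Assumed definition: percentage impact_i = (total mobility / mobility limit_i) * 100,
# i.e. the fraction of the combined scattering rate contributed by mechanism i.
for name, mu in mobility_limits.items():
    impact = 100.0 * total_mobility / mu
    print(f"{name:27s} limit = {mu:9.2e} cm^2/Vs   impact = {impact:5.1f} %")
print(f"total mobility = {total_mobility:.2e} cm^2/Vs")
```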

  3. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    Science.gov (United States)

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  4. Systematic errors of EIT systems determined by easily-scalable resistive phantoms

    International Nuclear Information System (INIS)

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-01-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design

  5. Absolute total and one and two electron transfer cross sections for Ar8+ on Ar as a function of energy

    International Nuclear Information System (INIS)

    Vancura, J.; Kostroun, V.O.

    1992-01-01

    The absolute total and one and two electron transfer cross sections for Ar8+ on Ar were measured as a function of projectile laboratory energy from 0.090 to 0.550 keV/amu. The effective one electron transfer cross section dominates above 0.32 keV/amu, while below this energy, the effective two electron transfer starts to become appreciable. The total cross section varies by a factor over the energy range explored. The overall error in the cross section measurement is estimated to be ±15%

  6. Soil Carbon Mapping in Low Relief Areas with Combined Land Use Types and Percentages

    Science.gov (United States)

    Liu, Y. L.; Wu, Z. H.; Chen, Y. Y.; Wang, B. Z.

    2018-05-01

    Accurate mapping of soil carbon in low relief areas is a great challenge because of the limitations of the conventional "soil-landscape" model. Efforts have been made to integrate land use information in the modelling and mapping of soil organic carbon (SOC), in which the spatial context was ignored. With 256 topsoil samples collected from the Jianghan Plain, we aim to (i) explore the land-use dependency of SOC via one-way ANOVA; (ii) investigate the "spillover effect" of land use on SOC content; (iii) examine the feasibility of land use types and percentages (obtained with a 200-meter buffer) for soil mapping via regression kriging (RK) models. Results showed that the SOC of paddy fields was higher than that of woodlands and irrigated lands. The land use type could explain 20.5 % of the variation in SOC, and the value increased to 24.7 % when the land use percentages were considered. SOC was positively correlated with the percentage of water area and irrigation canals. Further research indicated that the SOC of irrigated lands was significantly correlated with the percentage of water area and irrigation canals, while paddy fields and woodlands did not show similar trends. The RK model that combined land use types and percentages outperformed the other models, with the lowest values of RMSEC (5.644 g/kg) and RMSEP (6.229 g/kg) and the highest R2C (0.193) and R2P (0.197). In conclusion, land use types and percentages serve as efficient indicators for SOC mapping in plain areas. Additionally, irrigation facilities contributed to farmland SOC sequestration, especially in irrigated lands.
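
    For readers unfamiliar with regression kriging, the sketch below shows the generic two-step workflow the abstract refers to: regress SOC on land-use covariates, then interpolate the spatial residuals (a Gaussian-process regressor stands in here for ordinary kriging). The data are synthetic and the covariate names are assumptions; this is not the authors' model or data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
n = 256
coords = rng.uniform(0, 10_000, (n, 2))        # easting/northing in metres
covariates = rng.uniform(0, 1, (n, 3))         # e.g. paddy / water / canal fractions
soc = (8.0 + 6.0 * covariates[:, 0] + 3.0 * covariates[:, 1]
       + np.sin(coords[:, 0] / 2000.0) + rng.normal(0, 1, n))   # g/kg, synthetic

# Step 1: the regression trend on land-use covariates
trend = LinearRegression().fit(covariates, soc)
residuals = soc - trend.predict(covariates)

# Step 2: spatial interpolation of the residuals (kriging-like step)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=2000.0) + WhiteKernel(1.0),
                              normalize_y=True).fit(coords, residuals)

# Prediction at a new location = regression trend + interpolated residual
new_coords = np.array([[5000.0, 5000.0]])
new_covariates = np.array([[0.6, 0.2, 0.1]])
prediction = trend.predict(new_covariates) + gp.predict(new_coords)
print(f"predicted SOC: {prediction[0]:.2f} g/kg")
```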

  7. ABOUT FEW APPROACHES TO COMMERCIAL BANK PERCENTAGE POLICY CONSTRUCTION IN CREDITING POPULATION

    Directory of Open Access Journals (Sweden)

    A.A. Kuklin

    2007-06-01

    Full Text Available In this article we consider some aspects of banking-sector development in the Russian Federation and the Sverdlovsk region, along with several principles for constructing the interest-rate (percentage) policy of a credit organization. We also describe interest-rate calculation methods that depend on the currency toolkit, and the results obtained when applying these methods to the development of retail (population) lending. In addition, we offer proposals for increasing the management efficiency of interest-rate policy, for reducing the level of overdue credit debt, and for refining forecasts of retail lending development in the Sverdlovsk region.

  8. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Human errors in NPP operations

    International Nuclear Information System (INIS)

    Sheng Jufang

    1993-01-01

    Based on the operational experience of nuclear power plants (NPPs), the importance of studying human performance problems is described. Statistical analysis of the significance and frequency of various root causes and error modes from a large number of human-error-related events demonstrates that defects in operation/maintenance procedures, workplace factors, and communication and training practices are the primary root causes, while omission, transposition, and quantitative mistakes are the most frequent error modes. Recommendations for domestic research on human performance problems in NPPs are given

  10. Linear network error correction coding

    CERN Document Server

    Guang, Xuan

    2014-01-01

    There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank metric codes for network error correction by representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an

  11. Medical Errors in Cyprus: The 2005 Eurobarometer Survey

    Directory of Open Access Journals (Sweden)

    Andreas Pavlakis

    2012-01-01

    Full Text Available Background: Medical errors have been highlighted in recent years by different agencies, scientific bodies and research teams alike. We sought to explore the issue of medical errors in Cyprus using data from the Eurobarometer survey. Methods: Data from the special Eurobarometer survey conducted in 2005 across all European Union countries (EU-25) and the acceding countries were obtained from the corresponding EU office. Statistical analyses including logistic regression models were performed using SPSS. Results: A total of 502 individuals participated in the Cyprus survey. About 90% reported that they had often or sometimes heard about medical errors, while 22% reported that they or a family member had suffered a serious medical error in a local hospital. In addition, 9.4% reported a serious problem from a prescribed medicine. We also found statistically significant differences across different ages and gender and in rural versus urban residents. Finally, using multivariable-adjusted logistic regression models, we found that residents in rural areas were more likely to have suffered a serious medical error in a local hospital or from a prescribed medicine. Conclusion: Our study shows that the vast majority of residents in Cyprus, in parallel with other Europeans, worry about medical errors, and that a significant percentage report having suffered a serious medical error at a local hospital or from a prescribed medicine. The results of our study could help the medical community in Cyprus and society at large to enhance its vigilance with respect to medical errors in order to improve medical care.

  12. Proposal for an absolute, atomic definition of mass

    International Nuclear Information System (INIS)

    Wignall, J.W.G.

    1991-11-01

    It is proposed that the mass of a particle be defined absolutely in terms of its de Broglie frequency, measured via the mean de Broglie wavelength of the particle when it has a mean speed ⟨v⟩ and Lorentz factor γ; the masses of systems too large to have a measurable mean de Broglie wavelength are then to be derived by specifying the usual inertial and additive properties of mass. This definition avoids the use of an arbitrary macroscopic standard such as the prototype kilogram, and, if present theory is correct, does not even require the choice of a specific particle as a mass standard. Suggestions are made as to how this absolute mass can be realized and measured at the macroscopic level and, finally, some comments are made on the effect of the new definition on the form of the equations of physics. 19 refs.
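
    For orientation only, the standard de Broglie relations that such a proposal builds on are (background, not the paper's derivation):

    $$ \lambda = \frac{h}{\gamma m v}, \qquad \nu_{\mathrm{dB}} = \frac{\gamma m c^{2}}{h}, $$

    so that, with h and c treated as fixed defining constants, a measurement of the de Broglie frequency (or of the wavelength together with the speed) determines the mass m.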

  13. Remote ultrasound palpation for robotic interventions using absolute elastography.

    Science.gov (United States)

    Schneider, Caitlin; Baghani, Ali; Rohling, Robert; Salcudean, Septimiu

    2012-01-01

    Although robotic surgery has addressed many of the challenges presented by minimally invasive surgery, the lack of haptic feedback and of knowledge of tissue stiffness remains an unsolved problem. This paper presents a system for finding the absolute elastic properties of tissue using a freehand ultrasound scanning technique, which utilizes the da Vinci Surgical robot and a custom 2D ultrasound transducer for intraoperative use. An external exciter creates shear waves in the tissue, and a local frequency estimation method computes the shear modulus. Results are reported for both phantom and in vivo models. This system can be extended to any 6 degree-of-freedom tracking method and any 2D transducer to provide real-time absolute elastic properties of tissue.

  14. Absolute limit on rotation of gravitationally bound stars

    Science.gov (United States)

    Glendenning, N. K.

    1994-03-01

    The authors seek an absolute limit on the rotational period for a neutron star as a function of its mass, based on the minimal constraints imposed by Einstein's theory of relativity, Le Chatelier's principle, causality, and a low-density equation of state, uncertainties which can be evaluated as to their effect on the result. This establishes a limiting curve in the mass-period plane below which no pulsar that is a neutron star can lie. For example, the minimum possible Kepler period, which is an absolute limit on rotation below which mass-shedding would occur, is 0.33 ms for a M = 1.442 solar mass neutron star (the mass of PSR1913+16). If the limit were found to be broken by any pulsar, it would signal that the confined hadronic phase of ordinary nucleons and nuclei is only metastable.

  15. Determination of absolute internal conversion coefficients using the SAGE spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Sorri, J., E-mail: juha.m.t.sorri@jyu.fi [University of Jyvaskyla, Department of Physics, P.O. Box 35, FI-40014 University of Jyvaskyla (Finland); Greenlees, P.T.; Papadakis, P.; Konki, J. [University of Jyvaskyla, Department of Physics, P.O. Box 35, FI-40014 University of Jyvaskyla (Finland); Cox, D.M. [University of Jyvaskyla, Department of Physics, P.O. Box 35, FI-40014 University of Jyvaskyla (Finland); Department of Physics, University of Liverpool, Oxford Street, Liverpool L69 7ZE (United Kingdom); Auranen, K.; Partanen, J.; Sandzelius, M.; Pakarinen, J.; Rahkila, P.; Uusitalo, J. [University of Jyvaskyla, Department of Physics, P.O. Box 35, FI-40014 University of Jyvaskyla (Finland); Herzberg, R.-D. [Department of Physics, University of Liverpool, Oxford Street, Liverpool L69 7ZE (United Kingdom); Smallcombe, J.; Davies, P.J.; Barton, C.J.; Jenkins, D.G. [Department of Physics, University of York, Heslington, York YO10 5DD (United Kingdom)

    2016-03-11

    A non-reference based method to determine internal conversion coefficients using the SAGE spectrometer is carried out for transitions in the nuclei of 154Sm, 152Sm and 166Yb. The Normalised-Peak-to-Gamma method is in general an efficient tool to extract internal conversion coefficients. However, in many cases the required well-known reference transitions are not available. The data analysis steps required to determine absolute internal conversion coefficients with the SAGE spectrometer are presented. In addition, several background suppression methods are introduced and an example of how ancillary detectors can be used to select specific reaction products is given. The results obtained for ground-state band E2 transitions show that the absolute internal conversion coefficients can be extracted using the methods described with a reasonable accuracy. In some cases of less intense transitions only an upper limit for the internal conversion coefficient could be given.

  16. Absolute elastic cross sections for electron scattering from SF6

    International Nuclear Information System (INIS)

    Gulley, R.J.; Uhlmann, L.J.; Dedman, C.J.; Buckman, S.J.; Cho, H.; Trantham, K.W.

    2000-01-01

    Full text: Absolute differential cross sections for vibrationally elastic scattering of electrons from sulphur hexafluoride (SF6) have been measured at fixed angles of 60 deg, 90 deg and 120 deg over the energy range of 5 to 15 eV, and also at 11 fixed energies between 2.7 and 75 eV for scattering angles between 10 deg and 180 deg. These measurements employ the magnetic angle-changing technique of Read and Channing in combination with the relative flow technique to obtain absolute elastic scattering cross sections at backward angles (135 deg to 180 deg) for incident energies below 15 eV. The results reveal some substantial differences with several previous determinations and a reasonably good level of agreement with a recent close coupling calculation

  17. ABSOLUTE AND COMPARATIVE SUSTAINABILITY OF FARMING ENTERPRISES IN BULGARIA

    Directory of Open Access Journals (Sweden)

    H. Bachev

    2017-04-01

    Full Text Available Evaluating the absolute and comparative sustainability of farming enterprises is among the most topical issues for researchers, farmers, investors, administrators, politicians, interest groups and the public at large. Nevertheless, in Bulgaria and most East European countries there are no comprehensive assessments of the sustainability level of farms of different juridical types. This article applies a holistic framework and assesses the absolute and comparative sustainability of the major farming structures in Bulgaria - unregistered farms of Natural Persons, Sole Traders, Cooperatives, and Companies. First, the method of the study is outlined and the overall characteristics of the surveyed farming enterprises are presented. After that, an assessment is made of the integral, governance, economic, social and environmental sustainability of farming structures of different juridical types. Next, the structure of farming enterprises with different sustainability levels is analyzed. Finally, conclusions from the study are drawn and directions for further research and for improving sustainability assessments are suggested.

  18. Absolute dating of the Aegean Late Bronze Age

    International Nuclear Information System (INIS)

    Warren, P.M.

    1987-01-01

    A recent argument for raising the absolute date of the beginning of the Aegean Late Bronze (LB) Age to about 1700 B.C. is critically examined. It is argued here that: (1) the alabaster lid from Knossos did have the stratigraphical context assigned to it by Evans, in all probability Middle Minoan IIIA, c. 1650 B.C.; (2) the attempt to date the alabastron found in an early Eighteenth Dynasty context at Aniba to Late Minoan IIIA:1 is open to objections; (3) radiocarbon dates from Aegean LB I contexts are too wide in their calibrated ranges and too inconsistent both within and between site sets to offer any reliable grounds at present for raising Aegean LB I absolute chronology to 1700 B.C. Other evidence, however, suggests this period began about 1600 B.C., i.e. some fifty years earlier than the conventional date of 1550 B.C. (author)

  19. Limitations of absolute activity determination of I-125 sources

    Energy Technology Data Exchange (ETDEWEB)

    Pelled, O; German, U; Kol, R; Levinson, S; Weinstein, M; Laichter, Y [Israel Atomic Energy Commission, Beersheba (Israel). Nuclear Research Center-Negev; Alphasy, Z [Ben-Gurion Univ. of the Negev, Beersheba (Israel)

    1996-12-01

    A method for the absolute determination of the activity of an I-125 source, based on the counting rates of the 27 keV photons and the coincidence photon peak, is given in the literature. It is based on the principle that if a radionuclide emits two photons in coincidence, a measurement of its disintegration rate in the photopeak and in the sum-peak can determine its absolute activity. When using this method, the system calibration is simplified and parameters such as source geometry or source position relative to the detector have no significant influence. However, when the coincidence rate is very low, the application of this method is limited by the counting statistics of the coincidence peak (authors).

  20. Rational functions with maximal radius of absolute monotonicity

    KAUST Repository

    Loczi, Lajos

    2014-05-19

    We study the radius of absolute monotonicity R of rational functions with numerator and denominator of degree s that approximate the exponential function to order p. Such functions arise in the application of implicit s-stage, order p Runge-Kutta methods for initial value problems and the radius of absolute monotonicity governs the numerical preservation of properties like positivity and maximum-norm contractivity. We construct a function with p=2 and R>2s, disproving a conjecture of van de Griend and Kraaijevanger. We determine the maximum attainable radius for functions in several one-parameter families of rational functions. Moreover, we prove earlier conjectured optimal radii in some families with 2 or 3 parameters via uniqueness arguments for systems of polynomial inequalities. Our results also prove the optimality of some strong stability preserving implicit and singly diagonally implicit Runge-Kutta methods. Whereas previous results in this area were primarily numerical, we give all constants as exact algebraic numbers.
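
    For context, the radius of absolute monotonicity of a function ψ is conventionally defined as (standard definition, not quoted from the paper):

    $$ R(\psi) = \sup\left\{\, r \ge 0 \;:\; \psi^{(k)}(x) \ge 0 \ \text{for all integers } k \ge 0 \text{ and all } x \in [-r,\,0] \,\right\}. $$

    For a Runge-Kutta stability function ψ, R(ψ) sets the step-size threshold below which properties such as positivity and maximum-norm contractivity of the underlying problem are preserved, which is why maximizing R is of interest.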

  1. Determination of absolute internal conversion coefficients using the SAGE spectrometer

    International Nuclear Information System (INIS)

    Sorri, J.; Greenlees, P.T.; Papadakis, P.; Konki, J.; Cox, D.M.; Auranen, K.; Partanen, J.; Sandzelius, M.; Pakarinen, J.; Rahkila, P.; Uusitalo, J.; Herzberg, R.-D.; Smallcombe, J.; Davies, P.J.; Barton, C.J.; Jenkins, D.G.

    2016-01-01

    A non-reference based method to determine internal conversion coefficients using the SAGE spectrometer is carried out for transitions in the nuclei of 154Sm, 152Sm and 166Yb. The Normalised-Peak-to-Gamma method is in general an efficient tool to extract internal conversion coefficients. However, in many cases the required well-known reference transitions are not available. The data analysis steps required to determine absolute internal conversion coefficients with the SAGE spectrometer are presented. In addition, several background suppression methods are introduced and an example of how ancillary detectors can be used to select specific reaction products is given. The results obtained for ground-state band E2 transitions show that the absolute internal conversion coefficients can be extracted using the methods described with a reasonable accuracy. In some cases of less intense transitions only an upper limit for the internal conversion coefficient could be given.

  2. Synesthesia and rhythm. The road to absolute cinema

    Directory of Open Access Journals (Sweden)

    Ricardo Roncero Palomar

    2017-05-01

    Full Text Available Absolute cinema, developed during the historical avant-garde, continued a long artistic tradition that linked musical and visual experience. With cinema as their medium of expression, these filmmakers were able to work with the moving image to develop concepts such as rhythm, using figures more complex than the colored spots that other devices could create at that time. This study starts with the texts on color published by Newton in 1704 and provides an overview of the artistic milestones that link image and sound and that constitute the origins of absolute cinema. The connections and equivalences between visual and sound experience used by these filmmakers are also studied, in order to determine whether there was a continuous line from the origins of these studies or whether there was a rupture, with other, later investigations having a greater repercussion in their works.

  3. Error field considerations for BPX

    International Nuclear Information System (INIS)

    LaHaye, R.J.

    1992-01-01

    Irregularities in the position of poloidal and/or toroidal field coils in tokamaks produce resonant toroidal asymmetries in the vacuum magnetic fields. Otherwise stable tokamak discharges become non-linearly unstable to disruptive locked modes when subjected to low-level error fields. Because of the field errors, magnetic islands are produced which would not otherwise occur in tearing-mode-stable configurations; a concomitant reduction of the total confinement can result. Poloidal and toroidal asymmetries arise in the heat flux to the divertor target. In this paper, the field errors from perturbed BPX coils are used in a field line tracing code of the BPX equilibrium to study these deleterious effects. Limits on coil irregularities for device design and fabrication are computed along with possible correcting coils for reducing such field errors

  4. The uncorrected refractive error challenge

    Directory of Open Access Journals (Sweden)

    Kovin Naidoo

    2016-11-01

    Full Text Available Refractive error affects people of all ages, socio-economic status and ethnic groups. The most recent statistics estimate that, worldwide, 32.4 million people are blind and 191 million people have vision impairment. Vision impairment has been defined based on distance visual acuity only, and uncorrected distance refractive error (mainly myopia is the single biggest cause of worldwide vision impairment. However, when we also consider near visual impairment, it is clear that even more people are affected. From research it was estimated that the number of people with vision impairment due to uncorrected distance refractive error was 107.8 million,1 and the number of people affected by uncorrected near refractive error was 517 million, giving a total of 624.8 million people.

  5. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
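
    The sketch below illustrates the bias the paper sets out to remove: a naive median regression on an error-prone covariate attenuates the slope, whereas the same fit on the true covariate recovers it. It demonstrates the problem only, not the proposed joint-estimating-equation correction; all data are simulated and the noise levels are assumptions.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(0.0, 1.0, n)                  # true covariate
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)  # true median slope = 2
w = x + rng.normal(0.0, 1.0, n)              # covariate measured with error

for covariate, label in ((x, "true covariate x"), (w, "mismeasured covariate w")):
    design = sm.add_constant(covariate)
    fit = QuantReg(y, design).fit(q=0.5)
    print(f"{label}: estimated median-regression slope = {fit.params[1]:.3f}")
```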

  6. Comprehensive Error Rate Testing (CERT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...

  7. Numerical optimization with computational errors

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking into account computational errors. The author illustrates that the algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton's meth...

  8. Dual processing and diagnostic errors.

    Science.gov (United States)

    Norman, Geoff

    2009-09-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process, called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance in idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to consistent reduction in error rates.

  9. Purely absolutely continuous spectrum for almost Mathieu operators

    International Nuclear Information System (INIS)

    Chulaevsky, V.; Delyon, F.

    1989-01-01

    Using a recent result of Sinai, the authors prove that the almost Mathieu operators acting on l²(Z), (H_{α,λ}Ψ)(n) = Ψ(n+1) + Ψ(n-1) + λ cos(ωn + α) Ψ(n), have a purely absolutely continuous spectrum for almost all α provided that ω is a good irrational and λ is sufficiently small. Furthermore, the generalized eigenfunctions are quasiperiodic

  10. Absolute calibration and beam background of the Squid Polarimeter

    International Nuclear Information System (INIS)

    Blaskiewicz, M.M.; Cameron, P.R.; Shea, T.J.

    1996-01-01

    The problem of beam background in Squid Polarimetry is not without residual benefits. The authors may deliberately generate beam background by gently kicking the beam at the spin tune frequency. This signal may be used to accomplish a simple and accurate absolute calibration of the polarimeter. The authors present details of beam background calculations and their application to polarimeter calibration, and suggest a simple proof-of-principle accelerator experiment

  11. Results from a U.S. Absolute Gravity Survey,

    Science.gov (United States)

    1982-01-01

    National Bureau of Standards. ... 1. Introduction. We have recently completed an absolute gravity survey at twelve sites in the ... Air Force Geophysics Laboratory (AFGL) and the Istituto di Metrologia "G. Colonnetti" (IMGC) [Marson and Alasia, 1978, 1980]. All three ... for absolute measurements of the earth's gravity, Metrologia, in press, 1982. Table 1. Gravity values transferred to the floor in gal (cm

  12. Blastic plasmacytoid dendritic cell neoplasm with absolute monocytosis at presentation

    Directory of Open Access Journals (Sweden)

    Jaworski JM

    2015-02-01

    Full Text Available Joseph M Jaworski,1,2 Vanlila K Swami,1 Rebecca C Heintzelman,1 Carrie A Cusack,3 Christina L Chung,3 Jeremy Peck,3 Matthew Fanelli,3 Micheal Styler,4 Sanaa Rizk,4 J Steve Hou1 1Department of Pathology and Laboratory Medicine, Hahnemann University Hospital/Drexel University College of Medicine, Philadelphia, PA, USA; 2Department of Pathology, Mercy Fitzgerald Hospital, Darby, PA, USA; 3Department of Dermatology, Hahnemann University Hospital/Drexel University College of Medicine, Philadelphia, PA, USA; 4Department of Hematology/Oncology, Hahnemann University Hospital/Drexel University College of Medicine, Philadelphia, PA, USA Abstract: Blastic plasmacytoid dendritic cell neoplasm is an uncommon malignancy derived from precursors of plasmacytoid dendritic cells. Nearly all patients present initially with cutaneous manifestations, with many having extracutaneous disease additionally. While response to chemotherapy initially is effective, relapse occurs in most, with a leukemic phase ultimately developing. The prognosis is dismal. While most of the clinical and pathologic features are well described, the association and possible prognostic significance between peripheral blood absolute monocytosis (>1.0 K/µL and blastic plasmacytoid dendritic cell neoplasm have not been reported. We report a case of a 68-year-old man who presented with a rash for 4–5 months. On physical examination, there were multiple, dull-pink, indurated plaques on the trunk and extremities. Complete blood count revealed thrombocytopenia, absolute monocytosis of 1.7 K/µL, and a negative flow cytometry study. Biopsy of an abdominal lesion revealed typical features of blastic plasmacytoid dendritic cell neoplasm. Patients having both hematologic and nonhematologic malignancies have an increased incidence of absolute monocytosis. Recent studies examining Hodgkin and non-Hodgkin lymphoma patients have suggested that this is a negative prognostic factor. The association between

  13. THE ABSOLUTE MAGNITUDES OF TYPE Ia SUPERNOVAE IN THE ULTRAVIOLET

    International Nuclear Information System (INIS)

    Brown, Peter J.; Roming, Peter W. A.; Ciardullo, Robin; Gronwall, Caryl; Hoversten, Erik A.; Pritchard, Tyler; Milne, Peter; Bufano, Filomena; Mazzali, Paolo; Elias-Rosa, Nancy; Filippenko, Alexei V.; Li Weidong; Foley, Ryan J.; Hicken, Malcolm; Kirshner, Robert P.; Gehrels, Neil; Holland, Stephen T.; Immler, Stefan; Phillips, Mark M.; Still, Martin

    2010-01-01

    We examine the absolute magnitudes and light-curve shapes of 14 nearby (redshift z = 0.004-0.027) Type Ia supernovae (SNe Ia) observed in the ultraviolet (UV) with the Swift Ultraviolet/Optical Telescope. Colors and absolute magnitudes are calculated using both a standard Milky Way extinction law and one for the Large Magellanic Cloud that has been modified by circumstellar scattering. We find very different behavior in the near-UV filters (uvw1_rc covering ∼2600-3300 Å after removing optical light, and u ∼ 3000-4000 Å) compared to a mid-UV filter (uvm2 ∼2000-2400 Å). The uvw1_rc - b colors show a scatter of ∼0.3 mag while uvm2-b scatters by nearly 0.9 mag. Similarly, while the scatter in colors between neighboring filters is small in the optical and somewhat larger in the near-UV, the large scatter in the uvm2 - uvw1 colors implies significantly larger spectral variability below 2600 Å. We find that in the near-UV the absolute magnitudes at peak brightness of normal SNe Ia in our sample are correlated with the optical decay rate with a scatter of 0.4 mag, comparable to that found for the optical in our sample. However, in the mid-UV the scatter is larger, ∼1 mag, possibly indicating differences in metallicity. We find no strong correlation between either the UV light-curve shapes or the UV colors and the UV absolute magnitudes. With larger samples, the UV luminosity might be useful as an additional constraint to help determine distance, extinction, and metallicity in order to improve the utility of SNe Ia as standardized candles.

  14. Mylar sources for the absolute determination of activity

    International Nuclear Information System (INIS)

    Arenillas, Pablo A.

    1999-01-01

    Strong Mylar foils 2.5 μm thick are proposed as an alternative to the very fragile Vyns foils for the preparation of the radioactive sources for absolute counting. Several experiments have been carried out with β and X-ray emitters to demonstrate the suitability of this material. The results show that Mylar can replace Vyns foils even for low energy β emitters. (author)

  15. Improved Harmony Search Algorithm with Chaos for Absolute Value Equation

    Directory of Open Access Journals (Sweden)

    Shouheng Tuo

    2013-11-01

    Full Text Available In this paper, an improved harmony search with chaos (HSCH) is presented for solving the NP-hard absolute value equation (AVE) Ax - |x| = b, where A is an arbitrary square matrix whose singular values exceed one. The simulation results in solving some given AVE problems demonstrate that the HSCH algorithm is valid and outperforms the classical HS algorithm (CHS) and the HS algorithm with differential mutation operator (HSDE).
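
    The HSCH metaheuristic itself is not reproduced here, but for comparison the sketch below solves the same AVE with a simple generalized Newton iteration, which is well defined under the abstract's assumption that the singular values of A exceed one. It is a baseline sketch, not the authors' algorithm.

```python
import numpy as np

def solve_ave_newton(A, b, max_iter=100, tol=1e-10):
    """Generalized Newton iteration for the AVE  A x - |x| = b:
    x_{k+1} = (A - D(x_k))^{-1} b with D(x) = diag(sign(x)).
    The linear systems are nonsingular when the singular values of A exceed 1."""
    x = np.linalg.solve(A, b)                 # start by ignoring the |x| term
    for _ in range(max_iter):
        x_new = np.linalg.solve(A - np.diag(np.sign(x)), b)
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

# build a test matrix whose singular values all exceed one
rng = np.random.default_rng(0)
n = 6
U, _, Vt = np.linalg.svd(rng.standard_normal((n, n)))
A = U @ np.diag(rng.uniform(1.5, 4.0, n)) @ Vt
x_true = rng.standard_normal(n)
b = A @ x_true - np.abs(x_true)

x = solve_ave_newton(A, b)
print("residual norm:", np.linalg.norm(A @ x - np.abs(x) - b))
```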

  16. Absolute Bunch Length Measurements by Incoherent Radiation Fluctuation Analysis

    International Nuclear Information System (INIS)

    Sannibale, F.; Stupakov, G.V.; Zolotorev, M.S.; Filippetto, D.; Jagerhofer, L.

    2009-01-01

    By analyzing the pulse-to-pulse intensity fluctuations of the radiation emitted by a charged particle in the incoherent part of the spectrum, it is possible to extract information about the spatial distribution of the beam. At the Advanced Light Source (ALS) of the Lawrence Berkeley National Laboratory, we have developed and successfully tested a simple scheme based on this principle that allows for the absolute measurement of the rms bunch length. A description of the method and the experimental results are presented.

  17. Absolute efficiency calibration of HPGe detector by simulation method

    International Nuclear Information System (INIS)

    Narayani, K.; Pant, Amar D.; Verma, Amit K.; Bhosale, N.A.; Anilkumar, S.

    2018-01-01

    High resolution gamma ray spectrometry with HPGe detectors is a powerful radioanalytical technique for the estimation of the activity of various radionuclides. In the present work, absolute efficiency calibration of the HPGe detector was carried out using a Monte Carlo simulation technique and the results are compared with those obtained by experiment using the standard radionuclides 152Eu and 133Ba. The coincidence summing correction factors for the measurement of these nuclides were also calculated
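
    As background, the full-energy-peak efficiency that such a calibration targets is commonly written as (a standard relation with assumed notation, not a formula quoted from the paper):

    $$ \varepsilon(E_\gamma) = \frac{N_{\mathrm{net}}\, C_{\mathrm{TCS}}}{A\, t_{\mathrm{live}}\, P_\gamma}, $$

    where N_net is the net peak area, A the source activity, t_live the live time, P_γ the gamma emission probability, and C_TCS the true-coincidence-summing correction that the abstract mentions for the cascading emitters 152Eu and 133Ba.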

  18. Absolute Pitch: Effects of Timbre on Note-Naming Ability

    OpenAIRE

    Vanzella, Patr?cia; Schellenberg, E. Glenn

    2010-01-01

    Background Absolute pitch (AP) is the ability to identify or produce isolated musical tones. It is evident primarily among individuals who started music lessons in early childhood. Because AP requires memory for specific pitches as well as learned associations with verbal labels (i.e., note names), it represents a unique opportunity to study interactions in memory between linguistic and nonlinguistic information. One untested hypothesis is that the pitch of voices may be difficult for AP poss...

  19. Assembler absolute forward thick-target bremsstrahlung spectra program

    International Nuclear Information System (INIS)

    Niculescu, V.I.R.; Baciu, G.; Ionescu-Bujor, M.

    1981-12-01

    The program is intended to compute the absolute forward thick-target bremsstrahlung spectrum for electrons in the energy range 1-24 MeV. The program takes into account the following phenomena: multiple scattering, energy loss and the attenuation of the emitted gamma rays. The computer program is written in Assembler having a higher degree of generality and is more performant than the FORTRAN version. (authors)

  20. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number of such frames. In particular we argue that a three-error correcting BCH is the best choice for the component code in such systems.

  1. Negligence, genuine error, and litigation

    OpenAIRE

    Sohn DH

    2013-01-01

    David H Sohn, Department of Orthopedic Surgery, University of Toledo Medical Center, Toledo, OH, USA. Abstract: Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine or of system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort syst...

  2. Eliminating US hospital medical errors.

    Science.gov (United States)

    Kumar, Sameer; Steinebach, Marc

    2008-01-01

    Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and present a closed-loop mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams as well as devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate to the six-sigma level. Additionally, designing as many redundancies as possible in the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing, and improving their education related to the quality of service delivery to minimize clinical errors. This will lead to higher fixed costs, especially in the shorter time frame. This paper focuses the additional attention needed to make a sound technical and business case for implementing six sigma tools to eliminate medical errors, which will enable hospital managers to increase their hospital's profitability in the long run and also ensure patient safety.

  3. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variance. Two sample fault trees are evaluated and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given
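
    To make the kind of approximation error being discussed concrete, the sketch below compares first-order (delta-method) variance propagation with the exact result and a Monte Carlo estimate for the product of two independent inputs. It is a generic illustration with assumed numbers, not the paper's fault-tree models.

```python
import numpy as np

# Two independent inputs, e.g. component failure probabilities treated as
# random variables (illustrative means and variances only).
mu1, var1 = 1.0e-3, (0.5e-3) ** 2
mu2, var2 = 2.0e-2, (1.5e-2) ** 2

# Top event modelled here as the product Y = X1 * X2.
var_first_order = mu2**2 * var1 + mu1**2 * var2     # delta method
var_exact = var_first_order + var1 * var2           # exact for independent inputs

rng = np.random.default_rng(2)
x1 = rng.normal(mu1, np.sqrt(var1), 1_000_000)
x2 = rng.normal(mu2, np.sqrt(var2), 1_000_000)
var_mc = np.var(x1 * x2)

print(f"first-order approximation: {var_first_order:.3e}")
print(f"exact variance:            {var_exact:.3e}")
print(f"Monte Carlo estimate:      {var_mc:.3e}")
```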

  4. Overspecification of colour, pattern, and size: Salience, absoluteness, and consistency

    Directory of Open Access Journals (Sweden)

    Sammie eTarenskeen

    2015-11-01

    The rates of overspecification of colour, pattern, and size are compared to investigate how salience and absoluteness contribute to the production of overspecification. Colour and pattern are absolute attributes, whereas size is relative and less salient. Additionally, a tendency towards consistent responses is assessed. Using a within-participants design, we find similar rates of colour and pattern overspecification, which are both higher than the rate of size overspecification. Using a between-participants design, however, we find similar rates of pattern and size overspecification, which are both lower than the rate of colour overspecification. This indicates that although many speakers are more likely to include colour than pattern (probably because colour is more salient), they may also treat pattern like colour due to a tendency towards consistency. We find no increase in size overspecification when the salience of size is increased, suggesting that speakers are more likely to include absolute than relative attributes. However, we do find an increase in size overspecification when mentioning the attributes is triggered, which again shows that speakers tend to refer in a consistent manner, and that there are circumstances in which even size overspecification is frequently produced.

  5. Philosophy as Inquiry Aimed at the Absolute Knowledge

    Directory of Open Access Journals (Sweden)

    Ekaterina Snarskaya

    2017-09-01

    Philosophy as the absolute knowledge has been studied from two different but closely related approaches: historical and logical. The first approach exposes four main stages in the history of European metaphysics that marked out types of “philosophical absolutism”: the evolution of philosophy brought to light metaphysics of being, method, morals and logic. All of them are associated with the names of Aristotle, Bacon/Descartes, Kant and Hegel. Then these forms are considered in the second approach, which defines them as the subject-matter of philosophy as such. Due to their overall, comprehensive character, the focus of philosophy on them justifies its claim to absoluteness, insofar as philosophy is aimed at comprehension of the world's unity regardless of the philosopher's background, values and other preferences. And that is its prerogative, since no other form of consciousness lays down this kind of aim. Thus, philosophy is defined as an everlasting attempt to succeed in conceiving the world in all its multifold manifestations. This article tries to clarify the claim of philosophy to absolute knowledge.

  6. Neutron activation analysis of certified samples by the absolute method

    Science.gov (United States)

    Kadem, F.; Belouadah, N.; Idiri, Z.

    2015-07-01

    The nuclear reaction analysis technique is mainly based on the relative method or on the use of activation cross sections. In order to validate nuclear data for the calculated cross sections evaluated from systematic studies, we used the neutron activation analysis technique (NAA) to determine the concentrations of the various constituents of certified samples of animal blood, milk and hay. In this analysis, the absolute method is used. The neutron activation technique involves irradiating the sample and subsequently measuring its activity. The fundamental equation of activation connects several physical parameters, including the cross section, that are essential for the quantitative determination of the different elements composing the sample without resorting to a standard sample. Called the absolute method, it allows a measurement as accurate as the relative method. The results obtained by the absolute method showed that the values are as precise as those of the relative method, which requires a standard sample for each element to be quantified.
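
    The "fundamental equation of activation" referred to here is commonly written in the following textbook form (quoted as general background, not necessarily the exact expression used by the authors). For an element of mass m, molar mass M and isotopic abundance θ, irradiated for a time t_irr in a neutron flux φ and counted after a decay time t_d,

    A(t_d) = \frac{m N_A \theta}{M}\,\sigma\,\varphi\,\left(1 - e^{-\lambda t_{\mathrm{irr}}}\right) e^{-\lambda t_d}

    where N_A is Avogadro's number, σ the activation cross section and λ the decay constant of the product nuclide; solving this for m from the measured activity is what makes the method "absolute", i.e. free of a standard sample.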

  7. A vibration correction method for free-fall absolute gravimeters

    Science.gov (United States)

    Qian, J.; Wang, G.; Wu, K.; Wang, L. J.

    2018-02-01

    An accurate determination of gravitational acceleration, usually approximated as 9.8 m s⁻², has been playing an important role in the areas of metrology, geophysics, and geodetics. Absolute gravimetry has been experiencing rapid developments in recent years. Most absolute gravimeters today employ a free-fall method to measure gravitational acceleration. Noise from ground vibration has become one of the most serious factors limiting measurement precision. Compared to vibration isolators, the vibration correction method is a simple and feasible way to reduce the influence of ground vibrations. A modified vibration correction method is proposed and demonstrated. A two-dimensional golden section search algorithm is used to search for the best parameters of the hypothetical transfer function. Experiments using a T-1 absolute gravimeter are performed. It is verified that for an identical group of drop data, the modified method proposed in this paper can achieve better correction effects with much less computation than previous methods. Compared to vibration isolators, the correction method applies to more hostile environments and even dynamic platforms, and is expected to be used in a wider range of applications.
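
    As a rough illustration of the golden section search mentioned above, here is a minimal one-dimensional sketch in Python; the correction described in the record searches in two dimensions over the parameters of the hypothetical transfer function, and the objective function below is only a placeholder:

    import math

    def golden_section_min(f, a, b, tol=1e-6):
        """Minimize a unimodal function f on [a, b] by golden section search."""
        invphi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
        c = b - invphi * (b - a)
        d = a + invphi * (b - a)
        while abs(b - a) > tol:
            if f(c) < f(d):
                b = d  # the minimum lies in [a, d]
            else:
                a = c  # the minimum lies in [c, b]
            c = b - invphi * (b - a)
            d = a + invphi * (b - a)
        return (a + b) / 2

    # Placeholder objective: residual vibration power as a function of one
    # hypothetical transfer-function parameter (illustrative values only).
    residual = lambda k: (k - 0.37) ** 2 + 0.01
    print(golden_section_min(residual, 0.0, 1.0))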

  8. Absolute photoionization cross-section of the propargyl radical

    Energy Technology Data Exchange (ETDEWEB)

    Savee, John D.; Welz, Oliver; Taatjes, Craig A.; Osborn, David L. [Sandia National Laboratories, Combustion Research Facility, Livermore, California 94551 (United States); Soorkia, Satchin [Institut des Sciences Moleculaires d' Orsay, Universite Paris-Sud 11, Orsay (France); Selby, Talitha M. [Department of Chemistry, University of Wisconsin, Washington County Campus, West Bend, Wisconsin 53095 (United States)

    2012-04-07

    Using synchrotron-generated vacuum-ultraviolet radiation and multiplexed time-resolved photoionization mass spectrometry we have measured the absolute photoionization cross-section for the propargyl (C₃H₃) radical, σ_propargyl^ion(E), relative to the known absolute cross-section of the methyl (CH₃) radical. We generated a stoichiometric 1:1 ratio of C₃H₃ : CH₃ from 193 nm photolysis of two different C₄H₆ isomers (1-butyne and 1,3-butadiene). Photolysis of 1-butyne yielded values of σ_propargyl^ion(10.213 eV) = (26.1 ± 4.2) Mb and σ_propargyl^ion(10.413 eV) = (23.4 ± 3.2) Mb, whereas photolysis of 1,3-butadiene yielded values of σ_propargyl^ion(10.213 eV) = (23.6 ± 3.6) Mb and σ_propargyl^ion(10.413 eV) = (25.1 ± 3.5) Mb. These measurements place our relative photoionization cross-section spectrum for propargyl on an absolute scale between 8.6 and 10.5 eV. The cross-section derived from our results is approximately a factor of three larger than previous determinations.
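
    The relative-to-absolute scaling underlying such a measurement can be summarized by the ratio below (a generic statement of the relative method under the stated 1:1 stoichiometry; the signal symbols S are introduced here only for illustration):

    \sigma_{\mathrm{C_3H_3}}^{\mathrm{ion}}(E) = \sigma_{\mathrm{CH_3}}^{\mathrm{ion}}(E')\,\frac{S_{\mathrm{C_3H_3}}(E)}{S_{\mathrm{CH_3}}(E')}\,\frac{N_{\mathrm{CH_3}}}{N_{\mathrm{C_3H_3}}}

    where the number-density ratio N_CH3/N_C3H3 equals one for the 1:1 photolytic source, so the propargyl cross-section follows directly from the measured ion-signal ratio and the known methyl cross-section.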

  9. Absolute surface reconstruction by slope metrology and photogrammetry

    Science.gov (United States)

    Dong, Yue

    Developing the manufacture of aspheric and freeform optical elements requires an advanced metrology method which is capable of inspecting these elements with arbitrary freeform surfaces. In this dissertation, a new surface measurement scheme is investigated for such a purpose, which is to measure the absolute surface shape of an object under test through its surface slope information obtained by photogrammetric measurement. A laser beam propagating toward the object reflects on its surface while the vectors of the incident and reflected beams are evaluated from the four spots they leave on the two parallel transparent windows in front of the object. The spots' spatial coordinates are determined by photogrammetry. With the knowledge of the incident and reflected beam vectors, the local slope information of the object surface is obtained through vector calculus and finally yields the absolute object surface profile by a reconstruction algorithm. An experimental setup is designed and the proposed measuring principle is experimentally demonstrated by measuring the absolute surface shape of a spherical mirror. The measurement uncertainty is analyzed, and efforts for improvement are made accordingly. In particular, structured windows are designed and fabricated to generate uniform scattering spots left by the transmitted laser beams. Calibration of the fringe reflection instrument, another typical surface slope measurement method, is also reported in the dissertation. Finally, a method for uncertainty analysis of a photogrammetry measurement system by optical simulation is investigated.
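
    The slope-recovery step described above rests on the law of reflection: for unit incident and reflected beam vectors the local surface normal is parallel to their difference. A minimal Python sketch of that step (illustrative only; in the actual instrument the beam vectors come from the photogrammetrically measured spot coordinates on the two windows):

    import numpy as np

    def surface_normal(incident, reflected):
        """Unit surface normal from unit incident and reflected beam directions.

        Law of reflection: r = i - 2 (i . n) n, hence n is parallel to (r - i).
        """
        i = np.asarray(incident, dtype=float)
        r = np.asarray(reflected, dtype=float)
        i /= np.linalg.norm(i)
        r /= np.linalg.norm(r)
        n = r - i
        return n / np.linalg.norm(n)

    # Example: a beam travelling along +z reflects off a surface tilted about x.
    n = surface_normal(incident=[0.0, 0.0, 1.0], reflected=[0.0, 0.5, -0.866])
    slope_y = -n[1] / n[2]  # local dz/dy of the surface, from the normal
    print(n, slope_y)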

  10. Absolutely minimal extensions of functions on metric spaces

    International Nuclear Information System (INIS)

    Milman, V A

    1999-01-01

    Extensions of a real-valued function from the boundary ∂X₀ of an open subset X₀ of a metric space (X, d) to X₀ are discussed. For the broad class of initial data coming under discussion (linearly bounded functions) locally Lipschitz extensions to X₀ that preserve localized moduli of continuity are constructed. In the set of these extensions an absolutely minimal extension is selected, which was considered before by Aronsson for Lipschitz initial functions in the case X₀ ⊂ Rⁿ. An absolutely minimal extension can be regarded as an ∞-harmonic function, that is, a limit of p-harmonic functions as p → +∞. The proof of the existence of absolutely minimal extensions in a metric space with intrinsic metric is carried out by the Perron method. To this end, ∞-subharmonic, ∞-superharmonic, and ∞-harmonic functions on a metric space are defined and their properties are established.

  11. Absolute total cross sections for noble gas systems

    International Nuclear Information System (INIS)

    Kam, P. van der.

    1981-01-01

    This thesis deals with experiments on the elastic scattering of Ar, Kr and Xe, using the molecular beam technique. The aim of this work was the measurement of the absolute value of the total cross section and the behaviour of the total cross section, Q, as a function of the relative velocity g of the scattering partners. The author gives an extensive analysis of the glory structure in the total cross section and parametrizes the experimental results using a semiclassical model function. This allows a detailed comparison of the phase and amplitude of the predicted and measured glory undulations. He indicates how the depth and position of the potential well should be changed in order to come to an optimum description of the glory structure. With this model function he has also been able to separate the glory and attractive contributions to Q, and using the results from the extrapolation measurements he has obtained absolute values for Qₐ. From these absolute values he has calculated the parameter C₆ that determines the strength of the attractive region of the potential. In two of the four investigated gas combinations the obtained values lie outside the theoretical bounds. (Auth.)

  12. Local absolute alcohol ablation for the treatment of recurrent pheochromocytoma

    International Nuclear Information System (INIS)

    Shang Mingyi; Wang Peijun; Lu Ying; Ma Jun; Tang Junjun; Xi Qian; Huang Zongliang; Gao Xiaolong

    2010-01-01

    Objective: To assess the clinical value of local injection of absolute alcohol under CT guidance in treating recurrent pheochromocytoma. Methods: Five patients with benign recurrent pheochromocytoma were enrolled in this study. Of the five cases, the lesions were located on the right side in three, on the left in one and on both sides in one. All the lesions were pathologically proved to be benign. Under CT guidance the ablation therapy with local injection of absolute alcohol was performed. The therapeutic results were observed and evaluated. Results: Thirty days after the treatment, different degrees of decrease in tumor size were observed on follow-up CT scans. All the patients were followed up for 9-42 months. During the follow-up period, both the blood pressure and the vanillyl mandelic acid (VMA) level in urine remained normal, and no paroxysmal dizziness, headache or syncope occurred in any patient. Conclusion: For the treatment of recurrent pheochromocytoma, ablation therapy by local injection of absolute alcohol under CT guidance is a safe and practical therapeutic means with definite and reliable effectiveness. (authors)

  13. The remaining percentage of 32P after burning of sulphur tablet containing 32P

    International Nuclear Information System (INIS)

    Ke Weiqing

    1991-01-01

    Three types of sulphur tablet containing 32P were made artificially. The remaining percentage of 32P after burning of the three types of sulphur tablets containing 32P is 98.1 ± 1.3% for the 1st and 2nd types and 97.2 ± 2.8% for the 3rd type.

  14. Relation Between Bitumen Content and Percentage Air Voids in Semi Dense Bituminous Concrete

    Science.gov (United States)

    Panda, R. P.; Das, Sudhanshu Sekhar; Sahoo, P. K.

    2018-06-01

    Hot mix asphalt (HMA) is a heterogeneous mix of aggregate, mineral filler, bitumen, additives and air voids. Researchers have indicated that the durability of HMA is sensitive to the actual bitumen content and the percentage air voids. This paper aims at establishing the relationship between bitumen content and percentage air voids in Semi Dense Bituminous Concrete (SDBC) using Viscosity Grade-30 (VG-30) bitumen. A total of 54 samples were collected for formulation and validation of the relationship, and it was observed that the percentage air voids increases as the actual bitumen content decreases, and vice versa. A minor increase in percentage air voids beyond the designed air voids of the Marshall method of design is required for better performance, indicating a need for reducing the codal provision of minimum bitumen content for SDBC as specified in the Specification for Road & Bridges (Fourth Revision) published by the Indian Road Congress, 2001. The study shows the possibility of reducing the designed minimum bitumen content below the codal provision for SDBC by 0.2% by weight with VG-30 grade bitumen.

  15. 13 CFR 120.210 - What percentage of a loan may SBA guarantee?

    Science.gov (United States)

    2010-01-01

    13 CFR 120.210 (Business Credit and Assistance; Small Business Administration; 2010-01-01): What percentage of a loan may SBA guarantee? Section 120.210... percent, except as otherwise authorized by law. [61 FR 3235, Jan. 31, 1996, as amended at 68 FR 51680, Aug...

  16. The dependence of percentage depth dose on the source-to-skin ...

    African Journals Online (AJOL)

    The variation of percentage depth dose (PDD) with source-to-skin distance (SSD) for kilovoltage X-rays used in radiotherapy has been investigated. Based on physical parameters of photon fluence, absorption and scatter during interaction of radiation with tissue, a mathematical model was developed to predict the PDDs at ...
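
    A standard textbook relation for this dependence, against which such a model can be compared, is the Mayneord F factor (quoted here as general background, not as the model developed in the record): for a depth d, a depth of maximum dose d_m, and source-to-skin distances f_1 and f_2, PDD(d, f_2) ≈ F · PDD(d, f_1) with

    F = \left(\frac{f_2 + d_m}{f_1 + d_m}\right)^{2}\left(\frac{f_1 + d}{f_2 + d}\right)^{2}

    For kilovoltage beams the depth of maximum dose lies essentially at the surface, so the first factor reduces to (f_2/f_1)^2.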

  17. 26 CFR 1.410(b)-5 - Average benefit percentage test.

    Science.gov (United States)

    2010-04-01

    ... benefit percentages may be determined on the basis of any definition of compensation that satisfies § 1... underlying definition of compensation that satisfies section 414(s). Except as otherwise specifically... definitions of section 414(s) compensation in the determination of rates; (B) Use of different definitions of...

  18. 29 CFR 778.503 - Pseudo “percentage bonuses.”

    Science.gov (United States)

    2010-07-01

    ... such a scheme is artificially low, and the difference between the wages paid at the hourly rate and the... Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL... part, a true bonus based on a percentage of total wages—both straight time and overtime wages—satisfies...

  19. 7 CFR 929.49 - Marketable quantity, allotment percentage, and annual allotment.

    Science.gov (United States)

    2010-01-01

    ... AGRICULTURE CRANBERRIES GROWN IN STATES OF MASSACHUSETTS, RHODE ISLAND, CONNECTICUT, NEW JERSEY, WISCONSIN, MICHIGAN, MINNESOTA, OREGON, WASHINGTON, AND LONG ISLAND IN THE STATE OF NEW YORK Order Regulating Handling... history, established pursuant to § 929.48. Such allotment percentage shall be established by the Secretary...

  20. Brief Report: On the Concordance Percentages for Autistic Spectrum Disorder of Twins

    Science.gov (United States)

    Bohm, Henry V.; Stewart, Melbourne G.

    2009-01-01

    In the development of genetic theories of Autistic Spectrum Disorder (ASD) various characteristics of monozygotic (MZ) and dizygotic (DZ) twins are often considered. This paper sets forth a possible refinement in the interpretation of the MZ twin concordance percentages for ASD underlying such genetic theories, and, drawing the consequences from…

  1. Limitations of the relative standard deviation of win percentages for measuring competitive balance in sports leagues

    OpenAIRE

    P. Dorian Owen

    2009-01-01

    The relative standard deviation of win percentages, the most widely used measure of within-season competitive balance, has an upper bound which is very sensitive to variation in the numbers of teams and games played. Taking into account this upper bound provides additional insight into comparisons of competitive balance across leagues or over time.
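
    For reference, the measure under discussion is conventionally defined as the observed standard deviation of win percentages divided by the idealized standard deviation of a perfectly balanced league (this is the standard definition in the competitive-balance literature, restated here for context):

    RSD = \frac{\sigma_{\mathrm{wpct}}}{0.5/\sqrt{G}}

    where G is the number of games played per team. Because the attainable maximum of \sigma_{\mathrm{wpct}} depends on both the number of teams and G, the upper bound of the RSD varies with league design, which is the sensitivity the record highlights.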

  2. Increased percentage of Th17 cells in peritoneal fluid is associated with severity of endometriosis.

    Science.gov (United States)

    Gogacz, Marek; Winkler, Izabela; Bojarska-Junak, Agnieszka; Tabarkiewicz, Jacek; Semczuk, Andrzej; Rechberger, Tomasz; Adamiak, Aneta

    2016-09-01

    Th17 cells are a newly discovered T helper lymphocyte subpopulation producing interleukin IL-17. Th17 cells are present in blood and peritoneal fluid (PF) at different stages of endometriosis. We aim to establish their potential importance in the pathogenesis and clinical features of the disease. The percentage of Th17 cells among T helper lymphocytes was determined in the PF and peripheral blood (PB) of patients with endometriosis and of the control group by flow cytometry using the monoclonal antibodies anti-CD-4-FITC, anti-CD-3-PE/Cy5, and anti-IL-17A-PE. The Th17 percentage is increased in PF in comparison with PB in both endometriotic patients and the control group. In severe endometriosis, the percentage of Th17 cells in PF was higher than in early (I/II stage) endometriosis. A positive correlation between the percentage of Th17 cells in PF and the white blood cell count in PB was found in patients with endometriosis. Targeting the activity of PF Th17 cells may have an influence on the proliferation of ectopic tissue and the clinical manifestations of the disease. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. 78 FR 32991 - Medicaid Program; Increased Federal Medical Assistance Percentage Changes Under the Affordable...

    Science.gov (United States)

    2013-06-03

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Medicare & Medicaid Services 42 CFR Part 433 [CMS-2327-CN] RIN 0938-AR38 Medicaid Program; Increased Federal Medical Assistance Percentage Changes Under the Affordable Care Act of 2010; Correction AGENCY: Centers for Medicare & Medicaid Services (CMS...

  4. Marathon performance in relation to body fat percentage and training indices in recreational male runners

    Directory of Open Access Journals (Sweden)

    Tanda G

    2013-05-01

    Giovanni Tanda (1), Beat Knechtle (2,3); (1) DIME, Università degli Studi di Genova, Genova, Italy; (2) Gesundheitszentrum St Gallen, St Gallen, Switzerland; (3) Institute of General Practice and Health Services Research, University of Zurich, Zurich, Switzerland. Background: The purpose of this study was to investigate the effect of anthropometric characteristics and training indices on marathon race times in recreational male marathoners. Methods: Training and anthropometric characteristics were collected for a large cohort of recreational male runners (n = 126) participating in the Basel marathon in Switzerland between 2010 and 2011. Results: Among the parameters investigated, marathon performance time was found to be affected by mean running speed and the mean weekly distance run during the training period prior to the race and by body fat percentage. The effect of body fat percentage became significant as it exceeded a certain limiting value; for a relatively low body fat percentage, marathon performance time correlated only with training indices. Conclusion: Marathon race time may be predicted (r = 0.81) for recreational male runners by the following equation: marathon race time (minutes) = 11.03 + 98.46 exp(−0.0053 mean weekly training distance [km/week]) + 0.387 mean training pace (sec/km) + 0.1 exp(0.23 body fat percentage [%]). The marathon race time results were valid over a range of 165–266 minutes. Keywords: endurance, exercise, anthropometry

  5. Marathon performance in relation to body fat percentage and training indices in recreational male runners.

    Science.gov (United States)

    Tanda, Giovanni; Knechtle, Beat

    2013-01-01

    The purpose of this study was to investigate the effect of anthropometric characteristics and training indices on marathon race times in recreational male marathoners. Training and anthropometric characteristics were collected for a large cohort of recreational male runners (n = 126) participating in the Basel marathon in Switzerland between 2010 and 2011. Among the parameters investigated, marathon performance time was found to be affected by mean running speed and the mean weekly distance run during the training period prior to the race and by body fat percentage. The effect of body fat percentage became significant as it exceeded a certain limiting value; for a relatively low body fat percentage, marathon performance time correlated only with training indices. Marathon race time may be predicted (r = 0.81) for recreational male runners by the following equation: marathon race time (minutes) = 11.03 + 98.46 exp(-0.0053 mean weekly training distance [km/week]) + 0.387 mean training pace (sec/km) + 0.1 exp(0.23 body fat percentage [%]). The marathon race time results were valid over a range of 165-266 minutes.
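
    A direct transcription of the reported prediction equation into Python (the coefficients are exactly those quoted in the abstract; the authors state validity only for race times of 165-266 minutes):

    import math

    def predicted_marathon_time(weekly_km, pace_sec_per_km, body_fat_pct):
        """Predicted marathon race time in minutes (reported correlation r = 0.81)."""
        return (11.03
                + 98.46 * math.exp(-0.0053 * weekly_km)
                + 0.387 * pace_sec_per_km
                + 0.1 * math.exp(0.23 * body_fat_pct))

    # Example: 70 km/week at a 330 s/km training pace with 15% body fat.
    print(round(predicted_marathon_time(70, 330, 15)))  # roughly 210 minutes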

  6. The Role of Monocyte Percentage in Osteoporosis in Male Rheumatic Diseases.

    Science.gov (United States)

    Su, Yu-Jih; Chen, Chao Tung; Tsai, Nai-Wen; Huang, Chih-Cheng; Wang, Hung-Chen; Kung, Chia-Te; Lin, Wei-Che; Cheng, Ben-Chung; Su, Chih-Min; Hsiao, Sheng-Yuan; Lu, Cheng-Hsien

    2017-11-01

    Osteoporosis is easily overlooked in male patients, especially in rheumatic diseases, which are mostly prevalent in female patients, and its link to pathogenesis is still poorly understood. Attenuated monocyte apoptosis in a transcriptome-wide expression study illustrates the role of monocytes in osteoporosis. This study tested the hypothesis that the monocyte percentage among leukocytes could be a biomarker of osteoporosis in rheumatic diseases. Eighty-seven males with rheumatic diseases were evaluated in rheumatology outpatient clinics for bone mineral density (BMD) and surrogate markers, such as routine peripheral blood parameters and autoantibodies. Of the 87 patients included in this study, only 15 met the criteria for a diagnosis of osteoporosis. Both age and monocyte percentage remained independently associated with the presence of osteoporosis. Steroid dose (equivalent prednisolone dose) was negatively associated with BMD of the hip area, and platelet counts were negatively associated with BMD and T score of the spine area. Besides age, monocyte percentage meets the major requirements for a marker of osteoporosis in male rheumatic diseases. In male rheumatic disease patients with a higher monocyte percentage, aged over 50 years in this study, a BMD study should be considered in order to reduce the risk of osteoporosis-related fractures.

  7. New loci for body fat percentage reveal link between adiposity and cardiometabolic disease risk

    DEFF Research Database (Denmark)

    Lu, Yingchang; Day, Felix R; Gustafsson, Stefan

    2016-01-01

    To increase our understanding of the genetic basis of adiposity and its links to cardiometabolic disease risk, we conducted a genome-wide association meta-analysis of body fat percentage (BF%) in up to 100,716 individuals. Twelve loci reached genome-wide significance (P < 5 × 10⁻⁸), of which eigh...

  8. Method for quantifying percentage wood failure in block-shear specimens by a laser scanning profilometer

    Science.gov (United States)

    C. T. Scott; R. Hernandez; C. Frihart; R. Gleisner; T. Tice

    2005-01-01

    A new method for quantifying percentage wood failure of an adhesively bonded block-shear specimen has been developed. This method incorporates a laser displacement gage with an automated two-axis positioning system that functions as a highly sensitive profilometer. The failed specimen is continuously scanned across its width to obtain a surface failure profile. The...

  9. 45 CFR 305.33 - Determination of applicable percentages based on performance levels.

    Science.gov (United States)

    2010-10-01

    ..., DEPARTMENT OF HEALTH AND HUMAN SERVICES PROGRAM PERFORMANCE MEASURES, STANDARDS, FINANCIAL INCENTIVES, AND PENALTIES § 305.33 Determination of applicable percentages based on performance levels. (a) A State's... performance levels. 305.33 Section 305.33 Public Welfare Regulations Relating to Public Welfare OFFICE OF...

  10. Validation of Field Methods to Assess Body Fat Percentage in Elite Youth Soccer Players.

    Science.gov (United States)

    Munguia-Izquierdo, Diego; Suarez-Arrones, Luis; Di Salvo, Valter; Paredes-Hernandez, Victor; Alcazar, Julian; Ara, Ignacio; Kreider, Richard; Mendez-Villanueva, Alberto

    2018-05-01

    This study determined the most effective field method for quantifying body fat percentage in male elite youth soccer players and developed prediction equations based on anthropometric variables. Forty-four male elite-standard youth soccer players aged 16.3-18.0 years underwent body fat percentage assessments, including bioelectrical impedance analysis and the calculation of various skinfold-based prediction equations. Dual X-ray absorptiometry provided a criterion measure of body fat percentage. Correlation coefficients, bias, limits of agreement, and differences were used as validity measures, and regression analyses were used to develop soccer-specific prediction equations. The equations from Sarria et al. (1998) and Durnin & Rahaman (1967) reached very large correlations and the lowest biases, and they reached neither the practically worthwhile difference nor the substantial difference between methods. The new youth soccer-specific skinfold equation included a combination of triceps and supraspinale skinfolds. None of the practical methods compared in this study are adequate for estimating body fat percentage in male elite youth soccer players, except for the equations from Sarria et al. (1998) and Durnin & Rahaman (1967). The new youth soccer-specific equation calculated in this investigation is the only field method specifically developed and validated in elite male players, and it shows potentially good predictive power. © Georg Thieme Verlag KG Stuttgart · New York.

  11. Percentage of Protected Area Amounts within each Watershed Boundary for the Conterminous US

    Science.gov (United States)

    Abstract: This dataset uses spatial information from the Watershed Boundary Dataset (WBD, March 2011) and the Protected Areas Database of the United States (PAD-US Version 1.0). The resulting data layer, with percentages of protected areas by category, was created using the ATtI...

  12. Percentage and function of CD4+CD25+ regulatory T cells in patients with hyperthyroidism

    Science.gov (United States)

    Jiang, Ting-Jun; Cao, Xue-Liang; Luan, Sha; Cui, Wan-Hui; Qiu, Si-Huang; Wang, Yi-Chao; Zhao, Chang-Jiu; Fu, Peng

    2018-01-01

    The current study observed the percentage of peripheral blood (PB) CD4+CD25+ regulatory T cells (Tregs) and the influence of CD4+CD25+ Tregs on the proliferation of naïve CD4 T cells in patients with hyperthyroidism. Furthermore, preliminary discussions are presented on the action mechanism of CD4+CD25+ Tregs on hyperthyroidism attacks. The present study identified that compared with the percentage of PB CD4+CD25+ Tregs in healthy control subjects, no significant changes were observed in the percentage of PB CD4+CD25+ Tregs in patients with hyperthyroidism (P>0.05). For patients with hyperthyroidism, CD4+CD25+ Tregs exhibited significantly reduced inhibition of the proliferation of naïve CD4 T cells and decreased secretion capacity on the cytokines of CD4 T cells, compared with those of healthy control subjects (Phyperthyroidism was significantly improved (Phyperthyroidism before treatment, no significant changes were observed in the percentage of PB CD4+CD25+ Tregs in hyperthyroidism patients following treatment (P>0.05). In the patients with hyperthyroidism, following treatment, CD4+CD25+ Tregs exhibited significantly increased inhibition of the proliferation of naïve CD4 T cells and increased secretion capacity of CD4 T cell cytokines, compared with those of the patients with hyperthyroidism prior to treatment (Phyperthyroidism, and its non-proportional decrease may be closely associated with the occurrence and progression of hyperthyroidism. PMID:29207121

  13. PERCENTAGE OF VIABLE SPERMATOZOA COLLECTED FROM THE EPIDIDYMES OF DEATH LOCAL DOG

    Directory of Open Access Journals (Sweden)

    I Nyoman Sulabda

    2012-11-01

    The purpose of this study was to determine the effect of postmortem time on the percentage of live epididymal sperm from postmortem dog caudae epididymides. A total of 9 dogs were used and divided into three groups: T0 was the control group, T1 was 3 hours postmortem and T2 was 6 hours postmortem. This way, samples were obtained at different times postmortem. Sperm were extracted from the caudae epididymides by means of cuts. The results showed that the percentages of live sperm were 67.16 ± 5.67 (T0), 46.33 ± 5.60 (T1) and 24.00 ± 4.35 (T2), respectively. The percentage of live sperm was affected by postmortem time; there was a significant decrease in live sperm recovered from the epididymides postmortem (P<0.01). In conclusion, epididymal sperm from dogs undergo a decrease in the percentage of live cells, but it can stay acceptable for many hours postmortem. We interpreted these data to indicate that it may still be possible to obtain viable spermatozoa many hours later.

  14. 39 CFR 3010.23 - Calculation of percentage change in rates.

    Science.gov (United States)

    2010-07-01

    ... DOMINANT PRODUCTS Rules for Applying the Price Cap § 3010.23 Calculation of percentage change in rates. (a... Postal Service billing determinants. The Postal Service shall make reasonable adjustments to the billing determinants to account for the effects of classification changes such as the introduction, deletion, or...

  15. 13 CFR 108.1840 - Computation of NMVC Company's Capital Impairment Percentage.

    Science.gov (United States)

    2010-01-01

    ... Capital Impairment Percentage. 108.1840 Section 108.1840 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION NEW MARKETS VENTURE CAPITAL ("NMVC") PROGRAM NMVC Company's Noncompliance With Terms of Leverage Computation of NMVC Company's Capital Impairment § 108.1840 Computation of NMVC Company's Capital Impairment...

  16. 26 CFR 1.42-8 - Election of appropriate percentage month.

    Science.gov (United States)

    2010-04-01

    ... Section 1.42-8 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY INCOME TAX INCOME TAXES Credits Against Tax § 1.42-8 Election of appropriate percentage month. (a) Election under section... previously placed in service under section 42(e). (5) Amount allocated. The housing credit dollar amount...

  18. Total and Lower Extremity Lean Mass Percentage Positively Correlates With Jump Performance.

    Science.gov (United States)

    Stephenson, Mitchell L; Smith, Derek T; Heinbaugh, Erika M; Moynes, Rebecca C; Rockey, Shawn S; Thomas, Joi J; Dai, Boyi

    2015-08-01

    Strength and power have been identified as valuable components in both athletic performance and daily function. A major component of strength and power is the muscle mass, which can be assessed with dual-energy x-ray absorptiometry (DXA). The primary purpose of this study was to quantify the relationship between total body lean mass percentage (TBLM%) and lower extremity lean mass percentage (LELM%) and lower extremity force/power production during a countermovement jump (CMJ) in a general population. Researchers performed a DXA analysis on 40 younger participants aged 18-35 years, 28 middle-aged participants aged 36-55 years, and 34 older participants aged 56-75 years. Participants performed 3 CMJ on force platforms. Correlations revealed significant and strong relationships between TBLM% and LELM% compared with CMJ normalized peak vertical ground reaction force (p lean mass percentages. The findings have implications in including DXA-assessed lean mass percentage as a component for evaluating lower extremity strength and power. A paired DXA analysis and CMJ jump test may be useful for identifying neuromuscular deficits that limit performance.

  19. A low-power CMOS integrated sensor for CO2 detection in the percentage range

    NARCIS (Netherlands)

    Humbert, A.; Tuerlings, B.J.; Hoofman, R.J.O.M.; Tan, Z.; Gravesteijn, D.J.; Pertijs, M.A.P.; Bastiaansen, C.W.M.; Soccol, D.

    2013-01-01

    Within the Catrene project PASTEUR, a low-cost, low-power capacitive carbon dioxide sensor has been developed for tracking CO2 concentration in the percentage range. This paper describes this sensor, which operates at room temperature where it exhibits short response times as well as reversible

  20. Annual Percentage Rate and Annual Effective Rate: Resolving Confusion in Intermediate Accounting Textbooks

    Science.gov (United States)

    Vicknair, David; Wright, Jeffrey

    2015-01-01

    Evidence of confusion in intermediate accounting textbooks regarding the annual percentage rate (APR) and annual effective rate (AER) is presented. The APR and AER are briefly discussed in the context of a note payable, and correct formulas for computing each are provided. Representative examples of the types of confusion that we found are presented…
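
    The distinction at issue can be summarized by the standard conversion between the two rates for m compounding periods per year (a generic finance relation, not a quotation from the textbooks under review); a minimal Python illustration:

    def annual_effective_rate(apr, periods_per_year):
        """Convert a nominal annual percentage rate (APR), compounded
        'periods_per_year' times per year, into the annual effective rate (AER)."""
        return (1 + apr / periods_per_year) ** periods_per_year - 1

    # Example: a note payable quoting a 12% APR compounded monthly.
    print(annual_effective_rate(0.12, 12))  # about 0.1268, i.e. an AER of roughly 12.68%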