WorldWideScience

Sample records for multiple comparison correction

  1. An open-source software program for performing Bonferroni and related corrections for multiple comparisons

    Directory of Open Access Journals (Sweden)

    Kyle Lesack

    2011-01-01

    Full Text Available Increased type I error resulting from multiple statistical comparisons remains a common problem in the scientific literature. This may result in the reporting and promulgation of spurious findings. One approach to this problem is to correct groups of P-values for "family-wide significance" using a Bonferroni correction or the less conservative Bonferroni-Holm correction, or to control the "false discovery rate" with a Benjamini-Hochberg correction. Although several commercially available software packages can perform these corrections, there are no widely available, easy-to-use open-source programs for these calculations. In this paper we present an open-source program, written in Python 3.2, that performs standard Bonferroni, Bonferroni-Holm and Benjamini-Hochberg corrections.
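
    The three procedures named in this abstract are easy to express directly. The following Python sketch illustrates the standard formulas; it is not the authors' published program.

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject H0 where the Bonferroni-adjusted P-value falls below alpha."""
    p = np.asarray(pvals, dtype=float)
    return np.minimum(p * p.size, 1.0) < alpha

def holm(pvals, alpha=0.05):
    """Bonferroni-Holm step-down procedure."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = p.size
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if p[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # stop at the first non-significant P-value
    return reject

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure controlling the FDR at level q."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = p.size
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()  # largest rank meeting the criterion
        reject[order[:k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.22]
print(bonferroni(pvals), holm(pvals), benjamini_hochberg(pvals))
```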

  2. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    Science.gov (United States)

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that the permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found that reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.

  3. Coulomb correction to the screening angle of the Moliere multiple scattering theory

    International Nuclear Information System (INIS)

    Kuraev, E.A.; Voskresenskaya, O.O.; Tarasov, A.V.

    2012-01-01

    The Coulomb correction to the screening angular parameter of the Moliere multiple scattering theory is found. Numerical calculations are presented for nuclear charges in the range 4 ≤ Z ≤ 82. Comparison with the Moliere result for the screening angle reveals deviations of up to 30% for sufficiently heavy elements of the target material.

  4. Atmospheric Correction Inter-Comparison Exercise

    Directory of Open Access Journals (Sweden)

    Georgia Doxani

    2018-02-01

    Full Text Available The Atmospheric Correction Inter-comparison eXercise (ACIX) is an international initiative with the aim to analyse the Surface Reflectance (SR) products of various state-of-the-art atmospheric correction (AC) processors. The Aerosol Optical Thickness (AOT) and Water Vapour (WV) are also examined in ACIX as additional outputs of AC processing. In this paper, the general ACIX framework is discussed; special mention is made of the motivation to initiate the experiment, the inter-comparison protocol, and the principal results. ACIX was free and open, and every developer was welcome to participate. Eventually, 12 participants applied their approaches to various Landsat-8 and Sentinel-2 image datasets acquired over sites around the world. The current results diverge depending on the sensors, products, and sites, indicating their strengths and weaknesses. Indeed, this first implementation of processor inter-comparison proved to be a good lesson for the developers, helping them learn the advantages and limitations of their approaches. Various algorithm improvements are expected, if not already implemented, and the enhanced performances are yet to be assessed in future ACIX experiments.

  5. Correction of rhodium detector signals for comparison to design calculations

    International Nuclear Information System (INIS)

    Judd, J.L.; Chang, R.Y.; Gabel, C.W.

    1989-01-01

    Rhodium detectors are used in many commercial pressurized water reactors (PWRs) as in-core neutron detectors. The signals from the detectors are the result of neutron absorption in ¹⁰³Rh and the subsequent beta decay of ¹⁰⁴Rh to ¹⁰⁴Pd. The rhodium depletes ∼1% per full-power month, so corrections to the detector signal are necessary to account for the effects of the rhodium depletion. These corrections result from the change in detector self-shielding with rhodium burnup and the change in rhodium concentration itself. Correction for the change in rhodium concentration is done by multiplication by the factor N(t)/N₀, where N(t) is the rhodium concentration at time t and N₀ is the initial rhodium concentration. The calculation of the self-shielding factor is more complicated and is presented. A self-shielding factor based on the fraction of rhodium remaining was calculated with the CASMO-3 code. The results obtained from our comparisons of predicted and measured in-core detector signals show that the CASMO-3/SIMULATE-3 code package is an effective tool for estimating pin peaking and power distributions.
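
    The depletion part of the correction reduces to simple arithmetic. The sketch below combines the N(t)/N₀ ratio with a self-shielding factor; the functional form of the self-shielding model is a made-up placeholder for the CASMO-3-derived values and is shown only to illustrate how the two factors combine.

```python
import math

def rhodium_correction_factor(n_t, n_0, self_shielding):
    """Combined correction factor: depletion ratio times a self-shielding factor.

    `self_shielding` is a stand-in for the CASMO-3-derived factor, here modeled
    (purely for illustration) as a smooth function of the fraction remaining.
    """
    fraction_remaining = n_t / n_0
    return fraction_remaining * self_shielding(fraction_remaining)

# Hypothetical self-shielding model: the shielding effect relaxes as rhodium depletes.
shielding = lambda f: 0.90 + 0.10 * (1.0 - f)

# ~1% depletion per full-power month -> roughly 89% of the rhodium left after a year.
fraction = math.exp(-0.01 * 12)
print(rhodium_correction_factor(fraction, 1.0, shielding))
```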

  6. Evaluation of multiple protein docking structures using correctly predicted pairwise subunits

    Directory of Open Access Journals (Sweden)

    Esquivel-Rodríguez Juan

    2012-03-01

    Full Text Available Abstract Background Many functionally important proteins in a cell form complexes with multiple chains. Therefore, computational prediction of multiple protein complexes is an important task in bioinformatics. In the development of multiple protein docking methods, it is important to establish a metric for evaluating prediction results in a reasonable and practical fashion. However, since only a few methods for multiple protein docking have been developed, no study has investigated how accurate structural models of multiple protein complexes need to be to allow scientists to gain biological insights. Methods We generated a series of predicted models (decoys) of various accuracies with our multiple protein docking pipeline, Multi-LZerD, for three multi-chain complexes with 3, 4, and 6 chains. We analyzed the decoys in terms of the number of correctly predicted pair conformations in the decoys. Results and conclusion We found that pairs of chains with the correct mutual orientation exist even in decoys with a large overall root mean square deviation (RMSD) to the native. Therefore, in addition to a global structure similarity measure, such as the global RMSD, the quality of models for multiple chain complexes can be better evaluated by using a local measurement, the number of chain pairs with correct mutual orientation. We termed the fraction of correctly predicted pairs (RMSD at the interface of less than 4.0 Å) fpair and propose to use it for evaluating the accuracy of multiple protein docking.
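
    The fpair measure amounts to counting chain pairs whose interface RMSD to the native complex is below the 4.0 Å cut-off. A minimal sketch (interface RMSDs are assumed to be computed elsewhere):

```python
def f_pair(interface_rmsds, cutoff=4.0):
    """Fraction of chain pairs predicted with the correct mutual orientation.

    `interface_rmsds` maps each chain pair, e.g. ("A", "B"), to the RMSD (in
    angstroms) of its interface relative to the native complex.
    """
    if not interface_rmsds:
        return 0.0
    correct = sum(1 for rmsd in interface_rmsds.values() if rmsd < cutoff)
    return correct / len(interface_rmsds)

# A hypothetical 4-chain decoy: 6 pairwise interfaces, 4 of them near-native.
decoy = {("A", "B"): 1.2, ("A", "C"): 2.8, ("A", "D"): 9.5,
         ("B", "C"): 3.1, ("B", "D"): 12.0, ("C", "D"): 0.9}
print(f_pair(decoy))  # 4/6 ~ 0.67
```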

  7. Should methods of correction for multiple comparisons be applied in pharmacovigilance?

    Directory of Open Access Journals (Sweden)

    Lorenza Scotti

    2015-12-01

    Full Text Available Purpose. In pharmacovigilance, spontaneous reporting databases are devoted to the early detection of adverse event 'signals' of marketed drugs. A common limitation of these systems is the large number of concurrently investigated associations, implying a high probability of generating positive signals simply by chance. However, it is not clear whether methods aimed at adjusting for the multiple testing problem are needed when at least some of the drug-outcome relationships under study are already known. To this aim we applied a robust estimation method for the FDR (rFDR) particularly suitable in the pharmacovigilance context. Methods. We exploited the data available for the SAFEGUARD project to apply the rFDR estimation method to detect potential false positive signals of adverse reactions attributable to the use of non-insulin blood glucose lowering drugs. Specifically, the number of signals generated from the conventional disproportionality measures was compared with the number remaining after application of the rFDR adjustment method. Results. Among the 311 evaluable pairs (i.e., drug-event pairs with at least one adverse event report), 106 (34%) signals were considered significant in the conventional analysis. Among them, 1 was classified as a false positive signal according to the rFDR method. Conclusions. The results of this study suggest that when a restricted number of drug-outcome pairs is considered and warnings about some of them are already known, multiple comparison methods for recognizing false positive signals may not be as useful as theoretical considerations suggest.

  8. Published GMO studies find no evidence of harm when corrected for multiple comparisons.

    Science.gov (United States)

    Panchin, Alexander Y; Tuzhikov, Alexander I

    2017-03-01

    A number of widely debated research articles claiming possible technology-related health concerns have influenced public opinion on genetically modified food safety. We performed a statistical reanalysis and review of experimental data presented in some of these studies and found that, quite often and in contradiction with the authors' conclusions, the data actually provide weak evidence of harm that cannot be differentiated from chance. In our opinion, the problem of statistically unaccounted multiple comparisons has led to some of the most cited anti-genetically modified organism health claims in history. We hope this analysis puts the original results of these studies into proper context.

  9. All of the above: When multiple correct response options enhance the testing effect.

    Science.gov (United States)

    Bishara, Anthony J; Lanzo, Lauren A

    2015-01-01

    Previous research has shown that multiple choice tests often improve memory retention. However, the presence of incorrect lures often attenuates this memory benefit. The current research examined the effects of "all of the above" (AOTA) options. When such options are correct, no incorrect lures are present. In the first three experiments, a correct AOTA option on an initial test led to a larger memory benefit than no test and standard multiple choice test conditions. The benefits of a correct AOTA option occurred even without feedback on the initial test; for both 5-minute and 48-hour retention delays; and for both cued recall and multiple choice final test formats. In the final experiment, an AOTA question led to better memory retention than did a control condition that had identical timing and exposure to response options. However, the benefits relative to this control condition were similar regardless of the type of multiple choice test (AOTA or not). Results suggest that retrieval contributes to multiple choice testing effects. However, the extra testing effect from a correct AOTA option, rather than being due to more retrieval, might be due simply to more exposure to correct information.

  10. Calculation of the flux attenuation and multiple scattering correction factors in time of flight technique for double differential cross section measurements

    International Nuclear Information System (INIS)

    Martin, G.; Coca, M.; Capote, R.

    1996-01-01

    Using the Monte Carlo technique, a computer code was developed that simulates the time-of-flight experiment for measuring double differential cross sections. The correction factors for flux attenuation and multiple scattering, which distort the measured spectrum, were calculated. The energy dependence of the correction factor was determined and a comparison with other works is shown. Calculations for ⁵⁶Fe at two different scattering angles were made. We also reproduce the experiment performed at the Nuclear Analysis Laboratory for ¹²C at 25 °C, and the calculated correction factor for the measured spectrum is shown. We found a linear relation between the scatterer size and the flux attenuation correction factor.
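
    As a rough illustration of the kind of factor being computed (a toy model, not the authors' code), a Monte Carlo estimate of flux attenuation for a slab scatterer in transmission geometry might look like this:

```python
import numpy as np

def flux_attenuation_factor(sigma, thickness, angle_deg, n=200_000, seed=1):
    """Toy Monte Carlo estimate of the flux-attenuation factor for a slab scatterer.

    A scattering point is sampled uniformly along the incident path; the factor is
    the average probability that a neutron reaches that point and then leaves the
    slab at the given scattering angle without a further interaction.
    sigma: total macroscopic cross section (1/cm); thickness in cm.
    """
    rng = np.random.default_rng(seed)
    depth = rng.uniform(0.0, thickness, n)                 # depth of scattering point
    path_out = (thickness - depth) / abs(np.cos(np.radians(angle_deg)))
    return np.mean(np.exp(-sigma * (depth + path_out)))

# Hypothetical numbers: 1 cm iron-like slab, Sigma ~ 1.1 1/cm, 25 degree scattering angle.
print(flux_attenuation_factor(1.1, 1.0, 25.0))
```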

  11. A quantitative comparison of corrective and perfective maintenance

    Science.gov (United States)

    Henry, Joel; Cain, James

    1994-01-01

    This paper presents a quantitative comparison of corrective and perfective software maintenance activities. The comparison utilizes basic data collected throughout the maintenance process. The data collected are extensive and allow the impact of both types of maintenance to be quantitatively evaluated and compared. Basic statistical techniques test relationships between and among process and product data. The results show interesting similarities and important differences in both process and product characteristics.

  12. Multiple testing corrections in quantitative proteomics: A useful but blunt tool.

    Science.gov (United States)

    Pascovici, Dana; Handler, David C L; Wu, Jemma X; Haynes, Paul A

    2016-09-01

    Multiple testing corrections are a useful tool for restricting the FDR, but can be blunt in the context of low power, as we demonstrate by a series of simple simulations. Unfortunately, in proteomics experiments low power can be common, driven by proteomics-specific issues such as small effects due to ratio compression, and few replicates due to high reagent cost, limited instrument time availability and other issues; in such situations, most multiple testing correction methods, if used with conventional thresholds, will fail to detect any true positives even when many exist. In this low power, medium scale situation, other methods such as effect size considerations or peptide-level calculations may be a more effective option, even if they do not offer the same theoretical guarantee of a low FDR. Thus, we aim to highlight in this article that proteomics presents some specific challenges to the standard multiple testing correction methods, which should be employed as a useful tool but not be regarded as a required rubber stamp. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
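
    The low-power point is easy to reproduce with a small simulation of the kind the authors describe. In the sketch below (arbitrary illustrative parameters, not the published simulation), three replicates and a modest effect size typically leave Benjamini-Hochberg with no discoveries at the conventional 5% threshold, even though hundreds of true effects are present.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_proteins, n_true, n_rep, effect = 2000, 200, 3, 0.5   # illustrative values only

# Log-ratios: most proteins unchanged, a subset shifted by a modest effect.
data = rng.normal(0.0, 1.0, (n_proteins, n_rep))
data[:n_true] += effect

pvals = stats.ttest_1samp(data, 0.0, axis=1).pvalue

# Benjamini-Hochberg step-up at q = 0.05.
order = np.argsort(pvals)
thresh = 0.05 * np.arange(1, n_proteins + 1) / n_proteins
passed = np.nonzero(pvals[order] <= thresh)[0]
n_discoveries = 0 if passed.size == 0 else passed.max() + 1
print("true effects present:", n_true, " BH discoveries:", n_discoveries)
```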

  13. Correction factor for the experimental prompt neutron decay constant

    International Nuclear Information System (INIS)

    Talamo, Alberto; Gohar, Y.; Sadovich, S.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.

    2013-01-01

    Highlights: • Definition of a spatial correction factor for the experimental prompt neutron decay constant. • Introduction of a MCNP6 calculation methodology to simulate Rossi-alpha distribution for pulsed neutron sources. • Comparison of MCNP6 results with experimental data for count rate, Rossi-alpha, and Feynman-alpha distributions. • Improvement of the comparison between numerical and experimental results by taking into account the dead-time effect. - Abstract: This study introduces a new correction factor to obtain the experimental effective multiplication factor of subcritical assemblies by the point kinetics formulation. The correction factor is defined as the ratio between the MCNP6 prompt neutron decay constant obtained in criticality mode and the one obtained in source mode. The correction factor mainly takes into account the longer neutron lifetime in the reflector region and the effects of the external neutron source. For the YALINA Thermal facility, the comparison between the experimental and computational effective multiplication factors noticeably improves after the application of the correction factor. The accuracy of the MCNP6 computational model of the YALINA Thermal subcritical assembly has been verified by reproducing the neutron count rate, Rossi-α, and Feynman-α distributions obtained from the experimental data

  14. Simplified fringe order correction for absolute phase maps recovered with multiple-spatial-frequency fringe projections

    International Nuclear Information System (INIS)

    Ding, Yi; Peng, Kai; Lu, Lei; Zhong, Kai; Zhu, Ziqi

    2017-01-01

    Various kinds of fringe order errors may occur in absolute phase maps recovered with multi-spatial-frequency fringe projections. In existing methods, multiple successive pixels corrupted by fringe order errors are detected and corrected pixel by pixel with repeated searches, which is inefficient in practice. To improve the efficiency of correcting multiple successive fringe order errors, in this paper we propose a method that simplifies error detection and correction by exploiting the stepwise increasing property of the fringe order. In the proposed method, the number of pixels in each step is estimated to find the possible true fringe order values, so that the repeated searches involved in detecting multiple successive errors can be avoided and errors can be corrected efficiently. The effectiveness of our proposed method is validated by experimental results. (paper)

  15. Method for measuring multiple scattering corrections between liquid scintillators

    Energy Technology Data Exchange (ETDEWEB)

    Verbeke, J.M., E-mail: verbeke2@llnl.gov; Glenn, A.M., E-mail: glenn22@llnl.gov; Keefer, G.J., E-mail: keefer1@llnl.gov; Wurtz, R.E., E-mail: wurtz1@llnl.gov

    2016-07-21

    A time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.

  16. GEO-LEO reflectance band inter-comparison with BRDF and atmospheric scattering corrections

    Science.gov (United States)

    Chang, Tiejun; Xiong, Xiaoxiong Jack; Keller, Graziela; Wu, Xiangqian

    2017-09-01

    The inter-comparison of the reflective solar bands between instruments onboard a geostationary orbit satellite and a low Earth orbit satellite is very helpful for assessing their calibration consistency. GOES-R was launched on November 19, 2016, and Himawari 8 was launched on October 7, 2014. Unlike the previous GOES instruments, the Advanced Baseline Imager on GOES-16 (GOES-R became GOES-16 after it reached orbit on November 29) and the Advanced Himawari Imager (AHI) on Himawari 8 have onboard calibrators for the reflective solar bands. The assessment of calibration is important for their product quality enhancement. MODIS and VIIRS, with their stringent calibration requirements and excellent on-orbit calibration performance, provide good references. The simultaneous nadir overpass (SNO) and ray-matching are widely used inter-comparison methods for reflective solar bands. In this work, the inter-comparisons are performed over a pseudo-invariant target. The use of stable and uniform calibration sites provides comparison at an appropriate reflectance level, accurate adjustment for band spectral coverage differences, reduced impact from pixel mismatching, and consistency of BRDF and atmospheric correction. The site in this work is a desert site in Australia (latitude 29.0° S, longitude 139.8° E). Due to the difference in solar and view angles, two corrections are applied to obtain comparable measurements. The first is the atmospheric scattering correction. The satellite sensor measurements are top-of-atmosphere reflectance. The scattering, especially Rayleigh scattering, should be removed, allowing the ground reflectance to be derived. Secondly, the angle differences magnify the BRDF effect. The ground reflectance should be corrected to obtain comparable measurements. The atmospheric correction is performed using a vector version of the Second Simulation of a Satellite Signal in the Solar Spectrum modeling and BRDF correction is performed using a semi

  17. Correction of measured multiplicity distributions by the simulated annealing method

    International Nuclear Information System (INIS)

    Hafidouni, M.

    1993-01-01

    Simulated annealing is a method used to solve combinatorial optimization problems. It is used here for the correction of the observed multiplicity distribution from S-Pb collisions at 200 GeV/c per nucleon. (author) 11 refs., 2 figs

  18. Experimental evaluation of the extended Dytlewski-style dead time correction formalism for neutron multiplicity counting

    Science.gov (United States)

    Lockhart, M.; Henzlova, D.; Croft, S.; Cutler, T.; Favalli, A.; McGahee, Ch.; Parker, R.

    2018-01-01

    Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead-time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which likewise have only recently been formulated. The current paper discusses and presents the experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, ²⁵²Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. In order to assess the DCF dead time correction, the corrected data are compared to the traditional dead time correction treatment within INCC. The DCF dead time correction is found to provide adequate dead time treatment for the broad range of count rates available in practical applications.

  19. Multiple-scattering corrections to the Beer-Lambert law

    International Nuclear Information System (INIS)

    Zardecki, A.

    1983-01-01

    The effect of multiple scattering on the validity of the Beer-Lambert law is discussed for a wide range of particle-size parameters and optical depths. To predict the amount of received radiant power, appropriate correction terms are introduced. For particles larger than or comparable to the wavelength of radiation, the small-angle approximation is adequate; whereas for small densely packed particles, the diffusion theory is advantageously employed. These two approaches are used in the context of the problem of laser-beam propagation in a dense aerosol medium. In addition, preliminary results obtained by using a two-dimensional finite-element discrete-ordinates transport code are described. Multiple-scattering effects for laser propagation in fog, cloud, rain, and aerosol cloud are modeled
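
    For orientation, the uncorrected law is simply P = P0 exp(-tau). The sketch below contrasts it with a received power that keeps part of the scattered light; the constant forward-scattered fraction is an invented stand-in for the paper's correction terms, which in the paper depend on the small-angle or diffusion solutions.

```python
import numpy as np

def beer_lambert(p0, tau):
    """Unscattered (singly attenuated) power after optical depth tau."""
    return p0 * np.exp(-tau)

def received_power(p0, tau, fraction_forward=0.3):
    """Toy corrected estimate: part of the scattered light still reaches an open
    detector, so the received power exceeds the Beer-Lambert value.
    `fraction_forward` is an illustrative placeholder, not the paper's correction term."""
    scattered = p0 * (1.0 - np.exp(-tau))
    return beer_lambert(p0, tau) + fraction_forward * scattered

for tau in (0.5, 2.0, 5.0):
    print(tau, beer_lambert(1.0, tau), received_power(1.0, tau))
```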

  20. Aethalometer multiple scattering correction Cref for mineral dust aerosols

    Science.gov (United States)

    Di Biagio, Claudia; Formenti, Paola; Cazaunau, Mathieu; Pangui, Edouard; Marchand, Nicolas; Doussin, Jean-François

    2017-08-01

    In this study we provide a first estimate of the Aethalometer multiple scattering correction Cref for mineral dust aerosols. Cref is an empirical constant used to correct the aerosol absorption coefficient measurements for the multiple scattering artefact of the Aethalometer; i.e. the filter fibres on which aerosols are deposited scatter light and this is miscounted as absorption. The Cref at 450 and 660 nm was obtained from the direct comparison of Aethalometer data (Magee Sci. AE31) with (i) the absorption coefficient calculated as the difference between the extinction and scattering coefficients measured by a Cavity Attenuated Phase Shift Extinction analyser (CAPS PMex) and a nephelometer respectively at 450 nm and (ii) the absorption coefficient from a MAAP (Multi-Angle Absorption Photometer) at 660 nm. Measurements were performed on seven dust aerosol samples generated in the laboratory by the mechanical shaking of natural parent soils issued from different source regions worldwide. The single scattering albedo (SSA) at 450 and 660 nm and the size distribution of the aerosols were also measured. Cref for mineral dust varies between 1.81 and 2.56 for a SSA of 0.85-0.96 at 450 nm and between 1.75 and 2.28 for a SSA of 0.98-0.99 at 660 nm. The calculated mean for dust is 2.09 (±0.22) at 450 nm and 1.92 (±0.17) at 660 nm. With this new Cref the dust absorption coefficient by the Aethalometer is about 2 % (450 nm) and 11 % (660 nm) higher than that obtained by using Cref = 2.14 at both 450 and 660 nm, as usually assumed in the literature. This difference induces a change of up to 3 % in the dust SSA at 660 nm. The Cref seems to be independent of the fine and coarse particle size fractions, and so the obtained Cref can be applied to dust both close to sources and following transport. Additional experiments performed with pure kaolinite minerals and polluted ambient aerosols indicate Cref of 2.49 (±0.02) and 2.32 (±0.01) at 450 and 660 nm (SSA = 0.96-0.97) for

  1. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    International Nuclear Information System (INIS)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo; Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro; Kato, Rikio

    2005-01-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with ⁹⁹ᵐTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and ⁹⁹ᵐTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)
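
    The IBSC estimate described here (convolve the attenuation-corrected image with a scatter function, then scale by a scatter fraction) maps onto a few lines of array code. In the sketch below the Gaussian kernel width and the constant scatter fraction are placeholders, not the published parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ibsc_correct(image_ac, scatter_sigma=8.0, scatter_fraction=0.3):
    """Image-based scatter correction sketch.

    image_ac: reconstructed image with Chang attenuation correction applied.
    The scatter component is estimated by blurring image_ac with a scatter
    function (here a Gaussian of width `scatter_sigma`) and multiplying by a
    scatter-fraction term (here a constant instead of an image-based function).
    """
    scatter_estimate = scatter_fraction * gaussian_filter(image_ac, scatter_sigma)
    return image_ac - scatter_estimate

# Hypothetical 64x64 "brain" slice: a uniform activity disc.
y, x = np.mgrid[:64, :64]
phantom = ((x - 32) ** 2 + (y - 32) ** 2 < 20 ** 2).astype(float)
print(ibsc_correct(phantom).max())
```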

  2. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method.

    Science.gov (United States)

    Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.

  3. Multiple scattering corrections to the Beer-Lambert law. 1: Open detector.

    Science.gov (United States)

    Tam, W G; Zardecki, A

    1982-07-01

    Multiple scattering corrections to the Beer-Lambert law are analyzed by means of a rigorous small-angle solution to the radiative transfer equation. Transmission functions for predicting the received radiant power, a directly measured quantity in contrast to the spectral radiance in the Beer-Lambert law, are derived. Numerical algorithms and results relating to the multiple scattering effects for laser propagation in fog, cloud, and rain are presented.

  4. Does Correct Answer Distribution Influence Student Choices When Writing Multiple Choice Examinations?

    Directory of Open Access Journals (Sweden)

    Jacqueline A. Carnegie

    2017-03-01

    Full Text Available Summative evaluation for large classes of first- and second-year undergraduate courses often involves the use of multiple choice question (MCQ) exams in order to provide timely feedback. Several versions of those exams are often prepared via computer-based question scrambling in an effort to deter cheating. An important parameter to consider when preparing multiple exam versions is that they must be equivalent in their assessment of student knowledge. This project investigated a possible influence of correct answer organization on student answer selection when writing multiple versions of MCQ exams. The specific question asked was whether the existence of a series of four to five consecutive MCQs in which the same letter represented the correct answer had a detrimental influence on a student’s ability to continue to select the correct answer as he/she moved through that series. Student outcomes from such exams were compared with results from exams with identical questions but which did not contain such series. These findings were supplemented by student survey data in which students self-assessed the extent to which they paid attention to the distribution of correct answer choices when writing summative exams, both during their initial answer selection and when transferring their answer letters to the Scantron sheet for correction. Despite the fact that more than half of survey respondents indicated that they do make note of answer patterning during exams and that a series of four to five questions with the same letter for the correct answer would encourage many of them to take a second look at their answer choice, the results pertaining to student outcomes suggest that MCQ randomization, even when it does result in short serial arrays of letter-specific correct answers, does not constitute a distraction capable of adversely influencing student performance.

  5. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    Energy Technology Data Exchange (ETDEWEB)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo [National Center for Geriatrics and Gerontology Research Institute, Department of Brain Science and Molecular Imaging, Obu, Aichi (Japan); Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro [National Cardiovascular Center Research Institute, Department of Investigative Radiology, Suita (Japan); Kato, Rikio [National Center for Geriatrics and Gerontology, Department of Radiology, Obu (Japan)

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with ⁹⁹ᵐTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and ⁹⁹ᵐTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)

  6. Aethalometer multiple scattering correction Cref for mineral dust aerosols

    Directory of Open Access Journals (Sweden)

    C. Di Biagio

    2017-08-01

    Full Text Available In this study we provide a first estimate of the Aethalometer multiple scattering correction Cref for mineral dust aerosols. Cref is an empirical constant used to correct the aerosol absorption coefficient measurements for the multiple scattering artefact of the Aethalometer; i.e. the filter fibres on which aerosols are deposited scatter light and this is miscounted as absorption. The Cref at 450 and 660 nm was obtained from the direct comparison of Aethalometer data (Magee Sci. AE31) with (i) the absorption coefficient calculated as the difference between the extinction and scattering coefficients measured by a Cavity Attenuated Phase Shift Extinction analyser (CAPS PMex) and a nephelometer respectively at 450 nm and (ii) the absorption coefficient from a MAAP (Multi-Angle Absorption Photometer) at 660 nm. Measurements were performed on seven dust aerosol samples generated in the laboratory by the mechanical shaking of natural parent soils issued from different source regions worldwide. The single scattering albedo (SSA) at 450 and 660 nm and the size distribution of the aerosols were also measured. Cref for mineral dust varies between 1.81 and 2.56 for a SSA of 0.85–0.96 at 450 nm and between 1.75 and 2.28 for a SSA of 0.98–0.99 at 660 nm. The calculated mean for dust is 2.09 (±0.22) at 450 nm and 1.92 (±0.17) at 660 nm. With this new Cref the dust absorption coefficient by the Aethalometer is about 2 % (450 nm) and 11 % (660 nm) higher than that obtained by using Cref = 2.14 at both 450 and 660 nm, as usually assumed in the literature. This difference induces a change of up to 3 % in the dust SSA at 660 nm. The Cref seems to be independent of the fine and coarse particle size fractions, and so the obtained Cref can be applied to dust both close to sources and following transport. Additional experiments performed with pure kaolinite minerals and polluted ambient aerosols indicate Cref of 2.49 (±0.02) and 2

  7. Determination of shell correction energies at saddle point using pre-scission neutron multiplicities

    International Nuclear Information System (INIS)

    Golda, K.S.; Saxena, A.; Mittal, V.K.; Mahata, K.; Sugathan, P.; Jhingan, A.; Singh, V.; Sandal, R.; Goyal, S.; Gehlot, J.; Dhal, A.; Behera, B.R.; Bhowmik, R.K.; Kailas, S.

    2013-01-01

    Pre-scission neutron multiplicities have been measured for the ¹²C + ¹⁹⁴,¹⁹⁸Pt systems at matching excitation energies in the near-Coulomb-barrier region. A statistical model analysis with a modified fission barrier and level density prescription has been carried out to fit the measured pre-scission neutron multiplicities and the available evaporation residue and fission cross sections simultaneously, in order to constrain the statistical model parameters. Simultaneous fitting of the pre-scission neutron multiplicities and cross-section data requires a shell correction at the saddle point.

  8. Atmospheric Correction Inter-comparison Exercise (ACIX)

    Science.gov (United States)

    Vermote, E.; Doxani, G.; Gascon, F.; Roger, J. C.; Skakun, S.

    2017-12-01

    The free and open data access policy for Landsat-8 (L-8) and Sentinel-2 (S-2) satellite imagery has encouraged the development of atmospheric correction (AC) approaches for generating Bottom-of-Atmosphere (BOA) products. Several entities have started to generate (or plan to generate in the short term) BOA reflectance products at global scale for the L-8 and S-2 missions. To this end, the European Space Agency (ESA) and National Aeronautics and Space Administration (NASA) have initiated an exercise on the inter-comparison of the available AC processors. The results of the exercise are expected to point out the strengths and weaknesses, as well as commonalities and discrepancies, of various AC processors, in order to suggest and define ways for their further improvement. In particular, 11 atmospheric processors from five different countries participate in ACIX with the aim to inter-compare their performance when applied to L-8 and S-2 data. All the processors should be operational without requiring parametrization when applied to different areas. A protocol describing in detail the inter-comparison metrics and the test dataset based on the AERONET sites was agreed unanimously during the 1st ACIX workshop in June 2016. In particular, a basic and an advanced run of each processor were requested in the frame of ACIX, with the aim to draw robust and reliable conclusions on the processors' performance. The protocol also describes the comparison metrics of the aerosol optical thickness and water vapour products of the processors with the corresponding AERONET measurements. Moreover, concerning the surface reflectances, the inter-comparison among the processors is defined, as well as the comparison with the MODIS surface reflectance and with a reference surface reflectance product. Such a reference product will be obtained using the AERONET characterization of the aerosol (size distribution and refractive indices) and an accurate radiative transfer code. The inter-comparison

  9. Compton scatter correction in case of multiple crosstalks in SPECT imaging.

    Science.gov (United States)

    Sychra, J J; Blend, M J; Jobe, T H

    1996-02-01

    A strategy for Compton scatter correction in brain SPECT images was proposed recently. It assumes that two radioisotopes are used and that a significant portion of photons of one radioisotope (for example, Tc99m) spills over into the low energy acquisition window of the other radioisotope (for example, Tl201). We are extending this approach to cases of several radioisotopes with mutual, multiple and significant photon spillover. In the example above, one may correct not only the Tl201 image but also the Tc99m image corrupted by the Compton scatter originating from the small component of high energy Tl201 photons. The proposed extension is applicable to other anatomical domains (cardiac imaging).

  10. Correction for variable moderation and multiplication effects associated with thermal neutron coincidence counting

    International Nuclear Information System (INIS)

    Baron, N.

    1978-01-01

    A correction is described for multiplication and moderation when performing passive thermal neutron coincidence counting nondestructive assay measurements on powder samples of PuO₂ mixed arbitrarily with MgO, SiO₂, and moderating material. The multiplication correction expression is shown to be approximately separable into the product of two independent terms: F_Pu, which depends on the mass of ²⁴⁰Pu, and F_αn, which depends on properties of the matrix material. Necessary assumptions for separability are (1) isotopic abundances are constant, and (2) fission cross sections are independent of incident neutron energy, both of which are reasonable for the 8% ²⁴⁰Pu powder samples considered here. Furthermore, since all prompt fission neutrons are expected to have nearly the same energy distributions, variations among different samples can be due only to the moderating properties of the samples. Relative energy distributions are provided by a thermal neutron well counter having two concentric rings of ³He proportional counters placed symmetrically about the well. Measured ring ratios raised to empirically determined powers for coincidences, (N^I/N^O)^Z, and singles, (T^O/T^I)^δ, provide corrections for moderation and F_αn respectively, and F_Pu is approximated by M₂₄₀^X/M₂₄₀. The exponents are calibration constants determined by a least-squares fitting procedure using standards' data. System calibration is greatly simplified using the separability principle. Once appropriate models are established for F_Pu and F_αn, only a few standards are necessary to determine the calibration constants associated with these terms. Since F_Pu is expressed as a function of M₂₄₀, correction for multiplication in a subsequent assay demands only a measurement of F_αn.
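
    Under the separability assumption, the matrix-dependent part of the correction is obtained from ring-ratio arithmetic once the calibration exponents are known. The sketch below only computes the two ring-ratio factors; the exponents and count rates are invented for illustration, and how the factors enter the full calibration model is determined by the fit to standards.

```python
def ring_ratio_factors(n_inner, n_outer, t_inner, t_outer, z, delta):
    """Ring-ratio correction factors from a two-ring He-3 well counter.

    (n_inner/n_outer)**z corrects the coincidence rate for sample moderation;
    (t_outer/t_inner)**delta tracks the (alpha,n)-dependent term F_alpha_n.
    z and delta are calibration constants fitted to standards (invented here).
    """
    moderation = (n_inner / n_outer) ** z
    alpha_n = (t_outer / t_inner) ** delta
    return moderation, alpha_n

# Hypothetical measurement: coincidence and singles rates from the two rings.
moderation_factor, alpha_n_factor = ring_ratio_factors(
    n_inner=520.0, n_outer=480.0, t_inner=910.0, t_outer=860.0, z=2.3, delta=1.1)
print(moderation_factor, alpha_n_factor)
```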

  11. Multiple scattering and attenuation corrections in Deep Inelastic Neutron Scattering experiments

    International Nuclear Information System (INIS)

    Dawidowski, J; Blostein, J J; Granada, J R

    2006-01-01

    Multiple scattering and attenuation corrections in Deep Inelastic Neutron Scattering experiments are analyzed. The theoretical basis of the method is stated, and a Monte Carlo procedure to perform the calculation is presented. The results are compared with experimental data. The importance of the accuracy in the description of the experimental parameters is tested, and the implications of the present results for the data analysis procedures are examined.

  12. Correction to: Multiple Score Comparison: a network meta-analysis approach to comparison and external validation of prognostic scores

    Directory of Open Access Journals (Sweden)

    Sarah R. Haile

    2018-02-01

    Full Text Available Correction: Following publication of the original article [1], a member of the writing group reported that his name is misspelt. The paper should appear in PubMed under "Ter Riet G", not as "Riet GT".

  13. BIOFEEDBACK: A NEW METHOD FOR CORRECTION OF MOTOR DISORDERS IN PATIENTS WITH MULTIPLE SCLEROSIS

    Directory of Open Access Journals (Sweden)

    Ya. S. Pekker

    2014-01-01

    Full Text Available Motor disorders are among the major disabling factors in multiple sclerosis, and rehabilitation of such impairments is one of the most important medical and social problems. Currently, much attention is given to the development of correction methods for motor disorders that draw on the body's own adaptive resources. One of these methods is adaptive control with biofeedback (BFB). The aim of our study was the correction of motor disorders in multiple sclerosis patients using biofeedback training. In the study, we developed training scenarios for a computer-based EMG biofeedback rehabilitation program aimed at correcting motor disorders in patients with multiple sclerosis (MS). The method was tested in the neurological clinic of SSMU. The study included 9 patients with a definite diagnosis of MS and a clinical picture of combined pyramidal and cerebellar symptoms. The effectiveness of the biofeedback rehabilitation procedures was assessed using specialized scales: the Kurtzke Functional Systems Scale, the SF-36 quality-of-life questionnaire, the Sickness Impact Profile (SIP), and the Fatigue Severity Scale (FSS). In the studied group of patients, the fatigue score (FSS) decreased, while motor control (SIP2) and the physical and mental components of health (SF-36) improved. There was a tendency toward a reduction of the neurological deficit, reflected in lower scores for pyramidal impairment on the Kurtzke scale. Analysis of the course dynamics of the EMG biofeedback training indicates an increase in the recorded OEMG signal for the trained muscles from session to session, as well as a tendency toward increased strength and coordination of the trained muscles. The positive results of biofeedback therapy in patients with MS suggest that this method can be recommended as part of comprehensive rehabilitation measures to correct motor and psycho-emotional disorders.

  14. A Monte Carlo evaluation of analytical multiple scattering corrections for unpolarised neutron scattering and polarisation analysis data

    International Nuclear Information System (INIS)

    Mayers, J.; Cywinski, R.

    1985-03-01

    Some of the approximations commonly used for the analytical estimation of multiple scattering corrections to thermal neutron elastic scattering data from cylindrical and plane slab samples have been tested using a Monte Carlo program. It is shown that the approximations are accurate for a wide range of sample geometries and scattering cross-sections. Neutron polarisation analysis provides the most stringent test of multiple scattering calculations as multiply scattered neutrons may be redistributed not only geometrically but also between the spin flip and non spin flip scattering channels. A very simple analytical technique for correcting for multiple scattering in neutron polarisation analysis has been tested using the Monte Carlo program and has been shown to work remarkably well in most circumstances. (author)

  15. Power Factor Correction Capacitors for Multiple Parallel Three-Phase ASD Systems

    DEFF Research Database (Denmark)

    Yang, Yongheng; Blaabjerg, Frede

    2017-01-01

    Today’s three-phase Adjustable Speed Drive (ASD) systems still employ Diode Rectifiers (DRs) and Silicon-Controlled Rectifiers (SCRs) as the front-end converters due to structural and control simplicity, small volume, low cost, and high reliability. However, the uncontrollable DRs and phase-controllable SCRs bring side-effects by injecting high harmonics into the grid, which will degrade the system performance, lowering the overall efficiency and overheating the system, if they remain uncontrolled or unattenuated. For multiple ASD systems, certain harmonics in the entire system can be mitigated. To improve the power factor, passive capacitors can be installed, which can, however, trigger system resonance. Hence, this paper analyzes the resonance issues in multiple ASD systems with power factor correction capacitors. Potential damping solutions are summarized. Simulations are carried out, while laboratory tests
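
    For context on the capacitor side of the problem (standard textbook sizing, not taken from the paper), the reactive power a correction capacitor bank must supply follows from Q = P(tan phi1 - tan phi2):

```python
import math

def pfc_capacitor_kvar(p_kw, pf_initial, pf_target):
    """Reactive power (kvar) a correction capacitor bank must supply to raise the
    displacement power factor of a load from pf_initial to pf_target."""
    phi1 = math.acos(pf_initial)
    phi2 = math.acos(pf_target)
    return p_kw * (math.tan(phi1) - math.tan(phi2))

# Hypothetical drive group: 250 kW at PF 0.78, corrected to 0.95.
print(round(pfc_capacitor_kvar(250.0, 0.78, 0.95), 1), "kvar")
```

    Note that this only addresses the displacement power factor; the harmonic distortion and the resonance risk discussed in the abstract require separate treatment.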

  16. Implementation of dynamic cross-talk correction (DCTC) for MOX holdup assay measurements among multiple gloveboxes

    International Nuclear Information System (INIS)

    Nakamichi, Hideo; Nakamura, Hironobu; Mukai, Yasunobu; Kurita, Tsutomu; Beddingfield, David H.

    2012-01-01

    Plutonium holdup in gloveboxes (GBs) is measured by passive-neutron-based NDA (HBAS) for material control and accountancy (MC&A) at the Plutonium Conversion Development Facility (PCDF). When GBs are installed close to one another, the cross-talk, i.e. neutron double counting among GBs, should be corrected properly. Although predetermined constants were previously used for the cross-talk correction, a new correction methodology that follows neutron cross-talk among GBs as inventories change is required to improve MC&A. In order to address the issue of variable cross-talk contributions to holdup assay values, we applied a dynamic cross-talk correction (DCTC) method, based on the distributed source-term analysis approach, to obtain the actual doubles derived from the cross-talk between multiple GBs. As a result of introducing DCTC for the HBAS measurement, we could reduce source biases from the assay result by reliably estimating the doubles counting derived from the cross-talk. We could thereby reduce the HBAS measurement uncertainty to half that of the conventional system, and we are going to confirm this result. Since the DCTC methodology can be used to determine the cross-correlation among multiple inventories in small areas, it is expected that this methodology can be extended to safeguards by design. (author)

  17. "None of the above" as a correct and incorrect alternative on a multiple-choice test: implications for the testing effect.

    Science.gov (United States)

    Odegard, Timothy N; Koen, Joshua D

    2007-11-01

    Both positive and negative testing effects have been demonstrated with a variety of materials and paradigms (Roediger & Karpicke, 2006b). The present series of experiments replicate and extend the research of Roediger and Marsh (2005) with the addition of a "none-of-the-above" response option. Participants (n=32 in both experiments) read a set of passages, took an initial multiple-choice test, completed a filler task, and then completed a final cued-recall test (Experiment 1) or multiple-choice test (Experiment 2). Questions were manipulated on the initial multiple-choice test by adding a "none-of-the-above" response alternative (choice "E") that was incorrect ("E" Incorrect) or correct ("E" Correct). The results from both experiments demonstrated that the positive testing effect was negated when the "none-of-the-above" alternative was the correct response on the initial multiple-choice test, but was still present when the "none-of-the-above" alternative was an incorrect response.

  18. Kruskal-Wallis Test in Multiple Comparisons

    OpenAIRE

    Parys, Dariusz

    2009-01-01

    In this paper we show that the Kruskal-Wallis test statistic can be transformed into a quadratic form involving the Mann-Whitney or Kendall τ concordance measures between pairs of treatments. A multiple comparisons procedure based on patterns of transitive ordering among treatments is implemented. We also consider circularity and non-transitivity effects.
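
    In practice, the combination described here (an omnibus Kruskal-Wallis test followed by pairwise Mann-Whitney comparisons with a multiplicity correction) takes only a few lines of SciPy. A minimal sketch with made-up data, using a simple Bonferroni adjustment rather than the paper's transitive-ordering procedure:

```python
from itertools import combinations
from scipy import stats

groups = {                     # made-up treatment samples
    "A": [12.1, 13.4, 11.8, 14.0, 12.9],
    "B": [15.2, 16.1, 14.8, 15.9, 16.4],
    "C": [12.5, 13.0, 12.2, 13.8, 12.7],
}

# Omnibus test across all treatments.
h_stat, p_omnibus = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_omnibus:.4f}")

# Pairwise Mann-Whitney tests, Bonferroni-corrected for the number of pairs.
pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    print(a, "vs", b, "adjusted p =", min(p * len(pairs), 1.0))
```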

  19. A Fiducial Approach to Extremes and Multiple Comparisons

    Science.gov (United States)

    Wandler, Damian V.

    2010-01-01

    Generalized fiducial inference is a powerful tool for many difficult problems. Based on an extension of R. A. Fisher's work, we used generalized fiducial inference for two extreme value problems and a multiple comparison procedure. The first extreme value problem deals with the generalized Pareto distribution. The generalized Pareto…

  20. Nonparametric Analysis of Right Censored Data with Multiple Comparisons

    OpenAIRE

    Shih, Hwei-Weng

    1982-01-01

    This report demonstrates the use of a computer program written in FORTRAN for the Burroughs B6800 computer at Utah State University to perform Breslow's (1970) generalization of the Kruskal-Wallis test for right censored data. A pairwise multiple comparison procedure using Bonferroni's inequality is also introduced and demonstrated. Comparisons are also made with a parametric F test and the original Kruskal-Wallis test. Application of these techniques to two data sets indicate that there is l...

  1. Calculator for the correction of the experimental specific migration for comparison with the legislative limit

    DEFF Research Database (Denmark)

    Petersen, Jens Højslev; Hoekstra, Eddo J.

    The EURL-NRL-FCM Taskforce on the Fourth Amendment of the Plastic Directive 2002/72/EC developed a calculator for the correction of test results for comparison with the specific migration limit (SML). The calculator computes the maximum acceptable specific migration under the given experimental conditions in food or food simulant and indicates whether the test result is in compliance with the legislation. The calculator includes the Fat Reduction Factor, the simulant D Reduction Factor and the factor accounting for the difference in surface-to-volume ratio between the test and real food contact.
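
    The kind of correction the calculator applies can be illustrated with simple arithmetic. The factors, the limit and the direction of the surface-to-volume scaling below are assumptions made purely for illustration; the real calculator applies the Directive's rules on when each factor may be used.

```python
def corrected_migration(measured_mg_per_kg, fat_reduction_factor=1.0,
                        simulant_d_reduction_factor=1.0,
                        surface_to_volume_test=6.0, surface_to_volume_real=6.0):
    """Correct a measured specific migration value before comparing it to the SML.

    The measured value is divided by the applicable reduction factors and scaled
    by the ratio of real-food to test surface-to-volume contact (dm2/kg).
    All numbers here are illustrative, not values taken from the Directive.
    """
    corrected = measured_mg_per_kg / (fat_reduction_factor * simulant_d_reduction_factor)
    return corrected * (surface_to_volume_real / surface_to_volume_test)

sml = 10.0  # hypothetical specific migration limit, mg/kg
value = corrected_migration(18.0, fat_reduction_factor=3.0,
                            surface_to_volume_test=6.0, surface_to_volume_real=4.0)
print(value, "compliant" if value <= sml else "non-compliant")
```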

  2. An empirical correction for moderate multiple scattering in super-heterodyne light scattering.

    Science.gov (United States)

    Botin, Denis; Mapa, Ludmila Marotta; Schweinfurth, Holger; Sieber, Bastian; Wittenberg, Christopher; Palberg, Thomas

    2017-05-28

    Frequency domain super-heterodyne laser light scattering is utilized in a low angle integral measurement configuration to determine flow and diffusion in charged sphere suspensions showing moderate to strong multiple scattering. We introduce an empirical correction to subtract the multiple scattering background and isolate the singly scattered light. We demonstrate the excellent feasibility of this simple approach for turbid suspensions of transmittance T ≥ 0.4. We study the particle concentration dependence of the electro-kinetic mobility in low salt aqueous suspension over an extended concentration regime and observe a maximum at intermediate concentrations. We further use our scheme for measurements of the self-diffusion coefficients in the fluid samples in the absence or presence of shear, as well as in polycrystalline samples during crystallization and coarsening. We discuss the scope and limits of our approach as well as possible future applications.

  3. Subroutine MLTGRD: a multigrid algorithm based on multiplicative correction and implicit non-stationary iteration

    International Nuclear Information System (INIS)

    Barry, J.M.; Pollard, J.P.

    1986-11-01

    A FORTRAN subroutine MLTGRD is provided to solve efficiently the large systems of linear equations arising from a five-point finite difference discretisation of some elliptic partial differential equations. MLTGRD is a multigrid algorithm which provides multiplicative correction to iterative solution estimates from successively reduced systems of linear equations. It uses the method of implicit non-stationary iteration for all grid levels

  4. Geometry-based multiplication correction for passive neutron coincidence assay of materials with variable and unknown (α,n) neutron rates

    International Nuclear Information System (INIS)

    Langner, D.G.; Russo, P.A.

    1993-02-01

    We have studied the problem of assaying impure plutonium-bearing materials using passive neutron coincidence counting. We have developed a technique to analyze neutron coincidence data from impure plutonium samples that uses the bulk geometry of the sample to correct for multiplication in samples for which the (α,n) neutron production rate is unknown. This technique can be applied to any impure plutonium-bearing material whose matrix constituents are approximately constant, whose self-multiplication is low to moderate, whose plutonium isotopic composition is known and not substantially varying, and whose bulk geometry is measurable or can be derived. This technique requires a set of reference materials that have well-characterized plutonium contents. These reference materials are measured once to derive a calibration that is specific to the neutron detector and the material. The technique has been applied to molten salt extraction residues, PuF 4 samples that have a variable salt matrix, and impure plutonium oxide samples. It is also applied to pure plutonium oxide samples for comparison. Assays accurate to 4% (1 σ) were obtained for impure samples measured in a High-Level Neutron Coincidence Counter II. The effects on the technique of variations in neutron detector efficiency with energy and the effects of neutron capture in the sample are discussed

  5. Correction of elevation offsets in multiple co-located lidar datasets

    Science.gov (United States)

    Thompson, David M.; Dalyander, P. Soupy; Long, Joseph W.; Plant, Nathaniel G.

    2017-04-07

    Introduction: Topographic elevation data collected with airborne light detection and ranging (lidar) can be used to analyze short- and long-term changes to beach and dune systems. Analysis of multiple lidar datasets at Dauphin Island, Alabama, revealed systematic, island-wide elevation differences on the order of 10s of centimeters (cm) that were not attributable to real-world change and, therefore, were likely to represent systematic sampling offsets. These offsets vary between the datasets, but appear spatially consistent within a given survey. This report describes a method that was developed to identify and correct offsets between lidar datasets collected over the same site at different times so that true elevation changes over time, associated with sediment accumulation or erosion, can be analyzed.

  6. Color correction with blind image restoration based on multiple images using a low-rank model

    Science.gov (United States)

    Li, Dong; Xie, Xudong; Lam, Kin-Man

    2014-03-01

    We present a method that can handle the color correction of multiple photographs with blind image restoration simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Due to the fact that the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks-including image denoising, image deblurring, and gray-scale image colorizing-can be performed simultaneously. Experiments have verified that our method can achieve consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.

  7. Correction of over and under exposure images using multiple lighting system

    International Nuclear Information System (INIS)

    Im, Jonghoon; Fujii, Hiromitsu; Yamashita, Atsushi; Asama, Hajime

    2015-01-01

    When images are acquired in bright conditions, highlight details can be lost in bright areas (over-exposure) and shadow details can be lost in dark areas (under-exposure). Over- and under-exposure are a serious problem when hazardous sites such as the Fukushima nuclear power plant are inspected through a camera mounted on a remotely controlled robot. In this paper, we propose a method to correct over- and under-exposed images. The image processing consists of four steps. First, multiple images are acquired by alternately turning on and off illumination sources set at different positions; the image obtained first is defined as input image 1, the second as input image 2, and the N-th as input image N. Second, the luminance of the images is corrected. Third, the over- and under-exposed regions are extracted from input image 1. Finally, those regions of input image 1 are compensated using the other images. The results show that the over- and under-exposed regions in the input image are recovered by the proposed method. (author)

  8. Interactive comparison and remediation of collections of macromolecular structures.

    Science.gov (United States)

    Moriarty, Nigel W; Liebschner, Dorothee; Klei, Herbert E; Echols, Nathaniel; Afonine, Pavel V; Headd, Jeffrey J; Poon, Billy K; Adams, Paul D

    2018-01-01

    Often similar structures need to be compared to reveal local differences throughout the entire model or between related copies within the model. Therefore, a program to compare multiple structures and enable correction of any differences not supported by the density map was written within the Phenix framework (Adams et al., Acta Cryst 2010; D66:213-221). This program, called Structure Comparison, can also be used for structures with multiple copies of the same protein chain in the asymmetric unit, that is, as a result of non-crystallographic symmetry (NCS). Structure Comparison was designed to interface with Coot (Emsley et al., Acta Cryst 2010; D66:486-501) and PyMOL (DeLano, PyMOL 0.99; 2002) to facilitate comparison of large numbers of related structures. Structure Comparison analyzes collections of protein structures using several metrics, such as the rotamer conformation of equivalent residues, displays the results in tabular form and allows superimposed protein chains and density maps to be quickly inspected and edited (via the tools in Coot) for consistency, completeness and correctness. © 2017 The Protein Society.

  9. Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects

    Science.gov (United States)

    Gordon, Howard R.; Castano, Diego J.

    1987-01-01

    Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.

  10. Comparison of fluorescence rejection methods of baseline correction and shifted excitation Raman difference spectroscopy

    Science.gov (United States)

    Cai, Zhijian; Zou, Wenlong; Wu, Jianhong

    2017-10-01

    Raman spectroscopy has been extensively used in biochemical testing, explosive detection, food additive analysis and environmental pollutant monitoring. However, fluorescence interference poses a serious problem for the application of portable Raman spectrometers. Currently, baseline correction and shifted-excitation Raman difference spectroscopy (SERDS) are the most prevalent fluorescence-suppression methods. In this paper, we compared the performance of baseline correction and SERDS, both experimentally and by simulation. The comparison demonstrates that baseline correction can produce an acceptable fluorescence-removed Raman spectrum if the original Raman signal has a good signal-to-noise ratio, but it cannot recover small Raman signals from a large noise background. With the SERDS method, Raman signals that are very weak compared to the fluorescence intensity and noise level can be clearly extracted, and the fluorescence background can be completely rejected. The Raman spectrum recovered by SERDS has a good signal-to-noise ratio. The results show that baseline correction is more suitable for large bench-top Raman systems with better signal quality or signal-to-noise ratio, while the SERDS method is more suitable for noisy devices, especially portable Raman spectrometers.
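
    To make the SERDS principle concrete, the short sketch below (a toy simulation with invented band positions and intensities, not the authors' code) generates two Raman spectra at slightly shifted excitation wavelengths over the same broad fluorescence background; subtracting them cancels the background while the Raman bands survive as derivative-like features.

```python
import numpy as np

# Wavenumber axis (relative cm^-1) and a hypothetical Raman band model.
x = np.linspace(200, 2000, 2000)

def raman_bands(x, shift=0.0):
    """Two Lorentzian Raman bands; 'shift' mimics the excitation shift."""
    bands = [(800, 8, 1.0), (1450, 10, 0.6)]  # (position, width, height)
    y = np.zeros_like(x)
    for pos, w, h in bands:
        y += h * w**2 / ((x - (pos + shift))**2 + w**2)
    return y

# Broad fluorescence background (identical for both excitations) plus noise.
background = 20 * np.exp(-((x - 1200) / 900)**2)
rng = np.random.default_rng(0)
spec_1 = raman_bands(x, shift=0.0) + background + rng.normal(0, 0.05, x.size)
spec_2 = raman_bands(x, shift=10.0) + background + rng.normal(0, 0.05, x.size)

# SERDS difference: the background cancels, and the Raman bands remain as
# derivative-like features that can be located or reconstructed further.
difference = spec_1 - spec_2
print("max |difference| near the 800 cm^-1 band:",
      np.abs(difference[(x > 750) & (x < 850)]).max().round(2))
```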

  11. Evaluation of Shifted Excitation Raman Difference Spectroscopy and Comparison to Computational Background Correction Methods Applied to Biochemical Raman Spectra.

    Science.gov (United States)

    Cordero, Eliana; Korinth, Florian; Stiebing, Clara; Krafft, Christoph; Schie, Iwan W; Popp, Jürgen

    2017-07-27

    Raman spectroscopy provides label-free biochemical information from tissue samples without complicated sample preparation. The clinical capability of Raman spectroscopy has been demonstrated in a wide range of in vitro and in vivo applications. However, a challenge for in vivo applications is the simultaneous excitation of auto-fluorescence in the majority of tissues of interest, such as liver, bladder, brain, and others. Raman bands are then superimposed on a fluorescence background, which can be several orders of magnitude larger than the Raman signal. To eliminate the disturbing fluorescence background, several approaches are available. Among instrumentational methods shifted excitation Raman difference spectroscopy (SERDS) has been widely applied and studied. Similarly, computational techniques, for instance extended multiplicative scatter correction (EMSC), have also been employed to remove undesired background contributions. Here, we present a theoretical and experimental evaluation and comparison of fluorescence background removal approaches for Raman spectra based on SERDS and EMSC.

  12. International comparison of methods to test the validity of dead-time and pile-up corrections for high-precision gamma-ray spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Houtermans, H.; Schaerf, K.; Reichel, F. (International Atomic Energy Agency, Vienna (Austria)); Debertin, K. (Physikalisch-Technische Bundesanstalt, Braunschweig (Germany, F.R.))

    1983-02-01

    The International Atomic Energy Agency organized an international comparison of methods applied in high-precision gamma-ray spectrometry for the correction of dead-time and pile-up losses. Results of this comparison are reported and discussed.

  13. Measuring University students' understanding of the greenhouse effect - a comparison of multiple-choice, short answer and concept sketch assessment tools with respect to students' mental models

    Science.gov (United States)

    Gold, A. U.; Harris, S. E.

    2013-12-01

    The greenhouse effect comes up in most discussions about climate and is a key concept related to climate change. Existing studies have shown that students and adults alike lack a detailed understanding of this important concept or might hold misconceptions. We studied the effectiveness of different interventions on University-level students' understanding of the greenhouse effect. Introductory level science students were tested for their pre-knowledge of the greenhouse effect using validated multiple-choice questions, short answers and concept sketches. All students participated in a common lesson about the greenhouse effect and were then randomly assigned to one of two lab groups. One group explored an existing simulation about the greenhouse effect (PhET-lesson) and the other group worked with absorption spectra of different greenhouse gases (Data-lesson) to deepen the understanding of the greenhouse effect. All students completed the same assessment including multiple choice, short answers and concept sketches after participation in their lab lesson. 164 students completed all the assessments, 76 completed the PhET lesson and 77 completed the data lesson. 11 students missed the contrasting lesson. In this presentation we show the comparison between the multiple-choice questions, short answer questions and the concept sketches of students. We explore how well each of these assessment types represents students' knowledge. We also identify items that are indicators of the level of understanding of the greenhouse effect, as measured by the correspondence of student answers to an expert mental model and expert responses. Preliminary data analysis shows that students who produce concept sketch drawings that come close to expert drawings also choose correct multiple-choice answers. However, correct multiple-choice answers are not necessarily an indicator that a student will produce expert-like concept sketch items. Multiple-choice questions that require detailed

  14. A state comparison amplifier with feed forward state correction

    Science.gov (United States)

    Mazzarella, Luca; Donaldson, Ross; Collins, Robert; Zanforlin, Ugo; Buller, Gerald; Jeffers, John

    2017-04-01

    The Quantum State Comparison AMPlifier (SCAMP) is a probabilistic amplifier that works for known sets of coherent states. The input state is mixed with a guess state at a beam splitter and one of the output ports is coupled to a detector. The other output contains the amplified state, which is accepted on the condition that no counts are recorded. The system uses only classical resources and has been shown to achieve high gain and repetition rate. However, the output fidelity is not high enough for most quantum communication purposes. Here we show how the success probability and fidelity are enhanced by repeated comparison stages, conditioning later state choices on the outcomes of earlier detections. A detector firing at an early stage means that a guess is wrong. This knowledge allows us to correct the state perfectly. The system requires fast switching between different input states, but still requires only classical resources. Figures of merit compare favourably with other schemes; most notably, the probability-fidelity product is higher than for unambiguous state discrimination. Due to its simplicity, the system is a candidate to counteract quantum signal degradation in a lossy fibre or as a quantum receiver to improve the key rate of continuous variable quantum communication. The work was supported by the QComm Project of the UK Engineering and Physical Sciences Research Council (EP/M013472/1).

  15. M-GCAT: interactively and efficiently constructing large-scale multiple genome comparison frameworks in closely related species

    Directory of Open Access Journals (Sweden)

    Messeguer Xavier

    2006-10-01

    Full Text Available Abstract Background Due to recent advances in whole genome shotgun sequencing and assembly technologies, the financial cost of decoding an organism's DNA has been drastically reduced, resulting in a recent explosion of genomic sequencing projects. This increase in related genomic data will allow for in-depth studies of evolution in closely related species through multiple whole genome comparisons. Results To facilitate such comparisons, we present an interactive multiple genome comparison and alignment tool, M-GCAT, that can efficiently construct multiple genome comparison frameworks in closely related species. M-GCAT is able to compare and identify highly conserved regions in up to 20 closely related bacterial species in minutes on a standard computer, and as many as 90 (containing 75 cloned genomes from a set of 15 published enterobacterial genomes) in an hour. M-GCAT also incorporates a novel comparative genomics data visualization interface allowing the user to globally and locally examine and inspect the conserved regions and gene annotations. Conclusion M-GCAT is an interactive comparative genomics tool well suited for quickly generating multiple genome comparison frameworks and alignments among closely related species. M-GCAT is freely available for download for academic and non-commercial use at: http://alggen.lsi.upc.es/recerca/align/mgcat/intro-mgcat.html.

  16. Data-driven motion correction in brain SPECT

    International Nuclear Information System (INIS)

    Kyme, A.Z.; Hutton, B.F.; Hatton, R.L.; Skerrett, D.W.

    2002-01-01

    Patient motion can cause image artifacts in SPECT despite restraining measures. Data-driven detection and correction of motion can be achieved by comparison of acquired data with the forward-projections. By optimising the orientation of the reconstruction, parameters can be obtained for each misaligned projection and applied to update this volume using a 3D reconstruction algorithm. Digital and physical phantom validation was performed to investigate this approach. Noisy projection data simulating at least one fully 3D patient head movement during acquisition were constructed by projecting the digital Huffman brain phantom at various orientations. Motion correction was applied to the reconstructed studies. The importance of including attenuation effects in the estimation of motion and the need for implementing an iterated correction were assessed in the process. Correction success was assessed visually for artifact reduction, and quantitatively using a mean square difference (MSD) measure. Physical Huffman phantom studies with deliberate movements introduced during the acquisition were also acquired and motion corrected. Effective artifact reduction in the simulated corrupt studies was achieved by motion correction. Typically the MSD ratio between the corrected and reference studies compared to the corrupted and reference studies was > 2. Motion correction could be achieved without inclusion of attenuation effects in the motion estimation stage, providing simpler implementation and greater efficiency. Moreover the additional improvement with multiple iterations of the approach was small. Improvement was also observed in the physical phantom data, though the technique appeared limited here by an object symmetry. Copyright (2002) The Australian and New Zealand Society of Nuclear Medicine Inc

  17. Comparison of grey matter atrophy between patients with neuromyelitis optica and multiple sclerosis: A voxel-based morphometry study

    International Nuclear Information System (INIS)

    Duan Yunyun; Liu Yaou; Liang Peipeng; Jia Xiuqin; Yu Chunshui; Qin Wen; Sun Hui; Liao Zhangyuan; Ye Jing; Li Kuncheng

    2012-01-01

    Purpose: Previous studies have established regional grey matter (GM) loss in multiple sclerosis (MS). However, whether there is any regional GM atrophy in neuromyelitis optica (NMO) and the difference between NMO and MS is unclear. The present study addresses this issue by voxel-based morphometry (VBM). Methods: Conventional magnetic resonance imaging (MRI) and T1-weighted three-dimensional MRI were obtained from 26 NMO patients, 26 relapsing–remitting MS (RRMS) patients, and 26 normal controls. An analysis of covariance model assessed with cluster size inference was used to compare GM volume among the three groups. The correlations of GM volume changes with disease duration, expanded disability status scale (EDSS) and brain T2 lesion volume (LV) were analyzed. Results: GM atrophy was found in NMO patients in several regions of the frontal, temporal and parietal lobes and the insula (uncorrected, p < 0.001). In contrast, extensive GM atrophy was found in RRMS patients, including most cortical regions and the deep grey matter (corrected for multiple comparisons, p < 0.01). Compared with NMO, those with RRMS had significant GM loss in the bilateral thalami, caudate, left parahippocampal gyrus, right hippocampus and insula (corrected, p < 0.01). In the RRMS group, regional GM loss in the right caudate and bilateral thalami was strongly correlated with brain T2LV. Conclusions: Our study found that the differences in GM atrophy between NMO and RRMS patients lie mainly in the deep grey matter. The correlational results suggest that axonal degeneration from lesions on T2WI may be a key mechanism of atrophy in the deep grey matter in RRMS.

  18. Neutral current Drell-Yan with combined QCD and electroweak corrections in the POWHEG BOX

    CERN Document Server

    Barze', Luca; Nason, Paolo; Nicrosini, Oreste; Piccinini, Fulvio; Vicini, Alessandro

    2013-01-01

    Following recent work on the combination of electroweak and strong radiative corrections to single W-boson hadroproduction in the POWHEG BOX framework, we generalize the above treatment to cover the neutral current Drell-Yan process. According to the POWHEG method, we combine both the next-to-leading order (NLO) electroweak and QED multiple photon corrections with the native NLO and Parton Shower QCD contributions. We show comparisons with the predictions of the electroweak generator HORACE, to validate the reliability and accuracy of the approach. We also present phenomenological results obtained with the new tool for physics studies at the LHC.

  19. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    Science.gov (United States)

    Pinto, Carlos; Machado, Armando

    2015-05-01

    To investigate the coding strategies that pigeons may use in temporal discrimination tasks, pigeons were trained on a matching-to-sample procedure with three sample durations (2s, 6s and 18s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies, the multiple-coding and the single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5s, the geometric mean of 2s and 6s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. © Society for the Experimental Analysis of Behavior.

  20. Correcting Estimates of the Occurrence Rate of Earth-like Exoplanets for Stellar Multiplicity

    Science.gov (United States)

    Cantor, Elliot; Dressing, Courtney D.; Ciardi, David R.; Christiansen, Jessie

    2018-06-01

    One of the most prominent questions in the exoplanet field has been determining the true occurrence rate of potentially habitable Earth-like planets. NASA’s Kepler mission has been instrumental in answering this question by searching for transiting exoplanets, but follow-up observations of Kepler target stars are needed to determine whether or not the surveyed Kepler targets are in multi-star systems. While many researchers have searched for companions to Kepler planet host stars, few studies have investigated the larger target sample. Regardless of physical association, the presence of nearby stellar companions biases our measurements of a system’s planetary parameters and reduces our sensitivity to small planets. Assuming that all Kepler target stars are single (as is done in many occurrence rate calculations) would overestimate our search completeness and result in an underestimate of the frequency of potentially habitable Earth-like planets. We aim to correct for this bias by characterizing the set of targets for which Kepler could have detected Earth-like planets. We are using adaptive optics (AO) imaging to reveal potential stellar companions and near-infrared spectroscopy to refine stellar parameters for a subset of the Kepler targets that are most amenable to the detection of Earth-like planets. We will then derive correction factors to correct for the biases in the larger set of target stars and determine the true frequency of systems with Earth-like planets. Due to the prevalence of stellar multiples, we expect to calculate an occurrence rate for Earth-like exoplanets that is higher than current figures.
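
    The direction of this bias can be illustrated with a back-of-the-envelope calculation (all numbers below are invented and are not the survey's actual completeness values): treating every target as single overstates the mean search completeness, which in turn understates the occurrence rate.

```python
# Hypothetical survey of 1000 target stars with 3 detected Earth analogues.
n_targets, n_detections = 1000, 3

# Assumed search completeness for an Earth-like planet around a SINGLE star,
# and the lower completeness when an unresolved companion dilutes the transit
# signal and hides small planets. Both values are invented for illustration.
completeness_single = 0.05
completeness_binary = 0.01
binary_fraction = 0.4          # assumed fraction of targets with a companion

# Naive estimate: every target treated as single.
naive_rate = n_detections / (n_targets * completeness_single)

# Corrected estimate: average completeness over the assumed target mix.
mean_completeness = (1 - binary_fraction) * completeness_single \
                    + binary_fraction * completeness_binary
corrected_rate = n_detections / (n_targets * mean_completeness)

print("naive occurrence rate:  %.3f planets per star" % naive_rate)
print("multiplicity-corrected: %.3f planets per star" % corrected_rate)
```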

  1. Paper-pen peer-correction versus wiki-based peer-correction

    Directory of Open Access Journals (Sweden)

    Froldova Vladimira

    2016-01-01

    Full Text Available This study reports on the comparison of the students’ achievement and their attitudes towards the use of paper-pen peer-correction and wiki-based peer-correction within English language lessons and CLIL Social Science lessons at the higher secondary school in Prague. Questionnaires and semi-structured interviews were utilized to gather information. The data suggests that students made considerable use of wikis and showed higher degrees of motivation in wiki-based peer-correction during English language lessons than in CLIL Social Science lessons. In both cases wikis not only contributed to developing students’ writing skills, but also helped students recognize the importance of collaboration.

  2. Analytical multiple scattering correction to the Mie theory: Application to the analysis of the lidar signal

    Science.gov (United States)

    Flesia, C.; Schwendimann, P.

    1992-01-01

    The contribution of multiple scattering to the lidar signal depends on the optical depth tau. Therefore, the lidar analysis, based on the assumption that multiple scattering can be neglected, is limited to cases characterized by low values of the optical depth (tau less than or equal to 0.1) and hence excludes scattering from most clouds. Moreover, all inversion methods relating the lidar signal to number densities and particle sizes must be modified, since multiple scattering affects the direct analysis. The essential requirements of a realistic model for lidar measurements that includes multiple scattering and can be applied to practical situations are as follows. (1) What is required is not only a correction term or a rough approximation describing the results of a particular experiment, but a general theory of multiple scattering tying together the relevant physical parameters we seek to measure. (2) An analytical generalization of the lidar equation that can be applied in the case of a realistic aerosol is required. A purely analytical formulation is important in order to avoid the convergence and stability problems which, in a numerical approach, are due to the large number of events that have to be taken into account in the presence of large optical depth and/or strong experimental noise.

  3. Segmentation-based retrospective shading correction in fluorescence microscopy E. coli images for quantitative analysis

    Science.gov (United States)

    Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.

    2009-10-01

    Due to the inherent imperfections in the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, which are usually referred to as intensity inhomogeneity, intensity non-uniformity, shading or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. coli) images is proposed based on the segmentation result. Segmentation and shading correction are coupled together, so we iteratively correct the shading effects based on the segmentation result and refine the segmentation by segmenting the image after shading correction. A fluorescence microscopy E. coli image can be segmented (based on its intensity value) into two classes: the background and the cells, where the intensity variation within each class is close to zero if there is no shading. Therefore, we make use of this characteristic to correct the shading in each iteration. Shading is mathematically modeled as a multiplicative component and an additive noise component. The additive component is removed by a denoising process, and the multiplicative component is estimated using a fast algorithm to minimize the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E. coli images. It works well not only for visual inspection, but also for numerical evaluation. Our proposed method should be useful for further quantitative analysis, especially for protein expression value comparison.
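
    The multiplicative part of that model can be illustrated with a minimal flat-fielding sketch (a rough stand-in for the segmentation-driven estimation in the paper; the blur width and toy image below are invented for illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_multiplicative_shading(image, sigma=20.0, eps=1e-6):
    """Divide out a smooth multiplicative shading field.

    The field is estimated here with a heavy Gaussian blur; the paper
    instead refines the estimate from a cell/background segmentation so
    that object structure does not leak into the shading estimate.
    """
    field = gaussian_filter(image.astype(float), sigma=sigma, mode="reflect")
    field /= field.mean()                    # preserve overall brightness
    return image / (field + eps)

# Toy example: a uniform specimen (true value 1.0) under a linear shade.
rng = np.random.default_rng(1)
shade = np.tile(np.linspace(0.5, 1.5, 256), (256, 1))
observed = 1.0 * shade + rng.normal(0.0, 0.01, shade.shape)
corrected = correct_multiplicative_shading(observed)
print("intensity std before / after correction:",
      round(observed.std(), 3), round(corrected.std(), 3))
```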

  4. Comparison of multiple crystal structures with NMR data for engrailed homeodomain

    Energy Technology Data Exchange (ETDEWEB)

    Religa, Tomasz L. [MRC Centre for Protein Engineering (United Kingdom)], E-mail: tlr25@mrc-lmb.cam.ac.uk

    2008-03-15

    Two methods are currently available to solve high resolution protein structures: X-ray crystallography and nuclear magnetic resonance (NMR). Both methods usually produce highly similar structures, but small differences between both solutions are always observed. Here the raw NMR data as well as the solved NMR structure were compared to the multiple crystal structures solved for the WT 60 residue three helix bundle engrailed homeodomain (EnHD) and single point mutants. There was excellent agreement between TALOS-predicted and crystal structure-observed dihedral angles and a good agreement for the ³J(HN-Hα) couplings for the multiple crystal structures. Around 1% of NOEs were violated for any crystal structure, but no NOE was inconsistent with all of the crystal structures. Violations usually occurred for surface residues or for residues for which multiple discrete conformations were observed between the crystal structures. Comparison of the disorder shown in the multiple crystal structures shows little correlation with dynamics under native conditions for this protein.

  5. Psychometric Comparisons of Benevolent and Corrective Humor across 22 Countries: The Virtue Gap in Humor Goes International.

    Science.gov (United States)

    Heintz, Sonja; Ruch, Willibald; Platt, Tracey; Pang, Dandan; Carretero-Dios, Hugo; Dionigi, Alberto; Argüello Gutiérrez, Catalina; Brdar, Ingrid; Brzozowska, Dorota; Chen, Hsueh-Chih; Chłopicki, Władysław; Collins, Matthew; Ďurka, Róbert; Yahfoufi, Najwa Y El; Quiroga-Garza, Angélica; Isler, Robert B; Mendiburo-Seguel, Andrés; Ramis, TamilSelvan; Saglam, Betül; Shcherbakova, Olga V; Singh, Kamlesh; Stokenberga, Ieva; Wong, Peter S O; Torres-Marín, Jorge

    2018-01-01

    Recently, two forms of virtue-related humor, benevolent and corrective, have been introduced. Benevolent humor treats human weaknesses and wrongdoings benevolently, while corrective humor aims at correcting and bettering them. Twelve marker items for benevolent and corrective humor (the BenCor) were developed, and it was demonstrated that they fill the gap between humor as temperament and virtue. The present study investigates responses to the BenCor from 25 samples in 22 countries (overall N = 7,226). The psychometric properties of the BenCor were found to be sufficient in most of the samples, including internal consistency, unidimensionality, and factorial validity. Importantly, benevolent and corrective humor were clearly established as two positively related, yet distinct dimensions of virtue-related humor. Metric measurement invariance was supported across the 25 samples, and scalar invariance was supported across six age groups (from 18 to 50+ years) and across gender. Comparisons of samples within and between four countries (Malaysia, Switzerland, Turkey, and the UK) showed that the item profiles were more similar within than between countries, though some evidence for regional differences was also found. This study thus supported, for the first time, the suitability of the 12 marker items of benevolent and corrective humor in different countries, enabling a cumulative cross-cultural research and eventually applications of humor aiming at the good.

  6. Psychometric Comparisons of Benevolent and Corrective Humor across 22 Countries: The Virtue Gap in Humor Goes International

    Directory of Open Access Journals (Sweden)

    Sonja Heintz

    2018-02-01

    Full Text Available Recently, two forms of virtue-related humor, benevolent and corrective, have been introduced. Benevolent humor treats human weaknesses and wrongdoings benevolently, while corrective humor aims at correcting and bettering them. Twelve marker items for benevolent and corrective humor (the BenCor) were developed, and it was demonstrated that they fill the gap between humor as temperament and virtue. The present study investigates responses to the BenCor from 25 samples in 22 countries (overall N = 7,226). The psychometric properties of the BenCor were found to be sufficient in most of the samples, including internal consistency, unidimensionality, and factorial validity. Importantly, benevolent and corrective humor were clearly established as two positively related, yet distinct dimensions of virtue-related humor. Metric measurement invariance was supported across the 25 samples, and scalar invariance was supported across six age groups (from 18 to 50+ years) and across gender. Comparisons of samples within and between four countries (Malaysia, Switzerland, Turkey, and the UK) showed that the item profiles were more similar within than between countries, though some evidence for regional differences was also found. This study thus supported, for the first time, the suitability of the 12 marker items of benevolent and corrective humor in different countries, enabling a cumulative cross-cultural research and eventually applications of humor aiming at the good.

  7. Complete restoration of multiple dystrophin isoforms in genetically corrected Duchenne muscular dystrophy patient–derived cardiomyocytes

    Directory of Open Access Journals (Sweden)

    Susi Zatti

    2014-01-01

    Full Text Available Duchenne muscular dystrophy (DMD)-associated cardiac diseases are emerging as a major cause of morbidity and mortality in DMD patients, and many therapies for treatment of skeletal muscle failed to improve cardiac function. The reprogramming of patients' somatic cells into pluripotent stem cells, combined with technologies for correcting the genetic defect, possesses great potential for the development of new treatments for genetic diseases. In this study, we obtained human cardiomyocytes from DMD patient-derived, induced pluripotent stem cells genetically corrected with a human artificial chromosome carrying the whole dystrophin genomic sequence. Stimulation by cytokines was combined with cell culturing on hydrogel with physiological stiffness, allowing an adhesion-dependent maturation and a proper dystrophin expression. The obtained cardiomyocytes showed remarkable sarcomeric organization of cardiac troponin T and α-actinin, expressed cardiac-specific markers, and displayed electrically induced calcium transients lasting less than 1 second. We demonstrated that the human artificial chromosome carrying the whole dystrophin genomic sequence is stably maintained throughout the cardiac differentiation process and that multiple promoters of the dystrophin gene are properly activated, driving expression of different isoforms. These dystrophic cardiomyocytes can be a valuable source for in vitro modeling of DMD-associated cardiac disease. Furthermore, the derivation of genetically corrected, patient-specific cardiomyocytes represents a step toward the development of innovative cell and gene therapy approaches for DMD.

  8. The Contribution of Numerical Magnitude Comparison and Phonological Processing to Individual Differences in Fourth Graders' Multiplication Fact Ability.

    Directory of Open Access Journals (Sweden)

    Tamara M J Schleepen

    Full Text Available Although numerical magnitude processing has been related to individual differences in arithmetic, its role in children's multiplication performance remains largely unknown. On the other hand, studies have indicated that phonological awareness is an important correlate of individual differences in children's multiplication performance, but the involvement of phonological memory, another important phonological processing skill, has not been studied in much detail. Furthermore, knowledge about the relative contribution of the above-mentioned processes to the specific arithmetic operation of multiplication in children is lacking. The present study therefore investigated for the first time the unique contributions of numerical magnitude comparison and phonological processing in explaining individual differences in 63 fourth graders' multiplication fact ability (mean age = 9.6 years, SD = .67). The results showed that children's multiplication fact competency correlated significantly with symbolic and nonsymbolic magnitude comparison as well as with phonological short-term memory. A hierarchical regression analysis revealed that, after controlling for intellectual ability and general reaction time, both symbolic and nonsymbolic magnitude comparison and phonological short-term memory accounted for unique variance in multiplication fact performance. The ability to compare symbolic magnitudes was found to contribute the most, indicating that the access to numerical magnitudes by means of Arabic digits is a key factor in explaining individual differences in children's multiplication fact ability.

  9. A SAS(®) macro implementation of a multiple comparison post hoc test for a Kruskal-Wallis analysis.

    Science.gov (United States)

    Elliott, Alan C; Hynan, Linda S

    2011-04-01

    The Kruskal-Wallis (KW) nonparametric analysis of variance is often used instead of a standard one-way ANOVA when data are from a suspected non-normal population. The KW omnibus procedure tests for some differences between groups, but provides no specific post hoc pair wise comparisons. This paper provides a SAS(®) macro implementation of a multiple comparison test based on significant Kruskal-Wallis results from the SAS NPAR1WAY procedure. The implementation is designed for up to 20 groups at a user-specified alpha significance level. A Monte-Carlo simulation compared this nonparametric procedure to commonly used parametric multiple comparison tests. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
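
    The SAS macro itself is not reproduced here, but the overall pattern can be sketched in Python: run the Kruskal-Wallis omnibus test and, only if it is significant, perform pairwise follow-up comparisons with a multiplicity adjustment. The sketch below uses pairwise Mann-Whitney U tests with a Holm adjustment, which is one common follow-up choice rather than the macro's exact statistic; the data are invented.

```python
import numpy as np
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

def kw_with_posthoc(groups, alpha=0.05):
    """Kruskal-Wallis omnibus test followed by Holm-adjusted pairwise tests.

    'groups' maps group name -> 1-D array of observations. Pairwise
    Mann-Whitney U tests stand in for the macro's own post hoc statistic.
    """
    h_stat, p_omnibus = kruskal(*groups.values())
    results = {"omnibus_p": p_omnibus, "pairwise": {}}
    if p_omnibus >= alpha:
        return results                      # nothing to follow up
    pairs = list(combinations(groups, 2))
    raw_p = [mannwhitneyu(groups[a], groups[b]).pvalue for a, b in pairs]
    # Holm step-down adjustment of the pairwise p-values.
    order = np.argsort(raw_p)
    m, running_max = len(raw_p), 0.0
    adj = np.empty(m)
    for rank, idx in enumerate(order):
        running_max = max(running_max, (m - rank) * raw_p[idx])
        adj[idx] = min(1.0, running_max)
    for (a, b), p in zip(pairs, adj):
        results["pairwise"][(a, b)] = p
    return results

rng = np.random.default_rng(2)
data = {"A": rng.normal(0.0, 1, 30),
        "B": rng.normal(0.8, 1, 30),
        "C": rng.normal(0.9, 1, 30)}
print(kw_with_posthoc(data))
```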

  10. Comparison of reconfigurable structures for flexible word-length multiplication

    Directory of Open Access Journals (Sweden)

    O. A. Pfänder

    2008-05-01

    Full Text Available Binary multiplication continues to be one of the essential arithmetic operations in digital circuits. Even though field-programmable gate arrays (FPGAs) are becoming more and more powerful these days, the vendors cannot avoid implementing multiplications with high word-lengths using embedded blocks instead of configurable logic. But on the other hand, the circuit's efficiency decreases if the provided word-length of the hard-wired multipliers exceeds the precision requirements of the algorithm mapped into the FPGA. Thus it is beneficial to use multiplier blocks with configurable word-length, optimized for area, speed and power dissipation, e.g. regarding digital signal processing (DSP) applications.

    In this contribution, we present different approaches and structures for the realization of a multiplication with variable precision and perform an objective comparison. This includes one approach based on a modified Baugh and Wooley algorithm and three structures using Booth's arithmetic operand recoding with different array structures. All modules have the option to compute signed two's complement fix-point numbers either as an individual computing unit or interconnected to a superior array. Therefore, a high throughput at low precision through parallelism, or a high precision through concatenation can be achieved.
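
    As a language-neutral illustration of the operand-recoding idea mentioned above (a hardware multiplier is of course realized in logic, not software), the sketch below recodes a two's complement multiplier into radix-4 Booth digits and forms the product from shifted partial products; the bit width and test values are arbitrary.

```python
def booth_radix4_digits(y, n_bits):
    """Recode an n_bits two's complement multiplier into radix-4 Booth digits.

    Each digit is in {-2, -1, 0, 1, 2} and carries weight 4**i, so at most
    n_bits/2 partial products are needed instead of n_bits.
    """
    assert n_bits % 2 == 0
    table = {0b000: 0, 0b001: 1, 0b010: 1, 0b011: 2,
             0b100: -2, 0b101: -1, 0b110: -1, 0b111: 0}
    y_bits = y & ((1 << n_bits) - 1)        # two's complement bit pattern
    y_ext = y_bits << 1                     # implicit y[-1] = 0
    return [table[(y_ext >> (2 * i)) & 0b111] for i in range(n_bits // 2)]

def booth_multiply(x, y, n_bits=8):
    """Multiply two signed integers via radix-4 Booth partial products."""
    digits = booth_radix4_digits(y, n_bits)
    return sum(d * x << (2 * i) for i, d in enumerate(digits))

# Quick check over the full signed 8-bit range of the multiplier.
assert all(booth_multiply(x, y) == x * y
           for x in (-128, -7, 0, 5, 127) for y in range(-128, 128))
print(booth_radix4_digits(-3, 8))   # [1, -1, 0, 0]  ->  1*1 - 1*4 = -3
```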

  11. A method for the correction of the feed-in tariff price for cogeneration based on a comparison between Croatia and EU Member States

    International Nuclear Information System (INIS)

    Uran, Vedran; Krajcar, Slavko

    2009-01-01

    The European Commission has adopted Directive 2004/8/EC on the promotion of cogeneration, which the EU Member States, as well as candidates including Croatia, were obliged to accept. Among other terms and conditions, the Directive requires certain support mechanisms, such as feed-in tariff prices and premiums added to market electricity prices. In this paper, the cost effectiveness of selling electricity at the feed-in tariff prices in the selected EU Member States is compared to selling it on the European electricity market, with or without premiums. The results of this comparison will indicate whether correction of the Croatian feed-in tariff price to a higher value would be justified. The cost effectiveness ratio of a cogeneration unit upgraded with mean reverting and jump diffusion processes is used for comparison. At the end of this paper, a method is suggested for the correction of feed-in tariff prices, with examples of corrected prices for the years 2008 and 2009. Such corrections have been proven to be justified and are compared to the feed-in tariff prices in most of the selected EU Member States.
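
    The market price model referred to above combines mean reversion with jumps; the following sketch simulates such a path (an Ornstein-Uhlenbeck-style process with Poisson-triggered jumps) purely to illustrate the type of process involved; the parameters are invented, not the paper's calibration.

```python
import numpy as np

def simulate_price(n_steps=8760, dt=1.0, p0=50.0, mu=50.0,
                   theta=0.05, sigma=2.0, jump_rate=0.002,
                   jump_scale=20.0, seed=3):
    """Hourly electricity price: mean reversion toward 'mu' plus rare jumps.

    All parameters are illustrative; the paper calibrates its own processes
    to market data before comparing them with feed-in tariff revenues.
    """
    rng = np.random.default_rng(seed)
    prices = np.empty(n_steps)
    prices[0] = p0
    for t in range(1, n_steps):
        drift = theta * (mu - prices[t - 1]) * dt
        diffusion = sigma * np.sqrt(dt) * rng.normal()
        jump = rng.exponential(jump_scale) if rng.random() < jump_rate * dt else 0.0
        prices[t] = max(prices[t - 1] + drift + diffusion + jump, 0.0)
    return prices

path = simulate_price()
print("mean price: %.1f EUR/MWh, max spike: %.1f" % (path.mean(), path.max()))
```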

  12. Building a new predictor for multiple linear regression technique-based corrective maintenance turnaround time.

    Science.gov (United States)

    Cruz, Antonio M; Barr, Cameron; Puñales-Pozo, Elsa

    2008-01-01

    This research's main goals were to build a predictor for a turnaround time (TAT) indicator for estimating its values and to use a numerical clustering technique for finding possible causes of undesirable TAT values. The following stages were used: domain understanding, data characterisation and sample reduction, and insight characterisation. Multiple linear regression and clustering techniques were used to build the TAT indicator predictor and to improve corrective maintenance task efficiency in a clinical engineering department (CED). The indicator being studied was turnaround time (TAT). Multiple linear regression was used for building a predictive TAT value model. The variables contributing to this model were clinical engineering department response time (CE(rt), 0.415 positive coefficient), stock service response time (Stock(rt), 0.734 positive coefficient), priority level (0.21 positive coefficient) and service time (0.06 positive coefficient). The regression process showed heavy reliance on Stock(rt), CE(rt) and priority, in that order. Clustering techniques revealed the main causes of high TAT values. This examination has provided a means for analysing current technical service quality and effectiveness. In doing so, it has demonstrated a process for identifying areas and methods of improvement and a model against which to analyse these methods' effectiveness.
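
    As a hedged sketch of the kind of predictor described (the coefficients quoted above are specific to that department's data; the toy fit below uses synthetic work-order records and an invented relationship), a multiple linear regression of TAT on the same four predictors can be set up as follows:

```python
import numpy as np

# Synthetic work-order records: the predictors named in the study
# (CE response time, stock response time, priority, service time).
rng = np.random.default_rng(4)
n = 200
ce_rt = rng.exponential(4.0, n)         # clinical engineering response time
stock_rt = rng.exponential(6.0, n)      # stock service response time
priority = rng.integers(1, 4, n).astype(float)
service = rng.exponential(2.0, n)       # hands-on service time
# Invented "true" relationship, used only to generate the toy target.
tat = 1.0 + 0.4 * ce_rt + 0.7 * stock_rt + 0.2 * priority + 0.1 * service \
      + rng.normal(0, 0.5, n)

# Ordinary least squares fit (numpy only).
X = np.column_stack([np.ones(n), ce_rt, stock_rt, priority, service])
coef, *_ = np.linalg.lstsq(X, tat, rcond=None)
print("intercept and coefficients:", np.round(coef, 3))

predicted = X @ coef                     # TAT estimates for the same records
print("first three predicted TATs:", np.round(predicted[:3], 2))
```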

  13. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    Science.gov (United States)

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments is multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine if the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Dispersion correction through movement of the closed orbit

    International Nuclear Information System (INIS)

    Parzen, G.

    1980-01-01

    The closed orbit correction system can be used to correct the vertical dispersion by displacing the orbit at the quadrupoles and sextupoles. The accuracy of the results has been verified by comparison with exact calculations. Results for correcting the horizontal dispersion are also given

  15. Comparison of Parenting Style in Single Child and Multiple Children Families

    OpenAIRE

    Masoumeh Alidosti; Seyedeh Leila Dehghani; Akbar Babaei-Heydarabadi; Elahe Tavassoli

    2016-01-01

    Background and Purpose: Family is the first and the most important structure in human civilization in which social lifestyles, mutual understanding, and compatibility are learned. Studies have shown that parenting style is one of the most important and fundamental factors in personality development. The purpose of this study was the comparison of parenting style in single-child and multiple-children families. Materials and Methods: In this study, in total, 152 mothers from Andimeshk city, Iran, were...

  16. Accuracy of radiotherapy dose calculations based on cone-beam CT: comparison of deformable registration and image correction based methods

    Science.gov (United States)

    Marchant, T. E.; Joshi, K. D.; Moore, C. J.

    2018-03-01

    Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate due to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT, and direct correction of CBCT image values are two methods proposed to allow heterogeneity corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used including pelvis, lung and head & neck sites. CBCT HU were corrected using a ‘shading correction’ algorithm and via deformable registration of planning CT to CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT and several relevant dose metrics for target and OAR volumes were calculated. Accuracy of CBCT based dose metrics was determined using an ‘override ratio’ method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient’s planning CT as a gold standard. Similar performance is achieved by shading corrected CBCT and both deformable registration algorithms, with mean and standard deviation of dose metric error less than 1% for all sites studied. For lung images, use of deformed CT leads to slightly larger standard deviation of dose metric error than shading corrected CBCT with more dose metric errors greater than 2% observed (7% versus 1%).

  17. Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables

    Science.gov (United States)

    Cannon, Alex J.

    2018-01-01

    Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. MBCn outperforms these alternatives, often by a large margin
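
    A full MBCn implementation (iterated random rotations plus trend-preserving quantile mapping) is beyond the scope of this summary, but its univariate building block can be sketched as below: plain empirical quantile mapping of model values onto the observed distribution, with synthetic data standing in for observations and model output. Note that MBCn itself uses a trend-preserving variant so that projected changes in quantiles are retained.

```python
import numpy as np

def quantile_map(model_hist, obs, model_target):
    """Empirical quantile mapping: map model values onto the observed CDF.

    Each target value is converted to its quantile within the historical
    model distribution and replaced by the observed value at the same
    quantile. This is the univariate core that multivariate schemes such
    as MBCn apply repeatedly under random rotations.
    """
    quantiles = np.interp(model_target,
                          np.sort(model_hist),
                          np.linspace(0.0, 1.0, model_hist.size))
    return np.quantile(obs, quantiles)

rng = np.random.default_rng(5)
obs = rng.gamma(2.0, 3.0, 5000)                # "observed" data
model_hist = rng.gamma(2.0, 4.0, 5000) + 1.0   # biased model climate
model_proj = rng.gamma(2.2, 4.0, 5000) + 1.0   # biased future projection
corrected = quantile_map(model_hist, obs, model_proj)
print("means (obs, raw model, corrected):",
      round(obs.mean(), 2), round(model_proj.mean(), 2),
      round(corrected.mean(), 2))
```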

  18. Practical aspects of data-driven motion correction approach for brain SPECT

    International Nuclear Information System (INIS)

    Kyme, A.Z.; Hutton, B.F.; Hatton, R.L.; Skerrett, D.; Barnden, L.

    2002-01-01

    Full text: Patient motion can cause image artifacts in SPECT despite restraining measures. Data-driven detection and correction of motion can be achieved by comparison of acquired data with the forward-projections. By optimising the orientation of a partial reconstruction, parameters can be obtained for each misaligned projection and applied to update this volume using a 3D reconstruction algorithm. Phantom validation was performed to explore practical aspects of this approach. Noisy projection datasets simulating a patient undergoing at least one fully 3D movement during acquisition were compiled from various projections of the digital Hoffman brain phantom. Motion correction was then applied to the reconstructed studies. Correction success was assessed visually and quantitatively. Resilience with respect to subset order and missing data in the reconstruction and updating stages, detector geometry considerations, and the need for implementing an iterated correction were assessed in the process. Effective correction of the corrupted studies was achieved. Visually, artifactual regions in the reconstructed slices were suppressed and/or removed. Typically the ratio of mean square difference between the corrected and reference studies compared to that between the corrupted and reference studies was > 2. Although components of the motions are missed using a single-head implementation, improvement was still evident in the correction. The need for multiple iterations in the approach was small due to the bulk of misalignment errors being corrected in the first pass. Dispersion of subsets for reconstructing and updating the partial reconstruction appears to give optimal correction. Further validation is underway using triple-head physical phantom data. Copyright (2002) The Australian and New Zealand Society of Nuclear Medicine Inc

  19. Catadioptric aberration correction in cathode lens microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Tromp, R.M. [IBM T.J. Watson Research Center, PO Box 218, Yorktown Heights, NY 10598 (United States); Kamerlingh Onnes Laboratory, Leiden Institute of Physics, Niels Bohrweg 2, 2333 CA Leiden (Netherlands)

    2015-04-15

    In this paper I briefly review the use of electrostatic electron mirrors to correct the aberrations of the cathode lens objective lens in low energy electron microscope (LEEM) and photo electron emission microscope (PEEM) instruments. These catadioptric systems, combining electrostatic lens elements with a reflecting mirror, offer a compact solution, allowing simultaneous and independent correction of both spherical and chromatic aberrations. A comparison with catadioptric systems in light optics informs our understanding of the working principles behind aberration correction with electron mirrors, and may point the way to further improvements in the latter. With additional developments in detector technology, 1 nm spatial resolution in LEEM appears to be within reach. - Highlights: • The use of electron mirrors for aberration correction in LEEM/PEEM is reviewed. • A comparison is made with similar systems in light optics. • Conditions for 1 nm spatial resolution are discussed.

  20. Attenuation correction for hybrid MR/PET scanners: a comparison study

    Energy Technology Data Exchange (ETDEWEB)

    Rota Kops, Elena [Forschungszentrum Jülich GmbH, Jülich (Germany); Ribeiro, Andre Santos [Imperial College London, London (United Kingdom); Caldeira, Liliana [Forschungszentrum Jülich GmbH, Jülich (Germany); Hautzel, Hubertus [Heinrich-Heine-University Düsseldorf, Düsseldorf (Germany); Lukas, Mathias [Technische Universitaet Muenchen, Munich (Germany); Antoch, Gerald [Heinrich-Heine-University Düsseldorf, Düsseldorf (Germany); Lerche, Christoph; Shah, Jon [Forschungszentrum Jülich GmbH, Jülich (Germany)

    2015-05-18

    Attenuation correction of PET data acquired in hybrid MR/PET scanners is still a challenge. Different methods have been adopted by several groups to obtain reliable attenuation maps (mu-maps). In this study we compare three methods: MGH, UCL, Neural-Network. The MGH method is based on an MR/CT template obtained with the SPM8 software. The UCL method uses a database of MR/CT pairs. Both generate mu-maps from MP-RAGE images. The feed-forward neural-network from Juelich (NN-Juelich) requires two UTE images; it generates segmented mu-maps. Data from eight subjects (S1-S8) measured in the Siemens 3T MR-BrainPET scanner were used. Corresponding CT images were acquired. The resulting mu-maps were compared against the CT-based mu-maps for each subject and method. Overlapped voxels and Dice similarity coefficients, D, for bone, soft-tissue and air regions, and relative difference images were calculated. The true positive (TP) recognized voxels for the whole head were 79.9% (NN-Juelich, S7) to 92.1% (UCL method, S1). D values of the bone were D=0.65 (NN-Juelich, S1) to D=0.87 (UCL method, S1). For S8 the MGH method failed (TP=76.4%; D=0.46 for bone). D values shared a common tendency in all subjects and methods to recognize soft-tissue as bone. The relative difference images showed a variation of -10.9% to +10.1%; for S8 and the MGH method the values were -24.5% and +14.2%. A preliminary comparison of three methods for generation of mu-maps for MR/PET scanners is presented. The continuous methods (MGH, UCL) seem to generate reliable mu-maps, whilst the binary method seems to need further improvement. Future work will include more subjects, the reconstruction of corresponding PET data and their comparison.

  1. Attenuation correction for hybrid MR/PET scanners: a comparison study

    International Nuclear Information System (INIS)

    Rota Kops, Elena; Ribeiro, Andre Santos; Caldeira, Liliana; Hautzel, Hubertus; Lukas, Mathias; Antoch, Gerald; Lerche, Christoph; Shah, Jon

    2015-01-01

    Attenuation correction of PET data acquired in hybrid MR/PET scanners is still a challenge. Different methods have been adopted by several groups to obtain reliable attenuation maps (mu-maps). In this study we compare three methods: MGH, UCL, Neural-Network. The MGH method is based on an MR/CT template obtained with the SPM8 software. The UCL method uses a database of MR/CT pairs. Both generate mu-maps from MP-RAGE images. The feed-forward neural-network from Juelich (NN-Juelich) requires two UTE images; it generates segmented mu-maps. Data from eight subjects (S1-S8) measured in the Siemens 3T MR-BrainPET scanner were used. Corresponding CT images were acquired. The resulting mu-maps were compared against the CT-based mu-maps for each subject and method. Overlapped voxels and Dice similarity coefficients, D, for bone, soft-tissue and air regions, and relative difference images were calculated. The true positive (TP) recognized voxels for the whole head were 79.9% (NN-Juelich, S7) to 92.1% (UCL method, S1). D values of the bone were D=0.65 (NN-Juelich, S1) to D=0.87 (UCL method, S1). For S8 the MGH method failed (TP=76.4%; D=0.46 for bone). D values shared a common tendency in all subjects and methods to recognize soft-tissue as bone. The relative difference images showed a variation of -10.9% to +10.1%; for S8 and the MGH method the values were -24.5% and +14.2%. A preliminary comparison of three methods for generation of mu-maps for MR/PET scanners is presented. The continuous methods (MGH, UCL) seem to generate reliable mu-maps, whilst the binary method seems to need further improvement. Future work will include more subjects, the reconstruction of corresponding PET data and their comparison.
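
    The per-class overlap reported above is the Dice similarity coefficient; as a small self-contained illustration (toy label maps, not the authors' evaluation code), it can be computed per tissue class as follows:

```python
import numpy as np

def dice_coefficient(label_map_a, label_map_b, label):
    """Dice similarity D = 2|A and B| / (|A| + |B|) for one tissue label."""
    a = (label_map_a == label)
    b = (label_map_b == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 1-D "mu-maps" with labels 0 = air, 1 = soft tissue, 2 = bone.
ct_based   = np.array([0, 1, 1, 2, 2, 2, 1, 0])
mr_derived = np.array([0, 1, 2, 2, 2, 1, 1, 0])
for lbl, name in [(1, "soft tissue"), (2, "bone")]:
    print(name, "Dice:", round(dice_coefficient(ct_based, mr_derived, lbl), 2))
```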

  2. Comparing Effects of Biologic Agents in Treating Patients with Rheumatoid Arthritis: A Multiple Treatment Comparison Regression Analysis.

    Directory of Open Access Journals (Sweden)

    Ingunn Fride Tvete

    Full Text Available Rheumatoid arthritis patients have been treated with disease modifying anti-rheumatic drugs (DMARDs) and the newer biologic drugs. We sought to compare and rank the biologics with respect to efficacy. We performed a literature search identifying 54 publications encompassing 9 biologics. We conducted a multiple treatment comparison regression analysis letting the number experiencing a 50% improvement on the ACR score be dependent upon dose level and disease duration for assessing the comparable relative effect between biologics and placebo or DMARD. The analysis embraced all treatment and comparator arms over all publications. Hence, all measured effects of any biologic agent contributed to the comparison of all biologic agents relative to each other either given alone or combined with DMARD. We found the drug effect to be dependent on dose level, but not on disease duration, and the impact of a high versus low dose level was the same for all drugs (higher doses indicated a higher frequency of ACR50 scores). The ranking of the drugs when given without DMARD was certolizumab (ranked highest), etanercept, tocilizumab/abatacept and adalimumab. The ranking of the drugs when given with DMARD was certolizumab (ranked highest), tocilizumab, anakinra/rituximab, golimumab/infliximab/abatacept, adalimumab/etanercept [corrected]. Still, all drugs were effective. All biologic agents were effective compared to placebo, with certolizumab the most effective and adalimumab (without DMARD treatment) and adalimumab/etanercept (combined with DMARD treatment) the least effective. The drugs were in general more effective, except for etanercept, when given together with DMARDs.
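
    The core modelling idea described here - letting the number of ACR50 responders in each trial arm depend on treatment and dose level - can be illustrated with an ordinary arm-level binomial regression. This is only a sketch of that idea, not the Bayesian multiple treatment comparison model used in the study; the drug names, counts and covariates below are invented.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # one row per trial arm: drug, dose level, ACR50 responders, arm size (all invented)
        arms = pd.DataFrame({
            "drug":      ["placebo", "drugA", "drugA", "drugB"],
            "high_dose": [0, 0, 1, 1],
            "acr50":     [12, 30, 41, 38],
            "n":         [100, 100, 100, 100],
        })

        # design matrix: drug indicators (placebo as reference) plus dose level
        drug = pd.Categorical(arms["drug"], categories=["placebo", "drugA", "drugB"])
        X = pd.get_dummies(drug, drop_first=True).astype(float)
        X["high_dose"] = arms["high_dose"].astype(float)
        X = sm.add_constant(X)

        # binomial GLM on (responders, non-responders) counts per arm
        endog = np.column_stack([arms["acr50"], arms["n"] - arms["acr50"]])
        print(sm.GLM(endog, X, family=sm.families.Binomial()).fit().summary())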

  3. Direct concurrent comparison of multiple pediatric acute asthma scoring instruments.

    Science.gov (United States)

    Johnson, Michael D; Nkoy, Flory L; Sheng, Xiaoming; Greene, Tom; Stone, Bryan L; Garvin, Jennifer

    2017-09-01

    Appropriate delivery of Emergency Department (ED) treatment to children with acute asthma requires clinician assessment of acute asthma severity. Various clinical scoring instruments exist to standardize assessment of acute asthma severity in the ED, but their selection remains arbitrary due to few published direct comparisons of their properties. Our objective was to test the feasibility of directly comparing properties of multiple scoring instruments in a pediatric ED. Using a novel approach supported by a composite data collection form, clinicians categorized elements of five scoring instruments before and after initial treatment for 48 patients 2-18 years of age with acute asthma seen at the ED of a tertiary care pediatric hospital from August to December 2014. Scoring instruments were compared for inter-rater reliability between clinician types and their ability to predict hospitalization. Inter-rater reliability between clinician types was not different between instruments at any point and was lower (weighted kappa range 0.21-0.55) than values reported elsewhere. Predictive ability of most instruments for hospitalization was higher after treatment than before treatment (p < 0.05) and may vary between instruments after treatment (p = 0.054). We demonstrate the feasibility of comparing multiple clinical scoring instruments simultaneously in ED clinical practice. Scoring instruments had higher predictive ability for hospitalization after treatment than before treatment and may differ in their predictive ability after initial treatment. Definitive conclusions about the best instrument or meaningful comparison between instruments will require a study with a larger sample size.
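
    The agreement statistic reported here (weighted kappa) is straightforward to compute once paired ratings are available. A minimal sketch with invented ordinal severity scores from two clinician types, using scikit-learn:

        from sklearn.metrics import cohen_kappa_score

        # ordinal severity categories assigned to the same 10 patients (invented data)
        nurse_scores     = [1, 2, 2, 3, 1, 2, 3, 3, 1, 2]
        physician_scores = [1, 2, 3, 3, 1, 1, 3, 2, 1, 2]

        # linearly weighted kappa penalises near-misses less than distant disagreements
        print(cohen_kappa_score(nurse_scores, physician_scores, weights="linear"))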

  4. Computed tomography apparatus with detector sensitivity correction

    International Nuclear Information System (INIS)

    Waltham, R. M.

    1984-01-01

    In a rotary fan-beam computed tomography apparatus that uses recurrent relative displacement between the source and detectors (e.g. a deflected-spot X-ray tube) to recalibrate the detectors in chain-like sequences across the detector array, by successive pairwise common-path sensitivity comparisons starting from a terminal detector, each sequence normally involves a number of successive comparisons, and consistent but unpredictable errors are found to occur, leading to incorrect Hounsfield values in the computed image matrix. The improvement comprises locating at least one highly stable, radiation-transparent detector in front of the array at an intermediate point and using its output to further correct the chain-corrected detector sensitivity values. A detector comprising a plastics scintillator optically coupled to a photomultiplier is described, whose output pulses are counted during a rotational scan and compared with the mean corresponding measurement from the detectors lying behind it, to form a sensitivity ratio. From this ratio and data derived during calibration, a measured sensitivity value for the detectors is determined for each scan and is compared with the corresponding chain-corrected sensitivity value to generate a further sensitivity correction value, which is then distributed among the detectors of the comparison sequence.

  5. Multiple Score Comparison: a network meta-analysis approach to comparison and external validation of prognostic scores

    Directory of Open Access Journals (Sweden)

    Sarah R. Haile

    2017-12-01

    Full Text Available Abstract Background Prediction models and prognostic scores have been increasingly popular in both clinical practice and clinical research settings, for example to aid in risk-based decision making or control for confounding. In many medical fields, a large number of prognostic scores are available, but practitioners may find it difficult to choose between them due to lack of external validation as well as lack of comparisons between them. Methods Borrowing methodology from network meta-analysis, we describe an approach to Multiple Score Comparison meta-analysis (MSC) which permits concurrent external validation and comparisons of prognostic scores using individual patient data (IPD) arising from a large-scale international collaboration. We describe the challenges in adapting network meta-analysis to the MSC setting, for instance the need to explicitly include correlations between the scores on a cohort level, and how to deal with many multi-score studies. We propose first using IPD to make cohort-level aggregate discrimination or calibration scores, comparing all to a common comparator. Then, standard network meta-analysis techniques can be applied, taking care to consider correlation structures in cohorts with multiple scores. Transitivity, consistency and heterogeneity are also examined. Results We provide a clinical application, comparing prognostic scores for 3-year mortality in patients with chronic obstructive pulmonary disease using data from a large-scale collaborative initiative. We focus on the discriminative properties of the prognostic scores. Our results show clear differences in performance, with ADO and eBODE showing higher discrimination with respect to mortality than other considered scores. The assumptions of transitivity and local and global consistency were not violated. Heterogeneity was small. Conclusions We applied a network meta-analytic methodology to externally validate and concurrently compare the prognostic properties

  6. Default mode network links to visual hallucinations: A comparison between Parkinson's disease and multiple system atrophy.

    Science.gov (United States)

    Franciotti, Raffaella; Delli Pizzi, Stefano; Perfetti, Bernardo; Tartaro, Armando; Bonanni, Laura; Thomas, Astrid; Weis, Luca; Biundo, Roberta; Antonini, Angelo; Onofrj, Marco

    2015-08-01

    Studying default mode network activity or connectivity in different parkinsonisms, with or without visual hallucinations, could highlight its roles in clinical phenotypes' expression. Multiple system atrophy is the archetype of parkinsonism without visual hallucinations, variably appearing instead in Parkinson's disease (PD). We aimed to evaluate default mode network functions in multiple system atrophy in comparison with PD. Functional magnetic resonance imaging evaluated default mode network activity and connectivity in 15 multiple system atrophy patients, 15 healthy controls, 15 early PD patients matched for disease duration, 30 severe PD patients (15 with and 15 without visual hallucinations), matched with multiple system atrophy for disease severity. Cortical thickness and neuropsychological evaluations were also performed. Multiple system atrophy had reduced default mode network activity compared with controls and PD with hallucinations, and no differences with PD (early or severe) without hallucinations. In PD with visual hallucinations, activity and connectivity were preserved compared with controls and higher than in other groups. In early PD, connectivity was lower than in controls but higher than in multiple system atrophy and severe PD without hallucinations. Cortical thickness was reduced in severe PD, with and without hallucinations, and correlated only with disease duration. Higher anxiety scores were found in patients without hallucinations. Default mode network activity and connectivity were higher in PD with visual hallucinations and reduced in multiple system atrophy and PD without visual hallucinations. Cortical thickness comparisons suggest that functional, rather than structural, changes underlie the activity and connectivity differences. © 2015 International Parkinson and Movement Disorder Society.

  7. Comparison of MR-based attenuation correction and CT-based attenuation correction of whole-body PET/MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Izquierdo-Garcia, David [Mount Sinai School of Medicine, Translational and Molecular Imaging Institute, New York, NY (United States); Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA (United States); Sawiak, Stephen J. [University of Cambridge, Wolfson Brain Imaging Centre, Cambridge (United Kingdom); Knesaurek, Karin; Machac, Joseph [Mount Sinai School of Medicine, Division of Nuclear Medicine, Department of Radiology, New York, NY (United States); Narula, Jagat [Mount Sinai School of Medicine, Department of Cardiology, Zena and Michael A. Weiner Cardiovascular Institute and Marie-Josee and Henry R. Kravis Cardiovascular Health Center, New York, NY (United States); Fuster, Valentin [Mount Sinai School of Medicine, Department of Cardiology, Zena and Michael A. Weiner Cardiovascular Institute and Marie-Josee and Henry R. Kravis Cardiovascular Health Center, New York, NY (United States); The Centro Nacional de Investigaciones Cardiovasculares (CNIC), Madrid (Spain); Fayad, Zahi A. [Mount Sinai School of Medicine, Translational and Molecular Imaging Institute, New York, NY (United States); Mount Sinai School of Medicine, Department of Cardiology, Zena and Michael A. Weiner Cardiovascular Institute and Marie-Josee and Henry R. Kravis Cardiovascular Health Center, New York, NY (United States); Mount Sinai School of Medicine, Department of Radiology, New York, NY (United States)

    2014-08-15

    The objective of this study was to evaluate the performance of the built-in MR-based attenuation correction (MRAC) included in the combined whole-body Ingenuity TF PET/MR scanner and compare it to the performance of CT-based attenuation correction (CTAC) as the gold standard. Included in the study were 26 patients who underwent clinical whole-body FDG PET/CT imaging and subsequently PET/MR imaging (mean delay 100 min). Patients were separated into two groups: the alpha group (14 patients) without MR coils during PET/MR imaging and the beta group (12 patients) with MR coils present (neurovascular, spine, cardiac and torso coils). All images were coregistered to the same space (PET/MR). The two PET images from PET/MR reconstructed using MRAC and CTAC were compared by voxel-based and region-based methods (with ten regions of interest, ROIs). Lesions were also compared by an experienced clinician. Body mass index and lung density showed significant differences between the alpha and beta groups. Right and left lung densities were also significantly different within each group. The percentage differences in uptake values using MRAC in relation to those using CTAC were greater in the beta group than in the alpha group (alpha group -0.2 ± 33.6 %, R² = 0.98, p < 0.001; beta group 10.31 ± 69.86 %, R² = 0.97, p < 0.001). In comparison to CTAC, MRAC led to underestimation of the PET values by less than 10 % on average, although some ROIs and lesions did differ by more (including the spine, lung and heart). The beta group (imaged with coils present) showed increased overall PET quantification as well as increased variability compared to the alpha group (imaged without coils). PET data reconstructed with MRAC and CTAC showed some differences, mostly in relation to air pockets, metallic implants and attenuation differences in large bone areas (such as the pelvis and spine) due to the segmentation limitation of the MRAC method. (orig.)
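
    The percentage-difference figures above follow the usual definition of MRAC uptake relative to CTAC uptake within a region of interest. A minimal sketch with synthetic volumes (the arrays, noise level and ROI are illustrative only):

        import numpy as np

        def percent_difference(pet_mrac, pet_ctac, roi_mask):
            # mean percentage difference of MRAC-reconstructed uptake relative to CTAC in one ROI
            mrac = pet_mrac[roi_mask].mean()
            ctac = pet_ctac[roi_mask].mean()
            return 100.0 * (mrac - ctac) / ctac

        rng = np.random.default_rng(1)
        pet_ctac = rng.uniform(1.0, 5.0, size=(32, 32, 32))                 # stand-in CTAC volume
        pet_mrac = pet_ctac * rng.normal(0.95, 0.05, size=pet_ctac.shape)   # MRAC with ~5 % bias
        roi = np.zeros(pet_ctac.shape, dtype=bool)
        roi[10:20, 10:20, 10:20] = True                                     # cubic stand-in ROI
        print(f"{percent_difference(pet_mrac, pet_ctac, roi):+.1f} %")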

  8. Comparison of MR-based attenuation correction and CT-based attenuation correction of whole-body PET/MR imaging

    International Nuclear Information System (INIS)

    Izquierdo-Garcia, David; Sawiak, Stephen J.; Knesaurek, Karin; Machac, Joseph; Narula, Jagat; Fuster, Valentin; Fayad, Zahi A.

    2014-01-01

    The objective of this study was to evaluate the performance of the built-in MR-based attenuation correction (MRAC) included in the combined whole-body Ingenuity TF PET/MR scanner and compare it to the performance of CT-based attenuation correction (CTAC) as the gold standard. Included in the study were 26 patients who underwent clinical whole-body FDG PET/CT imaging and subsequently PET/MR imaging (mean delay 100 min). Patients were separated into two groups: the alpha group (14 patients) without MR coils during PET/MR imaging and the beta group (12 patients) with MR coils present (neurovascular, spine, cardiac and torso coils). All images were coregistered to the same space (PET/MR). The two PET images from PET/MR reconstructed using MRAC and CTAC were compared by voxel-based and region-based methods (with ten regions of interest, ROIs). Lesions were also compared by an experienced clinician. Body mass index and lung density showed significant differences between the alpha and beta groups. Right and left lung densities were also significantly different within each group. The percentage differences in uptake values using MRAC in relation to those using CTAC were greater in the beta group than in the alpha group (alpha group -0.2 ± 33.6 %, R² = 0.98, p < 0.001; beta group 10.31 ± 69.86 %, R² = 0.97, p < 0.001). In comparison to CTAC, MRAC led to underestimation of the PET values by less than 10 % on average, although some ROIs and lesions did differ by more (including the spine, lung and heart). The beta group (imaged with coils present) showed increased overall PET quantification as well as increased variability compared to the alpha group (imaged without coils). PET data reconstructed with MRAC and CTAC showed some differences, mostly in relation to air pockets, metallic implants and attenuation differences in large bone areas (such as the pelvis and spine) due to the segmentation limitation of the MRAC method. (orig.)

  9. Improving neutron multiplicity counting for the spatial dependence of multiplication: Results for spherical plutonium samples

    Energy Technology Data Exchange (ETDEWEB)

    Göttsche, Malte, E-mail: malte.goettsche@physik.uni-hamburg.de; Kirchner, Gerald

    2015-10-21

    The fissile mass deduced from a neutron multiplicity counting measurement of high mass dense items is underestimated if the spatial dependence of the multiplication is not taken into account. It is shown that an appropriate physics-based correction successfully removes the bias. It depends on four correction coefficients which can only be exactly determined if the sample geometry and composition are known. In some cases, for example in warhead authentication, available information on the sample will be very limited. MCNPX-PoliMi simulations have been performed to obtain the correction coefficients for a range of spherical plutonium metal geometries, with and without polyethylene reflection placed around the spheres. For hollow spheres, the analysis shows that the correction coefficients can be approximated with high accuracy as a function of the sphere's thickness depending only slightly on the radius. If the thickness remains unknown, less accurate estimates of the correction coefficients can be obtained from the neutron multiplication. The influence of isotopic composition is limited. The correction coefficients become somewhat smaller when reflection is present.

  10. Detector correction in large container inspection systems

    CERN Document Server

    Kang Ke Jun; Chen Zhi Qiang

    2002-01-01

    In large container inspection systems, the image is constructed by parallel scanning with a one-dimensional detector array with a linac used as the X-ray source. The linear nonuniformity and nonlinearity of multiple detectors and the nonuniform intensity distribution of the X-ray sector beam result in horizontal striations in the scan image. This greatly impairs the image quality, so the image needs to be corrected. The correction parameters are determined experimentally by scaling the detector responses at multiple points with logarithmic interpolation of the results. The horizontal striations are eliminated by modifying the original image data with the correction parameters. This method has proven to be effective and applicable in large container inspection systems.
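
    The correction idea sketched here - calibrating each detector at a few known signal levels and interpolating between them in log space - can be written down compactly. The calibration numbers below are invented; production systems use many detectors and more calibration points.

        import numpy as np

        # calibration: each detector read at a few reference attenuation steps
        # (rows = detectors, columns = calibration points)
        calib_response = np.array([[9800.0, 5200.0, 2600.0],
                                   [9500.0, 5050.0, 2500.0]])
        true_log_signal = np.log(np.array([10000.0, 5000.0, 2500.0]))

        def correct(raw, det):
            # map a raw reading of one detector onto the common log-signal scale by
            # piecewise-linear interpolation between its own calibration points
            x = np.log(calib_response[det])[::-1]   # np.interp needs ascending x
            y = true_log_signal[::-1]
            return np.exp(np.interp(np.log(raw), x, y))

        print(correct(5200.0, det=0), correct(5050.0, det=1))   # both map to ~5000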

  11. Tract-oriented statistical group comparison of diffusion in sheet-like white matter

    DEFF Research Database (Denmark)

    Lyksborg, Mark; Dyrby, T. B.; Sorensen, P. S.

    2013-01-01

    tube-like shapes, not always suitable for modelling the white matter tracts of the brain. The tract-oriented technique, aimed at group studies, integrates the usage of multivariate features and outputs a single significance value indicating tract-specific differences. This is in contrast to voxel-based analysis techniques, which output a significance value per voxel and therefore require multiple comparison correction. We demonstrate our technique by comparing a group of controls with a group of Multiple Sclerosis subjects, obtaining significant differences on 11 different fascicle structures.

  12. Comparison of different Aethalometer correction schemes and a reference multi-wavelength absorption technique for ambient aerosol data

    Science.gov (United States)

    Saturno, Jorge; Pöhlker, Christopher; Massabò, Dario; Brito, Joel; Carbone, Samara; Cheng, Yafang; Chi, Xuguang; Ditas, Florian; Hrabě de Angelis, Isabella; Morán-Zuloaga, Daniel; Pöhlker, Mira L.; Rizzo, Luciana V.; Walter, David; Wang, Qiaoqiao; Artaxo, Paulo; Prati, Paolo; Andreae, Meinrat O.

    2017-08-01

    Deriving absorption coefficients from Aethalometer attenuation data requires different corrections to compensate for artifacts related to filter-loading effects, scattering by filter fibers, and scattering by aerosol particles. In this study, two different correction schemes were applied to seven-wavelength Aethalometer data, using multi-angle absorption photometer (MAAP) data as a reference absorption measurement at 637 nm. The compensation algorithms were compared to five-wavelength offline absorption measurements obtained with a multi-wavelength absorbance analyzer (MWAA), which serves as a multiple-wavelength reference measurement. The online measurements took place in the Amazon rainforest, from the wet-to-dry transition season to the dry season (June-September 2014). The mean absorption coefficient (at 637 nm) during this period was 1.8 ± 2.1 Mm-1, with a maximum of 15.9 Mm-1. Under these conditions, the filter-loading compensation was negligible. One of the correction schemes was found to artificially increase the short-wavelength absorption coefficients. It was found that accounting for the aerosol optical properties in the scattering compensation significantly affects the absorption Ångström exponent (åABS) retrievals. Proper Aethalometer data compensation schemes are crucial to retrieve the correct åABS, which is commonly implemented in brown carbon contribution calculations. Additionally, we found that the wavelength dependence of uncompensated Aethalometer attenuation data significantly correlates with the åABS retrieved from offline MWAA measurements.
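
    The absorption Ångström exponent discussed here is conventionally obtained from absorption coefficients measured at two wavelengths. A minimal sketch (the coefficients and wavelengths below are illustrative values, not data from this study):

        import numpy as np

        def angstrom_exponent(b_abs_1, wl_1, b_abs_2, wl_2):
            # absorption Angstrom exponent from absorption coefficients at two wavelengths
            return -np.log(b_abs_1 / b_abs_2) / np.log(wl_1 / wl_2)

        # e.g. absorption coefficients in Mm^-1 at 470 nm and 880 nm
        print(angstrom_exponent(4.2, 470.0, 1.8, 880.0))   # ~1.35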

  13. Comparison between two methodologies for uniformity correction of extensive reference sources

    International Nuclear Information System (INIS)

    Junior, Iremar Alves S.; Siqueira, Paulo de T.D.; Vivolo, Vitor; Potiens, Maria da Penha A.; Nascimento, Eduardo

    2016-01-01

    This article presents the procedures used to obtain uniformity correction factors for extensive reference sources as proposed by two different methodologies. The first methodology is presented in the NPL Good Practice Guide No. 14, which provides a numerical correction. The second uses the radiation transport code MCNP5 to obtain the correction factor. Both methods yield very similar correction factor values, with a maximum deviation of 0.24%. (author)

  14. Comparison of two screening corrections to the additivity rule for the calculation of electron scattering from polyatomic molecules

    International Nuclear Information System (INIS)

    Blanco, F.; Rosado, J.; Illana, A.; Garcia, G.

    2010-01-01

    The SCAR and EGAR procedures have been proposed in order to extend the applicability of the additivity rule for the calculation of electron-molecule total cross sections to lower energies. Both approximate treatments arise from considering geometrical screening corrections due to the partial overlapping of atoms in the molecule, as seen by the incident electrons. The main features, results and limitations of the two treatments are compared here by applying them to several species of different sizes.

  15. A comparison of high-order explicit Runge–Kutta, extrapolation, and deferred correction methods in serial and parallel

    KAUST Repository

    Ketcheson, David I.

    2014-06-13

    We compare the three main types of high-order one-step initial value solvers: extrapolation, spectral deferred correction, and embedded Runge–Kutta pairs. We consider orders four through twelve, including both serial and parallel implementations. We cast extrapolation and deferred correction methods as fixed-order Runge–Kutta methods, providing a natural framework for the comparison. The stability and accuracy properties of the methods are analyzed by theoretical measures, and these are compared with the results of numerical tests. In serial, the eighth-order pair of Prince and Dormand (DOP8) is most efficient. However, other high-order methods can be more efficient than DOP8 when implemented in parallel. This is demonstrated by comparing a parallelized version of the well-known ODEX code with the (serial) DOP853 code. For an N-body problem with N = 400, the experimental extrapolation code is as fast as the tuned Runge–Kutta pair at loose tolerances, and is up to two times as fast at tight tolerances.
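
    One of the serial reference methods named above, the eighth-order Dormand–Prince pair, is available in standard libraries and is easy to exercise on a toy problem. This sketch uses SciPy's implementation rather than the DOP853 code benchmarked in the paper:

        import numpy as np
        from scipy.integrate import solve_ivp

        # simple test problem: a harmonic oscillator with known solution y(t) = cos(t)
        def rhs(t, y):
            return [y[1], -y[0]]

        sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], method="DOP853",
                        rtol=1e-10, atol=1e-12)
        print(sol.y[0, -1], np.cos(10.0))   # numerical vs. exact value at t = 10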

  16. Multiple and dependent scattering by densely packed discrete spheres: Comparison of radiative transfer and Maxwell theory

    International Nuclear Information System (INIS)

    Ma, L.X.; Tan, J.Y.; Zhao, J.M.; Wang, F.Q.; Wang, C.A.

    2017-01-01

    The radiative transfer equation (RTE) has been widely used to deal with multiple scattering of light by sparsely and randomly distributed discrete particles. However, for densely packed particles, the RTE becomes questionable due to strong dependent scattering effects. This paper examines the accuracy of RTE by comparing with the exact electromagnetic theory. For an imaginary spherical volume filled with randomly distributed, densely packed spheres, the RTE is solved by the Monte Carlo method combined with the Percus–Yevick hard model to consider the dependent scattering effect, while the electromagnetic calculation is based on the multi-sphere superposition T-matrix method. The Mueller matrix elements of the system with different size parameters and volume fractions of spheres are obtained using both methods. The results verify that the RTE fails to deal with the systems with a high-volume fraction due to the dependent scattering effects. Apart from the effects of forward interference scattering and coherent backscattering, the Percus–Yevick hard sphere model shows good accuracy in accounting for the far-field interference effects for medium or smaller size parameters (up to 6.964 in this study). For densely packed discrete spheres with large size parameters (equals 13.928 in this study), the improvement of dependent scattering correction tends to deteriorate. The observations indicate that caution must be taken when using RTE in dealing with the radiative transfer in dense discrete random media even though the dependent scattering correction is applied. - Highlights: • The Muller matrix of randomly distributed, densely packed spheres are investigated. • The effects of multiple scattering and dependent scattering are analyzed. • The accuracy of radiative transfer theory for densely packed spheres is discussed. • Dependent scattering correction takes effect at medium size parameter or smaller. • Performance of dependent scattering correction

  17. Comparison of different Aethalometer correction schemes and a reference multi-wavelength absorption technique for ambient aerosol data

    Directory of Open Access Journals (Sweden)

    J. Saturno

    2017-08-01

    Full Text Available Deriving absorption coefficients from Aethalometer attenuation data requires different corrections to compensate for artifacts related to filter-loading effects, scattering by filter fibers, and scattering by aerosol particles. In this study, two different correction schemes were applied to seven-wavelength Aethalometer data, using multi-angle absorption photometer (MAAP) data as a reference absorption measurement at 637 nm. The compensation algorithms were compared to five-wavelength offline absorption measurements obtained with a multi-wavelength absorbance analyzer (MWAA), which serves as a multiple-wavelength reference measurement. The online measurements took place in the Amazon rainforest, from the wet-to-dry transition season to the dry season (June–September 2014). The mean absorption coefficient (at 637 nm) during this period was 1.8 ± 2.1 Mm−1, with a maximum of 15.9 Mm−1. Under these conditions, the filter-loading compensation was negligible. One of the correction schemes was found to artificially increase the short-wavelength absorption coefficients. It was found that accounting for the aerosol optical properties in the scattering compensation significantly affects the absorption Ångström exponent (åABS) retrievals. Proper Aethalometer data compensation schemes are crucial to retrieve the correct åABS, which is commonly implemented in brown carbon contribution calculations. Additionally, we found that the wavelength dependence of uncompensated Aethalometer attenuation data significantly correlates with the åABS retrieved from offline MWAA measurements.

  18. Feedback enhances the positive effects and reduces the negative effects of multiple-choice testing.

    Science.gov (United States)

    Butler, Andrew C; Roediger, Henry L

    2008-04-01

    Multiple-choice tests are used frequently in higher education without much consideration of the impact this form of assessment has on learning. Multiple-choice testing enhances retention of the material tested (the testing effect); however, unlike other tests, multiple-choice can also be detrimental because it exposes students to misinformation in the form of lures. The selection of lures can lead students to acquire false knowledge (Roediger & Marsh, 2005). The present research investigated whether feedback could be used to boost the positive effects and reduce the negative effects of multiple-choice testing. Subjects studied passages and then received a multiple-choice test with immediate feedback, delayed feedback, or no feedback. In comparison with the no-feedback condition, both immediate and delayed feedback increased the proportion of correct responses and reduced the proportion of intrusions (i.e., lure responses from the initial multiple-choice test) on a delayed cued recall test. Educators should provide feedback when using multiple-choice tests.

  19. Evaluation of intensity drift correction strategies using MetaboDrift, a normalization tool for multi-batch metabolomics data.

    Science.gov (United States)

    Thonusin, Chanisa; IglayReger, Heidi B; Soni, Tanu; Rothberg, Amy E; Burant, Charles F; Evans, Charles R

    2017-11-10

    In recent years, mass spectrometry-based metabolomics has increasingly been applied to large-scale epidemiological studies of human subjects. However, the successful use of metabolomics in this context is subject to the challenge of detecting biologically significant effects despite substantial intensity drift that often occurs when data are acquired over a long period or in multiple batches. Numerous computational strategies and software tools have been developed to aid in correcting for intensity drift in metabolomics data, but most of these techniques are implemented using command-line driven software and custom scripts which are not accessible to all end users of metabolomics data. Further, it has not yet become routine practice to assess the quantitative accuracy of drift correction against techniques which enable true absolute quantitation such as isotope dilution mass spectrometry. We developed an Excel-based tool, MetaboDrift, to visually evaluate and correct for intensity drift in a multi-batch liquid chromatography - mass spectrometry (LC-MS) metabolomics dataset. The tool enables drift correction based on either quality control (QC) samples analyzed throughout the batches or using QC-sample independent methods. We applied MetaboDrift to an original set of clinical metabolomics data from a mixed-meal tolerance test (MMTT). The performance of the method was evaluated for multiple classes of metabolites by comparison with normalization using isotope-labeled internal standards. QC sample-based intensity drift correction significantly improved correlation with IS-normalized data, and resulted in detection of additional metabolites with significant physiological response to the MMTT. The relative merits of different QC-sample curve fitting strategies are discussed in the context of batch size and drift pattern complexity. Our drift correction tool offers a practical, simplified approach to drift correction and batch combination in large metabolomics studies
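
    The quality-control-sample strategy described above can be reduced to a simple recipe: fit a smooth trend to the QC-sample intensities over injection order and normalise every sample by that trend. The sketch below shows the generic idea with a polynomial fit for a single metabolite; it is not the MetaboDrift implementation, which is Excel-based and offers several curve-fitting options.

        import numpy as np

        def qc_drift_correct(intensity, order, is_qc, deg=2):
            # fit a low-order polynomial to QC intensities vs. injection order and
            # normalise every sample by the fitted trend (one metabolite at a time)
            coeff = np.polyfit(order[is_qc], intensity[is_qc], deg)
            trend = np.polyval(coeff, order)
            return intensity / trend * np.median(intensity[is_qc])

        # synthetic batch: constant true signal with a linear downward drift imposed
        order = np.arange(40)
        is_qc = (order % 5 == 0)
        intensity = 1000.0 * (1 - 0.01 * order) + np.random.default_rng(2).normal(0, 5, 40)
        corrected = qc_drift_correct(intensity, order, is_qc)
        print(intensity[[0, -1]], corrected[[0, -1]])   # drift largely removed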

  20. Advances in ranking and selection, multiple comparisons, and reliability methodology and applications

    CERN Document Server

    Balakrishnan, N; Nagaraja, HN

    2007-01-01

    S. Panchapakesan has made significant contributions to ranking and selection and has published in many other areas of statistics, including order statistics, reliability theory, stochastic inequalities, and inference. Written in his honor, the twenty invited articles in this volume reflect recent advances in these areas and form a tribute to Panchapakesan's influence and impact on these areas. Thematically organized, the chapters cover a broad range of topics from: Inference; Ranking and Selection; Multiple Comparisons and Tests; Agreement Assessment; Reliability; and Biostatistics. Featuring

  1. Multiple scattering corrections to the Beer-Lambert law. 2: Detector with a variable field of view.

    Science.gov (United States)

    Zardecki, A; Tam, W G

    1982-07-01

    The multiple scattering corrections to the Beer-Lambert law in the case of a detector with a variable field of view are analyzed. We introduce transmission functions relating the received radiant power to reference power levels relevant to two different experimental situations. In the first case, the transmission function relates the received power to a reference power level appropriate to a nonattenuating medium. In the second case, the reference power level is established by bringing the receiver to the close-up position with respect to the source. To examine the effect of varying the detector field of view, the behavior of the gain factor is studied. Numerical results modeling the laser beam propagation in fog, cloud, and rain are presented.
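
    The quantities involved can be summarised by a single relation: the received power is the Beer-Lambert term multiplied by a gain factor G >= 1 that accounts for multiply scattered light collected by the finite field of view. The functional form below and the value G = 2 are illustrative assumptions, not results from the paper, which computes the gain from multiple-scattering theory for specific fog, cloud and rain models.

        import numpy as np

        def received_power(p0, tau, gain):
            # Beer-Lambert transmission with a multiplicative multiple-scattering gain
            # factor; gain = 1 recovers the unattenuated single-scattering law
            return p0 * np.exp(-tau) * gain

        # at optical depth 3: pure Beer-Lambert vs. a wide-field-of-view detector
        # that also collects forward-scattered light (gain of 2 assumed)
        print(received_power(1.0, 3.0, gain=1.0), received_power(1.0, 3.0, gain=2.0))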

  2. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers

    OpenAIRE

    Dobie, Robert A; Wojcik, Nancy C

    2015-01-01

    Objectives The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999–2006) show that hearing thresholds in the US population have improved ...

  3. Motion artifacts in functional near-infrared spectroscopy: a comparison of motion correction techniques applied to real cognitive data

    Science.gov (United States)

    Brigadoi, Sabrina; Ceccherini, Lisa; Cutini, Simone; Scarpa, Fabio; Scatturin, Pietro; Selb, Juliette; Gagnon, Louis; Boas, David A.; Cooper, Robert J.

    2013-01-01

    Motion artifacts are a significant source of noise in many functional near-infrared spectroscopy (fNIRS) experiments. Despite this, there is no well-established method for their removal. Instead, functional trials of fNIRS data containing a motion artifact are often rejected completely. However, in most experimental circumstances the number of trials is limited, and multiple motion artifacts are common, particularly in challenging populations. Many methods have been proposed recently to correct for motion artifacts, including principal component analysis, spline interpolation, Kalman filtering, wavelet filtering and correlation-based signal improvement. The performance of different techniques has often been compared in simulations, but only rarely has it been assessed on real functional data. Here, we compare the performance of these motion correction techniques on real functional data acquired during a cognitive task, which required the participant to speak aloud, leading to a low-frequency, low-amplitude motion artifact that is correlated with the hemodynamic response. To compare the efficacy of these methods, objective metrics related to the physiology of the hemodynamic response have been derived. Our results show that it is always better to correct for motion artifacts than reject trials, and that wavelet filtering is the most effective approach to correcting this type of artifact, reducing the area under the curve where the artifact is present in 93% of the cases. Our results therefore support previous studies that have shown wavelet filtering to be the most promising and powerful technique for the correction of motion artifacts in fNIRS data. The analyses performed here can serve as a guide for others to objectively test the impact of different motion correction algorithms and therefore select the most appropriate for the analysis of their own fNIRS experiment. PMID:23639260

  4. Finite-Geometry and Polarized Multiple-Scattering Corrections of Experimental Fast- Neutron Polarization Data by Means of Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Aspelund, O; Gustafsson, B

    1967-05-15

    After an introductory discussion of various methods for correction of experimental left-right ratios for polarized multiple-scattering and finite-geometry effects, necessary and sufficient formulas for consistent tracking of polarization effects in successive scattering orders are derived. The simplifying assumptions are then made that the scattering is purely elastic and nuclear, and that in the description of the kinematics of the arbitrary scattering μ, only one triple-parameter - the so-called spin rotation parameter β^(μ) - is required. Based upon these formulas a general discussion of the importance of the correct inclusion of polarization effects in any scattering order is presented. Special attention is then paid to the question of depolarization of an already polarized beam. Subsequently, the afore-mentioned formulas are incorporated in the comprehensive Monte Carlo program MULTPOL, which has been designed so as to correctly account for finite-geometry effects in the sense that both the scattering sample and the detectors (both having cylindrical shapes) are objects of finite dimensions located at finite distances from each other and from the source of polarized fast neutrons. A special feature of MULTPOL is the application of the method of correlated sampling for reduction of the standard deviations of the results of the simulated experiment. Typical data of performance of MULTPOL have been obtained by the application of this program to the correction of experimental polarization data observed in n + ¹²C elastic scattering between 1 and 2 MeV. Finally, in the concluding remarks the possible modification of MULTPOL to other experimental geometries is briefly discussed.

  5. The hippocampus supports multiple cognitive processes through relational binding and comparison

    Directory of Open Access Journals (Sweden)

    Rosanna Kathleen Olsen

    2012-05-01

    Full Text Available It has been well established that the hippocampus plays a pivotal role in explicit long-term recognition memory. However, findings from amnesia, lesion and recording studies with non-human animals, eye-movement recording studies, and functional neuroimaging have recently converged upon a similar message: the functional reach of the hippocampus extends far beyond explicit recognition memory. Damage to the hippocampus affects performance on a number of cognitive tasks including recognition memory after short and long delays and visual discrimination. Additionally, with the advent of neuroimaging techniques that have fine spatial and temporal resolution, findings have emerged that show the elicitation of hippocampal responses within the first few hundred milliseconds of stimulus/task onset. These responses occur for novel and previously viewed information during a time when perceptual processing is traditionally thought to occur, and long before overt recognition responses are made. We propose that the hippocampus is obligatorily involved in the binding of disparate elements across both space and time, and in the comparison of such relational memory representations. Furthermore, the hippocampus supports relational binding and comparison with or without conscious awareness for the relational representations that are formed, retrieved and/or compared. It is by virtue of these basic binding and comparison functions that the reach of the hippocampus extends beyond long-term recognition memory and underlies task performance in multiple cognitive domains.

  6. Internal correction of spectral interferences and mass bias for selenium metabolism studies using enriched stable isotopes in combination with multiple linear regression.

    Science.gov (United States)

    Lunøe, Kristoffer; Martínez-Sierra, Justo Giner; Gammelgaard, Bente; Alonso, J Ignacio García

    2012-03-01

    The analytical methodology for the in vivo study of selenium metabolism using two enriched selenium isotopes has been modified, allowing for the internal correction of spectral interferences and mass bias both for total selenium and speciation analysis. The method is based on the combination of an already described dual-isotope procedure with a new data treatment strategy based on multiple linear regression. A metabolic enriched isotope ((77)Se) is given orally to the test subject and a second isotope ((74)Se) is employed for quantification. In our approach, all possible polyatomic interferences occurring in the measurement of the isotope composition of selenium by collision cell quadrupole ICP-MS are taken into account and their relative contribution calculated by multiple linear regression after minimisation of the residuals. As a result, all spectral interferences and mass bias are corrected internally allowing the fast and independent quantification of natural abundance selenium ((nat)Se) and enriched (77)Se. In this sense, the calculation of the tracer/tracee ratio in each sample is straightforward. The method has been applied to study the time-related tissue incorporation of (77)Se in male Wistar rats while maintaining the (nat)Se steady-state conditions. Additionally, metabolically relevant information such as selenoprotein synthesis and selenium elimination in urine could be studied using the proposed methodology. In this case, serum proteins were separated by affinity chromatography while reverse phase was employed for urine metabolites. In both cases, (74)Se was used as a post-column isotope dilution spike. The application of multiple linear regression to the whole chromatogram allowed us to calculate the contribution of bromine hydride, selenium hydride, argon polyatomics and mass bias on the observed selenium isotope patterns. By minimising the square sum of residuals for the whole chromatogram, internal correction of spectral interferences and mass
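
    The unmixing step at the heart of this approach - expressing an observed isotope pattern as a linear combination of known reference patterns and solving by least squares - can be sketched briefly. The published method regresses over the whole chromatogram and includes hydride, argon-polyatomic and mass-bias terms; here only the core idea is shown, the enriched-tracer and interference columns are invented, and the natural Se abundances are standard tabulated values.

        import numpy as np
        from scipy.optimize import nnls

        # columns: natural-abundance Se, 77Se-enriched tracer, generic interference
        patterns = np.array([
            [0.0089, 0.001, 0.30],   # m/z 74
            [0.0937, 0.002, 0.25],   # m/z 76
            [0.0763, 0.990, 0.20],   # m/z 77
            [0.2377, 0.004, 0.15],   # m/z 78
            [0.4961, 0.002, 0.07],   # m/z 80
            [0.0873, 0.001, 0.03],   # m/z 82
        ])
        observed = patterns @ np.array([2.0, 0.5, 0.1])   # synthetic mixed signal

        # non-negative least squares recovers the contribution of each component
        contrib, residual = nnls(patterns, observed)
        print(contrib)   # approximately [2.0, 0.5, 0.1]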

  7. Accelerating Multiple Compound Comparison Using LINGO-Based Load-Balancing Strategies on Multi-GPUs.

    Science.gov (United States)

    Lin, Chun-Yuan; Wang, Chung-Hung; Hung, Che-Lun; Lin, Yu-Shiang

    2015-01-01

    Compound comparison is an important task for computational chemistry. By the comparison results, potential inhibitors can be found and then used for the pharmacy experiments. The time complexity of a pairwise compound comparison is O(n²), where n is the maximal length of compounds. In general, the length of compounds is tens to hundreds, and the computation time is small. However, more and more compounds have been synthesized and extracted now, even more than tens of millions. Therefore, it still will be time-consuming when comparing with a large amount of compounds (seen as a multiple compound comparison problem, abbreviated to MCC). The intrinsic time complexity of the MCC problem is O(k²n²) with k compounds of maximal length n. In this paper, we propose a GPU-based algorithm for the MCC problem, called CUDA-MCC, on single- and multi-GPUs. Four LINGO-based load-balancing strategies are considered in CUDA-MCC in order to accelerate the computation speed among thread blocks on GPUs. CUDA-MCC was implemented in C+OpenMP+CUDA. CUDA-MCC ran 45 times and 391 times faster than its CPU version on a single NVIDIA Tesla K20m GPU card and a dual-NVIDIA Tesla K20m GPU card, respectively, under the experimental results.
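
    The LINGO representation referred to above decomposes a SMILES string into overlapping q-character substrings and compares two compounds with a Tanimoto-style coefficient over those multisets. A simplified CPU sketch (no SMILES canonicalisation or ring-number substitution, illustrative molecules, q = 4), including the O(k^2) pairwise loop that the paper parallelises on GPUs:

        from collections import Counter
        from itertools import combinations

        def lingos(smiles, q=4):
            # multiset of overlapping q-character substrings of a SMILES string
            return Counter(smiles[i:i + q] for i in range(len(smiles) - q + 1))

        def lingo_tanimoto(a, b):
            # Tanimoto similarity between the LINGO multisets of two compounds
            la, lb = lingos(a), lingos(b)
            common = sum((la & lb).values())
            denom = sum(la.values()) + sum(lb.values()) - common
            return common / denom if denom else 0.0

        compounds = ["CC(=O)Oc1ccccc1C(=O)O", "CC(=O)Nc1ccc(O)cc1",
                     "c1ccccc1O", "c1ccccc1N"]          # illustrative SMILES
        for x, y in combinations(compounds, 2):         # the O(k^2) pairwise loop
            print(x, y, round(lingo_tanimoto(x, y), 2))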

  8. Search Strategy of Detector Position For Neutron Source Multiplication Method by Using Detected-Neutron Multiplication Factor

    International Nuclear Information System (INIS)

    Endo, Tomohiro

    2011-01-01

    In this paper, an alternative definition of a neutron multiplication factor, the detected-neutron multiplication factor kdet, is produced for the neutron source multiplication method (NSM). By using kdet, a search strategy of appropriate detector position for NSM is also proposed. The NSM is one of the practical subcritical measurement techniques, i.e., the NSM does not require any special equipment other than a stationary external neutron source and an ordinary neutron detector. Additionally, the NSM method is based on steady-state analysis, so that this technique is very suitable for quasi real-time measurement. It is noted that the correction factors play important roles in order to accurately estimate subcriticality from the measured neutron count rates. The present paper aims to clarify how to correct the subcriticality measured by the NSM method, the physical meaning of the correction factors, and how to reduce the impact of correction factors by setting a neutron detector at an appropriate detector position
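
    The basic point-model relation underlying source multiplication measurements is that the detected count rate scales as 1/(1 - k), so a reference state with known multiplication factor fixes the scale. The sketch below shows only this first-order estimate with invented numbers; the correction factors that this paper is concerned with (detector position, spectrum and spatial effects) are deliberately omitted.

        def nsm_keff(count_rate, count_rate_ref, k_ref):
            # first-order neutron source multiplication estimate: count rate ~ S/(1 - k),
            # so the ratio to a reference state with known k_ref gives the unknown k
            return 1.0 - (count_rate_ref / count_rate) * (1.0 - k_ref)

        print(nsm_keff(count_rate=5000.0, count_rate_ref=2000.0, k_ref=0.95))   # 0.98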

  9. Comparison of Intelligibility Measures for Adults with Parkinson's Disease, Adults with Multiple Sclerosis, and Healthy Controls

    Science.gov (United States)

    Stipancic, Kaila L.; Tjaden, Kris; Wilding, Gregory

    2016-01-01

    Purpose: This study obtained judgments of sentence intelligibility using orthographic transcription for comparison with previously reported intelligibility judgments obtained using a visual analog scale (VAS) for individuals with Parkinson's disease and multiple sclerosis and healthy controls (K. Tjaden, J. E. Sussman, & G. E. Wilding, 2014).…

  10. Assessment of systemic ventricle function in corrected transposition of great arteries with Gated SPECT: comparison with radionuclide ventriculography

    International Nuclear Information System (INIS)

    Alexanderson, E.; Espinola, N.; Duenas, D.; Fermon, S.; Acevedo, C.; Martinez, C.

    2002-01-01

    Corrected transposition of the great arteries is an uncommon congenital heart disease in which the right ventricle works as the systemic one. The QGS Gated SPECT program was designed to recognize the contours of the left ventricle and is a good method to evaluate left ventricular ejection fraction. The purpose of this study was to evaluate the right ventricular ejection fraction (RVEF) by gated SPECT using Tc-99m SestaMIBI in comparison with radionuclide ventriculography (RVG) in patients with corrected transposition of the great arteries. Methods: We performed gated SPECT and radionuclide ventriculography within 15 days of each other in 7 consecutive adult patients with the diagnosis of corrected transposition of the great arteries (5 men, 2 women; mean age 47 y). Gated tomographic data, including ventricular volumes and ejection fraction, were processed using the QGS automatic algorithm, whereas equilibrium radionuclide ventriculography used standard techniques. Results: We found a good correlation between the RVEF obtained with gated SPECT and that obtained with equilibrium radionuclide ventriculography. The mean RVEF was 41.2% with gated SPECT compared with 44.2% with equilibrium radionuclide ventriculography. Both methods recognized abnormal RVEF (<50%) in 5 patients with gated SPECT and with RVG, meanwhile another patient had normal RVEF with RVG and abnormal with gated SPECT. Conclusion: Quantitative gated tomography, using Tc-99m SestaMIBI, correlates well with radionuclide ventriculography for the assessment of right ventricular ejection fraction in patients with corrected transposition of the great arteries. These results support the clinical use of this technique in these patients.

  11. Quantum Corrections to the 'Atomistic' MOSFET Simulations

    Science.gov (United States)

    Asenov, Asen; Slavcheva, G.; Kaya, S.; Balasubramaniam, R.

    2000-01-01

    We have introduced in a simple and efficient manner quantum mechanical corrections in our 3D 'atomistic' MOSFET simulator using the density gradient formalism. We have studied in comparison with classical simulations the effect of the quantum mechanical corrections on the simulation of random dopant induced threshold voltage fluctuations, the effect of the single charge trapping on interface states and the effect of the oxide thickness fluctuations in decanano MOSFETs with ultrathin gate oxides. The introduction of quantum corrections enhances the threshold voltage fluctuations but does not affect significantly the amplitude of the random telegraph noise associated with single carrier trapping. The importance of the quantum corrections for proper simulation of oxide thickness fluctuation effects has also been demonstrated.

  12. Osteotomias segmentares múltiplas para a correção da cifose (Multiple segmental osteotomies for kyphosis correction)

    Directory of Open Access Journals (Sweden)

    Carlos Fernando Pereira da Silva Herrero

    2009-01-01

    Full Text Available OBJECTIVE: To evaluate the results of the surgical treatment of hyperkyphosis of the thoracic spine using Ponte's technique (multiple posterior osteotomies). METHODS: Ten patients (8 with Scheuermann's kyphosis and 2 with post-laminectomy kyphosis) submitted to surgical correction of a kyphotic deformity greater than 70° were retrospectively assessed. The age at the time of surgery ranged from 12 to 20 years (mean 16.8 ± 2.89 years). The radiographic parameters evaluated were the kyphosis, the lordosis and, whenever present, the scoliosis. The presence of proximal and distal junctional kyphosis, loss of correction, and complications such as loosening and breakage of the implants were also assessed. The radiographic parameters were evaluated preoperatively, in the immediate postoperative period and at late follow-up. RESULTS: The patients were followed for a period ranging from 24 to 144 months (mean 65.8 ± 39.92 months). The mean preoperative hyperkyphosis was 78.8° ± 7.59° (Cobb) and 47.5° ± 12.54° at follow-up, with a mean correction of 33.9° ± 9.53° and a mean loss of correction of 2.2°. CONCLUSION: The surgical treatment of thoracic hyperkyphosis by means of multiple posterior osteotomies achieved good correction of the deformity and minimal loss of correction over the follow-up period.

  13. Rapid descriptive sensory methods – Comparison of Free Multiple Sorting, Partial Napping, Napping, Flash Profiling and conventional profiling

    DEFF Research Database (Denmark)

    Dehlholm, Christian; Brockhoff, Per B.; Meinert, Lene

    2012-01-01

    Two new rapid descriptive sensory evaluation methods are introduced to the field of food sensory evaluation. The first method, free multiple sorting, allows subjects to perform ad libitum free sortings, until they feel that no more relevant dissimilarities among products remain. The second method ... is a modal restriction of Napping to specific sensory modalities, directing sensation and still allowing a holistic approach to products. The new methods are compared to Flash Profiling, Napping and conventional descriptive sensory profiling. Evaluations are performed by several panels of expert assessors ... are applied for the graphical validation and comparisons. This allows similar comparisons and is applicable to single-block evaluation designs such as Napping. The partial Napping allows repetitions on multiple sensory modalities, e.g. appearance, taste and mouthfeel, and shows the average ...

  14. Comparison of violence and abuse in juvenile correctional facilities and schools.

    Science.gov (United States)

    Davidson-Arad, Bilha; Benbenishty, Rami; Golan, Miriam

    2009-02-01

    Peer violence, peer sexual harassment and abuse, and staff abuse experienced by boys and girls in juvenile correctional facilities are compared with those experienced by peers in schools in the community. Responses of 360 youths in 20 gender-separated correctional facilities in Israel to a questionnaire tapping these forms of mistreatment were compared with those of 7,012 students in a representative sample of Israeli junior high and high schools. Victimization was reported more frequently by those in correctional facilities than by those in schools. However, some of the more prevalent forms of violence and abuse were reported with equal frequency in both settings, and some more frequently in schools. Despite being victimized more frequently, those in the correctional facilities tended to view their victimization as a significantly less serious problem than those in the schools and to rate the staff as doing a better job of dealing with the problem.

  15. Accelerating Multiple Compound Comparison Using LINGO-Based Load-Balancing Strategies on Multi-GPUs

    Directory of Open Access Journals (Sweden)

    Chun-Yuan Lin

    2015-01-01

    Full Text Available Compound comparison is an important task for computational chemistry. By the comparison results, potential inhibitors can be found and then used for the pharmacy experiments. The time complexity of a pairwise compound comparison is O(n²), where n is the maximal length of compounds. In general, the length of compounds is tens to hundreds, and the computation time is small. However, more and more compounds have been synthesized and extracted now, even more than tens of millions. Therefore, it still will be time-consuming when comparing with a large amount of compounds (seen as a multiple compound comparison problem, abbreviated to MCC). The intrinsic time complexity of the MCC problem is O(k²n²) with k compounds of maximal length n. In this paper, we propose a GPU-based algorithm for the MCC problem, called CUDA-MCC, on single- and multi-GPUs. Four LINGO-based load-balancing strategies are considered in CUDA-MCC in order to accelerate the computation speed among thread blocks on GPUs. CUDA-MCC was implemented in C+OpenMP+CUDA. CUDA-MCC ran 45 times and 391 times faster than its CPU version on a single NVIDIA Tesla K20m GPU card and a dual-NVIDIA Tesla K20m GPU card, respectively, under the experimental results.

  16. Polarization correction in the theory of energy losses by charged particles

    Energy Technology Data Exchange (ETDEWEB)

    Makarov, D. N., E-mail: makarovd0608@yandex.ru; Matveev, V. I. [Lomonosov Northern (Arctic) Federal University (Russian Federation)

    2015-05-15

    A method for finding the polarization (Barkas) correction in the theory of energy losses by charged particles in collisions with multielectron atoms is proposed. The Barkas correction is presented in a simple analytical form. We make comparisons with experimental data and show that applying the Barkas correction improves the agreement between theory and experiment.

  17. Comparison of laser epithelial keratomileusis and photorefractive keratectomy for the correction of myopia: a meta-analysis

    Institute of Scientific and Technical Information of China (English)

    CUI Min; CHEN Xiao-ming; L(U) Peng

    2008-01-01

    Background It is unclear whether laser epithelial keratomileusis (LASEK) has any significant advantage over photorefractive keratectomy (PRK) for correcting myopia. We undertook this meta-analysis of randomized controlled trials to examine possible differences in efficacy, accuracy, safety and side-effects between the two methods, LASEK and PRK, for correcting myopia. Methods A systematic literature retrieval was conducted in the PubMed, EMBASE, Chinese Bio-medicine Database, and Cochrane Controlled Trials Register to identify potentially relevant randomized controlled trials. The statistical analysis was performed using RevMan 4.2 software. The results included efficacy outcomes (proportion of eyes with uncorrected visual acuity (UCVA) ≥ 20/20 at 1 month and 12 months post-treatment), accuracy outcomes (proportion of eyes within ±0.50 diopters (D) of target refraction at 1 month and 12 months post-treatment), safety outcomes (loss of ≥2 lines of best spectacle-corrected visual acuity (BSCVA) at ≥ 6 months post-treatment), mean pain scores on day 1 post-treatment, and mean corneal haze scores at 6 and 12 months post-treatment. Results Seven articles describing a total of 604 eyes with myopia from 0 to -9.0 D were identified in this meta-analysis. The combined results showed that the efficacy and accuracy outcomes between the two groups at 1 month and 12 months post-treatment were comparable. No patient lost ≥ 2 lines of BSCVA at ≥ 6 months post-treatment in four relevant trials. Compared with PRK, LASEK did not relieve discomfort on day 1 post-treatment or reduce corneal haze intensity at 6 and 12 months post-treatment. Conclusions According to the available data, LASEK does not appear to have any advantage over PRK for correcting myopia from 0 to -9.0 D. This meta-analysis focuses mainly on the comparison of the early, mid-term and mid-long-term results of the two methods. Additional studies to compare the long-term (>one year) results should be considered.

  18. Extension of the Dytlewski-style dead time correction formalism for neutron multiplicity counting to any order

    International Nuclear Information System (INIS)

    Croft, Stephen; Favalli, Andrea

    2017-01-01

    Here, neutron multiplicity counting using shift-register calculus is an established technique in the science of international nuclear safeguards for the identification, verification, and assay of special nuclear materials. Typically, passive counting is used for Pu and mixed Pu-U items and active methods are used for U materials. Three counting rates (singles, doubles, and triples) are measured and, in combination with a simple analytical point-model, are used to calculate characteristics of the measurement item in terms of known detector and nuclear parameters. However, the measurement problem usually involves more than three quantities of interest; even in cases where the next higher order count rate, quads, is statistically viable, it is not quantitatively applied because corrections for dead time losses are currently not available in the predominant analysis paradigm. In this work we overcome this limitation by extending the commonly used dead time correction method, developed by Dytlewski, to quads. We also give results for pents, which may be of interest for certain special investigations. Extension to still higher orders may be accomplished by inspection based on the sequence presented. We discuss the foundations of the Dytlewski method, give limiting cases, and highlight the opportunities and implications that these new results expose. In particular, there exist a number of ways in which the new results may be combined with other approaches to extract the correlated rates, and this leads to various practical implementations.
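    As a hedged illustration of the quantities the record refers to (not the Dytlewski dead-time formalism itself), the reduced factorial moments of a multiplicity histogram, from which singles, doubles, triples, and quads rates are conventionally built, can be computed as follows; the histogram values are hypothetical and all dead-time terms are omitted.

    ```python
    import numpy as np
    from scipy.special import comb

    def reduced_factorial_moments(p, orders=(1, 2, 3, 4)):
        """Return sum_n C(n, r) * p(n) for a normalized multiplicity histogram p(n).
        Shift-register analysis builds doubles, triples, and quads from such moments
        of the gate histograms; dead-time corrections are not included here."""
        n = np.arange(len(p))
        return {r: float(np.sum(comb(n, r) * p)) for r in orders}

    # Hypothetical normalized histogram: probability of observing n events in a gate
    p = np.array([0.60, 0.25, 0.10, 0.04, 0.01])
    print(reduced_factorial_moments(p))
    ```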

  19. Formatt: Correcting protein multiple structural alignments by incorporating sequence alignment

    Directory of Open Access Journals (Sweden)

    Daniels Noah M

    2012-10-01

    Full Text Available Abstract Background The quality of multiple protein structure alignments is usually computed and assessed based on geometric functions of the coordinates of the backbone atoms from the protein chains. These purely geometric methods do not directly utilize protein sequence similarity, and in fact, determining the proper way to incorporate sequence similarity measures into the construction and assessment of protein multiple structure alignments has proved surprisingly difficult. Results We present Formatt, a multiple structure alignment program based on the purely geometric Matt multiple structure aligner, that also takes into account sequence similarity when constructing alignments. We show that Formatt outperforms Matt and other popular structure alignment programs on the HOMSTRAD benchmark. For the SABMark twilight zone benchmark set that captures more remote homology, Formatt and Matt outperform other programs; depending on the choice of embedded sequence aligner, Formatt produces either better sequence and structural alignments with a smaller core size than Matt, or similarly sized alignments with better sequence similarity, for a small cost in average RMSD. Conclusions Considering sequence information as well as purely geometric information seems to improve the quality of multiple structure alignments, though defining what constitutes the best alignment when sequence and structural measures would suggest different alignments remains a difficult open question.

  20. A correction to 'efficient and secure comparison for on-line auctions'

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Krøigaard, Mikkel; Geisler, Martin

    2009-01-01

    In this paper, we describe a correction to the cryptosystem proposed in Damgard et al. from Int. J. Applied Cryptography, Vol. 1, No. 1. Although the correction is small and does not affect the performance of the protocols from Damgard et al., it is necessary, as the cryptosystem is not secure...

  1. Reanalysis comparisons of upper tropospheric-lower stratospheric jets and multiple tropopauses

    Science.gov (United States)

    Manney, Gloria L.; Hegglin, Michaela I.; Lawrence, Zachary D.; Wargan, Krzysztof; Millán, Luis F.; Schwartz, Michael J.; Santee, Michelle L.; Lambert, Alyn; Pawson, Steven; Knosp, Brian W.; Fuller, Ryan A.; Daffer, William H.

    2017-09-01

    The representation of upper tropospheric-lower stratospheric (UTLS) jet and tropopause characteristics is compared in five modern high-resolution reanalyses for 1980 through 2014. Climatologies of upper tropospheric jet, subvortex jet (the lowermost part of the stratospheric vortex), and multiple tropopause frequency distributions in MERRA (Modern-Era Retrospective analysis for Research and Applications), ERA-I (ERA-Interim; the European Centre for Medium-Range Weather Forecasts, ECMWF, interim reanalysis), JRA-55 (the Japanese 55-year Reanalysis), and CFSR (the Climate Forecast System Reanalysis) are compared with those in MERRA-2. Differences between alternate products from individual reanalysis systems are assessed; in particular, a comparison of CFSR data on model and pressure levels highlights the importance of vertical grid spacing. Most of the differences in distributions of UTLS jets and multiple tropopauses are consistent with the differences in assimilation model grids and resolution - for example, ERA-I (with coarsest native horizontal resolution) typically shows a significant low bias in upper tropospheric jets with respect to MERRA-2, and JRA-55 (the Japanese 55-year Reanalysis) a more modest one, while CFSR (with finest native horizontal resolution) shows a high bias with respect to MERRA-2 in both upper tropospheric jets and multiple tropopauses. Vertical temperature structure and grid spacing are especially important for multiple tropopause characterizations. Substantial differences between MERRA and MERRA-2 are seen in mid- to high-latitude Southern Hemisphere (SH) winter upper tropospheric jets and multiple tropopauses as well as in the upper tropospheric jets associated with tropical circulations during the solstice seasons; some of the largest differences from the other reanalyses are seen in the same times and places. Very good qualitative agreement among the reanalyses is seen between the large-scale climatological features in UTLS jet and

  2. Correction of the significance level when attempting multiple transformations of an explanatory variable in generalized linear models

    Science.gov (United States)

    2013-01-01

    Background In statistical modeling, finding the most favorable coding for an explanatory quantitative variable involves many tests. This process involves multiple testing problems and requires the correction of the significance level. Methods For each coding, a test on the nullity of the coefficient associated with the newly coded variable is computed. The selected coding corresponds to that associated with the largest test statistic (or, equivalently, the smallest p-value). In the context of the Generalized Linear Model, Liquet and Commenges (Stat Probability Lett, 71:33-38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, has been developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels and, by definition, those that involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results The simulations we ran in this study showed good performance of the proposed methods. These methods were illustrated using the data from a study of the relationship between cholesterol and dementia. Conclusion The algorithms were implemented using R, and the associated CPMCGLM R package is available on CRAN. PMID:23758852
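    A resampling correction of this kind can be sketched compactly; the following is a hedged illustration of the general max-test/permutation idea for a logistic GLM with three ad hoc codings, not the CPMCGLM package or the paper's exact procedure.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    def min_pvalue(y, x, codings):
        """Smallest p-value over candidate codings (each coding maps x to design columns)."""
        pvals = []
        for code in codings:
            design = sm.add_constant(code(x))
            fit = sm.GLM(y, design, family=sm.families.Binomial()).fit()
            pvals.append(fit.pvalues[1:].min())  # coefficients of the coded variable
        return min(pvals)

    def corrected_pvalue(y, x, codings, n_perm=200):
        """Permutation estimate of the significance level of the best-fitting coding."""
        observed = min_pvalue(y, x, codings)
        null = [min_pvalue(rng.permutation(y), x, codings) for _ in range(n_perm)]
        return float(np.mean([p <= observed for p in null]))

    # Hypothetical data; codings: linear, dichotomised at the median, tertile dummies
    x = rng.normal(size=200)
    y = rng.binomial(1, 1 / (1 + np.exp(-0.8 * x)))
    codings = [
        lambda v: v[:, None],
        lambda v: (v > np.median(v)).astype(float)[:, None],
        lambda v: np.column_stack([v > q for q in np.quantile(v, [1 / 3, 2 / 3])]).astype(float),
    ]
    print(corrected_pvalue(y, x, codings))
    ```

    The returned value estimates how often a minimum p-value at least as extreme would arise across all candidate codings under the null, which is the quantity the naive per-coding test fails to control.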

  3. Multiplicity in difference geometry

    OpenAIRE

    Tomasic, Ivan

    2011-01-01

    We prove a first principle of preservation of multiplicity in difference geometry, paving the way for the development of a more general intersection theory. In particular, the fibres of a σ-finite morphism between difference curves are all of the same size, when counted with correct multiplicities.

  4. Simultaneous small-sample comparisons in longitudinal or multi-endpoint trials using multiple marginal models

    DEFF Research Database (Denmark)

    Pallmann, Philip; Ritz, Christian; Hothorn, Ludwig A

    2018-01-01

    Simultaneous inference in longitudinal, repeated-measures, and multi-endpoint designs can be onerous, especially when trying to find a reasonable joint model from which the interesting effects and covariances are estimated. A novel statistical approach known as multiple marginal models greatly simplifies the modelling process: the core idea is to "marginalise" the problem and fit multiple small models to different portions of the data, and then estimate the overall covariance matrix in a subsequent, separate step. Using these estimates guarantees strong control of the family-wise error rate, however only asymptotically. In this paper, we show how to make the approach also applicable to small-sample data problems. Specifically, we discuss the computation of adjusted P values and simultaneous confidence bounds for comparisons of randomised treatment groups as well as for levels...

  5. ODMSummary: A Tool for Automatic Structured Comparison of Multiple Medical Forms Based on Semantic Annotation with the Unified Medical Language System.

    Science.gov (United States)

    Storck, Michael; Krumm, Rainer; Dugas, Martin

    2016-01-01

    Medical documentation is applied in various settings including patient care and clinical research. Since medical documentation procedures are heterogeneous and continue to evolve, secondary use of medical data is complicated. Development of medical forms, merging of data from different sources and meta-analyses of different data sets are currently predominantly manual processes and therefore difficult and cumbersome. Available applications to automate these processes are limited. In particular, tools to compare multiple documentation forms are missing. The objective of this work is to design, implement and evaluate the new system ODMSummary for comparison of multiple forms with a high number of semantically annotated data elements and a high level of usability. System requirements are the capability to summarize and compare a set of forms, to estimate the documentation effort, to track changes in different versions of forms, and to find comparable items in different forms. Forms are provided in Operational Data Model format with semantic annotations from the Unified Medical Language System. 12 medical experts were invited to participate in a 3-phase evaluation of the tool regarding usability. ODMSummary (available at https://odmtoolbox.uni-muenster.de/summary/summary.html) provides a structured overview of multiple forms and their documentation fields. This comparison enables medical experts to assess multiple forms or whole datasets for secondary use. System usability was optimized based on expert feedback. The evaluation demonstrates that feedback from domain experts is needed to identify usability issues. In conclusion, this work shows that automatic comparison of multiple forms is feasible and the results are usable for medical experts.

  6. Correcting Grade Deflation Caused by Multiple-Choice Scoring.

    Science.gov (United States)

    Baranchik, Alvin; Cherkas, Barry

    2000-01-01

    Presents a study involving three sections of pre-calculus (n=181) at a four-year college where partial credit scoring on multiple-choice questions was examined over an entire semester. Indicates that grades determined by partial credit scoring seemed more reflective of both the quantity and quality of student knowledge than grades determined by…

  7. Quantum gravitational corrections for spinning particles

    International Nuclear Information System (INIS)

    Fröb, Markus B.

    2016-01-01

    We calculate the quantum corrections to the gauge-invariant gravitational potentials of spinning particles in flat space, induced by loops of both massive and massless matter fields of various types. While the corrections to the Newtonian potential induced by massless conformal matter for spinless particles are well known, and the same corrections due to massless minimally coupled scalars http://dx.doi.org/10.1088/0264-9381/27/24/245008, massless non-conformal scalars http://dx.doi.org/10.1103/PhysRevD.87.104027 and massive scalars, fermions and vector bosons http://dx.doi.org/10.1103/PhysRevD.91.064047 have been recently derived, spinning particles receive additional corrections which are the subject of the present work. We give both fully analytic results valid for all distances from the particle, and present numerical results as well as asymptotic expansions. At large distances from the particle, the corrections due to massive fields are exponentially suppressed in comparison to the corrections from massless fields, as one would expect. However, a surprising result of our analysis is that close to the particle itself, on distances comparable to the Compton wavelength of the massive fields running in the loops, these corrections can be enhanced with respect to the massless case.

  8. Attenuation correction method for single photon emission CT

    Energy Technology Data Exchange (ETDEWEB)

    Morozumi, Tatsuru; Nakajima, Masato [Keio Univ., Yokohama (Japan). Faculty of Science and Technology; Ogawa, Koichi; Yuta, Shinichi

    1983-10-01

    A correction method (the Modified Correction Matrix method) is proposed to implement iterative correction by exactly measuring the attenuation constant distribution in a test body, calculating a correction factor for every picture element, and then multiplying the image by these factors. Computer simulations comparing the results showed that the proposed method is more effective than the conventional correction matrix method, particularly for test bodies in which the attenuation constant changes strongly. Since actual measurement data always contain quantum noise, the noise was taken into account in the simulation; the correction remained effective even in the presence of noise. To verify its clinical effectiveness, an experiment using an acrylic phantom was also carried out. As a result, the recovery of image quality in the regions with small attenuation constants was remarkable compared with the conventional method.
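    The idea of multiplying every picture element by its own correction factor can be illustrated with a first-order, Chang-style multiplicative correction; this is a simplified sketch on an assumed uniform attenuation map, not the Modified Correction Matrix method of the record.

    ```python
    import numpy as np

    def correction_factors(mu_map, angles, pixel_size=1.0):
        """For each pixel, average exp(-line integral of mu) over projection angles
        and invert the average; multiplying the image by these factors gives a
        first-order attenuation correction (crude ray marching, one-pixel steps)."""
        ny, nx = mu_map.shape
        factors = np.zeros_like(mu_map, dtype=float)
        for iy in range(ny):
            for ix in range(nx):
                atten = []
                for theta in angles:
                    d = np.array([np.cos(theta), np.sin(theta)])
                    pos = np.array([ix, iy], dtype=float)
                    total = 0.0
                    while 0 <= pos[0] < nx and 0 <= pos[1] < ny:
                        total += mu_map[int(pos[1]), int(pos[0])] * pixel_size
                        pos += d
                    atten.append(np.exp(-total))
                factors[iy, ix] = 1.0 / max(np.mean(atten), 1e-6)
        return factors

    # Hypothetical uniform attenuation map (mu = 0.15 per pixel) on a small grid
    mu = np.full((16, 16), 0.15)
    corrected = np.ones((16, 16)) * correction_factors(mu, np.linspace(0, 2 * np.pi, 8, endpoint=False))
    print(corrected.mean())
    ```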

  9. Comparison between MRI-based attenuation correction methods for brain PET in dementia patients

    International Nuclear Information System (INIS)

    Cabello, Jorge; Lukas, Mathias; Pyka, Thomas; Nekolla, Stephan G.; Ziegler, Sibylle I.; Rota Kops, Elena; Shah, N. Jon; Ribeiro, Andre; Yakushev, Igor

    2016-01-01

    The combination of Positron Emission Tomography (PET) with magnetic resonance imaging (MRI) in hybrid PET/MRI scanners offers a number of advantages in investigating brain structure and function. A critical step of PET data reconstruction is attenuation correction (AC). Accounting for bone in attenuation maps (μ-map) was shown to be important in brain PET studies. While there are a number of MRI-based AC methods, no systematic comparison between them has been performed so far. The aim of this work was to study the different performance obtained by some of the recent methods presented in the literature. To perform such a comparison, we focused on [ 18 F]-Fluorodeoxyglucose-PET/MRI neurodegenerative dementing disorders, which are known to exhibit reduced levels of glucose metabolism in certain brain regions. Four novel methods were used to calculate μ-maps from MRI data of 15 patients with Alzheimer's dementia (AD). The methods cover two atlas-based methods, a segmentation method, and a hybrid template/segmentation method. Additionally, the Dixon-based and a UTE-based method, offered by a vendor, were included in the comparison. Performance was assessed at three levels: tissue identification accuracy in the μ-map, quantitative accuracy of reconstructed PET data in specific brain regions, and precision in diagnostic images at identifying hypometabolic areas. Quantitative regional errors of -20-10 % were obtained using the vendor's AC methods, whereas the novel methods produced errors in a margin of ±5 %. The obtained precision at identifying areas with abnormally low levels of glucose uptake, potentially regions affected by AD, were 62.9 and 79.5 % for the two vendor AC methods, the former ignoring bone and the latter including bone information. The precision increased to 87.5-93.3 % in average for the four new methods, exhibiting similar performances. We confirm that the AC methods based on the Dixon and UTE sequences provided by the vendor are inferior

  10. Comparison between MRI-based attenuation correction methods for brain PET in dementia patients

    Energy Technology Data Exchange (ETDEWEB)

    Cabello, Jorge; Lukas, Mathias; Pyka, Thomas; Nekolla, Stephan G.; Ziegler, Sibylle I. [Technische Universitaet Muenchen, Nuklearmedizinische Klinik und Poliklinik, Klinikum rechts der Isar, Munich (Germany); Rota Kops, Elena; Shah, N. Jon [Forschungszentrum Juelich GmbH, Institute of Neuroscience and Medicine 4, Medical Imaging Physics, Juelich (Germany); Ribeiro, Andre [Forschungszentrum Juelich GmbH, Institute of Neuroscience and Medicine 4, Medical Imaging Physics, Juelich (Germany); Institute of Biophysics and Biomedical Engineering, Lisbon (Portugal); Yakushev, Igor [Technische Universitaet Muenchen, Nuklearmedizinische Klinik und Poliklinik, Klinikum rechts der Isar, Munich (Germany); Institute TUM Neuroimaging Center (TUM-NIC), Munich (Germany)

    2016-11-15

    The combination of Positron Emission Tomography (PET) with magnetic resonance imaging (MRI) in hybrid PET/MRI scanners offers a number of advantages in investigating brain structure and function. A critical step of PET data reconstruction is attenuation correction (AC). Accounting for bone in attenuation maps (μ-map) was shown to be important in brain PET studies. While there are a number of MRI-based AC methods, no systematic comparison between them has been performed so far. The aim of this work was to study the different performance obtained by some of the recent methods presented in the literature. To perform such a comparison, we focused on [18F]-Fluorodeoxyglucose-PET/MRI neurodegenerative dementing disorders, which are known to exhibit reduced levels of glucose metabolism in certain brain regions. Four novel methods were used to calculate μ-maps from MRI data of 15 patients with Alzheimer's dementia (AD). The methods cover two atlas-based methods, a segmentation method, and a hybrid template/segmentation method. Additionally, the Dixon-based and a UTE-based method, offered by a vendor, were included in the comparison. Performance was assessed at three levels: tissue identification accuracy in the μ-map, quantitative accuracy of reconstructed PET data in specific brain regions, and precision in diagnostic images at identifying hypometabolic areas. Quantitative regional errors of -20-10 % were obtained using the vendor's AC methods, whereas the novel methods produced errors in a margin of ±5 %. The obtained precision at identifying areas with abnormally low levels of glucose uptake, potentially regions affected by AD, were 62.9 and 79.5 % for the two vendor AC methods, the former ignoring bone and the latter including bone information. The precision increased to 87.5-93.3 % in average for the four new methods, exhibiting similar performances. We confirm that the AC methods based on the Dixon and UTE sequences provided by the vendor are

  11. Item difficulty of multiple choice tests dependant on different item response formats – An experiment in fundamental research on psychological assessment

    Directory of Open Access Journals (Sweden)

    KLAUS D. KUBINGER

    2007-12-01

    Full Text Available Multiple choice response formats are problematic, as an item is often scored as solved simply because the test-taker is a lucky guesser. Instead of applying pertinent IRT models which take guessing effects into account, a pragmatic approach of re-conceptualizing multiple choice response formats to reduce the chance of lucky guessing is considered. This paper compares the free response format with two different multiple choice formats. A common multiple choice format with a single correct response option and five distractors (“1 of 6”) is used, as well as a multiple choice format with five response options, of which any number of the five is correct and the item is only scored as mastered if all the correct response options and none of the wrong ones are marked (“x of 5”). An experiment was designed, using pairs of items with exactly the same content but different response formats. 173 test-takers were randomly assigned to two test booklets of 150 items altogether. Rasch model analyses yielded a fitting item pool after the deletion of 39 items. The resulting item difficulty parameters were used for the comparison of the different formats. The multiple choice format “1 of 6” differs significantly from “x of 5”, with a relative effect of 1.63, while the multiple choice format “x of 5” does not significantly differ from the free response format. Therefore, the lower degree of difficulty of items with the “1 of 6” multiple choice format is an indicator of relevant guessing effects. In contrast, the “x of 5” multiple choice format can be seen as an appropriate substitute for the free response format.
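    A back-of-the-envelope calculation illustrates why the two formats differ in their exposure to lucky guessing; these probabilities assume purely blind guessing and are not figures from the study:

    $$P_{\text{guess}}(\text{1 of 6}) = \frac{1}{6} \approx 0.17, \qquad P_{\text{guess}}(\text{x of 5}) = \frac{1}{2^{5}} = \frac{1}{32} \approx 0.03,$$

    since an “x of 5” item is scored as mastered only when all five options are marked correctly, so a blind guesser must hit one specific pattern out of the 2^5 possible markings.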

  12. Successful correction of tibial bone deformity through multiple surgical procedures, liquid nitrogen-pretreated bone tumor autograft, three-dimensional external fixation, and internal fixation in a patient with primary osteosarcoma: a case report.

    Science.gov (United States)

    Takeuchi, Akihiko; Yamamoto, Norio; Shirai, Toshiharu; Nishida, Hideji; Hayashi, Katsuhiro; Watanabe, Koji; Miwa, Shinji; Tsuchiya, Hiroyuki

    2015-12-07

    In a previous report, we described a method of reconstruction using tumor-bearing autograft treated by liquid nitrogen for malignant bone tumor. Here we present the first case of bone deformity correction following a tumor-bearing frozen autograft via three-dimensional computerized reconstruction after multiple surgeries. A 16-year-old female student presented with pain in the left lower leg and was diagnosed with a low-grade central tibial osteosarcoma. Surgical bone reconstruction was performed using a tumor-bearing frozen autograft. Bone union was achieved at 7 months after the first surgical procedure. However, local tumor recurrence and lung metastases occurred 2 years later, at which time a second surgical procedure was performed. Five years later, the patient developed a 19° varus deformity and underwent a third surgical procedure, during which an osteotomy was performed using the Taylor Spatial Frame three-dimensional external fixation technique. A fourth corrective surgical procedure was performed in which internal fixation was achieved with a locking plate. Two years later, and 10 years after the initial diagnosis of tibial osteosarcoma, the bone deformity was completely corrected, and the patient's limb function was good. We present the first report in which a bone deformity due to a primary osteosarcoma was corrected using a tumor-bearing frozen autograft, followed by multiple corrective surgical procedures that included osteotomy, three-dimensional external fixation, and internal fixation.

  13. Personalized recommendation with corrected similarity

    International Nuclear Information System (INIS)

    Zhu, Xuzhen; Tian, Hui; Cai, Shimin

    2014-01-01

    Personalized recommendation has attracted a surge of interdisciplinary research. Especially, similarity-based methods in applications of real recommendation systems have achieved great success. However, the computations of similarities are overestimated or underestimated, in particular because of the defective strategy of unidirectional similarity estimation. In this paper, we solve this drawback by leveraging mutual correction of forward and backward similarity estimations, and propose a new personalized recommendation index, i.e., corrected similarity based inference (CSI). Through extensive experiments on four benchmark datasets, the results show a greater improvement of CSI in comparison with these mainstream baselines. And a detailed analysis is presented to unveil and understand the origin of such difference between CSI and mainstream indices. (paper)

  14. A model of diffraction scattering with unitary corrections

    International Nuclear Information System (INIS)

    Etim, E.; Malecki, A.; Satta, L.

    1989-01-01

    The inability of the multiple scattering model of Glauber and similar geometrical picture models to fit data at collider energies, to fit low-energy data at large momentum transfers, and to explain the absence of multiple diffraction dips in the data is noted. It is argued and shown that a unitary correction to the multiple scattering amplitude gives rise to a better model and makes it possible to fit all available data on nucleon-nucleon and nucleus-nucleus collisions at all energies and all momentum transfers. There are no multiple diffraction dips.

  15. Comparison of sensitivity of magnetic resonance imaging and evoked potentials in the detection of brainstem involvement in multiple sclerosis

    International Nuclear Information System (INIS)

    Comi, G.; Martinelli, V.; Medaglini, S.; Locatelli, T.; Magnani, G.; Poggi, A.; Triulzi, F.

    1988-01-01

    A comparison was made of the sensitivity of magnetic resonance imaging and the combined use of Brainstem Auditory Evoked Potential and Median Somatosensory Evoked Potential in the detection of brainstem dysfunction in 54 multiple sclerosis patients. 10 refs.; 2 tabs

  16. The Impact of Correction for Guessing Formula on MC and Yes/No Vocabulary Tests' Scores

    Directory of Open Access Journals (Sweden)

    abdollah baradaran

    2009-10-01

    Full Text Available A standard correction for random guessing (cfg) formula on multiple-choice and Yes/No examinations was examined retrospectively in the scores of intermediate female EFL learners in an English language school. The correction was a weighting formula for points awarded for correct answers, incorrect answers, and unanswered questions, so that the expected value of the increase in test score due to guessing was zero. The researcher compared uncorrected and corrected scores on examinations using multiple-choice and Yes/No formats. These short-answer formats eliminated, or at least greatly reduced, the potential for guessing the correct answer. The expectation that students can improve their grade by guessing on multiple-choice and Yes/No format examinations is well known. The researcher examined a method for correcting for random guessing (cfg "no knowledge") on multiple-choice and Yes/No vocabulary examinations by comparing application and non-application of the cfg formula to the scores on these examinations. This was done to determine whether the test takers really knew the correct answer or had resorted to a kind of guessing. This study represented a unique opportunity to compare scores from multiple-choice and Yes/No examinations in a setting in which students were given the same number of questions in each of the two format types, testing their knowledge of the same subject matter. The results of this study indicated significant differences between the subjects' scores when the cfg formula was applied and when it was not.
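    The record does not spell out the weighting it used, but the classical correction for guessing it refers to is conventionally written as follows (our notation, a standard textbook form rather than the study's exact formula):

    $$S_{\text{corrected}} = R - \frac{W}{k-1},$$

    where R is the number of right answers, W the number of wrong answers, k the number of response options, and omitted items score zero; under blind guessing the expected gain from answering rather than omitting is then zero. For Yes/No items (k = 2) this reduces to S = R - W.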

  17. Atmospheric scattering corrections to solar radiometry

    International Nuclear Information System (INIS)

    Box, M.A.; Deepak, A.

    1979-01-01

    Whenever a solar radiometer is used to measure direct solar radiation, some diffuse sky radiation invariably enters the detector's field of view along with the direct beam. Therefore, the atmospheric optical depth obtained by the use of Bouguer's transmission law (also called Beer-Lambert's law), which is valid only for direct radiation, needs to be corrected by taking account of the scattered radiation. In this paper we shall discuss the correction factors needed to account for the diffuse (i.e., singly and multiply scattered) radiation and the algorithms developed for retrieving aerosol size distribution from such measurements. For a radiometer with a small field of view (half-cone angle 0 ) and relatively clear skies (optical depths <0.4), it is shown that the total diffuse contribution represents approximately 1% of the total intensity. It is assumed here that the main contributions to the diffuse radiation within the detector's view cone are due to single scattering by molecules and aerosols and multiple scattering by molecules alone, aerosol multiple scattering contributions being treated as negligibly small. The theory and the numerical results discussed in this paper will be helpful not only in making corrections to the measured optical depth data but also in designing improved solar radiometers.
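    For concreteness, the correction under discussion can be restated in standard form (the notation is ours: m is the relative air mass and τ the optical depth; this is a generic restatement, not the paper's derivation):

    $$I_{\text{dir}} = I_0\,e^{-m\tau} \;\Rightarrow\; \tau = \frac{1}{m}\ln\frac{I_0}{I_{\text{dir}}}, \qquad I_{\text{meas}} = I_{\text{dir}} + I_{\text{diff}},$$

    so inserting the measured signal directly into Bouguer's law underestimates τ by roughly I_diff/(m·I_dir), i.e. on the order of 1%/m for the ~1% diffuse contribution quoted above.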

  18. CCD Photometry Using Multiple Comparison Stars

    Directory of Open Access Journals (Sweden)

    Yonggi Kim

    2004-09-01

    Full Text Available The accuracy of CCD observations obtained at the Korean 1.8 m telescope has been studied. Seventeen comparison stars in the vicinity of the cataclysmic variable BG CMi have been measured. The "artificial" star has been used instead of the "control" star, which made it possible to increase the accuracy estimates by a factor of 1.3-2.1 for "good" and "cloudy" nights, respectively. The algorithm of iterative determination of the accuracy and weights of the few comparison stars contributing to the artificial star is presented. The accuracy estimates for 13-mag stars are around 0.002 mag for exposure times of 30 sec.

  19. Voxel-based morphometry and automated lobar volumetry: The trade-off between spatial scale and statistical correction

    Science.gov (United States)

    Voormolen, Eduard H.J.; Wei, Corie; Chow, Eva W.C.; Bassett, Anne S.; Mikulis, David J.; Crawley, Adrian P.

    2011-01-01

    Voxel-based morphometry (VBM) and automated lobar region of interest (ROI) volumetry are comprehensive and fast methods to detect differences in overall brain anatomy on magnetic resonance images. However, VBM and automated lobar ROI volumetry have detected dissimilar gray matter differences within identical image sets in our own experience and in previous reports. To gain more insight into how diverging results arise and to attempt to establish whether one method is superior to the other, we investigated how differences in spatial scale and in the need to statistically correct for multiple spatial comparisons influence the relative sensitivity of either technique to group differences in gray matter volumes. We assessed the performance of both techniques on a small dataset containing simulated gray matter deficits and additionally on a dataset of 22q11-deletion syndrome patients with schizophrenia (22q11DS-SZ) vs. matched controls. VBM was more sensitive to simulated focal deficits compared to automated ROI volumetry, and could detect global cortical deficits equally well. Moreover, theoretical calculations of VBM and ROI detection sensitivities to focal deficits showed that at increasing ROI size, ROI volumetry suffers more from loss in sensitivity than VBM. Furthermore, VBM and automated ROI found corresponding GM deficits in 22q11DS-SZ patients, except in the parietal lobe. Here, automated lobar ROI volumetry found a significant deficit only after a smaller subregion of interest was employed. Thus, sensitivity to focal differences is impaired relatively more by averaging over larger volumes in automated ROI methods than by the correction for multiple comparisons in VBM. These findings indicate that VBM is to be preferred over automated lobar-scale ROI volumetry for assessing gray matter volume differences between groups. PMID:19619660

  20. Local linear density estimation for filtered survival data, with bias correction

    DEFF Research Database (Denmark)

    Nielsen, Jens Perch; Tanggaard, Carsten; Jones, M.C.

    2009-01-01

    it comes to exposure robustness, and a simple alternative weighting is to be preferred. Indeed, this weighting has, effectively, to be well chosen in a 'pilot' estimator of the survival function as well as in the main estimator itself. We also investigate multiplicative and additive bias-correction methods...... within our framework. The multiplicative bias-correction method proves to be the best in a simulation study comparing the performance of the considered estimators. An example concerning old-age mortality demonstrates the importance of the improvements provided....

  1. Local Linear Density Estimation for Filtered Survival Data with Bias Correction

    DEFF Research Database (Denmark)

    Tanggaard, Carsten; Nielsen, Jens Perch; Jones, M.C.

    it comes to exposure robustness, and a simple alternative weighting is to be preferred. Indeed, this weighting has, effectively, to be well chosen in a ‘pilot' estimator of the survival function as well as in the main estimator itself. We also investigate multiplicative and additive bias correction methods...... within our framework. The multiplicative bias correction method proves to be best in a simulation study comparing the performance of the considered estimators. An example concerning old age mortality demonstrates the importance of the improvements provided....

  2. MCPerm: a Monte Carlo permutation method for accurately correcting the multiple testing in a meta-analysis of genetic association studies.

    Directory of Open Access Journals (Sweden)

    Yongshuai Jiang

    Full Text Available Traditional permutation (TradPerm) tests are usually considered the gold standard for multiple testing corrections. However, they can be difficult to complete for the meta-analyses of genetic association studies based on multiple single nucleotide polymorphism loci, as they depend on individual-level genotype and phenotype data to perform random shuffles, which are not easy to obtain. Most meta-analyses have therefore been performed using summary statistics from previously published studies. To carry out a permutation using only genotype counts without changing the size of the TradPerm P-value, we developed a Monte Carlo permutation (MCPerm) method. First, for each study included in the meta-analysis, we used a two-step hypergeometric distribution to generate a random number of genotypes in cases and controls. We then carried out a meta-analysis using these random genotype data. Finally, we obtained the corrected permutation P-value of the meta-analysis by repeating the entire process N times. We used five real datasets and five simulation datasets to evaluate the MCPerm method and our results showed the following: (1) MCPerm requires only the summary statistics of the genotype, without the need for individual-level data; (2) Genotype counts generated by our two-step hypergeometric distributions had the same distributions as genotype counts generated by shuffling; (3) MCPerm had almost exactly the same permutation P-values as TradPerm (r = 0.999; P < 2.2e-16); (4) The calculation speed of MCPerm is much faster than that of TradPerm. In summary, MCPerm appears to be a viable alternative to TradPerm, and we have developed it as a freely available R package at CRAN: http://cran.r-project.org/web/packages/MCPerm/index.html.
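    The two-step hypergeometric resampling can be sketched for a single study; this is a hedged illustration of the idea using a chi-square statistic on a 2x3 genotype table with hypothetical counts, not the MCPerm R package or its meta-analysis step.

    ```python
    import numpy as np
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(1)

    def sample_case_genotypes(geno_totals, n_case):
        """Two sequential hypergeometric draws that redistribute fixed genotype totals
        (AA, Aa, aa) between cases and controls under the null hypothesis."""
        n_aa, n_ab, n_bb = geno_totals
        n_total = n_aa + n_ab + n_bb
        case_aa = rng.hypergeometric(n_aa, n_total - n_aa, n_case)
        case_ab = rng.hypergeometric(n_ab, n_bb, n_case - case_aa)
        return case_aa, case_ab, n_case - case_aa - case_ab

    def mc_perm_pvalue(case_counts, control_counts, n_perm=2000):
        """Monte Carlo permutation p-value for one study using only genotype summary counts."""
        observed = chi2_contingency(np.array([case_counts, control_counts]))[0]
        totals = tuple(c + d for c, d in zip(case_counts, control_counts))
        n_case = sum(case_counts)
        exceed = 0
        for _ in range(n_perm):
            perm_case = sample_case_genotypes(totals, n_case)
            perm_control = tuple(t - c for t, c in zip(totals, perm_case))
            if chi2_contingency(np.array([perm_case, perm_control]))[0] >= observed:
                exceed += 1
        return (exceed + 1) / (n_perm + 1)

    # Hypothetical genotype counts (AA, Aa, aa) for cases and controls in one study
    print(mc_perm_pvalue((30, 50, 20), (20, 50, 30)))
    ```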

  3. New decoding methods of interleaved burst error-correcting codes

    Science.gov (United States)

    Nakano, Y.; Kasahara, M.; Namekawa, T.

    1983-04-01

    A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes which constitute the interleaved code, is presented. This method makes it possible to achieve a high burst-error-correction capability with less decoding delay. By generalizing this method it is possible to obtain a probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using the syndrome correlation of subcodes which are interleaved m-fold burst error detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
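    The role of interleaving that underlies such schemes can be illustrated generically; this is a plain block-interleaver sketch, not the paper's syndrome-correlation decoder.

    ```python
    def interleave(codewords):
        """Transmit `depth` codewords column by column so that consecutive channel
        symbols belong to different subcodes; a channel burst of length b then hits
        each subcode in at most ceil(b/depth) positions."""
        depth, length = len(codewords), len(codewords[0])
        return [codewords[r][c] for c in range(length) for r in range(depth)]

    def deinterleave(stream, depth):
        """Undo the column-wise transmission order, recovering the subcode words."""
        length = len(stream) // depth
        return [[stream[c * depth + r] for c in range(length)] for r in range(depth)]

    # Toy example: 3 subcode words of length 4; a burst of 3 channel errors marked 'X'
    words = [list("aaaa"), list("bbbb"), list("cccc")]
    tx = interleave(words)
    tx[4:7] = ["X", "X", "X"]          # burst of length 3 in the channel
    print(deinterleave(tx, depth=3))   # each subcode now contains at most one 'X'
    ```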

  4. Supersymmetric electroweak radiative corrections to e+e-→W+W-. III

    International Nuclear Information System (INIS)

    Alam, S.

    1994-01-01

    This is the third of a series of three papers in which we give a complete analysis of one loop quantum corrections to the W pair production in the context of supersymmetric electroweak theory. We adopt the on-shell-mass subtraction scheme of Sakakibara. In this paper we concentrate mainly on the one loop corrections to the differential cross section arising from the box diagrams. Details of the relevant analytic results are given. We also present our results for the total radiative corrections and wherever possible compare the QFD part of our calculation with previous work. We find a change of approximately 3%--4% in the differential cross section if the Higgs boson mass is varied from 10 GeV to 500 GeV. The differential cross section varies approximately 8% if the top mass is varied between 40 GeV and 150 GeV. Our results for the dependence of the differential cross section on the Higgs boson and top quark are in agreement with Bohm et al. In the context of the SM we find moderate corrections at CERN LEP II energies. We find the percentage (with respect to the tree-level) virtual loop corrections arising from the box diagrams (considered in this paper) due to the addition of SUSY particles varies approximately from 0.18% to -5.67%. As a comparison the percentage virtual corrections due to the box diagrams in the SM varies typically from 0.89% to 8.3%. The SM total percentage virtual loop corrections varies typically from 17.4% to 19%. The above comparison is made at the same center-of-mass energy (200 GeV). The first percentage in this comparison is for center-of-mass angles of 10 degree, the second being at 90 degree. Adding all the corrections up we find that the addition of the supersymmetric particles tends to increase the percentage one loop corrections on the order of 6%--8% provided the photino is kept light. With an accurate measurement at LEP II, one can in principle detect such a deviation away from the standard model

  5. Comparison of Gafchromic EBT2 and EBT3 for patient-specific quality assurance: Cranial stereotactic radiosurgery using volumetric modulated arc therapy with multiple noncoplanar arcs

    Energy Technology Data Exchange (ETDEWEB)

    Fiandra, Christian; Fusella, Marco; Filippi, Andrea Riccardo; Ricardi, Umberto; Ragona, Riccardo [Department of Oncology, Radiation Oncology Unit, University of Torino, Turin 10126 (Italy); Giglioli, Francesca Romana [Medical Physics Unit, Azienda Ospedaliera Città della Salute e della Scienza, Turin 10126 (Italy); Mantovani, Cristina [Radiation Oncology Department, Azienda Ospedaliera Città della Salute e della Scienza, Turin 10126 (Italy)

    2013-08-15

    Purpose: Patient-specific quality assurance in volumetric modulated arc therapy (VMAT) brain stereotactic radiosurgery raises specific issues on dosimetric procedures, mainly represented by the small radiation fields associated with the lack of lateral electronic equilibrium, the need for small detectors, and the high dose delivered (up to 30 Gy). Gafchromic™ EBT2 and EBT3 films may be considered the dosimeter of choice, and the authors here provide some additional data about uniformity correction for this new generation of radiochromic films. Methods: A new analysis method using the blue channel for marker dye correction was proposed for uniformity correction for both EBT2 and EBT3 films. Symmetry, flatness, and field-width of a reference field were analyzed to provide a high-spatial-resolution evaluation of the film uniformity for EBT3. Absolute doses were compared with thermoluminescent dosimeters (TLD) as baseline. VMAT plans with multiple noncoplanar arcs were generated with a treatment planning system on a selected pool of eleven patients with cranial lesions and then recalculated on a water-equivalent plastic phantom by a Monte Carlo algorithm for patient-specific QA. 2D quantitative dose comparison parameters were calculated for the computed and measured dose distributions and tested for statistically significant differences. Results: Sensitometric curves showed a different behavior above doses of 5 Gy for EBT2 and EBT3 films; with the use of the in-house marker-dye correction method, the authors obtained values of 2.5% for flatness, 1.5% for symmetry, and a field width of 4.8 cm for a 5 × 5 cm² reference field. Compared with TLD and selecting a 5% dose tolerance, the percentage of points with ICRU index below 1 was 100% for EBT2 and 83% for EBT3. Patient analysis revealed statistically significant differences (p < 0.05) between EBT2 and EBT3 in the percentage of points with gamma values <1 (p = 0.009 and p = 0.016); the percent difference as well as

  6. Study of tip loss corrections using CFD rotor computations

    DEFF Research Database (Denmark)

    Shen, Wen Zhong; Zhu, Wei Jun; Sørensen, Jens Nørkær

    2014-01-01

    Tip loss correction is known to play an important role for engineering prediction of wind turbine performance. There are two different types of tip loss corrections: tip corrections on momentum theory and tip corrections on airfoil data. In this paper, we study the latter using detailed CFD...... computations for wind turbines with sharp tip. Using the technique of determination of angle of attack and the CFD results for a NordTank 500 kW rotor, airfoil data are extracted and a new tip loss function on airfoil data is derived. To validate, BEM computations with the new tip loss function are carried out...... and compared with CFD results for the NordTank 500 kW turbine and the NREL 5 MW turbine. Comparisons show that BEM with the new tip loss function can predict correctly the loading near the blade tip....

  7. Correction of head motion artifacts in SPECT with fully 3-D OS-EM reconstruction

    International Nuclear Information System (INIS)

    Fulton, R.R.

    1998-01-01

    Full text: A method which relies on continuous monitoring of head position has been developed to correct for head motion in SPECT studies of the brain. Head position and orientation are monitored during data acquisition by an inexpensive head tracking system (ADL-1, Shooting Star Technology, Rosedale, British Columbia). Motion correction involves changing the projection geometry to compensate for motion (using data from the head tracker), and reconstructing with a fully 3-D OS-EM algorithm. The reconstruction algorithm can accommodate any number of movements and any projection geometry. A single iteration of 3-D OS-EM using all available projections provides a satisfactory 3-D reconstruction, essentially free of motion artifacts. The method has been validated in studies of the 3-D Hoffman brain phantom. Multiple 360-degree acquisitions, each with the phantom in a different position, were performed on a Trionix triple head camera. Movements were simulated by combining projections from the different acquisitions. Accuracy was assessed by comparison with a motion-free reconstruction, visually and by calculating mean squared error (MSE). Motion correction reduced distortion perceptibly and, depending on the motions applied, improved MSE by up to an order of magnitude. Three-dimensional reconstruction of the 128 x 128 x 128 data set took 2 minutes on a SUN Ultra 1 workstation. This motion correction technique can be retro-fitted to existing SPECT systems and could be incorporated in future SPECT camera designs. It appears to be applicable in PET as well as SPECT, to be able to correct for any head movements, and to have the potential to improve the accuracy of tomographic brain studies under clinical imaging conditions.

  8. Dynamic retardation corrections to the mass spectrum of heavy quarkonia

    International Nuclear Information System (INIS)

    Kopalejshvili, T.; Rusetskij, A.

    1996-01-01

    In the framework of the Logunov-Tavkhelidze quasipotential approach the first-order retardation corrections to the heavy quarkonia mass spectrum are calculated with the use of the stationary wave boundary condition in the covariant kernel of the Bethe-Salpeter equation. As has been expected, these corrections turn out to be small for all low-lying heavy meson states and vanish in the heavy quark limit (m Q →∞). The comparison of the suggested approach to the calculation of retardation corrections with others, known in literature, is carried out. 22 refs., 1 tab

  9. Atlas-based analysis of cardiac shape and function: correction of regional shape bias due to imaging protocol for population studies.

    Science.gov (United States)

    Medrano-Gracia, Pau; Cowan, Brett R; Bluemke, David A; Finn, J Paul; Kadish, Alan H; Lee, Daniel C; Lima, Joao A C; Suinesiaputra, Avan; Young, Alistair A

    2013-09-13

    Cardiovascular imaging studies generate a wealth of data which is typically used only for individual study endpoints. By pooling data from multiple sources, quantitative comparisons can be made of regional wall motion abnormalities between different cohorts, enabling reuse of valuable data. Atlas-based analysis provides precise quantification of shape and motion differences between disease groups and normal subjects. However, subtle shape differences may arise due to differences in imaging protocol between studies. A mathematical model describing regional wall motion and shape was used to establish a coordinate system registered to the cardiac anatomy. The atlas was applied to data contributed to the Cardiac Atlas Project from two independent studies which used different imaging protocols: steady state free precession (SSFP) and gradient recalled echo (GRE) cardiovascular magnetic resonance (CMR). Shape bias due to imaging protocol was corrected using an atlas-based transformation which was generated from a set of 46 volunteers who were imaged with both protocols. Shape bias between GRE and SSFP was regionally variable, and was effectively removed using the atlas-based transformation. Global mass and volume bias was also corrected by this method. Regional shape differences between cohorts were more statistically significant after removing regional artifacts due to imaging protocol bias. Bias arising from imaging protocol can be both global and regional in nature, and is effectively corrected using an atlas-based transformation, enabling direct comparison of regional wall motion abnormalities between cohorts acquired in separate studies.

  10. Evaluation of ion chamber dependent correction factors for ionisation chamber dosimetry in proton beams using a Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Palmans, H [Ghent Univ. (Belgium). Dept. of Biomedical Physics; Verhaegen, F

    1995-12-01

    In the last decade, several clinical proton beam therapy facilities have been developed. To satisfy the demand for uniformity in clinical (routine) proton beam dosimetry two dosimetry protocols (ECHED and AAPM) have been published. Both protocols neglect the influence of ion chamber dependent parameters on dose determination in proton beams because of the scatter properties of these beams, although the problem has not been studied thoroughly yet. A comparison between water calorimetry and ionisation chamber dosimetry showed a discrepancy of 2.6% between the former method and ionometry following the ECHED protocol. Possibly, a small part of this difference can be attributed to chamber dependent correction factors. Indications for this possibility are found in ionometry measurements. To allow the simulation of complex geometries with different media necessary for the study of those corrections, an existing proton Monte Carlo code (PTRAN, Berger) has been modified. The original code, that applies Molière's multiple scattering theory and Vavilov's energy straggling theory, calculates depth dose profiles, energy distributions and radial distributions for pencil beams in water. Comparisons with measurements and calculations reported in the literature are done to test the program's accuracy. Preliminary results of the influence of chamber design and chamber materials on dose to water determination are presented.

  11. Evaluation of ion chamber dependent correction factors for ionisation chamber dosimetry in proton beams using a Monte Carlo method

    International Nuclear Information System (INIS)

    Palmans, H.; Verhaegen, F.

    1995-01-01

    In the last decade, several clinical proton beam therapy facilities have been developed. To satisfy the demand for uniformity in clinical (routine) proton beam dosimetry two dosimetry protocols (ECHED and AAPM) have been published. Both protocols neglect the influence of ion chamber dependent parameters on dose determination in proton beams because of the scatter properties of these beams, although the problem has not been studied thoroughly yet. A comparison between water calorimetry and ionisation chamber dosimetry showed a discrepancy of 2.6% between the former method and ionometry following the ECHED protocol. Possibly, a small part of this difference can be attributed to chamber dependent correction factors. Indications for this possibility are found in ionometry measurements. To allow the simulation of complex geometries with different media necessary for the study of those corrections, an existing proton Monte Carlo code (PTRAN, Berger) has been modified. The original code, that applies Molière's multiple scattering theory and Vavilov's energy straggling theory, calculates depth dose profiles, energy distributions and radial distributions for pencil beams in water. Comparisons with measurements and calculations reported in the literature are done to test the program's accuracy. Preliminary results of the influence of chamber design and chamber materials on dose to water determination are presented.

  12. Performance Evaluation of Blind Tropospheric Delay correction ...

    African Journals Online (AJOL)

    lekky

    and Temperature 2 wet (GPT2w) models) for tropospheric delay correction, ... In practice, a user often employs a certain troposphere model based on the popularity ... comparisons between some of the models have been carried out in the past for .... prediction of meteorological parameter values, which are then used to ...

  13. Cost comparison between private and public collection of residual household waste: multiple case studies in the Flemish region of Belgium.

    Science.gov (United States)

    Jacobsen, R; Buysse, J; Gellynck, X

    2013-01-01

    The rising pressure on public services in terms of cost efficiency pushes governments to transfer part of those services to the private sector. A trend towards more privatization can be noticed in the collection of municipal household waste. This paper reports the findings of a research project aiming to compare the cost of private versus public collection of residual household waste. Multiple case studies of municipalities across the Flemish region of Belgium were conducted. Data concerning the year 2009 were gathered through in-depth interviews in 2010. In total 12 municipalities were investigated, divided into three mutually comparable pairs with a weekly and three mutually comparable pairs with a fortnightly residual waste collection. The results give a rough indication that in all cases the cost of the private service is lower than that of the public service in the collection of household waste. Although there is an interest in establishing whether there are differences in the costs and service levels between public and private waste collection services, there are clear difficulties in establishing comparisons that can be made without having to rely on a large number of assumptions and corrections. However, given the cost difference, it remains the responsibility of the municipalities to decide upon the service they offer their citizens, regardless of cost efficiency: public or private. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. QED corrections to the 4p-4d transition energies of copperlike heavy ions

    International Nuclear Information System (INIS)

    Chen, M. H.; Cheng, K. T.; Johnson, W. R.; Sapirstein, J.

    2006-01-01

    Quantum electrodynamic (QED) corrections to 4p-4d transition energies of several copperlike ions with Z=70-92 are calculated nonperturbatively in strong external fields to all orders in binding corrections. Dirac-Kohn-Sham potentials are used to account for screening and core-relaxation effects. For the 4p 1/2 -4d 3/2 transition in copperlike bismuth, thorium, and uranium, results are in good agreement with empirical QED corrections deduced from differences between transition energies obtained from recent high-precision electron-beam ion-trap measurements and those calculated with the relativistic many-body perturbation theory (RMBPT). These comparisons provide sensitive tests of QED corrections for high-angular-momentum states in many-electron heavy ions and illustrate the importance of core-relaxation corrections. Comparisons are also made with other theories and with experiments on the 4s-4p transition energies of high-Z Cu-like ions as accuracy checks of the present RMBPT and QED calculations

  15. Evaluation of a health-promoting school program to enhance correct medication use in Taiwan

    Directory of Open Access Journals (Sweden)

    Hsueh-Yun Chi

    2014-06-01

    Full Text Available This study was an evaluation of the Health Promoting School (HPS) program in Taiwan and its effectiveness in enhancing students' knowledge and abilities with regard to correct medication usage. In 2011, baseline and follow-up self-administered online surveys were received from 3520 middle-school and primary students from intervention schools, and 3738 students from comparison primary and secondary schools completed the same survey. The results indicated that after implementing the correct medication use HPS program, students' knowledge and abilities concerning correct medication usage (i.e., the need to express clearly personal conditions to physicians, to check information on the medication packages, to take medication correctly and adhere to prescribed medication regimens, not to buy or acquire medication from unlicensed sources, and to consult pharmacists/physicians) were significantly increased among the students in the intervention schools (p < 0.001). In addition, students' knowledge and abilities concerning correct medication usage were significantly higher in the intervention schools compared with the comparison schools (p < 0.001). In conclusion, the correct medication use HPS program significantly enhanced students' knowledge and abilities concerning correct medication usage.

  16. Arterial Transit Time-corrected Renal Blood Flow Measurement with Pulsed Continuous Arterial Spin Labeling MR Imaging.

    Science.gov (United States)

    Shimizu, Kazuhiro; Kosaka, Nobuyuki; Fujiwara, Yasuhiro; Matsuda, Tsuyoshi; Yamamoto, Tatsuya; Tsuchida, Tatsuro; Tsuchiyama, Katsuki; Oyama, Nobuyuki; Kimura, Hirohiko

    2017-01-10

    The importance of arterial transit time (ATT) correction for arterial spin labeling MRI has been well debated in neuroimaging, but it has not been well evaluated in renal imaging. The purpose of this study was to evaluate the feasibility of pulsed continuous arterial spin labeling (pcASL) MRI with multiple post-labeling delay (PLD) acquisition for measuring ATT-corrected renal blood flow (ATC-RBF). A total of 14 volunteers were categorized into younger (n = 8; mean age, 27.0 years) and older groups (n = 6; 64.8 years). Images of pcASL were obtained at three different PLDs (0.5, 1.0, and 1.5 s), and ATC-RBF and ATT were calculated using a single-compartment model. To validate ATC-RBF, a comparative study of effective renal plasma flow (ERPF) measured by 99m Tc-MAG3 scintigraphy was performed. ATC-RBF was corrected by kidney volume (ATC-cRBF) for comparison with ERPF. The younger group showed significantly higher ATC-RBF (157.68 ± 38.37 mL/min/100 g) and shorter ATT (961.33 ± 260.87 ms) than the older group (117.42 ± 24.03 mL/min/100 g and 1227.94 ± 226.51 ms, respectively; P renal ASL-MRI as debated in brain imaging.

  17. Raman database of amino acids solutions: A critical study of Extended Multiplicative Signal Correction

    KAUST Repository

    Candeloro, Patrizio

    2013-01-01

    The Raman spectra of biological materials always exhibit complex profiles, constituting several peaks and/or bands which arise due to the large variety of biomolecules. The extraction of quantitative information from these spectra is not a trivial task. While qualitative information can be retrieved from the changes in peaks frequencies or from the appearance/disappearance of some peaks, quantitative analysis requires an examination of peak intensities. Unfortunately in biological samples it is not easy to identify a reference peak for normalizing intensities, and this makes it very difficult to study the peak intensities. In the last decades a more refined mathematical tool, the extended multiplicative signal correction (EMSC), has been proposed for treating infrared spectra, which is also capable of providing quantitative information. From the mathematical and physical point of view, EMSC can also be applied to Raman spectra, as recently proposed. In this work the reliability of the EMSC procedure is tested by application to a well defined biological system: the 20 standard amino acids and their combination in peptides. The first step is the collection of a Raman database of these 20 amino acids, and subsequently EMSC processing is applied to retrieve quantitative information from amino acids mixtures and peptides. A critical review of the results is presented, showing that EMSC has to be carefully handled for complex biological systems. © 2013 The Royal Society of Chemistry.
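    As an illustration of the basic EMSC model described above, the following sketch fits a measured spectrum against a reference spectrum plus a low-order polynomial baseline and returns the corrected spectrum. It is a minimal, generic EMSC implementation with made-up toy data, not the authors' processing pipeline.

```python
import numpy as np

def emsc_correct(spectrum, reference, wavenumbers, poly_order=2):
    """Basic EMSC: fit spectrum = b*reference + polynomial(wavenumbers) + residual,
    then return the corrected spectrum (spectrum - polynomial) / b."""
    # Design matrix: the reference spectrum plus polynomial baseline terms
    wn = (wavenumbers - wavenumbers.mean()) / (np.ptp(wavenumbers) / 2)  # scale to [-1, 1]
    design = np.column_stack([reference] + [wn**k for k in range(poly_order + 1)])
    coeffs, *_ = np.linalg.lstsq(design, spectrum, rcond=None)
    b, poly_coeffs = coeffs[0], coeffs[1:]
    baseline = design[:, 1:] @ poly_coeffs
    return (spectrum - baseline) / b

# Hypothetical usage: correct a scaled copy of a reference spectrum with a sloping baseline
wn = np.linspace(400.0, 1800.0, 700)
reference = np.exp(-(wn - 1000.0)**2 / 200.0)      # toy Raman band
measured = 1.7 * reference + 0.002 * wn + 0.5      # multiplicative factor + baseline
corrected = emsc_correct(measured, reference, wn)
print(np.allclose(corrected, reference, atol=1e-6))  # True
```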

  18. Attenuation correction for SPECT

    International Nuclear Information System (INIS)

    Hosoba, Minoru

    1986-01-01

    Attenuation correction is required for the reconstruction of a quantitative SPECT image. A new method for detecting body contours, which are important for the correction of tissue attenuation, is presented. The effect of body contours, detected by the newly developed method, on the reconstructed images was evaluated using various techniques for attenuation correction. The count rates in the specified region of interest in the phantom image obtained by the Radial Post Correction (RPC) method, the Weighted Back Projection (WBP) method, and Chang's method were strongly affected by the accuracy of the contours, as compared to those by Sorenson's method. To evaluate the effect of non-uniform attenuators on cardiac SPECT, computer simulation experiments were performed using two types of models, the uniform attenuator model (UAM) and the non-uniform attenuator model (NUAM). The RPC method showed the lowest relative percent error (%ERROR) in UAM (11%). However, a 20 to 30 percent increase in %ERROR was observed for NUAM reconstructed with the RPC, WBP, and Chang's methods. Introducing an average attenuation coefficient (0.12/cm for Tc-99m and 0.14/cm for Tl-201) in the RPC method decreased %ERROR to the levels for UAM. Finally, a comparison between images obtained by 180 deg and 360 deg scans and reconstructed with the RPC method showed that the degree of distortion of the contour of the simulated ventricles in the 180 deg scan was 15% higher than that in the 360 deg scan. (Namekawa, K.)
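    For readers unfamiliar with the Chang-type correction mentioned above, the sketch below computes first-order Chang correction factors for a uniform circular attenuator: each pixel is multiplied by the reciprocal of its attenuation factor averaged over projection angles. The geometry, pixel size, and attenuation coefficient are illustrative assumptions, not the phantom used in the study.

```python
import numpy as np

def chang_correction_map(nx, ny, pixel_cm, radius_cm, mu_per_cm, n_angles=72):
    """First-order Chang correction factors for a uniform circular attenuator."""
    ys, xs = np.mgrid[0:ny, 0:nx]
    x = (xs - (nx - 1) / 2) * pixel_cm
    y = (ys - (ny - 1) / 2) * pixel_cm
    inside = x**2 + y**2 <= radius_cm**2
    atten = np.zeros((ny, nx))
    for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        ux, uy = np.cos(theta), np.sin(theta)
        proj = x * ux + y * uy
        # distance from each pixel to the circle boundary along (ux, uy)
        path = -proj + np.sqrt(np.maximum(proj**2 + radius_cm**2 - (x**2 + y**2), 0.0))
        atten += np.exp(-mu_per_cm * path)
    atten /= n_angles
    correction = np.ones((ny, nx))
    correction[inside] = 1.0 / atten[inside]
    return correction

# Hypothetical usage with an average attenuation coefficient of 0.12/cm (Tc-99m)
cmap = chang_correction_map(nx=64, ny=64, pixel_cm=0.6, radius_cm=15.0, mu_per_cm=0.12)
print(cmap.min(), cmap.max())  # central pixels need the largest correction
```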

  19. Continuous Correctness of Business Processes Against Process Interference

    NARCIS (Netherlands)

    van Beest, Nick; Bucur, Doina

    2013-01-01

    In distributed business process support environments, process interference from multiple stakeholders may cause erroneous process outcomes. Existing solutions to detect and correct interference at runtime employ formal verification and the automatic generation of intervention processes at runtime.

  1. Superresolution Imaging Using Resonant Multiples

    KAUST Repository

    Guo, Bowen; Schuster, Gerard T.

    2017-01-01

    A resonant multiple is defined as a multiple reflection that revisits the same subsurface location along coincident reflection raypaths. We show that resonant first-order multiples can be migrated with either Kirchhoff or wave-equation migration methods to give images with approximately twice the spatial resolution compared to post-stack primary-reflection images. A moveout-correction stacking method is proposed to enhance the signal-to-noise ratios (SNRs) of the resonant multiples before superresolution migration. The effectiveness of this procedure is validated by synthetic and field data tests.

  2. Net improvement of correct answers to therapy questions after pubmed searches: pre/post comparison.

    Science.gov (United States)

    McKibbon, Kathleen Ann; Lokker, Cynthia; Keepanasseril, Arun; Wilczynski, Nancy L; Haynes, R Brian

    2013-11-08

    Clinicians search PubMed for answers to clinical questions although it is time consuming and not always successful. To determine if PubMed used with its Clinical Queries feature to filter results based on study quality would improve search success (more correct answers to clinical questions related to therapy). We invited 528 primary care physicians to participate, 143 (27.1%) consented, and 111 (21.0% of the total and 77.6% of those who consented) completed the study. Participants answered 14 yes/no therapy questions and were given 4 of these (2 originally answered correctly and 2 originally answered incorrectly) to search using either the PubMed main screen or PubMed Clinical Queries narrow therapy filter via a purpose-built system with identical search screens. Participants also picked 3 of the first 20 retrieved citations that best addressed each question. They were then asked to re-answer the original 14 questions. We found no statistically significant differences in the rates of correct or incorrect answers using the PubMed main screen or PubMed Clinical Queries. The rate of correct answers increased from 50.0% to 61.4% (95% CI 55.0%-67.8%) for the PubMed main screen searches and from 50.0% to 59.1% (95% CI 52.6%-65.6%) for Clinical Queries searches. These net absolute increases of 11.4% and 9.1%, respectively, included previously correct answers changing to incorrect at a rate of 9.5% (95% CI 5.6%-13.4%) for PubMed main screen searches and 9.1% (95% CI 5.3%-12.9%) for Clinical Queries searches, combined with increases in the rate of being correct of 20.5% (95% CI 15.2%-25.8%) for PubMed main screen searches and 17.7% (95% CI 12.7%-22.7%) for Clinical Queries searches. PubMed can assist clinicians answering clinical questions with an approximately 10% absolute rate of improvement in correct answers. This small increase includes more correct answers partially offset by a decrease in previously correct answers.

  3. T-branes and α′-corrections

    Energy Technology Data Exchange (ETDEWEB)

    Marchesano, Fernando; Schwieger, Sebastian [Instituto de Física Teórica UAM-CSIC,Cantoblanco, 28049 Madrid (Spain)

    2016-11-21

    We study α′-corrections in multiple D7-brane configurations with non-commuting profiles for their transverse position fields. We focus on T-brane systems, crucial in F-theory GUT model building. There, α′-corrections modify the D-term piece of the BPS equations which, already at leading order, require a non-primitive Abelian worldvolume flux background. We find that α′-corrections may either i) leave this flux background invariant, ii) modify the Abelian non-primitive flux profile, or iii) deform it to a non-Abelian profile. The last case typically occurs when primitive fluxes, a necessary ingredient to build 4d chiral models, are added to the system. We illustrate these three cases by solving the α′-corrected D-term equations in explicit examples, and describe their appearance in more general T-brane backgrounds. Finally, we discuss implications of our findings for F-theory GUT local models.

  4. A Comparison of Error-Correction Procedures on Skill Acquisition during Discrete-Trial Instruction

    Science.gov (United States)

    Carroll, Regina A.; Joachim, Brad T.; St. Peter, Claire C.; Robinson, Nicole

    2015-01-01

    Previous research supports the use of a variety of error-correction procedures to facilitate skill acquisition during discrete-trial instruction. We used an adapted alternating treatments design to compare the effects of 4 commonly used error-correction procedures on skill acquisition for 2 children with attention deficit hyperactivity disorder…

  5. Beyond hypercorrection: remembering corrective feedback for low-confidence errors.

    Science.gov (United States)

    Griffiths, Lauren; Higham, Philip A

    2018-02-01

    Correcting errors based on corrective feedback is essential to successful learning. Previous studies have found that corrections to high-confidence errors are better remembered than low-confidence errors (the hypercorrection effect). The aim of this study was to investigate whether corrections to low-confidence errors can also be successfully retained in some cases. Participants completed an initial multiple-choice test consisting of control, trick and easy general-knowledge questions, rated their confidence after answering each question, and then received immediate corrective feedback. After a short delay, they were given a cued-recall test consisting of the same questions. In two experiments, we found high-confidence errors to control questions were better corrected on the second test compared to low-confidence errors - the typical hypercorrection effect. However, low-confidence errors to trick questions were just as likely to be corrected as high-confidence errors. Most surprisingly, we found that memory for the feedback and original responses, not confidence or surprise, were significant predictors of error correction. We conclude that for some types of material, there is an effortful process of elaboration and problem solving prior to making low-confidence errors that facilitates memory of corrective feedback.

  6. SU-E-T-101: Determination and Comparison of Correction Factors Obtained for TLDs in Small Field Lung Heterogenous Phantom Using Acuros XB and EGSnrc

    International Nuclear Information System (INIS)

    Soh, R; Lee, J; Harianto, F

    2014-01-01

    Purpose: To determine and compare the correction factors obtained for TLDs in a 2 × 2 cm² small field in a lung heterogeneous phantom using Acuros XB (AXB) and EGSnrc. Methods: This study simulates the correction factors due to the perturbation of TLD-100 chips (Harshaw/Thermoscientific, 3 × 3 × 0.9 mm³, 2.64 g/cm³) in a small-field lung medium for Stereotactic Body Radiation Therapy (SBRT). A physical lung phantom was simulated by a 14 cm thick composite cork phantom (0.27 g/cm³, HU: -743 ± 11) sandwiched between 4 cm thick Plastic Water (CIRS, Norfolk). Composite cork has been shown to be a good lung substitute material for dosimetric studies. A 6 MV photon beam from a Varian Clinac iX (Varian Medical Systems, Palo Alto, CA) with field size 2 × 2 cm² was simulated. Depth dose profiles were obtained from the Eclipse treatment planning system Acuros XB (AXB) and independently from DOSxyznrc, EGSnrc. Correction factors were calculated as the ratio of unperturbed to perturbed dose. Since AXB has limitations in simulating actual material compositions, EGSnrc also simulated the AXB-based material composition for comparison to the actual lung phantom. Results: TLD-100, with its finite size and relatively high density, causes significant perturbation in a 2 × 2 cm² small field in a low-density lung phantom. Correction factors calculated by both EGSnrc and AXB were found to be as low as 0.9. It is expected that the correction factor obtained by EGSnrc will be more accurate as it is able to simulate the actual phantom material compositions. AXB has a limited material library; therefore, it only approximates the composition of TLD, composite cork, and Plastic Water, contributing to uncertainties in the TLD correction factors. Conclusion: It is expected that the correction factors obtained by EGSnrc will be more accurate. Studies will be done to investigate the correction factors for higher energies where perturbation may be more pronounced.

  7. Empirical correction for PM7 band gaps of transition-metal oxides.

    Science.gov (United States)

    Liu, Xiang; Sohlberg, Karl

    2016-01-01

    A post-calculation correction is established for PM7 band gaps of transition-metal oxides. The correction is based on the charge on the metal cation of interest, as obtained from MOPAC PM7 calculations. Application of the correction reduces the average error in the PM7 band gap from ~3 eV to ~1 eV. The residual error after correction is shown to be uncorrelated to the Hartree-Fock method upon which PM7 is based. Graphical Abstract Comparison between calculated band gaps and experimental band gaps for binary oxides. The orange crosses are for corrected PM7 band gaps. Blue squares are uncorrected values. The orange crosses fall closer to the diagonal dashed line, showing an overall improvement of the accuracy of calculated values.
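    The following sketch illustrates the type of charge-based post-calculation correction described above: a linear function of the PM7 metal-cation charge is fitted to the band-gap error on a calibration set and then subtracted from new PM7 gaps. All numbers are hypothetical placeholders, not the published fit.

```python
import numpy as np

# Hypothetical calibration data: PM7 band gaps, experimental gaps, and PM7 charges
# on the metal cation (values are invented for illustration only).
pm7_gap  = np.array([6.1, 5.4, 7.0, 4.8, 6.5])   # eV
exp_gap  = np.array([3.2, 3.0, 3.9, 2.4, 3.5])   # eV
cation_q = np.array([1.1, 0.9, 1.4, 0.7, 1.2])   # e

# Fit the PM7 error as a linear function of the cation charge
slope, intercept = np.polyfit(cation_q, pm7_gap - exp_gap, deg=1)

def corrected_gap(gap_pm7, charge):
    """Apply the charge-based post-calculation correction to a PM7 band gap."""
    return gap_pm7 - (slope * charge + intercept)

print(corrected_gap(6.0, 1.0))
```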

  8. The welfare comparison of corrective ad valorem and unit taxes under monopolistic competition

    DEFF Research Database (Denmark)

    Dröge, Susanne; Schröder, Philipp J.H.

    2009-01-01

    In several policy areas, such as environmental, health, and trade economics, policy makers use taxes to reduce the production/consumption volume in an industry, i.e., to correct an externality rather than to improve tax yield. This paper compares the two tax instruments with respect to equal corrective effect in a Dixit-Stiglitz setting...

  9. Intensity correction method customized for multi-animal abdominal MR imaging with 3 T clinical scanner and multi-array coil

    International Nuclear Information System (INIS)

    Mitsuda, Minoru; Yamaguchi, Masayuki; Nakagami, Ryutaro; Furuta, Toshihiro; Fujii, Hirofumi; Sekine, Norio; Niitsu, Mamoru; Moriyama, Noriyuki

    2013-01-01

    Simultaneous magnetic resonance (MR) imaging of multiple small animals in a single session increases throughput of preclinical imaging experiments. Such imaging using a 3-tesla clinical scanner with a multi-array coil requires correction of intensity variation caused by the inhomogeneous sensitivity profile of the coil. We explored a method for correcting intensity that we customized for multi-animal MR imaging, especially abdominal imaging. Our institutional committee for animal experimentation approved the protocol. We acquired high-resolution T1-, T2-, and T2*-weighted images and low-resolution proton density-weighted images (PDWIs) of 4 rat abdomens simultaneously using a 3T clinical scanner and custom-made multi-array coil. For comparison, we also acquired T1-, T2-, and T2*-weighted volume coil images in the same rats in 4 separate sessions. We used software created in-house to correct intensity variation. We applied thresholding to the PDWIs to produce binary images that displayed only a signal-producing area, calculated multi-array coil sensitivity maps by dividing low-pass filtered PDWIs by low-pass filtered binary images pixel by pixel, and divided uncorrected T1-, T2-, or T2*-weighted images by those maps to obtain intensity-corrected images. We compared tissue contrast among the liver, spinal canal, and muscle between intensity-corrected multi-array coil images and volume coil images. Our intensity correction method performed well for all pulse sequences studied and corrected variation in the original multi-array coil images without deteriorating the throughput of animal experiments. Tissue contrasts were comparable between intensity-corrected multi-array coil images and volume coil images. Our intensity correction method customized for multi-animal abdominal MR imaging using a 3T clinical scanner and dedicated multi-array coil could facilitate image interpretation. (author)
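    A minimal sketch of the described sensitivity-map correction follows, assuming NumPy/SciPy and a Gaussian filter as the low-pass smoothing step; the authors' in-house software and filter choice may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_intensity(weighted_img, pdwi, threshold, sigma=8.0, eps=1e-6):
    """Divide a T1-/T2-/T2*-weighted image by a coil-sensitivity map estimated
    from a low-resolution proton density-weighted image (PDWI)."""
    binary = (pdwi > threshold).astype(float)                 # signal-producing area only
    # Sensitivity map: low-pass-filtered PDWI divided by low-pass-filtered binary image
    sens = gaussian_filter(pdwi, sigma) / (gaussian_filter(binary, sigma) + eps)
    sens /= sens[binary > 0].mean()                           # unit mean sensitivity in-object
    corrected = np.where(binary > 0, weighted_img / (sens + eps), weighted_img)
    return corrected, sens
```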

  10. A higher-order generalized singular value decomposition for comparison of global mRNA expression from multiple organisms.

    Directory of Open Access Journals (Sweden)

    Sri Priya Ponnapalli

    Full Text Available The number of high-dimensional datasets recording multiple aspects of a single phenomenon is increasing in many areas of science, accompanied by a need for mathematical frameworks that can compare multiple large-scale matrices with different row dimensions. The only such framework to date, the generalized singular value decomposition (GSVD), is limited to two matrices. We mathematically define a higher-order GSVD (HO GSVD) for N ≥ 2 matrices D_i ∈ R^(m_i × n), each with full column rank. Each matrix is exactly factored as D_i = U_i Σ_i V^T, where V, identical in all factorizations, is obtained from the eigensystem SV = VΛ of the arithmetic mean S of all pairwise quotients A_i A_j^(-1) of the matrices A_i = D_i^T D_i, i ≠ j. We prove that this decomposition extends to higher orders almost all of the mathematical properties of the GSVD. The matrix S is nondefective with V and Λ real. Its eigenvalues satisfy λ_k ≥ 1. Equality holds if and only if the corresponding eigenvector v_k is a right basis vector of equal significance in all matrices D_i and D_j, that is σ_(i,k)/σ_(j,k) = 1 for all i and j, and the corresponding left basis vector u_(i,k) is orthogonal to all other vectors in U_i for all i. The eigenvalues λ_k = 1, therefore, define the "common HO GSVD subspace." We illustrate the HO GSVD with a comparison of genome-scale cell-cycle mRNA expression from S. pombe, S. cerevisiae and human. Unlike existing algorithms, a mapping among the genes of these disparate organisms is not required. We find that the approximately common HO GSVD subspace represents the cell-cycle mRNA expression oscillations, which are similar among the datasets. Simultaneous reconstruction in the common subspace, therefore, removes the experimental artifacts, which are dissimilar, from the datasets. In the simultaneous sequence-independent classification of the genes of the three organisms in this common subspace, genes of highly conserved sequences but significantly different cell
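    A compact numerical sketch of the construction described above (mean of pairwise quotients, eigendecomposition for V, then per-matrix factorization) follows; it works from the abstract's definitions and is not the authors' published code.

```python
import numpy as np

def ho_gsvd(matrices):
    """HO GSVD sketch: given full-column-rank D_i (m_i x n), return
    (U_list, Sigma_list, V) with D_i = U_i @ Sigma_i @ V.T for every i."""
    A = [D.T @ D for D in matrices]
    n = A[0].shape[0]
    # S = arithmetic mean of all pairwise quotients A_i A_j^{-1}, i != j
    pairs = [(i, j) for i in range(len(A)) for j in range(len(A)) if i != j]
    S = sum(A[i] @ np.linalg.inv(A[j]) for i, j in pairs) / len(pairs)
    _, V = np.linalg.eig(S)
    V = np.real(V)                        # S is nondefective with V and Lambda real
    V_inv_T = np.linalg.inv(V).T
    U_list, Sigma_list = [], []
    for D in matrices:
        B = D @ V_inv_T                   # B = U_i Sigma_i
        sigma = np.linalg.norm(B, axis=0)
        U_list.append(B / sigma)
        Sigma_list.append(np.diag(sigma))
    return U_list, Sigma_list, V

# Hypothetical check with random full-column-rank matrices
rng = np.random.default_rng(0)
Ds = [rng.standard_normal((m, 4)) for m in (10, 12, 15)]
U, Sig, V = ho_gsvd(Ds)
err = max(np.max(np.abs(Ui @ Si @ V.T - Di)) for Ui, Si, Di in zip(U, Sig, Ds))
print(f"max reconstruction error: {err:.2e}")
```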

  11. Meson exchange current corrections to magnetic moments in quantum hadro-dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Morse, T M; Price, C E; Shepard, J R [Colorado Univ., Boulder (USA). Dept. of Physics

    1990-11-15

    We have calculated pion exchange current corrections to the magnetic moments of closed shell ±1 particle nuclei near A=16 and 40 within the framework of quantum hadro-dynamics (QHD). We find that the correction is significant and that, in general, the agreement of the QHD isovector moments with experiment is worsened. Comparisons to previous non-relativistic calculations are also made. (orig.)

  12. Corrective response times in a coordinated eye-head-arm countermanding task.

    Science.gov (United States)

    Tao, Gordon; Khan, Aarlenne Z; Blohm, Gunnar

    2018-06-01

    Inhibition of motor responses has been described as a race between two competing decision processes of motor initiation and inhibition, which manifest as the reaction time (RT) and the stop signal reaction time (SSRT); in the case where motor initiation wins out over inhibition, an erroneous movement occurs that usually needs to be corrected, leading to corrective response times (CRTs). Here we used a combined eye-head-arm movement countermanding task to investigate the mechanisms governing multiple effector coordination and the timing of corrective responses. We found a high degree of correlation between effector response times for RT, SSRT, and CRT, suggesting that decision processes are strongly dependent across effectors. To gain further insight into the mechanisms underlying CRTs, we tested multiple models to describe the distribution of RTs, SSRTs, and CRTs. The best-ranked model (according to 3 information criteria) extends the LATER race model governing RTs and SSRTs, whereby a second motor initiation process triggers the corrective response (CRT) only after the inhibition process completes in an expedited fashion. Our model suggests that the neural processing underpinning a failed decision has a residual effect on subsequent actions. NEW & NOTEWORTHY Failure to inhibit erroneous movements typically results in corrective movements. For coordinated eye-head-hand movements we show that corrective movements are only initiated after the erroneous movement cancellation signal has reached a decision threshold in an accelerated fashion.
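    The sketch below is a toy simulation in the spirit of the extended LATER race model described above: go and stop units rise linearly to a threshold at normally distributed rates, and a corrective unit starts only once the stop process of a failed-inhibition trial has completed. All rate parameters and the stop-signal delay are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def later_finish(mu, sigma, n, threshold=1.0):
    """Finishing times of a LATER unit: linear rise to a threshold at a rate
    drawn from a normal distribution (non-positive rates never finish)."""
    rate = rng.normal(mu, sigma, n)
    return np.where(rate > 0, threshold / np.maximum(rate, 1e-12), np.inf)

n_trials = 10_000
stop_signal_delay = 0.15                                  # s, hypothetical SSD
go_rt = later_finish(mu=5.0, sigma=1.5, n=n_trials)       # primary (erroneous) response
stop_rt = stop_signal_delay + later_finish(mu=8.0, sigma=2.0, n=n_trials)

failed_inhibition = go_rt < stop_rt                        # erroneous movement escapes
# Corrective response is triggered only after the stop process completes
corrective_rt = stop_rt[failed_inhibition] + later_finish(
    mu=7.0, sigma=2.0, n=int(failed_inhibition.sum()))

print(f"P(respond|stop) = {failed_inhibition.mean():.2f}, "
      f"median CRT = {np.median(corrective_rt):.3f} s")
```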

  13. Lee-Yang zeroes and logarithmic corrections in the Φ⁴₄ theory

    International Nuclear Information System (INIS)

    Kenna, R.; Lang, C.B.

    1993-01-01

    The leading mean-field critical behaviour of φ⁴₄-theory is modified by multiplicative logarithmic corrections. We analyse these corrections both analytically and numerically. In particular we present a finite-size scaling theory for the Lee-Yang zeroes and temperature zeroes, both of which exhibit logarithmic corrections. On lattices from size 8⁴ to 24⁴, Monte-Carlo cluster methods and multi-histogram techniques are used to determine the partition function zeroes closest to the critical point. Finite-size scaling behaviour is verified and the logarithmic corrections are found to be in good agreement with our analytical predictions. (orig.)

  14. A New Class of Scaling Correction Methods

    International Nuclear Information System (INIS)

    Mei Li-Jie; Wu Xin; Liu Fu-Yao

    2012-01-01

    When conventional integrators like Runge-Kutta-type algorithms are used, numerical errors can make an orbit deviate from a hypersurface determined by many constraints, which leads to unreliable numerical solutions. Scaling correction methods are a powerful tool to avoid this. We focus on their applications, and also develop a family of new velocity multiple scaling correction methods where scale factors only act on the related components of the integrated momenta. They can preserve exactly some first integrals of motion in discrete or continuous dynamical systems, so that rapid growth of roundoff or truncation errors is suppressed significantly. (general)
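    A minimal sketch of a velocity scaling correction of the kind discussed above: after an ordinary integration step, the velocities alone are rescaled by a single factor so the total energy returns exactly to its conserved value. This is a generic illustration under simplifying assumptions, not the specific family of methods developed in the paper.

```python
import numpy as np

def scale_velocities_to_energy(v, m, potential, e0):
    """Rescale velocities v (shape (n, 3)) by one factor s so that
    0.5*sum(m*|s*v|^2) + potential == e0, when a consistent s exists."""
    kinetic = 0.5 * np.sum(m[:, None] * v**2)
    target_kinetic = e0 - potential
    if target_kinetic <= 0 or kinetic == 0:
        return v                      # no consistent rescaling; leave velocities alone
    return np.sqrt(target_kinetic / kinetic) * v

# Hypothetical usage: restore the energy of a toy two-body system after a crude step
m = np.array([1.0, 1.0])
v = np.array([[0.0, 0.45, 0.0], [0.0, -0.45, 0.0]])
potential = -1.0                      # toy potential energy after the step
e0 = -0.8                             # conserved total energy of the true orbit
v_corrected = scale_velocities_to_energy(v, m, potential, e0)
print(0.5 * np.sum(m[:, None] * v_corrected**2) + potential)   # == e0
```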

  15. Determination of self absorption correction factor (SAF) for gross alpha measurement in water samples by BIS method

    International Nuclear Information System (INIS)

    Raveendran, Nanda; Baburajan, A.; Ravi, P.M.

    2018-01-01

    The laboratories accredited by AERB undertake the measurement of gross alpha and gross beta in packaged drinking water from manufacturers across the country and analyze the samples as per the procedure of the Bureau of Indian Standards. Accurate measurement of gross alpha in drinking water samples is a challenge due to the self absorption of alpha particles in the precipitate (Fe(OH)₃ + BaSO₄) of varying thickness and total dissolved solids (TDS). This paper deals with a study on tracer recovery and the self absorption correction factor (SAF). ESL, Tarapur has participated in an inter-laboratory comparison exercise conducted by IDS, RSSD, BARC as per the recommendation of AERB for the accredited laboratories. The thickness of the precipitate is an important aspect which affects the counting process. The activity was reported after conducting multiple experiments with uranium tracer recovery and precipitate thickness. Later, to simplify the procedure, an average tracer recovery and self absorption correction factor (SAF) was derived in the present experiment and used for the re-calculation of activity from the count rate reported earlier.

  16. Multiple treatment comparisons in epilepsy monotherapy trials

    Directory of Open Access Journals (Sweden)

    Chadwick David W

    2007-11-01

    Full Text Available Abstract Background The choice of antiepileptic drug for an individual should be based upon the highest quality evidence regarding potential benefits and harms of the available treatments. Systematic reviews and meta-analysis of randomised controlled trials should be a major source of evidence supporting this decision making process. We summarise all available individual patient data evidence from randomised controlled trials that compared at least two out of eight antiepileptic drugs given as monotherapy. Methods Multiple treatment comparisons from epilepsy monotherapy trials were synthesized in a single stratified Cox regression model adjusted for treatment by epilepsy type interactions and making use of direct and indirect evidence. Primary outcomes were time to treatment failure and time to 12 month remission from seizures. A secondary outcome was time to first seizure. Results Individual patient data for 6418 patients from 20 randomised trials comparing eight antiepileptic drugs were synthesized. For partial onset seizures (4628 (72%) patients), lamotrigine, carbamazepine and oxcarbazepine provide the best combination of seizure control and treatment failure. Lamotrigine is clinically superior to all other drugs for treatment failure but estimates suggest a disadvantage compared to carbamazepine for time to 12 month remission [Hazard Ratio (95% Confidence Interval) = 0.87 (0.73 to 1.04)] and time to first seizure [1.29 (1.13 to 1.48)]. Phenobarbitone may delay time to first seizure [0.77 (0.61 to 0.96)] but at the expense of increased treatment failure [1.60 (1.22 to 2.10)]. For generalized onset tonic clonic seizures (1790 (28%) patients), estimates suggest valproate or phenytoin may provide the best combination of seizure control and treatment failure but some uncertainty remains about the relative effectiveness of other drugs. Conclusion For patients with partial onset seizures, results favour carbamazepine, oxcarbazepine and lamotrigine. For

  17. Relevance of brain lesion location to cognition in relapsing multiple sclerosis.

    Directory of Open Access Journals (Sweden)

    Francesca Rossi

    Full Text Available OBJECTIVE: To assess the relationship between cognition and brain white matter (WM) lesion distribution and frequency in patients with relapsing-remitting multiple sclerosis (RR MS). METHODS: An MRI-based T2 lesion probability map (LPM) was used to assess the relevance of brain lesion location for cognitive impairment in a group of 142 consecutive patients with RRMS. Significance of voxelwise analyses was p<0.05, cluster-corrected for multiple comparisons. The Rao Brief Repeatable Battery was administered at the time of brain MRI to categorize the MS population into cognitively preserved (CP) and cognitively impaired (CI). RESULTS: Out of 142 RRMS patients, 106 were classified as CP and 36 as CI. Although the CI group had greater WM lesion volume than the CP group (p = 0.001), T2 lesions tended to be less widespread across the WM. The peak lesion frequency was almost twice as high in CI patients (61%, in the forceps major) as in CP patients (37%, in the posterior corona radiata). The voxelwise analysis confirmed that lesion frequency was higher in CI than in CP patients, with significant bilateral clusters in the forceps major and in the splenium of the corpus callosum (p<0.05, corrected). Low scores on the Symbol Digit Modalities Test correlated with higher lesion frequency in these WM regions. CONCLUSIONS: Overall these results suggest that in MS patients, areas relevant for cognition lie mostly in the commissural fiber tracts. This supports the notion of a functional (multiple) disconnection between grey matter structures, secondary to damage located in specific WM areas, as one of the most important mechanisms leading to cognitive impairment in MS.

  18. Detected-jump-error-correcting quantum codes, quantum error designs, and quantum computation

    International Nuclear Information System (INIS)

    Alber, G.; Mussinger, M.; Beth, Th.; Charnes, Ch.; Delgado, A.; Grassl, M.

    2003-01-01

    The recently introduced detected-jump-correcting quantum codes are capable of stabilizing qubit systems against spontaneous decay processes arising from couplings to statistically independent reservoirs. These embedded quantum codes exploit classical information about which qubit has emitted spontaneously and correspond to an active error-correcting code embedded in a passive error-correcting code. The construction of a family of one-detected-jump-error-correcting quantum codes is shown and the optimal redundancy, encoding, and recovery as well as general properties of detected-jump-error-correcting quantum codes are discussed. By the use of design theory, multiple-jump-error-correcting quantum codes can be constructed. The performance of one-jump-error-correcting quantum codes under nonideal conditions is studied numerically by simulating a quantum memory and Grover's algorithm

  19. Power corrections and event shapes at LEP

    CERN Document Server

    Sanders, Michiel P

    2000-01-01

    Measurements of event shape variables from hadronic events collected by the LEP experiments, corresponding to hadronic center of mass energies between 30 GeV and 202 GeV, are presented. Fits are performed to extract α_s and the effective infrared strong coupling α_0 within the power correction ansatz. Universality is observed for the effective coupling and comparisons are made with fragmentation models.

  20. Correction

    DEFF Research Database (Denmark)

    Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin

    2016-01-01

    [This corrects the article DOI: 10.1371/journal.ppat.1005000.] [This corrects the article DOI: 10.1371/journal.ppat.1005740.] [This corrects the article DOI: 10.1371/journal.ppat.1005679.]

  1. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers.

    Science.gov (United States)

    Dobie, Robert A; Wojcik, Nancy C

    2015-07-13

    The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Regression analysis was used to derive new age-correction values using audiometric data from the 1999-2006 US NHANES. Using the NHANES median better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20-75 years. The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20-75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61-75 years. Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers.
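    The sketch below illustrates the described procedure of fitting median better-ear thresholds to a simple polynomial in age and reading off age-correction values relative to age 20; the threshold numbers are invented stand-ins, not the NHANES medians.

```python
import numpy as np

# Hypothetical median better-ear thresholds at 4 kHz (dB HL) by age, standing in
# for the NHANES 1999-2006 medians used in the paper.
ages       = np.array([20, 30, 40, 50, 60, 70, 75])
thresholds = np.array([ 2,  3,  6, 11, 18, 28, 34])

fit = np.poly1d(np.polyfit(ages, thresholds, deg=2))   # simple polynomial fit

def age_correction(age, reference_age=20):
    """Age-correction value: fitted threshold at `age` minus the age-20 baseline."""
    return fit(age) - fit(reference_age)

for a in (25, 45, 61, 75):
    print(a, round(float(age_correction(a)), 1), "dB")
```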

  2. Error correcting circuit design with carbon nanotube field effect transistors

    Science.gov (United States)

    Liu, Xiaoqiang; Cai, Li; Yang, Xiaokuo; Liu, Baojun; Liu, Zhongyong

    2018-03-01

    In this work, a parallel error correcting circuit based on (7, 4) Hamming code is designed and implemented with carbon nanotube field effect transistors, and its function is validated by simulation in HSpice with the Stanford model. A grouping method which is able to correct multiple bit errors in 16-bit and 32-bit application is proposed, and its error correction capability is analyzed. Performance of circuits implemented with CNTFETs and traditional MOSFETs respectively is also compared, and the former shows a 34.4% decrement of layout area and a 56.9% decrement of power consumption.
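    A software sketch of the (7, 4) Hamming encode/correct cycle and of the grouping idea for multi-bit tolerance (one correctable error per 4-bit group of a 16-bit word) follows; this is a generic reference model of the code, not the CNTFET circuit itself.

```python
import numpy as np

G = np.array([[1,0,0,0,1,1,0],     # generator matrix (4 data bits, 3 parity bits)
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],     # parity-check matrix, H @ G.T == 0 (mod 2)
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(nibble):                # nibble: 4 data bits
    return (np.array(nibble) @ G) % 2

def correct(codeword):
    syndrome = (H @ codeword) % 2
    if syndrome.any():             # the syndrome matches the column of H at the error
        err = int(np.argmax((H.T == syndrome).all(axis=1)))
        codeword = codeword.copy()
        codeword[err] ^= 1
    return codeword[:4]            # corrected data bits

# Grouping: a 16-bit word split into four nibbles tolerates one flipped bit per group
word = [1,0,1,1, 0,0,1,0, 1,1,1,0, 0,1,0,1]
groups = [encode(word[i:i+4]) for i in range(0, 16, 4)]
groups[2][6] ^= 1                  # inject a single-bit error in the third group
decoded = sum((list(correct(g)) for g in groups), [])
print(decoded == word)             # True
```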

  3. A systematic comparison of motion artifact correction techniques for functional near-infrared spectroscopy

    DEFF Research Database (Denmark)

    Cooper, Robert J; Selb, Juliette; Gagnon, Louis

    2012-01-01

    Principal component analysis, spline interpolation, wavelet analysis, and Kalman filtering approaches are compared to one another and to standard approaches using the accuracy of the recovered, simulated hemodynamic response function (HRF). Each of the four motion correction techniques we tested yields a significant reduction in the mean-squared error (MSE) and a significant increase in the contrast-to-noise ratio (CNR) of the recovered HRF when compared to no correction and compared to a process of rejecting motion-contaminated trials. Spline interpolation produces the largest average reduction in MSE (55%) while wavelet analysis produces the highest average increase in CNR (39%). On the basis of this analysis, we recommend the routine application of motion correction techniques (particularly spline interpolation or wavelet analysis) to minimize the impact of motion artifacts on functional NIRS data.

  4. Comparison of charged particle multiplicity distributions in p̄p and pp interactions and verification of the dual unitarization scheme

    International Nuclear Information System (INIS)

    Batyunya, B.V.; Boguslavsky, I.V.; Gramenitsky, I.M.

    1979-01-01

    The difference between antiproton annihilation and pp interactions has been discussed. Charged particle multiplicity distributions in p̄p interactions at 22.4 GeV/c were used to obtain antiproton annihilation characteristics. The comparison of the topological cross sections of p̄p interactions with those of non-diffractive pp interactions confirms the validity of dual unitarization.

  5. Combining morphometric evidence from multiple registration methods using dempster-shafer theory

    Science.gov (United States)

    Rajagopalan, Vidya; Wyatt, Christopher

    2010-03-01

    In tensor-based morphometry (TBM), group-wise differences in brain structure are measured using high degree-of-freedom registration and some form of statistical test. However, it is known that TBM results are sensitive to both the registration method and the statistical test used. Given the lack of an objective model of group variation, it is difficult to determine a best registration method for TBM. The use of statistical tests is also problematic given the corrections required for multiple testing and the notorious difficulty of selecting and interpreting significance values. This paper presents an approach to address both of these issues by combining multiple registration methods using Dempster-Shafer evidence theory to produce belief maps of categorical changes between groups. This approach is applied to the comparison of brain morphometry in aging, a typical application of TBM, using the determinant of the Jacobian as a measure of volume change. We show that the Dempster-Shafer combination produces a unique and easy to interpret belief map of regional changes between and within groups without the complications associated with hypothesis testing.
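    A minimal sketch of Dempster's rule of combination as it might be used to fuse per-voxel evidence from two registration methods follows; the frame of discernment and the mass assignments are illustrative assumptions, not values from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions over a common frame.
    Masses are dicts mapping frozenset hypotheses to belief mass."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                      # mass assigned to disjoint hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict; evidence cannot be combined")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Hypothetical per-voxel evidence from two registration methods about volume change
frame = frozenset({"expand", "contract", "none"})
m_reg1 = {frozenset({"expand"}): 0.6, frozenset({"expand", "none"}): 0.2, frame: 0.2}
m_reg2 = {frozenset({"expand"}): 0.5, frozenset({"none"}): 0.1, frame: 0.4}
fused = dempster_combine(m_reg1, m_reg2)
for hypothesis, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(hypothesis), round(mass, 3))
```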

  6. Crosstalk corrections for improved energy resolution with highly segmented HPGe-detectors

    International Nuclear Information System (INIS)

    Bruyneel, Bart; Reiter, Peter; Wiens, Andreas; Eberth, Juergen; Hess, Herbert; Pascovici, Gheorghe; Warr, Nigel; Aydin, Sezgin; Bazzacco, Dino; Recchia, Francesco

    2009-01-01

    Crosstalk effects of 36-fold segmented, large volume AGATA HPGe detectors cause shifts in the γ-ray energy measured by the inner core and outer segments as function of segment multiplicity. The positions of the segment sum energy peaks vary approximately linearly with increasing segment multiplicity. The resolution of these peaks deteriorates also linearly as a function of segment multiplicity. Based on single event treatment, two methods were developed in the AGATA Collaboration to correct for the crosstalk induced effects by employing a linear transformation. The matrix elements are deduced from coincidence measurements of γ-rays of various energies as recorded with digital electronics. A very efficient way to determine the matrix elements is obtained by measuring the base line shifts of untriggered segments using γ-ray detection events in which energy is deposited in a single segment. A second approach is based on measuring segment energy values for γ-ray interaction events in which energy is deposited in only two segments. After performing crosstalk corrections, the investigated detector shows a good fit between the core energy and the segment sum energy at all multiplicities and an improved energy resolution of the segment sum energy peaks. The corrected core energy resolution equals the segment sum energy resolution which is superior at all folds compared to the individual uncorrected energy resolutions. This is achieved by combining the two independent energy measurements with the core contact on the one hand and the segment contacts on the other hand.

  7. Methods of orbit correction system optimization

    International Nuclear Information System (INIS)

    Chao, Yu-Chiu.

    1997-01-01

    Extracting optimal performance out of an orbit correction system is an important component of accelerator design and evaluation. The question of effectiveness vs. economy, however, is not always easily tractable. This is especially true in cases where betatron function magnitude and phase advance do not have smooth or periodic dependencies on the physical distance. In this report a program is presented using linear algebraic techniques to address this problem. A systematic recipe is given, supported with quantitative criteria, for arriving at an orbit correction system design with the optimal balance between performance and economy. The orbit referred to in this context can be generalized to include angle, path length, orbit effects on the optical transfer matrix, and simultaneous effects on multiple pass orbits

  8. A spectrum correction method for fuel assembly rehomogenization

    International Nuclear Information System (INIS)

    Lee, Kyung Taek; Cho, Nam Zin

    2004-01-01

    To overcome the limitation of existing homogenization methods based on the single assembly calculation with zero current boundary condition, we propose a new rehomogenization method, named spectrum correction method (SCM), consisting of the multigroup energy spectrum approximation by spectrum correction and the condensed two-group heterogeneous single assembly calculations with non-zero current boundary condition. In SCM, the spectrum shifting phenomena caused by current across assembly interfaces are considered by the spectrum correction at group condensation stage at first. Then, heterogeneous single assembly calculations with two-group cross sections condensed by using corrected multigroup energy spectrum are performed to obtain rehomogenized nodal diffusion parameters, i.e., assembly-wise homogenized cross sections and discontinuity factors. To evaluate the performance of SCM, it was applied to the analytic function expansion nodal (AFEN) method and several test problems were solved. The results show that SCM can reduce the errors significantly both in multiplication factors and assembly averaged power distributions

  9. Multiple sclerosis

    DEFF Research Database (Denmark)

    Stenager, E; Jensen, K

    1990-01-01

    An investigation of the correlation between the ability to read TV subtitles and the duration of visual evoked potential (VEP) latency in 14 patients with definite multiple sclerosis (MS) indicated that VEP latency in patients unable to read the TV subtitles was significantly delayed in comparison...

  10. Gravitational threshold corrections in non-supersymmetric heterotic strings

    Directory of Open Access Journals (Sweden)

    Ioannis Florakis

    2017-03-01

    Full Text Available We compute one-loop quantum corrections to gravitational couplings in the effective action of four-dimensional heterotic strings where supersymmetry is spontaneously broken by Scherk–Schwarz fluxes. We show that in both heterotic and type II theories of this class, no moduli dependent corrections to the Planck mass are generated. We explicitly compute the one-loop corrections to the R² coupling and find that, despite the absence of supersymmetry, its contributions may still be organised into representations of subgroups of the modular group, and admit a universal form, determined uniquely by the multiplicities of the ground states of the theory. Moreover, similarly to the case of gauge couplings, also the gravitational sector may become strongly coupled in models which dynamically induce large volume for the extra dimensions.

  11. Correct-by-construction approaches for SoC design

    CERN Document Server

    Sinha, Roopak; Basu, Samik

    2013-01-01

    This book describes an approach for designing Systems-on-Chip such that the system meets precise mathematical requirements. The methodologies presented enable embedded systems designers to reuse intellectual property (IP) blocks from existing designs in an efficient, reliable manner, automatically generating correct SoCs from multiple, possibly mismatching, components.

  12. Unpolarised transverse momentum dependent distribution and fragmentation functions from SIDIS multiplicities

    International Nuclear Information System (INIS)

    Anselmino, M.; Boglione, M.; Gonzalez, H. J.O.; Melis, S.; Prokudin, A.

    2014-01-01

    In this study, the unpolarised transverse momentum dependent distribution and fragmentation functions are extracted from HERMES and COMPASS experimental measurements of SIDIS multiplicities for charged hadron production. The data are grouped into independent bins of the kinematical variables, in which the TMD factorisation is expected to hold. A simple factorised functional form of the TMDs is adopted, with a Gaussian dependence on the intrinsic transverse momentum, which turns out to be quite adequate in shape. HERMES data do not need any normalisation correction, while fits of the COMPASS data much improve with a y-dependent overall normalisation factor. A comparison of the extracted TMDs with previous EMC and JLab data confirms the adequacy of the simple gaussian distributions. The possible role of the TMD evolution is briefly considered

  13. Feedback-related brain activity predicts learning from feedback in multiple-choice testing.

    Science.gov (United States)

    Ernst, Benjamin; Steinhauser, Marco

    2012-06-01

    Different event-related potentials (ERPs) have been shown to correlate with learning from feedback in decision-making tasks and with learning in explicit memory tasks. In the present study, we investigated which ERPs predict learning from corrective feedback in a multiple-choice test, which combines elements from both paradigms. Participants worked through sets of multiple-choice items of a Swahili-German vocabulary task. Whereas the initial presentation of an item required the participants to guess the answer, corrective feedback could be used to learn the correct response. Initial analyses revealed that corrective feedback elicited components related to reinforcement learning (FRN), as well as to explicit memory processing (P300) and attention (early frontal positivity). However, only the P300 and early frontal positivity were positively correlated with successful learning from corrective feedback, whereas the FRN was even larger when learning failed. These results suggest that learning from corrective feedback crucially relies on explicit memory processing and attentional orienting to corrective feedback, rather than on reinforcement learning.

  14. Corrected multiple upsets and bit reversals for improved 1-s resolution measurements

    International Nuclear Information System (INIS)

    Brucker, G.J.; Stassinopoulos, E.G.; Stauffer, C.A.

    1994-01-01

    Previous work has studied the generation of single and multiple errors in control and irradiated static RAM samples (Harris 6504RH) which were exposed to heavy ions for relatively long intervals of time (minutes), and read out only after the beam was shut off. The present investigation involved storing 4k × 1 bit maps every second during 1 min ion exposures at low flux rates of 10³ ions/cm²·s in order to reduce the chance of two sequential ions upsetting adjacent bits. The data were analyzed for the presence of adjacent upset bit locations in the physical memory plane, which were previously defined to constitute multiple upsets. Improvement in the time resolution of these measurements has provided more accurate estimates of multiple upsets. The results indicate that the percentage of multiples decreased from a high of 17% in the previous experiment to less than 1% for this new experimental technique. Consecutive double and triple upsets (reversals of bits) were detected. These were caused by sequential ions hitting the same bit, with one or two reversals of state occurring in a 1-min run. In addition to these results, a status review for these same parts covering 3.5 years of imprint damage recovery is also presented.
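    The sketch below shows one way to post-process the 1-s bit maps described above: XOR consecutive maps to find newly flipped bits, then count clusters of adjacent upsets as candidate multiple upsets. Four-connectivity in a 64 × 64 physical plane is assumed here purely for illustration.

```python
import numpy as np
from scipy.ndimage import label

def find_multiple_upsets(previous_map, current_map):
    """Return (n_upsets, n_multiple): total flipped bits in this frame and the
    number of groups of two or more adjacent flipped bits (4-connectivity)."""
    upsets = previous_map ^ current_map          # bits that changed since the last frame
    labels, _ = label(upsets)                    # connected clusters of flipped bits
    sizes = np.bincount(labels.ravel())[1:]      # cluster sizes, skipping background
    return int(upsets.sum()), int((sizes >= 2).sum())

rng = np.random.default_rng(2)
prev = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)   # 4k x 1 bit map
curr = prev.copy()
curr[10, 10] ^= 1                                # isolated single upset
curr[30, 40] ^= 1; curr[30, 41] ^= 1             # two adjacent upsets -> one "multiple"
print(find_multiple_upsets(prev, curr))          # (3, 1)
```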

  15. Correction of gene expression data

    DEFF Research Database (Denmark)

    Darbani Shirvanehdeh, Behrooz; Stewart, C. Neal, Jr.; Noeparvar, Shahin

    2014-01-01

    This report investigates for the first time the potential inter-treatment bias source of cell number for gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies. For maximal reliability of analysis, therefore, comparisons should be performed at the cellular level. This could be accomplished using an appropriate correction method that can detect and remove the inter-treatment bias for cell-number. Based on inter-treatment variations of reference genes, we introduce

  16. Injuries in martial arts: a comparison of five styles.

    Science.gov (United States)

    Zetaruk, M N; Violán, M A; Zurakowski, D; Micheli, L J

    2005-01-01

    To compare five martial arts with respect to injury outcomes. A one year retrospective cohort was studied using an injury survey. Data on 263 martial arts participants (Shotokan karate, n = 114; aikido, n = 47; tae kwon do, n = 49; kung fu, n = 39; tai chi, n = 14) were analysed. Predictor variables included age, sex, training frequency (≤3 vs >3 h/week), experience (<3 vs ≥3 years), and martial art style. Outcome measures were injuries requiring time off from training, major injuries (≥7 days off), multiple injuries (≥3), body region, and type of injury. Logistic regression was used to determine odds ratios (OR) and confidence intervals (CI). Fisher's exact test was used for comparisons between styles, with a Bonferroni correction for multiple comparisons. The rate of injuries, expressed as the percentage of participants sustaining an injury that required time off training a year, varied according to style: 59% tae kwon do, 51% aikido, 38% kung fu, 30% karate, and 14% tai chi. There was a threefold increased risk of injury and multiple injury in tae kwon do compared with karate. Participants ≥18 years of age were at greater risk of injury than younger ones, and training >3 h/week was also a significant predictor of injury. Compared with karate, the risks of head/neck injury, upper extremity injury, and soft tissue injury were all higher in aikido, and the risks of lower extremity injury were higher in tae kwon do. Different martial arts have significantly different types and distributions of injuries. Martial arts appear to be safe for young athletes, particularly those at beginner or intermediate levels.
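    The following sketch mirrors the statistical approach described above (Fisher's exact test for pairwise style comparisons with a Bonferroni correction for the ten pairs); the injury counts are made-up placeholders, not the study data.

```python
from itertools import combinations
from scipy.stats import fisher_exact

# Hypothetical injured / uninjured counts per style (not the study's data)
counts = {"tae kwon do": (29, 20), "aikido": (24, 23), "kung fu": (15, 24),
          "karate": (34, 80), "tai chi": (2, 12)}

pairs = list(combinations(counts, 2))
alpha = 0.05 / len(pairs)                      # Bonferroni-adjusted threshold
for a, b in pairs:
    table = [list(counts[a]), list(counts[b])]
    _, p = fisher_exact(table)
    flag = "significant" if p < alpha else "n.s."
    print(f"{a} vs {b}: p = {p:.4f} ({flag} at alpha = {alpha:.4f})")
```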

  17. A systematic comparison of motion artifact correction techniques for functional near-infrared spectroscopy.

    Science.gov (United States)

    Cooper, Robert J; Selb, Juliette; Gagnon, Louis; Phillip, Dorte; Schytz, Henrik W; Iversen, Helle K; Ashina, Messoud; Boas, David A

    2012-01-01

    Near-infrared spectroscopy (NIRS) is susceptible to signal artifacts caused by relative motion between NIRS optical fibers and the scalp. These artifacts can be very damaging to the utility of functional NIRS, particularly in challenging subject groups where motion can be unavoidable. A number of approaches to the removal of motion artifacts from NIRS data have been suggested. In this paper we systematically compare the utility of a variety of published NIRS motion correction techniques using a simulated functional activation signal added to 20 real NIRS datasets which contain motion artifacts. Principal component analysis, spline interpolation, wavelet analysis, and Kalman filtering approaches are compared to one another and to standard approaches using the accuracy of the recovered, simulated hemodynamic response function (HRF). Each of the four motion correction techniques we tested yields a significant reduction in the mean-squared error (MSE) and significant increase in the contrast-to-noise ratio (CNR) of the recovered HRF when compared to no correction and compared to a process of rejecting motion-contaminated trials. Spline interpolation produces the largest average reduction in MSE (55%) while wavelet analysis produces the highest average increase in CNR (39%). On the basis of this analysis, we recommend the routine application of motion correction techniques (particularly spline interpolation or wavelet analysis) to minimize the impact of motion artifacts on functional NIRS data.
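    A simplified sketch in the spirit of spline-interpolation motion correction follows: a smoothing spline models the artifact within a flagged segment, the modelled artifact is subtracted, and the segment is re-anchored to the local baseline. This illustrates the idea on synthetic data and is not the exact published spline method evaluated in the paper.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_correct(signal, t, art_start, art_end, smoothing=1.0):
    """Subtract a smoothing-spline model of a motion artifact from a NIRS channel.
    art_start/art_end are sample indices bounding the flagged motion segment."""
    corrected = signal.copy()
    seg = slice(art_start, art_end)
    spline = UnivariateSpline(t[seg], signal[seg], s=smoothing * (art_end - art_start))
    residual = signal[seg] - spline(t[seg])                 # remove the modelled artifact
    baseline = np.median(signal[max(0, art_start - 20):art_start])
    corrected[seg] = residual + baseline                    # re-anchor to the local baseline
    return corrected

# Hypothetical usage on a synthetic channel with a step-like motion artifact
t = np.linspace(0, 60, 600)
clean = 0.05 * np.sin(2 * np.pi * 0.1 * t)
signal = clean + np.where((t > 20) & (t < 30), 0.8, 0.0)    # artifact between 20 and 30 s
fixed = spline_correct(signal, t, art_start=200, art_end=300)
print(np.abs(fixed - clean).max() < np.abs(signal - clean).max())  # artifact reduced
```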

  18. Inductor Design Comparison of Three-wire and Four-wire Three-phase Voltage Source Converters in Power Factor Correction Applications

    DEFF Research Database (Denmark)

    Kouchaki, Alireza; Nymand, Morten

    2015-01-01

    This paper studies the inductor design for the three-wire and four-wire power factor correction converter (PFC). Designing an efficient inductor for this converter (regardless of connecting the midpoint to the ground) requires a comprehensive knowledge of the inductor current and voltage behavior. This paper investigates how changing the three-wire PFC to its four-wire counterpart influences the inductor design in terms of size, losses, and overall efficiency of the converter. Therefore, the inductor current and voltage waveforms are analyzed and generalized in both cases for one switching cycle to build a foundation for comparison. Accordingly, the analyses are able to interpret the differences between both configurations and explain the core losses and the copper losses of the inductors, especially those caused by the high frequency ac current ripple. Finally, two inductors are designed for a 5 kW PFC

  19. Efficient Color-Dressed Calculation of Virtual Corrections

    CERN Document Server

    Giele, Walter; Winter, Jan

    2010-01-01

    With the advent of generalized unitarity and parametric integration techniques, the construction of a generic Next-to-Leading Order Monte Carlo becomes feasible. Such a generator will entail the treatment of QCD color in the amplitudes. We extend the concept of color dressing to one-loop amplitudes, resulting in the formulation of an explicit algorithmic solution for the calculation of arbitrary scattering processes at Next-to-Leading order. The resulting algorithm is of exponential complexity, that is the numerical evaluation time of the virtual corrections grows by a constant multiplicative factor as the number of external partons is increased. To study the properties of the method, we calculate the virtual corrections to $n$-gluon scattering.

  20. Inventory verification measurements using neutron multiplicity counting

    International Nuclear Information System (INIS)

    Ensslin, N.; Foster, L.A.; Harker, W.C.; Krick, M.S.; Langner, D.G.

    1998-01-01

    This paper describes a series of neutron multiplicity measurements of large plutonium samples at the Los Alamos Plutonium Facility. The measurements were corrected for bias caused by neutron energy spectrum shifts and nonuniform multiplication, and are compared with calorimetry/isotopics. The results show that multiplicity counting can increase measurement throughput and yield good verification results for some inventory categories. The authors provide recommendations on the future application of the technique to inventory verification

  1. Heel effect adaptive flat field correction of digital x-ray detectors

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Yongjian [X-ray Products, Varian Medical Systems Inc., Liverpool, New York 13088 (United States); Wang, Jue [Department of Mathematics, Union College, Schenectady, New York 12308 (United States)

    2013-08-15

    Purpose: Anode heel effect renders large-scale background nonuniformities in digital radiographs. Conventional offset/gain calibration is performed at mono source-to-image distance (SID), and disregards the SID-dependent characteristic of heel effect. It results in a residual nonuniform background in the corrected radiographs when the SID settings for calibration and correction differ. In this work, the authors develop a robust and efficient computational method for digital x-ray detector gain correction adapted to SID-variant heel effect, without resorting to physical filters, phantoms, complicated heel effect models, or multiple-SID calibration and interpolation. Methods: The authors present the Duo-SID projection correction method. In our approach, conventional offset/gain calibrations are performed only twice, at the minimum and maximum SIDs of the system in typical clinical use. A fast iterative separation algorithm is devised to extract the detector gain and basis heel patterns from the min/max SID calibrations. The resultant detector gain is independent of SID, while the basis heel patterns are parameterized by the min- and max-SID. The heel pattern at any SID is obtained from the min-SID basis heel pattern via projection imaging principles. The system gain desired at a specific acquisition SID is then constructed using the projected heel pattern and detector gain map. Results: The method was evaluated for flat field and anatomical phantom image corrections. It demonstrated promising improvements over interpolation and conventional gain calibration/correction methods, lowering their correction errors by approximately 70% and 80%, respectively. The separation algorithm was able to extract the detector gain and heel patterns with less than 2% error, and the Duo-SID corrected images showed perceptually appealing uniform background across the detector. Conclusions: The Duo-SID correction method has substantially improved on conventional offset/gain corrections for

  3. Non-eikonal corrections for the scattering of spin-one particles

    Energy Technology Data Exchange (ETDEWEB)

    Gaber, M.W.; Wilkin, C. [Department of Physics and Astronomy, University College London, WC1E 6BT, London (United Kingdom); Al-Khalili, J.S. [Department of Physics, University of Surrey, GU2 7XH, Guildford, Surrey (United Kingdom)

    2004-08-01

    The Wallace Fourier-Bessel expansion of the scattering amplitude is generalised to the case of the scattering of a spin-one particle from a potential with a single tensor coupling as well as central and spin-orbit terms. A generating function for the eikonal-phase (quantum) corrections is evaluated in closed form. For medium-energy deuteron-nucleus scattering, the first-order correction is dominant and is shown to be significant in the interpretation of analysing power measurements. This conclusion is supported by a numerical comparison of the eikonal observables, evaluated with and without corrections, with those obtained from a numerical resolution of the Schroedinger equation for d-{sup 58}Ni scattering at incident deuteron energies of 400 and 700 MeV. (orig.)

  4. Comparative Efficacy of Daratumumab Monotherapy and Pomalidomide Plus Low-Dose Dexamethasone in the Treatment of Multiple Myeloma: A Matching Adjusted Indirect Comparison.

    Science.gov (United States)

    Van Sanden, Suzy; Ito, Tetsuro; Diels, Joris; Vogel, Martin; Belch, Andrew; Oriol, Albert

    2018-03-01

    Daratumumab (a human CD38-directed monoclonal antibody) and pomalidomide (an immunomodulatory drug) plus dexamethasone are both relatively new treatment options for patients with heavily pretreated multiple myeloma. A matching adjusted indirect comparison (MAIC) was used to compare absolute treatment effects of daratumumab versus pomalidomide + low-dose dexamethasone (LoDex; 40 mg) on overall survival (OS), while adjusting for differences between the trial populations. The MAIC method reduces the risk of bias associated with naïve indirect comparisons. Data from 148 patients receiving daratumumab (16 mg/kg), pooled from the GEN501 and SIRIUS studies, were compared separately with data from patients receiving pomalidomide + LoDex in the MM-003 and STRATUS studies. The MAIC-adjusted hazard ratio (HR) for OS of daratumumab versus pomalidomide + LoDex was 0.56 (95% confidence interval [CI], 0.38-0.83; p  = .0041) for MM-003 and 0.51 (95% CI, 0.37-0.69; p  < .0001) for STRATUS. The treatment benefit was even more pronounced when the daratumumab population was restricted to pomalidomide-naïve patients (MM-003: HR, 0.33; 95% CI, 0.17-0.66; p  = .0017; STRATUS: HR, 0.41; 95% CI, 0.21-0.79; p  = .0082). An additional analysis indicated a consistent trend of the OS benefit across subgroups based on M-protein level reduction (≥50%, ≥25%, and <25%). The MAIC results suggest that daratumumab improves OS compared with pomalidomide + LoDex in patients with heavily pretreated multiple myeloma. This matching adjusted indirect comparison of clinical trial data from four studies analyzes the survival outcomes of patients with heavily pretreated, relapsed/refractory multiple myeloma who received either daratumumab monotherapy or pomalidomide plus low-dose dexamethasone. Using this method, daratumumab conferred a significant overall survival benefit compared with pomalidomide plus low-dose dexamethasone. In the absence of head-to-head trials, these

  5. Fully 3D refraction correction dosimetry system

    International Nuclear Information System (INIS)

    Manjappa, Rakesh; Makki, S Sharath; Kanhirodan, Rajan; Kumar, Rajesh; Vasu, Ram Mohan

    2016-01-01

    The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken for various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using conventional algebraic reconstruction (ART) and refractive index corrected ART (ART-rc) algorithms. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as RI matched

  6. Fully 3D refraction correction dosimetry system.

    Science.gov (United States)

    Manjappa, Rakesh; Makki, S Sharath; Kumar, Rajesh; Vasu, Ram Mohan; Kanhirodan, Rajan

    2016-02-21

    The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken for various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using conventional algebraic reconstruction (ART) and refractive index corrected ART (ART-rc) algorithms. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as RI matched

  7. Diagnostic accuracy of full-body linear X-ray scanning in multiple trauma patients in comparison to computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Joeres, A.P.W.; Heverhagen, J.T.; Bonel, H. [Inselspital - University Hospital Bern (Switzerland). Univ. Inst. of Diagnostic, Interventional and Pediatric Radiology; Exadaktylos, A. [Inselspital - University Hospital Bern (Switzerland). Dept. of Emergency Medicine; Klink, T. [Inselspital - University Hospital Bern (Switzerland). Univ. Inst. of Diagnostic, Interventional and Pediatric Radiology; Wuerzburg Univ. (Germany). Inst. of Diagnostic and Interventional Radiology

    2016-02-15

    The purpose of this study was to evaluate the diagnostic accuracy of full-body linear X-ray scanning (LS) in multiple trauma patients in comparison to 128-multislice computed tomography (MSCT). 106 multiple trauma patients (female: 33; male: 73) were retrospectively included in this study. All patients underwent LS of the whole body, including extremities, and MSCT covering the neck, thorax, abdomen, and pelvis. The diagnostic accuracy of LS for the detection of fractures of the truncal skeleton and pneumothoraces was evaluated in comparison to MSCT by two observers in consensus. Extremity fractures detected by LS were documented. The overall sensitivity of LS was 49.2%, the specificity was 93.3%, the positive predictive value was 91%, and the negative predictive value was 57.5%. The overall sensitivity for vertebral fractures was 16.7%, and the specificity was 100%. The sensitivity was 48.7% and the specificity 98.2% for all other fractures. Pneumothoraces were detected in 12 patients by CT, but not by LS. Forty extremity fractures were detected by LS, of which 4 fractures were dislocated, and 2 were fully covered by MSCT. The diagnostic accuracy of LS is limited in the evaluation of acute trauma of the truncal skeleton. LS allows fast whole-body X-ray imaging, and may be valuable for detecting extremity fractures in trauma patients in addition to MSCT.
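
    The accuracy figures quoted above follow from the standard 2x2 contingency-table definitions. A short Python sketch (the counts in the usage line are illustrative only, not the study's raw data):

        def diagnostic_accuracy(tp, fp, fn, tn):
            sensitivity = tp / (tp + fn)   # fraction of true findings detected
            specificity = tn / (tn + fp)   # fraction of negatives correctly ruled out
            ppv = tp / (tp + fp)           # positive predictive value
            npv = tn / (tn + fn)           # negative predictive value
            return sensitivity, specificity, ppv, npv

        # Illustrative counts only:
        print(diagnostic_accuracy(tp=60, fp=6, fn=62, tn=84))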

  8. Evaluation of Sinus/Edge-Corrected Zero-Echo-Time-Based Attenuation Correction in Brain PET/MRI.

    Science.gov (United States)

    Yang, Jaewon; Wiesinger, Florian; Kaushik, Sandeep; Shanbhag, Dattesh; Hope, Thomas A; Larson, Peder E Z; Seo, Youngho

    2017-11-01

    In brain PET/MRI, the major challenge of zero-echo-time (ZTE)-based attenuation correction (ZTAC) is the misclassification of air/tissue/bone mixtures or their boundaries. Our study aimed to evaluate a sinus/edge-corrected (SEC) ZTAC (ZTAC SEC ), relative to an uncorrected (UC) ZTAC (ZTAC UC ) and a CT atlas-based attenuation correction (ATAC). Methods: Whole-body 18 F-FDG PET/MRI scans were obtained for 12 patients after PET/CT scans. Only data acquired at a bed station that included the head were used for this study. Using PET data from PET/MRI, we applied ZTAC UC , ZTAC SEC , ATAC, and reference CT-based attenuation correction (CTAC) to PET attenuation correction. For ZTAC UC , the bias-corrected and normalized ZTE was converted to pseudo-CT with air (-1,000 HU for ZTE < 0.2), soft tissue (42 HU for ZTE > 0.75), and bone (-2,000 × [ZTE - 1] + 42 HU for 0.2 ≤ ZTE ≤ 0.75). Afterward, in the pseudo-CT, sinus/edges were automatically estimated as a binary mask through morphologic processing and edge detection. In the binary mask, the overestimated values were rescaled below 42 HU for ZTAC SEC . For ATAC, the atlas deformed to MR in-phase was segmented to air, inner air, soft tissue, and continuous bone. For the quantitative evaluation, PET mean uptake values were measured in twenty 1-mL volumes of interest distributed throughout brain tissues. The PET uptake was compared using a paired t test. An error histogram was used to show the distribution of voxel-based PET uptake differences. Results: Compared with CTAC, ZTAC SEC achieved overall PET quantification accuracy (0.2% ± 2.4%, P = 0.23) similar to CTAC, in comparison with ZTAC UC (5.6% ± 3.5%, P PET quantification in brain PET/MRI, comparable to the accuracy achieved by CTAC, particularly in the cerebellum. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
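
    A minimal Python sketch of the piecewise ZTE-to-HU conversion quoted above. The bone formula and the 0.2/0.75 thresholds come from the abstract; the assignment of air below 0.2 and soft tissue above 0.75 is a reconstruction of inequality signs lost in the text and should be treated as an assumption.

        import numpy as np

        def zte_to_pseudo_ct(zte):
            # zte: bias-corrected, normalized ZTE image
            hu = np.empty_like(zte, dtype=float)
            air = zte < 0.2
            tissue = zte > 0.75
            bone = ~air & ~tissue                      # 0.2 <= ZTE <= 0.75
            hu[air] = -1000.0
            hu[tissue] = 42.0
            hu[bone] = -2000.0 * (zte[bone] - 1.0) + 42.0
            return hu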

  9. Assessing atmospheric bias correction for dynamical consistency using potential vorticity

    International Nuclear Information System (INIS)

    Rocheta, Eytan; Sharma, Ashish; Evans, Jason P

    2014-01-01

    Correcting biases in atmospheric variables prior to impact studies or dynamical downscaling can lead to new biases as dynamical consistency between the ‘corrected’ fields is not maintained. Use of these bias corrected fields for subsequent impact studies and dynamical downscaling provides input conditions that do not appropriately represent intervariable relationships in atmospheric fields. Here we investigate the consequences of the lack of dynamical consistency in bias correction using a measure of model consistency—the potential vorticity (PV). This paper presents an assessment of the biases present in PV using two alternative correction techniques—an approach where bias correction is performed individually on each atmospheric variable, thereby ignoring the physical relationships that exists between the multiple variables that are corrected, and a second approach where bias correction is performed directly on the PV field, thereby keeping the system dynamically coherent throughout the correction process. In this paper we show that bias correcting variables independently results in increased errors above the tropopause in the mean and standard deviation of the PV field, which are improved when using the alternative proposed. Furthermore, patterns of spatial variability are improved over nearly all vertical levels when applying the alternative approach. Results point to a need for a dynamically consistent atmospheric bias correction technique which results in fields that can be used as dynamically consistent lateral boundaries in follow-up downscaling applications. (letter)

  10. On the Design of Error-Correcting Ciphers

    Directory of Open Access Journals (Sweden)

    Mathur Chetan Nanjunda

    2006-01-01

    Full Text Available Securing transmission over a wireless network is especially challenging, not only because of the inherently insecure nature of the medium, but also because of the highly error-prone nature of the wireless environment. In this paper, we take a joint encryption-error correction approach to ensure secure and robust communication over the wireless link. In particular, we design an error-correcting cipher (called the high diffusion cipher) and prove bounds on its error-correcting capacity as well as its security. Towards this end, we propose a new class of error-correcting codes (HD-codes) with built-in security features that we use in the diffusion layer of the proposed cipher. We construct an example, 128-bit cipher using the HD-codes, and compare it experimentally with two traditional concatenated systems: (a) AES (Rijndael) followed by Reed-Solomon codes, (b) Rijndael followed by convolutional codes. We show that the HD-cipher is as resistant to linear and differential cryptanalysis as the Rijndael. We also show that any chosen plaintext attack that can be performed on the HD cipher can be transformed into a chosen plaintext attack on the Rijndael cipher. In terms of error correction capacity, the traditional systems using Reed-Solomon codes are comparable to the proposed joint error-correcting cipher and those that use convolutional codes require more data expansion in order to achieve similar error correction as the HD-cipher. The original contributions of this work are (1) design of a new joint error-correction-encryption system, (2) design of a new class of algebraic codes with built-in security criteria, called the high diffusion codes (HD-codes) for use in the HD-cipher, (3) mathematical properties of these codes, (4) methods for construction of the codes, (5) bounds on the error-correcting capacity of the HD-cipher, (6) mathematical derivation of the bound on resistance of HD cipher to linear and differential cryptanalysis, (7) experimental comparison

  11. Centrality Dependence of Hadron Multiplicities in Nuclear Collisions in the Dual Parton Model

    CERN Document Server

    Capella, A

    2001-01-01

    We show that, even in purely soft processes, the hadronic multiplicity in nucleus-nucleus interactions contains a term that scales with the number of binary collisions. In the absence of shadowing corrections, this term dominates at mid rapidities and high energies. Shadowing corrections are calculated as a function of impact parameter and the centrality dependence of mid-rapidity multiplicities is determined. The multiplicity per participant increases with centrality with a rate that increases between SPS and RHIC energies, in agreement with experiment.

  12. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat error in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently, Chakraborty proposed a simple technique called the packet combining scheme, in which error is corrected at the receiver from the erroneous copies. The Packet Combining (PC) scheme fails: (i) when bit error locations in erroneous copies are the same and (ii) when multiple bit errors occur. Both of these have been addressed recently by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported which, in combination with PRPC, offer higher throughput. (author)
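
    The packet combining idea referred to above can be illustrated in a few lines of Python: XORing two erroneous copies marks the bit positions where they disagree, and candidate corrections are tested against an integrity check. This generic sketch is for illustration only and is not the PRPC/MPC or forecasting scheme of the letter.

        def candidate_error_positions(copy_a: bytes, copy_b: bytes):
            # Set bits of the XOR mark positions where the two copies disagree.
            xored = bytes(a ^ b for a, b in zip(copy_a, copy_b))
            return [i * 8 + bit
                    for i, byte in enumerate(xored)
                    for bit in range(8) if (byte >> (7 - bit)) & 1]

        def try_corrections(copy_a: bytes, positions, integrity_ok):
            # Flip each candidate bit in turn; keep the first version that passes
            # the caller-supplied integrity check (e.g. a CRC).
            data = bytearray(copy_a)
            for pos in positions:
                data[pos // 8] ^= 1 << (7 - pos % 8)
                if integrity_ok(bytes(data)):
                    return bytes(data)
                data[pos // 8] ^= 1 << (7 - pos % 8)  # undo and try the next position
            return None

    This also makes the failure mode named in the abstract visible: if both copies are corrupted at the same bit position, the XOR is zero there and that error location is never proposed.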

  13. Ordinal Welfare Comparisons with Multiple Discrete Indicators

    DEFF Research Database (Denmark)

    Arndt, Channing; Distante, Roberta; Hussain, M. Azhar

    We develop an ordinal method for making welfare comparisons between populations with multidimensional discrete well-being indicators observed at the micro level. The approach assumes that, for each well-being indicator, the levels can be ranked from worse to better; however, no assumptions are made...

  14. Analysing and Correcting the Differences between Multi-Source and Multi-Scale Spatial Remote Sensing Observations

    Science.gov (United States)

    Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun

    2014-01-01

    Differences exist among analysis results of agriculture monitoring and crop production based on remote sensing observations, which are obtained at different spatial scales from multiple remote sensors in the same time period and processed by the same algorithms, models or methods. These differences can be quantitatively described mainly from three aspects, i.e. multiple remote sensing observations, crop parameter estimation models, and spatial scale effects of surface parameters. Our research proposes a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide a reference for further studies in agricultural applications with multiple remotely sensed observations from different sources. The new method was constructed on the basis of the physical and mathematical properties of multi-source and multi-scale reflectance datasets. Statistical theory was used to extract the statistical characteristics of the multiple surface reflectance datasets and to quantitatively analyse the spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at the small spatial scale as the baseline data, Gaussian distribution theory was applied to correct the multiple surface reflectance datasets on the basis of the physical characteristics, mathematical distribution properties, and spatial variations obtained above. The proposed method was verified with two sets of multiple satellite images, which were obtained in two experimental fields located in Inner Mongolia and Beijing, China, with different degrees of homogeneity of the underlying surfaces. Experimental results indicate that differences among surface reflectance datasets at multiple spatial scales could be effectively corrected over non-homogeneous underlying surfaces, which provides a database for further multi-source and multi-scale crop growth monitoring and yield prediction, and their corresponding
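
    As a simplified stand-in for the Gaussian-statistics correction described above (not the authors' full procedure), one dataset can be linearly rescaled so that its mean and standard deviation match those of the baseline, small-spatial-scale reflectance:

        import numpy as np

        def match_to_baseline(reflectance, baseline):
            # Linear rescaling of one surface reflectance dataset so that its
            # first two moments match those of the baseline dataset.
            mu_r, sd_r = reflectance.mean(), reflectance.std()
            mu_b, sd_b = baseline.mean(), baseline.std()
            return (reflectance - mu_r) * (sd_b / sd_r) + mu_b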

  15. Non-common path aberration correction in an adaptive optics scanning ophthalmoscope.

    Science.gov (United States)

    Sulai, Yusufu N; Dubra, Alfredo

    2014-09-01

    The correction of non-common path aberrations (NCPAs) between the imaging and wavefront sensing channel in a confocal scanning adaptive optics ophthalmoscope is demonstrated. NCPA correction is achieved by maximizing an image sharpness metric while the confocal detection aperture is temporarily removed, effectively minimizing the monochromatic aberrations in the illumination path of the imaging channel. Comparison of NCPA estimated using zonal and modal orthogonal wavefront corrector bases provided wavefronts that differ by ~λ/20 in root-mean-squared (~λ/30 standard deviation). Sequential insertion of a cylindrical lens in the illumination and light collection paths of the imaging channel was used to compare image resolution after changing the wavefront correction to maximize image sharpness and intensity metrics. Finally, the NCPA correction was incorporated into the closed-loop adaptive optics control by biasing the wavefront sensor signals without reducing its bandwidth.

  16. gsSKAT: Rapid gene set analysis and multiple testing correction for rare-variant association studies using weighted linear kernels.

    Science.gov (United States)

    Larson, Nicholas B; McDonnell, Shannon; Cannon Albright, Lisa; Teerlink, Craig; Stanford, Janet; Ostrander, Elaine A; Isaacs, William B; Xu, Jianfeng; Cooney, Kathleen A; Lange, Ethan; Schleutker, Johanna; Carpten, John D; Powell, Isaac; Bailey-Wilson, Joan E; Cussenot, Olivier; Cancel-Tassin, Geraldine; Giles, Graham G; MacInnis, Robert J; Maier, Christiane; Whittemore, Alice S; Hsieh, Chih-Lin; Wiklund, Fredrik; Catalona, William J; Foulkes, William; Mandal, Diptasri; Eeles, Rosalind; Kote-Jarai, Zsofia; Ackerman, Michael J; Olson, Timothy M; Klein, Christopher J; Thibodeau, Stephen N; Schaid, Daniel J

    2017-05-01

    Next-generation sequencing technologies have afforded unprecedented characterization of low-frequency and rare genetic variation. Due to low power for single-variant testing, aggregative methods are commonly used to combine observed rare variation within a single gene. Causal variation may also aggregate across multiple genes within relevant biomolecular pathways. Kernel-machine regression and adaptive testing methods for aggregative rare-variant association testing have been demonstrated to be powerful approaches for pathway-level analysis, although these methods tend to be computationally intensive at high-variant dimensionality and require access to complete data. An additional analytical issue in scans of large pathway definition sets is multiple testing correction. Gene set definitions may exhibit substantial genic overlap, and the impact of the resultant correlation in test statistics on Type I error rate control for large agnostic gene set scans has not been fully explored. Herein, we first outline a statistical strategy for aggregative rare-variant analysis using component gene-level linear kernel score test summary statistics as well as derive simple estimators of the effective number of tests for family-wise error rate control. We then conduct extensive simulation studies to characterize the behavior of our approach relative to direct application of kernel and adaptive methods under a variety of conditions. We also apply our method to two case-control studies, respectively, evaluating rare variation in hereditary prostate cancer and schizophrenia. Finally, we provide open-source R code for public use to facilitate easy application of our methods to existing rare-variant analysis results. © 2017 WILEY PERIODICALS, INC.
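
    The effective-number-of-tests idea from the abstract can be sketched as follows; the eigenvalue-based estimator below (in the style of Li and Ji) is a commonly used choice and an assumption here, not necessarily the estimator derived in the paper.

        import numpy as np

        def effective_number_of_tests(corr):
            # corr: correlation matrix of the gene-set test statistics
            eigvals = np.clip(np.linalg.eigvalsh(corr), 0.0, None)
            return float(np.sum((eigvals >= 1.0) + (eigvals - np.floor(eigvals))))

        def fwer_threshold(alpha, corr):
            # Bonferroni-style family-wise threshold using the effective number of tests
            return alpha / effective_number_of_tests(corr)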

  17. Inflation via logarithmic entropy-corrected holographic dark energy model

    Energy Technology Data Exchange (ETDEWEB)

    Darabi, F.; Felegary, F. [Azarbaijan Shahid Madani University, Department of Physics, Tabriz (Iran, Islamic Republic of); Setare, M.R. [University of Kurdistan, Department of Science, Bijar (Iran, Islamic Republic of)

    2016-12-15

    We study inflation in terms of the logarithmic entropy-corrected holographic dark energy (LECHDE) model with future event horizon, particle horizon, and Hubble horizon cut-offs, and we compare the results with those obtained in the study of inflation by the holographic dark energy (HDE) model. In comparison, the primordial scalar power spectrum in the LECHDE model becomes redder than the spectrum in the HDE model. Moreover, consistency with the observational data in the LECHDE model of inflation constrains the reheating temperature and the Hubble parameter through one parameter of holographic dark energy and two new parameters of the logarithmic corrections. (orig.)

  18. Inflation via logarithmic entropy-corrected holographic dark energy model

    International Nuclear Information System (INIS)

    Darabi, F.; Felegary, F.; Setare, M.R.

    2016-01-01

    We study inflation in terms of the logarithmic entropy-corrected holographic dark energy (LECHDE) model with future event horizon, particle horizon, and Hubble horizon cut-offs, and we compare the results with those obtained in the study of inflation by the holographic dark energy (HDE) model. In comparison, the primordial scalar power spectrum in the LECHDE model becomes redder than the spectrum in the HDE model. Moreover, consistency with the observational data in the LECHDE model of inflation constrains the reheating temperature and the Hubble parameter through one parameter of holographic dark energy and two new parameters of the logarithmic corrections. (orig.)

  19. SORM correction of FORM results for the FBC load combination problem

    DEFF Research Database (Denmark)

    Ditlevsen, Ove

    2005-01-01

    The old stochastic load combination model of Ferry Borges and Castanheta and the corresponding extreme random load effect value are considered. The evaluation of the distribution function of the extreme value by use of a particular first order reliability method was first described in a celebrated...... calculations. The calculation gives a limit state curvature correction factor on the probability approximation obtained by the RF algorithm. This correction factor is based on Breitung’s celebrated asymptotic formula. Example calculations with comparisons with exact results show an impressive accuracy...

  20. What about False Insights? Deconstructing the Aha! Experience along Its Multiple Dimensions for Correct and Incorrect Solutions Separately

    Science.gov (United States)

    Danek, Amory H.; Wiley, Jennifer

    2017-01-01

    The subjective Aha! experience that problem solvers often report when they find a solution has been taken as a marker for insight. If Aha! is closely linked to insightful solution processes, then theoretically, an Aha! should only be experienced when the correct solution is found. However, little work has explored whether the Aha! experience can also accompany incorrect solutions (“false insights”). Similarly, although the Aha! experience is not a unitary construct, little work has explored the different dimensions that have been proposed as its constituents. To address these gaps in the literature, 70 participants were presented with a set of difficult problems (37 magic tricks), and rated each of their solutions for Aha! as well as with regard to Suddenness in the emergence of the solution, Certainty of being correct, Surprise, Pleasure, Relief, and Drive. Solution times were also used as predictors for the Aha! experience. This study reports three main findings: First, false insights exist. Second, the Aha! experience is multidimensional and consists of the key components Pleasure, Suddenness and Certainty. Third, although Aha! experiences for correct and incorrect solutions share these three common dimensions, they are also experienced differently with regard to magnitude and quality, with correct solutions emerging faster, leading to stronger Aha! experiences, and higher ratings of Pleasure, Suddenness, and Certainty. Solution correctness proffered a slightly different emotional coloring to the Aha! experience, with the additional perception of Relief for correct solutions, and Surprise for incorrect ones. These results cast some doubt on the assumption that the occurrence of an Aha! experience can serve as a definitive signal that a true insight has taken place. On the other hand, the quantitative and qualitative differences in the experience of correct and incorrect solutions demonstrate that the Aha! experience is not a mere epiphenomenon. Strong Aha

  1. Towards self-correcting quantum memories

    Science.gov (United States)

    Michnicki, Kamil

    This thesis presents a model of self-correcting quantum memories where quantum states are encoded using topological stabilizer codes and error correction is done using local measurements and local dynamics. Quantum noise poses a practical barrier to developing quantum memories. This thesis explores two types of models for suppressing noise. One model suppresses thermalizing noise energetically by engineering a Hamiltonian with a high energy barrier between code states. Thermalizing dynamics are modeled phenomenologically as a Markovian quantum master equation with only local generators. The second model suppresses stochastic noise with a cellular automaton that performs error correction using syndrome measurements and a local update rule. Several ways of visualizing and thinking about stabilizer codes are presented in order to design ones that have a high energy barrier: the non-local Ising model, the quasi-particle graph and the theory of welded stabilizer codes. I develop the theory of welded stabilizer codes and use it to construct a code with the highest known energy barrier in 3-d for spin Hamiltonians: the welded solid code. Although the welded solid code is not fully self-correcting, it has some self-correcting properties. It has an increased memory lifetime for an increased system size up to a temperature-dependent maximum. One strategy for increasing the energy barrier is by mediating an interaction with an external system. I prove a no-go theorem for a class of Hamiltonians where the interaction terms are local, of bounded strength, and commute with the stabilizer group. Under these conditions the energy barrier can only be increased by a multiplicative constant. I develop a cellular automaton to perform error correction on a state encoded using the toric code. The numerical evidence indicates that while there is no threshold, the model can extend the memory lifetime significantly. While of less theoretical importance, this could be practical for real

  2. An Automated Baseline Correction Method Based on Iterative Morphological Operations.

    Science.gov (United States)

    Chen, Yunliang; Dai, Liankui

    2018-05-01

    Raman spectra usually suffer from baseline drift caused by fluorescence or other reasons. Therefore, baseline correction is a necessary and crucial step that must be performed before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method can adaptively determine the structuring element first and then gradually remove the spectral peaks during iteration to get an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible for handling different kinds of baselines in various practical situations. The comparison of the proposed method with some state-of-the-art baseline correction methods demonstrates its advantages over the existing methods in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method can hopefully be applied to the baseline correction of other analytical instrumental signals, such as IR spectra and chromatograms.
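
    A minimal Python sketch of baseline estimation by morphological opening, in the spirit of the method above. The paper determines the structuring element adaptively; here the window is simply grown until the opening stabilizes, which is an assumption made for brevity.

        import numpy as np
        from scipy.ndimage import grey_opening

        def estimate_baseline(spectrum, max_half_width=200, tol=1e-3):
            spectrum = np.asarray(spectrum, dtype=float)
            baseline = spectrum.copy()
            prev = None
            for half_width in range(1, max_half_width + 1):
                opened = grey_opening(spectrum, size=2 * half_width + 1)
                # Stop once enlarging the structuring element no longer changes the estimate.
                if prev is not None and np.max(np.abs(opened - prev)) < tol * np.ptp(spectrum):
                    return opened
                prev = opened
                baseline = opened
            return baseline

        # corrected = spectrum - estimate_baseline(spectrum)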

  3. Correcting for catchment area nonresidency in studies based on tumor-registry data

    International Nuclear Information System (INIS)

    Sposto, R.; Preston, D.L.

    1993-05-01

    We discuss the effect of catchment area nonresidency on estimates of cancer incidence from a tumor-registry-based cohort study and demonstrate that a relatively simple correction is possible in the context of Poisson regression analysis if individual residency histories or the probabilities of residency are known. A comparison of a complete data maximum likelihood analysis with several Poisson regression analyses demonstrates the adequacy of the simple correction in a large simulated data set. We compare analyses of stomach-cancer incidence from the Radiation Effects Research Foundation tumor registry with and without the correction. We also discuss some implications of including cases identified only on the basis of death certificates. (author)

  4. Corrections to the Eckhaus' stability criterion for one-dimensional stationary structures

    Science.gov (United States)

    Malomed, B. A.; Staroselsky, I. E.; Konstantinov, A. B.

    1989-01-01

    Two amendments to the well-known Eckhaus stability criterion for small-amplitude non-linear structures generated by weak instability of a spatially uniform state of a non-equilibrium one-dimensional system against small perturbations with finite wavelengths are obtained. Firstly, we evaluate small corrections to the main Eckhaus term which, in contrast to that term, do not have a universal form. Comparison of those non-universal corrections with experimental or numerical results makes it possible to select a more relevant form of an effective nonlinear evolution equation. In particular, the comparison with such results for convective rolls and Taylor vortices gives arguments in favor of the Swift-Hohenberg equation. Secondly, we derive an analog of the Eckhaus criterion for systems that are degenerate in the sense that, in an expansion of their non-linear parts in powers of dynamical variables, the second and third degree terms are absent.

  5. Neural network scatter correction technique for digital radiography

    International Nuclear Information System (INIS)

    Boone, J.M.

    1990-01-01

    This paper presents a scatter correction technique based on artificial neural networks. The technique utilizes the acquisition of a conventional digital radiographic image, coupled with the acquisition of a multiple pencil beam (micro-aperture) digital image. Image subtraction results in a sparsely sampled estimate of the scatter component in the image. The neural network is trained to develop a causal relationship between image data on the low-pass filtered open field image and the sparsely sampled scatter image, and then the trained network is used to correct the entire image (pixel by pixel) in a manner which is operationally similar to but potentially more powerful than convolution. The technique is described and is illustrated using clinical primary component images combined with scatter component images that are realistically simulated using the results from previously reported Monte Carlo investigations. The results indicate that an accurate scatter correction can be realized using this technique
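
    A hedged sketch of the idea: a small regressor is fitted from low-pass-filtered open-field image values (plus pixel coordinates) to the sparsely sampled scatter estimates, then evaluated pixel by pixel. The feature set, network size, and use of scikit-learn's MLPRegressor are illustrative assumptions, not the paper's configuration.

        import numpy as np
        from scipy.ndimage import gaussian_filter
        from sklearn.neural_network import MLPRegressor

        def train_scatter_model(image, scatter_samples, sample_rows, sample_cols, sigma=8):
            lowpass = gaussian_filter(image.astype(float), sigma)
            feats = np.column_stack([lowpass[sample_rows, sample_cols],
                                     sample_rows, sample_cols])
            model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000)
            model.fit(feats, scatter_samples)
            return model, lowpass

        def correct_scatter(image, model, lowpass):
            rows, cols = np.indices(image.shape)
            feats = np.column_stack([lowpass.ravel(), rows.ravel(), cols.ravel()])
            scatter = model.predict(feats).reshape(image.shape)
            return image - scatter   # estimate of the primary (scatter-free) image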

  6. Comparison of a Ring On-Chip Network and a Code-Division Multiple-Access On-Chip Network

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2007-01-01

    Full Text Available Two network-on-chip (NoC) designs are examined and compared in this paper. One design applies a bidirectional ring connection scheme, while the other design applies a code-division multiple-access (CDMA) connection scheme. Both of the designs apply globally asynchronous locally synchronous (GALS) scheme in order to deal with the issue of transferring data in a multiple-clock-domain environment of an on-chip system. The two NoC designs are compared with each other by their network structures, data transfer principles, network node structures, and their asynchronous designs. Both the synchronous and the asynchronous designs of the two on-chip networks are realized using a hardware-description language (HDL) in order to make the entire designs suit the commonly used synchronous design tools and flow. The performance estimation and comparison of the two NoC designs which are based on the HDL realizations are addressed. By comparing the two NoC designs, the advantages and disadvantages of applying direct connection and CDMA connection schemes in an on-chip communication network are discussed.

  7. Optimizing signal intensity correction during evaluation of hepatic parenchymal enhancement on gadoxetate disodium-enhanced MRI: Comparison of three methods

    International Nuclear Information System (INIS)

    Onoda, Minori; Hyodo, Tomoko; Murakami, Takamichi; Okada, Masahiro; Uto, Tatsuro; Hori, Masatoshi; Miyati, Tosiaki

    2015-01-01

    Highlights: •Signal intensity is often used to evaluate hepatic enhancement with Gd-EOB-DTPA in the hepatobiliary phase. •Comparison of uncorrected signal intensity with T 1 value revealed signal intensity instability. •Measurement of uncorrected liver SI or SNR often yields erroneous results on late-phase gadoxetate MRI due to shimming and other optimization techniques. •Signal intensity corrected by scale and rescale slope from DICOM data gave comparable results. -- Abstract: Objective: To compare signal intensity (SI) correction using scale and rescale slopes with SI correction using SIs of spleen and muscle for quantifying multiphase hepatic contrast enhancement with Gd-EOB-DTPA by assessing their correlation with T 1 values generated from Look-Locker turbo-field-echo (LL-TFE) sequence data (ER-T 1 ). Materials and methods: Thirty patients underwent Gd-EOB-DTPA-enhanced magnetic resonance imaging (MRI) in this prospective clinical study. For each patient, breath-hold T 1 -weighted fat-suppressed three-dimensional (3D) gradient echo sequences (e-THRIVE) were acquired before and 2 (first phase), 10 (second phase), and 20 min (third phase) after intravenous Gd-EOB-DTPA. Look-Locker turbo-field-echo (LL-TFE) sequences were acquired before and 1.5 (first phase), 8 (second phase), and 18 min (third phase) postcontrast. The liver parenchyma enhancement ratios (ER) of each phase were calculated using the SI from e-THRIVE sequences (ER-SI) and the T 1 values generated from LL-TFE sequence data (ER-T 1 ) respectively. ER-SIs were calculated in three ways: (1) comparing with splenic SI (ER-SI-s), (2) comparing with muscle SI (ER-SI-m), (3) using scale and rescale slopes obtained from DICOM headers (ER-SI-c), to eliminate the effects of receiver gain and scaling. For each of the first, second and third phases, correlation and agreement were assessed between each ER-SI and ER-T 1 . Results: In the first phase, all ER-SIs correlated weakly with ER-T 1 . In the second

  8. Color correction for chromatic distortion in a multi-wavelength digital holographic system

    International Nuclear Information System (INIS)

    Lin, Li-Chien; Huang, Yi-Lun; Tu, Han-Yen; Lai, Xin-Ji; Cheng, Chau-Jern

    2011-01-01

    A multi-wavelength digital holographic (MWDH) system has been developed to record and reconstruct color images. In comparison to working with digital cameras, however, high-quality color reproduction is difficult to achieve, because of the imperfections from the light sources, optical components, optical recording devices and recording processes. Thus, we face the problem of correcting the colors altered during the digital holographic process. We therefore propose a color correction scheme to correct the chromatic distortion caused by the MWDH system. The scheme consists of two steps: (1) creating a color correction profile and (2) applying it to the correction of the distorted colors. To create the color correction profile, we generate two algorithms: the sequential algorithm and the integrated algorithm. The ColorChecker is used to generate the distorted colors and their desired corrected colors. The relationship between these two color patches is fixed into a specific mathematical model, the parameters of which are estimated, creating the profile. Next, the profile is used to correct the color distortion of images, capturing and preserving the original vibrancy of the reproduced colors for different reconstructed images

  9. Comparison of aerodynamic models for Vertical Axis Wind Turbines

    International Nuclear Information System (INIS)

    Ferreira, C Simão; Madsen, H Aagaard; Barone, M; Roscher, B; Deglaire, P; Arduin, I

    2014-01-01

    Multi-megawatt Vertical Axis Wind Turbines (VAWTs) are experiencing an increased interest for floating offshore applications. However, VAWT development is hindered by the lack of fast, accurate and validated simulation models. This work compares six different numerical models for VAWTs: a multiple streamtube model, a double-multiple streamtube model, the actuator cylinder model, a 2D potential flow panel model, a 3D unsteady lifting line model, and a 2D conformal mapping unsteady vortex model. The comparison covers rotor configurations with two NACA0015 blades, for several tip speed ratios, rotor solidities and fixed pitch angles, including heavily loaded rotors, in inviscid flow. The results show that the streamtube models are inaccurate, and that correct predictions of rotor power and rotor thrust are an effect of error cancellation which only occurs at specific configurations. The other four models, which explicitly model the wake as a system of vorticity, show mostly differences due to the instantaneous or time averaged formulation of the loading and flow, for which further research is needed

  10. Comparison of aerodynamic models for Vertical Axis Wind Turbines

    Science.gov (United States)

    Simão Ferreira, C.; Aagaard Madsen, H.; Barone, M.; Roscher, B.; Deglaire, P.; Arduin, I.

    2014-06-01

    Multi-megawatt Vertical Axis Wind Turbines (VAWTs) are experiencing an increased interest for floating offshore applications. However, VAWT development is hindered by the lack of fast, accurate and validated simulation models. This work compares six different numerical models for VAWTs: a multiple streamtube model, a double-multiple streamtube model, the actuator cylinder model, a 2D potential flow panel model, a 3D unsteady lifting line model, and a 2D conformal mapping unsteady vortex model. The comparison covers rotor configurations with two NACA0015 blades, for several tip speed ratios, rotor solidities and fixed pitch angles, including heavily loaded rotors, in inviscid flow. The results show that the streamtube models are inaccurate, and that correct predictions of rotor power and rotor thrust are an effect of error cancellation which only occurs at specific configurations. The other four models, which explicitly model the wake as a system of vorticity, show mostly differences due to the instantaneous or time averaged formulation of the loading and flow, for which further research is needed.

  11. Multiple Intelligences Profiles of Children with Attention Deficit and Hyperactivity Disorder in Comparison with Nonattention Deficit and Hyperactivity Disorder

    OpenAIRE

    Najafi, Mostafa; Akouchekian, Shahla; Ghaderi, Alireza; Mahaki, Behzad; Rezaei, Mariam

    2017-01-01

    Background: Attention deficit and hyperactivity disorder (ADHD) is a common psychological problem during childhood. This study aimed to evaluate the multiple intelligences profiles of children with ADHD in comparison with non-ADHD children. Materials and Methods: This cross-sectional descriptive analytical study was done on 50 children aged 6–13 years, in two groups with and without ADHD. Children with ADHD were referred to Clinics of Child and Adolescent Psychiatry, Isfahan University of Medical Scie...

  12. Active neutron multiplicity analysis and Monte Carlo calculations

    International Nuclear Information System (INIS)

    Krick, M.S.; Ensslin, N.; Langner, D.G.; Miller, M.C.; Siebelist, R.; Stewart, J.E.; Ceo, R.N.; May, P.K.; Collins, L.L. Jr

    1994-01-01

    Active neutron multiplicity measurements of high-enrichment uranium metal and oxide samples have been made at Los Alamos and Y-12. The data from the measurements of standards at Los Alamos were analyzed to obtain values for neutron multiplication and source-sample coupling. These results are compared to equivalent results obtained from Monte Carlo calculations. An approximate relationship between coupling and multiplication is derived and used to correct doubles rates for multiplication and coupling. The utility of singles counting for uranium samples is also examined

  13. Distribution load forecast with interactive correction of horizon loads

    International Nuclear Information System (INIS)

    Glamochanin, V.; Andonov, D.; Gagovski, I.

    1994-01-01

    This paper presents an interactive distribution load forecast application that performs the distribution load forecast with interactive correction of horizon loads. It consists of two major parts implemented in Fortran and Visual Basic. The Fortran part is used for the forecast computations. It consists of two methods: Load Transfer Coupling Curve Fitting (LTCCF) and Load Forecast Using Curve Shape Clustering (FUCSC). LTCCF is used to 'correct' data contaminated by load transfer among neighboring distribution areas. FUCSC uses curve shape clustering to forecast the distribution loads of small areas. The forecast for each small area is achieved by using the shape of the corresponding cluster curve. The comparison of the forecasted loads of an area with historical data is used as a tool for the correction of the estimated horizon load. The Visual Basic part is used to provide a flexible, interactive, user-friendly environment. (author). 5 refs., 3 figs

  14. MZDASoft: a software architecture that enables large-scale comparison of protein expression levels over multiple samples based on liquid chromatography/tandem mass spectrometry.

    Science.gov (United States)

    Ghanat Bari, Mehrab; Ramirez, Nelson; Wang, Zhiwei; Zhang, Jianqiu Michelle

    2015-10-15

    enables large-scale comparison of protein expression levels over multiple samples with much larger protein comparison coverage and better quantification accuracy. It is an efficient implementation based on parallel processing which can be used to process large amounts of data. Copyright © 2015 John Wiley & Sons, Ltd.

  15. Passive neutron-multiplication measurements

    International Nuclear Information System (INIS)

    Zolnay, A.S.; Barnett, C.S.; Spracklen, H.P.

    1982-01-01

    We have developed an instrument to measure neutron multiplication by statistical analysis of the timing of neutrons emitted from fissionable material. This instrument is capable of repeated analysis of the same recorded data with selected algorithms, graphical displays showing statistical properties of the data, and preservation of raw data on disk for future comparisons. In our measurements we have made a comparison of the covariance to mean and Feynman variance to mean analysis algorithms to show that the covariance avoids a bias term and measures directly the effect due to the presence of neutron chains. A spherical assembly of enriched uranium shells and acrylic resin reflector/moderator components used for the measurements is described. Preliminary experimental results of the Feynman variance to mean measurements show the expected correlation with assembly multiplication
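
    The Feynman variance-to-mean analysis mentioned above reduces, in its simplest form, to the excess variance-to-mean ratio of counts collected in equal time gates. A minimal sketch (illustrative only):

        import numpy as np

        def feynman_y(gate_counts):
            # Y = 0 for a Poisson (uncorrelated) source; Y > 0 indicates correlated
            # fission chains and hence neutron multiplication.
            counts = np.asarray(gate_counts, dtype=float)
            return counts.var(ddof=1) / counts.mean() - 1.0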

  16. Radiation Therapy - Multiple Languages

    Science.gov (United States)

    MedlinePlus topic page listing radiation therapy patient education materials available in multiple languages (including Vietnamese, Tiếng Việt), provided through Health Information Translations.

  17. On the Atmospheric Correction of Antarctic Airborne Hyperspectral Data

    Directory of Open Access Journals (Sweden)

    Martin Black

    2014-05-01

    Full Text Available The first airborne hyperspectral campaign in the Antarctic Peninsula region was carried out by the British Antarctic Survey and partners in February 2011. This paper presents an insight into the applicability of currently available radiative transfer modelling and atmospheric correction techniques for processing airborne hyperspectral data in this unique coastal Antarctic environment. Results from the Atmospheric and Topographic Correction version 4 (ATCOR-4) package reveal absolute reflectance values somewhat in line with laboratory measured spectra, with Root Mean Square Error (RMSE) values of 5% in the visible near infrared (0.4–1 µm) and 8% in the shortwave infrared (1–2.5 µm). Residual noise remains present due to the absorption by atmospheric gases and aerosols, but certain parts of the spectrum match laboratory measured features very well. This study demonstrates that commercially available packages for carrying out atmospheric correction are capable of correcting airborne hyperspectral data in the challenging environment present in Antarctica. However, it is anticipated that future results from atmospheric correction could be improved by measuring in situ atmospheric data to generate atmospheric profiles and aerosol models, or with the use of multiple ground targets for calibration and validation.

  18. Follow-up of CT-derived airway wall thickness: Correcting for changes in inspiration level improves reliability

    Energy Technology Data Exchange (ETDEWEB)

    Pompe, Esther, E-mail: e.pompe@umcutrecht.nl [Department of Respiratory Medicine, University Medical Center Utrecht, Utrecht (Netherlands); Rikxoort, Eva M. van [Department of Radiology, Radboud University Medical Center, Nijmegen (Netherlands); Mets, Onno M. [Department of Radiology, University Medical Center Utrecht, Utrecht (Netherlands); Charbonnier, Jean-Paul [Department of Radiology, Radboud University Medical Center, Nijmegen (Netherlands); Kuhnigk, Jan-Martin [Institute for Medical Image Computing, Fraunhofer MEVIS, Bremen (Germany); Koning, Harry J. de [Department of Public Health, Erasmus Medical Center, Rotterdam (Netherlands); Oudkerk, Matthijs [University of Groningen, University Medical Center Groningen, Groningen, Department of Radiology (Netherlands); Vliegenthart, Rozemarijn [University of Groningen, University Medical Center Groningen, Groningen, Department of Radiology (Netherlands); University of Groningen, University Medical Center Groningen, Center for Medical Imaging-North East Netherlands, Groningen (Netherlands); Zanen, Pieter; Lammers, Jan-Willem J. [Department of Respiratory Medicine, University Medical Center Utrecht, Utrecht (Netherlands); Ginneken, Bram van [Department of Radiology, Radboud University Medical Center, Nijmegen (Netherlands); Jong, Pim A. de; Mohamed Hoesein, Firdaus A.A. [Department of Radiology, University Medical Center Utrecht, Utrecht (Netherlands)

    2016-11-15

    Objectives: Airway wall thickness (AWT) is affected by changes in lung volume. This study evaluated whether correcting AWT on computed tomography (CT) for differences in inspiration level improves measurement agreement, reliability, and power to detect changes over time. Methods: Participants of the Dutch-Belgian lung cancer screening trial who underwent 3-month repeat CT for an indeterminate pulmonary nodule were included. AWT on CT was calculated by the square root of the wall area at a theoretical airway with an internal perimeter of 10 mm (Pi10). The scan with the highest lung volume was labelled as the reference scan and the scan with the lowest lung volume was labelled as the comparison scan. Pi10 derived from the comparison scan was corrected by multiplying it with the ratio of CT lung volume of the comparison scan to CT lung volume on the reference scan. Agreement of uncorrected and corrected Pi10 was studied with the Bland-Altman method, reliability with intra-class correlation coefficients (ICC), and power to detect changes over time was calculated. Results: 315 male participants were included. Limit of agreement and reliability for Pi10 was −0.61 to 0.57 mm (ICC = 0.87), which improved to −0.38 to 0.37 mm (ICC = 0.94) after correction for inspiration level. To detect a 15% change over 3 months, 71 subjects are needed for Pi10 and 26 subjects for Pi10 adjusted for inspiration level. Conclusions: Correcting Pi10 for differences in inspiration level improves reliability, agreement, and power to detect changes over time.
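
    The volume correction described in the abstract is a single multiplication; a short sketch with hypothetical numbers in the usage line:

        def correct_pi10_for_inspiration(pi10_comparison, volume_comparison, volume_reference):
            # Pi10 from the lower-volume (comparison) scan is multiplied by the ratio of
            # its CT lung volume to that of the reference (highest-volume) scan.
            return pi10_comparison * (volume_comparison / volume_reference)

        # e.g. correct_pi10_for_inspiration(3.8, 5.2, 6.0) adjusts Pi10 to the reference inspiration level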

  19. Evaluation and Comparison of the Position of the Apical Constriction in Single-root and Multiple-root Teeth

    Directory of Open Access Journals (Sweden)

    Alireza Farhad

    2017-12-01

    Full Text Available Introduction: Precise knowledge of the location of the apical constriction is essential to root canal treatment and long-term prognosis. Considering the differences in the apical constriction and size of the roots in single- and multiple-root teeth in various races, examination and comparison of the location of the apical constriction in single-root and multiple-root teeth are of paramount importance. The present study aimed to measure and compare the distance of the apical constriction from the apical foramen and anatomical apex in single-root and multiple-root teeth. Materials and Methods: In this cross-sectional study, 60 roots of single-rooted teeth and 60 roots of multiple-rooted teeth were collected from patients referring to the health centers in Isfahan, Iran. After cleansing and disinfecting the surface of the roots, the surface of the teeth was washed with hypochlorite. Based on the direction of the apical foramen, a longitudinal cut was made in the same direction, and the roots were examined microscopically at a magnification of 25. Following that, the distance of the apical constriction from the apical foramen and anatomical apex was measured using a digital camera. In addition, the mean and standard deviation of the obtained distance values were determined. Distances in the single-root and multiple-root teeth were compared using the independent t-test at the significance level of 0.05. Results: Mean distance between the apical constriction and apical foramen was 0.86±0.33 mm in the single-root teeth and 0.072±0.27 mm in the multiple-root teeth. Mean distance between the apical constriction and anatomical apex was 1.14±0.36 mm in the single-root teeth and 1.03±0.36 mm in the multiple-root teeth. Moreover, the results of the independent t-test showed the distance of the apical constriction from the apical foramen to differ significantly between single-root and multiple-root teeth (P=0.013). However, the distance between the apical constriction

  20. ecco: An error correcting comparator theory.

    Science.gov (United States)

    Ghirlanda, Stefano

    2018-03-08

    Building on the work of Ralph Miller and coworkers (Miller and Matzel, 1988; Denniston et al., 2001; Stout and Miller, 2007), I propose a new formalization of the comparator hypothesis that seeks to overcome some shortcomings of existing formalizations. The new model, dubbed ecco for "Error-Correcting COmparisons," retains the comparator process and the learning of CS-CS associations based on contingency. ecco assumes, however, that learning of CS-US associations is driven by total error correction, as first introduced by Rescorla and Wagner (1972). I explore ecco's behavior in acquisition, compound conditioning, blocking, backward blocking, and unovershadowing. In these paradigms, ecco appears capable of avoiding the problems of current comparator models, such as the inability to solve some discriminations and some paradoxical effects of stimulus salience. At the same time, ecco exhibits the retrospective revaluation phenomena that are characteristic of comparator theory. Copyright © 2018 Elsevier B.V. All rights reserved.
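
    The total-error-correction rule that ecco borrows from Rescorla and Wagner (1972) for CS-US learning is easy to state in code; the sketch below shows only that classic update (with illustrative parameter values), not the CS-CS learning or the comparator stage of ecco itself.

        def rescorla_wagner_trial(V, present, lam, alpha=0.3, beta=1.0):
            # All stimuli present on a trial share one prediction error:
            # lam (US magnitude) minus the summed associative strengths.
            error = lam - sum(V[s] for s in present)
            for s in present:
                V[s] += alpha * beta * error
            return V

        # Blocking demo: pre-train A, then reinforce the AB compound; B gains little strength.
        V = {"A": 0.0, "B": 0.0}
        for _ in range(50):
            rescorla_wagner_trial(V, ["A"], lam=1.0)
        for _ in range(50):
            rescorla_wagner_trial(V, ["A", "B"], lam=1.0)
        print(V)   # V["B"] stays near zero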

  1. Effect of Inhomogeneity correction for lung volume model in TPS

    International Nuclear Information System (INIS)

    Chung, Se Young; Lee, Sang Rok; Kim, Young Bum; Kwon, Young Ho

    2004-01-01

    A phantom that includes high-density materials such as steel was custom-made to hold lung and bone components, in order to evaluate inhomogeneity correction when conducting radiation therapy for lung cancer. Using this phantom, values resulting from the inhomogeneity correction algorithms were compared on 2D and 3D radiation therapy planning systems. Moreover, the change in dose calculation due to inhomogeneity was evaluated by comparison with actual measurements. For image acquisition, the custom-made inhomogeneity correction phantom (pig vertebra, steel (8.21 g/cm 3 ), cork (0.23 g/cm 3 )) and a CT scanner (Volume Zoom, Siemens, Germany) were used. As the radiation therapy planning systems, Marks Plan (2D) and XiO (CMS, USA; 3D) were used. For comparison with the measured values, a linear accelerator (CL/1800, Varian, USA) and an ion chamber were used. Images obtained from the CT were used to obtain point doses and dose distributions in the regions of interest (ROI) on the radiation therapy planning systems. After measurements were conducted under the same conditions, the values from the treatment planning systems and the measured values were compared and analyzed. Differences between the results obtained with and without the inhomogeneity correction algorithm, and among the various inhomogeneity correction algorithms included in the radiation therapy planning systems, were also compared. Comparing the measured values in the regions of interest within the inhomogeneity correction phantom with the homogeneous and inhomogeneity-corrected values from the planning systems, the margin of error between the measured value and the inhomogeneity-corrected value at lung location 1 was 0.8% on 2D and 0.5% on 3D. The margin of error between the measured value and the inhomogeneity-corrected value at steel location 1 was 12% on 2D and 5% on 3D; however, it is possible to

  2. Correction for decay during counting in gamma spectrometry

    International Nuclear Information System (INIS)

    Nir-El, Y.

    2013-01-01

    A basic result in gamma spectrometry is the count rate of a relevant peak. Correction for decay during counting, expressing the count rate at the beginning of the measurement, can be done with a multiplicative factor derived by integrating the count rate over time. The counting time substituted into this factor must be the live time; using the real time instead is an error that underestimates the count rate by approximately the dead time (DT) percentage. This underestimation of the count rate was corroborated by measuring a nuclide at a high DT. The present methodology is not applicable in systems that include a zero-DT correction function. (authors)
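The multiplicative factor mentioned above follows from integrating an exponentially decaying count rate over the counting interval; a small sketch of that standard correction (the numerical values are placeholders, not from the paper):

```python
import math

# counts = R0 * (1 - exp(-lambda * T)) / lambda, so the rate referred to the
# start of the measurement is R0 = (counts / T) * lambda*T / (1 - exp(-lambda*T)),
# where T must be the LIVE time, as the abstract stresses.

def rate_at_start(counts, live_time_s, half_life_s):
    lam = math.log(2.0) / half_life_s
    x = lam * live_time_s
    correction = x / (1.0 - math.exp(-x))
    return (counts / live_time_s) * correction

# Illustrative numbers only: a nuclide counted for a live time that is a
# sizeable fraction of its half-life.
print(rate_at_start(counts=1.0e5, live_time_s=3600.0, half_life_s=6.0 * 3600.0))
```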

  3. MR-based attenuation correction for cardiac FDG PET on a hybrid PET/MRI scanner: comparison with standard CT attenuation correction

    Energy Technology Data Exchange (ETDEWEB)

    Vontobel, Jan; Liga, Riccardo; Possner, Mathias; Clerc, Olivier F.; Mikulicic, Fran; Veit-Haibach, Patrick; Voert, Edwin E.G.W. ter; Fuchs, Tobias A.; Stehli, Julia; Pazhenkottil, Aju P.; Benz, Dominik C.; Graeni, Christoph; Gaemperli, Oliver; Herzog, Bernhard; Buechel, Ronny R.; Kaufmann, Philipp A. [University Hospital Zurich, Department of Nuclear Medicine, Zurich (Switzerland)

    2015-09-15

    The aim of this study was to evaluate the feasibility of attenuation correction (AC) for cardiac {sup 18}F-labelled fluorodeoxyglucose (FDG) positron emission tomography (PET) using MR-based attenuation maps. We included 23 patients with no known cardiac history undergoing whole-body FDG PET/CT imaging for oncological indications on a PET/CT scanner using time-of-flight (TOF) and subsequent whole-body PET/MR imaging on an investigational hybrid PET/MRI scanner. Data sets from PET/MRI (with and without TOF) were reconstructed using MR AC and semi-quantitative segmental (20-segment model) myocardial tracer uptake (per cent of maximum) and compared to PET/CT which was reconstructed using CT AC and served as standard of reference. Excellent correlations were found for regional uptake values between PET/CT and PET/MRI with TOF (n = 460 segments in 23 patients; r = 0.913; p < 0.0001) with narrow Bland-Altman limits of agreement (-8.5 to +12.6 %). Correlation coefficients were slightly lower between PET/CT and PET/MRI without TOF (n = 460 segments in 23 patients; r = 0.851; p < 0.0001) with broader Bland-Altman limits of agreement (-12.5 to +15.0 %). PET/MRI with and without TOF showed minimal underestimation of tracer uptake (-2.08 and -1.29 %, respectively), compared to PET/CT. Relative myocardial FDG uptake obtained from MR-based attenuation corrected FDG PET is highly comparable to standard CT-based attenuation corrected FDG PET, suggesting interchangeability of both AC techniques. (orig.)

  4. Reducing neutron multiplicity counting bias for plutonium warhead authentication

    Energy Technology Data Exchange (ETDEWEB)

    Goettsche, Malte

    2015-06-05

    Confidence in future nuclear arms control agreements could be enhanced by direct verification of warheads, which would include warhead authentication: the assessment, based on measurements, of whether a declaration that a specific item is a nuclear warhead is true. An information barrier can be used to protect sensitive information during measurements. It could, for example, show whether attributes such as a fissile mass exceeding a threshold are met without revealing detailed measurement results. Neutron multiplicity measurements would be able to assess a plutonium fissile-mass attribute if it were possible to show that their bias is low. Plutonium measurements have been conducted with the He-3 based Passive Scrap Multiplicity Counter. The measurement data have been used as a reference to test the capacity of the Monte Carlo code MCNPX-PoliMi to simulate neutron multiplicity measurements. The simulation results with their uncertainties are in agreement with the experimental results. It is essential to use cross-sections which include neutron scattering with the polyethylene molecular structure of the detector. Further MCNPX-PoliMi simulations have been conducted in order to study the bias that occurs when measuring samples with large plutonium masses such as warheads. Simulation results for solid and hollow metal spheres up to 6000 g show that the masses are underpredicted by as much as 20%. The main source of this bias has been identified as the false assumption that the neutron multiplication does not depend on the position where a spontaneous fission event occurred. The multiplication refers to the total number of neutrons leaking from a sample after a primary spontaneous fission event, taking induced fission into consideration. A correction of the analysis has been derived and implemented in a MATLAB code. It depends on four geometry-dependent correction coefficients. When the sample configuration is fully known, these can be exactly determined and remove this type of

  5. TU-H-206-04: An Effective Homomorphic Unsharp Mask Filtering Method to Correct Intensity Inhomogeneity in Daily Treatment MR Images

    International Nuclear Information System (INIS)

    Yang, D; Gach, H; Li, H; Mutic, S

    2016-01-01

    Purpose: The daily treatment MRIs acquired on MR-IGRT systems, like diagnostic MRIs, suffer from intensity inhomogeneity associated with B1 and B0 inhomogeneities. An improved homomorphic unsharp mask (HUM) filtering method, an automatic and robust body segmentation method, and an imaging field-of-view (FOV) detection method were developed to compute the multiplicative slowly varying correction field and correct the intensity inhomogeneity. The goal is to improve and normalize the voxel intensity so that the images can be processed more accurately by quantitative methods (e.g., segmentation and registration) that require consistent image voxel intensity values. Methods: HUM methods have been widely used for years. A body mask is required; otherwise the body surface in the corrected image would be incorrectly bright due to the sudden intensity transition at the body surface. In this study, we developed an improved HUM-based correction method that includes three main components: 1) robust body segmentation on the normalized image gradient map, 2) robust FOV detection (needed for body segmentation) using region growing and morphologic filters, and 3) an effective implementation of HUM using repeated Gaussian convolution. Results: The proposed method was successfully tested on patient images of common anatomical sites (H/N, lung, abdomen and pelvis). Initial qualitative comparisons showed that this improved HUM method outperformed three recently published algorithms (FCM, LEMS, MICO) in both computation speed (by 50+ times) and robustness (in intermediate to severe inhomogeneity situations). Currently implemented in MATLAB, it takes 20 to 25 seconds to process a 3D MRI volume. Conclusion: Compared to more sophisticated MRI inhomogeneity correction algorithms, the improved HUM method is simple and effective. The inhomogeneity correction, body mask, and FOV detection methods developed in this study would be useful as preprocessing tools for many MRI-related research and
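A minimal sketch of the multiplicative HUM idea described above. This is not the authors' implementation: their gradient-map body segmentation and FOV detection are replaced by a mask supplied directly, and the single Gaussian width is an arbitrary stand-in for their repeated convolution:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hum_correct(image, body, sigma=25.0):
    """Multiplicative homomorphic unsharp mask: divide by a heavily smoothed
    copy of the image (estimated inside the body mask only) and restore the
    mean body intensity."""
    # Normalized convolution so the smoothed field is not dragged down near
    # the body edge (the bright-surface artifact the abstract mentions).
    num = gaussian_filter(np.where(body, image, 0.0), sigma)
    den = gaussian_filter(body.astype(float), sigma)
    field = np.where(den > 1e-3, num / np.maximum(den, 1e-3), 1.0)
    corrected = np.where(body, image * image[body].mean() / np.maximum(field, 1e-6), image)
    return corrected, field

# Synthetic demo: a flat disc phantom multiplied by a slowly varying bias field.
yy, xx = np.mgrid[0:128, 0:128]
phantom = ((xx - 64) ** 2 + (yy - 64) ** 2 < 50 ** 2).astype(float)
biased = phantom * (1.0 + 0.3 * xx / 128.0)
corrected, field = hum_correct(biased, body=phantom > 0.5)
```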

  6. Optimization of Broadband Wavefront Correction at the Princeton High Contrast Imaging Laboratory

    Science.gov (United States)

    Groff, Tyler Dean; Kasdin, N.; Carlotti, A.

    2011-01-01

    Wavefront control for imaging of terrestrial planets using coronagraphic techniques requires improving the performance of the wavefront control techniques to expand the correction bandwidth and the size of the dark hole over which it is effective. At the Princeton High Contrast Imaging Laboratory we have focused on increasing the search area using two deformable mirrors (DMs) in series to achieve symmetric correction by correcting both amplitude and phase aberrations. Here we are concerned with increasing the bandwidth of light over which this correction is effective so we include a finite bandwidth into the optimization problem to generate a new stroke minimization algorithm. This allows us to minimize the actuator stroke on the DMs given contrast constraints at multiple wavelengths which define a window over which the dark hole will persist. This windowed stroke minimization algorithm is written in such a way that a weight may be applied to dictate the relative importance of the outer wavelengths to the central wavelength. In order to supply the estimates at multiple wavelengths a functional relationship to a central estimation wavelength is formed. Computational overhead and new experimental results of this windowed stroke minimization algorithm are discussed. The tradeoff between symmetric correction and achievable bandwidth is compared to the observed contrast degradation with wavelength in the experimental results. This work is supported by NASA APRA Grant #NNX09AB96G. The author is also supported under an NESSF Fellowship.

  7. Conduction Losses and Common Mode EMI Analysis on Bridgeless Power Factor Correction

    DEFF Research Database (Denmark)

    Li, Qingnan; Andersen, Michael Andreas E.; Thomsen, Ole Cornelius

    2009-01-01

    In this paper, a review of Bridgeless Boost power factor correction (PFC) converters is presented at first. Performance comparison on conduction losses and common mode electromagnetic interference (EMI) are analyzed between conventional Boost PFC converter and members of Bridgeless PFC family...

  8. Evaluation of bias-correction methods for ensemble streamflow volume forecasts

    Directory of Open Access Journals (Sweden)

    T. Hashino

    2007-01-01

    Full Text Available Ensemble prediction systems are used operationally to make probabilistic streamflow forecasts for seasonal time scales. However, hydrological models used for ensemble streamflow prediction often have simulation biases that degrade forecast quality and limit the operational usefulness of the forecasts. This study evaluates three bias-correction methods for ensemble streamflow volume forecasts. All three adjust the ensemble traces using a transformation derived from simulated and observed flows in a historical simulation. The quality of probabilistic forecasts issued when using the three bias-correction methods is evaluated using a distributions-oriented verification approach. Comparisons are made of retrospective forecasts of monthly flow volumes for a north-central United States basin (Des Moines River, Iowa), issued sequentially for each month over a 48-year record. The results show that all three bias-correction methods significantly improve forecast quality by eliminating unconditional biases and enhancing the potential skill. Still, subtle differences in the attributes of the bias-corrected forecasts have important implications for their use in operational decision-making. Diagnostic verification distinguishes these attributes in a context meaningful for decision-making, providing criteria to choose among bias-correction methods with comparable skill.
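The abstract does not name the three transformations, but quantile mapping is one common transformation of this type and illustrates the general idea (a hedged sketch with synthetic flows, not one of the evaluated methods):

```python
import numpy as np

def quantile_map(simulated_hist, observed_hist, ensemble_trace):
    """Map ensemble member flows through the empirical simulated->observed
    quantile relationship estimated from a historical period."""
    probs = np.linspace(0.01, 0.99, 99)
    sim_q = np.quantile(simulated_hist, probs)
    obs_q = np.quantile(observed_hist, probs)
    # Each forecast value is located in the simulated climatology and replaced
    # by the observed flow at the same quantile.
    return np.interp(ensemble_trace, sim_q, obs_q)

rng = np.random.default_rng(0)
observed = rng.gamma(2.0, 50.0, 480)          # 40 years of monthly volumes (synthetic)
simulated = 0.8 * observed + 15.0             # biased model simulation of the same period
forecast_members = rng.gamma(2.0, 40.0, 30)   # one month's raw ensemble
print(quantile_map(simulated, observed, forecast_members))
```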

  9. Multiple ionization of noble gases by 2.0 MeV proton impact: comparison with equi-velocity electron impact ionization

    International Nuclear Information System (INIS)

    Melo, W.S.; Santos, A.C.F.; Sant'Anna, M.M.; Sigaud, G.M.; Montenegro, E.C.

    2002-01-01

    Absolute single- and multiple-ionization cross sections of rare gases (He, Ne, Ar, Kr and Xe) have been measured for collisions with 2.0 MeV p + . A comparison is made with equi-velocity electron impact ionization cross sections as well as with the available proton impact data. For the light rare gases the single-ionization cross sections are essentially the same for both proton and electron impacts, but increasing differences appear for the heavier targets. (author). Letter-to-the-editor

  10. Error field and its correction strategy in tokamaks

    International Nuclear Information System (INIS)

    In, Yongkyoon

    2014-01-01

    While error field correction (EFC) is to minimize the unwanted kink-resonant non-axisymmetric components, resonant magnetic perturbation (RMP) application is to maximize the benefits of pitch-resonant non-axisymmetric components. As the plasma response against non-axisymmetric field increases with beta increase, feedback-controlled EFC is a more promising EFC strategy in reactor-relevant high-beta regimes. Nonetheless, various physical aspects and uncertainties associated with EFC should be taken into account and clarified in the terms of multiple low-n EFC and multiple MHD modes, in addition to the compatibility issue with RMP application. Such a multi-faceted view of EFC strategy is briefly discussed. (author)

  11. GPU accelerated manifold correction method for spinning compact binaries

    Science.gov (United States)

    Ran, Chong-xi; Liu, Song; Zhong, Shuang-ying

    2018-04-01

    The graphics processing unit (GPU) acceleration of the manifold correction algorithm, based on the compute unified device architecture (CUDA) technology, is designed to simulate the dynamic evolution of the post-Newtonian (PN) Hamiltonian formulation of spinning compact binaries. The feasibility and efficiency of parallel computation on the GPU have been confirmed by various numerical experiments. The numerical comparisons show that the accuracy of the manifold correction method executed on the GPU agrees well with that of the same codes executed on the central processing unit (CPU) alone. The acceleration can increase enormously through the use of shared memory and register optimization techniques without additional hardware costs; the speedup is nearly 13 times that of the codes executed on the CPU for a phase-space scan (including 314 × 314 orbits). In addition, the GPU-accelerated manifold correction method is used to numerically study how the dynamics are affected by the spin-induced quadrupole-monopole interaction for black hole binary systems.

  12. Improved scatter correction with factor analysis for planar and SPECT imaging

    Science.gov (United States)

    Knoll, Peter; Rahmim, Arman; Gültekin, Selma; Šámal, Martin; Ljungberg, Michael; Mirzaei, Siroos; Segars, Paul; Szczupak, Boguslaw

    2017-09-01

    Quantitative nuclear medicine imaging is an increasingly important frontier. In order to achieve quantitative imaging, various interactions of photons with matter have to be modeled and compensated. Although correction for photon attenuation has been addressed by including x-ray CT scans (accurate), correction for Compton scatter remains an open issue. The inclusion of scattered photons within the energy window used for planar or SPECT data acquisition decreases the contrast of the image. While a number of methods for scatter correction have been proposed in the past, in this work, we propose and assess a novel, user-independent framework applying factor analysis (FA). Extensive Monte Carlo simulations for planar and tomographic imaging were performed using the SIMIND software. Furthermore, planar acquisition of two Petri dishes filled with 99mTc solutions and a Jaszczak phantom study (Data Spectrum Corporation, Durham, NC, USA) using a dual head gamma camera were performed. In order to use FA for scatter correction, we subdivided the applied energy window into a number of sub-windows, serving as input data. FA results in two factor images (photo-peak, scatter) and two corresponding factor curves (energy spectra). Planar and tomographic Jaszczak phantom gamma camera measurements were recorded. The tomographic data (simulations and measurements) were processed for each angular position resulting in a photo-peak and a scatter data set. The reconstructed transaxial slices of the Jaszczak phantom were quantified using an ImageJ plugin. The data obtained by FA showed good agreement with the energy spectra, photo-peak, and scatter images obtained in all Monte Carlo simulated data sets. For comparison, the standard dual-energy window (DEW) approach was additionally applied for scatter correction. FA in comparison with the DEW method results in significant improvements in image accuracy for both planar and tomographic data sets. FA can be used as a user
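A rough sketch of how factor analysis can be applied to energy sub-window data as described above, using scikit-learn's generic FactorAnalysis rather than the authors' user-independent framework; the sub-window spectra and counts below are synthetic assumptions, not SIMIND output:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Each pixel contributes a small spectrum: counts in several sub-windows of the
# acquisition energy window. A two-factor model separates a photo-peak-like and
# a scatter-like component (factor images plus their "energy spectra").
n_pix, n_windows = 64 * 64, 8
energies = np.linspace(126, 154, n_windows)                   # keV sub-window centres
photopeak = np.exp(-0.5 * ((energies - 140.5) / 4.0) ** 2)    # narrow 140 keV peak
scatter = np.exp(-(energies - 126.0) / 15.0)                  # broad low-energy tail

rng = np.random.default_rng(1)
a = rng.uniform(0, 100, size=(n_pix, 1))     # true photo-peak image (flattened)
b = rng.uniform(0, 60, size=(n_pix, 1))      # true scatter image (flattened)
counts = rng.poisson(a * photopeak + b * scatter)

fa = FactorAnalysis(n_components=2, random_state=0)
factor_images = fa.fit_transform(counts)     # (n_pix, 2): photo-peak-like and scatter-like images
factor_spectra = fa.components_              # (2, n_windows): corresponding energy spectra
```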

  13. Classical gluon production amplitude for nucleus-nucleus collisions:First saturation correction in the projectile

    International Nuclear Information System (INIS)

    Chirilli, Giovanni A.; Kovchegov, Yuri V.; Wertepny, Douglas E.

    2015-01-01

    We calculate the classical single-gluon production amplitude in nucleus-nucleus collisions including the first saturation correction in one of the nuclei (the projectile) while keeping multiple-rescattering (saturation) corrections to all orders in the other nucleus (the target). In our approximation only two nucleons interact in the projectile nucleus: the single-gluon production amplitude we calculate is order g{sup 3} and is leading-order in the atomic number of the projectile, while resumming all order-one saturation corrections in the target nucleus. Our result is the first step towards obtaining an analytic expression for the first projectile saturation correction to the gluon production cross section in nucleus-nucleus collisions.

  14. Measurement of the pulse pileup correction for the HPGe gamma spectrometer

    International Nuclear Information System (INIS)

    Kulkarni, D.B.; Anuradha, R.; Joseph, Leena; Kulkarni, M.S.

    2018-01-01

    Radiation Standards Section (RSS), RSSD, maintains an HPGe gamma spectrometry system as a secondary standard for the standardization of gamma-emitting radionuclides. This system is also used to detect impurities in the radioactivity samples supplied for international inter-comparison exercises, so that the appropriate correction can be made to the standardized activity of the principal radionuclide. The system is calibrated as per the recommended procedure (ANSI standard N42.14, 1999). As a part of this calibration, the pulse pile-up correction was measured in the energy range of 81 keV to 1408 keV. Measuring the pile-up correction is very important for the standardization of sources with high counting rates, where the pile-up effect is larger and considerable deviation from the true counting rate was observed. For these sources the measured photo-peak counting rate is less than the true counting rate and must be corrected for the pile-up effect. The details of the experiments are discussed in this paper

  15. In-medium effects in K+ scattering versus Glauber model with noneikonal corrections

    International Nuclear Information System (INIS)

    Eliseev, S.M.; Rihan, T.H.

    1996-01-01

    The discrepancy between the experimental and theoretical ratio R of the total cross sections, R = σ(K{sup +}-{sup 12}C)/6σ(K{sup +}-d), at momenta up to 800 MeV/c is discussed in the framework of the Glauber multiple scattering approach. It is shown that various corrections, such as adopting relativistic K{sup +}-N amplitudes as well as noneikonal corrections, seem to fail in reproducing the experimental data, especially at higher momenta. 17 refs., 1 fig

  16. The Combined Quantification and Interpretation of Multiple Quantitative Magnetic Resonance Imaging Metrics Enlightens Longitudinal Changes Compatible with Brain Repair in Relapsing-Remitting Multiple Sclerosis Patients.

    Science.gov (United States)

    Bonnier, Guillaume; Maréchal, Benedicte; Fartaria, Mário João; Falkowskiy, Pavel; Marques, José P; Simioni, Samanta; Schluep, Myriam; Du Pasquier, Renaud; Thiran, Jean-Philippe; Krueger, Gunnar; Granziera, Cristina

    2017-01-01

    Quantitative and semi-quantitative MRI (qMRI) metrics provide complementary specificity and differential sensitivity to pathological brain changes compatible with brain inflammation, degeneration, and repair. Moreover, advanced magnetic resonance imaging (MRI) metrics with overlapping elements amplify the true tissue-related information and limit measurement noise. In this work, we combined multiple advanced MRI parameters to assess focal and diffuse brain changes over 2 years in a group of early-stage relapsing-remitting MS patients. Thirty relapsing-remitting MS patients with less than 5 years disease duration and nine healthy subjects underwent 3T MRI at baseline and after 2 years including T1, T2, T2* relaxometry, and magnetization transfer imaging. To assess longitudinal changes in normal-appearing (NA) tissue and lesions, we used analyses of variance and Bonferroni correction for multiple comparisons. Multivariate linear regression was used to assess the correlation between clinical outcome and multiparametric MRI changes in lesions and NA tissue. In patients, we measured a significant longitudinal decrease of mean T2 relaxation times in NA white matter ( p  = 0.005) and a decrease of T1 relaxation times in the pallidum ( p  decrease in T1 relaxation time ( p -value  0.4, p  < 0.05). In summary, the combination of multiple advanced MRI provided evidence of changes compatible with focal and diffuse brain repair at early MS stages as suggested by histopathological studies.

  17. A comparison of different experimental methods for general recombination correction for liquid ionization chambers

    DEFF Research Database (Denmark)

    Andersson, Jonas; Kaiser, Franz-Joachim; Gomez, Faustino

    2012-01-01

    Radiation dosimetry of highly modulated dose distributions requires a detector with a high spatial resolution. Liquid filled ionization chambers (LICs) have the potential to become a valuable tool for the characterization of such radiation fields. However, the effect of an increased recombination...... of the charge carriers, as compared to using air as the sensitive medium has to be corrected for. Due to the presence of initial recombination in LICs, the correction for general recombination losses is more complicated than for air-filled ionization chambers. In the present work, recently published...

  18. A singular choice for multiple choice

    DEFF Research Database (Denmark)

    Frandsen, Gudmund Skovbjerg; Schwartzbach, Michael Ignatieff

    2006-01-01

    How should multiple choice tests be scored and graded, in particular when students are allowed to check several boxes to convey partial knowledge? Many strategies may seem reasonable, but we demonstrate that five self-evident axioms are sufficient to determine completely the correct strategy. We ...

  19. Age and education corrected older adult normative data for a short form version of the Financial Capacity Instrument.

    Science.gov (United States)

    Gerstenecker, Adam; Eakin, Amanda; Triebel, Kristen; Martin, Roy; Swenson-Dravis, Dana; Petersen, Ronald C; Marson, Daniel

    2016-06-01

    Financial capacity is an instrumental activity of daily living (IADL) that comprises multiple abilities and is critical to independence and autonomy in older adults. Because of its cognitive complexity, financial capacity is often the first IADL to show decline in prodromal and clinical Alzheimer's disease and related disorders. Despite its importance, few standardized assessment measures of financial capacity exist and there is little, if any, normative data available to evaluate financial skills in the elderly. The Financial Capacity Instrument-Short Form (FCI-SF) is a brief measure of financial skills designed to evaluate financial skills in older adults with cognitive impairment. In the current study, we present age- and education-adjusted normative data for FCI-SF variables in a sample of 1344 cognitively normal, community-dwelling older adults participating in the Mayo Clinic Study of Aging (MCSA) in Olmsted County, Minnesota. Individual FCI-SF raw scores were first converted to age-corrected scaled scores based on position within a cumulative frequency distribution and then grouped within 4 empirically supported and overlapping age ranges. These age-corrected scaled scores were then converted to age- and education-corrected scaled scores using the same methodology. This study has the potential to substantially enhance financial capacity evaluations of older adults through the introduction of age- and education-corrected normative data for the FCI-SF by allowing clinicians to: (a) compare an individual's performance to that of a sample of similar age and education peers, (b) interpret various aspects of financial capacity relative to a normative sample, and (c) make comparisons between these aspects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  20. Cosmic Rays and Dynamical Meteorology, 2. Snow Effect In Different Multiplicities According To Neutron Monitor Data of Emilio Segre' Observatory

    Science.gov (United States)

    Dorman, L. I.; Iucci, N.; Pustil'Nik, L. A.; Sternlieb, A.; Villoresi, G.; Zukerman, I. G.

    On the basis of cosmic ray hourly data obtained by the NM of the Emilio Segre' Observatory (height 2025 m above sea level, cut-off rigidity for vertical direction 10.8 GV) we determine the snow effect in CR for the total neutron intensity and for multiplicities m=1, m=2, m=3, m=4, m=5, m=6, and m=7. For comparison, and to exclude primary CR variations, we also use hourly data on neutron multiplicities obtained by the Rome NM (about sea level, cut-off rigidity 6.7 GV). In this paper we analyze the effects of snow in the periods from 4 January 2000 to 15 April 2000, with a maximal absorption effect of about 5%, and from 21 December 2000 up to 31 March 2001, with a maximal effect of 13% in the total neutron intensity. We use the periods without snow to determine regression coefficients between the primary CR variations observed by the NM of the Emilio Segre' Observatory and by the Rome NM. On the basis of the obtained results we develop a method to correct data for the snow effect by using hourly data from several NMs. On the basis of our data we estimate the accuracy with which correction of NM data can be made for stations where the snow effect can be important.

  1. Optimizing signal intensity correction during evaluation of hepatic parenchymal enhancement on gadoxetate disodium-enhanced MRI: Comparison of three methods

    Energy Technology Data Exchange (ETDEWEB)

    Onoda, Minori, E-mail: onoda@radt.med.kindai.ac.jp [Department of Radiological Technology, Kinki University Hospital, 377-2 Ohno-Higashi, Osaka-Sayama, Osaka 589-8511 (Japan); Division of Health Sciences, Graduate School of Medical Science, Kanazawa University, 5-11-80 Kodatsuno, Kanazawa, Ishikawa 920-0942 (Japan); Hyodo, Tomoko, E-mail: neneth@m.ehime-u.ac.jp [Department of Radiology, Kinki University Faculty of Medicine, 377-2 Ohno-Higashi, Osaka-Sayama, Osaka 589-8511 (Japan); Murakami, Takamichi, E-mail: murakami@med.kindai.ac.jp [Department of Radiology, Kinki University Faculty of Medicine, 377-2 Ohno-Higashi, Osaka-Sayama, Osaka 589-8511 (Japan); Okada, Masahiro, E-mail: okada777@med.u-ryukyu.ac.jp [Department of Radiology, Kinki University Faculty of Medicine, 377-2 Ohno-Higashi, Osaka-Sayama, Osaka 589-8511 (Japan); Uto, Tatsuro, E-mail: chuho@med.kindai.ac.jp [Department of Radiological Technology, Kinki University Hospital, 377-2 Ohno-Higashi, Osaka-Sayama, Osaka 589-8511 (Japan); Hori, Masatoshi, E-mail: mhori@radiol.med.osaka-u.ac.jp [Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871 (Japan); Miyati, Tosiaki, E-mail: ramiyati@mhs.mp.kanazawa-u.ac.jp [Division of Health Sciences, Graduate School of Medical Science, Kanazawa University, 5-11-80 Kodatsuno, Kanazawa, Ishikawa 920-0942 (Japan)

    2015-03-15

    Highlights: •Signal intensity is often used to evaluate hepatic enhancement with Gd-EOB-DTPA in the hepatobiliary phase. •Comparison of uncorrected signal intensity with T{sub 1} value revealed signal intensity instability. •Measurement of uncorrected liver SI or SNR often yields erroneous results on late-phase gadoxetate MRI due to shimming and other optimization techniques. •Signal intensity corrected by scale and rescale slope from DICOM data gave comparable results. -- Abstract: Objective: To compare signal intensity (SI) correction using scale and rescale slopes with SI correction using SIs of spleen and muscle for quantifying multiphase hepatic contrast enhancement with Gd-EOB-DTPA by assessing their correlation with T{sub 1} values generated from Look-Locker turbo-field-echo (LL-TFE) sequence data (ER-T{sub 1}). Materials and methods: Thirty patients underwent Gd-EOB-DTPA-enhanced magnetic resonance imaging (MRI) in this prospective clinical study. For each patient, breath-hold T{sub 1}-weighted fat-suppressed three-dimensional (3D) gradient echo sequences (e-THRIVE) were acquired before and 2 (first phase), 10 (second phase), and 20 min (third phase) after intravenous Gd-EOB-DTPA. Look-Locker turbo-field-echo (LL-TFE) sequences were acquired before and 1.5 (first phase), 8 (second phase), and 18 min (third phase) postcontrast. The liver parenchyma enhancement ratios (ER) of each phase were calculated using the SI from e-THRIVE sequences (ER-SI) and the T{sub 1} values generated from LL-TFE sequence data (ER-T{sub 1}) respectively. ER-SIs were calculated in three ways: (1) comparing with splenic SI (ER-SI-s), (2) comparing with muscle SI (ER-SI-m), (3) using scale and rescale slopes obtained from DICOM headers (ER-SI-c), to eliminate the effects of receiver gain and scaling. For each of the first, second and third phases, correlation and agreement were assessed between each ER-SI and ER-T{sub 1}. Results: In the first phase, all ER-SIs correlated

  2. A Review of Multiple Hypothesis Testing in Otolaryngology Literature

    Science.gov (United States)

    Kirkham, Erin M.; Weaver, Edward M.

    2018-01-01

    Objective: Multiple hypothesis testing (or multiple testing) refers to testing more than one hypothesis within a single analysis, and can inflate the Type I error rate (false positives) within a study. The aim of this review was to quantify multiple testing in recent large clinical studies in the otolaryngology literature and to discuss strategies to address this potential problem. Data sources: Original clinical research articles with >100 subjects published in 2012 in the four general otolaryngology journals with the highest Journal Citation Reports 5-year impact factors. Review methods: Articles were reviewed to determine whether the authors tested more than five hypotheses in at least one family of inferences. For the articles meeting this criterion for multiple testing, Type I error rates were calculated and statistical correction was applied to the reported results. Results: Of the 195 original clinical research articles reviewed, 72% met the criterion for multiple testing. Within these studies, there was a mean 41% chance of a Type I error and, on average, 18% of significant results were likely to be false positives. After the Bonferroni correction was applied, only 57% of significant results reported within the articles remained significant. Conclusion: Multiple testing is common in recent large clinical studies in otolaryngology and deserves closer attention from researchers, reviewers and editors. Strategies for adjusting for multiple testing are discussed. PMID:25111574
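The error-rate arithmetic behind these figures is simple to reproduce; a short sketch (the 10-test example is illustrative, not a per-study figure from the review):

```python
# Family-wise Type I error inflation for independent tests, and the
# corresponding Bonferroni-adjusted per-test threshold.

alpha = 0.05

def familywise_error(n_tests, alpha=0.05):
    """Chance of at least one false positive among n independent tests."""
    return 1 - (1 - alpha) ** n_tests

print(familywise_error(10))   # ~0.40, comparable in size to the reported mean 41% chance
print(alpha / 10)             # Bonferroni-adjusted per-test threshold for 10 tests
```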

  3. The Effect and Implications of a "Self-Correcting" Assessment Procedure

    Science.gov (United States)

    Francis, Alisha L.; Barnett, Jerrold

    2012-01-01

    We investigated Montepare's (2005, 2007) self-correcting procedure for multiple-choice exams. Findings related to memory suggest this procedure should lead to improved retention by encouraging students to distribute the time spent reviewing the material. Results from a general psychology class (n = 98) indicate that the benefits are not as…

  4. Practical estimate of gradient nonlinearity for implementation of apparent diffusion coefficient bias correction.

    Science.gov (United States)

    Malkyarenko, Dariya I; Chenevert, Thomas L

    2014-12-01

    To describe an efficient procedure to empirically characterize gradient nonlinearity and correct for the corresponding apparent diffusion coefficient (ADC) bias on a clinical magnetic resonance imaging (MRI) scanner. Spatial nonlinearity scalars for individual gradient coils along the superior and right directions were estimated via diffusion measurements of an isotropic ice-water phantom. A digital nonlinearity model from an independent scanner, described in the literature, was rescaled by system-specific scalars to approximate 3D bias correction maps. Correction efficacy was assessed by comparison to unbiased ADC values measured at isocenter. Empirically estimated nonlinearity scalars were confirmed by geometric distortion measurements of a regular grid phantom. The applied nonlinearity correction for arbitrarily oriented diffusion gradients reduced ADC bias from 20% down to 2% at clinically relevant offsets, both for isotropic and anisotropic media. Identical performance was achieved using either corrected diffusion-weighted imaging (DWI) intensities or corrected b-values for each direction in brain and ice-water. Direction-averaged trace image correction was adequate only for isotropic media. Empiric scalar adjustment of an independent gradient nonlinearity model adequately described DWI bias for a clinical scanner. The observed efficiency of the implemented ADC bias correction quantitatively agreed with previous theoretical predictions and numerical simulations. The described procedure provides an independent benchmark for nonlinearity bias correction of clinical MRI scanners.
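A toy illustration of the corrected-b-value route mentioned above: recomputing the ADC with a spatially varying effective b-value removes the bias that the nominal b-value introduces. The quadratic scaling model and all numeric values are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def adc(s0, sb, b):
    """Mono-exponential ADC estimate from two diffusion weightings."""
    return np.log(s0 / sb) / b

b_nominal = 1000.0                 # s/mm^2, prescribed b-value
true_adc = 1.1e-3                  # mm^2/s, roughly ice-water-like
c = 0.95                           # assumed local gradient scaling at an off-centre voxel
b_actual = b_nominal * c ** 2      # effective b-value actually delivered (quadratic in c)

s0 = 1000.0
sb = s0 * np.exp(-b_actual * true_adc)   # signal the scanner would measure
print(adc(s0, sb, b_nominal))            # biased ADC (uses nominal b)
print(adc(s0, sb, b_actual))             # corrected ADC (uses corrected b)
```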

  5. Multiple Intelligences Profiles of Children with Attention Deficit and Hyperactivity Disorder in Comparison with Nonattention Deficit and Hyperactivity Disorder.

    Science.gov (United States)

    Najafi, Mostafa; Akouchekian, Shahla; Ghaderi, Alireza; Mahaki, Behzad; Rezaei, Mariam

    2017-01-01

    Attention deficit and hyperactivity disorder (ADHD) is a common psychological problem during childhood. This study aimed to evaluate the multiple intelligences profiles of children with ADHD in comparison with non-ADHD children. This cross-sectional descriptive analytical study was done on 50 children of 6-13 years of age in two groups, with and without ADHD. Children with ADHD were referred to the Clinics of Child and Adolescent Psychiatry, Isfahan University of Medical Sciences, in 2014. Samples were selected based on a clinical interview (based on the Diagnostic and Statistical Manual of Mental Disorders IV and the parent-teacher strengths and difficulties questionnaire) conducted by a psychiatrist and a psychologist. The Raven intelligence quotient (IQ) test was used, and the findings were compared to the results of the multiple intelligences test. Data analysis was done using a multivariate analysis of covariance in SPSS20 software. Comparing the multiple intelligences profiles of the two groups, the control group scored higher than the ADHD group, a difference that was more pronounced in logical, interpersonal, and intrapersonal intelligence (P < 0.05); there was no significant difference in the other kinds of multiple intelligences in the two groups (P > 0.05). The IQ average score in the control group and ADHD group was 102.42 ± 16.26 and 96.72 ± 16.06, respectively, which reveals the negative effect of ADHD on the average IQ value. There was no significant relationship between linguistic and naturalist intelligence (P > 0.05). However, for the other kinds of multiple intelligences, direct and significant relationships were observed (P < 0.05). Since the levels of IQ (Raven test) and MI in the control group were significantly higher than in the ADHD group, ADHD is likely to be associated with the logical-mathematical, interpersonal, and intrapersonal profiles.

  6. Comparison of prostate set-up accuracy and margins with off-line bony anatomy corrections and online implanted fiducial-based corrections.

    Science.gov (United States)

    Greer, P B; Dahl, K; Ebert, M A; Wratten, C; White, M; Denham, J W

    2008-10-01

    The aim of the study was to determine prostate set-up accuracy and set-up margins with off-line bony anatomy-based imaging protocols, compared with online implanted fiducial marker-based imaging with daily corrections. Eleven patients were treated with implanted prostate fiducial markers and online set-up corrections. Pretreatment orthogonal electronic portal images were acquired to determine couch shifts and verification images were acquired during treatment to measure residual set-up error. The prostate set-up errors that would result from skin marker set-up, off-line bony anatomy-based protocols and online fiducial marker-based corrections were determined. Set-up margins were calculated for each set-up technique using the percentage of encompassed isocentres and a margin recipe. The prostate systematic set-up errors in the medial-lateral, superior-inferior and anterior-posterior directions for skin marker set-up were 2.2, 3.6 and 4.5 mm (1 standard deviation). For our bony anatomy-based off-line protocol the prostate systematic set-up errors were 1.6, 2.5 and 4.4 mm. For the online fiducial based set-up the results were 0.5, 1.4 and 1.4 mm. A prostate systematic error of 10.2 mm was uncorrected by the off-line bone protocol in one patient. Set-up margins calculated to encompass 98% of prostate set-up shifts were 11-14 mm with bone off-line set-up and 4-7 mm with online fiducial markers. Margins from the van Herk margin recipe were generally 1-2 mm smaller. Bony anatomy-based set-up protocols improve the group prostate set-up error compared with skin marks; however, large prostate systematic errors can remain undetected or systematic errors increased for individual patients. The margin required for set-up errors was found to be 10-15 mm unless implanted fiducial markers are available for treatment guidance.
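The van Herk margin recipe cited above has the standard published form margin = 2.5 Σ + 0.7 σ; a small sketch using the skin-marker systematic errors quoted in the abstract, with the random components assumed purely for illustration:

```python
import numpy as np

def van_herk_margin(systematic_sd, random_sd):
    """CTV-to-PTV margin from the standard van Herk recipe:
    2.5 * Sigma (systematic SD) + 0.7 * sigma (random SD), per axis."""
    return 2.5 * np.asarray(systematic_sd) + 0.7 * np.asarray(random_sd)

systematic = [2.2, 3.6, 4.5]   # ML, SI, AP systematic errors (mm) quoted for skin-marker set-up
random_err = [2.0, 2.0, 2.0]   # assumed random errors (mm), illustrative only
print(van_herk_margin(systematic, random_err))
```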

  7. Publisher Correction: Unravelling the immune signature of Plasmodium falciparum transmission-reducing immunity.

    Science.gov (United States)

    Stone, Will J R; Campo, Joseph J; Ouédraogo, André Lin; Meerstein-Kessel, Lisette; Morlais, Isabelle; Da, Dari; Cohuet, Anna; Nsango, Sandrine; Sutherland, Colin J; van de Vegte-Bolmer, Marga; Siebelink-Stoter, Rianne; van Gemert, Geert-Jan; Graumans, Wouter; Lanke, Kjerstin; Shandling, Adam D; Pablo, Jozelyn V; Teng, Andy A; Jones, Sophie; de Jong, Roos M; Fabra-García, Amanda; Bradley, John; Roeffen, Will; Lasonder, Edwin; Gremo, Giuliana; Schwarzer, Evelin; Janse, Chris J; Singh, Susheel K; Theisen, Michael; Felgner, Phil; Marti, Matthias; Drakeley, Chris; Sauerwein, Robert; Bousema, Teun; Jore, Matthijs M

    2018-04-11

    The original version of this Article contained errors in Fig. 3. In panel a, bars from a chart depicting the percentage of antibody-positive individuals in non-infectious and infectious groups were inadvertently included in place of bars depicting the percentage of infectious individuals, as described in the Article and figure legend. However, the p values reported in the Figure and the resulting conclusions were based on the correct dataset. The corrected Fig. 3a now shows the percentage of infectious individuals in antibody-negative and -positive groups, in both the PDF and HTML versions of the Article. The incorrect and correct versions of Figure 3a are also presented for comparison in the accompanying Publisher Correction as Figure 1.The HTML version of the Article also omitted a link to Supplementary Data 6. The error has now been fixed and Supplementary Data 6 is available to download.

  8. Efficient orbit integration by manifold correction methods.

    Science.gov (United States)

    Fukushima, Toshio

    2005-12-01

    Triggered by a desire to investigate, numerically, the planetary precession through a long-term numerical integration of the solar system, we developed a new formulation of numerical integration of orbital motion named manifold correction methods. The main trick is to rigorously retain the consistency of physical relations, such as the orbital energy, the orbital angular momentum, or the Laplace integral, of a binary subsystem. This maintenance is done by applying a correction to the integrated variables at each integration step. Typical methods of correction are certain geometric transformations, such as spatial scaling and spatial rotation, which are commonly used in the comparison of reference frames, or mathematically reasonable operations, such as modularization of angle variables into the standard domain [-pi, pi). The form into which the manifold correction methods finally evolved is the orbital longitude methods, which enable us to conduct an extremely precise integration of orbital motions. In unperturbed orbits, the integration errors are suppressed at the machine epsilon level for an indefinitely long period. In perturbed cases, on the other hand, the errors initially grow in proportion to the square root of time and then increase more rapidly, the onset of which depends on the type and magnitude of the perturbations. This feature is also realized for highly eccentric orbits by applying the same idea as used in KS-regularization. In particular, the introduction of time elements greatly enhances the performance of numerical integration of KS-regularized orbits, whether the scaling is applied or not.
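The simplest member of this family is easy to illustrate: after every step, rescale the velocity so the two-body energy returns to its initial manifold. This toy version is not the orbital longitude method the abstract favours, and the orbit, integrator and step size are arbitrary choices for the sketch:

```python
import numpy as np

# Toy manifold correction for a planar Kepler orbit (mu = 1): a crude Euler
# step followed by a velocity rescaling that restores the initial orbital energy.

mu = 1.0
r = np.array([1.0, 0.0])
v = np.array([0.0, 1.1])
E0 = 0.5 * v @ v - mu / np.linalg.norm(r)      # energy defining the manifold

dt = 1.0e-3
for _ in range(100_000):
    a = -mu * r / np.linalg.norm(r) ** 3
    r = r + dt * v
    v = v + dt * a
    # Manifold correction: scale |v| so that 0.5 v^2 - mu/r equals E0 again.
    kinetic_target = E0 + mu / np.linalg.norm(r)
    if kinetic_target > 0:
        v = v * np.sqrt(kinetic_target / (0.5 * v @ v))

print(E0, 0.5 * v @ v - mu / np.linalg.norm(r))   # energy error stays at round-off level
```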

  9. Hadron multiplicity as the limit of jet multiplicity at high resolution

    International Nuclear Information System (INIS)

    Lupia, S.; Ochs, W.

    1998-01-01

    Recently, exact numerical results from the evolution equation for parton multiplicities in QCD jets have been obtained. A comparison with various approximate results is presented. A good description is obtained not only of the jet multiplicities measured at LEP-1 but also of the hadron multiplicities for c.m.s. energies above 1.6 GeV in e{sup +}e{sup -} annihilation. The solution suggests that a final state hadron can be represented by a jet in the limit of a small (nonperturbative) transverse-momentum cut-off Q{sub 0}. In this description, using as adjustable parameters only the QCD scale Λ and the cut-off Q{sub 0}, the coupling α{sub s} can be seen to rise towards large values above unity at low energies. (orig.)

  10. Self-correcting electronically scanned pressure sensor

    Science.gov (United States)

    Gross, C. (Inventor)

    1983-01-01

    A multiple channel high data rate pressure sensing device is disclosed for use in wind tunnels, spacecraft, airborne, process control, automotive, etc., pressure measurements. Data rates in excess of 100,000 measurements per second are offered with inaccuracies from temperature shifts less than 0.25% (nominal) of full scale over a temperature span of 55 C. The device consists of thirty-two solid state sensors, signal multiplexing electronics to electronically address each sensor, and digital electronic circuitry to automatically correct the inherent thermal shift errors of the pressure sensors and their associated electronics.

  11. Corrective Jaw Surgery

    Medline Plus

    Full Text Available Corrective Jaw Surgery: Orthognathic surgery is performed to correct the misalignment of jaws ...

  12. The Impact of Correctional Officer Perceptions of Inmates on Job Stress

    Directory of Open Access Journals (Sweden)

    Marcos Misis

    2013-05-01

    Full Text Available Research suggests that job-related stress affects correctional officers’ attitudes toward their work environment, coworkers, and supervisors, as well as their physical and mental health; however, very few studies have examined the relationship between stress and attitudes toward inmates. This study examined the relationship between correctional officers’ levels of stress and their perceptions of inmates by surveying a sample of 501 correctional officers employed by a Southern prison system. Hierarchical multiple regression analysis was used to test the principal hypothesis of this study—that more negative perceptions of inmates would result in higher levels of stress for correctional officers. Independent variables were grouped into four groups (demographic variables, supervisory support, job characteristics, and attitudes toward inmates and were entered into the model in blocks. Lower supervisory support and perceptions of the job being dangerous were associated with higher levels of job stress. More importantly, correctional officers who saw inmates as intimidated (not arrogant and nonmanipulative reported lower levels of job stress, while officers who perceived inmates as being unfriendly, antisocial, and cold reported higher levels of stress.

  13. Comparison of classical methods for blade design and the influence of tip correction on rotor performance

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær; Okulov, Valery; Mikkelsen, Robert Flemming

    2016-01-01

    The classical blade-element/momentum (BE/M) method, which is used together with different types of corrections (e.g. the Prandtl or Glauert tip correction), is today the most basic tool in the design of wind turbine rotors. However, there are other classical techniques based on a combination...
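As a reminder of what the Prandtl correction referred to above looks like, a sketch of the standard tip-loss factor that multiplies the momentum terms in the BE/M equations (rotor radius, blade count and inflow angle are placeholder values):

```python
import numpy as np

def prandtl_tip_loss(r, R, B, phi):
    """Prandtl tip-loss factor F = (2/pi) * arccos(exp(-f)),
    with f = B (R - r) / (2 r sin(phi)) and phi the local inflow angle [rad]."""
    f = B * (R - r) / (2.0 * r * np.sin(phi))
    return (2.0 / np.pi) * np.arccos(np.exp(-f))

R, B = 40.0, 3                     # rotor radius [m] and blade count (illustrative)
r = np.linspace(5.0, 39.9, 8)      # radial stations along the blade
phi = np.radians(7.0)              # assumed constant inflow angle, for illustration only
print(prandtl_tip_loss(r, R, B, phi))   # F drops towards zero at the tip
```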

  14. Restoration of γ-ray multiplicity distributions from experiments with low efficiency multiplicity filters

    International Nuclear Information System (INIS)

    Bellia, G.; Del Zoppo, A.; Migneco, E.; Russo, G.; Istituto Nazionale di Fisica Nucleare, Catania

    1984-01-01

    The restoration of γ-ray multiplicity distributions from experimental p-fold coincidence distributions is discussed. It is shown that the restoration of the multiplicity from measurements with low total detection efficiency is an 'incorrectly posed problem'. While in the literature the analysis of the experimental data has been attempted only in terms of the lowest central moments of the multiplicity distribution, in this paper an unfolding method based on the minimization of the directioned discrepancies in the probability space is used. The method is found to work very well even if the total efficiency Ω <= 0.1. Realistic tests and a comparison with the usual method of analysis are presented. (orig.)

  15. Characterizations of double pulsing in neutron multiplicity and coincidence counting systems

    Energy Technology Data Exchange (ETDEWEB)

    Koehler, Katrina E., E-mail: kkoehler@lanl.gov [Los Alamos National Laboratory, P. O. Box 1663, Los Alamos, NM 87545 (United States); Henzl, Vladimir [Los Alamos National Laboratory, P. O. Box 1663, Los Alamos, NM 87545 (United States); Croft, Stephen S. [Oak Ridge National Laboratory, 1 Bethel Valley Rd, Oak Ridge, TN 37831 (United States); Henzlova, Daniela; Santi, Peter A. [Los Alamos National Laboratory, P. O. Box 1663, Los Alamos, NM 87545 (United States)

    2016-10-01

    Passive neutron coincidence/multiplicity counters are subject to non-ideal behavior, such as double pulsing and dead time. It has been shown in the past that double-pulsing exhibits a distinct signature in a Rossi-alpha distribution, which is not readily noticed using traditional Multiplicity Shift Register analysis. However, it has been assumed that the use of a pre-delay in shift register analysis removes any effects of double pulsing. In this work, we use high-fidelity simulations accompanied by experimental measurements to study the effects of double pulsing on multiplicity rates. By exploiting the information from the double pulsing signature peak observable in the Rossi-alpha distribution, the double pulsing fraction can be determined. Algebraic correction factors for the multiplicity rates in terms of the double pulsing fraction have been developed. We discuss the role of these corrections across a range of scenarios.

  16. Comparison of a neural network with multiple linear regression for quantitative analysis in ICP-atomic emission spectroscopy

    International Nuclear Information System (INIS)

    Schierle, C.; Otto, M.

    1992-01-01

    A two-layer perceptron with backpropagation of error is used for quantitative analysis in ICP-AES. The network was trained on emission spectra of two interfering lines of Cd and As, and the concentrations of both elements were subsequently estimated from mixture spectra. The spectra of the Cd and As lines were also used to perform multiple linear regression (MLR) via the calculation of the pseudoinverse S{sup +} of the sensitivity matrix S. In the present paper it is shown that there exist close relations between the operation of the perceptron and the MLR procedure. These are most clearly apparent in the correlation between the weights of the backpropagation network and the elements of the pseudoinverse. Using MLR, the confidence intervals of the predictions are exploited to correct for the wavelength shift of the optical device. (orig.)
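A minimal sketch of the MLR step described above: concentrations are recovered by applying the pseudoinverse of the sensitivity matrix to a mixture spectrum. The two Gaussian line shapes stand in for the Cd and As lines and are not the paper's data:

```python
import numpy as np

# Sensitivity matrix S: columns are the pure-component spectra of two
# overlapping lines, rows are wavelength channels (synthetic shapes).
channels = np.arange(64)
line_cd = np.exp(-0.5 * ((channels - 30) / 3.0) ** 2)
line_as = np.exp(-0.5 * ((channels - 34) / 3.0) ** 2)
S = np.column_stack([line_cd, line_as])

true_conc = np.array([1.5, 0.7])
mixture = S @ true_conc + np.random.default_rng(2).normal(0.0, 0.01, 64)

S_pinv = np.linalg.pinv(S)     # the pseudoinverse S+ of the abstract
print(S_pinv @ mixture)        # estimated concentrations, close to [1.5, 0.7]
```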

  17. Characterizing the marker-dye correction for Gafchromic(®) EBT2 film: a comparison of three analysis methods.

    Science.gov (United States)

    McCaw, Travis J; Micka, John A; Dewerd, Larry A

    2011-10-01

    Gafchromic(®) EBT2 film has a yellow marker dye incorporated into the active layer of the film that can be used to correct the film response for small variations in thickness. This work characterizes the effect of the marker-dye correction on the uniformity and uncertainty of dose measurements with EBT2 film. The effect of variations in time postexposure on the uniformity of EBT2 is also investigated. EBT2 films were used to measure the flatness of a (60)Co field to provide a high-spatial resolution evaluation of the film uniformity. As a reference, the flatness of the (60)Co field was also measured with Kodak EDR2 films. The EBT2 films were digitized with a flatbed document scanner 24, 48, and 72 h postexposure, and the images were analyzed using three methods: (1) the manufacturer-recommended marker-dye correction, (2) an in-house marker-dye correction, and (3) a net optical density (OD) measurement in the red color channel. The field flatness was calculated from orthogonal profiles through the center of the field using each analysis method, and the results were compared with the EDR2 measurements. Uncertainty was propagated through a dose calculation for each analysis method. The change in the measured field flatness for increasing times postexposure was also determined. Both marker-dye correction methods improved the field flatness measured with EBT2 film relative to the net OD method, with a maximum improvement of 1% using the manufacturer-recommended correction. However, the manufacturer-recommended correction also resulted in a dose uncertainty an order of magnitude greater than the other two methods. The in-house marker-dye correction lowered the dose uncertainty relative to the net OD method. The measured field flatness did not exhibit any unidirectional change with increasing time postexposure and showed a maximum change of 0.3%. The marker dye in EBT2 can be used to improve the response uniformity of the film. Depending on the film analysis method used

  18. Comparison between TRMM PR and

    Indian Academy of Sciences (India)

    A comparison between TRMM PR rainfall estimates and rain gauge data from ANEEL and combined gauge/satellite ... correctly the of the south Atlantic convergence ... vapor, snow cover, and sea ice derived from SSM/I measurements ...

  19. A Comparison of Single-Cycle Versus Multiple-Cycle Proof Testing Strategies

    Science.gov (United States)

    McClung, R. C.; Chell, G. G.; Millwater, H. R.; Russell, D. A.; Millwater, H. R.

    1999-01-01

    Single-cycle and multiple-cycle proof testing (SCPT and MCPT) strategies for reusable aerospace propulsion system components are critically evaluated and compared from a rigorous elastic-plastic fracture mechanics perspective. Earlier MCPT studies are briefly reviewed. New J-integral estimation methods for semielliptical surface cracks and cracks at notches are derived and validated. Engineering methods are developed to characterize crack growth rates during elastic-plastic fatigue crack growth (FCG) and the tear-fatigue interaction near instability. Surface crack growth experiments are conducted with Inconel 718 to characterize tearing resistance, FCG under small-scale yielding and elastic-plastic conditions, and crack growth during simulated MCPT. Fractography and acoustic emission studies provide additional insight. The relative merits of SCPT and MCPT are directly compared using a probabilistic analysis linked with an elastic-plastic crack growth computer code. The conditional probability of failure in service is computed for a population of components that have survived a previous proof test, based on an assumed distribution of initial crack depths. Parameter studies investigate the influence of proof factor, tearing resistance, crack shape, initial crack depth distribution, and notches on the MCPT versus SCPT comparison. The parameter studies provide a rational basis to formulate conclusions about the relative advantages and disadvantages of SCPT and MCPT. Practical engineering guidelines are proposed to help select the optimum proof test protocol in a given application.

  20. Accounting for Chromatic Atmospheric Effects on Barycentric Corrections

    Energy Technology Data Exchange (ETDEWEB)

    Blackman, Ryan T.; Szymkowiak, Andrew E.; Fischer, Debra A.; Jurgenson, Colby A., E-mail: ryan.blackman@yale.edu [Department of Astronomy, Yale University, 52 Hillhouse Avenue, New Haven, CT 06511 (United States)

    2017-03-01

    Atmospheric effects on stellar radial velocity measurements for exoplanet discovery and characterization have not yet been fully investigated for extreme precision levels. We carry out calculations to determine the wavelength dependence of barycentric corrections across optical wavelengths, due to the ubiquitous variations in air mass during observations. We demonstrate that radial velocity errors of at least several cm s{sup −1} can be incurred if the wavelength dependence is not included in the photon-weighted barycentric corrections. A minimum of four wavelength channels across optical spectra (380–680 nm) are required to account for this effect at the 10 cm s{sup −1} level, with polynomial fits of the barycentric corrections applied to cover all wavelengths. Additional channels may be required in poor observing conditions or to avoid strong telluric absorption features. Furthermore, consistent flux sampling on the order of seconds throughout the observation is necessary to ensure that accurate photon weights are obtained. Finally, we describe how a multiple-channel exposure meter will be implemented in the EXtreme PREcision Spectrograph (EXPRES).
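A sketch of the bookkeeping this implies: per-channel photon weights are used to average a time-varying barycentric correction, and a low-order polynomial in wavelength covers the remaining wavelengths. The flux histories and barycentric-correction values are synthetic placeholders; a real pipeline would take BC(t) from an ephemeris code and the weights from an exposure meter:

```python
import numpy as np

t = np.linspace(0.0, 1800.0, 61)                      # exposure time stamps [s]
bc_of_t = 10_000.0 + 0.3 * t                          # synthetic barycentric correction [m/s]
channels_nm = np.array([400.0, 480.0, 560.0, 640.0])  # four channels across 380-680 nm

# Synthetic chromatic flux histories: bluer channels lose flux faster as air mass grows.
flux = np.array([np.exp(-t / tau) for tau in (900.0, 1200.0, 1600.0, 2200.0)])

# Photon-weighted barycentric correction for each wavelength channel.
bc_channel = (flux * bc_of_t).sum(axis=1) / flux.sum(axis=1)

# Low-order polynomial fit so every extracted wavelength gets a correction.
coeffs = np.polyfit(channels_nm, bc_channel, deg=2)
print(np.polyval(coeffs, np.array([380.0, 525.0, 680.0])))
```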

  1. Improving comparability between microarray probe signals by thermodynamic intensity correction

    DEFF Research Database (Denmark)

    Bruun, G. M.; Wernersson, Rasmus; Juncker, Agnieszka

    2007-01-01

    different probes. It is therefore of great interest to correct for the variation between probes. Much of this variation is sequence dependent. We demonstrate that a thermodynamic model for hybridization of either DNA or RNA to a DNA microarray, which takes the sequence-dependent probe affinities...... determination of transcription start sites for a subset of yeast genes. In another application, we identify present/absent calls for probes hybridized to the sequenced Escherichia coli strain O157:H7 EDL933. The model improves the correct calls from 85 to 95% relative to raw intensity measures. The model thus...... makes applications which depend on comparisons between probes aimed at different sections of the same target more reliable....

  2. Self-corrected chip-based dual-comb spectrometer.

    Science.gov (United States)

    Hébert, Nicolas Bourbeau; Genest, Jérôme; Deschênes, Jean-Daniel; Bergeron, Hugo; Chen, George Y; Khurmi, Champak; Lancaster, David G

    2017-04-03

    We present a dual-comb spectrometer based on two passively mode-locked waveguide lasers integrated in a single Er-doped ZBLAN chip. This original design yields two free-running frequency combs having a high level of mutual stability. We developed in parallel a self-correction algorithm that compensates residual relative fluctuations and yields mode-resolved spectra without the help of any reference laser or control system. Fluctuations are extracted directly from the interferograms using the concept of ambiguity function, which leads to a significant simplification of the instrument that will greatly ease its widespread adoption and commercial deployment. Comparison with a correction algorithm relying on a single-frequency laser indicates discrepancies of only 50 attoseconds on optical timings. The capacities of this instrument are finally demonstrated with the acquisition of a high-resolution molecular spectrum covering 20 nm. This new chip-based multi-laser platform is ideal for the development of high-repetition-rate, compact and fieldable comb spectrometers in the near- and mid-infrared.

  3. The optimal hormonal replacement modality selection for multiple organ procurement from brain-dead organ donors.

    Science.gov (United States)

    Mi, Zhibao; Novitzky, Dimitri; Collins, Joseph F; Cooper, David Kc

    2015-01-01

    The management of brain-dead organ donors is complex. The use of inotropic agents and replacement of depleted hormones (hormonal replacement therapy) is crucial for successful multiple organ procurement, yet the optimal hormonal replacement has not been identified, and the statistical adjustment to determine the best selection is not trivial. Traditional pair-wise comparisons between every pair of treatments, and multiple comparisons to all (MCA), are statistically conservative. Hsu's multiple comparisons with the best (MCB) - adapted from Dunnett's multiple comparisons with control (MCC) - has been used for selecting the best treatment based on continuous variables. We selected the best hormonal replacement modality for successful multiple organ procurement using a two-step approach. First, we estimated the predicted margins by constructing generalized linear models (GLM) or generalized linear mixed models (GLMM), and then we applied the multiple comparison methods to identify the best hormonal replacement modality given that the testing of hormonal replacement modalities is independent. Based on 10-year data from the United Network for Organ Sharing (UNOS), among 16 hormonal replacement modalities, and using the 95% simultaneous confidence intervals, we found that the combination of thyroid hormone, a corticosteroid, antidiuretic hormone, and insulin was the best modality for multiple organ procurement for transplantation.
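
    The selection logic can be pictured with a simplified, self-contained illustration of MCB-style screening: each modality is compared with the best of the others, and a modality is retained if its simultaneous upper bound does not exclude it from being the best. The synthetic data, the binary outcome, and the Bonferroni-style critical value below are assumptions made purely for the example; the study itself worked from GLM/GLMM predicted margins on UNOS data with exact 95% simultaneous intervals.

```python
# Minimal MCB-style screening on synthetic treatment data (NOT the paper's procedure).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_modalities, n_per_group = 6, 200
true_rates = [0.50, 0.55, 0.62, 0.58, 0.40, 0.61]             # hypothetical success rates
data = [rng.binomial(1, p, n_per_group) for p in true_rates]

means = np.array([d.mean() for d in data])
ses = np.array([d.std(ddof=1) / np.sqrt(n_per_group) for d in data])

alpha = 0.05
crit = stats.norm.ppf(1 - alpha / (n_modalities - 1))         # crude Bonferroni-style stand-in for the exact quantile

possibly_best = []
for i in range(n_modalities):
    others = [j for j in range(n_modalities) if j != i]
    j_best = max(others, key=lambda j: means[j])               # best of the other modalities
    upper = (means[i] - means[j_best]) + crit * np.hypot(ses[i], ses[j_best])
    if upper >= 0:                                             # modality i cannot be ruled out as the best
        possibly_best.append(i)

print("modalities retained as possibly best:", possibly_best)
```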

  4. A study of the dosimetry of small field photon beams used in intensity modulated radiation therapy in inhomogeneous media: Monte Carlo simulations, and algorithm comparisons and corrections

    International Nuclear Information System (INIS)

    Jones, Andrew Osler

    2004-01-01

    There is an increasing interest in the use of inhomogeneity corrections for lung, air, and bone in radiotherapy treatment planning. Traditionally, corrections based on physical density have been used. Modern algorithms use the electron density derived from CT images. Small fields are used in both conformal radiotherapy and IMRT; however, their beam characteristics in inhomogeneous media have not been extensively studied. This work compares traditional and modern treatment planning algorithms to Monte Carlo simulations in and near low-density inhomogeneities. Field sizes ranging from 0.5 cm to 5 cm in diameter are projected onto a phantom containing inhomogeneities, and depth dose curves are compared. Comparisons of the Dose Perturbation Factors (DPF) are presented as functions of density and field size. Dose Correction Factors (DCF), which scale the algorithms to the Monte Carlo data, are compared for each algorithm. Physical scaling algorithms such as Batho and Equivalent Pathlength (EPL) predict an increase in dose for small fields passing through lung tissue, where Monte Carlo simulations show a sharp dose drop. The physical model-based collapsed cone convolution (CCC) algorithm correctly predicts the dose drop, but does not accurately predict the magnitude. Because the model-based algorithms do not correctly account for the change in backscatter, the dose drop predicted by CCC occurs farther downstream compared to that predicted by the Monte Carlo simulations. Beyond the tissue inhomogeneity all of the algorithms studied predict dose distributions in close agreement with Monte Carlo simulations. Dose-volume relationships are important in understanding the effects of radiation to the lung. The dose within the lung is affected by a complex function of beam energy, lung tissue density, and field size. Dose algorithms vary in their abilities to correctly predict the dose to the lung tissue. A thorough analysis of the effects of density and field size on dose to the

  5. Cost comparison between private and public collection of residual household waste: Multiple case studies in the Flemish region of Belgium

    International Nuclear Information System (INIS)

    Jacobsen, R.; Buysse, J.; Gellynck, X.

    2013-01-01

    Highlights: ► The goal is to compare collection costs for residual household waste. ► We have clustered all municipalities in order to find mutually comparable pairs. ► Each pair consists of one privately and one publicly operated waste collection program. ► All cases show that private service has lower costs than public service. ► Municipalities were contacted to identify the deeper causes for the waste management program. - Abstract: The rising pressure in terms of cost efficiency on public services pushes governments to transfer part of those services to the private sector. A trend towards more privatizing can be noticed in the collection of municipal household waste. This paper reports the findings of a research project aiming to compare the cost between the service of private and public collection of residual household waste. Multiple case studies of municipalities across the Flemish region of Belgium were conducted. Data concerning the year 2009 were gathered through in-depth interviews in 2010. In total 12 municipalities were investigated, divided into three mutually comparable pairs with a weekly and three mutually comparable pairs with a fortnightly residual waste collection. The results give a rough indication that in all cases the cost of private service is lower than that of public service in the collection of household waste. Although there is an interest in establishing whether there are differences in the costs and service levels between public and private waste collection services, there are clear difficulties in establishing comparisons that can be made without having to rely on a large number of assumptions and corrections. However, given the cost difference, it remains the responsibility of the municipalities to decide upon the service they offer their citizens, regardless of the cost efficiency: public or private.

  6. Cost comparison between private and public collection of residual household waste: Multiple case studies in the Flemish region of Belgium

    Energy Technology Data Exchange (ETDEWEB)

    Jacobsen, R., E-mail: ray.jacobsen@ugent.be [Department of Agricultural Economics, Ghent University, Coupure Links 653, B-9000 Ghent (Belgium); Buysse, J., E-mail: j.buysse@ugent.be [Department of Agricultural Economics, Ghent University, Coupure Links 653, B-9000 Ghent (Belgium); Gellynck, X., E-mail: xavier.gellynck@ugent.be [Department of Agricultural Economics, Ghent University, Coupure Links 653, B-9000 Ghent (Belgium)

    2013-01-15

    Highlights: ► The goal is to compare collection costs for residual household waste. ► We have clustered all municipalities in order to find mutually comparable pairs. ► Each pair consists of one privately and one publicly operated waste collection program. ► All cases show that private service has lower costs than public service. ► Municipalities were contacted to identify the deeper causes for the waste management program. - Abstract: The rising pressure in terms of cost efficiency on public services pushes governments to transfer part of those services to the private sector. A trend towards more privatizing can be noticed in the collection of municipal household waste. This paper reports the findings of a research project aiming to compare the cost between the service of private and public collection of residual household waste. Multiple case studies of municipalities across the Flemish region of Belgium were conducted. Data concerning the year 2009 were gathered through in-depth interviews in 2010. In total 12 municipalities were investigated, divided into three mutually comparable pairs with a weekly and three mutually comparable pairs with a fortnightly residual waste collection. The results give a rough indication that in all cases the cost of private service is lower than that of public service in the collection of household waste. Although there is an interest in establishing whether there are differences in the costs and service levels between public and private waste collection services, there are clear difficulties in establishing comparisons that can be made without having to rely on a large number of assumptions and corrections. However, given the cost difference, it remains the responsibility of the municipalities to decide upon the service they offer their citizens, regardless of the cost efficiency: public or private.

  7. Approximation of Corrected Calcium Concentrations in Advanced Chronic Kidney Disease Patients with or without Dialysis Therapy

    Directory of Open Access Journals (Sweden)

    Yoshio Kaku

    2015-08-01

    Full Text Available Background: The following calcium (Ca) correction formula (Payne) is conventionally used for serum Ca estimation: corrected total Ca (TCa) (mg/dl) = TCa (mg/dl) + [4 - albumin (g/dl)]; however, it is inapplicable to advanced chronic kidney disease (CKD) patients. Methods: 1,922 samples in CKD G4 + G5 patients and 341 samples in CKD G5D patients were collected. Levels of TCa (mg/dl), ionized Ca2+ (iCa2+) (mmol/l) and other clinical parameters were measured. We assumed the corrected TCa to be equal to eight times the iCa2+ value (measured corrected TCa). We subsequently performed stepwise multiple linear regression analysis using the clinical parameters. Results: The following formula was devised from multiple linear regression analysis. For CKD G4 + G5 patients: approximated corrected TCa (mg/dl) = TCa + 0.25 × (4 - albumin) + 4 × (7.4 - pH) + 0.1 × (6 - P) + 0.22. For CKD G5D patients: approximated corrected TCa (mg/dl) = TCa + 0.25 × (4 - albumin) + 0.1 × (6 - P) + 0.05 × (24 - HCO3-) + 0.35. Receiver operating characteristic analysis showed the high values of the area under the curve of approximated corrected TCa for the detection of measured corrected TCa ≥8.4 mg/dl and ≤10.4 mg/dl for each CKD sample. Both intraclass correlation coefficients for each CKD sample demonstrated superior agreement using the new formula compared to the previously reported formulas. Conclusion: Compared to other formulas, the approximated corrected TCa values calculated from the new formula for patients with CKD G4 + G5 and CKD G5D demonstrate superior agreement with the measured corrected TCa.
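
    Because the two approximation formulas are quoted explicitly in the abstract, they can be transcribed directly; the function names, argument order, and example laboratory values below are mine, and the units for P and HCO3- are assumed to be those used by the original study.

```python
# Direct transcription of the approximation formulas quoted in the abstract
# (TCa in mg/dl, albumin in g/dl; P and HCO3- assumed mg/dl and mmol/l).
def corrected_tca_ckd_g4_g5(tca, albumin, ph, p):
    """Approximated corrected total Ca for CKD G4 + G5 patients."""
    return tca + 0.25 * (4 - albumin) + 4 * (7.4 - ph) + 0.1 * (6 - p) + 0.22

def corrected_tca_ckd_g5d(tca, albumin, p, hco3):
    """Approximated corrected total Ca for dialysis (CKD G5D) patients."""
    return tca + 0.25 * (4 - albumin) + 0.1 * (6 - p) + 0.05 * (24 - hco3) + 0.35

# Example with made-up laboratory values:
print(round(corrected_tca_ckd_g4_g5(tca=8.6, albumin=3.2, ph=7.32, p=5.1), 2))
print(round(corrected_tca_ckd_g5d(tca=8.6, albumin=3.2, p=5.1, hco3=20.0), 2))
```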

  8. Corrective Action Decision Document for Corrective Action Unit 536: Area 3 Release Site, Nevada Test Site, Nevada, Revision 0 with Errata

    Energy Technology Data Exchange (ETDEWEB)

    Boehlecke, Robert

    2004-11-01

    corrective action investigation (CAI). Record of Technical Change No. 1 to the CAIP documents changes to the PALs agreed to by the Nevada Division of Environmental Protection (NDEP) and DOE, National Nuclear Security Administration Nevada Site Office (NNSA/NSO). This ROTC provides the justification for changing from background-based to dose-based radiological PALs. This ROTC was approved and the dose-based PAL comparison implemented on March 9, 2004.

  9. Modified protrusion arch for anterior crossbite correction - a case report.

    Science.gov (United States)

    Roy, Abhishek Singha; Singh, Gulshan Kr; Tandon, Pradeep; Chaudhary, Ramsukh

    2013-01-01

    Borderline and mild skeletal Class III relationships in adult patients are usually treated by orthodontic camouflage. Reasonably good results have been achieved with nonsurgical treatment of anterior crossbite. Class III malocclusion may be associated with mandibular prognathism, maxillary retrognathism, or both. Class III maxillary retrognathism generally involves anterior crossbite, which must be opened if upper labial brackets are to be bonded. If multiple teeth are in crossbite, after opening the bite the usual step is to ligate a forward or advancement arch made of 0.018" or 0.020" stainless steel or NiTi wire as the main arch, which must be kept separated 2 mm from the slot of the upper incisor braces. Two stops or omegas are made 1 mm mesial to the tubes of the molar bands to impede the main arch from slipping, and in this manner the arch will push the anterior teeth forward. Here we have fabricated a modified multiple-loop protrusion arch to correct an anterior crossbite with severe crowding that was not amenable to correction by advancement arches.

  10. Operator quantum error-correcting subsystems for self-correcting quantum memories

    International Nuclear Information System (INIS)

    Bacon, Dave

    2006-01-01

    The most general method for encoding quantum information is not to encode the information into a subspace of a Hilbert space, but to encode information into a subsystem of a Hilbert space. Recently this notion has led to a more general notion of quantum error correction known as operator quantum error correction. In standard quantum error-correcting codes, one requires the ability to apply a procedure which exactly reverses on the error-correcting subspace any correctable error. In contrast, for operator error-correcting subsystems, the correction procedure need not undo the error which has occurred, but instead one must perform corrections only modulo the subsystem structure. This does not lead to codes which differ from subspace codes, but does lead to recovery routines which explicitly make use of the subsystem structure. Here we present two examples of such operator error-correcting subsystems. These examples are motivated by simple spatially local Hamiltonians on square and cubic lattices. In three dimensions we provide evidence, in the form of a simple mean-field theory, that our Hamiltonian gives rise to a system which is self-correcting. Such a system will be a natural high-temperature quantum memory, robust to noise without external intervening quantum error-correction procedures.

  11. Implementation of real-time nonuniformity correction with multiple NUC tables using FPGA in an uncooled imaging system

    Science.gov (United States)

    Oh, Gyong Jin; Kim, Lyang-June; Sheen, Sue-Ho; Koo, Gyou-Phyo; Jin, Sang-Hun; Yeo, Bo-Yeon; Lee, Jong-Ho

    2009-05-01

    This paper presents a real-time implementation of non-uniformity correction (NUC). Two-point correction and one-point correction with a shutter were carried out in an uncooled imaging system which will be applied to a missile application. To design a small, lightweight and high-speed imaging system for a missile system, an SoPC (System on a Programmable Chip) comprising an FPGA and a soft core (MicroBlaze) was used. Real-time NUC and generation of control signals are implemented using the FPGA. Also, three different NUC tables were made to shorten the operating time and to reduce the power consumption over a large range of environment temperatures. The imaging system consists of optics and four electronics boards, which are the detector interface board, the analog-to-digital converter board, the detector signal generation board and the power supply board. To evaluate the imaging system, the NETD was measured. The NETD was less than 160 mK at three different environment temperatures.
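
    As a rough illustration of the two-point correction mentioned above (not the FPGA implementation itself), a per-pixel gain and offset can be derived from two uniform blackbody frames and applied to raw frames; in a system with multiple NUC tables, one such gain/offset pair would be stored per environment-temperature range. Frame sizes and values below are synthetic.

```python
# Generic two-point non-uniformity correction (per-pixel gain/offset) from
# frames of a uniform blackbody at two temperatures; a textbook formulation
# shown for illustration only.
import numpy as np

def build_two_point_nuc(cold_frame: np.ndarray, hot_frame: np.ndarray):
    """Return per-pixel gain and offset mapping raw counts onto the frame mean."""
    cold_mean, hot_mean = cold_frame.mean(), hot_frame.mean()
    gain = (hot_mean - cold_mean) / (hot_frame - cold_frame)
    offset = cold_mean - gain * cold_frame
    return gain, offset

def apply_nuc(raw_frame: np.ndarray, gain: np.ndarray, offset: np.ndarray) -> np.ndarray:
    return gain * raw_frame + offset

# Usage with synthetic 4x4 calibration frames carrying a fixed pattern:
rng = np.random.default_rng(1)
fixed_pattern = rng.normal(0, 5, (4, 4))
cold = 100 + fixed_pattern
hot = 200 + 1.1 * fixed_pattern
gain, offset = build_two_point_nuc(cold, hot)
print(apply_nuc(cold, gain, offset).round(2))   # uniform after correction
```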

  12. Matrix elements of the electric multipole transition and relativistic correction operators in the case of complex configurations

    International Nuclear Information System (INIS)

    Kanyauskas, Yu.M.; Rudzikas, Z.B.

    1976-01-01

    Operators and their submatrix elements are studied in the framework of the electric multipole transitions of complex atoms, taking into account relativistic corrections of the order of the square of the fine-structure constant. The analysis is performed by means of irreducible tensor operators and genealogical coefficients. It is assumed that the angular momenta of individual shells are coupled with each other according to ls, lk, jk and jj coupling. Formulas are given for the operator which produces the relativistic corrections to the single-electron multipole transition and for its submatrix element in the case of configurations with two unfilled shells. The possibility of using the suggested formulas for calculations is discussed. As follows from the analysis, the relativistic correction operators allow intercombination transitions with ΔS = ±1 even with pure ls coupling. The expressions obtained may prove useful for performing calculations in the case of an intermediate type of coupling.

  13. Comparison of accuracy of uncorrected and corrected sagittal tomography in detection of mandibular condyle erosions: An ex vivo study

    Directory of Open Access Journals (Sweden)

    Asieh Zamani Naser

    2010-01-01

    Full Text Available Background: Radiographic examination of the TMJ is indicated when there are clinical signs of pathological conditions, mainly bone changes that may influence the diagnosis and treatment planning. The purpose of this study was to evaluate and to compare the validity and diagnostic accuracy of uncorrected and corrected sagittal tomographic images in the detection of simulated mandibular condyle erosions. Methods: Simulated lesions were created in 10 dry mandibles using a dental round bur. Using uncorrected and corrected sagittal tomography techniques, mandibular condyles were imaged by a Cranex Tome X-ray unit before and after creating the lesions. The uncorrected and corrected tomography images were examined by two independent observers for absence or presence of a lesion. The accuracy for detecting mandibular condyle lesions was expressed as sensitivity, specificity, and validity values. Differences between the two radiographic modalities were tested with Wilcoxon tests for paired data. Inter-observer agreement was determined by Cohen's kappa. Results: The sensitivity, specificity and validity were 45%, 85% and 30% in uncorrected sagittal tomographic images, respectively, and 70%, 92.5% and 60% in corrected sagittal tomographic images, respectively. There was a statistically significant difference between the accuracy of uncorrected and corrected sagittal tomography in detection of mandibular condyle erosions (P = 0.016). The inter-observer agreement was slight for uncorrected sagittal tomography and moderate for corrected sagittal tomography. Conclusion: The accuracy of corrected sagittal tomography is significantly higher than that of uncorrected sagittal tomography. Therefore, corrected sagittal tomography seems to be a better modality in detection of mandibular condyle erosions.

  14. Deconvolution based attenuation correction for time-of-flight positron emission tomography

    Science.gov (United States)

    Lee, Nam-Yong

    2017-10-01

    For an accurate quantitative reconstruction of the radioactive tracer distribution in positron emission tomography (PET), we need to take into account the attenuation of the photons by the tissues. For this purpose, we propose an attenuation correction method for the case when a direct measurement of the attenuation distribution in the tissues is not available. The proposed method can determine the attenuation factor up to a constant multiple by exploiting the consistency condition that the exact deconvolution of a noise-free time-of-flight (TOF) sinogram must satisfy. Simulation studies show that the proposed method corrects attenuation artifacts quite accurately for TOF sinograms of a wide range of temporal resolutions and noise levels, and improves the image reconstruction for TOF sinograms of higher temporal resolutions by providing more accurate attenuation correction.

  15. Hadron multiplicity as the limit of jet multiplicity at high resolution

    Energy Technology Data Exchange (ETDEWEB)

    Lupia, S.; Ochs, W. [Max-Planck-Institut fuer Physik, Muenchen (Germany). Werner-Heisenberg-Institut

    1998-05-01

    Recently exact numerical results from the evolution equation for parton multiplicities in QCD jets have been obtained. A comparison with various approximate results is presented. A good description is obtained not only of the jet multiplicities measured at LEP-1 but also of the hadron multiplicities for c.m.s. energies above 1.6 GeV in e⁺e⁻ annihilation. The solution suggests that a final-state hadron can be represented by a jet in the limit of a small (nonperturbative) k⊥ cut-off Q₀. In this description, using as adjustable parameters only the QCD scale Λ and the cut-off Q₀, the coupling α_s can be seen to rise towards large values above unity at low energies. (orig.). 8 refs.

  16. Charged-particle multiplicity at LHC energies

    CERN Multimedia

    CERN. Geneva

    2010-01-01

    The talk presents the measurement of the pseudorapidity density and the multiplicity distribution with ALICE at the achieved LHC energies of 0.9 and 2.36 TeV. An overview of multiplicity measurements prior to LHC is given and the related theoretical concepts are briefly discussed. The analysis procedure is presented and the systematic uncertainties are detailed. The applied acceptance corrections and the treatment of diffraction are discussed. The results are compared with model predictions. The validity of KNO scaling in restricted phase space regions is revisited.

  17. MultiSETTER: web server for multiple RNA structure comparison.

    Science.gov (United States)

    Čech, Petr; Hoksza, David; Svozil, Daniel

    2015-08-12

    Understanding the architecture and function of RNA molecules requires methods for comparing and analyzing their tertiary and quaternary structures. While structural superposition of short RNAs is achievable in a reasonable time, large structures represent a much bigger challenge. Therefore, we have developed a fast and accurate algorithm for RNA pairwise structure superposition called SETTER and implemented it in the SETTER web server. However, though biological relationships can be inferred by a pairwise structure alignment, key features preserved by evolution can be identified only from a multiple structure alignment. Thus, we extended the SETTER algorithm to the alignment of multiple RNA structures and developed the MultiSETTER algorithm. In this paper, we present the updated version of the SETTER web server that implements a user-friendly interface to the MultiSETTER algorithm. The server accepts RNA structures either as the list of PDB IDs or as user-defined PDB files. After the superposition is computed, structures are visualized in 3D and several reports and statistics are generated. To the best of our knowledge, the MultiSETTER web server is the first publicly available tool for a multiple RNA structure alignment. The MultiSETTER server offers the visual inspection of an alignment in 3D space which may reveal structural and functional relationships not captured by other multiple alignment methods based either on a sequence or on secondary structure motifs.

  18. An Experimental Evaluation of Blockage Corrections for Current Turbines

    Science.gov (United States)

    Ross, Hannah; Polagye, Brian

    2017-11-01

    Flow confinement has been shown to significantly alter the performance of turbines that extract power from water currents. These performance effects are related to the degree of constraint, defined by the ratio of turbine projected area to channel cross-sectional area. This quantity is referred to as the blockage ratio. Because it is often desirable to adjust experimental observations in water channels to unconfined conditions, analytical corrections for both wind and current turbines have been derived. These are generally based on linear momentum actuator disk theory but have been applied to turbines without experimental validation. This work tests multiple blockage corrections on performance and thrust data from a cross-flow turbine and porous plates (experimental analogues to actuator disks) collected in laboratory flumes at blockage ratios ranging between 10 and 35%. To isolate the effects of blockage, the Reynolds number, Froude number, and submergence depth were held constant while the channel width was varied. Corrected performance data are compared to performance in a towing tank at a blockage ratio of less than 5%. In addition to examining the accuracy of each correction, underlying assumptions are assessed to determine why some corrections perform better than others. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1256082 and the Naval Facilities Engineering Command (NAVFAC).
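
    For orientation only, the sketch below shows the general shape of such a correction: a velocity rescaling derived from the blockage ratio is propagated to the measured power and thrust coefficients. The specific factor used here (a quarter of the blockage ratio, a wind-tunnel-style rule of thumb) is an assumption for illustration and is not necessarily one of the corrections evaluated in the study.

```python
# Illustrative sketch of how a simple blockage correction rescales measured
# turbine coefficients; the u_ratio rule below is an assumed rule of thumb.
def blockage_ratio(turbine_projected_area_m2: float, channel_area_m2: float) -> float:
    return turbine_projected_area_m2 / channel_area_m2

def correct_coefficients(cp_measured: float, ct_measured: float, blockage: float):
    """Rescale measured power/thrust coefficients toward unconfined conditions."""
    u_ratio = 1.0 + blockage / 4.0            # corrected freestream / measured freestream (assumed rule)
    cp_unconfined = cp_measured / u_ratio**3  # power scales with U^3
    ct_unconfined = ct_measured / u_ratio**2  # thrust scales with U^2
    return cp_unconfined, ct_unconfined

b = blockage_ratio(0.20, 0.80)                # 25% blockage, hypothetical areas
print(correct_coefficients(cp_measured=0.45, ct_measured=0.85, blockage=b))
```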

  19. Comparison of multiple support excitation solution techniques for piping systems

    International Nuclear Information System (INIS)

    Sterkel, H.P.; Leimbach, K.R.

    1980-01-01

    Design and analysis of nuclear power plant piping systems exposed to a variety of dynamic loads often require multiple support excitation analysis by modal or direct time integration methods. Both methods have recently been implemented in the computer program KWUROHR for static and dynamic analysis of piping systems, following the previous implementation of the multiple support excitation response spectrum method (see papers K 6/15 and K 6/15a of the SMiRT-4 Conference). The results of multiple support excitation response spectrum analyses can be examined by carrying out the equivalent time history analyses which do not distort the time phase relationship between the excitations at different support points. A frequent point of discussion is multiple versus single support excitation. A single support excitation analysis is computationally straightforward and tends to be on the conservative side, as the numerical results show. A multiple support excitation analysis, however, does not incur much more additional computer cost than the expenditure for an initial static solution involving three times the number, L, of excitation levels, i.e. 3L static load cases. The results are more realistic than those from a single support excitation analysis. A number of typical nuclear plant piping systems have been analyzed using single and multiple support excitation algorithms for: (1) the response spectrum method, (2) the modal time history method via the Wilson, Newmark and Goldberg integration operators and (3) the direct time history method via the Wilson integration operator. Characteristic results are presented to compare the computational quality of all three methods. (orig.)

  20. Comparison of prostate set-up accuracy and margins with off-line bony anatomy corrections and online implanted fiducial-based corrections

    International Nuclear Information System (INIS)

    Greer, P. B.; Dahl, K.; Ebert, M. A.; Wratten, C.; White, M.; Denham, K. W.

    2008-01-01

    Full text: The aim of the study was to determine prostate set-up accuracy and set-up margins with off-line bony anatomy-based imaging protocols, compared with online implanted fiducial marker-based imaging with daily corrections. Eleven patients were treated with implanted prostate fiducial markers and online set-up corrections. Pretreatment orthogonal electronic portal images were acquired to determine couch shifts, and verification images were acquired during treatment to measure residual set-up error. The prostate set-up errors that would result from skin marker set-up, off-line bony anatomy-based protocols and online fiducial marker-based corrections were determined. Set-up margins were calculated for each set-up technique using the percentage of encompassed isocentres and a margin recipe. The prostate systematic set-up errors in the medial-lateral, superior-inferior and anterior-posterior directions for skin marker set-up were 2.2, 3.6 and 4.5 mm (1 standard deviation). For our bony anatomy-based off-line protocol the prostate systematic set-up errors were 1.6, 2.5 and 4.4 mm. For the online fiducial-based set-up the results were 0.5, 1.4 and 1.4 mm. A prostate systematic error of 10.2 mm was uncorrected by the off-line bone protocol in one patient. Set-up margins calculated to encompass 98% of prostate set-up shifts were 11-14 mm with bone off-line set-up and 4-7 mm with online fiducial markers. Margins from the van Herk margin recipe were generally 1-2 mm smaller. Bony anatomy-based set-up protocols improve the group prostate set-up error compared with skin marks; however, large prostate systematic errors can remain undetected or can increase for individual patients. The margin required for set-up errors was found to be 10-15 mm unless implanted fiducial markers are available for treatment guidance.

  1. Correction for polychromatic X-ray image distortion in computer tomography images

    International Nuclear Information System (INIS)

    1979-01-01

    A method and apparatus are described which correct the polychromatic distortion of CT images that is produced by the non-linear interaction of body constituents with a polychromatic X-ray beam. A CT image is processed to estimate the proportion of the attenuation coefficients of the constituents in each pixel element. A multiplicity of projections for each constituent are generated from the original image and are combined utilizing a multidimensional polynomial which approximates the non-linear interaction involved. An error image is then generated from the combined projections and is subtracted from the original image to correct for the polychromatic distortion. (Auth.)

  2. Measurement of the charged-particle multiplicity in proton-proton collisions with the ALICE detector

    Energy Technology Data Exchange (ETDEWEB)

    Grosse-Oetringhaus, Jan Fiete

    2009-04-17

    This thesis has introduced the theoretical framework to describe multiple-particle production. The functioning of two event generators, Pythia and Phojet, as well as theoretical descriptions of the charged-particle multiplicity have been discussed. A summary of pseudorapidity-density (dN_ch/dη) and multiplicity-distribution measurements of charged particles has been presented. Existing results have been shown in an energy range of √(s) = 6 GeV to 1.8 TeV from bubble chamber experiments and detectors at the ISR, Sp anti pS, and Tevatron. The validity of the introduced models was reviewed and the behavior as function of √(s) was discussed. Analysis procedures for two basic measurements with ALICE, the pseudorapidity density and the multiplicity distribution of charged particles, have been developed. The former allows corrections on a bin-by-bin basis, while the latter requires unfolding of the measured distribution. The procedures have been developed for two independent subdetectors of ALICE, the Silicon Pixel Detector (SPD) and the Time-Projection Chamber (TPC). This allows the comparison of the analysis result in the overlapping regions as an independent cross-check of the measured distribution. Their implementation successfully reproduces different assumed spectra. The procedures have been extensively tested on simulated data using two different event generators, Pythia and Phojet. A comprehensive list of systematic uncertainties was evaluated. Some of these uncertainties still require measured data to verify or extract their magnitude. (orig.)

  3. Measurement of the charged-particle multiplicity in proton-proton collisions with the ALICE detector

    International Nuclear Information System (INIS)

    Grosse-Oetringhaus, Jan Fiete

    2009-01-01

    This thesis has introduced the theoretical framework to describe multiple-particle production. The functioning of two event generators, Pythia and Phojet, as well as theoretical descriptions of the charged-particle multiplicity have been discussed. A summary of pseudorapidity-density (dN ch /dη) and multiplicity-distribution measurements of charged particles has been presented. Existing results have been shown in an energy range of √(s) = 6GeV to 1.8TeV from bubble chamber experiments and detectors at the ISR, Sp anti pS, and Tevatron. The validity of the introduced models was reviewed and the behavior as function of √(s) was discussed. Analysis procedures for two basic measurements with ALICE, the pseudorapidity density and the multiplicity distribution of charged particles, have been developed. The former allows corrections on a bin-by-bin basis, while the latter requires unfolding of the measured distribution. The procedures have been developed for two independent subdetectors of ALICE, the Silicon Pixel Detector (SPD) and the Time-Projection Chamber (TPC). This allows the comparison of the analysis result in the overlapping regions as an independent cross-check of the measured distribution. Their implementation successfully reproduces different assumed spectra. The procedures have been extensively tested on simulated data using two different event generators, Pythia and Phojet. A comprehensive list of systematic uncertainties was evaluated. Some of these uncertainties still require measured data to verify or extract their magnitude. (orig.)

  4. Comparison of results using second-order moments with and without width correction to solve the advection equation

    International Nuclear Information System (INIS)

    Pepper, D.W.; Long, P.E.

    1978-01-01

    The method of moments is used with and without a width-correction technique to solve the advection of a passive scalar. The method of moments is free of numerical dispersion but suffers from numerical diffusion (damping). In order to assess the effect of the width-correction procedure on reducing numerical diffusion, both versions are used to advect a passive scalar in straight-line and rotational wind fields. Although the width-correction procedure reduces numerical diffusion under some circumstances, the unmodified version of the second-moment procedure is better suited as a general method.

  5. The effect of quantum correction on plasma electron heating in ultraviolet laser interaction

    Energy Technology Data Exchange (ETDEWEB)

    Zare, S.; Sadighi-Bonabi, R., E-mail: Sadighi@sharif.ir; Anvari, A. [Department of Physics, Sharif University of Technology, P.O. Box 11365-9567, Tehran (Iran, Islamic Republic of); Yazdani, E. [Department of Energy Engineering and Physics, Amirkabir University of Technology, P.O. Box 15875-4413, Tehran (Iran, Islamic Republic of); Hora, H. [Department of Theoretical Physics, University of New South Wales, Sydney 2052 (Australia)

    2015-04-14

    The interaction of a sub-picosecond UV laser at sub-relativistic intensities with deuterium is investigated. At high plasma temperatures, based on the quantum correction in the collision frequency, the electron heating and the ion block generation in plasma are studied. It is found that, due to the quantum correction, the electron heating increases considerably and the electron temperature uniformly reaches a maximum value of 4.91 × 10⁷ K. With the quantum correction, the electron temperature at the initial laser coupling stage is improved by more than 66.55% relative to the classical model. As a consequence of the modified collision frequency, the ion block is accelerated more quickly and reaches a higher maximum velocity than with the classical collision frequency. This study demonstrates the necessity of considering a quantum mechanical correction in the collision frequency at high plasma temperatures.

  6. Drift Correction of Lightweight Microbolometer Thermal Sensors On-Board Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Francisco-Javier Mesas-Carrascosa

    2018-04-01

    Full Text Available The development of lightweight sensors compatible with mini unmanned aerial vehicles (UAVs) has expanded the agronomical applications of remote sensing. Of particular interest in this paper are thermal sensors based on lightweight microbolometer technology. These are mainly used to assess crop water stress with thermal images, where an accuracy better than 1 °C is necessary. However, these sensors lack precise temperature control, resulting in thermal drift during image acquisition that requires correction. Currently, there are several strategies to manage the thermal drift effect. However, these strategies reduce useful flight time over crops due to the additional in-flight calibration operations. This study presents a drift correction methodology for microbolometer sensors based on redundant information from multiple overlapping images. An empirical study was performed in an orchard of high-density hedgerow olive trees with flights at different times of the day. Six mathematical drift correction models were developed and assessed to explain and correct the drift effect on thermal images. Using the proposed methodology, the resulting thermally corrected orthomosaics yielded an error lower than 1 °C compared to those where no drift correction was applied.
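
    One very reduced way to picture the use of redundant overlap information (not one of the six models developed in the paper) is to treat drift as a per-image additive offset and solve for the offsets, in a least-squares sense, from the mean temperature differences observed in overlapping image pairs. The pair differences and image count below are invented.

```python
# Minimal sketch: estimate per-image additive drift offsets from mean overlap
# differences diff(i, j) ≈ d_i - d_j, then subtract them from each image.
import numpy as np

def estimate_drift_offsets(pair_differences: dict, n_images: int) -> np.ndarray:
    """Least-squares offsets d_i (with d_0 fixed to 0) from overlapping pairs."""
    rows, rhs = [], []
    for (i, j), diff in pair_differences.items():
        row = np.zeros(n_images)
        row[i], row[j] = 1.0, -1.0
        rows.append(row)
        rhs.append(diff)
    rows.append(np.eye(n_images)[0])   # gauge constraint: first image has zero drift
    rhs.append(0.0)
    offsets, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return offsets

# Mean overlap differences (degrees C) between image pairs, made up for the example:
diffs = {(1, 0): 0.4, (2, 1): 0.3, (2, 0): 0.75, (3, 2): 0.2}
print(estimate_drift_offsets(diffs, n_images=4).round(2))
```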

  7. Implementation of electroweak corrections in the POWHEG BOX: single W production

    CERN Document Server

    Barzè, L; Nason, P; Nicrosini, O; Piccinini, F

    2012-01-01

    We present a fully consistent implementation of electroweak and strong radiative corrections to single W hadroproduction in the POWHEG BOX framework, treating soft and collinear photon emissions on the same footing as coloured parton emissions. This framework can be easily extended to more complex electroweak processes. We describe how next-to-leading order (NLO) electroweak corrections are combined with the NLO QCD calculation, and show how they are interfaced to QCD and QED shower Monte Carlo. The resulting tool fills a gap in the literature and allows the interplay of QCD and electroweak effects in W production to be studied comprehensively within a single computational framework. Numerical comparisons with the predictions of the electroweak generator HORACE, as well as with existing results on the combination of electroweak and QCD corrections to W production, are shown for LHC energies, to validate the reliability and accuracy of the approach.

  8. The optimal hormonal replacement modality selection for multiple organ procurement from brain-dead organ donors

    Directory of Open Access Journals (Sweden)

    Mi Z

    2014-12-01

    Full Text Available Zhibao Mi,1 Dimitri Novitzky,2 Joseph F Collins,1 David KC Cooper3 1Cooperative Studies Program Coordinating Center, VA Maryland Health Care Systems, Perry Point, MD, USA; 2Department of Cardiothoracic Surgery, University of South Florida, Tampa, FL, USA; 3Thomas E Starzl Transplantation Institute, University of Pittsburgh, Pittsburgh, PA, USA Abstract: The management of brain-dead organ donors is complex. The use of inotropic agents and replacement of depleted hormones (hormonal replacement therapy) is crucial for successful multiple organ procurement, yet the optimal hormonal replacement has not been identified, and the statistical adjustment to determine the best selection is not trivial. Traditional pair-wise comparisons between every pair of treatments, and multiple comparisons to all (MCA), are statistically conservative. Hsu's multiple comparisons with the best (MCB) – adapted from Dunnett's multiple comparisons with control (MCC) – has been used for selecting the best treatment based on continuous variables. We selected the best hormonal replacement modality for successful multiple organ procurement using a two-step approach. First, we estimated the predicted margins by constructing generalized linear models (GLM) or generalized linear mixed models (GLMM), and then we applied the multiple comparison methods to identify the best hormonal replacement modality given that the testing of hormonal replacement modalities is independent. Based on 10-year data from the United Network for Organ Sharing (UNOS), among 16 hormonal replacement modalities, and using the 95% simultaneous confidence intervals, we found that the combination of thyroid hormone, a corticosteroid, antidiuretic hormone, and insulin was the best modality for multiple organ procurement for transplantation. Keywords: best treatment selection, brain-dead organ donors, hormonal replacement, multiple binary endpoints, organ procurement, multiple comparisons

  9. Should total landings be used to correct estimated catch in numbers or mean-weight-at-age?

    DEFF Research Database (Denmark)

    Lewy, Peter; Lassen, H.

    1997-01-01

    Many ICES fish stock assessment working groups have practised Sum Of Products, SOP, correction. This correction stems from a comparison of total weights of the known landings and the SOP over age of catch in number and mean weight-at-age, which ideally should be identical. In case of SOP...... discrepancies some countries correct catch in numbers while others correct mean weight-at-age by a common factor, the ratio between landing and SOP. The paper shows that for three sampling schemes the SOP corrections are statistically incorrect and should not be made since the SOP is an unbiased estimate...... of the total landings. Calculation of the bias of estimated catch in numbers and mean weight-at-age shows that SOP corrections of either of these estimates may increase the bias. Furthermore, for five demersal and one pelagic North Sea species it is shown that SOP discrepancies greater than 2% from...
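
    The SOP bookkeeping under discussion is simple to state in code: the sum of products of catch-at-age numbers and mean weight-at-age is compared with the reported landings, and the ratio has traditionally been applied as a common scaling factor to one of the two (the practice this paper argues against). The figures below are invented.

```python
# Sketch of the Sum-Of-Products (SOP) check and the traditional correction factor.
catch_numbers_at_age = [1200.0, 950.0, 400.0, 150.0]        # thousands of fish per age class (invented)
mean_weight_at_age_kg = [0.21, 0.38, 0.62, 0.90]            # kg per fish (invented)
reported_landings_tonnes = 800.0

# thousands of fish * kg per fish = tonnes
sop_tonnes = sum(n * w for n, w in zip(catch_numbers_at_age, mean_weight_at_age_kg))
sop_factor = reported_landings_tonnes / sop_tonnes

print(f"SOP = {sop_tonnes:.1f} t, discrepancy factor = {sop_factor:.3f}")
# Traditional practice: scale either catch numbers or mean weights by sop_factor, e.g.
corrected_numbers = [n * sop_factor for n in catch_numbers_at_age]
```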

  10. Gamma camera correction system and method for using the same

    International Nuclear Information System (INIS)

    Inbar, D.; Gafni, G.; Grimberg, E.; Bialick, K.; Koren, J.

    1986-01-01

    A gamma camera is described which consists of: (a) a detector head that includes photodetectors for producing output signals in response to radiation stimuli which are emitted by a radiation field and which interact with the detector head and produce an event; (b) signal processing circuitry responsive to the output signals of the photodetectors for producing a sum signal that is a measure of the total energy of the event; (c) an energy discriminator having a relatively wide window for comparison with the sum signal; (d) the signal processing circuitry including coordinate computation circuitry for operating on the output signals, and calculating an X,Y coordinate of an event when the sum signal lies within the window of the energy discriminator; (e) an energy correction table containing spatially dependent energy windows for producing a validation signal if the total energy of an event lies within the window associated with the X,Y coordinates of the event; (f) the signal processing circuitry including a dislocation correction table containing spatially dependent correction factors for converting the X,Y coordinates of an event to relocated coordinates in accordance with correction factors determined by the X,Y coordinates; (g) a digital memory for storing a map of the radiation field; and (h) means for recording an event at its relocated coordinates in the memory if the energy correction table produces a validation signal
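
    A schematic rendering of steps (e) through (g) above, with purely illustrative array shapes and window values: an event's summed energy is validated against the spatially dependent window for its (X,Y) cell and, if accepted, its coordinates are relocated through the dislocation-correction table before the event is recorded in the image memory.

```python
# Toy version of the spatially dependent energy validation and dislocation
# correction described in the claim; table contents are placeholders.
import numpy as np

GRID = 64
energy_lo = np.full((GRID, GRID), 126.0)      # keV, per-region lower window bound
energy_hi = np.full((GRID, GRID), 154.0)      # keV, per-region upper window bound
shift_x = np.zeros((GRID, GRID))              # dislocation-correction tables
shift_y = np.zeros((GRID, GRID))
image = np.zeros((GRID, GRID), dtype=np.int64)

def record_event(x: int, y: int, energy_kev: float) -> bool:
    """Validate the event energy for its (x, y) cell, relocate, and record it."""
    if not (energy_lo[y, x] <= energy_kev <= energy_hi[y, x]):
        return False                           # outside the spatially dependent window
    xr = int(round(x + shift_x[y, x]))
    yr = int(round(y + shift_y[y, x]))
    if 0 <= xr < GRID and 0 <= yr < GRID:
        image[yr, xr] += 1
        return True
    return False

record_event(10, 12, 140.2)                    # accepted and recorded
record_event(10, 12, 90.0)                     # rejected by the energy window
```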

  11. A promising hybrid approach to SPECT attenuation correction

    International Nuclear Information System (INIS)

    Lewis, N.H.; Faber, T.L.; Corbett, J.R.; Stokely, E.M.

    1984-01-01

    Most methods for attenuation compensation in SPECT either rely on the assumption of uniform attenuation, or use slow iteration to achieve accuracy. However, hybrid methods that combine iteration with simple multiplicative correction can accommodate nonuniform attenuation, and such methods converge faster than other iterative techniques. The authors evaluated two such methods, which differ in use of a damping factor to control convergence. Both uniform and nonuniform attenuation were modeled, using simulated and phantom data for a rotating gamma camera. For simulations done with 360 0 data and the correct attenuation map, activity levels were reconstructed to within 5% of the correct values after one iteration. Using 180 0 data, reconstructed levels in regions representing lesion and background were within 5% of the correct values in three iterations; however, further iterations were needed to eliminate the characteristic streak artifacts. The damping factor had little effect on 360 0 reconstruction, but was needed for convergence with 180 0 data. For both cold- and hot-lesion models, image contrast was better from the hybrid methods than from the simpler geometric-mean corrector. Results from the hybrid methods were comparable to those obtained using the conjugate-gradient iterative method, but required 50-100% less reconstruction time. The relative speed of the hybrid methods, and their accuracy in reconstructing photon activity in the presence of nonuniform attenuation, make them promising tools for quantitative SPECT reconstruction

  12. Coincidence corrections for a multi-detector gamma spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Britton, R., E-mail: r.britton@surrey.ac.uk [University of Surrey, Guildford GU2 7XH (United Kingdom); AWE, Aldermaston, Reading, Berkshire RG7 4PR (United Kingdom); Burnett, J.L.; Davies, A.V. [AWE, Aldermaston, Reading, Berkshire RG7 4PR (United Kingdom); Regan, P.H. [University of Surrey, Guildford GU2 7XH (United Kingdom)

    2015-01-01

    List-mode data acquisition has been utilised in conjunction with a high-efficiency γ–γ coincidence system, allowing both the energetic and temporal information to be retained for each recorded event. Collected data is re-processed multiple times to extract any coincidence information from the γ-spectroscopy system, correct for the time-walk of low-energy events, and remove accidental coincidences from the projected coincidence spectra. The time-walk correction has resulted in a reduction in the width of the coincidence delay gate of 18.4±0.4%, and thus an equivalent removal of 'background' coincidences. The correction factors were applied to ∼5.6% of events up to ∼500 keV for a combined ¹³⁷Cs and ⁶⁰Co source, and are crucial for accurate coincidence measurements of low-energy events that may otherwise be missed by a standard delay gate. By extracting both the delay gate and a representative 'background' region for the coincidences, a coincidence background subtracted spectrum is projected from the coincidence matrix, which effectively removes ∼100% of the accidental coincidences (up to 16.6±0.7% of the total coincidence events seen during this work). This accidental-coincidence removal is crucial for accurate characterisation of the events seen in coincidence systems, as without this correction false coincidence signatures may be incorrectly interpreted.
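
    The accidental-coincidence removal described above can be pictured with a toy projection: coincidence events are selected both inside the prompt/delay gate and inside an equally wide, representative background region of the time-difference spectrum, and the background projection is subtracted. The timing and energy values below are synthetic and the gate positions are arbitrary.

```python
# Toy accidental-coincidence subtraction on synthetic list-mode pairs.
import numpy as np

rng = np.random.default_rng(2)
n_events = 50_000
dt_ns = rng.uniform(-500, 500, n_events)                      # time differences of recorded pairs
energy_kev = rng.uniform(0, 1500, n_events)                   # energy in one detector, synthetic

prompt_gate = np.abs(dt_ns) < 50                              # true + accidental coincidences
background_gate = np.abs(dt_ns - 300) < 50                    # accidentals only, same gate width

bins = np.arange(0, 1501, 10)
prompt_spectrum, _ = np.histogram(energy_kev[prompt_gate], bins=bins)
background_spectrum, _ = np.histogram(energy_kev[background_gate], bins=bins)

net_spectrum = prompt_spectrum - background_spectrum          # accidental-subtracted projection
print(int(net_spectrum.sum()))
```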

  13. Correction of gene expression data: Performance-dependency on inter-replicate and inter-treatment biases.

    Science.gov (United States)

    Darbani, Behrooz; Stewart, C Neal; Noeparvar, Shahin; Borg, Søren

    2014-10-20

    This report investigates for the first time the potential inter-treatment bias source of cell number for gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies. For maximal reliability of analysis, therefore, comparisons should be performed at the cellular level. This could be accomplished using an appropriate correction method that can detect and remove the inter-treatment bias for cell-number. Based on inter-treatment variations of reference genes, we introduce an analytical approach to examine the suitability of correction methods by considering the inter-treatment bias as well as the inter-replicate variance, which allows use of the best correction method with minimum residual bias. Analyses of RNA sequencing and microarray data showed that the efficiencies of correction methods are influenced by the inter-treatment bias as well as the inter-replicate variance. Therefore, we recommend inspecting both of the bias sources in order to apply the most efficient correction method. As an alternative correction strategy, sequential application of different correction approaches is also advised. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Practical aspects of phase correction determination for gauge blocks measured by optical interferometry

    International Nuclear Information System (INIS)

    Ramotowski, Zbigniew; Salbut, Leszek

    2012-01-01

    Determination of a phase correction is necessary when making interferometric measurements of gauge blocks with an auxiliary platen. The phase correction compensates for the differences in the reflecting properties of the gauge block and the platen surfaces. Different phase corrections are reported for gauge blocks of different manufacturers, made from different materials and with different surface roughness compared to the platen. In this paper, the process of selection of the best surface roughness parameter and the influence of different complex refractive indices of the same type of material are analysed. The new surface roughness parameter based on the difference between the weighted mean of maximum and minimum asperities of 3D surface roughness measured by a modernized Linnik phase shifting interferometer is introduced. The results of comparison of the phase correction values calculated from the difference between the weighted mean values and calculated from stack method measurements are presented and discussed. The complementary method of phase correction measurement based on the cross-wringing method with the use of the modernized phase shifting Kösters interferometer is proposed. (paper)

  15. Comparison of Ordinary Kriging and Multiple Indicator Kriging ...

    African Journals Online (AJOL)

    Michael O. Mensah

    Multiple Indicator Kriging (MIK) is one of the popular non-linear methods that can handle skewed distribution such as that for gold ... historical deposits: Nkran, Adubia, Abore, and a ... information from the mine on the geology of the deposit.

  16. Does Correct Answer Distribution Influence Student Choices When Writing Multiple Choice Examinations?

    Science.gov (United States)

    Carnegie, Jacqueline A.

    2017-01-01

    Summative evaluation for large classes of first- and second-year undergraduate courses often involves the use of multiple choice question (MCQ) exams in order to provide timely feedback. Several versions of those exams are often prepared via computer-based question scrambling in an effort to deter cheating. An important parameter to consider when…

  17. Optimizing multiple-choice tests as tools for learning.

    Science.gov (United States)

    Little, Jeri L; Bjork, Elizabeth Ligon

    2015-01-01

    Answering multiple-choice questions with competitive alternatives can enhance performance on a later test, not only on questions about the information previously tested, but also on questions about related information not previously tested-in particular, on questions about information pertaining to the previously incorrect alternatives. In the present research, we assessed a possible explanation for this pattern: When multiple-choice questions contain competitive incorrect alternatives, test-takers are led to retrieve previously studied information pertaining to all of the alternatives in order to discriminate among them and select an answer, with such processing strengthening later access to information associated with both the correct and incorrect alternatives. Supporting this hypothesis, we found enhanced performance on a later cued-recall test for previously nontested questions when their answers had previously appeared as competitive incorrect alternatives in the initial multiple-choice test, but not when they had previously appeared as noncompetitive alternatives. Importantly, however, competitive alternatives were not more likely than noncompetitive alternatives to be intruded as incorrect responses, indicating that a general increased accessibility for previously presented incorrect alternatives could not be the explanation for these results. The present findings, replicated across two experiments (one in which corrective feedback was provided during the initial multiple-choice testing, and one in which it was not), thus strongly suggest that competitive multiple-choice questions can trigger beneficial retrieval processes for both tested and related information, and the results have implications for the effective use of multiple-choice tests as tools for learning.

  18. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies

    DEFF Research Database (Denmark)

    Tybjærg-Hansen, Anne

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements...... of the risk factors are observed on a subsample. We extend the multivariate RC techniques to a meta-analysis framework where multiple studies provide independent repeat measurements and information on disease outcome. We consider the cases where some or all studies have repeat measurements, and compare study......-specific, averaged and empirical Bayes estimates of RC parameters. Additionally, we allow for binary covariates (e.g. smoking status) and for uncertainty and time trends in the measurement error corrections. Our methods are illustrated using a subset of individual participant data from prospective long-term studies...
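
    A deliberately simplified, univariate version of regression calibration conveys the core idea without the multivariate, meta-analytic machinery of the paper: replicate measurements give the within-person variance, from which an attenuation (regression dilution) factor is estimated and used to scale up the naive association. All numbers below are invented.

```python
# Univariate regression-calibration sketch: estimate the attenuation factor from
# replicate measurements and divide the naive coefficient by it.
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_repeats = 500, 2
true_exposure = rng.normal(5.0, 1.0, n_subjects)
measurements = true_exposure[:, None] + rng.normal(0, 0.8, (n_subjects, n_repeats))

within_var = measurements.var(axis=1, ddof=1).mean()                  # within-person variance
between_var = measurements.mean(axis=1).var(ddof=1) - within_var / n_repeats
attenuation = between_var / (between_var + within_var)                # dilution for a single measurement

naive_beta = 0.15                                                     # hypothetical association from one baseline measurement
corrected_beta = naive_beta / attenuation
print(round(attenuation, 3), round(corrected_beta, 3))
```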

  19. Two-flavor QCD correction to lepton magnetic moments at leading-order in the electromagnetic coupling

    Energy Technology Data Exchange (ETDEWEB)

    Dru Renner, Xu Feng, Karl Jansen, Marcus Petschlies

    2011-08-01

    We present a reliable nonperturbative calculation of the QCD correction, at leading-order in the electromagnetic coupling, to the anomalous magnetic moment of the electron, muon and tau leptons using two-flavor lattice QCD. We use multiple lattice spacings, multiple volumes and a broad range of quark masses to control the continuum, infinite-volume and chiral limits. We examine the impact of the commonly ignored disconnected diagrams and introduce a modification to the previously used method that results in a well-controlled lattice calculation. We obtain 1.513(43) × 10⁻¹², 5.72(16) × 10⁻⁸ and 2.650(54) × 10⁻⁶ for the leading-order QCD correction to the anomalous magnetic moment of the electron, muon and tau respectively, each accurate to better than 3%.

  20. Possibilities of pharmacologic correction of cognitive disorders in conditions of experimental equivalent of multiple sclerosis

    Directory of Open Access Journals (Sweden)

    Nefyodov A.A.

    2015-06-01

    Full Text Available A comparative analysis of the impact of citicoline, α-lipoic acid, nicergoline, donepezil and a colloidal solution of nano-silver (CSNS) on the processes of learning and consolidation of the memory trace in the conditional reaction of passive avoidance (CRPA) test under conditions of experimental allergic encephalomyelitis (EAE) was conducted. Testing of the passive defensive skill was performed on days 12 and 20 after the induction of EAE. To assess the impact of the drugs on information-input processes, the investigated substances were administered intragastrically (CSNS intraperitoneally) once daily in the defined dose from day 2 to day 10 after the induction of EAE (the latent phase of the disease), and to assess the preservation of the conditional skill, administration of the drugs was continued to day 20 of the experiment (the average duration of EAE). A positive effect of citicoline, α-lipoic acid, nicergoline and donepezil on information-input processes and the ability to prevent accelerated extinction of the acquired conditional skill under conditions of the experimental pathology was established. The drugs statistically significantly increased the duration of the latent period of CRPA in comparison with the active control group by 49%, 43%, 39% and 34%, respectively. The preparations were also characterized by a high coefficient of antiamnesic activity, which by the end of the experiment was recorded at the level of 95% (citicoline), 81% (α-lipoic acid), 76% (nicergoline) and 53% (donepezil). It is shown that the ability to prevent the development of cognitive impairment under conditions of an experimental equivalent of multiple sclerosis decreases in the order citicoline (500 mg/kg) > α-lipoic acid (50 mg/kg) ≈ nicergoline (10 mg/kg) > donepezil (10 mg/kg).

  1. The Differential Effect of Two Types of Direct Written Corrective Feedback on Noticing and Uptake: Reformulation vs. Error Correction

    Directory of Open Access Journals (Sweden)

    Rosa M. Manchón

    2010-06-01

    Framed in a cognitively-oriented strand of research on corrective feedback (CF) in SLA, the controlled three-stage (composition/comparison-noticing/revision) study reported in this paper investigated the effects of two forms of direct CF (error correction and reformulation) on noticing and uptake, as evidenced in the written output produced by a group of 8 secondary school EFL learners. Noticing was operationalized as the amount of corrections noticed in the comparison stage of the writing task, whereas uptake was operationally defined as the type and amount of accurate revisions incorporated in the participants' revised versions of their original texts. Results support previous research findings on the positive effects of written CF on noticing and uptake, with a clear advantage of error correction over reformulation as far as uptake was concerned. Data also point to the existence of individual differences in the way EFL learners process and make use of CF in their writing. These findings are discussed from the perspective of the light they shed on the learning potential of CF in instructed SLA, and suggestions for future research are put forward.

  2. Barkas effect, shell correction, screening and correlation in collisional energy-loss straggling of an ion beam

    CERN Document Server

    Sigmund, P

    2003-01-01

    Collisional electronic energy-loss straggling has been treated theoretically on the basis of the binary theory of electronic stopping. In view of the absence of a Bloch correction in straggling the range of validity of the theory includes both the classical and the Born regime. The theory incorporates Barkas effect and projectile screening. Shell correction and electron bunching are added on. In the absence of shell corrections the Barkas effect has a dominating influence on straggling, but much of this is wiped out when the shell correction is included. Weak projectile screening tends to noticeably reduce collisional straggling. Sizable bunching effects are found in particular for heavy ions. Comparisons are made with selected results of the experimental and theoretical literature. (authors)

  3. Physical and social environment and the risk of multiple sclerosis

    DEFF Research Database (Denmark)

    Magyari, Melinda; Koch-Henriksen, Nils; Pfleger, Claudia C.

    2014-01-01

    Objective: To examine whether factors in the physical or social environment influence the risk of MS differently in women than in men. Methods: The cohort consists of all 1403 patients (939 women, 464 men) identified through the Danish Multiple Sclerosis Registry, aged 1-55 years at clinical onset between 2000 and 2004, and up to 25 control persons for each case, matched by sex, year of birth and residential municipality. The same cohort was previously used to investigate the influence of reproductive factors on the risk of MS. Results: By linkage to Danish population registers we found a slight albeit statistically significant excess for the 6 female MS patients who had been employed in agriculture: OR 3.52; 95% CI 1.38-9.00, p=0.008 (0.046 when corrected for multiple significance), and a trend for exposure to outdoor work in 12: OR 1.94, 95% CI 1.06-3.55, p=0.03 (0.09 when corrected for multiple significance), but the numbers of cases were small...
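
    The "corrected for multiple significance" p-values quoted above are, in spirit, Bonferroni-type adjustments. A minimal sketch of the plain Bonferroni version; the six p-values below are made up for illustration and are not the study's actual test family.

        def bonferroni(pvals):
            # multiply each p-value by the number of tests, capping at 1.0
            m = len(pvals)
            return [min(1.0, p * m) for p in pvals]

        raw = [0.008, 0.03, 0.20, 0.45, 0.61, 0.77]   # hypothetical family of six tests
        print(bonferroni(raw))                        # 0.008 becomes 0.048 with six comparisons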

  4. Long-term comparison of temperature measurements by the multi-plate shield and Czech-Slovak thermometer screen

    Energy Technology Data Exchange (ETDEWEB)

    Mozny, Martin; Stepanek, Petr; Hajkova, Lenka; Bares, Daniel [Doksany Observatory, Doksany (Czech Republic). Czech Hydrometeorological Inst.; Trnka, Mirek [Academy of Science of the Czech Republic, Brno (Czech Republic). Global Change Research Centre; Zalud, Zdenek; Semeradova, Daniela [Mendel Univ., Brno (Czech Republic). Agrosystems and Bioclimatology; Koznarova, Vera [Czech Univ. of Life Sciences, Prague (Czech Republic). Dept. of Agroecology and Biometeorology

    2012-04-15

    Differences between measurements taken with the Czech-Slovak thermometer screen (TS) and the multiplate radiation shield (MRS) should not be neglected. The average difference between the TS and the MRS measurements varied between 0.3 and 2.8 °C during suitable weather conditions (wind speed less than 3 m/s, bright and sunny day) throughout the year, during both daytime and nighttime hours. A 10-year time series of comparative measurements in Doksany, Czech Republic, showed that relative to TS, measurements from MRS yielded average and minimum air temperatures that were lower in the winter and higher in the summer. Daily maximum air temperatures were lower for MRS than TS throughout the year. The greatest differences were observed in the maximum air temperatures; only 62% of all differences between the TS and MRS were less than 0.5 °C, and 70% were less than 1 °C. Among minimum air temperatures, 60% of differences were less than 0.5 °C, and 79% were less than 1 °C. In contrast, 74% of all differences in average daily temperature were less than 0.5 °C, and 97% were less than 1 °C. The use of temperature measurements from multiple types of equipment may negatively affect inference from climate and hydro-meteorological models. Irregular temperature data could be corrected using a simulation of temperature differences (SITEDI) model, which incorporates differences between the MRS and the TS. It is important to consider whether temperature data in the Czech Republic and Slovakia come from the TS or the MRS when analyzing and modeling temperature in Central Europe. (orig.)

  5. Analysis and prediction of Multiple-Site Damage (MSD) fatigue crack growth

    Science.gov (United States)

    Dawicke, D. S.; Newman, J. C., Jr.

    1992-08-01

    A technique was developed to calculate the stress intensity factor for multiple interacting cracks. The analysis was verified through comparison with accepted methods of calculating stress intensity factors. The technique was incorporated into a fatigue crack growth prediction model and used to predict the fatigue crack growth life for multiple-site damage (MSD). The analysis was verified through comparison with experiments conducted on uniaxially loaded flat panels with multiple cracks. Configurations with nearly equal and unequal crack distributions were examined. The fatigue crack growth predictions agreed within 20 percent of the experimental lives for all crack configurations considered.
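
    Fatigue-life predictions of this kind are typically obtained by integrating a crack-growth law over the computed stress intensity range. The sketch below integrates the generic Paris law for a single crack with a constant geometry factor; it does not reproduce the authors' multiple-crack interaction analysis, and the material constants are placeholders.

        import math

        def paris_law_life(a0, a_crit, delta_sigma, C=1e-11, m=3.0, beta=1.0, da=1e-5):
            # integrate da/dN = C * (delta_K)^m from initial crack size a0 to a_crit
            # delta_sigma in MPa, crack sizes in metres; C, m, beta are placeholder values
            a, cycles = a0, 0.0
            while a < a_crit:
                delta_K = beta * delta_sigma * math.sqrt(math.pi * a)  # stress intensity range
                cycles += da / (C * delta_K ** m)                      # cycles spent growing by da
                a += da
            return cycles

        print(paris_law_life(a0=0.001, a_crit=0.02, delta_sigma=100.0))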

  7. A sun-crown-sensor model and adapted C-correction logic for topographic correction of high resolution forest imagery

    Science.gov (United States)

    Fan, Yuanchao; Koukal, Tatjana; Weisberg, Peter J.

    2014-10-01

    Canopy shadowing mediated by topography is an important source of radiometric distortion on remote sensing images of rugged terrain. Topographic correction based on the sun-canopy-sensor (SCS) model significantly improved over those based on the sun-terrain-sensor (STS) model for surfaces with high forest canopy cover, because the SCS model considers and preserves the geotropic nature of trees. The SCS model accounts for sub-pixel canopy shadowing effects and normalizes the sunlit canopy area within a pixel. However, it does not account for mutual shadowing between neighboring pixels. Pixel-to-pixel shadowing is especially apparent for fine resolution satellite images in which individual tree crowns are resolved. This paper proposes a new topographic correction model: the sun-crown-sensor (SCnS) model based on high-resolution satellite imagery (IKONOS) and high-precision LiDAR digital elevation model. An improvement on the C-correction logic with a radiance partitioning method to address the effects of diffuse irradiance is also introduced (SCnS + C). In addition, we incorporate a weighting variable, based on pixel shadow fraction, on the direct and diffuse radiance portions to enhance the retrieval of at-sensor radiance and reflectance of highly shadowed tree pixels and form another variety of SCnS model (SCnS + W). Model evaluation with IKONOS test data showed that the new SCnS model outperformed the STS and SCS models in quantifying the correlation between terrain-regulated illumination factor and at-sensor radiance. Our adapted C-correction logic based on the sun-crown-sensor geometry and radiance partitioning better represented the general additive effects of diffuse radiation than C parameters derived from the STS or SCS models. The weighting factor Wt also significantly enhanced correction results by reducing within-class standard deviation and balancing the mean pixel radiance between sunlit and shaded slopes. We analyzed these improvements with model
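
    For orientation, the standard SCS and C-correction forms that the proposed SCnS + C model extends are simple per-pixel rescalings of radiance by illumination terms. A minimal numpy sketch, assuming the per-pixel angle rasters (slope, aspect, solar geometry) are already available as flattened arrays; the SCnS shadow-fraction weighting itself is not reproduced here.

        import numpy as np

        def illumination(cos_sz, sin_sz, slope, aspect, sun_azimuth):
            # cos(i): terrain-regulated illumination factor (all angles in radians)
            return (np.cos(slope) * cos_sz
                    + np.sin(slope) * sin_sz * np.cos(sun_azimuth - aspect))

        def scs_correction(radiance, cos_i, cos_sz, cos_slope):
            # sun-canopy-sensor form: L_corr = L * cos(sz) * cos(slope) / cos(i)
            return radiance * (cos_sz * cos_slope) / cos_i

        def c_correction(radiance, cos_i, cos_sz):
            # C-correction: L_corr = L * (cos(sz) + c) / (cos(i) + c), with
            # c = intercept / slope from regressing radiance on cos(i)
            slope, intercept = np.polyfit(cos_i, radiance, 1)
            c = intercept / slope
            return radiance * (cos_sz + c) / (cos_i + c)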

  8. Radiometric Correction of Close-Range Spectral Image Blocks Captured Using an Unmanned Aerial Vehicle with a Radiometric Block Adjustment

    Directory of Open Access Journals (Sweden)

    Eija Honkavaara

    2018-02-01

    Unmanned airborne vehicles (UAV) equipped with novel, miniaturized, 2D frame format hyper- and multispectral cameras make it possible to conduct remote sensing measurements cost-efficiently, with greater accuracy and detail. In the mapping process, the area of interest is covered by multiple, overlapping, small-format 2D images, which provide redundant information about the object. Radiometric correction of spectral image data is important for eliminating any external disturbance from the captured data. Corrections should include sensor, atmosphere and view/illumination geometry (bidirectional reflectance distribution function, BRDF) related disturbances. An additional complication is that UAV remote sensing campaigns are often carried out under difficult conditions, with varying illumination conditions and cloudiness. We have developed a global optimization approach for the radiometric correction of UAV image blocks, a radiometric block adjustment. The objective of this study was to implement and assess a combined adjustment approach, including comprehensive consideration of weighting of various observations. An empirical study was carried out using imagery captured using a hyperspectral 2D frame format camera of winter wheat crops. The dataset included four separate flights captured during a 2.5 h time period under sunny weather conditions. As outputs, we calculated orthophoto mosaics using the most nadir images and sampled multiple-view hyperspectral spectra for vegetation sample points utilizing multiple images in the dataset. The method provided an automated tool for radiometric correction, efficiently compensating for radiometric disturbances in the images. The global homogeneity factor improved from 12–16% to 4–6% with the corrections, and a reduction in disturbances could be observed in the spectra of the object points sampled from multiple overlapping images. Residuals in the grey and white reflectance panels were less than 5% of the

  9. A FIRST COMPARISON OF KEPLER PLANET CANDIDATES IN SINGLE AND MULTIPLE SYSTEMS

    International Nuclear Information System (INIS)

    Latham, David W.; Quinn, Samuel N.; Carter, Joshua A.; Holman, Matthew J.; Rowe, Jason F.; Borucki, William J.; Bryson, Stephen T.; Howell, Steve B.; Batalha, Natalie M.; Brown, Timothy M.; Buchhave, Lars A.; Caldwell, Douglas A.; Christiansen, Jessie L.; Ciardi, David R.; Cochran, William D.; Dunham, Edward W.; Fabrycky, Daniel C.; Ford, Eric B.; Gautier, Thomas N. III; Gilliland, Ronald L.

    2011-01-01

    In this Letter, we present an overview of the rich population of systems with multiple candidate transiting planets found in the first four months of Kepler data. The census of multiples includes 115 targets that show two candidate planets, 45 with three, eight with four, and one each with five and six, for a total of 170 systems with 408 candidates. When compared to the 827 systems with only one candidate, the multiples account for 17% of the total number of systems, and one-third of all the planet candidates. We compare the characteristics of candidates found in multiples with those found in singles. False positives due to eclipsing binaries are much less common for the multiples, as expected. Singles and multiples are both dominated by planets smaller than Neptune; 69 (+2/-3)% for singles and 86 (+2/-5)% for multiples. This result, that systems with multiple transiting planets are less likely to include a transiting giant planet, suggests that close-in giant planets tend to disrupt the orbital inclinations of small planets in flat systems, or maybe even prevent the formation of such systems in the first place.

  10. Multiple Intelligences Profiles of Children with Attention Deficit and Hyperactivity Disorder in Comparison with Nonattention Deficit and Hyperactivity Disorder

    Directory of Open Access Journals (Sweden)

    Mostafa Najafi

    2017-01-01

    Background: Attention deficit and hyperactivity disorder (ADHD) is a common psychological problem during childhood. This study aimed to evaluate multiple intelligences profiles of children with ADHD in comparison with non-ADHD. Materials and Methods: This cross-sectional descriptive analytical study was done on 50 children aged 6–13 years in two groups, with and without ADHD. Children with ADHD were referred to the Clinics of Child and Adolescent Psychiatry, Isfahan University of Medical Sciences, in 2014. Samples were selected based on a clinical interview (based on the Diagnostic and Statistical Manual of Mental Disorders IV) and the parent–teacher strengths and difficulties questionnaire, which was done by a psychiatrist and a psychologist. The Raven intelligence quotient (IQ) test was used, and the findings were compared to the results of the multiple intelligences test. Data analysis was done using a multivariate analysis of covariance using SPSS20 software. Results: Comparing the profiles of multiple intelligences between the two groups, the control group showed higher levels of multiple intelligences than the ADHD group, a difference which was most significant in logical, interpersonal, and intrapersonal intelligence (P < 0.05). The IQ average score in the control group and ADHD group was 102.42 ± 16.26 and 96.72 ± 16.06, respectively, which reveals the negative effect of ADHD on the IQ average value. There was an insignificant relationship between linguistic and naturalist intelligence (P > 0.05). However, in other kinds of multiple intelligences, direct and significant relationships were observed (P < 0.05). Conclusions: Since the levels of IQ (Raven test) and MI in the control group were significantly higher than in the ADHD group, ADHD is likely to be associated with logical-mathematical, interpersonal, and intrapersonal profiles.

  11. Iterative channel decoding of FEC-based multiple-description codes.

    Science.gov (United States)

    Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B

    2012-03-01

    Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.

  12. Does vagotomy protect against multiple sclerosis?

    Science.gov (United States)

    Sundbøll, Jens; Horváth-Puhó, Erzsébet; Adelborg, Kasper; Svensson, Elisabeth

    2017-07-01

    To examine the association between vagotomy and multiple sclerosis. We conducted a matched cohort study of all patients who underwent truncal or super-selective vagotomy and a comparison cohort, by linking Danish population-based medical registries (1977-1995). Hazard ratios (HRs) for multiple sclerosis, adjusting for potential confounders were computed by means of Cox regression analysis. Median age of multiple sclerosis onset corresponded to late onset multiple sclerosis. No association with multiple sclerosis was observed for truncal vagotomy (0-37 year adjusted HR=0.91, 95% confidence interval [CI]: 0.48-1.74) or super-selective vagotomy (0-37 year adjusted HR=1.28, 95% CI: 0.79-2.09) compared with the general population. We found no association between vagotomy and later risk of late onset multiple sclerosis. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Refractive outcomes of an advanced aspherically optimized profile for myopia corrections by LASIK: a retrospective comparison with the standard aspherically optimized profile

    Directory of Open Access Journals (Sweden)

    Meyer B

    2015-02-01

    Bertram Meyer,1 Georg Sluyterman van Langeweyde,2 Matthias Wottke2 1Augencentrum Köln, Cologne, Germany; 2Carl Zeiss Meditec AG, Jena, Germany. Purpose: A retrospective comparison of refractive outcomes of a new, aspherically optimized profile with an enhanced energy correction feature (Triple-A) and the conventionally used aspherically optimized profile (ASA, or aberration smart ablation) for correction of low-to-high myopia. Setting: Augen-OP-Centrum, Cologne, Germany. Design: Retrospective nonrandomized comparative study. Methods: A central database at the Augen-OP-Centrum was used to gather retrospective data for low-to-high myopia (up to -10 D). One hundred and seven eyes (56 patients) were treated with the ASA profile, and 79 eyes (46 patients) were treated with the Triple-A profile. Postoperative outcomes were evaluated at 1 month, 3 months, 6 months, and 1 year follow-up time points. Results: The Triple-A profile showed better predictability, indicated by a significantly lower standard deviation of residuals (0.32–0.34 vs 0.36–0.44, Triple-A vs ASA) in the 6-month to 1-year period. The Triple-A group had better stability across all time intervals and achieved better postoperative astigmatism improvements with significantly lower scatter. This group achieved better safety at 1 year, with 100% of eyes showing no change or gain in Snellen lines, compared with 97% in the ASA group. A better safety index was observed for the Triple-A group at later time points. The Triple-A group had a better efficacy index and a higher percentage of eyes with an uncorrected Snellen visual acuity of 20/20 or greater at all investigated follow-up time points. Conclusion: The new aspherically optimized Triple-A profile can safely and effectively correct low-to-high myopia. It has demonstrated superiority over the ASA profile in most refractive outcomes. Keywords: Triple-A, wavefront measurements, corneal aberrations, corneal asphericity, ablation profile

  14. Seismic reflection imaging, accounting for primary and multiple reflections

    Science.gov (United States)

    Wapenaar, Kees; van der Neut, Joost; Thorbecke, Jan; Broggini, Filippo; Slob, Evert; Snieder, Roel

    2015-04-01

    Imaging of seismic reflection data is usually based on the assumption that the seismic response consists of primary reflections only. Multiple reflections, i.e. waves that have reflected more than once, are treated as primaries and are imaged at wrong positions. There are two classes of multiple reflections, which we will call surface-related multiples and internal multiples. Surface-related multiples are those multiples that contain at least one reflection at the earth's surface, whereas internal multiples consist of waves that have reflected only at subsurface interfaces. Surface-related multiples are the strongest, but also relatively easy to deal with because the reflecting boundary (the earth's surface) is known. Internal multiples constitute a much more difficult problem for seismic imaging, because the positions and properties of the reflecting interfaces are not known. We are developing reflection imaging methodology which deals with internal multiples. Starting with the Marchenko equation for 1D inverse scattering problems, we derived 3D Marchenko-type equations, which relate reflection data at the surface to Green's functions between virtual sources anywhere in the subsurface and receivers at the surface. Based on these equations, we derived an iterative scheme by which these Green's functions can be retrieved from the reflection data at the surface. This iterative scheme requires an estimate of the direct wave of the Green's functions in a background medium. Note that this is precisely the same information that is also required by standard reflection imaging schemes. However, unlike in standard imaging, our iterative Marchenko scheme retrieves the multiple reflections of the Green's functions from the reflection data at the surface. For this, no knowledge of the positions and properties of the reflecting interfaces is required. Once the full Green's functions are retrieved, reflection imaging can be carried out by which the primaries and multiples are

  15. Physical Limitations To Nonuniformity Correction In IR Focal Plane Arrays

    Science.gov (United States)

    Scribner, D. A.; Kruer, M. R.; Gridley, J. C.; Sarkady, K.

    1988-05-01

    Simple nonuniformity correction algorithms currently in use can be severely limited by nonlinear response characteristics of the individual pixels in an IR focal plane array. Although more complicated multi-point algorithms improve the correction process, they too can be limited by nonlinearities. Furthermore, analysis of single-pixel noise power spectra usually shows some level of 1/f noise. This in turn causes pixel outputs to drift independently of each other, thus causing the spatial noise (often called fixed pattern noise) of the array to increase as a function of time since the last calibration. Measurements are presented for two arrays (a HgCdTe hybrid and a Pt:Si CCD) describing pixel nonlinearities, 1/f noise, and residual spatial noise (after nonuniformity correction). Of particular emphasis is spatial noise as a function of the lapsed time since the last calibration and the calibration process selected. The resulting spatial noise is examined in terms of its effect on the NEΔT performance of each array tested, and comparisons are made. Finally, a discussion of implications for array developers is given.
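
    For context, the simple nonuniformity correction whose limits are discussed above is the two-point (gain/offset) correction. A minimal sketch, assuming calibration frames recorded against two uniform blackbody references; the pixel nonlinearity and 1/f drift that the record measures are exactly what this linear model ignores.

        import numpy as np

        def two_point_nuc(raw, resp_cold, resp_hot, t_cold, t_hot):
            # per-pixel gain and offset from two uniform reference sources,
            # then a linear correction of the raw frame
            gain = (t_hot - t_cold) / (resp_hot - resp_cold)
            offset = t_cold - gain * resp_cold
            return gain * raw + offset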

  16. A Comparison of Equality in Computer Algebra and Correctness in Mathematical Pedagogy (II)

    Science.gov (United States)

    Bradford, Russell; Davenport, James H.; Sangwin, Chris

    2010-01-01

    A perennial problem in computer-aided assessment is that "a right answer", pedagogically speaking, is not the same thing as "a mathematically correct expression", as verified by a computer algebra system, or indeed other techniques such as random evaluation. Paper I in this series considered the difference in cases where there was "the right…

  17. Karect: accurate correction of substitution, insertion and deletion errors for next-generation sequencing data

    KAUST Repository

    Allam, Amin

    2015-07-14

    Motivation: Next-generation sequencing generates large amounts of data affected by errors in the form of substitutions, insertions or deletions of bases. Error correction based on the high-coverage information typically improves de novo assembly. Most existing tools can correct substitution errors only; some support insertions and deletions, but accuracy in many cases is low. Results: We present Karect, a novel error correction technique based on multiple alignment. Our approach supports substitution, insertion and deletion errors. It can handle non-uniform coverage as well as moderately covered areas of the sequenced genome. Experiments with data from Illumina, 454 FLX and Ion Torrent sequencing machines demonstrate that Karect is more accurate than previous methods, both in terms of correcting individual-base errors (up to 10% increase in accuracy gain) and post de novo assembly quality (up to 10% increase in NGA50). We also introduce an improved framework for evaluating the quality of error correction.

  18. Comparison of Deep Learning With Multiple Machine Learning Methods and Metrics Using Diverse Drug Discovery Data Sets.

    Science.gov (United States)

    Korotcov, Alexandru; Tkachenko, Valery; Russo, Daniel P; Ekins, Sean

    2017-12-04

    Machine learning methods have been applied to many data sets in pharmaceutical research for several decades. The relative ease and availability of fingerprint type molecular descriptors paired with Bayesian methods resulted in the widespread use of this approach for a diverse array of end points relevant to drug discovery. Deep learning is the latest machine learning algorithm attracting attention for many of pharmaceutical applications from docking to virtual screening. Deep learning is based on an artificial neural network with multiple hidden layers and has found considerable traction for many artificial intelligence applications. We have previously suggested the need for a comparison of different machine learning methods with deep learning across an array of varying data sets that is applicable to pharmaceutical research. End points relevant to pharmaceutical research include absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox) properties, as well as activity against pathogens and drug discovery data sets. In this study, we have used data sets for solubility, probe-likeness, hERG, KCNQ1, bubonic plague, Chagas, tuberculosis, and malaria to compare different machine learning methods using FCFP6 fingerprints. These data sets represent whole cell screens, individual proteins, physicochemical properties as well as a data set with a complex end point. Our aim was to assess whether deep learning offered any improvement in testing when assessed using an array of metrics including AUC, F1 score, Cohen's kappa, Matthews correlation coefficient and others. Based on ranked normalized scores for the metrics or data sets Deep Neural Networks (DNN) ranked higher than SVM, which in turn was ranked higher than all the other machine learning methods. Visualizing these properties for training and test sets using radar type plots indicates when models are inferior or perhaps over trained. These results also suggest the need for assessing deep learning further
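
    For a binary end point, the metric panel named above can be computed in a few lines with scikit-learn. A minimal sketch, assuming held-out labels and predicted scores; the fingerprint generation, model training and rank normalization used in the study are not shown.

        import numpy as np
        from sklearn.metrics import (roc_auc_score, f1_score,
                                     cohen_kappa_score, matthews_corrcoef)

        def evaluate(y_true, y_score, threshold=0.5):
            # threshold the predicted scores, then compute the comparison metrics
            y_pred = (np.asarray(y_score) >= threshold).astype(int)
            return {
                "AUC":   roc_auc_score(y_true, y_score),
                "F1":    f1_score(y_true, y_pred),
                "kappa": cohen_kappa_score(y_true, y_pred),
                "MCC":   matthews_corrcoef(y_true, y_pred),
            }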

  19. Comparison of the evolution of tumor cells after unique and multiple (accelerated) daily irradiation in mammary carcinoma of C3H mice

    International Nuclear Information System (INIS)

    Pfersdorff, J.; Sack, H.

    1986-01-01

    The comparison of two fractionation schemes, i.e. the usual irradiation once a day with 2 Gy (SDF) and the fractionation with 3 times 1.6 Gy per day (MDF) at intervals of at least four hours shows the stronger action of higher fractionation on the destruction of tumor cells and the inhibition of their proliferation kinetics. So the number of pycnotic cells is considerably increased in case of multiple daily irradiation, and the mitosis rate as well as the labelling index show a more significant decrease. In case of one irradiation per day, the number of pycnotic cells increases during radiotherapy, too, but the mitosis rate and the labelling index only decrease until the fifth or sixth treatment day, remaining then unchanged or increasing slightly. This suggests a recurring multiplication of tumor cells already during radiotherapy. The higher efficacy of multiple daily fractionation in rapidly proliferating tumors is proved by the measurements of changing tumor volumes in the living animal during irradiation as well as by the observation of the survival time after irradiation. (orig.) [de

  20. A comparison of radiometric correction techniques in the evaluation of the relationship between LST and NDVI in Landsat imagery.

    Science.gov (United States)

    Tan, Kok Chooi; Lim, Hwee San; Matjafri, Mohd Zubir; Abdullah, Khiruddin

    2012-06-01

    Atmospheric corrections for multi-temporal optical satellite images are necessary, especially in change detection analyses, such as normalized difference vegetation index (NDVI) rationing. Abrupt change detection analysis using remote-sensing techniques requires radiometric congruity and atmospheric correction to monitor terrestrial surfaces over time. Two atmospheric correction methods were used for this study: relative radiometric normalization and the simplified method for atmospheric correction (SMAC) in the solar spectrum. A multi-temporal data set consisting of two sets of Landsat images from the period between 1991 and 2002 of Penang Island, Malaysia, was used to compare NDVI maps, which were generated using the proposed atmospheric correction methods. Land surface temperature (LST) was retrieved using ATCOR3_T in PCI Geomatica 10.1 image processing software. Linear regression analysis was utilized to analyze the relationship between NDVI and LST. This study reveals that both of the proposed atmospheric correction methods yielded high accuracy through examination of the linear correlation coefficients. To check for the accuracy of the equation obtained through linear regression analysis for every single satellite image, 20 points were randomly chosen. The results showed that the SMAC method yielded a constant value (in terms of error) to predict the NDVI value from linear regression analysis-derived equation. The errors (average) from both proposed atmospheric correction methods were less than 10%.
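
    The NDVI rationing referred to above is a per-pixel band ratio, which is why the choice of atmospheric correction matters: it changes the band values entering the ratio. A minimal sketch, assuming co-registered near-infrared and red reflectance arrays (for Landsat TM these would typically be bands 4 and 3, an assumption not stated in the record).

        import numpy as np

        def ndvi(nir, red):
            # NDVI = (NIR - Red) / (NIR + Red), computed per pixel
            nir = nir.astype(float)
            red = red.astype(float)
            return (nir - red) / (nir + red + 1e-12)   # small epsilon avoids division by zero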

  1. Monte Carlo evaluation of scattering correction methods in 131I studies using pinhole collimator

    International Nuclear Information System (INIS)

    López Díaz, Adlin; San Pedro, Aley Palau; Martín Escuela, Juan Miguel; Rodríguez Pérez, Sunay; Díaz García, Angelina

    2017-01-01

    Scattering is quite important for image activity quantification. In order to study the scattering factors and the efficacy of 3 multiple-energy-window scatter correction methods during 131I thyroid studies with a pinhole collimator (5 mm hole), a Monte Carlo (MC) simulation was developed. The GAMOS MC code was used to model the gamma camera and the thyroid source geometry. First, to validate the MC gamma camera pinhole-source model, the sensitivity in air and water of the simulated and measured thyroid phantom geometries were compared. Next, simulations to investigate scattering and the results of the triple energy (TEW), double energy (DW) and reduced double (RDW) energy window correction methods were performed for different thyroid sizes and depth thicknesses. The relative discrepancies with respect to the MC true events were evaluated. Results: The accuracy of the GAMOS MC model was verified and validated. The image's scattering contribution was significant, between 27-40%. The discrepancies between the results of the 3 multiple-energy-window correction methods were significant (between 9-86%). The Reduced Double Window method (15%) provided discrepancies of 9-16%. Conclusions: For the simulated thyroid geometry with pinhole, the RDW (15%) was the most effective. (author)
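
    For reference, the window-based scatter estimates compared in this record have simple closed forms: the triple-energy-window (TEW) estimate interpolates the scatter under the photopeak from two flanking windows, and the double-window estimate scales a single Compton window by a factor k. A sketch of the generic formulas only; the reduced-double-window variant and any pinhole-specific factors from the study are not reproduced.

        def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak):
            # trapezoidal estimate of scatter in the photopeak window from counts in
            # two narrow windows flanking it (window widths in keV)
            return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

        def dew_scatter(c_compton, k=0.5):
            # double-energy-window estimate: a fixed fraction k of a lower Compton window
            return k * c_compton

        # primary counts are then the photopeak counts minus the scatter estimate, clipped at zero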

  2. Validation of attenuation-corrected equilibrium radionuclide angiographic determinations of right ventricular volume: comparison with cast-validated biplane cineventriculography

    International Nuclear Information System (INIS)

    Dell'Italia, L.J.; Starling, M.R.; Walsh, R.A.; Badke, F.R.; Lasher, J.C.; Blumhardt, R.

    1985-01-01

    To determine the accuracy of attenuation-corrected equilibrium radionuclide angiographic determinations of right ventricular volumes, the authors initially studied 14 postmortem human right ventricular casts by water displacement and biplane cineventriculography. Biplane cineventriculographic right ventricular cast volumes, calculated by a modification of Simpson's rule algorithm, correlated well with right ventricular cast volumes measured by water displacement (r = .97, y = 8 + 0.88x, SEE = 6 ml). Moreover, the mean volumes obtained by both methods were no different (73 +/- 28 vs 73 +/- 25 ml). Subsequently, they studied 16 patients by both biplane cineventriculography and equilibrium radionuclide angiography. The uncorrected radionuclide right ventricular volumes were calculated by normalizing background corrected end-diastolic and end-systolic counts from hand-drawn regions of interest obtained by phase analysis for cardiac cycles processed, frame rate, and blood sample counts. Attenuation correction was performed by a simple geometric method. The attenuation-corrected radionuclide right ventricular end-diastolic volumes correlated with the cineventriculographic end-diastolic volumes (r = .91, y = 3 + 0.92x, SEE = 27 ml). Similarly, the attenuation-corrected radionuclide right ventricular end-systolic volumes correlated with the cineventriculographic end-systolic volumes (r = .93, y = - 1 + 0.91x, SEE = 16 ml). Also, the mean attenuation-corrected radionuclide end-diastolic and end-systolic volumes were no different than the average cineventriculographic end-diastolic and end-systolic volumes (160 +/- 61 and 83 +/- 44 vs 170 +/- 61 and 86 +/- 43 ml, respectively)

  3. Discontinuous functions in correction procedure for x-ray microanalysis of light elements in inorganic materials

    International Nuclear Information System (INIS)

    Kaminska, M.; Missol, W.

    2002-01-01

    A formula for absorption correction was developed and verified when multiplying it by the Love, Cox, Scott atomic number expression using the program NEWKOR and by comparison of the product with experimental and literature data. A correction error was calculated in reference to measure intensity ratios for 409 analyses of light elements (beryllium, boron, carbon, nitrogen, oxygen, fluorine) as well as 193 analyses of heavy elements (from sodium to uranium). Another computer program (MARCON) has been developed for iterative determination of elemental concentrations in the materials. (author)

  4. Two-flavor QCD correction to lepton magnetic moments at leading-order in the electromagnetic coupling

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Xu [DESY, Zeuthen (Germany). NIC; Muenster Univ. (Germany). Inst. fuer Theoretische Physik; Jansen, Karl; Renner, Dru B. [DESY, Zeuthen (Germany). NIC; Petschlies, Marcus [Humboldt Univ. Berlin (Germany). Inst. fuer Physik

    2011-03-15

    We present a reliable nonperturbative calculation of the QCD correction, at leading-order in the electromagnetic coupling, to the anomalous magnetic moment of the electron, muon and tau leptons using two-flavor lattice QCD. We use multiple lattice spacings, multiple volumes and a broad range of quark masses to control the continuum, infinite-volume and chiral limits. We examine the impact of the commonly ignored disconnected diagrams and introduce a modification to the previously used method that results in a well-controlled lattice calculation. We obtain 1.513(43) × 10^-12, 5.72(16) × 10^-8 and 2.650(54) × 10^-6 for the leading-order QCD correction to the anomalous magnetic moment of the electron, muon and tau respectively, each accurate to better than 3%. (orig.)

  5. Dispersion- and Exchange-Corrected Density Functional Theory for Sodium Ion Hydration.

    Science.gov (United States)

    Soniat, Marielle; Rogers, David M; Rempe, Susan B

    2015-07-14

    A challenge in density functional theory is developing functionals that simultaneously describe intermolecular electron correlation and electron delocalization. Recent exchange-correlation functionals address those two issues by adding corrections important at long ranges: an atom-centered pairwise dispersion term to account for correlation and a modified long-range component of the electron exchange term to correct for delocalization. Here we investigate how those corrections influence the accuracy of binding free energy predictions for sodium-water clusters. We find that the dual-corrected ωB97X-D functional gives cluster binding energies closest to high-level ab initio methods (CCSD(T)). Binding energy decomposition shows that the ωB97X-D functional predicts the smallest ion-water (pairwise) interaction energy and larger multibody contributions for a four-water cluster than most other functionals - a trend consistent with CCSD(T) results. Also, ωB97X-D produces the smallest amounts of charge transfer and the least polarizable waters of the density functionals studied, which mimics the lower polarizability of CCSD. When compared with experimental binding free energies, however, the exchange-corrected CAM-B3LYP functional performs best (error <1 kcal/mol), possibly because of its parametrization to experimental formation enthalpies. For clusters containing more than four waters, "split-shell" coordination must be considered to obtain accurate free energies in comparison with experiment.

  6. Higher-order conductivity corrections to the Casimir force

    International Nuclear Information System (INIS)

    Bezerra, Valdir Barbosa; Klimchitskaya, Galina; Mostepanenko, Vladimir

    2000-01-01

    Full text follows: Considerable recent attention has been focused on the new experiments on measuring the Casimir force. To be confident that experimental data fit theory at a level of several percent, a variety of corrections to the ideal expression for the Casimir force should be taken into account. One of the main corrections at small separations between interacting bodies is the one due to the finite conductivity of the boundary metal. This correction has its origin in the non-zero penetration depth δ0 of electromagnetic vacuum oscillations into the metal (for a perfect metal of infinitely large conductivity, δ0 = 0). The other quantity with the dimension of length is the space separation a between two plates or a plate and a sphere. Their ratio δ0/a is the natural perturbation parameter in powers of which the corrections to the Casimir force due to finite conductivity can be expanded. Such an expansion works well for all separations a >> δ0 (i.e. for separations larger than 100-150 nm). The first-order term of this expansion was calculated almost forty years ago, and the second-order one in 1985 [1]. These two terms are not sufficient for the comparison of the theory with precision modern experiments. In this talk we report the results of paper [2], where the third- and fourth-order terms in the δ0/a expansion of the Casimir force were calculated for the first time. They made it possible to achieve excellent agreement between theory and experiment. (author)

  7. Characterizing the Joint Effect of Diverse Test-Statistic Correlation Structures and Effect Size on False Discovery Rates in a Multiple-Comparison Study of Many Outcome Measures

    Science.gov (United States)

    Feiveson, Alan H.; Ploutz-Snyder, Robert; Fiedler, James

    2011-01-01

    In their 2009 Annals of Statistics paper, Gavrilov, Benjamini, and Sarkar report the results of a simulation assessing the robustness of their adaptive step-down procedure (GBS) for controlling the false discovery rate (FDR) when normally distributed test statistics are serially correlated. In this study we extend the investigation to the case of multiple comparisons involving correlated non-central t-statistics, in particular when several treatments or time periods are being compared to a control in a repeated-measures design with many dependent outcome measures. In addition, we consider several dependence structures other than serial correlation and illustrate how the FDR depends on the interaction between effect size and the type of correlation structure as indexed by Foerstner's distance metric from an identity. The relationship between the correlation matrix R of the original dependent variables and the correlation matrix of the associated t-statistics is also studied. In general, the latter depends not only on R, but also on sample size and the signed effect sizes for the multiple comparisons.
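
    For orientation, the baseline against which adaptive procedures such as GBS are usually judged is the Benjamini-Hochberg step-up rule. A minimal sketch of that standard procedure (not the adaptive step-down procedure studied in the record), ignoring any dependence structure among the test statistics.

        import numpy as np

        def benjamini_hochberg(pvals, q=0.05):
            # reject the k smallest p-values, where k is the largest index i
            # (1-based, over sorted p-values) with p_(i) <= i * q / m
            p = np.asarray(pvals, dtype=float)
            m = len(p)
            order = np.argsort(p)
            passed = p[order] <= q * np.arange(1, m + 1) / m
            k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
            reject = np.zeros(m, dtype=bool)
            reject[order[:k]] = True
            return reject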

  8. Evaluation of the global orbit correction algorithm for the APS real-time orbit feedback system

    International Nuclear Information System (INIS)

    Carwardine, J.; Evans, K. Jr.

    1997-01-01

    The APS real-time orbit feedback system uses 38 correctors per plane and has available up to 320 rf beam position monitors. Orbit correction is implemented using multiple digital signal processors. Singular value decomposition is used to generate a correction matrix from a linear response matrix model of the storage ring lattice. This paper evaluates the performance of the APS system in terms of its ability to correct localized and distributed sources of orbit motion. The impact of regulator gain and bandwidth, choice of beam position monitors, and corrector dynamics are discussed. The weighted least-squares algorithm is reviewed in the context of local feedback
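
    The core of the SVD-based correction described above is a (truncated) pseudo-inverse of the linear response matrix. A minimal static sketch, assuming a measured response matrix and an orbit-error vector; the regulator gain, bandwidth and DSP implementation of the APS system are not represented.

        import numpy as np

        def svd_correction(response, orbit_error, rcond=1e-3):
            # response: (n_bpm, n_corrector) matrix of BPM readings per unit corrector kick
            # returns corrector kicks that cancel the measured orbit error in a least-squares sense
            U, s, Vt = np.linalg.svd(response, full_matrices=False)
            keep = s > rcond * s[0]            # drop poorly determined singular directions
            s_inv = np.zeros_like(s)
            s_inv[keep] = 1.0 / s[keep]
            pinv = Vt.T @ np.diag(s_inv) @ U.T
            return -pinv @ orbit_error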

  9. Making a difference? A comparison between multi-sensory and regular storytelling for persons with profound intellectual and multiple disabilities.

    Science.gov (United States)

    Ten Brug, A; Van der Putten, A A J; Penne, A; Maes, B; Vlaskamp, C

    2016-11-01

    Multi-sensory storytelling (MSST) was developed to include persons with profound intellectual and multiple disabilities in storytelling culture. In order to increase the listeners' attention, MSST stories are individualised and use multiple sensory stimuli to support the verbal text. In order to determine the value of MSST, this study compared listeners' attention under two conditions: (1) being read MSST books and (2) being read regular stories. A non-randomised control study was executed in which the intervention group read MSST books (n = 45) and a comparison group (n = 31) read regular books. Books were read 10 times during a 5-week period. The 1st, 5th and 10th storytelling sessions were recorded on video in both groups, and the percentage of attention directed to the book and/or stimuli and to the storyteller was scored by a trained and independent rater. Two repeated measure analyses (with the storytelling condition as a between-subject factor and the three measurements as factor) were performed to determine the difference between the groups in terms of attention directed to the book/stimuli (first analysis) and storyteller (second analysis). A further analysis established whether the level of attention changed between the reading sessions and whether there was an interaction effect between the repetition of the book and the storytelling condition. The attention directed to the book and/or the stimuli was significantly higher in the MSST group than in the comparison group. No significant difference between the two groups was found in the attention directed to the storyteller. For MSST stories, most attention was observed during the fifth reading session, while for regular stories, the fifth session gained least attentiveness from the listener. The persons with profound intellectual and multiple disabilities paid more attention to the book and/or stimuli in the MSST condition compared with the regular story telling group. Being more attentive towards

  10. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    Science.gov (United States)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N^2) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  11. What Is Social Comparison and How Should We Study It?

    Science.gov (United States)

    Wood, Joanne V.

    1996-01-01

    Examines frequently used measures and procedures in social comparison research. The question of whether a method truly captures social comparison requires a clear understanding of what social comparison is; hence a definition of social comparison is proposed, multiple ancillary processes in social comparison are identified, and definitional…

  12. A distortion correction method for image intensifier and electronic portal images used in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Ioannidis, G T; Geramani, K N; Zamboglou, N [Strahlenklinik, Stadtische Kliniken Offenbach, Offenbach (Germany); Uzunoglu, N [Department of Electrical and Computer Engineering, National Technical University of Athens, Athens (Greece)

    1999-12-31

    At most radiation departments a simulator and an 'on-line' verification system of the treated volume, in the form of an electronic portal imaging device (EPID), are available. Networking and digital handling (saving, archiving etc.) of the image information is a necessity in the image processing procedures in order to evaluate verification and simulation recordings at the computer screen. Correction of distortion is, on the other hand, a prerequisite for quantitative comparison of the two image modalities. Another limiting factor, when making quantitative assertions, is the fact that the irradiation fields in radiotherapy are usually bigger than the field of view of an image intensifier. Several segments of the irradiation field must therefore be acquired. Using pattern recognition techniques these segments can be composed into a single image. In this paper a distortion correction method will be presented. The method is based upon a well-defined grid which is embedded on the image during the registration process. The video signal from the image intensifier is acquired and processed. The grid is then recognised using image processing techniques. Ideally, if all grid points are recognised, various methods can be applied in order to correct the distortion. But in practice this is not the case. Overlapping structures (bones etc.) mean that not all of the grid points can be recognised. Mathematical models from graph theory are applied in order to reconstruct the whole grid. The deviation of the grid point positions from their nominal values is then used to calculate correction coefficients. This method (well-defined grid, grid recognition, correction factors) can also be applied to verification images from the EPID or to other image modalities, and therefore a quantitative comparison in radiation treatment is possible. The distortion correction method and its application to simulator images will be presented. (authors)
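
    Once the grid points have been recognised and matched to their nominal positions, the correction itself can be expressed as a smooth coordinate mapping. A minimal sketch using a second-order polynomial fitted by least squares; the grid-recognition and graph-theoretic reconstruction steps described above are assumed to have been done already, and the polynomial order is an illustrative choice.

        import numpy as np

        def fit_distortion_correction(measured_xy, ideal_xy):
            # fit a 2nd-order 2D polynomial mapping measured (distorted) grid-point
            # positions to their ideal positions, and return a function applying it
            x, y = measured_xy[:, 0], measured_xy[:, 1]
            A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
            coeff, *_ = np.linalg.lstsq(A, ideal_xy, rcond=None)   # shape (6, 2)

            def undistort(points):
                px, py = points[:, 0], points[:, 1]
                B = np.column_stack([np.ones_like(px), px, py, px * py, px**2, py**2])
                return B @ coeff

            return undistort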

  13. Comparison of stress in single and multiple layer depositions of plasma-deposited amorphous silicon dioxide

    International Nuclear Information System (INIS)

    Au, V; Charles, C; Boswell, R W

    2006-01-01

    The stress in a single-layer continuous deposition of amorphous silicon dioxide (SiO 2 ) film is compared with the stress within multiple-layer intermittent or 'stop-start' depositions. The films were deposited by helicon activated reactive evaporation (plasma assisted deposition with electron beam evaporation source) to a 1 μm total film thickness. The relationships for stress as a function of film thickness for single, two, four and eight layer depositions have been obtained by employing the substrate curvature technique on a post-deposition etch-back of the SiO 2 film. At film thicknesses of less than 300 nm, the stress-thickness relationships clearly show an increase in stress in the multiple-layer samples compared with the relationship for the single-layer film. By comparison, there is little variation in the film stress between the samples when it is measured at 1 μm film thickness. Localized variations in stress were not observed in the regions where the 'stop-start' depositions occurred. The experimental results are interpreted as a possible indication of the presence of unstable, strained Si-O-Si bonds in the amorphous SiO 2 film. It is proposed that the subsequent introduction of a 'stop-start' deposition process places additional strain on these bonds to affect the film structure. The experimental stress-thickness relationships were reproduced independently by assuming a linear relationship between the measured bow and film thickness. The constants of the linear model are interpreted as an indication of the density of the amorphous film structure

  14. Comparison of online IGRT techniques for prostate IMRT treatment: Adaptive vs repositioning correction

    International Nuclear Information System (INIS)

    Thongphiew, Danthai; Wu, Q. Jackie; Lee, W. Robert; Chankong, Vira; Yoo, Sua; McMahon, Ryan; Yin Fangfang

    2009-01-01

    This study compares three online image guidance techniques (IGRT) for prostate IMRT treatment: bony-anatomy matching, soft-tissue matching, and online replanning. Six prostate IMRT patients were studied. Five daily CBCT scans from the first week were acquired for each patient to provide representative ''snapshots'' of anatomical variations during the course of treatment. Initial IMRT plans were designed for each patient with seven coplanar 15 MV beams on a Eclipse treatment planning system. Two plans were created, one with a PTV margin of 10 mm and another with a 5 mm PTV margin. Based on these plans, the delivered dose distributions to each CBCT anatomy was evaluated to compare bony-anatomy matching, soft-tissue matching, and online replanning. Matching based on bony anatomy was evaluated using the 10 mm PTV margin (''bone10''). Soft-tissue matching was evaluated using both the 10 mm (''soft10'') and 5 mm (''soft5'') PTV margins. Online reoptimization was evaluated using the 5 mm PTV margin (''adapt''). The replanning process utilized the original dose distribution as the basis and linear goal programming techniques for reoptimization. The reoptimized plans were finished in less than 2 min for all cases. Using each IGRT technique, the delivered dose distribution was evaluated on all 30 CBCT scans (6 patientsx5CBCT/patient). The mean minimum dose (in percentage of prescription dose) to the CTV over five treatment fractions were in the ranges of 99%-100%(SD=0.1%-0.8%), 65%-98%(SD=0.4%-19.5%), 87%-99%(SD=0.7%-23.3%), and 95%-99%(SD=0.4%-10.4%) for the adapt, bone10, soft5, and soft10 techniques, respectively. Compared to patient position correction techniques, the online reoptimization technique also showed improvement in OAR sparing when organ motion/deformations were large. For bladder, the adapt technique had the best (minimum) D90, D50, and D30 values for 24, 17, and 15 fractions out of 30 total fractions, while it also had the best D90, D50, and D30 values for

  15. The Role of Correction in the Conservative Treatment of Adolescent Idiopathic Scoliosis.

    Science.gov (United States)

    Ng, Shu-Yan; Nan, Xiao-Feng; Lee, Sang-Gil; Tournavitis, Nico

    2017-01-01

    Physiotherapeutic Scoliosis-Specific Exercises (PSSE) and bracing have been found to be effective in the stabilization of curves in patients with Adolescent Idiopathic Scoliosis (AIS). Yet, the difference among the many PSSEs and braces has not been studied. The present review attempts to investigate the role of curve correction in the outcome of treatment for PSSEs and braces. A PubMed manual search has been conducted for studies on the role of correction in the effectiveness of PSSE and bracing. For the PSSEs, the key words used were "adolescent idiopathic scoliosis, correction, physiotherapy, physical therapy, exercise, and rehabilitation." For bracing, the key words used were "adolescent idiopathic scoliosis, correction and brace". Only papers that were published from 2001-2017 were included and reviewed, as there were very few relevant papers dating earlier than 2001. The search found no studies on the role of correction on the effectiveness of different PSSEs. The effectiveness of different PSSEs might or might not be related to the magnitude of curve correction during the exercises. However, many studies showed a relationship between the magnitude of in-brace correction and the outcome of the brace treatment. The role of correction on the effectiveness of PSSE has not been studied. In-brace correction, however, has been found to be associated with the outcome of brace treatment. An in-brace correction of 40-50% was associated with an increased rate of brace treatment success ( i.e . stabilization or improvement of curves). Thus, in the treatment of AIS, patients should be advised to use highly corrective braces, in conjunction with PSSE since exercises have been found to help stabilize the curves during weaning of the brace. Presently, no specific PSSE can be recommended. Braces of high in-brace correction should be used in conjunction with PSSEs in the treatment of AIS. No specific PSSE can be recommended as comparison studies of the effectiveness of

  16. Attenuation correction for renal scintigraphy with 99mTc - DMSA: comparison between Raynaud and the geometric mean methods

    International Nuclear Information System (INIS)

    Argenta, J.; Brambilla, C.R.; Marques da Silva, A.M.

    2009-01-01

    The evaluation of the index of renal function (IF) requires soft-tissue attenuation correction. This paper investigates the impact on the IF when attenuation correction is applied using the Raynaud method and the geometric mean method in renal planar scintigraphy, using posterior and anterior views. The study was conducted with Monte Carlo simulated images of five GSF family voxel phantoms with different relative uptakes in each kidney, from normal (50%-50%) to pathological (10%-90%). The results showed that the Raynaud method corrects more efficiently in cases where the renal depth is close to that of the standard phantom. The geometric mean method showed similar results to the Raynaud method for the Baby, Child and Golem models. For the Helga and Donna models, the errors were above 20%, increasing with relative uptake. Further studies should be conducted to assess the influence of the standard phantom on the attenuation correction methods. (author)
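
    For orientation, the conventional geometric-mean depth correction combines conjugate anterior/posterior counts so that the (unknown) kidney depth cancels and only the total body thickness and an effective attenuation coefficient remain. The sketch below is a textbook formulation with placeholder numbers, not the specific implementation evaluated in the abstract.

```python
import numpy as np

def geometric_mean_counts(anterior, posterior, mu, body_thickness):
    """Depth-independent kidney counts from conjugate views.

    anterior, posterior : ROI counts from the anterior and posterior views
    mu                  : effective linear attenuation coefficient (1/cm)
    body_thickness      : patient thickness along the imaging axis (cm)

    With A = C*exp(-mu*d) and P = C*exp(-mu*(T-d)), the geometric mean
    sqrt(A*P) = C*exp(-mu*T/2) no longer depends on the kidney depth d.
    """
    gm = np.sqrt(anterior * posterior)
    return gm * np.exp(mu * body_thickness / 2.0)

# Relative uptake (left vs. right) then follows from the corrected counts
# (all numerical values here are hypothetical):
left = geometric_mean_counts(5200, 3100, mu=0.12, body_thickness=20.0)
right = geometric_mean_counts(4800, 2900, mu=0.12, body_thickness=20.0)
print(left / (left + right), right / (left + right))
```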

  17. Compensation Methods for Non-uniform and Incomplete Data Sampling in High Resolution PET with Multiple Scintillation Crystal Layers

    International Nuclear Information System (INIS)

    Lee, Jae Sung; Kim, Soo Mee; Lee, Dong Soo; Hong, Jong Hong; Sim, Kwang Souk; Rhee, June Tak

    2008-01-01

    To establish methods for sinogram formation and correction in order to appropriately apply the filtered backprojection (FBP) reconstruction algorithm to data acquired using a PET scanner with multiple scintillation crystal layers. Formats for raw PET data storage and methods for conversion from list-mode data to histogram and sinogram were optimized. To solve the various problems that occurred while the raw histogram was converted into a sinogram, an optimal sampling strategy and a sampling-efficiency correction method were investigated. Gap compensation methods that are unique to this system were also investigated. All the sinogram data were reconstructed using the 2D filtered backprojection algorithm and compared to estimate the improvements achieved by the correction algorithms. The optimal radial sampling interval and number of angular samples, in terms of the sampling theorem and the sampling-efficiency correction algorithm, were pitch/2 and 120, respectively. By applying the sampling-efficiency correction and gap compensation, artifacts and background noise on the reconstructed image could be reduced. A conversion method from the histogram to the sinogram was investigated for the FBP reconstruction of data acquired using multiple scintillation crystal layers. This method will be useful for the fast 2D reconstruction of multiple crystal layer PET data

  18. Effects of scatter and attenuation corrections on phantom and clinical brain SPECT

    International Nuclear Information System (INIS)

    Prando, S.; Robilotta, C.C.R.; Oliveira, M.A.; Alves, T.C.; Busatto Filho, G.

    2002-01-01

    Aim: The present work evaluated the effects of combinations of scatter and attenuation corrections on the analysis of brain SPECT. Materials and Methods: We studied images of the 3D Hoffman brain phantom and from a group of 20 depressive patients with confirmed cardiac insufficiency (CI) and 14 matched healthy controls (HC). Data were acquired with a Sophy-DST/SMV-GE dual-head camera after venous injection of 1110 MBq 99mTc-HMPAO. Two energy windows, 15% on 140 keV and 30% centered on 108 keV of the Compton distribution, were used to obtain corresponding sets of 128x128x128 projections. Tomograms were reconstructed using OSEM (2 iterations, 8 sub-sets) and a Metz filter (order 8, 4 pixels FWHM psf) and FBP with a Butterworth filter (order 10, frequency 0.7 Nyquist). Ten combinations of the Jaszczak correction (factors 0.3, 0.4 and 0.5) and the 1st-order Chang correction (μ=0.12 cm⁻¹ and 0.159 cm⁻¹) were applied to the phantom data. In all the phantom images, the contrast and signal-to-noise ratio between 3 ROIs (ventricle, occipital and thalamus) and the cerebellum, as well as the ratio between activities in gray and white matter, were calculated and compared with the expected values. The patients' images were corrected with k=0.5 and μ=0.159 cm⁻¹ and reconstructed with OSEM and the Metz filter. The images were inspected visually and blood flow comparisons between the CI and the HC groups were performed using Statistical Parametric Mapping (SPM). Results: The best results in the analysis of the contrast and activity ratios were obtained with k=0.5 and μ=0.159 cm⁻¹. The activity ratio results obtained with OSEM and the Metz filter are similar to those published by Laere et al. [J. Nucl. Med. 2000;41:2051-2062]. The correction method using an effective attenuation coefficient produced visually acceptable results, but they were inadequate for quantitative evaluation. The signal-to-noise ratio results are better with the OSEM than with the FBP reconstruction method. The corrections in the CI patients' studies
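
    The Jaszczak dual-energy-window correction referred to above subtracts a scaled fraction k of the Compton-window data from the photopeak-window data before reconstruction; the factors 0.3-0.5 correspond to the k values tested in the abstract. A minimal sketch, assuming the two window images are already acquired and aligned:

```python
import numpy as np

def jaszczak_scatter_correction(photopeak, compton, k=0.5):
    """Subtract k-scaled Compton-window counts from the photopeak window.

    photopeak, compton : projection (or reconstructed) images as arrays
    k                  : scatter multiplier, typically 0.3-0.5 for 99mTc
    Negative values produced by the subtraction are clipped to zero.
    """
    corrected = np.asarray(photopeak, dtype=float) - k * np.asarray(compton, dtype=float)
    return np.clip(corrected, 0.0, None)
```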

  19. Investigating the dominant corrections to the strong-stretching theory for dry polymeric brushes.

    Science.gov (United States)

    Matsen, M W

    2004-07-22

    The accuracy of strong-stretching theory (SST) is examined against a detailed comparison to self-consistent field theory (SCFT) on dry polymeric brushes with thicknesses of up to approximately 17 times the natural chain extension. The comparison provides the strongest evidence to date that SST represents the exact thick-brush limit of SCFT. More importantly, it allows us to assess the effectiveness of proposed finite-stretching corrections to SST. Including the entropy of the free ends is shown to rectify the most severe inaccuracies in SST. The proximal layer proposed by Likhtman and Semenov provides another significant improvement, and we identify one further effect of similar importance for which there is not yet an accurate treatment. Furthermore, our study provides a valuable means of rejecting mistaken refinements to SST, and indeed one such example is revealed. A proper treatment of finite-stretching corrections is vital to a wide range of phenomena that depend on a small excess free energy, such as autophobic dewetting and the interaction between opposing brushes.

  20. NNLO leptonic and hadronic corrections to Bhabha scattering and luminosity monitoring at meson factories

    Energy Technology Data Exchange (ETDEWEB)

    Carloni Calame, C. [Southampton Univ. (United Kingdom). School of Physics; Czyz, H.; Gluza, J.; Gunia, M. [Silesia Univ., Katowice (Poland). Dept. of Field Theory and Particle Physics; Montagna, G. [Pavia Univ. (Italy). Dipt. di Fisica Nucleare e Teorica; INFN, Sezione di Pavia (Italy); Nicrosini, O.; Piccinini, F. [INFN, Sezione di Pavia (Italy); Riemann, T. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Worek, M. [Wuppertal Univ. (Germany). Fachbereich C Physik

    2011-07-15

    Virtual fermionic N_f=1 and N_f=2 contributions to Bhabha scattering are combined with realistic real corrections at next-to-next-to-leading order in QED. The virtual corrections are determined by the package BHANNLOHF, and the real corrections with the Monte Carlo generators BHAGEN-1PH, HELAC-PHEGAS and EKHARA. Numerical results are discussed at the energies of, and with realistic cuts used at, the φ factory DAΦNE, the B factories PEP-II and KEK, and the charm/τ factory BEPC II. We compare these complete calculations with the approximate ones implemented in the generator BABAYAGA@NLO, which is used at meson factories to evaluate their luminosities. For realistic reference event selections we find agreement for the NNLO leptonic and hadronic corrections within 0.07% or better and conclude that, by comparison with the present experimental accuracy, they are well accounted for in the generator. (orig.)

  1. Modular correction method of bending elastic modulus based on sliding behavior of contact point

    International Nuclear Information System (INIS)

    Ma, Zhichao; Zhao, Hongwei; Zhang, Qixun; Liu, Changyi

    2015-01-01

    During the three-point bending test, sliding of the contact points between the specimen and the supports was observed; this sliding behavior was verified to affect the measurements of both deflection and span length, which directly affect the calculation of the bending elastic modulus. Based on the Hertz formula for the elastic contact deformation and a theoretical treatment of the sliding behavior of the contact point, a theoretical model that precisely describes the deflection and span length as a function of bending load was established. Moreover, a modular correction method for the bending elastic modulus was proposed; via comparison between the corrected elastic modulus of three materials (H63 copper-zinc alloy, AZ31B magnesium alloy and 2026 aluminum alloy) and the standard modulus obtained from standard uniaxial tensile tests, the universal feasibility of the proposed correction method was verified. Also, the ratio of corrected to raw elastic modulus showed a monotonically decreasing tendency as the raw elastic modulus of the materials increased. (technical note)
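
    For context, the uncorrected bending modulus that such a method adjusts is normally obtained from the slope of the load-deflection curve of a rectangular beam in three-point bending; the textbook expression (not the authors' corrected model) is

    $$E_{\mathrm{bend}} = \frac{L^{3}}{4\,b\,h^{3}}\,\frac{\Delta F}{\Delta \delta},$$

    where $L$ is the support span, $b$ the specimen width, $h$ the specimen thickness, and $\Delta F/\Delta\delta$ the slope of the load-deflection curve. The correction described above effectively replaces $L$ and $\delta$ with values adjusted for contact-point sliding and contact deformation.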

  2. Multiview Trajectory Mapping Using Homography with Lens Distortion Correction

    Directory of Open Access Journals (Sweden)

    Andrea Cavallaro

    2008-11-01

    Full Text Available We present a trajectory mapping algorithm for a distributed camera setting that is based on statistical homography estimation accounting for the distortion introduced by camera lenses. Unlike traditional approaches based on the direct linear transformation (DLT) algorithm and singular value decomposition (SVD), the planar homography estimation is derived from renormalization. In addition to this, the algorithm explicitly introduces a correction parameter to account for the nonlinear radial lens distortion, thus improving the accuracy of the transformation. We demonstrate the proposed algorithm by generating mosaics of the observed scenes and by registering the spatial locations of moving objects (trajectories) from multiple cameras on the mosaics. Moreover, we objectively compare the transformed trajectories with those obtained by SVD and least mean square (LMS) methods on standard datasets and demonstrate the advantages of the renormalization and the lens distortion correction.
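
    A common single-parameter radial distortion model (a generic sketch; the paper's correction parameter and its estimation procedure may differ) maps undistorted normalized coordinates to distorted ones as x_d = x_u (1 + k1 r^2). Undistorting a point before homography estimation then requires a small fixed-point iteration, since the model has no closed-form inverse:

```python
import numpy as np

def undistort_point(xd, yd, k1, iterations=10):
    """Invert the one-parameter radial model x_d = x_u * (1 + k1 * r_u^2).

    xd, yd : distorted, normalized image coordinates (principal point at 0,0)
    k1     : radial distortion coefficient
    Iterates x_u = x_d / (1 + k1 * r_u^2) starting from the distorted point.
    """
    xu, yu = xd, yd                      # initial guess: no distortion
    for _ in range(iterations):
        r2 = xu * xu + yu * yu
        factor = 1.0 + k1 * r2
        xu, yu = xd / factor, yd / factor
    return xu, yu

# Undistorted points would then be used when estimating the planar homography.
print(undistort_point(0.42, -0.17, k1=-0.25))
```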

  3. Multiview Trajectory Mapping Using Homography with Lens Distortion Correction

    Directory of Open Access Journals (Sweden)

    Kayumbi Gabin

    2008-01-01

    Full Text Available Abstract We present a trajectory mapping algorithm for a distributed camera setting that is based on statistical homography estimation accounting for the distortion introduced by camera lenses. Unlike traditional approaches based on the direct linear transformation (DLT) algorithm and singular value decomposition (SVD), the planar homography estimation is derived from renormalization. In addition to this, the algorithm explicitly introduces a correction parameter to account for the nonlinear radial lens distortion, thus improving the accuracy of the transformation. We demonstrate the proposed algorithm by generating mosaics of the observed scenes and by registering the spatial locations of moving objects (trajectories) from multiple cameras on the mosaics. Moreover, we objectively compare the transformed trajectories with those obtained by SVD and least mean square (LMS) methods on standard datasets and demonstrate the advantages of the renormalization and the lens distortion correction.

  4. "An integrative formal model of motivation and decision making: The MGPM*": Correction to Ballard et al. (2016).

    Science.gov (United States)

    2017-02-01

    Reports an error in "An integrative formal model of motivation and decision making: The MGPM*" by Timothy Ballard, Gillian Yeo, Shayne Loft, Jeffrey B. Vancouver and Andrew Neal ( Journal of Applied Psychology , 2016[Sep], Vol 101[9], 1240-1265). Equation A3 contained an error. The correct equation is provided in the erratum. (The following abstract of the original article appeared in record 2016-28692-001.) We develop and test an integrative formal model of motivation and decision making. The model, referred to as the extended multiple-goal pursuit model (MGPM*), is an integration of the multiple-goal pursuit model (Vancouver, Weinhardt, & Schmidt, 2010) and decision field theory (Busemeyer & Townsend, 1993). Simulations of the model generated predictions regarding the effects of goal type (approach vs. avoidance), risk, and time sensitivity on prioritization. We tested these predictions in an experiment in which participants pursued different combinations of approach and avoidance goals under different levels of risk. The empirical results were consistent with the predictions of the MGPM*. Specifically, participants pursuing 1 approach and 1 avoidance goal shifted priority from the approach to the avoidance goal over time. Among participants pursuing 2 approach goals, those with low time sensitivity prioritized the goal with the larger discrepancy, whereas those with high time sensitivity prioritized the goal with the smaller discrepancy. Participants pursuing 2 avoidance goals generally prioritized the goal with the smaller discrepancy. Finally, all of these effects became weaker as the level of risk increased. We used quantitative model comparison to show that the MGPM* explained the data better than the original multiple-goal pursuit model, and that the major extensions from the original model were justified. The MGPM* represents a step forward in the development of a general theory of decision making during multiple-goal pursuit. (PsycINFO Database Record (c

  5. Multiplicities and parton dynamics

    International Nuclear Information System (INIS)

    Knuteson, R.O.

    1987-01-01

    The production of strongly interacting particles from the annihilation of electrons and positrons at high energies is studied, with emphasis on the multiplicity, or number, of particles produced. A probabilistic branching model based on the leading log approximation in QCD is formulated to predict the evolution of particle number with the energy of collision. Direct integration of a master equation for the probabilities allows a comparison to the experimentally observed particle distribution. The production of strongly interacting particles from proton-antiproton collisions is also considered. A model for the production of particles from parton-parton collisions is presented and the growth in multiplicity with energy demonstrated

  6. NLO corrections to the photon impact factor: Combining real and virtual corrections

    International Nuclear Information System (INIS)

    Bartels, J.; Colferai, D.; Kyrieleis, A.; Gieseke, S.

    2002-08-01

    In this third part of our calculation of the QCD NLO corrections to the photon impact factor we combine our previous results for the real corrections with the singular pieces of the virtual corrections and present finite analytic expressions for the quark-antiquark-gluon intermediate state inside the photon impact factor. We begin with a list of the infrared singular pieces of the virtual correction, obtained in the first step of our program. We then list the complete results for the real corrections (longitudinal and transverse photon polarization). In the next step we define, for the real corrections, the collinear and soft singular regions and calculate their contributions to the impact factor. We then subtract the contribution due to the central region. Finally, we combine the real corrections with the singular pieces of the virtual corrections and obtain our finite results. (orig.)

  7. Gender Differences in Comparisons and Entitlement: Implications for Comparable Worth.

    Science.gov (United States)

    Major, Brenda

    1989-01-01

    Addresses the role of comparison processes in the persistence of the gender wage gap, its toleration by those disadvantaged by it, and resistance to comparable worth as a corrective strategy. Argues that gender segregation and undercompensation for women's jobs leads women to use different comparison standards when evaluating what they deserve.…

  8. Comprehensive strategy for corrective actions at the Savannah River Site General Separations Area

    International Nuclear Information System (INIS)

    Ebra, M.A.; Lewis, C.M.; Amidon, M.B.; McClain, L.K.

    1991-01-01

    The Savannah River Site (SRS), operated by the Westinghouse Savannah River Company for the United States Department of Energy, contains a number of waste disposal units that are currently in various stages of corrective action investigations, closures, and postclosure corrective actions. Many of these sites are located within a 40-square-kilometer area called the General Separations Area (GSA). The SRS has proposed to the regulatory agencies, the United States Environmental Protection Agency (EPA) and the South Carolina Department of Health and Environmental Control (SCDHEC), that groundwater investigations and corrective actions in this area be conducted under a comprehensive plan. The proposed plan would address the continuous nature of the hydrogeologic regime below the GSA and the potential for multiple sources of contamination. This paper describes the proposed approach

  9. Progress of the ITER Correction Coils in China

    CERN Document Server

    Wei, J; Han, S; Yu, X; Du, S; Li, C; Fang, C; Wang, L; Zheng, W; Liu, L; Wen, J; Li, H; Libeyre, P; Dolgetta, N; Cormany, C; Sgobba, S

    2014-01-01

    The ITER Correction Coils (CC) include three sets of six coils each, distributed symmetrically around the tokamak to correct error fields. Each pair of coils, located on opposite sides of the tokamak, is series connected with polarity to produce asymmetric fields. The manufacturing of these superconducting coils is undergoing qualification of the main fabrication processes: winding into multiple pancakes, welding helium inlet/outlet on the conductor jacket, turn and ground insulation, vacuum pressure impregnation, inserting into an austenitic stainless steel case, enclosure welding, and assembling the terminal service box. It has been preceded by an intense phase of R&D, trial tests, and final adjustment of the tooling. This paper mainly describes the progress at ASIPP on the CC manufacturing process before and during the qualification phase, and the status of the corresponding equipment ordered or designed for each process. Some test results for key components and procedures are also presented.

  10. Scattering Correction For Image Reconstruction In Flash Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Liangzhi; Wang, Mengqi; Wu, Hongchun; Liu, Zhouyu; Cheng, Yuxiong; Zhang, Hongbo [Xi' an Jiaotong Univ., Xi' an (China)

    2013-08-15

    Scattered photons cause blurring and distortions in flash radiography, reducing the accuracy of image reconstruction significantly. The effect of the scattered photons is taken into account and an iterative deduction of the scattered photons is proposed to amend the scattering effect for image restoration. In order to deduct the scattering contribution, the flux of scattered photons is estimated as the sum of two components. The single scattered component is calculated accurately together with the uncollided flux along the characteristic ray, while the multiple scattered component is evaluated using correction coefficients pre-obtained from Monte Carlo simulations. The arbitrary geometry pretreatment and ray tracing are carried out based on the customization of AutoCAD. With the above model, an Iterative Procedure for image restORation code, IPOR, is developed. Numerical results demonstrate that the IPOR code is much more accurate than the direct reconstruction solution without scattering correction and it has a very high computational efficiency.
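
    A schematic version of the iterative scatter deduction described above, as a sketch under simplifying assumptions: the single-scatter term is supplied by an external ray-tracing routine, and the multiple-scatter term is approximated as a precomputed coefficient times the single-scatter term, as the abstract suggests. The function and variable names are illustrative, not those of the IPOR code.

```python
import numpy as np

def deduct_scatter(measured, single_scatter_model, multi_coeff, n_iter=5):
    """Iteratively remove scattered photons from a radiograph.

    measured             : measured flux image (uncollided + scattered)
    single_scatter_model : callable estimating single scatter from a current
                           guess of the uncollided flux image
    multi_coeff          : multiple-to-single scatter ratio (e.g. from Monte Carlo)
    """
    uncollided = np.array(measured, dtype=float)   # start: ignore scatter
    for _ in range(n_iter):
        s1 = single_scatter_model(uncollided)
        s_total = s1 * (1.0 + multi_coeff)
        uncollided = np.clip(measured - s_total, 0.0, None)
    return uncollided
```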

  11. Scattering Correction For Image Reconstruction In Flash Radiography

    International Nuclear Information System (INIS)

    Cao, Liangzhi; Wang, Mengqi; Wu, Hongchun; Liu, Zhouyu; Cheng, Yuxiong; Zhang, Hongbo

    2013-01-01

    Scattered photons cause blurring and distortions in flash radiography, reducing the accuracy of image reconstruction significantly. The effect of the scattered photons is taken into account and an iterative deduction of the scattered photons is proposed to amend the scattering effect for image restoration. In order to deduct the scattering contribution, the flux of scattered photons is estimated as the sum of two components. The single scattered component is calculated accurately together with the uncollided flux along the characteristic ray, while the multiple scattered component is evaluated using correction coefficients pre-obtained from Monte Carlo simulations. The arbitrary geometry pretreatment and ray tracing are carried out based on the customization of AutoCAD. With the above model, an Iterative Procedure for image restORation code, IPOR, is developed. Numerical results demonstrate that the IPOR code is much more accurate than the direct reconstruction solution without scattering correction and it has a very high computational efficiency

  12. Comments on the Bagger-Lambert theory and multiple M2-branes

    International Nuclear Information System (INIS)

    Raamsdonk, Mark Van

    2008-01-01

    We study the SO(8) superconformal theory proposed recently by Bagger and Lambert as a possible worldvolume theory for multiple M2-branes. For their explicit example with gauge group SO(4), we rewrite the theory (originally formulated in terms of a three-algebra) as an ordinary SU(2) x SU(2) gauge theory with bifundamental matter. In this description, the parity invariance of the theory, required for a proper description of M2-branes, is clarified. We describe the subspace of scalar field configurations on which the potential vanishes, correcting an earlier claim. Finally, we point out, for general three-algebras, a difficulty in constructing the required set of superconformal primary operators which should be present in the correct theory describing multiple M2-branes.

  13. A Correction Method for UAV Helicopter Airborne Temperature and Humidity Sensor

    Directory of Open Access Journals (Sweden)

    Longqing Fan

    2017-01-01

    Full Text Available This paper presents a correction method for UAV helicopter airborne temperature and humidity measurements, including an error-correction scheme and a bias-calibration scheme. Because rotor downwash flow inevitably introduces measurement error into helicopter airborne sensors, the error-correction scheme relates the rotor-induced velocity to temperature and humidity by building the heat-balance equation for the platinum-resistance temperature sensor and the pressure-correction term for the humidity sensor. The induced velocity at a spatial point below the rotor disc plane can be calculated as the sum of the induced velocities excited by the centerline vortex, the rotor disk vortex, and the skewed cylinder vortex, based on generalized vortex theory. To minimize systematic biases, the bias-calibration scheme adopts multiple linear regression to achieve results that are systematically consistent with the tethered-balloon profiles. Two temperature and humidity sensors were mounted on the "Z-5" UAV helicopter in the field experiment. Overall, the result of applying the calibration method shows that the temperature and relative humidity obtained by the UAV helicopter closely align with the tethered-balloon profiles in providing measurements of temperature and humidity within the marine atmospheric boundary layer.
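
    The bias-calibration step can be pictured as an ordinary multiple linear regression of the reference (tethered-balloon) values on the raw airborne readings plus any auxiliary predictors. A minimal sketch with hypothetical variable names and values, not the paper's actual regression specification:

```python
import numpy as np

def fit_bias_calibration(raw, reference):
    """Least-squares fit reference = b0 + B @ raw for matched samples.

    raw       : (n_samples, n_features) raw UAV readings, e.g. [T_raw, RH_raw]
    reference : (n_samples,) matched tethered-balloon values for one variable
    Returns the coefficient vector including the intercept.
    """
    X = np.hstack([np.ones((raw.shape[0], 1)), raw])
    coef, *_ = np.linalg.lstsq(X, reference, rcond=None)
    return coef

def apply_bias_calibration(raw, coef):
    X = np.hstack([np.ones((raw.shape[0], 1)), raw])
    return X @ coef

# Hypothetical example: calibrate temperature using raw T and RH as predictors.
raw = np.array([[24.1, 71.0], [22.8, 75.5], [21.5, 80.2], [20.9, 83.0]])
ref_T = np.array([23.6, 22.4, 21.2, 20.7])
coef = fit_bias_calibration(raw, ref_T)
print(apply_bias_calibration(raw, coef))
```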

  14. Corrective Jaw Surgery

    Medline Plus

    Full Text Available Orthognathic surgery is performed to correct the misalignment ...

  15. Correction of diagnostic x-ray spectra measured with CdTe and CdZnTe detectors

    Energy Technology Data Exchange (ETDEWEB)

    Matsumoto, M [Osaka Univ., Suita (Japan). Medical School; Kanamori, H; Toragaito, T; Taniguchi, A

    1996-07-01

    We modified the stripping-procedure formula presented by E. Di Castor et al. We added Compton scattering and separated the K_α radiation of Cd and Te (23 and 27 keV, respectively). Using the new stripping procedure, diagnostic x-ray spectra (object: 4 mm Al) at tube voltages of 50 kV to 100 kV for CdTe and CdZnTe detectors are corrected and compared with the corresponding spectra for a Ge detector. The corrected spectra for the CdTe and CdZnTe detectors coincide with those for the Ge detector at tube voltages below 70 kV, but the corrected spectra at tube voltages above 70 kV do not. The reason is incomplete correction for the full-energy-peak efficiencies of real CdTe and CdZnTe detectors. (J.P.N.)

  16. Multiple preequilibrium decay processes

    International Nuclear Information System (INIS)

    Blann, M.

    1987-11-01

    Several treatments of multiple preequilibrium decay are reviewed with emphasis on the exciton and hybrid models. We show the expected behavior of this decay mode as a function of incident nucleon energy. The algorithms used in the hybrid model treatment are reviewed, and comparisons are made between predictions of the hybrid model and a broad range of experimental results. 24 refs., 20 figs

  17. Design of Service Net based Correctness Verification Approach for Multimedia Conferencing Service Orchestration

    Directory of Open Access Journals (Sweden)

    Cheng Bo

    2012-02-01

    Full Text Available Multimedia conferencing is increasingly becoming a very important and popular application over the Internet. Due to the complexity of asynchronous communication and the need to handle large, dynamically concurrent processes, achieving sufficient correctness guarantees and supporting effective verification methods for multimedia conferencing service orchestration is an extremely difficult and challenging problem. In this paper, we first present the Business Process Execution Language (BPEL) based conferencing service orchestration, and mainly focus on the service net based correctness verification approach for multimedia conferencing service orchestration, which automatically translates the BPEL-based service orchestration into a corresponding Petri net model using the Petri Net Markup Language (PNML); we also present the BPEL service net reduction rules and the multimedia conferencing service orchestration correctness verification algorithms. We perform the correctness analysis and verification using service net properties such as safeness, reachability and deadlocks, and also provide an automated support tool for the formal analysis and soundness verification of the multimedia conferencing service orchestration scenarios. Finally, we give a comparison and evaluation.

  18. Online versus offline corrections: opposition or evolution? A comparison of two electronic portal imaging approaches for locally advanced prostate cancer

    International Nuclear Information System (INIS)

    Middleton, Mark; Medwell, Steve; Wong, Jacky; Lynton-Moll, Mary; Rolfo, Aldo; See Andrew; Joon, Michael Lim

    2006-01-01

    Given the onset of dose escalation and increased planning target volume (PTV) conformity, the requirement for accurate field placement has also increased. This study compares and contrasts a combination offline/online electronic portal imaging (EPI) device correction with a complete online correction protocol and assesses their relative effectiveness in managing set-up error. Field placement data were collected on patients receiving radical radiotherapy to the prostate. Ten patients were on an initial combination offline/online correction protocol, followed by another 10 patients on a complete online correction protocol. Analysis of 1480 portal images from 20 patients was carried out, illustrating that a combination offline/online approach can be very effective in dealing with the systematic component of set-up error, but it is only when a complete online correction protocol is employed that both systematic and random set-up errors can be managed. EPI protocols have now evolved considerably, and online corrections are a highly effective tool in the quest for more accurate field placement. This study discusses the clinical workload impact issues that need to be addressed in order for an online correction protocol to be employed, and addresses many of the practical issues that need to be resolved. Management of set-up error is paramount when seeking to dose escalate, and only an online correction protocol can manage both components of set-up error. Both systematic and random errors are important and can be effectively and efficiently managed

  19. Quantum corrections to Bekenstein–Hawking black hole entropy and gravity partition functions

    International Nuclear Information System (INIS)

    Bytsenko, A.A.; Tureanu, A.

    2013-01-01

    Algebraic aspects of the computation of partition functions for quantum gravity and black holes in AdS_3 are discussed. We compute the sub-leading quantum corrections to the Bekenstein–Hawking entropy. It is shown that the quantum corrections to the classical result can be included systematically by making use of the comparison with conformal field theory partition functions, via the AdS_3/CFT_2 correspondence. This leads to a better understanding of the role of modular and spectral functions, from the point of view of the representation theory of infinite-dimensional Lie algebras. Besides, the sum of known quantum contributions to the partition function can be presented in a closed form, involving the Patterson–Selberg spectral function. These contributions can be reproduced in a holomorphically factorized theory whose partition functions are associated with the formal characters of the Virasoro modules. We propose a spectral function formulation for quantum corrections to the elliptic genus from supergravity states

  20. Needed improvements in the development of systemic corrective actions.

    Energy Technology Data Exchange (ETDEWEB)

    Campisi, John A.

    2009-07-01

    There are indications that corrective actions, as implemented at Sandia National Laboratories, are not fully adequate. Review of independent audits spanning multiple years provides evidence of recurring issues within the same or similar operations and programs. Several external audits have directly called into question the ability of Sandia's assessment and evaluation processes to prevent recurrence. Examples of repeated findings include lockout/tagout programs, local exhaust ventilation controls and radiological controls. Recurrence clearly shows that there are underlying systemic factors that are not being adequately addressed by corrective actions stemming from causal analyses. Information suggests that improvements in the conduct of causal analyses and, more importantly, in the development of subsequent corrective actions are warranted. Current methodologies include the Management Oversight and Risk Tree, developed in the early 1970s, and Systemic Factors Analysis. Recommendations for improvements include review of other causal analysis systems, training, improved formality of operations, improved documentation, and a corporate method that uses truly systemic solutions. This report was written some years ago and is being published now to form the foundation for current, follow-on reports being developed. Some outdated material is recognized but is retained for report completeness.

  1. Electroweak corrections in the hadronic production of heavy quarks; Elektroschwache Korrekturen bei der hadronischen Erzeugung schwerer Quarks

    Energy Technology Data Exchange (ETDEWEB)

    Scharf, Andreas Bernhard

    2008-06-27

    In this thesis the electroweak corrections to top-quark pair production and to the production of bottom-quark jets were studied; in particular, mixed one-loop amplitudes as well as the interferences of electroweak Born amplitudes and one-loop QCD corrections were calculated. These corrections are of great importance for the experimental analyses at the LHC. For both processes compact analytical results for the virtual and real corrections were calculated. For the Tevatron and the LHC the corrections to the total cross section for top-quark pair production were determined. At the Tevatron these corrections amount to only a few per mille and are therefore presumably negligible for the total cross section. For the LHC these corrections amount to a few percent and are thus of the same order of magnitude as the expected next-to-leading-order QCD corrections to the total cross section. For the differential distributions in M_(t anti-t) and p_T the relative corrections lie between +4% and -6%, depending on the Higgs mass. A comparison between the integrated distributions in p_T and M_(t anti-t) and the estimated statistical error shows that these corrections are presently not important. At the LHC, depending on the Higgs mass, large negative corrections of up to -15% for the M_(t anti-t) distribution and -20% for the p_T distribution were found at M_(t anti-t)=5 TeV (p_T=2 TeV). The comparison between the integrated distributions and the statistical error shows that the weak O(α) corrections at the LHC are phenomenologically relevant. This is especially valid for the search for new physics at large M_(t anti-t). For bottom-jet production the weak O(α) corrections to the differential and integrated p_T distribution were calculated for a single and a two-fold b-tag. At the Tevatron the corrections for a single b-tag for the

  2. Correction of Microplate Data from High-Throughput Screening.

    Science.gov (United States)

    Wang, Yuhong; Huang, Ruili

    2016-01-01

    High-throughput screening (HTS) makes it possible to collect cellular response data from a large number of cell lines and small molecules in a timely and cost-effective manner. The errors and noises in the microplate-formatted data from HTS have unique characteristics, and they can be generally grouped into three categories: run-wise (temporal, multiple plates), plate-wise (background pattern, single plate), and well-wise (single well). In this chapter, we describe a systematic solution for identifying and correcting such errors and noises, based mainly on pattern recognition and digital signal processing technologies.
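
    As one concrete example of a plate-wise (background-pattern) correction, row and column trends can be removed per plate and the residuals standardized robustly with a median/MAD scale. The sketch below is an illustrative correction of this kind, not the chapter's specific pipeline (a full B-score would additionally iterate the row/column removal as a median polish):

```python
import numpy as np

def correct_plate(plate):
    """Remove row/column background trends from one microplate, then
    standardize robustly (median/MAD). `plate` is a 2D array of raw signals."""
    plate = np.asarray(plate, dtype=float)
    residual = plate - np.median(plate)
    residual -= np.median(residual, axis=1, keepdims=True)   # row effects
    residual -= np.median(residual, axis=0, keepdims=True)   # column effects
    mad = np.median(np.abs(residual - np.median(residual)))
    return residual / (1.4826 * mad + 1e-12)
```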

  3. Loop corrections and a new test of inflation

    CERN Document Server

    Tasinato, Gianmassimo; Nurmi, Sami; Wands, David

    2013-01-01

    Inflation is the leading paradigm for explaining the origin of primordial density perturbations and the observed temperature fluctuations of the cosmic microwave background. However many open questions remain, in particular whether one or more scalar fields were present during inflation and how they contributed to the primordial density perturbation. We propose a new observational test of whether multiple fields, or only one (not necessarily the inflaton) generated the perturbations. We show that our test, relating the bispectrum and trispectrum, is protected against loop corrections at all orders, unlike previous relations.

  4. Monte Carlo sampling of fission multiplicity.

    Energy Technology Data Exchange (ETDEWEB)

    Hendricks, J. S. (John S.)

    2004-01-01

    Two new methods have been developed for fission multiplicity modeling in Monte Carlo calculations. The traditional method of sampling neutron multiplicity from fission is to sample the number of neutrons above or below the average. For example, if there are 2.7 neutrons per fission, three would be chosen 70% of the time and two would be chosen 30% of the time. For many applications, particularly ³He coincidence counting, a better estimate of the true number of neutrons per fission is required. Generally, this number is estimated by sampling a Gaussian distribution about the average. However, because the tail of the Gaussian distribution is negative and negative neutrons cannot be produced, a slight positive bias can be found in the average value. For criticality calculations, the result of rejecting the negative neutrons is an increase in k_eff of 0.1% in some cases. For spontaneous fission, where the average number of neutrons emitted from fission is low, the error also can be unacceptably large. If the Gaussian width approaches the average number of fissions, 10% too many fission neutrons are produced by not treating the negative Gaussian tail adequately. The first method to treat the Gaussian tail is to determine a correction offset, which then is subtracted from all sampled values of the number of neutrons produced. This offset depends on the average value for any given fission at any energy and must be computed efficiently at each fission from the non-integrable error function. The second method is to determine a corrected zero point so that all neutrons sampled between zero and the corrected zero point are killed to compensate for the negative Gaussian tail bias. Again, the zero point must be computed efficiently at each fission. Both methods give excellent results with a negligible computing time penalty. It is now possible to include the full effects of fission multiplicity without the negative Gaussian tail bias.
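
    The bias described above is easy to reproduce: sampling ν from a Gaussian about the average and simply clipping (or rejecting) the negative tail pushes the sample mean upward. The sketch below shows a naive clipped sampler and a crude, empirically estimated offset-style compensation; it is illustrative only and not the algorithm implemented in the production code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_nu_clipped(nubar, width, n):
    """Naive sampler: Gaussian about nubar, negatives clipped to zero,
    rounded to integer neutron numbers. Clipping biases the mean upward."""
    nu = rng.normal(nubar, width, n)
    return np.rint(np.clip(nu, 0.0, None))

def sample_nu_offset(nubar, width, n):
    """Crude compensation: shift the Gaussian down by the mean excess that
    clipping introduces, estimated empirically here rather than analytically."""
    excess = np.mean(sample_nu_clipped(nubar, width, 200000)) - nubar
    nu = rng.normal(nubar - excess, width, n)
    return np.rint(np.clip(nu, 0.0, None))

for sampler in (sample_nu_clipped, sample_nu_offset):
    print(sampler.__name__, np.mean(sampler(nubar=2.7, width=1.1, n=200000)))
```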

  5. Localization Performance of Multiple Vibrotactile Cues on Both Arms.

    Science.gov (United States)

    Wang, Dangxiao; Peng, Cong; Afzal, Naqash; Li, Weiang; Wu, Dong; Zhang, Yuru

    2018-01-01

    To present information using vibrotactile stimuli in wearable devices, it is fundamental to understand human performance in localizing vibrotactile cues across the skin surface. In this paper, we studied human ability to identify the locations of multiple vibrotactile cues activated simultaneously on both arms. Two haptic bands were mounted in proximity to the elbow and shoulder joints on each arm, and two vibrotactile motors were mounted on each band to provide vibration cues to the dorsal and palmar side of the arm. The localization performance under four conditions was compared, with the number of simultaneously activated cues varying from one to four in each condition. Experimental results illustrate that the rate of correct localization decreases linearly with the increase in the number of activated cues. It was 27.8 percent for three activated cues, and became even lower for four activated cues. An analysis of the correct rate and error patterns shows that the layout of vibrotactile cues can have significant effects on the localization performance of multiple vibrotactile cues. These findings might provide guidelines for using vibrotactile cues to guide the simultaneous motion of multiple joints on both arms.

  6. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies.

    NARCIS (Netherlands)

    Kromhout, D.

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the

  7. Are multiple-trial experiments appropriate for eyewitness identification studies? Accuracy, choosing, and confidence across trials.

    Science.gov (United States)

    Mansour, J K; Beaudry, J L; Lindsay, R C L

    2017-12-01

    Eyewitness identification experiments typically involve a single trial: A participant views an event and subsequently makes a lineup decision. As compared to this single-trial paradigm, multiple-trial designs are more efficient, but significantly reduce ecological validity and may affect the strategies that participants use to make lineup decisions. We examined the effects of a number of forensically relevant variables (i.e., memory strength, type of disguise, degree of disguise, and lineup type) on eyewitness accuracy, choosing, and confidence across 12 target-present and 12 target-absent lineup trials (N = 349; 8,376 lineup decisions). The rates of correct rejections and choosing (across both target-present and target-absent lineups) did not vary across the 24 trials, as reflected by main effects or interactions with trial number. Trial number had a significant but trivial quadratic effect on correct identifications (OR = 0.99) and interacted significantly, but again trivially, with disguise type (OR = 1.00). Trial number did not significantly influence participants' confidence in correct identifications, confidence in correct rejections, or confidence in target-absent selections. Thus, multiple-trial designs appear to have minimal effects on eyewitness accuracy, choosing, and confidence. Researchers should thus consider using multiple-trial designs for conducting eyewitness identification experiments.

  8. Frequency Correction for MIRO Chirp Transformation Spectroscopy Spectrum

    Science.gov (United States)

    Lee, Seungwon

    2012-01-01

    This software processes the flyby spectra of the Chirp Transform Spectrometer (CTS) of the Microwave Instrument for Rosetta Orbiter (MIRO). The tool corrects the effect of the Doppler shift and the local-oscillator (LO) frequency shift during the flyby mode of MIRO operations. The frequency correction for CTS flyby spectra is performed, and multiple spectra are integrated into a high signal-to-noise averaged spectrum at the rest-frame RF frequency. This innovation also generates the 8 molecular line spectra by dividing the continuous 4,096-channel CTS spectra. The 8 line spectra can then be readily used for scientific investigations. A spectral line that is at its rest frequency in the frame of the Earth or an asteroid will be observed with a time-varying Doppler shift as seen by MIRO. The frequency shift is toward higher RF frequencies on approach, and toward lower RF frequencies on departure. The magnitude of the shift depends on the flyby velocity. The result of the time-varying Doppler shift is that an observed spectral line will be seen to move from channel to channel in the CTS spectrometer. The direction of the shift (higher or lower frequency) in the spectrometer depends on the spectral line frequency under consideration. In order to analyze the flyby spectra, two steps are required. First, individual spectra must be corrected for the Doppler shift so that individual spectra can be superimposed at the same rest frequency for integration purposes. Second, a correction needs to be applied to the CTS spectra to account for the LO frequency shifts that are applied to asteroid mode.
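
    The rest-frame correction itself amounts to rescaling the frequency axis by the line-of-sight velocity and resampling onto a common grid before co-adding. A minimal non-relativistic sketch, not the MIRO pipeline, with the sign convention assumed here that positive velocity means the target is receding (observed frequencies redshifted) and that the frequency axes are monotonically increasing:

```python
import numpy as np

C = 299792.458  # speed of light, km/s

def to_rest_frame(freq_obs, spectrum, v_los_kms, freq_grid):
    """Shift an observed spectrum to the rest frame and resample it.

    freq_obs  : observed frequency axis (same units as freq_grid, increasing)
    spectrum  : brightness values on freq_obs
    v_los_kms : line-of-sight velocity (positive = receding)
    freq_grid : common rest-frame grid used for averaging many spectra
    """
    freq_rest = freq_obs * (1.0 + v_los_kms / C)  # undo the first-order Doppler shift
    return np.interp(freq_grid, freq_rest, spectrum)

# Averaging several Doppler-corrected spectra on the common grid:
# avg = np.mean([to_rest_frame(f, s, v, grid) for (f, s, v) in scans], axis=0)
```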

  9. Social contagion of correct and incorrect information in memory.

    Science.gov (United States)

    Rush, Ryan A; Clark, Steven E

    2014-01-01

    The present study examines how discussion between individuals regarding a shared memory affects their subsequent individual memory reports. In three experiments pairs of participants recalled items from photographs of common household scenes, discussed their recall with each other, and then recalled the items again individually. Results showed that after the discussion, individuals recalled more correct items and more incorrect items, with very small non-significant increases, or no change, in recall accuracy. The information people were exposed to during the discussion was generally accurate, although not as accurate as individuals' initial recall. Individuals incorporated correct exposure items into their subsequent recall at a higher rate than incorrect exposure items. Participants who were initially more accurate became less accurate, and initially less-accurate participants became more accurate as a result of their discussion. Comparisons to no-discussion control groups suggest that the effects were not simply the product of repeated recall opportunities or self-cueing, but rather reflect the transmission of information between individuals.

  10. Bias-corrected estimation in potentially mildly explosive autoregressive models

    DEFF Research Database (Denmark)

    Haufmann, Hendrik; Kruse, Robinson

    This paper provides a comprehensive Monte Carlo comparison of different finite-sample bias-correction methods for autoregressive processes. We consider classic situations where the process is either stationary or exhibits a unit root. Importantly, the case of mildly explosive behaviour is studied... that the indirect inference approach offers a valuable alternative to other existing techniques. Its performance (measured by its bias and root mean squared error) is balanced and highly competitive across many different settings. A clear advantage is its applicability for mildly explosive processes. In an empirical...
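
    To fix ideas, the simplest finite-sample correction in this family is the classic first-order adjustment of the least-squares AR(1) coefficient (the Kendall/Marriott-Pope approximation), which applies to the stationary case with an intercept and is not designed for the unit-root or mildly explosive cases the paper studies. A minimal sketch under those assumptions:

```python
import numpy as np

def ar1_ols(y):
    """OLS estimate of rho in y_t = c + rho * y_{t-1} + e_t."""
    x, ylead = y[:-1], y[1:]
    X = np.column_stack([np.ones_like(x), x])
    (_, rho), *_ = np.linalg.lstsq(X, ylead, rcond=None)
    return rho

def ar1_bias_corrected(y):
    """First-order correction: E[rho_hat] ~ rho - (1 + 3*rho)/T, so take
    rho_tilde = rho_hat + (1 + 3*rho_hat)/T (stationary |rho| < 1 only)."""
    T = len(y) - 1
    rho_hat = ar1_ols(y)
    return rho_hat + (1.0 + 3.0 * rho_hat) / T
```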

  11. Modified Hitschfeld-Bordan Equations for Attenuation-Corrected Radar Rain Reflectivity: Application to Nonuniform Beamfilling at Off-Nadir Incidence

    Science.gov (United States)

    Meneghini, Robert; Liao, Liang

    2013-01-01

    As shown by Takahashi et al., multiple path attenuation estimates over the field of view of an airborne or spaceborne weather radar are feasible for off-nadir incidence angles. This follows from the fact that the surface reference technique, which provides path attenuation estimates, can be applied to each radar range gate that intersects the surface. This study builds on this result by showing that three of the modified Hitschfeld-Bordan estimates for the attenuation-corrected radar reflectivity factor can be generalized to the case where multiple path attenuation estimates are available, thereby providing a correction to the effects of nonuniform beamfilling. A simple simulation is presented showing some strengths and weaknesses of the approach.
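
    For reference, the classical (unmodified) Hitschfeld-Bordan solution, under the usual power-law assumption $k = \alpha Z^{\beta}$ relating the specific attenuation $k$ (in dB per unit length) to the true reflectivity $Z$, expresses the attenuation-corrected reflectivity in terms of the measured profile $Z_m$ as

    $$Z(r) \;=\; \frac{Z_m(r)}{\left[\,1 \;-\; 0.2\ln(10)\,\beta \int_0^{r} \alpha\, Z_m^{\beta}(s)\, ds \right]^{1/\beta}}.$$

    This is the standard textbook form, not the paper's modified estimators; the modifications discussed above constrain such solutions with the multiple path-attenuation estimates obtained from the surface reference technique.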

  12. Autologous hematopoietic stem cell transplantation in relapsing-remitting multiple sclerosis: comparison with secondary progressive multiple sclerosis.

    Science.gov (United States)

    Casanova, Bonaventura; Jarque, Isidro; Gascón, Francisco; Hernández-Boluda, Juan Carlos; Pérez-Miralles, Francisco; de la Rubia, Javier; Alcalá, Carmen; Sanz, Jaime; Mallada, Javier; Cervelló, Angeles; Navarré, Arantxa; Carcelén-Gadea, María; Boscá, Isabel; Gil-Perotin, Sara; Solano, Carlos; Sanz, Miguel Angel; Coret, Francisco

    2017-07-01

    The main objective of our work is to describe the long-term results of myeloablative autologous hematopoietic stem cell transplant (AHSCT) in multiple sclerosis patients. Patients who failed conventional therapies for multiple sclerosis (MS) underwent an approved protocol for AHSCT, which consisted of peripheral blood stem cell mobilization with cyclophosphamide and granulocyte colony-stimulating factor (G-CSF), followed by a conditioning regimen of BCNU, Etoposide, Ara-C, Melphalan IV, plus Rabbit Thymoglobulin. Thirty-eight MS patients have been transplanted since 1999. Thirty-one patients have been followed for more than 2 years (mean 8.4 years). There were 22 relapsing-remitting multiple sclerosis (RRMS) patients and 9 secondary progressive multiple sclerosis (SPMS) patients. There were no deaths related to AHSCT. A total of 10 patients (32.3%) had at least one relapse during post-AHSCT evolution, 6 patients in the RRMS group (27.2%) and 4 in the SPMS group (44.4%). After AHSCT, 7 patients (22.6%) experienced progression of disability, all with the SP form. By contrast, no patients with RRMS experienced worsening of disability after a median follow-up of 5.4 years; 60% of them showed a sustained reduction in disability (SRD), defined as an improvement of 1.0 point in the Expanded Disability Status Scale (EDSS) sustained for 6 months (0.5 in cases of EDSS ≥ 5.5). The only clinical variable that predicted a poor response to AHSCT was a high EDSS in the year before transplant. AHSCT using the BEAM-ATG scheme is safe and efficacious in controlling the aggressive forms of RRMS.

  13. Comparison of Parenting Style in Single Child and Multiple Children Families

    Directory of Open Access Journals (Sweden)

    Masoumeh Alidosti

    2016-06-01

    Full Text Available Background and Purpose: Family is the first and the most important structure in human civilization in which social lifestyles, mutual understanding, and compatibility are learned. Studies have shown that parenting style is one of the most important and fundamental factors in personality development. The purpose of this study was the comparison of parenting style in single-child and multiple-children families. Materials and Methods: In this study, in total, 152 mothers from Andimeshk city, Iran, were selected by random sampling. Data were collected from a health-care center that was chosen randomly; mothers who had 5-7-year-old children were enrolled in this study. The data collection tool was a questionnaire which investigates permissive, authoritative, and authoritarian parenting styles in parents. After data entry into SPSS software, the collected data were analyzed by ANOVA, independent t-test, and Pearson correlation test. Results: The mean age of the participants was 32.71 ± 5.39 years. 69 mothers (45.4%) had one child, 53 (34.9%) had 2 children, and 30 mothers (19.7%) had 3 or more children. The mean score of the permissive parenting style was 19.97 ± 5.13 in single-child families; the mean scores of the authoritative (19.56 ± 4.70) and authoritarian (34.50 ± 2.81) parenting styles differed significantly (P < 0.050). Conclusion: According to the results of this study, it seems that having more children would make parents more logical and paves the way for bringing up children. Therefore, it is recommended to plan some educational programs about this issue for parents.

  14. Patient motion correction for single photon emission computed tomography (SPECT)

    International Nuclear Information System (INIS)

    Geckle, W.J.; Becker, L.C.; Links, J.M.; Frank, T.

    1986-01-01

    An investigation has been conducted to develop and validate techniques for the correction of projection images in SPECT studies of the myocardium subject to misalignment due to voluntary patient motion. The problem is frequently encountered due to the uncomfortable position the patient must assume during the 30 minutes required to obtain a 180 degree set of projection images. The reconstruction of misaligned projections can lead to troublesome artifacts in reconstructed images and degrade the diagnostic potential of the procedure. Significant improvement in the quality of heart reconstructions has been realized with the implementation of an algorithm to provide detection of and correction for patient motion. Normal, involuntary motion is not corrected for, however, since such movement is below the spatial resolution of the thallium imaging system under study. The algorithm is based on a comparison of the positions of an object in a set of projection images to the known, sinusoidal trajectory of an off-axis fixed point in space. Projection alignment, therefore, is achieved by shifting the position of a point or set of points in a projection image to the sinusoid of a fixed position in space

  15. Quantum spin correction scheme based on spin-correlation functional for Kohn-Sham spin density functional theory

    International Nuclear Information System (INIS)

    Yamanaka, Shusuke; Takeda, Ryo; Nakata, Kazuto; Takada, Toshikazu; Shoji, Mitsuo; Kitagawa, Yasutaka; Yamaguchi, Kizashi

    2007-01-01

    We present a simple quantum correction scheme for ab initio Kohn-Sham spin density functional theory (KS-SDFT). This scheme is based on a mapping from ab initio results to a Heisenberg model Hamiltonian. The effective exchange integral is estimated by using energies and spin correlation functionals calculated by ab initio KS-SDFT. The quantum-corrected spin-correlation functional can be designed to cover specific quantum spin fluctuations. In this article, we present a simple correction for dinuclear compounds having multiple bonds. The computational results are discussed in relation to multireference (MR) DFT, by which we treat the quantum many-body effects explicitly.
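
    One widely used mapping of this kind, quoted here only as background (the article's quantum-corrected functional generalizes beyond it), is the approximate spin-projection estimate of the effective exchange integral from broken-symmetry (BS) and high-spin (HS) solutions,

    $$J_{ab} \;=\; \frac{E_{\mathrm{BS}} - E_{\mathrm{HS}}}{\langle \hat{S}^{2} \rangle_{\mathrm{HS}} - \langle \hat{S}^{2} \rangle_{\mathrm{BS}}},$$

    written for a Heisenberg Hamiltonian of the form $H = -2J_{ab}\,\hat{S}_a\!\cdot\!\hat{S}_b$; the sign and the factor of two depend on the Hamiltonian convention adopted.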

  16. Energy evolution of the moments of the hadron distribution in QCD jets including NNLL resummation and NLO running-coupling corrections

    CERN Document Server

    Perez-Ramos, Redamy

    2014-01-01

    The moments of the single inclusive momentum distribution of hadrons in QCD jets are studied in the next-to-modified-leading-log approximation (NMLLA) including next-to-leading-order (NLO) corrections to the strong coupling alpha_s. The evolution equations are solved using a distorted Gaussian parametrisation, which successfully reproduces the spectrum of charged hadrons of jets measured in e+e- collisions. The energy dependencies of the maximum peak, multiplicity, width, kurtosis and skewness of the jet hadron distribution are computed analytically. Comparisons of all the existing jet data measured in e+e- collisions in the range sqrt(s)~2-200 GeV to the NMLLA+NLO* predictions allow one to extract a value of the QCD parameter Lambda_QCD and the associated two-loop coupling constant at the Z resonance, alpha_s(m_Z^2) = 0.1195 +/- 0.0022, in excellent numerical agreement with the current world average obtained using other methods.

  17. Attention should be given to multiplicity issues in systematic reviews

    DEFF Research Database (Denmark)

    Bender, R.; Bunce, C.; Clarke, M.

    2008-01-01

    OBJECTIVE: The objective of this paper is to describe the problem of multiple comparisons in systematic reviews and to provide some guidelines on how to deal with it in practice. STUDY DESIGN AND SETTING: We describe common reasons for multiplicity in systematic reviews, and present some examples...

  18. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    Science.gov (United States)

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and was used to calculate X-ray scattering signals in both the forward direction and the cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments that were designed to emulate the image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images quickly converged toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
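
    The iterative framework can be summarized, in schematic form, as alternating between reconstruction and a model-based scatter estimate subtracted from the raw projections. This is a structural sketch only; `reconstruct` and `scatter_model` stand in for the actual interior-CT reconstruction and the paper's analytic forward/cross-scatter model.

```python
import numpy as np

def iterative_scatter_correction(raw_projections, reconstruct, scatter_model,
                                 n_iter=3):
    """Generic loop: image -> scatter estimate -> corrected projections -> image.

    raw_projections : measured projections containing primary + scatter
    reconstruct     : callable mapping projections to an image
    scatter_model   : callable mapping the current image to estimated
                      forward/cross scatter in projection space
    """
    projections = np.array(raw_projections, dtype=float)
    image = reconstruct(projections)
    for _ in range(n_iter):
        scatter = scatter_model(image)
        projections = np.clip(raw_projections - scatter, 0.0, None)
        image = reconstruct(projections)
    return image
```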

  19. 'TrueCoinc' software utility for calculation of the true coincidence correction

    International Nuclear Information System (INIS)

    Sudar, S.

    2002-01-01

    The true coincidence correction plays an important role in the overall accuracy of γ-ray spectrometry, especially in the case of present-day high-volume detectors. The calculation of true coincidence corrections needs detailed nuclear structure information. Recently these data have become available in computerized form from the Nuclear Data Centers through the Internet or on the CD-ROM of the Table of Isotopes. The aim has been to develop software for this calculation, using available databases for the level data. The user has to supply only the parameters of the detector to be used. The new computer program runs under the Windows 95/98 operating system. In the framework of the project a new formula was prepared for calculating the summing-out correction and the intensity of alias lines (sum peaks). The file converter for reading the ENSDF-2 type files was completed. Reading and converting the original ENSDF was added to the program. A computer-accessible database of X-ray energies and intensities was created. The X-ray emissions were taken into account in the 'summing-out' calculation. Calculation of the true coincidence 'summing-in' correction was done. The output was arranged to show the two types of corrections independently and to calculate the final correction as the product of the two. A minimal intensity threshold can be set to show the final list only for the strongest lines. The calculation takes into account all the transitions, independently of the threshold. The program calculates the intensity of X-rays (K, L lines). The true coincidence corrections for X-rays were calculated. The intensities of the alias γ lines were calculated. (author)
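
    As a reminder of what such corrections look like in the simplest case: for a transition γ₁ that feeds a level de-excited by a coincident transition γ₂, the observed full-energy-peak area of γ₁ is reduced approximately by the factor

    $$\frac{N_{1}^{\mathrm{obs}}}{N_{1}} \;\approx\; 1 - p_{2}\,\varepsilon_{t}(\gamma_{2}),$$

    where $p_{2}$ is the probability that γ₂ is emitted in coincidence with γ₁ and $\varepsilon_{t}(\gamma_{2})$ is the total detection efficiency at the energy of γ₂, so the correction multiplies the measured area by the inverse of this factor. This is the textbook two-step-cascade expression; the program described above handles the general multi-level case, sum peaks, and X-ray coincidences automatically.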

  20. Charged-particle multiplicities in pp interactions measured with the ATLAS detector at the LHC

    CERN Document Server

    Aad, G.; Abdallah, J.; Abdelalim, A.A.; Abdesselam, A.; Abdinov, O.; Abi, B.; Abolins, M.; Abramowicz, H.; Abreu, H.; Acerbi, E.; Acharya, B.S.; Ackers, M.; Adams, D.L.; Addy, T.N.; Adelman, J.; Aderholz, M.; Adomeit, S.; Adragna, P.; Adye, T.; Aefsky, S.; Aguilar-Saavedra, J.A.; Aharrouche, M.; Ahlen, S.P.; Ahles, F.; Ahmad, A.; Ahsan, M.; Aielli, G.; Akdogan, T.; Akesson, T.P.A.; Akimoto, G.; Akimov, A.V.; Alam, M.S.; Alam, M.A.; Albrand, S.; Aleksa, M.; Aleksandrov, I.N.; Aleppo, M.; Alessandria, F.; Alexa, C.; Alexander, G.; Alexandre, G.; Alexopoulos, T.; Alhroob, M.; Aliev, M.; Alimonti, G.; Alison, J.; Aliyev, M.; Allport, P.P.; Allwood-Spiers, S.E.; Almond, J.; Aloisio, A.; Alon, R.; Alonso, A.; Alonso, J.; Alviggi, M.G.; Amako, K.; Amaral, P.; Amelung, C.; Ammosov, V.V.; Amorim, A.; Amoros, G.; Amram, N.; Anastopoulos, C.; Andeen, T.; Anders, C.F.; Anderson, K.J.; Andreazza, A.; Andrei, V.; Andrieux, M-L.; Anduaga, X.S.; Angerami, A.; Anghinolfi, F.; Anjos, N.; Annovi, A.; Antonaki, A.; Antonelli, M.; Antonelli, S.; Antos, J.; Anulli, F.; Aoun, S.; Aperio Bella, L.; Apolle, R.; Arabidze, G.; Aracena, I.; Arai, Y.; Arce, A.T.H.; Archambault, J.P.; Arfaoui, S.; Arguin, J-F.; Arik, E.; Arik, M.; Armbruster, A.J.; Arms, K.E.; Armstrong, S.R.; Arnaez, O.; Arnault, C.; Artamonov, A.; Artoni, G.; Arutinov, D.; Asai, S.; Silva, J.; Asfandiyarov, R.; Ask, S.; Asman, B.; Asquith, L.; Assamagan, K.; Astbury, A.; Astvatsatourov, A.; Atoian, G.; Aubert, B.; Auerbach, B.; Auge, E.; Augsten, K.; Aurousseau, M.; Austin, N.; Avramidou, R.; Axen, D.; Ay, C.; Azuelos, G.; Azuma, Y.; Baak, M.A.; Baccaglioni, G.; Bacci, C.; Bach, A.M.; Bachacou, H.; Bachas, K.; Bachy, G.; Backes, M.; Badescu, E.; Bagnaia, P.; Bahinipati, S.; Bai, Y.; Bailey, D.C.; Bain, T.; Baines, J.T.; Baker, O.K.; Baker, S.; Baltasar Dos Santos Pedrosa, F.; Banas, E.; Banerjee, P.; Banerjee, Sw.; Banfi, D.; Bangert, A.; Bansal, V.; Bansil, H.S.; Barak, L.; Baranov, S.P.; Barashkou, A.; Barbaro Galtieri, A.; Barber, T.; Barberio, E.L.; Barberis, D.; Barbero, M.; Bardin, D.Y.; Barillari, T.; Barisonzi, M.; Barklow, T.; Barlow, N.; Barnett, B.M.; Barnett, R.M.; Baroncelli, A.; Barr, A.J.; Barreiro, F.; Barreiro Guimaraes da Costa, J.; Barrillon, P.; Bartoldus, R.; Barton, A.E.; Bartsch, D.; Bates, R.L.; Batkova, L.; Batley, J.R.; Battaglia, A.; Battistin, M.; Battistoni, G.; Bauer, F.; Bawa, H.S.; Beare, B.; Beau, T.; Beauchemin, P.H.; Beccherle, R.; Bechtle, P.; Beck, H.P.; Beckingham, M.; Becks, K.H.; Beddall, A.J.; Beddall, A.; Bednyakov, V.A.; Bee, C.; Begel, M.; Behar Harpaz, S.; Behera, P.K.; Beimforde, M.; Belanger-Champagne, C.; Bell, P.J.; Bell, W.H.; Bella, G.; Bellagamba, L.; Bellina, F.; Bellomo, G.; Bellomo, M.; Belloni, A.; Belotskiy, K.; Beltramello, O.; Ben Ami, S.; Benary, O.; Benchekroun, D.; Benchouk, C.; Bendel, M.; Benedict, B.H.; Benekos, N.; Benhammou, Y.; Benjamin, D.P.; Benoit, M.; Bensinger, J.R.; Benslama, K.; Bentvelsen, S.; Berge, D.; Bergeaas Kuutmann, E.; Berger, N.; Berghaus, F.; Berglund, E.; Beringer, J.; Bernardet, K.; Bernat, P.; Bernhard, R.; Bernius, C.; Berry, T.; Bertin, A.; Bertinelli, F.; Bertolucci, F.; Besana, M.I.; Besson, N.; Bethke, S.; Bhimji, W.; Bianchi, R.M.; Bianco, M.; Biebel, O.; Biesiada, J.; Biglietti, M.; Bilokon, H.; Bindi, M.; Bingul, A.; Bini, C.; Biscarat, C.; Bitenc, U.; Black, K.M.; Blair, R.E.; Blanchard, J.B.; Blanchot, G.; Blocker, C.; Blocki, J.; Blondel, A.; Blum, W.; Blumenschein, U.; Bobbink, G.J.; Bobrovnikov, V.B.; Bocci, A.; Bock, R.; Boddy, C.R.; Boehler, 
M.; Boek, J.; Boelaert, N.; Boser, S.; Bogaerts, J.A.; Bogdanchikov, A.; Bogouch, A.; Bohm, C.; Boisvert, V.; Bold, T.; Boldea, V.; Boonekamp, M.; Boorman, G.; Booth, C.N.; Booth, P.; Booth, J.R.A.; Bordoni, S.; Borer, C.; Borisov, A.; Borissov, G.; Borjanovic, I.; Borroni, S.; Bos, K.; Boscherini, D.; Bosman, M.; Boterenbrood, H.; Botterill, D.; Bouchami, J.; Boudreau, J.; Bouhova-Thacker, E.V.; Boulahouache, C.; Bourdarios, C.; Bousson, N.; Boveia, A.; Boyd, J.; Boyko, I.R.; Bozhko, N.I.; Bozovic-Jelisavcic, I.; Bracinik, J.; Braem, A.; Brambilla, E.; Branchini, P.; Brandenburg, G.W.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J.E.; Braun, H.M.; Brelier, B.; Bremer, J.; Brenner, R.; Bressler, S.; Breton, D.; Brett, N.D.; Bright-Thomas, P.G.; Britton, D.; Brochu, F.M.; Brock, I.; Brock, R.; Brodbeck, T.J.; Brodet, E.; Broggi, F.; Bromberg, C.; Brooijmans, G.; Brooks, W.K.; Brown, G.; Brubaker, E.; Bruckman de Renstrom, P.A.; Bruncko, D.; Bruneliere, R.; Brunet, S.; Bruni, A.; Bruni, G.; Bruschi, M.; Buanes, T.; Bucci, F.; Buchanan, J.; Buchanan, N.J.; Buchholz, P.; Buckingham, R.M.; Buckley, A.G.; Buda, S.I.; Budagov, I.A.; Budick, B.; Buscher, V.; Bugge, L.; Buira-Clark, D.; Buis, E.J.; Bulekov, O.; Bunse, M.; Buran, T.; Burckhart, H.; Burdin, S.; Burgess, T.; Burke, S.; Busato, E.; Bussey, P.; Buszello, C.P.; Butin, F.; Butler, B.; Butler, J.M.; Buttar, C.M.; Butterworth, J.M.; Buttinger, W.; Byatt, T.; Cabrera Urban, S.; Caccia, M.; Caforio, D.; Cakir, O.; Calafiura, P.; Calderini, G.; Calfayan, P.; Calkins, R.; Caloba, L.P.; Caloi, R.; Calvet, D.; Calvet, S.; Camard, A.; Camarri, P.; Cambiaghi, M.; Cameron, D.; Cammin, J.; Campana, S.; Campanelli, M.; Canale, V.; Canelli, F.; Canepa, A.; Cantero, J.; Capasso, L.; Capeans Garrido, M.D.M.; Caprini, I.; Caprini, M.; Capriotti, D.; Capua, M.; Caputo, R.; Caramarcu, C.; Cardarelli, R.; Carli, T.; Carlino, G.; Carminati, L.; Caron, B.; Caron, S.; Carpentieri, C.; Carrillo Montoya, G.D.; Carron Montero, S.; Carter, A.A.; Carter, J.R.; Carvalho, J.; Casadei, D.; Casado, M.P.; Cascella, M.; Caso, C.; Castaneda Hernandez, A.M.; Castaneda-Miranda, E.; Castillo Gimenez, V.; Castro, N.F.; Cataldi, G.; Cataneo, F.; Catinaccio, A.; Catmore, J.R.; Cattai, A.; Cattani, G.; Caughron, S.; Cavallari, A.; Cavalleri, P.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Cazzato, A.; Ceradini, F.; Cerna, C.; Cerqueira, A.S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cetin, S.A.; Cevenini, F.; Chafaq, A.; Chakraborty, D.; Chan, K.; Chapleau, B.; Chapman, J.D.; Chapman, J.W.; Chareyre, E.; Charlton, D.G.; Chavda, V.; Cheatham, S.; Chekanov, S.; Chekulaev, S.V.; Chelkov, G.A.; Chen, H.; Chen, L.; Chen, S.; Chen, T.; Chen, X.; Cheng, S.; Cheplakov, A.; Chepurnov, V.F.; Cherkaoui El Moursli, R.; Chernyatin, V.; Cheu, E.; Cheung, S.L.; Chevalier, L.; Chevallier, F.; Chiefari, G.; Chikovani, L.; Childers, J.T.; Chilingarov, A.; Chiodini, G.; Chizhov, M.V.; Choudalakis, G.; Chouridou, S.; Christidi, I.A.; Christov, A.; Chromek-Burckhart, D.; Chu, M.L.; Chudoba, J.; Ciapetti, G.; Ciftci, A.K.; Ciftci, R.; Cinca, D.; Cindro, V.; Ciobotaru, M.D.; Ciocca, C.; Ciocio, A.; Cirilli, M.; Ciubancan, M.; Clark, A.; Clark, P.J.; Cleland, W.; Clemens, J.C.; Clement, B.; Clement, C.; Clifft, R.W.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Coe, P.; Cogan, J.G.; Coggeshall, J.; Cogneras, E.; Cojocaru, C.D.; Colas, J.; Colijn, A.P.; Collard, C.; Collins, N.J.; Collins-Tooth, C.; Collot, J.; Colon, G.; Coluccia, R.; Comune, G.; Conde Muino, P.; Coniavitis, 
E.; Conidi, M.C.; Consonni, M.; Constantinescu, S.; Conta, C.; Conventi, F.; Cook, J.; Cooke, M.; Cooper, B.D.; Cooper-Sarkar, A.M.; Cooper-Smith, N.J.; Copic, K.; Cornelissen, T.; Corradi, M.; Correard, S.; Corriveau, F.; Cortes-Gonzalez, A.; Cortiana, G.; Costa, G.; Costa, M.J.; Costanzo, D.; Costin, T.; Cote, D.; Coura Torres, R.; Courneyea, L.; Cowan, G.; Cowden, C.; Cox, B.E.; Cranmer, K.; Cristinziani, M.; Crosetti, G.; Crupi, R.; Crepe-Renaudin, S.; Cuenca Almenar, C.; Cuhadar Donszelmann, T.; Cuneo, S.; Curatolo, M.; Curtis, C.J.; Cwetanski, P.; Czirr, H.; Czyczula, Z.; D'Auria, S.; D'Onofrio, M.; D'Orazio, A.; Da Rocha Gesualdi Mello, A.; Da Silva, P.V.M.; Da Via, C.; Dabrowski, W.; Dahlhoff, A.; Dai, T.; Dallapiccola, C.; Dallison, S.J.; Dam, M.; Dameri, M.; Damiani, D.S.; Danielsson, H.O.; Dankers, R.; Dannheim, D.; Dao, V.; Darbo, G.; Darlea, G.L.; Daum, C.; Dauvergne, J.P.; Davey, W.; Davidek, T.; Davidson, N.; Davidson, R.; Davies, M.; Davison, A.R.; Dawe, E.; Dawson, I.; Dawson, J.W.; Daya, R.K.; De, K.; de Asmundis, R.; De Castro, S.; De Cecco, S.; de Graat, J.; De Groot, N.; de Jong, P.; De La Cruz-Burelo, E.; De La Taille, C.; De Lotto, B.; De Mora, L.; De Nooij, L.; De Oliveira Branco, M.; De Pedis, D.; de Saintignon, P.; De Salvo, A.; De Sanctis, U.; De Santo, A.; De Vivie De Regie, J.B.; Dean, S.; Dedes, G.; Dedovich, D.V.; Degenhardt, J.; Dehchar, M.; Deile, M.; Del Papa, C.; Del Peso, J.; Del Prete, T.; Dell'Acqua, A.; Dell'Asta, L.; Della Pietra, M.; della Volpe, D.; Delmastro, M.; Delpierre, P.; Delruelle, N.; Delsart, P.A.; Deluca, C.; Demers, S.; Demichev, M.; Demirkoz, B.; Deng, J.; Denisov, S.P.; Dennis, C.; Derendarz, D.; Derkaoui, J.E.; Derue, F.; Dervan, P.; Desch, K.; Devetak, E.; Deviveiros, P.O.; Dewhurst, A.; DeWilde, B.; Dhaliwal, S.; Dhullipudi, R.; Di Ciaccio, A.; Di Ciaccio, L.; Di Girolamo, A.; Di Girolamo, B.; Di Luise, S.; Di Mattia, A.; Di Nardo, R.; Di Simone, A.; Di Sipio, R.; Diaz, M.A.; Diblen, F.; Diehl, E.B.; Dietl, H.; Dietrich, J.; Dietzsch, T.A.; Diglio, S.; Dindar Yagci, K.; Dingfelder, J.; Dionisi, C.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djilkibaev, R.; Djobava, T.; do Vale, M.A.B.; Do Valle Wemans, A.; Doan, T.K.O.; Dobbs, M.; Dobinson, R.; Dobos, D.; Dobson, E.; Dobson, M.; Dodd, J.; Dogan, O.B.; Doglioni, C.; Doherty, T.; Doi, Y.; Dolejsi, J.; Dolenc, I.; Dolezal, Z.; Dolgoshein, B.A.; Dohmae, T.; Donadelli, M.; Donega, M.; Donini, J.; Dopke, J.; Doria, A.; Dos Anjos, A.; Dosil, M.; Dotti, A.; Dova, M.T.; Dowell, J.D.; Doxiadis, A.D.; Doyle, A.T.; Drasal, Z.; Drees, J.; Dressnandt, N.; Drevermann, H.; Driouichi, C.; Dris, M.; Drohan, J.G.; Dubbert, J.; Dubbs, T.; Dube, S.; Duchovni, E.; Duckeck, G.; Dudarev, A.; Dudziak, F.; Duhrssen, M.; Duerdoth, I.P.; Duflot, L.; Dufour, M-A.; Dunford, M.; Duran Yildiz, H.; Duxfield, R.; Dwuznik, M.; Dydak, F.; Dzahini, D.; Duren, M.; Ebke, J.; Eckert, S.; Eckweiler, S.; Edmonds, K.; Edwards, C.A.; Efthymiopoulos, I.; Ehrenfeld, W.; Ehrich, T.; Eifert, T.; Eigen, G.; Einsweiler, K.; Eisenhandler, E.; Ekelof, T.; El Kacimi, M.; Ellert, M.; Elles, S.; Ellinghaus, F.; Ellis, K.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Ely, R.; Emeliyanov, D.; Engelmann, R.; Engl, A.; Epp, B.; Eppig, A.; Erdmann, J.; Ereditato, A.; Eriksson, D.; Ernst, J.; Ernst, M.; Ernwein, J.; Errede, D.; Errede, S.; Ertel, E.; Escalier, M.; Escobar, C.; Espinal Curull, X.; Esposito, B.; Etienne, F.; Etienvre, A.I.; Etzion, E.; Evangelakou, D.; Evans, H.; Fabbri, L.; Fabre, C.; Facius, K.; Fakhrutdinov, R.M.; Falciano, S.; 
Falou, A.C.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farley, J.; Farooque, T.; Farrington, S.M.; Farthouat, P.; Fasching, D.; Fassnacht, P.; Fassouliotis, D.; Fatholahzadeh, B.; Favareto, A.; Fayard, L.; Fazio, S.; Febbraro, R.; Federic, P.; Fedin, O.L.; Fedorko, I.; Fedorko, W.; Fehling-Kaschek, M.; Feligioni, L.; Fellmann, D.; Felzmann, C.U.; Feng, C.; Feng, E.J.; Fenyuk, A.B.; Ferencei, J.; Ferguson, D.; Ferland, J.; Fernandes, B.; Fernando, W.; Ferrag, S.; Ferrando, J.; Ferrara, V.; Ferrari, A.; Ferrari, P.; Ferrari, R.; Ferrer, A.; Ferrer, M.L.; Ferrere, D.; Ferretti, C.; Ferretto Parodi, A.; Fiascaris, M.; Fiedler, F.; Filipcic, A.; Filippas, A.; Filthaut, F.; Fincke-Keeler, M.; Fiolhais, M.C.N.; Fiorini, L.; Firan, A.; Fischer, G.; Fischer, P.; Fisher, M.J.; Fisher, S.M.; Flammer, J.; Flechl, M.; Fleck, I.; Fleckner, J.; Fleischmann, P.; Fleischmann, S.; Flick, T.; Flores Castillo, L.R.; Flowerdew, M.J.; Fohlisch, F.; Fokitis, M.; Fonseca Martin, T.; Forbush, D.A.; Formica, A.; Forti, A.; Fortin, D.; Foster, J.M.; Fournier, D.; Foussat, A.; Fowler, A.J.; Fowler, K.; Fox, H.; Francavilla, P.; Franchino, S.; Francis, D.; Frank, T.; Franklin, M.; Franz, S.; Fraternali, M.; Fratina, S.; French, S.T.; Froeschl, R.; Froidevaux, D.; Frost, J.A.; Fukunaga, C.; Fullana Torregrosa, E.; Fuster, J.; Gabaldon, C.; Gabizon, O.; Gadfort, T.; Gadomski, S.; Gagliardi, G.; Gagnon, P.; Galea, C.; Gallas, E.J.; Gallas, M.V.; Gallo, V.; Gallop, B.J.; Gallus, P.; Galyaev, E.; Gan, K.K.; Gao, Y.S.; Gapienko, V.A.; Gaponenko, A.; Garberson, F.; Garcia-Sciveres, M.; Garcia, C.; Garcia Navarro, J.E.; Gardner, R.W.; Garelli, N.; Garitaonandia, H.; Garonne, V.; Garvey, J.; Gatti, C.; Gaudio, G.; Gaumer, O.; Gaur, B.; Gauthier, L.; Gavrilenko, I.L.; Gay, C.; Gaycken, G.; Gayde, J-C.; Gazis, E.N.; Ge, P.; Gee, C.N.P.; Geich-Gimbel, Ch.; Gellerstedt, K.; Gemme, C.; Genest, M.H.; Gentile, S.; Georgatos, F.; George, S.; Gerlach, P.; Gershon, A.; Geweniger, C.; Ghazlane, H.; Ghez, P.; Ghodbane, N.; Giacobbe, B.; Giagu, S.; Giakoumopoulou, V.; Giangiobbe, V.; Gianotti, F.; Gibbard, B.; Gibson, A.; Gibson, S.M.; Gieraltowski, G.F.; Gilbert, L.M.; Gilchriese, M.; Gildemeister, O.; Gilewsky, V.; Gillberg, D.; Gillman, A.R.; Gingrich, D.M.; Ginzburg, J.; Giokaris, N.; Giordano, R.; Giorgi, F.M.; Giovannini, P.; Giraud, P.F.; Giugni, D.; Giusti, P.; Gjelsten, B.K.; Gladilin, L.K.; Glasman, C.; Glatzer, J.; Glazov, A.; Glitza, K.W.; Glonti, G.L.; Godfrey, J.; Godlewski, J.; Goebel, M.; Gopfert, T.; Goeringer, C.; Gossling, C.; Gottfert, T.; Goldfarb, S.; Goldin, D.; Golling, T.; Gollub, N.P.; Golovnia, S.N.; Gomes, A.; Gomez Fajardo, L.S.; Goncalo, R.; Gonella, L.; Gong, C.; Gonidec, A.; Gonzalez, S.; Gonzalez de la Hoz, S.; Gonzalez Silva, M.L.; Gonzalez-Sevilla, S.; Goodson, J.J.; Goossens, L.; Gorbounov, P.A.; Gordon, H.A.; Gorelov, I.; Gorfine, G.; Gorini, B.; Gorini, E.; Gorisek, A.; Gornicki, E.; Gorokhov, S.A.; Gorski, B.T.; Goryachev, V.N.; Gosdzik, B.; Gosselink, M.; Gostkin, M.I.; Gouanere, M.; Gough Eschrich, I.; Gouighri, M.; Goujdami, D.; Goulette, M.P.; Goussiou, A.G.; Goy, C.; Grabowska-Bold, I.; Grabski, V.; Grafstrom, P.; Grah, C.; Grahn, K-J.; Grancagnolo, F.; Grancagnolo, S.; Grassi, V.; Gratchev, V.; Grau, N.; Gray, H.M.; Gray, J.A.; Graziani, E.; Grebenyuk, O.G.; Greenfield, D.; Greenshaw, T.; Greenwood, Z.D.; Gregor, I.M.; Grenier, P.; Griesmayer, E.; Griffiths, J.; Grigalashvili, N.; Grillo, A.A.; Grimm, K.; Grinstein, S.; Gris, P.L.Y.; Grishkevich, Y.V.; Grivaz, J.F.; Grognuz, J.; Groh, M.; 
Gross, E.; Grosse-Knetter, J.; Groth-Jensen, J.; Gruwe, M.; Grybel, K.; Guarino, V.J.; Guicheney, C.; Guida, A.; Guillemin, T.; Guindon, S.; Guler, H.; Gunther, J.; Guo, B.; Guo, J.; Gupta, A.; Gusakov, Y.; Gushchin, V.N.; Gutierrez, A.; Gutierrez, P.; Guttman, N.; Gutzwiller, O.; Guyot, C.; Gwenlan, C.; Gwilliam, C.B.; Haas, A.; Haas, S.; Haber, C.; Hackenburg, R.; Hadavand, H.K.; Hadley, D.R.; Haefner, P.; Hahn, F.; Haider, S.; Hajduk, Z.; Hakobyan, H.; Haller, J.; Hamacher, K.; Hamilton, A.; Hamilton, S.; Han, H.; Han, L.; Hanagaki, K.; Hance, M.; Handel, C.; Hanke, P.; Hansen, C.J.; Hansen, J.R.; Hansen, J.B.; Hansen, J.D.; Hansen, P.H.; Hansson, P.; Hara, K.; Hare, G.A.; Harenberg, T.; Harper, D.; Harrington, R.D.; Harris, O.M.; Harrison, K.; Hart, J.C.; Hartert, J.; Hartjes, F.; Haruyama, T.; Harvey, A.; Hasegawa, S.; Hasegawa, Y.; Hassani, S.; Hatch, M.; Hauff, D.; Haug, S.; Hauschild, M.; Hauser, R.; Havranek, M.; Hawes, B.M.; Hawkes, C.M.; Hawkings, R.J.; Hawkins, D.; Hayakawa, T.; Hayden, D; Hayward, H.S.; Haywood, S.J.; Hazen, E.; He, M.; Head, S.J.; Hedberg, V.; Heelan, L.; Heim, S.; Heinemann, B.; Heisterkamp, S.; Helary, L.; Heldmann, M.; Heller, M.; Hellman, S.; Helsens, C.; Henderson, R.C.W.; Henke, M.; Henrichs, A.; Henriques Correia, A.M.; Henrot-Versille, S.; Henry-Couannier, F.; Hensel, C.; Henss, T.; Hernandez Jimenez, Y.; Herrberg, R.; Hershenhorn, A.D.; Herten, G.; Hertenberger, R.; Hervas, L.; Hessey, N.P.; Hidvegi, A.; Higon-Rodriguez, E.; Hill, D.; Hill, J.C.; Hill, N.; Hiller, K.H.; Hillert, S.; Hillier, S.J.; Hinchliffe, I.; Hines, E.; Hirose, M.; Hirsch, F.; Hirschbuehl, D.; Hobbs, J.; Hod, N.; Hodgkinson, M.C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M.R.; Hoffman, J.; Hoffmann, D.; Hohlfeld, M.; Holder, M.; Holmes, A.; Holmgren, S.O.; Holy, T.; Holzbauer, J.L.; Homer, R.J.; Homma, Y.; Horazdovsky, T.; Horn, C.; Horner, S.; Horton, K.; Hostachy, J-Y.; Hott, T.; Hou, S.; Houlden, M.A.; Hoummada, A.; Howarth, J.; Howell, D.F.; Hristova, I.; Hrivnac, J.; Hruska, I.; Hryn'ova, T.; Hsu, P.J.; Hsu, S.C.; Huang, G.S.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Huffman, T.B.; Hughes, E.W.; Hughes, G.; Hughes-Jones, R.E.; Huhtinen, M.; Hurst, P.; Hurwitz, M.; Husemann, U.; Huseynov, N.; Huston, J.; Huth, J.; Iacobucci, G.; Iakovidis, G.; Ibbotson, M.; Ibragimov, I.; Ichimiya, R.; Iconomidou-Fayard, L.; Idarraga, J.; Idzik, M.; Iengo, P.; Igonkina, O.; Ikegami, Y.; Ikeno, M.; Ilchenko, Y.; Iliadis, D.; Imbault, D.; Imhaeuser, M.; Imori, M.; Ince, T.; Inigo-Golfin, J.; Ioannou, P.; Iodice, M.; Ionescu, G.; Irles Quiles, A.; Ishii, K.; Ishikawa, A.; Ishino, M.; Ishmukhametov, R.; Isobe, T.; Issever, C.; Istin, S.; Itoh, Y.; Ivashin, A.V.; Iwanski, W.; Iwasaki, H.; Izen, J.M.; Izzo, V.; Jackson, B.; Jackson, J.N.; Jackson, P.; Jaekel, M.R.; Jain, V.; Jakobs, K.; Jakobsen, S.; Jakubek, J.; Jana, D.K.; Jankowski, E.; Jansen, E.; Jantsch, A.; Janus, M.; Jarlskog, G.; Jeanty, L.; Jelen, K.; Jen-La Plante, I.; Jenni, P.; Jeremie, A.; Jez, P.; Jezequel, S.; Ji, H.; Ji, W.; Jia, J.; Jiang, Y.; Jimenez Belenguer, M.; Jin, G.; Jin, S.; Jinnouchi, O.; Joergensen, M.D.; Joffe, D.; Johansen, L.G.; Johansen, M.; Johansson, K.E.; Johansson, P.; Johnert, S.; Johns, K.A.; Jon-And, K.; Jones, G.; Jones, R.W.L.; Jones, T.W.; Jones, T.J.; Jonsson, O.; Joo, K.K.; Joram, C.; Jorge, P.M.; Joseph, J.; Ju, X.; Juranek, V.; Jussel, P.; Kabachenko, V.V.; Kabana, S.; Kaci, M.; Kaczmarska, A.; Kadlecik, P.; Kado, M.; Kagan, H.; Kagan, M.; Kaiser, S.; Kajomovitz, E.; Kalinin, S.; Kalinovskaya, L.V.; Kama, 
S.; Kanaya, N.; Kaneda, M.; Kanno, T.; Kantserov, V.A.; Kanzaki, J.; Kaplan, B.; Kapliy, A.; Kaplon, J.; Kar, D.; Karagoz, M.; Karnevskiy, M.; Karr, K.; Kartvelishvili, V.; Karyukhin, A.N.; Kashif, L.; Kasmi, A.; Kass, R.D.; Kastanas, A.; Kataoka, M.; Kataoka, Y.; Katsoufis, E.; Katzy, J.; Kaushik, V.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kayl, M.S.; Kazanin, V.A.; Kazarinov, M.Y.; Kazi, S.I.; Keates, J.R.; Keeler, R.; Kehoe, R.; Keil, M.; Kekelidze, G.D.; Kelly, M.; Kennedy, J.; Kenney, C.J.; Kenyon, M.; Kepka, O.; Kerschen, N.; Kersevan, B.P.; Kersten, S.; Kessoku, K.; Ketterer, C.; Khakzad, M.; Khalil-zada, F.; Khandanyan, H.; Khanov, A.; Kharchenko, D.; Khodinov, A.; Kholodenko, A.G.; Khomich, A.; Khoo, T.J.; Khoriauli, G.; Khovanskiy, N.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kilvington, G.; Kim, H.; Kim, M.S.; Kim, P.C.; Kim, S.H.; Kimura, N.; Kind, O.; King, B.T.; King, M.; King, R.S.B.; Kirk, J.; Kirsch, G.P.; Kirsch, L.E.; Kiryunin, A.E.; Kisielewska, D.; Kittelmann, T.; Kiver, A.M.; Kiyamura, H.; Kladiva, E.; Klaiber-Lodewigs, J.; Klein, M.; Klein, U.; Kleinknecht, K.; Klemetti, M.; Klier, A.; Klimentov, A.; Klingenberg, R.; Klinkby, E.B.; Klioutchnikova, T.; Klok, P.F.; Klous, S.; Kluge, E.E.; Kluge, T.; Kluit, P.; Kluth, S.; Kneringer, E.; Knobloch, J.; Knue, A.; Ko, B.R.; Kobayashi, T.; Kobel, M.; Koblitz, B.; Kocian, M.; Kocnar, A.; Kodys, P.; Koneke, K.; Konig, A.C.; Koenig, S.; Konig, S.; Kopke, L.; Koetsveld, F.; Koevesarki, P.; Koffas, T.; Koffeman, E.; Kohn, F.; Kohout, Z.; Kohriki, T.; Koi, T.; Kokott, T.; Kolachev, G.M.; Kolanoski, H.; Kolesnikov, V.; Koletsou, I.; Koll, J.; Kollar, D.; Kollefrath, M.; Kolya, S.D.; Komar, A.A.; Komaragiri, J.R.; Kondo, T.; Kono, T.; Kononov, A.I.; Konoplich, R.; Konstantinidis, N.; Kootz, A.; Koperny, S.; Kopikov, S.V.; Korcyl, K.; Kordas, K.; Koreshev, V.; Korn, A.; Korol, A.; Korolkov, I.; Korolkova, E.V.; Korotkov, V.A.; Kortner, O.; Kortner, S.; Kostyukhin, V.V.; Kotamaki, M.J.; Kotov, S.; Kotov, V.M.; Kourkoumelis, C.; Koutsman, A.; Kowalewski, R.; Kowalski, T.Z.; Kozanecki, W.; Kozhin, A.S.; Kral, V.; Kramarenko, V.A.; Kramberger, G.; Krasel, O.; Krasny, M.W.; Krasznahorkay, A.; Kraus, J.; Kreisel, A.; Krejci, F.; Kretzschmar, J.; Krieger, N.; Krieger, P.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Kruger, H.; Krumshteyn, Z.V.; Kruth, A.; Kubota, T.; Kuehn, S.; Kugel, A.; Kuhl, T.; Kuhn, D.; Kukhtin, V.; Kulchitsky, Y.; Kuleshov, S.; Kummer, C.; Kuna, M.; Kundu, N.; Kunkle, J.; Kupco, A.; Kurashige, H.; Kurata, M.; Kurochkin, Y.A.; Kus, V.; Kuykendall, W.; Kuze, M.; Kuzhir, P.; Kvasnicka, O.; Kwee, R.; La Rosa, A.; La Rotonda, L.; Labarga, L.; Labbe, J.; Lacasta, C.; Lacava, F.; Lacker, H.; Lacour, D.; Lacuesta, V.R.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Laisne, E.; Lamanna, M.; Lampen, C.L.; Lampl, W.; Lancon, E.; Landgraf, U.; Landon, M.P.J.; Landsman, H.; Lane, J.L.; Lange, C.; Lankford, A.J.; Lanni, F.; Lantzsch, K.; Lapin, V.V.; Laplace, S.; Lapoire, C.; Laporte, J.F.; Lari, T.; Larionov, A.V.; Larner, A.; Lasseur, C.; Lassnig, M.; Lau, W.; Laurelli, P.; Lavorato, A.; Lavrijsen, W.; Laycock, P.; Lazarev, A.B.; Lazzaro, A.; Le Dortz, O.; Le Guirriec, E.; Le Maner, C.; Le Menedeu, E.; Leahu, M.; Lebedev, A.; Lebel, C.; LeCompte, T.; Ledroit-Guillon, F.; Lee, H.; Lee, J.S.H.; Lee, S.C.; Lee JR, L.; Lefebvre, M.; Legendre, M.; Leger, A.; LeGeyt, B.C.; Legger, F.; Leggett, C.; Lehmacher, M.; Lehmann Miotto, G.; Lehto, M.; Lei, X.; Leite, M.A.L.; Leitner, R.; 
Lellouch, D.; Lellouch, J.; Leltchouk, M.; Lendermann, V.; Leney, K.J.C.; Lenz, T.; Lenzen, G.; Lenzi, B.; Leonhardt, K.; Leontsinis, S.; Leroy, C.; Lessard, J-R.; Lesser, J.; Lester, C.G.; Leung Fook Cheong, A.; Leveque, J.; Levin, D.; Levinson, L.J.; Levitski, M.S.; Lewandowska, M.; Leyton, M.; Li, B.; Li, H.; Li, S.; Li, X.; Liang, Z.; Liang, Z.; Liberti, B.; Lichard, P.; Lichtnecker, M.; Lie, K.; Liebig, W.; Lifshitz, R.; Lilley, J.N.; Limosani, A.; Limper, M.; Lin, S.C.; Linde, F.; Linnemann, J.T.; Lipeles, E.; Lipinsky, L.; Lipniacka, A.; Liss, T.M.; Lister, A.; Litke, A.M.; Liu, C.; Liu, D.; Liu, H.; Liu, J.B.; Liu, M.; Liu, S.; Liu, Y.; Livan, M.; Livermore, S.S.A.; Lleres, A.; Lloyd, S.L.; Lobodzinska, E.; Loch, P.; Lockman, W.S.; Lockwitz, S.; Loddenkoetter, T.; Loebinger, F.K.; Loginov, A.; Loh, C.W.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Loken, J.; Lombardo, V.P.; Long, R.E.; Lopes, L.; Lopez Mateos, D.; Losada, M.; Loscutoff, P.; Lo Sterzo, F.; Losty, M.J.; Lou, X.; Lounis, A.; Loureiro, K.F.; Love, J.; Love, P.A.; Lowe, A.J.; Lu, F.; Lu, J.; Lu, L.; Lubatti, H.J.; Luci, C.; Lucotte, A.; Ludwig, A.; Ludwig, D.; Ludwig, I.; Ludwig, J.; Luehring, F.; Luijckx, G.; Lumb, D.; Luminari, L.; Lund, E.; Lund-Jensen, B.; Lundberg, B.; Lundberg, J.; Lundquist, J.; Lungwitz, M.; Lupi, A.; Lutz, G.; Lynn, D.; Lys, J.; Lytken, E.; Ma, H.; Ma, L.L.; Maass en, M.; Macana Goia, J.A.; Maccarrone, G.; Macchiolo, A.; Macek, B.; Machado Miguens, J.; Macina, D.; Mackeprang, R.; Madaras, R.J.; Mader, W.F.; Maenner, R.; Maeno, T.; Mattig, P.; Mattig, S.; Magalhaes Martins, P.J.; Magnoni, L.; Magradze, E.; Magrath, C.A.; Mahalalel, Y.; Mahboubi, K.; Mahout, G.; Maiani, C.; Maidantchik, C.; Maio, A.; Majewski, S.; Makida, Y.; Makovec, N.; Mal, P.; Malecki, Pa.; Malecki, P.; Maleev, V.P.; Malek, F.; Mallik, U.; Malon, D.; Maltezos, S.; Malyshev, V.; Malyukov, S.; Mameghani, R.; Mamuzic, J.; Manabe, A.; Mandelli, L.; Mandic, I.; Mandrysch, R.; Maneira, J.; Mangeard, P.S.; Manjavidze, I.D.; Mann, A.; Manning, P.M.; Manousakis-Katsikakis, A.; Mansoulie, B.; Manz, A.; Mapelli, A.; Mapelli, L.; March, L.; Marchand, J.F.; Marchese, F.; Marchesotti, M.; Marchiori, G.; Marcisovsky, M.; Marin, A.; Marino, C.P.; Marroquim, F.; Marshall, R.; Marshall, Z.; Martens, F.K.; Marti-Garcia, S.; Martin, A.J.; Martin, B.; Martin, B.; Martin, F.F.; Martin, J.P.; Martin, Ph.; Martin, T.A.; Martin dit Latour, B.; Martinez, M.; Martinez Outschoorn, V.; Martyniuk, A.C.; Marx, M.; Marzano, F.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A.L.; Mass, M.; Massa, I.; Massaro, G.; Massol, N.; Mastroberardino, A.; Masubuchi, T.; Mathes, M.; Matricon, P.; Matsumoto, H.; Matsunaga, H.; Matsushita, T.; Mattravers, C.; Maugain, J.M.; Maxfield, S.J.; May, E.N.; Mayne, A.; Mazini, R.; Mazur, M.; Mazzanti, M.; Mazzoni, E.; Mc Kee, S.P.; McCarn, A.; McCarthy, R.L.; McCarthy, T.G.; McCubbin, N.A.; McFarlane, K.W.; Mcfayden, J.A.; McGlone, H.; Mchedlidze, G.; McLaren, R.A.; Mclaughlan, T.; McMahon, S.J.; McMahon, T.R.; McMahon, T.J.; McPherson, R.A.; Meade, A.; Mechnich, J.; Mechtel, M.; Medinnis, M.; Meera-Lebbai, R.; Meguro, T.; Mehdiyev, R.; Mehlhase, S.; Mehta, A.; Meier, K.; Meinhardt, J.; Meirose, B.; Melachrinos, C.; Mellado Garcia, B.R.; Mendoza Navas, L.; Meng, Z.; Mengarelli, A.; Menke, S.; Menot, C.; Meoni, E.; Merkl, D.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F.S.; Messina, A.; Metcalfe, J.; Mete, A.S.; Meuser, S.; Meyer, C.; Meyer, J-P.; Meyer, J.; Meyer, J.; Meyer, T.C.; Meyer, W.T.; 
Miao, J.; Michal, S.; Micu, L.; Middleton, R.P.; Miele, P.; Migas, S.; Mijovic, L.; Mikenberg, G.; Mikestikova, M.; Mikulec, B.; Mikuz, M.; Miller, D.W.; Miller, R.J.; Mills, W.J.; Mills, C.; Milov, A.; Milstead, D.A.; Milstein, D.; Minaenko, A.A.; Minano, M.; Minashvili, I.A.; Mincer, A.I.; Mindur, B.; Mineev, M.; Ming, Y.; Mir, L.M.; Mirabelli, G.; Miralles Verge, L.; Misiejuk, A.; Mitra, A.; Mitrevski, J.; Mitrofanov, G.Y.; Mitsou, V.A.; Mitsui, S.; Miyagawa, P.S.; Miyazaki, K.; Mjornmark, J.U.; Moa, T.; Mockett, P.; Moed, S.; Moeller, V.; Monig, K.; Moser, N.; Mohapatra, S.; Mohn, B.; Mohr, W.; Mohrdieck-Mock, S.; Moisseev, A.M.; Moles-Valls, R.; Molina-Perez, J.; Moneta, L.; Monk, J.; Monnier, E.; Montesano, S.; Monticelli, F.; Monzani, S.; Moore, R.W.; Moorhead, G.F.; Mora Herrera, C.; Moraes, A.; Morais, A.; Morange, N.; Morel, J.; Morello, G.; Moreno, D.; Moreno Llacer, M.; Morettini, P.; Morii, M.; Morin, J.; Morita, Y.; Morley, A.K.; Mornacchi, G.; Morone, M-C.; Morris, J.D.; Moser, H.G.; Mosidze, M.; Moss, J.; Mount, R.; Mountricha, E.; Mouraviev, S.V.; Moyse, E.J.W.; Mudrinic, M.; Mueller, F.; Mueller, J.; Mueller, K.; Muller, T.A.; Muenstermann, D.; Muijs, A.; Muir, A.; Munwes, Y.; Murakami, K.; Murray, W.J.; Mussche, I.; Musto, E.; Myagkov, A.G.; Myska, M.; Nadal, J.; Nagai, K.; Nagano, K.; Nagasaka, Y.; Nairz, A.M.; Nakahama, Y.; Nakamura, K.; Nakano, I.; Nanava, G.; Napier, A.; Nash, M.; Nasteva, I.; Nation, N.R.; Nattermann, T.; Naumann, T.; Navarro, G.; Neal, H.A.; Nebot, E.; Nechaeva, P.; Negri, A.; Negri, G.; Nektarijevic, S.; Nelson, A.; Nelson, S.; Nelson, T.K.; Nemecek, S.; Nemethy, P.; Nepomuceno, A.A.; Nessi, M.; Nesterov, S.Y.; Neubauer, M.S.; Neusiedl, A.; Neves, R.M.; Nevski, P.; Newman, P.R.; Nickerson, R.B.; Nicolaidou, R.; Nicolas, L.; Nicquevert, B.; Niedercorn, F.; Nielsen, J.; Niinikoski, T.; Nikiforov, A.; Nikolaenko, V.; Nikolaev, K.; Nikolic-Audit, I.; Nikolopoulos, K.; Nilsen, H.; Nilsson, P.; Ninomiya, Y.; Nisati, A.; Nishiyama, T.; Nisius, R.; Nodulman, L.; Nomachi, M.; Nomidis, I.; Nomoto, H.; Nordberg, M.; Nordkvist, B.; Norniella Francisco, O.; Norton, P.R.; Novakova, J.; Nozaki, M.; Nozicka, M.; Nugent, I.M.; Nuncio-Quiroz, A.E.; Nunes Hanninger, G.; Nunnemann, T.; Nurse, E.; Nyman, T.; O'Brien, B.J.; O'Neale, S.W.; O'Neil, D.C.; O'Shea, V.; Oakham, F.G.; Oberlack, H.; Ocariz, J.; Ochi, A.; Oda, S.; Odaka, S.; Odier, J.; Odino, G.A.; Ogren, H.; Oh, A.; Oh, S.H.; Ohm, C.C.; Ohshima, T.; Ohshita, H.; Ohska, T.K.; Ohsugi, T.; Okada, S.; Okawa, H.; Okumura, Y.; Okuyama, T.; Olcese, M.; Olchevski, A.G.; Oliveira, M.; Oliveira Damazio, D.; Oliver Garcia, E.; Olivito, D.; Olszewski, A.; Olszowska, J.; Omachi, C.; Onofre, A.; Onyisi, P.U.E.; Oram, C.J.; Ordonez, G.; Oreglia, M.J.; Orellana, F.; Oren, Y.; Orestano, D.; Orlov, I.; Oropeza Barrera, C.; Orr, R.S.; Ortega, E.O.; Osculati, B.; Ospanov, R.; Osuna, C.; Otero y Garzon, G.; Ottersbach, J.P; Ouchrif, M.; Ould-Saada, F.; Ouraou, A.; Ouyang, Q.; Owen, M.; Owen, S.; Oyarzun, A.; Oye, O.K.; Ozcan, V.E.; Ozturk, N.; Pacheco Pages, A.; Padilla Aranda, C.; Paganis, E.; Paige, F.; Pajchel, K.; Palestini, S.; Pallin, D.; Palma, A.; Palmer, J.D.; Pan, Y.B.; Panagiotopoulou, E.; Panes, B.; Panikashvili, N.; Panitkin, S.; Pantea, D.; Panuskova, M.; Paolone, V.; Paoloni, A.; Papadelis, A.; Papadopoulou, Th.D.; Paramonov, A.; Park, S.J.; Park, W.; Parker, M.A.; Parodi, F.; Parsons, J.A.; Parzefall, U.; Pasqualucci, E.; Passeri, A.; Pastore, F.; Pastore, Fr.; Pasztor, G.; Pataraia, S.; Patel, N.; Pater, J.R.; 
Patricelli, S.; Pauly, T.; Pecsy, M.; Pedraza Morales, M.I.; Peleganchuk, S.V.; Peng, H.; Pengo, R.; Penson, A.; Penwell, J.; Perantoni, M.; Perez, K.; Perez Cavalcanti, T.; Perez Codina, E.; Perez Garcia-Estan, M.T.; Perez Reale, V.; Peric, I.; Perini, L.; Pernegger, H.; Perrino, R.; Perrodo, P.; Persembe, S.; Perus, P.; Peshekhonov, V.D.; Peters, O.; Petersen, B.A.; Petersen, J.; Petersen, T.C.; Petit, E.; Petridis, A.; Petridou, C.; Petrolo, E.; Petrucci, F.; Petschull, D.; Petteni, M.; Pezoa, R.; Phan, A.; Phillips, A.W.; Phillips, P.W.; Piacquadio, G.; Piccaro, E.; Piccinini, M.; Pickford, A.; Piegaia, R.; Pilcher, J.E.; Pilkington, A.D.; Pina, J.; Pinamonti, M.; Pinfold, J.L.; Ping, J.; Pinto, B.; Pirotte, O.; Pizio, C.; Placakyte, R.; Plamondon, M.; Plano, W.G.; Pleier, M.A.; Pleskach, A.V.; Poblaguev, A.; Poddar, S.; Podlyski, F.; Poggioli, L.; Poghosyan, T.; Pohl, M.; Polci, F.; Polesello, G.; Policicchio, A.; Polini, A.; Poll, J.; Polychronakos, V.; Pomarede, D.M.; Pomeroy, D.; Pommes, K.; Pontecorvo, L.; Pope, B.G.; Popeneciu, G.A.; Popovic, D.S.; Poppleton, A.; Portell Bueso, X.; Porter, R.; Posch, C.; Pospelov, G.E.; Pospisil, S.; Potrap, I.N.; Potter, C.J.; Potter, C.T.; Poulard, G.; Poveda, J.; Prabhu, R.; Pralavorio, P.; Prasad, S.; Pravahan, R.; Prell, S.; Pretzl, K.; Pribyl, L.; Price, D.; Price, L.E.; Price, M.J.; Prichard, P.M.; Prieur, D.; Primavera, M.; Prokofiev, K.; Prokoshin, F.; Protopopescu, S.; Proudfoot, J.; Prudent, X.; Przysiezniak, H.; Psoroulas, S.; Ptacek, E.; Purdham, J.; Purohit, M.; Puzo, P.; Pylypchenko, Y.; Qian, J.; Qian, Z.; Qin, Z.; Quadt, A.; Quarrie, D.R.; Quayle, W.B.; Quinonez, F.; Raas, M.; Radescu, V.; Radics, B.; Rador, T.; Ragusa, F.; Rahal, G.; Rahimi, A.M.; Rajagopalan, S.; Rajek, S.; Rammensee, M.; Rammes, M.; Ramstedt, M.; Randrianarivony, K.; Ratoff, P.N.; Rauscher, F.; Rauter, E.; Raymond, M.; Read, A.L.; Rebuzzi, D.M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reeves, K.; Reichold, A.; Reinherz-Aronis, E.; Reinsch, A.; Reisinger, I.; Reljic, D.; Rembser, C.; Ren, Z.L.; Renaud, A.; Renkel, P.; Rensch, B.; Rescigno, M.; Resconi, S.; Resende, B.; Reznicek, P.; Rezvani, R.; Richards, A.; Richter, R.; Richter-Was, E.; Ridel, M.; Rieke, S.; Rijpstra, M.; Rijssenbeek, M.; Rimoldi, A.; Rinaldi, L.; Rios, R.R.; Riu, I.; Rivoltella, G.; Rizatdinova, F.; Rizvi, E.; Robertson, S.H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, J.E.M.; Robinson, M.; Robson, A.; Rocha de Lima, J.G.; Roda, C.; Roda Dos Santos, D.; Rodier, S.; Rodriguez, D.; Rodriguez Garcia, Y.; Roe, A.; Roe, S.; Rohne, O.; Rojo, V.; Rolli, S.; Romaniouk, A.; Romanov, V.M.; Romeo, G.; Romero Maltrana, D.; Roos, L.; Ros, E.; Rosati, S.; Rose, M.; Rosenbaum, G.A.; Rosenberg, E.I.; Rosendahl, P.L.; Rosselet, L.; Rossetti, V.; Rossi, E.; Rossi, L.P.; Rossi, L.; Rotaru, M.; Roth, I.; Rothberg, J.; Rottlander, I.; Rousseau, D.; Royon, C.R.; Rozanov, A.; Rozen, Y.; Ruan, X.; Rubinskiy, I.; Ruckert, B.; Ruckstuhl, N.; Rud, V.I.; Rudolph, G.; Ruhr, F.; Ruiz-Martinez, A.; Rulikowska-Zarebska, E.; Rumiantsev, V.; Rumyantsev, L.; Runge, K.; Runolfsson, O.; Rurikova, Z.; Rusakovich, N.A.; Rust, D.R.; Rutherfoord, J.P.; Ruwiedel, C.; Ruzicka, P.; Ryabov, Y.F.; Ryadovikov, V.; Ryan, P.; Rybar, M.; Rybkin, G.; Ryder, N.C.; Rzaeva, S.; Saavedra, A.F.; Sadeh, I.; Sadrozinski, H.F-W.; Sadykov, R.; Safai Tehrani, F.; Sakamoto, H.; Salamanna, G.; Salamon, A.; Saleem, M.; Salihagic, D.; Salnikov, A.; Salt, J.; Salvachua Ferrando, B.M.; Salvatore, D.; Salvatore, F.; Salzburger, A.; Sampsonidis, D.; 
Samset, B.H.; Sandaker, H.; Sander, H.G.; Sanders, M.P.; Sandhoff, M.; Sandhu, P.; Sandoval, T.; Sandstroem, R.; Sandvoss, S.; Sankey, D.P.C.; Sansoni, A.; Santamarina Rios, C.; Santoni, C.; Santonico, R.; Santos, H.; Saraiva, J.G.; Sarangi, T.; Sarkisyan-Grinbaum, E.; Sarri, F.; Sartisohn, G.; Sasaki, O.; Sasaki, T.; Sasao, N.; Satsounkevitch, I.; Sauvage, G.; Sauvan, J.B.; Savard, P.; Savinov, V.; Savva, P.; Sawyer, L.; Saxon, D.H.; Says, L.P.; Sbarra, C.; Sbrizzi, A.; Scallon, O.; Scannicchio, D.A.; Schaarschmidt, J.; Schacht, P.; Schafer, U.; Schaetzel, S.; Schaffer, A.C.; Schaile, D.; Schamberger, R.D.; Schamov, A.G.; Scharf, V.; Schegelsky, V.A.; Scheirich, D.; Scherzer, M.I.; Schiavi, C.; Schieck, J.; Schioppa, M.; Schlenker, S.; Schlereth, J.L.; Schmidt, E.; Schmidt, M.P.; Schmieden, K.; Schmitt, C.; Schmitz, M.; Schoning, A.; Schott, M.; Schouten, D.; Schovancova, J.; Schram, M.; Schreiner, A.; Schroeder, C.; Schroer, N.; Schuh, S.; Schuler, G.; Schultes, J.; Schultz-Coulon, H.C.; Schulz, H.; Schumacher, J.W.; Schumacher, M.; Schumm, B.A.; Schune, Ph.; Schwanenberger, C.; Schwartzman, A.; Schwemling, Ph.; Schwienhorst, R.; Schwierz, R.; Schwindling, J.; Scott, W.G.; Searcy, J.; Sedykh, E.; Segura, E.; Seidel, S.C.; Seiden, A.; Seifert, F.; Seixas, J.M.; Sekhniaidze, G.; Seliverstov, D.M.; Sellden, B.; Sellers, G.; Seman, M.; Semprini-Cesari, N.; Serfon, C.; Serin, L.; Seuster, R.; Severini, H.; Sevior, M.E.; Sfyrla, A.; Shabalina, E.; Shamim, M.; Shan, L.Y.; Shank, J.T.; Shao, Q.T.; Shapiro, M.; Shatalov, P.B.; Shaver, L.; Shaw, C.; Shaw, K.; Sherman, D.; Sherwood, P.; Shibata, A.; Shimizu, S.; Shimojima, M.; Shin, T.; Shmeleva, A.; Shochet, M.J.; Short, D.; Shupe, M.A.; Sicho, P.; Sidoti, A.; Siebel, A.; Siegert, F.; Siegrist, J.; Sijacki, Dj.; Silbert, O.; Silver, Y.; Silverstein, D.; Silverstein, S.B.; Simak, V.; Simic, Lj.; Simion, S.; Simmons, B.; Simonyan, M.; Sinervo, P.; Sinev, N.B.; Sipica, V.; Siragusa, G.; Sisakyan, A.N.; Sivoklokov, S.Yu.; Sjolin, J.; Sjursen, T.B.; Skinnari, L.A.; Skovpen, K.; Skubic, P.; Skvorodnev, N.; Slater, M.; Slavicek, T.; Sliwa, K.; Sloan, T.J.; Sloper, J.; Smakhtin, V.; Smirnov, S.Yu.; Smirnova, L.N.; Smirnova, O.; Smith, B.C.; Smith, D.; Smith, K.M.; Smizanska, M.; Smolek, K.; Snesarev, A.A.; Snow, S.W.; Snow, J.; Snuverink, J.; Snyder, S.; Soares, M.; Sobie, R.; Sodomka, J.; Soffer, A.; Solans, C.A.; Solar, M.; Solc, J.; Soldevila, U.; Solfaroli Camillocci, E.; Solodkov, A.A.; Solovyanov, O.V.; Sondericker, J.; Soni, N.; Sopko, V.; Sopko, B.; Sorbi, M.; Sosebee, M.; Soukharev, A.; Spagnolo, S.; Spano, F.; Spighi, R.; Spigo, G.; Spila, F.; Spiriti, E.; Spiwoks, R.; Spousta, M.; Spreitzer, T.; Spurlock, B.; St. 
Denis, R.D.; Stahl, T.; Stahlman, J.; Stamen, R.; Stanecka, E.; Stanek, R.W.; Stanescu, C.; Stapnes, S.; Starchenko, E.A.; Stark, J.; Staroba, P.; Starovoitov, P.; Staude, A.; Stavina, P.; Stavropoulos, G.; Steele, G.; Steinbach, P.; Steinberg, P.; Stekl, I.; Stelzer, B.; Stelzer, H.J.; Stelzer-Chilton, O.; Stenzel, H.; Stevenson, K.; Stewart, G.A.; Stockmanns, T.; Stockton, M.C.; Stoerig, K.; Stoicea, G.; Stonjek, S.; Strachota, P.; Stradling, A.R.; Straessner, A.; Strandberg, J.; Strandberg, S.; Strandlie, A.; Strang, M.; Strauss, E.; Strauss, M.; Strizenec, P.; Strohmer, R.; Strom, D.M.; Strong, J.A.; Stroynowski, R.; Strube, J.; Stugu, B.; Stumer, I.; Stupak, J.; Sturm, P.; Soh, D.A.; Su, D.; Subramania, S.; Sugaya, Y.; Sugimoto, T.; Suhr, C.; Suita, K.; Suk, M.; Sulin, V.V.; Sultansoy, S.; Sumida, T.; Sun, X.; Sundermann, J.E.; Suruliz, K.; Sushkov, S.; Susinno, G.; Sutton, M.R.; Suzuki, Y.; Sviridov, Yu.M.; Swedish, S.; Sykora, I.; Sykora, T.; Szeless, B.; Sanchez, J.; Ta, D.; Tackmann, K.; Taffard, A.; Tafirout, R.; Taga, A.; Taiblum, N.; Takahashi, Y.; Takai, H.; Takashima, R.; Takeda, H.; Takeshita, T.; Talby, M.; Talyshev, A.; Tamsett, M.C.; Tanaka, J.; Tanaka, R.; Tanaka, S.; Tanaka, S.; Tanaka, Y.; Tani, K.; Tannoury, N.; Tappern, G.P.; Tapprogge, S.; Tardif, D.; Tarem, S.; Tarrade, F.; Tartarelli, G.F.; Tas, P.; Tasevsky, M.; Tassi, E.; Tatarkhanov, M.; Taylor, C.; Taylor, F.E.; Taylor, G.; Taylor, G.N.; Taylor, W.; Teixeira Dias Castanheira, M.; Teixeira-Dias, P.; Temming, K.K.; Ten Kate, H.; Teng, P.K.; Tennenbaum-Katan, Y.D.; Terada, S.; Terashi, K.; Terron, J.; Terwort, M.; Testa, M.; Teuscher, R.J.; Tevlin, C.M.; Thadome, J.; Therhaag, J.; Theveneaux-Pelzer, T.; Thioye, M.; Thoma, S.; Thomas, J.P.; Thompson, E.N.; Thompson, P.D.; Thompson, P.D.; Thompson, A.S.; Thomson, E.; Thomson, M.; Thun, R.P.; Tic, T.; Tikhomirov, V.O.; Tikhonov, Y.A.; Timmermans, C.J.W.P.; Tipton, P.; Tique Aires Viegas, F.J.; Tisserant, S.; Tobias, J.; Toczek, B.; Todorov, T.; Todorova-Nova, S.; Toggerson, B.; Tojo, J.; Tokar, S.; Tokunaga, K.; Tokushuku, K.; Tollefson, K.; Tomoto, M.; Tompkins, L.; Toms, K.; Tonazzo, A.; Tong, G.; Tonoyan, A.; Topfel, C.; Topilin, N.D.; Torchiani, I.; Torrence, E.; Torro Pastor, E.; Toth, J.; Touchard, F.; Tovey, D.R.; Traynor, D.; Trefzger, T.; Treis, J.; Tremblet, L.; Tricoli, A.; Trigger, I.M.; Trincaz-Duvoid, S.; Trinh, T.N.; Tripiana, M.F.; Triplett, N.; Trischuk, W.; Trivedi, A.; Trocme, B.; Troncon, C.; Trottier-McDonald, M.; Trzupek, A.; Tsarouchas, C.; Tseng, J.C-L.; Tsiakiris, M.; Tsiareshka, P.V.; Tsionou, D.; Tsipolitis, G.; Tsiskaridze, V.; Tskhadadze, E.G.; Tsukerman, I.I.; Tsulaia, V.; Tsung, J.W.; Tsuno, S.; Tsybychev, D.; Tua, A.; Tuggle, J.M.; Turala, M.; Turecek, D.; Turk Cakir, I.; Turlay, E.; Tuts, P.M.; Tykhonov, A.; Tylmad, M.; Tyndel, M.; Typaldos, D.; Tyrvainen, H.; Tzanakos, G.; Uchida, K.; Ueda, I.; Ueno, R.; Ugland, M.; Uhlenbrock, M.; Uhrmacher, M.; Ukegawa, F.; Unal, G.; Underwood, D.G.; Undrus, A.; Unel, G.; Unno, Y.; Urbaniec, D.; Urkovsky, E.; Urquijo, P.; Urrejola, P.; Usai, G.; Uslenghi, M.; Vacavant, L.; Vacek, V.; Vachon, B.; Vahsen, S.; Valderanis, C.; Valenta, J.; Valente, P.; Valentinetti, S.; Valkar, S.; Valladolid Gallego, E.; Vallecorsa, S.; Valls Ferrer, J.A.; van der Graaf, H.; van der Kraaij, E.; van der Poel, E.; van der Ster, D.; Van Eijk, B.; van Eldik, N.; van Gemmeren, P.; van Kesteren, Z.; van Vulpen, I.; Vandelli, W.; Vandoni, G.; Vaniachine, A.; Vankov, P.; Vannucci, F.; Varela Rodriguez, F.; Vari, R.; Varnes, 
E.W.; Varouchas, D.; Vartapetian, A.; Varvell, K.E.; Vassilakopoulos, V.I.; Vazeille, F.; Vegni, G.; Veillet, J.J.; Vellidis, C.; Veloso, F.; Veness, R.; Veneziano, S.; Ventura, A.; Ventura, D.; Ventura, S.; Venturi, M.; Venturi, N.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, J.C.; Vest, A.; Vetterli, M.C.; Vichou, I.; Vickey, T.; Viehhauser, G.H.A.; Viel, S.; Villa, M.; Villaplana Perez, M.; Vilucchi, E.; Vincter, M.G.; Vinek, E.; Vinogradov, V.B.; Virchaux, M.; Viret, S.; Virzi, J.; Vitale, A.; Vitells, O.; Vivarelli, I.; Vives Vaque, F.; Vlachos, S.; Vlasak, M.; Vlasov, N.; Vogel, A.; Vokac, P.; Volpi, M.; Volpini, G.; von der Schmitt, H.; von Loeben, J.; von Radziewski, H.; von Toerne, E.; Vorobel, V.; Vorobiev, A.P.; Vorwerk, V.; Vos, M.; Voss, R.; Voss, T.T.; Vossebeld, J.H.; Vovenko, A.S.; Vranjes, N.; Vranjes Milosavljevic, M.; Vrba, V.; Vreeswijk, M.; Vu Anh, T.; Vuillermet, R.; Vukotic, I.; Wagner, W.; Wagner, P.; Wahlen, H.; Wakabayashi, J.; Walbersloh, J.; Walch, S.; Walder, J.; Walker, R.; Walkowiak, W.; Wall, R.; Waller, P.; Wang, C.; Wang, H.; Wang, J.; Wang, J.; Wang, J.C.; Wang, R.; Wang, S.M.; Warburton, A.; Ward, C.P.; Warsinsky, M.; Watkins, P.M.; Watson, A.T.; Watson, M.F.; Watts, G.; Watts, S.; Waugh, A.T.; Waugh, B.M.; Weber, J.; Weber, M.; Weber, M.S.; Weber, P.; Weidberg, A.R.; Weingarten, J.; Weiser, C.; Wellenstein, H.; Wells, P.S.; Wen, M.; Wenaus, T.; Wendler, S.; Weng, Z.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M.; Werner, P.; Werth, M.; Wessels, M.; Whalen, K.; Wheeler-Ellis, S.J.; Whitaker, S.P.; White, A.; White, M.J.; White, S.; Whitehead, S.R.; Whiteson, D.; Whittington, D.; Wicek, F.; Wicke, D.; Wickens, F.J.; Wiedenmann, W.; Wielers, M.; Wienemann, P.; Wiglesworth, C.; Wiik, L.A.M.; Wildauer, A.; Wildt, M.A.; Wilhelm, I.; Wilkens, H.G.; Will, J.Z.; Williams, E.; Williams, H.H.; Willis, W.; Willocq, S.; Wilson, J.A.; Wilson, M.G.; Wilson, A.; Wingerter-Seez, I.; Winkelmann, S.; Winklmeier, F.; Wittgen, M.; Wolter, M.W.; Wolters, H.; Wooden, G.; Wosiek, B.K.; Wotschack, J.; Woudstra, M.J.; Wraight, K.; Wright, C.; Wrona, B.; Wu, S.L.; Wu, X.; Wu, Y.; Wulf, E.; Wunstorf, R.; Wynne, B.M.; Xaplanteris, L.; Xella, S.; Xie, S.; Xie, Y.; Xu, C.; Xu, D.; Xu, G.; Yabsley, B.; Yamada, M.; Yamamoto, A.; Yamamoto, K.; Yamamoto, S.; Yamamura, T.; Yamaoka, J.; Yamazaki, T.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, U.K.; Yang, Y.; Yang, Y.; Yang, Z.; Yanush, S.; Yao, W-M.; Yao, Y.; Yasu, Y.; Ye, J.; Ye, S.; Yilmaz, M.; Yoosoofmiya, R.; Yorita, K.; Yoshida, R.; Young, C.; Youssef, S.; Yu, D.; Yu, J.; Yu, J.; Yuan, L.; Yurkewicz, A.; Zaets, V.G.; Zaidan, R.; Zaitsev, A.M.; Zajacova, Z.; Zalite, Yo.K.; Zanello, L.; Zarzhitsky, P.; Zaytsev, A.; Zdrazil, M.; Zeitnitz, C.; Zeller, M.; Zema, P.F.; Zemla, A.; Zendler, C.; Zenin, A.V.; Zenin, O.; Zenis, T.; Zenonos, Z.; Zenz, S.; Zerwas, D.; Zevi della Porta, G.; Zhan, Z.; Zhang, D.; Zhang, H.; Zhang, J.; Zhang, X.; Zhang, Z.; Zhao, L.; Zhao, T.; Zhao, Z.; Zhemchugov, A.; Zheng, S.; Zhong, J.; Zhou, B.; Zhou, N.; Zhou, Y.; Zhu, C.G.; Zhu, H.; Zhu, Y.; Zhuang, X.; Zhuravlov, V.; Zieminska, D.; Zilka, B.; Zimmermann, R.; Zimmermann, S.; Zimmermann, S.; Ziolkowski, M.; Zitoun, R.; Zivkovic, L.; Zmouchko, V.V.; Zobernig, G.; Zoccoli, A.; Zolnierowski, Y.; Zsenei, A.; zur Nedden, M.; Zutshi, V.; Zwalinski, L.

    2011-01-01

    Measurements are presented from proton-proton collisions at centre-of-mass energies of sqrt(s) = 0.9, 2.36 and 7 TeV recorded with the ATLAS detector at the LHC. Events were collected using a single-arm minimum-bias trigger. The charged-particle multiplicity, its dependence on transverse momentum and pseudorapidity and the relationship between the mean transverse momentum and charged-particle multiplicity are measured. Measurements in different regions of phase-space are shown, providing diffraction-reduced measurements as well as more inclusive ones. The observed distributions are corrected to well-defined phase-space regions, using model-independent corrections. The results are compared to each other and to various Monte Carlo models, including a new AMBT1 PYTHIA 6 tune. In all the kinematic regions considered, the particle multiplicities are higher than predicted by the Monte Carlo models. The central charged-particle multiplicity per event and unit of pseudorapidity, for tracks with pT >100 MeV, is...

  1. Dissipative dynamics with the corrected propagator method. Numerical comparison between fully quantum and mixed quantum/classical simulations

    International Nuclear Information System (INIS)

    Gelman, David; Schwartz, Steven D.

    2010-01-01

    The recently developed quantum-classical method has been applied to the study of dissipative dynamics in multidimensional systems. The method is designed to treat many-body systems consisting of a low dimensional quantum part coupled to a classical bath. Assuming the approximate zeroth order evolution rule, the corrections to the quantum propagator are defined in terms of the total Hamiltonian and the zeroth order propagator. Then the corrections are taken to the classical limit by introducing the frozen Gaussian approximation for the bath degrees of freedom. The evolution of the primary part is governed by the corrected propagator yielding the exact quantum dynamics. The method has been tested on two model systems coupled to a harmonic bath: (i) an anharmonic (Morse) oscillator and (ii) a double-well potential. The simulations have been performed at zero temperature. The results have been compared to the exact quantum simulations using the surrogate Hamiltonian approach.

  2. Automatic segmentation of male pelvic anatomy on computed tomography images: a comparison with multiple observers in the context of a multicentre clinical trial.

    Science.gov (United States)

    Geraghty, John P; Grogan, Garry; Ebert, Martin A

    2013-04-30

    This study investigates the variation in segmentation of several pelvic anatomical structures on computed tomography (CT) between multiple observers and a commercial automatic segmentation method, in the context of quality assurance and evaluation during a multicentre clinical trial. CT scans of two prostate cancer patients ('benchmarking cases'), one high risk (HR) and one intermediate risk (IR), were sent to multiple radiotherapy centres for segmentation of prostate, rectum and bladder structures according to the TROG 03.04 "RADAR" trial protocol definitions. The same structures were automatically segmented using iPlan software for the same two patients, allowing structures defined by automatic segmentation to be quantitatively compared with those defined by multiple observers. A sample of twenty trial patient datasets was also used to automatically generate anatomical structures for quantitative comparison with structures defined by individual observers for the same datasets. There was considerable agreement amongst all observers and the automatic segmentation of the benchmarking cases for the bladder, with observers segmenting a prostate with considerably more volume (mean +113.3%) than that automatically segmented. Similar results were seen across the twenty sample datasets, with disagreement between iPlan and observers dominant at the prostatic apex and superior part of the rectum, which is consistent with observations made during quality assurance reviews during the trial. This study has demonstrated quantitative analysis for comparison of multi-observer segmentation studies. For automatic segmentation algorithms based on image registration, as in iPlan, it is apparent that agreement between observer and automatic segmentation will be a function of patient-specific image characteristics, particularly for anatomy with poor contrast definition. For this reason, it is suggested that automatic registration based on transformation of a single reference dataset
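
    As a concrete illustration of the kind of observer-versus-automatic comparison described above, the sketch below computes two generic agreement metrics (Dice overlap and relative volume difference) between binary contour masks. These metrics are my illustrative choice; the abstract does not specify which measures the trial used.

```python
import numpy as np

def compare_segmentations(observer_mask, auto_mask, voxel_volume_cm3=0.001):
    """Compare two binary segmentation masks (illustrative metrics only).

    Returns the Dice overlap and the relative volume difference of the
    observer contour with respect to the automatic contour.
    """
    observer_mask = np.asarray(observer_mask, dtype=bool)
    auto_mask = np.asarray(auto_mask, dtype=bool)
    intersection = np.logical_and(observer_mask, auto_mask).sum()
    dice = 2.0 * intersection / (observer_mask.sum() + auto_mask.sum())
    vol_observer = observer_mask.sum() * voxel_volume_cm3
    vol_auto = auto_mask.sum() * voxel_volume_cm3
    rel_volume_diff_pct = 100.0 * (vol_observer - vol_auto) / vol_auto
    return dice, rel_volume_diff_pct
```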

  3. Metrics with vanishing quantum corrections

    International Nuclear Information System (INIS)

    Coley, A A; Hervik, S; Gibbons, G W; Pope, C N

    2008-01-01

    We investigate solutions of the classical Einstein or supergravity equations that solve any set of quantum-corrected Einstein equations in which the Einstein tensor plus a multiple of the metric is equated to a symmetric conserved tensor T_μν(g_αβ, ∂_τ g_αβ, ∂_τ∂_σ g_αβ, ...) constructed from sums of terms involving contractions of the metric and powers of arbitrary covariant derivatives of the curvature tensor. A classical solution, such as an Einstein metric, is called universal if, when evaluated on that Einstein metric, T_μν is a multiple of the metric. A Ricci-flat classical solution is called strongly universal if, when evaluated on that Ricci-flat metric, T_μν vanishes. It is well known that pp-waves in four spacetime dimensions are strongly universal. We focus attention on a natural generalization: Einstein metrics with holonomy Sim(n - 2) in which all scalar invariants are zero or constant. In four dimensions we demonstrate that the generalized Ghanam-Thompson metric is weakly universal and that the Goldberg-Kerr metric is strongly universal; indeed, we show that universality extends to all four-dimensional Sim(2) Einstein metrics. We also discuss generalizations to higher dimensions

  4. International Comparisons: Issues of Methodology and Practice

    Directory of Open Access Journals (Sweden)

    Serova Irina A.

    2017-12-01

    The article discusses the methodology and organization of statistical observation of countries' levels of economic development. The theoretical basis of international comparisons is identified and used to assess the inconsistencies among theoretical positions and the reasons for differences in GDP growth. Given the difficulty of forming homogeneous data sets that yield correct comparison results, a general scheme is defined for the relationship between the theoretical basis of international comparisons and the constraints of purchasing power parity (PPP). The possibility of obtaining a single measurement of national economic indicators is considered in light of existing sampling errors, measurement uncertainties and classification errors. Emphasis is placed on combining work using the ICP and the CPI with the aim of achieving comparability of data across territories and over time. Using the basic characteristics of sustainable economic growth, long-term prospects for changes in the ranking positions of countries with different income levels are determined. It is shown that the clarity and unambiguity of the theoretical provisions is the defining condition for the subsequent process of data collection and the formation of correct analytical conclusions.

  5. Corrected ROC analysis for misclassified binary outcomes.

    Science.gov (United States)

    Zawistowski, Matthew; Sussman, Jeremy B; Hofer, Timothy P; Bentley, Douglas; Hayward, Rodney A; Wiitala, Wyndy L

    2017-06-15

    Creating accurate risk prediction models from Big Data resources such as Electronic Health Records (EHRs) is a critical step toward achieving precision medicine. A major challenge in developing these tools is accounting for imperfect aspects of EHR data, particularly the potential for misclassified outcomes. Misclassification, the swapping of case and control outcome labels, is well known to bias effect size estimates for regression prediction models. In this paper, we study the effect of misclassification on accuracy assessment for risk prediction models and find that it leads to bias in the area under the curve (AUC) metric from standard ROC analysis. The extent of the bias is determined by the false positive and false negative misclassification rates as well as disease prevalence. Notably, we show that simply correcting for misclassification while building the prediction model is not sufficient to remove the bias in AUC. We therefore introduce an intuitive misclassification-adjusted ROC procedure that accounts for uncertainty in observed outcomes and produces bias-corrected estimates of the true AUC. The method requires that misclassification rates are either known or can be estimated, quantities typically required for the modeling step. The computational simplicity of our method is a key advantage, making it ideal for efficiently comparing multiple prediction models on very large datasets. Finally, we apply the correction method to a hospitalization prediction model from a cohort of over 1 million patients from the Veterans Health Administration's EHR. Implementations of the ROC correction are provided for Stata and R. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
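
    The bias described above is easy to reproduce in simulation: flipping a fraction of case and control labels pulls the observed AUC toward 0.5. The sketch below demonstrates that effect with made-up misclassification rates; it is not the paper's correction procedure, only an illustration of the phenomenon the correction addresses.

```python
import numpy as np

def auc(scores, labels):
    """Rank-based AUC (probability a random case outranks a random control)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
n = 200_000
score = rng.normal(size=n)                    # risk score
p = 1.0 / (1.0 + np.exp(-(score - 1.5)))      # true outcome probability
y_true = rng.random(n) < p                    # true labels

# Misclassify outcomes: illustrative 10% false-negative and 2% false-positive rates.
flip = np.where(y_true, rng.random(n) < 0.10, rng.random(n) < 0.02)
y_obs = np.where(flip, ~y_true, y_true)

print(f"AUC with true labels:     {auc(score, y_true):.3f}")
print(f"AUC with observed labels: {auc(score, y_obs):.3f}")  # biased toward 0.5
```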

  6. Spatial Correlation of Pathology and Perfusion Changes within the Cortex and White Matter in Multiple Sclerosis.

    Science.gov (United States)

    Mulholland, A D; Vitorino, R; Hojjat, S-P; Ma, A Y; Zhang, L; Lee, L; Carroll, T J; Cantrell, C G; Figley, C R; Aviv, R I

    2018-01-01

    The spatial correlation between WM and cortical GM disease in multiple sclerosis is controversial and has not been previously assessed with perfusion MR imaging. We sought to determine the nature of the association between lobar WM and cortical GM volume and perfusion. Nineteen individuals with secondary-progressive multiple sclerosis, 19 with relapsing-remitting multiple sclerosis, and 19 age-matched healthy controls were recruited. Quantitative MR perfusion imaging was used to derive CBF, CBV, and MTT within cortical GM, WM, and T2-hyperintense lesions. A 2-step multivariate linear regression (corrected for age, disease duration, and Expanded Disability Status Scale) was used to assess correlations between perfusion and volume measures in global and lobar normal-appearing WM, cortical GM, and T2-hyperintense lesions. The Bonferroni adjustment was applied as appropriate. Global cortical GM and WM volume was significantly reduced for each group comparison, except cortical GM volume of those with relapsing-remitting multiple sclerosis versus controls. Global and lobar cortical GM CBF and CBV were reduced in secondary-progressive multiple sclerosis compared with other groups but not for relapsing-remitting multiple sclerosis versus controls. Global and lobar WM CBF and CBV were not significantly different across groups. The distribution of lobar cortical GM and WM volume reduction was disparate, except for the occipital lobes in patients with secondary-progressive multiple sclerosis versus those with relapsing-remitting multiple sclerosis. Moderate associations were identified between lobar cortical GM and lobar normal-appearing WM volume in controls and in the left temporal lobe in relapsing-remitting multiple sclerosis. No significant associations occurred between cortical GM and WM perfusion or volume. Strong correlations were observed between cortical GM, normal-appearing WM and lesional perfusion, with respect to each global and lobar region within healthy controls, and
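
    The abstract mentions applying a Bonferroni adjustment across the many regional comparisons, which ties back to the theme of these records. Below is a minimal sketch of that adjustment with hypothetical p-values; it is a generic illustration, not the study's analysis code.

```python
import numpy as np

def bonferroni_adjust(p_values, alpha=0.05):
    """Bonferroni adjustment for a family of comparisons.

    Returns the adjusted p-values (capped at 1) and a boolean array marking
    which comparisons remain significant at the family-wise alpha level.
    """
    p = np.asarray(p_values, dtype=float)
    adjusted = np.minimum(p * len(p), 1.0)
    return adjusted, adjusted < alpha

p_vals = [0.004, 0.012, 0.030, 0.200]   # hypothetical per-lobe test results
adjusted, significant = bonferroni_adjust(p_vals)
for raw, adj, sig in zip(p_vals, adjusted, significant):
    print(f"p = {raw:.3f} -> adjusted {adj:.3f} ({'significant' if sig else 'n.s.'})")
```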

  7. Development of an approach to correcting MicroPEM baseline drift.

    Science.gov (United States)

    Zhang, Ting; Chillrud, Steven N; Pitiranggon, Masha; Ross, James; Ji, Junfeng; Yan, Beizhan

    2018-07-01

    Fine particulate matter (PM2.5) is associated with various adverse health outcomes. The MicroPEM (RTI, NC), a miniaturized real-time portable particulate sensor with an integrated filter for collecting particles, has been widely used for personal PM2.5 exposure assessment. Five-day deployments were targeted for a total of 142 deployments (personal or residential) to obtain real-time PM2.5 levels from children living in New York City and Baltimore. In 79 of these 142 deployments, high-efficiency particulate air (HEPA) filters were applied in the field at the beginning and end of each deployment to adjust the zero level of the nephelometer. However, unacceptable baseline drift was observed in a large fraction (> 40%) of acquisitions in this study even after HEPA correction; this drift issue has been observed in several other studies as well. The purpose of the present study is to develop an algorithm that corrects the baseline drift in MicroPEM data based on central-site ambient data during inactive time periods. A running baseline & gravimetric correction (RBGC) method was developed based on the comparison of MicroPEM readings during inactive periods to ambient PM2.5 levels provided by fixed monitoring sites and on the gravimetric weight of PM2.5 collected on the MicroPEM filters. The results after RBGC correction were compared with those from the HEPA approach and from gravimetric correction alone, and seven pairs of duplicate acquisitions were used to validate the RBGC method. The percentages of acquisitions with baseline drift problems were 42%, 53% and 10% for raw, HEPA-corrected, and RBGC-corrected data, respectively. Pearson correlation analysis of the duplicates showed an increase in the coefficient of determination from 0.75 for raw data to 0.97 after RBGC correction. In addition, the slope of the regression line increased from 0.60 for raw data to 1.00 after RBGC correction. The RBGC approach corrected the baseline drift issue associated with MicroPEM data. The algorithm developed
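
    One plausible reading of the RBGC idea, as summarized above, is: estimate the nephelometer offset during inactive periods by comparison with an ambient reference, interpolate that offset across the whole time series, subtract it, and then rescale so the time-integrated concentration matches the gravimetric filter mass. The sketch below implements that simplified reading; all function and variable names are mine and the details are assumptions, not the published algorithm.

```python
import numpy as np

def rbgc_like_correction(times_min, readings, inactive_mask, ambient_at_inactive,
                         gravimetric_mass_ug, flow_lpm):
    """Simplified running-baseline + gravimetric rescaling (illustrative only).

    times_min           : sample times in minutes (assumed uniformly spaced)
    readings            : raw nephelometer PM2.5 readings (ug/m^3)
    inactive_mask       : True where the instrument/wearer was inactive
    ambient_at_inactive : ambient reference PM2.5 at the inactive samples
    gravimetric_mass_ug : PM2.5 mass collected on the integrated filter (ug)
    flow_lpm            : sampling flow rate (litres per minute)
    """
    times_min = np.asarray(times_min, dtype=float)
    readings = np.asarray(readings, dtype=float)

    # 1) Offset estimated only where the unit was inactive.
    offsets = readings[inactive_mask] - np.asarray(ambient_at_inactive, dtype=float)
    # 2) Interpolate the running baseline over the full series and subtract it.
    baseline = np.interp(times_min, times_min[inactive_mask], offsets)
    corrected = readings - baseline
    # 3) Rescale so the time-integrated concentration matches the filter mass.
    duration_min = times_min[-1] - times_min[0]
    sampled_volume_m3 = flow_lpm * duration_min / 1000.0
    neph_mass_ug = corrected.mean() * sampled_volume_m3  # uniform sampling assumed
    corrected *= gravimetric_mass_ug / neph_mass_ug
    return corrected
```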

  8. A scheme for PET data normalization in event-based motion correction

    International Nuclear Information System (INIS)

    Zhou, Victor W; Kyme, Andre Z; Fulton, Roger; Meikle, Steven R

    2009-01-01

    Line of response (LOR) rebinning is an event-based motion-correction technique for positron emission tomography (PET) imaging that has been shown to compensate effectively for rigid motion. It involves the spatial transformation of LORs to compensate for motion during the scan, as measured by a motion tracking system. Each motion-corrected event is then recorded in the sinogram bin corresponding to the transformed LOR. It has been shown previously that the corrected event must be normalized using a normalization factor derived from the original LOR, that is, based on the pair of detectors involved in the original coincidence event. In general, due to data compression strategies (mashing), sinogram bins record events detected on multiple LORs. The number of LORs associated with a sinogram bin determines the relative contribution of each LOR. This paper provides a thorough treatment of event-based normalization during motion correction of PET data using LOR rebinning. We demonstrate theoretically and experimentally that normalization of the corrected event during LOR rebinning should account for the number of LORs contributing to the sinogram bin into which the motion-corrected event is binned. Failure to account for this factor may cause artifactual slice-to-slice count variations in the transverse slices and visible horizontal stripe artifacts in the coronal and sagittal slices of the reconstructed images. The theory and implementation of normalization in conjunction with the LOR rebinning technique is described in detail, and experimental verification of the proposed normalization method in phantom studies is presented.
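
    To make the bookkeeping in the abstract concrete, the sketch below records each motion-corrected event in the sinogram bin of its transformed LOR, weights it with the normalization factor of the *original* LOR, and accounts for the number of LORs mashed into the destination bin. The specific weighting convention (dividing by the destination bin's LOR multiplicity) and all helper names are assumptions made for illustration, not the paper's exact implementation.

```python
import numpy as np

def rebin_events(events, norm_factor, transform_lor, lor_to_bin, lors_per_bin,
                 n_bins):
    """Event-based motion correction with LOR rebinning (illustrative sketch).

    events        : iterable of original LOR indices, one per coincidence event
    norm_factor   : norm_factor[lor] -> normalization factor of the ORIGINAL LOR
    transform_lor : callable mapping an original LOR to the motion-corrected LOR
    lor_to_bin    : callable mapping a (corrected) LOR to its mashed sinogram bin
    lors_per_bin  : lors_per_bin[bin] -> number of LORs mashed into that bin
    n_bins        : total number of sinogram bins
    """
    sinogram = np.zeros(n_bins)
    for lor in events:
        corrected_lor = transform_lor(lor)       # spatially transform the LOR
        dest_bin = lor_to_bin(corrected_lor)     # mashed bin it now falls into
        # Normalization from the original detector pair, scaled by the LOR
        # multiplicity of the destination bin (assumed convention).
        weight = norm_factor[lor] / lors_per_bin[dest_bin]
        sinogram[dest_bin] += weight
    return sinogram
```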

  9. Phase correction of electromagnetic coupling effects in cross-borehole EIT measurements

    International Nuclear Information System (INIS)

    Zhao, Y; Zimmermann, E; Wolters, B; Van Waasen, S; Huisman, J A; Treichel, A; Kemna, A

    2015-01-01

    Borehole EIT measurements in a broad frequency range (mHz to kHz) are used to study subsurface geophysical properties. However, accurate measurements have long been difficult because the required long electric cables introduce undesired inductive and capacitive coupling effects. Recently, it has been shown that such effects can successfully be corrected in the case of single-borehole measurements. The aim of this paper is to extend the previously developed correction procedure for inductive coupling during EIT measurements in a single borehole to cross-borehole EIT measurements with multiple borehole electrode chains. In order to accelerate and simplify the previously developed correction procedure for inductive coupling, a pole–pole matrix of mutual inductances is defined. This consists of the inductances of each individual chain obtained from calibration measurements and the inductances between two chains calculated from the known cable positions using numerical modelling. The new correction procedure is successfully verified with measurements in a water-filled pool under controlled conditions where the errors introduced by capacitive coupling were well-defined and could be estimated by FEM forward modelling. In addition, EIT field measurements demonstrate that the correction methods increase the phase accuracy considerably. Overall, the phase accuracy of cross-hole EIT measurements after correction of inductive and capacitive coupling is improved to better than 1 mrad up to a frequency of 1 kHz, which substantially improves our ability to characterize the frequency-dependent complex electrical resistivity of weakly polarizable soils and sediments in situ. (paper)

  10. Mathematical model of rhodium self-powered detectors and algorithms for correction of their time delay

    International Nuclear Information System (INIS)

    Bur'yan, V.I.; Kozlova, L.V.; Kuzhil', A.S.; Shikalov, V.F.

    2005-01-01

    The development of algorithms for correcting the inertial lag of self-powered neutron detectors (SPND) is motivated by the need to increase the response speed of in-core instrumentation systems (ICIS). A faster ICIS response will permit real-time monitoring of fast transient processes in the core and, in the longer term, the use of rhodium SPND signals for emergency protection functions based on local parameters. In this paper it is proposed to use a mathematical model of neutron flux measurement by SPNDs, written in integral form, to construct the correction algorithms. In this case, such an approach is the most convenient for deriving recurrent algorithms for flux estimation. Results are presented comparing neutron flux and reactivity estimates obtained from ionization chamber readings and from SPND signals corrected by the proposed algorithms.

  11. The determination of beam quality correction factors: Monte Carlo simulations and measurements.

    Science.gov (United States)

    González-Castaño, D M; Hartmann, G H; Sánchez-Doblado, F; Gómez, F; Kapsch, R-P; Pena, J; Capote, R

    2009-08-07

    Modern dosimetry protocols are based on the use of ionization chambers provided with a calibration factor in terms of absorbed dose to water. The basic formula to determine the absorbed dose in a user's beam contains the well-known beam quality correction factor that is required whenever the quality of the radiation used at calibration differs from that of the user's radiation. The dosimetry protocols describe the whole ionization chamber calibration procedure and include tabulated beam quality correction factors which refer to 60Co gamma radiation used as the calibration quality. They have been calculated for a series of ionization chambers and radiation qualities based on formulae which are also described in the protocols. In the case of high-energy photon beams, the relative standard uncertainty of the beam quality correction factor is estimated to amount to 1%. In the present work, two alternative methods to determine beam quality correction factors are presented: Monte Carlo simulation using the EGSnrc system and an experimental method based on a comparison with a reference chamber. Both Monte Carlo calculations and ratio measurements were carried out for nine chambers in several radiation beams. Four chamber types are not included in the current dosimetry protocols. Beam quality corrections for the reference chamber at two beam qualities were also measured using a calorimeter at the PTB Primary Standards Dosimetry Laboratory. Good agreement between the Monte Carlo calculated (1% uncertainty) and measured (0.5% uncertainty) beam quality correction factors was obtained. Based on these results we propose that beam quality correction factors can be generated both by measurements and by Monte Carlo simulation with an uncertainty at least comparable to that given in current dosimetry protocols.
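
    For orientation, the "basic formula" mentioned above can be written, in the notation of standard dosimetry protocols such as TRS-398 (a sketch of the relation, not a quotation from the record), as

      D_{w,Q} = M_Q \, N_{D,w,Q_0} \, k_{Q,Q_0}

    and, assuming both chambers are cross-calibrated in the same 60Co (quality Q_0) field, the experimental comparison method determines the test chamber's factor from reading ratios against a reference chamber with known factor:

      k_{Q,Q_0}^{\mathrm{test}} = k_{Q,Q_0}^{\mathrm{ref}} \cdot \frac{M_Q^{\mathrm{ref}}}{M_{Q_0}^{\mathrm{ref}}} \cdot \frac{M_{Q_0}^{\mathrm{test}}}{M_Q^{\mathrm{test}}}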

  12. First clinical experience with a multiple region of interest registration and correction method in radiotherapy of head-and-neck cancer patients

    International Nuclear Information System (INIS)

    Beek, Suzanne van; Kranen, Simon van; Mencarelli, Angelo; Remeijer, Peter; Rasch, Coen; Herk, Marcel van; Sonke, Jan-Jakob

    2010-01-01

    Purpose: To discuss the first clinical experience with a multiple region of interest (mROI) registration and correction method for high-precision radiotherapy of head-and-neck cancer patients. Materials and methods: 12-13 3D rectangular ROIs were automatically placed around bony structures on the planning CT scans (n = 50 patients) and were individually registered to subsequent CBCT scans. mROI registration was used to quantify global and local setup errors. The time required to perform the mROI registration was compared with that of a previously used single-ROI method. The number of scans with residual local setup errors exceeding 5 mm/5 deg. (warnings) was scored, together with the frequency of ROIs exceeding these limits for three or more consecutive imaging fractions (systematic errors). Results: In 40% of the CBCT scans, one or more ROI registrations exceeded the 5 mm/5 deg. limits. Most warnings were seen for the ROI 'hyoid' (31% of the rotation warnings and 14% of the translation warnings). Systematic errors led to 52 consultations with the treating physician. The preparation and registration time was similar for both registration methods. Conclusions: The mROI registration method is easy to use with little extra workload, provides additional information on local setup errors, and helps to select patients for re-planning.

  13. International comparison of activity measurements of a solution of 75Se

    Science.gov (United States)

    Ratel, Guy

    2002-04-01

    Activity measurements of a solution of 75Se, supplied by the BIPM, have been carried out by 21 laboratories within the framework of an international comparison. Seven different methods were used. Details on source preparation, experimental facilities and counting data are reported. The measured activity-concentration values show a total spread of 6.62% before correction and 6.02% after correction for delayed events, with standard deviations of the unweighted means of 0.45% and 0.36%, respectively. The correction for delayed events was measured directly by four laboratories. Unfortunately no consensus on the activity value could be deduced from their results. The results of the comparison have been entered in the tables of the International Reference System (SIR) for γ-ray emitting radionuclides. The half-life of the metastable state was also determined by two laboratories and found to be in good agreement with the values found in the literature.

  14. Basic dynamics at a multiple resonance

    International Nuclear Information System (INIS)

    Ferraz-Mello, S.; Yokoyama, T.

    The problem of multiple resonance is dealt with as it occurs in Celestial Mechanics and in non-linear Mechanics. In perturbation theory small divisors occur as a consequence of the fact that the flows in the phase space of the real system and the flows in the phase space of the so-called undisturbed system are not homeomorphic at all. Whatever the perturbation technique we adopt, the first step is to correct the topology of the undisturbed flows. It is shown that at a multiple resonance we are led to dynamical systems that are generally non-integrable. The basic representatives of these systems are the n-pendulums θ̈_k = Σ_j A_{jk} sin θ_j. Multiple resonances are classified as syndetic or asyndetic following the eigenvalues of a quadratic form. Some degenerate cases are also presented. (Author)
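
    For readers who want to experiment with the n-pendulum system written above, a small numerical integration sketch follows; the 2x2 coupling matrix and the initial conditions are arbitrary assumptions chosen only for the demonstration.

      import numpy as np
      from scipy.integrate import solve_ivp

      A = np.array([[-1.0, 0.3],
                    [0.3, -1.0]])          # assumed coupling matrix A_{jk}

      def rhs(t, y):
          n = len(y) // 2
          theta, omega = y[:n], y[n:]
          # theta''_k = sum_j A[j, k] * sin(theta_j)
          return np.concatenate([omega, np.sin(theta) @ A])

      sol = solve_ivp(rhs, (0.0, 50.0), [0.1, 2.5, 0.0, 0.0], max_step=0.01)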

  15. Murasaki: a fast, parallelizable algorithm to find anchors from multiple genomes.

    Directory of Open Access Journals (Sweden)

    Kris Popendorf

    Full Text Available BACKGROUND: With the number of available genome sequences increasing rapidly, the magnitude of sequence data required for multiple-genome analyses is a challenging problem. When large-scale rearrangements break the collinearity of gene orders among genomes, genome comparison algorithms must first identify sets of short well-conserved sequences present in each genome, termed anchors. Previously, anchor identification among multiple genomes has been achieved using pairwise alignment tools like BLASTZ or progressive alignment tools like TBA, but the computational requirements for sequence comparisons of multiple genomes quickly become a limiting factor as the number and scale of genomes grow. METHODOLOGY/PRINCIPAL FINDINGS: Our algorithm, named Murasaki, makes it possible to identify anchors within multiple large sequences on the scale of several hundred megabases in a few minutes using a single CPU. Two advanced features of Murasaki are (1) adaptive hash function generation, which enables efficient use of arbitrary mismatch patterns (spaced seeds) and therefore the comparison of multiple mammalian genomes in a practical amount of computation time, and (2) parallelizable execution that decreases the required wall-clock and CPU times. Murasaki can perform a sensitive anchoring of eight mammalian genomes (human, chimp, rhesus, orangutan, mouse, rat, dog, and cow) in 21 hours CPU time (42 minutes wall time). This is the first single-pass in-core anchoring of multiple mammalian genomes. We evaluated Murasaki by comparing it with the genome alignment programs BLASTZ and TBA. We show that Murasaki can anchor multiple genomes in near linear time, compared to the quadratic time requirements of BLASTZ and TBA, while improving overall accuracy. CONCLUSIONS/SIGNIFICANCE: Murasaki provides an open source platform to take advantage of long patterns, cluster computing, and novel hash algorithms to produce accurate anchors across multiple genomes with
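
    To illustrate the spaced-seed idea behind feature (1), a toy Python index is sketched below: positions marked '1' in the pattern contribute to the hash key, while '0' positions tolerate mismatches. This is not Murasaki's adaptive hash function generation, only a minimal illustration with an arbitrary seed pattern.

      from collections import defaultdict

      def spaced_seed_index(seq, pattern="1101011"):
          care = [i for i, c in enumerate(pattern) if c == "1"]
          index = defaultdict(list)
          for pos in range(len(seq) - len(pattern) + 1):
              key = "".join(seq[pos + i] for i in care)   # only '1' positions are hashed
              index[key].append(pos)
          return index

      def shared_anchors(seq_a, seq_b, pattern="1101011"):
          ia, ib = spaced_seed_index(seq_a, pattern), spaced_seed_index(seq_b, pattern)
          return [(key, ia[key], ib[key]) for key in ia if key in ib]

      print(shared_anchors("ACGTACGTGGA", "TTACGAACGTG"))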

  16. The design and implementation of a motion correction scheme for neurological PET

    International Nuclear Information System (INIS)

    Bloomfield, Peter M; Spinks, Terry J; Reed, Johnny; Schnorr, Leonard; Westrip, Anthony M; Livieratos, Lefteris; Fulton, Roger; Jones, Terry

    2003-01-01

    A method is described to monitor the motion of the head during neurological positron emission tomography (PET) acquisitions and to correct the data post acquisition for the recorded motion prior to image reconstruction. The technique uses an optical tracking system, Polaris™, to accurately monitor the position of the head during the PET acquisition. The PET data are acquired in list mode, where the events are written directly to disk during acquisition. The motion tracking information is aligned to the PET data using a sequence of pseudo-random numbers, which are inserted into the time tags in the list mode event stream through the gating input interface on the tomograph. The position of the head is monitored during the transmission acquisition, and it is assumed that there is minimal head motion during this measurement. Each event, prompt and delayed, in the list mode event stream is corrected for motion and transformed into the transmission space. For a given line of response, normalization (including corrections for detector efficiency, geometry, crystal interference and dead time) is applied prior to motion correction and rebinning into the sinogram. A series of phantom experiments was performed to confirm the accuracy of the method: (a) a point source located at three discrete axial positions in the tomograph field of view, 0 mm, 10 mm and 20 mm from a reference point, and (b) a multi-line source phantom rotated both discretely and gradually through ±5 deg. and ±15 deg., including a vertical and a horizontal movement in the plane. For both phantom experiments, images were reconstructed from both the fixed and the motion-corrected data. Measurements of resolution, full width at half maximum (FWHM) and full width at tenth maximum (FWTM), were calculated from these images and a comparison made between the fixed and motion-corrected datasets. From the point source measurements, the FWHM at each axial position was 7.1 mm in the horizontal direction, and

  17. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    Science.gov (United States)

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    2017-07-01

    An extension of the point kinetics model is developed to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. The spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  18. A screening-corrected additivity rule for the calculation of electron scattering from macro-molecules

    International Nuclear Information System (INIS)

    Blanco, F; Garcia, G

    2009-01-01

    A simplified form of the well-known screening-corrected additivity rule procedure for the calculation of electron-molecule cross sections is proposed for the treatment of some very large macro-molecules. While the comparison of the standard and simplified treatments for a DNA dodecamer reveals very similar results, the new treatment presents some important advantages for large molecules.

  19. Multiplicity distributions in high-energy neutrino interactions

    International Nuclear Information System (INIS)

    Chapman, J.W.; Coffin, C.T.; Diamond, R.N.; French, H.; Louis, W.; Roe, B.P.; Seidl, A.A.; Vander Velde, J.C.; Berge, J.P.; Bogert, D.V.; DiBianca, F.A.; Cundy, D.C.; Dunaitsev, A.; Efremenko, V.; Ermolov, P.; Fowler, W.; Hanft, R.; Harigel, G.; Huson, F.R.; Kolganov, V.; Mukhin, A.; Nezrick, F.A.; Rjabov, Y.; Scott, W.G.; Smart, W.

    1976-01-01

    Results from the Fermilab 15-ft bubble chamber on the charged-particle multiplicity distributions produced in high-energy charged-current neutrino-proton interactions are presented. Comparisons are made to γp, ep, μp, and inclusive pp scattering. The mean hadronic multiplicity appears to depend only on the mass of the excited hadronic state, independent of the mode of excitation. A fit to the neutrino data gives ⟨n⟩ = (1.09 ± 0.38) + (1.09 ± 0.03) ln W²
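
    For orientation only (a numerical illustration, not a value quoted in the record): at W = 10 GeV this parametrization gives ⟨n⟩ ≈ 1.09 + 1.09 ln(100) ≈ 1.09 + 5.02 ≈ 6.1 charged particles.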

  20. Accelerating Multiple Compound Comparison Using LINGO-Based Load-Balancing Strategies on Multi-GPUs

    OpenAIRE

    Lin, Chun-Yuan; Wang, Chung-Hung; Hung, Che-Lun; Lin, Yu-Shiang

    2015-01-01

    Compound comparison is an important task in computational chemistry. From the comparison results, potential inhibitors can be found and then used in pharmaceutical experiments. The time complexity of a pairwise compound comparison is O(n²), where n is the maximal length of the compounds. In general, the length of compounds is tens to hundreds, and the computation time is small. However, more and more compounds have been synthesized and extracted, now numbering even more than tens of millions. Therefore,...

  1. A brain MRI bias field correction method created in the Gaussian multi-scale space

    Science.gov (United States)

    Chen, Mingsheng; Qin, Mingxin

    2017-07-01

    A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to subsequent image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by convolving the inhomogeneous MR image with a two-dimensional Gaussian function. In this multi-scale space, the method retrieves the image details from the difference between the original image and the convolved image. It then obtains an image whose inhomogeneity is eliminated by a weighted sum of the image details in each layer of the space. Next, the bias field-corrected MR image is obtained after a gamma (γ) correction, which enhances the contrast and brightness of the inhomogeneity-free MR image. We have tested the approach on T1 and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated the superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
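
    A rough sketch of the multi-scale idea described above is given below: per-scale details are taken as differences between the image and progressively stronger Gaussian blurs, recombined with weights, and followed by a gamma adjustment. The sigmas, weights and gamma value are illustrative assumptions rather than the published parameters.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def multiscale_bias_correct(img, sigmas=(2, 4, 8, 16), weights=None, gamma=0.8):
          img = img.astype(float)
          weights = weights or [1.0 / len(sigmas)] * len(sigmas)
          details = [img - gaussian_filter(img, s) for s in sigmas]   # per-scale detail layers
          flat = sum(w * d for w, d in zip(weights, details))          # inhomogeneity-reduced image
          flat -= flat.min()
          flat /= max(flat.max(), 1e-9)                                # normalize to [0, 1]
          return flat ** gamma                                         # gamma adjustment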

  2. Concurrent conditional clustering of multiple networks: COCONETS.

    Directory of Open Access Journals (Sweden)

    Sabrina Kleessen

    Full Text Available The accumulation of high-throughput data from different experiments has facilitated the extraction of condition-specific networks over the same set of biological entities. Comparing and contrasting such multiple biological networks is at the center of differential network biology, which aims at determining general and condition-specific responses captured in the network structure (i.e., in the included associations between the network components). We provide a novel way of comparing multiple networks based on determining a network clustering (i.e., a partition into communities) which is optimal across the set of networks with respect to a given cluster quality measure. To this end, we formulate the optimization-based problem of concurrent conditional clustering of multiple networks, termed COCONETS, based on the modularity. The solution to this problem is a clustering which depends on all considered networks and pinpoints their preserved substructures. We present theoretical results for special classes of networks to demonstrate the implications of conditionality captured by the COCONETS formulation. As the problem can be shown to be intractable, we extend an existing efficient greedy heuristic and apply it to determine concurrent conditional clusters on coexpression networks extracted from publicly available time-resolved transcriptomics data of Escherichia coli under five stresses as well as on metabolite correlation networks from a metabolomics data set of Arabidopsis thaliana exposed to eight environmental conditions. We demonstrate that the investigation of the differences between the clustering based on all networks and that obtained from a subset of networks can be used to quantify the specificity of biological responses. While a comparison of the Escherichia coli coexpression networks based on seminal properties does not pinpoint biologically relevant differences, the common network substructures extracted by COCONETS are supported by
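
    Since COCONETS optimizes a modularity-based quality measure, a toy evaluation of the standard single-network modularity Q = Σ_c (e_cc - a_c²) for a given partition is sketched below; this is only the single-network building block, not the concurrent multi-network formulation, and the inputs are illustrative assumptions.

      import numpy as np

      def modularity(adj, labels):
          """adj: symmetric adjacency matrix; labels: community label per node."""
          adj = np.asarray(adj, dtype=float)
          two_m = adj.sum()                      # twice the total edge weight
          degrees = adj.sum(axis=1)
          q = 0.0
          for c in set(labels):
              idx = [i for i, l in enumerate(labels) if l == c]
              e_cc = adj[np.ix_(idx, idx)].sum() / two_m   # within-community edge fraction
              a_c = degrees[idx].sum() / two_m             # fraction of edge ends in c
              q += e_cc - a_c ** 2
          return q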

  3. Brain connectivity measures: computation and comparison

    Directory of Open Access Journals (Sweden)

    Jovanović Aleksandar

    2013-12-01

    Full Text Available In this article, the computation and comparison of causality measures used to determine brain connectivity patterns are investigated. The main analyzed examples include published computations and comparisons of the Directed Transfer Function (DTF) and Partial Directed Coherence (PDC). It is shown that serious methodological mistakes were involved in these measure computations and comparisons. It is also shown that the neighborhood of zero is of particular importance in such evaluations and that issues of semantic stability have to be treated with more attention. Published results on the relationship between these two important measures are partly unstable under small changes of the zero threshold, and the pictures of the involved brain structures deduced in the cited articles have to be corrected. An analysis of the operators involved in the evaluations and comparisons is given, with suggestions for their improvement and complementary additional steps.

  4. MS-bar vs. pole masses of gauge bosons II: Two-loop electroweak fermion corrections

    International Nuclear Information System (INIS)

    Jegerlehner, F.; Kalmykov, M.Yu.; Veretin, O.

    2002-12-01

    We have calculated the fermion contributions to the shift of the position of the poles of the massive gauge boson propagators at two-loop order in the Standard Model. Together with the bosonic contributions calculated previously, the full two-loop corrections are available. This allows us to investigate the full correction in the relationship between the MS-bar and pole masses of the vector bosons Z and W. Two-loop renormalization and the corresponding renormalization group equations are discussed. Analytical results for the master integrals appearing in the massless fermion contributions are given. A new approach to summing multiple binomial sums has been developed. (orig.)

  5. Synchronous atmospheric radiation correction of GF-2 satellite multispectral image

    Science.gov (United States)

    Bian, Fuqiang; Fan, Dongdong; Zhang, Yan; Wang, Dandan

    2018-02-01

    GF-2 remote sensing products have been widely used in many fields for their high-quality information, which provides technical support for macroeconomic decisions. Atmospheric correction is a necessary part of data preprocessing for quantitative high-resolution remote sensing: it eliminates the signal interference along the radiation path caused by atmospheric scattering and absorption, converting apparent reflectance into the real reflectance of the surface targets. To address the problem that current research often lacks atmospheric data synchronized and spatially matched with the surface observation image, this research uses MODIS Level 1B synchronous data to characterize the atmospheric conditions at acquisition time, implements the aerosol retrieval and atmospheric correction process in code, and generates a lookup table for the remote sensing image based on the 6S (Second Simulation of a Satellite Signal in the Solar Spectrum) radiative transfer model to correct the atmospheric effects in the multispectral image from the GF-2 satellite PMS-1 payload. Based on the correction results, this paper analyzes the pixel histograms of the reflectance in the four PMS-1 spectral bands and evaluates the correction for each band. A comparison experiment was then conducted on the same GF-2 image using the QUAC method. For the different target types, the average NDVI values were computed and compared between the two correction results, and the influence of adopting synchronous atmospheric data was discussed. The study shows that using synchronous atmospheric parameters significantly improves the quantitative application of GF-2 remote sensing data.
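
    The evaluation above relies on NDVI averages; for reference, the standard index computed from near-infrared and red reflectance is sketched below (the band handling and names are assumptions and are not tied to the GF-2 PMS-1 band order).

      import numpy as np

      def ndvi(nir, red):
          """Normalized difference vegetation index, (NIR - Red) / (NIR + Red)."""
          nir, red = nir.astype(float), red.astype(float)
          return (nir - red) / np.maximum(nir + red, 1e-9)   # guard against division by zero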

  6. A multiple treatment comparison meta-analysis of monoamine oxidase type B inhibitors for Parkinson's disease.

    Science.gov (United States)

    Binde, C D; Tvete, I F; Gåsemyr, J; Natvig, B; Klemp, M

    2018-05-30

    To the best of our knowledge, there are no systematic reviews or meta-analyses that compare rasagiline, selegiline and safinamide. Therefore, we aimed to perform a drug class review comparing all available monoamine oxidase type B (MAO-B) inhibitors in a multiple treatment comparison. We performed a systematic literature search to identify randomized controlled trials assessing the efficacy of MAO-B inhibitors in patients with Parkinson's disease. MAO-B inhibitors were evaluated either as monotherapy or in combination with levodopa or dopamine agonists. Endpoints of interest were change in the Unified Parkinson's Disease Rating Scale (UPDRS) score and serious adverse events. We estimated the relative effect of each MAO-B inhibitor versus the comparator drug by creating three networks of direct and indirect comparisons. For each of the networks, we considered a joint model. The systematic literature search and study selection process identified 27 publications eligible for our three network analyses. We found the relative effects of rasagiline, safinamide and selegiline treatment given alone and compared to placebo in a model without explanatory variables to be 1.560 (1.409, 1.734), 1.449 (0.873, 2.413) and 1.532 (1.337, 1.757) respectively. We also found all MAO-B inhibitors to be effective when given together with levodopa. When ranking the MAO-B inhibitors given in combination with levodopa, selegiline was the most effective and rasagiline was the second best. All of the included MAO-B inhibitors were effective compared to placebo when given as monotherapy. Combination therapy with MAO-B inhibitors and levodopa showed that all three MAO-B inhibitors were effective compared to placebo, but selegiline was the most effective drug. © 2018 The British Pharmacological Society.

  7. Guideline validation in multiple trauma care through business process modeling.

    Science.gov (United States)

    Stausberg, Jürgen; Bilir, Hüseyin; Waydhas, Christian; Ruchholtz, Steffen

    2003-07-01

    Clinical guidelines can improve the quality of care in multiple trauma. In our Department of Trauma Surgery a specific guideline is available in paper form as a set of flowcharts. This format is appropriate for use by experienced physicians but insufficient for electronic support of learning, workflow and process optimization. A formal and logically consistent version represented with a standardized meta-model is necessary for automatic processing. In our project we transferred the paper-based guideline into an electronic format and analyzed its structure with respect to formal errors. Several errors were detected, falling into seven error categories. The errors were corrected to reach a formally and logically consistent process model. In a second step the clinical content of the guideline was revised interactively using a process-modeling tool. Our study reveals that guideline development should be assisted by process-modeling tools, which check the content against a meta-model. The meta-model itself could support the domain experts in formulating their knowledge systematically. To assure the sustainability of guideline development, a representation independent of specific applications or specific providers is necessary. Then, clinical guidelines could additionally be used for eLearning, process optimization and workflow management.

  8. The importance of correcting for signal drift in diffusion MRI

    OpenAIRE

    Vos, Sjoerd B; Tax, Chantal M W; Luijten, Peter R; Ourselin, Sebastien; Leemans, Alexander; Froeling, Martijn

    2017-01-01

    PURPOSE: To investigate previously unreported effects of signal drift as a result of temporal scanner instability on diffusion MRI data analysis and to propose a method to correct this signal drift. METHODS: We investigated the signal magnitude of non-diffusion-weighted EPI volumes in a series of diffusion-weighted imaging experiments to determine whether signal magnitude changes over time. Different scan protocols and scanners from multiple vendors were used to verify this on phantom data, a...

  9. Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach

    Science.gov (United States)

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also the lower encoding rate for LDPC code offers better error characteristics. PMID:22163732

  10. Cooperative MIMO communication at wireless sensor network: an error correcting code approach.

    Science.gov (United States)

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also the lower encoding rate for LDPC code offers better error characteristics.

  11. Improving Graduate Students' Graphing Skills of Multiple Baseline Designs with Microsoft[R] Excel 2007

    Science.gov (United States)

    Lo, Ya-yu; Starling, A. Leyf Peirce

    2009-01-01

    This study examined the effects of a graphing task analysis using the Microsoft[R] Office Excel 2007 program on the single-subject multiple baseline graphing skills of three university graduate students. Using a multiple probe across participants design, the study demonstrated a functional relationship between the number of correct graphing…

  12. Review and comparison of geometric distortion correction schemes in MR images used in stereotactic radiosurgery applications

    Science.gov (United States)

    Pappas, E. P.; Dellios, D.; Seimenis, I.; Moutsatsos, A.; Georgiou, E.; Karaiskos, P.

    2017-11-01

    In Stereotactic Radiosurgery (SRS), MR images are widely used for target localization and delineation in order to take advantage of the superior soft tissue contrast they exhibit. However, spatial dose delivery accuracy may be deteriorated by geometric distortions, which are partly attributed to static magnetic field inhomogeneity and to patient/object-induced chemical shift and susceptibility related artifacts, known as sequence-dependent distortions. Several post-imaging sequence-dependent distortion correction schemes have been proposed which mainly employ the reversal of the read gradient polarity. The scope of this work is to review, evaluate and compare the efficacy of two proposed correction approaches. A specially designed phantom which incorporates 947 control points (CPs) for distortion detection was utilized. The phantom was MR scanned at 1.5 T using the head coil and the clinically employed pulse sequence for SRS treatment planning. An additional scan was performed with identical imaging parameters except for reversal of the read gradient polarity. In-house MATLAB routines were developed for implementation of the signal integration and average-image distortion correction techniques. The mean CP locations of the two MR scans were regarded as the reference CP distribution. Residual distortion was assessed by comparing the corrected CP locations with the corresponding reference positions. Mean absolute distortion in the frequency-encoding direction was reduced from 0.34 mm (original images) to 0.15 mm and 0.14 mm following application of the signal integration and average-image methods, respectively. However, a maximum residual distortion of 0.7 mm was still observed for both techniques. The signal integration method relies on the accuracy of edge detection and requires 3-4 hours of post-imaging computational time. The average-image technique is a more efficient (processing time of the order of seconds) and easier to implement method to improve geometric accuracy in such

  13. General solutions to multiple testing problems. Translation of "Sonnemann, E. (1982). Allgemeine Lösungen multipler Test probleme. EDV in Medizin und Biologie 13(4), 120-128".

    Science.gov (United States)

    Sonnemann, Eckart

    2008-10-01

    The introduction of sequentially rejective multiple test procedures (Einot and Gabriel, 1975; Naik, 1975; Holm, 1977; Holm, 1979) has caused considerable progress in the theory of multiple comparisons. Emphasizing the closure of multiple tests we give a survey of the general theory and its recent results in applications. Some new applications are given including a discussion of the connection with the theory of confidence regions.
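
    Since the survey above centers on sequentially rejective procedures, a minimal sketch of a Holm-type step-down rule may serve as a reference point; it is illustrative only and is not taken from the translated paper.

      def holm_reject(pvalues, alpha=0.05):
          """Return a list of rejection decisions using the Holm step-down procedure."""
          m = len(pvalues)
          order = sorted(range(m), key=lambda i: pvalues[i])   # ascending p-values
          reject = [False] * m
          for rank, idx in enumerate(order):
              if pvalues[idx] <= alpha / (m - rank):           # thresholds alpha/m, alpha/(m-1), ...
                  reject[idx] = True
              else:
                  break                                        # step-down: stop at first failure
          return reject

      print(holm_reject([0.001, 0.04, 0.03, 0.2]))   # -> [True, False, False, False]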

  14. Efficiency correction for disk sources using coaxial High-Purity Ge detectors

    International Nuclear Information System (INIS)

    Chatani, Hiroshi.

    1993-03-01

    Efficiency correction factors for disk sources were determined using closed-ended coaxial high-purity Ge (HPGe) detectors whose relative efficiencies (with respect to a 3″ × 3″ NaI(Tl) detector at 1.3 MeV γ-rays) were 30% and 10%, respectively. Parameters for the correction by the mapping method were obtained systematically, using several monoenergetic (i.e. free of coincidence summing losses) γ-ray sources produced by irradiation in the Kyoto University Reactor (KUR) core. It was found that (1) the Gaussian fitting parameters, calculated from the relative efficiency distributions of the HPGe detectors, vary systematically with γ-ray energy, (2) the efficiency distributions deviate from Gaussian form outside the radii of the HPGe crystals, and (3) the mapping method is of practical use with satisfactory accuracy, as shown by comparison with the disk-source measurements. (author)

  15. Semiempirical Quantum-Chemical Orthogonalization-Corrected Methods: Benchmarks for Ground-State Properties.

    Science.gov (United States)

    Dral, Pavlo O; Wu, Xin; Spörkel, Lasse; Koslowski, Axel; Thiel, Walter

    2016-03-08

    The semiempirical orthogonalization-corrected OMx methods (OM1, OM2, and OM3) go beyond the standard MNDO model by including additional interactions in the electronic structure calculation. When augmented with empirical dispersion corrections, the resulting OMx-Dn approaches offer a fast and robust treatment of noncovalent interactions. Here we evaluate the performance of the OMx and OMx-Dn methods for a variety of ground-state properties using a large and diverse collection of benchmark sets from the literature, with a total of 13035 original and derived reference data. Extensive comparisons are made with the results from established semiempirical methods (MNDO, AM1, PM3, PM6, and PM7) that also use the NDDO (neglect of diatomic differential overlap) integral approximation. Statistical evaluations show that the OMx and OMx-Dn methods outperform the other methods for most of the benchmark sets.

  16. A high precision recipe for correcting images distorted by a tapered fiber optic

    International Nuclear Information System (INIS)

    Islam, M Sirajul; Kitchen, M J; Lewis, R A; Uesugi, K

    2010-01-01

    Images captured with a tapered fiber optic camera show significant spatial distortion, mainly because the spatial orientation of the fiber bundles is not identical at each end of the taper. We present three different techniques for the automatic distortion correction of images acquired with a charge-coupled device (CCD) camera bonded to a tapered optical fiber. In this paper we report (i) a comparison of various methods for distortion correction, (ii) an extensive quantitative analysis of the techniques, and (iii) experiments carried out using a high-resolution fiber optic camera. A pinhole array was used to find control points in the distorted image space. These control points were then associated with their known true coordinates. To apply the geometric correction, three different approaches were investigated: global polynomial fitting, local polynomial fitting and triangulated interpolation. Sub-pixel accuracy was achieved in all approaches, but the experimental results reveal that triangulated interpolation gave the most satisfactory result for the distortion correction. The effect of proper alignment of the mask with the fiber optic taper (FOT) camera was also investigated. It was found that the overall dewarping error is minimal when the mask is almost parallel to the CCD.
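
    As an illustration of the triangulated-interpolation idea that performed best above, the sketch below interpolates a dense sampling map from matched control points and resamples the image; the use of scipy's Delaunay-based linear interpolation and all array names are assumptions, not the authors' implementation.

      import numpy as np
      from scipy.interpolate import griddata
      from scipy.ndimage import map_coordinates

      def dewarp(image, distorted_pts, true_pts):
          """distorted_pts / true_pts: (N, 2) arrays of matching (row, col) positions."""
          rows, cols = np.mgrid[0:image.shape[0], 0:image.shape[1]]
          grid = np.stack([rows.ravel(), cols.ravel()], axis=1)
          # For every output pixel, interpolate where it should be sampled in the raw
          # (distorted) image; method="linear" triangulates the control points.
          src_r = griddata(true_pts, distorted_pts[:, 0], grid, method="linear")
          src_c = griddata(true_pts, distorted_pts[:, 1], grid, method="linear")
          src_r = np.where(np.isnan(src_r), grid[:, 0], src_r)   # identity outside the hull
          src_c = np.where(np.isnan(src_c), grid[:, 1], src_c)
          out = map_coordinates(image, [src_r, src_c], order=1)  # bilinear resampling
          return out.reshape(image.shape)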

  17. A PDP model of the simultaneous perception of multiple objects

    Science.gov (United States)

    Henderson, Cynthia M.; McClelland, James L.

    2011-06-01

    Illusory conjunctions in normal and simultanagnosic subjects are two instances where the visual features of multiple objects are incorrectly 'bound' together. A connectionist model explores how multiple objects could be perceived correctly in normal subjects given sufficient time, but could give rise to illusory conjunctions with damage or time pressure. In this model, perception of two objects benefits from lateral connections between hidden layers modelling aspects of the ventral and dorsal visual pathways. As with simultanagnosia, simulations of dorsal lesions impair multi-object recognition. In contrast, a large ventral lesion has minimal effect on dorsal functioning, akin to dissociations between simple object manipulation (retained in visual form agnosia and semantic dementia) and object discrimination (impaired in these disorders) [Hodges, J.R., Bozeat, S., Lambon Ralph, M.A., Patterson, K., and Spatt, J. (2000), 'The Role of Conceptual Knowledge: Evidence from Semantic Dementia', Brain, 123, 1913-1925; Milner, A.D., and Goodale, M.A. (2006), The Visual Brain in Action (2nd ed.), New York: Oxford]. It is hoped that the functioning of this model might suggest potential processes underlying dorsal and ventral contributions to the correct perception of multiple objects.

  18. Off-resonance artifacts correction with convolution in k-space (ORACLE).

    Science.gov (United States)

    Lin, Wei; Huang, Feng; Simonotto, Enrico; Duensing, George R; Reykowski, Arne

    2012-06-01

    Off-resonance artifacts hinder the wider applicability of echo-planar imaging and non-Cartesian MRI methods such as radial and spiral. In this work, a general and rapid method is proposed for off-resonance artifacts correction based on data convolution in k-space. The acquired k-space is divided into multiple segments based on their acquisition times. Off-resonance-induced artifact within each segment is removed by applying a convolution kernel, which is the Fourier transform of an off-resonance correcting spatial phase modulation term. The field map is determined from the inverse Fourier transform of a basis kernel, which is calibrated from data fitting in k-space. The technique was demonstrated in phantom and in vivo studies for radial, spiral and echo-planar imaging datasets. For radial acquisitions, the proposed method allows the self-calibration of the field map from the imaging data, when an alternating view-angle ordering scheme is used. An additional advantage for off-resonance artifacts correction based on data convolution in k-space is the reusability of convolution kernels to images acquired with the same sequence but different contrasts. Copyright © 2011 Wiley-Liss, Inc.
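
    The record above describes correcting each temporal segment of k-space by a convolution whose kernel is the Fourier transform of an off-resonance phase term. As a purely illustrative sketch (not the ORACLE implementation), the equivalent image-space operation for a single segment could look as follows; the field map, segment timing and array names are assumptions.

      import numpy as np

      def correct_segment(kspace_segment, field_map_hz, t_seg):
          """Demodulate the off-resonance phase exp(i*2*pi*f*t_seg) accrued by one
          temporal k-space segment; multiplying in image space is equivalent to
          convolving the segment in k-space with the transform of this phase term."""
          img = np.fft.ifft2(np.fft.ifftshift(kspace_segment))
          img *= np.exp(-2j * np.pi * field_map_hz * t_seg)   # remove accrued phase
          return np.fft.fftshift(np.fft.fft2(img))            # back to corrected k-space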

  19. Evaluation of relative radiometric correction techniques on Landsat 8 OLI sensor data

    Science.gov (United States)

    Novelli, Antonio; Caradonna, Grazia; Tarantino, Eufemia

    2016-08-01

    The quality of information derived from processed remotely sensed data may depend upon many factors, mostly related to the extent to which data acquisition is influenced by atmospheric conditions, topographic effects, sun angle and so on. The goal of radiometric corrections is to reduce such effects in order to enhance the performance of change detection analysis. There are two approaches to radiometric correction: absolute and relative calibration. Despite the large amount of freely available data products, absolute radiometric calibration techniques may be time consuming and expensive because of the inputs required by absolute calibration models (often these data are not available and can be difficult to obtain). The relative approach to radiometric correction, known as relative radiometric normalization, is preferred for some applications because no in situ ancillary data acquired at the time of the satellite overpasses are required. In this study we evaluated three well-known relative radiometric correction techniques using two Landsat 8 OLI scenes over a subset area of the Apulia Region (southern Italy): IR-MAD (Iteratively Reweighted Multivariate Alteration Detection), HM (Histogram Matching) and DOS (Dark Object Subtraction). IR-MAD results were statistically assessed within a territory with an extremely heterogeneous landscape, and all computations were performed in a Matlab environment. The panchromatic and thermal bands were excluded from the comparisons.
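
    Of the three techniques compared above, dark object subtraction is the simplest to outline; a minimal sketch follows, in which the choice of a low percentile as the "dark" value per band is an assumption for illustration.

      import numpy as np

      def dark_object_subtraction(bands, dark_percentile=0.01):
          """bands: array of shape (n_bands, rows, cols) of DN or TOA reflectance."""
          corrected = np.empty_like(bands, dtype=float)
          for b in range(bands.shape[0]):
              dark = np.percentile(bands[b], dark_percentile)   # estimated haze/offset
              corrected[b] = np.clip(bands[b] - dark, 0, None)  # subtract, keep non-negative
          return corrected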

  20. A comparison of equality in computer algebra and correctness in mathematical pedagogy (II)

    OpenAIRE

    Bradford, Russell; Davenport, James H; Sangwin, C

    2010-01-01

    A perennial problem in computer-aided assessment is that “a right answer”, pedagogically speaking, is not the same thing as “a mathematically correct expression”, as verified by a computer algebra system, or indeed other techniques such as random evaluation. Paper I in this series considered the difference in cases where there was “the right answer”, typically calculus questions. Here we look at some other cases, notably in linear algebra, where there can be many “right answers”, but still th...

  1. Hybrid wavefront sensing and image correction algorithm for imaging through turbulent media

    Science.gov (United States)

    Wu, Chensheng; Robertson Rzasa, John; Ko, Jonathan; Davis, Christopher C.

    2017-09-01

    It is well known that passive image correction of turbulence distortions often involves using geometry-dependent deconvolution algorithms. On the other hand, active imaging techniques using adaptive optic correction should use the distorted wavefront information for guidance. Our work shows that a hybrid hardware-software approach makes it possible to obtain accurate and highly detailed images through turbulent media. The processing algorithm also requires far fewer iteration steps than conventional image processing algorithms. In our proposed approach, a plenoptic sensor is used as a wavefront sensor to guide post-stage image correction on a high-definition zoomable camera. Conversely, we show that given the ground truth of the highly detailed image and the plenoptic imaging result, we can generate an accurate prediction of the blurred image on a traditional zoomable camera. Similarly, the ground truth combined with the blurred image from the zoomable camera would provide the wavefront conditions. In application, our hybrid approach can be used as an effective way to conduct object recognition in a turbulent environment where the target has been significantly distorted or is even unrecognizable.

  2. Mild Traumatic Brain Injury and Dynamic Simulated Shooting Performance

    Science.gov (United States)

    2016-02-01

    differences between tasks. All pairwise comparisons were adjusted with a Sidak correction for multiple comparisons. [figure: TLX scores] ... research at multiple sites. Specific to the question of MTBI-related balance, we recommend that future studies seek, when feasible, to quantify body sway ... higher ratings of perceived workload. In addition, the alternate analyses yielded some preliminary evidence of shooting performance decrements

  3. Expected precision of neutron multiplicity measurements of waste drums

    International Nuclear Information System (INIS)

    Ensslin, N.; Krick, M.S.; Menlove, H.O.

    1995-01-01

    DOE facilities are beginning to apply passive neutron multiplicity counting techniques to the assay of plutonium scrap and residues. There is also considerable interest in applying this new measurement technique to 208-liter waste drums. The additional information available from multiplicity counting could flag the presence of shielding materials or improve assay accuracy by correcting for matrix effects such as (α,n) induced fission or detector efficiency variations. The potential for multiplicity analysis of waste drums, and the importance of better detector design, can be estimated by calculating the expected assay precision using a Figure of Merit code for assay variance. This paper reports results obtained as a function of waste drum content and detector characteristics. We find that multiplicity analysis of waste drums is feasible if a high-efficiency neutron counter is used. However, results are significantly poorer if the multiplicity analysis must be used to solve for detection efficiency

  4. Repeat-aware modeling and correction of short read errors.

    Science.gov (United States)

    Yang, Xiao; Aluru, Srinivas; Dorman, Karin S

    2011-02-15

    High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers with multiple occurrences in the genome. Error detection and correction were mostly applied to genomes with low repeat content and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under the GNU GPL3 license and the Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id=redeem". We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors.
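
    As context for the frequency-threshold idea described above, a minimal k-mer counting and flagging sketch is given below; it does not reproduce the paper's repeat-aware statistical model, and the parameter values are arbitrary assumptions.

      from collections import Counter

      def weak_kmers(reads, k=15, threshold=3):
          """Return the set of k-mers observed fewer than `threshold` times (likely errors)."""
          counts = Counter()
          for read in reads:
              for i in range(len(read) - k + 1):
                  counts[read[i:i + k]] += 1
          return {kmer for kmer, c in counts.items() if c < threshold}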

  5. Sleep habits in middle-aged, non-hospitalized men and women with schizophrenia: a comparison with healthy controls.

    Science.gov (United States)

    Poulin, Julie; Chouinard, Sylvie; Pampoulova, Tania; Lecomte, Yves; Stip, Emmanuel; Godbout, Roger

    2010-10-30

    Patients with schizophrenia may have sleep disorders even when clinically stable under antipsychotic treatments. To better understand this issue, we measured sleep characteristics between 1999 and 2003 in 150 outpatients diagnosed with Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV) schizophrenia or schizoaffective disorder and 80 healthy controls using a sleep habits questionnaire. Comparisons between both groups were performed and multiple comparisons were Bonferroni corrected. Compared to healthy controls, patients with schizophrenia reported significantly increased sleep latency, time in bed, total sleep time and frequency of naps during weekdays and weekends along with normal sleep efficiency, sleep satisfaction, and feeling of restfulness in the morning. In conclusion, sleep-onset insomnia is a major, enduring disorder in middle-aged, non-hospitalized patients with schizophrenia that are otherwise clinically stable under antipsychotic and adjuvant medications. Noteworthy, these patients do not complain of sleep-maintenance insomnia but report increased sleep propensity and normal sleep satisfaction. These results may reflect circadian disturbances in schizophrenia, but objective laboratory investigations are needed to confirm subjective sleep reports. Copyright © 2009 Elsevier Ltd. All rights reserved.

  6. Change in hippocampal theta oscillation associated with multiple lever presses in a bimanual two-lever choice task for robot control in rats.

    Directory of Open Access Journals (Sweden)

    Norifumi Tanaka

    Full Text Available Hippocampal theta oscillations have been implicated in working memory and attentional processes, which might be useful for a brain-machine interface (BMI). To further elucidate the properties of hippocampal theta oscillations that can be used in BMI, we investigated hippocampal theta oscillations during a two-lever choice task. During the task, body-restrained rats were trained with a food reward to move an e-puck robot towards them by pressing the correct lever (the one ipsilateral to the robot) several times with the ipsilateral forelimb. The robot carried food and moved along a semicircular track set in front of the rat. We demonstrated that the power of hippocampal theta oscillations gradually increased during a 6-s preparatory period before the start of multiple lever pressing, irrespective of which lever or forelimb side was used. In addition, there was a significant difference in the theta power after the first choice between correct and incorrect trials. During the correct trials the theta power was highest during the first lever-releasing period, whereas in the incorrect trials the peak occurred during the second, correct lever-pressing period. We also analyzed the hippocampal theta oscillations at the termination of multiple lever pressing during the correct trials. Irrespective of which forelimb side was used, the power of hippocampal theta oscillations gradually decreased with the termination of multiple lever pressing. The frequency of the theta oscillation also showed an increase and a decrease before and after multiple lever pressing, respectively. There was a transient increase in frequency after the first lever press during the incorrect trials, while no such increase was observed during the correct trials. These results suggest that hippocampal theta oscillations reflect some aspects of preparatory and cognitive neural activity during the robot control task, which could be used for BMI.

  7. Exploitation of jet properties for energy scale corrections for the CMS calorimeters

    International Nuclear Information System (INIS)

    Kirschenmann, Henning

    2011-02-01

    Jets form important event signatures in proton-proton collisions at the Large Hadron Collider (LHC) and the precise measurement of their energy is a crucial premise for a manifold of physics studies. Jets, which are reconstructed exclusively from calorimeter information, have been widely used within the CMS collaboration. However, the response of the calorimeters to incident particles depends heavily on their energy. In addition, it has been observed at previous experiments that the charged particle multiplicity and the radial distribution of constituents differ for jets induced by light quarks or by gluons. In conjunction with the non-linearity of the CMS calorimeters, this contributes to a mean energy response deviating from unity for calorimeter jets, depending on the jet-flavour. This thesis describes a jet-energy correction to be applied in addition to the default corrections within the CMS collaboration. This correction aims at decreasing the flavour dependence of the jet-energy response and improving the energy resolution. As many different effects contribute to the observed jet-energy response, a set of observables are introduced and corrections based on these observables are tested with respect to the above aims. A jet-width variable, which is defined from energy measured in the calorimeter, shows the best performance: A correction based on this observable improves the energy resolution by up to 20% at high transverse momenta in the central detector region and decreases the flavour dependence of the jet-energy response by a factor of two. A parametrisation of the correction is both derived from and validated on simulated data. First results from experimental data, to which the correction has been applied, are presented. The proposed jet-width correction shows a promising level of performance. (orig.)

  8. Exploitation of jet properties for energy scale corrections for the CMS calorimeters

    Energy Technology Data Exchange (ETDEWEB)

    Kirschenmann, Henning

    2011-02-15

    Jets form important event signatures in proton-proton collisions at the Large Hadron Collider (LHC) and the precise measurement of their energy is a crucial premise for a manifold of physics studies. Jets, which are reconstructed exclusively from calorimeter information, have been widely used within the CMS collaboration. However, the response of the calorimeters to incident particles depends heavily on their energy. In addition, it has been observed at previous experiments that the charged particle multiplicity and the radial distribution of constituents differ for jets induced by light quarks or by gluons. In conjunction with the non-linearity of the CMS calorimeters, this contributes to a mean energy response deviating from unity for calorimeter jets, depending on the jet-flavour. This thesis describes a jet-energy correction to be applied in addition to the default corrections within the CMS collaboration. This correction aims at decreasing the flavour dependence of the jet-energy response and improving the energy resolution. As many different effects contribute to the observed jet-energy response, a set of observables are introduced and corrections based on these observables are tested with respect to the above aims. A jet-width variable, which is defined from energy measured in the calorimeter, shows the best performance: A correction based on this observable improves the energy resolution by up to 20% at high transverse momenta in the central detector region and decreases the flavour dependence of the jet-energy response by a factor of two. A parametrisation of the correction is both derived from and validated on simulated data. First results from experimental data, to which the correction has been applied, are presented. The proposed jet-width correction shows a promising level of performance. (orig.)

  9. Multiple comparisons in drug efficacy studies: scientific or marketing principles?

    Science.gov (United States)

    Leo, Jonathan

    2004-01-01

    When researchers design an experiment to compare a given medication to another medication, a behavioral therapy, or a placebo, the experiment often involves numerous comparisons. For instance, there may be several different evaluation methods, raters, and time points. Although scientifically justified, such comparisons can be abused in the interests of drug marketing. This article provides two recent examples of such questionable practices. The first involves the case of the arthritis drug celecoxib (Celebrex), where the study lasted 12 months but the authors presented only 6 months of data. The second case involves the NIMH Multimodal Treatment Study (MTA), which evaluated the efficacy of stimulant medication for attention-deficit hyperactivity disorder and in which ratings made by several groups are reported in contradictory fashion. The MTA authors have not clarified the confusion, at least in print, suggesting that the actual findings of the study may have played little role in the authors' reported conclusions.

  10. The Conical Singularity and Quantum Corrections to Entropy of Black Hole

    International Nuclear Information System (INIS)

    Solodukhin, S.N.

    1994-01-01

    It is well known that at a temperature different from the Hawking temperature a conical singularity appears in the Euclidean classical solution of the gravitational equations. The method of regularizing the cone by a regular surface is used to determine the curvature tensors for such metrics. This allows the one-loop matter effective action and the corresponding one-loop quantum corrections to the entropy to be calculated in the framework of the path-integral approach of Gibbons and Hawking. The two-dimensional and four-dimensional cases are considered. The entropy of the Rindler space is shown to be divergent logarithmically in two dimensions and quadratically in four dimensions, in agreement with results obtained earlier. For the eternal 2D black hole we observe a finite, mass-dependent correction to the entropy. The entropy of the 4D Schwarzschild black hole is shown to possess an additional (in comparison to the 4D Rindler space) logarithmically divergent correction which does not vanish in the limit of infinite mass of the black hole. We argue that the infinities of the entropy in four dimensions are renormalized with the renormalization of the gravitational coupling. (author). 35 refs

  11. Multiple Criteria and Multiple Periods Performance Analysis: The Comparison of North African Railways

    Science.gov (United States)

    Sabri, Karim; Colson, Gérard E.; Mbangala, Augustin M.

    2008-10-01

    Multi-period differences in technical and financial performance are analysed by comparing five North African railways over the period 1990-2004. A first approach is based on the Malmquist DEA TFP index for measuring total factor productivity change, decomposed into technical efficiency change and technological change. A multiple criteria analysis is also performed using the PROMETHEE II method and the software ARGOS. These methods provide complementary detailed information: the Malmquist index discriminates between technological and managerial progress, while PROMETHEE captures two often conflicting dimensions of performance, namely the service to the community and the performance of the enterprises.
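
    For reference, the Malmquist total factor productivity index between periods $t$ and $t+1$ is conventionally decomposed (in the output-oriented form of Färe et al.) into an efficiency-change and a technological-change term; the notation below is the textbook form and may differ in detail from the one used by the authors:

    $$ M\left(x^{t+1},y^{t+1},x^{t},y^{t}\right) = \underbrace{\frac{D^{t+1}\left(x^{t+1},y^{t+1}\right)}{D^{t}\left(x^{t},y^{t}\right)}}_{\text{efficiency change}} \times \underbrace{\left[\frac{D^{t}\left(x^{t+1},y^{t+1}\right)}{D^{t+1}\left(x^{t+1},y^{t+1}\right)} \cdot \frac{D^{t}\left(x^{t},y^{t}\right)}{D^{t+1}\left(x^{t},y^{t}\right)}\right]^{1/2}}_{\text{technological change}} $$

    where $D^{s}(x,y)$ denotes the distance function of the input-output bundle $(x,y)$ evaluated against the period-$s$ frontier.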

  12. Unitarity corrections in the pT distribution for the Drell-Yan process

    International Nuclear Information System (INIS)

    Betemps, M.A.; Gay Ducati, M.B.; Machado, M.V.T.

    2001-01-01

    In this contribution we investigate the Drell-Yan transverse momentum distribution considering the color dipole approach, taking into account unitarity aspects in the dipole cross section. The process is analyzed at current energies for pp collisions (√s = 62 GeV) and at LHC energies (√s = 8.8 TeV). The unitarity corrections are implemented through the multiple scattering Glauber-Mueller approach. (author)
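
    Schematically, Glauber-type multiple-scattering unitarization replaces the single-scattering dipole cross section by an eikonalized expression of the generic form

    $$ \sigma_{\mathrm{dip}}(x,r) = 2\int d^{2}b\,\left[1 - e^{-\Omega(x,r,b)/2}\right], $$

    which reduces to the single-scattering result for small opacity $\Omega$ and saturates at the black-disc limit for large $\Omega$; the specific opacity used in the Glauber-Mueller approach of the paper is not reproduced here.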

  13. Comparison and clinical utility evaluation of four multiple allergen simultaneous tests including two newly introduced fully automated analyzers

    Directory of Open Access Journals (Sweden)

    John Hoon Rim

    2016-04-01

    Full Text Available Background: We compared the diagnostic performances of two newly introduced fully automated multiple allergen simultaneous test (MAST) analyzers with two conventional MAST assays. Methods: The serum samples from a total of 53 and 104 patients were tested for food panels and inhalant panels, respectively, in four analyzers including AdvanSure AlloScreen (LG Life Science, Korea), AdvanSure Allostation Smart II (LG Life Science), PROTIA Allergy-Q (ProteomeTech, Korea), and RIDA Allergy Screen (R-Biopharm, Germany). We compared not only the total agreement percentages but also positive propensities among the four analyzers. Results: Evaluation of AdvanSure Allostation Smart II as an upgraded version of AdvanSure AlloScreen revealed good concordance, with total agreement percentages of 93.0% and 92.2% in the food and inhalant panels, respectively. Comparisons of AdvanSure Allostation Smart II or PROTIA Allergy-Q with RIDA Allergy Screen also showed good concordance, with positive propensities of the two new analyzers for common allergens (Dermatophagoides farinae and Dermatophagoides pteronyssinus). Changes of the cut-off level resulted in various total agreement percentage fluctuations among allergens by different analyzers, although the current cut-off level of class 2 appeared to be generally suitable. Conclusions: AdvanSure Allostation Smart II and PROTIA Allergy-Q presented favorable agreement performance with RIDA Allergy Screen, although positive propensities were noticed for common allergens. Keywords: Multiple allergen simultaneous test, Automated analyzer
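
    As a minimal illustration of the agreement statistics quoted above, the sketch below computes a total agreement percentage and a simple positive-propensity measure for paired class results from two analyzers; the 0-6 class scale, the cut-off at class 2, and the data are assumptions of the sketch, not values taken from the study.

      import numpy as np

      # paired allergen class results (0-6) from two analyzers (hypothetical data)
      a = np.array([0, 2, 3, 0, 1, 4, 2, 0, 5, 2])
      b = np.array([0, 3, 3, 0, 0, 4, 1, 2, 5, 2])

      cutoff = 2                                   # class >= 2 counted as positive
      pos_a, pos_b = a >= cutoff, b >= cutoff

      total_agreement = np.mean(pos_a == pos_b) * 100        # % of concordant calls
      discordant = pos_a != pos_b                  # pairs where the calls disagree
      # share of discordant pairs that are positive only in analyzer A
      propensity_a = np.sum(pos_a & ~pos_b) / max(discordant.sum(), 1)

      print(f"total agreement: {total_agreement:.1f}%")
      print(f"discordant pairs positive only in A: {propensity_a:.2f}")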

  14. Neural network error correction for solving coupled ordinary differential equations

    Science.gov (United States)

    Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.

    1992-01-01

    A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to correctly learn the error generated by, for example, Runge-Kutta on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model and the calculations are discussed.
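
    The abstract gives no implementation details, so the following is only a minimal sketch of the idea on an assumed model problem (a one-dimensional harmonic oscillator rather than the MD system of the paper): a small network is trained to predict the local error of a fixed-step Runge-Kutta integrator, and its prediction is added back as a correction during integration.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      def f(y):                                # oscillator: y = (position, velocity)
          return np.array([y[1], -y[0]])

      def rk4_step(y, h):                      # one classical Runge-Kutta step
          k1 = f(y); k2 = f(y + h/2*k1); k3 = f(y + h/2*k2); k4 = f(y + h*k3)
          return y + h/6*(k1 + 2*k2 + 2*k3 + k4)

      def exact_step(y, h):                    # analytic propagator of the oscillator
          c, s = np.cos(h), np.sin(h)
          return np.array([c*y[0] + s*y[1], -s*y[0] + c*y[1]])

      h = 0.5
      rng = np.random.default_rng(0)
      states = rng.uniform(-1.0, 1.0, size=(2000, 2))
      errors = np.array([exact_step(y, h) - rk4_step(y, h) for y in states])

      net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, tol=1e-10)
      net.fit(states, errors)                  # learn the RK4 local error

      y = np.array([1.0, 0.0])
      for _ in range(50):                      # integrate with the learned correction
          y = rk4_step(y, h) + net.predict(y.reshape(1, -1))[0]
      print("corrected state after 50 steps:", y)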

  15. On Thermally Interacting Multiple Boreholes with Variable Heating Strength: Comparison between Analytical and Numerical Approaches

    Directory of Open Access Journals (Sweden)

    Marc A. Rosen

    2012-08-01

    Full Text Available The temperature response in the soil surrounding multiple boreholes is evaluated analytically and numerically. The assumption of constant heat flux along the borehole wall is examined by coupling the problem to the heat transfer problem inside the borehole and presenting a model with variable heat flux along the borehole length. In the analytical approach, a line source of heat with a finite length is used to model the conduction of heat in the soil surrounding the boreholes. In the numerical method, a finite volume method in a three dimensional meshed domain is used. In order to determine the heat flux boundary condition, the analytical quasi-three-dimensional solution to the heat transfer problem of the U-tube configuration inside the borehole is used. This solution takes into account the variation in heating strength along the borehole length due to the temperature variation of the fluid running in the U-tube. Thus, critical depths at which thermal interaction occurs can be determined. Finally, in order to examine the validity of the numerical method, a comparison is made with the results of line source method.
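
    For context, the constant-strength finite line-source solution underlying the analytical approach reads, in its usual textbook form (borehole of length $H$, heat rate $q'$ per unit length, ground conductivity $k$ and diffusivity $\alpha$, with a mirror image above the surface keeping the ground surface at the undisturbed temperature $T_0$); the variable-heating-strength model of the paper generalizes this by letting the strength vary with depth:

    $$ T(r,z,t)-T_{0} = \frac{q'}{4\pi k}\int_{0}^{H}\left[\frac{\operatorname{erfc}\left(\frac{\sqrt{r^{2}+(z-h)^{2}}}{2\sqrt{\alpha t}}\right)}{\sqrt{r^{2}+(z-h)^{2}}} - \frac{\operatorname{erfc}\left(\frac{\sqrt{r^{2}+(z+h)^{2}}}{2\sqrt{\alpha t}}\right)}{\sqrt{r^{2}+(z+h)^{2}}}\right]dh $$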

  16. A two-dimensional matrix correction for off-axis portal dose prediction errors

    International Nuclear Information System (INIS)

    Bailey, Daniel W.; Kumaraswamy, Lalith; Bakhtiari, Mohammad; Podgorsak, Matthew B.

    2013-01-01

    Purpose: This study presents a follow-up to a modified calibration procedure for portal dosimetry published by Bailey et al. [“An effective correction algorithm for off-axis portal dosimetry errors,” Med. Phys. 36, 4089–4094 (2009)]. A commercial portal dose prediction system exhibits disagreement of up to 15% (calibrated units) between measured and predicted images as off-axis distance increases. The previous modified calibration procedure accounts for these off-axis effects in most regions of the detecting surface, but is limited by the simplistic assumption of radial symmetry. Methods: We find that a two-dimensional (2D) matrix correction, applied to each calibrated image, accounts for off-axis prediction errors in all regions of the detecting surface, including those still problematic after the radial correction is performed. The correction matrix is calculated by quantitative comparison of predicted and measured images that span the entire detecting surface. The correction matrix was verified for dose-linearity, and its effectiveness was verified on a number of test fields. The 2D correction was employed to retrospectively examine 22 off-axis, asymmetric electronic-compensation breast fields, five intensity-modulated brain fields (moderate-high modulation) manipulated for far off-axis delivery, and 29 intensity-modulated clinical fields of varying complexity in the central portion of the detecting surface. Results: Employing the matrix correction to the off-axis test fields and clinical fields, predicted vs measured portal dose agreement improves by up to 15%, producing up to 10% better agreement than the radial correction in some areas of the detecting surface. Gamma evaluation analyses (3 mm, 3% global, 10% dose threshold) of predicted vs measured portal dose images demonstrate pass rate improvement of up to 75% with the matrix correction, producing pass rates that are up to 30% higher than those resulting from the radial correction technique alone.
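
    A minimal sketch of the idea (the array names and the averaging over several calibration fields are assumptions; the commercial prediction system's data formats are not reproduced): the correction matrix is the element-wise ratio of measured to predicted portal-dose images spanning the detecting surface, and it multiplies each subsequent calibrated image.

      import numpy as np

      def build_correction_matrix(predicted, measured, eps=1e-6):
          # element-wise ratios over images that span the whole detecting surface;
          # averaging several fields smooths out field-specific structure
          ratios = [m / np.clip(p, eps, None) for p, m in zip(predicted, measured)]
          return np.mean(ratios, axis=0)

      def apply_correction(predicted_image, correction_matrix):
          return predicted_image * correction_matrix

      # usage with synthetic images (5% uniform excess in the measurements)
      pred = [np.ones((384, 512)) for _ in range(5)]
      meas = [1.05 * p for p in pred]
      C = build_correction_matrix(pred, meas)
      corrected = apply_correction(pred[0], C)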

  17. Author Correction

    DEFF Research Database (Denmark)

    Grundle, D S; Löscher, C R; Krahmann, G

    2018-01-01

    A correction to this article has been published and is linked from the HTML and PDF versions of this paper. The error has not been fixed in the paper.

  18. Multi-axial correction system in the treatment of radial club hand.

    Science.gov (United States)

    Bhat, Suneel B; Kamath, Atul F; Sehgal, Kriti; Horn, B David; Hosalkar, Harish S

    2009-12-01

    Radial club hand is a well-recognized congenital malformation characterized by hypoplasia of bone and soft tissue on the radial aspect of the forearm and hand. The modalities of treatment have traditionally varied from stretching casts with soft-tissue procedures to the use of multiple corrective osteotomies. These osteotomies can be stabilized by a variety of methods, including external fixators that allow the possibility of gradual distraction with neohistiogenesis. This current study outlines the usage of one such device (multi-axial correction system [MAC]) in the management of deformity associated with severe radial club hand. Three consecutive cases of unilateral or bilateral severe (Bayne type IV) congenital radial club hand were corrected using MAC fixation in the last 5 years. This is a retrospective review of all three cases. Data parameters included: patient demographics, presentation findings, degree of deformity, amount of correction/lengthening, length of procedure, length of treatment, and associated complications. The surgical technique is described in detail for the benefit of the readership. The three patients with severe congenital radial club hand had a total of four limb involvements that underwent correction using osteotomies and usage of the MAC device for external fixation. All three patients underwent successful correction of deformity with the restoration of alignment, lengthening of the forearm for improvement of function, and stabilization of the wrist (mean duration, mean lengthening, mean time to consolidation). The MAC system was well tolerated in all patients and associated complications were limited. The MAC fixator seems to be a good alternative modality of stabilization and correction for severe congenital radial club hand deformities. Its usage is fairly simple and it provides the ease of application of a mono-lateral fixator with far superior three-dimensional control, like the circular external fixator.

  19. Proximal processes of children with profound multiple disabilities

    OpenAIRE

    Wilder, Jenny

    2008-01-01

    In this thesis four empirical studies dealt with children with profound multiple disabilities and their parents with regard to: (a) how parents perceived interaction with their children; (b) how observed child/parent interaction was linked to the behavior style of the children as perceived by the parents; and (c) how parents of children with profound multiple disabilities perceived child/parent interaction and the behavior style of their children in comparison to parents of children without disabilities ...

  20. High-energy expansion for nuclear multiple scattering

    International Nuclear Information System (INIS)

    Wallace, S.J.

    1975-01-01

    The Watson multiple scattering series is expanded to develop the Glauber approximation plus systematic corrections arising from three sources: (1) deviations from eikonal propagation between scatterings, (2) Fermi motion of struck nucleons, and (3) the kinematic transformation which relates the many-body scattering operators of the Watson series to the physical two-body scattering amplitude. Operators which express effects ignored at the outset to obtain the Glauber approximation are subsequently reintroduced via perturbation expansions. Hence a particular set of approximations is developed which renders the sum of the Watson series to the Glauber form in the center of mass system, and an expansion is carried out to find leading order corrections to that summation. Although their physical origins are quite distinct, the eikonal, Fermi motion, and kinematic corrections produce strikingly similar contributions to the scattering amplitude. It is shown that there is substantial cancellation between their effects and hence the Glauber approximation is more accurate than the individual approximations used in its derivation. It is shown that the leading corrections produce effects of order (2kR_c)⁻¹ relative to the double scattering term in the uncorrected Glauber amplitude, ħk being the momentum and R_c the nuclear charge radius. The leading order corrections are found to be small enough to validate quantitative analyses of experimental data for many intermediate to high energy cases and for scattering angles not limited to the very forward region. In a Gaussian model, the leading corrections to the Glauber amplitude are given as convenient analytic expressions.

  1. Advanced Corrections for InSAR Using GPS and Numerical Weather Models

    Science.gov (United States)

    Cossu, F.; Foster, J. H.; Amelung, F.; Varugu, B. K.; Businger, S.; Cherubini, T.

    2017-12-01

    We present results from an investigation into the application of numerical weather models for generating tropospheric correction fields for Interferometric Synthetic Aperture Radar (InSAR). We apply the technique to data acquired from a UAVSAR campaign as well as from the CosmoSkyMed satellites. The complex spatial and temporal changes in the atmospheric propagation delay of the radar signal remain the single biggest factor limiting InSAR's potential for hazard monitoring and mitigation. A new generation of InSAR systems is being built and launched, and optimizing the science and hazard applications of these systems requires advanced methodologies to mitigate tropospheric noise. We use the Weather Research and Forecasting (WRF) model to generate a 900 m spatial resolution atmospheric model covering the Big Island of Hawaii and an even higher, 300 m resolution grid over the Mauna Loa and Kilauea volcanoes. By comparing a range of approaches, from the simplest, using reanalyses based on typically available meteorological observations, through to the "kitchen-sink" approach of assimilating all relevant data sets into our custom analyses, we examine the impact of the additional data sets on the atmospheric models and their effectiveness in correcting InSAR data. We focus particularly on the assimilation of information from the more than 60 GPS sites on the island. We ingest zenith tropospheric delay estimates from these sites directly into the WRF analyses, and also perform double-difference tomography using the phase residuals from the GPS processing to robustly incorporate heterogeneous information from the GPS data into the atmospheric models. We assess our performance through comparisons of our atmospheric models with external observations not ingested into the model, and through the effectiveness of the derived phase screens in reducing InSAR variance. Comparison of the InSAR data, our atmospheric analyses, and assessments of the active local and mesoscale

  2. Multiplicity counting from fission chamber signals in the current mode

    Energy Technology Data Exchange (ETDEWEB)

    Pázsit, I. [Chalmers University of Technology, Department of Physics, Division of Subatomic and Plasma Physics, SE-412 96 Göteborg (Sweden); Pál, L. [Centre for Energy Research, Hungarian Academy of Sciences, 114, POB 49, H-1525 Budapest (Hungary); Nagy, L. [Chalmers University of Technology, Department of Physics, Division of Subatomic and Plasma Physics, SE-412 96 Göteborg (Sweden); Budapest University of Technology and Economics, Institute of Nuclear Techniques, H-1111 Budapest (Hungary)

    2016-12-11

    In nuclear safeguards, estimation of sample parameters using neutron-based non-destructive assay methods is traditionally based on multiplicity counting with thermal neutron detectors in the pulse mode. These methods in general require multi-channel analysers and various dead time correction methods. This paper proposes and elaborates on an alternative method, which is based on fast neutron measurements with fission chambers in the current mode. A theory of “multiplicity counting” with fission chambers is developed by incorporating Böhnel's concept of superfission [1] into a master equation formalism, developed recently by the present authors for the statistical theory of fission chamber signals [2,3]. Explicit expressions are derived for the first three central auto- and cross moments (cumulants) of the signals of up to three detectors. These constitute the generalisation of the traditional Campbell relationships for the case when the incoming events represent a compound Poisson distribution. Because now the expressions contain the factorial moments of the compound source, they contain the same information as the singles, doubles and triples rates of traditional multiplicity counting. The results show that in addition to the detector efficiency, the detector pulse shape also enters the formulas; hence, the method requires a more involved calibration than the traditional method of multiplicity counting. However, the method has some advantages by not needing dead time corrections, as well as having a simpler and more efficient data processing procedure, in particular for cross-correlations between different detectors, than the traditional multiplicity counting methods.
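
    For sampled signals, the first three central auto- and cross-moments referred to above are ordinary moment estimates; the sketch below computes them for two simulated detector currents (white-noise placeholders rather than a physical fission-chamber model) simply to show which quantities enter a generalized Campbell-type analysis.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000
      common = rng.normal(size=n)                       # shared component -> correlated currents
      i1 = 1.0 + 0.3*common + 0.1*rng.normal(size=n)    # detector currents (arbitrary units)
      i2 = 0.8 + 0.2*common + 0.1*rng.normal(size=n)

      d1, d2 = i1 - i1.mean(), i2 - i2.mean()

      auto = {                                          # first three auto-cumulants per detector
          "mean":        (i1.mean(), i2.mean()),
          "variance":    ((d1**2).mean(), (d2**2).mean()),
          "3rd central": ((d1**3).mean(), (d2**3).mean()),
      }
      cross = {                                         # second- and third-order cross moments
          "cov(1,2)":     (d1 * d2).mean(),
          "bicov(1,1,2)": (d1 * d1 * d2).mean(),
          "bicov(1,2,2)": (d1 * d2 * d2).mean(),
      }
      print(auto); print(cross)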

  3. Correction of inhomogeneous RF field using multiple SPGR signals for high-field spin-echo MRI

    International Nuclear Information System (INIS)

    Ishimori, Yoshiyuki; Monma, Masahiko; Yamada, Kazuhiro; Kimura, Hirohiko; Uematsu, Hidemasa; Fujiwara, Yasuhiro; Yamaguchi, Isao

    2007-01-01

    The purpose of this study was to propose a simple and useful method for correcting nonuniformity of high-field (3 Tesla) T1-weighted spin-echo (SE) images based on a B1 field map estimated from gradient recalled echo (GRE) signals. To estimate the B1 inhomogeneity, spoiled gradient recalled echo (SPGR) images were collected using a fixed repetition time of 70 ms, flip angles of 45 and 90 degrees, and echo times of 4.8 and 10.4 ms. The selection of flip angles was based on the observation that the relative intensity changes in SPGR signals are very similar among different tissues at flip angles larger than the Ernst angle. Accordingly, the spatial irregularity observed on a signal ratio map of the SPGR images acquired with these 2 flip angles was ascribed to inhomogeneity of the B1 field. Dual echo times were used to eliminate T2* effects. The acquired ratio map was scaled to provide an intensity correction map for SE images. Both phantom and volunteer studies were performed using a 3T magnetic resonance scanner to validate the method. In the phantom study, the uniformity of the T1-weighted SE image improved by 23%. Images of human heads also showed practically sufficient improvement in image uniformity. The present method improves the image uniformity of high-field T1-weighted SE images. (author)
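
    A schematic of the correction chain (the exact scaling from the flip-angle ratio map to the intensity correction map is not given in the abstract, so the normalization and smoothing below are assumptions): form the ratio of the two SPGR magnitude images, normalize and smooth it to retain only the slowly varying B1-related trend, and divide the T1-weighted SE image by the resulting map.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def b1_intensity_correction(spgr_fa45, spgr_fa90, se_image, eps=1e-6):
          # ratio map of the two flip-angle acquisitions; above the Ernst angle its
          # spatial structure is dominated by B1 inhomogeneity rather than tissue
          ratio = spgr_fa90 / np.clip(spgr_fa45, eps, None)
          ratio /= np.median(ratio[ratio > 0])                 # remove the global factor
          correction_map = gaussian_filter(ratio, sigma=5.0)   # keep the smooth B1 trend
          return se_image / np.clip(correction_map, eps, None)

      # usage with synthetic images
      shape = (256, 256)
      fa45, fa90 = np.ones(shape), 0.9 * np.ones(shape)
      corrected_se = b1_intensity_correction(fa45, fa90, 100.0 * np.ones(shape))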

  4. Multiple brain abscesses in an infant: a case report | Mathews ...

    African Journals Online (AJOL)

    An ex-preterm baby who was treated successfully for Staphylococcus aureus septicaemia and skin abscess in the neonatal period re-presented at the age of 13 weeks (corrected gestation 41 weeks) with gradual enlargement of the head size. A diagnosis of multiple Staphylococcus aureus brain abscesses was made.

  5. Seismic reflector imaging using internal multiples with Marchenko-type equations

    NARCIS (Netherlands)

    Slob, E.C.; Wapenaar, C.P.A.; Broggini, F.; Snieder, R.

    2014-01-01

    We present an imaging method that creates a map of reflection coefficients in correct one-way time with no contamination from internal multiples using purely a filtering approach. The filter is computed from the measured reflection response and does not require a background model.

  6. Estimates of statistical significance for comparison of individual positions in multiple sequence alignments

    Directory of Open Access Journals (Sweden)

    Sadreyev Ruslan I

    2004-08-01

    Full Text Available Abstract Background Profile-based analysis of multiple sequence alignments (MSA) allows for accurate comparison of protein families. Here, we address the problems of detecting statistically confident dissimilarities between (1) an MSA position and a set of predicted residue frequencies, and (2) two MSA positions. These problems are important for (i) evaluation and optimization of methods predicting residue occurrence at protein positions; (ii) detection of potentially misaligned regions in automatically produced alignments and their further refinement; and (iii) detection of sites that determine functional or structural specificity in two related families. Results For problems (1) and (2), we propose analytical estimates of P-value and apply them to the detection of significant positional dissimilarities in various experimental situations. (a) We compare structure-based predictions of residue propensities at a protein position to the actual residue frequencies in the MSA of homologs. (b) We evaluate our method by the ability to detect erroneous position matches produced by an automatic sequence aligner. (c) We compare MSA positions that correspond to residues aligned by automatic structure aligners. (d) We compare MSA positions that are aligned by high-quality manual superposition of structures. Detected dissimilarities reveal shortcomings of the automatic methods for residue frequency prediction and alignment construction. For the high-quality structural alignments, the dissimilarities suggest sites of potential functional or structural importance. Conclusion The proposed computational method is of significant potential value for the analysis of protein families.
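
    The analytical P-value estimates derived in the paper are not reproduced here; as a rough stand-in, the sketch below scores the dissimilarity between an observed MSA column and a set of predicted residue frequencies with a plain chi-square statistic (the pseudocounts and the 20-letter alphabet are assumptions of the sketch).

      import numpy as np
      from scipy.stats import chi2

      AA = "ACDEFGHIKLMNPQRSTVWY"

      def column_vs_prediction(column, predicted_freqs, pseudocount=0.5):
          # observed residue counts in one MSA column (gaps ignored)
          counts = np.array([column.count(a) for a in AA], dtype=float)
          n = counts.sum()
          observed = counts + pseudocount
          expected = predicted_freqs * n + pseudocount
          stat = np.sum((observed - expected) ** 2 / expected)   # chi-square statistic
          return stat, chi2.sf(stat, df=len(AA) - 1)

      flat = np.full(len(AA), 1.0 / len(AA))       # flat prediction, for illustration
      print(column_vs_prediction("LLLLIIVVLLMLLL", flat))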

  7. Attenuation correction strategies for multi-energy photon emitters using SPECT

    International Nuclear Information System (INIS)

    Pretorius, P.H.; King, M.A.; Pan, T.S.

    1996-01-01

    The aim of this study was to investigate whether the photopeak window projections from different energy photons can be combined into a single window for reconstruction or if it is better not to combine the projections due to differences in the attenuation maps required for each photon energy. The mathematical cardiac torso (MCAT) phantom was modified to simulate the uptake of Ga-67 in the human body. Four spherical hot tumors were placed in locations which challenged attenuation correction. An analytical 3D projector with attenuation and detector response included was used to generate projection sets. Data were reconstructed using filtered backprojection (FBP) reconstruction with Butterworth filtering in conjunction with one iteration of Chang attenuation correction, and with 5 and 10 iterations of ordered-subset maximum-likelihood expectation-maximization reconstruction. To serve as a standard for comparison, the projection sets obtained from the two energies were first reconstructed separately using their own attenuation maps. The emission data obtained from both energies were added and reconstructed using the following attenuation strategies: (1) the 93 keV attenuation map for attenuation correction, (2) the 185 keV attenuation map for attenuation correction, (3) using a weighted mean obtained from combining the 93 keV and 185 keV maps, and (4) an ordered subset approach which combines both energies. The central count ratio (CCR) and total count ratio (TCR) were used to compare the performance of the different strategies. Compared to the standard method, results indicate an over-estimation with strategy 1, an under-estimation with strategy 2, and comparable results with strategies 3 and 4. In all strategies, the CCRs of sphere 4 were under-estimated, although TCRs were comparable to those of the other locations. The weighted mean and ordered subset strategies for attenuation correction were of comparable accuracy to reconstruction of the windows separately.

  8. Comparison of maxillary stability after Le Fort I osteotomy for occlusal cant correction surgery and maxillary advanced surgery.

    Science.gov (United States)

    Ueki, Koichiro; Hashiba, Yukari; Marukawa, Kohei; Yoshida, Kan; Shimizu, Chika; Nakagawa, Kiyomasa; Yamamoto, Etsuhide

    2007-07-01

    To compare postoperative maxillary stability following Le Fort I osteotomy for the correction of occlusal cant as compared with conventional Le Fort I osteotomy for maxillary advancement. The subjects were 40 Japanese adults with jaw deformities. Of these, 20 underwent a Le Fort I osteotomy and intraoral vertical ramus osteotomy (IVRO) to correct asymmetric skeletal morphology and inclined occlusal cant. The other 20 patients underwent a Le Fort I osteotomy and sagittal split ramus osteotomy (SSRO) to advance the maxilla. Lateral and posteroanterior cephalograms were taken postoperatively and assessed statistically. Thereafter, the 2 groups were followed for time-course changes. There was no significant difference between the 2 groups with regard to time-course changes during the immediate postoperative period. This suggests that maxillary stability after Le Fort I osteotomy for cant correction does not differ from that after Le Fort I osteotomy for maxillary advancement.

  9. Corrective Action Investigation Plan for Corrective Action Unit 550: Smoky Contamination Area Nevada National Security Site, Nevada

    International Nuclear Information System (INIS)

    Evenson, Grant

    2012-01-01

    on January 31, 2012, by representatives of the Nevada Division of Environmental Protection and the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 550. The potential contamination sources associated with the study groups are from nuclear testing activities conducted at CAU 550. The DQO process resulted in an assumption that the total effective dose (TED) within the default contamination boundary of CAU 550 exceeds the final action level and requires corrective action. The presence and nature of contamination outside the default contamination boundary at CAU 550 will be evaluated based on information collected from a field investigation. Radiological contamination will be evaluated based on a comparison of the TED at sample locations to the dose-based final action level. The TED will be calculated as the total of separate estimates of internal and external dose. Results from the analysis of soil samples will be used to calculate internal radiological dose. Thermoluminescent dosimeters placed at the center of each sample location will be used to measure external radiological dose. Appendix A provides a detailed discussion of the DQO methodology and the DQOs specific to each group of CASs.

  10. Functional connectivity analysis of fMRI data using parameterized regions-of-interest.

    NARCIS (Netherlands)

    Weeda, W.D.; Waldorp, L.J.; Grasman, R.P.P.P.; van Gaal, S.; Huizenga, H.M.

    2011-01-01

    Connectivity analysis of fMRI data requires correct specification of regions-of-interest (ROIs). Selection of ROIs based on outcomes of a GLM analysis may be hindered by conservativeness of the multiple comparison correction, while selection based on brain anatomy may be biased due to inconsistent

  11. Publisher Correction

    DEFF Research Database (Denmark)

    Turcot, Valérie; Lu, Yingchang; Highland, Heather M

    2018-01-01

    In the published version of this paper, the name of author Emanuele Di Angelantonio was misspelled. This error has now been corrected in the HTML and PDF versions of the article.

  12. Diffusion tensor imaging of the brain. Effects of distortion correction with correspondence to numbers of encoding directions

    International Nuclear Information System (INIS)

    Yoshikawa, Takeharu; Aoki, Shigeki; Abe, Osamu; Hayashi, Naoto; Masutani, Yoshitaka; Masumoto, Tomohiko; Mori, Harushi; Satake, Yoshiroh; Ohtomo, Kuni

    2008-01-01

    The aim of the study was to evaluate the effect of distortion correction, for different numbers of encoding directions, on the quality of diffusion tensor imaging (DTI). Ten volunteers underwent DTI of the head using echo planar imaging with 6, 13, 27, and 55 encoding directions. Fractional anisotropy (FA) maps and apparent diffusion coefficient (ADC) maps were created before and after distortion correction. Regions of interest were placed in the corpus callosum on each map, and standard deviations of FA and ADC were calculated. FA maps were also evaluated visually by experienced neuroradiologists. Dispersion of standard deviations tended to be reduced after distortion correction, with significant differences found in FA maps with 6 encoding directions, ADC maps with 6 directions, and ADC maps with 13 directions (P<0.001, P<0.005, and P<0.05, respectively). Visual image quality was improved after distortion correction (P<0.01 for all of the visual comparisons). Distortion correction is effective in providing DTI of enhanced quality, regardless of the number of encoding directions. (author)

  13. Cryosat-2 and Sentinel-3 tropospheric corrections: their evaluation over rivers and lakes

    Science.gov (United States)

    Fernandes, Joana; Lázaro, Clara; Vieira, Telmo; Restano, Marco; Ambrózio, Américo; Benveniste, Jérôme

    2017-04-01

    In the scope of the Sentinel-3 Hydrologic Altimetry PrototypE (SHAPE) project, errors that presently affect the tropospheric corrections, i.e. the dry and wet tropospheric corrections (DTC and WTC, respectively), given in satellite altimetry products are evaluated over inland water regions. These errors arise because both corrections, which are functions of altitude, are usually computed with respect to an incorrect altitude reference. Several regions of interest (ROI) where CryoSat-2 (CS-2) is operating in SAR/SAR-In modes were selected for this evaluation. In this study, results for the Danube River, the Amazon Basin, Lakes Vanern and Titicaca, and the Caspian Sea, using Level 1B CS-2 data, are shown. The DTC and WTC have been compared to those derived from the ECMWF Operational model and computed at different altitude references: i) the ECMWF orography; ii) the ACE2 (Altimeter Corrected Elevations 2) and GWD-LR (Global Width Database for Large Rivers) global digital elevation models; iii) the mean lake level, derived from Envisat mission data, or the river profile derived in the scope of the SHAPE project by AlongTrack (ATK) using Jason-2 data. Whenever GNSS data are available in the ROI, a GNSS-derived WTC was also generated and used for comparison. Overall, results show that the tropospheric corrections present in CS-2 L1B products are provided at the level of the ECMWF orography, which can depart from the mean lake level or river profile by hundreds of metres. Therefore, the use of the model orography introduces errors in the corrections. To mitigate these errors, both DTC and WTC should be provided at the mean river profile/lake level. For example, for the Caspian Sea, with a mean level of -27 m, the tropospheric corrections provided in CS-2 products were computed at mean sea level (zero level), leading to a systematic error in the corrections. In case a mean lake level is not available, it can easily be determined from satellite altimetry. In the absence of a mean river profile, both mentioned DEM

  14. Multiple scattering in synchrotron studies of disordered materials

    International Nuclear Information System (INIS)

    Poulsen, H.F.; Neuefeind, J.

    1995-01-01

    A formalism for the multiple scattering and self-absorption in synchrotron studies of disordered materials is presented. The formalism goes beyond conventionally used approximations and treats the cross sections, the beam characteristics, the state of polarization, and the electronic correction terms in full. Using hard X-rays, it is shown how the simulated distributions can be directly compared to experimental data. ((orig.))

  15. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    International Nuclear Information System (INIS)

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    2017-01-01

    An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.
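
    For orientation, the quantity being estimated is related to the effective multiplication factor by the standard source-multiplication relation (this is the textbook relation, not necessarily the exact estimator built from the extended point kinetics model of the paper):

    $$ M = \frac{1}{1 - k_{\mathrm{eff}}}, $$

    so that, for example, $k_{\mathrm{eff}} = 0.9$ corresponds to a total multiplication of 10.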

  16. Comparison of multiple linear regression, partial least squares and artificial neural networks for prediction of gas chromatographic relative retention times of trimethylsilylated anabolic androgenic steroids.

    Science.gov (United States)

    Fragkaki, A G; Farmaki, E; Thomaidis, N; Tsantili-Kakoulidou, A; Angelis, Y S; Koupparis, M; Georgakopoulos, C

    2012-09-21

    The comparison among different modelling techniques, such as multiple linear regression, partial least squares and artificial neural networks, has been performed in order to construct and evaluate models for prediction of gas chromatographic relative retention times of trimethylsilylated anabolic androgenic steroids. The performance of the quantitative structure-retention relationship study, using the multiple linear regression and partial least squares techniques, has been previously conducted. In the present study, artificial neural networks models were constructed and used for the prediction of relative retention times of anabolic androgenic steroids, while their efficiency is compared with that of the models derived from the multiple linear regression and partial least squares techniques. For overall ranking of the models, a novel procedure [Trends Anal. Chem. 29 (2010) 101-109] based on sum of ranking differences was applied, which permits the best model to be selected. The suggested models are considered useful for the estimation of relative retention times of designer steroids for which no analytical data are available. Copyright © 2012 Elsevier B.V. All rights reserved.
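
    To make the comparison concrete, here is a minimal cross-validated comparison of the three model families on synthetic descriptor data (the steroid descriptors and relative retention times of the study are not available here, so random data stand in for them).

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      X = rng.normal(size=(120, 15))                       # molecular descriptors (synthetic)
      y = X[:, :3] @ np.array([0.6, -0.4, 0.2]) + 0.05 * rng.normal(size=120)  # RRT stand-in

      models = {
          "MLR": LinearRegression(),
          "PLS": PLSRegression(n_components=3),
          "ANN": make_pipeline(StandardScaler(),
                               MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000)),
      }
      for name, model in models.items():
          scores = cross_val_score(model, X, y, cv=5, scoring="r2")
          print(f"{name}: mean cross-validated R^2 = {scores.mean():.3f}")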

  17. Ideal flood field images for SPECT uniformity correction

    International Nuclear Information System (INIS)

    Oppenheim, B.E.; Appledorn, C.R.

    1984-01-01

    Since as little as 2.5% camera non-uniformity can cause disturbing artifacts in SPECT imaging, the ideal flood field images for uniformity correction would be made with the collimator in place using a perfectly uniform sheet source. While such a source is not realizable, the equivalent images can be generated by mapping the activity distribution of a Co-57 sheet source and correcting subsequent images of the source with this mapping. Mapping is accomplished by analyzing equal-time images of the source made in multiple precisely determined positions. The ratio of counts detected in the same region of two images is a measure of the ratio of the activities of the two portions of the source imaged in that region. The activity distribution in the sheet source is determined from a set of such ratios. The more source positions imaged in a given time, the more accurate the source mapping, according to results of a computer simulation. A 1.9 mCi Co-57 sheet source was shifted in 12 mm increments along the horizontal and vertical axes of the camera face to 9 positions on each axis. The source was imaged for 20 min in each position and 214 million total counts were accumulated. The activity distribution of the source, relative to the center pixel, was determined for a 31 x 31 array. The integral uniformity was found to be 2.8%. The RMS error for such a mapping was determined by computer simulation to be 0.46%. The activity distribution was used to correct a high count flood field image for non-uniformities attributable to the Co-57 source. Such a corrected image represents camera plus collimator response to an almost perfectly uniform sheet source.

  18. Bleed-through correction for rendering and correlation analysis in multi-colour localization microscopy

    International Nuclear Information System (INIS)

    Kim, Dahan; Curthoys, Nikki M; Parent, Matthew T; Hess, Samuel T

    2013-01-01

    Multi-colour localization microscopy has enabled sub-diffraction studies of colocalization between multiple biological species and quantification of their correlation at length scales previously inaccessible with conventional fluorescence microscopy. However, bleed-through, or misidentification of probe species, creates false colocalization and artificially increases certain types of correlation between two imaged species, affecting the reliability of information provided by colocalization and quantified correlation. Despite the potential risk of these artefacts of bleed-through, neither the effect of bleed-through on correlation nor methods for its correction in correlation analyses have been systematically studied at typical rates of bleed-through reported to affect multi-colour imaging. Here, we present a reliable method of bleed-through correction applicable to image rendering and correlation analysis of multi-colour localization microscopy. Application of our bleed-through correction shows that our method accurately corrects the artificial increase in both types of correlation studied (Pearson coefficient and pair correlation), at all rates of bleed-through tested, in all types of correlation examined. In particular, anti-correlation could not be quantified without our bleed-through correction, even at rates of bleed-through as low as 2%. While it is demonstrated with dichroic-based multi-colour FPALM here, our presented method of bleed-through correction can be applied to all types of localization microscopy (PALM, STORM, dSTORM, GSDIM, etc), including both simultaneous and sequential multi-colour modalities, provided the rate of bleed-through can be reliably determined. (special issue article)
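
    The probabilistic correction applied to localization data in the paper is not reproduced here; as a simpler stand-in, the sketch below shows the linear-unmixing view of bleed-through for two rendered channel images, assuming the misidentification rates (the fraction of species A localizations detected in channel B, and vice versa) have been measured beforehand.

      import numpy as np

      def correct_bleedthrough(img_a, img_b, b_ab, b_ba):
          # per pixel: observed = M @ true, with M built from the bleed-through rates
          M = np.array([[1.0 - b_ab, b_ba],
                        [b_ab,       1.0 - b_ba]])
          obs = np.stack([img_a.ravel(), img_b.ravel()])      # 2 x Npixels
          true = np.linalg.inv(M) @ obs
          return (np.clip(true[0], 0, None).reshape(img_a.shape),
                  np.clip(true[1], 0, None).reshape(img_b.shape))

      # usage: 2% of species A localizations are misassigned to channel B
      rng = np.random.default_rng(0)
      a_obs = rng.poisson(5.0, size=(64, 64)).astype(float)
      b_obs = 0.02 * a_obs + rng.poisson(1.0, size=(64, 64))
      a_corr, b_corr = correct_bleedthrough(a_obs, b_obs, b_ab=0.02, b_ba=0.0)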

  19. Comparison of Cerebral Glucose Metabolism between Possible and Probable Multiple System Atrophy

    Directory of Open Access Journals (Sweden)

    Kyum-Yil Kwon

    2009-05-01

    Full Text Available Background: To investigate the relationship between presenting clinical manifestations and imaging features of multisystem neuronal dysfunction in MSA patients, using 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET). Methods: We studied 50 consecutive MSA patients with characteristic brain MRI findings of MSA, including 34 patients with early MSA-parkinsonian (MSA-P) and 16 with early MSA-cerebellar (MSA-C). The cerebral glucose metabolism of all MSA patients was evaluated in comparison with 25 age-matched controls. 18F-FDG PET results were assessed by Statistical Parametric Mapping (SPM) analysis and the regions of interest (ROI) method. Results: The mean time from disease onset to 18F-FDG PET was 25.9±13.0 months in the 34 MSA-P patients and 20.1±11.1 months in the 16 MSA-C patients. Glucose metabolism of the putamen showed a greater decrease in possible MSA-P than in probable MSA-P (p=0.031). Although the Unified Multiple System Atrophy Rating Scale (UMSARS) score did not differ between possible MSA-P and probable MSA-P, the subscores of rigidity (p=0.04) and bradykinesia (p=0.008) were significantly higher in possible MSA-P than in probable MSA-P. Possible MSA-C showed a greater decrease in glucose metabolism of the cerebellum than probable MSA-C (p=0.016). Conclusions: Our results may suggest that the early neuropathological pattern of possible MSA, with a predilection for the striatonigral or olivopontocerebellar system, differs from that of probable MSA, which has prominent involvement of the autonomic nervous system in addition to the striatonigral or olivopontocerebellar system.

  20. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    Science.gov (United States)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists either of one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotical error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
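
    A minimal sketch of the prediction-correction idea for a toy time-varying quadratic $f(x;t) = \tfrac{1}{2}(x - a(t))^2$ (an illustration of the general scheme, not the full GTT/NTT algorithms of the paper):

      import numpy as np

      a  = lambda t: np.sin(t)            # time-varying optimizer x*(t) = a(t)
      da = lambda t: np.cos(t)            # its known rate of change

      def grad(x, t):                     # gradient of f(x; t) = 0.5 * (x - a(t))**2
          return x - a(t)

      h, gamma, n_corr = 0.1, 0.5, 3      # sampling interval, step size, correction steps
      x, t = 0.0, 0.0
      for _ in range(200):
          # prediction: follow the drift of the optimality condition
          # (for this f, the Hessian is 1 and d/dt grad = -a'(t))
          x = x - h * (-da(t))
          t += h
          # correction: a few gradient steps on the newly sampled objective
          for _ in range(n_corr):
              x = x - gamma * grad(x, t)
      print("tracking error at final time:", abs(x - a(t)))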

  1. Multiple Embedded Processors for Fault-Tolerant Computing

    Science.gov (United States)

    Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy

    2005-01-01

    A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs that are capable of having multiple embedded processors. A working prototype (see figure) consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.

  2. Impact of the neutron detector choice on Bell and Glasstone spatial correction factor for subcriticality measurement

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, Alberto, E-mail: alby@anl.gov [Argonne National Laboratory, 9700S. Cass Avenue, Argonne, IL 60439 (United States); Gohar, Y.; Cao, Y.; Zhong, Z. [Argonne National Laboratory, 9700S. Cass Avenue, Argonne, IL 60439 (United States); Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C. [Joint Institute for Power and Nuclear Research-Sosny, National Academy of Sciences of Belarus, 99 acad. Krasin str., Minsk 220109 (Belarus)

    2012-03-11

    In subcritical assemblies, the Bell and Glasstone spatial correction factor is used to correct the measured reactivity from different detector positions. In addition to the measuring position, several other parameters affect the correction factor: the detector material, the detector size, and the energy-angle distribution of source neutrons. The effective multiplication factor calculated by computer codes in criticality mode slightly differs from the average value obtained from the measurements in the different experimental channels of the subcritical assembly, which are corrected by the Bell and Glasstone spatial correction factor. Generally, this difference is due to (1) neutron counting errors; (2) geometrical imperfections, which are not simulated in the calculational model; and (3) quantities and distributions of material impurities, which are missing from the material definitions. This work examines these issues and it focuses on the detector choice and the calculation methodologies. The work investigated the YALINA Booster subcritical assembly of Belarus, which has been operated with three different fuel enrichments in the fast zone: either high (90%) and medium (36%), medium (36%), or low (21%) enriched uranium fuel.

  3. Impact of the neutron detector choice on Bell and Glasstone spatial correction factor for subcriticality measurement

    International Nuclear Information System (INIS)

    Talamo, Alberto; Gohar, Y.; Cao, Y.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.

    2012-01-01

    In subcritical assemblies, the Bell and Glasstone spatial correction factor is used to correct the measured reactivity from different detector positions. In addition to the measuring position, several other parameters affect the correction factor: the detector material, the detector size, and the energy-angle distribution of source neutrons. The effective multiplication factor calculated by computer codes in criticality mode slightly differs from the average value obtained from the measurements in the different experimental channels of the subcritical assembly, which are corrected by the Bell and Glasstone spatial correction factor. Generally, this difference is due to (1) neutron counting errors; (2) geometrical imperfections, which are not simulated in the calculational model; and (3) quantities and distributions of material impurities, which are missing from the material definitions. This work examines these issues and it focuses on the detector choice and the calculation methodologies. The work investigated the YALINA Booster subcritical assembly of Belarus, which has been operated with three different fuel enrichments in the fast zone: either high (90%) and medium (36%), medium (36%), or low (21%) enriched uranium fuel.

  4. Exploring viewing behavior data from whole slide images to predict correctness of students' answers during practical exams in oral pathology.

    Science.gov (United States)

    Walkowski, Slawomir; Lundin, Mikael; Szymas, Janusz; Lundin, Johan

    2015-01-01

    The way of viewing whole slide images (WSI) can be tracked and analyzed. In particular, it can be useful to learn how medical students view WSIs during exams and how their viewing behavior is correlated with correctness of the answers they give. We used a software-based view path tracking method that enabled gathering data about viewing behavior of multiple simultaneous WSI users. This approach was implemented and applied during two practical exams in oral pathology in 2012 (88 students) and 2013 (91 students), which were based on questions with attached WSIs. Gathered data were visualized and analyzed in multiple ways. As a part of extended analysis, we tried to use machine learning approaches to predict correctness of students' answers based on how they viewed WSIs. We compared the results of analyses for years 2012 and 2013 - done for a single question, for student groups, and for a set of questions. The overall patterns were generally consistent across these 3 years. Moreover, viewing behavior data appeared to have certain potential for predicting answers' correctness and some outcomes of machine learning approaches were in the right direction. However, general prediction results were not satisfactory in terms of precision and recall. Our work confirmed that the view path tracking method is useful for discovering viewing behavior of students analyzing WSIs. It provided multiple useful insights in this area, and general results of our analyses were consistent across two exams. On the other hand, predicting answers' correctness appeared to be a difficult task - students' answers seem to be often unpredictable.

  5. Exploring viewing behavior data from whole slide images to predict correctness of students′ answers during practical exams in oral pathology

    Directory of Open Access Journals (Sweden)

    Slawomir Walkowski

    2015-01-01

    Full Text Available The way of viewing whole slide images (WSI) can be tracked and analyzed. In particular, it can be useful to learn how medical students view WSIs during exams and how their viewing behavior is correlated with correctness of the answers they give. We used a software-based view path tracking method that enabled gathering data about viewing behavior of multiple simultaneous WSI users. This approach was implemented and applied during two practical exams in oral pathology in 2012 (88 students) and 2013 (91 students), which were based on questions with attached WSIs. Gathered data were visualized and analyzed in multiple ways. As a part of extended analysis, we tried to use machine learning approaches to predict correctness of students' answers based on how they viewed WSIs. We compared the results of analyses for years 2012 and 2013 - done for a single question, for student groups, and for a set of questions. The overall patterns were generally consistent across these 3 years. Moreover, viewing behavior data appeared to have certain potential for predicting answers' correctness and some outcomes of machine learning approaches were in the right direction. However, general prediction results were not satisfactory in terms of precision and recall. Our work confirmed that the view path tracking method is useful for discovering viewing behavior of students analyzing WSIs. It provided multiple useful insights in this area, and general results of our analyses were consistent across two exams. On the other hand, predicting answers' correctness appeared to be a difficult task - students' answers seem to be often unpredictable.

  6. Validation and empirical correction of MODIS AOT and AE over ocean

    Directory of Open Access Journals (Sweden)

    N. A. J. Schutgens

    2013-09-01

    Full Text Available We present a validation study of Collection 5 MODIS level 2 Aqua and Terra AOT (aerosol optical thickness) and AE (Ångström exponent) over ocean by comparison to coastal and island AERONET (AErosol RObotic NETwork) sites for the years 2003–2009. We show that MODIS (MODerate-resolution Imaging Spectroradiometer) AOT exhibits significant biases due to wind speed and cloudiness of the observed scene, while MODIS AE, although overall unbiased, exhibits less spatial contrast on global scales than the AERONET observations. The same behaviour can be seen when MODIS AOT is compared against Maritime Aerosol Network (MAN) data, suggesting that the spatial coverage of our datasets does not preclude global conclusions. Thus, we develop empirical correction formulae for MODIS AOT and AE that significantly improve agreement of MODIS and AERONET observations. We show these correction formulae to be robust. Finally, we study random errors in the corrected MODIS AOT and AE and show that they mainly depend on AOT itself, although small contributions are present due to wind speed and cloud fraction in AOT random errors and due to AE and cloud fraction in AE random errors. Our analysis yields significantly higher random AOT errors than the official MODIS error estimate (0.03 + 0.05τ), while random AE errors are smaller than might be expected. This new dataset of bias-corrected MODIS AOT and AE over ocean is intended for aerosol model validation and assimilation studies, but also has consequences as a stand-alone observational product. For instance, the corrected dataset suggests that much less fine mode aerosol is transported across the Pacific and Atlantic oceans.

  7. Corrective Action Investigation Plan for Corrective Action Unit 570: Area 9 Yucca Flat Atmospheric Test Sites Nevada National Security Site, Nevada, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Patrick Matthews

    2012-08-01

    CAU 570 comprises the following six corrective action sites (CASs): • 02-23-07, Atmospheric Test Site - Tesla • 09-23-10, Atmospheric Test Site T-9 • 09-23-11, Atmospheric Test Site S-9G • 09-23-14, Atmospheric Test Site - Rushmore • 09-23-15, Eagle Contamination Area • 09-99-01, Atmospheric Test Site B-9A These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives (CAAs). Additional information will be obtained by conducting a corrective action investigation before evaluating CAAs and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable CAAs that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on April 30, 2012, by representatives of the Nevada Division of Environmental Protection and the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 570. The site investigation process will also be conducted in accordance with the Soils Activity Quality Assurance Plan, which establishes requirements, technical planning, and general quality practices to be applied to this activity. The presence and nature of contamination at CAU 570 will be evaluated based on information collected from a field investigation. Radiological contamination will be evaluated based on a comparison of the total effective dose at sample locations to the dose-based final action level. The total effective dose will be calculated as the total of separate estimates of internal and external dose. Results from the analysis of soil samples will be used to calculate internal radiological

  8. Theoretical investigation of the energy spectra of the oxygen isoelectronic sequences taking into account relativistic corrections

    International Nuclear Information System (INIS)

    Bogdanovich, P.O.; Shadzhyuvene, S.D.; Boruta, I.I.; Rudzikas, Z.B.

    1976-01-01

    A method for calculating the energy spectra of atoms and ions with complex electron configurations is developed which takes into account relativistic corrections of the order of the square of the fine-structure constant. The corrections included arise from the dependence of the electron mass on velocity, orbit-orbit interaction, contact interaction, and spin-orbit interaction. The method is implemented as universal algorithms and programs written in Fortran IV for the BESM-6 computer. Examples are given for the ground 1s^2 2s^2 2p^6 configuration and the two excited 1s^2 2s^2 2p^3 3s and 1s^2 2s 2p^5 configurations of the oxygen isoelectronic sequence, both with and without the relativistic corrections, for nuclear charges from Z=8 to Z=80. The contribution of the relativistic corrections increases with Z, as does their effect on the distance between the centres of gravity of the ground and excited configurations. The results are compared with experimental data.
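
    For orientation, the corrections named above correspond to the standard Breit-Pauli operators of order α² (the square of the fine-structure constant); the forms below are textbook expressions in atomic units, shown for reference and not quoted from the paper:

      % Mass-velocity, one-electron spin-orbit and contact (Darwin) terms, atomic units
      H_{\mathrm{mv}} = -\frac{\alpha^{2}}{8}\sum_{i} p_{i}^{4}, \qquad
      H_{\mathrm{so}} = \frac{\alpha^{2}}{2}\sum_{i}\frac{Z}{r_{i}^{3}}\,\boldsymbol{\ell}_{i}\cdot\mathbf{s}_{i}, \qquad
      H_{\mathrm{D}} = \frac{\pi\alpha^{2}Z}{2}\sum_{i}\delta(\mathbf{r}_{i}).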

  9. Corrective Action Decision Document for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    Boehlecke, Robert

    2004-01-01

    The six bunkers included in CAU 204 were primarily used to monitor atmospheric testing or store munitions. The 'Corrective Action Investigation Plan (CAIP) for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada' (NNSA/NV, 2002a) provides information relating to the history, planning, and scope of the investigation; therefore, it will not be repeated in this CADD. This CADD identifies potential corrective action alternatives and provides a rationale for the selection of a recommended corrective action alternative for each CAS within CAU 204. The evaluation of corrective action alternatives is based on process knowledge and the results of investigative activities conducted in accordance with the CAIP (NNSA/NV, 2002a) that was approved prior to the start of the Corrective Action Investigation (CAI). Record of Technical Change (ROTC) No. 1 to the CAIP (approval pending) documents changes to the preliminary action levels (PALs) agreed to by the Nevada Division of Environmental Protection (NDEP) and DOE, National Nuclear Security Administration Nevada Site Office (NNSA/NSO). This ROTC specifically discusses the radiological PALs and their application to the findings of the CAU 204 corrective action investigation. The scope of this CADD consists of the following: (1) Develop corrective action objectives; (2) Identify corrective action alternative screening criteria; (3) Develop corrective action alternatives; (4) Perform detailed and comparative evaluations of corrective action alternatives in relation to corrective action objectives and screening criteria; and (5) Recommend and justify a preferred corrective action alternative for each CAS within CAU 204

  10. Distortion Correction in Fetal EPI Using Non-Rigid Registration With a Laplacian Constraint.

    Science.gov (United States)

    Kuklisova-Murgasova, Maria; Lockwood Estrin, Georgia; Nunes, Rita G; Malik, Shaihan J; Rutherford, Mary A; Rueckert, Daniel; Hajnal, Joseph V

    2018-01-01

    Geometric distortion induced by the main B0 field disrupts the consistency of fetal echo planar imaging (EPI) data, on which diffusion and functional magnetic resonance imaging is based. In this paper, we present a novel data-driven method for simultaneous motion and distortion correction of fetal EPI. A motion-corrected and reconstructed T2 weighted single shot fast spin echo (ssFSE) volume is used as a model of undistorted fetal brain anatomy. Our algorithm interleaves two registration steps: estimation of fetal motion parameters by aligning EPI slices to the model; and deformable registration of EPI slices to slices simulated from the undistorted model to estimate the distortion field. The deformable registration is regularized by a physically inspired Laplacian constraint, to model distortion induced by a source-free background B0 field. Our experiments show that distortion correction significantly improves consistency of reconstructed EPI volumes with ssFSE volumes. In addition, the estimated distortion fields are consistent with fields calculated from acquired field maps, and the Laplacian constraint is essential for estimation of plausible distortion fields. The EPI volumes reconstructed from different scans of the same subject were more consistent when the proposed method was used in comparison with EPI volumes reconstructed from data distortion corrected using a separately acquired B0 field map.
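
    The essential idea of the Laplacian constraint can be written as a penalty added to the slice-to-simulation similarity term. The sketch below is a schematic of such a regularized cost for a single slice (arrays assumed already warped and resampled), not the authors' implementation; the weight lam is an assumed free parameter.

      import numpy as np
      from scipy.ndimage import laplace

      def laplacian_penalty(displacement):
          """Penalize non-harmonic distortion fields, i.e. favour fields that could
          arise from a source-free background B0 field."""
          return np.sum(laplace(displacement) ** 2)

      def registration_cost(warped_epi, simulated_slice, displacement, lam=0.1):
          """Sum-of-squared-differences similarity plus the Laplacian constraint."""
          return np.sum((warped_epi - simulated_slice) ** 2) + lam * laplacian_penalty(displacement)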

  11. Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence

    Energy Technology Data Exchange (ETDEWEB)

    Pastawski, Fernando; Yoshida, Beni [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics, California Institute of Technology, 1200 E. California Blvd., Pasadena CA 91125 (United States); Harlow, Daniel [Princeton Center for Theoretical Science, Princeton University, 400 Jadwin Hall, Princeton NJ 08540 (United States); Preskill, John [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics, California Institute of Technology, 1200 E. California Blvd., Pasadena CA 91125 (United States)

    2015-06-23

    We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be identified as logical and physical degrees of freedom respectively. These models capture key features of entanglement in the AdS/CFT correspondence; in particular, the Ryu-Takayanagi formula and the negativity of tripartite information are obeyed exactly in many cases. That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of boundary operators from bulk operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed in http://dx.doi.org/10.1007/JHEP04(2015)163.
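
    The "maximal entanglement along any bipartition" property of the building-block tensor can be checked numerically for small examples. The sketch below is an illustrative test on a state vector of n qudits (length d**n), verifying that every size-floor(n/2) subsystem is maximally mixed; it is not the authors' construction, just a check of the defining property.

      import itertools
      import numpy as np

      def is_perfect_tensor(state, n, d=2, tol=1e-9):
          """Return True if every bipartition of n qudits into floor(n/2) legs versus
          the rest is maximally entangled (reduced state maximally mixed)."""
          state = np.asarray(state, dtype=complex)
          state = state / np.linalg.norm(state)
          k = n // 2
          for subset in itertools.combinations(range(n), k):
              perm = list(subset) + [i for i in range(n) if i not in subset]
              psi = state.reshape([d] * n).transpose(perm).reshape(d**k, d**(n - k))
              s = np.linalg.svd(psi, compute_uv=False)
              if not np.allclose(s**2, 1.0 / d**k, atol=tol):
                  return False
          return True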

  12. Application of the modified neutron source multiplication method for a measurement of sub-criticality in AGN-201K reactor

    International Nuclear Information System (INIS)

    Myung-Hyun Kim

    2010-01-01

    Measurement of sub-criticality is a challenging but necessary task in the nuclear industry, both for nuclear criticality safety and for physics tests in nuclear power plants. A relatively new method, the Modified Neutron Source Multiplication Method (MNSM), was proposed in Japan. It improves on the traditional Neutron Source Multiplication (NSM) method by applying three additional correction factors. In this study, MNSM was tested in the calculation of rod worth using AGN-201K, an educational reactor at Kyung Hee University. A revised nuclear data library and the neutron transport code system TRANSX-PARTISN were used to calculate the correction factors for various control rod positions and source locations. Experiments were designed to accentuate the errors that the locations of the source and detectors introduce into NSM. MNSM can in principle correct these effects, but the present results show only a small correction effect. (author)
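
    In the conventional NSM picture the detector count rate in a subcritical state is inversely proportional to the negative reactivity, so an unknown state is inferred from a reference state of known reactivity; MNSM then multiplies this estimate by correction factors obtained from transport calculations. The sketch below is schematic, and the three factor names are generic placeholders rather than the paper's notation.

      def nsm_reactivity(rho_ref, count_ref, count, f1=1.0, f2=1.0, f3=1.0):
          """Conventional NSM estimate rho = rho_ref * C_ref / C, scaled by three
          generic MNSM-style correction factors (placeholders f1, f2, f3)."""
          return rho_ref * (count_ref / count) * f1 * f2 * f3

      # Example: reference state at -1.5 $ with 1200 cps, measured state at 400 cps.
      print(nsm_reactivity(-1.5, 1200.0, 400.0, f1=1.02, f2=0.98, f3=1.01))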

  13. DEFLATE Compression Algorithm Corrects for Overestimation of Phylogenetic Diversity by Grantham Approach to Single-Nucleotide Polymorphism Classification

    Directory of Open Access Journals (Sweden)

    Arran Schlosberg

    2014-05-01

    Full Text Available Improvements in speed and cost of genome sequencing are resulting in increasing numbers of novel non-synonymous single nucleotide polymorphisms (nsSNPs) in genes known to be associated with disease. The large number of nsSNPs makes laboratory-based classification infeasible, and familial co-segregation with disease is not always possible. In-silico methods for classification or triage are thus utilised. A popular tool based on multiple-species sequence alignments (MSAs) and work by Grantham, Align-GVGD, has been shown to underestimate deleterious effects, particularly as sequence numbers increase. We utilised the DEFLATE compression algorithm to account for expected variation across a number of species. With the adjusted Grantham measure we derived a means of quantitatively clustering known neutral and deleterious nsSNPs from the same gene; this was then used to assign novel variants to the most appropriate cluster as a means of binary classification. Scaling of clusters allows for inter-gene comparison of variants through a single pathogenicity score. The approach improves upon the classification accuracy of Align-GVGD while correcting for sensitivity to large MSAs. Open-source code and a web server are made available at https://github.com/aschlosberg/CompressGV.
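
    One way to see how DEFLATE can quantify "expected variation across a number of species" is to compress an alignment column: a conserved column compresses to almost nothing, a variable one does not. The snippet below is an illustration of that idea using Python's zlib, not the scoring actually used by CompressGV.

      import zlib

      def column_complexity(column_residues):
          """Compressed size (bytes) of one MSA column as a crude variability proxy."""
          data = "".join(column_residues).encode("ascii")
          return len(zlib.compress(data, 9))

      conserved = ["A"] * 40
      variable = list("ACDEFGHIKLMNPQRSTVWY") * 2
      print(column_complexity(conserved), column_complexity(variable))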

  14. RCRA corrective action ampersand CERCLA remedial action reference guide

    International Nuclear Information System (INIS)

    1994-07-01

    This reference guide provides a side-by-side comparison of RCRA corrective action and CERCLA Remedial Action, focusing on the statutory and regulatory requirements under each program, criteria and other factors that govern a site's progress, and the ways in which authorities or requirements under each program overlap and/or differ. Topics include the following: Intent of regulation; administration; types of sites and/or facilities; definition of site and/or facility; constituents of concern; exclusions; provisions for short-term remedies; triggers for initial site investigation; short-term response actions; site investigations; remedial investigations; remedial alternatives; cleanup criteria; final remedy; implementing remedy; on-site waste management; completion of remedial process

  15. Social attribution test--multiple choice (SAT-MC) in schizophrenia: comparison with community sample and relationship to neurocognitive, social cognitive and symptom measures.

    Science.gov (United States)

    Bell, Morris D; Fiszdon, Joanna M; Greig, Tamasine C; Wexler, Bruce E

    2010-09-01

    This is the first report on the use of the Social Attribution Task - Multiple Choice (SAT-MC) to assess social cognitive impairments in schizophrenia. The SAT-MC was originally developed for autism research, and consists of a 64-second animation showing geometric figures enacting a social drama, with 19 multiple choice questions about the interactions. Responses from 85 community-dwelling participants and 66 participants with SCID confirmed schizophrenia or schizoaffective disorders (Scz) revealed highly significant group differences. When the two samples were combined, SAT-MC scores were significantly correlated with other social cognitive measures, including measures of affect recognition, theory of mind, self-report of egocentricity and the Social Cognition Index from the MATRICS battery. Using a cut-off score, 53% of Scz were significantly impaired on SAT-MC compared with 9% of the community sample. Most Scz participants with impairment on SAT-MC also had impairment on affect recognition. Significant correlations were also found with neurocognitive measures but with less dependence on verbal processes than other social cognitive measures. Logistic regression using SAT-MC scores correctly classified 75% of both samples. Results suggest that this measure may have promise, but alternative versions will be needed before it can be used in pre-post or longitudinal designs.
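
    As a purely illustrative sketch of the kind of binary classification reported (group membership predicted from SAT-MC scores by logistic regression), the example below uses synthetic scores; the sample means and spreads are invented and are not the study data.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      # Synthetic SAT-MC totals for 85 community and 66 Scz participants (invented).
      scores = np.concatenate([rng.normal(14, 3, 85), rng.normal(9, 3, 66)]).reshape(-1, 1)
      group = np.array([0] * 85 + [1] * 66)

      clf = LogisticRegression().fit(scores, group)
      print("in-sample accuracy:", clf.score(scores, group))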

  16. Correction of sun glint effect on MIVIS data of the Sicily campaign in July 2000

    Directory of Open Access Journals (Sweden)

    E. Zappitelli

    2006-06-01

    Full Text Available To assess suspended and dissolved matter in water in the visible and near-infrared spectral regions, it is necessary to estimate the water-leaving radiance with adequate accuracy. Radiance measured by a remote sensor therefore has to be corrected for atmospheric and sea-surface effects, namely the path radiance and the sun and sky glitter radiance contributions. This paper describes the application of a sun glint correction scheme to airborne hyperspectral MIVIS measurements acquired over the area of the Straits of Messina during the campaign in July 2000. In the Messina case study, the data have been corrected for atmospheric effects and for the sun-glitter contribution, evaluated following the method proposed by Cox and Munk (1954, 1956). Glitter-contaminated and glitter-free data were compared using the radiance profiles of selected scan lines and the spectra of pixels on the same scan line located inside and outside the sun glitter area. The results show that the corrected spectra have the same profile as the contaminated ones; however, at this stage the glint-free data have not yet been used in water-constituent retrieval, so the reliability of the correction cannot be fully evaluated.
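
    The Cox and Munk statistics referenced above relate the variance of sea-surface slopes to wind speed, which is what makes a model-based glint estimate possible. The sketch below uses the isotropic slope distribution and a fixed Fresnel coefficient and simplifies the geometry (no azimuthal anisotropy or shadowing), so it is an illustration of the approach rather than the processing applied to the MIVIS data.

      import numpy as np

      def cox_munk_sigma2(wind_speed):
          """Isotropic mean-square sea-surface slope (Cox & Munk), wind speed in m/s."""
          return 0.003 + 0.00512 * wind_speed

      def glint_reflectance(theta_s, theta_v, dphi, wind_speed, fresnel=0.022):
          """Simplified sun-glint reflectance for sun/view zenith angles theta_s,
          theta_v and relative azimuth dphi (radians), Gaussian isotropic slopes."""
          cos_2omega = (np.cos(theta_s) * np.cos(theta_v)
                        + np.sin(theta_s) * np.sin(theta_v) * np.cos(dphi))
          cos_beta = (np.cos(theta_s) + np.cos(theta_v)) / np.sqrt(2.0 * (1.0 + cos_2omega))
          tan2_beta = 1.0 / cos_beta**2 - 1.0
          sigma2 = cox_munk_sigma2(wind_speed)
          slope_pdf = np.exp(-tan2_beta / sigma2) / (np.pi * sigma2)
          return (np.pi * fresnel * slope_pdf
                  / (4.0 * np.cos(theta_s) * np.cos(theta_v) * cos_beta**4))

      print(glint_reflectance(np.radians(30), np.radians(20), np.radians(10), wind_speed=5.0))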

  17. Corrective Jaw Surgery

    Medline Plus


  18. Correction factors for assessing immersion suits under harsh conditions.

    Science.gov (United States)

    Power, Jonathan; Tikuisis, Peter; Ré, António Simões; Barwood, Martin; Tipton, Michael

    2016-03-01

    Many immersion suit standards require testing of thermal protective properties in calm, circulating water, while these suits are typically used in harsher environments where they often underperform. Yet it can be expensive and logistically challenging to test immersion suits in realistic conditions. The goal of this work was to develop a set of correction factors that would allow suits to be tested in calm water yet ensure they will offer sufficient protection in harsher conditions. Two immersion studies, one dry and the other with 500 mL of water within the suit, were conducted in wind and waves to measure the change in suit insulation. In both studies, wind and waves resulted in a significantly lower immersed insulation value compared to calm water. The minimum required thermal insulation for maintaining heat balance can be calculated for a given mean skin temperature, metabolic heat production, and water temperature. Combining the physiological limits of sustainable cold water immersion and actual suit insulation, correction factors can be deduced for harsh conditions compared to calm. The minimum in-situ suit insulation to maintain thermal balance is 1.553 - 0.0624·TW + 0.00018·TW² (where TW is the water temperature) for a dry, calm condition. Multiplicative correction factors to the above equation are 1.37, 1.25, and 1.72 for wind + waves, 500 mL suit wetness, and both combined, respectively. Calm water certification tests of suit insulation should meet or exceed the minimum in-situ requirements to maintain thermal balance, and correction factors should be applied for a more realistic determination of minimum insulation for harsh conditions.
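
    The quoted equation and multiplicative factors translate directly into a small helper. The function and condition names below are assumptions made for illustration; TW is taken to be the water temperature in °C, with insulation in the units used by the study.

      def min_insulation(water_temp_c, condition="calm_dry"):
          """Minimum in-situ suit insulation to maintain thermal balance: calm-dry
          fit 1.553 - 0.0624*TW + 0.00018*TW**2, scaled by the reported factors."""
          base = 1.553 - 0.0624 * water_temp_c + 0.00018 * water_temp_c ** 2
          factors = {"calm_dry": 1.00, "wind_waves": 1.37, "wet_500ml": 1.25, "combined": 1.72}
          return base * factors[condition]

      for cond in ("calm_dry", "wind_waves", "wet_500ml", "combined"):
          print(cond, round(min_insulation(5.0, cond), 2))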

  19. Oblique corrections in a model with neutrino masses and strong CP resolution

    International Nuclear Information System (INIS)

    Natale, A.A.; Rodrigues da Silva, P.S.

    1994-01-01

    Our aim in this work is to establish the order of magnitude of the limits that can be placed on light neutrino masses by calculating the oblique corrections and comparing them with experimental data. The calculation is performed for a specific model, although we expect it to be sufficiently general to give an idea of the limits obtainable on neutrino masses in this class of models. (author)
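
    For reference, oblique corrections of this kind are conventionally summarized by the Peskin-Takeuchi parameters; the standard definitions of T and S are shown below for orientation only and are not taken from the paper:

      % Peskin-Takeuchi oblique parameters (new-physics contributions to the
      % gauge-boson self-energies \Pi), standard definitions.
      \alpha T = \frac{\Pi_{WW}(0)}{M_W^{2}} - \frac{\Pi_{ZZ}(0)}{M_Z^{2}},
      \qquad
      \frac{\alpha S}{4 s_W^{2} c_W^{2}} =
      \Pi_{ZZ}'(0) - \frac{c_W^{2}-s_W^{2}}{c_W s_W}\,\Pi_{Z\gamma}'(0) - \Pi_{\gamma\gamma}'(0).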

  20. One-loop corrections to the process e+e- → t anti-t including hard bremsstrahlung

    International Nuclear Information System (INIS)

    Fleischer, J.; Riemann, T.; Werthenbach, A.; Leike, A.

    2002-03-01

    Radiative corrections to the process e+e- → t anti-t are calculated in the one-loop approximation of the Standard Model. Results exist from several groups. This talk provides further comparisons of the complete electroweak contributions, including hard bremsstrahlung. The excellent final agreement between the different groups allows us to continue by working on a code for an event generator for TESLA and on an extension to e+e- → 6 fermions. (orig.)