WorldWideScience

Sample records for small sample correction

  1. Calculation of coincidence summing corrections for a specific small soil sample geometry

    Energy Technology Data Exchange (ETDEWEB)

    Helmer, R.G.; Gehrke, R.J.

    1996-10-01

    Previously, a system was developed at the INEL for measuring the γ-ray emitting nuclides in small soil samples for the purpose of environmental monitoring. These samples were counted close to a ~20% Ge detector and, therefore, it was necessary to take into account the coincidence summing that occurs for some nuclides. In order to improve the technical basis for the coincidence summing corrections, the authors have carried out a study of the variation in the coincidence summing probability with position within the sample volume. A Monte Carlo electron and photon transport code (CYLTRAN) was used to compute peak and total efficiencies for various photon energies from 30 to 2,000 keV at 30 points throughout the sample volume. The geometry for these calculations included the various components of the detector and source along with the shielding. The associated coincidence summing corrections were computed at these 30 positions in the sample volume and then averaged for the whole source. The influence of the soil and the detector shielding on the efficiencies was investigated.

  2. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    Science.gov (United States)

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments is the multiple-biomarker trial, which aims to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine whether the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
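
    As a reminder of the technique named above (in its generic form, not the authors' full Cox-model implementation), Firth's correction maximizes a penalized log-likelihood in which a Jeffreys-prior term counteracts small-sample bias:

    $\ell^{*}(\beta) = \ell(\beta) + \tfrac{1}{2}\log\lvert I(\beta)\rvert$

    where $\ell(\beta)$ is the ordinary log-(partial-)likelihood and $I(\beta)$ is the Fisher information matrix; the penalty removes the leading $O(n^{-1})$ term of the bias of the maximum likelihood estimate.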

  3. 78 FR 59798 - Small Business Subcontracting: Correction

    Science.gov (United States)

    2013-09-30

    ... SMALL BUSINESS ADMINISTRATION 13 CFR Part 125 RIN 3245-AG22 Small Business Subcontracting: Correction AGENCY: U.S. Small Business Administration. ACTION: Correcting amendments. SUMMARY: This document... business subcontracting to implement provisions of the Small Business Jobs Act of 2010. This correction...

  4. Accurate EPR radiosensitivity calibration using small sample masses

    Science.gov (United States)

    Hayes, R. B.; Haskell, E. H.; Barrus, J. K.; Kenner, G. H.; Romanyukha, A. A.

    2000-03-01

    We demonstrate a procedure in retrospective EPR dosimetry which allows for virtually nondestructive sample evaluation in terms of sample irradiations. For this procedure to work, it is shown that corrections must be made for cavity response characteristics when using variable mass samples. Likewise, methods are employed to correct for empty tube signals, sample anisotropy and frequency drift while considering the effects of dose distribution optimization. A demonstration of the method's utility is given by comparing sample portions evaluated using both the described methodology and standard full sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that by making all the recommended corrections, very small masses can be both accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed.

  5. Accurate EPR radiosensitivity calibration using small sample masses

    International Nuclear Information System (INIS)

    Hayes, R.B.; Haskell, E.H.; Barrus, J.K.; Kenner, G.H.; Romanyukha, A.A.

    2000-01-01

    We demonstrate a procedure in retrospective EPR dosimetry which allows for virtually nondestructive sample evaluation in terms of sample irradiations. For this procedure to work, it is shown that corrections must be made for cavity response characteristics when using variable mass samples. Likewise, methods are employed to correct for empty tube signals, sample anisotropy and frequency drift while considering the effects of dose distribution optimization. A demonstration of the method's utility is given by comparing sample portions evaluated using both the described methodology and standard full sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that by making all the recommended corrections, very small masses can be both accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed.

  6. Small sample GEE estimation of regression parameters for longitudinal data.

    Science.gov (United States)

    Paul, Sudhir; Zhang, Xuemao

    2014-09-28

    Longitudinal (clustered) response data arise in many biostatistical applications and, in general, cannot be assumed to be independent. The generalized estimating equation (GEE) approach is a widely used method to estimate marginal regression parameters for correlated responses. The advantage of the GEE is that the estimates of the regression parameters are asymptotically unbiased even if the correlation structure is misspecified, although their small sample properties are not known. In this paper, two bias-adjusted GEE estimators of the regression parameters in longitudinal data are obtained for the case where the number of subjects is small. One is based on a bias correction, and the other is based on a bias reduction. Simulations show that the performances of both bias-corrected methods are similar in terms of bias, efficiency, coverage probability, average coverage length, impact of misspecification of correlation structure, and impact of cluster size on bias correction. Both methods show superior properties over the GEE estimates for small samples. Further, analysis of data involving a small number of subjects also shows improvement in bias, MSE, standard error, and length of the confidence interval of the estimates by the two bias-adjusted methods over the GEE estimates. For small to moderate sample sizes (N ≤ 50), either of the bias-corrected methods GEEBc and GEEBr can be used. However, the method GEEBc should be preferred over GEEBr, as the former is computationally easier. For large sample sizes, the GEE method can be used. Copyright © 2014 John Wiley & Sons, Ltd.
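
    For reference, the standard GEE that both bias-adjusted estimators modify solves (the generic estimating equation, not the authors' specific correction terms)

    $\sum_{i=1}^{N} D_i^{\top} V_i^{-1}\left(y_i - \mu_i(\beta)\right) = 0$

    where $D_i = \partial\mu_i/\partial\beta$ and $V_i$ is the working covariance matrix of cluster $i$; the root $\hat{\beta}$ remains consistent even when $V_i$ misspecifies the true correlation, which is the robustness property noted above.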

  7. An improved correlated sampling method for calculating correction factor of detector

    International Nuclear Information System (INIS)

    Wu Zhen; Li Junli; Cheng Jianping

    2006-01-01

    In the case of a small detector lying inside a bulk medium, two problems arise in calculating the correction factors of the detector. One is that the detector is so small that few particles reach it and collide within it; the other is that the ratio of two computed quantities is not accurate enough. The method discussed in this paper, which combines correlated sampling with modified particle-collision auto-importance sampling and has been implemented on the MCNP-4C platform, can solve both problems. In addition, three other variance reduction techniques are each combined with correlated sampling to calculate a simple model of detector correction factors. The results show that, although every variance reduction technique combined with correlated sampling improves the calculation efficiency, the combination of modified particle-collision auto-importance sampling with correlated sampling is the most efficient one. (authors)

  8. The modular small-angle X-ray scattering data correction sequence.

    Science.gov (United States)

    Pauw, B R; Smith, A J; Snow, T; Terrill, N J; Thünemann, A F

    2017-12-01

    Data correction is probably the least favourite activity amongst users experimenting with small-angle X-ray scattering: if it is not done sufficiently well, this may become evident only during the data analysis stage, necessitating the repetition of the data corrections from scratch. A recommended comprehensive sequence of elementary data correction steps is presented here to alleviate the difficulties associated with data correction, both in the laboratory and at the synchrotron. When applied in the proposed order to the raw signals, the resulting absolute scattering cross section will provide a high degree of accuracy for a very wide range of samples, with its values accompanied by uncertainty estimates. The method can be applied without modification to any pinhole-collimated instruments with photon-counting direct-detection area detectors.
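
    As a minimal sketch of the kind of ordered correction chain the paper recommends (the step names, order, and parameters below are an illustrative subset chosen here, not the paper's full sequence):

      import numpy as np

      def correct_saxs_frame(raw, dark, exposure_s, transmission, thickness_m, abs_cal):
          """Apply a simplified, ordered subset of SAXS data corrections to a 2-D frame.

          All arguments are hypothetical for this sketch; `abs_cal` stands in for an
          absolute-intensity calibration factor from a reference sample.
          """
          signal = raw.astype(float) - dark      # dark-current subtraction
          signal /= exposure_s                   # normalize to count rate
          signal /= transmission                 # sample transmission correction
          signal /= thickness_m                  # scattering per unit thickness
          return signal * abs_cal                # scale to absolute cross section

    Because each step here is a simple subtraction or rescaling, measurement uncertainties can be propagated alongside the data, in the spirit of the uncertainty estimates the sequence provides.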

  9. Consensus of heterogeneous multi-agent systems based on sampled data with a small sampling delay

    International Nuclear Information System (INIS)

    Wang Na; Wu Zhi-Hai; Peng Li

    2014-01-01

    In this paper, consensus problems of heterogeneous multi-agent systems based on sampled data with a small sampling delay are considered. First, a consensus protocol based on sampled data with a small sampling delay for heterogeneous multi-agent systems is proposed. Then, algebraic graph theory, matrix methods, the stability theory of linear systems, and some other techniques are employed to derive necessary and sufficient conditions guaranteeing that the heterogeneous multi-agent systems asymptotically achieve the stationary consensus. Finally, simulations are performed to demonstrate the correctness of the theoretical results. (interdisciplinary physics and related areas of science and technology)

  10. The Accuracy of Inference in Small Samples of Dynamic Panel Data Models

    NARCIS (Netherlands)

    Bun, M.J.G.; Kiviet, J.F.

    2001-01-01

    Through Monte Carlo experiments the small sample behavior is examined of various inference techniques for dynamic panel data models when both the time-series and cross-section dimensions of the data set are small. The LSDV technique and corrected versions of it are compared with IV and GMM

  11. TableSim--A program for analysis of small-sample categorical data.

    Science.gov (United States)

    David J. Rugg

    2003-01-01

    Documents a computer program for calculating correct P-values of 1-way and 2-way tables when sample sizes are small. The program is written in Fortran 90; the executable code runs in 32-bit Microsoft Windows command-line environments.
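
    TableSim itself is a Fortran program; for readers without it, an exact small-sample P-value for a 2-way (2x2) table can be computed in Python as an illustration of the same idea (not TableSim's own algorithm; the counts are invented):

      from scipy.stats import fisher_exact

      # Hypothetical 2x2 contingency table of small counts.
      table = [[7, 2],
               [2, 6]]
      odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
      print(f"odds ratio = {odds_ratio:.2f}, exact P = {p_value:.4f}")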

  12. Exploratory Factor Analysis With Small Samples and Missing Data.

    Science.gov (United States)

    McNeish, Daniel

    2017-01-01

    Exploratory factor analysis (EFA) is an extremely popular method for determining the underlying factor structure for a set of variables. Due to its exploratory nature, EFA is notorious for being conducted with small sample sizes, and recent reviews of psychological research have reported that between 40% and 60% of applied studies have 200 or fewer observations. Recent methodological studies have addressed small size requirements for EFA models; however, these models have only considered complete data, which are the exception rather than the rule in psychology. Furthermore, the extant literature on missing data techniques with small samples is scant, and nearly all existing studies focus on topics that are not of primary interest to EFA models. Therefore, this article presents a simulation to assess the performance of various missing data techniques for EFA models with both small samples and missing data. Results show that deletion methods do not extract the proper number of factors and estimate the factor loadings with severe bias, even when data are missing completely at random. Predictive mean matching is the best method overall when considering extracting the correct number of factors and estimating factor loadings without bias, although 2-stage estimation was a close second.

  13. Correcting Model Fit Criteria for Small Sample Latent Growth Models with Incomplete Data

    Science.gov (United States)

    McNeish, Daniel; Harring, Jeffrey R.

    2017-01-01

    To date, small sample problems with latent growth models (LGMs) have not received as much attention in the literature as related mixed-effect models (MEMs) have. Although many models can be interchangeably framed as an LGM or an MEM, LGMs uniquely provide criteria to assess global data-model fit. However, previous studies have demonstrated poor…

  14. A simple method of correcting for variation of sample thickness in the determination of the activity of environmental samples by gamma spectrometry

    International Nuclear Information System (INIS)

    Galloway, R.B.

    1991-01-01

    Gamma ray spectrometry is a well-established method of determining the activity of radioactive components in environmental samples. It is usual to maintain precisely the same counting geometry in measurements on samples under investigation as in the calibration measurements on standard materials of known activity, thus avoiding perceived uncertainties and complications in correcting for changes in counting geometry. However, this may not always be convenient if, as on some occasions, only a small quantity of sample material is available for analysis. A procedure which avoids re-calibration for each sample size is described and is shown to be simple to use without significantly reducing the accuracy of measurement of the activity of typical environmental samples. The correction procedure relates to the use of cylindrical samples at a constant distance from the detector, the samples all having the same diameter but various thicknesses being permissible. (author)
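
    A textbook slab approximation of the correction in question (an idealized form, not necessarily this paper's exact procedure): for a cylindrical sample of thickness $t$ and linear attenuation coefficient $\mu$ viewed along its axis, the full-energy peak efficiency scales relative to a vanishingly thin sample as

    $f(t) = \dfrac{1 - e^{-\mu t}}{\mu t}$

    so count rates from samples of different thickness can be put on a common footing by dividing by $f(t)$, avoiding a separate calibration for each sample size.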

  15. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling

    Science.gov (United States)

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C.; Joyce, Kevin P.; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pKa correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.

  16. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling.

    Science.gov (United States)

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C; Joyce, Kevin P; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pKa correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.

  17. 78 FR 45051 - Small Business Size Standards; Support Activities for Mining; Correction

    Science.gov (United States)

    2013-07-26

    ... Regulations by increasing small business size standards for three of the four industries in North American... SMALL BUSINESS ADMINISTRATION 13 CFR Part 121 RIN 3245-AG44 Small Business Size Standards; Support Activities for Mining; Correction AGENCY: U.S. Small Business Administration. ACTION: Final rule; correction...

  18. A Monte Carlo procedure for Hamiltonians with small nonlocal correction terms

    International Nuclear Information System (INIS)

    Mack, G.; Pinn, K.

    1986-03-01

    We consider lattice field theories whose Hamiltonians contain small nonlocal correction terms. We propose to do simulations for an auxiliary polymer system with field-dependent activities. If a nonlocal correction term to the Hamiltonian is small, it needs to be evaluated only rarely. (orig.)

  19. Gamma-ray self-attenuation corrections in environmental samples

    International Nuclear Information System (INIS)

    Robu, E.; Giovani, C.

    2009-01-01

    Gamma-spectrometry is a commonly used technique in environmental radioactivity monitoring. Frequently the bulk samples that are to be measured differ with respect to composition and density from the reference sample used for efficiency calibration. Correction factors should be applied in these cases for activity measurement. Linear attenuation coefficients and self-absorption correction factors have been evaluated for soil, grass and liquid sources with different densities and geometries. (authors)

  20. Method for Measuring Thermal Conductivity of Small Samples Having Very Low Thermal Conductivity

    Science.gov (United States)

    Miller, Robert A.; Kuczmarski, Maria A.

    2009-01-01

    This paper describes the development of a hot plate method capable of using air as a standard reference material for the steady-state measurement of the thermal conductivity of very small test samples having thermal conductivity on the order of air. As with other approaches, care is taken to ensure that the heat flow through the test sample is essentially one-dimensional. However, unlike other approaches, no attempt is made to use heated guards to block the flow of heat from the hot plate to the surroundings. It is argued that since large correction factors must be applied to account for guard imperfections when sample dimensions are small, it may be preferable to simply measure and correct for the heat that flows from the heater disc to directions other than into the sample. Experimental measurements taken in a prototype apparatus, combined with extensive computational modeling of the heat transfer in the apparatus, show that sufficiently accurate measurements can be obtained to allow determination of the thermal conductivity of low thermal conductivity materials. Suggestions are made for further improvements in the method based on results from regression analyses of the generated data.
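
    The steady-state relation underlying the method (standard hot-plate analysis, written here with the loss term measured rather than guarded away, per the approach described above) is

    $k = \dfrac{(Q_{\mathrm{heater}} - Q_{\mathrm{loss}})\, L}{A\, \Delta T}$

    where $L$ and $A$ are the sample thickness and cross-sectional area, $\Delta T$ is the temperature drop across the sample, and $Q_{\mathrm{loss}}$ is the measured heat flowing from the heater disc in directions other than into the sample.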

  1. Correction of Sample-Time Error for Time-Interleaved Sampling System Using Cubic Spline Interpolation

    Directory of Open Access Journals (Sweden)

    Qin Guo-jie

    2014-08-01

    Sample-time errors can greatly degrade the dynamic range of a time-interleaved sampling system. In this paper, a novel correction technique employing cubic spline interpolation is proposed for inter-channel sample-time error compensation. The cubic spline interpolation compensation filter is developed in the form of a finite-impulse response (FIR) filter structure, and the derivation of the compensation filter coefficients is given. A 4 GS/s two-channel, time-interleaved ADC prototype system has been implemented to evaluate the performance of the technique. The experimental results showed that the correction technique effectively attenuates the spurious spurs and improves the dynamic performance of the system.
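
    A minimal sketch of the underlying idea, using scipy rather than the paper's FIR filter implementation: if one ADC channel samples late by a known skew, a cubic spline fitted to its actual sample instants can re-evaluate the signal at the nominal instants. The rate, test tone, and skew below are invented for illustration:

      import numpy as np
      from scipy.interpolate import CubicSpline

      fs = 4e9                    # aggregate rate of the two-channel TIADC
      skew = 5e-12                # channel B samples 5 ps late (assumed known)
      t = np.arange(4096) / fs    # nominal interleaved time grid

      def tone(tt):               # test signal
          return np.sin(2 * np.pi * 497e6 * tt)

      t_b_nominal = t[1::2]                    # instants channel B should sample
      x_b_actual = tone(t_b_nominal + skew)    # what it actually captures

      # Fit a spline at the true sample instants, then resample on the nominal grid.
      spline = CubicSpline(t_b_nominal + skew, x_b_actual)
      x_b_corrected = spline(t_b_nominal)

      print("max residual:", np.max(np.abs(x_b_corrected - tone(t_b_nominal))))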

  2. 40 CFR 1065.690 - Buoyancy correction for PM sample media.

    Science.gov (United States)

    2010-07-01

    ... mass, use a sample media density of 920 kg/m3. (3) For PTFE membrane (film) media with an integral... media. 1065.690 Section 1065.690 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Buoyancy correction for PM sample media. (a) General. Correct PM sample media for their buoyancy in air if...

  3. Big Data, Small Sample.

    Science.gov (United States)

    Gerlovina, Inna; van der Laan, Mark J; Hubbard, Alan

    2017-05-20

    Multiple comparisons and small sample size, common characteristics of many types of "Big Data" including those that are produced by genomic studies, present specific challenges that affect reliability of inference. Use of multiple testing procedures necessitates calculation of very small tail probabilities of a test statistic distribution. Results based on large deviation theory provide a formal condition that is necessary to guarantee error rate control given practical sample sizes, linking the number of tests and the sample size; this condition, however, is rarely satisfied. Using methods that are based on Edgeworth expansions (relying especially on the work of Peter Hall), we explore the impact of departures of sampling distributions from typical assumptions on actual error rates. Our investigation illustrates how far the actual error rates can be from the declared nominal levels, suggesting potentially widespread problems with error rate control, specifically excessive false positives. This is an important factor that contributes to the "reproducibility crisis". We also review some other commonly used methods (such as permutation and methods based on finite sampling inequalities) in their application to multiple testing/small sample data. We point out that Edgeworth expansions, providing higher order approximations to the sampling distribution, offer a promising direction for data analysis that could improve reliability of studies relying on large numbers of comparisons with modest sample sizes.
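
    For orientation, the one-term Edgeworth expansion of the distribution of a standardized mean (the generic form the authors build on, not their specific results) is

    $P\!\left(\dfrac{\sqrt{n}\,(\bar{X}-\mu)}{\sigma} \le x\right) = \Phi(x) - \dfrac{\gamma\,(x^{2}-1)}{6\sqrt{n}}\,\varphi(x) + O(n^{-1})$

    where $\gamma$ is the skewness of $X$; in the extreme tails required by multiple testing, this $n^{-1/2}$ term can rival or exceed the nominal normal tail probability itself.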

  4. Bulk sample self-attenuation correction by transmission measurement

    International Nuclear Information System (INIS)

    Parker, J.L.; Reilly, T.D.

    1976-01-01

    Various methods used in either finding or avoiding the attenuation correction in the passive γ-ray assay of bulk samples are reviewed. Detailed consideration is given to the transmission method, which involves experimental determination of the sample linear attenuation coefficient by measuring the transmission through the sample of a beam of gamma rays from an external source. The method was applied to box- and cylindrically-shaped samples.
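
    In the simplest (far-field, slab) version of the transmission method, measuring the transmission $T$ of the external beam through the sample gives the self-attenuation correction factor directly:

    $T = e^{-\mu x}, \qquad \mathrm{CF} = \dfrac{\mu x}{1 - e^{-\mu x}} = \dfrac{-\ln T}{1 - T}$

    by which the observed count rate is multiplied to recover the unattenuated rate. This is the idealized textbook form; the box and cylinder geometries treated in the paper require the corresponding shape-averaged expressions.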

  5. 78 FR 27442 - Coal Mine Dust Sampling Devices; Correction

    Science.gov (United States)

    2013-05-10

    ... DEPARTMENT OF LABOR Mine Safety and Health Administration Coal Mine Dust Sampling Devices; Correction AGENCY: Mine Safety and Health Administration, Labor. ACTION: Notice; correction. SUMMARY: On April 30, 2013, Mine Safety and Health Administration (MSHA) published a notice in the Federal Register...

  6. Correction for sample self-absorption in activity determination by gamma spectrometry

    International Nuclear Information System (INIS)

    Galloway, R.B.

    1991-01-01

    Gamma ray spectrometry is a convenient method of determining the activity of the radioactive components in environmental samples. Commonly samples vary in gamma absorption or differ in absorption from the calibration standards available, so that accurate correction for self-absorption in the sample is essential. A versatile correction procedure is described. (orig.)

  7. Higher order QCD corrections in small x physics

    International Nuclear Information System (INIS)

    Chachamis, G.

    2006-11-01

    We study higher order QCD corrections in small-x physics. The numerical implementation of the full NLO photon impact factor is the remaining necessary piece for testing the NLO BFKL resummation against data from physical processes, such as γ*γ* collisions. We perform the numerical integration over phase space for the virtual corrections to the NLO photon impact factor. This, along with the previously calculated real corrections, makes feasible in the near future first estimates for the γ*γ* total cross section, since the convolution of the full impact factor with the NLO BFKL gluon Green's function is now straightforward. The NLO corrections for the photon impact factor are sizeable and negative. In the second part of this thesis, we estimate higher order corrections to the BK equation. We are mainly interested in whether partonic saturation is delayed in rapidity when going beyond the leading order. In our investigation, we use the so-called 'rapidity veto', which forbids two emissions to be very close in rapidity, to 'switch on' higher order corrections to the BK equation. From analytic and numerical analysis, we conclude that saturation is indeed delayed in rapidity when higher order corrections are taken into account. In the last part, we investigate higher order QCD corrections as additional corrections to the electroweak (EW) sector. The question of whether BFKL corrections are of any importance in the Regge limit for the EW sector seems natural; although they arise at higher loop level, the accumulation of logarithms in the energy s at high energies cannot be dismissed without an investigation. We focus on the process γγ→ZZ. We calculate the pQCD corrections in the forward region at leading logarithmic (LL) BFKL accuracy, which are of the order of a few percent at the TeV energy scale. (orig.)

  8. Higher order QCD corrections in small x physics

    Energy Technology Data Exchange (ETDEWEB)

    Chachamis, G.

    2006-11-15

    We study higher order QCD corrections in small-x physics. The numerical implementation of the full NLO photon impact factor is the remaining necessary piece for testing the NLO BFKL resummation against data from physical processes, such as γ*γ* collisions. We perform the numerical integration over phase space for the virtual corrections to the NLO photon impact factor. This, along with the previously calculated real corrections, makes feasible in the near future first estimates for the γ*γ* total cross section, since the convolution of the full impact factor with the NLO BFKL gluon Green's function is now straightforward. The NLO corrections for the photon impact factor are sizeable and negative. In the second part of this thesis, we estimate higher order corrections to the BK equation. We are mainly interested in whether partonic saturation is delayed in rapidity when going beyond the leading order. In our investigation, we use the so-called 'rapidity veto', which forbids two emissions to be very close in rapidity, to 'switch on' higher order corrections to the BK equation. From analytic and numerical analysis, we conclude that saturation is indeed delayed in rapidity when higher order corrections are taken into account. In the last part, we investigate higher order QCD corrections as additional corrections to the electroweak (EW) sector. The question of whether BFKL corrections are of any importance in the Regge limit for the EW sector seems natural; although they arise at higher loop level, the accumulation of logarithms in the energy s at high energies cannot be dismissed without an investigation. We focus on the process γγ→ZZ. We calculate the pQCD corrections in the forward region at leading logarithmic (LL) BFKL accuracy, which are of the order of a few percent at the TeV energy scale. (orig.)

  9. Attenuation correction for the collimated gamma ray assay of cylindrical samples

    International Nuclear Information System (INIS)

    Patra, Sabyasachi; Agarwal, Chhavi; Goswami, A.; Gathibandhe, M.

    2015-01-01

    The Hybrid Monte Carlo (HMC) method developed earlier for attenuation correction of non-collimated samples [Agarwal et al., 2008, Nucl. Instrum. Methods A 597, 198] has been extended to the segmented gamma-ray assay of cylindrical samples. The method has been validated both experimentally and theoretically. For experimental validation, the results of the HMC calculation have been compared with the experimentally obtained attenuation correction factors. The HMC attenuation correction factors have also been compared with the results obtained from the near-field and far-field formulae available in the literature at two sample-to-detector distances (10.3 cm and 20.4 cm). The method has been found to be valid at all sample-to-detector distances over a wide range of transmittance. On the other hand, the near-field and far-field formulae have been found to work only over a limited range of sample-to-detector distances and transmittances. The HMC method has been further extended to circular collimated geometries, where no analytical formula for attenuation correction exists. - Highlights: • Hybrid Monte Carlo method for attenuation correction developed for SGA system. • Method found to work for all sample-detector geometries for all transmittances. • The near-field formula is applicable only beyond a certain sample-to-detector distance. • The far-field formula is applicable only for higher transmittances (>18%). • Hybrid Monte Carlo method further extended to circular collimated geometry.

  10. Small-sample-worth perturbation methods

    International Nuclear Information System (INIS)

    1985-01-01

    It has been assumed that the perturbed region, R_p, is large enough so that: (1) even without a great deal of biasing there is a substantial probability that an average source-neutron will enter it; and (2) once having entered, the neutron is likely to make several collisions in R_p during its lifetime. Unfortunately neither assumption is valid for the typical configurations one encounters in small-sample-worth experiments. In such experiments one measures the reactivity change which is induced when a very small void in a critical assembly is filled with a sample of some test-material. Only a minute fraction of the fission-source neutrons ever gets into the sample and, of those neutrons that do, most emerge uncollided. Monte Carlo small-sample perturbation computations are described.

  11. Standard Deviation for Small Samples

    Science.gov (United States)

    Joarder, Anwar H.; Latif, Raja M.

    2006-01-01

    Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…

  12. Small Scale Yielding Correction of Constraint Loss in Small Sized Fracture Toughness Test Specimens

    International Nuclear Information System (INIS)

    Kim, Maan Won; Kim, Min Chul; Lee, Bong Sang; Hong, Jun Hwa

    2005-01-01

    Fracture toughness data in the ductile-brittle transition region of ferritic steels show scatter produced by local sampling effects and a specimen geometry dependence which results from relaxation in crack tip constraint. ASTM E1921 provides a standard test method to define the median toughness temperature curve, the so-called Master Curve, for the material corresponding to a 1T crack front length, and also defines a reference temperature, T0, at which the median toughness value is 100 MPa√m for a 1T size specimen. The ASTM E1921 procedures assume that high constraint, small-scale yielding (SSY) conditions prevail at fracture along the crack front. Violation of the SSY assumption occurs most often during tests of smaller specimens. Constraint loss in such cases leads to higher toughness values and thus lower T0 values. When applied to a structure with a low constraint geometry, the standard fracture toughness estimates may be strongly over-conservative. Many efforts have been made to adjust for the constraint effect. In this work, we applied a small-scale yielding correction (SSYC) to adjust for the constraint loss of 1/3PCVN and PCVN specimens, which are relatively smaller than the 1T size specimen, in the fracture toughness Master Curve test.

  13. Structure-based sampling and self-correcting machine learning for accurate calculations of potential energy surfaces and vibrational levels

    Science.gov (United States)

    Dral, Pavlo O.; Owens, Alec; Yurchenko, Sergei N.; Thiel, Walter

    2017-06-01

    We present an efficient approach for generating highly accurate molecular potential energy surfaces (PESs) using self-correcting, kernel ridge regression (KRR) based machine learning (ML). We introduce structure-based sampling to automatically assign nuclear configurations from a pre-defined grid to the training and prediction sets, respectively. Accurate high-level ab initio energies are required only for the points in the training set, while the energies for the remaining points are provided by the ML model with negligible computational cost. The proposed sampling procedure is shown to be superior to random sampling and also eliminates the need for training several ML models. Self-correcting machine learning has been implemented such that each additional layer corrects errors from the previous layer. The performance of our approach is demonstrated in a case study on a published high-level ab initio PES of methyl chloride with 44 819 points. The ML model is trained on sets of different sizes and then used to predict the energies for tens of thousands of nuclear configurations within seconds. The resulting datasets are utilized in variational calculations of the vibrational energy levels of CH3Cl. By using both structure-based sampling and self-correction, the size of the training set can be kept small (e.g., 10% of the points) without any significant loss of accuracy. In ab initio rovibrational spectroscopy, it is thus possible to reduce the number of computationally costly electronic structure calculations through structure-based sampling and self-correcting KRR-based machine learning by up to 90%.
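
    A minimal, self-contained sketch of the core ingredient (KRR trained on a structure-based subset of grid points), using scikit-learn and a toy one-dimensional Morse-like potential in place of the published CH3Cl surface; the greedy farthest-point selection stands in for the paper's structure-based sampling, and all numbers are invented:

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge

      r = np.linspace(0.8, 4.0, 2000).reshape(-1, 1)      # toy grid of geometries
      energy = (1 - np.exp(-1.5 * (r.ravel() - 1.6)))**2  # toy Morse-like PES

      # Greedy farthest-point selection: a stand-in for structure-based sampling.
      train_idx = [0]
      for _ in range(199):                                # train on 10% of the grid
          dist = np.min(np.abs(r - r[train_idx].T), axis=1)
          train_idx.append(int(np.argmax(dist)))

      model = KernelRidge(kernel="rbf", alpha=1e-8, gamma=5.0)
      model.fit(r[train_idx], energy[train_idx])
      pred = model.predict(r)                             # cheap energies elsewhere

      print("max abs error on full grid:", np.max(np.abs(pred - energy)))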

  14. Efficiency corrections in determining the 137Cs inventory of environmental soil samples by using relative measurement method and GEANT4 simulations

    International Nuclear Information System (INIS)

    Li, Gang; Liang, Yongfei; Xu, Jiayun; Bai, Lixin

    2015-01-01

    The determination of 137Cs inventory is widely used to estimate the soil erosion or deposition rate. The generally used method to determine the activity of volumetric samples is the relative measurement method, which employs a calibration standard sample with accurately known activity. This method has great advantages in accuracy and operation only when there is a small difference in elemental composition, sample density and geometry between measuring samples and the calibration standard. Otherwise it needs additional efficiency corrections in the calculating process. The Monte Carlo simulations can handle these correction problems easily with lower financial cost and higher accuracy. This work presents a detailed description of the simulation and calibration procedure for a conventionally used commercial P-type coaxial HPGe detector with cylindrical sample geometry. The effects of sample elemental composition, density and geometry were discussed in detail and calculated in terms of efficiency correction factors. The effect of sample placement was also analyzed; the results indicate that the radioactive nuclides and sample density are not absolutely uniformly distributed along the axial direction. At last, a unified binary quadratic functional relationship of efficiency correction factors as a function of sample density and height was obtained by the least square fitting method. This function covers the sample density and height range of 0.8–1.8 g/cm³ and 3.0–7.25 cm, respectively. The efficiency correction factors calculated by the fitted function are in good agreement with those obtained by the GEANT4 simulations with the determination coefficient value greater than 0.9999. The results obtained in this paper make the above-mentioned relative measurements more accurate and efficient in the routine radioactive analysis of environmental cylindrical soil samples. - Highlights: • Determination of 137Cs inventory in environmental soil samples by using relative
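
    A minimal sketch of the final fitting step: a generic least-squares fit of a binary quadratic in density and height (the grid values and correction factors below are invented, not the paper's):

      import numpy as np

      # Hypothetical correction factors f on a (density, height) grid.
      rho = np.array([0.8, 0.8, 1.3, 1.3, 1.8, 1.8, 0.8, 1.3, 1.8])    # g/cm^3
      h = np.array([3.0, 7.25, 3.0, 7.25, 3.0, 7.25, 5.0, 5.0, 5.0])   # cm
      f = np.array([1.02, 1.10, 1.08, 1.21, 1.15, 1.33, 1.06, 1.14, 1.24])

      # Binary quadratic model: f = a + b*rho + c*h + d*rho**2 + e*h**2 + g*rho*h
      A = np.column_stack([np.ones_like(rho), rho, h, rho**2, h**2, rho * h])
      coef, *_ = np.linalg.lstsq(A, f, rcond=None)

      def correction(rho_s, h_s):
          """Fitted correction factor for a sample's density and height."""
          return coef @ np.array([1.0, rho_s, h_s, rho_s**2, h_s**2, rho_s * h_s])

      print("f(1.0 g/cm^3, 6.0 cm) =", round(float(correction(1.0, 6.0)), 3))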

  15. Cerebral Small Vessel Disease: Cognition, Mood, Daily Functioning, and Imaging Findings from a Small Pilot Sample

    Directory of Open Access Journals (Sweden)

    John G. Baker

    2012-04-01

    Cerebral small vessel disease, a leading cause of cognitive decline, is considered a relatively homogeneous disease process, and it can co-occur with Alzheimer's disease. Clinical reports of magnetic resonance imaging (MRI)/computed tomography (CT) and single photon emission computed tomography (SPECT) imaging and neuropsychology testing for a small pilot sample of 14 patients are presented to illustrate disease characteristics through findings from structural and functional imaging and cognitive assessment. Participants showed some decreases in executive functioning, attention, processing speed, and memory retrieval, consistent with previous literature. An older subgroup showed lower age-corrected scores at a single time point compared to younger participants. Performance on a computer-administered cognitive measure showed a slight overall decline over a period of 8–28 months. For a case study with mild neuropsychology findings, the MRI report was normal while the SPECT report identified perfusion abnormalities. Future research can test whether advances in imaging analysis allow for identification of cerebral small vessel disease before changes are detected in cognition.

  16. Corrective Action Investigation Plan for Corrective Action Unit 541: Small Boy Nevada National Security Site and Nevada Test and Training Range, Nevada with ROTC 1

    Energy Technology Data Exchange (ETDEWEB)

    Matthews, Patrick [Navarro-Intera, LLC (N-I), Las Vegas, NV (United States)

    2014-09-01

    Corrective Action Unit (CAU) 541 is co-located on the boundary of Area 5 of the Nevada National Security Site and Range 65C of the Nevada Test and Training Range, approximately 65 miles northwest of Las Vegas, Nevada. CAU 541 is a grouping of sites where there has been a suspected release of contamination associated with nuclear testing. This document describes the planned investigation of CAU 541, which comprises the following corrective action sites (CASs): 05-23-04, Atmospheric Tests (6) - BFa Site; 05-45-03, Atmospheric Test Site - Small Boy. These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives (CAAs). Additional information will be obtained by conducting a corrective action investigation before evaluating CAAs and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable CAAs that will be presented in the investigation report. The sites will be investigated based on the data quality objectives (DQOs) developed on April 1, 2014, by representatives of the Nevada Division of Environmental Protection; U.S. Air Force; and the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Field Office. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 541. The site investigation process also will be conducted in accordance with the Soils Activity Quality Assurance Plan, which establishes requirements, technical planning, and general quality practices to be applied to this activity. The potential contamination sources associated with CASs 05-23-04 and 05-45-03 are from nuclear testing activities conducted at the Atmospheric Tests (6) - BFa Site and Atmospheric Test Site - Small Boy sites. The presence and nature of

  17. Maybe Small Is Too Small a Term: Introduction to Advancing Small Sample Prevention Science.

    Science.gov (United States)

    Fok, Carlotta Ching Ting; Henry, David; Allen, James

    2015-10-01

    Prevention research addressing health disparities often involves work with small population groups experiencing such disparities. The goals of this special section are to (1) address the question of what constitutes a small sample; (2) identify some of the key research design and analytic issues that arise in prevention research with small samples; (3) develop applied, problem-oriented, and methodologically innovative solutions to these design and analytic issues; and (4) evaluate the potential role of these innovative solutions in describing phenomena, testing theory, and evaluating interventions in prevention research. Through these efforts, we hope to promote broader application of these methodological innovations. We also seek whenever possible, to explore their implications in more general problems that appear in research with small samples but concern all areas of prevention research. This special section includes two sections. The first section aims to provide input for researchers at the design phase, while the second focuses on analysis. Each article describes an innovative solution to one or more challenges posed by the analysis of small samples, with special emphasis on testing for intervention effects in prevention research. A concluding article summarizes some of their broader implications, along with conclusions regarding future directions in research with small samples in prevention science. Finally, a commentary provides the perspective of the federal agencies that sponsored the conference that gave rise to this special section.

  18. An Improvement to Interval Estimation for Small Samples

    Directory of Open Access Journals (Sweden)

    SUN Hui-Ling

    2017-02-01

    Because it is difficult and complex to determine the probability distribution of small samples, it is improper to use traditional probability theory for parameter estimation from small samples. The Bayes Bootstrap method is commonly used in engineering practice, but it has its own limitations. In this article an improvement to the Bayes Bootstrap method is given. The improved method extends the sample size by numerical simulation without altering the character of the original small sample, and it can give accurate interval estimates for small samples. Finally, Monte Carlo simulation is applied to specific small-sample problems, and the effectiveness and practicability of the improved Bootstrap method are demonstrated.
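
    For context, a minimal sketch of the baseline technique the article improves on: the standard Bayesian bootstrap (Rubin, 1981), applied here to an invented small sample, not the article's improved method:

      import numpy as np

      rng = np.random.default_rng(42)
      sample = np.array([9.8, 10.4, 10.1, 9.6, 10.9, 10.2, 9.9])  # small sample

      # Draw Dirichlet(1,...,1) weights over the observations and form the
      # weighted mean; repeating approximates the posterior of the mean.
      B = 10_000
      weights = rng.dirichlet(np.ones(sample.size), size=B)
      posterior_means = weights @ sample

      lo, hi = np.percentile(posterior_means, [2.5, 97.5])
      print(f"95% interval estimate for the mean: ({lo:.2f}, {hi:.2f})")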

  19. Efficiency corrections in determining the (137)Cs inventory of environmental soil samples by using relative measurement method and GEANT4 simulations.

    Science.gov (United States)

    Li, Gang; Liang, Yongfei; Xu, Jiayun; Bai, Lixin

    2015-08-01

    The determination of 137Cs inventory is widely used to estimate the soil erosion or deposition rate. The generally used method to determine the activity of volumetric samples is the relative measurement method, which employs a calibration standard sample with accurately known activity. This method has great advantages in accuracy and operation only when there is a small difference in elemental composition, sample density and geometry between measuring samples and the calibration standard. Otherwise it needs additional efficiency corrections in the calculating process. The Monte Carlo simulations can handle these correction problems easily with lower financial cost and higher accuracy. This work presents a detailed description of the simulation and calibration procedure for a conventionally used commercial P-type coaxial HPGe detector with cylindrical sample geometry. The effects of sample elemental composition, density and geometry were discussed in detail and calculated in terms of efficiency correction factors. The effect of sample placement was also analyzed; the results indicate that the radioactive nuclides and sample density are not absolutely uniformly distributed along the axial direction. At last, a unified binary quadratic functional relationship of efficiency correction factors as a function of sample density and height was obtained by the least square fitting method. This function covers the sample density and height range of 0.8–1.8 g/cm³ and 3.0–7.25 cm, respectively. The efficiency correction factors calculated by the fitted function are in good agreement with those obtained by the GEANT4 simulations with the determination coefficient value greater than 0.9999. The results obtained in this paper make the above-mentioned relative measurements more accurate and efficient in the routine radioactive analysis of environmental cylindrical soil samples. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Small sample whole-genome amplification

    Science.gov (United States)

    Hara, Christine; Nguyen, Christine; Wheeler, Elizabeth; Sorensen, Karen; Arroyo, Erin; Vrankovich, Greg; Christian, Allen

    2005-11-01

    Many challenges arise when trying to amplify and analyze human samples collected in the field due to limitations in sample quantity and contamination of the starting material. Tests such as DNA fingerprinting and mitochondrial typing require a certain sample size and are carried out in large volume reactions; in cases where insufficient sample is present whole genome amplification (WGA) can be used. WGA allows very small quantities of DNA to be amplified in a way that enables subsequent DNA-based tests to be performed. A limiting step to WGA is sample preparation. To minimize the necessary sample size, we have developed two modifications of WGA: the first allows for an increase in amplified product from small, nanoscale, purified samples with the use of carrier DNA while the second is a single-step method for cleaning and amplifying samples all in one column. Conventional DNA cleanup involves binding the DNA to silica, washing away impurities, and then releasing the DNA for subsequent testing. We have eliminated losses associated with incomplete sample release, thereby decreasing the required amount of starting template for DNA testing. Both techniques address the limitations of sample size by providing ample copies of genomic samples. Carrier DNA, included in our WGA reactions, can be used when amplifying samples with the standard purification method, or can be used in conjunction with our single-step DNA purification technique to potentially further decrease the amount of starting sample necessary for future forensic DNA-based assays.

  1. Gaseous radiocarbon measurements of small samples

    International Nuclear Information System (INIS)

    Ruff, M.; Szidat, S.; Gaeggeler, H.W.; Suter, M.; Synal, H.-A.; Wacker, L.

    2010-01-01

    Radiocarbon dating by means of accelerator mass spectrometry (AMS) is a well-established method for samples containing carbon in the milligram range. However, the measurement of small samples containing less than 50 μg carbon often fails. It is difficult to graphitise these samples and the preparation is prone to contamination. To avoid graphitisation, a solution can be the direct measurement of carbon dioxide. The MICADAS, the smallest accelerator for radiocarbon dating in Zurich, is equipped with a hybrid Cs sputter ion source. It allows the measurement of both graphite targets and gaseous CO2 samples, without any rebuilding. This work presents experience in dealing with small samples containing 1-40 μg carbon. 500 unknown samples from different environmental research fields have been measured so far. Most of the samples were measured with the gas ion source. These data are compared with earlier measurements of small graphite samples. The performance of the two different techniques is discussed and the main contributions to the blank determined. An analysis of blank and standard data measured over several years allowed a quantification of the contamination, which was found to be of the order of 55 ng and 750 ng carbon (50 pMC) for the gaseous and the graphite samples, respectively. For quality control, a number of certified standards were measured using the gas ion source to demonstrate the reliability of the data.

  2. Research on self-absorption corrections for laboratory γ spectral analysis of soil samples

    International Nuclear Information System (INIS)

    Tian Zining; Jia Mingyan; Li Huibin; Cheng Ziwei; Ju Lingjun; Shen Maoquan; Yang Xiaoyan; Yan Ling; Fen Tiancheng

    2010-01-01

    Based on the calibration results of point sources, the dimensions of the HPGe crystal were characterized. Linear attenuation coefficients and detection efficiencies of various kinds of samples were calculated, and the function F(μ) of the φ75 mm x 25 mm sample geometry was established. A standard surface source was used to simulate sources at different heights in the soil sample, and the function ε(h), which relates detection efficiency to the height of the surface source, was determined. The detection efficiency of the calibration source can then be obtained by integration; the F(μ) functions established for soil samples are consistent with the results of the MCNP calculation code. Several φ75 mm x 25 mm soil samples were measured with the HPGe spectrometer, and the function F(μ) was used to correct for self-absorption. F(μ) functions for soil samples of other dimensions can be calculated with the established MCNP code, so that the self-absorption correction can be made; to verify the calculated results, φ75 mm x 75 mm soil samples were measured. Several φ75 mm x 25 mm soil samples from an atmospheric nuclear test site were also measured with the HPGe spectrometer, and the function F(μ) was used to correct for self-absorption. The function F(m) was established, and the technical method used to correct soil samples from unknown areas is also given. The surface-source correction method greatly improves the accuracy of gamma spectrum measurements, and it will be widely applied to environmental radioactivity investigations. (authors)

  3. Effect of sample size on bias correction performance

    Science.gov (United States)

    Reiter, Philipp; Gutjahr, Oliver; Schefczyk, Lukas; Heinemann, Günther; Casper, Markus C.

    2014-05-01

    The output of climate models often shows a bias when compared to observed data, so that a preprocessing is necessary before using it as climate forcing in impact modeling (e.g. hydrology, species distribution). A common bias correction method is the quantile matching approach, which adapts the cumulative distribution function of the model output to the one of the observed data by means of a transfer function. Especially for precipitation we expect the bias correction performance to strongly depend on sample size, i.e. the length of the period used for calibration of the transfer function. We carry out experiments using the precipitation output of ten regional climate model (RCM) hindcast runs from the EU-ENSEMBLES project and the E-OBS observational dataset for the period 1961 to 2000. The 40 years are split into a 30 year calibration period and a 10 year validation period. In the first step, for each RCM transfer functions are set up cell-by-cell, using the complete 30 year calibration period. The derived transfer functions are applied to the validation period of the respective RCM precipitation output and the mean absolute errors in reference to the observational dataset are calculated. These values are treated as "best fit" for the respective RCM. In the next step, this procedure is redone using subperiods out of the 30 year calibration period. The lengths of these subperiods are reduced from 29 years down to a minimum of 1 year, only considering subperiods of consecutive years. This leads to an increasing number of repetitions for smaller sample sizes (e.g. 2 for a length of 29 years). In the last step, the mean absolute errors are statistically tested against the "best fit" of the respective RCM to compare the performances. In order to analyze if the intensity of the effect of sample size depends on the chosen correction method, four variations of the quantile matching approach (PTF, QUANT/eQM, gQM, GQM) are applied in this study. The experiments are further
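
    A minimal sketch of the basic quantile matching idea shared by the four variants named above, with invented precipitation arrays rather than the ENSEMBLES/E-OBS data:

      import numpy as np

      def quantile_map(model_cal, obs_cal, model_new):
          """Empirical quantile mapping: pass each new model value through the
          calibration-period model CDF onto the observed CDF."""
          model_sorted = np.sort(model_cal)
          obs_sorted = np.sort(obs_cal)
          p = np.searchsorted(model_sorted, model_new) / model_sorted.size
          return np.quantile(obs_sorted, np.clip(p, 0.0, 1.0))

      rng = np.random.default_rng(1)
      obs_cal = rng.gamma(2.0, 2.0, size=3000)     # "observed" calibration data
      model_cal = rng.gamma(2.0, 2.5, size=3000)   # biased "model" output
      model_val = rng.gamma(2.0, 2.5, size=1000)   # validation-period output

      corrected = quantile_map(model_cal, obs_cal, model_val)
      print("mean bias before:", model_val.mean() - obs_cal.mean())
      print("mean bias after: ", corrected.mean() - obs_cal.mean())

    Shrinking the calibration arrays fed to `quantile_map` mimics the experiment described above: with few years of data, the empirical CDFs become noisy and the transfer function degrades.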

  4. Studies on the true coincidence correction in measuring filter samples by gamma spectrometry

    CERN Document Server

    Lian Qi; Chang Yong Fu; Xia Bing

    2002-01-01

    The true coincidence correction in measuring filter samples has been studied with high-efficiency HPGe gamma detectors. The true coincidence correction for a specific de-excitation cascade through three excited levels has been analyzed, and typical analytical expressions for the true coincidence correction factors are given. From the relative efficiency measured on the detector surface with eight 'single-energy' gamma emitters and the efficiency for filter samples, the peak and total efficiency surfaces are fitted. The true coincidence correction factors of 60Co and 152Eu calculated from the efficiency surfaces agree well with experimental results.

  5. 75 FR 17036 - Energy Conservation Program: Energy Conservation Standards for Small Electric Motors; Correction

    Science.gov (United States)

    2010-04-05

    ... Conservation Program: Energy Conservation Standards for Small Electric Motors; Correction AGENCY: Office of... standards for small electric motors, which was published on March 9, 2010. In that final rule, the U.S... titled ``Energy Conservation Standards for Small Electric Motors.'' 75 FR 10874. Since the publication of...

  6. Mapping species distributions with MAXENT using a geographically biased sample of presence data: a performance assessment of methods for correcting sampling bias.

    Science.gov (United States)

    Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean

    2014-01-01

    MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensus guideline on how to account for it. Here we compared the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" dataset derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied the five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of the methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, simple systematic sampling of the records consistently ranked among the best performing methods across the range of conditions tested, whereas the other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. Nevertheless, systematic sampling seems to be the most efficient method for correcting sampling bias and should be advised in most cases.
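
    The best-performing correction in this study, systematic sampling of records, amounts to spatial thinning: keep at most one occurrence per grid cell so heavily surveyed areas do not dominate training. A minimal sketch (with made-up coordinates and cell size) follows.

```python
import numpy as np

# Spatial thinning / systematic sampling of occurrence records: keep at most
# one record per grid cell. Coordinates and cell size are illustrative.
def systematic_sample(lon, lat, cell_deg=0.5, seed=0):
    rng = np.random.default_rng(seed)
    cells = {}
    for i in rng.permutation(len(lon)):          # random order, so the kept
        key = (int(lon[i] // cell_deg), int(lat[i] // cell_deg))
        cells.setdefault(key, i)                 # record per cell is random
    return np.array(sorted(cells.values()))

rng = np.random.default_rng(1)
lon, lat = rng.uniform(-5, 5, 500), rng.uniform(40, 50, 500)
keep = systematic_sample(lon, lat)
print(f"kept {keep.size} of {lon.size} records")
```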

  7. Determination Of Activity Of Radionuclides In Moss-Soil Sample With Self-Absorption Correction

    International Nuclear Information System (INIS)

    Tran Thien Thanh; Chau Van Tao; Truong Thi Hong Loan; Hoang Duc Tam

    2011-01-01

    A hyperpure germanium (HPGe) spectrometer system is a very powerful tool for radioactivity measurements. The systematic uncertainty in the full-energy peak efficiency arises from differences between the matrix (density and chemical composition) of the reference sample and that of the other bulk samples. For precise gamma spectrum analysis, a correction for absorption effects in the sample should therefore be applied, especially for bulk samples. The results are presented and discussed in this paper. (author)

  8. Evaluation of attenuation and scatter correction requirements in small animal PET and SPECT imaging

    Science.gov (United States)

    Konik, Arda Bekir

    Positron emission tomography (PET) and single photon emission computed tomography (SPECT) are two nuclear emission-imaging modalities that rely on the detection of high-energy photons emitted from radiotracers administered to the subject. The majority of these photons are attenuated (absorbed or scattered) in the body, resulting in count losses or deviations from true detection, which in turn degrades the accuracy of images. In clinical emission tomography, sophisticated correction methods are often required, employing additional x-ray CT or radionuclide transmission scans. Having proven their potential in both clinical and research areas, both PET and SPECT are being adapted for small animal imaging. However, despite the growing interest in small animal emission tomography, little scientific information exists about the accuracy of these correction methods on smaller objects, or about what level of correction is required. The purpose of this work is to determine the role of attenuation and scatter corrections as a function of object size through simulations. The simulations were performed using Interactive Data Language (IDL) and a Monte Carlo based package, Geant4 application for emission tomography (GATE). In the IDL simulations, PET and SPECT data acquisition were modeled in the presence of attenuation. A mathematical emission and attenuation phantom approximating a thorax slice and slices from real PET/CT data were scaled to 5 different sizes (i.e., human, dog, rabbit, rat and mouse). The simulated emission data collected from these objects were reconstructed. The reconstructed images, with and without attenuation correction, were compared to the ideal (i.e., non-attenuated) reconstruction. Next, using GATE, scatter fraction values (the ratio of the scatter counts to the total counts) of PET and SPECT scanners were measured for various sizes of NEMA (cylindrical phantoms representing small animals and human), MOBY (realistic mouse/rat model) and XCAT (realistic human model
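
    A back-of-envelope calculation shows why object size drives the question posed in this record: the fraction of 511 keV photons attenuated along a central path through a water-equivalent cylinder grows quickly with diameter. The diameters below are rough illustrative values, not the study's phantom dimensions.

```python
import numpy as np

# Fraction of 511 keV photons attenuated along a central path through
# water-equivalent cylinders of roughly mouse-to-human diameters.
MU_WATER_511 = 0.096  # 1/cm, approximate value for water at 511 keV
diameters_cm = {"mouse": 2.5, "rat": 5.0, "rabbit": 12.0, "dog": 25.0, "human": 40.0}
for name, d in diameters_cm.items():
    lost = 1.0 - np.exp(-MU_WATER_511 * d)
    print(f"{name:6s} ({d:4.1f} cm): {100 * lost:5.1f}% attenuated")
```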

  9. Attenuation correction for the NIH ATLAS small animal PET scanner

    CERN Document Server

    Yao, Rutao; Liow, JeihSan; Seidel, Jurgen

    2003-01-01

    We evaluated two methods of attenuation correction for the NIH ATLAS small animal PET scanner: 1) a CT-based method that derives 511 keV attenuation coefficients (μ) by extrapolation from spatially registered CT images; and 2) an analytic method based on the body outline of emission images and an empirical μ. A specially fabricated attenuation calibration phantom with cylindrical inserts that mimic different body tissues was used to derive the relationship for converting CT values to μ for PET. The methods were applied to three test data sets: 1) a uniform cylinder phantom, 2) the attenuation calibration phantom, and 3) a mouse injected with [18F]FDG. The CT-based attenuation correction factors were larger in non-uniform regions of the imaging subject, e.g. the mouse head, than those of the analytic method. The two methods had similar correction factors for regions with uniform density and detectable emission source distributions.
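
    CT-based attenuation correction generally rests on a piecewise-linear mapping from CT numbers to 511 keV attenuation coefficients. The sketch below shows a generic bilinear conversion of that kind; the breakpoint and bone-segment slope are assumed literature-style values, not the calibration derived from the paper's phantom.

```python
import numpy as np

# Generic bilinear HU -> mu(511 keV) conversion; slopes are assumptions.
MU_WATER_511 = 0.096  # 1/cm

def hu_to_mu(hu):
    hu = np.asarray(hu, dtype=float)
    mu = np.where(hu <= 0,
                  MU_WATER_511 * (1.0 + hu / 1000.0),        # air-to-water segment
                  MU_WATER_511 * (1.0 + 0.5 * hu / 1000.0))  # assumed bone slope
    return np.clip(mu, 0.0, None)

print(hu_to_mu([-1000, 0, 1000]))  # air, water, dense bone
```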

  10. Test of a sample container for shipment of small size plutonium samples with PAT-2

    International Nuclear Information System (INIS)

    Kuhn, E.; Aigner, H.; Deron, S.

    1981-11-01

    A light-weight container for the air transport of plutonium, to be designated PAT-2, has been developed in the USA and is presently undergoing licensing. The very limited effective space for bearing plutonium required the design of small sample canisters to meet the needs of international safeguards for the shipment of plutonium samples. The applicability of a small canister to the sampling of small powder and solution samples has been tested in an intralaboratory experiment. The results of the experiment, based on the concept of pre-weighed samples, show that the tested canister can successfully be used for the sampling of small PuO2 powder samples of homogeneous source material, as well as for dried aliquots of plutonium nitrate solutions. (author)

  11. Self-absorption corrections of various sample-detector geometries in gamma-ray spectrometry using simple Monte Carlo simulations

    International Nuclear Information System (INIS)

    Ahmad Saat; Appleby, P.G.; Nolan, P.J.

    1997-01-01

    Corrections for self-absorption in gamma-ray spectrometry have been developed using a simple Monte Carlo simulation technique. The simulation enables the calculation of gamma-ray path lengths in the sample which, using available data, can be used to calculate self-absorption correction factors. The simulation was carried out for three sample geometries: disk, Marinelli beaker, and cylinder (for well-type detectors). Mathematical models and experimental measurements were used to evaluate the simulations, and good agreement, to within a few percent, was observed. The simulation results are also in good agreement with those reported in the literature. The simulation code was written in FORTRAN 90.
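
    The core of such a simulation is short: sample emission points in the source, compute each photon's path length through the sample, and average the transmission. The sketch below does this for a disk source with vertical paths to a far-field detector; the geometry and attenuation coefficient are illustrative assumptions, and a full simulation would trace oblique paths as well.

```python
import numpy as np

# Monte Carlo self-absorption sketch for a disk sample: average the photon
# survival probability over emission depths, then invert it to get the
# correction factor. Paths are taken as vertical (far-field detector).
rng = np.random.default_rng(42)
thickness_cm, mu = 2.5, 0.2                    # assumed geometry and 1/cm
z = rng.uniform(0.0, thickness_cm, 100_000)    # emission depth of each photon
transmission = np.exp(-mu * z).mean()          # mean survival probability
print(f"self-absorption correction factor: {1.0 / transmission:.3f}")
```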

  12. Correction for the absorption of plutonium alpha particles in filter paper used for dust sampling

    Energy Technology Data Exchange (ETDEWEB)

    Simons, J G

    1956-01-01

    The sample of air-borne dust collected on a filter paper when laboratory air is monitored for plutonium with the 1195 portable dust sampling unit may be regarded, for counting purposes, as a thick source with a non-uniform distribution of alpha-active plutonium. Experiments have been carried out to determine a correction factor to be applied to the observed count on the filter paper sample to correct for internal absorption in the paper and in the dust layer. From the results obtained, it is recommended that a correction factor of 2 be used.

  13. ANL small-sample calorimeter system design and operation

    International Nuclear Information System (INIS)

    Roche, C.T.; Perry, R.B.; Lewis, R.N.; Jung, E.A.; Haumann, J.R.

    1978-07-01

    The Small-Sample Calorimetric System is a portable instrument designed to measure the thermal power produced by radioactive decay of plutonium-containing fuels. The small-sample calorimeter is capable of measuring samples producing power up to 32 milliwatts at a rate of one sample every 20 min. The instrument is contained in two packages: a data-acquisition module consisting of a microprocessor with an 8K-byte nonvolatile memory, and a measurement module consisting of the calorimeter and a sample preheater. The total weight of the system is 18 kg

  14. Correction to the count-rate detection limit and sample/blank time-allocation methods

    International Nuclear Information System (INIS)

    Alvarez, Joseph L.

    2013-01-01

    A common form of count-rate detection limits contains a propagation-of-uncertainty error. This error originated in methods that sought to minimize the uncertainty in the subtraction of the blank counts from the gross sample counts by allocating blank and sample counting times. Correct uncertainty propagation shows that the time-allocation equations have no solution. This publication presents the correct form of count-rate detection limits. Highlights: the paper demonstrates a proper method of propagating the uncertainty of count-rate differences; shows that the standard count-rate detection limits, and the count-time allocation methods for minimum uncertainty, were in error; presents the correct form of the count-rate detection limit; and discusses the confusion between count-rate uncertainty and count uncertainty.
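
    For reference, the standard Poisson propagation for a net count rate, which must be carried through consistently in any detection-limit derivation: with gross counts $N_g$ in time $t_g$ and blank counts $N_b$ in time $t_b$,

$$R = \frac{N_g}{t_g} - \frac{N_b}{t_b}, \qquad \sigma_R = \sqrt{\frac{N_g}{t_g^2} + \frac{N_b}{t_b^2}}.$$

    This is the uncontroversial starting point; the paper's contribution is showing that the traditional time-allocation equations built on an incorrect version of this propagation have no solution.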

  15. Correcting sample drift using Fourier harmonics.

    Science.gov (United States)

    Bárcena-González, G; Guerrero-Lebrero, M P; Guerrero, E; Reyes, D F; Braza, V; Yañez, A; Nuñez-Moraleda, B; González, D; Galindo, P L

    2018-07-01

    During image acquisition of crystalline materials by high-resolution scanning transmission electron microscopy, sample drift can lead to distortions and shears that hinder quantitative analysis and characterization. In order to measure and correct this effect, several authors have proposed methodologies that make use of series of images. In this work, we introduce a methodology to determine the drift angle via Fourier analysis from a single image, based on measurements of the angles of the second Fourier harmonics in different quadrants. Two different approaches, both independent of the angle of acquisition of the image, are evaluated. In addition, our results demonstrate that the determination of the drift angle is more accurate when using measurements from non-consecutive quadrants if the angle of acquisition is an odd multiple of 45°. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. An introduction to Bartlett correction and bias reduction

    CERN Document Server

    Cordeiro, Gauss M

    2014-01-01

    This book presents a concise introduction to Bartlett and Bartlett-type corrections of statistical tests and bias correction of point estimators. The underlying idea behind both groups of corrections is to obtain higher accuracy in small samples. While the main focus is on corrections that can be analytically derived, the authors also present alternative strategies for improving estimators and tests based on bootstrap, a data resampling technique, and discuss concrete applications to several important statistical models.
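
    In its usual form, the idea is simple: if a likelihood-ratio statistic $w$ with $q$ degrees of freedom satisfies $E(w) = q\,(1 + b/n + O(n^{-2}))$, then the Bartlett-corrected statistic

$$w^* = \frac{w}{1 + b/n}$$

    follows the $\chi^2_q$ reference distribution more closely in small samples.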

  17. Detector to detector corrections: a comprehensive experimental study of detector specific correction factors for beam output measurements for small radiotherapy beams

    DEFF Research Database (Denmark)

    Azangwe, Godfrey; Grochowska, Paulina; Georg, Dietmar

    2014-01-01

    -doped aluminium oxide (Al2O3:C), organic plastic scintillators, diamond detectors, liquid filled ion chamber, and a range of small volume air filled ionization chambers (volumes ranging from 0.002 cm³ to 0.3 cm³). All detector measurements were corrected for the volume averaging effect and compared with dose ratios...... measurements, the authors recommend the use of detectors that require relatively little correction, such as unshielded diodes, diamond detectors or microchambers, and solid state detectors such as alanine, TLD, Al2O3:C, or scintillators....

  18. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    Science.gov (United States)

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding the reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, for widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that the permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found that reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.

  19. An experimental verification of laser-velocimeter sampling bias and its correction

    Science.gov (United States)

    Johnson, D. A.; Modarress, D.; Owen, F. K.

    1982-01-01

    The existence of 'sampling bias' in individual-realization laser velocimeter measurements is experimentally verified and shown to be independent of sample rate. The experiments were performed in a simple two-stream mixing shear flow with the standard for comparison being laser-velocimeter results obtained under continuous-wave conditions. It is also demonstrated that the errors resulting from sampling bias can be removed by a proper interpretation of the sampling statistics. In addition, data obtained in a shock-induced separated flow and in the near-wake of airfoils are presented, both bias-corrected and uncorrected, to illustrate the effects of sampling bias in the extreme.
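
    One classical correction of this kind, shown here only to illustrate the "proper interpretation of the sampling statistics" the record refers to, is inverse-velocity weighting in the McLaughlin-Tiederman spirit: faster particles cross the probe volume more often, so each realization is weighted by 1/|u|. The data below are synthetic.

```python
import numpy as np

# Velocity-bias demo: arrival rate ~ |u|, so the raw mean is biased high;
# weighting each realization by 1/|u| recovers the true mean.
rng = np.random.default_rng(0)
u_true = rng.normal(10.0, 3.0, 200_000)                      # velocity population
p_detect = np.abs(u_true) / np.abs(u_true).max()             # detection ~ |u|
u_meas = u_true[rng.uniform(size=u_true.size) < p_detect]    # biased sample

w = 1.0 / np.abs(u_meas)
print("biased mean:   ", u_meas.mean())
print("corrected mean:", np.sum(w * u_meas) / np.sum(w))
print("true mean:     ", u_true.mean())
```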

  20. Efficiency and attenuation correction factors determination in gamma spectrometric assay of bulk samples using self radiation

    International Nuclear Information System (INIS)

    Haddad, Kh.

    2009-02-01

    Gamma spectrometry is the most important and capable tool for measuring radioactive materials. Determination of the efficiency and attenuation correction factors is the most tedious problem in the gamma spectrometric assay of bulk samples. A new, simple experimental method for determining these correction factors using the sample's own radiation is proposed in this work. An experimental study of the correlation between the self-attenuation correction factor and sample thickness, together with its practical application, is also introduced. The work was performed on NORM and uranyl nitrate bulk samples. The results of the proposed method agree with those of traditional ones. (author)

  1. Development of electric discharge equipment for small specimen sampling

    International Nuclear Information System (INIS)

    Okamoto, Koji; Kitagawa, Hideaki; Kusumoto, Junichi; Kanaya, Akihiro; Kobayashi, Toshimi

    2009-01-01

    We have developed on-site electric discharge sampling equipment that can effectively take samples, such as small specimens, from the surface of plant components. Compared with conventional sampling equipment, our equipment can take samples that are thinner in depth and larger in area. In addition, the effect on the component can be kept to a minimum, and the thermally affected zone produced in the material by the electric discharge is small enough to be neglected. Our equipment is therefore well suited to taking samples for various tests, such as residual life evaluation.

  2. Accelerator mass spectrometry of small biological samples.

    Science.gov (United States)

    Salehpour, Mehran; Forsgard, Niklas; Possnert, Göran

    2008-12-01

    Accelerator mass spectrometry (AMS) is an ultra-sensitive technique for isotopic ratio measurements. In the biomedical field, AMS can be used to measure femtomolar concentrations of labeled drugs in body fluids, with direct applications in early drug development such as microdosing. Likewise, the regenerative properties of cells, which are of fundamental significance in stem-cell research, can be determined with an accuracy of a few years by AMS analysis of human DNA. However, AMS nominally requires about 1 mg of carbon per sample, which is not always available when dealing with specific body substances such as localized, organ-specific DNA samples. Consequently, it is of analytical interest to develop methods for the routine analysis of small samples in the range of a few tens of μg. We have used a 5 MV Pelletron tandem accelerator to study small biological samples using AMS. Different methods are presented and compared. A 12C-carrier sample preparation method is described which is potentially more sensitive and less susceptible to contamination than the standard procedures.

  3. Identification and Correction of Sample Mix-Ups in Expression Genetic Data: A Case Study.

    Science.gov (United States)

    Broman, Karl W; Keller, Mark P; Broman, Aimee Teo; Kendziorski, Christina; Yandell, Brian S; Sen, Śaunak; Attie, Alan D

    2015-08-19

    In a mouse intercross with more than 500 animals and genome-wide gene expression data on six tissues, we identified a high proportion (18%) of sample mix-ups in the genotype data. Local expression quantitative trait loci (eQTL; genetic loci influencing gene expression) with extremely large effect were used to form a classifier to predict an individual's eQTL genotype based on expression data alone. By considering multiple eQTL and their related transcripts, we identified numerous individuals whose predicted eQTL genotypes (based on their expression data) did not match their observed genotypes, and then went on to identify other individuals whose genotypes did match the predicted eQTL genotypes. The concordance of predictions across six tissues indicated that the problem was due to mix-ups in the genotypes (although we further identified a small number of sample mix-ups in each of the six panels of gene expression microarrays). Consideration of the plate positions of the DNA samples indicated a number of off-by-one and off-by-two errors, likely the result of pipetting errors. Such sample mix-ups can be a problem in any genetic study, but eQTL data allow us to identify, and even correct, such problems. Our methods have been implemented in an R package, R/lineup. Copyright © 2015 Broman et al.

  4. Motion correction in simultaneous PET/MR brain imaging using sparsely sampled MR navigators

    DEFF Research Database (Denmark)

    Keller, Sune H; Hansen, Casper; Hansen, Christian

    2015-01-01

    BACKGROUND: We present a study performing motion correction (MC) of PET using MR navigators sampled between other protocolled MR sequences during simultaneous PET/MR brain scanning, with the purpose of evaluating its clinical feasibility and the potential improvement of image quality. FINDINGS: Twenty-nine human subjects had a 30-min [11C]-PiB PET scan with simultaneous MR, including 3D navigators sampled at six time points, which were used to correct the PET image for rigid head motion. Five subjects with motion greater than 4 mm were reconstructed into six frames (one for each navigator...

  5. Improvements to the Chebyshev expansion of attenuation correction factors for cylindrical samples

    International Nuclear Information System (INIS)

    Mildner, D.F.R.; Carpenter, J.M.

    1990-01-01

    The accuracy of the Chebyshev expansion coefficients used for the calculation of attenuation correction factors for cylindrical samples has been improved. An increased order of expansion allows the method to be used over a greater range of attenuation. It is shown that many of these coefficients are exactly zero, others are rational numbers, and others are rational fractions of π⁻¹. The assumptions of Sears in his asymptotic expression for the attenuation correction factor are also examined. (orig.)

  6. Implementation of Cascade Gamma and Positron Range Corrections for I-124 Small Animal PET

    Science.gov (United States)

    Harzmann, S.; Braun, F.; Zakhnini, A.; Weber, W. A.; Pietrzyk, U.; Mix, M.

    2014-02-01

    Small animal Positron Emission Tomography (PET) should provide accurate quantification of regional radiotracer concentrations and high spatial resolution. This is challenging for non-pure positron emitters with high positron endpoint energies, such as I-124: on the one hand, the cascade gammas emitted from this isotope can produce coincidence events with the 511 keV annihilation photons, leading to quantification errors; on the other hand, the long range of the high-energy positron degrades spatial resolution. This paper presents the implementation of a comprehensive correction technique for both of these effects. The established corrections include a modified sinogram-based tail-fitting approach to correct for scatter, random and cascade gamma coincidences and a compensation for resolution degradation effects during the image reconstruction. Resolution losses were compensated for by an iterative algorithm which incorporates a convolution kernel derived from line source measurements for the microPET Focus 120 system. The entire processing chain for these corrections was implemented, whereas previous work has only addressed parts of this process. Monte Carlo simulations with GATE and measurements of mice with the microPET Focus 120 show that the proposed method reduces absolute quantification errors on average to 2.6% compared to 15.6% for the ordinary Maximum Likelihood Expectation Maximization algorithm. Furthermore, resolution was improved on the order of 11-29% depending on the number of convolution iterations. In summary, a comprehensive, fast and robust algorithm for the correction of small animal PET studies with I-124 was developed which improves quantitative accuracy and spatial resolution.

  7. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers

    OpenAIRE

    Dobie, Robert A; Wojcik, Nancy C

    2015-01-01

    Objectives The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved....

  8. Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies

    Science.gov (United States)

    Theis, Fabian J.

    2017-01-01

    Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers on nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss consequences of inappropriate distribution assumptions and reason for different behaviors between the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
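
    The inverse-probability idea behind both proposed methods can be sketched generically: rows that entered phase two with inclusion probability π are re-drawn with weight 1/π so the training set again resembles the source population. This is a plain stand-in for the idea, not the stochastic oversampling or parametric bagging implemented in the authors' sambia package.

```python
import numpy as np

# Generic inverse-probability resampling for two-phase case-control data.
def ip_resample(X, y, pi, n_out, seed=0):
    rng = np.random.default_rng(seed)
    w = 1.0 / np.asarray(pi)                     # weight = 1 / inclusion prob.
    idx = rng.choice(len(y), size=n_out, replace=True, p=w / w.sum())
    return X[idx], y[idx]

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = rng.integers(0, 2, size=500)
pi = np.where(y == 1, 0.9, 0.1)                  # cases oversampled at phase two
X_r, y_r = ip_resample(X, y, pi, n_out=2000)
print("case fraction before:", y.mean(), "after:", y_r.mean())
```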

  9. Estimation for small domains in double sampling for stratification ...

    African Journals Online (AJOL)

    In this article, we investigate the effect of randomness of the size of a small domain on the precision of an estimator of mean for the domain under double sampling for stratification. The result shows that for a small domain that cuts across various strata with unknown weights, the sampling variance depends on the within ...

  10. Determination of the self-attenuation correction factor for environmental samples analysis in gamma spectrometry

    International Nuclear Information System (INIS)

    Santos, Talita O.; Rocha, Zildete; Knupp, Eliana A.N.; Kastner, Geraldo F.; Oliveira, Arno H. de

    2015-01-01

    The gamma spectrometry technique has been used to obtain the activity concentrations of natural and artificial radionuclides in environmental samples of different origins, compositions and densities. These sample characteristics may influence the calibration conditions through the self-attenuation effect, with the sample density considered the most important factor. For reliable results, it is necessary to determine the self-attenuation correction factor, which has been a subject of great interest because of its effect on the activity concentration. In this context, the aim of this work is to present the calibration process, including the correction for self-attenuation, used in evaluating the concentration of each radionuclide with an HPGe gamma spectrometry detector system. (author)

  11. Pulsed Direct Current Electrospray: Enabling Systematic Analysis of Small Volume Sample by Boosting Sample Economy.

    Science.gov (United States)

    Wei, Zhenwei; Xiong, Xingchuang; Guo, Chengan; Si, Xingyu; Zhao, Yaoyao; He, Muyi; Yang, Chengdui; Xu, Wei; Tang, Fei; Fang, Xiang; Zhang, Sichun; Zhang, Xinrong

    2015-11-17

    We developed pulsed direct current electrospray ionization mass spectrometry (pulsed-dc-ESI-MS) for systematically profiling and determining components in small volume samples. Pulsed-dc-ESI utilizes a constant high voltage to remotely induce the generation of single-polarity pulsed electrospray. This method significantly boosts sample economy, yielding several minutes of MS signal from a mere picoliter-volume sample. The elongated MS signal duration enables us to collect abundant MS(2) information on components of interest in a small volume sample for systematic analysis. The method was successfully applied to single-cell metabolomics analysis. We obtained 2-D profiles of metabolites (including exact mass and MS(2) data) from single plant and mammalian cells, covering 1034 components and 656 components for Allium cepa and HeLa cells, respectively. Further identification found 162 compounds and 28 different modification groups of 141 saccharides in a single Allium cepa cell, indicating that pulsed-dc-ESI is a powerful tool for the systematic analysis of small volume samples.

  12. [Progress in sample preparation and analytical methods for trace polar small molecules in complex samples].

    Science.gov (United States)

    Zhang, Qianchun; Luo, Xialin; Li, Gongke; Xiao, Xiaohua

    2015-09-01

    Small polar molecules such as nucleosides, amines, amino acids are important analytes in biological, food, environmental, and other fields. It is necessary to develop efficient sample preparation and sensitive analytical methods for rapid analysis of these polar small molecules in complex matrices. Some typical materials in sample preparation, including silica, polymer, carbon, boric acid and so on, are introduced in this paper. Meanwhile, the applications and developments of analytical methods of polar small molecules, such as reversed-phase liquid chromatography, hydrophilic interaction chromatography, etc., are also reviewed.

  13. Integrating sphere based reflectance measurements for small-area semiconductor samples

    Science.gov (United States)

    Saylan, S.; Howells, C. T.; Dahlem, M. S.

    2018-05-01

    This article describes a method that enables reflectance spectroscopy of small semiconductor samples using an integrating sphere, without the use of additional optical elements. We employed an inexpensive sample holder to measure the reflectance of different samples through 2-, 3-, and 4.5-mm-diameter apertures and applied a mathematical formulation to remove the bias in the measured spectra caused by illumination of the holder. Using the proposed method, the reflectance of samples fabricated using expensive or rare materials and/or low-throughput processes can be measured. It can also be used to infer the internal quantum efficiency of small-area, research-level solar cells. Moreover, small samples that reflect light at large angles or scatter strongly may also be measured reliably, owing to the integrating sphere's insensitivity to directionality.

  14. Decision Support on Small size Passive Samples

    Directory of Open Access Journals (Sweden)

    Vladimir Popukaylo

    2018-05-01

    Full Text Available A technique for constructing adequate mathematical models for small-size passive samples, under conditions where classical probabilistic-statistical methods do not allow valid conclusions to be drawn, was developed.

  15. Correction of MRI-induced geometric distortions in whole-body small animal PET-MRI

    International Nuclear Information System (INIS)

    Frohwein, Lynn J.; Schäfers, Klaus P.; Hoerr, Verena; Faber, Cornelius

    2015-01-01

    Purpose: The fusion of positron emission tomography (PET) and magnetic resonance imaging (MRI) data can be a challenging task in whole-body PET-MRI. The quality of the registration between these two modalities in large field-of-views (FOV) is often degraded by geometric distortions of the MRI data. The distortions at the edges of large FOVs mainly originate from MRI gradient nonlinearities. This work describes a method to measure and correct for these kind of geometric distortions in small animal MRI scanners to improve the registration accuracy of PET and MRI data. Methods: The authors have developed a geometric phantom which allows the measurement of geometric distortions in all spatial axes via control points. These control points are detected semiautomatically in both PET and MRI data with a subpixel accuracy. The spatial transformation between PET and MRI data is determined with these control points via 3D thin-plate splines (3D TPS). The transformation derived from the 3D TPS is finally applied to real MRI mouse data, which were acquired with the same scan parameters used in the phantom data acquisitions. Additionally, the influence of the phantom material on the homogeneity of the magnetic field is determined via field mapping. Results: The spatial shift according to the magnetic field homogeneity caused by the phantom material was determined to a mean of 0.1 mm. The results of the correction show that distortion with a maximum error of 4 mm could be reduced to less than 1 mm with the proposed correction method. Furthermore, the control point-based registration of PET and MRI data showed improved congruence after correction. Conclusions: The developed phantom has been shown to have no considerable negative effect on the homogeneity of the magnetic field. The proposed method yields an appropriate correction of the measured MRI distortion and is able to improve the PET and MRI registration. Furthermore, the method is applicable to whole-body small animal
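
    The control-point mapping step can be sketched with an off-the-shelf thin-plate-spline interpolator: fit a 3D warp from distorted control-point positions to their true phantom positions, then apply it to arbitrary coordinates. The points below are synthetic stand-ins for the phantom measurements.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Fit a 3D thin-plate-spline warp from distorted to true control points.
rng = np.random.default_rng(0)
true_pts = rng.uniform(-30.0, 30.0, size=(60, 3))     # mm, phantom control grid
distorted = true_pts + 0.002 * true_pts**2            # toy gradient nonlinearity
warp = RBFInterpolator(distorted, true_pts, kernel="thin_plate_spline")

# Applying the warp to distorted coordinates recovers the true positions.
print(np.round(warp(distorted[:5]) - true_pts[:5], 3))
```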

  16. Correction of MRI-induced geometric distortions in whole-body small animal PET-MRI

    Energy Technology Data Exchange (ETDEWEB)

    Frohwein, Lynn J., E-mail: frohwein@uni-muenster.de; Schäfers, Klaus P. [European Institute for Molecular Imaging, University of Münster, Münster 48149 (Germany); Hoerr, Verena; Faber, Cornelius [Department of Clinical Radiology, University Hospital of Münster, Münster 48149 (Germany)

    2015-07-15

    Purpose: The fusion of positron emission tomography (PET) and magnetic resonance imaging (MRI) data can be a challenging task in whole-body PET-MRI. The quality of the registration between these two modalities in large field-of-views (FOV) is often degraded by geometric distortions of the MRI data. The distortions at the edges of large FOVs mainly originate from MRI gradient nonlinearities. This work describes a method to measure and correct for these kind of geometric distortions in small animal MRI scanners to improve the registration accuracy of PET and MRI data. Methods: The authors have developed a geometric phantom which allows the measurement of geometric distortions in all spatial axes via control points. These control points are detected semiautomatically in both PET and MRI data with a subpixel accuracy. The spatial transformation between PET and MRI data is determined with these control points via 3D thin-plate splines (3D TPS). The transformation derived from the 3D TPS is finally applied to real MRI mouse data, which were acquired with the same scan parameters used in the phantom data acquisitions. Additionally, the influence of the phantom material on the homogeneity of the magnetic field is determined via field mapping. Results: The spatial shift according to the magnetic field homogeneity caused by the phantom material was determined to a mean of 0.1 mm. The results of the correction show that distortion with a maximum error of 4 mm could be reduced to less than 1 mm with the proposed correction method. Furthermore, the control point-based registration of PET and MRI data showed improved congruence after correction. Conclusions: The developed phantom has been shown to have no considerable negative effect on the homogeneity of the magnetic field. The proposed method yields an appropriate correction of the measured MRI distortion and is able to improve the PET and MRI registration. Furthermore, the method is applicable to whole-body small animal

  17. SU-C-304-07: Are Small Field Detector Correction Factors Strongly Dependent On Machine-Specific Characteristics?

    International Nuclear Information System (INIS)

    Mathew, D; Tanny, S; Parsai, E; Sperling, N

    2015-01-01

    Purpose: The current small field dosimetry formalism utilizes quality correction factors to compensate for the difference in detector response relative to dose deposited in water. The correction factors are defined on a machine-specific basis for each beam quality and detector combination. Some research has suggested that the correction factors may only be weakly dependent on machine-to-machine variations, allowing for determinations of class-specific correction factors for various accelerator models. This research examines the differences in small field correction factors for three detectors across two Varian Truebeam accelerators to determine the correction factor dependence on machine-specific characteristics. Methods: Output factors were measured on two Varian Truebeam accelerators for equivalently tuned 6 MV and 6 FFF beams. Measurements were obtained using a commercial plastic scintillation detector (PSD), two ion chambers, and a diode detector. Measurements were made at a depth of 10 cm with an SSD of 100 cm for jaw-defined field sizes ranging from 3×3 cm² to 0.6×0.6 cm², normalized to values at 5×5 cm². Correction factors for each field on each machine were calculated as the ratio of the detector response to the PSD response. Percent changes of the correction factors for the chambers are presented relative to the primary machine. Results: The Exradin A26 demonstrates a difference of 9% for 6×6 mm² fields in both the 6FFF and 6MV beams. The A16 chamber demonstrates a 5% and 3% difference in 6FFF and 6MV fields at the same field size, respectively. The Edge diode exhibits less than 1.5% difference across both evaluated energies. Field sizes larger than 1.4×1.4 cm² demonstrated less than 1% difference for all detectors. Conclusion: Preliminary results suggest that class-specific correction may not be appropriate for micro-ionization chambers. For diode systems, the correction factor was substantially similar and may be useful for class-specific reference

  18. SU-C-304-07: Are Small Field Detector Correction Factors Strongly Dependent On Machine-Specific Characteristics?

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, D; Tanny, S; Parsai, E; Sperling, N [University of Toledo Medical Center, Toledo, OH (United States)

    2015-06-15

    Purpose: The current small field dosimetry formalism utilizes quality correction factors to compensate for the difference in detector response relative to dose deposited in water. The correction factors are defined on a machine-specific basis for each beam quality and detector combination. Some research has suggested that the correction factors may only be weakly dependent on machine-to-machine variations, allowing for determinations of class-specific correction factors for various accelerator models. This research examines the differences in small field correction factors for three detectors across two Varian Truebeam accelerators to determine the correction factor dependence on machine-specific characteristics. Methods: Output factors were measured on two Varian Truebeam accelerators for equivalently tuned 6 MV and 6 FFF beams. Measurements were obtained using a commercial plastic scintillation detector (PSD), two ion chambers, and a diode detector. Measurements were made at a depth of 10 cm with an SSD of 100 cm for jaw-defined field sizes ranging from 3×3 cm² to 0.6×0.6 cm², normalized to values at 5×5 cm². Correction factors for each field on each machine were calculated as the ratio of the detector response to the PSD response. Percent changes of the correction factors for the chambers are presented relative to the primary machine. Results: The Exradin A26 demonstrates a difference of 9% for 6×6 mm² fields in both the 6FFF and 6MV beams. The A16 chamber demonstrates a 5% and 3% difference in 6FFF and 6MV fields at the same field size, respectively. The Edge diode exhibits less than 1.5% difference across both evaluated energies. Field sizes larger than 1.4×1.4 cm² demonstrated less than 1% difference for all detectors. Conclusion: Preliminary results suggest that class-specific correction may not be appropriate for micro-ionization chambers. For diode systems, the correction factor was substantially similar and may be useful for class

  19. Use of calibration standards and the correction for sample self-attenuation in gamma-ray nondestructive assay

    International Nuclear Information System (INIS)

    Parker, J.L.

    1984-08-01

    The efficient use of appropriate calibration standards and the correction for the attenuation of the gamma rays within an assay sample by the sample itself are two important and closely related subjects in gamma-ray nondestructive assay. Much research relating to those subjects has been done in the Nuclear Safeguards Research and Development program at the Los Alamos National Laboratory since 1970. This report brings together most of the significant results of that research. Also discussed are the nature of appropriate calibration standards and the necessary conditions on the composition, size, and shape of the samples to allow accurate assays. Procedures for determining the correction for the sample self-attenuation are described at length including both general principles and several specific useful cases. The most useful concept is that knowing the linear attenuation coefficient of the sample (which can usually be determined) and the size and shape of the sample and its position relative to the detector permits the computation of the correction factor for the self-attenuation. A major objective of the report is to explain how the procedures for determining the self-attenuation correction factor can be applied so that calibration standards can be entirely appropriate without being particularly similar, either physically or chemically, to the items to be assayed. This permits minimization of the number of standards required to assay items with a wide range of size, shape, and chemical composition. 17 references, 18 figures, 2 tables
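
    One of the specific useful cases referred to above has a closed form worth quoting: for a far-field, slab-shaped sample of thickness $x$ and linear attenuation coefficient $\mu_\ell$, viewed perpendicular to its face, the self-attenuation correction factor is

$$CF_{\mathrm{atten}} = \frac{\mu_\ell\, x}{1 - e^{-\mu_\ell x}},$$

    so a measured or computed $\mu_\ell$ is all that is needed to correct an assay against standards of quite different composition.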

  20. Empirical method for matrix effects correction in liquid samples

    International Nuclear Information System (INIS)

    Vigoda de Leyt, Dora; Vazquez, Cristina

    1987-01-01

    A simple method for the determination of Cr, Ni and Mo in stainless steels is presented. In order to minimize matrix effects, a liquid-dissolution procedure for stainless steel chips has been developed. Pure element solutions were used as standards. The preparation of synthetic solutions containing all the elements of the steel, and also mathematical corrections, are avoided. The result is a simple chemical operation which simplifies the method of analysis. The variance analysis of the results obtained with steel samples shows that the three elements may be determined by comparison with the analytical curves obtained with the pure elements if the same parameters are used in the calibration curves. The accuracy and the precision were checked against other techniques using the British Chemical Standards of the Bureau of Analysed Samples Ltd. (England). (M.E.L.) [es

  1. Multi-element analysis of small biological samples

    International Nuclear Information System (INIS)

    Rokita, E.; Cafmeyer, J.; Maenhaut, W.

    1983-01-01

    A method combining PIXE and INAA was developed to determine the elemental composition of small biological samples. The method requires virtually no sample preparation, and less than 1 mg is sufficient for the analysis. The method was used to determine up to 18 elements in leaves of herbaceous plants from Cracow. The factors which influence the elemental composition of leaves and the possible use of leaves as an environmental pollution indicator are discussed.

  2. Mechanical characteristics of historic mortars from tests on small-sample non-standard specimens

    Czech Academy of Sciences Publication Activity Database

    Drdácký, Miloš; Slížková, Zuzana

    2008-01-01

    Roč. 17, č. 1 (2008), s. 20-29 ISSN 1407-7353 R&D Projects: GA ČR(CZ) GA103/06/1609 Institutional research plan: CEZ:AV0Z20710524 Keywords : small-sample non-standard testing * lime * historic mortar Subject RIV: AL - Art, Architecture, Cultural Heritage

  3. Fast shading correction for cone beam CT in radiation therapy via sparse sampling on planning CT.

    Science.gov (United States)

    Shi, Linxi; Tsui, Tiffany; Wei, Jikun; Zhu, Lei

    2017-05-01

    The image quality of cone beam computed tomography (CBCT) is limited by severe shading artifacts, hindering its quantitative applications in radiation therapy. In this work, we propose an image-domain shading correction method that uses the planning CT (pCT) as prior information and is highly adaptive to the clinical environment. We propose to perform shading correction via sparse sampling on the pCT. The method starts with a coarse mapping between the first-pass CBCT images obtained from the Varian TrueBeam system and the pCT. The scatter correction method embedded in the Varian commercial software removes some image errors, but the CBCT images still contain severe shading artifacts. The difference images between the mapped pCT and the CBCT are considered as shading errors, but only sparse shading samples are selected for correction, using empirical constraints to avoid carrying over false information from the pCT. A Fourier-transform-based technique, referred to as local filtration, is proposed to efficiently process the sparse data for effective shading correction. The performance of the proposed method is evaluated on one anthropomorphic pelvis phantom and 17 patients who were scheduled for radiation therapy. (The code for the proposed method and sample data can be downloaded from https://sites.google.com/view/linxicbct.) RESULTS: The proposed shading correction substantially improves the CBCT image quality on both the phantom and the patients, to a level close to that of the pCT images. On the phantom, the spatial nonuniformity (SNU) difference between CBCT and pCT is reduced from 74 to 1 HU. The root mean square difference of SNU between CBCT and pCT is reduced from 83 to 10 HU for the pelvis patients, and from 101 to 12 HU for the thorax patients. The robustness of the proposed shading correction is fully investigated with simulated registration errors between CBCT and pCT on the phantom and mis-registration on patients. The sparse sampling scheme of our method successfully
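
    The overall pipeline shape, sparse trusted samples of the shading error turned into a smooth correction field, can be illustrated with a much simpler estimator than the paper's Fourier-based local filtration: a normalized Gaussian convolution. Everything below is synthetic and stands in for the pCT/CBCT pair.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Estimate a smooth shading field from values known only at sparse samples
# (normalized convolution), then subtract it. A simple stand-in for the
# paper's local filtration; all images here are synthetic.
def field_from_sparse(samples, mask, sigma=20.0):
    num = gaussian_filter(samples * mask, sigma)
    den = gaussian_filter(mask.astype(float), sigma)
    return num / np.maximum(den, 1e-6)

rng = np.random.default_rng(0)
truth = rng.normal(0.0, 10.0, (256, 256))                    # stands in for mapped pCT
shading = np.linspace(-40.0, 40.0, 256) * np.ones((256, 1))  # synthetic shading field
cbct = truth + shading                                       # CBCT = truth + shading
mask = rng.uniform(size=cbct.shape) < 0.05                   # sparse trusted samples
corrected = cbct - field_from_sparse(cbct - truth, mask)
print("residual shading RMS:", np.sqrt(((corrected - truth) ** 2).mean()))
```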

  4. Testing of Small Graphite Samples for Nuclear Qualification

    Energy Technology Data Exchange (ETDEWEB)

    Julie Chapman

    2010-11-01

    Accurately determining the mechanical properties of small irradiated samples is crucial to predicting the behavior of the overall irradiated graphite components within a Very High Temperature Reactor. The sample size allowed in a material test reactor, however, is limited, and this poses some difficulties with respect to mechanical testing. In the case of graphite with a larger grain size, a small sample may exhibit characteristics not representative of the bulk material, leading to inaccuracies in the data. A study to determine a potential size effect on the tensile strength was pursued under the Next Generation Nuclear Plant program. It focuses first on optimizing the tensile testing procedure identified in the American Society for Testing and Materials (ASTM) Standard C 781-08. Once the testing procedure was verified, a size effect was assessed by gradually reducing the diameter of the specimens. By monitoring the material response, a size effect was successfully identified.

  5. Multivariate correction in laser-enhanced ionization with laser sampling

    International Nuclear Information System (INIS)

    Popov, A.M.; Labutin, T.A.; Sychev, D.N.; Gorbatenko, A.A.; Zorov, N.B.

    2007-01-01

    The opportunity of normalizing laser-enhanced ionization (LEI) signals by several reference signals (RS) measured simultaneously has been examined with a view to correcting for variations of laser parameters and matrix interferences. Opto-acoustic, atomic emission and non-selective ionization signals and their paired combinations were used as RS for Li determination in aluminum alloys (0-6% Mg, 0-5% Cu, 0-1% Sc, 0-1% Ag). A specific normalization procedure for the case of essential multicollinearity among the RS has been proposed. LEI and RS for each definite ablation pulse energy were plotted in Cartesian co-ordinates (x and y axes - the RS values, z axis - LEI signal). It was found that in the three-dimensional space the slope of the correlation line to the plane of the RS depends on the analyte content in the solid sample. The use of this slope has therefore been proposed as a multivariate-corrected analytical signal. Multivariate correlative normalization provides an analytical signal free of matrix interferences for Al-Mg-Cu-Li alloys. The application of this novel approach to the determination of Li allows plotting unified calibration curves for Al alloys of different matrix composition.

  6. Multivariate correction in laser-enhanced ionization with laser sampling

    Energy Technology Data Exchange (ETDEWEB)

    Popov, A.M. [Department of Chemistry, M. V. Lomonosov Moscow State University, 119992 Russia Moscow GSP-2, Leninskie Gory 1 build.3 (Russian Federation); Labutin, T.A. [Department of Chemistry, M. V. Lomonosov Moscow State University, 119992 Russia Moscow GSP-2, Leninskie Gory 1 build.3 (Russian Federation)], E-mail: timurla@laser.chem.msu.ru; Sychev, D.N.; Gorbatenko, A.A.; Zorov, N.B. [Department of Chemistry, M. V. Lomonosov Moscow State University, 119992 Russia Moscow GSP-2, Leninskie Gory 1 build.3 (Russian Federation)

    2007-03-15

    The opportunity of normalizing laser-enhanced ionization (LEI) signals by several reference signals (RS) measured simultaneously has been examined with a view to correcting for variations of laser parameters and matrix interferences. Opto-acoustic, atomic emission and non-selective ionization signals and their paired combinations were used as RS for Li determination in aluminum alloys (0-6% Mg, 0-5% Cu, 0-1% Sc, 0-1% Ag). A specific normalization procedure for the case of essential multicollinearity among the RS has been proposed. LEI and RS for each definite ablation pulse energy were plotted in Cartesian co-ordinates (x and y axes - the RS values, z axis - LEI signal). It was found that in the three-dimensional space the slope of the correlation line to the plane of the RS depends on the analyte content in the solid sample. The use of this slope has therefore been proposed as a multivariate-corrected analytical signal. Multivariate correlative normalization provides an analytical signal free of matrix interferences for Al-Mg-Cu-Li alloys. The application of this novel approach to the determination of Li allows plotting unified calibration curves for Al alloys of different matrix composition.

  7. The use of calibration standards and the correction for sample self-attenuation in gamma-ray nondestructive assay

    International Nuclear Information System (INIS)

    Parker, J.L.

    1986-11-01

    The efficient use of appropriate calibration standards and the correction for the attenuation of the gamma rays within an assay sample by the sample itself are two important and closely related subjects in gamma-ray nondestructive assay. Much research relating to those subjects has been done in the Nuclear Safeguards Research and Development program at the Los Alamos National Laboratory since 1970. This report brings together most of the significant results of that research. Also discussed are the nature of appropriate calibration standards and the necessary conditions on the composition, size, and shape of the samples to allow accurate assays. Procedures for determining the correction for the sample self-attenuation are described at length including both general principles and several specific useful cases. The most useful concept is that knowing the linear attenuation coefficient of the sample (which can usually be determined) and the size and shape of the sample and its position relative to the detector permits the computation of the correction factor for the self-attenuation. A major objective of the report is to explain how the procedures for determining the self-attenuation correction factor can be applied so that calibration standards can be entirely appropriate without being particularly similar, either physically or chemically, to the items to be assayed. This permits minimization of the number of standards required to assay items with a wide range of size, shape, and chemical composition

  8. Vector analysis of high (≥3 diopters) astigmatism correction using small-incision lenticule extraction and laser in situ keratomileusis.

    Science.gov (United States)

    Chan, Tommy C Y; Wang, Yan; Ng, Alex L K; Zhang, Jiamei; Yu, Marco C Y; Jhanji, Vishal; Cheng, George P M

    2018-06-13

    To compare the astigmatic correction in high myopic astigmatism between small-incision lenticule extraction and laser in situ keratomileusis (LASIK) using vector analysis. Hong Kong Laser Eye Center, Hong Kong. Retrospective case series. Patients who had correction of myopic astigmatism of 3.0 diopters (D) or more and had either small-incision lenticule extraction or femtosecond laser-assisted LASIK were included. Only the left eye was included for analysis. Visual and refractive results were presented and compared between groups. The study comprised 105 patients (40 eyes in the small-incision lenticule extraction group and 65 eyes in the femtosecond laser-assisted LASIK group). The mean preoperative manifest cylinder was -3.42 ± 0.55 (SD) D in the small-incision lenticule extraction group and -3.47 ± 0.49 D in the LASIK group (P = .655). At 3 months, there was no significant between-group difference in uncorrected distance visual acuity (P = .915) and manifest spherical equivalent (P = .145). Ninety percent and 95.4% of eyes were within ±0.5 D of the attempted cylindrical correction for the small-incision lenticule extraction and LASIK groups, respectively (P = .423). Vector analysis showed comparable target-induced astigmatism (P = .709), surgically induced astigmatism vector (P = .449), difference vector (P = .335), and magnitude of error (P = .413) between groups. The absolute angle of error was 1.88 ± 2.25 degrees in the small-incision lenticule extraction group and 1.37 ± 1.58 degrees in the LASIK group (P = .217). Small-incision lenticule extraction offered astigmatic correction comparable to LASIK in eyes with high myopic astigmatism. Copyright © 2018 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  9. A Geology Sampling System for Small Bodies

    Science.gov (United States)

    Naids, Adam J.; Hood, Anthony D.; Abell, Paul; Graff, Trevor; Buffington, Jesse

    2016-01-01

    Human exploration of microgravity bodies is being investigated as a precursor to a Mars surface mission. Asteroids, comets, dwarf planets, and the moons of Mars all fall into this microgravity category and some are being discussed as potential mission targets. Obtaining geological samples for return to Earth will be a major objective for any mission to a small body. Currently, the knowledge base for geology sampling in microgravity is in its infancy. Humans interacting with non-engineered surfaces in microgravity environment pose unique challenges. In preparation for such missions a team at the NASA Johnson Space Center has been working to gain experience on how to safely obtain numerous sample types in such an environment. This paper describes the type of samples the science community is interested in, highlights notable prototype work, and discusses an integrated geology sampling solution.

  10. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples

    Science.gov (United States)

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-01

    Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because the measurements of the spectra may be performed on different instruments and the difference between the instruments must be corrected. For most calibration transfer methods, standard samples are necessary to construct the transfer model using the spectra of the samples measured on two instruments, referred to as the master and slave instruments, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. This fact makes the coefficients of the linear models constructed from the spectra measured on different instruments similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical applications.
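
    The abstract does not give the exact form of the constrained optimization, so the following sketch only illustrates the idea with an assumed quadratic penalty: keep the slave coefficients close in profile to the master coefficients while fitting a few slave-instrument spectra. The function lmc_transfer and the lam parameter are hypothetical.

        import numpy as np

        def lmc_transfer(b_master, X_slave, y_slave, lam=1.0):
            """Solve min_b ||y - X b||^2 + lam * ||b - b_master||^2.

            b_master: coefficient vector of the master PLS model.
            X_slave, y_slave: a few spectra measured on the slave instrument
            and their reference values."""
            n_vars = X_slave.shape[1]
            lhs = X_slave.T @ X_slave + lam * np.eye(n_vars)
            rhs = X_slave.T @ y_slave + lam * b_master
            return np.linalg.solve(lhs, rhs)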

  11. Receiver calibration and the nonlinearity parameter measurement of thick solid samples with diffraction and attenuation corrections.

    Science.gov (United States)

    Jeong, Hyunjo; Barnard, Daniel; Cho, Sungjong; Zhang, Shuzeng; Li, Xiongbing

    2017-11-01

    This paper presents analytical and experimental techniques for the accurate determination of the nonlinearity parameter (β) in thick solid samples. When piezoelectric transducers are used for β measurements, receiver calibration is required to determine the transfer function from which the absolute displacement can be calculated. The measured fundamental and second harmonic displacement amplitudes must then be modified to account for beam diffraction and material absorption. All these issues are addressed in this study and the proposed technique is validated through β measurements of thick solid samples. A simplified self-reciprocity calibration procedure for a broadband receiver is described. The diffraction and attenuation corrections for the fundamental and second harmonics are explicitly derived. Aluminum alloy samples in five different thicknesses (4, 6, 8, 10, and 12 cm) are prepared and β measurements are made using the finite amplitude, through-transmission method. The effects of diffraction and attenuation corrections on β measurements are systematically investigated. When the diffraction and attenuation corrections are all properly made, the variation of β between samples of different thickness is found to be less than 3.2%. Copyright © 2017 Elsevier B.V. All rights reserved.
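
    A minimal sketch of the underlying plane-wave relation, beta = 8*A2/(k^2*z*A1^2); the convention that the diffraction and attenuation corrections enter as multiplicative factors (unity meaning "already corrected") is an assumption here, since the paper derives the explicit expressions.

        import math

        def beta_nonlinearity(a1, a2, freq_hz, distance_m, velocity_m_s,
                              diff1=1.0, diff2=1.0, att1=1.0, att2=1.0):
            """beta = 8*A2 / (k^2 * z * A1^2) for plane longitudinal waves.

            a1, a2: measured fundamental / second-harmonic displacements (m).
            diff*, att*: diffraction and attenuation correction factors for
            the two harmonics (unity = already corrected)."""
            k = 2.0 * math.pi * freq_hz / velocity_m_s    # fundamental wavenumber
            a1c = a1 * diff1 * att1                       # corrected amplitudes
            a2c = a2 * diff2 * att2
            return 8.0 * a2c / (k * k * distance_m * a1c * a1c)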

  12. True coincidence summing correction determination for 214Bi principal gamma lines in NORM samples

    International Nuclear Information System (INIS)

    Haddad, Kh.

    2014-01-01

    The gamma lines at 609.3 and 1,120.3 keV are two of the most intense γ emissions of 214 Bi, but they suffer serious true coincidence summing (TCS) effects due to the complex decay scheme with multiple cascading transitions. TCS effects cause inaccurate count rates and hence erroneous results. A simple and easy experimental method for determination of the TCS corrections of the 214 Bi gamma lines was developed in this work using naturally occurring radioactive material samples. Height efficiency and self-attenuation corrections were determined as well. The developed method has been formulated theoretically and validated experimentally. The correction problems were solved simply, with neither an additional standard source nor simulation skills required. (author)

  13. Measurements of accurate x-ray scattering data of protein solutions using small stationary sample cells

    Science.gov (United States)

    Hong, Xinguo; Hao, Quan

    2009-01-01

    In this paper, we report a method of precise in situ x-ray scattering measurements on protein solutions using small stationary sample cells. Although reduction in the radiation damage induced by intense synchrotron radiation sources is indispensable for the correct interpretation of scattering data, there is still a lack of effective methods to overcome radiation-induced aggregation and extract scattering profiles free from chemical or structural damage. It is found that radiation-induced aggregation mainly begins on the surface of the sample cell and grows along the beam path; the diameter of the damaged region is comparable to the x-ray beam size. Radiation-induced aggregation can be effectively avoided by using a two-dimensional scan (2D mode), with an interval as small as 1.5 times the beam size, at low temperature (e.g., 4 °C). A radiation sensitive protein, bovine hemoglobin, was used to test the method. A standard deviation of less than 5% in the small angle region was observed from a series of nine spectra recorded in 2D mode, in contrast to the intensity variation seen using the conventional stationary technique, which can exceed 100%. Wide-angle x-ray scattering data were collected at a standard macromolecular diffraction station using the same data collection protocol and showed a good signal/noise ratio (better than the reported data on the same protein using a flow cell). The results indicate that this method is an effective approach for obtaining precise measurements of protein solution scattering.

  14. Measurements of accurate x-ray scattering data of protein solutions using small stationary sample cells

    International Nuclear Information System (INIS)

    Hong Xinguo; Hao Quan

    2009-01-01

    In this paper, we report a method of precise in situ x-ray scattering measurements on protein solutions using small stationary sample cells. Although reduction in the radiation damage induced by intense synchrotron radiation sources is indispensable for the correct interpretation of scattering data, there is still a lack of effective methods to overcome radiation-induced aggregation and extract scattering profiles free from chemical or structural damage. It is found that radiation-induced aggregation mainly begins on the surface of the sample cell and grows along the beam path; the diameter of the damaged region is comparable to the x-ray beam size. Radiation-induced aggregation can be effectively avoided by using a two-dimensional scan (2D mode), with an interval as small as 1.5 times the beam size, at low temperature (e.g., 4 °C). A radiation sensitive protein, bovine hemoglobin, was used to test the method. A standard deviation of less than 5% in the small angle region was observed from a series of nine spectra recorded in 2D mode, in contrast to the intensity variation seen using the conventional stationary technique, which can exceed 100%. Wide-angle x-ray scattering data were collected at a standard macromolecular diffraction station using the same data collection protocol and showed a good signal/noise ratio (better than the reported data on the same protein using a flow cell). The results indicate that this method is an effective approach for obtaining precise measurements of protein solution scattering.

  15. Small Mammal Sampling in Mortandad and Los Alamos Canyons, 2005

    International Nuclear Information System (INIS)

    Kathy Bennett; Sherri Sherwood; Rhonda Robinson

    2006-01-01

    As part of an ongoing ecological field investigation at Los Alamos National Laboratory, a study was conducted that compared measured contaminant concentrations in sediment to population parameters for small mammals in the Mortandad Canyon watershed. Mortandad Canyon and its tributary canyons have received contaminants from multiple solid waste management units and areas of concern since establishment of the Laboratory in the 1940s. The study included three reaches within Effluent and Mortandad canyons (E-1W, M-2W, and M-3) that had a spread in the concentrations of metals and radionuclides and included locations where polychlorinated biphenyls and perchlorate had been detected. A reference location, reach LA-BKG in upper Los Alamos Canyon, was also included in the study for comparison purposes. A small mammal study was initiated to assess whether potential adverse effects were evident in Mortandad Canyon due to the presence of contaminants, designated as contaminants of potential ecological concern, in the terrestrial media. Study sites, including the reference site, were sampled in late July/early August. Species diversity and the mean daily capture rate were the highest for E-1W reach and the lowest for the reference site. Species composition among the three reaches in Mortandad was similar with very little overlap with the reference canyon. Differences in species composition and diversity were most likely due to differences in habitat. Sex ratios, body weights, and reproductive status of small mammals were also evaluated. However, small sample sizes of some species within some sites affected the analysis. Ratios of males to females by species of each site (n = 5) were tested using a Chi-square analysis. No differences were detected. Where there was sufficient sample size, body weights of adult small mammals were compared between sites. No differences in body weights were found. Reproductive status of species appears to be similar across sites. However, sample

  16. Small Mammal Sampling in Mortandad and Los Alamos Canyons, 2005

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, Kathy; Sherwood, Sherri; Robinson, Rhonda

    2006-08-15

    As part of an ongoing ecological field investigation at Los Alamos National Laboratory, a study was conducted that compared measured contaminant concentrations in sediment to population parameters for small mammals in the Mortandad Canyon watershed. Mortandad Canyon and its tributary canyons have received contaminants from multiple solid waste management units and areas of concern since establishment of the Laboratory in the 1940s. The study included three reaches within Effluent and Mortandad canyons (E-1W, M-2W, and M-3) that had a spread in the concentrations of metals and radionuclides and included locations where polychlorinated biphenyls and perchlorate had been detected. A reference location, reach LA-BKG in upper Los Alamos Canyon, was also included in the study for comparison purposes. A small mammal study was initiated to assess whether potential adverse effects were evident in Mortandad Canyon due to the presence of contaminants, designated as contaminants of potential ecological concern, in the terrestrial media. Study sites, including the reference site, were sampled in late July/early August. Species diversity and the mean daily capture rate were the highest for E-1W reach and the lowest for the reference site. Species composition among the three reaches in Mortandad was similar with very little overlap with the reference canyon. Differences in species composition and diversity were most likely due to differences in habitat. Sex ratios, body weights, and reproductive status of small mammals were also evaluated. However, small sample sizes of some species within some sites affected the analysis. Ratios of males to females by species of each site (n = 5) were tested using a Chi-square analysis. No differences were detected. Where there was sufficient sample size, body weights of adult small mammals were compared between sites. No differences in body weights were found. Reproductive status of species appears to be similar across sites. However, sample

  17. Use of the small gas proportional counters for the carbon-14 measurement of very small samples

    International Nuclear Information System (INIS)

    Sayre, E.V.; Harbottle, G.; Stoenner, R.W.; Otlet, R.L.; Evans, G.V.

    1981-01-01

    Two recent developments are relevant: the first is the mass-spectrometric separation of 14 C and 12 C ions, followed by counting of the 14 C, while the second is the extension of conventional proportional counter operation, using CO 2 as counting gas, to very small counters and samples. Although the second method is slow (months of counting time are required for 10 mg of carbon), it does not require operator intervention and many samples may be counted simultaneously. Also, it costs only a fraction of the capital expense of an accelerator installation. The development, construction and operation of suitable small counters are described, and results of three actual dating studies involving milligram-scale carbon samples are given. None of these could have been carried out if conventional, gram-sized samples had been needed. New installations, based on the use of these counters, are under construction or in the planning stages. These are located at Brookhaven Laboratory, the National Bureau of Standards (USA) and Harwell (UK). The Harwell installation, which is in an advanced stage of construction, is described in outline. The main significance of the small-counter method is that, although it will not suffice to measure the smallest (much less than 10 mg) or oldest samples, it will permit existing radiocarbon laboratories to extend their capability considerably in the direction of smaller samples, at modest expense.

  18. Secondary School Students' Reasoning about Conditional Probability, Samples, and Sampling Procedures

    Science.gov (United States)

    Prodromou, Theodosia

    2016-01-01

    In the Australian mathematics curriculum, Year 12 students (aged 16-17) are asked to solve conditional probability problems that involve the representation of the problem situation with two-way tables or three-dimensional diagrams and consider sampling procedures that result in different correct answers. In a small exploratory study, we…

  19. Local heterogeneity effects on small-sample worths

    International Nuclear Information System (INIS)

    Schaefer, R.W.

    1986-01-01

    One of the parameters usually measured in a fast reactor critical assembly is the reactivity associated with inserting a small sample of a material into the core (the sample worth). Local heterogeneities introduced by the worth measurement technique can have a significant effect on the sample worth. Unfortunately, the capability is lacking to model some of the heterogeneity effects associated with the experimental technique traditionally used at ANL (the radial tube technique). It has been suggested that these effects could account for a large portion of what remains of the longstanding central worth discrepancy. The purpose of this paper is to describe a large body of experimental data, most of which has never been reported, that shows the effect of radial-tube-related local heterogeneities.

  20. Method to make accurate concentration and isotopic measurements for small gas samples

    Science.gov (United States)

    Palmer, M. R.; Wahl, E.; Cunningham, K. L.

    2013-12-01

    Carbon isotopic ratio measurements of CO2 and CH4 provide valuable insight into carbon cycle processes. However, many of these studies, like soil gas, soil flux, and water head space experiments, provide very small gas sample volumes, too small for direct measurement by current constant-flow Cavity Ring-Down Spectroscopy (CRDS) isotopic analyzers. Previously, we addressed this issue by developing a sample introduction module which enabled the isotopic ratio measurement of 40 ml samples or smaller. However, the system, called the Small Sample Isotope Module (SSIM), does dilute the sample with inert carrier gas during delivery, which causes a ~5% reduction in concentration. The isotopic ratio measurements are not affected by this small dilution, but researchers are naturally interested in accurate concentration measurements. We present the accuracy and precision of a new method of using this delivery module which we call 'double injection': two portions of the 40 ml sample (20 ml each) are introduced to the analyzer, the first injection of which flushes out the diluting gas and the second of which is measured. The accuracy of this new method is demonstrated by comparing the concentration and isotopic ratio measurements for a gas sampled directly and that same gas measured through the SSIM. The data show that the CO2 concentration measurements were the same within instrument precision. The isotopic ratio precision (1σ) of repeated measurements was 0.16 permil for CO2 and 1.15 permil for CH4 at ambient concentrations. This new method provides a significant enhancement in the information provided by small samples.

  1. Determination of small field synthetic single-crystal diamond detector correction factors for CyberKnife, Leksell Gamma Knife Perfexion and linear accelerator.

    Science.gov (United States)

    Veselsky, T; Novotny, J; Pastykova, V; Koniarova, I

    2017-12-01

    The aim of this study was to determine small field correction factors for a synthetic single-crystal diamond detector (PTW microDiamond) for routine use in clinical dosimetric measurements. Correction factors following the small-field Alfonso formalism were calculated by comparing the PTW microDiamond measured ratio M_Qclin^fclin/M_Qmsr^fmsr with Monte Carlo (MC)-based field output factors Omega_Qclin,Qmsr^fclin,fmsr determined using a Dosimetry Diode E or with MC simulation itself. Diode measurements were used for the CyberKnife and the Varian Clinac 2100C/D linear accelerator. PTW microDiamond correction factors for the Leksell Gamma Knife (LGK) were derived using MC-simulated reference values from the manufacturer. PTW microDiamond correction factors for CyberKnife field sizes 25-5 mm were mostly smaller than 1% (except for 2.9% for the 5 mm Iris field and 1.4% for the 7.5 mm fixed cone field). Corrections of 0.1% and 2.0% needed to be applied to PTW microDiamond measurements for the 8 mm and 4 mm collimators of LGK Perfexion, respectively. Finally, the PTW microDiamond ratios M_Qclin^fclin/M_Qmsr^fmsr for the linear accelerator varied from the MC-corrected Dosimetry Diode data by less than 0.5% (except for the 1 × 1 cm² field size, with 1.3% deviation). Given the small resulting correction factor values, the PTW microDiamond detector may be considered an almost ideal tool for relative small field dosimetry in a large variety of stereotactic and radiosurgery treatment devices. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
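
    A minimal sketch of how such correction factors are applied in the Alfonso formalism: the field output factor is the measured detector reading ratio multiplied by the detector- and field-specific correction factor. The numerical values below are placeholders, not the paper's results.

        def field_output_factor(m_clin, m_msr, k_correction):
            """Omega = k * (M_clin / M_msr): the detector reading ratio between
            the clinical small field and the machine-specific reference field,
            corrected for the detector's small-field response."""
            return k_correction * (m_clin / m_msr)

        # e.g. a 4 mm collimator reading ratio of 0.80 with a 2.0% correction:
        print(field_output_factor(0.80, 1.00, 1.020))    # -> 0.816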

  2. Small-molecule Wnt agonists correct cleft palates in Pax9 mutant mice in utero.

    Science.gov (United States)

    Jia, Shihai; Zhou, Jing; Fanelli, Christopher; Wee, Yinshen; Bonds, John; Schneider, Pascal; Mues, Gabriele; D'Souza, Rena N

    2017-10-15

    Clefts of the palate and/or lip are among the most common human craniofacial malformations and involve multiple genetic and environmental factors. Defects can only be corrected surgically and require complex life-long treatments. Our studies utilized the well-characterized Pax9-/- mouse model with a consistent cleft palate phenotype to test small-molecule Wnt agonist therapies. We show that the absence of Pax9 alters the expression of Wnt pathway genes including Dkk1 and Dkk2, proven antagonists of Wnt signaling. The functional interactions between Pax9 and Dkk1 are shown by the genetic rescue of secondary palate clefts in Pax9-/-Dkk1f/+;Wnt1Cre embryos. The controlled intravenous delivery of small-molecule Wnt agonists (Dkk inhibitors) into pregnant Pax9+/- mice restored Wnt signaling and led to the growth and fusion of palatal shelves, as marked by an increase in cell proliferation and osteogenesis in utero, while other organ defects were not corrected. This work underscores the importance of Pax9-dependent Wnt signaling in palatogenesis and suggests that this functional upstream molecular relationship can be exploited for the development of therapies for human cleft palates that arise from single-gene disorders. © 2017. Published by The Company of Biologists Ltd.

  3. A method to correct sampling ghosts in historic near-infrared Fourier transform spectrometer (FTS) measurements

    Science.gov (United States)

    Dohe, S.; Sherlock, V.; Hase, F.; Gisi, M.; Robinson, J.; Sepúlveda, E.; Schneider, M.; Blumenstock, T.

    2013-08-01

    The Total Carbon Column Observing Network (TCCON) has been established to provide ground-based remote sensing measurements of the column-averaged dry air mole fractions (DMF) of key greenhouse gases. To ensure network-wide consistency, biases between Fourier transform spectrometers at different sites have to be well controlled. Errors in interferogram sampling can introduce significant biases in retrievals. In this study we investigate a two-step scheme to correct these errors. In the first step the laser sampling error (LSE) is estimated by determining the sampling shift which minimises the magnitude of the signal intensity in selected, fully absorbed regions of the solar spectrum. The LSE is estimated for every day with measurements which meet certain selection criteria to derive the site-specific time series of the LSEs. In the second step, this sequence of LSEs is used to resample all the interferograms acquired at the site, and hence correct the sampling errors. Measurements acquired at the Izaña and Lauder TCCON sites are used to demonstrate the method. At both sites the sampling error histories show changes in LSE due to instrument interventions (e.g. realignment). Estimated LSEs are in good agreement with sampling errors inferred from the ratio of primary and ghost spectral signatures in optically bandpass-limited tungsten lamp spectra acquired at Lauder. The original time series of Xair and XCO2 (XY: column-averaged DMF of the target gas Y) at both sites show discrepancies of 0.2-0.5% due to changes in the LSE associated with instrument interventions or changes in the measurement sample rate. After resampling, discrepancies are reduced to 0.1% or less at Lauder and 0.2% at Izaña. In the latter case, coincident changes in interferometer alignment may also have contributed to the residual difference. In the future the proposed method will be used to correct historical spectra at all TCCON sites.
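
    A rough sketch of the first step under stated assumptions: resample the interferogram by a trial shift and score the residual signal in a fully absorbed spectral window, then grid-search the shift that minimises it. Linear interpolation, fringe-spacing units, and the grid search are simplifications of the actual TCCON processing.

        import numpy as np

        def absorbed_window_signal(ifg, shift, absorbed_idx):
            """Resample the interferogram by a trial sampling shift (in units
            of the laser fringe spacing) and sum the spectral magnitude in a
            fully absorbed window, which vanishes for perfect sampling."""
            n = np.arange(ifg.size)
            resampled = np.interp(n + shift, n, ifg)      # simple linear resampling
            spectrum = np.abs(np.fft.rfft(resampled))
            return spectrum[absorbed_idx].sum()

        def estimate_lse(ifg, absorbed_idx):
            """Grid-search the laser sampling error that minimises the metric."""
            shifts = np.linspace(-0.01, 0.01, 201)
            scores = [absorbed_window_signal(ifg, s, absorbed_idx) for s in shifts]
            return shifts[int(np.argmin(scores))]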

  4. A method to correct sampling ghosts in historic near-infrared Fourier transform spectrometer (FTS measurements

    Directory of Open Access Journals (Sweden)

    S. Dohe

    2013-08-01

    Full Text Available The Total Carbon Column Observing Network (TCCON) has been established to provide ground-based remote sensing measurements of the column-averaged dry air mole fractions (DMF) of key greenhouse gases. To ensure network-wide consistency, biases between Fourier transform spectrometers at different sites have to be well controlled. Errors in interferogram sampling can introduce significant biases in retrievals. In this study we investigate a two-step scheme to correct these errors. In the first step the laser sampling error (LSE) is estimated by determining the sampling shift which minimises the magnitude of the signal intensity in selected, fully absorbed regions of the solar spectrum. The LSE is estimated for every day with measurements which meet certain selection criteria to derive the site-specific time series of the LSEs. In the second step, this sequence of LSEs is used to resample all the interferograms acquired at the site, and hence correct the sampling errors. Measurements acquired at the Izaña and Lauder TCCON sites are used to demonstrate the method. At both sites the sampling error histories show changes in LSE due to instrument interventions (e.g. realignment). Estimated LSEs are in good agreement with sampling errors inferred from the ratio of primary and ghost spectral signatures in optically bandpass-limited tungsten lamp spectra acquired at Lauder. The original time series of Xair and XCO2 (XY: column-averaged DMF of the target gas Y) at both sites show discrepancies of 0.2–0.5% due to changes in the LSE associated with instrument interventions or changes in the measurement sample rate. After resampling, discrepancies are reduced to 0.1% or less at Lauder and 0.2% at Izaña. In the latter case, coincident changes in interferometer alignment may also have contributed to the residual difference. In the future the proposed method will be used to correct historical spectra at all TCCON sites.

  5. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

    Science.gov (United States)

    Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

    2017-12-01

    The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of the small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
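
    For context, a sketch of one familiar non-parametric approach of the kind compared in the paper: the Mann-Whitney (trapezoidal) AUC with a Wald-type interval based on the Hanley-McNeil variance approximation. This illustrates the general family only, not necessarily the paper's recommended variant.

        import numpy as np

        def auc_mann_whitney(pos, neg):
            """Trapezoidal AUC as the normalised Mann-Whitney U statistic."""
            pos, neg = np.asarray(pos, float), np.asarray(neg, float)
            wins = (pos[:, None] > neg[None, :]).sum()
            ties = (pos[:, None] == neg[None, :]).sum()
            return (wins + 0.5 * ties) / (len(pos) * len(neg))

        def auc_ci_hanley_mcneil(pos, neg, z=1.96):
            """Wald-type CI using the Hanley-McNeil variance approximation."""
            auc, m, n = auc_mann_whitney(pos, neg), len(pos), len(neg)
            q1, q2 = auc / (2.0 - auc), 2.0 * auc**2 / (1.0 + auc)
            var = (auc * (1 - auc) + (m - 1) * (q1 - auc**2)
                   + (n - 1) * (q2 - auc**2)) / (m * n)
            se = np.sqrt(max(var, 0.0))
            return max(0.0, auc - z * se), min(1.0, auc + z * se)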

  6. Corrections to the 148Nd method of evaluation of burnup for the PIE samples from Mihama-3 and Genkai-1 reactors

    International Nuclear Information System (INIS)

    Suyama, Kenya; Mochizuki, Hiroki

    2006-01-01

    The value of the burnup is one of the most important parameters of samples taken by post-irradiation examination (PIE). Generally, it is evaluated by the Neodymium-148 method. Precise evaluation of the burnup value requires: (1) an effective fission yield of 148 Nd; (2) neutron capture reactions of 147 Nd and 148 Nd; (3) a conversion factor from fissions per initial heavy metal to the burnup unit GWd/t. In this study, the burnup values of the PIE data from Mihama-3 and Genkai-1 PWRs, which were taken by the Japan Atomic Energy Research Institute, were re-evaluated using more accurate corrections for each of these three items. The PIE data were then re-analyzed using the SWAT and SWAT2 code systems with the JENDL-3.3 library. The re-evaluation of the effective fission yield of 148 Nd has an effect of 1.5-2.0% on burnup values. Considering the neutron capture reactions of 147 Nd and 148 Nd removes the dependence of the C/E values of 148 Nd on the burnup value. The conversion factor from FIMA(%) to GWd/t changes according to the burnup value. Its effect on the burnup evaluation is small for samples having burnup larger than 30 GWd/t. The analyses using the corrected burnup values showed that the difference between the calculated 148 Nd concentrations and the PIE data is approximately 1%, whereas this was 3-5% in prior analyses. This analysis indicates that the burnup values of samples from Mihama-3 and Genkai-1 PWRs should be corrected by 2-3%. The effect of re-evaluation of the burnup value on the neutron multiplication factor is an approximately 0.6% change in PIE samples having burnup larger than 30 GWd/t. Finally, a comparison between calculation results using a single pin-cell model and an assembly model is carried out. Because the results agreed with each other within a few percent, we concluded that the single pin-cell model is suitable for the analysis of PIE samples and that the underestimation of plutonium isotopes, which occurred in the previous analyses, does not result from a geometry
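
    A back-of-the-envelope sketch of the Neodymium-148 method's arithmetic: the measured 148 Nd atom ratio and an effective fission yield give FIMA, and an energy-release factor converts FIMA to GWd/t. The yield, 200 MeV/fission, and 238 g/mol values below are nominal placeholders; the paper's point is precisely that these correction items must be re-evaluated per sample.

        AVOGADRO = 6.022e23
        J_PER_MEV = 1.602e-13
        J_PER_GWD = 1e9 * 86400.0        # one gigawatt-day in joules

        def fima_from_nd148(n148_per_ihm_atom, y148_eff=0.0167):
            """Fissions per initial heavy-metal atom from the measured Nd-148
            atom ratio; 0.0167 is an illustrative effective yield only."""
            return n148_per_ihm_atom / y148_eff

        def burnup_gwd_per_t(fima, mev_per_fission=200.0, molar_mass=238.0):
            """Convert FIMA (a fraction) to burnup in GWd/t of initial metal."""
            atoms_per_tonne = 1.0e6 / molar_mass * AVOGADRO
            return fima * atoms_per_tonne * mev_per_fission * J_PER_MEV / J_PER_GWD

        print(burnup_gwd_per_t(0.01))    # 1% FIMA is roughly 9.4 GWd/t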

  7. Radioenzymatic assay for trimethoprim in very small serum samples.

    OpenAIRE

    Yogev, R; Melick, C; Tan-Pong, L

    1985-01-01

    A modification of the methotrexate radioassay kit (supplied by New England Enzyme Center) enabled determination of trimethoprim levels in 5-microliter serum samples. An excellent correlation between this assay and high-pressure liquid chromatography assay was found. These preliminary results suggest that with this method rapid determination of trimethoprim levels in very small samples (5 to 10 microliters) can be achieved.

  8. Radioenzymatic assay for trimethoprim in very small serum samples

    International Nuclear Information System (INIS)

    Yogev, R.; Melick, C.; Tan-Pong, L.

    1985-01-01

    A modification of the methotrexate radioassay kit (supplied by New England Enzyme Center) enabled determination of trimethoprim levels in 5-microliter serum samples. An excellent correlation between this assay and high-pressure liquid chromatography assay was found. These preliminary results suggest that with this method rapid determination of trimethoprim levels in very small samples (5 to 10 microliters) can be achieved

  9. Estimating sample size for a small-quadrat method of botanical ...

    African Journals Online (AJOL)

    Reports the results of a study conducted to determine an appropriate sample size for a small-quadrat method of botanical survey for application in the Mixed Bushveld of South Africa. Species density and grass density were measured using a small-quadrat method in eight plant communities in the Nylsvley Nature Reserve.

  10. The problem in 180 deg data sampling and radioactivity decay correction in gated cardiac blood pool scanning using SPECT

    International Nuclear Information System (INIS)

    Ohtake, Tohru; Watanabe, Toshiaki; Nishikawa, Junichi

    1986-01-01

    In cardiac blood pool scanning using SPECT, half 180 deg data collection (HD) versus full 360 deg data collection (FD), and Tc-99m decay, are problems in quantifying the ejection count (EC; end-diastolic count minus end-systolic count) of both ventricles and the ratio of the ejection counts of the right and left ventricles (RVEC/LVEC). We studied the change produced by altering the starting position of data sampling in HD scans. In our results from a phantom and 4 clinical cases, when the cardiac axis deviation was not large and there was no remarkable cardiac enlargement, the change in LVEC, RVEC and RVEC/LVEC was small (1-4%) within a 12 degree change of the starting position, and the difference between the results of an HD scan with a good starting position (the average of the LV peak and RV peak) and an FD scan was not large (less than 7%). Because of this, we think the HD scan can be used in those cases. But when the cardiac axis deviation was large or there was remarkable cardiac enlargement, the change in LVEC, RVEC and RVEC/LVEC was large (more than 10%) even within a 12 degree change of the starting position, so we think the FD scan would be better in those cases. In our results for 6 patients, the half-life of Tc-99m labeled albumin in blood varied from 2 to 4 hr (3.03 ± 0.59 hr, mean ± s.d.). Using a program for radioactivity (RA) decay correction, we studied the change in LVEC, RVEC and RVEC/LVEC in 11 cases. When RA decay correction was performed using a half-life of 3.0 hr, LVEC increased 7.5%, RVEC increased 8.7% and RVEC/LVEC increased 0.9% on average in the HD scans of 8 cases (LPO to RAO, 32 views, 60 beats/view). We think RA decay correction would not be needed in quantifying RVEC/LVEC in most cases because the change in RVEC/LVEC was very small. (author)
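
    For reference, the decay correction described above reduces to multiplying later acquisitions by 2^(t/T), with T the effective blood half-life; the sketch below assumes the study's mean value of 3.0 hr.

        def decay_corrected(counts, elapsed_hr, half_life_hr=3.0):
            """Multiply later acquisitions by 2**(t/T) to undo the decay and
            blood clearance of the Tc-99m labelled albumin."""
            return counts * 2.0 ** (elapsed_hr / half_life_hr)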

  11. Overestimation of test performance by ROC analysis: Effect of small sample size

    International Nuclear Information System (INIS)

    Seeley, G.W.; Borgstrom, M.C.; Patton, D.D.; Myers, K.J.; Barrett, H.H.

    1984-01-01

    New imaging systems are often observer-rated by ROC techniques. For practical reasons the number of different images, or sample size (SS), is kept small. Any systematic bias due to small SS would bias system evaluation. The authors set about to determine whether the area under the ROC curve (AUC) would be systematically biased by small SS. Monte Carlo techniques were used to simulate observer performance in distinguishing signal (SN) from noise (N) on a 6-point scale; P(SN) = P(N) = .5. Four sample sizes (15, 25, 50 and 100 each of SN and N), three ROC slopes (0.8, 1.0 and 1.25), and three intercepts (0.8, 1.0 and 1.25) were considered. In each of the 36 combinations of SS, slope and intercept, 2000 runs were simulated. Results showed a systematic bias: the observed AUC exceeded the expected AUC in every one of the 36 combinations for all sample sizes, with the smallest sample sizes having the largest bias. This suggests that evaluations of imaging systems using ROC curves based on small sample size systematically overestimate system performance. The effect is consistent but subtle (maximum 10% of AUC standard deviation), and is probably masked by the s.d. in most practical settings. Although there is a statistically significant effect (F = 33.34, P<0.0001) due to sample size, none was found for either the ROC curve slope or intercept. Overestimation of test performance by small SS seems to be an inherent characteristic of the ROC technique that has not previously been described

  12. Statistical issues in reporting quality data: small samples and casemix variation.

    Science.gov (United States)

    Zaslavsky, A M

    2001-12-01

    To present two key statistical issues that arise in analysis and reporting of quality data. Casemix variation is relevant to quality reporting when the units being measured have differing distributions of patient characteristics that also affect the quality outcome. When this is the case, adjustment using stratification or regression may be appropriate. Such adjustments may be controversial when the patient characteristic does not have an obvious relationship to the outcome. Stratified reporting poses problems for sample size and reporting format, but may be useful when casemix effects vary across units. Although there are no absolute standards of reliability, high reliabilities (interunit F ≥ 10 or reliability ≥ 0.9) are desirable for distinguishing above- and below-average units. When small or unequal sample sizes complicate reporting, precision may be improved using indirect estimation techniques that incorporate auxiliary information, and 'shrinkage' estimation can help to summarize the strength of evidence about units with small samples. With broader understanding of casemix adjustment and methods for analyzing small samples, quality data can be analysed and reported more accurately.
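
    A minimal sketch of the 'shrinkage' idea for small samples, assuming the variance components have already been estimated (in practice they would come from an ANOVA or random-effects fit); the function name is illustrative.

        import numpy as np

        def shrunken_unit_means(unit_means, unit_ns, within_var, between_var):
            """Pull each unit's score toward the grand mean in proportion to
            its unreliability; units with small samples are shrunk the most."""
            unit_means = np.asarray(unit_means, float)
            unit_ns = np.asarray(unit_ns, float)
            grand_mean = np.average(unit_means, weights=unit_ns)
            reliability = between_var / (between_var + within_var / unit_ns)
            return reliability * unit_means + (1.0 - reliability) * grand_mean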

  13. Reducing overlay sampling for APC-based correction per exposure by replacing measured data with computational prediction

    Science.gov (United States)

    Noyes, Ben F.; Mokaberi, Babak; Oh, Jong Hun; Kim, Hyun Sik; Sung, Jun Ha; Kea, Marc

    2016-03-01

    One of the keys to successful mass production of sub-20nm nodes in the semiconductor industry is the development of an overlay correction strategy that can meet specifications, reduce the number of layers that require dedicated chuck overlay, and minimize measurement time. Three important aspects of this strategy are: correction per exposure (CPE), integrated metrology (IM), and the prioritization of automated correction over manual subrecipes. The first and third aspects are accomplished through an APC system that uses measurements from production lots to generate CPE corrections that are dynamically applied to future lots. The drawback of this method is that production overlay sampling must be extremely high in order to provide the system with enough data to generate CPE. That drawback makes IM particularly difficult because of the throughput impact on expensive bottleneck photolithography process tools. The goal is to realize the cycle time and feedback benefits of IM coupled with the enhanced overlay correction capability of automated CPE without impacting process tool throughput. This paper will discuss the development of a system that sends measured data with reduced sampling, via an optimized layout, to the exposure tool's computational modelling platform to predict and create "upsampled" overlay data in a customizable output layout that is compatible with the fab's CPE APC system. The result is dynamic CPE without the burden of extensive measurement time, which leads to increased utilization of IM.
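
    A toy sketch of computational upsampling, assuming a simple 6-parameter linear wafer model in place of the exposure tool's proprietary modelling platform: fit the model to the sparse measurements, then evaluate it on a dense target layout.

        import numpy as np

        def fit_linear_overlay_model(x, y, dx, dy):
            """Fit translation/magnification/rotation terms to sparse overlay
            measurements (dx, dy) at wafer coordinates (x, y)."""
            A = np.column_stack([np.ones_like(x), x, y])
            px = np.linalg.lstsq(A, dx, rcond=None)[0]
            py = np.linalg.lstsq(A, dy, rcond=None)[0]
            return px, py

        def upsample_overlay(px, py, x_dense, y_dense):
            """Predict ('upsample') overlay on a dense target layout."""
            A = np.column_stack([np.ones_like(x_dense), x_dense, y_dense])
            return A @ px, A @ py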

  14. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    Science.gov (United States)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increased intensity, within the same divergence limits, ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.

  15. Establishing the Validity of the Personality Assessment Inventory Drug and Alcohol Scales in a Corrections Sample

    Science.gov (United States)

    Patry, Marc W.; Magaletta, Philip R.; Diamond, Pamela M.; Weinman, Beth A.

    2011-01-01

    Although not originally designed for implementation in correctional settings, researchers and clinicians have begun to use the Personality Assessment Inventory (PAI) to assess offenders. A relatively small number of studies have made attempts to validate the alcohol and drug abuse scales of the PAI, and only a very few studies have validated those…

  16. Increased accuracy of the carbon-14 D-xylose breath test in detecting small-intestinal bacterial overgrowth by correction with the gastric emptying rate

    International Nuclear Information System (INIS)

    Chang Chisen; Chen Granhum; Kao Chiahung; Wang Shyhjen; Peng Shihnen; Huang Chihkuen; Poon Sekkwong

    1995-01-01

    The aim of this study was to determine whether the accuracy of the 14 C-D-xylose breath test for detecting bacterial overgrowth can be increased by correction with the gastric emptying rate of 14 C-D-xylose. Ten culture-positive patients and ten culture-negative controls were included in the study. Small-intestinal aspirates for bacteriological culture were obtained endoscopically. A liquid-phase gastric emptying study was performed simultaneously to assess the amount of 14 C-D-xylose that entered the small intestine. The percentage of expired 14 CO 2 at 30 min was corrected for the amount of 14 C-D-xylose that entered the small intestine. There were six patients in the culture-positive group with a 14 CO 2 concentration above the normal limit. Three out of four patients with initially negative results using the uncorrected method proved to be positive after correction. All three of these patients had prolonged gastric emptying of 14 C-D-xylose. When compared with cultures of small-intestinal aspirates, the sensitivity and specificity of the uncorrected 14 C-D-xylose breath test were 60% and 90%, respectively. In contrast, the sensitivity and specificity of the corrected 14 C-D-xylose breath test improved to 90% and 100%, respectively. (orig./MG)
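
    A minimal sketch of the correction as described, assuming the 30-min breath result is normalised by the fraction of the dose that the gastric emptying study shows had reached the small intestine (the paper's exact normalisation is not given in the abstract):

        def corrected_breath_result(pct_14co2_at_30min, fraction_emptied):
            """Normalise the 30-min breath value by the fraction of the dose
            that had actually entered the small intestine (1.0 = whole dose)."""
            return pct_14co2_at_30min / fraction_emptied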

  17. Sci—Fri AM: Mountain — 01: Validation of a new formulism and the related correction factors on output factor determination for small photon fields

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yizhen; Younge, Kelly; Nielsen, Michelle; Mutanga, Theodore [Peel Regional Cancer Center, Trillium Health Partners, Mississauga, ON (Canada); Cui, Congwu [Peel Regional Cancer Center, Trillium Health Partners, Mississauga, ON (Canada); Department of Radiation Oncology, University of Toronto, Toronto, ON (Canada); Das, Indra J. [Radiation Oncology Dept., Indiana University- School of Medicine, Indianapolis, IN (United States)

    2014-08-15

    Small field dosimetry measurements including output factors are difficult due to lack of charged-particle equilibrium, occlusion of the radiation source, the finite size of detectors, and the non-water equivalence of detector components. With available detectors, significant variations can be measured that would lead to incorrect delivered dose to patients. The IAEA/AAPM have provided a framework and formulation to correct the detector response in small photon fields. Monte Carlo derived correction factors for some commonly used small field detectors are now available; however, validation had not been performed prior to this study. An Exradin A16 chamber, an EDGE detector and an SFD detector were used to perform the output factor measurements for a series of conical fields (5-30 mm) on a Varian iX linear accelerator. Discrepancies of up to 20%, 10% and 6% were observed for the 5, 7.5 and 10 mm cones between the initial output factors measured by the EDGE detector and the A16 ion chamber, while the discrepancies for the conical fields larger than 10 mm were less than 4%. After application of the corrections, the output factors agree with each other to within 1%. Caution is needed when determining output factors for small photon fields, especially for fields 10 mm in diameter or smaller. More than one type of detector should be used, each with the proper corrections applied to the measurement results. It is concluded that with the application of correction factors to appropriately chosen detectors, output can be measured accurately for small fields.

  18. Conversion of Small Algal Oil Sample to JP-8

    Science.gov (United States)

    2012-01-01

    Hydrocracking of algal oil to SPK was performed in a small-scale lab hydroprocessing plant (UOP) with a down-flow trickle-bed configuration capable of retaining 25 cc of catalyst bed. The catalytic deoxygenation stage of the ... content, which combined with the sample's acidity is a challenge to reactor metallurgy. Nonetheless, an attempt was made to convert this sample to

  19. Corrections to the {sup 148}Nd method of evaluation of burnup for the PIE samples from Mihama-3 and Genkai-1 reactors

    Energy Technology Data Exchange (ETDEWEB)

    Suyama, Kenya [Fuel Cycle Facility Safety Research Group, Nuclear Safety Research Center, Japan Atomic Energy Agency, Tokai-mura, Naka-gun, Ibaraki 319-1195 (Japan)]. E-mail: suyama.kenya@jaea.go.jp; Mochizuki, Hiroki [Japan Research Institute, Limited, 16 Ichiban-cho, Chiyoda-ku, Tokyo 102-0082 (Japan)

    2006-03-15

    The value of the burnup is one of the most important parameters of samples taken by post-irradiation examination (PIE). Generally, it is evaluated by the Neodymium-148 method. Precise evaluation of the burnup value requires: (1) an effective fission yield of {sup 148}Nd; (2) neutron capture reactions of {sup 147}Nd and {sup 148}Nd; (3) a conversion factor from fissions per initial heavy metal to the burnup unit GWd/t. In this study, the burnup values of the PIE data from Mihama-3 and Genkai-1 PWRs, which were taken by the Japan Atomic Energy Research Institute, were re-evaluated using more accurate corrections for each of these three items. The PIE data were then re-analyzed using the SWAT and SWAT2 code systems with the JENDL-3.3 library. The re-evaluation of the effective fission yield of {sup 148}Nd has an effect of 1.5-2.0% on burnup values. Considering the neutron capture reactions of {sup 147}Nd and {sup 148}Nd removes the dependence of the C/E values of {sup 148}Nd on the burnup value. The conversion factor from FIMA(%) to GWd/t changes according to the burnup value. Its effect on the burnup evaluation is small for samples having burnup larger than 30 GWd/t. The analyses using the corrected burnup values showed that the difference between the calculated {sup 148}Nd concentrations and the PIE data is approximately 1%, whereas this was 3-5% in prior analyses. This analysis indicates that the burnup values of samples from Mihama-3 and Genkai-1 PWRs should be corrected by 2-3%. The effect of re-evaluation of the burnup value on the neutron multiplication factor is an approximately 0.6% change in PIE samples having burnup larger than 30 GWd/t. Finally, a comparison between calculation results using a single pin-cell model and an assembly model is carried out. Because the results agreed with each other within a few percent, we concluded that the single pin-cell model is suitable for the analysis of PIE samples and that the underestimation of plutonium isotopes, which occurred in the previous

  20. A scanning tunneling microscope capable of imaging specified micron-scale small samples

    Science.gov (United States)

    Tao, Wei; Cao, Yufei; Wang, Huafeng; Wang, Kaiyou; Lu, Qingyou

    2012-12-01

    We present a home-built scanning tunneling microscope (STM) which allows us to precisely position the tip on any specified small sample or sample feature of micron scale. The core structure is a stand-alone soft junction mechanical loop (SJML), in which a small piezoelectric tube scanner is mounted on a sliding piece and a "U"-like soft spring strip has its one end fixed to the sliding piece and its opposite end holding the tip pointing to the sample on the scanner. Here, the tip can be precisely aligned to a specified small sample of micron scale by adjusting the position of the spring-clamped sample on the scanner in the field of view of an optical microscope. The aligned SJML can be transferred to a piezoelectric inertial motor for coarse approach, during which the U-spring is pushed towards the sample, causing the tip to approach the pre-aligned small sample. We have successfully approached a hand cut tip that was made from 0.1 mm thin Pt/Ir wire to an isolated individual 32.5 × 32.5 μm² graphite flake. Good atomic resolution images and high quality tunneling current spectra for that specified tiny flake are obtained in ambient conditions with high repeatability within one month showing high and long term stability of the new STM structure. In addition, frequency spectra of the tunneling current signals do not show outstanding tip mount related resonant frequency (low frequency), which further confirms the stability of the STM structure.

  1. A scanning tunneling microscope capable of imaging specified micron-scale small samples.

    Science.gov (United States)

    Tao, Wei; Cao, Yufei; Wang, Huafeng; Wang, Kaiyou; Lu, Qingyou

    2012-12-01

    We present a home-built scanning tunneling microscope (STM) which allows us to precisely position the tip on any specified small sample or sample feature of micron scale. The core structure is a stand-alone soft junction mechanical loop (SJML), in which a small piezoelectric tube scanner is mounted on a sliding piece and a "U"-like soft spring strip has its one end fixed to the sliding piece and its opposite end holding the tip pointing to the sample on the scanner. Here, the tip can be precisely aligned to a specified small sample of micron scale by adjusting the position of the spring-clamped sample on the scanner in the field of view of an optical microscope. The aligned SJML can be transferred to a piezoelectric inertial motor for coarse approach, during which the U-spring is pushed towards the sample, causing the tip to approach the pre-aligned small sample. We have successfully approached a hand cut tip that was made from 0.1 mm thin Pt/Ir wire to an isolated individual 32.5 × 32.5 μm² graphite flake. Good atomic resolution images and high quality tunneling current spectra for that specified tiny flake are obtained in ambient conditions with high repeatability within one month showing high and long term stability of the new STM structure. In addition, frequency spectra of the tunneling current signals do not show outstanding tip mount related resonant frequency (low frequency), which further confirms the stability of the STM structure.

  2. Can m_t² ≫ m_b² arise from small corrections in four-family models?

    International Nuclear Information System (INIS)

    Mendel, R.R.; Margolis, B.; Therrien, E.; Valin, P.

    1989-01-01

    This paper proposes a general dynamical scheme capable of explaining naturally the main properties of the observed spectrum, namely the strong inter-family mass hierarchies and the mixing pattern. The authors illustrate these properties in the three-family case with a simple toy model. There is an indication that large values of m_t may be required in order to obtain |V_ub| ≪ |V_cb|; the fact that m_t² ≫ m_b² could be due to small corrections in a four-family model where m_t′ ∼ m_b′. The authors point out possible natural explanations for the small mass of the e, μ and τ neutrinos in the three- and four-family cases.

  3. Bias of shear wave elasticity measurements in thin layer samples and a simple correction strategy.

    Science.gov (United States)

    Mo, Jianqiang; Xu, Hao; Qiang, Bo; Giambini, Hugo; Kinnick, Randall; An, Kai-Nan; Chen, Shigao; Luo, Zongping

    2016-01-01

    Shear wave elastography (SWE) is an emerging technique for measuring biological tissue stiffness. However, the application of SWE in thin-layer tissues is limited by bias due to the influence of geometry on the measured shear wave speed. In this study, we investigated the bias of the Young's modulus measured by SWE in thin-layer gelatin-agar phantoms, and compared the results with finite element method and Lamb wave model simulations. The results indicated that the Young's modulus measured by SWE decreased continuously as the sample thickness decreased, and this effect was more significant at smaller thicknesses. We proposed a new empirical formula which can conveniently correct the bias without the need for complicated mathematical modeling. In summary, we confirmed the nonlinear relation between thickness and the Young's modulus measured by SWE in thin-layer samples, and offered a simple and practical correction strategy which is convenient for clinicians to use.
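
    The paper's empirical formula is not given in the abstract, so the sketch below assumes an illustrative saturating-exponential form for the thickness dependence, fitted on phantoms of known thickness and then inverted to correct thin-sample readings:

        import numpy as np
        from scipy.optimize import curve_fit

        def measured_modulus(h, e_bulk, a, b):
            """Assumed shape: the measured modulus rises toward the bulk
            value as the layer thickness h grows (illustrative form only)."""
            return e_bulk * (1.0 - a * np.exp(-b * h))

        def calibrate_correction(h_phantom, e_measured):
            """Fit the curve on phantoms of known thickness; return the bulk
            modulus estimate and a function correcting thin-sample readings."""
            (e_bulk, a, b), _ = curve_fit(measured_modulus, h_phantom, e_measured,
                                          p0=[np.max(e_measured), 0.5, 1.0])
            def correct(h, e_meas):
                return e_meas / (1.0 - a * np.exp(-b * h))
            return e_bulk, correct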

  4. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.

  5. Height drift correction in non-raster atomic force microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Travis R. [Department of Mathematics, University of California Los Angeles, Los Angeles, CA 90095 (United States); Ziegler, Dominik [Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Brune, Christoph [Institute for Computational and Applied Mathematics, University of Münster (Germany); Chen, Alex [Statistical and Applied Mathematical Sciences Institute, Research Triangle Park, NC 27709 (United States); Farnham, Rodrigo; Huynh, Nen; Chang, Jen-Mei [Department of Mathematics and Statistics, California State University Long Beach, Long Beach, CA 90840 (United States); Bertozzi, Andrea L., E-mail: bertozzi@math.ucla.edu [Department of Mathematics, University of California Los Angeles, Los Angeles, CA 90095 (United States); Ashby, Paul D., E-mail: pdashby@lbl.gov [Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2014-02-01

    We propose a novel method to detect and correct drift in non-raster scanning probe microscopy. In conventional raster scanning drift is usually corrected by subtracting a fitted polynomial from each scan line, but sample tilt or large topographic features can result in severe artifacts. Our method uses self-intersecting scan paths to distinguish drift from topographic features. Observing the height differences when passing the same position at different times enables the reconstruction of a continuous function of drift. We show that a small number of self-intersections is adequate for automatic and reliable drift correction. Additionally, we introduce a fitness function which provides a quantitative measure of drift correctability for any arbitrary scan shape. - Highlights: • We propose a novel height drift correction method for non-raster SPM. • Self-intersecting scans enable the distinction of drift from topographic features. • Unlike conventional techniques our method is unsupervised and tilt-invariant. • We introduce a fitness measure to quantify correctability for general scan paths.
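
    A sketch of the reconstruction idea, assuming a low-order polynomial drift model in place of the paper's general continuous function: every self-intersection yields one linear constraint, and the constraints are solved in a least-squares sense.

        import numpy as np

        def fit_drift_from_crossings(t_i, t_j, dh, degree=3):
            """Least-squares drift estimate from self-intersections: the
            measured height difference dh at a crossing visited at times
            t_i and t_j must equal d(t_i) - d(t_j), since the topography
            there is identical. The constant term is unobservable and omitted."""
            powers = np.arange(1, degree + 1)
            A = t_i[:, None] ** powers - t_j[:, None] ** powers
            coeffs = np.linalg.lstsq(A, dh, rcond=None)[0]
            def drift(t):
                return (np.asarray(t, float)[..., None] ** powers) @ coeffs
            return drift        # subtract drift(t) from the height channel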

  6. A hybrid solution using computational prediction and measured data to accurately determine process corrections with reduced overlay sampling

    Science.gov (United States)

    Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen

    2017-03-01

    Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks in the throughput of the system, as new lots have to wait until the previous lot is measured. One solution is to use a less dense overlay sampling scheme and computationally up-sample the data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system, shown in Fig. 1, that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.

  7. Respondent-driven sampling and the recruitment of people with small injecting networks.

    Science.gov (United States)

    Paquette, Dana; Bryant, Joanne; de Wit, John

    2012-05-01

    Respondent-driven sampling (RDS) is a form of chain-referral sampling, similar to snowball sampling, which was developed to reach hidden populations such as people who inject drugs (PWID). RDS is said to reach members of a hidden population that may not be accessible through other sampling methods. However, less attention has been paid to whether there are segments of the population that are more likely to be missed by RDS. This study examined the ability of RDS to capture people with small injecting networks. A study of PWID, using RDS, was conducted in 2009 in Sydney, Australia. The size of participants' injecting networks was examined by recruitment chain and wave. Participants' injecting network characteristics were compared to those of participants from a separate pharmacy-based study. A logistic regression analysis was conducted to examine the characteristics independently associated with having small injecting networks, using the combined RDS and pharmacy-based samples. In comparison with the pharmacy-recruited participants, RDS participants were almost 80% less likely to have small injecting networks, after adjusting for other variables. RDS participants were also more likely to have their injecting networks form a larger proportion of their social networks, and to have acquaintances as part of their injecting networks. Compared to those with larger injecting networks, individuals with small injecting networks were equally likely to engage in receptive sharing of injecting equipment, but less likely to have had contact with prevention services. These findings suggest that those with small injecting networks are an important group to recruit, and that RDS is less likely to capture these individuals.

  8. ASSESSING SMALL SAMPLE WAR-GAMING DATASETS

    Directory of Open Access Journals (Sweden)

    W. J. HURLEY

    2013-10-01

    Full Text Available One of the fundamental problems faced by military planners is the assessment of changes to force structure. An example is whether to replace an existing capability with an enhanced system. This can be done directly with a comparison of measures such as accuracy, lethality, survivability, etc. However, this approach does not allow an assessment of the force multiplier effects of the proposed change. To gauge these effects, planners often turn to war-gaming. For many war-gaming experiments it is expensive, in terms of both time and money, to generate a large number of sample observations. This puts a premium on the statistical methodology used to examine these small datasets. In this paper we compare the power of three tests to assess population differences: the Wald-Wolfowitz test, the Mann-Whitney U test, and resampling. We employ a series of Monte Carlo simulation experiments. Not unexpectedly, we find that the Mann-Whitney test performs better than the Wald-Wolfowitz test. Resampling is judged to perform slightly better than the Mann-Whitney test.
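
    A comparison of this kind is easy to reproduce in outline. The sketch below estimates the power of the Mann-Whitney U test and of a simple permutation (resampling) test on small normal samples whose means differ by one standard deviation; the sample size, shift and trial counts are arbitrary illustrative choices, not those of the paper, and the Wald-Wolfowitz runs test is omitted.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)

        def permutation_test(x, y, n_perm=1000):
            # Resampling test: permute the pooled data and compare the observed
            # mean difference with the permutation distribution.
            observed = abs(x.mean() - y.mean())
            pooled = np.concatenate([x, y])
            hits = 0
            for _ in range(n_perm):
                p = rng.permutation(pooled)
                hits += abs(p[:x.size].mean() - p[x.size:].mean()) >= observed
            return hits / n_perm

        # Small war-game-sized samples: 8 runs per force structure.
        n_trials, n, shift = 200, 8, 1.0
        power = {"Mann-Whitney": 0, "resampling": 0}
        for _ in range(n_trials):
            x = rng.standard_normal(n)
            y = rng.standard_normal(n) + shift
            power["Mann-Whitney"] += stats.mannwhitneyu(
                x, y, alternative="two-sided").pvalue < 0.05
            power["resampling"] += permutation_test(x, y) < 0.05

        for name, h in power.items():
            print(f"{name}: power ~ {h / n_trials:.2f}")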

  9. Estimation of reference intervals from small samples: an example using canine plasma creatinine.

    Science.gov (United States)

    Geffré, A; Braun, J P; Trumel, C; Concordet, D

    2009-12-01

    According to international recommendations, reference intervals should be determined from at least 120 reference individuals, which is often impossible to achieve in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated. A comparison of reference intervals estimated by different methods could be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and from large subsets of these data, with estimates obtained from small samples selected randomly. Twenty sets each of 120 and 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were the minimum and maximum, mean ± 2 SD of native and Box-Cox-transformed values, 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The closest estimates to the 1439-result reference interval for 27-result subsamples were obtained by both parametric and robust methods after Box-Cox transformation, but were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits be compared using different methods.
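
    The compared estimators are easy to state in code. The sketch below draws one small sample of 27 values from a skewed stand-in population (a lognormal distribution, an assumption, not the study's creatinine data) and contrasts the large-sample nonparametric interval with the small-sample mean ± 2 SD estimates on native and Box-Cox-transformed values.

        import numpy as np
        from scipy import stats
        from scipy.special import inv_boxcox

        rng = np.random.default_rng(3)

        # Skewed stand-in "population" and one small reference sample of 27.
        population = stats.lognorm.rvs(s=0.4, scale=90.0, size=1439,
                                       random_state=rng)
        sample = rng.choice(population, size=27, replace=False)

        # Large-sample nonparametric interval: 2.5th and 97.5th percentiles.
        print("nonparametric (n=1439):", np.percentile(population, [2.5, 97.5]))

        # Small-sample estimate on the native (skewed) values.
        print("mean +/- 2 SD, native :",
              sample.mean() + np.array([-2, 2]) * sample.std(ddof=1))

        # Small-sample estimate after Box-Cox, back-transformed afterwards.
        z, lam = stats.boxcox(sample)
        lims = z.mean() + np.array([-2, 2]) * z.std(ddof=1)
        print("mean +/- 2 SD, Box-Cox:", inv_boxcox(lims, lam))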

  10. Accelerator mass spectrometry of ultra-small samples with applications in the biosciences

    International Nuclear Information System (INIS)

    Salehpour, Mehran; Håkansson, Karl; Possnert, Göran

    2013-01-01

    An overview is presented covering the biological accelerator mass spectrometry activities at Uppsala University. The research utilizes the Uppsala University Tandem Laboratory facilities, including a 5 MV Pelletron tandem accelerator and two stable isotope ratio mass spectrometers. In addition, a dedicated sample preparation laboratory for biological samples with natural activity is in use, as well as another laboratory specifically for ¹⁴C-labeled samples. A variety of ongoing projects are described and presented. Examples are: (1) ultra-small sample AMS: we routinely analyze samples with masses in the 5–10 μg C range, and data is presented regarding the sample preparation method; (2) bomb peak biological dating of ultra-small samples: a long term project is presented in which purified and cell-specific DNA from various parts of the human body, including the heart and the brain, is analyzed with the aim of extracting the regeneration rates of the various human cells; (3) biological dating of various human biopsies, including atherosclerosis-related plaques: the average build-up time of the surgically removed human carotid plaques has been measured and correlated to various data, including the level of insulin in the blood; and (4) in addition to standard microdosing-type measurements using small pharmaceutical drugs, pre-clinical pharmacokinetic data from a macromolecular drug candidate are discussed.


  12. Transportable high sensitivity small sample radiometric calorimeter

    International Nuclear Information System (INIS)

    Wetzel, J.R.; Biddle, R.S.; Cordova, B.S.; Sampson, T.E.; Dye, H.R.; McDow, J.G.

    1998-01-01

    A new small-sample, high-sensitivity transportable radiometric calorimeter, which can be operated in different modes, contains an electrical calibration method and can be used to develop secondary standards, will be described in this presentation. The data taken from preliminary tests will be presented to indicate the precision and accuracy of the instrument. The calorimeter and temperature-controlled bath, at present, require only a 30-in. by 20-in. tabletop area. The calorimeter is operated from a laptop computer system using a unique measurement module capable of monitoring all necessary calorimeter signals. The calorimeter can be operated in the normal calorimeter equilibration mode, or as a comparison instrument using twin chambers and an external electrical calibration method. The sample chamber is 0.75 in. (1.9 cm) in diameter by 2.5 in. (6.35 cm) long. This size will accommodate most ²³⁸Pu heat standards manufactured in the past. The power range runs from 0.001 W to <20 W; the high end is only limited by sample size.

  13. EDXRF applied to the chemical element determination of small invertebrate samples

    International Nuclear Information System (INIS)

    Magalhaes, Marcelo L.R.; Santos, Mariana L.O.; Cantinha, Rebeca S.; Souza, Thomas Marques de; Franca, Elvis J. de

    2015-01-01

    Energy Dispersive X-Ray Fluorescence (EDXRF) is a fast analytical technique of easy operation, but it demands reliable analytical curves due to the intrinsic matrix dependence and interference during the analysis. By using biological materials of diverse matrices, multielemental analytical protocols can be implemented, and a group of chemical elements can be determined in diverse biological matrices depending on the chemical element concentration. Particularly for invertebrates, EDXRF presents some advantages associated with the possibility of analyzing small samples, in which a collimator can be used to direct the incidence of X-rays onto a small surface of the analyzed samples. In this work, EDXRF was applied to determine Cl, Fe, P, S and Zn in invertebrate samples using collimators of 3 mm and 10 mm. For the assessment of the analytical protocol, SRM 2976 Trace Elements in Mollusk and SRM 8415 Whole Egg Powder, produced by the National Institute of Standards and Technology (NIST), were also analyzed. After sampling by using pitfall traps, invertebrates were lyophilized, milled and transferred to polyethylene vials covered by XRF polyethylene film. Analyses were performed at a pressure lower than 30 Pa, varying voltage and electric current according to the chemical element to be analyzed. For comparison, Zn in the invertebrate material was also quantified by graphite furnace atomic absorption spectrometry (GFAAS) after acid treatment of the samples (a mixture of nitric acid and hydrogen peroxide). Compared to the 10 mm collimator, the SRM 2976 and SRM 8415 results obtained with the 3 mm collimator agreed well at the 95% confidence level, since the E_n numbers were in the range of -1 to 1. Results from GFAAS were in accordance with the EDXRF values for composite samples. Therefore, determination of some chemical elements by EDXRF can be recommended for very small invertebrate samples (lower than 100 mg), with the advantage of preserving the samples. (author)

  14. In-Situ Systematic Error Correction for Digital Volume Correlation Using a Reference Sample

    KAUST Repository

    Wang, B.; Pan, B.; Lubineau, Gilles

    2017-11-27

    The self-heating effect of a laboratory X-ray computed tomography (CT) scanner causes slight change in its imaging geometry, which induces translation and dilatation (i.e., artificial displacement and strain) in reconstructed volume images recorded at different times. To realize high-accuracy internal full-field deformation measurements using digital volume correlation (DVC), these artificial displacements and strains associated with unstable CT imaging must be eliminated. In this work, an effective and easily implemented reference sample compensation (RSC) method is proposed for in-situ systematic error correction in DVC. The proposed method utilizes a stationary reference sample, which is placed beside the test sample to record the artificial displacement fields caused by the self-heating effect of CT scanners. The detected displacement fields are then fitted by a parametric polynomial model, which is used to remove the unwanted artificial deformations in the test sample. Rescan tests of a stationary sample and real uniaxial compression tests performed on copper foam specimens demonstrate the accuracy, efficacy, and practicality of the presented RSC method.
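
    The compensation step reduces to a linear least-squares fit. The sketch below assumes the artificial field is a translation plus a single dilatation about the origin, fits that model to synthetic reference-sample displacements, and subtracts it from the test-sample data; this two-term model is an illustrative stand-in for the parametric polynomial of the paper.

        import numpy as np

        rng = np.random.default_rng(4)

        # Artificial displacements (translation t plus dilatation s) recorded
        # at voxels of the stationary reference sample between two scans.
        x_ref = rng.uniform(-5.0, 5.0, size=(200, 3))        # mm
        t_true, s_true = np.array([0.02, -0.01, 0.015]), 8e-4
        u_ref = t_true + s_true * x_ref + 1e-4 * rng.standard_normal((200, 3))

        # Fit u_i(x) = t_i + s * x_i by least squares: one translation per
        # axis and one dilatation coefficient shared by all axes.
        A = np.zeros((x_ref.size, 4))
        for i in range(3):
            A[i::3, i] = 1.0                  # translation column for axis i
        A[:, 3] = x_ref.ravel()               # dilatation column
        p, *_ = np.linalg.lstsq(A, u_ref.ravel(), rcond=None)
        t_fit, s_fit = p[:3], p[3]

        # Remove the artificial field from the test-sample measurements.
        x_test = rng.uniform(-5.0, 5.0, size=(5, 3))
        u_test = 0.1 + t_true + s_true * x_test     # real (0.1) + artificial
        u_corrected = u_test - (t_fit + s_fit * x_test)
        print("corrected displacements (should be ~0.1):")
        print(u_corrected)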


  16. Nano-Scale Sample Acquisition Systems for Small Class Exploration Spacecraft

    Science.gov (United States)

    Paulsen, G.

    2015-12-01

    The paradigm for space exploration is changing. Large and expensive missions are very rare and the space community is turning to smaller, lighter, and less expensive missions that can still perform great exploration. These missions are also within reach of commercial companies such as the Google Lunar X Prize teams that develop small scale lunar missions. Recent commercial endeavors such as Planet Labs Inc. and Skybox Imaging Inc. show that there are new benefits and business models associated with the miniaturization of space hardware. The Nano-Scale Sample Acquisition System includes the NanoDrill for capture of small rock cores and PlanetVac for capture of surface regolith. These two systems are part of the ongoing effort to develop "Micro Sampling" systems for deployment by small spacecraft with limited payload capacities. The ideal applications include prospecting missions to the Moon and asteroids. The MicroDrill is a rotary-percussive coring drill that captures cores 7 mm in diameter and up to 2 cm long. The drill weighs less than 1 kg and can capture a core from a 40 MPa strength rock within a few minutes, with less than 10 W of power and less than 10 N of preload. The PlanetVac is a pneumatic regolith acquisition system that can capture a surface sample in a touch-and-go maneuver. These sampling systems were integrated within the footpads of a commercial quadcopter for testing. As such, they could also be used by geologists on Earth to explore difficult-to-reach locations.

  17. Correcting for Systematic Bias in Sample Estimates of Population Variances: Why Do We Divide by n-1?

    Science.gov (United States)

    Mittag, Kathleen Cage

    An important topic presented in introductory statistics courses is the estimation of population parameters using samples. Students learn that when estimating population variances using sample data, we always get an underestimate of the population variance if we divide by n rather than n-1. One implication of this correction is that the degree of…
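
    The bias is easy to demonstrate by simulation: dividing the summed squared deviations by n underestimates the population variance by exactly the factor (n - 1)/n on average, while dividing by n - 1 is unbiased. A minimal check, with numbers chosen purely for illustration:

        import numpy as np

        rng = np.random.default_rng(5)
        sigma2, n, trials = 4.0, 5, 100_000

        # Many small samples from a population with variance sigma2 = 4.
        x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))
        ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

        print("divide by n  :", ss.mean() / n)        # ~ 4 * (n-1)/n = 3.2
        print("divide by n-1:", ss.mean() / (n - 1))  # ~ 4.0, unbiased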

  18. Effect of tubing length on the dispersion correction of an arterially sampled input function for kinetic modeling in PET.

    Science.gov (United States)

    O'Doherty, Jim; Chilcott, Anna; Dunn, Joel

    2015-11-01

    Arterial sampling with dispersion correction is routinely performed for kinetic analysis of PET studies. With the advent of PET-MRI systems, non-MR-safe instrumentation will have to be kept outside the scan room, which requires the length of the tubing between the patient and detector to increase, thus worsening the effects of dispersion. We examined the effects of dispersion in idealized radioactive blood studies using various lengths of tubing (1.5, 3, and 4.5 m) and applied a well-known transmission-dispersion model to attempt to correct the resulting traces. A simulation study was also carried out to examine the noise characteristics of the model. The model was applied to patient traces using 1.5 m acquisition tubing and extended to its use at 3 m. Satisfactory dispersion correction of the blood traces was achieved with the 1.5 m line. Predictions on the basis of experimental measurements, numerical simulations and noise analysis of the resulting traces show that corrections of blood data can also be achieved using the 3 m tubing. The effects of dispersion could not be corrected for the 4.5 m line by the selected transmission-dispersion model. On the basis of our setup, correction of dispersion in arterial sampling tubing up to 3 m by the transmission-dispersion model can be performed.
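
    For intuition, dispersion is often modelled as convolution of the true input function with a monoexponential kernel (1/tau) exp(-t/tau), in which case the exact inverse is c_true(t) = c_meas(t) + tau * dc_meas/dt. The sketch below uses that textbook kernel as an assumption, not the transmission-dispersion model of the paper; in practice the derivative term is what amplifies noise as tau grows with tubing length.

        import numpy as np

        tau = 5.0                              # s; grows with tubing length
        t = np.arange(0.0, 120.0, 0.5)         # dt = 0.5 s
        true = t * np.exp(-t / 10.0)           # toy arterial input function

        # Dispersed (measured) curve: convolution with the exponential kernel.
        kernel = np.exp(-t / tau) / tau
        measured = np.convolve(true, kernel)[: t.size] * 0.5

        # Dispersion correction: c_true = c_meas + tau * dc_meas/dt.
        corrected = measured + tau * np.gradient(measured, t)

        print("peak heights: true %.3f, measured %.3f, corrected %.3f"
              % (true.max(), measured.max(), corrected.max()))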

  19. A thermostat for precise measurements of thermoresistance of small samples

    International Nuclear Information System (INIS)

    Rusinowski, Z.; Slowinski, B.; Winiewski, R.

    1996-01-01

    In this work, a simple experimental set-up is described in which special attention is paid to the important problem of thermal stability in thermoresistance measurements of small manganin samples.

  20. Auto-validating von Neumann rejection sampling from small phylogenetic tree spaces

    Directory of Open Access Journals (Sweden)

    York Thomas

    2009-01-01

    Full Text Available Background: In phylogenetic inference one is interested in obtaining samples from the posterior distribution over the tree space on the basis of some observed DNA sequence data. One of the simplest sampling methods is the rejection sampler due to von Neumann. Here we introduce an auto-validating version of the rejection sampler, via interval analysis, to rigorously draw samples from posterior distributions over small phylogenetic tree spaces. Results: The posterior samples from the auto-validating sampler are used to rigorously (i) estimate posterior probabilities for different rooted topologies based on mitochondrial DNA from human, chimpanzee and gorilla, (ii) conduct a non-parametric test of rate variation between protein-coding and tRNA-coding sites from three primates and (iii) obtain a posterior estimate of the human-Neanderthal divergence time. Conclusion: This solves the open problem of rigorously drawing independent and identically distributed samples from the posterior distribution over rooted and unrooted small tree spaces (3 or 4 taxa) based on any multiply-aligned sequence data.
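
    Stripped of the interval-arithmetic validation that is the contribution of the paper, the underlying von Neumann sampler is short. The sketch below draws from an unnormalised Gamma(3, 1) toy "posterior" under an exponential envelope; the target, proposal and bound are illustrative choices, not the tree-space posterior of the paper.

        import numpy as np

        rng = np.random.default_rng(6)

        def rejection_sample(log_target, proposal_rvs, log_proposal, log_m, n):
            # Accept x ~ proposal with probability target(x) / (M * proposal(x)).
            # log_m must satisfy log_target(x) <= log_m + log_proposal(x) for
            # all x; rigorously validating this envelope (here assumed) is what
            # the paper's interval analysis provides.
            out = []
            while len(out) < n:
                x = proposal_rvs()
                if np.log(rng.uniform()) < log_target(x) - log_m - log_proposal(x):
                    out.append(x)
            return np.array(out)

        # Unnormalised Gamma(3, 1) target, Exponential(rate 0.5) proposal;
        # target/proposal = 2 x^2 exp(-x/2) is maximised at x = 4.
        log_target = lambda x: 2 * np.log(x) - x
        log_proposal = lambda x: np.log(0.5) - 0.5 * x
        log_m = 2 * np.log(4.0) - 2.0 - np.log(0.5)

        s = rejection_sample(log_target, lambda: rng.exponential(2.0),
                             log_proposal, log_m, 5000)
        print("sample mean (expect ~3):", s.mean())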

  1. Correct liquid scintillation counting of steroids and glycosides in RIA samples: a comparison of xylene-based, dioxane-based and colloidal counting systems. Chapter 14

    International Nuclear Information System (INIS)

    Spolders, H.

    1977-01-01

    In RIA, the following parameters are important for accurate liquid scintillation counting: (1) absence of chemiluminescence; (2) stability of the count rate; (3) dissolving properties for the sample. For samples with varying colours, a quench correction must be applied. For any type of accurate quench correction, a homogeneous sample is necessary. This can be obtained if the proteins and the buffer can be dissolved completely in the scintillator solution. In this paper, these criteria are compared for xylene-based, dioxane-based and colloidal scintillation solutions, for either bound or free antigens of different polarity. The labelling radioisotope used was ³H. Using colloidal scintillators with plasma and buffer samples, phasing or sedimentation of salt or proteins sometimes occurs. The influence of sedimentation or phasing on count rate stability and correct quench correction is illustrated by varying the ratio between the scintillator solution and a RIA sample containing the semi-polar steroid aldosterone. (author)

  2. A two-phase sampling survey for nonresponse and its paradata to correct nonresponse bias in a health surveillance survey.

    Science.gov (United States)

    Santin, G; Bénézet, L; Geoffroy-Perez, B; Bouyer, J; Guéguen, A

    2017-02-01

    The decline in participation rates in surveys, including epidemiological surveillance surveys, has become a real concern, since it may increase nonresponse bias. The aim of this study is to estimate the contribution of a complementary survey among a subsample of nonrespondents, and the additional contribution of paradata, in correcting for nonresponse bias in an occupational health surveillance survey. In 2010, 10,000 workers were randomly selected and sent a postal questionnaire. Sociodemographic data were available for the whole sample. After collection of the questionnaires, a complementary survey among a random subsample of 500 nonrespondents was performed using a questionnaire administered by an interviewer. Paradata were collected for the complete subsample of the complementary survey. Nonresponse bias in the initial sample and in the combined samples was assessed using variables from administrative databases that were available for the whole sample and not subject to differential measurement errors. Corrected prevalences were estimated by a reweighting technique, first using the initial survey alone and then the initial and complementary surveys combined, under several assumptions regarding the missing data process. Results were compared by computing relative errors. The response rates of the initial and complementary surveys were 23.6% and 62.6%, respectively. For the initial and the combined surveys, the relative errors decreased after correction for nonresponse on sociodemographic variables. For the combined surveys without paradata, relative errors decreased compared with the initial survey. The contribution of the paradata was weak. When a complex descriptive survey has a low response rate, a short complementary survey among nonrespondents, with a protocol that aims to maximize the response rate, is useful. The contribution of sociodemographic variables to correcting for nonresponse bias is important, whereas the additional contribution of paradata is weak.
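
    The reweighting step can be sketched simply: respondents are weighted by the inverse of the response rate in their sociodemographic cell, so that poorly responding cells are not under-represented in the corrected estimate. The two-cell setup, response rates and prevalences below are invented purely for illustration.

        import numpy as np

        rng = np.random.default_rng(7)

        # Whole sample: a sociodemographic cell known for everyone and an
        # outcome observed only for respondents; response depends on the cell.
        N = 10_000
        cell = rng.integers(0, 2, size=N)
        outcome = rng.uniform(size=N) < np.where(cell == 0, 0.30, 0.15)
        responds = rng.uniform(size=N) < np.where(cell == 0, 0.15, 0.35)

        # Naive respondent-only prevalence over-weights the responsive cell.
        print("naive     :", outcome[responds].mean())

        # Reweight each respondent by N_cell / n_respondents_in_cell, using
        # only variables available for the whole sample.
        w = np.zeros(N)
        for c in (0, 1):
            m = (cell == c) & responds
            w[m] = (cell == c).sum() / m.sum()
        print("reweighted:", np.average(outcome[responds], weights=w[responds]))
        print("truth     :", outcome.mean())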

  3. Systematic studies of small scintillators for new sampling calorimeter

    Indian Academy of Sciences (India)

    A new sampling calorimeter using very thin scintillators and the multi-pixel photon counter (MPPC) has been proposed to produce better position resolution for the international linear collider (ILC) experiment. As part of this R & D study, small plastic scintillators of different sizes, thickness and wrapping reflectors are ...

  4. Assessment of radioactivity for 24 hours urine sample depending on correction factor by using creatinine

    International Nuclear Information System (INIS)

    Kharita, M. H.; Maghrabi, M.

    2006-09-01

    Assessment of intake and internal dose requires knowing the amount of radioactivity in a 24 hour urine sample. Sometimes it is difficult to get a 24 hour sample, because the method is not comfortable and in most cases the workers refuse to collect this amount of urine. This work focuses on finding a correction factor that converts any urine sample, whatever its size, to an equivalent 24 hour sample based on the amount of creatinine it contains. The 24 hour excretion of a radionuclide is then calculated from the amount of activity and creatinine in the urine sample, assuming an average creatinine excretion rate of 1.7 g per 24 hours. Several urine samples were collected from occupationally exposed workers; the amounts and ratios of creatinine and activity in these samples were determined and then normalized to the 24 hour excretion of the radionuclide. The average chemical recovery was 77%. It should be emphasized that this method should only be used if a 24 hour sample is not possible to collect. (author)
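
    The normalization itself is a one-line calculation: the activity found in a spot sample is scaled by the ratio of the assumed daily creatinine excretion (1.7 g per 24 hours) to the creatinine measured in the sample. A minimal sketch, with hypothetical numbers:

        def activity_per_24h(activity_bq, creatinine_g, daily_creatinine_g=1.7):
            """Scale the activity in a spot urine sample to an estimated 24 hour
            excretion, assuming creatinine is excreted at a constant rate of
            daily_creatinine_g per 24 hours (1.7 g in the work above)."""
            return activity_bq * daily_creatinine_g / creatinine_g

        # e.g. 0.4 Bq measured in a sample containing 0.5 g creatinine:
        print(activity_per_24h(0.4, 0.5))   # -> 1.36 Bq per 24 hours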

  5. Suitability of small diagnostic peripheral-blood samples for cell-therapy studies.

    Science.gov (United States)

    Stephanou, Coralea; Papasavva, Panayiota; Zachariou, Myria; Patsali, Petros; Epitropou, Marilena; Ladas, Petros; Al-Abdulla, Ruba; Christou, Soteroulla; Antoniou, Michael N; Lederer, Carsten W; Kleanthous, Marina

    2017-02-01

    Primary hematopoietic stem and progenitor cells (HSPCs) are key components of cell-based therapies for blood disorders and are thus the authentic substrate for related research. We propose that ubiquitous small-volume diagnostic samples represent a readily available and as yet untapped resource of primary patient-derived cells for cell- and gene-therapy studies. In the present study we compare isolation and storage methods for HSPCs from normal and thalassemic small-volume blood samples, considering genotype, density-gradient versus lysis-based cell isolation and cryostorage media with different serum contents. Downstream analyses include viability, recovery, differentiation in semi-solid media and performance in liquid cultures and viral transductions. We demonstrate that HSPCs isolated either by ammonium-chloride potassium (ACK)-based lysis or by gradient isolation are suitable for functional analyses in clonogenic assays, high-level HSPC expansion and efficient lentiviral transduction. For cryostorage of cells, gradient isolation is superior to ACK lysis, and cryostorage in freezing media containing 50% fetal bovine serum demonstrated good results across all tested criteria. For assays on freshly isolated cells, ACK lysis performed similar to, and for thalassemic samples better than, gradient isolation, at a fraction of the cost and hands-on time. All isolation and storage methods show considerable variation within sample groups, but this is particularly acute for density gradient isolation of thalassemic samples. This study demonstrates the suitability of small-volume blood samples for storage and preclinical studies, opening up the research field of HSPC and gene therapy to any blood diagnostic laboratory with corresponding bioethics approval for experimental use of surplus material. Copyright © 2017 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  6. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    Science.gov (United States)

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness-of-fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square, but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra and Bentler's mean-scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of the Satorra-Bentler statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic, in order to improve its robustness under small samples. A simple simulation study shows that this third-moment-adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.

  7. A General Linear Method for Equating with Small Samples

    Science.gov (United States)

    Albano, Anthony D.

    2015-01-01

    Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…

  8. Biota dose assessment of small mammals sampled near uranium mines in northern Arizona

    Energy Technology Data Exchange (ETDEWEB)

    Jannik, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Minter, K. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kuhne, W. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kubilius, W. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2018-01-09

    In 2015, the U.S. Geological Survey (USGS) collected approximately 50 small mammal carcasses from Northern Arizona uranium mines and other background locations. Based on the highest gross alpha results, 11 small mammal samples were selected for radioisotopic analyses. None of the background samples had significant gross alpha results. The 11 small mammals were identified relative to the three ‘indicator’ mines located south of Fredonia, AZ on the Kanab Plateau (Kanab North Mine, Pinenut Mine, and Arizona 1 Mine) (Figure 1-1), which are operated by Energy Fuels Resources Inc. (EFRI). EFRI annually reports soil analyses for uranium and radium-226 using Arizona Department of Environmental Quality (ADEQ)-approved Standard Operating Procedures for Soil Sampling (EFRI 2016a, 2016b, 2017). In combination with the USGS small mammal radioisotopic tissue analyses, a biota dose assessment was completed by Savannah River National Laboratory (SRNL) using the RESidual RADioactivity-BIOTA (RESRAD-BIOTA, V. 1.8) dose assessment tool provided by the Argonne National Laboratory (ANL 2017).

  9. A combined Importance Sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models

    International Nuclear Information System (INIS)

    Echard, B.; Gayton, N.; Lemaire, M.; Relun, N.

    2013-01-01

    Applying reliability methods to a complex structure is often delicate for two main reasons. First, such a structure is fortunately designed with codified rules leading to a large safety margin, which means that failure is a small probability event. Such a probability level is difficult to assess efficiently. Second, the structure's mechanical behaviour is modelled numerically in an attempt to reproduce the real response, and the numerical model tends to be more and more time-demanding as its complexity is increased to improve accuracy and to consider particular mechanical behaviour. As a consequence, performing a large number of model computations cannot be considered in order to assess the failure probability. To overcome these issues, this paper proposes an original and easily implementable method called AK-IS, for active learning and Kriging-based Importance Sampling. This new method is based on the AK-MCS algorithm previously published by Echard et al. [AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Structural Safety 2011;33(2):145-54]. It associates the Kriging metamodel and its advantageous stochastic property with the Importance Sampling method to assess small failure probabilities. It enables the correction or validation of the FORM approximation with only a few mechanical model computations. The efficiency of the method is first proved on two academic applications. It is then applied to assess the reliability of a challenging aerospace case study subjected to fatigue.
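
    The importance-sampling half of the method can be sketched without the Kriging metamodel: sampling is re-centred at the FORM design point so that the rare failure region is hit frequently, and each sample is reweighted by the ratio of the true to the sampling density. The toy limit state and its design point below are assumptions for illustration; in AK-IS a Kriging surrogate would stand in for the expensive model g.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(8)

        # Toy limit state in standard normal space: failure when g(u) <= 0.
        # Its FORM design point is u* = (beta, 0), and the failure probability
        # (~3e-5) is far too small for cheap crude Monte Carlo.
        beta = 4.0
        def g(u):
            return beta - u[:, 0] + 0.05 * u[:, 1] ** 2

        # Draw from a standard normal re-centred at u* and reweight.
        n = 20_000
        shift = np.array([beta, 0.0])
        u = rng.standard_normal((n, 2)) + shift
        log_w = (stats.norm.logpdf(u).sum(axis=1)
                 - stats.norm.logpdf(u - shift).sum(axis=1))
        pf = np.mean((g(u) <= 0) * np.exp(log_w))

        print("IS estimate of pf :", pf)
        print("FORM approximation:", stats.norm.cdf(-beta))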

  10. Automated Sampling and Extraction of Krypton from Small Air Samples for Kr-85 Measurement Using Atom Trap Trace Analysis

    International Nuclear Information System (INIS)

    Hebel, S.; Hands, J.; Goering, F.; Kirchner, G.; Purtschert, R.

    2015-01-01

    Atom Trap Trace Analysis (ATTA) provides the capability of measuring the krypton-85 concentration in microlitre amounts of krypton extracted from air samples of about 1 litre. This sample size is sufficiently small to allow for a range of applications, including on-site spot sampling and continuous sampling over periods of several hours. All samples can be easily handled and transported to an off-site laboratory for ATTA measurement, or stored and analyzed on demand. Bayesian sampling methodologies can be applied by blending samples for bulk measurement and performing in-depth analysis as required. A prerequisite for measurement is the extraction of a pure krypton fraction from the sample. This paper introduces an extraction unit able to isolate the krypton in small ambient air samples with high speed, high efficiency and in a fully automated manner, using a combination of cryogenic distillation and gas chromatography. Air samples are collected using an automated smart sampler developed in-house to achieve a constant sampling rate over adjustable time periods ranging from 5 minutes to 3 hours per sample. The smart sampler can be deployed in the field and operate on battery for one week to take up to 60 air samples. This high flexibility of sampling and the fast, robust sample preparation are a valuable tool for research and the application of Kr-85 measurements to novel Safeguards procedures. (author)

  11. Thermal neutron self-shielding correction factors for large sample instrumental neutron activation analysis using the MCNP code

    International Nuclear Information System (INIS)

    Tzika, F.; Stamatelatos, I.E.

    2004-01-01

    Thermal neutron self-shielding within large samples was studied using the Monte Carlo neutron transport code MCNP. The code enabled a three-dimensional modeling of the actual source and geometry configuration including reactor core, graphite pile and sample. Neutron flux self-shielding correction factors derived for a set of materials of interest for large sample neutron activation analysis are presented and evaluated. Simulations were experimentally verified by measurements performed using activation foils. The results of this study can be applied in order to determine neutron self-shielding factors of unknown samples from the thermal neutron fluxes measured at the surface of the sample
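
    For a feel of the magnitudes involved, a textbook first-order approximation for a slab sample in a perpendicular beam gives a self-shielding factor f = (1 - exp(-x))/x with x = Sigma * t; the paper itself computes such factors with full MCNP transport, which also captures scattering and the real reactor geometry, so the sketch below is only a rough cross-check.

        import numpy as np

        def slab_self_shielding(sigma_cm1, thickness_cm):
            # Average flux in the slab relative to the incident flux for a
            # perpendicular beam: f = (1 - exp(-x)) / x, x = Sigma * t.
            x = sigma_cm1 * thickness_cm
            return (1.0 - np.exp(-x)) / x if x > 0 else 1.0

        # A 2 cm thick sample with macroscopic cross section 0.05 per cm:
        print(slab_self_shielding(0.05, 2.0))   # ~0.952, i.e. ~5% depression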


  13. STATISTICAL EVALUATION OF SMALL SCALE MIXING DEMONSTRATION SAMPLING AND BATCH TRANSFER PERFORMANCE - 12093

    Energy Technology Data Exchange (ETDEWEB)

    GREER DA; THIEN MG

    2012-01-12

    The ability to effectively mix, sample, certify, and deliver consistent batches of High Level Waste (HLW) feed from the Hanford Double Shell Tanks (DST) to the Waste Treatment and Immobilization Plant (WTP) presents a significant mission risk, with potential to impact mission length and the quantity of HLW glass produced. DOE's Tank Operations Contractor, Washington River Protection Solutions (WRPS), has previously presented the results of mixing performance in two different sizes of small scale DSTs to support scale-up estimates of full scale DST mixing performance. Currently, sufficient sampling of DSTs is one of the largest programmatic risks that could prevent timely delivery of high level waste to the WTP. WRPS has performed small scale mixing and sampling demonstrations to study the ability to sufficiently sample the tanks. The statistical evaluation of the demonstration results, which leads to the conclusion that the two scales of small DST are behaving similarly and that full scale performance is predictable, will be presented. This work is essential to reduce the risk of requiring a new dedicated feed sampling facility and will guide future optimization work to ensure the waste feed delivery mission will be accomplished successfully. This paper will focus on the analytical data collected from mixing, sampling, and batch transfer testing in the small scale mixing demonstration tanks and how those data are being interpreted to begin to understand the relationship between samples taken prior to transfer and samples from the subsequent batches transferred. An overview of the types of data collected and examples of typical raw data will be provided. The paper will then discuss the processing and manipulation of the data, which is necessary to begin evaluating sampling and batch transfer performance. This discussion will also include the evaluation of the analytical measurement capability with regard to the simulant material used in the demonstration tests.

  14. Privacy problems in the small sample selection

    Directory of Open Access Journals (Sweden)

    Loredana Cerbara

    2013-05-01

    Full Text Available The side of social research that uses small samples for the production of microdata today faces operating difficulties due to the privacy law. The privacy code is an important and necessary law, because it guarantees the rights of Italian citizens, as already happens in other countries of the world. However, it does not seem appropriate to limit once more the data production possibilities of the national research centres, which are already compromised by insufficient funds, a problem becoming more and more frequent in the research field. It would therefore be necessary to include in the law the possibility of using telephone lists to select samples useful for activities directly of interest and importance to citizens, such as data collection carried out on the basis of opinion polls by the research centres of the Italian CNR and some universities.

  15. Attenuation correction for freely moving small animal brain PET studies based on a virtual scanner geometry

    International Nuclear Information System (INIS)

    Angelis, G I; Kyme, A Z; Ryder, W J; Fulton, R R; Meikle, S R

    2014-01-01

    Attenuation correction in positron emission tomography brain imaging of freely moving animals is a very challenging problem, since the torso of the animal is often within the field of view and introduces a non-negligible attenuating factor that can degrade the quantitative accuracy of the reconstructed images. In the context of unrestrained small animal imaging, estimation of the attenuation correction factors without the need for a transmission scan is highly desirable. An attractive approach that avoids the need for a transmission scan involves the generation of the hull of the animal's head based on the reconstructed motion-corrected emission images. However, this approach ignores the attenuation introduced by the animal's torso. In this work, we propose a virtual scanner geometry which moves in synchrony with the animal's head and discriminates between those events that traversed only the animal's head (and therefore can be accurately compensated for attenuation) and those that might have also traversed the animal's torso. For each recorded pose of the animal's head a new virtual scanner geometry is defined, and therefore a new system matrix must be calculated, leading to a time-varying system matrix. This new approach was evaluated on phantom data acquired on the microPET Focus 220 scanner using a custom-made phantom and step-wise motion. Results showed that when the animal's torso is within the FOV and not appropriately accounted for during attenuation correction, it can lead to bias of up to 10%. Attenuation correction was more accurate when the virtual scanner was employed, leading to improved quantitative estimates (bias < 2%) without the need to account for the attenuation introduced by the extraneous compartment. Although the proposed method requires increased computational resources, it can provide a reliable approach towards quantitatively accurate attenuation correction for freely moving animal studies. (paper)

  16. Radiocarbon measurements of small gaseous samples at CologneAMS

    Science.gov (United States)

    Stolz, A.; Dewald, A.; Altenkirch, R.; Herb, S.; Heinze, S.; Schiffer, M.; Feuerstein, C.; Müller-Gatermann, C.; Wotte, A.; Rethemeyer, J.; Dunai, T.

    2017-09-01

    A second SO-110 B (Arnold et al., 2010) ion source was installed at the 6 MV CologneAMS for the measurement of gaseous samples. For the gas supply, a dedicated device from Ionplus AG was connected to the ion source. Special effort was devoted to determining optimized operating parameters for the ion source which give a high carbon current output and a high ¹⁴C⁻ yield. The latter is essential in cases where only small samples are available. Additionally, a modified immersion lens and modified target pieces were tested, and the target position was optimized.

  17. A multi-dimensional sampling method for locating small scatterers

    International Nuclear Information System (INIS)

    Song, Rencheng; Zhong, Yu; Chen, Xudong

    2012-01-01

    A multiple signal classification (MUSIC)-like multi-dimensional sampling method (MDSM) is introduced to locate small three-dimensional scatterers using electromagnetic waves. The indicator is built with the most stable part of signal subspace of the multi-static response matrix on a set of combinatorial sampling nodes inside the domain of interest. It has two main advantages compared to the conventional MUSIC methods. First, the MDSM is more robust against noise. Second, it can work with a single incidence even for multi-scatterers. Numerical simulations are presented to show the good performance of the proposed method. (paper)
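
    The core of any MUSIC-type indicator is a projection onto the noise subspace of the multi-static response matrix. The one-dimensional toy below (far-field Green's vectors, two point scatterers, additive noise, all of them illustrative assumptions rather than the multi-dimensional sampling scheme of the paper) shows the indicator peaking at the scatterer locations.

        import numpy as np

        rng = np.random.default_rng(9)

        # Two point scatterers, 16 receivers, and a Born-approximation
        # multi-static response matrix K built from Green's vectors g(x).
        k = 2 * np.pi                          # wavenumber (wavelength 1)
        receivers = np.linspace(-2.0, 2.0, 16)
        scatterers = [-0.45, 0.8]

        def green(x):
            # Far-field Green's-function vector from point x to all receivers.
            return np.exp(1j * k * np.abs(receivers - x))

        K = sum(np.outer(green(s), green(s)) for s in scatterers)
        K = K + 0.05 * (rng.standard_normal(K.shape)
                        + 1j * rng.standard_normal(K.shape))

        # Noise subspace: singular vectors beyond the number of scatterers.
        U, sv, _ = np.linalg.svd(K)
        noise = U[:, len(scatterers):]

        # Indicator 1 / ||P_noise g(x)|| is large at true scatterer positions.
        for x in (-0.45, 0.8, 1.5):
            val = 1.0 / np.linalg.norm(noise.conj().T @ green(x))
            print(f"indicator at x = {x:+.2f}: {val:.2f}")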

  18. SU-E-T-469: A Practical Approach for the Determination of Small Field Output Factors Using Published Monte Carlo Derived Correction Factors

    International Nuclear Information System (INIS)

    Calderon, E; Siergiej, D

    2014-01-01

    Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Measured output factor values between detectors are known to have large deviations as field sizes are decreased. No set standard to resolve this difference in measurement exists. We observed differences between measured output factors of up to 14% using two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source to surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of detector response on field size. An equation expressing the published Monte Carlo correction factors as a function of field size was derived for each detector. The measured output factors were then multiplied by the calculated correction factors. EBT3 gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3 to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0 to 3.4%. Output factors determined with film agree within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small field output factors for the EDGE and A16 detectors. Using this method, we were able to decrease the deviation between the two detectors from 14.8% to 3.4%.
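
    Applying published correction factors in this way amounts to expressing k as a smooth function of field size and multiplying the daisy-chained measured output factors by the interpolated values. All numbers in the sketch below are invented for illustration; they are not the measured or published values of the paper.

        import numpy as np

        # Hypothetical raw output factors (after daisy-chaining to the
        # 10.4 cm x 10.4 cm reference) and published Monte Carlo correction
        # factors k(field size) for the same detector.
        cone_mm = np.array([5.0, 7.5, 10.0, 12.5, 15.0])
        of_meas = np.array([0.62, 0.71, 0.77, 0.81, 0.84])
        k_size_mm = np.array([4.0, 8.0, 16.0])
        k_values = np.array([0.95, 0.97, 0.99])   # diode over-response: k < 1

        # Fit k as a smooth (here quadratic) function of field size, then
        # multiply the measured output factors by the interpolated k.
        coef = np.polyfit(k_size_mm, k_values, deg=2)
        of_corrected = of_meas * np.polyval(coef, cone_mm)

        for d, raw, c in zip(cone_mm, of_meas, of_corrected):
            print(f"{d:5.1f} mm cone: raw {raw:.3f} -> corrected {c:.3f}")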

  19. On the Structure of Cortical Microcircuits Inferred from Small Sample Sizes.

    Science.gov (United States)

    Vegué, Marina; Perin, Rodrigo; Roxin, Alex

    2017-08-30

    The structure in cortical microcircuits deviates from what would be expected in a purely random network, which has been seen as evidence of clustering. To address this issue, we sought to reproduce the nonrandom features of cortical circuits by considering several distinct classes of network topology, including clustered networks, networks with distance-dependent connectivity, and those with broad degree distributions. To our surprise, we found that all of these qualitatively distinct topologies could account equally well for all reported nonrandom features despite being easily distinguishable from one another at the network level. This apparent paradox was a consequence of estimating network properties given only small sample sizes. In other words, networks that differ markedly in their global structure can look quite similar locally. This makes inferring network structure from small sample sizes, a necessity given the technical difficulty inherent in simultaneous intracellular recordings, problematic. We found that a network statistic called the sample degree correlation (SDC) overcomes this difficulty. The SDC depends only on parameters that can be estimated reliably given small sample sizes and is an accurate fingerprint of every topological family. We applied the SDC criterion to data from rat visual and somatosensory cortex and discovered that the connectivity was not consistent with any of these main topological classes. However, we were able to fit the experimental data with a more general network class, of which all previous topologies were special cases. The resulting network topology could be interpreted as a combination of physical spatial dependence and nonspatial, hierarchical clustering. SIGNIFICANCE STATEMENT The connectivity of cortical microcircuits exhibits features that are inconsistent with a simple random network. Here, we show that several classes of network models can account for this nonrandom structure despite qualitative differences in their global structure.

  20. Small incision corneal refractive surgery using the small incision lenticule extraction (SMILE) procedure for the correction of myopia and myopic astigmatism: results of a 6 month prospective study.

    Science.gov (United States)

    Sekundo, Walter; Kunert, Kathleen S; Blum, Marcus

    2011-03-01

    This 6 month prospective multi-centre study evaluated the feasibility of performing myopic femtosecond lenticule extraction (FLEx) through a small incision using the small incision lenticule extraction (SMILE) procedure. The design was a prospective, non-randomised clinical trial. Ninety-one eyes of 48 patients with myopia with and without astigmatism completed the final 6 month follow-up. The patients' mean age was 35.3 years. Their preoperative mean spherical equivalent (SE) was −4.75±1.56 D. A refractive lenticule of intrastromal corneal tissue was cut utilising a prototype of the Carl Zeiss Meditec AG VisuMax femtosecond laser system. Simultaneously, two opposite small ‘pocket’ incisions were created by the laser system. Thereafter, the lenticule was manually dissected with a spatula and removed through one of the incisions using modified McPherson forceps. Outcome measures were uncorrected visual acuity (UCVA) and best spectacle-corrected visual acuity (BSCVA) after 6 months, objective and manifest refraction, slit-lamp examination, side effects and a questionnaire. Six months postoperatively the mean SE was −0.01±0.49 D. Most treated eyes (95.6%) were within ±1.0 D, and 80.2% were within ±0.5 D of the intended correction. Of the eyes treated, 83.5% had a UCVA of 1.0 (20/20) or better; 53% remained unchanged, 32.3% gained one line and 3.3% gained two lines of BSCVA, while 8.8% lost one line and 1.1% lost two or more lines of BSCVA. When answering a standardised questionnaire, 93.3% of patients were satisfied with the results obtained and would undergo the procedure again. SMILE is a promising new flapless, minimally invasive refractive procedure to correct myopia.

  1. SU-E-T-101: Determination and Comparison of Correction Factors Obtained for TLDs in Small Field Lung Heterogenous Phantom Using Acuros XB and EGSnrc

    International Nuclear Information System (INIS)

    Soh, R; Lee, J; Harianto, F

    2014-01-01

    Purpose: To determine and compare the correction factors obtained for TLDs in a 2 × 2 cm² small field in a lung heterogeneous phantom using Acuros XB (AXB) and EGSnrc. Methods: This study simulates the correction factors due to the perturbation of TLD-100 chips (Harshaw/Thermo Scientific, 3 × 3 × 0.9 mm³, 2.64 g/cm³) in a small-field lung medium for Stereotactic Body Radiation Therapy (SBRT). A physical lung phantom was simulated by a 14 cm thick composite cork phantom (0.27 g/cm³, HU: −743 ± 11) sandwiched between 4 cm thick slabs of Plastic Water (CIRS, Norfolk). Composite cork has been shown to be a good lung substitute material for dosimetric studies. A 6 MV photon beam from a Varian Clinac iX (Varian Medical Systems, Palo Alto, CA) with field size 2 × 2 cm² was simulated. Depth dose profiles were obtained from the Eclipse treatment planning system Acuros XB (AXB) and independently from DOSxyznrc, EGSnrc. Correction factors were calculated as the ratio of unperturbed to perturbed dose. Since AXB has limitations in simulating actual material compositions, EGSnrc also simulated the AXB-based material composition for comparison to the actual lung phantom. Results: TLD-100, with its finite size and relatively high density, causes significant perturbation in a 2 × 2 cm² small field in a low-density lung phantom. Correction factors calculated by both EGSnrc and AXB were found to be as low as 0.9. It is expected that the correction factor obtained by EGSnrc will be more accurate, as it is able to simulate the actual phantom material compositions. AXB has a limited material library and therefore only approximates the composition of the TLD, composite cork and Plastic Water, contributing to uncertainties in the TLD correction factors. Conclusion: It is expected that the correction factors obtained by EGSnrc will be more accurate. Studies will be done to investigate the correction factors for higher energies, where perturbation may be more pronounced.

  2. Inverse Gaussian model for small area estimation via Gibbs sampling

    African Journals Online (AJOL)

    ADMIN

    For example, MacGibbon and Tomberlin (1989) have considered estimating small area rates and binomial parameters using empirical Bayes methods. Stroud (1991) used a hierarchical Bayes approach for univariate natural exponential families with quadratic variance functions in sample survey applications, while Chaubey ...

  3. Rules of attraction: The role of bait in small mammal sampling at ...

    African Journals Online (AJOL)

    Baits or lures are commonly used for surveying small mammal communities, not only because they attract large numbers of these animals, but also because they provide sustenance for trapped individuals. In this study we used Sherman live traps with five bait treatments to sample small mammal populations at three ...

  4. Identification of mistakes and their correction by a small group discussion as a revision exercise at the end of a teaching module in biochemistry.

    Science.gov (United States)

    Bobby, Zachariah; Nandeesha, H; Sridhar, M G; Soundravally, R; Setiya, Sajita; Babu, M Sathish; Niranjan, G

    2014-01-01

    Graduate medical students often get little opportunity to clarify their doubts and reinforce their concepts after lecture classes. The Medical Council of India (MCI) encourages group discussions among students. We evaluated the effect of having graduate medical students identify mistakes in a given set of wrong statements and correct them in a small group discussion as a revision exercise. At the end of a module, a pre-test consisting of multiple-choice questions (MCQs) was conducted. Later, a set of incorrect statements related to the topic was given to the students, who were asked to identify the mistakes and correct them in a small group discussion. The effects on low, medium and high achievers were evaluated by a post-test and delayed post-tests with the same set of MCQs. The mean post-test marks were significantly higher in all three groups compared with the pre-test marks. The gain from the small group discussion was equal among low, medium and high achievers, and it was retained in all three groups after 15 days. Identification of mistakes in statements and their correction by a small group discussion is an effective, but unconventional, revision exercise in biochemistry. Copyright 2014, NMJI.

  5. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Dasari, K.B.; Acharya, R.

    2014-01-01

    Evaluation of the total uncertainty budget of a determined concentration value is important under a quality assurance programme. Concentration calculation in NAA is carried out by relative NAA or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for small and large sample analysis of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples has been evaluated and compared. (author)

  6. Corrections to primordial nucleosynthesis

    International Nuclear Information System (INIS)

    Dicus, D.A.; Kolb, E.W.; Gleeson, A.M.; Sudarshan, E.C.G.; Teplitz, V.L.; Turner, M.S.

    1982-01-01

    The changes in primordial nucleosynthesis resulting from small corrections to rates for weak processes that connect neutrons and protons are discussed. The weak rates are corrected by an improved treatment of Coulomb and radiative corrections, and by the inclusion of plasma effects. The calculations lead to a systematic decrease in the predicted ⁴He abundance of about ΔY = 0.0025. The relative changes in other primordial abundances are also 1 to 2%.

  7. Thermal neutron absorption cross section of small samples

    International Nuclear Information System (INIS)

    Nghiep, T.D.; Vinh, T.T.; Son, N.N.; Vuong, T.V.; Hung, N.T.

    1989-01-01

    A modified steady-state method for determining the macroscopic thermal neutron absorption cross section of small samples, 500 cm³ in volume, is described. The method uses a moderating block of paraffin, a Pu-Be neutron source emitting 1.1×10⁶ n·s⁻¹, an SNM-14 counter and ordinary counting equipment. Cross sections in the interval from 2.6 to 1.3×10⁴ (in units of 10⁻³ cm² g⁻¹) were measured. The experimental data are described by calculation formulae. 7 refs.; 4 figs

  8. Magnification bias corrections to galaxy-lensing cross-correlations

    International Nuclear Information System (INIS)

    Ziour, Riad; Hui, Lam

    2008-01-01

    Galaxy-galaxy or galaxy-quasar lensing can provide important information on the mass distribution in the Universe. It consists of correlating the lensing signal (either shear or magnification) of a background galaxy/quasar sample with the number density of a foreground galaxy sample. However, the foreground galaxy density is inevitably altered by the magnification bias due to the mass between the foreground and the observer, leading to a correction to the observed galaxy-lensing signal. The aim of this paper is to quantify this correction. The single most important determining factor is the foreground redshift z_f: the correction is small if the foreground galaxies are at low redshifts but can become non-negligible for sufficiently high redshifts. For instance, we find that for the multipole l = 1000, the correction is above 1% × (5s_f − 2)/b_f for z_f ≳ 0.37, and above 5% × (5s_f − 2)/b_f for z_f ≳ 0.67, where s_f is the number count slope of the foreground sample and b_f its galaxy bias. These considerations are particularly important for geometrical measures, such as the Jain and Taylor ratio or its generalization by Zhang et al. Assuming (5s_f − 2)/b_f = 1, we find that the foreground redshift should be limited to z_f ≲ 0.45 in order to avoid biasing the inferred dark energy equation of state w by more than 5%, and that even for a low foreground redshift (≲0.45), the background samples must be well separated from the foreground to avoid incurring a bias of similar magnitude. Lastly, we briefly comment on the possibility of obtaining these geometrical measures without using galaxy shapes, using instead magnification bias itself.

  9. System for sampling liquids in small jugs obturated by screwed taps

    International Nuclear Information System (INIS)

    Besnier, J.

    1995-01-01

    This invention describes a machine which automatically samples liquids from small jugs obturated by screwed taps. The device can be situated in an isolated room in order to work with radioactive liquids. The machine can be divided into three main parts: a module to catch the jug, in order to take and fix it; a module to open and close it; and a module to sample. The latter draws the liquid with a suction device and transfers it to a container so that the sample can be analysed. (TEC)

  10. Research of pneumatic control transmission system for small irradiation samples

    International Nuclear Information System (INIS)

    Bai Zhongxiong; Zhang Haibing; Rong Ru; Zhang Tao

    2008-01-01

    In order to reduce the absorbed dose to the operator, pneumatic control has been adopted to realize the rapid transmission of small irradiation samples. The on/off state of the pneumatic circuit and the directions of the rapid transmission system are controlled by the electrical control part. The main program initializes the system, detects the position of the manual/automatic change-over switch, and calls the corresponding subprogram to achieve automatic or manual operation. The automatic subprogram carries out automatic sample transmission; the manual subprogram handles deflation and the back-and-forth movement of the irradiation samples. This paper introduces the implementation of the system in detail, in terms of both hardware and software design. (authors)

  11. Atmospheric Correction Performance of Hyperspectral Airborne Imagery over a Small Eutrophic Lake under Changing Cloud Cover

    Directory of Open Access Journals (Sweden)

    Lauri Markelin

    2016-12-01

    Full Text Available Atmospheric correction of remotely sensed imagery of inland water bodies is essential to interpret water-leaving radiance signals and for the accurate retrieval of water quality variables. Atmospheric correction is particularly challenging over inhomogeneous water bodies surrounded by comparatively bright land surface. We present results of AisaFENIX airborne hyperspectral imagery collected over a small inland water body under changing cloud cover, presenting challenging but common conditions for atmospheric correction. This is the first evaluation of the performance of the FENIX sensor over water bodies. ATCOR4, which is not specifically designed for atmospheric correction over water and does not make any assumptions on water type, was used to obtain atmospherically corrected reflectance values, which were compared to in situ water-leaving reflectance collected at six stations. Three different atmospheric correction strategies in ATCOR4 were tested. The strategy using fully image-derived and spatially varying atmospheric parameters produced a reflectance accuracy of ±0.002, i.e., a difference of less than 15% compared to the in situ reference reflectance. The amplitude and shape of the remotely sensed reflectance spectra were in general accordance with the in situ data. The spectral angle was better than 4.1° for the best cases, in the spectral range of 450-750 nm. The retrieval of chlorophyll-a (Chl-a) concentration using a popular semi-analytical band ratio algorithm for turbid inland waters gave an accuracy of ~16% or 4.4 mg/m³ compared to retrieval of Chl-a from reflectance measured in situ. Using fixed ATCOR4 processing parameters for whole images improved Chl-a retrieval from a difference of ~6 mg/m³ relative to the reference to approximately 2 mg/m³. We conclude that the AisaFENIX sensor, in combination with ATCOR4 in image-driven parametrization, can be successfully used for inland water quality observations. This implies that the need for in situ

  12. Split Hopkinson Resonant Bar Test for Sonic-Frequency Acoustic Velocity and Attenuation Measurements of Small, Isotropic Geologic Samples

    Energy Technology Data Exchange (ETDEWEB)

    Nakagawa, S.

    2011-04-01

    Mechanical properties (seismic velocities and attenuation) of geological materials are often frequency dependent, which necessitates measurements of the properties at frequencies relevant to the problem at hand. Conventional acoustic resonant bar tests allow measuring seismic properties of rocks and sediments at sonic frequencies (several kilohertz) that are close to the frequencies employed for geophysical exploration of oil and gas resources. However, the tests require a long, slender sample, which is often difficult to obtain from the deep subsurface or from weak and fractured geological formations. In this paper, an alternative measurement technique to conventional resonant bar tests is presented. This technique uses only a small, jacketed rock or sediment core sample mediating a pair of long, metal extension bars with attached seismic source and receiver - the same geometry as the split Hopkinson pressure bar test for large-strain, dynamic impact experiments. Because of the length and mass added to the sample, the resonance frequency of the entire system can be lowered significantly, compared to the sample alone. The experiment can be conducted under elevated confining pressures up to tens of MPa and temperatures above 100 °C, and concurrently with x-ray CT imaging. The described Split Hopkinson Resonant Bar (SHRB) test is applied in two steps. First, extension and torsion-mode resonance frequencies and attenuation of the entire system are measured. Next, numerical inversions for the complex Young's and shear moduli of the sample are performed. One particularly important step is the correction of the inverted Young's moduli for the effect of sample-rod interfaces. Examples of the application are given for homogeneous, isotropic polymer samples and a natural rock sample.

  13. Self-attenuation correction in the environmental sample gamma spectrometry; Correcao de auto-absorcao na espectrometria gama de amostras ambientais

    Energy Technology Data Exchange (ETDEWEB)

    Venturini, Luzia; Nisti, Marcelo B. [Instituto de Pesquisas Energeticas e Nucleares (IPEN), Sao Paulo, SP (Brazil)

    1997-10-01

    Self-attenuation corrections were calculated for gamma ray spectrometry of environmental samples with densities from 0.42 g/ml up to 1.59 g/ml, measured in Marinelli beakers and polyethylene flasks. These corrections are to be used when the counting efficiency is calculated for water measured in the same geometry. The model of Debertin for Marinelli beaker, numerical integration and experimental linear attenuation coefficients were used. (author). 3 refs., 4 figs., 6 tabs.
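
    The record above relies on Debertin's model with numerical integration over the actual Marinelli geometry; as a simpler hedged illustration, the slab-geometry self-absorption factor commonly used for such corrections (not the model used in the paper) is:

```latex
% Self-absorption factor for a homogeneous slab of thickness d viewed
% along its axis; mu is the sample's linear attenuation coefficient at
% the photon energy of interest.
f_{\mathrm{self}} = \frac{1 - e^{-\mu d}}{\mu d}
% An efficiency calibrated with a water-filled geometry (mu_w) is rescaled
% to the sample matrix (mu_s) by the ratio f_self(mu_s) / f_self(mu_w).
```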

  14. Monte Carlo and experimental determination of correction factors for gamma knife perfexion small field dosimetry measurements

    Science.gov (United States)

    Zoros, E.; Moutsatsos, A.; Pappas, E. P.; Georgiou, E.; Kollias, G.; Karaiskos, P.; Pantelis, E.

    2017-09-01

    Detector-, field size- and machine-specific correction factors are required for precise dosimetry measurements in small and non-standard photon fields. In this work, Monte Carlo (MC) simulation techniques were used to calculate the k_{Qmsr,Q0}^{fmsr,fref} and k_{Qclin,Qmsr}^{fclin,fmsr} correction factors for a series of ionization chambers, a synthetic microDiamond and diode dosimeters, used for reference and/or output factor (OF) measurements in the Gamma Knife Perfexion photon fields. Calculations were performed for the solid water (SW) and ABS plastic phantoms, as well as for a water phantom of the same geometry. MC calculations of the k_{Qclin,Qmsr}^{fclin,fmsr} correction factors in SW were compared against corresponding experimental results for a subset of ionization chambers and diode detectors. Reference experimental OF data were obtained through the weighted average of corresponding measurements using TLDs, EBT-2 films and alanine pellets. k_{Qmsr,Q0}^{fmsr,fref} values close to unity (within 1%) were calculated for most of the ionization chambers in water. Greater corrections of up to 6.0% were observed for chambers with relatively large air-cavity dimensions and a steel central electrode. Phantom corrections of 1.006 and 1.024 (breaking down to 1.014 from the ABS sphere and 1.010 from the accompanying ABS phantom adapter) were calculated for the SW and ABS phantoms, respectively, adding up to the k_{Qmsr,Q0}^{fmsr,fref} corrections in water. Both measurements and MC calculations for the diode and microDiamond detectors resulted in lower-than-unity k_{Qclin,Qmsr}^{fclin,fmsr} correction factors, due to their denser sensitive volume and encapsulation materials. In comparison, higher-than-unity k_{Qclin,Qmsr}^{fclin,fmsr} results for the ionization chambers suggested field-size-dependent dose underestimations (being significant for the 4 mm field), with magnitude depending on the combination of

  15. Measurement of regional cerebral blood flow using one-point arterial blood sampling and microsphere model with 123I-IMP. Correction of one-point arterial sampling count by whole brain count ratio

    International Nuclear Information System (INIS)

    Makino, Kenichi; Masuda, Yasuhiko; Gotoh, Satoshi

    1998-01-01

    The experimental subjects were 189 patients with cerebrovascular disorders. ¹²³I-IMP, 222 MBq, was administered by intravenous infusion. Continuous arterial blood sampling was carried out for 5 minutes, and arterial blood was also sampled once at 5 minutes after ¹²³I-IMP administration. The whole-blood count of the one-point arterial sample was then compared with the octanol-extracted count of the continuous arterial sampling, and a positive correlation was found between the two values. The ratio of the continuous-sampling octanol-extracted count (OC) to the one-point-sampling whole-blood count (TC5) was compared with the whole-brain count ratio (5:29 ratio, Cn) using 1-minute planar SPECT images centered on 5 and 29 minutes after ¹²³I-IMP administration. A correlation was found between the two values, yielding the relationship OC/TC5 = 0.390969 × Cn − 0.08924. Based on this correlation equation, we calculated the theoretical continuous arterial sampling octanol-extracted count (COC): COC = TC5 × (0.390969 × Cn − 0.08924). There was good correlation between the value calculated with this equation and the actually measured value. The coefficient improved to r = 0.94 from the r = 0.87 obtained before using the 5:29 ratio for correction. For 23 of these 189 cases, additional one-point arterial samples were taken at 6, 7, 8, 9 and 10 minutes after the administration of ¹²³I-IMP. The correlation coefficient for these other sampling points was also improved when this correction method using the 5:29 ratio was applied. It was concluded that it is possible to obtain highly accurate input functions, i.e., calculated continuous arterial sampling octanol-extracted counts, from one-point arterial sampling whole-blood counts by performing correction with the 5:29 ratio. (K.H.)
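
    A minimal sketch of the correction equation quoted above; the constants are taken directly from the abstract's regression, while the function and variable names are illustrative.

```python
def corrected_input_count(tc5: float, cn: float) -> float:
    """Theoretical continuous-sampling octanol-extracted count (COC).

    tc5 -- whole-blood count of the one-point arterial sample at 5 min
    cn  -- whole-brain count ratio between the 5-min and 29-min scans
    """
    return tc5 * (0.390969 * cn - 0.08924)

# Example with made-up counts: a 5-minute sample of 1.0e5 counts and a
# 5:29 brain ratio of 1.2.
print(corrected_input_count(1.0e5, 1.2))
```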

  16. SU-C-201-06: Small Field Correction Factors for the MicroDiamond Detector in the Gamma Knife-Model C Derived Using Monte Carlo Methods

    International Nuclear Information System (INIS)

    Barrett, J C; Knill, C

    2016-01-01

    Purpose: To determine small field correction factors for PTW’s microDiamond detector in Elekta’s Gamma Knife Model-C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model-C; additionally, the results also contribute to the discussion on the water equivalence of the relatively-new microDiamond detector and its overall effectiveness in small field applications. Methods: The small field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using BEAMnrc and DOSRZnrc user codes. Validation of the model was accomplished by simulating field output factors and measurement ratios for an available ABS plastic phantom and then comparing simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more waterlike solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC determined field output factors agreed within 0.6% of the TPS and 1.4% of film; and MC simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4%±1.7% for all the helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2%±1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no corrections except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes. The authors would like to
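
    For context, the k correction factor of the Alfonso formalism used in this record relates detector readings M to absorbed dose to water D_w in the clinical field (fclin) and the machine-specific reference field (fmsr); this is the standard published definition rather than anything specific to this abstract:

```latex
k_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}^{f_{\mathrm{clin}},f_{\mathrm{msr}}} =
  \frac{ D_{w,Q_{\mathrm{clin}}}^{f_{\mathrm{clin}}} \,/\, D_{w,Q_{\mathrm{msr}}}^{f_{\mathrm{msr}}} }
       { M_{Q_{\mathrm{clin}}}^{f_{\mathrm{clin}}} \,/\, M_{Q_{\mathrm{msr}}}^{f_{\mathrm{msr}}} }
% A value of 1 means the detector's reading ratio equals the true dose
% ratio; the 0.969 reported for the 4 mm helmet corrects a ~3% over-response.
```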

  17. SU-C-201-06: Small Field Correction Factors for the MicroDiamond Detector in the Gamma Knife-Model C Derived Using Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, J C [Wayne State University, Detroit, MI (United States); Karmanos Cancer Institute McLaren-Macomb, Clinton Township, MI (United States); Knill, C [Wayne State University, Detroit, MI (United States); Beaumont Hospital, Canton, MI (United States)

    2016-06-15

    Purpose: To determine small field correction factors for PTW’s microDiamond detector in Elekta’s Gamma Knife Model-C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model-C; additionally, the results also contribute to the discussion on the water equivalence of the relatively-new microDiamond detector and its overall effectiveness in small field applications. Methods: The small field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using BEAMnrc and DOSRZnrc user codes. Validation of the model was accomplished by simulating field output factors and measurement ratios for an available ABS plastic phantom and then comparing simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more waterlike solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC determined field output factors agreed within 0.6% of the TPS and 1.4% of film; and MC simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4%±1.7% for all the helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2%±1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no corrections except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes. The authors would like to

  18. Pharmacological Correction of Stress-Induced Gastric Ulceration by Novel Small-Molecule Agents with Antioxidant Profile

    Directory of Open Access Journals (Sweden)

    Konstantin V. Kudryavtsev

    2014-01-01

    Full Text Available This study was designed to identify novel small-molecule agents influencing the pathogenesis of gastric lesions induced by stress. To achieve this goal, four novel organic compounds containing structural fragments with known antioxidant activity were synthesized, characterized by physicochemical methods, and evaluated in vivo under water-immersion restraint conditions. The levels of lipid peroxidation products and the activities of antioxidative-system enzymes were measured in gastric mucosa and correlated with the observed gastroprotective activity of the active compounds. Prophylactic single-dose 1 mg/kg treatment with (2-hydroxyphenylthio)acetyl derivatives of L-lysine and L-proline decreases stress-induced stomach ulceration in rats by up to 86%. The discovered small-molecule antiulcer agents modulate the activities of gastric mucosa superoxide dismutase, catalase, and xanthine oxidase in concerted directions. The gastroprotective effect of the (2-hydroxyphenylthio)acetyl derivatives of L-lysine and L-proline depends, at least partially, on correction of the oxidative balance of the gastric mucosa.

  19. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    Science.gov (United States)

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  20. Soybean yield modeling using bootstrap methods for small samples

    Energy Technology Data Exchange (ETDEWEB)

    Dalposso, G.A.; Uribe-Opazo, M.A.; Johann, J.A.

    2016-11-01

    One of the problems that occurs when working with regression models concerns sample size: since the statistical methods used in inferential analyses are asymptotic, a small sample may compromise the analysis because the estimates will be biased. An alternative is to use the bootstrap methodology, which in its non-parametric version does not need to guess or know the probability distribution that generated the original sample. In this work we used a small set of soybean yield data together with physical and chemical soil properties to determine a multiple linear regression model. Bootstrap methods were used for variable selection, identification of influential points, and determination of confidence intervals for the model parameters. The results showed that the bootstrap methods enabled us to select the physical and chemical soil properties that were significant in the construction of the soybean yield regression model, to construct the confidence intervals of the parameters, and to identify the points that had great influence on the estimated parameters. (Author)
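
    A minimal non-parametric bootstrap sketch in the spirit of the study above: percentile confidence intervals for multiple-regression coefficients obtained by resampling observation pairs with replacement. The data are synthetic placeholders, not the soybean data set.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 30, 3                       # small sample, e.g. a few soil properties
X = rng.normal(size=(n, p))
y = X @ np.array([1.5, -0.7, 0.3]) + rng.normal(scale=0.5, size=n)

def fit_ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    Xd = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(Xd, y, rcond=None)[0]

# Resample (X, y) pairs with replacement and refit the model each time.
boot = np.array([
    fit_ols(X[idx], y[idx])
    for idx in (rng.integers(0, n, n) for _ in range(2000))
])
ci = np.percentile(boot, [2.5, 97.5], axis=0)   # 95% percentile intervals
print(ci)
```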

  1. A New Bias Corrected Version of Heteroscedasticity Consistent Covariance Estimator

    Directory of Open Access Journals (Sweden)

    Munir Ahmed

    2016-06-01

    Full Text Available In the presence of heteroscedasticity, different available flavours of the heteroscedasticity consistent covariance matrix estimator (HCCME) are used. However, the available literature shows that these estimators can be considerably biased in small samples. Cribari-Neto et al. (2000) introduce a bias adjustment mechanism and give the modified White estimator that becomes almost bias-free even in small samples. Extending these results, Cribari-Neto and Galvão (2003) present a similar bias adjustment mechanism that can be applied to a wide class of HCCMEs. In the present article, we follow the same mechanism as proposed by Cribari-Neto and Galvão to give a bias-corrected version of the HCCME, but we use an adaptive HCCME rather than the conventional one. A Monte Carlo study is used to evaluate the performance of our proposed estimators.
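
    The article's adaptive, bias-corrected estimator is not reproduced here; as a hedged illustration of the underlying machinery, this is how a standard HCCME flavour (HC3, one of the small-sample-oriented variants) is requested from statsmodels:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 40)
# Heteroscedastic errors: the noise standard deviation grows with x.
y = 2.0 + 0.5 * x + rng.normal(scale=0.2 + 0.1 * x)

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit(cov_type="HC3")  # HC3: squared residuals / (1 - h_ii)^2
print(fit.bse)  # heteroscedasticity-robust standard errors
```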

  2. Fitted temperature-corrected Compton cross sections for Monte Carlo applications and a sampling distribution

    International Nuclear Information System (INIS)

    Wienke, B.R.; Devaney, J.J.; Lathrop, B.L.

    1984-01-01

    Simple temperature-corrected cross sections, which replace the static Klein-Nishina set in a one-to-one manner, are developed for Monte Carlo applications. The reduced set is obtained from a nonlinear least-squares fit to the exact photon-Maxwellian electron cross sections by using a Klein-Nishina-like formula as the fitting equation. Two parameters are sufficient, and accurate to two decimal places, to explicitly fit the exact cross sections over a range of 0 to 100 keV in electron temperature and 0 to 1 MeV in incident photon energy. Since the fit equations are Klein-Nishina-like, existing Monte Carlo code algorithms using the Klein-Nishina formula can be trivially modified to accommodate corrections for a moving Maxwellian electron background. The simple two parameter scheme and other fits are presented and discussed and comparisons with exact predictions are exhibited. The fits are made to the total photon-Maxwellian electron cross section and the fitting parameters can be consistently used in both the energy conservation equation for photon-electron scattering and the differential cross section, as they are presently sampled in Monte Carlo photonics applications. The fit equations are motivated in a very natural manner by the asymptotic expansion of the exact photon-Maxwellian effective cross-section kernel. A probability distribution is also obtained for the corrected set of equations
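
    For reference, the static Klein-Nishina total cross section that the temperature-corrected fit replaces has the textbook form below (ε = hν/mₑc², r_e the classical electron radius); it is quoted only to show the functional shape the two-parameter fit mimics:

```latex
\sigma_{KN}(\varepsilon) = 2\pi r_e^2 \left[
  \frac{1+\varepsilon}{\varepsilon^{2}}
    \left( \frac{2(1+\varepsilon)}{1+2\varepsilon}
         - \frac{\ln(1+2\varepsilon)}{\varepsilon} \right)
  + \frac{\ln(1+2\varepsilon)}{2\varepsilon}
  - \frac{1+3\varepsilon}{(1+2\varepsilon)^{2}}
\right]
```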

  3. On-product overlay enhancement using advanced litho-cluster control based on integrated metrology, ultra-small DBO targets and novel corrections

    Science.gov (United States)

    Bhattacharyya, Kaustuve; Ke, Chih-Ming; Huang, Guo-Tsai; Chen, Kai-Hsiung; Smilde, Henk-Jan H.; Fuchs, Andreas; Jak, Martin; van Schijndel, Mark; Bozkurt, Murat; van der Schaar, Maurits; Meyer, Steffen; Un, Miranda; Morgan, Stephen; Wu, Jon; Tsai, Vincent; Liang, Frida; den Boef, Arie; ten Berge, Peter; Kubis, Michael; Wang, Cathy; Fouquet, Christophe; Terng, L. G.; Hwang, David; Cheng, Kevin; Gau, TS; Ku, Y. C.

    2013-04-01

    Aggressive on-product overlay requirements in advanced nodes pose a formidable challenge for the semiconductor industry, forcing it to look beyond the traditional way of working and invest in several new technologies. Integrated metrology, in-chip overlay control, advanced sampling, and process correction mechanisms (using the highest order of correction possible with the scanner interface today) are a few such technologies considered in this publication.

  4. Sensitivity study of micro four-point probe measurements on small samples

    DEFF Research Database (Denmark)

    Wang, Fei; Petersen, Dirch Hjorth; Hansen, Torben Mikael

    2010-01-01

    probes than near the outer ones. The sensitive area is defined for infinite film, circular, square, and rectangular test pads, and convergent sensitivities are observed for small samples. The simulations show that the Hall sheet resistance RH in micro Hall measurements with position error suppression...

  5. Respiration-averaged CT for attenuation correction in non-small-cell lung cancer

    International Nuclear Information System (INIS)

    Cheng, Nai-Ming; Ho, Kung-Chu; Yen, Tzu-Chen; Yu, Chih-Teng; Wu, Yi-Cheng; Liu, Yuan-Chang; Wang, Chih-Wei

    2009-01-01

    Breathing causes artefacts on PET/CT images. Cine CT has been used to reduce respiratory artefacts by acquiring multiple images during a single breathing cycle. The aim of this prospective study in non-small-cell lung cancer (NSCLC) patients was twofold. Firstly, we sought to compare the motion artefacts in PET/CT images attenuation-corrected with helical CT (HCT) and with averaged CT (ACT), which provides an average of cine CT images. Secondly, we wanted to evaluate the differences in maximum standardized uptake values (SUVmax) between HCT and ACT. A total of 80 patients with NSCLC were enrolled. PET images attenuation-corrected with HCT (PET/HCT) and with ACT (PET/ACT) were obtained in all patients. Misregistration was evaluated by measurement of the curved photopenic area in the lower thorax of the PET images for all patients and by direct measurement of misregistration for selected lesions. SUVmax was measured separately at the primary tumours, regional lymph nodes, and background. Significantly lower misregistration was observed in PET/ACT images than in PET/HCT images (below-thoracic misregistration 0.25±0.58 cm vs. 1.17±1.17 cm). Significantly higher SUVmax values were noted in PET/ACT than in PET/HCT images for the primary tumours and lymph nodes: SUVmax in PET/ACT images was higher by 0.35 for the main tumours and by 0.34 for the lymph nodes. Due to its significantly reduced misregistration, PET/ACT provided more reliable SUVmax values and may be useful in treatment planning and monitoring the therapeutic response in patients with NSCLC. (orig.)

  6. Quantifying predictability through information theory: small sample estimation in a non-Gaussian framework

    International Nuclear Information System (INIS)

    Haven, Kyle; Majda, Andrew; Abramov, Rafail

    2005-01-01

    Many situations in complex systems require quantitative estimates of the lack of information in one probability distribution relative to another. In short term climate and weather prediction, examples of these issues might involve the lack of information in the historical climate record compared with an ensemble prediction, or the lack of information in a particular Gaussian ensemble prediction strategy involving the first and second moments compared with the non-Gaussian ensemble itself. The relative entropy is a natural way to quantify the predictive utility in this information, and recently a systematic computationally feasible hierarchical framework has been developed. In practical systems with many degrees of freedom, computational overhead limits ensemble predictions to relatively small sample sizes. Here the notion of predictive utility, in a relative entropy framework, is extended to small random samples by the definition of a sample utility, a measure of the unlikeliness that a random sample was produced by a given prediction strategy. The sample utility is the minimum predictability, with a statistical level of confidence, which is implied by the data. Two practical algorithms for measuring such a sample utility are developed here. The first technique is based on the statistical method of null-hypothesis testing, while the second is based upon a central limit theorem for the relative entropy of moment-based probability densities. These techniques are tested on known probability densities with parameterized bimodality and skewness, and then applied to the Lorenz '96 model, a recently developed 'toy' climate model with chaotic dynamics mimicking the atmosphere. The results show a detection of non-Gaussian tendencies of prediction densities at small ensemble sizes with between 50 and 100 members, with a 95% confidence level
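
    For reference, the relative entropy (Kullback-Leibler divergence) that the record above uses as its measure of predictive utility has the standard definition, with p the prediction density and q the reference (e.g., climatological) density:

```latex
\mathcal{R}(p \,\|\, q) = \int p(\mathbf{x}) \,
  \ln \frac{p(\mathbf{x})}{q(\mathbf{x})} \, d\mathbf{x}
```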

  7. Design and experimental testing of air slab caps which convert commercial electron diodes into dual purpose, correction-free diodes for small field dosimetry.

    Science.gov (United States)

    Charles, P H; Cranmer-Sargison, G; Thwaites, D I; Kairn, T; Crowe, S B; Pedrazzini, G; Aland, T; Kenny, J; Langton, C M; Trapp, J V

    2014-10-01

    Two diodes which do not require correction factors for small-field relative output measurements are designed and validated using experimental methodology. This was achieved by adding an air layer above the active volume of the diode detectors, which canceled out the increase in response of the diodes in small fields relative to standard field sizes. Due to the increased density of silicon and other components within a diode, additional electrons are created. In very small fields, a very small air gap acts as an effective filter of electrons with a high angle of incidence. The aim was to design a diode that balanced these perturbations to give a response similar to a water-only geometry. Three thicknesses of air were placed at the proximal end of a PTW 60017 electron diode (PTWe) using an adjustable "air cap". A set of output ratios (OR_Det^{fclin}) for square field sizes of side length down to 5 mm was measured using each air thickness and compared to OR_Det^{fclin} measured using an IBA stereotactic field diode (SFD). k_{Qclin,Qmsr}^{fclin,fmsr} was transferred from the SFD to the PTWe diode and plotted as a function of air gap thickness for each field size. This enabled the optimal air gap thickness to be obtained by observing which thickness of air was required such that k_{Qclin,Qmsr}^{fclin,fmsr} was equal to 1.00 at all field sizes. A similar procedure was used to find the optimal air thickness required to make a modified Sun Nuclear EDGE detector (EDGEe) which is "correction-free" in small-field relative dosimetry. In addition, the feasibility of experimentally transferring k_{Qclin,Qmsr}^{fclin,fmsr} values from the SFD to unknown diodes was tested by comparing the experimentally transferred k_{Qclin,Qmsr}^{fclin,fmsr} values for unmodified PTWe and EDGEe diodes to Monte Carlo simulated values. 1.0 mm of air was required to make the PTWe diode correction-free. This modified diode (PTWeair) produced output factors equivalent to those in water at all field sizes (5-50 mm

  8. A review of empirical research related to the use of small quantitative samples in clinical outcome scale development.

    Science.gov (United States)

    Houts, Carrie R; Edwards, Michael C; Wirth, R J; Deal, Linda S

    2016-11-01

    There has been a notable increase in the advocacy of using small-sample designs as an initial quantitative assessment of item and scale performance during the scale development process. This is particularly true in the development of clinical outcome assessments (COAs), where Rasch analysis has been advanced as an appropriate statistical tool for evaluating the developing COAs using a small sample. We review the benefits such methods are purported to offer from both a practical and statistical standpoint and detail several problematic areas, including both practical and statistical theory concerns, with respect to the use of quantitative methods, including Rasch-consistent methods, with small samples. The feasibility of obtaining accurate information and the potential negative impacts of misusing large-sample statistical methods with small samples during COA development are discussed.

  9. A practical method for determining γ-ray full-energy peak efficiency considering coincidence-summing and self-absorption corrections for the measurement of environmental samples after the Fukushima reactor accident

    Energy Technology Data Exchange (ETDEWEB)

    Shizuma, Kiyoshi, E-mail: shizuma@hiroshima-u.ac.jp [Graduate School of Engineering, Hiroshima University, Higashi-Hiroshima 739-8527 (Japan); Oba, Yurika; Takada, Momo [Graduate School of Integrated Arts and Sciences, Hiroshima University, Higashi-Hiroshima 739-8521 (Japan)

    2016-09-15

    A method for determining the γ-ray full-energy peak efficiency at positions close to three Ge detectors and at the well port of a well-type detector was developed for measuring environmental volume samples containing ¹³⁷Cs, ¹³⁴Cs and ⁴⁰K. The efficiency was estimated by considering two correction factors: coincidence-summing and self-absorption corrections. The coincidence-summing correction for a cascade-transition nuclide was estimated by an experimental method involving measuring a sample at the far and close positions of a detector. The derived coincidence-summing correction factors were compared with those of analytical and Monte Carlo simulation methods, and good agreement was obtained. Differences between the matrix of the calibration source and that of the environmental sample result in an increase or decrease of the full-energy peak counts due to self-absorption of γ-rays in the sample. The correction factor was derived as a function of the densities of several matrix materials. The present method was applied to the measurement of environmental samples and also to low-level radioactivity measurements of water samples using the well-type detector.

  10. Lipid correction model of carbon stable isotopes for a cosmopolitan predator, spiny dogfish Squalus acanthias.

    Science.gov (United States)

    Reum, J C P

    2011-12-01

    Three lipid correction models were evaluated for liver and white dorsal muscle from Squalus acanthias. For muscle, all three models performed well, based on the Akaike Information Criterion value corrected for small sample sizes (AICc), and predicted similar lipid corrections to δ¹³C that were up to 2.8‰ higher than those predicted using previously published models based on multispecies data. For liver, which possessed higher bulk C:N values than white muscle, all three models performed poorly, and lipid-corrected δ¹³C values were best approximated by simply adding 5.74‰ to bulk δ¹³C values. © 2011 The Author. Journal of Fish Biology © 2011 The Fisheries Society of the British Isles.
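
    A hedged sketch of the tissue-specific corrections reported above: the liver rule (add 5.74‰) is taken directly from the abstract, while the muscle branch uses a generic C:N mass-balance form common in the lipid-normalization literature with purely illustrative coefficients; the article's fitted muscle models are not reproduced here.

```python
def lipid_correct_d13c(d13c_bulk: float, cn_ratio: float, tissue: str) -> float:
    """Return a lipid-corrected delta-13C value (per mil)."""
    if tissue == "liver":
        return d13c_bulk + 5.74          # constant offset from the abstract
    if tissue == "muscle":
        a, b = 7.0, 21.0                 # illustrative placeholder coefficients
        return d13c_bulk + a - b / cn_ratio
    raise ValueError(f"unknown tissue: {tissue}")

print(lipid_correct_d13c(-18.2, cn_ratio=12.0, tissue="liver"))   # -12.46
```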

  11. Pierre Gy's sampling theory and sampling practice heterogeneity, sampling correctness, and statistical process control

    CERN Document Server

    Pitard, Francis F

    1993-01-01

    Pierre Gy's Sampling Theory and Sampling Practice, Second Edition is a concise, step-by-step guide for process variability management and methods. Updated and expanded, this new edition provides a comprehensive study of heterogeneity, covering the basic principles of sampling theory and its various applications. It presents many practical examples to allow readers to select appropriate sampling protocols and assess the validity of sampling protocols from others. The variability of dynamic process streams using variography is discussed to help bridge sampling theory with statistical process control. Many descriptions of good sampling devices, as well as descriptions of poor ones, are featured to educate readers on what to look for when purchasing sampling systems. The book uses its accessible, tutorial style to focus on professional selection and use of methods. The book will be a valuable guide for mineral processing engineers; metallurgists; geologists; miners; chemists; environmental scientists; and practit...

  12. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers.

    Science.gov (United States)

    Dobie, Robert A; Wojcik, Nancy C

    2015-07-13

    The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Regression analysis was used to derive new age-correction values from audiometric data of the 1999-2006 US NHANES. Using NHANES median better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20-75 years. The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20-75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61-75 years. Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to
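
    A minimal sketch of the regression approach described above: fit median better-ear thresholds against age with a simple polynomial, then read off age corrections as threshold differences from a baseline age. The data points are fabricated placeholders, not NHANES values.

```python
import numpy as np

ages = np.array([25, 35, 45, 55, 65, 75])
median_threshold_db = np.array([3, 5, 9, 15, 23, 33])    # illustrative only

coeffs = np.polyfit(ages, median_threshold_db, deg=2)    # simple polynomial fit
threshold_at = np.poly1d(coeffs)

# Age correction relative to a 25-year-old baseline audiogram:
for age in (40, 60, 75):
    print(age, round(threshold_at(age) - threshold_at(25), 1), "dB")
```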

  13. Quantum superposition of the state discrete spectrum of mathematical correlation molecule for small samples of biometric data

    Directory of Open Access Journals (Sweden)

    Vladimir I. Volchikhin

    2017-06-01

    Full Text Available Introduction: The study aims to reduce the number of errors in calculating the correlation coefficient from small test samples. Materials and Methods: We used a simulation tool for the distribution functions of the density of values of the correlation coefficient in small samples. A method for quantization of the data allows obtaining a discrete spectrum of states of one variety of correlation functional. This allows us to consider the proposed structure as a mathematical correlation molecule, described by an analogue of the continuous quantum Schrödinger equation. Results: The chi-squared Pearson molecule on small samples can enhance the power of the classical chi-squared test up to 20 times. The mathematical correlation molecule described in the article has similar properties, and should make it possible to reduce the calculation errors of classical correlation coefficients in small samples. Discussion and Conclusions: The authors suggest that there are infinitely many mathematical molecules similar in their properties to actual physical molecules. Schrödinger equations are not unique; analogues can be constructed for each mathematical molecule. A mathematical synthesis of molecules can be expected for a large number of known statistical tests and statistical moments. All this should make it possible to reduce calculation errors due to quantum effects that occur in small test samples.

  14. Corrective Action Decision Document/Corrective Action Plan for Corrective Action Unit 547: Miscellaneous Contaminated Waste Sites, Nevada National Security Site, Nevada, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Mark Krauss

    2011-09-01

    The purpose of this CADD/CAP is to present the corrective action alternatives (CAAs) evaluated for CAU 547, provide justification for selection of the recommended alternative, and describe the plan for implementing the selected alternative. Corrective Action Unit 547 consists of the following three corrective action sites (CASs): (1) CAS 02-37-02, Gas Sampling Assembly; (2) CAS 03-99-19, Gas Sampling Assembly; and (3) CAS 09-99-06, Gas Sampling Assembly. The gas sampling assemblies consist of inactive process piping, equipment, and instrumentation that were left in place after completion of underground safety experiments. The purpose of these safety experiments was to confirm that a nuclear explosion would not occur in the case of an accidental detonation of the high-explosive component of the device. The gas sampling assemblies allowed for the direct sampling of the gases and particulates produced by the safety experiments. Corrective Action Site 02-37-02 is located in Area 2 of the Nevada National Security Site (NNSS) and is associated with the Mullet safety experiment conducted in emplacement borehole U2ag on October 17, 1963. Corrective Action Site 03-99-19 is located in Area 3 of the NNSS and is associated with the Tejon safety experiment conducted in emplacement borehole U3cg on May 17, 1963. Corrective Action Site 09-99-06 is located in Area 9 of the NNSS and is associated with the Player safety experiment conducted in emplacement borehole U9cc on August 27, 1964. The CAU 547 CASs were investigated in accordance with the data quality objectives (DQOs) developed by representatives of the Nevada Division of Environmental Protection (NDEP) and the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office. The DQO process was used to identify and define the type, amount, and quality of data needed to determine and implement appropriate corrective actions for CAU 547. Existing radiological survey data and historical knowledge of

  15. Determination of phosphorus in small amounts of protein samples by ICP-MS.

    Science.gov (United States)

    Becker, J Sabine; Boulyga, Sergei F; Pickhardt, Carola; Becker, J; Buddrus, Stefan; Przybylski, Michael

    2003-02-01

    Inductively coupled plasma mass spectrometry (ICP-MS) was used for phosphorus determination in protein samples. A small amount of solid protein sample (down to 1 µg) or of protein digest solution (1-10 µL) was denatured in nitric acid and hydrogen peroxide by closed-microvessel microwave digestion. Phosphorus determination was performed with an optimized analytical method using a double-focusing sector-field inductively coupled plasma mass spectrometer (ICP-SFMS) and a quadrupole-based ICP-MS (ICP-QMS). For quality control of the phosphorus determination, a certified reference material (CRM) of single-cell proteins (BCR 273) with a high phosphorus content of 26.8±0.4 mg g⁻¹ was analyzed. To study phosphorus determination in proteins while reducing the sample amount as far as possible, the homogeneity of CRM BCR 273 was investigated. Relative standard deviation and measurement accuracy in ICP-QMS were within 2%, 3.5%, 11% and 12% when using CRM BCR 273 sample weights of 40 mg, 5 mg, 1 mg and 0.3 mg, respectively. The lowest possible sample weight for an accurate phosphorus analysis in protein samples by ICP-MS is discussed. The analytical method developed was applied to the analysis of homogeneous protein samples in very low amounts (1-100 µg of solid protein sample, e.g. beta-casein, or down to 1 µL of protein or digest in solution, e.g. tau protein). A further reduction of the diluted protein solution volume was achieved by the application of flow injection in ICP-SFMS, which is discussed with reference to real protein digests after protein separation using 2D gel electrophoresis. The detection limits for phosphorus in biological samples determined by ICP-SFMS were down to the ng g⁻¹ level. The present work discusses the figures of merit for the determination of phosphorus in small amounts of protein sample with ICP-SFMS in comparison to ICP-QMS.

  16. Corrective Action Investigation Plan for Corrective Action Unit 554: Area 23 Release Site, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    Boehlecke, Robert F.

    2004-01-01

    This Corrective Action Investigation Plan (CAIP) contains project-specific information for conducting site investigation activities at Corrective Action Unit (CAU) 554: Area 23 Release Site, Nevada Test Site, Nevada. Information presented in this CAIP includes facility descriptions, environmental sample collection objectives, and criteria for the selection and evaluation of environmental samples. Corrective Action Unit 554 is located in Area 23 of the Nevada Test Site, which is 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit 554 comprises one Corrective Action Site (CAS): 23-02-08, USTs 23-115-1, 2, 3/Spill 530-90-002. This site consists of soil contamination resulting from a fuel release from underground storage tanks (USTs). Corrective Action Site 23-02-08 is being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. Additional information will be obtained by conducting a corrective action investigation prior to evaluating corrective action alternatives and selecting the appropriate corrective action for this CAS. The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document for CAU 554. Corrective Action Site 23-02-08 will be investigated based on the data quality objectives (DQOs) developed on July 15, 2004, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office; and contractor personnel. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 554. Appendix A provides a detailed discussion of the DQO methodology and the DQOs specific to CAS 23-02-08. The scope of the corrective action investigation

  17. Regression dilution bias: tools for correction methods and sample size calculation.

    Science.gov (United States)

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
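
    A hedged sketch of the classical correction the article describes for a simple linear regression: divide the observed slope by the reliability ratio of the risk factor, here estimated from a repeated measurement as in a reliability study. Toy data, not the authors' software tools.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
true_x = rng.normal(size=n)                  # error-free risk factor
x1 = true_x + rng.normal(scale=0.6, size=n)  # main-study measurement
x2 = true_x + rng.normal(scale=0.6, size=n)  # reliability-study repeat
y = 1.0 * true_x + rng.normal(scale=0.5, size=n)

beta_obs = np.polyfit(x1, y, 1)[0]           # attenuated (diluted) slope

# Reliability ratio lambda = var(true) / var(observed); the measurement
# error variance is estimated from paired repeats: var(x1 - x2) = 2 * err_var.
err_var = np.var(x1 - x2, ddof=1) / 2
reliability = 1 - err_var / np.var(x1, ddof=1)

beta_corrected = beta_obs / reliability      # dilution-corrected slope
print(round(beta_obs, 2), round(beta_corrected, 2))
```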

  18. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    Science.gov (United States)

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.

  19. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks

    Directory of Open Access Journals (Sweden)

    Cuicui Zhang

    2014-12-01

    Full Text Available Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the “small sample size” (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0–1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.

  20. Rural and small-town attitudes about alcohol use during pregnancy: a community and provider sample.

    Science.gov (United States)

    Logan, T K; Walker, Robert; Nagle, Laura; Lewis, Jimmie; Wiesenhahn, Donna

    2003-01-01

    While there has been considerable research on prenatal alcohol use, few studies have focused on women in rural and small-town environments. This 2-part study examines gender differences in attitudes and perceived barriers to intervention in a large community sample of persons living in rural and small-town environments in Kentucky (n = 3,346). The study also examines rural and small-town prenatal service providers' perceptions of barriers to assessment of and intervention with pregnant substance abusers (n = 138). Surveys were administered to a convenience sample of employees and customers from 16 rural and small-town community outlets. There were 1,503 males (45%) and 1,843 females (55%), ranging in age from under 18 to over 66 years old. Surveys were also mailed to prenatal providers in county health departments of the 13-county study area, with 138 of 149 responding. Overall, results of the community sample suggest that neither males nor females were knowledgeable about the harmful effects of alcohol use during pregnancy. Results also indicate substantial gender differences in alcohol attitudes, knowledge, and perceived barriers. Further, prenatal care providers identified several barriers to assessment and treatment of pregnant women with alcohol use problems in rural and small-town communities, including lack of knowledge of and comfort with assessment, as well as a lack of available and accessible treatment for referrals.

  1. A Study of Assimilation Bias in Name-Based Sampling of Migrants

    Directory of Open Access Journals (Sweden)

    Schnell Rainer

    2014-06-01

    Full Text Available The use of personal names for screening is an increasingly popular sampling technique for migrant populations. Although this is often an effective sampling procedure, very little is known about the properties of this method. Based on a large German survey, this article compares characteristics of respondents whose names have been correctly classified as belonging to a migrant population with respondents who are migrants but whose names have not been classified as belonging to a migrant population. Although significant differences were found for some variables, even with some large effect sizes, the overall bias introduced by name-based sampling (NBS) is small as long as procedures with small false-negative rates are employed.

  2. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    Science.gov (United States)

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has a small sample size limitation. We used a pooled method in the nonparametric bootstrap test that may overcome the problem related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling against corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability for any conditions except for Cauchy and extreme variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than other alternatives. The nonparametric bootstrap test provided benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling for comparing paired or unpaired means and for validating one-way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
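    The pooled-resampling idea can be illustrated in a few lines: under the null hypothesis both groups come from one distribution, so bootstrap samples are drawn from the pooled data while preserving the group sizes. This is a minimal sketch of that idea with a Welch-type statistic; the authors' exact algorithm may differ in detail.

```python
import numpy as np

def pooled_bootstrap_t_test(x, y, n_boot=10_000, seed=0):
    """Two-sample bootstrap test with pooled resampling (sketch).

    Under H0 the groups share one distribution, so both bootstrap
    samples are drawn from the pooled data, preserving group sizes.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])

    def t_stat(a, b):  # Welch-type t statistic
        return (a.mean() - b.mean()) / np.sqrt(
            a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

    t_obs = t_stat(np.asarray(x, float), np.asarray(y, float))
    t_null = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(pooled, size=len(x), replace=True)
        yb = rng.choice(pooled, size=len(y), replace=True)
        t_null[b] = t_stat(xb, yb)
    return np.mean(np.abs(t_null) >= abs(t_obs))  # two-sided p-value

# Example with two small non-normal samples
rng = np.random.default_rng(1)
print(pooled_bootstrap_t_test(rng.lognormal(size=8), rng.lognormal(0.5, size=8)))
```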

  3. Improvements in dose calculation accuracy for small off-axis targets in high dose per fraction tomotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Hardcastle, Nicholas; Bayliss, Adam; Wong, Jeannie Hsiu Ding; Rosenfeld, Anatoly B.; Tome, Wolfgang A. [Department of Human Oncology, University of Wisconsin-Madison, WI, 53792 (United States); Department of Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, VIC 3002 (Australia) and Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW 2522 (Australia); Department of Human Oncology, University of Wisconsin-Madison, WI 53792 (United States); Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW 2522 (Australia) and Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, 50603 Kuala Lumpur (Malaysia); Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW 2522 (Australia); Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53792 (United States); Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, Wisconsin 53792 (United States); Einstein Institute of Oncophysics, Albert Einstein College of Medicine of Yeshiva University, Bronx, New York 10461 (United States) and Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW 2522 (Australia)

    2012-08-15

    Purpose: A recent field safety notice from TomoTherapy detailed the underdosing of small, off-axis targets when receiving high doses per fraction. This is due to angular undersampling in the dose calculation gantry angles. This study evaluates a correction method to reduce the underdosing, to be implemented in the current version (v4.1) of the TomoTherapy treatment planning software. Methods: The correction method, termed 'Super Sampling', involved tripling the number of gantry angles from which the dose is calculated during optimization and dose calculation. Radiochromic film was used to measure the dose to small targets at various off-axis distances receiving a minimum of 21 Gy in one fraction. Measurements were also performed for single small targets at the center of the Lucy phantom, using radiochromic film and the dose magnifying glass (DMG). Results: Without super sampling, the peak dose deficit increased from 0% to 18% for a 10 mm target and 0% to 30% for a 5 mm target as off-axis target distances increased from 0 to 16.5 cm. When super sampling was turned on, the dose deficit trend was removed and all peak doses were within 5% of the planned dose. For measurements in the Lucy phantom at 9.7 cm off-axis, the positional and dose magnitude accuracy using super sampling was verified using radiochromic film and the DMG. Conclusions: A correction method implemented in the TomoTherapy treatment planning system which triples the angular sampling of the gantry angles used during optimization and dose calculation removes the underdosing for targets as small as 5 mm diameter, up to 16.5 cm off-axis, receiving up to 21 Gy.

  4. A weighted least-squares lump correction algorithm for transmission-corrected gamma-ray nondestructive assay

    International Nuclear Information System (INIS)

    Prettyman, T.H.; Sprinkle, J.K. Jr.; Sheppard, G.A.

    1993-01-01

    With transmission-corrected gamma-ray nondestructive assay instruments such as the Segmented Gamma Scanner (SGS) and the Tomographic Gamma Scanner (TGS) currently under development at Los Alamos National Laboratory, the amount of gamma-ray emitting material can be underestimated for samples in which the emitting material consists of particles or lumps of highly attenuating material. This problem is encountered in the assay of uranium- and plutonium-bearing samples. To correct for this source of bias, we have developed a least-squares algorithm that uses transmission-corrected assay results for several emitted energies and a weighting function to account for statistical uncertainties in the assay results. The variation of effective lump size in the fitted model is parameterized; this allows the correction to be performed for a wide range of lump-size distributions. It may be possible to use the reduced chi-squared value obtained in the fit to identify samples in which assay assumptions have been violated. We found that the algorithm significantly reduced bias in simulated assays and improved SGS assay results for plutonium-bearing samples. Further testing will be conducted with the TGS, which is expected to be less susceptible than the SGS to this systematic source of bias.

  5. Small Sample Properties of the Wilcoxon Signed Rank Test with Discontinuous and Dependent Observations

    OpenAIRE

    Nadine Chlass; Jens J. Krueger

    2007-01-01

    This Monte Carlo study investigates the sensitivity of the Wilcoxon signed rank test to certain assumption violations in small samples. Emphasis is put on within-sample dependence, between-sample dependence, and the presence of ties. Our results show that both assumption violations induce severe size distortions and entail power losses. Surprisingly, these consequences vary substantially with other properties the data may display. Results provided are particularly relevant for experimental set...

  6. Quantum corrections to thermodynamics of quasitopological black holes

    Directory of Open Access Journals (Sweden)

    Sudhaker Upadhyay

    2017-12-01

    Based on the modification to the area law due to thermal fluctuations at small horizon radius, we investigate the thermodynamics of charged quasitopological and charged rotating quasitopological black holes. In particular, we derive the leading-order corrections to the Gibbs free energy, charge and total mass densities. In order to analyze the behavior of the thermal fluctuations on the thermodynamics of small black holes, we draw a comparative analysis between the first-order corrected and original thermodynamical quantities. We also examine the stability and bound points of such black holes under the effect of leading-order corrections.

  7. The Top-of-Instrument corrections for nuclei with AMS on the Space Station

    Science.gov (United States)

    Ferris, N. G.; Heil, M.

    2018-05-01

    The Alpha Magnetic Spectrometer (AMS) is a large acceptance, high precision magnetic spectrometer on the International Space Station (ISS). The top-of-instrument correction for nuclei flux measurements with AMS accounts for backgrounds due to the fragmentation of nuclei with higher charge. Upon entering the detector, nuclei may interact with AMS materials and split into fragments of lower charge according to their cross sections. The redundancy of charge measurements along the particle trajectory with AMS allows for the determination of inelastic interactions and for the selection of high purity nuclei samples with small uncertainties. The top-of-instrument corrections for nuclei with 2 < Z ≤ 6 are presented.

  8. Identification of multiple mRNA and DNA sequences from small tissue samples isolated by laser-assisted microdissection.

    Science.gov (United States)

    Bernsen, M R; Dijkman, H B; de Vries, E; Figdor, C G; Ruiter, D J; Adema, G J; van Muijen, G N

    1998-10-01

    Molecular analysis of small tissue samples has become increasingly important in biomedical studies. Using a laser dissection microscope and modified nucleic acid isolation protocols, we demonstrate that multiple mRNA as well as DNA sequences can be identified from a single-cell sample. In addition, we show that the specificity of procurement of tissue samples is not compromised by smear contamination resulting from scraping of the microtome knife during sectioning of lesions. The procedures described herein thus allow for efficient RT-PCR or PCR analysis of multiple nucleic acid sequences from small tissue samples obtained by laser-assisted microdissection.

  9. Gray bootstrap method for estimating frequency-varying random vibration signals with small samples

    Directory of Open Access Journals (Sweden)

    Wang Yanqing

    2014-04-01

    During environment testing, the estimation of random vibration signals (RVS) is an important technique for airborne platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerances method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. The gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. Firstly, the estimation indexes are obtained, including the estimated interval, the estimated uncertainty, the estimated value, the estimated error and the estimated reliability. In addition, GBM is applied to estimating the data of a single flight test of a certain aircraft. Finally, in order to evaluate the estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM) in testing analysis. The result shows that GBM has superiority for estimating dynamic signals with small samples, and the estimated reliability is proved to be 100% at the given confidence level.

  10. Corrections of arterial input function for dynamic H₂¹⁵O PET to assess perfusion of pelvic tumours: arterial blood sampling versus image extraction

    International Nuclear Information System (INIS)

    Luedemann, L; Sreenivasa, G; Michel, R; Rosner, C; Plotkin, M; Felix, R; Wust, P; Amthauer, H

    2006-01-01

    Assessment of perfusion with ¹⁵O-labelled water (H₂¹⁵O) requires measurement of the arterial input function (AIF). The arterial time activity curve (TAC) measured using the peripheral sampling scheme requires corrections for delay and dispersion. In this study, parametrizations with and without arterial spillover correction for fitting of the tissue curve are evaluated. Additionally, a completely noninvasive method for generation of the AIF from a dynamic positron emission tomography (PET) acquisition is applied to assess perfusion of pelvic tumours. This method uses a volume of interest (VOI) to extract the TAC from the femoral artery. The VOI TAC is corrected for spillover using a separate tissue TAC and for recovery by determining the recovery coefficient on a coregistered CT data set. The techniques were applied in five patients with pelvic tumours who underwent a total of 11 examinations. Delay and dispersion correction of the blood TAC without arterial spillover correction yielded solutions inconsistent with physiology in seven examinations. Correction of arterial spillover increased the fitting accuracy and yielded consistent results in all patients. Generation of an AIF from PET image data was investigated as an alternative to arterial blood sampling and was shown to have an intrinsic potential to determine the AIF noninvasively and reproducibly. The AIF extracted from a VOI in a dynamic PET scan was similar in shape to the blood AIF but yielded significantly higher tissue perfusion values (mean of 104.0 ± 52.0%) and lower partition coefficients (-31.6 ± 24.2%). The perfusion values and partition coefficients determined with the VOI technique have to be corrected in order to compare the results with those of studies using a blood AIF.

  11. Basic distribution free identification tests for small size samples of environmental data

    International Nuclear Information System (INIS)

    Federico, A.G.; Musmeci, F.

    1998-01-01

    Testing two or more data sets for the hypothesis that they are sampled from the same population is often required in environmental data analysis. Typically the available samples have a small number of data points, and often the assumption of normal distributions is not realistic. On the other hand, the diffusion of today's powerful personal computers opens new opportunities based on a massive use of CPU resources. The paper reviews the problem, introducing the feasibility of two nonparametric approaches based on intrinsic equiprobability properties of the data samples. The first one is based on full resampling, while the second is based on a bootstrap approach. An easy-to-use program is presented. A case study is given based on the Chernobyl children contamination data.

  12. Computing correct truncated excited state wavefunctions

    Science.gov (United States)

    Bacalis, N. C.; Xiong, Z.; Zang, J.; Karaoulanis, D.

    2016-12-01

    We demonstrate that, if a wave function's truncated expansion is small, then the standard excited-state computational method of optimizing one “root” of a secular equation may lead to an incorrect wave function, despite the energy being correct according to the theorem of Hylleraas, Undheim and McDonald, whereas our proposed method [J. Comput. Meth. Sci. Eng. 8, 277 (2008)] (independent of orthogonality to lower-lying approximants) leads to correct, reliable small truncated wave functions. The demonstration is done on He excited states, using truncated series expansions in Hylleraas coordinates, as well as standard configuration-interaction truncated expansions.

  13. Corrective Action Investigation Plan for Corrective Action Unit 542: Disposal Holes, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    Laura Pastor

    2006-01-01

    Corrective Action Unit (CAU) 542 is located in Areas 3, 8, 9, and 20 of the Nevada Test Site, which is 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit 542 is comprised of eight corrective action sites (CASs): (1) 03-20-07, "UD-3a Disposal Hole"; (2) 03-20-09, "UD-3b Disposal Hole"; (3) 03-20-10, "UD-3c Disposal Hole"; (4) 03-20-11, "UD-3d Disposal Hole"; (5) 06-20-03, "UD-6 and UD-6s Disposal Holes"; (6) 08-20-01, "U-8d PS No.1A Injection Well Surface Release"; (7) 09-20-03, "U-9itsy30 PS No.1A Injection Well Surface Release"; and (8) 20-20-02, "U-20av PS No.1A Injection Well Surface Release". These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. Additional information will be obtained by conducting a corrective action investigation before evaluating corrective action alternatives and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on January 30, 2006, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office; Stoller-Navarro Joint Venture; and Bechtel Nevada. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 542. Appendix A provides a detailed discussion of the DQO methodology and the DQOs specific to each CAS. The scope of the CAI for CAU 542 includes the following activities: (1) Move surface debris and/or materials, as needed, to facilitate sampling. (2) Conduct radiological surveys. (3) Conduct geophysical surveys to

  14. Estimating the residential demand function for natural gas in Seoul with correction for sample selection bias

    International Nuclear Information System (INIS)

    Yoo, Seung-Hoon; Lim, Hea-Jin; Kwak, Seung-Jun

    2009-01-01

    Over the last twenty years, the consumption of natural gas in Korea has increased dramatically. This increase has mainly resulted from the rise of consumption in the residential sector. The main objective of the study is to estimate households' demand function for natural gas by applying a sample selection model using data from a survey of households in Seoul. The results show that there exists a selection bias in the sample and that failure to correct for sample selection bias distorts the mean estimate of the demand for natural gas downward by 48.1%. In addition, according to the estimation results, the size of the house, the dummy variable for dwelling in an apartment, the dummy variable for having a bed in an inner room, and the household's income all have positive relationships with the demand for natural gas. On the other hand, the size of the family and the price of gas negatively contribute to the demand for natural gas. (author)
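    A sample selection model of this kind is commonly estimated with Heckman's two-step procedure: a probit selection equation, then an outcome regression augmented with the inverse Mills ratio. The sketch below uses synthetic data and statsmodels as a stand-in for the paper's survey data and exact specification; all variable names are invented.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2_000

# Synthetic stand-in for the survey: z drives selection, x drives demand.
z = rng.normal(size=(n, 2))
x = rng.normal(size=(n, 1))
u, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n).T
selected = (0.5 + z @ [1.0, -0.5] + u) > 0          # household observed?
y = 2.0 + 1.5 * x[:, 0] + e                         # gas demand (latent)

# Step 1: probit for selection, then the inverse Mills ratio.
Zc = sm.add_constant(z)
probit = sm.Probit(selected.astype(float), Zc).fit(disp=0)
xb = Zc @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: OLS on the selected subsample, augmented with the IMR term.
Xc = sm.add_constant(np.column_stack([x[selected], imr[selected]]))
ols = sm.OLS(y[selected], Xc).fit()
print(ols.params)  # naive OLS without the IMR column would be biased
```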

  15. Corrective Action Investigation Plan for Corrective Action Unit 561: Waste Disposal Areas, Nevada Test Site, Nevada, Revision 0

    International Nuclear Information System (INIS)

    Grant Evenson

    2008-01-01

    Corrective Action Unit (CAU) 561 is located in Areas 1, 2, 3, 5, 12, 22, 23, and 25 of the Nevada Test Site, which is approximately 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit 561 is comprised of the 10 corrective action sites (CASs) listed below: (1) 01-19-01, Waste Dump; (2) 02-08-02, Waste Dump and Burn Area; (3) 03-19-02, Debris Pile; (4) 05-62-01, Radioactive Gravel Pile; (5) 12-23-09, Radioactive Waste Dump; (6) 22-19-06, Buried Waste Disposal Site; (7) 23-21-04, Waste Disposal Trenches; (8) 25-08-02, Waste Dump; (9) 25-23-21, Radioactive Waste Dump; and (10) 25-25-19, Hydrocarbon Stains and Trench. These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. Additional information will be obtained by conducting a corrective action investigation before evaluating corrective action alternatives and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on April 28, 2008, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office; Stoller-Navarro Joint Venture; and National Security Technologies, LLC. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 561. Appendix A provides a detailed discussion of the DQO methodology and the DQOs specific to each CAS. The scope of the Corrective Action Investigation for CAU 561 includes the following activities: (1) Move surface debris and/or materials, as needed, to facilitate sampling. (2) Conduct

  16. Fission track dating of volcanic glass: experimental evidence for the validity of the Size-Correction Method

    International Nuclear Information System (INIS)

    Bernardes, C.; Hadler Neto, J.C.; Lattes, C.M.G.; Araya, A.M.O.; Bigazzi, G.; Cesar, M.F.

    1986-01-01

    Two techniques may be employed for correcting thermally lowered fission track ages on glass material: the so-called 'size-correction method' and the 'plateau method'. Several results from fission track dating on obsidian were analysed in order to compare the model underlying the size-correction method with experimental evidence. The results from this work can be summarized as follows: 1) The assumption that the mean sizes of spontaneous and induced etched tracks are equal on samples unaffected by partial fading is supported by experimental results. If reactor effects exist, such as an enhancement of the etching rate in the irradiated fraction due to radiation damage and/or the fact that induced fission releases a quantity of energy slightly greater than spontaneous fission, their influence on the size-correction method is very small. 2) The above two correction techniques produce concordant results. 3) Several samples from the same obsidian, affected by 'instantaneous' as well as 'continuous' natural fading to different degrees, were analysed: the curve showing the decrease of spontaneous track mean size vs. fraction of spontaneous tracks lost by fading is in close agreement with the correction curve constructed for the same obsidian by imparting artificial thermal treatments on induced tracks. From the above points one can conclude that the assumptions on which the size-correction method is based are well supported, at least to first approximation.

  17. PIXE–PIGE analysis of size-segregated aerosol samples from remote areas

    Energy Technology Data Exchange (ETDEWEB)

    Calzolai, G., E-mail: calzolai@fi.infn.it [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Chiari, M.; Lucarelli, F.; Nava, S.; Taccetti, F. [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Becagli, S.; Frosini, D.; Traversi, R.; Udisti, R. [Department of Chemistry, University of Florence, Via della Lastruccia 3, 50019 Sesto Fiorentino (Italy)

    2014-01-01

    The chemical characterization of size-segregated samples is helpful to study the aerosol effects on both human health and environment. The sampling with multi-stage cascade impactors (e.g., Small Deposit area Impactor, SDI) produces inhomogeneous samples, with a multi-spot geometry and a non-negligible particle stratification. At LABEC (Laboratory of nuclear techniques for the Environment and the Cultural Heritage), an external beam line is fully dedicated to PIXE–PIGE analysis of aerosol samples. PIGE is routinely used as a sidekick of PIXE to correct the underestimation of PIXE in quantifying the concentration of the lightest detectable elements, like Na or Al, due to X-ray absorption inside the individual aerosol particles. In this work PIGE has been used to study proper attenuation correction factors for SDI samples: relevant attenuation effects have been observed also for stages collecting smaller particles, and consequent implications on the retrieved aerosol modal structure have been evidenced.

  18. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard-decision decoding using product-type codes that cover a single OTN frame or a small number of such frames. In particular, we argue that a three-error-correcting BCH code is the best choice for the component code in such systems.

  19. Coincidence detection FDG-PET (Co-PET) in the management of oncological patients: attenuation correction versus non-attenuation correction

    International Nuclear Information System (INIS)

    Chan, W.L.; Freund, J.; Pocock, N.; Szeto, E.; Chan, F.; Sorensen, B.; McBride, B.

    2000-01-01

    Full text: This study was to determine whether attenuation correction (AC) in FDG Co-PET improved image quality, lesion detection, patient staging and management of various malignant neoplasms, compared to non-attenuation-corrected (NAC) images. Thirty patients (25 men, 5 women, mean age 58 years) with known or suspected malignant neoplasms, including non-small-cell lung cancer, non-Hodgkin's and Hodgkin's lymphoma, carcinoma of the breast, head and neck cancer and melanoma, underwent FDG Co-PET, which was correlated with histopathology, CT and other conventional imaging modalities and clinical follow-up. Whole body tomography was performed (ADAC Vertex MCD) 60 min after 200 MBq of ¹⁸F-FDG (>6 h fasting). The number and location of FDG-avid lesions detected on the AC and NAC Co-PET images were blindly assessed by two independent observers. Semi-quantitative grading of image clarity and lesion-to-background quality was performed. This revealed markedly improved image clarity and lesion-to-background quality in the AC versus NAC images. AC and NAC Co-PET were statistically different in relation to lesion detection (p<0.01) and tumour staging (p<0.01). NAC Co-PET demonstrated 51 of the 65 lesions (78%) detected by AC Co-PET. AC Co-PET staging was correct in 27 patients (90%), compared with NAC Co-PET in 22 patients (73%). AC Co-PET altered tumour staging in five of 30 patients (16%), whereas NAC Co-PET did not alter tumour staging in any of the patients; management was altered in only two of these five patients (7%). In conclusion, AC Co-PET resulted in better image quality with significantly improved lesion detectability and tumour staging compared to NAC Co-PET. Its additional impact on patient management in this relatively small sample was minor. Copyright (2000) The Australian and New Zealand Society of Nuclear Medicine Inc

  20. Absorption correction factor in X-ray fluorescent quantitative analysis

    International Nuclear Information System (INIS)

    Pimjun, S.

    1994-01-01

    An experiment on the absorption correction factor in X-ray fluorescence quantitative analysis was carried out. Standard samples were prepared from mixtures of Fe₂O₃ and tapioca flour at various concentrations of Fe₂O₃ ranging from 5% to 25%. Unknown samples were kaolin containing 3.5% to 50% of Fe₂O₃. Kaolin samples were diluted with tapioca flour in order to reduce the absorption of FeKα and make them easy to prepare. Pressed samples with 0.150 g/cm² and 2.76 cm in diameter were used in the experiment. The absorption correction factor is related to the total mass absorption coefficient (χ), which varies with sample composition. In a known sample, χ can be calculated conveniently by formula. However, in an unknown sample, χ can be determined by the emission-transmission method. It was found that the relationship between the corrected FeKα intensity and the Fe₂O₃ content of these samples was linear. This result indicates that the correction factor can be used to adjust the accuracy of the X-ray intensity. Therefore, this correction factor is essential in the quantitative analysis of elements present in any sample by the X-ray fluorescence technique.

  1. Decoder calibration with ultra small current sample set for intracortical brain-machine interface

    Science.gov (United States)

    Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping

    2018-04-01

    Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and every recalibration requires a relatively large current sample set. The aim of this study is to develop an effective decoder calibration method that achieves good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on a movement and a sensory paradigm. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical data, and the decoding performance was compared with three other calibration methods for evaluation. Main results. The PDA method closed the gap between historical and current data effectively, and made it possible to take advantage of large historical data for decoder recalibration in current data decoding. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated using the PDA method achieved much better and more robust performance in all sessions than with the other three calibration methods in both monkeys. Significance. (1) With this study, transfer learning theory was brought into iBMI decoder calibration for the first time. (2) Different from most transfer learning studies, the target data in this study were an ultra-small sample set and were transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement paradigm and the sensory paradigm, indicating a viable generalization. By reducing the demand for large current training data, this new method may facilitate the application
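    The abstract does not spell out the PDA algorithm, but a generic PCA subspace-alignment step conveys the flavor of mapping large historical data toward a small current set. The sketch below is that generic technique, not the authors' exact method; all names and dimensions are invented.

```python
import numpy as np
from sklearn.decomposition import PCA

def align_historical_to_current(X_hist, X_curr, n_components=4):
    """Generic PCA subspace alignment (sketch, not the paper's exact PDA).

    Projects historical data into its own PCA basis, then rotates that
    basis onto the current data's PCA basis, so a decoder trained on the
    transformed historical trials better matches current recordings.
    """
    pca_s = PCA(n_components).fit(X_hist)   # source = historical sessions
    pca_t = PCA(n_components).fit(X_curr)   # target = ultra-small current set
    Ps, Pt = pca_s.components_.T, pca_t.components_.T   # (d, k) bases
    M = Ps @ (Ps.T @ Pt)                    # source basis aligned to target
    X_hist_aligned = (X_hist - pca_s.mean_) @ M
    X_curr_proj = (X_curr - pca_t.mean_) @ Pt
    return X_hist_aligned, X_curr_proj

# Toy usage: 500 historical trials vs. 5 current trials, 96 channels
rng = np.random.default_rng(0)
Xh, Xc = rng.normal(size=(500, 96)), rng.normal(size=(5, 96))
Xh_aligned, Xc_proj = align_historical_to_current(Xh, Xc, n_components=4)
```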

  2. Method of absorbance correction in a spectroscopic heating value sensor

    Science.gov (United States)

    Saveliev, Alexei; Jangale, Vilas Vyankatrao; Zelepouga, Sergeui; Pratapas, John

    2013-09-17

    A method and apparatus for absorbance correction in a spectroscopic heating value sensor in which a reference light intensity measurement is made on a non-absorbing reference fluid, a light intensity measurement is made on a sample fluid, and a measured light absorbance of the sample fluid is determined. A corrective light intensity measurement at a non-absorbing wavelength of the sample fluid is made on the sample fluid from which an absorbance correction factor is determined. The absorbance correction factor is then applied to the measured light absorbance of the sample fluid to arrive at a true or accurate absorbance for the sample fluid.
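    The described correction reduces to a little arithmetic once a Beer-Lambert absorbance convention is assumed: the absorbance apparently present at a non-absorbing wavelength measures scattering and fouling losses, and is subtracted from the measured absorbance. A minimal sketch, with invented variable names:

```python
import numpy as np

def corrected_absorbance(i_ref, i_sample, i_sample_nonabs):
    """Sketch of the absorbance correction described above.

    i_ref            -- intensity through the non-absorbing reference fluid
    i_sample         -- intensity through the sample at the analysis wavelength
    i_sample_nonabs  -- intensity through the sample at a wavelength where it
                        does not absorb (captures scattering/fouling losses)
    Assumes a Beer-Lambert absorbance convention; names are invented.
    """
    a_measured = -np.log10(i_sample / i_ref)
    a_baseline = -np.log10(i_sample_nonabs / i_ref)  # correction factor
    return a_measured - a_baseline                   # true sample absorbance

print(corrected_absorbance(1.00, 0.62, 0.95))
```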

  3. Corrective Action Investigation Plan for Corrective Action Unit 190: Contaminated Waste Sites Nevada Test Site, Nevada, Rev. No.: 0

    International Nuclear Information System (INIS)

    Wickline, Alfred

    2006-01-01

    Corrective Action Unit (CAU) 190 is located in Areas 11 and 14 of the Nevada Test Site, which is 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit 190 is comprised of the four Corrective Action Sites (CASs) listed below: (1) 11-02-01, Underground Centrifuge; (2) 11-02-02, Drain Lines and Outfall; (3) 11-59-01, Tweezer Facility Septic System; and (4) 14-23-01, LTU-6 Test Area. These sites are being investigated because existing information is insufficient on the nature and extent of potential contamination to evaluate and recommend corrective action alternatives. Additional information will be obtained before evaluating corrective action alternatives and selecting the appropriate corrective action for each CAS by conducting a corrective action investigation (CAI). The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on August 24, 2006, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office; Stoller-Navarro Joint Venture, and National Security Technologies, LLC. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 190. The scope of the CAU 190 CAI includes the following activities: (1) Move surface debris and/or materials, as needed, to facilitate sampling; (2) Conduct radiological and geophysical surveys; (3) Perform field screening; (4) Collect and submit environmental samples for laboratory analysis to determine whether contaminants of concern (COCs) are present; (5) If COCs are present, collect additional step-out samples to define the lateral and vertical extent of the contamination; (6) Collect samples of source material, if present

  4. Bayesian estimation of P(X > x) from a small sample of Gaussian data

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    2017-01-01

    The classical statistical uncertainty problem of estimation of upper tail probabilities on the basis of a small sample of observations of a Gaussian random variable is considered. Predictive posterior estimation is discussed, adopting the standard statistical model with diffuse priors of the two...
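    For Gaussian data under the standard noninformative (diffuse) prior, the posterior predictive distribution is a Student-t with n-1 degrees of freedom, location x̄ and scale s·sqrt(1 + 1/n), so the upper-tail probability follows directly. A sketch of that textbook result, which may differ in detail from the paper's treatment:

```python
import numpy as np
from scipy import stats

def predictive_tail_probability(sample, x):
    """P(X > x) from the Bayesian posterior predictive under diffuse priors.

    For Gaussian data with the standard noninformative prior, the
    predictive distribution is Student-t with n-1 degrees of freedom,
    location x_bar, and scale s*sqrt(1 + 1/n).
    """
    sample = np.asarray(sample, dtype=float)
    n, xbar, s = sample.size, sample.mean(), sample.std(ddof=1)
    scale = s * np.sqrt(1.0 + 1.0 / n)
    return stats.t.sf((x - xbar) / scale, df=n - 1)

# Small-sample example: 6 observations, upper-tail estimate at x = 2.0
print(predictive_tail_probability([0.3, -0.1, 0.8, 0.4, 0.0, 0.6], 2.0))
```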

  5. Corrective Action Investigation Plan for Corrective Action Unit 137: Waste Disposal Sites, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    Wickline, Alfred

    2005-01-01

    This Corrective Action Investigation Plan (CAIP) contains project-specific information including facility descriptions, environmental sample collection objectives, and criteria for conducting site investigation activities at Corrective Action Unit (CAU) 137: Waste Disposal Sites. This CAIP has been developed in accordance with the ''Federal Facility Agreement and Consent Order'' (FFACO) (1996) that was agreed to by the State of Nevada, the U.S. Department of Energy (DOE), and the U.S. Department of Defense. Corrective Action Unit 137 contains sites that are located in Areas 1, 3, 7, 9, and 12 of the Nevada Test Site (NTS), which is approximately 65 miles (mi) northwest of Las Vegas, Nevada (Figure 1-1). Corrective Action Unit 137 is comprised of the eight corrective action sites (CASs) shown on Figure 1-1 and listed below: (1) CAS 01-08-01, Waste Disposal Site; (2) CAS 03-23-01, Waste Disposal Site; (3) CAS 03-23-07, Radioactive Waste Disposal Site; (4) CAS 03-99-15, Waste Disposal Site; (5) CAS 07-23-02, Radioactive Waste Disposal Site; (6) CAS 09-23-07, Radioactive Waste Disposal Site; (7) CAS 12-08-01, Waste Disposal Site; and (8) CAS 12-23-07, Waste Disposal Site. The Corrective Action Investigation (CAI) will include field inspections, radiological surveys, geophysical surveys, sampling of environmental media, analysis of samples, and assessment of investigation results, where appropriate. Data will be obtained to support corrective action alternative evaluations and waste management decisions. The CASs in CAU 137 are being investigated because hazardous and/or radioactive constituents may be present in concentrations that could potentially pose a threat to human health and the environment. Existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives for the CASs. Additional information will be generated by conducting a CAI before evaluating and selecting corrective action

  6. Taking sputum samples from small children with cystic fibrosis: a matter of cooperation

    DEFF Research Database (Denmark)

    Pehn, Mette; Bregnballe, Vibeke

    2014-01-01

    Objectives: An important part of disease control in the Danish guidelines for care of patients with cystic fibrosis (CF) is a monthly sputum sample obtained by tracheal suction. Coping with this unpleasant procedure in small children depends heavily on support from parents and nurses. The objective of this study was to develop a tool to help parents and children cope with tracheal suctioning. Methods: Three short videos showing how nurses perform tracheal suctioning to get a sputum sample from small children with cystic fibrosis were made. The videos were shown to and discussed with parents and children to help them identify their own challenges in coping with the procedure. The study was carried out in the outpatient clinic at the CF centre, Aarhus University Hospital. Results: The videos are a useful tool to convince the parents, nurses and children from the age of about four years...

  7. Improvements in dose calculation accuracy for small off-axis targets in high dose per fraction tomotherapy

    International Nuclear Information System (INIS)

    Hardcastle, Nicholas; Bayliss, Adam; Wong, Jeannie Hsiu Ding; Rosenfeld, Anatoly B.; Tomé, Wolfgang A.

    2012-01-01

    Purpose: A recent field safety notice from TomoTherapy detailed the underdosing of small, off-axis targets when receiving high doses per fraction. This is due to angular undersampling in the dose calculation gantry angles. This study evaluates a correction method to reduce the underdosing, to be implemented in the current version (v4.1) of the TomoTherapy treatment planning software. Methods: The correction method, termed “Super Sampling”, involved tripling the number of gantry angles from which the dose is calculated during optimization and dose calculation. Radiochromic film was used to measure the dose to small targets at various off-axis distances receiving a minimum of 21 Gy in one fraction. Measurements were also performed for single small targets at the center of the Lucy phantom, using radiochromic film and the dose magnifying glass (DMG). Results: Without super sampling, the peak dose deficit increased from 0% to 18% for a 10 mm target and 0% to 30% for a 5 mm target as off-axis target distances increased from 0 to 16.5 cm. When super sampling was turned on, the dose deficit trend was removed and all peak doses were within 5% of the planned dose. For measurements in the Lucy phantom at 9.7 cm off-axis, the positional and dose magnitude accuracy using super sampling was verified using radiochromic film and the DMG. Conclusions: A correction method implemented in the TomoTherapy treatment planning system which triples the angular sampling of the gantry angles used during optimization and dose calculation removes the underdosing for targets as small as 5 mm diameter, up to 16.5 cm off-axis, receiving up to 21 Gy.

  8. Gamma ray auto absorption correction evaluation methodology

    International Nuclear Information System (INIS)

    Gugiu, Daniela; Roth, Csaba; Ghinescu, Alecse

    2010-01-01

    Neutron activation analysis (NAA) is a well-established nuclear technique, suited to investigating microstructural or elemental composition, and can be applied to studies of a large variety of samples. Work with large samples involves, besides the development of large irradiation devices with well-known neutron field characteristics, knowledge of perturbing phenomena and adequate evaluation of correction factors such as neutron self-shielding, extended source correction, and gamma ray auto absorption. The objective of the work presented in this paper is to validate an appropriate methodology for gamma ray auto absorption correction evaluation for large inhomogeneous samples. For this purpose a benchmark experiment has been defined: a simple gamma ray transmission experiment, easy to reproduce. The gamma ray attenuation in pottery samples has been measured and computed using the MCNP5 code. The results show a good agreement between the computed and measured values, proving that the proposed methodology is able to evaluate the correction factors. (authors)

  9. Small-vessel Survey and Auction Sampling to Estimate Growth and Maturity of Eteline Snappers

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Small-vessel Survey and Auction Sampling to Estimate Growth and Maturity of Eteline Snappers and Improve Data-Limited Stock Assessments. This biosampling project...

  10. Conditional estimation of local pooled dispersion parameter in small-sample RNA-Seq data improves differential expression test.

    Science.gov (United States)

    Gim, Jungsoo; Won, Sungho; Park, Taesung

    2016-10-01

    High throughput sequencing technology in transcriptomics studies contributes to the understanding of gene regulation mechanisms and their cellular functions, but also increases the need for accurate statistical methods to assess quantitative differences between experiments. Many methods have been developed to account for the specifics of count data: non-normality, a dependence of the variance on the mean, and small sample size. Among them, the small number of samples in typical experiments is still a challenge. Here we present a method for differential analysis of count data, using conditional estimation of local pooled dispersion parameters. A comprehensive evaluation of our proposed method for differential gene expression analysis, using both simulated and real data sets, shows that the proposed method is more powerful than other existing methods while controlling the false discovery rate. By introducing conditional estimation of local pooled dispersion parameters, we successfully overcome the limitation of small power and enable a powerful quantitative analysis focused on differential expression testing with small numbers of samples.

  11. Clustering Methods with Qualitative Data: a Mixed-Methods Approach for Prevention Research with Small Samples.

    Science.gov (United States)

    Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G

    2015-10-01

    Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
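    The first study's setup is easy to reproduce for two of the three methods (latent class analysis has no standard scikit-learn implementation): simulate a small binary participant-by-code matrix with two latent profiles and check cluster recovery. A sketch under those assumptions:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)

# Simulate 50 "participants" x 12 binary codes with two latent profiles,
# mirroring the kind of matrix produced by coding qualitative interviews.
true = np.repeat([0, 1], 25)
p = np.where(true[:, None] == 0, 0.2, 0.7)           # code endorsement rates
X = rng.binomial(1, p, size=(50, 12)).astype(float)

for name, model in [
    ("hierarchical", AgglomerativeClustering(n_clusters=2)),
    ("k-means", KMeans(n_clusters=2, n_init=10, random_state=0)),
]:
    labels = model.fit_predict(X)
    print(name, adjusted_rand_score(true, labels))   # recovery accuracy
```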

  12. Clustering Methods with Qualitative Data: A Mixed Methods Approach for Prevention Research with Small Samples

    Science.gov (United States)

    Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.

    2016-01-01

    Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969

  13. Interval estimation methods of the mean in small sample situation and the results' comparison

    International Nuclear Information System (INIS)

    Wu Changli; Guo Chunying; Jiang Meng; Lin Yuangen

    2009-01-01

    The methods of interval estimation for the sample mean, namely the classical method, the bootstrap method, the Bayesian bootstrap method, the jackknife method, and the spread method of the empirical characteristic distribution function, are described. Numerical calculation of the sample mean intervals is carried out for sample sizes of 4, 5, and 6. The results indicate that the bootstrap method and the Bayesian bootstrap method are much more appropriate than the others in small-sample situations. (authors)
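    Of the methods compared, the percentile bootstrap is the simplest to show concretely. A minimal sketch for a sample of size 5, matching the smallest case considered (the other methods are not reproduced here):

```python
import numpy as np

def bootstrap_mean_ci(sample, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap interval for the mean of a very small sample."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=float)
    idx = rng.integers(0, sample.size, size=(n_boot, sample.size))
    means = sample[idx].mean(axis=1)          # one mean per resample
    return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# n = 5, illustrative data
print(bootstrap_mean_ci([2.1, 2.4, 1.9, 2.6, 2.2]))
```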

  14. 76 FR 78182 - Application of the Segregation Rules to Small Shareholders; Correction

    Science.gov (United States)

    2011-12-16

    ... CONTACT: Concerning the proposed regulations, Stephen R. Cleary, (202) 622-7750 (not a toll-free number... ``regard to Sec. 1.382-2T(h)(i)(A)) or a first'' is corrected to read ``regard to Sec. 1.382-2T(h)(2)(i)(A.... Clarification of Sec. 1.382-2T(j)(3)'', last line of the paragraph, the language ``2T(h)(i)(A).'' is corrected...

  15. Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data

    KAUST Repository

    Dong, Kai

    2015-09-16

    DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the “large p, small n” paradigm, the traditional Hotelling’s T² test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling’s test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling’s test.
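    The core idea can be sketched for the one-sample case: use only the diagonal of the covariance matrix, so nothing singular has to be inverted, and shrink each per-variable variance toward a pooled value. The fixed shrinkage weight and chi-square reference below are illustrative choices, not the authors' estimators or null distributions.

```python
import numpy as np
from scipy import stats

def diagonal_hotelling_one_sample(X, mu0, shrink=0.5):
    """Sketch of a shrinkage-based diagonal Hotelling-type statistic.

    Uses only the diagonal of the covariance (no inversion of a singular
    p x p matrix) and shrinks each variance toward the median variance.
    The fixed `shrink` weight and the chi-square reference distribution
    are illustrative, not the paper's estimators.
    """
    n, p = X.shape
    s2 = X.var(axis=0, ddof=1)
    s2_shrunk = shrink * np.median(s2) + (1 - shrink) * s2
    t2 = n * np.sum((X.mean(axis=0) - mu0) ** 2 / s2_shrunk)
    return t2, stats.chi2.sf(t2, df=p)       # approximate p-value

# "Large p, small n": 200 genes, 8 samples
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 200))
print(diagonal_hotelling_one_sample(X, np.zeros(200)))
```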

  16. Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data

    KAUST Repository

    Dong, Kai; Pang, Herbert; Tong, Tiejun; Genton, Marc G.

    2015-01-01

    DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the “large p, small n” paradigm, the traditional Hotelling’s T² test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling’s test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling’s test.

  17. Corrective Action Investigation Plan for Corrective Action Unit 428: Area 3 Septic Waste Systems 1 and 5, Tonopah Test Range, Nevada

    International Nuclear Information System (INIS)

    ITLV

    1999-01-01

    The Corrective Action Investigation Plan for Corrective Action Unit 428, Area 3 Septic Waste Systems 1 and 5, has been developed in accordance with the Federal Facility Agreement and Consent Order that was agreed to by the U.S. Department of Energy, Nevada Operations Office; the State of Nevada Division of Environmental Protection; and the U.S. Department of Defense. Corrective Action Unit 428 consists of Corrective Action Sites 03-05-002-SW01 and 03-05-002-SW05, respectively known as Area 3 Septic Waste System 1 and Septic Waste System 5. This Corrective Action Investigation Plan is used in combination with the Work Plan for Leachfield Corrective Action Units: Nevada Test Site and Tonopah Test Range, Nevada, Rev. 1 (DOE/NV, 1998c). The Leachfield Work Plan was developed to streamline investigations at leachfield Corrective Action Units by incorporating management, technical, quality assurance, health and safety, public involvement, field sampling, and waste management information common to a set of Corrective Action Units with similar site histories and characteristics into a single document that can be referenced. This Corrective Action Investigation Plan provides investigative details specific to Corrective Action Unit 428. A system of leachfields and associated collection systems was used for wastewater disposal at Area 3 of the Tonopah Test Range until a consolidated sewer system was installed in 1990 to replace the discrete septic waste systems. Operations within various buildings at Area 3 generated sanitary and industrial wastewaters potentially contaminated with contaminants of potential concern and disposed of in septic tanks and leachfields. Corrective Action Unit 428 is composed of two leachfield systems in the northern portion of Area 3. Based on site history collected to support the Data Quality Objectives process, contaminants of potential concern for the site include oil/diesel range total petroleum hydrocarbons, and Resource Conservation

  18. ANALYSIS OF MONTE CARLO SIMULATION SAMPLING TECHNIQUES ON SMALL SIGNAL STABILITY OF WIND GENERATOR-CONNECTED POWER SYSTEM

    Directory of Open Access Journals (Sweden)

    TEMITOPE RAPHAEL AYODELE

    2016-04-01

    Monte Carlo simulation using the Simple Random Sampling (SRS) technique is popularly known for its ability to handle complex uncertainty problems. However, to produce a reasonable result, it requires a huge sample size. This makes it computationally expensive, time consuming and unfit for online power system applications. In this article, the performance of the Latin Hypercube Sampling (LHS) technique is explored and compared with SRS in terms of accuracy, robustness and speed for small signal stability application in a wind generator-connected power system. The analysis is performed using probabilistic techniques via eigenvalue analysis on two standard networks (Single Machine Infinite Bus and the IEEE 16-machine 68-bus test system). The accuracy of the two sampling techniques is determined by comparing their different sample sizes with the IDEAL (conventional) result. The robustness is determined based on a significant variance reduction when the experiment is repeated 100 times with different sample sizes using the two sampling techniques in turn. Some of the results show that sample sizes generated from LHS for small signal stability application produce the same result as the IDEAL values starting from a sample size of 100. This shows that about 100 samples of a random variable generated using the LHS method are good enough to produce reasonable results for practical purposes in small signal stability application. It is also revealed that LHS has the least variance when the experiment is repeated 100 times, compared to the SRS technique. This signifies the robustness of LHS over SRS. A 100-sample LHS run produces the same result as the conventional method with a sample size of 50000. The reduced sample size required by LHS gives it a computational speed advantage (about six times) over the conventional method.
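    The variance-reduction comparison is easy to reproduce in miniature with scipy's quasi-Monte Carlo module, substituting a smooth toy integrand for the eigenvalue-based stability model. A sketch:

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)
n, reps = 100, 100
f = lambda u: np.sin(2 * np.pi * u)          # stand-in for the stability model

# Repeat each experiment 100 times and compare the variance of the
# estimate, mirroring the article's robustness criterion.
srs_est = [f(rng.random(n)).mean() for _ in range(reps)]
lhs_est = [
    f(qmc.LatinHypercube(d=1, seed=s).random(n).ravel()).mean()
    for s in range(reps)
]
print("SRS variance:", np.var(srs_est))
print("LHS variance:", np.var(lhs_est))      # markedly smaller for smooth f
```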

  19. Precise Th/U-dating of small and heavily coated samples of deep sea corals

    Science.gov (United States)

    Lomitschka, Michael; Mangini, Augusto

    1999-07-01

    Marine carbonate skeletons like deep-sea corals are frequently coated with iron and manganese oxides/hydroxides, which adsorb additional thorium and uranium out of the sea water. A new cleaning procedure has been developed to reduce this contamination. In this further cleaning step a solution of Na₂EDTA (Na₂H₂TB) and ascorbic acid is used, whose composition is optimised especially for samples of 20 mg weight. It was first tested on aliquots of a reef-building coral which had been artificially contaminated with powdered ferromanganese nodule. Applied to heavily contaminated deep-sea corals (scleractinia), it reduced excess ²³⁰Th by another order of magnitude beyond usual cleaning procedures. The measurement of at least three fractions of different contamination, together with an additional standard correction for contaminated carbonates, results in Th/U ages corrected for the authigenic component. A good agreement between Th/U and ¹⁴C ages can be achieved even for extremely coated corals.

  20. Experimental validation of gallium production and isotope-dependent positron range correction in PET

    Energy Technology Data Exchange (ETDEWEB)

    Fraile, L.M., E-mail: lmfraile@ucm.es [Grupo de Física Nuclear, Dpto. Física Atómica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Herraiz, J.L.; Udías, J.M.; Cal-González, J.; Corzo, P.M.G.; España, S.; Herranz, E.; Pérez-Liva, M.; Picado, E.; Vicente, E. [Grupo de Física Nuclear, Dpto. Física Atómica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Muñoz-Martín, A. [Centro de Microanálisis de Materiales, Universidad Autónoma de Madrid, E-28049 Madrid (Spain); Vaquero, J.J. [Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid (Spain)

    2016-04-01

    Positron range (PR) is one of the important factors that limit the spatial resolution of positron emission tomography (PET) preclinical images. Its blurring effect can be corrected to a large extent if the appropriate method is used during the image reconstruction. Nevertheless, this correction requires an accurate modelling of the PR for the particular radionuclide and materials in the sample under study. In this work we investigate PET imaging with ⁶⁸Ga and ⁶⁶Ga radioisotopes, which have a large PR and are being used in many preclinical and clinical PET studies. We produced a ⁶⁸Ga and ⁶⁶Ga phantom on a natural zinc target through (p,n) reactions using the 9-MeV proton beam delivered by the 5-MV CMAM tandetron accelerator. The phantom was imaged in an ARGUS small animal PET/CT scanner and reconstructed with a fully 3D iterative algorithm, with and without PR corrections. The reconstructed images at different time frames show significant improvement in spatial resolution when the appropriate PR correction is applied for each frame, by taking into account the relative amount of each isotope in the sample. With these results we validate our previously proposed PR correction method for isotopes with large PR. Additionally, we explore the feasibility of PET imaging with ⁶⁸Ga and ⁶⁶Ga radioisotopes in proton therapy.

  1. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    Directory of Open Access Journals (Sweden)

    Elias Chaibub Neto

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. We adopt the multinomial sampling formulation of the non-parametric bootstrap and compute bootstrap replications of sample moment statistics by weighting the observed data according to multinomial counts, instead of evaluating the statistic on a resampled version of the observed data. With this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the vectorized implementation on real and simulated data sets when bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap as well as a straightforward implementation based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was considerably faster for small to moderate sample sizes. The same held in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower due to the increased time spent generating the weight matrices via multinomial sampling.
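
    The multinomial-weight formulation is easy to express with NumPy matrix products; a sketch for Pearson's correlation, the statistic used in the paper (the weights play the role of resample counts):

        import numpy as np

        def vectorized_bootstrap_corr(x, y, B=10_000, seed=0):
            """Bootstrap Pearson correlation via multinomial weights and matrix algebra."""
            n = x.size
            rng = np.random.default_rng(seed)
            W = rng.multinomial(n, np.full(n, 1.0 / n), size=B) / n   # B x n weights
            mx, my = W @ x, W @ y                                     # weighted means
            mxy = W @ (x * y)                                         # weighted E[xy]
            vx = W @ (x * x) - mx**2                                  # weighted variances
            vy = W @ (y * y) - my**2
            return (mxy - mx * my) / np.sqrt(vx * vy)                 # B replications

        x = np.random.default_rng(1).standard_normal(50)
        y = 0.6 * x + 0.8 * np.random.default_rng(2).standard_normal(50)
        reps = vectorized_bootstrap_corr(x, y)
        print(reps.mean(), np.percentile(reps, [2.5, 97.5]))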

  2. Correction for the interference of strontium in the determination of uranium in geologic samples by X-ray fluorescence

    International Nuclear Information System (INIS)

    Roca, M.; Bayon, A.

    1981-01-01

    A suitable empirical algorithm has been derived to correct for the spectral interference of the SrKα line on the ULα line. It works successfully for SrO concentrations up to 8%, with a minimum detectable limit of 20 ppm U3O8. The X-ray spectrometry procedure also allows determination of the SrO content of the samples. A program in BASIC for data reduction has been written. (Author) 3 refs
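
    The paper's empirical algorithm is not reproduced in the abstract; a generic version of such an overlap correction calibrates a proportionality constant on uranium-free Sr standards and subtracts the Sr-driven contribution. All count rates below are hypothetical.

        import numpy as np

        # Hypothetical calibration data: apparent ULa-line counts measured on
        # uranium-free standards containing only SrO (overlap contribution only).
        sr_counts = np.array([1200.0, 2500.0, 5100.0, 10300.0])   # SrKa net counts
        ula_overlap = np.array([14.0, 30.0, 62.0, 123.0])         # apparent ULa counts

        k = np.linalg.lstsq(sr_counts[:, None], ula_overlap, rcond=None)[0][0]

        def corrected_ula(ula_measured, sr_measured):
            """Subtract the Sr-driven overlap from the measured ULa intensity."""
            return ula_measured - k * sr_measured

        print(corrected_ula(250.0, 4000.0))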

  3. Construct Validity of the MMPI-2-RF Triarchic Psychopathy Scales in Correctional and Collegiate Samples.

    Science.gov (United States)

    Kutchen, Taylor J; Wygant, Dustin B; Tylicki, Jessica L; Dieter, Amy M; Veltri, Carlo O C; Sellbom, Martin

    2017-01-01

    This study examined the MMPI-2-RF (Ben-Porath & Tellegen, 2008/2011) Triarchic Psychopathy scales recently developed by Sellbom et al. (2016) in 3 separate groups of male correctional inmates and 2 college samples. Participants were administered a diverse battery of psychopathy-specific measures (e.g., Psychopathy Checklist-Revised [Hare, 2003], Psychopathic Personality Inventory-Revised [Lilienfeld & Widows, 2005], Triarchic Psychopathy Measure [Patrick, 2010]), omnibus personality and psychopathology measures such as the Personality Assessment Inventory (Morey, 2007) and Personality Inventory for DSM-5 (Krueger, Derringer, Markon, Watson, & Skodol, 2012), and narrow-band measures that capture conceptually relevant constructs. Our results generally evidenced strong support for the convergent and discriminant validity of the MMPI-2-RF Triarchic scales. Boldness was largely associated with measures of fearless dominance, social potency, and stress immunity. Meanness showed strong relationships with measures of callousness, aggression, externalizing tendencies, and poor interpersonal functioning. Disinhibition exhibited strong associations with poor impulse control, stimulus seeking, and general externalizing proclivities. Our results provide additional construct validation to both the triarchic model and MMPI-2-RF Triarchic scales. Given the widespread use of the MMPI-2-RF in correctional and forensic settings, our results have important implications for clinical assessment in these 2 areas, where psychopathy is a highly relevant construct.

  4. Determination of sampling constants in NBS geochemical standard reference materials

    International Nuclear Information System (INIS)

    Filby, R.H.; Bragg, A.E.; Grimm, C.A.

    1986-01-01

    Recently Filby et al. showed that, for several elements, National Bureau of Standards (NBS) Fly Ash standard reference material (SRM) 1633a was a suitable reference material for microanalysis with small sample weights. The sampling constant Ks = (Ss%)² · W̄, where Ss% is the relative subsampling standard deviation and W̄ the mean sample weight, could not be determined from these data because it was not possible to quantitate other sources of error in the experimental variances. Ks values for certified elements in geochemical SRMs provide important homogeneity information for microanalysis. For mineralogically homogeneous SRMs (i.e., small Ks values for associated elements) such as the proposed clays, it is necessary to determine Ks by analysis of very small sample aliquots to maximize the subsampling variance relative to other sources of error. This source of error and the blank correction for the sample container can be eliminated by determining Ks from radionuclide activities of weighed subsamples of a preirradiated SRM
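
    The reconstructed relation lends itself to a one-line estimator from replicate subsample analyses; a sketch assuming the subsampling variance dominates the experimental variance (the replicate data below are invented).

        import numpy as np

        def sampling_constant(concentrations, weights_mg):
            """Sampling constant Ks = (Ss%)^2 * W-bar from replicate subsamples.

            `concentrations` are replicate results, `weights_mg` the subsample
            weights; units of Ks follow the weight units supplied.
            """
            rsd_percent = 100.0 * np.std(concentrations, ddof=1) / np.mean(concentrations)
            return rsd_percent**2 * np.mean(weights_mg)

        # Hypothetical replicate data for one element:
        c = np.array([10.2, 9.6, 11.1, 10.8, 9.9])   # mg/kg
        w = np.array([1.0, 1.1, 0.9, 1.0, 1.0])      # mg
        print(sampling_constant(c, w), "mg")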

  5. The use of commercially available PC-interface cards for elemental mapping in small samples using XRF

    International Nuclear Information System (INIS)

    Abu Bakar bin Ghazali; Hoyes Garnet

    1991-01-01

    This paper demonstrates the use of ADC and reed-relay interface cards to scan a small sample and acquire X-ray fluorescence data. The results show the distribution of an element, such as zinc, in the sample by means of colours signifying concentration

  6. Sampling or gambling

    Energy Technology Data Exchange (ETDEWEB)

    Gy, P.M.

    1981-12-01

    Sampling can be compared to no other technique. A mechanical sampler must above all be selected according to its aptitude for suppressing or reducing all components of the sampling error. Sampling is said to be correct when it gives every element making up the batch of matter submitted to sampling a uniform probability of being selected. A sampler must be correctly designed, built, installed, operated and maintained. When the conditions of sampling correctness are not strictly respected, the sampling error can no longer be controlled and can, unknown to the user, be unacceptably large: the sample is no longer representative. The implementation of an incorrect sampler is a form of gambling, and this paper intends to show that at this game the user is nearly always the loser in the long run. The users' and the manufacturers' interests may diverge, and the standards which should safeguard the users' interests very often fail to do so by tolerating or even recommending incorrect techniques, such as the implementation of too-narrow cutters traveling too fast through the stream to be sampled.

  7. Corrective Action Investigation Plan for Corrective Action Unit 561: Waste Disposal Areas, Nevada Test Site, Nevada with ROTC 1, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Grant Evenson

    2008-07-01

    Corrective Action Unit (CAU) 561 is located in Areas 1, 2, 3, 5, 12, 22, 23, and 25 of the Nevada Test Site, which is approximately 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit 561 is comprised of the 10 corrective action sites (CASs) listed below: • 01-19-01, Waste Dump • 02-08-02, Waste Dump and Burn Area • 03-19-02, Debris Pile • 05-62-01, Radioactive Gravel Pile • 12-23-09, Radioactive Waste Dump • 22-19-06, Buried Waste Disposal Site • 23-21-04, Waste Disposal Trenches • 25-08-02, Waste Dump • 25-23-21, Radioactive Waste Dump • 25-25-19, Hydrocarbon Stains and Trench These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. Additional information will be obtained by conducting a corrective action investigation before evaluating corrective action alternatives and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on April 28, 2008, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office; Stoller-Navarro Joint Venture; and National Security Technologies, LLC. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 561. Appendix A provides a detailed discussion of the DQO methodology and the DQOs specific to each CAS. The scope of the Corrective Action Investigation for CAU 561 includes the following activities: • Move surface debris and/or materials, as needed, to facilitate sampling. • Conduct radiological surveys

  8. Investigation of Phase Transition-Based Tethered Systems for Small Body Sample Capture

    Science.gov (United States)

    Quadrelli, Marco; Backes, Paul; Wilkie, Keats; Giersch, Lou; Quijano, Ubaldo; Scharf, Daniel; Mukherjee, Rudranarayan

    2009-01-01

    This paper summarizes the modeling, simulation, and testing work related to the development of technology to investigate the potential of shape memory actuation to provide mechanically simple and affordable solutions for delivering assets to a surface and for sample capture and possible return to Earth. We investigate the structural dynamics and controllability aspects of an adaptive beam carrying an end-effector which, by changing equilibrium phases, is able to actively decouple the end-effector dynamics from the spacecraft dynamics during the surface contact phase. Asset delivery and sample capture and return are at the heart of several emerging potential missions to small bodies, such as asteroids and comets, and to the surface of large bodies, such as Titan.

  9. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set-Effect of Pasteurization.

    Science.gov (United States)

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-02-26

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.

  10. Using Data-Dependent Priors to Mitigate Small Sample Bias in Latent Growth Models: A Discussion and Illustration Using Mplus

    Science.gov (United States)

    McNeish, Daniel M.

    2016-01-01

    Mixed-effects models (MEMs) and latent growth models (LGMs) are often considered interchangeable save the discipline-specific nomenclature. Software implementations of these models, however, are not interchangeable, particularly with small sample sizes. Restricted maximum likelihood estimation that mitigates small sample bias in MEMs has not been…

  11. Determination of self absorption correction factor (SAF) for gross alpha measurement in water samples by BIS method

    International Nuclear Information System (INIS)

    Raveendran, Nanda; Baburajan, A.; Ravi, P.M.

    2018-01-01

    Laboratories accredited by AERB undertake the measurement of gross alpha and gross beta activity in packaged drinking water from manufacturers across the country, analyzed as per the procedure of the Bureau of Indian Standards. Accurate measurement of gross alpha activity in drinking water samples is a challenge due to the self-absorption of alpha particles in the precipitate (Fe(OH)3 + BaSO4), whose thickness varies, and in the total dissolved solids (TDS). This paper deals with a study on tracer recovery and the self-absorption correction factor (SAF). ESL, Tarapur participated in an inter-laboratory comparison exercise conducted by IDS, RSSD, BARC as per the recommendation of AERB for the accredited laboratories. The thickness of the precipitate is an important aspect affecting the counting process. The activity was reported after conducting multiple experiments with uranium tracer recovery and varying precipitate thickness. To simplify the procedure, an average tracer recovery and self-absorption correction factor (SAF) were then derived in the present experiment and used to re-calculate the activity from the count rates reported earlier
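
    The correction chain the abstract implies reduces to dividing the net count rate by the counting efficiency, the tracer recovery and the SAF; a sketch with invented values (function and parameter names are hypothetical):

        def gross_alpha_activity(net_cpm, efficiency, recovery, saf, volume_l):
            """Gross alpha concentration (Bq/L) from a counted precipitate.

            Hypothetical correction chain: counting efficiency, chemical
            (tracer) recovery, and the self-absorption factor SAF for the
            precipitate layer.
            """
            cps = net_cpm / 60.0
            return cps / (efficiency * recovery * saf * volume_l)

        # Example with made-up values: 4.8 net cpm, 25% efficiency,
        # 85% tracer recovery, SAF of 0.6, 1 L sample.
        print(gross_alpha_activity(4.8, 0.25, 0.85, 0.6, 1.0))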

  12. Accuracy and Radiation Dose of CT-Based Attenuation Correction for Small Animal PET: A Monte Carlo Simulation Study

    International Nuclear Information System (INIS)

    Yang, Ching-Ching; Chan, Kai-Chieh

    2013-06-01

    Small animal PET allows qualitative assessment and quantitative measurement of biochemical processes in vivo, but the accuracy and reproducibility of imaging results can be affected by several parameters. The first aim of this study was to investigate the performance of different CT-based attenuation correction strategies and assess the resulting impact on PET images. The absorbed dose in different tissues caused by the scanning procedures was also examined, with a view to minimizing biological damage from the radiation exposure of PET/CT scanning. A small animal PET/CT system was modeled with Monte Carlo simulation to generate imaging results and dose distributions. Three energy mapping methods, including the bilinear scaling method, the dual-energy method and a hybrid method combining kVp conversion with the dual-energy method, were investigated comparatively by assessing the accuracy of the estimated linear attenuation coefficients at 511 keV and the bias introduced into PET quantification results by CT-based attenuation correction. Our results showed that the hybrid method outperformed the bilinear scaling method, while the dual-energy method achieved the highest accuracy among the three energy mapping methods. Overall, the accuracy of the PET quantification results followed the same trend as that of the estimated linear attenuation coefficients, although the differences between the three methods were more obvious in the estimated linear attenuation coefficients than in the PET quantification results. With regard to radiation exposure from CT, the absorbed dose ranged between 7.29-45.58 mGy for the 50-kVp scan and between 6.61-39.28 mGy for the 80-kVp scan. For an 18F radioactivity concentration of 1.86×10^5 Bq/ml, the PET absorbed dose was around 24 cGy for a tumor with a target-to-background ratio of 8. The radiation levels for CT scans are not lethal to the animal, but concurrent use of PET in a longitudinal study can increase the risk of biological effects.
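
    A generic sketch of bilinear scaling, the first of the three energy-mapping methods compared: CT numbers are mapped to 511-keV linear attenuation coefficients with one slope below water and another up to a bone calibration point. The water value is a standard approximation; the bone anchor is a round-number assumption, not the paper's calibration.

        import numpy as np

        MU_WATER_511 = 0.096   # cm^-1, approx. attenuation of water at 511 keV
        MU_BONE_511 = 0.172    # cm^-1, assumed value for the bone anchor
        HU_BONE = 1000.0       # nominal CT number of the bone calibration point

        def mu_511_from_hu(hu):
            """Bilinear mapping of CT numbers to 511-keV attenuation coefficients."""
            hu = np.asarray(hu, dtype=float)
            soft = MU_WATER_511 * (hu + 1000.0) / 1000.0   # air-to-water segment
            bone = MU_WATER_511 + (MU_BONE_511 - MU_WATER_511) * hu / HU_BONE
            return np.where(hu <= 0.0, np.clip(soft, 0.0, None), bone)

        print(mu_511_from_hu([-1000, 0, 500, 1000]))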

  13. Bias correction of risk estimates in vaccine safety studies with rare adverse events using a self-controlled case series design.

    Science.gov (United States)

    Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley

    2013-12-15

    The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate two bias correction approaches, the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation, with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.
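
    The Firth penalty is easiest to show in plain (unconditional) logistic regression rather than the conditional Poisson SCCS model the paper uses; a compact Newton-scoring sketch built on the standard modified score X'(y - mu + h(1/2 - mu)), with the penalized information approximated by the ordinary Fisher information:

        import numpy as np

        def firth_logistic(X, y, n_iter=50, tol=1e-8):
            """Firth-penalized logistic regression via modified Newton scoring."""
            n, p = X.shape
            beta = np.zeros(p)
            for _ in range(n_iter):
                mu = 1.0 / (1.0 + np.exp(-(X @ beta)))    # fitted probabilities
                W = mu * (1.0 - mu)                        # IRLS weights
                XW = X * W[:, None]
                info_inv = np.linalg.inv(X.T @ XW)         # (X'WX)^-1
                # hat diagonals h_i of W^(1/2) X (X'WX)^-1 X' W^(1/2)
                h = np.einsum('ij,jk,ik->i', XW, info_inv, X)
                score = X.T @ (y - mu + h * (0.5 - mu))    # Firth-modified score
                step = info_inv @ score
                beta += step
                if np.max(np.abs(step)) < tol:
                    break
            return beta

        rng = np.random.default_rng(3)
        X = np.column_stack([np.ones(25), rng.standard_normal(25)])
        y = (rng.random(25) < 0.15).astype(float)          # rare events, small n
        print(firth_logistic(X, y))

    Unlike plain maximum likelihood, the penalized fit stays finite even under the separation that rare events tend to produce.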

  14. Research on 3-D terrain correction methods of airborne gamma-ray spectrometry survey

    International Nuclear Information System (INIS)

    Liu Yanyang; Liu Qingcheng; Zhang Zhiyong

    2008-01-01

    The general height-correction method is not effective in complex terrain when interpreting airborne gamma-ray spectrometry data, and the 2-D terrain correction method studied in recent years is only applicable to correcting the measured section. A new 3-D sector terrain correction method is studied here. The method divides the ground radiator into many small sector radiators, calculates the irradiation rate from each at the given survey distance, and takes the total over all small radiating sources as the irradiation rate of the ground radiator at a given point of the aerial survey; correction coefficients are then calculated for every point and applied to the airborne gamma-ray spectrometry data. By dividing the ground radiator into many small sectors, the method achieves forward calculation, inversion calculation and terrain correction for airborne gamma-ray spectrometry surveys in complex topography. Other factors are also considered, such as the unsaturated degree of the measured scope and uneven radiator content on the ground. The results of a forward model and an example analysis show that the 3-D terrain correction method is appropriate and effective. (authors)
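
    A schematic of the sector bookkeeping only, with a placeholder point-kernel (inverse square with air attenuation) and unit areal source strength; the paper's full treatment also handles detector response, uneven source content and the unsaturated geometry. All coefficients below are placeholders.

        import numpy as np

        MU_AIR = 0.009  # m^-1, placeholder air attenuation coefficient

        def irradiation_rate(h_detector, radii, azimuths, terrain_height):
            """Sum point-kernel contributions from small sector patches of ground.

            `terrain_height(r, phi)` returns the ground elevation of a patch;
            every patch carries unit areal source strength (placeholder).
            """
            total = 0.0
            for i in range(len(radii) - 1):
                r_mid = 0.5 * (radii[i] + radii[i + 1])
                for j in range(len(azimuths) - 1):
                    phi_w = azimuths[j + 1] - azimuths[j]
                    area = r_mid * (radii[i + 1] - radii[i]) * phi_w
                    dz = h_detector - terrain_height(r_mid, azimuths[j])
                    d2 = r_mid**2 + dz**2
                    total += area * np.exp(-MU_AIR * np.sqrt(d2)) / (4.0 * np.pi * d2)
            return total

        radii = np.linspace(1.0, 300.0, 60)
        azimuths = np.linspace(0.0, 2 * np.pi, 72)
        flat = irradiation_rate(100.0, radii, azimuths, lambda r, p: 0.0)
        hilly = irradiation_rate(100.0, radii, azimuths,
                                 lambda r, p: 30.0 * np.exp(-r / 150.0))
        print("terrain correction coefficient:", flat / hilly)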

  15. An improved optimization algorithm of the three-compartment model with spillover and partial volume corrections for dynamic FDG PET images of small animal hearts in vivo

    Science.gov (United States)

    Li, Yinlin; Kundu, Bijoy K.

    2018-03-01

    The three-compartment model with spillover (SP) and partial volume (PV) corrections has been widely used for noninvasive kinetic parameter studies of dynamic 2-[18F]fluoro-2-deoxy-D-glucose (FDG) positron emission tomography images of small animal hearts in vivo. However, the approach still suffers from estimation uncertainty or slow convergence caused by the commonly used optimization algorithms. The aim of this study was to develop an improved optimization algorithm with better estimation performance. Femoral artery blood samples, image-derived input functions from the heart ventricles, and myocardial time-activity curves (TACs) were derived from data on 16 C57BL/6 mice obtained from the UCLA Mouse Quantitation Program. Parametric equations of the average myocardium and blood pool TACs with SP and PV corrections in a three-compartment tracer kinetic model were formulated. A hybrid method integrating artificial immune-system and interior-reflective Newton methods was developed to solve the equations. Two penalty functions and one late time-point tail vein blood sample were used to constrain the objective function. The estimation accuracy of the method was validated by comparing results with experimental values using the errors in the areas under curves (AUCs) of the model-corrected input function (MCIF) and the 18F-FDG influx constant Ki. The elapsed time was used to measure convergence speed. The overall AUC error of MCIF for the 16 mice averaged -1.4 ± 8.2%, with a correlation coefficient of 0.9706. Similar results were seen in the overall Ki error percentage, which was 0.4 ± 5.8% with a correlation coefficient of 0.9912. The t-test P value for both showed no significant difference. The mean and standard deviation of the MCIF AUC and Ki percentage errors are lower than in previously published methods. The computation time of the hybrid method is also several times lower than using just a stochastic method.
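
    A sketch of the forward model being fitted, assuming the standard irreversible two-tissue (three-compartment) FDG kinetics with Ki = K1*k3/(k2+k3); the spillover and recovery coefficients, rate constants and synthetic input function below are placeholders, not the study's values.

        import numpy as np

        def tissue_tac(t, cp, K1, k2, k3):
            """Irreversible two-tissue FDG model: C_t = K1 * h(t) (*) Cp(t), with
            h(t) = k3/(k2+k3) + k2/(k2+k3) * exp(-(k2+k3) t); uniform t grid."""
            dt = t[1] - t[0]
            h = k3 / (k2 + k3) + k2 / (k2 + k3) * np.exp(-(k2 + k3) * t)
            return K1 * np.convolve(cp, h)[:t.size] * dt

        def measured_tac(t, cp, K1, k2, k3, rc=0.7, sp=0.3):
            """Myocardial TAC with partial-volume (rc) and blood spillover (sp)
            coefficients; both values here are placeholders."""
            return rc * tissue_tac(t, cp, K1, k2, k3) + sp * cp

        t = np.linspace(0, 60, 601)                  # min
        cp = 50 * t * np.exp(-t / 3.0)               # synthetic input function
        tac = measured_tac(t, cp, K1=0.6, k2=1.2, k3=0.1)
        ki = 0.6 * 0.1 / (1.2 + 0.1)                 # influx constant K1*k3/(k2+k3)
        print("peak measured TAC:", tac.max(), " Ki:", round(ki, 4))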

  16. Measured attenuation correction methods

    International Nuclear Information System (INIS)

    Ostertag, H.; Kuebler, W.K.; Doll, J.; Lorenz, W.J.

    1989-01-01

    Accurate attenuation correction is a prerequisite for the determination of exact local radioactivity concentrations in positron emission tomography. Attenuation correction factors range from 4-5 in brain studies to 50-100 in whole body measurements. This report gives an overview of the different methods of determining the attenuation correction factors by transmission measurements using an external positron emitting source. The long-lived generator nuclide 68Ge/68Ga is commonly used for this purpose. The additional patient dose from the transmission source is usually a small fraction of the dose due to the subsequent emission measurement. Ring-shaped transmission sources as well as rotating point or line sources are employed in modern positron tomographs. By masking a rotating line or point source, random and scattered events in the transmission scans can be effectively suppressed. The problems of measured attenuation correction are discussed: Transmission/emission mismatch, random and scattered event contamination, counting statistics, transmission/emission scatter compensation, transmission scan after administration of activity to the patient. By using a double masking technique simultaneous emission and transmission scans become feasible. (orig.)
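
    The core arithmetic of measured attenuation correction is a per-line-of-response ratio of blank-scan to transmission-scan counts, applied multiplicatively to the emission data; a toy sketch with made-up count values:

        import numpy as np

        def attenuation_correction_factors(blank_counts, transmission_counts):
            """Measured ACF per line of response: blank / transmission ratio."""
            blank = np.asarray(blank_counts, dtype=float)
            trans = np.asarray(transmission_counts, dtype=float)
            return blank / np.clip(trans, 1.0, None)   # guard against empty bins

        blank = np.array([9800.0, 10150.0, 9930.0])
        trans = np.array([2400.0, 180.0, 95.0])        # brain-like vs. torso-like LORs
        emission = np.array([120.0, 45.0, 30.0])
        print(emission * attenuation_correction_factors(blank, trans))

    The resulting factors (about 4 to 100 here) span the same range the abstract quotes for brain versus whole-body studies.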

  17. Efficiency calibration and measurement of self-absorption correction of environmental gamma spectroscopy of soils samples using Marinelli beaker

    International Nuclear Information System (INIS)

    Abdi, M. R.; Mostajaboddavati, M.; Hassanzadeh, S.; Faghihian, H.; Rezaee, Kh.; Kamali, M.

    2006-01-01

    A nonlinear function combined with a mixed-activity calibration method is applied to fit the experimental peak efficiency of HPGe spectrometers in the 59-2614 keV energy range. The preparation of Marinelli beaker standards of mixed gamma emitters and an RG-set at secular equilibrium with its daughter radionuclides was studied. Standards were prepared by mixing known amounts of 133Ba, 241Am, 152Eu, 207Bi, 24Na, Al2O3 powder and soil. The validity of these standards was checked by comparison with the certified standard reference materials RG-set and IAEA-Soil-6. Self-absorption was measured for the activity calculation of the gamma-ray lines of the 238U decay series daughters, the 232Th series, 137Cs and 40K in soil samples. Self-absorption in the sample depends on a number of factors, including sample composition, density, sample size and gamma-ray energy. Seven Marinelli beaker standards were prepared at different degrees of compaction, with bulk densities (ρ) of 1.000 to 1.600 g/cm³. The detection efficiency versus density was obtained and the equation for the self-absorption correction factors of soil samples was calculated
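
    One simple way to turn an efficiency-versus-density measurement into a correction factor is to fit ln(efficiency) linearly in density and rescale results to the calibration density; the paper's actual functional form is not given in the abstract, and all numbers below are hypothetical.

        import numpy as np

        # Hypothetical full-energy-peak efficiencies of one gamma line measured
        # on Marinelli standards of increasing bulk density (g/cm^3):
        rho = np.array([1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6])
        eff = np.array([2.10, 2.04, 1.99, 1.93, 1.88, 1.83, 1.79]) / 100

        a, b = np.polyfit(rho, np.log(eff), 1)      # ln(eff) = a*rho + b

        def self_absorption_correction(rho_sample, rho_ref=1.0):
            """Multiplicative factor rescaling a result to the reference density."""
            return np.exp(a * (rho_ref - rho_sample))

        print(self_absorption_correction(1.45))     # > 1 for denser samples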

  18. Corrective Action Decision Document/Closure Report for Corrective Action Unit 266: Area 25 Building 3124 Leachfield, Nevada Test Site, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    NNSA/NV

    2000-02-17

    This Corrective Action Decision Document/Closure Report (CADD/CR) was prepared for Corrective Action Unit (CAU) 266, Area 25 Building 3124 Leachfield, in accordance with the Federal Facility Agreement and Consent Order. Located in Area 25 at the Nevada Test Site in Nevada, CAU 266 includes Corrective Action Site (CAS) 25-05-09. The Corrective Action Decision Document and Closure Report were combined into one report because sample data collected during the corrective action investigation (CAI) indicated that contaminants of concern (COCs) were either not present in the soil, or present at concentrations not requiring corrective action. This CADD/CR identifies and rationalizes the U.S. Department of Energy, Nevada Operations Office's recommendation that no corrective action was necessary for CAU 266. From February through May 1999, CAI activities were performed as set forth in the related Corrective Action Investigation Plan. Analytes detected during the three-stage CAI of CAU 266 were evaluated against preliminary action levels (PALs) to determine COCs, and the analysis of the data generated from soil collection activities indicated the PALs were not exceeded for total volatile/semivolatile organic compounds, total petroleum hydrocarbons, polychlorinated biphenyls, total Resource Conservation and Recovery Act metals, gamma-emitting radionuclides, isotopic uranium/plutonium, and strontium-90 for any of the samples. However, COCs were identified in samples from within the septic tank and distribution box; and the isotopic americium concentrations in the two soil samples did exceed PALs. Closure activities were performed at the site to address the COCs identified in the septic tank and distribution box. Further, no use restrictions were required to be placed on CAU 266 because the CAI revealed soil contamination to be less than the 100 millirems per year limit established by DOE Order 5400.5.

  19. Corrective Action Decision Document/Closure Report for Corrective Action Unit 266: Area 25 Building 3124 Leachfield, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    2000-01-01

    This Corrective Action Decision Document/Closure Report (CADD/CR) was prepared for Corrective Action Unit (CAU) 266, Area 25 Building 3124 Leachfield, in accordance with the Federal Facility Agreement and Consent Order. Located in Area 25 at the Nevada Test Site in Nevada, CAU 266 includes Corrective Action Site (CAS) 25-05-09. The Corrective Action Decision Document and Closure Report were combined into one report because sample data collected during the corrective action investigation (CAI) indicated that contaminants of concern (COCs) were either not present in the soil, or present at concentrations not requiring corrective action. This CADD/CR identifies and rationalizes the U.S. Department of Energy, Nevada Operations Office's recommendation that no corrective action was necessary for CAU 266. From February through May 1999, CAI activities were performed as set forth in the related Corrective Action Investigation Plan. Analytes detected during the three-stage CAI of CAU 266 were evaluated against preliminary action levels (PALs) to determine COCs, and the analysis of the data generated from soil collection activities indicated the PALs were not exceeded for total volatile/semivolatile organic compounds, total petroleum hydrocarbons, polychlorinated biphenyls, total Resource Conservation and Recovery Act metals, gamma-emitting radionuclides, isotopic uranium/plutonium, and strontium-90 for any of the samples. However, COCs were identified in samples from within the septic tank and distribution box; and the isotopic americium concentrations in the two soil samples did exceed PALs. Closure activities were performed at the site to address the COCs identified in the septic tank and distribution box. Further, no use restrictions were required to be placed on CAU 266 because the CAI revealed soil contamination to be less than the 100 millirems per year limit established by DOE Order 5400.5

  20. Corrective Action Investigation Plan for Corrective Action Unit 166: Storage Yards and Contaminated Materials, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    David Strand

    2006-01-01

    Corrective Action Unit 166 is located in Areas 2, 3, 5, and 18 of the Nevada Test Site, which is 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit (CAU) 166 is comprised of the seven Corrective Action Sites (CASs) listed below: (1) 02-42-01, Cond. Release Storage Yd - North; (2) 02-42-02, Cond. Release Storage Yd - South; (3) 02-99-10, D-38 Storage Area; (4) 03-42-01, Conditional Release Storage Yard; (5) 05-19-02, Contaminated Soil and Drum; (6) 18-01-01, Aboveground Storage Tank; and (7) 18-99-03, Wax Piles/Oil Stain. These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. Additional information will be obtained by conducting a corrective action investigation (CAI) before evaluating corrective action alternatives and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on February 28, 2006, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office; Stoller-Navarro Joint Venture; and Bechtel Nevada. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 166. Appendix A provides a detailed discussion of the DQO methodology and the DQOs specific to each CAS. The scope of the CAI for CAU 166 includes the following activities: (1) Move surface debris and/or materials, as needed, to facilitate sampling. (2) Conduct radiological surveys. (3) Perform field screening. (4) Collect and submit environmental samples for laboratory analysis to determine if

  1. Advanced astigmatism-corrected tandem Wadsworth mounting for small-scale spectral broadband imaging spectrometer.

    Science.gov (United States)

    Lei, Yu; Lin, Guan-yu

    2013-01-01

    Tandem gratings in a double-dispersion mount make it possible to design an imaging spectrometer for weak-light observation with high spatial resolution, high spectral resolution, and high optical transmission efficiency. The traditional tandem Wadsworth mounting was originally designed to match a coaxial telescope and a large-scale imaging spectrometer. When it is used with an off-axis telescope, such as an off-axis parabolic mirror, it yields lower imaging quality than with a coaxial telescope. It may also introduce interference between the detector and the optical elements when applied to a short-focal-length, small-scale spectrometer in the confined volume of a satellite. An advanced tandem Wadsworth mounting has been investigated to deal with this situation. The Wadsworth astigmatism-corrected mounting condition, expressed as the distance between the second concave grating and the imaging plane, is calculated. The optimum arrangement of the first plane grating and the second concave grating, which makes the anterior Wadsworth condition hold for each wavelength, is then analyzed by geometric and first-order differential calculation. These two arrangements comprise the advanced Wadsworth mounting condition. The spectral resolution has also been calculated from these conditions. A design example based on the optimized theory shows that the advanced tandem Wadsworth mounting performs excellently over a broad spectral band.

  2. Bryant J. correction formula

    International Nuclear Information System (INIS)

    Tejera R, A.; Cortes P, A.; Becerril V, A.

    1990-03-01

    For the practical application of the method proposed by J. Bryant, the authors carried out a series of small corrections related to the background, the dead time of the detectors and channels, the resolving time of the coincidences, the accidental coincidences, the decay scheme, and the gamma efficiency of the beta detector and the beta efficiency of the gamma detector. The derivation of the correction formula is presented in this report, covering 25 combinations of the probability of the first state being present at the moment of one disintegration and of the second state at the moment of the following disintegration. (Author)

  3. Evaluation of Approaches to Analyzing Continuous Correlated Eye Data When Sample Size Is Small.

    Science.gov (United States)

    Huang, Jing; Huang, Jiayan; Chen, Yong; Ying, Gui-Shuang

    2018-02-01

    To evaluate the performance of commonly used statistical methods for analyzing continuous correlated eye data when the sample size is small. We simulated correlated continuous data under two designs: (1) two eyes of a subject in two comparison groups; (2) two eyes of a subject in the same comparison group, under various sample sizes (5-50), inter-eye correlations (0-0.75) and effect sizes (0-0.8). Simulated data were analyzed using the paired t-test, the two-sample t-test, the Wald test and score test based on generalized estimating equations (GEE), and the F-test based on a linear mixed effects model (LMM). We compared type I error rates and statistical power, and demonstrated the analysis approaches on two real datasets. In design 1, the paired t-test and LMM perform better than GEE, with nominal type I error rates and higher statistical power. In design 2, no test performs uniformly well: the two-sample t-test (average of two eyes or a random eye) achieves better control of type I error but yields lower statistical power. In both designs, the GEE Wald test inflates the type I error rate and the GEE score test has lower power. When the sample size is small, some commonly used statistical methods do not perform well. The paired t-test and LMM perform best when the two eyes of a subject are in different comparison groups, and the t-test using the average of the two eyes performs best when they are in the same comparison group. The study design should be considered when selecting the appropriate analysis approach.
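
    A sketch of the design-1 simulation: bivariate-normal eyes with inter-eye correlation rho, analyzed with a paired t-test. Under the null, the empirical rejection rate should sit near the nominal 0.05, which is the type I error check the study performs (parameter values are arbitrary).

        import numpy as np
        from scipy import stats

        def simulate_design1(n=15, rho=0.5, effect=0.5, seed=0):
            """Two eyes per subject, one eye in each comparison group (design 1)."""
            rng = np.random.default_rng(seed)
            cov = [[1.0, rho], [rho, 1.0]]
            eyes = rng.multivariate_normal([0.0, effect], cov, size=n)
            return eyes[:, 0], eyes[:, 1]

        rejections = 0
        for s in range(2000):
            g1, g2 = simulate_design1(effect=0.0, seed=s)   # null hypothesis true
            rejections += stats.ttest_rel(g1, g2).pvalue < 0.05
        print("empirical type I error:", rejections / 2000)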

  4. Corrective Action Investigation Plan for Corrective Action Unit 551: Area 12 Muckpiles, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    Boehlecke, Robert F.

    2004-01-01

    This Corrective Action Investigation Plan (CAIP) contains project-specific information including facility descriptions, environmental sample collection objectives, and criteria for conducting site investigation activities at Corrective Action Unit (CAU) 551, Area 12 muckpiles, Nevada Test Site (NTS), Nevada. This CAIP has been developed in accordance with the 'Federal Facility Agreement and Consent Order' (FFACO) (1996) that was agreed to by the State of Nevada, the U.S. Department of Energy (DOE), and the U.S. Department of Defense. Corrective Action Unit 551 is located in Area 12 of the NTS, which is approximately 110 miles (mi) northwest of Las Vegas, Nevada (Figure 1-1). Area 12 is approximately 40 miles beyond the main gate to the NTS. Corrective Action Unit 551 is comprised of the four Corrective Action Sites (CASs) shown on Figure 1-1 and listed below: (1) 12-01-09, Aboveground Storage Tank and Stain; (2) 12-06-05, Muckpile; (3) 12-06-07, Muckpile; and (4) 12-06-08, Muckpile. Corrective Action Site 12-01-09 is located in Area 12 and consists of an above ground storage tank (AST) and associated stain. Corrective Action Site 12-06-05 is located in Area 12 and consists of a muckpile associated with the U12 B-Tunnel. Corrective Action Site 12-06-07 is located in Area 12 and consists of a muckpile associated with the U12 C-, D-, and F-Tunnels. Corrective Action Site 12-06-08 is located in Area 12 and consists of a muckpile associated with the U12 B-Tunnel. In keeping with common convention, the U12B-, C-, D-, and F-Tunnels will be referred to as the B-, C-, D-, and F-Tunnels. The corrective action investigation (CAI) will include field inspections, radiological surveys, and sampling of media, where appropriate. Data will also be obtained to support waste management decisions

  5. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    Science.gov (United States)

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for reference interval (RI) construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample sizes. The purpose of the study was to evaluate how well normality tests identify samples extracted from a Gaussian population at small sample sizes, and to assess the consequences for RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. The Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests, but their specificity was poor at sample size n = 30. Applying robust methods (after Box-Cox transformation) to all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RIs. © 2016 American Society for Veterinary Clinical Pathology.
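
    A quick way to reproduce the flavor of this simulation with scipy: estimate how often Shapiro-Wilk accepts truly Gaussian samples (sensitivity, in the paper's framing of "identifying Gaussian samples") and rejects lognormal ones (specificity) at n = 30 and 60. The lognormal shape parameter is arbitrary.

        import numpy as np
        from scipy import stats

        def sensitivity(n, sims=1000, alpha=0.05, seed=0):
            """Fraction of truly Gaussian samples that Shapiro-Wilk accepts."""
            rng = np.random.default_rng(seed)
            accept = sum(stats.shapiro(rng.standard_normal(n)).pvalue >= alpha
                         for _ in range(sims))
            return accept / sims

        def specificity(n, sims=1000, alpha=0.05, seed=0):
            """Fraction of lognormal samples that Shapiro-Wilk correctly rejects."""
            rng = np.random.default_rng(seed)
            reject = sum(stats.shapiro(rng.lognormal(0.0, 0.5, n)).pvalue < alpha
                         for _ in range(sims))
            return reject / sims

        for n in (30, 60):
            print(n, sensitivity(n), specificity(n))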

  6. Power corrections and renormalons in Transverse Momentum Distributions

    Energy Technology Data Exchange (ETDEWEB)

    Scimemi, Ignazio [Departamento de Física Teórica II, Universidad Complutense de Madrid,Ciudad Universitaria, 28040 Madrid (Spain); Vladimirov, Alexey [Institut für Theoretische Physik, Universität Regensburg,D-93040 Regensburg (Germany)

    2017-03-01

    We study the power corrections to Transverse Momentum Distributions (TMDs) by analyzing renormalon divergences of the perturbative series. The renormalon divergences arise independently in two constituents of TMDs: the rapidity evolution kernel and the small-b matching coefficient. The renormalon contributions (and consequently power corrections and non-perturbative corrections to the related cross sections) have a non-trivial dependence on the Bjorken variable and the transverse distance. We discuss the consistency requirements for power corrections for TMDs and suggest inputs for the TMD phenomenology in accordance with this study. Both unpolarized quark TMD parton distribution function and fragmentation function are considered.

  7. A Rational Approach for Discovering and Validating Cancer Markers in Very Small Samples Using Mass Spectrometry and ELISA Microarrays

    Directory of Open Access Journals (Sweden)

    Richard C. Zangar

    2004-01-01

    Identifying useful markers of cancer can be problematic due to limited amounts of sample. Some samples, such as nipple aspirate fluid (NAF) or early-stage tumors, are inherently small. Other samples, such as serum, are collected in larger volumes, but archives of these samples are very valuable and only small amounts of each sample may be available for a single study. Also, given the diverse nature of cancer and the inherent variability in individual protein levels, it seems likely that the best approach to screen for cancer will be to determine the profile of a battery of proteins. As a result, a major challenge in identifying protein markers of disease is the ability to screen many proteins using very small amounts of sample. In this review, we outline some technological advances in proteomics that greatly advance this capability. Specifically, we propose a strategy for identifying markers of breast cancer in NAF that utilizes mass spectrometry (MS) to simultaneously screen hundreds or thousands of proteins in each sample. The best potential markers identified by the MS analysis can then be extensively characterized using an ELISA microarray assay. Because the microarray analysis is quantitative and large numbers of samples can be efficiently analyzed, this approach offers the ability to rapidly assess a battery of selected proteins in a manner that is directly relevant to traditional clinical assays.

  8. Respiratory Motion Correction for Compressively Sampled Free Breathing Cardiac MRI Using Smooth l1-Norm Approximation

    Directory of Open Access Journals (Sweden)

    Muhammad Bilal

    2018-01-01

    Transformed-domain sparsity of Magnetic Resonance Imaging (MRI) has recently been used to reduce acquisition time in conjunction with compressed sensing (CS) theory. Respiratory motion during an MR scan results in strong blurring and ghosting artifacts in the recovered MR images. To improve the quality of the recovered images, the motion needs to be estimated and corrected. In this article, a two-step approach is proposed for the recovery of cardiac MR images in the presence of free-breathing motion. In the first step, compressively sampled MR images are recovered by solving an optimization problem with a gradient descent algorithm; the L1-norm regularizer in the optimization problem is approximated by a hyperbolic tangent function. In the second step, a block matching algorithm known as Adaptive Rood Pattern Search (ARPS) is exploited to estimate and correct the respiratory motion among the recovered images. The framework is tested on free-breathing simulated and in vivo 2D cardiac cine MRI data. Simulation results show improved structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and mean square error (MSE) with different acceleration factors for the proposed method. Experimental results also provide a comparison between k-t FOCUSS with MEMC and the proposed method.
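
    A toy compressed-sensing recovery using one common smooth l1 surrogate whose gradient is a hyperbolic tangent (the log-cosh penalty); the paper's exact surrogate and its MRI sampling operators are not reproduced here, and all problem sizes and constants are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m, k = 200, 80, 10                      # signal length, measurements, sparsity
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
        y = A @ x_true                                 # compressive measurements

        lam, beta, lr = 0.01, 50.0, 0.05
        x = np.zeros(n)
        for _ in range(3000):
            # gradient of 0.5*||Ax - y||^2 + (lam/beta)*sum(logcosh(beta*x));
            # the penalty's gradient lam*tanh(beta*x) smoothly approximates l1
            x -= lr * (A.T @ (A @ x - y) + lam * np.tanh(beta * x))

        print("max abs recovery error:", np.max(np.abs(x - x_true)))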

  9. Microdochium nivale and Microdochium majus in seed samples of Danish small grain cereals

    DEFF Research Database (Denmark)

    Nielsen, L. K.; Justesen, A. F.; Jensen, J. D.

    2013-01-01

    Microdochium nivale and Microdochium majus are two of the fungal species found in the Fusarium Head Blight (FHB) complex infecting small grain cereals. Quantitative real-time PCR assays were designed to separate the two Microdochium species based on the translation elongation factor 1a gene (TEF-1a) and used to analyse a total of 374 seed samples of wheat, barley, triticale, rye and oat sampled from farmers' fields across Denmark from 2003 to 2007. Both fungal species were detected in the five cereal species, but M. majus showed a higher prevalence than M. nivale in most years in all cereal species except rye, in which M. nivale represented a larger proportion of the biomass and was more prevalent than M. majus in some samples. Historical samples of wheat and barley from 1957 to 2000 similarly showed a strong prevalence of M. majus over M. nivale, indicating that M. majus has been the main...

  10. Slurry sampling high-resolution continuum source electrothermal atomic absorption spectrometry for direct beryllium determination in soil and sediment samples after elimination of SiO interference by least-squares background correction.

    Science.gov (United States)

    Husáková, Lenka; Urbanová, Iva; Šafránková, Michaela; Šídová, Tereza

    2017-12-01

    In this work a simple, efficient, and environmentally friendly method is proposed for the determination of Be in soil and sediment samples employing slurry sampling and high-resolution continuum source electrothermal atomic absorption spectrometry (HR-CS-ETAAS). Spectral effects originating from SiO species were identified and successfully corrected by means of a least-squares mathematical correction algorithm. A fractional factorial design was employed to assess the parameters affecting the analytical results and, in particular, to guide the development of the slurry preparation and the optimization of the measurement conditions. The effects of seven analytical variables, including particle size, concentration of glycerol (for stabilization) and of HNO3 (for analyte extraction), ultrasonic agitation for slurry homogenization, concentration of chemical modifier, and pyrolysis and atomization temperatures, were investigated by a replicated (n = 3) 2^(7-3) design. Under the optimized experimental conditions, the proposed method allowed the determination of Be with a detection limit of 0.016 mg/kg and a characteristic mass of 1.3 pg. Optimum results were obtained by preparing the slurries from 100 mg of sample with particle size < 54 µm and adding 25 mL of 20% w/w glycerol. The use of 1 µg Rh and 50 µg citric acid was found satisfactory for analyte stabilization. Accurate data were obtained with matrix-free calibration. The accuracy of the method was confirmed by analysis of two certified reference materials (NIST SRM 2702 Inorganics in Marine Sediment and IGI BIL-1 Baikal Bottom Silt) and by comparison of the results obtained for ten real samples by slurry sampling with those determined after microwave-assisted extraction by inductively coupled plasma time-of-flight mass spectrometry (TOF-ICP-MS). The reported method has a precision better than 7%. Copyright © 2017 Elsevier B.V. All rights reserved.
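
    A generic least-squares background correction sketch: fit a measured interferent reference structure to the sample spectrum and subtract the fitted structured background. The "SiO" structure below is synthetic, and real implementations typically exclude the analyte-line pixels from the fit.

        import numpy as np

        def lsbc(sample_spectrum, reference_spectra):
            """Least-squares background correction: fit interferent reference
            spectra to the sample spectrum and subtract the fitted background."""
            R = np.atleast_2d(reference_spectra).T          # pixels x references
            coef, *_ = np.linalg.lstsq(R, sample_spectrum, rcond=None)
            return sample_spectrum - R @ coef

        # Synthetic demo: a narrow analyte line on top of a scaled interferent.
        pix = np.arange(200)
        sio = np.exp(-0.5 * ((pix - 80) / 15.0) ** 2)       # stand-in SiO structure
        line = 0.05 * np.exp(-0.5 * ((pix - 100) / 1.5) ** 2)
        spec = line + 0.8 * sio
        print(lsbc(spec, sio).max())                        # ~ analyte peak height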

  11. Correcting for intra-experiment variation in Illumina BeadChip data is necessary to generate robust gene-expression profiles

    Directory of Open Access Journals (Sweden)

    van Hemert Jano I

    2010-02-01

    Background: Microarray technology is a popular means of producing whole genome transcriptional profiles; however, high cost and scarcity of mRNA have led many studies to be conducted based on the analysis of single samples. We exploit the design of the Illumina platform, specifically multiple arrays on each chip, to evaluate intra-experiment technical variation using repeated hybridisations of universal human reference RNA (UHRR) and duplicate hybridisations of primary breast tumour samples from a clinical study. Results: A clear batch-specific bias was detected in the measured expressions of both the UHRR and clinical samples. This bias was found to persist following standard microarray normalisation techniques. However, when mean-centering or empirical Bayes batch-correction methods (ComBat) were applied to the data, inter-batch variation in the UHRR and clinical samples was greatly reduced. Correlation between replicate UHRR samples improved by two orders of magnitude following batch-correction using ComBat (from 0.9833-0.9991 to 0.9997-0.9999), and the consistency of the gene-lists from the duplicate clinical samples increased from 11.6% in quantile-normalised data to 66.4% in batch-corrected data. The use of UHRR as an inter-batch calibrator provided a small additional benefit when used in conjunction with ComBat, further increasing the agreement between the two gene-lists, up to 74.1%. Conclusion: In the interests of practicality and cost, these results suggest that single samples can generate reliable data, but only after careful compensation for technical bias in the experiment. We recommend that investigators appreciate the propensity for such variation in the design stages of a microarray experiment and that the use of suitable correction methods become routine during the statistical analysis of the data.

  12. Correcting for intra-experiment variation in Illumina BeadChip data is necessary to generate robust gene-expression profiles.

    Science.gov (United States)

    Kitchen, Robert R; Sabine, Vicky S; Sims, Andrew H; Macaskill, E Jane; Renshaw, Lorna; Thomas, Jeremy S; van Hemert, Jano I; Dixon, J Michael; Bartlett, John M S

    2010-02-24

    Microarray technology is a popular means of producing whole genome transcriptional profiles, however high cost and scarcity of mRNA has led many studies to be conducted based on the analysis of single samples. We exploit the design of the Illumina platform, specifically multiple arrays on each chip, to evaluate intra-experiment technical variation using repeated hybridisations of universal human reference RNA (UHRR) and duplicate hybridisations of primary breast tumour samples from a clinical study. A clear batch-specific bias was detected in the measured expressions of both the UHRR and clinical samples. This bias was found to persist following standard microarray normalisation techniques. However, when mean-centering or empirical Bayes batch-correction methods (ComBat) were applied to the data, inter-batch variation in the UHRR and clinical samples were greatly reduced. Correlation between replicate UHRR samples improved by two orders of magnitude following batch-correction using ComBat (ranging from 0.9833-0.9991 to 0.9997-0.9999) and increased the consistency of the gene-lists from the duplicate clinical samples, from 11.6% in quantile normalised data to 66.4% in batch-corrected data. The use of UHRR as an inter-batch calibrator provided a small additional benefit when used in conjunction with ComBat, further increasing the agreement between the two gene-lists, up to 74.1%. In the interests of practicalities and cost, these results suggest that single samples can generate reliable data, but only after careful compensation for technical bias in the experiment. We recommend that investigators appreciate the propensity for such variation in the design stages of a microarray experiment and that the use of suitable correction methods become routine during the statistical analysis of the data.
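
    ComBat itself is an empirical Bayes procedure; the simpler mean-centering alternative named in the abstract fits in a few lines (gene-wise batch means removed per batch). The expression matrix below is simulated with an artificial batch shift.

        import numpy as np
        import pandas as pd

        def mean_center_batches(expr, batch):
            """Remove batch-specific means per gene (a simple alternative to ComBat).

            expr: genes x samples DataFrame; batch: per-sample batch labels.
            """
            centered = expr.copy()
            for b in set(batch):
                cols = [c for c, lab in zip(expr.columns, batch) if lab == b]
                centered[cols] = expr[cols].sub(expr[cols].mean(axis=1), axis=0)
            return centered

        expr = pd.DataFrame(np.random.default_rng(0).normal(8, 1, (5, 6)),
                            index=[f"g{i}" for i in range(5)],
                            columns=[f"s{j}" for j in range(6)])
        expr.iloc[:, 3:] += 0.8                    # simulated batch shift in chip 2
        print(mean_center_batches(expr, ["A"] * 3 + ["B"] * 3).round(2))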

  13. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set—Effect of Pasteurization

    Science.gov (United States)

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-01-01

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified. PMID:26927169

  14. Correction procedures for C-14 dates

    International Nuclear Information System (INIS)

    McKerrell, H.

    1975-01-01

    There are two quite separate criteria to satisfy before accepting as valid the corrections to C-14 dates which have been indicated for some years now by the bristlecone pine calibration. Firstly the correction figures have to be based upon all the available tree-ring data and derived in a manner that is mathematically sound, and secondly the correction figures have to produce accurate results on C-14 dates from archaeological test samples of known historical date, these covering as wide a period as possible. Neither of these basic prerequisites has yet been fully met. Thus the two-fold purpose of this paper is to bring together, and to compare with an independently based procedure, the various correction curves or tables that have been published up to Spring 1974, as well as to detail the correction results on reliable, historically dated Egyptian, Helladic and Minoan test samples from 3100 B.C. The nomenclature followed is strictly that adopted by the primary dating journal Radiocarbon, all C-14 dates quoted thus relate to the 5568 year half-life and the standard AD/BC system. (author)

  15. Measurement of phthalates in small samples of mammalian tissue

    International Nuclear Information System (INIS)

    Acott, P.D.; Murphy, M.G.; Ogborn, M.R.; Crocker, J.F.S.

    1987-01-01

    Di-(2-ethylhexyl)-phthalate (DEHP) is a phthalic acid ester that is used as a plasticizer in polyvinyl chloride products, many of which have widespread medical application. DEHP has been shown to be leached from products used for storage and delivery of blood transfusions during procedures such as plasmaphoresis, hemodialysis and open heart surgery. Results of studies in this laboratory have suggested that there is an association between the absorption and deposition of DEHP (and/or related chemicals) in the kidney and the acquired renal cystic disease (ACD) frequently seen in patients who have undergone prolonged dialysis treatment. In order to determine the relationship between the two, it has been necessary to establish a method for extracting and accurately quantitating minute amounts of these chemicals in small tissue samples. The authors have now established such a method using kidneys from normal rats and from a rat model for ACD

  16. A novel approach for small sample size family-based association studies: sequential tests.

    Science.gov (United States)

    Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan

    2011-08-01

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with those obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) into only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample sizes become usable for an accurate association analysis.
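
    The classic Wald SPRT underlying this approach, sketched for a Bernoulli stream: the log-likelihood ratio is accumulated against the standard boundaries log(beta/(1-alpha)) and log((1-beta)/alpha), and the "continue sampling" outcome is the third group the abstract describes. The hypothesized rates are arbitrary.

        import numpy as np

        def sprt_bernoulli(xs, p0, p1, alpha=0.05, beta=0.05):
            """Wald SPRT for H0: p = p0 vs H1: p = p1 on a Bernoulli stream."""
            a = np.log(beta / (1 - alpha))         # lower (accept H0) boundary
            b = np.log((1 - beta) / alpha)         # upper (accept H1) boundary
            llr = 0.0
            for x in xs:
                llr += np.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
                if llr <= a:
                    return "accept H0"
                if llr >= b:
                    return "accept H1"
            return "continue sampling"             # not enough evidence yet

        rng = np.random.default_rng(7)
        print(sprt_bernoulli(rng.random(200) < 0.65, p0=0.5, p1=0.65))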

  17. Power corrections to exclusive processes in QCD

    Energy Technology Data Exchange (ETDEWEB)

    Mankiewicz, Lech

    2002-02-01

    In practice, the applicability of the twist expansion crucially depends on the magnitude of the power corrections to the leading-twist amplitude. I illustrate this point by considering explicit examples of two hard exclusive processes in QCD. In the case of the {gamma}{sup *}{gamma} {yields} {pi}{pi} amplitude, power corrections are small enough that it should be possible to describe current experimental data by the leading-twist QCD prediction. The photon helicity-flip amplitude in DVCS on a nucleon receives large kinematical power corrections which screen the leading-twist prediction up to large values of the hard photon virtuality.

  18. Fluorescence correction in electron probe microanalysis

    International Nuclear Information System (INIS)

    Castellano, Gustavo; Riveros, J.A.

    1987-01-01

    In this work, several expressions for characteristic fluorescence corrections are computed for a compilation of experimental determinations on standard samples. Since this correction does not take significant values, the performance of the different models is nearly the same; this fact suggests the use of the simplest available expression. (author)

  19. Correction for phylogeny, small number of observations and data redundancy improves the identification of coevolving amino acid pairs using mutual information

    DEFF Research Database (Denmark)

    Buslje, C.M.; Santos, J.; Delfino, J.M.

    2009-01-01

    Motivation: Mutual information (MI) theory is often applied to predict positional correlations in a multiple sequence alignment (MSA) to make possible the analysis of those positions structurally or functionally important in a given fold or protein family. Accurate identification of coevolving......-weighting techniques to reduce sequence redundancy, and low-count corrections to account for small numbers of observations in limited-size sequence families, can significantly improve the predictability of MI. The evaluation is made on large sets of both in silico-generated alignments as well as on biological sequence...
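
    A minimal sketch of the general idea: mutual information between two alignment columns with per-sequence weights and a pseudocount ("low-count") correction. The exact weighting and correction schemes of the cited work may differ.

        import math

        def mutual_information(col_a, col_b, weights=None, pseudocount=0.05):
            """MI between two MSA columns given as equal-length strings."""
            n = len(col_a)
            weights = weights or [1.0] * n
            total = sum(weights)
            alphabet = sorted(set(col_a) | set(col_b))
            k = len(alphabet)
            fa = {a: pseudocount for a in alphabet}
            fb = {b: pseudocount for b in alphabet}
            fab = {(a, b): pseudocount / k for a in alphabet for b in alphabet}
            for a, b, w in zip(col_a, col_b, weights):
                fa[a] += w
                fb[b] += w
                fab[(a, b)] += w
            z = total + k * pseudocount   # same normalizer for all tables
            mi = 0.0
            for (a, b), f in fab.items():
                p_ab, p_a, p_b = f / z, fa[a] / z, fb[b] / z
                mi += p_ab * math.log(p_ab / (p_a * p_b))
            return mi

        # perfectly covarying columns give a clearly positive MI
        print(mutual_information("AAACCG", "TTTGGA"))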

  20. Advanced computer-controlled automatic alpha-beta air sample counter

    International Nuclear Information System (INIS)

    Howell, W.P.; Bruinekool, D.J.; Stapleton, E.E.

    1983-01-01

    An improved computer-controlled automatic alpha-beta air sample counter was developed, based upon an earlier automatic air sample counter design. The system consists of an automatic sample changer, an electronic counting system utilizing a large silicon diode detector, a small desk-type microcomputer, a high-speed matrix printer, and the necessary data interfaces. The system is operated by commands from the keyboard and programs stored on magnetic tape cassettes. The programs provide for background counting, a χ² test, radon subtraction, and sample counting for sample periods of one day to one week. Output data are printed by the matrix printer on standard multifold paper. The data output includes gross beta, gross alpha, and plutonium results. Data are automatically corrected for background, counter efficiency, and, in the gross alpha and plutonium channels, for the presence of radon
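
    A minimal sketch of the per-channel corrections such a counter applies: subtract background, divide by counter efficiency, and, in the alpha and plutonium channels, subtract the radon contribution. All numbers are illustrative.

        def net_rate(gross_cpm, background_cpm, efficiency, radon_cpm=0.0):
            """Background-, efficiency- and radon-corrected count rate."""
            return (gross_cpm - background_cpm - radon_cpm) / efficiency

        # gross alpha channel with a radon correction applied
        print(net_rate(gross_cpm=135.0, background_cpm=12.0,
                       efficiency=0.28, radon_cpm=40.0))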

  1. Application of bias correction methods to improve U3Si2 sample preparation for quantitative analysis by WDXRF

    International Nuclear Information System (INIS)

    Scapin, Marcos A.; Guilhen, Sabine N.; Azevedo, Luciana C. de; Cotrim, Marycel E.B.; Pires, Maria Ap. F.

    2017-01-01

    The determination of silicon (Si), total uranium (U) and impurities in uranium-silicide (U 3 Si 2 ) samples by wavelength dispersion X-ray fluorescence technique (WDXRF) has been already validated and is currently implemented at IPEN's X-Ray Fluorescence Laboratory (IPEN-CNEN/SP) in São Paulo, Brazil. Sample preparation requires the use of approximately 3 g of H 3 BO 3 as sample holder and 1.8 g of U 3 Si 2 . However, because boron is a neutron absorber, this procedure precludes U 3 Si 2 sample's recovery, which, in time, considering routinely analysis, may account for significant unusable uranium waste. An estimated average of 15 samples per month are expected to be analyzed by WDXRF, resulting in approx. 320 g of U 3 Si 2 that would not return to the nuclear fuel cycle. This not only impacts in production losses, but generates another problem: radioactive waste management. The purpose of this paper is to present the mathematical models that may be applied for the correction of systematic errors when H 3 BO 3 sample holder is substituted by cellulose-acetate {[C 6 H 7 O 2 (OH) 3-m (OOCCH 3 )m], m = 0∼3}, thus enabling U 3 Si 2 sample’s recovery. The results demonstrate that the adopted mathematical model is statistically satisfactory, allowing the optimization of the procedure. (author)

  2. Radiometric Correction of Multitemporal Hyperspectral Uas Image Mosaics of Seedling Stands

    Science.gov (United States)

    Markelin, L.; Honkavaara, E.; Näsi, R.; Viljanen, N.; Rosnell, T.; Hakala, T.; Vastaranta, M.; Koivisto, T.; Holopainen, M.

    2017-10-01

    Novel miniaturized multi- and hyperspectral imaging sensors on board unmanned aerial vehicles have recently shown great potential in various environmental monitoring and measuring tasks such as precision agriculture and forest management. These systems can be used to collect dense 3D point clouds and spectral information over small areas such as single forest stands or sample plots. Accurate radiometric processing and atmospheric correction are required when data sets from different dates and sensors, collected in varying illumination conditions, are combined. The performance of a novel radiometric block adjustment method, developed at the Finnish Geospatial Research Institute, is evaluated with a multitemporal hyperspectral data set of seedling stands collected during spring and summer 2016. Illumination conditions during the campaigns varied from bright to overcast. We use two different methods to produce homogeneous image mosaics and hyperspectral point clouds: image-wise relative correction, and image-wise relative correction with BRDF. Radiometric datasets are converted to reflectance using reference panels, and changes in reflectance spectra are analysed. The tested methods improved image mosaic homogeneity by 5 % to 25 %. The results show that the evaluated method can produce consistent reflectance mosaics and reflectance spectra shapes between different areas and dates.
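
    A minimal sketch of the reference-panel step mentioned above: fit a linear digital-number-to-reflectance mapping per band from panels of known reflectance ("empirical line" style), then apply it to image values. The panel numbers are invented, not from the cited campaign.

        import numpy as np

        panel_dn = np.array([420.0, 1310.0, 2480.0])      # measured DNs, one band
        panel_reflectance = np.array([0.05, 0.25, 0.50])  # certified reflectances

        gain, offset = np.polyfit(panel_dn, panel_reflectance, 1)

        def dn_to_reflectance(dn):
            """Convert raw digital numbers to reflectance for this band."""
            return gain * np.asarray(dn) + offset

        print(dn_to_reflectance([800.0, 2000.0]))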

  3. RADIOMETRIC CORRECTION OF MULTITEMPORAL HYPERSPECTRAL UAS IMAGE MOSAICS OF SEEDLING STANDS

    Directory of Open Access Journals (Sweden)

    L. Markelin

    2017-10-01

    Full Text Available Novel miniaturized multi- and hyperspectral imaging sensors on board unmanned aerial vehicles have recently shown great potential in various environmental monitoring and measuring tasks such as precision agriculture and forest management. These systems can be used to collect dense 3D point clouds and spectral information over small areas such as single forest stands or sample plots. Accurate radiometric processing and atmospheric correction are required when data sets from different dates and sensors, collected in varying illumination conditions, are combined. The performance of a novel radiometric block adjustment method, developed at the Finnish Geospatial Research Institute, is evaluated with a multitemporal hyperspectral data set of seedling stands collected during spring and summer 2016. Illumination conditions during the campaigns varied from bright to overcast. We use two different methods to produce homogeneous image mosaics and hyperspectral point clouds: image-wise relative correction, and image-wise relative correction with BRDF. Radiometric datasets are converted to reflectance using reference panels, and changes in reflectance spectra are analysed. The tested methods improved image mosaic homogeneity by 5 % to 25 %. The results show that the evaluated method can produce consistent reflectance mosaics and reflectance spectra shapes between different areas and dates.

  4. Refractive lenticule extraction (ReLEx) through a small incision (SMILE) for correction of myopia and myopic astigmatism: current perspectives

    Directory of Open Access Journals (Sweden)

    Ağca A

    2016-10-01

    Full Text Available Alper Ağca,1 Ahmet Demirok,2 Yusuf Yıldırım,1 Ali Demircan,1 Dilek Yaşa,1 Ceren Yeşilkaya,1 İrfan Perente,1 Muhittin Taşkapılı1 1Beyoğlu Eye Research and Training Hospital, 2Department of Ophthalmology, Istanbul Medeniyet University, Istanbul, Turkey Abstract: Small-incision lenticule extraction (SMILE) is an alternative to laser-assisted in situ keratomileusis (LASIK) and photorefractive keratectomy (PRK) for the correction of myopia and myopic astigmatism. SMILE can be performed for the treatment of myopia ≤-12 D and astigmatism ≤5 D. The technology is currently only available on the VisuMax femtosecond laser platform. It offers several advantages over LASIK and PRK; however, hyperopia treatment, topography-guided treatment, and cyclotorsion control are not available in the current platform. The working principles, potential advantages, and disadvantages are discussed in this review. Keywords: SMILE, small-incision lenticule extraction, femtosecond laser, laser in situ keratomileusis, corneal biomechanics

  5. Design and experimental testing of air slab caps which convert commercial electron diodes into dual purpose, correction-free diodes for small field dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Charles, P. H., E-mail: paulcharles111@gmail.com [Department of Radiation Oncology, Princess Alexandra Hospital, Ipswich Road, Woolloongabba, Brisbane, Queensland 4102, Australia and School of Chemistry, Physics and Mechanical Engineering, Queensland University of Technology, GPO Box 2434, Brisbane, Queensland 4001 (Australia); Cranmer-Sargison, G. [Department of Medical Physics, Saskatchewan Cancer Agency, 20 Campus Drive, Saskatoon, Saskatchewan S7L 3P6, Canada and College of Medicine, University of Saskatchewan, 107 Wiggins Road, Saskatoon, Saskatchewan S7N 5E5 (Canada); Thwaites, D. I. [Institute of Medical Physics, School of Physics, University of Sydney, New South Wales 2006 (Australia); Kairn, T. [School of Chemistry, Physics and Mechanical Engineering, Queensland University of Technology, GPO Box 2434, Brisbane, Queensland 4001, Australia and Genesis CancerCare Queensland, The Wesley Medical Centre, Suite 1, 40 Chasely Street, Auchenflower, Brisbane, Queensland 4066 (Australia); Crowe, S. B.; Langton, C. M.; Trapp, J. V. [School of Chemistry, Physics and Mechanical Engineering, Queensland University of Technology, GPO Box 2434, Brisbane, Queensland 4001 (Australia); Pedrazzini, G. [Genesis CancerCare Queensland, The Wesley Medical Centre, Suite 1, 40 Chasely Street, Auchenflower, Brisbane, Queensland 4066 (Australia); Aland, T.; Kenny, J. [Epworth Radiation Oncology, 89 Bridge Road, Richmond, Melbourne, Victoria 3121 (Australia)

    2014-10-15

    Purpose: Two diodes which do not require correction factors for small field relative output measurements are designed and validated using experimental methodology. This was achieved by adding an air layer above the active volume of the diode detectors, which canceled out the increase in response of the diodes in small fields relative to standard field sizes. Methods: Due to the increased density of silicon and other components within a diode, additional electrons are created. In very small fields, a very small air gap acts as an effective filter of electrons with a high angle of incidence. The aim was to design a diode that balanced these perturbations to give a response similar to a water-only geometry. Three thicknesses of air were placed at the proximal end of a PTW 60017 electron diode (PTWe) using an adjustable "air cap". A set of output ratios (OR_Det^fclin) for square field sizes of side length down to 5 mm was measured using each air thickness and compared to OR_Det^fclin measured using an IBA stereotactic field diode (SFD). The small-field correction factor k_Qclin,Qmsr^fclin,fmsr was transferred from the SFD to the PTWe diode and plotted as a function of air gap thickness for each field size. This enabled the optimal air gap thickness to be obtained by observing which thickness of air was required such that k_Qclin,Qmsr^fclin,fmsr was equal to 1.00 at all field sizes. A similar procedure was used to find the optimal air thickness required to make a modified Sun Nuclear EDGE detector (EDGEe) which is "correction-free" in small field relative dosimetry. In addition, the feasibility of experimentally transferring k_Qclin,Qmsr^fclin,fmsr

  6. A study of the dosimetry of small field photon beams used in intensity modulated radiation therapy in inhomogeneous media: Monte Carlo simulations, and algorithm comparisons and corrections

    International Nuclear Information System (INIS)

    Jones, Andrew Osler

    2004-01-01

    There is an increasing interest in the use of inhomogeneity corrections for lung, air, and bone in radiotherapy treatment planning. Traditionally, corrections based on physical density have been used. Modern algorithms use the electron density derived from CT images. Small fields are used in both conformal radiotherapy and IMRT; however, their beam characteristics in inhomogeneous media have not been extensively studied. This work compares traditional and modern treatment planning algorithms to Monte Carlo simulations in and near low-density inhomogeneities. Field sizes ranging from 0.5 cm to 5 cm in diameter are projected onto a phantom containing inhomogeneities, and depth dose curves are compared. Comparisons of the Dose Perturbation Factors (DPF) are presented as functions of density and field size. Dose Correction Factors (DCF), which scale the algorithms to the Monte Carlo data, are compared for each algorithm. Physical scaling algorithms such as Batho and Equivalent Pathlength (EPL) predict an increase in dose for small fields passing through lung tissue, where Monte Carlo simulations show a sharp dose drop. The physical model-based collapsed cone convolution (CCC) algorithm correctly predicts the dose drop, but does not accurately predict its magnitude. Because the model-based algorithms do not correctly account for the change in backscatter, the dose drop predicted by CCC occurs farther downstream compared to that predicted by the Monte Carlo simulations. Beyond the tissue inhomogeneity all of the algorithms studied predict dose distributions in close agreement with Monte Carlo simulations. Dose-volume relationships are important in understanding the effects of radiation to the lung. The dose within the lung is affected by a complex function of beam energy, lung tissue density, and field size. Dose algorithms vary in their abilities to correctly predict the dose to the lung tissue. A thorough analysis of the effects of density and field size on dose to the

  7. Safe and sensible preprocessing and baseline correction of pupil-size data.

    Science.gov (United States)

    Mathôt, Sebastiaan; Fabius, Jasper; Van Heusden, Elle; Van der Stigchel, Stefan

    2018-02-01

    Measurement of pupil size (pupillometry) has recently gained renewed interest from psychologists, but there is little agreement on how pupil-size data is best analyzed. Here we focus on one aspect of pupillometric analyses: baseline correction, i.e., analyzing changes in pupil size relative to a baseline period. Baseline correction is useful in experiments that investigate the effect of some experimental manipulation on pupil size. In such experiments, baseline correction improves statistical power by taking into account random fluctuations in pupil size over time. However, we show that baseline correction can also distort data if unrealistically small pupil sizes are recorded during the baseline period, which can easily occur due to eye blinks, data loss, or other distortions. Divisive baseline correction (corrected pupil size = pupil size/baseline) is affected more strongly by such distortions than subtractive baseline correction (corrected pupil size = pupil size - baseline). We discuss the role of baseline correction as a part of preprocessing of pupillometric data, and make five recommendations: (1) before baseline correction, perform data preprocessing to mark missing and invalid data, but assume that some distortions will remain in the data; (2) use subtractive baseline correction; (3) visually compare your corrected and uncorrected data; (4) be wary of pupil-size effects that emerge faster than the latency of the pupillary response allows (within ±220 ms after the manipulation that induces the effect); and (5) remove trials on which baseline pupil size is unrealistically small (indicative of blinks and other distortions).
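
    The two correction schemes, and recommendation (5), are easy to state in code. A minimal sketch assuming pupil size in millimetres sampled as rows of trials; the baseline window length and validity threshold are arbitrary examples.

        import numpy as np

        def baseline_correct(trials, baseline_window=10,
                             method="subtractive", min_baseline=2.0):
            """trials: array (n_trials, n_samples) of pupil size in mm."""
            baselines = trials[:, :baseline_window].mean(axis=1)
            valid = baselines >= min_baseline   # drop blink-distorted baselines
            trials, baselines = trials[valid], baselines[valid]
            if method == "subtractive":         # recommended
                return trials - baselines[:, None]
            return trials / baselines[:, None]  # divisive, for comparison

        rng = np.random.default_rng(0)
        data = 3.0 + 0.1 * rng.standard_normal((5, 100))
        data[2, :10] = 0.2                      # a blink during one baseline
        print(baseline_correct(data).shape)     # (4, 100): bad trial removed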

  8. Report of the advisory group meeting on elemental analysis of extremely small samples

    International Nuclear Information System (INIS)

    2002-01-01

    This publication contains a summary of the discussions held at the meeting, with brief descriptions and comparative characteristics of the most common nuclear analytical techniques used for the analysis of very small samples, as well as the conclusions of the meeting. Some aspects of reference materials and quality control are also discussed. The publication also contains the individual contributions made by the participants, each of these papers having been provided with an abstract and indexed separately

  9. The cell pattern correction through design-based metrology

    Science.gov (United States)

    Kim, Yonghyeon; Lee, Kweonjae; Chang, Jinman; Kim, Taeheon; Han, Daehan; Lee, Kyusun; Hong, Aeran; Kang, Jinyoung; Choi, Bumjin; Lee, Joosung; Yeom, Kyehee; Lee, Jooyoung; Hong, Hyeongsun; Lee, Kyupil; Jin, Gyoyoung

    2015-03-01

    Starting with the sub-2X nm node, the process window becomes smaller and tighter than before. A pattern-related error budget is required for accurate critical-dimension control of cell layers. Lithography has therefore faced various difficulties, such as anomalous distributions, overlay error, patterning difficulty, etc. The distribution of cell patterns and overlay management are the most important factors in the DRAM field. Our experience has been that fatal risk is caused by the patterns located in the tail of the distribution. The overlay also induces various defect sources and misalignment issues. Even though we knew these elements were important, we could not classify the defect types of cell patterns, because there was no way to gather massive small-pattern CD samples in a cell unit block and to compare the layout with cell patterns by CD-SEM. The CD-SEM is used to gather these data at high resolution, but it takes a long time to inspect and extract data because it measures a small FOV (field of view). However, the NGR (e-beam tool) provides high speed with a large FOV and high resolution. It also makes it possible to measure accurate overlay between the target layout and the cell patterns, because it provides DBM (design-based metrology). Using the massive measured data, we extract persuasive results by applying various analysis techniques, such as cell distribution and defect analysis, pattern overlay error correction, etc. We introduce how to correct cell patterns by using DBM measurements and new analysis methods.

  10. Universality of quantum gravity corrections.

    Science.gov (United States)

    Das, Saurya; Vagenas, Elias C

    2008-11-28

    We show that the existence of a minimum measurable length and the related generalized uncertainty principle (GUP), predicted by theories of quantum gravity, influence all quantum Hamiltonians. Thus, they predict quantum gravity corrections to various quantum phenomena. We compute such corrections to the Lamb shift, the Landau levels, and the tunneling current in a scanning tunneling microscope. We show that these corrections can be interpreted in two ways: (a) either that they are exceedingly small, beyond the reach of current experiments, or (b) that they predict upper bounds on the quantum gravity parameter in the GUP, compatible with experiments at the electroweak scale. Thus, more accurate measurements in the future should either be able to test these predictions, or further tighten the above bounds and predict an intermediate length scale between the electroweak and the Planck scale.

  11. SUSY-QCD corrections to Higgs boson production at hadron colliders

    International Nuclear Information System (INIS)

    Djouadi, A.; Spira, M.

    1999-12-01

    We analyze the next-to-leading order SUSY-QCD corrections to the production of Higgs particles at hadron colliders in supersymmetric extensions of the standard model. Besides the standard QCD corrections due to gluon exchange and emission, genuine supersymmetric corrections due to the virtual exchange of squarks and gluinos are present. At both the Tevatron and the LHC, these corrections are found to be small in the Higgs-strahlung, Drell-Yan-like Higgs pair production and vector boson fusion processes. (orig.)

  12. Corrective Action Investigation Plan for Corrective Action Unit 137: Waste Disposal Sites, Nevada Test Site, Nevada, Rev. No.:0

    Energy Technology Data Exchange (ETDEWEB)

    Wickline, Alfred

    2005-12-01

    This Corrective Action Investigation Plan (CAIP) contains project-specific information including facility descriptions, environmental sample collection objectives, and criteria for conducting site investigation activities at Corrective Action Unit (CAU) 137: Waste Disposal Sites. This CAIP has been developed in accordance with the "Federal Facility Agreement and Consent Order" (FFACO) (1996) that was agreed to by the State of Nevada, the U.S. Department of Energy (DOE), and the U.S. Department of Defense. Corrective Action Unit 137 contains sites that are located in Areas 1, 3, 7, 9, and 12 of the Nevada Test Site (NTS), which is approximately 65 miles (mi) northwest of Las Vegas, Nevada (Figure 1-1). Corrective Action Unit 137 is comprised of the eight corrective action sites (CASs) shown on Figure 1-1 and listed below: (1) CAS 01-08-01, Waste Disposal Site; (2) CAS 03-23-01, Waste Disposal Site; (3) CAS 03-23-07, Radioactive Waste Disposal Site; (4) CAS 03-99-15, Waste Disposal Site; (5) CAS 07-23-02, Radioactive Waste Disposal Site; (6) CAS 09-23-07, Radioactive Waste Disposal Site; (7) CAS 12-08-01, Waste Disposal Site; and (8) CAS 12-23-07, Waste Disposal Site. The Corrective Action Investigation (CAI) will include field inspections, radiological surveys, geophysical surveys, sampling of environmental media, analysis of samples, and assessment of investigation results, where appropriate. Data will be obtained to support corrective action alternative evaluations and waste management decisions. The CASs in CAU 137 are being investigated because hazardous and/or radioactive constituents may be present in concentrations that could potentially pose a threat to human health and the environment. Existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives for the CASs. Additional information will be generated by conducting a CAI before evaluating and selecting

  13. Evaluation applications of instrument calibration research findings in psychology for very small samples

    Science.gov (United States)

    Fisher, W. P., Jr.; Petry, P.

    2016-11-01

    Many published research studies document item calibration invariance across samples using Rasch's probabilistic models for measurement. A new approach to outcomes evaluation for very small samples was employed for two workshop series focused on stress reduction and joyful living, conducted for health system employees and caregivers since 2012. Rasch-calibrated self-report instruments measuring depression, anxiety and stress, and the joyful living effects of mindfulness behaviors were identified in peer-reviewed journal articles. Items from one instrument were modified for use with a US population, other items were simplified, and some new items were written. Participants provided ratings of their depression, anxiety and stress, and the effects of their mindfulness behaviors before and after each workshop series. The numbers of participants providing both pre- and post-workshop data were low (16 and 14). Analysis of these small data sets produces results showing that, with some exceptions, the item hierarchies defining the constructs retained the same invariant profiles they had exhibited in the published research (correlations, not disattenuated, ranging from 0.85 to 0.96). In addition, comparisons of the pre- and post-workshop measures for the three constructs showed substantively and statistically significant changes. Implications for program evaluation comparisons, quality improvement efforts, and the organization of communications concerning outcomes in clinical fields are explored.

  14. Mass amplifying probe for sensitive fluorescence anisotropy detection of small molecules in complex biological samples.

    Science.gov (United States)

    Cui, Liang; Zou, Yuan; Lin, Ninghang; Zhu, Zhi; Jenkins, Gareth; Yang, Chaoyong James

    2012-07-03

    Fluorescence anisotropy (FA) is a reliable and excellent choice for fluorescence sensing. One of the key factors influencing the FA value for any molecule is the molar mass of the molecule being measured. As a result, the FA method with functional nucleic acid aptamers has been limited to macromolecules such as proteins and is generally not applicable for the analysis of small molecules because their molecular masses are too small to produce observable FA changes. We report here a molecular mass amplifying strategy to construct anisotropy aptamer probes for small molecules. The probe is designed in such a way that only when a target molecule binds to the probe does it activate its binding ability to an anisotropy amplifier (a high molecular mass molecule such as a protein), thus significantly increasing the molecular mass and FA value of the probe/target complex. Specifically, a mass amplifying probe (MAP) consists of a targeting aptamer domain against a target molecule and a molecular mass amplifying aptamer domain for the amplifier protein. The probe is initially rendered inactive by a small blocking strand partially complementary to both the target aptamer and the amplifier protein aptamer, so that the mass amplifying aptamer domain does not bind to the amplifier protein unless the probe has been activated by the target. In this way, we prepared two probes, each consisting of a target aptamer (against ATP and cocaine, respectively), a thrombin (mass amplifier) aptamer, and a fluorophore. Both probes worked well against their corresponding small molecule targets, and the detection limits for ATP and cocaine were 0.5 μM and 0.8 μM, respectively. More importantly, because FA is less affected by environmental interferences, ATP in cell media and cocaine in urine were directly detected without any tedious sample pretreatment. Our results established that our molecular mass amplifying strategy can be used to design aptamer probes for rapid, sensitive, and selective

  15. 40 CFR Appendix A to Subpart F of... - Sampling Plans for Selective Enforcement Auditing of Small Nonroad Engines

    Science.gov (United States)

    2010-07-01

    Appendix A to Subpart F of Part 90—Sampling Plans for Selective Enforcement Auditing of Small Nonroad Engines (40 CFR, Protection of Environment; spark-ignition engines at or below 19 kilowatts; Pt. 90, Subpt. F, App. A).

  16. Corrective Action Investigation Plan for Corrective Action Unit 409: Other Waste Sites, Tonopah Test Range, Nevada (Rev. 0)

    International Nuclear Information System (INIS)

    2000-01-01

    undisturbed locations near the area of the disposal pits; field screening samples for radiological constituents; analysis for geotechnical/hydrologic parameters of samples beneath the disposal pits; and bioassessment samples, if VOC or TPH contamination concentrations exceed field-screening levels. The results of this field investigation will support a defensible evaluation of corrective action alternatives in the corrective action decision document

  17. Small Scale Mixing Demonstration Batch Transfer and Sampling Performance of Simulated HLW - 12307

    Energy Technology Data Exchange (ETDEWEB)

    Jensen, Jesse; Townson, Paul; Vanatta, Matt [EnergySolutions, Engineering and Technology Group, Richland, WA, 99354 (United States)

    2012-07-01

    The ability to effectively mix, sample, certify, and deliver consistent batches of High Level Waste (HLW) feed from the Hanford Double Shell Tanks (DST) to the Waste Treatment Plant (WTP) has been recognized as a significant mission risk with potential to impact mission length and the quantity of HLW glass produced. At the end of 2009, DOE's Tank Operations Contractor, Washington River Protection Solutions (WRPS), awarded a contract to EnergySolutions to design, fabricate and operate a demonstration platform called the Small Scale Mixing Demonstration (SSMD) to establish pre-transfer sampling capacity and batch transfer performance data at two different scales. These data will be used to examine the baseline capacity of a tank mixed via rotational jet mixers to transfer consistent or bounding batches, and to provide scale-up information to predict full-scale operational performance. This information will in turn be used to define the baseline capacity of such a system to transfer and sample batches sent to WTP. The SSMD platform consists of 43'' and 120'' diameter clear acrylic test vessels, each equipped with two scaled jet mixer pump assemblies, and all supporting vessels, controls, services, and simulant make-up facilities. All tank internals have been modeled, including the air lift circulators (ALCs), the steam heating coil, and the radius between the wall and floor. The test vessels are set up to simulate the transfer of HLW out of a mixed tank and to collect a pre-transfer sample in a manner similar to the proposed baseline configuration. The collected material is submitted to an NQA-1 laboratory for chemical analysis. Previous work has been done to assess tank mixing performance at both scales. This work involved a combination of unique instruments to understand the three-dimensional distribution of solids using a combination of Coriolis meter measurements, in situ chord length distribution

  18. Correcting quantum errors with entanglement.

    Science.gov (United States)

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-10-20

    We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.

  19. [Monitoring microbiological safety of small systems of water distribution. Comparison of two sampling programs in a town in central Italy].

    Science.gov (United States)

    Papini, Paolo; Faustini, Annunziata; Manganello, Rosa; Borzacchi, Giancarlo; Spera, Domenico; Perucci, Carlo A

    2005-01-01

    To determine the frequency of sampling in small water distribution systems (distribution. We carried out two sampling programs to monitor the water distribution system in a town in Central Italy between July and September 1992; the Poisson distribution assumption implied 4 water samples, the assumption of a negative binomial distribution implied 21 samples. Coliform organisms were used as indicators of water safety. The network consisted of two pipe rings and two wells fed by the same water source. The number of summer customers varied considerably, from 3,000 to 20,000. The mean density was 2.33 coliforms/100 ml (sd = 5.29) for 21 samples and 3 coliforms/100 ml (sd = 6) for four samples. However, the hypothesis of homogeneity was rejected (p-value samples (beta = 0.24) than with 21 (beta = 0.05). For this small network, determining the sample size according to the heterogeneity hypothesis strengthens the statement that the water is drinkable compared with the homogeneity assumption.

  20. High order QED corrections in Z physics

    International Nuclear Information System (INIS)

    Marck, S.C. van der.

    1991-01-01

    In this thesis a number of calculations of higher order QED corrections are presented, all applying to the standard LEP/SLC processes e⁺e⁻ → f̄f, where f stands for any fermion. In cases where f ≠ e⁻, ν_e, the above process is only possible via annihilation of the incoming electron-positron pair. At LEP/SLC this mainly occurs via the production and subsequent decay of a Z boson, i.e. the cross section is heavily dominated by the Z resonance. These processes and the corrections to them, treated in a semi-analytical way, are discussed (ch. 2). In the case f = e⁻ (Bhabha scattering) the process can also occur via the exchange of a virtual photon in the t-channel. Since the latter contribution is dominant at small scattering angles, one has to exclude these angles if one is interested in Z physics. Having excluded that region, one has to recalculate all QED corrections (ch. 3). The techniques introduced there enable the calculation of the difference between forward and backward scattering, the forward-backward asymmetry, for the cases f ≠ e⁻, ν_e (ch. 4). At small scattering angles, where Bhabha scattering is dominated by photon exchange in the t-channel, this process is used in experiments to determine the luminosity of the e⁺e⁻ accelerator. Hence an accurate theoretical description of this process at small angles is of vital interest to the overall normalization of all measurements at LEP/SLC. Ch. 5 gives such a description in a semi-analytical way. The last two chapters discuss Monte Carlo techniques that are used for the cases f ≠ e⁻, ν_e. Ch. 6 describes the simulation of two-photon bremsstrahlung, which is a second order QED correction effect. The results are compared with results of the semi-analytical treatment in ch. 2. Finally ch. 7 reviews several techniques that have been used to simulate higher order QED corrections for the cases f ≠ e⁻, ν_e. (author). 132 refs.; 10 figs.; 16 tabs

  1. Tapping in synchrony with a perturbed metronome: the phase correction response to small and large phase shifts as a function of tempo.

    Science.gov (United States)

    Repp, Bruno H

    2011-01-01

    When tapping is paced by an auditory sequence containing small phase shift (PS) perturbations, the phase correction response (PCR) of the tap following a PS increases with the baseline interonset interval (IOI), leading eventually to overcorrection (B. H. Repp, 2008). Experiment 1 shows that this holds even for fixed-size PSs that become imperceptible as the IOI increases (here, from 400 to 1200 ms). Earlier research has also shown (but only for IOI=500 ms) that the PCR is proportionally smaller for large than for small PSs (B. H. Repp, 2002a, 2002b). Experiment 2 introduced large PSs and found smaller PCRs than in Experiment 1, at all of the same IOIs. In Experiments 3A and 3B, the author investigated whether the change in slope of the sigmoid function relating PCR and PS magnitudes occurs at a fixed absolute or relative PS magnitude across different IOIs (600, 1000, 1400 ms). The results suggest no clear answer; the exact shape of the function may depend on the range of PSs used in an experiment. Experiment 4 examined the PCR in the IOI range from 1000 to 2000 ms and found overcorrection throughout, but with the PCR increasing much more gradually than in Experiment 1. These results provide important new information about the phase correction process and pose challenges for models of sensorimotor synchronization, which presently cannot explain nonlinear PCR functions and overcorrection.
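
    For context, the standard linear phase-correction model that these findings challenge can be sketched in a few lines: each inter-tap interval equals the metronome period minus a fraction alpha of the preceding asynchrony, and alpha > 1 corresponds to the overcorrection reported at long IOIs. Parameter values are illustrative.

        import numpy as np

        def simulate_tapping(onsets, alpha=0.6, noise_sd=10.0, seed=1):
            """Simulate taps (ms) paced by metronome onsets (ms)."""
            rng = np.random.default_rng(seed)
            period = np.diff(onsets).mean()
            taps = [onsets[0]]
            for k in range(1, len(onsets)):
                asyn = taps[-1] - onsets[k - 1]   # previous tap-tone asynchrony
                taps.append(taps[-1] + period - alpha * asyn
                            + rng.normal(0, noise_sd))
            return np.asarray(taps)

        metronome = np.arange(0, 20_000, 500.0)   # IOI = 500 ms
        taps = simulate_tapping(metronome)
        print(np.std(taps - metronome))           # asynchrony variability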

  2. Attenuation correction method for single photon emission CT

    Energy Technology Data Exchange (ETDEWEB)

    Morozumi, Tatsuru; Nakajima, Masato [Keio Univ., Yokohama (Japan). Faculty of Science and Technology; Ogawa, Koichi; Yuta, Shinichi

    1983-10-01

    A correction method (the Modified Correction Matrix method) is proposed that implements iterative correction by exactly measuring the attenuation constant distribution in a test body, calculating a correction factor for every picture element, and then multiplying the image by these factors. Computer simulations comparing the results showed that the proposed method is more effective than the conventional correction matrix method, specifically when applied to test bodies in which the rate of change of the attenuation constant is large. Since actual measurement data always contain quantum noise, the noise was taken into account in the simulation; the correction effect remained large even under noise. To verify its clinical effectiveness, an experiment using an acrylic phantom was also carried out. As a result, the recovery of image quality in the parts with small attenuation constants was remarkable compared with the conventional method.
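
    For orientation, a first-order Chang-type correction, a simpler uniform-attenuation relative of the correction-matrix approach described above, multiplies each reconstructed pixel by the inverse of its mean attenuation factor over all projection angles. This sketch assumes a circular body with uniform attenuation, which is precisely the restriction the proposed method is designed to overcome.

        import numpy as np

        def chang_correction(image, mu, pixel_size, n_angles=64):
            """First-order correction for a uniform circular attenuator."""
            ny, nx = image.shape
            cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
            yy, xx = np.mgrid[0:ny, 0:nx]
            x = (xx - cx) * pixel_size
            y = (yy - cy) * pixel_size
            radius = min(cx, cy) * pixel_size
            factors = np.zeros_like(image, dtype=float)
            for th in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
                proj = x * np.cos(th) + y * np.sin(th)
                # distance from each pixel to the body edge along angle th
                d = -proj + np.sqrt(np.maximum(
                    radius**2 - (x**2 + y**2) + proj**2, 0))
                factors += np.exp(-mu * d)
            inside = x**2 + y**2 <= radius**2
            return np.where(inside,
                            image * n_angles / np.maximum(factors, 1e-12),
                            image)

        img = np.ones((65, 65))
        corrected = chang_correction(img, mu=0.15, pixel_size=0.4)
        print(round(float(corrected[32, 32]), 2))  # center boosted by exp(mu*R)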

  3. Sensitive power compensated scanning calorimeter for analysis of phase transformations in small samples

    International Nuclear Information System (INIS)

    Lopeandia, A.F.; Cerdo, Ll.; Clavaguera-Mora, M.T.; Arana, Leonel R.; Jensen, K.F.; Munoz, F.J.; Rodriguez-Viejo, J.

    2005-01-01

    We have designed and developed a sensitive scanning calorimeter for use with microgram or submicrogram thin film or powder samples. Semiconductor processing techniques are used to fabricate membrane-based microreactors with a small heat capacity of the addenda, 120 nJ/K at room temperature. At heating rates below 10 K/s the heat released or absorbed by the sample during a given transformation is compensated through a resistive Pt heater by a digital controller, so that the calorimeter works as a power-compensated device. Its use and dynamic sensitivity are demonstrated by analyzing the melting behavior of thin films of indium and high density polyethylene. Melting enthalpies in the range of 40-250 μJ for sample masses on the order of 1.5 μg have been measured with accuracy better than 5% at heating rates ∼0.2 K/s. The signal-to-noise ratio, limited by the electronic setup, is 200 nW

  4. Correction of time resolution of an ambulatory cardiac monitor (VEST)

    International Nuclear Information System (INIS)

    Kumita, Shin-ichiro; Nishimura, Tsunehiko; Hayashida, Kohei; Uehara, Toshiisa

    1990-01-01

    When an ambulatory cardiac monitor (VEST) is used in an exercise study, its time resolution is a very important factor. We evaluated the time resolution of the VEST using a pulsating cardiac balloon phantom. Four analyses were carried out: the no-smoothing (NS) method, the 3-point smoothing (3S) method, the short sampling interval (SS) method, and the digital filter (DF) method. Comparing |ΔEF| (|EF at HR 120 - EF at HR 60|) among the four analysis methods, |ΔEF| for the DF method was significantly smaller (NS: 3.58±3.01, 3S: 4.46±0.95, SS: 3.35±3.26, DF: 1.11±1.28%). We conclude that correction of the time resolution by digital filtering is necessary when the VEST is used during exercise. (author)

  5. An advanced computer-controlled automatic alpha-beta air sample counter

    International Nuclear Information System (INIS)

    Howell, W.P.; Bruinekool, D.J.; Stapleton, E.E.

    1984-01-01

    An improved computer-controlled automatic alpha-beta air sample counter was developed, based upon an earlier automatic air sample counter design. The system consists of an automatic sample changer, an electronic counting system utilizing a large silicon diode detector, a small desk-type microcomputer, a high-speed matrix printer and the necessary data interfaces. The system is operated by commands from the keyboard and programs stored on magnetic tape cassettes. The programs provide for background counting, a χ² test, radon subtraction and sample counting for sample periods of one day to one week. Output data are printed by the matrix printer on standard multifold paper. The data output includes gross beta, gross alpha and plutonium results. Data are automatically corrected for background, counter efficiency, and in the gross alpha and plutonium channels, for the presence of radon

  6. On Gluonic Corrections to the Mass Spectrum in a Relativistic Charmonium Model

    OpenAIRE

    Hitoshi, ITO; Department of Physics, Faculty of Science and Technology Kinki University

    1984-01-01

    It is shown that the gluonic correction in the innermost region is abnormally large in the ^1S_0 state, and that a cutoff parameter which suppresses this correction should be introduced. The retardation effect is estimated under this restriction on the gluonic correction. The correction due to pair creation is shown to be small except for the ^1S_0 and ^3P_0 states.

  7. Surface excitation correction of electron IMFP of selected polymers

    International Nuclear Information System (INIS)

    Gergely, G.; Orosz, G.T.; Lesiak, B.; Jablonski, A.; Toth, J.; Varga, D.

    2004-01-01

    Complete text of publication follows. The IMFP [1] of selected polymers (polythiophenes, polyanilines, polyethylene (PE) [2]) was determined by EPES [3] experiments, using Si, Ge and Ag (for PE) reference samples. The experiments were evaluated by Monte Carlo (MC) simulations [1] applying the NIST 64 (1996 and 2002) databases and the IMFP data of Tanuma and Gries [1]. The integrated experimental elastic peak ratios of sample and reference are different from those calculated by MC simulation [1]. The difference was attributed to the difference between the surface excitation parameters (SEP) [4] of the sample and the reference. The SEP parameters of the reference samples were taken from Chen and Werner. A new procedure was developed for the experimental determination of the SEP parameters of polymer samples. It is a trial-and-error method for optimising the SEP correction of the IMFP and the correction of the experimental elastic peak ratio [4]. Experiments made with an HSA spectrometer [5] covered the E = 0.2-2 keV energy range. The improvement with SEP correction appears in the reduced difference between the corrected and MC-calculated IMFPs, assuming the IMFPs of Gries and of Tanuma et al. [1] for the polymers and the standards, respectively. The experimental peak areas were corrected for the hydrogen peak. For the direct detection of hydrogen see Refs. [6] and [7]. Results obtained with the different NIST 64 databases and atomic potentials [8] are presented. This work was supported by the Hungarian Science Foundation OTKA: T037709 and T038016. (author)

  8. Atmospheric scattering corrections to solar radiometry

    International Nuclear Information System (INIS)

    Box, M.A.; Deepak, A.

    1979-01-01

    Whenever a solar radiometer is used to measure direct solar radiation, some diffuse sky radiation invariably enters the detector's field of view along with the direct beam. Therefore, the atmospheric optical depth obtained by the use of Bouguer's transmission law (also called the Beer-Lambert law), which is valid only for direct radiation, needs to be corrected to take account of the scattered radiation. In this paper we discuss the correction factors needed to account for the diffuse (i.e., singly and multiply scattered) radiation and the algorithms developed for retrieving aerosol size distributions from such measurements. For a radiometer with a small field of view (half-cone angle 0 ) and relatively clear skies (optical depths <0.4), it is shown that the total diffuse contribution represents approximately 1% of the total intensity. It is assumed here that the main contributions to the diffuse radiation within the detector's view cone are due to single scattering by molecules and aerosols and multiple scattering by molecules alone, aerosol multiple-scattering contributions being treated as negligibly small. The theory and the numerical results discussed in this paper will be helpful not only in making corrections to measured optical depth data but also in designing improved solar radiometers

  9. Corrective Action Investigation Plan for Corrective Action Unit 166: Storage Yards and Contaminated Materials, Nevada Test Site, Nevada, Rev. No.: 0

    Energy Technology Data Exchange (ETDEWEB)

    David Strand

    2006-06-01

    Corrective Action Unit 166 is located in Areas 2, 3, 5, and 18 of the Nevada Test Site, which is 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit (CAU) 166 is comprised of the seven Corrective Action Sites (CASs) listed below: (1) 02-42-01, Cond. Release Storage Yd - North; (2) 02-42-02, Cond. Release Storage Yd - South; (3) 02-99-10, D-38 Storage Area; (4) 03-42-01, Conditional Release Storage Yard; (5) 05-19-02, Contaminated Soil and Drum; (6) 18-01-01, Aboveground Storage Tank; and (7) 18-99-03, Wax Piles/Oil Stain. These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. Additional information will be obtained by conducting a corrective action investigation (CAI) before evaluating corrective action alternatives and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on February 28, 2006, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office; Stoller-Navarro Joint Venture; and Bechtel Nevada. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 166. Appendix A provides a detailed discussion of the DQO methodology and the DQOs specific to each CAS. The scope of the CAI for CAU 166 includes the following activities: (1) Move surface debris and/or materials, as needed, to facilitate sampling. (2) Conduct radiological surveys. (3) Perform field screening. (4) Collect and submit environmental samples for laboratory analysis to determine if

  10. Measurement of double differential cross sections of charged particle emission reactions by incident DT neutrons. Correction for energy loss of charged particle in sample materials

    International Nuclear Information System (INIS)

    Takagi, Hiroyuki; Terada, Yasuaki; Murata, Isao; Takahashi, Akito

    2000-01-01

    In the measurement of charged-particle emission spectra induced by neutrons, correcting for the energy loss of the charged particles in the sample material is a very important inverse problem. To deal with this inverse problem, we have applied the Bayesian unfolding method to correct the energy loss and tested its performance. Although the method is very simple, the tests confirmed that its performance is not at all inferior to that of other methods, and it can therefore be a powerful tool for charged-particle spectrum measurements. (author)
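
    A minimal sketch of iterative Bayesian unfolding in the style of D'Agostini, one common reading of the "Bayesian unfolding method" named above; the response matrix is a toy energy-loss smearing, not the authors' detector model.

        import numpy as np

        def bayesian_unfold(measured, response, n_iter=20):
            """response[i, j] = P(measured bin i | true bin j)."""
            n_true = response.shape[1]
            prior = np.full(n_true, measured.sum() / n_true)  # flat prior
            eff = response.sum(axis=0)                        # efficiencies
            for _ in range(n_iter):
                folded = response @ prior                     # expected data
                ratio = np.divide(measured, folded,
                                  out=np.zeros_like(measured),
                                  where=folded > 0)
                prior = prior * (response.T @ ratio) / np.maximum(eff, 1e-12)
            return prior

        # toy 4-bin spectrum smeared downward in energy by the sample
        true = np.array([0.0, 10.0, 50.0, 100.0])
        R = np.array([[1.0, 0.3, 0.0, 0.0],
                      [0.0, 0.7, 0.3, 0.0],
                      [0.0, 0.0, 0.7, 0.3],
                      [0.0, 0.0, 0.0, 0.7]])
        print(np.round(bayesian_unfold(R @ true, R), 1))  # approaches `true`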

  11. Effective absorption correction for energy dispersive X-ray mapping in a scanning transmission electron microscope: analysing the local indium distribution in rough samples of InGaN alloy layers.

    Science.gov (United States)

    Wang, X; Chauvat, M-P; Ruterana, P; Walther, T

    2017-12-01

    We have applied our previous method of self-consistent k*-factors for absorption correction in energy-dispersive X-ray spectroscopy to quantify the indium content in X-ray maps of thick compound InGaN layers. The method allows us to quantify the indium concentration without measuring the sample thickness, density or beam current, and works even if there is a drastic local thickness change due to sample roughness or preferential thinning. The method is shown to select, point-by-point in a two-dimensional spectrum image or map, the k*-factor from the local Ga K/L intensity ratio that is most appropriate for the corresponding sample geometry, demonstrating it is not the sample thickness measured along the electron beam direction but the optical path length the X-rays have to travel through the sample that is relevant for the absorption correction.
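
    A minimal sketch of the self-consistent k*-factor idea: the local Ga K/L intensity ratio indexes a pre-computed table of absorption-corrected k*-factors, which is then used in a Cliff-Lorimer-style ratio, pixel by pixel. The table values are invented placeholders, not calibration data from the cited work.

        import numpy as np

        # hypothetical calibration: Ga K/L ratio vs. effective k*-factor
        ga_kl_ratio_grid = np.array([1.0, 2.0, 4.0, 8.0])
        k_star_grid = np.array([1.9, 1.7, 1.5, 1.4])

        def indium_fraction(in_l_counts, ga_k_counts, ga_l_counts):
            """In/(In+Ga) fraction with a locally selected k*-factor."""
            k_star = np.interp(ga_k_counts / ga_l_counts,
                               ga_kl_ratio_grid, k_star_grid)
            ratio = k_star * in_l_counts / (ga_k_counts + ga_l_counts)
            return ratio / (1.0 + ratio)

        print(indium_fraction(in_l_counts=300.0,
                              ga_k_counts=900.0, ga_l_counts=300.0))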

  12. Corrective Action Investigation Plan for Corrective Action Unit 145: Wells and Storage Holes, Nevada Test Site, Nevada, Rev. No.: 0

    Energy Technology Data Exchange (ETDEWEB)

    David A. Strand

    2004-09-01

    This Corrective Action Investigation Plan (CAIP) contains project-specific information for conducting site investigation activities at Corrective Action Unit (CAU) 145: Wells and Storage Holes. Information presented in this CAIP includes facility descriptions, environmental sample collection objectives, and criteria for the selection and evaluation of environmental samples. Corrective Action Unit 145 is located in Area 3 of the Nevada Test Site, which is 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit 145 is comprised of the six Corrective Action Sites (CASs) listed below: (1) 03-20-01, Core Storage Holes; (2) 03-20-02, Decon Pad and Sump; (3) 03-20-04, Injection Wells; (4) 03-20-08, Injection Well; (5) 03-25-01, Oil Spills; and (6) 03-99-13, Drain and Injection Well. These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. Additional information will be obtained by conducting a corrective action investigation (CAI) prior to evaluating corrective action alternatives and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document. One conceptual site model with three release scenario components was developed for the six CASs to address all releases associated with the site. The sites will be investigated based on data quality objectives (DQOs) developed on June 24, 2004, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office; Stoller-Navarro Joint Venture; and Bechtel Nevada. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 145.

  13. The effects of sampling bias and model complexity on the predictive performance of MaxEnt species distribution models.

    Science.gov (United States)

    Syfert, Mindy M; Smith, Matthew J; Coomes, David A

    2013-01-01

    Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences, using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness and essential role of sampling bias correction within MaxEnt.
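
    A minimal sketch of the bias-grid construction step: estimate relative survey effort from the density of all georeferenced records, smooth it, and normalize; a grid like this is what gets supplied to MaxEnt. The coordinates are random stand-ins, and this is not MaxEnt itself.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(42)
        # toy "all vascular plants" records, clustered in one corner
        records = rng.normal(loc=[0.3, 0.3], scale=0.1, size=(2000, 2))

        effort, _, _ = np.histogram2d(records[:, 0], records[:, 1],
                                      bins=50, range=[[0, 1], [0, 1]])
        bias = gaussian_filter(effort, sigma=2.0)
        bias = bias / bias.max() + 1e-6   # relative sampling effort in (0, 1]
        print(bias.shape, float(bias.max()))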

  14. Using the multi-objective optimization replica exchange Monte Carlo enhanced sampling method for protein-small molecule docking.

    Science.gov (United States)

    Wang, Hongrui; Liu, Hongwei; Cai, Leixin; Wang, Caixia; Lv, Qiang

    2017-07-10

    In this study, we extended the replica exchange Monte Carlo (REMC) sampling method to protein-small molecule docking conformational prediction using RosettaLigand. In contrast to the traditional Monte Carlo (MC) and REMC sampling methods, the extended methods use multi-objective optimization Pareto front information to facilitate the selection of replicas for exchange. The Pareto front information generated when selecting lower-energy conformations as representative structure replicas facilitates convergence over the available conformational space, including available near-native structures. Furthermore, our approach directly provides min-min scenario Pareto optimal solutions, as well as a hybrid of the min-min and max-min scenario Pareto optimal solutions with lower-energy conformations, for use as structure templates in the REMC sampling method. These methods were validated through a thorough analysis of a benchmark data set containing 16 benchmark test cases. An in-depth comparison between the MC, REMC, multi-objective optimization-REMC (MO-REMC), and hybrid MO-REMC (HMO-REMC) sampling methods was performed to illustrate the differences between the four conformational search strategies. Our findings demonstrate that the MO-REMC and HMO-REMC conformational sampling methods are powerful approaches for obtaining protein-small molecule docking conformational predictions based on the binding energy of complexes in RosettaLigand.
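
    A minimal sketch of the plain REMC baseline that the study extends: several Metropolis chains at different temperatures periodically attempt to swap configurations. The toy one-dimensional energy stands in for a docking score; the Pareto-front replica selection of MO-REMC is not reproduced here.

        import math, random

        def energy(x):                  # toy rugged "binding energy"
            return (x - 2.0) ** 2 + math.sin(5.0 * x)

        def remc(temps=(0.1, 0.5, 2.0), steps=5000, seed=7):
            rng = random.Random(seed)
            xs = [rng.uniform(-5, 5) for _ in temps]
            for step in range(steps):
                for i, t in enumerate(temps):       # Metropolis move
                    cand = xs[i] + rng.gauss(0, 0.5)
                    de = energy(cand) - energy(xs[i])
                    if rng.random() < math.exp(min(0.0, -de / t)):
                        xs[i] = cand
                if step % 10 == 0:                  # neighbor swap attempt
                    i = rng.randrange(len(temps) - 1)
                    d = ((1 / temps[i] - 1 / temps[i + 1])
                         * (energy(xs[i]) - energy(xs[i + 1])))
                    if rng.random() < math.exp(min(0.0, d)):
                        xs[i], xs[i + 1] = xs[i + 1], xs[i]
            return xs

        print(min(remc(), key=energy))  # replica nearest the global minimum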

  15. Distribution load forecast with interactive correction of horizon loads

    International Nuclear Information System (INIS)

    Glamochanin, V.; Andonov, D.; Gagovski, I.

    1994-01-01

    This paper presents an interactive distribution load forecast application that performs distribution load forecasting with interactive correction of horizon loads. It consists of two major parts, implemented in Fortran and Visual Basic. The Fortran part is used for the forecast computations and consists of two methods: Load Transfer Coupling Curve Fitting (LTCCF) and Load Forecast Using Curve Shape Clustering (FUCSC). LTCCF is used to 'correct' data contaminated by load transfer among neighboring distribution areas. FUCSC uses curve-shape clustering to forecast the distribution loads of small areas; the forecast for each small area is obtained from the shape of the corresponding cluster curve. Comparison of the forecasted area loads with historical data is used as a tool for correcting the estimated horizon load. The Visual Basic part provides a flexible, interactive, user-friendly environment. (author). 5 refs., 3 figs

  16. A 2 × 2 taxonomy of multilevel latent contextual models: accuracy-bias trade-offs in full and partial error correction models.

    Science.gov (United States)

    Lüdtke, Oliver; Marsh, Herbert W; Robitzsch, Alexander; Trautwein, Ulrich

    2011-12-01

    In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data when estimating contextual effects are distinguished: unreliability that is due to measurement error and unreliability that is due to sampling error. The fact that studies may or may not correct for these 2 types of error can be translated into a 2 × 2 taxonomy of multilevel latent contextual models comprising 4 approaches: an uncorrected approach, partial correction approaches correcting for either measurement or sampling error (but not both), and a full correction approach that adjusts for both sources of error. It is shown mathematically and with simulated data that the uncorrected and partial correction approaches can result in substantially biased estimates of contextual effects, depending on the number of L1 individuals per group, the number of groups, the intraclass correlation, the number of indicators, and the size of the factor loadings. However, the simulation study also shows that partial correction approaches can outperform full correction approaches when the data provide only limited information in terms of the L2 construct (i.e., small number of groups, low intraclass correlation). A real-data application from educational psychology is used to illustrate the different approaches.
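
    The sampling-error half of the taxonomy has a compact quantitative core: the reliability of an observed group mean as an indicator of the latent group value grows with group size n and the intraclass correlation ICC(1), and correcting for sampling error amounts to disattenuating by this reliability. The sketch below uses the standard Spearman-Brown-type expression, not code from the article.

        def group_mean_reliability(icc1: float, n: int) -> float:
            """Reliability of a group mean aggregated from n L1 ratings."""
            return (n * icc1) / (1.0 + (n - 1) * icc1)

        # small groups with low ICC yield unreliable group means, the
        # setting where partial correction can outperform full correction
        for n in (5, 15, 50):
            print(n, round(group_mean_reliability(0.10, n), 3))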

  17. Quark mass correction to chiral separation effect and pseudoscalar condensate

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Er-dong [State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics,Chinese Academy of Sciences,Beijing 100190 (China); Kavli Institute of Theoretical Physics China, Chinese Academy of Sciences,Beijing 100190 (China); Lin, Shu [School of Physics and Astronomy, Sun Yat-Sen University,No 2 University Road, Zhuhai 519082 (China)

    2017-01-25

    We derived the analytic structure of the quark mass correction to the chiral separation effect (CSE) in the small mass regime. We confirmed this structure by a D3/D7 holographic model study in a finite density, finite magnetic field background. The quark mass correction to the CSE can be related to correlators of the pseudoscalar condensate, quark number density and quark condensate in the static limit. We found scaling relations of these correlators with spatial momentum in the small momentum regime; they characterize medium responses to an electric field, an inhomogeneous quark mass and a chiral shift. Beyond the small momentum regime, we found the existence of a normalizable mode, which possibly leads to the formation of a spiral phase. The normalizable mode exists beyond a critical magnetic field, whose magnitude decreases with quark chemical potential.

  18. SU-E-T-225: Correction Matrix for PinPoint Ionization Chamber for Dosimetric Measurements in the Newly Released Incise™ Multileaf Collimator Shaped Small Field for CyberKnife M6™ Machine

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Y; Li, T; Heron, D; Huq, M [University of Pittsburgh Cancer Institute and UPMC CancerCenter, Pittsburgh, PA (United States)

    2015-06-15

    Purpose: For small field dosimetry, such as measurements of output factors for cones or MLC-shaped irregular small fields, ion chambers often result in an underestimation of the dose, due to both the volume averaging effect and the lack of lateral charged particle equilibrium. This work presents a mathematical model of a correction matrix for a PTW PinPoint ionization chamber for dosimetric measurements made in the newly released Incise™ Multileaf Collimator fields of the CyberKnife M6™ machine. Methods: A correction matrix for a PTW 0.015 cc PinPoint ionization chamber was developed by modeling its 3D dose response in twelve cone-shaped circular fields created using the 5 mm, 7.5 mm, 10 mm, 12.5 mm, 15 mm, 20 mm, 25 mm, 30 mm, 35 mm, 40 mm, 50 mm and 60 mm cones of a CyberKnife M6™ machine. For each field size, hundreds of readings were recorded for every 2 mm chamber shift in the horizontal plane. The contribution of each dose pixel to a measurement point depended on the radial distance and the angle to the chamber axis. These readings were then compared with the theoretical dose obtained with a Monte Carlo calculation. A penalized least-squares optimization algorithm was developed to generate the correction matrix. After the parameter fitting, the mathematical model was validated on MLC-shaped irregular fields. Results: The optimization algorithm used for parameter fitting was stable and the resulting response factors were smooth in the spatial domain. After correction with the mathematical model, the chamber reading matched the calculation for all the tested fields to within 2%. Conclusion: A novel mathematical model has been developed for the PinPoint chamber for dosimetric measurements in small MLC-shaped irregular fields. The correction matrix depends on the detector, the treatment unit and the geometry of the setup. The model can be applied to non-standard composite fields and provides access to IMRT point dose validation.

  19. SU-E-T-225: Correction Matrix for PinPoint Ionization Chamber for Dosimetric Measurements in the Newly Released Incise™ Multileaf Collimator Shaped Small Field for CyberKnife M6™ Machine

    International Nuclear Information System (INIS)

    Zhang, Y; Li, T; Heron, D; Huq, M

    2015-01-01

    Purpose: For small field dosimetry, such as measurements of output factors for cones or MLC-shaped irregular small fields, ion chambers often result in an underestimation of the dose, due to both the volume averaging effect and the lack of lateral charged particle equilibrium. This work presents a mathematical model of a correction matrix for a PTW PinPoint ionization chamber for dosimetric measurements made in the newly released Incise™ Multileaf Collimator fields of the CyberKnife M6™ machine. Methods: A correction matrix for a PTW 0.015 cc PinPoint ionization chamber was developed by modeling its 3D dose response in twelve cone-shaped circular fields created using the 5 mm, 7.5 mm, 10 mm, 12.5 mm, 15 mm, 20 mm, 25 mm, 30 mm, 35 mm, 40 mm, 50 mm and 60 mm cones of a CyberKnife M6™ machine. For each field size, hundreds of readings were recorded for every 2 mm chamber shift in the horizontal plane. The contribution of each dose pixel to a measurement point depended on the radial distance and the angle to the chamber axis. These readings were then compared with the theoretical dose obtained with a Monte Carlo calculation. A penalized least-squares optimization algorithm was developed to generate the correction matrix. After the parameter fitting, the mathematical model was validated on MLC-shaped irregular fields. Results: The optimization algorithm used for parameter fitting was stable and the resulting response factors were smooth in the spatial domain. After correction with the mathematical model, the chamber reading matched the calculation for all the tested fields to within 2%. Conclusion: A novel mathematical model has been developed for the PinPoint chamber for dosimetric measurements in small MLC-shaped irregular fields. The correction matrix depends on the detector, the treatment unit and the geometry of the setup. The model can be applied to non-standard composite fields and provides access to IMRT point dose validation

  20. Large Sample Neutron Activation Analysis of Heterogeneous Samples

    International Nuclear Information System (INIS)

    Stamatelatos, I.E.; Vasilopoulou, T.; Tzika, F.

    2018-01-01

    A Large Sample Neutron Activation Analysis (LSNAA) technique was developed for non-destructive analysis of heterogeneous bulk samples. The technique incorporated collimated scanning and combined experimental measurements with Monte Carlo simulations to identify inhomogeneities in large volume samples and to correct for their effect on the interpretation of gamma-spectrometry data. Corrections were applied for the effects of neutron self-shielding, gamma-ray attenuation, geometrical factors and heterogeneous activity distribution within the sample. A benchmark experiment was performed to investigate the effect of heterogeneity on the accuracy of LSNAA. Moreover, a ceramic vase was analyzed as a whole, demonstrating the feasibility of the technique. The LSNAA results were compared against results obtained by INAA, and a satisfactory agreement between the two methods was observed. This study showed that LSNAA is a technique capable of performing accurate non-destructive, multi-elemental compositional analysis of heterogeneous objects. It also revealed the great potential of the technique for the analysis of precious objects and artefacts that need to be preserved intact and cannot be damaged for sampling purposes. (author)

  1. ANALYSIS AND CORRECTION OF SYSTEMATIC HEIGHT MODEL ERRORS

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-06-01

    The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, which is influenced by the satellite camera, the system calibration and the attitude registration. As standard these days, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required; in most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, for example those caused by a small base length, such an image orientation does not lead to the achievable accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, the Chinese stereo satellite ZiYuan-3 (ZY-3) in particular has a limited calibration accuracy and an attitude recording of just 4 Hz, which may not be satisfactory. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images, like detailed satellite orientation information, are not always available. There is a tendency toward systematic deformation in a Pléiades tri-stereo combination with small base length; the small base length magnifies small systematic errors in object space. Systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is the unsatisfactory leveling of height models, but low-frequency height deformations can also be seen. A tilt of the DHM can in theory be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition, a model deformation at GCP locations may prevent optimal DHM leveling. Supported by reference height models, better accuracy has been reached. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS

  2. Nonlinear correction to the longitudinal structure function at small x

    International Nuclear Information System (INIS)

    Boroun, G.R.

    2010-01-01

    We computed the longitudinal proton structure function F_L using the nonlinear Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (NLDGLAP) evolution equation approach at small x. For the gluon distribution, the nonlinear effects are related to the longitudinal structure function. As the very small-x behavior of the gluon distribution is obtained by solving the Gribov-Levin-Ryskin-Mueller-Qiu (GLR-MQ) evolution equation with the nonlinear shadowing term incorporated, we show that the strong rise corresponding to the linear QCD evolution equations can be tamed by screening effects. Consequently, the obtained longitudinal structure function shows a tamed growth at small x. We computed predictions for all details of the nonlinear longitudinal structure function in the kinematic range where it has been measured by the H1 Collaboration, and made comparisons with the second-order computation by Moch, Vermaseren and Vogt with input data from the MRST QCD fit. (orig.)

  3. Evaluating the biological potential in samples returned from planetary satellites and small solar system bodies: framework for decision making

    National Research Council Canada - National Science Library

    National Research Council Staff; Space Studies Board; Division on Engineering and Physical Sciences; National Research Council; National Academy of Sciences

    ... from Planetary Satellites and Small Solar System Bodies: Framework for Decision Making. Task Group on Sample Return from Small Solar System Bodies, Space Studies Board, Commission on Physical Sciences, Mathematics, and Applications, National Research Council. National Academy Press, Washington, D.C., 1998.

  4. A simple Bayesian approach to quantifying confidence level of adverse event incidence proportion in small samples.

    Science.gov (United States)

    Liu, Fang

    2016-01-01

    In both clinical development and post-marketing of a new therapy or a new treatment, incidence of an adverse event (AE) is always a concern. When sample sizes are small, large sample-based inferential approaches on an AE incidence proportion in a certain time period no longer apply. In this brief discussion, we introduce a simple Bayesian framework to quantify, in small sample studies and the rare AE case, (1) the confidence level that the incidence proportion of a particular AE p is over or below a threshold, (2) the lower or upper bounds on p with a certain level of confidence, and (3) the minimum required number of patients with an AE before we can be certain that p surpasses a specific threshold, or the maximum allowable number of patients with an AE after which we can no longer be certain that p is below a certain threshold, given a certain confidence level. The method is easy to understand and implement; the interpretation of the results is intuitive. This article also demonstrates the usefulness of simple Bayesian concepts when it comes to answering practical questions.
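
    The quantities described reduce to tail probabilities of a Beta posterior. A minimal sketch assuming a conjugate Beta(1, 1) (uniform) prior; the article's exact prior choice is not specified here, so treat the defaults as assumptions.

        from scipy.stats import beta

        def prob_ae_rate_exceeds(x, n, p0, a=1.0, b=1.0):
            # Posterior P(p > p0) after observing x AEs among n patients with a
            # Beta(a, b) prior; the posterior is Beta(a + x, b + n - x).
            return beta.sf(p0, a + x, b + n - x)

        # e.g., confidence that the AE rate exceeds 10% after 2 AEs in 20 patients
        print(prob_ae_rate_exceeds(2, 20, 0.10))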

  5. Small Sample Reactivity Measurements in the RRR/SEG Facility: Reanalysis using TRIPOLI-4

    Energy Technology Data Exchange (ETDEWEB)

    Hummel, Andrew [Idaho National Lab. (INL), Idaho Falls, ID (United States); Palmiotti, Guiseppe [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-08-01

    This work involved reanalyzing the RRR/SEG integral experiments performed at the Rossendorf facility in Germany throughout the 1970s and 80s. These small sample reactivity worth measurements were carried out using the pile oscillator technique for many different fission products, structural materials, and standards. The coupled fast-thermal system was designed such that the measurements would provide insight into elemental data, specifically the competing effects between neutron capture and scatter. Comparing the measured to calculated reactivity values can then provide adjustment criteria to ultimately improve nuclear data for fast reactor designs. Due to the extremely small reactivity effects measured (typically less than 1 pcm) and the specific heterogeneity of the core, the tool chosen for this analysis was TRIPOLI-4. This code allows for high fidelity 3-dimensional geometric modeling, and the most recent, unreleased version, is capable of exact perturbation theory.

  6. Robust Active Label Correction

    DEFF Research Database (Denmark)

    Kremer, Jan; Sha, Fei; Igel, Christian

    2018-01-01

    Active label correction addresses the problem of learning from input data for which noisy labels are available (e.g., from imprecise measurements or crowd-sourcing) and each true label can be obtained at a significant cost (e.g., through additional measurements or human experts). To minimize (...). To select labels for correction, we adopt the active learning strategy of maximizing the expected model change. We consider the change in regularized empirical risk functionals that use different pointwise loss functions for patterns with noisy and true labels, respectively. Different loss functions for the noisy data lead to different active label correction algorithms. If loss functions consider the label noise rates, these rates are estimated during learning, where importance weighting compensates for the sampling bias. We show empirically that viewing the true label as a latent variable and computing...

  7. Matching-to-sample by an echolocating dolphin (Tursiops truncatus).

    Science.gov (United States)

    Roitblat, H L; Penner, R H; Nachtigall, P E

    1990-01-01

    An adult male dolphin was trained to perform a three-alternative delayed matching-to-sample task while wearing eyecups to occlude its vision. Sample and comparison stimuli consisted of a small and a large PVC plastic tube, a water-filled stainless steel sphere, and a solid aluminum cone. Stimuli were presented under water and the dolphin was allowed to identify the stimuli through echolocation. The echolocation clicks emitted by the dolphin to each sample and each comparison stimulus were recorded and analyzed. Over 48 sessions of testing, choice accuracy averaged 94.5% correct. This high level of accuracy was apparently achieved by varying the number of echolocation clicks emitted to various stimuli. Performance appeared to reflect a preexperimental stereotyped search pattern that dictated the order in which comparison items were examined and a complex sequential-sampling decision process. A model for the dolphin's decision-making processes is described.

  8. Improvement of 137Cs analysis in small volume seawater samples using the Ogoya underground facility

    International Nuclear Information System (INIS)

    Hirose, K.; Komura, K.; Kanazawa University, Ishikawa; Aoyama, M.; Igarashi, Y.

    2008-01-01

    137Cs in seawater is one of the most powerful tracers of water motion. Large volumes of sample have been required for the determination of 137Cs in seawater. This paper describes improvements to the separation and purification of 137Cs from seawater, including purification of 137Cs using hexachloroplatinic acid in addition to ammonium phosphomolybdate (AMP) precipitation. As a result, we succeeded in determining 137Cs in seawater with a smaller sample volume of 10 liters by using ultra-low background gamma-spectrometry in the Ogoya underground facility. The 137Cs detection limit was about 0.1 mBq (counting time: 10⁶ s). The method is applied to determine 137Cs in small samples of South Pacific deep waters. (author)

  9. Modeling and Testing of Phase Transition-Based Deployable Systems for Small Body Sample Capture

    Science.gov (United States)

    Quadrelli, Marco; Backes, Paul; Wilkie, Keats; Giersch, Lou; Quijano, Ubaldo; Keim, Jason; Mukherjee, Rudranarayan

    2009-01-01

    This paper summarizes the modeling, simulation, and testing work related to the development of technology to investigate the potential that shape memory actuation has to provide mechanically simple and affordable solutions for delivering assets to a surface and for sample capture and return. We investigate the structural dynamics and controllability aspects of an adaptive beam carrying an end-effector which, by changing equilibrium phases is able to actively decouple the end-effector dynamics from the spacecraft dynamics during the surface contact phase. Asset delivery and sample capture and return are at the heart of several emerging potential missions to small bodies, such as asteroids and comets, and to the surface of large bodies, such as Titan.

  10. Hybrid image and blood sampling input function for quantification of small animal dynamic PET data

    International Nuclear Information System (INIS)

    Shoghi, Kooresh I.; Welch, Michael J.

    2007-01-01

    We describe and validate a hybrid image and blood sampling (HIBS) method to derive the input function for quantification of microPET mice data. The HIBS algorithm derives the peak of the input function from the image, which is corrected for recovery, while the tail is derived from 5 to 6 optimally placed blood sampling points. A Bezier interpolation algorithm is used to link the rightmost image peak data point to the leftmost blood sampling point. To assess the performance of HIBS, 4 mice underwent 60-min microPET imaging sessions following a 0.40-0.50 mCi bolus administration of 18FDG. In total, 21 blood samples (the blood-sampled plasma time-activity curve, bsPTAC) were obtained throughout the imaging session for comparison against the proposed HIBS method. MicroPET images were reconstructed using filtered back projection with a zoom of 2.75 on the heart. Volumetric regions of interest (ROIs) were composed by drawing circular ROIs 3 pixels in diameter on 3-4 transverse planes of the left ventricle. Performance was characterized by kinetic simulations in terms of bias in parameter estimates when bsPTAC and HIBS are used as input functions. The peak of the bsPTAC curve was distorted in comparison to the HIBS-derived curve due to temporal limitations and delays in blood sampling, which affected the rates of bidirectional exchange between plasma and tissue. The results highlight limitations in using bsPTAC. The HIBS method, however, yields consistent results, and is thus a substitute for bsPTAC

  11. The small sample uncertainty aspect in relation to bullwhip effect measurement

    DEFF Research Database (Denmark)

    Nielsen, Erland Hejn

    2009-01-01

    The bullwhip effect as a concept has been known for almost half a century, starting with the Forrester effect. The bullwhip effect is observed in many supply chains, and it is generally accepted as a potential malice. Despite this fact, the bullwhip effect still seems to be first and foremost a conceptual phenomenon. This paper intends primarily to investigate why this might be so, and thereby to examine the various aspects, possibilities and obstacles that must be taken into account when considering the potential practical use and measurement of the bullwhip effect in order to actually get the supply chain under control. Special emphasis is put on the unavoidable small-sample uncertainty aspects relating to the measurement or estimation of the bullwhip effect.

  12. DEFINITION OF TYPOS IN ANSWER OF STUDENT IN KNOWN CORRECT ANSWER

    Directory of Open Access Journals (Sweden)

    Maria V. Biryukova

    2015-01-01

    The paper describes a method of typo detection in answers to questions with open answers. In such questions we know one or several correct answers, which define a relatively small dictionary of correct words, in contrast to the usual case of looking for typos in arbitrary text. This fact allows using more complex analysis methods and finding more possible typos, such as extra or missing separators. A typo correction module for the Correct Writing question type (for the Moodle LMS) was developed using the proposed methods.

  13. Corrective Action Investigation Plan for Corrective Action Unit 555: Septic Systems Nevada Test Site, Nevada, Rev. No.: 0 with Errata

    International Nuclear Information System (INIS)

    Pastor, Laura

    2005-01-01

    This Corrective Action Investigation Plan (CAIP) contains project-specific information including facility descriptions, environmental sample collection objectives, and criteria for conducting site investigation activities at Corrective Action Unit (CAU) 555: Septic Systems, Nevada Test Site (NTS), Nevada. This CAIP has been developed in accordance with the ''Federal Facility Agreement and Consent Order'' (FFACO) (1996) that was agreed to by the State of Nevada, the U.S. Department of Energy (DOE), and the U.S. Department of Defense. Corrective Action Unit 555 is located in Areas 1, 3 and 6 of the NTS, which is approximately 65 miles (mi) northwest of Las Vegas, Nevada, and is comprised of the five corrective action sites (CASs) shown on Figure 1-1 and listed below: (1) CAS 01-59-01, Area 1 Camp Septic System; (2) CAS 03-59-03, Core Handling Building Septic System; (3) CAS 06-20-05, Birdwell Dry Well; (4) CAS 06-59-01, Birdwell Septic System; and (5) CAS 06-59-02, National Cementers Septic System. An FFACO modification was approved on December 14, 2005, to include CAS 06-20-05, Birdwell Dry Well, as part of the scope of CAU 555. The work scope was expanded in this document to include the investigation of CAS 06-20-05. The Corrective Action Investigation (CAI) will include field inspections, radiological surveys, geophysical surveys, sampling of environmental media, analysis of samples, and assessment of investigation results, where appropriate. Data will be obtained to support corrective action alternative evaluations and waste management decisions. The CASs in CAU 555 are being investigated because hazardous and/or radioactive constituents may be present in concentrations that could potentially pose a threat to human health and the environment. Existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives for the CASs. Additional information will be generated by conducting a CAI

  14. Corrective Action Investigation Plan for Corrective Action Unit 555: Septic Systems Nevada Test Site, Nevada, Rev. No.: 0 with Errata

    Energy Technology Data Exchange (ETDEWEB)

    Pastor, Laura

    2005-12-01

    This Corrective Action Investigation Plan (CAIP) contains project-specific information including facility descriptions, environmental sample collection objectives, and criteria for conducting site investigation activities at Corrective Action Unit (CAU) 555: Septic Systems, Nevada Test Site (NTS), Nevada. This CAIP has been developed in accordance with the ''Federal Facility Agreement and Consent Order'' (FFACO) (1996) that was agreed to by the State of Nevada, the U.S. Department of Energy (DOE), and the U.S. Department of Defense. Corrective Action Unit 555 is located in Areas 1, 3 and 6 of the NTS, which is approximately 65 miles (mi) northwest of Las Vegas, Nevada, and is comprised of the five corrective action sites (CASs) shown on Figure 1-1 and listed below: (1) CAS 01-59-01, Area 1 Camp Septic System; (2) CAS 03-59-03, Core Handling Building Septic System; (3) CAS 06-20-05, Birdwell Dry Well; (4) CAS 06-59-01, Birdwell Septic System; and (5) CAS 06-59-02, National Cementers Septic System. An FFACO modification was approved on December 14, 2005, to include CAS 06-20-05, Birdwell Dry Well, as part of the scope of CAU 555. The work scope was expanded in this document to include the investigation of CAS 06-20-05. The Corrective Action Investigation (CAI) will include field inspections, radiological surveys, geophysical surveys, sampling of environmental media, analysis of samples, and assessment of investigation results, where appropriate. Data will be obtained to support corrective action alternative evaluations and waste management decisions. The CASs in CAU 555 are being investigated because hazardous and/or radioactive constituents may be present in concentrations that could potentially pose a threat to human health and the environment. Existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives for the CASs. Additional information will be generated by

  15. Sampling Error in Relation to Cyst Nematode Population Density Estimation in Small Field Plots.

    Science.gov (United States)

    Župunski, Vesna; Jevtić, Radivoje; Jokić, Vesna Spasić; Župunski, Ljubica; Lalošević, Mirjana; Ćirić, Mihajlo; Ćurčić, Živko

    2017-06-01

    Cyst nematodes are serious plant-parasitic pests which can cause severe yield losses and extensive damage. Since there is still very little information about the error of population density estimation in small field plots, this study contributes to the broader issue of population density assessment. It was shown that there was no significant difference between cyst counts of five or seven bulk samples taken per 1-m² plot if the average cyst count per examined plot exceeded 75 cysts per 100 g of soil. Goodness of fit of the data to probability distributions, tested with the χ² test, confirmed a negative binomial distribution of cyst counts for 21 out of 23 plots. The recommended sampling precision of 17%, expressed through the coefficient of variation (cv), was achieved if 1-m² plots contaminated with more than 90 cysts per 100 g of soil were sampled with 10-core bulk samples taken in five repetitions. If plots were contaminated with fewer than 75 cysts per 100 g of soil, 10-core bulk samples taken in seven repetitions gave a cv higher than 23%. This study indicates that more attention should be paid to the estimation of sampling error in experimental field plots to ensure more reliable estimation of the population density of cyst nematodes.
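
    The precision measure used in the study is straightforward to reproduce. A sketch, assuming the inputs are replicate bulk-sample counts (cysts per 100 g of soil) from a single plot:

        import numpy as np

        def sampling_cv(counts):
            # Coefficient of variation (%) across replicate bulk-sample counts;
            # the study's recommended precision benchmark is cv = 17%.
            counts = np.asarray(counts, dtype=float)
            return 100.0 * counts.std(ddof=1) / counts.mean()

        print(sampling_cv([95, 110, 88, 102, 97]))  # five 10-core bulk samples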

  16. Correction Effect of Finite Pulse Duration for High Thermal Diffusivity Materials

    International Nuclear Information System (INIS)

    Park, Dae Gyu; Kim, Hee Moon; Baik, Seung Je; Yoo, Byoung Ok; Ahn, Sang Bok; Ryu, Woo Seok

    2010-01-01

    In the laser pulsed flash method, a pulse of energy is incident on one of two parallel faces of a sample. The subsequent temperature history of the opposite face is then related to the thermal diffusivity. When the heat pulse is of infinitesimal duration, the diffusivity is obtained from the transient response of the rear face temperature proposed by Parker et al. The diffusivity α is computed from the relation α ≡ a²/(π²t_c) = 1.37 a²/(π²t_{1/2}) (1), where a is the sample thickness, t_{1/2} is the time required for the rear face temperature to reach half-maximum, and t_c ≡ t_{1/2}/1.37 is the characteristic rise time of the rear face temperature. When the pulse time τ is not infinitesimal, but becomes comparable to t_c, it is apparent that the rise in temperature of the rear face will be retarded, and t_{1/2} will be greater than 1.37 t_c. This retardation has been called the 'finite pulse-time effect.' Equation (1) is accurate to 1% for t_c ≳ 50 τ. For many substances, this inequality cannot be achieved with conventional optical sources (e.g. τ ≈ 10⁻³ s for a solid state laser) unless the sample thickness is so large that its rise in temperature is too small for accurate measurement. One must therefore make an appropriate correction for the retardation of the temperature wave. The purposes of this study are to observe the impact of the finite pulse-time effect at appropriate sample thicknesses and to verify the effect of pulse correction using the Cape and Lehman method for high thermal diffusivity materials
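
    For reference, Eq. (1) in code form; this is only the ideal infinitesimal-pulse Parker relation, without the Cape and Lehman finite-pulse-time correction that the study evaluates.

        import math

        def flash_diffusivity(a_m, t_half_s):
            # Parker et al. relation alpha = 1.37 a^2 / (pi^2 t_1/2); valid only
            # when the pulse time is much shorter than t_c = t_1/2 / 1.37,
            # otherwise a finite pulse-time correction must be applied.
            return 1.37 * a_m**2 / (math.pi**2 * t_half_s)

        # e.g., a 2 mm thick sample whose rear face half-rise time is 25 ms
        print(flash_diffusivity(2e-3, 25e-3))  # ~2.2e-5 m^2/s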

  17. Modeling bias and variation in the stochastic processes of small RNA sequencing.

    Science.gov (United States)

    Argyropoulos, Christos; Etheridge, Alton; Sakhanenko, Nikita; Galas, David

    2017-06-20

    The use of RNA-seq as the preferred method for the discovery and validation of small RNA biomarkers has been hindered by high quantitative variability and biased sequence counts. In this paper we develop a statistical model for sequence counts that accounts for ligase bias and stochastic variation in sequence counts. This model implies a linear quadratic relation between the mean and variance of sequence counts. Using a large number of sequencing datasets, we demonstrate how one can use the generalized additive models for location, scale and shape (GAMLSS) distributional regression framework to calculate and apply empirical correction factors for ligase bias. Bias correction could remove more than 40% of the bias for miRNAs. Empirical bias correction factors appear to be nearly constant over at least one and up to four orders of magnitude of total RNA input and independent of sample composition. Using synthetic mixes of known composition, we show that the GAMLSS approach can analyze differential expression with greater accuracy, higher sensitivity and specificity than six existing algorithms (DESeq2, edgeR, EBSeq, limma, DSS, voom) for the analysis of small RNA-seq data. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
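
    The linear-quadratic mean-variance relation the model implies can be illustrated with a toy fit. GAMLSS itself is an R framework, so the Python snippet below is only an analogy; the data are synthetic and the negative binomial setting is an assumption.

        import numpy as np

        rng = np.random.default_rng(0)
        mu = rng.gamma(2.0, 50.0, size=200)                  # true mean counts
        counts = rng.negative_binomial(5, 5.0 / (5.0 + mu),  # Var = mu + mu^2 / 5
                                       size=(8, 200))        # 8 replicate libraries

        m = counts.mean(axis=0)
        v = counts.var(axis=0, ddof=1)

        # Least-squares fit of Var ~ a*mu + b*mu^2 (no intercept), the
        # linear-quadratic law described in the abstract.
        A = np.column_stack([m, m**2])
        (a_hat, b_hat), *_ = np.linalg.lstsq(A, v, rcond=None)
        print(a_hat, b_hat)  # roughly a ~ 1 and b ~ 0.2 for this toy model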

  18. Corrective action decision document for the Roller Coaster Lagoons and North Disposal Trench (Corrective Action Unit Number 404)

    International Nuclear Information System (INIS)

    1997-01-01

    The North Disposal Trench, located north of the eastern most lagoon, was installed in 1963 to receive solid waste and construction debris from the Operation Roller Coaster man camp. Subsequent to Operation Roller Coaster, the trench continued to receive construction debris and range cleanup debris (including ordnance) from Sandia National Laboratories and other operators. A small hydrocarbon spill occurred during Voluntary Corrective Action (VCA) activities (VCA Spill Area) at an area associated with the North Disposal Trench Corrective Action Site (CAS). Remediation activities at this site were conducted in 1995. A corrective action investigation was conducted in September of 1996 following the Corrective Action Investigation Plan (CAIP); the detailed results of that investigation are presented in Appendix A. The Roller Coaster Lagoons and North Disposal Trench are located at the Tonopah Test Range (TTR), a part of the Nellis Air Force Range, which is approximately 225 kilometers (140 miles) northwest of Las Vegas, Nevada, by air

  19. Monitoring, Modeling, and Diagnosis of Alkali-Silica Reaction in Small Concrete Samples

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, Vivek [Idaho National Lab. (INL), Idaho Falls, ID (United States); Cai, Guowei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gribok, Andrei V. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mahadevan, Sankaran [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    Assessment and management of aging concrete structures in nuclear power plants require a more systematic approach than simple reliance on existing code margins of safety. Structural health monitoring of concrete structures aims to understand the current health condition of a structure based on heterogeneous measurements, in order to produce high-confidence actionable information regarding structural integrity that supports operational and maintenance decisions. This report describes alkali-silica reaction (ASR) degradation mechanisms and factors influencing ASR. A fully coupled thermo-hydro-mechanical-chemical model developed by Saouma and Perotti, which takes into consideration the effects of stress on the reaction kinetics and anisotropic volumetric expansion, is presented in this report. The model is implemented in the GRIZZLY code, based on the Multiphysics Object Oriented Simulation Environment. The implemented model is used to randomly initiate ASR in 2D and 3D lattices to study the percolation aspects of concrete. The percolation aspects help determine the transport properties of the material and therefore the durability and service life of concrete. This report summarizes the effort to develop small-size concrete samples with embedded glass to mimic ASR. The concrete samples were treated in water and sodium hydroxide solution at elevated temperature to study how the ingress of sodium ions and hydroxide ions at elevated temperature impacts concrete samples embedded with glass. A thermal camera was used to monitor changes in the concrete samples, and the results are summarized.

  20. Structural properties of small Lin (n = 5-8) atomic clusters via ab initio random structure searching: A look into the role of different implementations of long-range dispersion corrections

    Science.gov (United States)

    Putungan, Darwin Barayang; Lin, Shi-Hsin

    2018-01-01

    In this work, we looked into the lowest-energy structures of small lithium clusters (Lin, n = 5, 6, 7, 8) using the conventional PBE exchange-correlation functional, PBE with the D2 dispersion correction, and PBE with the Tkatchenko-Scheffler (TS) dispersion correction, with structures searched via ab initio random structure searching. Results show that, in general, dispersion-corrected PBE arrives at the same lowest-minima structures as conventional PBE regardless of the type of implementation, although both D2 and TS found several high-energy isomers that conventional PBE did not, with TS in general giving more structures per energy range, which can be attributed to its environment-dependent implementation. Moreover, the D2 and TS dispersion corrections found a lowest-energy geometry for the Li8 cluster that agrees with the structure obtained via the typical benchmarking method, diffusion Monte Carlo, in a recent work. It is thus suggested that for much larger lithium clusters, dispersion corrections could help in searching for lowest-energy minima in close agreement with diffusion Monte Carlo results while remaining computationally inexpensive.

  1. Assessing pesticide concentrations and fluxes in the stream of a small vineyard catchment - Effect of sampling frequency

    International Nuclear Information System (INIS)

    Rabiet, M.; Margoum, C.; Gouy, V.; Carluer, N.; Coquery, M.

    2010-01-01

    This study reports on the occurrence and behaviour of six pesticides and one metabolite in a small stream draining a vineyard catchment. Base flow and flood events were monitored in order to assess the variability of pesticide concentrations according to the season and to evaluate the role of sampling frequency on flux estimates. Results showed that dissolved pesticide concentrations displayed a strong temporal and spatial variability. A large mobilisation of pesticides was observed during floods, with total dissolved pesticide fluxes per event ranging from 5.7 × 10⁻³ g/ha to 0.34 g/ha. These results highlight the major role of floods in the transport of pesticides in this small stream; floods contributed more than 89% of the total load of diuron during August 2007. The evaluation of pesticide loads using different sampling strategies and calculation methods showed that grab sampling largely underestimated pesticide concentrations and fluxes transiting through the stream. - This work brings new insights into the fluxes of pesticides in the surface water of a vineyard catchment, notably during flood events.

  2. Assessing pesticide concentrations and fluxes in the stream of a small vineyard catchment - Effect of sampling frequency

    Energy Technology Data Exchange (ETDEWEB)

    Rabiet, M., E-mail: marion.rabiet@unilim.f [Cemagref, UR QELY, 3bis quai Chauveau, CP 220, F-69336 Lyon (France); Margoum, C.; Gouy, V.; Carluer, N.; Coquery, M. [Cemagref, UR QELY, 3bis quai Chauveau, CP 220, F-69336 Lyon (France)

    2010-03-15

    This study reports on the occurrence and behaviour of six pesticides and one metabolite in a small stream draining a vineyard catchment. Base flow and flood events were monitored in order to assess the variability of pesticide concentrations according to the season and to evaluate the role of sampling frequency on flux estimates. Results showed that dissolved pesticide concentrations displayed a strong temporal and spatial variability. A large mobilisation of pesticides was observed during floods, with total dissolved pesticide fluxes per event ranging from 5.7 × 10⁻³ g/ha to 0.34 g/ha. These results highlight the major role of floods in the transport of pesticides in this small stream; floods contributed more than 89% of the total load of diuron during August 2007. The evaluation of pesticide loads using different sampling strategies and calculation methods showed that grab sampling largely underestimated pesticide concentrations and fluxes transiting through the stream. - This work brings new insights into the fluxes of pesticides in the surface water of a vineyard catchment, notably during flood events.
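
    The per-event fluxes quoted above come from integrating concentration times discharge over the event. A sketch of that bookkeeping with made-up 15-minute data; the authors' actual load-calculation methods differ, and comparing them is part of the study.

        import numpy as np

        def event_load_g(conc_ug_per_l, discharge_l_per_s, dt_s):
            # Rectangle-rule integral of C(t) * Q(t): ug/L * L/s * s = ug,
            # converted to grams; dividing by catchment area (ha) gives g/ha.
            c = np.asarray(conc_ug_per_l, dtype=float)
            q = np.asarray(discharge_l_per_s, dtype=float)
            return float(np.sum(c * q) * dt_s) / 1e6

        c = [0.1, 0.8, 2.4, 1.9, 1.0, 0.5, 0.3, 0.2]  # diuron, ug/L (toy data)
        q = [20, 80, 150, 120, 90, 60, 40, 30]        # discharge, L/s (toy data)
        print(event_load_g(c, q, dt_s=900), "g for this flood event")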

  3. Simultaneous small-sample comparisons in longitudinal or multi-endpoint trials using multiple marginal models

    DEFF Research Database (Denmark)

    Pallmann, Philip; Ritz, Christian; Hothorn, Ludwig A

    2018-01-01

    Simultaneous inference in longitudinal, repeated-measures, and multi-endpoint designs can be onerous, especially when trying to find a reasonable joint model from which the interesting effects and covariances are estimated. A novel statistical approach known as multiple marginal models greatly simplifies the modelling process: the core idea is to "marginalise" the problem and fit multiple small models to different portions of the data, and then estimate the overall covariance matrix in a subsequent, separate step. Using these estimates guarantees strong control of the family-wise error rate, however only asymptotically. In this paper, we show how to make the approach also applicable to small-sample data problems. Specifically, we discuss the computation of adjusted P values and simultaneous confidence bounds for comparisons of randomised treatment groups as well as for levels...

  4. Histologic examination of hepatic biopsy samples as a prognostic indicator in dogs undergoing surgical correction of congenital portosystemic shunts: 64 cases (1997-2005).

    Science.gov (United States)

    Parker, Jacquelyn S; Monnet, Eric; Powers, Barbara E; Twedt, David C

    2008-05-15

    To determine whether results of histologic examination of hepatic biopsy samples could be used as an indicator of survival time in dogs that underwent surgical correction of a congenital portosystemic shunt (PSS). Retrospective case series. 64 dogs that underwent exploratory laparotomy for an extrahepatic (n = 39) or intrahepatic (25) congenital PSS. All H&E-stained histologic slides of hepatic biopsy samples obtained at the time of surgery were reviewed by a single individual, and severity of histologic abnormalities (ie, arteriolar hyperplasia, biliary hyperplasia, fibrosis, cell swelling, lipidosis, lymphoplasmacytic cholangiohepatitis, suppurative cholangiohepatitis, lipid granulomas, and dilated sinusoids) was graded. A Cox proportional hazards regression model was used to determine whether each histologic feature was associated with survival time. Median follow-up time was 35.7 months, and median survival time was 50.6 months. Thirty-eight dogs were alive at the time of final follow-up; 15 had died of causes associated with the PSS, including 4 that died immediately after surgery; 3 had died of unrelated causes; and 8 were lost to follow-up. None of the histologic features examined were significantly associated with survival time. Findings suggested that results of histologic examination of hepatic biopsy samples obtained at the time of surgery cannot be used to predict long-term outcome in dogs undergoing surgical correction of a PSS.

  5. Authorship Correction: Sampling Key Populations for HIV Surveillance: Results From Eight Cross-Sectional Studies Using Respondent-Driven Sampling and Venue-Based Snowball Sampling.

    Science.gov (United States)

    Rao, Amrita; Stahlman, Shauna; Hargreaves, James; Weir, Sharon; Edwards, Jessie; Rice, Brian; Kochelani, Duncan; Mavimbela, Mpumelelo; Baral, Stefan

    2018-01-15

    [This corrects the article DOI: 10.2196/publichealth.8116.]. ©Amrita Rao, Shauna Stahlman, James Hargreaves, Sharon Weir, Jessie Edwards, Brian Rice, Duncan Kochelani, Mpumelelo Mavimbela, Stefan Baral. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 15.01.2018.

  6. Including screening in van der Waals corrected density functional theory calculations: The case of atoms and small molecules physisorbed on graphene

    Energy Technology Data Exchange (ETDEWEB)

    Silvestrelli, Pier Luigi; Ambrosetti, Alberto [Dipartimento di Fisica e Astronomia, Università di Padova, via Marzolo 8, I–35131 Padova, Italy and DEMOCRITOS National Simulation Center of the Italian Istituto Officina dei Materiali (IOM) of the Italian National Research Council (CNR), Trieste (Italy)

    2014-03-28

    The Density Functional Theory (DFT)/van der Waals-Quantum Harmonic Oscillator-Wannier function (vdW-QHO-WF) method, recently developed to include the vdW interactions in approximated DFT by combining the quantum harmonic oscillator model with the maximally localized Wannier function technique, is applied to the cases of atoms and small molecules (X = Ar, CO, H₂, H₂O) weakly interacting with benzene and with the ideal planar graphene surface. Comparison is also presented with the results obtained by other DFT vdW-corrected schemes, including PBE+D, vdW-DF, vdW-DF2, rVV10, and by the simpler Local Density Approximation (LDA) and semilocal generalized gradient approximation approaches. While for the X-benzene systems all the considered vdW-corrected schemes perform reasonably well, it turns out that an accurate description of the X-graphene interaction requires a proper treatment of many-body contributions and of short-range screening effects, as demonstrated by adopting an improved version of the DFT/vdW-QHO-WF method. We also comment on the widespread attitude of relying on LDA to get a rough description of weakly interacting systems.

  7. Automated microfluidic sample-preparation platform for high-throughput structural investigation of proteins by small-angle X-ray scattering

    DEFF Research Database (Denmark)

    Lafleur, Josiane P.; Snakenborg, Detlef; Nielsen, Søren Skou

    2011-01-01

    A new microfluidic sample-preparation system is presented for the structural investigation of proteins using small-angle X-ray scattering (SAXS) at synchrotrons. The system includes hardware and software features for precise fluidic control, sample mixing by diffusion, automated X-ray exposure control, UV absorbance measurements and automated data analysis. As little as 15 μl of sample is required to perform a complete analysis cycle, including sample mixing, SAXS measurement, continuous UV absorbance measurements, and cleaning of the channels and X-ray cell with buffer. The complete analysis...

  8. Electromagnetic corrections in η→3π decays

    International Nuclear Information System (INIS)

    Ditsche, Christoph; Kubis, Bastian; Meissner, Ulf G.

    2009-01-01

    We re-evaluate the electromagnetic corrections to η→3π decays at next-to-leading order in the chiral expansion, arguing that effects of order e²(m_u−m_d) disregarded so far are not negligible compared to other contributions of order e² times a light-quark mass. Despite the appearance of the Coulomb pole in η→π⁺π⁻π⁰ and cusps in η→3π⁰, the overall corrections remain small. (orig.)

  9. Attenuation correction factors for cylindrical, disc and box geometry

    International Nuclear Information System (INIS)

    Agarwal, Chhavi; Poi, Sanhita; Mhatre, Amol; Goswami, A.; Gathibandhe, M.

    2009-01-01

    In the present study, attenuation correction factors have been experimentally determined for samples having cylindrical, disc and box geometries and compared with the attenuation correction factors calculated by the Hybrid Monte Carlo (HMC) method [C. Agarwal, S. Poi, A. Goswami, M. Gathibandhe, R.A. Agrawal, Nucl. Instr. and Meth. A 597 (2008) 198] and with the near-field and far-field formulations available in the literature. It has been observed that the near-field formulae, although said to be applicable at close sample-detector geometry, do not work at very close sample-detector configurations. The advantage of the HMC method is that it is found to be valid for all sample-detector geometries.
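
    For orientation, the simplest far-field formulation in the literature reduces, for a slab or disc viewed along its thickness, to the textbook self-attenuation factor sketched below. This is the kind of formula the HMC method is compared against, not the HMC method itself.

        import math

        def self_attenuation_factor(mu_per_cm, thickness_cm):
            # Far-field factor f = (1 - exp(-mu*t)) / (mu*t) by which sample
            # self-attenuation reduces the observed gamma count rate; divide
            # the measured rate by f to correct it.
            x = mu_per_cm * thickness_cm
            return (1.0 - math.exp(-x)) / x if x > 0.0 else 1.0

        print(self_attenuation_factor(0.2, 2.0))  # mu = 0.2 cm^-1, 2 cm thick disc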

  10. Sampling procedure, receipt and conservation of water samples to determine environmental radioactivity

    International Nuclear Information System (INIS)

    Herranz, M.; Navarro, E.; Payeras, J.

    2009-01-01

    The present document describes the essential goals, processes and contents that the subgroups on Sampling and on Sample Preparation and Conservation consider should be part of the procedure for correct sampling, receipt, conservation and preparation of samples of continental, marine and waste water prior to determining their radioactive content.

  11. A simple technique for measuring the superconducting critical temperature of small (>= 10 μg) samples

    International Nuclear Information System (INIS)

    Pereira, R.F.R.; Meyer, E.; Silveira, M.F. da.

    1983-01-01

    A simple technique for measuring the superconducting critical temperature of small (≥10 μg) samples is described. The apparatus is built in the form of a probe which can be introduced directly into a liquid He storage dewar and permits determination of critical temperatures above 4.2 K, with an imprecision of ±0.05 K, in about 10 minutes. (Author) [pt

  12. A correction to 'efficient and secure comparison for on-line auctions'

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Krøigaard, Mikkel; Geisler, Martin

    2009-01-01

    In this paper, we describe a correction to the cryptosystem proposed by Damgård et al. in Int. J. Applied Cryptography, Vol. 1, No. 1. Although the correction is small and does not affect the performance of the protocols from Damgård et al., it is necessary, as the cryptosystem is not secure...

  13. Evaluation of wastewater contaminant transport in surface waters using verified Lagrangian sampling

    Science.gov (United States)

    Antweiler, Ronald C.; Writer, Jeffrey H.; Murphy, Sheila F.

    2014-01-01

    Contaminants released from wastewater treatment plants can persist in surface waters for substantial distances. Much research has gone into evaluating the fate and transport of these contaminants, but this work has often assumed constant flow from wastewater treatment plants. However, effluent discharge commonly varies widely over a 24-hour period, and this variation controls contaminant loading and can profoundly influence interpretations of environmental data. We show that methodologies relying on the normalization of downstream data to conservative elements can give spurious results, and should not be used unless it can be verified that the same parcel of water was sampled. Lagrangian sampling, which in theory samples the same water parcel as it moves downstream (the Lagrangian parcel), links hydrologic and chemical transformation processes so that the in-stream fate of wastewater contaminants can be quantitatively evaluated. However, precise Lagrangian sampling is difficult, and small deviations – such as missing the Lagrangian parcel by less than 1 h – can cause large differences in measured concentrations of all dissolved compounds at downstream sites, leading to erroneous conclusions regarding in-stream processes controlling the fate and transport of wastewater contaminants. Therefore, we have developed a method termed “verified Lagrangian” sampling, which can be used to determine if the Lagrangian parcel was actually sampled, and if it was not, a means for correcting the data to reflect the concentrations which would have been obtained had the Lagrangian parcel been sampled. To apply the method, it is necessary to have concentration data for a number of conservative constituents from the upstream, effluent, and downstream sites, along with upstream and effluent concentrations that are constant over the short-term (typically 2–4 h). These corrections can subsequently be applied to all data, including non-conservative constituents. Finally, we

  14. Evaluation of wastewater contaminant transport in surface waters using verified Lagrangian sampling.

    Science.gov (United States)

    Antweiler, Ronald C; Writer, Jeffrey H; Murphy, Sheila F

    2014-02-01

    Contaminants released from wastewater treatment plants can persist in surface waters for substantial distances. Much research has gone into evaluating the fate and transport of these contaminants, but this work has often assumed constant flow from wastewater treatment plants. However, effluent discharge commonly varies widely over a 24-hour period, and this variation controls contaminant loading and can profoundly influence interpretations of environmental data. We show that methodologies relying on the normalization of downstream data to conservative elements can give spurious results, and should not be used unless it can be verified that the same parcel of water was sampled. Lagrangian sampling, which in theory samples the same water parcel as it moves downstream (the Lagrangian parcel), links hydrologic and chemical transformation processes so that the in-stream fate of wastewater contaminants can be quantitatively evaluated. However, precise Lagrangian sampling is difficult, and small deviations - such as missing the Lagrangian parcel by less than 1 h - can cause large differences in measured concentrations of all dissolved compounds at downstream sites, leading to erroneous conclusions regarding in-stream processes controlling the fate and transport of wastewater contaminants. Therefore, we have developed a method termed "verified Lagrangian" sampling, which can be used to determine if the Lagrangian parcel was actually sampled, and if it was not, a means for correcting the data to reflect the concentrations which would have been obtained had the Lagrangian parcel been sampled. To apply the method, it is necessary to have concentration data for a number of conservative constituents from the upstream, effluent, and downstream sites, along with upstream and effluent concentrations that are constant over the short-term (typically 2-4 h). These corrections can subsequently be applied to all data, including non-conservative constituents. Finally, we show how data
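
    The verification step itself can be expressed in a few lines. A sketch assuming steady upstream and effluent concentrations over the sampling window; the constituent names and numbers are illustrative only.

        def effluent_fraction(c_up, c_eff, c_down):
            # Effluent mixing fraction inferred from ONE conservative
            # constituent: f = (Cd - Cu) / (Ce - Cu). Agreement of f across
            # several conservative tracers indicates the Lagrangian parcel was
            # actually sampled; disagreement flags a timing miss.
            return (c_down - c_up) / (c_eff - c_up)

        def expected_downstream(c_up, c_eff, f):
            # Concentration any conservative constituent should show downstream
            # if the parcel mixed with effluent fraction f.
            return c_up + f * (c_eff - c_up)

        f = effluent_fraction(12.0, 180.0, 46.0)     # chloride, mg/L (toy data)
        print(f, expected_downstream(0.50, 2.0, f))  # check a second tracer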

  15. A new set-up for simultaneous high-precision measurements of CO2, δ13C-CO2 and δ18O-CO2 on small ice core samples

    Science.gov (United States)

    Jenk, Theo Manuel; Rubino, Mauro; Etheridge, David; Ciobanu, Viorela Gabriela; Blunier, Thomas

    2016-08-01

    Palaeoatmospheric records of carbon dioxide and its stable carbon isotope composition (δ13C) obtained from polar ice cores provide important constraints on the natural variability of the carbon cycle. However, the measurements are both analytically challenging and time-consuming; thus only data exist from a limited number of sampling sites and time periods. Additional analytical resources with high analytical precision and throughput are thus desirable to extend the existing datasets. Moreover, consistent measurements derived by independent laboratories and a variety of analytical systems help to further increase confidence in the global CO2 palaeo-reconstructions. Here, we describe our new set-up for simultaneous measurements of atmospheric CO2 mixing ratios and atmospheric δ13C and δ18O-CO2 in air extracted from ice core samples. The centrepiece of the system is a newly designed needle cracker for the mechanical release of air entrapped in ice core samples of 8-13 g operated at -45 °C. The small sample size allows for high resolution and replicate sampling schemes. In our method, CO2 is cryogenically and chromatographically separated from the bulk air and its isotopic composition subsequently determined by continuous flow isotope ratio mass spectrometry (IRMS). In combination with thermal conductivity measurement of the bulk air, the CO2 mixing ratio is calculated. The analytical precision determined from standard air sample measurements over ice is ±1.9 ppm for CO2 and ±0.09 ‰ for δ13C. In a laboratory intercomparison study with CSIRO (Aspendale, Australia), good agreement between CO2 and δ13C results is found for Law Dome ice core samples. Replicate analysis of these samples resulted in a pooled standard deviation of 2.0 ppm for CO2 and 0.11 ‰ for δ13C. These numbers are good, though they are rather conservative estimates of the overall analytical precision achieved for single ice sample measurements. Facilitated by the small sample requirement

  16. Correction

    DEFF Research Database (Denmark)

    Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin

    2016-01-01

    [This corrects the article DOI: 10.1371/journal.ppat.1005000.] [This corrects the article DOI: 10.1371/journal.ppat.1005740.] [This corrects the article DOI: 10.1371/journal.ppat.1005679.]

  17. Imaging single atoms using secondary electrons with an aberration-corrected electron microscope.

    Science.gov (United States)

    Zhu, Y; Inada, H; Nakamura, K; Wall, J

    2009-10-01

    Aberration correction has embarked on a new frontier in electron microscopy by overcoming the limitations of conventional round lenses, providing sub-angstrom-sized probes. However, improvement of spatial resolution using aberration correction so far has been limited to the use of transmitted electrons both in scanning and stationary mode, with an improvement of 20-40% (refs 3-8). In contrast, advances in the spatial resolution of scanning electron microscopes (SEMs), which are by far the most widely used instrument for surface imaging at the micrometre-nanometre scale, have been stagnant, despite several recent efforts. Here, we report a new SEM, with aberration correction, able to image single atoms by detecting electrons emerging from its surface as a result of interaction with the small probe. The spatial resolution achieved represents a fourfold improvement over the best-reported resolution in any SEM (refs 10-12). Furthermore, we can simultaneously probe the sample through its entire thickness with transmitted electrons. This ability is significant because it permits the selective visualization of bulk atoms and surface ones, beyond a traditional two-dimensional projection in transmission electron microscopy. It has the potential to revolutionize the field of microscopy and imaging, thereby opening the door to a wide range of applications, especially when combined with simultaneous nanoprobe spectroscopy.

  18. A simple method for regional cerebral blood flow measurement by one-point arterial blood sampling and 123I-IMP microsphere model (part 2). A study of time correction of one-point blood sample count

    International Nuclear Information System (INIS)

    Masuda, Yasuhiko; Makino, Kenichi; Gotoh, Satoshi

    1999-01-01

    In our previous paper regarding determination of the regional cerebral blood flow (rCBF) using the 123I-IMP microsphere model, we reported that the accuracy of determination of the integrated value of the input function from one-point arterial blood sampling can be increased by performing a correction using the 5 min:29 min ratio for the whole-brain count. However, failure to carry out the arterial blood collection at exactly 5 minutes after 123I-IMP injection causes errors with this method, and there is thus a time limitation. We have now revised our method so that the one-point arterial blood sampling can be performed at any time between 5 minutes and 20 minutes after 123I-IMP injection, with the addition of a correction step for the sampling time. This revised method permits more accurate estimation of the integral of the input function. The method was then applied to 174 experimental subjects: one-point blood samples were collected at random times between 5 and 20 minutes, and the estimated values for the continuous arterial octanol extraction count (COC) were determined. The mean error rate between the COC and the actually measured continuous arterial octanol extraction count (OC) was 3.6%, and the standard deviation was 12.7%. Accordingly, in 70% of the cases the rCBF could be estimated within an error rate of 13%, while in 95% of the cases estimation was possible within an error rate of 25%. This improved method is a simple technique for determination of the rCBF by the 123I-IMP microsphere model and one-point arterial blood sampling which is free of the previous time limitation and does not require any octanol extraction step. (author)

  19. Corrective Action Investigation Plan for Corrective Action Unit 230: Area 22 Sewage Lagoons and Corrective Action Unit 320: Area 22 Desert Rock Airport Strainer Box, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    1999-01-01

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 230/320 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 230 consists of Corrective Action Site (CAS) 22-03-01, Sewage Lagoon; while CAU 320 consists of CAS 22-99-01, Strainer Box. These CAUs are referred to as CAU 230/320 or the Sewage Lagoons Site. The Sewage Lagoons Site also includes an Imhoff tank, sludge bed, and associated buried sewer piping. Located in Area 22, the site was used from 1951 to 1958 for disposal of sanitary sewage effluent from the historic Camp Desert Rock Facility at the Nevada Test Site in Nevada. Based on site history, the contaminants of potential concern include volatile organic compounds (VOCs), semivolatile organic compounds, total petroleum hydrocarbons (TPH), and radionuclides. Vertical migration is estimated to be less than 12 feet below ground surface, and lateral migration is limited to the soil immediately adjacent to or within areas of concern. The proposed investigation will involve a combination of field screening for VOCs and TPH using the direct-push method and excavation using a backhoe to gather soil samples for analysis. Gamma spectroscopy will also be conducted for waste management purposes. Sampling locations will be biased to suspected worst-case areas including the nearby sludge bed, sewage lagoon inlet(s) and outlet(s), disturbed soil surrounding the lagoons, surface drainage channel south of the lagoons, and the area near the Imhoff tank. The results of this field investigation will support a defensible evaluation of corrective action alternatives in the corrective action decision document.

  20. Sophistication of 14C measurement at JAEA-AMS-MUTSU. Attempt on a small quantity of sample

    International Nuclear Information System (INIS)

    Tanaka, Takayuki; Kabuto, Shoji; Kinoshita, Naoki; Yamamoto, Nobuo

    2010-01-01

    In investigations of substance dynamics that use molecular weight and chemical fractionation, the use of 14C measurement by accelerator mass spectrometry (AMS) has begun. As a result of the fractionation, the sample quantities available for AMS measurement have become smaller, and we expect this trend toward smaller samples to accelerate steadily in the future. Since 14C measurement by the AMS installed at the Mutsu office currently requires about 2 mg of sample, our AMS lags behind the others in this trend. We have therefore attempted to reduce the sample quantity needed for 14C measurement with our AMS. In this study, we modified the shape of the target piece in which the sample is packed, which is routinely needed for radiocarbon measurement by our AMS. Moreover, we improved the apparatus used to pack the sample. As a result of these improvements, we showed that it is possible to measure 14C with our AMS using as little as about 0.5 mg of sample. (author)

  1. Bias correction for selecting the minimal-error classifier from many machine learning models.

    Science.gov (United States)

    Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C

    2014-11-15

    Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists. It has been common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and the Tibshirani-Tibshirani procedure. All methods were compared in simulation datasets, five moderate-size real datasets and two large breast cancer datasets. The results showed that IPL outperforms the other methods in bias correction, with smaller variance, and it has the additional advantage of extrapolating error estimates to larger sample sizes, a practical feature for deciding whether more samples should be recruited to improve the classifier and accuracy. An R package 'MLbias' and all source files are publicly available. tsenglab.biostat.pitt.edu/software.htm. ctseng@pitt.edu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
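    The IPL correction rests on fitting a learning curve of error rate versus training-set size and reading off the extrapolated error. A minimal sketch of such a fit (synthetic error rates; the MLbias package itself may differ in detail):

```python
import numpy as np
from scipy.optimize import curve_fit

def ipl(n, a, b, c):
    """Inverse power law learning curve: error ~ a * n**(-b) + c."""
    return a * n ** (-b) + c

# Hypothetical cross-validation error rates at increasing sample sizes.
n = np.array([20, 30, 40, 50, 60], dtype=float)
err = np.array([0.38, 0.31, 0.27, 0.25, 0.235])

popt, _ = curve_fit(ipl, n, err, p0=(1.0, 0.5, 0.1), maxfev=10000)
a, b, c = popt
print(f"asymptotic error c = {c:.3f}")
# Extrapolate the expected error if 120 samples were recruited.
print(f"predicted error at n=120: {ipl(120.0, *popt):.3f}")
```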

  2. Final voluntary release assessment/corrective action report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-11-12

    The US Department of Energy, Carlsbad Area Office (DOE-CAO) has completed a voluntary release assessment sampling program at selected Solid Waste Management Units (SWMUs) at the Waste Isolation Pilot Plant (WIPP). This Voluntary Release Assessment/Corrective Action (RA/CA) report has been prepared for final submittal to the Environmental Protection Agency (EPA) Region 6, Hazardous Waste Management Division and the New Mexico Environment Department (NMED) Hazardous and Radioactive Materials Bureau to describe the results of voluntary release assessment sampling and proposed corrective actions at the SWMU sites. The Voluntary RA/CA Program is intended to be the first phase in implementing the Resource Conservation and Recovery Act (RCRA) Facility Investigation (RFI) and corrective action process at the WIPP. Data generated as part of this sampling program are intended to update the RCRA Facility Assessment (RFA) for the WIPP (Assessment of Solid Waste Management Units at the Waste Isolation Pilot Plant), NMED/DOE/AIP 94/1. This Final Voluntary RA/CA Report documents the results of release assessment sampling at 11 SWMUs identified in the RFA. With this submittal, DOE formally requests a No Further Action determination for these SWMUs. Additionally, this report provides information to support DOE's request for No Further Action at the Brinderson and Construction landfill SWMUs, and to support DOE's request for approval of proposed corrective actions at three other SWMUs (the Badger Unit Drill Pad, the Cotton Baby Drill Pad, and the DOE-1 Drill Pad). This information is provided to document the results of the Voluntary RA/CA activities submitted to the EPA and NMED in August 1995.

  3. Sample preparation in foodomic analyses.

    Science.gov (United States)

    Martinović, Tamara; Šrajer Gajdošik, Martina; Josić, Djuro

    2018-04-16

    Representative sampling and adequate sample preparation are key factors for successful performance of the subsequent steps of foodomic analyses, as well as for correct data interpretation. Incorrect sampling and improper sample preparation can be sources of severe bias in foodomic analyses. It is well known that errors introduced by wrong sampling or sample treatment cannot be corrected later. These facts, frequently neglected in the past, are now taken into consideration, and the progress in sampling and sample preparation in foodomics is reviewed here. We report the use of highly sophisticated instruments for both high-performance and high-throughput analyses, as well as miniaturization and the use of laboratory robotics in metabolomics, proteomics, peptidomics and genomics. This article is protected by copyright. All rights reserved.

  4. Enrichment and determination of small amounts of 90Sr/90Y in water samples

    International Nuclear Information System (INIS)

    Mundschenk, H.

    1979-01-01

    Small amounts of 90Sr/90Y can be concentrated from large volumes of surface water (100 l) by precipitation of the phosphates, using bentonite as an adsorber matrix. In the case of samples containing little or no suspended matter (tap water, ground water, sea water), the daughter 90Y can be extracted directly by using filter beds impregnated with HDEHP. The applicability of both techniques is demonstrated under realistic conditions. (orig.)

  5. Basic distribution free identification tests for small size samples of environmental data

    Energy Technology Data Exchange (ETDEWEB)

    Federico, A.G.; Musmeci, F. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dipt. Ambiente

    1998-01-01

    Testing two or more data sets for the hypothesis that they are sampled from the same population is often required in environmental data analysis. Typically the available samples contain a small number of data points, and often the assumption of normal distributions is not realistic. On the other hand, the diffusion of today's powerful personal computers opens new opportunities based on massive use of CPU resources. The paper reviews the problem and introduces two feasible nonparametric approaches based on the intrinsic equiprobability properties of the data samples. The first is based on full resampling, while the second is based on a bootstrap approach. An easy-to-use program is presented. A case study is given, based on the Chernobyl children contamination data.
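    The resampling approach can be illustrated with a Monte Carlo permutation test of the hypothesis that two small samples come from the same population, which relies on exactly the equiprobability property mentioned above. A sketch (not the ENEA program; illustrative data):

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_test(x, y, n_perm=20000):
    """Two-sample permutation test on the difference of means.

    Under H0 (same population) every reassignment of the pooled values
    to the two groups is equally probable, so the observed statistic is
    compared with its permutation distribution.
    """
    pooled = np.concatenate([x, y])
    observed = abs(np.mean(x) - np.mean(y))
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        stat = abs(np.mean(perm[:len(x)]) - np.mean(perm[len(x):]))
        count += stat >= observed
    return count / n_perm

x = np.array([1.2, 0.8, 1.5, 1.1, 0.9])   # e.g. contamination data, site A
y = np.array([1.9, 1.6, 2.2, 1.8])        # site B
print(f"permutation p-value: {permutation_test(x, y):.3f}")
```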

  6. A hybrid reliability algorithm using PSO-optimized Kriging model and adaptive importance sampling

    Science.gov (United States)

    Tong, Cao; Gong, Haili

    2018-03-01

    This paper aims to reduce the computational cost of reliability analysis. A new hybrid algorithm is proposed, based on a PSO-optimized Kriging model and an adaptive importance sampling method. First, the particle swarm optimization (PSO) algorithm is used to optimize the parameters of the Kriging model. A typical function is fitted to validate the improvement by comparing the results of the PSO-optimized Kriging model with those of the original Kriging model. Second, a hybrid algorithm for reliability analysis combining the optimized Kriging model and adaptive importance sampling is proposed. Two cases from the literature are given to validate its efficiency and correctness. Comparison results show the proposed method to be more efficient because it requires only a small number of sample points.
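    A minimal sketch of the first step, assuming a leave-one-out error as the PSO objective for tuning a single Kriging (Gaussian process) length scale; the paper's exact parameterization and objective may differ:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(1)

# Training data for a typical test function, as in the validation step.
X = rng.uniform(-3, 3, size=(25, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(25)

def loo_error(length_scale):
    """Leave-one-out RMSE of a Kriging (GP) model with a fixed RBF length scale."""
    errs = []
    for train, test in LeaveOneOut().split(X):
        gp = GaussianProcessRegressor(kernel=RBF(length_scale), optimizer=None)
        gp.fit(X[train], y[train])
        errs.append((gp.predict(X[test])[0] - y[test][0]) ** 2)
    return np.sqrt(np.mean(errs))

# Minimal particle swarm optimization over the length scale in [0.05, 5].
n_particles, n_iter = 8, 20
pos = rng.uniform(0.05, 5.0, n_particles)
vel = np.zeros(n_particles)
pbest, pbest_f = pos.copy(), np.array([loo_error(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)]
for _ in range(n_iter):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.05, 5.0)
    f = np.array([loo_error(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print(f"PSO-optimized length scale: {gbest:.3f}, LOO RMSE: {pbest_f.min():.4f}")
```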

  7. Calibrating the X-ray attenuation of liquid water and correcting sample movement artefacts during in operando synchrotron X-ray radiographic imaging of polymer electrolyte membrane fuel cells.

    Science.gov (United States)

    Ge, Nan; Chevalier, Stéphane; Hinebaugh, James; Yip, Ronnie; Lee, Jongmin; Antonacci, Patrick; Kotaka, Toshikazu; Tabuchi, Yuichiro; Bazylak, Aimy

    2016-03-01

    Synchrotron X-ray radiography, due to its high temporal and spatial resolutions, provides a valuable means for understanding the in operando water transport behaviour in polymer electrolyte membrane fuel cells. The purpose of this study is to address the specific artefact of imaging sample movement, which poses a significant challenge to synchrotron-based imaging for fuel cell diagnostics. Specifically, the impact of the micrometer-scale movement of the sample was determined, and a correction methodology was developed. At a photon energy level of 20 keV, a maximum movement of 7.5 µm resulted in a false water thickness of 0.93 cm (9% higher than the maximum amount of water that the experimental apparatus could physically contain). This artefact was corrected by image translations based on the relationship between the false water thickness value and the distance moved by the sample. The implementation of this correction method led to a significant reduction in false water thickness (to ∼0.04 cm). Furthermore, to account for inaccuracies in pixel intensities due to the scattering effect and higher harmonics, a calibration technique was introduced for the liquid water X-ray attenuation coefficient, which was found to be 0.657 ± 0.023 cm^-1 at 20 keV. The work presented in this paper provides valuable tools for artefact compensation and accuracy improvements for dynamic synchrotron X-ray imaging of fuel cells.
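    Once the attenuation coefficient is calibrated, water thickness follows from the Beer-Lambert law applied pixel-wise to the ratio of wet and dry images. A simplified sketch that omits the movement-artefact and scattering corrections described above (illustrative intensities):

```python
import numpy as np

MU_WATER = 0.657  # calibrated attenuation coefficient of liquid water at 20 keV, cm^-1

def water_thickness(intensity, reference):
    """Liquid water thickness from X-ray transmission via the Beer-Lambert law.

    I = I0 * exp(-mu * t)  =>  t = ln(I0 / I) / mu
    `reference` is the dry-state image, `intensity` the operating image.
    """
    return np.log(reference / intensity) / MU_WATER

# Hypothetical pixel intensities (arbitrary units) of a dry and a wet radiograph.
dry = np.array([1000.0, 1000.0, 1000.0])
wet = np.array([995.0, 960.0, 900.0])
print(water_thickness(wet, dry))  # water path length per pixel, cm
```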

  8. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Reasonable prediction makes significant practical sense for stochastic and unstable time series analysis with small or limited sample size. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or one-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modelling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window for the next one-step-ahead forecast rolls on by adding the most recently derived prediction result while deleting the first value of the previously used sample data set, as sketched below. This rolling mechanism is an efficient technique, with the advantages of improved forecasting accuracy, applicability to limited and unstable data situations, and little computational effort. The general performance, influence of sample size, nonlinear dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
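    The rolling mechanism itself is simple to state: refit the AR model on the current window, predict one step ahead, append the prediction and drop the oldest value. A minimal sketch (hypothetical settlement data; the paper's AR formulation for nonstationary series is more elaborate):

```python
import numpy as np

def ar_coeffs(window, p):
    """Least-squares fit of an AR(p) model (with intercept) to the data window."""
    X = np.column_stack([window[i:len(window) - p + i] for i in range(p)])
    y = window[p:]
    coef, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(y))]), y, rcond=None)
    return coef

def rolling_ar_forecast(series, p=2, steps=5):
    """One-step-ahead AR forecasts with a rolling window.

    After each prediction the window rolls on: the newest predicted value
    is appended and the oldest observation is dropped, keeping the window
    size constant.
    """
    window = list(series)
    preds = []
    for _ in range(steps):
        coef = ar_coeffs(np.array(window), p)
        nxt = coef[:p] @ np.array(window[-p:]) + coef[p]
        preds.append(float(nxt))
        window = window[1:] + [nxt]  # roll the window forward
    return preds

# Hypothetical short settlement sequence (mm).
data = [2.1, 2.9, 3.6, 4.1, 4.7, 5.0, 5.4, 5.6]
print(rolling_ar_forecast(data, p=2, steps=3))
```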

  9. SELF CORRECTION WORKS BETTER THAN TEACHER CORRECTION IN EFL SETTING

    Directory of Open Access Journals (Sweden)

    Azizollah Dabaghi

    2012-11-01

    Learning a foreign language takes place step by step, during which mistakes are to be expected at all stages of learning. EFL learners are usually afraid of making mistakes, which prevents them from being receptive and responsive. Overcoming the fear of mistakes depends on the way mistakes are rectified. It is believed that autonomy and learner-centredness suggest that in some settings learners' self-correction of mistakes might be more beneficial for language learning than teacher correction. This assumption has been the subject of debate for some time. Some researchers believe that correction, whether the teacher's or the learners' own, is effective in showing learners how their current interlanguage differs from the target (Long & Robinson, 1998). Others suggest that correcting students, whether directly or through recasts, is ambiguous and may be perceived by the learner as confirmation of meaning rather than feedback on form (Lyster, 1998a). This study investigates the effects of correction on Iranian intermediate EFL learners' writing composition at Payam Noor University. For this purpose, 90 students majoring in English at Isfahan Payam Noor University were invited to participate in the experiment. They all received a sample TOEFL test, and a total of 60 participants whose scores were within the range of one standard deviation below and above the mean were divided into two equal groups, experimental and control. The experimental group received correction during the experiment, while the control group remained intact and the ordinary teaching process went on. Each group received twelve weekly two-hour sessions of an advanced writing course in which some activities of Modern English (II) were selected. After the treatment, both groups received an immediate post-test, and the experimental group took the second post-test as the delayed recall test with the same design as the

  10. Measuring Blood Glucose Concentrations in Photometric Glucometers Requiring Very Small Sample Volumes.

    Science.gov (United States)

    Demitri, Nevine; Zoubir, Abdelhak M

    2017-01-01

    Glucometers are an important self-monitoring tool for diabetes patients and must therefore exhibit high accuracy as well as good usability. Based on an invasive photometric measurement principle that drastically reduces the volume of the blood sample needed from the patient, we present a framework that is capable of dealing with small blood samples while maintaining the required accuracy. The framework consists of two major parts: 1) image segmentation; and 2) convergence detection. Step 1 is based on iterative mode-seeking methods to estimate the intensity value of the region of interest. We present several variations of these methods and give theoretical proofs of their convergence. Our approach is able to deal with changes in the number and position of clusters without any prior knowledge. Furthermore, we propose a method based on sparse approximation to decrease the computational load while maintaining accuracy. Step 2 is achieved by employing temporal tracking and prediction, thereby decreasing the measurement time and thus improving usability. Our framework is tested on several real datasets with different characteristics. We show that we are able to estimate the underlying glucose concentration from much smaller blood samples than is currently the state of the art, with sufficient accuracy according to the most recent ISO standards, and to reduce measurement time significantly compared to state-of-the-art methods.
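    Step 1 relies on iterative mode-seeking over pixel intensities; the idea can be sketched with a basic one-dimensional mean-shift iteration (not the authors' exact estimator; synthetic intensities):

```python
import numpy as np

def mean_shift_mode(samples, start, bandwidth=5.0, tol=1e-3, max_iter=100):
    """Seek the nearest intensity mode by iterated, kernel-weighted averaging.

    At each step the estimate moves to the Gaussian-weighted mean of the
    samples, converging to a local maximum of the intensity density.
    """
    m = float(start)
    for _ in range(max_iter):
        w = np.exp(-0.5 * ((samples - m) / bandwidth) ** 2)
        m_new = np.sum(w * samples) / np.sum(w)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

rng = np.random.default_rng(2)
# Hypothetical pixel intensities: background cluster near 180,
# reagent region of interest near 120.
pixels = np.concatenate([rng.normal(180, 6, 400), rng.normal(120, 8, 150)])
print(f"ROI intensity mode: {mean_shift_mode(pixels, start=130):.1f}")
```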

  11. Corrective Action Decision Document for Corrective Action Unit 417: Central Nevada Test Area Surface, Nevada

    International Nuclear Information System (INIS)

    1999-01-01

    This Corrective Action Decision Document (CADD) identifies and rationalizes the U.S. Department of Energy, Nevada Operations Office's selection of a recommended corrective action alternative (CAA) appropriate to facilitate the closure of Corrective Action Unit (CAU) 417: Central Nevada Test Area Surface, Nevada, under the Federal Facility Agreement and Consent Order. Located in Hot Creek Valley in Nye County, Nevada, and consisting of three separate land withdrawal areas (UC-1, UC-3, and UC-4), CAU 417 is comprised of 34 corrective action sites (CASs) including 2 underground storage tanks, 5 septic systems, 8 shaker pad/cuttings disposal areas, 1 decontamination facility pit, 1 burn area, 1 scrap/trash dump, 1 outlier area, 8 housekeeping sites, and 16 mud pits. Four field events were conducted between September 1996 and June 1998 to complete a corrective action investigation indicating that the only contaminant of concern was total petroleum hydrocarbon (TPH), which was found in 18 of the CASs. A total of 1,028 samples were analyzed. During this investigation, a statistical approach was used to determine which depth intervals or layers inside individual mud pits and shaker pad areas were above the State action levels for TPH. Other related field sampling activities (i.e., expedited site characterization methods, surface geophysical surveys, direct-push geophysical surveys, direct-push soil sampling, and rotosonic drilling to locate septic leachfields) were conducted in this four-phase investigation; however, no further contaminants of concern (COCs) were identified. During and after the investigation activities, several of the sites which had surface debris but no COCs were cleaned up as housekeeping sites, two septic tanks were closed in place, and two underground storage tanks were removed. The focus of this CADD was to identify CAAs which would promote the prevention or mitigation of human exposure to surface and subsurface soils with contaminant

  12. Dynamic retardation corrections to the mass spectrum of heavy quarkonia

    International Nuclear Information System (INIS)

    Kopalejshvili, T.; Rusetskij, A.

    1996-01-01

    In the framework of the Logunov-Tavkhelidze quasipotential approach, the first-order retardation corrections to the heavy quarkonia mass spectrum are calculated with the use of the stationary wave boundary condition in the covariant kernel of the Bethe-Salpeter equation. As expected, these corrections turn out to be small for all low-lying heavy meson states and vanish in the heavy quark limit (m_Q → ∞). A comparison of the suggested approach to the calculation of retardation corrections with others known in the literature is carried out. 22 refs., 1 tab

  13. Radiometric Correction of Close-Range Spectral Image Blocks Captured Using an Unmanned Aerial Vehicle with a Radiometric Block Adjustment

    Directory of Open Access Journals (Sweden)

    Eija Honkavaara

    2018-02-01

    Unmanned airborne vehicles (UAVs) equipped with novel, miniaturized, 2D frame-format hyper- and multispectral cameras make it possible to conduct remote sensing measurements cost-efficiently and with greater accuracy and detail. In the mapping process, the area of interest is covered by multiple, overlapping, small-format 2D images, which provide redundant information about the object. Radiometric correction of spectral image data is important for eliminating any external disturbance from the captured data. Corrections should include sensor-, atmosphere- and view/illumination-geometry-related (bidirectional reflectance distribution function, BRDF) disturbances. An additional complication is that UAV remote sensing campaigns are often carried out under difficult conditions, with varying illumination and cloudiness. We have developed a global optimization approach for the radiometric correction of UAV image blocks, a radiometric block adjustment. The objective of this study was to implement and assess a combined adjustment approach, including comprehensive consideration of the weighting of various observations. An empirical study was carried out using imagery of winter wheat crops captured with a hyperspectral 2D frame-format camera. The dataset included four separate flights captured during a 2.5 h time period under sunny weather conditions. As outputs, we calculated orthophoto mosaics using the most nadir images and sampled multiple-view hyperspectral spectra for vegetation sample points utilizing multiple images in the dataset. The method provided an automated tool for radiometric correction, efficiently compensating for radiometric disturbances in the images. The global homogeneity factor improved from 12-16% to 4-6% with the corrections, and a reduction in disturbances could be observed in the spectra of the object points sampled from multiple overlapping images. Residuals in the grey and white reflectance panels were less than 5% of the

  14. A Blast Wave Model With Viscous Corrections

    International Nuclear Information System (INIS)

    Yang, Z; Fries, R J

    2017-01-01

    Hadronic observables in the final stage of heavy-ion collisions can be described well by fluid dynamics or blast wave parameterizations. We improve existing blast wave models by adding shear viscous corrections to the particle distributions in the Navier-Stokes approximation. The specific shear viscosity η/s of a hadron gas at the freeze-out temperature is a new parameter in this model. We extract the blast wave parameters with viscous corrections from experimental data, which leads to constraints on the specific shear viscosity at kinetic freeze-out. Preliminary results show η/s is rather small. (paper)

  15. A Blast Wave Model With Viscous Corrections

    Science.gov (United States)

    Yang, Z.; Fries, R. J.

    2017-04-01

    Hadronic observables in the final stage of heavy-ion collisions can be described well by fluid dynamics or blast wave parameterizations. We improve existing blast wave models by adding shear viscous corrections to the particle distributions in the Navier-Stokes approximation. The specific shear viscosity η/s of a hadron gas at the freeze-out temperature is a new parameter in this model. We extract the blast wave parameters with viscous corrections from experimental data, which leads to constraints on the specific shear viscosity at kinetic freeze-out. Preliminary results show η/s is rather small.

  16. Pb isotope analysis of ng size samples by TIMS equipped with a 10^13 Ω resistor using a 207Pb-204Pb double spike

    NARCIS (Netherlands)

    Klaver, M.; Smeets, R.J.; Koornneef, J.M.; Davies, G.R.; Vroon, P.Z.

    2016-01-01

    The use of the double spike technique to correct for instrumental mass fractionation has yielded high precision results for lead isotope measurements by thermal ionisation mass spectrometry (TIMS), but the applicability to ng size Pb samples is hampered by the small size of the

  17. Electromagnetic corrections in η → 3π decays

    Energy Technology Data Exchange (ETDEWEB)

    Ditsche, Christoph; Kubis, Bastian [Universitaet Bonn, Helmholtz-Institut fuer Strahlen- und Kernphysik (Theorie) and Bethe Center for Theoretical Physics, Bonn (Germany); Meissner, Ulf G. [Universitaet Bonn, Helmholtz-Institut fuer Strahlen- und Kernphysik (Theorie) and Bethe Center for Theoretical Physics, Bonn (Germany); Forschungszentrum Juelich, Institut fuer Kernphysik (Theorie), Institute for Advanced Simulations, and Juelich Center for Hadron Physics, Juelich (Germany)

    2009-03-15

    We re-evaluate the electromagnetic corrections to η → 3π decays at next-to-leading order in the chiral expansion, arguing that effects of order e^2(m_u - m_d) disregarded so far are not negligible compared to other contributions of order e^2 times a light-quark mass. Despite the appearance of the Coulomb pole in η → π⁺π⁻π⁰ and cusps in η → 3π⁰, the overall corrections remain small. (orig.)

  18. Investigation of Super Learner Methodology on HIV-1 Small Sample: Application on Jaguar Trial Data.

    Science.gov (United States)

    Houssaïni, Allal; Assoumou, Lambert; Marcelin, Anne Geneviève; Molina, Jean Michel; Calvez, Vincent; Flandre, Philippe

    2012-01-01

    Background. Many statistical models have been tested to predict phenotypic or virological response from genotypic data. A statistical framework called Super Learner has been introduced either to compare different methods/learners (discrete Super Learner) or to combine them in a Super Learner prediction method. Methods. The Jaguar trial is used to apply the Super Learner framework. The Jaguar study is an "add-on" trial comparing the efficacy of adding didanosine to an ongoing failing regimen. Our aim was also to investigate the impact of different cross-validation strategies and different loss functions. Four different partitions between training set and validation set were tested through two loss functions. Six statistical methods were compared. We assessed performance by evaluating R^2 values and accuracy by calculating the rates of patients being correctly classified. Results. Our results indicated that the more recent Super Learner methodology of building a new predictor based on a weighted combination of different methods/learners provided good performance. A simple linear model provided results similar to those of this new predictor. Slight discrepancies arose between the two loss functions investigated, and also between results based on cross-validated risks and results from the full dataset. The Super Learner methodology and the linear model correctly classified around 80% of patients. The difference between the lower and higher rates is around 10 percent. The number of mutations retained in the different learners also varies from 1 to 41. Conclusions. The more recent Super Learner methodology of combining the predictions of many learners provided good performance on our small dataset.
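    The core of the Super Learner construction is a weighted combination of learners, with weights chosen from cross-validated predictions. A minimal sketch for a squared-error loss, using stand-in learners rather than the trial's actual models:

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import KFold

X, y = make_regression(n_samples=60, n_features=8, noise=10.0, random_state=3)

learners = [LinearRegression(), Ridge(alpha=1.0), DecisionTreeRegressor(max_depth=3)]

# Level-one data: cross-validated predictions from each learner.
Z = np.zeros((len(y), len(learners)))
for train, test in KFold(n_splits=5, shuffle=True, random_state=3).split(X):
    for j, learner in enumerate(learners):
        Z[test, j] = learner.fit(X[train], y[train]).predict(X[test])

# Super Learner weights: non-negative least squares on the CV predictions,
# normalized to sum to one.
w, _ = nnls(Z, y)
w /= w.sum()
print("learner weights:", np.round(w, 3))

# The discrete Super Learner simply picks the learner with the smallest CV risk.
risks = ((Z - y[:, None]) ** 2).mean(axis=0)
print("discrete SL choice:", type(learners[int(np.argmin(risks))]).__name__)
```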

  19. SOPPA and CCSD vibrational corrections to NMR indirect spin-spin coupling constants of small hydrocarbons

    Energy Technology Data Exchange (ETDEWEB)

    Faber, Rasmus; Sauer, Stephan P. A. [Department of Chemistry, University of Copenhagen, Universitetsparken 5, DK-2100 Copenhagen Ø (Denmark)

    2015-12-31

    We present zero-point vibrational corrections to the indirect nuclear spin-spin coupling constants in ethyne, ethene, cyclopropene and allene. The calculations have been carried out at the level of the second-order polarization propagator approximation (SOPPA), employing a new implementation in the DALTON program; at the density functional theory level with the B3LYP functional, also employing the DALTON program; and at the level of coupled cluster singles and doubles (CCSD) theory, employing the implementation in the CFOUR program. Specialized coupling constant basis sets, aug-cc-pVTZ-J, have been employed in the calculations. We find that on average the SOPPA results for both the equilibrium geometry values and the zero-point vibrational corrections are in better agreement with the CCSD results than the corresponding B3LYP results. Furthermore, we observe that the vibrational corrections are of the order of 5 Hz for the one-bond carbon-hydrogen couplings and about 1 Hz or smaller for the other couplings, apart from the one-bond carbon-carbon coupling (11 Hz) and the two-bond carbon-hydrogen coupling (4 Hz) in ethyne. However, the inclusion of zero-point vibrational corrections does not lead to better agreement with experiment for all couplings.

  20. Beam hardening correction algorithm in microtomography images

    International Nuclear Information System (INIS)

    Sales, Erika S.; Lima, Inaya C.B.; Lopes, Ricardo T.; Assis, Joaquim T. de

    2009-01-01

    Quantification of the mineral density of bone samples is directly related to the attenuation coefficient of bone. The X-rays used in microtomography imaging are polychromatic and have a moderately broad energy spectrum, which causes the low-energy X-rays passing through a sample to be absorbed, producing a decrease in the attenuation coefficient and possibly artifacts. This decrease in the attenuation coefficient is due to a process called beam hardening. In this work the beam hardening in microtomography images of vertebrae of Wistar rats subjected to a study of hyperthyroidism was corrected by the method of linearization of the projections, discretized using an energy spectrum, also called the Herman spectrum. The results without correction for beam hardening showed significant differences in bone volume, which could lead to a possible diagnosis of osteoporosis. The corrected data, however, showed a decrease in bone volume that was not significant at a 95% confidence interval. (author)

  1. Beam hardening correction algorithm in microtomography images

    Energy Technology Data Exchange (ETDEWEB)

    Sales, Erika S.; Lima, Inaya C.B.; Lopes, Ricardo T., E-mail: esales@con.ufrj.b, E-mail: ricardo@lin.ufrj.b [Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Lab. de Instrumentacao Nuclear; Assis, Joaquim T. de, E-mail: joaquim@iprj.uerj.b [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Inst. Politecnico. Dept. de Engenharia Mecanica

    2009-07-01

    Quantification of the mineral density of bone samples is directly related to the attenuation coefficient of bone. The X-rays used in microtomography imaging are polychromatic and have a moderately broad energy spectrum, which causes the low-energy X-rays passing through a sample to be absorbed, producing a decrease in the attenuation coefficient and possibly artifacts. This decrease in the attenuation coefficient is due to a process called beam hardening. In this work the beam hardening in microtomography images of vertebrae of Wistar rats subjected to a study of hyperthyroidism was corrected by the method of linearization of the projections, discretized using an energy spectrum, also called the Herman spectrum. The results without correction for beam hardening showed significant differences in bone volume, which could lead to a possible diagnosis of osteoporosis. The corrected data, however, showed a decrease in bone volume that was not significant at a 95% confidence interval. (author)
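    In both records above, the correction is a linearization of the projections: each measured polychromatic attenuation value is mapped onto the value a monochromatic beam would have produced, typically via a polynomial calibrated against known thicknesses. A minimal sketch under that assumption (illustrative calibration values):

```python
import numpy as np

# Calibration: measured polychromatic attenuation p = -ln(I/I0) for known
# thicknesses t of a reference material (hypothetical values; beam hardening
# makes p grow sub-linearly with t).
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])        # cm
p = np.array([0.0, 0.46, 0.85, 1.18, 1.46, 1.70])   # measured attenuation

mu_eff = (p[1] - p[0]) / (t[1] - t[0])  # effective linear coefficient near t = 0

# Fit a polynomial that maps measured attenuation back to the linear
# (monochromatic-equivalent) attenuation mu_eff * t.
coeffs = np.polyfit(p, mu_eff * t, deg=3)

def linearize(projection):
    """Apply the linearization correction to a raw projection value or array."""
    return np.polyval(coeffs, projection)

raw = np.array([0.46, 1.18, 1.70])
print(linearize(raw))  # corrected projections, now ~ proportional to thickness
```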

  2. [Aggression and mobbing among correctional officers].

    Science.gov (United States)

    Merecz-Kot, Dorota; Cebrzyńska, Joanna

    2008-01-01

    The paper addresses the issue of violence among correctional officers. The aim of the study was to assess the frequency of exposure to violence in this professional group. The study comprised a sample of 222 correctional officers who voluntarily and anonymously completed the MDM questionnaire. The MDM Questionnaire allows for assessing exposure to aggression and mobbing at work. A preliminary assessment of exposure to single aggressive acts and mobbing shows a quite alarming tendency: around one third of the subjects experienced repetitive aggressive acts from coworkers and/or superiors. The problem of organizational aggression in correctional institutions should be examined in detail in order to develop effective preventive measures against violent behaviors at work.

  3. Corrective Action Plan for Corrective Action Unit 135: Area 25 Underground Storage Tanks, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    Cox, D. H.

    2000-01-01

    The Area 25 Underground Storage Tanks site, Corrective Action Unit (CAU) 135, will be closed by unrestricted release decontamination and verification survey, in accordance with the Federal Facility Agreement and Consent Order (FFACO, 1996). The CAU includes one Corrective Action Site (CAS). The Area 25 Underground Storage Tanks (CAS 25-02-01), referred to as the Engine-Maintenance Assembly and Disassembly (E-MAD) Waste Holdup Tanks and Vault, were used to receive liquid waste from all of the radioactive drains at the E-MAD Facility. Based on the results of the Corrective Action Investigation conducted in June 1999, discussed in the Corrective Action Investigation Plan for Corrective Action Unit 135: Area 25 Underground Storage Tanks, Nevada Test Site, Nevada (DOE/NV, 1999a), one sample from the radiological survey of the concrete vault interior exceeded radionuclide preliminary action levels. The analytes from the sediment samples that exceeded the preliminary action levels are polychlorinated biphenyls, Resource Conservation and Recovery Act metals, total petroleum hydrocarbons as diesel-range organics, and radionuclides. Unrestricted release decontamination and verification involves removal of concrete and the cement-lined pump sump from the vault. After verification that the contamination has been removed, the vault will be repaired with concrete, as necessary. The radiologically and chemically contaminated pump sump and concrete removed from the vault would be disposed of at the Area 5 Radioactive Waste Management Site. The vault interior will be field surveyed following removal of contaminated material to verify that unrestricted release criteria have been achieved.

  4. Corrective Action Plan for Corrective Action Unit 135: Area 25 Underground Storage Tanks, Nevada Test Site, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    D. H. Cox

    2000-07-01

    The Area 25 Underground Storage Tanks site, Corrective Action Unit (CAU) 135, will be closed by unrestricted release decontamination and verification survey, in accordance with the Federal Facility Agreement and Consent Order (FFACO, 1996). The CAU includes one Corrective Action Site (CAS). The Area 25 Underground Storage Tanks (CAS 25-02-01), referred to as the Engine-Maintenance Assembly and Disassembly (E-MAD) Waste Holdup Tanks and Vault, were used to receive liquid waste from all of the radioactive drains at the E-MAD Facility. Based on the results of the Corrective Action Investigation conducted in June 1999, discussed in the Corrective Action Investigation Plan for Corrective Action Unit 135: Area 25 Underground Storage Tanks, Nevada Test Site, Nevada (DOE/NV, 1999a), one sample from the radiological survey of the concrete vault interior exceeded radionuclide preliminary action levels. The analytes from the sediment samples that exceeded the preliminary action levels are polychlorinated biphenyls, Resource Conservation and Recovery Act metals, total petroleum hydrocarbons as diesel-range organics, and radionuclides. Unrestricted release decontamination and verification involves removal of concrete and the cement-lined pump sump from the vault. After verification that the contamination has been removed, the vault will be repaired with concrete, as necessary. The radiologically and chemically contaminated pump sump and concrete removed from the vault would be disposed of at the Area 5 Radioactive Waste Management Site. The vault interior will be field surveyed following removal of contaminated material to verify that unrestricted release criteria have been achieved.

  5. Color quench correction for low level Cherenkov counting.

    Science.gov (United States)

    Tsroya, S; Pelled, O; German, U; Marco, R; Katorza, E; Alfassi, Z B

    2009-05-01

    The Cherenkov counting efficiency varies strongly with color quenching, so correction curves must be used to obtain correct results. The external 152Eu source of a Quantulus 1220 liquid scintillation counting (LSC) system was used to obtain a quench-indicating parameter based on the spectral area ratio. A color quench correction curve for aqueous samples containing 90Sr/90Y was prepared. The main advantage of this method over the common spectral indicators is its usefulness also for low level Cherenkov counting.
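    In practice such a correction curve is a fit of counting efficiency against the quench-indicating parameter measured on standards of known activity; unknown samples are then corrected by interpolation. A schematic sketch (illustrative numbers only):

```python
import numpy as np

# Color quench calibration standards: quench-indicating parameter (spectral
# area ratio from the external 152Eu source) vs. measured Cherenkov counting
# efficiency for 90Sr/90Y (hypothetical values).
qip = np.array([0.30, 0.40, 0.50, 0.60, 0.70])
eff = np.array([0.62, 0.55, 0.46, 0.36, 0.27])

# Quench correction curve: a low-order polynomial fit.
curve = np.polyfit(qip, eff, deg=2)

def activity_bq(net_cpm, sample_qip):
    """Quench-corrected activity: count rate divided by interpolated efficiency."""
    efficiency = np.polyval(curve, sample_qip)
    return net_cpm / 60.0 / efficiency  # counts/min -> counts/s -> Bq

print(f"{activity_bq(net_cpm=840.0, sample_qip=0.48):.1f} Bq")
```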

  6. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    International Nuclear Information System (INIS)

    Adekola, A.S.; Colaresi, J.; Douwen, J.; Jaederstroem, H.; Mueller, W.F.; Yocum, K.M.; Carmichael, K.

    2015-01-01

    Environmental scientific research requires a detector with detection limits low enough to reveal the presence of any contaminant in the sample within a reasonable counting time. Canberra developed the germanium detector geometry called the Small Anode Germanium (SAGe) Well detector, which is now available commercially. The SAGe Well detector is a new type of low-capacitance germanium well detector manufactured using small anode technology, capable of advancing many environmental scientific research applications. The performance of this detector has been evaluated for a range of sample sizes and geometries counted inside the well and on the end cap of the detector. The detector has energy resolution performance similar to semi-planar detectors and offers significant improvement over the existing coaxial and well detectors. Energy resolution performance of 750 eV Full Width at Half Maximum (FWHM) at 122 keV γ-ray energy and resolution of 2.0-2.3 keV FWHM at 1332 keV γ-ray energy are guaranteed for detector volumes up to 425 cm^3. The SAGe Well detector offers an optional 28 mm well diameter with the same energy resolution as the standard 16 mm well. Such outstanding resolution performance will benefit environmental applications in revealing the detailed radionuclide content of samples, particularly at low energy, and will enhance detection sensitivity, resulting in reduced counting time. The detector is compatible with electric coolers without any sacrifice in performance and supports the Canberra mathematical efficiency calibration method (In Situ Object Calibration Software or ISOCS, and Laboratory Sourceless Calibration Software or LABSOCS). In addition, the SAGe Well detector supports true coincidence summing, available in the ISOCS/LABSOCS framework. The improved resolution performance greatly enhances the detection sensitivity of this new detector for a range of sample sizes and geometries counted inside the well. This results in lower minimum detectable

  7. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    Energy Technology Data Exchange (ETDEWEB)

    Adekola, A.S.; Colaresi, J.; Douwen, J.; Jaederstroem, H.; Mueller, W.F.; Yocum, K.M.; Carmichael, K. [Canberra Industries Inc., 800 Research Parkway, Meriden, CT 06450 (United States)

    2015-07-01

    Environmental scientific research requires a detector with detection limits low enough to reveal the presence of any contaminant in the sample within a reasonable counting time. Canberra developed the germanium detector geometry called the Small Anode Germanium (SAGe) Well detector, which is now available commercially. The SAGe Well detector is a new type of low-capacitance germanium well detector manufactured using small anode technology, capable of advancing many environmental scientific research applications. The performance of this detector has been evaluated for a range of sample sizes and geometries counted inside the well and on the end cap of the detector. The detector has energy resolution performance similar to semi-planar detectors and offers significant improvement over the existing coaxial and well detectors. Energy resolution performance of 750 eV Full Width at Half Maximum (FWHM) at 122 keV γ-ray energy and resolution of 2.0-2.3 keV FWHM at 1332 keV γ-ray energy are guaranteed for detector volumes up to 425 cm^3. The SAGe Well detector offers an optional 28 mm well diameter with the same energy resolution as the standard 16 mm well. Such outstanding resolution performance will benefit environmental applications in revealing the detailed radionuclide content of samples, particularly at low energy, and will enhance detection sensitivity, resulting in reduced counting time. The detector is compatible with electric coolers without any sacrifice in performance and supports the Canberra mathematical efficiency calibration method (In Situ Object Calibration Software or ISOCS, and Laboratory Sourceless Calibration Software or LABSOCS). In addition, the SAGe Well detector supports true coincidence summing, available in the ISOCS/LABSOCS framework. The improved resolution performance greatly enhances the detection sensitivity of this new detector for a range of sample sizes and geometries counted inside the well. This results in lower minimum detectable

  8. Cartography of superficial structures of the Essaouira basin subsurface (Morocco) by shallow seismic refraction: contribution of static corrections to the reinterpretation of velocity variations.

    Directory of Open Access Journals (Sweden)

    Dahaoui M.

    2018-01-01

    Static corrections are a necessary step in the seismic processing sequence. This paper presents a study of these corrections in the Essaouira basin. The main objective of this study is to calculate the static corrections by exploiting the seismic data acquired in the field, in order to improve the imaging of deep structures. The task is to determine the top and base of the superficial layers that constitute the weathered zone, while calculating the delays of seismic wave arrivals in these layers. The purpose is to cancel the effect of the topography and the weathered zone, so as to avoid any confusion during seismic and geological interpretation. The results obtained show average values of the static corrections varying between -127 and 282 ms (two-way time), with high values at some locations, particularly in the east and north-east of the basin, indicating the presence of a weathered zone with irregular topography whose thickness and velocities vary laterally. Indeed, velocity variations in the first fifty meters below the surface may introduce significant anomalies in seismic refraction, with serious consequences for interpretation or for the siting of drilling. These variations are mainly due to lateral changes in facies and variations in formation thickness. The calculation of the static corrections revealed high values in certain areas (east and north-east), which will enable future campaigns in these zones to be better oriented. It is therefore necessary to concentrate the seismic core drillings and the shallow refraction seismic profiles by tightening the seismic line mesh, in order to capture the maximum values of the static corrections and thereby obtain better imaging of the reflectors.
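    The station statics themselves follow from a layer-replacement computation: the travel time through the slow weathered zone is removed and replaced by travel at the sub-weathering velocity referenced to a datum. A minimal sketch in two-way time (hypothetical layer model; sign conventions vary between processing packages):

```python
import numpy as np

def static_correction_ms(elev, wz_base, v_wz, v_sub, datum):
    """Two-way static correction for each station, in milliseconds.

    Removes the travel time through the weathered zone (surface down to
    wz_base at the slow velocity v_wz) and replaces it by travel at the
    sub-weathering velocity v_sub referenced to the datum elevation.
    """
    t_weathered = (elev - wz_base) / v_wz      # one-way time in weathered zone
    t_replace = (datum - wz_base) / v_sub      # one-way replacement time to datum
    return -2.0 * (t_weathered - t_replace) * 1000.0  # two-way time, ms

# Hypothetical stations: surface elevation and base of weathered zone (m),
# weathered and sub-weathering velocities (m/s), datum elevation (m).
elevs = np.array([412.0, 405.0, 398.0])
bases = np.array([402.0, 396.0, 380.0])
print(static_correction_ms(elevs, bases, v_wz=800.0, v_sub=2400.0, datum=400.0))
```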

  9. QED radiative corrections to impact factors

    International Nuclear Information System (INIS)

    Kuraev, E.A.; Lipatov, L.N.; Shishkina, T.V.

    2001-01-01

    We consider radiative corrections to the electron and photon impact factors. The generalized eikonal representation for the e+e- scattering amplitude at high energies and fixed momentum transfers is violated by nonplanar diagrams. An additional contribution in the two-loop approximation appears from the Bethe-Heitler mechanism of fermion pair production, with the identity of the fermions in the final state taken into account. The violation of the generalized eikonal representation is also related to charge parity conservation in QED. A one-loop correction to the photon impact factor for small virtualities of the exchanged photon is obtained using the known results for the cross section of e+e- pair production in photon-nucleus interactions.

  10. Relative amplitude preservation processing utilizing surface consistent amplitude correction. Part 3; Surface consistent amplitude correction wo mochiita sotai shinpuku hozon shori. 3

    Energy Technology Data Exchange (ETDEWEB)

    Saeki, T [Japan National Oil Corporation, Tokyo (Japan). Technology Research Center

    1996-10-01

    In the seismic reflection method conducted on the ground surface, the source and geophones are set on the surface, and the observed waveforms are affected by the ground surface and the surface layer. Therefore, the influence of the surface layer must first be removed before the physical properties of the deep underground can be discussed. For the surface-consistent amplitude correction, the properties of the source and geophones were removed by assuming that the observed waveforms can be expressed as convolutions. This is a correction method for obtaining records unaffected by the surface conditions. For the analysis and correction of waveforms, the wavelet transform was examined. Using the amplitude patterns after correction, the significant-signal region, the noise-dominant region, and the surface-wave-dominant region can be separated from one another. Since the corrected amplitude values in the significant-signal region show only small variation, a representative value can be assigned. This can be used for analyzing the surface-consistent amplitude correction. The efficiency of the process can be enhanced by taking the change of frequency into account. 3 refs., 5 figs.
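    The convolutional model becomes additive in the log domain, log A_ij ≈ s_i + r_j, so source and receiver amplitude terms can be estimated by least squares and divided out. A minimal sketch with source and receiver terms only (real implementations add offset and channel terms):

```python
import numpy as np

rng = np.random.default_rng(4)
n_src, n_rec = 5, 6

# Synthetic trace amplitudes following the surface-consistent model:
# A_ij = S_i * R_j * (earth term), here with random log-normal variation.
true_s = rng.uniform(0.5, 2.0, n_src)
true_r = rng.uniform(0.5, 2.0, n_rec)
amp = np.outer(true_s, true_r) * rng.lognormal(0.0, 0.1, (n_src, n_rec))

# Build the linear system log A_ij = s_i + r_j and solve by least squares.
# (A constant can move between s and r without changing the correction.)
rows, cols = np.indices((n_src, n_rec))
G = np.zeros((n_src * n_rec, n_src + n_rec))
G[np.arange(n_src * n_rec), rows.ravel()] = 1.0           # source terms
G[np.arange(n_src * n_rec), n_src + cols.ravel()] = 1.0   # receiver terms
m, *_ = np.linalg.lstsq(G, np.log(amp).ravel(), rcond=None)

s_est, r_est = np.exp(m[:n_src]), np.exp(m[n_src:])
corrected = amp / np.outer(s_est, r_est)  # amplitudes with surface terms removed
print("residual log-amplitude spread:", np.std(np.log(corrected)))
```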

  11. Spectral distribution of particle fluence in small field detectors and its implication on small field dosimetry.

    Science.gov (United States)

    Benmakhlouf, Hamza; Andreo, Pedro

    2017-02-01

    Correction factors for the relative dosimetry of narrow megavoltage photon beams have recently been determined in several publications. These corrections are required because of several small-field effects generally thought to be caused by the lack of lateral charged particle equilibrium (LCPE) in narrow beams. Correction factors for relative dosimetry are ultimately necessary to account for the fluence perturbation caused by the detector. For most small field detectors the perturbation depends on field size, resulting in large correction factors when the field size is decreased. In this work, electron and photon fluence differential in energy were calculated within the radiation sensitive volume of a number of small field detectors for 6 MV linear accelerator beams. The calculated electron spectra were used to determine electron fluence perturbation as a function of field size, and its implications for small field dosimetry were analyzed. Fluence spectra were calculated with the user code PenEasy, based on the PENELOPE Monte Carlo system. The detectors simulated were one liquid ionization chamber, two air ionization chambers, one diamond detector, and six silicon diodes, all manufactured either by PTW or IBA. The spectra were calculated for broad (10 cm × 10 cm) and narrow (0.5 cm × 0.5 cm) photon beams in order to investigate the influence of field size on the fluence spectra and the resulting perturbation. The photon fluence spectra were used to analyze the impact of absorption and generation of photons. These have a direct influence on the electrons generated in the detector radiation sensitive volume. The electron fluence spectra were used to quantify the perturbation effects and their relation to output correction factors. The photon fluence spectra obtained for all detectors were similar to the spectrum in water except for the shielded silicon diodes. The photon fluence in the latter group was strongly influenced, mostly in the low-energy region, by

  12. Preparing Monodisperse Macromolecular Samples for Successful Biological Small-Angle X-ray and Neutron Scattering Experiments

    Science.gov (United States)

    Jeffries, Cy M.; Graewert, Melissa A.; Blanchet, Clément E.; Langley, David B.; Whitten, Andrew E.; Svergun, Dmitri I

    2017-01-01

    Small-angle X-ray and neutron scattering (SAXS and SANS) are techniques used to extract structural parameters and determine the overall structures and shapes of biological macromolecules, complexes and assemblies in solution. The scattering intensities measured from a sample contain contributions from all atoms within the illuminated sample volume including the solvent and buffer components as well as the macromolecules of interest. In order to obtain structural information, it is essential to prepare an exactly matched solvent blank so that background scattering contributions can be accurately subtracted from the sample scattering to obtain the net scattering from the macromolecules in the sample. In addition, sample heterogeneity caused by contaminants, aggregates, mismatched solvents, radiation damage or other factors can severely influence and complicate data analysis so it is essential that the samples are pure and monodisperse for the duration of the experiment. This Protocol outlines the basic physics of SAXS and SANS and reveals how the underlying conceptual principles of the techniques ultimately ‘translate’ into practical laboratory guidance for the production of samples of sufficiently high quality for scattering experiments. The procedure describes how to prepare and characterize protein and nucleic acid samples for both SAXS and SANS using gel electrophoresis, size exclusion chromatography and light scattering. Also included are procedures specific to X-rays (in-line size exclusion chromatography SAXS) and neutrons, specifically preparing samples for contrast matching/variation experiments and deuterium labeling of proteins. PMID:27711050

  13. Correctional Practitioners on Reentry: A Missed Perspective

    Directory of Open Access Journals (Sweden)

    Elaine Gunnison

    2015-06-01

    Much of the literature on reentry of formerly incarcerated individuals revolves around discussions of failures they incur during reintegration, or the identification of needs and challenges that they have during reentry from the perspective of community corrections officers. The present research fills a gap in the reentry literature by examining the needs and challenges of formerly incarcerated individuals and what makes for reentry success from the perspective of correctional practitioners (i.e., wardens and non-wardens). The views of correctional practitioners are important for understanding the level of organizational commitment to reentry and the ways in which social distance between correctional professionals and their clients may impact reentry success. This research reports the results of an email survey distributed to a national sample of correctional officials listed in the American Correctional Association 2012 Directory. Specifically, correctional officials were asked to report on the needs and challenges facing formerly incarcerated individuals, define success, identify factors related to successful reentry, recount success stories, and report what could be done to assist their clients in achieving successful outcomes. Housing and employment were raised by wardens and corrections officials as important needs for successful reentry. Corrections officials adopted organizational and systems perspectives in their responses and had differing opinions about social distance. Policy implications are presented.

  14. Application of bias correction methods to improve U3Si2 sample preparation for quantitative analysis by WDXRF

    Energy Technology Data Exchange (ETDEWEB)

    Scapin, Marcos A.; Guilhen, Sabine N.; Azevedo, Luciana C. de; Cotrim, Marycel E.B.; Pires, Maria Ap. F., E-mail: mascapin@ipen.br, E-mail: snguilhen@ipen.br, E-mail: lvsantana@ipen.br, E-mail: mecotrim@ipen.br, E-mail: mapires@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    The determination of silicon (Si), total uranium (U) and impurities in uranium silicide (U{sub 3}Si{sub 2}) samples by the wavelength dispersive X-ray fluorescence technique (WDXRF) has already been validated and is currently implemented at IPEN's X-Ray Fluorescence Laboratory (IPEN-CNEN/SP) in São Paulo, Brazil. Sample preparation requires approximately 3 g of H{sub 3}BO{sub 3} as sample holder and 1.8 g of U{sub 3}Si{sub 2}. However, because boron is a neutron absorber, this procedure precludes recovery of the U{sub 3}Si{sub 2} sample, which, for routine analysis, may add up to a significant amount of unusable uranium waste. An estimated average of 15 samples per month are expected to be analyzed by WDXRF, resulting in approx. 320 g of U{sub 3}Si{sub 2} that would not return to the nuclear fuel cycle. This not only causes production losses but also creates another problem: radioactive waste management. The purpose of this paper is to present the mathematical models that may be applied to correct the systematic errors introduced when the H{sub 3}BO{sub 3} sample holder is replaced by cellulose acetate {[C{sub 6}H{sub 7}O{sub 2}(OH){sub 3-m}(OOCCH{sub 3}){sub m}], m = 0-3}, thus enabling recovery of the U{sub 3}Si{sub 2} sample. The results demonstrate that the adopted mathematical model is statistically satisfactory, allowing the optimization of the procedure. (author)
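
    The record does not reproduce the models themselves; as a hedged sketch of how such a bias correction is commonly set up, one can regress reference concentrations from the validated H{sub 3}BO{sub 3} procedure on readings obtained with the cellulose-acetate holder (all values below are hypothetical):

```python
import numpy as np

# Hypothetical calibration: paired measurements of the same standards with the
# validated H3BO3 procedure (reference) and the cellulose-acetate holder (raw).
ref = np.array([1.02, 2.05, 4.98, 10.1, 20.3])   # reference concentrations, wt%
raw = np.array([0.95, 1.90, 4.60, 9.40, 18.9])   # cellulose-acetate readings

# Fit corrected = a * raw + b by least squares; a and b absorb the
# systematic bias introduced by the changed sample holder.
a, b = np.polyfit(raw, ref, 1)

def correct(measurement):
    return a * measurement + b

print(correct(9.40))   # bias-corrected estimate for one raw reading
```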

  15. Observing System Simulations for Small Satellite Formations Estimating Bidirectional Reflectance

    Science.gov (United States)

    Nag, Sreeja; Gatebe, Charles K.; de Weck, Olivier

    2015-01-01

    The bidirectional reflectance distribution function (BRDF) gives the reflectance of a target as a function of illumination geometry and viewing geometry, and hence carries information about the anisotropy of the surface. BRDF is needed in remote sensing for the correction of view and illumination angle effects (for example in image standardization and mosaicing), for deriving albedo, for land cover classification, for cloud detection, for atmospheric correction, and other applications. However, current spaceborne instruments provide sparse angular sampling of the BRDF, and airborne instruments are limited in spatial and temporal coverage. To fill the gaps in angular coverage within spatial, spectral and temporal requirements, we propose a new measurement technique: use of small satellites in formation flight, each with a VNIR (visible and near infrared) imaging spectrometer, to make multi-spectral, near-simultaneous measurements of every ground spot in the swath at multiple angles. This paper describes an observing system simulation experiment (OSSE) to evaluate the proposed concept and select the optimal formation architecture that minimizes BRDF uncertainties. The variables of the OSSE are identified: number of satellites, measurement spread in view zenith and relative azimuth with respect to the solar plane, solar zenith angle, BRDF model and wavelength of reflection. Analyzing the sensitivity of the BRDF estimation errors to these variables allows simplification of the OSSE, enabling its use to rapidly evaluate formation architectures. A 6-satellite formation is shown to produce lower BRDF estimation errors, purely in terms of angular sampling as evaluated by the OSSE, than a single spacecraft with 9 forward-aft sensors. We demonstrate the ability to use OSSEs to design small satellite formations as complements to flagship mission data. The formations can fill angular sampling gaps and enable better BRDF products than currently possible.
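
    To make the role of angular sampling concrete, here is a hedged sketch of a linear kernel-driven BRDF inversion of the general kind such OSSEs exercise; the kernel functions and geometries below are illustrative stand-ins, not the models used in the paper:

```python
import numpy as np

# Sketch of a linear kernel-driven BRDF inversion. k_vol and k_geo are toy
# kernels, not the exact Ross-Thick/Li-Sparse forms; the point is that the
# angular sampling (rows of the design matrix) controls the parameter
# uncertainty, which is what the formation architecture trades against.

def design_matrix(theta_v, phi):
    # toy kernels depending on view zenith theta_v and relative azimuth phi
    k_vol = np.cos(theta_v) * np.cos(phi)
    k_geo = np.sin(theta_v) ** 2
    return np.column_stack([np.ones_like(theta_v), k_vol, k_geo])

theta_v = np.radians([0.0, 10, 20, 30, 40, 50])    # hypothetical 6-sat geometry
phi = np.radians([0.0, 60, 120, 180, 240, 300])
A = design_matrix(theta_v, phi)
refl = np.array([0.30, 0.28, 0.26, 0.27, 0.31, 0.29])  # measured reflectances

params, *_ = np.linalg.lstsq(A, refl, rcond=None)
# Parameter uncertainty scales with diag((A^T A)^-1): a denser, better-spread
# angular sampling gives smaller diagonal entries, i.e. lower BRDF error.
print(np.diag(np.linalg.inv(A.T @ A)))
```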

  16. Observing system simulations for small satellite formations estimating bidirectional reflectance

    Science.gov (United States)

    Nag, Sreeja; Gatebe, Charles K.; Weck, Olivier de

    2015-12-01

    The bidirectional reflectance distribution function (BRDF) gives the reflectance of a target as a function of illumination geometry and viewing geometry, and hence carries information about the anisotropy of the surface. BRDF is needed in remote sensing for the correction of view and illumination angle effects (for example in image standardization and mosaicing), for deriving albedo, for land cover classification, for cloud detection, for atmospheric correction, and other applications. However, current spaceborne instruments provide sparse angular sampling of the BRDF, and airborne instruments are limited in spatial and temporal coverage. To fill the gaps in angular coverage within spatial, spectral and temporal requirements, we propose a new measurement technique: use of small satellites in formation flight, each with a VNIR (visible and near infrared) imaging spectrometer, to make multi-spectral, near-simultaneous measurements of every ground spot in the swath at multiple angles. This paper describes an observing system simulation experiment (OSSE) to evaluate the proposed concept and select the optimal formation architecture that minimizes BRDF uncertainties. The variables of the OSSE are identified: number of satellites, measurement spread in view zenith and relative azimuth with respect to the solar plane, solar zenith angle, BRDF model and wavelength of reflection. Analyzing the sensitivity of the BRDF estimation errors to these variables allows simplification of the OSSE, enabling its use to rapidly evaluate formation architectures. A 6-satellite formation is shown to produce lower BRDF estimation errors, purely in terms of angular sampling as evaluated by the OSSE, than a single spacecraft with 9 forward-aft sensors. We demonstrate the ability to use OSSEs to design small satellite formations as complements to flagship mission data. The formations can fill angular sampling gaps and enable better BRDF products than currently possible.

  17. Sample Preparation and Extraction in Small Sample Volumes Suitable for Pediatric Clinical Studies: Challenges, Advances, and Experiences of a Bioanalytical HPLC-MS/MS Method Validation Using Enalapril and Enalaprilat

    Science.gov (United States)

    Burckhardt, Bjoern B.; Laeer, Stephanie

    2015-01-01

    In the USA and Europe, medicines agencies are pushing the development of child-appropriate medications and intend to increase the availability of information on pediatric use. This calls for bioanalytical methods that can deal with small sample volumes, as the trial-related blood loss permitted in children is very restricted. HPLC-MS/MS, which is broadly used and able to cope with small volumes, is susceptible to matrix effects. These hamper precise drug quantification, for example by causing signal suppression. Sophisticated sample preparation and purification utilizing solid-phase extraction was applied to reduce and control matrix effects. A scale-up from a vacuum manifold to a positive-pressure manifold was conducted to meet the demands of high throughput within a clinical setting. The challenges faced, advances made, and experience gained in solid-phase extraction are presented using the bioanalytical method development and validation of low-volume samples (50 μL serum) as an example. Enalapril, enalaprilat, and benazepril served as sample drugs. The applied sample preparation and extraction successfully reduced the absolute and relative matrix effects to comply with international guidelines. Recoveries ranged from 77 to 104% for enalapril and from 93 to 118% for enalaprilat. The bioanalytical method, comprising sample extraction by solid-phase extraction, was fully validated according to FDA and EMA bioanalytical guidelines and was used in a Phase I study in 24 volunteers. PMID:25873972
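
    As a sketch of the two figures of merit discussed (extraction recovery and the matrix effect), following the commonly used EMA/FDA-style definitions; the peak areas below are hypothetical:

```python
# Sketch of the two quality metrics mentioned in the abstract. Numbers are
# hypothetical; definitions follow common bioanalytical guideline usage.

def recovery_percent(extracted_area, unextracted_area):
    # response of an extracted spiked sample vs. a post-extraction spike
    return 100.0 * extracted_area / unextracted_area

def matrix_factor(area_in_matrix, area_in_solvent):
    # peak area in presence of matrix vs. neat solution; ~1.0 means no effect
    return area_in_matrix / area_in_solvent

print(recovery_percent(7.7e5, 1.0e6))   # 77%, the low end reported for enalapril
print(matrix_factor(9.2e5, 1.0e6))      # 0.92 -> mild ion suppression
```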

  18. Sample Preparation and Extraction in Small Sample Volumes Suitable for Pediatric Clinical Studies: Challenges, Advances, and Experiences of a Bioanalytical HPLC-MS/MS Method Validation Using Enalapril and Enalaprilat

    Directory of Open Access Journals (Sweden)

    Bjoern B. Burckhardt

    2015-01-01

    Full Text Available In the USA and Europe, medicines agencies are pushing the development of child-appropriate medications and intend to increase the availability of information on pediatric use. This calls for bioanalytical methods that can deal with small sample volumes, as the trial-related blood loss permitted in children is very restricted. HPLC-MS/MS, which is broadly used and able to cope with small volumes, is susceptible to matrix effects. These hamper precise drug quantification, for example by causing signal suppression. Sophisticated sample preparation and purification utilizing solid-phase extraction was applied to reduce and control matrix effects. A scale-up from a vacuum manifold to a positive-pressure manifold was conducted to meet the demands of high throughput within a clinical setting. The challenges faced, advances made, and experience gained in solid-phase extraction are presented using the bioanalytical method development and validation of low-volume samples (50 μL serum) as an example. Enalapril, enalaprilat, and benazepril served as sample drugs. The applied sample preparation and extraction successfully reduced the absolute and relative matrix effects to comply with international guidelines. Recoveries ranged from 77 to 104% for enalapril and from 93 to 118% for enalaprilat. The bioanalytical method, comprising sample extraction by solid-phase extraction, was fully validated according to FDA and EMA bioanalytical guidelines and was used in a Phase I study in 24 volunteers.

  19. Data-driven motion correction in brain SPECT

    International Nuclear Information System (INIS)

    Kyme, A.Z.; Hutton, B.F.; Hatton, R.L.; Skerrett, D.W.

    2002-01-01

    Patient motion can cause image artifacts in SPECT despite restraining measures. Data-driven detection and correction of motion can be achieved by comparing the acquired data with forward-projections. By optimising the orientation of the reconstructed volume, motion parameters can be obtained for each misaligned projection and applied to update the volume using a 3D reconstruction algorithm. Digital and physical phantom validation was performed to investigate this approach. Noisy projection data simulating at least one fully 3D patient head movement during acquisition were constructed by projecting the digital Huffman brain phantom at various orientations. Motion correction was applied to the reconstructed studies. The importance of including attenuation effects in the estimation of motion and the need for implementing an iterated correction were assessed in the process. Correction success was assessed visually for artifact reduction, and quantitatively using a mean square difference (MSD) measure. Physical Huffman phantom studies with deliberate movements introduced during the acquisition were also acquired and motion corrected. Effective artifact reduction in the simulated corrupt studies was achieved by motion correction. Typically the MSD between the corrupted and reference studies was more than twice that between the corrected and reference studies. Motion correction could be achieved without inclusion of attenuation effects in the motion estimation stage, providing simpler implementation and greater efficiency. Moreover, the additional improvement with multiple iterations of the approach was small. Improvement was also observed in the physical phantom data, though the technique appeared limited here by an object symmetry. Copyright (2002) The Australian and New Zealand Society of Nuclear Medicine Inc
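
    A minimal sketch of the MSD figure of merit used to score the correction (the volumes below are hypothetical toy arrays):

```python
import numpy as np

# Sketch of the mean-square-difference (MSD) measure: the MSD between the
# corrupted and reference volumes should exceed that between the corrected
# and reference volumes (ratios > 2 are reported in the study).

def msd(a, b):
    return np.mean((a - b) ** 2)

reference = np.zeros((8, 8, 8))
corrupted = reference + 0.5          # stand-in for motion artifacts
corrected = reference + 0.2          # stand-in for the motion-corrected study

ratio = msd(corrupted, reference) / msd(corrected, reference)
print(ratio)                         # > 2 indicates effective correction here
```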

  20. Calculation of “LS-curves” for coincidence summing corrections in gamma ray spectrometry

    Science.gov (United States)

    Vidmar, Tim; Korun, Matjaž

    2006-01-01

    When coincidence summing correction factors for extended samples are calculated in gamma-ray spectrometry from full-energy-peak and total efficiencies, their variation over the sample volume needs to be considered. In other words, the correction factors cannot be computed as if the sample were a point source. A method developed by Blaauw and Gelsema takes the variation of the efficiencies over the sample volume into account. It introduces the so-called LS-curve in the calibration procedure and only requires the preparation of a single standard for each sample geometry. We propose to replace the standard preparation by calculation and we show that the LS-curves resulting from our method yield coincidence summing correction factors that are consistent with the LS values obtained from experimental data.
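
    As a hedged illustration of why the point-source assumption fails, the sketch below volume-averages a summing-out correction for a simple two-gamma cascade; the efficiencies are invented and the treatment is far simpler than the Blaauw and Gelsema formalism:

```python
import numpy as np

# eps_p1[i] and eps_t2[i] are hypothetical peak (gamma 1) and total (gamma 2)
# efficiencies at N points spread through the sample volume, for a two-gamma
# cascade where gamma 1's peak is suppressed by summing with gamma 2.

eps_p1 = np.array([0.080, 0.060, 0.045, 0.035, 0.028])  # peak eff. of gamma 1
eps_t2 = np.array([0.300, 0.250, 0.210, 0.180, 0.160])  # total eff. of gamma 2

# Observed peak-1 counts are suppressed by (1 - eps_t2) at each point, so the
# effective correction is a peak-efficiency-weighted volume average.
survival = np.sum(eps_p1 * (1.0 - eps_t2)) / np.sum(eps_p1)
correction_factor = 1.0 / survival      # multiply the net peak area by this

# Contrast with a naive point-source correction taken at the sample centre.
naive = 1.0 / (1.0 - eps_t2[2])
print(correction_factor, naive)
```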

  1. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    Science.gov (United States)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV − I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.
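
    As a hedged sketch of the weighting idea (not the MaNGA pipeline), an inverse-volume weight per galaxy can be formed from its selection redshift limits; the cosmology below is a deliberately crude low-redshift toy:

```python
import numpy as np

# Each galaxy is selected within luminosity-dependent redshift limits
# [z_min, z_max], so weighting by the inverse of the enclosed comoving volume
# pushes the observed sample toward a volume-limited one. The volume function
# is a Euclidean, low-z approximation for illustration only.

def comoving_volume(z, c=3e5, h0=70.0):
    d = c * z / h0                          # Mpc, valid only at very low z
    return 4.0 / 3.0 * np.pi * d ** 3

def volume_weight(z_min, z_max):
    return 1.0 / (comoving_volume(z_max) - comoving_volume(z_min))

print(volume_weight(0.02, 0.06))            # hypothetical limits, one galaxy
```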

  2. Absorption corrections for x-ray fluorescence analysis of environmental samples

    International Nuclear Information System (INIS)

    Bazan, F.; Bonner, N.A.

    1975-01-01

    The discovery of a very simple and useful relationship between the absorption coefficient of a particular element and the ratio of incoherent to coherent scattering by the sample containing the element is discussed. By measuring the absorption coefficients for a few elements in a few samples, absorption coefficients for many elements in an entire set of similar samples can be obtained. (auth)
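
    A minimal sketch of how such a calibration could be used in practice (the values are hypothetical and the exact functional form in the report may differ):

```python
import numpy as np

# Calibrate a linear link between the incoherent/coherent scatter ratio and
# the measured absorption coefficient on a few samples, then predict the
# coefficient for the remaining, similar samples in the set.

ratio_cal = np.array([0.80, 1.10, 1.45, 1.90])    # incoherent/coherent ratios
mu_cal = np.array([4.2, 5.6, 7.4, 9.5])           # measured absorption coeffs

slope, intercept = np.polyfit(ratio_cal, mu_cal, 1)

def predict_mu(scatter_ratio):
    return slope * scatter_ratio + intercept

print(predict_mu(1.30))   # estimate for an unmeasured sample in the same set
```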

  3. Absorption corrections for x-ray fluorescence analysis of environmental samples

    International Nuclear Information System (INIS)

    Bazan, F.; Bonner, N.A.

    1976-01-01

    The discovery of a very simple and useful relationship between the absorption coefficient of a particular element and the ratio of incoherent to coherent scattering by the sample containing the element is discussed. By measuring the absorption coefficients for a few elements in a few samples, absorption coefficients for many elements in an entire set of similar samples can be obtained

  4. Correction of systematic bias in ultrasound dating in studies of small-for-gestational-age birth: an example from the Iowa Health in Pregnancy Study.

    Science.gov (United States)

    Harland, Karisa K; Saftlas, Audrey F; Wallis, Anne B; Yankowitz, Jerome; Triche, Elizabeth W; Zimmerman, M Bridget

    2012-09-01

    The authors examined whether early ultrasound dating (≤20 weeks) of gestational age (GA) in small-for-gestational-age (SGA) fetuses may underestimate gestational duration and therefore the incidence of SGA birth. Within a population-based case-control study (May 2002-June 2005) of Iowa SGA births and preterm deliveries identified from birth records (n = 2,709), the authors illustrate a novel methodological approach with which to assess and correct for systematic underestimation of GA by early ultrasound in women with suspected SGA fetuses. After restricting the analysis to subjects with first-trimester prenatal care, a nonmissing date of the last menstrual period (LMP), and early ultrasound (n = 1,135), SGA subjects' ultrasound GA was 5.5 days less than their LMP GA, on average. Multivariable linear regression was conducted to determine the extent to which ultrasound GA predicted LMP dating and to correct for systematic misclassification that results after applying standard guidelines to adjudicate differences in these measures. In the unadjusted model, SGA subjects required a correction of +1.5 weeks to the ultrasound estimate. With adjustment for maternal age, smoking, and first-trimester vaginal bleeding, standard guidelines for adjudicating differences in ultrasound and LMP dating underestimated SGA birth by 12.9% and overestimated preterm delivery by 8.7%. This methodological approach can be applied by researchers using different study populations in similar research contexts.
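
    A hedged sketch of the regression step (synthetic gestational ages in weeks, not the study's data):

```python
import numpy as np

# Among SGA subjects with reliable LMP dates, model LMP-based gestational age
# as a function of the early-ultrasound age, then use the fit to correct
# ultrasound dating where an SGA fetus is suspected. The paper reports the
# unadjusted model amounting to roughly +1.5 weeks on the ultrasound estimate.

ga_us = np.array([18.0, 19.5, 17.2, 20.0, 16.8, 19.0])   # ultrasound GA, weeks
ga_lmp = np.array([18.9, 20.4, 18.0, 20.7, 17.7, 19.8])  # LMP GA, weeks

b1, b0 = np.polyfit(ga_us, ga_lmp, 1)      # slope, intercept

def corrected_ga(ultrasound_ga):
    return b0 + b1 * ultrasound_ga

print(corrected_ga(18.5))
```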

  5. Adiponectin levels measured in dried blood spot samples from neonates born small and appropriate for gestational age

    DEFF Research Database (Denmark)

    Klamer, A; Skogstrand, Kristin; Hougaard, D M

    2007-01-01

    Adiponectin levels measured in neonatal dried blood spot samples (DBSS) might be affected by both prematurity and being born small for gestational age (SGA). The aim of the study was to measure adiponectin levels in routinely collected neonatal DBSS taken on day 5 (range 3-12) postnatal from...

  6. The use of secondary ion mass spectrometry in forensic analyses of ultra-small samples

    Science.gov (United States)

    Cliff, John

    2010-05-01

    It is becoming increasingly important in forensic science to perform chemical and isotopic analyses on very small sample sizes. Moreover, in some instances the signature of interest may be incorporated in a vast background, making analyses impossible by bulk methods. Recent advances in instrumentation make secondary ion mass spectrometry (SIMS) a powerful tool to apply to these problems. As an introduction, we present three types of forensic analyses in which SIMS may be useful. The causal organism of anthrax (Bacillus anthracis) chelates Ca and other metals during spore formation. Thus, the spores contain a trace element signature related to the growth medium that produced the organisms. Although other techniques have been shown to be useful in analyzing these signatures, the sample size requirements are generally relatively large. We have shown that time-of-flight SIMS (TOF-SIMS), combined with multivariate analysis, can clearly separate Bacillus sp. cultures prepared in different growth media using analytical spot sizes containing approximately one nanogram of spores. An important emerging field in forensic analysis is the provenance of fecal pollution. The strategy of choice for these analyses, developing host-specific nucleic acid probes, has met with considerable difficulty due to the lack of specificity of the probes. One potentially fruitful strategy is to combine in situ nucleic acid probing with high-precision isotopic analyses. Bulk analyses of human and bovine fecal bacteria, for example, indicate a relative difference in δ13C content of about 4 per mil. We have shown that sample sizes of several nanograms can be analyzed with the IMS 1280 with precisions capable of separating two per mil differences in δ13C. The NanoSIMS 50 is capable of much better spatial resolution than the IMS 1280, albeit at a cost in analytical precision. Nevertheless we have documented precision capable of separating five per mil differences in δ13C using analytical spots containing

  7. Drift correction for single-molecule imaging by molecular constraint field, a distance minimum metric

    International Nuclear Information System (INIS)

    Han, Renmin; Wang, Liansan; Xu, Fan; Zhang, Yongdeng; Zhang, Mingshu; Liu, Zhiyong; Ren, Fei; Zhang, Fa

    2015-01-01

    The recent developments of far-field optical microscopy (single-molecule imaging techniques) have overcome the diffraction barrier of light and improved image resolution by a factor of ten compared with conventional light microscopy. These techniques utilize the stochastic switching of probe molecules to overcome the diffraction limit and determine the precise localizations of molecules, which often requires a long image acquisition time. However, long acquisition times increase the risk of sample drift, and in high-resolution microscopy sample drift decreases the image resolution. In this paper, we propose a novel metric based on the distance between molecules to solve the drift-correction problem. The proposed metric directly uses the position information of molecules to estimate the frame drift. We also designed an algorithm to implement the metric for the general application of drift correction. There are two advantages of our method: first, because it does not require spatial binning of molecule positions but operates directly on the positions, it is more natural for single-molecule imaging techniques. Second, it can estimate drift with a small number of positions in each temporal bin, which may extend its potential applications. The effectiveness of our method has been demonstrated on both simulated data and experiments on single-molecule images
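
    A minimal sketch of a distance-minimum drift estimate in this spirit (the optimizer and cost function below are illustrative choices, not the authors' algorithm):

```python
import numpy as np
from scipy.optimize import minimize

# Find the shift of the second temporal bin's molecule positions that
# minimizes the total nearest-neighbour distance to the first bin, operating
# directly on localized positions (no spatial binning).

def nn_cost(shift, pos_ref, pos_moving):
    shifted = pos_moving + shift          # apply candidate 2-D shift
    d = np.linalg.norm(pos_ref[:, None, :] - shifted[None, :, :], axis=2)
    return d.min(axis=0).sum()            # sum of nearest-neighbour distances

rng = np.random.default_rng(0)
pos_ref = rng.uniform(0, 1000, size=(200, 2))            # nm, hypothetical
true_drift = np.array([12.0, -7.0])
pos_moving = pos_ref + true_drift + rng.normal(0, 5, size=(200, 2))

res = minimize(nn_cost, x0=np.zeros(2), args=(pos_ref, pos_moving),
               method="Nelder-Mead")
estimated_drift = -res.x    # the fitted shift undoes the drift, so negate it
print(estimated_drift)      # should be close to [12, -7]
```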

  8. SWOT ANALYSIS ON SAMPLING METHOD

    Directory of Open Access Journals (Sweden)

    CHIS ANCA OANA

    2014-07-01

    Full Text Available Audit sampling involves the application of audit procedures to less than 100% of the items within an account balance or class of transactions. Our article aims to study audit sampling in the audit of financial statements. As an audit technique largely used, in both its statistical and non-statistical forms, the method is very important for auditors. It should be applied correctly to give a fair view of financial statements and to satisfy the needs of all financial users. In order to be applied correctly the method must be understood by all its users, and mainly by auditors. Otherwise, incorrect application risks loss of reputation and credibility, litigation and even imprisonment. Since there is no unitary practice and methodology for applying the technique, the risk of applying it incorrectly is quite high. SWOT analysis is a technique that shows the advantages, disadvantages, threats and opportunities of a subject. We applied SWOT analysis to the study of the sampling method from the perspective of three players: the audit company, the audited entity and the users of financial statements. The study shows that by applying the sampling method the audit company and the audited entity both save time, effort and money. The disadvantages of the method are the difficulty of applying it and of understanding its subtleties. Being largely used as an audit method and being a factor in a correct audit opinion, the sampling method's advantages, disadvantages, threats and opportunities must be understood by auditors.

  9. Evaluation of energy deposition by 153Sm in small samples

    International Nuclear Information System (INIS)

    Cury, M.I.C.; Siqueira, P.T.D.; Yoriyaz, H.; Coelho, P.R.P.; Da Silva, M.A.; Okazaki, K.

    2002-01-01

    Aim: This work presents evaluations of the dose absorbed by 'in vitro' blood cultures when mixed with 153 Sm solutions of different concentrations. Although 153 Sm is used as a radiopharmaceutical mainly for its beta emission, which is short-range radiation, it also emits gamma radiation, which penetrates further. It is therefore difficult to determine the dose absorbed by small samples, for which the infinite-medium approximation is no longer valid. Materials and Methods: MCNP-4C (Monte Carlo N-Particle transport code) has been used to perform the evaluations. It is not a deterministic code that calculates the value of a specific quantity by solving the physical equations involved in the problem, but a virtual experiment in which the events related to the problem are simulated and the quantities of interest are tallied. MCNP also stands out for its ability to specify the geometry of virtually any problem. These features, among others, make MCNP a time-consuming code, however. The simulated problem consists of a cylindrical plastic tube with 1.5 cm internal diameter and 0.1 cm wall thickness, with a 2.0 cm high conic bottom end, so that the represented sample has 4.0 ml (1 ml of blood and 3 ml of culture medium). To evaluate the energy deposition in the blood culture per 153 Sm decay, the problem was divided into 3 steps to account for the β- emissions (which have a continuous spectrum), the gammas, and the conversion and Auger electrons. Each emission contribution was then weighted and summed to give the final value. Besides this radiation 'fragmentation', simulations were performed for many different amounts of 153 Sm solution added to the sample, covering a range from 1 μl to 0.5 ml. Results: The average energy per disintegration of 153 Sm is 331 keV [1]. Gammas account for 63 keV and β-, conversion and Auger electrons account for 268 keV. The simulations performed showed an average energy deposition of 260 keV

  10. An inverse-modelling approach for frequency response correction of capacitive humidity sensors in ABL research with small remotely piloted aircraft (RPA)

    Science.gov (United States)

    Wildmann, N.; Kaufmann, F.; Bange, J.

    2014-09-01

    The measurement of water vapour concentration in the atmosphere is an ongoing challenge in environmental research. Satisfactory solutions exist for ground-based meteorological stations and for measurements of mean values. However, advanced research on thermodynamic processes aloft, above the surface layer and especially in the atmospheric boundary layer (ABL), also requires the resolution of small-scale turbulence. Sophisticated optical instruments are used in airborne meteorology with manned aircraft to achieve the necessary fast-response measurements of the order of 10 Hz (e.g. LiCor 7500). Since these instruments are too large and heavy for application on small remotely piloted aircraft (RPA), a method is presented in this study that enables small capacitive humidity sensors to resolve turbulent eddies of the order of 10 m. The sensor examined here is a polymer-based sensor of the type P14-Rapid, by the Swiss company Innovative Sensor Technologies (IST) AG, with a surface area of less than 10 mm² and negligible weight. A physical and dynamical model of this sensor is described and then inverted in order to restore the original water vapour fluctuations from the sensor measurements. Examples of flight measurements show how the method can be used to correct vertical profiles and resolve turbulence spectra up to about 3 Hz. At an airspeed of 25 m s⁻¹ this corresponds to a spatial resolution of less than 10 m.
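
    If the sensor is modelled to first order as a low-pass filter, tau*dy/dt + y = x, the inversion amounts to adding back the scaled derivative of the (pre-smoothed) measurement. A sketch under that assumption; the response time tau and the smoothing are invented values, not the parameters of the P14-Rapid:

```python
import numpy as np

def restore(y, dt, tau, smooth_n=5):
    """Invert a first-order sensor model: x = y + tau * dy/dt."""
    kernel = np.ones(smooth_n) / smooth_n
    y_s = np.convolve(y, kernel, mode="same")   # suppress noise amplification
    dydt = np.gradient(y_s, dt)
    return y_s + tau * dydt

fs = 100.0                                      # sample rate, Hz
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 2.0 * t)                 # "true" 2 Hz humidity signal
tau = 0.5                                       # assumed sensor response time, s

# Simulate the slow sensor with a discrete first-order filter, then invert.
y = np.zeros_like(x)
for i in range(1, len(t)):
    y[i] = y[i - 1] + (x[i] - y[i - 1]) / (fs * tau)
x_rec = restore(y, 1 / fs, tau)
```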

  11. Case Study: Learner Attitudes Towards the Correction of Mistakes

    Directory of Open Access Journals (Sweden)

    Galina Kavaliauskienė

    2012-07-01

    Full Text Available The objective of the research is to explore learner attitudes to the correction of mistakes, or feedback, as a language learning tool in oral, electronically- and paper-written work, as well as peer correction of mistakes. Feedback is a method used in the teaching of languages to improve performance by sharing observations, concerns and suggestions with regard to written work or oral presentation. It includes not only correcting learners, but also assessing them. Both correction and assessment depend on the mistakes being made, the reasons for the mistakes, and the class activities. Recently the value of feedback in language studies has been a matter of debate among language teaching practitioners. The research into the effects of feedback is far from conclusive. Teachers' and students' expectations of feedback are found to be opposing, and the most frequent reason given is its negative impact on students' confidence and motivation. However, at the university level the issue of feedback has been examined only in passing, and there is insufficient research into learner attitudes to feedback in English for Specific Purposes. The present study seeks to find out whether criticism has a negative impact on student confidence and whether perceptions of feedback depend on professional specialization. The research methods: a survey of students' perceptions of teachers' feedback in various class activities was administered to various groups of undergraduate students of psychology and penitentiary law. Statistical treatment of students' responses using Statistical Package for the Social Sciences software (SPSS) was carried out in order to establish the level of significance for the two small samples of participants. The respondents in this research were students of two different specializations, penitentiary law and psychology, who study English for Specific Purposes at the Faculty of Social Policy, Mykolas Romeris University in Vilnius, Lithuania

  12. Case Study: Learner Attitudes Towards the Correction of Mistakes

    Directory of Open Access Journals (Sweden)

    Galina Kavaliauskienė

    2013-01-01

    Full Text Available The objective of the research is to explore learner attitudes to the correction of mistakes, or feedback, as a language learning tool in oral, electronically- and paper-written work, as well as peer correction of mistakes. Feedback is a method used in the teaching of languages to improve performance by sharing observations, concerns and suggestions with regard to written work or oral presentation. It includes not only correcting learners, but also assessing them. Both correction and assessment depend on the mistakes being made, the reasons for the mistakes, and the class activities. Recently the value of feedback in language studies has been a matter of debate among language teaching practitioners. The research into the effects of feedback is far from conclusive. Teachers' and students' expectations of feedback are found to be opposing, and the most frequent reason given is its negative impact on students' confidence and motivation. However, at the university level the issue of feedback has been examined only in passing, and there is insufficient research into learner attitudes to feedback in English for Specific Purposes. The present study seeks to find out whether criticism has a negative impact on student confidence and whether perceptions of feedback depend on professional specialization. The research methods: a survey of students' perceptions of teachers' feedback in various class activities was administered to various groups of undergraduate students of psychology and penitentiary law. Statistical treatment of students' responses using Statistical Package for the Social Sciences software (SPSS) was carried out in order to establish the level of significance for the two small samples of participants. The respondents in this research were students of two different specializations, penitentiary law and psychology, who study English for Specific Purposes at the Faculty of Social Policy, Mykolas Romeris University in Vilnius

  13. Brief Report: Accuracy and Response Time for the Recognition of Facial Emotions in a Large Sample of Children with Autism Spectrum Disorders

    Science.gov (United States)

    Fink, Elian; de Rosnay, Marc; Wierda, Marlies; Koot, Hans M.; Begeer, Sander

    2014-01-01

    The empirical literature has presented inconsistent evidence for deficits in the recognition of basic emotion expressions in children with autism spectrum disorders (ASD), which may be due to the focus on research with relatively small sample sizes. Additionally, it is proposed that although children with ASD may correctly identify emotion…

  14. Neural network scatter correction technique for digital radiography

    International Nuclear Information System (INIS)

    Boone, J.M.

    1990-01-01

    This paper presents a scatter correction technique based on artificial neural networks. The technique utilizes the acquisition of a conventional digital radiographic image, coupled with the acquisition of a multiple pencil beam (micro-aperture) digital image. Image subtraction results in a sparsely sampled estimate of the scatter component in the image. The neural network is trained to develop a causal relationship between image data on the low-pass filtered open field image and the sparsely sampled scatter image, and then the trained network is used to correct the entire image (pixel by pixel) in a manner which is operationally similar to but potentially more powerful than convolution. The technique is described and is illustrated using clinical primary component images combined with scatter component images that are realistically simulated using the results from previously reported Monte Carlo investigations. The results indicate that an accurate scatter correction can be realized using this technique
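
    A hedged sketch of the training setup described, using a generic regressor on toy data (the real work maps filtered image data to the sparsely sampled scatter estimate; the features, network size and data here are all illustrative):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Learn a pixelwise mapping from the low-pass-filtered open-field image to the
# scatter component, using sparse micro-aperture samples as training targets,
# then predict scatter everywhere and subtract it.

rng = np.random.default_rng(1)
lowpass = rng.uniform(0.2, 1.0, size=(64, 64))       # toy low-pass image
scatter_true = 0.4 * lowpass + 0.05                  # toy scatter relation

# Micro-aperture acquisition: scatter is known only at a sparse pixel grid.
iy, ix = np.meshgrid(np.arange(0, 64, 8), np.arange(0, 64, 8), indexing="ij")
X_train = lowpass[iy.ravel(), ix.ravel()][:, None]
y_train = scatter_true[iy.ravel(), ix.ravel()]

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(X_train, y_train)

scatter_est = net.predict(lowpass.ravel()[:, None]).reshape(64, 64)
primary_est = lowpass - scatter_est                  # scatter-corrected (toy)
```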

  15. Principal components in the discrimination of outliers: A study in simulation sample data corrected by Pearson's and Yates's chi-square distance

    Directory of Open Access Journals (Sweden)

    Manoel Vitor de Souza Veloso

    2016-04-01

    Full Text Available The current study employs Monte Carlo simulation to build a significance test that indicates the principal components that best discriminate outliers. Different sample sizes were generated from multivariate normal distributions with different numbers of variables and correlation structures. Corrections by Pearson's and Yates's chi-square distances were applied for each sample size. Pearson's correction showed the best performance. As the number of variables increased, significance probabilities in favor of hypothesis H0 were reduced. To illustrate the proposed method, it was applied to a multivariate time series of sales volume rates in the state of Minas Gerais, obtained for different market segments.

  16. Corrective Action Decision Document for Corrective Action Unit 417: Central Nevada Test Area Surface, Nevada Appendix D - Corrective Action Investigation Report, Central Nevada Test Area, CAU 417

    International Nuclear Information System (INIS)

    1999-01-01

    This Corrective Action Decision Document (CADD) identifies and rationalizes the U.S. Department of Energy, Nevada Operations Office's selection of a recommended corrective action alternative (CAA) appropriate to facilitate the closure of Corrective Action Unit (CAU) 417: Central Nevada Test Area Surface, Nevada, under the Federal Facility Agreement and Consent Order. Located in Hot Creek Valley in Nye County, Nevada, and consisting of three separate land withdrawal areas (UC-1, UC-3, and UC-4), CAU 417 comprises 34 corrective action sites (CASs) including 2 underground storage tanks, 5 septic systems, 8 shaker pad/cuttings disposal areas, 1 decontamination facility pit, 1 burn area, 1 scrap/trash dump, 1 outlier area, 8 housekeeping sites, and 16 mud pits. Four field events were conducted between September 1996 and June 1998 to complete a corrective action investigation, indicating that the only contaminant of concern was total petroleum hydrocarbon (TPH), which was found in 18 of the CASs. A total of 1,028 samples were analyzed. During this investigation, a statistical approach was used to determine which depth intervals or layers inside individual mud pits and shaker pad areas were above the State action levels for TPH. Other related field sampling activities (i.e., expedited site characterization methods, surface geophysical surveys, direct-push geophysical surveys, direct-push soil sampling, and rotosonic drilling to locate septic leachfields) were conducted in this four-phase investigation; however, no further contaminants of concern (COCs) were identified. During and after the investigation activities, several of the sites which had surface debris but no COCs were cleaned up as housekeeping sites, two septic tanks were closed in place, and two underground storage tanks were removed. The focus of this CADD was to identify CAAs which would promote the prevention or mitigation of human exposure to surface and subsurface soils with contaminant

  17. Isochronicity correction in the CR storage ring

    International Nuclear Information System (INIS)

    Litvinov, S.; Toprek, D.; Weick, H.; Dolinskii, A.

    2013-01-01

    A challenge for nuclear physics is to measure the masses of exotic nuclei up to the limits of nuclear existence, which are characterized by low production cross-sections and short half-lives. The large acceptance Collector Ring (CR) [1] at FAIR [2], tuned in the isochronous ion-optical mode, offers unique possibilities for measuring short-lived and very exotic nuclides. However, in a ring designed for maximal acceptance, many factors limit the resolution. One is a limit in time resolution inversely proportional to the transverse emittance. However, most of the time aberrations can be corrected, and the others become small for a large number of turns. We show how the time correction relates to the corresponding transverse focusing, and that the main correction for large emittance corresponds directly to the chromaticity correction for transverse focusing of the beam. With the help of Monte Carlo simulations for the full acceptance we demonstrate how to correct the revolution times so that, in principle, resolutions of Δm/m = 10^-6 can be achieved. In these calculations the influence of magnet inhomogeneities and extended fringe fields is considered, and a calibration scheme for ions with different mass-to-charge ratios is also presented

  18. A new trajectory correction technique for linacs

    International Nuclear Information System (INIS)

    Raubenheimer, T.O.; Ruth, R.D.

    1990-06-01

    In this paper, we describe a new trajectory correction technique for high energy linear accelerators. Current correction techniques force the beam trajectory to follow misalignments of the Beam Position Monitors. Since the particle bunch has a finite energy spread and particles with different energies are deflected differently, this causes ''chromatic'' dilution of the transverse beam emittance. The algorithm, which we describe in this paper, reduces the chromatic error by minimizing the energy dependence of the trajectory. To test the method we compare the effectiveness of our algorithm with a standard correction technique in simulations on a design linac for a Next Linear Collider. The simulations indicate that chromatic dilution would be debilitating in a future linear collider because of the very small beam sizes required to achieve the necessary luminosity. Thus, we feel that this technique will prove essential for future linear colliders. 3 refs., 6 figs., 2 tabs
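
    A hedged sketch of the underlying least-squares idea, minimizing the orbit together with its difference between two beam energies, in the spirit of dispersion-minimizing steering; the response matrices and readings are synthetic, and this is the general form rather than the paper's specific algorithm:

```python
import numpy as np

# Stack the BPM readings at the nominal energy and the *difference* between
# on- and off-energy trajectories, and solve for corrector kicks that
# minimize both, with a weight w on the dispersive (difference) term.

rng = np.random.default_rng(2)
n_bpm, n_cor = 40, 12
R = rng.normal(size=(n_bpm, n_cor))                 # response, nominal energy
R_off = R * (1 + 0.02 * rng.normal(size=R.shape))   # response, off-energy

x0 = rng.normal(scale=0.5, size=n_bpm)              # uncorrected orbit
x0_off = x0 + rng.normal(scale=0.1, size=n_bpm)     # uncorrected, off-energy

w = 10.0                                            # weight on dispersive term
A = np.vstack([R, w * (R_off - R)])
b = -np.concatenate([x0, w * (x0_off - x0)])
kicks, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.linalg.norm(x0), np.linalg.norm(x0 + R @ kicks))  # orbit before/after
```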

  19. Fat fraction bias correction using T1 estimates and flip angle mapping.

    Science.gov (United States)

    Yang, Issac Y; Cui, Yifan; Wiens, Curtis N; Wade, Trevor P; Friesen-Waldner, Lanette J; McKenzie, Charles A

    2014-01-01

    To develop a new method of reducing T1 bias in proton density fat fraction (PDFF) measured with iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL). PDFF maps reconstructed from high flip angle IDEAL measurements were simulated and acquired from phantoms and volunteer L4 vertebrae. T1 bias was corrected using a priori T1 values for water and fat, both with and without flip angle correction. Signal-to-noise ratio (SNR) maps were used to measure precision of the reconstructed PDFF maps. PDFF measurements acquired using small flip angles were then compared to both sets of corrected large flip angle measurements for accuracy and precision. Simulations show similar results in PDFF error between small flip angle measurements and corrected large flip angle measurements as long as T1 estimates were within one standard deviation from the true value. Compared to low flip angle measurements, phantom and in vivo measurements demonstrate better precision and accuracy in PDFF measurements if images were acquired at a high flip angle, with T1 bias corrected using T1 estimates and flip angle mapping. T1 bias correction of large flip angle acquisitions using estimated T1 values with flip angle mapping yields fat fraction measurements of similar accuracy and superior precision compared to low flip angle acquisitions. Copyright © 2013 Wiley Periodicals, Inc.
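
    A sketch of the correction under the standard spoiled-gradient-echo signal model (the sequence timing and T1 values below are assumptions, not the study's):

```python
import numpy as np

# With a spoiled gradient echo, each species' signal is weighted by
# sin(a) * (1 - E1) / (1 - E1 * cos(a)), where E1 = exp(-TR/T1). Dividing the
# measured water and fat signals by their weightings (using a priori T1
# estimates and the mapped local flip angle) before forming the fat fraction
# removes the T1 bias of a large-flip-angle acquisition.

def t1_weight(flip_deg, tr, t1):
    a = np.radians(flip_deg)
    e1 = np.exp(-tr / t1)
    return np.sin(a) * (1 - e1) / (1 - e1 * np.cos(a))

tr = 10e-3                      # s, assumed repetition time
t1_water, t1_fat = 0.8, 0.3     # s, assumed a priori T1 estimates
flip_map = 18.0                 # deg, from flip-angle mapping (per voxel)

w_meas, f_meas = 4.2, 1.1       # hypothetical IDEAL water/fat magnitudes
w_corr = w_meas / t1_weight(flip_map, tr, t1_water)
f_corr = f_meas / t1_weight(flip_map, tr, t1_fat)
pdff = 100.0 * f_corr / (w_corr + f_corr)   # T1-corrected fat fraction, %
print(pdff)
```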

  20. Tools for Inspecting and Sampling Waste in Underground Radioactive Storage Tanks with Small Access Riser Openings

    International Nuclear Information System (INIS)

    Nance, T.A.

    1998-01-01

    Underground storage tanks with 2 to 3 inch diameter access ports at the Department of Energy's Savannah River Site have been used to store radioactive solvents and sludge. In order to close these tanks, their contents first need to be quantified in terms of volume and chemical and radioactive characteristics. To provide information on the volume of waste contained within the tanks, a small remote inspection system was needed. This inspection system was designed to provide lighting and pan-and-tilt capability in an inexpensive package with zoom and color video. The system also needed to be operated inside a plastic tent built over the access port to contain any contamination exiting the port, and it had to be built to travel through the small port opening and riser pipe, into the evacuated tank space, and back out, with no possibility of becoming caught and blocking the access riser. Long thin plates were found in many access riser pipes, blocking the inspection system from penetrating into the tank interiors. Retrieval tools were therefore developed to clear the plates from the tanks, along with sampling devices that provide safe containment for the samples. This paper discusses the inspection systems, the tools for clearing access pipes, and the solvent sampling tools developed to evaluate the contents of the underground solvent storage tanks
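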

  1. Finite temperature QCD corrections to lepton-pair formation in a quark-gluon plasma

    International Nuclear Information System (INIS)

    Altherr, T.

    1989-02-01

    We discuss the O(α_S) corrections to lepton-pair production in a quark-gluon plasma in equilibrium. The corrections are found to be very small in the domain of interest for ultrarelativistic heavy-ion collisions. Interesting effects, however, appear at the annihilation threshold of the thermalized quarks

  2. Weighted piecewise LDA for solving the small sample size problem in face verification.

    Science.gov (United States)

    Kyperountas, Marios; Tefas, Anastasios; Pitas, Ioannis

    2007-03-01

    A novel algorithm that can be used to boost the performance of face-verification methods that utilize Fisher's criterion is presented and evaluated. The algorithm is applied to similarity, or matching error, data and provides a general solution for overcoming the "small sample size" (SSS) problem, where the lack of sufficient training samples causes improper estimation of a linear separation hyperplane between the classes. Two independent phases constitute the proposed method. Initially, a set of weighted piecewise discriminant hyperplanes is used in order to provide a more accurate discriminant decision than the one produced by the traditional linear discriminant analysis (LDA) methodology. The expected classification ability of this method is investigated through a series of simulations. The second phase defines proper combinations for person-specific similarity scores and describes an outlier removal process that further enhances the classification ability. The proposed technique has been tested on the M2VTS and XM2VTS frontal face databases. Experimental results indicate that the proposed framework greatly improves the face-verification performance.
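
    Not the paper's weighted piecewise method, but a standard baseline for the same SSS problem is to regularize the within-class covariance; a sketch with automatic shrinkage on synthetic matching-score data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# When there are fewer training samples than feature dimensions, the plain
# within-class covariance estimate is singular and LDA is ill-posed. Shrinkage
# keeps the problem well-conditioned. Data here are synthetic score vectors.

rng = np.random.default_rng(3)
n_per_class, dim = 15, 50                     # fewer samples than dimensions
genuine = rng.normal(0.0, 1.0, size=(n_per_class, dim))
impostor = rng.normal(0.8, 1.0, size=(n_per_class, dim))

X = np.vstack([genuine, impostor])
y = np.array([0] * n_per_class + [1] * n_per_class)

lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
lda.fit(X, y)
print(lda.score(X, y))                        # training accuracy of the baseline
```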

  3. Corrections to the Eckhaus' stability criterion for one-dimensional stationary structures

    Science.gov (United States)

    Malomed, B. A.; Staroselsky, I. E.; Konstantinov, A. B.

    1989-01-01

    Two amendments to the well-known Eckhaus stability criterion for small-amplitude non-linear structures, generated by weak instability of a spatially uniform state of a non-equilibrium one-dimensional system against small perturbations with finite wavelengths, are obtained. Firstly, we evaluate small corrections to the main Eckhaus term which, in contrast to that term, do not have a universal form. Comparison of these non-universal corrections with experimental or numerical results makes it possible to select a more relevant form of an effective nonlinear evolution equation. In particular, comparison with such results for convective rolls and Taylor vortices gives arguments in favor of the Swift-Hohenberg equation. Secondly, we derive an analog of the Eckhaus criterion for systems that are degenerate in the sense that, in an expansion of their non-linear parts in powers of the dynamical variables, the second and third degree terms are absent.
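
    For reference, the leading-order criterion that the paper amends can be stated for the real Ginzburg-Landau equation in its standard textbook form (this is the universal main term, not the corrected result derived in the paper):

```latex
% Real Ginzburg-Landau equation: A_t = \epsilon A + A_{xx} - |A|^2 A.
% Periodic solutions A = \sqrt{\epsilon - q^2}\, e^{iqx} exist for q^2 < \epsilon,
% but the Eckhaus criterion states they are stable only in the narrower band
\[
  q^{2} < \frac{\epsilon}{3},
\]
% the paper's corrections are non-universal additions to this leading term.
```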

  4. True coincidence summing correction and mathematical efficiency modeling of a well detector

    Energy Technology Data Exchange (ETDEWEB)

    Jäderström, H., E-mail: henrik.jaderstrom@canberra.com [CANBERRA Industries Inc., 800 Research Parkway, Meriden, CT 06450 (United States); Mueller, W.F. [CANBERRA Industries Inc., 800 Research Parkway, Meriden, CT 06450 (United States); Atrashkevich, V. [Stroitely St 4-4-52, Moscow (Russian Federation); Adekola, A.S. [CANBERRA Industries Inc., 800 Research Parkway, Meriden, CT 06450 (United States)

    2015-06-01

    True coincidence summing (TCS) occurs when two or more photons are emitted from the same decay of a radioactive nuclide and are detected within the resolving time of the gamma-ray detector. TCS changes the net peak areas of the affected full-energy peaks in the spectrum, and the nuclide activity is rendered inaccurate if no correction is performed. TCS is independent of the count rate, but it is strongly dependent on the peak and total efficiency, as well as the characteristics of a given nuclear decay. TCS effects are very prominent for well detectors because of the high efficiencies, and make accounting for TCS a necessity. For CANBERRA's recently released Small Anode Germanium (SAGe) well detector, an extension to CANBERRA's mathematical efficiency calibration method (In Situ Object Calibration Software, ISOCS, and Laboratory SOurceless Calibration Software, LabSOCS) has been developed that allows calculation of peak and total efficiencies for SAGe well detectors. The extension also makes it possible to calculate TCS corrections for well detectors using the standard algorithm provided with CANBERRA's spectroscopy software Genie 2000. The peak and total efficiencies from ISOCS/LabSOCS have been compared to MCNP, with agreement within 3% for peak efficiencies and 10% for total efficiencies for energies above 30 keV. A sample containing Ra-226 daughters has been measured within the well and analyzed with and without TCS correction; applying the correction factor shows significant improvement in the activity determination over the energy range 46–2447 keV. The implementation of ISOCS/LabSOCS for well detectors offers a powerful tool for efficiency calibration of these detectors. The automated algorithm to correct for TCS effects in well detectors makes nuclide-specific calibration unnecessary and offers flexibility in carrying out gamma spectral analysis.
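
    A minimal sketch of where a TCS factor enters a subsequent activity calculation (generic gamma-spectrometry bookkeeping with hypothetical numbers, not Genie 2000's internal algorithm):

```python
# The net peak area is divided by (peak efficiency x emission probability x
# live time) and multiplied by the correction factor that restores counts
# lost (or gained) through coincidence summing.

def activity_bq(net_counts, live_time_s, peak_eff, gamma_yield, tcs_factor):
    return net_counts * tcs_factor / (live_time_s * peak_eff * gamma_yield)

# e.g. a Ra-226 daughter line in a well detector, where summing losses are
# large; all numbers below are invented for illustration.
print(activity_bq(net_counts=5.0e4, live_time_s=3600.0,
                  peak_eff=0.35, gamma_yield=0.46, tcs_factor=1.8))
```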

  5. Corrective Action Investigation Plan for Corrective Action Unit 366: Area 11 Plutonium Valley Dispersion Sites, Nevada National Security Site, Nevada

    International Nuclear Information System (INIS)

    Matthews, Patrick

    2011-01-01

    Corrective Action Unit 366 comprises the six corrective action sites (CASs) listed below: (1) 11-08-01, Contaminated Waste Dump No.1; (2) 11-08-02, Contaminated Waste Dump No.2; (3) 11-23-01, Radioactively Contaminated Area A; (4) 11-23-02, Radioactively Contaminated Area B; (5) 11-23-03, Radioactively Contaminated Area C; and (6) 11-23-04, Radioactively Contaminated Area D. These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives (CAAs). Additional information will be obtained by conducting a corrective action investigation before evaluating CAAs and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of CAAs that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed July 6, 2011, by representatives of the Nevada Division of Environmental Protection and the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 366. The presence and nature of contamination at CAU 366 will be evaluated based on information collected from a field investigation. Radiological contamination will be evaluated based on a comparison of the total effective dose (TED) at sample locations to the dose-based final action level (FAL). The TED will be calculated by summing the estimates of internal and external dose. Results from the analysis of soil samples collected from sample plots will be used to calculate internal radiological dose. Thermoluminescent dosimeters placed at each sample location will be used to measure external radiological dose. Based on historical documentation of the releases

  6. Radiative corrections due to a heavy Higgs-particle

    International Nuclear Information System (INIS)

    Van der Bij, J.J.

    1984-01-01

    The leading two-loop corrections to the rho parameter and to the vector-boson masses were calculated in the limit of large Higgs mass. The corrections appear to be too small to be measured, of the order of a few tenths of a percent. For rho, perturbation theory breaks down at a Higgs mass of 11 TeV or larger; for the vector-boson mass this happens at a Higgs mass of 4 TeV or larger. There is no direct correspondence between these results and the poles at n=3 in the gauged non-linear σ-model

  7. Perspectives of an acoustic–electrostatic/electrodynamic hybrid levitator for small fluid and solid samples

    International Nuclear Information System (INIS)

    Lierke, E G; Holitzner, L

    2008-01-01

    The feasibility of an acoustic–electrostatic hybrid levitator for small fluid and solid samples is evaluated. A proposed design and its theoretical assessment are based on the optional implementation of simple hardware components (ring electrodes) and standard laboratory equipment into typical commercial ultrasonic standing wave levitators. These levitators allow precise electrical charging of drops during syringe- or ink-jet-type deployment. The homogeneous electric 'Millikan field' between the grounded ultrasonic transducer and the electrically charged reflector provide an axial compensation of the sample weight in an indifferent equilibrium, which can be balanced by using commercial optical position sensors in combination with standard electronic PID position control. Radial electrostatic repulsion forces between the charged sample and concentric ring electrodes of the same polarity provide stable positioning at the centre of the levitator. The levitator can be used in a pure acoustic or electrostatic mode or in a hybrid combination of both subsystems. Analytical evaluations of the radial–axial force profiles are verified with detailed numerical finite element calculations under consideration of alternative boundary conditions. The simple hardware modification with implemented double-ring electrodes in ac/dc operation is also feasible for an electrodynamic/acoustic hybrid levitator

  8. Error Correction and Calibration of a Sun Protection Measurement System for Textile Fabrics

    Energy Technology Data Exchange (ETDEWEB)

    Moss, A.R.L

    2000-07-01

    Clothing is increasingly being labelled with a Sun Protection Factor number which indicates the protection against sunburn provided by the textile fabric. This Factor is obtained by measuring the transmittance of samples of the fabric in the ultraviolet region (290-400 nm). The accuracy and hence the reliability of the label depends on the accuracy of the measurement. Some sun protection measurement systems quote a transmittance accuracy at 2%T of ± 1.5%T. This means a fabric classified under the Australian standard (AS/NZ 4399:1996) with an Ultraviolet Protection Factor (UPF) of 40 would have an uncertainty of +15 or -10. This would not allow classification to the nearest 5, and a UVR protection category of 'excellent protection' might in fact be only 'very good protection'. An accuracy of ±0.1%T is required to give a UPF uncertainty of ±2.5. The measurement system then does not contribute significantly to the error, and the problems are now limited to sample conditioning, position and consistency. A commercial sun protection measurement system has been developed by Camspec Ltd which used traceable neutral density filters and appropriate design to ensure high accuracy. The effects of small zero offsets are corrected and the effect of the reflectivity of the sample fabric on the integrating sphere efficiency is measured and corrected. Fabric orientation relative to the light patch is considered. Signal stability is ensured by means of a reference beam. Traceable filters also allow wavelength accuracy to be conveniently checked. (author)
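
    A hedged sketch of the error propagation behind the quoted figures, using the crude approximation UPF ≈ 1/T (the standard actually uses erythemally weighted transmittance, so this captures only the first-order scaling):

```python
# With UPF ~ 1/T, a transmittance error dT propagates as dUPF = dT / T^2.
# At T = 2%T (UPF 50), dT = 1.5%T gives dUPF ~ 37, while dT = 0.1%T gives
# the ~2.5 quoted in the abstract.

def upf(t_frac):
    return 1.0 / t_frac

def upf_uncertainty(t_frac, dt_frac):
    return dt_frac / t_frac ** 2

t, dt_coarse, dt_fine = 0.02, 0.015, 0.001
print(upf(t), upf_uncertainty(t, dt_coarse), upf_uncertainty(t, dt_fine))
# -> 50.0, 37.5, 2.5
```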

  9. Error Correction and Calibration of a Sun Protection Measurement System for Textile Fabrics

    International Nuclear Information System (INIS)

    Moss, A.R.L.

    2000-01-01

    Clothing is increasingly being labelled with a Sun Protection Factor number which indicates the protection against sunburn provided by the textile fabric. This Factor is obtained by measuring the transmittance of samples of the fabric in the ultraviolet region (290-400 nm). The accuracy and hence the reliability of the label depends on the accuracy of the measurement. Some sun protection measurement systems quote a transmittance accuracy at 2%T of ± 1.5%T. This means a fabric classified under the Australian standard (AS/NZ 4399:1996) with an Ultraviolet Protection Factor (UPF) of 40 would have an uncertainty of +15 or -10. This would not allow classification to the nearest 5, and a UVR protection category of 'excellent protection' might in fact be only 'very good protection'. An accuracy of ±0.1%T is required to give a UPF uncertainty of ±2.5. The measurement system then does not contribute significantly to the error, and the problems are now limited to sample conditioning, position and consistency. A commercial sun protection measurement system has been developed by Camspec Ltd which used traceable neutral density filters and appropriate design to ensure high accuracy. The effects of small zero offsets are corrected and the effect of the reflectivity of the sample fabric on the integrating sphere efficiency is measured and corrected. Fabric orientation relative to the light patch is considered. Signal stability is ensured by means of a reference beam. Traceable filters also allow wavelength accuracy to be conveniently checked. (author)

  10. Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models

    Science.gov (United States)

    Raykov, Tenko

    2005-01-01

    A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between the average of resampled conventional noncentrality parameter estimates and their sample counterpart. The…
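
    A sketch of the bootstrap bias-correction relation alluded to, applied to a generic biased statistic standing in for the noncentrality estimator:

```python
import numpy as np

# If the mean of the resampled estimates exceeds the sample estimate, that
# excess estimates the bias, giving theta_bc = 2 * theta_hat - mean(theta_boot).

rng = np.random.default_rng(4)
data = rng.normal(loc=1.0, scale=2.0, size=40)

def theta(sample):
    return sample.mean() ** 2          # a deliberately biased toy statistic

theta_hat = theta(data)
boot = np.array([theta(rng.choice(data, size=data.size, replace=True))
                 for _ in range(2000)])
theta_bc = 2 * theta_hat - boot.mean()   # bias-corrected estimate
print(theta_hat, theta_bc)
```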

  11. Beyond simple small-angle X-ray scattering: developments in online complementary techniques and sample environments.

    Science.gov (United States)

    Bras, Wim; Koizumi, Satoshi; Terrill, Nicholas J

    2014-11-01

    Small- and wide-angle X-ray scattering (SAXS, WAXS) are standard tools in materials research. The simultaneous measurement of SAXS and WAXS data in time-resolved studies has gained popularity due to the complementary information obtained. Furthermore, the combination of these data with non X-ray based techniques, via either simultaneous or independent measurements, has advanced understanding of the driving forces that lead to the structures and morphologies of materials, which in turn give rise to their properties. The simultaneous measurement of different data regimes and types, using either X-rays or neutrons, and the desire to control parameters that initiate and control structural changes have led to greater demands on sample environments. Examples of developments in technique combinations and sample environment design are discussed, together with a brief speculation about promising future developments.

  12. Patient Safety Outcomes in Small Urban and Small Rural Hospitals

    Science.gov (United States)

    Vartak, Smruti; Ward, Marcia M.; Vaughn, Thomas E.

    2010-01-01

    Purpose: To assess patient safety outcomes in small urban and small rural hospitals and to examine the relationship of hospital and patient factors to patient safety outcomes. Methods: The Nationwide Inpatient Sample and American Hospital Association annual survey data were used for analyses. To increase comparability, the study sample was…

  13. Peripheral refractive correction and automated perimetric profiles.

    Science.gov (United States)

    Wild, J M; Wood, J M; Crews, S J

    1988-06-01

    The effect of peripheral refractive error correction on the automated perimetric sensitivity profile was investigated on a sample of 10 clinically normal, experienced observers. Peripheral refractive error was determined at eccentricities of 0 degree, 20 degrees and 40 degrees along the temporal meridian of the right eye using the Canon Autoref R-1, an infra-red automated refractor, under the parametric conditions of the Octopus automated perimeter. Perimetric sensitivity was then determined at these eccentricities (stimulus sizes 0 and III) with and without the appropriate peripheral refractive correction using the Octopus 201 automated perimeter. Within the measurement limits of the experimental procedures employed, perimetric sensitivity was not influenced by peripheral refractive correction.

  14. Comparison of small-group training with self-directed internet-based training in inhaler techniques.

    Science.gov (United States)

    Toumas, Mariam; Basheti, Iman A; Bosnic-Anticevich, Sinthia Z

    2009-08-28

    To compare the effectiveness of small-group training in correct inhaler technique with self-directed Internet-based training. Pharmacy students were randomly allocated to 1 of 2 groups: small-group training (n = 123) or self-directed Internet-based training (n = 113). Prior to intervention delivery, all participants were given a placebo Turbuhaler and product information leaflet and received inhaler technique training based on their group. Technique was assessed following training and predictors of correct inhaler technique were examined. There was a significant improvement in the number of participants demonstrating correct technique in both groups (small-group training, 12% to 63%, p < 0.05; self-directed Internet-based training, 9% to 59%, p < 0.05), with no significant difference between the groups in the percent change (n = 234, p > 0.05). Increased student confidence following the intervention was a predictor for correct inhaler technique. Self-directed Internet-based training is as effective as small-group training in improving students' inhaler technique.

  15. Importance of Attenuation Correction (AC) for Small Animal PET Imaging

    DEFF Research Database (Denmark)

    El Ali, Henrik H.; Bodholdt, Rasmus Poul; Jørgensen, Jesper Tranekjær

    2012-01-01

    was performed. Methods: Ten NMRI nude mice with subcutaneous implantation of human breast cancer cells (MCF-7) were scanned consecutively in small animal PET and CT scanners (MicroPET™ Focus 120 and ImTek's MicroCAT™ II). CT-based AC, PET-based AC and uniform AC methods were compared. Results: The activity...

  16. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...... symmetric non-linear error correction considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes....

  17. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...... symmetric non-linear error correction are considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes....
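
    The excerpts above do not give the model explicitly; a generic nonlinear vector error correction model of the kind these papers study can be written as follows, where g(·) is an asymmetric or non-linear error correction function (the linear case g(u) = u recovers the standard VECM):

```latex
\Delta y_t \;=\; \alpha\, g\!\left(\beta^{\top} y_{t-1}\right)
        \;+\; \sum_{i=1}^{k-1} \Gamma_i\, \Delta y_{t-i} \;+\; \varepsilon_t
```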

  18. Corrective Action Decision Document/Closure Report for Corrective Action Unit 500: Test Cell A Septic System, Nevada Test Site, Nevada, Rev. 0

    International Nuclear Information System (INIS)

    2000-01-01

    This Corrective Action Decision Document/Closure Report (CADD/CR) has been prepared for Corrective Action Unit (CAU) 500: Test Cell A Septic System, in accordance with the Federal Facility Agreement and Consent Order. Located in Area 25 at the Nevada Test Site in Nevada, CAU 500 is comprised of one Corrective Action Site, CAS 25-04-05. This CADD/CR identifies and rationalizes the U.S. Department of Energy, Nevada Operations Office's (DOE/NV's) recommendation that no corrective action is deemed necessary for CAU 500. The Corrective Action Decision Document and Closure Report have been combined into one report based on sample data collected during the field investigation performed between February and May 1999, which showed no evidence of soil contamination at this site. The clean closure justification for CAU 500 is based on these results. Analytes detected were evaluated against preliminary action levels (PALs) to determine contaminants of concern (COCs) for CAU 500, and it was determined that the PALs were not exceeded for total volatile organic compounds, total semivolatile organic compounds, total petroleum hydrocarbons, polychlorinated biphenyls, total Resource Conservation and Recovery Act metals, gamma-emitting radionuclides, isotopic uranium, and strontium-90 for any of the soil samples collected. COCs were identified only within the septic tank and distribution box at the CAU. No COCs were identified outside these two areas; therefore, no corrective action was necessary for the soil. Closure activities were performed to address the COCs identified within the septic tank and distribution box. The DOE/NV recommended that neither corrective action nor a corrective action plan was required at CAU 500. Further, no use restrictions were required to be placed on CAU 500, and the septic tank and distribution box have been closed in accordance with all applicable state and federal regulations for closure of the site

  19. Under-correction of human myopia – Is it myopigenic?: A retrospective analysis of clinical refraction data

    Directory of Open Access Journals (Sweden)

    Balamurali Vasudevan

    2014-07-01

    Conclusion: Under-correction of myopia produced a small but progressively greater degree of myopic progression than did full correction. The present finding is consistent with earlier clinical trials and modeling of human myopia.

  20. Efficient free energy calculations by combining two complementary tempering sampling methods.

    Science.gov (United States)

    Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun

    2017-01-14

    Although energy barriers can be efficiently crossed in reaction coordinate (RC) guided sampling, this type of method suffers from the difficulty of identifying the correct RCs, or from the high dimensionality of the RCs required for a given system. If only approximate RCs with significant barriers are used in the simulations, hidden energy barriers of small to medium height would exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause the problem of insufficient sampling. To address sampling in this so-called hidden barrier situation, here we propose an effective approach to combine temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with the integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD and the sampling of the rest of the DOFs with lower but not negligible barriers is enhanced by ITS. The performance of ITS-TAMD has been examined on three systems involving processes with hidden barriers. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least fivefold even in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of necessary RCs can be reduced. Our work shows more potential applications of the ITS-TAMD method as an efficient and powerful tool for the investigation of a broad range of interesting cases.
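
    A minimal sketch of the ITS ingredient, assuming a hypothetical temperature ladder and uniform weights (in practice the weights n_k are optimized iteratively): sampling runs on an effective potential U_eff = -(1/β₀) ln Σ_k n_k exp(-β_k U), which flattens high-energy regions relative to the target temperature.

```python
# Hedged sketch of the ITS effective potential; ladder and weights are
# hypothetical placeholders, not the paper's tuned values.
import numpy as np

beta0 = 1.0 / (0.008314 * 300.0)                 # target 1/kT, mol/kJ
betas = 1.0 / (0.008314 * np.linspace(300, 600, 8))
n = np.ones_like(betas) / betas.size             # uniform weights (placeholder)

def u_eff(u):
    # U_eff = -(1/beta0) * ln( sum_k n_k * exp(-beta_k * U) )
    return -np.logaddexp.reduce(np.log(n) - np.outer(betas, u).T, axis=-1) / beta0

u = np.linspace(0.0, 50.0, 6)                    # raw energies, kJ/mol
print(u_eff(u))                                  # barriers are compressed on U_eff
```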

  1. Local chromatic correction scheme for LER of PEP-II

    International Nuclear Information System (INIS)

    Forest, E.; Robin, D.; Zholents, A.; Donald, M.; Helm, R.; Irwin, J.; Sullivan, M.K.

    1993-01-01

    The correction of the chromaticity of low-beta insertions in storage rings is usually made with sextupole lenses in the ring arcs. When decreasing the beta functions at the interaction point (IP), this technique becomes fairly ineffective, since it fails to properly correct the higher-order chromatic aberrations. Here we consider the approach for the PEP-II B Factory low energy ring (LER), where the chromatic effects of the quadrupole lenses generating the low beta functions at the IP are corrected locally with two families of sextupoles, one family for each plane. For the IP straight section the lattice is designed in such a way that the chromatic aberrations are made small and sextupole-like aberrations are eliminated. The results of tracking simulations are presented.

  2. Correction of the first order beam transport of the SLC Arcs

    International Nuclear Information System (INIS)

    Walker, N.; Barklow, T.; Emma, P.; Krejcik, P.

    1991-05-01

    Correction of the first order transport of the SLC Arcs has been made possible by a technique which allows the full 4x4 transport matrix across any section of Arc to be experimentally determined. By the introduction of small closed bumps into each achromat, it is possible to substantially correct first order optical errors, and notably the cross plane coupling at the exit of the Arcs. 4 refs., 3 figs

  3. Running coupling corrections to high energy inclusive gluon production

    International Nuclear Information System (INIS)

    Horowitz, W.A.; Kovchegov, Yuri V.

    2011-01-01

    We calculate running coupling corrections for the lowest-order gluon production cross section in high energy hadronic and nuclear scattering using the BLM scale-setting prescription. In the final answer for the cross section the three powers of fixed coupling are replaced by seven factors of running coupling, five in the numerator and two in the denominator, forming a 'septumvirate' of running couplings, analogous to the 'triumvirate' of running couplings found earlier for the small-x BFKL/BK/JIMWLK evolution equations. It is interesting to note that the two running couplings in the denominator of the 'septumvirate' run with complex-valued momentum scales, which are complex conjugates of each other, such that the production cross section is indeed real. We use our lowest-order result to conjecture how running coupling corrections may enter the full fixed-coupling k_T-factorization formula for gluon production which includes nonlinear small-x evolution.

  4. A method for analysing small samples of floral pollen for free and protein-bound amino acids.

    Science.gov (United States)

    Stabler, Daniel; Power, Eileen F; Borland, Anne M; Barnes, Jeremy D; Wright, Geraldine A

    2018-02-01

    Pollen provides floral visitors with essential nutrients including proteins, lipids, vitamins and minerals. As an important nutrient resource for pollinators, including honeybees and bumblebees, pollen quality is of growing interest in assessing available nutrition to foraging bees. To date, quantifying the protein-bound amino acids in pollen has been difficult and methods rely on large amounts of pollen, typically more than 1 g. More usual is to estimate a crude protein value based on the nitrogen content of pollen; however, such methods provide no information on the distribution of essential and non-essential amino acids constituting the proteins. Here, we describe a method of microwave-assisted acid hydrolysis using low amounts of pollen that allows exploration of amino acid composition, quantified using ultra high performance liquid chromatography (UHPLC), and a back calculation to estimate the crude protein content of pollen. Reliable analysis of protein-bound and free amino acids as well as an estimation of crude protein concentration was obtained from pollen samples as low as 1 mg. Greater variation in both protein-bound and free amino acids was found in smaller pollen sample sizes; to account for the loss of amino acids in smaller sample sizes, we suggest a correction factor to apply to specific sample sizes of pollen in order to estimate total crude protein content. The method described in this paper will allow researchers to explore the composition of amino acids in pollen and will aid research assessing the available nutrition to pollinating animals. This method will be particularly useful in assaying the pollen of wild plants, from which it is difficult to obtain large sample weights.
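
    The shape of such a back calculation might look like the sketch below; the correction factors are invented placeholders, not the paper's calibrated values, and the real factors would come from the authors' comparison of small and large sample sizes.

```python
# Hedged sketch of the back calculation: crude protein estimated from the
# summed protein-bound amino acids, scaled by a sample-size-dependent factor.
correction_factor = {1: 1.25, 5: 1.10, 10: 1.05}   # hypothetical, keyed by mass (mg)

def crude_protein_mg(aa_totals_mg, sample_mg):
    """aa_totals_mg: protein-bound amino acid masses (mg) from UHPLC."""
    return sum(aa_totals_mg) * correction_factor[sample_mg]

print(crude_protein_mg([0.05, 0.12, 0.08], sample_mg=1))
```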

  5. Quantum-corrected transient analysis of plasmonic nanostructures

    KAUST Repository

    Uysal, Ismail Enes

    2017-03-08

    A time domain surface integral equation (TD-SIE) solver is developed for quantum-corrected analysis of transient electromagnetic field interactions on plasmonic nanostructures with sub-nanometer gaps. “Quantum correction” introduces an auxiliary tunnel to support the current path that is generated by electrons tunnelling between the nanostructures. The permittivity of the auxiliary tunnel and the nanostructures is obtained from density functional theory (DFT) computations. Electromagnetic field interactions on the combined structure (nanostructures plus auxiliary tunnel connecting them) are computed using a TD-SIE solver. Time domain samples of the permittivity and the Green function required by this solver are obtained from their frequency domain samples (generated from DFT computations) using a semi-analytical method. Accuracy and applicability of the resulting quantum-corrected solver scheme are demonstrated via numerical examples.

  6. Resistivity Correction Factor for the Four-Probe Method: Experiment II

    Science.gov (United States)

    Yamashita, Masato; Yamaguchi, Shoji; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo

    1989-05-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F can be applied to a system consisting of a disk sample and a four-probe array. Measurements are made on isotropic graphite disks and crystalline ITO films. Factor F can correct the apparent variations of the data and lead to reasonable resistivities and sheet resistances. Here factor F is compared to other correction factors; i.e. FASTM and FJIS.
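
    The paper's F depends on the disk and probe-array geometry and is not reproduced in the abstract; the sketch below only shows where such a factor enters the standard thin-sample four-probe relation, with placeholder values.

```python
# Sketch: geometry correction factor F applied to four-probe data on a thin
# disk, rho = (pi * t / ln 2) * (V / I) * F.  F = 0.92 is a placeholder, not
# a value from the paper.
import math

def resistivity(V, I, t, F=1.0):
    """V: probe voltage (V), I: current (A), t: sample thickness (m)."""
    return (math.pi * t / math.log(2)) * (V / I) * F

print(resistivity(V=1.2e-3, I=1.0e-3, t=0.5e-3, F=0.92))  # ohm·m
```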

  7. A TIMS-based method for the high precision measurements of the three-isotope potassium composition of small samples

    DEFF Research Database (Denmark)

    Wielandt, Daniel Kim Peel; Bizzarro, Martin

    2011-01-01

    A novel thermal ionization mass spectrometry (TIMS) method for the three-isotope analysis of K has been developed, and ion chromatographic methods for the separation of K have been adapted for the processing of small samples. The precise measurement of K-isotopes is challenged by the presence of ...

  8. Corrective Action Decision Document for Corrective Action Unit 428: Area 3 Septic Waste Systems 1 and 5, Tonopah Test Range, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    U.S. Department of Energy, Nevada Operations Office

    2000-02-08

    This Corrective Action Decision Document identifies and rationalizes the US Department of Energy, Nevada Operations Office's selection of a recommended corrective action alternative (CAA) appropriate to facilitate the closure of Corrective Action Unit (CAU) 428, Septic Waste Systems 1 and 5, under the Federal Facility Agreement and Consent Order. Located in Area 3 at the Tonopah Test Range (TTR) in Nevada, CAU 428 is comprised of two Corrective Action Sites (CASs): (1) CAS 03-05-002-SW01, Septic Waste System 1 and (2) CAS 03-05-002-SW05, Septic Waste System 5. A corrective action investigation performed in 1999 detected analyte concentrations that exceeded preliminary action levels; specifically, contaminants of concern (COCs) included benzo(a)pyrene in a septic tank integrity sample associated with Septic Tank 33-1A of Septic Waste System 1, and arsenic in a soil sample associated with Septic Waste System 5. During this investigation, three Corrective Action Objectives (CAOs) were identified to prevent or mitigate exposure to contents of the septic tanks and distribution box, to subsurface soil containing COCs, and the spread of COCs beyond the CAU. Based on these CAOs, a review of existing data, future use, and current operations in Area 3 of the TTR, three CAAs were developed for consideration: Alternative 1 - No Further Action; Alternative 2 - Closure in Place with Administrative Controls; and Alternative 3 - Clean Closure by Excavation and Disposal. These alternatives were evaluated based on four general corrective action standards and five remedy selection decision factors. Based on the results of the evaluation, the preferred CAA was Alternative 3. This alternative meets all applicable state and federal regulations for closure of the site and will eliminate potential future exposure pathways to the contaminated soils at the Area 3 Septic Waste Systems 1 and 5.

  9. Corrective Action Plan for Corrective Action Unit 143: Area 25 Contaminated Waste Dumps, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    Gustafason, D.L.

    2001-01-01

    This Corrective Action Plan (CAP) has been prepared for Corrective Action Unit (CAU) 143: Area 25 Contaminated Waste Dumps, Nevada Test Site, Nevada, in accordance with the Federal Facility Agreement and Consent Order of 1996. This CAP provides the methodology for implementing the approved corrective action alternative as listed in the Corrective Action Decision Document (U.S. Department of Energy, Nevada Operations Office, 2000). The CAU includes two Corrective Action Sites (CASs): 25-23-09, Contaminated Waste Dump Number 1; and 25-23-03, Contaminated Waste Dump Number 2. Investigation of CAU 143 was conducted in 1999. Analytes detected during the corrective action investigation were evaluated against preliminary action levels to determine constituents of concern for CAU 143. Radionuclide concentrations in disposal pit soil samples associated with the Reactor Maintenance, Assembly, and Disassembly Facility West Trenches, the Reactor Maintenance, Assembly, and Disassembly Facility East Trestle Pit, and the Engine Maintenance, Assembly, and Disassembly Facility Trench are greater than normal background concentrations. These constituents are identified as constituents of concern for their respective CASs. Closure-in-place with administrative controls involves use restrictions to minimize access and prevent unauthorized intrusive activities, earthwork to fill depressions to original grade, placing additional clean cover material over the previously filled portion of some of the trenches, and placing a secondary or diversion berm around pertinent areas to divert potential storm water run-on.

  10. Radiodiagnosis of diseases of the small intestine

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    The roentgenological appearance of developmental anomalies and various diseases of the small intestine is presented. The roentgenological semiotics of chronic enterocolitis, the absorption failure syndrome, Crohn's disease, tuberculosis, abdominal actinomycosis, carcinoid, benign tumors and small intestine cancer is given. To establish a final correct diagnosis, a complex investigation comprising angiography, computed tomography and ultrasound diagnosis is necessary.

  11. QNB: differential RNA methylation analysis for count-based small-sample sequencing data with a quad-negative binomial model.

    Science.gov (United States)

    Liu, Lian; Zhang, Shao-Wu; Huang, Yufei; Meng, Jia

    2017-08-31

    As a newly emerged research area, RNA epigenetics has drawn increasing attention recently for the participation of RNA methylation and other modifications in a number of crucial biological processes. Thanks to high-throughput sequencing techniques, such as MeRIP-Seq, transcriptome-wide RNA methylation profiles are now available in the form of count-based data, with which it is often of interest to study the dynamics at the epitranscriptomic layer. However, the sample size of an RNA methylation experiment is usually very small due to its costs; additionally, there usually exist a large number of genes whose methylation level cannot be accurately estimated due to their low expression level, making differential RNA methylation analysis a difficult task. We present QNB, a statistical approach for differential RNA methylation analysis with count-based small-sample sequencing data. Compared with previous approaches such as the DRME model, which is based on a statistical test covering the IP samples only with 2 negative binomial distributions, QNB is based on 4 independent negative binomial distributions with their variances and means linked by local regressions, and in this way the input control samples are also properly taken care of. In addition, different from the DRME approach, which relies on the input control sample alone for estimating the background, QNB uses a more robust estimator for gene expression by combining information from both input and IP samples, which could largely improve the testing performance for very lowly expressed genes. QNB showed improved performance on both simulated and real MeRIP-Seq datasets when compared with competing algorithms, and the QNB model is also applicable to other datasets related to RNA modifications, including but not limited to RNA bisulfite sequencing, m1A-Seq, Par-CLIP, RIP-Seq, etc.

  12. Empirical Correction to the Likelihood Ratio Statistic for Structural Equation Modeling with Many Variables.

    Science.gov (United States)

    Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu

    2015-06-01

    Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of a SEM model is T_ML, a slight modification to the likelihood ratio statistic. Under the normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not too small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correct T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict type I errors of T_ML as reported in the literature, and they perform well.
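
    The mechanics of a Bartlett-type rescaling can be sketched as below: estimate a multiplicative factor so the statistic's mean matches the nominal degrees of freedom, then divide. The simulated draws stand in for replications of T_ML under a correct model; the paper's actual correction is a formula in N and p rather than a Monte Carlo estimate.

```python
# Minimal sketch of an empirical Bartlett-type correction for a test statistic.
import numpy as np

rng = np.random.default_rng(1)
df = 20
T_sim = rng.chisquare(df, size=500) * 1.15   # stand-in: inflated T_ML draws

c_hat = T_sim.mean() / df                    # empirical correction factor
T_observed = 31.0                            # hypothetical observed statistic
T_corrected = T_observed / c_hat
print(c_hat, T_corrected)                    # corrected value is referred to chi2(df)
```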

  13. A compact time-of-flight SANS instrument optimised for measurements of small sample volumes at the European Spallation Source

    Energy Technology Data Exchange (ETDEWEB)

    Kynde, Søren, E-mail: kynde@nbi.ku.dk [Niels Bohr Institute, University of Copenhagen (Denmark); Hewitt Klenø, Kaspar [Niels Bohr Institute, University of Copenhagen (Denmark); Nagy, Gergely [SINQ, Paul Scherrer Institute (Switzerland); Mortensen, Kell; Lefmann, Kim [Niels Bohr Institute, University of Copenhagen (Denmark); Kohlbrecher, Joachim, E-mail: Joachim.kohlbrecher@psi.ch [SINQ, Paul Scherrer Institute (Switzerland); Arleth, Lise, E-mail: arleth@nbi.ku.dk [Niels Bohr Institute, University of Copenhagen (Denmark)

    2014-11-11

    The high flux at European Spallation Source (ESS) will allow for performing experiments with relatively small beam-sizes while maintaining a high intensity of the incoming beam. The pulsed nature of the source makes the facility optimal for time-of-flight small-angle neutron scattering (ToF-SANS). We find that a relatively compact SANS instrument becomes the optimal choice in order to obtain the widest possible q-range in a single setting and the best possible exploitation of the neutrons in each pulse and hence obtaining the highest possible flux at the sample position. The instrument proposed in the present article is optimised for performing fast measurements of small scattering volumes, typically down to 2×2×2 mm{sup 3}, while covering a broad q-range from about 0.005 1/Å to 0.5 1/Å in a single instrument setting. This q-range corresponds to that available at a typical good BioSAXS instrument and is relevant for a wide set of biomacromolecular samples. A central advantage of covering the whole q-range in a single setting is that each sample has to be loaded only once. This makes it convenient to use the fully automated high-throughput flow-through sample changers commonly applied at modern synchrotron BioSAXS-facilities. The central drawback of choosing a very compact instrument is that the resolution in terms of δλ/λ obtained with the short wavelength neutrons becomes worse than what is usually the standard at state-of-the-art SANS instruments. Our McStas based simulations of the instrument performance for a set of characteristic biomacromolecular samples show that the resulting smearing effects still have relatively minor effects on the obtained data and can be compensated for in the data analysis. However, in cases where a better resolution is required in combination with the large simultaneous q-range characteristic of the instrument, we show that this can be obtained by inserting a set of choppers.
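
    The q-range arithmetic behind such a single-setting design follows from q = 4π sin(θ)/λ, with the extremes set by the shortest/longest wavelengths in the pulse and the detector geometry. The numbers below are illustrative, not the proposed instrument's parameters.

```python
# Sketch of the ToF-SANS q-range arithmetic; geometry values are hypothetical.
import math

def q_value(r_m, L_m, wavelength_A):
    theta = 0.5 * math.atan2(r_m, L_m)   # half the scattering angle
    return 4.0 * math.pi * math.sin(theta) / wavelength_A

print(q_value(r_m=0.02, L_m=5.0, wavelength_A=10.0))  # ~q_min, 1/Angstrom
print(q_value(r_m=0.5,  L_m=1.0, wavelength_A=3.0))   # ~q_max, 1/Angstrom
```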

  14. Beyond simple small-angle X-ray scattering: developments in online complementary techniques and sample environments

    Directory of Open Access Journals (Sweden)

    Wim Bras

    2014-11-01

    Small- and wide-angle X-ray scattering (SAXS, WAXS) are standard tools in materials research. The simultaneous measurement of SAXS and WAXS data in time-resolved studies has gained popularity due to the complementary information obtained. Furthermore, the combination of these data with non X-ray based techniques, via either simultaneous or independent measurements, has advanced understanding of the driving forces that lead to the structures and morphologies of materials, which in turn give rise to their properties. The simultaneous measurement of different data regimes and types, using either X-rays or neutrons, and the desire to control parameters that initiate and control structural changes have led to greater demands on sample environments. Examples of developments in technique combinations and sample environment design are discussed, together with a brief speculation about promising future developments.

  15. THE FLATULENCE SYMPTOM IN SMALL CHILDREN: CAUSES AND WAYS OF CORRECTION

    Directory of Open Access Journals (Sweden)

    A. N. Surkov

    2013-01-01

    Gastrointestinal tract malfunctions, food allergy, intestinal microbiocenosis disorder, disaccharide insufficiency, celiac disease and several other causes lead to increased gas formation, overdistension of intestinal loops and abdominal pains in 0-1-year-old children. The crucial task of flatulence elimination is the correction of the causes of its occurrence. Frequent intestinal spasm episodes in infants reduce the quality of life of them and their families in general and are also associated with subsequent impairment of the child's physical and mental development. A simethicone-based suspension (in the form of an antifoaming agent) helps to cope with the issue; it has carminative properties, which allow it to reduce the amount of gases in the intestinal lumen, thus relieving the pain symptoms.

  16. An electron microscope for the aberration-corrected era

    Energy Technology Data Exchange (ETDEWEB)

    Krivanek, O.L. [Nion Co., 1102 8th Street, Kirkland, WA 98033 (United States)], E-mail: krivanek.ondrej@gmail.com; Corbin, G.J.; Dellby, N.; Elston, B.F.; Keyse, R.J.; Murfitt, M.F.; Own, C.S.; Szilagyi, Z.S.; Woodruff, J.W. [Nion Co., 1102 8th Street, Kirkland, WA 98033 (United States)

    2008-02-15

    Improved resolution made possible by aberration correction has greatly increased the demands on the performance of all parts of high-end electron microscopes. In order to meet these demands, we have designed and built an entirely new scanning transmission electron microscope (STEM). The microscope includes a flexible illumination system that allows the properties of its probe to be changed on-the-fly, a third-generation aberration corrector which corrects all geometric aberrations up to fifth order, an ultra-responsive yet stable five-axis sample stage, and a flexible configuration of optimized detectors. The microscope features many innovations, such as a modular column assembled from building blocks that can be stacked in almost any order, in situ storage and cleaning facilities for up to five samples, computer-controlled loading of samples into the column, and self-diagnosing electronics. The microscope construction is described, and examples of its capabilities are shown.

  17. An electron microscope for the aberration-corrected era

    International Nuclear Information System (INIS)

    Krivanek, O.L.; Corbin, G.J.; Dellby, N.; Elston, B.F.; Keyse, R.J.; Murfitt, M.F.; Own, C.S.; Szilagyi, Z.S.; Woodruff, J.W.

    2008-01-01

    Improved resolution made possible by aberration correction has greatly increased the demands on the performance of all parts of high-end electron microscopes. In order to meet these demands, we have designed and built an entirely new scanning transmission electron microscope (STEM). The microscope includes a flexible illumination system that allows the properties of its probe to be changed on-the-fly, a third-generation aberration corrector which corrects all geometric aberrations up to fifth order, an ultra-responsive yet stable five-axis sample stage, and a flexible configuration of optimized detectors. The microscope features many innovations, such as a modular column assembled from building blocks that can be stacked in almost any order, in situ storage and cleaning facilities for up to five samples, computer-controlled loading of samples into the column, and self-diagnosing electronics. The microscope construction is described, and examples of its capabilities are shown

  18. A combined approach of generalized additive model and bootstrap with small sample sets for fault diagnosis in fermentation process of glutamate.

    Science.gov (United States)

    Liu, Chunbo; Pan, Feng; Li, Yun

    2016-07-29

    Glutamate is of great importance in the food and pharmaceutical industries. There is still a lack of effective statistical approaches for fault diagnosis in the fermentation process of glutamate. To date, the statistical approach based on the generalized additive model (GAM) and bootstrap has not been used for fault diagnosis in fermentation processes, much less the fermentation process of glutamate with small sample sets. A combined approach of GAM and bootstrap was developed for online fault diagnosis in the fermentation process of glutamate with small sample sets. GAM was first used to model the relationship between glutamate production and different fermentation parameters using online data from four normal fermentation experiments of glutamate. The fitted GAM with fermentation time, dissolved oxygen, oxygen uptake rate and carbon dioxide evolution rate captured 99.6 % of the variance of glutamate production during the fermentation process. Bootstrap was then used to quantify the uncertainty of the estimated production of glutamate from the fitted GAM using a 95 % confidence interval. The proposed approach was then used for online fault diagnosis in abnormal fermentation processes of glutamate, and a fault was defined as the estimated production of glutamate falling outside the 95 % confidence interval. The online fault diagnosis based on the proposed approach identified not only the start of a fault in the fermentation process, but also the end of the fault when the fermentation conditions were back to normal. The proposed approach used only a small sample set from normal fermentation experiments to establish the model, and then required only online recorded data on fermentation parameters for fault diagnosis in the fermentation process of glutamate. The proposed approach based on GAM and bootstrap provides a new and effective way for fault diagnosis in the fermentation process of glutamate with small sample sets.

  19. Direct analysis of 210Pb in sediment samples: Self-absorption corrections

    International Nuclear Information System (INIS)

    Cutshall, N.H.; Larsen, I.L.; Olsen, C.R.

    1983-01-01

    A procedure for the direct γ-ray instrumental analysis of 210Pb in sediment samples is presented. The problem of the dependence of self-absorption on sample composition is solved by making a direct transmission measurement on each sample. The procedure has been verified by intercalibrations and other tests. (orig.)
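
    The transmission-based self-absorption correction commonly associated with this procedure scales the apparent activity by ln(T)/(T−1), where T is the measured transmittance of the low-energy 210Pb gamma line (46.5 keV) through the sample relative to a blank; the factor tends to 1 as T → 1. The count rate and transmittance below are made-up values.

```python
# Hedged sketch of a transmission-based (Cutshall-style) self-absorption
# correction; A_measured and T are hypothetical numbers, not data.
import math

def self_absorption_factor(T):
    # ln(T)/(T - 1) -> 1 as T -> 1 (no absorption), > 1 for absorbing samples
    return math.log(T) / (T - 1.0)

A_measured = 120.0   # apparent net count rate
T = 0.65             # transmittance, sample vs. blank
print(A_measured * self_absorption_factor(T))
```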

  20. X-ray fluorescence microscopy artefacts in elemental maps of topologically complex samples: Analytical observations, simulation and a map correction method

    Science.gov (United States)

    Billè, Fulvio; Kourousias, George; Luchinat, Enrico; Kiskinova, Maya; Gianoncelli, Alessandra

    2016-08-01

    XRF spectroscopy is among the most widely used non-destructive techniques for elemental analysis. Despite the known angular dependence of X-ray fluorescence (XRF), topological artefacts remain an unresolved issue when using X-ray micro- or nano-probes. In this work we investigate the origin of the artefacts in XRF imaging of topologically complex samples, an unresolved problem in studies of organic matter due to the limited travel distances of the low-energy XRF emission from the light elements. In particular, we mapped Human Embryonic Kidney (HEK293T) cells. The exemplary results with biological samples, obtained with a soft X-ray scanning microscope installed at a synchrotron facility, were used for testing a mathematical model based on detector response simulations, and for proposing an artefact correction method based on directional derivatives. Despite the peculiar and specific application, the methodology can be easily extended to hard X-rays and to set-ups with multi-array detector systems when the dimensions of surface reliefs are of the order of the probing beam size.

  1. Corrective action investigation plan for Corrective Action Unit Number 427: Area 3 septic waste system numbers 2 and 6, Tonopah Test Range, Nevada

    International Nuclear Information System (INIS)

    1997-01-01

    This Corrective Action Investigation Plan (CAIP) contains the environmental sample collection objectives and the criteria for conducting site investigation activities at the Area 3 Compound, specifically Corrective Action Unit (CAU) Number 427, which is located at the Tonopah Test Range (TTR). The TTR, included in the Nellis Air Force Range, is approximately 255 kilometers (140 miles) northwest of Las Vegas, Nevada. The Corrective Action Unit Work Plan, Tonopah Test Range, Nevada divides investigative activities at TTR into Source Groups. The Septic Tanks and Lagoons Group consists of seven CAUs. Corrective Action Unit Number 427 is one of three septic waste system CAUs in TTR Area 3. Corrective Action Unit Numbers 405 and 428 will be investigated at a future date. Corrective Action Unit Number 427 is comprised of Septic Waste Systems Number 2 and 6 with respective CAS Numbers 03-05-002-SW02 and 03-05-002-SW06.

  2. Correction of bias in belt transect studies of immotile objects

    Science.gov (United States)

    Anderson, D.R.; Pospahala, R.S.

    1970-01-01

    Unless a correction is made, population estimates derived from a sample of belt transects will be biased if a fraction of the individuals on the sample transects are not counted. An approach, useful for correcting this bias when sampling immotile populations using transects of a fixed width, is presented. The method assumes that a searcher's ability to find objects near the center of the transect is nearly perfect. The method utilizes a mathematical equation, estimated from the data, to represent the searcher's inability to find all objects at increasing distances from the center of the transect. An example of the analysis of data, formation of the equation, and application is presented using waterfowl nesting data collected in Colorado.
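
    The correction idea can be sketched as follows: fit a detection function g(x) with g(0) = 1 (perfect detection at the centerline), then divide the raw count by the mean detectability across the strip. The half-normal form and all numbers below are illustrative assumptions, not the authors' fitted equation or data.

```python
# Hedged sketch of a strip-transect detectability correction.
import numpy as np

def g(x, sigma):
    # half-normal detection function with g(0) = 1
    return np.exp(-x**2 / (2 * sigma**2))

W = 50.0                                   # strip half-width, m (hypothetical)
sigma_hat = 30.0                           # hypothetical fitted shape parameter
x = np.linspace(0.0, W, 1000)
p_bar = np.trapz(g(x, sigma_hat), x) / W   # mean detection probability on strip

n_seen = 84                                # raw count (hypothetical)
print(n_seen / p_bar)                      # bias-corrected abundance on the strip
```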

  3. Correction of population stratification in large multi-ethnic association studies.

    Directory of Open Access Journals (Sweden)

    David Serre

    2008-01-01

    The vast majority of genetic risk factors for complex diseases have, taken individually, a small effect on the end phenotype. Population-based association studies therefore need very large sample sizes to detect significant differences between affected and non-affected individuals. Including thousands of affected individuals in a study requires recruitment in numerous centers, possibly from different geographic regions. Unfortunately such a recruitment strategy is likely to complicate the study design and to generate concerns regarding population stratification. We analyzed 9,751 individuals representing three main ethnic groups - Europeans, Arabs and South Asians - that had been enrolled from 154 centers involving 52 countries for a global case/control study of acute myocardial infarction. All individuals were genotyped at 103 candidate genes using 1,536 SNPs selected with a tagging strategy that captures most of the genetic diversity in different populations. We show that relying solely on self-reported ethnicity is not sufficient to exclude population stratification and we present additional methods to identify and correct for stratification. Our results highlight the importance of carefully addressing population stratification and of carefully "cleaning" the sample prior to analyses to obtain stronger signals of association and to avoid spurious results.
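
    The abstract does not specify the authors' correction methods; one standard technique for this problem is to include the top principal components of the genotype matrix as covariates, so broad ancestry differences do not masquerade as case/control signal. A minimal sketch on simulated stand-in data:

```python
# Hedged sketch of PC-based stratification adjustment (simulated data,
# linear model used as a stand-in for the usual logistic regression).
import numpy as np

rng = np.random.default_rng(2)
G = rng.binomial(2, 0.3, size=(500, 200)).astype(float)  # individuals x SNPs
Gc = G - G.mean(axis=0)                                  # center each SNP
U, s, Vt = np.linalg.svd(Gc, full_matrices=False)
pcs = U[:, :5] * s[:5]          # top 5 PCs capture broad ancestry structure

y = rng.binomial(1, 0.5, size=500).astype(float)         # case/control labels
X = np.column_stack([np.ones(500), G[:, 0], pcs])        # test SNP + PC covariates
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta[1])                  # PC-adjusted effect of the test SNP
```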

  4. Study on porosity of ceramic SiC using small angle neutron scattering

    International Nuclear Information System (INIS)

    Li Jizhou; Yang Jilian; Kang Jian; Ye Chuntang

    1996-01-01

    The mechanical properties of functional heat-resistant SiC ceramics are significantly influenced by the concentration and dimensions of pores. Small angle neutron scattering measurements for 3 SiC samples with different densities are performed on the C1-2 SANS instrument of the University of Tokyo. Two groups of neutron data are obtained using secondary flight paths of 8 and 16 m and neutron wavelengths of 1 and 0.7 nm, respectively. After background subtraction and transmission correction, the two data sets are merged. The scattering patterns of the 3 samples over the Q range 0.028∼0.5 nm⁻¹ are almost axially symmetric, showing that the shape of the pores is nearly spherical. Using a Mellin transform, the size distributions of pores in the 3 samples are obtained. The average pore size (∼19 nm) for the hot-pressed SiC sample with higher density is smaller than that of the others (∼21 nm), which appears to explain why the density of the hot-pressed SiC sample is higher than that of the samples that were not hot-pressed.

  5. Air slab-correction for Γ-ray attenuation measurements

    Science.gov (United States)

    Mann, Kulwinder Singh

    2017-12-01

    Gamma (γ)-ray shielding behaviour (GSB) of a material can be ascertained from its linear attenuation coefficient (μ, cm⁻¹). Narrow-beam transmission geometry is required for μ-measurement. In such measurements, a thin slab of the material has to be inserted between the point-isotropic γ-ray source and the detector assembly. Accuracy in these measurements requires that the sample's optical thickness (OT) remains below 0.5 mean free path (mfp). Sometimes it is very difficult to produce a thin slab of the sample (absorber); on the other hand, for a thick absorber, i.e., OT > 0.5 mfp, the influence of the air displaced by it cannot be ignored during μ-measurements. Thus, for a thick sample, a correction factor has been suggested which compensates for the air present in the transmission geometry. This correction factor has been named the air-slab correction (ASC). Six samples of low-Z engineering materials (cement-black, clay, red-mud, lime-stone, cement-white and plaster-of-paris) have been selected for investigating the effect of ASC on μ-measurements at three γ-ray energies (661.66, 1173.24, 1332.50 keV). The measurements have been made using point-isotropic γ-ray sources (Cs-137 and Co-60), a NaI(Tl) detector and a multi-channel analyser coupled with a personal computer. Theoretical values of μ have been computed using a GRIC2-toolkit (standardized computer programme). Elemental compositions of the samples were measured with a Wavelength Dispersive X-ray Fluorescence (WDXRF) analyser. Inter-comparison of measured and computed μ-values suggested that the application of ASC helps in precise μ-measurement for thick samples of low-Z materials. Thus, this hitherto widely ignored ASC factor is recommended for use in similar γ-ray measurements.
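
    The abstract does not give the ASC formula. One plausible reading, sketched below under that assumption, is that the raw transmission ratio yields μ_sample − μ_air (because the slab replaces an equal thickness of air), so the attenuation coefficient of air is added back in; the μ_air value used is a placeholder.

```python
# Hedged sketch of one possible air-slab correction; mu_air is a placeholder
# and the formula is an assumption, not taken from the paper.
import math

def mu_corrected(I0, I, t_cm, mu_air=1.0e-4):
    # ln(I0/I)/t gives mu_sample - mu_air when the slab displaces air
    return math.log(I0 / I) / t_cm + mu_air

print(mu_corrected(I0=25000, I=14000, t_cm=4.0))   # cm^-1, hypothetical counts
```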

  6. Gravimetric and volumetric approaches adapted for hydrogen sorption measurements with in situ conditioning on small sorbent samples

    International Nuclear Information System (INIS)

    Poirier, E.; Chahine, R.; Tessier, A.; Bose, T.K.

    2005-01-01

    We present high sensitivity (0 to 1 bar, 295 K) gravimetric and volumetric hydrogen sorption measurement systems adapted for in situ sample conditioning at high temperature and high vacuum. These systems are designed especially for experiments on sorbents available in small masses (mg) and requiring thorough degassing prior to sorption measurements. Uncertainty analysis from instrumental specifications and hydrogen absorption measurements on palladium are presented. The gravimetric and volumetric systems yield cross-checkable results within about 0.05 wt % on samples weighing from (3 to 25) mg. Hydrogen storage capacities of single-walled carbon nanotubes measured at 1 bar and 295 K with both systems are presented

  7. Improving PET Quantification of Small Animal [68Ga]DOTA-Labeled PET/CT Studies by Using a CT-Based Positron Range Correction.

    Science.gov (United States)

    Cal-Gonzalez, Jacobo; Vaquero, Juan José; Herraiz, Joaquín L; Pérez-Liva, Mailyn; Soto-Montenegro, María Luisa; Peña-Zalbidea, Santiago; Desco, Manuel; Udías, José Manuel

    2018-01-19

    Image quality of positron emission tomography (PET) tracers that emit high-energy positrons, such as Ga-68, Rb-82, or I-124, is significantly affected by positron range (PR) effects. PR effects are especially important in small animal PET studies, since they can limit the spatial resolution and quantitative accuracy of the images. Since generator accessibility has made Ga-68 tracers widely available, the aim of this study is to show how the quantitative results of [68Ga]DOTA-labeled PET/X-ray computed tomography (CT) imaging of neuroendocrine tumors in mice can be improved using positron range correction (PRC). Eighteen scans in 12 mice were evaluated, with three different models of tumors: PC12, AR42J, and meningiomas. In addition, three different [68Ga]DOTA-labeled radiotracers were used to evaluate the PRC with different tracer distributions: [68Ga]DOTANOC, [68Ga]DOTATOC, and [68Ga]DOTATATE. Two PRC methods were evaluated: a tissue-dependent correction (TD-PRC) and a tissue-dependent spatially-variant correction (TDSV-PRC). Taking a region in the liver as reference, the tissue-to-liver ratio values for tumor tissue (TLR_tumor), lung (TLR_lung), and necrotic areas within the tumors (TLR_necrotic) and their respective relative variations (ΔTLR) were evaluated. All TLR values in the PRC images were significantly different (p < 0.05) from those in the uncorrected images, and PRC improved the quantification of [68Ga]DOTA-labeled PET/CT imaging of mice with neuroendocrine tumors, hence demonstrating that these techniques could also ameliorate the deleterious effect of the positron range in clinical PET imaging.

  8. Computer processing of 14C data; statistical tests and corrections of data

    International Nuclear Information System (INIS)

    Obelic, B.; Planinic, J.

    1977-01-01

    The described computer program calculates the age of samples and performs statistical tests and corrections of data. Data are obtained from the proportional counter that measures anticoincident pulses per 20-minute intervals. After every 9th interval the counter measures the total number of counts per interval. Input data are punched on cards. The output list contains the input data schedule and the following results: mean CPM value, correction of CPM for normal pressure and temperature (NTP), sample age calculation based on 14C half-lives of 5570 and 5730 years, age correction for NTP, dendrochronological corrections and the relative radiocarbon concentration. All results are given with one standard deviation. An input data test (Chauvenet's criterion), gas purity test, standard deviation test and a test of the data processor are also included in the program. (author)
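
    The age computation itself follows from the decay law, t = (T½ / ln 2) · ln(A₀/A), evaluated for whichever half-life is chosen; the activity ratio below is a made-up input.

```python
# Sketch of the radiocarbon age arithmetic for both half-lives the program
# supports (5570 and 5730 a); the activity ratio is hypothetical.
import math

def c14_age(activity_ratio, half_life):
    """activity_ratio = A_standard / A_sample (> 1 for old samples), years out."""
    return half_life / math.log(2) * math.log(activity_ratio)

for hl in (5570.0, 5730.0):
    print(hl, c14_age(activity_ratio=1.5, half_life=hl))
```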

  9. Liquid scintillation counting system with automatic gain correction

    International Nuclear Information System (INIS)

    Frank, R.B.

    1976-01-01

    An automatic liquid scintillation counting apparatus is described including a scintillating medium in the elevator ram of the sample changing apparatus. An appropriate source of radiation, which may be the external source for standardizing samples, produces reference scintillations in the scintillating medium which may be used for correction of the gain of the counting system

  10. Corrective Action Investigation Plan for Corrective Action Unit 232: Area 25 Sewage Lagoons, Nevada Test Site, Nevada, Revision 0

    International Nuclear Information System (INIS)

    1999-01-01

    The Corrective Action Investigation Plan for Corrective Action Unit 232, Area 25 Sewage Lagoons, has been developed in accordance with the Federal Facility Agreement and Consent Order that was agreed to by the U.S. Department of Energy, Nevada Operations Office; the State of Nevada Division of Environmental Protection; and the U.S. Department of Defense. Corrective Action Unit 232 consists of Corrective Action Site 25-03-01, Sewage Lagoon. Corrective Action Unit 232, Area 25 Sewage Lagoons, received sanitary effluent from four buildings within the Test Cell 'C' Facility from the mid-1960s through approximately 1996. The Test Cell 'C' Facility was used to develop nuclear propulsion technology by conducting nuclear test reactor studies. Based on the site history collected to support the Data Quality Objectives process, contaminants of potential concern include volatile organic compounds, semivolatile organic compounds, Resource Conservation and Recovery Act metals, petroleum hydrocarbons, polychlorinated biphenyls, pesticides, herbicides, gamma-emitting radionuclides, isotopic plutonium, isotopic uranium, and strontium-90. A detailed conceptual site model is presented in Section 3.0 and Appendix A of this Corrective Action Investigation Plan. The conceptual model serves as the basis for the sampling strategy. Under the Federal Facility Agreement and Consent Order, the Corrective Action Investigation Plan will be submitted to the Nevada Division of Environmental Protection for approval. Field work will be conducted following approval of the plan. The results of the field investigation will support a defensible evaluation of corrective action alternatives in the Corrective Action Decision Document.

  11. Corrective Action Investigation Plan for Corrective Action Unit 232: Area 25 Sewage Lagoons, Nevada Test Site, Nevada, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    USDOE/NV

    1999-05-01

    The Corrective Action Investigation Plan for Corrective Action Unit 232, Area 25 Sewage Lagoons, has been developed in accordance with the Federal Facility Agreement and Consent Order that was agreed to by the U.S. Department of Energy, Nevada Operations Office; the State of Nevada Division of Environmental Protection; and the U.S. Department of Defense. Corrective Action Unit 232 consists of Corrective Action Site 25-03-01, Sewage Lagoon. Corrective Action Unit 232, Area 25 Sewage Lagoons, received sanitary effluent from four buildings within the Test Cell 'C' Facility from the mid-1960s through approximately 1996. The Test Cell 'C' Facility was used to develop nuclear propulsion technology by conducting nuclear test reactor studies. Based on the site history collected to support the Data Quality Objectives process, contaminants of potential concern include volatile organic compounds, semivolatile organic compounds, Resource Conservation and Recovery Act metals, petroleum hydrocarbons, polychlorinated biphenyls, pesticides, herbicides, gamma emitting radionuclides, isotopic plutonium, isotopic uranium, and strontium-90. A detailed conceptual site model is presented in Section 3.0 and Appendix A of this Corrective Action Investigation Plan. The conceptual model serves as the basis for the sampling strategy. Under the Federal Facility Agreement and Consent Order, the Corrective Action Investigation Plan will be submitted to the Nevada Division of Environmental Protection for approval. Field work will be conducted following approval of the plan. The results of the field investigation will support a defensible evaluation of corrective action alternatives in the Corrective Action Decision Document.

  12. Inverse Gaussian model for small area estimation via Gibbs sampling

    African Journals Online (AJOL)

    We present a Bayesian method for estimating small area parameters under an inverse Gaussian model. The method is extended to estimate small area parameters for finite populations. The Gibbs sampler is proposed as a mechanism for implementing the Bayesian paradigm. We illustrate the method by application to ...

  13. Correction of incomplete penoscrotal transposition by a modified Glenn-Anderson technique

    Directory of Open Access Journals (Sweden)

    Saleh Amin

    2010-01-01

    Purpose: Penoscrotal transposition may be partial or complete, resulting in variable degrees of positional exchange between the penis and the scrotum. Repairs of penoscrotal transposition rely on the creation of rotational flaps to mobilise the scrotum downwards or transpose the penis to a neo-hole created in the skin of the mons pubis. All known techniques result in a complete circular incision around the root of the penis, producing severe and massive oedema of the penile skin, which delays correction of the associated hypospadias and increases the incidence of complications, as the skin vascularity and lymphatics are impaired by the designed incision. A new design that prevents this post-operative oedema, allows early correction of the associated hypospadias and lowers the incidence of possible complications has been used, and its results were compared with those of other methods of correction. Materials and Methods: Ten patients with incomplete penoscrotal transposition were corrected by designing rotational flaps that push the scrotum back while the penile skin remains attached by a small strip to the skin of the mons pubis. Results: All patients showed an excellent cosmetic outcome. There was minimal post-operative oedema and no vascular compromise to the penile or scrotal skin. Correction of the associated hypospadias can be performed in the same sitting or in another sitting, with no or minimal complications. Conclusion: This modification, which maintains the penile skin connected to the skin of the lower abdomen by a small strip of skin during correction of penoscrotal transposition, prevents post-operative oedema and improves healing with an excellent cosmetic appearance, allows one-stage repair with minimal complications and reduces post-operative complications such as urinary fistula and flap necrosis.

  14. Corrective Action Investigation Plan for Corrective Action Unit 262: Area 25 Septic Systems and Underground Discharge Point, Nevada Test Site, Nevada, Revision No. 1 (9/2001)

    International Nuclear Information System (INIS)

    2000-01-01

    This corrective action investigation plan contains the U.S. Department of Energy, Nevada Operations Office's approach to collect data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 262 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 262 consists of nine Corrective Action Sites (CASs): Underground Storage Tank (25-02-06), Septic Systems A and B (25-04-06), Septic System (25-04-07), Leachfield (25-05-03), Leachfield (25-05-05), Leachfield (25-05-06), Radioactive Leachfield (25-05-08), Leachfield (25-05-12), and Dry Well (25-51-01). Situated in Area 25 at the Nevada Test Site (NTS), sites addressed by CAU 262 are located at the Reactor-Maintenance, Assembly, and Disassembly (R-MAD); Test Cell C; and Engine-Maintenance, Assembly, and Disassembly (E-MAD) facilities. The R-MAD, Test Cell C, and E-MAD facilities supported nuclear rocket reactor and engine testing as part of the Nuclear Rocket Development Station. The activities associated with the testing program were conducted between 1958 and 1973. Based on site history collected to support the Data Quality Objectives process, contaminants of potential concern (COPCs) for the site include oil/diesel-range total petroleum hydrocarbons, volatile organic compounds, semivolatile organic compounds, polychlorinated biphenyls, Resource Conservation and Recovery Act metals, and gamma-emitting radionuclides, isotopic uranium, isotopic plutonium, strontium-90, and tritium. The scope of the corrective action field investigation at the CAU will include the inspection of portions of the collection systems, sampling the contents of collection system features in situ of leachfield logging materials, surface soil sampling, collection of samples of soil underlying the base of inlet and outfall ends of septic tanks and outfall ends of diversion structures and distribution boxes, collection of soil samples from biased or a combination of

  15. Some recoil corrections to the hydrogen hyperfine splitting

    International Nuclear Information System (INIS)

    Bodwin, G.T.; Yennie, D.R.

    1988-01-01

    We compute all of the recoil corrections to the ground-state hyperfine splitting in hydrogen, with the exception of the proton polarizability, that are required to achieve an accuracy of 1 ppm. Our approach includes a unified treatment of the corrections that would arise from a pointlike Dirac proton and the corrections that are due to the proton's non-QED structure. Our principal new results are a calculation of the relative order-α^2 (m_e/m_p) contributions that arise from the proton's anomalous magnetic moment and a systematic treatment of the relative order-α (m_e/m_p) contributions that arise from form-factor corrections. In the former calculation we introduce some new technical improvements and are able to evaluate all of the expressions analytically. In the latter calculation, which has been the subject of previous investigations by other authors, we express the form-factor corrections in terms of two-dimensional integrals that are convenient for numerical evaluation and present numerical results for the commonly used dipole parametrization of the form factors. Because we use a parametrization of the form factors that differs slightly from the ones used in previous work, our numerical results are shifted from older ones by a small amount.

  16. Small-kernel constrained-least-squares restoration of sampled image data

    Science.gov (United States)

    Hazra, Rajeeb; Park, Stephen K.

    1992-10-01

    Constrained least-squares image restoration, first proposed by Hunt twenty years ago, is a linear image restoration technique in which the restoration filter is derived by maximizing the smoothness of the restored image while satisfying a fidelity constraint related to how well the restored image matches the actual data. The traditional derivation and implementation of the constrained least-squares restoration filter is based on an incomplete discrete/discrete system model which does not account for the effects of spatial sampling and image reconstruction. For many imaging systems, these effects are significant and should not be ignored. In a recent paper Park demonstrated that a derivation of the Wiener filter based on the incomplete discrete/discrete model can be extended to a more comprehensive end-to-end, continuous/discrete/continuous model. In a similar way, in this paper, we show that a derivation of the constrained least-squares filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model and, by so doing, an improved restoration filter is derived. Building on previous work by Reichenbach and Park for the Wiener filter, we also show that this improved constrained least-squares restoration filter can be efficiently implemented as a small-kernel convolution in the spatial domain.
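
    For reference, the traditional discrete/discrete CLS filter that the paper extends can be written down in a few lines. The sketch below (Python/NumPy; function names are illustrative, and this is the classical formulation, not the authors' continuous/discrete/continuous derivation) builds the frequency-domain filter with a Laplacian smoothness constraint and then truncates its spatial kernel to a small support for fast spatial-domain convolution:

```python
import numpy as np

def cls_restoration_filter(H, gamma):
    """Classical constrained least-squares (CLS) restoration filter.
    H     : 2-D frequency response of the blur (assumes H[0, 0] != 0)
    gamma : regularization weight balancing smoothness vs. data fidelity
    """
    rows, cols = H.shape
    # Frequency response of the discrete Laplacian (smoothness constraint)
    p = np.zeros((rows, cols))
    p[0, 0] = 4.0
    p[0, 1] = p[1, 0] = p[0, -1] = p[-1, 0] = -1.0
    P = np.fft.fft2(p)
    # CLS filter: conj(H) / (|H|^2 + gamma * |P|^2)
    return np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)

def small_kernel_approximation(L, size=15):
    """Truncate the spatial-domain restoration kernel to a small support,
    so restoration can run as an inexpensive spatial convolution."""
    kernel = np.fft.fftshift(np.real(np.fft.ifft2(L)))
    cy, cx = kernel.shape[0] // 2, kernel.shape[1] // 2
    h = size // 2
    return kernel[cy - h:cy + h + 1, cx - h:cx + h + 1]
```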

  17. Relativistic corrections for the conventional, classical Nyquist theorem

    International Nuclear Information System (INIS)

    Theimer, O.; Dirk, E.H.

    1983-01-01

    New expressions for the Nyquist theorem are derived under the condition in which the random thermal speed of electrons, in a system of charged particles, can approach the speed of light. Both the case in which the electrons have no drift velocity relative to the ions or neutral particles and the case in which drift occurs are investigated. In both instances, the new expressions for the Nyquist theorem are found to contain relativistic correction terms; however, for electron temperatures T approx. 10^9 K and drift velocity magnitudes w approx. 0.5c, where c is the speed of light, the effects of these correction terms are generally small. The derivation of these relativistic corrections is carried out by means of procedures developed in an earlier work. A relativistic distribution function, which incorporates a constant drift velocity with a random thermal velocity for a given particle species, is developed.
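
    For context, the conventional, non-relativistic Nyquist theorem to which the corrections above apply gives the mean-square thermal noise voltage across a resistance as ⟨V²⟩ = 4 k_B T R Δf. A minimal illustration of this classical form (the paper's relativistic correction terms are not reproduced here):

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant, J/K

def johnson_noise_vrms(R, T, bandwidth):
    """RMS thermal (Johnson-Nyquist) noise voltage across a resistor:
    <V^2> = 4 * k_B * T * R * bandwidth (classical, non-relativistic)."""
    return np.sqrt(4.0 * k_B * T * R * bandwidth)

# Example: a 1 Mohm resistor at T = 1e9 K (the regime discussed above)
# measured over a 1 MHz bandwidth gives ~0.23 V RMS.
print(johnson_noise_vrms(1e6, 1e9, 1e6))
```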

  18. Strong Coupling Corrections in Quantum Thermodynamics

    Science.gov (United States)

    Perarnau-Llobet, M.; Wilming, H.; Riera, A.; Gallego, R.; Eisert, J.

    2018-03-01

    Quantum systems strongly coupled to many-body systems equilibrate to the reduced state of a global thermal state, deviating from the local thermal state of the system as it occurs in the weak-coupling limit. Taking this insight as a starting point, we study the thermodynamics of systems strongly coupled to thermal baths. First, we provide strong-coupling corrections to the second law, applicable to general systems, in three of its readings: as a statement on maximal extractable work, as a statement on heat dissipation, and as a bound on the Carnot efficiency. These corrections become relevant for small quantum systems and vanish to first order in the interaction strength. We then move to the question of the power of heat engines, obtaining a bound on the power enhancement due to strong coupling. Our results are exemplified on the paradigmatic non-Markovian quantum Brownian motion.

  19. Radiative corrections to double-Dalitz decays revisited

    Science.gov (United States)

    Kampf, Karol; Novotný, Jiři; Sanchez-Puertas, Pablo

    2018-03-01

    In this study, we revisit and complete the full next-to-leading-order corrections to pseudoscalar double-Dalitz decays within the soft-photon approximation. Compared to the previous study, we find small differences, which are nevertheless relevant for extracting information about the pseudoscalar transition form factors. Concerning the latter, these processes could offer the opportunity to test them, for the first time, in their double-virtual regime.

  20. Measurement of specimen-induced aberrations of biological samples using phase stepping interferometry.

    Science.gov (United States)

    Schwertner, M; Booth, M J; Neil, M A A; Wilson, T

    2004-01-01

    Confocal or multiphoton microscopes, which deliver optical sections and three-dimensional (3D) images of thick specimens, are widely used in biology. These techniques, however, are sensitive to aberrations that may originate from the refractive index structure of the specimen itself. The aberrations cause reduced signal intensity, and the 3D resolution of the instrument is compromised. It has been suggested that aberrations in confocal microscopes be corrected using adaptive optics. In order to define the design specifications for such adaptive optics systems, one has to know the amount of aberration present in typical applications, such as the imaging of biological samples. We have built a phase stepping interferometer microscope that directly measures the aberration of the wavefront. The modal content of the wavefront is extracted by employing Zernike mode decomposition. Results for typical biological specimens are presented. For all samples investigated, it was found that higher-order Zernike modes give only a small contribution to the overall aberration. These higher-order modes can therefore be neglected in future adaptive optics sensing and correction schemes implemented in confocal or multiphoton microscopes, leading to more efficient designs.
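
    Zernike mode decomposition of a measured wavefront amounts to a least-squares projection onto the mode set. A minimal sketch (Python/NumPy; the hypothetical five-mode basis below is illustrative, and real systems use a longer, properly normalized series):

```python
import numpy as np

def zernike_basis(n_pts=64):
    """A few low-order Zernike modes sampled on the unit disk."""
    y, x = np.mgrid[-1:1:n_pts * 1j, -1:1:n_pts * 1j]
    r = np.hypot(x, y)
    th = np.arctan2(y, x)
    mask = r <= 1.0
    modes = [
        np.ones_like(r),                      # piston
        2 * r * np.cos(th),                   # tilt x
        2 * r * np.sin(th),                   # tilt y
        np.sqrt(3) * (2 * r ** 2 - 1),        # defocus
        np.sqrt(6) * r ** 2 * np.cos(2 * th), # astigmatism
    ]
    return np.array([m[mask] for m in modes]), mask

def decompose(wavefront, basis, mask):
    """Least-squares projection of a measured wavefront onto the modes;
    returns one coefficient per mode."""
    coeffs, *_ = np.linalg.lstsq(basis.T, wavefront[mask], rcond=None)
    return coeffs
```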

  1. XRF analysis of mineralised samples

    International Nuclear Information System (INIS)

    Ahmedali, T.

    2002-01-01

    Full text: Software now supplied by instrument manufacturers has made it practical and convenient for users to analyse unusual samples routinely. Semiquantitative scanning software can be used for rapid preliminary screening of elements ranging from carbon to uranium, prior to assigning mineralised samples to an appropriate quantitative analysis routine. The general quality and precision of analytical results obtained from modern XRF spectrometers can be significantly enhanced by several means: a. Modifications in preliminary sample preparation can result in less contamination from crushing and grinding equipment, and optimised techniques of actual sample preparation can significantly increase the precision of results. b. Employment of automatic data-recording balances and the use of catch weights during sample preparation reduce technician time as well as weighing errors. c. Consistency of results can be improved significantly by the use of appropriate stable drift monitors with a statistically significant content of the analyte. d. A judicious selection of kV/mA combinations, analysing crystals, primary beam filters, collimators, peak positions, accurate background correction and peak overlap corrections, followed by the use of appropriate matrix correction procedures. e. Preventative maintenance procedures for XRF spectrometers and ancillary equipment, which can also contribute significantly to reducing instrument down times, are described. Examples of various facets of sample processing routines are given from the XRF spectrometer component of a multi-instrument analytical university facility, which provides XRF data to 17 Canadian universities. Copyright (2002) Australian X-ray Analytical Association Inc

  2. Corrective Action Decision Document for Corrective Action Unit 516: Septic Systems and Discharge Points, Nevada Test Site, Nevada, Rev. No.: 1 with ROTC 1

    Energy Technology Data Exchange (ETDEWEB)

    Alfred N. Wickline

    2004-04-01

    This Corrective Action Decision Document (CADD) has been prepared for Corrective Action Unit (CAU) 516, Septic Systems and Discharge Points, Nevada Test Site, Nevada, in accordance with the ''Federal Facility Agreement and Consent Order'' (1996). Corrective Action Unit 516 is comprised of the following Corrective Action Sites (CASs): (1) 03-59-01 - Bldg 3C-36 Septic System; (2) 03-59-02 - Bldg 3C-45 Septic System; (3) 06-51-01 - Sump and Piping; (4) 06-51-02 - Clay Pipe and Debris; (5) 06-51-03 - Clean Out Box and Piping; and (6) 22-19-04 - Vehicle Decontamination Area. The purpose of this CADD is to identify and provide the rationale for the recommendation of an acceptable corrective action alternative for each CAS within CAU 516. Corrective action investigation activities were performed between July 22 and August 14, 2003, as set forth in the Corrective Action Investigation Plan. Supplemental sampling was conducted in late 2003 and early 2004.

  3. June 2012 Groundwater Sampling at the Central Nevada Test Area (Data Validation Package)

    International Nuclear Information System (INIS)

    2013-01-01

    The U.S. Department of Energy Office of Legacy Management conducted annual sampling at the Central Nevada Test Area (CNTA) on June 26-27, 2012, in accordance with the 2004 Corrective Action Decision Document/Corrective Action Plan for Corrective Action Unit 443: Central Nevada Test Area (CNTA)-Subsurface and the addendum to the 'Corrective Action Decision Document/Corrective Action Plan' completed in 2008. Sampling and analysis were conducted as specified in the Sampling and Analysis Plan for U.S. Department of Energy Office of Legacy Management Sites (LMS/PLN/S04351, continually updated).

  4. May 2011 Groundwater Sampling at the Central Nevada Test Area (Data Validation Package)

    International Nuclear Information System (INIS)

    2011-01-01

    The U.S. Department of Energy Office of Legacy Management conducted annual sampling at the Central Nevada Test Area (CNTA) on May 10-11, 2011, in accordance with the 2004 Corrective Action Decision Document/Corrective Action Plan for Corrective Action Unit 443: Central Nevada Test Area (CNTA)-Subsurface and the addendum to the 'Corrective Action Decision Document/Corrective Action Plan' completed in 2008. Sampling and analysis were conducted as specified in the Sampling and Analysis Plan for U.S. Department of Energy Office of Legacy Management Sites (LMS/PLN/S04351, continually updated).

  5. Determination of Organic Pollutants in Small Samples of Groundwaters by Liquid-Liquid Extraction and Capillary Gas Chromatography

    DEFF Research Database (Denmark)

    Harrison, I.; Leader, R.U.; Higgo, J.J.W.

    1994-01-01

    A method is presented for the determination of 22 organic compounds in polluted groundwaters. The method includes liquid-liquid extraction of the base/neutral organics from small, alkaline groundwater samples, followed by derivatisation and liquid-liquid extraction of phenolic compounds after neutralisation. The extracts were analysed by capillary gas chromatography. Dual detection by flame ionisation and electron capture was used to reduce analysis time.

  6. On self-attenuation corrections in gamma-ray spectrometry

    International Nuclear Information System (INIS)

    Bolivar, J.P.; Garcia-Leon, M.; Garcia-Tenorio, R.

    1997-01-01

    In this paper we discuss and justify the dependence of the self-attenuation correction factor, f, on the sample density and gamma energy in the transmission method for the full-energy-peak efficiency calibration of Ge detectors. A method is suggested for directly computing f when the sample composition is known. (Author)
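
    For a homogeneous slab-shaped sample viewed in the far field, a commonly used form of the transmission-method self-attenuation factor follows directly from the measured transmission T = I_sample/I_blank: with μt = -ln T, f = (1 - T)/(-ln T). A minimal sketch under these assumptions (not the paper's full density- and energy-dependent treatment):

```python
import numpy as np

def self_attenuation_factor(T):
    """Self-attenuation factor f for a homogeneous slab sample, from the
    measured gamma transmission T = I_sample / I_blank.
    With mu*t = -ln(T):  f = (1 - exp(-mu*t)) / (mu*t) = (1 - T) / (-ln T).
    Far-field slab approximation; f -> 1 as the sample becomes transparent."""
    T = np.asarray(T, dtype=float)
    mu_t = -np.log(T)
    return np.where(np.isclose(mu_t, 0.0), 1.0, (1.0 - T) / mu_t)

# The measured full-energy-peak efficiency is divided by f to recover the
# efficiency for a massless source; e.g. T = 0.7 gives f ~ 0.84.
print(self_attenuation_factor(0.7))
```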

  7. k-space sampling optimization for ultrashort TE imaging of cortical bone: Applications in radiation therapy planning and MR-based PET attenuation correction

    International Nuclear Information System (INIS)

    Hu, Lingzhi; Traughber, Melanie; Su, Kuan-Hao; Pereira, Gisele C.; Grover, Anu; Traughber, Bryan; Muzic, Raymond F. Jr.

    2014-01-01

    Purpose: The ultrashort echo-time (UTE) sequence is a promising MR pulse sequence for imaging cortical bone, which is otherwise difficult to image using conventional MR sequences and also poses strong attenuation for photons in radiation therapy and PET imaging. The authors report here a systematic characterization of cortical bone signal decay and a scanning time optimization strategy for the UTE sequence through k-space undersampling, which can result in up to a 75% reduction in acquisition time. Using the undersampled UTE imaging sequence, the authors also attempted to quantitatively investigate the MR properties of cortical bone in healthy volunteers, thus demonstrating the feasibility of using such a technique for generating bone-enhanced images which can be used for radiation therapy planning and attenuation correction with PET/MR. Methods: An angularly undersampled, radially encoded UTE sequence was used for scanning the brains of healthy volunteers. Quantitative MR characterization of tissue properties, including water fraction and R2* = 1/T2*, was performed by analyzing the UTE images acquired at multiple echo times. The impact of different sampling rates was evaluated through systematic comparison of the MR image quality, bone-enhanced image quality, image noise, water fraction, and R2* of cortical bone. Results: A reduced angular sampling rate of the UTE trajectory achieves acquisition durations in proportion to the sampling rate and in as short as 25% of the time required for full sampling using a standard Cartesian acquisition, while preserving unique MR contrast within the skull at the cost of a minimal increase in noise level. The R2* of human skull was measured as 0.2–0.3 ms⁻¹ depending on the specific region, which is more than ten times greater than the R2* of soft tissue. The water fraction in human skull was measured to be 60%–80%, which is significantly less than the >90% water fraction in brain. High-quality, bone
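
    The R2* estimation described above reduces to a mono-exponential fit of the multi-echo magnitudes, S(TE) = S0·exp(-R2*·TE). A minimal log-linear sketch with synthetic data (echo times are illustrative, and the study's water-fraction estimation is omitted):

```python
import numpy as np

def fit_r2star(te, signal):
    """Estimate R2* (= 1/T2*) and S0 from multi-echo magnitudes by a
    log-linear fit of S(TE) = S0 * exp(-R2* * TE)."""
    slope, intercept = np.polyfit(te, np.log(signal), 1)
    return -slope, np.exp(intercept)

# TE in ms, so R2* comes out in ms^-1 (skull ~0.2-0.3 per the text).
te = np.array([0.07, 1.0, 2.0, 4.0])     # illustrative echo times
sig = 100.0 * np.exp(-0.25 * te)         # synthetic cortical-bone decay
r2s, s0 = fit_r2star(te, sig)
print(r2s)                               # ~0.25 ms^-1
```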

  8. Quantum corrections to Drell-Yan production of Z bosons

    Energy Technology Data Exchange (ETDEWEB)

    Shcherbakova, Elena S.

    2011-08-15

    In this thesis, we present higher-order corrections to inclusive Z-boson hadroproduction via the Drell-Yan mechanism, h_1 + h_2 → Z + X, at large transverse momentum (q_T). Specifically, we include the QED, QCD and electroweak corrections of orders O(α_s α), O(α_s^2 α), and O(α_s α^2). We work in the framework of the Standard Model and adopt the MS scheme of renormalization and factorization. The cross section of Z-boson production has been precisely measured at various hadron-hadron colliders, including the Tevatron and the LHC. Our calculations will help to calibrate and monitor the luminosity and to estimate backgrounds of hadron-hadron interactions more reliably. Besides the total cross section, we study the distributions in the transverse momentum and the rapidity (y) of the Z boson, appropriate for Tevatron and LHC experimental conditions. Investigating the relative sizes of the various types of corrections by means of the factor K = σ_tot/σ_Born, we find that the QCD corrections of order α_s^2 α are largest in general and that the electroweak corrections of order α_s α^2 play an important role at large values of q_T, while the QED corrections of the same order are small, of order 2% or below. We also compare our results with the existing literature. We correct a few misprints in the original calculation of the QCD corrections, and find the published electroweak correction to be incomplete. Our results for the QED corrections are new. (orig.)

  9. Present status of NMCC and sample preparation method for bio-samples

    International Nuclear Information System (INIS)

    Futatsugawa, S.; Hatakeyama, S.; Saitou, S.; Sera, K.

    1993-01-01

    At NMCC (Nishina Memorial Cyclotron Center) we are conducting research on PET (positron emission computed tomography) in nuclear medicine and on PIXE (particle-induced X-ray emission) analysis, using a compactly designed small cyclotron. The NMCC facilities have been open to researchers of other institutions since April 1993. The present status of NMCC is described. Bio-samples (medical samples, plants, animals and environmental samples) have mainly been analyzed by PIXE at NMCC. Small amounts of bio-samples for PIXE are decomposed quickly and easily in a sealed PTFE (polytetrafluoroethylene) vessel with a microwave oven. This sample preparation method for bio-samples is also described. (author)

  10. Coherence and diffraction limited resolution in microscopic OCT by a unified approach for the correction of dispersion and aberrations

    Science.gov (United States)

    Schulz-Hildebrandt, H.; Münter, Michael; Ahrens, M.; Spahr, H.; Hillmann, D.; König, P.; Hüttmann, G.

    2018-03-01

    Optical coherence tomography (OCT) images scattering tissues with 5 to 15 μm resolution. This is usually not sufficient for distinguishing cellular and subcellular structures. Increased axial and lateral resolution and compensation of artifacts caused by dispersion and aberrations are required to achieve cellular and subcellular resolution. This includes defocus, which limits the usable depth of field at high lateral resolution. OCT gives access to the phase of the scattered light, and hence correction of dispersion and aberrations is possible by numerical algorithms. Here we present a unified dispersion/aberration correction which is based on a polynomial parameterization of the phase error and an optimization of the image quality using Shannon's entropy. For validation, a supercontinuum light source and a custom-made spectrometer with 400 nm bandwidth were combined with a high-NA microscope objective in a setup for tissue and small animal imaging. Using this setup and computational corrections, volumetric imaging at 1.5 μm resolution is possible. Cellular and near-cellular resolution is demonstrated in porcine cornea and the Drosophila larva when computational correction of dispersion and aberrations is used. Due to the excellent correction of the microscope objective used, defocus was the main contribution to the aberrations. In addition, higher aberrations caused by the sample itself were successfully corrected. Dispersion and aberrations are closely related artifacts in microscopic OCT imaging. Hence they can be corrected in the same way, by optimization of the image quality. This way, microscopic resolution is easily achieved in OCT imaging of static biological tissues.
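
    The unified correction can be sketched as a search over phase-polynomial coefficients that minimizes Shannon's entropy of the image. A minimal illustration (Python/NumPy/SciPy; the three-term phase parameterization and function names are hypothetical simplifications of the approach described):

```python
import numpy as np
from scipy.optimize import minimize

def shannon_entropy(img):
    """Shannon entropy of the normalized image intensity (lower = sharper)."""
    p = np.abs(img) ** 2
    p /= p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def correct_field(field, coeffs):
    """Apply a polynomial phase correction in the Fourier domain: one
    defocus-like term, one 4th-order term, and one astigmatic term."""
    ny, nx = field.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    r2 = fx ** 2 + fy ** 2
    phase = coeffs[0] * r2 + coeffs[1] * r2 ** 2 + coeffs[2] * (fx ** 2 - fy ** 2)
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(-1j * phase))

def optimize_correction(field, n_coeffs=3):
    """Search the phase coefficients that minimize the image entropy."""
    cost = lambda c: shannon_entropy(correct_field(field, c))
    res = minimize(cost, np.zeros(n_coeffs), method="Nelder-Mead")
    return correct_field(field, res.x), res.x
```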

  11. Resistivity Correction Factor for the Four-Probe Method: Experiment I

    Science.gov (United States)

    Yamashita, Masato; Yamaguchi, Shoji; Enjoji, Hideo

    1988-05-01

    Experimental verification of the theoretically derived resistivity correction factor (RCF) is presented. Resistivity and sheet resistance measurements by the four-probe method are made on three samples: isotropic graphite, ITO film and Au film. It is indicated that the RCF can correct the apparent variations of experimental data to yield reasonable resistivities and sheet resistances.
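
    For collinear, equally spaced probes on an effectively infinite thin sheet, the ideal four-probe relation is R_s = (π/ln 2)(V/I); the RCF studied above multiplies this to account for finite sample geometry. A minimal sketch showing where the factor enters (the RCF values themselves are geometry-dependent and must come from the theory or tables the paper verifies):

```python
import numpy as np

IDEAL_FACTOR = np.pi / np.log(2)  # ~4.532, infinite-sheet four-probe factor

def sheet_resistance(V, I, rcf=1.0):
    """Sheet resistance (ohms/square) from a collinear four-probe measurement;
    rcf = 1.0 recovers the ideal infinite-sheet case."""
    return IDEAL_FACTOR * (V / I) * rcf

def resistivity(V, I, thickness, rcf=1.0):
    """Bulk resistivity for a film of known thickness (resistivity carries
    the same length unit as the thickness argument)."""
    return sheet_resistance(V, I, rcf) * thickness

# Example: V = 1.2 mV at I = 1 mA on a large thin film.
print(sheet_resistance(1.2e-3, 1.0e-3))
```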

  12. Corrective Action Investigation Plan for Corrective Action Unit 366: Area 11 Plutonium Valley Dispersion Sites, Nevada National Security Site, Nevada, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Patrick Matthews

    2011-09-01

    Corrective Action Unit 366 comprises the six corrective action sites (CASs) listed below: (1) 11-08-01, Contaminated Waste Dump No.1; (2) 11-08-02, Contaminated Waste Dump No.2; (3) 11-23-01, Radioactively Contaminated Area A; (4) 11-23-02, Radioactively Contaminated Area B; (5) 11-23-03, Radioactively Contaminated Area C; and (6) 11-23-04, Radioactively Contaminated Area D. These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives (CAAs). Additional information will be obtained by conducting a corrective action investigation before evaluating CAAs and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of CAAs that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed July 6, 2011, by representatives of the Nevada Division of Environmental Protection and the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 366. The presence and nature of contamination at CAU 366 will be evaluated based on information collected from a field investigation. Radiological contamination will be evaluated based on a comparison of the total effective dose (TED) at sample locations to the dose-based final action level (FAL). The TED will be calculated by summing the estimates of internal and external dose. Results from the analysis of soil samples collected from sample plots will be used to calculate internal radiological dose. Thermoluminescent dosimeters placed at each sample location will be used to measure external radiological dose. Based on historical documentation of the releases
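
    The dose evaluation described above is a simple sum-and-threshold computation: the TED at each sample location is the internal dose (from soil-sample analysis) plus the external dose (from the thermoluminescent dosimeter), compared against the FAL. A minimal sketch (the numbers are illustrative only; the actual FAL is established through the DQO process):

```python
def total_effective_dose(internal_dose, external_dose):
    """TED at a sample plot: internal dose plus external dose."""
    return internal_dose + external_dose

def needs_corrective_action(ted, fal):
    """Contamination drives corrective action where TED exceeds the
    dose-based final action level (FAL)."""
    return ted > fal

# Illustrative values in consistent dose units:
ted = total_effective_dose(12.0, 18.0)
print(needs_corrective_action(ted, fal=25.0))  # True
```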

  13. Use of aspiration method for collecting brain samples for rabies diagnosis in small wild animals.

    Science.gov (United States)

    Iamamoto, K; Quadros, J; Queiroz, L H

    2011-02-01

    In developing countries such as Brazil, where canine rabies is still a considerable problem, samples from wildlife species are infrequently collected and submitted for rabies screening. A collaborative study involving environmental biologists and veterinarians was established for rabies epidemiological research in a specific ecological area located in Sao Paulo State, Brazil. The wild animals' brains must be collected without skull damage because skull measurements are important for identifying the captured species. For this purpose, samples from bats and small mammals were collected using an aspiration method, by inserting a plastic pipette into the brain through the foramen magnum. The plastic pipette technique is being used in a progressively increasing number of studies, and this method could foster collaborative research between wildlife scientists and rabies epidemiologists, thus improving rabies surveillance. © 2009 Blackwell Verlag GmbH.

  14. Corrective Action Decision Document/Closure Report for Corrective Action Unit 541: Small Boy Nevada National Security Site and Nevada Test and Training Range, Nevada, Revision 0 with ROTC-1

    Energy Technology Data Exchange (ETDEWEB)

    Kidman, Raymond [Navarro, Las Vegas, NV (United States); Matthews, Patrick [Navarro, Las Vegas, NV (United States)

    2016-08-01

    The purpose of this Corrective Action Decision Document/Closure Report is to provide justification and documentation supporting the recommendation that no further corrective action is needed for CAU 541 based on the no further action alternative listed in Table ES-1.

  15. Inner filter correction of dissolved organic matter fluorescence

    DEFF Research Database (Denmark)

    Kothawala, D.N.,; Murphy, K.R.; Stedmon, Colin

    2013-01-01

    The fluorescence of dissolved organic matter (DOM) is suppressed by a phenomenon of self-quenching known as the inner filter effect (IFE). Despite widespread use of fluorescence to characterize DOM in surface waters, the advantages and constraints of IFE correction are poorly defined. We assessed the effectiveness of a commonly used absorbance-based approach (ABA) and a recently proposed controlled dilution approach (CDA) to correct for IFE. Linearity between corrected fluorescence and total absorbance (ATotal; the sum of absorbance at excitation and emission wavelengths) across the full excitation-emission matrix (EEM) in dilution series of four samples indicated both ABA and CDA were effective to an absorbance of at least 1.5 in a 1 cm cell, regardless of wavelength positioning. In regions of the EEMs where the signal-to-background noise (S/N) was low, CDA correction resulted in more variability than ABA.
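
    The ABA correction evaluated above rescales each EEM pixel by the total absorbance at its excitation and emission wavelengths, F_corr = F_obs · 10^((A_ex + A_em)/2) for a 1 cm cell. A minimal sketch (Python/NumPy; the array layout, with excitation on the first axis, is an assumption):

```python
import numpy as np

def aba_correct(F_obs, A_ex, A_em):
    """Absorbance-based (ABA) inner-filter correction of an EEM.
    F_obs : EEM, excitation along axis 0 and emission along axis 1
    A_ex  : absorbance at each excitation wavelength (1 cm cell)
    A_em  : absorbance at each emission wavelength (1 cm cell)
    Returns the corrected EEM and the total-absorbance grid ATotal.
    """
    A_total = np.add.outer(np.asarray(A_ex), np.asarray(A_em))
    # Per the study, the correction is reliable to ATotal of at least 1.5;
    # pixels beyond that should be flagged rather than trusted.
    return F_obs * 10.0 ** (A_total / 2.0), A_total
```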

  16. Numerical correction of distorted images in full-field optical coherence tomography

    Science.gov (United States)

    Min, Gihyeon; Kim, Ju Wan; Choi, Woo June; Lee, Byeong Ha

    2012-03-01

    We propose a method that numerically corrects the distorted en face images obtained with a full-field optical coherence tomography (FF-OCT) system. It is shown that the FF-OCT image of the deep region of a biological sample is easily blurred or degraded because the sample generally has a refractive index (RI) much higher than that of its surrounding medium. Analysis shows that the RI mismatch separates the focal plane of the imaging system from the imaging plane of the coherence-gated system. This image-blurring phenomenon is experimentally confirmed by imaging the chrome pattern of a resolution test target through its glass substrate in water. Moreover, we demonstrate that the blurred image can be appreciably corrected by using a numerical correction process based on the Fresnel-Kirchhoff diffraction theory. The proposed correction method is applied to enhance the image of a human hair, which permits the distinct identification of the melanin granules inside the cortex layer of the hair shaft.
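
    The numerical refocusing described here can be approximated by scalar angular-spectrum propagation of the complex en face field, shifting the coherence-gated plane back to the focal plane. A minimal sketch (Python/NumPy; the propagation distance dz, which compensates the RI-mismatch-induced focal shift, is assumed known):

```python
import numpy as np

def angular_spectrum_refocus(field, dz, wavelength, pixel_size):
    """Propagate a complex en face field by dz (same units as wavelength
    and pixel_size); a scalar angular-spectrum approximation of the
    Fresnel-Kirchhoff integral."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    kz2 = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    prop = np.where(
        kz2 > 0,
        np.exp(2j * np.pi * dz * np.sqrt(np.maximum(kz2, 0.0))),
        0.0,  # evanescent components are discarded
    )
    return np.fft.ifft2(np.fft.fft2(field) * prop)
```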

  17. Systematic studies of small scintillators for new sampling calorimeter

    International Nuclear Information System (INIS)

    Jacosalem, E.P.; Sanchez, A.L.C.; Bacala, A.M.; Iba, S.; Nakajima, N.; Ono, H.; Miyata, H.

    2007-01-01

    A new sampling calorimeter using very thin scintillators and the multi-pixel photon counter (MPPC) has been proposed to produce better position resolution for the international linear collider (ILC) experiment. As part of this R and D study, small plastic scintillators of different sizes, thicknesses and wrapping reflectors are systematically studied. The scintillation light due to beta rays from a collimated 90Sr source is collected from the scintillator by a wavelength-shifting (WLS) fiber and converted into electrical signals at the PMT. The wrapped scintillator that gives the best light yield is determined by comparing the measured pulse height of each 10 x 40 x 2 mm strip scintillator covered with 3M reflective mirror film, teflon, white paint, black tape, gold, aluminum and white paint+teflon. The pulse height dependence on position, length and thickness of the 3M reflective mirror film and teflon-wrapped scintillators is measured. Results show that the 3M radiant mirror film-wrapped scintillator has the greatest light yield, with an average of 9.2 photoelectrons. It is observed that the light yield increases slightly with scintillator length, but increases by about 100% when the WLS fiber diameter is increased from 1.0 mm to 1.6 mm. The position dependence measurement along the strip scintillator showed the uniformity of light transmission from the sensor to the PMT. A dip of about 40% of the maximum pulse height is observed across the strip. The pulse height of the block-type scintillator, on the other hand, is found to be almost proportional to the scintillator thickness. (author)

  18. Use of x-ray scattering in absorption corrections for x-ray fluorescence analysis of aerosol loaded filters

    International Nuclear Information System (INIS)

    Nielson, K.K.; Garcia, S.R.

    1976-09-01

    Two methods are described for computing multielement x-ray absorption corrections for aerosol samples collected on IPC-1478 and Whatman 41 filters. The first relies on scatter peak intensities and scattering cross sections to estimate the mass of light elements (Z less than 14) in the sample. This mass is used with the measured heavy-element (Z greater than or equal to 14) masses to iteratively compute sample absorption corrections. The second method utilizes a linear function of ln(μ) vs ln(E) determined from the scatter peak ratios and estimates sample mass from the scatter peak intensities. Both methods assume a homogeneous depth distribution of aerosol in a fraction of the front of the filters, and the assumption is evaluated with respect to an exponential aerosol depth distribution. Penetration depths for various real, synthetic and liquid aerosols were measured. Aerosol penetration appeared constant over a 1.1 mg/cm² range of sample loading for IPC filters, while absorption corrections for Si and S varied by a factor of two over the same loading range. Corrections computed by the two methods were compared with measured absorption corrections and with atomic absorption analyses of the same samples.
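
    The first method's iteration can be sketched as follows: the scatter-derived light-element mass plus the current heavy-element estimates give a total areal mass, which fixes a slab absorption factor for each analysis line, and the corrected masses are fed back until self-consistent. A schematic Python sketch under a homogeneous-layer assumption (the attenuation coefficients and array names are illustrative, not the report's exact formulation):

```python
import numpy as np

def iterate_absorption(measured, mu, light_mass, tol=1e-6, max_iter=50):
    """Iterative multielement absorption correction (schematic).
    measured   : apparent masses per unit area of the Z >= 14 elements
    mu         : matrix mass-attenuation coefficient at each element's
                 analysis line (inverse units of areal mass, assumed known)
    light_mass : light-element (Z < 14) areal mass from the scatter peaks
    Uses the homogeneous-slab absorption factor (1 - exp(-x)) / x with
    x = mu * total_mass, and iterates until the masses stop changing."""
    corrected = np.asarray(measured, dtype=float).copy()
    for _ in range(max_iter):
        x = np.asarray(mu) * (light_mass + corrected.sum())
        absorption = np.where(np.isclose(x, 0.0), 1.0, (1 - np.exp(-x)) / x)
        new = measured / absorption
        if np.max(np.abs(new - corrected)) < tol:
            return new
        corrected = new
    return corrected
```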

  19. Complete O(α) QED corrections to the process ep→eX in mixed variables

    International Nuclear Information System (INIS)

    Bardin, D.; Joint Inst. of Nuclear Research, Moscow; Christova, P.; Kalinovskaya, L.; Riemann, T.

    1995-04-01

    The complete set of O(α) QED corrections with soft-photon exponentiation to the process ep→eX in mixed variables (y = y_h, Q² = Q_l²) is calculated in the quark parton model. Compared to earlier attempts, we additionally determine the lepton-quark interference and the quarkonic corrections. The net results are compared to the approximation with only leptonic corrections, which amount to several percent (at large x or y: several tens of percent). We find that the newly calculated corrections modify this by a few percent or less and become negligible at small y. (orig.)

  20. Empirically Determined Response Matrices for On-Line Orbit and Energy Correction at Jefferson Lab

    International Nuclear Information System (INIS)

    Leigh Harwood; Alicia Hofler; Michele Joyce; Valeri Lebedev; David Bryan

    2001-01-01

    Jefferson Lab uses feedback loops (less than 1 hertz update rate) to correct drifts in CEBAF's electron beam orbit and energy. Previous incarnations of these loops used response matrices that were computed by a numerical model of the machine. Jefferson Lab is transitioning this feedback system to use empirically determined response matrices whereby the software introduces small orbit or energy deviations using the loop's actuators and measures the system response with the loop's sensors. This method is in routine use for orbit correction. This paper will describe the orbit correction system and future plans to extend this method to energy correction
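
    The empirical procedure described, introducing small deviations with the loop's actuators and recording the sensor response, amounts to measuring the response matrix column by column and then inverting it in the least-squares sense. A minimal sketch (Python/NumPy; set_actuator and read_bpms are hypothetical stand-ins for the control-system interface):

```python
import numpy as np

def measure_response_matrix(set_actuator, read_bpms, n_actuators, delta):
    """Empirically build the orbit response matrix: apply a small deviation
    delta to each actuator in turn and record the BPM response."""
    baseline = read_bpms()
    R = np.zeros((baseline.size, n_actuators))
    for i in range(n_actuators):
        set_actuator(i, delta)
        R[:, i] = (read_bpms() - baseline) / delta
        set_actuator(i, 0.0)  # restore the actuator
    return R

def correction_kicks(R, orbit_error):
    """Least-squares corrector settings that cancel the measured orbit
    error; the SVD-based pseudoinverse tolerates rank deficiency."""
    return -np.linalg.pinv(R) @ orbit_error
```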