WorldWideScience

Sample records for small sample correction

  1. Calculation of coincidence summing corrections for a specific small soil sample geometry

    Energy Technology Data Exchange (ETDEWEB)

    Helmer, R.G.; Gehrke, R.J.

    1996-10-01

    Previously, a system was developed at the INEL for measuring the γ-ray emitting nuclides in small soil samples for the purpose of environmental monitoring. These samples were counted close to a ~20% Ge detector and, therefore, it was necessary to take into account the coincidence summing that occurs for some nuclides. In order to improve the technical basis for the coincidence summing corrections, the authors have carried out a study of the variation in the coincidence summing probability with position within the sample volume. A Monte Carlo electron and photon transport code (CYLTRAN) was used to compute peak and total efficiencies for various photon energies from 30 to 2,000 keV at 30 points throughout the sample volume. The geometry for these calculations included the various components of the detector and source along with the shielding. The associated coincidence summing corrections were computed at these 30 positions in the sample volume and then averaged for the whole source. The influence of the soil and the detector shielding on the efficiencies was investigated.
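
    A minimal sketch of the summing-out correction implied above, for a simple two-gamma cascade (the record gives no formulas; the 1/(1 - total efficiency) form and all numbers below are illustrative assumptions, with Python standing in for the transport-code post-processing):

      # Position-averaged coincidence summing-out correction, illustrative sketch.
      # For a two-gamma cascade (g1 followed by g2), the full-energy peak of g1
      # is depleted when g2 is detected in coincidence, so at one source point:
      #     C = 1 / (1 - eps_total(E2))
      # and the observed peak area is multiplied by the correction averaged
      # over the sample volume.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical total efficiencies for the cascade partner (E2) at 30
      # sample positions, e.g. as computed by a transport code such as CYLTRAN.
      eps_total_E2 = rng.uniform(0.05, 0.12, size=30)

      corrections = 1.0 / (1.0 - eps_total_E2)  # per-position summing-out correction
      C_avg = corrections.mean()                # simple volume average

      print(f"per-position corrections: {corrections.min():.3f} .. {corrections.max():.3f}")
      print(f"volume-averaged correction: {C_avg:.3f}")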

  2. Correcting Model Fit Criteria for Small Sample Latent Growth Models with Incomplete Data

    Science.gov (United States)

    McNeish, Daniel; Harring, Jeffrey R.

    2017-01-01

    To date, small sample problems with latent growth models (LGMs) have not received the same amount of attention in the literature as those of related mixed-effects models (MEMs). Although many models can be interchangeably framed as an LGM or a MEM, LGMs uniquely provide criteria to assess global data-model fit. However, previous studies have demonstrated poor…

  3. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    Science.gov (United States)

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments is the multiple-biomarker trial, which aims to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine whether the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further, but small, improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. 78 FR 59798 - Small Business Subcontracting: Correction

    Science.gov (United States)

    2013-09-30

    ... SMALL BUSINESS ADMINISTRATION 13 CFR Part 125 RIN 3245-AG22 Small Business Subcontracting: Correction AGENCY: U.S. Small Business Administration. ACTION: Correcting amendments. SUMMARY: This document... business subcontracting to implement provisions of the Small Business Jobs Act of 2010. This correction...

  5. Big Data, Small Sample.

    Science.gov (United States)

    Gerlovina, Inna; van der Laan, Mark J; Hubbard, Alan

    2017-05-20

    Multiple comparisons and small sample size, common characteristics of many types of "Big Data" including those that are produced by genomic studies, present specific challenges that affect reliability of inference. Use of multiple testing procedures necessitates calculation of very small tail probabilities of a test statistic distribution. Results based on large deviation theory provide a formal condition that is necessary to guarantee error rate control given practical sample sizes, linking the number of tests and the sample size; this condition, however, is rarely satisfied. Using methods that are based on Edgeworth expansions (relying especially on the work of Peter Hall), we explore the impact of departures of sampling distributions from typical assumptions on actual error rates. Our investigation illustrates how far the actual error rates can be from the declared nominal levels, suggesting potentially widespread problems with error rate control, specifically excessive false positives. This is an important factor that contributes to the "reproducibility crisis". We also review some other commonly used methods (such as permutation and methods based on finite sampling inequalities) in their application to multiple testing/small sample data. We point out that Edgeworth expansions, providing higher order approximations to the sampling distribution, offer a promising direction for data analysis that could improve reliability of studies relying on large numbers of comparisons with modest sample sizes.
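
    As a concrete illustration of the kind of higher-order approximation involved, a one-term Edgeworth correction to the normal tail of a standardized sample mean can be coded directly (a generic textbook form, not the authors' implementation; the exponential example is an assumption):

      # One-term Edgeworth approximation to the upper tail of the standardized
      # sample mean S_n = sqrt(n) * (xbar - mu) / sigma:
      #     P(S_n > x) ~ 1 - Phi(x) + phi(x) * (skew / (6*sqrt(n))) * (x**2 - 1)
      # Compared against Monte Carlo for exponential data (skewness = 2), where
      # the plain normal approximation is poor in the far tail at small n.

      import numpy as np
      from scipy.stats import norm

      n, x, skew = 10, 3.0, 2.0    # sample size, tail point, skewness of Exp(1)

      tail_normal = 1 - norm.cdf(x)
      tail_edgeworth = tail_normal + norm.pdf(x) * (skew / (6 * np.sqrt(n))) * (x**2 - 1)

      rng = np.random.default_rng(1)
      samples = rng.exponential(size=(200_000, n))
      s = np.sqrt(n) * (samples.mean(axis=1) - 1.0) / 1.0  # mu = sigma = 1 for Exp(1)
      tail_mc = (s > x).mean()

      print(f"normal: {tail_normal:.5f}  edgeworth: {tail_edgeworth:.5f}  MC: {tail_mc:.5f}")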

  6. Standard Deviation for Small Samples

    Science.gov (United States)

    Joarder, Anwar H.; Latif, Raja M.

    2006-01-01

    Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
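
    The abstract is truncated, but the flavor of such representations can be shown with the standard pairwise-difference identity s^2 = sum_{i<j} (x_i - x_j)^2 / (n(n-1)) (a generic identity used here for illustration, not necessarily the paper's own formula); for n = 3 it reduces to three squared differences divided by 6:

      # Pairwise-difference form of the sample variance, convenient for tiny n:
      #     s^2 = sum_{i<j} (x_i - x_j)^2 / (n * (n - 1))
      # For n = 3 integer observations this is mental arithmetic:
      #     s^2 = ((a-b)**2 + (b-c)**2 + (c-a)**2) / 6

      import itertools
      import numpy as np

      x = np.array([3, 7, 8])   # small integer sample, n = 3
      n = len(x)

      pairwise = sum((a - b) ** 2 for a, b in itertools.combinations(x, 2))
      s2_pairwise = pairwise / (n * (n - 1))

      assert np.isclose(s2_pairwise, np.var(x, ddof=1))
      print(s2_pairwise)   # 7.0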

  7. Correcting sample drift using Fourier harmonics.

    Science.gov (United States)

    Bárcena-González, G; Guerrero-Lebrero, M P; Guerrero, E; Reyes, D F; Braza, V; Yañez, A; Nuñez-Moraleda, B; González, D; Galindo, P L

    2018-07-01

    During image acquisition of crystalline materials by high-resolution scanning transmission electron microscopy, sample drift can lead to distortions and shears that hinder quantitative analysis and characterization. In order to measure and correct this effect, several authors have proposed methodologies making use of series of images. In this work, we introduce a methodology to determine the drift angle via Fourier analysis of a single image, based on measurements of the angles of the second Fourier harmonics in different quadrants. Two different approaches, both independent of the image acquisition angle, are evaluated. In addition, our results demonstrate that the determination of the drift angle is more accurate when using measurements from non-consecutive quadrants when the acquisition angle is an odd multiple of 45°. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Accurate EPR radiosensitivity calibration using small sample masses

    Science.gov (United States)

    Hayes, R. B.; Haskell, E. H.; Barrus, J. K.; Kenner, G. H.; Romanyukha, A. A.

    2000-03-01

    We demonstrate a procedure in retrospective EPR dosimetry which allows for virtually nondestructive sample evaluation in terms of sample irradiations. For this procedure to work, it is shown that corrections must be made for cavity response characteristics when using variable mass samples. Likewise, methods are employed to correct for empty tube signals, sample anisotropy and frequency drift while considering the effects of dose distribution optimization. A demonstration of the method's utility is given by comparing sample portions evaluated using both the described methodology and standard full sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that by making all the recommended corrections, very small masses can be both accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed.

  9. Accurate EPR radiosensitivity calibration using small sample masses

    International Nuclear Information System (INIS)

    Hayes, R.B.; Haskell, E.H.; Barrus, J.K.; Kenner, G.H.; Romanyukha, A.A.

    2000-01-01

    We demonstrate a procedure in retrospective EPR dosimetry which allows for virtually nondestructive sample evaluation in terms of sample irradiations. For this procedure to work, it is shown that corrections must be made for cavity response characteristics when using variable mass samples. Likewise, methods are employed to correct for empty tube signals, sample anisotropy and frequency drift while considering the effects of dose distribution optimization. A demonstration of the method's utility is given by comparing sample portions evaluated using both the described methodology and standard full sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that by making all the recommended corrections, very small masses can be both accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed

  10. Small sample GEE estimation of regression parameters for longitudinal data.

    Science.gov (United States)

    Paul, Sudhir; Zhang, Xuemao

    2014-09-28

    Longitudinal (clustered) response data arise in many biostatistical applications and, in general, cannot be assumed to be independent. Generalized estimating equations (GEE) are a widely used method to estimate marginal regression parameters for correlated responses. The advantage of the GEE is that the estimates of the regression parameters are asymptotically unbiased even if the correlation structure is misspecified, although their small sample properties are not known. In this paper, two bias-adjusted GEE estimators of the regression parameters in longitudinal data are obtained for the case where the number of subjects is small. One is based on a bias correction, and the other on a bias reduction. Simulations show that the performances of the two bias-adjusted methods are similar in terms of bias, efficiency, coverage probability, average coverage length, impact of misspecification of the correlation structure, and impact of cluster size on bias correction. Both methods show superior properties over the GEE estimates for small samples. Further, analysis of data involving a small number of subjects shows improvement in bias, MSE, standard error, and length of the confidence interval of the estimates by the two bias-adjusted methods over the GEE estimates. For small to moderate sample sizes (N ≤ 50), either of the bias-corrected methods GEEBc and GEEBr can be used. However, GEEBc should be preferred over GEEBr, as the former is computationally easier. For large sample sizes, the GEE method can be used. Copyright © 2014 John Wiley & Sons, Ltd.
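
    The paper's bias-corrected estimators (GEEBc, GEEBr) are not available in standard libraries; as a hedged sketch of the baseline they improve on, a plain GEE fit on a small simulated panel is shown below, with statsmodels' bias-reduced covariance option addressing a related but distinct small-sample issue (all data and settings are assumptions):

      # Plain GEE fit on a small clustered dataset. The GEEBc/GEEBr estimators
      # from the paper are not implemented here; statsmodels does, however,
      # offer a bias-reduced (Mancl-DeRouen type) covariance for small samples.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n_subjects, n_times = 20, 4                 # deliberately small N

      ids = np.repeat(np.arange(n_subjects), n_times)
      x = rng.normal(size=n_subjects * n_times)
      b = rng.normal(scale=0.5, size=n_subjects)  # subject-level heterogeneity
      y = rng.poisson(np.exp(0.3 * x + b[ids]))

      X = sm.add_constant(pd.DataFrame({"x": x}))
      model = sm.GEE(y, X, groups=ids, family=sm.families.Poisson(),
                     cov_struct=sm.cov_struct.Exchangeable())
      result = model.fit(cov_type="bias_reduced")
      print(result.params)
      print(result.bse)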

  11. 78 FR 27442 - Coal Mine Dust Sampling Devices; Correction

    Science.gov (United States)

    2013-05-10

    ... DEPARTMENT OF LABOR Mine Safety and Health Administration Coal Mine Dust Sampling Devices; Correction AGENCY: Mine Safety and Health Administration, Labor. ACTION: Notice; correction. SUMMARY: On April 30, 2013, Mine Safety and Health Administration (MSHA) published a notice in the Federal Register...

  12. Gamma-ray self-attenuation corrections in environmental samples

    International Nuclear Information System (INIS)

    Robu, E.; Giovani, C.

    2009-01-01

    Gamma-spectrometry is a commonly used technique in environmental radioactivity monitoring. Frequently, the bulk samples to be measured differ with respect to composition and density from the reference sample used for efficiency calibration, and correction factors should be applied in these cases for activity measurement. Linear attenuation coefficients and self-absorption correction factors have been evaluated for soil, grass and liquid sources with different densities and geometries. (authors)

  13. Higher order QCD corrections in small x physics

    International Nuclear Information System (INIS)

    Chachamis, G.

    2006-11-01

    We study higher order QCD corrections in small x physics. The numerical implementation of the full NLO photon impact factor is the remaining necessary piece for testing the NLO BFKL resummation against data from physical processes, such as γ*γ* collisions. We perform the numerical integration over phase space for the virtual corrections to the NLO photon impact factor. This, along with the previously calculated real corrections, makes first estimates for the γ*γ* total cross section feasible in the near future, since the convolution of the full impact factor with the NLO BFKL gluon Green's function is now straightforward. The NLO corrections for the photon impact factor are sizeable and negative. In the second part of this thesis, we estimate higher order corrections to the BK equation. We are mainly interested in whether partonic saturation is delayed in rapidity when going beyond the leading order. In our investigation, we use the so-called 'rapidity veto', which forbids two emissions from being very close in rapidity, to 'switch on' higher order corrections to the BK equation. From analytic and numerical analysis, we conclude that saturation is indeed delayed in rapidity when higher order corrections are taken into account. In the last part, we investigate higher order QCD corrections as additional corrections to the electroweak (EW) sector. The question of whether BFKL corrections are of any importance in the Regge limit for the EW sector seems natural; although they arise at higher loop levels, the accumulation of logarithms in the energy s at high energies cannot be dismissed without investigation. We focus on the process γγ → ZZ. We calculate the pQCD corrections in the forward region at leading logarithmic (LL) BFKL accuracy, which are of the order of a few percent at the TeV energy scale. (orig.)

  14. Higher order QCD corrections in small x physics

    Energy Technology Data Exchange (ETDEWEB)

    Chachamis, G.

    2006-11-15

    We study higher order QCD corrections in small x physics. The numerical implementation of the full NLO photon impact factor is the remaining necessary piece for testing the NLO BFKL resummation against data from physical processes, such as γ*γ* collisions. We perform the numerical integration over phase space for the virtual corrections to the NLO photon impact factor. This, along with the previously calculated real corrections, makes first estimates for the γ*γ* total cross section feasible in the near future, since the convolution of the full impact factor with the NLO BFKL gluon Green's function is now straightforward. The NLO corrections for the photon impact factor are sizeable and negative. In the second part of this thesis, we estimate higher order corrections to the BK equation. We are mainly interested in whether partonic saturation is delayed in rapidity when going beyond the leading order. In our investigation, we use the so-called 'rapidity veto', which forbids two emissions from being very close in rapidity, to 'switch on' higher order corrections to the BK equation. From analytic and numerical analysis, we conclude that saturation is indeed delayed in rapidity when higher order corrections are taken into account. In the last part, we investigate higher order QCD corrections as additional corrections to the electroweak (EW) sector. The question of whether BFKL corrections are of any importance in the Regge limit for the EW sector seems natural; although they arise at higher loop levels, the accumulation of logarithms in the energy s at high energies cannot be dismissed without investigation. We focus on the process γγ → ZZ. We calculate the pQCD corrections in the forward region at leading logarithmic (LL) BFKL accuracy, which are of the order of a few percent at the TeV energy scale. (orig.)

  15. Decision Support on Small size Passive Samples

    Directory of Open Access Journals (Sweden)

    Vladimir Popukaylo

    2018-05-01

    Full Text Available A technique was developed for constructing adequate mathematical models for small passive samples, under conditions in which classical probabilistic-statistical methods do not allow valid conclusions to be obtained.

  16. Bulk sample self-attenuation correction by transmission measurement

    International Nuclear Information System (INIS)

    Parker, J.L.; Reilly, T.D.

    1976-01-01

    Various methods used in either finding or avoiding the attenuation correction in the passive γ-ray assay of bulk samples are reviewed. Detailed consideration is given to the transmission method, which involves experimental determination of the sample linear attenuation coefficient by measuring the transmission through the sample of a beam of gamma rays from an external source. The method was applied to box- and cylindrically-shaped samples
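
    For the far-field slab geometry, the transmission method reduces to a closed form (a standard textbook result used here for illustration; the report treats box and cylindrical geometries in more detail): with measured transmission T = I/I0 = exp(-mu*x), the self-attenuation correction factor is CF = -ln(T)/(1 - T). A minimal sketch:

      # Transmission-based self-attenuation correction, far-field slab geometry.
      # Measure the transmission T of an external gamma beam through the sample,
      # infer the linear attenuation coefficient, and correct the observed count
      # rate:  corrected = observed * CF,  CF = -ln(T) / (1 - T).

      import numpy as np

      def attenuation_correction(transmission: float) -> float:
          """Self-attenuation correction factor for a slab sample, far field."""
          if not 0.0 < transmission < 1.0:
              raise ValueError("transmission must be in (0, 1)")
          return -np.log(transmission) / (1.0 - transmission)

      I0, I = 12_000.0, 7_800.0        # counts without / with sample (illustrative)
      T = I / I0
      thickness_cm = 2.0
      mu = -np.log(T) / thickness_cm   # linear attenuation coefficient, 1/cm

      print(f"T = {T:.3f}, mu = {mu:.3f} /cm, CF = {attenuation_correction(T):.3f}")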

  17. Consensus of heterogeneous multi-agent systems based on sampled data with a small sampling delay

    International Nuclear Information System (INIS)

    Wang Na; Wu Zhi-Hai; Peng Li

    2014-01-01

    In this paper, consensus problems of heterogeneous multi-agent systems based on sampled data with a small sampling delay are considered. First, a consensus protocol based on sampled data with a small sampling delay for heterogeneous multi-agent systems is proposed. Then, algebraic graph theory, the matrix method, the stability theory of linear systems, and some other techniques are employed to derive the necessary and sufficient conditions guaranteeing that heterogeneous multi-agent systems asymptotically achieve the stationary consensus. Finally, simulations are performed to demonstrate the correctness of the theoretical results. (interdisciplinary physics and related areas of science and technology)

  18. Small sample whole-genome amplification

    Science.gov (United States)

    Hara, Christine; Nguyen, Christine; Wheeler, Elizabeth; Sorensen, Karen; Arroyo, Erin; Vrankovich, Greg; Christian, Allen

    2005-11-01

    Many challenges arise when trying to amplify and analyze human samples collected in the field due to limitations in sample quantity and contamination of the starting material. Tests such as DNA fingerprinting and mitochondrial typing require a certain sample size and are carried out in large volume reactions; in cases where insufficient sample is present, whole genome amplification (WGA) can be used. WGA allows very small quantities of DNA to be amplified in a way that enables subsequent DNA-based tests to be performed. A limiting step in WGA is sample preparation. To minimize the necessary sample size, we have developed two modifications of WGA: the first allows for an increase in amplified product from small, nanoscale, purified samples with the use of carrier DNA, while the second is a single-step method for cleaning and amplifying samples all in one column. Conventional DNA cleanup involves binding the DNA to silica, washing away impurities, and then releasing the DNA for subsequent testing. We have eliminated losses associated with incomplete sample release, thereby decreasing the required amount of starting template for DNA testing. Both techniques address the limitations of sample size by providing ample copies of genomic samples. Carrier DNA, included in our WGA reactions, can be used when amplifying samples with the standard purification method, or in conjunction with our single-step DNA purification technique to potentially further decrease the amount of starting sample necessary for future forensic DNA-based assays.

  19. Small-sample-worth perturbation methods

    International Nuclear Information System (INIS)

    1985-01-01

    It has been assumed that the perturbed region, R_p, is large enough so that: (1) even without a great deal of biasing there is a substantial probability that an average source-neutron will enter it; and (2) once having entered, the neutron is likely to make several collisions in R_p during its lifetime. Unfortunately, neither assumption is valid for the typical configurations one encounters in small-sample-worth experiments. In such experiments one measures the reactivity change which is induced when a very small void in a critical assembly is filled with a sample of some test material. Only a minute fraction of the fission-source neutrons ever gets into the sample and, of those neutrons that do, most emerge uncollided. Monte Carlo small-sample perturbation computations are described.

  20. Gaseous radiocarbon measurements of small samples

    International Nuclear Information System (INIS)

    Ruff, M.; Szidat, S.; Gaeggeler, H.W.; Suter, M.; Synal, H.-A.; Wacker, L.

    2010-01-01

    Radiocarbon dating by means of accelerator mass spectrometry (AMS) is a well-established method for samples containing carbon in the milligram range. However, the measurement of small samples containing less than 50 μg carbon often fails. It is difficult to graphitise these samples and the preparation is prone to contamination. To avoid graphitisation, a solution can be the direct measurement of carbon dioxide. The MICADAS, the smallest accelerator for radiocarbon dating in Zurich, is equipped with a hybrid Cs sputter ion source. It allows the measurement of both graphite targets and gaseous CO2 samples without any rebuilding. This work presents experience in dealing with small samples containing 1-40 μg carbon. 500 unknown samples from different environmental research fields have been measured so far. Most of the samples were measured with the gas ion source. These data are compared with earlier measurements of small graphite samples. The performance of the two different techniques is discussed and the main contributions to the blank are determined. An analysis of blank and standard data measured over several years allowed a quantification of the contamination, which was found to be of the order of 55 ng and 750 ng carbon (50 pMC) for the gaseous and the graphite samples, respectively. For quality control, a number of certified standards were measured using the gas ion source to demonstrate the reliability of the data.

  1. Attenuation correction for the NIH ATLAS small animal PET scanner

    CERN Document Server

    Yao, Rutao; Liow, JeihSan; Seidel, Jurgen

    2003-01-01

    We evaluated two methods of attenuation correction for the NIH ATLAS small animal PET scanner: 1) a CT-based method that derives 511 keV attenuation coefficients (μ) by extrapolation from spatially registered CT images; and 2) an analytic method based on the body outline of emission images and an empirical μ. A specially fabricated attenuation calibration phantom with cylindrical inserts that mimic different body tissues was used to derive the relationship to convert CT values to μ for PET. The methods were applied to three test data sets: 1) a uniform cylinder phantom, 2) the attenuation calibration phantom, and 3) a mouse injected with [18F]FDG. The CT-based attenuation correction factors were larger in non-uniform regions of the imaging subject, e.g. the mouse head, than those of the analytic method. The two methods had similar correction factors for regions with uniform density and detectable emission source distributions.
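
    A common way to implement the CT-based step is a piecewise-linear ("bilinear") mapping from CT numbers to 511 keV attenuation coefficients. The sketch below uses that generic form; the break point, slopes and water value are illustrative assumptions, not the calibration actually derived from the phantom in this work:

      # Piecewise-linear conversion of CT numbers (HU) to 511 keV linear
      # attenuation coefficients, then the attenuation correction factor for
      # one line of response: ACF = exp( sum(mu * dl) ).
      # Break point and slopes are illustrative assumptions; in practice they
      # are fitted to a calibration phantom with tissue-mimicking inserts.

      import numpy as np

      MU_WATER_511 = 0.096  # 1/cm at 511 keV (approximate literature value)

      def hu_to_mu511(hu: np.ndarray) -> np.ndarray:
          mu = np.where(
              hu <= 0,
              MU_WATER_511 * (1.0 + hu / 1000.0),        # air .. water segment
              MU_WATER_511 * (1.0 + 0.5 * hu / 1000.0),  # water .. bone, shallower slope
          )
          return np.clip(mu, 0.0, None)

      # attenuation correction factor along a line of response, 0.5 mm steps
      hu_profile = np.array([-1000, -50, 30, 40, 900, 40, -1000], dtype=float)
      dl_cm = 0.05
      acf = np.exp(np.sum(hu_to_mu511(hu_profile)) * dl_cm)
      print(f"ACF = {acf:.3f}")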

  2. An improved correlated sampling method for calculating correction factor of detector

    International Nuclear Information System (INIS)

    Wu Zhen; Li Junli; Cheng Jianping

    2006-01-01

    In the case of a small detector lying inside a bulk of medium, there are two problems in calculating the correction factors of the detector: one is that the detector is too small for the particles to arrive at and collide in; the other is that the ratio of the two quantities is not accurate enough. The method discussed in this paper, which combines correlated sampling with modified particle collision auto-importance sampling and has been implemented on the MCNP-4C platform, can solve these two problems. In addition, three other variance reduction techniques are each combined with correlated sampling to calculate a simple model of the detector correction factors. The results prove that, although all the variance reduction techniques combined with correlated sampling can improve the computational efficiency, the method combining modified particle collision auto-importance sampling with correlated sampling is the most efficient one. (authors)

  3. A Geology Sampling System for Small Bodies

    Science.gov (United States)

    Naids, Adam J.; Hood, Anthony D.; Abell, Paul; Graff, Trevor; Buffington, Jesse

    2016-01-01

    Human exploration of microgravity bodies is being investigated as a precursor to a Mars surface mission. Asteroids, comets, dwarf planets, and the moons of Mars all fall into this microgravity category, and some are being discussed as potential mission targets. Obtaining geological samples for return to Earth will be a major objective for any mission to a small body. Currently, the knowledge base for geology sampling in microgravity is in its infancy. Humans interacting with non-engineered surfaces in a microgravity environment pose unique challenges. In preparation for such missions, a team at the NASA Johnson Space Center has been working to gain experience on how to safely obtain numerous sample types in such an environment. This paper describes the types of samples the science community is interested in, highlights notable prototype work, and discusses an integrated geology sampling solution.

  4. Accelerator mass spectrometry of small biological samples.

    Science.gov (United States)

    Salehpour, Mehran; Forsgard, Niklas; Possnert, Göran

    2008-12-01

    Accelerator mass spectrometry (AMS) is an ultra-sensitive technique for isotopic ratio measurements. In the biomedical field, AMS can be used to measure femtomolar concentrations of labeled drugs in body fluids, with direct applications in early drug development such as microdosing. Likewise, the regenerative properties of cells, which are of fundamental significance in stem-cell research, can be determined with an accuracy of a few years by AMS analysis of human DNA. However, AMS nominally requires about 1 mg of carbon per sample, which is not always available when dealing with specific body substances such as localized, organ-specific DNA samples. Consequently, it is of analytical interest to develop methods for the routine analysis of small samples in the range of a few tens of μg. We have used a 5 MV Pelletron tandem accelerator to study small biological samples using AMS. Different methods are presented and compared. A 12C-carrier sample preparation method is described which is potentially more sensitive and less susceptible to contamination than the standard procedures.

  5. Small Scale Yielding Correction of Constraint Loss in Small Sized Fracture Toughness Test Specimens

    International Nuclear Information System (INIS)

    Kim, Maan Won; Kim, Min Chul; Lee, Bong Sang; Hong, Jun Hwa

    2005-01-01

    Fracture toughness data in the ductile-brittle transition region of ferritic steels show scatter produced by local sampling effects and a specimen geometry dependence which results from relaxation in crack tip constraint. ASTM E1921 provides a standard test method to define the median toughness temperature curve, the so-called Master Curve, for the material corresponding to a 1T crack front length, and also defines a reference temperature, T0, at which the median toughness is 100 MPa√m for a 1T size specimen. The ASTM E1921 procedures assume that high-constraint, small-scale yielding (SSY) conditions prevail at fracture along the crack front. Violation of the SSY assumption occurs most often during tests of smaller specimens. Constraint loss in such cases leads to higher toughness values and thus lower T0 values. When applied to a structure with low-constraint geometry, the standard fracture toughness estimates may be strongly over-conservative. Many efforts have been made to adjust for the constraint effect. In this work, we applied a small-scale yielding correction (SSYC) to adjust for the constraint loss of 1/3PCVN and PCVN specimens, which are relatively smaller than the 1T size specimen, in the fracture toughness Master Curve test
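
    For orientation, the E1921 Master Curve shape and the 1T size adjustment can be written in a few lines (standard published forms of the equations; the T0 value and toughness numbers below are illustrative):

      # ASTM E1921 Master Curve: median toughness vs. temperature for a 1T
      # specimen,
      #     K_med(T) = 30 + 70 * exp(0.019 * (T - T0))   [MPa*sqrt(m), T in C]
      # and the statistical size adjustment to 1T equivalence,
      #     K_1T = 20 + (K_x - 20) * (B_x / B_1T) ** 0.25
      # Both are standard published forms; the values below are illustrative.

      import math

      def k_median_1t(temp_c: float, t0_c: float) -> float:
          return 30.0 + 70.0 * math.exp(0.019 * (temp_c - t0_c))

      def to_1t_equivalent(k_x: float, b_x_mm: float, b_1t_mm: float = 25.4) -> float:
          return 20.0 + (k_x - 20.0) * (b_x_mm / b_1t_mm) ** 0.25

      t0 = -80.0                                        # reference temperature, deg C
      print(k_median_1t(t0, t0))                        # 100.0 by construction
      print(to_1t_equivalent(k_x=150.0, b_x_mm=10.0))   # PCVN-size datum scaled to 1T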

  6. Privacy problems in the small sample selection

    Directory of Open Access Journals (Sweden)

    Loredana Cerbara

    2013-05-01

    Full Text Available The side of social research that uses small samples for the production of micro data today encounters operating difficulties due to the privacy law. The privacy code is an important and necessary law because it guarantees the rights of Italian citizens, as already happens in other countries of the world. However, it does not seem appropriate to limit once more the data production possibilities of the national research centres. Those possibilities are, moreover, already compromised by insufficient funds, a problem that is becoming more and more frequent in the research field. It would therefore be necessary to include in the law the possibility of using telephone lists to select samples useful for activities directly of interest and importance to the citizen, such as data collection carried out on the basis of opinion polls by the research centres of the Italian CNR and some universities.

  7. The modular small-angle X-ray scattering data correction sequence.

    Science.gov (United States)

    Pauw, B R; Smith, A J; Snow, T; Terrill, N J; Thünemann, A F

    2017-12-01

    Data correction is probably the least favourite activity amongst users experimenting with small-angle X-ray scattering: if it is not done sufficiently well, this may become evident only during the data analysis stage, necessitating the repetition of the data corrections from scratch. A recommended comprehensive sequence of elementary data correction steps is presented here to alleviate the difficulties associated with data correction, both in the laboratory and at the synchrotron. When applied in the proposed order to the raw signals, the resulting absolute scattering cross section will provide a high degree of accuracy for a very wide range of samples, with its values accompanied by uncertainty estimates. The method can be applied without modification to any pinhole-collimated instruments with photon-counting direct-detection area detectors.
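
    A heavily reduced sketch of such a correction chain (the paper specifies the complete, recommended sequence with uncertainty propagation; the subset, ordering and names below are illustrative assumptions):

      # Reduced SAXS data correction chain (illustrative subset): dark-current
      # subtraction, normalization by exposure time, flux and transmission,
      # background subtraction, then absolute scaling with a calibrant factor.
      # Real reductions include more steps (solid angle, polarization, ...) and
      # propagate uncertainties alongside each one.

      import numpy as np

      def correct_saxs(raw, dark, t_exp, flux, transmission, bkg, abs_factor):
          """Return a crude absolute-scale scattering signal from raw counts."""
          sig = (raw - dark) / (t_exp * flux * transmission)  # normalized signal
          sig = sig - bkg                                     # bkg already normalized
          return sig * abs_factor                             # to absolute units, 1/cm

      rng = np.random.default_rng(3)
      raw = rng.poisson(200.0, size=1000).astype(float)       # fake detector counts
      dark = np.full(1000, 5.0)
      bkg = np.full(1000, 0.02)

      i_abs = correct_saxs(raw, dark, t_exp=10.0, flux=1e9, transmission=0.8,
                           bkg=bkg, abs_factor=1.0e7)
      print(i_abs[:5])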

  8. TableSim--A program for analysis of small-sample categorical data.

    Science.gov (United States)

    David J. Rugg

    2003-01-01

    Documents a computer program for calculating correct P-values for 1-way and 2-way tables when sample sizes are small. The program is written in Fortran 90; the executable code runs in 32-bit Microsoft command-line environments.
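
    For readers without the Fortran program, the exact-test idea for a small 2x2 table is available off the shelf; the sketch below uses scipy's Fisher exact test as a stand-in and is not TableSim's own algorithm:

      # Exact P-value for a small-sample 2x2 contingency table. With counts
      # this small, the chi-square approximation is unreliable; Fisher's exact
      # test enumerates the hypergeometric distribution of the table instead.

      from scipy.stats import chi2_contingency, fisher_exact

      table = [[3, 9],
               [7, 2]]   # tiny illustrative counts

      odds_ratio, p_exact = fisher_exact(table, alternative="two-sided")
      chi2, p_approx, dof, _ = chi2_contingency(table)

      print(f"Fisher exact p = {p_exact:.4f}")
      print(f"chi-square approx p = {p_approx:.4f} (dof={dof})")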

  9. Transportable high sensitivity small sample radiometric calorimeter

    International Nuclear Information System (INIS)

    Wetzel, J.R.; Biddle, R.S.; Cordova, B.S.; Sampson, T.E.; Dye, H.R.; McDow, J.G.

    1998-01-01

    A new small-sample, high-sensitivity transportable radiometric calorimeter, which can be operated in different modes, contains an electrical calibration method, and can be used to develop secondary standards, will be described in this presentation. Data taken from preliminary tests will be presented to indicate the precision and accuracy of the instrument. The calorimeter and temperature-controlled bath, at present, require only a 30-in. by 20-in. tabletop area. The calorimeter is operated from a laptop computer system using a unique measurement module capable of monitoring all necessary calorimeter signals. The calorimeter can be operated in the normal calorimeter equilibration mode or, using twin chambers and an external electrical calibration method, as a comparison instrument. The sample chamber is 0.75 in. (1.9 cm) in diameter by 2.5 in. (6.35 cm) long. This size will accommodate most 238Pu heat standards manufactured in the past. The power range runs from 0.001 W to <20 W; the high end is limited only by sample size

  10. Exploratory Factor Analysis With Small Samples and Missing Data.

    Science.gov (United States)

    McNeish, Daniel

    2017-01-01

    Exploratory factor analysis (EFA) is an extremely popular method for determining the underlying factor structure for a set of variables. Due to its exploratory nature, EFA is notorious for being conducted with small sample sizes, and recent reviews of psychological research have reported that between 40% and 60% of applied studies have 200 or fewer observations. Recent methodological studies have addressed small sample size requirements for EFA models; however, these studies have only considered complete data, which are the exception rather than the rule in psychology. Furthermore, the extant literature on missing data techniques with small samples is scant, and nearly all existing studies focus on topics that are not of primary interest to EFA models. Therefore, this article presents a simulation to assess the performance of various missing data techniques for EFA models with both small samples and missing data. Results show that deletion methods do not extract the proper number of factors and estimate the factor loadings with severe bias, even when data are missing completely at random. Predictive mean matching is the best method overall when considering extracting the correct number of factors and estimating factor loadings without bias, although 2-stage estimation was a close second.

  11. Empirical method for matrix effects correction in liquid samples

    International Nuclear Information System (INIS)

    Vigoda de Leyt, Dora; Vazquez, Cristina

    1987-01-01

    A simple method for the determination of Cr, Ni and Mo in stainless steels is presented. In order to minimize matrix effects, the conditions of a liquid system for dissolving stainless steel chips have been developed. Pure element solutions were used as standards. Preparation of synthetic solutions containing all the elements of the steel, as well as mathematical corrections, is thereby avoided, resulting in a simple chemical operation which simplifies the method of analysis. The variance analysis of the results obtained with steel samples shows that the three elements may be determined by comparison with the analytical curves obtained with the pure elements if the same parameters are used in the calibration curves. The accuracy and the precision were checked against other techniques using the British Chemical Standards of the Bureau of Analysed Samples Ltd. (England). (M.E.L.) [es

  12. Correction of Sample-Time Error for Time-Interleaved Sampling System Using Cubic Spline Interpolation

    Directory of Open Access Journals (Sweden)

    Qin Guo-jie

    2014-08-01

    Full Text Available Sample-time errors can greatly degrade the dynamic range of a time-interleaved sampling system. In this paper, a novel correction technique employing cubic spline interpolation is proposed for inter-channel sample-time error compensation. The cubic spline interpolation compensation filter is developed in the form of a finite-impulse response (FIR) filter structure, and the method for correcting the interpolation compensation filter coefficients is deduced. A 4 GS/s two-channel, time-interleaved ADC prototype system has been implemented to evaluate the performance of the technique. The experimental results showed that the correction technique is effective in attenuating the spurs and improving the dynamic performance of the system.
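
    A toy version of the underlying idea, using an explicit cubic spline in place of the paper's FIR filter realization (the skew value and test signal are assumptions):

      # Two-channel time-interleaved ADC with a sample-time (skew) error delta
      # on channel B: resample channel B onto its ideal grid with a cubic
      # spline, then re-interleave. This shows the interpolation idea; the
      # paper realizes it as an FIR compensation filter instead.

      import numpy as np
      from scipy.interpolate import CubicSpline

      f0, delta = 0.07, 0.03                   # tone (cycles/sample), skew (samples)
      n = np.arange(0, 512, 2)

      chan_a = np.sin(2 * np.pi * f0 * n)                  # on-grid samples
      chan_b = np.sin(2 * np.pi * f0 * (n + 1 + delta))    # skewed samples

      # correct channel B: spline through actual instants, evaluate ideal ones
      spline = CubicSpline(n + 1 + delta, chan_b)
      chan_b_corr = spline(n + 1)

      interleaved = np.empty(len(n) * 2)
      interleaved[0::2], interleaved[1::2] = chan_a, chan_b_corr

      ideal = np.sin(2 * np.pi * f0 * np.arange(len(interleaved)))
      print(f"max residual after correction: {np.max(np.abs(interleaved - ideal)):.2e}")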

  13. ASSESSING SMALL SAMPLE WAR-GAMING DATASETS

    Directory of Open Access Journals (Sweden)

    W. J. HURLEY

    2013-10-01

    Full Text Available One of the fundamental problems faced by military planners is the assessment of changes to force structure. An example is whether to replace an existing capability with an enhanced system. This can be done directly with a comparison of measures such as accuracy, lethality, survivability, etc. However, this approach does not allow an assessment of the force multiplier effects of the proposed change. To gauge these effects, planners often turn to war-gaming. For many war-gaming experiments, it is expensive, both in terms of time and dollars, to generate a large number of sample observations. This puts a premium on the statistical methodology used to examine these small datasets. In this paper we compare the power of three tests to assess population differences: the Wald-Wolfowitz test, the Mann-Whitney U test, and resampling. We employ a series of Monte Carlo simulation experiments. Not unexpectedly, we find that the Mann-Whitney test performs better than the Wald-Wolfowitz test. Resampling is judged to perform slightly better than the Mann-Whitney test.
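
    A compact Monte Carlo power comparison in the same spirit (the scenarios, effect sizes and permutation details of the paper differ; everything below is an illustrative setup):

      # Monte Carlo power estimate for two small-sample tests of a location
      # shift: the Mann-Whitney U test and a permutation test on the
      # difference of means.

      import numpy as np
      from scipy.stats import mannwhitneyu

      rng = np.random.default_rng(4)
      n_per_group, shift, alpha, n_sims, n_perm = 8, 1.0, 0.05, 500, 499

      def perm_pvalue(x, y):
          observed = x.mean() - y.mean()
          pooled = np.concatenate([x, y])
          count = 0
          for _ in range(n_perm):
              rng.shuffle(pooled)
              if abs(pooled[:len(x)].mean() - pooled[len(x):].mean()) >= abs(observed):
                  count += 1
          return (count + 1) / (n_perm + 1)

      hits_mw = hits_perm = 0
      for _ in range(n_sims):
          x = rng.normal(shift, 1.0, n_per_group)
          y = rng.normal(0.0, 1.0, n_per_group)
          hits_mw += mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha
          hits_perm += perm_pvalue(x, y) < alpha

      print(f"power Mann-Whitney: {hits_mw / n_sims:.2f}")
      print(f"power permutation:  {hits_perm / n_sims:.2f}")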

  14. Effect of sample size on bias correction performance

    Science.gov (United States)

    Reiter, Philipp; Gutjahr, Oliver; Schefczyk, Lukas; Heinemann, Günther; Casper, Markus C.

    2014-05-01

    The output of climate models often shows a bias when compared to observed data, so that preprocessing is necessary before using it as climate forcing in impact modeling (e.g. hydrology, species distribution). A common bias correction method is the quantile matching approach, which adapts the cumulative distribution function of the model output to that of the observed data by means of a transfer function. Especially for precipitation, we expect the bias correction performance to depend strongly on sample size, i.e. the length of the period used for calibration of the transfer function. We carry out experiments using the precipitation output of ten regional climate model (RCM) hindcast runs from the EU-ENSEMBLES project and the E-OBS observational dataset for the period 1961 to 2000. The 40 years are split into a 30 year calibration period and a 10 year validation period. In the first step, for each RCM, transfer functions are set up cell by cell using the complete 30 year calibration period. The derived transfer functions are applied to the validation period of the respective RCM precipitation output and the mean absolute errors in reference to the observational dataset are calculated. These values are treated as "best fit" for the respective RCM. In the next step, this procedure is redone using subperiods out of the 30 year calibration period. The lengths of these subperiods are reduced from 29 years down to a minimum of 1 year, only considering subperiods of consecutive years. This leads to an increasing number of repetitions for smaller sample sizes (e.g. 2 for a length of 29 years). In the last step, the mean absolute errors are statistically tested against the "best fit" of the respective RCM to compare the performances. In order to analyze whether the intensity of the effect of sample size depends on the chosen correction method, four variations of the quantile matching approach (PTF, QUANT/eQM, gQM, GQM) are applied in this study. The experiments are further
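
    Empirical quantile matching itself fits in a few lines (a generic empirical-CDF variant assuming stationarity; the four variants compared in the study, PTF, QUANT/eQM, gQM and GQM, differ in how they parameterize this transfer function):

      # Empirical quantile matching: build a transfer function that maps the
      # model's calibration-period quantiles onto the observed ones, then
      # apply it to (here, synthetic) validation-period model output.

      import numpy as np

      rng = np.random.default_rng(5)

      obs_cal = rng.gamma(2.0, 3.0, size=10_000)        # "observed" precipitation
      mod_cal = rng.gamma(2.0, 4.0, size=10_000) + 1.0  # biased model, calibration
      mod_val = rng.gamma(2.0, 4.0, size=3_650) + 1.0   # model output to correct

      probs = np.linspace(0.01, 0.99, 99)
      q_mod = np.quantile(mod_cal, probs)
      q_obs = np.quantile(obs_cal, probs)

      # transfer function: piecewise-linear interpolation between matched quantiles
      corrected = np.interp(mod_val, q_mod, q_obs)

      print(f"mean  obs={obs_cal.mean():.2f}  raw={mod_val.mean():.2f}  corr={corrected.mean():.2f}")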

  15. Multivariate correction in laser-enhanced ionization with laser sampling

    International Nuclear Information System (INIS)

    Popov, A.M.; Labutin, T.A.; Sychev, D.N.; Gorbatenko, A.A.; Zorov, N.B.

    2007-01-01

    The opportunity of normalizing laser-enhanced ionization (LEI) signals by several reference signals (RS) measured simultaneously has been examined with a view to correcting for variations in laser parameters and for matrix interferences. Opto-acoustic, atomic emission and non-selective ionization signals, and their paired combinations, were used as RS for Li determination in aluminum alloys (0-6% Mg, 0-5% Cu, 0-1% Sc, 0-1% Ag). A specific normalization procedure has been proposed for the case of essential multicollinearity among the RS: the LEI signal and the RS for each given ablation pulse energy were plotted in Cartesian co-ordinates (x and y axes: the RS values; z axis: the LEI signal). It was found that in this three-dimensional space the slope of the correlation line with respect to the plane of the RS depends on the analyte content in the solid sample. The use of this slope has therefore been proposed as a multivariate-corrected analytical signal. Multivariate correlative normalization provides an analytical signal free of matrix interferences for Al-Mg-Cu-Li alloys. The application of this novel approach to the determination of Li allows plotting unified calibration curves for Al alloys of different matrix composition

  16. Multivariate correction in laser-enhanced ionization with laser sampling

    Energy Technology Data Exchange (ETDEWEB)

    Popov, A.M. [Department of Chemistry, M. V. Lomonosov Moscow State University, 119992 Russia Moscow GSP-2, Leninskie Gory 1 build.3 (Russian Federation); Labutin, T.A. [Department of Chemistry, M. V. Lomonosov Moscow State University, 119992 Russia Moscow GSP-2, Leninskie Gory 1 build.3 (Russian Federation)], E-mail: timurla@laser.chem.msu.ru; Sychev, D.N.; Gorbatenko, A.A.; Zorov, N.B. [Department of Chemistry, M. V. Lomonosov Moscow State University, 119992 Russia Moscow GSP-2, Leninskie Gory 1 build.3 (Russian Federation)

    2007-03-15

    The opportunity of normalizing laser-enhanced ionization (LEI) signals by several reference signals (RS) measured simultaneously has been examined with a view to correcting for variations in laser parameters and for matrix interferences. Opto-acoustic, atomic emission and non-selective ionization signals, and their paired combinations, were used as RS for Li determination in aluminum alloys (0-6% Mg, 0-5% Cu, 0-1% Sc, 0-1% Ag). A specific normalization procedure has been proposed for the case of essential multicollinearity among the RS: the LEI signal and the RS for each given ablation pulse energy were plotted in Cartesian co-ordinates (x and y axes: the RS values; z axis: the LEI signal). It was found that in this three-dimensional space the slope of the correlation line with respect to the plane of the RS depends on the analyte content in the solid sample. The use of this slope has therefore been proposed as a multivariate-corrected analytical signal. Multivariate correlative normalization provides an analytical signal free of matrix interferences for Al-Mg-Cu-Li alloys. The application of this novel approach to the determination of Li allows plotting unified calibration curves for Al alloys of different matrix composition.

  17. Pierre Gy's sampling theory and sampling practice heterogeneity, sampling correctness, and statistical process control

    CERN Document Server

    Pitard, Francis F

    1993-01-01

    Pierre Gy's Sampling Theory and Sampling Practice, Second Edition is a concise, step-by-step guide for process variability management and methods. Updated and expanded, this new edition provides a comprehensive study of heterogeneity, covering the basic principles of sampling theory and its various applications. It presents many practical examples to allow readers to select appropriate sampling protocols and assess the validity of sampling protocols from others. The variability of dynamic process streams using variography is discussed to help bridge sampling theory with statistical process control. Many descriptions of good sampling devices, as well as descriptions of poor ones, are featured to educate readers on what to look for when purchasing sampling systems. The book uses its accessible, tutorial style to focus on professional selection and use of methods. The book will be a valuable guide for mineral processing engineers; metallurgists; geologists; miners; chemists; environmental scientists; and practit...

  18. 78 FR 45051 - Small Business Size Standards; Support Activities for Mining; Correction

    Science.gov (United States)

    2013-07-26

    ... Regulations by increasing small business size standards for three of the four industries in North American... SMALL BUSINESS ADMINISTRATION 13 CFR Part 121 RIN 3245-AG44 Small Business Size Standards; Support Activities for Mining; Correction AGENCY: U.S. Small Business Administration. ACTION: Final rule; correction...

  19. Maybe Small Is Too Small a Term: Introduction to Advancing Small Sample Prevention Science.

    Science.gov (United States)

    Fok, Carlotta Ching Ting; Henry, David; Allen, James

    2015-10-01

    Prevention research addressing health disparities often involves work with small population groups experiencing such disparities. The goals of this special section are to (1) address the question of what constitutes a small sample; (2) identify some of the key research design and analytic issues that arise in prevention research with small samples; (3) develop applied, problem-oriented, and methodologically innovative solutions to these design and analytic issues; and (4) evaluate the potential role of these innovative solutions in describing phenomena, testing theory, and evaluating interventions in prevention research. Through these efforts, we hope to promote broader application of these methodological innovations. We also seek, whenever possible, to explore their implications for more general problems that appear in research with small samples but concern all areas of prevention research. This special section comprises two parts: the first provides input for researchers at the design phase, while the second focuses on analysis. Each article describes an innovative solution to one or more challenges posed by the analysis of small samples, with special emphasis on testing for intervention effects in prevention research. A concluding article summarizes some of the broader implications, along with conclusions regarding future directions in research with small samples in prevention science. Finally, a commentary provides the perspective of the federal agencies that sponsored the conference that gave rise to this special section.

  20. The Accuracy of Inference in Small Samples of Dynamic Panel Data Models

    NARCIS (Netherlands)

    Bun, M.J.G.; Kiviet, J.F.

    2001-01-01

    Through Monte Carlo experiments, the small sample behavior of various inference techniques for dynamic panel data models is examined when both the time-series and cross-section dimensions of the data set are small. The LSDV technique and corrected versions of it are compared with IV and GMM

  1. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling.

    Science.gov (United States)

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C; Joyce, Kevin P; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pK_a correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.

  2. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling

    Science.gov (United States)

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C.; Joyce, Kevin P.; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pK_a correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.

  3. A Monte Carlo procedure for Hamiltonians with small nonlocal correction terms

    International Nuclear Information System (INIS)

    Mack, G.; Pinn, K.

    1986-03-01

    We consider lattice field theories whose Hamiltonians contain small nonlocal correction terms. We propose to do simulations for an auxiliary polymer system with field-dependent activities. If a nonlocal correction term to the Hamiltonian is small, it needs to be evaluated only rarely. (orig.)

  4. Nonlinear correction to the longitudinal structure function at small x

    International Nuclear Information System (INIS)

    Boroun, G.R.

    2010-01-01

    We computed the longitudinal proton structure function F_L, using the nonlinear Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (NLDGLAP) evolution equation approach at small x. For the gluon distribution, the nonlinear effects are related to the longitudinal structure function. As the very small-x behavior of the gluon distribution is obtained by solving the Gribov, Levin, Ryskin, Mueller and Qiu (GLR-MQ) evolution equation with the nonlinear shadowing term incorporated, we show that the strong rise that corresponds to the linear QCD evolution equations can be tamed by screening effects. Consequently, the obtained longitudinal structure function shows a tamed growth at small x. We computed the predictions for all details of the nonlinear longitudinal structure function in the kinematic range where it has been measured by the H1 Collaboration and made comparisons with the computation by Moch, Vermaseren and Vogt at the second order with input data from the MRST QCD fit. (orig.)

  5. Importance of Attenuation Correction (AC) for Small Animal PET Imaging

    DEFF Research Database (Denmark)

    El Ali, Henrik H.; Bodholdt, Rasmus Poul; Jørgensen, Jesper Tranekjær

    2012-01-01

    was performed. Methods: Ten NMRI nude mice with subcutaneous implantation of human breast cancer cells (MCF-7) were scanned consecutively in small animal PET and CT scanners (MicroPET™ Focus 120 and ImTek's MicroCAT™ II). CT-based AC, PET-based AC and uniform AC methods were compared. Results: The activity

  6. A simple method of correcting for variation of sample thickness in the determination of the activity of environmental samples by gamma spectrometry

    International Nuclear Information System (INIS)

    Galloway, R.B.

    1991-01-01

    Gamma ray spectrometry is a well established method of determining the activity of radioactive components in environmental samples. It is usual to maintain precisely the same counting geometry in measurements on samples under investigation as in the calibration measurements on standard materials of known activity, thus avoiding perceived uncertainties and complications in correcting for changes in counting geometry. However this may not always be convenient if, as on some occasions, only a small quantity of sample material is available for analysis. A procedure which avoids re-calibration for each sample size is described and is shown to be simple to use without significantly reducing the accuracy of measurement of the activity of typical environmental samples. The correction procedure relates to the use of cylindrical samples at a constant distance from the detector, the samples all having the same diameter but various thicknesses being permissible. (author)
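
    The thickness dependence such a procedure must capture is, to first order, the slab self-absorption factor f(t) = (1 - exp(-mu*t))/(mu*t), a standard approximation assumed here for illustration (the paper's procedure is validated empirically and may differ in detail):

      # Referring a measurement at sample thickness t to a calibration done at
      # thickness t0, using the slab self-absorption factor
      #     f(t) = (1 - exp(-mu * t)) / (mu * t)
      # Activity estimate: A = A_apparent * f(t0) / f(t).

      import numpy as np

      def self_absorption(mu_cm: float, t_cm: float) -> float:
          x = mu_cm * t_cm
          return (1.0 - np.exp(-x)) / x

      mu = 0.15                 # linear attenuation coefficient, 1/cm (assumed)
      t_cal, t_smp = 4.0, 1.5   # calibration and sample thicknesses, cm
      a_apparent = 52.0         # Bq, activity computed with the t_cal calibration

      a_corrected = a_apparent * self_absorption(mu, t_cal) / self_absorption(mu, t_smp)
      print(f"corrected activity: {a_corrected:.1f} Bq")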

  7. Cerebral Small Vessel Disease: Cognition, Mood, Daily Functioning, and Imaging Findings from a Small Pilot Sample

    Directory of Open Access Journals (Sweden)

    John G. Baker

    2012-04-01

    Full Text Available Cerebral small vessel disease, a leading cause of cognitive decline, is considered a relatively homogeneous disease process, and it can co-occur with Alzheimer's disease. Clinical reports of magnetic resonance imaging (MRI)/computed tomography (CT) and single photon emission computed tomography (SPECT) imaging and neuropsychological testing for a small pilot sample of 14 patients are presented to illustrate disease characteristics through findings from structural and functional imaging and cognitive assessment. Participants showed some decreases in executive functioning, attention, processing speed, and memory retrieval, consistent with the previous literature. An older subgroup showed lower age-corrected scores at a single time point compared to younger participants. Performance on a computer-administered cognitive measure showed a slight overall decline over a period of 8–28 months. For a case study with mild neuropsychological findings, the MRI report was normal while the SPECT report identified perfusion abnormalities. Future research can test whether advances in imaging analysis allow for identification of cerebral small vessel disease before changes are detected in cognition.

  8. An Improvement to Interval Estimation for Small Samples

    Directory of Open Access Journals (Sweden)

    SUN Hui-Ling

    2017-02-01

    Full Text Available Because it is difficult and complex to determine the probability distribution of small samples, it is improper to use traditional probability theory for parameter estimation with small samples. The Bayes Bootstrap method is often used in practice. However, the Bayes Bootstrap method has its own limitations. In this article an improvement to the Bayes Bootstrap method is given: the method extends the amount of data by numerical simulation, without changing the circumstances of the original small sample, and can give accurate interval estimates for small samples. Finally, Monte Carlo simulation is used to model specific small sample problems. The effectiveness and practicability of the improved Bootstrap method are demonstrated.
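
    For comparison, the baseline being improved on looks roughly like this (a generic Bayesian bootstrap percentile interval, not the improved method of the article):

      # Bayesian bootstrap percentile interval for the mean of a small sample:
      # instead of resampling observations, draw Dirichlet(1,...,1) weights
      # and form weighted means, then read off percentile bounds.

      import numpy as np

      rng = np.random.default_rng(6)
      x = np.array([4.1, 5.3, 3.8, 6.0, 4.9, 5.5, 4.4])  # small sample, n = 7

      n_draws = 10_000
      weights = rng.dirichlet(np.ones(len(x)), size=n_draws)  # one weight vector per draw
      means = weights @ x

      lo, hi = np.percentile(means, [2.5, 97.5])
      print(f"95% Bayesian bootstrap interval for the mean: ({lo:.2f}, {hi:.2f})")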

  9. Correction for sample self-absorption in activity determination by gamma spectrometry

    International Nuclear Information System (INIS)

    Galloway, R.B.

    1991-01-01

    Gamma ray spectrometry is a convenient method of determining the activity of the radioactive components in environmental samples. Commonly samples vary in gamma absorption or differ in absorption from the calibration standards available, so that accurate correction for self-absorption in the sample is essential. A versatile correction procedure is described. (orig.)

  10. 40 CFR 1065.690 - Buoyancy correction for PM sample media.

    Science.gov (United States)

    2010-07-01

    ... mass, use a sample media density of 920 kg/m3. (3) For PTFE membrane (film) media with an integral... media. 1065.690 Section 1065.690 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Buoyancy correction for PM sample media. (a) General. Correct PM sample media for their buoyancy in air if...
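
    The excerpt above is fragmentary; for orientation, the buoyancy correction in 40 CFR 1065.690 is conventionally of the form m_cor = m_uncor * (1 - rho_air/rho_weight) / (1 - rho_air/rho_media). The sketch below assumes that form and the 920 kg/m3 PTFE media density quoted above; the air and calibration-weight densities are placeholder values, not the regulation's prescribed ones.

    ```python
    def pm_buoyancy_correction(m_uncor, rho_air=1.2, rho_weight=8000.0,
                               rho_media=920.0):
        """Correct a PM sample media mass reading for buoyancy in air.

        rho_air    : air density at weighing conditions, kg/m^3
        rho_weight : density of the balance calibration weight, kg/m^3
        rho_media  : sample media density, kg/m^3 (920 for PTFE membrane)
        """
        return m_uncor * (1 - rho_air / rho_weight) / (1 - rho_air / rho_media)

    print(pm_buoyancy_correction(100.0))   # e.g., a 100 ug balance reading
    ```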

  11. Estimation for small domains in double sampling for stratification ...

    African Journals Online (AJOL)

    In this article, we investigate the effect of randomness of the size of a small domain on the precision of an estimator of mean for the domain under double sampling for stratification. The result shows that for a small domain that cuts across various strata with unknown weights, the sampling variance depends on the within ...

  12. Development of electric discharge equipment for small specimen sampling

    International Nuclear Information System (INIS)

    Okamoto, Koji; Kitagawa, Hideaki; Kusumoto, Junichi; Kanaya, Akihiro; Kobayashi, Toshimi

    2009-01-01

    We have developed on-site electric discharge sampling equipment that can effectively take samples, such as small specimens, from the surface of plant components. Compared with conventional sampling equipment, it can take samples that are shallower in depth and larger in area. In addition, the impact on the sampled component is kept to a minimum, and the thermally affected zone produced in the material by the electric discharge is small enough to be neglected. Our equipment is therefore well suited to taking samples for various tests such as residual life evaluation.

  13. 75 FR 17036 - Energy Conservation Program: Energy Conservation Standards for Small Electric Motors; Correction

    Science.gov (United States)

    2010-04-05

    ... Conservation Program: Energy Conservation Standards for Small Electric Motors; Correction AGENCY: Office of... standards for small electric motors, which was published on March 9, 2010. In that final rule, the U.S... titled ``Energy Conservation Standards for Small Electric Motors.'' 75 FR 10874. Since the publication of...

  14. Correction

    DEFF Research Database (Denmark)

    Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin

    2016-01-01

    [This corrects the article DOI: 10.1371/journal.ppat.1005000.][This corrects the article DOI: 10.1371/journal.ppat.1005740.][This corrects the article DOI: 10.1371/journal.ppat.1005679.]

  15. ANL small-sample calorimeter system design and operation

    International Nuclear Information System (INIS)

    Roche, C.T.; Perry, R.B.; Lewis, R.N.; Jung, E.A.; Haumann, J.R.

    1978-07-01

    The Small-Sample Calorimetric System is a portable instrument designed to measure the thermal power produced by radioactive decay of plutonium-containing fuels. The small-sample calorimeter is capable of measuring samples producing power up to 32 milliwatts at a rate of one sample every 20 min. The instrument is contained in two packages: a data-acquisition module consisting of a microprocessor with an 8K-byte nonvolatile memory, and a measurement module consisting of the calorimeter and a sample preheater. The total weight of the system is 18 kg

  16. A thermostat for precise measurements of thermoresistance of small samples

    International Nuclear Information System (INIS)

    Rusinowski, Z.; Slowinski, B.; Winiewski, R.

    1996-01-01

    A simple experimental set-up is described in which special attention is paid to the important problem of thermal stability in thermoresistance measurements of small manganin samples.

  17. Studies on the true coincidence correction in measuring filter samples by gamma spectrometry

    CERN Document Server

    Lian Qi; Chang Yong Fu; Xia Bing

    2002-01-01

    The true coincidence correction in measuring filter samples has been studied with high-efficiency HPGe gamma detectors. The true coincidence correction for a specific three-excited-level de-excitation case has been analyzed, and typical analytical expressions for the true coincidence correction factors are given. Based on the relative efficiency measured on the detector surface with 8 'single'-energy gamma emitters and the efficiency of filter samples, the peak and total efficiency surfaces are fitted. The true coincidence correction factors of 60Co and 152Eu calculated from the efficiency surfaces agree well with experimental results.

  18. Identification and Correction of Sample Mix-Ups in Expression Genetic Data: A Case Study.

    Science.gov (United States)

    Broman, Karl W; Keller, Mark P; Broman, Aimee Teo; Kendziorski, Christina; Yandell, Brian S; Sen, Śaunak; Attie, Alan D

    2015-08-19

    In a mouse intercross with more than 500 animals and genome-wide gene expression data on six tissues, we identified a high proportion (18%) of sample mix-ups in the genotype data. Local expression quantitative trait loci (eQTL; genetic loci influencing gene expression) with extremely large effect were used to form a classifier to predict an individual's eQTL genotype based on expression data alone. By considering multiple eQTL and their related transcripts, we identified numerous individuals whose predicted eQTL genotypes (based on their expression data) did not match their observed genotypes, and then went on to identify other individuals whose genotypes did match the predicted eQTL genotypes. The concordance of predictions across six tissues indicated that the problem was due to mix-ups in the genotypes (although we further identified a small number of sample mix-ups in each of the six panels of gene expression microarrays). Consideration of the plate positions of the DNA samples indicated a number of off-by-one and off-by-two errors, likely the result of pipetting errors. Such sample mix-ups can be a problem in any genetic study, but eQTL data allow us to identify, and even correct, such problems. Our methods have been implemented in an R package, R/lineup. Copyright © 2015 Broman et al.
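
    The authors' implementation is the R package R/lineup; the fragment below is only a schematic of the matching step, comparing predicted eQTL genotypes (derived from expression) against observed genotypes and flagging samples whose best match is not their own label. The array shapes and 0/1/2 genotype coding are assumptions for illustration.

    ```python
    import numpy as np

    def flag_mixups(pred_geno, obs_geno):
        """Return best-matching genotype row and suspect samples.

        pred_geno, obs_geno : (n_samples, n_eqtl) arrays of 0/1/2 codes
        """
        n = len(pred_geno)
        agree = np.array([[np.mean(pred_geno[i] == obs_geno[j])
                           for j in range(n)] for i in range(n)])
        best = agree.argmax(axis=1)              # best genotype match per sample
        suspects = np.where(best != np.arange(n))[0]
        return best, suspects
    ```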

  19. Radioenzymatic assay for trimethoprim in very small serum samples.

    OpenAIRE

    Yogev, R; Melick, C; Tan-Pong, L

    1985-01-01

    A modification of the methotrexate radioassay kit (supplied by New England Enzyme Center) enabled determination of trimethoprim levels in 5-microliter serum samples. An excellent correlation between this assay and high-pressure liquid chromatography assay was found. These preliminary results suggest that with this method rapid determination of trimethoprim levels in very small samples (5 to 10 microliters) can be achieved.

  20. Radioenzymatic assay for trimethoprim in very small serum samples

    International Nuclear Information System (INIS)

    Yogev, R.; Melick, C.; Tan-Pong, L.

    1985-01-01

    A modification of the methotrexate radioassay kit (supplied by New England Enzyme Center) enabled determination of trimethoprim levels in 5-microliter serum samples. An excellent correlation between this assay and high-pressure liquid chromatography assay was found. These preliminary results suggest that with this method rapid determination of trimethoprim levels in very small samples (5 to 10 microliters) can be achieved.

  1. Test of a sample container for shipment of small size plutonium samples with PAT-2

    International Nuclear Information System (INIS)

    Kuhn, E.; Aigner, H.; Deron, S.

    1981-11-01

    A light-weight container for the air transport of plutonium, to be designated PAT-2, has been developed in the USA and is presently undergoing licensing. The very limited effective space for carrying plutonium required the design of small sample canisters to meet the needs of international safeguards for the shipment of plutonium samples. The applicability of a small canister to the sampling of small powder and solution samples has been tested in an intralaboratory experiment. The results of the experiment, based on the concept of pre-weighed samples, show that the tested canister can successfully be used for the sampling of small PuO2 powder samples of homogeneous source material, as well as for dried aliquots of plutonium nitrate solutions. (author)

  2. Efficiency and attenuation correction factors determination in gamma spectrometric assay of bulk samples using self radiation

    International Nuclear Information System (INIS)

    Haddad, Kh.

    2009-02-01

    Gamma spectrometry forms the most important and capable tool for measuring radioactive materials. Determination of the efficiency and attenuation correction factors is the most tedious problem in the gamma spectrometric assay of bulk samples. A new, simple experimental method for determining these correction factors using the sample's own radiation is proposed in this work. An experimental study of the correlation between the self-attenuation correction factor and sample thickness, together with its practical application, is also presented. The work was performed on NORM and uranyl nitrate bulk samples. The results of the proposed method agreed with those of traditional ones. (author)

  3. Correction for the absorption of plutonium alpha particles in filter paper used for dust sampling

    Energy Technology Data Exchange (ETDEWEB)

    Simons, J G

    1956-01-01

    The sample of air-borne dust collected on a filter paper when laboratory air is monitored for plutonium with the 1195 portable dust sampling unit may be regarded, for counting purposes, as a thick source with a non-uniform distribution of alpha-active plutonium. Experiments have been carried out to determine a correction factor to be applied to the observed count on the filter paper sample to correct for internal absorption in the paper and in the dust layer. From the results obtained it is recommended that a correction factor of 2 be used.

  4. Corrective Action Investigation Plan for Corrective Action Unit 541: Small Boy Nevada National Security Site and Nevada Test and Training Range, Nevada with ROTC 1

    Energy Technology Data Exchange (ETDEWEB)

    Matthews, Patrick [Navarro-Intera, LLC (N-I), Las Vegas, NV (United States)

    2014-09-01

    Corrective Action Unit (CAU) 541 is co-located on the boundary of Area 5 of the Nevada National Security Site and Range 65C of the Nevada Test and Training Range, approximately 65 miles northwest of Las Vegas, Nevada. CAU 541 is a grouping of sites where there has been a suspected release of contamination associated with nuclear testing. This document describes the planned investigation of CAU 541, which comprises the following corrective action sites (CASs): 05-23-04, Atmospheric Tests (6) - BFa Site; 05-45-03, Atmospheric Test Site - Small Boy. These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives (CAAs). Additional information will be obtained by conducting a corrective action investigation before evaluating CAAs and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable CAAs that will be presented in the investigation report. The sites will be investigated based on the data quality objectives (DQOs) developed on April 1, 2014, by representatives of the Nevada Division of Environmental Protection; U.S. Air Force; and the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Field Office. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 541. The site investigation process also will be conducted in accordance with the Soils Activity Quality Assurance Plan, which establishes requirements, technical planning, and general quality practices to be applied to this activity. The potential contamination sources associated with CASs 05-23-04 and 05-45-03 are from nuclear testing activities conducted at the Atmospheric Tests (6) - BFa Site and Atmospheric Test Site - Small Boy sites. The presence and nature of

  5. Self-absorption corrections of various sample-detector geometries in gamma-ray spectrometry using sample Monte Carlo Simulations

    International Nuclear Information System (INIS)

    Ahmad Saat; Appleby, P.G.; Nolan, P.J.

    1997-01-01

    Corrections for self-absorption in gamma-ray spectrometry have been developed using a simple Monte Carlo simulation technique. The simulation enables the calculation of gamma-ray path lengths in the sample which, using available data, can be used to calculate self-absorption correction factors. The simulation was carried out for three sample geometries: disk, Marinelli beaker, and cylinder (for well-type detectors). Mathematical models and experimental measurements were used to evaluate the simulations, and good agreement, to within a few percent, was observed. The simulation results are also in good agreement with those reported in the literature. The simulation code was written in FORTRAN 90.
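
    As an illustration of the path-length idea (not the authors' FORTRAN 90 code), the sketch below Monte Carlo samples emission depths in a disk sample under a far-field, parallel-beam approximation and averages the transmission along the resulting in-sample paths; the reciprocal of that average is the self-absorption correction factor. All numbers are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def self_absorption_factor(mu, height, n=100_000):
        """Mean transmission of a disk sample, far-field approximation.

        mu     : linear attenuation coefficient (1/cm)
        height : sample thickness along the detector axis (cm)
        """
        z = rng.uniform(0.0, height, n)    # uniform emission depth
        path = height - z                  # in-sample path to the exit face
        return np.exp(-mu * path).mean()

    cf = 1.0 / self_absorption_factor(mu=0.2, height=2.5)
    print(f"self-absorption correction factor: {cf:.3f}")
    ```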

  6. Determination Of Activity Of Radionuclides In Moss-Soil Sample With Self-Absorption Correction

    International Nuclear Information System (INIS)

    Tran Thien Thanh; Chau Van Tao; Truong Thi Hong Loan; Hoang Duc Tam

    2011-01-01

    Hyper Pure Germanium (HPGe) spectrometer systems are a very powerful tool for radioactivity measurements. The systematic uncertainty in the full energy peak efficiency is due to the differences between the matrix (density and chemical composition) of the reference sample and that of other bulk samples. For precise gamma spectrum analysis, a correction for absorption effects in the sample should therefore be applied, especially for bulk samples. The results are presented and discussed in this paper. (author)

  7. Multi-element analysis of small biological samples

    International Nuclear Information System (INIS)

    Rokita, E.; Cafmeyer, J.; Maenhaut, W.

    1983-01-01

    A method combining PIXE and INAA was developed to determine the elemental composition of small biological samples. The method needs virtually no sample preparation and less than 1 mg is sufficient for the analysis. The method was used for determining up to 18 elements in leaves taken from Cracow Herbaceous. The factors which influence the elemental composition of leaves and the possible use of leaves as an environmental pollution indicator are discussed

  8. Conversion of Small Algal Oil Sample to JP-8

    Science.gov (United States)

    2012-01-01

    Cracking of algal oil to SPK was carried out in a small-scale laboratory hydroprocessing plant (UOP) with a down-flow trickle-bed configuration capable of retaining 25 cc of catalyst bed. The catalytic deoxygenation stage of the ... content which, combined with the sample's acidity, is a challenge to reactor metallurgy. Nonetheless, an attempt was made to convert this sample to JP-8.

  9. Attenuation correction for the collimated gamma ray assay of cylindrical samples

    International Nuclear Information System (INIS)

    Patra, Sabyasachi; Agarwal, Chhavi; Goswami, A.; Gathibandhe, M.

    2015-01-01

    The Hybrid Monte Carlo (HMC) method developed earlier for the attenuation correction of non-collimated samples [Agarwal et al., 2008, Nucl. Instrum. Methods A 597, 198] has been extended to the segmented gamma-ray assay of cylindrical samples. The method has been validated both experimentally and theoretically. For experimental validation, the results of the HMC calculation have been compared with experimentally obtained attenuation correction factors. The HMC attenuation correction factors have also been compared with the results obtained from near-field and far-field formulae available in the literature at two sample-to-detector distances (10.3 cm and 20.4 cm). The method has been found to be valid at all sample-to-detector distances over a wide range of transmittance. In contrast, the near-field and far-field formulae from the literature work only over a limited range of sample-to-detector distances and transmittances. The HMC method has been further extended to circular collimated geometries, where no analytical formula for the attenuation correction exists. - Highlights: • Hybrid Monte Carlo method for attenuation correction developed for an SGA system. • The method works for all sample-detector geometries at all transmittances. • The near-field formula is applicable only beyond a certain sample-to-detector distance. • The far-field formula is applicable only for higher transmittances (>18%). • The Hybrid Monte Carlo method is further extended to circular collimated geometry.

  10. inverse gaussian model for small area estimation via gibbs sampling

    African Journals Online (AJOL)

    ADMIN

    For example, MacGibbon and Tomberlin (1989) have considered estimating small area rates and binomial parameters using empirical Bayes methods. Stroud (1991) used a hierarchical Bayes approach for univariate natural exponential families with quadratic variance functions in sample survey applications, while Chaubey ...

  11. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    Science.gov (United States)

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  12. Systematic studies of small scintillators for new sampling calorimeter

    Indian Academy of Sciences (India)

    A new sampling calorimeter using very thin scintillators and the multi-pixel photon counter (MPPC) has been proposed to produce better position resolution for the international linear collider (ILC) experiment. As part of this R & D study, small plastic scintillators of different sizes, thickness and wrapping reflectors are ...

  13. A General Linear Method for Equating with Small Samples

    Science.gov (United States)

    Albano, Anthony D.

    2015-01-01

    Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…

  14. Testing of Small Graphite Samples for Nuclear Qualification

    Energy Technology Data Exchange (ETDEWEB)

    Julie Chapman

    2010-11-01

    Accurately determining the mechanical properties of small irradiated samples is crucial to predicting the behavior of the overall irradiated graphite components within a Very High Temperature Reactor. The sample size allowed in a material test reactor, however, is limited, and this poses some difficulties with respect to mechanical testing. In the case of graphite with a larger grain size, a small sample may exhibit characteristics not representative of the bulk material, leading to inaccuracies in the data. A study to determine a potential size effect on the tensile strength was pursued under the Next Generation Nuclear Plant program. It focuses first on optimizing the tensile testing procedure identified in the American Society for Testing and Materials (ASTM) Standard C 781-08. Once the testing procedure was verified, a size effect was assessed by gradually reducing the diameter of the specimens. By monitoring the material response, a size effect was successfully identified.

  15. [Progress in sample preparation and analytical methods for trace polar small molecules in complex samples].

    Science.gov (United States)

    Zhang, Qianchun; Luo, Xialin; Li, Gongke; Xiao, Xiaohua

    2015-09-01

    Small polar molecules such as nucleosides, amines, amino acids are important analytes in biological, food, environmental, and other fields. It is necessary to develop efficient sample preparation and sensitive analytical methods for rapid analysis of these polar small molecules in complex matrices. Some typical materials in sample preparation, including silica, polymer, carbon, boric acid and so on, are introduced in this paper. Meanwhile, the applications and developments of analytical methods of polar small molecules, such as reversed-phase liquid chromatography, hydrophilic interaction chromatography, etc., are also reviewed.

  16. Structure-based sampling and self-correcting machine learning for accurate calculations of potential energy surfaces and vibrational levels

    Science.gov (United States)

    Dral, Pavlo O.; Owens, Alec; Yurchenko, Sergei N.; Thiel, Walter

    2017-06-01

    We present an efficient approach for generating highly accurate molecular potential energy surfaces (PESs) using self-correcting, kernel ridge regression (KRR) based machine learning (ML). We introduce structure-based sampling to automatically assign nuclear configurations from a pre-defined grid to the training and prediction sets, respectively. Accurate high-level ab initio energies are required only for the points in the training set, while the energies for the remaining points are provided by the ML model with negligible computational cost. The proposed sampling procedure is shown to be superior to random sampling and also eliminates the need for training several ML models. Self-correcting machine learning has been implemented such that each additional layer corrects errors from the previous layer. The performance of our approach is demonstrated in a case study on a published high-level ab initio PES of methyl chloride with 44 819 points. The ML model is trained on sets of different sizes and then used to predict the energies for tens of thousands of nuclear configurations within seconds. The resulting datasets are utilized in variational calculations of the vibrational energy levels of CH3Cl. By using both structure-based sampling and self-correction, the size of the training set can be kept small (e.g., 10% of the points) without any significant loss of accuracy. In ab initio rovibrational spectroscopy, it is thus possible to reduce the number of computationally costly electronic structure calculations through structure-based sampling and self-correcting KRR-based machine learning by up to 90%.
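
    A toy rendering of the two ingredients, coverage-driven (structure-based) sampling and a second KRR layer fitted to the residuals of the first, might look like the following. The one-dimensional "PES", kernel settings and greedy farthest-point selection are illustrative stand-ins, not the published implementation.

    ```python
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    # Toy 1-D "PES": energies on a dense grid of geometries
    x = np.linspace(0.5, 3.0, 2000)[:, None]
    e = (1 - np.exp(-1.5 * (x.ravel() - 1.0))) ** 2    # Morse-like curve

    # Structure-based sampling (sketch): greedy farthest-point selection,
    # so the training geometries cover the grid instead of clustering
    train = [0]
    for _ in range(199):
        d = np.min(np.abs(x - x[train].ravel()), axis=1)
        train.append(int(d.argmax()))
    train = np.array(train)

    # Layer 1: KRR fit to the training energies
    m1 = KernelRidge(kernel="rbf", gamma=10.0, alpha=1e-8).fit(x[train], e[train])

    # Layer 2 (self-correction): fit the layer-1 residuals on the same set
    resid = e[train] - m1.predict(x[train])
    m2 = KernelRidge(kernel="rbf", gamma=10.0, alpha=1e-8).fit(x[train], resid)

    pred = m1.predict(x) + m2.predict(x)    # corrected prediction on the grid
    print(f"max abs error: {np.abs(pred - e).max():.2e}")
    ```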

  17. Improvements to the Chebyshev expansion of attenuation correction factors for cylindrical samples

    International Nuclear Information System (INIS)

    Mildner, D.F.R.; Carpenter, J.M.

    1990-01-01

    The accuracy of the Chebyshev expansion coefficients used for the calculation of attenuation correction factors for cylindrical samples has been improved. An increased order of expansion allows the method to be useful over a greater range of attenuation. It is shown that many of these coefficients are exactly zero, others are rational numbers, and others are rational fractions of π⁻¹. The assumptions made by Sears in his asymptotic expression for the attenuation correction factor are also examined. (orig.)

  18. Establishing the Validity of the Personality Assessment Inventory Drug and Alcohol Scales in a Corrections Sample

    Science.gov (United States)

    Patry, Marc W.; Magaletta, Philip R.; Diamond, Pamela M.; Weinman, Beth A.

    2011-01-01

    Although not originally designed for implementation in correctional settings, researchers and clinicians have begun to use the Personality Assessment Inventory (PAI) to assess offenders. A relatively small number of studies have made attempts to validate the alcohol and drug abuse scales of the PAI, and only a very few studies have validated those…

  19. Small Mammal Sampling in Mortandad and Los Alamos Canyons, 2005

    International Nuclear Information System (INIS)

    Kathy Bennett; Sherri Sherwood; Rhonda Robinson

    2006-01-01

    As part of an ongoing ecological field investigation at Los Alamos National Laboratory, a study was conducted that compared measured contaminant concentrations in sediment to population parameters for small mammals in the Mortandad Canyon watershed. Mortandad Canyon and its tributary canyons have received contaminants from multiple solid waste management units and areas of concern since establishment of the Laboratory in the 1940s. The study included three reaches within Effluent and Mortandad canyons (E-1W, M-2W, and M-3) that had a spread in the concentrations of metals and radionuclides and included locations where polychlorinated biphenyls and perchlorate had been detected. A reference location, reach LA-BKG in upper Los Alamos Canyon, was also included in the study for comparison purposes. A small mammal study was initiated to assess whether potential adverse effects were evident in Mortandad Canyon due to the presence of contaminants, designated as contaminants of potential ecological concern, in the terrestrial media. Study sites, including the reference site, were sampled in late July/early August. Species diversity and the mean daily capture rate were the highest for E-1W reach and the lowest for the reference site. Species composition among the three reaches in Mortandad was similar with very little overlap with the reference canyon. Differences in species composition and diversity were most likely due to differences in habitat. Sex ratios, body weights, and reproductive status of small mammals were also evaluated. However, small sample sizes of some species within some sites affected the analysis. Ratios of males to females by species of each site (n = 5) were tested using a Chi-square analysis. No differences were detected. Where there was sufficient sample size, body weights of adult small mammals were compared between sites. No differences in body weights were found. Reproductive status of species appears to be similar across sites. However, sample

  20. Small Mammal Sampling in Mortandad and Los Alamos Canyons, 2005

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, Kathy; Sherwood, Sherri; Robinson, Rhonda

    2006-08-15

    As part of an ongoing ecological field investigation at Los Alamos National Laboratory, a study was conducted that compared measured contaminant concentrations in sediment to population parameters for small mammals in the Mortandad Canyon watershed. Mortandad Canyon and its tributary canyons have received contaminants from multiple solid waste management units and areas of concern since establishment of the Laboratory in the 1940s. The study included three reaches within Effluent and Mortandad canyons (E-1W, M-2W, and M-3) that had a spread in the concentrations of metals and radionuclides and included locations where polychlorinated biphenyls and perchlorate had been detected. A reference location, reach LA-BKG in upper Los Alamos Canyon, was also included in the study for comparison purposes. A small mammal study was initiated to assess whether potential adverse effects were evident in Mortandad Canyon due to the presence of contaminants, designated as contaminants of potential ecological concern, in the terrestrial media. Study sites, including the reference site, were sampled in late July/early August. Species diversity and the mean daily capture rate were the highest for E-1W reach and the lowest for the reference site. Species composition among the three reaches in Mortandad was similar with very little overlap with the reference canyon. Differences in species composition and diversity were most likely due to differences in habitat. Sex ratios, body weights, and reproductive status of small mammals were also evaluated. However, small sample sizes of some species within some sites affected the analysis. Ratios of males to females by species of each site (n = 5) were tested using a Chi-square analysis. No differences were detected. Where there was sufficient sample size, body weights of adult small mammals were compared between sites. No differences in body weights were found. Reproductive status of species appears to be similar across sites. However, sample

  1. Evaluation of attenuation and scatter correction requirements in small animal PET and SPECT imaging

    Science.gov (United States)

    Konik, Arda Bekir

    Positron emission tomography (PET) and single photon emission tomography (SPECT) are two nuclear emission-imaging modalities that rely on the detection of high-energy photons emitted from radiotracers administered to the subject. The majority of these photons are attenuated (absorbed or scattered) in the body, resulting in count losses or deviations from true detection, which in turn degrades the accuracy of images. In clinical emission tomography, sophisticated correction methods are often required employing additional x-ray CT or radionuclide transmission scans. Having proven their potential in both clinical and research areas, both PET and SPECT are being adapted for small animal imaging. However, despite the growing interest in small animal emission tomography, little scientific information exists about the accuracy of these correction methods on smaller size objects, and what level of correction is required. The purpose of this work is to determine the role of attenuation and scatter corrections as a function of object size through simulations. The simulations were performed using Interactive Data Language (IDL) and a Monte Carlo based package, Geant4 application for emission tomography (GATE). In IDL simulations, PET and SPECT data acquisition were modeled in the presence of attenuation. A mathematical emission and attenuation phantom approximating a thorax slice and slices from real PET/CT data were scaled to 5 different sizes (i.e., human, dog, rabbit, rat and mouse). The simulated emission data collected from these objects were reconstructed. The reconstructed images, with and without attenuation correction, were compared to the ideal (i.e., non-attenuated) reconstruction. Next, using GATE, scatter fraction values (the ratio of the scatter counts to the total counts) of PET and SPECT scanners were measured for various sizes of NEMA (cylindrical phantoms representing small animals and human), MOBY (realistic mouse/rat model) and XCAT (realistic human model

  2. Research on self-absorption corrections for laboratory γ spectral analysis of soil samples

    International Nuclear Information System (INIS)

    Tian Zining; Jia Mingyan; Li Huibin; Cheng Ziwei; Ju Lingjun; Shen Maoquan; Yang Xiaoyan; Yan Ling; Fen Tiancheng

    2010-01-01

    Based on the calibration results of point sources, the dimensions of the HPGe crystal were characterized. Linear attenuation coefficients and detection efficiencies of various kinds of samples were calculated, and the function F(μ) for a φ75 mm × 25 mm sample was established. A standard surface source was used to simulate sources at different heights in the soil sample, and the function ε(h), which relates detection efficiency to the height of the surface source, was determined. The detection efficiency of the calibration source can then be obtained by integration; the F(μ) functions established for the soil samples are consistent with the results of the MCNP calculation code. Several φ75 mm × 25 mm soil samples were measured with the HPGe spectrometer, and the function F(μ) was used to correct for self-absorption. F(μ) functions for soil samples of various dimensions can be calculated with the MCNP code, allowing the self-absorption correction to be made; to verify the calculated results, φ75 mm × 75 mm soil samples were also measured. In addition, several φ75 mm × 25 mm soil samples from an atmospheric nuclear test site were measured with the HPGe spectrometer, and the function F(μ) was used to correct for self-absorption. The technical method used to correct soil samples from an unknown area is also given. The surface-source correction method greatly improves the measurement accuracy of the gamma spectrum, and it will be widely applied in environmental radioactivity investigations. (authors)

  3. Radiocarbon measurements of small gaseous samples at CologneAMS

    Science.gov (United States)

    Stolz, A.; Dewald, A.; Altenkirch, R.; Herb, S.; Heinze, S.; Schiffer, M.; Feuerstein, C.; Müller-Gatermann, C.; Wotte, A.; Rethemeyer, J.; Dunai, T.

    2017-09-01

    A second SO-110 B (Arnold et al., 2010) ion source was installed at the 6 MV CologneAMS for the measurement of gaseous samples. For the gas supply, a dedicated device from Ionplus AG was connected to the ion source. Special effort was devoted to determining optimized operating parameters for the ion source that give a high carbon current output and a high 14C⁻ yield; the latter is essential when only small samples are available. Additionally, a modified immersion lens and modified target pieces were tested, and the target position was optimized.

  4. A multi-dimensional sampling method for locating small scatterers

    International Nuclear Information System (INIS)

    Song, Rencheng; Zhong, Yu; Chen, Xudong

    2012-01-01

    A multiple signal classification (MUSIC)-like multi-dimensional sampling method (MDSM) is introduced to locate small three-dimensional scatterers using electromagnetic waves. The indicator is built with the most stable part of signal subspace of the multi-static response matrix on a set of combinatorial sampling nodes inside the domain of interest. It has two main advantages compared to the conventional MUSIC methods. First, the MDSM is more robust against noise. Second, it can work with a single incidence even for multi-scatterers. Numerical simulations are presented to show the good performance of the proposed method. (paper)

  5. An experimental verification of laser-velocimeter sampling bias and its correction

    Science.gov (United States)

    Johnson, D. A.; Modarress, D.; Owen, F. K.

    1982-01-01

    The existence of 'sampling bias' in individual-realization laser velocimeter measurements is experimentally verified and shown to be independent of sample rate. The experiments were performed in a simple two-stream mixing shear flow with the standard for comparison being laser-velocimeter results obtained under continuous-wave conditions. It is also demonstrated that the errors resulting from sampling bias can be removed by a proper interpretation of the sampling statistics. In addition, data obtained in a shock-induced separated flow and in the near-wake of airfoils are presented, both bias-corrected and uncorrected, to illustrate the effects of sampling bias in the extreme.
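
    One standard "proper interpretation of the sampling statistics", though not necessarily the authors' exact estimator, is to weight each realization by the reciprocal of its speed, since faster fluid carries more particles through the probe volume per unit time. A minimal sketch with hypothetical samples:

    ```python
    import numpy as np

    def bias_corrected_mean(u):
        """Velocity-bias-corrected mean of individual-realization LV data."""
        u = np.asarray(u, dtype=float)
        w = 1.0 / np.abs(u)    # inverse-speed (McLaughlin-Tiederman) weights
        return np.sum(w * u) / np.sum(w)

    u = np.array([4.8, 5.6, 6.1, 3.9, 5.2])    # hypothetical velocities, m/s
    print(np.mean(u), bias_corrected_mean(u))  # arithmetic mean is biased high
    ```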

  6. Motion correction in simultaneous PET/MR brain imaging using sparsely sampled MR navigators

    DEFF Research Database (Denmark)

    Keller, Sune H; Hansen, Casper; Hansen, Christian

    2015-01-01

    BACKGROUND: We present a study performing motion correction (MC) of PET using MR navigators sampled between other protocolled MR sequences during simultaneous PET/MR brain scanning, with the purpose of evaluating its clinical feasibility and the potential improvement of image quality. FINDINGS: Twenty-nine human subjects had a 30-min [(11)C]-PiB PET scan with simultaneous MR including 3D navigators sampled at six time points, which were used to correct the PET image for rigid head motion. Five subjects with motion greater than 4 mm were reconstructed into six frames (one for each navigator

  7. Local heterogeneity effects on small-sample worths

    International Nuclear Information System (INIS)

    Schaefer, R.W.

    1986-01-01

    One of the parameters usually measured in a fast reactor critical assembly is the reactivity associated with inserting a small sample of a material into the core (the sample worth). Local heterogeneities introduced by the worth measurement techniques can have a significant effect on the sample worth. Unfortunately, the capability to model some of the heterogeneity effects associated with the experimental technique traditionally used at ANL (the radial tube technique) is lacking. It has been suggested that these effects could account for a large portion of what remains of the longstanding central worth discrepancy. The purpose of this paper is to describe a large body of experimental data - most of which has never been reported - that shows the effect of radial-tube-related local heterogeneities.

  8. Research of pneumatic control transmission system for small irradiation samples

    International Nuclear Information System (INIS)

    Bai Zhongxiong; Zhang Haibing; Rong Ru; Zhang Tao

    2008-01-01

    In order to reduce the absorbed dose to the operator, pneumatic control has been adopted to realize rapid transmission of small irradiation samples. The on/off switching of the pneumatic circuit and the transmission directions of the rapid transmission system are controlled by the electrical control section. The main program initializes the system, detects the position of the manual/automatic change-over switch, and calls the corresponding subprogram for automatic or manual operation. The automatic subprogram performs automatic sample transmission; the manual subprogram handles deflation and the back-and-forth movement of the irradiation samples. This paper describes the implementation of the system in detail, in terms of both hardware and software design. (authors)

  9. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased, which can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small-sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied, along with two Bayesian prior specifications, informative and relatively less informative. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of the statistical uncertainty that comes with the data (e.g., a small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing the coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.

  10. Correction to the count-rate detection limit and sample/blank time-allocation methods

    International Nuclear Information System (INIS)

    Alvarez, Joseph L.

    2013-01-01

    A common form of count-rate detection limits contains a propagation of uncertainty error. This error originated in methods to minimize uncertainty in the subtraction of the blank counts from the gross sample counts by allocation of blank and sample counting times. Correct uncertainty propagation showed that the time allocation equations have no solution. This publication presents the correct form of count-rate detection limits. -- Highlights: •The paper demonstrated a proper method of propagating uncertainty of count rate differences. •The standard count-rate detection limits were in error. •Count-time allocation methods for minimum uncertainty were in error. •The paper presented the correct form of the count-rate detection limit. •The paper discussed the confusion between count-rate uncertainty and count uncertainty
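
    The propagation step at issue can be stated compactly: for Poisson counts, the net rate R = Ns/ts - Nb/tb has variance Ns/ts^2 + Nb/tb^2. A minimal sketch of the correct propagation (counts and times hypothetical; the paper's corrected detection-limit formula is not reproduced here):

    ```python
    import numpy as np

    def net_rate_and_sigma(n_s, t_s, n_b, t_b):
        """Net count rate and its correctly propagated standard deviation.

        n_s, n_b : gross sample and blank counts (Poisson distributed)
        t_s, t_b : sample and blank counting times
        """
        rate = n_s / t_s - n_b / t_b
        sigma = np.sqrt(n_s / t_s**2 + n_b / t_b**2)   # Var(N/t) = N/t^2
        return rate, sigma

    print(net_rate_and_sigma(480, 600.0, 420, 600.0))
    ```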

  11. Use of the small gas proportional counters for the carbon-14 measurement of very small samples

    International Nuclear Information System (INIS)

    Sayre, E.V.; Harbottle, G.; Stoenner, R.W.; Otlet, R.L.; Evans, G.V.

    1981-01-01

    Two recent developments are relevant: the first is the mass-spectrometric separation of 14C and 12C ions, followed by counting of the 14C, while the second is the extension of conventional proportional counter operation, using CO2 as the counting gas, to very small counters and samples. Although the second method is slow (months of counting time are required for 10 mg of carbon), it does not require operator intervention and many samples may be counted simultaneously. It also costs only a fraction of the capital expense of an accelerator installation. The development, construction and operation of suitable small counters are described, and the results of three actual dating studies involving milligram-scale carbon samples are given. None of these could have been carried out if conventional, gram-sized samples had been needed. New installations based on the use of these counters are under construction or in the planning stages, located at Brookhaven Laboratory, the National Bureau of Standards (USA) and Harwell (UK). The Harwell installation, which is at an advanced stage of construction, is described in outline. The main significance of the small-counter method is that, although it will not suffice to measure the smallest (much less than 10 mg) or oldest samples, it will permit existing radiocarbon laboratories to extend their capability considerably in the direction of smaller samples, at modest expense.

  12. Fast shading correction for cone beam CT in radiation therapy via sparse sampling on planning CT.

    Science.gov (United States)

    Shi, Linxi; Tsui, Tiffany; Wei, Jikun; Zhu, Lei

    2017-05-01

    The image quality of cone beam computed tomography (CBCT) is limited by severe shading artifacts, hindering its quantitative applications in radiation therapy. In this work, we propose an image-domain shading correction method using planning CT (pCT) as prior information, an approach that is highly adaptive to the clinical environment. We propose to perform shading correction via sparse sampling on pCT. The method starts with a coarse mapping between the first-pass CBCT images obtained from the Varian TrueBeam system and the pCT. The scatter correction method embedded in the Varian commercial software removes some image errors, but the CBCT images still contain severe shading artifacts. The difference images between the mapped pCT and the CBCT are considered as shading errors, but only sparse shading samples are selected for correction using empirical constraints to avoid carrying over false information from the pCT. A Fourier-Transform-based technique, referred to as local filtration, is proposed to efficiently process the sparse data for effective shading correction. The performance of the proposed method is evaluated on one anthropomorphic pelvis phantom and 17 patients, who were scheduled for radiation therapy. (The code of the proposed method and sample data can be downloaded from https://sites.google.com/view/linxicbct.) RESULTS: The proposed shading correction substantially improves the CBCT image quality on both the phantom and the patients to a level close to that of the pCT images. On the phantom, the spatial nonuniformity (SNU) difference between CBCT and pCT is reduced from 74 to 1 HU. The root-mean-square difference of SNU between CBCT and pCT is reduced from 83 to 10 HU on the pelvis patients, and from 101 to 12 HU on the thorax patients. The robustness of the proposed shading correction is fully investigated with simulated registration errors between CBCT and pCT on the phantom and mis-registration on patients. The sparse sampling scheme of our method successfully

  13. Correction

    CERN Multimedia

    2002-01-01

    Tile Calorimeter modules stored at CERN. The larger modules belong to the Barrel, whereas the smaller ones are for the two Extended Barrels. (The article was about the completion of the 64 modules for one of the latter.) The photo on the first page of the Bulletin n°26/2002, from 24 July 2002, illustrating the article «The ATLAS Tile Calorimeter gets into shape» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.

  14. Thermal neutron absorption cross section of small samples

    International Nuclear Information System (INIS)

    Nghiep, T.D.; Vinh, T.T.; Son, N.N.; Vuong, T.V.; Hung, N.T.

    1989-01-01

    A modified steady-state method for determining the macroscopic thermal neutron absorption cross section of small samples, 500 cm3 in volume, is described. The method uses a moderating block of paraffin, a Pu-Be neutron source emitting 1.1×10^6 n·s^-1, an SNM-14 counter and ordinary counting equipment. Cross sections in the interval from 2.6 to 1.3×10^4 (in units of 10^-3 cm^2·g^-1) were measured. The experimental data are described by calculation formulae. 7 refs.; 4 figs.

  15. Method for Measuring Thermal Conductivity of Small Samples Having Very Low Thermal Conductivity

    Science.gov (United States)

    Miller, Robert A.; Kuczmarski, Maria a.

    2009-01-01

    This paper describes the development of a hot plate method capable of using air as a standard reference material for the steady-state measurement of the thermal conductivity of very small test samples having thermal conductivity on the order of air. As with other approaches, care is taken to ensure that the heat flow through the test sample is essentially one-dimensional. However, unlike other approaches, no attempt is made to use heated guards to block the flow of heat from the hot plate to the surroundings. It is argued that since large correction factors must be applied to account for guard imperfections when sample dimensions are small, it may be preferable to simply measure and correct for the heat that flows from the heater disc to directions other than into the sample. Experimental measurements taken in a prototype apparatus, combined with extensive computational modeling of the heat transfer in the apparatus, show that sufficiently accurate measurements can be obtained to allow determination of the thermal conductivity of low thermal conductivity materials. Suggestions are made for further improvements in the method based on results from regression analyses of the generated data.
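
    Schematically, once the heat flowing from the heater disc to directions other than into the sample has been measured, the conductivity follows from one-dimensional Fourier conduction, k = Q·L/(A·ΔT). The sketch below assumes this reduction; all numbers are hypothetical.

    ```python
    import math

    def thermal_conductivity(q_heater, q_loss, area, thickness, dT):
        """Steady-state 1-D conductivity with a parasitic-loss correction.

        q_heater : electrical power supplied to the heater disc (W)
        q_loss   : separately measured heat flow bypassing the sample (W)
        area     : sample cross-section (m^2); thickness in m; dT in K
        """
        q_sample = q_heater - q_loss      # heat actually crossing the sample
        return q_sample * thickness / (area * dT)

    # Hypothetical 10 mm diameter, 2 mm thick disc with a 10 K drop
    A = math.pi * 0.005**2
    print(thermal_conductivity(0.030, 0.012, A, 0.002, 10.0))   # W/(m K)
    ```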

  16. Implementation of Cascade Gamma and Positron Range Corrections for I-124 Small Animal PET

    Science.gov (United States)

    Harzmann, S.; Braun, F.; Zakhnini, A.; Weber, W. A.; Pietrzyk, U.; Mix, M.

    2014-02-01

    Small animal Positron Emission Tomography (PET) should provide accurate quantification of regional radiotracer concentrations and high spatial resolution. This is challenging for non-pure positron emitters with high positron endpoint energies, such as I-124: On the one hand the cascade gammas emitted from this isotope can produce coincidence events with the 511 keV annihilation photons leading to quantification errors. On the other hand the long range of the high energy positron degrades spatial resolution. This paper presents the implementation of a comprehensive correction technique for both of these effects. The established corrections include a modified sinogram-based tail-fitting approach to correct for scatter, random and cascade gamma coincidences and a compensation for resolution degradation effects during the image reconstruction. Resolution losses were compensated for by an iterative algorithm which incorporates a convolution kernel derived from line source measurements for the microPET Focus 120 system. The entire processing chain for these corrections was implemented, whereas previous work has only addressed parts of this process. Monte Carlo simulations with GATE and measurements of mice with the microPET Focus 120 show that the proposed method reduces absolute quantification errors on average to 2.6% compared to 15.6% for the ordinary Maximum Likelihood Expectation Maximization algorithm. Furthermore resolution was improved in the order of 11-29% depending on the number of convolution iterations. In summary, a comprehensive, fast and robust algorithm for the correction of small animal PET studies with I-124 was developed which improves quantitative accuracy and spatial resolution.

  17. Pulsed Direct Current Electrospray: Enabling Systematic Analysis of Small Volume Sample by Boosting Sample Economy.

    Science.gov (United States)

    Wei, Zhenwei; Xiong, Xingchuang; Guo, Chengan; Si, Xingyu; Zhao, Yaoyao; He, Muyi; Yang, Chengdui; Xu, Wei; Tang, Fei; Fang, Xiang; Zhang, Sichun; Zhang, Xinrong

    2015-11-17

    We developed pulsed direct current electrospray ionization mass spectrometry (pulsed-dc-ESI-MS) for systematically profiling and determining the components of small-volume samples. Pulsed-dc-ESI uses a constant high voltage to remotely induce the generation of single-polarity pulsed electrospray. The method significantly boosts sample economy, so that several minutes of MS signal duration can be obtained from a sample of merely picoliter volume. The prolonged MS signal duration enables abundant MS(2) information to be collected on the components of interest in a small-volume sample for systematic analysis. The method was successfully applied to single-cell metabolomics, yielding 2-D profiles of metabolites (including exact mass and MS(2) data) from single plant and mammalian cells, covering 1034 components for Allium cepa and 656 components for HeLa cells, respectively. Further identification found 162 compounds and 28 different modification groups on 141 saccharides in a single Allium cepa cell, indicating that pulsed-dc-ESI is a powerful tool for the systematic analysis of small-volume samples.

  18. Correction

    Directory of Open Access Journals (Sweden)

    2012-01-01

    Full Text Available Regarding Gorelik, G., & Shackelford, T. K. (2011). Human sexual conflict from molecules to culture. Evolutionary Psychology, 9, 564–587: The authors wish to correct an omission in citation to the existing literature. In the final paragraph on p. 570, we neglected to cite Burch and Gallup (2006) [Burch, R. L., & Gallup, G. G., Jr. (2006). The psychobiology of human semen. In S. M. Platek & T. K. Shackelford (Eds.), Female infidelity and paternal uncertainty (pp. 141–172). New York: Cambridge University Press.]. Burch and Gallup (2006) reviewed the relevant literature on FSH and LH discussed in this paragraph, and should have been cited accordingly. In addition, Burch and Gallup (2006) should have been cited as the originators of the hypothesis regarding the role of FSH and LH in the semen of rapists. The authors apologize for this oversight.

  19. Correction

    CERN Multimedia

    2002-01-01

    The photo on the second page of the Bulletin n°48/2002, from 25 November 2002, illustrating the article «Spanish Visit to CERN» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.   The Spanish delegation, accompanied by Spanish scientists at CERN, also visited the LHC superconducting magnet test hall (photo). From left to right: Felix Rodriguez Mateos of CERN LHC Division, Josep Piqué i Camps, Spanish Minister of Science and Technology, César Dopazo, Director-General of CIEMAT (Spanish Research Centre for Energy, Environment and Technology), Juan Antonio Rubio, ETT Division Leader at CERN, Manuel Aguilar-Benitez, Spanish Delegate to Council, Manuel Delfino, IT Division Leader at CERN, and Gonzalo León, Secretary-General of Scientific Policy to the Minister.

  20. Correction

    Directory of Open Access Journals (Sweden)

    2014-01-01

    Full Text Available Regarding Tagler, M. J., and Jeffers, H. M. (2013). Sex differences in attitudes toward partner infidelity. Evolutionary Psychology, 11, 821–832: The authors wish to correct values in the originally published manuscript. Specifically, incorrect 95% confidence intervals around the Cohen's d values were reported on page 826 of the manuscript, where we reported the within-sex simple effects for the significant Participant Sex × Infidelity Type interaction (first paragraph) and for attitudes toward partner infidelity (second paragraph). Corrected values are presented in bold below. The authors would like to thank Dr. Bernard Beins at Ithaca College for bringing these errors to our attention. Men rated sexual infidelity significantly more distressing (M = 4.69, SD = 0.74) than they rated emotional infidelity (M = 4.32, SD = 0.92), F(1, 322) = 23.96, p < .001, d = 0.44, 95% CI [0.23, 0.65], but there was little difference between women's ratings of sexual (M = 4.80, SD = 0.48) and emotional infidelity (M = 4.76, SD = 0.57), F(1, 322) = 0.48, p = .29, d = 0.08, 95% CI [−0.10, 0.26]. As expected, men rated sexual infidelity (M = 1.44, SD = 0.70) more negatively than they rated emotional infidelity (M = 2.66, SD = 1.37), F(1, 322) = 120.00, p < .001, d = 1.12, 95% CI [0.85, 1.39]. Although women also rated sexual infidelity (M = 1.40, SD = 0.62) more negatively than they rated emotional infidelity (M = 2.09, SD = 1.10), this difference was not as large, and thus in the direction supportive of evolutionary theory, F(1, 322) = 72.03, p < .001, d = 0.77, 95% CI [0.60, 0.94].

  1. Determination of the self-attenuation correction factor for environmental samples analysis in gamma spectrometry

    International Nuclear Information System (INIS)

    Santos, Talita O.; Rocha, Zildete; Knupp, Eliana A.N.; Kastner, Geraldo F.; Oliveira, Arno H. de

    2015-01-01

    The gamma spectrometry technique has been used to obtain the activity concentrations of natural and artificial radionuclides in environmental samples of different origins, compositions and densities. These sample characteristics may influence the calibration conditions through the self-attenuation effect, with the sample density considered the most important factor. For reliable results, it is necessary to determine the self-attenuation correction factor, which has been a subject of great interest owing to its effect on the activity concentration. In this context, the aim of this work is to present the calibration process, including the correction for self-attenuation, in the evaluation of the concentration of each radionuclide with an HPGe detector gamma spectrometry system. (author)

  2. Soybean yield modeling using bootstrap methods for small samples

    Energy Technology Data Exchange (ETDEWEB)

    Dalposso, G.A.; Uribe-Opazo, M.A.; Johann, J.A.

    2016-11-01

    One of the problems that occurs when working with regression models concerns the sample size: since the statistical methods used in inferential analyses are asymptotic, a small sample may compromise the analysis because the estimates will be biased. An alternative is the bootstrap methodology, which in its non-parametric version does not require knowing or guessing the probability distribution that generated the original sample. In this work we used a small set of soybean yield data together with physical and chemical soil properties to determine a multiple linear regression model. Bootstrap methods were used for variable selection, identification of influential points and determination of confidence intervals for the model parameters. The results showed that the bootstrap methods enabled selection of the physical and chemical soil properties that were significant in the construction of the soybean yield regression model, construction of the confidence intervals of the parameters, and identification of the points that had great influence on the estimated parameters. (Author)
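
    A minimal sketch of one of the listed uses, case (pairs) bootstrap confidence intervals for the coefficients of a multiple linear regression, is given below; the data, sample size and resampling settings are hypothetical, and the paper's variable-selection and influence diagnostics are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def case_bootstrap_ci(X, y, n_boot=5000, alpha=0.05):
        """Case (pairs) bootstrap CIs for OLS coefficients with small n."""
        X1 = np.column_stack([np.ones(len(y)), X])     # add intercept column
        coefs = np.empty((n_boot, X1.shape[1]))
        for b in range(n_boot):
            i = rng.integers(0, len(y), len(y))        # resample rows with replacement
            coefs[b] = np.linalg.lstsq(X1[i], y[i], rcond=None)[0]
        return np.quantile(coefs, [alpha / 2, 1 - alpha / 2], axis=0)

    # Hypothetical data: yield vs. two soil properties, n = 12
    X = rng.normal(size=(12, 2))
    y = 3.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(0, 0.3, 12)
    print(case_bootstrap_ci(X, y))
    ```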

  3. Attenuation correction for freely moving small animal brain PET studies based on a virtual scanner geometry

    International Nuclear Information System (INIS)

    Angelis, G I; Kyme, A Z; Ryder, W J; Fulton, R R; Meikle, S R

    2014-01-01

    Attenuation correction in positron emission tomography brain imaging of freely moving animals is a very challenging problem, since the torso of the animal is often within the field of view and introduces a non-negligible attenuating factor that can degrade the quantitative accuracy of the reconstructed images. In the context of unrestrained small animal imaging, estimation of the attenuation correction factors without the need for a transmission scan is highly desirable. An attractive approach that avoids the need for a transmission scan involves the generation of the hull of the animal's head based on the reconstructed motion-corrected emission images. However, this approach ignores the attenuation introduced by the animal's torso. In this work, we propose a virtual scanner geometry which moves in synchrony with the animal's head and discriminates between those events that traversed only the animal's head (and therefore can be accurately compensated for attenuation) and those that might have also traversed the animal's torso. For each recorded pose of the animal's head a new virtual scanner geometry is defined, and therefore a new system matrix must be calculated, leading to a time-varying system matrix. This new approach was evaluated on phantom data acquired on the microPET Focus 220 scanner using a custom-made phantom and step-wise motion. Results showed that when the animal's torso is within the FOV and not appropriately accounted for during attenuation correction, it can lead to bias of up to 10%. Attenuation correction was more accurate when the virtual scanner was employed, leading to improved quantitative estimates (bias < 2%), without the need to account for the attenuation introduced by the extraneous compartment. Although the proposed method requires increased computational resources, it can provide a reliable approach towards quantitatively accurate attenuation correction for freely moving animal studies. (paper)

  4. True coincidence summing correction determination for 214Bi principal gamma lines in NORM samples

    International Nuclear Information System (INIS)

    Haddad, Kh.

    2014-01-01

    The gamma lines at 609.3 and 1,120.3 keV are two of the most intense γ emissions of 214Bi, but they suffer serious true coincidence summing (TCS) effects due to the complex decay scheme with multiple cascading transitions. TCS effects cause inaccurate count rates and hence erroneous results. A simple and easy experimental method for determining the TCS correction of the 214Bi gamma lines was developed in this work using naturally occurring radioactive material samples. Height efficiency and self-attenuation corrections were determined as well. The developed method has been formulated theoretically and validated experimentally. The correction problems were solved simply, with neither an additional standard source nor simulation skills required. (author)

  5. Receiver calibration and the nonlinearity parameter measurement of thick solid samples with diffraction and attenuation corrections.

    Science.gov (United States)

    Jeong, Hyunjo; Barnard, Daniel; Cho, Sungjong; Zhang, Shuzeng; Li, Xiongbing

    2017-11-01

    This paper presents analytical and experimental techniques for accurate determination of the nonlinearity parameter (β) in thick solid samples. When piezoelectric transducers are used for β measurements, receiver calibration is required to determine the transfer function from which the absolute displacement can be calculated. The measured fundamental and second-harmonic displacement amplitudes must then be modified to account for beam diffraction and material absorption. All these issues are addressed in this study, and the proposed technique is validated through β measurements of thick solid samples. A simplified self-reciprocity calibration procedure for a broadband receiver is described. The diffraction and attenuation corrections for the fundamental and second harmonics are explicitly derived. Aluminum alloy samples of five different thicknesses (4, 6, 8, 10, and 12 cm) were prepared and β measurements were made using the finite-amplitude, through-transmission method. The effects of diffraction and attenuation corrections on β measurements are systematically investigated. When the diffraction and attenuation corrections are all properly made, the variation of β between samples of different thickness is found to be less than 3.2%. Copyright © 2017 Elsevier B.V. All rights reserved.
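    As a rough illustration of how such a measurement comes together, the sketch below evaluates the classical lossless plane-wave relation beta = 8*A2/(A1^2 * k^2 * z) from calibrated fundamental (A1) and second-harmonic (A2) displacement amplitudes; the correction factors D1, D2 and M stand in for the paper's explicitly derived diffraction and attenuation corrections and are assumptions here, not the paper's expressions.

      # Plane-wave estimate of beta from calibrated displacement amplitudes,
      # beta = 8*A2 / (A1**2 * k**2 * z), with placeholder correction factors.
      import numpy as np

      def beta_estimate(A1, A2, f, c, z, D1=1.0, D2=1.0, M=1.0):
          """A1, A2: fundamental/second-harmonic displacements [m];
          f: frequency [Hz]; c: sound speed [m/s]; z: distance [m];
          D1, D2: diffraction corrections; M: attenuation correction."""
          k = 2.0 * np.pi * f / c          # fundamental wavenumber
          A1c, A2c = A1 / D1, A2 / (D2 * M)
          return 8.0 * A2c / (A1c**2 * k**2 * z)

      # e.g. aluminium at 5 MHz over a 10 cm path:
      # beta_estimate(1e-9, 2e-12, 5e6, 6320.0, 0.10)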

  6. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    Science.gov (United States)

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding the reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that permutation testing with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between the family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of the amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found that reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.

  7. Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies

    Science.gov (United States)

    Theis, Fabian J.

    2017-01-01

    Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when classifiers are applied to nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear, especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods that resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits only from the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and the methods perform uniformly. We discuss the consequences of inappropriate distribution assumptions and the reasons for the different behaviors of the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
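    A bare-bones sketch of the inverse-probability idea behind these resampling methods is given below: rows of the stratified training set are resampled with probabilities proportional to 1/pi_i, where pi_i is the known phase-two sampling probability of case i. The paper's stochastic and parametric variants are more elaborate, and its reference implementation is the R package sambia; this Python fragment is only an assumed simplification for illustration.

      # Plain inverse-probability oversampling: resample training rows with
      # weights 1/pi_i, then fit a random forest on the resample.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def ip_oversample_fit(X, y, pi, n_resample=None, seed=0):
          """X, y, pi: NumPy arrays; pi[i] = known sampling probability."""
          rng = np.random.default_rng(seed)
          w = (1.0 / pi) / np.sum(1.0 / pi)          # normalised IP weights
          m = n_resample or len(y)
          idx = rng.choice(len(y), size=m, replace=True, p=w)
          clf = RandomForestClassifier(n_estimators=500, random_state=seed)
          return clf.fit(X[idx], y[idx])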

  8. A method to correct sampling ghosts in historic near-infrared Fourier transform spectrometer (FTS) measurements

    Science.gov (United States)

    Dohe, S.; Sherlock, V.; Hase, F.; Gisi, M.; Robinson, J.; Sepúlveda, E.; Schneider, M.; Blumenstock, T.

    2013-08-01

    The Total Carbon Column Observing Network (TCCON) has been established to provide ground-based remote sensing measurements of the column-averaged dry air mole fractions (DMF) of key greenhouse gases. To ensure network-wide consistency, biases between Fourier transform spectrometers at different sites have to be well controlled. Errors in interferogram sampling can introduce significant biases in retrievals. In this study we investigate a two-step scheme to correct these errors. In the first step the laser sampling error (LSE) is estimated by determining the sampling shift which minimises the magnitude of the signal intensity in selected, fully absorbed regions of the solar spectrum. The LSE is estimated for every day with measurements which meet certain selection criteria to derive the site-specific time series of the LSEs. In the second step, this sequence of LSEs is used to resample all the interferograms acquired at the site, and hence correct the sampling errors. Measurements acquired at the Izaña and Lauder TCCON sites are used to demonstrate the method. At both sites the sampling error histories show changes in LSE due to instrument interventions (e.g. realignment). Estimated LSEs are in good agreement with sampling errors inferred from the ratio of primary and ghost spectral signatures in optically bandpass-limited tungsten lamp spectra acquired at Lauder. The original time series of Xair and XCO2 (XY: column-averaged DMF of the target gas Y) at both sites show discrepancies of 0.2-0.5% due to changes in the LSE associated with instrument interventions or changes in the measurement sample rate. After resampling, discrepancies are reduced to 0.1% or less at Lauder and 0.2% at Izaña. In the latter case, coincident changes in interferometer alignment may also have contributed to the residual difference. In the future the proposed method will be used to correct historical spectra at all TCCON sites.
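    A minimal sketch of the first step (LSE estimation) might look as follows, assuming a one-dimensional interferogram and a boolean mask marking fully absorbed spectral bins; the function names and the use of linear interpolation for the fractional resampling are illustrative assumptions, not the TCCON processing code.

      # Estimate the laser sampling error (LSE) as the sampling shift that
      # minimises the residual signal in fully absorbed spectral regions.
      import numpy as np
      from scipy.optimize import minimize_scalar

      def resample(ifg, shift):
          """Shift the interferogram grid by a fraction of a sample
          (linear interpolation; a sinc kernel would be more faithful)."""
          x = np.arange(ifg.size)
          return np.interp(x + shift, x, ifg)

      def lse_estimate(ifg, absorbed, bound=0.5):
          """absorbed: boolean mask over the rfft bins that should be dark."""
          def cost(shift):
              spec = np.abs(np.fft.rfft(resample(ifg, shift)))
              return spec[absorbed].mean()           # signal that should vanish
          return minimize_scalar(cost, bounds=(-bound, bound),
                                 method="bounded").x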

  9. A method to correct sampling ghosts in historic near-infrared Fourier transform spectrometer (FTS measurements

    Directory of Open Access Journals (Sweden)

    S. Dohe

    2013-08-01

    Full Text Available The Total Carbon Column Observing Network (TCCON) has been established to provide ground-based remote sensing measurements of the column-averaged dry air mole fractions (DMF) of key greenhouse gases. To ensure network-wide consistency, biases between Fourier transform spectrometers at different sites have to be well controlled. Errors in interferogram sampling can introduce significant biases in retrievals. In this study we investigate a two-step scheme to correct these errors. In the first step the laser sampling error (LSE) is estimated by determining the sampling shift which minimises the magnitude of the signal intensity in selected, fully absorbed regions of the solar spectrum. The LSE is estimated for every day with measurements which meet certain selection criteria to derive the site-specific time series of the LSEs. In the second step, this sequence of LSEs is used to resample all the interferograms acquired at the site, and hence correct the sampling errors. Measurements acquired at the Izaña and Lauder TCCON sites are used to demonstrate the method. At both sites the sampling error histories show changes in LSE due to instrument interventions (e.g. realignment). Estimated LSEs are in good agreement with sampling errors inferred from the ratio of primary and ghost spectral signatures in optically bandpass-limited tungsten lamp spectra acquired at Lauder. The original time series of Xair and XCO2 (XY: column-averaged DMF of the target gas Y) at both sites show discrepancies of 0.2–0.5% due to changes in the LSE associated with instrument interventions or changes in the measurement sample rate. After resampling, discrepancies are reduced to 0.1% or less at Lauder and 0.2% at Izaña. In the latter case, coincident changes in interferometer alignment may also have contributed to the residual difference. In the future the proposed method will be used to correct historical spectra at all TCCON sites.

  10. Bias of shear wave elasticity measurements in thin layer samples and a simple correction strategy.

    Science.gov (United States)

    Mo, Jianqiang; Xu, Hao; Qiang, Bo; Giambini, Hugo; Kinnick, Randall; An, Kai-Nan; Chen, Shigao; Luo, Zongping

    2016-01-01

    Shear wave elastography (SWE) is an emerging technique for measuring biological tissue stiffness. However, the application of SWE in thin-layer tissues is limited by bias due to the influence of geometry on the measured shear wave speed. In this study, we investigated the bias of Young's modulus measured by SWE in thin-layer gelatin-agar phantoms, and compared the results with finite element method and Lamb wave model simulations. The results indicated that the Young's modulus measured by SWE decreased continuously as the sample thickness decreased, and this effect was more significant at smaller thicknesses. We propose a new empirical formula which can conveniently correct the bias without the need for complicated mathematical modeling. In summary, we confirmed the nonlinear relation between thickness and the Young's modulus measured by SWE in thin-layer samples, and offer a simple and practical correction strategy which is convenient for clinicians to use.

  11. Measurement of phthalates in small samples of mammalian tissue

    International Nuclear Information System (INIS)

    Acott, P.D.; Murphy, M.G.; Ogborn, M.R.; Crocker, J.F.S.

    1987-01-01

    Di-(2-ethylhexyl)-phthalate (DEHP) is a phthalic acid ester that is used as a plasticizer in polyvinyl chloride products, many of which have widespread medical application. DEHP has been shown to be leached from products used for storage and delivery of blood transfusions during procedures such as plasmapheresis, hemodialysis and open-heart surgery. Results of studies in this laboratory have suggested that there is an association between the absorption and deposition of DEHP (and/or related chemicals) in the kidney and the acquired renal cystic disease (ACD) frequently seen in patients who have undergone prolonged dialysis treatment. In order to determine the relationship between the two, it has been necessary to establish a method for extracting and accurately quantitating minute amounts of these chemicals in small tissue samples. The authors have now established such a method using kidneys from normal rats and from a rat model of ACD.

  12. Monte Carlo and experimental determination of correction factors for gamma knife perfexion small field dosimetry measurements

    Science.gov (United States)

    Zoros, E.; Moutsatsos, A.; Pappas, E. P.; Georgiou, E.; Kollias, G.; Karaiskos, P.; Pantelis, E.

    2017-09-01

    Detector-, field size- and machine-specific correction factors are required for precise dosimetry measurements in small and non-standard photon fields. In this work, Monte Carlo (MC) simulation techniques were used to calculate the $k_{Q_{msr},Q_0}^{f_{msr},f_{ref}}$ and $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ correction factors for a series of ionization chambers, a synthetic microDiamond and diode dosimeters, used for reference and/or output factor (OF) measurements in the Gamma Knife Perfexion photon fields. Calculations were performed for the solid water (SW) and ABS plastic phantoms, as well as for a water phantom of the same geometry. MC calculations of the $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ correction factors in SW were compared against corresponding experimental results for a subset of ionization chambers and diode detectors. Reference experimental OF data were obtained through the weighted average of corresponding measurements using TLDs, EBT-2 films and alanine pellets. $k_{Q_{msr},Q_0}^{f_{msr},f_{ref}}$ values close to unity (within 1%) were calculated for most of the ionization chambers in water. Greater corrections of up to 6.0% were observed for chambers with relatively large air-cavity dimensions and a steel central electrode. Phantom corrections of 1.006 and 1.024 (breaking down to 1.014 from the ABS sphere and 1.010 from the accompanying ABS phantom adapter) were calculated for the SW and ABS phantoms, respectively, adding to the $k_{Q_{msr},Q_0}^{f_{msr},f_{ref}}$ corrections in water. Both measurements and MC calculations for the diode and microDiamond detectors resulted in lower-than-unity $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ correction factors, due to their denser sensitive volume and encapsulation materials. In comparison, higher-than-unity $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ results for the ionization chambers suggested field-size-dependent dose underestimations (being significant for the 4 mm field), with magnitude depending on the combination of

  13. Assessment of radioactivity for 24 hours urine sample depending on correction factor by using creatinine

    International Nuclear Information System (INIS)

    Kharita, M. H.; Maghrabi, M.

    2006-09-01

    Assessment of intake and internal dose requires knowing the amount of radioactivity in a 24-hour urine sample. It is sometimes difficult to obtain a 24-hour sample because the method is uncomfortable, and in most cases the workers refuse to collect this amount of urine. This work focuses on finding a correction factor that converts any urine sample to a 24-hour equivalent based on the amount of creatinine in the sample, whatever the size of that sample. The 24-hour excretion of a radionuclide is then calculated from the amounts of activity and creatinine in the urine sample, assuming an average creatinine excretion rate of 1.7 g per 24 hours. Several urine samples were collected from occupationally exposed workers; the amounts and ratios of creatinine and activity in these samples were determined and then normalized to the 24-hour excretion of the radionuclide. The average chemical recovery was 77%. It should be emphasized that this method should only be used if a 24-hour sample cannot be collected. (author)
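    The correction reduces to a single scaling, sketched below; the 1.7 g per 24 h creatinine excretion rate is the figure quoted in the abstract, while the function name and units are illustrative.

      # Scale the spot-sample activity to a 24 h equivalent using the
      # assumed 1.7 g/24 h creatinine excretion rate quoted above.
      def activity_24h(activity_sample_bq, creatinine_sample_g,
                       daily_creatinine_g=1.7):
          """Estimate 24 h urinary excretion from a spot urine sample."""
          return activity_sample_bq * daily_creatinine_g / creatinine_sample_g

      # e.g. 5 Bq measured in a sample containing 0.4 g creatinine
      # -> estimated 21.25 Bq per 24 h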

  14. Correction of MRI-induced geometric distortions in whole-body small animal PET-MRI

    Energy Technology Data Exchange (ETDEWEB)

    Frohwein, Lynn J., E-mail: frohwein@uni-muenster.de; Schäfers, Klaus P. [European Institute for Molecular Imaging, University of Münster, Münster 48149 (Germany); Hoerr, Verena; Faber, Cornelius [Department of Clinical Radiology, University Hospital of Münster, Münster 48149 (Germany)

    2015-07-15

    Purpose: The fusion of positron emission tomography (PET) and magnetic resonance imaging (MRI) data can be a challenging task in whole-body PET-MRI. The quality of the registration between these two modalities in large field-of-views (FOV) is often degraded by geometric distortions of the MRI data. The distortions at the edges of large FOVs mainly originate from MRI gradient nonlinearities. This work describes a method to measure and correct for this kind of geometric distortion in small animal MRI scanners to improve the registration accuracy of PET and MRI data. Methods: The authors have developed a geometric phantom which allows the measurement of geometric distortions in all spatial axes via control points. These control points are detected semiautomatically in both PET and MRI data with subpixel accuracy. The spatial transformation between PET and MRI data is determined from these control points via 3D thin-plate splines (3D TPS). The transformation derived from the 3D TPS is finally applied to real MRI mouse data, which were acquired with the same scan parameters used in the phantom data acquisitions. Additionally, the influence of the phantom material on the homogeneity of the magnetic field is determined via field mapping. Results: The mean spatial shift due to the effect of the phantom material on magnetic field homogeneity was determined to be 0.1 mm. The results of the correction show that distortions with a maximum error of 4 mm could be reduced to less than 1 mm with the proposed correction method. Furthermore, the control point-based registration of PET and MRI data showed improved congruence after correction. Conclusions: The developed phantom has been shown to have no considerable negative effect on the homogeneity of the magnetic field. The proposed method yields an appropriate correction of the measured MRI distortion and is able to improve the PET and MRI registration. Furthermore, the method is applicable to whole-body small animal PET-MRI.
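    For readers who want to experiment with the control-point approach, a 3D thin-plate-spline mapping can be sketched with SciPy's RBFInterpolator as below; this is an assumed stand-in built from the abstract's description, not the authors' implementation, and the variable names are placeholders.

      # Map distorted MRI control points onto PET space with a 3D
      # thin-plate-spline interpolator, then apply it to all voxel centres.
      from scipy.interpolate import RBFInterpolator

      def tps_transform(pts_mri, pts_pet):
          """pts_mri, pts_pet: matched (n, 3) control-point coordinates.
          Returns a callable mapping distorted MRI coords to PET space."""
          return RBFInterpolator(pts_mri, pts_pet, kernel="thin_plate_spline")

      # corrected = tps_transform(pts_mri, pts_pet)(voxel_coords)  # (N, 3)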

  15. Correction of MRI-induced geometric distortions in whole-body small animal PET-MRI

    International Nuclear Information System (INIS)

    Frohwein, Lynn J.; Schäfers, Klaus P.; Hoerr, Verena; Faber, Cornelius

    2015-01-01

    Purpose: The fusion of positron emission tomography (PET) and magnetic resonance imaging (MRI) data can be a challenging task in whole-body PET-MRI. The quality of the registration between these two modalities in large field-of-views (FOV) is often degraded by geometric distortions of the MRI data. The distortions at the edges of large FOVs mainly originate from MRI gradient nonlinearities. This work describes a method to measure and correct for this kind of geometric distortion in small animal MRI scanners to improve the registration accuracy of PET and MRI data. Methods: The authors have developed a geometric phantom which allows the measurement of geometric distortions in all spatial axes via control points. These control points are detected semiautomatically in both PET and MRI data with subpixel accuracy. The spatial transformation between PET and MRI data is determined from these control points via 3D thin-plate splines (3D TPS). The transformation derived from the 3D TPS is finally applied to real MRI mouse data, which were acquired with the same scan parameters used in the phantom data acquisitions. Additionally, the influence of the phantom material on the homogeneity of the magnetic field is determined via field mapping. Results: The mean spatial shift due to the effect of the phantom material on magnetic field homogeneity was determined to be 0.1 mm. The results of the correction show that distortions with a maximum error of 4 mm could be reduced to less than 1 mm with the proposed correction method. Furthermore, the control point-based registration of PET and MRI data showed improved congruence after correction. Conclusions: The developed phantom has been shown to have no considerable negative effect on the homogeneity of the magnetic field. The proposed method yields an appropriate correction of the measured MRI distortion and is able to improve the PET and MRI registration. Furthermore, the method is applicable to whole-body small animal PET-MRI.

  16. Small-molecule Wnt agonists correct cleft palates in Pax9 mutant mice in utero.

    Science.gov (United States)

    Jia, Shihai; Zhou, Jing; Fanelli, Christopher; Wee, Yinshen; Bonds, John; Schneider, Pascal; Mues, Gabriele; D'Souza, Rena N

    2017-10-15

    Clefts of the palate and/or lip are among the most common human craniofacial malformations and involve multiple genetic and environmental factors. Defects can only be corrected surgically and require complex life-long treatments. Our studies utilized the well-characterized Pax9-/- mouse model with a consistent cleft palate phenotype to test small-molecule Wnt agonist therapies. We show that the absence of Pax9 alters the expression of Wnt pathway genes including Dkk1 and Dkk2, proven antagonists of Wnt signaling. The functional interactions between Pax9 and Dkk1 are shown by the genetic rescue of secondary palate clefts in Pax9-/-Dkk1f/+;Wnt1Cre embryos. The controlled intravenous delivery of small-molecule Wnt agonists (Dkk inhibitors) into pregnant Pax9+/- mice restored Wnt signaling and led to the growth and fusion of palatal shelves, as marked by an increase in cell proliferation and osteogenesis in utero, while other organ defects were not corrected. This work underscores the importance of Pax9-dependent Wnt signaling in palatogenesis and suggests that this functional upstream molecular relationship can be exploited for the development of therapies for human cleft palates that arise from single-gene disorders. © 2017. Published by The Company of Biologists Ltd.

  17. Atmospheric Correction Performance of Hyperspectral Airborne Imagery over a Small Eutrophic Lake under Changing Cloud Cover

    Directory of Open Access Journals (Sweden)

    Lauri Markelin

    2016-12-01

    Full Text Available Atmospheric correction of remotely sensed imagery of inland water bodies is essential to interpret water-leaving radiance signals and to retrieve water quality variables accurately. Atmospheric correction is particularly challenging over inhomogeneous water bodies surrounded by comparatively bright land surfaces. We present results of AisaFENIX airborne hyperspectral imagery collected over a small inland water body under changing cloud cover, presenting challenging but common conditions for atmospheric correction. This is the first evaluation of the performance of the FENIX sensor over water bodies. ATCOR4, which is not specifically designed for atmospheric correction over water and does not make any assumptions on water type, was used to obtain atmospherically corrected reflectance values, which were compared to in situ water-leaving reflectance collected at six stations. Three different atmospheric correction strategies in ATCOR4 were tested. The strategy using fully image-derived and spatially varying atmospheric parameters produced a reflectance accuracy of ±0.002, i.e., a difference of less than 15% compared to the in situ reference reflectance. The amplitude and shape of the remotely sensed reflectance spectra were in general accordance with the in situ data. The spectral angle was better than 4.1° for the best cases, in the spectral range of 450–750 nm. The retrieval of chlorophyll-a (Chl-a) concentration using a popular semi-analytical band ratio algorithm for turbid inland waters gave an accuracy of ~16%, or 4.4 mg/m3, compared to retrieval of Chl-a from reflectance measured in situ. Using fixed ATCOR4 processing parameters for whole images reduced the difference in Chl-a retrieval relative to the reference from ~6 mg/m3 to approximately 2 mg/m3. We conclude that the AisaFENIX sensor, in combination with ATCOR4 in image-driven parametrization, can be successfully used for inland water quality observations. This implies that the need for in situ

  18. Evaluation of energy deposition by 153Sm in small samples

    International Nuclear Information System (INIS)

    Cury, M.I.C.; Siqueira, P.T.D.; Yoriyaz, H.; Coelho, P.R.P.; Da Silva, M.A.; Okazaki, K.

    2002-01-01

    Aim: This work presents evaluations of the dose absorbed by 'in vitro' blood cultures when mixed with 153Sm solutions of different concentrations. Although 153Sm is used as a radiopharmaceutical mainly for its beta emission, which is short-range radiation, it also emits gamma radiation, which has longer-range penetration. It is therefore difficult to determine the absorbed dose in small samples, where the infinite-medium approximation is no longer valid. Materials and Methods: MCNP-4C (Monte Carlo N-Particle transport code) was used to perform the evaluations. MCNP is not a deterministic code that calculates the value of a specific quantity by solving the physical equations involved in the problem; rather, it performs a virtual experiment in which the relevant events are simulated and the quantities of interest are tallied. MCNP also stands out for its ability to specify the geometry of virtually any problem. These features, among others, make MCNP a time-consuming code. The simulated problem consists of a cylindrical plastic tube with a 1.5 cm internal diameter and 0.1 cm wall thickness, with a 2.0 cm high conical bottom end, so that the represented sample holds 4.0 ml (1 ml of blood and 3 ml of culture medium). To evaluate the energy deposited in the blood culture per 153Sm decay, the problem was divided into 3 steps to account for the β- emissions (which have a continuous spectrum), the gammas, and the conversion and Auger electrons. Each emission contribution was then weighted and summed to give the final value. Besides this radiation 'fragmentation', simulations were performed for many different amounts of 153Sm solution added to the sample, covering a range from 1 μl to 0.5 ml. Results: The average energy per disintegration of 153Sm is 331 keV [1]. Gammas account for 63 keV; β-, conversion and Auger electrons account for 268 keV. The simulations performed showed an average energy deposition of 260 keV

  19. Estimating the residential demand function for natural gas in Seoul with correction for sample selection bias

    International Nuclear Information System (INIS)

    Yoo, Seung-Hoon; Lim, Hea-Jin; Kwak, Seung-Jun

    2009-01-01

    Over the last twenty years, the consumption of natural gas in Korea has increased dramatically. This increase has mainly resulted from the rise of consumption in the residential sector. The main objective of the study is to estimate households' demand function for natural gas by applying a sample selection model using data from a survey of households in Seoul. The results show that there exists a selection bias in the sample and that failure to correct for sample selection bias distorts the mean estimate of the demand for natural gas downward by 48.1%. In addition, according to the estimation results, the size of the house, the dummy variable for dwelling in an apartment, the dummy variable for having a bed in an inner room, and the household's income all have positive relationships with the demand for natural gas. On the other hand, the size of the family and the price of gas contribute negatively to the demand for natural gas. (author)
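    The standard Heckman two-step estimator that underlies this kind of sample-selection correction can be sketched as follows; the variable names are placeholders, and statsmodels is assumed only for the probit and OLS building blocks (it has no built-in Heckman routine).

      # Heckman-style two-step correction: probit selection equation, then
      # OLS with the inverse Mills ratio as an extra regressor.
      import numpy as np
      import statsmodels.api as sm
      from scipy.stats import norm

      def heckman_two_step(Z, selected, X, y):
          """Z: selection covariates (all households); selected: 0/1 array;
          X, y: demand covariates/response for selected households only."""
          Zc = sm.add_constant(Z)
          probit = sm.Probit(selected, Zc).fit(disp=0)
          xb = Zc @ probit.params                    # linear index Z'gamma
          imr = norm.pdf(xb) / norm.cdf(xb)          # inverse Mills ratio
          Xc = sm.add_constant(np.column_stack([X, imr[selected == 1]]))
          return sm.OLS(y, Xc).fit()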

  20. Efficiency corrections in determining the 137Cs inventory of environmental soil samples by using relative measurement method and GEANT4 simulations

    International Nuclear Information System (INIS)

    Li, Gang; Liang, Yongfei; Xu, Jiayun; Bai, Lixin

    2015-01-01

    The determination of 137Cs inventory is widely used to estimate the soil erosion or deposition rate. The generally used method to determine the activity of volumetric samples is the relative measurement method, which employs a calibration standard sample with accurately known activity. This method has great advantages in accuracy and operation only when there is a small difference in elemental composition, sample density and geometry between the measured samples and the calibration standard. Otherwise, additional efficiency corrections are needed in the calculation. Monte Carlo simulations can handle these correction problems easily, with lower financial cost and higher accuracy. This work presents a detailed description of the simulation and calibration procedure for a conventionally used commercial P-type coaxial HPGe detector with cylindrical sample geometry. The effects of sample elemental composition, density and geometry were discussed in detail and calculated in terms of efficiency correction factors. The effect of sample placement was also analyzed; the results indicate that the radioactive nuclides and sample density are not absolutely uniformly distributed along the axial direction. Finally, a unified binary quadratic functional relationship for the efficiency correction factors as a function of sample density and height was obtained by the least-squares fitting method. This function covers a sample density and height range of 0.8–1.8 g/cm3 and 3.0–7.25 cm, respectively. The efficiency correction factors calculated by the fitted function are in good agreement with those obtained by the GEANT4 simulations, with a determination coefficient greater than 0.9999. The results obtained in this paper make the above-mentioned relative measurements more accurate and efficient in the routine radioactive analysis of environmental cylindrical soil samples. - Highlights: • Determination of 137Cs inventory in environmental soil samples by using relative
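    The final fitting step can be reproduced generically as below: an ordinary least-squares fit of the binary quadratic surface f(rho, h) = c0 + c1*rho + c2*h + c3*rho^2 + c4*h^2 + c5*rho*h to tabulated correction factors. The coefficient layout is an assumption; the paper reports only the functional form.

      # Ordinary least-squares fit of the binary quadratic surface
      # f(rho, h) = c0 + c1*rho + c2*h + c3*rho**2 + c4*h**2 + c5*rho*h.
      import numpy as np

      def fit_quadratic_surface(rho, h, f):
          A = np.column_stack([np.ones_like(rho), rho, h,
                               rho**2, h**2, rho*h])
          coef, *_ = np.linalg.lstsq(A, f, rcond=None)
          return coef                                # c0..c5

      def eval_surface(c, rho, h):
          return (c[0] + c[1]*rho + c[2]*h
                  + c[3]*rho**2 + c[4]*h**2 + c[5]*rho*h)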

  1. Fitted temperature-corrected Compton cross sections for Monte Carlo applications and a sampling distribution

    International Nuclear Information System (INIS)

    Wienke, B.R.; Devaney, J.J.; Lathrop, B.L.

    1984-01-01

    Simple temperature-corrected cross sections, which replace the static Klein-Nishina set in a one-to-one manner, are developed for Monte Carlo applications. The reduced set is obtained from a nonlinear least-squares fit to the exact photon-Maxwellian electron cross sections by using a Klein-Nishina-like formula as the fitting equation. Two parameters are sufficient, and accurate to two decimal places, to explicitly fit the exact cross sections over a range of 0 to 100 keV in electron temperature and 0 to 1 MeV in incident photon energy. Since the fit equations are Klein-Nishina-like, existing Monte Carlo code algorithms using the Klein-Nishina formula can be trivially modified to accommodate corrections for a moving Maxwellian electron background. The simple two parameter scheme and other fits are presented and discussed and comparisons with exact predictions are exhibited. The fits are made to the total photon-Maxwellian electron cross section and the fitting parameters can be consistently used in both the energy conservation equation for photon-electron scattering and the differential cross section, as they are presently sampled in Monte Carlo photonics applications. The fit equations are motivated in a very natural manner by the asymptotic expansion of the exact photon-Maxwellian effective cross-section kernel. A probability distribution is also obtained for the corrected set of equations
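    The fitting idea can be illustrated with the known static Klein-Nishina total cross section; the two-parameter surrogate sigma(k; a, b) = a*sigma_KN(b*k) used below is a hypothetical stand-in, since the paper's actual Klein-Nishina-like parametrisation is not reproduced here.

      # Fit a Klein-Nishina-like surrogate to exact photon-Maxwellian
      # cross sections; sigma_kn is the standard static KN total cross
      # section in units of 2*pi*r_e**2, with k = E_photon/(m_e c^2).
      import numpy as np
      from scipy.optimize import curve_fit

      def sigma_kn(k):
          t = 1.0 + 2.0*k
          return ((1.0 + k)/k**2 * (2.0*(1.0 + k)/t - np.log(t)/k)
                  + np.log(t)/(2.0*k) - (1.0 + 3.0*k)/t**2)

      def fit_two_param(k, sigma_exact):
          f = lambda k, a, b: a * sigma_kn(b * k)    # hypothetical 2-parameter form
          popt, _ = curve_fit(f, k, sigma_exact, p0=(1.0, 1.0))
          return popt                                # (a, b)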

  2. Systematic studies of small scintillators for new sampling calorimeter

    International Nuclear Information System (INIS)

    Jacosalem, E.P.; Sanchez, A.L.C.; Bacala, A.M.; Iba, S.; Nakajima, N.; Ono, H.; Miyata, H.

    2007-01-01

    A new sampling calorimeter using very thin scintillators and the multi-pixel photon counter (MPPC) has been proposed to achieve better position resolution for the international linear collider (ILC) experiment. As part of this R and D study, small plastic scintillators of different sizes, thicknesses and wrapping reflectors are systematically studied. The scintillation light due to beta rays from a collimated 90Sr source is collected from the scintillator by a wavelength-shifting (WLS) fiber and converted into electrical signals at the PMT. The wrapping that gives the best light yield is determined by comparing the measured pulse height of each 10 x 40 x 2 mm strip scintillator covered with 3M reflective mirror film, teflon, white paint, black tape, gold, aluminum, or white paint+teflon. The pulse height dependence on position, length and thickness of the 3M reflective mirror film and teflon-wrapped scintillators is measured. Results show that the 3M radiant mirror film-wrapped scintillator has the greatest light yield, with an average of 9.2 photoelectrons. It is observed that the light yield increases slightly with scintillator length, but increases by about 100% when the WLS fiber diameter is increased from 1.0 mm to 1.6 mm. The position dependence measurement along the strip scintillator showed the uniformity of light transmission from the sensor to the PMT. A dip across the strip is observed which is 40% of the maximum pulse height. The pulse height of the block-type scintillator, on the other hand, is found to be almost proportional to the scintillator thickness. (author)

  3. Efficiency corrections in determining the (137)Cs inventory of environmental soil samples by using relative measurement method and GEANT4 simulations.

    Science.gov (United States)

    Li, Gang; Liang, Yongfei; Xu, Jiayun; Bai, Lixin

    2015-08-01

    The determination of (137)Cs inventory is widely used to estimate the soil erosion or deposition rate. The generally used method to determine the activity of volumetric samples is the relative measurement method, which employs a calibration standard sample with accurately known activity. This method has great advantages in accuracy and operation only when there is a small difference in elemental composition, sample density and geometry between the measured samples and the calibration standard. Otherwise, additional efficiency corrections are needed in the calculation. Monte Carlo simulations can handle these correction problems easily, with lower financial cost and higher accuracy. This work presents a detailed description of the simulation and calibration procedure for a conventionally used commercial P-type coaxial HPGe detector with cylindrical sample geometry. The effects of sample elemental composition, density and geometry were discussed in detail and calculated in terms of efficiency correction factors. The effect of sample placement was also analyzed; the results indicate that the radioactive nuclides and sample density are not absolutely uniformly distributed along the axial direction. Finally, a unified binary quadratic functional relationship for the efficiency correction factors as a function of sample density and height was obtained by the least-squares fitting method. This function covers the sample density and height range of 0.8-1.8 g/cm(3) and 3.0-7.25 cm, respectively. The efficiency correction factors calculated by the fitted function are in good agreement with those obtained by the GEANT4 simulations, with a determination coefficient greater than 0.9999. The results obtained in this paper make the above-mentioned relative measurements more accurate and efficient in the routine radioactive analysis of environmental cylindrical soil samples. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. In-Situ Systematic Error Correction for Digital Volume Correlation Using a Reference Sample

    KAUST Repository

    Wang, B.

    2017-11-27

    The self-heating effect of a laboratory X-ray computed tomography (CT) scanner causes a slight change in its imaging geometry, which induces translation and dilatation (i.e., artificial displacement and strain) in reconstructed volume images recorded at different times. To realize high-accuracy internal full-field deformation measurements using digital volume correlation (DVC), these artificial displacements and strains associated with unstable CT imaging must be eliminated. In this work, an effective and easily implemented reference sample compensation (RSC) method is proposed for in-situ systematic error correction in DVC. The proposed method utilizes a stationary reference sample, placed beside the test sample, to record the artificial displacement fields caused by the self-heating effect of CT scanners. The detected displacement fields are then fitted by a parametric polynomial model, which is used to remove the unwanted artificial deformations in the test sample. Rescan tests of a stationary sample and real uniaxial compression tests performed on copper foam specimens demonstrate the accuracy, efficacy, and practicality of the presented RSC method.
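    The compensation step can be sketched generically as below, using a first-order (affine) polynomial for brevity; the paper fits a parametric polynomial model, so the model order and the names here are assumptions.

      # Fit a low-order polynomial (here affine) to the reference sample's
      # artificial displacement field and subtract its prediction from the
      # test sample's measured displacements.
      import numpy as np

      def fit_affine_field(pts_ref, disp_ref):
          """pts_ref: (n, 3) positions; disp_ref: (n, 3) displacements."""
          A = np.column_stack([np.ones(len(pts_ref)), pts_ref])  # [1, x, y, z]
          coef, *_ = np.linalg.lstsq(A, disp_ref, rcond=None)    # (4, 3)
          return coef

      def compensate(pts_test, disp_test, coef):
          A = np.column_stack([np.ones(len(pts_test)), pts_test])
          return disp_test - A @ coef      # remove the artificial part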

  5. In-Situ Systematic Error Correction for Digital Volume Correlation Using a Reference Sample

    KAUST Repository

    Wang, B.; Pan, B.; Lubineau, Gilles

    2017-01-01

    The self-heating effect of a laboratory X-ray computed tomography (CT) scanner causes a slight change in its imaging geometry, which induces translation and dilatation (i.e., artificial displacement and strain) in reconstructed volume images recorded at different times. To realize high-accuracy internal full-field deformation measurements using digital volume correlation (DVC), these artificial displacements and strains associated with unstable CT imaging must be eliminated. In this work, an effective and easily implemented reference sample compensation (RSC) method is proposed for in-situ systematic error correction in DVC. The proposed method utilizes a stationary reference sample, placed beside the test sample, to record the artificial displacement fields caused by the self-heating effect of CT scanners. The detected displacement fields are then fitted by a parametric polynomial model, which is used to remove the unwanted artificial deformations in the test sample. Rescan tests of a stationary sample and real uniaxial compression tests performed on copper foam specimens demonstrate the accuracy, efficacy, and practicality of the presented RSC method.

  6. Inverse Gaussian model for small area estimation via Gibbs sampling

    African Journals Online (AJOL)

    We present a Bayesian method for estimating small area parameters under an inverse Gaussian model. The method is extended to estimate small area parameters for finite populations. The Gibbs sampler is proposed as a mechanism for implementing the Bayesian paradigm. We illustrate the method by application to ...

  7. Detector to detector corrections: a comprehensive experimental study of detector specific correction factors for beam output measurements for small radiotherapy beams

    DEFF Research Database (Denmark)

    Azangwe, Godfrey; Grochowska, Paulina; Georg, Dietmar

    2014-01-01

    …carbon-doped aluminium oxide (Al2O3:C), organic plastic scintillators, diamond detectors, a liquid-filled ion chamber, and a range of small-volume air-filled ionization chambers (volumes ranging from 0.002 cm3 to 0.3 cm3). All detector measurements were corrected for the volume averaging effect and compared with dose ratios… measurements, the authors recommend the use of detectors that require relatively little correction, such as unshielded diodes, diamond detectors or microchambers, and solid-state detectors such as alanine, TLD, Al2O3:C, or scintillators.

  8. Respiration-averaged CT for attenuation correction in non-small-cell lung cancer

    International Nuclear Information System (INIS)

    Cheng, Nai-Ming; Ho, Kung-Chu; Yen, Tzu-Chen; Yu, Chih-Teng; Wu, Yi-Cheng; Liu, Yuan-Chang; Wang, Chih-Wei

    2009-01-01

    Breathing causes artefacts on PET/CT images. Cine CT has been used to reduce respiratory artefacts by acquiring multiple images during a single breathing cycle. The aim of this prospective study in non-small-cell lung cancer (NSCLC) patients was twofold. Firstly, we sought to compare the motion artefacts in PET/CT images attenuation-corrected with helical CT (HCT) and with averaged CT (ACT), which provides an average of the cine CT images. Secondly, we wanted to evaluate the differences in maximum standardized uptake values (SUVmax) between HCT and ACT. Enrolled in the study were 80 patients with NSCLC. PET images attenuation-corrected with HCT (PET/HCT) and with ACT (PET/ACT) were obtained in all patients. Misregistration was evaluated by measurement of the curved photopenic area in the lower thorax of the PET images for all patients and by direct measurement of misregistration for selected lesions. SUVmax was measured separately at the primary tumours, regional lymph nodes and background. Significantly lower misregistration was observed in PET/ACT images than in PET/HCT images (below-thoracic misregistration 0.25±0.58 cm vs. 1.17±1.17 cm), and significantly higher SUVmax values were noted in PET/ACT images than in PET/HCT images in the primary tumours: on average, SUVmax in PET/ACT images was higher by 0.35 for the main tumours and 0.34 for the lymph nodes. Due to its significantly reduced misregistration, PET/ACT provided more reliable SUVmax values and may be useful in treatment planning and monitoring the therapeutic response in patients with NSCLC. (orig.)

  9. Respiratory Motion Correction for Compressively Sampled Free Breathing Cardiac MRI Using Smooth l1-Norm Approximation

    Directory of Open Access Journals (Sweden)

    Muhammad Bilal

    2018-01-01

    Full Text Available Transformed domain sparsity of Magnetic Resonance Imaging (MRI) has recently been used to reduce the acquisition time in conjunction with compressed sensing (CS) theory. Respiratory motion during an MR scan results in strong blurring and ghosting artifacts in recovered MR images. To improve the quality of the recovered images, motion needs to be estimated and corrected. In this article, a two-step approach is proposed for the recovery of cardiac MR images in the presence of free-breathing motion. In the first step, compressively sampled MR images are recovered by solving an optimization problem using a gradient descent algorithm. The L1-norm based regularizer used in the optimization problem is approximated by a hyperbolic tangent function. In the second step, a block matching algorithm, known as Adaptive Rood Pattern Search (ARPS), is exploited to estimate and correct respiratory motion among the recovered images. The framework is tested on free-breathing simulated and in vivo 2D cardiac cine MRI data. Simulation results show improved structural similarity index (SSIM), peak signal-to-noise ratio (PSNR) and mean square error (MSE) with different acceleration factors for the proposed method. Experimental results also provide a comparison between k-t FOCUSS with MEMC and the proposed method.
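    A minimal sketch of the smoothing trick is shown below: the L1 penalty is replaced by the differentiable surrogate x*tanh(a*x) so that plain gradient descent applies. The operator A, the step size and the constants are illustrative assumptions; the article's actual formulation (complex k-space data, k-t sampling) is more involved.

      # Gradient descent for sparse recovery with the l1 penalty smoothed
      # as x*tanh(a*x); grad_reg is the exact derivative of that surrogate.
      import numpy as np

      def smooth_l1_recover(A, b, lam=0.01, a=100.0, step=0.1, n_iter=200):
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              grad_fid = A.T @ (A @ x - b)           # data-fidelity gradient
              t = np.tanh(a * x)
              grad_reg = t + a * x * (1.0 - t**2)    # d/dx [x * tanh(a*x)]
              x -= step * (grad_fid + lam * grad_reg)
          return x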

  10. Construct Validity of the MMPI-2-RF Triarchic Psychopathy Scales in Correctional and Collegiate Samples.

    Science.gov (United States)

    Kutchen, Taylor J; Wygant, Dustin B; Tylicki, Jessica L; Dieter, Amy M; Veltri, Carlo O C; Sellbom, Martin

    2017-01-01

    This study examined the MMPI-2-RF (Ben-Porath & Tellegen, 2008/2011) Triarchic Psychopathy scales recently developed by Sellbom et al. (2016) in 3 separate groups of male correctional inmates and 2 college samples. Participants were administered a diverse battery of psychopathy-specific measures (e.g., Psychopathy Checklist-Revised [Hare, 2003], Psychopathic Personality Inventory-Revised [Lilienfeld & Widows, 2005], Triarchic Psychopathy Measure [Patrick, 2010]), omnibus personality and psychopathology measures such as the Personality Assessment Inventory (Morey, 2007) and Personality Inventory for DSM-5 (Krueger, Derringer, Markon, Watson, & Skodol, 2012), and narrow-band measures that capture conceptually relevant constructs. Our results generally evidenced strong support for the convergent and discriminant validity of the MMPI-2-RF Triarchic scales. Boldness was largely associated with measures of fearless dominance, social potency, and stress immunity. Meanness showed strong relationships with measures of callousness, aggression, externalizing tendencies, and poor interpersonal functioning. Disinhibition exhibited strong associations with poor impulse control, stimulus seeking, and general externalizing proclivities. Our results provide additional construct validation for both the triarchic model and the MMPI-2-RF Triarchic scales. Given the widespread use of the MMPI-2-RF in correctional and forensic settings, our results have important implications for clinical assessment in these 2 areas, where psychopathy is a highly relevant construct.

  11. Collateral Information for Equating in Small Samples: A Preliminary Investigation

    Science.gov (United States)

    Kim, Sooyeon; Livingston, Samuel A.; Lewis, Charles

    2011-01-01

    This article describes a preliminary investigation of an empirical Bayes (EB) procedure for using collateral information to improve equating of scores on test forms taken by small numbers of examinees. Resampling studies were done on two different forms of the same test. In each study, EB and non-EB versions of two equating methods--chained linear…

  12. Advanced astigmatism-corrected tandem Wadsworth mounting for small-scale spectral broadband imaging spectrometer.

    Science.gov (United States)

    Lei, Yu; Lin, Guan-yu

    2013-01-01

    A tandem-grating, double-dispersion mount makes it possible to design an imaging spectrometer for weak-light observation with high spatial resolution, high spectral resolution, and high optical transmission efficiency. The traditional tandem Wadsworth mounting was originally designed to match a coaxial telescope and a large-scale imaging spectrometer. When it is used with an off-axis telescope, such as an off-axis parabolic mirror, it yields lower imaging quality than with a coaxial telescope. It may also introduce mechanical interference among the detector and the optical elements when applied to a short-focal-length, small-scale spectrometer in the confined volume of a satellite. An advanced tandem Wadsworth mounting has been investigated to deal with this situation. The Wadsworth astigmatism-corrected mounting condition, expressed as the distance between the second concave grating and the imaging plane, is calculated. The optimum arrangement of the first plane grating and the second concave grating, which makes the anterior Wadsworth condition hold for each wavelength, is then derived by geometric and first-order differential calculation. These two arrangements comprise the advanced Wadsworth mounting condition. The spectral resolution has also been calculated from these conditions. A design example based on the optimum theory proves that the advanced tandem Wadsworth mounting performs excellently over a broad spectral band.

  13. Mapping species distributions with MAXENT using a geographically biased sample of presence data: a performance assessment of methods for correcting sampling bias.

    Science.gov (United States)

    Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean

    2014-01-01

    MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensual guideline to account for it. Here we compared the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" dataset derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling bias corresponding to potential types of empirical bias. We applied five correction methods to the biased samples and compared the outputs of the distribution models to the unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of the methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas the other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. Nevertheless, systematic sampling seems to be the most efficient way to correct sampling bias and should be advised in most cases.
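    The best-performing correction, systematic (spatial) sampling of records, can be sketched as a simple grid-based thinning that keeps at most one occurrence per grid cell; the cell size and the random tie-breaking rule below are assumptions for illustration.

      # Grid-based systematic thinning: keep at most one occurrence record
      # per spatial grid cell, in random order for tie-breaking.
      import numpy as np

      def grid_thin(lonlat, cell=0.5, seed=0):
          """lonlat: (n, 2) occurrence coordinates -> thinned subset."""
          rng = np.random.default_rng(seed)
          cells = np.floor(lonlat / cell).astype(int)
          keep = {}
          for i in rng.permutation(len(lonlat)):
              keep.setdefault(tuple(cells[i]), i)    # first (random) hit per cell
          return lonlat[sorted(keep.values())]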

  14. Safety evaluation of small samples for isotope production

    International Nuclear Information System (INIS)

    Sharma, Archana; Singh, Tej; Varde, P.V.

    2015-09-01

    Radioactive isotopes are widely used in basic and applied science and engineering, most notably as environmental and industrial tracers and in medical imaging procedures. The production of radioisotopes constitutes an important activity of the Indian nuclear program. Since its initial criticality, the DHRUVA reactor has facilitated the regular supply of most of the radioisotopes required in the country for applications in the fields of medicine, industry and agriculture. In-pile irradiation of samples requires a prior estimation of the sample reactivity load, heating rate, activity developed and shielding thickness required for post-irradiation handling. This report is an attempt to highlight the contributions of the DHRUVA reactor, as well as to explain in detail the methodologies used in the safety evaluation of in-pile irradiation samples. (author)

  15. A high-efficiency neutron coincidence counter for small samples

    International Nuclear Information System (INIS)

    Miller, M.C.; Menlove, H.O.; Russo, P.A.

    1991-01-01

    The inventory sample coincidence counter (INVS) has been modified to enhance its performance. The new design is suitable for use with a glove-box sample well (in-line application) as well as in the standard at-line mode. The counter has been redesigned to count more efficiently and to be less sensitive to variations in sample position. These factors lead to a higher degree of precision and accuracy in a given counting period and allow the practical use of the INVS counter with gamma-ray isotopics to obtain a plutonium assay independent of operator declarations and time-consuming chemical analysis. A calculation study was performed using the Los Alamos transport code MCNP to optimize the design parameters. 5 refs., 7 figs., 8 tabs

  16. Reliability assessment based on small samples of normal distribution

    International Nuclear Information System (INIS)

    Ma Zhibo; Zhu Jianshi; Xu Naixin

    2003-01-01

    When the pertinent parameter involved in the reliability definition follows a normal distribution, the conjugate prior of its distribution parameters (μ, h) is the normal-gamma distribution. With the help of the maximum entropy and moment-equivalence principles, the subjective information about the parameter and the sampling data of its independent variables are transformed into a Bayesian prior on (μ, h). The desired estimates are obtained from either the prior or the posterior, which is formed by combining the prior with the sampling data. Computing methods are described and examples are presented as demonstrations.
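    The conjugate update itself is standard and can be written down directly, as below; the prior hyperparameters (mu0, k0, a0, b0) would come from the maximum-entropy/moment-matching step described in the abstract and are placeholders here.

      # Normal-gamma conjugate update for (mu, h) under a normal likelihood.
      import numpy as np

      def normal_gamma_update(x, mu0, k0, a0, b0):
          n, xbar = len(x), np.mean(x)
          S = np.sum((x - xbar)**2)
          kn = k0 + n
          mun = (k0*mu0 + n*xbar) / kn
          an = a0 + n/2.0
          bn = b0 + 0.5*S + 0.5*k0*n*(xbar - mu0)**2 / kn
          return mun, kn, an, bn       # posterior mean of precision h: an/bn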

  17. Estimation of individual reference intervals in small sample sizes

    DEFF Research Database (Denmark)

    Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz

    2007-01-01

    In occupational health studies, the study groups most often comprise healthy subjects performing their work. Sampling is often planned in the most practical way, e.g., sampling of blood in the morning at the work site just after work starts. Optimal use of reference intervals requires… from various variables such as gender, age, BMI, alcohol, smoking, and menopause. The reference intervals were compared to reference intervals calculated using IFCC recommendations. Where comparable, the IFCC-calculated reference intervals had a wider range compared to the variance component models…

  18. The problem in 180 deg data sampling and radioactivity decay correction in gated cardiac blood pool scanning using SPECT

    International Nuclear Information System (INIS)

    Ohtake, Tohru; Watanabe, Toshiaki; Nishikawa, Junichi

    1986-01-01

    In cardiac blood pool scanning using SPECT, half 180-deg data collection (HD) versus full 360-deg data collection (FD), and Tc-99m decay, are problems in quantifying the ejection count (EC) (end-diastolic count minus end-systolic count) of both ventricles and the ratio of the ejection counts of the right and left ventricles (RVEC/LVEC). We studied the change produced by altering the starting position of data sampling in HD scans. In our results from a phantom and 4 clinical cases, when the cardiac axis deviation was not large and there was no remarkable cardiac enlargement, the change in LVEC, RVEC and RVEC/LVEC was small (1-4%) within a 12-degree change of the starting position, and the difference between the results of an HD scan with a good starting position (the average of the LV peak and RV peak) and an FD scan was not large (less than 7%). Because of this, we think the HD scan can be used in those cases. But when the cardiac axis deviation was large or there was remarkable cardiac enlargement, the change in LVEC, RVEC and RVEC/LVEC was large (more than 10%) even within a 12-degree change of the starting position, so we think the FD scan would be better in those cases. In our results from 6 patients, the half-life of Tc-99m labeled albumin in blood varied from 2 to 4 hr (3.03 ± 0.59 hr, mean ± s.d.). Using a program for radioactivity (RA) decay correction, we studied the change in LVEC, RVEC and RVEC/LVEC in 11 cases. When RA decay correction was performed using a half-life of 3.0 hr, LVEC increased 7.5%, RVEC increased 8.7% and RVEC/LVEC increased 0.9% on average in HD scans of 8 cases (LPO to RAO, 32 views, 60 beats/view). We think RA decay correction would not be needed in quantifying RVEC/LVEC in most cases because the change in RVEC/LVEC was very small. (author)
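    The decay correction applied here amounts to scaling each view's counts by exp(ln 2 · t / T½); a sketch, with T½ = 3.0 hr as used in the study (function name and the example values are illustrative):

      # Decay-correct counts acquired t hours after the reference time.
      import numpy as np

      def decay_corrected(counts, t_hours, t_half_hours=3.0):
          return counts * np.exp(np.log(2.0) * t_hours / t_half_hours)

      # e.g. a view acquired 20 min into the scan:
      # decay_corrected(1000, 20/60)  ->  ~1080 counts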

  19. Mars ascent propulsion options for small sample return vehicles

    International Nuclear Information System (INIS)

    Whitehead, J. C.

    1997-01-01

    An unprecedented combination of high propellant fraction and small size is required for affordable-scale Mars return, regardless of the number of stages, or whether Mars orbit rendezvous or in-situ propellant options are used. Conventional space propulsion technology is too heavy, even without structure or other stage subsystems. The application of launch vehicle design principles to the development of new hardware on a tiny scale is therefore suggested. Miniature pump-fed rocket engines fed by low pressure tanks can help to meet this challenge. New concepts for engine cycles using piston pumps are described, and development issues are outlined

  20. Advanced path sampling of the kinetic network of small proteins

    NARCIS (Netherlands)

    Du, W.

    2014-01-01

    This thesis is focused on developing advanced path sampling simulation methods to study protein folding and unfolding, and to build kinetic equilibrium networks describing these processes. In Chapter 1 the basic knowledge of protein structure and folding theories were introduced and a brief overview

  1. Small sample approach, and statistical and epidemiological aspects

    NARCIS (Netherlands)

    Offringa, Martin; van der Lee, Hanneke

    2011-01-01

    In this chapter, the design of pharmacokinetic studies and phase III trials in children is discussed. Classical approaches and relatively novel approaches, which may be more useful in the context of drug research in children, are discussed. The burden of repeated blood sampling in pediatric

  2. Measurements of accurate x-ray scattering data of protein solutions using small stationary sample cells

    Science.gov (United States)

    Hong, Xinguo; Hao, Quan

    2009-01-01

    In this paper, we report a method of precise in situ x-ray scattering measurements on protein solutions using small stationary sample cells. Although reduction in the radiation damage induced by intense synchrotron radiation sources is indispensable for the correct interpretation of scattering data, there is still a lack of effective methods to overcome radiation-induced aggregation and extract scattering profiles free from chemical or structural damage. It is found that radiation-induced aggregation mainly begins on the surface of the sample cell and grows along the beam path; the diameter of the damaged region is comparable to the x-ray beam size. Radiation-induced aggregation can be effectively avoided by using a two-dimensional scan (2D mode), with an interval as small as 1.5 times the beam size, at low temperature (e.g., 4 °C). A radiation sensitive protein, bovine hemoglobin, was used to test the method. A standard deviation of less than 5% in the small angle region was observed from a series of nine spectra recorded in 2D mode, in contrast to the intensity variation seen using the conventional stationary technique, which can exceed 100%. Wide-angle x-ray scattering data were collected at a standard macromolecular diffraction station using the same data collection protocol and showed a good signal/noise ratio (better than the reported data on the same protein using a flow cell). The results indicate that this method is an effective approach for obtaining precise measurements of protein solution scattering.

  3. Measurements of accurate x-ray scattering data of protein solutions using small stationary sample cells

    International Nuclear Information System (INIS)

    Hong Xinguo; Hao Quan

    2009-01-01

    In this paper, we report a method of precise in situ x-ray scattering measurements on protein solutions using small stationary sample cells. Although reduction in the radiation damage induced by intense synchrotron radiation sources is indispensable for the correct interpretation of scattering data, there is still a lack of effective methods to overcome radiation-induced aggregation and extract scattering profiles free from chemical or structural damage. It is found that radiation-induced aggregation mainly begins on the surface of the sample cell and grows along the beam path; the diameter of the damaged region is comparable to the x-ray beam size. Radiation-induced aggregation can be effectively avoided by using a two-dimensional scan (2D mode), with an interval as small as 1.5 times the beam size, at low temperature (e.g., 4 deg. C). A radiation sensitive protein, bovine hemoglobin, was used to test the method. A standard deviation of less than 5% in the small angle region was observed from a series of nine spectra recorded in 2D mode, in contrast to the intensity variation seen using the conventional stationary technique, which can exceed 100%. Wide-angle x-ray scattering data were collected at a standard macromolecular diffraction station using the same data collection protocol and showed a good signal/noise ratio (better than the reported data on the same protein using a flow cell). The results indicate that this method is an effective approach for obtaining precise measurements of protein solution scattering.

  4. Standard Format for Chromatographic-polarimetric System small samples assessment

    International Nuclear Information System (INIS)

    Naranjo, S.; Fajer, V.; Fonfria, C.; Patinno, R.

    2012-01-01

    The treatment of samples containing optically active substances, evaluated as part of the quality control of raw material entering an industrial process and during the modifications applied to it to reach the desired final composition, is still an unsolved problem for many industries. That is the case in the sugarcane industry. The difficulties are sometimes compounded because the samples to be evaluated are no bigger than one milliliter. Reduced gel beds in G-10 and G-50 chromatographic columns with an inner diameter of 16 mm instead of 25 mm, and bed heights adjustable to requirements by means of sliding stoppers to increase analytical power, were evaluated with glucose and sucrose standards at concentrations from 1 to 10 g/dL, using 1 ml aliquots without undesirable dilutions that could affect either detection or the chromatographic profile. Assays with seaweed extracts gave good results, which are shown. The advantage of determining the concentration of a separated substance from the height of its peak, and the resulting savings in time and reagents, are established. The expanded uncertainty of samples in both systems is compared. Several programs for data acquisition, storage and processing are also presented. (Author)

  5. Correcting for Systematic Bias in Sample Estimates of Population Variances: Why Do We Divide by n-1?

    Science.gov (United States)

    Mittag, Kathleen Cage

    An important topic presented in introductory statistics courses is the estimation of population parameters using samples. Students learn that when estimating population variances using sample data, we always get an underestimate of the population variance if we divide by n rather than n-1. One implication of this correction is that the degree of…
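
    A quick simulation makes the point concrete. This sketch, which assumes nothing beyond standard NumPy, compares the averages of the two variance estimators against the known population variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 100_000
samples = rng.normal(loc=0.0, scale=2.0, size=(reps, n))   # true variance = 4

biased = ((samples - samples.mean(axis=1, keepdims=True)) ** 2).sum(axis=1) / n
unbiased = biased * n / (n - 1)        # Bessel's correction: divide by n-1

print(biased.mean())     # ~3.2 = 4*(n-1)/n, a systematic underestimate
print(unbiased.mean())   # ~4.0, unbiased
```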

  6. Impact of multicollinearity on small sample hydrologic regression models

    Science.gov (United States)

    Kroll, Charles N.; Song, Peter

    2013-06-01

    Often, hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how best to address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R²). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely in model predictions, it is recommended that OLS be employed, since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions, with varying levels of multicollinearity, that are as good as biased regression techniques such as PCR and PLS.
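
    As one concrete ingredient of the comparison above, variance inflation factors are simple to compute directly. A minimal sketch with generic variable names (not the study's data); a common screening rule flags VIF values above about 10:

```python
import numpy as np

def vif(X):
    """VIF_j = 1/(1 - R^2_j), where R^2_j comes from regressing
    column j of X on the remaining columns (with an intercept)."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r2 = 1.0 - (y - Z @ beta).var() / y.var()
        out[j] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(1)
x1 = rng.normal(size=50)
x2 = x1 + 0.1 * rng.normal(size=50)       # nearly collinear with x1
print(vif(np.column_stack([x1, x2])))     # large VIFs flag multicollinearity
```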

  7. Automated Sampling and Extraction of Krypton from Small Air Samples for Kr-85 Measurement Using Atom Trap Trace Analysis

    International Nuclear Information System (INIS)

    Hebel, S.; Hands, J.; Goering, F.; Kirchner, G.; Purtschert, R.

    2015-01-01

    Atom-Trap-Trace-Analysis (ATTA) provides the capability of measuring the Krypton-85 concentration in microlitre amounts of krypton extracted from air samples of about 1 litre. This sample size is sufficiently small to allow for a range of applications, including on-site spot sampling and continuous sampling over periods of several hours. All samples can be easily handled and transported to an off-site laboratory for ATTA measurement, or stored and analyzed on demand. Bayesian sampling methodologies can be applied by blending samples for bulk measurement and performing in-depth analysis as required. Prerequisite for measurement is the extraction of a pure krypton fraction from the sample. This paper introduces an extraction unit able to isolate the krypton in small ambient air samples with high speed, high efficiency and in a fully automated manner using a combination of cryogenic distillation and gas chromatography. Air samples are collected using an automated smart sampler developed in-house to achieve a constant sampling rate over adjustable time periods ranging from 5 minutes to 3 hours per sample. The smart sampler can be deployed in the field and operate on battery for one week to take up to 60 air samples. This high flexibility of sampling and the fast, robust sample preparation are a valuable tool for research and the application of Kr-85 measurements to novel Safeguards procedures. (author)

  8. Predicting Drug-Target Interactions Based on Small Positive Samples.

    Science.gov (United States)

    Hu, Pengwei; Chan, Keith C C; Hu, Yanxing

    2018-01-01

    evaluation of ODT shows that it can be potentially useful. It confirms that predicting potential or missing DTIs based on the known interactions is a promising direction to solve problems related to the use of uncertain and unreliable negative samples and those related to the great demand in computational resources. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  9. Comparing Server Energy Use and Efficiency Using Small Sample Sizes

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Henry C.; Qin, Yong; Price, Phillip N.

    2014-11-01

    This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications: three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a

  10. 76 FR 78182 - Application of the Segregation Rules to Small Shareholders; Correction

    Science.gov (United States)

    2011-12-16

    ... CONTACT: Concerning the proposed regulations, Stephen R. Cleary, (202) 622-7750 (not a toll-free number... "regard to Sec. 1.382-2T(h)(i)(A)) or a first" is corrected to read "regard to Sec. 1.382-2T(h)(2)(i)(A.... Clarification of Sec. 1.382-2T(j)(3)", last line of the paragraph, the language "2T(h)(i)(A)." is corrected...

  11. Use of calibration standards and the correction for sample self-attenuation in gamma-ray nondestructive assay

    International Nuclear Information System (INIS)

    Parker, J.L.

    1984-08-01

    The efficient use of appropriate calibration standards and the correction for the attenuation of the gamma rays within an assay sample by the sample itself are two important and closely related subjects in gamma-ray nondestructive assay. Much research relating to those subjects has been done in the Nuclear Safeguards Research and Development program at the Los Alamos National Laboratory since 1970. This report brings together most of the significant results of that research. Also discussed are the nature of appropriate calibration standards and the necessary conditions on the composition, size, and shape of the samples to allow accurate assays. Procedures for determining the correction for the sample self-attenuation are described at length including both general principles and several specific useful cases. The most useful concept is that knowing the linear attenuation coefficient of the sample (which can usually be determined) and the size and shape of the sample and its position relative to the detector permits the computation of the correction factor for the self-attenuation. A major objective of the report is to explain how the procedures for determining the self-attenuation correction factor can be applied so that calibration standards can be entirely appropriate without being particularly similar, either physically or chemically, to the items to be assayed. This permits minimization of the number of standards required to assay items with a wide range of size, shape, and chemical composition. 17 references, 18 figures, 2 tables
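
    For the simplest useful case covered by this idea — a uniform slab-like sample far from the detector — the correction factor depends only on the product of the linear attenuation coefficient and the sample thickness. A minimal sketch of that textbook far-field case, not the full near-field treatment in the report:

```python
import numpy as np

def self_attenuation_cf(mu, x):
    """Far-field self-attenuation correction factor for a uniform slab:
    CF = mu*x / (1 - exp(-mu*x)). Multiplying the measured count rate by
    CF recovers the rate of an equivalent non-attenuating sample."""
    mux = mu * x
    return mux / -np.expm1(-mux)     # -expm1(-mux) = 1 - exp(-mux)

# example: mu = 0.3 per cm at the assay energy, 2 cm thick sample
print(self_attenuation_cf(0.3, 2.0))   # ~1.33, i.e. a 33% correction
```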

  12. The use of calibration standards and the correction for sample self-attenuation in gamma-ray nondestructive assay

    International Nuclear Information System (INIS)

    Parker, J.L.

    1986-11-01

    The efficient use of appropriate calibration standards and the correction for the attenuation of the gamma rays within an assay sample by the sample itself are two important and closely related subjects in gamma-ray nondestructive assay. Much research relating to those subjects has been done in the Nuclear Safeguards Research and Development program at the Los Alamos National Laboratory since 1970. This report brings together most of the significant results of that research. Also discussed are the nature of appropriate calibration standards and the necessary conditions on the composition, size, and shape of the samples to allow accurate assays. Procedures for determining the correction for the sample self-attenuation are described at length including both general principles and several specific useful cases. The most useful concept is that knowing the linear attenuation coefficient of the sample (which can usually be determined) and the size and shape of the sample and its position relative to the detector permits the computation of the correction factor for the self-attenuation. A major objective of the report is to explain how the procedures for determining the self-attenuation correction factor can be applied so that calibration standards can be entirely appropriate without being particularly similar, either physically or chemically, to the items to be assayed. This permits minimization of the number of standards required to assay items with a wide range of size, shape, and chemical composition

  13. Thermal neutron self-shielding correction factors for large sample instrumental neutron activation analysis using the MCNP code

    International Nuclear Information System (INIS)

    Tzika, F.; Stamatelatos, I.E.

    2004-01-01

    Thermal neutron self-shielding within large samples was studied using the Monte Carlo neutron transport code MCNP. The code enabled a three-dimensional modeling of the actual source and geometry configuration including reactor core, graphite pile and sample. Neutron flux self-shielding correction factors derived for a set of materials of interest for large sample neutron activation analysis are presented and evaluated. Simulations were experimentally verified by measurements performed using activation foils. The results of this study can be applied in order to determine neutron self-shielding factors of unknown samples from the thermal neutron fluxes measured at the surface of the sample

  14. SU-C-304-07: Are Small Field Detector Correction Factors Strongly Dependent On Machine-Specific Characteristics?

    International Nuclear Information System (INIS)

    Mathew, D; Tanny, S; Parsai, E; Sperling, N

    2015-01-01

    Purpose: The current small field dosimetry formalism utilizes quality correction factors to compensate for the difference in detector response relative to dose deposited in water. The correction factors are defined on a machine-specific basis for each beam quality and detector combination. Some research has suggested that the correction factors may only be weakly dependent on machine-to-machine variations, allowing for determination of class-specific correction factors for various accelerator models. This research examines the differences in small field correction factors for three detectors across two Varian Truebeam accelerators to determine the correction factor dependence on machine-specific characteristics. Methods: Output factors were measured on two Varian Truebeam accelerators for equivalently tuned 6 MV and 6 FFF beams. Measurements were obtained using a commercial plastic scintillation detector (PSD), two ion chambers, and a diode detector. Measurements were made at a depth of 10 cm with an SSD of 100 cm for jaw-defined field sizes ranging from 3×3 cm² to 0.6×0.6 cm², normalized to values at 5×5 cm². Correction factors for each field on each machine were calculated as the ratio of the detector response to the PSD response. Percent changes of the correction factors for the chambers are presented relative to the primary machine. Results: The Exradin A26 demonstrates a difference of 9% for 6×6 mm² fields in both the 6FFF and 6MV beams. The A16 chamber demonstrates 5% and 3% differences in 6FFF and 6MV fields, respectively, at the same field size. The Edge diode exhibits less than 1.5% difference across both evaluated energies. Field sizes larger than 1.4×1.4 cm² demonstrated less than 1% difference for all detectors. Conclusion: Preliminary results suggest that class-specific correction may not be appropriate for micro-ionization chambers. For diode systems, the correction factor was substantially similar and may be useful for class-specific reference

  15. SU-C-304-07: Are Small Field Detector Correction Factors Strongly Dependent On Machine-Specific Characteristics?

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, D; Tanny, S; Parsai, E; Sperling, N [University of Toledo Medical Center, Toledo, OH (United States)

    2015-06-15

    Purpose: The current small field dosimetry formalism utilizes quality correction factors to compensate for the difference in detector response relative to dose deposited in water. The correction factors are defined on a machine-specific basis for each beam quality and detector combination. Some research has suggested that the correction factors may only be weakly dependent on machine-to-machine variations, allowing for determinations of class-specific correction factors for various accelerator models. This research examines the differences in small field correction factors for three detectors across two Varian Truebeam accelerators to determine the correction factor dependence on machine-specific characteristics. Methods: Output factors were measured on two Varian Truebeam accelerators for equivalently tuned 6 MV and 6 FFF beams. Measurements were obtained using a commercial plastic scintillation detector (PSD), two ion chambers, and a diode detector. Measurements were made at a depth of 10 cm with an SSD of 100 cm for jaw-defined field sizes ranging from 3×3 cm{sup 2} to 0.6×0.6 cm{sup 2}, normalized to values at 5×5cm{sup 2}. Correction factors for each field on each machine were calculated as the ratio of the detector response to the PSD response. Percent change of correction factors for the chambers are presented relative to the primary machine. Results: The Exradin A26 demonstrates a difference of 9% for 6×6mm{sup 2} fields in both the 6FFF and 6MV beams. The A16 chamber demonstrates a 5%, and 3% difference in 6FFF and 6MV fields at the same field size respectively. The Edge diode exhibits less than 1.5% difference across both evaluated energies. Field sizes larger than 1.4×1.4cm2 demonstrated less than 1% difference for all detectors. Conclusion: Preliminary results suggest that class-specific correction may not be appropriate for micro-ionization chamber. For diode systems, the correction factor was substantially similar and may be useful for class

  16. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    Science.gov (United States)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor-of-2 increase in intensity, within the same divergence limits of ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions, such as extreme pressures associated with small sample sizes.

  17. Estimation after classification using lot quality assurance sampling: corrections for curtailed sampling with application to evaluating polio vaccination campaigns.

    Science.gov (United States)

    Olives, Casey; Valadez, Joseph J; Pagano, Marcello

    2014-03-01

    To assess the bias incurred when curtailment of Lot Quality Assurance Sampling (LQAS) is ignored, to present unbiased estimators, to consider the impact of cluster sampling by simulation, and to apply our method to published polio immunization data from Nigeria. We present estimators of coverage for two kinds of curtailed LQAS strategies: semicurtailed and curtailed. We study the proposed estimators with independent and clustered data, using three field-tested LQAS designs for assessing polio vaccination coverage with samples of size 60 and decision rules of 9, 21 and 33, and compare them to biased maximum likelihood estimators. Lastly, we present estimates of polio vaccination coverage from previously published data in 20 local government authorities (LGAs) from five Nigerian states. Simulations illustrate substantial bias if one ignores the curtailed sampling design; the proposed estimators show no bias. Clustering does not affect the bias of these estimators, though across simulations the standard errors show signs of inflation as clustering increases. Neither sampling strategy nor LQAS design influences the estimates of polio vaccination coverage in the 20 Nigerian LGAs. When coverage is low, semicurtailed LQAS strategies considerably reduce the sample size required to make a decision; curtailed LQAS designs further reduce the sample size when coverage is high. The results presented dispel the misconception that curtailed LQAS data are unsuitable for estimation. These findings augment the utility of LQAS as a tool for monitoring vaccination efforts by demonstrating that unbiased estimation using curtailed designs is not only possible, but that these designs also reduce the sample size. © 2014 John Wiley & Sons Ltd.
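
    To see the kind of bias the simulations address, consider naive estimation after semicurtailed sampling, where sampling stops as soon as the classification decision can no longer change. A self-contained sketch using one of the abstract's designs (n = 60, decision rule 33); the estimator and the true coverage value are generic illustrations:

```python
import numpy as np

rng = np.random.default_rng(42)

def naive_estimate_semicurtailed(p, n=60, d=33, reps=20_000):
    """Mean of successes/observed under semicurtailed sampling, where we
    stop once 'successes > d' is settled either way; the naive estimator
    is biased under this stopping rule."""
    est = np.empty(reps)
    for r in range(reps):
        successes = observed = 0
        while observed < n:
            if successes > d or successes + (n - observed) <= d:
                break                       # decision settled: stop early
            successes += rng.random() < p
            observed += 1
        est[r] = successes / observed
    return est.mean()

print(naive_estimate_semicurtailed(0.6))    # typically differs from 0.6
```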

  18. Regression dilution bias: tools for correction methods and sample size calculation.

    Science.gov (United States)

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
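
    The standard correction in the simple linear regression case divides the observed slope by the reliability ratio, which can be estimated from duplicate measurements in the reliability study. A minimal sketch under those textbook assumptions (all names and data are illustrative, not the article's software tools):

```python
import numpy as np

def corrected_slope(x, y, rep1, rep2):
    """Correct an observed slope for regression dilution.
    x, y: error-prone risk factor and outcome from the main study.
    rep1, rep2: duplicate risk-factor measurements (reliability study)."""
    b_obs = np.cov(x, y)[0, 1] / np.var(x, ddof=1)       # attenuated slope
    var_err = np.var(rep1 - rep2, ddof=1) / 2.0          # within-person error
    lam = 1.0 - var_err / np.var(x, ddof=1)              # reliability ratio
    return b_obs / lam

rng = np.random.default_rng(3)
true_x = rng.normal(size=500)
y = 0.5 * true_x + rng.normal(scale=0.5, size=500)       # true slope 0.5
x = true_x + rng.normal(scale=0.7, size=500)             # noisy measurement
r1 = true_x[:100] + rng.normal(scale=0.7, size=100)      # reliability study
r2 = true_x[:100] + rng.normal(scale=0.7, size=100)
print(corrected_slope(x, y, r1, r2))                     # close to 0.5
```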

  19. SOPPA and CCSD vibrational corrections to NMR indirect spin-spin coupling constants of small hydrocarbons

    DEFF Research Database (Denmark)

    Faber, Rasmus; Sauer, Stephan P. A.

    2015-01-01

    We present zero-point vibrational corrections to the indirect nuclear spin-spin coupling constants in ethyne, ethene, cyclopropene and allene. The calculations have been carried out both at the level of the second order polarization propagator approximation (SOPPA) employing a new implementation ...

  20. SOPPA and CCSD vibrational corrections to NMR indirect spin-spin coupling constants of small hydrocarbons

    Energy Technology Data Exchange (ETDEWEB)

    Faber, Rasmus; Sauer, Stephan P. A. [Department of Chemistry, University of Copenhagen, Universitetsparken 5, DK-2100 Copenhagen Ø (Denmark)

    2015-12-31

    We present zero-point vibrational corrections to the indirect nuclear spin-spin coupling constants in ethyne, ethene, cyclopropene and allene. The calculations have been carried out at the level of the second order polarization propagator approximation (SOPPA), employing a new implementation in the DALTON program; at the density functional theory level with the B3LYP functional, also employing the DALTON program; and at the level of coupled cluster singles and doubles (CCSD) theory, employing the implementation in the CFOUR program. Specialized coupling constant basis sets, aug-cc-pVTZ-J, have been employed in the calculations. We find that on average the SOPPA results, for both the equilibrium geometry values and the zero-point vibrational corrections, are in better agreement with the CCSD results than the corresponding B3LYP results. Furthermore, we observed that the vibrational corrections are on the order of 5 Hz for the one-bond carbon-hydrogen couplings and about 1 Hz or smaller for the other couplings, apart from the one-bond carbon-carbon coupling (11 Hz) and the two-bond carbon-hydrogen coupling (4 Hz) in ethyne. However, the inclusion of zero-point vibrational corrections does not lead to better agreement with experiment for all couplings.

  1. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples

    Science.gov (United States)

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-01

    Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because the spectra may be measured on different instruments and the difference between the instruments must be corrected. Most calibration transfer methods require standard samples to construct the transfer model from the spectra of the same samples measured on two instruments, named the master and slave instrument, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. Consequently, the coefficients of the linear models constructed from the spectra measured on different instruments are similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical applications.
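
    The abstract does not spell out the optimization, so the following is only one plausible reading of a "keep the slave coefficients close in profile to the master coefficients" constraint: a penalized least-squares fit. Everything here (function names, the quadratic penalty, the data layout) is an illustrative assumption, not the authors' LMC algorithm:

```python
import numpy as np

def transfer_coefficients(b_master, X_slave, y_slave, lam=1.0):
    """Fit slave-model coefficients from a few slave spectra while
    penalizing departure from the master coefficients:
        minimize ||X_s b - y_s||^2 + lam * ||b - b_m||^2,
    solved below in closed form."""
    p = b_master.size
    A = X_slave.T @ X_slave + lam * np.eye(p)
    rhs = X_slave.T @ y_slave + lam * b_master
    return np.linalg.solve(A, rhs)
```

    Larger values of lam keep the transferred coefficients closer to the master model, which matters when only a handful of slave spectra are available.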

  2. Reducing overlay sampling for APC-based correction per exposure by replacing measured data with computational prediction

    Science.gov (United States)

    Noyes, Ben F.; Mokaberi, Babak; Oh, Jong Hun; Kim, Hyun Sik; Sung, Jun Ha; Kea, Marc

    2016-03-01

    One of the keys to successful mass production of sub-20nm nodes in the semiconductor industry is the development of an overlay correction strategy that can meet specifications, reduce the number of layers that require dedicated chuck overlay, and minimize measurement time. Three important aspects of this strategy are: correction per exposure (CPE), integrated metrology (IM), and the prioritization of automated correction over manual subrecipes. The first and third aspects are accomplished through an APC system that uses measurements from production lots to generate CPE corrections that are dynamically applied to future lots. The drawback of this method is that production overlay sampling must be extremely high in order to provide the system with enough data to generate CPE. That drawback makes IM particularly difficult because of the throughput impact that can be created on expensive bottleneck photolithography process tools. The goal is to realize the cycle time and feedback benefits of IM coupled with the enhanced overlay correction capability of automated CPE without impacting process tool throughput. This paper will discuss the development of a system that sends measured data with reduced sampling via an optimized layout to the exposure tool's computational modelling platform to predict and create "upsampled" overlay data in a customizable output layout that is compatible with the fab user CPE APC system. The result is dynamic CPE without the burden of extensive measurement time, which leads to increased utilization of IM.

  3. Mechanical characteristics of historic mortars from tests on small-sample non-standard specimens

    Czech Academy of Sciences Publication Activity Database

    Drdácký, Miloš; Slížková, Zuzana

    2008-01-01

    Roč. 17, č. 1 (2008), s. 20-29 ISSN 1407-7353 R&D Projects: GA ČR(CZ) GA103/06/1609 Institutional research plan: CEZ:AV0Z20710524 Keywords : small-sample non-standard testing * lime * historic mortar Subject RIV: AL - Art, Architecture, Cultural Heritage

  4. Effect of tubing length on the dispersion correction of an arterially sampled input function for kinetic modeling in PET.

    Science.gov (United States)

    O'Doherty, Jim; Chilcott, Anna; Dunn, Joel

    2015-11-01

    Arterial sampling with dispersion correction is routinely performed for kinetic analysis of PET studies. With the advent of PET-MRI systems, non-MR-safe instrumentation will need to be kept outside the scan room, which requires the length of the tubing between the patient and the detector to increase, thus worsening the effects of dispersion. We examined the effects of dispersion in idealized radioactive blood studies using various lengths of tubing (1.5, 3, and 4.5 m) and applied a well-known transmission-dispersion model to attempt to correct the resulting traces. A simulation study was also carried out to examine the noise characteristics of the model. The model was applied to patient traces using 1.5 m acquisition tubing and extended to its use at 3 m. Satisfactory dispersion correction of the blood traces was achieved for the 1.5 m line. Predictions based on experimental measurements, numerical simulations and noise analysis of the resulting traces show that corrections of blood data can also be achieved using the 3 m tubing. The effects of dispersion could not be corrected for the 4.5 m line by the selected transmission-dispersion model. On the basis of our setup, correction of dispersion in arterial sampling tubing up to 3 m by the transmission-dispersion model can be performed. The model could not dispersion-correct data acquired using 4.5 m arterial tubing.

  5. Gamma self-shielding correction factors calculation for aqueous bulk sample analysis by PGNAA technique

    International Nuclear Information System (INIS)

    Nasrabadi, M.N.; Mohammadi, A.; Jalali, M.

    2009-01-01

    In this paper, bulk sample prompt gamma neutron activation analysis (BSPGNAA) was applied to aqueous sample analysis using a relative method. For elemental analysis of an unknown bulk sample, the gamma self-shielding coefficient is required. The gamma self-shielding coefficient of unknown samples was estimated by an experimental method and also by MCNP code calculation. The proposed methodology can be used for the determination of the elemental concentration of unknown aqueous samples by BSPGNAA, where knowledge of the gamma self-shielding within the sample volume is required.

  6. A two-phase sampling survey for nonresponse and its paradata to correct nonresponse bias in a health surveillance survey.

    Science.gov (United States)

    Santin, G; Bénézet, L; Geoffroy-Perez, B; Bouyer, J; Guéguen, A

    2017-02-01

    The decline in participation rates in surveys, including epidemiological surveillance surveys, has become a real concern, since it may increase nonresponse bias. The aim of this study is to estimate the contribution of a complementary survey among a subsample of nonrespondents, and the additional contribution of paradata, in correcting for nonresponse bias in an occupational health surveillance survey. In 2010, 10,000 workers were randomly selected and sent a postal questionnaire. Sociodemographic data were available for the whole sample. After collection of the questionnaires, a complementary survey among a random subsample of 500 nonrespondents was performed using a questionnaire administered by an interviewer. Paradata were collected for the complete subsample of the complementary survey. Nonresponse bias in the initial sample and in the combined samples was assessed using variables from administrative databases that were available for the whole sample and not subject to differential measurement errors. Corrected prevalences were estimated by a reweighting technique, first using the initial survey alone and then the initial and complementary surveys combined, under several assumptions regarding the missing-data process. Results were compared by computing relative errors. The response rates of the initial and complementary surveys were 23.6% and 62.6%, respectively. For both the initial and the combined surveys, the relative errors decreased after correction for nonresponse on sociodemographic variables. For the combined surveys without paradata, relative errors decreased compared with the initial survey. The contribution of the paradata was weak. When a complex descriptive survey has a low response rate, a short complementary survey among nonrespondents, with a protocol that aims to maximize the response rate, is useful. The contribution of sociodemographic variables in correcting for nonresponse bias is important whereas the additional contribution of paradata in
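
    The reweighting referred to here can take a simple cell-based form: each respondent is weighted by the inverse of the response rate in their sociodemographic cell, computable because those variables are known for the whole sample. A minimal sketch under that assumption (the cell coding is hypothetical):

```python
import numpy as np

def nonresponse_weights(cells_sample, cells_respondents):
    """Weight each respondent by (# sampled in cell) / (# responded in cell),
    i.e. the inverse of the cell-specific response rate."""
    sampled = {c: (cells_sample == c).sum() for c in np.unique(cells_sample)}
    responded = {c: (cells_respondents == c).sum()
                 for c in np.unique(cells_respondents)}
    return np.array([sampled[c] / responded[c] for c in cells_respondents])

# cells could encode e.g. sex x age group, known for the whole sample
sample_cells = np.array(["M<40", "M<40", "F<40", "F40+", "M40+", "F<40"])
resp_cells = np.array(["M<40", "F<40", "F40+"])     # respondents only
w = nonresponse_weights(sample_cells, resp_cells)
# a corrected prevalence is then np.average(y_respondents, weights=w)
```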

  7. Absorption corrections for x-ray fluorescence analysis of environmental samples

    International Nuclear Information System (INIS)

    Bazan, F.; Bonner, N.A.

    1975-01-01

    The discovery of a very simple and useful relationship between the absorption coefficient of a particular element and the ratio of incoherent to coherent scattering by the sample containing the element is discussed. By measuring the absorption coefficients for a few elements in a few samples, absorption coefficients for many elements in an entire set of similar samples can be obtained. (auth)

  8. Absorption corrections for x-ray fluorescence analysis of environmental samples

    International Nuclear Information System (INIS)

    Bazan, F.; Bonner, N.A.

    1976-01-01

    The discovery of a very simple and useful relationship between the absorption coefficient of a particular element and the ratio of incoherent to coherent scattering by the sample containing the element is discussed. By measuring the absorption coefficients for a few elements in a few samples, absorption coefficients for many elements in an entire set of similar samples can be obtained

  9. Direct analysis of 210Pb in sediment samples: Self-absorption corrections

    International Nuclear Information System (INIS)

    Cutshall, N.H.; Larsen, I.L.; Olsen, C.R.

    1983-01-01

    A procedure for the direct γ-ray instrumental analysis of 210 Pb in sediment samples is presented. The problem of the dependence of self-absorption on sample composition is solved by making a direct transmission measurement on each sample. The procedure has been verified by intercalibrations and other tests. (orig.)

  10. Small incision corneal refractive surgery using the small incision lenticule extraction (SMILE) procedure for the correction of myopia and myopic astigmatism: results of a 6 month prospective study.

    Science.gov (United States)

    Sekundo, Walter; Kunert, Kathleen S; Blum, Marcus

    2011-03-01

    This 6 month prospective multi-centre study evaluated the feasibility of performing myopic femtosecond lenticule extraction (FLEx) through a small incision using the small incision lenticule extraction (SMILE) procedure. Design: Prospective, non-randomised clinical trial. Participants: Ninety-one eyes of 48 patients with myopia with and without astigmatism completed the final 6 month follow-up. The patients' mean age was 35.3 years. Their preoperative mean spherical equivalent (SE) was −4.75±1.56 D. Methods: A refractive lenticule of intrastromal corneal tissue was cut utilising a prototype of the Carl Zeiss Meditec AG VisuMax femtosecond laser system. Simultaneously, two opposite small 'pocket' incisions were created by the laser system. Thereafter, the lenticule was manually dissected with a spatula and removed through one of the incisions using modified McPherson forceps. Main outcome measures: Uncorrected visual acuity (UCVA) and best spectacle corrected visual acuity (BSCVA) after 6 months, objective and manifest refraction, as well as slit-lamp examination, side effects and a questionnaire. Results: Six months postoperatively the mean SE was −0.01±0.49 D. Most treated eyes (95.6%) were within ±1.0 D, and 80.2% were within ±0.5 D of the intended correction. Of the eyes treated, 83.5% had an UCVA of 1.0 (20/20) or better; 53% remained unchanged, 32.3% gained one line, 3.3% gained two lines of BSCVA, 8.8% lost one line and 1.1% lost ≥2 lines of BSCVA. When answering a standardised questionnaire, 93.3% of patients were satisfied with the results obtained and would undergo the procedure again. Conclusion: SMILE is a promising new flapless minimally invasive refractive procedure to correct myopia.

  11. A hybrid solution using computational prediction and measured data to accurately determine process corrections with reduced overlay sampling

    Science.gov (United States)

    Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen

    2017-03-01

    Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of said system as new lots have to wait until the previous lot is measured. One solution is using a less dense overlay sampling scheme and employing computationally up-sampled data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system shown in Fig. 1 that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.

  12. Small-vessel Survey and Auction Sampling to Estimate Growth and Maturity of Eteline Snappers

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Small-vessel Survey and Auction Sampling to Estimate Growth and Maturity of Eteline Snappers and Improve Data-Limited Stock Assessments. This biosampling project...

  13. Interval estimation methods of the mean in small sample situation and the results' comparison

    International Nuclear Information System (INIS)

    Wu Changli; Guo Chunying; Jiang Meng; Lin Yuangen

    2009-01-01

    Methods for interval estimation of the sample mean, namely the classical method, the bootstrap method, the Bayesian bootstrap method, the jackknife method and the spread method of the empirical characteristic distribution function, are described. Numerical calculations of the mean's interval are carried out for sample sizes of 4, 5 and 6. The results indicate that the bootstrap method and the Bayesian bootstrap method are much more appropriate than the others in small sample situations. (authors)
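
    For context, the classical interval and a bootstrap percentile interval can both be written in a few lines. A sketch for a sample of size 5 with generic data, assuming only NumPy and SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = np.array([9.8, 10.4, 10.1, 9.6, 10.3])    # small sample, n = 5
n = x.size

# classical interval: xbar +/- t_{0.975, n-1} * s / sqrt(n)
half = stats.t.ppf(0.975, n - 1) * x.std(ddof=1) / np.sqrt(n)
print(x.mean() - half, x.mean() + half)

# bootstrap percentile interval from 10,000 resamples
boot = rng.choice(x, size=(10_000, n), replace=True).mean(axis=1)
print(np.percentile(boot, [2.5, 97.5]))
```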

  14. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Dasari, K.B.; Acharya, R.

    2014-01-01

    Total uncertainty budget evaluation on the determined concentration value is important under a quality assurance programme. Concentration calculation in NAA is carried out by the relative method or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for the analysis of small and large samples of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples has been evaluated and compared. (author)

  15. Rules of attraction: The role of bait in small mammal sampling at ...

    African Journals Online (AJOL)

    Baits or lures are commonly used for surveying small mammal communities, not only because they attract large numbers of these animals, but also because they provide sustenance for trapped individuals. In this study we used Sherman live traps with five bait treatments to sample small mammal populations at three ...

  16. Estimating sample size for a small-quadrat method of botanical ...

    African Journals Online (AJOL)

    Reports the results of a study conducted to determine an appropriate sample size for a small-quadrat method of botanical survey for application in the Mixed Bushveld of South Africa. Species density and grass density were measured using a small-quadrat method in eight plant communities in the Nylsvley Nature Reserve.

  17. Authorship Correction: Sampling Key Populations for HIV Surveillance: Results From Eight Cross-Sectional Studies Using Respondent-Driven Sampling and Venue-Based Snowball Sampling.

    Science.gov (United States)

    Rao, Amrita; Stahlman, Shauna; Hargreaves, James; Weir, Sharon; Edwards, Jessie; Rice, Brian; Kochelani, Duncan; Mavimbela, Mpumelelo; Baral, Stefan

    2018-01-15

    [This corrects the article DOI: 10.2196/publichealth.8116.]. ©Amrita Rao, Shauna Stahlman, James Hargreaves, Sharon Weir, Jessie Edwards, Brian Rice, Duncan Kochelani, Mpumelelo Mavimbela, Stefan Baral. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 15.01.2018.

  18. Integrating sphere based reflectance measurements for small-area semiconductor samples

    Science.gov (United States)

    Saylan, S.; Howells, C. T.; Dahlem, M. S.

    2018-05-01

    This article describes a method that enables reflectance spectroscopy of small semiconductor samples using an integrating sphere, without the use of additional optical elements. We employed an inexpensive sample holder to measure the reflectance of different samples through 2-, 3-, and 4.5-mm-diameter apertures and applied a mathematical formulation to remove from the measured spectra the bias caused by illumination of the holder. Using the proposed method, the reflectance of samples fabricated using expensive or rare materials and/or low-throughput processes can be measured. It can also be used to infer the internal quantum efficiency of small-area, research-level solar cells. Moreover, small samples that reflect light at large angles or produce scattering may also be measured reliably, since the integrating sphere is insensitive to directionality.

  19. THE FLATULENCE SYMPTOM IN SMALL CHILDREN: CAUSES AND WAYS OF CORRECTION

    Directory of Open Access Journals (Sweden)

    A. N. Surkov

    2013-01-01

    Gastrointestinal tract malfunctions, food allergy, intestinal microbiocenosis disorders, disaccharidase insufficiency, celiac disease and several other causes lead to increased gas formation, overdistension of intestinal loops and abdominal pain in children under one year of age. The crucial task in eliminating flatulence is correcting the causes of its occurrence. Frequent episodes of intestinal spasm in infants reduce the quality of life of them and their families in general and are also associated with subsequent impairment of the child's physical and mental development. A simethicone-based suspension (an antifoaming agent) helps to cope with the issue; it has carminative properties, which reduce the amount of gas in the intestinal lumen and thus terminate the pain symptoms.

  20. Correction for the interference of strontium in the determination of uranium in geologic samples by X-ray fluorescence

    International Nuclear Information System (INIS)

    Roca, M.; Bayon, A.

    1981-01-01

    A suitable empirical algorithm for correcting the spectral interference of the SrKα line on the ULα line has been derived. It works successfully for SrO concentrations up to 8%, with a minimum detectable limit of 20 ppm U3O8. The X-ray spectrometry procedure also allows the determination of the SrO content of the samples. A program in BASIC for data reduction has been written. (Author) 3 refs

  1. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    Directory of Open Access Journals (Sweden)

    Elias Chaibub Neto

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts, instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was considerably faster for small to moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower due to the increased time spent generating weight matrices via multinomial sampling.
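
    The paper's implementation targets R, but the multinomial-weighting idea translates directly to any matrix-oriented language. A NumPy adaptation (not the authors' code) for Pearson's correlation, where every bootstrap moment is a single matrix-vector product:

```python
import numpy as np

def boot_pearson(x, y, B=10_000, seed=0):
    """Vectorized bootstrap of Pearson's r: each row of W holds
    multinomial resampling weights summing to 1, so all B replications
    come from a few matrix multiplications."""
    rng = np.random.default_rng(seed)
    n = x.size
    W = rng.multinomial(n, np.full(n, 1.0 / n), size=B) / n   # B x n
    mx, my = W @ x, W @ y                   # weighted first moments
    vx = W @ (x * x) - mx**2                # weighted second moments
    vy = W @ (y * y) - my**2
    cxy = W @ (x * y) - mx * my
    return cxy / np.sqrt(vx * vy)           # B bootstrap replications of r

rng = np.random.default_rng(1)
x = rng.normal(size=30)
y = x + rng.normal(size=30)
print(np.percentile(boot_pearson(x, y), [2.5, 97.5]))   # percentile CI for r
```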

  2. Pharmacological Correction of Stress-Induced Gastric Ulceration by Novel Small-Molecule Agents with Antioxidant Profile

    Directory of Open Access Journals (Sweden)

    Konstantin V. Kudryavtsev

    2014-01-01

    This study was designed to identify novel small-molecule agents influencing the pathogenesis of stress-induced gastric lesions. To achieve this goal, four novel organic compounds containing structural fragments with known antioxidant activity were synthesized, characterized by physicochemical methods, and evaluated in vivo under water-immersion restraint conditions. The levels of lipid peroxidation products and the activities of antioxidative system enzymes were measured in gastric mucosa and correlated with the observed gastroprotective activity of the active compounds. Prophylactic single-dose 1 mg/kg treatment with (2-hydroxyphenylthio)acetyl derivatives of L-lysine and L-proline reduces stress-induced stomach ulceration in rats by up to 86%. The discovered small-molecule antiulcer agents modulate the activities of gastric mucosa superoxide dismutase, catalase, and xanthine oxidase in concerted directions. The gastroprotective effect of the (2-hydroxyphenylthio)acetyl derivatives of L-lysine and L-proline depends at least partially on the correction of the oxidative balance of the gastric mucosa.

  3. Can m_t² ≫ m_b² arise from small corrections in four-family models

    International Nuclear Information System (INIS)

    Mendel, R.R.; Margolis, B.; Therrien, E.; Valin, P.

    1989-01-01

    This paper proposes a general dynamical scheme capable of explaining naturally the main properties of the observed spectrum, namely the strong inter-family mass hierarchies and the mixing pattern. The authors illustrate these properties in the three-family case with a simple toy model. There is an indication that large values of m_t may be required in order to obtain |V_ub| ≪ |V_cb|; the fact that m_t² ≫ m_b² could be due to small corrections in a four-family model where m_t′ ∼ m_b′. The authors point out possible natural explanations for the small mass of the e, μ and τ neutrinos in the three- and four-family cases

  4. Small Sample Properties of the Wilcoxon Signed Rank Test with Discontinuous and Dependent Observations

    OpenAIRE

    Nadine Chlass; Jens J. Krueger

    2007-01-01

    This Monte Carlo study investigates the sensitivity of the Wilcoxon signed rank test to certain assumption violations in small samples. Emphasis is put on within-sample dependence, between-sample dependence, and the presence of ties. Our results show that these assumption violations induce severe size distortions and entail power losses. Surprisingly, the consequences vary substantially with other properties the data may display. The results provided are particularly relevant for experimental set...
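
    A size study of this kind is straightforward to reproduce in outline. The sketch below, which uses SciPy's wilcoxon with generic settings rather than the study's exact design, estimates the empirical rejection rate at a nominal 5% level for heavily tied data:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n, reps, alpha = 15, 5_000, 0.05

rej = trials = 0
for _ in range(reps):
    # integer-valued data produce many ties; H0 (symmetry about 0) is true
    x = rng.integers(-2, 3, size=n)
    x = x[x != 0]                  # drop exact zeros, as the test itself does
    if x.size < 6:
        continue
    trials += 1
    rej += wilcoxon(x).pvalue < alpha

print(rej / trials)    # empirical size to compare with the nominal 5%
```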

  5. Optimum measuring net for correcting mineralizing heterogeneity effect in XRF sampling

    International Nuclear Information System (INIS)

    Zhou Sichun; Zhao Youqing; Zhang Yuhuan

    2000-01-01

    The mineralizing heterogeneity effect in XRF sampling was investigated with the theory of mathematical statistics. A method called the 'Optimum Measuring Net' has been developed. Theoretical estimation and experimental results show that the mineralizing heterogeneity effect can be reduced to a minimum with this method

  6. Self-absorption corrections for gamma ray spectral measurements of 210Pb in environmental samples

    International Nuclear Information System (INIS)

    Miller, K.M.

    1987-01-01

    Theoretical considerations and experimental data are used to demonstrate the basic behaviour of the self-absorption effect of a sample matrix in gamma ray spectrometry, particularly as it relates to the analysis of 210 Pb in environmental media. The results indicate that it may not be appropriate to apply the commonly used self-absorption function in all cases. (orig.)

  7. Gender Wage Gap : A Semi-Parametric Approach With Sample Selection Correction

    NARCIS (Netherlands)

    Picchio, M.; Mussida, C.

    2010-01-01

    Sizeable gender differences in employment rates are observed in many countries. Sample selection into the workforce might therefore be a relevant issue when estimating gender wage gaps. This paper proposes a new semi-parametric estimator of densities in the presence of covariates which incorporates

  8. Correction of sampling bias in a cross-sectional study of post-surgical complications.

    Science.gov (United States)

    Fluss, Ronen; Mandel, Micha; Freedman, Laurence S; Weiss, Inbal Salz; Zohar, Anat Ekka; Haklai, Ziona; Gordon, Ethel-Sherry; Simchen, Elisheva

    2013-06-30

    Cross-sectional designs are often used to monitor the proportion of infections and other post-surgical complications acquired in hospitals. However, conventional methods for estimating incidence proportions when applied to cross-sectional data may provide estimators that are highly biased, as cross-sectional designs tend to include a high proportion of patients with prolonged hospitalization. One common solution is to use sampling weights in the analysis, which adjust for the sampling bias inherent in a cross-sectional design. The current paper describes in detail a method to build weights for a national survey of post-surgical complications conducted in Israel. We use the weights to estimate the probability of surgical site infections following colon resection, and validate the results of the weighted analysis by comparing them with those obtained from a parallel study with a historically prospective design. Copyright © 2012 John Wiley & Sons, Ltd.
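
    The core idea, inclusion probability proportional to length of stay and weights that invert it, can be shown with a toy simulation; the survey's actual weight construction is more elaborate, and every variable below is an assumption for illustration.

```python
# A toy simulation (all variables are illustrative assumptions, not the
# survey's data): cross-sectional inclusion probability is proportional
# to length of stay, so weighting by its inverse removes the bias.
import numpy as np

rng = np.random.default_rng(1)

n = 50_000
infected = rng.random(n) < 0.10                       # true incidence 10%
los = rng.gamma(2.0, np.where(infected, 10.0, 4.0))   # stay longer if infected

p_incl = los / los.sum()                              # length-biased design
idx = rng.choice(n, size=2_000, replace=False, p=p_incl)

naive = infected[idx].mean()                          # biased upward
weighted = np.average(infected[idx], weights=1.0 / los[idx])

print(f"true proportion   {infected.mean():.3f}")
print(f"naive estimate    {naive:.3f}")
print(f"weighted estimate {weighted:.3f}")
```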

  9. Absorption and enhancement corrections using XRF analysis of some chemical samples

    International Nuclear Information System (INIS)

    Falih, Arwa Gaddal

    1996-06-01

    In this work, samples containing Cr, Fe and Ni salts in varying ratios were prepared so as to represent approximately the concentrations of these elements in naturally occurring ore samples. These samples were then analyzed by an EDXRF spectrometer system, and the inter-element effects (absorption and enhancement) were evaluated by means of two methods: using the AXIL-QXAS software to calculate the effects, and using the emission-transmission method to determine the same effects experimentally. The results obtained were compared and a discrepancy in the absorption results was observed. The discrepancy was attributed to the fact that the absorption in the two methods was calculated in different manners, i.e. in the emission-transmission method the absorption factor was calculated by adding different absorption terms according to what is known as the additive law, whereas in the software it was calculated from the scattered-peaks method, which does not obey this law. It was concluded that the program should be modified by incorporating the emission-transmission method for calculating the absorption. Quality assurance of the data was performed through the analysis of standard alloys obtained from the International Atomic Energy Agency (IAEA). (Author)

  10. Motion correction for passive radiation imaging of small vessels in ship-to-ship inspections

    Energy Technology Data Exchange (ETDEWEB)

    Ziock, K.P., E-mail: ziockk@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Boehnen, C.B.; Ernst, J.M.; Fabris, L. [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Hayward, J.P. [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Department of Nuclear Engineering, University of Tennessee, Knoxville, TN (United States); Karnowski, T.P.; Paquit, V.C.; Patlolla, D.R. [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Trombino, D.G. [Lawrence Livermore National Laboratory, Livermore, CA (United States)

    2016-01-01

    Passive radiation detection remains one of the most acceptable means of ascertaining the presence of illicit nuclear materials. In maritime applications it is most effective against small to moderately sized vessels, where attenuation in the target vessel is of less concern. Unfortunately, imaging methods that can remove source confusion, localize a source, and avoid other systematic detection issues cannot be easily applied in ship-to-ship inspections because relative motion of the vessels blurs the results over many pixels, significantly reducing system sensitivity. This is particularly true for the smaller watercraft, where passive inspections are most valuable. We have developed a combined gamma-ray, stereo visible-light imaging system that addresses this problem. Data from the stereo imager are used to track the relative location and orientation of the target vessel in the field of view of a coded-aperture gamma-ray imager. Using this information, short-exposure gamma-ray images are projected onto the target vessel using simple tomographic back-projection techniques, revealing the location of any sources within the target. The complex autonomous tracking and image reconstruction system runs in real time on a 48-core workstation that deploys with the system.

  11. Estimation of reference intervals from small samples: an example using canine plasma creatinine.

    Science.gov (United States)

    Geffré, A; Braun, J P; Trumel, C; Concordet, D

    2009-12-01

    According to international recommendations, reference intervals should be determined from at least 120 reference individuals, which is often impossible to achieve in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated. A comparison of reference intervals estimated by different methods could be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and from large subsets of these data, with estimates obtained from small samples selected randomly. Twenty sets each of 120 and 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were the minimum and maximum, mean +/- 2 SD of native and Box-Cox-transformed values, the 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The closest estimates to the 1439-result reference interval for 27-result subsamples were obtained by both parametric and robust methods after Box-Cox transformation, but were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits obtained by different methods be compared.
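
    A minimal sketch of three of the compared estimates (nonparametric percentiles, mean +/- 2 SD on native values, and mean +/- 2 SD after Box-Cox transformation), run on a simulated skewed sample of 27 values rather than the study's creatinine data.

```python
# A minimal sketch of three of the compared estimates on a simulated
# skewed sample of 27 values (not the study's creatinine data).
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(2)
creat = rng.lognormal(mean=4.6, sigma=0.25, size=27)   # illustrative values

# Nonparametric percentiles (the reference method, intended for n >= 120)
lo_np, hi_np = np.percentile(creat, [2.5, 97.5])

# Parametric limits on the native scale
m, s = creat.mean(), creat.std(ddof=1)
lo_nat, hi_nat = m - 2 * s, m + 2 * s

# Parametric limits after Box-Cox transformation, back-transformed
z, lam = stats.boxcox(creat)
zm, zs = z.mean(), z.std(ddof=1)
lo_bc, hi_bc = inv_boxcox(zm - 2 * zs, lam), inv_boxcox(zm + 2 * zs, lam)

print(f"nonparametric   [{lo_np:6.1f}, {hi_np:6.1f}]")
print(f"native +/-2SD   [{lo_nat:6.1f}, {hi_nat:6.1f}]")
print(f"Box-Cox +/-2SD  [{lo_bc:6.1f}, {hi_bc:6.1f}]")
```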

  12. Method to make accurate concentration and isotopic measurements for small gas samples

    Science.gov (United States)

    Palmer, M. R.; Wahl, E.; Cunningham, K. L.

    2013-12-01

    Carbon isotopic ratio measurements of CO2 and CH4 provide valuable insight into carbon cycle processes. However, many of these studies, like soil gas, soil flux, and water headspace experiments, provide very small gas sample volumes, too small for direct measurement by current constant-flow cavity ring-down spectroscopy (CRDS) isotopic analyzers. Previously, we addressed this issue by developing a sample introduction module which enabled the isotopic ratio measurement of samples of 40 ml or smaller. However, the system, called the Small Sample Isotope Module (SSIM), does dilute the sample during delivery with an inert carrier gas, which causes a ~5% reduction in concentration. The isotopic ratio measurements are not affected by this small dilution, but researchers are naturally interested in accurate concentration measurements. We present the accuracy and precision of a new method of using this delivery module, which we call 'double injection': two portions of the 40 ml sample (20 ml each) are introduced to the analyzer; the first injection flushes out the diluting gas and the second injection is measured. The accuracy of this new method is demonstrated by comparing the concentration and isotopic ratio measurements for a gas sampled directly and the same gas measured through the SSIM. The data show that the CO2 concentration measurements were the same within instrument precision. The isotopic ratio precision (1σ) of repeated measurements was 0.16 permil for CO2 and 1.15 permil for CH4 at ambient concentrations. This new method provides a significant enhancement in the information provided by small samples.

  13. Accelerator mass spectrometry of ultra-small samples with applications in the biosciences

    International Nuclear Information System (INIS)

    Salehpour, Mehran; Håkansson, Karl; Possnert, Göran

    2013-01-01

    An overview is presented covering the biological accelerator mass spectrometry activities at Uppsala University. The research utilizes the Uppsala University Tandem laboratory facilities, including a 5 MV Pelletron tandem accelerator and two stable isotope ratio mass spectrometers. In addition, a dedicated sample preparation laboratory for biological samples with natural activity is in use, as well as another laboratory specifically for 14 C-labeled samples. A variety of ongoing projects are described and presented. Examples are: (1) Ultra-small sample AMS. We routinely analyze samples with masses in the 5–10 μg C range. Data is presented regarding the sample preparation method. (2) Bomb peak biological dating of ultra-small samples. A long-term project is presented in which purified and cell-specific DNA from various parts of the human body, including the heart and the brain, is analyzed with the aim of extracting the regeneration rates of the various human cells. (3) Biological dating of various human biopsies, including atherosclerosis-related plaques, is presented. The average build-up time of the surgically removed human carotid plaques has been measured and correlated with various data, including the level of insulin in the blood. (4) In addition to standard microdosing-type measurements using small pharmaceutical drugs, pre-clinical pharmacokinetic data from a macromolecular drug candidate are discussed.

  14. Accelerator mass spectrometry of ultra-small samples with applications in the biosciences

    Energy Technology Data Exchange (ETDEWEB)

    Salehpour, Mehran, E-mail: mehran.salehpour@physics.uu.se [Department of Physics and Astronomy, Ion Physics, PO Box 516, SE-751 20 Uppsala (Sweden); Hakansson, Karl; Possnert, Goeran [Department of Physics and Astronomy, Ion Physics, PO Box 516, SE-751 20 Uppsala (Sweden)

    2013-01-15

    An overview is presented covering the biological accelerator mass spectrometry activities at Uppsala University. The research utilizes the Uppsala University Tandem laboratory facilities, including a 5 MV Pelletron tandem accelerator and two stable isotope ratio mass spectrometers. In addition, a dedicated sample preparation laboratory for biological samples with natural activity is in use, as well as another laboratory specifically for {sup 14}C-labeled samples. A variety of ongoing projects are described and presented. Examples are: (1) Ultra-small sample AMS. We routinely analyze samples with masses in the 5-10 {mu}g C range. Data is presented regarding the sample preparation method. (2) Bomb peak biological dating of ultra-small samples. A long-term project is presented in which purified and cell-specific DNA from various parts of the human body, including the heart and the brain, is analyzed with the aim of extracting the regeneration rates of the various human cells. (3) Biological dating of various human biopsies, including atherosclerosis-related plaques, is presented. The average build-up time of the surgically removed human carotid plaques has been measured and correlated with various data, including the level of insulin in the blood. (4) In addition to standard microdosing-type measurements using small pharmaceutical drugs, pre-clinical pharmacokinetic data from a macromolecular drug candidate are discussed.

  15. Application of bias correction methods to improve U3Si2 sample preparation for quantitative analysis by WDXRF

    International Nuclear Information System (INIS)

    Scapin, Marcos A.; Guilhen, Sabine N.; Azevedo, Luciana C. de; Cotrim, Marycel E.B.; Pires, Maria Ap. F.

    2017-01-01

    The determination of silicon (Si), total uranium (U) and impurities in uranium-silicide (U 3 Si 2 ) samples by the wavelength dispersive X-ray fluorescence technique (WDXRF) has already been validated and is currently implemented at IPEN's X-Ray Fluorescence Laboratory (IPEN-CNEN/SP) in São Paulo, Brazil. Sample preparation requires the use of approximately 3 g of H 3 BO 3 as sample holder and 1.8 g of U 3 Si 2 . However, because boron is a neutron absorber, this procedure precludes recovery of the U 3 Si 2 sample, which, considering routine analysis, may in time account for a significant amount of unusable uranium waste. An estimated average of 15 samples per month is expected to be analyzed by WDXRF, resulting in approx. 320 g of U 3 Si 2 that would not return to the nuclear fuel cycle. This not only results in production losses, but creates another problem: radioactive waste management. The purpose of this paper is to present the mathematical models that may be applied for the correction of systematic errors when the H 3 BO 3 sample holder is substituted by cellulose acetate {[C 6 H 7 O 2 (OH) 3-m (OOCCH 3 )m], m = 0∼3}, thus enabling recovery of the U 3 Si 2 sample. The results demonstrate that the adopted mathematical model is statistically satisfactory, allowing optimization of the procedure. (author)

  16. Minerals sampling: sensibility analysis and correction factors for Pierre Gy's equation

    International Nuclear Information System (INIS)

    Vallebuona, G.; Niedbalski, F.

    2005-01-01

    Pierre Gy's equation is widely used in ore sampling. This equation is based on four parameters: the shape factor, the size distribution factor, the mineralogical factor and the liberation factor. The usual practice is to consider fixed values for the shape and size distribution factors. This practice does not represent several important ores well. The mineralogical factor considers only one species of interest and the gangue, leaving out cases such as polymetallic ores where there is more than one species of interest. A sensitivity analysis of the factors in Gy's equation was carried out, and a procedure to determine specific values for them was developed and is presented in this work. Mean ore characteristics associated with unreliable use of the current procedure were determined. Finally, for a case study, the effects of using each alternative were evaluated. (Author) 4 refs
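
    For readers unfamiliar with it, Gy's formula for the relative variance of the fundamental sampling error can be sketched as follows; the fixed defaults f = 0.5 and g = 0.25 are exactly the conventional values the paper argues against using blindly, and all numbers are illustrative.

```python
# A minimal sketch of Gy's fundamental sampling error formula as commonly
# written. The fixed defaults f = 0.5 and g = 0.25 are the conventional
# values the paper questions; all numbers are illustrative.
def gy_relative_variance(d_cm, m_sample_g, m_lot_g,
                         c=100.0,  # mineralogical factor, g/cm^3
                         f=0.5,    # shape factor (near-spherical grains)
                         g=0.25,   # size distribution factor
                         l=1.0):   # liberation factor, 0..1
    """s^2 = c*f*g*l*d^3 * (1/Ms - 1/ML), d = top particle size in cm."""
    return c * f * g * l * d_cm**3 * (1.0 / m_sample_g - 1.0 / m_lot_g)

s2 = gy_relative_variance(d_cm=0.1, m_sample_g=500.0, m_lot_g=50_000.0)
print(f"relative standard deviation of the fundamental error: {s2**0.5:.2%}")
```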

  17. Split Hopkinson Resonant Bar Test for Sonic-Frequency Acoustic Velocity and Attenuation Measurements of Small, Isotropic Geologic Samples

    Energy Technology Data Exchange (ETDEWEB)

    Nakagawa, S.

    2011-04-01

    Mechanical properties (seismic velocities and attenuation) of geological materials are often frequency dependent, which necessitates measuring the properties at frequencies relevant to the problem at hand. Conventional acoustic resonant bar tests allow measuring the seismic properties of rocks and sediments at sonic frequencies (several kilohertz) that are close to the frequencies employed for geophysical exploration of oil and gas resources. However, the tests require a long, slender sample, which is often difficult to obtain from the deep subsurface or from weak and fractured geological formations. In this paper, an alternative measurement technique to conventional resonant bar tests is presented. This technique uses only a small, jacketed rock or sediment core sample placed between a pair of long metal extension bars with an attached seismic source and receiver - the same geometry as the split Hopkinson pressure bar test for large-strain, dynamic impact experiments. Because of the length and mass added to the sample, the resonance frequency of the entire system can be lowered significantly compared to the sample alone. The experiment can be conducted under elevated confining pressures up to tens of MPa and temperatures above 100 °C, and concurrently with X-ray CT imaging. The described Split Hopkinson Resonant Bar (SHRB) test is applied in two steps. First, extension- and torsion-mode resonance frequencies and attenuation of the entire system are measured. Next, numerical inversions for the complex Young's and shear moduli of the sample are performed. One particularly important step is the correction of the inverted Young's moduli for the effect of the sample-rod interfaces. Examples of the application are given for homogeneous, isotropic polymer samples and a natural rock sample.

  18. Overestimation of test performance by ROC analysis: Effect of small sample size

    International Nuclear Information System (INIS)

    Seeley, G.W.; Borgstrom, M.C.; Patton, D.D.; Myers, K.J.; Barrett, H.H.

    1984-01-01

    New imaging systems are often observer-rated using ROC techniques. For practical reasons the number of different images, or sample size (SS), is kept small. Any systematic bias due to small SS would bias system evaluation. The authors set out to determine whether the area under the ROC curve (AUC) would be systematically biased by small SS. Monte Carlo techniques were used to simulate observer performance in distinguishing signal (SN) from noise (N) on a 6-point scale; P(SN) = P(N) = .5. Four sample sizes (15, 25, 50 and 100 each of SN and N), three ROC slopes (0.8, 1.0 and 1.25), and three intercepts (0.8, 1.0 and 1.25) were considered. In each of the 36 combinations of SS, slope and intercept, 2000 runs were simulated. Results showed a systematic bias: the observed AUC exceeded the expected AUC in every one of the 36 combinations, with the smallest sample sizes having the largest bias. This suggests that evaluations of imaging systems using ROC curves based on small sample sizes systematically overestimate system performance. The effect is consistent but subtle (maximum 10% of the AUC standard deviation), and is probably masked by the s.d. in most practical settings. Although there is a statistically significant effect (F = 33.34, P < 0.0001) due to sample size, none was found for either the ROC curve slope or intercept. Overestimation of test performance with small SS seems to be an inherent characteristic of the ROC technique that has not previously been described.
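
    An illustrative sketch of the simulation setup: binormal evidence binned onto a 6-point rating scale, with AUC estimated per run. The bias reported above concerns AUC obtained from fitted ROC curves; the empirical Mann-Whitney estimator below only illustrates the data-generating setup, and all parameter values are assumptions.

```python
# An illustrative sketch of the simulation setup: binormal evidence binned
# onto a 6-point rating scale, AUC estimated per run. The reported bias
# concerns AUC from fitted ROC curves; the Mann-Whitney estimator below
# only illustrates the setup.
import numpy as np

rng = np.random.default_rng(3)
cuts = np.array([-1.5, -0.5, 0.5, 1.5, 2.5])   # 6 rating categories

def empirical_auc(sn, nn):
    """Mann-Whitney AUC estimate with ties counted as half-wins."""
    wins = (sn[:, None] > nn[None, :]).sum()
    ties = (sn[:, None] == nn[None, :]).sum()
    return (wins + 0.5 * ties) / (len(sn) * len(nn))

def mean_auc(ss, d_prime=1.0, reps=2000):
    aucs = [empirical_auc(np.digitize(rng.normal(d_prime, 1.0, ss), cuts),
                          np.digitize(rng.normal(0.0, 1.0, ss), cuts))
            for _ in range(reps)]
    return np.mean(aucs)

for ss in (15, 25, 50, 100):
    # continuous-data reference value is Phi(d'/sqrt(2)) ~ 0.76
    print(ss, f"{mean_auc(ss):.4f}")
```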

  19. Rural and small-town attitudes about alcohol use during pregnancy: a community and provider sample.

    Science.gov (United States)

    Logan, T K; Walker, Robert; Nagle, Laura; Lewis, Jimmie; Wiesenhahn, Donna

    2003-01-01

    While there has been considerable research on prenatal alcohol use, few studies have focused on women in rural and small-town environments. This two-part study examines gender differences in attitudes and perceived barriers to intervention in a large community sample of persons living in rural and small-town environments in Kentucky (n = 3,346). The study also examines rural and small-town prenatal service providers' perceptions of barriers to assessment of and intervention with pregnant substance abusers (n = 138). Surveys were administered to a convenience sample of employees and customers from 16 rural and small-town community outlets. There were 1503 males (45%) and 1843 females (55%), ranging in age from under 18 to over 66 years old. Surveys also were mailed to prenatal providers in the county health departments of the 13-county study area, with 138 of 149 responding. Overall, the results of the community sample suggest that neither males nor females were knowledgeable about the harmful effects of alcohol use during pregnancy. Results also indicate substantial gender differences in alcohol attitudes, knowledge, and perceived barriers. Further, prenatal care providers identified several barriers to assessment and treatment of pregnant women with alcohol use problems in rural and small-town communities, including lack of knowledge of and comfort with assessment, as well as a lack of available and accessible treatment for referrals.

  20. Bayesian estimation of P(X > x) from a small sample of Gaussian data

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    2017-01-01

    The classical statistical uncertainty problem of estimation of upper tail probabilities on the basis of a small sample of observations of a Gaussian random variable is considered. Predictive posterior estimation is discussed, adopting the standard statistical model with diffuse priors of the two...
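
    The standard result behind this approach: with diffuse priors, the predictive distribution of a new observation is a shifted, scaled Student-t with n - 1 degrees of freedom, so the tail probability follows directly. A minimal sketch on simulated data (not the paper's):

```python
# A minimal sketch of the standard diffuse-prior result: the predictive
# distribution of a new Gaussian observation is a shifted, scaled
# Student-t with n-1 degrees of freedom. Data values are simulated.
import numpy as np
from scipy import stats

def predictive_tail(sample, x):
    """Predictive posterior P(X > x) under diffuse priors on mu, sigma."""
    sample = np.asarray(sample, dtype=float)
    n = sample.size
    xbar, s = sample.mean(), sample.std(ddof=1)
    t = (x - xbar) / (s * np.sqrt(1.0 + 1.0 / n))
    return stats.t.sf(t, df=n - 1)

rng = np.random.default_rng(4)
obs = rng.normal(10.0, 2.0, size=8)                       # small sample
print(predictive_tail(obs, 16.0))                         # heavier tail than
print(stats.norm.sf(16.0, obs.mean(), obs.std(ddof=1)))   # the plug-in value
```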

  1. Sensitivity study of micro four-point probe measurements on small samples

    DEFF Research Database (Denmark)

    Wang, Fei; Petersen, Dirch Hjorth; Hansen, Torben Mikael

    2010-01-01

    probes than near the outer ones. The sensitive area is defined for infinite film, circular, square, and rectangular test pads, and convergent sensitivities are observed for small samples. The simulations show that the Hall sheet resistance RH in micro Hall measurements with position error suppression...

  2. Efficiency calibration and measurement of self-absorption correction of environmental gamma spectroscopy of soils samples using Marinelli beaker

    International Nuclear Information System (INIS)

    Abdi, M. R.; Mostajaboddavati, M.; Hassanzadeh, S.; Faghihian, H.; Rezaee, Kh.; Kamali, M.

    2006-01-01

    A nonlinear function, in combination with a mixed-activity calibration method, is applied to fit the experimental peak efficiency of HPGe spectrometers in the 59-2614 keV energy range. The preparation of Marinelli beaker standards of mixed gamma emitters and RG-set at secular equilibrium with its daughter radionuclides was studied. Standards were prepared by mixing known amounts of 133 Ba, 241 Am, 152 Eu, 207 Bi, 24 Na, Al 2 O 3 powder and soil. The validity of these standards was checked by comparison with the certified standard reference materials RG-set and IAEA-Soil-6. Self-absorption was measured for the activity calculation of the gamma-ray lines of the 238 U daughter series, the 232 Th series, 137 Cs and 40 K in soil samples. Self-absorption in the sample depends on a number of factors, including sample composition, density, sample size and gamma-ray energy. Seven Marinelli beaker standards were prepared at different degrees of compaction, with bulk densities (ρ) of 1.000 to 1.600 g cm -3 . The detection efficiency versus density was obtained and the equation of the self-absorption correction factors was calculated for soil samples
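
    A minimal sketch of deriving a density correction from such standards, assuming an exponential efficiency-density model; both the model form and all numbers are illustrative assumptions, not the paper's fit.

```python
# A minimal sketch of a density correction derived from Marinelli
# standards of varying bulk density. The exponential model form and all
# numbers are illustrative assumptions, not the paper's fit.
import numpy as np

rho = np.array([1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6])    # bulk density, g/cm^3
eff = np.array([0.0310, 0.0295, 0.0282, 0.0270,
                0.0258, 0.0247, 0.0236])                # measured efficiency

b, log_a = np.polyfit(rho, np.log(eff), 1)              # ln eff = ln a + b*rho

def efficiency(r):
    return np.exp(log_a + b * r)

def correction(r, rho_cal=1.0):
    """Factor applied to a measured activity when the sample density r
    differs from the calibration density rho_cal."""
    return efficiency(rho_cal) / efficiency(r)

print(correction(1.45))
```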

  3. A scanning tunneling microscope capable of imaging specified micron-scale small samples.

    Science.gov (United States)

    Tao, Wei; Cao, Yufei; Wang, Huafeng; Wang, Kaiyou; Lu, Qingyou

    2012-12-01

    We present a home-built scanning tunneling microscope (STM) which allows us to precisely position the tip on any specified small sample or sample feature of micron scale. The core structure is a stand-alone soft junction mechanical loop (SJML), in which a small piezoelectric tube scanner is mounted on a sliding piece and a "U"-like soft spring strip has one end fixed to the sliding piece and its opposite end holding the tip, pointing to the sample on the scanner. Here, the tip can be precisely aligned to a specified small sample of micron scale by adjusting the position of the spring-clamped sample on the scanner in the field of view of an optical microscope. The aligned SJML can be transferred to a piezoelectric inertial motor for coarse approach, during which the U-spring is pushed towards the sample, causing the tip to approach the pre-aligned small sample. We have successfully approached a hand-cut tip, made from 0.1 mm thin Pt/Ir wire, to an isolated individual 32.5 × 32.5 μm(2) graphite flake. Good atomic-resolution images and high-quality tunneling current spectra for that specified tiny flake were obtained in ambient conditions with high repeatability over one month, showing the high long-term stability of the new STM structure. In addition, frequency spectra of the tunneling current signals do not show a prominent tip-mount-related (low-frequency) resonance, which further confirms the stability of the STM structure.

  4. A scanning tunneling microscope capable of imaging specified micron-scale small samples

    Science.gov (United States)

    Tao, Wei; Cao, Yufei; Wang, Huafeng; Wang, Kaiyou; Lu, Qingyou

    2012-12-01

    We present a home-built scanning tunneling microscope (STM) which allows us to precisely position the tip on any specified small sample or sample feature of micron scale. The core structure is a stand-alone soft junction mechanical loop (SJML), in which a small piezoelectric tube scanner is mounted on a sliding piece and a "U"-like soft spring strip has one end fixed to the sliding piece and its opposite end holding the tip, pointing to the sample on the scanner. Here, the tip can be precisely aligned to a specified small sample of micron scale by adjusting the position of the spring-clamped sample on the scanner in the field of view of an optical microscope. The aligned SJML can be transferred to a piezoelectric inertial motor for coarse approach, during which the U-spring is pushed towards the sample, causing the tip to approach the pre-aligned small sample. We have successfully approached a hand-cut tip, made from 0.1 mm thin Pt/Ir wire, to an isolated individual 32.5 × 32.5 μm2 graphite flake. Good atomic-resolution images and high-quality tunneling current spectra for that specified tiny flake were obtained in ambient conditions with high repeatability over one month, showing the high long-term stability of the new STM structure. In addition, frequency spectra of the tunneling current signals do not show a prominent tip-mount-related (low-frequency) resonance, which further confirms the stability of the STM structure.

  5. A combined Importance Sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models

    International Nuclear Information System (INIS)

    Echard, B.; Gayton, N.; Lemaire, M.; Relun, N.

    2013-01-01

    Applying reliability methods to a complex structure is often delicate for two main reasons. First, such a structure is, fortunately, designed with codified rules leading to a large safety margin, which means that failure is a small-probability event. Such a probability level is difficult to assess efficiently. Second, the mechanical behaviour of the structure is modelled numerically in an attempt to reproduce the real response, and the numerical model tends to become more and more time-demanding as its complexity is increased to improve accuracy and to account for particular mechanical behaviour. As a consequence, performing a large number of model computations cannot be considered as a way to assess the failure probability. To overcome these issues, this paper proposes an original and easily implementable method called AK-IS, for active learning and Kriging-based Importance Sampling. This new method is based on the AK-MCS algorithm previously published by Echard et al. [AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Structural Safety 2011;33(2):145–54]. It combines the Kriging metamodel, with its advantageous stochastic properties, with the Importance Sampling method to assess small failure probabilities. It enables the correction or validation of the FORM approximation with only very few mechanical model computations. The efficiency of the method is first demonstrated on two academic applications. It is then applied to assess the reliability of a challenging aerospace case study subjected to fatigue.
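
    The Importance Sampling half of AK-IS can be sketched on a toy limit state: sample around the FORM design point u* in standard normal space and reweight. In AK-IS proper, g would be a Kriging surrogate refined by active learning; here a toy g is evaluated directly, so this is only a sketch of the IS estimator, with all values assumed.

```python
# A sketch of the Importance Sampling half of AK-IS on a toy limit state:
# sample around the FORM design point u* in standard normal space and
# reweight. In AK-IS proper, g would be a Kriging surrogate refined by
# active learning; here the toy g is evaluated directly.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def g(u):                                   # failure when g(u) <= 0
    return 3.0 - u[:, 0] - u[:, 1]

u_star = np.array([1.5, 1.5])               # FORM design point of this g
n = 10_000
u = rng.standard_normal((n, 2)) + u_star    # sample centred on u*

# importance weights phi(u) / phi(u - u*), computed in log space
logw = (stats.multivariate_normal.logpdf(u, mean=np.zeros(2))
        - stats.multivariate_normal.logpdf(u, mean=u_star))
pf = np.mean((g(u) <= 0.0) * np.exp(logw))

print(pf, stats.norm.sf(3.0 / np.sqrt(2.0)))  # IS estimate vs exact value
```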

  6. System for sampling liquids in small jugs obturated by screwed taps

    International Nuclear Information System (INIS)

    Besnier, J.

    1995-01-01

    This invention describes a machine which automatically samples liquids in small jugs sealed by screwed taps. The device can be located in an isolated room in order to work with radioactive liquids. The machine can be divided into three main parts: a module to catch the jug, in order to take and fix it; a module to open and close it; and a module to sample. The latter takes the liquid by means of a suction device and puts it in a container, so that the sample can be analysed. (TEC)

  7. Respondent-driven sampling and the recruitment of people with small injecting networks.

    Science.gov (United States)

    Paquette, Dana; Bryant, Joanne; de Wit, John

    2012-05-01

    Respondent-driven sampling (RDS) is a form of chain-referral sampling, similar to snowball sampling, which was developed to reach hidden populations such as people who inject drugs (PWID). RDS is said to reach members of a hidden population that may not be accessible through other sampling methods. However, less attention has been paid to whether there are segments of the population that are more likely to be missed by RDS. This study examined the ability of RDS to capture people with small injecting networks. A study of PWID, using RDS, was conducted in 2009 in Sydney, Australia. The size of participants' injecting networks was examined by recruitment chain and wave. Participants' injecting network characteristics were compared to those of participants from a separate pharmacy-based study. A logistic regression analysis was conducted to examine the characteristics independently associated with having small injecting networks, using the combined RDS and pharmacy-based samples. In comparison with the pharmacy-recruited participants, RDS participants were almost 80% less likely to have small injecting networks, after adjusting for other variables. RDS participants were also more likely to have their injecting networks form a larger proportion of their social networks, and to have acquaintances as part of their injecting networks. Compared to those with larger injecting networks, individuals with small injecting networks were equally likely to engage in receptive sharing of injecting equipment, but less likely to have had contact with prevention services. These findings suggest that those with small injecting networks are an important group to recruit, and that RDS is less likely to capture these individuals.

  8. Clustering Methods with Qualitative Data: a Mixed-Methods Approach for Prevention Research with Small Samples.

    Science.gov (United States)

    Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G

    2015-10-01

    Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
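
    A minimal sketch of the first study's comparison, run on simulated binary code profiles: K-means and hierarchical (Ward) clustering of 50 cases with a label-switching-safe agreement score. Latent class analysis, the third method compared, is omitted here, and all data are assumptions.

```python
# A minimal sketch of the first study's comparison on simulated binary
# code profiles: K-means and hierarchical (Ward) clustering of 50 cases
# with a label-switching-safe agreement score. Latent class analysis,
# the third method, is omitted. Requires scikit-learn.
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

rng = np.random.default_rng(6)

proto = np.array([[0.8] * 6 + [0.1] * 6,     # two latent code profiles
                  [0.1] * 6 + [0.8] * 6])
truth = rng.integers(0, 2, size=50)
X = (rng.random((50, 12)) < proto[truth]).astype(float)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
hc = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(X)

def accuracy(labels):
    agree = (labels == truth).mean()
    return max(agree, 1.0 - agree)           # handle label switching

print("k-means      ", accuracy(km))
print("hierarchical ", accuracy(hc))
```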

  9. Clustering Methods with Qualitative Data: A Mixed Methods Approach for Prevention Research with Small Samples

    Science.gov (United States)

    Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.

    2016-01-01

    Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969

  10. Corrections to the 148Nd method of evaluation of burnup for the PIE samples from Mihama-3 and Genkai-1 reactors

    International Nuclear Information System (INIS)

    Suyama, Kenya; Mochizuki, Hiroki

    2006-01-01

    The value of the burnup is one of the most important parameters of samples taken by post-irradiation examination (PIE). Generally, it is evaluated by the Neodymium-148 method. Precise evaluation of the burnup value requires: (1) an effective fission yield of 148 Nd; (2) the neutron capture reactions of 147 Nd and 148 Nd; (3) a conversion factor from fissions per initial heavy metal atom to the burnup unit GWd/t. In this study, the burnup values of the PIE data from the Mihama-3 and Genkai-1 PWRs, which were taken by the Japan Atomic Energy Research Institute, were re-evaluated using more accurate corrections for each of these three items. The PIE data were then re-analyzed using the SWAT and SWAT2 code systems with the JENDL-3.3 library. The re-evaluation of the effective fission yield of 148 Nd has an effect of 1.5-2.0% on burnup values. Considering the neutron capture reactions of 147 Nd and 148 Nd removes the dependence of the C/E values of 148 Nd on the burnup value. The conversion factor from FIMA(%) to GWd/t changes according to the burnup value; its effect on the burnup evaluation is small for samples having burnup larger than 30 GWd/t. The analyses using the corrected burnup values showed that the difference between the calculated 148 Nd concentrations and the PIE data is approximately 1%, whereas it was 3-5% in prior analyses. This analysis indicates that the burnup values of samples from the Mihama-3 and Genkai-1 PWRs should be corrected by 2-3%. The effect of re-evaluation of the burnup value on the neutron multiplication factor is an approximately 0.6% change in PIE samples having burnup larger than 30 GWd/t. Finally, a comparison between calculation results using a single pin-cell model and an assembly model was carried out. Because the results agreed with each other within a few percent, we concluded that the single pin-cell model is suitable for the analysis of PIE samples and that the underestimation of plutonium isotopes, which occurred in the previous analyses, does not result from a geometry
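
    Item (3), the FIMA-to-GWd/t conversion, can be sketched as follows; the 200 MeV average energy per fission and a fixed heavy-metal molar mass are simplifying assumptions, and, as the abstract notes, the factor itself varies with burnup.

```python
# A sketch of the FIMA-to-GWd/t conversion. The 200 MeV average energy
# per fission and the fixed heavy-metal molar mass are simplifying
# assumptions; the factor itself varies with burnup.
N_A   = 6.02214e23          # atoms/mol
E_FIS = 200.0 * 1.602e-13   # J per fission (assumed average)
GWD   = 1e9 * 86400.0       # J in one gigawatt-day

def fima_to_gwd_per_t(fima_percent, molar_mass=238.0):
    atoms_per_tonne = 1e6 / molar_mass * N_A
    fissions = fima_percent / 100.0 * atoms_per_tonne
    return fissions * E_FIS / GWD

print(fima_to_gwd_per_t(1.0))   # roughly 9.4 GWd/t per 1% FIMA
```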

  11. Corrections to the {sup 148}Nd method of evaluation of burnup for the PIE samples from Mihama-3 and Genkai-1 reactors

    Energy Technology Data Exchange (ETDEWEB)

    Suyama, Kenya [Fuel Cycle Facility Safety Research Group, Nuclear Safety Research Center, Japan Atomic Energy Agency, Tokai-mura, Naka-gun, Ibaraki 319-1195 (Japan)]. E-mail: suyama.kenya@jaea.go.jp; Mochizuki, Hiroki [Japan Research Institute, Limited, 16 Ichiban-cho, Chiyoda-ku, Tokyo 102-0082 (Japan)

    2006-03-15

    The value of the burnup is one of the most important parameters of samples taken by post-irradiation examination (PIE). Generally, it is evaluated by the Neodymium-148 method. Precise evaluation of the burnup value requires: (1) an effective fission yield of {sup 148}Nd; (2) the neutron capture reactions of {sup 147}Nd and {sup 148}Nd; (3) a conversion factor from fissions per initial heavy metal atom to the burnup unit GWd/t. In this study, the burnup values of the PIE data from the Mihama-3 and Genkai-1 PWRs, which were taken by the Japan Atomic Energy Research Institute, were re-evaluated using more accurate corrections for each of these three items. The PIE data were then re-analyzed using the SWAT and SWAT2 code systems with the JENDL-3.3 library. The re-evaluation of the effective fission yield of {sup 148}Nd has an effect of 1.5-2.0% on burnup values. Considering the neutron capture reactions of {sup 147}Nd and {sup 148}Nd removes the dependence of the C/E values of {sup 148}Nd on the burnup value. The conversion factor from FIMA(%) to GWd/t changes according to the burnup value; its effect on the burnup evaluation is small for samples having burnup larger than 30 GWd/t. The analyses using the corrected burnup values showed that the difference between the calculated {sup 148}Nd concentrations and the PIE data is approximately 1%, whereas it was 3-5% in prior analyses. This analysis indicates that the burnup values of samples from the Mihama-3 and Genkai-1 PWRs should be corrected by 2-3%. The effect of re-evaluation of the burnup value on the neutron multiplication factor is an approximately 0.6% change in PIE samples having burnup larger than 30 GWd/t. Finally, a comparison between calculation results using a single pin-cell model and an assembly model was carried out. Because the results agreed with each other within a few percent, we concluded that the single pin-cell model is suitable for the analysis of PIE samples and that the underestimation of plutonium isotopes, which occurred in the previous

  12. EDXRF applied to the chemical element determination of small invertebrate samples

    International Nuclear Information System (INIS)

    Magalhaes, Marcelo L.R.; Santos, Mariana L.O.; Cantinha, Rebeca S.; Souza, Thomas Marques de; Franca, Elvis J. de

    2015-01-01

    Energy-dispersive X-ray fluorescence (EDXRF) is a fast analytical technique of easy operation; however, it demands reliable analytical curves due to the intrinsic matrix dependence and interference during analysis. By using biological materials of diverse matrices, multielemental analytical protocols can be implemented and a group of chemical elements can be determined in diverse biological matrices, depending on the chemical element concentration. Particularly for invertebrates, EDXRF presents some advantages associated with the possibility of analyzing small samples, in which a collimator can be used to direct the incidence of X-rays onto a small surface of the analyzed samples. In this work, EDXRF was applied to determine Cl, Fe, P, S and Zn in invertebrate samples using collimators of 3 mm and 10 mm. For the assessment of the analytical protocol, SRM 2976 Trace Elements in Mollusk and SRM 8415 Whole Egg Powder, produced by the National Institute of Standards and Technology (NIST), were also analyzed. After sampling using pitfall traps, invertebrates were lyophilized, milled and transferred to polyethylene vials covered by XRF polyethylene film. Analyses were performed at a pressure lower than 30 Pa, varying voltage and electric current according to the chemical element to be analyzed. For comparison, Zn in the invertebrate material was also quantified by graphite furnace atomic absorption spectrometry (GFAAS) after acid treatment (a mixture of nitric acid and hydrogen peroxide) of the samples. Compared to the 10 mm collimator, the SRM 2976 and SRM 8415 results obtained with the 3 mm collimator agreed well at the 95% confidence level, since the E n numbers were in the range of -1 to 1. Results from GFAAS were in accordance with the EDXRF values for composite samples. Therefore, determination of some chemical elements by EDXRF can be recommended for very small invertebrate samples (less than 100 mg), with the advantage of preserving the samples. (author)
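
    The acceptance criterion used above is the En number; a minimal sketch with illustrative values, assuming expanded (k = 2) uncertainties:

```python
# A minimal sketch of the En score used above, assuming expanded (k = 2)
# uncertainties; |En| <= 1 indicates agreement with the reference value.
# The numbers are illustrative, not the paper's results.
def en_number(x_lab, u_lab, x_ref, u_ref):
    return (x_lab - x_ref) / ((u_lab**2 + u_ref**2) ** 0.5)

print(en_number(x_lab=137.0, u_lab=9.0, x_ref=133.0, u_ref=6.0))  # ~0.37
```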

  13. EDXRF applied to the chemical element determination of small invertebrate samples

    Energy Technology Data Exchange (ETDEWEB)

    Magalhaes, Marcelo L.R.; Santos, Mariana L.O.; Cantinha, Rebeca S.; Souza, Thomas Marques de; Franca, Elvis J. de, E-mail: marcelo_rlm@hotmail.com, E-mail: marianasantos_ufpe@hotmail.com, E-mail: rebecanuclear@gmail.com, E-mail: thomasmarques@live.com.pt, E-mail: ejfranca@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2015-07-01

    Energy-dispersive X-ray fluorescence (EDXRF) is a fast analytical technique of easy operation; however, it demands reliable analytical curves due to the intrinsic matrix dependence and interference during analysis. By using biological materials of diverse matrices, multielemental analytical protocols can be implemented and a group of chemical elements can be determined in diverse biological matrices, depending on the chemical element concentration. Particularly for invertebrates, EDXRF presents some advantages associated with the possibility of analyzing small samples, in which a collimator can be used to direct the incidence of X-rays onto a small surface of the analyzed samples. In this work, EDXRF was applied to determine Cl, Fe, P, S and Zn in invertebrate samples using collimators of 3 mm and 10 mm. For the assessment of the analytical protocol, SRM 2976 Trace Elements in Mollusk and SRM 8415 Whole Egg Powder, produced by the National Institute of Standards and Technology (NIST), were also analyzed. After sampling using pitfall traps, invertebrates were lyophilized, milled and transferred to polyethylene vials covered by XRF polyethylene film. Analyses were performed at a pressure lower than 30 Pa, varying voltage and electric current according to the chemical element to be analyzed. For comparison, Zn in the invertebrate material was also quantified by graphite furnace atomic absorption spectrometry (GFAAS) after acid treatment (a mixture of nitric acid and hydrogen peroxide) of the samples. Compared to the 10 mm collimator, the SRM 2976 and SRM 8415 results obtained with the 3 mm collimator agreed well at the 95% confidence level, since the E{sub n} numbers were in the range of -1 to 1. Results from GFAAS were in accordance with the EDXRF values for composite samples. Therefore, determination of some chemical elements by EDXRF can be recommended for very small invertebrate samples (less than 100 mg), with the advantage of preserving the samples. (author)

  14. STATISTICAL EVALUATION OF SMALL SCALE MIXING DEMONSTRATION SAMPLING AND BATCH TRANSFER PERFORMANCE - 12093

    Energy Technology Data Exchange (ETDEWEB)

    GREER DA; THIEN MG

    2012-01-12

    The ability to effectively mix, sample, certify, and deliver consistent batches of High Level Waste (HLW) feed from the Hanford Double Shell Tanks (DST) to the Waste Treatment and Immobilization Plant (WTP) presents a significant mission risk, with potential to impact mission length and the quantity of HLW glass produced. DOE's Tank Operations Contractor, Washington River Protection Solutions (WRPS), has previously presented the results of mixing performance in two different sizes of small-scale DSTs to support scale-up estimates of full-scale DST mixing performance. Currently, sufficient sampling of DSTs is one of the largest programmatic risks that could prevent timely delivery of high level waste to the WTP. WRPS has performed small-scale mixing and sampling demonstrations to study the ability to sufficiently sample the tanks. The statistical evaluation of the demonstration results, which leads to the conclusion that the two scales of small DST behave similarly and that full-scale performance is predictable, will be presented. This work is essential to reduce the risk of requiring a new dedicated feed sampling facility and will guide future optimization work to ensure the waste feed delivery mission is accomplished successfully. This paper focuses on the analytical data collected from mixing, sampling, and batch transfer testing in the small-scale mixing demonstration tanks and on how those data are being interpreted to begin to understand the relationship between samples taken prior to transfer and samples from the subsequent batches transferred. An overview of the types of data collected and examples of typical raw data are provided. The paper then discusses the processing and manipulation of the data that are necessary to begin evaluating sampling and batch transfer performance. This discussion also includes the evaluation of the analytical measurement capability with regard to the simulant material used in the demonstration tests. The

  15. Refractive lenticule extraction (ReLEx) through a small incision (SMILE) for correction of myopia and myopic astigmatism: current perspectives

    Directory of Open Access Journals (Sweden)

    Ağca A

    2016-10-01

    Full Text Available Alper Ağca,1 Ahmet Demirok,2 Yusuf Yıldırım,1 Ali Demircan,1 Dilek Yaşa,1 Ceren Yeşilkaya,1 İrfan Perente,1 Muhittin Taşkapılı1 1Beyoğlu Eye Research and Training Hospital, 2Department of Ophthalmology, Istanbul Medeniyet University, Istanbul, Turkey Abstract: Small-incision lenticule extraction (SMILE) is an alternative to laser-assisted in situ keratomileusis (LASIK) and photorefractive keratectomy (PRK) for the correction of myopia and myopic astigmatism. SMILE can be performed for the treatment of myopia ≤ -12 D and astigmatism ≤ 5 D. The technology is currently only available on the VisuMax femtosecond laser platform. It offers several advantages over LASIK and PRK; however, hyperopia treatment, topography-guided treatment, and cyclotorsion control are not available on the current platform. The working principles, potential advantages, and disadvantages are discussed in this review. Keywords: SMILE, small-incision lenticule extraction, femtosecond laser, laser in situ keratomileusis, corneal biomechanics

  16. Accuracy and Radiation Dose of CT-Based Attenuation Correction for Small Animal PET: A Monte Carlo Simulation Study

    International Nuclear Information System (INIS)

    Yang, Ching-Ching; Chan, Kai-Chieh

    2013-06-01

    Small animal PET allows qualitative assessment and quantitative measurement of biochemical processes in vivo, but the accuracy and reproducibility of imaging results can be affected by several parameters. The first aim of this study was to investigate the performance of different CT-based attenuation correction strategies and assess the resulting impact on PET images. The absorbed dose in different tissues caused by the scanning procedures is also discussed, with a view to minimizing the biological damage generated by radiation exposure due to PET/CT scanning. A small animal PET/CT system was modeled with Monte Carlo simulation to generate imaging results and dose distributions. Three energy mapping methods, including the bilinear scaling method, the dual-energy method, and a hybrid method which combines kVp conversion and the dual-energy method, were investigated comparatively by assessing the accuracy of the estimated linear attenuation coefficients at 511 keV and the bias introduced into PET quantification results by CT-based attenuation correction. Our results showed that the hybrid method outperformed the bilinear scaling method, while the dual-energy method achieved the highest accuracy among the three energy mapping methods. Overall, the accuracy of the PET quantification results followed a similar trend to that of the estimated linear attenuation coefficients, although the differences between the three methods were more obvious in the estimation of the linear attenuation coefficients than in the PET quantification results. With regard to radiation exposure from CT, the absorbed dose ranged between 7.29 and 45.58 mGy for the 50-kVp scan and between 6.61 and 39.28 mGy for the 80-kVp scan. For an 18 F radioactivity concentration of 1.86×10^5 Bq/ml, the PET absorbed dose was around 24 cGy for a tumor with a target-to-background ratio of 8. The radiation levels for CT scans are not lethal to the animal, but concurrent use of PET in a longitudinal study can increase the risk of biological effects. The
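
    A hedged sketch of the bilinear scaling method named above: CT numbers are mapped to 511 keV attenuation coefficients with two line segments meeting at HU = 0. The slope constants are representative textbook values, not this paper's calibration.

```python
# A hedged sketch of the bilinear scaling method: CT numbers are mapped
# to 511 keV attenuation coefficients by two line segments meeting at
# HU = 0. Slope constants are representative textbook values, not this
# paper's calibration.
import numpy as np

MU_WATER_511 = 0.096   # 1/cm at 511 keV
MU_BONE_511  = 0.172   # 1/cm, cortical bone (approximate)

def hu_to_mu511(hu, hu_bone=1000.0):
    hu = np.asarray(hu, dtype=float)
    soft = MU_WATER_511 * (1.0 + hu / 1000.0)
    bone = MU_WATER_511 + hu * (MU_BONE_511 - MU_WATER_511) / hu_bone
    return np.where(hu <= 0.0, soft, bone)

print(hu_to_mu511([-1000, 0, 500, 1000]))   # air ~ 0, water 0.096, ...
```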

  17. Biota dose assessment of small mammals sampled near uranium mines in northern Arizona

    Energy Technology Data Exchange (ETDEWEB)

    Jannik, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Minter, K. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kuhne, W. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kubilius, W. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2018-01-09

    In 2015, the U.S. Geological Survey (USGS) collected approximately 50 small mammal carcasses from Northern Arizona uranium mines and other background locations. Based on the highest gross alpha results, 11 small mammal samples were selected for radioisotopic analyses. None of the background samples had significant gross alpha results. The 11 small mammals were identified relative to the three 'indicator' mines located south of Fredonia, AZ on the Kanab Plateau (Kanab North Mine, Pinenut Mine, and Arizona 1 Mine) (Figure 1-1), which are operated by Energy Fuels Resources Inc. (EFRI). EFRI annually reports soil analyses for uranium and radium-226 using Arizona Department of Environmental Quality (ADEQ)-approved Standard Operating Procedures for Soil Sampling (EFRI 2016a, 2016b, 2017). In combination with the USGS small mammal radioisotopic tissue analyses, a biota dose assessment was completed by Savannah River National Laboratory (SRNL) using the RESidual RADioactivity-BIOTA (RESRAD-BIOTA, V. 1.8) dose assessment tool provided by Argonne National Laboratory (ANL 2017).

  18. Correct liquid scintillation counting of steroids and glycosides in RIA samples: a comparison of xylene-based, dioxane-based and colloidal counting systems. Chapter 14

    International Nuclear Information System (INIS)

    Spolders, H.

    1977-01-01

    In RIA, the following parameters are important for accurate liquid scintillation counting: (1) absence of chemiluminescence; (2) stability of count rate; (3) dissolving properties for the sample. For samples with varying colours, a quench correction must be applied. For any type of accurate quench correction, a homogeneous sample is necessary. This can be obtained if the proteins and the buffer can be dissolved completely in the scintillator solution. In this paper, these criteria are compared for xylene-based, dioxane-based and colloidal scintillation solutions, for either bound or free antigens of different polarity. The labelling radioisotope used was 3 H. When using colloidal scintillators with plasma and buffer samples, phasing or sedimentation of salts or proteins sometimes occurs. The influence of sedimentation or phasing on count-rate stability and correct quench correction is illustrated by varying the ratio between the scintillator solution and an RIA sample containing the semi-polar steroid aldosterone. (author)

  19. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    Science.gov (United States)

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness-of-fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square, but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra and Bentler's mean-scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of the Satorra-Bentler statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic, in order to improve its robustness in small samples. A simple simulation study shows that this third-moment-adjusted statistic asymptotically performs on par with previously proposed methods and, at very small sample sizes, offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book are used to illustrate the real-world performance of this statistic.

  20. Statistical issues in reporting quality data: small samples and casemix variation.

    Science.gov (United States)

    Zaslavsky, A M

    2001-12-01

    To present two key statistical issues that arise in the analysis and reporting of quality data. Casemix variation is relevant to quality reporting when the units being measured have differing distributions of patient characteristics that also affect the quality outcome. When this is the case, adjustment using stratification or regression may be appropriate. Such adjustments may be controversial when the patient characteristic does not have an obvious relationship to the outcome. Stratified reporting poses problems of sample size and reporting format, but may be useful when casemix effects vary across units. Although there are no absolute standards of reliability, high reliabilities (interunit F ≥ 10 or reliability ≥ 0.9) are desirable for distinguishing above- and below-average units. When small or unequal sample sizes complicate reporting, precision may be improved using indirect estimation techniques that incorporate auxiliary information, and 'shrinkage' estimation can help to summarize the strength of evidence about units with small samples. With broader understanding of casemix adjustment and methods for analyzing small samples, quality data can be analysed and reported more accurately.
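
    A minimal sketch of the 'shrinkage' idea: each unit's estimate is pulled toward the grand mean in proportion to its sampling noise. All variance values below are illustrative assumptions.

```python
# A minimal sketch of 'shrinkage' estimation: each unit's mean is pulled
# toward the grand mean in proportion to its sampling noise. All variance
# values are illustrative assumptions.
import numpy as np

def shrink(unit_means, unit_vars, between_var):
    """Empirical-Bayes compromise between unit means and the grand mean."""
    grand = np.average(unit_means, weights=1.0 / (unit_vars + between_var))
    reliability = between_var / (between_var + unit_vars)
    return grand + reliability * (unit_means - grand), reliability

means = np.array([0.82, 0.64, 0.91])    # e.g. quality scores of 3 units
varis = np.array([0.001, 0.020, 0.040]) # larger variance = smaller sample
shrunk, rel = shrink(means, varis, between_var=0.005)

print(shrunk)   # small-sample units move most toward the grand mean
print(rel)      # per-unit reliabilities; >= 0.9 is desirable for ranking
```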

  1. Determination of self absorption correction factor (SAF) for gross alpha measurement in water samples by BIS method

    International Nuclear Information System (INIS)

    Raveendran, Nanda; Baburajan, A.; Ravi, P.M.

    2018-01-01

    Laboratories accredited by AERB undertake the measurement of gross alpha and gross beta activity in packaged drinking water from manufacturers across the country, analyzing samples as per the procedure of the Bureau of Indian Standards. Accurate measurement of gross alpha activity in drinking water samples is a challenge due to the self-absorption of alpha particles, which varies with precipitate (Fe(OH) 3 + BaSO 4 ) thickness and total dissolved solids (TDS). This paper deals with a study on tracer recovery and the self-absorption correction factor (SAF). ESL, Tarapur participated in an inter-laboratory comparison exercise conducted by IDS, RSSD, BARC as per the recommendation of AERB for the accredited laboratories. The thickness of the precipitate is an important aspect affecting the counting process. The activity was reported after conducting multiple experiments on uranium tracer recovery and precipitate thickness. Later, to simplify the procedure, an average tracer recovery and self-absorption correction factor (SAF) were derived from the present experiment and used for recalculation of the activity from the count rate reported earlier
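
    A minimal sketch, not the laboratory's procedure, of how an average tracer recovery and SAF enter the activity calculation; all numerical values are illustrative assumptions.

```python
# A minimal sketch, not the laboratory's procedure: how an average tracer
# recovery and self-absorption correction factor (SAF) enter the activity
# calculation. All numerical values are illustrative assumptions.
def gross_alpha_activity(net_cps, efficiency, recovery, saf, volume_l):
    """Gross alpha activity concentration in Bq/L.

    net_cps     net count rate after background subtraction, counts/s
    efficiency  detector counting efficiency, counts per alpha
    recovery    average tracer (chemical) recovery, 0..1
    saf         average self-absorption correction factor, 0..1
    volume_l    analyzed water volume, litres
    """
    return net_cps / (efficiency * recovery * saf * volume_l)

print(gross_alpha_activity(net_cps=0.004, efficiency=0.25,
                           recovery=0.85, saf=0.70, volume_l=1.0))
```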

  2. Microdochium nivale and Microdochium majus in seed samples of Danish small grain cereals

    DEFF Research Database (Denmark)

    Nielsen, L. K.; Justesen, A. F.; Jensen, J. D.

    2013-01-01

    Microdochium nivale and Microdochium majus are two of the fungal species found in the Fusarium Head Blight (FHB) complex infecting small grain cereals. Quantitative real-time PCR assays were designed to separate the two Microdochium species based on the translation elongation factor 1a gene (TEF-1a) and used to analyse a total of 374 seed samples of wheat, barley, triticale, rye and oat sampled from farmers' fields across Denmark from 2003 to 2007. Both fungal species were detected in the five cereal species, but M. majus showed a higher prevalence compared to M. nivale in most years in all cereal species except rye, in which M. nivale represented a larger proportion of the biomass and was more prevalent than M. majus in some samples. Historical samples of wheat and barley from 1957 to 2000 similarly showed a strong prevalence of M. majus over M. nivale, indicating that M. majus has been the main...

  3. Basic distribution free identification tests for small size samples of environmental data

    International Nuclear Information System (INIS)

    Federico, A.G.; Musmeci, F.

    1998-01-01

    Testing two or more data sets for the hypothesis that they are sampled from the same population is often required in environmental data analysis. Typically the available samples contain a small number of data points, and the assumption of normal distributions is often not realistic. On the other hand, the diffusion of today's powerful personal computers opens new opportunities based on a massive use of CPU resources. The paper reviews the problem and introduces two feasible nonparametric approaches based on intrinsic equiprobability properties of the data samples. The first is based on full resampling while the second is based on a bootstrap approach. An easy-to-use program is presented. A case study is given, based on the Chernobyl children contamination data.
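
    A minimal sketch of the kind of distribution-free resampling test the paper describes: a two-sample permutation test on the difference of means, which relies only on the equiprobability of rearrangements under the null hypothesis (details of the authors' program are not given in the abstract):

        import numpy as np

        def permutation_pvalue(x, y, n_perm=10000, seed=0):
            rng = np.random.default_rng(seed)
            pooled = np.concatenate([x, y])
            observed = np.mean(x) - np.mean(y)
            hits = 0
            for _ in range(n_perm):
                rng.shuffle(pooled)  # random relabelling of the pooled data
                diff = pooled[:len(x)].mean() - pooled[len(x):].mean()
                hits += abs(diff) >= abs(observed)
            return hits / n_perm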

  4. Quantifying predictability through information theory: small sample estimation in a non-Gaussian framework

    International Nuclear Information System (INIS)

    Haven, Kyle; Majda, Andrew; Abramov, Rafail

    2005-01-01

    Many situations in complex systems require quantitative estimates of the lack of information in one probability distribution relative to another. In short-term climate and weather prediction, examples of these issues might involve the lack of information in the historical climate record compared with an ensemble prediction, or the lack of information in a particular Gaussian ensemble prediction strategy involving the first and second moments compared with the non-Gaussian ensemble itself. The relative entropy is a natural way to quantify the predictive utility in this information, and recently a systematic, computationally feasible hierarchical framework has been developed. In practical systems with many degrees of freedom, computational overhead limits ensemble predictions to relatively small sample sizes. Here the notion of predictive utility, in a relative entropy framework, is extended to small random samples by the definition of a sample utility, a measure of the unlikeliness that a random sample was produced by a given prediction strategy. The sample utility is the minimum predictability, with a statistical level of confidence, which is implied by the data. Two practical algorithms for measuring such a sample utility are developed here. The first technique is based on the statistical method of null-hypothesis testing, while the second is based upon a central limit theorem for the relative entropy of moment-based probability densities. These techniques are tested on known probability densities with parameterized bimodality and skewness, and then applied to the Lorenz '96 model, a recently developed 'toy' climate model with chaotic dynamics mimicking the atmosphere. The results show a detection of non-Gaussian tendencies of prediction densities at small ensemble sizes, with between 50 and 100 members, at a 95% confidence level.
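
    For intuition, the moment-based relative entropy at the core of such a framework reduces, for Gaussian fits, to a closed form. A minimal sketch computing the Kullback-Leibler divergence of a prediction ensemble from a climatological record, both summarized by their first two moments (variable names are illustrative):

        import numpy as np

        def gaussian_relative_entropy(pred_sample, clim_sample):
            """KL( N(m1,v1) || N(m0,v0) ) with moments fitted to the samples."""
            m1, v1 = np.mean(pred_sample), np.var(pred_sample, ddof=1)
            m0, v0 = np.mean(clim_sample), np.var(clim_sample, ddof=1)
            return 0.5 * (np.log(v0 / v1) + v1 / v0 + (m1 - m0) ** 2 / v0 - 1.0)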

  5. Suitability of small diagnostic peripheral-blood samples for cell-therapy studies.

    Science.gov (United States)

    Stephanou, Coralea; Papasavva, Panayiota; Zachariou, Myria; Patsali, Petros; Epitropou, Marilena; Ladas, Petros; Al-Abdulla, Ruba; Christou, Soteroulla; Antoniou, Michael N; Lederer, Carsten W; Kleanthous, Marina

    2017-02-01

    Primary hematopoietic stem and progenitor cells (HSPCs) are key components of cell-based therapies for blood disorders and are thus the authentic substrate for related research. We propose that ubiquitous small-volume diagnostic samples represent a readily available and as yet untapped resource of primary patient-derived cells for cell- and gene-therapy studies. In the present study we compare isolation and storage methods for HSPCs from normal and thalassemic small-volume blood samples, considering genotype, density-gradient versus lysis-based cell isolation and cryostorage media with different serum contents. Downstream analyses include viability, recovery, differentiation in semi-solid media and performance in liquid cultures and viral transductions. We demonstrate that HSPCs isolated either by ammonium-chloride potassium (ACK)-based lysis or by gradient isolation are suitable for functional analyses in clonogenic assays, high-level HSPC expansion and efficient lentiviral transduction. For cryostorage of cells, gradient isolation is superior to ACK lysis, and cryostorage in freezing media containing 50% fetal bovine serum demonstrated good results across all tested criteria. For assays on freshly isolated cells, ACK lysis performed similar to, and for thalassemic samples better than, gradient isolation, at a fraction of the cost and hands-on time. All isolation and storage methods show considerable variation within sample groups, but this is particularly acute for density gradient isolation of thalassemic samples. This study demonstrates the suitability of small-volume blood samples for storage and preclinical studies, opening up the research field of HSPC and gene therapy to any blood diagnostic laboratory with corresponding bioethics approval for experimental use of surplus material. Copyright © 2017 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  6. Taking sputum samples from small children with cystic fibrosis: a matter of cooperation

    DEFF Research Database (Denmark)

    Pehn, Mette; Bregnballe, Vibeke

    2014-01-01

    Objectives: An important part of disease control in the Danish guidelines for care of patients with cystic fibrosis (CF) is a monthly sputum sample obtained by tracheal suction. Coping with this unpleasant procedure in small children depends heavily on support from parents and nurses. The objective of this study was to develop a tool to help parents and children cope with tracheal suctioning. Methods: Three short videos showing how nurses perform tracheal suctioning to get a sputum sample from small children with cystic fibrosis were made. The videos were shown to and discussed with parents and children to help them identify their own challenges in coping with the procedure. The study was carried out in the outpatient clinic at the CF centre, Aarhus University Hospital. Results: The videos are a useful tool to convince the parents, nurses and children from the age of about four years...

  7. Auto-validating von Neumann rejection sampling from small phylogenetic tree spaces

    Directory of Open Access Journals (Sweden)

    York Thomas

    2009-01-01

    Background: In phylogenetic inference one is interested in obtaining samples from the posterior distribution over the tree space on the basis of some observed DNA sequence data. One of the simplest sampling methods is the rejection sampler due to von Neumann. Here we introduce an auto-validating version of the rejection sampler, via interval analysis, to rigorously draw samples from posterior distributions over small phylogenetic tree spaces. Results: The posterior samples from the auto-validating sampler are used to rigorously (i) estimate posterior probabilities for different rooted topologies based on mitochondrial DNA from human, chimpanzee and gorilla, (ii) conduct a non-parametric test of rate variation between protein-coding and tRNA-coding sites from three primates and (iii) obtain a posterior estimate of the human-neanderthal divergence time. Conclusion: This solves the open problem of rigorously drawing independent and identically distributed samples from the posterior distribution over rooted and unrooted small tree spaces (3 or 4 taxa) based on any multiply-aligned sequence data.

  8. Mass amplifying probe for sensitive fluorescence anisotropy detection of small molecules in complex biological samples.

    Science.gov (United States)

    Cui, Liang; Zou, Yuan; Lin, Ninghang; Zhu, Zhi; Jenkins, Gareth; Yang, Chaoyong James

    2012-07-03

    Fluorescence anisotropy (FA) is a reliable and excellent choice for fluorescence sensing. One of the key factors influencing the FA value for any molecule is the molar mass of the molecule being measured. As a result, the FA method with functional nucleic acid aptamers has been limited to macromolecules such as proteins and is generally not applicable for the analysis of small molecules because their molecular masses are too small to produce observable FA value changes. We report here a molecular mass amplifying strategy to construct anisotropy aptamer probes for small molecules. The probe is designed in such a way that only when a target molecule binds to the probe does it activate its binding ability to an anisotropy amplifier (a high molecular mass molecule such as a protein), thus significantly increasing the molecular mass and FA value of the probe/target complex. Specifically, a mass amplifying probe (MAP) consists of a targeting aptamer domain against a target molecule and a molecular mass amplifying aptamer domain for the amplifier protein. The probe is initially rendered inactive by a small blocking strand partially complementary to both the target aptamer and the amplifier protein aptamer, so that the mass amplifying aptamer domain does not bind to the amplifier protein unless the probe has been activated by the target. In this way, we prepared two probes, each consisting of a target aptamer (for ATP and cocaine, respectively), a thrombin (mass amplifier) aptamer, and a fluorophore. Both probes worked well against their corresponding small molecule targets, and the detection limits for ATP and cocaine were 0.5 μM and 0.8 μM, respectively. More importantly, because FA is less affected by environmental interferences, ATP in cell media and cocaine in urine were directly detected without any tedious sample pretreatment. Our results established that our molecular mass amplifying strategy can be used to design aptamer probes for rapid, sensitive, and selective detection of small molecules in complex biological samples.

  9. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    Science.gov (United States)

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists have questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has limitations with small sample sizes. We used a pooled method in a nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling to the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability under all conditions except for Cauchy and extreme-variability lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than the other alternatives. The nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling for comparing paired or unpaired means and for validating one-way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
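
    A minimal sketch of a pooled-resampling bootstrap test for two independent means, in the spirit of the method described above (the studentised statistic is a common choice, not necessarily the authors' exact algorithm):

        import numpy as np

        def pooled_bootstrap_pvalue(x, y, n_boot=10000, seed=0):
            rng = np.random.default_rng(seed)
            def tstat(a, b):
                return (a.mean() - b.mean()) / np.sqrt(
                    a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
            t_obs = tstat(np.asarray(x, float), np.asarray(y, float))
            pooled = np.concatenate([x, y])  # pooling enforces the null hypothesis
            hits = 0
            for _ in range(n_boot):
                bx = rng.choice(pooled, size=len(x), replace=True)
                by = rng.choice(pooled, size=len(y), replace=True)
                hits += abs(tstat(bx, by)) >= abs(t_obs)
            return hits / n_boot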

  10. Report of the advisory group meeting on elemental analysis of extremely small samples

    International Nuclear Information System (INIS)

    2002-01-01

    This publication contains a summary of the discussions held at the meeting, with a brief description and comparative characteristics of the most common nuclear analytical techniques used for the analysis of very small samples, as well as the conclusions of the meeting. Some aspects of reference materials and quality control are also discussed. The publication also contains individual contributions made by the participants, each provided with an abstract and indexed separately.

  11. Enrichment and determination of small amounts of 90Sr/90Y in water samples

    International Nuclear Information System (INIS)

    Mundschenk, H.

    1979-01-01

    Small amounts of 90Sr/90Y can be concentrated from large volumes of surface water (100 l) by precipitation of the phosphates, using bentonite as an adsorber matrix. In the case of samples containing little or no suspended matter (tap water, ground water, sea water), the daughter 90Y can be extracted directly by using filter beds impregnated with HDEHP. The applicability of both techniques is demonstrated under realistic conditions. (orig.)

  12. A simple technique for measuring the superconducting critical temperature of small (>= 10 μg) samples

    International Nuclear Information System (INIS)

    Pereira, R.F.R.; Meyer, E.; Silveira, M.F. da.

    1983-01-01

    A simple technique for measuring the superconducting critical temperature of small (≥10 μg) samples is described. The apparatus is built in the form of a probe, which can be introduced directly into a liquid He storage dewar and permits the determination of the critical temperature, with an imprecision of ±0.05 K above 4.2 K, in about 10 minutes. (Author)

  13. On the Structure of Cortical Microcircuits Inferred from Small Sample Sizes.

    Science.gov (United States)

    Vegué, Marina; Perin, Rodrigo; Roxin, Alex

    2017-08-30

    The structure in cortical microcircuits deviates from what would be expected in a purely random network, which has been seen as evidence of clustering. To address this issue, we sought to reproduce the nonrandom features of cortical circuits by considering several distinct classes of network topology, including clustered networks, networks with distance-dependent connectivity, and those with broad degree distributions. To our surprise, we found that all of these qualitatively distinct topologies could account equally well for all reported nonrandom features despite being easily distinguishable from one another at the network level. This apparent paradox was a consequence of estimating network properties given only small sample sizes. In other words, networks that differ markedly in their global structure can look quite similar locally. This makes inferring network structure from small sample sizes, a necessity given the technical difficulty inherent in simultaneous intracellular recordings, problematic. We found that a network statistic called the sample degree correlation (SDC) overcomes this difficulty. The SDC depends only on parameters that can be estimated reliably given small sample sizes and is an accurate fingerprint of every topological family. We applied the SDC criterion to data from rat visual and somatosensory cortex and discovered that the connectivity was not consistent with any of these main topological classes. However, we were able to fit the experimental data with a more general network class, of which all previous topologies were special cases. The resulting network topology could be interpreted as a combination of physical spatial dependence and nonspatial, hierarchical clustering. SIGNIFICANCE STATEMENT The connectivity of cortical microcircuits exhibits features that are inconsistent with a simple random network. Here, we show that several classes of network models can account for this nonrandom structure despite qualitative differences in their global structure.

  14. Hybrid image and blood sampling input function for quantification of small animal dynamic PET data

    International Nuclear Information System (INIS)

    Shoghi, Kooresh I.; Welch, Michael J.

    2007-01-01

    We describe and validate a hybrid image- and blood-sampling (HIBS) method to derive the input function for quantification of microPET mouse data. The HIBS algorithm derives the peak of the input function from the image, which is corrected for recovery, while the tail is derived from 5 to 6 optimally placed blood sampling points. A Bezier interpolation algorithm is used to link the rightmost image peak data point to the leftmost blood sampling point. To assess the performance of HIBS, 4 mice underwent 60-min microPET imaging sessions following a 0.40-0.50 mCi bolus administration of 18F-FDG. In total, 21 blood samples (blood-sampled plasma time-activity curve, bsPTAC) were obtained throughout the imaging session to compare against the proposed HIBS method. MicroPET images were reconstructed using filtered back projection with a zoom of 2.75 on the heart. Volumetric regions of interest (ROIs) were composed by drawing circular ROIs 3 pixels in diameter on 3-4 transverse planes of the left ventricle. Performance was characterized by kinetic simulations in terms of bias in parameter estimates when bsPTAC and HIBS are used as input functions. The peak of the bsPTAC curve was distorted in comparison to the HIBS-derived curve due to temporal limitations and delay in blood sampling, which affected the rates of bidirectional exchange between plasma and tissue. The results highlight limitations in using bsPTAC. The HIBS method, however, yields consistent results and is thus a substitute for bsPTAC.

  15. A novel approach for small sample size family-based association studies: sequential tests.

    Science.gov (United States)

    Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan

    2011-08-01

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) to only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample size become usable for an accurate association analysis.
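
    A minimal sketch of the SPRT decision logic with the three-way outcome described above, using a Bernoulli likelihood as an illustrative choice (e.g. a transmission indicator per informative family, with p0 = 0.5 under no association):

        import math

        def sprt_classify(observations, p0=0.5, p1=0.7, alpha=0.05, beta=0.05):
            upper = math.log((1 - beta) / alpha)
            lower = math.log(beta / (1 - alpha))
            llr = 0.0
            for x in observations:  # x is 0 or 1
                llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
                if llr >= upper:
                    return "associated"
                if llr <= lower:
                    return "not associated"
            return "keep sampling"  # the third group: evidence still insufficient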

  16. Improvement of 137Cs analysis in small volume seawater samples using the Ogoya underground facility

    International Nuclear Information System (INIS)

    Hirose, K.; Komura, K.; Kanazawa University, Ishikawa; Aoyama, M.; Igarashi, Y.

    2008-01-01

    137Cs in seawater is one of the most powerful tracers of water motion. Large volumes of samples have been required for the determination of 137Cs in seawater. This paper describes improvements to the separation and purification processes for 137Cs in seawater, which include purification of 137Cs using hexachloroplatinic acid in addition to ammonium phosphomolybdate (AMP) precipitation. As a result, we succeeded in determining 137Cs in seawater with a smaller sample volume of 10 liters by using ultra-low background gamma-spectrometry in the Ogoya underground facility. The 137Cs detection limit was about 0.1 mBq (counting time: 10^6 s). This method is applied to determine 137Cs in small samples of South Pacific deep waters. (author)
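
    For orientation, a detection limit of this kind is conventionally computed with a Currie-style formula; a minimal sketch (efficiency and emission probability are illustrative inputs, not values from the paper):

        import math

        def detection_limit_bq(background_counts, live_time_s, efficiency,
                               gamma_yield):
            """Currie detection limit (95%/95%) converted to activity (Bq)."""
            ld_counts = 2.71 + 4.65 * math.sqrt(background_counts)
            return ld_counts / (live_time_s * efficiency * gamma_yield)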

  17. Determination of phosphorus in small amounts of protein samples by ICP-MS.

    Science.gov (United States)

    Becker, J Sabine; Boulyga, Sergei F; Pickhardt, Carola; Becker, J; Buddrus, Stefan; Przybylski, Michael

    2003-02-01

    Inductively coupled plasma mass spectrometry (ICP-MS) is used for phosphorus determination in protein samples. A small amount of solid protein sample (down to 1 µg) or protein digest solution (1-10 µL) was denatured in nitric acid and hydrogen peroxide by closed-microvessel microwave digestion. Phosphorus determination was performed with an optimized analytical method using a double-focusing sector field inductively coupled plasma mass spectrometer (ICP-SFMS) and a quadrupole-based ICP-MS (ICP-QMS). For quality control of the phosphorus determination, a certified reference material (CRM), single cell proteins (BCR 273) with a high phosphorus content of 26.8 ± 0.4 mg g⁻¹, was analyzed. To study phosphorus determination in proteins while reducing the sample amount as far as possible, the homogeneity of CRM BCR 273 was investigated. The relative standard deviation and measurement accuracy in ICP-QMS were within 2%, 3.5%, 11% and 12% when using CRM BCR 273 sample weights of 40 mg, 5 mg, 1 mg and 0.3 mg, respectively. The lowest possible sample weight for an accurate phosphorus analysis in protein samples by ICP-MS is discussed. The analytical method developed was applied to the analysis of homogeneous protein samples in very low amounts [1-100 µg of solid protein sample, e.g. beta-casein, or down to 1 µL of protein or digest in solution (e.g., tau protein)]. A further reduction of the diluted protein solution volume was achieved by the application of flow injection in ICP-SFMS, which is discussed with reference to real protein digests after protein separation using 2D gel electrophoresis. The detection limits for phosphorus in biological samples were determined by ICP-SFMS down to the ng g⁻¹ level. The present work discusses the figures of merit for the determination of phosphorus in small amounts of protein sample with ICP-SFMS in comparison to ICP-QMS.

  18. Sampling Error in Relation to Cyst Nematode Population Density Estimation in Small Field Plots.

    Science.gov (United States)

    Župunski, Vesna; Jevtić, Radivoje; Jokić, Vesna Spasić; Župunski, Ljubica; Lalošević, Mirjana; Ćirić, Mihajlo; Ćurčić, Živko

    2017-06-01

    Cyst nematodes are serious plant-parasitic pests which can cause severe yield losses and extensive damage. Since there is still very little information about the error of population density estimation in small field plots, this study contributes to the broad issue of population density assessment. It was shown that there was no significant difference between cyst counts of five or seven bulk samples taken per 1-m² plot if the average cyst count per examined plot exceeds 75 cysts per 100 g of soil. Goodness of fit of the data to a probability distribution, tested with the χ² test, confirmed a negative binomial distribution of cyst counts for 21 out of 23 plots. The recommended measure of sampling precision of 17%, expressed through the coefficient of variation (cv), was achieved if plots of 1 m² contaminated with more than 90 cysts per 100 g of soil were sampled with 10-core bulk samples taken in five repetitions. If plots were contaminated with fewer than 75 cysts per 100 g of soil, 10-core bulk samples taken in seven repetitions gave a cv higher than 23%. This study indicates that more attention should be paid to the estimation of sampling error in experimental field plots to ensure more reliable estimation of the population density of cyst nematodes.
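
    A minimal sketch of the two quantities underlying the analysis above: the coefficient of variation of repeated bulk-sample counts from a plot, and a moment-based check of negative binomial overdispersion:

        import numpy as np

        def sampling_cv(counts):
            counts = np.asarray(counts, float)
            return counts.std(ddof=1) / counts.mean()

        def neg_binomial_k(counts):
            """Moment estimate of the negative binomial clumping parameter k;
            variance <= mean (no overdispersion) returns infinity."""
            counts = np.asarray(counts, float)
            m, v = counts.mean(), counts.var(ddof=1)
            return m ** 2 / (v - m) if v > m else float("inf")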

  19. Technical Note: New methodology for measuring viscosities in small volumes characteristic of environmental chamber particle samples

    Directory of Open Access Journals (Sweden)

    L. Renbaum-Wolff

    2013-01-01

    Herein, a method for the determination of viscosities of small sample volumes is introduced, with important implications for the viscosity determination of particle samples from environmental chambers (used to simulate atmospheric conditions). The amount of sample needed is <1 μl, and the technique is capable of determining viscosities (η) ranging between 10⁻³ and 10³ Pascal seconds (Pa s) in samples that cover a range of chemical properties, with real-time relative humidity and temperature control; hence, the technique should be well-suited for determining the viscosities, under atmospherically relevant conditions, of particles collected from environmental chambers. In this technique, supermicron particles are first deposited on an inert hydrophobic substrate. Then, insoluble beads (~1 μm in diameter) are embedded in the particles. Next, a flow of gas is introduced over the particles, which generates a shear stress on the particle surfaces. The sample responds to this shear stress by generating internal circulations, which are quantified with an optical microscope by monitoring the movement of the beads. The rate of internal circulation is shown to be a function of particle viscosity but independent of the particle material for a wide range of organic and organic-water samples. A calibration curve is constructed from the experimental data that relates the rate of internal circulation to particle viscosity, and this calibration curve is successfully used to predict viscosities in multicomponent organic mixtures.

  20. Decoder calibration with ultra small current sample set for intracortical brain-machine interface

    Science.gov (United States)

    Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping

    2018-04-01

    Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and every recalibration requires a relatively large current sample set. The aim of this study is to develop an effective decoder calibration method that can achieve good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on a movement and a sensory paradigm. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical data, and the decoding performance was compared with three other calibration methods for evaluation. Main results. The PDA method closed the gap between historical and current data effectively, and made it possible to take advantage of large historical data for decoder recalibration in current data decoding. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated using the PDA method achieved much better and more robust performance in all sessions than the three other calibration methods in both monkeys. Significance. (1) With this study, transfer learning theory was brought into iBMI decoder calibration for the first time. (2) Unlike in most transfer learning studies, the target data in this study were an ultra-small sample set and were transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement paradigm and the sensory paradigm, indicating a viable generalization. By reducing the demand for large current training data, this new method may facilitate the application...
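
    The abstract does not spell out the alignment step, but a minimal stand-in can be built from the classic subspace-alignment idea: fit PCA bases to the large historical set and the ultra-small current set, then map the current data into the historical subspace before pooling for decoder training (an assumption for illustration, not necessarily the authors' exact PDA algorithm):

        import numpy as np

        def subspace_align(X_hist, X_curr, k=10):
            """Return k-dimensional features for historical and current trials
            (rows = trials, columns = neural features) in a shared subspace;
            assumes k <= min(number of trials, number of features)."""
            Xh = X_hist - X_hist.mean(axis=0)
            Xc = X_curr - X_curr.mean(axis=0)
            Ph = np.linalg.svd(Xh, full_matrices=False)[2][:k].T  # d x k basis
            Pc = np.linalg.svd(Xc, full_matrices=False)[2][:k].T
            Zh = Xh @ Ph                # historical data in its own subspace
            Zc = Xc @ Pc @ (Pc.T @ Ph)  # current data aligned to that subspace
            return Zh, Zc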

  1. Nano-Scale Sample Acquisition Systems for Small Class Exploration Spacecraft

    Science.gov (United States)

    Paulsen, G.

    2015-12-01

    The paradigm for space exploration is changing. Large and expensive missions are very rare and the space community is turning to smaller, lighter, and less expensive missions that can still perform great exploration. These missions are also within reach of commercial companies such as the Google Lunar X Prize teams that develop small-scale lunar missions. Recent commercial endeavors such as Planet Labs Inc. and Skybox Imaging Inc. show that there are new benefits and business models associated with the miniaturization of space hardware. The Nano-Scale Sample Acquisition System includes the NanoDrill for capture of small rock cores and PlanetVac for capture of surface regolith. These two systems are part of the ongoing effort to develop "Micro Sampling" systems for deployment by small spacecraft with limited payload capacities. The ideal applications include prospecting missions to the Moon and asteroids. The MicroDrill is a rotary-percussive coring drill that captures cores 7 mm in diameter and up to 2 cm long. The drill weighs less than 1 kg and can capture a core from a 40 MPa strength rock within a few minutes, with less than 10 W of power and less than 10 N of preload. The PlanetVac is a pneumatic regolith acquisition system that can capture a surface sample in a touch-and-go maneuver. These sampling systems were integrated within the footpads of a commercial quadcopter for testing. As such, they could also be used by geologists on Earth to explore difficult-to-reach locations.

  2. Precise Th/U-dating of small and heavily coated samples of deep sea corals

    Science.gov (United States)

    Lomitschka, Michael; Mangini, Augusto

    1999-07-01

    Marine carbonate skeletons such as deep-sea corals are frequently coated with iron and manganese oxides/hydroxides which adsorb additional thorium and uranium out of the sea water. A new cleaning procedure has been developed to reduce this contamination. In this further cleaning step a solution of Na2EDTA and ascorbic acid is used, whose composition is optimised especially for samples of 20 mg weight. It was first tested on aliquots of a reef-building coral which had been artificially contaminated with powdered ferromanganese nodule. Applied to heavily contaminated deep-sea corals (scleractinia), it reduced excess 230Th by another order of magnitude beyond usual cleaning procedures. The measurement of at least three fractions of different contamination, together with an additional standard correction for contaminated carbonates, results in Th/U ages corrected for the authigenic component. Good agreement between Th/U and 14C ages can be achieved even for extremely coated corals.

  3. An improved optimization algorithm of the three-compartment model with spillover and partial volume corrections for dynamic FDG PET images of small animal hearts in vivo

    Science.gov (United States)

    Li, Yinlin; Kundu, Bijoy K.

    2018-03-01

    The three-compartment model with spillover (SP) and partial volume (PV) corrections has been widely used for noninvasive kinetic parameter studies of dynamic 2-[18F]fluoro-2-deoxy-D-glucose (FDG) positron emission tomography images of small animal hearts in vivo. However, the approach still suffers from estimation uncertainty or slow convergence caused by the commonly used optimization algorithms. The aim of this study was to develop an improved optimization algorithm with better estimation performance. Femoral artery blood samples, image-derived input functions from heart ventricles, and myocardial time-activity curves (TACs) were derived from data on 16 C57BL/6 mice obtained from the UCLA Mouse Quantitation Program. Parametric equations of the average myocardium and blood pool TACs with SP and PV corrections in a three-compartment tracer kinetic model were formulated. A hybrid method integrating an artificial immune system and an interior-reflective Newton method was developed to solve the equations. Two penalty functions and one late time-point tail vein blood sample were used to constrain the objective function. The estimation accuracy of the method was validated by comparing results with experimental values using the errors in the areas under the curves (AUCs) of the model-corrected input function (MCIF) and the 18F-FDG influx constant Ki. Moreover, the elapsed time was used to measure the convergence speed. The overall AUC error of MCIF for the 16 mice averaged -1.4 ± 8.2%, with a correlation coefficient of 0.9706. Similar results were seen in the overall Ki percentage error, which was 0.4 ± 5.8% with a correlation coefficient of 0.9912. The t-test P values for both showed no significant difference. The mean and standard deviation of the MCIF AUC and Ki percentage errors are lower than those of previously published methods. The computation time of the hybrid method is also several times lower than that of a purely stochastic...
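
    For reference, the quantity being validated is the standard FDG net influx constant of the two-tissue (three-compartment) model; a one-line sketch from fitted rate constants:

        def fdg_influx_ki(K1, k2, k3):
            """Ki = K1 * k3 / (k2 + k3), the FDG net influx constant."""
            return K1 * k3 / (k2 + k3)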

  4. A passive guard for low thermal conductivity measurement of small samples by the hot plate method

    International Nuclear Information System (INIS)

    Jannot, Yves; Godefroy, Justine; Degiovanni, Alain; Grigorova-Moutiers, Veneta

    2017-01-01

    Hot plate methods under steady state conditions are based on a 1D model to estimate the thermal conductivity, using measurements of the temperatures T0 and T1 of the two sides of the sample and of the heat flux crossing it. To be consistent with the hypothesis of a 1D heat flux, either a guarded hot plate apparatus is used, or the temperature is measured at the centre of the sample. The latter method can be used only if the ratio of thickness to width of the sample is sufficiently low, while the guarded hot plate method requires large-width samples (typical cross-section of 0.6 × 0.6 m²). That is why neither method can be used for small-width samples. The method presented in this paper is based on an optimal choice of the temperatures T0 and T1 relative to the ambient temperature Ta, enabling the estimation of the thermal conductivity with a centered hot plate method by applying the 1D heat flux model. It is shown that these optimal values do not depend on the size or the thermal conductivity of the samples (in the range 0.015-0.2 W m⁻¹ K⁻¹), but only on Ta. The experimental results obtained validate the method for several reference samples for values of the thickness-to-width ratio up to 0.3, thus enabling the measurement of the thermal conductivity of samples having a small cross-section, down to 0.045 × 0.045 m². (paper)
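
    The underlying 1D steady-state relation is Fourier's law; a minimal sketch of the conductivity estimate from the measured quantities (values illustrative):

        def thermal_conductivity(heat_flux_w_m2, thickness_m, t0, t1):
            """lambda = phi * e / (T0 - T1), in W m^-1 K^-1."""
            return heat_flux_w_m2 * thickness_m / (t0 - t1)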

  5. A simple Bayesian approach to quantifying confidence level of adverse event incidence proportion in small samples.

    Science.gov (United States)

    Liu, Fang

    2016-01-01

    In both clinical development and post-marketing of a new therapy or a new treatment, incidence of an adverse event (AE) is always a concern. When sample sizes are small, large sample-based inferential approaches on an AE incidence proportion in a certain time period no longer apply. In this brief discussion, we introduce a simple Bayesian framework to quantify, in small sample studies and the rare AE case, (1) the confidence level that the incidence proportion of a particular AE p is over or below a threshold, (2) the lower or upper bounds on p with a certain level of confidence, and (3) the minimum required number of patients with an AE before we can be certain that p surpasses a specific threshold, or the maximum allowable number of patients with an AE after which we can no longer be certain that p is below a certain threshold, given a certain confidence level. The method is easy to understand and implement; the interpretation of the results is intuitive. This article also demonstrates the usefulness of simple Bayesian concepts when it comes to answering practical questions.
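
    A minimal sketch of the calculations the paper describes, assuming a conjugate Beta(a, b) prior (uniform by default) so that the posterior for the incidence proportion p is Beta(a + events, b + n - events):

        from scipy import stats

        def ae_posterior_summary(n_events, n_patients, threshold, a=1.0, b=1.0):
            """Posterior P(p > threshold) and a 95% upper credible bound on p."""
            post = stats.beta(a + n_events, b + n_patients - n_events)
            return post.sf(threshold), post.ppf(0.95)

        # e.g. 2 AEs among 25 patients against a 5% threshold:
        # prob_over, upper95 = ae_posterior_summary(2, 25, 0.05)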

  6. Tools for Inspecting and Sampling Waste in Underground Radioactive Storage Tanks with Small Access Riser Openings

    International Nuclear Information System (INIS)

    Nance, T.A.

    1998-01-01

    Underground storage tanks with 2-inch to 3-inch diameter access ports at the Department of Energy's Savannah River Site have been used to store radioactive solvents and sludge. In order to close these tanks, their contents must first be quantified in terms of volume and chemical and radioactive characteristics. To provide information on the volume of waste contained within the tanks, a small remote inspection system was needed. This inspection system was designed to provide lighting, pan and tilt capabilities, zoom, and color video in an inexpensive package. The system also needed to be used inside a plastic tent built over the access port to contain any contamination exiting from the port. It had to be built to travel into the small port opening, through the riser pipe, into the evacuated space of the tank, and back out of the riser pipe and access port, with no possibility of being caught and blocking the access riser. Long thin plates that blocked the inspection system from penetrating into the tank interiors were found in many access riser pipes. Retrieval tools were developed to clear the plates from the access pipes, along with sampling devices that provide safe containment for the samples. This paper discusses the inspection systems, the tools for clearing access pipes, and the solvent sampling tools developed to evaluate the contents of the underground solvent storage tanks.

  7. Gray bootstrap method for estimating frequency-varying random vibration signals with small samples

    Directory of Open Access Journals (Sweden)

    Wang Yanqing

    2014-04-01

    During environmental testing, the estimation of random vibration signals (RVS) is an important technique for airborne platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerances method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. The gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. Firstly, the estimated indexes are obtained, including the estimated interval, the estimated uncertainty, the estimated value, the estimated error and the estimated reliability. In addition, GBM is applied to estimating a single flight test of a certain aircraft. At last, in order to evaluate the estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM) in testing analysis. The results show that GBM is superior for estimating dynamic signals with small samples, and the estimated reliability is proved to be 100% at the given confidence level.
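
    The gray component of such a method is typically the GM(1,1) model; a minimal sketch of a GM(1,1) forecast, to which bootstrap replicates of the signal could be fed to form percentile intervals (an illustration of the general idea, not the authors' exact scheme):

        import numpy as np

        def gm11_forecast(x0, steps=1):
            """Fit GM(1,1) to a positive series x0 and extend it by `steps`."""
            x0 = np.asarray(x0, float)
            x1 = np.cumsum(x0)
            z = 0.5 * (x1[1:] + x1[:-1])              # background values
            B = np.column_stack([-z, np.ones_like(z)])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
            k = np.arange(len(x0) + steps)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            return np.concatenate(([x0[0]], np.diff(x1_hat)))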

  8. Simultaneous small-sample comparisons in longitudinal or multi-endpoint trials using multiple marginal models

    DEFF Research Database (Denmark)

    Pallmann, Philip; Ritz, Christian; Hothorn, Ludwig A

    2018-01-01

    Simultaneous inference in longitudinal, repeated-measures, and multi-endpoint designs can be onerous, especially when trying to find a reasonable joint model from which the interesting effects and covariances are estimated. A novel statistical approach known as multiple marginal models greatly simplifies the modelling process: the core idea is to "marginalise" the problem and fit multiple small models to different portions of the data, and then estimate the overall covariance matrix in a subsequent, separate step. Using these estimates guarantees strong control of the family-wise error rate, however only asymptotically. In this paper, we show how to make the approach also applicable to small-sample data problems. Specifically, we discuss the computation of adjusted P values and simultaneous confidence bounds for comparisons of randomised treatment groups as well as for levels...

  9. Small Sample Reactivity Measurements in the RRR/SEG Facility: Reanalysis using TRIPOLI-4

    Energy Technology Data Exchange (ETDEWEB)

    Hummel, Andrew [Idaho National Lab. (INL), Idaho Falls, ID (United States); Palmiotti, Guiseppe [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-08-01

    This work involved reanalyzing the RRR/SEG integral experiments performed at the Rossendorf facility in Germany throughout the 1970s and 80s. These small sample reactivity worth measurements were carried out using the pile oscillator technique for many different fission products, structural materials, and standards. The coupled fast-thermal system was designed such that the measurements would provide insight into elemental data, specifically the competing effects between neutron capture and scatter. Comparing the measured to the calculated reactivity values can then provide adjustment criteria to ultimately improve nuclear data for fast reactor designs. Due to the extremely small reactivity effects measured (typically less than 1 pcm) and the specific heterogeneity of the core, the tool chosen for this analysis was TRIPOLI-4. This code allows for high-fidelity 3-dimensional geometric modeling, and the most recent, unreleased version is capable of exact perturbation theory.

  10. Evaluation of Approaches to Analyzing Continuous Correlated Eye Data When Sample Size Is Small.

    Science.gov (United States)

    Huang, Jing; Huang, Jiayan; Chen, Yong; Ying, Gui-Shuang

    2018-02-01

    To evaluate the performance of commonly used statistical methods for analyzing continuous correlated eye data when the sample size is small. We simulated correlated continuous data from two designs: (1) two eyes of a subject in two comparison groups; (2) two eyes of a subject in the same comparison group, under various sample sizes (5-50), inter-eye correlations (0-0.75) and effect sizes (0-0.8). Simulated data were analyzed using the paired t-test, the two-sample t-test, the Wald test and score test using generalized estimating equations (GEE), and the F-test using a linear mixed-effects model (LMM). We compared type I error rates and statistical power, and demonstrated the analysis approaches by analyzing two real datasets. In design 1, the paired t-test and LMM perform better than GEE, with nominal type I error rates and higher statistical power. In design 2, no test performs uniformly well: the two-sample t-test (on the average of two eyes or a random eye) achieves better control of type I error but yields lower statistical power. In both designs, the GEE Wald test inflates the type I error rate and the GEE score test has lower power. When the sample size is small, some commonly used statistical methods do not perform well. The paired t-test and LMM perform best when the two eyes of a subject are in two different comparison groups, and the t-test using the average of two eyes performs best when the two eyes are in the same comparison group. The study design should be considered when selecting the appropriate analysis approach.
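
    A minimal sketch of the two approaches the simulations favour, using statsmodels and scipy (column names are illustrative; df is a pandas DataFrame with one row per eye):

        from scipy import stats
        import statsmodels.formula.api as smf

        def lmm_fit(df):
            """Design 1: eyes of a subject in different groups -> LMM with a
            random intercept per subject."""
            return smf.mixedlm("outcome ~ group", df, groups=df["subject"]).fit()

        def averaged_eye_ttest(df):
            """Design 2: both eyes in the same group -> two-sample t-test on
            the per-subject average of the two eyes (assumes two groups)."""
            means = df.groupby(["subject", "group"])["outcome"].mean().reset_index()
            g0, g1 = [d["outcome"].values for _, d in means.groupby("group")]
            return stats.ttest_ind(g0, g1)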

  11. Investigation of Super Learner Methodology on HIV-1 Small Sample: Application on Jaguar Trial Data.

    Science.gov (United States)

    Houssaïni, Allal; Assoumou, Lambert; Marcelin, Anne Geneviève; Molina, Jean Michel; Calvez, Vincent; Flandre, Philippe

    2012-01-01

    Background. Many statistical models have been tested to predict phenotypic or virological response from genotypic data. A statistical framework called the Super Learner has been introduced either to compare different methods/learners (discrete Super Learner) or to combine them in a Super Learner prediction method. Methods. The Jaguar trial is used to apply the Super Learner framework. The Jaguar study is an "add-on" trial comparing the efficacy of adding didanosine to an ongoing failing regimen. Our aim was also to investigate the impact of using different cross-validation strategies and different loss functions. Four different partitions between training and validation sets were tested through two loss functions. Six statistical methods were compared. We assess performance by evaluating R² values and accuracy by calculating the rates of patients being correctly classified. Results. Our results indicated that the more recent Super Learner methodology of building a new predictor based on a weighted combination of different methods/learners provided good performance. A simple linear model provided results similar to those of this new predictor. A slight discrepancy arises between the two loss functions investigated, and a slight difference also arises between results based on cross-validated risks and results from the full dataset. The Super Learner methodology and the linear model correctly classified around 80% of patients. The difference between the lower and higher rates is around 10 percent. The number of mutations retained in the different learners also varies from one to 41. Conclusions. The more recent Super Learner methodology combining the predictions of many learners provided good performance on our small dataset.
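
    A minimal sketch of the Super Learner idea, combining base learners through cross-validated predictions, using scikit-learn's stacking as a stand-in for the original implementation (the learner choices are illustrative, not the trial's six methods):

        from sklearn.ensemble import RandomForestRegressor, StackingRegressor
        from sklearn.linear_model import LinearRegression, Ridge

        def make_super_learner():
            base = [("ols", LinearRegression()),
                    ("ridge", Ridge(alpha=1.0)),
                    ("rf", RandomForestRegressor(n_estimators=200, random_state=0))]
            # the meta-learner weights the base learners' out-of-fold predictions
            return StackingRegressor(estimators=base,
                                     final_estimator=LinearRegression(), cv=5)

        # e.g. fit on mutation features X and response y, then score by R^2:
        # model = make_super_learner().fit(X_train, y_train)
        # r2 = model.score(X_test, y_test)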

  12. Small Scale Mixing Demonstration Batch Transfer and Sampling Performance of Simulated HLW - 12307

    Energy Technology Data Exchange (ETDEWEB)

    Jensen, Jesse; Townson, Paul; Vanatta, Matt [EnergySolutions, Engineering and Technology Group, Richland, WA, 99354 (United States)

    2012-07-01

    The ability to effectively mix, sample, certify, and deliver consistent batches of High Level Waste (HLW) feed from the Hanford Double Shell Tanks (DST) to the Waste Treatment Plant (WTP) has been recognized as a significant mission risk with potential to impact mission length and the quantity of HLW glass produced. At the end of 2009, DOE's Tank Operations Contractor, Washington River Protection Solutions (WRPS), awarded a contract to EnergySolutions to design, fabricate and operate a demonstration platform called the Small Scale Mixing Demonstration (SSMD) to establish pre-transfer sampling capacity and batch transfer performance data at two different scales. This data will be used to examine the baseline capacity of a tank mixed via rotational jet mixers to transfer consistent or bounding batches, and to provide scale-up information to predict full-scale operational performance. This information will in turn be used to define the baseline capacity of such a system to transfer and sample batches sent to WTP. The SSMD platform consists of 43'' and 120'' diameter clear acrylic test vessels, each equipped with two scaled jet mixer pump assemblies, and all supporting vessels, controls, services, and simulant make-up facilities. All tank internals have been modeled, including the air lift circulators (ALCs), the steam heating coil, and the radius between the wall and floor. The test vessels are set up to simulate the transfer of HLW out of a mixed tank and to collect a pre-transfer sample in a manner similar to the proposed baseline configuration. The collected material is submitted to an NQA-1 laboratory for chemical analysis. Previous work has been done to assess tank mixing performance at both scales; it involved a combination of unique instruments to understand the three-dimensional distribution of solids, including Coriolis meter measurements and in situ chord length distribution...

  13. Static, Mixed-Array Total Evaporation for Improved Quantitation of Plutonium Minor Isotopes in Small Samples

    Science.gov (United States)

    Stanley, F. E.; Byerly, Benjamin L.; Thomas, Mariam R.; Spencer, Khalil J.

    2016-06-01

    Actinide isotope measurements are a critical signature capability in the modern nuclear forensics "toolbox", especially when interrogating anthropogenic constituents in real-world scenarios. Unfortunately, established methodologies, such as traditional total evaporation via thermal ionization mass spectrometry, struggle to confidently measure low-abundance isotope ratios. This work investigates static, mixed-array total evaporation techniques as a straightforward means of improving plutonium minor isotope measurements, which have been resistant to enhancement in recent years because of elevated radiologic concerns. Results are presented for small-sample (~20 ng) applications involving a well-known plutonium isotope reference material, CRM-126a, and compared with traditional total evaporation methods.

  14. Use of aspiration method for collecting brain samples for rabies diagnosis in small wild animals.

    Science.gov (United States)

    Iamamoto, K; Quadros, J; Queiroz, L H

    2011-02-01

    In developing countries such as Brazil, where canine rabies is still a considerable problem, samples from wildlife species are infrequently collected and submitted for rabies screening. A collaborative study involving environmental biologists and veterinarians was established for rabies epidemiological research in a specific ecological area of Sao Paulo State, Brazil. The brains of the wild animals must be collected without skull damage because skull measurements are important in identifying the captured animal species. For this purpose, samples from bats and small mammals were collected using an aspiration method, by inserting a plastic pipette into the brain through the foramen magnum. Beyond the progressive increase in the use of the plastic pipette technique in various studies, this method could also foster collaborative research between wildlife scientists and rabies epidemiologists, thus improving rabies surveillance. © 2009 Blackwell Verlag GmbH.

  15. Investigation of Phase Transition-Based Tethered Systems for Small Body Sample Capture

    Science.gov (United States)

    Quadrelli, Marco; Backes, Paul; Wilkie, Keats; Giersch, Lou; Quijano, Ubaldo; Scharf, Daniel; Mukherjee, Rudranarayan

    2009-01-01

    This paper summarizes the modeling, simulation, and testing work related to the development of technology to investigate the potential that shape memory actuation has to provide mechanically simple and affordable solutions for delivering assets to a surface and for sample capture and possible return to Earth. We investigate the structural dynamics and controllability aspects of an adaptive beam carrying an end-effector which, by changing equilibrium phases, is able to actively decouple the end-effector dynamics from the spacecraft dynamics during the surface contact phase. Asset delivery and sample capture and return are at the heart of several emerging potential missions to small bodies, such as asteroids and comets, and to the surface of large bodies, such as Titan.

  16. Modeling and Testing of Phase Transition-Based Deployable Systems for Small Body Sample Capture

    Science.gov (United States)

    Quadrelli, Marco; Backes, Paul; Wilkie, Keats; Giersch, Lou; Quijano, Ubaldo; Keim, Jason; Mukherjee, Rudranarayan

    2009-01-01

    This paper summarizes the modeling, simulation, and testing work related to the development of technology to investigate the potential that shape memory actuation has to provide mechanically simple and affordable solutions for delivering assets to a surface and for sample capture and return. We investigate the structural dynamics and controllability aspects of an adaptive beam carrying an end-effector which, by changing equilibrium phases, is able to actively decouple the end-effector dynamics from the spacecraft dynamics during the surface contact phase. Asset delivery and sample capture and return are at the heart of several emerging potential missions to small bodies, such as asteroids and comets, and to the surface of large bodies, such as Titan.

  17. On-product overlay enhancement using advanced litho-cluster control based on integrated metrology, ultra-small DBO targets and novel corrections

    Science.gov (United States)

    Bhattacharyya, Kaustuve; Ke, Chih-Ming; Huang, Guo-Tsai; Chen, Kai-Hsiung; Smilde, Henk-Jan H.; Fuchs, Andreas; Jak, Martin; van Schijndel, Mark; Bozkurt, Murat; van der Schaar, Maurits; Meyer, Steffen; Un, Miranda; Morgan, Stephen; Wu, Jon; Tsai, Vincent; Liang, Frida; den Boef, Arie; ten Berge, Peter; Kubis, Michael; Wang, Cathy; Fouquet, Christophe; Terng, L. G.; Hwang, David; Cheng, Kevin; Gau, TS; Ku, Y. C.

    2013-04-01

    Aggressive on-product overlay requirements in advanced nodes pose a formidable challenge for the semiconductor industry. This forces the industry to look beyond the traditional way of working and invest in several new technologies. Integrated metrology, in-chip overlay control, advanced sampling and process-correction mechanisms (using the highest order of correction possible with the scanner interface today) are a few of the technologies considered in this publication.

  18. Increased accuracy of the carbon-14 D-xylose breath test in detecting small-intestinal bacterial overgrowth by correction with the gastric emptying rate

    International Nuclear Information System (INIS)

    Chang Chisen; Chen Granhum; Kao Chiahung; Wang Shyhjen; Peng Shihnen; Huang Chihkuen; Poon Sekkwong

    1995-01-01

    The aim of this study was to determine whether the accuracy of the 14C-D-xylose breath test for detecting bacterial overgrowth can be increased by correction with the gastric emptying rate of 14C-D-xylose. Ten culture-positive patients and ten culture-negative controls were included in the study. Small-intestinal aspirates for bacteriological culture were obtained endoscopically. A liquid-phase gastric emptying study was performed simultaneously to assess the amount of 14C-D-xylose that entered the small intestine. The results for the percentage of expired 14CO2 at 30 min were corrected by the amount of 14C-D-xylose that had entered the small intestine. There were six patients in the culture-positive group with a 14CO2 concentration above the normal limit. Three out of four patients with initially negative results using the uncorrected method proved to be positive after correction. All three of these patients had prolonged gastric emptying of 14C-D-xylose. When compared with cultures of small-intestinal aspirates, the sensitivity and specificity of the uncorrected 14C-D-xylose breath test were 60% and 90%, respectively. In contrast, the sensitivity and specificity of the corrected 14C-D-xylose breath test improved to 90% and 100%, respectively. (orig./MG)
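
    The correction itself is a simple normalisation; a minimal sketch with illustrative numbers:

        def corrected_breath_value(pct_14co2, fraction_emptied):
            """Normalise the expired 14CO2 percentage by the fraction of the
            14C-D-xylose dose that has emptied into the small intestine."""
            return pct_14co2 / fraction_emptied

        # e.g. 1.2% expired at 30 min but only 40% of the tracer emptied:
        # corrected_breath_value(1.2, 0.40)  # -> 3.0%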

  19. Determination of small field synthetic single-crystal diamond detector correction factors for CyberKnife, Leksell Gamma Knife Perfexion and linear accelerator.

    Science.gov (United States)

    Veselsky, T; Novotny, J; Pastykova, V; Koniarova, I

    2017-12-01

    The aim of this study was to determine small field correction factors for a synthetic single-crystal diamond detector (PTW microDiamond) for routine use in clinical dosimetric measurements. Correction factors following the small-field Alfonso formalism were calculated by comparing the PTW microDiamond measured ratio M_Qclin^fclin / M_Qmsr^fmsr with Monte Carlo (MC)-based field output factors Ω_Qclin,Qmsr^fclin,fmsr determined using a Dosimetry Diode E or with MC simulation itself. Diode measurements were used for the CyberKnife and the Varian Clinac 2100C/D linear accelerator. PTW microDiamond correction factors for the Leksell Gamma Knife (LGK) were derived using MC-simulated reference values from the manufacturer. PTW microDiamond correction factors for CyberKnife field sizes 25-5 mm were mostly smaller than 1% (except for 2.9% for the 5 mm Iris field and 1.4% for the 7.5 mm fixed cone field). Corrections of 0.1% and 2.0% needed to be applied to PTW microDiamond measurements for the 8 mm and 4 mm collimators of the LGK Perfexion, respectively. Finally, the PTW microDiamond M_Qclin^fclin / M_Qmsr^fmsr for the linear accelerator varied from the MC-corrected Dosimetry Diode data by less than 0.5% (except for the 1 × 1 cm² field size, with a 1.3% deviation). Given the low resulting correction factor values, the PTW microDiamond detector may be considered an almost ideal tool for relative small field dosimetry in a large variety of stereotactic and radiosurgery treatment devices. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
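
    In the Alfonso formalism the detector-specific output correction factor is the ratio of the true field output factor to the detector reading ratio; a minimal sketch (values illustrative):

        def output_correction_factor(omega, m_clin, m_msr):
            """k_{Qclin,Qmsr}^{fclin,fmsr} = Omega / (M_Qclin^fclin / M_Qmsr^fmsr)."""
            return omega / (m_clin / m_msr)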

  20. Vector analysis of high (≥3 diopters) astigmatism correction using small-incision lenticule extraction and laser in situ keratomileusis.

    Science.gov (United States)

    Chan, Tommy C Y; Wang, Yan; Ng, Alex L K; Zhang, Jiamei; Yu, Marco C Y; Jhanji, Vishal; Cheng, George P M

    2018-06-13

    To compare astigmatic correction in high myopic astigmatism between small-incision lenticule extraction and laser in situ keratomileusis (LASIK) using vector analysis. Hong Kong Laser Eye Center, Hong Kong. Retrospective case series. Patients who had correction of myopic astigmatism of 3.0 diopters (D) or more with either small-incision lenticule extraction or femtosecond laser-assisted LASIK were included. Only the left eye was included for analysis. Visual and refractive results were presented and compared between groups. The study comprised 105 patients (40 eyes in the small-incision lenticule extraction group and 65 eyes in the femtosecond laser-assisted LASIK group). The mean preoperative manifest cylinder was -3.42 ± 0.55 D (SD) in the small-incision lenticule extraction group and -3.47 ± 0.49 D in the LASIK group (P = .655). At 3 months, there was no significant between-group difference in uncorrected distance visual acuity (P = .915) or manifest spherical equivalent (P = .145). Ninety percent and 95.4% of eyes were within ±0.5 D of the attempted cylindrical correction in the small-incision lenticule extraction and LASIK groups, respectively (P = .423). Vector analysis showed comparable target-induced astigmatism (P = .709), surgically induced astigmatism vector (P = .449), difference vector (P = .335), and magnitude of error (P = .413) between groups. The absolute angle of error was 1.88 ± 2.25 degrees in the small-incision lenticule extraction group and 1.37 ± 1.58 degrees in the LASIK group (P = .217). Small-incision lenticule extraction offered astigmatic correction comparable to that of LASIK in eyes with high myopic astigmatism. Copyright © 2018 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
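
    The vector quantities compared above (target-induced astigmatism, surgically induced astigmatism, difference vector, angle of error) come from doubled-angle vector arithmetic. The sketch below shows that arithmetic in generic form; it illustrates the standard approach rather than the authors' code, and the input conventions (magnitudes in diopters, axes in degrees) are assumed.

        import math

        def doubled_angle(magnitude, axis_deg):
            # Astigmatism repeats every 180 degrees, so analyses double the
            # axis angle to map cylinders onto ordinary 2-D vectors.
            t = math.radians(2.0 * axis_deg)
            return (magnitude * math.cos(t), magnitude * math.sin(t))

        def difference_vector(tia, sia):
            # tia, sia: (magnitude, axis) pairs for the target-induced and
            # surgically induced astigmatism. The difference vector is the
            # correction still needed to reach the intended target.
            tx, ty = doubled_angle(*tia)
            sx, sy = doubled_angle(*sia)
            dx, dy = tx - sx, ty - sy
            magnitude = math.hypot(dx, dy)
            axis = math.degrees(math.atan2(dy, dx)) / 2.0 % 180.0
            return magnitude, axis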

  1. Monitoring, Modeling, and Diagnosis of Alkali-Silica Reaction in Small Concrete Samples

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, Vivek [Idaho National Lab. (INL), Idaho Falls, ID (United States); Cai, Guowei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gribok, Andrei V. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mahadevan, Sankaran [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    Assessment and management of aging concrete structures in nuclear power plants require a more systematic approach than simple reliance on existing code margins of safety. Structural health monitoring of concrete structures aims to understand the current health condition of a structure, based on heterogeneous measurements, in order to produce high-confidence actionable information regarding structural integrity that supports operational and maintenance decisions. This report describes alkali-silica reaction (ASR) degradation mechanisms and factors influencing ASR. A fully coupled thermo-hydro-mechanical-chemical model developed by Saouma and Perotti, which takes into consideration the effects of stress on the reaction kinetics and anisotropic volumetric expansion, is presented in this report. This model is implemented in the GRIZZLY code, based on the Multiphysics Object-Oriented Simulation Environment (MOOSE). The model implemented in GRIZZLY is used to randomly initiate ASR in 2D and 3D lattices to study the percolation aspects of concrete. The percolation aspects help determine the transport properties of the material, and therefore the durability and service life of concrete. This report summarizes the effort to develop small concrete samples with embedded glass to mimic ASR. The concrete samples were treated in water and sodium hydroxide solution at elevated temperature to study how the ingress of sodium and hydroxide ions at elevated temperature impacts concrete samples embedded with glass. A thermal camera was used to monitor changes in the concrete samples, and the results are summarized.

  2. SU-E-T-469: A Practical Approach for the Determination of Small Field Output Factors Using Published Monte Carlo Derived Correction Factors

    International Nuclear Information System (INIS)

    Calderon, E; Siergiej, D

    2014-01-01

    Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Measured output factor values between detectors are known to show large deviations as field sizes decrease. No set standard exists to resolve this difference in measurement. We observed differences between measured output factors of up to 14% using two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source-to-surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of detector response on field size. An equation expressing the published Monte Carlo correction factors as a function of field size for each detector was derived. The measured output factors were then multiplied by the calculated correction factors. EBT3 gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3 to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0 to 3.4%. Output factors determined with film agree within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small field output factors for the EDGE and A16 detectors. Using this method, we were able to decrease the deviation between the two detectors from 14.8% to 3.4%.
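
    The daisy-chaining and correction steps described above can be summarized in a few lines. In the sketch below the function names, the intermediate field, and the source of the correction factor k_field are illustrative assumptions, not the authors' published procedure.

        def daisy_chained_output_factor(m_diode_cone, m_diode_int,
                                        m_chamber_int, m_chamber_ref):
            # The diode measures the small cone relative to an intermediate
            # field (e.g. 4 x 4 cm) where both detector types are reliable;
            # the ion chamber then links the intermediate field to the
            # 10.4 x 10.4 cm reference field, minimizing each detector's
            # field-size dependence.
            return (m_diode_cone / m_diode_int) * (m_chamber_int / m_chamber_ref)

        def corrected_output_factor(of_measured, k_field):
            # k_field: published Monte Carlo derived correction factor for
            # this detector, evaluated at this field size.
            return of_measured * k_field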

  3. Predicting Antitumor Activity of Peptides by Consensus of Regression Models Trained on a Small Data Sample

    Directory of Open Access Journals (Sweden)

    Ivanka Jerić

    2011-11-01

    Predicting the antitumor activity of compounds using regression models trained on a small number of compounds with measured biological activity is an ill-posed inverse problem. Yet it occurs very often within the academic community. To counteract, to some extent, the overfitting caused by small training data, we propose to use a consensus of six regression models for predicting the biological activity of a virtual library of compounds. The QSAR descriptors of 22 compounds related to the opioid growth factor (OGF, Tyr-Gly-Gly-Phe-Met) with known antitumor activity were used to train the regression models: the feed-forward artificial neural network, the k-nearest neighbor, sparseness-constrained linear regression, and the linear and nonlinear (with polynomial and Gaussian kernels) support vector machines. The regression models were applied to a virtual library of 429 compounds, which resulted in six lists of candidate compounds ranked by predicted antitumor activity. The highly ranked candidate compounds were synthesized, characterized and tested for antiproliferative activity. Some of the prepared peptides showed more pronounced activity compared with the native OGF; however, they were less active than highly ranked compounds selected previously by the radial basis function support vector machine (RBF SVM) regression model. The ill-posedness of the related inverse problem causes unstable behavior of the trained regression models on test data. These results point to the high complexity of prediction based on regression models trained on a small data sample.
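
    One simple way to realize the consensus described above is to average the per-model ranks of each library compound. The sketch below does exactly that; names are illustrative, and the paper's precise consensus rule may differ.

        import numpy as np

        def consensus_ranking(model_predictions):
            # model_predictions: dict mapping model name -> array of predicted
            # activities for the virtual library. Each model ranks the
            # compounds; averaging ranks damps the instability of any single
            # model trained on a small sample.
            ranks = []
            for preds in model_predictions.values():
                order = np.argsort(-np.asarray(preds))   # most active first
                r = np.empty_like(order)
                r[order] = np.arange(order.size)         # rank of each compound
                ranks.append(r)
            return np.mean(ranks, axis=0)                # lower = better consensus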

  4. Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data

    KAUST Repository

    Dong, Kai

    2015-09-16

    DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the "large p, small n" paradigm, the traditional Hotelling's T² test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling's test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling's test.
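
    To make the construction concrete (in generic notation, since the record gives no formulas), a one-sample diagonal Hotelling-type statistic with shrinkage variance estimators has the form

        $$T_D^2 = n\,(\bar{X} - \mu_0)^{\top} \hat{D}^{-1} (\bar{X} - \mu_0), \qquad \hat{D} = \mathrm{diag}(\tilde{\sigma}_1^2, \ldots, \tilde{\sigma}_p^2),$$

    where each $\tilde{\sigma}_j^2$ shrinks the gene-wise sample variance toward a pooled target, e.g. $\tilde{\sigma}_j^2 = (1-\lambda)\,\hat{\sigma}_j^2 + \lambda\,\bar{\sigma}^2$. Replacing the full sample covariance with the diagonal $\hat{D}$ avoids the singularity that invalidates the classical $T^2$ when $p > n$.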

  5. Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data

    KAUST Repository

    Dong, Kai; Pang, Herbert; Tong, Tiejun; Genton, Marc G.

    2015-01-01

    DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the "large p, small n" paradigm, the traditional Hotelling's T² test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling's test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling's test.

  6. Evaluation applications of instrument calibration research findings in psychology for very small samples

    Science.gov (United States)

    Fisher, W. P., Jr.; Petry, P.

    2016-11-01

    Many published research studies document item calibration invariance across samples using Rasch's probabilistic models for measurement. A new approach to outcomes evaluation for very small samples was employed for two workshop series focused on stress reduction and joyful living conducted for health system employees and caregivers since 2012. Rasch-calibrated self-report instruments measuring depression, anxiety and stress, and the joyful living effects of mindfulness behaviors were identified in peer-reviewed journal articles. Items from one instrument were modified for use with a US population, other items were simplified, and some new items were written. Participants provided ratings of their depression, anxiety and stress, and the effects of their mindfulness behaviors before and after each workshop series. The numbers of participants providing both pre- and post-workshop data were low (16 and 14). Analysis of these small data sets produced results showing that, with some exceptions, the item hierarchies defining the constructs retained the same invariant profiles they had exhibited in the published research (correlations, not disattenuated, ranged from 0.85 to 0.96). In addition, comparisons of the pre- and post-workshop measures for the three constructs showed substantively and statistically significant changes. Implications for program evaluation comparisons, quality improvement efforts, and the organization of communications concerning outcomes in clinical fields are explored.

  7. Self-attenuation correction in the environmental sample gamma spectrometry; Correcao de auto-absorcao na espectrometria gama de amostras ambientais

    Energy Technology Data Exchange (ETDEWEB)

    Venturini, Luzia; Nisti, Marcelo B. [Instituto de Pesquisas Energeticas e Nucleares (IPEN), Sao Paulo, SP (Brazil)

    1997-10-01

    Self-attenuation corrections were calculated for gamma-ray spectrometry of environmental samples with densities from 0.42 g/ml up to 1.59 g/ml, measured in Marinelli beakers and polyethylene flasks. These corrections are to be used when the counting efficiency is calculated for water measured in the same geometry. Debertin's model for the Marinelli beaker, numerical integration, and experimental linear attenuation coefficients were used. (author). 3 refs., 4 figs., 6 tabs.
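
    As background for the kind of correction computed here (a textbook slab-geometry approximation, not Debertin's full Marinelli-beaker model), the mean transmission through a homogeneous sample of thickness $t$ and linear attenuation coefficient $\mu$ is

        $$f(\mu) = \frac{1 - e^{-\mu t}}{\mu t},$$

    and the efficiency calibrated with water is rescaled by the ratio $f(\mu_{sample})/f(\mu_{water})$ at each photon energy, which is why the experimental attenuation coefficients of the samples are needed.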

  8. Sci—Fri AM: Mountain — 01: Validation of a new formulism and the related correction factors on output factor determination for small photon fields

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yizhen; Younge, Kelly; Nielsen, Michelle; Mutanga, Theodore [Peel Regional Cancer Center, Trillium Health Partners, Mississauga, ON (Canada); Cui, Congwu [Peel Regional Cancer Center, Trillium Health Partners, Mississauga, ON (Canada); Department of Radiation Oncology, University of Toronto, Toronto, ON (Canada); Das, Indra J. [Radiation Oncology Dept., Indiana University- School of Medicine, Indianapolis, IN (United States)

    2014-08-15

    Small field dosimetry measurements, including output factors, are difficult due to lack of charged-particle equilibrium, occlusion of the radiation source, the finite size of detectors, and non-water equivalence of detector components. With available detectors, significant variations can be measured, which can lead to incorrect dose delivered to patients. The IAEA/AAPM have provided a framework and formulation to correct the detector response in small photon fields. Monte Carlo derived correction factors for some commonly used small field detectors are now available; however, validation had not been performed prior to this study. An Exradin A16 chamber, an EDGE detector and an SFD detector were used to perform output factor measurements for a series of conical fields (5-30 mm) on a Varian iX linear accelerator. Discrepancies of up to 20%, 10% and 6% were observed for the 5, 7.5 and 10 mm cones between the initial output factors measured by the EDGE detector and the A16 ion chamber, while the discrepancies for conical fields larger than 10 mm were less than 4%. After application of the corrections, the output factors agreed with each other to within 1%. Caution is needed when determining output factors for small photon fields, especially for fields 10 mm in diameter or smaller. More than one type of detector should be used, each with proper corrections applied to the measurement results. It is concluded that with the application of correction factors to appropriately chosen detectors, output can be measured accurately for small fields.

  9. The use of secondary ion mass spectrometry in forensic analyses of ultra-small samples

    Science.gov (United States)

    Cliff, John

    2010-05-01

    It is becoming increasingly important in forensic science to perform chemical and isotopic analyses on very small sample sizes. Moreover, in some instances the signature of interest may be embedded in a vast background, making analyses impossible by bulk methods. Recent advances in instrumentation make secondary ion mass spectrometry (SIMS) a powerful tool to apply to these problems. As an introduction, we present three types of forensic analyses in which SIMS may be useful. The causal organism of anthrax (Bacillus anthracis) chelates Ca and other metals during spore formation. Thus, the spores contain a trace element signature related to the growth medium that produced the organisms. Although other techniques have been shown to be useful in analyzing these signatures, the sample size requirements are generally relatively large. We have shown that time-of-flight SIMS (TOF-SIMS), combined with multivariate analysis, can clearly separate Bacillus sp. cultures prepared in different growth media using analytical spot sizes containing approximately one nanogram of spores. An important emerging field in forensic analysis is the provenance of fecal pollution. The strategy of choice for these analyses, developing host-specific nucleic acid probes, has met with considerable difficulty due to lack of specificity of the probes. One potentially fruitful strategy is to combine in situ nucleic acid probing with high-precision isotopic analyses. Bulk analyses of human and bovine fecal bacteria, for example, indicate a relative difference in δ¹³C content of about 4 per mil. We have shown that sample sizes of several nanograms can be analyzed with the IMS 1280 with precision capable of separating two per mil differences in δ¹³C. The NanoSIMS 50 is capable of much better spatial resolution than the IMS 1280, albeit at a cost in analytical precision. Nevertheless, we have documented precision capable of separating five per mil differences in δ¹³C using analytical spots containing

  10. Vertical Sampling Scales for Atmospheric Boundary Layer Measurements from Small Unmanned Aircraft Systems (sUAS)

    Directory of Open Access Journals (Sweden)

    Benjamin L. Hemingway

    2017-09-01

    The lowest portion of the Earth’s atmosphere, known as the atmospheric boundary layer (ABL, plays an important role in the formation of weather events. Simple meteorological measurements collected from within the ABL, such as temperature, pressure, humidity, and wind velocity, are key to understanding the exchange of energy within this region, but conventional surveillance techniques such as towers, radar, weather balloons, and satellites do not provide adequate spatial and/or temporal coverage for monitoring weather events. Small unmanned aircraft, or aerial, systems (sUAS provide a versatile, dynamic platform for atmospheric sensing that can provide higher spatio-temporal sampling frequencies than available through most satellite sensing methods. They are also able to sense portions of the atmosphere that cannot be measured from ground-based radar, weather stations, or weather balloons and have the potential to fill gaps in atmospheric sampling. However, research on the vertical sampling scales for collecting atmospheric measurements from sUAS and the variabilities of these scales across atmospheric phenomena (e.g., temperature and humidity is needed. The objective of this study is to use variogram analysis, a common geostatistical technique, to determine optimal spatial sampling scales for two atmospheric variables (temperature and relative humidity captured from sUAS. Results show that vertical sampling scales of approximately 3 m for temperature and 1.5–2 m for relative humidity were sufficient to capture the spatial structure of these phenomena under the conditions tested. Future work is needed to model these scales across the entire ABL as well as under variable conditions.
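
    The variogram analysis at the heart of this study can be summarized with the classical semivariogram estimator. Below is a minimal sketch for a single vertical profile; function and variable names are illustrative, and a simple lag-with-tolerance binning is assumed.

        import numpy as np

        def empirical_semivariogram(heights, values, lag, tol):
            # Classical (Matheron) estimator: half the mean squared difference
            # between observations whose vertical separation is within lag +/- tol.
            heights = np.asarray(heights, float)
            values = np.asarray(values, float)
            sep = np.abs(heights[:, None] - heights[None, :])
            pairs = np.triu(np.abs(sep - lag) <= tol, k=1)   # count each pair once
            i, j = np.nonzero(pairs)
            if i.size == 0:
                return float("nan")                          # no pairs at this lag
            return 0.5 * np.mean((values[i] - values[j]) ** 2)

        # Sweeping the lag and looking for where the curve levels off gives the
        # kind of optimal vertical sampling scale reported above (about 3 m for
        # temperature and 1.5-2 m for relative humidity).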

  11. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    International Nuclear Information System (INIS)

    Adekola, A.S.; Colaresi, J.; Douwen, J.; Jaederstroem, H.; Mueller, W.F.; Yocum, K.M.; Carmichael, K.

    2015-01-01

    Environmental scientific research requires a detector with sensitivity low enough to reveal the presence of any contaminant in the sample within a reasonable counting time. Canberra developed the germanium detector geometry called the Small Anode Germanium (SAGe) Well detector, which is now available commercially. The SAGe Well detector is a new type of low-capacitance germanium well detector manufactured using small anode technology, capable of advancing many environmental scientific research applications. The performance of this detector has been evaluated for a range of sample sizes and geometries counted inside the well and on the end cap of the detector. The detector has energy resolution performance similar to semi-planar detectors, and offers significant improvement over the existing coaxial and well detectors. Energy resolution performance of 750 eV Full Width at Half Maximum (FWHM) at 122 keV γ-ray energy and resolution of 2.0-2.3 keV FWHM at 1332 keV γ-ray energy are guaranteed for detector volumes up to 425 cm³. The SAGe Well detector offers an optional 28 mm well diameter with the same energy resolution as the standard 16 mm well. Such outstanding resolution performance will benefit environmental applications in revealing the detailed radionuclide content of samples, particularly at low energy, and will enhance detection sensitivity, resulting in reduced counting time. The detector is compatible with electric coolers without any sacrifice in performance and supports the Canberra mathematical efficiency calibration method (In Situ Object Calibration Software, or ISOCS, and Laboratory Source-less Calibration Software, or LABSOCS). In addition, the SAGe Well detector supports true coincidence summing available in the ISOCS/LABSOCS framework. The improved resolution performance greatly enhances the detection sensitivity of this new detector for a range of sample sizes and geometries counted inside the well. This results in lower minimum detectable activity.

  12. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    Energy Technology Data Exchange (ETDEWEB)

    Adekola, A.S.; Colaresi, J.; Douwen, J.; Jaederstroem, H.; Mueller, W.F.; Yocum, K.M.; Carmichael, K. [Canberra Industries Inc., 800 Research Parkway, Meriden, CT 06450 (United States)

    2015-07-01

    Environmental scientific research requires a detector with sensitivity low enough to reveal the presence of any contaminant in the sample within a reasonable counting time. Canberra developed the germanium detector geometry called the Small Anode Germanium (SAGe) Well detector, which is now available commercially. The SAGe Well detector is a new type of low-capacitance germanium well detector manufactured using small anode technology, capable of advancing many environmental scientific research applications. The performance of this detector has been evaluated for a range of sample sizes and geometries counted inside the well and on the end cap of the detector. The detector has energy resolution performance similar to semi-planar detectors, and offers significant improvement over the existing coaxial and well detectors. Energy resolution performance of 750 eV Full Width at Half Maximum (FWHM) at 122 keV γ-ray energy and resolution of 2.0-2.3 keV FWHM at 1332 keV γ-ray energy are guaranteed for detector volumes up to 425 cm³. The SAGe Well detector offers an optional 28 mm well diameter with the same energy resolution as the standard 16 mm well. Such outstanding resolution performance will benefit environmental applications in revealing the detailed radionuclide content of samples, particularly at low energy, and will enhance detection sensitivity, resulting in reduced counting time. The detector is compatible with electric coolers without any sacrifice in performance and supports the Canberra mathematical efficiency calibration method (In Situ Object Calibration Software, or ISOCS, and Laboratory Source-less Calibration Software, or LABSOCS). In addition, the SAGe Well detector supports true coincidence summing available in the ISOCS/LABSOCS framework. The improved resolution performance greatly enhances the detection sensitivity of this new detector for a range of sample sizes and geometries counted inside the well. This results in lower minimum detectable activity.

  13. Measuring Blood Glucose Concentrations in Photometric Glucometers Requiring Very Small Sample Volumes.

    Science.gov (United States)

    Demitri, Nevine; Zoubir, Abdelhak M

    2017-01-01

    Glucometers present an important self-monitoring tool for diabetes patients and, therefore, must exhibit high accuracy as well as good usability. Based on an invasive photometric measurement principle that drastically reduces the volume of the blood sample needed from the patient, we present a framework that is capable of dealing with small blood samples while maintaining the required accuracy. The framework consists of two major parts: 1) image segmentation; and 2) convergence detection. Step 1 is based on iterative mode-seeking methods to estimate the intensity value of the region of interest. We present several variations of these methods and give theoretical proofs of their convergence. Our approach is able to deal with changes in the number and position of clusters without any prior knowledge. Furthermore, we propose a method based on sparse approximation to decrease the computational load while maintaining accuracy. Step 2 is achieved by employing temporal tracking and prediction, thereby decreasing the measurement time and thus improving usability. Our framework is tested on several real datasets with different characteristics. We show that we are able to estimate the underlying glucose concentration from much smaller blood samples than the current state of the art, with sufficient accuracy according to the most recent ISO standards, and to reduce measurement time significantly compared with state-of-the-art methods.

  14. Perspectives of an acoustic–electrostatic/electrodynamic hybrid levitator for small fluid and solid samples

    International Nuclear Information System (INIS)

    Lierke, E G; Holitzner, L

    2008-01-01

    The feasibility of an acoustic–electrostatic hybrid levitator for small fluid and solid samples is evaluated. A proposed design and its theoretical assessment are based on the optional implementation of simple hardware components (ring electrodes) and standard laboratory equipment into typical commercial ultrasonic standing wave levitators. These levitators allow precise electrical charging of drops during syringe- or ink-jet-type deployment. The homogeneous electric 'Millikan field' between the grounded ultrasonic transducer and the electrically charged reflector provides axial compensation of the sample weight in an indifferent equilibrium, which can be balanced by using commercial optical position sensors in combination with standard electronic PID position control. Radial electrostatic repulsion forces between the charged sample and concentric ring electrodes of the same polarity provide stable positioning at the centre of the levitator. The levitator can be used in a pure acoustic or electrostatic mode, or in a hybrid combination of both subsystems. Analytical evaluations of the radial–axial force profiles are verified with detailed numerical finite element calculations under consideration of alternative boundary conditions. The simple hardware modification with implemented double-ring electrodes in ac/dc operation is also feasible for an electrodynamic/acoustic hybrid levitator.

  15. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Reasonable prediction is of significant practical value for stochastic and unstable time series analysis with small or limited sample sizes. Motivated by the rolling idea in grey theory and the practical relevance of very-short-term forecasting, or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window for the next step-ahead forecast rolls on by adding the most recent prediction result while deleting the first value of the previously used sample data set. This rolling mechanism is an efficient technique owing to its improved forecasting accuracy, applicability in cases of limited and unstable data, and small computational effort. The general performance, influence of sample size, nonlinear dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
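
    The rolling mechanism described above is straightforward to express in code. The sketch below fits an ordinary least-squares AR model on the current window, predicts one step ahead, then rolls the window forward; it is a generic illustration with assumed names (the paper's AR formulation for nonstationary series is more elaborate).

        import numpy as np

        def rolling_ar_forecast(series, order, horizon):
            # One-step-ahead AR forecasts with a rolling window: after each
            # prediction the window drops its oldest value and appends the
            # newest prediction, so the model is refit on the shifted sample.
            window = list(series)
            preds = []
            for _ in range(horizon):
                X = np.array([window[i:i + order]
                              for i in range(len(window) - order)])
                y = np.array(window[order:])
                A = np.column_stack([np.ones(len(X)), X])    # intercept + lags
                coef, *_ = np.linalg.lstsq(A, y, rcond=None)
                nxt = coef[0] + coef[1:] @ np.array(window[-order:])
                preds.append(float(nxt))
                window = window[1:] + [float(nxt)]           # roll the window
            return preds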

  16. Sensitive power compensated scanning calorimeter for analysis of phase transformations in small samples

    International Nuclear Information System (INIS)

    Lopeandia, A.F.; Cerdo, Ll.; Clavaguera-Mora, M.T.; Arana, Leonel R.; Jensen, K.F.; Munoz, F.J.; Rodriguez-Viejo, J.

    2005-01-01

    We have designed and developed a sensitive scanning calorimeter for use with microgram or submicrogram thin film or powder samples. Semiconductor processing techniques are used to fabricate membrane-based microreactors with a small addenda heat capacity of 120 nJ/K at room temperature. At heating rates below 10 K/s, the heat released or absorbed by the sample during a given transformation is compensated through a resistive Pt heater by a digital controller, so that the calorimeter works as a power-compensated device. Its use and dynamic sensitivity are demonstrated by analyzing the melting behavior of thin films of indium and high-density polyethylene. Melting enthalpies in the range of 40-250 μJ for sample masses on the order of 1.5 μg have been measured with accuracy better than 5% at heating rates of ∼0.2 K/s. The noise level, which limits the signal-to-noise ratio and is set by the electronic setup, is 200 nW

  17. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

    Science.gov (United States)

    Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

    2017-12-01

    The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of the small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare the different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters, and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of the different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of the different methods through real-life examples.
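
    Among non-parametric constructions of the kind compared above, the percentile bootstrap is one of the simplest; the sketch below pairs it with the Mann-Whitney AUC estimator. This is a generic illustration, not one of the record's 29 methods reproduced verbatim.

        import numpy as np

        def bootstrap_auc_ci(pos, neg, n_boot=2000, alpha=0.05, seed=0):
            # Percentile bootstrap CI for the AUC, with the Mann-Whitney
            # estimator: AUC = P(X_pos > X_neg) + 0.5 * P(X_pos == X_neg).
            rng = np.random.default_rng(seed)
            pos = np.asarray(pos, float)
            neg = np.asarray(neg, float)

            def auc(p, n):
                diff = p[:, None] - n[None, :]
                return (diff > 0).mean() + 0.5 * (diff == 0).mean()

            stats = []
            for _ in range(n_boot):
                bp = rng.choice(pos, size=pos.size, replace=True)
                bn = rng.choice(neg, size=neg.size, replace=True)
                stats.append(auc(bp, bn))
            lo, hi = np.percentile(stats, [100 * alpha / 2,
                                           100 * (1 - alpha / 2)])
            return auc(pos, neg), (lo, hi)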

  18. The small sample uncertainty aspect in relation to bullwhip effect measurement

    DEFF Research Database (Denmark)

    Nielsen, Erland Hejn

    2009-01-01

    The bullwhip effect as a concept has been known for almost half a century, starting with the Forrester effect. The bullwhip effect is observed in many supply chains, and it is generally accepted as a potential malice. Despite this fact, the bullwhip effect still seems to be first and foremost a conceptual phenomenon. This paper primarily investigates why this might be so, and thereby examines the various aspects, possibilities and obstacles that must be taken into account when considering the potential practical use and measurement of the bullwhip effect in order to actually get the supply chain under control. Special emphasis is put on the unavoidable small-sample uncertainty aspects relating to the measurement or estimation of the bullwhip effect.

  19. Determination of 35S-aminoacyl-transfer ribonucleic acid specific radioactivity in small tissue samples

    International Nuclear Information System (INIS)

    Samarel, A.M.; Ogunro, E.A.; Ferguson, A.G.; Lesch, M.

    1981-01-01

    Rate determination of protein synthesis utilizing tracer amino acid incorporation requires accurate assessment of the specific radioactivity of the labeled precursor aminoacyl-tRNA pool. Previously published methods presumably useful for the measurement of any aminoacyl-tRNA were unsuccessful when applied to [³⁵S]methionine, due to the unique chemical properties of this amino acid. Herein we describe modifications of these methods necessary for the measurement of ³⁵S-aminoacyl-tRNA specific radioactivity from small tissue samples incubated in the presence of [³⁵S]methionine. The use of [³⁵S]methionine of high specific radioactivity enables analysis of the methionyl-tRNA from less than 100 mg of tissue. Conditions for optimal recovery of ³⁵S-labeled dansyl-amino acid derivatives are presented and possible applications of this method are discussed.

  20. Determination of 35S-aminoacyl-transfer ribonucleic acid specific radioactivity in small tissue samples

    Energy Technology Data Exchange (ETDEWEB)

    Samarel, A.M.; Ogunro, E.A.; Ferguson, A.G.; Lesch, M.

    1981-11-15

    Rate determination of protein synthesis utilizing tracer amino acid incorporation requires accurate assessment of the specific radioactivity of the labeled precursor aminoacyl-tRNA pool. Previously published methods presumably useful for the measurement of any aminoacyl-tRNA were unsuccessful when applied to [³⁵S]methionine, due to the unique chemical properties of this amino acid. Herein we describe modifications of these methods necessary for the measurement of ³⁵S-aminoacyl-tRNA specific radioactivity from small tissue samples incubated in the presence of [³⁵S]methionine. The use of [³⁵S]methionine of high specific radioactivity enables analysis of the methionyl-tRNA from less than 100 mg of tissue. Conditions for optimal recovery of ³⁵S-labeled dansyl-amino acid derivatives are presented and possible applications of this method are discussed.

  1. Basic distribution free identification tests for small size samples of environmental data

    Energy Technology Data Exchange (ETDEWEB)

    Federico, A.G.; Musmeci, F. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dipt. Ambiente

    1998-01-01

    Testing two or more data sets for the hypothesis that they are sampled from the same population is often required in environmental data analysis. Typically the available samples contain few data, and the assumption of normal distributions is often not realistic. On the other hand, the spread of today's powerful personal computers opens new opportunities based on massive use of CPU resources. The paper reviews the problem and introduces the feasibility of two non-parametric approaches based on the intrinsic equiprobability properties of the data samples. The first is based on full resampling, while the second is based on a bootstrap approach. An easy-to-use program is presented. A case study is given, based on the Chernobyl children contamination data.
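
    As an illustration of the first (full-resampling) approach, the sketch below implements a generic two-sample permutation test of the equiprobability hypothesis. It is a minimal example of the technique, not the program described in the report, and the mean difference is one of several possible test statistics.

        import numpy as np

        def permutation_test(a, b, n_perm=10000, seed=0):
            # Under H0 the pooled observations are exchangeable between the
            # two groups, so the observed mean difference is compared against
            # its distribution over random relabelings.
            rng = np.random.default_rng(seed)
            a = np.asarray(a, float)
            b = np.asarray(b, float)
            pooled = np.concatenate([a, b])
            observed = abs(a.mean() - b.mean())
            count = 0
            for _ in range(n_perm):
                rng.shuffle(pooled)
                diff = abs(pooled[:a.size].mean() - pooled[a.size:].mean())
                if diff >= observed:
                    count += 1
            return (count + 1) / (n_perm + 1)   # small-sample-safe p-value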

  2. Sampling versus systematic full lymphatic dissection in surgical treatment of non-small cell lung cancer.

    Science.gov (United States)

    Koulaxouzidis, Georgios; Karagkiouzis, Grigorios; Konstantinou, Marios; Gkiozos, Ioannis; Syrigos, Konstantinos

    2013-04-22

    The extent of mediastinal lymph node assessment during surgery for non-small cell cancer remains controversial. Different techniques are used, ranging from simple visual inspection of the unopened mediastinum to an extended bilateral lymph node dissection. Furthermore, different terms are used to define these techniques. Sampling is the removal of one or more lymph nodes under the guidance of pre-operative findings. Systematic (full) nodal dissection is the removal of all mediastinal tissue containing the lymph nodes systematically within anatomical landmarks. A Medline search was conducted to identify articles in the English language that addressed the role of mediastinal lymph node resection in the treatment of non-small cell lung cancer. Opinions as to the reasons for favoring full lymphatic dissection include complete resection, improved nodal staging and better local control due to resection of undetected micrometastasis. Arguments against routine full lymphatic dissection are increased morbidity, increase in operative time, and lack of evidence of improved survival. For complete resection of non-small cell lung cancer, many authors recommend a systematic nodal dissection as the standard approach during surgery, and suggest that this provides both adequate nodal staging and guarantees complete resection. Whether extending the lymph node dissection influences survival or recurrence rate is still not known. There are valid arguments in favor in terms not only of an improved local control but also of an improved long-term survival. However, the impact of lymph node dissection on long-term survival should be further assessed by large-scale multicenter randomized trials.

  3. Identification of mistakes and their correction by a small group discussion as a revision exercise at the end of a teaching module in biochemistry.

    Science.gov (United States)

    Bobby, Zachariah; Nandeesha, H; Sridhar, M G; Soundravally, R; Setiya, Sajita; Babu, M Sathish; Niranjan, G

    2014-01-01

    Graduate medical students often get little opportunity to clarify their doubts and reinforce their concepts after lecture classes. The Medical Council of India (MCI) encourages group discussions among students. We evaluated the effect of having graduate medical students identify mistakes in a given set of wrong statements and correct them in a small group discussion as a revision exercise. At the end of a module, a pre-test consisting of multiple-choice questions (MCQs) was conducted. Later, a set of incorrect statements related to the topic was given to the students, and they were asked to identify the mistakes and correct them in a small group discussion. The effects on low, medium and high achievers were evaluated by a post-test and delayed post-tests with the same set of MCQs. The mean post-test marks were significantly higher in all three groups than the pre-test marks. The gain from the small group discussion was equal among low, medium and high achievers, and it was retained in all three groups after 15 days. Identification of mistakes in statements and their correction by a small group discussion is an effective, but unconventional, revision exercise in biochemistry. Copyright 2014, NMJI.

  4. Small population size of Pribilof Rock Sandpipers confirmed through distance-sampling surveys in Alaska

    Science.gov (United States)

    Ruthrauff, Daniel R.; Tibbitts, T. Lee; Gill, Robert E.; Dementyev, Maksim N.; Handel, Colleen M.

    2012-01-01

    The Rock Sandpiper (Calidris ptilocnemis) is endemic to the Bering Sea region and unique among shorebirds in the North Pacific for wintering at high latitudes. The nominate subspecies, the Pribilof Rock Sandpiper (C. p. ptilocnemis), breeds on four isolated islands in the Bering Sea and appears to spend the winter primarily in Cook Inlet, Alaska. We used a stratified systematic sampling design and line-transect method to survey the entire breeding range of this population during springs 2001-2003. Densities were up to four times higher on the uninhabited and more northerly St. Matthew and Hall islands than on St. Paul and St. George islands, which both have small human settlements and introduced reindeer herds. Differences in density, however, appeared to be more related to differences in vegetation than to anthropogenic factors, raising some concern for prospective effects of climate change. We estimated the total population at 19 832 birds (95% CI 17 853–21 930), ranking it among the smallest of North American shorebird populations. To determine the vulnerability of C. p. ptilocnemis to anthropogenic and stochastic environmental threats, future studies should focus on determining the amount of gene flow among island subpopulations, the full extent of the subspecies' winter range, and the current trajectory of this small population.
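
    For context, the conventional line-transect density estimator that underlies surveys of this kind (a standard distance-sampling formula, not reproduced from this record) is

        $$\hat{D} = \frac{n}{2 w L \hat{P}},$$

    where $n$ is the number of birds detected, $L$ the total transect length, $w$ the strip half-width, and $\hat{P}$ the detection probability within the strip, estimated from the fitted detection function. Multiplying $\hat{D}$ by the stratum areas and summing gives the kind of total population estimate reported above.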

  5. A review of empirical research related to the use of small quantitative samples in clinical outcome scale development.

    Science.gov (United States)

    Houts, Carrie R; Edwards, Michael C; Wirth, R J; Deal, Linda S

    2016-11-01

    There has been a notable increase in the advocacy of using small-sample designs as an initial quantitative assessment of item and scale performance during the scale development process. This is particularly true in the development of clinical outcome assessments (COAs), where Rasch analysis has been advanced as an appropriate statistical tool for evaluating developing COAs using a small sample. We review the benefits such methods are purported to offer from both a practical and a statistical standpoint, and detail several problematic areas, concerning both practice and statistical theory, in the use of quantitative methods, including Rasch-consistent methods, with small samples. The feasibility of obtaining accurate information and the potential negative impacts of misusing large-sample statistical methods with small samples during COA development are discussed.

  6. Weighted piecewise LDA for solving the small sample size problem in face verification.

    Science.gov (United States)

    Kyperountas, Marios; Tefas, Anastasios; Pitas, Ioannis

    2007-03-01

    A novel algorithm that can be used to boost the performance of face-verification methods that utilize Fisher's criterion is presented and evaluated. The algorithm is applied to similarity, or matching error, data and provides a general solution for overcoming the "small sample size" (SSS) problem, where the lack of sufficient training samples causes improper estimation of a linear separation hyperplane between the classes. Two independent phases constitute the proposed method. Initially, a set of weighted piecewise discriminant hyperplanes is used in order to provide a more accurate discriminant decision than the one produced by the traditional linear discriminant analysis (LDA) methodology. The expected classification ability of this method is investigated through a series of simulations. The second phase defines proper combinations of person-specific similarity scores and describes an outlier removal process that further enhances the classification ability. The proposed technique has been tested on the M2VTS and XM2VTS frontal face databases. Experimental results indicate that the proposed framework greatly improves the face-verification performance.

  7. SU-E-T-101: Determination and Comparison of Correction Factors Obtained for TLDs in Small Field Lung Heterogenous Phantom Using Acuros XB and EGSnrc

    International Nuclear Information System (INIS)

    Soh, R; Lee, J; Harianto, F

    2014-01-01

    Purpose: To determine and compare the correction factors obtained for TLDs in a 2 × 2 cm² small field in a lung heterogeneous phantom using Acuros XB (AXB) and EGSnrc. Methods: This study simulates the correction factors due to the perturbation of TLD-100 chips (Harshaw/Thermo Scientific, 3 × 3 × 0.9 mm³, 2.64 g/cm³) in a small-field lung medium for Stereotactic Body Radiation Therapy (SBRT). A physical lung phantom was simulated by a 14 cm thick composite cork phantom (0.27 g/cm³, HU: -743 ± 11) sandwiched between 4 cm thick slabs of Plastic Water (CIRS, Norfolk). Composite cork has been shown to be a good lung substitute material for dosimetric studies. A 6 MV photon beam from a Varian Clinac iX (Varian Medical Systems, Palo Alto, CA) with field size 2 × 2 cm² was simulated. Depth dose profiles were obtained from the Eclipse treatment planning system Acuros XB (AXB) and independently from DOSxyznrc, EGSnrc. Correction factors were calculated as the ratio of unperturbed to perturbed dose. Since AXB has limitations in simulating actual material compositions, EGSnrc also simulated the AXB-based material composition for comparison to the actual lung phantom. Results: TLD-100, with its finite size and relatively high density, causes significant perturbation in a 2 × 2 cm² small field in a low-density lung phantom. Correction factors calculated by both EGSnrc and AXB were found to be as low as 0.9. It is expected that the correction factor obtained by EGSnrc will be more accurate, as it is able to simulate the actual phantom material compositions. AXB has a limited material library and therefore only approximates the composition of the TLD, composite cork and Plastic Water, contributing to uncertainties in the TLD correction factors. Conclusion: It is expected that the correction factors obtained by EGSnrc will be more accurate. Studies will be done to investigate the correction factors for higher energies, where the perturbation may be more pronounced.

  8. SU-C-201-06: Small Field Correction Factors for the MicroDiamond Detector in the Gamma Knife-Model C Derived Using Monte Carlo Methods

    International Nuclear Information System (INIS)

    Barrett, J C; Knill, C

    2016-01-01

    Purpose: To determine small field correction factors for PTW's microDiamond detector in Elekta's Gamma Knife Model C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model C; additionally, the results contribute to the discussion on the water equivalence of the relatively new microDiamond detector and its overall effectiveness in small field applications. Methods: The small field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using the BEAMnrc and DOSRZnrc user codes. The model was validated by simulating field output factors and measurement ratios for an available ABS plastic phantom and comparing the simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more water-like solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC-determined field output factors agreed within 0.6% of the TPS and 1.4% of film, and MC-simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4% ± 1.7% for all helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2% ± 1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no correction except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. With the MC-calculated correction factors applied, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes.

  9. SU-C-201-06: Small Field Correction Factors for the MicroDiamond Detector in the Gamma Knife-Model C Derived Using Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, J C [Wayne State University, Detroit, MI (United States); Karmanos Cancer Institute McLaren-Macomb, Clinton Township, MI (United States); Knill, C [Wayne State University, Detroit, MI (United States); Beaumont Hospital, Canton, MI (United States)

    2016-06-15

    Purpose: To determine small field correction factors for PTW's microDiamond detector in Elekta's Gamma Knife Model C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model C; additionally, the results contribute to the discussion on the water equivalence of the relatively new microDiamond detector and its overall effectiveness in small field applications. Methods: The small field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using the BEAMnrc and DOSRZnrc user codes. The model was validated by simulating field output factors and measurement ratios for an available ABS plastic phantom and comparing the simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more water-like solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC-determined field output factors agreed within 0.6% of the TPS and 1.4% of film, and MC-simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4% ± 1.7% for all helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2% ± 1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no correction except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. With the MC-calculated correction factors applied, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes.

  10. Identification of multiple mRNA and DNA sequences from small tissue samples isolated by laser-assisted microdissection.

    Science.gov (United States)

    Bernsen, M R; Dijkman, H B; de Vries, E; Figdor, C G; Ruiter, D J; Adema, G J; van Muijen, G N

    1998-10-01

    Molecular analysis of small tissue samples has become increasingly important in biomedical studies. Using a laser dissection microscope and modified nucleic acid isolation protocols, we demonstrate that multiple mRNA as well as DNA sequences can be identified from a single-cell sample. In addition, we show that the specificity of procurement of tissue samples is not compromised by smear contamination resulting from scraping of the microtome knife during sectioning of lesions. The procedures described herein thus allow for efficient RT-PCR or PCR analysis of multiple nucleic acid sequences from small tissue samples obtained by laser-assisted microdissection.

  11. 40 CFR Appendix A to Subpart F of... - Sampling Plans for Selective Enforcement Auditing of Small Nonroad Engines

    Science.gov (United States)

    2010-07-01

    Appendix A to Subpart F of Part 90—Sampling Plans for Selective Enforcement Auditing of Small Nonroad Engines. Protection of Environment; Control of Emissions from Nonroad Spark-Ignition Engines at or Below 19 Kilowatts; Selective Enforcement Auditing (Pt. 90, Subpt. F, App. A).

  12. Evaluating the biological potential in samples returned from planetary satellites and small solar system bodies: framework for decision making

    National Research Council Canada - National Science Library

    National Research Council Staff; Space Studies Board; Division on Engineering and Physical Sciences; National Research Council; National Academy of Sciences

    Evaluating the Biological Potential in Samples Returned from Planetary Satellites and Small Solar System Bodies: Framework for Decision Making. Task Group on Sample Return from Small Solar System Bodies, Space Studies Board, Commission on Physical Sciences, Mathematics, and Applications, National Research Council. National Academy Press, Washington, D.C., 1998.

  13. Radioisotopic method for the measurement of lipolysis in small samples of human adipose tissue

    International Nuclear Information System (INIS)

    Leibel, R.L.; Hirsch, J.; Berry, E.M.; Gruen, R.K.

    1984-01-01

    To facilitate the study of adrenoreceptor response in small needle biopsy samples of human subcutaneous adipose tissue, we developed a dual radioisotopic technique for measuring lipolysis rate. Aliquots (20-75 mg) of adipose tissue fragments were incubated in a buffered albumin medium containing [³H]palmitate and [¹⁴C]glucose, each of high specific activity. In neutral glycerides synthesized in this system, [¹⁴C]glucose is incorporated exclusively into the glyceride-glycerol moiety and ³H appears solely in the esterified fatty acid. Alpha-2 and beta-1 adrenoreceptor activation of tissue incubated in this system does not alter rates of ¹⁴C-labeled glyceride accumulation, but does produce a respective increase or decrease in the specific activity of fatty acids esterified into newly synthesized glycerides. This alteration in esterified fatty acid specific activity is reflected in the ¹⁴C:³H ratio in newly synthesized triglycerides extracted from the incubated adipose tissue. There is a high correlation (r = 0.90) between the ¹⁴C:³H ratio in triglycerides and the rate of lipolysis as reflected in glycerol release into the incubation medium. The degree of adrenoreceptor activation by various concentrations of lipolytic and anti-lipolytic substances can be assessed by comparing this ratio in stimulated tissue to that characterizing unstimulated tissue or the incubation medium. This technique permits the study of very small, unweighed tissue biopsy fragments, the only limitation on sensitivity being the specific activity of the medium glucose and palmitate. It is, therefore, useful for serial examinations of adipose tissue adrenoreceptor dose-response characteristics under a variety of clinical circumstances.

  14. Corrections of arterial input function for dynamic H₂¹⁵O PET to assess perfusion of pelvic tumours: arterial blood sampling versus image extraction

    International Nuclear Information System (INIS)

    Luedemann, L; Sreenivasa, G; Michel, R; Rosner, C; Plotkin, M; Felix, R; Wust, P; Amthauer, H

    2006-01-01

    Assessment of perfusion with ¹⁵O-labelled water (H₂¹⁵O) requires measurement of the arterial input function (AIF). The arterial time-activity curve (TAC) measured using the peripheral sampling scheme requires corrections for delay and dispersion. In this study, parametrizations with and without arterial spillover correction for fitting of the tissue curve are evaluated. Additionally, a completely noninvasive method for generating the AIF from a dynamic positron emission tomography (PET) acquisition is applied to assess perfusion of pelvic tumours. This method uses a volume of interest (VOI) to extract the TAC from the femoral artery. The VOI TAC is corrected for spillover using a separate tissue TAC, and for recovery by determining the recovery coefficient on a coregistered CT data set. The techniques were applied in five patients with pelvic tumours who underwent a total of 11 examinations. Delay and dispersion correction of the blood TAC without arterial spillover correction yielded solutions inconsistent with physiology in seven examinations. Correction of arterial spillover increased the fitting accuracy and yielded consistent results in all patients. Generation of an AIF from PET image data was investigated as an alternative to arterial blood sampling and was shown to have an intrinsic potential to determine the AIF noninvasively and reproducibly. The AIF extracted from a VOI in a dynamic PET scan was similar in shape to the blood AIF but yielded significantly higher tissue perfusion values (mean of 104.0 ± 52.0%) and lower partition coefficients (-31.6 ± 24.2%). The perfusion values and partition coefficients determined with the VOI technique have to be corrected in order to compare the results with those of studies using a blood AIF.
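
    For context, a widely used model for the delay and dispersion corrections mentioned above (a generic formulation; the study's exact parametrization may differ) writes the measured peripheral curve as a delayed, dispersed version of the true input,

        $$C_{meas}(t) = C_{true}(t - \Delta t) \otimes \frac{1}{\tau}\, e^{-t/\tau},$$

    which for monoexponential dispersion inverts in closed form as $C_{true}(t - \Delta t) = C_{meas}(t) + \tau\, \frac{d}{dt} C_{meas}(t)$: the true AIF is recovered by adding a scaled derivative of the measured curve and shifting it by the delay $\Delta t$.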

  15. Filter Bank Regularized Common Spatial Pattern Ensemble for Small Sample Motor Imagery Classification.

    Science.gov (United States)

    Park, Sang-Hoon; Lee, David; Lee, Sang-Goog

    2018-02-01

    For the last few years, many feature extraction methods have been proposed based on biological signals. Among these, brain signals have the advantage that they can be obtained even from people with peripheral nervous system damage. Motor imagery electroencephalograms (EEG) are inexpensive to measure, offer a high temporal resolution, and are intuitive. Therefore, they have received a significant amount of attention in various fields, including signal processing, cognitive science, and medicine. The common spatial pattern (CSP) algorithm is a useful method for feature extraction from motor imagery EEG. However, performance degradation occurs in a small-sample setting (SSS), because CSP depends on sample-based covariance. Since the active frequency range differs between subjects, it is also inconvenient to set the frequency range anew every time. In this paper, we propose a feature extraction method based on a filter bank to solve these problems. The proposed method consists of five steps. First, the motor imagery EEG is divided into sub-bands using a filter bank. Second, the regularized CSP (R-CSP) is applied to the divided EEG. Third, features are selected according to mutual information using the individual feature algorithm. Fourth, parameter sets are selected for the ensemble. Finally, classification is performed using an ensemble based on the selected features. The brain-computer interface competition III data set IVa is used to evaluate the performance of the proposed method. The proposed method improves the mean classification accuracy by 12.34%, 11.57%, 9%, 4.95%, and 4.47% compared with CSP, SR-CSP, R-CSP, filter bank CSP (FBCSP), and SR-FBCSP. Compared with the filter bank R-CSP, a parameter-selection version of the proposed method, the classification accuracy is improved by 3.49%. In particular, the proposed method shows a large improvement in performance in the SSS.
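
    A compact sketch of the filter-bank R-CSP front end described in the first two steps might look as follows; the band edges, filter order, and shrinkage weight `alpha` are placeholder choices, not the paper's settings, and trials are assumed shaped (channels, samples).

```python
import numpy as np
from scipy.linalg import eigh
from scipy.signal import butter, sosfiltfilt

def reg_cov(trials, alpha=0.1):
    """Average normalized trial covariance, shrunk toward the identity (R-CSP-like)."""
    c = np.mean([x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)
    return (1 - alpha) * c + alpha * np.eye(len(c)) * np.trace(c) / len(c)

def csp_filters(trials_a, trials_b, n_pairs=2, alpha=0.1):
    """Spatial filters from the generalized eigenproblem Ca w = l (Ca + Cb) w."""
    ca, cb = reg_cov(trials_a, alpha), reg_cov(trials_b, alpha)
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]  # most discriminative pairs
    return vecs[:, picks].T

def fbcsp_features(trials_a, trials_b, fs=250, bands=((8, 12), (12, 16), (16, 24))):
    """Log-variance CSP features computed per sub-band of the filter bank."""
    feats_a, feats_b = [], []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        fa = [sosfiltfilt(sos, x, axis=-1) for x in trials_a]
        fb = [sosfiltfilt(sos, x, axis=-1) for x in trials_b]
        w = csp_filters(fa, fb)
        feats_a.append([np.log(np.var(w @ x, axis=-1)) for x in fa])
        feats_b.append([np.log(np.var(w @ x, axis=-1)) for x in fb])
    return np.hstack(feats_a), np.hstack(feats_b)

rng = np.random.default_rng(0)
a = [rng.standard_normal((8, 500)) for _ in range(20)]  # toy class-A trials
b = [rng.standard_normal((8, 500)) for _ in range(20)]  # toy class-B trials
fa, fb = fbcsp_features(a, b)
print(fa.shape, fb.shape)  # (20, 12) each: 3 bands x 4 CSP features
```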

  16. Retrospective biodosimetry with small tooth enamel samples using K-Band and X-Band

    International Nuclear Information System (INIS)

    Gomez, Jorge A.; Kinoshita, Angela; Leonor, Sergio J.; Belmonte, Gustavo C.; Baffa, Oswaldo

    2011-01-01

    In an attempt to make in vitro electron spin resonance (ESR) retrospective dosimetry of tooth enamel a less invasive method, experiments using X-Band and K-Band were performed, aiming to determine conditions that could be used in cases of accidental exposure. First, a small prism of enamel was removed and ground with an agate mortar and pestle until the particles reached a diameter of less than approximately 0.5 mm. This extraction process resulted in a lower signal artifact compared with direct enamel extraction performed with diamond burr abrasion: manual grinding of the enamel does not induce any ESR signal artifact, whereas the use of a diamond burr at low speed produces a signal artifact equivalent to the dosimetric signal induced by a dose of 500 mGy of gamma irradiation. A mass of 25 mg of enamel was removed from a sound molar tooth previously irradiated in vitro with a dose of 100 mGy. This amount of enamel was enough to detect the dosimetric signal in a standard X-Band spectrometer; however, using a K-Band spectrometer, sample masses between 5 and 10 mg were sufficient to obtain the same sensitivity. An overall evaluation of the uncertainties involved in this and other dosimetric assessments performed at our laboratory indicates that it is possible at K-Band to estimate a 100 mGy dose with 25% accuracy. The use of K-Band thus presented higher sensitivity and allowed the use of a smaller sample mass in comparison with X-Band. Finally, the restoration of the tooth after extraction of the 25 mg of enamel is described. This was conducted by dental treatment using photopolymerizable resin, which enabled complete recovery of the tooth from the functional and aesthetic viewpoints, showing that the procedure can be minimally invasive.

  17. Retrospective biodosimetry with small tooth enamel samples using K-Band and X-Band

    Energy Technology Data Exchange (ETDEWEB)

    Gomez, Jorge A. [Departamento de Fisica, FFCLRP, Universidade de Sao Paulo, 14040-901 Ribeirao Preto, Sao Paulo (Brazil); Kinoshita, Angela [Departamento de Fisica, FFCLRP, Universidade de Sao Paulo, 14040-901 Ribeirao Preto, Sao Paulo (Brazil); Universidade Sagrado Coracao - USC, 17011-160 Bauru, Sao Paulo (Brazil); Leonor, Sergio J. [Departamento de Fisica, FFCLRP, Universidade de Sao Paulo, 14040-901 Ribeirao Preto, Sao Paulo (Brazil); Belmonte, Gustavo C. [Universidade Sagrado Coracao - USC, 17011-160 Bauru, Sao Paulo (Brazil); Baffa, Oswaldo, E-mail: baffa@usp.br [Departamento de Fisica, FFCLRP, Universidade de Sao Paulo, 14040-901 Ribeirao Preto, Sao Paulo (Brazil)

    2011-09-15

    In an attempt to make in vitro electron spin resonance (ESR) retrospective dosimetry of tooth enamel a less invasive method, experiments using X-Band and K-Band were performed, aiming to determine conditions that could be used in cases of accidental exposure. First, a small prism of enamel was removed and ground with an agate mortar and pestle until the particles reached a diameter of less than approximately 0.5 mm. This extraction process resulted in a lower signal artifact compared with direct enamel extraction performed with diamond burr abrasion: manual grinding of the enamel does not induce any ESR signal artifact, whereas the use of a diamond burr at low speed produces a signal artifact equivalent to the dosimetric signal induced by a dose of 500 mGy of gamma irradiation. A mass of 25 mg of enamel was removed from a sound molar tooth previously irradiated in vitro with a dose of 100 mGy. This amount of enamel was enough to detect the dosimetric signal in a standard X-Band spectrometer; however, using a K-Band spectrometer, sample masses between 5 and 10 mg were sufficient to obtain the same sensitivity. An overall evaluation of the uncertainties involved in this and other dosimetric assessments performed at our laboratory indicates that it is possible at K-Band to estimate a 100 mGy dose with 25% accuracy. The use of K-Band thus presented higher sensitivity and allowed the use of a smaller sample mass in comparison with X-Band. Finally, the restoration of the tooth after extraction of the 25 mg of enamel is described. This was conducted by dental treatment using photopolymerizable resin, which enabled complete recovery of the tooth from the functional and aesthetic viewpoints, showing that the procedure can be minimally invasive.

  18. Measurement of double differential cross sections of charged particle emission reactions by incident DT neutrons. Correction for energy loss of charged particle in sample materials

    International Nuclear Information System (INIS)

    Takagi, Hiroyuki; Terada, Yasuaki; Murata, Isao; Takahashi, Akito

    2000-01-01

    In the measurement of charged-particle emission spectra induced by neutrons, correcting for the energy loss of the charged particles in the sample material is an important inverse problem. To deal with it, we applied the Bayesian unfolding method to correct the energy loss and tested the performance of the method. Although the method is very simple, the tests confirmed that its performance was not inferior to that of other methods, and it can therefore be a powerful tool for charged-particle spectrum measurement. (author)
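
    A minimal iterative Bayesian unfolding of the kind referred to here (D'Agostini-style) can be sketched as follows, assuming a known response matrix whose columns describe how each true energy bin is smeared by energy loss in the sample; the matrix and iteration count below are illustrative, not taken from the paper.

```python
import numpy as np

def bayes_unfold(measured, R, n_iter=10):
    """Iterative Bayesian unfolding; R[i, j] = P(measured bin i | true bin j),
    with columns of R assumed to sum to 1 (full detection efficiency)."""
    n_true = R.shape[1]
    prior = np.full(n_true, measured.sum() / n_true)  # flat starting spectrum
    for _ in range(n_iter):
        folded = R @ prior                            # expected measured spectrum
        folded[folded == 0] = 1e-12                   # avoid division by zero
        M = (R * prior).T / folded                    # Bayes inversion matrix
        prior = M @ measured                          # updated true-spectrum estimate
    return prior

# Toy example: 3 true bins smeared downward in energy by the sample.
R = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.7, 0.2],
              [0.0, 0.2, 0.8]])
true = np.array([100.0, 200.0, 50.0])
measured = R @ true
print(bayes_unfold(measured, R))  # converges toward the true spectrum
```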

  19. Small-kernel constrained-least-squares restoration of sampled image data

    Science.gov (United States)

    Hazra, Rajeeb; Park, Stephen K.

    1992-10-01

    Constrained least-squares image restoration, first proposed by Hunt twenty years ago, is a linear image restoration technique in which the restoration filter is derived by maximizing the smoothness of the restored image while satisfying a fidelity constraint related to how well the restored image matches the actual data. The traditional derivation and implementation of the constrained least-squares restoration filter is based on an incomplete discrete/discrete system model which does not account for the effects of spatial sampling and image reconstruction. For many imaging systems, these effects are significant and should not be ignored. In a recent paper Park demonstrated that a derivation of the Wiener filter based on the incomplete discrete/discrete model can be extended to a more comprehensive end-to-end, continuous/discrete/continuous model. In a similar way, in this paper, we show that a derivation of the constrained least-squares filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model and, by so doing, an improved restoration filter is derived. Building on previous work by Reichenbach and Park for the Wiener filter, we also show that this improved constrained least-squares restoration filter can be efficiently implemented as a small-kernel convolution in the spatial domain.
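
    For reference, the classical discrete/discrete CLS filter that the paper starts from can be sketched in a few lines; the blur OTF, Laplacian smoothness constraint, and regularization weight `gamma` below are illustrative stand-ins, and the paper's actual contribution (the continuous/discrete/continuous derivation and the small-kernel spatial truncation) is not reproduced here.

```python
import numpy as np

def cls_restore(image, psf, gamma=0.01):
    """Frequency-domain constrained least-squares restoration:
    W = H* / (|H|^2 + gamma |C|^2), with C the discrete Laplacian."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    lap = np.zeros(image.shape)
    lap[0, 0], lap[0, 1], lap[1, 0], lap[0, -1], lap[-1, 0] = -4, 1, 1, 1, 1
    C = np.fft.fft2(lap)
    W = np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(C) ** 2)
    return np.real(np.fft.ifft2(W * np.fft.fft2(image)))

img = np.random.rand(64, 64)
psf = np.zeros((64, 64))
psf[31:34, 31:34] = 1 / 9.0          # toy 3x3 box blur, centered
print(cls_restore(img, psf).shape)   # restored image, same shape
```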

  20. Bootstrap-DEA analysis of BRICS’ energy efficiency based on small sample data

    International Nuclear Information System (INIS)

    Song, Ma-Lin; Zhang, Lin-Ling; Liu, Wei; Fisher, Ron

    2013-01-01

    Highlights: ► The BRICS economies have flourished, with increasing energy consumption. ► Analyses and comparisons of energy efficiency are conducted among the BRICS. ► As a whole, the BRICS show low energy efficiency but a growing trend. ► The BRICS should adopt relevant energy policies based on their own conditions. - Abstract: As representatives of many emerging economies, the BRICS economies have developed greatly in recent years. Meanwhile, their share of world energy consumption has increased. It is therefore important to analyze and compare energy efficiency among them. This paper first utilizes a Super-SBM model to measure the energy efficiency of the BRICS, then analyzes their present status and development trend. Further, the bootstrap is applied to correct the DEA estimates derived from small-sample data, and finally the relationship between energy efficiency and carbon emissions is measured. Results show that the energy efficiency of the BRICS as a whole is low but increasing quickly. Also, the relationship between energy efficiency and carbon emissions varies from country to country because of different energy structures. The governments of the BRICS should make relevant energy policies according to their own conditions.
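
    A much-simplified smoothed-bootstrap bias correction in the spirit of Simar and Wilson is sketched below; the study's actual procedure may differ, `dea_scores` is a placeholder for any DEA solver (e.g. a Super-SBM implementation), and the bandwidth `h`, replicate count `B`, and toy efficiency function are all illustrative.

```python
import numpy as np

def bootstrap_bias_correct(X, Y, dea_scores, B=200, h=0.05, seed=0):
    """Bias-correct efficiency scores via a simplified smoothed bootstrap."""
    rng = np.random.default_rng(seed)
    theta = dea_scores(X, Y)                 # original efficiency scores
    boot = np.empty((B, len(theta)))
    for b in range(B):
        # resample scores with replacement and perturb (smoothed bootstrap)
        th_star = rng.choice(theta, size=len(theta))
        th_star = th_star + h * rng.standard_normal(len(theta))
        # rescale inputs so each unit attains the resampled efficiency
        X_star = X * (theta / np.clip(th_star, 1e-6, None))[:, None]
        boot[b] = dea_scores(X_star, Y)
    bias = boot.mean(axis=0) - theta
    return theta - bias                      # bias-corrected scores

# Toy single-input/single-output "DEA": efficiency = (y/x) / max(y/x).
toy_dea = lambda X, Y: (Y[:, 0] / X[:, 0]) / np.max(Y[:, 0] / X[:, 0])
rng = np.random.default_rng(1)
X0, Y0 = rng.uniform(1, 2, (30, 1)), rng.uniform(1, 2, (30, 1))
print(bootstrap_bias_correct(X0, Y0, toy_dea)[:5])
```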

  1. Small average differences in attenuation corrected images between men and women in myocardial perfusion scintigraphy: a novel normal stress database

    International Nuclear Information System (INIS)

    Trägårdh, Elin; Sjöstrand, Karl; Jakobsson, David; Edenbrandt, Lars

    2011-01-01

    The American Society of Nuclear Cardiology and the Society of Nuclear Medicine state that incorporation of attenuation-corrected (AC) images in myocardial perfusion scintigraphy (MPS) will improve image quality, interpretive certainty, and diagnostic accuracy. However, commonly used software packages for MPS usually include normal stress databases for non-attenuation-corrected (NC) images but not for AC images. The aim of the study was to develop and compare different normal stress databases for MPS in relation to NC vs. AC images, male vs. female gender, and presence vs. absence of obesity. The principal hypothesis was that differences in mean count values between men and women would be smaller with AC than NC images, thereby allowing construction and use of a gender-independent AC stress database. Normal stress perfusion databases were developed with data from 126 male and 205 female patients with normal MPS. The following comparisons were performed for all patients and separately for normal-weight vs. obese patients: men vs. women for AC; men vs. women for NC; AC vs. NC for men; and AC vs. NC for women. When comparing AC for men vs. women, only minor differences in mean count values were observed, and there were no differences for normal-weight vs. obese patients. For all other analyses major differences were found, particularly for the inferior wall. The results support the hypothesis that it is possible to use not only gender-independent but also weight-independent AC stress databases.

  2. Impact of PET - CT motion correction in minimising the gross tumour volume in non-small cell lung cancer

    Directory of Open Access Journals (Sweden)

    Michael Masoomi

    2013-10-01

    Objective: To investigate the impact of respiratory motion on the localisation and quantification of lung lesions for the gross tumour volume, utilising an in-house developed Auto3Dreg programme and the dynamic NURBS-based cardiac-torso digitised phantom (NCAT). Methods: Respiratory motion may result in more than 30% underestimation of the SUV values of lung, liver and kidney tumour lesions. The motion correction technique adopted in this study was an image-based approach using an in-house developed voxel-intensity-based, multi-resolution multi-optimisation (MRMO) algorithm. All generated frames were co-registered to a reference frame using a time-efficient scheme. The NCAT phantom was used to generate CT attenuation maps and activity distribution volumes for the lung regions. Quantitative assessment including region of interest (ROI), image fidelity and image correlation techniques, as well as semi-quantitative line profile analysis and qualitative overlaying of non-motion- and motion-corrected image frames, was performed. Results: The largest transformation was observed in the Z-direction. The greatest translation was for frame 3 (end inspiration) and the smallest for frame 5, the frame closest to the reference frame at 67% expiration. Visual assessment of the lesion sizes (20-60 mm, at three different locations: apex, mid and base of lung) showed noticeable improvement for all foci and locations. The maximum improvement in image fidelity was from 0.395 to 0.930 within the lesion volume of interest. The greatest improvement in activity concentration underestimation, post motion correction, was 7% below the true activity for the 20 mm lesion. The discrepancies in activity underestimation were reduced with increasing lesion size. Overlaying the activity distribution on the attenuation map showed improved localisation of the PET metabolic information relative to the anatomical CT images. Conclusion: The respiratory ...

  3. The use of commercially available PC-interface cards for elemental mapping in small samples using XRF

    International Nuclear Information System (INIS)

    Abu Bakar bin Ghazali; Hoyes Garnet

    1991-01-01

    This paper demonstrates the use of ADC and reed-relay cards to scan a small sample and acquire X-ray fluorescence data. The result shows the distribution of an element, such as the zinc content of the sample, by means of colours signifying the concentration.

  4. Mapping of the shallow subsurface structures of the Essaouira basin (Morocco) by small refraction seismics: contribution of static corrections to the reinterpretation of velocity variations

    Directory of Open Access Journals (Sweden)

    Dahaoui M.

    2018-01-01

    Static corrections are a necessary step in the seismic processing sequence. This paper presents a study of these corrections in the Essaouira basin. The main objective is to calculate the static corrections from seismic data acquired in the field, in order to improve the imaging of deep structures. This means determining the top and base of the superficial layers constituting the weathered zone, and calculating the arrival delays of the seismic waves in these layers. The purpose is to cancel the effect of topography and of the weathered zone, so as to avoid any confusion during seismic and geological interpretation. The results show average static correction values varying between -127 and 282 ms (two-way time), with locally high values, particularly in the east and north-east of the basin, indicating a weathered zone with irregular topography whose thickness and velocities vary laterally. Velocity variations in the first fifty metres below the surface may introduce significant anomalies in seismic refraction, with serious consequences for interpretation or the siting of boreholes. These variations are mainly due to lateral facies changes and variations in formation thickness. The calculation of the static corrections revealed high values in certain areas (east and north-east), which will make it possible to better orient future campaigns in these zones. It is therefore necessary to concentrate the seismic core drillings and small-refraction seismic profiles by tightening the mesh of seismic lines, in order to capture the maximum static correction values and thereby obtain better imaging of the reflectors.

  5. Determination of ring correction factors for leaded gloves used in grab sampling activities at Hanford tank farms

    Energy Technology Data Exchange (ETDEWEB)

    RATHBONE, B.A.

    1999-06-24

    This study evaluates the effectiveness of lead lined gloves in reducing extremity dose from two sources specific to tank waste sampling activities: (1) sludge inside glass sample jars and (2) sludge as thin layer contamination on the exterior surface of sample jars. The response of past and present Hanford Extremity Dosimeters (ring) designs under these conditions is also evaluated.

  6. Determination of ring correction factors for leaded gloves used in grab sampling activities at Hanford tank farms

    International Nuclear Information System (INIS)

    RATHBONE, B.A.

    1999-01-01

    This study evaluates the effectiveness of lead lined gloves in reducing extremity dose from two sources specific to tank waste sampling activities: (1) sludge inside glass sample jars and (2) sludge as thin layer contamination on the exterior surface of sample jars. The response of past and present Hanford Extremity Dosimeters (ring) designs under these conditions is also evaluated.

  7. A study of the dosimetry of small field photon beams used in intensity modulated radiation therapy in inhomogeneous media: Monte Carlo simulations, and algorithm comparisons and corrections

    International Nuclear Information System (INIS)

    Jones, Andrew Osler

    2004-01-01

    There is an increasing interest in the use of inhomogeneity corrections for lung, air, and bone in radiotherapy treatment planning. Traditionally, corrections based on physical density have been used. Modern algorithms use the electron density derived from CT images. Small fields are used in both conformal radiotherapy and IMRT; however, their beam characteristics in inhomogeneous media have not been extensively studied. This work compares traditional and modern treatment planning algorithms to Monte Carlo simulations in and near low-density inhomogeneities. Field sizes ranging from 0.5 cm to 5 cm in diameter are projected onto a phantom containing inhomogeneities and depth dose curves are compared. Comparisons of the Dose Perturbation Factors (DPF) are presented as functions of density and field size. Dose Correction Factors (DCF), which scale the algorithms to the Monte Carlo data, are compared for each algorithm. Physical scaling algorithms such as Batho and Equivalent Pathlength (EPL) predict an increase in dose for small fields passing through lung tissue, where Monte Carlo simulations show a sharp dose drop. The physical model-based collapsed cone convolution (CCC) algorithm correctly predicts the dose drop, but does not accurately predict the magnitude. Because the model-based algorithms do not correctly account for the change in backscatter, the dose drop predicted by CCC occurs farther downstream compared to that predicted by the Monte Carlo simulations. Beyond the tissue inhomogeneity all of the algorithms studied predict dose distributions in close agreement with Monte Carlo simulations. Dose-volume relationships are important in understanding the effects of radiation to the lung. The dose within the lung is affected by a complex function of beam energy, lung tissue density, and field size. Dose algorithms vary in their abilities to correctly predict the dose to the lung tissue. A thorough analysis of the effects of density and field size on dose to the lung is presented.

  8. Measurement of regional cerebral blood flow using one-point arterial blood sampling and microsphere model with 123I-IMP. Correction of one-point arterial sampling count by whole brain count ratio

    International Nuclear Information System (INIS)

    Makino, Kenichi; Masuda, Yasuhiko; Gotoh, Satoshi

    1998-01-01

    The experimental subjects were 189 patients with cerebrovascular disorders. 123I-IMP (222 MBq) was administered by intravenous infusion. Continuous arterial blood sampling was carried out for 5 minutes, and arterial blood was also sampled once at 5 minutes after 123I-IMP administration. The whole-blood count of the one-point arterial sample was then compared with the octanol-extracted count of the continuous arterial sampling; a positive correlation was found between the two values. The ratio of the continuous-sampling octanol-extracted count (OC) to the one-point-sampling whole-blood count (TC5) was compared with the whole-brain count ratio (the 5:29 ratio, Cn) obtained from 1-minute planar SPECT images centred on 5 and 29 minutes after 123I-IMP administration. A correlation was found between the two values, giving the relationship OC/TC5 = 0.390969 × Cn − 0.08924. Based on this correlation equation, we calculated the theoretical continuous arterial sampling octanol-extracted count (COC): COC = TC5 × (0.390969 × Cn − 0.08924). There was good correlation between the value calculated with this equation and the actually measured value; the correlation coefficient improved from r = 0.87 to r = 0.94 when the 5:29 ratio was used for correction. For 23 of the 189 cases, additional one-point arterial samples were taken at 6, 7, 8, 9 and 10 minutes after the administration of 123I-IMP. The correlation coefficient was also improved for these other sampling times when the correction method using the 5:29 ratio was applied. It was concluded that highly accurate input functions, i.e., calculated continuous arterial sampling octanol-extracted counts, can be obtained from one-point arterial sampling whole-blood counts by performing correction with the 5:29 ratio. (K.H.)
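
    The reported correction equation is simple enough to transcribe directly; the input numbers in the example are invented for illustration, not values from the study.

```python
# Direct transcription of the correction equation reported in the record:
# COC = TC5 * (0.390969 * Cn - 0.08924), where TC5 is the 5-min one-point
# whole-blood count and Cn is the 5:29 whole-brain count ratio.

def corrected_input_count(tc5: float, cn: float) -> float:
    """Theoretical continuous-sampling octanol-extracted count (COC)."""
    return tc5 * (0.390969 * cn - 0.08924)

# Illustrative numbers only.
print(corrected_input_count(tc5=12000.0, cn=1.8))
```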

  9. Reliable calculation in probabilistic logic: Accounting for small sample size and model uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Ferson, S. [Applied Biomathematics, Setauket, NY (United States)

    1996-12-31

    A variety of practical computational problems arise in risk and safety assessments, forensic statistics and decision analyses in which the probability of some event or proposition E is to be estimated from the probabilities of a finite list of related subevents or propositions F, G, H, .... In practice, the analyst's knowledge may be incomplete in two ways. First, the probabilities of the subevents may be imprecisely known from statistical estimations, perhaps based on very small sample sizes. Second, relationships among the subevents may be known imprecisely. For instance, there may be only limited information about their stochastic dependencies. Representing probability estimates as interval ranges on [0, 1] has been suggested as a way to address the first source of imprecision. A suite of AND, OR and NOT operators defined with reference to the classical Fréchet inequalities permits these probability intervals to be used in calculations that address the second source of imprecision, in many cases in a best possible way. Using statistical confidence intervals as inputs, however, unravels the closure properties of this approach, requiring that probability estimates be characterized by a nested stack of intervals for all possible levels of statistical confidence, from a point estimate (0% confidence) to the entire unit interval (100% confidence). The corresponding logical operations implied by convolutive application of the logical operators for every possible pair of confidence intervals reduce, by symmetry, to a manageably simple level-wise iteration. The resulting calculus can be implemented in software that allows users to compute comprehensive and often level-wise best possible bounds on probabilities for logical functions of events.
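
    The Fréchet-bound interval operators described here are easy to state concretely; a minimal sketch, with intervals as (low, high) pairs on [0, 1]:

```python
# AND/OR/NOT on probability intervals under the classical Fréchet bounds,
# i.e. with no assumption about the dependence between the events.

def p_and(a, b):
    return (max(0.0, a[0] + b[0] - 1.0), min(a[1], b[1]))

def p_or(a, b):
    return (max(a[0], b[0]), min(1.0, a[1] + b[1]))

def p_not(a):
    return (1.0 - a[1], 1.0 - a[0])

# Example: P(F) in [0.2, 0.4], P(G) in [0.5, 0.7]
F, G = (0.2, 0.4), (0.5, 0.7)
print(p_and(F, G))  # (0.0, 0.4)
print(p_or(F, G))   # (0.5, 1.0)
print(p_not(F))     # (0.6, 0.8)
```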

  10. Quantum superposition of the discrete spectrum of states of a mathematical correlation molecule for small samples of biometric data

    Directory of Open Access Journals (Sweden)

    Vladimir I. Volchikhin

    2017-06-01

    Introduction: The study aims to reduce the number of errors in calculating the correlation coefficient in small test samples. Materials and Methods: We used a simulation tool for the distribution functions of the density of values of the correlation coefficient in small samples. A method for quantization of the data allows obtaining a discrete spectrum of states of one of the varieties of the correlation functional. This allows us to consider the proposed structure as a mathematical correlation molecule, described by an analogue of the continuous-quantum Schrödinger equation. Results: The chi-squared Pearson molecule on small samples allows enhancing the power of the classical chi-squared test by up to 20 times. The mathematical correlation molecule described in the article has similar properties, and should in the future reduce the calculation errors of classical correlation coefficients in small samples. Discussion and Conclusions: The authors suggest that there are infinitely many mathematical molecules similar in their properties to actual physical molecules. Schrödinger equations are not unique; analogues can be constructed for each mathematical molecule. A mathematical synthesis of molecules can be expected for a large number of known statistical tests and statistical moments. All this should make it possible to reduce calculation errors due to quantum effects that occur in small test samples.

  11. Physical properties of the spin Hamiltonian on honeycomb lattice samples with Kekulé and vacuum polarization corrections

    Science.gov (United States)

    Martins, Ricardo Spagnuolo; Konstantinova, Elena; Belich, Humberto; Helayël-Neto, José Abdalla

    2017-11-01

    Magnetic and thermodynamical properties of a system of spins in a honeycomb lattice, such as magnetization, magnetic susceptibility and specific heat, in a low-temperature regime are investigated by considering the effects of a Kekulé scalar exchange and QED vacuum polarization corrections to the interparticle potential. The spin lattice calculations are carried out by means of Monte Carlo simulations. We present a number of comparative plots of all the physical quantities we have considered and a detailed analysis is presented to illustrate the main features and the variation profiles of the properties with the applied external magnetic field and temperature.

  12. ANALYSIS OF MONTE CARLO SIMULATION SAMPLING TECHNIQUES ON SMALL SIGNAL STABILITY OF WIND GENERATOR- CONNECTED POWER SYSTEM

    Directory of Open Access Journals (Sweden)

    TEMITOPE RAPHAEL AYODELE

    2016-04-01

    Monte Carlo simulation using the Simple Random Sampling (SRS) technique is popularly known for its ability to handle complex uncertainty problems. However, to produce a reasonable result it requires a huge sample size, which makes it computationally expensive, time consuming and unfit for online power system applications. In this article, the performance of the Latin Hypercube Sampling (LHS) technique is explored and compared with SRS in terms of accuracy, robustness and speed for small-signal stability application in a wind-generator-connected power system. The analysis is performed using probabilistic techniques via eigenvalue analysis on two standard networks (the Single Machine Infinite Bus and the IEEE 16-machine 68-bus test system). The accuracy of the two sampling techniques is determined by comparing results at different sample sizes with the IDEAL (conventional) result. The robustness is determined from the variance reduction observed when the experiment is repeated 100 times with different sample sizes using the two sampling techniques in turn. The results show that sample sizes generated from LHS for small-signal stability application produce the same result as the IDEAL values starting from a sample size of 100; about 100 samples of a random variable generated using the LHS method are thus enough to produce reasonable results for practical purposes in small-signal stability application. It is also revealed that LHS has the least variance when the experiment is repeated 100 times, which signifies the robustness of LHS over SRS. An LHS sample of size 100 produces the same result as the conventional method with a sample size of 50,000; the reduced sample size gives LHS a computational speed advantage (about six times) over the conventional method.
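
    A minimal Latin hypercube sampler, with a toy variance comparison against SRS, illustrates the robustness claim; the response function below is a stand-in for the paper's probabilistic eigenvalue analysis, and all settings are illustrative.

```python
import numpy as np

def lhs(n, d, rng):
    """Latin hypercube sample: one draw per equal-probability stratum, per dimension."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n  # stratified draws
    for j in range(d):
        rng.shuffle(u[:, j])                              # decouple the dimensions
    return u

rng = np.random.default_rng(42)
f = lambda x: x.sum(axis=1)        # stand-in for the system response
reps = 200
est_lhs = [f(lhs(100, 3, rng)).mean() for _ in range(reps)]
est_srs = [f(rng.random((100, 3))).mean() for _ in range(reps)]
# LHS estimates have markedly smaller variance than SRS at the same size.
print(np.var(est_lhs), np.var(est_srs))
```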

  13. A Rational Approach for Discovering and Validating Cancer Markers in Very Small Samples Using Mass Spectrometry and ELISA Microarrays

    Directory of Open Access Journals (Sweden)

    Richard C. Zangar

    2004-01-01

    Identifying useful markers of cancer can be problematic due to limited amounts of sample. Some samples, such as nipple aspirate fluid (NAF) or early-stage tumors, are inherently small. Other samples, such as serum, are collected in larger volumes, but archives of these samples are very valuable and only small amounts of each sample may be available for a single study. Also, given the diverse nature of cancer and the inherent variability in individual protein levels, it seems likely that the best approach to screen for cancer will be to determine the profile of a battery of proteins. As a result, a major challenge in identifying protein markers of disease is the ability to screen many proteins using very small amounts of sample. In this review, we outline some technological advances in proteomics that greatly advance this capability. Specifically, we propose a strategy for identifying markers of breast cancer in NAF that utilizes mass spectrometry (MS) to simultaneously screen hundreds or thousands of proteins in each sample. The best potential markers identified by the MS analysis can then be extensively characterized using an ELISA microarray assay. Because the microarray analysis is quantitative and large numbers of samples can be efficiently analyzed, this approach offers the ability to rapidly assess a battery of selected proteins in a manner that is directly relevant to traditional clinical assays.

  14. Droplet Size-Aware and Error-Correcting Sample Preparation Using Micro-Electrode-Dot-Array Digital Microfluidic Biochips.

    Science.gov (United States)

    Li, Zipeng; Lai, Kelvin Yi-Tse; Chakrabarty, Krishnendu; Ho, Tsung-Yi; Lee, Chen-Yi

    2017-12-01

    Sample preparation in digital microfluidics refers to the generation of droplets with target concentrations for on-chip biochemical applications. In recent years, digital microfluidic biochips (DMFBs) have been adopted as a platform for sample preparation. However, there remain two major problems associated with sample preparation on a conventional DMFB. First, only a (1:1) mixing/splitting model can be used, leading to an increase in the number of fluidic operations required for sample preparation. Second, only a limited number of sensors can be integrated on a conventional DMFB; as a result, the latency for error detection during sample preparation is significant. To overcome these drawbacks, we adopt a next generation DMFB platform, referred to as micro-electrode-dot-array (MEDA), for sample preparation. We propose the first sample-preparation method that exploits the MEDA-specific advantages of fine-grained control of droplet sizes and real-time droplet sensing. Experimental demonstration using a fabricated MEDA biochip and simulation results highlight the effectiveness of the proposed sample-preparation method.
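
    For background, the conventional (1:1) mixing model that MEDA relaxes can be sketched as a binary dilution sequence: a target concentration is approximated by its k-bit binary expansion, mixing 1:1 with reagent (bit 1) or buffer (bit 0) at each step. This shows only the baseline model, not the droplet-size-aware MEDA algorithm of the paper.

```python
def one_to_one_mixing_sequence(target: float, bits: int = 8):
    """Return the (1:1) mixing steps approximating `target` in [0, 1),
    plus the concentration actually achieved (nearest k-bit value)."""
    frac = round(target * (1 << bits))          # quantize to k bits
    seq = []
    for b in range(bits):                       # emit bits MSB first
        bit = (frac >> (bits - 1 - b)) & 1
        seq.append("reagent" if bit else "buffer")
    c = 0.0
    for step in reversed(seq):                  # replay LSB first: c = (c + bit)/2
        c = (c + (1.0 if step == "reagent" else 0.0)) / 2.0
    return seq, c

seq, achieved = one_to_one_mixing_sequence(0.3, bits=8)
print(seq, achieved)   # achieved = 0.30078125, the nearest 8-bit value to 0.3
```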

  15. Design and experimental testing of air slab caps which convert commercial electron diodes into dual purpose, correction-free diodes for small field dosimetry.

    Science.gov (United States)

    Charles, P H; Cranmer-Sargison, G; Thwaites, D I; Kairn, T; Crowe, S B; Pedrazzini, G; Aland, T; Kenny, J; Langton, C M; Trapp, J V

    2014-10-01

    Two diodes which do not require correction factors for small-field relative output measurements are designed and validated experimentally. This was achieved by adding an air layer above the active volume of the diode detectors, which cancelled out the increase in response of the diodes in small fields relative to standard field sizes. Due to the increased density of silicon and other components within a diode, additional electrons are created. In very small fields, a very small air gap acts as an effective filter of electrons with a high angle of incidence. The aim was to design a diode that balanced these perturbations to give a response similar to a water-only geometry. Three thicknesses of air were placed at the proximal end of a PTW 60017 electron diode (PTWe) using an adjustable "air cap". A set of output ratios ($OR_{det}^{f_{clin}}$) for square field sizes of side length down to 5 mm was measured using each air thickness and compared to $OR_{det}^{f_{clin}}$ measured using an IBA stereotactic field diode (SFD). $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ was transferred from the SFD to the PTWe diode and plotted as a function of air gap thickness for each field size. This enabled the optimal air gap thickness to be obtained by observing which thickness of air was required such that $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ was equal to 1.00 at all field sizes. A similar procedure was used to find the optimal air thickness required to make a modified Sun Nuclear EDGE detector (EDGEe) which is "correction-free" in small-field relative dosimetry. In addition, the feasibility of experimentally transferring $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ values from the SFD to unknown diodes was tested by comparing the experimentally transferred values for unmodified PTWe and EDGEe diodes to Monte Carlo simulated values. 1.0 mm of air was required to make the PTWe diode correction-free. This modified diode (PTWeair) produced output factors equivalent to those in water at all field sizes (5-50 mm) ...

  16. In-situ γ spectrometry of the Chernobyl fallout using soil-sample independent corrections for surface roughness and migration

    International Nuclear Information System (INIS)

    Karlberg, O.

    1993-12-01

    The 661 keV gamma and 32 keV X-ray fluences from Cs-137 were measured in-situ with a Gamma-X Ge detector on different types of urban and rural surfaces. In comparison with a model calculation, the 661 keV fluence was used to estimate the surface activity assuming an ideal, infinite surface, and the quotient between the 32 keV and 661 keV fluences was used to estimate the correction factors for the surfaces due to migration and surface roughness. As an alternative to the X-ray method, the use of a collimator for ordinary measurements of the 661 keV peak was analysed and compared with the X-ray method and with measurements without a collimator. The X-ray method with the optimal soil distribution and composition gives the best results, but ordinary measurements using a collimator with a constant correction factor seem to be an appropriate method when soil profiles for determination of a more exact calibration factor are not available.

  17. Correction for phylogeny, small number of observations and data redundancy improves the identification of coevolving amino acid pairs using mutual information

    DEFF Research Database (Denmark)

    Buslje, C.M.; Santos, J.; Delfino, J.M.

    2009-01-01

    Motivation: Mutual information (MI) theory is often applied to predict positional correlations in a multiple sequence alignment (MSA) to make possible the analysis of those positions structurally or functionally important in a given fold or protein family. Accurate identification of coevolving ... -weighting techniques to reduce sequence redundancy, and low-count corrections to account for the small number of observations in limited-size sequence families, can significantly improve the predictability of MI. The evaluation is made on large sets of both in silico-generated alignments as well as on biological sequence ...
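
    A toy version of MI with the two corrections named in the title, sequence weighting and a low-count pseudocount, can be sketched as follows; the weighting scheme (1/cluster-size at a fixed identity threshold) and the pseudocount value are simplified stand-ins for the published ones.

```python
import numpy as np
from collections import Counter

def seq_weights(msa, identity=0.62):
    """Weight each sequence by 1 / (number of sequences at >= identity to it)."""
    arr = np.asarray([list(s) for s in msa])
    sim = np.array([[np.mean(a == b) >= identity for b in arr] for a in arr])
    return 1.0 / sim.sum(axis=1)

def column_mi(msa, i, j, pseudo=0.05):
    """Weighted, pseudocount-corrected MI (bits) between MSA columns i and j."""
    w = seq_weights(msa)
    pair = Counter()
    for s, wt in zip(msa, w):
        pair[(s[i], s[j])] += wt
    keys_i = sorted({k[0] for k in pair})
    keys_j = sorted({k[1] for k in pair})
    pij = np.array([[pair[(a, b)] + pseudo for b in keys_j] for a in keys_i])
    pij /= pij.sum()
    pi, pj = pij.sum(1, keepdims=True), pij.sum(0, keepdims=True)
    return float(np.sum(pij * np.log2(pij / (pi * pj))))

msa = ["ACDA", "ACDA", "GCEA", "GTEA"]
print(column_mi(msa, 0, 2))  # columns 0 and 2 covary, so MI is near 1 bit
```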

  18. Measuring Sulfur Isotope Ratios from Solid Samples with the Sample Analysis at Mars Instrument and the Effects of Dead Time Corrections

    Science.gov (United States)

    Franz, H. B.; Mahaffy, P. R.; Kasprzak, W.; Lyness, E.; Raaen, E.

    2011-01-01

    The Sample Analysis at Mars (SAM) instrument suite comprises the largest science payload on the Mars Science Laboratory (MSL) "Curiosity" rover. SAM will perform chemical and isotopic analysis of volatile compounds from atmospheric and solid samples to address questions pertaining to habitability and geochemical processes on Mars. Sulfur is a key element of interest in this regard, as sulfur compounds have been detected on the Martian surface by both in situ and remote sensing techniques. Their chemical and isotopic composition can help constrain environmental conditions and mechanisms at the time of formation. A previous study examined the capability of the SAM quadrupole mass spectrometer (QMS) to determine sulfur isotope ratios of SO2 gas from a statistical perspective. Here we discuss the development of a method for determining sulfur isotope ratios with the QMS by sampling SO2 generated from heating of solid sulfate samples in SAM's pyrolysis oven. This analysis, which was performed with the SAM breadboard system, also required development of a novel treatment of the QMS dead time to accommodate the characteristics of an aging detector.
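
    For context, the two textbook dead-time models are easy to state; the record's novel treatment for an aging detector is not reproduced here, and the numbers below are illustrative only.

```python
import numpy as np

def nonparalyzable(observed_rate, tau):
    """True rate n from observed rate m under n = m / (1 - m*tau)."""
    return observed_rate / (1.0 - observed_rate * tau)

def paralyzable(observed_rate, tau, n_iter=50):
    """Solve m = n * exp(-n*tau) for n by fixed-point iteration n <- m*exp(n*tau)."""
    n = observed_rate
    for _ in range(n_iter):
        n = observed_rate * np.exp(n * tau)
    return n

m, tau = 2.0e5, 1.0e-7   # observed counts/s and dead time in s (illustrative)
print(nonparalyzable(m, tau), paralyzable(m, tau))
```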

  19. A microfluidic paper-based analytical device for the assay of albumin-corrected fructosamine values from whole blood samples.

    Science.gov (United States)

    Boonyasit, Yuwadee; Laiwattanapaisal, Wanida

    2015-01-01

    A method for acquiring albumin-corrected fructosamine values from whole blood using a microfluidic paper-based analytical system that offers substantial improvement over previous methods is proposed. The time required to quantify both serum albumin and fructosamine is shortened to 10 min, with detection limits of 0.50 g dl⁻¹ and 0.58 mM, respectively (S/N = 3). The proposed system also exhibited good within-run and run-to-run reproducibility. The results of the interference study revealed that the acceptable recoveries ranged from 95.1 to 106.2%. The system was compared with currently used large-scale methods (n = 15), and the results demonstrated good agreement among the techniques. The microfluidic paper-based system has the potential to continuously monitor glycemic levels in low-resource settings.

  20. Correction of systematic bias in ultrasound dating in studies of small-for-gestational-age birth: an example from the Iowa Health in Pregnancy Study.

    Science.gov (United States)

    Harland, Karisa K; Saftlas, Audrey F; Wallis, Anne B; Yankowitz, Jerome; Triche, Elizabeth W; Zimmerman, M Bridget

    2012-09-01

    The authors examined whether early ultrasound dating (≤20 weeks) of gestational age (GA) in small-for-gestational-age (SGA) fetuses may underestimate gestational duration and therefore the incidence of SGA birth. Within a population-based case-control study (May 2002-June 2005) of Iowa SGA births and preterm deliveries identified from birth records (n = 2,709), the authors illustrate a novel methodological approach with which to assess and correct for systematic underestimation of GA by early ultrasound in women with suspected SGA fetuses. After restricting the analysis to subjects with first-trimester prenatal care, a nonmissing date of the last menstrual period (LMP), and early ultrasound (n = 1,135), SGA subjects' ultrasound GA was 5.5 days less than their LMP GA, on average. Multivariable linear regression was conducted to determine the extent to which ultrasound GA predicted LMP dating and to correct for systematic misclassification that results after applying standard guidelines to adjudicate differences in these measures. In the unadjusted model, SGA subjects required a correction of +1.5 weeks to the ultrasound estimate. With adjustment for maternal age, smoking, and first-trimester vaginal bleeding, standard guidelines for adjudicating differences in ultrasound and LMP dating underestimated SGA birth by 12.9% and overestimated preterm delivery by 8.7%. This methodological approach can be applied by researchers using different study populations in similar research contexts.
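
    The regression-based correction can be sketched generically: regress LMP-based GA on early-ultrasound GA and apply the fitted line to correct ultrasound dating. The coefficients below are simulated for illustration; the paper itself reports an unadjusted correction of about +1.5 weeks for SGA subjects.

```python
import numpy as np

rng = np.random.default_rng(1)
ga_us = rng.uniform(10, 20, 200)                 # early ultrasound GA (weeks)
ga_lmp = ga_us + 0.8 + rng.normal(0, 1.0, 200)   # toy LMP GA with systematic offset

slope, intercept = np.polyfit(ga_us, ga_lmp, 1)  # fit GA_lmp ~ GA_us
corrected = intercept + slope * ga_us            # corrected GA estimates
print(f"GA_lmp ~ {intercept:.2f} + {slope:.2f} * GA_us")
```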

  1. Approach to the determination of the contact angle in hydrophobic samples with simultaneous correction of the effect of the roughness

    Science.gov (United States)

    Domínguez, Noemí; Castilla, Pau; Linzoain, María Eugenia; Durand, Géraldine; García, Cristina; Arasa, Josep

    2018-04-01

    This work presents a validation study of a method developed to measure contact angles with a confocal device on a set of hydrophobic samples. The device allows evaluation of the surface roughness and determination of the contact angle in the same area of the sample. Furthermore, a theoretical evaluation, based on Wenzel's model, of the impact of the roughness of a non-smooth surface on the calculated contact angle when roughness is not taken into account is also presented.

  2. Capillary absorption spectrometer and process for isotopic analysis of small samples

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, M. Lizabeth; Kelly, James F.; Sams, Robert L.; Moran, James J.; Newburn, Matthew K.; Blake, Thomas A.

    2018-04-24

    A capillary absorption spectrometer and process are described that provide highly sensitive and accurate stable-isotope absorption measurements of analytes in a sample gas, which may include isotopologues of carbon and oxygen obtained from gas and biological samples. It further provides isotopic images of microbial communities that allow tracking of nutrients at the single-cell level. It targets naturally occurring variations in carbon and oxygen isotopes, avoiding the need for expensive isotopically labeled mixtures and allowing study of samples taken from the field without modification. The process also permits sampling in vivo, permitting real-time ambient studies of microbial communities.

  3. Capillary absorption spectrometer and process for isotopic analysis of small samples

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, M. Lizabeth; Kelly, James F.; Sams, Robert L.; Moran, James J.; Newburn, Matthew K.; Blake, Thomas A.

    2016-03-29

    A capillary absorption spectrometer and process are described that provide highly sensitive and accurate stable-isotope absorption measurements of analytes in a sample gas, which may include isotopologues of carbon and oxygen obtained from gas and biological samples. It further provides isotopic images of microbial communities that allow tracking of nutrients at the single-cell level. It targets naturally occurring variations in carbon and oxygen isotopes, avoiding the need for expensive isotopically labeled mixtures and allowing study of samples taken from the field without modification. The method also permits sampling in vivo, permitting real-time ambient studies of microbial communities.

  4. Tapping in synchrony with a perturbed metronome: the phase correction response to small and large phase shifts as a function of tempo.

    Science.gov (United States)

    Repp, Bruno H

    2011-01-01

    When tapping is paced by an auditory sequence containing small phase shift (PS) perturbations, the phase correction response (PCR) of the tap following a PS increases with the baseline interonset interval (IOI), leading eventually to overcorrection (B. H. Repp, 2008). Experiment 1 shows that this holds even for fixed-size PSs that become imperceptible as the IOI increases (here, from 400 to 1200 ms). Earlier research has also shown (but only for IOI=500 ms) that the PCR is proportionally smaller for large than for small PSs (B. H. Repp, 2002a, 2002b). Experiment 2 introduced large PSs and found smaller PCRs than in Experiment 1, at all of the same IOIs. In Experiments 3A and 3B, the author investigated whether the change in slope of the sigmoid function relating PCR and PS magnitudes occurs at a fixed absolute or relative PS magnitude across different IOIs (600, 1000, 1400 ms). The results suggest no clear answer; the exact shape of the function may depend on the range of PSs used in an experiment. Experiment 4 examined the PCR in the IOI range from 1000 to 2000 ms and found overcorrection throughout, but with the PCR increasing much more gradually than in Experiment 1. These results provide important new information about the phase correction process and pose challenges for models of sensorimotor synchronization, which presently cannot explain nonlinear PCR functions and overcorrection. Copyright © Taylor & Francis Group, LLC
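
    The linear phase-correction model in which such experiments are usually framed (e.g. in the Vorberg and Wing tradition) is easy to simulate; a value of alpha greater than 1 reproduces, qualitatively, the overcorrection observed at long IOIs. All parameters below are illustrative.

```python
import numpy as np

def simulate_taps(n=50, alpha=1.2, motor_sd=10.0, seed=0):
    """Asynchrony series under the linear model:
    asyn[k+1] = (1 - alpha) * asyn[k] + motor noise (ms)."""
    rng = np.random.default_rng(seed)
    asyn = np.zeros(n)
    for k in range(n - 1):
        asyn[k + 1] = (1 - alpha) * asyn[k] + rng.normal(0, motor_sd)
    return asyn

# alpha > 1: each asynchrony is overcorrected, so successive values alternate sign.
print(simulate_taps()[:10].round(1))
```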

  5. [Monitoring microbiological safety of small systems of water distribution. Comparison of two sampling programs in a town in central Italy].

    Science.gov (United States)

    Papini, Paolo; Faustini, Annunziata; Manganello, Rosa; Borzacchi, Giancarlo; Spera, Domenico; Perucci, Carlo A

    2005-01-01

    To determine the appropriate frequency of sampling in small water distribution systems, we carried out two sampling programs to monitor the water distribution system of a town in Central Italy between July and September 1992; the assumption of a Poisson distribution implied 4 water samples, while the assumption of a negative binomial distribution implied 21 samples. Coliform organisms were used as indicators of water safety. The network consisted of two pipe rings and two wells fed by the same water source. The number of summer customers varied considerably, from 3,000 to 20,000. The mean density was 2.33 coliforms/100 ml (sd = 5.29) for 21 samples and 3 coliforms/100 ml (sd = 6) for 4 samples; however, the hypothesis of homogeneity was rejected (p-value < 0.05). The type II error was larger with 4 samples (beta = 0.24) than with 21 (beta = 0.05). For this small network, determining the sample size according to the heterogeneity hypothesis strengthens the statement that the water is drinkable, compared with the homogeneity assumption.

  6. Comparison of sampling and test methods for determining asphalt content and moisture correction in asphalt concrete mixtures.

    Science.gov (United States)

    1985-03-01

    The purpose of this report is to identify the differences, if any, between AASHTO and OSHD test procedures and results. The report addresses the effect of the size of samples taken in the field and evaluates the methods of determining the moisture content ...

  7. High-speed imaging upgrade for a standard sample scanning atomic force microscope using small cantilevers

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Jonathan D.; Nievergelt, Adrian; Erickson, Blake W.; Yang, Chen; Dukic, Maja; Fantner, Georg E., E-mail: georg.fantner@epfl.ch [Ecole Polytechnique Fédérale de Lausanne, Lausanne (Switzerland)

    2014-09-15

    We present an atomic force microscope (AFM) head for optical beam deflection on small cantilevers. Our AFM head is designed to be small in size, easily integrated into a commercial AFM system, and has a modular architecture facilitating exchange of the optical and electronic assemblies. We present two different designs for both the optical beam deflection and the electronic readout systems, and evaluate their performance. Using small cantilevers with our AFM head on an otherwise unmodified commercial AFM system, we are able to take tapping mode images approximately 5–10 times faster compared to the same AFM system using large cantilevers. By using additional scanner turnaround resonance compensation and a controller designed for high-speed AFM imaging, we show tapping mode imaging of lipid bilayers at line scan rates of 100–500 Hz for scan areas of several micrometers in size.

  8. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    Science.gov (United States)

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that use of samples of size not larger than ten is not uncommon in biomedical research and that many of such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown if mathematical requirements incorporated in the sample comparison methods are satisfied. Computer simulated experiments were used to examine performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is granted by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
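
    The simulation idea translates directly into code; a simplified re-creation with an illustrative effect size and normal data follows (the paper's exact distributions and settings may differ).

```python
import numpy as np
from scipy import stats

def error_rates(n, effect=1.0, reps=20_000, alpha=0.05, seed=0):
    """Estimate Type I and Type II error of a two-sample t-test at size n."""
    rng = np.random.default_rng(seed)
    type1 = type2 = 0
    for _ in range(reps):
        a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
        type1 += stats.ttest_ind(a, b).pvalue < alpha   # false positive
        c = rng.normal(effect, 1, n)
        type2 += stats.ttest_ind(a, c).pvalue >= alpha  # missed real effect
    return type1 / reps, type2 / reps

for n in (3, 5, 9):
    t1, t2 = error_rates(n)
    print(f"n={n}: Type I ~ {t1:.3f}, Type II ~ {t2:.3f}")
```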

  9. A TIMS-based method for the high precision measurements of the three-isotope potassium composition of small samples

    DEFF Research Database (Denmark)

    Wielandt, Daniel Kim Peel; Bizzarro, Martin

    2011-01-01

    A novel thermal ionization mass spectrometry (TIMS) method for the three-isotope analysis of K has been developed, and ion chromatographic methods for the separation of K have been adapted for the processing of small samples. The precise measurement of K-isotopes is challenged by the presence of ...

  10. Adiponectin levels measured in dried blood spot samples from neonates born small and appropriate for gestational age

    DEFF Research Database (Denmark)

    Klamer, A; Skogstrand, Kristin; Hougaard, D M

    2007-01-01

    Adiponectin levels measured in neonatal dried blood spot samples (DBSS) might be affected by both prematurity and being born small for gestational age (SGA). The aim of the study was to measure adiponectin levels in routinely collected neonatal DBSS taken on day 5 (range 3-12) postnatal from...

  11. Is a 'convenience' sample useful for estimating immunization coverage in a small population?

    Science.gov (United States)

    Weir, Jean E; Jones, Carrie

    2008-01-01

    Rapid survey methodologies are widely used for assessing immunization coverage in developing countries, approximating true stratified random sampling. Non-random ('convenience') sampling is not considered appropriate for estimating immunization coverage rates but has the advantages of low cost and expediency. We assessed the validity of a convenience sample of children presenting to a travelling clinic by comparing the coverage rate in the convenience sample to the true coverage established by surveying each child in three villages in rural Papua New Guinea. The rate of DTP immunization coverage as estimated by the convenience sample was within 10% of the true coverage when the proportion of children in the sample was two-thirds, or when only children over the age of one year were counted, but differed by 11% when the sample included only 53% of the children and when all eligible children were included. The convenience sample may be sufficiently accurate for reporting purposes and is useful for identifying areas of low coverage.

  12. Classification of natural formations based on their optical characteristics using small volumes of samples

    Science.gov (United States)

    Abramovich, N. S.; Kovalev, A. A.; Plyuta, V. Y.

    1986-02-01

    A computer algorithm has been developed to classify the spectral bands of natural scenes on Earth according to their optical characteristics. The algorithm is written in FORTRAN-IV and can be used in spectral data processing programs requiring only small data loads. The spectral classifications of several different types of green vegetation canopies are given to illustrate the effectiveness of the algorithm.

  13. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data. Sampling provides an up-to-date treatment ...

  14. An introduction to Bartlett correction and bias reduction

    CERN Document Server

    Cordeiro, Gauss M

    2014-01-01

    This book presents a concise introduction to Bartlett and Bartlett-type corrections of statistical tests and bias correction of point estimators. The underlying idea behind both groups of corrections is to obtain higher accuracy in small samples. While the main focus is on corrections that can be analytically derived, the authors also present alternative strategies for improving estimators and tests based on bootstrap, a data resampling technique, and discuss concrete applications to several important statistical models.
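
    One of the bootstrap-based strategies the book covers, bias correction of a point estimator, fits in a few lines; the variance-MLE example below is a standard illustration of the idea, not an example taken from the book.

```python
import numpy as np

def bootstrap_bias_corrected(x, estimator, B=2000, seed=0):
    """Bootstrap bias correction: theta_corrected = 2*theta_hat - mean(theta*_b)."""
    rng = np.random.default_rng(seed)
    theta = estimator(x)
    boots = np.array([estimator(rng.choice(x, size=len(x))) for _ in range(B)])
    return 2 * theta - boots.mean()

rng = np.random.default_rng(1)
x = rng.normal(0, 2, size=10)                 # small sample, true variance = 4
mle_var = lambda v: np.mean((v - v.mean()) ** 2)   # biased by factor (n-1)/n
print(mle_var(x), bootstrap_bias_corrected(x, mle_var))
```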

  15. Thermal neutron absorption cross-section for small samples (experiments in cylindrical geometry)

    International Nuclear Information System (INIS)

    Czubek, J.A.; Drozdowicz, K.; Igielski, A.; Krynicka-Drozdowicz, E.; Woznicka, U.

    1982-01-01

    Measurement results for the thermal neutron macroscopic absorption cross-section Σa obtained with the cylindrical sample-moderator system are presented. Experiments on liquid (water solutions of H3BO3) and solid (crushed basalt) samples are reported. Solid samples were saturated with the H3BO3 'poisoning' solution. The accuracy obtained for the determination of the absorption cross-section of the solid material was σ(Σma) = 1.2-2.2 c.u. in the case where porosity was measured with an accuracy of σ(φ) = 0.001-0.002. The dispersion of the Σma data obtained for basalts (taken from different quarries) was higher than the accuracy of the measurement. All experimental data for the fundamental decay constants λ0, together with full information about the samples, are given. (author)

  16. Histologic examination of hepatic biopsy samples as a prognostic indicator in dogs undergoing surgical correction of congenital portosystemic shunts: 64 cases (1997-2005).

    Science.gov (United States)

    Parker, Jacquelyn S; Monnet, Eric; Powers, Barbara E; Twedt, David C

    2008-05-15

    To determine whether results of histologic examination of hepatic biopsy samples could be used as an indicator of survival time in dogs that underwent surgical correction of a congenital portosystemic shunt (PSS). Retrospective case series. 64 dogs that underwent exploratory laparotomy for an extrahepatic (n = 39) or intrahepatic (25) congenital PSS. All H&E-stained histologic slides of hepatic biopsy samples obtained at the time of surgery were reviewed by a single individual, and severity of histologic abnormalities (ie, arteriolar hyperplasia, biliary hyperplasia, fibrosis, cell swelling, lipidosis, lymphoplasmacytic cholangiohepatitis, suppurative cholangiohepatitis, lipid granulomas, and dilated sinusoids) was graded. A Cox proportional hazards regression model was used to determine whether each histologic feature was associated with survival time. Median follow-up time was 35.7 months, and median survival time was 50.6 months. Thirty-eight dogs were alive at the time of final follow-up; 15 had died of causes associated with the PSS, including 4 that died immediately after surgery; 3 had died of unrelated causes; and 8 were lost to follow-up. None of the histologic features examined were significantly associated with survival time. Findings suggested that results of histologic examination of hepatic biopsy samples obtained at the time of surgery cannot be used to predict long-term outcome in dogs undergoing surgical correction of a PSS.

  17. Development of a methodology for low-energy X-ray absorption correction in biological samples using radiation scattering techniques

    International Nuclear Information System (INIS)

    Pereira, Marcelo O.; Anjos, Marcelino J.; Lopes, Ricardo T.

    2009-01-01

    Non-destructive X-ray techniques, such as tomography, radiography and X-ray fluorescence, are sensitive to the attenuation coefficient and have a large field of applications in the medical as well as the industrial area. In the case of X-ray fluorescence analysis, knowledge of the photon X-ray attenuation coefficients provides important information for obtaining the elemental concentration. On the other hand, mass attenuation coefficient values are usually determined by transmission methods, so the use of X-ray scattering can be considered an alternative to them. This work proposes a new method for obtaining the X-ray absorption curve through the superposition of the Rayleigh and Compton scattering peaks of the tungsten Lα and Lβ lines (the L lines of an X-ray tube with a W anode). The absorption curve was obtained using standard samples with effective atomic numbers in the range from 6 to 16. The method was applied to certified samples of bovine liver (NIST 1577B), milk powder and V-10. The experimental measurements were obtained using the portable EDXRF system of the Nuclear Instrumentation Laboratory (LIN-COPPE/UFRJ) with a tungsten (W) anode. (author)

  18. Conditional estimation of local pooled dispersion parameter in small-sample RNA-Seq data improves differential expression test.

    Science.gov (United States)

    Gim, Jungsoo; Won, Sungho; Park, Taesung

    2016-10-01

    High-throughput sequencing technology in transcriptomics studies contributes to the understanding of gene regulation mechanisms and their cellular function, but it also increases the need for accurate statistical methods to assess quantitative differences between experiments. Many methods have been developed to account for the specifics of count data: non-normality, a dependence of the variance on the mean, and small sample size. Among these issues, the small number of samples in typical experiments remains a challenge. Here we present a method for differential analysis of count data, using conditional estimation of local pooled dispersion parameters. A comprehensive evaluation of our proposed method for differential gene expression analysis, using both simulated and real data sets, shows that it is more powerful than existing methods while controlling the false discovery rate. By introducing conditional estimation of local pooled dispersion parameters, we overcome the limitation of low power and enable a powerful quantitative analysis focused on differential expression testing with a small number of samples.
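    The idea of locally pooled dispersion can be sketched with a method-of-moments stand-in: each gene borrows strength from genes of similar mean expression when estimating its negative-binomial dispersion. This is an illustrative simplification, not the authors' estimator; the window size and simulated data are assumptions.

    ```python
    # Method-of-moments "local pooling": genes with similar means share a
    # negative-binomial dispersion estimate (illustrative, not the paper's).
    import numpy as np

    def local_pooled_dispersion(counts, window=50):
        """counts: (genes, samples) array of normalized counts."""
        mu = counts.mean(axis=1)
        var = counts.var(axis=1, ddof=1)
        order = np.argsort(mu)
        disp = np.empty(len(mu))
        for rank, g in enumerate(order):
            pool = order[max(0, rank - window):rank + window + 1]
            m, v = mu[pool].mean(), var[pool].mean()
            # NB variance model: var = mu + phi*mu^2  =>  phi = (var - mu)/mu^2
            disp[g] = max((v - m) / m ** 2, 1e-8)
        return disp

    rng = np.random.default_rng(0)
    mu = rng.gamma(2.0, 50.0, size=1000)                   # gene means
    p = 5.0 / (5.0 + mu[:, None])                          # NB(r=5) => phi = 0.2
    counts = rng.negative_binomial(5, p, size=(1000, 3))   # only 3 samples
    print(local_pooled_dispersion(counts)[:5])             # estimates near 0.2
    ```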

  19. An inverse-modelling approach for frequency response correction of capacitive humidity sensors in ABL research with small remotely piloted aircraft (RPA)

    Science.gov (United States)

    Wildmann, N.; Kaufmann, F.; Bange, J.

    2014-09-01

    The measurement of water vapour concentration in the atmosphere is an ongoing challenge in environmental research. Satisfactory solutions exist for ground-based meteorological stations and for measurements of mean values. However, advanced research on thermodynamic processes aloft, above the surface layer and especially in the atmospheric boundary layer (ABL), requires the resolution of small-scale turbulence. Sophisticated optical instruments are used in airborne meteorology with manned aircraft to achieve the necessary fast-response measurements of the order of 10 Hz (e.g. LiCor 7500). Since these instruments are too large and heavy for use on small remotely piloted aircraft (RPA), a method is presented in this study that enables small capacitive humidity sensors to resolve turbulent eddies of the order of 10 m. The sensor examined here is a polymer-based sensor of the type P14-Rapid, by the Swiss company Innovative Sensor Technologies (IST) AG, with a surface area of less than 10 mm² and negligible weight. A physical and dynamical model of this sensor is described and then inverted in order to restore the original water vapour fluctuations from the sensor measurements. Examples of flight measurements show how the method can be used to correct vertical profiles and resolve turbulence spectra up to about 3 Hz. At an airspeed of 25 m s⁻¹ this corresponds to a spatial resolution of less than 10 m.
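    The essence of such an inverse-modelling correction can be shown with a first-order sensor model: if the sensor obeys dy/dt = (u - y)/τ, the input is restored as u ≈ y + τ·dy/dt, followed by band-limiting to suppress the noise amplified by differentiation. The sketch below is a deliberate simplification of the paper's more detailed sensor model; the time constant and cutoff frequency are invented.

    ```python
    # Invert a first-order sensor lag: u ≈ y + tau*dy/dt, then low-pass.
    import numpy as np

    def invert_first_order(y, dt, tau, f_cut=3.0):
        """Restore the input u from lagged sensor signal y sampled at 1/dt."""
        u = y + tau * np.gradient(y, dt)
        U = np.fft.rfft(u)
        freqs = np.fft.rfftfreq(len(u), dt)
        U[freqs > f_cut] = 0.0          # crude band limit (assumed 3 Hz)
        return np.fft.irfft(U, n=len(u))

    dt, tau = 0.01, 0.5                 # 100 Hz sampling, 0.5 s sensor lag
    t = np.arange(0.0, 10.0, dt)
    u_true = np.sin(2 * np.pi * t)      # 1 Hz humidity fluctuation
    y = np.zeros_like(t)
    for i in range(1, len(t)):          # simulate the lagged sensor
        y[i] = y[i - 1] + dt * (u_true[i - 1] - y[i - 1]) / tau
    print(np.abs(invert_first_order(y, dt, tau) - u_true).max())
    ```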

  20. Preparing Monodisperse Macromolecular Samples for Successful Biological Small-Angle X-ray and Neutron Scattering Experiments

    Science.gov (United States)

    Jeffries, Cy M.; Graewert, Melissa A.; Blanchet, Clément E.; Langley, David B.; Whitten, Andrew E.; Svergun, Dmitri I

    2017-01-01

    Small-angle X-ray and neutron scattering (SAXS and SANS) are techniques used to extract structural parameters and determine the overall structures and shapes of biological macromolecules, complexes and assemblies in solution. The scattering intensities measured from a sample contain contributions from all atoms within the illuminated sample volume including the solvent and buffer components as well as the macromolecules of interest. In order to obtain structural information, it is essential to prepare an exactly matched solvent blank so that background scattering contributions can be accurately subtracted from the sample scattering to obtain the net scattering from the macromolecules in the sample. In addition, sample heterogeneity caused by contaminants, aggregates, mismatched solvents, radiation damage or other factors can severely influence and complicate data analysis so it is essential that the samples are pure and monodisperse for the duration of the experiment. This Protocol outlines the basic physics of SAXS and SANS and reveals how the underlying conceptual principles of the techniques ultimately ‘translate’ into practical laboratory guidance for the production of samples of sufficiently high quality for scattering experiments. The procedure describes how to prepare and characterize protein and nucleic acid samples for both SAXS and SANS using gel electrophoresis, size exclusion chromatography and light scattering. Also included are procedures specific to X-rays (in-line size exclusion chromatography SAXS) and neutrons, specifically preparing samples for contrast matching/variation experiments and deuterium labeling of proteins. PMID:27711050

  1. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set-Effect of Pasteurization.

    Science.gov (United States)

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-02-26

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.
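    The kind of correction algorithm described is, at its core, a calibration regression of analyzer readings against reference-method values. A minimal sketch, with invented numbers and an assumed simple linear form, follows.

    ```python
    # Fit a linear correction (slope, intercept) against reference values.
    import numpy as np

    nir_fat = np.array([3.1, 4.0, 2.5, 3.8, 4.4])  # analyzer readings, g/dL
    ref_fat = np.array([3.4, 4.5, 2.7, 4.2, 4.9])  # chemical reference method
    slope, intercept = np.polyfit(nir_fat, ref_fat, deg=1)
    corrected = slope * nir_fat + intercept        # apply the correction
    print(slope, intercept, corrected)
    ```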

  2. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set—Effect of Pasteurization

    Science.gov (United States)

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-01-01

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified. PMID:26927169

  3. Preparing and measuring ultra-small radiocarbon samples with the ARTEMIS AMS facility in Saclay, France

    Energy Technology Data Exchange (ETDEWEB)

    Delque-Kolic, E., E-mail: emmanuelle.delque-kolic@cea.fr [LMC14, CEA Saclay, Batiment 450 Porte 4E, 91191 Gif sur Yvette (France); Comby-Zerbino, C.; Ferkane, S.; Moreau, C.; Dumoulin, J.P.; Caffy, I.; Souprayen, C.; Quiles, A.; Bavay, D.; Hain, S.; Setti, V. [LMC14, CEA Saclay, Batiment 450 Porte 4E, 91191 Gif sur Yvette (France)

    2013-01-15

    The ARTEMIS facility in Saclay, France measures, on average, 4500 samples a year for French organizations working in an array of fields, including environmental sciences, archeology and hydrology. In response to an increasing demand for the isolation of specific soil compounds and organic water fractions, we were motivated to evaluate our ability to reduce microgram samples using our standard graphitization lines and to measure the graphite thus obtained with our 3MV NEC Pelletron AMS. Our reduction facility consists of two fully automated graphitization lines. Each line has 12 reduction reactors with a reduction volume of 18 ml for the first line and 12 ml for the second. Under routine conditions, we determined that we could reduce the samples down to 10 μg of carbon, even if the graphitization yield is consequently affected by the lower sample mass. Our results when testing different Fe/C ratios suggest that an amount of 1.5 mg of Fe powder was ideal (instead of lower amounts of catalyst) to prevent the sample from deteriorating too quickly under the Cs⁺ beam, and to facilitate pressing procedures. Several sets of microsamples produced from HOxI standard, international references and backgrounds were measured. When measuring ¹⁴C-free wood charcoal and HOxI samples we determined that our modern and dead blanks, due to the various preparation steps, were 1.1 ± 0.8 and 0.2 ± 0.1 μg, respectively. The results presented here were obtained for IAEA-C1, ¹⁴C-free wood, IAEA-C6, IAEA-C2 and FIRI C.

  4. Preparing and measuring ultra-small radiocarbon samples with the ARTEMIS AMS facility in Saclay, France

    International Nuclear Information System (INIS)

    Delqué-Količ, E.; Comby-Zerbino, C.; Ferkane, S.; Moreau, C.; Dumoulin, J.P.; Caffy, I.; Souprayen, C.; Quilès, A.; Bavay, D.; Hain, S.; Setti, V.

    2013-01-01

    The ARTEMIS facility in Saclay France measures, on average, 4500 samples a year for French organizations working in an array of fields, including environmental sciences, archeology and hydrology. In response to an increasing demand for the isolation of specific soil compounds and organic water fractions, we were motivated to evaluate our ability to reduce microgram samples using our standard graphitization lines and to measure the graphite thus obtained with our 3MV NEC Pelletron AMS. Our reduction facility consists of two fully automated graphitization lines. Each line has 12 reduction reactors with a reduction volume of 18 ml for the first line and 12 ml for the second. Under routine conditions, we determined that we could reduce the samples down to 10 μg of carbon, even if the graphitization yield is consequently affected by the lower sample mass. Our results when testing different Fe/C ratios suggest that an amount of 1.5 mg of Fe powder was ideal (instead of lower amounts of catalyst) to prevent the sample from deteriorating too quickly under the Cs+ beam, and to facilitate pressing procedures. Several sets of microsamples produced from HOxI standard, international references and backgrounds were measured. When measuring 14 C-free wood charcoal and HOxI samples we determined that our modern and dead blanks, due to the various preparation steps, were of 1.1 ± 0.8 and 0.2 ± 0.1 μg, respectively. The results presented here were obtained for IAEA-C1, 14 C-free wood, IAEA-C6, IAEA-C2 and FIRI C.
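    The modern- and dead-blank masses quoted above feed naturally into a two-component mass-balance blank correction, in which the measured fraction modern is a mass-weighted mixture of sample, modern blank, and dead blank. The sketch below assumes exactly this standard form, which may differ in detail from the facility's own data reduction; the measured values are invented.

    ```python
    # Two-component blank correction (assumed standard form): solve
    # f_meas*(m + mm + md) = f_s*m + f_mod*mm + 0*md  for the sample's f_s.
    def blank_correct(f_meas, m_sample_ug, m_mod=1.1, m_dead=0.2, f_mod=1.0):
        m_tot = m_sample_ug + m_mod + m_dead
        return (f_meas * m_tot - f_mod * m_mod) / m_sample_ug

    # e.g. a 10 ug sample measured at fraction modern 0.52 (invented value)
    print(blank_correct(f_meas=0.52, m_sample_ug=10.0))
    ```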

  5. Small sample analysis using sputter atomization/resonance ionization mass spectrometry

    International Nuclear Information System (INIS)

    Christie, W.H.; Goeringer, D.E.

    1986-01-01

    We have used secondary ion mass spectrometry (SIMS) to investigate the emission of ions via argon sputtering from U metal, UO₂, and U₃O₈ samples. We have also used laser resonance ionization techniques to study argon-sputtered neutral atoms and molecules emitted from these same samples. For the case of U metal, a significant enhancement in detection sensitivity for U is obtained via SA/RIMS. For U in the fully oxidized form (U₃O₈), SA/RIMS offers no improvement in U detection sensitivity over conventional SIMS when sputtering with argon. 9 refs., 1 fig., 2 tabs

  6. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks

    Directory of Open Access Journals (Sweden)

    Cuicui Zhang

    2014-12-01

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the “small sample size” (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0–1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.

  7. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    Science.gov (United States)

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.
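    The classifier-selection step can be pictured with the classic 0-1 knapsack dynamic program: maximize summed accuracy subject to a budget on a similarity (inverse-diversity) cost. The paper's tailored algorithm differs; the values, costs, and budget below are invented for illustration.

    ```python
    # Classic 0-1 knapsack DP: pick classifiers maximizing summed accuracy
    # under a similarity-cost budget (all numbers invented).
    def knapsack_select(values, weights, capacity):
        n = len(values)
        dp = [[0.0] * (capacity + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for c in range(capacity + 1):
                dp[i][c] = dp[i - 1][c]
                if weights[i - 1] <= c:
                    dp[i][c] = max(dp[i][c],
                                   dp[i - 1][c - weights[i - 1]] + values[i - 1])
        chosen, c = [], capacity
        for i in range(n, 0, -1):        # trace back the chosen classifiers
            if dp[i][c] != dp[i - 1][c]:
                chosen.append(i - 1)
                c -= weights[i - 1]
        return dp[n][capacity], sorted(chosen)

    acc = [0.71, 0.69, 0.74, 0.66, 0.72]   # base classifier accuracies
    cost = [3, 2, 4, 1, 3]                 # higher = more similar to the rest
    print(knapsack_select(acc, cost, capacity=7))
    ```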

  8. Small-kernel, constrained least-squares restoration of sampled image data

    Science.gov (United States)

    Hazra, Rajeeb; Park, Stephen K.

    1992-01-01

    Following the work of Park (1989), who extended a derivation of the Wiener filter based on the incomplete discrete/discrete model to a more comprehensive end-to-end continuous/discrete/continuous model, it is shown that a derivation of the constrained least-squares (CLS) filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model. This results in an improved CLS restoration filter, which can be efficiently implemented as a small-kernel convolution in the spatial domain.
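    In the frequency domain, the discrete/discrete CLS filter takes the familiar form F̂ = H*G / (|H|² + λ|C|²), with H the blur transfer function and C a smoothness constraint such as a Laplacian. The sketch below implements this textbook form, not the paper's extended continuous/discrete/continuous derivation; the kernel sizes and λ are invented.

    ```python
    # Frequency-domain CLS restoration with a Laplacian constraint kernel.
    import numpy as np

    def cls_restore(g, psf, lam=0.01):
        H = np.fft.fft2(psf, g.shape)
        lap = np.zeros(g.shape)
        lap[:3, :3] = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]  # small kernel C
        C = np.fft.fft2(lap)
        F = np.conj(H) * np.fft.fft2(g) / (np.abs(H) ** 2 + lam * np.abs(C) ** 2)
        return np.real(np.fft.ifft2(F))

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    psf = np.ones((5, 5)) / 25.0                             # 5x5 box blur
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, img.shape)))
    print(np.abs(cls_restore(blurred, psf) - img).mean())    # small residual
    ```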

  9. The Dirichlet-Multinomial model for multivariate randomized response data and small samples

    NARCIS (Netherlands)

    Avetisyan, Marianna; Fox, Gerardus J.A.

    2012-01-01

    In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…

  10. The Dirichlet-Multinomial Model for Multivariate Randomized Response Data and Small Samples

    Science.gov (United States)

    Avetisyan, Marianna; Fox, Jean-Paul

    2012-01-01

    In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…

  11. In situ sampling of small volumes of soil solution using modified micro-suction cups

    NARCIS (Netherlands)

    Shen, Jianbo; Hoffland, E.

    2007-01-01

    Two modified designs of micro-pore-water samplers were tested for their capacity to collect unbiased soil solution samples containing zinc and citrate. The samplers had either ceramic or polyethersulfone (PES) suction cups. Laboratory tests of the micro-samplers were conducted using (a) standard

  12. Comparing distribution models for small samples of overdispersed counts of freshwater fish

    Science.gov (United States)

    Vaudor, Lise; Lamouroux, Nicolas; Olivier, Jean-Michel

    2011-05-01

    The study of species abundance often relies on repeated abundance counts whose number is limited by logistic or financial constraints. The distribution of abundance counts is generally right-skewed (i.e. with many zeros and few high values) and needs to be modelled for statistical inference. We used an extensive dataset involving about 100,000 fish individuals of 12 freshwater fish species collected in electrofishing points (7 m²) during 350 field surveys made in 25 stream sites, in order to compare the performance and the generality of four distribution models of counts (Poisson, negative binomial and their zero-inflated counterparts). The negative binomial distribution was the best model (Bayesian Information Criterion) for 58% of the samples (species-survey combinations) and was suitable for a variety of life histories, habitat, and sample characteristics. The performance of the models was closely related to sample statistics such as total abundance and variance. Finally, we illustrated the consequences of a distribution assumption by calculating confidence intervals around the mean abundance, either based on the most suitable distribution assumption or on an asymptotical, distribution-free (Student's) method. Student's method generally corresponded to narrower confidence intervals, especially when there were few (≤3) non-null counts in the samples.
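    The core comparison can be reproduced in a few lines: fit Poisson and negative-binomial models to a small overdispersed count sample by maximum likelihood and compare BIC. The data below are invented; the parameterization of the negative binomial follows scipy's (r, p) convention.

    ```python
    # Fit Poisson and negative-binomial by ML and compare BIC (invented data).
    import numpy as np
    from scipy import optimize, stats

    counts = np.array([0, 0, 1, 0, 7, 2, 0, 12, 3, 0])   # right-skewed counts
    n = len(counts)

    ll_pois = stats.poisson.logpmf(counts, counts.mean()).sum()  # MLE rate = mean

    def nb_negll(theta):                 # reparameterize: r > 0, 0 < p < 1
        r = np.exp(theta[0])
        p = 1.0 / (1.0 + np.exp(-theta[1]))
        return -stats.nbinom.logpmf(counts, r, p).sum()

    ll_nb = -optimize.minimize(nb_negll, [0.0, 0.0], method="Nelder-Mead").fun

    bic = lambda ll, k: -2 * ll + k * np.log(n)
    print("BIC Poisson:", bic(ll_pois, 1), " BIC NegBin:", bic(ll_nb, 2))
    ```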

  13. Including screening in van der Waals corrected density functional theory calculations: The case of atoms and small molecules physisorbed on graphene

    Energy Technology Data Exchange (ETDEWEB)

    Silvestrelli, Pier Luigi; Ambrosetti, Alberto [Dipartimento di Fisica e Astronomia, Università di Padova, via Marzolo 8, I–35131 Padova, Italy and DEMOCRITOS National Simulation Center of the Italian Istituto Officina dei Materiali (IOM) of the Italian National Research Council (CNR), Trieste (Italy)

    2014-03-28

    The Density Functional Theory (DFT)/van der Waals-Quantum Harmonic Oscillator-Wannier function (vdW-QHO-WF) method, recently developed to include the vdW interactions in approximated DFT by combining the quantum harmonic oscillator model with the maximally localized Wannier function technique, is applied to the cases of atoms and small molecules (X = Ar, CO, H₂, H₂O) weakly interacting with benzene and with the ideal planar graphene surface. Comparison is also presented with the results obtained by other DFT vdW-corrected schemes, including PBE+D, vdW-DF, vdW-DF2, rVV10, and by the simpler Local Density Approximation (LDA) and semilocal generalized gradient approximation approaches. While for the X-benzene systems all the considered vdW-corrected schemes perform reasonably well, it turns out that an accurate description of the X-graphene interaction requires a proper treatment of many-body contributions and of short-range screening effects, as demonstrated by adopting an improved version of the DFT/vdW-QHO-WF method. We also comment on the widespread attitude of relying on LDA to get a rough description of weakly interacting systems.

  14. Principal components in the discrimination of outliers: A study in simulation sample data corrected by Pearson's and Yates's chi-square distance

    Directory of Open Access Journals (Sweden)

    Manoel Vitor de Souza Veloso

    2016-04-01

    The current study employs Monte Carlo simulation to build a significance test indicating the principal components that best discriminate outliers. Different sample sizes were generated by multivariate normal distribution with different numbers of variables and correlation structures. Corrections by Pearson's and Yates's chi-square distances were applied for each sample size. Pearson's correction showed the best performance. As the number of variables increased, significance probabilities in favor of hypothesis H0 were reduced. To illustrate the proposed method, it was applied to a multivariate time series of sales volume rates in the state of Minas Gerais, obtained in different market segments.

  15. A new method to detect and correct sample tilt in scanning transmission electron microscopy bright-field imaging

    Energy Technology Data Exchange (ETDEWEB)

    Brown, H.G. [School of Physics, University of Melbourne, Parkville, Victoria 3010 (Australia); Ishikawa, R.; Sánchez-Santolino, G. [Institute of Engineering Innovation, School of Engineering, University of Tokyo, Tokyo 113-8656 (Japan); Lugg, N.R., E-mail: shibata@sigma.t.u-tokyo.ac.jp [Institute of Engineering Innovation, School of Engineering, University of Tokyo, Tokyo 113-8656 (Japan); Ikuhara, Y. [Institute of Engineering Innovation, School of Engineering, University of Tokyo, Tokyo 113-8656 (Japan); Allen, L.J. [School of Physics, University of Melbourne, Parkville, Victoria 3010 (Australia); Shibata, N. [Institute of Engineering Innovation, School of Engineering, University of Tokyo, Tokyo 113-8656 (Japan)

    2017-02-15

    Important properties of functional materials, such as ferroelectric shifts and octahedral distortions, are associated with displacements of the positions of lighter atoms in the unit cell. Annular bright-field scanning transmission electron microscopy is a good experimental method for investigating such phenomena due to its ability to image light and heavy atoms simultaneously. To map atomic positions at the required accuracy precise angular alignment of the sample with the microscope optical axis is necessary, since misalignment (tilt) of the specimen contributes to errors in position measurements of lighter elements in annular bright-field imaging. In this paper it is shown that it is possible to detect tilt with the aid of images recorded using a central bright-field detector placed within the inner radius of the annular bright-field detector. For a probe focus near the middle of the specimen the central bright-field image becomes especially sensitive to tilt and we demonstrate experimentally that misalignment can be detected with a precision of less than a milliradian, as we also confirm in simulation. Coma in the probe, an aberration that can be misidentified as tilt of the specimen, is also investigated and it is shown how the effects of coma and tilt can be differentiated. The effects of tilt may be offset to a large extent by shifting the diffraction plane detector an amount equivalent to the specimen tilt and we provide an experimental proof of principle of this using a segmented detector system. - Highlights: • Octahedral distortions are associated with displacements of lighter atoms. • Annular bright-field imaging is sensitive to light and heavy atoms simultaneously. • Mistilt of the specimen leads to errors in position measurements of lighter elements. • It is possible to detect tilt using images taken by a central bright-field detector. • Tilt may be offset by shifting the diffraction plane detector by an equivalent amount.

  16. Calculation code of heterogeneity effects for analysis of small sample reactivity worth

    International Nuclear Information System (INIS)

    Okajima, Shigeaki; Mukaiyama, Takehiko; Maeda, Akio.

    1988-03-01

    The discrepancy between experimental and calculated central reactivity worths has been one of the most significant interests for the analysis of fast reactor critical experiment. Two effects have been pointed out so as to be taken into account in the calculation as the possible cause of the discrepancy; one is the local heterogeneity effect which is associated with the measurement geometry, the other is the heterogeneity effect on the distribution of the intracell adjoint flux. In order to evaluate these effects in the analysis of FCA actinide sample reactivity worth the calculation code based on the collision probability method was developed. The code can handle the sample size effect which is one of the local heterogeneity effects and also the intracell adjoint heterogeneity effect. (author)

  17. Gravimetric and volumetric approaches adapted for hydrogen sorption measurements with in situ conditioning on small sorbent samples

    International Nuclear Information System (INIS)

    Poirier, E.; Chahine, R.; Tessier, A.; Bose, T.K.

    2005-01-01

    We present high sensitivity (0 to 1 bar, 295 K) gravimetric and volumetric hydrogen sorption measurement systems adapted for in situ sample conditioning at high temperature and high vacuum. These systems are designed especially for experiments on sorbents available in small masses (mg) and requiring thorough degassing prior to sorption measurements. Uncertainty analysis from instrumental specifications and hydrogen absorption measurements on palladium are presented. The gravimetric and volumetric systems yield cross-checkable results within about 0.05 wt % on samples weighing from (3 to 25) mg. Hydrogen storage capacities of single-walled carbon nanotubes measured at 1 bar and 295 K with both systems are presented

  18. Using the multi-objective optimization replica exchange Monte Carlo enhanced sampling method for protein-small molecule docking.

    Science.gov (United States)

    Wang, Hongrui; Liu, Hongwei; Cai, Leixin; Wang, Caixia; Lv, Qiang

    2017-07-10

    In this study, we extended the replica exchange Monte Carlo (REMC) sampling method to protein-small molecule docking conformational prediction using RosettaLigand. In contrast to the traditional Monte Carlo (MC) and REMC sampling methods, these methods use multi-objective optimization Pareto front information to facilitate the selection of replicas for exchange. The Pareto front information generated to select lower energy conformations as representative conformation structure replicas can facilitate the convergence of the available conformational space, including available near-native structures. Furthermore, our approach directly provides min-min scenario Pareto optimal solutions, as well as a hybrid of the min-min and max-min scenario Pareto optimal solutions with lower energy conformations for use as structure templates in the REMC sampling method. These methods were validated based on a thorough analysis of a benchmark data set containing 16 benchmark test cases. An in-depth comparison between MC, REMC, multi-objective optimization-REMC (MO-REMC), and hybrid MO-REMC (HMO-REMC) sampling methods was performed to illustrate the differences between the four conformational search strategies. Our findings demonstrate that the MO-REMC and HMO-REMC conformational sampling methods are powerful approaches for obtaining protein-small molecule docking conformational predictions based on the binding energy of complexes in RosettaLigand.
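    The replica-exchange step at the heart of REMC is compact: neighboring replicas at inverse temperatures βi and βj swap configurations with probability min(1, exp[(βi - βj)(Ei - Ej)]). The sketch below shows only this generic exchange move, with random stand-ins for RosettaLigand energies; the Pareto-based replica selection described above is not reproduced.

    ```python
    # Generic replica-exchange move with mock energies (not RosettaLigand).
    import math, random

    def try_exchange(beta_i, beta_j, e_i, e_j):
        """Metropolis criterion for swapping two replicas."""
        delta = (beta_i - beta_j) * (e_i - e_j)
        return delta >= 0 or random.random() < math.exp(delta)

    random.seed(0)
    betas = [1.0 / t for t in (1.0, 1.5, 2.2, 3.3)]           # inverse temperatures
    energies = [random.uniform(-50.0, -10.0) for _ in betas]  # mock docking scores
    for i in range(len(betas) - 1):                           # one exchange sweep
        if try_exchange(betas[i], betas[i + 1], energies[i], energies[i + 1]):
            energies[i], energies[i + 1] = energies[i + 1], energies[i]
    print(energies)
    ```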

  19. Design and experimental testing of air slab caps which convert commercial electron diodes into dual purpose, correction-free diodes for small field dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Charles, P. H., E-mail: paulcharles111@gmail.com [Department of Radiation Oncology, Princess Alexandra Hospital, Ipswich Road, Woolloongabba, Brisbane, Queensland 4102, Australia and School of Chemistry, Physics and Mechanical Engineering, Queensland University of Technology, GPO Box 2434, Brisbane, Queensland 4001 (Australia); Cranmer-Sargison, G. [Department of Medical Physics, Saskatchewan Cancer Agency, 20 Campus Drive, Saskatoon, Saskatchewan S7L 3P6, Canada and College of Medicine, University of Saskatchewan, 107 Wiggins Road, Saskatoon, Saskatchewan S7N 5E5 (Canada); Thwaites, D. I. [Institute of Medical Physics, School of Physics, University of Sydney, New South Wales 2006 (Australia); Kairn, T. [School of Chemistry, Physics and Mechanical Engineering, Queensland University of Technology, GPO Box 2434, Brisbane, Queensland 4001, Australia and Genesis CancerCare Queensland, The Wesley Medical Centre, Suite 1, 40 Chasely Street, Auchenflower, Brisbane, Queensland 4066 (Australia); Crowe, S. B.; Langton, C. M.; Trapp, J. V. [School of Chemistry, Physics and Mechanical Engineering, Queensland University of Technology, GPO Box 2434, Brisbane, Queensland 4001 (Australia); Pedrazzini, G. [Genesis CancerCare Queensland, The Wesley Medical Centre, Suite 1, 40 Chasely Street, Auchenflower, Brisbane, Queensland 4066 (Australia); Aland, T.; Kenny, J. [Epworth Radiation Oncology, 89 Bridge Road, Richmond, Melbourne, Victoria 3121 (Australia)

    2014-10-15

    Purpose: Two diodes which do not require correction factors for small field relative output measurements are designed and validated using experimental methodology. This was achieved by adding an air layer above the active volume of the diode detectors, which canceled out the increase in response of the diodes in small fields relative to standard field sizes. Methods: Due to the increased density of silicon and other components within a diode, additional electrons are created. In very small fields, a very small air gap acts as an effective filter of electrons with a high angle of incidence. The aim was to design a diode that balanced these perturbations to give a response similar to a water-only geometry. Three thicknesses of air were placed at the proximal end of a PTW 60017 electron diode (PTWe) using an adjustable "air cap". A set of output ratios (OR_det^fclin) for square field sizes of side length down to 5 mm was measured using each air thickness and compared to OR_det^fclin measured using an IBA stereotactic field diode (SFD). The small field correction factor k_{Qclin,Qmsr}^{fclin,fmsr} was transferred from the SFD to the PTWe diode and plotted as a function of air gap thickness for each field size. This enabled the optimal air gap thickness to be obtained by observing which thickness of air was required such that k_{Qclin,Qmsr}^{fclin,fmsr} was equal to 1.00 at all field sizes. A similar procedure was used to find the optimal air thickness required to make a modified Sun Nuclear EDGE detector (EDGEe) which is "correction-free" in small field relative dosimetry. In addition, the feasibility of experimentally transferring k_{Qclin,Qmsr}^{fclin,fmsr}…

  20. Determination of Organic Pollutants in Small Samples of Groundwaters by Liquid-Liquid Extraction and Capillary Gas Chromatography

    DEFF Research Database (Denmark)

    Harrison, I.; Leader, R.U.; Higgo, J.J.W.

    1994-01-01

    A method is presented for the determination of 22 organic compounds in polluted groundwaters. The method includes liquid-liquid extraction of the base/neutral organics from small, alkaline groundwater samples, followed by derivatisation and liquid-liquid extraction of phenolic compounds after neutralisation. The extracts were analysed by capillary gas chromatography. Dual detection by flame ionisation and electron capture was used to reduce analysis time.

  1. A summary of methods of predicting reliability life of nuclear equipment with small samples

    International Nuclear Information System (INIS)

    Liao Weixian

    2000-03-01

    Some nuclear equipment is manufactured in small batches, e.g., 1–3 sets. Its service life may be very difficult to determine experimentally for reasons of economy and technology. A method combining theoretical analysis with material tests to predict the life of equipment is put forward, based on the fact that equipment consists of parts or elements made of different materials. The whole life of an equipment part consists of the crack forming life (i.e., the fatigue life or the damage accumulation life) and the crack extension life. Methods of predicting machine life are systematically summarized, with the emphasis on those which use theoretical analysis to substitute for large-scale prototype experiments. Meanwhile, methods and steps for predicting reliability life are described, taking into consideration the randomness of various variables and parameters in engineering. Finally, the latest advances and trends in machine life prediction are discussed.

  2. An Inset CT Specimen for Evaluating Fracture in Small Samples of Material

    Science.gov (United States)

    Yahyazadehfar, M.; Nazari, A.; Kruzic, J.J.; Quinn, G.D.; Arola, D.

    2013-01-01

    In evaluations of the fracture behavior of hard tissues and many biomaterials, the volume of material available to study is not always sufficient to apply a standard method of practice. In the present study an inset Compact Tension (inset CT) specimen is described, which uses a small cube of material (approximately 2×2×2 mm³) that is molded within a secondary material to form the compact tension geometry. A generalized equation describing the Mode I stress intensity was developed for the specimen using the solutions from a finite element model that was defined over permissible crack lengths, variations in specimen geometry, and a range in elastic properties of the inset and mold materials. A validation of the generalized equation was performed using estimates for the fracture toughness of a commercial dental composite via the "inset CT" specimen and the standard geometry defined by ASTM E399. Results showed that the average fracture toughness obtained from the new specimen (1.23 ± 0.02 MPa·m^0.5) was within 2% of that from the standard. Applications of the inset CT specimen are presented for experimental evaluations of the crack growth resistance of dental enamel and root dentin, including their fracture resistance curves. Potential errors in adopting this specimen are then discussed, including the effects of debonding between the inset and molding material on the estimated stress intensity distribution. Results of the investigation show that the inset CT specimen offers a viable approach for studying the fracture behavior of small volumes of structural materials. PMID:24268892

  3. Density-viscosity product of small-volume ionic liquid samples using quartz crystal impedance analysis.

    Science.gov (United States)

    McHale, Glen; Hardacre, Chris; Ge, Rile; Doy, Nicola; Allen, Ray W K; MacInnes, Jordan M; Bown, Mark R; Newton, Michael I

    2008-08-01

    Quartz crystal impedance analysis has been developed as a technique to assess whether room-temperature ionic liquids are Newtonian fluids and as a small-volume method for determining the values of their viscosity-density product, ρη. Changes in the impedance spectrum of a 5-MHz fundamental frequency quartz crystal induced by a water-miscible room-temperature ionic liquid, 1-butyl-3-methylimidazolium trifluoromethylsulfonate ([C4mim][OTf]), were measured. From coupled frequency shift and bandwidth changes as the concentration was varied from 0 to 100% ionic liquid, it was determined that this liquid provided a Newtonian response. A second, water-immiscible ionic liquid, 1-butyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide [C4mim][NTf2], with concentration varied using methanol, was tested and also found to provide a Newtonian response. In both cases, the values of the square root of the viscosity-density product deduced from the small-volume quartz crystal technique were consistent with those measured using a viscometer and density meter. The third harmonic of the crystal was found to provide the closest agreement between the two measurement methods; the pure ionic liquids had the largest difference, of approximately 10%. In addition, 18 pure ionic liquids were tested, and for 11 of these, good-quality frequency shift and bandwidth data were obtained; these all had a Newtonian response. The frequency shift of the third harmonic was found to vary linearly with the square root of the viscosity-density product of the pure ionic liquids up to a value of √(ρη) ≈ 18 kg m⁻² s⁻¹/², but with a slope 10% smaller than that predicted by the Kanazawa and Gordon equation. It is envisaged that the quartz crystal technique could be used in a high-throughput microfluidic system for characterizing ionic liquids.
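    The Kanazawa and Gordon equation referenced above relates the frequency shift of a crystal in a Newtonian liquid to the density-viscosity product, Δf = -f₀^{3/2} √(ρη / (π ρ_q μ_q)). A minimal numerical sketch, using standard AT-cut quartz constants and the upper-end √(ρη) value quoted in the abstract, follows.

    ```python
    # Kanazawa-Gordon frequency shift for a 5 MHz crystal in a liquid.
    import math

    RHO_Q = 2648.0    # AT-cut quartz density, kg/m^3
    MU_Q = 2.947e10   # quartz shear modulus, Pa

    def kanazawa_shift(f0, rho_eta):
        """Delta f (Hz) for fundamental f0 (Hz), rho*eta in kg^2 m^-4 s^-1."""
        return -f0 ** 1.5 * math.sqrt(rho_eta / (math.pi * RHO_Q * MU_Q))

    # sqrt(rho*eta) = 18 kg m^-2 s^-1/2, the upper end quoted above
    print(kanazawa_shift(5e6, 18.0 ** 2))   # about -13 kHz
    ```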

  4. Interstitial water studies on small core samples, Deep Sea Drilling Project, Leg 5

    Science.gov (United States)

    Manheim, F. T.; Chan, K.M.; Sayles, F.L.

    1970-01-01

    Leg 5 samples fall into two categories with respect to interstitial water composition: 1) rapidly deposited terrigenous or appreciably terrigenous deposits, such as in Hole 35 (western Escanaba trough, off Cape Mendocino, California); and, 2) slowly deposited pelagic clays and biogenic muds and oozes. Interstitial waters in the former show modest to slight variations in chloride and sodium, but drastic changes in non-conservative ions such as magnesium and sulfate. The pelagic deposits show only relatively minor changes in both conservative and non-conservative pore fluid constituents. As was pointed out in earlier Leg Reports, it is believed that much of the variation in chloride in pore fluids within individual holes is attributable to the manipulation of samples on board ship and in the laboratory. On the other hand, the scatter in sodium is due in part to analytical error (on the order of 2 to 3 per cent, in terms of a standard deviation), and it probably accounts for most of the discrepancies in total anion and cation balance. All constituents reported here, with the exception of bulk water content, were analyzed on water samples which were sealed in plastic tubes aboard ship and were subsequently opened and divided into weighed aliquots in the laboratory. Analytical methods follow the atomic absorption, wet chemical and emission spectrochemical techniques briefly summarized in previous reports, e.g. Manheim et al., 1969, and Chan and Manheim, 1970. The authors acknowledge assistance from W. Sunda, D. Kerr, C. Lawson and H. Richards, and thank D. Spencer, P. Brewer and E. Degens for allowing the use of equipment and laboratory facilities.

  5. exTAS - next-generation TAS for small samples and extreme conditions

    International Nuclear Information System (INIS)

    Kulda, J.; Hiess, A.

    2011-01-01

    The currently used implementation of horizontally and vertically focusing optics in three-axis spectrometers (TAS) permits efficient studies of excitations in sub-cm³-sized single crystals. With the present proposal we wish to stimulate a further paradigm shift into the domain of mm³-sized samples. exTAS combines highly focused mm-sized focal spots, boosting the sensitivity limits, with a spectrometer layout down-scaled to table-top size to provide high flexibility in optimizing acceptance angles and to achieve sub-millimeter positioning accuracy. (authors)

  6. Sophistication of 14C measurement at JAEA-AMS-MUTSU. Attempt on a small quantity of sample

    International Nuclear Information System (INIS)

    Tanaka, Takayuki; Kabuto, Shoji; Kinoshita, Naoki; Yamamoto, Nobuo

    2010-01-01

    In investigations of substance dynamics using molecular weight and chemical fractionation, the use of ¹⁴C measurement by accelerator mass spectrometry (AMS) has started. As a result of such fractionation, the sample amounts required for AMS measurement have been downsized, and we expect this trend toward small sample quantities to accelerate steadily in the future. Since ¹⁴C measurement by the AMS established at the Mutsu office currently requires about 2 mg of sample, our AMS lags behind the others in this trend. We therefore attempted to reduce the sample amount needed for ¹⁴C measurement by our AMS. In this study, we modified the shape of the target piece in which the sample is packed and which is required for radiocarbon measurement by our AMS. Moreover, we improved the apparatus used to pack the sample. As a result of these improvements, we showed that it is possible to measure ¹⁴C using our AMS with a sample amount of about 0.5 mg. (author)

  7. A method for multiple sequential analyses of macrophage functions using a small single cell sample

    Directory of Open Access Journals (Sweden)

    F.R.F. Nascimento

    2003-09-01

    Microbial pathogens such as bacillus Calmette-Guérin (BCG) induce the activation of macrophages. Activated macrophages can be characterized by the increased production of reactive oxygen and nitrogen metabolites, generated via NADPH oxidase and inducible nitric oxide synthase, respectively, and by the increased expression of major histocompatibility complex class II molecules (MHC II). Multiple microassays have been developed to measure these parameters. Usually each assay requires 2–5 × 10⁵ cells per well. In some experimental conditions the number of cells is the limiting factor for the phenotypic characterization of macrophages. Here we describe a method whereby this limitation can be circumvented. Using a single 96-well microassay and a very small number of peritoneal cells obtained from C3H/HePas mice, containing as little as ≤2 × 10⁵ macrophages per well, we determined sequentially the oxidative burst (H₂O₂), nitric oxide production and MHC II (IAk) expression of BCG-activated macrophages. More specifically, with 100 µl of cell suspension it was possible to quantify H₂O₂ release and nitric oxide production after 1 and 48 h, respectively, and IAk expression after 48 h of cell culture. In addition, this microassay is easy to perform, highly reproducible and more economical.

  8. Aspects of working with manipulators and small samples in an αβγ-box

    International Nuclear Information System (INIS)

    Zubler, Robert; Bertsch, Johannes; Heimgartner, Peter

    2007-01-01

    The Laboratory for Materials Behaviour, operator of the Hotlab and part of the Paul Scherrer Institute (PSI), studies corrosion and mechanical phenomena of irradiated fuel rod cladding materials. To improve the options for mechanical tests, a heavily shielded (αβγ) universal electro-mechanical testing machine has been installed. The machine is equipped with an 800 deg. C furnace. The furnace chamber is part of the inner α-box and can be flushed with inert gas. The specimen can be observed by camera during the tests. The foreseen active specimens are very small and cannot be handled by hand. Before starting active tests, tools and installations had to be improved and a great deal of manipulator practice had to be completed. For the operational permit, given by the authorities (Swiss Federal Nuclear Safety Inspectorate, HSK), many safety data concerning furnace cooling, air pressure and γ-shielding had to be collected. Up to now various inactive tests have been performed. Besides the operational and safety features, results of inactive mechanical tests and tests for active commissioning are presented. (authors)

  9. Liquid-chromatographic analysis for cyclosporine with use of a microbore column and small sample volume.

    Science.gov (United States)

    Annesley, T; Matz, K; Balogh, L; Clayton, L; Giacherio, D

    1986-07-01

    This liquid-chromatographic assay requires 0.2 to 0.5 mL of whole blood, avoids the use of diethyl ether, and consumes only 10 to 20% of the solvents used in prior methods. Sample preparation involves an acidic extraction with methyl-t-butyl ether, performed in a 13 X 100 mm disposable glass tube, then a short second extraction of the organic phase with sodium hydroxide. After evaporation of the methyl-t-butyl ether, chromatography is performed on an "Astec" 2.0-mm (i.d.) octyl column. We compared results by this procedure with those by use of earlier larger-scale extractions and their respective 4.6-mm (i.d.) columns; analytical recoveries of cyclosporins A and D were comparable with previous findings and results for patients' specimens were equivalent, but the microbore columns provided greatly increased resolution and sensitivity.

  10. Sampling large landscapes with small-scale stratification-User's Manual

    Science.gov (United States)

    Bart, Jonathan

    2011-01-01

    This manual explains procedures for partitioning a large landscape into plots, assigning the plots to strata, and selecting plots in each stratum to be surveyed. These steps are referred to as the "sampling large landscapes (SLL) process." We assume that users of the manual have a moderate knowledge of ArcGIS and Microsoft® Excel. The manual is written for a single user but in many cases, some steps will be carried out by a biologist designing the survey and some steps will be carried out by a quantitative assistant. Thus, the manual essentially may be passed back and forth between these users. The SLL process primarily has been used to survey birds, and we refer to birds as subjects of the counts. The process, however, could be used to count any objects.

  11. A simple method for regional cerebral blood flow measurement by one-point arterial blood sampling and 123I-IMP microsphere model (part 2). A study of time correction of one-point blood sample count

    International Nuclear Information System (INIS)

    Masuda, Yasuhiko; Makino, Kenichi; Gotoh, Satoshi

    1999-01-01

    In our previous paper regarding determination of the regional cerebral blood flow (rCBF) using the ¹²³I-IMP microsphere model, we reported that the accuracy of determination of the integrated value of the input function from one-point arterial blood sampling can be increased by performing a correction using the 5 min : 29 min ratio of the whole-brain count. However, failure to carry out the arterial blood collection at exactly 5 minutes after ¹²³I-IMP injection causes errors with this method, and there is thus a time limitation. We have now revised our method so that the one-point arterial blood sampling can be performed at any time between 5 and 20 minutes after ¹²³I-IMP injection, with the addition of a correction step for the sampling time. This revised method permits more accurate estimation of the integral of the input function. The method was then applied to 174 experimental subjects: one-point blood samples were collected at random times between 5 and 20 minutes, and the estimated values for the continuous arterial octanol extraction count (COC) were determined. The mean error rate between the COC and the actually measured continuous arterial octanol extraction count (OC) was 3.6%, and the standard deviation was 12.7%. Accordingly, in 70% of the cases the rCBF could be estimated within an error rate of 13%, while in 95% of the cases estimation was possible within an error rate of 25%. This improved method is a simple technique for determination of rCBF by the ¹²³I-IMP microsphere model and one-point arterial blood sampling which no longer has a time limitation and does not require an octanol extraction step. (author)

  12. Slurry sampling high-resolution continuum source electrothermal atomic absorption spectrometry for direct beryllium determination in soil and sediment samples after elimination of SiO interference by least-squares background correction.

    Science.gov (United States)

    Husáková, Lenka; Urbanová, Iva; Šafránková, Michaela; Šídová, Tereza

    2017-12-01

    In this work a simple, efficient, and environmentally friendly method is proposed for the determination of Be in soil and sediment samples employing slurry sampling and high-resolution continuum source electrothermal atomic absorption spectrometry (HR-CS-ETAAS). The spectral effects originating from SiO species were identified and successfully corrected by means of a mathematical correction algorithm. A fractional factorial design was employed to assess the parameters affecting the analytical results and especially to help in the development of the slurry preparation and the optimization of measuring conditions. The effects of seven analytical variables, including particle size, concentration of glycerol and HNO₃ for stabilization and analyte extraction, respectively, the effect of ultrasonic agitation for slurry homogenization, concentration of chemical modifier, pyrolysis temperature and atomization temperature, were investigated by a 2⁷⁻³ replicate (n = 3) design. Using the optimized experimental conditions, the proposed method allowed the determination of Be with a detection limit of 0.016 mg kg⁻¹ and a characteristic mass of 1.3 pg. Optimum results were obtained after preparing the slurries by weighing 100 mg of a sample with particle size < 54 µm and adding 25 mL of 20% w/w glycerol. The use of 1 µg Rh and 50 µg citric acid was found satisfactory for analyte stabilization. Accurate data were obtained with the use of matrix-free calibration. The accuracy of the method was confirmed by analysis of two certified reference materials (NIST SRM 2702 Inorganics in Marine Sediment and IGI BIL-1 Baikal Bottom Silt) and by comparison of the results obtained for ten real samples by slurry sampling with those determined after microwave-assisted extraction by inductively coupled plasma time-of-flight mass spectrometry (TOF-ICP-MS). The reported method has a precision better than 7%.
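    Least-squares background correction of the kind described fits a reference spectrum of the interferent to the measured spectrum and subtracts the best-scaled copy. The sketch below uses synthetic stand-ins for the SiO structure and the Be line; in practice the pixels containing analyte lines are excluded from the fit.

    ```python
    # Scale-and-subtract least-squares background correction (synthetic data).
    import numpy as np

    def lsbc(measured, reference):
        """Subtract the least-squares-scaled reference spectrum."""
        a, *_ = np.linalg.lstsq(reference[:, None], measured, rcond=None)
        return measured - a[0] * reference

    pix = np.arange(200)                                       # detector pixels
    sio = np.exp(-0.5 * ((pix - 80) / 15.0) ** 2)              # broad SiO band
    be = 0.05 * np.exp(-0.5 * ((pix - 120) / 1.5) ** 2)        # narrow Be line
    corrected = lsbc(0.7 * sio + be, sio)
    print(corrected[118:123].round(4))                         # Be line survives
    ```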

  13. Targeted histology sampling from atypical small acinar proliferation area detected by repeat transrectal prostate biopsy

    Directory of Open Access Journals (Sweden)

    A. V. Karman

    2017-01-01

    Objective: to define the approach to the management of patients with a detected ASAP area. Materials and methods: In the period from 2012 through 2015, 494 patients with previously negative biopsy and remaining suspicion of prostate cancer (PCa) were examined. The patients underwent repeat 24-core multifocal prostate biopsy with additional tissue samples taken from suspicious areas detected by multiparametric magnetic resonance imaging and transrectal ultrasound. An isolated ASAP area was found in 127 (25.7 %) of the 494 examined men. All of them were offered repeat targeted transrectal biopsy of this area. Targeted transrectal ultrasound-guided biopsy of the ASAP area was performed in 56 (44.1 %) of the 127 patients, 53 of them being included in the final analysis. Results: PCa was diagnosed in 14 (26.4 %) of the 53 patients, their mean age being 64.4 ± 6.9 years. The average level of prostate-specific antigen (PSA) in PCa patients was 6.8 ± 3.0 ng/ml, in those with benign lesions – 9.3 ± 6.5 ng/ml; the percentage ratio of free/total PSA with PCa was 16.2 ± 7.8 %, with benign lesions – 23.3 ± 7.7 %; PSA density in PCa patients was 0.14 ± 0.07 ng/ml/cm³, in those with benign lesions – 0.15 ± 0.12 ng/ml/cm³. Therefore, when an ASAP area is detected in repeat prostate biopsy samples, it is advisable that targeted extended biopsy of this area be performed.

  14. k-space sampling optimization for ultrashort TE imaging of cortical bone: Applications in radiation therapy planning and MR-based PET attenuation correction

    International Nuclear Information System (INIS)

    Hu, Lingzhi; Traughber, Melanie; Su, Kuan-Hao; Pereira, Gisele C.; Grover, Anu; Traughber, Bryan; Muzic, Raymond F. Jr.

    2014-01-01

    Purpose: The ultrashort echo-time (UTE) sequence is a promising MR pulse sequence for imaging cortical bone, which is otherwise difficult to image using conventional MR sequences and also poses strong attenuation for photons in radiation therapy and PET imaging. The authors report here a systematic characterization of cortical bone signal decay and a scanning time optimization strategy for the UTE sequence through k-space undersampling, which can result in up to a 75% reduction in acquisition time. Using the undersampled UTE imaging sequence, the authors also attempted to quantitatively investigate the MR properties of cortical bone in healthy volunteers, thus demonstrating the feasibility of using such a technique for generating bone-enhanced images which can be used for radiation therapy planning and attenuation correction with PET/MR. Methods: An angularly undersampled, radially encoded UTE sequence was used for scanning the brains of healthy volunteers. Quantitative MR characterization of tissue properties, including water fraction and R2* = 1/T2*, was performed by analyzing the UTE images acquired at multiple echo times. The impact of different sampling rates was evaluated through systematic comparison of the MR image quality, bone-enhanced image quality, image noise, water fraction, and R2* of cortical bone. Results: A reduced angular sampling rate of the UTE trajectory achieves acquisition durations in proportion to the sampling rate and in as short as 25% of the time required for full sampling using a standard Cartesian acquisition, while preserving unique MR contrast within the skull at the cost of a minimal increase in noise level. The R2* of human skull was measured as 0.2–0.3 ms⁻¹ depending on the specific region, which is more than ten times greater than the R2* of soft tissue. The water fraction in human skull was measured to be 60%–80%, which is significantly less than the >90% water fraction in brain. High-quality, bone…
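    R2* as quoted above is typically obtained from a mono-exponential fit of the multi-echo signal, S(TE) = S₀·exp(-R2*·TE), which becomes linear in log space. A minimal sketch with invented echo times and a noise-free signal in the quoted cortical-bone range follows; the authors' actual fitting pipeline is not reproduced.

    ```python
    # Log-linear fit of S(TE) = S0*exp(-R2* * TE) from multi-echo magnitudes.
    import numpy as np

    te = np.array([0.07, 0.5, 1.0, 2.0, 3.0])       # echo times, ms (invented)
    sig = 100.0 * np.exp(-0.25 * te)                # cortical-bone-like decay
    slope, intercept = np.polyfit(te, np.log(sig), deg=1)
    print("S0 =", np.exp(intercept), " R2* =", -slope, "ms^-1")
    ```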

  15. Application of bias correction methods to improve U{sub 3}Si{sub 2} sample preparation for quantitative analysis by WDXRF

    Energy Technology Data Exchange (ETDEWEB)

    Scapin, Marcos A.; Guilhen, Sabine N.; Azevedo, Luciana C. de; Cotrim, Marycel E.B.; Pires, Maria Ap. F., E-mail: mascapin@ipen.br, E-mail: snguilhen@ipen.br, E-mail: lvsantana@ipen.br, E-mail: mecotrim@ipen.br, E-mail: mapires@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    The determination of silicon (Si), total uranium (U) and impurities in uranium-silicide (U{sub 3}Si{sub 2}) samples by the wavelength dispersive X-ray fluorescence technique (WDXRF) has already been validated and is currently implemented at IPEN's X-Ray Fluorescence Laboratory (IPEN-CNEN/SP) in São Paulo, Brazil. Sample preparation requires the use of approximately 3 g of H{sub 3}BO{sub 3} as sample holder and 1.8 g of U{sub 3}Si{sub 2}. However, because boron is a neutron absorber, this procedure precludes recovery of the U{sub 3}Si{sub 2} sample, which, over time, considering routine analysis, may add up to significant unusable uranium waste. An estimated average of 15 samples per month are expected to be analyzed by WDXRF, resulting in approx. 320 g of U{sub 3}Si{sub 2} that would not return to the nuclear fuel cycle. This not only entails production losses, but also creates another problem: radioactive waste management. The purpose of this paper is to present the mathematical models that may be applied for the correction of systematic errors when the H{sub 3}BO{sub 3} sample holder is substituted by cellulose acetate {[C_6H_7O_2(OH)_3_-_m(OOCCH_3)m], m = 0∼3}, thus enabling recovery of the U{sub 3}Si{sub 2} sample. The results demonstrate that the adopted mathematical model is statistically satisfactory, allowing the optimization of the procedure. (author)
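
    The abstract does not reproduce the models themselves; as a rough illustration of this kind of bias correction, one can fit a linear transfer function between paired results obtained with the two sample holders and apply it to subsequent cellulose-acetate measurements. All numbers below are hypothetical:

```python
import numpy as np

# Hypothetical paired results (wt% Si) for reference samples measured with the
# validated H3BO3 holder (taken as the reference) and the cellulose-acetate holder
x = np.array([7.1, 7.4, 7.8, 8.0, 8.3])   # cellulose acetate (systematically biased)
y = np.array([7.3, 7.6, 8.0, 8.2, 8.5])   # H3BO3 reference values

# Fit y = a*x + b and use it to correct future cellulose-acetate readings
a, b = np.polyfit(x, y, 1)

def correct(reading):
    return a * reading + b

print(f"correction: y = {a:.3f}*x {b:+.3f}")
print("corrected 7.9 ->", round(correct(7.9), 3))
```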

  16. Algorithm for computing significance levels using the Kolmogorov-Smirnov statistic and valid for both large and small samples

    Energy Technology Data Exchange (ETDEWEB)

    Kurtz, S.E.; Fields, D.E.

    1983-10-01

    The KSTEST code presented here is designed to perform the Kolmogorov-Smirnov one-sample test. The code may be used as a stand-alone program or the principal subroutines may be excerpted and used to service other programs. The Kolmogorov-Smirnov one-sample test is a nonparametric goodness-of-fit test. A number of codes to perform this test are in existence, but they suffer from the inability to provide meaningful results in the case of small sample sizes (number of values less than or equal to 80). The KSTEST code overcomes this inadequacy by using two distinct algorithms. If the sample size is greater than 80, an asymptotic series developed by Smirnov is evaluated. If the sample size is 80 or less, a table of values generated by Birnbaum is referenced. Valid results can be obtained from KSTEST when the sample contains from 3 to 300 data points. The program was developed on a Digital Equipment Corporation PDP-10 computer using the FORTRAN-10 language. The code size is approximately 450 card images and the typical CPU execution time is 0.19 s.
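
    The same two-regime logic is easy to mirror in a modern language. In the sketch below (a reimplementation of the idea, not the KSTEST FORTRAN source), the large-sample branch evaluates Smirnov's asymptotic series Q(λ) = 2·Σ_{j≥1} (-1)^{j-1} e^{-2j²λ²} with λ = √n·D, while the small-sample branch, which KSTEST serves from Birnbaum's tables, is left as a stub:

```python
import numpy as np

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic D against a hypothesized CDF."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    f = cdf(x)
    d_plus = np.max(np.arange(1, n + 1) / n - f)
    d_minus = np.max(f - np.arange(0, n) / n)
    return max(d_plus, d_minus)

def ks_significance(d, n, terms=100):
    """Probability of exceeding D under H0: Smirnov's asymptotic series for
    n > 80; for n <= 80 a table lookup (e.g. Birnbaum) would be required."""
    if n <= 80:
        raise NotImplementedError("small-sample case: consult tabulated values")
    lam = np.sqrt(n) * d
    j = np.arange(1, terms + 1)
    return float(2.0 * np.sum((-1.0) ** (j - 1) * np.exp(-2.0 * (j * lam) ** 2)))

# Example: 100 uniform(0, 1) draws tested against the uniform CDF
rng = np.random.default_rng(0)
s = rng.uniform(size=100)
d = ks_statistic(s, lambda x: x)
print(d, ks_significance(d, len(s)))
```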

  17. Detection of seizures from small samples using nonlinear dynamic system theory.

    Science.gov (United States)

    Yaylali, I; Koçak, H; Jayakar, P

    1996-07-01

    The electroencephalogram (EEG), like many other biological phenomena, is quite likely governed by nonlinear dynamics. Certain characteristics of the underlying dynamics have recently been quantified by computing the correlation dimensions (D2) of EEG time series data. In this paper, D2 of the unbiased autocovariance function of the scalp EEG data was used to detect electrographic seizure activity. Digital EEG data were acquired at a sampling rate of 200 Hz per channel and organized in continuous frames (duration 2.56 s, 512 data points). To increase the reliability of D2 computations with short duration data, raw EEG data were initially simplified using unbiased autocovariance analysis to highlight the periodic activity that is present during seizures. The D2 computation was then performed from the unbiased autocovariance function of each channel using the Grassberger-Procaccia method with Theiler's box-assisted correlation algorithm. Even with short duration data, this preprocessing proved to be computationally robust and displayed no significant sensitivity to implementation details such as the choices of embedding dimension and box size. The system successfully identified various types of seizures in clinical studies.
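
    The preprocessing chain described above is straightforward to sketch: compute the unbiased autocovariance of each 512-point frame, delay-embed it, and estimate D2 from the slope of the Grassberger-Procaccia correlation sum. This brute-force version omits the box-assisted speed-up and any Theiler exclusion window; the embedding parameters are illustrative assumptions:

```python
import numpy as np

def unbiased_autocovariance(x):
    """c[k] = sum((x_t - m)(x_{t+k} - m)) / (N - k), for lags up to N/2."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    return np.array([xc[: n - k] @ xc[k:] / (n - k) for k in range(n // 2)])

def correlation_sum(series, m=5, tau=2):
    """Grassberger-Procaccia correlation sum C(r) on a delay embedding."""
    v = np.array([series[i: i + m * tau: tau] for i in range(len(series) - m * tau)])
    d = np.sqrt(((v[:, None, :] - v[None, :, :]) ** 2).sum(-1))
    d = d[np.triu_indices(len(v), k=1)]          # unique pairs only
    r = np.percentile(d, [5, 10, 20, 40])        # radii in the scaling region
    c = np.array([(d < ri).mean() for ri in r])
    return r, c

# One 2.56 s frame at 200 Hz (512 points), mimicking the acquisition above
t = np.arange(512) / 200.0
frame = np.sin(2 * np.pi * 3 * t) + 0.1 * np.random.default_rng(1).normal(size=512)
r, c = correlation_sum(unbiased_autocovariance(frame))
print("D2 estimate:", np.polyfit(np.log(r), np.log(c), 1)[0])  # slope of log C vs log r
```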

  18. Beam-hardening correction by a surface fitting and phase classification by a least square support vector machine approach for tomography images of geological samples

    Science.gov (United States)

    Khan, F.; Enzmann, F.; Kersten, M.

    2015-12-01

    In X-ray computed microtomography (μXCT), image processing is the most important operation prior to image analysis. Such processing mainly involves artefact reduction and image segmentation. We propose a new two-stage post-reconstruction procedure for an image of a geological rock core obtained by polychromatic cone-beam μXCT technology. In the first stage, the beam-hardening (BH) is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. The final BH-corrected image is extracted from the residual data, i.e., the difference between the surface elevation values and the original grey-scale values. For the second stage, we propose using a least-squares support vector machine (a non-linear classifier algorithm) to segment the BH-corrected data as a pixel-based multi-classification task. A combination of the two approaches was used to classify a complex multi-mineral rock sample. The Matlab code for this approach is provided in the Appendix. A minor drawback is that the proposed segmentation algorithm may become computationally demanding in the case of a high-dimensional training data set.
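
    The first stage amounts to an ordinary least-squares fit of a quadratic surface to the slice, followed by subtraction. A minimal numpy sketch on a synthetic cupping artefact (an illustration of the idea, not the authors' Matlab code):

```python
import numpy as np

def remove_beam_hardening(img):
    """Fit z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 to a reconstructed slice
    and return the residual image (overall grey level preserved)."""
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    A = np.column_stack([np.ones(img.size), x.ravel(), y.ravel(),
                         (x * x).ravel(), (x * y).ravel(), (y * y).ravel()])
    coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    surface = (A @ coef).reshape(img.shape)
    return img - surface + surface.mean()

# Synthetic slice: flat phantom plus a parabolic cupping (BH) bowl
y, x = np.mgrid[0:128, 0:128]
slice_ = 100.0 + 0.002 * ((x - 64) ** 2 + (y - 64) ** 2)
print(remove_beam_hardening(slice_).std())   # ~0: the BH trend is removed
```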

  19. A novel baseline-correction method for standard addition based derivative spectra and its application to quantitative analysis of benzo(a)pyrene in vegetable oil samples.

    Science.gov (United States)

    Li, Na; Li, Xiu-Ying; Zou, Zhe-Xiang; Lin, Li-Rong; Li, Yao-Qun

    2011-07-07

    In the present work, a baseline-correction method based on peak-to-derivative-baseline measurement is proposed for eliminating the complex matrix interference, mainly caused by unknown components and/or background, in the analysis of derivative spectra. This novel method is applicable particularly when the matrix interfering components show a broad spectral band, which is common in practical analysis. The derivative baseline is established by connecting two crossing points of the spectral curves obtained with a standard addition method (SAM). The applicability and reliability of the proposed method were demonstrated through both theoretical simulation and practical application. First, Gaussian bands were used to simulate 'interfering' and 'analyte' bands to investigate the effect of different parameters of the interfering band on the derivative baseline. This simulation analysis verified that the accuracy of the proposed method is remarkably better than that of conventional approaches such as peak-to-zero, tangent, and peak-to-peak measurements. The proposed baseline-correction method was then applied to the determination of benzo(a)pyrene (BaP) in vegetable oil samples by second-derivative synchronous fluorescence spectroscopy. Satisfactory results were obtained by using the new method to analyze a certified reference material (coconut oil, BCR(®)-458), with a relative error of -3.2% from the certified BaP concentration. Potentially, the proposed method can be applied to various types of derivative spectra in different fields such as UV-visible absorption spectroscopy, fluorescence spectroscopy and infrared spectroscopy.
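
    A schematic reconstruction of the peak-to-derivative-baseline idea (with invented band parameters, not the paper's data): simulate two standard-addition spectra over a broad interfering band, locate the two points where their second-derivative curves cross, and measure the analyte peak against the straight line through those crossings:

```python
import numpy as np

wl = np.linspace(380, 440, 601)
g = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)
interf = 0.8 * g(410, 25)                       # broad matrix band
spec = lambda conc: conc * g(410, 4) + interf   # analyte + matrix

d2 = lambda s: np.gradient(np.gradient(s, wl), wl)   # second derivative
s1, s2 = d2(spec(1.0)), d2(spec(2.0))                # two standard additions

ipk = np.argmin(s1)                     # negative second-derivative peak of the analyte
diff = s1 - s2                          # matrix cancels; zero at the crossing points
cross = np.where(np.sign(diff[:-1]) * np.sign(diff[1:]) < 0)[0]
i1 = cross[cross < ipk][-1]             # crossing just below the peak
i2 = cross[cross > ipk][0]              # crossing just above the peak

def peak_to_baseline(s):
    base = s[i1] + (s[i2] - s[i1]) * (wl[ipk] - wl[i1]) / (wl[i2] - wl[i1])
    return s[ipk] - base

print(peak_to_baseline(s1), peak_to_baseline(s2))   # scales ~1:2 with concentration
```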

  20. A method for analysing small samples of floral pollen for free and protein-bound amino acids.

    Science.gov (United States)

    Stabler, Daniel; Power, Eileen F; Borland, Anne M; Barnes, Jeremy D; Wright, Geraldine A

    2018-02-01

    Pollen provides floral visitors with essential nutrients including proteins, lipids, vitamins and minerals. As an important nutrient resource for pollinators, including honeybees and bumblebees, pollen quality is of growing interest in assessing the nutrition available to foraging bees. To date, quantifying the protein-bound amino acids in pollen has been difficult, and methods rely on large amounts of pollen, typically more than 1 g. More usual is to estimate a crude protein value based on the nitrogen content of pollen; however, such methods provide no information on the distribution of essential and non-essential amino acids constituting the proteins. Here, we describe a method of microwave-assisted acid hydrolysis using low amounts of pollen that allows exploration of amino acid composition, quantified using ultra-high-performance liquid chromatography (UHPLC), and a back-calculation to estimate the crude protein content of pollen. Reliable analysis of protein-bound and free amino acids, as well as an estimation of crude protein concentration, was obtained from pollen samples as small as 1 mg. Greater variation in both protein-bound and free amino acids was found in pollen sample sizes below 1 mg; to account for the loss of amino acids in smaller sample sizes, we suggest a correction factor to apply to specific sample sizes of pollen in order to estimate total crude protein content. The method described in this paper will allow researchers to explore the composition of amino acids in pollen and will aid research assessing the nutrition available to pollinating animals. This method will be particularly useful in assaying the pollen of wild plants, from which it is difficult to obtain large sample weights.
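
    The back-calculation can be illustrated in a few lines: sum the amino acids recovered by UHPLC, convert them to residue (anhydro) masses, and express the total as a percentage of the pollen weighed in. The quantities, the mean molecular weight, and the residue-mass convention below are placeholder assumptions, not the paper's calibration:

```python
# Hypothetical ug of each amino acid recovered from one hydrolyzed pollen sample
aa_ug = {"asp": 30.0, "glu": 35.0, "pro": 20.0, "gly": 15.0,
         "ala": 16.0, "val": 17.0, "leu": 22.0, "lys": 20.0}

WATER = 18.02      # g/mol lost per peptide bond formed
MEAN_MW = 120.0    # assumed mean free-amino-acid molecular weight

sample_mg = 1.0
protein_ug = sum(aa_ug.values()) * (MEAN_MW - WATER) / MEAN_MW  # residue masses
crude_protein_pct = 100.0 * protein_ug / (sample_mg * 1000.0)
print(f"estimated crude protein: {crude_protein_pct:.1f} %")
```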

  1. Improving PET Quantification of Small Animal [68Ga]DOTA-Labeled PET/CT Studies by Using a CT-Based Positron Range Correction.

    Science.gov (United States)

    Cal-Gonzalez, Jacobo; Vaquero, Juan José; Herraiz, Joaquín L; Pérez-Liva, Mailyn; Soto-Montenegro, María Luisa; Peña-Zalbidea, Santiago; Desco, Manuel; Udías, José Manuel

    2018-01-19

    Image quality of positron emission tomography (PET) tracers that emit high-energy positrons, such as Ga-68, Rb-82, or I-124, is significantly affected by positron range (PR) effects. PR effects are especially important in small animal PET studies, since they can limit the spatial resolution and quantitative accuracy of the images. Since generator accessibility has made Ga-68 tracers widely available, the aim of this study is to show how the quantitative results of [68Ga]DOTA-labeled PET/X-ray computed tomography (CT) imaging of neuroendocrine tumors in mice can be improved using positron range correction (PRC). Eighteen scans in 12 mice were evaluated, with three different models of tumors: PC12, AR42J, and meningiomas. In addition, three different [68Ga]DOTA-labeled radiotracers were used to evaluate the PRC with different tracer distributions: [68Ga]DOTANOC, [68Ga]DOTATOC, and [68Ga]DOTATATE. Two PRC methods were evaluated: a tissue-dependent correction (TD-PRC) and a tissue-dependent spatially-variant correction (TDSV-PRC). Taking a region in the liver as reference, the tissue-to-liver ratio values for tumor tissue (TLR_tumor), lung (TLR_lung), and necrotic areas within the tumors (TLR_necrotic), and their respective relative variations (ΔTLR), were evaluated. All TLR values in the PRC images were significantly different from those in the uncorrected images. Both PRC methods improved the quantification of [68Ga]DOTA-labeled PET/CT imaging of mice with neuroendocrine tumors, hence demonstrating that these techniques could also ameliorate the deleterious effect of the positron range in clinical PET imaging.
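
    The endpoint used in the evaluation is easy to state in code. A sketch of the tissue-to-liver ratio and its relative change under PRC, with images and region masks left abstract (the arrays below are toy data, and all names are assumptions):

```python
import numpy as np

def tissue_to_liver_ratio(img, roi_mask, liver_mask):
    """TLR = mean uptake in a tissue ROI / mean uptake in the liver reference ROI."""
    return img[roi_mask].mean() / img[liver_mask].mean()

def delta_tlr(tlr_prc, tlr_uncorrected):
    """Relative variation (%) of TLR introduced by the positron range correction."""
    return 100.0 * (tlr_prc - tlr_uncorrected) / tlr_uncorrected

rng = np.random.default_rng(0)
img0 = rng.normal(1.0, 0.05, (64, 64)); img0[20:30, 20:30] += 2.0  # blurred, no PRC
img1 = img0.copy(); img1[20:30, 20:30] += 0.5                      # contrast recovered by PRC
tumor = np.zeros((64, 64), bool); tumor[20:30, 20:30] = True
liver = np.zeros((64, 64), bool); liver[45:55, 45:55] = True
t0, t1 = (tissue_to_liver_ratio(i, tumor, liver) for i in (img0, img1))
print(f"TLR {t0:.2f} -> {t1:.2f}, dTLR = {delta_tlr(t1, t0):+.1f} %")
```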

  2. Assessing pesticide concentrations and fluxes in the stream of a small vineyard catchment - Effect of sampling frequency

    Energy Technology Data Exchange (ETDEWEB)

    Rabiet, M., E-mail: marion.rabiet@unilim.fr [Cemagref, UR QELY, 3bis quai Chauveau, CP 220, F-69336 Lyon (France); Margoum, C.; Gouy, V.; Carluer, N.; Coquery, M. [Cemagref, UR QELY, 3bis quai Chauveau, CP 220, F-69336 Lyon (France)

    2010-03-15

    This study reports on the occurrence and behaviour of six pesticides and one metabolite in a small stream draining a vineyard catchment. Base flow and flood events were monitored in order to assess the variability of pesticide concentrations according to the season and to evaluate the effect of sampling frequency on flux estimates. Results showed that dissolved pesticide concentrations displayed a strong temporal and spatial variability. A large mobilisation of pesticides was observed during floods, with total dissolved pesticide fluxes per event ranging from 5.7 x 10{sup -3} g/ha to 0.34 g/ha. These results highlight the major role of floods in the transport of pesticides in this small stream; floods contributed more than 89% of the total load of diuron during August 2007. The evaluation of pesticide loads using different sampling strategies and calculation methods showed that grab sampling largely underestimated pesticide concentrations and fluxes transiting through the stream. - This work brings new insights into the fluxes of pesticides in the surface water of a vineyard catchment, notably during flood events.

  3. Assessing pesticide concentrations and fluxes in the stream of a small vineyard catchment - Effect of sampling frequency

    International Nuclear Information System (INIS)

    Rabiet, M.; Margoum, C.; Gouy, V.; Carluer, N.; Coquery, M.

    2010-01-01

    This study reports on the occurrence and behaviour of six pesticides and one metabolite in a small stream draining a vineyard catchment. Base flow and flood events were monitored in order to assess the variability of pesticide concentrations according to the season and to evaluate the effect of sampling frequency on flux estimates. Results showed that dissolved pesticide concentrations displayed a strong temporal and spatial variability. A large mobilisation of pesticides was observed during floods, with total dissolved pesticide fluxes per event ranging from 5.7 x 10^-3 g/ha to 0.34 g/ha. These results highlight the major role of floods in the transport of pesticides in this small stream; floods contributed more than 89% of the total load of diuron during August 2007. The evaluation of pesticide loads using different sampling strategies and calculation methods showed that grab sampling largely underestimated pesticide concentrations and fluxes transiting through the stream. - This work brings new insights into the fluxes of pesticides in the surface water of a vineyard catchment, notably during flood events.

  4. The Effect of Small Sample Size on Measurement Equivalence of Psychometric Questionnaires in MIMIC Model: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Jamshid Jamali

    2017-01-01

    Full Text Available Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of the MIMIC model for detecting uniform-DIF were investigated under different combinations of reference-to-focal-group sample size ratio, magnitude of the uniform-DIF effect, scale length, number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to decreases of 0.33% and 0.47%, respectively, in the power of the MIMIC model for detecting uniform-DIF. The findings indicated that increasing the scale length, the number of response categories, and the magnitude of DIF improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively; it also decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model was recommended for detection of uniform-DIF when the latent construct distribution is nonnormal and the focal group sample size is small.

  5. The Effect of Small Sample Size on Measurement Equivalence of Psychometric Questionnaires in MIMIC Model: A Simulation Study.

    Science.gov (United States)

    Jamali, Jamshid; Ayatollahi, Seyyed Mohammad Taghi; Jafari, Peyman

    2017-01-01

    Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of the MIMIC model for detecting uniform-DIF were investigated under different combinations of reference-to-focal-group sample size ratio, magnitude of the uniform-DIF effect, scale length, number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to decreases of 0.33% and 0.47%, respectively, in the power of the MIMIC model for detecting uniform-DIF. The findings indicated that increasing the scale length, the number of response categories, and the magnitude of DIF improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively; it also decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model was recommended for detection of uniform-DIF when the latent construct distribution is nonnormal and the focal group sample size is small.

  6. Corrective Action Decision Document/Closure Report for Corrective Action Unit 541: Small Boy Nevada National Security Site and Nevada Test and Training Range, Nevada, Revision 0 with ROTC-1

    Energy Technology Data Exchange (ETDEWEB)

    Kidman, Raymond [Navarro, Las Vegas, NV (United States); Matthews, Patrick [Navarro, Las Vegas, NV (United States)

    2016-08-01

    The purpose of this Corrective Action Decision Document/Closure Report is to provide justification and documentation supporting the recommendation that no further corrective action is needed for CAU 541 based on the no further action alternative listed in Table ES-1.

  7. Correction of enhanced Na(+)-H+ exchange of rat small intestinal brush-border membranes in streptozotocin-induced diabetes by insulin or 1,25-dihydroxycholecalciferol

    International Nuclear Information System (INIS)

    Dudeja, P.K.; Wali, R.K.; Klitzke, A.; Sitrin, M.D.; Brasitus, T.A.

    1991-01-01

    Diabetes was induced in rats by administration of a single i.p. injection of streptozotocin (50 mg/kg body wt). After 7 d, diabetic rats were further treated with insulin or 1,25-dihydroxycholecalciferol [1,25(OH)2D3] for an additional 5-7 d. Control, diabetic, diabetic + insulin, and diabetic + 1,25(OH)2D3 rats were then killed, their proximal small intestines were removed, and villus-tip epithelial cells were isolated and used to prepare brush-border membrane vesicles. Preparations from each of these groups were then analyzed and compared with respect to their amiloride-sensitive, electroneutral Na(+)-H+ exchange activity, using 22Na uptake as well as acridine orange techniques. The results of these experiments demonstrated that (a) H+ gradient-dependent 22Na uptake as well as Na+ gradient-dependent transmembrane H+ fluxes were significantly increased in diabetic vesicles compared to their control counterparts; (b) kinetic studies demonstrated that this enhanced 22Na uptake in diabetes was a result of an increased maximal velocity (Vmax) of this exchanger with no change in apparent affinity (Km) for Na+; (c) serum levels of 1,25(OH)2D3 were significantly lower in diabetic animals compared with their control counterparts; and (d) insulin or 1,25(OH)2D3 treatment restored the Vmax alterations to control values, without any significant changes in Km, concomitant with significantly increasing the serum levels of 1,25(OH)2D3 in diabetic animals. These results indicate that Na(+)-H+ exchange activity is significantly increased in proximal small intestinal luminal membranes of streptozotocin-induced diabetic rats. Moreover, alterations in the serum levels of 1,25(OH)2D3 may, at least in part, explain this enhanced antiporter activity and its correction by insulin.

  8. Beyond simple small-angle X-ray scattering: developments in online complementary techniques and sample environments

    Directory of Open Access Journals (Sweden)

    Wim Bras

    2014-11-01

    Full Text Available Small- and wide-angle X-ray scattering (SAXS, WAXS are standard tools in materials research. The simultaneous measurement of SAXS and WAXS data in time-resolved studies has gained popularity due to the complementary information obtained. Furthermore, the combination of these data with non X-ray based techniques, via either simultaneous or independent measurements, has advanced understanding of the driving forces that lead to the structures and morphologies of materials, which in turn give rise to their properties. The simultaneous measurement of different data regimes and types, using either X-rays or neutrons, and the desire to control parameters that initiate and control structural changes have led to greater demands on sample environments. Examples of developments in technique combinations and sample environment design are discussed, together with a brief speculation about promising future developments.

  9. Beyond simple small-angle X-ray scattering: developments in online complementary techniques and sample environments.

    Science.gov (United States)

    Bras, Wim; Koizumi, Satoshi; Terrill, Nicholas J

    2014-11-01

    Small- and wide-angle X-ray scattering (SAXS, WAXS) are standard tools in materials research. The simultaneous measurement of SAXS and WAXS data in time-resolved studies has gained popularity due to the complementary information obtained. Furthermore, the combination of these data with non X-ray based techniques, via either simultaneous or independent measurements, has advanced understanding of the driving forces that lead to the structures and morphologies of materials, which in turn give rise to their properties. The simultaneous measurement of different data regimes and types, using either X-rays or neutrons, and the desire to control parameters that initiate and control structural changes have led to greater demands on sample environments. Examples of developments in technique combinations and sample environment design are discussed, together with a brief speculation about promising future developments.

  10. A compact time-of-flight SANS instrument optimised for measurements of small sample volumes at the European Spallation Source

    Energy Technology Data Exchange (ETDEWEB)

    Kynde, Søren, E-mail: kynde@nbi.ku.dk [Niels Bohr Institute, University of Copenhagen (Denmark); Hewitt Klenø, Kaspar [Niels Bohr Institute, University of Copenhagen (Denmark); Nagy, Gergely [SINQ, Paul Scherrer Institute (Switzerland); Mortensen, Kell; Lefmann, Kim [Niels Bohr Institute, University of Copenhagen (Denmark); Kohlbrecher, Joachim, E-mail: Joachim.kohlbrecher@psi.ch [SINQ, Paul Scherrer Institute (Switzerland); Arleth, Lise, E-mail: arleth@nbi.ku.dk [Niels Bohr Institute, University of Copenhagen (Denmark)

    2014-11-11

    The high flux at the European Spallation Source (ESS) will allow experiments to be performed with relatively small beam sizes while maintaining a high intensity of the incoming beam. The pulsed nature of the source makes the facility optimal for time-of-flight small-angle neutron scattering (ToF-SANS). We find that a relatively compact SANS instrument is the optimal choice for obtaining the widest possible q-range in a single setting and the best possible exploitation of the neutrons in each pulse, and hence the highest possible flux at the sample position. The instrument proposed in the present article is optimised for performing fast measurements of small scattering volumes, typically down to 2×2×2 mm{sup 3}, while covering a broad q-range from about 0.005 1/Å to 0.5 1/Å in a single instrument setting. This q-range corresponds to that available at a typical good BioSAXS instrument and is relevant for a wide set of biomacromolecular samples. A central advantage of covering the whole q-range in a single setting is that each sample has to be loaded only once. This makes it convenient to use the fully automated high-throughput flow-through sample changers commonly applied at modern synchrotron BioSAXS facilities. The central drawback of choosing a very compact instrument is that the resolution in terms of δλ/λ obtained with the short-wavelength neutrons becomes worse than the standard at state-of-the-art SANS instruments. Our McStas-based simulations of the instrument performance for a set of characteristic biomacromolecular samples show that the resulting smearing effects still have relatively minor impact on the obtained data and can be compensated for in the data analysis. However, in cases where a better resolution is required in combination with the large simultaneous q-range characteristic of the instrument, we show that this can be obtained by inserting a set of choppers.
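
    For orientation, the quoted q-range follows from q = 4π·sin(θ)/λ evaluated at the extremes of the wavelength band and the detector coverage. The band and angles below are illustrative choices that happen to reproduce a range of roughly 0.005-0.5 1/Å; they are not the instrument's actual design parameters:

```python
import numpy as np

def q(theta_deg, lam_angstrom):
    # q = 4*pi*sin(theta)/lambda, with theta half the scattering angle
    return 4.0 * np.pi * np.sin(np.radians(theta_deg)) / lam_angstrom

lam_band = (3.5, 15.0)   # usable wavelength band in Angstrom (assumed)
theta_cov = (0.3, 8.0)   # half-angle coverage in degrees (assumed)

q_min = q(theta_cov[0], lam_band[1])   # smallest angle, longest wavelength
q_max = q(theta_cov[1], lam_band[0])   # largest angle, shortest wavelength
print(f"q-range: {q_min:.4f} to {q_max:.2f} 1/A")
```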

  11. Using Data-Dependent Priors to Mitigate Small Sample Bias in Latent Growth Models: A Discussion and Illustration Using Mplus

    Science.gov (United States)

    McNeish, Daniel M.

    2016-01-01

    Mixed-effects models (MEMs) and latent growth models (LGMs) are often considered interchangeable save the discipline-specific nomenclature. Software implementations of these models, however, are not interchangeable, particularly with small sample sizes. Restricted maximum likelihood estimation that mitigates small sample bias in MEMs has not been…

  12. A new CF-IRMS system for quantifying stable isotopes of carbon monoxide from ice cores and small air samples

    Directory of Open Access Journals (Sweden)

    Z. Wang

    2010-10-01

    Full Text Available We present a new analysis technique for stable isotope ratios (δ13C and δ18O) of atmospheric carbon monoxide (CO) from ice core samples. The technique is an online cryogenic vacuum extraction followed by continuous-flow isotope ratio mass spectrometry (CF-IRMS); it can also be used with small air samples. The CO extraction system includes two multi-loop cryogenic cleanup traps, a chemical oxidant for oxidation to CO2, a cryogenic collection trap, a cryofocusing unit, gas chromatography purification, and subsequent injection into a Finnigan Delta Plus IRMS. Analytical precision of 0.2‰ (±1σ) for δ13C and 0.6‰ (±1σ) for δ18O can be obtained for 100 mL (STP) air samples with CO mixing ratios ranging from 60 ppbv to 140 ppbv (~268–625 pmol CO). Six South Pole ice core samples from depths ranging from 133 m to 177 m were processed for CO isotope analysis after wet extraction. To our knowledge, this is the first measurement of stable isotopes of CO in ice core air.

  13. Forecasting elections with mere recognition from small, lousy samples: A comparison of collective recognition, wisdom of crowds, and representative polls

    Directory of Open Access Journals (Sweden)

    Wolfgang Gaissmaier

    2011-02-01

    Full Text Available We investigated the extent to which the human capacity for recognition helps to forecast political elections: We compared naive recognition-based election forecasts computed from convenience samples of citizens' recognition of party names to (i) standard polling forecasts computed from representative samples of citizens' voting intentions, and to (ii) simple---and typically very accurate---wisdom-of-crowds-forecasts computed from the same convenience samples of citizens' aggregated hunches about election results. Results from four major German elections show that mere recognition of party names forecast the parties' electoral success fairly well. Recognition-based forecasts were most competitive with the other models when forecasting the smaller parties' success and for small sample sizes. However, wisdom-of-crowds-forecasts outperformed recognition-based forecasts in most cases. It seems that wisdom-of-crowds-forecasts are able to draw on the benefits of recognition while at the same time avoiding its downsides, such as lack of discrimination among very famous parties or recognition caused by factors unrelated to electoral success. Yet it seems that a simple extension of the recognition-based forecasts---asking people what proportion of the population would recognize a party instead of whether they themselves recognize it---is also able to eliminate these downsides.

  14. Spatial Distribution of Stony Desertification and Key Influencing Factors on Different Sampling Scales in Small Karst Watersheds

    Science.gov (United States)

    Zhang, Zhenming; Zhou, Yunchao; Wang, Shijie

    2018-01-01

    Karst areas are typical ecologically fragile areas, and stony desertification has become one of the most serious ecological and economic problems in these areas worldwide, as well as a source of disasters and poverty. A reasonable sampling scale is of great importance for research on soil science in karst areas. In this paper, the spatial distribution of stony desertification characteristics and its influencing factors in karst areas are studied at different sampling scales using a grid sampling method based on geographic information system (GIS) technology and geo-statistics. The rock exposure obtained through sampling over a 150 m × 150 m grid in the Houzhai River Basin was utilized as the original data, and five grid scales (300 m × 300 m, 450 m × 450 m, 600 m × 600 m, 750 m × 750 m, and 900 m × 900 m) were used as the subsample sets. The results show that the rock exposure does not vary substantially from one sampling scale to another, while the average values of the five subsamples all fluctuate around the average value of the entire set. As the sampling scale increases, the maximum value and the average value of the rock exposure gradually decrease, and there is a gradual increase in the coefficient of variability. At the scale of 150 m × 150 m, the areas of minor stony desertification, medium stony desertification, and major stony desertification in the Houzhai River Basin are 7.81 km², 4.50 km², and 1.87 km², respectively. The spatial variability of stony desertification at small scales is influenced by many factors, and the variability at medium scales is jointly influenced by gradient, rock content, and rock exposure. At large scales, the spatial variability of stony desertification is mainly influenced by soil thickness and rock content. PMID:29652811

  15. Spatial Distribution of Stony Desertification and Key Influencing Factors on Different Sampling Scales in Small Karst Watersheds

    Directory of Open Access Journals (Sweden)

    Zhenming Zhang

    2018-04-01

    Full Text Available Karst areas are typical ecologically fragile areas, and stony desertification has become one of the most serious ecological and economic problems in these areas worldwide, as well as a source of disasters and poverty. A reasonable sampling scale is of great importance for research on soil science in karst areas. In this paper, the spatial distribution of stony desertification characteristics and its influencing factors in karst areas are studied at different sampling scales using a grid sampling method based on geographic information system (GIS) technology and geo-statistics. The rock exposure obtained through sampling over a 150 m × 150 m grid in the Houzhai River Basin was utilized as the original data, and five grid scales (300 m × 300 m, 450 m × 450 m, 600 m × 600 m, 750 m × 750 m, and 900 m × 900 m) were used as the subsample sets. The results show that the rock exposure does not vary substantially from one sampling scale to another, while the average values of the five subsamples all fluctuate around the average value of the entire set. As the sampling scale increases, the maximum value and the average value of the rock exposure gradually decrease, and there is a gradual increase in the coefficient of variability. At the scale of 150 m × 150 m, the areas of minor stony desertification, medium stony desertification, and major stony desertification in the Houzhai River Basin are 7.81 km², 4.50 km², and 1.87 km², respectively. The spatial variability of stony desertification at small scales is influenced by many factors, and the variability at medium scales is jointly influenced by gradient, rock content, and rock exposure. At large scales, the spatial variability of stony desertification is mainly influenced by soil thickness and rock content.

  16. Spatial Distribution of Stony Desertification and Key Influencing Factors on Different Sampling Scales in Small Karst Watersheds.

    Science.gov (United States)

    Zhang, Zhenming; Zhou, Yunchao; Wang, Shijie; Huang, Xianfei

    2018-04-13

    Karst areas are typical ecologically fragile areas, and stony desertification has become one of the most serious ecological and economic problems in these areas worldwide, as well as a source of disasters and poverty. A reasonable sampling scale is of great importance for research on soil science in karst areas. In this paper, the spatial distribution of stony desertification characteristics and its influencing factors in karst areas are studied at different sampling scales using a grid sampling method based on geographic information system (GIS) technology and geo-statistics. The rock exposure obtained through sampling over a 150 m × 150 m grid in the Houzhai River Basin was utilized as the original data, and five grid scales (300 m × 300 m, 450 m × 450 m, 600 m × 600 m, 750 m × 750 m, and 900 m × 900 m) were used as the subsample sets. The results show that the rock exposure does not vary substantially from one sampling scale to another, while the average values of the five subsamples all fluctuate around the average value of the entire set. As the sampling scale increases, the maximum value and the average value of the rock exposure gradually decrease, and there is a gradual increase in the coefficient of variability. At the scale of 150 m × 150 m, the areas of minor stony desertification, medium stony desertification, and major stony desertification in the Houzhai River Basin are 7.81 km², 4.50 km², and 1.87 km², respectively. The spatial variability of stony desertification at small scales is influenced by many factors, and the variability at medium scales is jointly influenced by gradient, rock content, and rock exposure. At large scales, the spatial variability of stony desertification is mainly influenced by soil thickness and rock content.

  17. Application of inductively coupled plasma mass spectrometry for multielement analysis in small sample amounts of thyroid tissue from Chernobyl area

    International Nuclear Information System (INIS)

    Becker, J.S.; Dietze, H.J.; Boulyga, S.F.; Bazhanova, N.N.; Kanash, N.V.; Malenchenko, A.F.

    2000-01-01

    As a result of the Chernobyl nuclear power plant accident in 1986, thyroid pathologies occurred among children in some regions of Belarus. Besides the irradiation of children's thyroids by radioactive iodine and caesium nuclides, toxic elements from fallout are a direct risk to health. Inductively coupled plasma quadrupole-based mass spectrometry (ICP-MS) and instrumental neutron activation analysis (INAA) were used for multielement determination in small amounts (1–10 mg) of human thyroid tissue samples. The accuracy of the applied analytical technique for small biological sample amounts was checked using the NIST standard reference material oyster tissue (SRM 1566b). Almost all essential elements, as well as a number of toxic elements such as Cd, Pb, Hg and U, were determined in a multitude of human thyroid tissues by quadrupole-based ICP-MS using micronebulization. In general, thyroid tissue affected by pathology is characterized by higher calcium content. Some other elements, among them Sr, Zn, Fe, Mn, V, As, Cr, Ni, Pb, U, Ba and Sb, were also accumulated in such tissue. The results obtained will be used as initial material for further specific studies of the role of particular elements in thyroid pathology development.

  18. X-ray fluorescence microscopy artefacts in elemental maps of topologically complex samples: Analytical observations, simulation and a map correction method

    Science.gov (United States)

    Billè, Fulvio; Kourousias, George; Luchinat, Enrico; Kiskinova, Maya; Gianoncelli, Alessandra

    2016-08-01

    XRF spectroscopy is among the most widely used non-destructive techniques for elemental analysis. Despite the known angular dependence of X-ray fluorescence (XRF), topological artefacts remain an unresolved issue when using X-ray micro- or nano-probes. In this work we investigate the origin of these artefacts in XRF imaging of topologically complex samples; they are particularly problematic in studies of organic matter because of the limited travel distances of the low-energy XRF emission from light elements. In particular, we mapped Human Embryonic Kidney (HEK293T) cells. The exemplary results with biological samples, obtained with a soft X-ray scanning microscope installed at a synchrotron facility, were used for testing a mathematical model based on detector response simulations, and for proposing an artefact correction method based on directional derivatives. Despite the peculiar and specific application, the methodology can be easily extended to hard X-rays and to set-ups with multi-array detector systems when the dimensions of surface reliefs are of the order of the probing beam size.
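
    The correction itself is not spelled out in the abstract; as a loose illustration of the directional-derivative idea, one can compute the derivative of an elemental map along the detector viewing direction and subtract an empirically scaled fraction of it, damping the shading that tracks surface relief. The angle and the coupling factor are assumptions:

```python
import numpy as np

def directional_derivative(img, angle_deg):
    """Derivative of an elemental map along a chosen in-plane direction,
    e.g. the projection of the detector axis onto the sample plane."""
    gy, gx = np.gradient(img.astype(float))
    a = np.radians(angle_deg)
    return gx * np.cos(a) + gy * np.sin(a)

def topography_correct(xrf_map, angle_deg, k):
    """First-order correction: remove the component of the signal that
    tracks the directional derivative (k is tuned empirically)."""
    return xrf_map - k * directional_derivative(xrf_map, angle_deg)

# Usage sketch on a random map; angle and k would be tuned on real data
m = np.random.default_rng(2).random((128, 128))
corrected = topography_correct(m, angle_deg=45.0, k=0.5)
print(corrected.shape)
```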

  19. Antibiotic Resistance in Animal and Environmental Samples Associated with Small-Scale Poultry Farming in Northwestern Ecuador.

    Science.gov (United States)

    Braykov, Nikolay P; Eisenberg, Joseph N S; Grossman, Marissa; Zhang, Lixin; Vasco, Karla; Cevallos, William; Muñoz, Diana; Acevedo, Andrés; Moser, Kara A; Marrs, Carl F; Foxman, Betsy; Trostle, James; Trueba, Gabriel; Levy, Karen

    2016-01-01

    The effects of animal agriculture on the spread of antibiotic resistance (AR) are cross-cutting and thus require a multidisciplinary perspective. Here we use ecological, epidemiological, and ethnographic methods to examine populations of Escherichia coli circulating in the production poultry farming environment versus the domestic environment in rural Ecuador, where small-scale poultry production employing nontherapeutic antibiotics is increasingly common. We sampled 262 "production birds" (commercially raised broiler chickens and laying hens) and 455 "household birds" (raised for domestic use) and household and coop environmental samples from 17 villages between 2010 and 2013. We analyzed data on zones of inhibition from Kirby-Bauer tests, rather than established clinical breakpoints for AR, to distinguish between populations of organisms. We saw significantly higher levels of AR in bacteria from production versus household birds; resistance to amoxicillin-clavulanate, cephalothin, cefotaxime, or gentamicin was found in 52.8% of production bird isolates and 16% of household ones. A strain jointly resistant to the 4 drugs was exclusive to a subset of isolates from production birds (7.6%) and coop surfaces (6.5%) and was associated with a particular purchase site. The prevalence of AR in production birds declined significantly with bird age. In summary, we compared antibiotic resistance (AR) in E. coli isolates from small-scale poultry production environments versus domestic environments in rural Ecuador, where such backyard poultry operations have become established over the past decade. Our previous research in the region suggests that introduction of AR bacteria through travel and commerce may be an important source of AR in villages of this region. This report extends the prior analysis by examining small-scale production chicken farming as a potential source of resistant strains. Our results suggest that AR strains associated with poultry production likely originate from sources outside the study area.

  20. Mutational status of synchronous and metachronous tumor samples in patients with metastatic non-small-cell lung cancer

    International Nuclear Information System (INIS)

    Quéré, Gilles; Descourt, Renaud; Robinet, Gilles; Autret, Sandrine; Raguenes, Odile; Fercot, Brigitte; Alemany, Pierre; Uguen, Arnaud; Férec, Claude; Quintin-Roué, Isabelle; Le Gac, Gérald

    2016-01-01

    Despite reported discordance between the mutational status of primary lung cancers and their metastases, metastatic sites are rarely biopsied and targeted therapy is guided by genetic biomarkers detected in the primary tumor. This situation is mostly explained by the apparent stability of EGFR-activating mutations. Given the dramatic increase in the range of candidate drugs and the high rates of drug resistance, rebiopsy or liquid biopsy may become widespread. The purpose of this study was to test genetic biomarkers used in clinical practice (EGFR, ALK) and candidate biomarkers identified by the French National Cancer Institute (KRAS, BRAF, PIK3CA, HER2) in patients with metastatic non-small-cell lung cancer for whom two tumor samples were available. A retrospective study identified 88 tumor samples, collected synchronously or metachronously, from the same or two different sites, in 44 patients. Mutation analysis used SNaPshot (EGFR, KRAS, BRAF missense mutations), pyrosequencing (EGFR and PIK3CA missense mutations), sizing assays (EGFR and HER2 indels) and IHC and/or FISH (ALK rearrangements). About half the patients (52 %) harbored at least one mutation. Five patients had an activating mutation of EGFR in both the primary tumor and the metastasis. The T790M resistance mutation was detected in metastases in 3 patients with acquired resistance to EGFR tyrosine kinase inhibitors. FISH showed discordance in ALK status between a small biopsy sample and the surgical specimen. KRAS mutations were observed in 36 % of samples, six patients (14 %) having discordant genotypes; all discordances concerned sampling from different sites. Two patients (5 %) showed PIK3CA mutations. One metastasis harbored both PIK3CA and KRAS mutations, while the synchronously sampled primary tumor was mutation-free. No mutations were detected in BRAF or HER2. This study highlighted noteworthy intra-individual discordance in KRAS mutational status, whereas EGFR status was stable. Intratumoral…

  1. Investigation of chemical modifiers for the determination of lead in fertilizers and limestone using graphite furnace atomic absorption spectrometry with Zeeman-effect background correction and slurry sampling

    Energy Technology Data Exchange (ETDEWEB)

    Borges, Aline R. [Instituto de Química, Universidade Federal do Rio Grande do Sul, Av. Bento Gonçalves 9500, 91501-970 Porto Alegre, RS (Brazil); Instituto Nacional de Ciência e Tecnologia do CNPq–INCT de Energia e Ambiente, Universidade Federal da Bahia, Salvador, BA (Brazil); Becker, Emilene M.; Dessuy, Morgana B. [Instituto de Química, Universidade Federal do Rio Grande do Sul, Av. Bento Gonçalves 9500, 91501-970 Porto Alegre, RS (Brazil); Vale, Maria Goreti R., E-mail: mgrvale@ufrgs.br [Instituto de Química, Universidade Federal do Rio Grande do Sul, Av. Bento Gonçalves 9500, 91501-970 Porto Alegre, RS (Brazil); Instituto Nacional de Ciência e Tecnologia do CNPq–INCT de Energia e Ambiente, Universidade Federal da Bahia, Salvador, BA (Brazil); Welz, Bernhard [Instituto Nacional de Ciência e Tecnologia do CNPq–INCT de Energia e Ambiente, Universidade Federal da Bahia, Salvador, BA (Brazil); Departamento de Química, Universidade Federal de Santa Catarina, 88040-900 Florianópolis, SC (Brazil)

    2014-02-01

    In this work, chemical modifiers in solution (Pd/Mg, NH{sub 4}H{sub 2}PO{sub 4} and NH{sub 4}NO{sub 3}/Pd) were compared with permanent modifiers (Ir and Ru) for the determination of lead in fertilizer and limestone samples using slurry sampling and graphite furnace atomic absorption spectrometry with Zeeman-effect background correction. The analytical line at 283.3 nm was used due to spectral interference observed at 217.0 nm. NH{sub 4}H{sub 2}PO{sub 4} was abandoned due to severe spectral interference even at the 283.3-nm line. For Pd/Mg and NH{sub 4}NO{sub 3}/Pd the optimum pyrolysis and atomization temperatures were 900 °C and 1900 °C, respectively. For Ru and Ir, the integrated absorbance signal was stable up to pyrolysis temperatures of 700 °C and 900 °C, respectively, and up to an atomization temperature of 1700 °C. The limit of detection (LOD) was 17 ng g{sup −1} using Pd/Mg and 29 ng g{sup −1} using NH{sub 4}NO{sub 3}/Pd. Among the permanent modifiers investigated, the LOD was 22 ng g{sup −1} Pb for Ir and 10 ng g{sup −1} Pb for Ru. The accuracy of the method was evaluated using the certified reference material NIST SRM 695. Although Ru provided a lower LOD, which can be attributed to a lower blank signal, only the modifiers in solution showed concordant values of Pb concentration for NIST SRM 695 and most of the analyzed samples. Moreover, the Pd/Mg modifier provided the highest sensitivity and is therefore more suitable for the determination of Pb in fertilizer samples in slurry; besides this, it presented a better signal-to-noise ratio than NH{sub 4}NO{sub 3}/Pd. - Highlights: • Lead has been determined in fertilizers using slurry sampling GF AAS. • The mixture of palladium and magnesium nitrates was found to be the ideal chemical modifier. • Calibration could be carried out against aqueous standard solutions. • The proposed method is much faster than the EPA method, which includes sample digestion.

  2. Identification of potential small molecule allosteric modulator sites on IL-1R1 ectodomain using accelerated conformational sampling method.

    Directory of Open Access Journals (Sweden)

    Chao-Yie Yang

    Full Text Available The interleukin-1 receptor (IL-1R) is the founding member of the interleukin-1 receptor family, which activates the innate immune response by binding to cytokines. Reports have shown that dysregulation of cytokine production leads to aberrant immune cell activation, which contributes to auto-inflammatory disorders and diseases. Current therapeutic strategies focus on utilizing antibodies or chimeric cytokine biologics. The large protein-protein interaction interface between cytokine receptor and cytokine poses a challenge in identifying binding sites for small-molecule inhibitor development. Based on the significant conformational change of the IL-1R type 1 (IL-1R1) ectodomain upon binding to different ligands observed in crystal structures, we hypothesized that transient small-molecule binding sites may exist when IL-1R1 undergoes conformational transitions and may thus be suitable for inhibitor development. Here, we employed accelerated molecular dynamics (MD) simulation to efficiently sample the conformational space of the IL-1R1 ectodomain. Representative IL-1R1 ectodomain conformations determined from hierarchical cluster analysis were analyzed by the SiteMap program, which led to the identification of small-molecule binding sites at the protein-protein interaction interface and allosteric modulator locations. The cosolvent mapping analysis using phenol as the probe molecule further confirms the allosteric modulator site as a binding hotspot. The eight highest-ranked fragment molecules identified from in silico screening at the modulator site were evaluated by MD simulations. Four of them restricted the IL-1R1 dynamical motion to the inactive conformational space. The strategy from this study, subject to in vitro experimental validation, can be useful for identifying small-molecule compounds that target the allosteric modulator sites of IL-1R and prevent IL-1R from binding to cytokines by trapping IL-1R in inactive conformations.

  3. Analytical Method for Carbon and Oxygen Isotope of Small Carbonate Samples with the GasBench Ⅱ-IRMS Device

    Directory of Open Access Journals (Sweden)

    LIANG Cui-cui

    2015-01-01

    Full Text Available An analytical method for measuring the carbon and oxygen isotopic compositions of trace amounts of carbonate (>15 μg) was established using a Delta V Advantage isotope ratio MS coupled with a GasBench Ⅱ. Carbonate standard samples (IAEA-CO-1) of different trace amounts (5-50 μg) were measured by the GasBench Ⅱ with 12 mL and 3.7 mL vials. When the sample weight was less than 40 μg and samples were acidified in 12 mL vials, most standard deviations of δ13C and δ18O were more than 0.1‰, which could not satisfy high-precision measurements. When the sample weight was greater than 15 μg and samples were acidified in 3.7 mL vials, standard deviations for δ13C and δ18O were 0.01‰-0.07‰ and 0.01‰-0.08‰, respectively, which satisfied high-precision measurements. Therefore, with small 3.7 mL vials used to increase the concentration of carbon dioxide in the headspace, carbonate samples as small as 15 μg can be analyzed routinely by a GasBench Ⅱ continuous-flow IRMS. Meanwhile, the linear relationship between sample weight and peak area was strong (R²>0.9932) and can be used to determine the carbon content of carbonate samples.
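
    The weight-area linearity mentioned at the end is what makes a working curve possible; a minimal sketch with hypothetical calibration pairs (the numbers are not from the paper):

```python
import numpy as np

w = np.array([15.0, 20.0, 30.0, 40.0, 50.0])   # ug carbonate weighed in
area = np.array([6.1, 8.0, 12.2, 16.1, 20.3])  # measured CO2 peak areas (a.u.)

slope, intercept = np.polyfit(w, area, 1)
r2 = np.corrcoef(w, area)[0, 1] ** 2

def carbonate_ug(peak_area):
    """Invert the working curve: estimate sample carbonate from a peak area."""
    return (peak_area - intercept) / slope

print(f"R^2 = {r2:.4f}; peak area 10.0 -> {carbonate_ug(10.0):.1f} ug")
```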

  4. Success and failure rates of tumor genotyping techniques in routine pathological samples with non-small-cell lung cancer.

    Science.gov (United States)

    Vanderlaan, Paul A; Yamaguchi, Norihiro; Folch, Erik; Boucher, David H; Kent, Michael S; Gangadharan, Sidharta P; Majid, Adnan; Goldstein, Michael A; Huberman, Mark S; Kocher, Olivier N; Costa, Daniel B

    2014-04-01

    Identification of certain somatic molecular alterations in non-small-cell lung cancer (NSCLC) has become evidence-based practice. The success and failure rates of using commercially available tumor genotyping techniques in routine day-to-day NSCLC pathology samples are not well described. We sought to evaluate the success and failure rates of EGFR mutation, KRAS mutation, and ALK FISH testing in a cohort of lung cancers subjected to routine clinical tumor genotyping. Clinicopathologic data and tumor genotyping success and failure rates were retrospectively compiled and analyzed from 381 patient-tumor samples. Among these 381 patients with lung cancer, the mean age was 65 years, 61.2% were women, 75.9% were white, 27.8% were never smokers, 73.8% had advanced NSCLC and 86.1% had adenocarcinoma histology. The tumor tissue was obtained from surgical specimens in 48.8%, core needle biopsies in 17.9%, and as cell blocks from aspirates or fluid in 33.3% of cases. Anatomic sites for tissue collection included lung (49.3%), lymph nodes (22.3%), pleura (11.8%), bone (6.0%), and brain (6.0%), among others. The overall success rate was 94.2% for EGFR mutation analysis, 91.6% for KRAS mutation and 91.6% for ALK FISH. The highest failure rates were observed when the tissue was obtained from image-guided percutaneous transthoracic core-needle biopsies (31.8%, 27.3%, and 35.3% for EGFR, KRAS, and ALK tests, respectively) and bone specimens (23.1%, 15.4%, and 23.1%, respectively). In specimens obtained from bone, the failure rates were significantly higher for biopsies than for resection specimens (40% vs. 0%, p=0.024 for EGFR) and for decalcified compared to non-decalcified samples (60% vs. 5.5%, p=0.021 for EGFR). Tumor genotyping techniques are feasible in most samples, apart from small image-guided percutaneous transthoracic core-needle biopsies and decalcified bone samples from core biopsies, and therefore expansion of routine tumor genotyping into the care of patients with NSCLC may not require special…

  5. The effect of albedo neutrons on the neutron multiplication of small plutonium oxide samples in a PNCC chamber

    CERN Document Server

    Bourva, L C A; Weaver, D R

    2002-01-01

    This paper describes how to evaluate the effect of neutrons reflected from parts of a passive neutron coincidence chamber on the neutron leakage self-multiplication, M_L, of a fissile sample. It is shown that albedo neutrons contribute, in the case of small plutonium-bearing samples, to a significant part of M_L, and that their effect has to be taken into account in the relationship between the measured coincidence count rates and the 240Pu effective mass of the sample. A simple one-interaction model has been used to write the balance of neutron gains and losses in the material when exposed to the re-entrant neutron flux. The energy and intensity profiles of the re-entrant flux have been parameterised using Monte Carlo MCNP(TM) calculations. This technique has been implemented for the On Site Laboratory neutron/gamma counter within the existing MEPL 1.0 code for the determination of the neutron leakage self-multiplication. Benchmark tests of the resulting MEPL 2.0 code with MC...

  6. A practical method for determining γ-ray full-energy peak efficiency considering coincidence-summing and self-absorption corrections for the measurement of environmental samples after the Fukushima reactor accident

    Energy Technology Data Exchange (ETDEWEB)

    Shizuma, Kiyoshi, E-mail: shizuma@hiroshima-u.ac.jp [Graduate School of Engineering, Hiroshima University, Higashi-Hiroshima 739-8527 (Japan); Oba, Yurika; Takada, Momo [Graduate School of Integrated Arts and Sciences, Hiroshima University, Higashi-Hiroshima 739-8521 (Japan)

    2016-09-15

    A method for determining the γ-ray full-energy peak efficiency at positions close to three Ge detectors and at the well port of a well-type detector was developed for measuring environmental volume samples containing {sup 137}Cs, {sup 134}Cs and {sup 40}K. The efficiency was estimated by considering two correction factors: coincidence-summing and self-absorption corrections. The coincidence-summing correction for a cascade transition nuclide was estimated by an experimental method involving measuring a sample at the far and close positions of a detector. The derived coincidence-summing correction factors were compared with those of analytical and Monte Carlo simulation methods and good agreements were obtained. Differences in the matrix of the calibration source and the environmental sample resulted in an increase or decrease of the full-energy peak counts due to the self-absorption of γ-rays in the sample. The correction factor was derived as a function of the densities of several matrix materials. The present method was applied to the measurement of environmental samples and also low-level radioactivity measurements of water samples using the well-type detector.
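
    The far/close experimental estimate can be reduced to a short calculation: transfer the far-geometry (summing-free) count rate to the close geometry using an efficiency ratio measured with a summing-free line, and compare with the rate actually observed there. All numbers below are illustrative assumptions:

```python
# Efficiency transfer from far to close geometry for the peak of interest,
# measured with a summing-free single-gamma line of similar energy
eff_ratio = 8.5      # eps_close / eps_far (assumed)

# Peak count rates of a cascade nuclide (e.g. 134Cs) in both geometries
rate_far = 0.52      # counts/s; summing negligible at the far position
rate_close = 3.90    # counts/s; suppressed by summing-out losses up close

expected_close = rate_far * eff_ratio    # close rate if there were no summing
k_sum = expected_close / rate_close      # multiply close-geometry peak areas by this
print(f"coincidence-summing correction factor: {k_sum:.3f}")
```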

  7. Simultaneous extraction and clean-up of polychlorinated biphenyls and their metabolites from small tissue samples using pressurized liquid extraction

    Science.gov (United States)

    Kania-Korwel, Izabela; Zhao, Hongxia; Norstrom, Karin; Li, Xueshu; Hornbuckle, Keri C.; Lehmler, Hans-Joachim

    2008-01-01

    A pressurized liquid extraction-based method for the simultaneous extraction and in situ clean-up of polychlorinated biphenyls (PCBs), hydroxylated (OH)-PCBs and methylsulfonyl (MeSO2)-PCBs from small (< 0.5 gram) tissue samples was developed and validated. Extraction of a laboratory reference material with hexane:dichloromethane:methanol (48:43:9, v/v) and Florisil as fat retainer allowed an efficient recovery of PCBs (78–112%; RSD: 13–37%), OH-PCBs (46±2%; RSD: 4%) and MeSO2-PCBs (89±21%; RSD: 24%). Comparable results were obtained with an established analysis method for PCBs, OH-PCBs and MeSO2-PCBs. PMID:19019378

  8. CA II TRIPLET SPECTROSCOPY OF SMALL MAGELLANIC CLOUD RED GIANTS. III. ABUNDANCES AND VELOCITIES FOR A SAMPLE OF 14 CLUSTERS

    Energy Technology Data Exchange (ETDEWEB)

    Parisi, M. C.; Clariá, J. J.; Marcionni, N. [Observatorio Astronómico, Universidad Nacional de Córdoba, Laprida 854, Córdoba, CP 5000 (Argentina); Geisler, D.; Villanova, S. [Departamento de Astronomía, Universidad de Concepción Casilla 160-C, Concepción (Chile); Sarajedini, A. [Department of Astronomy, University of Florida P.O. Box 112055, Gainesville, FL 32611 (United States); Grocholski, A. J., E-mail: celeste@oac.uncor.edu, E-mail: claria@oac.uncor.edu, E-mail: nmarcionni@oac.uncor.edu, E-mail: dgeisler@astro-udec.cl, E-mail: svillanova@astro-udec.cl, E-mail: ata@astro.ufl.edu, E-mail: grocholski@phys.lsu.edu [Department of Physics and Astronomy, Louisiana State University 202 Nicholson Hall, Tower Drive, Baton Rouge, LA 70803-4001 (United States)

    2015-05-15

    We obtained spectra of red giants in 15 Small Magellanic Cloud (SMC) clusters in the region of the Ca ii lines with FORS2 on the Very Large Telescope. We determined the mean metallicity and radial velocity with mean errors of 0.05 dex and 2.6 km s{sup −1}, respectively, from a mean of 6.5 members per cluster. One cluster (B113) was too young for a reliable metallicity determination and was excluded from the sample. We combined the sample studied here with 15 clusters previously studied by us using the same technique, and with 7 clusters whose metallicities determined by other authors are on a scale similar to ours. This compilation of 36 clusters is the largest SMC cluster sample currently available with accurate and homogeneously determined metallicities. We found a high probability that the metallicity distribution is bimodal, with potential peaks at −1.1 and −0.8 dex. Our data show no strong evidence of a metallicity gradient in the SMC clusters, somewhat at odds with recent evidence from Ca ii triplet spectra of a large sample of field stars. This may be revealing possible differences in the chemical history of clusters and field stars. Our clusters show a significant dispersion of metallicities, whatever age is considered, which could be reflecting the lack of a unique age–metallicity relation in this galaxy. None of the chemical evolution models currently available in the literature satisfactorily represents the global chemical enrichment processes of SMC clusters.

  9. Corrections to primordial nucleosynthesis

    International Nuclear Information System (INIS)

    Dicus, D.A.; Kolb, E.W.; Gleeson, A.M.; Sudarshan, E.C.G.; Teplitz, V.L.; Turner, M.S.

    1982-01-01

    The changes in primordial nucleosynthesis resulting from small corrections to rates for weak processes that connect neutrons and protons are discussed. The weak rates are corrected by improved treatment of Coulomb and radiative corrections, and by inclusion of plasma effects. The calculations lead to a systematic decrease in the predicted 4 He abundance of about ΔY = 0.0025. The relative changes in other primordial abundances are also 1 to 2%.

  10. MaxEnt’s parameter configuration and small samples: are we paying attention to recommendations? A systematic review

    Directory of Open Access Journals (Sweden)

    Narkis S. Morales

    2017-03-01

    Full Text Available Environmental niche modeling (ENM) is commonly used to develop probabilistic maps of species distribution. Among available ENM techniques, MaxEnt has become one of the most popular tools for modeling species distribution, with hundreds of peer-reviewed articles published each year. MaxEnt’s popularity is mainly due to the use of a graphical interface and automatic parameter configuration capabilities. However, recent studies have shown that using the default automatic configuration may not always be appropriate because it can produce non-optimal models, particularly when dealing with a small number of species presence points. Thus, the recommendation is to evaluate the best potential combination of parameters (feature classes and regularization multiplier) to select the most appropriate model. In this work we reviewed 244 articles published between 2013 and 2015 to assess whether researchers are following recommendations to avoid using the default parameter configuration when dealing with small sample sizes, or if they are using MaxEnt as a “black box tool.” Our results show that authors evaluated the best feature classes in only 16% of the analyzed articles, the best regularization multipliers in 6.9%, and both parameters simultaneously in a meager 3.7% before producing the definitive distribution model. We analyzed 20 articles to quantify the potential differences in resulting outputs when using software default parameters instead of the alternative best model. Results from our analysis reveal important differences between the use of default parameters and the best model approach, especially in the total area identified as suitable for the assessed species and the specific areas that are identified as suitable by both modelling approaches. These results are worrying, because publications are potentially reporting over-complex or over-simplistic models that can undermine the applicability of their results. Of particular importance
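
    The recommended tuning step amounts to a small grid search over feature classes and regularization multipliers. A sketch of the idea; fit_and_score is a hypothetical stand-in that would wrap a MaxEnt fit and return a small-sample selection criterion such as AICc:

```python
from itertools import product

# Candidate feature classes (L=linear, Q=quadratic, H=hinge, P=product)
# and regularization multipliers typically screened in the literature.
feature_classes = ["L", "LQ", "LQH", "LQHP"]
reg_multipliers = [0.5, 1.0, 2.0, 4.0]

def fit_and_score(fc, rm):
    # Stand-in for fitting MaxEnt with these settings and returning AICc
    # on the small presence sample; replace with a real MaxEnt wrapper.
    return len(fc) * 2.0 + abs(rm - 1.0) * 3.0  # dummy score for illustration

best_fc, best_rm = min(product(feature_classes, reg_multipliers),
                       key=lambda cfg: fit_and_score(*cfg))
print(best_fc, best_rm)
```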

  11. Ca II TRIPLET SPECTROSCOPY OF SMALL MAGELLANIC CLOUD RED GIANTS. I. ABUNDANCES AND VELOCITIES FOR A SAMPLE OF CLUSTERS

    International Nuclear Information System (INIS)

    Parisi, M. C.; Claria, J. J.; Grocholski, A. J.; Geisler, D.; Sarajedini, A.

    2009-01-01

    We have obtained near-infrared spectra covering the Ca II triplet lines for a large number of stars associated with 16 Small Magellanic Cloud (SMC) clusters using the VLT + FORS2. These data compose the largest available sample of SMC clusters with spectroscopically derived abundances and velocities. Our clusters span a wide range of ages and provide good areal coverage of the galaxy. Cluster members are selected using a combination of their positions relative to the cluster center as well as their location in the color-magnitude diagram, abundances, and radial velocities (RVs). We determine mean cluster velocities to typically 2.7 km s⁻¹ and metallicities to 0.05 dex (random errors), from an average of 6.4 members per cluster. By combining our clusters with previously published results, we compile a sample of 25 clusters on a homogeneous metallicity scale and with relatively small metallicity errors, and thereby investigate the metallicity distribution, metallicity gradient, and age-metallicity relation (AMR) of the SMC cluster system. For all 25 clusters in our expanded sample, the mean metallicity [Fe/H] = -0.96 with σ = 0.19. The metallicity distribution may possibly be bimodal, with peaks at ∼-0.9 dex and -1.15 dex. Similar to the Large Magellanic Cloud (LMC), the SMC cluster system gives no indication of a radial metallicity gradient. However, intermediate age SMC clusters are both significantly more metal-poor and have a larger metallicity spread than their LMC counterparts. Our AMR shows evidence for three phases: a very early (>11 Gyr) phase in which the metallicity reached ∼-1.2 dex, a long intermediate phase from ∼10 to 3 Gyr in which the metallicity only slightly increased, and a final phase from 3 to 1 Gyr ago in which the rate of enrichment was substantially faster. We find good overall agreement with the model of Pagel and Tautvaisiene, which assumes a burst of star formation at 4 Gyr. Finally, we find that the mean RV of the cluster system

  12. Modifying Spearman's Attenuation Equation to Yield Partial Corrections for Measurement Error--With Application to Sample Size Calculations

    Science.gov (United States)

    Nicewander, W. Alan

    2018-01-01

    Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either-or-both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient which translates into "increasing the reliability of…
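
    For reference, Spearman's classical (full) correction, which the proposed method partially relaxes, is simple to compute; a minimal sketch:

```python
import math

def disattenuate(r_xy, rel_x, rel_y):
    """Spearman's classical correction for attenuation: the correlation
    between true scores, given the observed correlation r_xy and the
    reliabilities of the two measures."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Observed r = 0.42 with reliabilities 0.80 and 0.70:
print(disattenuate(0.42, 0.80, 0.70))  # ≈ 0.56
```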

  13. Correction of estimates of retention in care among a cohort of HIV-positive patients in Uganda in the period before starting ART: a sampling-based approach.

    Science.gov (United States)

    Nyakato, Patience; Kiragga, Agnes N; Kambugu, Andrew; Bradley, John; Baisley, Kathy

    2018-04-20

    The aim of this study was to use a sampling-based approach to obtain estimates of retention in HIV care before initiation of antiretroviral treatment (ART), corrected for outcomes in patients who were lost according to clinic registers. Retrospective cohort study of HIV-positive individuals not yet eligible for ART (CD4 >500). Three urban and three rural HIV care clinics in Uganda; information was extracted from the clinic registers for all patients who had registered for pre-ART care between January and August 2015. A random sample of patients who were lost according to the clinic registers (>3 months late to scheduled visit) was traced to ascertain their outcomes. The proportion of patients lost from care was estimated using a competing risks approach, first based on the information in the clinic records alone and then using inverse probability weights to incorporate the results from tracing. Cox regression was used to determine factors associated with loss from care. Of 1153 patients registered for pre-ART care (68% women, median age 29 years, median CD4 count 645 cells/µL), 307 (27%) were lost according to clinic records. Among these, 195 (63%) were selected for tracing; outcomes were ascertained in 118 (61%). Seven patients (6%) had died, 40 (34%) were in care elsewhere and 71 (60%) were out of care. Loss from care at 9 months was 30.2% (95% CI 27.3% to 33.5%). After incorporating outcomes from tracing, loss from care decreased to 18.5% (95% CI 13.8% to 23.6%). Estimates of loss from HIV care may be too high if based on routine clinic data alone. A sampling-based approach is a feasible way of obtaining more accurate estimates of retention, accounting for transfers to other clinics. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
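
    The weighting idea can be illustrated with the abstract's own counts. A simplified sketch of crude proportions only; the study itself applied inverse probability weights within a competing-risks time-to-event analysis, which this deliberately omits:

```python
# Counts taken from the abstract.
n_registered = 1153
n_lost_register = 307      # lost according to clinic registers
n_traced = 118             # lost patients with ascertained outcomes
out_of_care_traced = 71    # traced patients confirmed out of care

# Each traced patient stands in for n_lost_register / n_traced lost patients.
weight = n_lost_register / n_traced
est_out_of_care = out_of_care_traced * weight

print(n_lost_register / n_registered)    # ≈ 0.27, register data alone
print(est_out_of_care / n_registered)    # ≈ 0.16, after tracing correction
```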

  14. Small Body GN and C Research Report: G-SAMPLE - An In-Flight Dynamical Method for Identifying Sample Mass [External Release Version

    Science.gov (United States)

    Carson, John M., III; Bayard, David S.

    2006-01-01

    G-SAMPLE is an in-flight dynamical method for use by sample collection missions to identify the presence and quantity of collected sample material. The G-SAMPLE method implements a maximum-likelihood estimator to identify the collected sample mass, based on onboard force sensor measurements, thruster firings, and a dynamics model of the spacecraft. With G-SAMPLE, sample mass identification becomes a computation rather than an extra hardware requirement; the added cost of cameras or other sensors for sample mass detection is avoided. Realistic simulation examples are provided for a spacecraft configuration with a sample collection device mounted on the end of an extended boom. In one representative example, a 1000 gram sample mass is estimated to within 110 grams (95% confidence) under realistic assumptions of thruster profile error, spacecraft parameter uncertainty, and sensor noise. For convenience to future mission design, an overall sample-mass estimation error budget is developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.

  15. A covariance correction that accounts for correlation estimation to improve finite-sample inference with generalized estimating equations: A study on its applicability with structured correlation matrices.

    Science.gov (United States)

    Westgate, Philip M

    2016-01-01

    When generalized estimating equations (GEE) incorporate an unstructured working correlation matrix, the variances of regression parameter estimates can inflate due to the estimation of the correlation parameters. In previous work, an approximation for this inflation that results in a corrected version of the sandwich formula for the covariance matrix of regression parameter estimates was derived. Use of this correction for correlation structure selection also reduces the over-selection of the unstructured working correlation matrix. In this manuscript, we conduct a simulation study to demonstrate that an increase in variances of regression parameter estimates can occur when GEE incorporates structured working correlation matrices as well. Correspondingly, we show the ability of the corrected version of the sandwich formula to improve the validity of inference and correlation structure selection. We also study the relative influences of two popular corrections to a different source of bias in the empirical sandwich covariance estimator.
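
    For orientation, the empirical ("sandwich") covariance estimator that such corrections modify has the standard GEE form (the notation below is the usual one, not taken from the manuscript):

```latex
\widehat{\operatorname{Cov}}(\hat{\beta})
  = A^{-1} \left[ \sum_{i=1}^{N}
      D_i^{\top} V_i^{-1}
      \bigl(Y_i - \hat{\mu}_i\bigr)\bigl(Y_i - \hat{\mu}_i\bigr)^{\top}
      V_i^{-1} D_i \right] A^{-1},
\qquad
A = \sum_{i=1}^{N} D_i^{\top} V_i^{-1} D_i ,
```

    where D_i = ∂μ_i/∂β and V_i is the working covariance matrix of subject i. The corrections discussed here adjust this estimator for the extra variability introduced by estimating the working correlation parameters inside V_i.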

  16. Determination of degree of RBC agglutination for blood typing using a small quantity of blood sample in a microfluidic system.

    Science.gov (United States)

    Chang, Yaw-Jen; Ho, Ching-Yuan; Zhou, Xin-Miao; Yen, Hsiu-Rong

    2018-04-15

    Blood typing assay is a critical test to ensure the serological compatibility of a donor and an intended recipient prior to a blood transfusion. This paper presents a microfluidic blood typing system using a small quantity of blood sample to determine the degree of agglutination of red blood cell (RBC). Two measuring methods were proposed: impedimetric measurement and electroanalytical measurement. The charge transfer resistance in the impedimetric measurement and the power parameter in the electroanalytical measurement were used for the analysis of agglutination level. From the experimental results, both measuring methods provide quantitative results, and the parameters are linearly and monotonically related to the degree of RBC agglutination. However, the electroanalytical measurement is more reliable than the impedimetric technique because the impedimetric measurement may suffer from many influencing factors, such as chip conditions. Five levels from non-agglutination (level 0) to strong agglutination (level 4+) can be discriminated in this study, conforming to the clinical requirement to prevent any risks in transfusion. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Probability estimation of rare extreme events in the case of small samples: Technique and examples of analysis of earthquake catalogs

    Science.gov (United States)

    Pisarenko, V. F.; Rodkin, M. V.; Rukavishnikova, T. A.

    2017-11-01

    The most general approach to studying the recurrence law in the area of the rare largest events is associated with the use of limit law theorems of the theory of extreme values. In this paper, we use the Generalized Pareto Distribution (GPD). The unknown GPD parameters are typically determined by the method of maximum likelihood (ML). However, the ML estimation is only optimal for the case of fairly large samples (>200-300), whereas in many practically important cases, there are only dozens of large events. It is shown that in the case of a small number of events, the highest accuracy when using the GPD is provided by the method of quantiles (MQs). In order to illustrate the obtained methodical results, we have formed the compiled data sets characterizing the tails of the distributions for typical subduction zones, regions of intracontinental seismicity, and for the zones of midoceanic (MO) ridges. This approach paves the way for designing a new method for seismic risk assessment. Here, instead of the unstable characteristic—the uppermost possible magnitude M max—it is recommended to use the quantiles of the distribution of random maxima for a future time interval. The results of calculating such quantiles are presented.
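
    A sketch of the method of quantiles for the GPD: two empirical quantiles pin down the shape ξ and scale σ, compared against scipy's ML fit on a synthetic small sample. The choice of matching quantiles (50th and 90th) is illustrative, not taken from the paper:

```python
import numpy as np
from scipy import stats, optimize

def gpd_quantile(p, xi, sigma):
    """Quantile function of the GPD with threshold 0."""
    if abs(xi) < 1e-9:
        return -sigma * np.log1p(-p)
    return (sigma / xi) * ((1.0 - p) ** (-xi) - 1.0)

def fit_gpd_quantiles(excesses, p1=0.50, p2=0.90):
    """Method of quantiles: match two empirical quantiles of the threshold
    excesses. The quantile ratio depends on xi alone, so xi is found by a
    1-D root search and sigma then follows by scaling."""
    q1, q2 = np.quantile(excesses, [p1, p2])
    gap = lambda xi: gpd_quantile(p2, xi, 1.0) / gpd_quantile(p1, xi, 1.0) - q2 / q1
    xi = optimize.brentq(gap, -0.99, 5.0)
    sigma = q1 / gpd_quantile(p1, xi, 1.0)
    return xi, sigma

rng = np.random.default_rng(1)
sample = stats.genpareto.rvs(c=0.2, scale=1.0, size=50, random_state=rng)  # "dozens" of events
print(fit_gpd_quantiles(sample))                 # method of quantiles
print(stats.genpareto.fit(sample, floc=0)[::2])  # MLE (xi, sigma) for comparison
```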

  18. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    Science.gov (United States)

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30. Using a nonparametric method (or a robust method after Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.
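
    The core simulation is easy to reproduce. A minimal sketch using the Shapiro-Wilk test; the populations and sample counts here are illustrative, not the paper's exact simulated populations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_sim, alpha = 30, 100, 0.05

for draw, label in [(rng.standard_normal, "Gaussian"),
                    (lambda size: rng.lognormal(size=size), "lognormal")]:
    rejections = sum(stats.shapiro(draw(size=n))[1] < alpha for _ in range(n_sim))
    # Under Gaussian data the rejection rate should sit near alpha; under
    # lognormal data it estimates the power to detect non-normality at n = 30.
    print(label, rejections / n_sim)
```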

  19. Critical assessment of the performance of electronic moisture analyzers for small amounts of environmental samples and biological reference materials.

    Science.gov (United States)

    Krachler, M

    2001-12-01

    Two electronic moisture analyzers were critically evaluated with regard to their suitability for determining moisture in small amounts of environmental matrices such as leaves, needles, soil, peat, sediments, and sewage sludge, as well as various biological reference materials. To this end, several homogeneous bulk materials were prepared which were subsequently employed for the development and optimization of all analytical procedures. The key features of the moisture analyzers included a halogen or ceramic heater and an integrated balance with a resolution of 0.1 mg, which is an essential prerequisite for obtaining precise results. Oven drying of the bulk materials in a conventional oven at 105 degrees C until constant mass served as the reference method. A heating temperature of 65 degrees C was found to provide accurate and precise results for almost all matrices investigated. To further improve the accuracy and precision, other critical parameters such as handling of sample pans, standby temperature, and measurement delay were optimized. Because of its ponderous heating behavior, the performance of the ceramic radiator was inferior to that of the halogen heater, which produced moisture results comparable to those obtained by oven drying. The developed drying procedures were successfully applied to the fast moisture analysis (1.4-6.3 min) of certified biological reference materials of similar provenance to the investigated bulk materials. Moisture results for 200 mg aliquots ranged from 1.4 to 7.8% and good agreement was obtained between the recommended drying procedure for the reference materials and the electronic moisture analyzers, with absolute uncertainties amounting to 0.1% and 0.2-0.3%, respectively.

  20. QT interval in healthy dogs: which method of correcting the QT interval in dogs is appropriate for use in small animal clinics?

    Directory of Open Access Journals (Sweden)

    Maira S. Oliveira

    2014-05-01

    Full Text Available The electrocardiographic (ECG) QT interval is influenced by fluctuations in heart rate (HR), which may lead to misinterpretation of its length. Considering that alterations in QT interval length reflect abnormalities of the ventricular repolarisation which predispose to the occurrence of arrhythmias, this variable must be properly evaluated. The aim of this work is to determine which method of correcting the QT interval is the most appropriate for dogs regarding different ranges of normal HR (different breeds). Healthy adult dogs (n=130; German Shepherd, Boxer, Pit Bull Terrier, and Poodle) were submitted to ECG examination and QT intervals were determined in triplicate from the bipolar limb II lead and corrected for the effects of HR through the application of three published formulae involving quadratic, cubic or linear regression. The mean corrected QT values (QTc) obtained using the diverse formulae were significantly different (p<0.05), while those derived according to the equation QTcV = QT + 0.087(1 − RR) were the most consistent (linear regression). QTcV values were strongly correlated (r=0.83) with the QT interval and showed a coefficient of variation of 8.37% and a 95% confidence interval of 0.22-0.23 s. Owing to its simplicity and reliability, the QTcV was considered the most appropriate to be used for the correction of the QT interval in dogs.
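
    The preferred linear correction is a one-line computation, with QT and RR expressed in seconds; a minimal sketch using the formula quoted in the abstract:

```python
def qtcv(qt_s, rr_s):
    """Linear QT correction quoted in the abstract:
    QTcV = QT + 0.087 * (1 - RR), with QT and RR in seconds."""
    return qt_s + 0.087 * (1.0 - rr_s)

# A dog with QT = 0.21 s at 120 beats/min (RR = 60/120 = 0.5 s):
print(qtcv(0.21, 0.5))  # 0.2535 s
```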

  1. Structural properties of small Lin (n = 5-8) atomic clusters via ab initio random structure searching: A look into the role of different implementations of long-range dispersion corrections

    Science.gov (United States)

    Putungan, Darwin Barayang; Lin, Shi-Hsin

    2018-01-01

    In this work, we looked into the lowest energy structures of small lithium clusters (Lin, n = 5, 6, 7, 8) utilizing the conventional PBE exchange-correlation functional, PBE with D2 dispersion correction and PBE with Tkatchenko and Scheffler (TS) dispersion correction, and searched using ab initio random structure searching. Results show that, in general, dispersion-corrected PBE obtained lowest-minima structures similar to those obtained via conventional PBE regardless of the type of implementation, although both D2 and TS found several high-energy isomers that conventional PBE did not arrive at, with TS in general giving more structures per energy range, which could be attributed to its environment-dependent implementation. Moreover, D2 and TS dispersion corrections found a lowest energy geometry for the Li8 cluster that is in agreement with the structure obtained via the typical benchmarking method, diffusion Monte Carlo, in a recent work. It is thus suggested that for much larger lithium clusters, utilization of dispersion correction could help in searching for lowest energy minima that are in close agreement with diffusion Monte Carlo results, while remaining computationally inexpensive.

  2. The challenge of NSCLC diagnosis and predictive analysis on small samples. Practical approach of a working group

    DEFF Research Database (Denmark)

    Thunnissen, Erik; Kerr, Keith M; Herth, Felix J F

    2012-01-01

    Until recently, the division of pulmonary carcinomas into small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC) was adequate for therapy selection. Due to the emergence of new treatment options subtyping of NSCLC and predictive testing have become mandatory. A practical approach to...

  3. [Self-assessment of BMI data : verification of the practicability of a correction formula on a sample of 11- to 13-year-old girls].

    Science.gov (United States)

    Wick, K; Hölling, H; Schlack, R; Bormann, B; Brix, C; Sowa, M; Strauss, B; Berger, U

    2011-06-01

    The decision to measure or to ask about data concerning height and weight in order to calculate body mass index (BMI) has an influence on the economy and validity of the measurements. Although self-reported information is less expensive, it may bias the determined prevalences of different weight groups. Using representative data from the KiGGS study with a comparison of directly measured and self-reported BMI data, Kurth and Ellert (2010) developed two correction formulas for prevalences resulting from self-reported information. The aim of the study was to examine the practicability of the proposed correction formulas on our own self-reported BMI data of 11- to 13-year-old girls (n=1,271) and to assess the plausibility of the corrected measurements. As a result, the prevalences in our own data changed in the expected direction both for underweight and for overweight. Both formulas were found to be practicable; the consideration of the subjective weight status (formula 2) resulted in a greater change in prevalences compared to the first correction formula.

  4. Calculation of thermal neutron self-shielding correction factors for aqueous bulk sample prompt gamma neutron activation analysis using the MCNP code

    International Nuclear Information System (INIS)

    Nasrabadi, M.N.; Jalali, M.; Mohammadi, A.

    2007-01-01

    In this work thermal neutron self-shielding in aqueous bulk samples containing neutron absorbing materials is studied using bulk sample prompt gamma neutron activation analysis (BSPGNAA) with the MCNP code. The code was used to perform three dimensional simulations of a neutron source, neutron detector and sample of various material compositions. The MCNP model was validated against experimental measurements of the neutron flux performed using a BF 3 detector. Simulations were performed to predict thermal neutron self-shielding in aqueous bulk samples containing neutron absorbing solutes. In practice, the MCNP calculations are combined with experimental measurements of the relative thermal neutron flux over the sample's surface, with respect to a reference water sample, to derive the thermal neutron self-shielding within the sample. The proposed methodology can be used for the determination of the elemental concentration of unknown aqueous samples by BSPGNAA where knowledge of the average thermal neutron flux within the sample volume is required

  5. Quantification accuracy and partial volume effect in dependence of the attenuation correction of a state-of-the-art small animal PET scanner

    International Nuclear Information System (INIS)

    Mannheim, Julia G; Judenhofer, Martin S; Schmid, Andreas; Pichler, Bernd J; Tillmanns, Julia; Stiller, Detlef; Sossi, Vesna

    2012-01-01

    Quantification accuracy and partial volume effect (PVE) of the Siemens Inveon PET scanner were evaluated. The influence of transmission source activities (40 and 160 MBq) on the quantification accuracy and the PVE were determined. Dynamic range, object size and PVE for different sphere sizes, contrast ratios and positions in the field of view (FOV) were evaluated. The acquired data were reconstructed using different algorithms and correction methods. The activity level of the transmission source and the total emission activity in the FOV strongly influenced the attenuation maps. Reconstruction algorithms, correction methods, object size and location within the FOV had a strong influence on the PVE in all configurations. All evaluated parameters potentially influence the quantification accuracy. Hence, all protocols should be kept constant during a study to allow a comparison between different scans. (paper)

  6. Automated microfluidic sample-preparation platform for high-throughput structural investigation of proteins by small-angle X-ray scattering

    DEFF Research Database (Denmark)

    Lafleur, Josiane P.; Snakenborg, Detlef; Nielsen, Søren Skou

    2011-01-01

    A new microfluidic sample-preparation system is presented for the structural investigation of proteins using small-angle X-ray scattering (SAXS) at synchrotrons. The system includes hardware and software features for precise fluidic control, sample mixing by diffusion, automated X-ray exposure control, UV absorbance measurements and automated data analysis. As little as 15 µl of sample is required to perform a complete analysis cycle, including sample mixing, SAXS measurement, continuous UV absorbance measurements, and cleaning of the channels and X-ray cell with buffer. The complete analysis...

  7. Publisher Correction

    DEFF Research Database (Denmark)

    Turcot, Valérie; Lu, Yingchang; Highland, Heather M

    2018-01-01

    In the published version of this paper, the name of author Emanuele Di Angelantonio was misspelled. This error has now been corrected in the HTML and PDF versions of the article.

  8. Author Correction

    DEFF Research Database (Denmark)

    Grundle, D S; Löscher, C R; Krahmann, G

    2018-01-01

    A correction to this article has been published and is linked from the HTML and PDF versions of this paper. The error has not been fixed in the paper.

  9. Effects of Sample Impurities on the Analysis of MS2 Bacteriophage by Small-Angle Neutron Scattering

    National Research Council Canada - National Science Library

    Elashvili, Ilya; Wick, Charles H; Kuzmanovic, Deborah A; Krueger, Susan; O'Connell, Catherine

    2005-01-01

    ... The impact of small molecular weight impurities on the resolution of structural data obtained by SANS of the bacteriophage MS2 is examined: such impurities distort the resolution and sharpness of contrast variation peaks...

  10. Development of an evaluation method for fracture mechanical tests on small samples based on a cohesive zone model

    International Nuclear Information System (INIS)

    Mahler, Michael

    2016-01-01

    The safety and reliability of nuclear power plants of the fourth generation is an important issue. It is based on a reliable assessment of the components, for which, among others, fracture mechanical material properties are required. The irradiation present in the power plants significantly affects the material properties, which therefore need to be determined on irradiated material. Often only small amounts of irradiated material are available for characterization. In that case it is not possible to manufacture sufficiently large specimens, which are necessary for fracture mechanical testing in agreement with the standard. Small specimens must be used. From this follows the idea of this study, in which the fracture toughness can be predicted with the developed method based on tests of small specimens. For this purpose, the fracture process including the crack growth is described with a continuum mechanical approach using the finite element method and the cohesive zone model. The experiments on small specimens are used for parameter identification of the cohesive zone model. The two parameters of the cohesive zone model are determined by tensile tests on notched specimens (cohesive stress) and by parameter fitting to the fracture behavior of small specimens (cohesive energy). To account for the different triaxialities of the specimens, the cohesive stress is used depending on the triaxiality. After parameter identification, a large specimen can be simulated with the cohesive zone parameters derived from small specimens. The predicted fracture toughness of this large specimen fulfills the size requirements in the standard (ASTM E1820 or ASTM E399), in contrast to the small specimen. This method can be used for ductile and brittle material behavior and was validated in this work. In summary, this method offers the possibility to determine the fracture toughness indirectly based on small specimen testing. Its main advantage is the low required specimen volume. Thereby massively
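
    The two identified parameters fully define a traction-separation law. A minimal sketch of a generic bilinear law; the bilinear shape and the numerical values are illustrative assumptions, not the dissertation's specific law:

```python
import numpy as np

def bilinear_traction(delta, t_max, g_c, delta0_frac=0.01):
    """Generic bilinear traction-separation law defined by the two cohesive
    zone parameters: peak traction t_max (cohesive stress) and fracture
    energy g_c (cohesive energy, the area under the curve)."""
    delta_f = 2.0 * g_c / t_max          # final separation: G_c = 0.5 * t_max * delta_f
    delta_0 = delta0_frac * delta_f      # separation at peak traction
    rising = t_max * delta / delta_0
    softening = t_max * (delta_f - delta) / (delta_f - delta_0)
    return np.where(delta <= delta_0, rising, np.clip(softening, 0.0, None))

sep = np.linspace(0.0, 0.02, 5)          # separation in mm
print(bilinear_traction(sep, t_max=1800.0, g_c=15.0))  # MPa and N/mm, illustrative
```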

  11. Chromatographic background drift correction coupled with parallel factor analysis to resolve coelution problems in three-dimensional chromatographic data: quantification of eleven antibiotics in tap water samples by high-performance liquid chromatography coupled with a diode array detector.

    Science.gov (United States)

    Yu, Yong-Jie; Wu, Hai-Long; Fu, Hai-Yan; Zhao, Juan; Li, Yuan-Na; Li, Shu-Fang; Kang, Chao; Yu, Ru-Qin

    2013-08-09

    Chromatographic background drift correction has been an important field of research in chromatographic analysis. In the present work, orthogonal spectral space projection for background drift correction of three-dimensional chromatographic data was described in detail and combined with parallel factor analysis (PARAFAC) to resolve overlapped chromatographic peaks and obtain the second-order advantage. This strategy was verified by simulated chromatographic data and afforded significant improvement in quantitative results. Finally, this strategy was successfully utilized to quantify eleven antibiotics in tap water samples. Compared with the traditional methodology of introducing excessive factors for the PARAFAC model to eliminate the effect of background drift, clear improvement in the quantitative performance of PARAFAC was observed after background drift correction by orthogonal spectral space projection. Copyright © 2013 Elsevier B.V. All rights reserved.
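
    The underlying operation is a projection of the data onto the orthogonal complement of the background spectral space. A generic numpy sketch of that idea; the paper's exact construction of the background subspace may differ:

```python
import numpy as np

def project_out_background(X, B, rank=2):
    """Remove the spectral subspace spanned by background spectra.

    X : (n_times, n_wavelengths) sample data matrix
    B : (m, n_wavelengths) background/blank spectra measured separately
    The leading right-singular vectors of B span the background spectral
    space; projecting X onto its orthogonal complement suppresses the
    drift before the PARAFAC decomposition."""
    _, _, vt = np.linalg.svd(B, full_matrices=False)
    V = vt[:rank].T                      # orthonormal background basis
    return X - X @ V @ V.T               # orthogonal complement projection

# Toy demo: a drifting baseline plus a Gaussian peak.
wl = np.linspace(0, 1, 200)
background = np.vstack([np.ones_like(wl), wl])   # offset + linear drift
signal = np.exp(-((wl - 0.5) ** 2) / 0.002)
X = np.outer([1.0, 2.0, 3.0], signal) + np.outer([0.5, 1.0, 1.5], wl)
X_corr = project_out_background(X, background)
```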

  12. Body Mass Index, family lifestyle, physical activity and eating behavior on a sample of primary school students in a small town of Western Sicily

    Directory of Open Access Journals (Sweden)

    Enza Sidoti

    2009-09-01

    Full Text Available

    Background: Obesity is currently a discernible issue in prosperous western society and is dramatically increasing in children and adolescents. Many studies indicate that obesity in childhood may become a chronic disease in adulthood and, particularly, those who are severely overweight have an increased risk of death by cardiovascular disease. Understanding the determinants of life style and behavior in a person’s youth and making attempts to change children’s habits is considered a key strategy in the primary prevention of obesity. This study aims to find a correlation between Body Mass Index (BMI), physical activity and eating behavior and to identify, eventually, risks, protective factors and possible directions for interventions on incorrect nutritional/physical activity and intra-familiar life styles in a sample of young adolescents in a small town of Western Sicily.

    Methods: The research surveyed the entire population of the last three curricular years of two Primary Schools in a town of western Sicily (n=294). The instrument used for the survey was a questionnaire containing 20 different items with multiple-choice answers. Personal information, physical activity and eating behaviors were collected both for parents and students to cross students’ and parents’ characteristics. Data were codified and statistical analysis was computed through Statistica and Openstat software.

    Results: Data obtained demonstrated a relevant percentage (18%) of obese children. Prevalence of overweight was high as well (23%), and many in this area (12%) were at risk since they were on the limits of the lower class. A significant association was found between the percentage of students classified as having an elevated BMI and a sedentary habit and/or an incorrect eating behavior. Among the overweight and obese children a direct statistical association was also shown between the weight of their

  13. Analysis and comparison of fish growth from small samples of length-at-age data : Detection of sexual dimorphism in Eurasian perch as an example

    NARCIS (Netherlands)

    Mooij, WM; Van Rooij, JM; Wijnhoven, S

    A relatively simple approach is presented for statistical analysis and comparison of fish growth patterns inferred from size-at-age data. It can be used for any growth model and small sample sizes. Bootstrapping is used to generate confidence regions for the model parameters and for size and growth
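
    A minimal sketch of the bootstrap idea on synthetic length-at-age data. The von Bertalanffy model and all numbers are assumptions for illustration; the approach works for any growth model:

```python
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(age, l_inf, k, t0):
    """Von Bertalanffy growth: expected length at age."""
    return l_inf * (1.0 - np.exp(-k * (age - t0)))

rng = np.random.default_rng(42)
age = np.array([1, 1, 2, 2, 3, 3, 4, 5, 5, 6], dtype=float)    # small sample
length = von_bertalanffy(age, 25.0, 0.5, -0.2) + rng.normal(0, 1.0, age.size)

boot = []
for _ in range(999):
    idx = rng.integers(0, age.size, age.size)                  # resample (age, length) pairs
    try:
        p, _ = curve_fit(von_bertalanffy, age[idx], length[idx],
                         p0=(25.0, 0.5, 0.0), maxfev=10000)
    except RuntimeError:
        continue                                               # skip degenerate resamples
    boot.append(p)
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)              # CI for (L_inf, k, t0)
print(lo, hi)
```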

  14. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers

    OpenAIRE

    Dobie, Robert A; Wojcik, Nancy C

    2015-01-01

    Objectives The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999–2006) show that hearing thresholds in the US population have improved...

  15. A new set-up for simultaneous high-precision measurements of CO2, δ13C-CO2 and δ18O-CO2 on small ice core samples

    Science.gov (United States)

    Jenk, Theo Manuel; Rubino, Mauro; Etheridge, David; Ciobanu, Viorela Gabriela; Blunier, Thomas

    2016-08-01

    Palaeoatmospheric records of carbon dioxide and its stable carbon isotope composition (δ13C) obtained from polar ice cores provide important constraints on the natural variability of the carbon cycle. However, the measurements are both analytically challenging and time-consuming; thus only data exist from a limited number of sampling sites and time periods. Additional analytical resources with high analytical precision and throughput are thus desirable to extend the existing datasets. Moreover, consistent measurements derived by independent laboratories and a variety of analytical systems help to further increase confidence in the global CO2 palaeo-reconstructions. Here, we describe our new set-up for simultaneous measurements of atmospheric CO2 mixing ratios and atmospheric δ13C and δ18O-CO2 in air extracted from ice core samples. The centrepiece of the system is a newly designed needle cracker for the mechanical release of air entrapped in ice core samples of 8-13 g operated at -45 °C. The small sample size allows for high resolution and replicate sampling schemes. In our method, CO2 is cryogenically and chromatographically separated from the bulk air and its isotopic composition subsequently determined by continuous flow isotope ratio mass spectrometry (IRMS). In combination with thermal conductivity measurement of the bulk air, the CO2 mixing ratio is calculated. The analytical precision determined from standard air sample measurements over ice is ±1.9 ppm for CO2 and ±0.09 ‰ for δ13C. In a laboratory intercomparison study with CSIRO (Aspendale, Australia), good agreement between CO2 and δ13C results is found for Law Dome ice core samples. Replicate analysis of these samples resulted in a pooled standard deviation of 2.0 ppm for CO2 and 0.11 ‰ for δ13C. These numbers are good, though they are rather conservative estimates of the overall analytical precision achieved for single ice sample measurements. Facilitated by the small sample requirement

  16. Investigation of the Effect of Small Hardening Spots Created on the Sample Surface by Laser Complex with Solid-State Laser

    Science.gov (United States)

    Nozdrina, O.; Zykov, I.; Melnikov, A.; Tsipilev, V.; Turanov, S.

    2018-03-01

    This paper describes the results of an investigation of the effect of small hardening spots (about 1 mm) created on the surface of a sample by a laser complex with a solid-state laser. The melted area of the steel sample does not exceed 5%. The change in steel microhardness in the region subjected to laser treatment is studied, and the dependence of sample deformation on tension is presented. As a result, changes in the yield plateau and in the plastic properties were detected. The flow lines were tracked in a series of speckle photographs, showing how a millimetre-scale surface inhomogeneity can influence the deformation and strength properties of steel.

  17. The Impact of Correcting Cognitive Distortions in Reducing Depression and the Sense of Insecurity among a Sample of Female Refugee Adolescents

    Science.gov (United States)

    Mhaidat, Fatin; ALharbi, Bassam H. M.

    2016-01-01

    This study aimed at identifying the level of depression and sense of insecurity among a sample of female refugee adolescents, and the impact of an indicative program for reducing cognitive distortions in reducing depression and their sense of insecurity. The study sample consisted of 220 female refugee adolescents, 7th to 1st secondary stage, at…

  18. Approaches for cytogenetic and molecular analyses of small flow-sorted cell populations from childhood leukemia bone marrow samples

    DEFF Research Database (Denmark)

    Obro, Nina Friesgaard; Madsen, Hans O.; Ryder, Lars Peter

    2011-01-01

    ...defined cell populations with subsequent analyses of leukemia-associated cytogenetic and molecular markers. The approaches described here optimize the use of the same tube of unfixed, antibody-stained BM cells for flow-sorting of small cell populations and subsequent exploratory FISH- and PCR-based analyses...

  19. SU-E-T-225: Correction Matrix for PinPoint Ionization Chamber for Dosimetric Measurements in the Newly Released Incise™ Multileaf Collimator Shaped Small Field for CyberKnife M6™ Machine

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Y; Li, T; Heron, D; Huq, M [University of Pittsburgh Cancer Institute and UPMC CancerCenter, Pittsburgh, PA (United States)

    2015-06-15

    Purpose: For small field dosimetry, such as measurements of output factors for cones or MLC-shaped irregular small fields, ion chambers often result in an underestimation of the dose, due to both the volume averaging effect and the lack of lateral charged particle equilibrium. This work presents a mathematical model for a correction matrix for a PTW PinPoint ionization chamber for dosimetric measurements made in the newly released Incise™ Multileaf collimator fields of the CyberKnife M6™ machine. Methods: A correction matrix for a PTW 0.015cc PinPoint ionization chamber was developed by modeling its 3D dose response in twelve cone-shaped circular fields created using the 5mm, 7.5mm, 10mm, 12.5mm, 15mm, 20mm, 25mm, 30mm, 35mm, 40mm, 50mm, 60mm cones in a CyberKnife M6™ machine. For each field size, hundreds of readings were recorded for every 2mm chamber shift in the horizontal plane. The contribution of each dose pixel to a measurement point depended on the radial distance and the angle to the chamber axis. These readings were then compared with the theoretical dose as obtained with Monte Carlo calculation. A penalized least-square optimization algorithm was developed to generate the correction matrix. After the parameter fitting, the mathematical model was validated for MLC-shaped irregular fields. Results: The optimization algorithm used for parameter fitting was stable and the resulting response factors were smooth in the spatial domain. After correction with the mathematical model, the chamber reading matched the calculation for all the tested fields to within 2%. Conclusion: A novel mathematical model has been developed for the PinPoint chamber for dosimetric measurements in small MLC-shaped irregular fields. The correction matrix is dependent on the detector, the treatment unit and the setup geometry. The model can be applied to non-standard composite fields and provides access to IMRT point dose validation.
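
    The parameter fitting reduces to a regularized linear inverse problem. A generic sketch of penalized least squares with a ridge penalty; the authors' actual penalty term and response model are not specified here, and the matrices are illustrative:

```python
import numpy as np

def penalized_least_squares(A, b, lam=1e-2):
    """Solve min_w ||A w - b||^2 + lam * ||w||^2 in closed form.
    A maps the unknown spatial response factors w to chamber readings b;
    the penalty keeps the fitted response factors smooth and stable."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy system: 100 readings, 20 response-factor unknowns.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 20))
w_true = np.linspace(1.0, 0.5, 20)
b = A @ w_true + rng.normal(scale=0.01, size=100)
print(penalized_least_squares(A, b)[:5])
```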

  20. SU-E-T-225: Correction Matrix for PinPoint Ionization Chamber for Dosimetric Measurements in the Newly Released Incise™ Multileaf Collimator Shaped Small Field for CyberKnife M6™ Machine

    International Nuclear Information System (INIS)

    Zhang, Y; Li, T; Heron, D; Huq, M

    2015-01-01

    Purpose: For small field dosimetry, such as measurements of output factors for cones or MLC-shaped irregular small fields, ion chambers often result in an underestimation of the dose, due to both the volume averaging effect and the lack of lateral charged particle equilibrium. This work presents a mathematical model for a correction matrix for a PTW PinPoint ionization chamber for dosimetric measurements made in the newly released Incise™ Multileaf collimator fields of the CyberKnife M6™ machine. Methods: A correction matrix for a PTW 0.015cc PinPoint ionization chamber was developed by modeling its 3D dose response in twelve cone-shaped circular fields created using the 5mm, 7.5mm, 10mm, 12.5mm, 15mm, 20mm, 25mm, 30mm, 35mm, 40mm, 50mm, 60mm cones in a CyberKnife M6™ machine. For each field size, hundreds of readings were recorded for every 2mm chamber shift in the horizontal plane. The contribution of each dose pixel to a measurement point depended on the radial distance and the angle to the chamber axis. These readings were then compared with the theoretical dose as obtained with Monte Carlo calculation. A penalized least-square optimization algorithm was developed to generate the correction matrix. After the parameter fitting, the mathematical model was validated for MLC-shaped irregular fields. Results: The optimization algorithm used for parameter fitting was stable and the resulting response factors were smooth in the spatial domain. After correction with the mathematical model, the chamber reading matched the calculation for all the tested fields to within 2%. Conclusion: A novel mathematical model has been developed for the PinPoint chamber for dosimetric measurements in small MLC-shaped irregular fields. The correction matrix is dependent on the detector, the treatment unit and the setup geometry. The model can be applied to non-standard composite fields and provides access to IMRT point dose validation.

  1. Calibrating the X-ray attenuation of liquid water and correcting sample movement artefacts during in operando synchrotron X-ray radiographic imaging of polymer electrolyte membrane fuel cells.

    Science.gov (United States)

    Ge, Nan; Chevalier, Stéphane; Hinebaugh, James; Yip, Ronnie; Lee, Jongmin; Antonacci, Patrick; Kotaka, Toshikazu; Tabuchi, Yuichiro; Bazylak, Aimy

    2016-03-01

    Synchrotron X-ray radiography, due to its high temporal and spatial resolutions, provides a valuable means for understanding the in operando water transport behaviour in polymer electrolyte membrane fuel cells. The purpose of this study is to address the specific artefact of imaging sample movement, which poses a significant challenge to synchrotron-based imaging for fuel cell diagnostics. Specifically, the impact of the micrometer-scale movement of the sample was determined, and a correction methodology was developed. At a photon energy level of 20 keV, a maximum movement of 7.5 µm resulted in a false water thickness of 0.93 cm (9% higher than the maximum amount of water that the experimental apparatus could physically contain). This artefact was corrected by image translations based on the relationship between the false water thickness value and the distance moved by the sample. The implementation of this correction method led to a significant reduction in false water thickness (to ∼0.04 cm). Furthermore, to account for inaccuracies in pixel intensities due to the scattering effect and higher harmonics, a calibration technique was introduced for the liquid water X-ray attenuation coefficient, which was found to be 0.657 ± 0.023 cm⁻¹ at 20 keV. The work presented in this paper provides valuable tools for artefact compensation and accuracy improvements for dynamic synchrotron X-ray imaging of fuel cells.
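
    With the calibrated attenuation coefficient, the water thickness follows from a Beer-Lambert inversion of the transmitted intensities; a minimal sketch, with illustrative intensity values:

```python
import numpy as np

MU_WATER_20KEV = 0.657  # cm^-1, calibrated attenuation coefficient from the paper

def water_thickness(i_wet, i_dry, mu=MU_WATER_20KEV):
    """Beer-Lambert inversion: liquid water thickness (cm) from the
    transmitted intensity with water (i_wet) and the dry reference (i_dry)."""
    return np.log(i_dry / i_wet) / mu

# A 2% drop in transmitted intensity corresponds to ~0.03 cm of water:
print(water_thickness(0.98, 1.00))
```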

  2. A novel device for batch-wise isolation of α-cellulose from small-amount wholewood samples

    OpenAIRE

    T. Wieloch; Gerhard Helle; Ingo Heinrich; Michael Voigt; P. Schyma

    2011-01-01

    A novel device for the chemical isolation of α-cellulose from wholewood material of tree rings was designed by the Potsdam Dendro Laboratory. It allows the simultaneous treatment of up to several hundred micro samples. Key features are the batch-wise exchange of the chemical solutions, the reusability of all major parts and the easy and unambiguous labelling of each individual sample. Compared to classical methods labour intensity and running costs are significantly reduced.

  3. Sample Preparation and Extraction in Small Sample Volumes Suitable for Pediatric Clinical Studies: Challenges, Advances, and Experiences of a Bioanalytical HPLC-MS/MS Method Validation Using Enalapril and Enalaprilat

    Science.gov (United States)

    Burckhardt, Bjoern B.; Laeer, Stephanie

    2015-01-01

    In the USA and Europe, medicines agencies mandate the development of child-appropriate medications and intend to increase the availability of information on pediatric use. This calls for bioanalytical methods which are able to deal with small sample volumes, as the trial-related blood loss is very restricted in children. The broadly used HPLC-MS/MS, while able to cope with small volumes, is susceptible to matrix effects. The latter hamper precise drug quantification by, for example, causing signal suppression. Sophisticated sample preparation and purification utilizing solid-phase extraction was applied to reduce and control matrix effects. A scale-up from vacuum manifold to positive pressure manifold was conducted to meet the demands of high throughput within a clinical setting. Challenges faced, advances and experiences in solid-phase extraction are presented using the example of the bioanalytical method development and validation for low-volume samples (50 μL serum). Enalapril, enalaprilat, and benazepril served as sample drugs. The applied sample preparation and extraction successfully reduced the absolute and relative matrix effect to comply with international guidelines. Recoveries ranged from 77 to 104% for enalapril and from 93 to 118% for enalaprilat. The bioanalytical method comprising sample extraction by solid-phase extraction was fully validated according to FDA and EMA bioanalytical guidelines and was used in a Phase I study in 24 volunteers. PMID:25873972

  4. Sample Preparation and Extraction in Small Sample Volumes Suitable for Pediatric Clinical Studies: Challenges, Advances, and Experiences of a Bioanalytical HPLC-MS/MS Method Validation Using Enalapril and Enalaprilat

    Directory of Open Access Journals (Sweden)

    Bjoern B. Burckhardt

    2015-01-01

    Full Text Available In the USA and Europe, medicines agencies mandate the development of child-appropriate medications and intend to increase the availability of information on pediatric use. This calls for bioanalytical methods which are able to deal with small sample volumes, as the trial-related blood loss is very restricted in children. The broadly used HPLC-MS/MS, while able to cope with small volumes, is susceptible to matrix effects. The latter hamper precise drug quantification by, for example, causing signal suppression. Sophisticated sample preparation and purification utilizing solid-phase extraction was applied to reduce and control matrix effects. A scale-up from vacuum manifold to positive pressure manifold was conducted to meet the demands of high throughput within a clinical setting. Challenges faced, advances and experiences in solid-phase extraction are presented using the example of the bioanalytical method development and validation for low-volume samples (50 μL serum). Enalapril, enalaprilat, and benazepril served as sample drugs. The applied sample preparation and extraction successfully reduced the absolute and relative matrix effect to comply with international guidelines. Recoveries ranged from 77 to 104% for enalapril and from 93 to 118% for enalaprilat. The bioanalytical method comprising sample extraction by solid-phase extraction was fully validated according to FDA and EMA bioanalytical guidelines and was used in a Phase I study in 24 volunteers.

  5. Effective absorption correction for energy dispersive X-ray mapping in a scanning transmission electron microscope: analysing the local indium distribution in rough samples of InGaN alloy layers.

    Science.gov (United States)

    Wang, X; Chauvat, M-P; Ruterana, P; Walther, T

    2017-12-01

    We have applied our previous method of self-consistent k*-factors for absorption correction in energy-dispersive X-ray spectroscopy to quantify the indium content in X-ray maps of thick compound InGaN layers. The method allows us to quantify the indium concentration without measuring the sample thickness, density or beam current, and works even if there is a drastic local thickness change due to sample roughness or preferential thinning. The method is shown to select, point-by-point in a two-dimensional spectrum image or map, the k*-factor from the local Ga K/L intensity ratio that is most appropriate for the corresponding sample geometry, demonstrating it is not the sample thickness measured along the electron beam direction but the optical path length the X-rays have to travel through the sample that is relevant for the absorption correction. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.

  6. Correction of effects due to reactions on complex nuclei in a sample of hydrogen-like antiproton annihilations from a heavy liquid bubble chamber experiment

    International Nuclear Information System (INIS)

    Fett, E.; Haatuft, A.; Olsen, J.M.

    1977-01-01

    A method is presented which has been used to determine the pion multiplicity distributions for antiproton annihilations on free protons from a sample of events obtained in a heavy liquid bubble chamber experiment. The method uses data obtained in the experiment in question together with the usual invariance principles satisfied by strong interactions. Furthermore, no particular nuclear model is assumed.

  7. Publisher Correction

    DEFF Research Database (Denmark)

    Stokholm, Jakob; Blaser, Martin J.; Thorsen, Jonathan

    2018-01-01

    The originally published version of this Article contained an incorrect version of Figure 3 that was introduced following peer review and inadvertently not corrected during the production process. Both versions contain the same set of abundance data, but the incorrect version has the children...

  8. Publisher Correction

    DEFF Research Database (Denmark)

    Flachsbart, Friederike; Dose, Janina; Gentschew, Liljana

    2018-01-01

    The original version of this Article contained an error in the spelling of the author Robert Häsler, which was incorrectly given as Robert Häesler. This has now been corrected in both the PDF and HTML versions of the Article....

  9. Correction to

    DEFF Research Database (Denmark)

    Roehle, Robert; Wieske, Viktoria; Schuetz, Georg M

    2018-01-01

    The original version of this article, published on 19 March 2018, unfortunately contained a mistake. The following correction has therefore been made in the original: The names of the authors Philipp A. Kaufmann, Ronny Ralf Buechel and Bernhard A. Herzog were presented incorrectly....

  10. Analysis of Reflectance and Transmittance Measurements on Absorbing and Scattering Small Samples Using a Modified Integrating Sphere Setup

    DEFF Research Database (Denmark)

    Jernshøj, Kit Drescher; Hassing, Søren

    2009-01-01

    The purpose of the article is to analyse reflectance and transmittance measurements on small scattering and absorbing samples. Small samples, such as green leaves, pose a particular experimental challenge when the sample beam has a larger cross-section than the sample to be measured. The experimental errors that are introduced...

  11. Multi-actinide analysis with AMS for ultra-trace determination and small sample sizes: advantages and drawbacks

    Energy Technology Data Exchange (ETDEWEB)

    Quinto, Francesca; Lagos, Markus; Plaschke, Markus; Schaefer, Thorsten; Geckeis, Horst [Institute for Nuclear Waste Disposal, Karlsruhe Institute of Technology (Germany); Steier, Peter; Golser, Robin [VERA Laboratory, Faculty of Physics, University of Vienna (Austria)

    2016-07-01

    With the abundance sensitivities of AMS for U-236, Np-237 and Pu-239 relative to U-238 at levels lower than 1E-15, a simultaneous determination of several actinides without previous chemical separation from each other is possible. The actinides are extracted from the matrix elements via an iron hydroxide co-precipitation and the nuclides are sequentially measured from the same sputter target. This simplified method allows for the use of non-isotopic tracers and consequently the determination of Np-237 and Am-243, for which isotopic tracers with the degree of purity required by ultra-trace mass-spectrometric analysis are not available. With detection limits of circa 1E+4 atoms in a sample, 1E+8 atoms are determined with circa 1 % relative uncertainty due to counting statistics. This allows for an unprecedented reduction of the sample size down to 100 ml of natural water. However, the use of non-isotopic tracers introduces a dominating uncertainty of up to 30 % related to the reproducibility of the results. The advantages and drawbacks of the novel method will be presented with the aid of recent results from the CFM Project at the Grimsel Test Site and from the investigation of global fallout in environmental samples.

  12. SampleCNN: End-to-End Deep Convolutional Neural Networks Using Very Small Filters for Music Classification

    Directory of Open Access Journals (Sweden)

    Jongpil Lee

    2018-01-01

    Full Text Available Convolutional Neural Networks (CNN have been applied to diverse machine learning tasks for different modalities of raw data in an end-to-end fashion. In the audio domain, a raw waveform-based approach has been explored to directly learn hierarchical characteristics of audio. However, the majority of previous studies have limited their model capacity by taking a frame-level structure similar to short-time Fourier transforms. We previously proposed a CNN architecture which learns representations using sample-level filters beyond typical frame-level input representations. The architecture showed comparable performance to the spectrogram-based CNN model in music auto-tagging. In this paper, we extend the previous work in three ways. First, considering the sample-level model requires much longer training time, we progressively downsample the input signals and examine how it affects the performance. Second, we extend the model using multi-level and multi-scale feature aggregation technique and subsequently conduct transfer learning for several music classification tasks. Finally, we visualize filters learned by the sample-level CNN in each layer to identify hierarchically learned features and show that they are sensitive to log-scaled frequency.
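
    A minimal PyTorch sketch of the sample-level idea: a strided front-end convolution on the raw waveform followed by stacked small-filter blocks. The layer sizes are illustrative, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class SampleBlock(nn.Module):
    """One sample-level building block: a small 1-D filter (size 3) with
    batch norm, ReLU and max-pooling, in the style of SampleCNN models."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm1d(out_ch),
            nn.ReLU(),
            nn.MaxPool1d(3),
        )
    def forward(self, x):
        return self.net(x)

# Raw waveform input: batch of 2 clips, 1 channel, 59049 (= 3^10) samples.
x = torch.randn(2, 1, 59049)
model = nn.Sequential(
    nn.Conv1d(1, 128, kernel_size=3, stride=3),   # strided sample-level front end
    SampleBlock(128, 128),
    SampleBlock(128, 256),
)
print(model(x).shape)  # torch.Size([2, 256, 2187])
```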

  13. A Simple Method for Automated Solid Phase Extraction of Water Samples for Immunological Analysis of Small Pollutants.

    Science.gov (United States)

    Heub, Sarah; Tscharner, Noe; Kehl, Florian; Dittrich, Petra S; Follonier, Stéphane; Barbe, Laurent

    2016-01-01

    A new method for solid phase extraction (SPE) of environmental water samples is proposed. The developed prototype is cost-efficient and user-friendly, and enables rapid, automated and simple SPE. The pre-concentrated solution is compatible with analysis by immunoassay, with a low organic solvent content. A method is described for the extraction and pre-concentration of the natural hormone 17β-estradiol in 100 ml water samples. Reverse phase SPE is performed with octadecyl-silica sorbent and elution is done with 200 µl of methanol 50% v/v. The eluent is diluted by adding de-ionized water to lower the amount of methanol. After manually preparing the SPE column, the overall procedure is performed automatically within 1 hr. At the end of the process, the estradiol concentration is measured by using a commercial enzyme-linked immunosorbent assay (ELISA). 100-fold pre-concentration is achieved and the methanol content is only 10% v/v. Full recoveries of the molecule are achieved with 1 ng/L spiked de-ionized and synthetic sea water samples.

  14. A combined approach of generalized additive model and bootstrap with small sample sets for fault diagnosis in fermentation process of glutamate.

    Science.gov (United States)

    Liu, Chunbo; Pan, Feng; Li, Yun

    2016-07-29

    Glutamate is of great importance in the food and pharmaceutical industries. There is still a lack of effective statistical approaches for fault diagnosis in the fermentation process of glutamate. To date, the statistical approach based on the generalized additive model (GAM) and bootstrap has not been used for fault diagnosis in fermentation processes, much less the fermentation process of glutamate with small sample sets. A combined approach of GAM and bootstrap was developed for online fault diagnosis in the fermentation process of glutamate with small sample sets. GAM was first used to model the relationship between glutamate production and different fermentation parameters using online data from four normal fermentation experiments of glutamate. The fitted GAM with fermentation time, dissolved oxygen, oxygen uptake rate and carbon dioxide evolution rate captured 99.6 % of the variance of glutamate production during the fermentation process. Bootstrap was then used to quantify the uncertainty of the estimated production of glutamate from the fitted GAM using a 95 % confidence interval. The proposed approach was then used for online fault diagnosis in abnormal fermentation processes of glutamate, where a fault was flagged when the estimated production of glutamate fell outside the 95 % confidence interval. The online fault diagnosis based on the proposed approach identified not only the start of a fault in the fermentation process, but also its end, when the fermentation conditions were back to normal. The proposed approach used only a small sample set from normal fermentation experiments to establish the model, and then required only online recorded data on fermentation parameters for fault diagnosis in the fermentation process of glutamate. The proposed approach based on GAM and bootstrap provides a new and effective way for fault diagnosis in the fermentation process of glutamate with small sample sets.
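
    A schematic of the fault rule in Python/NumPy, where a cubic polynomial stands in for the fitted GAM and all data and parameter values are invented: fit on normal runs, bootstrap a 95% confidence band, and flag time points whose observed production falls outside it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in smoother for the GAM: a cubic polynomial relating one fermentation
# parameter (here, fermentation time in hours) to glutamate production.
def fit_smoother(x, y, deg=3):
    return np.polynomial.Polynomial.fit(x, y, deg)

# Simulated "normal" training runs (in practice: online data from four
# normal fermentation experiments)
x_train = rng.uniform(0, 30, 200)
y_train = 2.0 * np.sqrt(x_train) + rng.normal(0, 0.3, x_train.size)

# Bootstrap: refit on resampled (x, y) pairs and collect predictions
x_grid = np.linspace(0, 30, 61)
boot_preds = np.empty((500, x_grid.size))
for b in range(500):
    idx = rng.integers(0, x_train.size, x_train.size)
    boot_preds[b] = fit_smoother(x_train[idx], y_train[idx])(x_grid)

lo, hi = np.percentile(boot_preds, [2.5, 97.5], axis=0)

# Online fault rule: flag a fault while the observed production falls
# outside the bootstrap 95% confidence band
def is_fault(t, observed):
    i = np.argmin(np.abs(x_grid - t))
    return not (lo[i] <= observed <= hi[i])

print(is_fault(10.0, 6.3), is_fault(10.0, 3.0))   # normal, fault
```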

  15. Bryant J. correction formula

    International Nuclear Information System (INIS)

    Tejera R, A.; Cortes P, A.; Becerril V, A.

    1990-03-01

    For the practical application of the method proposed by J. Bryant, the authors carried out a series of small corrections related to the background, the dead time of the detectors and channels, the resolving time of the coincidences, the accidental coincidences, the decay scheme, and the gamma efficiency of the beta detector and the beta efficiency of the gamma detector. The derivation of the correction formula is presented in the body of this report, covering 25 combinations of the probability of the first state of one disintegration and the second state of the following disintegration occurring at the same time. (Author)

  16. Interstellar Gas-phase Element Depletions in the Small Magellanic Cloud: A Guide to Correcting for Dust in QSO Absorption Line Systems

    Energy Technology Data Exchange (ETDEWEB)

    Jenkins, Edward B. [Princeton University Observatory, Princeton, NJ 08544-1001 (United States); Wallerstein, George, E-mail: ebj@astro.princeton.edu, E-mail: walleg@u.washington.edu [University of Washington, Seattle, Dept. of Astronomy, Seattle, WA 98195-1580 (United States)

    2017-04-01

    We present data on the gas-phase abundances of 9 different elements in the interstellar medium of the Small Magellanic Cloud (SMC), based on the strengths of ultraviolet absorption features over relevant velocities in the spectra of 18 stars within the SMC. From this information and the total abundances defined by the element fractions in young stars in the SMC, we construct a general interpretation of how these elements condense into solid form onto dust grains. As a group, the elements Si, S, Cr, Fe, Ni, and Zn exhibit depletion sequences similar to those in the local part of our Galaxy defined by Jenkins. The elements Mg and Ti deplete less rapidly in the SMC than in the Milky Way, and Mn depletes more rapidly. We speculate that these differences might be explained by differing chemical affinities to the grain substrates present. For instance, there is evidence that the mass fractions of polycyclic aromatic hydrocarbons in the SMC are significantly lower than those in the Milky Way. We propose that the depletion sequences that we observed for the SMC may provide a better model for interpreting the element abundances in low-metallicity Damped Lyman Alpha (DLA) and sub-DLA absorption systems that are recorded in the spectra of distant quasars and gamma-ray burst afterglows.

  17. Detection of Small Numbers of Campylobacter jejuni and Campylobacter coli Cells in Environmental Water, Sewage, and Food Samples by a Seminested PCR Assay

    Science.gov (United States)

    Waage, Astrid S.; Vardund, Traute; Lund, Vidar; Kapperud, Georg

    1999-01-01

    A rapid and sensitive assay was developed for detection of small numbers of Campylobacter jejuni and Campylobacter coli cells in environmental water, sewage, and food samples. Water and sewage samples were filtered, and the filters were enriched overnight in a nonselective medium. The enrichment cultures were prepared for PCR by a rapid and simple procedure consisting of centrifugation, proteinase K treatment, and boiling. A seminested PCR based on specific amplification of the intergenic sequence between the two Campylobacter flagellin genes, flaA and flaB, was performed, and the PCR products were visualized by agarose gel electrophoresis. The assay allowed us to detect 3 to 15 CFU of C. jejuni per 100 ml in water samples containing a background flora consisting of up to 8,700 heterotrophic organisms per ml and 10,000 CFU of coliform bacteria per 100 ml. Dilution of the enriched cultures 1:10 with sterile broth prior to the PCR was sometimes necessary to obtain positive results. The assay was also conducted with food samples analyzed with or without overnight enrichment. As few as ≤3 CFU per g of food could be detected with samples subjected to overnight enrichment, while variable results were obtained for samples analyzed without prior enrichment. This rapid and sensitive nested PCR assay provides a useful tool for specific detection of C. jejuni or C. coli in drinking water, as well as environmental water, sewage, and food samples containing high levels of background organisms. PMID:10103261

  18. Sample types applied for molecular diagnosis of therapeutic management of advanced non-small cell lung cancer in the precision medicine.

    Science.gov (United States)

    Han, Yanxi; Li, Jinming

    2017-10-26

    In this era of precision medicine, molecular biology is becoming increasingly significant for the diagnosis and therapeutic management of non-small cell lung cancer. The specimen, as the primary element of the whole testing flow, is particularly important for maintaining the accuracy of gene alteration testing. Presently, the main sample types applied in routine diagnosis are tissue and cytology biopsies. Liquid biopsies are considered the most promising alternatives when tissue and cytology samples are not available. Each sample type possesses its own strengths and weaknesses, pertaining to the disparity of sampling, preparation and preservation procedures, inter- or intratumor heterogeneity, the tumor cellularity (percentage and number of tumor cells) of specimens, etc., and none of them can individually be a "one size fits all" solution. Therefore, in this review, we summarize the strengths and weaknesses of different sample types that are widely used in clinical practice, offer solutions to reduce the negative impact of the samples and propose an optimized strategy for the choice of samples during the entire diagnostic course. We hope to provide valuable information to laboratories for choosing optimal clinical specimens to achieve comprehensive functional genomic landscapes and formulate individually tailored treatment plans for NSCLC patients in advanced stages.

  19. A comparison of turtle sampling methods in a small lake in Standing Stone State Park, Overton County, Tennessee

    Science.gov (United States)

    Weber, A.; Layzer, James B.

    2011-01-01

    We used basking traps and hoop nets to sample turtles in Standing Stone Lake at 2-week intervals from May to November 2006. In alternate weeks, we conducted visual basking surveys. We collected and observed four species of turtles: spiny softshell (Apalone spinifera), northern map turtle (Graptemys geographica), pond slider (Trachemys scripta), and snapping turtle (Chelydra serpentina). Relative abundances varied greatly among sampling methods. To varying degrees, all methods were species selective. Population estimates from mark and recapture of three species, basking counts, and hoop net catches indicated that pond sliders were the most abundant species, but northern map turtles were 8× more abundant than pond sliders in basking trap catches. We saw relatively few snapping turtles basking even though population estimates indicated they were the second most abundant species. Populations of all species were dominated by adult individuals. Sex ratios of three species differed significantly from 1:1. Visual surveys were the most efficient method for determining the presence of species, but capture methods were necessary to obtain size and sex data.

  20. MDMA-assisted psychotherapy using low doses in a small sample of women with chronic posttraumatic stress disorder.

    Science.gov (United States)

    Bouso, José Carlos; Doblin, Rick; Farré, Magí; Alcázar, Miguel Angel; Gómez-Jarabo, Gregorio

    2008-09-01

    The purpose of this study was to investigate the safety of different doses of MDMA-assisted psychotherapy administered in a psychotherapeutic setting to women with chronic PTSD secondary to a sexual assault, and also to obtain preliminary data regarding efficacy. Although this study was originally planned to include 29 subjects, political pressures led to the closing of the study before it could be finished, at which time only six subjects had been treated. Preliminary results from those six subjects are presented here. We found that low doses of MDMA (between 50 and 75 mg) were both psychologically and physiologically safe for all the subjects. Future studies in larger samples and using larger doses are needed in order to further clarify the safety and efficacy of MDMA in the clinical setting in subjects with PTSD.

  1. A rheo-optical apparatus for real time kinetic studies on shear-induced alignment of self-assembled soft matter with small sample volumes

    Science.gov (United States)

    Laiho, Ari; Ikkala, Olli

    2007-01-01

    In soft materials, self-assembled nanoscale structures can allow new functionalities, but a general problem is to align such local structures aiming at monodomain overall order. In order to achieve shear alignment in a controlled manner, a novel type of rheo-optical apparatus has been developed that allows small sample volumes and in situ monitoring of the alignment process during shear. Both the amplitude and orientation angles of low-level linear birefringence and dichroism are measured while the sample is subjected to large amplitude oscillatory shear flow. The apparatus is based on a commercial rheometer on which we have constructed a flow cell that consists of two quartz teeth. The lower tooth can be set in oscillatory motion whereas the upper one is connected to the force transducers of the rheometer. A custom-made cylindrical oven allows operation of the flow cell at elevated temperatures up to 200 °C. Only a small sample volume is needed (from 9 to 25 mm3), which makes the apparatus especially suitable for studying new materials that are usually obtainable only in small quantities. Using this apparatus, the flow alignment kinetics of a lamellar polystyrene-b-polyisoprene diblock copolymer is studied during shear under two different conditions which lead to parallel and perpendicular alignment of the lamellae. The open device geometry even allows combined optical/x-ray in situ characterization of the alignment process with small-angle x-ray scattering, using concepts shown by Polushkin et al. [Macromolecules 36, 1421 (2003)].

  2. Corrective Jaw Surgery

    Medline Plus

    Orthognathic surgery is performed to correct the misalignment of jaws ...

  3. Triacylglycerol Analysis in Human Milk and Other Mammalian Species: Small-Scale Sample Preparation, Characterization, and Statistical Classification Using HPLC-ELSD Profiles.

    Science.gov (United States)

    Ten-Doménech, Isabel; Beltrán-Iturat, Eduardo; Herrero-Martínez, José Manuel; Sancho-Llopis, Juan Vicente; Simó-Alfonso, Ernesto Francisco

    2015-06-24

    In this work, a method for the separation of triacylglycerols (TAGs) present in human milk and from other mammalian species by reversed-phase high-performance liquid chromatography using a core-shell particle packed column with UV and evaporative light-scattering detectors is described. Under optimal conditions, a mobile phase containing acetonitrile/n-pentanol at 10 °C gave an excellent resolution among more than 50 TAG peaks. A small-scale method for fat extraction in these milks (particularly of interest for human milk samples) using minimal amounts of sample and reagents was also developed. The proposed extraction protocol and the traditional method were compared, giving similar results, with respect to the total fat and relative TAG contents. Finally, a statistical study based on linear discriminant analysis on the TAG composition of different types of milks (human, cow, sheep, and goat) was carried out to differentiate the samples according to their mammalian origin.

  4. Small-angle X-ray scattering tensor tomography: model of the three-dimensional reciprocal-space map, reconstruction algorithm and angular sampling requirements.

    Science.gov (United States)

    Liebi, Marianne; Georgiadis, Marios; Kohlbrecher, Joachim; Holler, Mirko; Raabe, Jörg; Usov, Ivan; Menzel, Andreas; Schneider, Philipp; Bunk, Oliver; Guizar-Sicairos, Manuel

    2018-01-01

    Small-angle X-ray scattering tensor tomography, which allows reconstruction of the local three-dimensional reciprocal-space map within a three-dimensional sample as introduced by Liebi et al. [Nature (2015), 527, 349-352], is described in more detail with regard to the mathematical framework and the optimization algorithm. For the case of trabecular bone samples from vertebrae it is shown that the model of the three-dimensional reciprocal-space map using spherical harmonics can adequately describe the measured data. The method enables the determination of nanostructure orientation and degree of orientation as demonstrated previously in a single momentum transfer q range. This article presents a reconstruction of the complete reciprocal-space map for the case of bone over extended ranges of q. In addition, it is shown that uniform angular sampling and advanced regularization strategies help to reduce the amount of data required.
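
    As a toy illustration of the spherical-harmonics model of the reciprocal-space map (not the authors' reconstruction code; the band limit and coefficients below are invented), the angular intensity at a fixed q can be expanded in a few low-order harmonics using SciPy:

```python
import numpy as np
from scipy.special import sph_harm

def rsm_intensity(polar, azim, coeffs):
    """Angular part of a reciprocal-space map at fixed q, modeled as a
    band-limited expansion in (real parts of) spherical harmonics.
    coeffs: dict {(l, m): a_lm} with invented values."""
    # scipy's sph_harm takes (order m, degree l, azimuthal angle, polar angle)
    vals = sum(a * sph_harm(m, l, azim, polar).real
               for (l, m), a in coeffs.items())
    return np.maximum(vals, 0.0)  # measured intensities are non-negative

coeffs = {(0, 0): 1.0, (2, 0): 0.4}   # isotropic term + uniaxial anisotropy
polar = np.linspace(0.0, np.pi, 5)    # polar angle samples
print(rsm_intensity(polar, 0.0, coeffs))
```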

  5. QNB: differential RNA methylation analysis for count-based small-sample sequencing data with a quad-negative binomial model.

    Science.gov (United States)

    Liu, Lian; Zhang, Shao-Wu; Huang, Yufei; Meng, Jia

    2017-08-31

    As a newly emerged research area, RNA epigenetics has drawn increasing attention recently owing to the participation of RNA methylation and other modifications in a number of crucial biological processes. Thanks to high-throughput sequencing techniques such as MeRIP-Seq, transcriptome-wide RNA methylation profiles are now available in the form of count-based data, with which it is often of interest to study the dynamics at the epitranscriptomic layer. However, the sample size of RNA methylation experiments is usually very small due to their cost; additionally, there usually exist a large number of genes whose methylation level cannot be accurately estimated due to their low expression level, making differential RNA methylation analysis a difficult task. We present QNB, a statistical approach for differential RNA methylation analysis with count-based small-sample sequencing data. Compared with previous approaches such as the DRME model, which is based on a statistical test covering only the IP samples with two negative binomial distributions, QNB is based on four independent negative binomial distributions with their variances and means linked by local regressions, and in this way the input control samples are also properly taken care of. In addition, different from the DRME approach, which relies on the input control sample only for estimating the background, QNB uses a more robust estimator for gene expression by combining information from both input and IP samples, which could largely improve the testing performance for very lowly expressed genes. QNB showed improved performance on both simulated and real MeRIP-Seq datasets when compared with competing algorithms. The QNB model is also applicable to other datasets related to RNA modifications, including but not limited to RNA bisulfite sequencing, m1A-Seq, Par-CLIP, RIP-Seq, etc.
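
    QNB itself links four negative binomial distributions through local regressions, which is beyond a short sketch; the snippet below only illustrates the underlying idea of a count-based negative binomial comparison, using a method-of-moments fit and a Wald-type test on invented IP counts (a simplified stand-in, not the QNB test):

```python
import numpy as np
from scipy import stats

def nb_moments(counts):
    """Method-of-moments negative binomial fit: returns (mean, size r),
    using the NB variance relation v = m + m^2 / r."""
    m, v = counts.mean(), counts.var(ddof=1)
    r = m**2 / max(v - m, 1e-8)
    return m, r

ip_a = np.array([120, 135, 110])   # invented IP counts, condition A
ip_b = np.array([60, 55, 72])      # invented IP counts, condition B

m_a, r_a = nb_moments(ip_a)
m_b, r_b = nb_moments(ip_b)

# Wald-type test on the log ratio of means, with NB variances (delta method)
se = np.sqrt((m_a + m_a**2 / r_a) / (ip_a.size * m_a**2) +
             (m_b + m_b**2 / r_b) / (ip_b.size * m_b**2))
z = np.log(m_a / m_b) / se
p = 2 * stats.norm.sf(abs(z))
print(f"log2FC = {np.log2(m_a / m_b):.2f}, p = {p:.3g}")
```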

  6. Ochratoxin A in raisins and currants: basic extraction procedure used in two small marketing surveys of the occurrence and control of the heterogeneity of the toxins in samples.

    Science.gov (United States)

    Möller, T E; Nyberg, M

    2003-11-01

    A basic extraction procedure for the analysis of ochratoxin A (OTA) in currants and raisins is described, as well as the occurrence of OTA and a control of the heterogeneity of the toxin in samples bought for two small marketing surveys, 1999/2000 and 2001/02. Most samples in the surveys were divided into two subsamples that were individually prepared as slurries and analysed separately. The limit of quantification for the method was estimated as 0.1 microg kg(-1) and recoveries of 85, 90 and 115% were achieved in recovery experiments at 10, 5 and 0.1 microg kg(-1), respectively. Of all 118 subsamples analysed in the surveys, 96 (84%) contained ochratoxin A at levels above the quantification level and five samples (4%) contained more than the European Community legislative limit of 10 microg kg(-1). The OTA concentrations found in the first survey were in the range … Big differences were often observed between individual subsamples of the original sample, which indicates a widely heterogeneous distribution of the toxin. Data from the repeatability test as well as recovery experiments from the same slurries showed that preparation of slurries as described here seemed to give a homogeneous and representative sample. Extraction with the basic sodium bicarbonate-methanol mixture used in the surveys gave similar or somewhat higher OTA values on some samples tested in a comparison with a weak phosphoric acid water-methanol extraction mixture.

  7. Oxygen consumption during mineralization of organic compounds in water samples from a small sub-tropical reservoir (Brazil)

    Directory of Open Access Journals (Sweden)

    Cunha-Santino Marcela Bianchessi da

    2003-01-01

    Full Text Available Assays were carried out to evaluate the oxygen consumption resulting from mineralization of different organic compounds: glucose, sucrose, starch, tannic acid, lysine and glycine. The compounds were added to 1 l of water sample from Monjolinho Reservoir. Dissolved oxygen and dissolved organic carbon were monitored during 20 days and the results were fitted to a first-order kinetics model. During the 20 days of experiments, the oxygen consumption varied from 4.5 mg.l-1 (tannic acid) to 71.5 mg.l-1 (glucose). The highest deoxygenation rate (kD) was observed for mineralization of tannic acid (0.321 day-1), followed by glycine, starch, lysine, sucrose and glucose (0.1004, 0.0504, 0.0486, 0.0251 and 0.0158 day-1, respectively). From theoretical calculations and oxygen and carbon concentrations we obtained the stoichiometry of the mineralization processes. Stoichiometric values varied from 0.17 (tannic acid) to 2.55 (sucrose).
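
    The first-order fit itself is a standard nonlinear least-squares problem; the sketch below (Python/SciPy) generates synthetic data using the reported tannic acid rate constant (0.321 day-1) and an assumed plateau of 4.5 mg/L, then recovers both parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

# First-order kinetics: accumulated oxygen consumption approaches a plateau,
# OC(t) = OC_max * (1 - exp(-kD * t))
def first_order(t, oc_max, kd):
    return oc_max * (1.0 - np.exp(-kd * t))

t_days = np.linspace(0, 20, 21)
rng = np.random.default_rng(1)
# Synthetic tannic-acid-like data: kD = 0.321 day^-1 (reported),
# OC_max = 4.5 mg/L (assumed plateau), plus measurement noise
oc_obs = first_order(t_days, 4.5, 0.321) + rng.normal(0, 0.1, t_days.size)

(oc_max, kd), _ = curve_fit(first_order, t_days, oc_obs, p0=(1.0, 0.1))
print(f"OC_max = {oc_max:.2f} mg/L, kD = {kd:.3f} day^-1")
```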

  8. Small polaron hopping conduction in samples of ceramic La1.4Sr1.6Mn2O7.06

    International Nuclear Information System (INIS)

    Nakatsugawa, H.; Iguchi, E.; Jung, W.H.; Munakata, F.

    1999-01-01

    The ceramic sample of La1.4Sr1.6Mn2O7.06 exhibits the metal-insulator transition and a negative magnetoresistance in the vicinity of the Curie temperature (T_C ∼ 100 K). The dc magnetic susceptibility between 100 K and 280 K is nearly constant and decreases gradually with increasing temperature above 280 K. The measurements of dc resistivity and the thermoelectric power indicate that small polaron hopping conduction takes place at T > 280 K. The spin ordering due to the two-dimensional d(x²-y²) state occurring at T > 280 K is directly related to the hopping conduction above 280 K, although the spin ordering due to the one-dimensional d(3z²-r²) state takes place at T > T_C. The two-dimensional d(x²-y²) state extending within the MnO2 sheets starts to narrow and leads to carrier localisation at 280 K. The effective number of holes in this sample estimated from the thermoelectric power is considerably smaller than the nominal value. This indicates that the small polaron hopping conduction takes place predominantly within the in-plane MnO2 sheets. A discussion is given of the experimental results for the ceramic sample of La2/3Ca1/3MnO2.98. Copyright (1999) CSIRO Australia

  9. Electroweak corrections

    International Nuclear Information System (INIS)

    Beenakker, W.J.P.

    1989-01-01

    The prospect of high-accuracy measurements of the weak interactions, expected from the electron-positron storage ring LEP at CERN and the linear collider SLC at SLAC, offers the possibility of also studying weak quantum effects. In order to distinguish whether the measured weak quantum effects lie within the margins set by the standard model or bear traces of new physics, one has to go beyond lowest order and include electroweak radiative corrections (EWRC) in the theoretical calculations. These higher-order corrections can also offer the possibility of obtaining information about two particles present in the Glashow-Salam-Weinberg (GSW) model but not discovered up till now: the top quark and the Higgs boson. In ch. 2 the GSW standard model of electroweak interactions is described. In ch. 3 special techniques are described for the determination of integrals which suffer from numerical instabilities caused by large cancelling terms encountered in the calculation of EWRC effects, together with methods necessary to handle the extensive algebra typical of EWRC. In ch. 4 various aspects of EWRC effects are discussed, in particular their dependence on the unknown model parameters: the masses of the top quark and the Higgs boson. The processes discussed are the production of heavy fermions in electron-positron annihilation and the fermionic decay of the Z gauge boson. (H.W.). 106 refs.; 30 figs.; 6 tabs.; schemes

  10. High-throughput analysis using non-depletive SPME: challenges and applications to the determination of free and total concentrations in small sample volumes.

    Science.gov (United States)

    Boyacı, Ezel; Bojko, Barbara; Reyes-Garcés, Nathaly; Poole, Justen J; Gómez-Ríos, Germán Augusto; Teixeira, Alexandre; Nicol, Beate; Pawliszyn, Janusz

    2018-01-18

    In vitro high-throughput non-depletive quantitation of chemicals in biofluids is of growing interest in many areas. Some of the challenges facing researchers include the limited volume of biofluids, rapid and high-throughput sampling requirements, and the lack of reliable methods. Coupled to the above, growing interest in the monitoring of kinetics and dynamics of miniaturized biosystems has spurred the demand for development of novel and revolutionary methodologies for analysis of biofluids. The applicability of solid-phase microextraction (SPME) is investigated as a potential technology to fulfill the aforementioned requirements. As analytes with sufficient diversity in their physicochemical features, nicotine, N,N-Diethyl-meta-toluamide, and diclofenac were selected as test compounds for the study. The objective was to develop methodologies that would allow repeated non-depletive sampling from 96-well plates, using 100 µL of sample. Initially, thin film-SPME was investigated. Results revealed substantial depletion and consequent disruption in the system. Therefore, new ultra-thin coated fibers were developed. The applicability of this device to the described sampling scenario was tested by determining the protein binding of the analytes. Results showed good agreement with rapid equilibrium dialysis. The presented method allows high-throughput analysis using small volumes, enabling fast reliable free and total concentration determinations without disruption of system equilibrium.

  11. Robust Active Label Correction

    DEFF Research Database (Denmark)

    Kremer, Jan; Sha, Fei; Igel, Christian

    2018-01-01

    Active label correction addresses the problem of learning from input data for which noisy labels are available (e.g., from imprecise measurements or crowd-sourcing) and each true label can be obtained at a significant cost (e.g., through additional measurements or human experts). To select labels for correction, we adopt the active learning strategy of maximizing the expected model change. We consider the change in regularized empirical risk functionals that use different pointwise loss functions for patterns with noisy and true labels, respectively. Different loss functions for the noisy data lead to different active label correction algorithms. If loss functions consider the label noise rates, these rates are estimated during learning, where importance weighting compensates for the sampling bias. We show empirically that viewing the true label as a latent variable and computing…

  12. Information in small neuronal ensemble activity in the hippocampal CA1 during delayed non-matching to sample performance in rats

    Directory of Open Access Journals (Sweden)

    Takahashi Susumu

    2009-09-01

    Full Text Available Abstract Background The matrix-like organization of the hippocampus, with its several inputs and outputs, has given rise to several theories related to hippocampal information processing. Single-cell electrophysiological studies and studies of lesions or genetically altered animals using recognition memory tasks such as delayed non-matching-to-sample (DNMS) tasks support the theories. However, a complete understanding of hippocampal function necessitates knowledge of the encoding of information by multiple neurons in a single trial. The role of neuronal ensembles in the hippocampal CA1 for a DNMS task was assessed quantitatively in this study using multi-neuronal recordings and an artificial neural network classifier as a decoder. Results The activity of small neuronal ensembles (6-18 cells) over brief time intervals (2-50 ms) contains accurate information specifically related to the matching/non-matching of continuously presented stimuli (stimulus comparison). The accuracy of the combination of neurons pooled over all the ensembles was markedly lower than that of the ensembles over all examined time intervals. Conclusion The results show that the spatiotemporal patterns of spiking activity among cells in the small neuronal ensemble contain much information that is specifically useful for the stimulus comparison. Small neuronal networks in the hippocampal CA1 might therefore act as a comparator during recognition memory tasks.
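
    The decoding step can be sketched generically in Python with scikit-learn; the ensemble size, firing rates and trial counts below are simulated, and an MLP stands in for the paper's artificial neural network classifier:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated spike counts for a small ensemble (12 cells) in a short window,
# two stimulus conditions (match / non-match); all rates are invented.
n_trials, n_cells = 200, 12
labels = rng.integers(0, 2, n_trials)
rates = np.where(labels[:, None] == 1, 5.0, 3.0) \
        + rng.normal(0, 1, (n_trials, n_cells))
counts = rng.poisson(np.clip(rates, 0.1, None))

# Decode match/non-match from single-trial ensemble activity
decoder = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
print(cross_val_score(decoder, counts, labels, cv=5).mean())
```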

  13. An Improved Metabolism Grey Model for Predicting Small Samples with a Singular Datum and Its Application to Sulfur Dioxide Emissions in China

    Directory of Open Access Journals (Sweden)

    Wei Zhou

    2016-01-01

    Full Text Available This study proposes an improved metabolism grey model [IMGM(1,1)] to predict small samples with a singular datum, which is a common phenomenon in daily economic data. This new model combines the fitting advantage of the conventional GM(1,1) in small samples and the additional advantages of the MGM(1,1) in new real-time data, while overcoming the limitations of both the conventional GM(1,1) and MGM(1,1), whose predicted results are vulnerable to any singular datum. Thus, this model can be classified as an improved grey prediction model. Its improvements are illustrated through a case study of sulfur dioxide emissions in China from 2007 to 2013 with a singular datum in 2011. Some features of this model are presented based on the error analysis in the case study. Results suggest that if action is not taken immediately, sulfur dioxide emissions in 2016 will surpass the standard level required by the Twelfth Five-Year Plan proposed by the China State Council.
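
    For reference, the conventional GM(1,1) on which the improved model builds fits an exponential trend to the accumulated series; a minimal sketch (Python/NumPy, with an invented series rather than the paper's SO2 data):

```python
import numpy as np

def gm11_forecast(x, steps=3):
    """Conventional GM(1,1) grey model: fit on a small sample x and
    forecast `steps` values ahead."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                                # accumulated series
    z = 0.5 * (x1[1:] + x1[:-1])                     # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]  # developing coeff., grey input
    k = np.arange(1, len(x) + steps)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a # accumulated prediction
    x_hat = np.diff(np.concatenate([[x[0]], x1_hat]))  # restore by differencing
    return x_hat[len(x) - 1:]                        # forecasts beyond the sample

# Hypothetical emissions-like series (illustrative, not the paper's data)
series = [25.5, 24.4, 23.2, 22.2, 21.2, 20.4, 20.3]
print(gm11_forecast(series, steps=3))
```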

  14. Comparison of Time-of-flight and Multicollector ICP Mass Spectrometers for Measuring Actinides in Small Samples using single shot Laser Ablation

    International Nuclear Information System (INIS)

    R.S. Houk; D.B. Aeschliman; S.J. Bajic; D. Baldwin

    2005-01-01

    The objective of these experiments is to evaluate the performance of two types of ICP-MS device for measurement of actinide isotopes by laser ablation (LA) ICP-MS. The key advantage of ICP-MS compared to monitoring of radioactive decay is that the element need not decay during the measurement time. Hence ICP-MS is much faster for long-lived radionuclides. The LA process yields a transient signal. When spatially resolved analysis is required for small samples, the laser ablation sample pulse lasts only ∼10 seconds. It is difficult to measure signals at several isotopes with analyzers that are scanned for such a short sample transient. In this work, a time-of-flight (TOF) ICP-MS device, the GBC Optimass 8000 (Figure 1) is one instrument used. Strictly speaking, ions at different m/z values are not measured simultaneously in TOF. However, they are measured in very rapid sequence with little or no compromise between the number of m/z values monitored and the performance. Ions can be measured throughout the m/z range in single sample transients by TOF. The other ICP-MS instrument used is a magnetic sector multicollector MS, the NU Plasma 1700 (Figure 2). Up to 8 adjacent m/z values can be monitored at one setting of the magnetic field and accelerating voltage. Three of these m/z values can be measured with an electron multiplier. This device is usually used for high precision isotope ratio measurements with the Faraday cup detectors. The electron multipliers have much higher sensitivity. In our experience with the scanning magnetic sector instrument in Ames, these devices have the highest sensitivity and lowest background of any ICP-MS device. The ability to monitor several ions simultaneously, or nearly so, should make these devices valuable for the intended application: measurement of actinide isotopes at low concentrations in very small samples for nonproliferation purposes. The primary sample analyzed was an urban dust pellet reference material, NIST 1648. The

  15. Big news in small samples

    NARCIS (Netherlands)

    P.C. Schotman (Peter); S. Straetmans; C.G. de Vries (Casper)

    1997-01-01

    textabstractUnivariate time series regressions of the forex return on the forward premium generate mostly negative slope coefficients. Simple and refined panel estimation techniques yield slope estimates that are much closer to unity. We explain the two apparently opposing results by allowing for

  16. Small Boat Bottomfish Sampling Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Fishing operations that focus on targeting bottomfish (mostly juvenile opakapaka) that are independent of a larger research vessel, i.e. the Oscar Elton Sette.

  17. Use of a 137Cs re-sampling technique to investigate temporal changes in soil erosion and sediment mobilisation for a small forested catchment in southern Italy

    International Nuclear Information System (INIS)

    Porto, Paolo; Walling, Des E.; Alewell, Christine; Callegari, Giovanni; Mabit, Lionel; Mallimo, Nicola; Meusburger, Katrin; Zehringer, Markus

    2014-01-01

    Soil erosion and both its on-site and off-site impacts are increasingly seen as a serious environmental problem across the world. The need for an improved evidence base on soil loss and soil redistribution rates has directed attention to the use of fallout radionuclides, and particularly 137Cs, for documenting soil redistribution rates. This approach possesses important advantages over more traditional means of documenting soil erosion and soil redistribution. However, one key limitation of the approach is the time-averaged or lumped nature of the estimated erosion rates. In nearly all cases, these will relate to the period extending from the main period of bomb fallout to the time of sampling. Increasing concern for the impact of global change, particularly that related to changing land use and climate change, has frequently directed attention to the need to document changes in soil redistribution rates within this period. Re-sampling techniques, which should be distinguished from repeat-sampling techniques, have the potential to meet this requirement. As an example, the use of a re-sampling technique to derive estimates of the mean annual net soil loss from a small (1.38 ha) forested catchment in southern Italy is reported. The catchment was originally sampled in 1998 and samples were collected from points very close to the original sampling points again in 2013. This made it possible to compare the estimate of mean annual erosion for the period 1954–1998 with that for the period 1999–2013. The availability of measurements of sediment yield from the catchment for parts of the overall period made it possible to compare the results provided by the 137Cs re-sampling study with the estimates of sediment yield for the same periods. In order to compare the estimates of soil loss and sediment yield for the two different periods, it was necessary to establish the uncertainty associated with the individual estimates. In the absence of a generally accepted procedure

  18. Measured attenuation correction methods

    International Nuclear Information System (INIS)

    Ostertag, H.; Kuebler, W.K.; Doll, J.; Lorenz, W.J.

    1989-01-01

    Accurate attenuation correction is a prerequisite for the determination of exact local radioactivity concentrations in positron emission tomography. Attenuation correction factors range from 4-5 in brain studies to 50-100 in whole body measurements. This report gives an overview of the different methods of determining the attenuation correction factors by transmission measurements using an external positron emitting source. The long-lived generator nuclide 68Ge/68Ga is commonly used for this purpose. The additional patient dose from the transmission source is usually a small fraction of the dose due to the subsequent emission measurement. Ring-shaped transmission sources as well as rotating point or line sources are employed in modern positron tomographs. By masking a rotating line or point source, random and scattered events in the transmission scans can be effectively suppressed. The problems of measured attenuation correction are discussed: transmission/emission mismatch, random and scattered event contamination, counting statistics, transmission/emission scatter compensation, transmission scan after administration of activity to the patient. By using a double masking technique simultaneous emission and transmission scans become feasible. (orig.)

  19. Miniaturizing 3D assay for high-throughput drug and genetic screens for small patient-derived tumor samples (Conference Presentation)

    Science.gov (United States)

    Rotem, Asaf; Garraway, Levi; Su, Mei-Ju; Basu, Anindita; Regev, Aviv; Struhl, Kevin

    2017-02-01

    Three-dimensional growth conditions reflect the natural environment of cancer cells and are crucial for drug screens. We developed a 3D assay for cellular transformation that involves growth in low attachment (GILA) conditions and is strongly correlated with the 50-year-old benchmark assay, soft agar. Using GILA, we performed high-throughput screens for drugs and genes that selectively inhibit or increase transformation, but not proliferation. This phenotypic approach is complementary to our genetic approach, which utilizes single-cell RNA-sequencing of a patient sample to identify putative oncogenes that confer sensitivity to drugs designed to specifically inhibit the identified oncoprotein. Currently, we are dealing with a big challenge in our field: the limited number of cells that can be extracted from a biopsy. Small patient-derived samples are hard to test in traditional multiwell plates, and it is helpful to minimize the culture area and the experimental system. We designed a microfluidic device suitable for limited numbers of cells and perform the assay using image analysis. We aim to test drugs on tumor cells outside of the patient's body and recommend the ideal treatment, tailored to the individual. This device will help to minimize biopsy-sampling volumes and minimize interventions in the patient's tumor.

  20. Context matters: volunteer bias, small sample size, and the value of comparison groups in the assessment of research-based undergraduate introductory biology lab courses.

    Science.gov (United States)

    Brownell, Sara E; Kloser, Matthew J; Fukami, Tadashi; Shavelson, Richard J

    2013-01-01

    The shift from cookbook to authentic research-based lab courses in undergraduate biology necessitates the evaluation and assessment of these novel courses. Although the biology education community has made progress in this area, it is important that we interpret the effectiveness of these courses with caution and remain mindful of inherent limitations to our study designs that may impact internal and external validity. The specific context of a research study can have a dramatic impact on the conclusions. We present a case study of our own three-year investigation of the impact of a research-based introductory lab course, highlighting how volunteer students, a lack of a comparison group, and small sample sizes can be limitations of a study design that can affect the interpretation of the effectiveness of a course.

  1. Context Matters: Volunteer Bias, Small Sample Size, and the Value of Comparison Groups in the Assessment of Research-Based Undergraduate Introductory Biology Lab Courses

    Directory of Open Access Journals (Sweden)

    Sara E. Brownell

    2013-08-01

    Full Text Available The shift from cookbook to authentic research-based lab courses in undergraduate biology necessitates the evaluation and assessment of these novel courses. Although the biology education community has made progress in this area, it is important that we interpret the effectiveness of these courses with caution and remain mindful of inherent limitations to our study designs that may impact internal and external validity. The specific context of a research study can have a dramatic impact on the conclusions. We present a case study of our own three-year investigation of the impact of a research-based introductory lab course, highlighting how volunteer students, a lack of a comparison group, and small sample sizes can be limitations of a study design that can affect the interpretation of the effectiveness of a course.

  2. A technique of evaluating most probable stochastic variables from a small number of samples and their accuracies and degrees of confidence

    Energy Technology Data Exchange (ETDEWEB)

    Katoh, K [Ibaraki Pref. Univ. Health Sci. (Japan)]

    1997-12-31

    A problem of estimating the stochastic characteristics of a population from a small number of samples is solved as an inverse problem, from the viewpoint of information theory and with Bayesian statistics. For both Poisson and Bernoulli processes, the most probable values of the characteristics of the mother population and their accuracies and degrees of confidence are successfully obtained. Mathematical expressions are given for the general case, where a limited amount of information and/or knowledge of the stochastic characteristics is available, and for a special case where no a priori information or knowledge is available. Mathematical properties of the solutions obtained and their practical application to radiation measurement are also discussed.
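
    For the Poisson-process case, the kind of result described (a most probable value together with a degree of confidence, from very few counts) follows from the conjugate Gamma posterior; a minimal sketch in Python/SciPy, with an illustrative Jeffreys-type prior and invented counts:

```python
from scipy import stats

# Gamma-Poisson (conjugate) estimate of a count rate from a small sample.
# With a Jeffreys-type prior Gamma(0.5, 0), observing n counts over time T
# gives the posterior Gamma(n + 0.5, rate = T).
n_counts, t_obs = 7, 10.0      # e.g. 7 counts in 10 s (illustrative numbers)
posterior = stats.gamma(a=n_counts + 0.5, scale=1.0 / t_obs)

mode = (n_counts + 0.5 - 1) / t_obs      # most probable rate (valid for a > 1)
lo, hi = posterior.interval(0.68)        # 68% credible interval
print(f"most probable rate = {mode:.2f} s^-1, 68% CI = [{lo:.2f}, {hi:.2f}]")
```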

  3. Decomposition and forecasting analysis of China's energy efficiency: An application of three-dimensional decomposition and small-sample hybrid models

    International Nuclear Information System (INIS)

    Meng, Ming; Shang, Wei; Zhao, Xiaoli; Niu, Dongxiao; Li, Wei

    2015-01-01

    The coordinated actions of the central and the provincial governments are important in improving China's energy efficiency. This paper uses a three-dimensional decomposition model to measure the contribution of each province in improving the country's energy efficiency and a small-sample hybrid model to forecast this contribution. Empirical analysis draws the following conclusions, which are useful for the central government to adjust its provincial energy-related policies. (a) There are two important areas for the Chinese government to improve its energy efficiency: adjusting the provincial economic structure and controlling the number of the small-scale private industrial enterprises; (b) Except for a few outliers, the energy efficiency growth rates of the northern provinces are higher than those of the southern provinces; provinces with high growth rates tend to converge geographically; (c) With regard to the energy sustainable development level, Beijing, Tianjin, Jiangxi, and Shaanxi are the best performers and Heilongjiang, Shanxi, Shanghai, and Guizhou are the worst performers; (d) By 2020, China's energy efficiency may reach 24.75 thousand yuan per ton of standard coal; and (e) three development scenarios are designed to forecast China's energy consumption in 2012–2020. - Highlights: • Decomposition and forecasting models are used to analyze China's energy efficiency. • China should focus on the small industrial enterprises and local protectionism. • The energy sustainable development level of each province is evaluated. • Geographic distribution characteristics of energy efficiency changes are revealed. • Future energy efficiency and energy consumption are forecasted

  4. A semi-nested real-time PCR method to detect low chimerism percentage in small quantity of hematopoietic stem cell transplant DNA samples.

    Science.gov (United States)

    Aloisio, Michelangelo; Bortot, Barbara; Gandin, Ilaria; Severini, Giovanni Maria; Athanasakis, Emmanouil

    2017-02-01

    Chimerism status evaluation of post-allogeneic hematopoietic stem cell transplantation samples is essential to predict post-transplant relapse. The most commonly used technique capable of detecting small increments of chimerism is quantitative real-time PCR. Although this method is already used in several laboratories, previously described protocols often lack sensitivity, and the amount of DNA required for each chimerism analysis is too high. In the present study, we compared a novel semi-nested allele-specific real-time PCR (sNAS-qPCR) protocol with our in-house standard allele-specific real-time PCR (gAS-qPCR) protocol. We selected two genetic markers and analyzed technical parameters (slope, y-intercept, R2, and standard deviation) useful for determining the performance of the two protocols. The sNAS-qPCR protocol showed better sensitivity and precision. Moreover, the sNAS-qPCR protocol requires, as input, only 10 ng of DNA, which is at least 10-fold less than the gAS-qPCR protocols described in the literature. Finally, the proposed sNAS-qPCR protocol could prove very useful for performing chimerism analysis with a small amount of DNA, as in the case of blood cell subsets.

  5. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number of such frames. In particular we argue that a three-error correcting BCH code is the best choice for the component code in such systems.

  6. Effects of growth rate, size, and light availability on tree survival across life stages: a demographic analysis accounting for missing values and small sample sizes.

    Science.gov (United States)

    Moustakas, Aristides; Evans, Matthew R

    2015-02-28

    Plant survival is a key factor in forest dynamics, and survival probabilities often vary across life stages. Studies specifically aimed at assessing tree survival are unusual, and so data initially designed for other purposes often need to be used; such data are more likely to contain errors than data collected for this specific purpose. We investigate the survival rates of ten tree species in a dataset designed to monitor growth rates. As some individuals were not included in the census at some time points, we use capture-mark-recapture methods both to account for missing individuals and to estimate relocation probabilities. Growth rates, size, and light availability were included as covariates in the model predicting survival rates. The study demonstrates that, for most species of UK hardwood, tree mortality is best described as constant between years, size-dependent at early life stages, and size-independent at later life stages. We have demonstrated that even with a twenty-year dataset it is possible to discern variability both between individuals and between species. Our work illustrates the potential utility of the method applied here for calculating plant population dynamics parameters in time-replicated datasets with small sample sizes and missing individuals, without any loss of sample size and while including explanatory covariates.

  7. A rapid procedure for the determination of thorium, uranium, cadmium and molybdenum in small sediment samples by inductively coupled plasma-mass spectrometry: application in Chesapeake Bay

    International Nuclear Information System (INIS)

    Zheng, Y.; Weinman, B.; Cronin, T.; Fleisher, M.Q.; Anderson, R.F.

    2003-01-01

    This paper describes a rapid procedure that allows precise analysis of Mo, Cd, U and Th in sediment samples as small as 10 mg by using a novel approach that utilizes a 'pseudo' isotope dilution for Th and conventional isotope dilution for Mo, Cd and U by ICP-MS. Long-term reproducibility of the method is between 2.5 and 5% with an advantage of rapid analysis on a single digestion of sediment sample and the potential of adding other elements of interest if so desired. Application of this method to two piston cores collected near the mouth of the Patuxent River in Chesapeake Bay showed that the accumulation of authigenic Mo and Cd varied in response to the changing bottom water redox conditions, with anoxia showing consistent oscillations throughout both pre-industrial and industrial times. Accumulation of authigenic U shows consistent oscillations as well, without any apparent increase in productivity related to anoxic trends. Degrees of Mo and Cd enrichment also inversely correlate to halophilic microfaunal assemblages already established as paleoclimate proxies within the bay indicating that bottom water anoxia is driven in part by the amount of freshwater discharge that the area receives
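
    The conventional isotope dilution used here for Mo, Cd and U reduces to a single ratio equation; a generic sketch (the function name, symbols and example numbers are illustrative, not the paper's calibration):

```python
# Conventional isotope dilution, generic form: the analyte amount follows from
# the shift in isotope ratio after adding a spike of known amount.
def isotope_dilution(n_spike_ref, r_spike, r_sample, r_mix):
    """Moles of the reference isotope in the sample, from measured ratios
    (spike isotope / reference isotope) of spike, sample and mixture.
    n_spike_ref: moles of reference isotope contributed by the spike."""
    return n_spike_ref * (r_spike - r_mix) / (r_mix - r_sample)

# e.g. enriched spike (ratio 100), natural sample (ratio 0.4), mixture measures 2.0
print(isotope_dilution(1.0e-9, 100.0, 0.4, 2.0))  # ~6.1e-8 mol of reference isotope
```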

  8. Thermal transfer and apparent-dose distributions in poorly bleached mortar samples: results from single grains and small aliquots of quartz

    International Nuclear Information System (INIS)

    Jain, M.; Thomsen, K.J.; Boetter-Jensen, L.; Murray, A.S.

    2004-01-01

    In the assessment of doses received from a nuclear accident, considerable attention has been paid to retrospective dosimetry using the optically stimulated luminescence (OSL) of heated materials such as bricks and tiles. Quartz extracted from these artefacts was heated during manufacture; this process releases all the prior trapped charge and simultaneously sensitises the quartz. Unfortunately unheated materials such as mortar and concrete are more common in industrial sites and particularly in nuclear installations. These materials are usually exposed to daylight during quarrying and construction, but in general this exposure is insufficient to completely empty (bleach) any geological trapped charge. This leads to a distribution of apparent doses in the sample at the time of construction, with only some (if any) grains exposed to sufficient light to be considered well bleached for OSL dosimetry. The challenge in using such materials as retrospective dosemeters is in identifying these well-bleached grains when an accident dose has been superimposed on the original dose distribution. We investigate here, using OSL, the background dose in three different mortar samples: render, whitewash and inner wall plaster from a building built in 1964. These samples are found to be both poorly bleached and weakly sensitive (only 0.3% of grains giving a detectable dose response). We study thermal transfer in single grains of quartz, investigate the grain-size dependence of bleaching in the size range 90-300 μm and compare the dose distributions obtained from small aliquots and single-grain procedures. A comparison of three different methods, viz. (a) first 5%, (b) probability plot and (c) comparison of internal and external uncertainties, is made for equivalent dose estimation. The results have implications for accident dosimetry, archaeological studies and dating of poorly bleached sediments.

  9. Ultra-trace plutonium determination in small volume seawater by sector field inductively coupled plasma mass spectrometry with application to Fukushima seawater samples.

    Science.gov (United States)

    Bu, Wenting; Zheng, Jian; Guo, Qiuju; Aono, Tatsuo; Tagami, Keiko; Uchida, Shigeo; Tazoe, Hirofumi; Yamada, Masatoshi

    2014-04-11

    Long-term monitoring of Pu isotopes in seawater is required for assessing Pu contamination in the marine environment from the Fukushima Dai-ichi Nuclear Power Plant accident. In this study, we established an accurate and precise analytical method based on anion-exchange chromatography and SF-ICP-MS. This method was able to determine Pu isotopes in seawater samples with small volumes (20-60 L). The U decontamination factor was 3×10^7 to 1×10^8, which provided sufficient removal of interfering U from the seawater samples. The estimated limits of detection for 239Pu and 240Pu were 0.11 fg mL-1 and 0.08 fg mL-1, respectively, which corresponded to 0.01 mBq m-3 for 239Pu and 0.03 mBq m-3 for 240Pu when a 20 L volume of seawater was measured. We achieved good precision (2.9%) and accuracy (0.8%) for measurement of the 240Pu/239Pu atom ratio in a standard Pu solution with a 239Pu concentration of 11 fg mL-1 and a 240Pu concentration of 2.7 fg mL-1. Seawater reference materials were used for the method validation, and both the 239+240Pu activities and 240Pu/239Pu atom ratios agreed well with the expected values. Surface and bottom seawater samples collected off Fukushima in the western North Pacific since March 2011 were analyzed. Our results suggested that there was no significant variation of the Pu distribution in seawater in the investigated areas compared to the distribution before the accident. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Spectroelectrochemical Sensing Based on Multimode Selectivity simultaneously Achievable in a Single Device. 11. Design and Evaluation of a Small Portable Sensor for the Determination of Ferrocyanide in Hanford Waste Samples

    International Nuclear Information System (INIS)

    Stegemiller, Michael L.; Heineman, William R.; Seliskar, Carl J.; Ridgway, Thomas H.; Bryan, Samuel A.; Hubler, Timothy L.; Sell, Richard L.

    2003-01-01

  11. Secondary School Students' Reasoning about Conditional Probability, Samples, and Sampling Procedures

    Science.gov (United States)

    Prodromou, Theodosia

    2016-01-01

    In the Australian mathematics curriculum, Year 12 students (aged 16-17) are asked to solve conditional probability problems that involve the representation of the problem situation with two-way tables or three-dimensional diagrams and consider sampling procedures that result in different correct answers. In a small exploratory study, we…
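
    The two-way-table reasoning such problems target can be made concrete with a small worked example (the counts are invented):

```python
# Conditional probability from a two-way table (illustrative counts, n = 100):
#                 tests positive   tests negative
# has trait             18                2
# no trait               8               72
table = {("trait", "pos"): 18, ("trait", "neg"): 2,
         ("none", "pos"): 8, ("none", "neg"): 72}

n_pos = table[("trait", "pos")] + table[("none", "pos")]
p_pos = n_pos / 100                                   # marginal: P(positive)
p_trait_given_pos = table[("trait", "pos")] / n_pos   # conditional: P(trait | positive)
print(p_pos, round(p_trait_given_pos, 3))             # 0.26 0.692
```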

  12. Performance of next-generation sequencing on small tumor specimens and/or low tumor content samples using a commercially available platform.

    Directory of Open Access Journals (Sweden)

    Scott Morris

    Full Text Available Next generation sequencing (NGS) tests are usually performed on relatively small core biopsy or fine needle aspiration (FNA) samples. Data is limited on what amount of tumor by volume or minimum number of FNA passes is needed to yield sufficient material for running NGS. We sought to identify the amount of tumor needed for running the PCDx NGS platform. 2,723 consecutive tumor tissues of all cancer types were queried and reviewed for inclusion. Information on tumor volume, success of performing NGS, and results of NGS were compiled. Assessment of sequence analysis, mutation calling and sensitivity, quality control, drug associations, and data aggregation and analysis were performed. 6.4% of samples were rejected from all testing due to insufficient tumor quantity. The number of genes with insufficient sensitivity to make definitive mutation calls increased as the percentage of tumor decreased, reaching statistical significance below 5% tumor content. The number of drug associations also decreased with a lower percentage of tumor, but this difference only became significant between 1-3%. The number of drug associations did decrease with smaller tissue size as expected. Neither specimen size nor percentage of tumor affected the ability to pass mRNA quality control. A tumor area of 10 mm2 provides a good margin of error for specimens to yield adequate drug association results. Specimen suitability remains a major obstacle to clinical NGS testing. We determined that PCR-based library creation methods allow the use of smaller specimens, and those with a lower percentage of tumor cells, to be run on the PCDx NGS platform.

  13. Measurement of large asymptotic reactor periods (from about 10^3 to 4×10^4 sec) to determine reactivity effects of small samples

    International Nuclear Information System (INIS)

    Grinevich, F.A.; Evchuk, A.I.; Klimentov, V.B.; Tyzh, A.V.; Churkin, Yu.I.; Yaroshevich, O.I.

    1977-01-01

    All investigation programs on fast reactor physics include measurements of low reactivity values, (1–0.01)×10^-5 ΔK/K. An application of the pile oscillator technique for this purpose requires a special critical assembly for installation of the oscillator. Thus it is of interest to develop relatively simple methods. In particular, one such method is the asymptotic period method, which is widely used for low reactivity measurements. A description of the method and of the equipment developed for low reactivity measurements based on measurement of the steady-state reactor period is presented. The equipment has been tested on the BTS-2 fast-thermal critical assembly. Measurement results on the reactivity effects of small samples in the fast zone centre are given. It is shown that applying the method of measuring long steady-state periods with the developed and tested equipment enables a reactivity of (1±0.02)×10^-5 ΔK/K to be determined at a critical assembly power of 5 to 10 W. The disadvantage of the method presented is the time lost in reaching the steady-state period, which results in greater sensitivity of the method to reactivity drifts.
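
    The link between a measured asymptotic period and reactivity is the inhour equation; the sketch below evaluates it across the period range covered by the method, using generic six-group delayed-neutron data for U-235 (illustrative values, not the BTS-2 assembly's actual kinetics parameters):

```python
import numpy as np

# Inhour equation: reactivity from a measured asymptotic period T,
#   rho = Lambda / T + sum_i beta_i / (1 + lambda_i * T)
# Six-group parameters below are generic thermal U-235 values (illustrative).
beta = np.array([2.15e-4, 1.424e-3, 1.274e-3, 2.568e-3, 7.48e-4, 2.73e-4])
lam = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])   # decay consts, s^-1
LAMBDA = 1e-5                                                # prompt generation time, s (assumed)

def reactivity_from_period(T):
    return LAMBDA / T + np.sum(beta / (1.0 + lam * T))

for T in (1e3, 1e4, 4e4):     # the asymptotic periods covered by the method
    print(f"T = {T:8.0f} s  ->  rho = {reactivity_from_period(T):.2e} dK/K")
```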

  14. Corrective Jaw Surgery