WorldWideScience

Sample records for sampling techniques correct

  1. Correction of Sample-Time Error for Time-Interleaved Sampling System Using Cubic Spline Interpolation

    Directory of Open Access Journals (Sweden)

    Qin Guo-jie

    2014-08-01

Sample-time errors can greatly degrade the dynamic range of a time-interleaved sampling system. In this paper, a novel correction technique employing cubic spline interpolation is proposed for inter-channel sample-time error compensation. The cubic spline interpolation compensation filter is developed in the form of a finite-impulse-response (FIR) filter structure, and the method for deriving the interpolation compensation filter coefficients is deduced. A 4 GS/s two-channel, time-interleaved ADC prototype system has been implemented to evaluate the performance of the technique. The experimental results showed that the correction technique is effective in attenuating the spurious tones and improving the dynamic performance of the system.
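The paper's spline-derived coefficients are not given in the abstract, but the underlying idea — estimate the value at the ideal sampling instant from a cubic fit through neighboring skewed samples — can be sketched with a 4-tap cubic-Lagrange fractional-delay FIR. This is a common stand-in for such compensation filters, not the authors' filter; the function names and demo signal are illustrative:

```python
import numpy as np

def lagrange_fd_taps(mu):
    """4-tap cubic-Lagrange fractional-delay FIR coefficients for sample
    offsets [-1, 0, 1, 2]; evaluates the signal at position n + mu."""
    return np.array([
        -mu * (mu - 1) * (mu - 2) / 6,
        (mu + 1) * (mu - 1) * (mu - 2) / 2,
        -mu * (mu + 1) * (mu - 2) / 2,
        mu * (mu + 1) * (mu - 1) / 6,
    ])

def correct_skew(y, skew_frac):
    """Shift one channel's samples by -skew_frac (in sample periods)
    back onto the ideal grid with the cubic interpolation FIR.
    Edge samples that lack neighbors are left as NaN."""
    taps = lagrange_fd_taps(-skew_frac)
    out = np.full(len(y), np.nan)
    for n in range(1, len(y) - 2):
        out[n] = taps @ y[n - 1:n + 3]
    return out
```

On a sine test tone, the interpolated channel tracks the ideal-grid values orders of magnitude more closely than the raw skewed samples, which is exactly the mechanism that suppresses the interleaving spurs.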

  2. Neural network scatter correction technique for digital radiography

    International Nuclear Information System (INIS)

    Boone, J.M.

    1990-01-01

This paper presents a scatter correction technique based on artificial neural networks. The technique utilizes the acquisition of a conventional digital radiographic image, coupled with the acquisition of a multiple pencil beam (micro-aperture) digital image. Image subtraction results in a sparsely sampled estimate of the scatter component in the image. The neural network is trained to develop a causal relationship between image data in the low-pass filtered open-field image and the sparsely sampled scatter image, and the trained network is then used to correct the entire image (pixel by pixel) in a manner which is operationally similar to, but potentially more powerful than, convolution. The technique is described and is illustrated using clinical primary component images combined with scatter component images that are realistically simulated using the results from previously reported Monte Carlo investigations. The results indicate that an accurate scatter correction can be realized using this technique.

  3. Gamma-ray self-attenuation corrections in environmental samples

    International Nuclear Information System (INIS)

    Robu, E.; Giovani, C.

    2009-01-01

Gamma spectrometry is a commonly used technique in environmental radioactivity monitoring. Frequently the bulk samples to be measured differ in composition and density from the reference sample used for efficiency calibration, and correction factors should be applied in these cases for activity measurement. Linear attenuation coefficients and self-absorption correction factors have been evaluated for soil, grass and liquid sources with different densities and geometries. (authors)

  4. An improved correlated sampling method for calculating correction factor of detector

    International Nuclear Information System (INIS)

    Wu Zhen; Li Junli; Cheng Jianping

    2006-01-01

For a small detector lying inside a bulk medium, there are two problems in calculating the detector correction factors: the detector is too small for enough particles to reach it and collide within it, and the ratio of the two quantities is not accurate enough. The method discussed in this paper, which combines correlated sampling with modified particle-collision auto-importance sampling and has been implemented on the MCNP-4C platform, can solve both problems. In addition, three other variance reduction techniques are each combined with correlated sampling to compute a simple model of the detector correction factors. The results prove that, although all the variance reduction techniques combined with correlated sampling improve the calculation efficiency, the combination of modified particle-collision auto-importance sampling with correlated sampling is the most efficient. (authors)

  5. Correction of failure in antenna array using matrix pencil technique

    International Nuclear Information System (INIS)

    Khan, SU; Rahim, MKA

    2017-01-01

In this paper a non-iterative technique is developed for the correction of a faulty antenna array, based on the matrix pencil technique (MPT). The failure of a sensor in an antenna array can damage the radiation power pattern in terms of sidelobe level and nulls. In the developed technique, the radiation pattern of the array is sampled to form a discrete power pattern information set. This information set is arranged in the form of a Hankel matrix (HM) on which the singular value decomposition (SVD) is executed. By removing non-principal values, we obtain an optimum lower-rank estimate of the HM, which corresponds to the corrected pattern. The proposed technique is then employed to recover the weight excitations and position allocations from the estimated matrix. Numerical simulations confirm the efficiency of the proposed technique, which is compared with the available techniques in terms of sidelobe level and nulls. (paper)
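The Hankel-plus-SVD step described above is concrete enough to sketch. The following is a minimal illustration (not the authors' code) of rank-truncating a Hankel matrix built from a sampled sequence and averaging its anti-diagonals to recover a corrected sequence — the demo values and rank choice are ours:

```python
import numpy as np

def hankel_lowrank_denoise(samples, rank, L=None):
    """Arrange `samples` into a Hankel matrix H[i, j] = samples[i + j],
    keep the `rank` principal singular values, and rebuild the sequence
    by averaging anti-diagonals (one Cadzow-style denoising step)."""
    N = len(samples)
    L = L or N // 2
    H = np.array([samples[i:i + N - L + 1] for i in range(L)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # low-rank estimate
    out = np.zeros(N)
    cnt = np.zeros(N)
    for i in range(Hr.shape[0]):                # anti-diagonal averaging
        for j in range(Hr.shape[1]):
            out[i + j] += Hr[i, j]
            cnt[i + j] += 1
    return out / cnt
```

A single real sinusoid has a rank-2 Hankel matrix (two complex exponentials), so truncating to rank 2 suppresses the contribution of a single damaged sample while preserving the underlying pattern.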

  6. Urine sampling techniques in symptomatic primary-care patients

    DEFF Research Database (Denmark)

    Holm, Anne; Aabenhus, Rune

    2016-01-01

Background: Choice of urine sampling technique in urinary tract infection may impact diagnostic accuracy and thus lead to possible over- or undertreatment. Currently no evidence-based consensus exists regarding correct sampling technique of urine from women with symptoms of urinary tract infection... a randomized or paired design to compare the result of urine culture obtained with two or more collection techniques in adult, female, non-pregnant patients with symptoms of urinary tract infection. We evaluated quality of the studies and compared accuracy based on dichotomized outcomes. Results: We included... in infection rate between mid-stream-clean-catch, mid-stream-urine and random samples. Conclusions: At present, no evidence suggests that sampling technique affects the accuracy of the microbiological diagnosis in non-pregnant women with symptoms of urinary tract infection in primary care. However...

  7. Self-absorption corrections of various sample-detector geometries in gamma-ray spectrometry using simple Monte Carlo simulations

    International Nuclear Information System (INIS)

    Ahmad Saat; Appleby, P.G.; Nolan, P.J.

    1997-01-01

Corrections for self-absorption in gamma-ray spectrometry have been developed using a simple Monte Carlo simulation technique. The simulation enables the calculation of gamma-ray path lengths in the sample which, using available data, can be used to calculate self-absorption correction factors. The simulation was carried out for three sample geometries: disk, Marinelli beaker, and cylinder (for well-type detectors). Mathematical models and experimental measurements were used to evaluate the simulations, and agreement to within a few percent was observed. The simulation results are also in good agreement with those reported in the literature. The simulation code was written in FORTRAN 90.

  8. An application of the baseline correction technique for correcting distorted seismic acceleration time histories

    International Nuclear Information System (INIS)

    Lee, Gyu Mahn; Kim, Jong Wook; Jeoung, Kyeong Hoon; Kim, Tae Wan; Park, Keun Bae; Kim, Keung Koo

    2008-03-01

Three baseline correction techniques, termed 'Newmark', 'Zero-VD' and 'Newmark and Zero-VD', were introduced to correct the distorted physical characteristics of a seismic time-history accelerogram. The corrected seismic accelerations and the distorted raw acceleration showed identical response spectra in the frequency domain, but various time-history profiles in the velocity and displacement domains. The correction techniques were programmed in UNIX-HP Fortran. The baseline-corrected seismic data were verified in terms of the frequency response spectrum using ANSYS, a commercial FEM software package.

  9. A new trajectory correction technique for linacs

    International Nuclear Information System (INIS)

    Raubenheimer, T.O.; Ruth, R.D.

    1990-06-01

In this paper, we describe a new trajectory correction technique for high energy linear accelerators. Current correction techniques force the beam trajectory to follow misalignments of the beam position monitors. Since the particle bunch has a finite energy spread and particles with different energies are deflected differently, this causes 'chromatic' dilution of the transverse beam emittance. The algorithm which we describe in this paper reduces the chromatic error by minimizing the energy dependence of the trajectory. To test the method we compare the effectiveness of our algorithm with a standard correction technique in simulations on a design linac for a Next Linear Collider. The simulations indicate that chromatic dilution would be debilitating in a future linear collider because of the very small beam sizes required to achieve the necessary luminosity. Thus, we feel that this technique will prove essential for future linear colliders. 3 refs., 6 figs., 2 tabs

  10. Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies

    Science.gov (United States)

    Theis, Fabian J.

    2017-01-01

Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers on nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear, especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss the consequences of inappropriate distribution assumptions and the reasons for the different behaviors of the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
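The paper's two resampling schemes are not spelled out in the abstract, but the inverse-probability idea behind them is standard: resample the stratified data with weights proportional to 1/π (the inverse of each record's inclusion probability) so that the rebuilt sample mimics the source population. A toy sketch — the function name and the two-stratum example are ours, not the paper's:

```python
import numpy as np

def ip_resample(X, y, incl_prob, n_out, rng=None):
    """Draw a pseudo-population sample: each record of the stratified
    sample is selected with probability proportional to the inverse of
    its inclusion probability, undoing the enrichment of rare cases."""
    rng = rng if rng is not None else np.random.default_rng(0)
    w = 1.0 / np.asarray(incl_prob, dtype=float)
    p = w / w.sum()
    idx = rng.choice(len(y), size=n_out, replace=True, p=p)
    return X[idx], y[idx]
```

For example, if 50 of 100 population cases and 50 of 900 controls were sampled (a 50/50 case-control split), inverse-probability resampling restores the population prevalence of about 10%, which is what lets a classifier trained on the resampled data produce calibrated predictions.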

  11. Determination of the self-attenuation correction factor for environmental samples analysis in gamma spectrometry

    International Nuclear Information System (INIS)

    Santos, Talita O.; Rocha, Zildete; Knupp, Eliana A.N.; Kastner, Geraldo F.; Oliveira, Arno H. de

    2015-01-01

The gamma spectrometry technique has been used to obtain the activity concentrations of natural and artificial radionuclides in environmental samples of different origins, compositions and densities. These sample characteristics may influence the calibration through the self-attenuation effect; the sample density is considered the most important factor. For reliable results, it is necessary to determine the self-attenuation correction factor, which has been a subject of great interest due to its effect on activity concentration. In this context, the aim of this work is to present the calibration process, considering the correction for self-attenuation, in the evaluation of the concentration of each radionuclide with an HPGe detector gamma spectrometry system. (author)

  12. The surgical correction of buried penis: a new technique

    NARCIS (Netherlands)

    Boemers, T. M.; de Jong, T. P.

    1995-01-01

    We report a new surgical technique for the correction of buried penis. The study comprised 10 boys with buried penis. The technique consisted of resection of abnormal dartos attachments, unfurling of the prepuce and correction of the deficient shaft skin by reapproximation of the preputial skin

  13. Receiver calibration and the nonlinearity parameter measurement of thick solid samples with diffraction and attenuation corrections.

    Science.gov (United States)

    Jeong, Hyunjo; Barnard, Daniel; Cho, Sungjong; Zhang, Shuzeng; Li, Xiongbing

    2017-11-01

This paper presents analytical and experimental techniques for accurate determination of the nonlinearity parameter (β) in thick solid samples. When piezoelectric transducers are used for β measurements, receiver calibration is required to determine the transfer function from which the absolute displacement can be calculated. The measured fundamental and second-harmonic displacement amplitudes should be modified to account for beam diffraction and material absorption. All these issues are addressed in this study, and the proposed technique is validated through β measurements of thick solid samples. A simplified self-reciprocity calibration procedure for a broadband receiver is described. The diffraction and attenuation corrections for the fundamental and second harmonics are explicitly derived. Aluminum alloy samples of five different thicknesses (4, 6, 8, 10, 12 cm) are prepared and β measurements are made using the finite-amplitude through-transmission method. The effects of the diffraction and attenuation corrections on β measurements are systematically investigated. When the diffraction and attenuation corrections are all properly made, the variation of β between samples of different thickness is found to be less than 3.2%.
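For orientation, the uncorrected plane-wave, lossless estimate that the diffraction and attenuation corrections then refine is β = 8A₂/(k²zA₁²). A sketch with symbols as commonly defined in finite-amplitude ultrasonics (the aluminum demo values below are illustrative, not the paper's data):

```python
import math

def beta_from_harmonics(A1, A2, freq, c, z):
    """Plane-wave, lossless estimate of the acoustic nonlinearity
    parameter beta from the fundamental (A1) and second-harmonic (A2)
    displacement amplitudes after propagation distance z in a medium
    with sound speed c."""
    k = 2 * math.pi * freq / c          # wavenumber
    return 8 * A2 / (k ** 2 * z * A1 ** 2)
```

The roundtrip is exact by construction: synthesizing A₂ from a chosen β with A₂ = βk²zA₁²/8 and feeding it back recovers β, which is a useful sanity check before layering on the diffraction and attenuation factors.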

  14. Modified emission-transmission method for determining trace elements in solid samples using the XRF techniques

    International Nuclear Information System (INIS)

    Poblete, V.; Alvarez, M.; Hermosilla, M.

    2000-01-01

This is a study of the analysis of trace elements in medium-thick solid samples by the modified emission-transmission method, using the energy-dispersive X-ray fluorescence (EDXRF) technique. Absorption and enhancement effects are the main disadvantages of the EDXRF technique for the quantitative analysis of major and trace elements in solid samples. The implementation of the method and its application to a variety of samples were carried out using an infinitely thick multi-element target sample, from which the absorption correction factors for all the analytes in the sample are calculated. The discontinuities in the mass attenuation coefficient versus energy curves for each element, for medium-thick and homogeneous samples, are analyzed and corrected. The different theoretical and experimental variables are thoroughly tested on real samples, including certified material of known concentration. The simplicity of the calculation method and the results obtained show the method's good precision, with possibilities for the non-destructive routine analysis of different solid samples using the EDXRF technique. (author)

  15. Beam dynamics in rf guns and emittance correction techniques

    International Nuclear Information System (INIS)

    Serafini, L.

    1994-01-01

In this paper we present a general review of beam dynamics in a laser-driven rf gun. The peculiarity of such an accelerating structure versus conventional multi-cell linac structures is underlined on the basis of the Panofsky-Wenzel theorem, which is found to give a theoretical background for the well-known Kim model. A basic explanation for some proposed methods to correct rf-induced emittance growth is also derived from the theorem. We also present three emittance correction techniques for the recovery of space-charge-induced emittance growth, namely the optimum distributed disk-like bunch technique, the use of rf spatial harmonics to correct the spherical aberration induced by space-charge forces, and the technique of emittance filtering by clipping the electron beam. The expected performance regarding the beam quality achievable with the different techniques, as predicted by scaling laws and simulations, is analyzed and, where available, compared to experimental results. (orig.)

  16. Empirical method for matrix effects correction in liquid samples

    International Nuclear Information System (INIS)

    Vigoda de Leyt, Dora; Vazquez, Cristina

    1987-01-01

A simple method for the determination of Cr, Ni and Mo in stainless steels is presented. In order to minimize matrix effects, conditions for a liquid system to dissolve stainless-steel chips have been developed. Pure-element solutions were used as standards, avoiding both the preparation of synthetic solutions containing all the elements of the steel and mathematical corrections. The result is a simple chemical operation which simplifies the method of analysis. Variance analysis of the results obtained with steel samples shows that the three elements may be determined by comparison with the analytical curves obtained from the pure elements, provided the same parameters are used in the calibration curves. The accuracy and precision were checked against other techniques using the British Chemical Standards of the Bureau of Analysed Samples Ltd. (England). (M.E.L.)

  17. Comparison between correlated sampling and the perturbation technique of MCNP5 for fixed-source problems

    International Nuclear Information System (INIS)

    He Tao; Su Bingjing

    2011-01-01

Highlights: → The performance of the MCNP differential operator perturbation technique is compared with that of the MCNP correlated sampling method for three types of fixed-source problems. → In terms of precision, the MCNP perturbation technique outperforms correlated sampling for one type of problem but performs comparably with, or even underperforms, correlated sampling for the other two types of problems. → In terms of accuracy, the MCNP perturbation calculations may predict inaccurate results for some of the test problems. However, the accuracy can be improved if the midpoint correction technique is used. - Abstract: Correlated sampling and the differential operator perturbation technique are two methods that enable MCNP (Monte Carlo N-Particle) to simulate a small response change between an original system and a perturbed system. In this work the performance of the MCNP differential operator perturbation technique is compared with that of the MCNP correlated sampling method for three types of fixed-source problems. In terms of the precision of predicted response changes, the MCNP perturbation technique outperforms correlated sampling for the problem involving variation of nuclide concentrations in the same direction, but performs comparably with, or even underperforms, correlated sampling for the other two types of problems, which involve void or variation of nuclide concentrations in opposite directions. In terms of accuracy, the MCNP differential operator perturbation calculations may predict inaccurate results that deviate from the benchmarks well beyond their uncertainty ranges for some of the test problems. However, the accuracy of the MCNP differential operator perturbation can be improved if the midpoint correction technique is used.

  18. New measurement techniques correct PU inventory in Japanese reprocessing plant

    International Nuclear Information System (INIS)

    2003-01-01

    Full text: At its briefing to the Japan Atomic Energy Commission on 28 January 2003, the Japan Safeguards Office (JSGO) of the Ministry of Education, Culture, Sports, Science and Technology (MEXT) announced that, due to the introduction of more precise sampling and analytical measurement techniques for measuring plutonium in the high active liquid waste (HALW) storage tanks at the Tokai Reprocessing Plant (TRP), the Japan Nuclear Cycle Development Institute (JNC) is correcting the amount of plutonium declared in past accountancy reports to the IAEA. The corrected amounts are expected to be in line with IAEA's own independent verification data and based on measurement methodologies endorsed by the IAEA. The IAEA has recognized for some time that the amount of nuclear material transferred to waste storage had not been adequately measured in the past and has worked with the facility operators and State authorities to introduce improved measurement techniques. IAEA Director General, Dr. Mohamed ElBaradei stressed however, that 'the Agency remains confident in its conclusion that no nuclear material has been diverted from the facility'. This conclusion is based on a range of activities under the NPT Safeguards Agreement between the Agency and Japan, as well as under the Additional Protocol to that Agreement which gives the Agency broad access to nuclear fuel-cycle related information and locations. TRP, in Tokai-mura, Ibaraki prefecture in Japan, was built in the early 1970s, using 1960s-era design and technology. The IAEA began inspecting the facility in 1977. In its annual evaluation of safeguards implementation, as reported to the IAEA's Board of Governors in the Safeguards Implementation Report, the Secretariat has regularly noted the need for strengthening safeguards implementation at TRP, particularly with respect to procedures used for the measurement of nuclear material in the waste produced. 
In 1996, Japan and the IAEA reached agreement on IAEA sampling, on a

  19. 40 CFR 1065.690 - Buoyancy correction for PM sample media.

    Science.gov (United States)

    2010-07-01

1065.690 Section 1065.690 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED)... Buoyancy correction for PM sample media. (a) General. Correct PM sample media for their buoyancy in air if... mass, use a sample media density of 920 kg/m3. (3) For PTFE membrane (film) media with an integral... media.
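The equation elided in the excerpt is the standard air-buoyancy correction form, scaling the weighed mass by the ratio of buoyancy factors for the calibration weight and the filter media. A sketch of that general form — the default densities below (8000 kg/m3 for the calibration weight, the 920 kg/m3 PTFE media density quoted above, nominal lab air) are assumptions for illustration; consult the rule text for the required values:

```python
def pm_buoyancy_correct(m_uncorr, rho_air=1.182, rho_weight=8000.0,
                        rho_media=920.0):
    """Air-buoyancy correction of a PM filter mass:
        m_corr = m_uncorr * (1 - rho_air/rho_weight)
                          / (1 - rho_air/rho_media)
    All densities in kg/m^3; m_uncorr in any mass unit."""
    return m_uncorr * (1 - rho_air / rho_weight) / (1 - rho_air / rho_media)
```

Because the low-density filter media displaces more air per unit mass than the dense calibration weight, the corrected mass comes out slightly larger than the balance reading (about 0.11% here).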

  20. Comparative evaluation of scatter correction techniques in 3D positron emission tomography

    CERN Document Server

    Zaidi, H

    2000-01-01

Much research and development has been concentrated on the scatter compensation required for quantitative 3D PET. Increasingly sophisticated scatter correction procedures are under investigation, particularly those based on accurate scatter models and iterative reconstruction-based scatter compensation approaches. The main difference among the correction methods is the way in which the scatter component in the selected energy window is estimated. Monte Carlo methods give further insight and might in themselves offer a possible correction procedure. Methods: Five scatter correction methods are compared in this paper, where applicable: the dual-energy window (DEW) technique, the convolution-subtraction (CVS) method, two variants of the Monte Carlo-based scatter correction technique (MCBSC1 and MCBSC2), and our newly developed statistical reconstruction-based scatter correction (SRBSC) method. These scatter correction techniques are evaluated using Monte Carlo simulation studies, experimental phantom measurements...

  1. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider in estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of error) of the study. The greater the precision required, the greater the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over the nonprobability sampling techniques, because the results of the study can then be generalized to the target population.
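For the categorical-outcome case described above, the factors reduce to the familiar formula n = z²·p(1−p)/d², where p is the expected proportion and d the margin of error. A minimal stdlib-only sketch (finite-population correction omitted):

```python
from math import ceil
from statistics import NormalDist

def sample_size_proportion(p, margin, confidence=0.95):
    """Minimum n to estimate a proportion p to within +/- margin at the
    given confidence level: n = z^2 * p*(1-p) / margin^2."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # e.g. 1.96 at 95%
    return ceil(z ** 2 * p * (1 - p) / margin ** 2)
```

With the conventional worst case p = 0.5 and a 5% margin at 95% confidence this gives the textbook n = 385, and a smaller expected proportion (or a wider margin) reduces the requirement.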

  2. Bulk sample self-attenuation correction by transmission measurement

    International Nuclear Information System (INIS)

    Parker, J.L.; Reilly, T.D.

    1976-01-01

    Various methods used in either finding or avoiding the attenuation correction in the passive γ-ray assay of bulk samples are reviewed. Detailed consideration is given to the transmission method, which involves experimental determination of the sample linear attenuation coefficient by measuring the transmission through the sample of a beam of gamma rays from an external source. The method was applied to box- and cylindrically-shaped samples
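For a slab-shaped sample in far-field geometry, the measured transmission T = I/I₀ alone fixes the self-attenuation correction, CF = −ln T / (1 − T), which tends to 1 as the sample becomes transparent. A sketch of that classic result (the slab, far-field assumptions are ours; other shapes need different expressions):

```python
import math

def slab_attenuation_cf(T):
    """Self-attenuation correction factor for a slab sample from its
    measured gamma-ray transmission T = I/I0 (far-field, slab
    approximation): CF = -ln(T) / (1 - T), with CF -> 1 as T -> 1."""
    if abs(T - 1.0) < 1e-12:
        return 1.0                      # transparent sample: no correction
    return -math.log(T) / (1.0 - T)
```

Multiplying the observed count rate by CF recovers the rate an unattenuating sample of the same activity would have produced.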

  3. 78 FR 27442 - Coal Mine Dust Sampling Devices; Correction

    Science.gov (United States)

    2013-05-10

    ... DEPARTMENT OF LABOR Mine Safety and Health Administration Coal Mine Dust Sampling Devices; Correction AGENCY: Mine Safety and Health Administration, Labor. ACTION: Notice; correction. SUMMARY: On April 30, 2013, Mine Safety and Health Administration (MSHA) published a notice in the Federal Register...

  4. Correction for sample self-absorption in activity determination by gamma spectrometry

    International Nuclear Information System (INIS)

    Galloway, R.B.

    1991-01-01

    Gamma ray spectrometry is a convenient method of determining the activity of the radioactive components in environmental samples. Commonly samples vary in gamma absorption or differ in absorption from the calibration standards available, so that accurate correction for self-absorption in the sample is essential. A versatile correction procedure is described. (orig.)

  5. Attenuation correction for the collimated gamma ray assay of cylindrical samples

    International Nuclear Information System (INIS)

    Patra, Sabyasachi; Agarwal, Chhavi; Goswami, A.; Gathibandhe, M.

    2015-01-01

The Hybrid Monte Carlo (HMC) method developed earlier for attenuation correction of non-collimated samples [Agarwal et al., 2008, Nucl. Instrum. Methods A 597, 198] has been extended to the segmented gamma-ray assay of cylindrical samples. The method has been validated both experimentally and theoretically. For experimental validation, the results of the HMC calculation have been compared with experimentally obtained attenuation correction factors. The HMC attenuation correction factors have also been compared with the results of the near-field and far-field formulae available in the literature at two sample-to-detector distances (10.3 cm and 20.4 cm). The method has been found to be valid at all sample-to-detector distances over a wide range of transmittance, whereas the near-field and far-field formulae work only over a limited range of sample-to-detector distances and transmittances. The HMC method has been further extended to circular collimated geometries, for which no analytical formula for attenuation correction exists. - Highlights: • Hybrid Monte Carlo method for attenuation correction developed for an SGA system. • The method works for all sample-detector geometries at all transmittances. • The near-field formula is applicable only beyond a certain sample-to-detector distance. • The far-field formula is applicable only at higher transmittances (>18%). • The Hybrid Monte Carlo method is further extended to circular collimated geometry

  6. Correction factor to determine total hydrogen+deuterium concentration obtained by inert gas fusion-thermal conductivity detection (IGF-TCD) technique

    International Nuclear Information System (INIS)

    Ramakumar, K.L.; Sesha Sayi, Y.; Shankaran, P.S.; Chhapru, G.C; Yadav, C.S.; Venugopal, V.

    2004-01-01

The limitation of commercially available dedicated equipment based on inert gas fusion-thermal conductivity detection (IGF-TCD) for the determination of hydrogen+deuterium is described. For a given molar concentration, deuterium is underestimated vis-à-vis hydrogen because of its lower thermal conductivity and because its molecular weight is not considered in the calculations. An empirical correction factor, based on the differences between the thermal conductivities of hydrogen, deuterium and the carrier gas argon and on the mole fraction of deuterium in the sample, has been derived to correct the observed hydrogen+deuterium concentration. The corrected results obtained by the IGF-TCD technique have been validated by determining the hydrogen and deuterium contents of a few samples using an independent method based on hot vacuum extraction-quadrupole mass spectrometry (HVE-QMS). Knowledge of the mole fraction of deuterium (XD) is necessary to effect the correction. The correction becomes insignificant at low XD values (XD < 0.2), as the precision of the IGF measurements is comparable with the extent of the correction. (author)

  7. A new corrective technique for adolescent idiopathic scoliosis (Ucar's convex rod rotation)

    Directory of Open Access Journals (Sweden)

    Bekir Yavuz Ucar

    2014-01-01

Study Design: Prospective single-center study. Objective: To analyze the efficacy and safety of a new technique of global vertebral correction with convex rod rotation performed on patients with adolescent idiopathic scoliosis. Summary of Background Data: The surgical goal is to obtain optimal curve correction in scoliosis surgery. There are various correction techniques. This report describes a new technique of global vertebral correction with convex rod rotation. Materials and Methods: A total of 12 consecutive patients with Lenke type I adolescent idiopathic scoliosis, managed by the convex rod rotation technique between 2012 and 2013 and having more than 1 year of follow-up, were included. Mean age was 14.5 years (range = 13-17 years) at the time of operation. The hospital charts were reviewed for demographic data. Measurements of curve magnitude and balance were made on 36-inch standing anteroposterior and lateral radiographs taken before surgery and at the most recent follow-up to assess deformity correction, spinal balance, and complications related to the instrumentation. Results: A preoperative coronal-plane major curve of 62° (range = 50°-72°) with flexibility of less than 30% was corrected to 11.5° (range = 10°-14°), an 81% scoliosis correction, at the final follow-up. Coronal imbalance was improved 72% at the most recent follow-up assessment. No complications were found. Conclusion: The new technique of global vertebral correction with Ucar's convex rod rotation is effective. It is a vertebral rotation procedure from the convex side and allows screws to be placed easily on the concave side.

  8. Coherent optical adaptive technique improves the spatial resolution of STED microscopy in thick samples

    Science.gov (United States)

    Yan, Wei; Yang, Yanlong; Tan, Yu; Chen, Xun; Li, Yang; Qu, Junle; Ye, Tong

    2018-01-01

Stimulated emission depletion (STED) microscopy is one of the far-field optical microscopy techniques that can provide sub-diffraction spatial resolution. The spatial resolution of STED microscopy is determined by the specially engineered beam profile of the depletion beam and its power. However, the beam profile of the depletion beam may be distorted due to aberrations of the optical system and the inhomogeneity of the specimen's optical properties, resulting in compromised spatial resolution. The situation deteriorates when thick samples are imaged. In the worst case, severe distortion of the depletion beam profile may cause complete loss of the super-resolution effect, no matter how much depletion power is applied to the specimen. Previously, several adaptive optics approaches have been explored to compensate for the aberrations of systems and specimens. However, it is hard to correct the complicated high-order optical aberrations of specimens. In this report, we demonstrate that the complicated distorted wavefront from a thick phantom sample can be measured by using the coherent optical adaptive technique (COAT). The full correction can effectively maintain and improve the spatial resolution in imaging thick samples. PMID:29400356

  9. A correction scheme for thermal conductivity measurement using the comparative cut-bar technique based on 3D numerical simulation

    International Nuclear Information System (INIS)

    Xing, Changhu; Folsom, Charles; Jensen, Colby; Ban, Heng; Marshall, Douglas W

    2014-01-01

    As an important factor affecting the accuracy of thermal conductivity measurement, systematic (bias) error in the guarded comparative axial heat flow (cut-bar) method was mostly neglected by previous research. This bias is primarily due to the thermal conductivity mismatch between the sample and the meter bars (reference), which is common for a sample of unknown thermal conductivity. A correction scheme, based on finite element simulation of the measurement system, was proposed to reduce the magnitude of the overall measurement uncertainty. The scheme was experimentally validated by applying corrections to four types of sample measurements, in which the specimen thermal conductivity was much smaller than, slightly smaller than, equal to, or much larger than that of the meter bar. As an alternative to the previously proposed optimum guarding technique, the correction scheme can be used to minimize the uncertainty contribution from the measurement system under non-optimal guarding conditions. It is especially necessary for large thermal conductivity mismatches between sample and meter bars. (paper)

  10. Two sampling techniques for game meat

    OpenAIRE

    van der Merwe, Maretha; Jooste, Piet J.; Hoffman, Louw C.; Calitz, Frikkie J.

    2013-01-01

    A study was conducted to compare the excision sampling technique used by the export market and the sampling technique preferred by European countries, namely the biotrace cattle and swine test. The measuring unit for the excision sampling was grams (g) and square centimetres (cm2) for the swabbing technique. The two techniques were compared after a pilot test was conducted on spiked approved beef carcasses (n = 12) that statistically proved the two measuring units correlated. The two sampling...

  11. Bias of shear wave elasticity measurements in thin layer samples and a simple correction strategy.

    Science.gov (United States)

    Mo, Jianqiang; Xu, Hao; Qiang, Bo; Giambini, Hugo; Kinnick, Randall; An, Kai-Nan; Chen, Shigao; Luo, Zongping

    2016-01-01

    Shear wave elastography (SWE) is an emerging technique for measuring biological tissue stiffness. However, the application of SWE to thin-layer tissues is limited by bias due to the influence of geometry on the measured shear wave speed. In this study, we investigated the bias of Young's modulus measured by SWE in thin-layer gelatin-agar phantoms, and compared the results with finite element simulations and a Lamb wave model. The results indicated that the Young's modulus measured by SWE decreased continuously as sample thickness decreased, and that this effect was more significant at smaller thicknesses. We propose a new empirical formula that can conveniently correct the bias without the need for complicated mathematical modeling. In summary, we confirmed the nonlinear relation between thickness and Young's modulus measured by SWE in thin-layer samples, and offer a simple and practical correction strategy that is convenient for clinicians to use.

  12. Development of Large Sample Neutron Activation Technique for New Applications in Thailand

    International Nuclear Information System (INIS)

    Laoharojanaphand, S.; Tippayakul, C.; Wonglee, S.; Channuie, J.

    2018-01-01

    The development of Large Sample Neutron Activation Analysis (LSNAA) in Thailand is presented in this paper. The technique was first developed with rice as the test sample, using the Thai Research Reactor-1/Modification 1 (TRR-1/M1) as the neutron source. The first step was to select and characterize an appropriate irradiation facility. An out-core irradiation facility (position A4) was attempted first, and the results obtained there guided the subsequent experiments with the thermal column facility. The thermal column was characterized with Cu wire to determine the spatial flux distribution with and without a rice sample. The flux depression without the rice sample was less than 30%, while the flux depression with the rice sample increased to about 60%. Flux monitors placed inside the rice sample were used to determine the average flux over the sample. The gamma self-shielding effect during gamma measurement was corrected using Monte Carlo simulation: the ratio between the efficiencies of the volume source and the point source at each energy point was calculated with the MCNPX code. The research team adopted the k0-NAA methodology to calculate the element concentrations. The k0-NAA program developed by the IAEA was set up to simulate the conditions of the irradiation and measurement facilities used in this research. The element concentrations in the bulk rice sample were then calculated, taking into account the flux depression and gamma efficiency corrections. At the moment, the results still show large discrepancies with the reference values; further validation work will be performed to identify the sources of error. Moreover, the LSNAA technique was applied to the activation analysis of the IAEA archaeological mock-up, and those results are provided in this report. (author)

  13. Education on Correct Inhaler Technique in Pharmacy Schools ...

    African Journals Online (AJOL)

    Conclusion: Standard educational training may not be the most appropriate method of teaching students the correct use of inhalers. Clearly, there is a practice element missing which needs to be addressed in a feasible way. Keywords: Inhaler technique, Pharmacy education, Hands-on training, Training barrier ...

  14. Two sampling techniques for game meat

    Directory of Open Access Journals (Sweden)

    Maretha van der Merwe

    2013-03-01

    Full Text Available A study was conducted to compare the excision sampling technique used by the export market and the sampling technique preferred by European countries, namely the biotrace cattle and swine test. The measuring unit for the excision sampling was grams (g) and square centimetres (cm2) for the swabbing technique. The two techniques were compared after a pilot test was conducted on spiked approved beef carcasses (n = 12) that statistically proved the two measuring units correlated. The two sampling techniques were conducted on the same game carcasses (n = 13) and analyses performed for aerobic plate count (APC), Escherichia coli and Staphylococcus aureus, for both techniques. A more representative result was obtained by swabbing and no damage was caused to the carcass. Conversely, the excision technique yielded fewer organisms and caused minor damage to the carcass. The recovery ratio from the sampling technique improved 5.4 times for APC, 108.0 times for E. coli and 3.4 times for S. aureus over the results obtained from the excision technique. It was concluded that the sampling methods of excision and swabbing can be used to obtain bacterial profiles from both export and local carcasses and could be used to indicate whether game carcasses intended for the local market are possibly on par with game carcasses intended for the export market and therefore safe for human consumption.

  15. Two sampling techniques for game meat.

    Science.gov (United States)

    van der Merwe, Maretha; Jooste, Piet J; Hoffman, Louw C; Calitz, Frikkie J

    2013-03-20

    A study was conducted to compare the excision sampling technique used by the export market and the sampling technique preferred by European countries, namely the biotrace cattle and swine test. The measuring unit for the excision sampling was grams (g) and square centimetres (cm2) for the swabbing technique. The two techniques were compared after a pilot test was conducted on spiked approved beef carcasses (n = 12) that statistically proved the two measuring units correlated. The two sampling techniques were conducted on the same game carcasses (n = 13) and analyses performed for aerobic plate count (APC), Escherichia coli and Staphylococcus aureus, for both techniques. A more representative result was obtained by swabbing and no damage was caused to the carcass. Conversely, the excision technique yielded fewer organisms and caused minor damage to the carcass. The recovery ratio from the sampling technique improved 5.4 times for APC, 108.0 times for E. coli and 3.4 times for S. aureus over the results obtained from the excision technique. It was concluded that the sampling methods of excision and swabbing can be used to obtain bacterial profiles from both export and local carcasses and could be used to indicate whether game carcasses intended for the local market are possibly on par with game carcasses intended for the export market and therefore safe for human consumption.

  16. A beam-based alignment technique for correction of accelerator structure misalignments

    International Nuclear Information System (INIS)

    Kubo, K.; Raubenheimer, T.O.

    1994-08-01

    This paper describes a method of reducing the transverse emittance dilution in linear colliders due to transverse wakefields arising from misaligned accelerator structures. The technique is a generalization of the Wake-Free correction algorithm. The structure alignment errors are measured locally by varying the bunch charge and/or bunch length and measuring the change in the beam trajectory. The misalignments can then be corrected by varying the beam trajectory or moving the structures. The results of simulations are presented, demonstrating the viability of the technique

  17. Fast shading correction for cone beam CT in radiation therapy via sparse sampling on planning CT.

    Science.gov (United States)

    Shi, Linxi; Tsui, Tiffany; Wei, Jikun; Zhu, Lei

    2017-05-01

    The image quality of cone beam computed tomography (CBCT) is limited by severe shading artifacts, hindering its quantitative applications in radiation therapy. In this work, we propose an image-domain shading correction method that uses the planning CT (pCT) as prior information and is highly adaptive to the clinical environment. We perform shading correction via sparse sampling on the pCT. The method starts with a coarse mapping between the first-pass CBCT images obtained from the Varian TrueBeam system and the pCT. The scatter correction method embedded in the Varian commercial software removes some image errors, but the CBCT images still contain severe shading artifacts. The difference images between the mapped pCT and the CBCT are considered shading errors, but only sparse shading samples are selected for correction, using empirical constraints to avoid carrying over false information from the pCT. A Fourier-transform-based technique, referred to as local filtration, is proposed to efficiently process the sparse data for effective shading correction. The performance of the proposed method was evaluated on one anthropomorphic pelvis phantom and 17 patients who were scheduled for radiation therapy. (The code and sample data can be downloaded from https://sites.google.com/view/linxicbct) Results: The proposed shading correction substantially improves the CBCT image quality on both the phantom and the patients, to a level close to that of the pCT images. On the phantom, the spatial nonuniformity (SNU) difference between CBCT and pCT is reduced from 74 to 1 HU. The root-mean-square difference of SNU between CBCT and pCT is reduced from 83 to 10 HU on the pelvis patients, and from 101 to 12 HU on the thorax patients. The robustness of the proposed shading correction was fully investigated with simulated registration errors between CBCT and pCT on the phantom and with mis-registration on patients. The sparse sampling scheme of our method successfully
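    The sparse-sample-plus-low-pass idea in this abstract can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors' local filtration code: given a CBCT slice, the mapped pCT, and a binary mask of the sparsely selected samples, it estimates a smooth shading field by Gaussian low-pass filtering in the Fourier domain with normalized convolution; the filter width `sigma` and the function name are assumptions for illustration.

    ```python
    import numpy as np

    def shading_field(cbct, pct_mapped, sample_mask, sigma=10.0):
        """Estimate a smooth shading field from sparse pCT-minus-CBCT samples.

        Gaussian low-pass filtering in the Fourier domain, normalized by the
        equally filtered sampling mask (normalized convolution), so that
        unsampled pixels are interpolated rather than treated as zeros.
        """
        diff = (pct_mapped - cbct) * sample_mask        # sparse shading samples
        ny, nx = diff.shape
        fy = np.fft.fftfreq(ny)[:, None]                # cycles per pixel
        fx = np.fft.fftfreq(nx)[None, :]
        lowpass = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
        num = np.real(np.fft.ifft2(np.fft.fft2(diff) * lowpass))
        den = np.real(np.fft.ifft2(np.fft.fft2(sample_mask.astype(float)) * lowpass))
        return num / np.maximum(den, 1e-6)
    ```

    The corrected image would then be `cbct + shading_field(cbct, pct_mapped, sample_mask)`; the paper's method additionally applies empirical constraints when picking the samples, which this sketch omits.
    
    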

  18. Texture investigation in aluminium and iron - silicon samples by neutron diffraction technique

    International Nuclear Information System (INIS)

    Pugliese, R.; Yamasaki, J.M.

    1988-09-01

    By means of the neutron diffraction technique, the texture of 5% and 98% rolled aluminium and of the iron-silicon steel used in the cores of electric transformers has been determined. The measurements were performed using a neutron diffractometer installed at beam hole no. 6 of the IEA-R1 Nuclear Research Reactor. To avoid corrections such as those for neutron absorption and sample luminosity, the geometric forms of the samples were approximated to spheres or octagonal prisms, with dimensions not exceeding those of the neutron beam. The texture of the samples was analysed with the help of a computer programme that analyses the intensity of the diffracted neutron beam and plots the pole figures. (author) [pt

  19. A Correctness Verification Technique for Commercial FPGA Synthesis Tools

    International Nuclear Information System (INIS)

    Kim, Eui Sub; Yoo, Jun Beom; Choi, Jong Gyun; Kim, Jang Yeol; Lee, Jang Soo

    2014-01-01

    Once FPGA (Field-Programmable Gate Array) designers write Verilog programs, commercial synthesis tools automatically translate the Verilog programs into EDIF programs, so that designers can focus largely on the functional correctness of the HDL designs. Nuclear regulation authorities, however, require a more thorough demonstration of the correctness and safety of the mechanical synthesis processes of FPGA synthesis tools, even though the FPGA industry has acknowledged them empirically as correct and safe processes and tools. To assure this safety, the industry standards for the safety of electronic/electrical devices, such as IEC 61508 and IEC 60880, recommend using formal verification techniques. There are several formal verification tools (e.g., 'FormalPro', 'Conformal', 'Formality') to verify the correctness of the translation from Verilog into EDIF programs, but they are expensive to use and hard to apply to the work of 3rd-party developers. This paper proposes a formal verification technique which can contribute to the correctness demonstration in part. It formally checks the behavioral equivalence between Verilog and the subsequently synthesized netlist with the VIS verification system. A netlist is an intermediate output of the FPGA synthesis process, and EDIF is used as a standard format for netlists. If the formal verification succeeds, then we can assure that the synthesis process from Verilog into netlist worked correctly, at least for the Verilog used. To support the formal verification, we developed the mechanical translator 'EDIFtoBLIFMV,' which translates EDIF into BLIF-MV as an input front-end of the VIS system while preserving behavioral equivalence. We performed a case study with an example from a preliminary version of the RPS in a Korean nuclear power plant in order to demonstrate the efficiency of the proposed formal verification technique and the implemented translator. It

  20. A Correctness Verification Technique for Commercial FPGA Synthesis Tools

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Eui Sub; Yoo, Jun Beom [Konkuk University, Seoul (Korea, Republic of); Choi, Jong Gyun; Kim, Jang Yeol; Lee, Jang Soo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    Once FPGA (Field-Programmable Gate Array) designers write Verilog programs, commercial synthesis tools automatically translate the Verilog programs into EDIF programs, so that designers can focus largely on the functional correctness of the HDL designs. Nuclear regulation authorities, however, require a more thorough demonstration of the correctness and safety of the mechanical synthesis processes of FPGA synthesis tools, even though the FPGA industry has acknowledged them empirically as correct and safe processes and tools. To assure this safety, the industry standards for the safety of electronic/electrical devices, such as IEC 61508 and IEC 60880, recommend using formal verification techniques. There are several formal verification tools (e.g., 'FormalPro', 'Conformal', 'Formality') to verify the correctness of the translation from Verilog into EDIF programs, but they are expensive to use and hard to apply to the work of 3rd-party developers. This paper proposes a formal verification technique which can contribute to the correctness demonstration in part. It formally checks the behavioral equivalence between Verilog and the subsequently synthesized netlist with the VIS verification system. A netlist is an intermediate output of the FPGA synthesis process, and EDIF is used as a standard format for netlists. If the formal verification succeeds, then we can assure that the synthesis process from Verilog into netlist worked correctly, at least for the Verilog used. To support the formal verification, we developed the mechanical translator 'EDIFtoBLIFMV,' which translates EDIF into BLIF-MV as an input front-end of the VIS system while preserving behavioral equivalence. We performed a case study with an example from a preliminary version of the RPS in a Korean nuclear power plant in order to demonstrate the efficiency of the proposed formal verification technique and the implemented translator. It

  1. Large Sample Neutron Activation Analysis of Heterogeneous Samples

    International Nuclear Information System (INIS)

    Stamatelatos, I.E.; Vasilopoulou, T.; Tzika, F.

    2018-01-01

    A Large Sample Neutron Activation Analysis (LSNAA) technique was developed for non-destructive analysis of heterogeneous bulk samples. The technique incorporated collimated scanning, combining experimental measurements and Monte Carlo simulations, for the identification of inhomogeneities in large-volume samples and the correction of their effect on the interpretation of gamma-spectrometry data. Corrections were applied for the effects of neutron self-shielding, gamma-ray attenuation, geometrical factor and heterogeneous activity distribution within the sample. A benchmark experiment was performed to investigate the effect of heterogeneity on the accuracy of LSNAA. Moreover, a ceramic vase was analyzed as a whole, demonstrating the feasibility of the technique. The LSNAA results were compared against results obtained by INAA and a satisfactory agreement between the two methods was observed. This study showed that LSNAA is a technique capable of accurate, non-destructive, multi-elemental compositional analysis of heterogeneous objects. It also revealed the great potential of the technique for the analysis of precious objects and artefacts that need to be preserved intact and cannot be damaged for sampling purposes. (author)

  2. Research on self-absorption corrections for laboratory γ spectral analysis of soil samples

    International Nuclear Information System (INIS)

    Tian Zining; Jia Mingyan; Li Huibin; Cheng Ziwei; Ju Lingjun; Shen Maoquan; Yang Xiaoyan; Yan Ling; Fen Tiancheng

    2010-01-01

    Based on the calibration results of point sources, the dimensions of the HPGe crystal were characterized. Linear attenuation coefficients and detection efficiencies were calculated for the various sample types, and the self-absorption function F(μ) of the φ75 mm x 25 mm sample geometry was established. A standard surface source was used to simulate sources at different heights in the soil sample, and the function ε(h), which relates detection efficiency to the height of the surface source, was determined; the detection efficiency of a calibration source can then be obtained by integration. The F(μ) functions established for soil samples are consistent with MCNP calculations, so F(μ) for soil samples of other dimensions can be computed with the MCNP code and used for self-absorption correction; to verify the calculations, φ75 mm x 75 mm soil samples were also measured. Several φ75 mm x 25 mm soil samples, including samples from an atmospheric nuclear testing site, were measured on the HPGe spectrometer, and F(μ) was used to correct for self-absorption. The technical method for correcting soil samples from unknown areas is also given. The surface-source correction method greatly improves the measurement accuracy of the gamma spectra, and it will be widely applied in environmental radioactivity investigations. (authors)
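    The paper's F(μ) is fitted from calibration measurements and MCNP runs and is not reproduced here. As a hedged illustration of what a self-absorption factor looks like, the sketch below uses a common analytical stand-in: the mean gamma transmission of a homogeneous slab viewed face-on, and a correction expressed as a ratio against a calibration reference of the same geometry. Both the slab model and the function names are assumptions for illustration.

    ```python
    import math

    def slab_self_absorption(mu, h):
        """Mean gamma-ray transmission of a homogeneous slab of thickness h (cm)
        with linear attenuation coefficient mu (1/cm), viewed face-on:
        F = (1/h) * integral_0^h exp(-mu*x) dx = (1 - exp(-mu*h)) / (mu*h)."""
        if mu * h < 1e-9:            # optically thin limit: no self-absorption
            return 1.0
        return (1.0 - math.exp(-mu * h)) / (mu * h)

    def self_absorption_correction(mu_sample, mu_reference, h):
        """Factor multiplying an activity measured against a calibration
        reference of the same geometry but different attenuation coefficient."""
        return slab_self_absorption(mu_reference, h) / slab_self_absorption(mu_sample, h)
    ```

    A sample attenuating more strongly than the reference gives a correction factor greater than one, as expected: fewer of its photons reach the detector per decay.
    
    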

  3. ICNTS. Benchmarking of momentum correction techniques

    International Nuclear Information System (INIS)

    Beidler, Craig D.; Isaev, Maxim Yu.; Kasilov, Sergei V.

    2008-01-01

    In the traditional neoclassical ordering, mono-energetic transport coefficients are evaluated using the simplified Lorentz form of the pitch-angle collision operator, which violates momentum conservation. In this paper, the parallel momentum balance, with radial parallel-momentum transport and viscosity terms, is analysed, in particular with respect to the radial electric field. Next, the impact of momentum conservation in the stellarator long-mean-free-path (lmfp) regime is estimated for the radial transport and the parallel electric conductivity. Finally, momentum correction techniques are described, based on mono-energetic transport coefficients calculated e.g. by the DKES code, and preliminary results for the parallel electric conductivity and the bootstrap current are presented. (author)

  4. Radioisotope Sample Measurement Techniques in Medicine and Biology. Proceedings of the Symposium on Radioisotope Sample Measurement Techniques

    International Nuclear Information System (INIS)

    1965-01-01

    The medical and biological applications of radioisotopes depend on two basically different types of measurements: those on living subjects in vivo and those on samples in vitro. The International Atomic Energy Agency has in the past held several meetings on in vivo measurement techniques, notably whole-body counting and radioisotope scanning. The present volume contains the Proceedings of the first Symposium the Agency has organized to discuss the various aspects of techniques for sample measurement in vitro. The range of these sample measurement techniques is very wide. The sample may weigh a few milligrams or several hundred grams, and may be in the gaseous, liquid or solid state. Its radioactive content may consist of a single, known radioisotope or several unknown ones. The concentration of radioactivity may be low, medium or high. The measurements may be made manually or automatically, and any one of the many radiation detectors now available may be used. The 53 papers presented at the Symposium illustrate the great variety of methods now in use for radioactive-sample measurements. The first topic discussed is gamma-ray spectrometry, which finds an increasing number of applications in sample measurements. Other sections of the Proceedings deal with: the use of computers in gamma-ray spectrometry and multiple-tracer techniques; recent developments in activation analysis, where both gamma-ray spectrometry and computing techniques are applied; thin-layer and paper radiochromatographic techniques for use with low-energy beta-ray emitters; various aspects of liquid scintillation counting techniques in the measurement of alpha- and beta-ray emitters, including chemical and colour quenching; autoradiographic techniques; calibration of equipment; and standardization of radioisotopes. Finally, some applications of solid-state detectors are presented; this section may be regarded as a preview of important future developments. The meeting was attended by 203 participants

  5. Effect of sample size on bias correction performance

    Science.gov (United States)

    Reiter, Philipp; Gutjahr, Oliver; Schefczyk, Lukas; Heinemann, Günther; Casper, Markus C.

    2014-05-01

    The output of climate models often shows a bias when compared to observed data, so that preprocessing is necessary before using it as climate forcing in impact modeling (e.g. hydrology, species distribution). A common bias correction method is the quantile matching approach, which adapts the cumulative distribution function of the model output to that of the observed data by means of a transfer function. Especially for precipitation, we expect the bias correction performance to depend strongly on sample size, i.e. the length of the period used for calibration of the transfer function. We carry out experiments using the precipitation output of ten regional climate model (RCM) hindcast runs from the EU-ENSEMBLES project and the E-OBS observational dataset for the period 1961 to 2000. The 40 years are split into a 30-year calibration period and a 10-year validation period. In the first step, transfer functions are set up cell-by-cell for each RCM, using the complete 30-year calibration period. The derived transfer functions are applied to the validation period of the respective RCM precipitation output, and the mean absolute errors with reference to the observational dataset are calculated. These values are treated as the "best fit" for the respective RCM. In the next step, this procedure is redone using subperiods of the 30-year calibration period. The lengths of these subperiods are reduced from 29 years down to a minimum of 1 year, considering only subperiods of consecutive years. This leads to an increasing number of repetitions for smaller sample sizes (e.g. 2 for a length of 29 years). In the last step, the mean absolute errors are statistically tested against the "best fit" of the respective RCM to compare the performances. In order to analyze whether the strength of the sample-size effect depends on the chosen correction method, four variations of the quantile matching approach (PTF, QUANT/eQM, gQM, GQM) are applied in this study. 
The experiments are further
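    The quantile matching approach described above can be sketched with an empirical transfer function. The snippet below is a minimal illustration of the basic empirical quantile mapping (eQM-style) idea only, not the PTF/gQM/GQM variants compared in the study: each model value is mapped to its non-exceedance probability under the calibration-period model CDF, and the observed CDF is inverted at that probability.

    ```python
    import numpy as np

    def quantile_map(model_cal, obs_cal, model_new):
        """Empirical quantile matching (sketch).

        model_cal, obs_cal: calibration-period model output and observations.
        model_new: model values to correct (e.g. the validation period).
        """
        model_sorted = np.sort(model_cal)
        obs_sorted = np.sort(obs_cal)
        # non-exceedance probability of each new value under the model CDF
        p = np.searchsorted(model_sorted, model_new, side="right") / model_sorted.size
        # invert the observed CDF at those probabilities
        return np.quantile(obs_sorted, np.clip(p, 0.0, 1.0))
    ```

    Applied to a model series with a constant offset relative to the observations, this transfer removes the bias up to the discretization of the empirical CDFs; values outside the calibrated range are clipped, which is one reason calibration sample size matters.
    
    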

  6. Studies on the true coincidence correction in measuring filter samples by gamma spectrometry

    CERN Document Server

    Lian Qi; Chang Yong Fu; Xia Bing

    2002-01-01

    The true coincidence correction in measuring filter samples has been studied with high-efficiency HPGe gamma detectors. The true coincidence correction for a specific de-excitation case involving three excited levels has been analyzed, and typical analytical expressions for the true coincidence correction factors are given. From the relative efficiency measured on the detector surface with eight 'single'-energy gamma emitters and the efficiency for filter samples, the peak and total efficiency surfaces are fitted. The true coincidence correction factors of 60Co and 152Eu calculated from the efficiency surfaces agree well with experimental results
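    The paper analyzes a three-level de-excitation case; as a simpler worked illustration (an assumption here, not the paper's case), consider a two-gamma cascade: counts are summed out of a gamma ray's full-energy peak whenever its coincident partner also deposits energy in the detector, so the measured peak area is the true area times (1 - εt), where εt is the total efficiency for the coincident gamma.

    ```python
    def summing_out_correction(eps_total_coincident):
        """True-coincidence summing-out correction for a two-gamma cascade.

        measured = true * (1 - eps_t), so multiply the measured peak area by
        1 / (1 - eps_t) to recover the true count rate. eps_t is the TOTAL
        (not peak) efficiency for the coincident gamma, which is why the
        abstract fits a total-efficiency surface alongside the peak one.
        """
        if not 0.0 <= eps_total_coincident < 1.0:
            raise ValueError("total efficiency must be in [0, 1)")
        return 1.0 / (1.0 - eps_total_coincident)
    ```

    For example, a 20% total efficiency for the coincident gamma implies a 25% upward correction of the measured peak area; the effect grows with detector efficiency, which is why it matters most for high-efficiency HPGe detectors with samples close to the end cap.
    
    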

  7. Mapping species distributions with MAXENT using a geographically biased sample of presence data: a performance assessment of methods for correcting sampling bias.

    Science.gov (United States)

    Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean

    2014-01-01

    MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, the species-occurrence datasets used to train the model are often biased in geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensus guideline for accounting for it. We compared the performance of five bias correction methods on three datasets of species occurrence: one "virtual" dataset derived from a land cover map, and two actual datasets, for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling bias corresponding to potential types of empirical bias, applied the five correction methods to the biased samples, and compared the outputs of the distribution models to the unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of the methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, simple systematic sampling of the records consistently ranked among the best performers across the range of conditions tested, whereas the other methods performed more poorly in most cases. The strong effect of the initial conditions on correction performance highlights the need for further research toward a step-by-step guideline for accounting for sampling bias. Nevertheless, systematic sampling seems to be the most efficient way to correct sampling bias and should be advised in most cases.
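    Systematic sampling of occurrence records is commonly implemented as spatial filtering: keep at most one record per cell of a regular grid, so that densely surveyed areas no longer dominate the training data. A minimal sketch, in which the function name and the grid-cell size (e.g. in degrees) are assumptions for illustration:

    ```python
    import math

    def systematic_sample(lons, lats, cell_size):
        """Keep the first occurrence record encountered in each grid cell,
        evening out geographically clustered sampling effort.
        Returns the indices of the retained records."""
        kept = {}
        for i, (lon, lat) in enumerate(zip(lons, lats)):
            cell = (math.floor(lon / cell_size), math.floor(lat / cell_size))
            kept.setdefault(cell, i)      # first record wins within a cell
        return sorted(kept.values())
    ```

    With a 1-degree grid, a cluster of records around one locality collapses to a single training point, which is the mechanism by which this method evens out sampling effort before the SDM is fitted.
    
    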

  8. Critical evaluation of sample pretreatment techniques.

    Science.gov (United States)

    Hyötyläinen, Tuulia

    2009-06-01

    Sample preparation before chromatographic separation is the most time-consuming and error-prone part of the analytical procedure. Therefore, selecting and optimizing an appropriate sample preparation scheme is a key factor in the final success of the analysis, and the judicious choice of an appropriate procedure greatly influences the reliability and accuracy of a given analysis. The main objective of this review is to critically evaluate the applicability, disadvantages, and advantages of various sample preparation techniques. Particular emphasis is placed on extraction techniques suitable for both liquid and solid samples.

  9. Determination Of Activity Of Radionuclides In Moss-Soil Sample With Self-Absorption Correction

    International Nuclear Information System (INIS)

    Tran Thien Thanh; Chau Van Tao; Truong Thi Hong Loan; Hoang Duc Tam

    2011-01-01

    A Hyper Pure Germanium (HPGe) spectrometer system is a very powerful tool for radioactivity measurements. A systematic uncertainty in the full-energy peak efficiency arises from differences in matrix (density and chemical composition) between the calibration reference and the other bulk samples. For precise results from gamma-spectrum analysis, a correction for absorption effects in the sample should therefore be applied, especially for bulk samples. The results are presented and discussed in this paper. (author)

  10. Sampling or gambling

    Energy Technology Data Exchange (ETDEWEB)

    Gy, P.M.

    1981-12-01

    Sampling can be compared to no other technique. A mechanical sampler must above all be selected according to its aptitude for suppressing or reducing all components of the sampling error. Sampling is said to be correct when it gives every element making up the batch of matter submitted to sampling a uniform probability of being selected. A sampler must be correctly designed, built, installed, operated and maintained. When the conditions of sampling correctness are not strictly respected, the sampling error can no longer be controlled and can, unknown to the user, be unacceptably large: the sample is no longer representative. The implementation of an incorrect sampler is a form of gambling, and this paper intends to show that at this game the user is nearly always the loser in the long run. The users' and the manufacturers' interests may diverge, and the standards which should safeguard the users' interests very often fail to do so by tolerating or even recommending incorrect techniques, such as the implementation of too-narrow cutters traveling too fast through the stream to be sampled.

  11. A two-phase sampling survey for nonresponse and its paradata to correct nonresponse bias in a health surveillance survey.

    Science.gov (United States)

    Santin, G; Bénézet, L; Geoffroy-Perez, B; Bouyer, J; Guéguen, A

    2017-02-01

The decline in participation rates in surveys, including epidemiological surveillance surveys, has become a real concern since it may increase nonresponse bias. The aim of this study is to estimate the contribution of a complementary survey among a subsample of nonrespondents, and the additional contribution of paradata, in correcting for nonresponse bias in an occupational health surveillance survey. In 2010, 10,000 workers were randomly selected and sent a postal questionnaire. Sociodemographic data were available for the whole sample. After data collection of the questionnaires, a complementary survey among a random subsample of 500 nonrespondents was performed using a questionnaire administered by an interviewer. Paradata were collected for the complete subsample of the complementary survey. Nonresponse bias in the initial sample and in the combined samples was assessed using variables from administrative databases available for the whole sample, not subject to differential measurement errors. Corrected prevalences were estimated by a reweighting technique, first using the initial survey alone and then the initial and complementary surveys combined, under several assumptions regarding the missing data process. Results were compared by computing relative errors. The response rates of the initial and complementary surveys were 23.6% and 62.6%, respectively. For both the initial and the combined surveys, the relative errors decreased after correction for nonresponse on sociodemographic variables. For the combined surveys without paradata, relative errors decreased compared with the initial survey. The contribution of the paradata was weak. When a complex descriptive survey has a low response rate, a short complementary survey among nonrespondents, with a protocol which aims to maximize the response rate, is useful. The contribution of sociodemographic variables in correcting for nonresponse bias is important whereas the additional contribution of paradata in
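The reweighting idea behind the corrected prevalences can be sketched as inverse-probability weighting: estimate each member's response probability within classes known for the whole sample, then weight each respondent by the inverse of that probability. The toy data below are invented, not the study's:

```python
# Toy frame: (sociodemographic class, responded, outcome).
# Outcome is known only for respondents.
sample = [
    *[("young", True,  True )] * 15,
    *[("young", True,  False)] * 5,
    *[("young", False, None )] * 60,   # young respond rarely (20/80)
    *[("old",   True,  True )] * 20,
    *[("old",   True,  False)] * 60,
    *[("old",   False, None )] * 20,   # old respond often (80/100)
]

def propensity(cls):
    """Response probability within a class, estimated from the whole sample."""
    members = [s for s in sample if s[0] == cls]
    return sum(1 for s in members if s[1]) / len(members)

# Naive prevalence over respondents only: biased toward the "old" class.
resp = [s for s in sample if s[1]]
naive = sum(1 for s in resp if s[2]) / len(resp)          # 0.35

# Inverse-probability-weighted prevalence: each respondent counts 1/p.
num = sum(1 / propensity(s[0]) for s in resp if s[2])
den = sum(1 / propensity(s[0]) for s in resp)
weighted = num / den                                      # ~0.472
```

Because the underrepresented young class has higher prevalence here, the weighted estimate moves up from the naive one toward the value the full sample would have given under ignorable nonresponse within classes.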

  12. Corrections of arterial input function for dynamic H215O PET to assess perfusion of pelvic tumours: arterial blood sampling versus image extraction

    International Nuclear Information System (INIS)

    Luedemann, L; Sreenivasa, G; Michel, R; Rosner, C; Plotkin, M; Felix, R; Wust, P; Amthauer, H

    2006-01-01

Assessment of perfusion with 15O-labelled water (H215O) requires measurement of the arterial input function (AIF). The arterial time activity curve (TAC) measured using the peripheral sampling scheme requires corrections for delay and dispersion. In this study, parametrizations with and without arterial spillover correction for fitting of the tissue curve are evaluated. Additionally, a completely noninvasive method for generation of the AIF from a dynamic positron emission tomography (PET) acquisition is applied to assess perfusion of pelvic tumours. This method uses a volume of interest (VOI) to extract the TAC from the femoral artery. The VOI TAC is corrected for spillover using a separate tissue TAC and for recovery by determining the recovery coefficient on a coregistered CT data set. The techniques were applied in five patients with pelvic tumours who underwent a total of 11 examinations. Delay and dispersion correction of the blood TAC without arterial spillover correction yielded solutions inconsistent with physiology in seven examinations. Correction of arterial spillover increased the fitting accuracy and yielded consistent results in all patients. Generation of an AIF from PET image data was investigated as an alternative to arterial blood sampling and was shown to have an intrinsic potential to determine the AIF noninvasively and reproducibly. The AIF extracted from a VOI in a dynamic PET scan was similar in shape to the blood AIF but yielded significantly higher tissue perfusion values (mean of 104.0 ± 52.0%) and lower partition coefficients (-31.6 ± 24.2%). The perfusion values and partition coefficients determined with the VOI technique have to be corrected in order to compare the results with those of studies using a blood AIF
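The spillover and recovery corrections applied to the image-derived AIF amount to simple algebra on two measured curves. A minimal sketch, assuming a linear spillover model (the variable names and numbers are hypothetical, not the paper's formulation):

```python
def corrected_aif(c_voi, c_tissue, spillover, recovery):
    """Correct one image-derived arterial TAC sample for spillover and
    partial-volume loss.

    c_voi:     activity measured in the arterial VOI at one time point
    c_tissue:  activity in an adjacent tissue region (spillover source)
    spillover: fraction of tissue signal contaminating the VOI (0..1)
    recovery:  recovery coefficient from the coregistered CT (0..1)
    """
    return (c_voi - spillover * c_tissue) / recovery

# Apply to a short hypothetical TAC, point by point.
voi_tac = [10.0, 50.0, 30.0]
tissue_tac = [1.0, 5.0, 8.0]
aif = [corrected_aif(v, t, 0.2, 0.7) for v, t in zip(voi_tac, tissue_tac)]
```

Dividing by a recovery coefficient below 1 raises the AIF peak, which lowers the computed perfusion; that is the direction of the correction the abstract says is needed before comparing with blood-sampled AIFs.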

  13. A systematic comparison of motion artifact correction techniques for functional near-infrared spectroscopy

    DEFF Research Database (Denmark)

    Cooper, Robert J; Selb, Juliette; Gagnon, Louis

    2012-01-01

Principal component analysis, spline interpolation, wavelet analysis, and Kalman filtering approaches are compared to one another and to standard approaches using the accuracy of the recovered, simulated hemodynamic response function (HRF). Each of the four motion correction techniques we tested yields a significant reduction in the mean-squared error (MSE) and a significant increase in the contrast-to-noise ratio (CNR) of the recovered HRF when compared to no correction and compared to a process of rejecting motion-contaminated trials. Spline interpolation produces the largest average reduction in MSE (55%) while wavelet analysis produces the highest average increase in CNR (39%). On the basis of this analysis, we recommend the routine application of motion correction techniques (particularly spline interpolation or wavelet analysis) to minimize the impact of motion artifacts on functional NIRS data.

  14. Newly introduced sample preparation techniques: towards miniaturization.

    Science.gov (United States)

    Costa, Rosaria

    2014-01-01

    Sampling and sample preparation are of crucial importance in an analytical procedure, representing quite often a source of errors. The technique chosen for the isolation of analytes greatly affects the success of a chemical determination. On the other hand, growing concerns about environmental and human safety, along with the introduction of international regulations for quality control, have moved the interest of scientists towards specific needs. Newly introduced sample preparation techniques are challenged to meet new criteria: (i) miniaturization, (ii) higher sensitivity and selectivity, and (iii) automation. In this survey, the most recent techniques introduced in the field of sample preparation will be described and discussed, along with many examples of applications.

  15. Dual ring multilayer ionization chamber and theory-based correction technique for scanning proton therapy.

    Science.gov (United States)

    Takayanagi, Taisuke; Nihongi, Hideaki; Nishiuchi, Hideaki; Tadokoro, Masahiro; Ito, Yuki; Nakashima, Chihiro; Fujitaka, Shinichiro; Umezawa, Masumi; Matsuda, Koji; Sakae, Takeji; Terunuma, Toshiyuki

    2016-07-01

To develop a multilayer ionization chamber (MLIC) and a correction technique that suppresses differences between the MLIC and water phantom measurements in order to achieve fast and accurate depth dose measurements in pencil beam scanning proton therapy. The authors distinguish between a calibration procedure and an additional correction: 1) the calibration for variations in the air gap thickness and the electrometer gains is addressed without involving measurements in water; 2) the correction is addressed to suppress the difference between depth dose profiles in water and in the MLIC materials due to the nuclear interaction cross sections, by a semiempirical model tuned using measurements in water. In the correction technique, raw MLIC data are obtained for each energy layer and integrated after multiplying them by the correction factor, because the correction factor depends on incident energy. The MLIC described here has been designed especially for pencil beam scanning proton therapy. This MLIC is called a dual ring multilayer ionization chamber (DRMLIC). The shape of the electrodes allows the DRMLIC to measure both the percentage depth dose (PDD) and integrated depth dose (IDD) because ionization electrons are collected from inner and outer air gaps independently. IDDs for which the beam energies were 71.6, 120.6, 159, 180.6, and 221.4 MeV were measured and compared with water phantom results. Furthermore, the measured PDDs along the central axis of the proton field with a nominal field size of 10 × 10 cm² were compared. The spread out Bragg peak was 20 cm for fields with a range of 30.6 cm and 3 cm for fields with a range of 6.9 cm. The IDDs measured with the DRMLIC using the correction technique were consistent with those of the water phantom; except for the beam energy of 71.6 MeV, all of the points satisfied the 1% dose/1 mm distance to agreement criterion of the gamma index. The 71.6 MeV depth dose profile showed slight differences in the shallow

  16. Photon attenuation correction technique in SPECT based on nonlinear optimization

    International Nuclear Information System (INIS)

    Suzuki, Shigehito; Wakabayashi, Misato; Okuyama, Keiichi; Kuwamura, Susumu

    1998-01-01

    Photon attenuation correction in SPECT was made using a nonlinear optimization theory, in which an optimum image is searched so that the sum of square errors between observed and reprojected projection data is minimized. This correction technique consists of optimization and step-width algorithms, which determine at each iteration a pixel-by-pixel directional value of search and its step-width, respectively. We used the conjugate gradient and quasi-Newton methods as the optimization algorithm, and Curry rule and the quadratic function method as the step-width algorithm. Statistical fluctuations in the corrected image due to statistical noise in the emission projection data grew as the iteration increased, depending on the combination of optimization and step-width algorithms. To suppress them, smoothing for directional values was introduced. Computer experiments and clinical applications showed a pronounced reduction in statistical fluctuations of the corrected image for all combinations. Combinations using the conjugate gradient method were superior in noise characteristic and computation time. The use of that method with the quadratic function method was optimum if noise property was regarded as important. (author)
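The core of such a method, iteratively searching for an image that minimizes the sum of squared errors between observed and reprojected projection data, can be sketched with plain steepest descent on a tiny linear system (the paper uses conjugate gradient and quasi-Newton searches with dedicated step-width algorithms; this toy uses a fixed step, and the matrix and data are invented):

```python
# Minimize f(x) = ||A x - y||^2 by steepest descent with a fixed step.
# A stands in for the (attenuated) projection operator.
A = [[1.0, 0.5],
     [0.2, 1.0],
     [0.3, 0.3]]
y = [2.0, 1.4, 0.9]   # "observed" projection data

def matvec(M, v):
    return [sum(m * vi for m, vi in zip(row, v)) for row in M]

def residual(x):
    # reprojected data minus observed data
    return [ax - yi for ax, yi in zip(matvec(A, x), y)]

def grad(x):
    # gradient of ||Ax - y||^2 is 2 A^T (Ax - y)
    r = residual(x)
    return [2 * sum(A[i][j] * r[i] for i in range(len(A))) for j in range(2)]

x = [0.0, 0.0]
step = 0.1
for _ in range(500):
    g = grad(x)
    x = [xi - step * gi for xi, gi in zip(x, g)]

sse = sum(r * r for r in residual(x))   # converges to the least-squares minimum
```

The step-width algorithms the abstract mentions (Curry rule, quadratic fit) replace the fixed `step` here with a per-iteration line search along the descent direction.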

  17. Depth-profiling by confocal Raman microscopy (CRM): data correction by numerical techniques.

    Science.gov (United States)

    Tomba, J Pablo; Eliçabe, Guillermo E; Miguel, María de la Paz; Perez, Claudio J

    2011-03-01

    The data obtained in confocal Raman microscopy (CRM) depth profiling experiments with dry optics are subjected to significant distortions, including an artificial compression of the depth scale, due to the combined influence of diffraction, refraction, and instrumental effects that operate on the measurement. This work explores the use of (1) regularized deconvolution and (2) the application of simple rescaling of the depth scale as methodologies to obtain an improved, more precise, confocal response. The deconvolution scheme is based on a simple predictive model for depth resolution and the use of regularization techniques to minimize the dramatic oscillations in the recovered response typical of problem inversion. That scheme is first evaluated using computer simulations on situations that reproduce smooth and sharp sample transitions between two materials and finally it is applied to correct genuine experimental data, obtained in this case from a sharp transition (planar interface) between two polymeric materials. It is shown that the methodology recovers very well most of the lost profile features in all the analyzed situations. The use of simple rescaling appears to be only useful for correcting smooth transitions, particularly those extended over distances larger than those spanned by the operative depth resolution, which limits the strategy to the study of profiles near the sample surface. However, through computer simulations, it is shown that the use of water immersion objectives may help to reduce optical distortions and to expand the application window of this simple methodology, which could be useful, for instance, to safely monitor Fickean sorption/desorption of penetrants in polymer films/coatings in a nearly noninvasive way.
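The regularized deconvolution idea can be sketched with Tikhonov regularization, which damps the oscillations that plain inversion produces: solve (AᵀA + λI)x = Aᵀb instead of inverting A directly. The Gaussian depth response and all numbers below are assumptions for illustration, not the paper's resolution model:

```python
import numpy as np

# Toy depth profile blurred by a Gaussian depth response, then recovered by
# Tikhonov-regularized deconvolution:
#   x_reg = argmin ||A x - b||^2 + lam ||x||^2  =>  (A^T A + lam I) x = A^T b
n = 50
z = np.arange(n)
true = np.where(z < 25, 1.0, 0.0)     # sharp interface between two materials

sigma = 3.0                            # depth resolution in "depth pixels"
A = np.exp(-0.5 * ((z[:, None] - z[None, :]) / sigma) ** 2)
A /= A.sum(axis=1, keepdims=True)      # each measured point averages its neighbours

b = A @ true                           # measured (blurred) confocal response

lam = 1e-3
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

err_blurred = np.sum((b - true) ** 2)
err_recovered = np.sum((x_reg - true) ** 2)   # much closer to the true profile
```

Without the λI term the solve amplifies tiny singular values and the recovered profile oscillates wildly; the regularization trades a little bias for that stability, which is the behaviour the abstract describes.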

  18. A precise technique for manufacturing correction coil

    International Nuclear Information System (INIS)

    Schieber, L.

    1992-01-01

An automated method of manufacturing correction coils has been developed which provides a precise embodiment of the coil design. Numerically controlled machines have been developed to accurately position coil windings on the beam tube. Two types of machines have been built. One machine bonds the wire to a substrate which is wrapped around the beam tube after it is completed, while the second machine bonds the wire directly to the beam tube. Both machines use the Multiwire® technique of bonding the wire to the substrate utilizing an ultrasonic stylus. These machines are being used to manufacture coils for both the SSC and RHIC

  19. Determination of true coincidence correction factors using Monte-Carlo simulation techniques

    Directory of Open Access Journals (Sweden)

    Chionis Dionysios A.

    2014-01-01

Full Text Available The aim of this work is the numerical calculation of the true coincidence correction factors by means of Monte-Carlo simulation techniques. For this purpose, the Monte Carlo computer code PENELOPE was used and the main program PENMAIN was properly modified in order to include the effect of the true coincidence phenomenon. The modified main program, which takes into consideration the true coincidence phenomenon, was used for the full energy peak efficiency determination of an XtRa Ge detector with relative efficiency 104%, and the results obtained for the 1173 keV and 1332 keV photons of 60Co were found consistent with the respective experimental ones. The true coincidence correction factors were calculated as the ratio of the full energy peak efficiencies determined from the original main program PENMAIN and from the modified main program. The developed technique was applied for 57Co, 88Y, and 134Cs and for two source-to-detector geometries. The results obtained were compared with true coincidence correction factors calculated with the "TrueCoinc" program and the relative bias was found to be less than 2%, 4%, and 8% for 57Co, 88Y, and 134Cs, respectively.

  20. Correction for the absorption of plutonium alpha particles in filter paper used for dust sampling

    Energy Technology Data Exchange (ETDEWEB)

    Simons, J G

    1956-01-01

The sample of air-borne dust collected on a filter paper when laboratory air is monitored for plutonium with the 1195 portable dust sampling unit may be regarded, for counting purposes, as a thick source with a non-uniform distribution of alpha-active plutonium. Experiments have been carried out to determine a correction factor to be applied to the observed count on the filter paper sample to correct for internal absorption in the paper and in the dust layer. From the results obtained it is recommended that a correction factor of 2 be used.

  1. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling

    Science.gov (United States)

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C.; Joyce, Kevin P.; Kovalenko, Andriy

    2016-11-01

Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pK_a correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.
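For a monoprotic acid, the pKa correction mentioned above follows the textbook relation between partition and distribution coefficients. A sketch of that standard formula (the paper's exact correction may differ in detail):

```python
import math

def log_d_acid(log_p: float, pka: float, ph: float = 7.4) -> float:
    """Distribution coefficient of a monoprotic acid from its partition
    coefficient, assuming only the neutral form partitions:

        logD = logP - log10(1 + 10**(pH - pKa))
    """
    return log_p - math.log10(1.0 + 10.0 ** (ph - pka))

# A weak acid with logP = 2.0 and pKa = 4.4 at pH 7.4 is mostly ionized,
# so logD falls roughly 3 units below logP.
d = log_d_acid(2.0, 4.4)
```

When pKa is far above the pH, the correction term vanishes and logD approaches logP, which is why the correction only matters for the ionizable SAMPL5 solutes.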

  2. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling.

    Science.gov (United States)

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C; Joyce, Kevin P; Kovalenko, Andriy

    2016-11-01

Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pK_a correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.

  3. A Technique for Real-Time Ionospheric Ranging Error Correction Based On Radar Dual-Frequency Detection

    Science.gov (United States)

    Lyu, Jiang-Tao; Zhou, Chen

    2017-12-01

Ionospheric refraction is one of the principal error sources limiting the accuracy of radar systems for space target detection. High-accuracy measurement of the ionospheric electron density along the propagation path of the radar wave is the most important procedure for ionospheric refraction correction. Traditionally, an ionospheric model or ionospheric detection instruments, like ionosondes or GPS receivers, are employed for obtaining the electron density. However, neither method is capable of satisfying the correction accuracy requirements of advanced space target radar systems. In this study, we propose a novel technique for ionospheric refraction correction based on radar dual-frequency detection. Radar target range measurements at two adjacent frequencies are utilized for calculating the electron density integral exactly along the propagation path of the radar wave, which can generate an accurate ionospheric range correction. The implementation of radar dual-frequency detection is validated by a P band radar located in midlatitude China. The experimental results show that this novel technique is more accurate than traditional ionospheric model correction. The technique proposed in this study is very promising for high-accuracy radar detection and tracking of objects in geospace.
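The dual-frequency principle rests on the first-order ionospheric group delay, ΔR = 40.3·TEC/f²: ranges measured at two frequencies determine the total electron content (TEC) along the path, and hence the corrected range. A sketch with synthetic numbers (the frequencies and target parameters are invented, not the paper's radar):

```python
K = 40.3  # first-order ionospheric constant, m·Hz² per (electrons/m²)

def correct_range(r1: float, r2: float, f1: float, f2: float):
    """Return (true_range, tec) from ranges r1, r2 (m) measured at
    frequencies f1, f2 (Hz), using the model R_i = R + K*TEC/f_i**2.
    """
    tec = (r1 - r2) / (K * (1.0 / f1**2 - 1.0 / f2**2))
    r_true = r1 - K * tec / f1**2
    return r_true, tec

# Synthetic check: build the two measurements from a known range and TEC,
# then recover both.
f1, f2 = 430e6, 440e6            # two adjacent P-band-ish frequencies (assumed)
r, tec = 1_000_000.0, 5e17       # 1000 km target, 50 TECU along the path
r1 = r + K * tec / f1**2
r2 = r + K * tec / f2**2
r_est, tec_est = correct_range(r1, r2, f1, f2)
```

Because the two frequencies are close, the range difference r1 - r2 is only a few metres, which is why the method demands high-precision ranging at both frequencies.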

  4. A systematic comparison of motion artifact correction techniques for functional near-infrared spectroscopy.

    Science.gov (United States)

    Cooper, Robert J; Selb, Juliette; Gagnon, Louis; Phillip, Dorte; Schytz, Henrik W; Iversen, Helle K; Ashina, Messoud; Boas, David A

    2012-01-01

Near-infrared spectroscopy (NIRS) is susceptible to signal artifacts caused by relative motion between NIRS optical fibers and the scalp. These artifacts can be very damaging to the utility of functional NIRS, particularly in challenging subject groups where motion can be unavoidable. A number of approaches to the removal of motion artifacts from NIRS data have been suggested. In this paper we systematically compare the utility of a variety of published NIRS motion correction techniques using a simulated functional activation signal added to 20 real NIRS datasets which contain motion artifacts. Principal component analysis, spline interpolation, wavelet analysis, and Kalman filtering approaches are compared to one another and to standard approaches using the accuracy of the recovered, simulated hemodynamic response function (HRF). Each of the four motion correction techniques we tested yields a significant reduction in the mean-squared error (MSE) and significant increase in the contrast-to-noise ratio (CNR) of the recovered HRF when compared to no correction and compared to a process of rejecting motion-contaminated trials. Spline interpolation produces the largest average reduction in MSE (55%) while wavelet analysis produces the highest average increase in CNR (39%). On the basis of this analysis, we recommend the routine application of motion correction techniques (particularly spline interpolation or wavelet analysis) to minimize the impact of motion artifacts on functional NIRS data.
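Of the compared techniques, spline interpolation models the slow artifact with a heavily smoothed spline over the motion-flagged segment and subtracts it. A minimal sketch on a synthetic signal using scipy's UnivariateSpline (the segment detection, smoothing parameter, and signal are assumptions for illustration, not the published method's settings):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)
# Fast "hemodynamic" oscillation plus measurement noise.
signal = 0.5 * np.sin(2 * np.pi * 1.0 * t) + 0.02 * rng.standard_normal(t.size)

# Inject a slow baseline-shift artifact (ramp then plateau) over a known segment.
lo, hi = 200, 300
artifact = np.zeros_like(t)
artifact[lo:hi] = np.linspace(0.0, 2.0, hi - lo)
artifact[hi:] = 2.0
contaminated = signal + artifact

# Spline correction: fit a heavily smoothed spline over the motion-flagged
# portion and subtract it, keeping the faster oscillation.
seg = slice(lo, t.size)
spl = UnivariateSpline(t[seg], contaminated[seg], s=40.0)
corrected = contaminated.copy()
corrected[seg] -= spl(t[seg])

# Baseline jump across the artifact, before and after correction.
jump_before = abs(contaminated[hi:hi + 50].mean() - contaminated[lo - 50:lo].mean())
jump_after = abs(corrected[hi:hi + 50].mean() - corrected[lo - 50:lo].mean())
```

The smoothing parameter `s` is the critical choice: too small and the spline follows (and removes) the hemodynamics, too large and the artifact survives.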

  5. Evaluation of relative radiometric correction techniques on Landsat 8 OLI sensor data

    Science.gov (United States)

    Novelli, Antonio; Caradonna, Grazia; Tarantino, Eufemia

    2016-08-01

The quality of information derived from processed remotely sensed data may depend upon many factors, mostly related to the extent to which data acquisition is influenced by atmospheric conditions, topographic effects, sun angle and so on. The goal of radiometric corrections is to reduce such effects in order to enhance the performance of change detection analysis. There are two approaches to radiometric correction: absolute and relative calibration. Due to the large amount of free data products available, absolute radiometric calibration techniques may be time consuming and financially expensive because of the necessary inputs for absolute calibration models (often these data are not available and can be difficult to obtain). The relative approach to radiometric correction, known as relative radiometric normalization, is preferred for some research topics because no in situ ancillary data, at the time of satellite overpasses, are required. In this study we evaluated three well known relative radiometric correction techniques using two Landsat 8 - OLI scenes over a subset area of the Apulia Region (southern Italy): the IR-MAD (Iteratively Reweighted Multivariate Alteration Detection), the HM (Histogram Matching) and the DOS (Dark Object Subtraction). IR-MAD results were statistically assessed within a territory with an extremely heterogeneous landscape, and all computations were performed in a Matlab environment. The panchromatic and thermal bands were excluded from the comparisons.
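Of the three, DOS is the simplest to state: assume the darkest pixel in each band should be near-zero reflectance, and subtract that band minimum everywhere. A generic sketch (not the authors' exact implementation, and with invented digital numbers):

```python
import numpy as np

def dark_object_subtraction(image: np.ndarray) -> np.ndarray:
    """Relative radiometric correction by dark object subtraction.

    image: array of shape (bands, rows, cols) of digital numbers.
    Subtracts each band's minimum (the assumed dark object, taken as
    atmospheric path radiance) and clips at zero.
    """
    dark = image.min(axis=(1, 2), keepdims=True)
    return np.clip(image - dark, 0, None)

rng = np.random.default_rng(0)
scene = rng.integers(120, 4000, size=(4, 8, 8)).astype(float)  # 4 toy bands
corrected = dark_object_subtraction(scene)
```

IR-MAD and histogram matching are scene-to-scene normalizations and need a reference image; DOS, as above, works on a single scene, which is part of why it is so widely used despite its cruder physical assumption.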

  6. Dual ring multilayer ionization chamber and theory-based correction technique for scanning proton therapy

    International Nuclear Information System (INIS)

    Takayanagi, Taisuke; Nishiuchi, Hideaki; Fujitaka, Shinichiro; Umezawa, Masumi; Nihongi, Hideaki; Tadokoro, Masahiro; Ito, Yuki; Nakashima, Chihiro; Matsuda, Koji; Sakae, Takeji; Terunuma, Toshiyuki

    2016-01-01

Purpose: To develop a multilayer ionization chamber (MLIC) and a correction technique that suppresses differences between the MLIC and water phantom measurements in order to achieve fast and accurate depth dose measurements in pencil beam scanning proton therapy. Methods: The authors distinguish between a calibration procedure and an additional correction: 1) the calibration for variations in the air gap thickness and the electrometer gains is addressed without involving measurements in water; 2) the correction is addressed to suppress the difference between depth dose profiles in water and in the MLIC materials due to the nuclear interaction cross sections, by a semiempirical model tuned using measurements in water. In the correction technique, raw MLIC data are obtained for each energy layer and integrated after multiplying them by the correction factor, because the correction factor depends on incident energy. The MLIC described here has been designed especially for pencil beam scanning proton therapy. This MLIC is called a dual ring multilayer ionization chamber (DRMLIC). The shape of the electrodes allows the DRMLIC to measure both the percentage depth dose (PDD) and integrated depth dose (IDD) because ionization electrons are collected from inner and outer air gaps independently. Results: IDDs for which the beam energies were 71.6, 120.6, 159, 180.6, and 221.4 MeV were measured and compared with water phantom results. Furthermore, the measured PDDs along the central axis of the proton field with a nominal field size of 10 × 10 cm² were compared. The spread out Bragg peak was 20 cm for fields with a range of 30.6 cm and 3 cm for fields with a range of 6.9 cm. The IDDs measured with the DRMLIC using the correction technique were consistent with those of the water phantom; except for the beam energy of 71.6 MeV, all of the points satisfied the 1% dose/1 mm distance to agreement criterion of the gamma index. The 71.6 MeV depth dose profile showed slight

  7. Comparison of sampling techniques for use in SYVAC

    International Nuclear Information System (INIS)

    Dalrymple, G.J.

    1984-01-01

    The Stephen Howe review (reference TR-STH-1) recommended the use of a deterministic generator (DG) sampling technique for sampling the input values to the SYVAC (SYstems Variability Analysis Code) program. This technique was compared with Monte Carlo simple random sampling (MC) by taking a 1000 run case of SYVAC using MC as the reference case. The results show that DG appears relatively inaccurate for most values of consequence when used with 11 sample intervals. If 22 sample intervals are used then DG generates cumulative distribution functions that are statistically similar to the reference distribution. 400 runs of DG or MC are adequate to generate a representative cumulative distribution function. The MC technique appears to perform better than DG for the same number of runs. However, the DG predicts higher doses and in view of the importance of generating data in the high dose region this sampling technique with 22 sample intervals is recommended for use in SYVAC. (author)
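The two schemes can be contrasted in miniature: MC draws input values independently at random, while a DG-style scheme divides the input distribution into equal-probability intervals and samples within each. The stratified stand-in below only illustrates interval sampling and is not the actual SYVAC generator:

```python
import random

random.seed(7)

def mc_sample(n):
    """Monte Carlo simple random sampling of U(0,1)."""
    return [random.random() for _ in range(n)]

def interval_sample(n_intervals, per_interval):
    """DG-style interval scheme: equal-probability intervals,
    sampled uniformly within each interval."""
    vals = []
    for i in range(n_intervals):
        lo, hi = i / n_intervals, (i + 1) / n_intervals
        vals.extend(lo + (hi - lo) * random.random() for _ in range(per_interval))
    return vals

mc = mc_sample(440)
dg = interval_sample(22, 20)   # 22 intervals, as recommended in the review

mean_mc = sum(mc) / len(mc)
mean_dg = sum(dg) / len(dg)    # interval sampling pins the mean much tighter
```

Interval sampling guarantees coverage of the tails of the input distribution, which matters for the high-dose region the review highlights; plain MC covers them only in expectation.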

  8. Proteomic Challenges: Sample Preparation Techniques for Microgram-Quantity Protein Analysis from Biological Samples

    Directory of Open Access Journals (Sweden)

    Peter Feist

    2015-02-01

Full Text Available Proteins regulate many cellular functions and analyzing the presence and abundance of proteins in biological samples are central focuses in proteomics. The discovery and validation of biomarkers, pathways, and drug targets for various diseases can be accomplished using mass spectrometry-based proteomics. However, with mass-limited samples like tumor biopsies, it can be challenging to obtain sufficient amounts of proteins to generate high-quality mass spectrometric data. Techniques developed for macroscale quantities recover sufficient amounts of protein from milligram quantities of starting material, but sample losses become crippling with these techniques when only microgram amounts of material are available. To combat this challenge, proteomicists have developed micro-scale techniques that are compatible with decreased sample size (100 μg or lower) and still enable excellent proteome coverage. Extraction, contaminant removal, protein quantitation, and sample handling techniques for the microgram protein range are reviewed here, with an emphasis on liquid chromatography and bottom-up mass spectrometry-compatible techniques. Also, a range of biological specimens, including mammalian tissues and model cell culture systems, are discussed.

  9. Proteomic Challenges: Sample Preparation Techniques for Microgram-Quantity Protein Analysis from Biological Samples

    Science.gov (United States)

    Feist, Peter; Hummon, Amanda B.

    2015-01-01

    Proteins regulate many cellular functions and analyzing the presence and abundance of proteins in biological samples are central focuses in proteomics. The discovery and validation of biomarkers, pathways, and drug targets for various diseases can be accomplished using mass spectrometry-based proteomics. However, with mass-limited samples like tumor biopsies, it can be challenging to obtain sufficient amounts of proteins to generate high-quality mass spectrometric data. Techniques developed for macroscale quantities recover sufficient amounts of protein from milligram quantities of starting material, but sample losses become crippling with these techniques when only microgram amounts of material are available. To combat this challenge, proteomicists have developed micro-scale techniques that are compatible with decreased sample size (100 μg or lower) and still enable excellent proteome coverage. Extraction, contaminant removal, protein quantitation, and sample handling techniques for the microgram protein range are reviewed here, with an emphasis on liquid chromatography and bottom-up mass spectrometry-compatible techniques. Also, a range of biological specimens, including mammalian tissues and model cell culture systems, are discussed. PMID:25664860

  10. Proteomic challenges: sample preparation techniques for microgram-quantity protein analysis from biological samples.

    Science.gov (United States)

    Feist, Peter; Hummon, Amanda B

    2015-02-05

    Proteins regulate many cellular functions and analyzing the presence and abundance of proteins in biological samples are central focuses in proteomics. The discovery and validation of biomarkers, pathways, and drug targets for various diseases can be accomplished using mass spectrometry-based proteomics. However, with mass-limited samples like tumor biopsies, it can be challenging to obtain sufficient amounts of proteins to generate high-quality mass spectrometric data. Techniques developed for macroscale quantities recover sufficient amounts of protein from milligram quantities of starting material, but sample losses become crippling with these techniques when only microgram amounts of material are available. To combat this challenge, proteomicists have developed micro-scale techniques that are compatible with decreased sample size (100 μg or lower) and still enable excellent proteome coverage. Extraction, contaminant removal, protein quantitation, and sample handling techniques for the microgram protein range are reviewed here, with an emphasis on liquid chromatography and bottom-up mass spectrometry-compatible techniques. Also, a range of biological specimens, including mammalian tissues and model cell culture systems, are discussed.

  11. Correction to the count-rate detection limit and sample/blank time-allocation methods

    International Nuclear Information System (INIS)

    Alvarez, Joseph L.

    2013-01-01

    A common form of count-rate detection limits contains a propagation of uncertainty error. This error originated in methods that minimize the uncertainty in the subtraction of the blank counts from the gross sample counts by allocating the blank and sample counting times. Correct uncertainty propagation showed that the time-allocation equations have no solution. This publication presents the correct form of count-rate detection limits. Highlights: •The paper demonstrated a proper method of propagating the uncertainty of count-rate differences. •The standard count-rate detection limits were in error. •Count-time allocation methods for minimum uncertainty were in error. •The paper presented the correct form of the count-rate detection limit. •The paper discussed the confusion between count-rate uncertainty and count uncertainty.
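The propagation step the abstract refers to can be illustrated with a standard Poisson sketch for a net count rate; the function name and numbers below are illustrative, not taken from the paper:

```python
import math

def net_rate_and_sigma(gross_counts, t_gross, blank_counts, t_blank):
    """Net count rate and its propagated standard uncertainty.

    Counts are Poisson distributed, so var(N) = N; the variances of the
    two independent rates N/t add when the blank rate is subtracted."""
    rate = gross_counts / t_gross - blank_counts / t_blank
    sigma = math.sqrt(gross_counts / t_gross**2 + blank_counts / t_blank**2)
    return rate, sigma

# 400 gross counts in 100 s minus 100 blank counts in 100 s
r, s = net_rate_and_sigma(400, 100.0, 100, 100.0)  # r = 3.0, s ≈ 0.224 counts/s
```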

  12. Comparison of online IGRT techniques for prostate IMRT treatment: Adaptive vs repositioning correction

    International Nuclear Information System (INIS)

    Thongphiew, Danthai; Wu, Q. Jackie; Lee, W. Robert; Chankong, Vira; Yoo, Sua; McMahon, Ryan; Yin Fangfang

    2009-01-01

    This study compares three online image guidance (IGRT) techniques for prostate IMRT treatment: bony-anatomy matching, soft-tissue matching, and online replanning. Six prostate IMRT patients were studied. Five daily CBCT scans from the first week were acquired for each patient to provide representative "snapshots" of anatomical variations during the course of treatment. Initial IMRT plans were designed for each patient with seven coplanar 15 MV beams on an Eclipse treatment planning system. Two plans were created, one with a PTV margin of 10 mm and another with a 5 mm PTV margin. Based on these plans, the delivered dose distribution to each CBCT anatomy was evaluated to compare bony-anatomy matching, soft-tissue matching, and online replanning. Matching based on bony anatomy was evaluated using the 10 mm PTV margin ("bone10"). Soft-tissue matching was evaluated using both the 10 mm ("soft10") and 5 mm ("soft5") PTV margins. Online reoptimization was evaluated using the 5 mm PTV margin ("adapt"). The replanning process used the original dose distribution as the basis and linear goal programming techniques for reoptimization. The reoptimized plans were finished in less than 2 min for all cases. Using each IGRT technique, the delivered dose distribution was evaluated on all 30 CBCT scans (6 patients × 5 CBCT/patient). The mean minimum dose (in percentage of the prescription dose) to the CTV over five treatment fractions was in the ranges of 99%-100% (SD=0.1%-0.8%), 65%-98% (SD=0.4%-19.5%), 87%-99% (SD=0.7%-23.3%), and 95%-99% (SD=0.4%-10.4%) for the adapt, bone10, soft5, and soft10 techniques, respectively. Compared to the patient position correction techniques, the online reoptimization technique also showed improvement in OAR sparing when organ motion/deformations were large. For the bladder, the adapt technique had the best (minimum) D90, D50, and D30 values for 24, 17, and 15 fractions out of 30 total fractions, while it also had the best D90, D50, and D30 values for

  13. NAIL SAMPLING TECHNIQUE AND ITS INTERPRETATION

    Directory of Open Access Journals (Sweden)

    TZAR MN

    2011-01-01

    Full Text Available The clinical suspicion of onychomycosis, based on the appearance of the nails, requires culture for confirmation. This is because treatment requires prolonged use of systemic agents which may cause side effects. One of the common problems encountered is improper nail sampling technique, which results in loss of essential information. The unfamiliar terminologies used in reporting culture results may intimidate physicians, resulting in misinterpretation, and hamper treatment decisions. This article provides a simple guide to nail sampling technique and the interpretation of culture results.

  14. Near-station terrain corrections for gravity data by a surface-integral technique

    Science.gov (United States)

    Gettings, M.E.

    1982-01-01

    A new method of computing gravity terrain corrections by use of a digitizer and digital computer can result in substantial savings in the time and manual labor required to perform such corrections by conventional manual ring-chart techniques. The method is typically applied to estimate terrain effects for topography near the station, for example within 3 km of the station, although it has been used successfully to a radius of 15 km to estimate corrections in areas where topographic mapping is poor. Points (about 20) that define topographic maxima, minima, and changes in the slope gradient are picked on the topographic map, within the desired radius of correction about the station. Particular attention must be paid to the area immediately surrounding the station to ensure a good topographic representation. The horizontal and vertical coordinates of these points are entered into the computer, usually by means of a digitizer. The computer then fits a multiquadric surface to the input points to form an analytic representation of the surface. By means of the divergence theorem, the gravity effect of an interior closed solid can be expressed as a surface integral, and the terrain correction is calculated by numerical evaluation of the integral over the surfaces of a cylinder, the vertical sides of which are at the correction radius about the station, the flat bottom surface at the topographic minimum, and the upper surface given by the multiquadric equation. The method has been tested with favorable results against models for which an exact result is available and against manually computed field-station locations in areas of rugged topography. By increasing the number of points defining the topographic surface, any desired degree of accuracy can be obtained. The method is more objective than manual ring-chart techniques because no average compartment elevations need be estimated.

  15. Development of sampling techniques for ITER Type B radwaste

    International Nuclear Information System (INIS)

    Hong, Kwon Pyo; Kim, Sung Geun; Jung, Sang Hee; Oh, Wan Ho; Park, Myung Chul; Kim, Hee Moon; Ahn, Sang Bok

    2016-01-01

    There are several difficulties and limitations in the sampling activities. As the Type B radwaste components are mostly metallic (mostly stainless steel) and bulky (∼1 m in size and ∼100 mm in thickness), it is difficult to take samples from the surface of Type B radwaste by remote operation. In addition, sampling should be performed without the use of any liquid coolant to avoid the spread of contamination, and all sampling procedures are carried out in the hot cell red zone by remote operation. Three kinds of sampling techniques are being developed: core sampling, chip sampling, and wedge sampling, which are the candidate sampling techniques to be applied in the ITER hot cell. The materials sampled are stainless steel or Cu alloy blocks that simulate ITER Type B radwaste. The best sampling technique for ITER Type B radwaste among the three will be suggested in several months, after the related experiment is finished

  16. Development of sampling techniques for ITER Type B radwaste

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Kwon Pyo; Kim, Sung Geun; Jung, Sang Hee; Oh, Wan Ho; Park, Myung Chul; Kim, Hee Moon; Ahn, Sang Bok [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    There are several difficulties and limitations in the sampling activities. As the Type B radwaste components are mostly metallic (mostly stainless steel) and bulky (∼1 m in size and ∼100 mm in thickness), it is difficult to take samples from the surface of Type B radwaste by remote operation. In addition, sampling should be performed without the use of any liquid coolant to avoid the spread of contamination, and all sampling procedures are carried out in the hot cell red zone by remote operation. Three kinds of sampling techniques are being developed: core sampling, chip sampling, and wedge sampling, which are the candidate sampling techniques to be applied in the ITER hot cell. The materials sampled are stainless steel or Cu alloy blocks that simulate ITER Type B radwaste. The best sampling technique for ITER Type B radwaste among the three will be suggested in several months, after the related experiment is finished.

  17. Building a new predictor for multiple linear regression technique-based corrective maintenance turnaround time.

    Science.gov (United States)

    Cruz, Antonio M; Barr, Cameron; Puñales-Pozo, Elsa

    2008-01-01

    This research's main goals were to build a predictor for estimating values of a turnaround time (TAT) indicator and to use a numerical clustering technique to find possible causes of undesirable TAT values. The following stages were used: domain understanding, data characterisation and sample reduction, and insight characterisation. Multiple linear regression and clustering techniques were used to build the TAT predictor and to improve corrective maintenance task efficiency in a clinical engineering department (CED). The variables contributing to the model were clinical engineering department response time (CE(rt), 0.415 positive coefficient), stock service response time (Stock(rt), 0.734 positive coefficient), priority level (0.21 positive coefficient) and service time (0.06 positive coefficient). The regression process showed heavy reliance on Stock(rt), CE(rt) and priority, in that order. Clustering techniques revealed the main causes of high TAT values. This examination has provided a means for analysing current technical service quality and effectiveness. In doing so, it has demonstrated a process for identifying areas and methods of improvement and a model against which to analyse these methods' effectiveness.
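As a sketch of the regression step, the snippet below fits an ordinary-least-squares model on synthetic data whose generating coefficients reuse the signs and magnitudes reported above; the variable names and data are hypothetical stand-ins for the study's maintenance records:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical predictors, named after the abstract's variables
ce_rt = rng.uniform(1, 10, n)          # CE(rt): clinical engineering response time
stock_rt = rng.uniform(1, 10, n)       # Stock(rt): stock service response time
priority = rng.integers(1, 4, n).astype(float)
service = rng.uniform(0.5, 5, n)       # service time

# Synthetic TAT built from the reported coefficient values plus noise
tat = (0.415 * ce_rt + 0.734 * stock_rt + 0.21 * priority
       + 0.06 * service + rng.normal(0, 0.1, n))

# Ordinary least squares with an intercept column
X = np.column_stack([ce_rt, stock_rt, priority, service, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, tat, rcond=None)
# coef[:4] recovers approximately (0.415, 0.734, 0.21, 0.06)
```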

  18. Calculation of coincidence summing corrections for a specific small soil sample geometry

    Energy Technology Data Exchange (ETDEWEB)

    Helmer, R.G.; Gehrke, R.J.

    1996-10-01

    Previously, a system was developed at the INEL for measuring the γ-ray emitting nuclides in small soil samples for the purpose of environmental monitoring. These samples were counted close to a ≈20% Ge detector and, therefore, it was necessary to take into account the coincidence summing that occurs for some nuclides. In order to improve the technical basis for the coincidence summing corrections, the authors have carried out a study of the variation in the coincidence summing probability with position within the sample volume. A Monte Carlo electron and photon transport code (CYLTRAN) was used to compute peak and total efficiencies for various photon energies from 30 to 2,000 keV at 30 points throughout the sample volume. The geometry for these calculations included the various components of the detector and source along with the shielding. The associated coincidence summing corrections were computed at these 30 positions in the sample volume and then averaged for the whole source. The influence of the soil and the detector shielding on the efficiencies was investigated.

  19. Boat sampling technique for assessment of ageing of components

    International Nuclear Information System (INIS)

    Kumar, Kundan; Shyam, T.V.; Kayal, J.N.; Rupani, B.B.

    2006-01-01

    Boat sampling technique (BST) is a surface sampling technique, which has been developed for obtaining, in-situ, metal samples from the surface of an operating component without affecting its operating service life. The BST is non-destructive in nature and the sample is obtained without plastic deformation or without thermal degradation of the parent material. The shape and size of the sample depends upon the shape of the cutter and the surface geometry of the parent material. Miniature test specimens are generated from the sample and the specimens are subjected to various tests, viz. Metallurgical Evaluation, Metallographic Evaluation, Micro-hardness Evaluation, sensitisation test, small punch test etc. to confirm the integrity and assessment of safe operating life of the component. This paper highlights design objective of boat sampling technique, description of sampling module, sampling cutter and its performance evaluation, cutting process, boat samples, operational sequence of sampling module, qualification of sampling module, qualification of sampling technique, qualification of scooped region of the parent material, sample retrieval system, inspection, testing and examination to be carried out on the boat samples and scooped region. (author)

  20. Correcting sample drift using Fourier harmonics.

    Science.gov (United States)

    Bárcena-González, G; Guerrero-Lebrero, M P; Guerrero, E; Reyes, D F; Braza, V; Yañez, A; Nuñez-Moraleda, B; González, D; Galindo, P L

    2018-07-01

    During image acquisition of crystalline materials by high-resolution scanning transmission electron microscopy, sample drift can lead to distortions and shears that hinder quantitative analysis and characterization. In order to measure and correct this effect, several authors have proposed methodologies that make use of series of images. In this work, we introduce a methodology to determine the drift angle via Fourier analysis of a single image, based on measurements of the angles of the second Fourier harmonics in different quadrants. Two different approaches, both independent of the angle of acquisition of the image, are evaluated. In addition, our results demonstrate that the determination of the drift angle is more accurate when using the measurements of non-consecutive quadrants if the angle of acquisition is an odd multiple of 45°. Copyright © 2018 Elsevier Ltd. All rights reserved.
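The core idea of reading an orientation angle from a Fourier harmonic can be shown on a synthetic fringe image; this toy sketch locates a single lattice peak in the 2D FFT rather than reproducing the authors' quadrant-comparison procedure:

```python
import numpy as np

N = 256
kx, ky = 9, 4                                    # integer lattice frequencies (cycles/image)
true_angle = np.degrees(np.arctan2(ky, kx))      # fringe orientation, ≈ 23.96°

y, x = np.mgrid[0:N, 0:N]
img = np.cos(2 * np.pi * (kx * x + ky * y) / N)  # drift-free synthetic fringes

F = np.abs(np.fft.fft2(img))
F[0, 0] = 0.0                                    # suppress the DC term
iy, ix = np.unravel_index(np.argmax(F), F.shape)

# Map FFT bin indices to signed frequencies, then to an angle modulo 180°
fy = iy if iy <= N // 2 else iy - N
fx = ix if ix <= N // 2 else ix - N
measured = np.degrees(np.arctan2(fy, fx)) % 180.0
# A sheared (drifted) image would shift this peak and hence the measured angle.
```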

  1. A new technique, combined plication-incision (CPI), for correction of penile curvature

    Directory of Open Access Journals (Sweden)

    Hamed Abdalla Hamed

    Full Text Available ABSTRACT Introduction: Penile curvature (PC) can be surgically corrected by either corporoplasty or plication techniques. These techniques can be complicated by post-operative penile shortening, recurrent PC, painful/palpable suture knots and erectile dysfunction. Objective: To avoid the complications of corporoplasty and plication techniques using a new technique: combined plication-incision (CPI). Materials and Methods: Two groups (1 and 2) were operated upon: group 1 using CPI and group 2 using the 16-dot technique. In CPI, dots were first marked as in the 16-dot technique. In each group of 4 dots the superficial layer of the tunica albuginea was transversely incised (3-6 mm) at the first and last dots. Ethibond 2/0, passed through the interior edge of the first incision, plicating the intermediate 2 dots, and passed out of the interior edge of the last incision, was tightened and ligated. Vicryl 4/0, passed through the exterior edges of the incisions, was tightened and ligated to cover the Ethibond knot. Results: Twelve (57.1%) participants in group 2 complained of a bothersome palpable knot, compared to none in group 1, a statistically significant difference (P=0.005). Post-operative shortening (5 mm) of the erect penis, encountered in 9 participants, was twice as frequent in group 2, but the difference was not significant (P>0.05). Post-operative recurrence of PC was encountered in only 1 (4.8%) participant in group 2, compared to none in group 1, an insignificant difference (P>0.05). Post-operative erectile rigidity was normally maintained in all participants. Conclusion: The new technique was superior to the 16-dot technique for correction of PC.

  2. An experimental verification of laser-velocimeter sampling bias and its correction

    Science.gov (United States)

    Johnson, D. A.; Modarress, D.; Owen, F. K.

    1982-01-01

    The existence of 'sampling bias' in individual-realization laser velocimeter measurements is experimentally verified and shown to be independent of sample rate. The experiments were performed in a simple two-stream mixing shear flow with the standard for comparison being laser-velocimeter results obtained under continuous-wave conditions. It is also demonstrated that the errors resulting from sampling bias can be removed by a proper interpretation of the sampling statistics. In addition, data obtained in a shock-induced separated flow and in the near-wake of airfoils are presented, both bias-corrected and uncorrected, to illustrate the effects of sampling bias in the extreme.
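The effect and its statistical removal can be reproduced in a toy simulation: if realizations arrive with probability proportional to |u|, the arithmetic mean is biased high, and inverse-velocity weighting (the classic McLaughlin-Tiederman-style correction, used here as an illustrative stand-in for the paper's interpretation of the sampling statistics) recovers the true mean:

```python
import random
import statistics

random.seed(1)

# True velocities ~ N(10, 2) m/s; individual-realization LV detects a particle
# with probability proportional to |u| (velocity bias).
pool = [random.gauss(10.0, 2.0) for _ in range(200000)]
umax = max(abs(u) for u in pool)
biased = [u for u in pool if random.random() < abs(u) / umax]

naive = statistics.fmean(biased)            # biased high, roughly mu + sigma^2/mu

# Weight each realization by 1/|u| to undo the arrival-rate bias
weights = [1.0 / abs(u) for u in biased]
corrected = sum(u * w for u, w in zip(biased, weights)) / sum(weights)
# corrected ≈ 10.0, while naive ≈ 10.4
```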

  3. Efficiency and attenuation correction factors determination in gamma spectrometric assay of bulk samples using self radiation

    International Nuclear Information System (INIS)

    Haddad, Kh.

    2009-02-01

    Gamma spectrometry is the most important and capable tool for measuring radioactive materials. Determination of the efficiency and attenuation correction factors is the most tedious problem in the gamma spectrometric assay of bulk samples. A new and simple experimental method for determining these correction factors using self-radiation is proposed in this work. An experimental study of the correlation between the self-attenuation correction factor and sample thickness, and its practical application, is also introduced. The work was performed on NORM and uranyl nitrate bulk samples. The results of the proposed methods agreed with those of traditional ones. (author)
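The thickness dependence studied in the abstract can be illustrated with the closed-form self-attenuation factor for a uniform slab source (a normal-incidence slab approximation chosen for simplicity, not the paper's geometry or data; the μ value is arbitrary):

```python
import math

def self_attenuation_factor(mu, t):
    """Average transmission of a uniform slab source of thickness t (cm)
    with linear attenuation coefficient mu (1/cm), normal incidence:
        f = (1 - exp(-mu*t)) / (mu*t)
    The measured count rate is multiplied by 1/f to correct it."""
    x = mu * t
    return (1.0 - math.exp(-x)) / x if x > 0 else 1.0

# Transmission falls monotonically as the sample gets thicker
factors = [self_attenuation_factor(0.2, t) for t in (0.5, 1.0, 2.0, 4.0)]
```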

  4. Non-terminal blood sampling techniques in guinea pigs.

    Science.gov (United States)

    Birck, Malene M; Tveden-Nyborg, Pernille; Lindblad, Maiken M; Lykkesfeldt, Jens

    2014-10-11

    Guinea pigs possess several biological similarities to humans and are validated experimental animal models(1-3). However, the use of guinea pigs currently represents a relatively narrow area of research, and descriptive data on specific methodology are correspondingly scarce. The anatomical features of guinea pigs are slightly different from those of other rodent models, hence modifications of sampling techniques to accommodate species-specific differences, e.g., compared to mice and rats, are necessary to obtain sufficient and high-quality samples. As both long- and short-term in vivo studies often require repeated blood sampling, the choice of technique should be well considered in order to reduce stress and discomfort in the animals, but also to ensure survival as well as compliance with requirements of sample size and accessibility. Venous blood samples can be obtained at a number of sites in guinea pigs, e.g., the saphenous and jugular veins, each technique having both advantages and disadvantages(4,5). Here, we present four different blood sampling techniques for either conscious or anaesthetized guinea pigs. The procedures are all non-terminal provided that sample volumes and the number of samples do not exceed guidelines for blood collection in laboratory animals(6). All the described methods have been thoroughly tested and applied for repeated in vivo blood sampling in studies within our research facility.

  5. Motion correction in simultaneous PET/MR brain imaging using sparsely sampled MR navigators

    DEFF Research Database (Denmark)

    Keller, Sune H; Hansen, Casper; Hansen, Christian

    2015-01-01

    BACKGROUND: We present a study performing motion correction (MC) of PET using MR navigators sampled between other protocolled MR sequences during simultaneous PET/MR brain scanning, with the purpose of evaluating its clinical feasibility and the potential improvement of image quality. FINDINGS: Twenty-nine human subjects had a 30-min [(11)C]-PiB PET scan with simultaneous MR including 3D navigators sampled at six time points, which were used to correct the PET image for rigid head motion. Five subjects with motion greater than 4 mm were reconstructed into six frames (one for each navigator

  6. Gamma ray auto absorption correction evaluation methodology

    International Nuclear Information System (INIS)

    Gugiu, Daniela; Roth, Csaba; Ghinescu, Alecse

    2010-01-01

    Neutron activation analysis (NAA) is a well-established nuclear technique, suited to investigating microstructural or elemental composition, and can be applied to studies of a large variety of samples. Work with large samples involves, besides the development of large irradiation devices with well-known neutron field characteristics, knowledge of perturbing phenomena and adequate evaluation of correction factors such as neutron self-shielding, the extended-source correction, and gamma ray auto absorption. The objective of the work presented in this paper is to validate an appropriate methodology for evaluating the gamma ray auto absorption correction for large inhomogeneous samples. For this purpose a benchmark experiment has been defined - a simple gamma ray transmission experiment, easy to reproduce. The gamma ray attenuation in pottery samples has been measured and computed using the MCNP5 code. The results show good agreement between the computed and measured values, proving that the proposed methodology is able to evaluate the correction factors. (authors)

  7. Improvements to the Chebyshev expansion of attenuation correction factors for cylindrical samples

    International Nuclear Information System (INIS)

    Mildner, D.F.R.; Carpenter, J.M.

    1990-01-01

    The accuracy of the Chebyshev expansion coefficients used for the calculation of attenuation correction factors for cylindrical samples has been improved. An increased order of expansion allows the method to be useful over a greater range of attenuation. It is shown that many of these coefficients are exactly zero, others are rational numbers, and others are rational fractions of π⁻¹. The assumptions of Sears in his asymptotic expression for the attenuation correction factor are also examined. (orig.)
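The quantity the expansion approximates is the average transmission of a cylindrical sample in a parallel beam. The sketch below evaluates that quantity numerically for a 2-D cross-section (illustrative geometry, ignoring scattering; it checks a Monte Carlo estimate against the analytic reduction of the x-integral rather than computing the Chebyshev coefficients themselves):

```python
import math
import random

def attenuation_factor_mc(mu, R, n=100000, seed=0):
    """Monte Carlo average of exp(-mu * path) over a disc of radius R.
    For an interior point (x, y), the exit path along the +x beam
    direction is sqrt(R^2 - y^2) - x."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        while True:  # rejection-sample a point inside the disc
            px, py = rng.uniform(-R, R), rng.uniform(-R, R)
            if px * px + py * py <= R * R:
                break
        total += math.exp(-mu * (math.sqrt(R * R - py * py) - px))
    return total / n

def attenuation_factor_1d(mu, R, m=4000):
    """Same quantity after integrating over x analytically:
    A = (1/(pi R^2)) * Int_{-R}^{R} (1 - exp(-2 mu s))/mu dy,  s = sqrt(R^2 - y^2)."""
    h = 2.0 * R / m
    acc = 0.0
    for i in range(m):
        yy = -R + (i + 0.5) * h
        s = math.sqrt(max(R * R - yy * yy, 0.0))
        acc += (1.0 - math.exp(-2.0 * mu * s)) / mu
    return acc * h / (math.pi * R * R)

a_mc = attenuation_factor_mc(1.0, 0.5)
a_1d = attenuation_factor_1d(1.0, 0.5)   # the two estimates agree closely
```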

  8. Differences in sampling techniques on total post-mortem tryptase.

    Science.gov (United States)

    Tse, R; Garland, J; Kesha, K; Elstub, H; Cala, A D; Ahn, Y; Stables, S; Palmiere, C

    2017-11-20

    The measurement of mast cell tryptase is commonly used to support the diagnosis of anaphylaxis. In the post-mortem setting, the literature recommends sampling from peripheral blood sources (femoral blood) but does not specify the exact sampling technique. Sampling techniques vary between pathologists, and it is unclear whether different sampling techniques have any impact on post-mortem tryptase levels. The aim of this study is to compare the difference in femoral total post-mortem tryptase levels between two sampling techniques. A 6-month retrospective study comparing femoral total post-mortem tryptase levels between (1) aspirating femoral vessels with a needle and syringe prior to evisceration and (2) femoral vein cut-down during evisceration. Twenty cases were identified, with three cases excluded from analysis. There was a statistically significant difference (paired t test) in total post-mortem tryptase levels between the two sampling methods. The clinical significance of this finding and what factors may contribute to it are unclear. When requesting post-mortem tryptase, the pathologist should consider documenting the exact blood collection site and the method used for collection. In addition, blood samples acquired by different techniques should not be mixed together and should be analyzed separately if possible.

  9. Microextraction sample preparation techniques in biomedical analysis.

    Science.gov (United States)

    Szultka, Malgorzata; Pomastowski, Pawel; Railean-Plugaru, Viorica; Buszewski, Boguslaw

    2014-11-01

    Biologically active compounds are found in biological samples at relatively low concentration levels. The sample preparation of target compounds from biological, pharmaceutical, environmental, and food matrices is one of the most time-consuming steps in the analytical procedure, and microextraction techniques are dominant here. Metabolomic studies also require application of a proper analytical technique for the determination of endogenous metabolites present in a biological matrix at trace concentration levels. Due to the reproducibility of data, precision, relatively low cost of the appropriate analysis, simplicity of the determination, and the possibility of directly combining these techniques with other methods (both on-line and off-line), they have become the most widespread in routine determinations. Additionally, sample pretreatment procedures have to be more selective, cheap, quick, and environmentally friendly. This review summarizes the current achievements and applications of microextraction techniques. The main aim is to deal with the utilization of different types of sorbents for microextraction and emphasize the use of newly synthesized sorbents, as well as to bring together studies concerning the systematic approach to method development. This review is dedicated to the description of microextraction techniques and their application in biomedical analysis. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. An introduction to Bartlett correction and bias reduction

    CERN Document Server

    Cordeiro, Gauss M

    2014-01-01

    This book presents a concise introduction to Bartlett and Bartlett-type corrections of statistical tests and bias correction of point estimators. The underlying idea behind both groups of corrections is to obtain higher accuracy in small samples. While the main focus is on corrections that can be analytically derived, the authors also present alternative strategies for improving estimators and tests based on bootstrap, a data resampling technique, and discuss concrete applications to several important statistical models.

  11. Regression dilution bias: tools for correction methods and sample size calculation.

    Science.gov (United States)

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
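The correction the article describes can be sketched with a simulated main study plus a reliability study: the slope of the repeat measurement on the first estimates the reliability ratio, which then divides the attenuated slope. The numbers are illustrative, not the article's software or data:

```python
import random
import statistics

random.seed(42)
n = 5000
beta = 2.0                                            # true slope

x = [random.gauss(0, 1) for _ in range(n)]            # true risk factor
y = [beta * xi + random.gauss(0, 1) for xi in x]      # outcome
w1 = [xi + random.gauss(0, 0.7) for xi in x]          # main-study measurement
w2 = [xi + random.gauss(0, 0.7) for xi in x]          # repeat (reliability study)

def slope(u, v):
    """OLS slope of v regressed on u."""
    mu, mv = statistics.fmean(u), statistics.fmean(v)
    num = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))
    den = sum((ui - mu) ** 2 for ui in u)
    return num / den

naive = slope(w1, y)          # attenuated toward zero by measurement error
lam = slope(w1, w2)           # estimates the reliability ratio var(x)/var(w)
corrected = naive / lam       # corrected ≈ 2.0
```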

  12. Skin reduction technique for correction of lateral deviation of the erect straight penis.

    Science.gov (United States)

    Shaeer, Osama

    2014-07-01

    Lateral deviation of the erect straight penis (LDESP) refers to a penis that, despite being straight in the erect state, points laterally, yet can be directed forward manually without the use of force. While LDESP should not impose a negative impact on sexual function, it may have a negative cosmetic impact. This work describes a skin reduction technique (SRT) for correction of LDESP. Counseling was offered to males with LDESP after excluding other abnormalities. Surgery was performed in case of failed counseling. In the erect state, the degree and direction of LDESP were noted. Skin on the base of the penis on the side contralateral to the LDESP was excised and the edges approximated to correct the LDESP. Further excision was repeated if needed. The incision was closed in two layers. Long-term efficacy of SRT was the main outcome measure. Out of 183 males with LDESP, 66.7% were not sexually active. Counseling relieved 91.8% of cases. Fifteen patients insisted on surgery, mostly from among the sexually active, where the complaint was mutual from the patient and partner. SRT resulted in full correction of the angle of erection in 12 cases out of 15. Two had minimal recurrence, and one had major recurrence indicating re-SRT. LDESP is more commonly a complaint among those who have not experienced a coital relationship, and is mostly relieved by counseling. However, sexually active males with this complaint are more difficult to relieve by counseling. A minority of patients may opt for surgical correction. SRT achieves a forward erection in such patients, is minimally invasive, and is relatively safe, provided the angle of erection can be corrected manually without force. Shaeer O. Skin reduction technique for correction of lateral deviation of the erect straight penis. © 2014 International Society for Sexual Medicine.

  13. Absorption correction factor in X-ray fluorescent quantitative analysis

    International Nuclear Information System (INIS)

    Pimjun, S.

    1994-01-01

    An experiment on the absorption correction factor in X-ray fluorescence quantitative analysis was carried out. Standard samples were prepared from mixtures of Fe2O3 and tapioca flour at various concentrations of Fe2O3 ranging from 5% to 25%. Unknown samples were kaolin containing 3.5% to 50% Fe2O3. Kaolin samples were diluted with tapioca flour in order to reduce the absorption of FeKα and make them easier to prepare. Pressed samples, 0.150 /cm2 and 2.76 cm in diameter, were used in the experiment. The absorption correction factor is related to the total mass absorption coefficient (χ), which varies with sample composition. In a known sample, χ can be conveniently calculated by the formula. However, in an unknown sample, χ can be determined by the emission-transmission method. It was found that the relationship between the corrected FeKα intensity and the content of Fe2O3 in these samples was linear. This result indicates that this correction factor can be used to improve the accuracy of the X-ray intensity. Therefore, this correction factor is essential in quantitative analysis of the elements in any sample by the X-ray fluorescence technique.

  14. Use of X-ray diffraction technique and chemometrics to aid soil sampling strategies in traceability studies.

    Science.gov (United States)

    Bertacchini, Lucia; Durante, Caterina; Marchetti, Andrea; Sighinolfi, Simona; Silvestri, Michele; Cocchi, Marina

    2012-08-30

    The aim of this work is to assess the potential of the X-ray powder diffraction technique as a fingerprinting technique, i.e. as a preliminary tool to assess soil sample variability in terms of geochemical features, in the context of food geographical traceability. A correct approach to the sampling procedure is always a critical issue in scientific investigation. In particular, in food geographical traceability studies, where cause-effect relations between the soil of origin and the final foodstuff are sought, representative sampling of the territory under investigation is certainly imperative. This research concerns a pilot study to investigate field homogeneity with respect to both field extension and sampling depth, taking seasonal variability into account as well. Four Lambrusco production sites of the Modena district were considered. The X-ray diffraction spectra, collected on the powder of each soil sample, were treated as fingerprint profiles to be deciphered by multivariate and multi-way data analysis, namely PCA and PARAFAC. The differentiation pattern observed in the soil samples, obtained by this fast and non-destructive analytical approach, matches well with the results obtained by characterization with other, more costly analytical techniques, such as ICP/MS, GFAAS, FAAS, etc. Thus, the proposed approach furnishes a rational basis for reducing the number of soil samples to be collected for further analytical characterization (metal content, isotopic ratios of radiogenic elements, etc.) while maintaining an exhaustive description of the investigated production areas. Copyright © 2012 Elsevier B.V. All rights reserved.
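The fingerprint-then-PCA step can be sketched on synthetic diffraction-like profiles; the peak positions and "site" labels are invented for illustration, and PCA is computed directly from the SVD of the mean-centered data matrix:

```python
import numpy as np

rng = np.random.default_rng(7)
two_theta = np.linspace(10, 60, 400)   # 2-theta axis, degrees

def profile(peaks):
    """Toy powder pattern: three Gaussian peaks plus noise."""
    yv = np.zeros_like(two_theta)
    for p, h in zip(peaks, (1.0, 0.6, 0.4)):
        yv += h * np.exp(-0.5 * ((two_theta - p) / 0.3) ** 2)
    return yv + rng.normal(0, 0.02, two_theta.size)

site_a = np.array([profile((20.0, 31.0, 45.0)) for _ in range(10)])
site_b = np.array([profile((22.0, 33.0, 47.0)) for _ in range(10)])
X = np.vstack([site_a, site_b])

Xc = X - X.mean(axis=0)                          # mean-center before PCA
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S                                   # sample scores on the PCs

# Samples from the two sites separate along the first principal component
pc1_a, pc1_b = scores[:10, 0], scores[10:, 0]
```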

  15. SWOT ANALYSIS ON SAMPLING METHOD

    Directory of Open Access Journals (Sweden)

    CHIS ANCA OANA

    2014-07-01

    Audit sampling involves the application of audit procedures to less than 100% of the items within an account balance or class of transactions. Our article studies audit sampling in the audit of financial statements. As an audit technique in wide use, in both its statistical and non-statistical forms, the method is very important for auditors. It should be applied correctly to give a fair view of the financial statements and to satisfy the needs of all financial users, and to be applied correctly it must be understood by all its users, above all by auditors. Otherwise, incorrect application brings the risk of loss of reputation and discredit, litigation and even prison. Since there is no unitary practice and methodology for applying the technique, the risk of applying it incorrectly is quite high. SWOT analysis is a technique that shows advantages, disadvantages, threats and opportunities. We applied SWOT analysis to the sampling method from the perspective of three players: the audit company, the audited entity and the users of financial statements. The study shows that by applying the sampling method the audit company and the audited entity both save time, effort and money. The disadvantages of the method are the difficulty of applying it and of understanding it fully. Being widely used as an audit method, and being a factor in a correct audit opinion, the sampling method's advantages, disadvantages, threats and opportunities must be understood by auditors.

  16. A simple method of correcting for variation of sample thickness in the determination of the activity of environmental samples by gamma spectrometry

    International Nuclear Information System (INIS)

    Galloway, R.B.

    1991-01-01

    Gamma ray spectrometry is a well established method of determining the activity of radioactive components in environmental samples. It is usual to maintain precisely the same counting geometry in measurements on samples under investigation as in the calibration measurements on standard materials of known activity, thus avoiding perceived uncertainties and complications in correcting for changes in counting geometry. However this may not always be convenient if, as on some occasions, only a small quantity of sample material is available for analysis. A procedure which avoids re-calibration for each sample size is described and is shown to be simple to use without significantly reducing the accuracy of measurement of the activity of typical environmental samples. The correction procedure relates to the use of cylindrical samples at a constant distance from the detector, the samples all having the same diameter but various thicknesses being permissible. (author)
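
The correction this record describes can be sketched under the usual far-detector slab approximation (an assumption of this sketch; the paper works with its own cylindrical-geometry form): the average self-absorption of a uniform sample of thickness t scales as (1 − e^(−μt))/(μt), so a calibration done at one thickness can be rescaled to another without re-calibrating.

```python
import math

def self_absorption(mu, t):
    # Mean gamma transmission of a uniform slab of thickness t (cm)
    # with linear attenuation coefficient mu (1/cm), far-detector limit.
    return (1.0 - math.exp(-mu * t)) / (mu * t)

def activity(count_rate, eff_at_cal, mu, t, t_cal):
    # Rescale an efficiency calibration done at thickness t_cal to a
    # sample of thickness t (same diameter, same detector distance).
    corr = self_absorption(mu, t_cal) / self_absorption(mu, t)
    return count_rate / eff_at_cal * corr
```

When the sample happens to match the calibration thickness the correction reduces to 1 and the usual relative calculation is recovered.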

  17. Real-time scatter measurement and correction in film radiography

    International Nuclear Information System (INIS)

    Shaw, C.G.

    1987-01-01

    A technique for real-time scatter measurement and correction in scanning film radiography is described. With this technique, collimated x-ray fan beams are used to partially reject scattered radiation. Photodiodes are attached to the aft-collimator for sampled scatter measurement. Such measurement allows the scatter distribution to be reconstructed and subtracted from digitized film image data for accurate transmission measurement. In this presentation the authors discuss the physical and technical considerations of this scatter correction technique. Examples are shown that demonstrate the feasibility of the technique. Improved x-ray transmission measurement and dual-energy subtraction imaging are demonstrated with phantoms
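
The reconstruction-and-subtraction step can be sketched in one dimension: the aft-collimator photodiodes give the scatter level at a few positions along the scan line, a piecewise-linear interpolation of those readings reconstructs the scatter distribution, and it is subtracted from the digitized film data. Positions and values here are invented.

```python
def interp(x, xs, ys):
    # Piecewise-linear interpolation (xs must be ascending).
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            f = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + f * (ys[i] - ys[i - 1])

def scatter_corrected(profile, diode_pos, diode_scatter):
    # Subtract the scatter estimate, reconstructed from the sparse
    # diode readings, from one scan line of the digitized image.
    return [max(p - interp(i, diode_pos, diode_scatter), 0.0)
            for i, p in enumerate(profile)]

line = [100.0, 90.0, 80.0, 85.0, 95.0]       # digitized scan line
corrected = scatter_corrected(line, [0, 4], [20.0, 40.0])
```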

  18. New materials for sample preparation techniques in bioanalysis.

    Science.gov (United States)

    Nazario, Carlos Eduardo Domingues; Fumes, Bruno Henrique; da Silva, Meire Ribeiro; Lanças, Fernando Mauro

    2017-02-01

    The analysis of biological samples is a complex and difficult task owing to two basic and complementary issues: the high complexity of most biological matrices and the need to determine minute quantities of active substances and contaminants in such complex samples. To succeed in this endeavor, samples are usually subjected to the three steps of a comprehensive analytical methodology: sample preparation, analyte isolation (usually by a chromatographic technique) and qualitative/quantitative analysis (usually with the aid of mass spectrometric tools). Owing to the complex nature of bio-samples and the very low concentration of the target analytes, selective sample preparation techniques are mandatory in order to overcome the difficulties imposed by these two constraints. During the last decade, new chemical synthesis approaches have been developed and optimized, such as sol-gel and molecular imprinting technologies, allowing the preparation of novel materials for sample preparation, including graphene and its derivatives, magnetic materials, ionic liquids, molecularly imprinted polymers, and more. In this contribution we review these novel techniques and materials, as well as their application to the bioanalysis niche. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Students' Preferences and Attitude toward Oral Error Correction Techniques at Yanbu University College, Saudi Arabia

    Science.gov (United States)

    Alamri, Bushra; Fawzi, Hala Hassan

    2016-01-01

    Error correction has been one of the core areas in the field of English language teaching. It is "seen as a form of feedback given to learners on their language use" (Amara, 2015). Many studies investigated the use of different techniques to correct students' oral errors. However, only a few focused on students' preferences and attitude…

  20. The Development of a Differential Deposition Technique for Figure Correction in Grazing Incidence Optics

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose the development of a physical-vapor-deposition coating technique to correct residual figure errors in grazing-incidence optics. The process involves...

  1. Optimization Correction Strength Using Contra Bending Technique without Anterior Release Procedure to Achieve Maximum Correction on Severe Adult Idiopathic Scoliosis

    Directory of Open Access Journals (Sweden)

    Ahmad Jabir Rahyussalim

    2016-01-01

    Adult scoliosis is defined as a spinal deformity in a skeletally mature patient with a Cobb angle of more than 10 degrees in the coronal plane. A posterior-only approach with rod-and-screw corrective manipulation, reinforced by contra bending manipulation, achieves correction similar to that obtained by the conventional combined anterior release and posterior approach, while avoiding the complications related to the thoracic approach. We report the case of a 25-year-old male with adult idiopathic scoliosis and a double curve consisting of a main thoracic curve of 150 degrees and a lumbar curve of 89 degrees. The curves underwent a direct contra bending posterior approach using the rod-and-screw corrective manipulation technique to achieve optimal correction. After surgery the main thoracic Cobb angle was 83 degrees and the lumbar Cobb angle 40 degrees, with a 5-day hospital stay and less than 800 mL of blood loss during surgery. At two months after surgery he had no complaints and had returned to normal activity with good function.

  2. Correction of incomplete penoscrotal transposition by a modified Glenn-Anderson technique

    Directory of Open Access Journals (Sweden)

    Saleh Amin

    2010-01-01

    Purpose: Penoscrotal transposition may be partial or complete, resulting in variable degrees of positional exchange between the penis and the scrotum. Repairs rely on the creation of rotational flaps to mobilise the scrotum downwards, or on transposing the penis to a new opening created in the skin of the mons pubis. All known techniques involve a complete circular incision around the root of the penis, resulting in severe and massive oedema of the penile skin, which delays correction of the associated hypospadias and increases the incidence of complications, as skin vascularity and lymphatic drainage are impaired by the incision. A new design that prevents this post-operative oedema, allows early correction of the associated hypospadias and lowers the incidence of complications was used, and its results were compared with those of other methods of correction. Materials and Methods: Ten patients with incomplete penoscrotal transposition were corrected using rotational flaps that push the scrotum back while the penile skin remains attached by a small strip to the skin of the mons pubis. Results: All patients showed an excellent cosmetic outcome. There was minimal post-operative oedema and no vascular compromise of the penile or scrotal skin. Correction of the associated hypospadias could be performed in the same sitting or in a later sitting, with minimal or no complications. Conclusion: This modification, which keeps the penile skin connected to the skin of the lower abdomen by a small strip of skin during correction of penoscrotal transposition, prevents post-operative oedema, improves healing with an excellent cosmetic appearance, allows one-stage repair, and reduces post-operative complications such as urinary fistula and flap necrosis.

  3. Gamma self-shielding correction factors calculation for aqueous bulk sample analysis by PGNAA technique

    International Nuclear Information System (INIS)

    Nasrabadi, M.N.; Mohammadi, A.; Jalali, M.

    2009-01-01

    In this paper bulk sample prompt gamma neutron activation analysis (BSPGNAA) was applied to aqueous sample analysis using a relative method. For elemental analysis of an unknown bulk sample, gamma self-shielding coefficient was required. Gamma self-shielding coefficient of unknown samples was estimated by an experimental method and also by MCNP code calculation. The proposed methodology can be used for the determination of the elemental concentration of unknown aqueous samples by BSPGNAA where knowledge of the gamma self-shielding within the sample volume is required.
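
Once the gamma self-shielding coefficients are known (from the MCNP calculation or the experimental estimate), the relative method reduces to one line: the reference concentration is scaled by the prompt-gamma intensity ratio and by the ratio of self-shielding coefficients. The numbers in the usage line are invented.

```python
def concentration(i_sample, i_ref, c_ref, f_sample, f_ref):
    # Relative BSPGNAA: i_* are prompt-gamma count rates, f_* the gamma
    # self-shielding coefficients of the two bulk samples, c_ref the
    # known concentration in the reference sample.
    return c_ref * (i_sample / i_ref) * (f_ref / f_sample)

# Sample emits twice the reference rate but shields its own gammas more:
c = concentration(i_sample=200.0, i_ref=100.0, c_ref=1.0,
                  f_sample=0.8, f_ref=0.9)
```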

  4. Simultaneous double-rod rotation technique in posterior instrumentation surgery for correction of adolescent idiopathic scoliosis.

    Science.gov (United States)

    Ito, Manabu; Abumi, Kuniyoshi; Kotani, Yoshihisa; Takahata, Masahiko; Sudo, Hideki; Hojo, Yoshihiro; Minami, Akio

    2010-03-01

    The authors present a new posterior correction technique consisting of simultaneous double-rod rotation using 2 contoured rods and polyaxial pedicle screws, with or without Nesplon tapes. The purpose of this study is to introduce the basic principles and surgical procedures of this new posterior surgery for correction of adolescent idiopathic scoliosis. Through gradual rotation of the concave-side rod by 2 rod holders, the convex-side rod rotates simultaneously with the concave-side rod. This procedure does not involve any force pushing down on the spinal column around the apex. Since it consists of upward pushing and lateral translation of the spinal column with simultaneous double-rod rotation maneuvers, it is simple and can achieve thoracic kyphosis as well as favorable scoliosis correction. The technique is applicable not only to a single thoracic curve but also to double major curves in adolescent idiopathic scoliosis.

  5. Use of calibration standards and the correction for sample self-attenuation in gamma-ray nondestructive assay

    International Nuclear Information System (INIS)

    Parker, J.L.

    1984-08-01

    The efficient use of appropriate calibration standards and the correction for the attenuation of the gamma rays within an assay sample by the sample itself are two important and closely related subjects in gamma-ray nondestructive assay. Much research relating to those subjects has been done in the Nuclear Safeguards Research and Development program at the Los Alamos National Laboratory since 1970. This report brings together most of the significant results of that research. Also discussed are the nature of appropriate calibration standards and the necessary conditions on the composition, size, and shape of the samples to allow accurate assays. Procedures for determining the correction for the sample self-attenuation are described at length including both general principles and several specific useful cases. The most useful concept is that knowing the linear attenuation coefficient of the sample (which can usually be determined) and the size and shape of the sample and its position relative to the detector permits the computation of the correction factor for the self-attenuation. A major objective of the report is to explain how the procedures for determining the self-attenuation correction factor can be applied so that calibration standards can be entirely appropriate without being particularly similar, either physically or chemically, to the items to be assayed. This permits minimization of the number of standards required to assay items with a wide range of size, shape, and chemical composition. 17 references, 18 figures, 2 tables
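
The central idea, that a measured linear attenuation coefficient plus the sample geometry fixes the correction, can be shown for the simplest useful case: a slab viewed by a far detector, with μ obtained from a transmission measurement through the sample itself. This is a sketch of one standard special case, not the report's full set of geometries.

```python
import math

def linear_attenuation(transmission, thickness):
    # mu (1/cm) from a transmission measurement through the sample.
    return -math.log(transmission) / thickness

def self_attenuation_cf(mu, thickness):
    # Far-field slab correction factor CF = mu*x / (1 - exp(-mu*x));
    # multiplying the measured rate by CF recovers the unattenuated rate.
    x = mu * thickness
    return x / (1.0 - math.exp(-x))

mu = linear_attenuation(0.5, 2.0)   # T = 0.5 through a 2 cm sample
cf = self_attenuation_cf(mu, 2.0)
```

Because CF depends only on μ and the geometry, a calibration standard need not match the sample chemically, which is exactly the point the report makes.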

  6. A comparative study of sampling techniques for monitoring carcass contamination

    NARCIS (Netherlands)

    Snijders, J.M.A.; Janssen, M.H.W.; Gerats, G.E.; Corstiaensen, G.P.

    1984-01-01

    Four bacteriological sampling techniques, i.e. the excision, double swab, agar contact and modified agar contact techniques, were compared by sampling pig carcasses before and after chilling. As well as assessing the advantages and disadvantages of the techniques, particular attention was paid to

  7. Sample preparation for special PIE-techniques at ITU

    International Nuclear Information System (INIS)

    Toscano, E.H.; Manzel, R.

    2002-01-01

    Several sample preparation techniques were developed and installed in hot cells. The techniques were conceived to evaluate the performance of highly burnt fuel rods and include: (a) a device for the removal of the fuel, (b) a method for the preparation of the specimen ends for the welding of new end caps and for the careful cleaning of samples for Transmission Electron Microscopy and Glow Discharge Mass Spectroscopy, (c) a sample pressurisation device for long term creep tests, and (d) a diameter measuring device for creep or burst samples. Examples of the determination of the mechanical properties, the behaviour under transient conditions and for the assessment of the corrosion behaviour of high burnup cladding materials are presented. (author)

  8. New trends in sample preparation techniques for environmental analysis.

    Science.gov (United States)

    Ribeiro, Cláudia; Ribeiro, Ana Rita; Maia, Alexandra S; Gonçalves, Virgínia M F; Tiritan, Maria Elizabeth

    2014-01-01

    Environmental samples include a wide variety of complex matrices, with low concentrations of analytes and the presence of several interferences. Sample preparation is a critical step and the main source of uncertainty in the analysis of environmental samples, and it is usually laborious, costly, time-consuming, and polluting. In this context, there is increasing interest in developing faster, cost-effective, and environmentally friendly sample preparation techniques. Recently, new methods have been developed and optimized in order to miniaturize extraction steps, to reduce solvent consumption or become solventless, and to automate systems. This review presents an overview of the fundamentals, procedures, and applications of the most recently developed sample preparation techniques for the extraction, cleanup, and concentration of organic pollutants from environmental samples. These techniques include solid phase microextraction, on-line solid phase extraction, microextraction by packed sorbent, dispersive liquid-liquid microextraction, and QuEChERS (Quick, Easy, Cheap, Effective, Rugged and Safe).

  9. Endoscopic techniques for diagnosis and correction of complications after retroperitoneal pancreas transplantation

    Directory of Open Access Journals (Sweden)

    A. V. Pinchuk

    2016-01-01

    Relevance. Timely diagnosis and treatment of postoperative complications after pancreas transplantation is a pressing problem in modern clinical transplantation. Purpose. To assess the potential of endoscopy for the diagnosis and correction of postoperative complications after pancreas transplantation. Materials and methods. Since October 2011, simultaneous retroperitoneal pancreas-kidney transplantation has been performed in 27 patients. In 8 cases, the use of endoscopic techniques allowed timely identification and treatment of the complications that occurred. Conclusions. Endoscopic techniques proved to be highly efficient in the diagnosis and treatment of surgical complications and immunological impairments after retroperitoneal pancreas transplantation.

  10. Effect of background correction on peak detection and quantification in online comprehensive two-dimensional liquid chromatography using diode array detection.

    Science.gov (United States)

    Allen, Robert C; John, Mallory G; Rutan, Sarah C; Filgueira, Marcelo R; Carr, Peter W

    2012-09-07

    A singular value decomposition-based background correction (SVD-BC) technique is proposed for the reduction of background contributions in online comprehensive two-dimensional liquid chromatography (LC×LC) data. The SVD-BC technique was compared to simply subtracting a blank chromatogram from a sample chromatogram and to a previously reported background correction technique for one-dimensional chromatography, which uses an asymmetric weighted least squares (AWLS) approach. AWLS was the only background correction technique to completely remove the background artifacts from the samples as evaluated by visual inspection. However, the SVD-BC technique also greatly reduced or eliminated the background artifacts and preserved the peak intensity better than AWLS. The loss in peak intensity with AWLS resulted in lower peak counts at the detection thresholds established using standard samples. However, the SVD-BC technique was found to introduce noise, which led to the detection of false peaks at the lower detection thresholds. As a result, the AWLS technique gave more precise peak counts than the SVD-BC technique, particularly at the lower detection thresholds. While the AWLS technique resulted in more consistent percent residual standard deviation values, no statistical improvement in peak quantification after background correction was found regardless of the background correction technique used. Copyright © 2012 Elsevier B.V. All rights reserved.
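
A dependency-free sketch of the idea behind SVD-based background correction: treat blank chromatograms as spanning a background subspace, orthonormalize them (here by Gram-Schmidt, standing in for the leading singular vectors an SVD would give), and subtract each sample's projection onto that subspace. Data are invented; note that projection can push the baseline slightly negative, while the peak survives.

```python
import math

def orthonormalize(vectors):
    # Gram-Schmidt basis for the background subspace spanned by blanks
    # (a stand-in for the leading SVD components used in SVD-BC).
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            d = sum(x * y for x, y in zip(w, b))
            w = [x - d * y for x, y in zip(w, b)]
        n = math.sqrt(sum(x * x for x in w))
        if n > 1e-12:
            basis.append([x / n for x in w])
    return basis

def remove_background(sample, blanks):
    # Subtract the sample's projection onto the blank subspace.
    out = list(sample)
    for b in orthonormalize(blanks):
        d = sum(x * y for x, y in zip(out, b))
        out = [x - d * y for x, y in zip(out, b)]
    return out

blank = [1.0, 1.0, 1.0, 1.0]
sample = [1.0, 1.0, 6.0, 1.0]   # peak at index 2 on a flat background
cleaned = remove_background(sample, [blank])
```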

  11. An investigation of error correcting techniques for OMV and AXAF

    Science.gov (United States)

    Ingels, Frank; Fryer, John

    1991-01-01

    The original objectives of this project were to build a test system for the NASA 255/223 Reed-Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error-correcting behavior of the chip set by injecting known error patterns into data and observing the results. Error sequences were generated using pseudo-random number generator programs, with Poisson time distributions between errors and Gaussian burst lengths. Sample means, variances, and numbers of uncorrectable errors were calculated for each data set before testing.
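
The error-injection scheme described (Poisson-distributed gaps between error events, Gaussian burst lengths) is easy to reproduce. The sketch below corrupts a byte buffer the same way; the exact parameters used in the project are unknown, so the defaults here are arbitrary.

```python
import random

def inject_errors(data, mean_gap=50.0, burst_mean=3.0, burst_sd=1.0, seed=1):
    # Corrupt bytes in bursts: exponential inter-arrival gaps between
    # burst starts (a Poisson process) and Gaussian burst lengths.
    rng = random.Random(seed)
    out = bytearray(data)
    pos = 0
    errors = 0
    while True:
        pos += int(rng.expovariate(1.0 / mean_gap)) + 1
        if pos >= len(out):
            break
        length = max(1, int(round(rng.gauss(burst_mean, burst_sd))))
        for i in range(pos, min(pos + length, len(out))):
            out[i] ^= rng.randrange(1, 256)   # nonzero XOR always corrupts
            errors += 1
        pos += length
    return bytes(out), errors

corrupted, n = inject_errors(bytes(1000))
```

Feeding `corrupted` through the decoder and comparing against the known clean buffer gives the counts of corrected and uncorrectable errors per data set.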

  12. Non-terminal blood sampling techniques in Guinea pigs

    DEFF Research Database (Denmark)

    Birck, Malene Muusfeldt; Tveden-Nyborg, Pernille; Lindblad, Maiken Marie

    2014-01-01

    Guinea pigs possess several biological similarities to humans and are validated experimental animal models (1-3). However, guinea pigs currently represent a relatively narrow area of research, and descriptive data on specific methodology are correspondingly scarce. The anatomical features of guinea pigs are slightly different from those of other rodent models, so sampling techniques must be adapted to species-specific differences, e.g. compared to mice and rats, to obtain sufficient, high-quality samples. As both long- and short-term in vivo studies often require repeated blood sampling, the choice of technique should be well considered in order to reduce stress and discomfort in the animals, but also to ensure survival as well as compliance with requirements on sample size and accessibility. Venous blood samples can be obtained at a number of sites in guinea pigs e...

  13. Application of bias factor method using random sampling technique for prediction accuracy improvement of critical eigenvalue of BWR

    International Nuclear Information System (INIS)

    Ito, Motohiro; Endo, Tomohiro; Yamamoto, Akio; Kuroda, Yusuke; Yoshii, Takashi

    2017-01-01

    The bias factor method based on the random sampling technique is applied to the benchmark problem of Peach Bottom Unit 2. The validity and applicability of the present method, i.e. correction of calculation results and reduction of uncertainty, are confirmed, in addition to its features and performance. In the present study, core characteristics in cycle 3 are corrected with the proposed method using predicted and 'measured' critical eigenvalues in cycles 1 and 2. As the source of uncertainty, the variance-covariance of the cross sections is considered. The calculation results indicate that both the bias between predicted and measured results and the uncertainty owing to the cross sections can be reduced. Extension to other uncertainties, such as thermal-hydraulic properties, will be a future task. (author)
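
Stripped of the uncertainty machinery, the correction step amounts to scaling the new cycle's predicted eigenvalue by the bias observed in the measured cycles. The sketch below is a deliberate over-simplification (the actual method weights the bias using random-sampling-based covariances, which is omitted here), and the eigenvalues are invented.

```python
def bias_factors(measured, predicted):
    # One measured/predicted ratio per benchmarked cycle (cycles 1-2).
    return [m / p for m, p in zip(measured, predicted)]

def corrected(pred_next, factors):
    # Correct the next cycle's prediction by the mean observed bias.
    return pred_next * sum(factors) / len(factors)

# Cycles 1-2 were overpredicted by about 0.4%; apply that to cycle 3.
f = bias_factors([0.9960, 0.9964], [1.0000, 1.0004])
k3 = corrected(1.0002, f)
```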

  14. Multivariate correction in laser-enhanced ionization with laser sampling

    International Nuclear Information System (INIS)

    Popov, A.M.; Labutin, T.A.; Sychev, D.N.; Gorbatenko, A.A.; Zorov, N.B.

    2007-01-01

    The possibility of normalizing laser-enhanced ionization (LEI) signals by several simultaneously measured reference signals (RS) has been examined with a view to correcting for variations in laser parameters and for matrix interferences. Opto-acoustic, atomic emission and non-selective ionization signals, and their paired combinations, were used as RS for Li determination in aluminum alloys (0-6% Mg, 0-5% Cu, 0-1% Sc, 0-1% Ag). A specific normalization procedure for the case of essential multicollinearity of the RS has been proposed. The LEI signal and the RS for each ablation pulse energy were plotted in Cartesian coordinates (x and y axes, the RS values; z axis, the LEI signal). It was found that in this three-dimensional space the slope of the correlation line to the plane of the RS depends on the analyte content of the solid sample. The use of this slope as a multivariate-corrected analytical signal has therefore been proposed. Multivariate correlative normalization provides an analytical signal free of matrix interferences for Al-Mg-Cu-Li alloys. The application of this novel approach to the determination of Li allows unified calibration curves to be plotted for Al alloys of different matrix composition.
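
The geometric construction (LEI signal plotted against two reference signals, with the inclination of the fitted correlation plane used as the analytical signal) can be sketched without libraries: fit lei ≈ a·rs1 + b·rs2 + c by least squares and report the slope magnitude sqrt(a² + b²). The data below are synthetic, and this is an interpretation of the record's 3D construction, not the authors' exact procedure.

```python
import math

def solve3(A, b):
    # Gauss-Jordan elimination with partial pivoting for a 3x3 system.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

def plane_slope(rs1, rs2, lei):
    # Least-squares plane lei = a*rs1 + b*rs2 + c via normal equations;
    # the plane's inclination to the RS plane encodes analyte content.
    n = len(lei)
    sxy = sum(x * y for x, y in zip(rs1, rs2))
    A = [[sum(x * x for x in rs1), sxy, sum(rs1)],
         [sxy, sum(y * y for y in rs2), sum(rs2)],
         [sum(rs1), sum(rs2), float(n)]]
    b = [sum(x * z for x, z in zip(rs1, lei)),
         sum(y * z for y, z in zip(rs2, lei)),
         sum(lei)]
    a, bb, _ = solve3(A, b)
    return math.sqrt(a * a + bb * bb)

rs1 = [1.0, 2.0, 3.0, 4.0]
rs2 = [1.0, 0.0, 2.0, 1.0]
lei = [2.0 * x + 3.0 * y + 1.0 for x, y in zip(rs1, rs2)]  # exact plane
slope = plane_slope(rs1, rs2, lei)
```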

  16. The use of calibration standards and the correction for sample self-attenuation in gamma-ray nondestructive assay

    International Nuclear Information System (INIS)

    Parker, J.L.

    1986-11-01

    The efficient use of appropriate calibration standards and the correction for the attenuation of the gamma rays within an assay sample by the sample itself are two important and closely related subjects in gamma-ray nondestructive assay. Much research relating to those subjects has been done in the Nuclear Safeguards Research and Development program at the Los Alamos National Laboratory since 1970. This report brings together most of the significant results of that research. Also discussed are the nature of appropriate calibration standards and the necessary conditions on the composition, size, and shape of the samples to allow accurate assays. Procedures for determining the correction for the sample self-attenuation are described at length including both general principles and several specific useful cases. The most useful concept is that knowing the linear attenuation coefficient of the sample (which can usually be determined) and the size and shape of the sample and its position relative to the detector permits the computation of the correction factor for the self-attenuation. A major objective of the report is to explain how the procedures for determining the self-attenuation correction factor can be applied so that calibration standards can be entirely appropriate without being particularly similar, either physically or chemically, to the items to be assayed. This permits minimization of the number of standards required to assay items with a wide range of size, shape, and chemical composition

  17. Application of digital sampling techniques to particle identification

    International Nuclear Information System (INIS)

    Bardelli, L.; Poggi, G.; Bini, M.; Carraresi, L.; Pasquali, G.; Taccetti, N.

    2003-01-01

    An application of digital sampling techniques is presented which can greatly simplify experiments involving sub-nanosecond time-mark determinations and energy measurements with nuclear detectors, as used for pulse shape analysis and time-of-flight measurements in heavy-ion experiments. In this work a 100 MSample/s, 12-bit analog-to-digital converter has been used. Examples of this technique applied to silicon and CsI(Tl) detectors in heavy-ion experiments involving particle identification via pulse shape analysis and time-of-flight measurements are presented. The system is suited to applications with large detector arrays and different kinds of detectors. Some preliminary results regarding the simulation of current signals in silicon detectors are also discussed. (authors)
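
One way sampling enables sub-sample time marks can be shown in a few lines: locate where the leading edge crosses a fixed fraction of the pulse height and interpolate linearly between the two straddling samples. This is a generic digital constant-fraction-style discriminator, not necessarily the algorithm the record's authors used; the pulse is invented, and the 10 ns step matches the record's 100 MSample/s digitizer.

```python
def time_mark(samples, dt_ns, fraction=0.3):
    # Leading-edge crossing of `fraction` of the pulse maximum, with
    # linear interpolation between samples for sub-sample precision.
    thr = fraction * max(samples)
    for i in range(1, len(samples)):
        if samples[i - 1] < thr <= samples[i]:
            f = (thr - samples[i - 1]) / (samples[i] - samples[i - 1])
            return (i - 1 + f) * dt_ns
    return None   # no crossing found

pulse = [0.0, 0.0, 2.0, 8.0, 10.0, 9.0, 5.0, 2.0]
t = time_mark(pulse, 10.0)   # 100 MSample/s -> 10 ns per sample
```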

  18. Efficiency corrections in determining the 137Cs inventory of environmental soil samples by using relative measurement method and GEANT4 simulations

    International Nuclear Information System (INIS)

    Li, Gang; Liang, Yongfei; Xu, Jiayun; Bai, Lixin

    2015-01-01

    The determination of the 137Cs inventory is widely used to estimate soil erosion or deposition rates. The generally used method to determine the activity of volumetric samples is the relative measurement method, which employs a calibration standard sample of accurately known activity. This method has great advantages in accuracy and convenience only when there is a small difference in elemental composition, sample density and geometry between the measured samples and the calibration standard; otherwise additional efficiency corrections are needed. Monte Carlo simulations can handle these correction problems easily, at lower cost and with higher accuracy. This work presents a detailed description of the simulation and calibration procedure for a conventional commercial P-type coaxial HPGe detector with cylindrical sample geometry. The effects of sample elemental composition, density and geometry were discussed in detail and expressed as efficiency correction factors. The effect of sample placement was also analyzed; the results indicate that the radioactive nuclides and sample density are not absolutely uniformly distributed along the axial direction. Finally, a unified binary quadratic function for the efficiency correction factors as a function of sample density and height was obtained by least-squares fitting. This function covers sample densities of 0.8–1.8 g/cm3 and heights of 3.0–7.25 cm. The efficiency correction factors calculated by the fitted function are in good agreement with those obtained from the GEANT4 simulations, with a coefficient of determination greater than 0.9999. The results obtained in this paper make the above-mentioned relative measurements more accurate and efficient in the routine radioactive analysis of environmental cylindrical soil samples.
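
The final fitting step, a binary quadratic in density and height, can be reproduced generically: build the six-term design matrix and solve the normal equations by least squares. Only the functional form comes from the record; the coefficients and the grid below are invented so the fit can be checked against known values.

```python
def fit_quadratic_surface(rho, h, f):
    # Least-squares fit f(rho, h) = c0 + c1*rho + c2*h + c3*rho^2
    # + c4*h^2 + c5*rho*h via the normal equations.
    X = [[1.0, r, z, r * r, z * z, r * z] for r, z in zip(rho, h)]
    n = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(n)]
         for i in range(n)]
    b = [sum(row[i] * y for row, y in zip(X, f)) for i in range(n)]
    # Gauss-Jordan elimination with partial pivoting.
    M = [Ai[:] + [bi] for Ai, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(n):
            if r != i:
                fac = M[r][i] / M[i][i]
                M[r] = [a - fac * c for a, c in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Synthetic correction factors on a density/height grid (invented):
rho, h, f = [], [], []
for r in [0.8, 1.0, 1.2, 1.4, 1.6, 1.8]:          # g/cm^3
    for z in [3.0, 4.0, 5.0, 6.0, 7.0]:           # cm
        rho.append(r); h.append(z)
        f.append(1.0 + 0.5 * r - 0.1 * z + 0.2 * r * r
                 + 0.01 * z * z - 0.05 * r * z)
coef = fit_quadratic_surface(rho, h, f)
```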

  19. Biogeosystem Technique as a method to correct the climate

    Science.gov (United States)

    Kalinitchenko, Valery; Batukaev, Abdulmalik; Batukaev, Magomed; Minkina, Tatiana

    2017-04-01

    can be produced; the less energy is consumed for climate correction, the better. The proposed algorithm had not been discussed before because most of its ingredients were previously unenforceable. The possibility of executing the algorithm now exists within the framework of our new scientific-technical branch, Biogeosystem Technique (BGT*). BGT* is a transcendental (non-imitating natural processes) approach to soil processing and to the regulation of energy, matter and water fluxes and the biological productivity of the biosphere: intra-soil machining to provide a new, highly productive dispersed soil system; intra-soil pulse continuous-discrete watering of plants to reduce the transpiration rate and the water consumption of plants by a factor of 5-20; and intra-soil, environmentally safe return of matter during intra-soil milling and (or) intra-soil pulse continuous-discrete watering with nutrition. The following become possible: waste management; reducing the flow of nutrients to water systems; transformation of carbon and other organic and mineral substances in the soil into plant nutrition elements; less degradation of biological matter to greenhouse gases; increased biological sequestration of carbon dioxide by terrestrial photosynthesis; oxidation of methane and hydrogen sulfide by fresh, photosynthesis-ionized, biologically active oxygen; and expansion of the active terrestrial site of the biosphere. A high biological product output of the biosphere will be gained. BGT* robotic systems are of low cost and low energy and material consumption. By BGT* methods the uncertainties of climate and biosphere will be reduced. Key words: Biogeosystem Technique, method to correct, climate

  20. A comparative study of 232Th and 238U activity estimation in soil samples by gamma spectrometry and Neutron Activation Analysis (NAA) technique

    International Nuclear Information System (INIS)

    Rekha, A.K.; Anilkumar, S.; Narayani, K.; Babu, D.A.R.

    2012-01-01

    Radioactivity in the environment is mainly due to naturally occurring radionuclides like uranium and thorium, with their daughter products, and potassium. Although gamma spectrometry is the most commonly used non-destructive method for the quantification of these naturally occurring radionuclides, Neutron Activation Analysis (NAA), a well-established analytical technique, can also be used. However, NAA is a time-consuming process and needs proper standards, proper sample preparation, etc. In this paper, the 232 Th and 238 U activities estimated using gamma-ray spectrometry and the NAA technique are compared. In the direct gamma spectrometry method, the samples were analysed after sealing in a 250 ml container, whereas for NAA about 300 mg of each sample was subjected to gamma spectrometry after irradiation. The 238 U and 232 Th activities (in Bq/kg) in the samples were estimated after the proper efficiency correction and were compared. The activities estimated by the two methods are in good agreement; the variation in the 238 U and 232 Th activity values is within ± 15%, which is acceptable for environmental samples

  1. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples

    Science.gov (United States)

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-01

    Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because the measurements of the spectra may be performed on different instruments and the difference between the instruments must be corrected. Most calibration transfer methods require standard samples to construct the transfer model, using the spectra of the samples measured on two instruments, referred to as the master and slave instruments, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. As a consequence, the coefficients of the linear models constructed from the spectra measured on the different instruments are similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical applications.
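    The core idea, pulling the slave coefficients toward the master profile using only a few slave-instrument spectra, can be sketched as a regularized least-squares problem. This is a simplified stand-in for the paper's constrained optimization, and all data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n_wl = 50                                   # number of wavelengths (synthetic)
b_master = np.sin(np.linspace(0, 3, n_wl))  # master model coefficients

# Key LMC assumption: responses on the two instruments are linearly
# related, so the ideal slave coefficients share the master's profile.
X_slave = rng.normal(size=(8, n_wl))        # only 8 spectra from the slave
y = X_slave @ (1.1 * b_master)              # reference values for those spectra

# Regularized least squares pulling the solution toward the master profile:
#   min ||y - X b||^2 + lam * ||b - b_master||^2
lam = 1e-3
A = X_slave.T @ X_slave + lam * np.eye(n_wl)
b_slave = np.linalg.solve(A, X_slave.T @ y + lam * b_master)

print(np.allclose(X_slave @ b_slave, y, atol=1e-2))  # fits the slave data
```

    With far fewer spectra than wavelengths the problem is underdetermined; the penalty toward `b_master` is what selects a slave model with the master's coefficient profile.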

  2. The Taylor saddle effacement: a new technique for correction of saddle nose deformity.

    Science.gov (United States)

    Taylor, S Mark; Rigby, Matthew H

    2008-02-01

    To describe a novel technique, the Taylor saddle effacement (TSE), for correction of saddle nose deformity using autologous grafts from the lower lateral cartilages. A prospective evaluation of six patients, all of whom had the TSE performed. Photographs were taken in combination with completion of a rhinoplasty outcomes questionnaire preoperatively and at 6 months. The questionnaire included a visual analogue scale (VAS) of nasal breathing and a rhinoplasty outcomes evaluation (ROE) of nasal function and esthetics. All six patients had improvement in both their global nasal airflow on the VAS and on their ROE that was statistically significant. The mean preoperative VAS score was 5.8 compared with our postoperative mean of 8.5 of a possible 10. Mean ROE scores improved from 34.7 to 85.5. At 6 months, all patients felt that their nasal appearance had improved. The TSE is a simple and reliable technique for correction of saddle nose deformity. This prospective study has demonstrated improvement in both nasal function and esthetics when it is employed.

  3. Development of a methodology for low-energy X-ray absorption correction in biological samples using radiation scattering techniques

    International Nuclear Information System (INIS)

    Pereira, Marcelo O.; Anjos, Marcelino J.; Lopes, Ricardo T.

    2009-01-01

    Non-destructive X-ray techniques, such as tomography, radiography and X-ray fluorescence, are sensitive to the attenuation coefficient and have a large field of applications in the medical as well as the industrial area. In the case of X-ray fluorescence analysis, knowledge of the photon X-ray attenuation coefficients provides important information for obtaining the elemental concentration. On the other hand, mass attenuation coefficient values are usually determined by transmission methods, so the use of X-ray scattering can be considered an alternative to them. This work proposes a new method for obtaining the X-ray absorption curve through the superposition of the Rayleigh and Compton scattering peaks of the tungsten Lα and Lβ lines (the L lines of an X-ray tube with W anode). The absorption curve was obtained using standard samples with effective atomic numbers in the range from 6 to 16. The method was applied to certified samples of bovine liver (NIST 1577B), milk powder and V-10. The experimental measurements were obtained using the portable EDXRF system of the Nuclear Instrumentation Laboratory (LIN-COPPE/UFRJ) with a tungsten (W) anode. (author)

  4. Determination of trace elements in plant samples using XRF, PIXE and ICP-OES techniques

    International Nuclear Information System (INIS)

    Ahmed, Hassan Elzain Hassan

    2014-07-01

    The purpose of this study is to determine trace element concentrations (Ca, Cu, Cr, K, Fe, Mn, Sr and Zn) in some Sudanese wild plants, namely Ziziphus Abyssinica and Grewia Tenax. X-ray fluorescence (XRF), particle-induced X-ray emission (PIXE) and inductively coupled plasma-optical emission spectroscopy (ICP-OES) techniques were used for element determination. A series of plant standard reference materials was used to check the reliability of the different employed techniques, as well as to estimate possible factors for correcting the concentrations of some elements that deviated significantly from their actual concentrations. The results showed that XRF, PIXE and ICP-OES are equally competitive methods for measuring Ca, K, Fe, Sr and Zn. PIXE and ICP-OES tend to be the appropriate methods for Cu determination in plant samples, and for Mn they are likewise advisable rather than the XRF method. On the other hand, ICP-OES seems to be the superior technique over the PIXE and XRF methods for Cr and Ni determination in plant samples. The effect of geographical location on trace element concentrations in plants was examined through the determination of elements in specimens of Grewia Tenax collected from different locations. Most of the measured elements showed similar levels, indicating no significant impact of location on element content. In addition, two plants from different genetic families, namely Ziziphus Spina Christi and Ziziphus Abyssinica, were collected from the same location and screened for their trace element content. It was found that there was no difference between the two plants for Ca, K, Cu, Fe and Sr. However, significant variations were observed for the Mn and Zn concentrations, implying the possibility of using those two elements for plant taxonomy purposes. (Author)

  5. True coincidence summing correction determination for 214Bi principal gamma lines in NORM samples

    International Nuclear Information System (INIS)

    Haddad, Kh.

    2014-01-01

    The gamma lines 609.3 and 1,120.3 keV are two of the most intensive γ emissions of 214 Bi, but they have serious true coincidence summing (TCS) effects due to the complex decay scheme with multi-cascading transitions. TCS effects cause inaccurate count rates and hence erroneous results. A simple and easy experimental method for the determination of the TCS correction for these 214 Bi gamma lines was developed in this work using naturally occurring radioactive material samples. Sample-height efficiency and self-attenuation corrections were determined as well. The developed method has been formulated theoretically and validated experimentally. The correction problems were solved simply, requiring neither an additional standard source nor simulation skills. (author)

  6. Sampling techniques for thrips (Thysanoptera: Thripidae) in preflowering tomato.

    Science.gov (United States)

    Joost, P Houston; Riley, David G

    2004-08-01

    Sampling techniques for thrips (Thysanoptera: Thripidae) were compared in preflowering tomato plants at the Coastal Plain Experiment Station in Tifton, GA, in 2000 and 2003, to determine the most effective method of estimating thrips abundance on tomato foliage early in the growing season. Three relative sampling techniques, including a standard insect aspirator, a 946-ml beat cup, and an insect vacuum device, were compared for accuracy against an absolute method and against each other for precision and efficiency of sampling thrips. Thrips counts of all relative sampling methods were highly correlated (R > 0.92) with the absolute method. The aspirator method was the most accurate compared with the absolute sample according to regression analysis in 2000. In 2003, all sampling methods were considered accurate according to Dunnett's test, but thrips numbers were lower and sample variation was greater than in 2000. In 2000, the beat cup method had the lowest relative variation (RV), or best precision, at 1 and 8 d after transplant (DAT). Only the beat cup method had RV values <25 for all sampling dates. In 2003, the beat cup method had the lowest RV value at 15 and 21 DAT. The beat cup method also was the most efficient method for all sample dates in both years. Frankliniella fusca (Pergande) was the most abundant thrips species on the foliage of preflowering tomato in both years of study at this location. Overall, the best thrips sampling technique tested was the beat cup method in terms of precision and sampling efficiency.
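    The relative variation (RV) criterion used above, 100 times the standard error of the mean divided by the mean, with RV < 25 taken as adequate precision, is easy to compute. The counts below are invented for illustration only:

```python
import numpy as np

# Hypothetical thrips counts per plant from two sampling methods
# (10 samples each); the beat-cup counts are less variable.
beat_cup = np.array([12, 15, 11, 14, 13, 12, 16, 13, 14, 12])
aspirator = np.array([9, 20, 4, 15, 25, 7, 13, 18, 3, 22])

def relative_variation(x):
    """RV = 100 * (standard error of the mean) / mean; lower = more precise."""
    sem = x.std(ddof=1) / np.sqrt(x.size)
    return 100.0 * sem / x.mean()

rv_cup = relative_variation(beat_cup)
rv_asp = relative_variation(aspirator)
print(rv_cup < 25, rv_cup < rv_asp)  # precise methods fall below RV = 25
```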

  7. Petrosal sinus sampling: technique and rationale.

    Science.gov (United States)

    Miller, D L; Doppman, J L

    1991-01-01

    Bilateral simultaneous sampling of the inferior petrosal sinuses is an extremely sensitive, specific, and accurate test for diagnosing Cushing disease and distinguishing between that entity and the ectopic ACTH syndrome. It is also valuable for lateralizing small hormone-producing adenomas within the pituitary gland. The inferior petrosal sinuses connect the cavernous sinuses with the ipsilateral internal jugular veins. The anatomy of the anastomoses between the inferior petrosal sinus, the internal jugular vein, and the venous plexuses at the base of the skull varies, but it is almost always possible to catheterize the inferior petrosal sinus. In addition, variations in size and anatomy are often present between the two inferior petrosal sinuses in a patient. Advance preparation is required for petrosal sinus sampling. Teamwork is a critical element, and each member of the staff should know what he or she will be doing during the procedure. The samples must be properly labeled, processed, and stored. Specific needles, guide wires, and catheters are recommended for this procedure. The procedure is performed with specific attention to the three areas of potential technical difficulty: catheterization of the common femoral veins, crossing the valve at the base of the left internal jugular vein, and selective catheterization of the inferior petrosal sinuses. There are specific methods for dealing with each of these areas. The sine qua non of correct catheter position in the inferior petrosal sinus is demonstration of reflux of contrast material into the ipsilateral cavernous sinus. Images must always be obtained to document correct catheter position. Special attention must be paid to two points to prevent potential complications: The patient must be given an adequate dose of heparin, and injection of contrast material into the inferior petrosal sinuses and surrounding veins must be done gently and carefully. 
When the procedure is performed as outlined, both inferior

  8. Structure-based sampling and self-correcting machine learning for accurate calculations of potential energy surfaces and vibrational levels

    Science.gov (United States)

    Dral, Pavlo O.; Owens, Alec; Yurchenko, Sergei N.; Thiel, Walter

    2017-06-01

    We present an efficient approach for generating highly accurate molecular potential energy surfaces (PESs) using self-correcting, kernel ridge regression (KRR) based machine learning (ML). We introduce structure-based sampling to automatically assign nuclear configurations from a pre-defined grid to the training and prediction sets, respectively. Accurate high-level ab initio energies are required only for the points in the training set, while the energies for the remaining points are provided by the ML model with negligible computational cost. The proposed sampling procedure is shown to be superior to random sampling and also eliminates the need for training several ML models. Self-correcting machine learning has been implemented such that each additional layer corrects errors from the previous layer. The performance of our approach is demonstrated in a case study on a published high-level ab initio PES of methyl chloride with 44 819 points. The ML model is trained on sets of different sizes and then used to predict the energies for tens of thousands of nuclear configurations within seconds. The resulting datasets are utilized in variational calculations of the vibrational energy levels of CH3Cl. By using both structure-based sampling and self-correction, the size of the training set can be kept small (e.g., 10% of the points) without any significant loss of accuracy. In ab initio rovibrational spectroscopy, it is thus possible to reduce the number of computationally costly electronic structure calculations through structure-based sampling and self-correcting KRR-based machine learning by up to 90%.
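    The layered self-correction scheme described above, where each new model is trained on the residuals of the previous one, can be sketched with kernel ridge regression on a toy one-dimensional surface. The "PES" below is an invented function, and uniform subsampling stands in for the paper's structure-based sampling:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Toy 1-D "potential energy surface" on a dense grid of configurations;
# the functional form is invented purely for illustration.
x = np.linspace(-2.0, 2.0, 400).reshape(-1, 1)
e = x.ravel() ** 4 - x.ravel() ** 2 + 0.05 * np.sin(8.0 * x.ravel())

# Stand-in for structure-based sampling: every 10th grid point goes into
# the training set (the paper instead assigns points by structural
# similarity; uniform subsampling just keeps the sketch short).
idx = np.arange(0, 400, 10)
X_tr, e_tr = x[idx], e[idx]

# Layer 1: a deliberately smooth (strongly regularized) KRR model.
m1 = KernelRidge(kernel="rbf", alpha=1e-1, gamma=2.0).fit(X_tr, e_tr)

# Layer 2: trained on the residuals of layer 1, so each added layer
# corrects the errors of the previous one (self-correction).
res = e_tr - m1.predict(X_tr)
m2 = KernelRidge(kernel="rbf", alpha=1e-6, gamma=8.0).fit(X_tr, res)

t1 = np.sqrt(np.mean((m1.predict(X_tr) - e_tr) ** 2))
t2 = np.sqrt(np.mean((m1.predict(X_tr) + m2.predict(X_tr) - e_tr) ** 2))
print(t2 < t1)  # the residual layer shrinks the training error
```

    Energies for the remaining grid points are then predicted by summing the layers, which is what keeps the number of expensive ab initio evaluations small.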

  9. Transit time corrected arterial spin labeling technique aids to overcome delayed transit time effect

    International Nuclear Information System (INIS)

    Yun, Tae Jin; Sohn, Chul-Ho; Yoo, Roh-Eul; Kang, Kyung Mi; Choi, Seung Hong; Kim, Ji-hoon; Park, Sun-Won; Hwang, Moonjung; Lebel, R.M.

    2018-01-01

    This study aimed to evaluate the usefulness of transit time corrected cerebral blood flow (CBF) maps based on multi-phase arterial spin labeling MR perfusion imaging (ASL-MRP). The Institutional Review Board of our hospital approved this retrospective study. Written informed consent was waived. Conventional and multi-phase ASL-MRPs and dynamic susceptibility contrast MR perfusion imaging (DSC-MRP) were acquired for 108 consecutive patients. Vascular territory-based volumes of interest were applied to CBF and time to peak (TTP) maps obtained from DSC-MRP and to CBF maps obtained from conventional and multi-phase ASL-MRPs. The concordances between normalized CBF (nCBF) from DSC-MRP and nCBF from conventional and transit time corrected CBF maps from multi-phase ASL-MRP were evaluated using Bland-Altman analysis. In addition, the dependence of the difference between nCBF values (ΔnCBF) obtained from DSC-MRP and conventional ASL-MRP (or multi-phase ASL-MRP) on TTP obtained from DSC-MRP was analyzed using regression analysis. The nCBF values from conventional and multi-phase ASL-MRPs were lower than the nCBF based on DSC-MRP (mean differences, 0.08 and 0.07, respectively). The ΔnCBF values were dependent on TTP values for the conventional ASL-MRP technique (F = 5.5679, P = 0.0384). No dependency of ΔnCBF on TTP values was revealed for the multi-phase ASL-MRP technique (F = 0.1433, P > 0.05). The use of transit time corrected CBF maps based on the multi-phase ASL-MRP technique can overcome the effect of delayed transit time on perfusion maps based on conventional ASL-MRP. (orig.)

  10. Transit time corrected arterial spin labeling technique aids to overcome delayed transit time effect

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Tae Jin; Sohn, Chul-Ho; Yoo, Roh-Eul; Kang, Kyung Mi; Choi, Seung Hong; Kim, Ji-hoon [Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University Hospital, Department of Radiology, Seoul (Korea, Republic of); Park, Sun-Won [Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University Boramae Medical Center, Department of Radiology, Seoul (Korea, Republic of); Hwang, Moonjung [GE Healthcare Korea, Seoul (Korea, Republic of); Lebel, R.M. [GE Healthcare Canada, Calgary (Canada)

    2018-03-15

    This study aimed to evaluate the usefulness of transit time corrected cerebral blood flow (CBF) maps based on multi-phase arterial spin labeling MR perfusion imaging (ASL-MRP). The Institutional Review Board of our hospital approved this retrospective study. Written informed consent was waived. Conventional and multi-phase ASL-MRPs and dynamic susceptibility contrast MR perfusion imaging (DSC-MRP) were acquired for 108 consecutive patients. Vascular territory-based volumes of interest were applied to CBF and time to peak (TTP) maps obtained from DSC-MRP and to CBF maps obtained from conventional and multi-phase ASL-MRPs. The concordances between normalized CBF (nCBF) from DSC-MRP and nCBF from conventional and transit time corrected CBF maps from multi-phase ASL-MRP were evaluated using Bland-Altman analysis. In addition, the dependence of the difference between nCBF values (ΔnCBF) obtained from DSC-MRP and conventional ASL-MRP (or multi-phase ASL-MRP) on TTP obtained from DSC-MRP was analyzed using regression analysis. The nCBF values from conventional and multi-phase ASL-MRPs were lower than the nCBF based on DSC-MRP (mean differences, 0.08 and 0.07, respectively). The ΔnCBF values were dependent on TTP values for the conventional ASL-MRP technique (F = 5.5679, P = 0.0384). No dependency of ΔnCBF on TTP values was revealed for the multi-phase ASL-MRP technique (F = 0.1433, P > 0.05). The use of transit time corrected CBF maps based on the multi-phase ASL-MRP technique can overcome the effect of delayed transit time on perfusion maps based on conventional ASL-MRP. (orig.)

  11. Application of bias correction methods to improve U3Si2 sample preparation for quantitative analysis by WDXRF

    International Nuclear Information System (INIS)

    Scapin, Marcos A.; Guilhen, Sabine N.; Azevedo, Luciana C. de; Cotrim, Marycel E.B.; Pires, Maria Ap. F.

    2017-01-01

    The determination of silicon (Si), total uranium (U) and impurities in uranium-silicide (U 3 Si 2 ) samples by the wavelength dispersive X-ray fluorescence technique (WDXRF) has already been validated and is currently implemented at IPEN's X-Ray Fluorescence Laboratory (IPEN-CNEN/SP) in São Paulo, Brazil. Sample preparation requires the use of approximately 3 g of H 3 BO 3 as sample holder and 1.8 g of U 3 Si 2 . However, because boron is a neutron absorber, this procedure precludes recovery of the U 3 Si 2 sample, which, over time, considering routine analysis, may account for a significant amount of unusable uranium waste. An estimated average of 15 samples per month is expected to be analyzed by WDXRF, resulting in approx. 320 g of U 3 Si 2 that would not return to the nuclear fuel cycle. This not only results in production losses, but also creates another problem: radioactive waste management. The purpose of this paper is to present the mathematical models that may be applied for the correction of systematic errors when the H 3 BO 3 sample holder is substituted by cellulose acetate {[C 6 H 7 O 2 (OH) 3-m (OOCCH 3 )m], m = 0∼3}, thus enabling recovery of the U 3 Si 2 sample. The results demonstrate that the adopted mathematical model is statistically satisfactory, allowing the optimization of the procedure. (author)
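    The abstract does not reproduce its mathematical models. One common form for correcting a systematic, holder-induced bias is a first-order model fitted against paired measurements of the same standards; the sketch below uses that generic form with invented numbers, not the paper's actual model or data:

```python
import numpy as np

# Hypothetical paired WDXRF readings of the same standards prepared with
# the H3BO3 holder (reference) and with a cellulose-acetate holder; the
# numbers are invented and follow an exact linear bias for illustration.
h3bo3 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # reference results, wt.%
cellulose = 0.92 * h3bo3 + 0.05               # biased results, wt.%

# Fit a first-order correction model mapping the biased readings back
# onto the reference scale: corrected = a * measured + b.
a, b = np.polyfit(cellulose, h3bo3, 1)
corrected = a * cellulose + b
print(np.allclose(corrected, h3bo3))
```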

  12. Use of nuclear technique in samples for agricultural purposes

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Kerley A. P. de; Sperling, Eduardo Von, E-mail: kerley@ufmg.br, E-mail: kerleyfisica@yahoo.com.br [Department of Sanitary and Environmental Engineering Federal University of Minas Gerais, Belo Horizonte (Brazil); Menezes, Maria Angela B. C.; Jacomino, Vanusa M.F. [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2013-01-15

    Concern for the environment is growing, and this creates a need to determine chemical elements over a large range of concentrations. Neutron activation analysis (NAA) determines the elemental composition by measuring the artificial radioactivity of a sample that has been submitted to a neutron flux. NAA is a sensitive and accurate technique with low detection limits. An example of the application of NAA was the measurement of the concentrations of rare earth elements (REE) in waste samples of phosphogypsum (PG) and cerrado soil samples (clayey and sandy soils). Additionally, a soil reference material of the International Atomic Energy Agency (IAEA) was also analyzed. The REE concentration in the PG samples (total of 4,000 mg kg{sup -1}) was two times higher than that found in national fertilizers, 154 times greater than the value found in the sandy soil (26 mg kg{sup -1}) and 14 times greater than that in the clayey soil (280 mg kg{sup -1}). The experimental results for the reference material were within the uncertainty of the certified values, confirming the accuracy of the method (95%). The determination of La, Ce, Pr, Nd, Pm, Sm, Eu, Tb, Dy, Ho, Er, Tm, Yb and Lu in the samples and reference material confirmed the versatility of the technique for REE determination in soil and phosphogypsum samples, which are matrices of agricultural interest. (author)

  13. Use of nuclear technique in samples for agricultural purposes

    International Nuclear Information System (INIS)

    Oliveira, Kerley A. P. de; Sperling, Eduardo Von; Menezes, Maria Angela B. C.; Jacomino, Vanusa M.F.

    2013-01-01

    Concern for the environment is growing, and this creates a need to determine chemical elements over a large range of concentrations. Neutron activation analysis (NAA) determines the elemental composition by measuring the artificial radioactivity of a sample that has been submitted to a neutron flux. NAA is a sensitive and accurate technique with low detection limits. An example of the application of NAA was the measurement of the concentrations of rare earth elements (REE) in waste samples of phosphogypsum (PG) and cerrado soil samples (clayey and sandy soils). Additionally, a soil reference material of the International Atomic Energy Agency (IAEA) was also analyzed. The REE concentration in the PG samples (total of 4,000 mg kg -1 ) was two times higher than that found in national fertilizers, 154 times greater than the value found in the sandy soil (26 mg kg -1 ) and 14 times greater than that in the clayey soil (280 mg kg -1 ). The experimental results for the reference material were within the uncertainty of the certified values, confirming the accuracy of the method (95%). The determination of La, Ce, Pr, Nd, Pm, Sm, Eu, Tb, Dy, Ho, Er, Tm, Yb and Lu in the samples and reference material confirmed the versatility of the technique for REE determination in soil and phosphogypsum samples, which are matrices of agricultural interest. (author)

  14. Advanced examination techniques applied to the qualification of critical welds for the ITER correction coils

    CERN Document Server

    Sgobba, Stefano; Libeyre, Paul; Marcinek, Dawid Jaroslaw; Piguiet, Aline; Cécillon, Alexandre

    2015-01-01

    The ITER correction coils (CCs) consist of three sets of six coils located between the toroidal (TF) and poloidal field (PF) magnets. The CCs rely on a Cable-in-Conduit Conductor (CICC), whose supercritical cooling at 4.5 K is provided by helium inlets and outlets. The assembly of the nozzles to the stainless steel conductor conduit includes fillet welds requiring full penetration through the thickness of the nozzle. Static and cyclic stresses have to be sustained by the inlet welds during operation. The entire volume of the helium inlet and outlet welds, which are subject to the most stringent quality levels for imperfections according to the standards in force, is virtually uninspectable with sufficient resolution by conventional or computed radiography or by ultrasonic testing. On the other hand, X-ray computed tomography (CT) was successfully applied to inspect the full weld volume of several dozen helium inlet qualification samples. The extensive use of CT techniques allowed significant progress in the ...

  15. Development of analytical techniques for safeguards environmental samples at JAEA

    International Nuclear Information System (INIS)

    Sakurai, Satoshi; Magara, Masaaki; Usuda, Shigekazu; Watanabe, Kazuo; Esaka, Fumitaka; Hirayama, Fumio; Lee, Chi-Gyu; Yasuda, Kenichiro; Inagawa, Jun; Suzuki, Daisuke; Iguchi, Kazunari; Kokubu, Yoko S.; Miyamoto, Yutaka; Ohzu, Akira

    2007-01-01

    JAEA has been developing, under the auspices of the Ministry of Education, Culture, Sports, Science and Technology of Japan, analytical techniques for ultra-trace amounts of nuclear materials in environmental samples in order to contribute to the strengthened safeguards system. Essential techniques for bulk and particle analysis, as well as screening, of environmental swipe samples have been established as ultra-trace analytical methods for uranium and plutonium. In January 2003, JAEA was qualified, including its quality control system, as a member of the IAEA network of analytical laboratories for environmental samples. Since 2004, JAEA has conducted the analysis of domestic and IAEA samples, through which JAEA's analytical capability has been verified and improved. In parallel, advanced techniques have been developed in order to expand the applicability to samples of various elemental compositions and impurities and to improve analytical accuracy and efficiency. This paper summarizes the course of the technical development in environmental sample analysis at JAEA and refers to recent trends of research and development in this field. (author)

  16. Environmental gamma-ray measurements using in situ and core sampling techniques

    International Nuclear Information System (INIS)

    Dickson, H.W.; Kerr, G.D.; Perdue, P.T.; Abdullah, S.A.

    1976-01-01

    Dose rates from natural radionuclides and 137 Cs in soils of the Oak Ridge area have been determined from in situ and core sample measurements. In situ γ-ray measurements were made with a transportable spectrometer. A tape of spectral data and a soil core sample from each site were returned to ORNL for further analysis. Information on soil composition, density and moisture content and on the distribution of cesium in the soil was obtained from the core samples. In situ spectra were analyzed by a computer program which identified and assigned energies to peaks, integrated the areas under the peaks, and calculated radionuclide concentrations based on a uniform distribution in the soil. The assumption of a uniform distribution was adequate only for natural radionuclides, but simple corrections can be made to the computer calculations for man-made radionuclides distributed on the surface or exponentially in the soil. For 137 Cs a correction was used based on an exponential function fitted to the distribution measured in core samples. At typical sites in Oak Ridge, the dose rate determined from these measurements was about 5 μrad/hr. (author)
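    The exponential depth-distribution correction used above for 137 Cs rests on fitting A(z) = A0·exp(−z/z0) to core-slice activities. A minimal sketch of that fit; the depth profile below is synthetic, not Oak Ridge data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic 137Cs core-sample profile: activity falls off exponentially
# with depth, A(z) = A0 * exp(-z / z0), with relaxation depth z0 in cm.
z = np.array([0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5])  # slice mid-depths, cm
a = 80.0 * np.exp(-z / 3.0)                         # Bq/kg, noise-free toy data

def profile(z, a0, z0):
    return a0 * np.exp(-z / z0)

(a0, z0), _ = curve_fit(profile, z, a, p0=(60.0, 2.0))
print(round(a0, 2), round(z0, 2))  # recovers a0 ≈ 80, z0 ≈ 3
```

    The fitted z0 is what parameterizes the correction applied to the in situ spectrometer calculation for a surface-deposited, exponentially distributed radionuclide.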

  17. Quantitative Evaluation of 2 Scatter-Correction Techniques for 18F-FDG Brain PET/MRI in Regard to MR-Based Attenuation Correction.

    Science.gov (United States)

    Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika

    2017-10-01

    In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC to scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementations on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18 F-FDG PET

  18. Motion artifacts in functional near-infrared spectroscopy: a comparison of motion correction techniques applied to real cognitive data

    Science.gov (United States)

    Brigadoi, Sabrina; Ceccherini, Lisa; Cutini, Simone; Scarpa, Fabio; Scatturin, Pietro; Selb, Juliette; Gagnon, Louis; Boas, David A.; Cooper, Robert J.

    2013-01-01

    Motion artifacts are a significant source of noise in many functional near-infrared spectroscopy (fNIRS) experiments. Despite this, there is no well-established method for their removal. Instead, functional trials of fNIRS data containing a motion artifact are often rejected completely. However, in most experimental circumstances the number of trials is limited, and multiple motion artifacts are common, particularly in challenging populations. Many methods have been proposed recently to correct for motion artifacts, including principal component analysis, spline interpolation, Kalman filtering, wavelet filtering and correlation-based signal improvement. The performance of different techniques has often been compared in simulations, but only rarely has it been assessed on real functional data. Here, we compare the performance of these motion correction techniques on real functional data acquired during a cognitive task, which required the participant to speak aloud, leading to a low-frequency, low-amplitude motion artifact that is correlated with the hemodynamic response. To compare the efficacy of these methods, objective metrics related to the physiology of the hemodynamic response have been derived. Our results show that it is always better to correct for motion artifacts than to reject trials, and that wavelet filtering is the most effective approach to correcting this type of artifact, reducing the area under the curve where the artifact is present in 93% of the cases. Our results therefore support previous studies that have shown wavelet filtering to be the most promising and powerful technique for the correction of motion artifacts in fNIRS data. The analyses performed here can serve as a guide for others to objectively test the impact of different motion correction algorithms and therefore select the most appropriate for the analysis of their own fNIRS experiment. PMID:23639260
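    The wavelet-filtering idea favored above, artifacts concentrate in a few large wavelet coefficients, which are detected as statistical outliers and zeroed before reconstruction, can be illustrated with a hand-rolled Haar transform. This is a simplified stand-in for the richer wavelets used in practice (e.g. via PyWavelets), on a fully synthetic channel:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic fNIRS channel sampled at 10 Hz: a slow hemodynamic
# oscillation plus noise, with three spike-like motion artifacts.
n = 640
t = np.arange(n) / 10.0
signal = 0.5 * np.sin(2 * np.pi * 0.05 * t) + 0.05 * rng.normal(size=n)
corrupted = signal.copy()
corrupted[[150, 320, 500]] += 10.0            # motion spikes

def haar_correct(x, levels=5, k=4.0):
    """Zero outlier Haar detail coefficients, then reconstruct."""
    approx, details = x, []
    for _ in range(levels):                    # forward transform
        even, odd = approx[0::2], approx[1::2]
        cA = (even + odd) / np.sqrt(2)
        cD = (even - odd) / np.sqrt(2)
        sigma = np.median(np.abs(cD)) / 0.6745  # robust spread estimate
        details.append(np.where(np.abs(cD) > k * sigma, 0.0, cD))
        approx = cA
    for cD in reversed(details):               # inverse transform
        out = np.empty(2 * approx.size)
        out[0::2] = (approx + cD) / np.sqrt(2)
        out[1::2] = (approx - cD) / np.sqrt(2)
        approx = out
    return approx

corrected = haar_correct(corrupted)
print(np.std(corrected - signal) < np.std(corrupted - signal))
```

    Because the slow hemodynamic component lives almost entirely in the approximation coefficients, thresholding the details removes the spikes while leaving the response of interest largely intact.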

  19. Water sampling techniques for continuous monitoring of pesticides in water

    Directory of Open Access Journals (Sweden)

    Šunjka Dragana

    2017-01-01

    Full Text Available Good ecological and chemical status of water represents the most important aim of the Water Framework Directive 2000/60/EC, which implies respect of water quality standards at the level of the entire river basin (2008/105/EC and 2013/39/EC). This especially refers to the control of pesticide residues in surface waters. In order to achieve the set goals, a continuous monitoring program should be implemented, one that provides a comprehensive and interrelated overview of water status. However, this demands the use of appropriate analysis techniques. Until now, the procedure for sampling and quantification of residual pesticide quantities in the aquatic environment has been based on traditional sampling techniques that involve the periodic collection of individual samples. However, this type of sampling provides only a snapshot of the situation with regard to the presence of pollutants in water. As an alternative, the technique of passive sampling of pollutants in water, including pesticides, has been introduced. Different samplers are available for pesticide sampling in surface water, depending on the compounds. The technique itself is based on keeping a device in water over a longer period of time, which varies from several days to several weeks depending on the kind of compound. In this manner, the average concentrations of pollutants dissolved in water during a time period (time-weighted average concentrations, TWA) are obtained, which enables monitoring of trends in areal and seasonal variations. The use of these techniques also leads to an increase in sensitivity of analytical methods, considering that pre-concentration of analytes takes place within the sorption medium. However, the use of these techniques for determination of pesticide concentrations in real water environments requires calibration studies for the estimation of sampling rates (Rs). Rs is a volume of water per time, calculated as the product of the overall mass transfer coefficient and the area of
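The relationship the abstract describes between the accumulated mass, the sampling rate Rs and the time-weighted average (TWA) concentration can be sketched as follows. All numerical values are hypothetical calibration data, not figures from the study.

```python
def sampling_rate(k_overall, area):
    """Rs (L/day): overall mass-transfer coefficient (L/(dm^2*day))
    times the exposed sampler area (dm^2)."""
    return k_overall * area

def twa_concentration(mass_accumulated_ng, rs_l_per_day, days):
    """Time-weighted average concentration (ng/L) from the mass
    absorbed by the passive sampler over the deployment period."""
    return mass_accumulated_ng / (rs_l_per_day * days)

rs = sampling_rate(0.5, 0.3)             # hypothetical calibration values
c_twa = twa_concentration(42.0, rs, 14)  # 42 ng accumulated over 14 days
```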

  20. Magnetic separation techniques in sample preparation for biological analysis: a review.

    Science.gov (United States)

    He, Jincan; Huang, Meiying; Wang, Dongmei; Zhang, Zhuomin; Li, Gongke

    2014-12-01

    Sample preparation is a fundamental and essential step in almost all analytical procedures, especially for the analysis of complex samples such as biological and environmental samples. In recent decades, with the advantages of superparamagnetism, good biocompatibility and high binding capacity, functionalized magnetic materials have been widely applied in various processes of sample preparation for biological analysis. In this paper, the recent advances in magnetic separation techniques based on magnetic materials in the field of sample preparation for biological analysis are reviewed. The strategy of magnetic separation techniques is summarized. The synthesis, stabilization and bio-functionalization of magnetic nanoparticles are reviewed in detail. Characterization of magnetic materials is also summarized. Moreover, the applications of magnetic separation techniques for the enrichment of proteins, nucleic acids, cells and bioactive compounds and for the immobilization of enzymes are described. Finally, the existing problems and possible future trends of magnetic separation techniques for biological analysis are discussed. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. The electron transport problem sampling by Monte Carlo individual collision technique

    International Nuclear Information System (INIS)

    Androsenko, P.A.; Belousov, V.I.

    2005-01-01

    The problem of electron transport is of great interest in many fields of modern science, and Monte Carlo sampling has to be used to solve it. Electron transport is characterized by a very large number of individual interactions. To simulate it, the 'condensed history' technique may be used, in which a large number of collisions are grouped into a single step to be sampled randomly. Another kind of Monte Carlo sampling is the individual collision technique, which offers the researcher incontestable advantages over the condensed-history approach. For example, one does not need to specify the parameters required by the condensed-history technique, such as the upper limit for electron energy, the resolution, or the number of sub-steps. The condensed-history technique may also lose some very important electron tracks, both because its step parameters limit the description of particle movement and because of weaknesses in its algorithms, for example the energy indexing algorithm. The individual collision technique has none of these disadvantages. This report presents some sampling algorithms of the new version of the BRAND code, in which the above-mentioned technique is used. All information on electrons was taken from ENDF-6 files, which are an important part of BRAND; these files have not been processed but are taken directly from the electron data source. Four kinds of interaction were considered: elastic interaction, bremsstrahlung, atomic excitation and atomic electro-ionization. Some sampling results are presented in comparison with analogues; for example, the endovascular radiotherapy problem (P2) of QUADOS 2002 is compared with other commonly used techniques. (authors)
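The core step of an individual-collision simulation, choosing which interaction occurs with probability proportional to its cross section, might look as follows. The cross-section values here are made up for illustration; real values come from the ENDF-6 electron sublibraries.

```python
import random

# Hypothetical macroscopic cross sections (1/cm) for the four interaction
# kinds named in the abstract; real values are energy dependent.
XS = {"elastic": 8.0, "bremsstrahlung": 0.4, "excitation": 2.1, "ionization": 3.5}

def sample_interaction(xs, rng=random.random):
    """Pick an interaction kind with probability proportional to its cross section."""
    total = sum(xs.values())
    u = rng() * total
    acc = 0.0
    for kind, sigma in xs.items():
        acc += sigma
        if u <= acc:
            return kind
    return kind  # numerical guard against round-off

random.seed(1)
counts = {k: 0 for k in XS}
for _ in range(10000):
    counts[sample_interaction(XS)] += 1
```

With these numbers, elastic scattering dominates (8.0 of 14.0 total), so roughly 57% of the sampled collisions are elastic.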

  2. Lightweight and Statistical Techniques for Petascale Debugging: Correctness on Petascale Systems (CoPS) Preliminary Report

    Energy Technology Data Exchange (ETDEWEB)

    de Supinski, B R; Miller, B P; Liblit, B

    2011-09-13

    Petascale platforms with O(10{sup 5}) and O(10{sup 6}) processing cores are driving advancements in a wide range of scientific disciplines. These large systems create unprecedented application development challenges. Scalable correctness tools are critical to shorten the time-to-solution on these systems. Currently, many DOE application developers use primitive manual debugging based on printf or traditional debuggers such as TotalView or DDT. This paradigm breaks down beyond a few thousand cores, yet bugs often arise above that scale. Programmers must reproduce problems in smaller runs to analyze them with traditional tools, or else perform repeated runs at scale using only primitive techniques. Even when traditional tools run at scale, the approach wastes substantial effort and computation cycles. Continued scientific progress demands new paradigms for debugging large-scale applications. The Correctness on Petascale Systems (CoPS) project is developing a revolutionary debugging scheme that will reduce the debugging problem to a scale that human developers can comprehend. The scheme can provide precise diagnoses of the root causes of failure, including suggestions of the location and the type of errors down to the level of code regions or even a single execution point. Our fundamentally new strategy combines and expands three relatively new complementary debugging approaches. The Stack Trace Analysis Tool (STAT), a 2011 R&D 100 Award Winner, identifies behavior equivalence classes in MPI jobs and highlights behavior when elements of the class demonstrate divergent behavior, often the first indicator of an error. The Cooperative Bug Isolation (CBI) project has developed statistical techniques for isolating programming errors in widely deployed code that we will adapt to large-scale parallel applications. Finally, we are developing a new approach to parallelizing expensive correctness analyses, such as analysis of memory usage in the Memgrind tool. In the first two

  3. Solid Phase Microextraction and Related Techniques for Drugs in Biological Samples

    OpenAIRE

    Moein, Mohammad Mahdi; Said, Rana; Bassyouni, Fatma; Abdel-Rehim, Mohamed

    2014-01-01

    In drug discovery and development, the quantification of drugs in biological samples is an important task for the determination of the physiological performance of the investigated drugs. After sampling, the next step in the analytical process is sample preparation. Because of the low concentration levels of drug in plasma and the variety of the metabolites, the selected extraction technique should be virtually exhaustive. Recent developments of sample handling techniques are directed, from o...

  4. Accurate EPR radiosensitivity calibration using small sample masses

    Science.gov (United States)

    Hayes, R. B.; Haskell, E. H.; Barrus, J. K.; Kenner, G. H.; Romanyukha, A. A.

    2000-03-01

    We demonstrate a procedure in retrospective EPR dosimetry which allows for virtually nondestructive sample evaluation in terms of sample irradiations. For this procedure to work, it is shown that corrections must be made for cavity response characteristics when using variable mass samples. Likewise, methods are employed to correct for empty tube signals, sample anisotropy and frequency drift while considering the effects of dose distribution optimization. A demonstration of the method's utility is given by comparing sample portions evaluated using both the described methodology and standard full sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that by making all the recommended corrections, very small masses can be both accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed.

  5. Accurate EPR radiosensitivity calibration using small sample masses

    International Nuclear Information System (INIS)

    Hayes, R.B.; Haskell, E.H.; Barrus, J.K.; Kenner, G.H.; Romanyukha, A.A.

    2000-01-01

    We demonstrate a procedure in retrospective EPR dosimetry which allows for virtually nondestructive sample evaluation in terms of sample irradiations. For this procedure to work, it is shown that corrections must be made for cavity response characteristics when using variable mass samples. Likewise, methods are employed to correct for empty tube signals, sample anisotropy and frequency drift while considering the effects of dose distribution optimization. A demonstration of the method's utility is given by comparing sample portions evaluated using both the described methodology and standard full sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that by making all the recommended corrections, very small masses can be both accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed

  6. A method to correct sampling ghosts in historic near-infrared Fourier transform spectrometer (FTS) measurements

    Science.gov (United States)

    Dohe, S.; Sherlock, V.; Hase, F.; Gisi, M.; Robinson, J.; Sepúlveda, E.; Schneider, M.; Blumenstock, T.

    2013-08-01

    The Total Carbon Column Observing Network (TCCON) has been established to provide ground-based remote sensing measurements of the column-averaged dry air mole fractions (DMF) of key greenhouse gases. To ensure network-wide consistency, biases between Fourier transform spectrometers at different sites have to be well controlled. Errors in interferogram sampling can introduce significant biases in retrievals. In this study we investigate a two-step scheme to correct these errors. In the first step the laser sampling error (LSE) is estimated by determining the sampling shift which minimises the magnitude of the signal intensity in selected, fully absorbed regions of the solar spectrum. The LSE is estimated for every day with measurements which meet certain selection criteria to derive the site-specific time series of the LSEs. In the second step, this sequence of LSEs is used to resample all the interferograms acquired at the site, and hence correct the sampling errors. Measurements acquired at the Izaña and Lauder TCCON sites are used to demonstrate the method. At both sites the sampling error histories show changes in LSE due to instrument interventions (e.g. realignment). Estimated LSEs are in good agreement with sampling errors inferred from the ratio of primary and ghost spectral signatures in optically bandpass-limited tungsten lamp spectra acquired at Lauder. The original time series of Xair and XCO2 (XY: column-averaged DMF of the target gas Y) at both sites show discrepancies of 0.2-0.5% due to changes in the LSE associated with instrument interventions or changes in the measurement sample rate. After resampling, discrepancies are reduced to 0.1% or less at Lauder and 0.2% at Izaña. In the latter case, coincident changes in interferometer alignment may also have contributed to the residual difference. In the future the proposed method will be used to correct historical spectra at all TCCON sites.
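The second diagnostic mentioned, inferring the sampling error from the ratio of the ghost and parent spectral signatures, can be demonstrated on a synthetic single-line interferogram. This is a first-order sketch: treating the alternating sampling offset as a small phase modulation places the ghost at the Nyquist-mirrored bin with amplitude ratio lse·π·f/n.

```python
import numpy as np

# Synthetic single-line interferogram whose odd-indexed samples are taken
# `lse` sample intervals late, the signature of a laser sampling error (LSE).
n, f, lse = 4096, 300, 0.05
pos = np.arange(n, dtype=float)
pos[1::2] += lse
ifg = np.cos(2.0 * np.pi * f * pos / n)

# The periodic sampling error aliases part of the parent line (bin f) into a
# ghost line at bin n/2 - f; to first order, ghost/parent = lse * pi * f / n.
spec = np.abs(np.fft.rfft(ifg))
parent = spec[f]
ghost = spec[n // 2 - f]
lse_est = (ghost / parent) * n / (np.pi * f)
```

Inverting the same ratio on measured lamp spectra is one way to cross-check the LSEs found by the tail-minimisation step the abstract describes.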

  7. A method to correct sampling ghosts in historic near-infrared Fourier transform spectrometer (FTS) measurements

    Directory of Open Access Journals (Sweden)

    S. Dohe

    2013-08-01

    Full Text Available The Total Carbon Column Observing Network (TCCON) has been established to provide ground-based remote sensing measurements of the column-averaged dry air mole fractions (DMF) of key greenhouse gases. To ensure network-wide consistency, biases between Fourier transform spectrometers at different sites have to be well controlled. Errors in interferogram sampling can introduce significant biases in retrievals. In this study we investigate a two-step scheme to correct these errors. In the first step the laser sampling error (LSE) is estimated by determining the sampling shift which minimises the magnitude of the signal intensity in selected, fully absorbed regions of the solar spectrum. The LSE is estimated for every day with measurements which meet certain selection criteria to derive the site-specific time series of the LSEs. In the second step, this sequence of LSEs is used to resample all the interferograms acquired at the site, and hence correct the sampling errors. Measurements acquired at the Izaña and Lauder TCCON sites are used to demonstrate the method. At both sites the sampling error histories show changes in LSE due to instrument interventions (e.g. realignment). Estimated LSEs are in good agreement with sampling errors inferred from the ratio of primary and ghost spectral signatures in optically bandpass-limited tungsten lamp spectra acquired at Lauder. The original time series of Xair and XCO2 (XY: column-averaged DMF of the target gas Y) at both sites show discrepancies of 0.2–0.5% due to changes in the LSE associated with instrument interventions or changes in the measurement sample rate. After resampling, discrepancies are reduced to 0.1% or less at Lauder and 0.2% at Izaña. In the latter case, coincident changes in interferometer alignment may also have contributed to the residual difference. In the future the proposed method will be used to correct historical spectra at all TCCON sites.

  8. PET motion correction in context of integrated PET/MR: Current techniques, limitations, and future projections.

    Science.gov (United States)

    Gillman, Ashley; Smith, Jye; Thomas, Paul; Rose, Stephen; Dowson, Nicholas

    2017-12-01

    Patient motion is an important consideration in modern PET image reconstruction. Advances in PET technology mean motion has an increasingly important influence on resulting image quality. Motion-induced artifacts can have adverse effects on clinical outcomes, including missed diagnoses and oversized radiotherapy treatment volumes. This review aims to summarize the wide variety of motion correction techniques available in PET and combined PET/CT and PET/MR, with a focus on the latter. A general framework for the motion correction of PET images is presented, consisting of acquisition, modeling, and correction stages. Methods for measuring, modeling, and correcting motion and associated artifacts, both in literature and commercially available, are presented, and their relative merits are contrasted. Identified limitations of current methods include modeling of aperiodic and/or unpredictable motion, attaining adequate temporal resolution for motion correction in dynamic kinetic modeling acquisitions, and maintaining availability of the MR in PET/MR scans for diagnostic acquisitions. Finally, avenues for future investigation are discussed, with a focus on improvements that could improve PET image quality, and that are practical in the clinical environment. © 2017 American Association of Physicists in Medicine.

  9. Manipulation of biological samples using micro and nano techniques.

    Science.gov (United States)

    Castillo, Jaime; Dimaki, Maria; Svendsen, Winnie Edith

    2009-01-01

    The constant interest in handling, integrating and understanding biological systems of interest for the biomedical field, the pharmaceutical industry and biomaterial researchers demands the use of techniques that allow the manipulation of biological samples causing minimal or no damage to their natural structure. Thanks to the advances in micro- and nanofabrication during the last decades, several manipulation techniques offer us the possibility to image, characterize and manipulate biological material in a controlled way. Using these techniques, the integration of biomaterials with remarkable properties with physical transducers has been possible, giving rise to new and highly sensitive biosensing devices. This article reviews the different techniques available to manipulate and integrate biological materials in a controlled manner, either by sliding them along a surface (2-D manipulation), by grabbing them and moving them to a new position (3-D manipulation), or by manipulating and relocating them applying external forces. The advantages and drawbacks are mentioned together with examples that reflect the state of the art of manipulation techniques for biological samples (171 references).

  10. The electron transport problem sampling by Monte Carlo individual collision technique

    Energy Technology Data Exchange (ETDEWEB)

    Androsenko, P.A.; Belousov, V.I. [Obninsk State Technical Univ. of Nuclear Power Engineering, Kaluga region (Russian Federation)

    2005-07-01

    The problem of electron transport is of great interest in many fields of modern science, and Monte Carlo sampling has to be used to solve it. Electron transport is characterized by a very large number of individual interactions. To simulate it, the 'condensed history' technique may be used, in which a large number of collisions are grouped into a single step to be sampled randomly. Another kind of Monte Carlo sampling is the individual collision technique, which offers the researcher incontestable advantages over the condensed-history approach. For example, one does not need to specify the parameters required by the condensed-history technique, such as the upper limit for electron energy, the resolution, or the number of sub-steps. The condensed-history technique may also lose some very important electron tracks, both because its step parameters limit the description of particle movement and because of weaknesses in its algorithms, for example the energy indexing algorithm. The individual collision technique has none of these disadvantages. This report presents some sampling algorithms of the new version of the BRAND code, in which the above-mentioned technique is used. All information on electrons was taken from ENDF-6 files, which are an important part of BRAND; these files have not been processed but are taken directly from the electron data source. Four kinds of interaction were considered: elastic interaction, bremsstrahlung, atomic excitation and atomic electro-ionization. Some sampling results are presented in comparison with analogues; for example, the endovascular radiotherapy problem (P2) of QUADOS 2002 is compared with other commonly used techniques. (authors)

  11. Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes

    Science.gov (United States)

    Costello, D. J., Jr.; Deng, H.; Lin, S.

    1984-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed Solomon (RS) codes provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. This paper presents special high-speed decoding techniques for extended single- and double-error-correcting RS codes. These techniques are designed to find the error locations and the error values directly from the syndrome without having to form the error locator polynomial and solve for its roots.
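For the single-error case, the idea of reading the error location and value directly from the syndromes, without forming an error-locator polynomial, can be sketched over GF(2^8). This is a generic illustration of the principle (S1 = e·α^i, S2 = e·α^2i, so α^i = S2/S1 and e = S1²/S2), not the paper's exact decoder.

```python
# GF(2^8) arithmetic tables with the usual Reed-Solomon primitive
# polynomial 0x11d; alpha = 2 is a primitive element.
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def gf_div(a, b):
    if a == 0:
        return 0
    return EXP[(LOG[a] - LOG[b]) % 255]

def syndrome(r, j):
    """S_j = r(alpha^j), evaluating the received word as a polynomial
    by Horner's rule; r[0] holds the highest-degree coefficient."""
    s = 0
    for c in r:
        s = gf_mul(s, EXP[j]) ^ c
    return s

def correct_single_error(r):
    """Direct single-error correction: location and value straight from
    S1 and S2. Assumes at most one symbol error in the word."""
    s1, s2 = syndrome(r, 1), syndrome(r, 2)
    if s1 == 0 and s2 == 0:
        return list(r)                 # no error detected
    i = LOG[gf_div(s2, s1)]            # power of the erroneous term
    pos = len(r) - 1 - i               # list index of that term
    out = list(r)
    out[pos] ^= gf_div(gf_mul(s1, s1), s2)   # error value e = S1^2 / S2
    return out
```

Flipping one symbol of the all-zero codeword and decoding recovers the codeword, which is the essence of the syndrome-direct approach.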

  12. SU-E-I-07: An Improved Technique for Scatter Correction in PET

    International Nuclear Information System (INIS)

    Lin, S; Wang, Y; Lue, K; Lin, H; Chuang, K

    2014-01-01

    Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to the real scatter amount by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique to accurately estimate the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SFs were calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SFs. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of our proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For our technique and SSS, the normalized standard deviations of the reconstructed images were 0.053 and 0.182, respectively; the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using SF. This method can avoid the bias caused by the insufficient
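The calibration step, scaling the simulated scatter shape so that its integral equals a predicted scatter fraction of the measured counts, can be sketched as follows. The linear SF model and all numbers are invented placeholders for the empirical transformation function the abstract describes.

```python
import numpy as np

def scale_sss(sss_shape, total_projection, predicted_sf):
    """Scale the simulated scatter distribution so its integral equals the
    predicted scatter fraction of the measured total counts (the C-SSS idea)."""
    k = predicted_sf * total_projection.sum() / sss_shape.sum()
    return k * sss_shape

def predicted_scatter_fraction(mu_avg, a=0.9, b=0.05):
    """Hypothetical empirical SF model in the average attenuation
    coefficient mu_avg; the coefficients are made up for illustration."""
    return a * mu_avg + b

total = np.array([100.0, 150.0, 120.0, 80.0])  # measured projection (counts)
sss = np.array([1.0, 2.0, 2.0, 1.0])           # unscaled SSS shape
sf = predicted_scatter_fraction(0.3)
scatter = scale_sss(sss, total, sf)
```

By construction the scaled scatter carries exactly the predicted fraction of the total counts, independent of any scatter-only tail region.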

  13. SU-E-I-07: An Improved Technique for Scatter Correction in PET

    Energy Technology Data Exchange (ETDEWEB)

    Lin, S; Wang, Y; Lue, K; Lin, H; Chuang, K [Chuang, National Tsing Hua University, Hsichu, Taiwan (China)

    2014-06-01

    Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to the real scatter amount by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique to accurately estimate the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SFs were calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SFs. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of our proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For our technique and SSS, the normalized standard deviations of the reconstructed images were 0.053 and 0.182, respectively; the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using SF. This method can avoid the bias caused by the insufficient

  14. NAIL SAMPLING TECHNIQUE AND ITS INTERPRETATION

    OpenAIRE

    TZAR MN; LEELAVATHI M

    2011-01-01

    The clinical suspicion of onychomycosis, based on the appearance of the nails, requires culture for confirmation. This is because treatment requires prolonged use of systemic agents which may cause side effects. One of the common problems encountered is improper nail sampling technique, which results in the loss of essential information. The unfamiliar terminologies used in reporting culture results may intimidate physicians, resulting in misinterpretation and hampering treatment decisions. This article prov...

  15. Application of the iterative probe correction technique for a high-order probe in spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Laitinen, Tommi; Pivnenko, Sergey; Breinbjerg, Olav

    2006-01-01

    An iterative probe-correction technique for spherical near-field antenna measurements is examined. This technique has previously been shown to be well-suited for non-ideal first-order probes. In this paper, its performance is assessed in the case of a high-order probe (a dual-ridged horn).

  16. Efficiency corrections in determining the (137)Cs inventory of environmental soil samples by using relative measurement method and GEANT4 simulations.

    Science.gov (United States)

    Li, Gang; Liang, Yongfei; Xu, Jiayun; Bai, Lixin

    2015-08-01

    The determination of the (137)Cs inventory is widely used to estimate soil erosion or deposition rates. The method generally used to determine the activity of volumetric samples is the relative measurement method, which employs a calibration standard sample with accurately known activity. This method has great advantages in accuracy and operation only when there is a small difference in elemental composition, sample density and geometry between the measured samples and the calibration standard. Otherwise, additional efficiency corrections are needed in the calculation process. Monte Carlo simulations can handle these correction problems easily, with lower financial cost and higher accuracy. This work presents a detailed description of the simulation and calibration procedure for a conventionally used commercial P-type coaxial HPGe detector with cylindrical sample geometry. The effects of sample elemental composition, density and geometry were discussed in detail and calculated in terms of efficiency correction factors. The effect of sample placement was also analyzed; the results indicate that the radioactive nuclides and sample density are not absolutely uniformly distributed along the axial direction. Finally, a unified binary quadratic functional relationship for the efficiency correction factors as a function of sample density and height was obtained by the least-squares fitting method. This function covers the sample density and height ranges of 0.8-1.8 g/cm(3) and 3.0-7.25 cm, respectively. The efficiency correction factors calculated by the fitted function are in good agreement with those obtained by the GEANT4 simulations, with a determination coefficient greater than 0.9999. The results obtained in this paper make the above-mentioned relative measurements more accurate and efficient in the routine radioactive analysis of environmental cylindrical soil samples. Copyright © 2015 Elsevier Ltd. All rights reserved.
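The final fitting step, a binary quadratic surface in sample density and height obtained by least squares, can be reproduced schematically. The data points and coefficients below are synthetic, not the paper's.

```python
import numpy as np

# Fit F(rho, h) = a0 + a1*rho + a2*h + a3*rho^2 + a4*h^2 + a5*rho*h to
# efficiency correction factors over the density/height range in the abstract.
rho = np.array([0.8, 0.8, 1.3, 1.3, 1.8, 1.8, 1.0, 1.6])
h = np.array([3.0, 7.25, 3.0, 7.25, 3.0, 7.25, 5.0, 5.0])
# Synthetic "true" surface used to generate the fit targets:
F = 1.0 + 0.10 * rho - 0.02 * h + 0.03 * rho**2 + 0.001 * h**2 - 0.01 * rho * h

A = np.column_stack([np.ones_like(rho), rho, h, rho**2, h**2, rho * h])
coef, *_ = np.linalg.lstsq(A, F, rcond=None)

def correction_factor(rho_s, h_s):
    """Evaluate the fitted binary quadratic surface at one sample geometry."""
    return np.dot(coef, [1.0, rho_s, h_s, rho_s**2, h_s**2, rho_s * h_s])
```

With noise-free targets the least-squares fit recovers the generating coefficients exactly, so the evaluated surface matches the synthetic model at any point in range.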

  17. Sample preparation techniques for (p, X) spectrometry

    International Nuclear Information System (INIS)

    Whitehead, N.E.

    1985-01-01

    Samples are ashed at low temperature using an oxygen plasma; a rotary evaporator and freeze drying sped up the ashing. The new design of apparatus manufactured was only 10 W but was as efficient as a 200 W commercial machine; a circuit diagram is included. Samples of hair and biopsy samples of skin were analysed by the technique. A wool standard was prepared for interlaboratory comparison exercises. It was based on New Zealand merino sheep wool and weighed 2.9 kg. A washing protocol was developed which preserves most of the trace element content. The wool was ground in liquid nitrogen using a plastic pestle and beaker driven by a rotary drill press. (author)

  18. Reducing overlay sampling for APC-based correction per exposure by replacing measured data with computational prediction

    Science.gov (United States)

    Noyes, Ben F.; Mokaberi, Babak; Oh, Jong Hun; Kim, Hyun Sik; Sung, Jun Ha; Kea, Marc

    2016-03-01

    One of the keys to successful mass production of sub-20nm nodes in the semiconductor industry is the development of an overlay correction strategy that can meet specifications, reduce the number of layers that require dedicated chuck overlay, and minimize measurement time. Three important aspects of this strategy are: correction per exposure (CPE), integrated metrology (IM), and the prioritization of automated correction over manual subrecipes. The first and third aspects are accomplished through an APC system that uses measurements from production lots to generate CPE corrections that are dynamically applied to future lots. The drawback of this method is that production overlay sampling must be extremely high in order to provide the system with enough data to generate CPE. That drawback makes IM particularly difficult because of the throughput impact that can be created on expensive bottleneck photolithography process tools. The goal is to realize the cycle time and feedback benefits of IM coupled with the enhanced overlay correction capability of automated CPE without impacting process tool throughput. This paper will discuss the development of a system that sends measured data with reduced sampling via an optimized layout to the exposure tool's computational modelling platform to predict and create "upsampled" overlay data in a customizable output layout that is compatible with the fab user CPE APC system. The result is dynamic CPE without the burden of extensive measurement time, which leads to increased utilization of IM.
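The upsampling idea, fitting a wafer-level model to sparse overlay measurements and predicting values at every exposure field, can be sketched with a minimal 3-parameter linear model. Production scanner models use far more terms, and all data here are synthetic.

```python
import numpy as np

def upsample_overlay(sparse_xy, sparse_dx, dense_xy):
    """Fit a simple linear inter-field wafer model to sparsely sampled overlay
    errors and predict ("upsample") the error at every exposure field."""
    x, y = sparse_xy[:, 0], sparse_xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y])
    coef, *_ = np.linalg.lstsq(A, sparse_dx, rcond=None)
    xd, yd = dense_xy[:, 0], dense_xy[:, 1]
    return np.column_stack([np.ones_like(xd), xd, yd]) @ coef

# Sparse measurements drawn from a hypothetical wafer-scale tilt (no noise)
rng = np.random.default_rng(0)
sparse_xy = rng.uniform(-100, 100, size=(12, 2))
sparse_dx = 0.5 + 0.01 * sparse_xy[:, 0] - 0.02 * sparse_xy[:, 1]

# Predict at a dense grid of exposure-field centres for CPE feedback
gx, gy = np.meshgrid(np.arange(-90, 91, 30), np.arange(-90, 91, 30))
dense_xy = np.column_stack([gx.ravel(), gy.ravel()])
dense_dx = upsample_overlay(sparse_xy, sparse_dx, dense_xy)
```

The point of the scheme is that only the sparse points need measurement time on the tool; the dense per-field values fed to the APC system come from the model.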

  19. Correction of rectal sacculation through lateral resection in dogs with perineal hernia - technique description

    OpenAIRE

    P.C. Moraes; N.M. Zanetti; C.P. Burger; A.E.W.B. Meirelles; J.C. Canola; J.G.M.P. Isola

    2013-01-01

    The occurrence of perineal hernias in dogs during routine clinical surgery is frequent. Coexisting rectal diseases that go undiagnosed or are not correctly treated can cause recurrence and postoperative complications. The objective of this report is to describe a surgical technique for the treatment of rectal sacculation through lateral resection in dogs with perineal hernia, thereby restoring rectal integrity.

  20. Fast high resolution ADC based on the flash type with a special error correcting technique

    Energy Technology Data Exchange (ETDEWEB)

    Xiao-Zhong, Liang; Jing-Xi, Cao [Beijing Univ. (China). Inst. of Atomic Energy

    1984-03-01

    A fast 12-bit ADC based on the flash type with a simple, special error-correcting technique, which can effectively compensate the level drift of the discriminators and the droop of the stretcher voltage, is described. The DNL is comparable with that of the Wilkinson ADC, and the long-term drift is far better.

  1. A cone beam CT-guided online plan modification technique to correct interfractional anatomic changes for prostate cancer IMRT treatment

    International Nuclear Information System (INIS)

    Fu Weihua; Yang Yong; Yue, Ning J; Heron, Dwight E; Huq, M Saiful

    2009-01-01

    The purpose of this work is to develop an online plan modification technique to compensate for the interfractional anatomic changes for prostate cancer intensity-modulated radiation therapy (IMRT) treatment based on daily cone beam CT (CBCT) images. In this proposed technique, pre-treatment CBCT images are acquired after the patient is set up on the treatment couch using an in-room laser with the guidance of the setup skin marks. Instead of moving the couch to rigidly align the target or re-planning using the CBCT images, we modify the original IMRT plan to account for the interfractional target motion and deformation based on the daily CBCT image feedback. The multileaf collimator (MLC) leaf positions for each subfield are automatically adjusted in the proposed algorithm based on the position and shape changes of target projection in the beam's eye view (BEV). Three typical prostate cases were adopted to evaluate the proposed technique, and the results were compared with those obtained with bony-structure-based rigid translation correction, prostate-based correction and CBCT-based re-planning strategies. The study revealed that the proposed modification technique is superior to the bony-structure-based and prostate-based correction techniques, especially when interfractional target deformation exists. Its dosimetric performance is closer to that of the re-planned strategy, but with much higher efficiency, indicating that the introduced online CBCT-guided plan modification technique may be an efficient and practical method to compensate for the interfractional target position and shape changes for prostate IMRT.
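
    The leaf-adjustment step described above can be illustrated with a minimal sketch. The snippet below is not the published algorithm; it assumes a purely rigid lateral shift of the target projection in the BEV and translates every MLC leaf pair by that shift (the actual technique also reshapes the aperture per leaf row based on target deformation):

```python
def adjust_leaves(leaf_pairs, shift_x):
    """Translate an MLC aperture by the lateral shift of the target
    projection in the beam's eye view (rigid-shift toy model only).

    leaf_pairs: list of (left_edge_cm, right_edge_cm) per leaf row
    shift_x:    lateral target shift observed in the BEV, in cm
    """
    return [(left + shift_x, right + shift_x) for left, right in leaf_pairs]

# Target moved 0.3 cm laterally between planning CT and daily CBCT:
aperture = [(-2.0, 2.0), (-2.5, 2.5)]
print(adjust_leaves(aperture, 0.3))  # [(-1.7, 2.3), (-2.2, 2.8)]
```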

  2. Correction of rectal sacculation through lateral resection in dogs with perineal hernia - technique description

    Directory of Open Access Journals (Sweden)

    P.C. Moraes

    2013-06-01

    Full Text Available The occurrence of perineal hernias in dogs during routine clinical surgery is frequent. The coexistence of rectal diseases that go undiagnosed or are not correctly treated can cause recurrence and postoperative complications. The objective of this report is to describe a surgical technique for treatment of rectal sacculation through lateral resection in dogs with perineal hernia, whereby restoring the rectal integrity.

  3. Feedback correction of injection errors using digital signal-processing techniques

    Directory of Open Access Journals (Sweden)

    N. S. Sereno

    2007-01-01

    Full Text Available Efficient transfer of electron beams from one accelerator to another is important for 3rd-generation light sources that operate using top-up. In top-up mode, a constant amount of charge is injected at regular intervals into the storage ring to replenish beam lost primarily due to Touschek scattering. Top-up therefore requires that the complex of injector accelerators that fill the storage ring transport beam with a minimum amount of loss. Injection can be a source of significant beam loss if not carefully controlled. In this note we describe a method of processing injection transient signals produced by beam-position monitors and using the processed data in feedback. Feedback control using the technique described here has been incorporated in the Advanced Photon Source (APS booster synchrotron to correct injection transients.

  4. Limitations and ceiling effects with circumferential minimally invasive correction techniques for adult scoliosis: analysis of radiological outcomes over a 7-year experience.

    Science.gov (United States)

    Anand, Neel; Baron, Eli M; Khandehroo, Babak

    2014-05-01

    Minimally invasive correction of adult scoliosis is a surgical method increasing in popularity. Limited data exist, however, as to how effective these methodologies are in achieving coronal plane and sagittal plane correction in addition to improving spinopelvic parameters. This study serves to quantify how much correction is possible with present circumferential minimally invasive surgical (cMIS) methods. Ninety patients were selected from a database of 187 patients who underwent cMIS scoliosis correction. All patients had a Cobb angle greater than 15°, 3 or more levels fused, and availability of preoperative and postoperative 36-inch standing radiographs. The mean duration of follow-up was 37 months. Preoperative and postoperative Cobb angle, sagittal vertical axis (SVA), coronal balance, lumbar lordosis (LL), and pelvic incidence (PI) were measured. Scatter plots were performed comparing the pre- and postoperative radiological parameters to calculate ceiling effects for SVA correction, Cobb angle correction, and PI-LL mismatch correction. The mean preoperative SVA value was 60 mm (range 11.5-151 mm); the mean postoperative value was 31 mm (range 0-84 mm). The maximum SVA correction achieved with cMIS techniques in any of the cases was 89 mm. In terms of coronal Cobb angle, a mean correction of 61% was noted, with a mean preoperative value of 35.8° (range 15°-74.7°) and a mean postoperative value of 13.9° (range 0°-32.5°). A ceiling effect for Cobb angle correction was noted at 42°. The ability to correct the PI-LL mismatch to 10° was limited to cases in which the preoperative PI-LL mismatch was 38° or less. Circumferential MIS techniques as currently used for the treatment of adult scoliosis have limitations in terms of their ability to achieve SVA correction and lumbar lordosis. When the preoperative SVA is greater than 100 mm and a substantial amount of lumbar lordosis is needed, as determined by spinopelvic parameter calculations, surgeons should

  5. Temporal impulse and step responses of the human eye obtained psychophysically by means of a drift-correcting perturbation technique

    NARCIS (Netherlands)

    Roufs, J.A.J.; Blommaert, F.J.J.

    1981-01-01

    Internal impulse and step responses are derived from the thresholds of short probe flashes by means of a drift-correcting perturbation technique. The approach is based on only two postulated systems properties: quasi-linearity and peak detection. A special feature of the technique is its strong

  6. Application of the Sampling Selection Technique in Approaching Financial Audit

    Directory of Open Access Journals (Sweden)

    Victor Munteanu

    2018-03-01

    Full Text Available In his professional approach, the financial auditor has a wide range of working techniques, including selection techniques. They are applied depending on the nature of the information available to the financial auditor, the manner in which it is presented - paper or electronic format - and, last but not least, the time available. Several techniques are applied, successively or in parallel, to increase the confidence in the expressed opinion and to provide the audit report with a solid basis of information. Sampling is used in the phase of controlling or clarifying the identified error. The main purpose is to corroborate or measure the degree of risk detected following a pertinent analysis. Since the auditor has neither the time nor the means to rebuild the information thoroughly, the sampling technique can provide an effective response to the need for valorization.

  7. Simultaneous determination of major to ultratrace elements in geological samples by fusion-dissolution and inductively coupled plasma mass spectrometry techniques

    International Nuclear Information System (INIS)

    Madinabeitia, S. Garcia de; Lorda, M.E. Sanchez; Ibarguchi, J.I. Gil

    2008-01-01

    A method has been developed for the simultaneous quantification of major to ultratrace elements in geological samples using quadrupole ICP-MS techniques. The sample preparation involves fusion with LiBO2 and dilution in HNO3-HF, which allows complete decomposition of refractory minerals and quantification of the elements of interest. The effects of high Total Dissolved Solids (TDS) and Li in the solution are minimized by using a matrix-tolerant interface and conditioning the instrument with LiBO2 solution. The signal drift is moreover controlled using conventional internal standards and specific Drift Correction Standards (DCS). A key issue of the technique is the external calibration using selected Certified Reference Materials (CRM). Depending on the sample type and analytes of interest, three optimized programmable modes are used sequentially: Standard, Collision Cell (CCT) and Kinetic Energy Discrimination (KED) mode. The method can quantify more than 40 elements at concentrations from tens of percent down to <0.1 ppm in a single experiment. The method has been validated through the analysis of different CRMs, with recovery factors of ca. 100% and typical 2σ errors of <10%
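
    The internal-standard drift control mentioned in the abstract can be sketched as a simple ratio normalization. The function below is a hypothetical illustration (names and numbers are invented), assuming the analyte signal drifts by the same factor as the internal-standard signal:

```python
def drift_correct(raw_counts, is_counts, is_reference):
    """Normalize an analyte signal by the internal-standard response.

    raw_counts:   measured analyte signal (counts/s)
    is_counts:    internal-standard signal measured in the same run
    is_reference: internal-standard signal at calibration time
    """
    return raw_counts * (is_reference / is_counts)

# If instrument sensitivity dropped 20% (the IS reads 80 where it read 100
# at calibration), the analyte signal is scaled back up by the same factor:
print(drift_correct(400.0, 80.0, 100.0))  # 500.0
```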

  8. Simultaneous determination of major to ultratrace elements in geological samples by fusion-dissolution and inductively coupled plasma mass spectrometry techniques

    Energy Technology Data Exchange (ETDEWEB)

    Madinabeitia, S. Garcia de [Servicio de Geocronologia y Geoquimica Isotopica, Facultad de Ciencia y Tecnologia, Universidad del Pais Vasco/EHU, Sarriena s/n, 48940 Leioa (Spain); Lorda, M.E. Sanchez [Servicio de Geocronologia y Geoquimica Isotopica, Facultad de Ciencia y Tecnologia, Universidad del Pais Vasco/EHU, Sarriena s/n, 48940 Leioa (Spain); Departamento de Mineralogia-Petrologia, Facultad de Ciencia y Tecnologia, Universidad del Pais Vasco/EHU, Sarriena s/n, 48940 Leioa (Spain); Ibarguchi, J.I. Gil [Servicio de Geocronologia y Geoquimica Isotopica, Facultad de Ciencia y Tecnologia, Universidad del Pais Vasco/EHU, Sarriena s/n, 48940 Leioa (Spain)], E-mail: josei.gil@ehu.es

    2008-09-12

    A method has been developed for the simultaneous quantification of major to ultratrace elements in geological samples using quadrupole ICP-MS techniques. The sample preparation involves fusion with LiBO2 and dilution in HNO3-HF, which allows complete decomposition of refractory minerals and quantification of the elements of interest. The effects of high Total Dissolved Solids (TDS) and Li in the solution are minimized by using a matrix-tolerant interface and conditioning the instrument with LiBO2 solution. The signal drift is moreover controlled using conventional internal standards and specific Drift Correction Standards (DCS). A key issue of the technique is the external calibration using selected Certified Reference Materials (CRM). Depending on the sample type and analytes of interest, three optimized programmable modes are used sequentially: Standard, Collision Cell (CCT) and Kinetic Energy Discrimination (KED) mode. The method can quantify more than 40 elements at concentrations from tens of percent down to <0.1 ppm in a single experiment. The method has been validated through the analysis of different CRMs, with recovery factors of ca. 100% and typical 2σ errors of <10%.

  9. Techniques for transparent lattice measurement and correction

    Science.gov (United States)

    Cheng, Weixing; Li, Yongjun; Ha, Kiman

    2017-07-01

    A novel method has been successfully demonstrated at NSLS-II to characterize the lattice parameters with gated BPM turn-by-turn (TbT) capability. This method can be used during high current operation. Conventional lattice characterization and tuning are carried out at low current in dedicated machine studies, which include beam-based measurement/correction of orbit, tune, dispersion, beta-beat, phase advance, coupling, etc. At the NSLS-II storage ring, we observed lattice drifting during beam accumulation in user operation. Coupling and lifetime change while insertion device (ID) gaps are moved. With the new method, dynamic lattice correction is possible, enabling reliable and productive operations. A bunch-by-bunch feedback system excites a small fraction (∼1%) of the bunches, and gated BPMs are aligned to see those bunch motions. The gated TbT position data are used to characterize the lattice, so that corrections can be applied. As only ∼1% of the total charge is disturbed, and only for a short period of time (several ms), this method is transparent to general user operation. We demonstrated the effectiveness of these tools during high current user operation.

  10. A hybrid solution using computational prediction and measured data to accurately determine process corrections with reduced overlay sampling

    Science.gov (United States)

    Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen

    2017-03-01

    Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for throughput, as new lots have to wait until the previous lot is measured. One solution is to use a less dense overlay sampling scheme and computationally up-sample the data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system, shown in Fig. 1, that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.
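
    As a rough illustration of the idea, the sketch below fits a global polynomial model to sparse overlay measurements, up-samples it onto a dense grid, and then re-injects the actual measurements so that local errors survive. It is a hypothetical 1-D toy (the real fingerprint is a 2-D wafer map handled by the exposure tool's computational modelling platform):

```python
import numpy as np

def upsample_overlay(x_sparse, y_sparse, x_dense, order=2):
    """Fit a global polynomial fingerprint model to sparse overlay
    measurements and predict ("up-sample") it on a dense grid."""
    coeffs = np.polyfit(x_sparse, y_sparse, order)
    return np.polyval(coeffs, x_dense)

def hybrid_fingerprint(x_sparse, y_sparse, x_dense, order=2):
    """Hybrid: model prediction everywhere, but keep each actual
    measurement at its nearest dense-grid point, so localized overlay
    errors are not smoothed away by the global model."""
    pred = upsample_overlay(x_sparse, y_sparse, x_dense, order)
    for xs, ys in zip(x_sparse, y_sparse):
        pred[int(np.argmin(np.abs(x_dense - xs)))] = ys
    return pred
```

    In this toy, four sparse measurements are enough to predict the fingerprint on a seven-point grid, while any measured local outlier is preserved verbatim at its grid point rather than being averaged into the global model.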

  11. Nuclear analytical techniques and their application to environmental samples

    International Nuclear Information System (INIS)

    Lieser, K.H.

    1986-01-01

    A survey is given on nuclear analytical techniques and their application to environmental samples. Measurement of the inherent radioactivity of elements or radionuclides allows determination of natural radioelements (e.g. Ra), man-made radioelements (e.g. Pu) and radionuclides in the environment. Activation analysis, in particular instrumental neutron activation analysis, is a very reliable and sensitive method for determination of a great number of trace elements in environmental samples, because the most abundant main constituents are not activated. Tracer techniques are very useful for studies of the behaviour and of chemical reactions of trace elements and compounds in the environment. Radioactive sources are mainly applied for excitation of characteristic X-rays (X-ray fluorescence analysis). (author)

  12. In-Situ Systematic Error Correction for Digital Volume Correlation Using a Reference Sample

    KAUST Repository

    Wang, B.

    2017-11-27

    The self-heating effect of a laboratory X-ray computed tomography (CT) scanner causes slight change in its imaging geometry, which induces translation and dilatation (i.e., artificial displacement and strain) in reconstructed volume images recorded at different times. To realize high-accuracy internal full-field deformation measurements using digital volume correlation (DVC), these artificial displacements and strains associated with unstable CT imaging must be eliminated. In this work, an effective and easily implemented reference sample compensation (RSC) method is proposed for in-situ systematic error correction in DVC. The proposed method utilizes a stationary reference sample, which is placed beside the test sample to record the artificial displacement fields caused by the self-heating effect of CT scanners. The detected displacement fields are then fitted by a parametric polynomial model, which is used to remove the unwanted artificial deformations in the test sample. Rescan tests of a stationary sample and real uniaxial compression tests performed on copper foam specimens demonstrate the accuracy, efficacy, and practicality of the presented RSC method.
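
    The compensation step can be sketched in one dimension: fit a low-order polynomial to the displacements detected on the stationary reference sample (rigid translation plus dilatation), then subtract that fit from the test-sample field. A minimal, hypothetical illustration, not the authors' implementation:

```python
import numpy as np

def rsc_correct(z, u_ref, u_test, order=1):
    """Reference sample compensation (1-D sketch).

    z:      measurement-point positions
    u_ref:  displacements detected on the stationary reference sample
            (pure artefact from the CT scanner's self-heating)
    u_test: displacements measured on the test sample (artefact + real)

    A first-order polynomial u = a + b*z captures rigid translation (a)
    and dilatation (b); the fitted artefact is removed from the test data.
    """
    coeffs = np.polyfit(z, u_ref, order)
    return u_test - np.polyval(coeffs, z)
```

    With synthetic data, adding a known artefact to a known real deformation and then calling `rsc_correct` returns the real deformation alone.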

  13. In-Situ Systematic Error Correction for Digital Volume Correlation Using a Reference Sample

    KAUST Repository

    Wang, B.; Pan, B.; Lubineau, Gilles

    2017-01-01

    The self-heating effect of a laboratory X-ray computed tomography (CT) scanner causes slight change in its imaging geometry, which induces translation and dilatation (i.e., artificial displacement and strain) in reconstructed volume images recorded at different times. To realize high-accuracy internal full-field deformation measurements using digital volume correlation (DVC), these artificial displacements and strains associated with unstable CT imaging must be eliminated. In this work, an effective and easily implemented reference sample compensation (RSC) method is proposed for in-situ systematic error correction in DVC. The proposed method utilizes a stationary reference sample, which is placed beside the test sample to record the artificial displacement fields caused by the self-heating effect of CT scanners. The detected displacement fields are then fitted by a parametric polynomial model, which is used to remove the unwanted artificial deformations in the test sample. Rescan tests of a stationary sample and real uniaxial compression tests performed on copper foam specimens demonstrate the accuracy, efficacy, and practicality of the presented RSC method.

  14. Correcting for Systematic Bias in Sample Estimates of Population Variances: Why Do We Divide by n-1?

    Science.gov (United States)

    Mittag, Kathleen Cage

    An important topic presented in introductory statistics courses is the estimation of population parameters using samples. Students learn that when estimating population variances using sample data, we always get an underestimate of the population variance if we divide by n rather than n-1. One implication of this correction is that the degree of…
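
    The underestimate is easy to demonstrate numerically: averaged over many small samples, the divide-by-n estimator converges to (n-1)/n times the population variance, while dividing by n-1 (Bessel's correction) is unbiased. A short simulation sketch:

```python
import random

def variance(xs, ddof=0):
    """Sample variance; ddof=1 applies Bessel's correction (divide by n-1)."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - ddof)

# Draw many small samples from a population with known variance 1.0 and
# average the two estimators.
random.seed(0)
n = 5
trials = 20000
biased, corrected = 0.0, 0.0
for _ in range(trials):
    xs = [random.gauss(0, 1) for _ in range(n)]
    biased += variance(xs, ddof=0)
    corrected += variance(xs, ddof=1)
print(biased / trials)     # ≈ 0.8, i.e. (n-1)/n times the true variance
print(corrected / trials)  # ≈ 1.0
```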

  15. Micro and Nano Techniques for the Handling of Biological Samples

    DEFF Research Database (Denmark)

    Micro and Nano Techniques for the Handling of Biological Samples reviews the different techniques available to manipulate and integrate biological materials in a controlled manner, either by sliding them along a surface (2-D manipulation), or by gripping and moving them to a new position (3-D...

  16. Calibrating the X-ray attenuation of liquid water and correcting sample movement artefacts during in operando synchrotron X-ray radiographic imaging of polymer electrolyte membrane fuel cells.

    Science.gov (United States)

    Ge, Nan; Chevalier, Stéphane; Hinebaugh, James; Yip, Ronnie; Lee, Jongmin; Antonacci, Patrick; Kotaka, Toshikazu; Tabuchi, Yuichiro; Bazylak, Aimy

    2016-03-01

    Synchrotron X-ray radiography, due to its high temporal and spatial resolutions, provides a valuable means for understanding the in operando water transport behaviour in polymer electrolyte membrane fuel cells. The purpose of this study is to address the specific artefact of imaging sample movement, which poses a significant challenge to synchrotron-based imaging for fuel cell diagnostics. Specifically, the impact of the micrometer-scale movement of the sample was determined, and a correction methodology was developed. At a photon energy level of 20 keV, a maximum movement of 7.5 µm resulted in a false water thickness of 0.93 cm (9% higher than the maximum amount of water that the experimental apparatus could physically contain). This artefact was corrected by image translations based on the relationship between the false water thickness value and the distance moved by the sample. The implementation of this correction method led to a significant reduction in false water thickness (to ∼0.04 cm). Furthermore, to account for inaccuracies in pixel intensities due to the scattering effect and higher harmonics, a calibration technique was introduced for the liquid water X-ray attenuation coefficient, which was found to be 0.657 ± 0.023 cm(-1) at 20 keV. The work presented in this paper provides valuable tools for artefact compensation and accuracy improvements for dynamic synchrotron X-ray imaging of fuel cells.
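
    The calibrated attenuation coefficient lets one convert measured transmission into liquid water thickness via the Beer-Lambert law. A minimal sketch (the intensity values below are invented for illustration; only the coefficient comes from the abstract):

```python
import math

MU_WATER = 0.657  # cm^-1, calibrated value at 20 keV reported in the abstract

def water_thickness(i_wet, i_dry, mu=MU_WATER):
    """Liquid water thickness from X-ray transmission (Beer-Lambert law):
    I_wet = I_dry * exp(-mu * t)  =>  t = ln(I_dry / I_wet) / mu
    """
    return math.log(i_dry / i_wet) / mu

# 5% extra attenuation relative to the dry reference image:
t = water_thickness(i_wet=0.95, i_dry=1.00)
print(round(t, 4))  # ≈ 0.0781 cm
```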

  17. Effect of tubing length on the dispersion correction of an arterially sampled input function for kinetic modeling in PET.

    Science.gov (United States)

    O'Doherty, Jim; Chilcott, Anna; Dunn, Joel

    2015-11-01

    Arterial sampling with dispersion correction is routinely performed for kinetic analysis of PET studies. Because of the advent of PET-MRI systems, non-MR-safe instrumentation must be kept outside the scan room, which requires longer tubing between the patient and the detector and thus worsens the effects of dispersion. We examined the effects of dispersion in idealized radioactive blood studies using various lengths of tubing (1.5, 3, and 4.5 m) and applied a well-known transmission-dispersion model to correct the resulting traces. A simulation study was also carried out to examine the noise characteristics of the model. The model was applied to patient traces acquired with 1.5 m tubing and extended to its use at 3 m. Satisfactory dispersion correction of the blood traces was achieved for the 1.5 m line. Predictions on the basis of experimental measurements, numerical simulations and noise analysis of the resulting traces show that corrections of blood data can also be achieved using the 3 m tubing. The effects of dispersion in the 4.5 m line could not be corrected by the selected transmission-dispersion model. On the basis of our setup, dispersion in arterial sampling tubing up to 3 m long can be corrected with the transmission-dispersion model; the model could not dispersion-correct data acquired using 4.5 m arterial tubing.
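
    One widely used transmission-dispersion model treats the measured curve as the true input convolved with a monoexponential kernel of time constant τ, in which case the correction is u(t) = g(t) + τ·dg/dt. The sketch below implements that form; whether it matches the exact model selected in the paper is an assumption:

```python
import numpy as np

def dispersion_correct(g, dt, tau):
    """Monoexponential dispersion correction of a sampled blood curve:
        u(t) = g(t) + tau * dg/dt
    g:   measured (dispersed) activity curve, sampled every dt seconds
    tau: dispersion time constant of the tubing, in seconds
    """
    return g + tau * np.gradient(g, dt)

# Sanity check: a unit step dispersed with tau = 1 s becomes
# g(t) = 1 - exp(-t); the correction recovers the step (away from t = 0).
t = np.linspace(0.0, 10.0, 1001)
g = 1.0 - np.exp(-t)
u = dispersion_correct(g, t[1] - t[0], 1.0)
```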

  18. Methodological integrative review of the work sampling technique used in nursing workload research.

    Science.gov (United States)

    Blay, Nicole; Duffield, Christine M; Gallagher, Robyn; Roche, Michael

    2014-11-01

    To critically review the work sampling technique used in nursing workload research. Work sampling is a technique frequently used by researchers and managers to explore and measure nursing activities. However, work sampling methods used are diverse making comparisons of results between studies difficult. Methodological integrative review. Four electronic databases were systematically searched for peer-reviewed articles published between 2002-2012. Manual scanning of reference lists and Rich Site Summary feeds from contemporary nursing journals were other sources of data. Articles published in the English language between 2002-2012 reporting on research which used work sampling to examine nursing workload. Eighteen articles were reviewed. The review identified that the work sampling technique lacks a standardized approach, which may have an impact on the sharing or comparison of results. Specific areas needing a shared understanding included the training of observers and subjects who self-report, standardization of the techniques used to assess observer inter-rater reliability, sampling methods and reporting of outcomes. Work sampling is a technique that can be used to explore the many facets of nursing work. Standardized reporting measures would enable greater comparison between studies and contribute to knowledge more effectively. Author suggestions for the reporting of results may act as guidelines for researchers considering work sampling as a research method. © 2014 John Wiley & Sons Ltd.

  19. Correct liquid scintillation counting of steroids and glycosides in RIA samples: a comparison of xylene-based, dioxane-based and colloidal counting systems. Chapter 14

    International Nuclear Information System (INIS)

    Spolders, H.

    1977-01-01

    In RIA, the following parameters are important for accurate liquid scintillation counting: (1) absence of chemiluminescence; (2) stability of count rate; (3) dissolving properties for the sample. For samples with varying colours, a quench correction must be applied. For any type of accurate quench correction, a homogeneous sample is necessary. This can be obtained if the proteins and the buffer can be dissolved completely in the scintillator solution. In this paper, these criteria are compared in xylene-based, dioxane-based and colloidal scintillation solutions for either bound or free antigens of different polarity. The labelling radioisotope used was ³H. Using colloidal scintillators with plasma and buffer samples, phasing or sedimentation of salt or proteins sometimes occurs. The influence of sedimentation or phasing on count rate stability and correct quench correction is illustrated by varying the ratio between the scintillator solution and a RIA sample containing the semi-polar steroid aldosterone. (author)

  20. Inverted Nipple Correction with Selective Dissection of Lactiferous Ducts Using an Operative Microscope and a Traction Technique.

    Science.gov (United States)

    Sowa, Yoshihiro; Itsukage, Sizu; Morita, Daiki; Numajiri, Toshiaki

    2017-10-01

    An inverted nipple is a common congenital condition in young women that may cause breastfeeding difficulty, psychological distress, repeated inflammation, and loss of sensation. Various surgical techniques have been reported for correction of inverted nipples, and all have advantages and disadvantages. Here, we report a new technique for correction of an inverted nipple using an operative microscope and traction that results in low recurrence and preserves lactation function and sensation. Between January 2010 and January 2013, we treated eight inverted nipples in seven patients with selective lactiferous duct dissection using an operative microscope. An opposite Z-plasty was added at the junction of the nipple and areola. Postoperatively, traction was applied through an apparatus made from a rubber gasket attached to a sterile syringe. Patients were followed up for 15-48 months. Adequate projection was achieved in all patients, and there was no wound dehiscence or complications such as infection. Three patients had successful pregnancies and subsequent breastfeeding that was not adversely affected by the treatment. There was no loss of sensation in any patient during the postoperative period. Our technique for treating an inverted nipple is effective and preserves lactation function and nipple sensation. The method maintains traction for a longer period, which we believe increases the success rate of the surgery for correction of severely inverted nipples.

  1. Systematic comparison of static and dynamic headspace sampling techniques for gas chromatography.

    Science.gov (United States)

    Kremser, Andreas; Jochmann, Maik A; Schmidt, Torsten C

    2016-09-01

    Six automated, headspace-based sample preparation techniques were used to extract volatile analytes from water, with the goal of establishing a systematic comparison between commonly available instrumental alternatives. To that end, these six techniques were used in conjunction with the same gas chromatography instrument for analysis of a common set of volatile organic carbon (VOC) analytes. The methods were divided into three classes: static sampling (by syringe or loop), static enrichment (SPME and PAL SPME Arrow), and dynamic enrichment (ITEX and trap sampling). For PAL SPME Arrow, different sorption phase materials were also included in the evaluation. To enable an effective comparison, method detection limits (MDLs), relative standard deviations (RSDs), and extraction yields were determined and are discussed for all techniques. While static sampling techniques exhibited extraction yields (approx. 10-20%) sufficient for reliable use down to approx. 100 ng L⁻¹, enrichment techniques displayed extraction yields of up to 80%, resulting in MDLs down to the picogram-per-liter range. RSDs for all techniques were below 27%. The choice among the different instrumental modes of operation (the aforementioned classes) was the most influential parameter in terms of extraction yields and MDLs. Individual methods within each class showed smaller deviations, and the smallest influence was observed when evaluating different sorption phase materials for the individual enrichment techniques. The option of selecting specialized sorption phase materials may, however, be more important when analyzing analytes with different properties such as high polarity or the capability of specific molecular interactions. Graphical Abstract: PAL SPME Arrow during the extraction of volatile analytes from the headspace of an aqueous sample.

  2. Assessment of radioactivity for 24 hours urine sample depending on correction factor by using creatinine

    International Nuclear Information System (INIS)

    Kharita, M. H.; Maghrabi, M.

    2006-09-01

    Assessment of intake and internal dose requires knowing the amount of radioactivity in a 24-hour urine sample. It is sometimes difficult to obtain a 24-hour sample because the collection is inconvenient and, in most cases, workers refuse to collect this amount of urine. This work focuses on finding a correction factor for the 24-hour sample based on the amount of creatinine in the sample, whatever its size. The 24-hour excretion of the radionuclide is then calculated from the amounts of activity and creatinine in the urine sample, assuming an average creatinine excretion rate of 1.7 g per 24 hours. Several urine samples were collected from occupationally exposed workers; the amounts and ratios of creatinine and activity in these samples were determined and then normalized to the 24-hour excretion of the radionuclide. The average chemical recovery was 77%. It should be emphasized that this method should only be used if a 24-hour sample cannot be collected. (author)
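
    The creatinine normalization described above reduces to a simple scaling. A hypothetical sketch (function name and numbers invented), assuming the abstract's average excretion rate of 1.7 g creatinine per 24 hours:

```python
def activity_per_24h(activity_sample_bq, creatinine_sample_g,
                     daily_creatinine_g=1.7):
    """Scale the activity found in a spot urine sample to an estimated
    24-hour excretion, assuming an average of 1.7 g creatinine excreted
    per 24 h (the value used in the abstract)."""
    return activity_sample_bq * (daily_creatinine_g / creatinine_sample_g)

# A sample containing 0.85 g creatinine holds half a day's excretion,
# so the measured activity is doubled:
print(activity_per_24h(12.0, 0.85))  # 24.0
```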

  3. Height drift correction in non-raster atomic force microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Travis R. [Department of Mathematics, University of California Los Angeles, Los Angeles, CA 90095 (United States); Ziegler, Dominik [Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Brune, Christoph [Institute for Computational and Applied Mathematics, University of Münster (Germany); Chen, Alex [Statistical and Applied Mathematical Sciences Institute, Research Triangle Park, NC 27709 (United States); Farnham, Rodrigo; Huynh, Nen; Chang, Jen-Mei [Department of Mathematics and Statistics, California State University Long Beach, Long Beach, CA 90840 (United States); Bertozzi, Andrea L., E-mail: bertozzi@math.ucla.edu [Department of Mathematics, University of California Los Angeles, Los Angeles, CA 90095 (United States); Ashby, Paul D., E-mail: pdashby@lbl.gov [Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2014-02-01

    We propose a novel method to detect and correct drift in non-raster scanning probe microscopy. In conventional raster scanning, drift is usually corrected by subtracting a fitted polynomial from each scan line, but sample tilt or large topographic features can result in severe artifacts. Our method uses self-intersecting scan paths to distinguish drift from topographic features. Observing the height differences when passing the same position at different times enables the reconstruction of a continuous function of drift. We show that a small number of self-intersections is adequate for automatic and reliable drift correction. Additionally, we introduce a fitness function which provides a quantitative measure of drift correctability for any arbitrary scan shape. - Highlights: • We propose a novel height drift correction method for non-raster SPM. • Self-intersecting scans enable the distinction of drift from topographic features. • Unlike conventional techniques, our method is unsupervised and tilt-invariant. • We introduce a fitness measure to quantify correctability for general scan paths.
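The core idea, that a height mismatch at a self-intersection is attributable to drift rather than topography, can be illustrated with a deliberately simplified model. The sketch below fits a single linear drift rate by least squares to synthetic intersection data; the paper itself reconstructs a more general continuous drift function:

```python
# Minimal sketch of intersection-based drift correction: when a scan path
# crosses itself, the height measured at the same (x, y) position at two
# different times should agree, so any difference is drift. Here drift is
# modelled as linear in time; the intersection data are synthetic.
import numpy as np

def fit_linear_drift(t_pairs, dh):
    """Least-squares drift rate from self-intersection height differences.

    t_pairs -- (n, 2) array of (t1, t2) times the same point was visited
    dh      -- length-n array of h(t2) - h(t1) at each intersection
    """
    dt = t_pairs[:, 1] - t_pairs[:, 0]
    return float(np.sum(dh * dt) / np.sum(dt * dt))

# Synthetic example: true drift rate 0.02 nm/s plus measurement noise.
rng = np.random.default_rng(0)
t_pairs = rng.uniform(0.0, 100.0, size=(50, 2))
t_pairs.sort(axis=1)
true_rate = 0.02
dh = true_rate * (t_pairs[:, 1] - t_pairs[:, 0]) + rng.normal(0, 1e-3, 50)

rate = fit_linear_drift(t_pairs, dh)
corrected = dh - rate * (t_pairs[:, 1] - t_pairs[:, 0])  # residual mismatch
```

With the drift rate recovered, the residual height mismatches at the intersections shrink to the noise level, which is the sense in which a small number of self-intersections already constrains the drift.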

  4. Application of bias correction methods to improve U{sub 3}Si{sub 2} sample preparation for quantitative analysis by WDXRF

    Energy Technology Data Exchange (ETDEWEB)

    Scapin, Marcos A.; Guilhen, Sabine N.; Azevedo, Luciana C. de; Cotrim, Marycel E.B.; Pires, Maria Ap. F., E-mail: mascapin@ipen.br, E-mail: snguilhen@ipen.br, E-mail: lvsantana@ipen.br, E-mail: mecotrim@ipen.br, E-mail: mapires@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    The determination of silicon (Si), total uranium (U) and impurities in uranium-silicide (U{sub 3}Si{sub 2}) samples by the wavelength dispersive X-ray fluorescence technique (WDXRF) has already been validated and is currently implemented at IPEN's X-Ray Fluorescence Laboratory (IPEN-CNEN/SP) in São Paulo, Brazil. Sample preparation requires approximately 3 g of H{sub 3}BO{sub 3} as sample holder and 1.8 g of U{sub 3}Si{sub 2}. However, because boron is a neutron absorber, this procedure precludes recovery of the U{sub 3}Si{sub 2} sample, which, over time, given routine analysis, may add up to a significant amount of unusable uranium waste. An estimated average of 15 samples per month is expected to be analyzed by WDXRF, resulting in approx. 320 g of U{sub 3}Si{sub 2} that would not return to the nuclear fuel cycle. This not only results in production losses, but also creates another problem: radioactive waste management. The purpose of this paper is to present the mathematical models that may be applied to correct the systematic errors introduced when the H{sub 3}BO{sub 3} sample holder is replaced by cellulose acetate {[C{sub 6}H{sub 7}O{sub 2}(OH){sub 3-m}(OOCCH{sub 3}){sub m}], m = 0∼3}, thus enabling recovery of the U{sub 3}Si{sub 2} sample. The results demonstrate that the adopted mathematical model is statistically satisfactory, allowing the optimization of the procedure. (author)

  5. RF Sub-sampling Receiver Architecture based on Milieu Adapting Techniques

    DEFF Research Database (Denmark)

    Behjou, Nastaran; Larsen, Torben; Jensen, Ole Kiel

    2012-01-01

    A novel sub-sampling based architecture is proposed which has the ability of reducing the problem of image distortion and improving the signal to noise ratio significantly. The technique is based on sensing the environment and adapting the sampling rate of the receiver to the best possible...

  6. X-ray fluorescence microscopy artefacts in elemental maps of topologically complex samples: Analytical observations, simulation and a map correction method

    Science.gov (United States)

    Billè, Fulvio; Kourousias, George; Luchinat, Enrico; Kiskinova, Maya; Gianoncelli, Alessandra

    2016-08-01

    XRF spectroscopy is among the most widely used non-destructive techniques for elemental analysis. Despite the known angular dependence of X-ray fluorescence (XRF), topological artefacts remain an unresolved issue when using X-ray micro- or nano-probes. In this work we investigate the origin of these artefacts in XRF imaging of topologically complex samples, a problem that is particularly acute in studies of organic matter because of the short travel distances of the low-energy XRF emitted by light elements. In particular we mapped Human Embryonic Kidney (HEK293T) cells. The exemplary results with biological samples, obtained with a soft X-ray scanning microscope installed at a synchrotron facility, were used for testing a mathematical model based on detector response simulations, and for proposing an artefact correction method based on directional derivatives. Despite the peculiar and specific application, the methodology can be easily extended to hard X-rays and to set-ups with multi-array detector systems when the dimensions of surface reliefs are of the order of the probing beam size.
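To make the directional-derivative idea concrete, the toy sketch below assumes a first-order model in which emission toward a side-mounted detector is modulated by the topographic slope along the detector azimuth; the linear model and the gain k are assumptions for the demonstration, not the paper's calibrated detector-response model:

```python
# Illustrative sketch of an artefact-correction step driven by directional
# derivatives: on a tilted or structured surface, emission toward a
# side-mounted detector is modulated by the topographic slope along the
# detector direction, so a first-order correction divides the measured map
# by a slope-dependent factor. The linear model and gain k are assumptions
# for this demo.
import numpy as np

def directional_derivative(topo, azimuth_rad):
    gy, gx = np.gradient(topo)              # d/dy, d/dx of the height map
    return gx * np.cos(azimuth_rad) + gy * np.sin(azimuth_rad)

def correct_map(measured, topo, azimuth_rad, k):
    factor = 1.0 + k * directional_derivative(topo, azimuth_rad)
    return measured / np.clip(factor, 0.2, None)   # avoid division blow-ups

# Forward-simulate the artefact on a flat elemental map over a ramp, then
# correct it with the same model.
topo = np.outer(np.ones(32), np.linspace(0.0, 8.0, 32))   # ramp along x
true_map = np.ones((32, 32))
azimuth, k = 0.0, 1.5
artefact = 1.0 + k * directional_derivative(topo, azimuth)
measured = true_map * artefact
recovered = correct_map(measured, topo, azimuth, k)
```

Because the forward simulation and the correction share the same slope model, the flat map is recovered exactly; on real data the slope term must be calibrated against the detector geometry.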

  7. Improved importance sampling technique for efficient simulation of digital communication systems

    Science.gov (United States)

    Lu, Dingqing; Yao, Kung

    1988-01-01

    A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed derivations of the simulation estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these derivations are applied to the specific, previously known conventional importance sampling (CIS) technique and to the new IIS technique. The derivation for a memoryless linear system with no signal randomness is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and of IIS over CIS for simulations of digital communication systems.
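The translation idea behind IIS can be demonstrated on the simplest rare-event problem: estimating a Gaussian tail probability, which stands in for a bit-error probability. The parameters below are illustrative and not taken from the paper:

```python
# Hedged sketch of translation-based importance sampling: to estimate a
# small error probability P(X > gamma) for Gaussian noise, draw samples
# from a density translated into the rare region and reweight each hit by
# the likelihood ratio phi(x) / phi(x - gamma) = exp(-gamma*x + gamma^2/2).
import math
import random

def is_tail_probability(gamma, n, seed=1):
    """Estimate P(X > gamma), X ~ N(0, 1), sampling from N(gamma, 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(gamma, 1.0)            # translated sampling density
        if x > gamma:
            total += math.exp(-gamma * x + 0.5 * gamma * gamma)
    return total / n

gamma = 4.0
estimate = is_tail_probability(gamma, 20000)
# Exact value via the complementary error function, for comparison.
exact = 0.5 * math.erfc(gamma / math.sqrt(2.0))
```

Plain MC would need on the order of 1/P samples (tens of millions here) to see even a handful of threshold crossings, whereas the translated density puts roughly half the samples in the rare region, which is the variance advantage the abstract quantifies.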

  8. Neutron borehole logging correction technique

    International Nuclear Information System (INIS)

    Goldman, L.H.

    1978-01-01

    In accordance with an illustrative embodiment of the present invention, a method and apparatus are disclosed for logging earth formations traversed by a borehole, in which an earth formation is irradiated with neutrons and the gamma radiation produced thereby in the formation and in the borehole is detected. A sleeve or shield for capturing neutrons from the borehole and producing gamma radiation characteristic of that capture is provided to give an indication of the contribution of borehole capture events to the total detected gamma radiation. It is then possible to correct the total detected gamma radiation, and any earth formation parameters determined therefrom, for those borehole effects

  9. Cone penetrometer tests and HydroPunch sampling: A screening technique for plume definition

    International Nuclear Information System (INIS)

    Smolley, M.; Kappmeyer, J.C.

    1991-01-01

    Cone penetrometer tests and HydroPunch sampling were used to define the extent of volatile organic compounds in ground water. The investigation indicated that the combination of these techniques is effective for obtaining ground water samples for preliminary plume definition. HydroPunch samples can be collected in unconsolidated sediments, and the analytical results obtained from these samples are comparable to those obtained from adjacent monitoring wells. This sampling method is a rapid and cost-effective screening technique for characterizing the extent of contaminant plumes in soft sediment environments. Use of this screening technique allowed monitoring wells to be located at the plume boundary, thereby reducing the number of wells installed and the overall cost of the plume definition program

  10. Chance constrained problems: penalty reformulation and performance of sample approximation technique

    Czech Academy of Sciences Publication Activity Database

    Branda, Martin

    2012-01-01

    Roč. 48, č. 1 (2012), s. 105-122 ISSN 0023-5954 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional research plan: CEZ:AV0Z10750506 Keywords : chance constrained problems * penalty functions * asymptotic equivalence * sample approximation technique * investment problem Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.619, year: 2012 http://library.utia.cas.cz/separaty/2012/E/branda-chance constrained problems penalty reformulation and performance of sample approximation technique.pdf

  11. Thermal neutron self-shielding correction factors for large sample instrumental neutron activation analysis using the MCNP code

    International Nuclear Information System (INIS)

    Tzika, F.; Stamatelatos, I.E.

    2004-01-01

    Thermal neutron self-shielding within large samples was studied using the Monte Carlo neutron transport code MCNP. The code enabled a three-dimensional modeling of the actual source and geometry configuration including reactor core, graphite pile and sample. Neutron flux self-shielding correction factors derived for a set of materials of interest for large sample neutron activation analysis are presented and evaluated. Simulations were experimentally verified by measurements performed using activation foils. The results of this study can be applied in order to determine neutron self-shielding factors of unknown samples from the thermal neutron fluxes measured at the surface of the sample
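As a point of reference for what such a factor expresses, the textbook flat-flux approximation for a purely absorbing slab has a closed form; this is only an analytic illustration, not a substitute for the three-dimensional MCNP model used in the paper:

```python
# Illustration of a thermal self-shielding factor. For a slab irradiated
# from one side, the flux-averaged factor has the well-known closed form
# G = (1 - exp(-x)) / x with x = Sigma_t * thickness. This textbook
# approximation stands in for the full MCNP calculation described above.
import math

def slab_self_shielding(sigma_t_cm1, thickness_cm):
    """Flat-incident-flux self-shielding factor for an absorbing slab."""
    x = sigma_t_cm1 * thickness_cm
    if x == 0.0:
        return 1.0
    return (1.0 - math.exp(-x)) / x

# A thin sample barely perturbs the flux; a thick one shields itself.
g_thin = slab_self_shielding(0.1, 0.1)    # x = 0.01, G close to 1
g_thick = slab_self_shielding(0.5, 10.0)  # x = 5.0, strong shielding
```

Large samples push x well above 1, which is why a simple surface flux measurement must be combined with a computed factor like G to recover the volume-averaged activation.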

  12. Errors and corrections in the separation of spin-flip and non-spin-flip thermal neutron scattering using the polarization analysis technique

    International Nuclear Information System (INIS)

    Williams, W.G.

    1975-01-01

    The use of the polarization analysis technique to separate spin-flip from non-spin-flip thermal neutron scattering is especially important in determining magnetic scattering cross-sections. In order to associate a spin-flip ratio in the scattering with a particular scattering process, it is necessary to correct the experimentally observed 'flipping-ratio' for the efficiencies of the vital instrument components (polarizers and spin-flippers), as well as for multiple scattering effects in the sample. Analytical expressions for these corrections are presented and their magnitudes in typical cases estimated. The errors in measurement depend strongly on the uncertainties in the calibration of the efficiencies of the polarizers and the spin-flipper. The final section is devoted to a discussion of polarization analysis instruments
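A toy version of the correction problem shows why imperfect efficiencies must be calibrated out. The 2x2 mixing model below, combined polarizer/analyser efficiency p and flipper efficiency f, is a simplified assumption for illustration; real instruments require the full analytical corrections discussed in the paper:

```python
# Toy flipping-ratio correction: with imperfect polarizer/analyser
# (combined efficiency p) and flipper (efficiency f), the measured
# flipper-off/on intensities are a linear mixture of the true
# non-spin-flip and spin-flip cross sections. This 2x2 model is a
# simplified assumption for illustration only.
import numpy as np

def mixing_matrix(p, f):
    """Rows: flipper off / on; columns: true (nsf, sf) cross sections."""
    e = p * (2.0 * f - 1.0)   # effective polarization with flipper on
    return np.array([[(1 + p) / 2, (1 - p) / 2],
                     [(1 - e) / 2, (1 + e) / 2]])

def correct_cross_sections(i_off, i_on, p, f):
    """Invert the mixing model to recover (sigma_nsf, sigma_sf)."""
    m = mixing_matrix(p, f)
    return np.linalg.solve(m, np.array([i_off, i_on]))

# Forward-simulate a purely non-spin-flip scatterer, then correct.
p, f = 0.95, 0.99
true_sigma = np.array([10.0, 0.0])        # (nsf, sf)
measured = mixing_matrix(p, f) @ true_sigma
recovered = correct_cross_sections(measured[0], measured[1], p, f)
```

Even with no true spin-flip scattering, the measured flipper-on channel is non-zero, so the raw flipping ratio is finite; only after inverting the efficiency model is the spin-flip cross section recovered as zero.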

  13. Sampling methods and non-destructive examination techniques for large radioactive waste packages

    International Nuclear Information System (INIS)

    Green, T.H.; Smith, D.L.; Burgoyne, K.E.; Maxwell, D.J.; Norris, G.H.; Billington, D.M.; Pipe, R.G.; Smith, J.E.; Inman, C.M.

    1992-01-01

    Progress is reported on work undertaken to evaluate quality checking methods for radioactive wastes. A sampling rig was designed, fabricated and used to develop techniques for the destructive sampling of cemented simulant waste using remotely operated equipment. An engineered system for the containment of cooling water was designed and manufactured and successfully demonstrated with the drum and coring equipment mounted in both vertical and horizontal orientations. The preferred in-cell orientation was found to be with the drum and coring machinery mounted in a horizontal position. Small powdered samples can be taken from cemented homogeneous waste cores using a hollow drill/vacuum section technique with the preferred subsampling technique being to discard the outer 10 mm layer to obtain a representative sample of the cement core. Cement blends can be dissolved using fusion techniques and the resulting solutions are stable to gelling for periods in excess of one year. Although hydrochloric acid and nitric acid are promising solvents for dissolution of cement blends, the resultant solutions tend to form silicic acid gels. An estimate of the beta-emitter content of cemented waste packages can be obtained by a combination of non-destructive and destructive techniques. The errors will probably be in excess of +/-60 % at the 95 % confidence level. Real-time X-ray video-imaging techniques have been used to analyse drums of uncompressed, hand-compressed, in-drum compacted and high-force compacted (i.e. supercompacted) simulant waste. The results have confirmed the applicability of this technique for NDT of low-level waste. 8 refs., 12 figs., 3 tabs

  14. A simple method for regional cerebral blood flow measurement by one-point arterial blood sampling and 123I-IMP microsphere model (part 2). A study of time correction of one-point blood sample count

    International Nuclear Information System (INIS)

    Masuda, Yasuhiko; Makino, Kenichi; Gotoh, Satoshi

    1999-01-01

    In our previous paper on determination of regional cerebral blood flow (rCBF) using the 123I-IMP microsphere model, we reported that the accuracy of determination of the integrated value of the input function from one-point arterial blood sampling can be increased by performing a correction using the 5 min:29 min ratio for the whole-brain count. However, failure to carry out the arterial blood collection at exactly 5 minutes after 123I-IMP injection causes errors with this method, and there is thus a time limitation. We have now revised our method so that the one-point arterial blood sampling can be performed at any time between 5 and 20 minutes after 123I-IMP injection, with the addition of a correction step for the sampling time. This revised method permits more accurate estimation of the integral of the input function. The method was then applied to 174 subjects: one-point blood samples were collected at random times between 5 and 20 minutes, and the estimated values for the continuous arterial octanol extraction count (COC) were determined. The mean error rate between the COC and the actually measured continuous arterial octanol extraction count (OC) was 3.6%, and the standard deviation was 12.7%. Accordingly, in 70% of the cases the rCBF could be estimated within an error rate of 13%, while in 95% of the cases estimation was possible within an error rate of 25%. This improved method is a simple technique for determination of rCBF by the 123I-IMP microsphere model and one-point arterial blood sampling which no longer has a time limitation and does not require any octanol extraction step. (author)

  15. PETPVC: a toolbox for performing partial volume correction techniques in positron emission tomography

    Science.gov (United States)

    Thomas, Benjamin A.; Cuplov, Vesna; Bousse, Alexandre; Mendes, Adriana; Thielemans, Kris; Hutton, Brian F.; Erlandsson, Kjell

    2016-11-01

    Positron emission tomography (PET) images are degraded by a phenomenon known as the partial volume effect (PVE). Approaches have been developed to reduce PVEs, typically through the utilisation of structural information provided by other imaging modalities such as MRI or CT. These methods, known as partial volume correction (PVC) techniques, reduce PVEs by compensating for the effects of the scanner resolution, thereby improving the quantitative accuracy. The PETPVC toolbox described in this paper comprises a suite of methods, both classic and more recent approaches, for the purposes of applying PVC to PET data. Eight core PVC techniques are available. These core methods can be combined to create a total of 22 different PVC techniques. Simulated brain PET data are used to demonstrate the utility of the toolbox in idealised conditions, the effects of applying PVC with mismatched point-spread function (PSF) estimates, and the potential of novel hybrid PVC methods to improve the quantification of lesions. All anatomy-based PVC techniques achieve complete recovery of the PET signal in cortical grey matter (GM) when performed in idealised conditions. Applying deconvolution-based approaches results in incomplete recovery due to premature termination of the iterative process. PVC techniques are sensitive to PSF mismatch, causing a bias of up to 16.7% in GM recovery when over-estimating the PSF by 3 mm. The recovery of both GM and a simulated lesion was improved by combining two PVC techniques together. The PETPVC toolbox has been written in C++, supports Windows, Mac and Linux operating systems, is open-source and publicly available.
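The deconvolution-based family of PVC methods mentioned above can be illustrated in one dimension. The sketch below is a generic Richardson-Lucy restoration of a blurred "hot region", not PETPVC's code; the PSF width and iteration count are arbitrary choices, and the incomplete recovery after a finite number of iterations mirrors the early-termination effect the abstract reports:

```python
# Minimal 1-D illustration of deconvolution-based PVC: blur a narrow
# uniform "hot region" with a Gaussian PSF (the partial volume effect)
# and partially restore it with Richardson-Lucy iterations.
import numpy as np

def gaussian_psf(size, sigma):
    x = np.arange(size) - size // 2
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def richardson_lucy(blurred, psf, iterations):
    estimate = np.full_like(blurred, blurred.mean())  # flat initial guess
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        conv = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate

truth = np.zeros(64)
truth[28:34] = 1.0                        # idealised uniform hot region
psf = gaussian_psf(17, sigma=2.5)
observed = np.convolve(truth, psf, mode="same")   # PVE-degraded signal
restored = richardson_lucy(observed, psf, iterations=50)
```

Because the region is narrow relative to the PSF, the observed peak falls well below the true value; the iterations raise it back toward 1 while preserving non-negativity, but stopping after finitely many iterations leaves the recovery incomplete.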

  16. A Comparison of Soil-Water Sampling Techniques

    Science.gov (United States)

    Tindall, J. A.; Figueroa-Johnson, M.; Friedel, M. J.

    2007-12-01

    The representativeness of soil pore water extracted by suction lysimeters in ground-water monitoring studies is a problem that often confounds interpretation of measured data. Current soil-water sampling techniques cannot identify the soil volume from which a pore water sample is extracted, whether macroscopic pores, microscopic pores, or preferential flowpaths. This research was undertaken to compare suction lysimeter samples extracted from intact soil cores with samples obtained by direct extraction methods, to determine what portion of soil pore water is sampled by each method. Intact soil cores (30 centimeter (cm) diameter by 40 cm height) were extracted from two different sites - a sandy soil near Altamonte Springs, Florida and a clayey soil near Centralia in Boone County, Missouri. Isotopically labeled water (18O, analyzed by mass spectrometry) and bromide concentrations (KBr, measured using ion chromatography) in water samples taken by suction lysimeters were compared with samples obtained by the direct extraction methods of centrifugation and azeotropic distillation. Water samples collected by direct extraction were about 0.25‰ more negative (depleted) than suction lysimeter values from a sandy soil, and about 2-7‰ more negative from a well-structured clayey soil. Results indicate that the majority of soil water in well-structured soil is strongly bound to soil grain surfaces and is not easily sampled by suction lysimeters. In cases where a sufficient volume of water has passed through the soil profile and displaced previous pore water, suction lysimeters will collect a representative sample of soil pore water from the sampled depth interval. It is suggested that for stable isotope studies monitoring precipitation and soil water, suction lysimeters should be installed at shallow depths (10 cm). Sampling should also be coordinated with precipitation events. The data also indicate that each extraction method may sample a different

  17. Can groundwater sampling techniques used in monitoring wells influence methane concentrations and isotopes?

    Science.gov (United States)

    Rivard, Christine; Bordeleau, Geneviève; Lavoie, Denis; Lefebvre, René; Malet, Xavier

    2018-03-06

    Methane concentrations and isotopic composition in groundwater are the focus of a growing number of studies. However, concerns are often expressed regarding the integrity of samples, as methane is very volatile and may partially exsolve during sample lifting in the well and transfer to sampling containers. While issues concerning bottle-filling techniques have already been documented, this paper documents a comparison of methane concentration and isotopic composition obtained with three devices commonly used to retrieve water samples from dedicated observation wells. This work lies within the framework of a larger project carried out in the Saint-Édouard area (southern Québec, Canada), whose objective was to assess the risk to shallow groundwater quality related to potential shale gas exploitation. The selected sampling devices, which were tested on ten wells during three sampling campaigns, consist of an impeller pump, a bladder pump, and disposable sampling bags (HydraSleeve). The sampling bags were used both before and after pumping, to verify the appropriateness of a no-purge approach, compared to the low-flow approach involving pumping until stabilization of field physicochemical parameters. Results show that methane concentrations obtained with the selected sampling techniques are usually similar and that there is no systematic bias related to a specific technique. Nonetheless, concentrations can sometimes vary quite significantly (up to 3.5 times) for a given well and sampling event. Methane isotopic composition obtained with all sampling techniques is very similar, except in some cases where sampling bags were used before pumping (no-purge approach), in wells where multiple groundwater sources enter the borehole.

  18. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial users' needs for advanced error correcting techniques.

  19. Quality-assurance techniques used with automated analysis of gamma-ray spectra

    International Nuclear Information System (INIS)

    Killian, E.W.; Koeppen, L.D.; Femec, D.A.

    1994-01-01

    In the course of developing gamma-ray spectrum analysis algorithms for use by the Radiation Measurements Laboratory at the Idaho National Engineering Laboratory (INEL), several techniques have been developed that enhance and verify the quality of the analytical results. The use of these quality-assurance techniques is critical when gamma-ray analysis results from low-level environmental samples are used in risk assessment or site restoration and cleanup decisions. This paper describes four of the quality-assurance techniques that are in routine use at the laboratory. They are used for all types of samples, from reactor effluents to environmental samples. The techniques include: (1) the use of precision pulsers (with subsequent removal) to validate the correct operation of the spectrometer electronics for each and every spectrum acquired, (2) the use of naturally occurring and cosmically induced radionuclides in samples to help verify that the data acquisition and analysis were performed properly, (3) the use of an ambient background correction technique that involves superimposing ("mapping") sample photopeak fitting parameters onto multiple background spectra for accurate and more consistent quantification of the background activities, (4) the use of interactive, computer-driven graphics to review the automated locating and fitting of photopeaks and to allow for manual fitting of photopeaks

  20. Development of environmental sample analysis techniques for safeguards

    International Nuclear Information System (INIS)

    Magara, Masaaki; Hanzawa, Yukiko; Esaka, Fumitaka

    1999-01-01

    JAERI has been developing environmental sample analysis techniques for safeguards and preparing a clean chemistry laboratory with clean rooms. Methods to be developed are a bulk analysis and a particle analysis. In the bulk analysis, Inductively-Coupled Plasma Mass Spectrometer or Thermal Ionization Mass Spectrometer are used to measure nuclear materials after chemical treatment of sample. In the particle analysis, Electron Probe Micro Analyzer and Secondary Ion Mass Spectrometer are used for elemental analysis and isotopic analysis, respectively. The design of the clean chemistry laboratory has been carried out and construction will be completed by the end of March, 2001. (author)

  1. EVALUATION OF THE METERED-DOSE INHALER TECHNIQUE AMONG HEALTHCARE PROVIDERS

    Directory of Open Access Journals (Sweden)

    E. Nadi F. Zeraati

    2005-07-01

    Full Text Available Poor inhaler technique is a common problem both in asthmatic patients and healthcare providers, which contributes to poor asthma control. This study was performed to evaluate the adequacy of metered-dose inhaler (MDI) technique in a sample of physicians and nurses practicing in hospitals of Hamadan University of Medical Sciences. A total of 173 healthcare providers voluntarily participated in this study. After the participants answered a questionnaire aimed at identifying their involvement in MDI prescribing and counseling, a trained observer assessed their MDI technique using a checklist of nine steps. Of the 173 participants, 35 (20.2%) were physicians and 138 (79.8%) were nurses. Only 12 participants (6.93%) performed all steps correctly. Physicians performed essential steps significantly better than nurses (85.7% vs. 63.8%, P < 0.05). The majority of healthcare providers responsible for instructing patients on the correct MDI technique were unable to perform this technique correctly, indicating the need for regular formal training programs on inhaler techniques.

  2. Respiratory lung motion analysis using a nonlinear motion correction technique for respiratory-gated lung perfusion SPECT images

    International Nuclear Information System (INIS)

    Ue, Hidenori; Haneishi, Hideaki; Iwanaga, Hideyuki; Suga, Kazuyoshi

    2007-01-01

    This study evaluated the respiratory motion of lungs using a nonlinear motion correction technique for respiratory-gated single photon emission computed tomography (SPECT) images. The motion correction technique corrects the respiratory motion of the lungs nonlinearly between images at two respiratory phases obtained by respiratory-gated SPECT. The displacement vectors resulting from respiration can be computed at every location of the lungs. Respiratory lung motion analysis is carried out by calculating the mean value of the body-axis component of the displacement vector in each of the 12 small regions into which the lungs were divided. In order to enable inter-patient comparison, the 12 mean values were normalized by the length of the lung region along the direction of the body axis. This method was applied to 25 Technetium (Tc)-99m-macroaggregated albumin (MAA) perfusion SPECT images, and motion analysis results were compared with the diagnostic results. It was confirmed that the respiratory lung motion reflects the ventilation function. A statistically significant difference in the amount of the respiratory lung motion was observed between the obstructive pulmonary diseases and other conditions, based on an unpaired Student's t test (P<0.0001). A difference in the motion between normal lungs and lungs with a ventilation obstruction was detected by the proposed method. This method is effective for evaluating obstructive pulmonary diseases such as pulmonary emphysema and diffuse panbronchiolitis. (author)
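The regional summary step, averaging the body-axis displacement over 12 lung sub-regions and normalising by the lung's axial extent, can be sketched as below. The 2 (left/right) x 6 (axial band) partition used here is an assumption for illustration, as is the synthetic displacement field:

```python
# Sketch of the regional motion summary: average the body-axis (z)
# component of a displacement field over 12 sub-regions of a lung mask and
# normalise by the lung's z-extent so patients can be compared. The
# 2 (left/right) x 6 (axial) partition is an assumed illustration.
import numpy as np

def regional_motion(dz, mask):
    """Mean normalised z-displacement in 12 regions of a masked volume.

    dz   -- (nz, ny, nx) z-component of the displacement field
    mask -- boolean array of the same shape marking lung voxels
    """
    zs, ys, xs = np.nonzero(mask)
    z0, z1 = zs.min(), zs.max()
    lung_length = float(z1 - z0 + 1)
    x_mid = (xs.min() + xs.max()) / 2.0
    z_edges = np.linspace(z0, z1 + 1, 7)           # 6 axial bands
    means = np.zeros((2, 6))
    for side in range(2):                          # 0 / 1: the two halves
        for band in range(6):
            sel = ((xs >= x_mid) == bool(side)) & \
                  (zs >= z_edges[band]) & (zs < z_edges[band + 1])
            if sel.any():
                means[side, band] = dz[zs[sel], ys[sel], xs[sel]].mean()
    return means / lung_length                     # normalised displacement

# Synthetic field: a uniform 2-voxel axial shift inside a toy "lung".
mask = np.zeros((12, 8, 8), dtype=bool)
mask[1:11, 2:6, 1:7] = True
dz = np.where(mask, 2.0, 0.0)
normalised = regional_motion(dz, mask)
```

Dividing by the lung length makes the 12 values dimensionless fractions of the lung's axial extent, which is what allows the inter-patient comparison described in the abstract.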

  3. Single-particle characterization of ice-nucleating particles and ice particle residuals sampled by three different techniques

    Science.gov (United States)

    Worringen, A.; Kandler, K.; Benker, N.; Dirsch, T.; Mertes, S.; Schenk, L.; Kästner, U.; Frank, F.; Nillius, B.; Bundke, U.; Rose, D.; Curtius, J.; Kupiszewski, P.; Weingartner, E.; Vochezer, P.; Schneider, J.; Schmidt, S.; Weinbruch, S.; Ebert, M.

    2015-04-01

    -400 nm in geometric diameter. In a few cases, a second supermicron maximum was identified. Soot/carbonaceous material and metal oxides were present mainly in the sub-micrometer range. Silicates and Ca-rich particles were mainly found with diameters above 1 μm (using ISI and FINCH), in contrast to the Ice-CVI which also sampled many submicron particles of both groups. Due to changing meteorological conditions, the INP/IPR composition was highly variable when different samples were compared. Thus, the observed discrepancies between the different separation techniques may partly result from the non-parallel sampling. The differences in the relative number abundance of the particle groups, as well as in the mixing state of the INP/IPR, clearly demonstrate the need for further studies to better understand the influence of the separation techniques on the INP/IPR chemical composition. Also, it must be concluded that the abundance of contamination artifacts in the separated INP and IPR is generally large and should be corrected for, emphasizing the need for the accompanying chemical measurements. Thus, further work is needed to allow for routine operation of the three separation techniques investigated.

  4. Standardization of proton-induced x-ray emission technique for analysis of thick samples

    Science.gov (United States)

    Ali, Shad; Zeb, Johar; Ahad, Abdul; Ahmad, Ishfaq; Haneef, M.; Akbar, Jehan

    2015-09-01

    This paper describes the standardization of the proton-induced x-ray emission (PIXE) technique for finding the elemental composition of thick samples. For the standardization, three different samples of standard reference materials (SRMs) were analyzed using this technique and the data were compared with the already known data of these certified SRMs. These samples were selected in order to cover the maximum range of elements in the periodic table. Each sample was irradiated for three different values of collected beam charges at three different times. A proton beam of 2.57 MeV obtained using 5UDH-II Pelletron accelerator was used for excitation of x-rays from the sample. The acquired experimental data were analyzed using the GUPIXWIN software. The results show that the SRM data and the data obtained using the PIXE technique are in good agreement.

  5. Review of sample preparation techniques for the analysis of pesticide residues in soil.

    Science.gov (United States)

    Tadeo, José L; Pérez, Rosa Ana; Albero, Beatriz; García-Valcárcel, Ana I; Sánchez-Brunete, Consuelo

    2012-01-01

    This paper reviews the sample preparation techniques used for the analysis of pesticides in soil. The present status and recent advances made during the last 5 years in these methods are discussed. The analysis of pesticide residues in soil requires the extraction of analytes from this matrix, followed by a cleanup procedure, when necessary, prior to their instrumental determination. The optimization of sample preparation is a very important part of the method development that can reduce the analysis time, the amount of solvent, and the size of samples. This review considers all aspects of sample preparation, including extraction and cleanup. Classical extraction techniques, such as shaking, Soxhlet, and ultrasonic-assisted extraction, and modern techniques like pressurized liquid extraction, microwave-assisted extraction, solid-phase microextraction and QuEChERS (Quick, Easy, Cheap, Effective, Rugged, and Safe) are reviewed. The different cleanup strategies applied for the purification of soil extracts are also discussed. In addition, the application of these techniques to environmental studies is considered.

  6. Laboratory techniques for safe encapsulation of α-emitting powder samples

    International Nuclear Information System (INIS)

    Chamberlain, H.E.; Pottinger, J.S.

    1984-01-01

    Plutonium oxide powder samples can be encapsulated in thin plastic film to prevent spread of contamination in counting and X-ray diffraction equipment. The film has to be thin enough to transmit X-rays and α-particles. Techniques are described for the wrapping process and the precautions necessary to keep the sample processing line free of significant contamination. (author)

  7. Fission track dating of volcanic glass: experimental evidence for the validity of the Size-Correction Method

    International Nuclear Information System (INIS)

    Bernardes, C.; Hadler Neto, J.C.; Lattes, C.M.G.; Araya, A.M.O.; Bigazzi, G.; Cesar, M.F.

    1986-01-01

Two techniques may be employed for correcting thermally lowered fission-track ages on glass material: the so-called 'size-correction method' and the 'plateau method'. Several results from fission-track dating on obsidian were analysed in order to compare the model underlying the size-correction method with experimental evidence. The results from this work can be summarized as follows: 1) The assumption that the mean sizes of spontaneous and induced etched tracks are equal in samples unaffected by partial fading is supported by experimental results. If reactor effects exist, such as an enhancement of the etching rate in the irradiated fraction due to radiation damage and/or to the fact that induced fission releases slightly more energy than spontaneous fission, their influence on the size-correction method is very small. 2) The two correction techniques produce concordant results. 3) Several samples from the same obsidian, affected to different degrees by 'instantaneous' as well as 'continuous' natural fading, were analysed: the curve showing the decrease of spontaneous track mean size vs. the fraction of spontaneous tracks lost by fading is in close agreement with the correction curve constructed for the same obsidian by imparting artificial thermal treatments to induced tracks. From the above points one can conclude that the assumptions on which the size-correction method is based are well supported, at least to a first approximation. (Author)

  8. Sampling phased array a new technique for signal processing and ultrasonic imaging

    OpenAIRE

    Bulavinov, A.; Joneit, D.; Kröning, M.; Bernus, L.; Dalichow, M.H.; Reddy, K.M.

    2006-01-01

Different signal processing and image reconstruction techniques are applied in ultrasonic non-destructive material evaluation. In recent years, rapid development in the fields of microelectronics and computer engineering has led to the wide application of phased-array systems. A new phased-array technique, called "Sampling Phased Array", has been developed at the Fraunhofer Institute for Non-Destructive Testing. It realizes a unique approach to the measurement and processing of ultrasonic signals. The sampling...

  9. The role of graphene-based sorbents in modern sample preparation techniques.

    Science.gov (United States)

    de Toffoli, Ana Lúcia; Maciel, Edvaldo Vasconcelos Soares; Fumes, Bruno Henrique; Lanças, Fernando Mauro

    2018-01-01

    The application of graphene-based sorbents in sample preparation techniques has increased significantly since 2011. These materials have good physicochemical properties to be used as sorbent and have shown excellent results in different sample preparation techniques. Graphene and its precursor graphene oxide have been considered to be good candidates to improve the extraction and concentration of different classes of target compounds (e.g., parabens, polycyclic aromatic hydrocarbon, pyrethroids, triazines, and so on) present in complex matrices. Its applications have been employed during the analysis of different matrices (e.g., environmental, biological and food). In this review, we highlight the most important characteristics of graphene-based material, their properties, synthesis routes, and the most important applications in both off-line and on-line sample preparation techniques. The discussion of the off-line approaches includes methods derived from conventional solid-phase extraction focusing on the miniaturized magnetic and dispersive modes. The modes of microextraction techniques called stir bar sorptive extraction, solid phase microextraction, and microextraction by packed sorbent are discussed. The on-line approaches focus on the use of graphene-based material mainly in on-line solid phase extraction, its variation called in-tube solid-phase microextraction, and on-line microdialysis systems. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Consensus of heterogeneous multi-agent systems based on sampled data with a small sampling delay

    International Nuclear Information System (INIS)

    Wang Na; Wu Zhi-Hai; Peng Li

    2014-01-01

In this paper, consensus problems of heterogeneous multi-agent systems based on sampled data with a small sampling delay are considered. First, a consensus protocol based on sampled data with a small sampling delay for heterogeneous multi-agent systems is proposed. Then, algebraic graph theory, the matrix method, the stability theory of linear systems, and some other techniques are employed to derive the necessary and sufficient conditions guaranteeing that heterogeneous multi-agent systems asymptotically achieve the stationary consensus. Finally, simulations are performed to demonstrate the correctness of the theoretical results. (interdisciplinary physics and related areas of science and technology)
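As a toy illustration of sampled-data consensus (homogeneous first-order agents and no sampling delay, a deliberate simplification of the heterogeneous delayed setting studied in the paper), each agent repeatedly moves toward its neighbours by a sampled step h:

```python
def consensus_step(x, adjacency, h):
    """One sampled-data update: x_i <- x_i + h * sum_j a_ij * (x_j - x_i).
    A first-order homogeneous sketch only; the paper treats heterogeneous
    (mixed first- and second-order) agents with a small sampling delay."""
    n = len(x)
    return [x[i] + h * sum(adjacency[i][j] * (x[j] - x[i]) for j in range(n))
            for i in range(n)]

x = [0.0, 1.0, 2.0]
A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]  # complete graph on three agents
for _ in range(200):
    x = consensus_step(x, A, 0.1)
# the states contract toward the average of the initial conditions
```

For this undirected graph the average of the states is invariant, so the agents converge to the mean of the initial values.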

  11. UV Digital Imaging of Sulfur Dioxide Emissions: Enhancing the Technique With Empirical Corrections

    Science.gov (United States)

    Dalton, M. P.; Bluth, G. J.; Shannon, J. M.; Watson, I. M.

    2006-12-01

    SO2 emission measurements are an important component of monitoring volcanic processes, providing insight into the driving forces behind eruptions. Current spectrometric methods (COSPEC, DOAS) typically measure only a cross-section of the plume, which may not be representative of the actual emission flux, and coupled with the difficulty in determining wind speeds affecting the air mass, often leads to erratic SO2 flux values. In order to address these problems, we have developed a ground-based ultraviolet digital camera for the imaging and measurement of SO2 volcanic plumes. This camera improves on the spectrometric methods of SO2 observation by capturing a large portion of the plume in one measurement- a single image. The UV digital camera can also record multiple images every minute, producing a data set that is more comparable with other monitoring techniques. The UV digital camera has proven capable of imaging volcanic plumes under fairly demanding conditions, and determining SO2 fluxes that have roughly agreed with other SO2 measurement techniques. Initial field tests suggest that the data produced by the UV camera are significantly affected by atmospheric scattering. To better evaluate the errors and limitations associated with this new instrument, field experiments have been conducted to assess the effects that background sky brightness, meteorological conditions, and distance to the target have on the calculated SO2 concentrations and flux measurements. Our results will allow us to more accurately model and correct for changing atmospheric conditions and quantify the error associated with atmospheric background scattering. These corrections will make this remarkable new instrument a more accurate and valuable tool for monitoring volcanic emissions.

  12. Toward a Principled Sampling Theory for Quasi-Orders.

    Science.gov (United States)

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, even on item sets with up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets.
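The transitivity-repair bias mentioned in the abstract can be seen in a naive sampler: draw a random relation, force reflexivity, and take its transitive closure. The sketch below is our own illustration (not the paper's inductive algorithms); it always yields a valid quasi-order, but not uniformly at random, which is exactly the bias the proposed procedures are designed to avoid.

```python
import random
from itertools import product

def random_relation_closure(n, p=0.5, seed=None):
    """Naive quasi-order sampler: include each ordered pair with
    probability p, add reflexivity, then take the transitive closure
    (Warshall-style). Always a quasi-order, but NOT uniform."""
    rng = random.Random(seed)
    rel = [[i == j or rng.random() < p for j in range(n)] for i in range(n)]
    # product() varies the first index slowest, so k is the outer loop,
    # which is the correct Warshall iteration order.
    for k, i, j in product(range(n), repeat=3):
        rel[i][j] = rel[i][j] or (rel[i][k] and rel[k][j])
    return rel

def is_quasi_order(rel):
    """Check reflexivity and transitivity of a boolean relation matrix."""
    n = len(rel)
    reflexive = all(rel[i][i] for i in range(n))
    transitive = all(rel[i][j] or not (rel[i][k] and rel[k][j])
                     for i, k, j in product(range(n), repeat=3))
    return reflexive and transitive
```

Every output passes the quasi-order check, yet small dense relations are over-represented relative to a uniform draw.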

  14. Calculation of the flux attenuation and multiple scattering correction factors in time of flight technique for double differential cross section measurements

    International Nuclear Information System (INIS)

    Martin, G.; Coca, M.; Capote, R.

    1996-01-01

Using the Monte Carlo method, a computer code was developed that simulates the time-of-flight experiment for measuring double differential cross sections. The correction factors for flux attenuation and multiple scattering, which deform the measured spectrum, were calculated. The energy dependence of the correction factor was determined and a comparison with other works is shown. Calculations for 56Fe at two different scattering angles were made. We also reproduce the experiment performed at the Nuclear Analysis Laboratory for 12C at 25 °C, and the calculated correction factor for the measured spectrum is shown. We found a linear relation between the scatterer size and the flux-attenuation correction factor.
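A minimal sketch of the kind of Monte Carlo transport estimate involved (a toy example of ours, not the paper's code): sampling exponential free paths to estimate the uncollided fraction through a slab, which the analytic Beer-Lambert value exp(-sigma_t * d) can check.

```python
import math, random

def mc_attenuation_factor(sigma_t, thickness, n=100_000, seed=0):
    """Toy Monte Carlo estimate of the fraction of particles crossing a
    slab without interacting: sample free path lengths from an
    exponential distribution with mean 1/sigma_t and count escapes.
    The analytic uncollided answer is exp(-sigma_t * thickness)."""
    rng = random.Random(seed)
    escaped = sum(1 for _ in range(n)
                  if -math.log(1.0 - rng.random()) / sigma_t > thickness)
    return escaped / n

est = mc_attenuation_factor(1.0, 1.0)  # analytic value: exp(-1) ~ 0.368
```

With 100,000 histories the statistical error is a few tenths of a percent, comfortably resolving the attenuation factor.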

  15. Techniques of sample attack used in soil and mineral analysis. Phase I

    International Nuclear Information System (INIS)

    Chiu, N.W.; Dean, J.R.; Sill, C.W.

    1984-07-01

Several techniques of sample attack for the determination of radioisotopes are reviewed. These techniques include: 1) digestion with nitric or hydrochloric acid in a Parr digestion bomb, 2) digestion with a mixture of nitric and hydrochloric acids, 3) digestion with a mixture of hydrofluoric, nitric and perchloric acids, and 4) fusion with sodium carbonate, potassium fluoride or alkali pyrosulfates. The effectiveness of these techniques in decomposing various soils and minerals containing radioisotopes such as lead-210, uranium, thorium and radium-226 is discussed. The combined procedure of potassium fluoride fusion followed by alkali pyrosulfate fusion is recommended for radium-226, uranium and thorium analysis. This technique guarantees the complete dissolution of samples containing refractory materials such as silica, silicates, carbides, oxides and sulfates. For lead-210 analysis, digestion with a mixture of hydrofluoric, nitric and perchloric acids followed by fusion with alkali pyrosulfate is recommended. These two procedures are detailed. Schemes for the sequential separation of the radioisotopes from a dissolved sample solution are outlined. Procedures for radiochemical analysis are suggested.

  16. Assessment of Natural Radioactivity in TENORM Samples Using Different Techniques

    International Nuclear Information System (INIS)

    Salman, Kh.A.; Shahein, A.Y.

    2009-01-01

In petroleum oil industries, technologically enhanced naturally occurring radioactive materials (TENORM) are produced. The presence of TENORM constitutes a significant radiological health hazard. In the present work, the liquid scintillation counting (LSC) technique was used to determine both 222Rn and 226Ra concentrations in TENORM samples, by measuring the 222Rn concentration in each sample at different intervals of time after preparation. The radiation doses from the TENORM samples were estimated using thermoluminescent detectors (TLD-4000). The estimated radiation doses were found to be proportional both to the radiation doses measured on site and to the natural activity concentrations in the samples measured with LSC.
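The arithmetic behind inferring 226Ra from repeated 222Rn measurements can be sketched with the generic secular-ingrowth relation, assuming the sealed sample starts radon-free (our own illustration, not the paper's calibration):

```python
import math

RN222_HALF_LIFE_D = 3.8235  # half-life of 222Rn in days

def radium_from_ingrowth(rn_activity, t_days):
    """Infer the 226Ra activity from the 222Rn activity measured t_days
    after sealing, assuming no radon initially present:
        A_Rn(t) = A_Ra * (1 - exp(-lambda * t))
    so A_Ra = A_Rn(t) / (1 - exp(-lambda * t))."""
    lam = math.log(2) / RN222_HALF_LIFE_D
    return rn_activity / (1.0 - math.exp(-lam * t_days))

# One half-life after sealing, radon has grown in to half the radium activity:
a_ra = radium_from_ingrowth(50.0, RN222_HALF_LIFE_D)
```

Measuring at several times after preparation, as described above, over-determines A_Ra and lets the ingrowth curve be checked against the data.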

  17. Effects of surface-mapping corrections and synthetic-aperture focusing techniques on ultrasonic imaging

    International Nuclear Information System (INIS)

    Barna, B.A.; Johnson, J.A.

    1981-01-01

    Improvements in ultrasonic imaging that can be obtained using algorithms that map the surface of targets are evaluated. This information is incorporated in the application of synthetic-aperture focusing techniques which also have the potential to improve image resolution. Images obtained using directed-beam (flat) transducers and the focused transducers normally used for synthetic-aperture processing are quantitatively compared by using no processing, synthetic-aperture processing with no corrections for surface variations, and synthetic-aperture processing with surface mapping. The unprocessed images have relatively poor lateral resolutions because echoes from two adjacent reflectors show interference effects which prevent their identification even if the spacing is larger than the single-hole resolution. The synthetic-aperture-processed images show at least a twofold improvement in lateral resolution and greatly reduced interference effects in multiple-hole images compared to directed-beam images. Perhaps more importantly, in images of test blocks with substantial surface variations portions of the image are displaced from their actual positions by several wavelengths. To correct for this effect an algorithm has been developed for calculating the surface variations. The corrected images produced using this algorithm are accurate within the experimental error. In addition, the same algorithm, when applied to the directed-beam data, produced images that are not only accurately positioned, but that also have a resolution comparable to conventional synthetic-aperture-processed images obtained from focused-transducer data. This suggests that using synthetic-aperture processing on the type of data normally collected during directed-beam ultrasonic inspections would eliminate the need to rescan for synthetic-aperture enhancement

  18. A scatter-corrected list-mode reconstruction and a practical scatter/random approximation technique for dynamic PET imaging

    International Nuclear Information System (INIS)

    Cheng, J-C; Rahmim, Arman; Blinder, Stephan; Camborde, Marie-Laure; Raywood, Kelvin; Sossi, Vesna

    2007-01-01

    We describe an ordinary Poisson list-mode expectation maximization (OP-LMEM) algorithm with a sinogram-based scatter correction method based on the single scatter simulation (SSS) technique and a random correction method based on the variance-reduced delayed-coincidence technique. We also describe a practical approximate scatter and random-estimation approach for dynamic PET studies based on a time-averaged scatter and random estimate followed by scaling according to the global numbers of true coincidences and randoms for each temporal frame. The quantitative accuracy achieved using OP-LMEM was compared to that obtained using the histogram-mode 3D ordinary Poisson ordered subset expectation maximization (3D-OP) algorithm with similar scatter and random correction methods, and they showed excellent agreement. The accuracy of the approximated scatter and random estimates was tested by comparing time activity curves (TACs) as well as the spatial scatter distribution from dynamic non-human primate studies obtained from the conventional (frame-based) approach and those obtained from the approximate approach. An excellent agreement was found, and the time required for the calculation of scatter and random estimates in the dynamic studies became much less dependent on the number of frames (we achieved a nearly four times faster performance on the scatter and random estimates by applying the proposed method). The precision of the scatter fraction was also demonstrated for the conventional and the approximate approach using phantom studies
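The approximate per-frame estimate described above amounts to scaling the time-averaged scatter (or random) estimate by each frame's share of the global coincidence counts; a schematic sketch:

```python
def frame_scatter_estimate(avg_scatter, frame_trues, total_trues):
    """Scale a time-averaged scatter (or random) estimate to one temporal
    frame by the frame's share of true coincidences, as in the
    approximate approach described above. avg_scatter is a flat list
    standing in for a sinogram; a schematic sketch, not the OP-LMEM code."""
    scale = frame_trues / total_trues
    return [s * scale for s in avg_scatter]

# A frame holding a quarter of the trues gets a quarter of the estimate:
frame = frame_scatter_estimate([10.0, 20.0], frame_trues=2.0, total_trues=8.0)
```

Because only one scatter/random computation is needed for the whole study, the cost becomes nearly independent of the number of frames, which is the speed-up the abstract reports.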

  19. Radiometric Non-Uniformity Characterization and Correction of Landsat 8 OLI Using Earth Imagery-Based Techniques

    Directory of Open Access Journals (Sweden)

    Frank Pesta

    2014-12-01

Full Text Available Landsat 8 is the first satellite in the Landsat mission to acquire spectral imagery of the Earth using pushbroom sensor instruments. As a result, there are almost 70,000 unique detectors on the Operational Land Imager (OLI) alone to monitor. Due to minute variations in manufacturing and temporal degradation, every detector exhibits a different behavior when exposed to uniform radiance, causing a noticeable striping artifact in collected imagery. Solar collects using the OLI's on-board solar diffuser panels are the primary method of characterizing detector-level non-uniformity. This paper reports on an approach for using a side-slither maneuver to estimate relative detector gains within each individual focal plane module (FPM) in the OLI. A method to characterize cirrus band detector-level non-uniformity using deep convective clouds (DCCs) is also presented. These approaches are discussed, and correction results are compared with the diffuser-based method. Detector relative gain stability is assessed using the side-slither technique. Side-slither relative gains were found to correct streaking in test imagery with quality comparable to diffuser-based gains (within 0.005% for VNIR/PAN; 0.01% for SWIR) and identified a 0.5% temporal drift over a year. The DCC technique provided relative gains that visually decreased striping over the operational calibration in many images.
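At its core, the side-slither idea reduces to estimating each detector's relative gain from imagery that is uniform along-track; a simplified sketch of that normalization (our own illustration, not the operational Landsat processing):

```python
def relative_gains(uniform_scene):
    """Estimate per-detector relative gains from a scene that is uniform
    in the along-track direction (rows): each detector's (column's) mean
    response, normalized by the mean over all detectors in the module.
    Dividing an image by these gains removes the striping."""
    n_rows = len(uniform_scene)
    n_cols = len(uniform_scene[0])
    col_means = [sum(row[j] for row in uniform_scene) / n_rows
                 for j in range(n_cols)]
    overall = sum(col_means) / n_cols
    return [m / overall for m in col_means]

# Three detectors viewing the same radiance with +/-2% gain differences:
scene = [[500.0 * g for g in (0.98, 1.0, 1.02)] for _ in range(100)]
gains = relative_gains(scene)
```

The side-slither maneuver rotates the spacecraft so every detector sweeps the same ground track, which is what makes the uniform-column assumption hold in practice.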

  20. Review of online coupling of sample preparation techniques with liquid chromatography.

    Science.gov (United States)

    Pan, Jialiang; Zhang, Chengjiang; Zhang, Zhuomin; Li, Gongke

    2014-03-07

Sample preparation is still considered the bottleneck of the whole analytical procedure, and efforts have been directed towards automation, improved sensitivity and accuracy, and low consumption of organic solvents. The development of online sample preparation (SP) techniques coupled with liquid chromatography (LC) is a promising way to achieve these goals and has attracted great attention. This article reviews recent advances in online SP-LC techniques. Various online SP techniques are described and summarized, including solid-phase-based extraction, liquid-phase-based extraction assisted with membranes, microwave-assisted extraction, ultrasonic-assisted extraction, accelerated solvent extraction and supercritical fluid extraction. In particular, the coupling approaches of online SP-LC systems and the corresponding interfaces, such as the online injector, autosampler combined with a transport unit, desorption chamber and column switching, are discussed and reviewed in detail. Typical applications of the online SP-LC techniques are summarized. The problems and expected trends in this field are then discussed in order to encourage the further development of online SP-LC techniques. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Cleaning and Cleanliness Verification Techniques for Mars Returned Sample Handling

    Science.gov (United States)

    Mickelson, E. T.; Lindstrom, D. J.; Allton, J. H.; Hittle, J. D.

    2002-01-01

    Precision cleaning and cleanliness verification techniques are examined as a subset of a comprehensive contamination control strategy for a Mars sample return mission. Additional information is contained in the original extended abstract.

  2. Evaluation of intensity drift correction strategies using MetaboDrift, a normalization tool for multi-batch metabolomics data.

    Science.gov (United States)

    Thonusin, Chanisa; IglayReger, Heidi B; Soni, Tanu; Rothberg, Amy E; Burant, Charles F; Evans, Charles R

    2017-11-10

In recent years, mass spectrometry-based metabolomics has increasingly been applied to large-scale epidemiological studies of human subjects. However, the successful use of metabolomics in this context is subject to the challenge of detecting biologically significant effects despite substantial intensity drift that often occurs when data are acquired over a long period or in multiple batches. Numerous computational strategies and software tools have been developed to aid in correcting for intensity drift in metabolomics data, but most of these techniques are implemented using command-line driven software and custom scripts which are not accessible to all end users of metabolomics data. Further, it has not yet become routine practice to assess the quantitative accuracy of drift correction against techniques which enable true absolute quantitation, such as isotope dilution mass spectrometry. We developed an Excel-based tool, MetaboDrift, to visually evaluate and correct for intensity drift in a multi-batch liquid chromatography - mass spectrometry (LC-MS) metabolomics dataset. The tool enables drift correction based either on quality control (QC) samples analyzed throughout the batches or on QC-sample-independent methods. We applied MetaboDrift to an original set of clinical metabolomics data from a mixed-meal tolerance test (MMTT). The performance of the method was evaluated for multiple classes of metabolites by comparison with normalization using isotope-labeled internal standards. QC sample-based intensity drift correction significantly improved correlation with IS-normalized data, and resulted in detection of additional metabolites with significant physiological response to the MMTT. The relative merits of different QC-sample curve fitting strategies are discussed in the context of batch size and drift pattern complexity. Our drift correction tool offers a practical, simplified approach to drift correction and batch combination in large metabolomics studies.
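A minimal sketch of QC-sample-based drift correction of the kind described above (our own linear-fit illustration; MetaboDrift itself offers several curve-fitting strategies): fit a drift curve through the QC intensities versus injection order, divide it out, and rescale so the mean QC intensity is preserved.

```python
def qc_drift_correct(order, intensity, qc_idx):
    """Fit a straight drift line through the QC-sample intensities vs.
    injection order (ordinary least squares) and divide it out of every
    measurement, rescaling so the mean QC intensity is preserved."""
    xs = [order[i] for i in qc_idx]
    ys = [intensity[i] for i in qc_idx]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    drift = [slope * t + intercept for t in order]
    return [v / d * my for v, d in zip(intensity, drift)]

order = list(range(10))
signal = [100.0 + 5.0 * t for t in order]   # pure linear upward drift
corrected = qc_drift_correct(order, signal, qc_idx=[0, 3, 6, 9])
```

On a metabolite whose true level is constant, a perfect fit flattens the series to the mean QC intensity; real data call for the more flexible curve fits the tool provides.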

  3. Self-attenuation correction in the environmental sample gamma spectrometry; Correcao de auto-absorcao na espectrometria gama de amostras ambientais

    Energy Technology Data Exchange (ETDEWEB)

    Venturini, Luzia; Nisti, Marcelo B. [Instituto de Pesquisas Energeticas e Nucleares (IPEN), Sao Paulo, SP (Brazil)

    1997-10-01

Self-attenuation corrections were calculated for the gamma-ray spectrometry of environmental samples with densities from 0.42 g/ml up to 1.59 g/ml, measured in Marinelli beakers and polyethylene flasks. These corrections are to be used when the counting efficiency is calculated for water measured in the same geometry. Debertin's model for the Marinelli beaker, numerical integration, and experimental linear attenuation coefficients were used. (author). 3 refs., 4 figs., 6 tabs.
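For a simple slab approximation (not Debertin's full Marinelli-beaker integration), the self-attenuation correction relative to a water calibration can be sketched as follows; the escape fraction of a uniform slab is (1 - exp(-mu*t)) / (mu*t):

```python
import math

def slab_self_absorption(mu, t):
    """Fraction of photons escaping a uniform self-absorbing slab of
    thickness t with linear attenuation coefficient mu:
    (1 - exp(-mu*t)) / (mu*t)."""
    x = mu * t
    return (1.0 - math.exp(-x)) / x

def correction_vs_water(mu_sample, mu_water, t):
    """Multiplicative correction applied when the counting efficiency was
    calibrated with water in the same geometry: ratio of the water and
    sample escape fractions. A slab sketch, not the paper's geometry."""
    return slab_self_absorption(mu_water, t) / slab_self_absorption(mu_sample, t)
```

A sample denser than water (larger mu) yields a correction factor above 1, matching the intuition that more of its own gamma rays are absorbed before reaching the detector.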

  4. Measurement of regional cerebral blood flow using one-point arterial blood sampling and microsphere model with 123I-IMP. Correction of one-point arterial sampling count by whole brain count ratio

    International Nuclear Information System (INIS)

    Makino, Kenichi; Masuda, Yasuhiko; Gotoh, Satoshi

    1998-01-01

The experimental subjects were 189 patients with cerebrovascular disorders. 123I-IMP (222 MBq) was administered by intravenous infusion. Continuous arterial blood sampling was carried out for 5 minutes, and arterial blood was also sampled once at 5 minutes after 123I-IMP administration. The whole-blood count of the one-point arterial sample was then compared with the octanol-extracted count of the continuous arterial sample, and a positive correlation was found between the two values. The ratio of the continuous-sampling octanol-extracted count (OC) to the one-point-sampling whole-blood count (TC5) was compared with the whole-brain count ratio (5:29 ratio, Cn) obtained from 1-minute planar SPECT images centered on 5 and 29 minutes after 123I-IMP administration. A correlation was found between the two values, yielding the relationship OC/TC5 = 0.390969 x Cn - 0.08924. Based on this correlation equation, we calculated the theoretical continuous-arterial-sampling octanol-extracted count (COC): COC = TC5 x (0.390969 x Cn - 0.08924). There was good correlation between the value calculated with this equation and the actually measured value; the coefficient improved to r=0.94 from the r=0.87 obtained before correction using the 5:29 ratio. For 23 of these 189 cases, additional one-point arterial samples were taken at 6, 7, 8, 9 and 10 minutes after the administration of 123I-IMP. The correlation coefficient for these sampling times was also improved when the correction using the 5:29 ratio was applied. It was concluded that highly accurate input functions, i.e., calculated continuous-arterial-sampling octanol-extracted counts, can be obtained from one-point arterial whole-blood counts by applying the 5:29-ratio correction. (K.H.)
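The correction equation reported above is simple enough to apply directly; a sketch using the paper's regression coefficients (function and variable names are ours):

```python
def corrected_octanol_count(tc5, cn):
    """Estimate the continuous-sampling octanol-extracted count (COC)
    from the 5-minute one-point whole-blood count (TC5) and the
    whole-brain 5:29 count ratio (Cn), using the regression reported
    in the abstract: COC = TC5 * (0.390969 * Cn - 0.08924)."""
    return tc5 * (0.390969 * cn - 0.08924)

# Hypothetical values for illustration: a one-point count of 1000 counts
# and a 5:29 whole-brain ratio of 1.2
coc = corrected_octanol_count(1000.0, 1.2)
```

The whole-brain ratio Cn thus substitutes for the continuous arterial curve, which is what lets a single blood sample serve as the input function.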

  5. Elemental analyses of goundwater: demonstrated advantage of low-flow sampling and trace-metal clean techniques over standard techniques

    Science.gov (United States)

    Creasey, C. L.; Flegal, A. R.

The combined use of (1) low-flow purging and sampling and (2) trace-metal clean techniques provides more representative measurements of trace-element concentrations in groundwater than results derived with standard techniques. Low-flow purging and sampling provides relatively undisturbed groundwater samples that are more representative of in situ conditions, and trace-metal clean techniques limit the inadvertent introduction of contaminants during sampling, storage, and analysis. When these techniques are applied, the resultant trace-element concentrations are likely to be markedly lower than results based on standard sampling techniques. In a comparison of data derived from contaminated and control groundwater wells at a site in California, USA, trace-element concentrations from this study were 2-1000 times lower than those determined by the conventional techniques used in sampling of the same wells prior to (5 months) and subsequent to (1 month) the collections for this study. Specifically, the cadmium and chromium concentrations derived using standard sampling techniques exceed the California Maximum Contaminant Levels (MCL), whereas in this investigation concentrations of both of those elements are substantially below their MCLs. Consequently, the combined use of low-flow and trace-metal clean techniques may preclude erroneous reports of trace-element contamination in groundwater.

  6. XRF analysis of mineralised samples

    International Nuclear Information System (INIS)

    Ahmedali, T.

    2002-01-01

Full text: Software now supplied by instrument manufacturers has made it practical and convenient for users to analyse unusual samples routinely. Semiquantitative scanning software can be used for rapid preliminary screening of elements ranging from carbon to uranium, prior to assigning mineralised samples to an appropriate quantitative analysis routine. The general quality and precision of analytical results obtained from modern XRF spectrometers can be significantly enhanced by several means: a. Modifications in preliminary sample preparation can result in less contamination from crushing and grinding equipment, and optimised sample preparation techniques can significantly increase the precision of results. b. Employment of automatic data-recording balances and the use of catch weights during sample preparation reduces technician time as well as weighing errors. c. Consistency of results can be improved significantly by the use of appropriate stable drift monitors with a statistically significant content of the analyte. d. A judicious selection of kV/mA combinations, analysing crystals, primary beam filters, collimators and peak positions, together with accurate background correction and peak-overlap corrections, followed by the use of appropriate matrix correction procedures. e. Preventative maintenance procedures for XRF spectrometers and ancillary equipment, which can also contribute significantly to reducing instrument down time, are described. Examples of various facets of sample-processing routines are given from the XRF spectrometer component of a multi-instrument analytical university facility, which provides XRF data to 17 Canadian universities. Copyright (2002) Australian X-ray Analytical Association Inc

  7. Atmospheric Pre-Corrected Differential Absorption Techniques to Retrieve Columnar Water Vapor: Theory and Simulations

    Science.gov (United States)

    Borel, Christoph C.; Schlaepfer, Daniel

    1996-01-01

    Two different approaches exist to retrieve columnar water vapor from imaging spectrometer data: (1) Differential absorption techniques based on: (a) Narrow-Wide (N/W) ratio between overlapping spectrally wide and narrow channels; (b) Continuum Interpolated Band Ratio (CIBR) between a measurement channel and the weighted sum of two reference channels. (2) Non-linear fitting techniques which are based on spectral radiative transfer calculations. The advantage of the first approach is computational speed and of the second, improved retrieval accuracy. Our goal was to improve the accuracy of the first technique using physics based on radiative transfer. Using a modified version of the Duntley equation, we derived an "Atmospheric Pre-corrected Differential Absorption" (APDA) technique and described an iterative scheme to retrieve water vapor on a pixel-by-pixel basis. Next we compared both, the CIBR and the APDA using the Duntley equation for MODTRAN3 computed irradiances, transmissions and path radiance (using the DISORT option). This simulation showed that the CIBR is very sensitive to reflectance effects and that the APDA performs much better. An extensive data set was created with the radiative transfer code 6S over 379 different ground reflectance spectra. The calculated relative water vapor error was reduced significantly for the APDA. The APDA technique had about 8% (vs. over 35% for the CIBR) of the 379 spectra with a relative water vapor error of greater than +5%. The APDA has been applied to 1991 and 1995 AVIRIS scenes which visually demonstrate the improvement over the CIBR technique.
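The two ratio techniques can be sketched directly from their definitions (equal continuum weights are assumed here for illustration; the actual weights depend on the channel wavelengths):

```python
def cibr(l_measure, l_ref1, l_ref2, w1=0.5, w2=0.5):
    """Continuum Interpolated Band Ratio: radiance in the water-vapor
    absorption channel over the weighted sum of the two continuum
    reference channels."""
    return l_measure / (w1 * l_ref1 + w2 * l_ref2)

def apda(l_measure, l_ref1, l_ref2, path_m, path_r1, path_r2,
         w1=0.5, w2=0.5):
    """Atmospheric Pre-corrected Differential Absorption ratio (sketch):
    the same band ratio, but with an estimate of the atmospheric path
    radiance subtracted from each channel first, which is what reduces
    the sensitivity to surface reflectance that hurts the CIBR."""
    return ((l_measure - path_m)
            / (w1 * (l_ref1 - path_r1) + w2 * (l_ref2 - path_r2)))
```

With zero path radiance the APDA reduces exactly to the CIBR; the retrieved columnar water vapor then follows from inverting a calibration curve of the ratio, computed with a radiative transfer code such as MODTRAN3 or 6S.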

  8. Atmospheric pre-corrected differential absorption techniques to retrieve columnar water vapor: Theory and simulations

    Energy Technology Data Exchange (ETDEWEB)

    Borel, C.C.; Schlaepfer, D.

    1996-03-01

    Two different approaches exist to retrieve columnar water vapor from imaging spectrometer data: (1) Differential absorption techniques based on: (a) Narrow-Wide (N/W) ratio between overlapping spectrally wide and narrow channels; (b) Continuum Interpolated Band Ratio (CIBR) between a measurement channel and the weighted sum of two reference channels; and (2) Non-linear fitting techniques which are based on spectral radiative transfer calculations. The advantage of the first approach is computational speed and of the second, improved retrieval accuracy. Our goal was to improve the accuracy of the first technique using physics based on radiative transfer. Using a modified version of the Duntley equation, we derived an "Atmospheric Pre-corrected Differential Absorption" (APDA) technique and described an iterative scheme to retrieve water vapor on a pixel-by-pixel basis. Next we compared both the CIBR and the APDA using the Duntley equation for MODTRAN3 computed irradiances, transmissions and path radiance (using the DISORT option). This simulation showed that the CIBR is very sensitive to reflectance effects and that the APDA performs much better. An extensive data set was created with the radiative transfer code 6S over 379 different ground reflectance spectra. The calculated relative water vapor error was reduced significantly for the APDA. The APDA technique had about 8% (vs. over 35% for the CIBR) of the 379 spectra with a relative water vapor error of greater than ±5%. The APDA has been applied to 1991 and 1995 AVIRIS scenes which visually demonstrate the improvement over the CIBR technique.

  9. A comparison of radiometric correction techniques in the evaluation of the relationship between LST and NDVI in Landsat imagery.

    Science.gov (United States)

    Tan, Kok Chooi; Lim, Hwee San; Matjafri, Mohd Zubir; Abdullah, Khiruddin

    2012-06-01

    Atmospheric corrections for multi-temporal optical satellite images are necessary, especially in change detection analyses, such as normalized difference vegetation index (NDVI) ratioing. Abrupt change detection analysis using remote-sensing techniques requires radiometric congruity and atmospheric correction to monitor terrestrial surfaces over time. Two atmospheric correction methods were used for this study: relative radiometric normalization and the simplified method for atmospheric correction (SMAC) in the solar spectrum. A multi-temporal data set consisting of two sets of Landsat images from the period between 1991 and 2002 of Penang Island, Malaysia, was used to compare NDVI maps, which were generated using the proposed atmospheric correction methods. Land surface temperature (LST) was retrieved using ATCOR3_T in PCI Geomatica 10.1 image processing software. Linear regression analysis was utilized to analyze the relationship between NDVI and LST. This study reveals that both of the proposed atmospheric correction methods yielded high accuracy through examination of the linear correlation coefficients. To check the accuracy of the equation obtained through linear regression analysis for every single satellite image, 20 points were randomly chosen. The results showed that the SMAC method yielded consistent errors when predicting NDVI values from the equations derived by linear regression analysis. The average errors from both proposed atmospheric correction methods were less than 10%.
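
The LST-NDVI relationship above is an ordinary least-squares line, and the accuracy check is a percentage error at held-out points. A self-contained sketch of such a fit; all names and numbers are illustrative, not from the study:

```python
def fit_ndvi_lst(lst, ndvi):
    """Ordinary least-squares line NDVI = a*LST + b."""
    n = len(lst)
    mx = sum(lst) / n
    my = sum(ndvi) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(lst, ndvi))
         / sum((x - mx) ** 2 for x in lst))
    b = my - a * mx
    return a, b

def percent_error(true_val, predicted):
    """Relative prediction error in percent, as used at the check points."""
    return 100.0 * abs(predicted - true_val) / abs(true_val)
```

With synthetic data lying exactly on a line, the fit recovers the slope and intercept; on real check points one would average `percent_error` over the 20 random samples.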

  10. THE STUDY OF HEAVY METAL FROM ENVIRONMENTAL SAMPLES BY ATOMIC TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Ion V. POPESCU

    2011-05-01

    Full Text Available Using the Atomic Absorption Spectrometry (AAS) and Energy Dispersive X-ray spectrometry (EDXRF) techniques we analyzed the contents of heavy metals (Cd, Cr, Ni, Pb, Ti, Sr, Co, Bi) in eight wild mushroom species and their soil substrates (48 samples of eight fungal species and 32 underlying soil samples), collected from ten forest sites of Dâmbovița County, Romania. It was determined that the elements, especially heavy metals, in soil were characteristic of the acidic soils of the Romanian forest lands and are influenced by industrial pollution. Analytical possibilities of the AAS and EDXRF analytical techniques have been compared and the heavy metal transfer from substrate to mushrooms has been studied. The coefficient of accumulation of essential and heavy metals has been calculated as well. Heavy metal contents of all analyzed mushrooms were generally higher than previously reported in literature.

  11. A technique for extracting blood samples from mice in fire toxicity tests

    Science.gov (United States)

    Bucci, T. J.; Hilado, C. J.; Lopez, M. T.

    1976-01-01

    The extraction of adequate blood samples from moribund and dead mice has been a problem because of the small quantity of blood in each animal and the short time available between the animals' death and coagulation of the blood. These difficulties are particularly critical in fire toxicity tests because removal of the test animals while observing proper safety precautions for personnel is time-consuming. Techniques for extracting blood samples from mice were evaluated, and a technique was developed to obtain up to 0.8 ml of blood from a single mouse after death. The technique involves rapid exposure and cutting of the posterior vena cava and accumulation of blood in the peritoneal space. Blood samples of 0.5 ml or more from individual mice have been consistently obtained as much as 16 minutes after apparent death. Results of carboxyhemoglobin analyses of blood appeared reproducible and consistent with carbon monoxide concentrations in the exposure chamber.

  12. Comparison of sampling techniques for Rift Valley Fever virus ...

    African Journals Online (AJOL)

    We investigated mosquito sampling techniques with two types of traps and attractants at different times for trapping potential vectors for Rift Valley Fever virus. The study was conducted in six villages in Ngorongoro district in Tanzania from September to October 2012. A total of 1814 mosquitoes were collected, of which 738 ...

  13. Fitted temperature-corrected Compton cross sections for Monte Carlo applications and a sampling distribution

    International Nuclear Information System (INIS)

    Wienke, B.R.; Devaney, J.J.; Lathrop, B.L.

    1984-01-01

    Simple temperature-corrected cross sections, which replace the static Klein-Nishina set in a one-to-one manner, are developed for Monte Carlo applications. The reduced set is obtained from a nonlinear least-squares fit to the exact photon-Maxwellian electron cross sections by using a Klein-Nishina-like formula as the fitting equation. Two parameters are sufficient, and accurate to two decimal places, to explicitly fit the exact cross sections over a range of 0 to 100 keV in electron temperature and 0 to 1 MeV in incident photon energy. Since the fit equations are Klein-Nishina-like, existing Monte Carlo code algorithms using the Klein-Nishina formula can be trivially modified to accommodate corrections for a moving Maxwellian electron background. The simple two parameter scheme and other fits are presented and discussed and comparisons with exact predictions are exhibited. The fits are made to the total photon-Maxwellian electron cross section and the fitting parameters can be consistently used in both the energy conservation equation for photon-electron scattering and the differential cross section, as they are presently sampled in Monte Carlo photonics applications. The fit equations are motivated in a very natural manner by the asymptotic expansion of the exact photon-Maxwellian effective cross-section kernel. A probability distribution is also obtained for the corrected set of equations
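
The fit described above can be imitated with the standard total Klein-Nishina cross section and a least-squares search for two parameters. The rescaling form sigma ≈ b·σ_KN(a·α) and the coarse grid-search fitter below are illustrative assumptions, not the authors' exact parameterisation:

```python
import math

R_E = 2.8179403262e-13  # classical electron radius, cm

def sigma_kn(alpha):
    """Total Klein-Nishina cross section; alpha = E_photon / (m_e c^2)."""
    t = 1.0 + 2.0 * alpha
    return 2.0 * math.pi * R_E**2 * (
        (1.0 + alpha) / alpha**2 * (2.0 * (1.0 + alpha) / t - math.log(t) / alpha)
        + math.log(t) / (2.0 * alpha)
        - (1.0 + 3.0 * alpha) / t**2
    )

def fit_two_params(alphas, sigmas, a_grid, b_grid):
    """Coarse grid search minimising the SSE of sigma ≈ b * sigma_kn(a*alpha).
    A real fit would use a nonlinear least-squares routine instead."""
    best = None
    for a in a_grid:
        for b in b_grid:
            sse = sum((b * sigma_kn(a * x) - s) ** 2
                      for x, s in zip(alphas, sigmas))
            if best is None or sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]
```

Because the fit equation is Klein-Nishina-like, a Monte Carlo code that already samples the Klein-Nishina formula only needs the two fitted parameters per temperature to apply the correction.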

  14. Terahertz thickness determination with interferometric vibration correction for industrial applications.

    Science.gov (United States)

    Pfeiffer, Tobias; Weber, Stefan; Klier, Jens; Bachtler, Sebastian; Molter, Daniel; Jonuscheit, Joachim; Von Freymann, Georg

    2018-05-14

    In many industrial fields, such as the automotive and painting industries, the thickness of thin layers is a crucial parameter for quality control. Hence, the demand for thickness measurement techniques continuously grows. In particular, non-destructive and contact-free terahertz techniques access a wide range of thickness determination applications. However, terahertz time-domain spectroscopy based systems perform the measurement in a sampling manner, requiring fixed distances between measurement head and sample. In harsh industrial environments, vibrations of sample and measurement head distort the time-base and decrease measurement accuracy. We present an interferometer-based vibration correction for terahertz time-domain measurements, able to reduce thickness distortion by one order of magnitude for vibrations with frequencies up to 100 Hz and amplitudes up to 100 µm. We further verify the experimental results by numerical calculations and find very good agreement.

  15. Correction of moderate to severe hallux valgus with combined proximal opening wedge and distal chevron osteotomies: a reliable technique.

    Science.gov (United States)

    Jeyaseelan, L; Chandrashekar, S; Mulligan, A; Bosman, H A; Watson, A J S

    2016-09-01

    The mainstay of surgical correction of hallux valgus is first metatarsal osteotomy, either proximally or distally. We present a technique of combining a distal chevron osteotomy with a proximal opening wedge osteotomy, for the correction of moderate to severe hallux valgus. We reviewed 45 patients (49 feet) who had undergone double osteotomy. Outcome was assessed using the American Orthopaedic Foot and Ankle Society (AOFAS) and the Short Form (SF) -36 Health Survey scores. Radiological measurements were undertaken to assess the correction. The mean age of the patients was 60.8 years (44.2 to 75.3). The mean follow-up was 35.4 months (24 to 51). The mean AOFAS score improved from 54.7 to 92.3 (p < 0.001). The mean hallux valgus and intermetatarsal angles improved from 41.6° to 12.8° (p < 0.001) and from 22.1° to 7.1°, respectively (p < 0.001). The mean distal metatarsal articular angle improved from 23° to 9.7°. The mean sesamoid position, as described by Hardy and Clapham, improved from 6.8 to 3.5. The mean length of the first metatarsal was unchanged. The overall rate of complications was 4.1% (two patients). These results suggest that a double osteotomy of the first metatarsal is a reliable, safe technique which, when compared with other metatarsal osteotomies, provides strong angular correction and excellent outcomes with a low rate of complications. Cite this article: Bone Joint J 2016;98-B:1202-7. ©2016 The British Editorial Society of Bone & Joint Surgery.

  16. A line-based vegetation sampling technique and its application in ...

    African Journals Online (AJOL)

    percentage cover, density and intercept frequency) and also provides plant size distributions, yet requires no more sampling effort than the line-intercept method. A field test of the three techniques in succulent karoo, showed that the discriminating ...

  17. Comparison of correlation analysis techniques for irregularly sampled time series

    Directory of Open Access Journals (Sweden)

    K. Rehfeld

    2011-06-01

    Full Text Available Geoscientific measurements often provide time series with irregular time sampling, requiring either data reconstruction (interpolation) or sophisticated methods to handle irregular sampling. We compare the linear interpolation technique and different approaches for analyzing the correlation functions and persistence of irregularly sampled time series, such as Lomb-Scargle Fourier transformation and kernel-based methods. In a thorough benchmark test we investigate the performance of these techniques.

    All methods have comparable root mean square errors (RMSEs) for low skewness of the inter-observation time distribution. For high skewness, i.e. very irregular data, interpolation bias and RMSE increase strongly. We find a 40% lower RMSE for the lag-1 autocorrelation function (ACF) for the Gaussian kernel method vs. the linear interpolation scheme, in the analysis of highly irregular time series. For the cross correlation function (CCF) the RMSE is then lower by 60%. The application of the Lomb-Scargle technique gave results comparable to the kernel methods for the univariate, but poorer results in the bivariate case. Especially the high-frequency components of the signal, where classical methods show a strong bias in ACF and CCF magnitude, are preserved when using the kernel methods.

    We illustrate the performance of interpolation vs. the Gaussian kernel method by applying both to paleo-data from four locations, reflecting late Holocene Asian monsoon variability as derived from speleothem δ18O measurements. Cross correlation results are similar for both methods, which we attribute to the long time scales of the common variability. The persistence time (memory) is strongly overestimated when using the standard, interpolation-based, approach. Hence, the Gaussian kernel is a reliable and more robust estimator with significant advantages compared to other techniques and suitable for large-scale application to paleo-data.
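
A kernel-based correlation estimator of the kind benchmarked above weights every observation pair by a Gaussian kernel of the mismatch between its time separation and the desired lag, avoiding any interpolation. This is a simplified sketch, not the authors' exact estimator; all inputs are illustrative:

```python
import math

def gaussian_kernel_corr(tx, x, ty, y, lag, h):
    """Kernel-weighted cross-correlation of two irregularly sampled series
    at a given lag. h is the kernel bandwidth in time units."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    sx = math.sqrt(sum((v - mx) ** 2 for v in x) / len(x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y) / len(y))
    num = 0.0
    wsum = 0.0
    for ti, xi in zip(tx, x):
        for tj, yj in zip(ty, y):
            # Pairs whose time separation is close to the target lag
            # dominate the weighted sum.
            w = math.exp(-0.5 * ((tj - ti - lag) / h) ** 2)
            num += w * (xi - mx) * (yj - my)
            wsum += w
    return num / (wsum * sx * sy)
```

Using the same series for x and y at lag 0 with a narrow bandwidth recovers a correlation near 1, as expected for an autocorrelation at zero lag.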

  18. Application of nuclear and allied techniques for the characterisation of forensic samples

    International Nuclear Information System (INIS)

    Sudersanan, M.; Kayasth, S.R.; Pant, D.R.; Chattopadhyay, N.; Bhattacharyya, C.N.

    2002-01-01

    Full text: Forensic science deals with the application of various techniques from physics, chemistry and biology to crime investigation. The legal implications of such analysis place considerable restrictions on the choice of analytical techniques. Moreover, the unknown nature of the materials, the limited availability of samples and the large number of elements to be analysed put considerable strain on the analytical chemist in the selection of the appropriate technique. The availability of nuclear techniques has considerably enhanced the scope of forensic analysis. This paper deals with recent results on the use of nuclear and allied analytical techniques for forensic applications. One of the important types of samples of forensic importance pertains to the identification of gunshot residues. The use of nuclear techniques has considerably simplified the interpretation of results through the use of appropriate elements like Ba, Cu, Sb, Zn, As and Sn etc. The combination of non-nuclear techniques for elements like Pb and Ni, which are not easily amenable to analysis by NAA, and the use of appropriate separation procedures has led to the use of this method as a valid and versatile analytical procedure. In view of the presence of large amounts of extraneous materials like cloth, body tissues etc. in these samples and the limited availability of materials, the procedures for sample collection, dissolution and analysis have been standardized. Analysis of unknown materials like powders, metallic pieces etc. for the possible presence of nuclear materials or as materials in illicit trafficking is becoming important in recent years. The use of a multi-technique approach is important in this case. Use of non-destructive techniques like XRF and radioactive counting enables the preliminary identification of materials and the detection of radioactivity. Subsequent analysis by NAA or other appropriate analytical methods allows the characterization of the materials. Such

  19. Determination of palladium in biological samples applying nuclear analytical techniques

    International Nuclear Information System (INIS)

    Cavalcante, Cassio Q.; Sato, Ivone M.; Salvador, Vera L. R.; Saiki, Mitiko

    2008-01-01

    This study presents Pd determinations in bovine tissue samples containing palladium prepared in the laboratory, and CCQM-P63 automotive catalyst materials of the Proficiency Test, using instrumental thermal and epithermal neutron activation analysis and energy dispersive X-ray fluorescence techniques. Solvent extraction and solid phase extraction procedures were also applied to separate Pd from interfering elements before the irradiation in the nuclear reactor. The results obtained by different techniques were compared against each other to examine sensitivity, precision and accuracy. (author)

  20. Temporal impulse and step responses of the human eye obtained psychophysically by means of a drift-correcting perturbation technique

    OpenAIRE

    Roufs, J.A.J.; Blommaert, F.J.J.

    1981-01-01

    Internal impulse and step responses are derived from the thresholds of short probe flashes by means of a drift-correcting perturbation technique. The approach is based on only two postulated system properties: quasi-linearity and peak detection. A special feature of the technique is its strong reduction of the concealing effect of sensitivity drift within and between sessions. Results were found to be repeatable, even after about one year. For a 1° foveal disk at 1200 td stationary level, im...

  1. A new sampling technique for surface exposure dating using a portable electric rock cutter

    Directory of Open Access Journals (Sweden)

    Yusuke Suganuma

    2012-07-01

    Full Text Available Surface exposure dating using in situ cosmogenic nuclides has contributed to our understanding of Earth-surface processes. The precision of the ages estimated by this method is affected by the sample geometry; therefore, high-accuracy measurement of the thickness and shape of the rock sample is crucial. However, it is sometimes difficult to meet these requirements by conventional sampling methods with a hammer and chisel. Here, we propose a new sampling technique using a portable electric rock cutter. This sampling technique is faster, produces more precisely shaped samples, and allows for a more precise age interpretation. A simple theoretical model demonstrates that the age error due to defective sample geometry increases as the total sample thickness increases, indicating the importance of precise sampling for surface exposure dating.
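
The geometry dependence arises because cosmogenic production decays roughly exponentially with depth, so the mean production in a slab of finite thickness falls below the surface value, and uncertainty in the thickness propagates into the age. A sketch of the standard thickness correction factor; the attenuation length and rock density below are assumed typical values, not from the article:

```python
import math

ATTENUATION = 160.0  # g/cm^2, typical spallation attenuation length (assumed)

def thickness_factor(thickness_cm, density=2.7):
    """Mean cosmogenic production in a slab, relative to the surface value:
    (1 - exp(-rho*d/L)) / (rho*d/L) for thickness d and density rho."""
    x = density * thickness_cm / ATTENUATION
    return (1.0 - math.exp(-x)) / x
```

A 5 cm sample already produces at only about 96% of the surface rate, so a poorly known thickness translates directly into an age error, which is the effect the article's model quantifies.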

  2. Mantle biopsy: a technique for nondestructive tissue-sampling of freshwater mussels

    Science.gov (United States)

    David J. Berg; Wendell R. Haag; Sheldon I. Guttman; James B. Sickel

    1995-01-01

    Mantle biopsy is a means of obtaining tissue samples for genetic, physiological, and contaminant studies of bivalves; but the effects of this biopsy on survival have not been determined. We describe a simple technique for obtaining such samples from unionacean bivalves and how we compared survival among biopsied and control organisms in field experiments. Survival was...

  3. A Monte Carlo Sampling Technique for Multi-phonon Processes

    Energy Technology Data Exchange (ETDEWEB)

    Hoegberg, Thure

    1961-12-15

    A sampling technique for selecting scattering angle and energy gain in Monte Carlo calculations of neutron thermalization is described. It is supposed that the scattering is separated into processes involving different numbers of phonons. The number of phonons involved is first determined. Scattering angle and energy gain are then chosen by using special properties of the multi-phonon term.

  4. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    Science.gov (United States)

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that a permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.
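
A permutation test in its simplest form (TFCE itself adds cluster-enhancement on top of this machinery) builds a null distribution by repeatedly shuffling group labels and recomputing the test statistic. A sketch with a plain mean-difference statistic; the groups and permutation count are illustrative:

```python
import random

def permutation_pvalue(group_a, group_b, n_perm=2000, seed=0):
    """Two-sided permutation p-value for the difference in group means."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of the pooled observations
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            count += 1
    # +1 correction keeps the p-value strictly positive.
    return (count + 1) / (n_perm + 1)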

  5. Evaluation of primary immunization coverage of infants under universal immunization programme in an urban area of bangalore city using cluster sampling and lot quality assurance sampling techniques.

    Science.gov (United States)

    K, Punith; K, Lalitha; G, Suman; Bs, Pradeep; Kumar K, Jayanth

    2008-07-01

    Research Question: Is the LQAS technique better than the cluster sampling technique in terms of resources to evaluate the immunization coverage in an urban area? Objective: To assess and compare the lot quality assurance sampling against cluster sampling in the evaluation of primary immunization coverage. Study Design: Population-based cross-sectional study. Study Setting: Areas under Mathikere Urban Health Center. Study Subjects: Children aged 12 months to 23 months. Sample Size: 220 in cluster sampling, 76 in lot quality assurance sampling. Statistical Analysis: Percentages and proportions, chi-square test. Results: (1) Using cluster sampling, the percentage of completely immunized, partially immunized and unimmunized children were 84.09%, 14.09% and 1.82%, respectively. With lot quality assurance sampling, it was 92.11%, 6.58% and 1.31%, respectively. (2) Immunization coverage levels as evaluated by cluster sampling technique were not statistically different from the coverage value as obtained by lot quality assurance sampling techniques. Considering the time and resources required, it was found that lot quality assurance sampling is a better technique in evaluating the primary immunization coverage in an urban area.
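
The LQAS decision rule rests on binomial probabilities: a lot of n sampled children is accepted if at most d of them are unimmunized. A sketch of the acceptance probability; the 19/3 lot design used in the test below is a common textbook choice, not necessarily this study's:

```python
from math import comb

def prob_accept(n, d, p_unimmunized):
    """Probability that a lot sample of n children contains at most d
    unimmunized children, given true unimmunized fraction p."""
    return sum(comb(n, k) * p_unimmunized ** k * (1.0 - p_unimmunized) ** (n - k)
               for k in range(d + 1))
```

With n = 19 and d = 3, a lot with 90% true coverage is accepted about 88% of the time, while a lot with only 50% coverage is almost never accepted, which is what makes the small LQAS samples informative at the lot level.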

  6. The Accuracy of Inference in Small Samples of Dynamic Panel Data Models

    NARCIS (Netherlands)

    Bun, M.J.G.; Kiviet, J.F.

    2001-01-01

    Through Monte Carlo experiments the small sample behavior is examined of various inference techniques for dynamic panel data models when both the time-series and cross-section dimensions of the data set are small. The LSDV technique and corrected versions of it are compared with IV and GMM

  7. Gravimetric determination of uranium in SALE samples

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    As a participant in the Safeguards Analytical Laboratory Evaluation (SALE) program, the Analytical Chemistry Laboratory at General Atomic routinely assays uranium dioxide and uranyl nitrate SALE samples for uranium content. Gravimetric methods are relatively easy and inexpensive to apply when the samples are free from substantial amounts of metallic impurities. Clearly the gravimetric procedure alone is not specific for uranium and must be enhanced by the use of impurity corrections. Emission spectrography is used routinely as the technique of choice for making such corrections. In cases where it is essential to assay specifically for uranium, the modified Davies-Gray titration using a weighed titrant method is applied. In this paper some essential features of these gravimetric and titrimetric procedures are discussed
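
The impurity-corrected gravimetric assay amounts to subtracting the spectrographically determined impurity oxides from the ignited oxide weight before applying the U3O8-to-U stoichiometric factor. A sketch of that arithmetic; the function and mass values are illustrative, not the laboratory's procedure:

```python
U3O8_TO_U = 0.84800  # gravimetric factor, 3*M(U)/M(U3O8)

def uranium_fraction(sample_mass_g, ignited_oxide_mass_g, impurity_mass_g):
    """Gravimetric uranium assay with an impurity correction: remove the
    impurity contribution from the ignited U3O8 weight, then convert to U."""
    corrected_oxide = ignited_oxide_mass_g - impurity_mass_g
    return U3O8_TO_U * corrected_oxide / sample_mass_g
```

For pure UO2 the mass gain on ignition to U3O8 is about 3.95%, so a 1.0000 g sample igniting to 1.0395 g returns the expected uranium fraction of about 0.8815; any impurity mass found by emission spectrography lowers the reported value.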

  8. TRAN-STAT: statistics for environmental studies, Number 22. Comparison of soil-sampling techniques for plutonium at Rocky Flats

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Bernhardt, D.E.; Hahn, P.B.

    1983-01-01

    A summary of a field soil sampling study conducted around the Rocky Flats plant in Colorado in May 1977 is presented. Several different soil sampling techniques that had been used in the area were applied at four different sites. One objective was to compare the average 239-240Pu concentration values obtained by the various soil sampling techniques used. There was also interest in determining whether there are differences in the reproducibility of the various techniques and how the techniques compared with the proposed EPA technique of sampling to 1 cm depth. Statistically significant differences in average concentrations between the techniques were found. The differences could be largely related to the differences in sampling depth, the primary physical variable between the techniques. The reproducibility of the techniques was evaluated by comparing coefficients of variation. Differences between coefficients of variation were not statistically significant. Average (median) coefficients ranged from 21 to 42 percent for the five sampling techniques. A laboratory study indicated that various sample treatment and particle sizing techniques could increase the concentration of plutonium in the less than 10 micrometer size fraction by up to a factor of about 4 compared to the 2 mm size fraction

  9. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    Science.gov (United States)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of either one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.

  10. Stratified source-sampling techniques for Monte Carlo eigenvalue analysis

    International Nuclear Information System (INIS)

    Mohamed, A.

    1998-01-01

    In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo ''Eigenvalue of the World'' problem. Argonne presented a paper, at that session, in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. In this paper, stratified source-sampling techniques are generalized and applied to three different Eigenvalue of the World configurations which take into account real-world statistical noise sources not included in the model problem, but which differ in the amount of neutronic coupling among the constituents of each configuration. It is concluded that, in Monte Carlo eigenvalue analysis of loosely-coupled arrays, the use of stratified source-sampling reduces the probability of encountering an anomalous result relative to conventional source-sampling methods. However, this gain in reliability is substantially less than that observed in the model-problem results
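
Stratified source-sampling, in its simplest form, draws a fixed quota of source sites from each spatial stratum instead of sampling the pooled site bank multinomially, so a loosely coupled region cannot randomly be starved of source particles between generations. A sketch with a hypothetical data layout; this is an illustration of the general idea, not the generalized techniques of the paper:

```python
import random

def stratified_source_sample(sites_by_region, per_region, seed=0):
    """Draw exactly per_region source sites from each spatial stratum.
    Sampling is with replacement so a sparsely populated stratum still
    yields its full quota."""
    rng = random.Random(seed)
    sample = []
    for region, sites in sites_by_region.items():
        sample.extend(rng.choice(sites) for _ in range(per_region))
    return sample
```

Under conventional multinomial sampling, a weakly coupled region holding only a small fraction of the fission bank can occasionally receive zero starters, which is one mechanism behind the anomalous eigenvalue estimates.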

  11. Evaluation of primary immunization coverage of infants under universal immunization programme in an urban area of Bangalore city using cluster sampling and lot quality assurance sampling techniques

    Directory of Open Access Journals (Sweden)

    Punith K

    2008-01-01

    Full Text Available Research Question: Is LQAS technique better than cluster sampling technique in terms of resources to evaluate the immunization coverage in an urban area? Objective: To assess and compare the lot quality assurance sampling against cluster sampling in the evaluation of primary immunization coverage. Study Design: Population-based cross-sectional study. Study Setting: Areas under Mathikere Urban Health Center. Study Subjects: Children aged 12 months to 23 months. Sample Size: 220 in cluster sampling, 76 in lot quality assurance sampling. Statistical Analysis: Percentages and Proportions, Chi square Test. Results: (1) Using cluster sampling, the percentage of completely immunized, partially immunized and unimmunized children were 84.09%, 14.09% and 1.82%, respectively. With lot quality assurance sampling, it was 92.11%, 6.58% and 1.31%, respectively. (2) Immunization coverage levels as evaluated by cluster sampling technique were not statistically different from the coverage value as obtained by lot quality assurance sampling techniques. Considering the time and resources required, it was found that lot quality assurance sampling is a better technique in evaluating the primary immunization coverage in urban area.

  12. Analytical techniques for measurement of 99Tc in environmental samples

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    Three new methods have been developed for measuring 99Tc in environmental samples. The most sensitive method is isotope dilution mass spectrometry, which allows measurement of about 1 x 10^-12 grams of 99Tc. Results on analysis of five samples by this method compare very well with values obtained by a second independent method, which involves counting of beta particles from 99Tc and internal conversion electrons from 97mTc. A third method involving electrothermal atomic absorption has also been developed. Although this method is not as sensitive as the first two techniques, the cost per analysis is expected to be considerably less for certain types of samples
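
Isotope dilution recovers the analyte amount from measured isotope ratios alone: a known amount of a spike isotope is added, and the ratio shift in the mixture gives the sample content. A sketch of the standard two-isotope equation; the symbols and test values are illustrative, not specific to the 99Tc procedure:

```python
def sample_moles(spike_moles, r_spike, r_sample, r_mix):
    """Isotope dilution: moles of the reference (analyte) isotope contributed
    by the sample. All ratios r are (spike isotope)/(reference isotope);
    spike_moles is the amount of spike isotope added."""
    return spike_moles * (1.0 - r_mix / r_spike) / (r_mix - r_sample)
```

Because only ratios are measured, the result is insensitive to losses after the spike has equilibrated with the sample, which is what makes the method so sensitive and robust at picogram levels.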

  13. Determination of some trace elements in biological samples using XRF and TXRF techniques

    International Nuclear Information System (INIS)

    Khuder, A.; Karjou, J.; Sawan, M. K.

    2006-07-01

    XRF and TXRF techniques were successfully used for the multi-element determination of trace elements in whole blood and human head hair samples. This was achieved by direct analysis using the XRF technique with different collimation units and by optimized chemical procedures for TXRF analysis. The light elements S and P were preferentially determined by XRF with primary x-ray excitation, while elements K, Ca, Fe, and Br were determined with very good accuracy and precision using XRF with Cu- and Mo-secondary targets. The chemical procedure based on the preconcentration of trace elements by APDC proved superior for the determination of traces of Ni and Pb, in the range of 1.0-1.7 μg/dl and 11-23 μg/dl, respectively, in whole blood samples by the TXRF technique; determination of other elements such as Cu and Zn was also achievable using this approach. Rb in whole blood samples was determined directly after the digestion of samples using a PTFE bomb for TXRF analysis. (author)

  14. Radioenzymatic assay for measurement of tissue concentrations of histamine: adaptation to correct for adherence of histamine to mechanical homogenizers

    International Nuclear Information System (INIS)

    Brown, J.K.; Frey, M.J.; Reed, B.R.; Leff, A.R.; Shields, R.; Gold, W.M.

    1984-01-01

Because the adherence of histamine to glass is well known, we tested for its adherence to a mechanical homogenizer commonly used in the extraction of histamine from tissue samples. During 60 sec of homogenization, 15% to 17% of the histamine originally present in the samples "disappeared," and the reason for the disappearance was reversible binding of histamine to the homogenizer. Adding trace amounts of [14C]histamine to each sample before homogenization and measuring the disappearance of radioactivity during homogenization permitted correction for binding to the homogenizer. This correction technique was validated by measuring endogenous concentrations of histamine in the tracheal posterior membranes of six dogs (range of mean concentrations: 0.63 to 1.51 ng/mg wet weight), followed by measurement of known amounts of exogenous histamine added before homogenization to tracheal tissue samples from the same dogs. In the latter samples, 96 +/- 13% (mean +/- SEM) of the histamine added was measured by our technique. We conclude that binding of histamine to mechanical homogenizers may be an important cause of inaccuracy in the enzymatic assay of histamine concentrations in tissue, but that such binding may be easily corrected for.
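The tracer-based correction described above amounts to dividing the measured concentration by the fractional recovery of the [14C]histamine tracer. A minimal sketch (variable names are ours, not the authors'):

```python
def correct_for_binding(measured_conc, tracer_added, tracer_recovered):
    """Correct a measured histamine concentration for reversible binding
    to the homogenizer: scale by the fractional recovery of a radioactive
    tracer added to the sample before homogenization."""
    recovery = tracer_recovered / tracer_added
    return measured_conc / recovery

# 15% of the tracer "disappears" during homogenization, so the measured
# concentration of 0.85 ng/mg is scaled back up:
true_conc = correct_for_binding(0.85, 100.0, 85.0)
```

Because the tracer and the endogenous histamine bind identically, the same fractional loss applies to both, which is what justifies the simple division.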

  15. Recent advances in sample preparation techniques and methods of sulfonamides detection - A review.

    Science.gov (United States)

    Dmitrienko, Stanislava G; Kochuk, Elena V; Apyari, Vladimir V; Tolmacheva, Veronika V; Zolotov, Yury A

    2014-11-19

    Sulfonamides (SAs) have been the most widely used antimicrobial drugs for more than 70 years, and their residues in foodstuffs and environmental samples pose serious health hazards. For this reason, sensitive and specific methods for the quantification of these compounds in numerous matrices have been developed. This review intends to provide an updated overview of the recent trends over the past five years in sample preparation techniques and methods for detecting SAs. Examples of the sample preparation techniques, including liquid-liquid and solid-phase extraction, dispersive liquid-liquid microextraction and QuEChERS, are given. Different methods of detecting the SAs present in food and feed and in environmental, pharmaceutical and biological samples are discussed. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Corrections of residual fluorescence distortions for a glancing-emergence-angle x-ray-absorption technique

    International Nuclear Information System (INIS)

    Brewe, D.L.; Pease, D.M.; Budnick, J.I.

    1994-01-01

Distortions appear in x-ray-absorption spectra obtained by monitoring the fluorescence from thick samples with concentrated absorbing species. The glancing-emergence-angle technique for obtaining spectra from this type of sample eliminates distortions from the measured spectra by monitoring the fluorescence leaving the sample at a small angle relative to the sample surface. This technique is limited by the small signal available from the inherently limited detector solid angle. In addition, no precise estimate of the required restriction on the maximum emergent angle θmax has been available. We have calculated residual extended x-ray-absorption fine-structure distortions as a function of θmax, and performed experimental tests of the calculations. These calculations provide a means to estimate the required detector geometry for negligible distortions, or alternatively, allow the use of a larger θmax, increasing the available signal, with the remaining residual distortions removed by application of the calculations. The calculations are also applicable to other detector geometries, and account for detectors subtending a large solid angle by an integration over the subtended angle. This represents an improvement over previous calculations. The application to more general detector configurations is also discussed.

  17. More accurate thermal neutron coincidence counting technique

    International Nuclear Information System (INIS)

    Baron, N.

    1978-01-01

Using passive thermal neutron coincidence counting techniques, the accuracy of nondestructive assays of fertile material can be improved significantly using a two-ring detector. It was shown how the use of a function of the coincidence count rate ring-ratio can provide a detector response rate that is independent of variations in neutron detection efficiency caused by varying sample moderation. Furthermore, the correction for multiplication caused by SF- and (α,n)-neutrons is shown to be separable into the product of a function of the effective mass of 240Pu (plutonium correction) and a function of the (α,n) reaction probability (matrix correction). The matrix correction is described by a function of the singles count rate ring-ratio. This correction factor is empirically observed to be identical for any combination of PuO2 powder and the matrix materials SiO2 and MgO because of the similar relation of the (α,n) Q-value and (α,n)-reaction cross section among these matrix nuclei. However, the matrix correction expression is expected to be different for matrix materials such as Na, Al, and/or Li. Nevertheless, it should be recognized that for comparison measurements among samples of similar matrix content, it is expected that some function of the singles count rate ring-ratio can be defined to account for variations in the matrix correction due to differences in the intimacy of mixture among the samples. Furthermore, the magnitude of this singles count rate ring-ratio serves to identify the contaminant generating the (α,n)-neutrons. Such information is useful in process control.

  18. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    Science.gov (United States)

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments is multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine if the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
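The small-sample bias that motivates the Firth correction can be illustrated with a much simpler estimator than the Cox model: the maximum-likelihood estimate of an exponential rate from n observations is biased upward by a factor n/(n-1). This hedged simulation stands in for the phenomenon only; it is not the Cox/Firth setup of the paper:

```python
import random

# For X ~ Exp(rate = 1), the MLE of the rate from n observations is
# n / sum(x). Its expectation is n/(n-1), i.e. biased upward for small n.
rng = random.Random(0)
n, reps = 5, 20000
estimates = [n / sum(rng.expovariate(1.0) for _ in range(n))
             for _ in range(reps)]
mean_est = sum(estimates) / reps  # close to n/(n-1) = 1.25, not 1.0
```

The Firth approach penalizes the likelihood (by the Jeffreys prior) so that such first-order bias terms cancel, which is why it helps in the small subtrials discussed above.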

  19. Symbol synchronization and sampling frequency synchronization techniques in real-time DDO-OFDM systems

    Science.gov (United States)

    Chen, Ming; He, Jing; Cao, Zizheng; Tang, Jin; Chen, Lin; Wu, Xian

    2014-09-01

In this paper, we propose and experimentally demonstrate symbol synchronization and sampling frequency synchronization techniques in a real-time direct-detection optical orthogonal frequency division multiplexing (DDO-OFDM) system, over 100-km standard single mode fiber (SSMF), using a cost-effective directly modulated distributed feedback (DFB) laser. The experimental results show that the proposed symbol synchronization based on a training sequence (TS) has low complexity and high accuracy even at a sampling frequency offset (SFO) of 5000-ppm. Meanwhile, the proposed pilot-assisted sampling frequency synchronization between the digital-to-analog converter (DAC) and the analog-to-digital converter (ADC) is capable of estimating SFOs accurately; the technique can also compensate for the effects of a small residual SFO caused by deviation of the SFO estimate and a low-precision or unstable clock source. The two synchronization techniques are suitable for high-speed DDO-OFDM transmission systems.
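Training-sequence-based symbol synchronization of this kind is commonly implemented by sliding the known TS over the received samples and taking the lag with the largest correlation. A minimal real-valued sketch (an illustrative metric, not the authors' exact algorithm):

```python
def symbol_sync_offset(received, training):
    """Return the lag at which a known training sequence best matches
    the received samples (peak of the sliding cross-correlation)."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(len(received) - len(training) + 1):
        corr = sum(received[lag + i] * t for i, t in enumerate(training))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

ts = [1.0, -1.0, 1.0, 1.0]
rx = [0.0, 0.0, 0.0] + ts + [0.0, 0.0]  # TS embedded at offset 3
offset = symbol_sync_offset(rx, ts)
```

In a real receiver the correlation would be complex-valued and normalized, but the peak-picking idea is the same.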

  20. ANALYSIS OF MONTE CARLO SIMULATION SAMPLING TECHNIQUES ON SMALL SIGNAL STABILITY OF WIND GENERATOR- CONNECTED POWER SYSTEM

    Directory of Open Access Journals (Sweden)

    TEMITOPE RAPHAEL AYODELE

    2016-04-01

Full Text Available Monte Carlo simulation using the Simple Random Sampling (SRS) technique is well known for its ability to handle complex uncertainty problems. However, to produce a reasonable result it requires a huge sample size, which makes it computationally expensive, time-consuming, and unfit for online power system applications. In this article, the performance of the Latin Hypercube Sampling (LHS) technique is explored and compared with SRS in terms of accuracy, robustness, and speed for small signal stability application in a wind generator-connected power system. The analysis is performed using probabilistic techniques via eigenvalue analysis on two standard networks (Single Machine Infinite Bus and the IEEE 16-machine 68-bus test system). The accuracy of the two sampling techniques is determined by comparing their different sample sizes with the IDEAL (conventional) values. The robustness is determined from the variance reduction observed when the experiment is repeated 100 times with different sample sizes using the two sampling techniques in turn. Some of the results show that sample sizes generated from LHS for the small signal stability application reproduce the IDEAL values starting from a sample size of 100. This shows that about 100 samples of the random variable generated using the LHS method are enough to produce reasonable results for practical purposes in small signal stability applications. It is also revealed that LHS has the least variance when the experiment is repeated 100 times, which signifies the robustness of LHS over SRS. A sample size of 100 with LHS produces the same result as the conventional method with a sample size of 50000. The reduced sample size required by LHS gives it a computational speed advantage (about six times) over the conventional method.
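The variance-reduction advantage of LHS over SRS is easy to reproduce in one dimension. A toy comparison (estimating E[x²] under a uniform distribution, not the eigenvalue study itself):

```python
import random
import statistics

def lhs_1d(n, rng):
    """One-dimensional Latin hypercube sample on [0, 1): one uniform
    draw inside each of n equal-width strata, in shuffled order."""
    strata = list(range(n))
    rng.shuffle(strata)
    return [(k + rng.random()) / n for k in strata]

def srs_1d(n, rng):
    """Simple random sample on [0, 1)."""
    return [rng.random() for _ in range(n)]

def mean_estimate(sampler, n, rng):
    # Monte Carlo estimate of E[x^2] = 1/3 under U(0, 1).
    return sum(x * x for x in sampler(n, rng)) / n

rng = random.Random(42)
srs_means = [mean_estimate(srs_1d, 100, rng) for _ in range(100)]
lhs_means = [mean_estimate(lhs_1d, 100, rng) for _ in range(100)]
srs_spread = statistics.pstdev(srs_means)
lhs_spread = statistics.pstdev(lhs_means)
```

For a smooth integrand like this, the stratification makes the LHS estimator's spread far smaller than the SRS spread at the same sample size, which mirrors the article's finding that 100 LHS samples match results that SRS needs tens of thousands of samples to reach.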

  1. A practical method for determining γ-ray full-energy peak efficiency considering coincidence-summing and self-absorption corrections for the measurement of environmental samples after the Fukushima reactor accident

    Energy Technology Data Exchange (ETDEWEB)

    Shizuma, Kiyoshi, E-mail: shizuma@hiroshima-u.ac.jp [Graduate School of Engineering, Hiroshima University, Higashi-Hiroshima 739-8527 (Japan); Oba, Yurika; Takada, Momo [Graduate School of Integrated Arts and Sciences, Hiroshima University, Higashi-Hiroshima 739-8521 (Japan)

    2016-09-15

A method for determining the γ-ray full-energy peak efficiency at positions close to three Ge detectors and at the well port of a well-type detector was developed for measuring environmental volume samples containing 137Cs, 134Cs and 40K. The efficiency was estimated by considering two correction factors: coincidence-summing and self-absorption corrections. The coincidence-summing correction for a cascade transition nuclide was estimated by an experimental method involving measuring a sample at the far and close positions of a detector. The derived coincidence-summing correction factors were compared with those of analytical and Monte Carlo simulation methods and good agreements were obtained. Differences in the matrix of the calibration source and the environmental sample resulted in an increase or decrease of the full-energy peak counts due to the self-absorption of γ-rays in the sample. The correction factor was derived as a function of the densities of several matrix materials. The present method was applied to the measurement of environmental samples and also low-level radioactivity measurements of water samples using the well-type detector.
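In such measurements the correction factors enter multiplicatively when converting a net full-energy peak area to activity. A schematic sketch, under our convention that both correction factors are greater than 1 when counts are lost (the paper's exact formulation may differ):

```python
def activity_bq(net_peak_counts, live_time_s, peak_efficiency,
                emission_prob, summing_corr=1.0, self_abs_corr=1.0):
    """Activity (Bq) from a gamma-ray full-energy peak. summing_corr and
    self_abs_corr restore counts lost to coincidence summing and to
    self-absorption in the sample matrix (factors > 1 mean losses)."""
    return (net_peak_counts * summing_corr * self_abs_corr) / (
        live_time_s * peak_efficiency * emission_prob)

# 1000 net counts in 100 s at 10% efficiency, 0.85 emission probability,
# with illustrative 10% summing and 5% self-absorption losses:
a = activity_bq(1000, 100.0, 0.10, 0.85,
                summing_corr=1.10, self_abs_corr=1.05)
```

Close detector positions make the summing correction large (hence the far/close measurement pair in the paper), while dense sample matrices drive the self-absorption correction.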

  2. Assessment of phacoaspiration techniques in clear lens extraction for correction of high myopia

    Directory of Open Access Journals (Sweden)

    Mostafa A El-Helw

    2010-03-01

Full Text Available Mostafa A El-Helw, Ahmed M Emarah, Department of Ophthalmology, Cairo University, Egypt. Purpose: To evaluate various phacoaspiration techniques in clear lens extraction for the incidence of intraoperative difficulties and complications. Patients and methods: This was a prospective study in which bilateral clear lens extraction was performed on 40 eyes of 20 patients to correct high myopia. The patients were divided into two groups: group A underwent supracapsular phacoaspiration; group B comprised the contralateral eyes of the same patients, which were operated on with endocapsular phacoaspiration using the divide and conquer (D and C) technique. Preoperative ocular examination data were recorded and tested for significance. Intraoperative difficulties and complications such as nucleus cracking, capsule rupture and vitreous loss, and repeated chamber collapse were recorded. Postoperative examination data were recorded. Results: Mean age was 35.65 ± 5.85 years. Mean follow-up time was 17.1 ± 8.56 months. In group A mean myopia was -17.3 ± 5.07 diopters; in group B it was -17.9 ± 4.20 diopters. Mean preoperative uncorrected visual acuity (UCVA) was 0.04 ± 0.0167, while mean postoperative UCVA was 0.435 ± 0.1442. There was a significant difference between pre- and postoperative BCVA within both groups, but not between the two groups. In both groups endothelial cell count (ECC) showed a significant difference between pre- and postoperative data; however, there was no statistically significant difference between the groups in postoperative ECC. The effective phacoaspiration time was 4.6 ± 1.6 seconds for group A and 9.90 ± 2.27 seconds for group B (P < 0.005). No cases of capsule rupture occurred in group A, but 3 cases (15%) occurred in group B (not significant, P = 0.231). Nucleus cracking did not occur in group A, but 13 cases (65%) occurred in group B. Chamber collapse occurred in 4 cases (20%) in group A and 5 cases (25%) in group B (not

  3. Correcting Poor Posture without Awareness or Willpower

    Science.gov (United States)

    Wernik, Uri

    2012-01-01

    In this article, a new technique for correcting poor posture is presented. Rather than intentionally increasing awareness or mobilizing willpower to correct posture, this approach offers a game using randomly drawn cards with easy daily assignments. A case using the technique is presented to emphasize the subjective experience of living with poor…

  4. Recent Trends in Microextraction Techniques Employed in Analytical and Bioanalytical Sample Preparation

    Directory of Open Access Journals (Sweden)

    Abuzar Kabir

    2017-12-01

Full Text Available Sample preparation has been recognized as a major step in the chemical analysis workflow. As such, substantial efforts have been made in recent years to simplify the overall sample preparation process. Major focuses of these efforts have included miniaturization of the extraction device; minimizing or eliminating toxic and hazardous organic solvent consumption; eliminating sample pre-treatment and post-treatment steps; reducing the sample volume requirement; reducing extraction equilibrium time; and maximizing extraction efficiency. All these improved attributes are congruent with Green Analytical Chemistry (GAC) principles. Classical sample preparation techniques such as solid phase extraction (SPE) and liquid-liquid extraction (LLE) are being rapidly replaced with emerging miniaturized and environmentally friendly techniques such as Solid Phase Micro Extraction (SPME), Stir bar Sorptive Extraction (SBSE), Micro Extraction by Packed Sorbent (MEPS), Fabric Phase Sorptive Extraction (FPSE), and Dispersive Liquid-Liquid Micro Extraction (DLLME). In addition to the development of many new generic extraction sorbents in recent years, a large number of molecularly imprinted polymers (MIPs) created using different template molecules have also enriched the large cache of microextraction sorbents. Application of nanoparticles as high-performance extraction sorbents has undoubtedly elevated the extraction efficiency and method sensitivity of modern chromatographic analyses to a new level. Combining magnetic nanoparticles with many microextraction sorbents has opened up new possibilities to extract target analytes from sample matrices containing high volumes of matrix interferents. The aim of the current review is to critically audit the progress of microextraction techniques in recent years, which has indisputably transformed analytical chemistry practices, from biological and therapeutic drug monitoring to the environmental field; from foods to phyto

  5. Development of core sampling technique for ITER Type B radwaste

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S. G.; Hong, K. P.; Oh, W. H.; Park, M. C.; Jung, S. H.; Ahn, S. B. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

Type B radwaste (intermediate-level, long-lived radioactive waste) imported from the ITER vacuum vessel is to be treated and stored in the basement of the hot cell building. The Type B radwaste treatment process is composed of buffer storage, cutting, sampling/tritium measurement, tritium removal, characterization, pre-packaging, inspection/decontamination, and storage. The cut slices of Type B radwaste components generated by the cutting process undergo the sampling process before and after the tritium removal process. The purpose of sampling is to obtain small pieces of samples in order to investigate the tritium content and concentration of the Type B radwaste. Core sampling, one of the candidate sampling techniques to be applied in the ITER hot cell, is suitable for metal of limited thickness (less than 50 mm) without the use of coolant. The materials tested were SS316L and CuCrZr, chosen to simulate ITER Type B radwaste. In core sampling, substantial secondary waste in the form of cutting chips will unavoidably be produced, so the core sampling machine will have to be equipped with a disposal system such as suction equipment. Core sampling is also considered unfavorable with respect to tool wear compared to conventional drilling.

  6. Pierre Gy's sampling theory and sampling practice heterogeneity, sampling correctness, and statistical process control

    CERN Document Server

    Pitard, Francis F

    1993-01-01

    Pierre Gy's Sampling Theory and Sampling Practice, Second Edition is a concise, step-by-step guide for process variability management and methods. Updated and expanded, this new edition provides a comprehensive study of heterogeneity, covering the basic principles of sampling theory and its various applications. It presents many practical examples to allow readers to select appropriate sampling protocols and assess the validity of sampling protocols from others. The variability of dynamic process streams using variography is discussed to help bridge sampling theory with statistical process control. Many descriptions of good sampling devices, as well as descriptions of poor ones, are featured to educate readers on what to look for when purchasing sampling systems. The book uses its accessible, tutorial style to focus on professional selection and use of methods. The book will be a valuable guide for mineral processing engineers; metallurgists; geologists; miners; chemists; environmental scientists; and practit...

  7. 3D-Laser-Scanning Technique Applied to Bulk Density Measurements of Apollo Lunar Samples

    Science.gov (United States)

    Macke, R. J.; Kent, J. J.; Kiefer, W. S.; Britt, D. T.

    2015-01-01

In order to better interpret gravimetric data from orbiters such as GRAIL and LRO to understand the subsurface composition and structure of the lunar crust, it is important to have a reliable database of the density and porosity of lunar materials. To this end, we have been surveying these physical properties in both lunar meteorites and Apollo lunar samples. To measure porosity, both grain density and bulk density are required. For bulk density, our group has historically made extensive use of sub-mm bead immersion techniques, though several factors have made this technique problematic for our work with Apollo samples. Samples allocated for measurement are often smaller than optimal for the technique, leading to large error bars. Also, for some samples we were required to use pure alumina beads instead of our usual glass beads; the alumina beads were subject to undesirable static effects, producing unreliable results. Other investigators have tested the use of 3D laser scanners on meteorites for measuring bulk volumes. Early work, though promising, was plagued with difficulties including poor response on dark or reflective surfaces, difficulty reproducing sharp edges, and long processing times for producing shape models. Due to progress in technology, however, laser scanners have improved considerably in recent years. We tested this technique on 27 lunar samples in the Apollo collection using a scanner at NASA Johnson Space Center. We found it to be reliable and more precise than beads, with the added benefit that it involves no direct contact with the sample, enabling the study of particularly friable samples for which bead immersion is not possible.
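Once the laser scan yields a bulk volume (and hence bulk density) and pycnometry yields a grain density, porosity is one line of arithmetic:

```python
def porosity(bulk_density, grain_density):
    """Porosity = 1 - bulk/grain: the fraction of the bulk volume that
    is pore space (densities in the same units, e.g. g/cm^3)."""
    return 1.0 - bulk_density / grain_density

# Illustrative basalt-like numbers, not measured Apollo values:
phi = porosity(2.4, 3.0)  # 20% pore space
```

Because the porosity is a ratio of two measured densities, the large error bars from bead immersion on small samples propagate directly into it, which is the motivation for the more precise scanner-based bulk volumes.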

  8. Analysis of pure and malachite green doped polysulfone sample using FT-IR technique

    Science.gov (United States)

    Nayak, Rashmi J.; Khare, P. K.; Nayak, J. G.

    2018-05-01

Samples of pure and malachite green doped polysulfone in the form of foil were prepared by the isothermal immersion technique. For the pure sample, 4 g of polysulfone was dissolved in 50 ml of dimethylformamide (DMF) solvent, while for the doped samples 10 mg, 50 mg, and 100 mg of malachite green were mixed with 4 g of polysulfone, respectively. Fourier Transform Infra-Red (FT-IR) spectroscopy was used for the structural characterization of these pure and doped samples. The study shows that the intensity of transmittance decreases as the doping ratio in the polysulfone increases. The reduction in transmittance intensity is clearly apparent in the present case; moreover, the bands were broader, which indicates a charge-transfer interaction between the donor and acceptor molecules.

  9. Technique for determining cesium-137 in milk excluding ashing

    International Nuclear Information System (INIS)

    Antonova, V.A.; Prokof'ev, O.N.

    1984-01-01

For the purpose of simplifying the preparation of milk samples for radiochemical analysis for 137Cs in laboratory sanitary and hygienic investigations, a rapid analysis technique that excludes sample ashing has been developed. The technique comprises mixing the sample with silica gel for one hour, desorption of 137Cs from the silica gel into a 1N solution of hydrochloric acid, and radiochemical analysis of the hydrochloric solution to determine 137Cs. Comparison of the results obtained by this method and by the method with ashing shows excellent agreement. To account for the incomplete adsorption of 137Cs by the silica gel, a correction factor is applied in the calculation formulae.

  10. Influence of attenuation correction and reconstruction techniques on the detection of hypoperfused lesions in brain SPECT studies

    International Nuclear Information System (INIS)

    Ghoorun, S.; Groenewald, W.A.; Baete, K.; Nuyts, J.; Dupont, P.

    2004-01-01

Full text: Aim: To study the influence of attenuation correction and the reconstruction technique on the detection of hypoperfused lesions in brain SPECT imaging. Material and Methods: A simulation experiment was used in which the effects of attenuation and reconstruction were decoupled. A high resolution SPECT phantom was constructed using the BrainWeb database. In this phantom, activity values were assigned to grey and white matter (ratio 4:1) and scaled to obtain counts of the same magnitude as in clinical practice. The true attenuation map was generated by assigning attenuation coefficients to each tissue class (grey and white matter, cerebrospinal fluid, skull, soft and fatty tissue, and air) to create a non-uniform attenuation map. The uniform attenuation map was calculated using an attenuation coefficient of 0.15 cm-1. Hypoperfused lesions of varying intensities and sizes were added. The phantom was then projected as typical SPECT projection data, taking into account attenuation and collimator blurring with the addition of Poisson noise. The projection data were reconstructed using four different methods: (1) filtered backprojection (FBP) with the uniform attenuation map; (2) FBP using the true attenuation map; (3) ordered subset expectation maximization (OSEM) (equivalent to 423 iterations) with the uniform attenuation map; and (4) OSEM with the true attenuation map. Different Gaussian post-smoothing kernels were applied to the reconstructed images. Results: The analysis of the reconstructed data was performed using figures of merit such as signal-to-noise ratio (SNR), bias, and variance. The results illustrated that uniform attenuation correction caused only slight deterioration (less than 2%) in SNR when compared to the ideal attenuation map, which in reality is not known. The iterative techniques produced superior signal-to-noise ratios (an increase of 5-20% depending on the lesion and the post-smoothing) in comparison to the FBP methods.

  11. Evaluation of a breath-motion-correction technique in reducing measurement error in hepatic CT perfusion imaging

    International Nuclear Information System (INIS)

    He Wei; Liu Jianyu; Li Xuan; Li Jianying; Liao Jingmin

    2009-01-01

Objective: To evaluate the effect of a breath-motion-correction (BMC) technique in reducing measurement error of the time-density curve (TDC) in hepatic CT perfusion imaging. Methods: Twenty-five patients with suspected liver diseases underwent hepatic CT perfusion scans. The right branch of the portal vein was selected as the anatomy of interest, and BMC was performed to realign image slices for the TDC according to the rule of minimizing the temporal changes of overall structures. Ten ROIs were placed on the right branch of the portal vein to generate 10 TDCs each with and without BMC. The values of peak enhancement and time-to-peak enhancement were measured for each TDC. The coefficients of variation (CV) of peak enhancement and time-to-peak enhancement were calculated for each patient with and without BMC. The Wilcoxon signed ranks test was used to evaluate the difference between the CVs of the two parameters obtained with and without BMC. The independent-samples t test was used to evaluate the difference between the values of peak enhancement obtained with and without BMC. Results: The median (quartiles) of the CV of peak enhancement with BMC [2.84% (2.10%, 4.57%)] was significantly lower than that without BMC [5.19% (3.90%, 7.27%)] (Z=-3.108, P<0.01). The median (quartiles) of the CV of time-to-peak enhancement with BMC [2.64% (0.76%, 4.41%)] was significantly lower than that without BMC [5.23% (3.81%, 7.43%)] (Z=-3.924, P<0.01). In 8 cases, the TDC demonstrated statistically significantly higher peak enhancement with BMC (P<0.05). Conclusion: By applying the BMC technique we can effectively reduce measurement error in parameters of the TDC in hepatic CT perfusion imaging. (authors)
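The figure of merit used here, the coefficient of variation across the ten ROIs, is straightforward to compute:

```python
def coefficient_of_variation(values):
    """CV (%) = population standard deviation / mean x 100."""
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    return 100.0 * variance ** 0.5 / mean

# Hypothetical peak-enhancement values (HU) from 10 ROIs, not study data:
cv = coefficient_of_variation([90, 100, 110, 95, 105, 98, 102, 97, 103, 100])
```

A lower CV across ROIs on the same vessel means the repeated measurements agree better, which is exactly how the study quantifies the benefit of BMC.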

  12. Corrective Action Decision Document/Corrective Action Plan for Corrective Action Unit 547: Miscellaneous Contaminated Waste Sites, Nevada National Security Site, Nevada, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Mark Krauss

    2011-09-01

The purpose of this CADD/CAP is to present the corrective action alternatives (CAAs) evaluated for CAU 547, provide justification for selection of the recommended alternative, and describe the plan for implementing the selected alternative. Corrective Action Unit 547 consists of the following three corrective action sites (CASs): (1) CAS 02-37-02, Gas Sampling Assembly; (2) CAS 03-99-19, Gas Sampling Assembly; and (3) CAS 09-99-06, Gas Sampling Assembly. The gas sampling assemblies consist of inactive process piping, equipment, and instrumentation that were left in place after completion of underground safety experiments. The purpose of these safety experiments was to confirm that a nuclear explosion would not occur in the case of an accidental detonation of the high-explosive component of the device. The gas sampling assemblies allowed for the direct sampling of the gases and particulates produced by the safety experiments. Corrective Action Site 02-37-02 is located in Area 2 of the Nevada National Security Site (NNSS) and is associated with the Mullet safety experiment conducted in emplacement borehole U2ag on October 17, 1963. Corrective Action Site 03-99-19 is located in Area 3 of the NNSS and is associated with the Tejon safety experiment conducted in emplacement borehole U3cg on May 17, 1963. Corrective Action Site 09-99-06 is located in Area 9 of the NNSS and is associated with the Player safety experiment conducted in emplacement borehole U9cc on August 27, 1964. The CAU 547 CASs were investigated in accordance with the data quality objectives (DQOs) developed by representatives of the Nevada Division of Environmental Protection (NDEP) and the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office. The DQO process was used to identify and define the type, amount, and quality of data needed to determine and implement appropriate corrective actions for CAU 547.
Existing radiological survey data and historical knowledge of

  13. Separation Techniques for Quantification of Radionuclides in Environmental Samples

    Directory of Open Access Journals (Sweden)

    Dusan Galanda

    2009-01-01

Full Text Available The reliable and quantitative measurement of radionuclides is important in order to determine environmental quality and radiation safety, and to monitor regulatory compliance. We examined soil samples from Podunajske Biskupice, near the city of Bratislava in the Slovak Republic, for the presence of several natural (238U, 232Th, 40K) and anthropogenic (137Cs, 90Sr, 239Pu, 240Pu, 241Am) radionuclides. The area is adjacent to a refinery and hazardous waste processing center, as well as the municipal incinerator plant, and so might possess an unusually high level of ecotoxic metals. We found that the levels of both naturally occurring and anthropogenic radionuclides fell within the expected ranges, indicating that these facilities pose no radiological threat to the local environment. During the course of our analysis, we modified existing techniques in order to allow us to handle the unusually large and complex samples that were needed to determine the levels of 239Pu, 240Pu, and 241Am activity. We also rated three commercial techniques for the separation of 90Sr from aqueous solutions and found that two of them, AnaLig Sr-01 and Empore Extraction Disks, were suitable for the quantitative and reliable separation of 90Sr, while the third, Sr-Spec Resin, was less so. The main criterion in evaluating these methods was the chemical recovery of 90Sr, which was less than we had expected. We also considered speed of separation and additional steps needed to prepare the sample for separation.

  14. Large sample neutron activation analysis of a reference inhomogeneous sample

    International Nuclear Information System (INIS)

    Vasilopoulou, T.; Athens National Technical University, Athens; Tzika, F.; Stamatelatos, I.E.; Koster-Ammerlaan, M.J.J.

    2011-01-01

    A benchmark experiment was performed for Neutron Activation Analysis (NAA) of a large inhomogeneous sample. The reference sample was developed in-house and consisted of an SiO2 matrix and an Al-Zn alloy 'inhomogeneity' body. Monte Carlo simulations were employed to derive appropriate correction factors for neutron self-shielding during irradiation as well as self-attenuation of gamma rays and sample geometry during counting. The large sample neutron activation analysis (LSNAA) results were compared against reference values and the trueness of the technique was evaluated. An agreement within ±10% was observed between LSNAA and reference elemental mass values, for all matrix and inhomogeneity elements except samarium, provided that the inhomogeneity body was fully simulated. However, in cases where the inhomogeneity was treated as unknown, the results showed reasonable agreement for most matrix elements, while large discrepancies were observed for the inhomogeneity elements. This study provided a quantification of the uncertainties associated with inhomogeneity in large sample analysis and contributed to the identification of the needs for future development of LSNAA facilities for analysis of inhomogeneous samples. (author)
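The correction scheme described above divides the raw activation result by Monte Carlo-derived factors for self-shielding, gamma self-attenuation, and geometry. A minimal sketch of that post-processing step (the factor values and the simple multiplicative form are illustrative assumptions, not numbers from the paper):

```python
# Hypothetical LSNAA post-processing: divide the raw NAA result by the
# Monte Carlo-derived correction factors. Each factor is the ratio
# (perturbed response / unperturbed response), so factors < 1 mean the
# raw result underestimates the true mass. All values here are invented.
def corrected_mass(raw_mass_mg, f_neutron_self_shielding, f_gamma_attenuation, f_geometry):
    return raw_mass_mg / (f_neutron_self_shielding * f_gamma_attenuation * f_geometry)

# A raw result of 85 mg with 5%, 8% and 2% response losses respectively:
print(round(corrected_mass(85.0, 0.95, 0.92, 0.98), 1))  # 99.2
```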

  15. Accelerated Solvent Extraction: An Innovative Sample Extraction Technique for Natural Products

    International Nuclear Information System (INIS)

    Hazlina Ahmad Hassali; Azfar Hanif Abd Aziz; Rosniza Razali

    2015-01-01

    Accelerated solvent extraction (ASE) is one of the novel techniques that have been developed for the extraction of phytochemicals from plants in order to shorten the extraction time, decrease the solvent consumption, increase the extraction yield and enhance the quality of extracts. This technique combines elevated temperatures and pressure with liquid solvents. This paper gives a brief overview of accelerated solvent extraction technique for sample preparation and its application to the extraction of natural products. Through practical examples, the effects of operational parameters such as temperature, volume of solvent used, extraction time and extraction yields on the performance of ASE are discussed. It is demonstrated that ASE technique allows reduced solvent consumption and shorter extraction time, while the extraction yields are even higher than those obtained with conventional methods. (author)

  16. Corrective Action Investigation Plan for Corrective Action Unit 554: Area 23 Release Site, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    Boehlecke, Robert F.

    2004-01-01

    This Corrective Action Investigation Plan (CAIP) contains project-specific information for conducting site investigation activities at Corrective Action Unit (CAU) 554: Area 23 Release Site, Nevada Test Site, Nevada. Information presented in this CAIP includes facility descriptions, environmental sample collection objectives, and criteria for the selection and evaluation of environmental samples. Corrective Action Unit 554 is located in Area 23 of the Nevada Test Site, which is 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit 554 is comprised of one Corrective Action Site (CAS), which is: 23-02-08, USTs 23-115-1, 2, 3/Spill 530-90-002. This site consists of soil contamination resulting from a fuel release from underground storage tanks (USTs). Corrective Action Site 23-02-08 is being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. Additional information will be obtained by conducting a corrective action investigation prior to evaluating corrective action alternatives and selecting the appropriate corrective action for this CAS. The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document for CAU 554. Corrective Action Site 23-02-08 will be investigated based on the data quality objectives (DQOs) developed on July 15, 2004, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office; and contractor personnel. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 554. Appendix A provides a detailed discussion of the DQO methodology and the DQOs specific to CAS 23-02-08. 
The scope of the corrective action investigation

  17. Practical aspects of the resin bead technique for mass spectrometric sample loading

    International Nuclear Information System (INIS)

    Walker, R.L.; Pritchard, C.A.; Carter, J.A.; Smith, D.H.

    1976-07-01

    Using an anion resin bead as a loading vehicle for uranium and plutonium samples which are to be analyzed isotopically in a mass spectrometer has many advantages over conventional techniques. It is applicable to any laboratory routinely performing such analyses, but should be particularly relevant for Safeguards' purposes. Because the techniques required differ markedly from those of conventional methods, this report has been written to describe them in detail to enable those unfamiliar with the technique to master it with a minimum of trouble.

  18. The concept of power correction techniques and its use in the reactor regulation and protection systems in Indian PHWRs

    International Nuclear Information System (INIS)

    Vaswani, P.D.; Kelkar, M.G.; Ghoshal, B.; Ashok Kumar, B.

    2010-01-01

    Reactor power measurement is an essential part of the reactor power control loop in PHWRs. None of the available power-measuring sensors offers characteristics that allow their direct use in the reactor power control loop. Thermal power, which is considered relatively accurate, suffers from measurement delays and is used only as a reference. Neutronic power sensors such as ion chambers and self-powered neutron detectors (SPNDs), which sense instantaneous power, suffer from inaccuracies. A technique is therefore required that makes use of both types, reference power and instantaneous power, to extract the real power information from the signals. This paper describes techniques to calibrate (correct) the neutronic power against the thermal reference power signals. The paper also brings out the limitations of the calibration technique. (author)
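The calibration idea, a fast but drifting neutronic signal corrected against a slow but accurate thermal reference, can be sketched as a first-order scheme. This is a hypothetical illustration of the general principle, not the actual PHWR regulation algorithm:

```python
# Sketch (not the actual PHWR algorithm): a slowly updated calibration
# factor K maps the fast neutronic signal onto the slow-but-accurate
# thermal power reference. K * neutronic is the corrected power.
def calibrate(neutronic, thermal, k0=1.0, alpha=0.02):
    k = k0
    corrected = []
    for n, t in zip(neutronic, thermal):
        k += alpha * (t / n - k)     # slow drift correction toward the reference
        corrected.append(k * n)      # fast corrected power output
    return corrected

# Drifting neutronic sensor reading 10% low; thermal reference at 100 MW:
p = calibrate([90.0] * 500, [100.0] * 500)
print(round(p[-1], 2))  # 100.0
```

Because K changes slowly (small alpha), the corrected power inherits the fast response of the neutronic sensor while its long-term accuracy tracks the thermal reference.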

  19. A fully blanketed early B star LTE model atmosphere using an opacity sampling technique

    International Nuclear Information System (INIS)

    Phillips, A.P.; Wright, S.L.

    1980-01-01

    A fully blanketed LTE model of a stellar atmosphere with T_e = 21914 K (θ_e = 0.23), log g = 4 is presented. The model includes an explicit representation of the opacity due to the strongest lines, and uses a statistical opacity sampling technique to represent the weaker line opacity. The sampling technique is subjected to several tests and the model is compared with an atmosphere calculated using the line-distribution function method. The limitations of the distribution function method and the particular opacity sampling method used here are discussed in the light of the results obtained. (author)

  20. Sample preparation techniques based on combustion reactions in closed vessels - A brief overview and recent applications

    International Nuclear Information System (INIS)

    Flores, Erico M.M.; Barin, Juliano S.; Mesko, Marcia F.; Knapp, Guenter

    2007-01-01

    In this review, a general discussion of sample preparation techniques based on combustion reactions in closed vessels is presented. Applications for several kinds of samples are described, taking into account the literature data reported in the last 25 years. The operational conditions as well as the main characteristics and drawbacks are discussed for bomb combustion, oxygen flask and microwave-induced combustion (MIC) techniques. Recent applications of MIC techniques are discussed with special concern for samples not well digested by conventional microwave-assisted wet digestion as, for example, coal and also for subsequent determination of halogens

  1. Comparison of the FFT/matrix inversion and system matrix techniques for higher-order probe correction in spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Pivnenko, Sergey; Nielsen, Jeppe Majlund; Breinbjerg, Olav

    2011-01-01

    correction of general high-order probes, including non-symmetric dual-polarized antennas with independent ports. The investigation was carried out by processing with each technique the same measurement data for a challenging case with an antenna under test significantly offset from the center of rotation...

  2. The application of statistical and/or non-statistical sampling techniques by internal audit functions in the South African banking industry

    Directory of Open Access Journals (Sweden)

    D.P. van der Nest

    2015-03-01

    This article explores the use by internal audit functions of audit sampling techniques in order to test the effectiveness of controls in the banking sector. The article focuses specifically on the use of statistical and/or non-statistical sampling techniques by internal auditors. The focus of the research for this article was internal audit functions in the banking sector of South Africa. The results discussed in the article indicate that audit sampling is still used frequently as an audit evidence-gathering technique. Non-statistical sampling techniques are used more frequently than statistical sampling techniques for the evaluation of the sample. In addition, both techniques are regarded as important for the determination of the sample size and the selection of the sample items.
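On the statistical side, a standard way to determine a sample size for a test of controls is discovery/attribute sampling with zero expected deviations. This textbook formula is a general illustration, not a method taken from the article:

```python
import math

# Discovery sampling with zero expected deviations: find the smallest n
# such that (1 - tolerable_rate)^n <= 1 - confidence, i.e. if the true
# deviation rate were at the tolerable limit, a clean sample of size n
# would occur with probability at most (1 - confidence).
def sample_size(confidence, tolerable_rate):
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - tolerable_rate))

# 95% confidence, 5% tolerable deviation rate:
print(sample_size(0.95, 0.05))  # 59
```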

  3. Identification and Correction of Sample Mix-Ups in Expression Genetic Data: A Case Study.

    Science.gov (United States)

    Broman, Karl W; Keller, Mark P; Broman, Aimee Teo; Kendziorski, Christina; Yandell, Brian S; Sen, Śaunak; Attie, Alan D

    2015-08-19

    In a mouse intercross with more than 500 animals and genome-wide gene expression data on six tissues, we identified a high proportion (18%) of sample mix-ups in the genotype data. Local expression quantitative trait loci (eQTL; genetic loci influencing gene expression) with extremely large effect were used to form a classifier to predict an individual's eQTL genotype based on expression data alone. By considering multiple eQTL and their related transcripts, we identified numerous individuals whose predicted eQTL genotypes (based on their expression data) did not match their observed genotypes, and then went on to identify other individuals whose genotypes did match the predicted eQTL genotypes. The concordance of predictions across six tissues indicated that the problem was due to mix-ups in the genotypes (although we further identified a small number of sample mix-ups in each of the six panels of gene expression microarrays). Consideration of the plate positions of the DNA samples indicated a number of off-by-one and off-by-two errors, likely the result of pipetting errors. Such sample mix-ups can be a problem in any genetic study, but eQTL data allow us to identify, and even correct, such problems. Our methods have been implemented in an R package, R/lineup. Copyright © 2015 Broman et al.
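The core idea, comparing eQTL genotypes predicted from expression data against the recorded genotypes of every sample, can be sketched with a toy match matrix. This is an illustrative reconstruction of the principle, not the R/lineup implementation:

```python
# Toy sketch of mix-up detection: predict each individual's genotypes at
# strong local eQTL from expression, then compare every expression sample
# against every DNA sample and flag off-diagonal best matches.
def match_matrix(predicted, observed):
    """predicted[i], observed[j]: genotype calls at the same eQTL.
    Returns the fraction of matching calls for every (i, j) pair."""
    return [[sum(p == o for p, o in zip(pi, oj)) / len(pi) for oj in observed]
            for pi in predicted]

def flag_mixups(predicted, observed):
    m = match_matrix(predicted, observed)
    flags = []
    for i, row in enumerate(m):
        best = max(range(len(row)), key=row.__getitem__)
        if best != i:                 # expression sample i best matches DNA j
            flags.append((i, best))
    return flags

# Samples 1 and 2 have swapped genotype labels:
pred = [[0, 1, 2, 0], [1, 1, 0, 2], [2, 0, 1, 1]]
obs  = [[0, 1, 2, 0], [2, 0, 1, 1], [1, 1, 0, 2]]
print(flag_mixups(pred, obs))  # [(1, 2), (2, 1)]
```

Flagged pairs whose partners point at each other, as here, suggest a correctable swap rather than an unusable sample.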

  4. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  5. X-ray spectrometry and X-ray microtomography techniques for soil and geological samples analysis

    International Nuclear Information System (INIS)

    Kubala-Kukuś, A.; Banaś, D.; Braziewicz, J.; Dziadowicz, M.; Kopeć, E.; Majewska, U.; Mazurek, M.; Pajek, M.; Sobisz, M.; Stabrawa, I.; Wudarczyk-Moćko, J.; Góźdź, S.

    2015-01-01

    A particular subject of X-ray fluorescence analysis is its application in studies of the multielemental composition of samples over a wide range of concentrations, including samples with different matrices, inhomogeneous samples, and samples characterized by different grain sizes. Typical examples of these kinds of samples are soil or geological samples, for which XRF elemental analysis may be difficult due to XRF disturbing effects. In this paper the WDXRF technique was applied in the elemental analysis of different soil and geological samples (therapeutic mud, floral soil, brown soil, sandy soil, calcium aluminum cement). The sample morphology was analyzed using the X-ray microtomography technique. The paper discusses the differences between the compositions of the samples, the influence of the sample preparation procedures on their morphology and, finally, a quantitative analysis. The results of the studies were statistically tested (one-way ANOVA and correlation coefficients). For lead concentration determination in samples of sandy soil and cement-like matrix, the WDXRF spectrometer calibration was performed. The elemental analysis of the samples was complemented with knowledge of the chemical composition obtained by X-ray powder diffraction.

  6. X-ray spectrometry and X-ray microtomography techniques for soil and geological samples analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kubala-Kukuś, A.; Banaś, D.; Braziewicz, J. [Institute of Physics, Jan Kochanowski University, ul. Świetokrzyska 15, 25-406 Kielce (Poland); Holycross Cancer Center, ul. Artwińskiego 3, 25-734 Kielce (Poland); Dziadowicz, M.; Kopeć, E. [Institute of Physics, Jan Kochanowski University, ul. Świetokrzyska 15, 25-406 Kielce (Poland); Majewska, U. [Institute of Physics, Jan Kochanowski University, ul. Świetokrzyska 15, 25-406 Kielce (Poland); Holycross Cancer Center, ul. Artwińskiego 3, 25-734 Kielce (Poland); Mazurek, M.; Pajek, M.; Sobisz, M.; Stabrawa, I. [Institute of Physics, Jan Kochanowski University, ul. Świetokrzyska 15, 25-406 Kielce (Poland); Wudarczyk-Moćko, J. [Holycross Cancer Center, ul. Artwińskiego 3, 25-734 Kielce (Poland); Góźdź, S. [Holycross Cancer Center, ul. Artwińskiego 3, 25-734 Kielce (Poland); Institute of Public Health, Jan Kochanowski University, IX Wieków Kielc 19, 25-317 Kielce (Poland)

    2015-12-01

    A particular subject of X-ray fluorescence analysis is its application in studies of the multielemental composition of samples over a wide range of concentrations, including samples with different matrices, inhomogeneous samples, and samples characterized by different grain sizes. Typical examples of these kinds of samples are soil or geological samples, for which XRF elemental analysis may be difficult due to XRF disturbing effects. In this paper the WDXRF technique was applied in the elemental analysis of different soil and geological samples (therapeutic mud, floral soil, brown soil, sandy soil, calcium aluminum cement). The sample morphology was analyzed using the X-ray microtomography technique. The paper discusses the differences between the compositions of the samples, the influence of the sample preparation procedures on their morphology and, finally, a quantitative analysis. The results of the studies were statistically tested (one-way ANOVA and correlation coefficients). For lead concentration determination in samples of sandy soil and cement-like matrix, the WDXRF spectrometer calibration was performed. The elemental analysis of the samples was complemented with knowledge of the chemical composition obtained by X-ray powder diffraction.

  7. Sampling methods for rumen microbial counts by Real-Time PCR techniques

    Directory of Open Access Journals (Sweden)

    S. Puppo

    2010-02-01

    Fresh rumen samples were withdrawn from 4 cannulated buffalo females fed a fibrous diet in order to quantify bacteria concentration in the rumen by Real-Time PCR techniques. To obtain DNA of a good quality from whole rumen fluid, eight different pre-filtration methods (M1-M8; cheese cloths, glass-fibre and nylon filters) in combination with various centrifugation speeds (1000, 5000 and 14,000 rpm) were tested. Genomic DNA extraction was performed either on fresh or frozen samples (-20°C). The quantitative bacteria analysis was performed according to the Real-Time PCR procedure for Butyrivibrio fibrisolvens reported in the literature. M5 proved to be the best sampling procedure, allowing a suitable genomic DNA to be obtained. No differences were revealed between fresh and frozen samples.

  8. A weighted least-squares lump correction algorithm for transmission-corrected gamma-ray nondestructive assay

    International Nuclear Information System (INIS)

    Prettyman, T.H.; Sprinkle, J.K. Jr.; Sheppard, G.A.

    1993-01-01

    With transmission-corrected gamma-ray nondestructive assay instruments such as the Segmented Gamma Scanner (SGS) and the Tomographic Gamma Scanner (TGS) that is currently under development at Los Alamos National Laboratory, the amount of gamma-ray emitting material can be underestimated for samples in which the emitting material consists of particles or lumps of highly attenuating material. This problem is encountered in the assay of uranium and plutonium-bearing samples. To correct for this source of bias, we have developed a least-squares algorithm that uses transmission-corrected assay results for several emitted energies and a weighting function to account for statistical uncertainties in the assay results. The variation of effective lump size in the fitted model is parameterized; this allows the correction to be performed for a wide range of lump-size distributions. It may be possible to use the reduced chi-squared value obtained in the fit to identify samples in which assay assumptions have been violated. We found that the algorithm significantly reduced bias in simulated assays and improved SGS assay results for plutonium-bearing samples. Further testing will be conducted with the TGS, which is expected to be less susceptible than the SGS to systematic sources of bias.
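The weighting step of such an algorithm can be illustrated generically. Below is a sketch of weighted least squares with weights 1/σ² for a straight-line model; the paper's lump-size model is more elaborate, so this only shows how the statistical uncertainties of the per-energy assay results enter the fit:

```python
# Generic weighted least-squares sketch: fit y = a + b*x with weights
# w = 1/sigma^2, so noisier points pull less on the fitted parameters.
def wls_line(x, y, sigma):
    w = [1.0 / s ** 2 for s in sigma]
    sw = sum(w)
    sx = sum(wi * xi for wi, xi in zip(w, x))
    sy = sum(wi * yi for wi, yi in zip(w, y))
    sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    b = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    a = (sy - b * sx) / sw
    return a, b

# Noise-free data on y = 2 + 3x recovers the parameters exactly:
print(wls_line([0, 1, 2, 3], [2, 5, 8, 11], [1, 1, 1, 1]))  # (2.0, 3.0)
```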

  9. Pu abundances, concentrations, and isotopics by x- and gamma-ray spectrometry assay techniques

    International Nuclear Information System (INIS)

    Camp, D.C.; Gunnink, R.; Ruhter, W.D.; Prindle, A.L.; Gomes, R.J.

    1986-01-01

    Two x- and gamma-ray systems were recently installed at-line in gloveboxes and will measure Pu solution concentrations from 5 to 105 g/L. These NDA techniques, developed and refined over the past decade, are now used domestically and internationally for nuclear material process monitoring and accountability needs. In off- and at-line installations, they can measure solution concentrations to 0.2%. The K-XRFA systems use a transmission source to correct for solution density. The gamma-ray systems use peaks from 59 to 208 keV to determine solution concentrations and relative isotopics. A Pu check source monitors system stability. These two NDA techniques can be combined to form a new NDA measurement methodology. With the instrument located outside of a glovebox, both the relative Pu isotopics and the absolute Pu abundance of a sample located inside the glovebox can be measured. The new technique works with either single or dual source excitation; the former requires a detector 6 to 20 cm away with no geometric corrections needed; the latter requires geometric corrections or source movement if the sample cannot be measured at the calibration distance. 4 refs., 7 figs., 2 tabs

  10. Random sampling technique for ultra-fast computations of molecular opacities for exoplanet atmospheres

    Science.gov (United States)

    Min, M.

    2017-10-01

    Context. Opacities of molecules in exoplanet atmospheres rely on increasingly detailed line-lists for these molecules. The line lists available today contain, for many species, up to several billion lines. Computation of the spectral line profile created by pressure and temperature broadening, the Voigt profile, of all of these lines is becoming a computational challenge. Aims: We aim to create a method to compute the Voigt profile in a way that automatically focusses the computation time into the strongest lines, while still maintaining the continuum contribution of the high number of weaker lines. Methods: Here, we outline a statistical line sampling technique that samples the Voigt profile quickly and with high accuracy. The number of samples is adjusted to the strength of the line and the local spectral line density. This automatically provides high accuracy line shapes for strong lines or lines that are spectrally isolated. The line sampling technique automatically preserves the integrated line opacity for all lines, thereby also providing the continuum opacity created by the large number of weak lines at very low computational cost. Results: The line sampling technique is tested for accuracy when computing line spectra and correlated-k tables. Extremely fast computations (≈3.5 × 10^5 lines per second per core on a standard current day desktop computer) with high accuracy (≤1% almost everywhere) are obtained. A detailed recipe on how to perform the computations is given.
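The key invariant, that each line's integrated opacity is preserved because every one of its n samples deposits strength/n, can be sketched as follows. This is a simplified illustration using a Gaussian stand-in for the Voigt profile, with arbitrary parameters, not the paper's full recipe:

```python
import random

# Sketch of strength-weighted line sampling: each line gets a number of
# frequency samples proportional to its strength (at least one), and each
# sample deposits strength/n on the opacity grid, so the integrated
# opacity of every line is conserved no matter how few samples it gets.
def sample_lines(lines, grid_size, samples_per_unit_strength=100, width=2):
    random.seed(1)  # deterministic for the example
    grid = [0.0] * grid_size
    for center, strength in lines:
        n = max(1, int(strength * samples_per_unit_strength))
        for _ in range(n):
            # crude Gaussian stand-in for the Voigt line profile
            nu = int(round(random.gauss(center, width))) % grid_size
            grid[nu] += strength / n
    return grid

lines = [(100, 5.0), (300, 0.01)]      # one strong line, one weak line
grid = sample_lines(lines, 1000)
print(round(sum(grid), 6))             # 5.01: total opacity conserved
```

The strong line receives 500 samples and gets a well-resolved shape; the weak line receives a single sample, yet still contributes its full integrated opacity to the continuum.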

  11. Sample preparation techniques in trace element analysis by X-ray emission spectroscopy

    International Nuclear Information System (INIS)

    Valkovic, V.

    1983-11-01

    The report, written under a research contract with the IAEA, contains a detailed presentation of the most difficult problem encountered in the trace element analysis by methods of the X-ray emission spectroscopy, namely the sample preparation techniques. The following items are covered. Sampling - with specific consideration of aerosols, water, soil, biological materials, petroleum and its products, storage of samples and their handling. Pretreatment of samples - preconcentration, ashing, solvent extraction, ion exchange and electrodeposition. Sample preparations for PIXE - analysis - backings, target uniformity and homogeneity, effects of irradiation, internal standards and specific examples of preparation (aqueous, biological, blood serum and solid samples). Sample preparations for radioactive sources or tube excitation - with specific examples (water, liquid and solid samples, soil, geological, plants and tissue samples). Finally, the problem of standards and reference materials, as well as that of interlaboratory comparisons, is discussed

  12. The measurement of radioactive microspheres in biological samples

    International Nuclear Information System (INIS)

    Mernagh, J.R.; Spiers, E.W.; Adiseshiah, M.

    1976-01-01

    Measurements of the distribution of radioactive microspheres are used in investigations of regional coronary blood flow, but the size and shape of the heart varies for different test animals, and the organ is frequently divided into smaller pieces for studies of regional perfusion. Errors are introduced by variations in the distribution of the radioactive source and the amount of Compton scatter in different samples. A technique has therefore been developed to allow the counting of these tissue samples in their original form, and correction factors have been derived to inter-relate the various counting geometries thus encountered. Dogs were injected with microspheres labelled with 141Ce, 51Cr or 85Sr. The tissue samples did not require remodelling to fit a standard container, and allowance was made for the inhomogeneous distribution in the blood samples. The activities in the centrifuged blood samples were correlated with those from the tissue samples by a calibration procedure involving comparisons of the counts from samples of microspheres embedded in sachets of gelatine, and similar samples mixed with blood and then centrifuged. The calibration data have indicated that 51Cr behaves anomalously, and its use as a label for microspheres may introduce unwarranted errors. A plane cylindrical 10 × 20 cm NaI detector was used, and a 'worst case' correction of 20% was found to be necessary for geometry effects. The accuracy of this method of correlating different geometries was tested by remodelling the same tissue sample into different sizes and comparing the results, and the validity of the technique was supported by agreement of the final results with previously published data. (U.K.)

  13. Large-volume constant-concentration sampling technique coupling with surface-enhanced Raman spectroscopy for rapid on-site gas analysis

    Science.gov (United States)

    Zhang, Zhuomin; Zhan, Yisen; Huang, Yichun; Li, Gongke

    2017-08-01

    In this work, a portable large-volume constant-concentration (LVCC) sampling technique coupled with surface-enhanced Raman spectroscopy (SERS) was developed for rapid on-site gas analysis based on suitable derivatization methods. The LVCC sampling technique mainly consisted of a specially designed sampling cell, including a rigid sample container and a flexible sampling bag, and an absorption-derivatization module with a portable pump and a gas flowmeter. The LVCC sampling technique allowed a large, alterable and well-controlled sampling volume, which kept the concentration of the gas target in the headspace phase constant during the entire sampling process and made the sampling result more representative. Moreover, absorption and derivatization of the gas target during the LVCC sampling process were efficiently merged into one step, using bromine-thiourea and OPA-NH4+ strategies for ethylene and SO2 respectively, which made the LVCC sampling technique conveniently adaptable to subsequent SERS analysis. Finally, a new LVCC sampling-SERS method was developed and successfully applied to the rapid analysis of trace ethylene and SO2 from fruits. Trace ethylene and SO2 from real fruit samples could be accurately quantified by this method; the concentration fluctuations of ethylene and SO2 during the entire LVCC sampling process were shown to be minor, and recoveries from real samples were achieved in the range of 95.0-101% and 97.0-104%, respectively. It is expected that the portable LVCC sampling technique will pave the way for rapid on-site analysis of accurate concentrations of trace gas targets from real samples by SERS.

  14. Quantum error correction for beginners

    International Nuclear Information System (INIS)

    Devitt, Simon J; Nemoto, Kae; Munro, William J

    2013-01-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation now constitute a much larger field, and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)
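The standard first example in such introductions is the three-qubit bit-flip (repetition) code. Below is a purely classical sketch of its encode/correct cycle; real QEC extracts an error syndrome without measuring the data qubits, which this toy version ignores:

```python
# Classical sketch of the three-qubit bit-flip code: one logical bit is
# encoded in three physical bits, and a majority vote corrects any
# single bit-flip error.
def encode(bit):
    return [bit, bit, bit]

def apply_error(codeword, flip_index):
    codeword = list(codeword)
    codeword[flip_index] ^= 1          # flip one physical bit
    return codeword

def decode(codeword):
    return 1 if sum(codeword) >= 2 else 0   # majority vote

# Every single bit-flip on either logical value is corrected:
for bit in (0, 1):
    for i in range(3):
        assert decode(apply_error(encode(bit), i)) == bit
print("all single bit-flips corrected")
```

Two simultaneous flips defeat the majority vote, which is why larger codes and fault-tolerant constructions are needed for realistic error rates.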

  15. Experimental technique to measure thoron generation rate of building material samples using RAD7 detector

    International Nuclear Information System (INIS)

    Csige, I.; Szabó, Zs.; Szabó, Cs.

    2013-01-01

    Thoron (220Rn) is the second most abundant radon isotope in our living environment. In some dwellings it is present in significant amounts, which calls for its identification and remediation. Indoor thoron originates mainly from building materials. In this work we have developed and tested an experimental technique to measure the thoron generation rate in building material samples using a RAD7 radon-thoron detector. The mathematical model of the measurement technique provides the thoron concentration response of the RAD7 as a function of the sample thickness. For experimental validation of the technique, an adobe building material sample was selected and the thoron concentration was measured at nineteen different sample thicknesses. Fitting the parameters of the model to the measurement results, both the generation rate and the diffusion length of thoron were estimated. We have also determined the optimal sample thickness for estimating the thoron generation rate from a single measurement. -- Highlights: • RAD7 is used for the determination of thoron generation rate (emanation). • The described model takes into account the thoron decay and attenuation. • The model describes well the experimental results. • A single point measurement method is offered at a determined sample thickness
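Fitting a two-parameter model to response-versus-thickness data, as done in the paper, can be sketched as follows. The saturation form J(d) = J_inf·tanh(d/L) is a common diffusion-theory result for a slab with one open face and is assumed here purely for illustration; the paper's actual RAD7 response model may differ:

```python
import math

# Brute-force two-parameter fit of a tanh saturation curve: the plateau
# J_inf relates to the generation rate and L is the diffusion length.
def fit_tanh(thicknesses, responses):
    best = None
    for a in [x * 0.1 for x in range(1, 301)]:        # plateau grid
        for L in [x * 0.1 for x in range(1, 101)]:    # diffusion-length grid
            err = sum((r - a * math.tanh(d / L)) ** 2
                      for d, r in zip(thicknesses, responses))
            if best is None or err < best[0]:
                best = (err, a, L)
    return best[1], best[2]

# Synthetic data generated with J_inf = 10, L = 3:
d = [0.5, 1, 2, 3, 4, 6, 8]
r = [10 * math.tanh(x / 3) for x in d]
print(fit_tanh(d, r))  # ≈ (10.0, 3.0)
```

Once both parameters are known, the curve also identifies the thickness beyond which the response saturates, which is the basis for a single-point measurement.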

  16. Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics

    Science.gov (United States)

    Pohorille, Andrew

    2006-01-01

    The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods proved to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered as a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs.
The third class of methods deals with transitions between states described
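
    The parallel-tempering scheme described above can be sketched in a few lines. The double-well target, temperature ladder, step size and swap schedule below are illustrative choices, not taken from the abstract:

```python
import math
import random

def parallel_tempering(log_pdf, temps, n_steps, step_size=0.5, seed=0):
    """Minimal parallel-tempering sketch: one random walker per
    temperature, Metropolis moves within each chain, and occasional
    swaps of neighbouring replicas via the Metropolis criterion."""
    rng = random.Random(seed)
    xs = [0.0] * len(temps)              # current position of each replica
    cold_samples = []                    # draws from the T = temps[0] chain
    for step in range(n_steps):
        for i, temp in enumerate(temps):
            prop = xs[i] + rng.gauss(0.0, step_size)
            # accept with probability min(1, [p(prop)/p(x)]^(1/T))
            if math.log(rng.random()) < (log_pdf(prop) - log_pdf(xs[i])) / temp:
                xs[i] = prop
        if step % 5 == 0:                # attempt a neighbour-replica swap
            i = rng.randrange(len(temps) - 1)
            delta = (1.0 / temps[i] - 1.0 / temps[i + 1]) * \
                    (log_pdf(xs[i + 1]) - log_pdf(xs[i]))
            if math.log(rng.random()) < delta:
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
        cold_samples.append(xs[0])
    return cold_samples

# Double-well pdf with well-separated modes at x = -2 and x = +2:
# a plain Metropolis chain at T = 1 tends to get stuck in one mode,
# while the hot replicas cross the barrier and feed swaps downward.
log_p = lambda x: -((x * x - 4.0) ** 2) / 2.0
draws = parallel_tempering(log_p, temps=[1.0, 3.0, 9.0], n_steps=20000)
```

    The cold chain then samples the correct pdf while still visiting both modes, which is the "statistically correct simulated annealing" behaviour the abstract refers to.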

  17. Corrective Action Investigation Plan for Corrective Action Unit 542: Disposal Holes, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    Laura Pastor

    2006-01-01

    Corrective Action Unit (CAU) 542 is located in Areas 3, 8, 9, and 20 of the Nevada Test Site, which is 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit 542 is comprised of eight corrective action sites (CASs): (1) 03-20-07, ''UD-3a Disposal Hole''; (2) 03-20-09, ''UD-3b Disposal Hole''; (3) 03-20-10, ''UD-3c Disposal Hole''; (4) 03-20-11, ''UD-3d Disposal Hole''; (5) 06-20-03, ''UD-6 and UD-6s Disposal Holes''; (6) 08-20-01, ''U-8d PS No.1A Injection Well Surface Release''; (7) 09-20-03, ''U-9itsy30 PS No.1A Injection Well Surface Release''; and (8) 20-20-02, ''U-20av PS No.1A Injection Well Surface Release''. These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. Additional information will be obtained by conducting a corrective action investigation before evaluating corrective action alternatives and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on January 30, 2006, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office; Stoller-Navarro Joint Venture; and Bechtel Nevada. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 542. Appendix A provides a detailed discussion of the DQO methodology and the DQOs specific to each CAS. The scope of the CAI for CAU 542 includes the following activities: (1) Move surface debris and/or materials, as needed, to facilitate sampling. (2) Conduct radiological surveys. (3) Conduct geophysical surveys to

  18. Estimating the residential demand function for natural gas in Seoul with correction for sample selection bias

    International Nuclear Information System (INIS)

    Yoo, Seung-Hoon; Lim, Hea-Jin; Kwak, Seung-Jun

    2009-01-01

    Over the last twenty years, the consumption of natural gas in Korea has increased dramatically. This increase has mainly resulted from the rise of consumption in the residential sector. The main objective of the study is to estimate households' demand function for natural gas by applying a sample selection model using data from a survey of households in Seoul. The results show that there exists a selection bias in the sample and that failure to correct for sample selection bias distorts the mean estimate of the demand for natural gas downward by 48.1%. In addition, according to the estimation results, the size of the house, the dummy variable for dwelling in an apartment, the dummy variable for having a bed in an inner room, and the household's income all have positive relationships with the demand for natural gas. On the other hand, the size of the family and the price of gas negatively contribute to the demand for natural gas. (author)
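
    The standard sample-selection correction in studies of this kind is Heckman's two-step estimator: a probit model of the selection decision, followed by OLS augmented with the inverse Mills ratio. The sketch below runs it on synthetic data; all variable names, coefficients and the data-generating process are invented for illustration and are not the authors' specification:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def heckman_two_step(y, x, z, observed):
    """Heckman two-step sample-selection correction (sketch).
    Step 1: probit of the selection indicator on z.
    Step 2: OLS of y on x plus the inverse Mills ratio, using
    only the selected observations."""
    Z1 = np.column_stack([np.ones(len(z)), z])

    def neg_loglik(g):                      # probit negative log-likelihood
        p = np.clip(norm.cdf(Z1 @ g), 1e-10, 1 - 1e-10)
        return -np.sum(observed * np.log(p) + (1 - observed) * np.log(1 - p))

    g = minimize(neg_loglik, np.zeros(Z1.shape[1]), method="BFGS").x
    xb = Z1 @ g
    mills = norm.pdf(xb) / norm.cdf(xb)     # inverse Mills ratio

    sel = observed.astype(bool)
    X1 = np.column_stack([np.ones(sel.sum()), x[sel], mills[sel]])
    beta, *_ = np.linalg.lstsq(X1, y[sel], rcond=None)
    return beta                  # [intercept, slope, Mills-ratio coefficient]

# Synthetic data: a shared error u drives both selection and demand,
# so naive OLS on the selected sample is biased.
rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=n)
x = rng.normal(size=n)
u = rng.normal(size=n)
observed = (0.5 + z + u > 0).astype(float)
y = 1.0 + 2.0 * x + 0.8 * u + 0.3 * rng.normal(size=n)
beta = heckman_two_step(y, x, z, observed)
```

    A significantly nonzero Mills-ratio coefficient (`beta[2]`) is the evidence of selection bias reported in abstracts such as this one.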

  19. Validation of phenol red versus gravimetric method for water reabsorption correction and study of gender differences in Doluisio's absorption technique.

    Science.gov (United States)

    Tuğcu-Demiröz, Fatmanur; Gonzalez-Alvarez, Isabel; Gonzalez-Alvarez, Marta; Bermejo, Marival

    2014-10-01

    The aim of the present study was to develop a method for water flux reabsorption measurement in Doluisio's Perfusion Technique based on the use of phenol red as a non-absorbable marker and to validate it by comparison with the gravimetric procedure. The compounds selected for the study were metoprolol, atenolol, cimetidine and cefadroxil in order to include low, intermediate and high permeability drugs absorbed by passive diffusion and by carrier mediated mechanism. The intestinal permeabilities (Peff) of the drugs were obtained in male and female Wistar rats and calculated using both methods of water flux correction. The absorption rate coefficients of all the assayed compounds did not show statistically significant differences between male and female rats; consequently, all the individual values were combined to compare between reabsorption methods. The absorption rate coefficients and permeability values did not show statistically significant differences between the two strategies of concentration correction. The apparent zero order water absorption coefficients were also similar in both correction procedures. In conclusion, the gravimetric and phenol red methods for water reabsorption correction are accurate and interchangeable for permeability estimation in the closed-loop perfusion method. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. The Role of the Sampling Distribution in Understanding Statistical Inference

    Science.gov (United States)

    Lipson, Kay

    2003-01-01

    Many statistics educators believe that few students develop the level of conceptual understanding essential for them to apply correctly the statistical techniques at their disposal and to interpret their outcomes appropriately. It is also commonly believed that the sampling distribution plays an important role in developing this understanding.…

  1. k-space sampling optimization for ultrashort TE imaging of cortical bone: Applications in radiation therapy planning and MR-based PET attenuation correction

    International Nuclear Information System (INIS)

    Hu, Lingzhi; Traughber, Melanie; Su, Kuan-Hao; Pereira, Gisele C.; Grover, Anu; Traughber, Bryan; Muzic, Raymond F. Jr.

    2014-01-01

    Purpose: The ultrashort echo-time (UTE) sequence is a promising MR pulse sequence for imaging cortical bone which is otherwise difficult to image using conventional MR sequences and also poses strong attenuation for photons in radiation therapy and PET imaging. The authors report here a systematic characterization of cortical bone signal decay and a scanning time optimization strategy for the UTE sequence through k-space undersampling, which can result in up to a 75% reduction in acquisition time. Using the undersampled UTE imaging sequence, the authors also attempted to quantitatively investigate the MR properties of cortical bone in healthy volunteers, thus demonstrating the feasibility of using such a technique for generating bone-enhanced images which can be used for radiation therapy planning and attenuation correction with PET/MR. Methods: An angularly undersampled, radially encoded UTE sequence was used for scanning the brains of healthy volunteers. Quantitative MR characterization of tissue properties, including water fraction and R2* = 1/T2*, was performed by analyzing the UTE images acquired at multiple echo times. The impact of different sampling rates was evaluated through systematic comparison of the MR image quality, bone-enhanced image quality, image noise, water fraction, and R2* of cortical bone. Results: A reduced angular sampling rate of the UTE trajectory achieves acquisition durations in proportion to the sampling rate and in as short as 25% of the time required for full sampling using a standard Cartesian acquisition, while preserving unique MR contrast within the skull at the cost of a minimal increase in noise level. The R2* of human skull was measured as 0.2–0.3 ms⁻¹ depending on the specific region, which is more than ten times greater than the R2* of soft tissue. The water fraction in human skull was measured to be 60%–80%, which is significantly less than the >90% water fraction in brain. High-quality, bone

  2. Drift correction for single-molecule imaging by molecular constraint field, a distance minimum metric

    International Nuclear Information System (INIS)

    Han, Renmin; Wang, Liansan; Xu, Fan; Zhang, Yongdeng; Zhang, Mingshu; Liu, Zhiyong; Ren, Fei; Zhang, Fa

    2015-01-01

    The recent developments of far-field optical microscopy (single-molecule imaging techniques) have overcome the diffraction barrier of light and improved image resolution by a factor of ten compared with conventional light microscopy. These techniques utilize the stochastic switching of probe molecules to overcome the diffraction limit and determine the precise localizations of molecules, which often requires a long image acquisition time. However, long acquisition times increase the risk of sample drift. In high-resolution microscopy, sample drift decreases the image resolution. In this paper, we propose a novel metric based on the distance between molecules to solve the drift-correction problem. The proposed metric directly uses the position information of molecules to estimate the frame drift. We also designed an algorithm to implement the metric for the general application of drift correction. There are two advantages of our method: First, because our method does not require spatial binning of molecule positions but operates directly on the positions, it is more natural for single-molecule imaging techniques. Second, our method can estimate drift with a small number of positions in each temporal bin, which may extend its potential application. The effectiveness of our method has been demonstrated by both simulated data and experiments on single-molecule images
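
    A much-simplified illustration of the distance-minimisation idea — a brute-force grid search over candidate shifts rather than the authors' algorithm, with invented scene parameters — might look like:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_drift(ref_pts, drifted_pts, search=50.0, step=2.0):
    """Grid-search sketch of drift estimation by distance minimisation:
    find the (dx, dy) shift that minimises the mean nearest-neighbour
    distance between a drifted localisation set and a reference set,
    operating directly on molecule positions (no spatial binning)."""
    tree = cKDTree(ref_pts)
    best, best_cost = (0.0, 0.0), np.inf
    for dx in np.arange(-search, search + step, step):
        for dy in np.arange(-search, search + step, step):
            d, _ = tree.query(drifted_pts - np.array([dx, dy]))
            if d.mean() < best_cost:
                best_cost, best = d.mean(), (dx, dy)
    return best

# Synthetic localisations (nm): the second temporal bin sees the same
# molecules shifted by an unknown drift plus localisation noise.
rng = np.random.default_rng(0)
mol = rng.uniform(0, 1000, size=(400, 2))
true_drift = np.array([14.0, -8.0])
frame2 = mol + true_drift + rng.normal(0, 1.0, size=mol.shape)
dx, dy = estimate_drift(mol, frame2)
```

    A production method would refine the shift below the grid step and chain the estimates across many temporal bins, but the cost function — summed inter-molecule distances — is the same idea the abstract describes.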

  3. Corrective Action Investigation Plan for Corrective Action Unit 561: Waste Disposal Areas, Nevada Test Site, Nevada, Revision 0

    International Nuclear Information System (INIS)

    Grant Evenson

    2008-01-01

    Corrective Action Unit (CAU) 561 is located in Areas 1, 2, 3, 5, 12, 22, 23, and 25 of the Nevada Test Site, which is approximately 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit 561 is comprised of the 10 corrective action sites (CASs) listed below: (1) 01-19-01, Waste Dump; (2) 02-08-02, Waste Dump and Burn Area; (3) 03-19-02, Debris Pile; (4) 05-62-01, Radioactive Gravel Pile; (5) 12-23-09, Radioactive Waste Dump; (6) 22-19-06, Buried Waste Disposal Site; (7) 23-21-04, Waste Disposal Trenches; (8) 25-08-02, Waste Dump; (9) 25-23-21, Radioactive Waste Dump; and (10) 25-25-19, Hydrocarbon Stains and Trench. These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. Additional information will be obtained by conducting a corrective action investigation before evaluating corrective action alternatives and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on April 28, 2008, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office; Stoller-Navarro Joint Venture; and National Security Technologies, LLC. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 561. Appendix A provides a detailed discussion of the DQO methodology and the DQOs specific to each CAS. The scope of the Corrective Action Investigation for CAU 561 includes the following activities: (1) Move surface debris and/or materials, as needed, to facilitate sampling. (2) Conduct

  4. Sampling phased array - a new technique for ultrasonic signal processing and imaging

    OpenAIRE

    Verkooijen, J.; Boulavinov, A.

    2008-01-01

    Over the past 10 years, the improvement in the field of microelectronics and computer engineering has led to significant advances in ultrasonic signal processing and image construction techniques that are currently being applied to non-destructive material evaluation. A new phased array technique, called 'Sampling Phased Array', has been developed in the Fraunhofer Institute for Non-Destructive Testing [1]. It realises a unique approach to the measurement and processing of ultrasonic signals. Th...

  5. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and applies several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  6. Two-compartment, two-sample technique for accurate estimation of effective renal plasma flow: Theoretical development and comparison with other methods

    International Nuclear Information System (INIS)

    Lear, J.L.; Feyerabend, A.; Gregory, C.

    1989-01-01

    Discordance between effective renal plasma flow (ERPF) measurements from radionuclide techniques that use single versus multiple plasma samples was investigated. In particular, the authors determined whether effects of variations in distribution volume (Vd) of iodine-131 iodohippurate on measurement of ERPF could be ignored, an assumption implicit in the single-sample technique. The influence of Vd on ERPF was found to be significant, a factor indicating an important and previously unappreciated source of error in the single-sample technique. Therefore, a new two-compartment, two-plasma-sample technique was developed on the basis of the observations that while variations in Vd occur from patient to patient, the relationship between intravascular and extravascular components of Vd and the rate of iodohippurate exchange between the components are stable throughout a wide range of physiologic and pathologic conditions. The new technique was applied in a series of 30 studies in 19 patients. Results were compared with those achieved with the reference, single-sample, and slope-intercept techniques. The new two-compartment, two-sample technique yielded estimates of ERPF that more closely agreed with the reference multiple-sample method than either the single-sample or slope-intercept techniques

  7. Evaluation of the Metered-Dose Inhaler Technique among Health Care Providers Practicing in Hamadan University of Medical Sciences

    Directory of Open Access Journals (Sweden)

    E. Nadi

    2004-07-01

    Poor inhaler technique is a common problem both in asthma patients and health care providers, which contributes to poor asthma control. The aim of this study was to evaluate the correctness of metered-dose inhaler (MDI) technique in a sample of physicians, pharmacists and nurses practicing in Hamadan University hospitals. A total of 176 healthcare providers (35 internists and general physicians, 138 nurses and 3 pharmacists) participated voluntarily in this study. After the participants answered a questionnaire aimed at identifying their involvement in MDI prescribing and counseling, a trained observer assessed their MDI technique using a checklist of ten steps. Of the 176 participants, 35 (20%) were physicians, 3 (2%) were pharmacists, and 138 (78%) were nurses. However, only 6 participants (3.4%) performed all steps correctly. Physicians performed significantly better than non-physicians (8.6% vs. 2.13%). The majority of healthcare providers responsible for instructing patients on the correct MDI technique were unable to perform this technique correctly, indicating the need for regular formal training programmes on inhaler techniques.

  8. First Industrial Tests of a Matrix Monitor Correction for the Differential Die-away Technique of Historical Waste Drums

    International Nuclear Information System (INIS)

    Antoni, Rodolphe; Passard, Christian; Perot, Bertrand; Batifol, Marc; Vandamme, Jean-Christophe; Grassi, Gabriele

    2015-01-01

    The fissile mass in radioactive waste drums filled with compacted metallic residues (spent fuel hulls and nozzles) produced at the AREVA NC La Hague reprocessing plant is measured by neutron interrogation with the Differential Die-away measurement Technique (DDT). In the coming years, old hulls and nozzles mixed with ion-exchange resins will be measured. The ion-exchange resins increase neutron moderation in the matrix, compared to the waste measured in the current process. In this context, the Nuclear Measurement Laboratory (LMN) of CEA Cadarache has studied a matrix-effect correction method, based on a drum monitor, namely a 3He proportional counter located inside the measurement cavity. After feasibility studies performed with the LMN's PROMETHEE 6 laboratory measurement cell and with MCNPX simulations, this paper presents the first experimental tests performed on the industrial ACC (hulls and nozzles compaction facility) measurement system. A calculation vs. experiment benchmark has been carried out by performing dedicated calibration measurements with a representative drum and 235U samples. The comparison between calculation and experiment shows a satisfactory agreement for the drum monitor. The final objective of this work is to confirm the reliability of the modeling approach and the industrial feasibility of the method, which will be implemented on the industrial station for the measurement of historical wastes. (authors)

  9. Neutron activation analysis technique and X-ray fluorescence in bovine liver sample

    International Nuclear Information System (INIS)

    Maihara, V.A.; Favaro, D.I.T.; Vasconcellos, M.B.A.; Sato, I.M.; Salvador, V.L.

    2002-01-01

    Many analytical techniques have been used in food and diet analysis to determine a great number of nutritional elements, ranging from percentage levels down to ng g⁻¹, with high sensitivity and accuracy. Instrumental Neutron Activation Analysis (INAA) has been employed to certify many trace elements in biological reference materials. More recently, wavelength-dispersive X-ray fluorescence (WD-XRF) has also been used to determine some essential elements in food samples. INAA has been applied in nutrition studies in our laboratory at IPEN since the 1980s. For the development of analytical methodologies, the use of reference materials with the same characteristics as the analyzed sample is essential. Several Brazilian laboratories cannot use these materials due to their high cost. In this paper, preliminary results of commercial bovine liver sample analyses obtained by INAA and WD-XRF methods are presented. This sample was prepared to be a Brazilian candidate reference material for a group of laboratories participating in a research project sponsored by FAPESP. The concentrations of some elements like Cl, K, Na, P and S and the trace elements Br, Ca, Co, Cu, Fe, Mg, Mn, Mo, Rb, Se and Zn were determined by INAA and WD-XRF. To validate both techniques, the NIST SRM 1577b Bovine Liver reference material was analyzed and the detection limits were calculated. The concentrations of elements determined by both analytical techniques were compared using Student's t-test, and for Cl, Cu, Fe, K, Mg, Na, Rb and Zn the results show no statistical difference at the 95% significance level. (author)

  10. Beam-Based Nonlinear Optics Corrections in Colliders

    CERN Document Server

    Pilat, Fulvia Caterina; Malitsky, Nikolay; Ptitsyn, Vadim

    2005-01-01

    A method has been developed to measure and correct operationally the nonlinear effects of the final focusing magnets in colliders, which gives access to the effects of multipole errors by applying closed orbit bumps and analyzing the resulting tune and orbit shifts. This technique has been tested and used during 3 years of RHIC (the Relativistic Heavy Ion Collider at BNL) operations. I will discuss here the theoretical basis of the method, the experimental set-up, the correction results, the present understanding of the machine model, and the potential and limitations of the method itself as compared with other nonlinear correction techniques.

  11. Photobleaching correction in fluorescence microscopy images

    International Nuclear Information System (INIS)

    Vicente, Nathalie B; Diaz Zamboni, Javier E; Adur, Javier F; Paravani, Enrique V; Casco, Victor H

    2007-01-01

    Fluorophores are used to detect molecular expression by highly specific antigen-antibody reactions in fluorescence microscopy techniques. A portion of the fluorophore emits fluorescence when irradiated with electromagnetic waves of particular wavelengths, enabling its detection. Photobleaching irreversibly destroys fluorophores stimulated by radiation within the excitation spectrum, thus eliminating potentially useful information. Since this process may not be completely prevented, techniques have been developed to slow it down or to correct the resulting alterations (mainly, the decrease in fluorescence signal). In the present work, correction by means of the photobleaching curve was studied using E-cadherin (a cell-cell adhesion molecule) expression in Bufo arenarum embryos. Significant improvements were observed when applying this simple, inexpensive and fast technique
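
    A common form of photobleaching-curve correction is to fit a mono-exponential decay to the per-frame mean intensity and rescale each frame by the fitted decay. The sketch below assumes that simple model and uses synthetic data; it is not the authors' protocol:

```python
import numpy as np
from scipy.optimize import curve_fit

def bleaching_correction(stack):
    """Photobleaching correction sketch: fit a mono-exponential decay
    to the mean intensity of each frame of a (frames, y, x) stack and
    divide each frame by the fitted decay, so the corrected series has
    a constant expected level."""
    t = np.arange(stack.shape[0], dtype=float)
    means = stack.mean(axis=(1, 2))
    decay = lambda t, amp, k: amp * np.exp(-k * t)
    (amp, k), _ = curve_fit(decay, t, means, p0=(means[0], 0.01))
    corrected = stack / np.exp(-k * t)[:, None, None]
    return corrected, k

# Synthetic time-lapse: constant signal bleached at a known rate + noise.
rng = np.random.default_rng(2)
frames = 60
bleached = np.full((frames, 32, 32), 100.0)
bleached *= np.exp(-0.05 * np.arange(frames))[:, None, None]
bleached += rng.normal(0, 1.0, bleached.shape)
corrected, k_hat = bleaching_correction(bleached)
```

    Note that rescaling amplifies noise in late frames, which is why slowing bleaching down (rather than only correcting it afterwards) remains preferable when possible.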

  12. Improved mesh based photon sampling techniques for neutron activation analysis

    International Nuclear Information System (INIS)

    Relson, E.; Wilson, P. P. H.; Biondo, E. D.

    2013-01-01

    The design of fusion power systems requires analysis of neutron activation of large, complex volumes, and the resulting particles emitted from these volumes. Structured mesh-based discretization of these problems allows for improved modeling in these activation analysis problems. Finer discretization of these problems results in large computational costs, which drives the investigation of more efficient methods. Within an ad hoc subroutine of the Monte Carlo transport code MCNP, we implement sampling of voxels and photon energies for volumetric sources using the alias method. The alias method enables efficient sampling of a discrete probability distribution, and operates in O(1) time, whereas the simpler direct discrete method requires O(log(n)) time. By using the alias method, voxel sampling becomes a viable alternative to sampling space with the O(1) approach of uniformly sampling the problem volume. Additionally, with voxel sampling it is straightforward to introduce biasing of volumetric sources, and we implement this biasing of voxels as an additional variance reduction technique that can be applied. We verify our implementation and compare the alias method, with and without biasing, to direct discrete sampling of voxels, and to uniform sampling. We study the behavior of source biasing in a second set of tests and find trends between improvements and source shape, material, and material density. Overall, however, the magnitude of improvements from source biasing appears to be limited. Future work will benefit from the implementation of efficient voxel sampling - particularly with conformal unstructured meshes where the uniform sampling approach cannot be applied. (authors)
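
    The alias method itself is compact to implement. The sketch below uses Vose's variant of the table construction and is generic — it is not tied to the MCNP subroutine described in the abstract:

```python
import random

def build_alias(probs):
    """Construct alias tables for O(1) sampling from a discrete pdf
    (Vose's method): each of the n slots holds a threshold probability
    and an alias index to fall back on."""
    n = len(probs)
    prob, alias = [0.0] * n, [0] * n
    scaled = [p * n for p in probs]
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]          # donate mass to fill slot s
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                   # leftovers equal 1 up to rounding
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias, rng):
    # O(1) per draw: one uniform slot choice plus one biased coin flip,
    # versus O(log n) for a binary search over the CDF.
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]

rng = random.Random(3)
weights = [0.1, 0.2, 0.3, 0.4]               # e.g. per-voxel source strengths
prob, alias = build_alias(weights)
counts = [0] * len(weights)
for _ in range(100000):
    counts[alias_sample(prob, alias, rng)] += 1
```

    The one-time O(n) table build is amortized over the millions of source samples drawn in a transport calculation, which is what makes per-voxel sampling competitive with uniform volume sampling.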

  13. Large-volume constant-concentration sampling technique coupling with surface-enhanced Raman spectroscopy for rapid on-site gas analysis.

    Science.gov (United States)

    Zhang, Zhuomin; Zhan, Yisen; Huang, Yichun; Li, Gongke

    2017-08-05

    In this work, a portable large-volume constant-concentration (LVCC) sampling technique coupling with surface-enhanced Raman spectroscopy (SERS) was developed for rapid on-site gas analysis based on suitable derivatization methods. The LVCC sampling technique mainly consisted of a specially designed sampling cell, including a rigid sample container and a flexible sampling bag, and an absorption-derivatization module with a portable pump and a gas flowmeter. The LVCC sampling technique allowed large, alterable and well-controlled sampling volumes, which kept the concentration of the gas target in the headspace phase constant during the entire sampling process and made the sampling result more representative. Moreover, absorption and derivatization of the gas target during the LVCC sampling process were efficiently merged in one step using bromine-thiourea and OPA-NH4+ strategies for ethylene and SO2, respectively, which made the LVCC sampling technique conveniently adapted to subsequent SERS analysis. Finally, a new LVCC sampling-SERS method was developed and successfully applied for rapid analysis of trace ethylene and SO2 from fruits. Satisfyingly, trace ethylene and SO2 from real fruit samples could be actually and accurately quantified by this method. SERS also confirmed that the concentration fluctuations of ethylene and SO2 from real samples remained minor during the entire LVCC sampling process. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Method of absorbance correction in a spectroscopic heating value sensor

    Science.gov (United States)

    Saveliev, Alexei; Jangale, Vilas Vyankatrao; Zelepouga, Sergeui; Pratapas, John

    2013-09-17

    A method and apparatus for absorbance correction in a spectroscopic heating value sensor in which a reference light intensity measurement is made on a non-absorbing reference fluid, a light intensity measurement is made on a sample fluid, and a measured light absorbance of the sample fluid is determined. A corrective light intensity measurement at a non-absorbing wavelength of the sample fluid is made on the sample fluid from which an absorbance correction factor is determined. The absorbance correction factor is then applied to the measured light absorbance of the sample fluid to arrive at a true or accurate absorbance for the sample fluid.
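
    One plausible reading of this correction scheme — subtracting the apparent absorbance measured at a wavelength where the sample fluid does not absorb, to remove broadband losses such as window fouling or scattering — can be expressed as follows. The numbers are invented and the patent's actual correction factor may differ:

```python
import math

def corrected_absorbance(i_ref, i_sample, i_ref_na, i_sample_na):
    """Absorbance-correction sketch: absorbance at the analyte wavelength
    (relative to a non-absorbing reference fluid) minus the apparent
    absorbance at a non-absorbing wavelength of the sample fluid."""
    a_measured = -math.log10(i_sample / i_ref)        # analyte wavelength
    a_baseline = -math.log10(i_sample_na / i_ref_na)  # non-absorbing wavelength
    return a_measured - a_baseline

# Illustrative intensities: a 10% broadband loss affects both wavelengths,
# while true absorption halves the transmitted intensity (0.45 = 0.9 * 0.5).
a_true = corrected_absorbance(1000.0, 450.0, 1000.0, 900.0)
```

    With these numbers the raw absorbance of ~0.347 is corrected to log10(2) ≈ 0.301, the value attributable to absorption alone.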

  15. Application of digital sampling techniques to particle identification in scintillation detectors

    International Nuclear Information System (INIS)

    Bardelli, L.; Bini, M.; Poggi, G.; Taccetti, N.

    2002-01-01

    In this paper, the use of a fast digitizing system for identification of fast charged particles with scintillation detectors is discussed. The three-layer phoswich detectors developed in the framework of the FIASCO experiment for the detection of light charged particles (LCP) and intermediate mass fragments (IMF) emitted in heavy-ion collisions at Fermi energies are briefly discussed. The standard analog electronics treatment of the signals for particle identification is illustrated. After a description of the digitizer designed to perform a fast digital sampling of the phoswich signals, the feasibility of particle identification on the sampled data is demonstrated. The results obtained with two different pulse shape discrimination analyses based on the digitally sampled data are compared with the standard analog signal treatment. The obtained results suggest, for the present application, the replacement of the analog methods with the digital sampling technique
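
    A standard pulse-shape discrimination analysis on digitally sampled waveforms is the charge-comparison method: the ratio of the tail integral to the total integral separates scintillation pulses with different decay components. The sketch below uses synthetic pulses and illustrative integration windows, not the FIASCO phoswich parameters:

```python
import numpy as np

def psd_ratio(waveform, tail_start=20, total=150):
    """Charge-comparison PSD on a sampled pulse: integrate the tail
    (from tail_start samples after the peak) and the full pulse window,
    and return their ratio. Slow scintillation components raise it."""
    w = np.asarray(waveform, dtype=float)
    peak = int(np.argmax(w))
    total_q = w[peak:peak + total].sum()
    tail_q = w[peak + tail_start:peak + total].sum()
    return tail_q / total_q

# Two synthetic pulses (arbitrary units, one sample per time step):
# a pure fast decay versus a fast component plus a slow component.
t = np.arange(200, dtype=float)
fast = np.exp(-t / 10.0)
mixed = 0.7 * np.exp(-t / 10.0) + 0.3 * np.exp(-t / 80.0)
r_fast, r_mixed = psd_ratio(fast), psd_ratio(mixed)
```

    On real digitized data the two particle species populate distinct bands in a tail-ratio versus total-charge plot, reproducing digitally what analog charge-integration electronics does with two gated QDC channels.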

  16. Uranium content measurement in drinking water samples using track etch technique

    International Nuclear Information System (INIS)

    Kumar, Mukesh; Kumar, Ajay; Singh, Surinder; Mahajan, R.K.; Walia, T.P.S.

    2003-01-01

    The concentration of uranium has been assessed in drinking water samples collected from different locations in Bathinda district, Punjab, India. The water samples are taken from hand pumps and tube wells. Uranium is determined using fission track technique. Uranium concentration in the water samples varies from 1.65±0.06 to 74.98±0.38 μg/l. These values are compared with safe limit values recommended for drinking water. Most of the water samples are found to have uranium concentration above the safe limit. Analysis of some heavy metals (Zn, Cd, Pb and Cu) in water is also done in order to see if some correlation exists between the concentration of uranium and these heavy metals. A weak positive correlation has been observed between the concentration of uranium and heavy metals of Pb, Cd and Cu

  17. Attempts to develop a new nuclear measurement technique of β-glucuronidase levels in biological samples

    International Nuclear Information System (INIS)

    Unak, T.; Avcibasi, U.; Yildirim, Y.; Cetinkaya, B.

    2003-01-01

    β-Glucuronidase is one of the most important hydrolytic enzymes in living systems and plays an essential role in the detoxification pathway of toxic materials incorporated into the metabolism. Some organs, especially the liver and some tumour tissues, have high levels of β-glucuronidase activity. As a result of the enzymatic activity of some kinds of tumour cells, the radiolabelled glucuronide conjugates of cytotoxic, as well as radiotoxic, compounds have potentially very valuable diagnostic and therapeutic applications in cancer research. For this reason, a sensitive measurement of β-glucuronidase levels in normal and tumour tissues is a very important step for these kinds of applications. In the classical measurement method of β-glucuronidase activity, the quantity of phenolphthalein liberated from its glucuronide conjugate, i.e. phenolphthalein-glucuronide, by β-glucuronidase has generally been measured by the spectrophotometric technique. The lower detection limit of phenolphthalein by the spectrophotometric technique is about 1-3 mg. This means that β-glucuronidase levels could not be detected in biological samples having lower levels of β-glucuronidase activity, and therefore the applications of the spectrophotometric technique in cancer research are very seriously limited. Starting from this consideration, we recently attempted to develop a new nuclear technique to measure much lower concentrations of β-glucuronidase in biological samples. To improve the detection limit, phenolphthalein-glucuronide and also phenyl-N-glucuronide were radioiodinated with 131I and their radioactivity was measured by use of the counting technique. Therefore, the quantity of phenolphthalein or aniline radioiodinated with 131I and liberated by the deglucuronidation activity of β-glucuronidase was used in an attempt to measure levels lower than those accessible to the spectrophotometric technique. The results obtained clearly verified that 0.01 pg level of

  18. Corrective Action Investigation Plan for Corrective Action Unit 190: Contaminated Waste Sites Nevada Test Site, Nevada, Rev. No.: 0

    International Nuclear Information System (INIS)

    Wickline, Alfred

    2006-01-01

    Corrective Action Unit (CAU) 190 is located in Areas 11 and 14 of the Nevada Test Site, which is 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit 190 is comprised of the four corrective action sites (CASs) listed below: (1) 11-02-01, Underground Centrifuge; (2) 11-02-02, Drain Lines and Outfall; (3) 11-59-01, Tweezer Facility Septic System; and (4) 14-23-01, LTU-6 Test Area. These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. Additional information will be obtained before evaluating corrective action alternatives and selecting the appropriate corrective action for each CAS by conducting a corrective action investigation (CAI). The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on August 24, 2006, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office; Stoller-Navarro Joint Venture; and National Security Technologies, LLC. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 190. 
The scope of the CAU 190 CAI includes the following activities: (1) Move surface debris and/or materials, as needed, to facilitate sampling; (2) Conduct radiological and geophysical surveys; (3) Perform field screening; (4) Collect and submit environmental samples for laboratory analysis to determine whether contaminants of concern (COCs) are present; (5) If COCs are present, collect additional step-out samples to define the lateral and vertical extent of the contamination; (6) Collect samples of source material, if present

  19. Bias-corrected estimation in potentially mildly explosive autoregressive models

    DEFF Research Database (Denmark)

    Kaufmann, Hendrik; Kruse, Robinson

    This paper provides a comprehensive Monte Carlo comparison of different finite-sample bias-correction methods for autoregressive processes. We consider classic situations where the process is either stationary or exhibits a unit root. Importantly, the case of mildly explosive behaviour is studied... that the indirect inference approach offers a valuable alternative to other existing techniques. Its performance (measured by its bias and root mean squared error) is balanced and highly competitive across many different settings. A clear advantage is its applicability for mildly explosive processes. In an empirical...
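
The paper's own method is indirect inference; as a simpler illustration of the finite-sample bias problem it addresses, the classic first-order analytical correction for a stationary AR(1) with intercept (Kendall's approximation, E[rho_hat] ≈ rho - (1 + 3*rho)/T) can be sketched as follows. All numbers are illustrative, not from the paper:

```python
import numpy as np

def ar1_ols(y):
    """OLS estimate of rho in y_t = c + rho*y_{t-1} + e_t (demeaned form)."""
    x, z = y[:-1], y[1:]
    x = x - x.mean()
    z = z - z.mean()
    return float(x @ z) / float(x @ x)

def ar1_bias_corrected(y):
    """Kendall's first-order correction: E[rho_hat] ~ rho - (1 + 3*rho)/T,
    so add (1 + 3*rho_hat)/T back to the raw estimate."""
    T = len(y) - 1
    r = ar1_ols(y)
    return r + (1.0 + 3.0 * r) / T

# Small Monte Carlo check of the correction on a stationary AR(1).
rng = np.random.default_rng(0)
rho, T, reps = 0.9, 100, 2000
raw, corr = [], []
for _ in range(reps):
    y = np.empty(T + 1)
    y[0] = rng.standard_normal() / np.sqrt(1.0 - rho**2)  # stationary start
    for t in range(1, T + 1):
        y[t] = rho * y[t - 1] + rng.standard_normal()
    raw.append(ar1_ols(y))
    corr.append(ar1_bias_corrected(y))
raw_mean, corr_mean = np.mean(raw), np.mean(corr)
```

Unlike this analytical formula, the indirect-inference approach compared in the paper remains valid in the mildly explosive region, where no simple closed-form bias expression is available.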

  20. Corrective Action Investigation Plan for Corrective Action Unit 137: Waste Disposal Sites, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    Wickline, Alfred

    2005-01-01

    This Corrective Action Investigation Plan (CAIP) contains project-specific information including facility descriptions, environmental sample collection objectives, and criteria for conducting site investigation activities at Corrective Action Unit (CAU) 137: Waste Disposal Sites. This CAIP has been developed in accordance with the ''Federal Facility Agreement and Consent Order'' (FFACO) (1996) that was agreed to by the State of Nevada, the U.S. Department of Energy (DOE), and the U.S. Department of Defense. Corrective Action Unit 137 contains sites that are located in Areas 1, 3, 7, 9, and 12 of the Nevada Test Site (NTS), which is approximately 65 miles (mi) northwest of Las Vegas, Nevada (Figure 1-1). Corrective Action Unit 137 is comprised of the eight corrective action sites (CASs) shown on Figure 1-1 and listed below: (1) CAS 01-08-01, Waste Disposal Site; (2) CAS 03-23-01, Waste Disposal Site; (3) CAS 03-23-07, Radioactive Waste Disposal Site; (4) CAS 03-99-15, Waste Disposal Site; (5) CAS 07-23-02, Radioactive Waste Disposal Site; (6) CAS 09-23-07, Radioactive Waste Disposal Site; (7) CAS 12-08-01, Waste Disposal Site; and (8) CAS 12-23-07, Waste Disposal Site. The Corrective Action Investigation (CAI) will include field inspections, radiological surveys, geophysical surveys, sampling of environmental media, analysis of samples, and assessment of investigation results, where appropriate. Data will be obtained to support corrective action alternative evaluations and waste management decisions. The CASs in CAU 137 are being investigated because hazardous and/or radioactive constituents may be present in concentrations that could potentially pose a threat to human health and the environment. Existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives for the CASs. Additional information will be generated by conducting a CAI before evaluating and selecting corrective action

  1. Education on Correct Inhaler Technique in Pharmacy Schools ...

    African Journals Online (AJOL)

    Purpose: To investigate the effectiveness of a standard educational module on pharmacy students' inhaler technique .... found in the market next to a checklist showing its technique steps). ... educational strategies in this area. To ensure.

  2. Calibration and correction of LA-ICP-MS and LA-MC-ICP-MS analyses for element contents and isotopic ratios

    Directory of Open Access Journals (Sweden)

    Jie Lin

    2016-06-01

    Full Text Available LA-ICP-MS and LA-MC-ICP-MS have become the techniques of choice for obtaining accurate and precise element contents and isotopic ratios: these state-of-the-art techniques combine low detection limits with high spatial resolution. However, analytical accuracy and precision are restricted by many factors, such as sensitivity drift, elemental/isotopic fractionation, matrix effects, interferences, and the lack of sufficiently matrix-matched reference materials. Rigorous and suitable calibration and correction methods are therefore needed to obtain quantitative data. This review systematically summarizes and evaluates interference-correction, quantitative-calculation, and sensitivity-correction strategies in order to provide analysts with calibration and correction strategies suited to their sample types and the elements analyzed. The functions and features of the data reduction software ICPMSDataCal, which provides real-time, on-line reduction of element content and isotopic ratio data acquired by LA-ICP-MS and LA-MC-ICP-MS, are also outlined.

  3. Solving for the Surface: An Automated Approach to THEMIS Atmospheric Correction

    Science.gov (United States)

    Ryan, A. J.; Salvatore, M. R.; Smith, R.; Edwards, C. S.; Christensen, P. R.

    2013-12-01

    Here we present the initial results of an automated atmospheric correction algorithm for the Thermal Emission Imaging System (THEMIS) instrument, whereby high spectral resolution Thermal Emission Spectrometer (TES) data are queried to generate numerous atmospheric opacity values for each THEMIS infrared image. While the pioneering methods of Bandfield et al. [2004] also used TES spectra to atmospherically correct THEMIS data, the algorithm presented here is a significant improvement because of the reduced dependency on user-defined inputs for individual images. Additionally, this technique is particularly useful for correcting THEMIS images that have captured a range of atmospheric conditions and/or surface elevations, issues that have been difficult to correct for using previous techniques. Thermal infrared observations of the Martian surface can be used to determine the spatial distribution and relative abundance of many common rock-forming minerals. This information is essential to understanding the planet's geologic and climatic history. However, the Martian atmosphere also has absorptions in the thermal infrared which complicate the interpretation of infrared measurements obtained from orbit. TES has sufficient spectral resolution (143 bands at 10 cm⁻¹ sampling) to linearly unmix and remove atmospheric spectral end-members from the acquired spectra. THEMIS has the benefit of higher spatial resolution (~100 m/pixel vs. 3x5 km/TES-pixel) but has lower spectral resolution (8 surface sensitive spectral bands). As such, it is not possible to isolate the surface component by unmixing the atmospheric contribution from the THEMIS spectra, as is done with TES. Bandfield et al. [2004] developed a technique using atmospherically corrected TES spectra as tie-points for constant radiance offset correction and surface emissivity retrieval. This technique is the primary method used to correct THEMIS but is highly susceptible to inconsistent results if great care in the

  4. Source distribution dependent scatter correction for PVI

    International Nuclear Information System (INIS)

    Barney, J.S.; Harrop, R.; Dykstra, C.J.

    1993-01-01

    Source distribution dependent scatter correction methods which incorporate different amounts of information about the source position and material distribution have been developed and tested. The techniques use image to projection integral transformation incorporating varying degrees of information on the distribution of scattering material, or convolution subtraction methods, with some information about the scattering material included in one of the convolution methods. To test the techniques, the authors apply them to data generated by Monte Carlo simulations which use geometric shapes or a voxelized density map to model the scattering material. Source position and material distribution have been found to have some effect on scatter correction. An image to projection method which incorporates a density map produces accurate scatter correction but is computationally expensive. Simpler methods, both image to projection and convolution, can also provide effective scatter correction

  5. Establishment of an open database of realistic simulated data for evaluation of partial volume correction techniques in brain PET/MR

    Energy Technology Data Exchange (ETDEWEB)

    Mota, Ana [Instituto de Biofísica e Engenharia Biomédica, FC-UL, Lisboa (Portugal); Institute of Nuclear Medicine, UCL, London (United Kingdom); Cuplov, Vesna [Instituto de Biofísica e Engenharia Biomédica, FC-UL, Lisboa (Portugal); Schott, Jonathan; Hutton, Brian; Thielemans, Kris [Institute of Nuclear Medicine, UCL, London (United Kingdom); Drobnjak, Ivana [Centre of Medical Image Computing, UCL, London (United Kingdom); Dickson, John [Institute of Nuclear Medicine, UCL, London (United Kingdom); Bert, Julien [INSERM UMR1101, LaTIM, CHRU de Brest, Brest (France); Burgos, Ninon; Cardoso, Jorge; Modat, Marc; Ourselin, Sebastien [Centre of Medical Image Computing, UCL, London (United Kingdom); Erlandsson, Kjell [Institute of Nuclear Medicine, UCL, London (United Kingdom)

    2015-05-18

    The Partial Volume (PV) effect in Positron Emission Tomography (PET) imaging leads to a loss of quantification accuracy: small objects occupy only part of the sensitive volume of the imaging instrument, resulting in blurred images. Simultaneous acquisition of PET and Magnetic Resonance Imaging (MRI) produces concurrent metabolic and anatomical information, and the latter has proved very helpful for the correction of PV effects. Several techniques are currently used for PV correction; they can be applied directly during the reconstruction process or as a post-processing step after image reconstruction. In order to evaluate the efficacy of the different PV correction techniques in brain PET, we are constructing a database of simulated data. Here we present the framework and steps involved in constructing this database. Static 18F-FDG epilepsy and 18F-Florbetapir amyloid dementia PET/MR studies were selected because of their very different characteristics. The methodology followed four main steps: image pre-processing, Ground Truth (GT) generation, MRI and PET data simulation, and reconstruction. All steps used Open Source software and can therefore be repeated at any centre. The framework as well as the database will be freely accessible. Tools used included GIF, FSL, POSSUM, GATE and STIR. The final data obtained after simulation, comprising raw or reconstructed PET data together with corresponding MRI datasets, were close to the original patient data, with the added advantage that the data can be compared with the GT. We indicate several parameters that can be improved and optimized.

  6. Establishment of an open database of realistic simulated data for evaluation of partial volume correction techniques in brain PET/MR

    International Nuclear Information System (INIS)

    Mota, Ana; Cuplov, Vesna; Schott, Jonathan; Hutton, Brian; Thielemans, Kris; Drobnjak, Ivana; Dickson, John; Bert, Julien; Burgos, Ninon; Cardoso, Jorge; Modat, Marc; Ourselin, Sebastien; Erlandsson, Kjell

    2015-01-01

    The Partial Volume (PV) effect in Positron Emission Tomography (PET) imaging leads to a loss of quantification accuracy: small objects occupy only part of the sensitive volume of the imaging instrument, resulting in blurred images. Simultaneous acquisition of PET and Magnetic Resonance Imaging (MRI) produces concurrent metabolic and anatomical information, and the latter has proved very helpful for the correction of PV effects. Several techniques are currently used for PV correction; they can be applied directly during the reconstruction process or as a post-processing step after image reconstruction. In order to evaluate the efficacy of the different PV correction techniques in brain PET, we are constructing a database of simulated data. Here we present the framework and steps involved in constructing this database. Static 18F-FDG epilepsy and 18F-Florbetapir amyloid dementia PET/MR studies were selected because of their very different characteristics. The methodology followed four main steps: image pre-processing, Ground Truth (GT) generation, MRI and PET data simulation, and reconstruction. All steps used Open Source software and can therefore be repeated at any centre. The framework as well as the database will be freely accessible. Tools used included GIF, FSL, POSSUM, GATE and STIR. The final data obtained after simulation, comprising raw or reconstructed PET data together with corresponding MRI datasets, were close to the original patient data, with the added advantage that the data can be compared with the GT. We indicate several parameters that can be improved and optimized.

  7. A new analysis technique for microsamples

    International Nuclear Information System (INIS)

    Boyer, R.; Journoux, J.P.; Duval, C.

    1989-01-01

    For many decades, isotopic analysis of uranium or plutonium has been performed by mass spectrometry. The most recent analytical techniques, using the counting method or a plasma torch combined with a mass spectrometer (ICP-MS), have yet to reach a greater degree of precision than the older methods in this field. The two means of ionization for isotopic analysis, by electron bombardment of atoms or molecules (gas ion source) and by thermal effect (thermionic source), are compared, revealing an inconsistency between the quantity of sample necessary for analysis and the luminosity. In fact, the quantity of sample necessary for the gas-source mass spectrometer is 10 to 20 times greater than that for the thermal-ionization spectrometer, while the sample consumption is 10⁵ to 10⁶ times greater. This proves that almost the entire sample is not needed for the measurement itself; it is required only by the introduction system of the gas-source spectrometer. The new analysis technique referred to as ''microfluorination'' corrects this anomaly and exploits the advantages of the electron-bombardment method of ionization.

  8. Direct sampling technique of bees on Vriesea philippocoburgii (Bromeliaceae, Tillandsioideae flowers

    Directory of Open Access Journals (Sweden)

    Afonso Inácio Orth

    2004-11-01

    Full Text Available In our study of Vriesea philippocoburgii Wawra pollination, because only a small proportion of flowers are in anthesis on any single day and netting directly on flowers damages the inflorescences, we used the direct sampling technique (DST) of bees on flowers. This technique was applied to 40 flowering plants and resulted in the capture of 160 specimens, belonging to nine genera of Apoidea and separated into 19 morphospecies. As DST maintains the integrity of flowers for later bee visits, it can enhance survey performance and constitutes an alternative methodology for collecting bees that visit flowering plants.

  9. PIXE–PIGE analysis of size-segregated aerosol samples from remote areas

    Energy Technology Data Exchange (ETDEWEB)

    Calzolai, G., E-mail: calzolai@fi.infn.it [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Chiari, M.; Lucarelli, F.; Nava, S.; Taccetti, F. [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Becagli, S.; Frosini, D.; Traversi, R.; Udisti, R. [Department of Chemistry, University of Florence, Via della Lastruccia 3, 50019 Sesto Fiorentino (Italy)

    2014-01-01

    The chemical characterization of size-segregated samples is helpful to study the aerosol effects on both human health and environment. The sampling with multi-stage cascade impactors (e.g., Small Deposit area Impactor, SDI) produces inhomogeneous samples, with a multi-spot geometry and a non-negligible particle stratification. At LABEC (Laboratory of nuclear techniques for the Environment and the Cultural Heritage), an external beam line is fully dedicated to PIXE–PIGE analysis of aerosol samples. PIGE is routinely used as a sidekick of PIXE to correct the underestimation of PIXE in quantifying the concentration of the lightest detectable elements, like Na or Al, due to X-ray absorption inside the individual aerosol particles. In this work PIGE has been used to study proper attenuation correction factors for SDI samples: relevant attenuation effects have been observed also for stages collecting smaller particles, and consequent implications on the retrieved aerosol modal structure have been evidenced.
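
The correction described above amounts to scaling each PIXE result by a factor derived from the corresponding PIGE measurement, which is essentially free of in-particle X-ray attenuation. A minimal numerical sketch, with all concentrations invented for illustration (not taken from the paper):

```python
# Hypothetical Na concentrations quantified on the same SDI stages by
# PIXE (X-rays, attenuated inside the aerosol particles) and by PIGE
# (gamma rays, effectively unattenuated).
pixe_na = [41.0, 55.0, 63.0]   # ng/m^3, underestimated by PIXE
pige_na = [58.0, 71.0, 74.0]   # ng/m^3, taken as the reference value

# Per-stage attenuation correction factor: PIGE reference over PIXE value.
factors = [g / x for g, x in zip(pige_na, pixe_na)]
mean_factor = sum(factors) / len(factors)

# Apply the mean factor to correct a new PIXE Na result from a
# comparable stage where no PIGE measurement is available.
new_pixe_na = 48.0
corrected_na = new_pixe_na * mean_factor
```

In practice the factor is energy- and size-dependent, which is why the abstract reports relevant attenuation effects even on the stages that collect the smaller particles.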

  10. Towards quantitative PET/MRI: a review of MR-based attenuation correction techniques.

    Science.gov (United States)

    Hofmann, Matthias; Pichler, Bernd; Schölkopf, Bernhard; Beyer, Thomas

    2009-03-01

    Positron emission tomography (PET) is a fully quantitative technology for imaging metabolic pathways and dynamic processes in vivo. Attenuation correction of raw PET data is a prerequisite for quantification and is typically based on separate transmission measurements. In PET/CT, however, attenuation correction is performed routinely based on the available CT transmission data. Recently, combined PET/magnetic resonance (MR) has been proposed as a viable alternative to PET/CT. Current concepts of PET/MRI do not include CT-like transmission sources and, therefore, alternative methods of PET attenuation correction must be found. This article reviews existing approaches to MR-based attenuation correction (MR-AC). Most groups have proposed MR-AC algorithms for brain PET studies and more recently also for torso PET/MR imaging. Most MR-AC strategies require the use of complementary MR and transmission images, or morphology templates generated from transmission images. We review and discuss these algorithms and point out challenges for using MR-AC in clinical routine. MR-AC is work-in-progress with potentially promising results from a template-based approach applicable to both brain and torso imaging. While efforts are ongoing in making clinically viable MR-AC fully automatic, further studies are required to realize the potential benefits of MR-based motion compensation and partial volume correction of the PET data.

  11. Towards quantitative PET/MRI: a review of MR-based attenuation correction techniques

    International Nuclear Information System (INIS)

    Hofmann, Matthias; Pichler, Bernd; Schoelkopf, Bernhard; Beyer, Thomas

    2009-01-01

    Positron emission tomography (PET) is a fully quantitative technology for imaging metabolic pathways and dynamic processes in vivo. Attenuation correction of raw PET data is a prerequisite for quantification and is typically based on separate transmission measurements. In PET/CT, however, attenuation correction is performed routinely based on the available CT transmission data. Recently, combined PET/magnetic resonance (MR) has been proposed as a viable alternative to PET/CT. Current concepts of PET/MRI do not include CT-like transmission sources and, therefore, alternative methods of PET attenuation correction must be found. This article reviews existing approaches to MR-based attenuation correction (MR-AC). Most groups have proposed MR-AC algorithms for brain PET studies and more recently also for torso PET/MR imaging. Most MR-AC strategies require the use of complementary MR and transmission images, or morphology templates generated from transmission images. We review and discuss these algorithms and point out challenges for using MR-AC in clinical routine. MR-AC is work-in-progress with potentially promising results from a template-based approach applicable to both brain and torso imaging. While efforts are ongoing in making clinically viable MR-AC fully automatic, further studies are required to realize the potential benefits of MR-based motion compensation and partial volume correction of the PET data. (orig.)

  12. Towards quantitative PET/MRI: a review of MR-based attenuation correction techniques

    Energy Technology Data Exchange (ETDEWEB)

    Hofmann, Matthias [Max Planck Institute for Biological Cybernetics, Tuebingen (Germany); University of Tuebingen, Laboratory for Preclinical Imaging and Imaging Technology of the Werner Siemens-Foundation, Department of Radiology, Tuebingen (Germany); University of Oxford, Wolfson Medical Vision Laboratory, Department of Engineering Science, Oxford (United Kingdom); Pichler, Bernd [University of Tuebingen, Laboratory for Preclinical Imaging and Imaging Technology of the Werner Siemens-Foundation, Department of Radiology, Tuebingen (Germany); Schoelkopf, Bernhard [Max Planck Institute for Biological Cybernetics, Tuebingen (Germany); Beyer, Thomas [University Hospital Duisburg-Essen, Department of Nuclear Medicine, Essen (Germany); Cmi-Experts GmbH, Zurich (Switzerland)

    2009-03-15

    Positron emission tomography (PET) is a fully quantitative technology for imaging metabolic pathways and dynamic processes in vivo. Attenuation correction of raw PET data is a prerequisite for quantification and is typically based on separate transmission measurements. In PET/CT, however, attenuation correction is performed routinely based on the available CT transmission data. Recently, combined PET/magnetic resonance (MR) has been proposed as a viable alternative to PET/CT. Current concepts of PET/MRI do not include CT-like transmission sources and, therefore, alternative methods of PET attenuation correction must be found. This article reviews existing approaches to MR-based attenuation correction (MR-AC). Most groups have proposed MR-AC algorithms for brain PET studies and more recently also for torso PET/MR imaging. Most MR-AC strategies require the use of complementary MR and transmission images, or morphology templates generated from transmission images. We review and discuss these algorithms and point out challenges for using MR-AC in clinical routine. MR-AC is work-in-progress with potentially promising results from a template-based approach applicable to both brain and torso imaging. While efforts are ongoing in making clinically viable MR-AC fully automatic, further studies are required to realize the potential benefits of MR-based motion compensation and partial volume correction of the PET data. (orig.)

  13. Large Sample Neutron Activation Analysis: A Challenge in Cultural Heritage Studies

    International Nuclear Information System (INIS)

    Stamatelatos, I.E.; Tzika, F.

    2007-01-01

    Large-sample neutron activation analysis complements and significantly extends the analytical tools available for cultural heritage and authentication studies, providing unique capabilities: non-destructive, multi-element analysis of materials that are too precious to damage for sampling purposes, representative sampling of heterogeneous materials, or even analysis of whole objects. In this work, correction factors for neutron self-shielding, gamma-ray attenuation and volume distribution of the activity in large-volume samples composed of iron and ceramic material were derived. Moreover, the effect of inhomogeneity on the accuracy of the technique was examined.
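
Correction factors of this kind act multiplicatively on the measured count rate: the rate an ideal small, transparent sample would have produced is recovered by dividing by their product. A schematic example with invented numbers (none of these values come from the paper):

```python
# Hypothetical correction factors for one element in a large iron/ceramic
# sample (values invented for illustration).
f_self_shielding = 0.82   # neutron flux depression inside the sample
f_gamma_atten    = 0.74   # fraction of gamma rays escaping the sample
f_volume         = 0.95   # efficiency factor for the activity distribution

measured_rate = 1500.0    # counts/s at the detector

# Undo the three losses to recover the unperturbed count rate, which is
# then compared against a standard to obtain the element concentration.
corrected_rate = measured_rate / (f_self_shielding * f_gamma_atten * f_volume)
```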

  14. Applicability of Current Atmospheric Correction Techniques in the Red Sea

    KAUST Repository

    Tiwari, Surya Prakash

    2016-10-26

    Much of the Red Sea is considered a typical oligotrophic sea with very low chlorophyll-a concentrations, and few existing studies describe the variability of phytoplankton biomass there. This study evaluates the chlorophyll-a values computed with different chlorophyll algorithms (e.g., Chl_OCI, Chl_Carder, Chl_GSM, and Chl_GIOP) using radiances derived from two different atmospheric correction algorithms (the NASA standard and Singh and Shanmugam (2014)). The resulting satellite-derived chlorophyll-a concentrations are compared with in situ chlorophyll values measured using High-Performance Liquid Chromatography (HPLC). Statistical analyses based on the in situ measurements obtained in the Red Sea are used to assess the performance of the algorithms and to evaluate the approach to atmospheric correction and algorithm parameterization.

  15. Applicability of Current Atmospheric Correction Techniques in the Red Sea

    KAUST Repository

    Tiwari, Surya Prakash; Ouhssain, Mustapha; Jones, Burton

    2016-01-01

    Much of the Red Sea is considered a typical oligotrophic sea with very low chlorophyll-a concentrations, and few existing studies describe the variability of phytoplankton biomass there. This study evaluates the chlorophyll-a values computed with different chlorophyll algorithms (e.g., Chl_OCI, Chl_Carder, Chl_GSM, and Chl_GIOP) using radiances derived from two different atmospheric correction algorithms (the NASA standard and Singh and Shanmugam (2014)). The resulting satellite-derived chlorophyll-a concentrations are compared with in situ chlorophyll values measured using High-Performance Liquid Chromatography (HPLC). Statistical analyses based on the in situ measurements obtained in the Red Sea are used to assess the performance of the algorithms and to evaluate the approach to atmospheric correction and algorithm parameterization.

  16. An Importance Sampling Simulation Method for Bayesian Decision Feedback Equalizers

    OpenAIRE

    Chen, S.; Hanzo, L.

    2000-01-01

    An importance sampling (IS) simulation technique is presented for evaluating the lower-bound bit error rate (BER) of the Bayesian decision feedback equalizer (DFE) under the assumption of correct decisions being fed back. A design procedure is developed, which chooses appropriate bias vectors for the simulation density to ensure asymptotic efficiency of the IS simulation.
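
The key mechanics of importance sampling for rare error events can be sketched for the simplest possible case: a scalar Gaussian decision variable with a fixed threshold. The Bayesian DFE setting of the paper is far more elaborate (its bias vectors shift the simulation density toward the decision boundaries of the Bayesian equalizer), so everything below is an illustrative assumption, not the paper's procedure:

```python
import math
import random

def tail_prob_is(threshold, n=20000, seed=1):
    """Importance-sampling estimate of P(N(0,1) > threshold).

    Samples are drawn from the biased density N(threshold, 1), i.e. the
    mean is shifted onto the error boundary so that roughly half of the
    draws are error events. Each error event is weighted by the
    likelihood ratio f(x)/g(x) = exp(-threshold*x + threshold^2/2)
    between the true and biased densities.
    """
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)
        if x > threshold:  # error event under the biased density
            acc += math.exp(-threshold * x + 0.5 * threshold * threshold)
    return acc / n

t = 4.0
est = tail_prob_is(t)
exact = 0.5 * math.erfc(t / math.sqrt(2.0))  # Gaussian Q-function, Q(4)
```

Direct Monte Carlo would need on the order of 10⁷ samples to see even a handful of errors at this probability (~3e-5); the mean-shifted estimator reaches a few percent relative error with 2x10⁴ draws, which is the asymptotic-efficiency property the abstract refers to.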

  17. Identification of unknown sample using NAA, EDXRF, XRD techniques

    International Nuclear Information System (INIS)

    Dalvi, Aditi A.; Swain, K.K.; Chavan, Trupti; Remya Devi, P.S.; Wagh, D.N.; Verma, R.

    2015-01-01

    Analytical Chemistry Division (ACD), Bhabha Atomic Research Centre (BARC) receives samples from law enforcement agencies such as Directorate of Revenue Intelligence, Customs for analysis. Five unknown grey powdered samples were received for identification and were suspected to be Iridium (Ir). Identification of unknown sample is always a challenging task and suitable analytical techniques have to be judiciously utilized for arriving at the conclusion. Qualitative analysis was carried out using Jordan Valley, EX-3600 M Energy dispersive X-ray fluorescence (EDXRF) spectrometer at ACD, BARC. A SLP series LEO Si (Li) detector (active area: 30 mm 2 ; thickness: 3.5 mm; resolution: 140 eV at 5.9 keV of Mn K X-ray) was used during the measurement and only characteristic X-rays of Ir (Lα: 9.17 keV and Lβ: 10.70 keV) was seen in the X-ray spectrum. X-ray diffraction (XRD) measurement results indicated that the Ir was in the form of metal. To confirm the XRD data, neutron activation analysis (NAA) was carried out by irradiating samples and elemental standards (as comparator) in graphite reflector position of Advanced Heavy Water Reactor Critical Facility (AHWR CF) reactor, BARC, Mumbai. After suitable decay period, gamma activity measurements were carried out using 45% HPGe detector coupled to 8 k multi channel analyzer. Characteristic gamma line at 328.4 keV of the activation product 194 Ir was used for quantification of iridium and relative method of NAA was used for concentration calculations. NAA results confirmed that all the samples were Iridium metal. (author)

  18. Development of SYVAC sampling techniques

    International Nuclear Information System (INIS)

    Prust, J.O.; Dalrymple, G.J.

    1985-04-01

    This report describes the requirements of a sampling scheme for use with the SYVAC radiological assessment model. The constraints on the number of samples that may be taken are considered. The conclusions from earlier studies using the deterministic generator sampling scheme are summarised. The method of Importance Sampling and a High Dose algorithm, which are designed to sample preferentially in the high-dose region of the parameter space, are reviewed in the light of experience gained from earlier studies and the requirements of a site assessment and sensitivity analyses. In addition, the use of an alternative numerical integration method for estimating risk is discussed. It is recommended that the method of Importance Sampling be developed and tested for use with SYVAC. An alternative numerical integration method is not recommended for investigation at this stage but should be the subject of future work. (author)

  19. Corrective Action Investigation Plan for Corrective Action Unit 428: Area 3 Septic Waste Systems 1 and 5, Tonopah Test Range, Nevada

    International Nuclear Information System (INIS)

    ITLV

    1999-01-01

    The Corrective Action Investigation Plan for Corrective Action Unit 428, Area 3 Septic Waste Systems 1 and 5, has been developed in accordance with the Federal Facility Agreement and Consent Order that was agreed to by the U.S. Department of Energy, Nevada Operations Office; the State of Nevada Division of Environmental Protection; and the U.S. Department of Defense. Corrective Action Unit 428 consists of Corrective Action Sites 03-05-002-SW01 and 03-05-002-SW05, respectively known as Area 3 Septic Waste System 1 and Septic Waste System 5. This Corrective Action Investigation Plan is used in combination with the Work Plan for Leachfield Corrective Action Units: Nevada Test Site and Tonopah Test Range, Nevada, Rev. 1 (DOE/NV, 1998c). The Leachfield Work Plan was developed to streamline investigations at leachfield Corrective Action Units by incorporating management, technical, quality assurance, health and safety, public involvement, field sampling, and waste management information common to a set of Corrective Action Units with similar site histories and characteristics into a single document that can be referenced. This Corrective Action Investigation Plan provides investigative details specific to Corrective Action Unit 428. A system of leachfields and associated collection systems was used for wastewater disposal at Area 3 of the Tonopah Test Range until a consolidated sewer system was installed in 1990 to replace the discrete septic waste systems. Operations within various buildings at Area 3 generated sanitary and industrial wastewaters potentially contaminated with contaminants of potential concern and disposed of in septic tanks and leachfields. Corrective Action Unit 428 is composed of two leachfield systems in the northern portion of Area 3.
Based on site history collected to support the Data Quality Objectives process, contaminants of potential concern for the site include oil/ diesel range total petroleum hydrocarbons, and Resource Conservation

  20. Geometry-based multiplication correction for passive neutron coincidence assay of materials with variable and unknown (α,n) neutron rates

    International Nuclear Information System (INIS)

    Langner, D.G.; Russo, P.A.

    1993-02-01

    We have studied the problem of assaying impure plutonium-bearing materials using passive neutron coincidence counting. We have developed a technique to analyze neutron coincidence data from impure plutonium samples that uses the bulk geometry of the sample to correct for multiplication in samples for which the (α,n) neutron production rate is unknown. This technique can be applied to any impure plutonium-bearing material whose matrix constituents are approximately constant, whose self-multiplication is low to moderate, whose plutonium isotopic composition is known and not substantially varying, and whose bulk geometry is measurable or can be derived. This technique requires a set of reference materials that have well-characterized plutonium contents. These reference materials are measured once to derive a calibration that is specific to the neutron detector and the material. The technique has been applied to molten salt extraction residues, PuF₄ samples that have a variable salt matrix, and impure plutonium oxide samples. It is also applied to pure plutonium oxide samples for comparison. Assays accurate to 4% (1σ) were obtained for impure samples measured in a High-Level Neutron Coincidence Counter II. The effects on the technique of variations in neutron detector efficiency with energy and the effects of neutron capture in the sample are discussed

  1. Colour quenching corrections on the measurement of 90Sr through Cerenkov counting

    International Nuclear Information System (INIS)

    Mosqueda, F.; Villa, M.; Vaca, F.; Bolivar, J.P.

    2007-01-01

    The determination of 90Sr through the Cerenkov radiation emitted by its descendant 90Y is a well-known method, firmly established in the literature. Nevertheless, to obtain an accurate result from a Cerenkov measurement, the experimental work must be extremely rigorous, because Cerenkov counting efficiency is especially sensitive to the presence of colour. Any trace of colour in the sample reduces the number of photons detected by the photomultipliers and can therefore diminish the Cerenkov counting efficiency. It is essential not only to detect the effect of colour quenching in the sample but also to correct for the resulting decrease in counting efficiency. For this reason, colour quenching correction curves of counting efficiency versus quenching degree are usually constructed when measuring through Cerenkov counting. One of the most widely used techniques to evaluate colour quenching in these measurements is the channel ratio method, which quantifies the shift of the spectrum through the ratio of counts in two different windows. The selection of the windows used for the correction can influence the quality of the fitting parameters of the correction curves (efficiency versus colour quenching degree) and hence the final 90Sr result. This work focuses on calculating the decrease in counting efficiency using the channel ratio method and on obtaining the best-fitting correction curve. For this purpose, empirical curves obtained with artificial quenchers have been studied, and the results have been tested on real samples. Additionally, given that the Packard Tri-Carb 3170 TR/SL liquid scintillation counter is a novel detector for Cerenkov counting, the prior calibration of the Tri-Carb 3170 TR/SL detector, necessary for the measurement of 90Sr, is included
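The channel ratio method described in this record lends itself to a short sketch. The window boundaries, the quadratic form of the efficiency curve, and its coefficients below are all hypothetical; in practice they would come from the empirical calibration with artificially quenched standards that the abstract describes.

```python
def channel_ratio(spectrum, win_a=(0, 400), win_b=(400, 1000)):
    """Ratio of counts in two spectral windows; colour quenching shifts
    the Cerenkov spectrum toward low energies and changes this ratio."""
    a = sum(spectrum[win_a[0]:win_a[1]])
    b = sum(spectrum[win_b[0]:win_b[1]])
    return a / b if b else float("inf")

def counting_efficiency(ratio, coeffs=(0.12, 0.35, -0.05)):
    """Efficiency vs. channel ratio as a fitted quadratic (assumed form;
    the real curve is fitted to artificially quenched standards)."""
    c0, c1, c2 = coeffs
    return c0 + c1 * ratio + c2 * ratio ** 2

def activity_bq(net_cpm, ratio):
    """Quench-corrected activity from a net Cerenkov count rate."""
    eff = counting_efficiency(ratio)
    return net_cpm / 60.0 / eff   # counts/min -> counts/s -> Bq
```

A sample whose spectrum has shifted (lower channel ratio) is assigned a lower efficiency, so the same net count rate yields a higher corrected activity.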

  2. Correction for the interference of strontium in the determination of uranium in geologic samples by X-ray fluorescence

    International Nuclear Information System (INIS)

    Roca, M.; Bayon, A.

    1981-01-01

    A suitable empirical algorithm has been derived to correct for the spectral interference of the Sr Kα line on the U Lα line. It works successfully for SrO concentrations up to 8%, with a minimum detectable limit of 20 ppm U3O8. The X-ray spectrometric procedure also allows determination of the SrO content of the samples. A program in BASIC for data reduction has been written. (Author) 3 refs
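The record does not give the algorithm itself, but the simplest common form of such an empirical spectral-overlap correction is a linear subtraction of the interfering line's contribution. In the sketch below, the overlap factor `k` and the calibration sensitivity are hypothetical values; they would be measured from uranium-free SrO standards and uranium calibration samples.

```python
def net_u_intensity(i_ula_measured, i_srka, k=0.031):
    """U La intensity corrected for Sr Ka spectral overlap: subtract the
    fraction k of the Sr Ka intensity that spills into the U La channel.
    k is a hypothetical overlap factor from uranium-free SrO standards."""
    return i_ula_measured - k * i_srka

def u3o8_ppm(net_intensity, sensitivity=0.0052):
    """Linear calibration: counts per second per ppm U3O8 (assumed)."""
    return net_intensity / sensitivity
```

With such a correction, samples rich in strontium no longer report spuriously high uranium contents, up to the SrO level at which the linear overlap model breaks down.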

  3. Use Residual Correction Method and Monotone Iterative Technique to Calculate the Upper and Lower Approximate Solutions of Singularly Perturbed Non-linear Boundary Value Problems

    Directory of Open Access Journals (Sweden)

    Chi-Chang Wang

    2013-09-01

    This paper uses the proposed residual correction method in coordination with the monotone iterative technique to obtain upper and lower approximate solutions of singularly perturbed non-linear boundary value problems. First, the monotonicity of the non-linear differential equation is reinforced using the monotone iterative technique; then the cubic-spline method is applied to discretize and convert the differential equation into the mathematical programming problem of an inequality; finally, based on the residual correction concept, the complex constrained-solution problem is transformed into a simpler problem of equation iteration. As verified by the four examples given in this paper, the proposed method can be used to quickly obtain the upper and lower solutions of problems of this kind, and to easily identify the error range between approximate and exact solutions.

  4. Alternative approaches to correct interferences in the determination of boron in shrimps by electrothermal atomic absorption spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Pasias, I.N.; Pappa, Ch.; Katsarou, V.; Thomaidis, N.S., E-mail: ntho@chem.uoa.gr; Piperaki, E.A.

    2014-02-01

    The aim of this study is to propose alternative techniques and methods, in combination with classical chemical modification, to correct the major matrix interferences in the determination of boron in shrimps. The performance of an internal standard (Ge) for the determination of boron by simultaneous multi-element atomic absorption spectrometry was tested. The use of internal standardization increased the recovery from 85.9% to 101% and allowed a simple correction of errors occurring during sample preparation and the heating process. Furthermore, a new preparation procedure based on the use of citric acid during the digestion and dilution steps improved the sensitivity of the method and decreased the limit of detection. Finally, a comparative study was undertaken between simultaneous multi-element atomic absorption spectrometry, with a longitudinal Zeeman-effect background correction system and a transversely heated graphite atomizer, and single-element atomic absorption spectrometry, with a D2 background correction system and an end-heated graphite atomizer, to investigate the different behavior of boron in the two techniques. Different chemical modifiers for the determination of boron were tested with both techniques. Ni-citric acid and Ca were the optimal chemical modifiers for simultaneous multi-element and single-element atomic absorption spectrometry, respectively. With single-element atomic absorption spectrometry, the calculated characteristic mass was 220 pg and the calculated limit of detection was 370 μg/kg. With simultaneous multi-element atomic absorption spectrometry, the characteristic mass was 2200 pg and the limit of detection was 5.5 mg/kg. - Highlights: • New approaches were developed to cope with interferences of B determination by ETAAS • Ge was used as internal standard for the determination of B by simultaneous ETAAS • Citric acid was used during
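The internal-standard correction reported here (recovery rising from 85.9% to 101%) follows a simple rule: scale the analyte result by the measured recovery of the spike. A minimal sketch, with illustrative numbers rather than the paper's data:

```python
def is_corrected(analyte_signal, is_measured, is_expected):
    """Internal-standard correction: divide the analyte result by the
    recovery of the internal standard (e.g. a Ge spike), compensating
    losses during sample preparation and heating."""
    recovery = is_measured / is_expected
    return analyte_signal / recovery

# Illustrative example: an 85.9% Ge recovery scales a boron result of
# 85.9 (arbitrary units) back up to 100.
b_corrected = is_corrected(85.9, is_measured=0.859, is_expected=1.0)
```

The correction assumes the internal standard and the analyte are lost at the same rate, which is why an element with atomization behavior similar to boron must be chosen.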

  5. Nonactivation interaction techniques in the analysis of environmental samples

    International Nuclear Information System (INIS)

    Tolgyessy, J.

    1986-01-01

    Nonactivation interaction analytical methods are based on the interaction of nuclear and X-ray radiation with a sample, leading to absorption and backscattering, to the ionization of gases, or to the excitation of fluorescent X-rays, but not to the activation of the determined elements. From the point of view of environmental analysis, the most useful nonactivation interaction techniques are X-ray fluorescence with photon or charged-particle excitation, ionization of gases by nuclear radiation, elastic scattering of charged particles, and backscattering of beta radiation. A significant advantage of these methods is that they are nondestructive. (author)

  6. Construct Validity of the MMPI-2-RF Triarchic Psychopathy Scales in Correctional and Collegiate Samples.

    Science.gov (United States)

    Kutchen, Taylor J; Wygant, Dustin B; Tylicki, Jessica L; Dieter, Amy M; Veltri, Carlo O C; Sellbom, Martin

    2017-01-01

    This study examined the MMPI-2-RF (Ben-Porath & Tellegen, 2008/2011) Triarchic Psychopathy scales recently developed by Sellbom et al. (2016) in 3 separate groups of male correctional inmates and 2 college samples. Participants were administered a diverse battery of psychopathy-specific measures (e.g., Psychopathy Checklist-Revised [Hare, 2003], Psychopathic Personality Inventory-Revised [Lilienfeld & Widows, 2005], Triarchic Psychopathy Measure [Patrick, 2010]), omnibus personality and psychopathology measures such as the Personality Assessment Inventory (Morey, 2007) and Personality Inventory for DSM-5 (Krueger, Derringer, Markon, Watson, & Skodol, 2012), and narrow-band measures that capture conceptually relevant constructs. Our results generally evidenced strong support for the convergent and discriminant validity of the MMPI-2-RF Triarchic scales. Boldness was largely associated with measures of fearless dominance, social potency, and stress immunity. Meanness showed strong relationships with measures of callousness, aggression, externalizing tendencies, and poor interpersonal functioning. Disinhibition exhibited strong associations with poor impulse control, stimulus seeking, and general externalizing proclivities. Our results provide additional construct validation to both the triarchic model and the MMPI-2-RF Triarchic scales. Given the widespread use of the MMPI-2-RF in correctional and forensic settings, our results have important implications for clinical assessment in these 2 areas, where psychopathy is a highly relevant construct.

  7. Comparison of Techniques for Sampling Adult Necrophilous Insects From Pig Carcasses.

    Science.gov (United States)

    Cruise, Angela; Hatano, Eduardo; Watson, David W; Schal, Coby

    2018-02-06

    Studies of the pre-colonization interval and mechanisms driving necrophilous insect ecological succession depend on effective sampling of adult insects and knowledge of their diel and successional activity patterns. The number of insects trapped, their diversity, and diel periodicity were compared across four sampling methods on neonate pigs. Sampling method, time of day, and decomposition age of the pigs significantly affected the number of insects sampled from pigs. We also found significant interactions of sampling method with decomposition day, and of sampling time with decomposition day. No single method was superior to the others during all three decomposition days. Sampling times after noon yielded the largest samples during the first 2 d of decomposition. On day 3 of decomposition, however, all sampling times were equally effective. Therefore, to maximize insect collections from neonate pigs, the sampling method used must vary by decomposition day. The suction trap collected the most species-rich samples, but sticky trap samples were the most diverse when both species richness and evenness were factored into a Shannon diversity index. Repeated sampling during the noon to 18:00 hours period was most effective for obtaining the maximum diversity of trapped insects. The integration of multiple sampling techniques would most effectively sample the necrophilous insect community. However, because all four tested methods were deficient at sampling beetle species, future work should focus on optimizing the most promising methods, alone or in combinations, and incorporate hand-collections of beetles. © The Author(s) 2018. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. Optimal sampling strategy for data mining

    International Nuclear Information System (INIS)

    Ghaffar, A.; Shahbaz, M.; Mahmood, W.

    2013-01-01

    Modern technologies such as the Internet, corporate intranets, data warehouses, ERP systems, satellites, digital sensors, embedded systems, and mobile networks generate such massive amounts of data that it is becoming very difficult to analyze and understand it all, even with data mining tools. Huge datasets pose a difficult challenge for classification algorithms: with increasing amounts of data, data mining algorithms become slower and analysis becomes less interactive. Sampling can be a solution; using a fraction of the computing resources, it can often provide the same level of accuracy. The sampling process requires care, however, because many factors are involved in determining the correct sample size. The approach proposed in this paper addresses this problem. Based on a statistical formula, after setting some parameters, it returns a sample size called the sufficient sample size, which is then selected through probability sampling. Results indicate the usefulness of this technique in coping with the problem of huge datasets. (author)
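The abstract does not reproduce the statistical formula it uses, so the sketch below substitutes a common choice: Cochran's sample-size formula for estimating a proportion, followed by a finite-population correction. Both the formula choice and the default parameters are assumptions, not the paper's method.

```python
import math

def cochran_n(z=1.96, p=0.5, e=0.05):
    """Initial sample size for estimating a proportion p to within
    margin e at the confidence level implied by z (Cochran's formula).
    p = 0.5 is the conservative worst case."""
    return math.ceil(z * z * p * (1 - p) / (e * e))

def finite_correction(n0, population):
    """Shrink the initial size when the dataset itself is not huge."""
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Example: 95% confidence, 5% margin, dataset of 10,000 records.
n = finite_correction(cochran_n(), 10000)
```

The corrected size is then drawn by probability sampling (e.g. simple random sampling over record indices), matching the selection step the abstract describes.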

  9. Multi-element analysis of lubricant oil by WDXRF technique using thin-film sample preparation

    International Nuclear Information System (INIS)

    Scapin, M. A.; Salvador, V. L. R.; Lopes, C. D.; Sato, I. M.

    2006-01-01

    The quantitative analysis of chemical elements in matrices such as oils or gels represents a challenge for analytical chemists. Classical methods and instrumental techniques such as atomic absorption spectrometry (AAS) and plasma optical emission spectrometry (ICP-OES) require chemical treatment, mainly sample dissolution and degradation processes. The X-ray fluorescence technique allows a direct, multi-element analysis without prior sample treatment. In this work, a sensitive method for the determination of the elements Mg, Al, Si, P, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, Mo, Ag, Sn, Ba and Pb in lubricating oil is presented. The wavelength-dispersive X-ray fluorescence (WDXRF) technique, using a linear regression method and thin-film sample preparation, was employed. The validation of the methodology (repeatability and accuracy) was carried out by analysis of the standard reference material SRM Alpha AESAR lot 703527D, applying the Chauvenet, Cochran, ANOVA and Z-score statistical tests. The method presents a relative standard deviation lower than 10% for all elements except Pb (RSD 15%). The Z-score values for all elements were in the range -2 < Z < 2, indicating very good accuracy.
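The Z-score acceptance criterion used in the validation above is a standard one and is easy to sketch. The certified values and uncertainties below are invented for illustration; only the Z-score formula and the |Z| ≤ 2 convention are standard.

```python
def z_score(measured, certified, uncertainty):
    """Z-score of a measured value against a certified reference value;
    |Z| <= 2 is conventionally taken as satisfactory."""
    return (measured - certified) / uncertainty

# Hypothetical SRM check: element -> (measured, certified, uncertainty).
results = {"Fe": (102.0, 100.0, 2.0), "Zn": (48.5, 50.0, 1.5)}
ok = all(abs(z_score(m, c, u)) <= 2 for m, c, u in results.values())
```

Each element is judged independently, so one out-of-range element (like the higher-RSD Pb above) can be flagged without rejecting the whole method.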

  10. Human mixed lymphocyte cultures. Evaluation of microculture technique utilizing the multiple automated sample harvester (MASH)

    Science.gov (United States)

    Thurman, G. B.; Strong, D. M.; Ahmed, A.; Green, S. S.; Sell, K. W.; Hartzman, R. J.; Bach, F. H.

    1973-01-01

    Use of lymphocyte cultures for in vitro studies such as pretransplant histocompatibility testing has established the need for standardization of this technique. A microculture technique has been developed that has facilitated the culturing of lymphocytes and increased the quantity of cultures feasible, while lowering the variation between replicate samples. Cultures were prepared for determination of tritiated thymidine incorporation using a Multiple Automated Sample Harvester (MASH). Using this system, the parameters that influence the in vitro responsiveness of human lymphocytes to allogeneic lymphocytes have been investigated. PMID:4271568

  11. Measuring Sulfur Isotope Ratios from Solid Samples with the Sample Analysis at Mars Instrument and the Effects of Dead Time Corrections

    Science.gov (United States)

    Franz, H. B.; Mahaffy, P. R.; Kasprzak, W.; Lyness, E.; Raaen, E.

    2011-01-01

    The Sample Analysis at Mars (SAM) instrument suite comprises the largest science payload on the Mars Science Laboratory (MSL) "Curiosity" rover. SAM will perform chemical and isotopic analysis of volatile compounds from atmospheric and solid samples to address questions pertaining to habitability and geochemical processes on Mars. Sulfur is a key element of interest in this regard, as sulfur compounds have been detected on the Martian surface by both in situ and remote sensing techniques. Their chemical and isotopic composition can help constrain environmental conditions and mechanisms at the time of formation. A previous study examined the capability of the SAM quadrupole mass spectrometer (QMS) to determine sulfur isotope ratios of SO2 gas from a statistical perspective. Here we discuss the development of a method for determining sulfur isotope ratios with the QMS by sampling SO2 generated from heating of solid sulfate samples in SAM's pyrolysis oven. This analysis, which was performed with the SAM breadboard system, also required development of a novel treatment of the QMS dead time to accommodate the characteristics of an aging detector.
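The SAM team's dead-time treatment is described as novel and is not spelled out in the abstract. The sketch below instead uses the textbook non-paralyzable model to show why dead time skews an isotope ratio measured on beams of very different intensity; the count rates and dead time are invented.

```python
def true_rate(measured_cps, dead_time_s):
    """Non-paralyzable dead-time correction: n = m / (1 - m * tau)."""
    return measured_cps / (1.0 - measured_cps * dead_time_s)

def corrected_ratio(minor_cps, major_cps, dead_time_s):
    """Minor/major isotope ratio after correcting both beams.
    The intense major beam loses proportionally more counts, so the
    uncorrected ratio is biased high."""
    return true_rate(minor_cps, dead_time_s) / true_rate(major_cps, dead_time_s)
```

For example, with a 100 ns dead time, a 100,000 cps major beam is undercounted by about 1% while a 4,500 cps minor beam loses only ~0.05%, so the corrected ratio is slightly lower than the raw one.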

  12. ICT: isotope correction toolbox.

    Science.gov (United States)

    Jungreuthmayer, Christian; Neubauer, Stefan; Mairinger, Teresa; Zanghellini, Jürgen; Hann, Stephan

    2016-01-01

    Isotope tracer experiments are an invaluable technique to analyze and study the metabolism of biological systems. However, isotope labeling experiments are often affected by naturally abundant isotopes especially in cases where mass spectrometric methods make use of derivatization. The correction of these additive interferences--in particular for complex isotopic systems--is numerically challenging and still an emerging field of research. When positional information is generated via collision-induced dissociation, even more complex calculations for isotopic interference correction are necessary. So far, no freely available tools can handle tandem mass spectrometry data. We present isotope correction toolbox, a program that corrects tandem mass isotopomer data from tandem mass spectrometry experiments. Isotope correction toolbox is written in the multi-platform programming language Perl and, therefore, can be used on all commonly available computer platforms. Source code and documentation can be freely obtained under the Artistic License or the GNU General Public License from: https://github.com/jungreuc/isotope_correction_toolbox/ {christian.jungreuthmayer@boku.ac.at,juergen.zanghellini@boku.ac.at} Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
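The additive natural-abundance interference that tools like ICT remove can be shown in miniature for a single element. For a fragment with n carbon atoms, each unlabelled carbon has a ~1.07% chance of being 13C, smearing the true mass-isotopomer distribution upward in mass; the correction inverts that smearing. This single-element sketch omits the tandem-MS and multi-element machinery that is the toolbox's actual contribution.

```python
import math

A13 = 0.0107  # natural 13C abundance

def correction_matrix(n):
    """M[i][j] = probability that a fragment with j labelled carbons is
    measured at mass shift i, due to natural 13C in the n - j others."""
    M = [[0.0] * (n + 1) for _ in range(n + 1)]
    for j in range(n + 1):
        for i in range(j, n + 1):
            k = i - j  # extra mass units contributed by natural 13C
            M[i][j] = math.comb(n - j, k) * A13**k * (1 - A13)**(n - j - k)
    return M

def correct(measured):
    """Solve M x = measured by forward substitution (M is lower
    triangular), recovering the labelling-only distribution."""
    n = len(measured) - 1
    M = correction_matrix(n)
    x = [0.0] * (n + 1)
    for i in range(n + 1):
        x[i] = (measured[i] - sum(M[i][j] * x[j] for j in range(i))) / M[i][i]
    return x
```

Because natural isotopes can only add mass here, the matrix is triangular and inverts exactly; derivatization and tandem fragmentation make the real problem considerably harder, which is the gap ICT addresses.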

  13. [Influence of Natural Dissolved Organic Matter on the Passive Sampling Technique and its Application].

    Science.gov (United States)

    Yu, Shang-yun; Zhou, Yan-mei

    2015-08-01

    This paper studied the effects of different concentrations of natural dissolved organic matter (DOM) on a passive sampling technique. The results showed that the presence of DOM affected the membrane's adsorption of organic pollutants: for lg K(OW) of 3-5, DOM had little impact on the adsorption of organic matter by the membrane, while for lg K(OW) > 5.5, DOM significantly increased the adsorption capacity of the membrane. The LDPE passive sampling technique was then applied to monitor PAHs and PAEs in the pore water of three surface sediments in the Taizi River. All target pollutants were detected, to varying degrees, at each sampling point. Finally, the quotient method was used to assess the ecological risks of PAHs and PAEs. The results showed that fluoranthene exceeded the reference value for the aquatic ecosystem, indicating a considerable ecological risk.

  14. First Industrial Tests of a Matrix Monitor Correction for the Differential Die-away Technique of Historical Waste Drums

    Energy Technology Data Exchange (ETDEWEB)

    Antoni, Rodolphe; Passard, Christian; Perot, Bertrand [CEA Cadarache DEN/Nuclear Measurement Laboratory, 13108 Saint-Paul lez Durance (France); Batifol, Marc; Vandamme, Jean-Christophe [Nuclear Measurement Team, AREVA NC, La Hague plant F-50444 Beaumont-Hague (France); Grassi, Gabriele [AREVA NC, 1 place Jean-Millier, 92084 Paris-La-Defense cedex (France)

    2015-07-01

    The fissile mass in radioactive waste drums filled with compacted metallic residues (spent fuel hulls and nozzles) produced at the AREVA NC La Hague reprocessing plant is measured by neutron interrogation with the Differential Die-away measurement Technique (DDT). In the coming years, old hulls and nozzles mixed with ion-exchange resins will be measured. The ion-exchange resins increase neutron moderation in the matrix compared to the waste measured in the current process. In this context, the Nuclear Measurement Laboratory (LMN) of CEA Cadarache has studied a matrix-effect correction method based on a drum monitor, namely a 3He proportional counter located inside the measurement cavity. After feasibility studies performed with LMN's PROMETHEE 6 laboratory measurement cell and with MCNPX simulations, this paper presents the first experimental tests performed on the industrial ACC (hulls and nozzles compaction facility) measurement system. A calculation-versus-experiment benchmark has been carried out by performing dedicated calibration measurements with a representative drum and 235U samples. The comparison between calculation and experiment shows a satisfactory agreement for the drum monitor. The final objective of this work is to confirm the reliability of the modeling approach and the industrial feasibility of the method, which will be implemented on the industrial station for the measurement of historical wastes. (authors)

  15. Quantum corrections to Schwarzschild black hole

    Energy Technology Data Exchange (ETDEWEB)

    Calmet, Xavier; El-Menoufi, Basem Kamal [University of Sussex, Department of Physics and Astronomy, Brighton (United Kingdom)

    2017-04-15

    Using effective field theory techniques, we compute quantum corrections to spherically symmetric solutions of Einstein's gravity and focus in particular on the Schwarzschild black hole. Quantum modifications are covariantly encoded in a non-local effective action. We work to quadratic order in curvatures simultaneously taking local and non-local corrections into account. Looking for solutions perturbatively close to that of classical general relativity, we find that an eternal Schwarzschild black hole remains a solution and receives no quantum corrections up to this order in the curvature expansion. In contrast, the field of a massive star receives corrections which are fully determined by the effective field theory. (orig.)

  16. Heating and thermal control of brazing technique to break contamination path for potential Mars sample return

    Science.gov (United States)

    Bao, Xiaoqi; Badescu, Mircea; Sherrit, Stewart; Bar-Cohen, Yoseph; Campos, Sergio

    2017-04-01

    The potential return of Mars sample material is of great interest to the planetary science community, as it would enable extensive analysis of samples with highly sensitive laboratory instruments. It is important to make sure such a mission concept would not bring any living microbes, which may possibly exist on Mars, back to Earth's environment. In order to ensure the isolation of Mars microbes from Earth's atmosphere, a brazing sealing and sterilizing technique was proposed to break the Mars-to-Earth contamination path. Effectively heating the brazing zone in high-vacuum space and controlling the sample temperature for integrity are key challenges to the implementation of this technique. The break-the-chain procedures for the container configurations being considered were simulated by multi-physics finite element models. Different heating methods, including induction and resistive/radiation heating, were evaluated. The temperature profiles of Martian samples in a proposed container structure were predicted. The results show that the sealing and sterilizing process can be controlled such that the sample temperature is maintained below the level that may cause damage, and that the brazing technique is a feasible approach to breaking the contamination path.

  17. A Monte Carlo evaluation of analytical multiple scattering corrections for unpolarised neutron scattering and polarisation analysis data

    International Nuclear Information System (INIS)

    Mayers, J.; Cywinski, R.

    1985-03-01

    Some of the approximations commonly used for the analytical estimation of multiple scattering corrections to thermal neutron elastic scattering data from cylindrical and plane slab samples have been tested using a Monte Carlo program. It is shown that the approximations are accurate for a wide range of sample geometries and scattering cross-sections. Neutron polarisation analysis provides the most stringent test of multiple scattering calculations, as multiply scattered neutrons may be redistributed not only geometrically but also between the spin-flip and non-spin-flip scattering channels. A very simple analytical technique for correcting for multiple scattering in neutron polarisation analysis has been tested using the Monte Carlo program and has been shown to work remarkably well in most circumstances. (author)
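The role of such a Monte Carlo program can be illustrated with a toy version: histories random-walk through a slab, and we tally what fraction of the escaping scattered neutrons scattered more than once, which is the quantity the analytical corrections must estimate. The geometry, mean free path, and scattering probability below are invented; a real calculation would use measured cross-sections and the actual sample shape.

```python
import random

def mc_multiple_fraction(n=20000, thickness=1.0, mfp=2.0,
                         p_scatter=0.9, seed=1):
    """Toy slab Monte Carlo: fraction of escaping scattered neutrons
    that underwent more than one scattering event."""
    rng = random.Random(seed)
    single = multiple = 0
    for _ in range(n):
        z, mu, order = 0.0, 1.0, 0       # depth, direction cosine, order
        while True:
            z += mu * rng.expovariate(1.0 / mfp)   # free flight
            if z < 0.0 or z > thickness:           # escaped the slab
                if order == 1:
                    single += 1
                elif order > 1:
                    multiple += 1
                break
            if rng.random() > p_scatter:           # absorbed
                break
            order += 1
            mu = rng.uniform(-1.0, 1.0)            # isotropic re-direction
    return multiple / (single + multiple)
```

Comparing this tallied fraction with the analytical estimate, geometry by geometry, is essentially the test the abstract describes; the polarisation-analysis case additionally tracks a spin-flip flag at each collision.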

  18. The particle analysis based on FT-TIMS technique for swipe sample under the frame of nuclear safeguard

    International Nuclear Information System (INIS)

    Yang Tianli; Liu Xuemei; Liu Zhao; Tang Lei; Long Kaiming

    2008-06-01

    Under the framework of nuclear safeguards, particle analysis of swipe samples is an advanced means of detecting undeclared uranium enrichment facilities and activities. A technique for particle analysis of swipe samples based on fission track-thermal ionization mass spectrometry (FT-TIMS) has been established. The reliability of, and the experimental background for, selecting uranium-bearing particles from swipe samples by the FT method have been verified. In addition, the utilization coefficient of particles on the surface of the swipe sample has been tested. This work provides technical support for applications in the area of nuclear verification. (authors)

  19. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial based response surface techniques in structural reliability considering a computationally time consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit...... of the model correction factor method, is that in simpler form not using gradient information on the original limit state function or only using this information once, a drastic reduction of the number of limit state evaluation is obtained together with good approximations on the reliability. Methods...

  20. Vitamin B 12 absorption: correction of intestinal retention by whole-body profile activity of vitamin B 12-58 cobalt and by double tracer technique

    International Nuclear Information System (INIS)

    Goncalves, M.R. Bencke; Gheldof, R.; Paternot, L. van Tricht; Delmotte, E.; Verschaeren, A.; Martin, P.; Verhas, M.; Universidade Federal, Rio de Janeiro, RJ

    1997-01-01

    Intestinal retention can give false negative results when determining vitamin B 12 absorption from whole-body retention (WBC B12-58Co). After validating the WBC B12-58Co method, taking the Schilling test as reference, we studied the feasibility of evaluating intestinal contamination by measuring the profile of the activity distribution of vitamin B12-58Co and by a double-tracer technique (WBC B12-58Co / WBC 51CrCl3). Methodology: twenty-five patients were studied in setting up the new methodology. For eleven of them, WBC B12-58Co retention was measured on the 7th day after oral administration of 37 kBq of B12-58Co, using a four-detector whole-body counter. One week later, a Schilling test was performed after oral administration of 18.5 kBq of B12-57Co. Results were expressed as %ID. In these patients, a single peak of hepatic activity was observed on the whole-body profile, and thus no further intestinal correction was needed. To evaluate the intestinal contribution, we measured in nine other patients the profile of the whole-body activity distribution at 1 h, 1 week and 2 weeks after oral administration of B12-58Co. For five other patients, a double-tracer technique was used for intestinal correction after simultaneous oral administration of 37 kBq of B12-58Co and 1.85 MBq of 51CrCl3. The B12-58Co absorption was evaluated after an intestinal correction based on subtraction of the 51CrCl3 contribution, following the formula: B12-58Co (%ID) = (WBC B12-58Co - WBC 51CrCl3) / (1 - WBC 51CrCl3). Results: the correlation with the Schilling test was excellent: r = 0.94 (n = 11). Normal WBC retention (n = 7) was defined as 53.2 ± 12.4 %ID (SD). For the nine patients studied on the 7th day, the presence of a double peak (hepatic and intestinal) allowed subtraction by exponential extrapolation; the correction ranged from 4.4% to 37.2%. 
With the exception of one observation there was no difference in the measure of vitamin
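The double-tracer correction formula quoted in this record translates directly into code, with both whole-body retentions expressed as fractions of the administered dose:

```python
def b12_absorption(wbc_b12, wbc_cr):
    """Double-tracer intestinal correction from the record above:
    corrected B12-58Co retention = (WBC_B12 - WBC_Cr) / (1 - WBC_Cr).
    The non-absorbable 51CrCl3 tracer marks intestinal content still
    in transit, which would otherwise be counted as absorbed B12."""
    return (wbc_b12 - wbc_cr) / (1.0 - wbc_cr)

# Illustrative example (invented values): 60% whole-body B12 retention
# with 10% of the 51Cr still on board corrects to about 55.6%.
frac = b12_absorption(0.60, 0.10)
```

When no 51Cr remains in the gut, the correction is the identity, matching the patients above whose single hepatic peak required no correction.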

  1. Corrective Action Investigation Plan for Corrective Action Unit 561: Waste Disposal Areas, Nevada Test Site, Nevada with ROTC 1, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Grant Evenson

    2008-07-01

    Corrective Action Unit (CAU) 561 is located in Areas 1, 2, 3, 5, 12, 22, 23, and 25 of the Nevada Test Site, which is approximately 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit 561 is comprised of the 10 corrective action sites (CASs) listed below: • 01-19-01, Waste Dump • 02-08-02, Waste Dump and Burn Area • 03-19-02, Debris Pile • 05-62-01, Radioactive Gravel Pile • 12-23-09, Radioactive Waste Dump • 22-19-06, Buried Waste Disposal Site • 23-21-04, Waste Disposal Trenches • 25-08-02, Waste Dump • 25-23-21, Radioactive Waste Dump • 25-25-19, Hydrocarbon Stains and Trench These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. Additional information will be obtained by conducting a corrective action investigation before evaluating corrective action alternatives and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on April 28, 2008, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office; Stoller-Navarro Joint Venture; and National Security Technologies, LLC. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 561. Appendix A provides a detailed discussion of the DQO methodology and the DQOs specific to each CAS. The scope of the Corrective Action Investigation for CAU 561 includes the following activities: • Move surface debris and/or materials, as needed, to facilitate sampling. • Conduct radiological surveys

  2. Segmented attenuation correction using artificial neural networks in positron tomography

    International Nuclear Information System (INIS)

    Yu, S.K.; Nahmias, C.

    1996-01-01

    The measured attenuation correction technique is widely used in cardiac positron tomographic studies. However, its success is limited by the insufficient counting statistics achievable in practical transmission scan times, and by scattered radiation in the transmission measurement, which leads to an underestimation of the attenuation coefficients. In this work, a segmented attenuation correction technique has been developed that uses artificial neural networks. The technique has been validated in phantoms and verified in human studies. The results indicate that attenuation coefficients measured in the segmented transmission image are accurate and reproducible. Activity concentrations measured in the reconstructed emission image can also be recovered accurately using this new technique. The accuracy of the technique is subject-independent and insensitive to scatter contamination in the transmission data. This technique has the potential to reduce the transmission scan time, and satisfactory results are obtained if the transmission data contain about 400,000 true counts per plane. It can accurately predict the value of any attenuation coefficient in the range from air to water in a transmission image, with or without scatter correction. (author)
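The segmentation idea behind this technique can be shown in miniature: replace each noisy measured attenuation value with the nominal coefficient of its tissue class. A nearest-class rule stands in here for the paper's trained neural network, and the class list with its nominal 511 keV attenuation coefficients is an illustrative assumption.

```python
# Nominal linear attenuation coefficients at 511 keV (cm^-1), assumed.
CLASSES = {"air": 0.000, "lung": 0.035, "soft tissue": 0.096}

def segment(mu_noisy):
    """Replace each noisy measured mu with the nearest tissue-class
    value, suppressing statistical noise in the transmission image."""
    return [min(CLASSES.values(), key=lambda c: abs(c - mu))
            for mu in mu_noisy]
```

Because every pixel ends up with an exact class value, the segmented image is insensitive to the count-starved, scatter-biased raw measurements, which is the property the abstract reports.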

  3. Refinement of NMR structures using implicit solvent and advanced sampling techniques.

    Science.gov (United States)

    Chen, Jianhan; Im, Wonpil; Brooks, Charles L

    2004-12-15

    NMR biomolecular structure calculations exploit simulated annealing methods for conformational sampling and require a relatively high level of redundancy in the experimental restraints to determine quality three-dimensional structures. Recent advances in generalized Born (GB) implicit solvent models should make it possible to combine information from both experimental measurements and accurate empirical force fields to improve the quality of NMR-derived structures. In this paper, we study the influence of implicit solvent on the refinement of protein NMR structures and identify an optimal protocol for utilizing these improved force fields. To do so, we carry out structure refinement experiments for model proteins with published NMR structures using full NMR restraints and subsets of them. We also investigate the application of advanced sampling techniques to NMR structure refinement. Similar to the observations of Xia et al. (J. Biomol. NMR 2002, 22, 317-331), we find that the impact of implicit solvent is rather small when there is a sufficient number of experimental restraints (such as in the final stage of NMR structure determination), whether implicit solvent is used throughout the calculation or only in the final refinement step. The application of advanced sampling techniques also seems to have minimal impact in this case. However, when the experimental data are limited, we demonstrate that refinement with implicit solvent can substantially improve the quality of the structures. In particular, when combined with an advanced sampling technique, the replica exchange (REX) method, near-native structures can be rapidly moved toward the native basin. The REX method provides both enhanced sampling and automatic selection of the most native-like (lowest energy) structures. An optimal protocol based on our studies first generates an ensemble of initial structures that maximally satisfy the available experimental data with conventional NMR software using a simplified
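The replica exchange (REX) idea can be sketched on a toy one-dimensional double-well energy standing in for the combined restraint plus force-field energy; the temperatures, step size, and potential below are illustrative only.

```python
import math
import random

random.seed(0)

def energy(x):
    # Toy double-well potential with minima at x = -1 and x = +1,
    # standing in for the NMR restraint + force-field energy.
    return (x * x - 1.0) ** 2

def metropolis_step(x, T, step=0.5):
    """One Metropolis move at temperature T."""
    x_new = x + random.uniform(-step, step)
    if random.random() < math.exp(min(0.0, -(energy(x_new) - energy(x)) / T)):
        return x_new
    return x

def replica_exchange(temps, n_sweeps=2000):
    """Run parallel replicas at several temperatures with neighbour swaps."""
    xs = [2.0 for _ in temps]              # all replicas start far from a minimum
    for sweep in range(n_sweeps):
        xs = [metropolis_step(x, T) for x, T in zip(xs, temps)]
        i = sweep % (len(temps) - 1)       # attempt one neighbour swap per sweep
        dbeta = 1.0 / temps[i] - 1.0 / temps[i + 1]
        dE = energy(xs[i]) - energy(xs[i + 1])
        if random.random() < math.exp(min(0.0, dbeta * dE)):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

final = replica_exchange([0.05, 0.2, 1.0])
print(round(final[0], 3))   # coldest replica should sit near a well at x = +/-1
```

The hot replicas cross barriers easily, and the swap criterion `min(1, exp(Δβ·ΔE))` passes low-energy configurations down to the cold replica, which effectively selects the most "native-like" (lowest-energy) states.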

  4. Posterior-only surgical correction of adolescent idiopathic scoliosis: an Egyptian experience

    Directory of Open Access Journals (Sweden)

    Elnady Belal

    2017-01-01

    Conclusion: This prospective study evaluated a posterior-only technique using pedicle screws at an implant density of at least 80%, together with an intra-operative wake-up test, in Egyptian patients with AIS. It proved to be an effective and safe technique for correcting radiological parameters, with no neurological or implant-related complications. It allowed excellent correction of scoliotic and kyphotic curves with minimal loss of correction and, on the whole, led to better quality of life.

  5. Dosimetric characterization of BeO samples in alpha, beta and X radiation beams using luminescent techniques

    International Nuclear Information System (INIS)

    Groppo, Daniela Piai

    2013-01-01

    In the medical field, ionizing radiation is used for both therapeutic and diagnostic purposes, over a wide range of radiation doses. In order to ensure that the objective is achieved in practice, detailed studies of detectors and devices in different types of radiation beams are necessary. In this work, a dosimetric characterization of BeO samples was performed using the techniques of thermoluminescence (TL) and optically stimulated luminescence (OSL), by comparing their responses to alpha, beta and X radiation and establishing an appropriate system for use in monitoring these radiation beams. The main results are: high sensitivity to beta radiation for both techniques; good reproducibility of the TL and OSL responses (coefficients of variation lower than 5%); and a maximum energy dependence for X radiation of 28% for the TL technique, but of only 7% for the OSL technique, within the studied energy range. The dosimetric characteristics obtained in this work show the possibility of applying BeO samples to the dosimetry of alpha, beta and X radiation over the studied dose ranges, using the TL and OSL techniques. From the results obtained, the BeO samples showed their potential for beam dosimetry in diagnostic radiology and radiotherapy. (author)

  6. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set-Effect of Pasteurization.

    Science.gov (United States)

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-02-26

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurements if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for both unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels or the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.
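A simple linear correction algorithm of the kind described, mapping raw device readings onto the chemical reference scale with a least-squares fit, can be sketched as follows; the paired device/reference values are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical paired fat measurements (g/dL): Near-IR device vs. chemical reference.
device    = np.array([3.1, 4.0, 2.5, 3.6, 4.4])
reference = np.array([3.4, 4.5, 2.7, 4.0, 5.0])

# Least-squares slope/intercept mapping raw device readings onto the reference scale.
slope, intercept = np.polyfit(device, reference, 1)

def correct(raw):
    """Apply the linear correction to a raw device reading."""
    return slope * raw + intercept

print(round(correct(3.0), 2))   # corrected value for a raw reading of 3.0 g/dL
```

Validation in an independent sample set, as in the paper, then amounts to checking that `correct()` reproduces the reference values for milk samples not used in the fit.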

  7. Determination of self absorption correction factor (SAF) for gross alpha measurement in water samples by BIS method

    International Nuclear Information System (INIS)

    Raveendran, Nanda; Baburajan, A.; Ravi, P.M.

    2018-01-01

    The laboratories accredited by AERB undertake the measurement of gross alpha and gross beta activity in packaged drinking water from manufacturers across the country and analyze the samples as per the procedure of the Bureau of Indian Standards. Accurate measurement of gross alpha activity in drinking water samples is a challenge due to the self-absorption of alpha particles in the precipitate (Fe(OH)3 + BaSO4), whose thickness varies, and in the total dissolved solids (TDS). This paper presents a study on uranium tracer recovery and the self-absorption correction factor (SAF). ESL, Tarapur participated in an inter-laboratory comparison exercise conducted by IDS, RSSD, BARC, as recommended by AERB for the accredited laboratories. The thickness of the precipitate is an important factor affecting the counting process. The activity was reported after conducting multiple experiments on uranium tracer recovery and precipitate thickness. Later, to simplify the procedure, an average tracer recovery and self-absorption correction factor (SAF) were derived in the present experiment and used for the re-calculation of activity from the count rate reported earlier
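Applying an average tracer recovery and SAF to a measured count rate can be sketched as below; every numerical value is illustrative, not taken from the paper.

```python
# Hedged sketch: converting a gross-alpha count rate to activity concentration
# using an average tracer recovery and self-absorption correction factor (SAF).
# All numerical values below are illustrative.

def gross_alpha_activity(net_cps, counting_eff, recovery, saf, volume_l):
    """Activity concentration in Bq/L; each correction divides the raw rate."""
    return net_cps / (counting_eff * recovery * saf * volume_l)

activity = gross_alpha_activity(
    net_cps=0.012,      # background-subtracted count rate (counts/s)
    counting_eff=0.30,  # detector efficiency for alpha particles
    recovery=0.85,      # average uranium tracer recovery
    saf=0.70,           # average self-absorption correction factor
    volume_l=1.0,       # sample volume processed
)
print(round(activity, 4))   # Bq/L
```

Using averaged `recovery` and `saf` values, as the paper proposes, removes the need to re-determine both corrections experimentally for every sample.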

  8. Long-term monitoring of the Danube river-Sampling techniques, radionuclide metrology and radioecological assessment

    International Nuclear Information System (INIS)

    Maringer, F.J.; Gruber, V.; Hrachowitz, M.; Baumgartner, A.; Weilner, S.; Seidel, C.

    2009-01-01

    Sampling techniques and radiometric methods developed and applied in a comprehensive radioecological study of the Danube River are presented. Results and radiometric data of sediment samples, collected by sediment traps in Austria and by grab sampling during research cruises between Germany and the delta (Black Sea), are shown and discussed. The goal of the investigation is the protection of the public and the environment, especially the sustainable use and conservation of human freshwater resources against harmful radioactive exposure.

  9. Colour quenching corrections on the measurement of {sup 90}Sr through Cerenkov counting

    Energy Technology Data Exchange (ETDEWEB)

    Mosqueda, F. [Dpto. de Fisica Aplicada, Facultad de Ciencias Experimentales, Universidad de Huelva, Campus de El Carmen, 21071 Huelva (Spain)], E-mail: fernando.mosqueda@dfa.uhu.es; Villa, M. [Centro de Investigacion, Tecnologia e Innovacion, Universidad de Sevilla, Av. Reina Mercedes 4B, E41012 Sevilla (Spain); Vaca, F.; Bolivar, J.P. [Dpto. de Fisica Aplicada, Facultad de Ciencias Experimentales, Universidad de Huelva, Campus de El Carmen, 21071 Huelva (Spain)

    2007-12-05

    The determination of {sup 90}Sr through the Cerenkov radiation emitted by its descendant {sup 90}Y is a well-known method, firmly established in the literature. Nevertheless, to obtain an accurate result from a Cerenkov measurement, the experimental work must be extremely rigorous, because the efficiency of Cerenkov counting is especially sensitive to the presence of colour. Any trace of colour in the sample decreases the number of photons detected in the photomultipliers and can therefore reduce the Cerenkov counting efficiency. It is essential not only to detect the effect of colour quenching in the sample but also to correct for the resulting decrease in counting efficiency. For this reason, correction curves of counting efficiency versus colour quenching degree are usually constructed when measuring through Cerenkov counting. One of the most widely used techniques to evaluate colour quenching in these measurements is the channel ratio method, which consists of measuring the shift of the spectrum through the ratio of counts in two different windows. The selection of the windows might influence the quality of the fitting parameters of the correction curves of efficiency versus colour quenching degree, and hence the final {sup 90}Sr result. This work focuses on the calculation of the counting efficiency decrease using the channel ratio method and on obtaining the best-fitting correction curve. For this purpose, empirical curves obtained with artificial quenchers have been studied and the results have been tested on real samples. Additionally, given that the Packard Tri-Carb 3170 TR/SL liquid scintillation counter is a novel detector for Cerenkov counting, the calibration of the Tri-Carb 3170 TR/SL detector, necessary for the measurement of {sup 90}Sr, is also included.
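The channel ratio workflow, building a ratio-versus-efficiency calibration curve with artificial quenchers and then correcting a quenched sample, can be sketched as follows; the window counts and efficiencies are invented for illustration.

```python
import numpy as np

# Calibration standards of known activity with increasing colour quenching:
# counts in a low-energy and a high-energy window, plus the measured efficiency.
low  = np.array([4000., 4600., 5300., 6100.])
high = np.array([6000., 5100., 4100., 3000.])
eff  = np.array([0.65, 0.55, 0.44, 0.31])

# Channel ratio: quenching shifts the spectrum toward low energies,
# so the fraction of counts in the high window drops.
ratio = high / (low + high)
coeffs = np.polyfit(ratio, eff, 2)        # quadratic efficiency-vs-ratio curve

def efficiency_from_ratio(r):
    return np.polyval(coeffs, r)

# Correct a quenched sample: 2.0 net cps with window counts 5000 / 4300.
r_sample = 4300.0 / (5000.0 + 4300.0)
activity = 2.0 / efficiency_from_ratio(r_sample)   # Bq, after quench correction
print(round(float(activity), 2))
```

The choice of window boundaries changes both `ratio` and the quality of the polynomial fit, which is exactly the sensitivity the abstract discusses.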

  10. Separation of arsenic species by capillary electrophoresis with sample-stacking techniques

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Zu Liang; Naidu, Ravendra [Adelaide Laboratory, CSIRO Land and Water, PMB2, 5064, Glen Osmond, SA (Australia); Lin, Jin-Ming [Research Center for Eco-Environmental Sciences, Chinese Academy of Sciences, P.O. Box 2871, 100085, Beijing (China)

    2003-03-01

    A simple capillary zone electrophoresis (CZE) procedure was developed for the separation of arsenic species (AsO{sub 2}{sup -}, AsO{sub 4}{sup 3-}, and dimethylarsinic acid, DMA). Both counter-electroosmotic and co-electroosmotic (EOF) modes were investigated for the separation of arsenic species with direct UV detection at 185 nm, using 20 mmol L{sup -1} sodium phosphate as the electrolyte. The separation selectivity depends mainly on the separation mode and the electrolyte pH. Inorganic anions (Cl{sup -}, NO{sub 2}{sup -}, NO{sub 3}{sup -} and SO{sub 4}{sup 2-}) present in real samples did not interfere with arsenic speciation in either separation mode. To improve the detection limits, sample-stacking techniques, including large-volume sample stacking (LVSS) and field-amplified sample injection (FASI), were investigated for the preconcentration of As species in co-CZE mode. Detection limits below 1 {mu}mol L{sup -1} were achieved for As species using FASI. The proposed method was demonstrated for the separation and detection of As species in water. (orig.)

  11. Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G.

    1993-11-01

    The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages, and problems easily get out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of both expected future costs and the gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, where we use importance path sampling for the upper bound estimation. Initial numerical results turned out to be promising.
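The importance-sampling idea behind the cost and cut estimates can be illustrated on a toy scenario distribution; the scenario set, probabilities, and cost function below are invented for illustration, not from the report.

```python
import random

random.seed(1)

# Toy model: scenarios w in {0,...,9}, uniform true probability p = 0.1 each,
# with the cost concentrated in the rare scenario w = 9.

def cost(w):
    return 100.0 if w == 9 else 1.0

p = [0.1] * 10                          # true scenario probabilities
g = [0.05] * 9 + [0.55]                 # importance distribution favouring w = 9

def is_estimate(n):
    """Importance-sampling estimate of E_p[cost(w)] using samples from g."""
    total = 0.0
    for _ in range(n):
        w = random.choices(range(10), weights=g)[0]
        total += cost(w) * p[w] / g[w]  # likelihood-ratio reweighting
    return total / n

exact = sum(pi * cost(w) for w, pi in enumerate(p))   # 0.9*1 + 0.1*100 = 10.9
print(round(exact, 1), round(is_estimate(20000), 1))
```

Sampling from `g` visits the high-cost scenario far more often than crude Monte Carlo would, and the `p[w]/g[w]` weight keeps the estimator unbiased; in the multi-stage setting the same reweighting is applied to the sampled cut gradients and right-hand sides.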

  12. Use of 3D Printed Bone Plate in Novel Technique to Surgically Correct Hallux Valgus Deformities

    Science.gov (United States)

    Smith, Kathryn E.; Dupont, Kenneth M.; Safranski, David L.; Blair, Jeremy; Buratti, Dawn; Zeetser, Vladimir; Callahan, Ryan; Lin, Jason; Gall, Ken

    2016-01-01

    Three-dimensional (3-D) printing offers many potential advantages in designing and manufacturing plating systems for foot and ankle procedures that involve small, geometrically complex bony anatomy. Here, we describe the design and clinical use of a Ti-6Al-4V ELI bone plate (FastForward™ Bone Tether Plate, MedShape, Inc., Atlanta, GA) manufactured through 3-D printing processes. The plate protects the second metatarsal when tethering suture tape between the first and second metatarsals, and is part of a new procedure that corrects hallux valgus (bunion) deformities without relying on an osteotomy or fusion procedure. The surgical technique and two clinical cases describing the use of this procedure with the 3-D printed bone plate are presented within. PMID:28337049

  13. Monitoring of persistent organic pollutants in seawater of the Pearl River Estuary with rapid on-site active SPME sampling technique

    International Nuclear Information System (INIS)

    Huang, Siming; He, Shuming; Xu, Hao; Wu, Peiyan; Jiang, Ruifen; Zhu, Fang; Luan, Tiangang; Ouyang, Gangfeng

    2015-01-01

    An on-site active solid-phase microextraction (SPME) sampling technique coupled with gas chromatography-mass spectrometry (GC-MS) was developed for sampling and monitoring 16 polycyclic aromatic hydrocarbons (PAHs) and 8 organochlorine pesticides (OCPs) in seawater. Laboratory experiments demonstrated that the sampling-rate calibration method was practical and could be used for the quantification of on-site sampling. The proposed method was employed in field tests covering large numbers of water samples in the Pearl River Estuary in the rainy and dry seasons. The on-site SPME sampling method avoids contamination of the sample and losses of analytes during sample transportation, as well as the use of solvent and a time-consuming sample preparation process. The results indicated that the technique, with the designed device, can meet the requirements of modern environmental water analysis. In addition, the sources, bioaccumulation, and potential risk to humans of the PAHs and OCPs in the seawater of the Pearl River Estuary are discussed. - Highlights: • An SPME on-site active sampling technique was developed and validated. • The technique was employed in field tests in the Pearl River Estuary. • 16 PAHs and 8 OCPs in the seawater of the Pearl River Estuary were monitored. • The potential risks of the PAHs and OCPs in the Pearl River Estuary were discussed. - An on-site active SPME sampling technique was developed and successfully applied for sampling and monitoring 16 PAHs and 8 OCPs in the Pearl River Estuary
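The sampling-rate calibration used for on-site quantification can be sketched as below: with a laboratory-calibrated sampling rate Rs, the mass extracted over a known exposure time gives the time-weighted average concentration. The rate, time, and extracted mass are hypothetical.

```python
# Hedged sketch of sampling-rate calibration for active SPME: the time-weighted
# average concentration follows from C = n / (Rs * t), where n is the extracted
# mass, Rs the calibrated sampling rate, and t the exposure time.
# Numerical values are illustrative only.

def twa_concentration(extracted_ng, rs_ml_per_min, minutes):
    """Time-weighted average concentration in ng/mL."""
    return extracted_ng / (rs_ml_per_min * minutes)

# 12 ng of a PAH extracted in 30 min at a calibrated sampling rate of 2.5 mL/min.
c = twa_concentration(12.0, 2.5, 30.0)
print(c)   # ng/mL
```

Because Rs is determined once in the laboratory for each analyte, the field measurement itself needs no solvent or further sample preparation, which is the practical advantage the abstract emphasizes.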

  14. Corrections to the 148Nd method of evaluation of burnup for the PIE samples from Mihama-3 and Genkai-1 reactors

    International Nuclear Information System (INIS)

    Suyama, Kenya; Mochizuki, Hiroki

    2006-01-01

    The value of the burnup is one of the most important parameters of samples taken by post-irradiation examination (PIE). Generally, it is evaluated by the Neodymium-148 method. Precise evaluation of the burnup value requires: (1) an effective fission yield of 148 Nd; (2) correction for the neutron capture reactions of 147 Nd and 148 Nd; and (3) a conversion factor from fissions per initial heavy metal atom (FIMA) to the burnup unit GWd/t. In this study, the burnup values of the PIE data from the Mihama-3 and Genkai-1 PWRs, which were taken by the Japan Atomic Energy Research Institute, were re-evaluated using more accurate corrections for each of these three items. The PIE data were then re-analyzed using the SWAT and SWAT2 code systems with the JENDL-3.3 library. The re-evaluation of the effective fission yield of 148 Nd has an effect of 1.5-2.0% on burnup values. Considering the neutron capture reactions of 147 Nd and 148 Nd removes the dependence of the C/E values of 148 Nd on the burnup value. The conversion factor from FIMA (%) to GWd/t changes with the burnup value; its effect on the burnup evaluation is small for samples having burnup larger than 30 GWd/t. The analyses using the corrected burnup values showed that the difference between the calculated 148 Nd concentrations and the PIE data is approximately 1%, whereas it was 3-5% in prior analyses. This analysis indicates that the burnup values of samples from the Mihama-3 and Genkai-1 PWRs should be corrected by 2-3%. The effect of the re-evaluation of the burnup value on the neutron multiplication factor is an approximately 0.6% change for PIE samples having burnup larger than 30 GWd/t. Finally, a comparison between calculation results using a single pin-cell model and an assembly model was carried out. 
    Because the results agreed with each other within a few percent, we concluded that the single pin-cell model is suitable for the analysis of PIE samples and that the underestimation of plutonium isotopes, which occurred in the previous analyses, does not result from a geometry
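The FIMA-to-GWd/t conversion mentioned above can be sketched as follows, assuming roughly 200 MeV of recoverable energy per fission and a uranium-dominated initial heavy metal; the exact factor varies with fuel composition and burnup, which is why the paper treats it as burnup-dependent.

```python
# Hedged sketch of the FIMA -> GWd/t conversion: burnup in GWd/t follows from
# the number of fissions per tonne of initial heavy metal and the recoverable
# energy per fission (~200 MeV assumed here for illustration).

AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13
GWD_TO_J = 8.64e13          # 1 GW * 86400 s
M_HM = 238.0                # g/mol, initial heavy metal (uranium) assumed

def fima_to_gwd_per_t(fima_percent, e_fission_mev=200.0):
    atoms_per_tonne = 1e6 / M_HM * AVOGADRO      # heavy metal atoms per tonne
    fissions = atoms_per_tonne * fima_percent / 100.0
    return fissions * e_fission_mev * MEV_TO_J / GWD_TO_J

print(round(fima_to_gwd_per_t(1.0), 2))   # roughly 9.4 GWd/t per %FIMA
```

In practice the energy per fission shifts as plutonium builds in, so the conversion factor is not a single constant, consistent with the abstract's remark that it changes with burnup.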

  15. Efficiency calibration and measurement of self-absorption correction of environmental gamma spectroscopy of soils samples using Marinelli beaker

    International Nuclear Information System (INIS)

    Abdi, M. R.; Mostajaboddavati, M.; Hassanzadeh, S.; Faghihian, H.; Rezaee, Kh.; Kamali, M.

    2006-01-01

    A nonlinear function, in combination with a mixed-activity calibration method, is applied for fitting the experimental peak efficiency of HPGe spectrometers in the 59-2614 keV energy range. The preparation of Marinelli beaker standards of mixed gamma emitters and RG-set at secular equilibrium with its daughter radionuclides was studied. Standards were prepared by mixing known amounts of 133 Ba, 241 Am, 152 Eu, 207 Bi, 24 Na, Al 2 O 3 powder, and soil. The validity of these standards was checked by comparison with the certified standard reference materials RG-set and IAEA-Soil-6. Self-absorption was measured for the activity calculation of the gamma-ray lines of the 238 U series, the 232 Th series, 137 Cs, and 40 K in soil samples. Self-absorption in the sample depends on a number of factors, including sample composition, density, sample size, and gamma-ray energy. Seven Marinelli beaker standards were prepared with different degrees of compaction, with bulk density (ρ) of 1.000 to 1.600 g cm -3 . The detection efficiency versus density was obtained, and the equation of the self-absorption correction factors was calculated for soil samples
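The density-dependent correction can be sketched by fitting detection efficiency against bulk density and taking the ratio of efficiencies at the calibration and sample densities; the efficiency values below are illustrative, not the paper's.

```python
import numpy as np

# Illustrative full-energy peak efficiencies measured for Marinelli standards
# of different bulk density at one gamma-ray energy (values are invented).
density = np.array([1.0, 1.2, 1.4, 1.6])              # g/cm^3
eff     = np.array([0.0310, 0.0295, 0.0281, 0.0268])  # peak efficiency

# Simple linear fit of efficiency vs. density over the studied range.
slope, intercept = np.polyfit(density, eff, 1)

def self_absorption_factor(sample_rho, cal_rho=1.0):
    """Correction factor: efficiency at calibration density over efficiency
    at the sample's density. Multiply the raw activity by this factor."""
    eff_at = lambda r: slope * r + intercept
    return eff_at(cal_rho) / eff_at(sample_rho)

print(round(self_absorption_factor(1.5), 3))   # > 1: denser sample needs boosting
```

With seven standards, as in the paper, the fit (linear here purely for illustration) can be made per gamma-ray energy, giving an energy- and density-dependent correction equation.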

  16. Prompt Gamma Activation Analysis (PGAA): Technique of choice for nondestructive bulk analysis of returned comet samples

    International Nuclear Information System (INIS)

    Lindstrom, D.J.; Lindstrom, R.M.

    1989-01-01

    Prompt gamma activation analysis (PGAA) is a well-developed analytical technique. It involves irradiation of samples in an external neutron beam from a nuclear reactor, with simultaneous counting of the gamma rays produced in the sample by neutron capture. Capture of a neutron leads to an excited nucleus, which decays immediately to the ground state with the emission of energetic gamma rays. PGAA has several advantages over other techniques for the analysis of cometary materials: (1) it is nondestructive; (2) it can be used to determine abundances of a wide variety of elements, including most major and minor elements (Na, Mg, Al, Si, P, K, Ca, Ti, Cr, Mn, Fe, Co, Ni), volatiles (H, C, N, F, Cl, S), and some trace elements (those with high neutron capture cross sections, including B, Cd, Nd, Sm, and Gd); and (3) it is a true bulk analysis technique. Recent developments should improve the technique's sensitivity and accuracy considerably
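The counting relation underlying PGAA can be sketched as follows; the beam flux, detection efficiency, and sample mass are hypothetical, and the 10B cross section and 478 keV gamma yield are textbook-level values used only for illustration.

```python
# Hedged sketch of the PGAA counting relation: the prompt-gamma count rate for
# an element scales with the number of nuclei, the neutron capture (or reaction)
# cross section, the beam flux, and the detection efficiency.

AVOGADRO = 6.022e23
BARN = 1e-24  # cm^2

def prompt_gamma_rate(mass_g, molar_mass, sigma_barn, flux, det_eff, gamma_yield):
    """Expected count rate (counts/s) for one prompt-gamma line."""
    nuclei = mass_g / molar_mass * AVOGADRO
    return nuclei * sigma_barn * BARN * flux * det_eff * gamma_yield

# 10 mg of boron-10 in a sample, 1e8 n/cm^2/s beam, 0.1% detection efficiency,
# 478 keV line emitted in ~94% of captures (illustrative scenario).
rate = prompt_gamma_rate(0.010, 10.0, 3840.0, 1e8, 1e-3, 0.94)
print(rate)
```

Inverting the same relation, a measured line rate and known beam/detector parameters give the element mass, which is why elements with large capture cross sections (B, Cd, Gd, Sm) are especially sensitive.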

  17. Experimental study of laser ablation as sample introduction technique for inductively coupled plasma-mass spectrometry

    International Nuclear Information System (INIS)

    Van Winckel, S.

    2001-01-01

    The contribution consists of an abstract of a PhD thesis. In the PhD study, several complementary applications of laser ablation were investigated in order to characterise laser ablation (LA) experimentally as a sample introduction technique for ICP-MS. Three applications of LA as a sample introduction technique are discussed: (1) the microchemical analysis of the patina of weathered marble; (2) the possibility of measuring isotope ratios (in particular Pb isotope ratios in archaeological bronze artefacts); and (3) the determination of Si in Al as part of a dosimetric study of the BR2 reactor vessel

  18. Corrective Action Decision Document/Closure Report for Corrective Action Unit 266: Area 25 Building 3124 Leachfield, Nevada Test Site, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    NNSA/NV

    2000-02-17

    This Corrective Action Decision Document/Closure Report (CADD/CR) was prepared for Corrective Action Unit (CAU) 266, Area 25 Building 3124 Leachfield, in accordance with the Federal Facility Agreement and Consent Order. Located in Area 25 at the Nevada Test Site in Nevada, CAU 266 includes Corrective Action Site (CAS) 25-05-09. The Corrective Action Decision Document and Closure Report were combined into one report because sample data collected during the corrective action investigation (CAI) indicated that contaminants of concern (COCs) were either not present in the soil, or present at concentrations not requiring corrective action. This CADD/CR identifies and rationalizes the U.S. Department of Energy, Nevada Operations Office's recommendation that no corrective action was necessary for CAU 266. From February through May 1999, CAI activities were performed as set forth in the related Corrective Action Investigation Plan. Analytes detected during the three-stage CAI of CAU 266 were evaluated against preliminary action levels (PALs) to determine COCs, and the analysis of the data generated from soil collection activities indicated the PALs were not exceeded for total volatile/semivolatile organic compounds, total petroleum hydrocarbons, polychlorinated biphenyls, total Resource Conservation and Recovery Act metals, gamma-emitting radionuclides, isotopic uranium/plutonium, and strontium-90 for any of the samples. However, COCs were identified in samples from within the septic tank and distribution box; and the isotopic americium concentrations in the two soil samples did exceed PALs. Closure activities were performed at the site to address the COCs identified in the septic tank and distribution box. Further, no use restrictions were required to be placed on CAU 266 because the CAI revealed soil contamination to be less than the 100 millirems per year limit established by DOE Order 5400.5.

  20. Corrective Action Investigation Plan for Corrective Action Unit 166: Storage Yards and Contaminated Materials, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    David Strand

    2006-01-01

    Corrective Action Unit 166 is located in Areas 2, 3, 5, and 18 of the Nevada Test Site, which is 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit (CAU) 166 is comprised of the seven Corrective Action Sites (CASs) listed below: (1) 02-42-01, Cond. Release Storage Yd - North; (2) 02-42-02, Cond. Release Storage Yd - South; (3) 02-99-10, D-38 Storage Area; (4) 03-42-01, Conditional Release Storage Yard; (5) 05-19-02, Contaminated Soil and Drum; (6) 18-01-01, Aboveground Storage Tank; and (7) 18-99-03, Wax Piles/Oil Stain. These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. Additional information will be obtained by conducting a corrective action investigation (CAI) before evaluating corrective action alternatives and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on February 28, 2006, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office; Stoller-Navarro Joint Venture; and Bechtel Nevada. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 166. Appendix A provides a detailed discussion of the DQO methodology and the DQOs specific to each CAS. The scope of the CAI for CAU 166 includes the following activities: (1) Move surface debris and/or materials, as needed, to facilitate sampling. (2) Conduct radiological surveys. (3) Perform field screening. (4) Collect and submit environmental samples for laboratory analysis to determine if

  1. Remote sampling system in reprocessing: present and future perspective

    International Nuclear Information System (INIS)

    Garcha, J.S.; Balakrishnan, V.P.; Rao, M.K.

    1990-01-01

    For the process and inventory control of reprocessing plant operation, it is essential to analyse samples from the various process vessels to assess plant performance and, if needed, take corrective action on the operating parameters. In view of the very high radioactive inventory in the plant, these plants are operated remotely behind thick shielding. Liquid sampling also has to be carried out by remote techniques, as no direct approach is feasible. A vacuum-assisted air-lift method is employed to obtain samples from remotely located process vessels. A brief description of the present technique, the design criteria, and the various interlocks and manual operations involved in sampling and despatching samples to the analytical laboratory is given in the paper. A design approach for making the sampling system a fully automated remote operation is also presented. The use of custom-built robots and a dedicated computer for the various operations and interlocks has been envisaged to ensure a completely remotised system for adoption in future plants. (author). 2 figs., 2 tabs

  2. Rapid Measurement and Correction of Phase Errors from B0 Eddy Currents: Impact on Image Quality for Non-Cartesian Imaging

    Science.gov (United States)

    Brodsky, Ethan K.; Klaers, Jessica L.; Samsonov, Alexey A.; Kijowski, Richard; Block, Walter F.

    2014-01-01

    Non-Cartesian imaging sequences and navigational methods can be more sensitive to scanner imperfections that have little impact on conventional clinical sequences, an issue which has repeatedly complicated the commercialization of these techniques by frustrating transitions to multi-center evaluations. One such imperfection is phase errors caused by resonant frequency shifts from eddy currents induced in the cryostat by time-varying gradients, a phenomenon known as B0 eddy currents. These phase errors can have a substantial impact on sequences that use ramp sampling, bipolar gradients, and readouts at varying azimuthal angles. We present a method for measuring and correcting phase errors from B0 eddy currents and examine the results on two different scanner models. This technique yields significant improvements in image quality for high-resolution joint imaging on certain scanners. The results suggest that correction of short time B0 eddy currents in manufacturer provided service routines would simplify adoption of non-Cartesian sampling methods. PMID:22488532
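The correction step itself can be sketched with synthetic data: once the per-readout phase error from B0 eddy currents has been measured (e.g. from reference readouts), each acquired readout is multiplied by the conjugate phase before gridding and reconstruction. The signal model and phase shape below are invented for illustration.

```python
import numpy as np

n_samples = 256
t = np.arange(n_samples, dtype=float)

ideal = np.exp(2j * np.pi * 0.03 * t)       # synthetic ideal complex readout
phi = 0.4 * np.exp(-t / 80.0)               # decaying B0 eddy-current phase (rad)
acquired = ideal * np.exp(1j * phi)         # corrupted acquisition

# Correction: multiply by the conjugate of the measured phase error.
corrected = acquired * np.exp(-1j * phi)

print(np.max(np.abs(corrected - ideal)))    # residual near machine precision
```

In a real sequence `phi` differs per readout direction and gradient history, so the measurement step, not the multiplication, is where the method's effort lies.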

  3. Corrective Action Investigation Plan for Corrective Action Unit 551: Area 12 Muckpiles, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    Boehlecke, Robert F.

    2004-01-01

    This Corrective Action Investigation Plan (CAIP) contains project-specific information including facility descriptions, environmental sample collection objectives, and criteria for conducting site investigation activities at Corrective Action Unit (CAU) 551, Area 12 muckpiles, Nevada Test Site (NTS), Nevada. This CAIP has been developed in accordance with the 'Federal Facility Agreement and Consent Order' (FFACO) (1996) that was agreed to by the State of Nevada, the U.S. Department of Energy (DOE), and the U.S. Department of Defense. Corrective Action Unit 551 is located in Area 12 of the NTS, which is approximately 110 miles (mi) northwest of Las Vegas, Nevada (Figure 1-1). Area 12 is approximately 40 miles beyond the main gate to the NTS. Corrective Action Unit 551 is comprised of the four Corrective Action Sites (CASs) shown on Figure 1-1 and listed below: (1) 12-01-09, Aboveground Storage Tank and Stain; (2) 12-06-05, Muckpile; (3) 12-06-07, Muckpile; and (4) 12-06-08, Muckpile. Corrective Action Site 12-01-09 is located in Area 12 and consists of an above ground storage tank (AST) and associated stain. Corrective Action Site 12-06-05 is located in Area 12 and consists of a muckpile associated with the U12 B-Tunnel. Corrective Action Site 12-06-07 is located in Area 12 and consists of a muckpile associated with the U12 C-, D-, and F-Tunnels. Corrective Action Site 12-06-08 is located in Area 12 and consists of a muckpile associated with the U12 B-Tunnel. In keeping with common convention, the U12B-, C-, D-, and F-Tunnels will be referred to as the B-, C-, D-, and F-Tunnels. The corrective action investigation (CAI) will include field inspections, radiological surveys, and sampling of media, where appropriate. Data will also be obtained to support waste management decisions

  4. Determination of metals in air samples using X-ray fluorescence associated with the APDC preconcentration technique

    Energy Technology Data Exchange (ETDEWEB)

    Nardes, Raysa C.; Santos, Ramon S.; Sanches, Francis A.C.R.A.; Gama Filho, Hamilton S.; Oliveira, Davi F.; Anjos, Marcelino J., E-mail: rc.nardes@gmail.com, E-mail: ramonziosp@yahoo.com.br, E-mail: francissanches@gmail.com, E-mail: hamiltongamafilho@hotmail.com, E-mail: davi.oliveira@uerj.br, E-mail: marcelin@uerj.br [Universidade do Estado do Rio de Janeiro (UERJ), Rio de Janeiro, RJ (Brazil). Instituto de Fisica. Departamento de Fisica Aplicada e Termodinamica

    2015-07-01

    Air pollution has become one of the leading factors degrading quality of life for people in large urban centers. Studies indicate that particulate matter suspended in the atmosphere is directly associated with risks to public health; in addition, it can damage fauna, flora and public and cultural heritage. Inhalable particulate matter can cause the emergence and/or worsening of chronic diseases related to the respiratory system, as well as other conditions such as reduced physical stamina. In this study, we propose a new method to measure the concentration of total suspended particulate matter (TSP) in the air, using an impinger as the air sampling apparatus, preconcentration with APDC, and the Total Reflection X-ray Fluorescence (TXRF) technique to analyze the heavy metals present in the air. The samples were collected from five random points in the city of Rio de Janeiro, Brazil. The TXRF analyses were performed at the Brazilian Synchrotron Light Laboratory (LNLS). The technique proved viable because it was able to detect five metallic elements important to environmental studies: Cr, Fe, Ni, Cu and Zn. It showed substantial efficiency in determining the elemental concentration of air pollutants, at low cost. It can be concluded that this analysis of metals in air samples, using an impinger as the sample collection instrument together with a complexing agent (APDC), is viable: it is a low-cost technique, and it detected five metallic elements relevant to environmental studies of industrial emissions and urban traffic. (author)

  5. Determination of metals in air samples using X-ray fluorescence associated with the APDC preconcentration technique

    International Nuclear Information System (INIS)

    Nardes, Raysa C.; Santos, Ramon S.; Sanches, Francis A.C.R.A.; Gama Filho, Hamilton S.; Oliveira, Davi F.; Anjos, Marcelino J.

    2015-01-01

    Air pollution has become one of the leading factors degrading quality of life for people in large urban centers. Studies indicate that particulate matter suspended in the atmosphere is directly associated with risks to public health; in addition, it can damage fauna, flora and public and cultural heritage. Inhalable particulate matter can cause the emergence and/or worsening of chronic diseases related to the respiratory system, as well as other conditions such as reduced physical stamina. In this study, we propose a new method to measure the concentration of total suspended particulate matter (TSP) in the air, using an impinger as the air sampling apparatus, preconcentration with APDC, and the Total Reflection X-ray Fluorescence (TXRF) technique to analyze the heavy metals present in the air. The samples were collected from five random points in the city of Rio de Janeiro, Brazil. The TXRF analyses were performed at the Brazilian Synchrotron Light Laboratory (LNLS). The technique proved viable because it was able to detect five metallic elements important to environmental studies: Cr, Fe, Ni, Cu and Zn. It showed substantial efficiency in determining the elemental concentration of air pollutants, at low cost. It can be concluded that this analysis of metals in air samples, using an impinger as the sample collection instrument together with a complexing agent (APDC), is viable: it is a low-cost technique, and it detected five metallic elements relevant to environmental studies of industrial emissions and urban traffic. (author)

  6. A radioanalytical technique using (n,2n) reaction for the elemental analysis of samples

    International Nuclear Information System (INIS)

    Labor, M.

    1985-11-01

    A technique to determine the elemental composition of samples is reported. The technique is based on the internal standard method and involves the resolution of complex annihilation spectra. It has been applied to the determination of the mass of nitrogen, m_N, and that of potassium, m_K, in known masses of potassium nitrate. The percentage difference between the calculated and actual masses in 2 g and 3 g of potassium nitrate is 1.0 and 0.7, respectively, for potassium, and 1.0 for nitrogen. The use of more simultaneous equations than strictly necessary in solving for m_N and m_K is one of the advantages of the technique. (author)
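
    The advantage of using more simultaneous equations than unknowns can be illustrated with an ordinary least-squares solve: redundant spectral measurements average out noise. The sensitivity matrix and masses below are invented for illustration and are not values from the study.

    ```python
    import numpy as np

    # Hypothetical sensitivity coefficients (counts per gram) for N and K in
    # three spectral windows -- more equations (3) than unknowns (2).
    A = np.array([
        [120.0,  40.0],   # window 1: response to m_N, m_K
        [ 30.0, 150.0],   # window 2
        [ 80.0,  90.0],   # window 3
    ])
    m_true = np.array([0.277, 0.385])                   # illustrative masses (g)
    counts = A @ m_true + np.array([0.5, -0.3, 0.2])    # measurements with noise

    # Least-squares solution of the overdetermined system
    m_est, *_ = np.linalg.lstsq(A, counts, rcond=None)
    print(m_est)  # close to m_true; redundancy averages out measurement noise
    ```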

  7. The "clover technique" as a novel approach for correction of post-traumatic tricuspid regurgitation.

    Science.gov (United States)

    Alfieri, O; De Bonis, M; Lapenna, E; Agricola, E; Quarti, A; Maisano, F

    2003-07-01

    To describe a novel technique, named "clover," to correct complex post-traumatic tricuspid valve lesions. Five patients with severe post-traumatic tricuspid insufficiency underwent valve reconstruction with the clover technique, a new surgical approach that consists of stitching together the middle point of the free edges of the tricuspid leaflets, producing a clover-shaped valve. The mechanism of tricuspid regurgitation was complex in all patients, and right ventricular function was always moderately to severely depressed. An echocardiographic study was performed after cardiopulmonary bypass, at discharge, and at follow-up. Cardiopulmonary bypass time was 32 +/- 6.3 minutes and crossclamp time was 23 +/- 7.4 minutes. There was no hospital mortality or morbidity. Intraoperative transesophageal and predischarge transthoracic echocardiography showed perfect results in all patients. No late deaths occurred. At the latest follow-up, extending to 14.2 months (mean 11.3; median 12.4), all patients were asymptomatic (New York Heart Association class I) with trivial (2 patients) or no residual regurgitation (3 patients) on 2-dimensional echocardiogram. No transvalvular gradient was revealed in any patient. A significant reduction of the right ventricular end-diastolic dimensions was noted as well (from 54 +/- 7.1 mm to 40 +/- 7.5 mm, P < .05). The clover technique thus proved a reliable option for tricuspid valve repair in case of severe traumatic tricuspid valve insufficiency, leading to very satisfactory mid-term results even in the presence of complex lesions or dilatation and deterioration of the right ventricle.

  8. A novel non-invasive diagnostic sampling technique for cutaneous leishmaniasis.

    Directory of Open Access Journals (Sweden)

    Yasaman Taslimi

    2017-07-01

    Accurate diagnosis of cutaneous leishmaniasis (CL) is important for chemotherapy and epidemiological studies. Common approaches for Leishmania detection involve the invasive collection of specimens for direct identification of amastigotes by microscopy and the culturing of promastigotes from infected tissues. Although these techniques are highly specific, they require highly skilled health workers and carry the inherent risks of all invasive procedures, such as pain and the risk of bacterial and fungal super-infection. Therefore, it is essential to reduce the discomfort, potential infection and scarring caused by invasive diagnostic approaches, especially for children. In this report, we present a novel non-invasive method, painless, rapid and user-friendly, using sequential tape strips for sampling and isolation of DNA from the surface of active and healed skin lesions of CL patients. A total of 119 patients suspected of suffering from cutaneous leishmaniasis with different clinical manifestations were recruited and samples were collected both from their lesions and from uninfected areas. In addition, 15 fungal-infected lesions and 54 areas of healthy skin were examined. The duration of sampling is short (less than one minute) and species identification by PCR is highly specific and sensitive. The sequential tape stripping sampling method is a sensitive, non-invasive and cost-effective alternative to traditional diagnostic assays and it is suitable for field studies as well as for use in health care centers.

  9. p-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results.

    Science.gov (United States)

    Simonsohn, Uri; Nelson, Leif D; Simmons, Joseph P

    2014-11-01

    Journals tend to publish only statistically significant evidence, creating a scientific record that markedly overstates the size of effects. We provide a new tool that corrects for this bias without requiring access to nonsignificant results. It capitalizes on the fact that the distribution of significant p values, p-curve, is a function of the true underlying effect. Researchers armed only with sample sizes and test results of the published findings can correct for publication bias. We validate the technique with simulations and by reanalyzing data from the Many-Labs Replication project. We demonstrate that p-curve can arrive at conclusions opposite those of existing tools by reanalyzing the meta-analysis of the "choice overload" literature. © The Author(s) 2014.
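
    The estimation logic can be sketched for the simple case of two-sample t-tests: under a candidate effect size d, each significant p-value is transformed into a "pp-value" that is uniformly distributed exactly when d matches the true effect, so the estimate is the d whose pp-values look most uniform. This is a minimal illustration with simulated studies, not the authors' published implementation.

    ```python
    import numpy as np
    from scipy import stats

    def pp_values(p_obs, n_per_group, d):
        """Probability transform of significant two-tailed p-values under effect d."""
        df = 2 * n_per_group - 2
        ncp = d * np.sqrt(n_per_group / 2.0)       # noncentrality of a two-sample t
        t_obs = stats.t.isf(np.asarray(p_obs) / 2.0, df)
        t_crit = stats.t.isf(0.025, df)            # two-tailed .05 threshold
        power = stats.nct.sf(t_crit, df, ncp)
        return stats.nct.sf(t_obs, df, ncp) / power  # uniform iff d is the true effect

    def estimate_effect(p_obs, n_per_group, grid=np.linspace(0.0, 1.5, 151)):
        # Pick the d whose pp-values are closest to uniform (smallest KS statistic)
        ks = [stats.kstest(pp_values(p_obs, n_per_group, d), "uniform").statistic
              for d in grid]
        return grid[int(np.argmin(ks))]

    # Simulate studies with true d = 0.5, n = 30 per group; keep only significant ones
    rng = np.random.default_rng(0)
    df, ncp = 58, 0.5 * np.sqrt(15.0)
    t = stats.nct.rvs(df, ncp, size=2000, random_state=rng)
    p = 2 * stats.t.sf(np.abs(t), df)
    significant = p[(p < 0.05) & (t > 0)]
    print(estimate_effect(significant, 30))        # close to the true d = 0.5
    ```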

  10. The Effect of Dynamic Written Corrective Feedback on Iranian Elementary Learners’ Writing

    Directory of Open Access Journals (Sweden)

    Amaneh Kamalian

    2014-09-01

    Error correction is probably the most widely used technique for responding to students' writing. Although many studies have attempted to investigate the efficacy of providing error correction through different types of written corrective feedback (WCF), there has been relatively little research on one new approach to writing pedagogy in foreign language learning, called dynamic WCF. The purpose of the current research was to test the effect of WCF on the improvement of the writing abilities of EFL learners. Two groups of EFL students who were learning English as a foreign language participated in this study. Both groups (A and B) were given treatments. Core components of the treatment included having the students write a composition every session (twice a week) and the teacher providing the students with feedback (dynamic WCF or direct WCF) on their writing tasks. Group A (n=24) was instructed through dynamic WCF, which is intended to improve L2 writing ability in general by raising learners' linguistic awareness through the error corrections performed by the teacher. Group B (n=22), on the other hand, received direct WCF on their writings. Four essential characteristics were taken into consideration for the error correction, i.e. feedback needed to be manageable, meaningful, timely and constant. The data obtained for Group A and Group B were analyzed using paired-samples t-tests, and the results indicated that both groups had improved their writing abilities. An independent-samples t-test further revealed that Group A, which received dynamic WCF, outperformed Group B.
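
    The analysis pattern reported (paired-samples tests for within-group improvement, an independent-samples test for the between-group comparison) can be sketched with simulated scores; the numbers below are invented, not the study's data.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    # Illustrative pre/post writing scores for two hypothetical groups
    pre_a = rng.normal(60, 8, 24)
    post_a = pre_a + rng.normal(8, 4, 24)   # dynamic WCF group: larger gains
    pre_b = rng.normal(60, 8, 22)
    post_b = pre_b + rng.normal(4, 4, 22)   # direct WCF group: smaller gains

    # Within-group improvement: paired-samples t-tests
    t_a, p_a = stats.ttest_rel(post_a, pre_a)
    t_b, p_b = stats.ttest_rel(post_b, pre_b)

    # Between-group comparison of the gain scores: independent-samples t-test
    t_ab, p_ab = stats.ttest_ind(post_a - pre_a, post_b - pre_b)
    print(p_a, p_b, p_ab)
    ```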

  11. Respiratory Motion Correction for Compressively Sampled Free Breathing Cardiac MRI Using Smooth l1-Norm Approximation

    Directory of Open Access Journals (Sweden)

    Muhammad Bilal

    2018-01-01

    Transformed-domain sparsity of Magnetic Resonance Imaging (MRI) has recently been used to reduce the acquisition time in conjunction with compressed sensing (CS) theory. Respiratory motion during an MR scan results in strong blurring and ghosting artifacts in recovered MR images. To improve the quality of the recovered images, motion needs to be estimated and corrected. In this article, a two-step approach is proposed for the recovery of cardiac MR images in the presence of free-breathing motion. In the first step, compressively sampled MR images are recovered by solving an optimization problem using a gradient descent algorithm. The L1-norm-based regularizer used in the optimization problem is approximated by a hyperbolic tangent function. In the second step, a block matching algorithm, known as Adaptive Rood Pattern Search (ARPS), is exploited to estimate and correct respiratory motion among the recovered images. The framework is tested on free-breathing simulated and in vivo 2D cardiac cine MRI data. Simulation results show improved structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and mean square error (MSE) with different acceleration factors for the proposed method. Experimental results also provide a comparison between k-t FOCUSS with MEMC and the proposed method.
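
    The first recovery step can be sketched in one dimension: approximate |x| by the smooth surrogate x·tanh(x/ε) and run gradient descent on the resulting compressed-sensing objective. Problem sizes, step length and λ below are illustrative choices, not the paper's settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, m, k = 120, 60, 4                       # signal length, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(0.0, 1.0, k)
    Phi = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))   # compressive sampling operator
    y = Phi @ x_true                           # undersampled measurements

    lam, eps, step = 0.05, 0.01, 0.1
    x = np.zeros(n)
    for _ in range(20000):
        grad_fid = Phi.T @ (Phi @ x - y)       # gradient of 0.5*||y - Phi x||^2
        u = np.clip(x / eps, -50.0, 50.0)      # clipped to avoid overflow in cosh
        # derivative of the smooth |x| surrogate x*tanh(x/eps)
        grad_reg = np.tanh(u) + u / np.cosh(u) ** 2
        x -= step * (grad_fid + lam * grad_reg)

    rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(rel_err)                             # small: the sparse signal is recovered
    ```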

  12. Slurry sampling high-resolution continuum source electrothermal atomic absorption spectrometry for direct beryllium determination in soil and sediment samples after elimination of SiO interference by least-squares background correction.

    Science.gov (United States)

    Husáková, Lenka; Urbanová, Iva; Šafránková, Michaela; Šídová, Tereza

    2017-12-01

    In this work a simple, efficient, and environmentally-friendly method is proposed for determination of Be in soil and sediment samples employing slurry sampling and high-resolution continuum source electrothermal atomic absorption spectrometry (HR-CS-ETAAS). The spectral effects originating from SiO species were identified and successfully corrected by means of a mathematical correction algorithm. Fractional factorial design has been employed to assess the parameters affecting the analytical results and especially to help in the development of the slurry preparation and optimization of measuring conditions. The effects of seven analytical variables, including particle size, concentration of glycerol and HNO3 for stabilization and analyte extraction, respectively, the effect of ultrasonic agitation for slurry homogenization, concentration of chemical modifier, and pyrolysis and atomization temperature, were investigated by a 2^(7-3) replicated (n = 3) design. Using the optimized experimental conditions, the proposed method allowed the determination of Be with a detection limit of 0.016 mg kg^-1 and a characteristic mass of 1.3 pg. Optimum results were obtained after preparing the slurries by weighing 100 mg of a sample with particle size < 54 µm and adding 25 mL of 20% w/w glycerol. The use of 1 µg Rh and 50 µg citric acid was found satisfactory for the analyte stabilization. Accurate data were obtained with the use of matrix-free calibration. The accuracy of the method was confirmed by analysis of two certified reference materials (NIST SRM 2702 Inorganics in Marine Sediment and IGI BIL-1 Baikal Bottom Silt) and by comparison of the results obtained for ten real samples by slurry sampling with those determined after microwave-assisted extraction by inductively coupled plasma time-of-flight mass spectrometry (TOF-ICP-MS). The reported method has a precision better than 7%. Copyright © 2017 Elsevier B.V. All rights reserved.
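
    A 2^(7-3) fractional factorial design of the kind used for this screening can be generated from four base factors plus three aliased columns; the generators E=ABC, F=ABD, G=ACD below are a common illustrative choice, not necessarily the ones used in the paper.

    ```python
    from itertools import product

    # Four base factors give 16 runs; the remaining three factor columns are
    # generated from products of the base columns (aliasing).
    runs = []
    for a, b, c, d in product([-1, 1], repeat=4):
        e, f, g = a * b * c, a * b * d, a * c * d
        runs.append((a, b, c, d, e, f, g))

    print(len(runs))  # 16 runs instead of 128 for the full 2^7 design
    ```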

  13. Correction of oral contrast artifacts in CT-based attenuation correction of PET images using an automated segmentation algorithm

    International Nuclear Information System (INIS)

    Ahmadian, Alireza; Ay, Mohammad R.; Sarkar, Saeed; Bidgoli, Javad H.; Zaidi, Habib

    2008-01-01

    Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium with high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μmap), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, high CT number object segmentation using combined region- and boundary-based segmentation and second, object classification to bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of generated μmaps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique in
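
    The final conversion step, from (corrected) CT numbers to linear attenuation coefficients at 511 keV, is typically a piecewise-linear ("bilinear") curve. A minimal sketch follows; the breakpoint and bone-segment slope are illustrative placeholders, not the calibration used in this study.

    ```python
    import numpy as np

    MU_WATER_511 = 0.096   # cm^-1, linear attenuation of water at 511 keV
    BREAK_HU = 50.0        # breakpoint between soft-tissue and bone segments (assumed)
    BONE_SLOPE = 5.1e-5    # cm^-1 per HU above the breakpoint (illustrative)

    def hu_to_mu511(hu):
        """Piecewise-linear mapping of CT numbers (HU) to mu at 511 keV."""
        hu = np.asarray(hu, dtype=float)
        soft = MU_WATER_511 * (hu + 1000.0) / 1000.0      # air-to-water segment
        bone = (MU_WATER_511 * (BREAK_HU + 1000.0) / 1000.0
                + BONE_SLOPE * (hu - BREAK_HU))           # denser-than-tissue segment
        return np.where(hu <= BREAK_HU, soft, bone).clip(min=0.0)

    print(hu_to_mu511([-1000, 0, 1000]))  # air ~0, water ~0.096, bone higher
    ```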

  14. Sample preparation for large-scale bioanalytical studies based on liquid chromatographic techniques.

    Science.gov (United States)

    Medvedovici, Andrei; Bacalum, Elena; David, Victor

    2018-01-01

    The quality of the analytical data obtained in large-scale, long-term bioanalytical studies based on liquid chromatography depends on a number of experimental factors, including the choice of sample preparation method. This review discusses this tedious part of bioanalytical studies, applied to large-scale samples and using liquid chromatography coupled with different detector types as the core analytical technique. The main sample preparation methods included in this paper are protein precipitation, liquid-liquid extraction, solid-phase extraction, derivatization and their variants. They are discussed in terms of analytical performance, fields of application, advantages and disadvantages. The cited literature covers mainly the analytical achievements of the last decade, although several earlier papers that have become more valuable over time are also included. Copyright © 2017 John Wiley & Sons, Ltd.

  15. Use of an oscillation technique to measure effective cross-sections of fissionable samples in critical assemblies

    International Nuclear Information System (INIS)

    Tretiakoff, O.; Vidal, R.; Carre, J.C.; Robin, M.

    1964-01-01

    The authors describe the technique used to measure the effective absorption and neutron-yield cross-sections of a fissionable sample. These two values are determined by analysing the signals due to the variation in reactivity (over-all signal) and the local perturbation in the flux (local signal) produced by the oscillating sample. These signals are standardized by means of a set of samples containing well-known quantities of fissionable material (235U) and of an absorber, boron. The measurements are made for different neutron spectra, characterized by the lattice parameters of the central zone within which the sample moves. This technique is used to study the effective cross-sections of uranium-plutonium alloys for different heavy-water and graphite lattices in the MINERVE and MARIUS critical assemblies. The same experiments are carried out on fuel samples of different irradiations in order to determine the evolution of effective cross-sections as a function of the spectrum and the irradiations. (authors) [fr

  16. Toward greener analytical techniques for the absolute quantification of peptides in pharmaceutical and biological samples.

    Science.gov (United States)

    Van Eeckhaut, Ann; Mangelings, Debby

    2015-09-10

    Peptide-based biopharmaceuticals represent one of the fastest growing classes of new drug molecules. New reaction types included in synthesis strategies to reduce the rapid metabolism of peptides, along with the availability of new formulation and delivery technologies, have resulted in increased marketing of peptide drug products. In this regard, the development of analytical methods for the quantification of peptides in pharmaceutical and biological samples is of utmost importance. From the sample preparation step to analysis by chromatographic or electrophoretic methods, many difficulties must be tackled. Recent developments in analytical chemistry place ever more emphasis on green analytical techniques. This review discusses the progress made in, and the challenges observed during, green analytical method development for the quantification of peptides in pharmaceutical and biological samples. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. A cost-effective technique for integrating personal radiation dose assessment with personal gravimetric sampling

    International Nuclear Information System (INIS)

    Strydom, R.; Rolle, R.; Van der Linde, A.

    1992-01-01

    During recent years there has been an increasing awareness internationally of radiation levels in the mining and milling of radioactive ores, including those from non-uranium mines. A major aspect of radiation control is concerned with the measurement of radiation levels and the assessment of radiation doses incurred by individual workers. Current techniques available internationally for personnel monitoring of radiation exposures are expensive and there is a particular need to reduce the cost of personal radiation monitoring in South African gold mines because of the large labour force employed. In this regard the obvious benefits of integrating personal radiation monitoring with existing personal monitoring systems already in place in South African gold mines should be exploited. A system which can be utilized for this purpose is personal gravimetric sampling. A new cost-effective technique for personal radiation monitoring, which can be fully integrated with the personal gravimetric sampling strategy being implemented on mines, has been developed in South Africa. The basic principles of this technique and its potential in South African mines are described. 9 refs., 7 figs

  18. Correction factors for photon spectrometry in nuclear parameters study

    International Nuclear Information System (INIS)

    Patrao, Karla Cristina de Souza

    2004-10-01

    The goal of this work was to determine, with metrological rigor, the correction factors for XX, Xγ and γγ coincidences and the efficiency transfer factors for use in gamma spectrometry. To this end, the nuclear parameters of a nuclide used in diagnostic medicine (201Tl) were determined, and two environmental samples, of regular and irregular geometry, consisting of residues (ashes and slag) from the nuclear industry, were standardized. The results show that the adopted methodology is valid and can be applied to many different nuclides, including nuclides with complex decay schemes, using only photon spectrometry techniques with semiconductor detectors. (author)

  19. Non-Uniformity Correction Using Nonlinear Characteristic Performance Curves for Calibration

    Science.gov (United States)

    Lovejoy, McKenna Roberts

    Infrared imaging is an expansive field with many applications. Advances in infrared technology have led to greater demand from both commercial and military sectors. However, a known problem with infrared imaging is its non-uniformity. This non-uniformity stems from the fact that each pixel in an infrared focal plane array has its own photoresponse. Many factors such as exposure time, temperature, and amplifier choice affect how the pixels respond to incoming illumination and thus impact image uniformity. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration-based techniques commonly use a linear model to approximate the nonlinear response. This often leaves unacceptable levels of residual non-uniformity. Calibration techniques often have to be repeated during use to continually correct the image. In this dissertation, alternatives to linear NUC algorithms are investigated. The goal of this dissertation is to determine and compare nonlinear non-uniformity correction algorithms. Ideally the results will provide better NUC performance, resulting in less residual non-uniformity, and reduce the need for recalibration. This dissertation considers new approaches to nonlinear NUC such as higher-order polynomials and exponentials. More specifically, a new gain equalization algorithm has been developed. The various nonlinear non-uniformity correction algorithms are compared with common linear non-uniformity correction algorithms. Performance is compared based on RMS errors, residual non-uniformity, and the impact quantization has on correction. Performance is improved by identifying and replacing bad pixels prior to correction. Two bad pixel identification and replacement techniques are investigated and compared. Performance is presented in the form of simulation results as well as before and after images taken with short-wave infrared cameras. The initial results show, using a third order
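
    A calibration-based polynomial NUC of the kind compared in such work can be sketched by fitting a third-order polynomial per pixel from a few uniform-illumination calibration frames; the pixel response model and all numbers here are simulated, not measurements from the dissertation.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    h, w = 4, 4
    levels = np.array([0.1, 0.3, 0.5, 0.7, 0.9])   # calibration illumination levels

    # Simulated nonlinear, nonuniform pixel response: per-pixel gain, offset
    # and curvature (mild saturation via tanh)
    gain = rng.uniform(0.8, 1.2, (h, w))
    offset = rng.uniform(-0.05, 0.05, (h, w))
    curvature = rng.uniform(1.2, 1.8, (h, w))

    def respond(level):
        return gain * np.tanh(curvature * level) + offset

    # Fit a 3rd-order polynomial per pixel mapping raw response -> true level
    frames = np.stack([respond(L) for L in levels])          # shape (5, h, w)
    coeffs = np.empty((h, w, 4))
    for i in range(h):
        for j in range(w):
            coeffs[i, j] = np.polyfit(frames[:, i, j], levels, 3)

    def correct(raw):
        out = np.empty_like(raw)
        for i in range(h):
            for j in range(w):
                out[i, j] = np.polyval(coeffs[i, j], raw[i, j])
        return out

    raw_scene = respond(0.6)            # uniform scene at an uncalibrated level
    corrected = correct(raw_scene)
    print(raw_scene.std(), corrected.std())  # residual non-uniformity drops sharply
    ```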

  20. 238U and 232Th concentration in rock samples using alpha autoradiography and gamma spectroscopy techniques

    International Nuclear Information System (INIS)

    Hafez, A.F.; El-Farrash, A.H.; Yousef, H.A.

    2009-01-01

    The activity concentrations of uranium and thorium were measured for rock samples selected from the Dahab region at the southern tip of Sinai, in order to detect any harmful radiation that could affect tourists, Dahab being an important open tourism area and economic resource in Egypt. The activity concentrations of uranium and thorium in the rock samples were measured using two techniques: the first is the alpha autoradiography technique with LR-115 and CR-39 detectors, and the second is the gamma spectroscopy technique with a NaI(Tl) detector. It was found that the average activity concentrations of uranium and thorium using the alpha autoradiography technique ranged from 6.41-49.31 Bq kg^-1 and 4.86-40.87 Bq kg^-1, respectively, while those obtained with the gamma detector ranged from 6.70-49.50 Bq kg^-1 and 4.47-42.33 Bq kg^-1, respectively. From the obtained data it can be concluded that there is no radioactive health hazard for humans and living beings in the area under investigation. No significant differences were found between the thorium-to-uranium ratios calculated with the two techniques

  1. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set—Effect of Pasteurization

    Science.gov (United States)

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-01-01

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified. PMID:26927169
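
    A correction algorithm of this kind is, at its simplest, a linear re-calibration of device readings against chemical reference values; the fat values below are invented for illustration, not data from the study.

    ```python
    import numpy as np

    # Paired chemical reference values and analyzer readings, g/dL (illustrative)
    reference = np.array([2.1, 2.8, 3.4, 3.9, 4.6, 5.2])
    device    = np.array([2.5, 3.1, 3.9, 4.3, 5.1, 5.6])   # systematic over-reading

    # Least-squares linear correction: corrected = a * device + b
    a, b = np.polyfit(device, reference, 1)
    corrected = a * device + b

    bias_before = np.mean(device - reference)
    bias_after = np.mean(corrected - reference)
    print(bias_before, bias_after)   # mean bias shrinks to ~0 after correction
    ```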

  2. Correction procedures for C-14 dates

    International Nuclear Information System (INIS)

    McKerrell, H.

    1975-01-01

    There are two quite separate criteria to satisfy before accepting as valid the corrections to C-14 dates which have been indicated for some years now by the bristlecone pine calibration. Firstly the correction figures have to be based upon all the available tree-ring data and derived in a manner that is mathematically sound, and secondly the correction figures have to produce accurate results on C-14 dates from archaeological test samples of known historical date, these covering as wide a period as possible. Neither of these basic prerequisites has yet been fully met. Thus the two-fold purpose of this paper is to bring together, and to compare with an independently based procedure, the various correction curves or tables that have been published up to Spring 1974, as well as to detail the correction results on reliable, historically dated Egyptian, Helladic and Minoan test samples from 3100 B.C. The nomenclature followed is strictly that adopted by the primary dating journal Radiocarbon, all C-14 dates quoted thus relate to the 5568 year half-life and the standard AD/BC system. (author)

  3. Atmospheric pressure surface sampling/ionization techniques for direct coupling of planar separations with mass spectrometry.

    Science.gov (United States)

    Pasilis, Sofie P; Van Berkel, Gary J

    2010-06-18

    Planar separations, which include thin layer chromatography and gel electrophoresis, are in widespread use as important and powerful tools for conducting separations of complex mixtures. To increase the utility of planar separations, new methods are needed that allow in situ characterization of the individual components of the separated mixtures. A large number of atmospheric pressure surface sampling and ionization techniques for use with mass spectrometry have emerged in the past several years, and several have been investigated as a means for mass spectrometric read-out of planar separations. In this article, we review the atmospheric pressure surface sampling and ionization techniques that have been used for the read-out of planar separation media. For each technique, we briefly explain the operational basics and discuss the analyte type for which it is appropriate and some specific applications from the literature. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  4. Waste minimization in analytical chemistry through innovative sample preparation techniques

    International Nuclear Information System (INIS)

    Smith, L. L.

    1998-01-01

    water samples. In this SPME technique, a fused-silica fiber coated with a polymeric film is exposed to the sample, extraction is allowed to take place, and then the analytes are thermally desorbed for GC analysis. Unlike liquid-liquid extraction or solid-phase extraction, SPME consumes all of the extracted sample in the analysis, significantly reducing the required sample volume

  5. Detection of irradiated spices using photo-stimulated luminescence technique (PSL)

    Energy Technology Data Exchange (ETDEWEB)

    Ramli, Ros Anita Ahmad; Yasir, Muhamad Samudi [Faculty of Science and Technology, National University of Malaysia, Bangi, 43000 Kajang, Selangor (Malaysia); Othman, Zainon; Abdullah, Wan Saffiey Wan [Malaysian Nuclear Agency, Bangi 43000 Kajang, Selangor (Malaysia)

    2014-09-03

    The photo-stimulated luminescence (PSL) technique was applied to detect irradiated black pepper (Piper nigrum), cinnamon (Cinnamomum verum) and turmeric (Curcuma longa) after dark storage for 1 day, 3 months and 6 months. Using screening and calibrated PSL, all samples were correctly classified as either non-irradiated or irradiated with doses of 1, 5 and 10 kGy. The PSL photon counts (PCs) of the irradiated spices increased with increasing dose, with turmeric showing the highest sensitivity to irradiation compared with black pepper and cinnamon. The differences in response are probably attributable to the varying quantity and quality of silicate minerals present in each spice sample. The PSL signals of all irradiated samples decreased after 3 and 6 months of storage. The results of this study provide a useful database on the applicability of the PSL technique for the detection of irradiated Malaysian spices.
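
Screening PSL classifies a sample by comparing its photon count against two thresholds. As a minimal sketch (the threshold values of 700 and 5000 counts per 60 s are assumed here from the EN 13751 screening convention, not taken from this study):

```python
def classify_psl(photon_count, t1=700, t2=5000):
    """Classify a screening-PSL measurement (photon counts per 60 s).

    Thresholds t1/t2 follow the EN 13751 convention (assumed values):
    below t1 -> likely not irradiated, above t2 -> likely irradiated.
    """
    if photon_count < t1:
        return "negative"       # likely not irradiated
    elif photon_count > t2:
        return "positive"       # likely irradiated
    return "intermediate"       # needs calibrated-PSL follow-up

print(classify_psl(250))    # -> negative
print(classify_psl(12000))  # -> positive
print(classify_psl(2300))   # -> intermediate
```

Samples in the intermediate band are re-measured with calibrated PSL, which is how the dose dependence reported above is resolved.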

  6. Detection of irradiated spices using photo-stimulated luminescence technique (PSL)

    International Nuclear Information System (INIS)

    Ramli, Ros Anita Ahmad; Yasir, Muhamad Samudi; Othman, Zainon; Abdullah, Wan Saffiey Wan

    2014-01-01

    The photo-stimulated luminescence (PSL) technique was applied to detect irradiated black pepper (Piper nigrum), cinnamon (Cinnamomum verum) and turmeric (Curcuma longa) after dark storage for 1 day, 3 months and 6 months. Using screening and calibrated PSL, all samples were correctly classified as either non-irradiated or irradiated with doses of 1, 5 and 10 kGy. The PSL photon counts (PCs) of the irradiated spices increased with increasing dose, with turmeric showing the highest sensitivity to irradiation compared with black pepper and cinnamon. The differences in response are probably attributable to the varying quantity and quality of silicate minerals present in each spice sample. The PSL signals of all irradiated samples decreased after 3 and 6 months of storage. The results of this study provide a useful database on the applicability of the PSL technique for the detection of irradiated Malaysian spices

  7. Detection of irradiated spices using photo-stimulated luminescence technique (PSL)

    Science.gov (United States)

    Ramli, Ros Anita Ahmad; Yasir, Muhamad Samudi; Othman, Zainon; Abdullah, Wan Saffiey Wan

    2014-09-01

    The photo-stimulated luminescence (PSL) technique was applied to detect irradiated black pepper (Piper nigrum), cinnamon (Cinnamomum verum) and turmeric (Curcuma longa) after dark storage for 1 day, 3 months and 6 months. Using screening and calibrated PSL, all samples were correctly classified as either non-irradiated or irradiated with doses of 1, 5 and 10 kGy. The PSL photon counts (PCs) of the irradiated spices increased with increasing dose, with turmeric showing the highest sensitivity to irradiation compared with black pepper and cinnamon. The differences in response are probably attributable to the varying quantity and quality of silicate minerals present in each spice sample. The PSL signals of all irradiated samples decreased after 3 and 6 months of storage. The results of this study provide a useful database on the applicability of the PSL technique for the detection of irradiated Malaysian spices.

  8. Technique of Antireflux Procedure without Creating Submucosal Tunnel for Surgical Correction of Vesicoureteric Reflux during Bladder Closure in Exstrophy.

    Science.gov (United States)

    Sunil, Kanoujia; Gupta, Archika; Chaubey, Digamber; Pandey, Anand; Kureel, Shiv Narain; Verma, Ajay Kumar

    2018-01-01

    To report the clinical application of a new surgical technique of antireflux procedure without creating a submucosal tunnel for surgical correction of vesicoureteric reflux during bladder closure in exstrophy. Based on the published experimental technique, the procedure was clinically executed over the last 18 months in seven patients with classic bladder exstrophy and a small bladder plate with polyps, in whom the creation of a submucosal tunnel was not possible. The ureters were mobilized. A rectangular patch of bladder mucosa at the trigone was removed, exposing the detrusor. The mobilized ureters were advanced, crossed and anchored to the exposed detrusor parallel to each other. Reconstruction included bladder and epispadias repair with abdominal wall closure. The outcome was measured by assessing complications, abolition of reflux on cystogram, and upper tract status. On 3-month follow-up cystogram, reflux was absent in all patients. Follow-up ultrasound revealed mild dilatation of the pelvis and ureter in one. This technique of extra-mucosal ureteric reimplantation without the creation of a submucosal tunnel is simple to execute, without added risk or complications, and effectively provides an antireflux mechanism for the preservation of the upper tract in bladder exstrophy. With this technique, reflux can be prevented from the very beginning of exstrophy reconstruction.

  9. High-throughput droplet analysis and multiplex DNA detection in the microfluidic platform equipped with a robust sample-introduction technique

    International Nuclear Information System (INIS)

    Chen, Jinyang; Ji, Xinghu; He, Zhike

    2015-01-01

    In this work, a simple, flexible and low-cost sample-introduction technique was developed and integrated with a droplet platform. The sample-introduction strategy connects a positive-pressure input device, the sample container and the microfluidic chip through Tygon tubing with a homemade polydimethylsiloxane (PDMS) adaptor, so that the sample is delivered into the microchip from the sample container under positive pressure. The technique is robust and compatible enough to be integrated with T-junction, flow-focusing or valve-assisted droplet microchips. By choosing a PDMS adaptor of the proper dimensions, the microchip can be flexibly equipped with various types of familiar sample containers, making sampling more straightforward without tedious sample transfer or loading. Convenient sample changing is achieved simply by moving the adaptor from one sample container to another. Benefiting from the proposed technique, a time-dependent concentration gradient was generated and applied to quantum dot (QD)-based fluorescence barcoding within the droplet chip. High-throughput droplet screening was preliminarily demonstrated through investigation of the quenching efficiency of a ruthenium complex on the fluorescence of the QDs. More importantly, a multiplex DNA assay was successfully carried out in the integrated system, demonstrating its practicability and potential in high-throughput biosensing. - Highlights: • A simple, robust and low-cost sample-introduction technique was developed. • Convenient and flexible sample changing was achieved in a microfluidic system. • A novel strategy of concentration-gradient generation was presented for barcoding. • High-throughput droplet screening could be realized in the integrated platform. • A multiplex DNA assay was successfully carried out in the droplet platform.

  10. High-throughput droplet analysis and multiplex DNA detection in the microfluidic platform equipped with a robust sample-introduction technique

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Jinyang; Ji, Xinghu [Key Laboratory of Analytical Chemistry for Biology and Medicine (Ministry of Education), College of Chemistry and Molecular Sciences, Wuhan University, Wuhan 430072 (China); He, Zhike, E-mail: zhkhe@whu.edu.cn [Key Laboratory of Analytical Chemistry for Biology and Medicine (Ministry of Education), College of Chemistry and Molecular Sciences, Wuhan University, Wuhan 430072 (China); Suzhou Institute of Wuhan University, Suzhou 215123 (China)

    2015-08-12

    In this work, a simple, flexible and low-cost sample-introduction technique was developed and integrated with a droplet platform. The sample-introduction strategy connects a positive-pressure input device, the sample container and the microfluidic chip through Tygon tubing with a homemade polydimethylsiloxane (PDMS) adaptor, so that the sample is delivered into the microchip from the sample container under positive pressure. The technique is robust and compatible enough to be integrated with T-junction, flow-focusing or valve-assisted droplet microchips. By choosing a PDMS adaptor of the proper dimensions, the microchip can be flexibly equipped with various types of familiar sample containers, making sampling more straightforward without tedious sample transfer or loading. Convenient sample changing is achieved simply by moving the adaptor from one sample container to another. Benefiting from the proposed technique, a time-dependent concentration gradient was generated and applied to quantum dot (QD)-based fluorescence barcoding within the droplet chip. High-throughput droplet screening was preliminarily demonstrated through investigation of the quenching efficiency of a ruthenium complex on the fluorescence of the QDs. More importantly, a multiplex DNA assay was successfully carried out in the integrated system, demonstrating its practicability and potential in high-throughput biosensing. - Highlights: • A simple, robust and low-cost sample-introduction technique was developed. • Convenient and flexible sample changing was achieved in a microfluidic system. • A novel strategy of concentration-gradient generation was presented for barcoding. • High-throughput droplet screening could be realized in the integrated platform. • A multiplex DNA assay was successfully carried out in the droplet platform.

  11. The novel programmable riometer for in-depth ionospheric and magnetospheric observations (PRIAMOS) using direct sampling DSP techniques

    OpenAIRE

    Dekoulis, G.; Honary, F.

    2005-01-01

    This paper describes the feasibility study and simulation results for the unique multi-frequency, multi-bandwidth, Programmable Riometer for in-depth Ionospheric And Magnetospheric ObservationS (PRIAMOS) based on direct sampling digital signal processing (DSP) techniques. This novel architecture is based on sampling the cosmic noise wavefront at the antenna. It eliminates the use of any intermediate frequency (IF) mixer stages (-6 dB) and the noise balancing technique (-3 dB), providing a m...

  12. Laser-Assisted Sampling Techniques in Combination with ICP-MS: A Novel Approach for Particle Analysis at the IAEA Environmental Samples Laboratory

    International Nuclear Information System (INIS)

    Dzigal, N.; Chinea-Cano, E.

    2015-01-01

    Researchers have found many applications for lasers. About two decades ago, scientists started using lasers as sample-introduction instruments for mass spectrometry measurements. Similarly, lasers have been increasingly in demand as micro-dissection tools in the life sciences, materials science, forensics, etc. This presentation deals with the intersection of these laser-assisted techniques with the field of particle analysis. Historically, nanosecond lasers have been used to ablate material in materials science. Recently, it has been shown that for the analysis of particulate materials the disadvantages associated with nanosecond lasers, such as overheating and melting of the sample, are suppressed when femtosecond lasers are used. Further, owing to the short duration of a single laser shot, fs-LA allows a more controlled ablation, so the sample plasma is more homogeneous and fewer mass-fractionation events are detected. The use of laser micro-dissection devices enables the physical segmentation of micro-sized artefacts, previously performed by a laborious manual procedure. By combining the precision of laser cutting inherent to the LMD technique with a particle identification methodology, one can increase the efficiency of single-particle isolation. Besides increasing the throughput of analyses, this combination enhances the signal-to-noise ratio by effectively removing matrix particles. Specifically, this contribution describes the use of an Olympus+MMI laser microdissection device to improve the sample preparation of environmental swipe samples, and the installation of an Applied Spectra J200 fs-LA/LIBS (laser ablation / laser-induced breakdown spectroscopy) system as a sample introduction device for a quadrupole mass spectrometer (iCAP Q, Thermo Fisher Scientific) at the IAEA Environmental Samples Laboratory. Preliminary results of the ongoing efforts for the

  13. Thermophilic Campylobacter spp. in turkey samples: evaluation of two automated enzyme immunoassays and conventional microbiological techniques

    DEFF Research Database (Denmark)

    Borck, Birgitte; Stryhn, H.; Ersboll, A.K.

    2002-01-01

    Aims: To determine the sensitivity and specificity of two automated enzyme immunoassays (EIA), EiaFoss and Minividas, and a conventional microbiological culture technique for detecting thermophilic Campylobacter spp. in turkey samples. Methods and Results: A total of 286 samples (faecal, meat...

  14. Measured attenuation correction methods

    International Nuclear Information System (INIS)

    Ostertag, H.; Kuebler, W.K.; Doll, J.; Lorenz, W.J.

    1989-01-01

    Accurate attenuation correction is a prerequisite for the determination of exact local radioactivity concentrations in positron emission tomography. Attenuation correction factors range from 4-5 in brain studies to 50-100 in whole-body measurements. This report gives an overview of the different methods of determining the attenuation correction factors by transmission measurements using an external positron-emitting source. The long-lived generator nuclide 68Ge/68Ga is commonly used for this purpose. The additional patient dose from the transmission source is usually a small fraction of the dose due to the subsequent emission measurement. Ring-shaped transmission sources as well as rotating point or line sources are employed in modern positron tomographs. By masking a rotating line or point source, random and scattered events in the transmission scans can be effectively suppressed. The problems of measured attenuation correction are discussed: transmission/emission mismatch, random and scattered event contamination, counting statistics, transmission/emission scatter compensation, and transmission scanning after administration of activity to the patient. By using a double-masking technique, simultaneous emission and transmission scans become feasible. (orig.)
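
The arithmetic of measured attenuation correction can be sketched in a few lines: the attenuation correction factor (ACF) for a line of response is the ratio of a blank scan to the transmission scan, and the emission data are scaled by it. The count values and the water attenuation coefficient below are illustrative assumptions, not data from this report:

```python
import math

def attenuation_correction_factor(blank_counts, transmission_counts):
    """ACF along one line of response: blank scan / transmission scan."""
    return blank_counts / transmission_counts

def corrected_emission(emission_counts, blank_counts, transmission_counts):
    """Scale the measured emission counts by the measured ACF."""
    return emission_counts * attenuation_correction_factor(
        blank_counts, transmission_counts)

# Consistency check against the quoted range: a ~20 cm water-equivalent
# path with mu ~ 0.096/cm at 511 keV gives ACF = exp(0.096 * 20) ~ 6.8,
# near the 4-5 quoted for brain studies; longer whole-body paths push
# the ACF toward the 50-100 range.
acf_brain = math.exp(0.096 * 20)
```

The same ratio is formed for every line of response, which is why transmission noise propagates directly into the corrected emission data.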

  15. Solving mercury (Hg) speciation in soil samples by synchrotron X-ray microspectroscopic techniques.

    Science.gov (United States)

    Terzano, Roberto; Santoro, Anna; Spagnuolo, Matteo; Vekemans, Bart; Medici, Luca; Janssens, Koen; Göttlicher, Jörg; Denecke, Melissa A; Mangold, Stefan; Ruggiero, Pacifico

    2010-08-01

    Direct mercury (Hg) speciation was assessed for soil samples with Hg concentrations ranging from 7 up to 240 mg kg⁻¹. Hg chemical forms were identified and quantified by sequential extractions and bulk- and micro-analytical techniques exploiting synchrotron-generated X-rays. In particular, microspectroscopic techniques such as µ-XRF, µ-XRD and µ-XANES were necessary to solve bulk Hg speciation. The Hg species identified in the soil samples were metacinnabar (β-HgS), cinnabar (α-HgS), corderoite (Hg₃S₂Cl₂), and an amorphous phase containing Hg bound to chlorine and sulfur. The amount of the metacinnabar and amorphous phases increased in the finer soil fractions. All the observed Hg species originated from the slow weathering of an inert Hg-containing waste material (K106, U.S. EPA) dumped in the area several years ago, which is changing into a relatively more dangerous source of pollution. Copyright 2010 Elsevier Ltd. All rights reserved.

  16. Effects of novel corrective spinal technique on adolescent idiopathic scoliosis as assessed by radiographic imaging.

    Science.gov (United States)

    Noh, Dong Koog; You, Joshua Sung-H; Koh, Jae-Hyun; Kim, Hoseong; Kim, Donghyun; Ko, Sung-Mok; Shin, Ji-Youn

    2014-01-01

    To compare the therapeutic effects of a 3-dimensional corrective spinal technique (CST) and a conventional exercise program (CE) on altered spinal curvature and health-related quality of life in patients with adolescent idiopathic scoliosis (AIS). Adolescents with idiopathic scoliosis (N=32, 6 males and 26 females) between 10 and 19 years of age (14.34 ± 2.60 years) were recruited and underwent the CST or CE for 60 minutes/day, 2-3 times a week, for an average of 30 sessions in total. Diagnostic X-ray imaging was used to determine intervention-related changes in the Cobb angle, thoracic kyphosis angle, lumbar lordosis angle, sacral slope, pelvic tilt, pelvic incidence, and vertebral rotation (Nash-Moe method). The Scoliosis Research Society-22 (SRS-22) health-related quality-of-life questionnaire was used. Data were analysed using the independent t-test, paired t-test, and non-parametric Mann-Whitney U-test at p<0.05. The CST group showed significant improvements (in the self-image and treatment satisfaction subscale scores and the total score; p=0.026, p=0.039, and p=0.041, respectively) as compared to the controls. There were no significant changes in the other measures between the two groups. This is the first clinical trial to investigate the effects of the 3-dimensional CST on spinal curvature and health-related quality of life in AIS, providing an important clinical rationale and compelling evidence for the effective management of AIS.

  17. Correction Technique for Raman Water Vapor Lidar Signal-Dependent Bias and Suitability for Water Vapor Trend Monitoring in the Upper Troposphere

    Science.gov (United States)

    Whiteman, D. N.; Cadirola, M.; Venable, D.; Calhoun, M.; Miloshevich, L.; Vermeesch, K.; Twigg, L.; Dirisu, A.; Hurst, D.; Hall, E.

    2012-01-01

    The MOHAVE-2009 campaign brought together diverse instrumentation for measuring atmospheric water vapor. We report on the participation of the ALVICE (Atmospheric Laboratory for Validation, Interagency Collaboration and Education) mobile laboratory in the MOHAVE-2009 campaign. In appendices we also report on the performance of the corrected Vaisala RS92 radiosonde measurements during the campaign, on a new radiosonde-based calibration algorithm that reduces the influence of atmospheric variability on the derived calibration constant, and on other results of the ALVICE deployment. The MOHAVE-2009 campaign permitted the participating Raman lidar systems to discover and address measurement biases in the upper troposphere and lower stratosphere. The ALVICE lidar system was found to possess a wet bias, which was attributed to fluorescence of insect material deposited on the telescope early in the mission. Other sources of wet biases are discussed and data from other Raman lidar systems are investigated, revealing that wet biases in upper tropospheric (UT) and lower stratospheric (LS) water vapor measurements appear to be quite common in Raman lidar systems. The lower stratospheric climatology of water vapor is investigated both as a means to check for the existence of these wet biases in Raman lidar data and as a source of correction for the bias. A correction technique is derived and applied to the ALVICE lidar water vapor profiles. Good agreement is found between corrected ALVICE lidar measurements and those of the RS92, frost point hygrometer and total column water. The correction is offered as a general method both to quality-control Raman water vapor lidar data and to correct those data that have signal-dependent bias. The influence of the correction is shown to be small at regions in the upper troposphere where recent work indicates detection of trends in atmospheric water vapor may be most robust.
The correction shown here holds promise for permitting useful upper

  18. Fluorescence correction in electron probe microanalysis

    International Nuclear Information System (INIS)

    Castellano, Gustavo; Riveros, J.A.

    1987-01-01

    In this work, several expressions for the characteristic fluorescence correction are computed for a compilation of experimental determinations on standard samples. Since this correction is generally small, the performance of the different models is nearly the same; this fact suggests the use of the simplest available expression. (Author)

  19. The robust corrective action priority-an improved approach for selecting competing corrective actions in FMEA based on principle of robust design

    Science.gov (United States)

    Sutrisno, Agung; Gunawan, Indra; Vanany, Iwan

    2017-11-01

    Despite being an integral part of risk-based quality improvement efforts, studies improving the selection of corrective action priorities using the FMEA technique are still limited in the literature, and none considers robustness and risk in selecting competing improvement initiatives. This study proposes a theoretical model for selecting among risk-based competing corrective actions by considering the robustness and risk of the candidates. We incorporate the principle of robust design in computing the preference score among corrective action candidates: along with the cost and benefit of competing corrective actions, we also incorporate their risk and robustness. An example is provided to demonstrate the applicability of the proposed model.
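
As a rough illustration of the robust-design principle invoked here (not the authors' exact scoring model), competing corrective actions can be ranked by a larger-the-better Taguchi signal-to-noise ratio of their benefit-to-cost outcomes across noise conditions, so that a consistently good action beats an erratic one. All numbers are invented:

```python
import math

def sn_larger_the_better(outcomes):
    """Taguchi signal-to-noise ratio, larger-the-better form, in dB:
    SN = -10 * log10( mean(1 / y_i^2) ). Rewards high, consistent values."""
    n = len(outcomes)
    return -10.0 * math.log10(sum(1.0 / y**2 for y in outcomes) / n)

# Hypothetical benefit/cost ratios of two competing corrective actions
# observed under three varying (noise) conditions.
action_a = [2.0, 2.1, 1.9]   # consistent (robust)
action_b = [3.5, 0.6, 2.0]   # higher peak but erratic
best = max([("A", action_a), ("B", action_b)],
           key=lambda kv: sn_larger_the_better(kv[1]))[0]
print(best)  # -> A
```

Action A wins (about +6.0 dB versus about -0.2 dB) even though B has the single best outcome, which is exactly the robustness trade-off the model is meant to capture.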

  20. Local defect correction for boundary integral equation methods

    NARCIS (Netherlands)

    Kakuba, G.; Anthonissen, M.J.H.

    2014-01-01

    The aim of this paper is to develop a new local defect correction approach to gridding for problems with localised regions of high activity in the boundary element method. The technique of local defect correction has been studied for other methods such as finite difference and finite volume

  1. First Industrial Tests of a Drum Monitor Matrix Correction for the Fissile Mass Measurement in Large Volume Historic Metallic Residues with the Differential Die-away Technique

    Energy Technology Data Exchange (ETDEWEB)

    Antoni, R.; Passard, C.; Perot, B.; Batifol, M.; Vandamme, J.C. [CEA, DEN, Cadarache, Nuclear Measurement Laboratory, F-13108 St Paul-lez-Durance, (France); Grassi, G. [AREVA NC, 1 place Jean-Millier, 92084 Paris-La-Defense cedex (France)

    2015-07-01

    The fissile mass in radioactive waste drums filled with compacted metallic residues (spent fuel hulls and nozzles) produced at the AREVA La Hague reprocessing plant is measured by neutron interrogation with the Differential Die-away measurement Technique (DDT). In the coming years, old hulls and nozzles mixed with ion-exchange resins will be measured. The ion-exchange resins increase neutron moderation in the matrix, compared to the waste measured in the current process. In this context, the Nuclear Measurement Laboratory (NML) of CEA Cadarache has studied a matrix-effect correction method based on a drum monitor (a {sup 3}He proportional counter inside the measurement cavity). A previous study performed with the NML R and D measurement cell PROMETHEE 6 showed the feasibility of the method, and the capability of MCNP simulations to correctly reproduce experimental data and to assess the performance of the proposed correction. The next step of the study focused on assessing the performance of the method on the industrial station using numerical simulation. A correlation between the prompt calibration coefficient of the {sup 239}Pu signal and the drum monitor signal was established using the MCNPX computer code and a fractional factorial experimental design composed of matrix parameters representative of the variation range of historical waste. Calculations showed that the method allows the assay of the fissile mass with an uncertainty within a factor of 2, while the matrix effect without correction ranges over two decades. In this paper, we present and discuss the first experimental tests on the industrial ACC measurement system. A calculation-versus-experiment benchmark has been achieved by performing dedicated calibration measurements with a representative drum and {sup 235}U samples. The preliminary comparison between calculation and experiment shows satisfactory agreement for the drum monitor. The final objective of this work is to confirm the reliability of the
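
The correction idea can be sketched as follows: the simulations establish how the prompt calibration coefficient CC (prompt counts per gram of fissile material) varies with the in-cavity monitor count rate M, and each assay then uses the monitor reading to pick the matrix-corrected coefficient. The power-law form of the correlation and every number below are made-up assumptions for illustration, not values from this study:

```python
import math

def fit_loglog(points):
    """Least-squares fit of log(CC) = a*log(M) + b through (M, CC) pairs."""
    xs = [math.log(m) for m, _ in points]
    ys = [math.log(cc) for _, cc in points]
    n = len(points)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def fissile_mass(prompt_signal, monitor_signal, a, b):
    """Assay: divide the prompt signal by the matrix-corrected coefficient."""
    cc = math.exp(a * math.log(monitor_signal) + b)
    return prompt_signal / cc

# Fictitious calibration points (monitor rate, calibration coefficient):
# more moderation in the matrix -> higher monitor rate, lower coefficient.
a, b = fit_loglog([(100.0, 50.0), (200.0, 35.0), (400.0, 24.0)])
mass = fissile_mass(prompt_signal=70.0, monitor_signal=200.0, a=a, b=b)
```

With these invented points the fit gives a negative slope, and a drum measured at monitor rate 200 yields a corrected mass of about 2 g instead of whatever an uncorrected, fixed-coefficient calibration would report.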

  2. First Industrial Tests of a Drum Monitor Matrix Correction for the Fissile Mass Measurement in Large Volume Historic Metallic Residues with the Differential Die-away Technique

    International Nuclear Information System (INIS)

    Antoni, R.; Passard, C.; Perot, B.; Batifol, M.; Vandamme, J.C.; Grassi, G.

    2015-01-01

    The fissile mass in radioactive waste drums filled with compacted metallic residues (spent fuel hulls and nozzles) produced at the AREVA La Hague reprocessing plant is measured by neutron interrogation with the Differential Die-away measurement Technique (DDT). In the coming years, old hulls and nozzles mixed with ion-exchange resins will be measured. The ion-exchange resins increase neutron moderation in the matrix, compared to the waste measured in the current process. In this context, the Nuclear Measurement Laboratory (NML) of CEA Cadarache has studied a matrix-effect correction method based on a drum monitor (a 3He proportional counter inside the measurement cavity). A previous study performed with the NML R and D measurement cell PROMETHEE 6 showed the feasibility of the method, and the capability of MCNP simulations to correctly reproduce experimental data and to assess the performance of the proposed correction. The next step of the study focused on assessing the performance of the method on the industrial station using numerical simulation. A correlation between the prompt calibration coefficient of the 239Pu signal and the drum monitor signal was established using the MCNPX computer code and a fractional factorial experimental design composed of matrix parameters representative of the variation range of historical waste. Calculations showed that the method allows the assay of the fissile mass with an uncertainty within a factor of 2, while the matrix effect without correction ranges over two decades. In this paper, we present and discuss the first experimental tests on the industrial ACC measurement system. A calculation-versus-experiment benchmark has been achieved by performing dedicated calibration measurements with a representative drum and 235U samples. The preliminary comparison between calculation and experiment shows satisfactory agreement for the drum monitor. The final objective of this work is to confirm the reliability of the modeling approach.

  3. Blind retrospective motion correction of MR images.

    Science.gov (United States)

    Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard

    2013-12-01

    Subject motion can severely degrade MR images. A retrospective motion correction algorithm, Gradient-based motion correction, which significantly reduces ghosting and blurring artifacts due to subject motion, was proposed. The technique uses the raw data of standard imaging sequences; no sequence modifications or additional equipment, such as tracking devices, are required. Rigid motion is assumed. The approach iteratively searches for the motion trajectory yielding the sharpest image, as measured by the entropy of spatial gradients. The vast space of motion parameters is efficiently explored by gradient-based optimization with a convergence guarantee. The method has been evaluated on both synthetic and real data in two and three dimensions using standard imaging techniques. MR images are consistently improved over different kinds of motion trajectories. Using a graphics processing unit implementation, computation times are on the order of a few minutes for a full three-dimensional volume. The presented technique can be an alternative or a complement to prospective motion correction methods and is able to improve images with strong motion artifacts from standard imaging sequences without requiring additional data. Copyright © 2013 Wiley Periodicals, Inc., a Wiley company.
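
The sharpness objective the method optimises, the entropy of the spatial gradient magnitudes, is easy to sketch: a sharp image concentrates its gradient energy in a few pixels (low entropy), while motion blur spreads it out (high entropy). The step-versus-ramp test images below are invented illustration data, not from the paper:

```python
import numpy as np

def gradient_entropy(img):
    """Entropy of the normalised spatial-gradient magnitudes of an image.
    Lower values indicate a sharper image (the quantity minimised during
    the motion-trajectory search)."""
    gx, gy = np.gradient(img.astype(float))
    mag = np.sqrt(gx**2 + gy**2).ravel()
    p = mag / mag.sum()          # treat magnitudes as a distribution
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# A sharp step edge should score lower entropy than a smeared version.
sharp = np.zeros((32, 32)); sharp[:, 16:] = 1.0
blurred = np.zeros((32, 32))
for k, v in zip(range(12, 21), np.linspace(0, 1, 9)):
    blurred[:, k:] = v           # crude ramp standing in for motion blur
print(gradient_entropy(sharp) < gradient_entropy(blurred))  # -> True
```

The motion search then amounts to re-gridding the raw data under candidate rigid-motion trajectories and descending this metric.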

  4. A sub-sampled approach to extremely low-dose STEM

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, A. [OptimalSensing, Southlake, Texas 76092, USA; Duke University, ECE, Durham, North Carolina 27708, USA; Luzi, L. [Rice University, ECE, Houston, Texas 77005, USA; Yang, H. [Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA; Kovarik, L. [Pacific NW National Laboratory, Richland, Washington 99354, USA; Mehdi, B. L. [Pacific NW National Laboratory, Richland, Washington 99354, USA; University of Liverpool, Materials Engineering, Liverpool L69 3GH, United Kingdom; Liyu, A. [Pacific NW National Laboratory, Richland, Washington 99354, USA; Gehm, M. E. [Duke University, ECE, Durham, North Carolina 27708, USA; Browning, N. D. [Pacific NW National Laboratory, Richland, Washington 99354, USA; University of Liverpool, Materials Engineering, Liverpool L69 3GH, United Kingdom

    2018-01-22

    The inpainting of randomly sub-sampled images acquired by scanning transmission electron microscopy (STEM) is an attractive method for imaging under low-dose conditions (≤ 1 e⁻ Å⁻²) without changing either the operation of the microscope or the physics of the imaging process. We show that 1) adaptive sub-sampling increases acquisition speed, resolution, and sensitivity; and 2) random (non-adaptive) sub-sampling is equivalent to, but faster than, traditional low-dose techniques. Adaptive sub-sampling opens numerous possibilities for the analysis of beam-sensitive materials and in situ dynamic processes at the resolution limit of the aberration-corrected microscope and is demonstrated here for the analysis of the node distribution in metal-organic frameworks (MOFs).
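
The random sub-sampling idea can be illustrated in a toy form: acquire only a random fraction of pixel positions (reducing dose proportionally) and inpaint the gaps. The crude nearest-sampled-pixel fill below stands in for the sparse-recovery inpainting actually used in such work, and the smooth test image is invented:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.outer(np.linspace(0, 1, 16), np.ones(16))  # smooth test "image"
mask = rng.random(truth.shape) < 0.2                  # ~20% of the dose
acquired = np.where(mask, truth, np.nan)              # unsampled -> NaN

ys, xs = np.nonzero(mask)                             # sampled positions

def inpaint_nearest(img):
    """Fill each missing pixel with the value of the nearest sampled one
    (a stand-in for proper sparse inpainting)."""
    out = img.copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            if np.isnan(out[i, j]):
                k = np.argmin((ys - i)**2 + (xs - j)**2)
                out[i, j] = img[ys[k], xs[k]]
    return out

recon = inpaint_nearest(acquired)
err = float(np.abs(recon - truth).mean())  # small for smooth images
```

Even this naive fill recovers a smooth image closely from 20% of the pixels; adaptive schemes go further by steering the next samples toward regions the current reconstruction explains poorly.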

  5. Corrections to the {sup 148}Nd method of evaluation of burnup for the PIE samples from Mihama-3 and Genkai-1 reactors

    Energy Technology Data Exchange (ETDEWEB)

    Suyama, Kenya [Fuel Cycle Facility Safety Research Group, Nuclear Safety Research Center, Japan Atomic Energy Agency, Tokai-mura, Naka-gun, Ibaraki 319-1195 (Japan)]. E-mail: suyama.kenya@jaea.go.jp; Mochizuki, Hiroki [Japan Research Institute, Limited, 16 Ichiban-cho, Chiyoda-ku, Tokyo 102-0082 (Japan)

    2006-03-15

    The value of the burnup is one of the most important parameters of samples taken by post-irradiation examination (PIE). Generally, it is evaluated by the Neodymium-148 method. Precise evaluation of the burnup value requires: (1) an effective fission yield of {sup 148}Nd; (2) the neutron capture reactions of {sup 147}Nd and {sup 148}Nd; (3) a conversion factor from fissions per initial heavy metal to the burnup unit GWd/t. In this study, the burnup values of the PIE samples from the Mihama-3 and Genkai-1 PWRs, which were taken by the Japan Atomic Energy Research Institute, were re-evaluated using more accurate corrections for each of these three items. The PIE data were then re-analyzed using the SWAT and SWAT2 code systems with the JENDL-3.3 library. The re-evaluation of the effective fission yield of {sup 148}Nd has an effect of 1.5-2.0% on burnup values. Considering the neutron capture reactions of {sup 147}Nd and {sup 148}Nd removes the dependence of the C/E values of {sup 148}Nd on the burnup value. The conversion factor from FIMA (%) to GWd/t changes according to the burnup value; its effect on the burnup evaluation is small for samples having burnup larger than 30 GWd/t. The analyses using the corrected burnup values showed that the difference between the calculated {sup 148}Nd concentrations and the PIE data is approximately 1%, whereas it was 3-5% in prior analyses. This analysis indicates that the burnup values of samples from the Mihama-3 and Genkai-1 PWRs should be corrected by 2-3%. The effect of re-evaluating the burnup value on the neutron multiplication factor is an approximately 0.6% change for PIE samples having burnup larger than 30 GWd/t. Finally, a comparison between calculation results using a single pin-cell model and an assembly model was carried out. Because the results agreed with each other within a few percent, we concluded that the single pin-cell model is suitable for the analysis of PIE samples and that the underestimation of plutonium isotopes, which occurred in the previous
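
The three ingredients listed above combine in a short formula: the measured 148Nd atom density divided by its effective fission yield gives fissions per initial heavy-metal atom (FIMA), which a conversion factor turns into GWd/t. The yield, conversion factor, and atom ratio below are round illustrative assumptions, not the corrected values derived in this study:

```python
def burnup_gwd_per_t(n_nd148, y_eff, n_hm_initial, gwd_per_fima_pct=9.6):
    """148Nd-method burnup sketch: FIMA = N(148Nd) / (Y_eff * N_HM,0),
    then FIMA(%) is scaled by roughly 9.6 GWd/t per 1% FIMA (an assumed
    round value; the factor varies with the fissioning-nuclide mix,
    which is part of the correction discussed in the abstract)."""
    fima_pct = 100.0 * n_nd148 / (y_eff * n_hm_initial)
    return fima_pct * gwd_per_fima_pct

# e.g. 6.0e-4 atoms of 148Nd per initial heavy-metal atom with an
# effective yield of 1.69% gives ~3.55% FIMA, i.e. roughly 34 GWd/t.
bu = burnup_gwd_per_t(n_nd148=6.0e-4, y_eff=0.0169, n_hm_initial=1.0)
```

A 1-2% revision of the effective yield or the conversion factor moves the result by the same relative amount, which is why the abstract's corrections translate into 2-3% shifts in the quoted burnup values.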

  6. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, had not been studied before. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
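
    The optimal allocation the article derives generalizes the classical Neyman allocation for stratified sampling, in which the total sample is split across strata in proportion to N_h·S_h. A minimal sketch of that classical rule follows; the stratum sizes and standard deviations are illustrative assumptions, not the paper's data.

```python
# Classical Neyman allocation: n_h proportional to N_h * S_h.
def neyman_allocation(total_n, stratum_sizes, stratum_sds):
    """Split a total sample size across strata so as to minimize the
    variance of the stratified mean estimator."""
    weights = [n * s for n, s in zip(stratum_sizes, stratum_sds)]
    total = sum(weights)
    return [round(total_n * w / total) for w in weights]

# Three hypothetical strata: sizes 5000/3000/2000, SDs 2.0/1.0/4.0.
print(neyman_allocation(300, [5000, 3000, 2000], [2.0, 1.0, 4.0]))  # [143, 43, 114]
```

Note how the small but highly variable third stratum receives more of the sample than proportional allocation would give it.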

  7. Sampling and sample preparation development for analytical and on-line measurement techniques of process liquids; Naeytteenoton ja kaesittelyn kehittaeminen prosessinesteiden analytiikan ja on-line mittaustekniikan tarpeisiin - MPKT 11

    Energy Technology Data Exchange (ETDEWEB)

    Karttunen, K. [Oulu Univ. (Finland)

    1998-12-31

    The main goal of the research project is to develop sampling and sample handling methods and techniques for the pulp and paper industry to be used for analysis and on-line purposes. The research focuses especially on the research and development of the classification and separation methods and techniques needed for liquid and colloidal substances as well as in ion analysis. (orig.)

  8. Sampling and sample preparation development for analytical and on-line measurement techniques of process liquids; Naeytteenoton ja kaesittelyn kehittaeminen prosessinesteiden analytiikan ja on-line mittaustekniikan tarpeisiin - MPKT 11

    Energy Technology Data Exchange (ETDEWEB)

    Karttunen, K [Oulu Univ. (Finland)

    1999-12-31

    The main goal of the research project is to develop sampling and sample handling methods and techniques for the pulp and paper industry to be used for analysis and on-line purposes. The research focuses especially on the research and development of the classification and separation methods and techniques needed for liquid and colloidal substances as well as in ion analysis. (orig.)

  9. Water stable isotope measurements of Antarctic samples by means of IRMS and WS-CRDS techniques

    Science.gov (United States)

    Michelini, Marzia; Bonazza, Mattia; Braida, Martina; Flora, Onelio; Dreossi, Giuliano; Stenni, Barbara

    2010-05-01

    In recent years there has been increasing interest in the scientific community in the application of stable isotope techniques to several environmental problems such as drinking water safeguarding, groundwater management, climate change, soil and paleoclimate studies, etc. For example, water stable isotopes, being natural tracers of the hydrological cycle, have been extensively used as tools to characterize regional aquifers and to reconstruct past temperature changes from polar ice cores. Hence the need for improvements in analytical techniques: the high demand for information calls for technologies that can offer a great quantity of analyses in a short time and at low cost. Furthermore, it is sometimes difficult to obtain large amounts of sample (as is the case for Antarctic ice cores or interstitial water), preventing the possibility to replicate the analyses. Here, we present oxygen and hydrogen measurements performed on water samples covering a wide range of isotopic values (from very negative Antarctic precipitation to mid-latitude precipitation values) carried out with both the conventional Isotope Ratio Mass Spectrometry (IRMS) technique and with a new method based on laser absorption techniques, Wavelength-Scanned Cavity Ring-Down Spectroscopy (WS-CRDS). This study focuses on improving the precision of the measurements carried out with WS-CRDS in order to extensively apply this method to Antarctic ice core paleoclimate studies. WS-CRDS is a variation of the CRDS technique developed in 1988 by O'Keefe and Deacon. In CRDS a pulse of light travels through a cavity with highly reflective inner surfaces; when there is no sample in the cavity the light beam finds no obstacle in its path, but because the reflectivity of the walls is not perfect there will eventually be an absorption of the light beam; when the sample is injected into the cavity there is additional absorption, and the difference between the decay time without and with the sample is proportional to the quantity
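
    The ring-down relation described above can be sketched as follows: the extra optical loss with the sample present, read off from the shortened decay time, is proportional to the absorber concentration. The function name and the example ring-down times below are illustrative assumptions.

```python
# Sketch of the CRDS relation: alpha = (1/c) * (1/tau_sample - 1/tau_empty).
C_LIGHT = 2.998e8  # speed of light in the cavity, m/s (vacuum value assumed)

def absorption_coefficient(tau_empty, tau_sample):
    """Absorption coefficient alpha (1/m) from ring-down times (s).

    The shorter the ring-down with the sample injected, the stronger the
    absorption; the difference in inverse decay times carries the signal.
    """
    return (1.0 / C_LIGHT) * (1.0 / tau_sample - 1.0 / tau_empty)

# Example: an empty-cavity ring-down of 40 us shortened to 25 us by the sample.
alpha = absorption_coefficient(40e-6, 25e-6)
print(f"alpha = {alpha:.3e} m^-1")
```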

  10. Dead time corrections using the backward extrapolation method

    Energy Technology Data Exchange (ETDEWEB)

    Gilad, E., E-mail: gilade@bgu.ac.il [The Unit of Nuclear Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel); Dubi, C. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel); Geslot, B.; Blaise, P. [DEN/CAD/DER/SPEx/LPE, CEA Cadarache, Saint-Paul-les-Durance 13108 (France); Kolin, A. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel)

    2017-05-11

    Dead time losses in neutron detection, caused by both the detector and the electronics dead time, are a highly nonlinear effect, known to create strong biasing in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing) and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled Counts Per Second (CPS), based on backward extrapolation of the losses, created by increasingly growing artificially imposed dead time on the data, back to zero. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (of 1–2%) in restoring the corrected count rate. - Highlights: • A new method for dead time corrections is introduced and experimentally validated. • The method does not depend on any prior calibration nor assumes any specific model. • Different dead times are imposed on the signal and the losses are extrapolated to zero. • The method is implemented and validated using neutron measurements from the MINERVE. • Results show very good correspondence to empirical results.
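
    The backward-extrapolation idea can be sketched on simulated data: impose increasingly long artificial dead times on a recorded pulse train, then extrapolate the observed count rate back to zero imposed dead time. This toy uses a non-paralyzing dead-time model and a linear fit, whereas the paper's method is model-free, so treat all details as assumptions.

```python
# Toy backward extrapolation of dead-time losses on a simulated pulse train.
import numpy as np

def impose_dead_time(timestamps, tau):
    """Count events, discarding any arriving within tau of the last accepted one."""
    kept, last = 0, -np.inf
    for t in timestamps:
        if t - last >= tau:
            kept += 1
            last = t
    return kept

# Simulate a Poisson pulse train (no intrinsic dead time, so the answer is known).
rng = np.random.default_rng(0)
true_rate, duration = 1e4, 5.0
gaps = rng.exponential(1.0 / true_rate, int(true_rate * duration * 1.2))
arrivals = np.cumsum(gaps)
arrivals = arrivals[arrivals < duration]
raw_cps = len(arrivals) / duration

# Impose increasingly long artificial dead times and record the observed CPS.
taus = np.array([1e-6, 2e-6, 3e-6, 4e-6])
cps = np.array([impose_dead_time(arrivals, tau) / duration for tau in taus])

# Extrapolate the CPS-vs-imposed-dead-time trend back to tau = 0.
slope, intercept = np.polyfit(taus, cps, 1)
print(f"extrapolated CPS: {intercept:.0f}  (raw CPS: {raw_cps:.0f})")
```

The extrapolated intercept recovers the loss-free rate to within about a percent here; the published technique achieves comparable accuracy on real MINERVE data without assuming the non-paralyzing model used in this sketch.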

  11. Convergence and Efficiency of Adaptive Importance Sampling Techniques with Partial Biasing

    Science.gov (United States)

    Fort, G.; Jourdain, B.; Lelièvre, T.; Stoltz, G.

    2018-04-01

    We propose a new Monte Carlo method to efficiently sample a multimodal distribution (known up to a normalization constant). We consider a generalization of the discrete-time Self Healing Umbrella Sampling method, which can also be seen as a generalization of well-tempered metadynamics. The dynamics is based on an adaptive importance technique. The importance function relies on the weights (namely the relative probabilities) of disjoint sets which form a partition of the space. These weights are unknown but are learnt on the fly yielding an adaptive algorithm. In the context of computational statistical physics, the logarithm of these weights is, up to an additive constant, the free-energy, and the discrete valued function defining the partition is called the collective variable. The algorithm falls into the general class of Wang-Landau type methods, and is a generalization of the original Self Healing Umbrella Sampling method in two ways: (i) the updating strategy leads to a larger penalization strength of already visited sets in order to escape more quickly from metastable states, and (ii) the target distribution is biased using only a fraction of the free-energy, in order to increase the effective sample size and reduce the variance of importance sampling estimators. We prove the convergence of the algorithm and analyze numerically its efficiency on a toy example.
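
    A toy version of the Wang-Landau-type adaptive biasing described above can be sketched in a few lines: sample a bimodal 1D density with Metropolis steps, learn per-set log-weights on the fly with a decreasing update magnitude, and bias the target by only a fraction of the learnt free energy. All parameter values are illustrative assumptions, not the paper's algorithm verbatim.

```python
# Toy adaptive biasing on a bimodal 1D density (all parameters assumed).
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Two Gaussian wells at +/-3 separated by a barrier.
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

edges = np.linspace(-6.0, 6.0, 13)   # partition of the space into 12 sets
log_w = np.zeros(len(edges) - 1)     # learnt log-weights ("free energy")
frac = 0.5                           # bias with only a fraction of the weights

def bin_of(x):
    return np.searchsorted(edges, x, side="right") - 1

x, samples = -3.0, []
for step in range(20000):
    gamma = 1.0 / (1.0 + step / 100.0)   # decreasing update magnitude
    prop = x + rng.normal(0.0, 1.0)
    if -6.0 < prop < 6.0:
        # Biased Metropolis step: penalize sets with large learnt weight.
        log_acc = (log_target(prop) - frac * log_w[bin_of(prop)]) \
                - (log_target(x) - frac * log_w[bin_of(x)])
        if np.log(rng.uniform()) < log_acc:
            x = prop
    log_w[bin_of(x)] += gamma            # penalize the visited set
    samples.append(x)

samples = np.array(samples)
print(f"fraction of samples in the right well: {(samples > 0).mean():.2f}")
```

Penalizing already-visited sets pushes the walker over the barrier far sooner than plain Metropolis would manage, which is exactly the escape-from-metastability behavior the abstract describes.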

  12. A new iterative reconstruction technique for attenuation correction in high-resolution positron emission tomography

    International Nuclear Information System (INIS)

    Knesaurek, K.; Machac, J.; Vallabhajosula, S.; Buchsbaum, M.S.

    1996-01-01

    A new iterative reconstruction technique (NIRT) for positron emission computed tomography (PET), which uses transmission data for nonuniform attenuation correction, is described. Utilizing general inverse problem theory, a cost functional which includes a noise term was derived. The cost functional was minimized using a weighted-least-squares maximum a posteriori conjugate gradient (CG) method. The procedure involves a change in the Hessian of the cost function by adding an additional term. Two phantoms were used in a real data acquisition. The first was a cylinder phantom filled with uniformly distributed activity of 74 MBq of fluorine-18. Two different inserts were placed in the phantom. The second was a Hoffman brain phantom filled with uniformly distributed activity of 7.4 MBq of {sup 18}F. The resulting reconstructed images were used to test and compare the new iterative reconstruction technique with a standard filtered backprojection (FBP) method. The results confirmed that NIRT, based on the conjugate gradient method, converges rapidly and provides good reconstructed images. In comparison with standard results obtained by the FBP method, the images reconstructed by NIRT showed better noise properties. The noise was measured as rms% noise and was less, by a factor of 1.75, in images reconstructed by NIRT than in the same images reconstructed by FBP. The distance between the Hoffman brain slice reconstructed by FBP and the slice created from the MRI image was 0.526, while the same distance for the Hoffman brain slice reconstructed by NIRT was 0.328. The NIRT method suppressed the propagation of noise without visible loss of resolution in the reconstructed PET images. (orig.)
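
    The core numerical ingredient here, a conjugate-gradient minimizer for a weighted-least-squares cost, can be sketched generically. A small random system stands in for the actual tomographic operators, so every name and dimension below is an illustrative assumption.

```python
# Generic CG sketch: minimize ||W^(1/2)(Ax - b)||^2 via the normal equations
# A^T W A x = A^T W b, using only matrix-vector products.
import numpy as np

def conjugate_gradient(apply_h, g, iters=50, tol=1e-10):
    """Solve H x = g for symmetric positive-definite H given as a callable."""
    x = np.zeros_like(g)
    r = g - apply_h(x)
    p = r.copy()
    for _ in range(iters):
        hp = apply_h(p)
        alpha = (r @ r) / (p @ hp)
        x += alpha * p
        r_new = r - alpha * hp
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))      # stand-in projection operator
w = rng.uniform(0.5, 2.0, 30)      # per-measurement weights
x_true = rng.normal(size=10)
b = A @ x_true                     # noiseless data for the demo

x_hat = conjugate_gradient(lambda v: A.T @ (w * (A @ v)), A.T @ (w * b))
print(np.allclose(x_hat, x_true, atol=1e-6))
```

Because CG needs only products with the (weighted) system matrix, the same loop applies when A is an implicit projection/backprojection pair rather than a dense array, which is what makes it attractive for PET-scale problems.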

  13. Developing Formal Correctness Properties from Natural Language Requirements

    Science.gov (United States)

    Nikora, Allen P.

    2006-01-01

    This viewgraph presentation reviews the rationale of the program to transform natural language specifications into formal notation, specifically to automate the generation of Linear Temporal Logic (LTL) correctness properties from natural language temporal specifications. There are several reasons for this approach: (1) model-based techniques are becoming more widely accepted; (2) analytical verification techniques (e.g., model checking, theorem proving) are significantly more effective at detecting certain types of specification design errors (e.g., race conditions, deadlock) than manual inspection; (3) many requirements are still written in natural language, which results in a high learning curve for specification languages and their associated tools, while increased schedule and budget pressure on projects reduces training opportunities for engineers; and (4) formulation of correctness properties for system models can be a difficult problem. This has relevance to NASA in that it would simplify the development of formal correctness properties, lead to more widespread use of model-based specification and design techniques, assist in earlier identification of defects, and reduce residual defect content for space mission software systems. The presentation also discusses potential applications, accomplishments and/or technological transfer potential, and next steps.

  14. [PERCUTANEOUS CORRECTION OF FOREFOOT DEFORMITIES IN DIABETIC PATIENTS IN ORDER TO PREVENT PRESSURE SORES - TECHNIQUE AND RESULTS IN 20 CONSECUTIVE PATIENTS].

    Science.gov (United States)

    Yassin, Mustafa; Garti, Avraham; Heller, Eyal; Weissbrot, Moshe; Robinson, Dror

    2017-04-01

    Diabetes mellitus is a 21st century pandemic. Due to life-span prolongation combined with the increased rate of diabetes, a growing population of patients is afflicted with neuropathic foot deformities. Traditional operative repair of these deformities is associated with a high complication rate and a relatively common incidence of infection. In recent years, in order to prevent these complications, percutaneous deformity correction methods were developed. We describe the experience accumulated in treating 20 consecutive patients with diabetic neuropathic foot deformities in a percutaneous fashion. A consecutive series of patients treated at our institute for neuropathic foot deformity was assessed according to a standard protocol using the AOFAS forefoot score and the LUMT score, performed at baseline as well as at 6 and 12 months. Treatment-related complications were monitored. All procedures were performed in an ambulatory setting under local anesthesia. A total of 12 patients had soft tissue corrections, and 8 had a combined soft tissue and bone correction. The baseline AOFAS score was 48±7 and improved to 73±9 at six months and 75±7 at one year. The LUMT score in 11 patients with a chronic wound decreased from 22±4 to 2±1 at one year post-op. One patient required hospitalization due to post-op bleeding. Percutaneous techniques allow deformity correction of diabetic feet, including those with open wounds, in an ambulatory setting with a low complication rate.

  15. GQ corrections in the circuit theory of quantum transport

    NARCIS (Netherlands)

    Campagnano, G.; Nazarov, Y.V.

    2006-01-01

    We develop a finite-element technique that allows one to evaluate correction of the order of GQ to various transport characteristics of arbitrary nanostructures. Common examples of such corrections are the weak-localization effect on conductance and universal conductance fluctuations. Our approach,

  16. Correcting for intra-experiment variation in Illumina BeadChip data is necessary to generate robust gene-expression profiles

    Directory of Open Access Journals (Sweden)

    van Hemert Jano I

    2010-02-01

    Background: Microarray technology is a popular means of producing whole genome transcriptional profiles; however, high cost and scarcity of mRNA has led many studies to be conducted based on the analysis of single samples. We exploit the design of the Illumina platform, specifically multiple arrays on each chip, to evaluate intra-experiment technical variation using repeated hybridisations of universal human reference RNA (UHRR) and duplicate hybridisations of primary breast tumour samples from a clinical study. Results: A clear batch-specific bias was detected in the measured expressions of both the UHRR and clinical samples. This bias was found to persist following standard microarray normalisation techniques. However, when mean-centering or empirical Bayes batch-correction methods (ComBat) were applied to the data, inter-batch variation in the UHRR and clinical samples was greatly reduced. Correlation between replicate UHRR samples improved by two orders of magnitude following batch-correction using ComBat (ranging from 0.9833-0.9991 to 0.9997-0.9999) and increased the consistency of the gene-lists from the duplicate clinical samples, from 11.6% in quantile normalised data to 66.4% in batch-corrected data. The use of UHRR as an inter-batch calibrator provided a small additional benefit when used in conjunction with ComBat, further increasing the agreement between the two gene-lists, up to 74.1%. Conclusion: In the interests of practicalities and cost, these results suggest that single samples can generate reliable data, but only after careful compensation for technical bias in the experiment. We recommend that investigators appreciate the propensity for such variation in the design stages of a microarray experiment and that the use of suitable correction methods become routine during the statistical analysis of the data.

  17. Correcting for intra-experiment variation in Illumina BeadChip data is necessary to generate robust gene-expression profiles.

    Science.gov (United States)

    Kitchen, Robert R; Sabine, Vicky S; Sims, Andrew H; Macaskill, E Jane; Renshaw, Lorna; Thomas, Jeremy S; van Hemert, Jano I; Dixon, J Michael; Bartlett, John M S

    2010-02-24

    Microarray technology is a popular means of producing whole genome transcriptional profiles; however, high cost and scarcity of mRNA has led many studies to be conducted based on the analysis of single samples. We exploit the design of the Illumina platform, specifically multiple arrays on each chip, to evaluate intra-experiment technical variation using repeated hybridisations of universal human reference RNA (UHRR) and duplicate hybridisations of primary breast tumour samples from a clinical study. A clear batch-specific bias was detected in the measured expressions of both the UHRR and clinical samples. This bias was found to persist following standard microarray normalisation techniques. However, when mean-centering or empirical Bayes batch-correction methods (ComBat) were applied to the data, inter-batch variation in the UHRR and clinical samples was greatly reduced. Correlation between replicate UHRR samples improved by two orders of magnitude following batch-correction using ComBat (ranging from 0.9833-0.9991 to 0.9997-0.9999) and increased the consistency of the gene-lists from the duplicate clinical samples, from 11.6% in quantile normalised data to 66.4% in batch-corrected data. The use of UHRR as an inter-batch calibrator provided a small additional benefit when used in conjunction with ComBat, further increasing the agreement between the two gene-lists, up to 74.1%. In the interests of practicalities and cost, these results suggest that single samples can generate reliable data, but only after careful compensation for technical bias in the experiment. We recommend that investigators appreciate the propensity for such variation in the design stages of a microarray experiment and that the use of suitable correction methods become routine during the statistical analysis of the data.
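
    The simpler of the two corrections discussed above, mean-centering, can be sketched directly: shift each gene so that every batch has the same per-gene mean. (ComBat additionally shrinks per-batch location and scale estimates via empirical Bayes; the toy data below are assumptions, not the study's expression matrices.)

```python
# Mean-centering batch correction on toy expression data (genes x samples).
import numpy as np

rng = np.random.default_rng(42)

genes, per_batch = 100, 10
batch1 = rng.normal(5.0, 1.0, (genes, per_batch))
batch2 = rng.normal(5.0, 1.0, (genes, per_batch)) + 2.0  # batch-specific bias

def mean_center(batches):
    """Remove per-gene batch means, then restore the per-gene grand mean."""
    grand = np.hstack(batches).mean(axis=1, keepdims=True)
    return [b - b.mean(axis=1, keepdims=True) + grand for b in batches]

c1, c2 = mean_center([batch1, batch2])
gap_before = abs(batch1.mean() - batch2.mean())
gap_after = abs(c1.mean() - c2.mean())
print(f"batch-mean gap: {gap_before:.2f} -> {gap_after:.2f}")
```

After centering, the systematic offset between batches vanishes while within-batch biological variation is left untouched, which is why the study sees replicate correlations jump once such a correction is applied.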

  18. Estimation of trace levels of plutonium in urine samples by fission track technique

    International Nuclear Information System (INIS)

    Sawant, P.D.; Prabhu, S.; Pendharkar, K.A.; Kalsi, P.C.

    2009-01-01

    Individual monitoring of radiation workers handling Pu in various nuclear installations requires the detection of trace levels of plutonium in bioassay samples. It is necessary to develop methods that can detect urinary excretion of Pu in fraction of mBq range. Therefore, a sensitive method such as fission track analysis has been developed for the measurement of trace levels of Pu in bioassay samples. In this technique, chemically separated plutonium from the sample and a Pu standard were electrodeposited on planchettes and covered with Lexan solid state nuclear track detector (SSNTD) and irradiated with thermal neutrons in APSARA reactor of Bhabha Atomic Research Centre, India. The fission track densities in the Lexan films of the sample and the standard were used to calculate the amount of Pu in the sample. The minimum amount of Pu that can be analyzed by this method using doubly distilled electronic grade (E. G.) reagents is about 12 μBq/L. (author)
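
    The comparator calculation implied above reduces to a ratio: with the sample and the standard electrodeposited and irradiated together, fission-track counts scale with the Pu amount. The numbers below are illustrative assumptions, not measured data.

```python
# Ratio estimate of Pu activity from fission-track counts (illustrative numbers).
def pu_activity_mbq(tracks_sample, tracks_standard, standard_activity_mbq):
    """Sample Pu activity from track counts relative to a co-irradiated standard."""
    return standard_activity_mbq * tracks_sample / tracks_standard

# Example: 150 tracks over the sample vs. 1200 over a 0.8 mBq standard.
print(pu_activity_mbq(150, 1200, 0.8))  # 0.1
```

Because both planchettes see the same neutron fluence, the fluence, fission cross section, and detection efficiency all cancel in the ratio, which is what lets the technique reach the μBq/L range quoted above.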

  19. Fabrication Techniques of Stretchable and Cloth Electroadhesion Samples for Implementation on Devices with Space Application

    Data.gov (United States)

    National Aeronautics and Space Administration — The purpose of this study is to determine materials and fabrication techniques for efficient space-rated electroadhesion (EA) samples. Liquid metals, including...

  20. Corrective Action Investigation Plan for Corrective Action Unit 137: Waste Disposal Sites, Nevada Test Site, Nevada, Rev. No.:0

    Energy Technology Data Exchange (ETDEWEB)

    Wickline, Alfred

    2005-12-01

    This Corrective Action Investigation Plan (CAIP) contains project-specific information including facility descriptions, environmental sample collection objectives, and criteria for conducting site investigation activities at Corrective Action Unit (CAU) 137: Waste Disposal Sites. This CAIP has been developed in accordance with the ''Federal Facility Agreement and Consent Order'' (FFACO) (1996) that was agreed to by the State of Nevada, the U.S. Department of Energy (DOE), and the U.S. Department of Defense. Corrective Action Unit 137 contains sites that are located in Areas 1, 3, 7, 9, and 12 of the Nevada Test Site (NTS), which is approximately 65 miles (mi) northwest of Las Vegas, Nevada (Figure 1-1). Corrective Action Unit 137 is comprised of the eight corrective action sites (CASs) shown on Figure 1-1 and listed below: (1) CAS 01-08-01, Waste Disposal Site; (2) CAS 03-23-01, Waste Disposal Site; (3) CAS 03-23-07, Radioactive Waste Disposal Site; (4) CAS 03-99-15, Waste Disposal Site; (5) CAS 07-23-02, Radioactive Waste Disposal Site; (6) CAS 09-23-07, Radioactive Waste Disposal Site; (7) CAS 12-08-01, Waste Disposal Site; and (8) CAS 12-23-07, Waste Disposal Site. The Corrective Action Investigation (CAI) will include field inspections, radiological surveys, geophysical surveys, sampling of environmental media, analysis of samples, and assessment of investigation results, where appropriate. Data will be obtained to support corrective action alternative evaluations and waste management decisions. The CASs in CAU 137 are being investigated because hazardous and/or radioactive constituents may be present in concentrations that could potentially pose a threat to human health and the environment. Existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives for the CASs. Additional information will be generated by conducting a CAI before evaluating and selecting

  1. Real-Time Correction By Optical Tracking with Integrated Geometric Distortion Correction for Reducing Motion Artifacts in fMRI

    Science.gov (United States)

    Rotenberg, David J.

    Artifacts caused by head motion are a substantial source of error in fMRI that limits its use in neuroscience research and clinical settings. Real-time scan-plane correction by optical tracking has been shown to correct slice misalignment and non-linear spin-history artifacts, however residual artifacts due to dynamic magnetic field non-uniformity may remain in the data. A recently developed correction technique, PLACE, can correct for absolute geometric distortion using the complex image data from two EPI images, with slightly shifted k-space trajectories. We present a correction approach that integrates PLACE into a real-time scan-plane update system by optical tracking, applied to a tissue-equivalent phantom undergoing complex motion and an fMRI finger tapping experiment with overt head motion to induce dynamic field non-uniformity. Experiments suggest that including volume by volume geometric distortion correction by PLACE can suppress dynamic geometric distortion artifacts in a phantom and in vivo and provide more robust activation maps.

  2. Weighing black holes using open-loop focus corrections for LGS-AO observations of galaxy nuclei at Gemini Observatory

    Science.gov (United States)

    McDermid, Richard M.; Krajnovic, Davor; Cappellari, Michele; Trujillo, Chadwick; Christou, Julian; Davies, Roger L.

    2010-07-01

    We present observations of early-type galaxies with laser guide star adaptive optics (LGS AO) obtained at Gemini North telescope using the NIFS integral field unit (IFU). We employ an innovative technique where the focus compensation due to the changing distance to the sodium layer is made 'open loop', allowing the extended galaxy nucleus to be used only for tip-tilt correction. The purpose of these observations is to determine high spatial resolution stellar kinematics within the nuclei of these galaxies to determine the masses of the super-massive black holes. The resulting data have spatial resolution of 0.2" FWHM or better. This is sufficient to positively constrain the presence of the central black hole in even low-mass early-type galaxies, suggesting that larger samples of such objects could be observed with this technique in the future. The open-loop focus correction technique is a supported queue-observing mode at Gemini, significantly extending the sky coverage in particular for faint, extended guide sources. We also provide preliminary results from tests combining tip/tilt correction from the Gemini peripheral guider with on-axis LGS. The current test system demonstrates feasibility of this mode, providing about a factor 2-3 improvement over natural seeing. With planned upgrades to the peripheral wave-front sensor, we hope to provide close to 100% sky coverage with low Strehl corrections, or 'improved seeing', significantly increasing flux concentration for deep field and extended object studies.

  3. Power corrections in the N-jettiness subtraction scheme

    Energy Technology Data Exchange (ETDEWEB)

    Boughezal, Radja [High Energy Physics Division, Argonne National Laboratory,Argonne, IL 60439 (United States); Liu, Xiaohui [Department of Physics, Beijing Normal University,Beijing, 100875 (China); Center of Advanced Quantum Studies, Beijing Normal University,Beijing, 100875 (China); Center for High-Energy Physics, Peking University,Beijing, 100871 (China); Maryland Center for Fundamental Physics, University of Maryland,College Park, MD 20742 (United States); Petriello, Frank [Department of Physics & Astronomy, Northwestern University,Evanston, IL 60208 (United States); High Energy Physics Division, Argonne National Laboratory,Argonne, IL 60439 (United States)

    2017-03-30

    We discuss the leading-logarithmic power corrections in the N-jettiness subtraction scheme for higher-order perturbative QCD calculations. We compute the next-to-leading order power corrections for an arbitrary N-jet process, and we explicitly calculate the power correction through next-to-next-to-leading order for color-singlet production for both qq̄ and gg initiated processes. Our results are compact and simple to implement numerically. Including the leading power correction in the N-jettiness subtraction scheme substantially improves its numerical efficiency. We discuss what features of our techniques extend to processes containing final-state jets.

  4. A review of analytical techniques for the determination of carbon-14 in environmental samples

    International Nuclear Information System (INIS)

    Milton, G.M.; Brown, R.M.

    1993-11-01

    This report contains a brief summary of analytical techniques commonly used for the determination of radiocarbon in a variety of environmental samples. Details of the applicable procedures developed and tested in the Environmental Research Branch at Chalk River Laboratories are appended

  5. Arsenic, Antimony, Chromium, and Thallium Speciation in Water and Sediment Samples with the LC-ICP-MS Technique

    Directory of Open Access Journals (Sweden)

    Magdalena Jabłońska-Czapla

    2015-01-01

    Chemical speciation is a very important subject in environmental protection, toxicology, and chemical analytics due to the fact that the toxicity, availability, and reactivity of trace elements depend on the chemical forms in which these elements occur. Research on low analyte levels, particularly in complex matrix samples, requires more and more advanced and sophisticated analytical methods and techniques. The latest trends in this field concern the so-called hyphenated techniques. Arsenic, antimony, chromium, and (underestimated) thallium attract the closest attention of toxicologists and analysts. The properties of those elements depend on the oxidation state in which they occur. The aim of the following paper is to answer the question why speciation analytics is so important. The paper also provides numerous examples of hyphenated technique usage (e.g., the LC-ICP-MS application in the speciation analysis of chromium, antimony, arsenic, or thallium in water and bottom sediment samples). An important issue addressed is the preparation of environmental samples for speciation analysis.

  6. Techniques for the detection of pathogenic Cryptococcus species in wood decay substrata and the evaluation of viability in stored samples

    Directory of Open Access Journals (Sweden)

    Christian Alvarez

    2013-02-01

    In this study, we evaluated several techniques for the detection of the yeast form of Cryptococcus in decaying wood and measured the viability of these fungi in environmental samples stored in the laboratory. Samples were collected from a tree known to be positive for Cryptococcus and were each inoculated on 10 Niger seed agar (NSA) plates. The conventional technique (CT) yielded a greater number of positive samples and indicated a higher fungal density [in colony forming units per gram of wood (CFU.g-1)] compared to the humid swab technique (ST). However, the difference in positive and false negative results between the CT and ST was not significant. The threshold of detection for the CT was 0.05.10³ CFU.g-1, while the threshold for the ST was greater than 0.1.10³ CFU.g-1. No colonies were recovered using the dry swab technique. We also determined the viability of Cryptococcus in wood samples stored for 45 days at 25°C using the CT and ST and found that samples not only continued to yield a positive response, but also exhibited an increase in CFU.g-1, suggesting that Cryptococcus is able to grow in stored environmental samples. The ST.1, in which samples collected with swabs were immediately plated on NSA medium, was more efficient and less laborious than either the CT or ST and required approximately 10 min to perform; however, additional studies are needed to validate this technique.

  7. CORRECTION OF SEVERE STIFF SCOLIOSIS THROUGH EXTRAPLEURAL INTERBODY RELEASE AND OSTEOTOMY (LIEPO

    Directory of Open Access Journals (Sweden)

    Cleiton Dias Naves

    Objective: To report a new technique for extrapleural interbody release with transcorporal osteotomy of the inferior vertebral plateau (LIEPO) and to evaluate the correction potential of this technique and its complications. Method: We included patients with scoliosis with a Cobb angle greater than 90° and flexibility less than 25% submitted to surgical treatment between 2012 and 2016 by the LIEPO technique at the National Institute of Traumatology and Orthopedics (INTO). Sagittal and coronal alignment and the translation of the apical vertebra were measured, the degree of correction of the deformity was calculated from the pre- and postoperative radiographs, and the complications were described. Results: Patients had an average blood loss of 1,525 ml, 8.8 hours of surgical time, 123° of scoliosis in the preoperative period, and a mean correction of 66%. There was no case of permanent neurological damage and no surgical revision. Conclusion: The LIEPO technique proved to be effective and safe in the treatment of severe stiff scoliosis, reaching a correction potential close to that of the PEISR (posterior extrapleural intervertebral space release) technique and superior to that of the pVCR (posterior vertebral column resection), with no infection or permanent neurological deficit. New studies are needed to validate this promising technique.

  8. Investigation of CPD and HMDS Sample Preparation Techniques for Cervical Cells in Developing Computer-Aided Screening System Based on FE-SEM/EDX

    Science.gov (United States)

    Ng, Siew Cheok; Abu Osman, Noor Azuan

    2014-01-01

    This paper investigated the effects of the critical-point drying (CPD) and hexamethyldisilazane (HMDS) sample preparation techniques for cervical cells on field emission scanning electron microscopy and energy-dispersive X-ray analysis (FE-SEM/EDX). We investigated the visualization of the cervical cell image and the elemental distribution on the cervical cell for the two sample preparation techniques. Using FE-SEM/EDX, cervical cell images were captured and cell element compositions were extracted for both sample preparation techniques. Cervical cell image quality, elemental composition, and processing time were considered for the comparison of performance. Qualitatively, the FE-SEM image based on the HMDS preparation technique had better image quality than the CPD technique in terms of the degree of cell spread on the specimen and morphologic signs of cell deterioration (i.e., existence of plate and pellet drying artifacts and membrane blebs). Quantitatively, with mapping and line-scanning EDX analysis, carbon and oxygen element compositions with the HMDS technique were higher than with the CPD technique in terms of weight percentages. The HMDS technique also has a shorter processing time than the CPD technique. The results indicate that FE-SEM imaging, elemental composition, and processing time for sample preparation with the HMDS technique were better than with the CPD technique for developing a computer-aided cervical cell screening system. PMID:25610902

  9. Real-time scanning charged-particle microscope image composition with correction of drift.

    Science.gov (United States)

    Cizmar, Petr; Vladár, András E; Postek, Michael T

    2011-04-01

    In this article, a new scanning electron microscopy (SEM) image composition technique is described, which can significantly reduce drift-related image corruption. Drift commonly causes blur and distortion in SEM images. Such corruption ordinarily appears when conventional image-acquisition methods, i.e., "slow scan" and "fast scan," are applied. The damage is often very significant; it may render images unusable for metrology applications, especially where subnanometer accuracy is required. The described correction technique works with a large number of quickly taken frames, which are properly aligned and then composed into a single image. Such an image contains much less noise than the individual frames, while blur and deformation are minimized. This technique also provides useful information about changes of the sample position in time, which may be applied to investigate the drift properties of the instrument without the need for additional equipment.
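    The align-and-average step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes noise-free, integer-pixel drift on periodic images and estimates each frame's shift from the peak of an FFT-based cross-correlation before undoing the drift and averaging.

    ```python
    import numpy as np

    def register_shift(ref, frame):
        """Estimate the integer (dy, dx) drift such that frame ~ np.roll(ref, (dy, dx)),
        via the peak of the FFT-based cross-correlation."""
        corr = np.fft.ifft2(np.fft.fft2(frame) * np.conj(np.fft.fft2(ref))).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Unwrap periodic shifts into the signed range (-size/2, size/2].
        return tuple(int(p) if p <= s // 2 else int(p - s)
                     for p, s in zip(peak, corr.shape))

    def compose_frames(frames):
        """Align every quickly acquired frame to the first one and average them."""
        ref = frames[0].astype(float)
        acc = ref.copy()
        for frame in frames[1:]:
            frame = frame.astype(float)
            dy, dx = register_shift(ref, frame)
            acc += np.roll(frame, (-dy, -dx), axis=(0, 1))  # undo the estimated drift
        return acc / len(frames)
    ```

    In a real instrument the shifts are sub-pixel and the frames are noisy, so the averaging is what recovers the signal-to-noise ratio; the registration itself would use interpolated correlation peaks.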

  10. Clinical application of microsampling versus conventional sampling techniques in the quantitative bioanalysis of antibiotics: a systematic review.

    Science.gov (United States)

    Guerra Valero, Yarmarly C; Wallis, Steven C; Lipman, Jeffrey; Stove, Christophe; Roberts, Jason A; Parker, Suzanne L

    2018-03-01

    Conventional sampling techniques for clinical pharmacokinetic studies often require the removal of large blood volumes from patients. This can result in a physiological or emotional burden, particularly for neonates or pediatric patients. Antibiotic pharmacokinetic studies are typically performed on healthy adults or general ward patients. These may not account for alterations to a patient's pathophysiology and can lead to suboptimal treatment. Microsampling offers an important opportunity for clinical pharmacokinetic studies in vulnerable patient populations, where smaller sample volumes can be collected. This systematic review provides a description of currently available microsampling techniques and an overview of studies reporting the quantitation and validation of antibiotics using microsampling. A comparison of microsampling to conventional sampling in clinical studies is included.

  11. Novel Principles and Techniques to Create a Natural Design in Female Hairline Correction Surgery.

    Science.gov (United States)

    Park, Jae Hyun

    2015-12-01

    Female hairline correction surgery is becoming increasingly popular. However, no guidelines or methods of female hairline design have been introduced to date. The purpose of this study was to create an initial framework based on novel principles of female hairline design and then use artistic ability and experience to fine-tune this framework. An understanding of the concept of 5 areas (frontal area, frontotemporal recess area, temporal peak, infratemple area, and sideburns) and 5 points (C, A, B, T, and S) is required for female hairline correction surgery (the 5A5P principle). The general concepts of female hairline correction surgery and natural design methods are explained here with a focus on the correlations between these 5 areas and 5 points. A natural and aesthetic female hairline can be created with application of the above-mentioned concepts. The 5A5P principle of forming the female hairline is very useful in female hairline correction surgery.

  12. Corrective Action Investigation Plan for Corrective Action Unit 409: Other Waste Sites, Tonopah Test Range, Nevada (Rev. 0)

    International Nuclear Information System (INIS)

    2000-01-01

    undisturbed locations near the area of the disposal pits; field screening samples for radiological constituents; analysis for geotechnical/hydrologic parameters of samples beneath the disposal pits; and bioassesment samples, if VOC or TPH contamination concentrations exceed field-screening levels. The results of this field investigation will support a defensible evaluation of corrective action alternatives in the corrective action decision document

  13. Elemental analysis of brazing alloy samples by neutron activation technique

    International Nuclear Information System (INIS)

    Eissa, E.A.; Rofail, N.B.; Hassan, A.M.; El-Shershaby, A.; Walley El-Dine, N.

    1996-01-01

    Two brazing alloy samples (CP2 and CP3) have been investigated by the neutron activation analysis (NAA) technique in order to identify and estimate their constituent elements. The pneumatic irradiation rabbit system (PIRS), installed at the first Egyptian research reactor (ETRR-1), was used for short-time irradiation (30 s) with a thermal neutron flux of 1.6 x 10^11 n/cm^2/s in the reactor reflector, where the thermal to epithermal neutron flux ratio is 106. Long-time irradiation (48 hours) was performed at the reactor core periphery with a thermal neutron flux of 3.34 x 10^12 n/cm^2/s and a thermal to epithermal neutron flux ratio of 79. Activation by epithermal neutrons was taken into account for the (1/v) and resonance neutron absorption in both methods. A hyper-pure germanium detection system was used for gamma-ray acquisitions. The concentration values of Al, Cr, Fe, Co, Cu, Zn, Se, Ag and Sb were estimated as percentages of the sample weight and compared with reported values. 1 tab

  14. Elemental analysis of brazing alloy samples by neutron activation technique

    Energy Technology Data Exchange (ETDEWEB)

    Eissa, E A; Rofail, N B; Hassan, A M [Reactor and Neutron physics Department, Nuclear Research Centre, Atomic Energy Authority, Cairo (Egypt); El-Shershaby, A; Walley El-Dine, N [Physics Department, Faculty of Girls, Ain Shams Universty, Cairo (Egypt)

    1997-12-31

    Two brazing alloy samples (CP2 and CP3) have been investigated by the neutron activation analysis (NAA) technique in order to identify and estimate their constituent elements. The pneumatic irradiation rabbit system (PIRS), installed at the first Egyptian research reactor (ETRR-1), was used for short-time irradiation (30 s) with a thermal neutron flux of 1.6 x 10{sup 11} n/cm{sup 2}/s in the reactor reflector, where the thermal to epithermal neutron flux ratio is 106. Long-time irradiation (48 hours) was performed at the reactor core periphery with a thermal neutron flux of 3.34 x 10{sup 12} n/cm{sup 2}/s and a thermal to epithermal neutron flux ratio of 79. Activation by epithermal neutrons was taken into account for the (1/v) and resonance neutron absorption in both methods. A hyper-pure germanium detection system was used for gamma-ray acquisitions. The concentration values of Al, Cr, Fe, Co, Cu, Zn, Se, Ag and Sb were estimated as percentages of the sample weight and compared with reported values. 1 tab.

  15. Trace uranium analysis in geological sample by isotope dilution-alpha spectrometry and comparison with other techniques

    International Nuclear Information System (INIS)

    Shihomatsu, H.M.; Iyer, S.S.

    1988-12-01

    Establishment of uranium determination in geological samples by the alpha spectrometric isotope dilution technique using a 233 U tracer is described in the present work. The various steps involved in the method, namely preparation of the sample, electrodeposition, alpha spectrometry, isotope dilution, calculation of the concentration, and error statistics, are discussed in detail. The experimental parameters for the electrodeposition of uranium, such as current density, pH, concentration of the electrolyte solution, deposition time, and electrode distance, were all optimised based on the efficiency of the deposition. The total accuracy and precision of IDAS using the 233 U tracer in the determination of uranium in mineral and granite samples were of the order of 1 to 2% for the concentration range of 50-1500 ppm of U. Our results are compared with those obtained by other workers using similar and different techniques. (author) [pt
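    The isotope dilution arithmetic behind such a measurement can be illustrated as follows. This is a hedged sketch, not the authors' procedure: the function name and the idealized inputs (a perfect 238U/233U alpha peak-area ratio, no detection-efficiency or chemical-yield corrections) are assumptions for the demonstration; only the half-lives are standard values.

    ```python
    import math

    T_HALF_U233 = 1.592e5    # years
    T_HALF_U238 = 4.468e9    # years
    AVOGADRO = 6.022e23      # atoms/mol
    M_U238 = 238.05          # g/mol

    def uranium_concentration_ppm(activity_ratio_238_233, tracer_atoms_u233, sample_mass_g):
        """Isotope dilution: since activity A = lambda * N, the atom ratio follows
        from the measured activity ratio, N238 = N233 * (A238/A233) * (lam233/lam238)."""
        lam233 = math.log(2) / T_HALF_U233
        lam238 = math.log(2) / T_HALF_U238
        n238 = tracer_atoms_u233 * activity_ratio_238_233 * (lam233 / lam238)
        grams_u = n238 * M_U238 / AVOGADRO
        return grams_u / sample_mass_g * 1e6  # parts per million by mass
    ```

    The large lam233/lam238 factor (~2.8 x 10^4) is why a tiny 233U spike suffices: the short-lived tracer is far more active per atom than natural 238U.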

  16. Surface excitation correction of electron IMFP of selected polymers

    International Nuclear Information System (INIS)

    Gergely, G.; Orosz, G.T.; Lesiak, B.; Jablonski, A.; Toth, J.; Varga, D.

    2004-01-01

    Complete text of publication follows. The IMFP [1] of selected polymers: polythiophenes, polyanilines, polyethylene (PE) [2], was determined by EPES [3] experiments, using Si, Ge and Ag (for PE) reference samples. Experiments were evaluated by Monte Carlo (MC) simulations [1] applying the NIST 64 (1996 and 2002) databases and the IMFP data of Tanuma and Gries [1]. The integrated experimental elastic peak ratios of sample and reference are different from those calculated by Monte Carlo (MC) simulation [1]. The difference was attributed to the difference in surface excitation parameters (SEP) [4] of the sample and reference. The SEP parameters of the reference samples were taken from Chen and Werner. A new procedure was developed for experimental determination of the SEP parameters of polymer samples. It is a trial-and-error method for optimising the SEP correction of the IMFP and the correction of the experimental elastic peak ratio [4]. Experiments made with a HSA spectrometer [5] covered the E = 0.2-2 keV energy range. The improvement with SEP correction appears in reducing the difference between the corrected and MC-calculated IMFPs, assuming the IMFPs of Gries and of Tanuma et al. [1] for the polymers and the standard, respectively. The experimental peak areas were corrected for the hydrogen peak. For the direct detection of hydrogen see Refs. [6] and [7]. Results obtained with the different NIST 64 databases and atomic potentials [8] are presented. This work was supported by the Hungarian Science Foundation of OTKA: T037709 and T038016. (author)

  17. Comparison of mobile and stationary spore-sampling techniques for estimating virulence frequencies in aerial barley powdery mildew populations

    DEFF Research Database (Denmark)

    Hovmøller, M.S.; Munk, L.; Østergård, Hanne

    1995-01-01

    Gene frequencies in samples of aerial populations of barley powdery mildew (Erysiphe graminis f.sp. hordei), which were collected in adjacent barley areas and in successive periods of time, were compared using mobile and stationary sampling techniques. Stationary samples were collected from trap ...

  18. A method of detector correction for cosmic ray muon radiography

    International Nuclear Information System (INIS)

    Liu Yuanyuan; Zhao Ziran; Chen Zhiqiang; Zhang Li; Wang Zhentian

    2008-01-01

    Cosmic ray muon radiography, which has good penetrability and sensitivity to high-Z materials, is an effective way of detecting shielded nuclear materials. The problem of data correction is one of the key points of the muon radiography technique. Because of the influence of environmental background, environmental noise and detector error, the raw data cannot be used directly. If the raw data were used for reconstruction without any correction, severe artifacts would appear. Based on the characteristics of the muon radiography system and aimed at the error of the detectors, this paper proposes a method of detector correction. The simulation experiments demonstrate that this method can effectively correct the error produced by the detectors, taking the technique of cosmic ray muon radiography a further step toward practical use. (authors)

  19. Inhaler technique maintenance: gaining an understanding from the patient's perspective.

    Science.gov (United States)

    Ovchinikova, Ludmila; Smith, Lorraine; Bosnic-Anticevich, Sinthia

    2011-08-01

    The aim of this study was to determine the patient-, education-, and device-related factors that predict inhaler technique maintenance. Thirty-one community pharmacists were trained to deliver inhaler technique education to people with asthma. Pharmacists evaluated (based on published checklists), and where appropriate, delivered inhaler technique education to patients (participants) in the community pharmacy at baseline (Visit 1) and 1 month later (Visit 2). Data were collected on participant demographics, asthma history, current asthma control, history of inhaler technique education, and a range of psychosocial aspects of disease management (including adherence to medication, motivation for correct technique, beliefs regarding the importance of maintaining correct technique, and necessity and concern beliefs regarding preventer therapy). Stepwise backward logistic regression was used to identify the predictors of inhaler technique maintenance at 1 month. In total 145 and 127 participants completed Visits 1 and 2, respectively. At baseline, 17% of patients (n = 24) demonstrated correct technique (score 11/11), which increased to 100% (n = 139) after remedial education by pharmacists. At follow-up, 61% (n = 77) of patients demonstrated correct technique. The predictors of inhaler technique maintenance based on the logistic regression model (χ²(3, N = 125) = 16.22, p = .001) were use of a dry powder inhaler over a pressurized metered-dose inhaler (OR 2.6), having better asthma control at baseline (OR 2.3), and being more motivated to practice correct inhaler technique (OR 1.2). Contrary to what is typically recommended in previous research, correct inhaler technique maintenance may involve more than repetition of instructions. This study found that past technique education factors had no bearing on technique maintenance, whereas patient psychosocial factors (motivation) did.

  20. Preparation of quality control samples for thyroid hormones T3 and T4 in radioimmunoassay techniques

    International Nuclear Information System (INIS)

    Ahmed, F.O.A.

    2006-03-01

    Today, radioimmunoassay has become one of the best techniques for quantitative analysis of very low concentrations of different substances. RIA is widely used in medical and research laboratories. To maintain high specificity and accuracy in RIA and other related techniques, quality controls must be introduced. In this dissertation, quality control samples were prepared for the thyroid hormones triiodothyronine (T3) and thyroxine (T4) using RIA techniques. Ready-made Chinese T4 and T3 RIA kits were used, and the IAEA statistical package was selected. (Author)

  1. Corrective Action Investigation Plan for Corrective Action Unit 166: Storage Yards and Contaminated Materials, Nevada Test Site, Nevada, Rev. No.: 0

    Energy Technology Data Exchange (ETDEWEB)

    David Strand

    2006-06-01

    Corrective Action Unit 166 is located in Areas 2, 3, 5, and 18 of the Nevada Test Site, which is 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit (CAU) 166 is comprised of the seven Corrective Action Sites (CASs) listed below: (1) 02-42-01, Cond. Release Storage Yd - North; (2) 02-42-02, Cond. Release Storage Yd - South; (3) 02-99-10, D-38 Storage Area; (4) 03-42-01, Conditional Release Storage Yard; (5) 05-19-02, Contaminated Soil and Drum; (6) 18-01-01, Aboveground Storage Tank; and (7) 18-99-03, Wax Piles/Oil Stain. These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. Additional information will be obtained by conducting a corrective action investigation (CAI) before evaluating corrective action alternatives and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on February 28, 2006, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office; Stoller-Navarro Joint Venture; and Bechtel Nevada. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 166. Appendix A provides a detailed discussion of the DQO methodology and the DQOs specific to each CAS. The scope of the CAI for CAU 166 includes the following activities: (1) Move surface debris and/or materials, as needed, to facilitate sampling. (2) Conduct radiological surveys. (3) Perform field screening. (4) Collect and submit environmental samples for laboratory analysis to determine if

  2. Improved sample preparation and counting techniques for enhanced tritium measurement sensitivity

    Science.gov (United States)

    Moran, J.; Aalseth, C.; Bailey, V. L.; Mace, E. K.; Overman, C.; Seifert, A.; Wilcox Freeburg, E. D.

    2015-12-01

    Tritium (T) measurements offer insight into a wealth of environmental applications including hydrologic tracking, discerning ocean circulation patterns, and aging ice formations. However, the relatively short half-life of T (12.3 years) limits its effective age dating range. Compounding this limitation is the decrease in atmospheric T content by over two orders of magnitude (from 1000-2000 TU in 1962 to roughly 10-20 TU today) following the cessation of atmospheric weapons testing in the 1960's. We are developing sample preparation methods coupled to direct counting of T via ultra-low background proportional counters which, when combined, offer improved T measurement sensitivity (~4.5 mmoles of H2 equivalent) and will help expand the application of T age dating to smaller sample sizes linked to persistent environmental questions despite the limitations above. For instance, this approach can be used to tritium-date ~2.2 mmoles of CH4 collected from sample-limited systems including microbial communities, soils, or subsurface aquifers, and can be combined with radiocarbon dating to distinguish the methane's formation age from the C age in a system. This approach can also expand investigations into soil organic C, where the improved sensitivity will permit resolution of soil C into more descriptive fractions and provide direct assessments of the stability of specific classes of organic matter in soil environments. We are employing a multiple-step sample preparation system whereby organic samples are first combusted, with the resulting CO2 and H2O being used as a feedstock to synthesize CH4. This CH4 is mixed with Ar and loaded directly into an ultra-low background proportional counter for measurement of T β decay in a shallow underground laboratory. Analysis of water samples requires only the addition of geologic CO2 feedstock with the sample for methane synthesis. The chemical nature of the preparation techniques enables high sample throughput, with only the final measurement requiring T decay and total sample analysis time ranging from 2-5 weeks.
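    The age estimate underlying tritium dating follows directly from the 12.3-year half-life. A minimal sketch of the decay arithmetic (it assumes the initial tritium content A0 of the sample is known, which in practice is the hard part):

    ```python
    import math

    T_HALF_TRITIUM = 12.3  # years

    def tritium_age_years(initial_tu, measured_tu):
        """Apparent decay age from A = A0 * 2^(-t / T_half),
        i.e. t = (T_half / ln 2) * ln(A0 / A). Activities in tritium units (TU)."""
        return T_HALF_TRITIUM / math.log(2) * math.log(initial_tu / measured_tu)
    ```

    For example, a sample at half its initial activity is one half-life (12.3 years) old; the abstract's two-orders-of-magnitude drop in atmospheric tritium is what compresses the usable dating window.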

  3. Improving quantitative dosimetry in (177)Lu-DOTATATE SPECT by energy window-based scatter corrections

    DEFF Research Database (Denmark)

    de Nijs, Robin; Lagerburg, Vera; Klausen, Thomas L

    2014-01-01

    and the activity, which depends on the collimator type, the utilized energy windows and the applied scatter correction techniques. In this study, energy window subtraction-based scatter correction methods are compared experimentally and quantitatively. MATERIALS AND METHODS: (177)Lu SPECT images of a phantom...... technique, the measured ratio was close to the real ratio, and the differences between spheres were small. CONCLUSION: For quantitative (177)Lu imaging MEGP collimators are advised. Both energy peaks can be utilized when the ESSE correction technique is applied. The difference between the calculated...

  4. Measurement of double differential cross sections of charged particle emission reactions by incident DT neutrons. Correction for energy loss of charged particle in sample materials

    International Nuclear Information System (INIS)

    Takagi, Hiroyuki; Terada, Yasuaki; Murata, Isao; Takahashi, Akito

    2000-01-01

    In the measurement of charged particle emission spectra induced by neutrons, correcting the energy loss of the charged particles in the sample material becomes a very important inverse problem. To deal with this inverse problem, we have applied the Bayesian unfolding method to correct the energy loss and tested the performance of the method. Although the method is very simple, the test confirmed that its performance is not at all inferior to other methods, and the method can therefore be a powerful tool for charged particle spectrum measurement. (author)
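    A generic iterative Bayesian unfolding of the kind the abstract mentions can be sketched as below. This is a standard D'Agostini-style iteration, not necessarily the authors' exact formulation; the response matrix (here, the probability of measuring energy bin j given true-emission bin i, encoding the energy loss in the sample) and the flat starting prior are illustrative assumptions.

    ```python
    import numpy as np

    def bayesian_unfold(response, measured, iterations=10):
        """Iterative Bayesian unfolding sketch.
        response[j, i] = P(observed in bin j | true bin i); measured[j] = counts."""
        n_true = response.shape[1]
        estimate = np.full(n_true, measured.sum() / n_true)  # flat starting prior
        efficiency = response.sum(axis=0)                    # P(observed at all | true bin i)
        for _ in range(iterations):
            folded = response @ estimate                     # expected observed spectrum
            # Bayes: P(true i | observed j) for the current prior "estimate".
            weights = response * estimate / np.maximum(folded[:, None], 1e-300)
            estimate = (weights.T @ measured) / efficiency
        return estimate
    ```

    Each pass redistributes the measured counts back into true-energy bins in proportion to the current posterior, so the estimate converges toward a spectrum whose folded image reproduces the measurement.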

  5. Effective absorption correction for energy dispersive X-ray mapping in a scanning transmission electron microscope: analysing the local indium distribution in rough samples of InGaN alloy layers.

    Science.gov (United States)

    Wang, X; Chauvat, M-P; Ruterana, P; Walther, T

    2017-12-01

    We have applied our previous method of self-consistent k*-factors for absorption correction in energy-dispersive X-ray spectroscopy to quantify the indium content in X-ray maps of thick compound InGaN layers. The method allows us to quantify the indium concentration without measuring the sample thickness, density or beam current, and works even if there is a drastic local thickness change due to sample roughness or preferential thinning. The method is shown to select, point-by-point in a two-dimensional spectrum image or map, the k*-factor from the local Ga K/L intensity ratio that is most appropriate for the corresponding sample geometry, demonstrating it is not the sample thickness measured along the electron beam direction but the optical path length the X-rays have to travel through the sample that is relevant for the absorption correction. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.

  6. A prospective, comparative, evaluator-blind clinical study investigating efficacy and safety of two injection techniques with Radiesse ® for the correction of skin changes in aging hands

    Directory of Open Access Journals (Sweden)

    Elena I Gubanova

    2015-01-01

    Full Text Available Background: Dermal fillers are used to correct age-related changes in hands. Aims: To assess the efficacy and safety of two injection techniques to treat age-related changes in the hands using the calcium hydroxylapatite filler Radiesse®. Settings and Design: This was a prospective, comparative, evaluator-blind, single-center study. Materials and Methods: Radiesse® (0.8 mL/0.2 mL 2% lidocaine) was injected subdermally on Day 1 (D01), using a needle multipoint technique in one hand (N) and a fan-like cannula technique in the other (C). Assessments were made pre-injection, on D14, and at Months 2, 3 and 5 (M02, M03, M05) using the Merz Aesthetics Hand Grading Scale (MAS) and the Global Aesthetic Improvement Scale (GAIS). Participants completed questionnaires on satisfaction, pain and adverse events (AEs). Statistical Analysis Used: Data distribution was tested with the Shapiro-Wilk and Levene's tests. The Wilcoxon signed-rank and Chi-square tests were employed to evaluate quantitative and qualitative data, respectively. Results: All 10 participants completed the study; four opted for a M03 touch-up (0.8 mL Radiesse®). Evaluator-assessed mean GAIS scores were between 2 (significant improvement but not complete correction) and 3 (optimal cosmetic result) at each time point. The MAS score improved from D01 to M05 (N: 2.60 to 1.40; C: 2.20 to 1.30). Following treatment, participants reported that skin was softer, more elastic, more youthful and less wrinkled. Other than less noticeable veins and tendons on the C hand, no differences in participant satisfaction were noted. All AEs were mild, with no serious AEs reported. Conclusions: Both injection techniques (needle and cannula) demonstrated equivalent clinical efficacy with a comparable safety profile for the correction of age-related changes in hands with Radiesse®.

  7. Corrective Action Investigation Plan for Corrective Action Unit 145: Wells and Storage Holes, Nevada Test Site, Nevada, Rev. No.: 0

    Energy Technology Data Exchange (ETDEWEB)

    David A. Strand

    2004-09-01

    This Corrective Action Investigation Plan (CAIP) contains project-specific information for conducting site investigation activities at Corrective Action Unit (CAU) 145: Wells and Storage Holes. Information presented in this CAIP includes facility descriptions, environmental sample collection objectives, and criteria for the selection and evaluation of environmental samples. Corrective Action Unit 145 is located in Area 3 of the Nevada Test Site, which is 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit 145 is comprised of the six Corrective Action Sites (CASs) listed below: (1) 03-20-01, Core Storage Holes; (2) 03-20-02, Decon Pad and Sump; (3) 03-20-04, Injection Wells; (4) 03-20-08, Injection Well; (5) 03-25-01, Oil Spills; and (6) 03-99-13, Drain and Injection Well. These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives. Additional information will be obtained by conducting a corrective action investigation (CAI) prior to evaluating corrective action alternatives and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable corrective action alternatives that will be presented in the Corrective Action Decision Document. One conceptual site model with three release scenario components was developed for the six CASs to address all releases associated with the site. The sites will be investigated based on data quality objectives (DQOs) developed on June 24, 2004, by representatives of the Nevada Division of Environmental Protection; U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office; Stoller-Navarro Joint Venture; and Bechtel Nevada. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 145.

  8. Performance evaluation of an importance sampling technique in a Jackson network

    Science.gov (United States)

    brahim Mahdipour, E.; Masoud Rahmani, Amir; Setayeshi, Saeed

    2014-03-01

    Importance sampling is a technique that is commonly used to speed up Monte Carlo simulation of rare events. However, little is known regarding the design of efficient importance sampling algorithms in the context of queueing networks. The standard approach, which simulates the system using an a priori fixed change of measure suggested by large deviation analysis, has been shown to fail in even the simplest network settings. Estimating probabilities associated with rare events has been a topic of great importance in queueing theory, and in applied probability at large. In this article, we analyse the performance of an importance sampling estimator for a rare event probability in a Jackson network. The article applies strict deadlines to a two-node Jackson network with feedback whose arrival and service rates are modulated by an exogenous finite state Markov process. We have estimated the probability of network blocking for various sets of parameters, as well as the probability of customers missing their deadlines for different loads and deadline values. We finally show that the probability of total population overflow may be affected by various deadline values, service rates and arrival rates.
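    The change-of-measure idea the abstract relies on can be shown on a much simpler rare event than a network overflow. The sketch below is an assumption-laden toy, not the authors' Jackson-network estimator: it estimates a Gaussian tail probability by sampling from a mean-shifted proposal and reweighting each hit by the likelihood ratio, which is exactly what makes the event frequent under simulation while keeping the estimate unbiased.

    ```python
    import math
    import random

    def importance_sample_tail(threshold, n=100_000, seed=1):
        """Estimate P(Z > threshold) for Z ~ N(0, 1) by drawing from the
        exponentially tilted proposal N(threshold, 1) and reweighting."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n):
            x = rng.gauss(threshold, 1.0)  # sample under the shifted measure
            if x > threshold:
                # Likelihood ratio phi(x) / phi(x - threshold)
                #   = exp(-threshold * x + threshold**2 / 2)
                total += math.exp(-threshold * x + threshold ** 2 / 2)
        return total / n
    ```

    Under naive Monte Carlo, P(Z > 4) ≈ 3.2e-5 would need millions of samples for a stable estimate; under the tilted measure roughly half the draws hit the event, and the weights carry the correction back to the original measure.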

  9. Precise material identification method based on a photon counting technique with correction of the beam hardening effect in X-ray spectra

    International Nuclear Information System (INIS)

    Kimoto, Natsumi; Hayashi, Hiroaki; Asahara, Takashi; Mihara, Yoshiki; Kanazawa, Yuki; Yamakawa, Tsutomu; Yamamoto, Shuichiro; Yamasaki, Masashi; Okada, Masahiro

    2017-01-01

    The aim of our study is to develop a novel material identification method based on a photon counting technique, in which the incident and penetrating X-ray spectra are analyzed. Dividing a 40 kV X-ray spectrum into two energy regions, the corresponding linear attenuation coefficients are derived. We can identify materials precisely using the relationship between atomic number and linear attenuation coefficient through correction of the beam hardening effect of the X-ray spectra. - Highlights: • We propose a precise material identification method to be used in a photon counting system. • Beam hardening correction is important, even when the analysis is applied to short energy regions of the X-ray spectrum. • Experiments using a single probe-type CdTe detector were performed, and a Monte Carlo simulation was also carried out. • We describe the applicability of our method to clinical diagnostic X-ray imaging in the near future.
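    The core arithmetic of a two-window method of this kind is Beer-Lambert attenuation evaluated per energy window. The sketch below is a hedged illustration (function names and the example numbers are hypothetical, and no beam-hardening correction is applied, which is precisely the refinement the paper adds): the low/high attenuation ratio varies with effective atomic number and so can index a lookup of candidate materials.

    ```python
    import math

    def linear_attenuation(incident_counts, transmitted_counts, thickness_cm):
        """Beer-Lambert: mu = ln(I0 / I) / t for one energy window (units: 1/cm)."""
        return math.log(incident_counts / transmitted_counts) / thickness_cm

    def attenuation_ratio(low_pair, high_pair, thickness_cm):
        """Ratio of low- to high-window linear attenuation coefficients.
        Each pair is (incident_counts, transmitted_counts) for that window."""
        mu_low = linear_attenuation(*low_pair, thickness_cm)
        mu_high = linear_attenuation(*high_pair, thickness_cm)
        return mu_low / mu_high
    ```

    Because the thickness cancels in the ratio, the index depends on the material rather than the sample geometry; the hard part, as the highlights note, is that spectral hardening within each window biases the effective mu unless corrected.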

  10. Beyond simple small-angle X-ray scattering: developments in online complementary techniques and sample environments.

    Science.gov (United States)

    Bras, Wim; Koizumi, Satoshi; Terrill, Nicholas J

    2014-11-01

    Small- and wide-angle X-ray scattering (SAXS, WAXS) are standard tools in materials research. The simultaneous measurement of SAXS and WAXS data in time-resolved studies has gained popularity due to the complementary information obtained. Furthermore, the combination of these data with non X-ray based techniques, via either simultaneous or independent measurements, has advanced understanding of the driving forces that lead to the structures and morphologies of materials, which in turn give rise to their properties. The simultaneous measurement of different data regimes and types, using either X-rays or neutrons, and the desire to control parameters that initiate and control structural changes have led to greater demands on sample environments. Examples of developments in technique combinations and sample environment design are discussed, together with a brief speculation about promising future developments.

  11. Calibration of CR-39 for radon-related parameters using sealed cup technique

    International Nuclear Information System (INIS)

    Abo-Elmagd, M.; Daif, M. M.

    2010-01-01

    Effective radium content and the mass and areal radon exhalation rates of soil and rock samples are important radon-related parameters and can be used as a better indicator of radon risk. A sealed cup fitted with a CR-39 detector and holding the sample under measurement is an advantageous passive device for the measurement of these parameters. The main factors affecting the results are the detector calibration factor and the sample weight. The results of an active technique (Lucas cell) and the CR-39 detector have been found to be correlated, yielding a reliable detector calibration factor. The results illustrate the dependence of the CR-39 calibration factor on the sample weight, which is difficult to use in practice, because each sample weight has its own CR-39 calibration factor. This demonstrates the advantage of a back-diffusion correction: after correcting the results for back-diffusion effects, one obtains an approximately constant calibration factor for sample volumes up to one-third of the total sealed cup volume. Under this condition the calibration factor is equal to 0.237 track cm^-2 per Bq m^-3 d with about 1% uncertainty. (authors)
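    With the reported calibration factor, converting an observed track density to a time-averaged radon concentration is a one-line computation. A sketch using the stated calibration (0.237 track cm^-2 per Bq m^-3 d); the function name and example numbers are illustrative, and derived quantities such as exhalation rate would add cup geometry and sample mass on top of this:

    ```python
    def radon_concentration_bq_m3(track_density_per_cm2, exposure_days, k=0.237):
        """Average radon concentration in the sealed cup from the CR-39 track
        density: C = rho / (k * T), with k in track cm^-2 per Bq m^-3 d."""
        return track_density_per_cm2 / (k * exposure_days)
    ```

    For example, a 30-day exposure that accumulates 711 tracks/cm² corresponds to an average concentration of 100 Bq/m³ at this calibration.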

  12. Nonperturbative QCD corrections to electroweak observables

    Energy Technology Data Exchange (ETDEWEB)

    Renner, Dru B. [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Feng, Xu [High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki (Japan); Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Petschlies, Marcus [The Cyprus Institute, Nicosia (Cyprus)

    2012-06-15

    Nonperturbative QCD corrections are important to many low-energy electroweak observables, for example the muon magnetic moment. However, hadronic corrections also play a significant role at much higher energies due to their impact on the running of standard model parameters, such as the electromagnetic coupling. Currently, these hadronic contributions are accounted for by a combination of experimental measurements, effective field theory techniques and phenomenological modeling but ideally should be calculated from first principles. Recent developments indicate that many of the most important hadronic corrections may be feasibly calculated using lattice QCD methods. To illustrate this, we examine the lattice computation of the leading-order QCD corrections to the muon magnetic moment, paying particular attention to a recently developed method but also reviewing the results from other calculations. We then continue with several examples that demonstrate the potential impact of the new approach: the leading-order corrections to the electron and tau magnetic moments, the running of the electromagnetic coupling, and a class of the next-to-leading-order corrections for the muon magnetic moment. Along the way, we mention applications to the Adler function, which can be used to determine the strong coupling constant, and QCD corrections to muonic-hydrogen.

  13. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat error in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently, Chakraborty proposed a simple technique, called the packet combining scheme, in which errors are corrected at the receiver from the erroneous copies. The Packet Combining (PC) scheme fails: (i) when bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both of these cases have been addressed recently by two schemes known as the Packet Reversed Packet Combining (PRPC) Scheme and the Modified Packet Combining (MPC) Scheme, respectively. In this letter, two error forecasting correction schemes are reported which, in combination with PRPC, offer higher throughput. (author)
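    The letter does not give an implementation, but the basic packet combining idea it builds on can be sketched as follows: bits where two erroneous copies disagree are the candidate error locations, and the receiver searches over those positions until an integrity check passes. The use of CRC-32 as the check and all function names here are illustrative assumptions, not the letter's specification.

```python
import zlib
from itertools import chain, combinations

def flip_bit(buf: bytearray, pos: int) -> None:
    # Toggle bit `pos`, counting from the MSB of the first byte.
    buf[pos // 8] ^= 0x80 >> (pos % 8)

def packet_combine(copy1: bytes, copy2: bytes, crc: int):
    """Recover a packet from two erroneous copies (hypothetical sketch).

    Assumes the error locations differ between the copies, which is
    exactly the condition under which plain PC succeeds.
    """
    # Candidate error positions: bits where the two copies disagree.
    diff = [i for i in range(len(copy1) * 8)
            if (copy1[i // 8] ^ copy2[i // 8]) & (0x80 >> (i % 8))]
    # Brute-force every subset of candidate positions, flipping those
    # bits in copy1, until a candidate packet passes the CRC check.
    subsets = chain.from_iterable(
        combinations(diff, r) for r in range(len(diff) + 1))
    for subset in subsets:
        cand = bytearray(copy1)
        for pos in subset:
            flip_bit(cand, pos)
        if zlib.crc32(bytes(cand)) == crc:
            return bytes(cand)
    return None  # same-position errors: PC fails, as the letter notes
```

The search is exponential in the number of disagreeing bits, which is why the scheme is only practical for the few-bit-error regime the letter discusses.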

  14. Surgical correction of severe spinal deformities using a staged protocol of external and internal techniques.

    Science.gov (United States)

    Prudnikova, Oksana G; Shchurova, Elena N

    2018-02-01

    There is a high risk of neurologic complications in one-stage management of severe rigid spinal deformities in adolescents. Therefore, gradual spine stretching variants are applied; one of them is the use of external transpedicular fixation. Our aim was to retrospectively study the outcomes of gradual correction with an apparatus for external transpedicular fixation followed by internal fixation used for high-grade kyphoscoliosis in adolescents. Twenty-five patients were reviewed (mean age, 15.1 ± 0.4 years). Correction was performed in two stages: 1) gradual controlled correction with the apparatus for external transpedicular fixation; and 2) internal posterior transpedicular fixation. Rigid deformities in eight patients required discapophysectomy. Clinical and radiographic study of the outcomes was conducted immediately after treatment and at a mean long-term period of 3.8 ± 0.4 years. Pain was evaluated using the visual analogue scale (VAS, 10 points). The Oswestry questionnaire (ODI scale) was used for functional assessment. Deformity correction with the external apparatus was 64.2 ± 4.6% in the main curve and 60.7 ± 3.7% in the compensatory one. It was 72.8 ± 4.1% and 66.2 ± 5.3% immediately after treatment and 70.8 ± 4.6% and 64.3 ± 4.2% at long term, respectively. Pain was relieved by 33.2 ± 4.2%. The apparatus for external transpedicular fixation provides gradual controlled correction for high-grade kyphoscoliosis in adolescents. Transition to internal fixation preserves the correction achieved, and correction is maintained at long term.

  15. Two Inexpensive and Non-destructive Techniques to Correct for Smaller-Than-Gasket Leaf Area in Gas Exchange Measurements

    Directory of Open Access Journals (Sweden)

    Andreas M. Savvides

    2018-04-01

    Full Text Available The development of technology, like the widely-used off-the-shelf portable photosynthesis systems, for the quantification of leaf gas exchange rates and chlorophyll fluorescence offered photosynthesis research a massive boost. Gas exchange parameters in such photosynthesis systems are calculated as gas exchange rates per unit leaf area. In small chambers (<10 cm2), the leaf area used by the system for these calculations is actually the internal gasket area (AG, provided that the leaf covers the entire AG. In this study, we present two inexpensive and non-destructive techniques that can be used to easily quantify the enclosed leaf area (AL of plant species with leaves of surface area much smaller than the AG, such as that of cereal crops. The AL of the cereal crop species studied has been measured using a standard image-based approach (iAL and estimated using a leaf width-based approach (wAL. iAL and wAL did not differ significantly in maize, barley, and hard and soft wheat. Similar results were obtained when wAL was tested against iAL at different positions along the leaf in all species studied. The quantification of AL and the subsequent correction of leaf gas exchange parameters for AL provided a precise quantification of net photosynthesis and stomatal conductance, especially with decreasing AL. This study provides two practical, inexpensive and non-destructive solutions to researchers dealing with photosynthesis measurements on small-leaf plant species. The image-based technique can be widely used for quantifying AL in many plant species regardless of leaf shape. The leaf width-based technique can be reliably used for quantifying AL along the leaf in cereal crop species such as maize, wheat and barley. Both techniques can be used for a wide range of gasket shapes and sizes with minor technique-specific adjustments.
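    The correction the abstract describes amounts to a simple rescaling: rates the instrument computed per unit gasket area are multiplied by AG/AL. The sketch below is an inference from the text, not the authors' code, and the strip-shaped leaf assumption in the width-based estimate is ours.

```python
def leaf_area_from_width(leaf_width_cm: float, gasket_length_cm: float) -> float:
    """wAL: enclosed area of a narrow leaf assumed to cross the chamber
    as a strip of roughly constant width (hypothetical sketch)."""
    return leaf_width_cm * gasket_length_cm

def rescale_rate(rate_per_gasket_area: float,
                 gasket_area_cm2: float,
                 leaf_area_cm2: float) -> float:
    """Rescale a gas exchange rate reported per unit gasket area (AG)
    to a rate per unit enclosed leaf area (AL)."""
    return rate_per_gasket_area * gasket_area_cm2 / leaf_area_cm2
```

For example, a leaf covering half the gasket area doubles the corrected rate, which is why the error grows as AL shrinks relative to AG.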

  16. Local defect correction for boundary integral equation methods

    NARCIS (Netherlands)

    Kakuba, G.; Anthonissen, M.J.H.

    2013-01-01

    This paper presents a new approach to gridding for problems with localised regions of high activity. The technique of local defect correction has been studied for other methods such as finite difference methods and finite volume methods. In this paper we develop the technique for the boundary element method.

  17. Equation-Method for correcting clipping errors in OFDM signals.

    Science.gov (United States)

    Bibi, Nargis; Kleerekoper, Anthony; Muhammad, Nazeer; Cheetham, Barry

    2016-01-01

    Orthogonal frequency division multiplexing (OFDM) is the digital modulation technique used by 4G and many other wireless communication systems. OFDM signals have significant amplitude fluctuations resulting in high peak-to-average power ratios, which can make an OFDM transmitter susceptible to non-linear distortion produced by its high power amplifiers (HPA). A simple and popular solution to this problem is to clip the peaks before an OFDM signal is applied to the HPA, but this causes in-band distortion and introduces bit-errors at the receiver. In this paper we discuss a novel technique, which we call the Equation-Method, for correcting these errors. The Equation-Method uses the Fast Fourier Transform to create a set of simultaneous equations which, when solved, return the amplitudes of the peaks before they were clipped. We show analytically and through simulations that this method can correct all clipping errors over a wide range of clipping thresholds. We show that numerical instability can be avoided and that new techniques are needed to enable the receiver to differentiate between correctly and incorrectly received frequency-domain constellation symbols.
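    The underlying idea can be illustrated with a least-squares toy model: on subcarriers that carry no data, the received spectrum equals the DFT of the clipping error alone, so if the receiver knows which time-domain samples were clipped it can solve a small linear system for the error and subtract it. This is our own sketch of the principle, not the paper's exact equation set; the subcarrier layout and threshold are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
null_bins = np.arange(20, 44)                 # 24 reserved (empty) subcarriers
data_bins = np.setdiff1d(np.arange(N), null_bins)

# QPSK data on the active subcarriers; OFDM modulation via the IFFT
X = np.zeros(N, dtype=complex)
X[data_bins] = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=data_bins.size)
x = np.fft.ifft(X)

# Transmitter clips time-domain magnitudes above the threshold T
T = 0.85 * np.abs(x).max()
clipped = np.abs(x) > T
y = x.copy()
y[clipped] = x[clipped] / np.abs(x[clipped]) * T

# Receiver side: clipped samples sit at magnitude T, so their
# positions can be detected directly
P = np.flatnonzero(np.abs(y) > T * (1 - 1e-6))
Y = np.fft.fft(y)

# On the null subcarriers X is zero, so Y there is the DFT of the
# clipping error e (supported only on P); solve A @ e = Y[null_bins]
A = np.exp(-2j * np.pi * np.outer(null_bins, P) / N)
e, *_ = np.linalg.lstsq(A, Y[null_bins], rcond=None)

y_corrected = y.copy()
y_corrected[P] -= e                           # undo the clipping error
```

Recovery is exact as long as the number of clipped samples does not exceed the number of reserved subcarriers, which loosely mirrors the paper's point that the equations become solvable over a wide range of clipping thresholds.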

  18. Generalized INverse imaging (GIN): ultrafast fMRI with physiological noise correction.

    Science.gov (United States)

    Boyacioğlu, Rasim; Barth, Markus

    2013-10-01

    An ultrafast functional magnetic resonance imaging (fMRI) technique, called generalized inverse imaging (GIN), is proposed, which combines inverse imaging with a phase constraint (leading to a less underdetermined reconstruction) and physiological noise correction. A single 3D echo planar imaging (EPI) prescan is sufficient to obtain the necessary coil sensitivity information and reference images that are used to reconstruct standard images, so that standard analysis methods are applicable. A moving dots stimulus paradigm was chosen to assess the performance of GIN. We find that the spatial localization of activation for GIN is comparable to an EPI protocol and that maximum z-scores increase significantly. The high temporal resolution of GIN (50 ms) and the acquisition of the phase information enable unaliased sampling and regression of physiological signals. Using the phase time courses obtained from the 32 channels of the receiver coils as nuisance regressors in a general linear model results in significant improvement of the functional activation, rendering the acquisition of external physiological signals unnecessary. The proposed physiological noise correction can in principle be used for other fMRI protocols, such as simultaneous multislice acquisitions, which acquire the phase information sufficiently fast and sample physiological signals unaliased. Copyright © 2012 Wiley Periodicals, Inc.

  19. Radiative corrections to fermion matter and nontopological solitons

    International Nuclear Information System (INIS)

    Perry, R.J.

    1984-01-01

    This thesis addresses the effects of one loop radiative corrections to fermion matter and nontopological solitons. The effective action formalism is employed to explore the effects of these corrections on the ground state energy and scalar field expectation value of a system containing valence fermions, which are introduced using a chemical potential. This formalism is discussed extensively, and detailed calculations are presented for the Friedberg-Lee model. The techniques illustrated can be used in any renormalizable field theory and can be extended to include higher order quantum corrections

  20. Sample application of sensitivity/uncertainty analysis techniques to a groundwater transport problem. National Low-Level Waste Management Program

    International Nuclear Information System (INIS)

    Seitz, R.R.; Rood, A.S.; Harris, G.A.; Maheras, S.J.; Kotecki, M.

    1991-06-01

    The primary objective of this document is to provide sample applications of selected sensitivity and uncertainty analysis techniques within the context of the radiological performance assessment process. These applications were drawn from the companion document Guidelines for Sensitivity and Uncertainty Analyses of Low-Level Radioactive Waste Performance Assessment Computer Codes (S. Maheras and M. Kotecki, DOE/LLW-100, 1990). Three techniques are illustrated in this document: one-factor-at-a-time (OFAT) analysis, fractional factorial design, and Latin hypercube sampling. The report also illustrates the differences in sensitivity and uncertainty analysis at the early and latter stages of the performance assessment process, and potential pitfalls that can be encountered when applying the techniques. The emphasis is on application of the techniques as opposed to the actual results, since the results are hypothetical and are not based on site-specific conditions
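    Of the three techniques named above, Latin hypercube sampling is the most compact to demonstrate: each input dimension is divided into equally probable strata, one point is drawn from each stratum, and the strata are paired randomly across dimensions. A minimal sketch (ours, not the report's code):

```python
import numpy as np

def latin_hypercube(n_samples: int, n_dims: int, rng: np.random.Generator):
    """Latin hypercube sample on the unit hypercube [0, 1)^n_dims.

    Each column contains exactly one draw from each of the n_samples
    equal-probability strata [k/n, (k+1)/n), paired randomly across
    dimensions.
    """
    # one uniform draw inside each stratum, per dimension
    strata = np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))
    u = strata / n_samples
    # shuffle the stratum order independently in each dimension
    for d in range(n_dims):
        u[:, d] = rng.permutation(u[:, d])
    return u
```

Compared with plain Monte Carlo, this guarantees every marginal distribution is evenly covered even at small sample sizes, which is why the technique is attractive when each performance-assessment code run is expensive.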

  1. Low-mass molecular dynamics simulation: A simple and generic technique to enhance configurational sampling

    Energy Technology Data Exchange (ETDEWEB)

    Pang, Yuan-Ping, E-mail: pang@mayo.edu

    2014-09-26

    Highlights: • Reducing atomic masses by 10-fold vastly improves sampling in MD simulations. • CLN025 folded in 4 of 10 × 0.5-μs MD simulations when masses were reduced by 10-fold. • CLN025 folded as early as 96.2 ns in 1 of the 4 simulations that captured folding. • CLN025 did not fold in 10 × 0.5-μs MD simulations when standard masses were used. • Low-mass MD simulation is a simple and generic sampling enhancement technique. - Abstract: CLN025 is one of the smallest fast-folding proteins. Until now it has not been reported that CLN025 can autonomously fold to its native conformation in a classical, all-atom, and isothermal–isobaric molecular dynamics (MD) simulation. This article reports the autonomous and repeated folding of CLN025 from a fully extended backbone conformation to its native conformation in explicit solvent in multiple 500-ns MD simulations at 277 K and 1 atm with the first folding event occurring as early as 66.1 ns. These simulations were accomplished by using AMBER forcefield derivatives with atomic masses reduced by 10-fold on Apple Mac Pros. By contrast, no folding event was observed when the simulations were repeated using the original AMBER forcefields of FF12SB and FF14SB. The results demonstrate that low-mass MD simulation is a simple and generic technique to enhance configurational sampling. This technique may propel autonomous folding of a wide range of miniature proteins in classical, all-atom, and isothermal–isobaric MD simulations performed on commodity computers—an important step forward in quantitative biology.

  2. Robust Active Label Correction

    DEFF Research Database (Denmark)

    Kremer, Jan; Sha, Fei; Igel, Christian

    2018-01-01

    Active label correction addresses the problem of learning from input data for which noisy labels are available (e.g., from imprecise measurements or crowd-sourcing) and each true label can be obtained at a significant cost (e.g., through additional measurements or human experts). To select labels for correction, we adopt the active learning strategy of maximizing the expected model change. We consider the change in regularized empirical risk functionals that use different pointwise loss functions for patterns with noisy and true labels, respectively. Different loss functions for the noisy data lead to different active label correction algorithms. If loss functions consider the label noise rates, these rates are estimated during learning, where importance weighting compensates for the sampling bias. We show empirically that viewing the true label as a latent variable and computing [...]

  3. Review of cleaning techniques and their effects on the chemical composition of foliar samples

    Energy Technology Data Exchange (ETDEWEB)

    Rossini Oliva, S.; Raitio, H.

    2003-07-01

    Chemical foliar analysis is a tool widely used to study tree nutrition and to monitor the impact and extent of air pollutants. This paper reviews a number of cleaning methods and the effects of cleaning on foliar chemistry. Cleaning may include mechanical techniques such as the use of dry or moistened tissues, shaking, blowing, and brushing, or various washing techniques with water or other solvents. Owing to the diversity of plant species, tissue differences, etc., there is no standard procedure for all kinds of samples. Analysis of uncleaned leaves is considered a good method for assessing the degree of air contamination, because it provides an estimate of the element content of the deposits on leaf surfaces, or when the analysis is aimed at investigating the transfer of elements along the food chain. Sample cleaning is recommended in order (1) to investigate the transfer rate of chemical elements from soil to plants, (2) to quantify the washoff of dry deposition from foliage and (3) to separate superficially absorbed and biomass-incorporated elements. Since there is no standard cleaning procedure for all kinds of samples and aims, it is advisable to conduct a pilot study in order to establish a cleaning procedure that provides reliable foliar data. (orig.)

  4. Optimization of post-run corrections for water stable isotope measurements by laser spectroscopy

    Science.gov (United States)

    van Geldern, Robert; Barth, Johannes A. C.

    2013-04-01

    Light stable isotope analyses of hydrogen and oxygen of water are used in numerous aquatic studies from various scientific fields. The advantage of using stable isotope ratios is that water molecules serve as ubiquitous and already present natural tracers. Traditionally, the samples were analyzed in the laboratory by isotope ratio mass spectrometry (IRMS). Within recent years these analyses have been revolutionized by the development of new isotope ratio laser spectroscopy (IRIS) systems that are said to be cheaper, more robust and mobile compared to IRMS. Although easier to operate, laser systems also need thorough calibration with international reference materials, and raw data need correction for analytical effects. A major issue in systems that use liquid injection via a vaporizer module is the memory effect, i.e. the carry-over from the previously analyzed sample in a sequence. This study presents an optimized and simple post-run correction procedure for liquid water injection developed for a Picarro water analyzer. The Excel(TM) template relies exclusively on standard features implemented in MS Office, without the need to run macros, additional code written in Visual Basic for Applications (VBA), or database-related software such as MS Access or SQL Server. These protocols maximize precision, accuracy and sample throughput via an efficient memory correction. The number of injections per unknown sample can be reduced to 4 or less. This procedure meets the demands of faster throughput with reduced costs per analysis. Procedures were verified by an international proficiency test and traditional IRMS techniques. The template is available free for scientific use from the corresponding author or the journal's web site (van Geldern and Barth, 2012). References: van Geldern, R. and Barth, J.A.C. (2012) Limnol. Oceanogr. Methods 10:1024-1036 [doi: 10.4319/lom.2012.10.1024]
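    A common way to model the memory effect is a single carry-over coefficient: each injection mixes the current sample with a fraction m of the previous one. The inversion below is a generic sketch of that idea, not necessarily the scheme implemented in the authors' template; the coefficient value and function name are illustrative.

```python
def correct_sequence(measured_values, first_true, m):
    """Memory correction under the one-injection carry-over model
        measured_i = (1 - m) * true_i + m * true_{i-1},
    walking the run sequentially so each corrected value serves as
    the 'previous sample' for the next injection.
    """
    true_values = []
    prev = first_true  # delta value of the sample measured before the run
    for meas in measured_values:
        prev = (meas - m * prev) / (1.0 - m)
        true_values.append(prev)
    return true_values
```

In practice the first injections of each sample are also often discarded; the attraction of an explicit correction like this is that fewer injections (here, four or less per sample) are needed to reach the same accuracy.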

  5. Detection of equine herpesvirus in horses with idiopathic keratoconjunctivitis and comparison of three sampling techniques.

    Science.gov (United States)

    Hollingsworth, Steven R; Pusterla, Nicola; Kass, Philip H; Good, Kathryn L; Brault, Stephanie A; Maggs, David J

    2015-09-01

    To determine the role of equine herpesvirus (EHV) in idiopathic keratoconjunctivitis in horses and to determine whether sample collection method affects detection of EHV DNA by quantitative polymerase chain reaction (qPCR). Twelve horses with idiopathic keratoconjunctivitis and six horses without signs of ophthalmic disease. Conjunctival swabs, corneal scrapings, and conjunctival biopsies were collected from 18 horses: 12 clinical cases with idiopathic keratoconjunctivitis and six euthanized controls. In horses with both eyes involved, the samples were taken from the eye judged to be more severely affected. Samples were tested with qPCR for EHV-1, EHV-2, EHV-4, and EHV-5 DNA. Quantity of EHV DNA and viral replicative activity were compared between the two populations and among the different sampling techniques; relative sensitivities of the sampling techniques were determined. Prevalence of EHV DNA as assessed by qPCR did not differ significantly between control horses and those with idiopathic keratoconjunctivitis. Sampling by conjunctival swab was more likely to yield viral DNA as assessed by qPCR than was conjunctival biopsy. EHV-1 and EHV-4 DNA were not detected in either normal or IKC-affected horses; EHV-2 DNA was detected in two of 12 affected horses but not in normal horses. EHV-5 DNA was commonly found in ophthalmically normal horses and horses with idiopathic keratoconjunctivitis. Because EHV-5 DNA was commonly found in control horses and in horses with idiopathic keratoconjunctivitis, qPCR was not useful for the etiological diagnosis of equine keratoconjunctivitis. Conjunctival swabs were significantly better at obtaining viral DNA samples than conjunctival biopsy in horses in which EHV-5 DNA was found. © 2015 American College of Veterinary Ophthalmologists.

  6. Weighted Mean of Signal Intensity for Unbiased Fiber Tracking of Skeletal Muscles: Development of a New Method and Comparison With Other Correction Techniques.

    Science.gov (United States)

    Giraudo, Chiara; Motyka, Stanislav; Weber, Michael; Resinger, Christoph; Feiweier, Thorsten; Traxler, Hannes; Trattnig, Siegfried; Bogner, Wolfgang

    2017-08-01

    The aim of this study was to investigate the origin of random image artifacts in stimulated echo acquisition mode diffusion tensor imaging (STEAM-DTI), assess the role of averaging, develop an automated artifact postprocessing correction method using weighted mean of signal intensities (WMSIs), and compare it with other correction techniques. Institutional review board approval and written informed consent were obtained. The right calf and thigh of 10 volunteers were scanned on a 3 T magnetic resonance imaging scanner using a STEAM-DTI sequence. Artifacts (ie, signal loss) in STEAM-based DTI, presumably caused by involuntary muscle contractions, were investigated in volunteers and ex vivo (ie, human cadaver calf and turkey leg using the same DTI parameters as for the volunteers). An automated postprocessing artifact correction method based on the WMSI was developed and compared with previous approaches (ie, iteratively reweighted linear least squares and informed robust estimation of tensors by outlier rejection [iRESTORE]). Diffusion tensor imaging and fiber tracking metrics, using different averages and artifact corrections, were compared for region of interest- and mask-based analyses. One-way repeated measures analysis of variance with Greenhouse-Geisser correction and Bonferroni post hoc tests were used to evaluate differences among all tested conditions. Qualitative assessment (ie, image quality) for native and corrected images was performed using the paired t test. Randomly localized and shaped artifacts affected all volunteer data sets. Artifact burden during voluntary muscle contractions increased on average from 23.1% to 77.5% but was absent ex vivo. Diffusion tensor imaging metrics (mean diffusivity, fractional anisotropy, radial diffusivity, and axial diffusivity) had a heterogeneous behavior, but in the range reported by literature. Fiber track metrics (number, length, and volume) significantly improved in both calves and thighs after artifact correction.
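    One plausible reading of a "weighted mean of signal intensities" is to combine repeated acquisitions with weights proportional to the signal itself, so that repetitions hit by signal loss contribute little. The sketch below is our interpretation for illustration, not the authors' published algorithm.

```python
import numpy as np

def weighted_mean_signal(reps, axis=0):
    """Combine repeated acquisitions with intensity-proportional weights.

    With weights w_i = S_i / sum(S_j), the result reduces to
    sum(S_i**2) / sum(S_i), which down-weights low-intensity
    (signal-loss) repetitions relative to a plain average.
    Assumes nonzero total signal along `axis`.
    """
    reps = np.asarray(reps, dtype=float)
    w = reps / reps.sum(axis=axis, keepdims=True)
    return (w * reps).sum(axis=axis)
```

For a voxel with three clean repetitions near 100 and one dropout near 10, the plain mean is pulled down to 77.5 while the intensity-weighted mean stays near 97, which is the qualitative behavior an outlier-suppressing combination should show.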

  7. Beyond simple small-angle X-ray scattering: developments in online complementary techniques and sample environments

    Directory of Open Access Journals (Sweden)

    Wim Bras

    2014-11-01

    Full Text Available Small- and wide-angle X-ray scattering (SAXS, WAXS are standard tools in materials research. The simultaneous measurement of SAXS and WAXS data in time-resolved studies has gained popularity due to the complementary information obtained. Furthermore, the combination of these data with non X-ray based techniques, via either simultaneous or independent measurements, has advanced understanding of the driving forces that lead to the structures and morphologies of materials, which in turn give rise to their properties. The simultaneous measurement of different data regimes and types, using either X-rays or neutrons, and the desire to control parameters that initiate and control structural changes have led to greater demands on sample environments. Examples of developments in technique combinations and sample environment design are discussed, together with a brief speculation about promising future developments.

  8. Computer technique for correction of nonhomogeneous distribution in radiologic images

    International Nuclear Information System (INIS)

    Florian, Rogerio V.; Frere, Annie F.; Schiable, Homero; Marques, Paulo M.A.; Marques, Marcio A.

    1996-01-01

    An image processing technique that provides 'Heel' effect compensation in medical images is presented. It is reported that the technique can improve the detection of structures by making the background homogeneous, and that it can be used with any radiologic system.
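    The record gives no algorithmic detail, but heel-effect compensation is commonly done as a flat-field style correction: estimate the smooth anode-to-cathode intensity gradient from a uniform exposure and divide it out. A minimal sketch under that assumption:

```python
import numpy as np

def heel_correction(image, flat_field):
    """Divide out the smooth heel-effect intensity profile.

    `flat_field` is an image of a uniform exposure (no object), whose
    spatial variation is taken to be the detector/tube gain profile.
    Normalising by its mean keeps the overall intensity scale.
    """
    gain = flat_field / flat_field.mean()
    return image / gain
```

After the division, a uniform object images to a uniform intensity, so residual contrast reflects the object rather than the tube geometry.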

  9. Corrective Action Investigation Plan for Corrective Action Unit 555: Septic Systems Nevada Test Site, Nevada, Rev. No.: 0 with Errata

    International Nuclear Information System (INIS)

    Pastor, Laura

    2005-01-01

    This Corrective Action Investigation Plan (CAIP) contains project-specific information including facility descriptions, environmental sample collection objectives, and criteria for conducting site investigation activities at Corrective Action Unit (CAU) 555: Septic Systems, Nevada Test Site (NTS), Nevada. This CAIP has been developed in accordance with the ''Federal Facility Agreement and Consent Order'' (FFACO) (1996) that was agreed to by the State of Nevada, the U.S. Department of Energy (DOE), and the U.S. Department of Defense. Corrective Action Unit 555 is located in Areas 1, 3 and 6 of the NTS, which is approximately 65 miles (mi) northwest of Las Vegas, Nevada, and is comprised of the five corrective action sites (CASs) shown on Figure 1-1 and listed below: (1) CAS 01-59-01, Area 1 Camp Septic System; (2) CAS 03-59-03, Core Handling Building Septic System; (3) CAS 06-20-05, Birdwell Dry Well; (4) CAS 06-59-01, Birdwell Septic System; and (5) CAS 06-59-02, National Cementers Septic System. An FFACO modification was approved on December 14, 2005, to include CAS 06-20-05, Birdwell Dry Well, as part of the scope of CAU 555. The work scope was expanded in this document to include the investigation of CAS 06-20-05. The Corrective Action Investigation (CAI) will include field inspections, radiological surveys, geophysical surveys, sampling of environmental media, analysis of samples, and assessment of investigation results, where appropriate. Data will be obtained to support corrective action alternative evaluations and waste management decisions. The CASs in CAU 555 are being investigated because hazardous and/or radioactive constituents may be present in concentrations that could potentially pose a threat to human health and the environment. Existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives for the CASs. Additional information will be generated by conducting a CAI

  10. Corrective Action Investigation Plan for Corrective Action Unit 555: Septic Systems Nevada Test Site, Nevada, Rev. No.: 0 with Errata

    Energy Technology Data Exchange (ETDEWEB)

    Pastor, Laura

    2005-12-01

    This Corrective Action Investigation Plan (CAIP) contains project-specific information including facility descriptions, environmental sample collection objectives, and criteria for conducting site investigation activities at Corrective Action Unit (CAU) 555: Septic Systems, Nevada Test Site (NTS), Nevada. This CAIP has been developed in accordance with the ''Federal Facility Agreement and Consent Order'' (FFACO) (1996) that was agreed to by the State of Nevada, the U.S. Department of Energy (DOE), and the U.S. Department of Defense. Corrective Action Unit 555 is located in Areas 1, 3 and 6 of the NTS, which is approximately 65 miles (mi) northwest of Las Vegas, Nevada, and is comprised of the five corrective action sites (CASs) shown on Figure 1-1 and listed below: (1) CAS 01-59-01, Area 1 Camp Septic System; (2) CAS 03-59-03, Core Handling Building Septic System; (3) CAS 06-20-05, Birdwell Dry Well; (4) CAS 06-59-01, Birdwell Septic System; and (5) CAS 06-59-02, National Cementers Septic System. An FFACO modification was approved on December 14, 2005, to include CAS 06-20-05, Birdwell Dry Well, as part of the scope of CAU 555. The work scope was expanded in this document to include the investigation of CAS 06-20-05. The Corrective Action Investigation (CAI) will include field inspections, radiological surveys, geophysical surveys, sampling of environmental media, analysis of samples, and assessment of investigation results, where appropriate. Data will be obtained to support corrective action alternative evaluations and waste management decisions. The CASs in CAU 555 are being investigated because hazardous and/or radioactive constituents may be present in concentrations that could potentially pose a threat to human health and the environment. Existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives for the CASs. Additional information will be generated by conducting a CAI

  11. A microfluidic paper-based analytical device for the assay of albumin-corrected fructosamine values from whole blood samples.

    Science.gov (United States)

    Boonyasit, Yuwadee; Laiwattanapaisal, Wanida

    2015-01-01

    A method for acquiring albumin-corrected fructosamine values from whole blood using a microfluidic paper-based analytical system that offers substantial improvement over previous methods is proposed. The time required to quantify both serum albumin and fructosamine is shortened to 10 min with detection limits of 0.50 g dl(-1) and 0.58 mM, respectively (S/N = 3). The proposed system also exhibited good within-run and run-to-run reproducibility. The results of the interference study revealed that the acceptable recoveries ranged from 95.1 to 106.2%. The system was compared with currently used large-scale methods (n = 15), and the results demonstrated good agreement among the techniques. The microfluidic paper-based system has the potential to continuously monitor glycemic levels in low resource settings.

  12. Determination of trace element contents in grass samples for cattle feeding using NAA techniques

    Energy Technology Data Exchange (ETDEWEB)

    Yusof, Alias Mohamad; Jagir Singh, Jasbir Kaur

    1987-09-01

    An investigation of the trace element contents in six types of grass samples used for cattle feeding has been carried out using NAA techniques. The grass samples, Mardi Digit, African Star, Signal, Guinea, Setaria and Setaria Splendida, were found to contain at least 19 trace elements in varying concentrations. The results were compared with figures obtained from available sources to ascertain whether the grass samples studied would satisfy the minimum requirements for trace elements in grass for cattle feeding. Preference regarding the suitability of the grass samples for cattle feeding was based on the availability and abundance of the trace elements, taking into account factors such as the degree of toxicity, inadequate amounts, and contamination due to the presence of other trace elements not essential for cattle feeding.

  13. Determination of trace element contents in grass samples for cattle feeding using NAA techniques

    International Nuclear Information System (INIS)

    Alias Mohamad Yusof; Jasbir Kaur Jagir Singh

    1987-01-01

    An investigation of the trace element contents in six types of grass samples used for cattle feeding has been carried out using NAA techniques. The grass samples, Mardi Digit, African Star, Signal, Guinea, Setaria and Setaria Splendida, were found to contain at least 19 trace elements in varying concentrations. The results were compared with figures obtained from available sources to ascertain whether the grass samples studied would satisfy the minimum requirements for trace elements in grass for cattle feeding. Preference regarding the suitability of the grass samples for cattle feeding was based on the availability and abundance of the trace elements, taking into account factors such as the degree of toxicity, inadequate amounts, and contamination due to the presence of other trace elements not essential for cattle feeding. (author)

  14. Experimental performance evaluation of two stack sampling systems in a plutonium facility

    International Nuclear Information System (INIS)

    Glissmeyer, J.A.

    1992-04-01

    The evaluation of two routine stack sampling systems at the Z-Plant plutonium facility operated by Rockwell International for USERDA is part of a larger study, sponsored by Rockwell and conducted by Battelle, Pacific Northwest Laboratories, of gaseous effluent sampling systems. The gaseous effluent sampling systems evaluated are located at the main plant ventilation stack (291-Z-1) and at a vessel vent stack (296-Z-3). A preliminary report, which was a paper study issued in April 1976, identified many deficiencies in the existing sampling systems and made recommendations for corrective action. The objectives of this experimental evaluation of those sampling systems were as follows: Characterize the radioactive aerosols in the stack effluents; Develop a tracer aerosol technique for validating particulate effluent sampling system performance; Evaluate the performance of the existing routine sampling systems and their compliance with the sponsor's criteria; and Recommend corrective action where required. The tracer aerosol approach to sampler evaluation was chosen because the low concentrations of radioactive particulates in the effluents would otherwise require much longer sampling times and thus more time to complete this evaluation. The following report describes the sampling systems that are the subject of this study and then details the experiments performed. The results are then presented and discussed. Much of the raw and finished data are included in the appendices
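The tracer-aerosol validation described above rests on a simple comparison: a tracer is released at a known rate, and the concentration recovered by the sampler is compared with the concentration expected in the stack. A minimal sketch, with purely illustrative numbers (not measurements from the Z-Plant study):

```python
# Sketch of a tracer-aerosol sampler check: release a tracer at a known rate
# and compare the sampler's measured concentration with the expected stack
# concentration. All numbers below are illustrative.
def expected_concentration(release_rate_ug_per_s, stack_flow_m3_per_s):
    """Tracer concentration expected in the stack, assuming complete mixing."""
    return release_rate_ug_per_s / stack_flow_m3_per_s  # ug/m^3

def sampler_bias(measured_ug_per_m3, expected_ug_per_m3):
    """Ratio of measured to expected; 1.0 means the sampler is unbiased."""
    return measured_ug_per_m3 / expected_ug_per_m3

expected = expected_concentration(500.0, 25.0)   # 20 ug/m^3
ratio = sampler_bias(17.0, expected)             # below 1.0: under-sampling
print(f"expected {expected:.1f} ug/m^3, sampler/expected = {ratio:.2f}")
```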

  15. Theoretical analysis of the content of correctional massage for athletes with disabilities

    Directory of Open Access Journals (Sweden)

    Romanna Rudenko

    2015-12-01

    Full Text Available Purpose: to analyze the content of the author's methodology of correctional massage for athletes with disabilities. Material and Methods: analysis and synthesis of scientific, methodological and specialized literature; pedagogical observation; analysis of medical records; methods of mathematical statistics. The study involved 60 qualified athletes with disabilities from different nosological groups. Results: the correctional massage technique was developed taking into account the level of physical activity, the nosological group, and the physiological effects of massage techniques on body systems. The forms of correctional massage must match the intensity of physical activity and the main and concomitant diseases across the training cycle of athletes with disabilities. Conclusions: general, partial, intermittent, local and segmental-reflex massage of the paravertebral zones should be applied, taking into account the intensity of physical activity and individual exercise tolerance

  16. Relativistic neoclassical transport coefficients with momentum correction

    International Nuclear Information System (INIS)

    Marushchenko, I.; Azarenkov, N.A.

    2016-01-01

    The parallel momentum correction technique is generalized to the relativistic approach. This is required for proper calculation of the parallel neoclassical flows and, in particular, of the bootstrap current at fusion temperatures. It is shown that the resulting system of linear algebraic equations for the parallel fluxes can be solved directly, without calculating the distribution function, if the relativistic mono-energetic transport coefficients are already known. The first relativistic correction terms for the Braginskii matrix coefficients are calculated.
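The key computational point above, that the parallel fluxes follow from a small linear algebraic system once the mono-energetic coefficients are known, can be sketched directly. The 2x2 matrix and source vector below are illustrative placeholders, not the paper's actual coefficients.

```python
import numpy as np

# Sketch: with the mono-energetic transport coefficients in hand, the parallel
# flow moments solve A u = s directly; no distribution function is needed.
A = np.array([[1.8, -0.6],
              [-0.6, 2.4]])      # friction/viscosity coefficient matrix (illustrative)
s = np.array([1.0, 0.3])         # thermodynamic forces (illustrative)

u = np.linalg.solve(A, s)        # parallel flow moments
print("parallel flux moments:", u)
```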

  17. Fluid Sampling under Adverse Conditions Echantillonnage des fluides en conditions difficiles

    Directory of Open Access Journals (Sweden)

    Williams J. M.

    2006-12-01

    Full Text Available Valid samples are essential to the proper description of reservoir fluids; if the samples are not representative, all measurements on them will be invalid. This paper discusses the principal challenges facing fluid sampling, including gas condensate reservoirs, compositional gradients, the water content of hydrocarbon fluids, asphaltene deposition, wax formation, oil-base mud contamination, and reactive components. It also reports the major technological advances recently made in this field. It reviews developments in sampling techniques such as MDT-type tools, new DST sampling tools, coiled-tubing sampling, and isokinetic techniques, and it highlights common limitations. The value of making proper use of existing technology is emphasized, both with traditional techniques and new developments, with reference to correct well conditioning, interpretation of field data, and especially to optimum handling of samples. The paper emphasizes the need for better exchange of sampling knowledge between organizations, and highlights the lack of up-to-date industry standards with respect to fluid sampling. A solution is proposed in the form of a joint industry project to identify and document best practices.
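The isokinetic techniques mentioned above match the velocity in the sampling nozzle to the local stream velocity so droplets are neither inertially over- nor under-sampled. A minimal sketch of the required sample rate, with illustrative numbers:

```python
import math

# Sketch of the isokinetic condition: withdraw the sample so that the nozzle
# velocity equals the stream velocity. Numbers are illustrative.
def isokinetic_flow(stream_velocity_m_s, nozzle_diameter_m):
    """Volumetric sample rate (m^3/s) that matches the stream velocity."""
    area = math.pi * (nozzle_diameter_m / 2.0) ** 2
    return stream_velocity_m_s * area

q = isokinetic_flow(12.0, 0.01)   # 12 m/s stream, 10 mm nozzle
print(f"required sample rate: {q * 1000:.3f} L/s")
```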

  18. A flow-cytometric gram-staining technique for milk-associated bacteria.

    Science.gov (United States)

    Holm, Claus; Jespersen, Lene

    2003-05-01

    A Gram-staining technique combining staining with two fluorescent stains, Oregon Green-conjugated wheat germ agglutinin (WGA) and hexidium iodide (HI) followed by flow-cytometric detection is described. WGA stains gram-positive bacteria while HI binds to the DNA of all bacteria after permeabilization by EDTA and incubation at 50 degrees C for 15 min. For WGA to bind to gram-positive bacteria, a 3 M potassium chloride solution was found to give the highest fluorescence intensity. A total of 12 strains representing some of the predominant bacterial species in bulk tank milk and mixtures of these were stained and analyzed by flow cytometry. Overall, the staining method showed a clear differentiation between gram-positive and gram-negative bacterial populations. For stationary-stage cultures of seven gram-positive bacteria and five gram-negative bacteria, an average of 99% of the cells were correctly interpreted. The method was only slightly influenced by the growth phase of the bacteria or conditions such as freezing at -18 degrees C for 24 h. For any of these conditions, an average of at least 95% of the cells were correctly interpreted. When stationary-stage cultures were stored at 5 degrees C for 14 days, an average of 86% of the cells were correctly interpreted. The Gram-staining technique was applied to the flow cytometry analysis of bulk tank milk inoculated with Staphylococcus aureus and Escherichia coli. These results demonstrate that the technique is suitable for analyzing milk samples without precultivation.
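The gating logic implied above is simple: every permeabilized cell takes up HI (a DNA stain), while only gram-positive cells bind WGA, so a cell with a DNA signal is called gram-positive when its WGA fluorescence exceeds a gate threshold. The intensities and threshold below are synthetic, for illustration only.

```python
# Sketch of two-stain gating: HI marks all cells, WGA marks gram-positives.
GATE = 100.0   # hypothetical WGA fluorescence threshold

def classify(wga_intensity, hi_intensity):
    if hi_intensity <= 0:
        return "debris"            # no DNA signal: not a stained cell
    return "gram-positive" if wga_intensity > GATE else "gram-negative"

events = [(250.0, 80.0), (12.0, 75.0), (310.0, 90.0), (8.0, 60.0)]
calls = [classify(w, h) for w, h in events]
print(calls)
```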

  19. Trace uranium analysis in Indian coal samples using the fission track technique

    International Nuclear Information System (INIS)

    Jojo, P.J.; Rawat, A.; Kumar, Ashavani; Prasad, Rajendra

    1993-01-01

    The ever-growing demand for energy has resulted in the extensive use of fossil fuels, especially coal, for power generation. Coal and its by-products often contain significant amounts of radionuclides, including uranium, which is the ultimate source of the radioactive gas Radon-222. The present study gives the concentration of uranium in coal samples of different collieries in India, collected from various thermal power plants in the state of Uttar Pradesh. The estimates were made using the fission track technique. Latent damage tracks were not found to be uniformly distributed but showed sun bursts and clusters. Non-uniform distributions of trace elements are a very common phenomenon in rocks. The levels of uranium in the coal samples were found to vary from 2.0 to 4.9 ppm in uniform distributions and from 21.3 to 41.0 ppm in non-uniform distributions. Measurements were also made on fly ash samples where the average uranium concentration was found to be 8.4 and 49.3 ppm in uniform and non-uniform distributions, respectively. (author)
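In the comparator form of the fission track technique, the sample and a standard of known uranium content are irradiated together, so the sample's uranium concentration scales with its induced-track density relative to the standard's. A minimal sketch with illustrative values (not the study's data):

```python
# Sketch of the fission-track comparator method: uranium concentration is
# proportional to induced-track density relative to a co-irradiated standard.
def uranium_ppm(track_density_sample, track_density_standard, standard_ppm):
    """Uranium concentration of the sample by direct comparison."""
    return standard_ppm * (track_density_sample / track_density_standard)

ppm = uranium_ppm(track_density_sample=4.2e4,   # tracks/cm^2 (illustrative)
                  track_density_standard=1.0e5,
                  standard_ppm=10.0)
print(f"estimated uranium: {ppm:.1f} ppm")
```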

  20. Histologic examination of hepatic biopsy samples as a prognostic indicator in dogs undergoing surgical correction of congenital portosystemic shunts: 64 cases (1997-2005).

    Science.gov (United States)

    Parker, Jacquelyn S; Monnet, Eric; Powers, Barbara E; Twedt, David C

    2008-05-15

    To determine whether results of histologic examination of hepatic biopsy samples could be used as an indicator of survival time in dogs that underwent surgical correction of a congenital portosystemic shunt (PSS). Retrospective case series. 64 dogs that underwent exploratory laparotomy for an extrahepatic (n = 39) or intrahepatic (25) congenital PSS. All H&E-stained histologic slides of hepatic biopsy samples obtained at the time of surgery were reviewed by a single individual, and severity of histologic abnormalities (ie, arteriolar hyperplasia, biliary hyperplasia, fibrosis, cell swelling, lipidosis, lymphoplasmacytic cholangiohepatitis, suppurative cholangiohepatitis, lipid granulomas, and dilated sinusoids) was graded. A Cox proportional hazards regression model was used to determine whether each histologic feature was associated with survival time. Median follow-up time was 35.7 months, and median survival time was 50.6 months. Thirty-eight dogs were alive at the time of final follow-up; 15 had died of causes associated with the PSS, including 4 that died immediately after surgery; 3 had died of unrelated causes; and 8 were lost to follow-up. None of the histologic features examined were significantly associated with survival time. Findings suggested that results of histologic examination of hepatic biopsy samples obtained at the time of surgery cannot be used to predict long-term outcome in dogs undergoing surgical correction of a PSS.
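The Cox proportional-hazards analysis used above can be sketched, for a single covariate such as one histologic grade, as a Newton-Raphson maximization of the partial likelihood. This is an illustrative implementation on synthetic data with no tied event times, not the study's actual model or data.

```python
import numpy as np

# Sketch: fit a one-covariate Cox proportional-hazards model by maximizing
# the partial likelihood with Newton's method (no ties assumed).
def cox_fit(times, events, x, iters=25):
    """Return beta maximizing the Cox partial likelihood for covariate x."""
    order = np.argsort(times)
    times, events, x = times[order], events[order], x[order]
    beta = 0.0
    for _ in range(iters):
        g = h = 0.0                      # score and information
        for i in range(len(times)):
            if not events[i]:
                continue
            r = np.exp(beta * x[i:])     # risk set: subjects still at risk
            m = np.sum(r * x[i:]) / np.sum(r)           # weighted mean of x
            v = np.sum(r * x[i:] ** 2) / np.sum(r) - m ** 2
            g += x[i] - m
            h += v
        beta += g / h                    # Newton step
    return beta

t = np.array([2.0, 3.0, 5.0, 7.0, 11.0, 13.0])
e = np.array([1, 1, 1, 1, 1, 1], dtype=bool)
cov = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])   # alternating grades
print(f"hazard-ratio estimate: {np.exp(cox_fit(t, e, cov)):.2f}")
```

A nonsignificant covariate would show a hazard ratio near 1; the study found no histologic feature significantly associated with survival time.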