WorldWideScience

Sample records for corrected quantification method

  1. SPECT quantification: a review of the different correction methods with Compton scatter, attenuation and spatial deterioration effects

    International Nuclear Information System (INIS)

    Groiselle, C.; Rocchisani, J.M.; Moretti, J.L.; Dreuille, O. de; Gaillard, J.F.; Bendriem, B.

    1997-01-01

    SPECT quantification: a review of the different correction methods for Compton scatter, attenuation and spatial deterioration effects. The improvement of gamma cameras and of acquisition and reconstruction software opens new perspectives in terms of image quantification in nuclear medicine. To meet this challenge, numerous works have been undertaken in recent years to correct for the different physical phenomena that prevent an exact estimation of the radioactivity distribution. The main phenomena that have to be taken into account are scatter, attenuation and resolution. In this work, the authors present the physical basis of each issue, its consequences on quantification and the main methods proposed to correct for it. (authors)

  2. A direct ROI quantification method for inherent PVE correction: accuracy assessment in striatal SPECT measurements

    Energy Technology Data Exchange (ETDEWEB)

    Vanzi, Eleonora; De Cristofaro, Maria T.; Sotgia, Barbara; Mascalchi, Mario; Formiconi, Andreas R. [University of Florence, Clinical Pathophysiology, Florence (Italy); Ramat, Silvia [University of Florence, Neurological and Psychiatric Sciences, Florence (Italy)

    2007-09-15

    The clinical potential of striatal imaging with dopamine transporter (DAT) SPECT tracers is hampered by the limited capability to recover activity concentration ratios due to partial volume effects (PVE). We evaluated the accuracy of a least squares method that allows retrieval of activity in regions of interest directly from projections (LS-ROI). An Alderson striatal phantom was filled with striatal-to-background ratios of 6:1, 9:1 and 28:1; the striatal and background ROIs were drawn on a coregistered X-ray CT of the phantom. The activity ratios of these ROIs were derived both with the LS-ROI method and with conventional SPECT EM reconstruction (EM-SPECT). Moreover, the two methods were compared in seven patients with motor symptoms who were examined with N-ω-fluoropropyl-2β-carbomethoxy-3β-(4-iodophenyl)nortropane (FP-CIT) SPECT, calculating the binding potential (BP). In the phantom study, the activity ratios obtained with EM-SPECT were 3.5, 5.3 and 17.0, respectively, whereas the LS-ROI method resulted in ratios of 6.2, 9.0 and 27.3, respectively. With the LS-ROI method, the BP in the seven patients was approximately 60% higher than with EM-SPECT; a linear correlation between the LS-ROI and the EM estimates was found (r = 0.98, p = 0.03). The PVE correction capability of LS-ROI is mainly due to the fact that the ill-conditioning of the LS-ROI approach is lower than that of the EM-SPECT one. The LS-ROI method seems feasible and accurate in the examination of the dopaminergic system. This approach can be fruitful in the monitoring of disease progression and in clinical trials of dopaminergic drugs. (orig.)

  3. Effect of scatter correction on quantification of myocardial SPECT and application to dual-energy acquisition using triple-energy window method

    International Nuclear Information System (INIS)

    Nakajima, Kenichi; Matsudaira, Masamichi; Yamada, Masato; Taki, Junichi; Tonami, Norihisa; Hisada, Kinichi

    1995-01-01

    The triple-energy window (TEW) method is a simple and practical approach for correcting Compton scatter in single-photon emission tracer studies. The scatter fraction, measured by the TEW method with a point source or a 30 ml syringe placed under the camera, was 55% for 201Tl, 29% for 99mTc and 57% for 123I. Composite energy spectra were generated and separated by the TEW method. The combination of 99mTc and 201Tl was well separated, and 201Tl and 123I were separated within an error of 10%, whereas an asymmetric photopeak energy window was necessary for separating 123I and 99mTc. By applying this method to myocardial SPECT, the effect of scatter elimination was investigated in each myocardial wall by polar map and profile curve analysis. The effect of scatter was higher in the septum and the inferior wall: the count ratio relative to the anterior wall including scatter was 9% higher for 123I, 7-8% higher for 99mTc and 6% higher for 201Tl. The apparent count loss after scatter correction was 30% for 123I, 13% for 99mTc and 38% for 201Tl. Image contrast, defined as the myocardium-to-left-ventricular-cavity count ratio, improved with scatter correction. Since the influence of Compton scatter is significant in cardiac planar and SPECT studies, the degree of scatter fraction should be kept in mind in both quantification and visual interpretation. (author)
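
    A minimal sketch of the TEW estimate itself may help: the scatter under the photopeak is approximated from two narrow sub-windows flanking it (a trapezoidal rule). All counts and window widths below are hypothetical, not taken from this study.

      import numpy as np

      def tew_scatter_correction(c_main, c_lower, c_upper,
                                 w_main=20.0, w_sub=3.5):
          """Triple-energy window scatter correction (per pixel).

          c_main  : counts in the main photopeak window (width w_main, keV)
          c_lower : counts in the narrow sub-window below the photopeak
          c_upper : counts in the narrow sub-window above it (width w_sub)
          """
          # Trapezoidal estimate of the scatter component under the peak.
          scatter = (c_lower / w_sub + c_upper / w_sub) * w_main / 2.0
          return np.clip(c_main - scatter, 0.0, None)  # no negative counts

      # Illustrative pixel: 1e5 photopeak counts, typical sub-window counts.
      print(tew_scatter_correction(np.array([1.0e5]),
                                   np.array([8.0e3]),
                                   np.array([1.0e3])))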

  4. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, treating a computationally time-consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit...... of the model correction factor method, is that in simpler form not using gradient information on the original limit state function or only using this information once, a drastic reduction of the number of limit state evaluations is obtained together with good approximations of the reliability. Methods...

  5. Analysis of the principal factors that intervene in the quantification of planar images of uniform distributions of 99mTc by the conjugate view method with background correction by simple subtraction

    International Nuclear Information System (INIS)

    Mora Araya, Luis Diego

    2013-01-01

    The activity of uniform distributions of 99mTc was quantified by the conjugate view method. The calibration and transmission factors needed for the quantification were calculated. The dependence of the estimated number of counts within the source region, and the variability of the transmission factor, were determined as a function of the size of the region of interest, keeping its geometry constant. The images from all acquisitions were corrected for environmental background radiation and for scattered radiation by the dual-energy window (DEW) method. The impact of these corrections on the image was checked both qualitatively and quantitatively. The acquisition used to obtain the calibration factor was performed with the same configuration and conditions as the acquisition for quantification: the same volume and geometry were used to contain the distribution of 99mTc activity. Using the same volume, geometry and attenuating medium ensured that the calibration factor was obtained under exactly the same circumstances as the quantification itself. The behaviour of the gamma camera calibration factor estimate was analysed as a function of the decay and attenuation corrections applied. The dependence of the calibration and transmission factors on the region of interest used in the corresponding images was analysed. The behaviour of the activity estimate was determined for all possible combinations of the studied factors entering the conjugate view quantification algorithm, namely the size of the region of interest corresponding to the source region, the transmission factor, the calibration factor and the background correction by simple subtraction. The resulting activity estimates were compared, and a tendency is established indicating which combinations of the studied factors
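
    The conjugate view algorithm this study dissects combines opposed anterior/posterior views in a geometric mean, corrected by a measured transmission factor and a system calibration factor, after simple background subtraction. A hedged sketch (all numbers invented):

      import math

      def conjugate_view_activity(i_ant, i_post, bg_ant, bg_post,
                                  transmission, calib_cps_per_mbq):
          """Conjugate-view activity estimate with background subtraction.

          i_ant, i_post     : anterior/posterior ROI count rates (cps)
          bg_ant, bg_post   : matching background ROI count rates (cps)
          transmission      : measured transmission factor exp(-mu*T)
          calib_cps_per_mbq : system calibration factor (cps/MBq)
          """
          net_ant = max(i_ant - bg_ant, 0.0)
          net_post = max(i_post - bg_post, 0.0)
          geometric_mean = math.sqrt(net_ant * net_post)
          # The geometric mean carries a residual factor sqrt(transmission).
          return geometric_mean / (math.sqrt(transmission) * calib_cps_per_mbq)

      # Invented example: 420/380 cps ROIs, 60/55 cps background,
      # transmission 0.25, sensitivity 150 cps/MBq.
      print(f"{conjugate_view_activity(420, 380, 60, 55, 0.25, 150):.2f} MBq")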

  6. Measured attenuation correction methods

    International Nuclear Information System (INIS)

    Ostertag, H.; Kuebler, W.K.; Doll, J.; Lorenz, W.J.

    1989-01-01

    Accurate attenuation correction is a prerequisite for the determination of exact local radioactivity concentrations in positron emission tomography. Attenuation correction factors range from 4-5 in brain studies to 50-100 in whole-body measurements. This report gives an overview of the different methods of determining the attenuation correction factors by transmission measurements using an external positron-emitting source. The long-lived generator nuclide 68Ge/68Ga is commonly used for this purpose. The additional patient dose from the transmission source is usually a small fraction of the dose due to the subsequent emission measurement. Ring-shaped transmission sources as well as rotating point or line sources are employed in modern positron tomographs. By masking a rotating line or point source, random and scattered events in the transmission scans can be effectively suppressed. The problems of measured attenuation correction are discussed: transmission/emission mismatch, random and scattered event contamination, counting statistics, transmission/emission scatter compensation, and transmission scanning after administration of activity to the patient. By using a double masking technique, simultaneous emission and transmission scans become feasible. (orig.)
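
    The core arithmetic is simple enough to sketch: each projection bin's attenuation correction factor is the ratio of a blank scan to the transmission scan, and the emission data are multiplied by it. All counts below are hypothetical.

      import numpy as np

      def attenuation_correction_factors(blank, transmission, eps=1.0):
          """ACF per projection bin: blank / transmission counts.

          Values range from about 4-5 (brain) to 50-100 (whole body),
          as noted in the abstract above.
          """
          return blank / np.maximum(transmission, eps)

      blank = np.array([2.0e5, 2.0e5, 2.0e5])
      trans = np.array([5.0e4, 4.0e3, 2.0e3])  # more attenuation, fewer counts
      emission = np.array([1.0e3, 8.0e2, 5.0e2])
      print(emission * attenuation_correction_factors(blank, trans))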

  7. Planar imaging quantification using 3D attenuation correction data and Monte Carlo simulated buildup factors

    International Nuclear Information System (INIS)

    Miller, C.; Filipow, L.; Jackson, S.; Riauka, T.

    1996-01-01

    A new method to correct for attenuation and the buildup of scatter in planar imaging quantification is presented. The method is based on the combined use of 3D density information provided by computed tomography to correct for attenuation and the application of Monte Carlo simulated buildup factors to correct for buildup in the projection pixels. CT and nuclear medicine images were obtained for a purpose-built nonhomogeneous phantom that models the human anatomy in the thoracic and abdominal regions. The CT transverse slices of the phantom were converted to a set of consecutive density maps. An algorithm was developed that projects the 3D information contained in the set of density maps to create opposing pairs of accurate 2D correction maps that were subsequently applied to planar images acquired from a dual-head gamma camera. A comparison of results obtained by the new method and the geometric mean approach based on published techniques is presented for some of the source arrangements used. Excellent results were obtained for various source-phantom configurations used to evaluate the method. Activity quantification of a line source at most locations in the nonhomogeneous phantom produced errors of less than 2%. Additionally, knowledge of the actual source depth is not required for accurate activity quantification. Quantification of volume sources placed in foam, Perspex and aluminium produced errors of less than 7% for the abdominal and thoracic configurations of the phantom. (author)

  8. Database of normal human cerebral blood flow measured by SPECT: II. Quantification of I-123-IMP studies with ARG method and effects of partial volume correction.

    Science.gov (United States)

    Inoue, Kentaro; Ito, Hiroshi; Shidahara, Miho; Goto, Ryoi; Kinomura, Shigeo; Sato, Kazunori; Taki, Yasuyuki; Okada, Ken; Kaneta, Tomohiro; Fukuda, Hiroshi

    2006-02-01

    The limited spatial resolution of SPECT causes a partial volume effect (PVE) and can lead to significant underestimation of regional tracer concentration in small structures surrounded by a low tracer concentration, such as the cortical gray matter of an atrophied brain. The aim of the present study was to determine, using 123I-IMP and SPECT, the normal CBF of elderly subjects with and without PVE correction (PVC), and to determine regional differences in the effect of PVC and their association with the regional tissue fraction of the brain. Quantitative CBF SPECT using 123I-IMP was performed in 33 healthy elderly subjects (18 males, 15 females, 54-74 years old) using the autoradiographic method. We corrected CBF for PVE using segmented MR images, and analyzed quantitative CBF and regional differences in the effect of PVC using tissue fractions of gray matter (GM) and white matter (WM) in regions of interest (ROIs) placed on the cortical and subcortical GM regions and deep WM regions. The mean CBF values in GM-ROIs were 31.7 ± 6.6 and 41.0 ± 8.1 ml/100 g/min for males and females, and in WM-ROIs, 18.2 ± 0.7 and 22.9 ± 0.8 ml/100 g/min for males and females, respectively. The mean CBF values in GM-ROIs after PVC were 50.9 ± 12.8 and 65.8 ± 16.1 ml/100 g/min for males and females, respectively. There were statistically significant differences in the effect of PVC among ROIs, but not between genders. The effect of PVC was small in the cerebellum and parahippocampal gyrus, and large in the superior frontal gyrus, superior parietal lobule and precentral gyrus. Quantitative CBF in GM recovered significantly, but did not reach values as high as those obtained by invasive methods or in the H2(15)O PET study that used PVC. There were significant regional differences in the effect of PVC, which were considered to result from regional differences in GM tissue fraction, which is more reduced in the frontoparietal regions in the atrophied brain of the elderly.

  9. Database of normal human cerebral blood flow measured by SPECT. II. Quantification of I-123-IMP studies with ARG method and effects of partial volume correction

    International Nuclear Information System (INIS)

    Inoue, Kentaro; Ito, Hiroshi; Shidahara, Miho

    2006-01-01

    The limited spatial resolution of SPECT causes a partial volume effect (PVE) and can lead to significant underestimation of regional tracer concentration in small structures surrounded by a low tracer concentration, such as the cortical gray matter of an atrophied brain. The aim of the present study was to determine, using 123I-IMP and SPECT, the normal cerebral blood flow (CBF) of elderly subjects with and without PVE correction (PVC), and to determine regional differences in the effect of PVC and their association with the regional tissue fraction of the brain. Quantitative CBF SPECT using 123I-IMP was performed in 33 healthy elderly subjects (18 males, 15 females, 54-74 years old) using the autoradiographic method. We corrected CBF for PVE using segmented MR images, and analyzed quantitative CBF and regional differences in the effect of PVC using tissue fractions of gray matter (GM) and white matter (WM) in regions of interest (ROIs) placed on the cortical and subcortical GM regions and deep WM regions. The mean CBF values in GM-ROIs were 31.7±6.6 and 41.0±8.1 ml/100 g/min for males and females, and in WM-ROIs, 18.2±0.7 and 22.9±0.8 ml/100 g/min for males and females, respectively. The mean CBF values in GM-ROIs after PVC were 50.9±12.8 and 65.8±16.1 ml/100 g/min for males and females, respectively. There were statistically significant differences in the effect of PVC among ROIs, but not between genders. The effect of PVC was small in the cerebellum and parahippocampal gyrus, and large in the superior frontal gyrus, superior parietal lobule and precentral gyrus. Quantitative CBF in GM recovered significantly, but did not reach values as high as those obtained by invasive methods or in the H2(15)O PET study that used PVC. There were significant regional differences in the effect of PVC, which were considered to result from regional differences in GM tissue fraction, which is more reduced in the frontoparietal regions in the atrophied brain of the elderly.
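
    The study corrects CBF using MR-derived tissue fractions; a common two-compartment formulation (not necessarily the authors' exact algorithm) recovers GM activity from a measured ROI value given the GM/WM fractions and an estimate of WM activity:

      def pvc_two_compartment(c_measured, f_gm, f_wm, c_wm):
          """Two-compartment partial volume correction for a GM ROI.

          Assumes the measured value is a tissue-fraction-weighted sum,
          with CSF contributing zero signal:
              c_measured = f_gm * c_gm + f_wm * c_wm
          """
          if f_gm <= 0:
              raise ValueError("GM fraction must be positive")
          return (c_measured - f_wm * c_wm) / f_gm

      # Invented fractions; the CBF values loosely echo the abstract.
      print(f"{pvc_two_compartment(31.7, 0.55, 0.30, 18.2):.1f} ml/100 g/min")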

  10. Comparison of five DNA quantification methods

    DEFF Research Database (Denmark)

    Nielsen, Karsten; Mogensen, Helle Smidt; Hedman, Johannes

    2008-01-01

    Six commercial preparations of human genomic DNA were quantified using five quantification methods: UV spectrometry, SYBR-Green dye staining, slot blot hybridization with the probe D17Z1, Quantifiler Human DNA Quantification kit and RB1 rt-PCR. All methods measured higher DNA concentrations than...... Quantification kit in two experiments. The measured DNA concentrations with Quantifiler were 125 and 160% higher than expected based on the manufacturers' information. When the Quantifiler human DNA standard (Raji cell line) was replaced by the commercial human DNA preparation G147A (Promega) to generate the DNA...... standard curve in the Quantifiler Human DNA Quantification kit, the DNA quantification results of the human DNA preparations were 31% higher than expected based on the manufacturers' information. The results indicate a calibration problem with the Quantifiler human DNA standard for its use...

  11. Generalized subspace correction methods

    Energy Technology Data Exchange (ETDEWEB)

    Kolm, P. [Royal Institute of Technology, Stockholm (Sweden); Arbenz, P.; Gander, W. [Eidgenoessiche Technische Hochschule, Zuerich (Switzerland)

    1996-12-31

    A fundamental problem in scientific computing is the solution of large sparse systems of linear equations. Often these systems arise from the discretization of differential equations by finite difference, finite volume or finite element methods. Iterative methods exploiting these sparse structures have proven to be very effective on conventional computers for a wide range of applications. Due to the rapid development of parallel computers and the increasing demand for their large computing power, it has become important to design iterative methods specialized for these new architectures.
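
    As one concrete instance of a subspace correction scheme, the sketch below applies damped additive (block-Jacobi-like) corrections over index blocks of a small symmetric positive definite system; the block choice and damping factor are illustrative only.

      import numpy as np

      def additive_subspace_correction(A, b, blocks, n_iter=200, damping=0.5):
          """Additive subspace correction for A x = b (A SPD).

          Each iteration solves the residual equation restricted to every
          subspace (index block) and adds the damped corrections.
          """
          x = np.zeros_like(b)
          for _ in range(n_iter):
              r = b - A @ x
              dx = np.zeros_like(x)
              for idx in blocks:
                  dx[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
              x += damping * dx
          return x

      n = 8                                  # toy 1-D Laplacian system
      A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      b = np.ones(n)
      blocks = [np.arange(0, 5), np.arange(3, 8)]   # two overlapping blocks
      x = additive_subspace_correction(A, b, blocks)
      print(np.max(np.abs(A @ x - b)))       # residual shrinks toward zero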

  12. Development of Quantification Method for Bioluminescence Imaging

    International Nuclear Information System (INIS)

    Kim, Hyeon Sik; Min, Jung Joon; Lee, Byeong Il; Choi, Eun Seo; Tak, Yoon O; Choi, Heung Kook; Lee, Ju Young

    2009-01-01

    Optical molecular luminescence imaging is widely used for the detection and imaging of bio-photons emitted on activation of luciferase. The measured photons provide the degree of molecular alteration or the number of cells, with the advantage of a high signal-to-noise ratio. To extract useful information from the measured results, analysis based on a proper quantification method is necessary. In this work, we propose a quantification method giving a linear response of the measured light signal to measurement time. We detected the luminescence signal using lab-made optical imaging equipment, the animal light imaging system (ALIS), and two different kinds of light sources: three bacterial light-emitting sources containing different numbers of bacteria, and three different non-bacterial sources emitting very weak light. Using the concepts of the candela and the luminous flux, we derived a simplified linear quantification formula. After experimentally measuring the light intensity, the data were processed with the proposed quantification function. We obtained a linear response of photon counts to measurement time by applying the pre-determined quantification function; the ratio of the re-calculated photon counts to measurement time remained constant even when different light sources were applied. The quantification function for linear response could be applicable to the standard quantification process, and the proposed method could be used for exact quantitative analysis in various light imaging equipment exhibiting a linear response of constant light-emitting sources to measurement time.

  13. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: A postmortem study

    International Nuclear Information System (INIS)

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q.; Ducote, Justin L.; Su, Min-Ying; Molloi, Sabee

    2013-01-01

    Purpose: Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. Methods: T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left–right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of the bias field. Results: The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left–right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left–right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction.
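
    Standard FCM, the baseline segmentation in this study, alternates membership and centroid updates; a compact single-feature sketch is given below (no bias-field model, unlike CLIC, and all parameters are generic defaults rather than the paper's settings).

      import numpy as np

      def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
          """Minimal fuzzy c-means on a 1-D intensity vector x."""
          rng = np.random.default_rng(seed)
          u = rng.random((n_clusters, x.size))
          u /= u.sum(axis=0)                    # memberships sum to 1 per voxel
          for _ in range(n_iter):
              um = u ** m
              centers = um @ x / um.sum(axis=1) # fuzzy-weighted centroids
              d = np.abs(x[None, :] - centers[:, None]) + 1e-12
              u_new = d ** (-2.0 / (m - 1.0))
              u_new /= u_new.sum(axis=0)        # standard FCM membership update
              if np.max(np.abs(u_new - u)) < tol:
                  return centers, u_new
              u = u_new
          return centers, u

      # Toy bimodal intensities standing in for fat vs fibroglandular voxels.
      rng = np.random.default_rng(1)
      x = np.concatenate([rng.normal(100, 5, 500), rng.normal(300, 10, 500)])
      centers, u = fuzzy_c_means(x)
      print(np.sort(centers))                   # approximately [100, 300]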

  14. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: a postmortem study.

    Science.gov (United States)

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q; Ducote, Justin L; Su, Min-Ying; Molloi, Sabee

    2013-12-01

    Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, the field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left-right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of bias field. The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left-right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left-right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction. The investigated CLIC method

  15. The Method of Manufactured Universes for validating uncertainty quantification methods

    KAUST Repository

    Stripling, H.F.; Adams, M.L.; McClarren, R.G.; Mallick, B.K.

    2011-01-01

    The Method of Manufactured Universes is presented as a validation framework for uncertainty quantification (UQ) methodologies and as a tool for exploring the effects of statistical and modeling assumptions embedded in these methods. The framework

  16. Uncertainty Quantification in Alchemical Free Energy Methods.

    Science.gov (United States)

    Bhati, Agastya P; Wan, Shunzhou; Hu, Yuan; Sherborne, Brad; Coveney, Peter V

    2018-05-02

    Alchemical free energy methods have gained much importance recently from several reports of improved ligand-protein binding affinity predictions based on their implementation using molecular dynamics simulations. A large number of variants of such methods implementing different accelerated sampling techniques and free energy estimators are available, each claimed to be better than the others in its own way. However, the key features of reproducibility and quantification of associated uncertainties in such methods have barely been discussed. Here, we apply a systematic protocol for uncertainty quantification to a number of popular alchemical free energy methods, covering both absolute and relative free energy predictions. We show that a reliable measure of error estimation is provided by ensemble simulation (an ensemble of independent MD simulations), which applies irrespective of the free energy method. The need to use ensemble methods is fundamental and holds regardless of the duration of the molecular dynamics simulations performed.
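
    The heart of the ensemble protocol is easy to state: run N independent replicas and report the spread of their free energy estimates. A toy sketch with invented numbers:

      import numpy as np

      def ensemble_free_energy(replica_estimates):
          """Mean and standard error over independent replica estimates."""
          est = np.asarray(replica_estimates, dtype=float)
          return est.mean(), est.std(ddof=1) / np.sqrt(est.size)

      dG = [-7.2, -6.8, -7.5, -7.1, -6.9]   # hypothetical kcal/mol, 5 replicas
      mean, sem = ensemble_free_energy(dG)
      print(f"dG = {mean:.2f} +/- {sem:.2f} kcal/mol")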

  17. Model Uncertainty Quantification Methods In Data Assimilation

    Science.gov (United States)

    Pathiraja, S. D.; Marshall, L. A.; Sharma, A.; Moradkhani, H.

    2017-12-01

    Data Assimilation involves utilising observations to improve model predictions in a seamless and statistically optimal fashion. Its applications are wide-ranging; from improving weather forecasts to tracking targets such as in the Apollo 11 mission. The use of Data Assimilation methods in high dimensional complex geophysical systems is an active area of research, where there exist many opportunities to enhance existing methodologies. One of the central challenges is in model uncertainty quantification; the outcome of any Data Assimilation study is strongly dependent on the uncertainties assigned to both observations and models. I focus on developing improved model uncertainty quantification methods that are applicable to challenging real world scenarios. These include developing methods for cases where the system states are only partially observed, where there is little prior knowledge of the model errors, and where the model error statistics are likely to be highly non-Gaussian.

  18. Improved perfusion quantification in FAIR imaging by offset correction

    DEFF Research Database (Denmark)

    Sidaros, Karam; Andersen, Irene Klærke; Gesmar, Henrik

    2001-01-01

    Perfusion quantification using pulsed arterial spin labeling has been shown to be sensitive to the RF pulse slice profiles. Therefore, in Flow-sensitive Alternating-Inversion Recovery (FAIR) imaging the slice selective (ss) inversion slab is usually three to four times thicker than the imaging...... slice. However, this reduces perfusion sensitivity due to the increased transit delay of the incoming blood with unperturbed spins. In the present article, the dependence of the magnetization on the RF pulse slice profiles is inspected both theoretically and experimentally. A perfusion quantification...... model is presented that allows the use of thinner ss inversion slabs by taking into account the offset of RF slice profiles between ss and nonselective inversion slabs. This model was tested in both phantom and human studies. Magn Reson Med 46:193-197, 2001...

  19. Quantification by aberration corrected (S)TEM of boundaries formed by symmetry breaking phase transformations

    Energy Technology Data Exchange (ETDEWEB)

    Schryvers, D., E-mail: nick.schryvers@uantwerpen.be [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Salje, E.K.H. [Department of Earth Sciences, University of Cambridge, Cambridge CB2 3EQ (United Kingdom); Nishida, M. [Department of Engineering Sciences for Electronics and Materials, Faculty of Engineering Sciences, Kyushu University, Kasuga, Fukuoka 816-8580 (Japan); De Backer, A. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Idrissi, H. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Institute of Mechanics, Materials and Civil Engineering, Université Catholique de Louvain, Place Sainte Barbe, 2, B-1348, Louvain-la-Neuve (Belgium); Van Aert, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium)

    2017-05-15

    The present contribution gives a review of recent quantification work of atom displacements, atom site occupations and level of crystallinity in various systems, based on aberration-corrected HR(S)TEM images. Depending on the case studied, picometer-range precisions for individual distances can be obtained, boundary widths determined at the unit cell level, or statistical evolutions of fractions of the ordered areas calculated. In all of these cases, these quantitative measures imply new routes for the applications of the respective materials. - Highlights: • Quantification of picometer displacements at a ferroelastic twin boundary in CaTiO{sub 3}. • Quantification of kinks in a meandering ferroelectric domain wall in LiNbO{sub 3}. • Quantification of column occupation in an anti-phase boundary in Co-Pt. • Quantification of atom displacements at a twin boundary in Ni-Ti B19′ martensite.

  20. Error correction in multi-fidelity molecular dynamics simulations using functional uncertainty quantification

    Energy Technology Data Exchange (ETDEWEB)

    Reeve, Samuel Temple; Strachan, Alejandro, E-mail: strachan@purdue.edu

    2017-04-01

    We use functional, Fréchet, derivatives to quantify how thermodynamic outputs of a molecular dynamics (MD) simulation depend on the potential used to compute atomic interactions. Our approach quantifies the sensitivity of the quantities of interest with respect to the input functions, as opposed to their parameters as is done in typical uncertainty quantification methods. We show that the functional sensitivity of the average potential energy and pressure in isothermal, isochoric MD simulations using Lennard–Jones two-body interactions can be used to accurately predict those properties for other interatomic potentials (with different functional forms) without re-running the simulations. This is demonstrated under three different thermodynamic conditions, namely a crystal at room temperature, a liquid at ambient pressure, and a high pressure liquid. The method provides accurate predictions as long as the change in potential can be reasonably described to first order and does not significantly affect the region in phase space explored by the simulation. The functional uncertainty quantification approach can be used to estimate the uncertainties associated with constitutive models used in the simulation and to correct predictions if a more accurate representation becomes available.
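
    The first-order correction described here amounts to integrating the functional sensitivity against the change in the pair potential; discretized on a radial grid it is a weighted sum. Both profiles below are made up for illustration.

      import numpy as np

      def first_order_property_correction(q_ref, sensitivity, delta_v, dr):
          """Q[V + dV] ~ Q[V] + integral of (dQ/dV)(r) * dV(r) dr,
          with the functional derivative sampled on a uniform grid."""
          return q_ref + np.sum(sensitivity * delta_v) * dr

      r = np.linspace(0.9, 3.0, 200)
      dr = r[1] - r[0]
      sens = np.exp(-(r - 1.2) ** 2 / 0.05)   # invented sensitivity profile
      dV = 0.01 * (r - 2.0)                   # invented change in potential
      print(first_order_property_correction(-7.5, sens, dV, dr))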

  1. MR/PET quantification tools: Registration, segmentation, classification, and MR-based attenuation correction

    Science.gov (United States)

    Fei, Baowei; Yang, Xiaofeng; Nye, Jonathon A.; Aarsvold, John N.; Raghunath, Nivedita; Cervo, Morgan; Stark, Rebecca; Meltzer, Carolyn C.; Votaw, John R.

    2012-01-01

    Purpose: Combined MR/PET is a relatively new, hybrid imaging modality. A human MR/PET prototype system consisting of a Siemens 3T Trio MR and brain PET insert was installed and tested at our institution. Its present design does not offer measured attenuation correction (AC) using traditional transmission imaging. This study is the development of quantification tools including MR-based AC for quantification in combined MR/PET for brain imaging. Methods: The developed quantification tools include image registration, segmentation, classification, and MR-based AC. These components were integrated into a single scheme for processing MR/PET data. The segmentation method is multiscale and based on the Radon transform of brain MR images. It was developed to segment the skull on T1-weighted MR images. A modified fuzzy C-means classification scheme was developed to classify brain tissue into gray matter, white matter, and cerebrospinal fluid. Classified tissue is assigned an attenuation coefficient so that AC factors can be generated. PET emission data are then reconstructed using a three-dimensional ordered sets expectation maximization method with the MR-based AC map. Ten subjects had separate MR and PET scans. The PET with [11C]PIB was acquired using a high-resolution research tomography (HRRT) PET. MR-based AC was compared with transmission (TX)-based AC on the HRRT. Seventeen volumes of interest were drawn manually on each subject image to compare the PET activities between the MR-based and TX-based AC methods. Results: For skull segmentation, the overlap ratio between our segmented results and the ground truth is 85.2 ± 2.6%. Attenuation correction results from the ten subjects show that the difference between the MR and TX-based methods was <6.5%. Conclusions: MR-based AC compared favorably with conventional transmission-based AC. Quantitative tools including registration, segmentation, classification, and MR-based AC have been developed for use in combined MR/PET.

  2. MR/PET quantification tools: Registration, segmentation, classification, and MR-based attenuation correction

    Energy Technology Data Exchange (ETDEWEB)

    Fei, Baowei, E-mail: bfei@emory.edu [Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1841 Clifton Road Northeast, Atlanta, Georgia 30329 (United States); Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia 30322 (United States); Department of Mathematics and Computer Sciences, Emory University, Atlanta, Georgia 30322 (United States); Yang, Xiaofeng; Nye, Jonathon A.; Raghunath, Nivedita; Votaw, John R. [Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia 30329 (United States); Aarsvold, John N. [Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia 30329 (United States); Nuclear Medicine Service, Atlanta Veterans Affairs Medical Center, Atlanta, Georgia 30033 (United States); Cervo, Morgan; Stark, Rebecca [The Medical Physics Graduate Program in the George W. Woodruff School, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Meltzer, Carolyn C. [Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia 30329 (United States); Department of Neurology and Department of Psychiatry and Behavior Sciences, Emory University School of Medicine, Atlanta, Georgia 30322 (United States)

    2012-10-15

    Purpose: Combined MR/PET is a relatively new, hybrid imaging modality. A human MR/PET prototype system consisting of a Siemens 3T Trio MR and brain PET insert was installed and tested at our institution. Its present design does not offer measured attenuation correction (AC) using traditional transmission imaging. This study is the development of quantification tools including MR-based AC for quantification in combined MR/PET for brain imaging. Methods: The developed quantification tools include image registration, segmentation, classification, and MR-based AC. These components were integrated into a single scheme for processing MR/PET data. The segmentation method is multiscale and based on the Radon transform of brain MR images. It was developed to segment the skull on T1-weighted MR images. A modified fuzzy C-means classification scheme was developed to classify brain tissue into gray matter, white matter, and cerebrospinal fluid. Classified tissue is assigned an attenuation coefficient so that AC factors can be generated. PET emission data are then reconstructed using a three-dimensional ordered sets expectation maximization method with the MR-based AC map. Ten subjects had separate MR and PET scans. The PET with [{sup 11}C]PIB was acquired using a high-resolution research tomography (HRRT) PET. MR-based AC was compared with transmission (TX)-based AC on the HRRT. Seventeen volumes of interest were drawn manually on each subject image to compare the PET activities between the MR-based and TX-based AC methods. Results: For skull segmentation, the overlap ratio between our segmented results and the ground truth is 85.2 ± 2.6%. Attenuation correction results from the ten subjects show that the difference between the MR and TX-based methods was <6.5%. Conclusions: MR-based AC compared favorably with conventional transmission-based AC. Quantitative tools including registration, segmentation, classification, and MR-based AC have been developed for use in combined MR/PET.

  3. MR/PET quantification tools: Registration, segmentation, classification, and MR-based attenuation correction

    International Nuclear Information System (INIS)

    Fei, Baowei; Yang, Xiaofeng; Nye, Jonathon A.; Raghunath, Nivedita; Votaw, John R.; Aarsvold, John N.; Cervo, Morgan; Stark, Rebecca; Meltzer, Carolyn C.

    2012-01-01

    Purpose: Combined MR/PET is a relatively new, hybrid imaging modality. A human MR/PET prototype system consisting of a Siemens 3T Trio MR and brain PET insert was installed and tested at our institution. Its present design does not offer measured attenuation correction (AC) using traditional transmission imaging. This study is the development of quantification tools including MR-based AC for quantification in combined MR/PET for brain imaging. Methods: The developed quantification tools include image registration, segmentation, classification, and MR-based AC. These components were integrated into a single scheme for processing MR/PET data. The segmentation method is multiscale and based on the Radon transform of brain MR images. It was developed to segment the skull on T1-weighted MR images. A modified fuzzy C-means classification scheme was developed to classify brain tissue into gray matter, white matter, and cerebrospinal fluid. Classified tissue is assigned an attenuation coefficient so that AC factors can be generated. PET emission data are then reconstructed using a three-dimensional ordered sets expectation maximization method with the MR-based AC map. Ten subjects had separate MR and PET scans. The PET with [11C]PIB was acquired using a high-resolution research tomography (HRRT) PET. MR-based AC was compared with transmission (TX)-based AC on the HRRT. Seventeen volumes of interest were drawn manually on each subject image to compare the PET activities between the MR-based and TX-based AC methods. Results: For skull segmentation, the overlap ratio between our segmented results and the ground truth is 85.2 ± 2.6%. Attenuation correction results from the ten subjects show that the difference between the MR and TX-based methods was <6.5%. Conclusions: MR-based AC compared favorably with conventional transmission-based AC. Quantitative tools including registration, segmentation, classification, and MR-based AC have been developed for use in combined MR/PET.

  4. Impact of improved attenuation correction featuring a bone atlas and truncation correction on PET quantification in whole-body PET/MR.

    Science.gov (United States)

    Oehmigen, Mark; Lindemann, Maike E; Gratz, Marcel; Kirchner, Julian; Ruhlmann, Verena; Umutlu, Lale; Blumhagen, Jan Ole; Fenchel, Matthias; Quick, Harald H

    2018-04-01

    Recent studies have shown an excellent correlation between PET/MR and PET/CT hybrid imaging in detecting lesions. However, a systematic underestimation of PET quantification in PET/MR has been observed. This is attributable to two methodological challenges of MR-based attenuation correction (AC): (1) lack of bone information, and (2) truncation of the MR-based AC maps (μmaps) along the patient arms. The aim of this study was to evaluate the impact of improved AC featuring a bone atlas and truncation correction on PET quantification in whole-body PET/MR. The MR-based Dixon method provides four-compartment μmaps (background air, lungs, fat, soft tissue) which served as a reference for PET/MR AC in this study. A model-based bone atlas provided bone tissue as a fifth compartment, while the HUGE method provided truncation correction. The study population comprised 51 patients with oncological diseases, all of whom underwent a whole-body PET/MR examination. Each whole-body PET dataset was reconstructed four times using standard four-compartment μmaps, five-compartment μmaps, four-compartment μmaps + HUGE, and five-compartment μmaps + HUGE. The SUVmax for each lesion was measured to assess the impact of each μmap on PET quantification. All four μmaps in each patient provided robust results for reconstruction of the AC PET data. Overall, SUVmax was quantified in 99 tumours and lesions. Compared to the reference four-compartment μmap, the mean SUVmax of all 99 lesions increased by 1.4 ± 2.5% when bone was added, by 2.1 ± 3.5% when HUGE was added, and by 4.4 ± 5.7% when bone + HUGE was added. Larger quantification bias of up to 35% was found for single lesions when bone and truncation correction were added to the μmaps, depending on their individual location in the body. The novel AC method, featuring a bone model and truncation correction, improved PET quantification in whole-body PET/MR imaging. Short reconstruction times, straightforward
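
    For reference, SUVmax is the peak ROI activity concentration normalised by injected dose per unit body weight (assuming a tissue density of 1 g/ml); a minimal computation with invented values:

      import numpy as np

      def suv_max(roi_bq_per_ml, injected_dose_bq, body_weight_g):
          """SUVmax of a lesion ROI."""
          suv = roi_bq_per_ml / (injected_dose_bq / body_weight_g)
          return float(np.max(suv))

      roi = np.array([9.1e3, 1.2e4, 1.05e4])              # Bq/ml in the ROI
      print(f"SUVmax = {suv_max(roi, 350e6, 80e3):.2f}")  # 350 MBq, 80 kg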

  5. Impact of improved attenuation correction featuring a bone atlas and truncation correction on PET quantification in whole-body PET/MR

    Energy Technology Data Exchange (ETDEWEB)

    Oehmigen, Mark; Lindemann, Maike E. [University Hospital Essen, High Field and Hybrid MR Imaging, Essen (Germany); Gratz, Marcel; Quick, Harald H. [University Hospital Essen, High Field and Hybrid MR Imaging, Essen (Germany); University Duisburg-Essen, Erwin L. Hahn Institute for MR Imaging, Essen (Germany); Kirchner, Julian [University Dusseldorf, Department of Diagnostic and Interventional Radiology, Medical Faculty, Dusseldorf (Germany); Ruhlmann, Verena [University Hospital Essen, Department of Nuclear Medicine, Essen (Germany); Umutlu, Lale [University Hospital Essen, Department of Diagnostic and Interventional Radiology and Neuroradiology, Essen (Germany); Blumhagen, Jan Ole; Fenchel, Matthias [Siemens Healthcare GmbH, Erlangen (Germany)

    2018-04-15

    Recent studies have shown an excellent correlation between PET/MR and PET/CT hybrid imaging in detecting lesions. However, a systematic underestimation of PET quantification in PET/MR has been observed. This is attributable to two methodological challenges of MR-based attenuation correction (AC): (1) lack of bone information, and (2) truncation of the MR-based AC maps (μmaps) along the patient arms. The aim of this study was to evaluate the impact of improved AC featuring a bone atlas and truncation correction on PET quantification in whole-body PET/MR. The MR-based Dixon method provides four-compartment μmaps (background air, lungs, fat, soft tissue) which served as a reference for PET/MR AC in this study. A model-based bone atlas provided bone tissue as a fifth compartment, while the HUGE method provided truncation correction. The study population comprised 51 patients with oncological diseases, all of whom underwent a whole-body PET/MR examination. Each whole-body PET dataset was reconstructed four times using standard four-compartment μmaps, five-compartment μmaps, four-compartment μmaps + HUGE, and five-compartment μmaps + HUGE. The SUV{sub max} for each lesion was measured to assess the impact of each μmap on PET quantification. All four μmaps in each patient provided robust results for reconstruction of the AC PET data. Overall, SUV{sub max} was quantified in 99 tumours and lesions. Compared to the reference four-compartment μmap, the mean SUV{sub max} of all 99 lesions increased by 1.4 ± 2.5% when bone was added, by 2.1 ± 3.5% when HUGE was added, and by 4.4 ± 5.7% when bone + HUGE was added. Larger quantification bias of up to 35% was found for single lesions when bone and truncation correction were added to the μmaps, depending on their individual location in the body. The novel AC method, featuring a bone model and truncation correction, improved PET quantification in whole-body PET/MR imaging. Short reconstruction times, straightforward

  6. A practical method for accurate quantification of large fault trees

    International Nuclear Information System (INIS)

    Choi, Jong Soo; Cho, Nam Zin

    2007-01-01

    This paper describes a practical method to accurately quantify top event probability and importance measures from incomplete minimal cut sets (MCS) of a large fault tree. The MCS-based fault tree method is extensively used in probabilistic safety assessments. Several sources of uncertainties exist in MCS-based fault tree analysis. The paper is focused on quantification of the following two sources of uncertainties: (1) the truncation neglecting low-probability cut sets and (2) the approximation in quantifying MCSs. The method proposed in this paper is based on a Monte Carlo simulation technique to estimate the probability of the discarded MCSs and the sum of disjoint products (SDP) approach complemented by the correction factor approach (CFA). The method provides the capability to accurately quantify the two uncertainties and estimate the top event probability and importance measures of large coherent fault trees. The proposed fault tree quantification method has been implemented in the CUTREE code package and is tested on the two example fault trees.
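
    For context, the two textbook approximations that SDP/CFA improves upon fit in a few lines (cut set probabilities invented, cut sets treated as independent):

      def mcs_top_event_probability(cut_set_probs):
          """Rare-event approximation and min-cut upper bound (MCUB)."""
          rare_event = sum(cut_set_probs)          # simple sum of MCS probs
          complement = 1.0
          for p in cut_set_probs:
              complement *= (1.0 - p)
          mcub = 1.0 - complement                  # 1 - prod(1 - p_i)
          return rare_event, mcub

      approx, upper = mcs_top_event_probability([1e-3, 5e-4, 2e-4])
      print(f"rare-event: {approx:.3e}, MCUB: {upper:.3e}")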

  7. Quantification Methods of Management Skills in Shipping

    Directory of Open Access Journals (Sweden)

    Riana Iren RADU

    2012-04-01

    Full Text Available Romania cannot overcome the financial crisis without business growth, without finding opportunities for economic development and without attracting investment into the country. Successful managers find ways to overcome situations of uncertainty. The purpose of this paper is to determine the managerial skills developed by the Romanian fluvial shipping company NAVROM (hereinafter CNFR NAVROM SA), compared with ten other major competitors in the same domain, using the financial information of these companies for the years 2005-2010. To carry out this work, quantification methods of managerial skills are applied to CNFR NAVROM SA Galati, Romania, for example the analysis of financial performance management based on profitability ratios, net profit margin, supplier management and turnover.

  8. Standardless quantification methods in electron probe microanalysis

    Energy Technology Data Exchange (ETDEWEB)

    Trincavelli, Jorge, E-mail: trincavelli@famaf.unc.edu.ar [Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba, Ciudad Universitaria, 5000 Córdoba (Argentina); Instituto de Física Enrique Gaviola, Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina, Medina Allende s/n, Ciudad Universitaria, 5000 Córdoba (Argentina); Limandri, Silvina, E-mail: s.limandri@conicet.gov.ar [Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba, Ciudad Universitaria, 5000 Córdoba (Argentina); Instituto de Física Enrique Gaviola, Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina, Medina Allende s/n, Ciudad Universitaria, 5000 Córdoba (Argentina); Bonetto, Rita, E-mail: bonetto@quimica.unlp.edu.ar [Centro de Investigación y Desarrollo en Ciencias Aplicadas Dr. Jorge Ronco, Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina, Facultad de Ciencias Exactas, de la Universidad Nacional de La Plata, Calle 47 N° 257, 1900 La Plata (Argentina)

    2014-11-01

    The elemental composition of a solid sample can be determined by electron probe microanalysis with or without the use of standards. Standardless algorithms are considerably faster than methods that require standards; they are useful when a suitable set of standards is not available or for rough samples, and they also help to solve the problem of current variation, for example in equipment with a cold field emission gun. Due to significant advances in accuracy achieved in recent years, a product of the successive efforts made to improve the description of the generation, absorption and detection of X-rays, standardless methods have increasingly become an interesting option for the user. Nevertheless, up to now, algorithms that use standards are still more precise than standardless methods. It is important to remark that care must be taken with results provided by standardless methods that normalize the calculated concentration values to 100%, unless an estimate of the errors is reported. In this work, a comprehensive discussion of the key features of the main standardless quantification methods, as well as the level of accuracy achieved by them, is presented. - Highlights: • Standardless methods are a good alternative when no suitable standards are available. • Their accuracy reaches 10% for 95% of the analyses when traces are excluded. • Some of them are suitable for the analysis of rough samples.

  9. The relative contributions of scatter and attenuation corrections toward improved brain SPECT quantification

    International Nuclear Information System (INIS)

    Stodilka, Robert Z.; Msaki, Peter; Prato, Frank S.; Nicholson, Richard L.; Kemp, B.J.

    1998-01-01

    Mounting evidence indicates that scatter and attenuation are major confounds to objective diagnosis of brain disease by quantitative SPECT. There is considerable debate, however, as to the relative importance of scatter correction (SC) and attenuation correction (AC), and how they should be implemented. The efficacy of SC and AC for 99mTc brain SPECT was evaluated using a two-compartment fully tissue-equivalent anthropomorphic head phantom. Four correction schemes were implemented: uniform broad-beam AC, non-uniform broad-beam AC, uniform SC+AC, and non-uniform SC+AC. SC was based on non-stationary deconvolution scatter subtraction, modified to incorporate a priori knowledge of either the head contour (uniform SC) or the transmission map (non-uniform SC). The quantitative accuracy of the correction schemes was evaluated in terms of contrast recovery, relative quantification (cortical:cerebellar activity), uniformity ((coefficient of variation of 230 macro-voxels) × 100%), and bias (relative to a calibration scan). Our results were: uniform broad-beam (μ = 0.12 cm⁻¹) AC (the most popular correction): 71% contrast recovery, 112% relative quantification, 7.0% uniformity, +23% bias; non-uniform broad-beam (soft tissue μ = 0.12 cm⁻¹) AC: 73%, 114%, 6.0%, +21%, respectively; uniform SC+AC: 90%, 99%, 4.9%, +12%; non-uniform SC+AC: 93%, 101%, 4.0%, +10%. SC and AC together achieved the best quantification; however, non-uniform corrections produce only small improvements over their uniform counterparts. SC+AC was found to be superior to AC alone; this advantage is distinct and consistent across all four quantification indices. (author)

  10. Quantification accuracy and partial volume effect in dependence of the attenuation correction of a state-of-the-art small animal PET scanner

    International Nuclear Information System (INIS)

    Mannheim, Julia G; Judenhofer, Martin S; Schmid, Andreas; Pichler, Bernd J; Tillmanns, Julia; Stiller, Detlef; Sossi, Vesna

    2012-01-01

    Quantification accuracy and partial volume effect (PVE) of the Siemens Inveon PET scanner were evaluated. The influence of transmission source activities (40 and 160 MBq) on the quantification accuracy and the PVE were determined. Dynamic range, object size and PVE for different sphere sizes, contrast ratios and positions in the field of view (FOV) were evaluated. The acquired data were reconstructed using different algorithms and correction methods. The activity level of the transmission source and the total emission activity in the FOV strongly influenced the attenuation maps. Reconstruction algorithms, correction methods, object size and location within the FOV had a strong influence on the PVE in all configurations. All evaluated parameters potentially influence the quantification accuracy. Hence, all protocols should be kept constant during a study to allow a comparison between different scans. (paper)

  11. A hybrid numerical method for orbit correction

    International Nuclear Information System (INIS)

    White, G.; Himel, T.; Shoaee, H.

    1997-09-01

    The authors describe a simple hybrid numerical method for beam orbit correction in particle accelerators. The method both overcomes degeneracy in the linear system being solved and respects bounds on the solution. It uses the Singular Value Decomposition (SVD) to find and remove the null-space in the system, followed by a bounded Linear Least Squares analysis of the remaining recast problem. It was developed for correcting orbit and dispersion in the B-factory rings.
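
    The SVD step can be sketched directly: truncate small singular values of the BPM-to-corrector response matrix to remove the (near-)null-space, then solve for corrector kicks. The subsequent bounded least-squares stage is omitted here, and the matrices are toy data.

      import numpy as np

      def svd_orbit_correction(response, orbit, cutoff=1e-3):
          """Corrector kicks minimising the orbit via a truncated SVD
          pseudo-inverse of the response matrix."""
          u, s, vt = np.linalg.svd(response, full_matrices=False)
          s_inv = np.where(s > cutoff * s[0], 1.0 / s, 0.0)  # drop null-space
          return -(vt.T * s_inv) @ (u.T @ orbit)

      rng = np.random.default_rng(0)
      R = rng.normal(size=(10, 4))     # 10 BPMs, 4 correctors (toy numbers)
      x = rng.normal(size=10)          # measured orbit
      theta = svd_orbit_correction(R, x)
      print("residual rms:", np.std(x + R @ theta))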

  12. Multi-atlas attenuation correction supports full quantification of static and dynamic brain PET data in PET-MR

    Science.gov (United States)

    Mérida, Inés; Reilhac, Anthonin; Redouté, Jérôme; Heckemann, Rolf A.; Costes, Nicolas; Hammers, Alexander

    2017-04-01

    In simultaneous PET-MR, attenuation maps are not directly available. Essential for absolute radioactivity quantification, they need to be derived from MR or PET data to correct for gamma photon attenuation by the imaged object. We evaluate a multi-atlas attenuation correction method for brain imaging (MaxProb) on static [18F]FDG PET and, for the first time, on dynamic PET, using the serotoninergic tracer [18F]MPPF. A database of 40 MR/CT image pairs (atlases) was used. The MaxProb method synthesises subject-specific pseudo-CTs by registering each atlas to the target subject space. Atlas CT intensities are then fused via label propagation and majority voting. Here, we compared these pseudo-CTs with the real CTs in a leave-one-out design, contrasting the MaxProb approach with a simplified single-atlas method (SingleAtlas). We evaluated the impact of pseudo-CT accuracy on reconstructed PET images, compared to PET data reconstructed with real CT, at the regional and voxel levels for the following: radioactivity images; time-activity curves; and kinetic parameters (non-displaceable binding potential, BPND). On static [18F]FDG, the mean bias for MaxProb ranged between 0 and 1% for 73 out of 84 regions assessed, and exceptionally peaked at 2.5% for only one region. Statistical parametric map analysis of MaxProb-corrected PET data showed significant differences in less than 0.02% of the brain volume, whereas SingleAtlas-corrected data showed significant differences in 20% of the brain volume. On dynamic [18F]MPPF, most regional errors on BPND ranged from -1 to  +3% (maximum bias 5%) for the MaxProb method. With SingleAtlas, errors were larger and had higher variability in most regions. PET quantification bias increased over the duration of the dynamic scan for SingleAtlas, but not for MaxProb. We show that this effect is due to the interaction of the spatial tracer-distribution heterogeneity variation over time with the degree of accuracy of the attenuation maps. This
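
    Label propagation with majority voting, the fusion step of MaxProb, can be illustrated voxelwise; this is a toy sketch rather than the authors' implementation, with invented HU values and tissue labels.

      import numpy as np

      def majority_vote_pseudo_ct(atlas_cts, atlas_labels):
          """Fuse registered atlas CTs into a pseudo-CT.

          atlas_cts    : (n_atlases, n_voxels) atlas CT intensities
          atlas_labels : (n_atlases, n_voxels) tissue labels per atlas
          Per voxel: the majority tissue label wins, and the pseudo-CT
          value is the mean intensity of the atlases that voted for it.
          """
          n_atlases, n_vox = atlas_labels.shape
          pseudo = np.empty(n_vox)
          for v in range(n_vox):
              labels, counts = np.unique(atlas_labels[:, v], return_counts=True)
              winner = labels[np.argmax(counts)]
              pseudo[v] = atlas_cts[atlas_labels[:, v] == winner, v].mean()
          return pseudo

      cts = np.array([[40.0, 1000.0], [55.0, 950.0], [60.0, -5.0]])  # HU
      labels = np.array([[1, 2], [1, 2], [1, 1]])   # 1 = soft tissue, 2 = bone
      print(majority_vote_pseudo_ct(cts, labels))   # [~51.67, 975.0]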

  13. Application of Fuzzy Comprehensive Evaluation Method in Trust Quantification

    Directory of Open Access Journals (Sweden)

    Shunan Ma

    2011-10-01

    Full Text Available Trust can play an important role in the sharing of resources and information in open network environments. Trust quantification is thus an important issue in dynamic trust management. Considering the fuzziness and uncertainty of trust, in this paper we propose a fuzzy comprehensive evaluation method to quantify trust, along with a trust quantification algorithm. Simulation results show that the proposed algorithm can effectively quantify trust and that the quantified value of an entity's trust is consistent with the entity's behavior.
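
    The usual fuzzy comprehensive evaluation computes B = W ∘ R from a factor weight vector W and a factor-by-grade membership matrix R; with the common weighted-average operator this is a plain matrix product. Weights and memberships below are invented for illustration.

      import numpy as np

      def fuzzy_comprehensive_evaluation(weights, membership):
          """Grade distribution B = W @ R, normalised to sum to one."""
          b = weights @ membership
          return b / b.sum()

      w = np.array([0.5, 0.3, 0.2])            # hypothetical factor weights
      r = np.array([[0.7, 0.2, 0.1],           # e.g. honesty
                    [0.4, 0.4, 0.2],           # e.g. competence
                    [0.2, 0.5, 0.3]])          # e.g. availability
      print(fuzzy_comprehensive_evaluation(w, r))   # trust-grade distribution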

  14. A New Class of Scaling Correction Methods

    International Nuclear Information System (INIS)

    Mei Li-Jie; Wu Xin; Liu Fu-Yao

    2012-01-01

    When conventional integrators like Runge-Kutta-type algorithms are used, numerical errors can make an orbit deviate from a hypersurface determined by many constraints, which leads to unreliable numerical solutions. Scaling correction methods are a powerful tool to avoid this. We focus on their applications, and also develop a family of new velocity multiple scaling correction methods where scale factors only act on the related components of the integrated momenta. They can preserve exactly some first integrals of motion in discrete or continuous dynamical systems, so that rapid growth of roundoff or truncation errors is suppressed significantly. (general)
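
    The idea can be illustrated with a toy velocity scaling correction: after each step of a deliberately sloppy integrator, the momentum-like variable is rescaled so the energy integral is preserved exactly. The oscillator and step size are invented; the paper's methods are more refined.

        import math

        def euler_step(x, v, dt):        # crude integrator, drifts in energy
            return x + dt * v, v - dt * x

        x, v, dt = 1.0, 0.0, 0.01
        E0 = 0.5 * (v * v + x * x)       # first integral to be preserved
        for _ in range(100_000):
            x, v = euler_step(x, v, dt)
            # Velocity scaling: rescale only the momentum-like variable so
            # the state is projected back onto the E = E0 hypersurface.
            v2 = 2.0 * E0 - x * x
            if v2 > 0.0:
                v = math.copysign(math.sqrt(v2), v)
        print("energy error:", 0.5 * (v * v + x * x) - E0)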

  15. Another method of dead time correction

    International Nuclear Information System (INIS)

    Sabol, J.

    1988-01-01

    A new method of the correction of counting losses caused by a non-extended dead time of pulse detection systems is presented. The approach is based on the distribution of time intervals between pulses at the output of the system. The method was verified both experimentally and by using the Monte Carlo simulations. The results show that the suggested technique is more reliable and accurate than other methods based on a separate measurement of the dead time. (author) 5 refs
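
    For orientation, the textbook count-loss relation that such methods refine, for a non-extending (non-paralyzable) dead time tau, is n = m / (1 - m*tau); a minimal sketch:

        # Classical count-loss correction for a non-extending dead time,
        # given the measured rate m (counts/s) and dead time tau (s).
        def true_rate_nonparalyzable(m_cps, tau_s):
            if m_cps * tau_s >= 1.0:
                raise ValueError("measured rate saturates the detector model")
            return m_cps / (1.0 - m_cps * tau_s)

        print(true_rate_nonparalyzable(50_000, 2e-6))  # ~55.6 kcps true rate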

  16. Off-Angle Iris Correction Methods

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Thompson, Joseph T [ORNL; Karakaya, Mahmut [ORNL; Boehnen, Chris Bensing [ORNL

    2016-01-01

    In many real-world iris recognition systems, obtaining consistent frontal images is problematic due to inexperienced or uncooperative users, untrained operators, or distracting environments. As a result many collected images are unusable by modern iris matchers. In this chapter we present four methods for correcting off-angle iris images to appear frontal, which makes them compatible with existing iris matchers. The methods include an affine correction, a ray-traced model of the human eye, measured displacements, and a genetic algorithm optimized correction. The affine correction represents a simple way to create an iris image that appears frontal, but it does not account for refractive distortions of the cornea. The other methods account for refraction. The ray-traced model simulates the optical properties of the cornea. The other two methods are data driven. The first uses optical flow to measure the displacements of the iris texture when compared to frontal images of the same subject. The second uses a genetic algorithm to learn a mapping that optimizes the Hamming distance scores between off-angle and frontal images. We hypothesize that the biological model presented in our earlier work does not adequately account for all variations in eye anatomy and therefore the two data-driven approaches should yield better performance. Results using the commercial VeriEye matcher show that the genetic algorithm method clearly improves over prior work and makes iris recognition possible up to 50 degrees off-angle.
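
    The affine baseline can be pictured as a simple geometric unprojection; the sketch below stretches a synthetic off-angle (foreshortened) iris back to a circle with SciPy, ignoring corneal refraction, which the other three methods address. All sizes and angles are invented.

        import numpy as np
        from scipy.ndimage import affine_transform

        off_angle_deg = 40.0
        img = np.zeros((128, 128)); yy, xx = np.mgrid[:128, :128]
        # Simulate foreshortening: a 40-degree yaw shrinks the x-axis.
        shrink = np.cos(np.radians(off_angle_deg))
        ellipse = ((xx - 64) / (40 * shrink)) ** 2 + ((yy - 64) / 40) ** 2 <= 1
        img[ellipse] = 1.0

        # Inverse affine map: scale x back up around the iris center.
        M = np.array([[1.0, 0.0], [0.0, shrink]])     # (y, x) convention
        offset = np.array([64, 64]) - M @ np.array([64, 64])
        frontal = affine_transform(img, M, offset=offset, order=1)
        print("axis ratio after correction:",
              frontal[64].sum() / frontal[:, 64].sum())  # ~1.0 for a circle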

  17. Iteration of ultrasound aberration correction methods

    Science.gov (United States)

    Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond

    2004-05-01

    Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array. This filter process is here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimation of the TDA filter, and performing correction on transmit and receive, has proven difficult. It has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimating until a convergence criterion is fulfilled (adaptive imaging). Two methods for estimating time-delay and amplitude variations in receive signals from random scatterers have been developed. One method correlates each element signal with a reference signal. The other uses eigenvalue decomposition of the receive cross-spectrum matrix, based upon a receive energy-maximizing criterion. Simulations of iterating aberration correction with a TDA filter were investigated to study its convergence properties. Weak and strong human-body wall models generated the aberration; both emulated the human abdominal wall. Results after iteration improve aberration correction substantially, and both estimation methods converge, even for the case of strong aberration.
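
    The reference-correlation estimator can be sketched as follows on synthetic data: each element's delay (in samples) is taken from the peak of its cross-correlation with a reference signal, here element 0. The array size, delays, and signals are invented.

        import numpy as np

        rng = np.random.default_rng(1)
        n_elem, n_samp = 8, 256
        true_delays = rng.integers(-5, 6, size=n_elem)
        base = rng.normal(size=n_samp + 20)
        signals = np.stack([np.roll(base, d)[:n_samp] for d in true_delays])

        reference = signals[0]            # element 0 as common reference
        est = []
        for sig in signals:
            xc = np.correlate(sig, reference, mode="full")
            est.append(np.argmax(xc) - (n_samp - 1))   # peak lag = delay
        print("true (rel. to elem 0):", true_delays - true_delays[0])
        print("estimated:            ", np.array(est))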

  18. Efficient orbit integration by manifold correction methods.

    Science.gov (United States)

    Fukushima, Toshio

    2005-12-01

    Triggered by a desire to investigate, numerically, the planetary precession through a long-term numerical integration of the solar system, we developed a new formulation of numerical integration of orbital motion named manifold correction methods. The main trick is to rigorously retain the consistency of physical relations, such as the orbital energy, the orbital angular momentum, or the Laplace integral, of a binary subsystem. This maintenance is done by applying a correction to the integrated variables at each integration step. Typical methods of correction are certain geometric transformations, such as spatial scaling and spatial rotation, which are commonly used in the comparison of reference frames, or mathematically reasonable operations, such as modularization of angle variables into the standard domain [-pi, pi). The manifold correction methods, in their finally evolved form, are the orbital longitude methods, which enable us to conduct an extremely precise integration of orbital motions. In unperturbed orbits, the integration errors are suppressed at the machine epsilon level for an indefinitely long period. In perturbed cases, on the other hand, the errors initially grow in proportion to the square root of time and then increase more rapidly, the onset of which depends on the type and magnitude of the perturbations. This feature is also realized for highly eccentric orbits by applying the same idea as used in KS-regularization. In particular, the introduction of time elements greatly enhances the performance of numerical integration of KS-regularized orbits, whether the scaling is applied or not.

  19. The Method of Manufactured Universes for validating uncertainty quantification methods

    KAUST Repository

    Stripling, H.F.

    2011-09-01

    The Method of Manufactured Universes is presented as a validation framework for uncertainty quantification (UQ) methodologies and as a tool for exploring the effects of statistical and modeling assumptions embedded in these methods. The framework calls for a manufactured reality from which experimental data are created (possibly with experimental error), an imperfect model (with uncertain inputs) from which simulation results are created (possibly with numerical error), the application of a system for quantifying uncertainties in model predictions, and an assessment of how accurately those uncertainties are quantified. The application presented in this paper manufactures a particle-transport universe, models it using diffusion theory with uncertain material parameters, and applies both Gaussian process and Bayesian MARS algorithms to make quantitative predictions about new experiments within the manufactured reality. The results of this preliminary study indicate that, even in a simple problem, the improper application of a specific UQ method or unrealized effects of a modeling assumption may produce inaccurate predictions. We conclude that the validation framework presented in this paper is a powerful and flexible tool for the investigation and understanding of UQ methodologies. © 2011 Elsevier Ltd. All rights reserved.
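
    A toy version of the framework, with a manufactured law, noisy "experiments", and a Gaussian process standing in for the UQ method (using scikit-learn rather than the paper's tools), might look like this; the coverage check at the end is the kind of assessment the framework calls for.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(2)
        truth = lambda x: np.exp(-x) + 0.05 * np.sin(5 * x)  # manufactured reality
        X = rng.uniform(0, 3, size=(25, 1))
        y = truth(X.ravel()) + rng.normal(0, 0.02, size=25)  # noisy "experiments"

        gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(1e-3)).fit(X, y)
        Xnew = np.linspace(0, 3, 50)[:, None]                # "new experiments"
        mean, std = gp.predict(Xnew, return_std=True)

        # Assessment step: how often do the claimed 95% intervals cover truth?
        inside = np.abs(truth(Xnew.ravel()) - mean) <= 1.96 * std
        print(f"95% intervals cover truth at {inside.mean():.0%} of new points")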

  20. Improved correlation between CT emphysema quantification and pulmonary function test by density correction of volumetric CT data based on air and aortic density

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Song Soo [Department of Radiology, Chungnam National University Hospital, Chungnam National University School of Medicine (Korea, Republic of); Seo, Joon Beom, E-mail: seojb@amc.seoul.kr [Department of Radiology, University of Ulsan College of Medicine, Asan Medical Center (Korea, Republic of); Kim, Namkug; Chae, Eun Jin [Department of Radiology, University of Ulsan College of Medicine, Asan Medical Center (Korea, Republic of); Lee, Young Kyung [Department of Radiology, Kyung Hee University Hospital at Gangdong (Korea, Republic of); Oh, Yeon Mok; Lee, Sang Do [Division of Pulmonology, Department of Internal Medicine, University of Ulsan College of Medicine, Asan Medical Center (Korea, Republic of)

    2014-01-15

    Objectives: To determine the improvement of emphysema quantification with density correction and to determine the optimal site to use for air density correction on volumetric computed tomography (CT). Methods: Seventy-eight CT scans of COPD patients (GOLD II–IV, smoking history 39.2 ± 25.3 pack-years) were obtained from several single-vendor 16-MDCT scanners. After density measurement of aorta, tracheal- and external air, volumetric CT density correction was conducted (two reference values: air, −1000 HU/blood, +50 HU). Using in-house software, emphysema index (EI) and mean lung density (MLD) were calculated. Differences in air densities, MLD and EI prior to and after density correction were evaluated (paired t-test). Correlations between those parameters and FEV1 and FEV1/FVC were compared (age- and sex-adjusted partial correlation analysis). Results: Measured densities (HU) of tracheal- and external air differed significantly (−990 ± 14, −1016 ± 9, P < 0.001). MLD and EI on original CT data, after density correction using tracheal- and external air, also differed significantly (MLD: −874.9 ± 27.6 vs. −882.3 ± 24.9 vs. −860.5 ± 26.6; EI: 16.8 ± 13.4 vs. 21.1 ± 14.5 vs. 9.7 ± 10.5, respectively, P < 0.001). The correlation coefficients between CT quantification indices and FEV1 and FEV1/FVC increased after density correction. The tracheal air correction showed better results than the external air correction. Conclusion: Density correction of volumetric CT data can improve correlations of emphysema quantification and PFT.
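
    The two-point rescaling behind such a correction can be sketched directly: map the measured air and aortic-blood values onto their reference HU values (−1000 and +50) with a linear transform. The measured anchor values below are examples, not the study's data.

        import numpy as np

        def density_correct(hu, air_measured, blood_measured,
                            air_ref=-1000.0, blood_ref=50.0):
            # Linear map sending measured air -> air_ref, blood -> blood_ref.
            scale = (blood_ref - air_ref) / (blood_measured - air_measured)
            return air_ref + (hu - air_measured) * scale

        volume = np.array([-990.0, -870.0, 40.0])  # tracheal air, lung, aorta
        print(density_correct(volume, air_measured=-990.0, blood_measured=40.0))
        # -> [-1000, -877.7, 50]: EI and MLD are then recomputed on these values.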

  2. Methods of correcting Anger camera deadtime losses

    International Nuclear Information System (INIS)

    Sorenson, J.A.

    1976-01-01

    Three different methods of correcting for Anger camera deadtime loss were investigated. These included analytic methods (mathematical modeling), the marker-source method, and a new method based on counting "pileup" events appearing in a pulse-height analyzer window positioned above the photopeak of interest. The studies were done with 99mTc on a Searle Radiographics camera with a measured deadtime of about 6 μs. Analytic methods were found to be unreliable because of unpredictable changes in deadtime with changes in radiation scattering conditions. Both the marker-source method and the pileup-counting method were found to be accurate to within a few percent for true counting rates of up to about 200 kcps, with the pileup-counting method giving better results. This finding applied to sources at depths ranging up to 10 cm of pressed wood. The relative merits of the two methods are discussed.

  3. Decay correction methods in dynamic PET studies

    International Nuclear Information System (INIS)

    Chen, K.; Reiman, E.; Lawson, M.

    1995-01-01

    In order to reconstruct positron emission tomography (PET) images in quantitative dynamic studies, the data must be corrected for radioactive decay. One of the two commonly used methods ignores physiological processes, including blood flow, that occur at the same time as radioactive decay; the other makes incorrect use of time-accumulated PET counts. In simulated dynamic PET studies using 11C-acetate and 18F-fluorodeoxyglucose (FDG), these methods are shown to result in biased estimates of the time-activity curve (TAC) and model parameters. New methods described in this article provide significantly improved parameter estimates in dynamic PET studies.
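
    For a frame [t1, t2], one standard frame-wise factor that accounts for decay during the frame (rather than correcting accumulated counts at a single time point) is DCF = λ·Δt·exp(λ·t1) / (1 − exp(−λ·Δt)); a minimal sketch with invented numbers:

        import math

        def frame_decay_factor(t1_min, t2_min, half_life_min=20.4):
            # Decay correction for counts acquired over [t1, t2], referred
            # back to injection time; 20.4 min is the C-11 half-life.
            lam = math.log(2) / half_life_min
            dt = t2_min - t1_min
            return lam * dt * math.exp(lam * t1_min) / (1.0 - math.exp(-lam * dt))

        counts = 1.0e5                   # counts acquired in a 5-min frame
        print(counts * frame_decay_factor(10.0, 15.0))  # decay-corrected counts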

  4. A simple method to improve the quantification accuracy of energy-dispersive X-ray microanalysis

    International Nuclear Information System (INIS)

    Walther, T

    2008-01-01

    Energy-dispersive X-ray spectroscopy in a transmission electron microscope is a standard tool for chemical microanalysis and routinely provides qualitative information on the presence of all major elements above Z=5 (boron) in a sample. Spectrum quantification relies on suitable corrections for absorption and fluorescence, in particular for thick samples and soft X-rays. A brief presentation is given of an easy way to improve quantification accuracy by evaluating the intensity ratio of two measurements acquired at different detector take-off angles. As the take-off angle determines the effective sample thickness seen by the detector, this method corresponds to taking two measurements from the same position at two different thicknesses, which makes it possible to correct for absorption and fluorescence more reliably. An analytical solution for determining the depth of a feature embedded in the specimen foil is also provided.
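
    In a simplified single-layer absorption model (a loose illustration of the idea, not the paper's full treatment), the detected intensity scales as exp(−μt/sin θ), so the ratio of two take-off-angle measurements yields the absorption path μt directly:

        import math

        def mu_t_from_ratio(I1, I2, theta1_deg, theta2_deg):
            a1 = 1.0 / math.sin(math.radians(theta1_deg))
            a2 = 1.0 / math.sin(math.radians(theta2_deg))
            return math.log(I2 / I1) / (a1 - a2)   # = mu * t

        # Synthetic check: mu*t = 0.3 is recovered from the two intensities.
        I0, mut = 1000.0, 0.3
        I1 = I0 * math.exp(-mut / math.sin(math.radians(20)))
        I2 = I0 * math.exp(-mut / math.sin(math.radians(70)))
        print(mu_t_from_ratio(I1, I2, 20, 70))     # -> 0.3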

  5. Enhancement of Electroluminescence (EL) image measurements for failure quantification methods

    DEFF Research Database (Denmark)

    Parikh, Harsh; Spataru, Sergiu; Sera, Dezso

    2018-01-01

    Enhanced quality images are necessary for EL image analysis and failure quantification. A method is proposed which assesses image quality in terms of the accuracy of failure detection in solar panels through the electroluminescence (EL) imaging technique. The goal of the paper is to determine the most

  6. Comparison of DNA Quantification Methods for Next Generation Sequencing.

    Science.gov (United States)

    Robin, Jérôme D; Ludlow, Andrew T; LaRanger, Ryan; Wright, Woodring E; Shay, Jerry W

    2016-04-06

    Next Generation Sequencing (NGS) is a powerful tool that depends on loading a precise amount of DNA onto a flowcell. NGS strategies have expanded our ability to investigate genomic phenomena by referencing mutations in cancer and diseases through large-scale genotyping, developing methods to map rare chromatin interactions (4C; 5C and Hi-C) and identifying chromatin features associated with regulatory elements (ChIP-seq, Bis-Seq, ChiA-PET). While many methods are available for DNA library quantification, there is no unambiguous gold standard. Most techniques use PCR to amplify DNA libraries to obtain sufficient quantities for optical density measurement. However, increased PCR cycles can distort the library's heterogeneity and prevent the detection of rare variants. In this analysis, we compared new digital PCR technologies (droplet digital PCR; ddPCR, ddPCR-Tail) with standard methods for the titration of NGS libraries. The ddPCR-Tail method is comparable to qPCR and fluorometry (QuBit) and allows sensitive quantification by analysis of barcode repartition after sequencing of multiplexed samples. This study provides a direct comparison between quantification methods throughout a complete sequencing experiment and provides the impetus to use ddPCR-based quantification for improvement of NGS quality.

  7. Definition of a new thermal contrast and pulse correction for defect quantification in pulsed thermography

    Science.gov (United States)

    Benítez, Hernán D.; Ibarra-Castanedo, Clemente; Bendada, AbdelHakim; Maldague, Xavier; Loaiza, Humberto; Caicedo, Eduardo

    2008-01-01

    It is well known that the methods of thermographic non-destructive testing based on the thermal contrast are strongly affected by non-uniform heating at the surface. Hence, the results obtained from these methods depend considerably on the chosen reference point. The differential absolute contrast (DAC) method was developed to eliminate the need for a reference point by defining the thermal contrast with respect to an ideal sound area. Although very useful at early times, DAC accuracy decreases when the heat front approaches the sample rear face. We propose a new DAC version that explicitly introduces the sample thickness using the thermal quadrupoles theory, and show that the new DAC's range of validity extends to long times while preserving its validity at short times. This new contrast is used for defect quantification in composite, Plexiglas™ and aluminum samples.
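
    The classical DAC that the paper extends can be written down from the 1-D semi-infinite solution T(t) ∝ 1/√t, giving DAC(t) = T(t) − √(t′/t)·T(t′); a sketch on synthetic cooling curves (the defect term is invented):

        import numpy as np

        t = np.linspace(0.05, 2.0, 40)           # s, frame times after pulse
        t_prime = t[2]                            # early reference time
        sound = 1.0 / np.sqrt(np.pi * t)          # defect-free cooling curve
        defect = sound + 0.15 * np.exp(-1.0 / t)  # heat trapped above a defect

        for T in (sound, defect):
            dac = T - np.sqrt(t_prime / t) * T[2]
            print("late-time DAC:", round(dac[-1], 4))
        # sound area -> ~0, defect area -> clearly nonzero contrast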

  8. Collaborative framework for PIV uncertainty quantification: comparative assessment of methods

    International Nuclear Information System (INIS)

    Sciacchitano, Andrea; Scarano, Fulvio; Neal, Douglas R; Smith, Barton L; Warner, Scott O; Vlachos, Pavlos P; Wieneke, Bernhard

    2015-01-01

    A posteriori uncertainty quantification of particle image velocimetry (PIV) data is essential to obtain accurate estimates of the uncertainty associated with a given experiment. This is particularly relevant when measurements are used to validate computational models or in design and decision processes. In spite of the importance of the subject, the first PIV uncertainty quantification (PIV-UQ) methods have been developed only in the last three years. The present work is a comparative assessment of four approaches recently proposed in the literature: the uncertainty surface method (Timmins et al 2012), the particle disparity approach (Sciacchitano et al 2013), the peak ratio criterion (Charonko and Vlachos 2013) and the correlation statistics method (Wieneke 2015). The analysis is based upon experiments conducted for this specific purpose, where several measurement techniques are employed simultaneously. The performances of the above approaches are surveyed across different measurement conditions and flow regimes. (paper)

  9. Carotid wall volume quantification from magnetic resonance images using deformable model fitting and learning-based correction of systematic errors

    International Nuclear Information System (INIS)

    Hameeteman, K; Niessen, W J; Klein, S; Van 't Klooster, R; Selwaness, M; Van der Lugt, A; Witteman, J C M

    2013-01-01

    We present a method for carotid vessel wall volume quantification from magnetic resonance imaging (MRI). The method combines lumen and outer wall segmentation based on deformable model fitting with a learning-based segmentation correction step. After selecting two initialization points, the vessel wall volume in a region around the bifurcation is automatically determined. The method was trained on eight datasets (16 carotids) from a population-based study in the elderly for which one observer manually annotated both the lumen and outer wall. An evaluation was carried out on a separate set of 19 datasets (38 carotids) from the same study for which two observers made annotations. Wall volume and normalized wall index measurements resulting from the manual annotations were compared to the automatic measurements. Our experiments show that the automatic method performs comparably to the manual measurements. All image data and annotations used in this study together with the measurements are made available through the website http://ergocar.bigr.nl. (paper)

  10. Toponomics method for the automated quantification of membrane protein translocation.

    Science.gov (United States)

    Domanova, Olga; Borbe, Stefan; Mühlfeld, Stefanie; Becker, Martin; Kubitz, Ralf; Häussinger, Dieter; Berlage, Thomas

    2011-09-19

    Intra-cellular and inter-cellular protein translocation can be observed by microscopic imaging of tissue sections prepared immunohistochemically. A manual densitometric analysis is time-consuming, subjective and error-prone. An automated quantification is faster, more reproducible, and should yield results comparable to manual evaluation. The automated method presented here was developed on rat liver tissue sections to study the translocation of bile salt transport proteins in hepatocytes. For validation, the cholestatic liver state was compared to the normal biological state. An automated quantification method was developed to analyze the translocation of membrane proteins and evaluated in comparison to an established manual method. Firstly, regions of interest (membrane fragments) are identified in confocal microscopy images. Further, densitometric intensity profiles are extracted orthogonally to membrane fragments, following the direction from the plasma membrane to cytoplasm. Finally, several different quantitative descriptors were derived from the densitometric profiles and were compared regarding their statistical significance with respect to the transport protein distribution. Stable performance, robustness and reproducibility were tested using several independent experimental datasets. A fully automated workflow for the information extraction and statistical evaluation has been developed and produces robust results. New descriptors for the intensity distribution profiles were found to be more discriminative, i.e. more significant, than those used in previous research publications for the translocation quantification. The slow manual calculation can be substituted by the fast and unbiased automated method.
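
    The profile-extraction step can be sketched with bilinear sampling along a membrane normal; the image, point, and normal below are dummies, and the published workflow adds fragment detection and statistics on top.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def profile(image, point_yx, normal_yx, length_px=10, n_samples=21):
            # Sample intensities from the membrane point into the cytoplasm.
            t = np.linspace(0.0, length_px, n_samples)
            ys = point_yx[0] + t * normal_yx[0]
            xs = point_yx[1] + t * normal_yx[1]
            return map_coordinates(image, [ys, xs], order=1)

        img = np.add.outer(np.zeros(64), np.linspace(0, 1, 64))  # toy gradient
        print(profile(img, point_yx=(32, 10), normal_yx=(0.0, 1.0)))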

  11. Improved Method for PD-Quantification in Power Cables

    DEFF Research Database (Denmark)

    Holbøll, Joachim T.; Villefrance, Rasmus; Henriksen, Mogens

    1999-01-01

    In this paper, a method is described for improved quantification of partial discharges (PD) in power cables. The method is suitable for PD-detection and location systems in the MHz-range, where pulse attenuation and distortion along the cable cannot be neglected. The system transfer function...... was calculated and measured in order to form the basis for magnitude calculation after each measurement. --- Limitations and capabilities of the method will be discussed and related to relevant field applications of high-frequency PD-measurements. --- Methods for increased signal/noise ratio are easily implemented...

  12. Study of the orbital correction method

    International Nuclear Information System (INIS)

    Meserve, R.A.

    1976-01-01

    Two approximations of interest in atomic, molecular, and solid state physics are explored. First, a procedure for calculating an approximate Green's function for use in perturbation theory is derived. In lowest order it is shown to be equivalent to treating the contribution of the bound states of the unperturbed Hamiltonian exactly and representing the continuum contribution by plane waves orthogonalized to the bound states (OPW's). If the OPW approximation were inadequate, the procedure allows for systematic improvement of the approximation. For comparison purposes an exact but more limited procedure for performing second-order perturbation theory, one that involves solving an inhomogeneous differential equation, is also derived. Second, the Kohn-Sham many-electron formalism is discussed and formulae are derived and discussed for implementing perturbation theory within the formalism so as to find corrections to the total energy of a system through second order in the perturbation. Both approximations were used in the calculation of the polarizability of helium, neon, and argon. The calculation included direct and exchange effects by the Kohn-Sham method and full self-consistency was demanded. The results using the differential equation method yielded excellent agreement with the coupled Hartree-Fock results of others and with experiment. Moreover, the OPW approximation yielded satisfactory comparison with the results of calculation by the exact differential equation method. Finally, both approximations were used in the calculation of properties of hydrogen fluoride and methane. The appendix formulates a procedure using group theory and the internal coordinates of a molecular system to simplify the calculation of vibrational frequencies

  13. Quantification of Hepatic Steatosis with T1-independent, T2*-corrected MR Imaging with Spectral Modeling of Fat: Blinded Comparison with MR Spectroscopy

    Science.gov (United States)

    Hines, Catherine D. G.; Hamilton, Gavin; Sirlin, Claude B.; McKenzie, Charles A.; Yu, Huanzhou; Brittain, Jean H.; Reeder, Scott B.

    2011-01-01

    Purpose: To prospectively compare an investigational version of a complex-based chemical shift–based fat fraction magnetic resonance (MR) imaging method with MR spectroscopy for the quantification of hepatic steatosis. Materials and Methods: This study was approved by the institutional review board and was HIPAA compliant. Written informed consent was obtained before all studies. Fifty-five patients (31 women, 24 men; age range, 24–71 years) were prospectively imaged at 1.5 T with quantitative MR imaging and single-voxel MR spectroscopy, each within a single breath hold. The effects of T2* correction, spectral modeling of fat, and magnitude fitting for eddy current correction on fat quantification with MR imaging were investigated by reconstructing fat fraction images from the same source data with different combinations of error correction. Single-voxel T2-corrected MR spectroscopy was used to measure fat fraction and served as the reference standard. All MR spectroscopy data were postprocessed at a separate institution by an MR physicist who was blinded to MR imaging results. Fat fractions measured with MR imaging and MR spectroscopy were compared statistically to determine the correlation (r2), and the slope and intercept as measures of agreement between MR imaging and MR spectroscopy fat fraction measurements, to determine whether MR imaging can help quantify fat, and examine the importance of T2* correction, spectral modeling of fat, and eddy current correction. Two-sided t tests (significance level, P = .05) were used to determine whether estimated slopes and intercepts were significantly different from 1.0 and 0.0, respectively. Sensitivity and specificity for the classification of clinically significant steatosis were evaluated. Results: Overall, there was excellent correlation between MR imaging and MR spectroscopy for all reconstruction combinations. However, agreement was only achieved when T2* correction, spectral modeling of fat, and magnitude
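
    The quantity both modalities estimate is the fat fraction FF = F/(W + F); as a minimal illustration with invented water/fat magnitudes:

        import numpy as np

        water = np.array([[900.0, 850.0], [700.0, 500.0]])
        fat   = np.array([[100.0, 150.0], [300.0, 500.0]])
        ff = 100.0 * fat / (water + fat)     # percent fat fraction per voxel
        print(ff)                            # -> 10, 15, 30, 50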

  14. A New Dyslexia Reading Method and Visual Correction Position Method.

    Science.gov (United States)

    Manilla, George T; de Braga, Joe

    2017-01-01

    Pediatricians and educators may interact daily with several dyslexic patients or students. One dyslexic author accidentally developed a personal, effective, corrective reading method. Its effectiveness was evaluated in 3 schools. One school evaluated it with 8 special education students as a demonstration. Over 3 months, one student grew one-third of a year in reading level, 3 grew 1 year, and 4 grew 2 years. In another school, 6 sixth-, seventh-, and eighth-grade classroom teachers followed 45 treated dyslexic students. They all excelled and progressed beyond their classroom peers in 4 months. With cyclovergence upper gaze, dyslexic reading problems disappeared for 10 dyslexics at one of the Positional Reading Arc positions of 30°, 60°, 90°, 120°, or 150°. Testing the Positional Reading Arc on 112 students of the second through eighth grades showed that words read per minute, reading errors, and comprehension all improved. Dyslexia was visually corrected by use of a new reading method and Positional Reading Arc positions.

  15. Impact of attenuation correction strategies on the quantification of High Resolution Research Tomograph PET studies

    International Nuclear Information System (INIS)

    Velden, Floris H P van; Kloet, Reina W; Berckel, Bart N M van; Molthoff, Carla F M; Jong, Hugo W A M de; Lammertsma, Adriaan A; Boellaard, Ronald

    2008-01-01

    In this study, the quantitative accuracy of different attenuation correction strategies presently available for the High Resolution Research Tomograph (HRRT) was investigated. These attenuation correction methods differ in the reconstruction and processing (segmentation) algorithms used for generating a μ-image from measured 2D transmission scans, an intermediate step in the generation of 3D attenuation correction factors. Available methods are maximum-a-posteriori reconstruction (MAP-TR), unweighted OSEM (UW-OSEM) and NEC-TR, which transforms sinogram values back to their noise equivalent counts (NEC) to restore the Poisson distribution. All methods can be applied with or without μ-image segmentation. MAP-TR, however, uses a μ-histogram as a prior during reconstruction. All possible strategies were evaluated using phantoms of various sizes, simulating preclinical and clinical situations. Furthermore, effects of emission contamination of the transmission scan on the accuracy of various attenuation correction strategies were studied. Finally, the accuracy of various attenuation correction strategies and its relative impact on the reconstructed activity concentration (AC) were evaluated using small animal and human brain studies. For small structures, MAP-TR with human brain priors showed smaller differences in μ-values for transmission scans with and without emission contamination (<8%) than the other methods (<26%). In addition, it showed the best agreement with true AC (deviation <4.5%). A specific prior designed to take into account the presence of small animal fixation devices only very slightly improved AC precision to 4.3%. All methods scaled μ-values of a large homogeneous phantom to within 4% of the water peak, but MAP-TR provided the most accurate AC after reconstruction. However, for clinical data MAP-TR using the default prior settings overestimated the thickness of the skull, resulting in overestimations of μ-values in regions near the skull and thus in incorrect

  16. Quantification of hepatic steatosis with T1-independent, T2-corrected MR imaging with spectral modeling of fat: blinded comparison with MR spectroscopy.

    Science.gov (United States)

    Meisamy, Sina; Hines, Catherine D G; Hamilton, Gavin; Sirlin, Claude B; McKenzie, Charles A; Yu, Huanzhou; Brittain, Jean H; Reeder, Scott B

    2011-03-01

    To prospectively compare an investigational version of a complex-based chemical shift-based fat fraction magnetic resonance (MR) imaging method with MR spectroscopy for the quantification of hepatic steatosis. This study was approved by the institutional review board and was HIPAA compliant. Written informed consent was obtained before all studies. Fifty-five patients (31 women, 24 men; age range, 24-71 years) were prospectively imaged at 1.5 T with quantitative MR imaging and single-voxel MR spectroscopy, each within a single breath hold. The effects of T2 correction, spectral modeling of fat, and magnitude fitting for eddy current correction on fat quantification with MR imaging were investigated by reconstructing fat fraction images from the same source data with different combinations of error correction. Single-voxel T2-corrected MR spectroscopy was used to measure fat fraction and served as the reference standard. All MR spectroscopy data were postprocessed at a separate institution by an MR physicist who was blinded to MR imaging results. Fat fractions measured with MR imaging and MR spectroscopy were compared statistically to determine the correlation (r²), and the slope and intercept as measures of agreement between MR imaging and MR spectroscopy fat fraction measurements, to determine whether MR imaging can help quantify fat, and to examine the importance of T2 correction, spectral modeling of fat, and eddy current correction. Two-sided t tests (significance level, P = .05) were used to determine whether estimated slopes and intercepts were significantly different from 1.0 and 0.0, respectively. Sensitivity and specificity for the classification of clinically significant steatosis were evaluated. Overall, there was excellent correlation between MR imaging and MR spectroscopy for all reconstruction combinations. However, agreement was only achieved when T2 correction, spectral modeling of fat, and magnitude fitting for eddy current correction were used.

  17. High SNR Acquisitions Improve the Repeatability of Liver Fat Quantification Using Confounder-corrected Chemical Shift-encoded MR Imaging

    Science.gov (United States)

    Motosugi, Utaroh; Hernando, Diego; Wiens, Curtis; Bannas, Peter; Reeder, Scott. B

    2017-01-01

    Purpose: To determine whether high signal-to-noise ratio (SNR) acquisitions improve the repeatability of liver proton density fat fraction (PDFF) measurements using confounder-corrected chemical shift-encoded magnetic resonance (MR) imaging (CSE-MRI). Materials and Methods: Eleven fat-water phantoms were scanned with 8 different protocols with varying SNR. After repositioning the phantoms, the same scans were repeated to evaluate the test-retest repeatability. Next, an in vivo study was performed with 20 volunteers and 28 patients scheduled for liver magnetic resonance imaging (MRI). Two CSE-MRI protocols with standard and high SNR were repeated to assess test-retest repeatability. MR spectroscopy (MRS)-based PDFF was acquired as a standard of reference. The standard deviation (SD) of the difference (Δ) of PDFF measured in the two repeated scans was defined to ascertain repeatability. The correlation between PDFF of CSE-MRI and MRS was calculated to assess accuracy. The SD of Δ and correlation coefficients of the two protocols (standard- and high-SNR) were compared using an F-test and a t-test, respectively. Two reconstruction algorithms (complex-based and magnitude-based) were used for both the phantom and in vivo experiments. Results: The phantom study demonstrated that higher SNR improved the repeatability for both complex- and magnitude-based reconstruction. Similarly, the in vivo study demonstrated that the repeatability of the high-SNR protocol (SD of Δ = 0.53 for complex- and 0.85 for magnitude-based fit) was significantly higher than that of the standard-SNR protocol (0.77 for complex-based fit; for the magnitude-based fit, P = 0.003). No significant difference was observed in the accuracy between standard- and high-SNR protocols. Conclusion: Higher SNR improves the repeatability of fat quantification using confounder-corrected CSE-MRI. PMID:28190853

  18. Nowcasting Surface Meteorological Parameters Using Successive Correction Method

    National Research Council Canada - National Science Library

    Henmi, Teizi

    2002-01-01

    The successive correction method was examined and evaluated statistically as a nowcasting method for surface meteorological parameters including temperature, dew point temperature, and horizontal wind vector components...
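
    A single Cressman-style pass, the classic form of successive correction, spreads observation increments onto grid points with distance-dependent weights; the radius and data below are invented:

        import numpy as np

        def cressman_pass(grid_xy, background, obs_xy, innovations, R=50.0):
            # innovations = observed value minus background at the obs site.
            analysis = background.copy()
            for k, (gx, gy) in enumerate(grid_xy):
                d2 = ((obs_xy - (gx, gy)) ** 2).sum(axis=1)
                w = np.clip((R * R - d2) / (R * R + d2), 0.0, None)
                if w.sum() > 0.0:
                    analysis[k] += (w * innovations).sum() / w.sum()
            return analysis

        grid = np.array([[0.0, 0.0], [30.0, 0.0], [120.0, 0.0]])
        bg = np.array([15.0, 15.0, 15.0])        # background temperature
        obs = np.array([[10.0, 0.0], [40.0, 0.0]])
        innov = np.array([2.0, -1.0])
        print(cressman_pass(grid, bg, obs, innov))
        # nearby points adjust; the distant grid point stays at background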

  19. A method to correct coordinate distortion in EBSD maps

    International Nuclear Information System (INIS)

    Zhang, Y.B.; Elbrønd, A.; Lin, F.X.

    2014-01-01

    Drift during electron backscatter diffraction mapping leads to coordinate distortions in resulting orientation maps, which affects, in some cases significantly, the accuracy of analysis. A method, thin plate spline, is introduced and tested to correct such coordinate distortions in the maps after the electron backscatter diffraction measurements. The accuracy of the correction as well as theoretical and practical aspects of using the thin plate spline method is discussed in detail. By comparing with other correction methods, it is shown that the thin plate spline method is most efficient to correct different local distortions in the electron backscatter diffraction maps. - Highlights: • A new method is suggested to correct nonlinear spatial distortion in EBSD maps. • The method corrects EBSD maps more precisely than presently available methods. • Errors less than 1–2 pixels are typically obtained. • Direct quantitative analysis of dynamic data are available after this correction
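
    SciPy happens to ship a thin plate spline kernel, so the correction can be sketched directly: a few control points with known true positions define a smooth map applied to all coordinates. Control-point values here are invented, not from the paper.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        # Distorted coordinates of control points (as measured in the map)
        measured = np.array([[0.0, 0.0], [10.0, 0.4], [0.3, 10.0],
                             [10.5, 10.2], [5.2, 5.1]])
        # Where those points should be (e.g. from a reference micrograph)
        true = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0],
                         [10.0, 10.0], [5.0, 5.0]])

        tps = RBFInterpolator(measured, true, kernel="thin_plate_spline")
        grid = np.stack(np.meshgrid(np.arange(11.0), np.arange(11.0)),
                        -1).reshape(-1, 2)
        corrected = tps(grid)     # apply the spline to every map coordinate
        print(corrected[:3])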

  20. Overview of hybrid subspace methods for uncertainty quantification, sensitivity analysis

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Bang, Youngsuk; Wang, Congjian

    2013-01-01

    Highlights: ► We overview the state-of-the-art in uncertainty quantification and sensitivity analysis. ► We overview new developments in above areas using hybrid methods. ► We give a tutorial introduction to above areas and the new developments. ► Hybrid methods address the explosion in dimensionality in nonlinear models. ► Representative numerical experiments are given. -- Abstract: The role of modeling and simulation has been heavily promoted in recent years to improve understanding of complex engineering systems. To realize the benefits of modeling and simulation, concerted efforts in the areas of uncertainty quantification and sensitivity analysis are required. The manuscript intends to serve as a pedagogical presentation of the material to young researchers and practitioners with little background on the subjects. We believe this is important as the role of these subjects is expected to be integral to the design, safety, and operation of existing as well as next generation reactors. In addition to covering the basics, an overview of the current state-of-the-art will be given with particular emphasis on the challenges pertaining to nuclear reactor modeling. The second objective will focus on presenting our own development of hybrid subspace methods intended to address the explosion in the computational overhead required when handling real-world complex engineering systems.

  1. PET/MR imaging of bone lesions - implications for PET quantification from imperfect attenuation correction

    International Nuclear Information System (INIS)

    Samarin, Andrei; Burger, Cyrill; Crook, David W.; Burger, Irene A.; Schmid, Daniel T.; Schulthess, Gustav K. von; Kuhn, Felix P.; Wollenweber, Scott D.

    2012-01-01

    Accurate attenuation correction (AC) is essential for quantitative analysis of PET tracer distribution. In MR, the lack of cortical bone signal makes bone segmentation difficult and may require implementation of special sequences. The purpose of this study was to evaluate the need for accurate bone segmentation in MR-based AC for whole-body PET/MR imaging. In 22 patients undergoing sequential PET/CT and 3-T MR imaging, modified CT AC maps were produced by replacing pixels with values of >100 HU, representing mostly bone structures, by pixels with a constant value of 36 HU corresponding to soft tissue, thereby simulating current MR-derived AC maps. A total of 141 FDG-positive osseous lesions and 50 soft-tissue lesions adjacent to bones were evaluated. The mean standardized uptake value (SUVmean) was measured in each lesion in PET images reconstructed once using the standard AC maps and once using the modified AC maps. Subsequently, the errors in lesion tracer uptake for the modified PET images were calculated using the standard PET image as a reference. Substitution of bone by soft tissue values in AC maps resulted in an underestimation of tracer uptake in osseous and soft-tissue lesions adjacent to bones of 11.2 ± 5.4 % (range 1.5-30.8 %) and 3.2 ± 1.7 % (range 0.2-4 %), respectively. Analysis of the spine and pelvic osseous lesions revealed a substantial dependence of the error on lesion composition. For predominantly sclerotic spine lesions, the mean underestimation was 15.9 ± 3.4 % (range 9.9-23.5 %) and for osteolytic spine lesions, 7.2 ± 1.7 % (range 4.9-9.3 %), respectively. CT data in which bone is treated as soft tissue, as is currently done in MR-based maps for PET AC, lead to a substantial underestimation of tracer uptake in bone lesions that depends on lesion composition, the largest error being seen in sclerotic lesions. Therefore, depiction of cortical bone and other calcified areas in MR AC maps is necessary for accurate quantification of tracer uptake.

  2. Reliability and discriminatory power of methods for dental plaque quantification

    Directory of Open Access Journals (Sweden)

    Daniela Prócida Raggio

    2010-04-01

    OBJECTIVE: This in situ study evaluated the discriminatory power and reliability of methods of dental plaque quantification and the relationship between visual indices (VI) and fluorescence camera (FC) readings to detect plaque. MATERIAL AND METHODS: Six volunteers used palatal appliances with six bovine enamel blocks presenting different stages of plaque accumulation. The presence of plaque with and without disclosing was assessed using VI. Images were obtained with FC and digital camera in both conditions. The area covered by plaque was assessed. Examinations were done by two independent examiners. Data were analyzed by Kruskal-Wallis and Kappa tests to compare different conditions of samples and to assess the inter-examiner reproducibility. RESULTS: Some methods presented adequate reproducibility. The Turesky index and the assessment of the area covered by disclosed plaque in the FC images presented the highest discriminatory powers. CONCLUSION: The Turesky index and FC images with disclosing present good reliability and discriminatory power in quantifying dental plaque.

  3. New decoding methods of interleaved burst error-correcting codes

    Science.gov (United States)

    Nakano, Y.; Kasahara, M.; Namekawa, T.

    1983-04-01

    A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high capability of burst error correction with less decoding delay. By generalizing this method it is possible to obtain a probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using the syndrome correlation of the subcodes, which are interleaved m-fold burst-error-detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
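
    The benefit of interleaving itself is easy to demonstrate: transmitting column-wise scatters a channel burst into at most one error per subcode, which each subcode can then correct. A toy sketch (no actual decoder):

        import numpy as np

        depth, length = 4, 8                 # 4 interleaved subcodes
        data = np.arange(depth * length) % 2  # stand-in codeword symbols
        table = data.reshape(depth, length)   # each row = one subcode word

        tx = table.T.reshape(-1).copy()       # transmit column by column
        tx[10:13] ^= 1                        # burst of 3 consecutive errors

        rx = tx.reshape(length, depth).T      # de-interleave back into rows
        errors_per_row = (rx != table).sum(axis=1)
        print(errors_per_row)                 # -> at most 1 error per subcode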

  4. Drug quantification in turbid media by fluorescence imaging combined with light-absorption correction using white Monte Carlo simulations

    DEFF Research Database (Denmark)

    Xie, Haiyan; Liu, Haichun; Svenmarker, Pontus

    2011-01-01

    Accurate quantification of photosensitizers is in many cases a critical issue in photodynamic therapy. As a noninvasive and sensitive tool, fluorescence imaging has attracted particular interest for quantification in pre-clinical research. However, due to the absorption of excitation and emission...... in vivo by the fluorescence imaging technique. In this paper we present a novel approach to compensate for the light absorption in homogeneous turbid media both for the excitation and emission light, utilizing time-resolved fluorescence white Monte Carlo simulations combined with the Beer-Lambert law......-absorption correction and absolute fluorophore concentrations. These results suggest that the technique potentially provides the means to quantify the fluorophore concentration from fluorescence images. © 2011 Society of Photo-Optical Instrumentation Engineers (SPIE)....
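
    In a homogeneous, purely absorbing toy medium (a drastic simplification of the Monte Carlo treatment in the paper), the compensation amounts to dividing by the combined excitation/emission transmission; coefficients and path lengths below are invented:

        import math

        def corrected_fluorescence(detected, mu_a_ex, mu_a_em, d_ex_cm, d_em_cm):
            # Beer-Lambert transmission at excitation and emission wavelengths.
            transmission = (math.exp(-mu_a_ex * d_ex_cm)
                            * math.exp(-mu_a_em * d_em_cm))
            return detected / transmission   # ~ proportional to concentration

        print(corrected_fluorescence(1200.0, mu_a_ex=0.5, mu_a_em=0.3,
                                     d_ex_cm=1.0, d_em_cm=1.0))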

  5. Methods of orbit correction system optimization

    International Nuclear Information System (INIS)

    Chao, Yu-Chiu.

    1997-01-01

    Extracting optimal performance out of an orbit correction system is an important component of accelerator design and evaluation. The question of effectiveness vs. economy, however, is not always easily tractable. This is especially true in cases where betatron function magnitude and phase advance do not have smooth or periodic dependencies on the physical distance. In this report a program is presented using linear algebraic techniques to address this problem. A systematic recipe is given, supported with quantitative criteria, for arriving at an orbit correction system design with the optimal balance between performance and economy. The orbit referred to in this context can be generalized to include angle, path length, orbit effects on the optical transfer matrix, and simultaneous effects on multiple pass orbits

  6. Effect of Attenuation Correction on Regional Quantification Between PET/MR and PET/CT

    DEFF Research Database (Denmark)

    Teuho, Jarmo; Johansson, Jarkko; Linden, Jani

    2016-01-01

    UNLABELLED: A spatial bias in brain PET/MR exists compared with PET/CT, because of MR-based attenuation correction. We performed an evaluation among 4 institutions, 3 PET/MR systems, and 4 PET/CT systems using an anthropomorphic brain phantom, hypothesizing that the spatial bias would be minimized....../MR systems, CTAC was applied as the reference method for attenuation correction. RESULTS: With CTAC, visual and quantitative differences between PET/MR and PET/CT systems were minimized. Intersystem variation between institutions was +3.42% to -3.29% in all VOIs for PET/CT and +2.15% to -4.50% in all VOIs...... for PET/MR. PET/MR systems differed by +2.34% to -2.21%, +2.04% to -2.08%, and -1.77% to -5.37% when compared with a PET/CT system at each institution, and these differences were not significant (P ≥ 0.05). CONCLUSION: Visual and quantitative differences between PET/MR and PET/CT systems can be minimized...

  8. Local defect correction for boundary integral equation methods

    NARCIS (Netherlands)

    Kakuba, G.; Anthonissen, M.J.H.

    2014-01-01

    The aim in this paper is to develop a new local defect correction approach to gridding for problems with localised regions of high activity in the boundary element method. The technique of local defect correction has been studied for other methods such as finite difference methods and finite volume

  9. Attenuation correction method for single photon emission CT

    Energy Technology Data Exchange (ETDEWEB)

    Morozumi, Tatsuru; Nakajima, Masato [Keio Univ., Yokohama (Japan). Faculty of Science and Technology; Ogawa, Koichi; Yuta, Shinichi

    1983-10-01

    A correction method (Modified Correction Matrix method) is proposed that implements iterative correction by exactly measuring the attenuation constant distribution in a test body, calculating a correction factor for every picture element, and then multiplying the image by these factors. Computer simulation comparing the results showed that the proposed method was more effective than the conventional correction matrix method, specifically when applied to test bodies in which the attenuation constant changes strongly. Since actual measurement data always contain quantum noise, the noise was taken into account in the simulation; the correction effect remained large even in the presence of noise. To verify its clinical effectiveness, an experiment using an acrylic phantom was also carried out. As a result, the recovery of image quality in regions with small attenuation constants was remarkable compared with the conventional method.
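
    A first-order Chang-type correction gives the flavor of such per-pixel multiplicative factors: average the attenuation survival probability over projection angles and invert it. The 2-D ray march below is a minimal sketch with an invented uniform μ-map, not the paper's modified matrix.

        import numpy as np

        def chang_factors(mu, n_angles=64, step=0.5):
            ny, nx = mu.shape
            ys, xs = np.mgrid[0:ny, 0:nx]
            acf = np.zeros((ny, nx))
            for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
                dy, dx = np.sin(theta), np.cos(theta)
                path = np.zeros((ny, nx))
                y, x = ys.astype(float), xs.astype(float)
                alive = np.ones((ny, nx), dtype=bool)
                while alive.any():
                    iy, ix = np.rint(y).astype(int), np.rint(x).astype(int)
                    alive = (iy >= 0) & (iy < ny) & (ix >= 0) & (ix < nx)
                    # Accumulate the attenuation line integral along each ray.
                    path[alive] += step * mu[iy[alive], ix[alive]]
                    y, x = y + dy * step, x + dx * step
                acf += np.exp(-path)        # survival factor for this angle
            return n_angles / acf           # multiply the reconstruction by this

        mu = np.full((32, 32), 0.015)       # uniform attenuation map (per pixel)
        print(np.round(chang_factors(mu)[16, ::8], 2))  # larger toward center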

  10. Quantification of Iodine-123-FP-CIT SPECT with a resolution-independent method

    International Nuclear Information System (INIS)

    Dobbeleir, A.A.; Ham, H.R.; Hambye, A.E.; Vervaet, A.M.

    2005-01-01

    Accurate quantification of small-sized objects by SPECT is hampered by the partial volume effect. The present work evaluates the magnitude of this phenomenon with Iodine-123 in phantom studies, and presents a resolution-independent method to quantify striatal I-123 FP-CIT uptake in patients. First, five syringes with internal diameters varying between 9 and 29 mm and an anthropomorphic striatal phantom were filled with known concentrations of Iodine-123 and imaged by SPECT using different collimators and radii of rotation. Data were processed with and without scatter correction. From the measured activities, calibration factors were calculated for each specific collimator. Then a resolution-independent method for FP-CIT quantification using large regions of interest was developed and validated in 34 human studies (controls and patients) acquired in 2 different hospitals, by comparing its results to those obtained by a semi-quantitative striatal-to-occipital analysis. Taking the injected activity and decay into account, the measured counts/volume could be converted into absolute tracer concentrations. For the fan-beam, high-resolution and medium-energy collimators, the measured maximum activity in comparison to the 29 mm-diameter syringe was respectively 38%, 16% and 9% for the 9 mm-diameter syringe and 82%, 80% and 30% for the 16 mm syringe, and was not significantly modified after scatter correction. For the anthropomorphic phantom, the error in measurement in % of the true concentration ranged between 0.3-9.5% and was collimator dependent. Medium-energy collimators yielded the most homogeneous results. In the human studies, inter-observer variability was 11.4% for the striatal-to-occipital ratio and 3.1% for the resolution-independent method, with correlation coefficients >0.8 between both. The resolution-independent method was 89% sensitive and 100% specific in separating the patients without and with abnormal FP-CIT uptake (accuracy: 94%).
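
    The principle of the large-ROI approach can be demonstrated in a few lines: blurring conserves total counts, so a region big enough to capture spill-out recovers the activity a tight ROI loses. Sizes and the Gaussian stand-in for the PSF are invented.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        img = np.zeros((64, 64)); img[30:34, 30:34] = 10.0  # small hot object
        for fwhm in (2.0, 6.0, 10.0):
            blurred = gaussian_filter(img, sigma=fwhm / 2.355)
            tight = blurred[30:34, 30:34].sum()   # PVE-afflicted ROI
            large = blurred[18:46, 18:46].sum()   # captures the spill-out
            print(f"fwhm={fwhm}: tight ROI {tight:6.1f}, large ROI {large:6.1f}")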

  11. Robust sleep quality quantification method for a personal handheld device.

    Science.gov (United States)

    Shin, Hangsik; Choi, Byunghun; Kim, Doyoon; Cho, Jaegeol

    2014-06-01

    The purpose of this study was to develop and validate a novel method for sleep quality quantification using personal handheld devices. The proposed method used 3- or 6-axis signals, including acceleration and angular velocity, obtained from built-in sensors in a smartphone, and applied a real-time wavelet denoising technique to minimize the nonstationary noise. Sleep or wake status was decided on each axis, and the totals were finally summed to calculate sleep efficiency (SE), regarded as sleep quality in general. A sleep experiment was carried out with 14 subjects to evaluate the performance of the proposed method. An experimental protocol was designed for comparative analysis. The activity during sleep was recorded not only by the proposed method but also by well-known commercial applications simultaneously; moreover, activity was recorded on different mattresses and locations to verify the reliability in practical use. Every calculated SE was compared with the SE of a clinically certified medical device, the Philips (Amsterdam, The Netherlands) Actiwatch. In these experiments, the proposed method proved its reliability in quantifying sleep quality. Compared with the Actiwatch, the accuracy and average bias error of SE calculated by the proposed method were 96.50% and -1.91%, respectively. The proposed method outperformed the comparative applications by at least 11.41% in average accuracy and at least 6.10% in average bias error; their average accuracy and average absolute bias error were 76.33% and 17.52%, respectively.
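
    A sketch of such a pipeline (with an invented synthetic recording and thresholds, using PyWavelets for the denoising step) might look like:

        import numpy as np
        import pywt

        rng = np.random.default_rng(3)
        fs, hours = 1, 8                        # 1 Hz accelerometer, 8 h
        n = fs * 3600 * hours
        motion = rng.normal(0, 0.01, n)         # quiet sleep baseline
        motion[n // 2 : n // 2 + 600] += rng.normal(0, 0.5, 600)  # awake bout

        # Wavelet denoising: soft-threshold the detail coefficients.
        coeffs = pywt.wavedec(motion, "db4", level=4)
        coeffs[1:] = [pywt.threshold(c, 0.05, mode="soft") for c in coeffs[1:]]
        clean = pywt.waverec(coeffs, "db4")[:n]

        # Epoch-wise sleep/wake decision and sleep efficiency.
        epoch = 60 * fs                         # 1-min epochs
        activity = np.abs(clean).reshape(-1, epoch).mean(axis=1)
        asleep = activity < 0.02
        print(f"sleep efficiency: {100 * asleep.mean():.1f}%")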

  12. Numerical Continuation Methods for Intrusive Uncertainty Quantification Studies

    Energy Technology Data Exchange (ETDEWEB)

    Safta, Cosmin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Najm, Habib N. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Phipps, Eric Todd [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-09-01

    Rigorous modeling of engineering systems relies on efficient propagation of uncertainty from input parameters to model outputs. In recent years, there has been substantial development of probabilistic polynomial chaos (PC) Uncertainty Quantification (UQ) methods, enabling studies in expensive computational models. One approach, termed "intrusive", involving reformulation of the governing equations, has been found to have superior computational performance compared to non-intrusive sampling-based methods in relevant large-scale problems, particularly in the context of emerging architectures. However, the utility of intrusive methods has been severely limited due to detrimental numerical instabilities associated with strong nonlinear physics. Previous methods for stabilizing these constructions tend to add unacceptably high computational costs, particularly in problems with many uncertain parameters. In order to address these challenges, we propose to adapt and improve numerical continuation methods for the robust time integration of intrusive PC system dynamics. We propose adaptive methods, starting with a small uncertainty for which the model has stable behavior and gradually moving to larger uncertainty where the instabilities are rampant, in a manner that provides a suitable solution.
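
    The continuation idea itself, start where the problem is easy and warm-start each harder solve from the last solution, can be shown on a scalar toy problem; the cubic test equation and step schedule are invented:

        import numpy as np

        def newton(f, df, u0, lam, tol=1e-12):
            u = u0
            for _ in range(50):
                step = f(u, lam) / df(u, lam)
                u -= step
                if abs(step) < tol:
                    return u
            raise RuntimeError("Newton failed to converge")

        f  = lambda u, lam: u**3 - u + lam   # cubic with a fold; hard for big lam
        df = lambda u, lam: 3 * u**2 - 1

        u = 1.0                              # branch near u = 1 at lam = 0
        for lam in np.linspace(0.0, 0.35, 8):  # stay below the fold at ~0.385
            u = newton(f, df, u, lam)        # warm start from previous solution
        print("solution at lam=0.35:", u)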

  13. Effect of gadolinium on hepatic fat quantification using multi-echo reconstruction technique with T2* correction and estimation

    Energy Technology Data Exchange (ETDEWEB)

    Ge, Mingmei; Wu, Bing; Liu, Zhiqin; Song, Hai; Meng, Xiangfeng; Wu, Xinhuai [The Military General Hospital of Beijing PLA, Department of Radiology, Beijing (China); Zhang, Jing [The 309th Hospital of Chinese People' s Liberation Army, Department of Radiology, Beijing (China)

    2016-06-15

    To determine whether hepatic fat quantification is affected by administration of gadolinium using a multi-echo reconstruction technique with T2* correction and estimation. Forty-eight patients underwent the investigational sequence for hepatic fat quantification at 3.0 T MRI once before and twice after administration of gadopentetate dimeglumine (0.1 mmol/kg). A one-way repeated-measures analysis of variance with pairwise comparisons was conducted to evaluate the systematic bias of fat fraction (FF) and R2* measurements between the three acquisitions. Bland-Altman plots were used to assess the agreement between pre- and post-contrast FF measurements in the liver. A P value <0.05 indicated a statistically significant difference. FF measurements of liver, spleen and spine revealed no significant systematic bias between the three measurements (P > 0.05 for all). Good agreement (95 % confidence interval) of FF measurements was demonstrated between pre-contrast and post-contrast1 (-0.49 %, 0.52 %) and post-contrast2 (-0.83 %, 0.77 %). R2* increased in liver and spleen (P = 0.039, P = 0.01) after administration of gadolinium. Despite the increased R2* in liver and spleen post-contrast, the investigational sequence can still obtain stable fat quantification. It could therefore be applied post-contrast to substantially increase the efficiency of the MR examination and also provide a backup for the occasional failure of FF measurements pre-contrast. (orig.)

  14. Effect of gadolinium on hepatic fat quantification using multi-echo reconstruction technique with T2* correction and estimation

    International Nuclear Information System (INIS)

    Ge, Mingmei; Wu, Bing; Liu, Zhiqin; Song, Hai; Meng, Xiangfeng; Wu, Xinhuai; Zhang, Jing

    2016-01-01

    To determine whether hepatic fat quantification is affected by administration of gadolinium using a multi-echo reconstruction technique with T2* correction and estimation. Forty-eight patients underwent the investigational sequence for hepatic fat quantification at 3.0T MRI once before and twice after administration of gadopentetate dimeglumine (0.1 mmol/kg). A one-way repeated-measures analysis of variance with pairwise comparisons was conducted to evaluate the systematic bias of fat fraction (FF) and R2* measurements between the three acquisitions. Bland-Altman plots were used to assess the agreement between pre- and post-contrast FF measurements in the liver. A P value <0.05 indicated a statistically significant difference. FF measurements of liver, spleen and spine revealed no significant systematic bias between the three measurements (P > 0.05 for all). Good agreement (95 % confidence interval) of FF measurements was demonstrated between pre-contrast and post-contrast1 (-0.49 %, 0.52 %) and post-contrast2 (-0.83 %, 0.77 %). R2* increased in liver and spleen (P = 0.039, P = 0.01) after administration of gadolinium. Despite the increased R2* in liver and spleen post-contrast, the investigational sequence still obtains stable fat quantification. It could therefore be applied post-contrast to substantially increase the efficiency of the MR examination and also to provide a backup for the occasional failure of FF measurements pre-contrast. (orig.)

  15. A practical procedure to improve the accuracy of radiochromic film dosimetry: an integration of a uniformity correction method and a red/blue correction method

    International Nuclear Information System (INIS)

    Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu

    2013-01-01

    It has been reported that light scattering can worsen the accuracy of dose distribution measurements made with radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and the light scattering was also evaluated, as was the efficacy of this correction method integrated with the red/blue correction method. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2, but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films, and corrected dose distribution data were subsequently created. The correction method improved the pass ratios in the dose difference evaluation by more than 10% compared with no correction. The red/blue correction method yielded a 5% improvement over the standard procedure that employs the red channel only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and has potential for routine clinical intensity modulated radiation therapy (IMRT) dose verification if the accuracy of EBT2 is required to be similar to that of EDR2. The red/blue correction method may improve the accuracy, but we recommend using it carefully, with an understanding of the characteristics of EBT2 both for the red channel alone and for the red/blue correction. (author)
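
    A minimal sketch of the red/blue idea: because the blue channel of the scanned film responds mainly to active-layer thickness while the red channel responds to both dose and thickness, the red/blue ratio suppresses thickness non-uniformity. All pixel values, channel sensitivities and the ±5% thickness variation below are synthetic assumptions, not measured film data.

        import numpy as np

        def net_od(exposed, unexposed):
            # Net optical density per colour channel from scanner pixel values.
            return np.log10(unexposed / exposed)

        rng = np.random.default_rng(0)
        thickness = 1.0 + 0.05 * rng.standard_normal((4, 4))    # +/-5% layer variation
        pre = np.full((4, 4, 3), 50000.0)                       # unexposed 16-bit scan
        post = pre.copy()
        post[..., 0] = pre[..., 0] * 10 ** (-0.40 * thickness)  # red: dose x thickness
        post[..., 2] = pre[..., 2] * 10 ** (-0.10 * thickness)  # blue: mostly thickness

        od_red = net_od(post[..., 0], pre[..., 0])
        od_blue = net_od(post[..., 2], pre[..., 2])
        response = od_red / od_blue    # thickness cancels in the ratio
        print(response.std())          # ~0: non-uniformity largely removed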

  16. [A practical procedure to improve the accuracy of radiochromic film dosimetry: an integration of a uniformity correction method and a red/blue correction method].

    Science.gov (United States)

    Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu

    2013-06-01

    It has been reported that light scattering can worsen the accuracy of dose distribution measurements made with radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and the light scattering was also evaluated, as was the efficacy of this correction method integrated with the red/blue correction method. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2, but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films, and corrected dose distribution data were subsequently created. The correction method improved the pass ratios in the dose difference evaluation by more than 10% compared with no correction. The red/blue correction method yielded a 5% improvement over the standard procedure that employs the red channel only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and has potential for routine clinical IMRT dose verification if the accuracy of EBT2 is required to be similar to that of EDR2. The red/blue correction method may improve the accuracy, but we recommend using it carefully, with an understanding of the characteristics of EBT2 both for the red channel alone and for the red/blue correction.

  17. Quantification methods of Black Carbon: Comparison of Rock-Eval analysis with traditional methods

    NARCIS (Netherlands)

    Poot, A.; Quik, J.T.K.; Veld, H.; Koelmans, A.A.

    2009-01-01

    Black Carbon (BC) quantification methods are reviewed, including new Rock-Eval 6 data on BC reference materials. BC has been reported to have major impacts on climate, human health and environmental quality. Especially for risk assessment of persistent organic pollutants (POPs) it is important to

  18. Methods for modeling and quantification in functional imaging by positron emissions tomography and magnetic resonance imaging

    International Nuclear Information System (INIS)

    Costes, Nicolas

    2017-01-01

    This report presents experiences and researches in the field of in vivo medical imaging by positron emission tomography (PET) and magnetic resonance imaging (MRI). In particular, advances in terms of reconstruction, quantification and modeling in PET are described. The validation of processing and analysis methods is supported by the creation of data by simulation of the imaging process in PET. The recent advances of combined PET/MRI clinical cameras, allowing simultaneous acquisition of molecular/metabolic PET information, and functional/structural MRI information opens the door to unique methodological innovations, exploiting spatial alignment and simultaneity of the PET and MRI signals. It will lead to an increase in accuracy and sensitivity in the measurement of biological phenomena. In this context, the developed projects address new methodological issues related to quantification, and to the respective contributions of MRI or PET information for a reciprocal improvement of the signals of the two modalities. They open perspectives for combined analysis of the two imaging techniques, allowing optimal use of synchronous, anatomical, molecular and functional information for brain imaging. These innovative concepts, as well as data correction and analysis methods, will be easily translated into other areas of investigation using combined PET/MRI. (author) [fr

  19. Contribution to the development of an absolute quantification method in Single Photon Emission Tomography of the brain

    International Nuclear Information System (INIS)

    Dinis-De-Almeida, Pedro-Miguel

    1999-01-01

    Recent technical advances in SPECT have focused on the use of transmission imaging and on the development of new iterative algorithms for attenuation correction. These new tools can be coupled to approaches which compensate for scattering and spatial resolution, in order to quantify the radioactive concentration values in vivo. The main objective of this work was to investigate a quantification method of radioactivity uptake in small cerebral structures using SPECT. This method was based on the correction of attenuation using transmission data. Compton events were estimated and subtracted by positioning a lower energy window. Spatial resolution effects were corrected using Fourier deconvolution. The radiation dose received by patients during transmission scans was evaluated using anthropomorphic phantoms and suitable dosimeters. A preliminary evaluation of the quantification method was carried out using an anthropomorphic head phantom. In a second phase, in vivo acquisitions were performed in a baboon. The values of the percent injected doses per millilitre of tissue in baboon striata were compared under similar experimental conditions using SPECT and PET radiotracers specific for the D2 dopamine receptors. Experiments carried out with anthropomorphic phantoms indicated that the clinical use of transmission scans in SPECT is not limited by radiation doses. Measurements demonstrated that attenuation dramatically affects quantification in brain SPECT. This effect can be corrected using a map of linear attenuation coefficients obtained through transmission scans and an iterative reconstruction algorithm. After correcting for attenuation, scatter and spatial resolution effects, the accuracy of activity concentration measurements in the striata of the phantom is greatly improved. Results obtained in vivo show that the percent injected doses per millilitre of tissue can be measured with errors similar to those found in PET. This work demonstrates
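
    The lower-energy-window subtraction mentioned above is in the spirit of the classical dual-energy-window approach, where scatter in the photopeak is estimated as a fixed fraction of the counts recorded in a lower window. The sketch below uses illustrative count values and the commonly quoted multiplier k ≈ 0.5; neither is taken from this work.

        import numpy as np

        k = 0.5  # scatter multiplier (commonly quoted value; an assumption here)

        photopeak = np.array([1200.0, 950.0, 700.0])  # counts in the photopeak window
        lower_win = np.array([400.0, 360.0, 300.0])   # counts in the lower window

        primary = np.clip(photopeak - k * lower_win, 0.0, None)
        print(primary)  # scatter-corrected photopeak counts per bin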

  20. Development of Uncertainty Quantification Method for MIR-PIV Measurement using BOS Technique

    International Nuclear Information System (INIS)

    Seong, Jee Hyun; Song, Min Seop; Kim, Eung Soo

    2014-01-01

    Matching Index of Refraction (MIR) is frequently used for obtaining high quality PIV measurement data. Even small distortion by an unmatched refractive index of the test section can result in uncertainty problems. In this context, it is desirable to construct a new concept for checking MIR errors and the resulting uncertainty of the PIV measurement. This paper proposes an experimental concept and the corresponding results. This study developed an MIR uncertainty quantification method for PIV measurement using the SBOS technique. From the reference data of the BOS, a reliable SBOS experiment procedure was constructed. Then, combining the SBOS technique with the MIR-PIV technique, the velocity vector field and the refraction displacement vector field were measured simultaneously. MIR errors are calculated through a mathematical equation into which the PIV and SBOS data are inserted. These errors were also verified by another BOS experiment. Finally, by applying the calculated MIR-PIV uncertainty, a corrected velocity vector field can be obtained regardless of MIR errors

  1. Different partial volume correction methods lead to different conclusions

    DEFF Research Database (Denmark)

    Greve, Douglas N; Salat, David H; Bowen, Spencer L

    2016-01-01

    A cross-sectional group study of the effects of aging on brain metabolism as measured with (18)F-FDG-PET was performed using several different partial volume correction (PVC) methods: no correction (NoPVC), Meltzer (MZ), Müller-Gärtner (MG), and the symmetric geometric transfer matrix (SGTM) usin...

  2. Simulating water hammer with corrective smoothed particle method

    NARCIS (Netherlands)

    Hou, Q.; Kruisbrink, A.C.H.; Tijsseling, A.S.; Keramat, A.

    2012-01-01

    The corrective smoothed particle method (CSPM) is used to simulate water hammer. The spatial derivatives in the water-hammer equations are approximated by a corrective kernel estimate. For the temporal derivatives, the Euler-forward time integration algorithm is employed. The CSPM results are in

  3. Method of absorbance correction in a spectroscopic heating value sensor

    Science.gov (United States)

    Saveliev, Alexei; Jangale, Vilas Vyankatrao; Zelepouga, Sergeui; Pratapas, John

    2013-09-17

    A method and apparatus for absorbance correction in a spectroscopic heating value sensor in which a reference light intensity measurement is made on a non-absorbing reference fluid, a light intensity measurement is made on a sample fluid, and a measured light absorbance of the sample fluid is determined. A corrective light intensity measurement at a non-absorbing wavelength of the sample fluid is made on the sample fluid from which an absorbance correction factor is determined. The absorbance correction factor is then applied to the measured light absorbance of the sample fluid to arrive at a true or accurate absorbance for the sample fluid.
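
    A minimal numeric sketch of the correction described above: a measured absorbance is computed from transmitted intensities, an apparent absorbance at a non-absorbing wavelength captures scattering, fouling or drift, and subtracting the two yields the corrected value. The intensity values and wavelengths are illustrative placeholders, not instrument data.

        import numpy as np

        def absorbance(i_sample, i_reference):
            # Beer-Lambert absorbance from transmitted intensities.
            return -np.log10(i_sample / i_reference)

        a_measured = absorbance(0.62, 1.00)  # at an absorbing wavelength
        a_offset   = absorbance(0.93, 1.00)  # at a non-absorbing wavelength:
                                             # any apparent absorbance here is
                                             # not due to the analyte
        a_true = a_measured - a_offset
        print("corrected absorbance: %.3f" % a_true)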

  4. A spectrum correction method for fuel assembly rehomogenization

    International Nuclear Information System (INIS)

    Lee, Kyung Taek; Cho, Nam Zin

    2004-01-01

    To overcome the limitation of existing homogenization methods based on the single assembly calculation with zero current boundary condition, we propose a new rehomogenization method, named the spectrum correction method (SCM), consisting of a multigroup energy spectrum approximation by spectrum correction and condensed two-group heterogeneous single assembly calculations with non-zero current boundary conditions. In SCM, the spectrum shifting caused by current across assembly interfaces is first taken into account by the spectrum correction at the group condensation stage. Then, heterogeneous single assembly calculations with two-group cross sections condensed using the corrected multigroup energy spectrum are performed to obtain rehomogenized nodal diffusion parameters, i.e., assembly-wise homogenized cross sections and discontinuity factors. To evaluate the performance of SCM, it was applied to the analytic function expansion nodal (AFEN) method and several test problems were solved. The results show that SCM can significantly reduce the errors in both multiplication factors and assembly averaged power distributions
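
    The group condensation step referred to above is, in its simplest form, a flux-weighted average of multigroup cross sections. The sketch below condenses six illustrative groups to two using an assumed corrected spectrum; all numbers are made up for illustration and do not come from the paper.

        import numpy as np

        sigma = np.array([1.2, 1.0, 0.8, 0.6, 0.9, 1.5])  # 6-group cross sections
        phi   = np.array([0.9, 1.1, 1.0, 0.8, 0.6, 0.4])  # corrected spectrum weights
        groups = [slice(0, 3), slice(3, 6)]                # fast / thermal lumping

        # Flux-weighted condensation: sigma_G = sum(sigma*phi) / sum(phi) over G.
        sigma_2g = [float(np.sum(sigma[g] * phi[g]) / np.sum(phi[g])) for g in groups]
        print(sigma_2g)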

  5. Methods to Increase Educational Effectiveness in an Adult Correctional Setting.

    Science.gov (United States)

    Kuster, Byron

    1998-01-01

    A correctional educator reflects on methods that improve instructional effectiveness. These include teacher-student collaboration, clear goals, student accountability, positive classroom atmosphere, high expectations, and mutual respect. (SK)

  6. Automated general temperature correction method for dielectric soil moisture sensors

    Science.gov (United States)

    Kapilaratne, R. G. C. Jeewantinie; Lu, Minjiao

    2017-08-01

    An effective temperature correction method for dielectric sensors is important to ensure the accuracy of soil water content (SWC) measurements in local- to regional-scale soil moisture monitoring networks. These networks extensively use highly temperature-sensitive dielectric sensors due to their low cost, ease of use and low power consumption. Yet there is no general temperature correction method for dielectric sensors; instead, sensor- or site-dependent correction algorithms are employed. Such methods become ineffective for soil moisture monitoring networks with different sensor setups and those that cover diverse climatic conditions and soil types. This study attempted to develop a general temperature correction method for dielectric sensors which can be used regardless of differences in sensor type, climatic conditions and soil type, without rainfall data. In this work an automated general temperature correction method was developed by adapting previously developed temperature correction algorithms based on time domain reflectometry (TDR) measurements to ThetaProbe ML2X, Stevens Hydra probe II and Decagon Devices EC-TM sensor measurements. The procedure for removing rainy-day effects from SWC data was automated by incorporating a statistical inference technique into the temperature correction algorithms. The temperature correction method was evaluated using 34 stations from the International Soil Moisture Monitoring Network and another nine stations from a local soil moisture monitoring network in Mongolia. The soil moisture monitoring networks used in this study cover four major climates and six major soil types. Results indicated that the automated temperature correction algorithms developed in this study can successfully eliminate temperature effects from dielectric sensor measurements even without on-site rainfall data. Furthermore, it has been found that the actual daily average of SWC has been changed due to temperature effects of dielectric sensors with a

  7. A SIMPLE METHOD FOR THE EXTRACTION AND QUANTIFICATION OF PHOTOPIGMENTS FROM SYMBIODINIUM SPP.

    Science.gov (United States)

    John E. Rogers and Dragoslav Marcovich. Submitted. Simple Method for the Extraction and Quantification of Photopigments from Symbiodinium spp. Limnol. Oceanogr. Methods. 19 p. (ERL,GB 1192). We have developed a simple, mild extraction procedure using methanol which, when...

  8. An overview of quantification methods in energy-dispersive X-ray ...

    Indian Academy of Sciences (India)

    methods for thin samples, samples with intermediate thickness and thick ... algorithms and quantification methods based on scattered primary radiation. ... technique for in situ characterization of materials such as contaminated soil, archaeo-.

  9. Evaluation of two autoinducer-2 quantification methods for application in marine environments

    KAUST Repository

    Wang, Tian-Nyu; Kaksonen, Anna H.; Hong, Pei-Ying

    2018-01-01

    This study evaluated two methods, namely high performance liquid chromatography with fluorescence detection (HPLC-FLD) and Vibrio harveyi BB170 bioassay, for autoinducer-2 (AI-2) quantification in marine samples. Using both methods, the study also

  10. Correction

    DEFF Research Database (Denmark)

    Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin

    2016-01-01

    [This corrects the article DOI: 10.1371/journal.ppat.1005000.] [This corrects the article DOI: 10.1371/journal.ppat.1005740.] [This corrects the article DOI: 10.1371/journal.ppat.1005679.]

  11. Metric-based method of software requirements correctness improvement

    Directory of Open Access Journals (Sweden)

    Yaremchuk Svitlana

    2017-01-01

    The work highlights the most important principles of software reliability management (SRM). The SRM concept forms a basis for developing a method of requirements correctness improvement. The method assumes that complicated requirements contain more actual and potential design faults/defects. It applies a newer metric to evaluate requirements complexity and a double sorting technique to evaluate the priority and complexity of each particular requirement. The method makes it possible to improve requirements correctness by identifying a higher number of defects with restricted resources. Practical application of the proposed method in the course of requirements review yielded a tangible technical and economic effect.

  12. A highly sensitive method for quantification of iohexol

    DEFF Research Database (Denmark)

    Schulz, A.; Boeringer, F.; Swifka, J.

    2014-01-01

    -chromatography-electrospray-mass spectrometry (LC-ESI-MS) approach using the multiple reaction monitoring mode for iohexol quantification. In order to test whether a significantly decreased amount of iohexol is sufficient for reliable quantification, a LC-ESI-MS approach was assessed. We analyzed the kinetics of iohexol in rats after application...... of different amounts of iohexol (15 mg to 150 µg per rat). Blood sampling was conducted at four time points, at 15, 30, 60, and 90 min, after iohexol injection. The analyte (iohexol) and the internal standard (iothalamic acid) were separated from serum proteins using a centrifugal filtration device...... with a cut-off of 3 kDa. The chromatographic separation was achieved on an analytical Zorbax SB C18 column. The detection and quantification were performed on a high capacity trap mass spectrometer using positive ion ESI in the multiple reaction monitoring (MRM) mode. Furthermore, using real-time polymerase

  13. Impact and correction of the bladder uptake on 18F-FCH PET quantification: a simulation study using the XCAT2 phantom

    Science.gov (United States)

    Silva-Rodríguez, Jesús; Tsoumpas, Charalampos; Domínguez-Prado, Inés; Pardo-Montero, Juan; Ruibal, Álvaro; Aguiar, Pablo

    2016-01-01

    The spill-in counts from neighbouring regions can significantly bias the quantification over small regions close to high activity extended sources. This effect can be a drawback in positron emission tomography (PET) with 18F-based radiotracers when quantitatively evaluating the bladder area for diseases such as prostate cancer. In this work, we use Monte Carlo simulations to investigate the impact of the spill-in counts from the bladder on the quantitative evaluation of prostate cancer when using 18F-Fluorocholine (FCH) PET, and we propose a novel reconstruction-based correction method. Monte Carlo simulations of a modified version of the XCAT2 anthropomorphic phantom with 18F-FCH biological distribution, variable bladder uptake and inserted prostatic tumours were used in order to obtain simulated realistic 18F-FCH data. We evaluated possible variations of the measured tumour Standardized Uptake Value (SUV) for different values of bladder uptake and propose a novel correction by appropriately adapting the image reconstruction methodology. The correction is based on the introduction of physiological background terms into the reconstruction, removing the contribution of the bladder to the final image. The bladder is segmented from the reconstructed image and then forward-projected to the sinogram space. The resulting sinograms are used as background terms for the reconstruction. SUVmax and SUVmean could be overestimated by 41% and 22% respectively due to the accumulation of radiotracer in the bladder, with strong dependence on the bladder-to-lesion ratio. While the SUVs measured under these conditions are not reliable, images corrected using the proposed methodology provide better repeatability of SUVs, with biases below 6%. Results also showed remarkable improvements in visual detectability. The spill-in counts from the bladder can affect prostatic SUV measurements of 18F-FCH images, which can be corrected to less than 6% using the proposed methodology, providing reliable SUV
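
    The correction described above amounts to adding a fixed, forward-projected bladder term to the forward model during reconstruction. Below is a minimal sketch of an MLEM update with such an additive background term, on a tiny made-up system matrix; the numbers are illustrative, and the real method operates on sinograms from the scanner geometry.

        import numpy as np

        def mlem_background(A, y, b, n_iter=50):
            # MLEM with a fixed additive background b in the forward model:
            #   x <- x / (A^T 1) * A^T( y / (A x + b) )
            x = np.ones(A.shape[1])
            sens = A.T @ np.ones(A.shape[0])
            for _ in range(n_iter):
                x *= (A.T @ (y / (A @ x + b))) / sens
            return x

        # Tiny illustrative system: 3 detector bins, 2 voxels, plus a "bladder"
        # contribution entering the data only through the background term.
        A = np.array([[0.8, 0.1],
                      [0.1, 0.8],
                      [0.1, 0.1]])
        x_true = np.array([4.0, 2.0])
        b = np.array([0.5, 0.3, 0.2])   # forward-projected bladder activity
        y = A @ x_true + b
        print(mlem_background(A, y, b))  # recovers approximately x_true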

  14. Local defect correction for boundary integral equation methods

    NARCIS (Netherlands)

    Kakuba, G.; Anthonissen, M.J.H.

    2013-01-01

    This paper presents a new approach to gridding for problems with localised regions of high activity. The technique of local defect correction has been studied for other methods as finite difference methods and finite volume methods. In this paper we develop the technique for the boundary element

  15. Quantification in emission tomography

    International Nuclear Information System (INIS)

    Buvat, Irene

    2011-11-01

    The objective of this lecture is to understand the possibilities and limitations of the quantitative analysis of single photon emission computed tomography (SPECT) and positron emission tomography (PET) images. It is also to identify the conditions to be fulfilled to obtain reliable quantitative measurements from images. Content: 1 - Introduction: quantification in emission tomography - definition and challenges; quantification biasing phenomena; 2 - Main problems impacting quantification in PET and SPECT: problems, consequences, correction methods, results (attenuation, scattering, partial volume effect, movement, non-stationary spatial resolution in SPECT, fortuitous coincidences in PET, standardisation in PET); 3 - Synthesis: accessible efficiency, know-how, precautions, beyond the activity measurement

  16. A method of detector correction for cosmic ray muon radiography

    International Nuclear Information System (INIS)

    Liu Yuanyuan; Zhao Ziran; Chen Zhiqiang; Zhang Li; Wang Zhentian

    2008-01-01

    Cosmic ray muon radiography, which has good penetrability and sensitivity to high-Z materials, is an effective way of detecting shielded nuclear materials. Data correction is one of the key points of the muon radiography technique. Because of the influence of environmental background, environmental noise and detector errors, the raw data cannot be used directly; reconstructing from the raw data without any correction produces severe artifacts. Based on the characteristics of the muon radiography system, and aimed at the detector errors, this paper proposes a method of detector correction. Simulation experiments demonstrate that this method can effectively correct the errors produced by the detectors, and it is thus a further step toward bringing cosmic-ray muon radiography into real-life use. (authors)

  17. Track benchmarking method for uncertainty quantification of particle tracking velocimetry interpolations

    International Nuclear Information System (INIS)

    Schneiders, Jan F G; Sciacchitano, Andrea

    2017-01-01

    The track benchmarking method (TBM) is proposed for uncertainty quantification of particle tracking velocimetry (PTV) data mapped onto a regular grid. The method provides statistical uncertainty for a velocity time-series and can in addition be used to obtain instantaneous uncertainty at increased computational cost. Interpolation techniques are typically used to map velocity data from scattered PTV (e.g. tomographic PTV and Shake-the-Box) measurements onto a Cartesian grid. Recent examples of these techniques are the FlowFit and VIC+  methods. The TBM approach estimates the random uncertainty in dense velocity fields by performing the velocity interpolation using a subset of typically 95% of the particle tracks and by considering the remaining tracks as an independent benchmarking reference. In addition, also a bias introduced by the interpolation technique is identified. The numerical assessment shows that the approach is accurate when particle trajectories are measured over an extended number of snapshots, typically on the order of 10. When only short particle tracks are available, the TBM estimate overestimates the measurement error. A correction to TBM is proposed and assessed to compensate for this overestimation. The experimental assessment considers the case of a jet flow, processed both by tomographic PIV and by VIC+. The uncertainty obtained by TBM provides a quantitative evaluation of the measurement accuracy and precision and highlights the regions of high error by means of bias and random uncertainty maps. In this way, it is possible to quantify the uncertainty reduction achieved by advanced interpolation algorithms with respect to standard correlation-based tomographic PIV. The use of TBM for uncertainty quantification and comparison of different processing techniques is demonstrated. (paper)
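
    A small self-contained sketch of the hold-out idea behind TBM: interpolate a scattered field from 95% of the samples and use the remaining 5% as an independent benchmark for bias and random uncertainty. Here scipy.interpolate.griddata stands in for FlowFit/VIC+-style interpolation, and the synthetic field is an assumption for illustration.

        import numpy as np
        from scipy.interpolate import griddata

        rng = np.random.default_rng(1)
        pts = rng.uniform(0, 1, size=(2000, 2))        # scattered track positions
        u = np.sin(2 * np.pi * pts[:, 0]) * pts[:, 1]  # synthetic velocity samples

        # Hold out ~5% of the tracks as an independent benchmark set.
        mask = rng.uniform(size=len(u)) < 0.95
        u_hat = griddata(pts[mask], u[mask], pts[~mask], method='linear')

        err = u_hat - u[~mask]
        err = err[np.isfinite(err)]   # drop points outside the convex hull
        print("bias = %.4f, random uncertainty = %.4f" % (err.mean(), err.std()))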

  18. Methods for external event screening quantification: Risk Methods Integration and Evaluation Program (RMIEP) methods development

    International Nuclear Information System (INIS)

    Ravindra, M.K.; Banon, H.

    1992-07-01

    In this report, the scoping quantification procedures for external events in probabilistic risk assessments of nuclear power plants are described. External event analysis in a PRA has three important goals: (1) the analysis should be complete in that all events are considered; (2) by following some selected screening criteria, the more significant events are identified for detailed analysis; (3) the selected events are analyzed in depth by taking into account the unique features of the events: hazard, fragility of structures and equipment, external-event initiated accident sequences, etc. Based on the above goals, external event analysis may be considered as a three-stage process: Stage I: Identification and Initial Screening of External Events; Stage II: Bounding Analysis; Stage III: Detailed Risk Analysis. In the present report, first, a review of published PRAs is given to focus on the significance and treatment of external events in full-scope PRAs. Except for seismic, flooding, fire, and extreme wind events, the contributions of other external events to plant risk have been found to be negligible. Second, scoping methods for external events not covered in detail in the NRC's PRA Procedures Guide are provided. For this purpose, bounding analyses for transportation accidents, extreme winds and tornadoes, aircraft impacts, turbine missiles, and chemical release are described

  19. A corrective method for the inherent flaw of the asynchronization direct counting circuit

    International Nuclear Information System (INIS)

    Wang Renfei; Liu Congzhan; Jin Yongjie; Zhang Zhi; Li Yanguo

    2003-01-01

    As an inherent flaw of the Asynchronization Direct Counting Circuit, crosstalk, which results from the randomness of the time signal, always exists between two adjacent channels. In order to reduce the counting error derived from the crosstalk, the authors propose an effective method to correct the flaw after analysing the mechanism of the crosstalk

  20. Comparison of Suitability of the Most Common Ancient DNA Quantification Methods.

    Science.gov (United States)

    Brzobohatá, Kristýna; Drozdová, Eva; Smutný, Jiří; Zeman, Tomáš; Beňuš, Radoslav

    2017-04-01

    Ancient DNA (aDNA) extracted from historical bones is damaged and fragmented into short segments, present in low quantity, and usually copurified with microbial DNA. A wide range of DNA quantification methods are available. The aim of this study was to compare the five most common DNA quantification methods for aDNA. Quantification methods were tested on DNA extracted from skeletal material originating from an early medieval burial site. The tested methods included ultraviolet (UV) absorbance, real-time quantitative polymerase chain reaction (qPCR) based on SYBR® Green detection, real-time qPCR based on a forensic kit, quantification via fluorescent dyes bonded to DNA, and fragmentary analysis. Differences between groups were tested using a paired t-test. Methods that measure total DNA present in the sample (NanoDrop™ UV spectrophotometer and Qubit® fluorometer) showed the highest concentrations. Methods based on real-time qPCR underestimated the quantity of aDNA. The most accurate method of aDNA quantification was fragmentary analysis, which also allows DNA quantification of the desired length and is not affected by PCR inhibitors. Methods based on the quantification of the total amount of DNA in samples are unsuitable for ancient samples as they overestimate the amount of DNA presumably due to the presence of microbial DNA. Real-time qPCR methods give undervalued results due to DNA damage and the presence of PCR inhibitors. DNA quantification methods based on fragment analysis show not only the quantity of DNA but also fragment length.

  1. Implementation of the Centroid Method for the Correction of Turbulence

    Directory of Open Access Journals (Sweden)

    Enric Meinhardt-Llopis

    2014-07-01

    The centroid method for the correction of turbulence consists of computing the Karcher-Fréchet mean of the sequence of input images. The direction of deformation between a pair of images is determined by the optical flow. A distinguishing feature of the centroid method is that it can produce useful results from an arbitrarily small set of input images.
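
    A sketch of one way to realize this averaging, assuming 8-bit grayscale frames and the opencv-python package: align every frame onto the current mean via dense optical flow, average the warped frames, and iterate toward a fixed point. Farneback flow and the iteration count are stand-ins for the paper's flow estimator, not its actual implementation.

        import cv2
        import numpy as np

        def centroid_image(frames, n_iter=5):
            # Fixed-point iteration for a centroid (Karcher-mean style) image.
            mean = frames[0].astype(np.float32)
            h, w = mean.shape
            gy, gx = np.mgrid[0:h, 0:w].astype(np.float32)
            for _ in range(n_iter):
                acc = np.zeros_like(mean)
                for f in frames:
                    flow = cv2.calcOpticalFlowFarneback(
                        mean.astype(np.uint8), f, None, 0.5, 3, 15, 3, 5, 1.2, 0)
                    # remap samples f at (x + flow_x, y + flow_y): f aligned to mean
                    acc += cv2.remap(f.astype(np.float32),
                                     gx + flow[..., 0], gy + flow[..., 1],
                                     cv2.INTER_LINEAR)
                mean = acc / len(frames)
            return mean

        # Synthetic usage; real input would be a turbulence-degraded sequence.
        frames = [np.random.randint(0, 255, (64, 64), np.uint8) for _ in range(4)]
        avg = centroid_image(frames)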

  2. [Study on phase correction method of spatial heterodyne spectrometer].

    Science.gov (United States)

    Wang, Xin-Qiang; Ye, Song; Zhang, Li-Juan; Xiong, Wei

    2013-05-01

    Phase distortion exists in collected interferograms because of various measurement factors when spatial heterodyne spectrometers are used in practice, so an improved phase correction method is presented. The phase curve of the interferogram is obtained by extracting the single-side transform spectrum through an inverse Fourier transform; from this, the phase distortions are obtained by fitting the phase slope, which also yields the phase correction functions, and the transform spectrum is convolved with the phase correction function to implement the spectral phase correction. The method was applied to the phase correction of an actually measured monochromatic spectrum and a simulated water vapor spectrum. Experimental results show that the low-frequency false signals in the monochromatic spectrum fringes are effectively eliminated, increasing the periodicity and symmetry of the interferogram; in addition, when the phase error imposed on the continuous spectrum was corrected, the standard deviation between it and the original spectrum was reduced from 0.47 to 0.20, thus improving the accuracy of the spectrum.
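
    A toy numpy version of the linear-phase removal step: build a shifted interferogram, take the single-side spectrum, fit the phase slope over bins with appreciable signal, and remove it. The sampling, the 3.7-pixel shift and the 5% amplitude threshold are illustrative assumptions, not values from the paper.

        import numpy as np

        x = np.arange(512)
        igm = np.cos(2 * np.pi * 0.1 * (x - 3.7))  # off-centre sampling -> phase slope

        spec = np.fft.fft(igm)[:256]               # single-side transform spectrum
        bins = np.arange(256)
        phase = np.unwrap(np.angle(spec))

        strong = np.abs(spec) > 0.05 * np.abs(spec).max()  # fit only where signal is
        slope, intercept = np.polyfit(bins[strong], phase[strong], 1)
        corrected = spec * np.exp(-1j * (slope * bins + intercept))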

  3. An attenuation correction method for PET/CT images

    International Nuclear Information System (INIS)

    Ue, Hidenori; Yamazaki, Tomohiro; Haneishi, Hideaki

    2006-01-01

    In PET/CT systems, accurate attenuation correction can be achieved by creating an attenuation map from an X-ray CT image. On the other hand, respiratory-gated PET acquisition is an effective method for avoiding motion blurring of the thoracic and abdominal organs caused by respiratory motion. In PET/CT systems employing respiratory-gated PET, using an X-ray CT image acquired during breath-holding for attenuation correction may have a large effect on the voxel values, especially in regions with substantial respiratory motion. In this report, we propose an attenuation correction method in which, as the first step, a set of respiratory-gated PET images is reconstructed without attenuation correction, as the second step, the motion of each phase PET image from the PET image in the same phase as the CT acquisition timing is estimated by the previously proposed method, as the third step, the CT image corresponding to each respiratory phase is generated from the original CT image by deformation according to the motion vector maps, and as the final step, attenuation correction using these CT images and reconstruction are performed. The effectiveness of the proposed method was evaluated using 4D-NCAT phantoms, and good stability of the voxel values near the diaphragm was observed. (author)

  4. An Automated Baseline Correction Method Based on Iterative Morphological Operations.

    Science.gov (United States)

    Chen, Yunliang; Dai, Liankui

    2018-05-01

    Raman spectra usually suffer from baseline drift caused by fluorescence or other reasons. Therefore, baseline correction is a necessary and crucial step that must be performed before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method can adaptively determine the structuring element first and then gradually remove the spectral peaks during iteration to get an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible for handling different kinds of baselines in various practical situations. The comparison of the proposed method with some state-of-the-art baseline correction methods demonstrates its advantages over the existing methods in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method is hopefully to be used for the baseline correction of other analytical instrumental signals, such as IR spectra and chromatograms.
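
    A simplified stand-in for the iterative scheme described above: grow the grey-scale opening's structuring element until the baseline estimate stops changing, then subtract it. The adaptive element selection of the paper is replaced here by a plain size sweep, and the synthetic spectrum is an assumption.

        import numpy as np
        from scipy.ndimage import grey_opening

        def morphological_baseline(y, max_half_width=50):
            # The opening removes structures narrower than the element, so once
            # the element outgrows the widest peak, the result tracks the baseline.
            prev = None
            for half in range(1, max_half_width + 1):
                opened = grey_opening(y, size=2 * half + 1)
                if prev is not None and np.abs(opened - prev).max() < 1e-6 * np.abs(y).max():
                    break
                prev = opened
            return prev

        # Synthetic Raman-like spectrum: two peaks on a drifting baseline.
        x = np.linspace(0, 1, 1000)
        base = 0.5 + 0.4 * x + 0.3 * np.exp(-x)
        peaks = np.exp(-0.5 * ((x - 0.3) / 0.005) ** 2) \
              + 0.7 * np.exp(-0.5 * ((x - 0.6) / 0.008) ** 2)
        y = base + peaks
        corrected = y - morphological_baseline(y)  # peaks preserved, drift removed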

  5. The various correction methods to the high precision aeromagnetic data

    International Nuclear Information System (INIS)

    Xu Guocang; Zhu Lin; Ning Yuanli; Meng Xiangbao; Zhang Hongjian

    2014-01-01

    In an airborne geophysical survey, an outstanding result depends first on the measurement precision of the instrument, the choice of measurement conditions and the reliability of data collection, and then on correct processing of the measurement data and sound interpretation of the results. Geophysical data processing is clearly an important task for the comprehensive interpretation of the measurement results; whether the processing method is correct directly affects the quality of the final results. In recent years, in the course of production and scientific research, we have developed a set of personal-computer software for processing aeromagnetic and radiometric survey data and have successfully applied it in production. This paper briefly introduces the processing methods and flowcharts for high-precision aeromagnetic data. The mathematical techniques of the various correction programs for IGRF, flying height and magnetic diurnal variation are discussed with emphasis, and their effectiveness is illustrated with an example. (authors)

  6. Monte Carlo evaluation of scattering correction methods in 131I studies using pinhole collimator

    International Nuclear Information System (INIS)

    López Díaz, Adlin; San Pedro, Aley Palau; Martín Escuela, Juan Miguel; Rodríguez Pérez, Sunay; Díaz García, Angelina

    2017-01-01

    Scattering is quite important for image activity quantification. In order to study the scattering factors and the efficacy of three multiple-energy-window scatter correction methods during 131I thyroid studies with a pinhole collimator (5 mm hole), a Monte Carlo (MC) simulation was developed. The GAMOS MC code was used to model the gamma camera and the thyroid source geometry. First, to validate the MC gamma camera pinhole-source model, the sensitivities in air and water of the simulated and measured thyroid phantom geometries were compared. Next, simulations to investigate scattering and the results of the triple energy window (TEW), double energy window (DW) and reduced double energy window (RDW) correction methods were performed for different thyroid sizes and depth thicknesses. The relative discrepancies with respect to the MC true events were evaluated. Results: The accuracy of the GAMOS MC model was verified and validated. The scattering contribution to the image was significant, between 27-40 %. The discrepancies between the three multiple-energy-window correction methods were significant (between 9-86 %). The reduced double window method (15 %) provided discrepancies of 9-16 %. Conclusions: For the simulated thyroid geometry with pinhole, the RDW (15 %) was the most effective. (author)

  7. A vibration correction method for free-fall absolute gravimeters

    Science.gov (United States)

    Qian, J.; Wang, G.; Wu, K.; Wang, L. J.

    2018-02-01

    An accurate determination of gravitational acceleration, usually approximated as 9.8 m s⁻², has been playing an important role in the areas of metrology, geophysics, and geodetics. Absolute gravimetry has been experiencing rapid developments in recent years. Most absolute gravimeters today employ a free-fall method to measure gravitational acceleration. Noise from ground vibration has become one of the most serious factors limiting measurement precision. Compared to vibration isolators, the vibration correction method is a simple and feasible way to reduce the influence of ground vibrations. A modified vibration correction method is proposed and demonstrated. A two-dimensional golden section search algorithm is used to search for the best parameters of the hypothetical transfer function. Experiments using a T-1 absolute gravimeter are performed. It is verified that for an identical group of drop data, the modified method proposed in this paper can achieve better correction effects with much less computation than previous methods. Compared to vibration isolators, the correction method applies to more hostile environments and even dynamic platforms, and is expected to be used in a wider range of applications.

  8. SU-F-R-28: Correction of FCh-PET Bladder Uptake Using Virtual Sinograms and Investigation of Its Impact On the Quantification of Prostate Textural Characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Laberge, S; Beauregard, J; Archambault, L [CHUQ Pavillon Hotel-Dieu de Quebec, Quebec, QC (Canada)

    2016-06-15

    Purpose: Textural biomarkers as a tool for quantifying intratumoral heterogeneity hold great promise for diagnosis and early assessment of treatment response in prostate cancer. However, spill-in counts from the bladder uptake are suspected to have an impact on the textural measurements of the prostate volume. This work proposes a correction method for the FCh-PET bladder uptake and investigates its impact on intraprostatic textural properties. Methods: Two patients with PC received pre-treatment dynamic FCh-PET scans reconstructed at four time points (interval: 2 min), for which prostate and bladder contours were obtained. Projection bins affected by bladder uptake were determined by forward-projection. For each time point and axial position, virtual sinograms were obtained and affected bins replaced by a weighted combination of original values and values interpolated using cubic spline from non-affected bins of the current and adjacent projection angles. The process was optimized using a genetic algorithm in terms of minimization of the root-mean-square error (RMSE) within the bladder between the corrected dynamic time point volume and a reference initial uptake volume. Finally, the impact of the bladder uptake correction on the prostate region was investigated using two standard SUV metrics (SUVmax, SUVmean) and three texture metrics (Contrast, Homogeneity, Coarseness). Results: Without bladder uptake correction, SUVmax was on average overestimated in the prostate by 0%, 0%, 33.2% and 51.2% at the four time points respectively, and SUVmean by 3.6%, 6.0%, 2.9% and 3.2%. Contrast varied by −9.1%, −6.7%, +40.4% and +107.7%, Homogeneity by +4.5%, +1.8%, −8.8% and −14.8%, and Coarseness by +1.0%, +0.5%, −9.5% and +0.9%. Conclusion: We proposed a method for FCh-PET bladder uptake correction and showed an impact on the quantification of the prostate signal. This method achieved a large reduction of intra-prostatic SUVmax while minimizing the impact on SUVmean

  9. Evaluation of a method for correction of scatter radiation in thorax cone beam CT

    International Nuclear Information System (INIS)

    Rinkel, J.; Dinten, J.M.; Esteve, F.

    2004-01-01

    Purpose: Cone beam CT (CBCT) enables three-dimensional imaging with isotropic resolution. X-ray scatter estimation is a major challenge for quantitative CBCT imaging of the thorax: the scatter level is significantly higher on cone beam systems compared to collimated fan beam systems. The effects of this scattered radiation are cupping artefacts, streaks, and quantification inaccuracies. The conventional beam-stop approach to scatter estimation can be used for CBCT but leads to a significant increase in dose and acquisition time. An original scatter management process requiring no supplementary acquisition has been developed at CEA-LETI. Methods and Materials: This Analytical Plus Indexing-based (API) method of scatter correction in CBCT is based on scatter calibration through offline acquisitions with beam stops on lucite plates, combined with an analytical transformation derived from physical equations. This approach has been applied successfully in bone densitometry and mammography. To evaluate the method in CBCT, acquisitions of a thorax phantom with and without beam stops were performed. To compare the different scatter correction approaches, the Feldkamp algorithm was applied to raw data corrected for scatter by the API and beam-stop approaches. Results: The API method provides results in good agreement with the beam-stop array approach, suppressing the cupping artefact. The influence of the scatter correction method on noise in the reconstructed images was also evaluated. Conclusion: The results indicate that the API method is effective for quantitative CBCT imaging of the thorax. Compared to a beam-stop array method it requires a lower x-ray dose and shortens acquisition time. (authors)

  10. Method for indirect quantification of CH4 production via H2O production using hydrogenotrophic methanogens

    Directory of Open Access Journals (Sweden)

    Ruth-Sophie eTaubner

    2016-04-01

    Hydrogenotrophic methanogens are an intriguing group of microorganisms from the domain Archaea. They exhibit extraordinary ecological, biochemical and physiological characteristics and have a huge biotechnological potential. Yet, the only possibility to assess the methane (CH4) production potential of hydrogenotrophic methanogens is to apply gas chromatographic quantification of CH4. In order to be able to effectively screen pure cultures of hydrogenotrophic methanogens regarding their CH4 production potential, we developed a novel method for indirect quantification of the volumetric CH4 production rate by measuring the volumetric water production rate. This method was established in serum bottles for cultivation of methanogens in closed batch cultivation mode. Water production was estimated by determining the difference in mass increase in an isobaric setting. This novel CH4 quantification method is an accurate and precise analytical technique, which can be used to rapidly screen pure cultures of methanogens regarding their volumetric CH4 evolution rate. It is a cost-effective alternative for determining the CH4 production of methanogens over CH4 quantification by gas chromatography, especially if applied as a high-throughput quantification method. Eventually, the method can be universally applied for quantification of CH4 production from psychrophilic, thermophilic and hyperthermophilic hydrogenotrophic methanogens.
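
    The indirect quantification rests on the stoichiometry of hydrogenotrophic methanogenesis, CO2 + 4 H2 -> CH4 + 2 H2O, so one mole of CH4 accompanies two moles of water. A tiny worked example, with a hypothetical measured mass increase standing in for real data:

        M_H2O = 18.015  # molar mass of water [g/mol]

        mass_water_g = 0.36          # hypothetical measured mass increase
        n_h2o = mass_water_g / M_H2O
        n_ch4 = n_h2o / 2.0          # CO2 + 4 H2 -> CH4 + 2 H2O
        print("CH4 produced: %.2f mmol" % (1e3 * n_ch4))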

  11. A Horizontal Tilt Correction Method for Ship License Numbers Recognition

    Science.gov (United States)

    Liu, Baolong; Zhang, Sanyuan; Hong, Zhenjie; Ye, Xiuzi

    2018-02-01

    An automatic ship license numbers (SLNs) recognition system plays a significant role in intelligent waterway transportation systems, since it can be used to identify ships by recognizing the characters in SLNs. Tilt occurs frequently in many SLNs because the monitors and the ships usually have large vertical or horizontal angles, which significantly decreases the accuracy and robustness of an SLNs recognition system. In this paper, we present a horizontal tilt correction method for SLNs. For an input tilted SLN image, the proposed method accomplishes the correction task through three main steps. First, an MSER-based characters' center-points computation algorithm is designed to compute the accurate center-points of the characters contained in the input SLN image. Second, an L1-L2 distance-based straight line is fitted to the computed center-points using the M-estimator algorithm; the tilt angle is estimated at this stage. Finally, based on the computed tilt angle, an affine rotation is conducted to rotate and correct the input SLN horizontally. The proposed method was tested on 200 tilted SLN images and proved to be effective, with a tilt correction rate of 80.5%.
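
    A stripped-down version of the fit-then-rotate step: fit a line to assumed character centre points and rotate the image by the recovered angle. Plain least squares (np.polyfit) stands in for the paper's L1-L2/M-estimator fit, and the centre points below are hypothetical.

        import numpy as np
        from scipy import ndimage

        def correct_horizontal_tilt(image, centers):
            # Fit y = slope*x + b to the character centre points, then rotate
            # the image so the fitted line becomes horizontal. The rotation
            # sign may need flipping depending on the coordinate convention.
            slope, _ = np.polyfit(centers[:, 0], centers[:, 1], 1)
            angle_deg = np.degrees(np.arctan(slope))
            return ndimage.rotate(image, angle_deg, reshape=False, mode='nearest')

        centers = np.array([[10, 20], [30, 24], [50, 28], [70, 32], [90, 36]])
        corrected = correct_horizontal_tilt(np.zeros((128, 128)), centers)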

  12. Correction of measured multiplicity distributions by the simulated annealing method

    International Nuclear Information System (INIS)

    Hafidouni, M.

    1993-01-01

    Simulated annealing is a method used to solve combinatorial optimization problems. It is used here for the correction of the observed multiplicity distribution from S-Pb collisions at 200 GeV/c per nucleon. (author) 11 refs., 2 figs

  13. A Hold-out method to correct PCA variance inflation

    DEFF Research Database (Denmark)

    Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Hansen, Lars Kai

    2012-01-01

    In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article a correction method based on a Leave-One-Out (LOO) procedure...

  14. Prospective comparison of liver stiffness measurements between two point shear wave elastography methods: virtual touch quantification and elastography point quantification

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Hyun Suk; Lee, Jeong Min; Yoon, Jeong Hee; Lee, Dong Ho; Chang, Won; Han, Joon Koo [Seoul National University Hospital, Seoul (Korea, Republic of)

    2016-09-15

    To prospectively compare technical success rate and reliable measurements of virtual touch quantification (VTQ) elastography and elastography point quantification (ElastPQ), and to correlate liver stiffness (LS) measurements obtained by the two elastography techniques. Our study included 85 patients, 80 of whom were previously diagnosed with chronic liver disease. The technical success rate and reliable measurements of the two kinds of point shear wave elastography (pSWE) techniques were compared by χ² analysis. LS values measured using the two techniques were compared and correlated via Wilcoxon signed-rank test, Spearman correlation coefficient, and 95% Bland-Altman limit of agreement. The intraobserver reproducibility of ElastPQ was determined by 95% Bland-Altman limit of agreement and intraclass correlation coefficient (ICC). The two pSWE techniques showed similar technical success rate (98.8% for VTQ vs. 95.3% for ElastPQ, p = 0.823) and reliable LS measurements (95.3% for VTQ vs. 90.6% for ElastPQ, p = 0.509). The mean LS measurements obtained by VTQ (1.71 ± 0.47 m/s) and ElastPQ (1.66 ± 0.41 m/s) were not significantly different (p = 0.209). The LS measurements obtained by the two techniques showed strong correlation (r = 0.820); in addition, the 95% limit of agreement of the two methods was 27.5% of the mean. Finally, the ICC of repeat ElastPQ measurements was 0.991. Virtual touch quantification and ElastPQ showed similar technical success rate and reliable measurements, with strongly correlated LS measurements. However, the two methods are not interchangeable due to the large limit of agreement.

  15. Quantification of miRNAs by a simple and specific qPCR method

    DEFF Research Database (Denmark)

    Cirera Salicio, Susanna; Busk, Peter K.

    2014-01-01

    MicroRNAs (miRNAs) are powerful regulators of gene expression at posttranscriptional level and play important roles in many biological processes and in disease. The rapid pace of the emerging field of miRNAs has opened new avenues for development of techniques to quantitatively determine mi...... in miRNA quantification. Furthermore, the method is easy to perform with common laboratory reagents, which allows miRNA quantification at low cost....

  16. Correction for Metastability in the Quantification of PID in Thin-film Module Testing

    DEFF Research Database (Denmark)

    Hacke, Peter; Spataru, Sergiu; Johnston, Steve

    2017-01-01

    A fundamental change in the analysis for the accelerated stress testing of thin-film modules is proposed, whereby power changes due to metastability and other effects that may occur due to the thermal history are removed from the power measurement that we obtain as a function of the applied stress...... in standardized tests, the method is demonstrated and discussed for potential-induced degradation testing in view of the physical mechanisms that can lead to confounding power changes in the module....

  17. Method for decoupling error correction from privacy amplification

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Hoi-Kwong [Department of Electrical and Computer Engineering and Department of Physics, University of Toronto, 10 King's College Road, Toronto, Ontario, Canada, M5S 3G4 (Canada)

    2003-04-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof.
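
    The decoupling trick hinges on one-time-pad encrypting the error-correction messages (e.g. the syndrome) with pre-shared secret bits, so the classical discussion leaks nothing to an eavesdropper. A minimal sketch of that primitive is below; the four-byte syndrome is an illustrative placeholder, not part of the Cascade protocol itself.

        import secrets

        def one_time_pad(data: bytes, key: bytes) -> bytes:
            # XOR one-time pad; the same function encrypts and decrypts.
            assert len(key) >= len(data)
            return bytes(d ^ k for d, k in zip(data, key))

        syndrome = b"\x01\x00\x01\x01"   # illustrative parity bits
        shared_key = secrets.token_bytes(len(syndrome))
        ciphertext = one_time_pad(syndrome, shared_key)   # sent over public channel
        recovered = one_time_pad(ciphertext, shared_key)  # receiver decrypts
        assert recovered == syndrome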

  18. Correcting saturation of detectors for particle/droplet imaging methods

    International Nuclear Information System (INIS)

    Kalt, Peter A M

    2010-01-01

    Laser-based diagnostic methods are being applied to more and more flows of theoretical and practical interest and are revealing interesting new flow features. Imaging particles or droplets in nephelometry and laser sheet dropsizing methods requires a trade-off of maximized signal-to-noise ratio without over-saturating the detector. Droplet and particle imaging results in lognormal distribution of pixel intensities. It is possible to fit a derived lognormal distribution to the histogram of measured pixel intensities. If pixel intensities are clipped at a saturated value, it is possible to estimate a presumed probability density function (pdf) shape without the effects of saturation from the lognormal fit to the unsaturated histogram. Information about presumed shapes of the pixel intensity pdf is used to generate corrections that can be applied to data to account for saturation. The effects of even slight saturation are shown to be a significant source of error on the derived average. The influence of saturation on the derived root mean square (rms) is even more pronounced. It is found that errors on the determined average exceed 5% when the number of saturated samples exceeds 3% of the total. Errors on the rms are 20% for a similar saturation level. This study also attempts to delineate limits, within which the detector saturation can be accurately corrected. It is demonstrated that a simple method for reshaping the clipped part of the pixel intensity histogram makes accurate corrections to account for saturated pixels. These outcomes can be used to correct a saturated signal, quantify the effect of saturation on a derived average and offer a method to correct the derived average in the case of slight to moderate saturation of pixels
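
    A rough numeric illustration of the presumed-pdf idea: clip synthetic lognormal pixel intensities at a 12-bit full scale, fit a lognormal to the unsaturated samples, and read the corrected average off the fitted distribution. The naive censored fit below ignores the bias from discarding the clipped tail, whereas the paper fits the presumed pdf shape to the unsaturated histogram; all numbers are made up.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        full = stats.lognorm.rvs(0.8, scale=2000.0, size=100_000, random_state=rng)
        saturation = 4095.0                      # 12-bit detector full scale
        recorded = np.minimum(full, saturation)  # clipped pixel intensities

        unsat = recorded[recorded < saturation]
        shape, loc, scale = stats.lognorm.fit(unsat, floc=0)
        corrected_mean = stats.lognorm.mean(shape, loc=loc, scale=scale)
        print("clipped mean %.0f, corrected mean %.0f, true mean %.0f"
              % (recorded.mean(), corrected_mean, full.mean()))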

  19. Method for decoupling error correction from privacy amplification

    International Nuclear Information System (INIS)

    Lo, Hoi-Kwong

    2003-01-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof

  20. Development for 2D pattern quantification method on mask and wafer

    Science.gov (United States)

    Matsuoka, Ryoichi; Mito, Hiroaki; Toyoda, Yasutaka; Wang, Zhigang

    2010-03-01

    We have developed an effective method for 2-dimensional metrology of masks and silicon wafers. The aim of the method is to evaluate the performance of the silicon pattern corresponding to a hotspot on a mask. It adopts a metrology management system based on DBM (Design Based Metrology), using the highly accurate contours created by the edge-detection algorithms of mask CD-SEM and silicon CD-SEM. As semiconductor manufacturing moves toward ever smaller feature sizes, more aggressive optical proximity correction (OPC) is needed to drive resolution enhancement technology (RET). In other words, there is a trade-off between highly precise RET and mask manufacture, and this has a large impact on the semiconductor market, which centers on the mask business. Two-dimensional shape quantification offers an optimal solution to these problems. Although 1-dimensional shape measurement has been performed with conventional techniques, 2-dimensional shape management is needed in a mass-production line influenced by RET. We developed a technique for analyzing the distribution of shape-edge performance as a shape-management tool. Moreover, silicon shapes produced on a mass-production line exhibit both roughness and shape variation, so quantification of the silicon shape is important for estimating pattern performance. To quantify it, identical shapes are averaged in two dimensions and evaluated on the basis of the averaged shape. In this study, we conducted experiments on this pattern-averaging method (Measurement Based Contouring) as a two-dimensional mask and silicon evaluation technique; that is, identical positions on the mask and the silicon were observed, making it possible to analyze the edge variability at the same position with high precision. The results proved the method's detection accuracy and the reliability of its variability measurements on two-dimensional patterns (mask and silicon).

  1. An efficient dose-compensation method for proximity effect correction

    International Nuclear Information System (INIS)

    Wang Ying; Han Weihua; Yang Xiang; Zhang Yang; Yang Fuhua; Zhang Renping

    2010-01-01

    A novel, simple dose-compensation method is developed for proximity effect correction in electron-beam lithography. The sizes of exposed patterns depend on the dose factors while the other exposure parameters (including accelerating voltage, resist thickness, exposure step size, substrate material, and so on) remain constant. The method is based on two reasonable assumptions in the evaluation of the compensated dose factor: one is that the relation between dose factor and circle diameter is linear in the range under consideration; the other is that, for simplicity, the compensated dose factor is affected only by the nearest neighbors. Four-layer-hexagon photonic crystal structures were fabricated as test patterns to demonstrate the method. Compared to the uncorrected structures, the homogeneity of the corrected hole size in the photonic crystal structures was clearly improved. (semiconductor technology)
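
    The two assumptions translate directly into a small calculation, sketched below; all numbers are hypothetical placeholders, not the paper's calibration values.

```python
# Assumed linear calibration: diameter [nm] = a + b * dose_factor,
# and a fixed fractional dose k_nn leaked by each nearest neighbor.
a, b = 20.0, 80.0
k_nn = 0.05

def compensated_dose_factor(target_diameter_nm, n_nearest_neighbors):
    base = (target_diameter_nm - a) / b            # dose needed in isolation
    # Each nearest neighbor leaks k_nn * base, so scale the assigned dose down.
    return base / (1.0 + k_nn * n_nearest_neighbors)
```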

  2. Correction for Metastability in the Quantification of PID in Thin-film Module Testing: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Hacke, Peter L [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Johnston, Steven [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Spataru, Sergiu [Aalborg University]

    2017-10-01

    A fundamental change in the analysis for accelerated stress testing of thin-film modules is proposed, whereby power changes due to metastability and other effects of thermal history are removed from the power measurement obtained as a function of the applied stress factor. The power of reference modules normalized to an initial state - modules undergoing the same thermal and light-exposure history but without the applied stress factor such as humidity or voltage bias - is subtracted from that of the stressed modules. For better understanding and appropriate application in standardized tests, the method is demonstrated and discussed for potential-induced degradation testing, in view of the parallel-occurring but unrelated physical mechanisms that can lead to confounding power changes in the module.

  3. A rigid motion correction method for helical computed tomography (CT)

    International Nuclear Information System (INIS)

    Kim, J-H; Kyme, A; Fulton, R; Nuyts, J; Kuncic, Z

    2015-01-01

    We propose a method to compensate for six degree-of-freedom rigid motion in helical CT of the head. The method is demonstrated in simulations and in helical scans performed on a 16-slice CT scanner. Scans of a Hoffman brain phantom were acquired while an optical motion tracking system recorded the motion of the bed and the phantom. Motion correction was performed by restoring projection consistency using data from the motion tracking system, and reconstructing with an iterative fully 3D algorithm. Motion correction accuracy was evaluated by comparing reconstructed images with a stationary reference scan. We also investigated the effects on accuracy of tracker sampling rate, measurement jitter, interpolation of tracker measurements, and the synchronization of motion data and CT projections. After optimization of these aspects, motion corrected images corresponded remarkably closely to images of the stationary phantom with correlation and similarity coefficients both above 0.9. We performed a simulation study using volunteer head motion and found similarly that our method is capable of compensating effectively for realistic human head movements. To the best of our knowledge, this is the first practical demonstration of generalized rigid motion correction in helical CT. Its clinical value, which we have yet to explore, may be significant. For example it could reduce the necessity for repeat scans and resource-intensive anesthetic and sedation procedures in patient groups prone to motion, such as young children. It is not only applicable to dedicated CT imaging, but also to hybrid PET/CT and SPECT/CT, where it could also ensure an accurate CT image for lesion localization and attenuation correction of the functional image data. (paper)

  4. Quantification of myocardial perfusion SPECT for the assessment of coronary artery disease: should we apply scatter correction?

    International Nuclear Information System (INIS)

    Hambye, A.S.; Vervaet, A.; Dobbeleir, A.

    2002-01-01

    Compared to other non-invasive tests for CAD diagnosis, myocardial perfusion imaging (MPI) is considered a very sensitive method whose accuracy is, however, often dimmed by a certain lack of specificity, especially in patients with a small heart. With gated SPECT MPI, use of end-diastolic instead of summed images has been proposed as an interesting approach for increasing specificity. Since scatter correction is reported to improve image contrast, it might constitute another way to improve MPI accuracy. We aimed at comparing the value of both approaches, separately and combined, for CAD diagnosis. Methods: One hundred patients referred for gated 99m-Tc sestamibi SPECT MPI were prospectively included (Group A). Thirty-five had an end-systolic volume <30 ml by QGS analysis (Group B). All had a coronary angiogram within 3 months of the MPI. Four polar maps (non-corrected and scatter-corrected summed, and non-corrected and scatter-corrected end-diastolic) were created to quantify the extent (EXT) and severity (TDS) of any perfusion defects. ROC-curve analysis was applied to define the optimal thresholds of EXT and TDS separating non-CAD from CAD patients, using a 50% stenosis on coronary angiogram as the cutoff for disease positivity. Results: Significant CAD was present in 86 patients (25 in Group B). In Group A, assessment of EXT and TDS of perfusion defects on scatter-corrected summed images demonstrated the highest accuracy (76% for EXT: sens 77%, spec 71%; 74% for TDS: sens 73%, spec 79%). Accuracy of EXT and TDS calculated from the other data sets was slightly but not significantly lower, mainly because of a lower sensitivity. As a comparison, visual analysis was 90% accurate for the diagnosis of CAD (sens: 94%, spec: 64%). In Group B, overall results were worse, mainly due to decreased sensitivity, with accuracies ranging between 51 and 63%. Again scatter-corrected summed data were the most accurate (EXT: 60%, TDS: 63%, visual

  5. Dead time corrections using the backward extrapolation method

    Energy Technology Data Exchange (ETDEWEB)

    Gilad, E., E-mail: gilade@bgu.ac.il [The Unit of Nuclear Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel); Dubi, C. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel); Geslot, B.; Blaise, P. [DEN/CAD/DER/SPEx/LPE, CEA Cadarache, Saint-Paul-les-Durance 13108 (France); Kolin, A. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel)

    2017-05-11

    Dead-time losses in neutron detection, caused by both detector and electronics dead time, are a highly nonlinear effect, known to introduce strong bias in physical experiments as the power grows above a certain threshold, up to total saturation of the detector system. Analytic modeling of dead-time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing) and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead-time corrections on the sampled counts per second (CPS), based on backward extrapolation of the losses, created by increasingly growing artificially imposed dead times on the data, back to zero. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero-power reactor, demonstrating high accuracy (1–2%) in restoring the corrected count rate. - Highlights: • A new method for dead-time corrections is introduced and experimentally validated. • The method does not depend on any prior calibration nor assumes any specific model. • Different dead times are imposed on the signal and the losses are extrapolated back to zero. • The method is implemented and validated using neutron measurements from the MINERVE reactor. • Results show very good correspondence to empirical results.
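
    A minimal sketch of the backward-extrapolation idea follows, assuming a sorted list of event timestamps, a non-paralyzing imposed dead time, and a low-order polynomial as the extrapolation model; the authors' implementation details may differ.

```python
import numpy as np

def impose_dead_time(times, tau):
    """Count events surviving a non-paralyzing dead time tau (times sorted)."""
    kept, last = 0, -np.inf
    for t in times:
        if t - last >= tau:
            kept += 1
            last = t
    return kept

def backward_extrapolated_cps(times, duration, taus=np.linspace(5e-6, 5e-5, 10)):
    rates = [impose_dead_time(times, tau) / duration for tau in taus]
    # The imposed dead time dominates the intrinsic one, so fit the measured
    # rate as a function of tau and extrapolate the trend back to zero.
    return np.polyval(np.polyfit(taus, rates, deg=2), 0.0)
```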

  6. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc; Vitriolo, Alessandro; Adamo, Antonio; Laise, Pasquale; Das, Vivek; Testa, Giuseppe

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering from important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e., across samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created an R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  8. Digital quantification of fibrosis in liver biopsy sections: description of a new method by Photoshop software.

    Science.gov (United States)

    Dahab, Gamal M; Kheriza, Mohamed M; El-Beltagi, Hussien M; Fouda, Abdel-Motaal M; El-Din, Osama A Sharaf

    2004-01-01

    The precise quantification of fibrous tissue in liver biopsy sections is extremely important in the classification, diagnosis and grading of chronic liver disease, as well as in evaluating the response to antifibrotic therapy. Because recently described methods of digital image analysis of fibrosis in liver biopsy sections have major flaws, including the use of outdated image-processing techniques, inadequate precision, and inability to detect and quantify perisinusoidal fibrosis, we developed a new technique for computerized image analysis of liver biopsy sections based on Adobe Photoshop software. We prepared an experimental model of liver fibrosis involving treatment of rats with oral CCl4 for 6 weeks. After staining liver sections with Masson's trichrome, a series of computer operations were performed, including (i) reconstitution of seamless widefield images from a number of acquired fields of liver sections; (ii) image size and resolution adjustment; (iii) color correction; (iv) digital selection of a specified color range representing all fibrous tissue in the image; and (v) extraction and calculation. This technique is fully computerized, with no manual interference at any step, and thus could be very reliable for objectively quantifying any pattern of fibrosis in liver biopsy sections and for assessing the response to antifibrotic therapy. It could also be a valuable tool in the precise assessment of antifibrotic therapy in other tissues, regardless of the pattern of tissue or fibrosis.
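
    A numpy/scikit-image analogue of the "select a color range and calculate" steps (iv)-(v) might look like the sketch below; the HSV thresholds are invented placeholders, not the authors' Photoshop settings.

```python
from skimage import io, color

def fibrosis_fraction(path, hue_lo=0.45, hue_hi=0.70, sat_min=0.15):
    rgb = io.imread(path)[..., :3] / 255.0
    hsv = color.rgb2hsv(rgb)
    tissue = hsv[..., 2] < 0.95                    # exclude white background
    collagen = (tissue & (hsv[..., 0] >= hue_lo)   # blue-stained fibrous tissue
                & (hsv[..., 0] <= hue_hi) & (hsv[..., 1] >= sat_min))
    return collagen.sum() / tissue.sum()           # fibrosis area fraction
```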

  9. Correction of Misclassifications Using a Proximity-Based Estimation Method

    Directory of Open Access Journals (Sweden)

    Shmulevich Ilya

    2004-01-01

    An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial) information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies, and the degree of improvement in classification accuracy obtained by the proposed method is assessed statistically using Kappa analysis.
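
    The nonlinear sliding-window operation can be sketched as follows; the 3-class proximity matrix is a made-up example, not one from the paper.

```python
import numpy as np

P = np.array([[1.0, 0.6, 0.1],   # proximity between nominal class labels
              [0.6, 1.0, 0.5],
              [0.1, 0.5, 1.0]])

def correct_labels(labels, half_window=2):
    """labels: 1-D integer array of class indices."""
    out = labels.copy()
    for i in range(labels.size):
        window = labels[max(0, i - half_window): i + half_window + 1]
        # Replace the label by the class with the largest summed proximity
        # to all labels observed in the window.
        out[i] = np.argmax([P[c, window].sum() for c in range(P.shape[0])])
    return out
```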

  10. GPU accelerated manifold correction method for spinning compact binaries

    Science.gov (United States)

    Ran, Chong-xi; Liu, Song; Zhong, Shuang-ying

    2018-04-01

    The graphics processing unit (GPU) acceleration of the manifold correction algorithm, based on the compute unified device architecture (CUDA), is designed to simulate the dynamical evolution of the post-Newtonian (PN) Hamiltonian formulation of spinning compact binaries. The feasibility and efficiency of parallel computation on the GPU have been confirmed by various numerical experiments. The numerical comparisons show that results of the manifold correction method executed on the GPU agree well with those of the same codes executed on the central processing unit (CPU) alone. The acceleration achieved when the codes are implemented on the GPU can be increased enormously through the use of shared memory and register optimization techniques without additional hardware costs: the speedup is nearly 13 times that of the CPU codes for a phase-space scan (comprising 314 × 314 orbits). In addition, the GPU-accelerated manifold correction method is used to study numerically how the dynamics are affected by the spin-induced quadrupole-monopole interaction for black hole binary systems.

  11. Equation-Method for correcting clipping errors in OFDM signals.

    Science.gov (United States)

    Bibi, Nargis; Kleerekoper, Anthony; Muhammad, Nazeer; Cheetham, Barry

    2016-01-01

    Orthogonal frequency division multiplexing (OFDM) is the digital modulation technique used by 4G and many other wireless communication systems. OFDM signals have significant amplitude fluctuations resulting in high peak-to-average power ratios, which can make an OFDM transmitter susceptible to non-linear distortion produced by its high-power amplifiers (HPA). A simple and popular solution to this problem is to clip the peaks before an OFDM signal is applied to the HPA, but this causes in-band distortion and introduces bit errors at the receiver. In this paper we discuss a novel technique, which we call the Equation-Method, for correcting these errors. The Equation-Method uses the fast Fourier transform to create a set of simultaneous equations which, when solved, return the amplitudes of the peaks before they were clipped. We show analytically and through simulations that this method can correct all clipping errors over a wide range of clipping thresholds. We show that numerical instability can be avoided and that new techniques are needed to enable the receiver to differentiate between correctly and incorrectly received frequency-domain constellation symbols.
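
    A toy least-squares version of the underlying idea: treat the amounts removed at the clipped time-domain samples as unknowns and solve the linear equations supplied by reliably decided subcarriers. The paper's scheme is more elaborate and addresses numerical stability; a unitary DFT convention and the function names below are assumptions.

```python
import numpy as np

def restore_clipped(y, clipped_idx, known_bins, X_known):
    """y: received time-domain samples; clipped_idx: sample indices that hit
    the clipping level; known_bins: subcarriers whose frequency-domain
    symbols X_known the receiver trusts (unitary-DFT scaling assumed)."""
    N = y.size
    F = np.fft.fft(np.eye(N)) / np.sqrt(N)      # unitary DFT matrix
    Y = F @ y
    # x = y + e with e supported on the clipped samples, so the trusted bins
    # give the simultaneous equations F[known, clipped] @ e = X_known - Y[known].
    A = F[np.ix_(known_bins, clipped_idx)]
    e, *_ = np.linalg.lstsq(A, X_known - Y[known_bins], rcond=None)
    x = y.astype(complex)
    x[clipped_idx] += e
    return x
```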

  12. A phase quantification method based on EBSD data for a continuously cooled microalloyed steel

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, H.; Wynne, B.P.; Palmiere, E.J., E-mail: e.j.palmiere@sheffield.ac.uk

    2017-01-15

    Mechanical properties of steels depend on the phase constitution of the final microstructure, which can be related to the processing parameters. Accurate quantification of the different phases is therefore necessary to investigate the relationships between processing parameters, final microstructures and mechanical properties. Point counting on micrographs observed by optical or scanning electron microscopy is widely used as a phase quantification method, with the different phases discriminated according to their morphological characteristics. However, it is difficult to differentiate phase constituents with similar morphology. In EBSD-based phase quantification methods, by contrast, parameters derived from the orientation information can be used for discrimination in addition to morphological characteristics. In this research, a phase quantification method based on EBSD data at the grain level was proposed to identify and quantify the complex phase constitution of a microalloyed steel subjected to accelerated cooling. The characteristics of polygonal ferrite/quasi-polygonal ferrite, acicular ferrite and bainitic ferrite in terms of grain-averaged misorientation (GAM) angles, aspect ratios, high-angle grain boundary fractions and grain sizes were analysed and used to develop identification criteria for each phase. Comparing the results obtained by this EBSD-based method and point counting, it was found that the EBSD-based method provides accurate and reliable phase quantification for microstructures with relatively slow cooling rates. - Highlights: • A phase quantification method based on EBSD data at the grain level was proposed. • The critical grain area above which GAM angles are valid parameters was obtained. • Grain size and grain boundary misorientation were used to identify acicular ferrite. • High cooling rates deteriorate the accuracy of this EBSD-based method.
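
    At the grain level the identification criteria reduce to threshold tests of the kind sketched below; the cut-off values here are hypothetical, whereas the paper derives its own from the analysed data.

```python
def classify_grain(gam_deg, aspect_ratio, hagb_fraction, size_um):
    """Toy decision rules on grain-averaged misorientation (GAM), aspect
    ratio, high-angle grain-boundary fraction and grain size."""
    if gam_deg < 0.6 and hagb_fraction > 0.5 and size_um > 5.0:
        return "polygonal/quasi-polygonal ferrite"
    if gam_deg >= 0.6 and aspect_ratio > 3.0:
        return "bainitic ferrite"
    return "acicular ferrite"
```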

  13. Direct liquid chromatography method for the simultaneous quantification of hydroxytyrosol and tyrosol in red wines.

    Science.gov (United States)

    Piñeiro, Zulema; Cantos-Villar, Emma; Palma, Miguel; Puertas, Belen

    2011-11-09

    A validated HPLC method with fluorescence detection for the simultaneous quantification of hydroxytyrosol and tyrosol in red wines is described. Detection conditions for both compounds were optimized (excitation at 279 and 278 nm and emission at 631 and 598 nm for hydroxytyrosol and tyrosol, respectively). The validation of the analytical method was based on selectivity, linearity, robustness, detection and quantification limits, repeatability, and recovery. The detection and quantification limits in red wines were set at 0.023 and 0.076 mg L(-1) for hydroxytyrosol and at 0.007 and 0.024 mg L(-1) for tyrosol, respectively. Precision values, both within-day and between-day (n = 5), remained below 3% for both compounds. In addition, a fractional factorial experimental design was developed to analyze the influence of six different conditions on the analysis. The final optimized HPLC-fluorescence method allowed the analysis of 30 non-pretreated Spanish red wines to evaluate their hydroxytyrosol and tyrosol contents.
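
    Detection and quantification limits of this kind are commonly derived from the residual standard deviation of a calibration line; the sketch below shows the generic ICH-style calculation, which is not necessarily the authors' exact validation procedure.

```python
import numpy as np

def lod_loq(conc, response):
    """ICH-style limits from a linear calibration: 3.3*sigma/S and 10*sigma/S."""
    slope, intercept = np.polyfit(conc, response, 1)
    sigma = (response - (slope * conc + intercept)).std(ddof=2)  # residual sd
    return 3.3 * sigma / slope, 10.0 * sigma / slope
```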

  14. Towards a new method for the quantification of metabolites in the biological sample

    International Nuclear Information System (INIS)

    Neugnot, B.

    2005-03-01

    The quantification of metabolites is a key step in drug development. The aim of this Ph.D. work was to study the feasibility of a new method for this quantification, in the biological sample, without the drawbacks (cost, time, ethics) of the classical quantification methods based on metabolite synthesis or administration of the radiolabelled drug to man. Our strategy consists in determining the response factor, in mass spectrometry, of the metabolites. This approach is based on tritium labelling of the metabolites, ex vivo, by isotopic exchange. The labelling step was studied with deuterium. Metabolites of a model drug, recovered from in vitro or urinary samples, were labelled in three ways (Crabtree's catalyst/D2, deuterated trifluoroacetic acid, or rhodium chloride/D2O). The transposition to tritium labelling was then studied, and the first results are very promising for the ultimate validation of the method. (author)

  15. A method for simultaneous quantification of phospholipid species by routine 31P NMR

    DEFF Research Database (Denmark)

    Brinkmann-Trettenes, Ulla; Stein, Paul C.; Klösgen, Beate Maria

    2012-01-01

    We report a 31P NMR assay for quantification of aqueous phospholipid samples. Using a capillary with trimethylphosphate as internal standard, the limit of quantification is 1.30 mM. Comparison of the 31P NMR quantification method in aqueous buffer and in organic solvent revealed that the two methods are equal within experimental error. Changing the pH of the buffer enables peak separation for different phospholipid species. This is an advantage compared to the commercial enzyme assay based on phospholipase D and choline oxidase. The reported method, using routine 31P NMR equipment, is suitable when fast results of a limited number of samples are requested. © 2012 Elsevier B.V.

  16. An external standard method for quantification of human cytomegalovirus by PCR

    International Nuclear Information System (INIS)

    Rongsen, Shen; Liren, Ma; Fengqi, Zhou; Qingliang, Luo

    1997-01-01

    An external standard method for PCR quantification of HCMV is reported. [α-32P]dATP was used as a tracer. The 32P-labelled specific amplification product was separated by agarose gel electrophoresis; a gel piece containing the specific product band was excised and counted in a plastic scintillation counter. The distribution of [α-32P]dATP in the electrophoretic gel plate and the effectiveness of the separation between the 32P-labelled specific product and free [α-32P]dATP were examined. A standard curve for quantification of HCMV by PCR was established, and detection results for quality-control templates are presented. The external standard method and the electrophoretic separation efficiency were appraised. The results showed that the method can be used for relative quantification of HCMV. (author)
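
    Quantification against an external standard curve amounts to fitting the counts from the labelled band against known template amounts and inverting the fit for unknown samples; the log-log linear model below is an assumption for illustration.

```python
import numpy as np

def fit_standard_curve(template_copies, cpm):
    """Fit log10(counts) vs log10(template amount) for the external standards."""
    return np.polyfit(np.log10(template_copies), np.log10(cpm), 1)

def quantify(curve, sample_cpm):
    slope, intercept = curve
    # Invert the calibration line to estimate the starting template amount.
    return 10.0 ** ((np.log10(sample_cpm) - intercept) / slope)
```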

  17. Method for measuring multiple scattering corrections between liquid scintillators

    Energy Technology Data Exchange (ETDEWEB)

    Verbeke, J.M., E-mail: verbeke2@llnl.gov; Glenn, A.M., E-mail: glenn22@llnl.gov; Keefer, G.J., E-mail: keefer1@llnl.gov; Wurtz, R.E., E-mail: wurtz1@llnl.gov

    2016-07-21

    A time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.

  18. A Method To Modify/Correct The Performance Of Amplifiers

    Directory of Open Access Journals (Sweden)

    Rohith Krishnan R

    2015-01-01

    The actual response of an amplifier may vary with the replacement of aged or damaged components, and this method compensates for that problem. We use the op-amp fixator as the design tool. The tool helps us to isolate the selected circuit component from the rest of the circuit, adjust its operating point to correct the performance deviations, and modify the circuit without changing its other parts. A method to modify/correct the performance of amplifiers by properly redesigning the circuit is presented in this paper.

  19. New method in obtaining correction factor of power confirming

    International Nuclear Information System (INIS)

    Deng Yongjun; Li Rundong; Liu Yongkang; Zhou Wei

    2010-01-01

    The Westcott theory is the most widely used method in reactor power calibration and is particularly suited to research reactors. But the method is cumbersome, because it requires many correction parameters that rely on empirical formulas specific to the reactor type. Here, the incidence coefficient between foil activity and reactor power was obtained by Monte-Carlo calculation, carried out with a precise description of the reactor core and the foil arrangement positions in the MCNP input card. The reactor power is then determined from the core neutron fluence profile and the activity of a foil placed at the position used for normalization. This new method is simpler, more flexible and more accurate than the Westcott theory. In this paper, theoretical results for the SPRR-300 obtained by the new method are compared with experimental results, verifying the feasibility of the method. (authors)

  20. A Method for Correcting IMRT Optimizer Heterogeneity Dose Calculations

    International Nuclear Information System (INIS)

    Zacarias, Albert S.; Brown, Mellonie F.; Mills, Michael D.

    2010-01-01

    Radiation therapy treatment planning for volumes close to the patient's surface, in lung tissue, and in the head and neck region can be challenging for the planning-system optimizer because of the complexity of the treatment and protected volumes, as well as strong heterogeneity corrections. Because it is often the goal of the planner to produce an isodose plan with uniform dose throughout the planning target volume (PTV), there is a need for improved planning optimization procedures for PTVs located in these anatomical regions. To illustrate such an improved procedure, we present a treatment planning case of a patient with a lung lesion located in the posterior right lung. The intensity-modulated radiation therapy (IMRT) plan generated using standard optimization procedures produced substantial dose nonuniformity across the tumor caused by the effect of the lung tissue surrounding the tumor. We demonstrate a novel iterative method of dose correction, performed on the initial IMRT plan, that produces a more uniform dose distribution within the PTV. This optimization method corrected for the dose missing on the periphery of the PTV and reduced the maximum dose on the PTV from 120% to 106% on the representative IMRT plan.
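
    Generically, an iterative dose correction of this kind multiplies beamlet weights by how far each PTV voxel falls short of the prescription; the loop below is a schematic stand-in under that assumption, not the authors' planning-system implementation.

```python
import numpy as np

def iterate_beamlet_weights(weights, dose_matrix, target_dose, n_iter=10):
    """dose_matrix: (voxels x beamlets) dose per unit beamlet weight."""
    for _ in range(n_iter):
        dose = dose_matrix @ weights
        ratio = target_dose / np.maximum(dose, 1e-6)   # >1 where the PTV is cold
        # Nudge each beamlet by the mean correction of the voxels it irradiates.
        weights *= (dose_matrix.T @ ratio) / np.maximum(dose_matrix.sum(axis=0), 1e-9)
    return weights
```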

  1. Examination of packaging materials in bakery products : a validated method for detection and quantification

    NARCIS (Netherlands)

    Raamsdonk, van L.W.D.; Pinckaers, V.G.Z.; Vliege, J.J.M.; Egmond, van H.J.

    2012-01-01

    Methods for the detection and quantification of packaging materials are necessary for the control of the prohibition of these materials according to Regulation (EC)767/2009. A method has been developed and validated at RIKILT for bakery products, including sweet bread and raisin bread. This choice

  2. Formic acid hydrolysis/liquid chromatography isotope dilution mass spectrometry: An accurate method for large DNA quantification.

    Science.gov (United States)

    Shibayama, Sachie; Fujii, Shin-Ichiro; Inagaki, Kazumi; Yamazaki, Taichi; Takatsu, Akiko

    2016-10-14

    Liquid chromatography-isotope dilution mass spectrometry (LC-IDMS) with formic acid hydrolysis was established for the accurate quantification of λDNA. The over-decomposition of nucleobases during formic acid hydrolysis was restricted by optimizing the reaction temperature and time, and accurately corrected for by using deoxynucleotides (dNMPs) and isotope-labeled dNMPs as the calibrator and internal standard, respectively. The present method could quantify λDNA with an expanded uncertainty of 4.6% using 10 fmol of λDNA. The analytical results obtained with the present method were validated by comparison with the results of phosphate-based quantification by inductively coupled plasma-mass spectrometry (ICP-MS); the results showed good agreement with each other. We conclude that the formic acid hydrolysis/LC-IDMS method can quantify λDNA accurately and is promising as a primary method for the certification of DNA reference materials. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Interpretation of biological and mechanical variations between the Lowry versus Bradford method for protein quantification

    OpenAIRE

    Tzong-Shi Lu; Szu-Yu Yiao; Kenneth Lim; Roderick V. Jensen; Li-Li Hsiao

    2010-01-01

    Background: The identification of differences in protein expression resulting from methodical variations is an essential component of the interpretation of true, biologically significant results. Aims: We used the Lowry and Bradford methods, the two most commonly used methods for protein quantification, to assess whether differential protein expressions are a result of true biological or methodical variations. Material & Methods: Differential protein expression patterns were assessed by western blot following protein quantification by the Lowry and Bradford methods.

  4. Forest Carbon Leakage Quantification Methods and Their Suitability for Assessing Leakage in REDD

    Directory of Open Access Journals (Sweden)

    Sabine Henders

    2012-01-01

    This paper assesses quantification methods for carbon leakage from forestry activities for their suitability in leakage accounting in a future Reducing Emissions from Deforestation and Forest Degradation (REDD) mechanism. To that end, we first conducted a literature review to identify specific pre-requisites for leakage assessment in REDD. We then analyzed a total of 34 quantification methods for leakage emissions from the Clean Development Mechanism (CDM), the Verified Carbon Standard (VCS), the Climate Action Reserve (CAR), the CarbonFix Standard (CFS), and from scientific literature sources. We screened these methods for the leakage aspects they address in terms of leakage type, tools used for quantification, and the geographical scale covered. Results show that leakage methods can be grouped into nine main methodological approaches, six of which could fulfill the recommended REDD leakage requirements if approaches for primary and secondary leakage are combined. The majority of methods assessed address either primary or secondary leakage, the former mostly on a local or regional scale and the latter on a national scale. The VCS is found to be the only carbon accounting standard at present to fulfill all leakage quantification requisites in REDD. However, a lack of accounting methods was identified for international leakage, which was addressed by only two methods, both from the scientific literature.

  5. Pore REconstruction and Segmentation (PORES) method for improved porosity quantification of nanoporous materials

    Energy Technology Data Exchange (ETDEWEB)

    Van Eyndhoven, G., E-mail: geert.vaneyndhoven@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Kurttepeli, M. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Van Oers, C.J.; Cool, P. [Laboratory of Adsorption and Catalysis, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Batenburg, K.J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1090 GB Amsterdam (Netherlands); Mathematical Institute, Universiteit Leiden, Niels Bohrweg 1, NL-2333 CA Leiden (Netherlands); Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-01-15

    Electron tomography is currently a versatile tool to investigate the connection between the structure and properties of nanomaterials. However, a quantitative interpretation of electron tomography results is still far from straightforward. In particular, accurate quantification of pore space is hampered by artifacts introduced in all steps of the processing chain, i.e., acquisition, reconstruction, segmentation and quantification, and most common approaches require subjective manual user input. In this paper, the PORES algorithm "POre REconstruction and Segmentation" is introduced; it is a tailor-made, integral approach for the reconstruction, segmentation, and quantification of porous nanomaterials. The PORES processing chain starts by calculating a reconstruction with a nanoporous-specific reconstruction algorithm: the Simultaneous Update of Pore Pixels by iterative REconstruction and Simple Segmentation algorithm (SUPPRESS). It classifies the interior region to the pores during reconstruction, while reconstructing the remaining region by reducing the error with respect to the acquired electron microscopy data. The SUPPRESS reconstruction can be directly plugged into the remaining processing chain of the PORES algorithm, resulting in accurate individual pore quantification and full sample pore statistics. The proposed approach was extensively validated on both simulated and experimental data, indicating its ability to generate accurate statistics of nanoporous materials. - Highlights: • An electron tomography reconstruction/segmentation method for nanoporous materials. • The method exploits the porous nature of the scanned material. • Validated extensively on both simulation and real data experiments. • Results in increased image resolution and improved porosity quantification.

  6. Quantification of diatoms in biofilms: Standardisation of methods

    Digital Repository Service at National Institute of Oceanography (India)

    Patil, J.S.; Anil, A.C.

    of the difficulty in sampling and enumeration. Scraping or brushing are the traditional methods used for removal of diatoms from biofilms developed on solid substrata. The method of removal is the most critical step in enumerating the biofilm diatom community...

  7. Auto correct method of AD converters precision based on ethernet

    Directory of Open Access Journals (Sweden)

    NI Jifeng

    2013-10-01

    Ideal AD conversion would be a straight line through zero in a Cartesian coordinate system, but in practical engineering the signal-processing circuit, chip performance, and other factors affect the accuracy of conversion. Therefore, a linear fitting method is adopted to improve the conversion accuracy. An automatic correction of AD converter precision over Ethernet, implemented in software and hardware, is presented. With a single mouse click, the linearity correction of all AD converter channels is completed automatically, and the error, SNR, and ENOB (effective number of bits) are calculated. The coefficients of the linear correction are then loaded into the EEPROM of the onboard AD converter card. Compared with traditional methods, this method is more convenient, accurate, and efficient, and has broad application prospects.
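
    The per-channel linear correction reduces to fitting measured codes against the codes an ideal converter would produce and storing the gain/offset pair; a minimal sketch, with the 16-bit resolution and 10 V full scale as assumed example values:

```python
import numpy as np

def calibrate_channel(applied_volts, measured_codes, full_scale=10.0, bits=16):
    ideal_codes = applied_volts / full_scale * (2**bits - 1)
    gain, offset = np.polyfit(measured_codes, ideal_codes, 1)
    return gain, offset   # corrected_code = gain * raw_code + offset
```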

  8. Critical assessment of three high performance liquid chromatography analytical methods for food carotenoid quantification

    NARCIS (Netherlands)

    Dias, M.G.; Oliveira, L.; Camoes, M.F.G.F.C.; Nunes, B.; Versloot, P.; Hulshof, P.J.M.

    2010-01-01

    Three sets of extraction/saponification/HPLC conditions for food carotenoid quantification were technically and economically compared. Samples were analysed for the carotenoids α-carotene, β-carotene, β-cryptoxanthin, lutein, lycopene, and zeaxanthin. All methods demonstrated good performance in the

  9. Correction

    CERN Multimedia

    2002-01-01

    Tile Calorimeter modules stored at CERN. The larger modules belong to the Barrel, whereas the smaller ones are for the two Extended Barrels. (The article was about the completion of the 64 modules for one of the latter.) The photo on the first page of the Bulletin n°26/2002, from 24 July 2002, illustrating the article «The ATLAS Tile Calorimeter gets into shape» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.

  10. Filtering of SPECT reconstructions made using Bellini's attenuation correction method

    International Nuclear Information System (INIS)

    Glick, S.J.; Penney, B.C.; King, M.A.

    1991-01-01

    This paper evaluates a three-dimensional (3D) Wiener filter used to restore SPECT reconstructions made with Bellini's method of attenuation correction. Its performance is compared to that of several pre-reconstruction filters: the one-dimensional (1D) Butterworth, the two-dimensional (2D) Butterworth, and a 2D Wiener filter. A simulation study was used to compare the four filtering methods. An approximation to a clinical liver-spleen study was used as the source distribution, and an algorithm that accounts for the depth- and distance-dependent blurring in SPECT was used to compute noise-free projections. To study the effect of filtering method on tumor detection accuracy, a 2 cm diameter, cool spherical tumor (40% contrast) was placed at a known, but random, location within the liver. Projection sets for ten tumor locations were computed, and five noise realizations of each set were obtained by introducing Poisson noise. The simulated projections were either filtered with the 1D or 2D Butterworth or the 2D Wiener and then reconstructed using Bellini's intrinsic attenuation correction, or reconstructed first and then filtered with the 3D Wiener. The criteria used for comparison were normalized mean square error (NMSE), cold-spot contrast, and accuracy of tumor detection with an automated numerical method. Results indicate that restorations obtained with 3D Wiener filtering yielded significantly higher lesion contrast and lower NMSE values compared with the other methods of processing. The Wiener restoration filters and the 2D Butterworth all provided similar measures of detectability, noticeably higher than that obtained with 1D Butterworth smoothing.
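
    The Wiener filters compared here follow the classical frequency-domain form; a minimal sketch, assuming the blurring frequency response H and the noise-to-signal power ratio are known or estimated:

```python
import numpy as np

def wiener_restore(image, H, noise_power, signal_power):
    """Classical Wiener deconvolution: G = conj(H) / (|H|^2 + N/S)."""
    G = np.conj(H) / (np.abs(H) ** 2 + noise_power / signal_power)
    return np.real(np.fft.ifftn(np.fft.fftn(image) * G))
```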

  11. Interpretation of biological and mechanical variations between the Lowry versus Bradford method for protein quantification.

    Science.gov (United States)

    Lu, Tzong-Shi; Yiao, Szu-Yu; Lim, Kenneth; Jensen, Roderick V; Hsiao, Li-Li

    2010-07-01

    The identification of differences in protein expression resulting from methodical variations is an essential component of the interpretation of true, biologically significant results. We used the Lowry and Bradford methods, the two most commonly used methods for protein quantification, to assess whether differential protein expressions are a result of true biological or methodical variations. MATERIAL & METHODS: Differential protein expression patterns were assessed by western blot following protein quantification by the Lowry and Bradford methods. We observed significant variations in protein concentrations when identical samples were assessed with the Lowry versus the Bradford method. Greater variations in protein concentration readings were observed over time and in samples with higher concentrations with the Bradford method. Identical samples quantified using both methods yielded significantly different expression patterns on Western blot. We show for the first time that the methodical variations observed in these protein assay techniques can translate into differential protein expression patterns that can be falsely taken to be biologically significant. Our study therefore highlights the pivotal need to carefully consider the method of protein quantification in techniques that report quantitative differences.

  12. Development and Assessment of a Bundle Correction Method for CHF

    International Nuclear Information System (INIS)

    Hwang, Dae Hyun; Chang, Soon Heung

    1993-01-01

    A bundle correction method, based on the conservation laws of mass, energy, and momentum in an open subchannel, is proposed for the prediction of the critical heat flux (CHF) in rod bundles from round-tube CHF correlations without detailed subchannel analysis. It takes into account the effects of the enthalpy and mass-velocity distributions at the subchannel level using the first derivatives of CHF with respect to the independent parameters. Three different tube CHF correlations (Groeneveld's CHF table, the Katto correlation, and the Biasi correlation) were examined against uniformly heated bundle CHF data collected from various sources. A limited number of CHF data from a non-uniformly heated rod bundle were also evaluated with the aid of Tong's F-factor. The proposed method shows satisfactory CHF predictions for rod bundles with both uniform and non-uniform power distributions. (Author)

  13. Empirical method for matrix effects correction in liquid samples

    International Nuclear Information System (INIS)

    Vigoda de Leyt, Dora; Vazquez, Cristina

    1987-01-01

    A simple method for the determination of Cr, Ni and Mo in stainless steels is presented. In order to minimize matrix effects, the conditions of a liquid system to dissolve stainless steel chips have been developed. Pure-element solutions were used as standards. The preparation of synthetic solutions containing all the elements of the steel, as well as mathematical corrections, are thus avoided, resulting in a simple chemical operation that simplifies the method of analysis. Analysis of variance of the results obtained with steel samples shows that the three elements may be determined by comparison with the analytical curves obtained from the pure elements if the same parameters are used in the calibration curves. The accuracy and precision were checked against other techniques using the British Chemical Standards of the Bureau of Analysed Samples Ltd. (England). (M.E.L.) [es

  14. SU-G-IeP1-15: Towards Accurate Cerebral Blood Flow Quantification with Distortion-Corrected Pseudo-Continuous Arterial Spin Labeling

    Energy Technology Data Exchange (ETDEWEB)

    Hoff, M; Rane-Levandovsky, S; Andre, J [University of Washington, Seattle, WA (United States)

    2016-06-15

    Purpose: Traditional arterial spin labeling (ASL) acquisitions with echo planar imaging (EPI) readouts suffer from image distortion due to susceptibility effects, compromising ASL’s ability to accurately quantify cerebral blood flow (CBF) and assess disease-specific patterns associated with CBF abnormalities. Phase labeling for additional coordinate encoding (PLACE) can remove image distortion; our goal is to apply PLACE to improve the quantitative accuracy of ASL CBF in humans. Methods: Four subjects were imaged on a 3T Philips Ingenia scanner using a 16-channel receive coil with a 21/21/10cm (frequency/phase/slice direction) field-of-view. An ASL sequence with a pseudo-continuous ASL (pCASL) labeling scheme was employed to acquire thirty dynamics of single-shot EPI data, with control and label datasets for all dynamics, and PLACE gradients applied on odd dynamics. Parameters included a post-labeling delay = 2s, label duration = 1.8s, flip angle = 90°, TR/TE = 5000/23.5ms, and 2.9/2.9/5.0mm (frequency/phase/slice direction) voxel size. “M0” EPI-reference images and T1-weighted spin-echo images with 0.8/1.0/3.3mm (frequency/phase/slice directions) voxel size were also acquired. Complex conjugate image products of pCASL odd and even dynamics were formed, a linear phase ramp applied, and data expanded and smoothed. Data phase was extracted to map control, label, and M0 magnitude image pixels to their undistorted locations, and images were rebinned to original size. All images were corrected for motion artifacts in FSL 5.0. pCASL images were registered to M0 images, and control and label images were subtracted to compute quantitative CBF maps. Results: pCASL image and CBF map distortions were removed by PLACE in all subjects. Corrected images conformed well to the anatomical T1-weighted reference image, and deviations in corrected CBF maps were evident. Conclusion: Eliminating pCASL distortion with PLACE can improve CBF quantification accuracy using minimal

  16. Performance of spectral fitting methods for vegetation fluorescence quantification

    NARCIS (Netherlands)

    Meroni, M.; Busetto, D.; Colombo, R.; Guanter, L.; Moreno, J.; Verhoef, W.

    2010-01-01

    The Fraunhofer Line Discriminator (FLD) principle has long been considered as the reference method to quantify solar-induced chlorophyll fluorescence (F) from passive remote sensing measurements. Recently, alternative retrieval algorithms based on the spectral fitting of hyperspectral radiance

  17. A gamma camera count rate saturation correction method for whole-body planar imaging

    Science.gov (United States)

    Hobbs, Robert F.; Baechler, Sébastien; Senthamizhchelvan, Srinivasan; Prideaux, Andrew R.; Esaias, Caroline E.; Reinhardt, Melvin; Frey, Eric C.; Loeb, David M.; Sgouros, George

    2010-02-01

    Whole-body (WB) planar imaging has long been one of the staple methods of dosimetry, and its quantification has been formalized by the MIRD Committee in pamphlet no. 16. One issue not specifically addressed in the formalism occurs when the count rates reaching the detector are sufficiently high to result in camera count saturation. Camera dead-time effects have been extensively studied, but all of the developed correction methods assume static acquisitions. However, during WB planar (sweep) imaging, a variable amount of imaged activity exists in the detector's field of view as a function of time, and therefore the camera saturation is time dependent. A new time-dependent algorithm was developed to correct for dead-time effects during WB planar acquisitions, accounting for the relative motion between the detector heads and the imaged object. Static camera dead-time parameters were acquired by imaging decaying activity in a phantom and obtaining a saturation curve. Using these parameters, an iterative algorithm akin to Newton's method was developed which takes into account the variable count rate seen by the detector as a function of time. The algorithm was tested on simulated data as well as on a whole-body scan of high-activity Samarium-153 in an ellipsoid phantom. A complete set of parameters from unsaturated phantom data necessary for count-rate-to-activity conversion was also obtained, including build-up and attenuation coefficients, in order to convert corrected count-rate values to activity. The algorithm proved successful in accounting for motion- and time-dependent saturation effects in both the simulated and measured data and converged to any desired degree of precision. The clearance half-life calculated from the ellipsoid phantom data was 45.1 h after dead-time correction and 51.4 h with no correction; the physical decay half-life of Samarium-153 is 46.3 h. Accurate WB planar dosimetry of high activities relies on successfully compensating
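
    The static building block of such an algorithm is the inversion of a dead-time model by Newton's method; the sketch below assumes a paralysable model m = n·exp(-n·tau), which may differ from the authors' fitted saturation curve.

```python
import numpy as np

def true_rate(measured, tau, n_iter=20):
    """Invert m = n * exp(-n * tau) for the true rate n by Newton's method;
    valid on the lower branch, i.e. for measured rates below the
    paralysable peak at 1 / (e * tau)."""
    n = np.asarray(measured, dtype=float).copy()   # start from the measured rate
    for _ in range(n_iter):
        f = n * np.exp(-n * tau) - measured
        fprime = np.exp(-n * tau) * (1.0 - n * tau)
        n -= f / fprime
    return n
```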

  18. Correction

    Directory of Open Access Journals (Sweden)

    2012-01-01

    Regarding Gorelik, G., & Shackelford, T.K. (2011). Human sexual conflict from molecules to culture. Evolutionary Psychology, 9, 564–587: The authors wish to correct an omission in citation to the existing literature. In the final paragraph on p. 570, we neglected to cite Burch and Gallup (2006) [Burch, R. L., & Gallup, G. G., Jr. (2006). The psychobiology of human semen. In S. M. Platek & T. K. Shackelford (Eds.), Female infidelity and paternal uncertainty (pp. 141–172). New York: Cambridge University Press.]. Burch and Gallup (2006) reviewed the relevant literature on FSH and LH discussed in this paragraph, and should have been cited accordingly. In addition, Burch and Gallup (2006) should have been cited as the originators of the hypothesis regarding the role of FSH and LH in the semen of rapists. The authors apologize for this oversight.

  19. Correction

    CERN Multimedia

    2002-01-01

    The photo on the second page of the Bulletin n°48/2002, from 25 November 2002, illustrating the article «Spanish Visit to CERN» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.   The Spanish delegation, accompanied by Spanish scientists at CERN, also visited the LHC superconducting magnet test hall (photo). From left to right: Felix Rodriguez Mateos of CERN LHC Division, Josep Piqué i Camps, Spanish Minister of Science and Technology, César Dopazo, Director-General of CIEMAT (Spanish Research Centre for Energy, Environment and Technology), Juan Antonio Rubio, ETT Division Leader at CERN, Manuel Aguilar-Benitez, Spanish Delegate to Council, Manuel Delfino, IT Division Leader at CERN, and Gonzalo León, Secretary-General of Scientific Policy to the Minister.

  20. Correction

    Directory of Open Access Journals (Sweden)

    2014-01-01

    Regarding Tagler, M. J., and Jeffers, H. M. (2013). Sex differences in attitudes toward partner infidelity. Evolutionary Psychology, 11, 821–832: The authors wish to correct values in the originally published manuscript. Specifically, incorrect 95% confidence intervals around the Cohen's d values were reported on page 826 of the manuscript, where we reported the within-sex simple effects for the significant Participant Sex × Infidelity Type interaction (first paragraph), and for attitudes toward partner infidelity (second paragraph). Corrected values are presented in bold below. The authors would like to thank Dr. Bernard Beins at Ithaca College for bringing these errors to our attention. Men rated sexual infidelity significantly more distressing (M = 4.69, SD = 0.74) than they rated emotional infidelity (M = 4.32, SD = 0.92), F(1, 322) = 23.96, p < .001, d = 0.44, 95% CI [0.23, 0.65], but there was little difference between women's ratings of sexual (M = 4.80, SD = 0.48) and emotional infidelity (M = 4.76, SD = 0.57), F(1, 322) = 0.48, p = .29, d = 0.08, 95% CI [−0.10, 0.26]. As expected, men rated sexual infidelity (M = 1.44, SD = 0.70) more negatively than they rated emotional infidelity (M = 2.66, SD = 1.37), F(1, 322) = 120.00, p < .001, d = 1.12, 95% CI [0.85, 1.39]. Although women also rated sexual infidelity (M = 1.40, SD = 0.62) more negatively than they rated emotional infidelity (M = 2.09, SD = 1.10), this difference was not as large, and thus was in the direction supportive of evolutionary theory, F(1, 322) = 72.03, p < .001, d = 0.77, 95% CI [0.60, 0.94].

  1. Developing a method for quantification of Ascaris eggs on hands

    DEFF Research Database (Denmark)

    Jeandron, Aurelie; Ensink, Jeroen J. H.; Thamsborg, Stig Milan

    In transmission of soil-transmitted helminths, especially Ascaris and Trichuris infections, the importance of hands is unclear and very limited literature exists. This is partly because of the absence of a reliable method to quantify the number of helminth eggs on hands. The aim of this study was to develop a method to assess the number of Ascaris eggs on hands and determine the egg recovery rate of the method. Under laboratory conditions, hands were contaminated with approx. 1000 Ascaris eggs, air dried and washed in a plastic bag retaining the washing water, in order to determine recovery rates of eggs for two different detergents (cationic [benzethonium chloride 0.1%], anionic [7X 1% - quadrafos, glycol ether, and dioctyl sulfosuccinate sodium salt]) and de-ionized water used as control. The highest recovery rate (95.6%) was achieved with a hand rinse performed with 7X 1%. Washing hands...

  2. Gynecomastia: the horizontal ellipse method for its correction.

    Science.gov (United States)

    Gheita, Alaa

    2008-09-01

    Gynecomastia is an extremely disturbing deformity affecting males, especially when it occurs in young subjects. Such subjects generally have no hormonal anomalies, and thus either liposuction or surgical intervention, depending on the type and consistency of the breast, is required for treatment. If there is slight hypertrophy alone with no ptosis, then subcutaneous mastectomy is usually sufficient. However, when hypertrophy and/or ptosis are present, then corrective surgery on the skin and breast is mandatory to obtain a good cosmetic result. Most of the procedures suggested for reduction of the male breast are derived from reduction mammaplasty methods used for females. They have some disadvantages, mainly the multiple scars, which remain apparent in males, unusual shape, and the lack of symmetry with regard to the size of both breasts and/or the nipple position. The author presents a new, simple method that has proven superior to any previous method described so far. It consists of a horizontal excision ellipse of the breast's redundant skin and deep excess tissue, and a superior pedicle flap carrying the areola-nipple complex to its new site on the chest wall. The method described yields excellent shape, symmetry, and minimal scars. This new method for treating gynecomastia is described in detail, its early and late operative results are shown, and its advantages are discussed.

  3. Gallic Acid: Review of the Methods of Determination and Quantification.

    Science.gov (United States)

    Fernandes, Felipe Hugo Alencar; Salgado, Hérida Regina Nunes

    2016-05-03

    Gallic acid (3,4,5-trihydroxybenzoic acid) is a secondary metabolite present in most plants. This metabolite is known to exhibit a range of bioactivities, including antioxidant, antimicrobial, anti-inflammatory, and anticancer activities. There are various methods to analyze gallic acid, including spectrometry, chromatography, and capillary electrophoresis, among others. They have been developed to identify and quantify this active ingredient in most biological matrices. The aim of this article is to review the available information on analytical methods for gallic acid, as well as to present the advantages and limitations of each technique.

  4. Correlation Coefficients Between Different Methods of Expressing Bacterial Quantification Using Real Time PCR

    Directory of Open Access Journals (Sweden)

    Bahman Navidshad

    2012-02-01

    Full Text Available The applications of conventional culture-dependent assays to quantify bacterial populations are limited by their dependence on the inconsistent success of the different culture steps involved. In addition, some bacteria can be pathogenic or a source of endotoxins and pose a health risk to the researchers. Bacterial quantification based on the real-time PCR method can overcome the above-mentioned problems. However, quantification of bacteria using this approach is commonly expressed in absolute quantities even though the composition of samples (like those of digesta) can vary widely; thus, the final results may be affected if the samples are not properly homogenized, especially when multiple samples are to be pooled together before DNA extraction. The objective of this study was to determine the correlation coefficients between four different methods of expressing the output data of real-time PCR-based bacterial quantification. The four methods were: (i) the common absolute method, expressed as the cell number of specific bacteria per gram of digesta; (ii) the Livak and Schmittgen ΔΔCt method; (iii) the Pfaffl equation; and (iv) a simple relative method based on the ratio of the cell number of specific bacteria to the total bacterial cells. Because the total bacterial population affects the results obtained with the ΔCt-based methods (ΔΔCt and Pfaffl), these methods lack the consistency needed to serve as valid and reliable methods in real-time PCR-based bacterial quantification studies. On the other hand, because of the variable composition of digesta samples, a simple ratio of the cell number of specific bacteria to the corresponding total bacterial cells of the same sample can be a more accurate method of quantifying the population.
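
    The three relative expression schemes named in this record can be sketched in a few lines of Python. This is a minimal illustration of the standard formulas (Livak & Schmittgen 2^-ΔΔCt, the Pfaffl efficiency-corrected ratio, and a specific-to-total ratio); the example Ct values are invented, not taken from the study:

        def delta_delta_ct(ct_target_sample, ct_ref_sample, ct_target_ctrl, ct_ref_ctrl):
            """Livak & Schmittgen relative expression, 2^-ddCt (assumes ~100% efficiency)."""
            ddct = (ct_target_sample - ct_ref_sample) - (ct_target_ctrl - ct_ref_ctrl)
            return 2.0 ** (-ddct)

        def pfaffl_ratio(e_target, e_ref, dct_target, dct_ref):
            """Pfaffl efficiency-corrected ratio; e_* is the amplification base
            (2.0 at 100% efficiency), dct_* = Ct(control) - Ct(sample)."""
            return (e_target ** dct_target) / (e_ref ** dct_ref)

        def relative_abundance(copies_specific, copies_total):
            """Simple ratio of specific-bacteria copies to total-bacteria copies."""
            return copies_specific / copies_total

        # Invented Ct values, for illustration only:
        print(delta_delta_ct(24.0, 18.0, 26.5, 18.2))   # ~4.9-fold
        print(pfaffl_ratio(1.95, 2.0, 2.5, 0.2))        # ~4.6, efficiency-corrected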

  5. Validated RP-HPLC Method for Quantification of Phenolic ...

    African Journals Online (AJOL)

    Purpose: To evaluate the total phenolic content and antioxidant potential of the methanol extracts of aerial parts and roots of Thymus sipyleus Boiss and also to determine some phenolic compounds using a newly developed and validated reversed phase high performance liquid chromatography (RP-HPLC) method.

  6. Critical points of DNA quantification by real-time PCR – effects of DNA extraction method and sample matrix on quantification of genetically modified organisms

    Directory of Open Access Journals (Sweden)

    Žel Jana

    2006-08-01

    Full Text Available Abstract Background Real-time PCR is the technique of choice for nucleic acid quantification. In the field of detection of genetically modified organisms (GMOs quantification of biotech products may be required to fulfil legislative requirements. However, successful quantification depends crucially on the quality of the sample DNA analyzed. Methods for GMO detection are generally validated on certified reference materials that are in the form of powdered grain material, while detection in routine laboratories must be performed on a wide variety of sample matrixes. Due to food processing, the DNA in sample matrixes can be present in low amounts and also degraded. In addition, molecules of plant origin or from other sources that affect PCR amplification of samples will influence the reliability of the quantification. Further, the wide variety of sample matrixes presents a challenge for detection laboratories. The extraction method must ensure high yield and quality of the DNA obtained and must be carefully selected, since even components of DNA extraction solutions can influence PCR reactions. GMO quantification is based on a standard curve, therefore similarity of PCR efficiency for the sample and standard reference material is a prerequisite for exact quantification. Little information on the performance of real-time PCR on samples of different matrixes is available. Results Five commonly used DNA extraction techniques were compared and their suitability for quantitative analysis was assessed. The effect of sample matrix on nucleic acid quantification was assessed by comparing 4 maize and 4 soybean matrixes. In addition 205 maize and soybean samples from routine analysis were analyzed for PCR efficiency to assess variability of PCR performance within each sample matrix. Together with the amount of DNA needed for reliable quantification, PCR efficiency is the crucial parameter determining the reliability of quantitative results, therefore it was

  7. Critical points of DNA quantification by real-time PCR--effects of DNA extraction method and sample matrix on quantification of genetically modified organisms.

    Science.gov (United States)

    Cankar, Katarina; Stebih, Dejan; Dreo, Tanja; Zel, Jana; Gruden, Kristina

    2006-08-14

    Real-time PCR is the technique of choice for nucleic acid quantification. In the field of detection of genetically modified organisms (GMOs) quantification of biotech products may be required to fulfil legislative requirements. However, successful quantification depends crucially on the quality of the sample DNA analyzed. Methods for GMO detection are generally validated on certified reference materials that are in the form of powdered grain material, while detection in routine laboratories must be performed on a wide variety of sample matrixes. Due to food processing, the DNA in sample matrixes can be present in low amounts and also degraded. In addition, molecules of plant origin or from other sources that affect PCR amplification of samples will influence the reliability of the quantification. Further, the wide variety of sample matrixes presents a challenge for detection laboratories. The extraction method must ensure high yield and quality of the DNA obtained and must be carefully selected, since even components of DNA extraction solutions can influence PCR reactions. GMO quantification is based on a standard curve, therefore similarity of PCR efficiency for the sample and standard reference material is a prerequisite for exact quantification. Little information on the performance of real-time PCR on samples of different matrixes is available. Five commonly used DNA extraction techniques were compared and their suitability for quantitative analysis was assessed. The effect of sample matrix on nucleic acid quantification was assessed by comparing 4 maize and 4 soybean matrixes. In addition 205 maize and soybean samples from routine analysis were analyzed for PCR efficiency to assess variability of PCR performance within each sample matrix. Together with the amount of DNA needed for reliable quantification, PCR efficiency is the crucial parameter determining the reliability of quantitative results, therefore it was chosen as the primary criterion by which to

  8. Critical points of DNA quantification by real-time PCR – effects of DNA extraction method and sample matrix on quantification of genetically modified organisms

    Science.gov (United States)

    Cankar, Katarina; Štebih, Dejan; Dreo, Tanja; Žel, Jana; Gruden, Kristina

    2006-01-01

    Background Real-time PCR is the technique of choice for nucleic acid quantification. In the field of detection of genetically modified organisms (GMOs) quantification of biotech products may be required to fulfil legislative requirements. However, successful quantification depends crucially on the quality of the sample DNA analyzed. Methods for GMO detection are generally validated on certified reference materials that are in the form of powdered grain material, while detection in routine laboratories must be performed on a wide variety of sample matrixes. Due to food processing, the DNA in sample matrixes can be present in low amounts and also degraded. In addition, molecules of plant origin or from other sources that affect PCR amplification of samples will influence the reliability of the quantification. Further, the wide variety of sample matrixes presents a challenge for detection laboratories. The extraction method must ensure high yield and quality of the DNA obtained and must be carefully selected, since even components of DNA extraction solutions can influence PCR reactions. GMO quantification is based on a standard curve, therefore similarity of PCR efficiency for the sample and standard reference material is a prerequisite for exact quantification. Little information on the performance of real-time PCR on samples of different matrixes is available. Results Five commonly used DNA extraction techniques were compared and their suitability for quantitative analysis was assessed. The effect of sample matrix on nucleic acid quantification was assessed by comparing 4 maize and 4 soybean matrixes. In addition 205 maize and soybean samples from routine analysis were analyzed for PCR efficiency to assess variability of PCR performance within each sample matrix. Together with the amount of DNA needed for reliable quantification, PCR efficiency is the crucial parameter determining the reliability of quantitative results, therefore it was chosen as the primary
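
    All three versions of this record hinge on standard-curve quantification and PCR efficiency, which can be sketched briefly: fit Ct against log10(copy number) for a dilution series, derive the efficiency from the slope, and read unknowns off the curve. The dilution series below is hypothetical:

        import numpy as np

        def fit_standard_curve(log10_copies, ct_values):
            """Fit Ct = slope * log10(copies) + intercept from a dilution series."""
            slope, intercept = np.polyfit(log10_copies, ct_values, 1)
            efficiency = 10.0 ** (-1.0 / slope) - 1.0   # 1.0 corresponds to 100%
            return slope, intercept, efficiency

        def quantify(ct_sample, slope, intercept):
            """Interpolate an unknown sample's copy number from its Ct."""
            return 10.0 ** ((ct_sample - intercept) / slope)

        # Hypothetical dilution series: 10^5 .. 10^1 copies
        logs = np.array([5, 4, 3, 2, 1], dtype=float)
        cts = np.array([18.1, 21.5, 24.9, 28.3, 31.7])
        slope, intercept, eff = fit_standard_curve(logs, cts)
        print(f"slope={slope:.2f}, efficiency={eff:.1%}, "
              f"sample copies={quantify(26.0, slope, intercept):.0f}")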

  9. Direct infusion-SIM as fast and robust method for absolute protein quantification in complex samples

    Directory of Open Access Journals (Sweden)

    Christina Looße

    2015-06-01

    Full Text Available Relative and absolute quantification of proteins in biological and clinical samples are common approaches in proteomics. Until now, targeted protein quantification has mainly been performed using a combination of HPLC-based peptide separation and selected reaction monitoring on triple quadrupole mass spectrometers. Here, we show for the first time the potential of absolute quantification using a direct infusion strategy combined with single ion monitoring (SIM) on a Q Exactive mass spectrometer. Using complex membrane fractions of Escherichia coli, we absolutely quantified the recombinantly expressed heterologous human cytochrome P450 monooxygenase 3A4 (CYP3A4), comparing direct infusion-SIM with conventional HPLC-SIM. Direct infusion-SIM showed only 14.7% (±4.1, s.e.m.) deviation on average compared to HPLC-SIM, and a processing and analysis time of 4.5 min (which could be further decreased to 30 s) for a single sample, in contrast to 65 min for the LC–MS method. In summary, our simplified workflow using direct infusion-SIM provides a fast and robust method for the quantification of proteins in complex protein mixtures.

  10. Non-linear methods for the quantification of cyclic motion

    OpenAIRE

    Quintana Duque, Juan Carlos

    2016-01-01

    Traditional methods of human motion analysis assume that fluctuations in cycles (e.g. gait motion) and repetitions (e.g. tennis shots) arise solely from noise. However, the fluctuations may have enough information to describe the properties of motion. Recently, the fluctuations in motion have been analysed based on the concepts of variability and stability, but they are not used uniformly. On the one hand, these concepts are often mixed in the existing literature, while on the other hand, the...

  11. A PRACTICAL METHOD FOR QUANTIFICATION OF PLEURAL EFFUSION BY USG

    OpenAIRE

    Swish Kumar; Dinesh Kumar; Suganita; Singh; Vijay Shankar; Rajeev; Ajay; Anjali

    2016-01-01

    OBJECTIVE The aim of this study is to find a correlation between pleural separation and the amount of aspirated effusion. METHODS A total of 20 adult patients with 25 effusions were included in the study, each with a chest X-ray showing homogeneous opacity in one or both lung fields, confirmed on USG. Only uncomplicated pleural effusions were included; effusions with septations, encysted effusions and pyothorax were excluded from the study. RESULTS...

  12. Rationalization of thermal injury quantification methods: application to skin burns.

    Science.gov (United States)

    Viglianti, Benjamin L; Dewhirst, Mark W; Abraham, John P; Gorman, John M; Sparrow, Eph M

    2014-08-01

    Classification of thermal injury is typically accomplished either through the use of an equivalent dosimetry method (equivalent minutes at 43 °C, CEM43 °C) or through a thermal-injury-damage metric (the Arrhenius method). For lower-temperature levels, the equivalent dosimetry approach is typically employed while higher-temperature applications are most often categorized by injury-damage calculations. The two methods derive from common thermodynamic/physical chemistry origins. To facilitate the development of the interrelationships between the two metrics, application is made to the case of skin burns. This thermal insult has been quantified by numerical simulation, and the extracted time-temperature results served for the evaluation of the respective characterizations. The simulations were performed for skin-surface exposure temperatures ranging from 60 to 90 °C, where each surface temperature was held constant for durations extending from 10 to 110 s. It was demonstrated that values of CEM43 at the basal layer of the skin were highly correlated with the depth of injury calculated from a thermal injury integral. Local values of CEM43 were connected to the local cell survival rate, and a correlating equation was developed relating CEM43 with the decrease in cell survival from 90% to 10%. Finally, it was shown that the cell survival/CEM43 relationship for the cases investigated here most closely aligns with isothermal exposure of tissue to temperatures of ~50 °C. Copyright © 2013 Elsevier Ltd and ISBI. All rights reserved.
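
    Both metrics this record interrelates follow directly from a time-temperature history. Below is a short sketch of the standard Sapareto-Dewey cumulative-equivalent-minutes formula, CEM43 = Σ t_i · R^(43 − T_i) with R = 0.5 at or above 43 °C and 0.25 below; the temperature trace is invented:

        import numpy as np

        def cem43(temps_c, dt_s):
            """Cumulative equivalent minutes at 43 deg C (Sapareto-Dewey).
            temps_c: sampled tissue temperatures (deg C); dt_s: sample spacing (s)."""
            temps = np.asarray(temps_c, dtype=float)
            r = np.where(temps >= 43.0, 0.5, 0.25)
            return np.sum((dt_s / 60.0) * r ** (43.0 - temps))

        # 60 s at a constant 50 deg C, e.g. at the basal layer:
        print(cem43([50.0] * 60, 1.0))   # 0.5**(-7) = 128 equivalent minutes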

  13. Alternative method for quantification of alfa-amylase activity.

    Science.gov (United States)

    Farias, D F; Carvalho, A F U; Oliveira, C C; Sousa, N M; Rocha-Bezerrra, L C B; Ferreira, P M P; Lima, G P G; Hissa, D C

    2010-05-01

    A modification of the sensitive agar diffusion method was developed for macro-scale determination of alfa-amylase. The proposed modifications lower costs with the utilisation of starch as substrate and agar as supporting medium. Thus, a standard curve was built using alfa-amylase solution from Aspergillus oryzae, with concentrations ranging from 2.4 to 7,500 U.mL-1. Clear radial diffusion zones were measured after 4 hours of incubation at 20 °C. A linear relationship between the logarithm of enzyme activities and the area of clear zones was obtained. The method was validated by testing alpha-amylase from barley at the concentrations of 2.4; 60; 300 and 1,500 U.mL-1. The proposed method turned out to be simpler, faster, less expensive and able to determine on a macro-scale alpha-amylase over a wide range (2.4 to 7,500 U.mL-1) in scientific investigation as well as in teaching laboratory activities.

  14. Alternative method for quantification of alfa-amylase activity

    Directory of Open Access Journals (Sweden)

    DF. Farias

    Full Text Available A modification of the sensitive agar diffusion method was developed for macro-scale determination of alfa-amylase. The proposed modifications lower costs with the utilisation of starch as substrate and agar as supporting medium. Thus, a standard curve was built using alfa-amylase solution from Aspergillus oryzae, with concentrations ranging from 2.4 to 7,500 U.mL-1. Clear radial diffusion zones were measured after 4 hours of incubation at 20 °C. A linear relationship between the logarithm of enzyme activities and the area of clear zones was obtained. The method was validated by testing α-amylase from barley at the concentrations of 2.4; 60; 300 and 1,500 U.mL-1. The proposed method turned out to be simpler, faster, less expensive and able to determine on a macro-scale α-amylase over a wide range (2.4 to 7,500 U.mL-1 in scientific investigation as well as in teaching laboratory activities.
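
    The log-linear calibration both versions of this record describe (clear-zone area versus the logarithm of activity) takes only a polynomial fit to implement. The calibration points below are made up for illustration; only the 2.4-7,500 U/mL range mirrors the record:

        import numpy as np

        # Hypothetical calibration: activities (U/mL) vs. measured clear-zone areas (mm^2)
        activities = np.array([2.4, 60, 300, 1500, 7500], dtype=float)
        areas = np.array([35.0, 82.0, 110.0, 138.0, 165.0])

        slope, intercept = np.polyfit(np.log10(activities), areas, 1)

        def activity_from_area(area_mm2):
            """Invert the log-linear calibration to estimate enzyme activity (U/mL)."""
            return 10.0 ** ((area_mm2 - intercept) / slope)

        print(f"{activity_from_area(120.0):.0f} U/mL")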

  15. Biogeosystem Technique as a method to correct the climate

    Science.gov (United States)

    Kalinitchenko, Valery; Batukaev, Abdulmalik; Batukaev, Magomed; Minkina, Tatiana

    2017-04-01

    can be produced; the less energy is consumed for climate correction, the better. The proposed algorithm has not been discussed before because most of its ingredients were previously unworkable. The possibility of executing the algorithm now exists within the framework of our new scientific-technical branch, Biogeosystem Technique (BGT*). BGT* is a transcendental (non-imitating natural processes) approach to soil processing and to the regulation of energy, matter and water fluxes and the biological productivity of the biosphere: intra-soil machining to provide a new, highly productive dispersed soil system; intra-soil pulse continuous-discrete watering of plants to reduce the transpiration rate and the water consumption of plants by a factor of 5-20; and intra-soil, environmentally safe return of matter during intra-soil milling processing and (or) intra-soil pulse continuous-discrete watering with nutrition. The following become possible: waste management; reduced flow of nutrients to water systems; transformation of carbon and other organic and mineral substances in the soil into plant nutrition elements; less degradation of biological matter to greenhouse gases; increased biological sequestration of carbon dioxide by photosynthesis in terrestrial systems; oxidation of methane and hydrogen sulfide by fresh, photosynthesis-ionized, biologically active oxygen; and expansion of the active terrestrial site of the biosphere. A high biological product output of the biosphere will be gained. BGT* robotic systems are of low cost and low energy and material consumption. By BGT* methods the uncertainties of climate and biosphere will be reduced. Key words: Biogeosystem Technique, method to correct, climate

  16. Diagnostics and correction of disregulation states by physical methods

    OpenAIRE

    Gorsha, O. V.; Gorsha, V. I.

    2017-01-01

    Nicolaus Copernicus University, Toruń, Poland; Ukrainian Research Institute for Medicine of Transport, Odesa, Ukraine. Gorsha, O. V., & Gorsha, V. I. Diagnostics and correction of disregulation states by physical methods. Toruń, Odesa, 2017.

  17. A simple and fast method for extraction and quantification of cryptophyte phycoerythrin

    OpenAIRE

    Thoisen, Christina; Hansen, Benni Winding; Nielsen, Søren Laurentius

    2017-01-01

    The microalgal pigment phycoerythrin (PE) is of commercial interest as a natural colorant in food and cosmetics, as well as a fluoroprobe for laboratory analysis. Several methods for extraction and quantification of PE are available, but they typically comprise various extraction buffers, repetitive freeze-thaw cycles and liquid nitrogen, making extraction procedures more complicated. A simple method for extraction of PE from cryptophytes is described using standard laboratory materials and equipment...

  18. Evaluation of a scattering correction method for high energy tomography

    Science.gov (United States)

    Tisseur, David; Bhatia, Navnina; Estre, Nicolas; Berge, Léonie; Eck, Daniel; Payan, Emmanuel

    2018-01-01

    One of the main drawbacks of Cone Beam Computed Tomography (CBCT) is the contribution of photons scattered by the object and the detector. Scattered photons are deflected from their original path after their interaction with the object. This additional contribution of scattered photons results in increased measured intensities, since the scattered intensity simply adds to the transmitted intensity. The effect is an overestimation of the measured intensity, and thus an underestimation of absorption, which produces artifacts such as cupping, shading and streaks in the reconstructed images. Moreover, the scattered radiation biases quantitative tomographic reconstruction (for example, atomic number and density measurement with the dual-energy technique). The effect can be significant, and difficult to handle, in the MeV energy range with large objects, owing to the higher Scatter to Primary Ratio (SPR). Additionally, incident high-energy photons scattered by the Compton effect are more forward directed and hence more likely to reach the detector, and in the MeV range the contribution of photons produced by pair production and the Bremsstrahlung process also becomes important. We propose an evaluation of a scattering correction technique based on the method named Scatter Kernel Superposition (SKS). The algorithm uses a continuously thickness-adapted kernels method: analytical parameterizations of the scatter kernels are derived in terms of material thickness, to form continuously thickness-adapted kernel maps used to correct the projections. This approach has proved efficient in producing better sampling of the kernels with respect to the object thickness. The technique offers applicability over a wide range of imaging conditions and gives users an additional advantage. Moreover, since no extra hardware is required by this approach, it forms a major advantage, especially in those cases where
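
    A rough sketch of the kernel-superposition idea: bin the projection by estimated object thickness, convolve each bin with its own scatter kernel, sum the contributions, and subtract iteratively. The Gaussian kernels and the per-bin (amplitude, width) values below are placeholders, not the parameterization from the paper:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def sks_correct(projection, thickness_map, n_iter=5):
            """Schematic scatter-kernel-superposition correction.
            projection: float 2-D transmission image; thickness_map: estimated
            object thickness (cm) per pixel. Kernel parameters are hypothetical."""
            primary = projection.copy()
            for _ in range(n_iter):
                scatter = np.zeros_like(projection)
                # thickness bins, each with its own kernel amplitude and width
                for t_lo, t_hi, amp, sigma in [(0, 5, 0.05, 8),
                                               (5, 15, 0.12, 12),
                                               (15, 50, 0.20, 18)]:
                    mask = (thickness_map >= t_lo) & (thickness_map < t_hi)
                    scatter += amp * gaussian_filter(primary * mask, sigma)
                # refine the primary estimate by removing the scatter estimate
                primary = np.clip(projection - scatter, 0, None)
            return primary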

  19. Standardless quantification approach of TXRF analysis using fundamental parameter method

    International Nuclear Information System (INIS)

    Szaloki, I.; Taniguchi, K.

    2000-01-01

    A new standardless evaluation procedure based on the fundamental parameter method (FPM) has been developed for TXRF analysis. The theoretical calculation describes the relationship between the characteristic intensities and the geometrical parameters of the excitation and detection system and the specimen parameters: size, thickness, angle of the excitation beam to the surface, and the optical properties of the specimen holder. Most TXRF methods apply empirical calibration, which requires the application of a special preparation technique. However, the characteristic lines of the specimen holder (Si Kα,β) carry information on the local excitation and geometrical conditions at the substrate surface. On the basis of the theoretically calculated substrate characteristic intensity, the excitation beam flux can be approximated. Taking into consideration the elements present in the specimen material, a system of non-linear equations can be written involving the unknown concentration values and the geometrical and detection parameters. To solve this mathematical problem, PASCAL software was written that calculates the sample composition and the average sample thickness by a gradient algorithm. This quantitative estimation of the specimen composition therefore requires neither an external nor an internal standard sample. To verify the theoretical calculation and the numerical procedure, several experiments were carried out using mixed standard solutions containing the elements K, Sc, V, Mn, Co and Cu in the 0.1 - 10 ppm concentration range. (author)

  20. Quantification of emissions from knapsack sprayers: 'the weight method

    Science.gov (United States)

    Garcia-Santos, Glenda; Binder, Claudia R.

    2010-05-01

    Misuse of pesticides kills or seriously sickens thousands of people every year and poisons the natural environment. Investigations of occupational and environmental risk have received considerable interest over the last decades. Yet a lack of staff and analytical equipment, as well as the cost of chemical analyses, makes it difficult, if not impossible, to monitor pesticide contamination and residues in humans, air, water and soils in developing countries. To assess the emissions of pesticides (transport and deposition) during spray application, and the associated risk for human health and the environment, tracers can be useful tools. Uranine was used to quantify airborne drift and subsequent deposition on the neighbouring field and on the clothes of the applicator after spraying with a knapsack sprayer in one of the biggest potato production areas in Colombia. Keeping the same setup, the amount of wet drift was measured from the difference in weight of highly absorbent papers used to collect the tracer. Surprisingly, this weight method (Weight-HAP) was able to explain 71% of the drift variance measured with the tracer. The weight method is therefore presented as a suitable, rapid, low-cost screening tool, complementary to toxicological tests, to assess air pollution and the occupational and environmental exposure generated by emissions from knapsack sprayers during pesticide application. This technique might be important in places where there is a lack of analytical instruments.

  1. LC-MS-based quantification method for Achyranthes root saponins.

    Science.gov (United States)

    Kawahara, Yuki; Hoshino, Tatsuro; Morimoto, Hidetaka; Shinizu, Tomofumi; Narukawa, Yuji; Fuchino, Hiroyuki; Kawahara, Nobuo; Kiuchi, Fumiyuki

    2016-01-01

    A liquid chromatography mass spectrometry (LC-MS) method was developed for simultaneous quantitative analysis of Achyranthes root saponins: chikusetsusaponins IVa (1) and V (2), achyranthosides B (3), C (4), D (5), E (6), and G (7), sulfachyranthosides B (8) and D (9), and betavulgarosides II (10) and IV (11). Satisfactory separation of the saponins was achieved with the use of a volatile ion-pair reagent (dihexyl ammonium acetate) on a phenyl-hexylated silica gel column, and the amounts of saponins extracted under three different conditions were determined. When Achyranthes root was extracted with water at room temperature, achyranthosides B (3) and D (5) were the major saponins, smaller amounts of the other saponins (4, 6-11) were present, and the amounts of chikusetsusaponins (1 and 2) were negligible. Under the conditions used to prepare a standard decoction of a Kampo formula, the major saponins were achyranthosides B (3), C (4), and D (5), and small amounts of chikusetsusaponins IVa (1) and V (2) appeared, whereas prolonged heating largely increased the amounts of chikusetsusaponins. This method can be used for quality control of Achyranthes root.

  2. [Doppler echocardiography of tricuspid insufficiency. Methods of quantification].

    Science.gov (United States)

    Loubeyre, C; Tribouilloy, C; Adam, M C; Mirode, A; Trojette, F; Lesbre, J P

    1994-01-01

    Evaluation of tricuspid incompetence has benefited considerably from the development of Doppler ultrasound. In addition to direct analysis of the valves, which provides information about the mechanism involved, this method can provide an accurate evaluation, mainly through the Doppler mode. Besides new criteria still under evaluation (mainly the convergence zone of the regurgitant jet), some indices are recognised as good quantitative parameters: extension of the regurgitant jet into the right atrium, anterograde tricuspid flow, the laminar nature of the regurgitant flow, and analysis of the flow in the supra-hepatic veins. The evaluation nevertheless remains only semi-quantitative, since calculation of the regurgitation fraction from pulsed Doppler does not seem to be reliable. An accurate semi-quantitative evaluation is made possible by careful and consistent use of all the available criteria. The authors discuss the value of the various evaluation criteria mentioned in the literature and try to define a practical approach.

  3. Validation of a spectrophotometric method for quantification of carboxyhemoglobin.

    Science.gov (United States)

    Luchini, Paulo D; Leyton, Jaime F; Strombech, Maria de Lourdes C; Ponce, Julio C; Jesus, Maria das Graças S; Leyton, Vilma

    2009-10-01

    The measurement of carboxyhemoglobin (COHb) levels in blood is a valuable procedure to confirm exposure to carbon monoxide (CO), whether for forensic or occupational purposes. A previously described method using spectrophotometric readings at 420 and 432 nm after reduction of oxyhemoglobin (O(2)Hb) and methemoglobin with sodium hydrosulfite solution leads to an exponential curve. This curve, used with pre-established factors, serves well for low concentrations (1-7%) or for high concentrations (> 20%), but very rarely for both. The authors have observed that small variations in the previously described factors F1, F2, and F3, obtained from readings for 100% COHb and 100% O(2)Hb, translate into significant changes in COHb% results, and they propose that these factors should be determined every time COHb is measured, by reading CO- and O(2)-saturated samples. This practice leads to an increase in accuracy and precision.
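
    The daily re-calibration the authors propose amounts to reading CO- and O2-saturated samples alongside each unknown. A minimal sketch of that calibration logic, linear in the A420/A432 ratio, is shown below; note that the published curve is exponential and uses three factors (F1-F3), so this is an illustration of the idea, not the paper's formula, and all numbers are invented:

        def cohb_percent(a420, a432, r0, r100):
            """Estimate %COHb from absorbances at 420/432 nm after dithionite reduction.
            r0, r100: A420/A432 ratios measured the same day on 0% and 100% COHb
            samples (the daily re-calibration recommended in the record).
            Linear sketch only; the published relationship is exponential."""
            r = a420 / a432
            return 100.0 * (r - r0) / (r100 - r0)

        # Example with made-up absorbances and calibration ratios:
        print(f"{cohb_percent(1.12, 1.00, r0=0.90, r100=1.50):.1f} %COHb")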

  4. Analysis of an automated background correction method for cardiovascular MR phase contrast imaging in children and young adults

    Energy Technology Data Exchange (ETDEWEB)

    Rigsby, Cynthia K.; Hilpipre, Nicholas; Boylan, Emma E.; Popescu, Andrada R.; Deng, Jie [Ann and Robert H. Lurie Children's Hospital of Chicago, Department of Medical Imaging, Chicago, IL (United States); McNeal, Gary R. [Siemens Medical Solutions USA Inc., Customer Solutions Group, Cardiovascular MR R and D, Chicago, IL (United States); Zhang, Gang [Ann and Robert H. Lurie Children's Hospital of Chicago Research Center, Biostatistics Research Core, Chicago, IL (United States); Choi, Grace [Ann and Robert H. Lurie Children's Hospital of Chicago, Department of Pediatrics, Chicago, IL (United States); Greiser, Andreas [Siemens AG Healthcare Sector, Erlangen (Germany)

    2014-03-15

    Phase contrast magnetic resonance imaging (MRI) is a powerful tool for evaluating vessel blood flow. Inherent errors in acquisition, such as phase offset, eddy currents and gradient field effects, can cause significant inaccuracies in flow parameters. These errors can be rectified with the use of background correction software. To evaluate the performance of an automated phase contrast MRI background phase correction method in children and young adults undergoing cardiac MR imaging. We conducted a retrospective review of patients undergoing routine clinical cardiac MRI including phase contrast MRI for flow quantification in the aorta (Ao) and main pulmonary artery (MPA). When phase contrast MRI of the right and left pulmonary arteries was also performed, these data were included. We excluded patients with known shunts and metallic implants causing visible MRI artifact and those with more than mild to moderate aortic or pulmonary stenosis. Phase contrast MRI of the Ao, mid MPA, proximal right pulmonary artery (RPA) and left pulmonary artery (LPA) using 2-D gradient echo Fast Low Angle SHot (FLASH) imaging was acquired during normal respiration with retrospective cardiac gating. Standard phase image reconstruction and the automatic spatially dependent background-phase-corrected reconstruction were performed on each phase contrast MRI dataset. Non-background-corrected and background-phase-corrected net flow, forward flow, regurgitant volume, regurgitant fraction, and vessel cardiac output were recorded for each vessel. We compared standard non-background-corrected and background-phase-corrected mean flow values for the Ao and MPA. The ratio of pulmonary to systemic blood flow (Qp:Qs) was calculated for the standard non-background and background-phase-corrected data and these values were compared to each other and for proximity to 1. In a subset of patients who also underwent phase contrast MRI of the MPA, RPA, and LPA a comparison was made between standard non-background-corrected
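
    One common offline stand-in for the kind of automated background-phase correction evaluated here is to fit a low-order polynomial surface to the velocity of static tissue and subtract it from the whole field. The plane fit below is a generic sketch, not the vendor's spatially dependent algorithm:

        import numpy as np

        def background_phase_correct(velocity, static_mask):
            """Fit a plane to velocities in static-tissue pixels and subtract it
            everywhere. velocity: 2-D velocity (phase) map; static_mask: boolean
            mask of pixels assumed stationary."""
            ny, nx = velocity.shape
            y, x = np.mgrid[0:ny, 0:nx]
            # design matrix [1, x, y] restricted to static pixels
            A = np.column_stack([np.ones(static_mask.sum()),
                                 x[static_mask], y[static_mask]])
            coef, *_ = np.linalg.lstsq(A, velocity[static_mask], rcond=None)
            background = coef[0] + coef[1] * x + coef[2] * y
            return velocity - background

        # Net flow then follows as sum(corrected velocity * pixel area) over the
        # vessel ROI, integrated across the cardiac cycle.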

  5. Powder X-ray diffraction method for the quantification of cocrystals in the crystallization mixture.

    Science.gov (United States)

    Padrela, Luis; de Azevedo, Edmundo Gomes; Velaga, Sitaram P

    2012-08-01

    The solid state purity of cocrystals critically affects their performance. Thus, it is important to accurately quantify the purity of cocrystals in the final crystallization product. The aim of this study was to develop a powder X-ray diffraction (PXRD) quantification method for investigating the purity of cocrystals. The method developed was employed to study the formation of indomethacin-saccharin (IND-SAC) cocrystals by mechanochemical methods. Pure IND-SAC cocrystals were geometrically mixed with a 1:1 w/w mixture of indomethacin/saccharin in various proportions. An accurately measured amount (550 mg) of the mixture was used for the PXRD measurements. The most intense, non-overlapping, characteristic diffraction peak of IND-SAC was used to construct the calibration curve in the range 0-100% (w/w). This calibration model was validated and used to monitor the formation of IND-SAC cocrystals by liquid-assisted grinding (LAG). The IND-SAC cocrystal calibration curve showed excellent linearity (R² = 0.9996) over the entire concentration range, displaying limit of detection (LOD) and limit of quantification (LOQ) values of 1.23% (w/w) and 3.74% (w/w), respectively. Validation results showed excellent correlations between actual and predicted concentrations of IND-SAC cocrystals (R² = 0.9981). The accuracy and reliability of the PXRD quantification method depend on the methods of sample preparation and handling. The crystallinity of the IND-SAC cocrystals was higher when larger amounts of methanol were used in the LAG method. The PXRD quantification method is suitable and reliable for verifying the purity of cocrystals in the final crystallization product.
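
    The calibration-curve statistics reported here (linearity, LOD, LOQ) follow directly from an ordinary least-squares fit of peak intensity against cocrystal content, with LOD and LOQ taken as 3.3 and 10 times the residual standard deviation over the slope. The intensities below are fabricated placeholders, not the study's data:

        import numpy as np

        # Hypothetical calibration: cocrystal content (% w/w) vs. peak intensity (a.u.)
        content = np.array([0, 20, 40, 60, 80, 100], dtype=float)
        intensity = np.array([12, 410, 805, 1220, 1610, 2020], dtype=float)

        slope, intercept = np.polyfit(content, intensity, 1)
        residuals = intensity - (slope * content + intercept)
        sigma = residuals.std(ddof=2)      # residual standard deviation (n - 2 dof)

        lod = 3.3 * sigma / slope          # ICH-style limit of detection, % w/w
        loq = 10.0 * sigma / slope         # limit of quantification, % w/w

        def purity_from_intensity(i):
            """Invert the calibration line to estimate cocrystal content."""
            return (i - intercept) / slope

        print(f"LOD={lod:.2f}%  LOQ={loq:.2f}%  "
              f"sample={purity_from_intensity(1500.0):.1f}% w/w")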

  6. Correction of the lack of commutability between plasmid DNA and genomic DNA for quantification of genetically modified organisms using pBSTopas as a model.

    Science.gov (United States)

    Zhang, Li; Wu, Yuhua; Wu, Gang; Cao, Yinglong; Lu, Changming

    2014-10-01

    Plasmid calibrators are increasingly applied for polymerase chain reaction (PCR) analysis of genetically modified organisms (GMOs). To evaluate the commutability between plasmid DNA (pDNA) and genomic DNA (gDNA) as calibrators, a plasmid molecule, pBSTopas, was constructed, harboring a Topas 19/2 event-specific sequence and a partial sequence of the rapeseed reference gene CruA. Assays of the pDNA showed similar limits of detection (five copies for Topas 19/2 and CruA) and quantification (40 copies for Topas 19/2 and 20 for CruA) as those for the gDNA. Comparisons of plasmid and genomic standard curves indicated that the slopes, intercepts, and PCR efficiency for pBSTopas were significantly different from CRM Topas 19/2 gDNA for quantitative analysis of GMOs. Three correction methods were used to calibrate the quantitative analysis of control samples using pDNA as calibrators: model a, or coefficient value a (Cva); model b, or coefficient value b (Cvb); and the novel model c or coefficient formula (Cf). Cva and Cvb gave similar estimated values for the control samples, and the quantitative bias of the low concentration sample exceeded the acceptable range within ±25% in two of the four repeats. Using Cfs to normalize the Ct values of test samples, the estimated values were very close to the reference values (bias -13.27 to 13.05%). In the validation of control samples, model c was more appropriate than Cva or Cvb. The application of Cf allowed pBSTopas to substitute for Topas 19/2 gDNA as a calibrator to accurately quantify the GMO.

  7. A simple and fast method for extraction and quantification of cryptophyte phycoerythrin.

    Science.gov (United States)

    Thoisen, Christina; Hansen, Benni Winding; Nielsen, Søren Laurentius

    2017-01-01

    The microalgal pigment phycoerythrin (PE) is of commercial interest as a natural colorant in food and cosmetics, as well as a fluoroprobe for laboratory analysis. Several methods for extraction and quantification of PE are available, but they typically comprise various extraction buffers, repetitive freeze-thaw cycles and liquid nitrogen, making extraction procedures more complicated. A simple method for extraction of PE from cryptophytes is described using standard laboratory materials and equipment. The cryptophyte cells on the filters were disrupted at −80 °C, and phosphate buffer was added for extraction at 4 °C, followed by absorbance measurement. The cryptophyte Rhodomonas salina was used as a model organism. •Simple method for extraction and quantification of phycoerythrin from cryptophytes. •Minimal usage of equipment and chemicals, and low labor costs. •Applicable for industrial and biological purposes.

  8. A simple and fast method for extraction and quantification of cryptophyte phycoerythrin

    DEFF Research Database (Denmark)

    Thoisen, Christina Vinum; Hansen, Benni Winding; Nielsen, Søren Laurentius

    2017-01-01

    The microalgal pigment phycoerythrin (PE) is of commercial interest as a natural colorant in food and cosmetics, as well as a fluoroprobe for laboratory analysis. Several methods for extraction and quantification of PE are available, but they typically comprise various extraction buffers, repetitive freeze-thaw cycles and liquid nitrogen, making extraction procedures more complicated. A simple method for extraction of PE from cryptophytes is described using standard laboratory materials and equipment. Filters with the cryptophyte were frozen (−80 °C), and phosphate buffer was added for extraction at 4 °C, followed by absorbance measurement. The cryptophyte Rhodomonas salina was used as a model organism. •Simple method for extraction and quantification of phycoerythrin from cryptophytes. •Minimal usage of equipment and chemicals, and low labor costs. •Applicable for industrial and biological purposes.

  9. Quantification of cellular viability by the MTT method

    International Nuclear Information System (INIS)

    Altanes, M.

    2001-01-01

    In recent years, scientists have taken up the task of finding new biomaterials whose biocompatibility with the human body allows them to replace parts of the organism, such as bones and joints, with little rejection. In this work, the possible cytotoxic effect of an aqueous extract of a biomaterial of polymeric origin (acrylamide and methacrylic acid), obtained by gamma irradiation techniques, was evaluated and quantified on Vero cells by the MTT method, a quick, simple and economical colorimetric technique; the extract consisted of culture medium 199 and calf serum. To benchmark the response of the test material, negative (high molecular weight polyethylene) and positive (streptomycin sulphate) controls were used, as the established standards indicate, and each gave the expected result. On optical microscopy, after 24 and 48 hours of contact between the cells and the extract of the test material, no signs of damage or cell lysis, nor morphological changes in cell structure, were observed that would indicate a possible cytotoxic effect of this biomaterial; qualitative analysis alone was therefore not sufficient to determine the cytotoxic effect of the test sample.

  10. Mathematical methods for quantification and comparison of dissolution testing data.

    Science.gov (United States)

    Vranić, Edina; Mehmedagić, Aida; Hadzović, Sabira

    2002-12-01

    In recent years, drug release/dissolution from solid dosage forms has been the subject of intense and profitable scientific development. Whenever a new solid dosage form is developed or produced, it is necessary to ensure that drug dissolution occurs in an appropriate manner. The pharmaceutical industry and the registration authorities nowadays focus on drug dissolution studies. The quantitative analysis of the values obtained in dissolution/release tests is easier when mathematical formulas that express the dissolution results as a function of some of the dosage form characteristics are used. This work discusses the analysis of data obtained for dissolution profiles under different media pH conditions using the mathematical methods of analysis described by Moore and Flanner. These authors described the difference factor (f1) and the similarity factor (f2), which can be used to characterise drug dissolution/release profiles. In this work we have used these formulas for the evaluation of dissolution profiles of conventional tablets at different pH values of the dissolution medium (within the range of physiological variation).
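
    The Moore and Flanner factors referred to here have simple closed forms: f1 = 100·Σ|R_t − T_t|/ΣR_t and f2 = 50·log10(100/√(1 + (1/n)·Σ(R_t − T_t)²)). A small sketch with invented dissolution profiles:

        import numpy as np

        def f1_f2(ref, test):
            """Moore & Flanner difference (f1) and similarity (f2) factors.
            ref, test: % dissolved at the same time points; f2 >= 50 is commonly
            read as 'similar' profiles in regulatory guidance."""
            r, t = np.asarray(ref, float), np.asarray(test, float)
            f1 = 100.0 * np.abs(r - t).sum() / r.sum()
            f2 = 50.0 * np.log10(100.0 / np.sqrt(1.0 + ((r - t) ** 2).mean()))
            return f1, f2

        # Invented reference and test profiles (% dissolved):
        print(f1_f2([15, 40, 70, 90], [12, 35, 66, 88]))   # f1 ~ 6.5, f2 ~ 71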

  11. Quantification of immobilized Candida antarctica lipase B (CALB) using ICP-AES combined with Bradford method.

    Science.gov (United States)

    Nicolás, Paula; Lassalle, Verónica L; Ferreira, María L

    2017-02-01

    The aim of this manuscript was to study the application of a new method of protein quantification to commercial Candida antarctica lipase B solutions. Error sources associated with the traditional Bradford technique were demonstrated. Eight biocatalysts based on C. antarctica lipase B (CALB) immobilized onto magnetite nanoparticles were used. Magnetite nanoparticles were coated with chitosan (CHIT) and modified with glutaraldehyde (GLUT) and aminopropyltriethoxysilane (APTS); CALB was then adsorbed on the modified support. The proposed novel protein quantification method included the determination of sulfur (from the protein in CALB solution) by means of Atomic Emission by Inductive Coupling Plasma (AE-ICP). Four different protocols were applied, combining AE-ICP and classical Bradford assays with carbon, hydrogen and nitrogen (CHN) analysis. The calculated error in protein content using the "classic" Bradford method with bovine serum albumin as standard ranged from 400 to 1200% when protein in CALB solution was quantified. These errors were calculated taking as "true protein content values" the amount of immobilized protein obtained with the improved method. The optimum quantification procedure involved the combination of the Bradford method, ICP and CHN analysis. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Quantification of total phosphorothioate in bacterial DNA by a bromobimane-based fluorescent method.

    Science.gov (United States)

    Xiao, Lu; Xiang, Yu

    2016-06-01

    The discovery of phosphorothioate (PT) modifications in bacterial DNA has challenged our understanding of the conserved phosphodiester backbone structure of cellular DNA. This modification, exclusive to bacterial DNA, has not yet been found in animal cells, and its biological function in bacteria is still poorly understood. Quantitative information about bacterial PT modifications is thus important for the investigation of their possible biological functions. In this study, we have developed a simple fluorescence method for selective quantification of total PTs in bacterial DNA, based on fluorescent labeling of the PTs and subsequent release of the labeled fluorophores for absolute quantification. The method was highly selective for PTs and not subject to interference from reactive small molecules or proteins. The quantification of PTs in an E. coli DNA sample was successfully achieved using our method and gave a result of about 455 PTs per million DNA nucleotides, while almost no detectable PTs were found in a mammalian calf thymus DNA. With this new method, the phosphorothioate content of bacterial DNA can be quantified routinely, making it suitable for biological studies of phosphorothioate modifications. Copyright © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. A new method for x-ray scatter correction: first assessment on a cone-beam CT experimental setup

    International Nuclear Information System (INIS)

    Rinkel, J; Gerfault, L; Esteve, F; Dinten, J-M

    2007-01-01

    Cone-beam computed tomography (CBCT) enables three-dimensional imaging with isotropic resolution and a shorter acquisition time compared to a helical CT scanner. Because a larger object volume is exposed for each projection, scatter levels are much higher than in collimated fan-beam systems, resulting in cupping artifacts, streaks and quantification inaccuracies. In this paper, a general method to correct for scatter in CBCT, without supplementary on-line acquisition, is presented. This method is based on scatter calibration through off-line acquisition combined with on-line analytical transformation based on physical equations, to adapt calibration to the object observed. The method was tested on a PMMA phantom and on an anthropomorphic thorax phantom. The results were validated by comparison to simulation for the PMMA phantom and by comparison to scans obtained on a commercial multi-slice CT scanner for the thorax phantom. Finally, the improvements achieved with the new method were compared to those obtained using a standard beam-stop method. The new method provided results that closely agreed with the simulation and with the conventional CT scanner, eliminating cupping artifacts and significantly improving quantification. Compared to the beam-stop method, lower x-ray doses and shorter acquisition times were needed, both divided by a factor of 9 for the same scatter estimation accuracy

  14. Effect of background correction on peak detection and quantification in online comprehensive two-dimensional liquid chromatography using diode array detection.

    Science.gov (United States)

    Allen, Robert C; John, Mallory G; Rutan, Sarah C; Filgueira, Marcelo R; Carr, Peter W

    2012-09-07

    A singular value decomposition-based background correction (SVD-BC) technique is proposed for the reduction of background contributions in online comprehensive two-dimensional liquid chromatography (LC×LC) data. The SVD-BC technique was compared to simply subtracting a blank chromatogram from a sample chromatogram, and to a previously reported background correction technique for one-dimensional chromatography that uses an asymmetric weighted least squares (AWLS) approach. AWLS was the only background correction technique to completely remove the background artifacts from the samples, as evaluated by visual inspection. However, the SVD-BC technique also greatly reduced or eliminated the background artifacts, and it preserved peak intensity better than AWLS. The loss in peak intensity with AWLS resulted in lower peak counts at the detection thresholds established using standard samples. However, the SVD-BC technique was found to introduce noise, which led to the detection of false peaks at the lower detection thresholds. As a result, the AWLS technique gave more precise peak counts than the SVD-BC technique, particularly at the lower detection thresholds. While the AWLS technique resulted in more consistent percent residual standard deviation values, a statistical improvement in peak quantification after background correction was not found regardless of the background correction technique used. Copyright © 2012 Elsevier B.V. All rights reserved.
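
    The core of an SVD-style background correction can be sketched generically: take the dominant singular vectors of a blank data matrix as a background subspace and project them out of the sample matrix. This illustrates the idea only, not the exact published SVD-BC algorithm:

        import numpy as np

        def svd_background_correct(sample, blank, k=3):
            """Remove background by projecting out the first k right singular
            vectors of a blank LCxLC-DAD matrix (rows: 2nd-dimension time points,
            cols: wavelengths). Generic sketch under stated assumptions."""
            u, s, vt = np.linalg.svd(blank, full_matrices=False)
            basis = vt[:k]                           # dominant background spectra
            background = sample @ basis.T @ basis    # projection onto background
            return sample - background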

  15. Comparison of quantification methods for the analysis of polychlorinated alkanes using electron capture negative ionization mass spectrometry.

    NARCIS (Netherlands)

    Rusina, T.; Korytar, P.; de Boer, J.

    2011-01-01

    Four quantification methods for short-chain chlorinated paraffins (SCCPs) or polychlorinated alkanes (PCAs) using gas chromatography electron capture negative ionisation low resolution mass spectrometry (GC-ECNI-LRMS) were investigated. The method based on visual comparison of congener group

  16. Comparison of quantification methods for the analysis of polychlorinated alkanes using electron capture negative ionisation mass spectrometry

    NARCIS (Netherlands)

    Rusina, T.; Korytar, P.; Boer, de J.

    2011-01-01

    Four quantification methods for short-chain chlorinated paraffins (SCCPs) or polychlorinated alkanes (PCAs) using gas chromatography electron capture negative ionisation low resolution mass spectrometry (GC-ECNI-LRMS) were investigated. The method based on visual comparison of congener group

  17. Effect of inter-crystal scatter on estimation methods for random coincidences and subsequent correction

    International Nuclear Information System (INIS)

    Torres-Espallardo, I; Spanoudaki, V; Ziegler, S I; Rafecas, M; McElroy, D P

    2008-01-01

    Random coincidences can contribute substantially to the background in positron emission tomography (PET), and several estimation methods are used to correct for them. The goal of this study was to investigate the validity of techniques for random coincidence estimation with various low-energy thresholds (LETs). Simulated singles list-mode data of the MADPET-II small animal PET scanner were used as input. The simulations were performed using the GATE simulation toolkit, with several sources of different geometries. We evaluated the number of random events using three methods: delayed window (DW), singles rate (SR) and time histogram fitting (TH). Since the GATE simulations allow random and true coincidences to be distinguished, the number of random coincidences estimated using the standard methods was compared with the number obtained using GATE. An overestimation of the number of random events was observed using the DW and SR methods. This overestimation decreases for LETs higher than 255 keV, and it is additionally reduced when single events that have undergone a Compton interaction in the crystals before being detected are removed from the data. These two observations lead us to infer that the overestimation is due to inter-crystal scatter. The effect of this mismatch on the reconstructed images is important for quantification, because it leads to an underestimation of activity. This was shown using a hot-cold-background source with 3.7 MBq total activity in the background region and 1.59 MBq total activity in the hot region. For both 200 keV and 400 keV LET, an overestimation of random coincidences for the DW and SR methods was observed, resulting in approximately 1.5% or more (at 200 keV LET: 1.7% for DW and 7% for SR) and less than 1% (at 400 keV LET: both methods) underestimation of activity within the background region. In almost all cases, images obtained by compensating for random events in the reconstruction
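
    Of the three estimators compared, the singles-rate method has the simplest closed form: the expected randoms rate on a detector pair is R_ij = 2·τ·r_i·r_j for coincidence window τ. The sketch below uses invented rates; it computes exactly the quantity the record says is biased upward by inter-crystal scatter:

        def randoms_singles_rate(rate_i, rate_j, tau_ns):
            """Singles-rate (SR) estimate of random coincidences on a detector
            pair (i, j): R_ij = 2 * tau * r_i * r_j, with tau the coincidence
            window and r_i, r_j the singles rates (counts/s)."""
            tau_s = tau_ns * 1e-9
            return 2.0 * tau_s * rate_i * rate_j

        # e.g. two crystals at 50 kcps each, 10 ns coincidence window:
        print(f"{randoms_singles_rate(5e4, 5e4, 10.0):.1f} randoms/s")  # 50.0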

  18. A new method for evaluation and correction of thermal reactor power and present operational applications

    International Nuclear Information System (INIS)

    Langenstein, M.; Streit, S.; Laipple, B.; Eitschberger, H.

    2005-01-01

    The thermal reactor power is traditionally determined by heat balance: 1) for a boiling water reactor (BWR), at the interface between the reactor control volume and the heat cycle; 2) for a pressurised-water reactor (PWR), at the interface between the steam generator control volume and the turbine island on the secondary side. The uncertainty of these traditional methods is not easy to determine and can be in the range of several percent. Technical and legal regulations (e.g. 10CFR50) cover an estimated instrumentation error of up to 2% by increasing the design thermal reactor power for emergency analysis to 102% of the licensed thermal reactor power. Fundamentally, the licensee has the duty to warrant, at any time, operation inside the analyzed region for thermal reactor power. This is normally done by keeping the indicated reactor power at the licensed 100% value. A better way is to use a method that allows continuous evaluation of this warranty. Quantifying the level of fulfilment of this warranty is only achievable by a method which: 1) is independent of single measurement accuracies; 2) results in a certified quality of single process values and of the total heat cycle analysis; and 3) leads to complete results, including the 2-sigma deviation, especially for thermal reactor power. Here such a method, called 'process data reconciliation based on the VDI 2048 guideline', is presented [1, 2]. This method allows the true process parameters to be determined with a statistical probability of 95%, by considering closed material, mass and energy balances following the Gaussian correction principle. The amount of redundant process information and the complexity of the process improve the final results, which represent the most probable state of the process with minimized uncertainty according to VDI 2048. Hence, calibration and control of the thermal reactor power are possible with low effort but high accuracy, independent of single measurement accuracies. Furthermore, VDI 2048
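
    For linearized balance constraints, the Gaussian correction principle behind VDI 2048-style reconciliation reduces to a constrained weighted least-squares update with a closed form. The toy mass balance below (one constraint, three redundant measurements) is invented for illustration:

        import numpy as np

        def reconcile(x_meas, cov, A, b=None):
            """Minimize (x - x_meas)^T cov^-1 (x - x_meas) subject to A x = b.
            Returns the reconciled values and their reduced covariance."""
            b = np.zeros(A.shape[0]) if b is None else b
            S = A @ cov @ A.T
            K = cov @ A.T @ np.linalg.inv(S)       # gain matrix
            x_hat = x_meas - K @ (A @ x_meas - b)  # corrected, balance-consistent
            cov_hat = cov - K @ A @ cov            # uncertainty shrinks
            return x_hat, cov_hat

        # Toy balance: feed - steam - blowdown = 0, three redundant measurements
        x = np.array([100.5, 98.0, 1.0])           # measured flows, kg/s
        cov = np.diag([1.0, 1.0, 0.01])            # measurement variances
        A = np.array([[1.0, -1.0, -1.0]])
        print(reconcile(x, cov, A)[0])             # ~[99.75, 98.75, 1.01]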

  19. QUANTIFICATION AND BIOREMEDIATION OF ENVIRONMENTAL SAMPLES BY DEVELOPING A NOVEL AND EFFICIENT METHOD

    Directory of Open Access Journals (Sweden)

    Mohammad Osama

    2014-06-01

    Full Text Available Pleurotus ostreatus, a white rot fungus, is capable of bioremediating a wide range of organic contaminants, including Polycyclic Aromatic Hydrocarbons (PAHs). Ergosterol is produced by living fungal biomass and is used as a measure of fungal biomass. The first part of this work deals with the extraction and quantification of PAHs from contaminated sediments by the Lipid Extraction Method (LEM). The second part consists of the development of a novel extraction method, the Ergosterol Extraction Method (EEM), and its use for quantification and bioremediation. The novelty of this method is the simultaneous extraction and quantification of two different types of compounds, a sterol (ergosterol) and PAHs, and it is more efficient than LEM. EEM successfully extracted ergosterol from the fungus grown on barley at concentrations of 17.5-39.94 µg g-1, and quantified many more PAHs, in both number and amount, than LEM. In addition, cholesterol, usually found in animals, was also detected in the fungus P. ostreatus at easily detectable levels.

  20. A method for the 3-D quantification of bridging ligaments during crack propagation

    International Nuclear Information System (INIS)

    Babout, L.; Janaszewski, M.; Marrow, T.J.; Withers, P.J.

    2011-01-01

    This letter shows how a hole-closing algorithm can be used to identify and quantify crack-bridging ligaments from a sequence of X-ray tomography images of intergranular stress corrosion cracking. This allows automatic quantification of the evolution of bridging ligaments through the crack propagation sequence providing fracture mechanics insight previously unobtainable from fractography. The method may also be applied to other three-dimensional materials science problems, such as closing walls in foams.
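
    The letter's hole-closing algorithm is specific to the publication, but the underlying idea, that unbroken ligaments appear as enclosed cavities in the segmented crack, can be sketched with standard morphology as below. The helper name and the toy 2-D example are illustrative assumptions, not the authors' implementation.

        import numpy as np
        from scipy import ndimage

        def bridging_ligaments(crack):
            # Fill enclosed cavities in the binary crack mask; voxels added by
            # the filling that are not crack are candidate bridging ligaments.
            filled = ndimage.binary_fill_holes(crack)
            ligaments = filled & ~crack
            return ndimage.label(ligaments)   # labelled ligaments and their count

        # 2-D toy: a ring-shaped "crack" enclosing one intact ligament pixel
        crack = np.zeros((5, 5), bool)
        crack[1:4, 1:4] = True
        crack[2, 2] = False                   # unbroken material inside the crack
        labels, n = bridging_ligaments(crack)
        print(n)                              # -> 1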

  1. Quantification of organic acids in beer by nuclear magnetic resonance (NMR)-based methods

    Energy Technology Data Exchange (ETDEWEB)

    Rodrigues, J.E.A. [CICECO-Department of Chemistry, University of Aveiro, Campus de Santiago, 3810-193 Aveiro (Portugal); Erny, G.L. [CESAM - Department of Chemistry, University of Aveiro, Campus de Santiago, 3810-193 Aveiro (Portugal); Barros, A.S. [QOPNAA-Department of Chemistry, University of Aveiro, Campus de Santiago, 3810-193 Aveiro (Portugal); Esteves, V.I. [CESAM - Department of Chemistry, University of Aveiro, Campus de Santiago, 3810-193 Aveiro (Portugal); Brandao, T.; Ferreira, A.A. [UNICER, Bebidas de Portugal, Leca do Balio, 4466-955 S. Mamede de Infesta (Portugal); Cabrita, E. [Department of Chemistry, New University of Lisbon, 2825-114 Caparica (Portugal); Gil, A.M., E-mail: agil@ua.pt [CICECO-Department of Chemistry, University of Aveiro, Campus de Santiago, 3810-193 Aveiro (Portugal)

    2010-08-03

    The organic acids present in beer provide important information on the product's quality and history, determining organoleptic properties and being useful indicators of fermentation performance. NMR spectroscopy may be used for rapid quantification of organic acids in beer, and different NMR-based methodologies are compared here for the six main acids found in beer (acetic, citric, lactic, malic, pyruvic and succinic). The use of partial least squares (PLS) regression enables faster quantification, compared to traditional integration methods, and the performance of PLS models built using different reference methods (capillary electrophoresis (CE), both with direct and indirect UV detection, and enzymatic assays) was investigated. The best multivariate models were obtained using CE/indirect detection and enzymatic assays as reference, and their response was compared with NMR integration, either using an internal reference or an electrical reference signal (Electronic REference To access In vivo Concentrations, ERETIC). NMR integration results generally agree with those obtained by PLS, with some overestimation for malic and pyruvic acids, probably due to peak overlap and subsequent integral errors, and an apparent relative underestimation for citric acid. Overall, these results make the PLS-NMR method an interesting choice for organic acid quantification in beer.
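
    As an illustration of the PLS step described above, the sketch below fits a PLS model to spectra against reference concentrations and reports a cross-validated error. The spectra and concentrations are random placeholders and the number of latent variables is an assumption; in practice both the reference values (CE or enzymatic assays) and the component count would come from the calibration study.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        X = rng.normal(size=(30, 500))    # placeholder NMR spectra (samples x points)
        y = rng.normal(loc=5.0, size=30)  # placeholder reference concentrations

        pls = PLSRegression(n_components=5)              # component count: assumed
        y_cv = cross_val_predict(pls, X, y, cv=5).ravel()
        print("RMSECV:", np.sqrt(np.mean((y - y_cv) ** 2)))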

  2. Quantification of organic acids in beer by nuclear magnetic resonance (NMR)-based methods

    International Nuclear Information System (INIS)

    Rodrigues, J.E.A.; Erny, G.L.; Barros, A.S.; Esteves, V.I.; Brandao, T.; Ferreira, A.A.; Cabrita, E.; Gil, A.M.

    2010-01-01

    The organic acids present in beer provide important information on the product's quality and history, determining organoleptic properties and being useful indicators of fermentation performance. NMR spectroscopy may be used for rapid quantification of organic acids in beer, and different NMR-based methodologies are compared here for the six main acids found in beer (acetic, citric, lactic, malic, pyruvic and succinic). The use of partial least squares (PLS) regression enables faster quantification, compared to traditional integration methods, and the performance of PLS models built using different reference methods (capillary electrophoresis (CE), both with direct and indirect UV detection, and enzymatic assays) was investigated. The best multivariate models were obtained using CE/indirect detection and enzymatic assays as reference, and their response was compared with NMR integration, either using an internal reference or an electrical reference signal (Electronic REference To access In vivo Concentrations, ERETIC). NMR integration results generally agree with those obtained by PLS, with some overestimation for malic and pyruvic acids, probably due to peak overlap and subsequent integral errors, and an apparent relative underestimation for citric acid. Overall, these results make the PLS-NMR method an interesting choice for organic acid quantification in beer.

  3. Method and system of doppler correction for mobile communications systems

    Science.gov (United States)

    Georghiades, Costas N. (Inventor); Spasojevic, Predrag (Inventor)

    1999-01-01

    Doppler correction system and method comprising receiving a Doppler effected signal comprising a preamble signal (32). A delayed preamble signal (48) may be generated based on the preamble signal (32). The preamble signal (32) may be multiplied by the delayed preamble signal (48) to generate an in-phase preamble signal (60). The in-phase preamble signal (60) may be filtered to generate a substantially constant in-phase preamble signal (62). A plurality of samples of the substantially constant in-phase preamble signal (62) may be accumulated. A phase-shifted signal (76) may also be generated based on the preamble signal (32). The phase-shifted signal (76) may be multiplied by the delayed preamble signal (48) to generate an out-of-phase preamble signal (80). The out-of-phase preamble signal (80) may be filtered to generate a substantially constant out-of-phase preamble signal (82). A plurality of samples of the substantially constant out-of-phase signal (82) may be accumulated. A sum of the in-phase preamble samples and a sum of the out-of-phase preamble samples may be normalized relative to each other to generate an in-phase Doppler estimator (92) and an out-of-phase Doppler estimator (94).
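
    The delay-and-multiply structure of the patented estimator (in-phase and quadrature products with a delayed preamble, filtering, accumulation, normalization) can be sketched at baseband as follows. This is not the patented circuit: the analytic-signal shortcut via a Hilbert transform, the tone preamble, and all numerical parameters are assumptions for illustration.

        import numpy as np
        from scipy.signal import hilbert

        fs = 8000.0                       # sampling rate (Hz), assumed
        f0 = 500.0                        # nominal preamble tone (Hz), assumed
        delay = 16                        # delay-line length in samples, assumed

        t = np.arange(2048) / fs
        rx = np.cos(2 * np.pi * (f0 + 37.0) * t)        # tone with +37 Hz "Doppler"

        z = hilbert(rx) * np.exp(-2j * np.pi * f0 * t)  # analytic signal at baseband
        prod = z[delay:] * np.conj(z[:-delay])          # multiply by delayed copy
        i_acc = prod.real.sum()                         # accumulated in-phase samples
        q_acc = prod.imag.sum()                         # accumulated quadrature samples

        phase = np.arctan2(q_acc, i_acc)                # phase advance over 'delay'
        print(phase * fs / (2 * np.pi * delay))         # Doppler estimate, ~37 Hz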

  4. Comparison of manual and automated quantification methods of 123I-ADAM

    International Nuclear Information System (INIS)

    Kauppinen, T.; Keski-Rahkonen, A.; Sihvola, E.; Helsinki Univ. Central Hospital

    2005-01-01

    123 I-ADAM is a novel radioligand for imaging of the brain serotonin transporters (SERTs). Traditionally, the analysis of brain receptor studies has been based on observer-dependent manual region of interest definitions and visual interpretation. Our aim was to create a template for automated image registration and volume of interest (VOI) quantification, and to show that an automated quantification method for 123 I-ADAM is more repeatable than the manual method. Patients, methods: A template and a predefined VOI map were created from 123 I-ADAM scans done for healthy volunteers (n=15). Scans of another group of healthy persons (HS, n=12) and patients with bulimia nervosa (BN, n=10) were automatically fitted to the template and specific binding ratios (SBRs) were calculated by using the VOI map. Manual VOI definitions were done for the HS and BN groups by both one and two observers. The repeatability of the automated method was evaluated by using the BN group. Results: For the manual method, the interobserver coefficient of repeatability was 0.61 for the HS group and 1.00 for the BN group. The intra-observer coefficient of repeatability for the BN group was 0.70. For the automated method, the coefficient of repeatability was 0.13 for SBRs in midbrain. Conclusion: An automated quantification gives valuable information in addition to visual interpretation, while also decreasing the total image-handling time and giving clear advantages for research work. An automated method for analysing 123 I-ADAM binding to the brain SERT gives repeatable results for fitting the studies to the template and for calculating SBRs, and could therefore replace manual methods. (orig.)
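
    The abstract does not spell out how the specific binding ratio is computed; the conventional definition, assumed in the sketch below, relates target uptake to a reference region with negligible specific binding.

        def specific_binding_ratio(target_mean, ref_mean):
            # Conventional SBR: (target - reference) / reference, with the
            # reference region assumed free of specific SERT binding.
            return (target_mean - ref_mean) / ref_mean

        print(specific_binding_ratio(12.4, 6.2))   # illustrative counts -> 1.0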

  5. Comparison of manual and automated quantification methods of {sup 123}I-ADAM

    Energy Technology Data Exchange (ETDEWEB)

    Kauppinen, T. [Helsinki Univ. Central Hospital (Finland). HUS Helsinki Medical Imaging Center; Helsinki Univ. Central Hospital (Finland). Division of Nuclear Medicine; Koskela, A.; Ahonen, A. [Helsinki Univ. Central Hospital (Finland). Division of Nuclear Medicine; Diemling, M. [Hermes Medical Solutions, Stockholm (Sweden); Keski-Rahkonen, A.; Sihvola, E. [Helsinki Univ. (Finland). Dept. of Public Health; Helsinki Univ. Central Hospital (Finland). Dept. of Psychiatry

    2005-07-01

    {sup 123}I-ADAM is a novel radioligand for imaging of the brain serotonin transporters (SERTs). Traditionally, the analysis of brain receptor studies has been based on observer-dependent manual region of interest definitions and visual interpretation. Our aim was to create a template for automated image registration and volume of interest (VOI) quantification, and to show that an automated quantification method for {sup 123}I-ADAM is more repeatable than the manual method. Patients, methods: A template and a predefined VOI map were created from {sup 123}I-ADAM scans done for healthy volunteers (n=15). Scans of another group of healthy persons (HS, n=12) and patients with bulimia nervosa (BN, n=10) were automatically fitted to the template and specific binding ratios (SBRs) were calculated by using the VOI map. Manual VOI definitions were done for the HS and BN groups by both one and two observers. The repeatability of the automated method was evaluated by using the BN group. Results: For the manual method, the interobserver coefficient of repeatability was 0.61 for the HS group and 1.00 for the BN group. The intra-observer coefficient of repeatability for the BN group was 0.70. For the automated method, the coefficient of repeatability was 0.13 for SBRs in midbrain. Conclusion: An automated quantification gives valuable information in addition to visual interpretation, while also decreasing the total image-handling time and giving clear advantages for research work. An automated method for analysing {sup 123}I-ADAM binding to the brain SERT gives repeatable results for fitting the studies to the template and for calculating SBRs, and could therefore replace manual methods. (orig.)

  6. Quantification of massively parallel sequencing libraries - a comparative study of eight methods

    DEFF Research Database (Denmark)

    Hussing, Christian; Kampmann, Marie-Louise; Mogensen, Helle Smidt

    2018-01-01

    Quantification of massively parallel sequencing libraries is important for acquisition of monoclonal beads or clusters prior to clonal amplification and to avoid large variations in library coverage when multiple samples are included in one sequencing analysis. No gold standard for quantification...... estimates followed by Qubit and electrophoresis-based instruments (Bioanalyzer, TapeStation, GX Touch, and Fragment Analyzer), while SYBR Green and TaqMan based qPCR assays gave the lowest estimates. qPCR gave more accurate predictions of sequencing coverage than Qubit and TapeStation did. Costs, time......-consumption, workflow simplicity, and ability to quantify multiple samples are discussed. Technical specifications, advantages, and disadvantages of the various methods are pointed out....

  7. Evaluation of a method for correction of scatter radiation in thorax cone beam CT; Evaluation d'une methode de correction du rayonnement diffuse en tomographie du thorax avec faisceau conique

    Energy Technology Data Exchange (ETDEWEB)

    Rinkel, J.; Dinten, J.M. [CEA Grenoble (DTBS/STD), Lab. d' Electronique et de Technologie de l' Informatique, LETI, 38 (France); Esteve, F. [European Synchrotron Radiation Facility (ESRF), 38 - Grenoble (France)

    2004-07-01

    Purpose: Cone beam CT (CBCT) enables three-dimensional imaging with isotropic resolution. X-ray scatter estimation is a major challenge for quantitative CBCT imaging of the thorax: the scatter level is significantly higher on cone beam systems than on collimated fan beam systems. The effects of this scattered radiation are cupping artefacts, streaks, and quantification inaccuracies. The conventional beam-stop approach to scatter estimation can be used for CBCT but leads to a significant increase in dose and acquisition time. An original scatter management process requiring no supplementary acquisition has been developed at CEA-LETI. Methods and Materials: This Analytical Plus Indexing-based (API) method of scatter correction in CBCT is based on scatter calibration through offline acquisitions with beam stops on lucite plates, combined with an analytical transformation derived from physical equations. This approach has been applied with success in bone densitometry and mammography. To evaluate the method in CBCT, acquisitions from a thorax phantom with and without beam stops were performed. To compare the different scatter correction approaches, the Feldkamp algorithm was applied to raw data corrected for scatter by the API and beam-stop approaches. Results: The API method provides results in good agreement with the beam-stop array approach, suppressing the cupping artefact. The influence of the scatter correction method on noise in the reconstructed images was also evaluated. Conclusion: The results indicate that the API method is effective for quantitative CBCT imaging of the thorax. Compared to a beam-stop array method, it needs a lower X-ray dose and shortens acquisition time. (authors)

  8. Quantification of endogenous metabolites by the postcolumn infused-internal standard method combined with matrix normalization factor in liquid chromatography-electrospray ionization tandem mass spectrometry.

    Science.gov (United States)

    Liao, Hsiao-Wei; Chen, Guan-Yuan; Wu, Ming-Shiang; Liao, Wei-Chih; Tsai, I-Lin; Kuo, Ching-Hua

    2015-01-02

    Quantification of endogenous metabolites has enabled the discovery of biomarkers for diagnosis and provided for an understanding of disease etiology. The standard addition and stable isotope labeled-internal standard (SIL-IS) methods are currently the most widely used approaches to quantifying endogenous metabolites, but both have some limitations for clinical measurement. In this study, we developed a new approach for endogenous metabolite quantification by the postcolumn infused-internal standard (PCI-IS) method combined with the matrix normalization factor (MNF) method. The MNF was used to correct for the difference in matrix effects between standard solution and biofluids, and the PCI-IS additionally tailored the correction of the matrix effects for individual samples. Androstenedione and testosterone were selected as test articles to verify this new approach to quantifying metabolites in plasma. The repeatability (n=4 runs) and intermediate precision (n=3 days) in terms of the peak area of androstenedione and testosterone at all tested concentrations were all less than 11% relative standard deviation (RSD). The accuracy test revealed that the recoveries were between 95.72% and 113.46%. The concentrations of androstenedione and testosterone in fifty plasma samples obtained from healthy volunteers were quantified by the PCI-IS combined with the MNF method, and the quantification results were compared with the results of the SIL-IS method. The Pearson correlation test showed that the correlation coefficient was 0.98 for both androstenedione and testosterone. We demonstrated that the PCI-IS combined with the MNF method is an effective and accurate method for quantifying endogenous metabolites.

  9. Validation of an HPLC method for quantification of total quercetin in Calendula officinalis extracts

    International Nuclear Information System (INIS)

    Muñoz Muñoz, John Alexander; Morgan Machado, Jorge Enrique; Trujillo González, Mary

    2015-01-01

    Introduction: Calendula officinalis extracts are used as a natural raw material in a wide range of pharmaceutical and cosmetic preparations; however, there are no official methods for quality control of these extracts. Objective: to validate an HPLC-based analytical method for quantification of total quercetin in glycolic and hydroalcoholic extracts of Calendula officinalis. Methods: to quantify the total quercetin content in the matrices, it was necessary to hydrolyze the flavonoid glycosides under optimal conditions. The chromatographic separation was performed on a C-18 SiliaChrom 4.6x150 mm, 5 µm column, fitted with a SiliaChrom 5 µm C-18 4.6x10 mm precolumn, with UV detection at 370 nm. Gradient elution was performed with a mobile phase consisting of methanol (MeOH) and phosphoric acid (H 3 PO 4, 0.08% w/v). Quantification was performed by the external standard method against a quercetin reference standard. Results: selectivity studies against extract components and degradation products formed under acid/base hydrolysis, oxidation and light exposure showed no signals that interfere with quercetin quantification. It was statistically shown that the method is linear from 1.0 to 5.0 mg/mL. Intermediate precision, expressed as coefficients of variation, was 1.8% and 1.74%, and the recovery was 102.15% and 101.32%, for the glycolic and hydroalcoholic extracts, respectively. Conclusions: the suggested methodology meets the quality parameters required for quantifying total quercetin, which makes it a useful tool for quality control of C. officinalis extracts. (author)

  10. Validation of methods for the detection and quantification of engineered nanoparticles in food

    DEFF Research Database (Denmark)

    Linsinger, T.P.J.; Chaudhry, Q.; Dehalu, V.

    2013-01-01

    the methods apply equally well to particles of different suppliers. In trueness testing, information on whether the particle size distribution has changed during analysis is required. Results are largely expected to follow normal distributions due to the expected high number of particles. An approach...... approach for the validation of methods for detection and quantification of nanoparticles in food samples. It proposes validation of identity, selectivity, precision, working range, limit of detection and robustness, bearing in mind that each “result” must include information about the chemical identity...

  11. Gamma camera correction system and method for using the same

    International Nuclear Information System (INIS)

    Inbar, D.; Gafni, G.; Grimberg, E.; Bialick, K.; Koren, J.

    1986-01-01

    A gamma camera is described which consists of: (a) a detector head that includes photodetectors for producing output signals in response to radiation stimuli which are emitted by a radiation field and which interact with the detector head and produce an event; (b) signal processing circuitry responsive to the output signals of the photodetectors for producing a sum signal that is a measure of the total energy of the event; (c) an energy discriminator having a relatively wide window for comparison with the sum signal; (d) the signal processing circuitry including coordinate computation circuitry for operating on the output signals, and calculating an X,Y coordinate of an event when the sum signal lies within the window of the energy discriminator; (e) an energy correction table containing spatially dependent energy windows for producing a validation signal if the total energy of an event lies within the window associated with the X,Y coordinates of the event; (f) the signal processing circuitry including a dislocation correction table containing spatially dependent correction factors for converting the X,Y coordinates of an event to relocated coordinates in accordance with correction factors determined by the X,Y coordinates; (g) a digital memory for storing a map of the radiation field; and (h) means for recording an event at its relocated coordinates in the memory if the energy correction table produces a validation signal

  12. Effect of methods of myopia correction on visual acuity, contrast sensitivity, and depth of focus

    NARCIS (Netherlands)

    Nio, YK; Jansonius, NM; Wijdh, RHJ; Beekhuis, WH; Worst, JGF; Noorby, S; Kooijman, AC

    Purpose: To psychophysically measure spherical and irregular aberrations in patients with various types of myopia correction. Setting: Laboratory of Experimental Ophthalmology, University of Groningen, Groningen, The Netherlands. Methods: Three groups of patients with low myopia correction

  13. Peculiarities of application the method of autogenic training in the correction of eating behavior

    OpenAIRE

    Shebanova, Vitaliya

    2014-01-01

    The article presents the peculiarities of applying the method of autogenic training to the correction of eating disorders. It describes the stages of corrective work with maladaptive eating behavior. The author places emphasis on the rules for independently composing formula intentions.

  14. Improved LC-MS/MS method for the quantification of hepcidin-25 in clinical samples.

    Science.gov (United States)

    Abbas, Ioana M; Hoffmann, Holger; Montes-Bayón, María; Weller, Michael G

    2018-06-01

    Mass spectrometry-based methods play a crucial role in the quantification of the main iron metabolism regulator hepcidin by singling out the bioactive 25-residue peptide from the other naturally occurring N-truncated isoforms (hepcidin-20, -22, -24), which seem to be inactive in iron homeostasis. However, several difficulties arise in the MS analysis of hepcidin due to the "sticky" character of the peptide and the lack of suitable standards. Here, we propose the use of amino- and fluoro-silanized autosampler vials to reduce hepcidin interaction with laboratory glassware surfaces, after testing several types of vials for the preparation of stock solutions and serum samples for isotope dilution liquid chromatography-tandem mass spectrometry (ID-LC-MS/MS). Furthermore, we have investigated two sample preparation strategies and two chromatographic separation conditions with the aim of developing a LC-MS/MS method for the sensitive and reliable quantification of hepcidin-25 in serum samples. A chromatographic separation based on usual acidic mobile phases was compared with a novel approach involving the separation of hepcidin-25 with solvents at high pH containing 0.1% ammonia. Both methods were applied to clinical samples in an intra-laboratory comparison of two LC-MS/MS methods using the same hepcidin-25 calibrators, with good correlation of the results. Finally, we recommend a LC-MS/MS-based quantification method with a dynamic range of 0.5-40 μg/L for the assessment of hepcidin-25 in human serum that uses TFA-based mobile phases and silanized glass vials. Graphical abstract: structure of hepcidin-25 (Protein Data Bank, PDB ID 2KEF).

  15. [Application of N-isopropyl-p-[123I] iodoamphetamine quantification of regional cerebral blood flow using iterative reconstruction methods: selection of the optimal reconstruction method and optimization of the cutoff frequency of the preprocessing filter].

    Science.gov (United States)

    Asazu, Akira; Hayashi, Masuo; Arai, Mami; Kumai, Yoshiaki; Akagi, Hiroyuki; Okayama, Katsuyoshi; Narumi, Yoshifumi

    2013-05-01

    In cerebral blood flow tests using N-isopropyl-p-[123I]iodoamphetamine (123I-IMP), quantitative results of greater accuracy than possible using the autoradiography (ARG) method can be obtained with attenuation and scatter correction and image reconstruction by filtered back projection (FBP). However, the cutoff frequency of the preprocessing Butterworth filter affects the quantitative value; hence, we sought an optimal cutoff frequency, derived from the correlation between the FBP method and Xenon-enhanced computed tomography (XeCT)/cerebral blood flow (CBF). In this study, we reconstructed images using ordered subsets expectation maximization (OSEM), a method of successive approximation which has recently come into wide use, and also three-dimensional (3D)-OSEM, a method by which the resolution can be corrected with the addition of collimator broadening correction, to examine the effects on the regional cerebral blood flow (rCBF) quantitative value of changing the cutoff frequency, and to determine whether successive approximation is applicable to cerebral blood flow quantification. Our results showed that quantification of greater accuracy was obtained with reconstruction employing the 3D-OSEM method and using a cutoff frequency set near 0.75-0.85 cycles/cm, which is higher than the frequency used in image reconstruction by the ordinary FBP method.

  16. Application of N-isopropyl-p-[123I] iodoamphetamine quantification of regional cerebral blood flow using iterative reconstruction methods. Selection of the optimal reconstruction method and optimization of the cutoff frequency of the preprocessing filter

    International Nuclear Information System (INIS)

    Asazu, Akira; Hayashi, Masuo; Arai, Mami; Kumai, Yoshiaki; Akagi, Hiroyuki; Okayama, Katsuyoshi; Narumi, Yoshifumi

    2013-01-01

    In cerebral blood flow tests using N-isopropyl-p-[ 123 I]iodoamphetamine ( 123 I-IMP), quantitative results of greater accuracy than possible using the autoradiography (ARG) method can be obtained with attenuation and scatter correction and image reconstruction by filtered back projection (FBP). However, the cutoff frequency of the preprocessing Butterworth filter affects the quantitative value; hence, we sought an optimal cutoff frequency, derived from the correlation between the FBP method and Xenon-enhanced computed tomography (XeCT)/cerebral blood flow (CBF). In this study, we reconstructed images using ordered subsets expectation maximization (OSEM), a method of successive approximation which has recently come into wide use, and also three-dimensional (3D)-OSEM, a method by which the resolution can be corrected with the addition of collimator broadening correction, to examine the effects on the regional cerebral blood flow (rCBF) quantitative value of changing the cutoff frequency, and to determine whether successive approximation is applicable to cerebral blood flow quantification. Our results showed that quantification of greater accuracy was obtained with reconstruction employing the 3D-OSEM method and using a cutoff frequency set near 0.75-0.85 cycles/cm, which is higher than the frequency used in image reconstruction by the ordinary FBP method. (author)

  17. Methods and apparatus for environmental correction of thermal neutron logs

    International Nuclear Information System (INIS)

    Preeg, W.E.; Scott, H.D.

    1983-01-01

    An on-line environmentally-corrected measurement of the thermal neutron decay time (tau) of an earth formation traversed by a borehole is provided in a two-detector, pulsed neutron logging tool, by measuring tau at each detector and combining the two tau measurements in accordance with a previously established empirical relationship of the general form: tau = tau_F + A(tau_F + tau_N B) + C, where tau_F and tau_N are the tau measurements at the far-spaced and near-spaced detectors, respectively, A is a correction coefficient for borehole capture cross-section effects, B is a correction coefficient for neutron diffusion effects, and C is a constant related to parameters of the logging tool. Preferred numerical values of A, B and C are disclosed, together with a relationship for more accurately adapting the A term to specific borehole conditions. (author)

  18. Methods for Motion Correction Evaluation Using 18F-FDG Human Brain Scans on a High-Resolution PET Scanner

    DEFF Research Database (Denmark)

    Keller, Sune H.; Sibomana, Merence; Olesen, Oline Vinter

    2012-01-01

    Many authors have reported the importance of motion correction (MC) for PET. Patient motion during scanning disturbs kinetic analysis and degrades resolution. In addition, using misaligned transmission for attenuation and scatter correction may produce regional quantification bias in the reconstructed emission images. The purpose of this work was the development of quality control (QC) methods for MC procedures based on external motion tracking (EMT) for human scanning using an optical motion tracking system. Methods: Two scans with minor motion and 5 with major motion (as reported...... (automated image registration) software. The following 3 QC methods were used to evaluate the EMT and AIR MC: a method using the ratio between 2 regions of interest with gray matter voxels (GM) and white matter voxels (WM), called GM/WM; mutual information; and cross correlation. Results: The results...

  19. A Simple and Effective Isocratic HPLC Method for Fast Identification and Quantification of Surfactin

    International Nuclear Information System (INIS)

    Muhammad Qadri Effendy Mubarak; Abdul Rahman Hassan; Aidil Abdul Hamid; Sahaid Khalil; Mohd Hafez Mohd Isa

    2015-01-01

    The aim of this study was to establish a simple, accurate and reproducible method for the identification and quantification of surfactin using high-performance liquid chromatography (HPLC). Previously reported methods for the identification and quantification of surfactin were time-consuming and required a large quantity of mobile phase. The new method was achieved by application of a Chromolith® high performance RP-18 (100 x 4.6 mm, 5 μm) column as the stationary phase and optimization of the mobile phase ratio and flow rate. A mobile phase consisting of acetonitrile (ACN) and 3.8 mM trifluoroacetic acid (TFA) solution in an 80:20 ratio at a flow rate of 2.2 mL/min was found to be optimal. The total elution time of the surfactin peaks was four times shorter than with various methods previously reported in the literature. The method described here allowed fine separation of surfactin both in a standard sample (98% purity) and in fermentation broth. (author)

  20. High-performance liquid chromatographic quantification of rifampicin in human plasma: method for therapeutic drug monitoring

    International Nuclear Information System (INIS)

    Sameh, T.; Hanene, E.; Jebali, N.

    2013-01-01

    A high-performance liquid chromatography (HPLC) method has been developed that allows quantification of rifampicin in human plasma. The method is based on the precipitation of proteins in human plasma with methanol. Optimal assay conditions were found with a C18 column and a simple mobile phase consisting of 0.05 M dipotassium hydrogen phosphate buffer and acetonitrile (53/47, v/v) with 0.086% diethylamine, pH 4.46. The flow rate was 0.6 ml/min and the drug was monitored at 340 nm. Results from the HPLC analyses showed that the assay method is linear in the concentration range of 1-40 µg/ml (r2 > 0.99). The limit of quantification and limit of detection of rifampicin were 0.632 µg/ml and 0.208 µg/ml, respectively. Intraday and interday coefficients of variation and bias were below 10% for all samples, suggesting good precision and accuracy of the method. Recoveries were greater than 90% in a plasma sample volume of 100 µl. The method is being successfully applied to therapeutic drug monitoring of rifampicin in plasma samples of patients with tuberculosis and staphylococcal infections. (author)

  1. Evaluation of two autoinducer-2 quantification methods for application in marine environments

    KAUST Repository

    Wang, Tian-Nyu

    2018-02-11

    This study evaluated two methods, namely high performance liquid chromatography with fluorescence detection (HPLC-FLD) and the Vibrio harveyi BB170 bioassay, for autoinducer-2 (AI-2) quantification in marine samples. Using both methods, the study also investigated the stability of AI-2 under varying pH, temperature and media, and quantified AI-2 signals in marine samples. The HPLC-FLD method showed a higher level of reproducibility and precision compared to the V. harveyi BB170 bioassay. Alkaline pH (> 8) and high temperature (> 37°C) increased the instability of AI-2. The AI-2 concentrations in seawater were low, ca. 3.2-27.6 pmol l-1, whereas an 8-week-old marine biofilm grown on an 18.8 cm2 substratum accumulated ca. 0.207 nmol of AI-2. Both methods have pros and cons for AI-2 quantification in marine samples. Regardless, both methods reported a ubiquitous presence of AI-2 in both the planktonic and biomass fractions of seawater, as well as in marine biofilm. In this study, AI-2 signals were for the first time enumerated in marine samples, revealing the ubiquitous presence of AI-2 in this environment. The findings suggest a possible role of AI-2 in biofilm formation in the marine environment, and the contribution of AI-2 to biofilm-associated problems such as biofouling and biocorrosion.

  2. Methods for the physical characterization and quantification of extracellular vesicles in biological samples.

    Science.gov (United States)

    Rupert, Déborah L M; Claudio, Virginia; Lässer, Cecilia; Bally, Marta

    2017-01-01

    Our body fluids contain a multitude of cell-derived vesicles, secreted by most cell types, commonly referred to as extracellular vesicles. They have attracted considerable attention for their function as intercellular communication vehicles in a broad range of physiological processes and pathological conditions. Extracellular vesicles and especially the smallest type, exosomes, have also generated a lot of excitement in view of their potential as disease biomarkers or as carriers for drug delivery. In this context, state-of-the-art techniques capable of comprehensively characterizing vesicles in biological fluids are urgently needed. This review presents the arsenal of techniques available for quantification and characterization of physical properties of extracellular vesicles, summarizes their working principles, discusses their advantages and limitations and further illustrates their implementation in extracellular vesicle research. The small size and physicochemical heterogeneity of extracellular vesicles make their physical characterization and quantification an extremely challenging task. Currently, structure, size, buoyant density, optical properties and zeta potential have most commonly been studied. The concentration of vesicles in suspension can be expressed in terms of biomolecular or particle content depending on the method at hand. In addition, common quantification methods may either provide a direct quantitative measurement of vesicle concentration or solely allow for relative comparison between samples. The combination of complementary methods capable of detecting, characterizing and quantifying extracellular vesicles at a single particle level promises to provide new exciting insights into their modes of action and to reveal the existence of vesicle subpopulations fulfilling key biological tasks.

  3. Study on quantification method based on Monte Carlo sampling for multiunit probabilistic safety assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Kye Min [KHNP Central Research Institute, Daejeon (Korea, Republic of); Han, Sang Hoon; Park, Jin Hee; Lim, Ho Gon; Yang, Joon Yang [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Heo, Gyun Young [Kyung Hee University, Yongin (Korea, Republic of)

    2017-06-15

    In Korea, many nuclear power plants operate at a single site based on geographical characteristics, but the population density near the sites is higher than that in other countries. Thus, multiunit accidents are a more important consideration than in other countries and should be addressed appropriately. Currently, there are many issues related to a multiunit probabilistic safety assessment (PSA). One of them is the quantification of a multiunit PSA model. A traditional PSA uses a Boolean manipulation of the fault tree in terms of the minimal cut set. However, such methods have some limitations when rare event approximations cannot be used effectively or a very small truncation limit should be applied to identify accident sequence combinations for a multiunit site. In particular, it is well known that seismic risk in terms of core damage frequency can be overestimated because there are many events that have a high failure probability. In this study, we propose a quantification method based on a Monte Carlo approach for a multiunit PSA model. This method can consider all possible accident sequence combinations in a multiunit site and calculate a more exact value for events that have a high failure probability. An example model for six identical units at a site was also developed and quantified to confirm the applicability of the proposed method.
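
    The quoted advantage of the Monte Carlo approach, exact handling of high-probability events without rare-event approximations or cut-set truncation, can be illustrated with a deliberately simple two-unit toy model. The event structure and probabilities below are invented for the example and bear no relation to the paper's six-unit model.

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical 2-unit site: multiunit core damage requires a shared
        # (common-cause) failure AND each unit's own mitigation failing.
        p = {"shared": 0.05, "unit1": 0.3, "unit2": 0.3}   # assumed probabilities
        n = 1_000_000

        shared = rng.random(n) < p["shared"]
        u1 = rng.random(n) < p["unit1"]
        u2 = rng.random(n) < p["unit2"]

        both_units_damaged = shared & u1 & u2
        print("multiunit CDF estimate:", both_units_damaged.mean())
        # Exact value is 0.05*0.3*0.3 = 4.5e-3; the sampling handles the
        # high-probability events exactly, with no rare-event approximation.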

  4. Review of Polynomial Chaos-Based Methods for Uncertainty Quantification in Modern Integrated Circuits

    Directory of Open Access Journals (Sweden)

    Arun Kaintura

    2018-02-01

    Full Text Available Advances in manufacturing process technology are key enablers for the production of integrated circuits in the sub-micrometer region. It is of paramount importance to assess the effects of tolerances in the manufacturing process on the performance of modern integrated circuits. The polynomial chaos expansion has emerged as a suitable alternative to standard Monte Carlo-based methods, which are accurate but computationally cumbersome. This paper provides an overview of the most recent developments and challenges in the application of polynomial chaos-based techniques for uncertainty quantification in integrated circuits, with particular focus on high-dimensional problems.
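
    As a minimal illustration of the technique the review surveys, the sketch below fits a one-dimensional Hermite polynomial chaos expansion by least-squares regression and reads the output mean and variance off the coefficients. The "circuit response", sample size and truncation order are placeholders; real applications are high-dimensional, which is exactly the challenge the review discusses.

        import numpy as np
        from math import factorial
        from numpy.polynomial import hermite_e as He

        rng = np.random.default_rng(1)

        def circuit_response(x):          # stand-in for an expensive simulation
            return np.exp(0.3 * x) + 0.1 * x**2

        xi = rng.standard_normal(200)     # samples of a standardized Gaussian parameter
        y = circuit_response(xi)

        deg = 4                           # truncation order of the expansion
        # Design matrix of probabilists' Hermite polynomials He_0..He_deg
        Phi = np.stack([He.hermeval(xi, np.eye(deg + 1)[k]) for k in range(deg + 1)],
                       axis=1)
        coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

        mean_pce = coef[0]                                        # E[y] = c_0
        var_pce = sum(coef[k] ** 2 * factorial(k) for k in range(1, deg + 1))
        print(mean_pce, var_pce)          # compare against np.mean(y), np.var(y)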

  5. Practical method of breast attenuation correction for cardiac SPECT

    International Nuclear Information System (INIS)

    Oliveira, Anderson de; Nogueira, Tindyua; Gutterres, Ricardo Fraga; Megueriam, Berdj Aram; Santos, Goncalo Rodrigues dos

    2007-01-01

    The breast attenuation effects on SPECT (Single Photon Emission Computed Tomography) myocardial perfusion procedures have lately been the subject of continuous inquiry. The required attenuation correction factors are usually obtained by transmission analysis, in which a standard external source is exposed to the SPECT system as a routine step. However, its high cost makes this methodology not fully available to most nuclear medicine services in Brazil and abroad. To overcome the problem, a new approach is presented in this work, implementing computational models to compensate for the breast attenuation effects on the anterior wall of the left ventricle during myocardial perfusion scintigraphy with SPECT. A neural network was employed to provide the attenuation correction indexes, based upon the following individual patient biotype features: mass, age, height, chest and breast thickness, heart size, and administered activity levels. (author)

  6. Practical method of breast attenuation correction for cardiac SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Anderson de; Nogueira, Tindyua; Gutterres, Ricardo Fraga [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil). Coordenacao Geral de Instalacoes Medicas e Industriais (CGMI)]. E-mails: anderson@cnen.gov.br; tnogueira@cnen.gov.br; rguterre@cnen.gov.br; Megueriam, Berdj Aram [Instituto Nacional do Cancer (INCA), Rio de Janeiro, RJ (Brazil)]. E-mail: megueriam@hotmail.com; Santos, Goncalo Rodrigues dos [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)]. E-mail: goncalo@cnen.gov.br

    2007-07-01

    The breast attenuation effects on SPECT (Single Photon Emission Computed Tomography) myocardial perfusion procedures have lately been the subject of continuous inquiry. The required attenuation correction factors are usually obtained by transmission analysis, in which a standard external source is exposed to the SPECT system as a routine step. However, its high cost makes this methodology not fully available to most nuclear medicine services in Brazil and abroad. To overcome the problem, a new approach is presented in this work, implementing computational models to compensate for the breast attenuation effects on the anterior wall of the left ventricle during myocardial perfusion scintigraphy with SPECT. A neural network was employed to provide the attenuation correction indexes, based upon the following individual patient biotype features: mass, age, height, chest and breast thickness, heart size, and administered activity levels. (author)

  7. Use of digital computers for correction of gamma method and neutron-gamma method indications

    International Nuclear Information System (INIS)

    Lakhnyuk, V.M.

    1978-01-01

    A program for the NAIRI-S computer is described, intended to account for and eliminate the effects of side processes when interpreting gamma and neutron-gamma logging indications. With slight corrections, the program can also be used as a mathematical basis for logging diagram standardization by the method of multidimensional regression analysis and for estimation of rock reservoir properties.

  8. A validated Fourier transform infrared spectroscopy method for quantification of total lactones in Inula racemosa and Andrographis paniculata.

    Science.gov (United States)

    Shivali, Garg; Praful, Lahorkar; Vijay, Gadgil

    2012-01-01

    Fourier transform infrared (FT-IR) spectroscopy is a technique widely used for detection and quantification of various chemical moieties. This paper describes the use of the FT-IR spectroscopy technique for the quantification of the total lactones present in Inula racemosa and Andrographis paniculata. Objective: to validate the FT-IR spectroscopy method for quantification of total lactones in I. racemosa and A. paniculata. Methods: dried and powdered I. racemosa roots and A. paniculata plant were extracted with ethanol and dried to remove ethanol completely. The ethanol extract was analysed in a KBr pellet by FT-IR spectroscopy. The FT-IR spectroscopy method was validated and compared with a known spectrophotometric method for quantification of lactones in A. paniculata. By FT-IR spectroscopy, the amount of total lactones was found to be 2.12 ± 0.47% (n = 3) in I. racemosa and 8.65 ± 0.51% (n = 3) in A. paniculata. The method showed comparable results with a known spectrophotometric method used for quantification of such lactones: 8.42 ± 0.36% (n = 3) in A. paniculata. Limits of detection and quantification for isoalantolactone were 1 µg and 10 µg, respectively; for andrographolide they were 1.5 µg and 15 µg, respectively. Recoveries were over 98%, with good intra- and interday repeatability (RSD ≤ 2%). The FT-IR spectroscopy method proved linear, accurate, precise and specific, with low limits of detection and quantification, for the estimation of total lactones, and is less tedious than the UV spectrophotometric method for the compounds tested. This validated FT-IR spectroscopy method is readily applicable for the quality control of I. racemosa and A. paniculata.

  9. ASSESSMENT OF ATMOSPHERIC CORRECTION METHODS FOR OPTIMIZING HAZY SATELLITE IMAGERIES

    Directory of Open Access Journals (Sweden)

    Umara Firman Rizidansyah

    2015-04-01

    Full Text Available The purpose of this research is to examine the suitability of three haze correction methods with respect to the distinctness of surface objects in land cover. Because haze formation differs between surface types, the study area is divided into two regions: rural, assumed to be vegetated, and urban, assumed to be non-vegetated. Balaraja was selected as the rural region of interest and Penjaringan as the urban one, and AVNIR-2 and Landsat imagery were used for each location. Haze reduction employed the Dark Object Subtraction (DOS), Virtual Cloud Point (VCP) and Histogram Match (HM) techniques. By applying the Haze Optimized Transformation equation HOT = DN_blue*sin(theta) - DN_red*cos(theta), the main results of this research include: in the case of AVNIR-Rural, VCP gives good results on band 1 while HM gives good results on bands 2, 3 and 4; therefore, HM can be applied in the case of AVNIR-Rural. In the case of AVNIR-Urban, DOS gives good results on bands 1, 2 and 3 while HM gives good results on band 4; therefore, DOS can be applied in the case of AVNIR-Urban. In the case of Landsat-Rural, DOS gives good results on bands 1, 2 and 6 while VCP gives good results on bands 4 and 5, and the smallest average HOT value, 106.547, is obtained by VCP; therefore, DOS and VCP can be applied in the case of Landsat-Rural. In the case of Landsat-Urban, DOS gives good results on bands 1, 2 and 6 while VCP gives good results on bands 3, 4 and 5; therefore, VCP can be applied in the case of Landsat-Urban.
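
    The Haze Optimized Transformation quoted above is a per-pixel rotation in blue-red spectral space; a minimal sketch follows. The clear-line angle and the digital numbers are illustrative assumptions; in practice the angle is regressed from haze-free pixels.

        import numpy as np

        def hot_index(dn_blue, dn_red, theta):
            # HOT = DN_blue*sin(theta) - DN_red*cos(theta); theta is the slope
            # angle of the haze-free "clear line" in blue-red spectral space.
            return dn_blue * np.sin(theta) - dn_red * np.cos(theta)

        # Toy example: two pixels, clear-line angle of 45 degrees (assumed)
        print(hot_index(np.array([80.0, 120.0]), np.array([60.0, 70.0]),
                        np.deg2rad(45)))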

  10. A simple method of digitizing analog scintigrams for quantification and digital archiving

    International Nuclear Information System (INIS)

    Schramm, M.; Kaempfer, B.; Wolf, H.; Clausen, M.; Wendhausen, H.; Henze, E.

    1993-01-01

    This study was undertaken to evaluate a quick, reliable and cheap method of digitizing analog scintigrams. 40 whole-body bone scintigrams were obtained simultaneously in analog and genuine digital format. The analog scans on X-ray film were then digitized secondarily by three different methods: 300 dpi flatbed scanning, high-resolution camera scanning and camcorder recording. A simple exposure approach using a light box, a cheap camcorder, a PC and image grabber hard- and software proved to be optimal. Visual interpretation showed no differences in clinical findings when comparing the analog images with their secondarily digitized counterparts. To test the possibility of quantification, 126 equivalent ROIs were drawn both in the genuine digital and the secondarily digitized images. Comparing the ROI count to whole-body count percentage of the corresponding ROIs showed the correlation to be linear. The evaluation of phantom studies showed the linear correlation to be true within a wide activity range. Thus, secondary digitalization of analog scintigrams is an easy, cheap and reliable method of archiving images and allows secondary digital quantification. (orig.)

  11. [A simple method of digitizing analog scintigrams for quantification and digital archiving].

    Science.gov (United States)

    Schramm, M; Kämpfer, B; Wolf, H; Clausen, M; Wendhausen, H; Henze, E

    1993-02-01

    This study was undertaken to evaluate a quick, reliable and cheap method of digitizing analog scintigrams. 40 whole-body bone scintigrams were obtained simultaneously in analog and genuine digital format. The analog scans on x-ray film were then digitized secondarily by three different methods: 300 dpi flat-bed scanning, high-resolution camera scanning and camcorder recording. A simple exposure approach using a light box, a cheap camcorder, a PC and image grabber hard- and software proved to be optimal. Visual interpretation showed no differences in clinical findings when comparing the analog images with their secondarily digitized counterparts. To test the possibility of quantification, 126 equivalent ROIs were drawn both in the genuine digital and the secondarily digitized images. Comparing the ROI count to whole-body count percentage of the corresponding ROIs showed the correlation to be linear. The evaluation of phantom studies showed the linear correlation to be true within a wide activity range. Thus, secondary digitalization of analog scintigrams is an easy, cheap and reliable method of archiving images and allows secondary digital quantification.

  12. Empirical quantification of lacustrine groundwater discharge - different methods and their limitations

    Science.gov (United States)

    Meinikmann, K.; Nützmann, G.; Lewandowski, J.

    2015-03-01

    Groundwater discharge into lakes (lacustrine groundwater discharge, LGD) can be an important driver of lake eutrophication. Its quantification is difficult for several reasons, and it is thus often neglected in water and nutrient budgets of lakes. In the present case, several methods were applied to determine the extent of the subsurface catchment, to reveal areas of main LGD and to identify the variability of LGD intensity. The size and shape of the subsurface catchment served as a prerequisite for calculating long-term groundwater recharge and thus the overall amount of LGD. The isotopic composition of near-shore groundwater was investigated to validate the quality of the catchment delineation in near-shore areas. Heat as a natural tracer for groundwater-surface water interactions was used to find spatial variations of LGD intensity. Via an analytical solution of the heat transport equation, LGD rates were calculated from temperature profiles of the lake bed. The method has some uncertainties, as is evident from the results of two measurement campaigns in different years. The present study reveals that a combination of several different methods is required for a reliable identification and quantification of LGD and groundwater-borne nutrient loads.
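
    The analytical solution referred to above is commonly the steady-state conduction-advection profile of Bredehoeft and Papadopulos (1965); whether the authors used exactly this form is not stated in the abstract. The sketch below fits that profile to a synthetic temperature-depth profile to recover a Darcy flux; all sediment properties, depths and temperatures are assumed values.

        import numpy as np
        from scipy.optimize import curve_fit

        L = 0.4                 # depth of the lower boundary sensor (m), assumed
        k = 1.8                 # sediment thermal conductivity (W m-1 K-1), assumed
        rho_c_f = 4.19e6        # volumetric heat capacity of water (J m-3 K-1)

        def bred_pap(z, q, T0, TL):
            # Steady-state conduction-advection temperature profile;
            # q > 0 means downward flow, q < 0 upward (discharge into the lake).
            beta = rho_c_f * q * L / k
            return T0 + (TL - T0) * np.expm1(beta * z / L) / np.expm1(beta)

        z = np.linspace(0.05, L, 8)                    # sensor depths (m)
        T_obs = bred_pap(z, -2e-6, 12.0, 9.0)          # synthetic "measured" profile
        T_obs += np.random.default_rng(3).normal(0, 0.02, z.size)

        (q_fit, T0_fit, TL_fit), _ = curve_fit(bred_pap, z, T_obs, p0=(-1e-6, 12, 9))
        print(f"LGD Darcy flux: {q_fit:.2e} m/s")      # negative = discharge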

  13. Improving the reliability of POD curves in NDI methods using a Bayesian inversion approach for uncertainty quantification

    Science.gov (United States)

    Ben Abdessalem, A.; Jenson, F.; Calmon, P.

    2016-02-01

    This contribution provides an example of the possible advantages of adopting a Bayesian inversion approach to uncertainty quantification in nondestructive inspection methods. In such problems, the uncertainty associated with the random parameters is not always known and needs to be characterised from scattering signal measurements. The uncertainties may then be correctly propagated in order to determine a reliable probability of detection curve. To this end, we establish a general Bayesian framework based on a non-parametric maximum likelihood formulation and priors from expert knowledge. However, the presented inverse problem is time-consuming and computationally intensive. To cope with this difficulty, we replace the real model with a surrogate in order to speed up model evaluation and make the problem computationally feasible. Least-squares support vector regression is adopted as the metamodelling technique due to its robustness in dealing with non-linear problems. We illustrate the usefulness of this methodology through the inspection of a tube with an enclosed defect using an ultrasonic method.

  14. Simultaneous Quantification of Antidiabetic Agents in Human Plasma by a UPLC-QToF-MS Method.

    Directory of Open Access Journals (Sweden)

    Mariana Millan Fachi

    Full Text Available An ultra-performance liquid chromatography quadrupole time-of-flight mass spectrometry method for the simultaneous quantification of chlorpropamide, glibenclamide, gliclazide, glimepiride, metformin, nateglinide, pioglitazone, rosiglitazone, and vildagliptin in human plasma was developed and validated, using isoniazid and sulfaquinoxaline as internal standards. Following plasma protein precipitation using acetonitrile with 1% formic acid, chromatographic separation was performed on a cyano column using gradient elution with water and acetonitrile, both containing 0.1% formic acid. Detection was performed in a quadrupole time-of-flight analyzer, using electrospray ionization operated in the positive mode. Data from the validation studies demonstrated that the new method is highly sensitive, selective, precise, linear (r > 0.99), free of matrix effects and has no residual effects. The developed method was successfully applied to volunteers' plasma samples. Hence, this method was demonstrated to be appropriate for clinical monitoring of antidiabetic agents.

  15. Method Development for Extraction and Quantification of Glycosides in Leaves of Stevia Rebaudiana

    International Nuclear Information System (INIS)

    Salmah Moosa; Hazlina Ahmad Hassali; Norazlina Noordin

    2015-01-01

    A solid-liquid extraction and a UHPLC method for the determination of glycosides from the leaves of Stevia rebaudiana were developed. Steviol glycosides found in the leaves of Stevia are natural sweeteners and are commercially sold as sugar substitutes. Extraction of the glycosides consisted of solvent extraction of leaf powder using various solvents, followed by concentration using a rotary evaporator and analysis using Ultra High Performance Liquid Chromatography (UHPLC). Existing analytical methods are mainly focused on the quantification of either rebaudioside A or stevioside, whereas other glycosides, such as rebaudioside B and rebaudioside D, present in the leaves also contribute to sweetness and biological activity. Therefore, we developed an improved method by changing the UHPLC conditions to enable a rapid and reliable determination of four steviol glycosides, rather than just two, using an isocratic UHPLC method. (author)

  16. Quantification of {sup 18}F-florbetapir PET: comparison of two analysis methods

    Energy Technology Data Exchange (ETDEWEB)

    Hutton, Chloe; Declerck, Jerome [Siemens Molecular Imaging, Oxford (United Kingdom); Mintun, Mark A.; Pontecorvo, Michael J.; Devous, Michael D.; Joshi, Abhinay D. [Avid Radiopharmaceuticals a wholly owned subsidiary of Eli Lilly and Company, Philadelphia, PA (United States); Collaboration: for the Alzheimer' s Disease Neuroimaging Initiative

    2015-04-01

    {sup 18}F-Florbetapir positron emission tomography (PET) can be used to image amyloid burden in the human brain. A previously developed research method has been shown to have a high test-retest reliability and good correlation between standardized uptake value ratio (SUVR) and amyloid burden at autopsy. The goal of this study was to determine how well SUVRs computed using the research method could be reproduced using an automatic quantification method, developed for clinical use. Two methods for the quantitative analysis of {sup 18}F-florbetapir PET were compared in a diverse clinical population of 604 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and in a group of 74 younger healthy controls (YHC). Cortex to cerebellum SUVRs were calculated using the research method, which is based on SPM, yielding 'research SUVRs', and using syngo.PET Amyloid Plaque, yielding 'sPAP SUVRs'. Mean cortical SUVRs calculated using the two methods for the 678 subjects were correlated (r = 0.99). Linear regression of sPAP SUVRs on research SUVRs was used to convert the research method SUVR threshold for florbetapir positivity of 1.10 to a corresponding threshold of 1.12 for sPAP. Using the corresponding thresholds, categorization of SUVR values were in agreement between research and sPAP SUVRs for 96.3 % of the ADNI images. SUVRs for all YHC were below the corresponding thresholds. Automatic florbetapir PET quantification using sPAP yielded cortex to cerebellum SUVRs which were correlated and in good agreement with the well-established research method. The research SUVR threshold for florbetapir positivity was reliably converted to a corresponding threshold for sPAP SUVRs. (orig.)
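
    The threshold conversion described above is a one-line application of the fitted regression; the sketch below reproduces the idea with invented paired SUVRs (the study's actual regression coefficients are not given in the abstract).

        import numpy as np

        # Paired SUVRs for the same scans (hypothetical values for illustration)
        suvr_research = np.array([0.95, 1.05, 1.10, 1.25, 1.60, 2.10])
        suvr_spap     = np.array([0.97, 1.07, 1.12, 1.28, 1.63, 2.15])

        # Regress sPAP SUVRs on research SUVRs, then map the 1.10 cutoff
        slope, intercept = np.polyfit(suvr_research, suvr_spap, 1)
        threshold_spap = slope * 1.10 + intercept
        print(f"converted positivity threshold: {threshold_spap:.2f}")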

  17. Quantification of 18F-florbetapir PET: comparison of two analysis methods

    International Nuclear Information System (INIS)

    Hutton, Chloe; Declerck, Jerome; Mintun, Mark A.; Pontecorvo, Michael J.; Devous, Michael D.; Joshi, Abhinay D.

    2015-01-01

    18 F-Florbetapir positron emission tomography (PET) can be used to image amyloid burden in the human brain. A previously developed research method has been shown to have a high test-retest reliability and good correlation between standardized uptake value ratio (SUVR) and amyloid burden at autopsy. The goal of this study was to determine how well SUVRs computed using the research method could be reproduced using an automatic quantification method, developed for clinical use. Two methods for the quantitative analysis of 18 F-florbetapir PET were compared in a diverse clinical population of 604 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and in a group of 74 younger healthy controls (YHC). Cortex to cerebellum SUVRs were calculated using the research method, which is based on SPM, yielding 'research SUVRs', and using syngo.PET Amyloid Plaque, yielding 'sPAP SUVRs'. Mean cortical SUVRs calculated using the two methods for the 678 subjects were correlated (r = 0.99). Linear regression of sPAP SUVRs on research SUVRs was used to convert the research method SUVR threshold for florbetapir positivity of 1.10 to a corresponding threshold of 1.12 for sPAP. Using the corresponding thresholds, categorization of SUVR values were in agreement between research and sPAP SUVRs for 96.3 % of the ADNI images. SUVRs for all YHC were below the corresponding thresholds. Automatic florbetapir PET quantification using sPAP yielded cortex to cerebellum SUVRs which were correlated and in good agreement with the well-established research method. The research SUVR threshold for florbetapir positivity was reliably converted to a corresponding threshold for sPAP SUVRs. (orig.)

  18. Development, optimization, and single laboratory validation of an event-specific real-time PCR method for the detection and quantification of Golden Rice 2 using a novel taxon-specific assay.

    Science.gov (United States)

    Jacchia, Sara; Nardini, Elena; Savini, Christian; Petrillo, Mauro; Angers-Loustau, Alexandre; Shim, Jung-Hyun; Trijatmiko, Kurniawan; Kreysa, Joachim; Mazzara, Marco

    2015-02-18

    In this study, we developed, optimized, and in-house validated a real-time PCR method for the event-specific detection and quantification of Golden Rice 2, a genetically modified rice with provitamin A in the grain. We optimized and evaluated the performance of the taxon (targeting rice Phospholipase D α2 gene)- and event (targeting the 3' insert-to-plant DNA junction)-specific assays that compose the method as independent modules, using haploid genome equivalents as the unit of measurement. We verified the specificity of the two real-time PCR assays and determined their dynamic range, limit of quantification, limit of detection, and robustness. We also confirmed that the taxon-specific DNA sequence is present in single copy in the rice genome and verified its stability of amplification across 132 rice varieties. A relative quantification experiment confirmed the correct performance of the two assays when used in combination.
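
    Event-specific GM quantification of this kind is normally expressed as the ratio of event copies to taxon copies, each read off its own standard curve in haploid genome equivalents. A hedged sketch with hypothetical standard-curve parameters (the validated assay's actual slopes and intercepts are not given in this record):

```python
import numpy as np

def copies_from_cq(cq, slope, intercept):
    """Estimate haploid genome equivalents from a Cq value using a
    standard curve of the form Cq = slope * log10(copies) + intercept."""
    return 10 ** ((cq - intercept) / slope)

# Hypothetical standard-curve parameters (an ideal assay has slope ~ -3.32).
event_copies = copies_from_cq(cq=28.1, slope=-3.32, intercept=40.0)
taxon_copies = copies_from_cq(cq=24.9, slope=-3.31, intercept=39.8)
print(f"GM content: {100 * event_copies / taxon_copies:.1f}%")
```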

  19. Emphysema quantification from CT scans using novel application of diaphragm curvature estimation: comparison with standard quantification methods and pulmonary function data

    Science.gov (United States)

    Keller, Brad M.; Reeves, Anthony P.; Yankelevitz, David F.; Henschke, Claudia I.; Barr, R. Graham

    2009-02-01

    Emphysema is a disease of the lungs that destroys the alveolar air sacs and induces long-term respiratory dysfunction. CT scans allow for imaging of the anatomical basis of emphysema and quantification of the underlying disease state. Several measures have been introduced for the quantification of emphysema directly from CT data; most, however, are based on the analysis of density information provided by the CT scans, which varies by scanner and can be hard to standardize across sites and over time. Given that one of the anatomical variations associated with the progression of emphysema is the flattening of the diaphragm due to the loss of elasticity in the lung parenchyma, curvature analysis of the diaphragm can provide information about emphysema from CT. Therefore, we propose a new, non-density-based measure of the curvature of the diaphragm that allows for robust quantification of the disease. To evaluate the new method, 24 whole-lung scans were analyzed using the ratios of the lung height and diaphragm width to diaphragm height as curvature estimates, with the emphysema index used for comparison. Pearson correlation coefficients showed a strong trend for several of the proposed diaphragm curvature measures to have higher correlations, of up to r=0.57, with DLCO% and VA than did the emphysema index. Furthermore, we found the emphysema index to have only a 0.27 correlation to the proposed measures, indicating that the proposed measures evaluate different aspects of the disease.

  20. Initial evaluation of a practical PET respiratory motion correction method in clinical simultaneous PET/MRI

    International Nuclear Information System (INIS)

    Manber, Richard; Thielemans, Kris; Hutton, Brian; Barnes, Anna; Ourselin, Sebastien; Arridge, Simon; O’Meara, Celia; Atkinson, David

    2014-01-01

    Respiratory motion during PET acquisitions can cause image artefacts, with sharpness and tracer quantification adversely affected due to count ‘smearing’. Motion correction by registration of PET gates becomes increasingly difficult with shorter scan times and fewer counts. The advent of simultaneous PET/MRI scanners allows the use of high spatial resolution MRI to capture motion states during respiration [1, 2]. In this work, we use a respiratory signal derived from the PET list-mode data [3], with no requirement for an external device or MR sequence modifications.

  1. Sensitive quantification of apomorphine in human plasma using a LC-ESI-MS-MS method.

    Science.gov (United States)

    Abe, Emuri; Alvarez, Jean-Claude

    2006-06-01

    An analytical method based on liquid chromatography coupled with ion trap mass spectrometry (MS) detection with an electrospray ionization interface has been developed for the identification and quantification of apomorphine in human plasma. Apomorphine was isolated from 0.5 mL of plasma using a liquid-liquid extraction with diethyl ether and boldine as internal standard, with satisfactory extraction recoveries. Analytes were separated on a 5-microm C18 Highpurity (Thermohypersil) column (150 mm x 2.1 mm I.D.) maintained at 30 degrees C, coupled to a precolumn (C18, 5-microm, 10 mm x 2.0 mm I.D., Thermo). The elution was achieved isocratically with a mobile phase of 2 mM NH4COOH buffer pH 3.8/acetonitrile (50/50, vol/vol) at a flow rate of 200 microL per minute. Data were collected either in full-scan MS mode at m/z 150 to 500 or in full-scan tandem mass spectrometry mode, selecting the [M+H]+ ion at m/z 268.0 for apomorphine and m/z 328.0 for boldine. The most intense daughter ions of apomorphine (m/z 237.1) and boldine (m/z 297.0) were used for quantification. Retention times were 2.03 and 2.11 minutes for boldine and apomorphine, respectively. Calibration curves were linear in the 0.025 to 20 ng/mL range. The limits of detection and quantification were 0.010 ng/mL and 0.025 ng/mL, respectively. Accuracy and precision of the assay were measured by analyzing 54 quality control samples for 3 days. At concentrations of 0.075, 1.5, and 15 ng/mL, intraday precisions were less than 10.1%, 5.3%, and 3.8%, and interday precisions were less than 4.8%, 6.6%, and 6.5%, respectively. Accuracies were in the 99.5 to 104.2% range. An example of a patient who was given 6 mg of apomorphine subcutaneously is shown, with concentrations of 14.1 ng/mL after 30 minutes and 0.20 ng/mL after 6 hours. The method described enables the unambiguous identification and quantification of apomorphine with very good sensitivity using only 0.5 mL of sample, and is very convenient for therapeutic drug monitoring.
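
    Quantification against a linear calibration curve, as used here, can be sketched in a few lines. The response values below are simulated, not the published ones:

```python
import numpy as np

def quantify(peak_area_ratios, concentrations, sample_ratio):
    """Fit a linear calibration curve (analyte/IS peak-area ratio vs.
    concentration) and back-calculate an unknown sample concentration."""
    slope, intercept = np.polyfit(concentrations, peak_area_ratios, 1)
    return (sample_ratio - intercept) / slope

# Hypothetical calibration standards spanning 0.025-20 ng/mL.
conc = np.array([0.025, 0.1, 0.5, 2.0, 10.0, 20.0])
ratios = 0.45 * conc + 0.002          # simulated detector response
print(f"{quantify(ratios, conc, sample_ratio=6.35):.2f} ng/mL")
```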

  2. New methods for the correction of 31P NMR spectra in in vivo NMR spectroscopy

    International Nuclear Information System (INIS)

    Starcuk, Z.; Bartusek, K.; Starcuk, Z. jr.

    1994-01-01

    New methods for the correction of 31 P NMR spectra in in vivo NMR spectroscopy have been developed. A method for the baseline correction of the spectra, which combines time-domain and frequency-domain processing, is discussed. The method is very fast and efficient in minimizing the impact of baseline artifacts from biological tissues.

  3. A RP-HPLC method for quantification of diclofenac sodium released from biological macromolecules.

    Science.gov (United States)

    Bhattacharya, Shiv Sankar; Banerjee, Subham; Ghosh, Ashoke Kumar; Chattopadhyay, Pronobesh; Verma, Anurag; Ghosh, Amitava

    2013-07-01

    Interpenetrating network (IPN) microbeads of sodium carboxymethyl locust bean gum (SCMLBG) and sodium carboxymethyl cellulose (SCMC) containing diclofenac sodium (DS), a nonsteroidal anti-inflammatory drug, were prepared by a single water-in-water (w/w) emulsion gelation process using AlCl3 as cross-linking agent in a completely aqueous environment. A pharmacokinetic study of these IPN microbeads was then carried out by a simple and feasible high-performance liquid chromatographic method with UV detection, which was developed and validated for the quantification of diclofenac sodium in rabbit plasma. The chromatographic separation was carried out on a Hypersil BDS C18 column (250 mm × 4.6 mm; 5 µm). The mobile phase was a mixture of acetonitrile and methanol (70:30, v/v) at a flow rate of 1.0 ml/min. The UV detection was set at 276 nm. The extraction recovery of diclofenac sodium in plasma for three quality control (QC) samples ranged from 81.52% to 95.29%. The calibration curve was linear in the concentration range of 20-1000 ng/ml with a correlation coefficient (r(2)) above 0.9951. The method was specific and sensitive, with a limit of quantification of 20 ng/ml. In stability tests, diclofenac sodium in rabbit plasma was stable during storage and the assay procedure.

  4. Scatter correction method with primary modulator for dual energy digital radiography: a preliminary study

    Science.gov (United States)

    Jo, Byung-Du; Lee, Young-Jin; Kim, Dae-Hong; Jeon, Pil-Hyun; Kim, Hee-Joung

    2014-03-01

    In conventional digital radiography (DR) using a dual energy subtraction technique, a significant fraction of the detected photons are scattered within the body, resulting in a scatter component. Scattered radiation can significantly deteriorate image quality in diagnostic X-ray imaging systems. Various methods of scatter correction, including both measurement- and non-measurement-based methods, have been proposed in the past. Both can reduce scatter artifacts in images. However, non-measurement-based methods require a homogeneous object and provide insufficient correction of the scatter component. Therefore, we employed a measurement-based method to correct for the scatter component of inhomogeneous objects in dual energy DR (DEDR) images. We performed a simulation study using a Monte Carlo simulation with a primary modulator, which is a measurement-based method, for the DEDR system. The primary modulator, which has a checkerboard pattern, was used to modulate primary radiation. Cylindrical phantoms of variable size were used to quantify imaging performance. For scatter estimation, we used discrete Fourier transform filtering. The primary modulation method was evaluated using a cylindrical phantom in the DEDR system. The scatter components were accurately removed using the primary modulator. When the results acquired with and without scatter correction were compared, the average contrast-to-noise ratio (CNR) with the correction was 1.35 times higher than that obtained without correction, and the average root mean square error (RMSE) with the correction was 38.00% better than that without correction. In the subtraction study, the average CNR with correction was 2.04 (aluminum subtraction) and 1.38 (polymethyl methacrylate (PMMA) subtraction) times higher than that obtained without correction. The analysis demonstrated the accuracy of the scatter correction and the improvement of image quality using a primary modulator, and showed the feasibility of the proposed method for dual energy digital radiography.

  5. A machine learning approach for efficient uncertainty quantification using multiscale methods

    Science.gov (United States)

    Chan, Shing; Elsheikh, Ahmed H.

    2018-02-01

    Several multiscale methods account for sub-grid scale features using coarse scale basis functions. For example, in the Multiscale Finite Volume method the coarse scale basis functions are obtained by solving a set of local problems over dual-grid cells. We introduce a data-driven approach for the estimation of these coarse scale basis functions. Specifically, we employ a neural network predictor fitted using a set of solution samples from which it learns to generate subsequent basis functions at a lower computational cost than solving the local problems. The computational advantage of this approach is realized for uncertainty quantification tasks where a large number of realizations have to be evaluated. We attribute the ability to learn these basis functions to the modularity of the local problems and the redundancy of the permeability patches between samples. The proposed method is evaluated on elliptic problems, yielding very promising results.
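
    As a rough illustration of the surrogate idea, a generic multi-output regressor can be trained to map permeability patches to basis-function values. The network below is a stand-in (scikit-learn's MLPRegressor), not the authors' architecture, and the data are synthetic:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training set: each row is a flattened permeability patch
# over a dual-grid cell; each target row is the local basis function
# sampled on that cell (in practice obtained by solving local problems).
rng = np.random.default_rng(1)
patches = rng.lognormal(size=(500, 25))                          # 5x5 patches
basis = np.exp(-patches / patches.mean(axis=1, keepdims=True))   # stand-in targets

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
surrogate.fit(np.log(patches[:400]), basis[:400])

# For a new realization, prediction is much cheaper than solving the
# local problems; accuracy is checked on held-out samples.
score = surrogate.score(np.log(patches[400:]), basis[400:])
print(f"held-out R^2: {score:.2f}")
```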

  6. Feasibility of the left ventricular volume measurement by acoustic quantification method. Comparison with ultrafast computed tomography

    International Nuclear Information System (INIS)

    Tomimoto, Shigehiro; Nakatani, Satoshi; Tanaka, Norio; Uematsu, Masaaki; Beppu, Shintaro; Nagata, Seiki; Hamada, Seiki; Takamiya, Makoto; Miyatake, Kunio

    1995-01-01

    Acoustic quantification (AQ: the real-time automated boundary detection system) allows instantaneous measurement of cardiac chamber volumes. The feasibility of this method was evaluated by comparing the left ventricular (LV) volumes obtained with AQ to those derived from ultrafast computed tomography (UFCT), which enables accurate measurements of LV volumes even in the presence of LV asynergy, in 23 patients (8 with ischemic heart disease, 5 with cardiomyopathy, 3 with valvular heart disease). Both LV end-diastolic and end-systolic volumes obtained with the AQ method were in good agreement with those obtained with UFCT (y=1.04x-16.9, r=0.95; y=0.87x+15.7, r=0.91; respectively). AQ was reliable even in the presence of LV asynergy. Interobserver variability for the AQ measurement was 10.2%. AQ provides a new, clinically useful method for real-time accurate estimation of the left ventricular volume. (author)

  7. Feasibility of the left ventricular volume measurement by acoustic quantification method. Comparison with ultrafast computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Tomimoto, Shigehiro; Nakatani, Satoshi; Tanaka, Norio; Uematsu, Masaaki; Beppu, Shintaro; Nagata, Seiki; Hamada, Seiki; Takamiya, Makoto; Miyatake, Kunio [National Cardiovascular Center, Suita, Osaka (Japan)

    1995-01-01

    Acoustic quantification (AQ: the real-time automated boundary detection system) allows instantaneous measurement of cardiac chamber volumes. The feasibility of this method was evaluated by comparing the left ventricular (LV) volumes obtained with AQ to those derived from ultrafast computed tomography (UFCT), which enables accurate measurements of LV volumes even in the presence of LV asynergy, in 23 patients (8 with ischemic heart disease, 5 with cardiomyopathy, 3 with valvular heart disease). Both LV end-diastolic and end-systolic volumes obtained with the AQ method were in good agreement with those obtained with UFCT (y=1.04x-16.9, r=0.95; y=0.87x+15.7, r=0.91; respectively). AQ was reliable even in the presence of LV asynergy. Interobserver variability for the AQ measurement was 10.2%. AQ provides a new, clinically useful method for real-time accurate estimation of the left ventricular volume. (author).

  8. Reliability Quantification Method for Safety Critical Software Based on a Finite Test Set

    International Nuclear Information System (INIS)

    Shin, Sung Min; Kim, Hee Eun; Kang, Hyun Gook; Lee, Seung Jun

    2014-01-01

    Software inside digitalized systems plays a very important role because it may cause irreversible consequences and affect the whole system as a common cause failure. However, test-based reliability quantification methods for safety critical software have limitations caused by difficulties in developing input sets in the form of trajectories, i.e., series of successive values of variables. To address these limitations, this study proposed another method which conducts the test using combinations of single values of variables. To substitute combinations of variable values for the trajectory form of input, the possible range of each variable should be identified. For this purpose, the assigned range of each variable, logical relations between variables, plant dynamics under certain situations, and the information-acquisition characteristics of the digital device are considered. The feasibility of the proposed method was confirmed through an application to the Reactor Protection System (RPS) software trip logic.
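
    The combination-of-single-values idea can be illustrated with a Cartesian product over discretized variable ranges. The variables and ranges below are hypothetical, not the actual RPS trip-logic inputs:

```python
from itertools import product

# Hypothetical trip-logic inputs: each variable's physically possible
# range is discretized after considering plant dynamics and the
# device's information-acquisition characteristics.
variable_ranges = {
    "pressure":    [v / 10 for v in range(150, 171, 5)],   # MPa
    "temperature": list(range(280, 321, 10)),              # deg C
    "valve_state": [0, 1],
}

# Test set built from combinations of single values rather than trajectories.
test_cases = [dict(zip(variable_ranges, combo))
              for combo in product(*variable_ranges.values())]
print(f"{len(test_cases)} test cases, e.g. {test_cases[0]}")
```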

  9. HUMAN ERROR QUANTIFICATION USING PERFORMANCE SHAPING FACTORS IN THE SPAR-H METHOD

    Energy Technology Data Exchange (ETDEWEB)

    Harold S. Blackman; David I. Gertman; Ronald L. Boring

    2008-09-01

    This paper describes a cognitively based human reliability analysis (HRA) quantification technique for estimating the human error probabilities (HEPs) associated with operator and crew actions at nuclear power plants. The method described here, the Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method, was developed to aid in characterizing and quantifying human performance at nuclear power plants. The intent was to develop a defensible method that would consider all factors that may influence performance. In the SPAR-H approach, calculation of HEP rates is especially straightforward, starting with pre-defined nominal error rates for cognitive vs. action-oriented tasks, and incorporating performance shaping factor multipliers upon those nominal error rates.
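
    A minimal sketch of this calculation, assuming a nominal HEP scaled by PSF multipliers. The adjustment factor shown is the one commonly cited for SPAR-H when several negative PSFs apply; the exact trigger conditions should be taken from the method documentation:

```python
from math import prod

def spar_h_hep(nominal, psf_multipliers):
    """Sketch of a SPAR-H style calculation: a nominal error rate is
    scaled by performance shaping factor multipliers.  The adjustment
    keeps the result below 1 when several negative PSFs are present
    (simplified here to trigger whenever the composite exceeds 1)."""
    composite = prod(psf_multipliers)
    if composite > 1:
        return nominal * composite / (nominal * (composite - 1) + 1)
    return nominal * composite

# Nominal rates used by SPAR-H: 0.01 for diagnosis, 0.001 for action.
print(spar_h_hep(0.01, [10, 2, 1]))   # degraded conditions
print(spar_h_hep(0.001, [0.1, 1]))    # favourable conditions
```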

  10. Comparison of methods for the quantification of the different carbon fractions in atmospheric aerosol samples

    Science.gov (United States)

    Nunes, Teresa; Mirante, Fátima; Almeida, Elza; Pio, Casimiro

    2010-05-01

    Atmospheric carbon consists of organic carbon (OC, including various organic compounds), elemental carbon (EC, or black carbon [BC]/soot, a non-volatile, light-absorbing carbon), and a small quantity of carbonate carbon. Thermal/optical methods (TOM) have been widely used for quantifying total carbon (TC), OC, and EC in ambient and source particulate samples. Unfortunately, the different thermal evolution protocols in use can result in a wide elemental carbon-to-total carbon variation. Temperature evolution in thermal carbon analysis is critical to the allocation of carbon fractions. Another critical point in OC and EC quantification by TOM is the interference of carbonate carbon (CC) that could be present in the particulate samples, mainly in the coarse fraction of atmospheric aerosol. One of the methods used to minimize this interference consists of pre-treating the sample with acid to eliminate CC prior to thermal analysis (Chow et al., 2001; Pio et al., 1994). In Europe, there is currently no standard procedure for determining the carbonaceous aerosol fraction, which implies that data from different laboratories at various sites are of unknown accuracy and cannot be considered comparable. In the framework of the EU project EUSAAR, a comprehensive study has been carried out to identify the causes of differences in the EC measured using different thermal evolution protocols. From this study an optimised protocol, the EUSAAR-2 protocol, was defined (Cavalli et al., 2009). During the last two decades thousands of aerosol samples have been collected on quartz filters at urban, industrial, rural and background sites, and also from forest fire plumes and biomass burning in a domestic closed stove. These samples were analysed for OC and EC by a TOM similar to that in use in the IMPROVE network (Pio et al., 2007). More recently we reduced the number of steps in the thermal evolution protocols, without significant repercussions on the OC/EC quantifications.

  11. Application of radioanalytical methods in the quantification of solute transport in plants

    International Nuclear Information System (INIS)

    Hornik, M.

    2016-01-01

    The present habilitation thesis is a compilation of published scientific papers supplemented with a commentary. The primary objective of the work was to provide results and knowledge applicable to the further development of the application possibilities of nuclear analytical chemistry, especially in the field of radioindication methods and the application of positron emitters in connection with positron emission tomography (PET). In the work, these methods and techniques are developed mainly in the context of environmental issues related to the analysis and remediation of a contaminated or degraded environment (water and soil), but also partially in the field of plant production and plant research. In terms of the achieved results and knowledge, the work is divided into three separate sections. The first part is dedicated to the application of radioindication methods, as well as other, non-radioanalytical methods and approaches, in the characterization of plant biomass (biomass of terrestrial and aquatic mosses, and waste plant biomass) as alternative sorbents serving for the separation and removal of (radio)toxic metals from contaminated or waste waters, as well as in the quantification and description of the sorption processes proceeding under conditions of batch or continuous-flow systems. The second part describes the results concerning the quantification and visual description of the processes of (radio)toxic metal and microelement uptake and translocation in plant tissues, using radioisotopes (β- and γ-emitters) of these metals and the methods of direct gamma spectrometry and autoradiography. The main aim of these experiments was to evaluate the possibilities of utilizing selected plant species in the phytoremediation of contaminated soils and waters, as well as the possibilities of affecting the effectiveness of uptake and translocation of these metals in the plant tissues mainly in dependence on their

  12. Accurate and precise DNA quantification in the presence of different amplification efficiencies using an improved Cy0 method.

    Science.gov (United States)

    Guescini, Michele; Sisti, Davide; Rocchi, Marco B L; Panebianco, Renato; Tibollo, Pasquale; Stocchi, Vilberto

    2013-01-01

    Quantitative real-time PCR represents a highly sensitive and powerful technology for the quantification of DNA. Although real-time PCR is well accepted as the gold standard in nucleic acid quantification, there is a largely unexplored area of experimental conditions that limit the application of the Ct method. As an alternative, our research team has recently proposed the Cy0 method, which can compensate for small amplification variations among the samples being compared. However, when there is a marked decrease in amplification efficiency, Cy0 is impaired; hence, determining reaction efficiency is essential to achieve a reliable quantification. The proposed improvement in Cy0 is based on the use of the kinetic parameters calculated at the curve inflection point to compensate for efficiency variations. Three experimental models were used: inhibition of primer extension, non-optimal primer annealing, and a very small biological sample. In all these models, the improved Cy0 method increased quantification accuracy up to about 500% without affecting precision. Furthermore, the stability of this procedure was enhanced by integrating it with the SOD method. In short, the improved Cy0 method represents a simple yet powerful approach for reliable DNA quantification even in the presence of marked efficiency variations.
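
    The geometric idea behind Cy0 (the intersection of the abscissa with the tangent drawn through the inflection point of the fitted amplification curve) can be sketched with a plain logistic model standing in for the richer sigmoid used by the authors:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, fmax, b, c):
    # Stand-in sigmoid; the published Cy0 method fits a five-parameter
    # Richards function, simplified here for illustration.
    return fmax / (1.0 + np.exp(-(x - c) / b))

def cy0(cycles, fluorescence):
    """Intersection of the x-axis with the tangent through the
    inflection point of the fitted amplification curve."""
    (fmax, b, c), _ = curve_fit(logistic, cycles, fluorescence,
                                p0=[fluorescence.max(), 1.0, cycles.mean()])
    slope = fmax / (4.0 * b)        # logistic slope at inflection x = c
    return c - (fmax / 2.0) / slope # tangent crosses y = 0 here

cycles = np.arange(1, 41, dtype=float)
fluor = logistic(cycles, 100.0, 1.7, 24.0) \
    + np.random.default_rng(2).normal(0, 0.5, 40)   # simulated run
print(f"Cy0 = {cy0(cycles, fluor):.2f} cycles")
```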

  13. Flow cytometry for intracellular SPION quantification: specificity and sensitivity in comparison with spectroscopic methods

    Directory of Open Access Journals (Sweden)

    Friedrich RP

    2015-06-01

    Full Text Available Due to their special physicochemical properties, iron nanoparticles offer new promising possibilities for biomedical applications. For bench-to-bedside translation of superparamagnetic iron oxide nanoparticles (SPIONs), safety issues have to be comprehensively clarified. To understand concentration-dependent nanoparticle-mediated toxicity, the exact quantification of intracellular SPIONs by reliable methods is of great importance. In the present study, we compared three different SPION quantification methods (ultraviolet spectrophotometry, magnetic particle spectroscopy, atomic absorption spectroscopy) and discussed the shortcomings and advantages of each method. Moreover, we used those results to evaluate the possibility of using flow cytometry to determine the cellular SPION content. For this purpose, we correlated the side scatter data received from flow cytometry with the actual cellular SPION amount. We showed that flow cytometry provides a rapid and reliable method to assess the cellular SPION content. Our data also demonstrate that internalization of iron oxide nanoparticles in human

  14. Calibration transfer of a Raman spectroscopic quantification method for the assessment of liquid detergent compositions between two at-line instruments installed at two liquid detergent production plants.

    Science.gov (United States)

    Brouckaert, D; Uyttersprot, J-S; Broeckx, W; De Beer, T

    2017-09-01

    Calibration transfer of partial least squares (PLS) quantification models is established between two Raman spectrometers located at two liquid detergent production plants. As full recalibration of existing calibration models is time-consuming, labour-intensive and costly, it is investigated whether the use of mathematical correction methods requiring only a handful of standardization samples can overcome the dissimilarities in spectral response observed between both measurement systems. Univariate and multivariate standardization approaches are investigated, ranging from simple slope/bias correction (SBC), local centring (LC) and single wavelength standardization (SWS) to more complex direct standardization (DS) and piecewise direct standardization (PDS). The results of these five calibration transfer methods are compared with one another, as well as with a full recalibration. Four PLS quantification models, each predicting the concentration of one of the four main ingredients in the studied liquid detergent composition, are targeted for transfer. Accuracy profiles are established from the original and transferred quantification models for validation purposes. A reliable representation of the calibration models' performance before and after transfer is thus established, based on β-expectation tolerance intervals. For each transferred model, it is investigated whether every future measurement that will be performed in routine use will be close enough to the unknown true value of the sample. From this validation, it is concluded that instrument standardization is successful for three out of four investigated calibration models using multivariate (DS and PDS) transfer approaches. The fourth transferred PLS model could not be validated over the investigated concentration range, due to a lack of precision of the slave instrument. Comparing these transfer results to a full recalibration on the slave instrument allows comparison of the predictive power of both Raman spectrometers.
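
    Of the transfer approaches listed, slope/bias correction is the simplest: predictions from the slave instrument are regressed onto master predictions for a handful of standardization samples. A sketch with hypothetical PLS predictions:

```python
import numpy as np

def slope_bias_correction(master_pred, slave_pred):
    """Fit the SBC transform on a few standardization samples:
    predictions from the slave instrument are mapped onto the master
    scale with a single slope and bias."""
    slope, bias = np.polyfit(slave_pred, master_pred, 1)
    return lambda y: slope * y + bias

# Hypothetical PLS predictions for 5 standardization samples (wt%).
master = np.array([10.2, 12.1, 14.0, 15.9, 18.1])
slave = np.array([9.6, 11.3, 13.4, 15.0, 17.2])
correct = slope_bias_correction(master, slave)
print(correct(np.array([12.0, 16.0])))   # corrected slave predictions
```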

  15. Method and apparatus for optical phase error correction

    Science.gov (United States)

    DeRose, Christopher; Bender, Daniel A.

    2014-09-02

    The phase value of a phase-sensitive optical device, which includes an optical transport region, is modified by laser processing. At least a portion of the optical transport region is exposed to a laser beam such that the phase value is changed from a first phase value to a second phase value, where the second phase value is different from the first phase value. The portion of the optical transport region that is exposed to the laser beam can be a surface of the optical transport region or a portion of the volume of the optical transport region. In an embodiment of the invention, the phase value of the optical device is corrected by laser processing. At least a portion of the optical transport region is exposed to a laser beam until the phase value of the optical device is within a specified tolerance of a target phase value.

  16. Genomes correction and assembling: present methods and tools

    Science.gov (United States)

    Wojcieszek, Michał; Pawełkowicz, Magdalena; Nowak, Robert; Przybecki, Zbigniew

    2014-11-01

    The recent rapid development of next generation sequencing (NGS) technologies has had a significant impact on the field of genomics, enabling the implementation of many de novo sequencing projects for new species that were previously precluded by technological costs. Along with the advancement of NGS there was a need for adjustments in assembly programs. New algorithms must cope with the computation of massive amounts of data within reasonable time limits, and processing power and hardware are also important factors. In this paper, we address the issue of the assembly pipeline for de novo genome assembly as provided by programs presently available to scientists, both as commercial and as open-source software. The implementation of four different approaches - Greedy, Overlap-Layout-Consensus (OLC), De Bruijn and Integrated - resulting in varying performance, is the main focus of our discussion, with additional insight into the issue of short and long read correction.

  17. Texture analysis by the Schulz reflection method: Defocalization corrections for thin films

    International Nuclear Information System (INIS)

    Chateigner, D.; Germi, P.; Pernet, M.

    1992-01-01

    A new method is described for correcting experimental data obtained from the texture analysis of thin films. The analysis employed for correcting the data usually requires the experimental curves of defocalization for a randomly oriented specimen. In view of difficulties in finding non-oriented films, a theoretical method for these corrections is proposed which uses the defocalization evolution for a bulk sample, the film thickness and the penetration depth of the incident beam in the material. This correction method is applied to a film of YBa 2 Cu 3 O 7-δ on an SrTiO 3 single-crystal substrate. (orig.)

  18. Band extension in digital methods of transfer function determination – signal conditioners asymmetry error corrections

    Directory of Open Access Journals (Sweden)

    Zbigniew Staroszczyk

    2014-12-01

    Full Text Available Abstract. In the paper, a calibration method for error correction in transfer function determination with the use of DSP is proposed. The correction limits or eliminates the influence of the input/output signal conditioners on the transfer functions estimated for the investigated object. The method exploits a frequency-domain descriptor of the conditioning paths found during a training observation made on a known reference object. Keywords: transfer function, band extension, error correction, phase errors

  19. Automated quantification of renal interstitial fibrosis for computer-aided diagnosis: A comprehensive tissue structure segmentation method.

    Science.gov (United States)

    Tey, Wei Keat; Kuang, Ye Chow; Ooi, Melanie Po-Leen; Khoo, Joon Joon

    2018-03-01

    Interstitial fibrosis in renal biopsy samples is a scarring tissue structure that may be visually quantified by pathologists as an indicator of the presence and extent of chronic kidney disease. The standard method of quantification by visual evaluation presents reproducibility issues in the diagnoses due to the uncertainties in human judgement. This study proposes an automated quantification system for measuring the amount of interstitial fibrosis in renal biopsy images as a consistent basis of comparison among pathologists. The system extracts and segments the renal tissue structures based on colour information and structural assumptions about the tissue structures. The regions in the biopsy representing interstitial fibrosis are deduced through the elimination of non-interstitial-fibrosis structures from the biopsy area and quantified as a percentage of the total area of the biopsy sample. A ground truth image dataset was manually prepared in consultation with an experienced pathologist for the validation of the segmentation algorithms. The results from experiments involving experienced pathologists demonstrated a good correlation in quantification results between the automated system and the pathologists' visual evaluation. Experiments investigating the variability among pathologists also proved the automated quantification error rate to be on par with the average intra-observer variability in pathologists' quantification.

  20. Correction to the method of Talmadge and Fitch

    International Nuclear Information System (INIS)

    Sincero, A.P.

    2002-01-01

    The method of Talmadge and Fitch used for calculating thickener areas was published in 1955. Although in the United States this method has largely been superseded by the solids flux method, other parts of the world use it even up to the present. The method, however, is erroneous, and this needs to be known to potential users. The error lies in the assumption that the underflow concentration, C u , and the time of thickening, t u , in a continuous-flow thickener can be obtained from data measured in a single batch settling test. This paper will show that this assumption is incorrect. (author)

  1. Development of a method for detection and quantification of B. brongniartii and B. bassiana in soil

    Science.gov (United States)

    Canfora, L.; Malusà, E.; Tkaczuk, C.; Tartanus, M.; Łabanowska, B. H.; Pinzari, F.

    2016-03-01

    A culture-independent method based on qPCR was developed for the detection and quantification of two fungal inoculants in soil. The aim was to adapt a genotyping approach based on SSR (Simple Sequence Repeat) markers to the discriminating tracing of two different species of bio-inoculants in soil after their in-field release. Two entomopathogenic fungi, Beauveria bassiana and B. brongniartii, were traced and quantified in soil samples obtained from field trials. These two fungal species were used as biological control agents in Poland against Melolontha melolontha (European cockchafer), whose larvae live in soil and threaten horticultural crops. Specificity of the SSR markers was verified using controls consisting of: i) soil samples containing fungal spores of B. bassiana and B. brongniartii in known dilutions; ii) the DNA of the fungal microorganisms; iii) soil samples singly inoculated with each fungal species. An initial evaluation of the protocol was performed with analyses of soil DNA and mycelial DNA. Further, the simultaneous detection and quantification of B. bassiana and B. brongniartii in soil was achieved in field samples after application of the bio-inoculants. The protocol can be considered a relatively low-cost solution for the detection, identification and traceability of fungal bio-inoculants in soil.

  2. Radiation dose determines the method for quantification of DNA double strand breaks

    International Nuclear Information System (INIS)

    Bulat, Tanja; Keta, Olitija; Korićanac, Lela; Žakula, Jelena; Petrović, Ivan; Ristić-Fira, Aleksandra; Todorović, Danijela

    2016-01-01

    Ionizing radiation induces DNA double strand breaks (DSBs) that trigger phosphorylation of the histone protein H2AX (γH2AX). Immunofluorescent staining visualizes the formation of γH2AX foci, allowing their quantification. This method, as opposed to the Western blot assay and flow cytometry, provides more accurate analysis by showing the exact position and intensity of the fluorescent signal in each single cell. In practice, however, there are problems in the quantification of γH2AX. This paper addresses two issues: which technique should be applied for a given radiation dose, and how to analyze fluorescence microscopy images obtained with different microscopes. HTB140 melanoma cells were exposed to γ-rays in the dose range from 1 to 16 Gy. Radiation effects at the DNA level were analyzed at different time intervals after irradiation by Western blot analysis and immunofluorescence microscopy. Immunochemically stained cells were visualized with two types of microscopes: an AxioVision (Zeiss, Germany) microscope comprising ApoTome software, and an AxioImagerA1 microscope (Zeiss, Germany). The results show that the level of γH2AX is time and dose dependent. Immunofluorescence microscopy provided better detection of DSBs for lower irradiation doses, while Western blot analysis was more reliable for higher irradiation doses. The AxioVision microscope containing the ApoTome software was more suitable for the detection of γH2AX foci. (author)

  3. A HPLC method for the quantification of butyramide and acetamide at ppb levels in hydrogeothermal waters

    Energy Technology Data Exchange (ETDEWEB)

    Gracy Elias; Earl D. Mattson; Jessica E. Little

    2012-01-01

    A quantitative analytical method to determine butyramide (BA) and acetamide (AA) concentrations at low ppb levels in geothermal waters has been developed. The analytes are concentrated in a preparation step by evaporation and analyzed using HPLC-UV. Chromatographic separation is achieved isocratically with a RP C-18 column using a 30 mM phosphate buffer solution with 5 mM heptane sulfonic acid and methanol (98:2 ratio) as the mobile phase. Absorbance is measured at 200 nm. The limits of detection (LOD) for BA and AA were 2.0 {mu}g L{sup -1} and 2.5 {mu}g L{sup -1}, respectively. The limits of quantification (LOQ) for BA and AA were 5.7 {mu}g L{sup -1} and 7.7 {mu}g L{sup -1}, respectively, at the detection wavelength of 200 nm. Attaining these levels of quantification allows these amides to be better used as thermally reactive tracers in low-temperature hydrogeothermal systems.

  4. Rapid and simple colorimetric method for the quantification of AI-2 produced from Salmonella Typhimurium.

    Science.gov (United States)

    Wattanavanitchakorn, Siriluck; Prakitchaiwattana, Cheunjit; Thamyongkit, Patchanita

    2014-04-01

    The aim of this study was to evaluate the feasibility of Fe(III) ion reduction for the simple and rapid quantification of autoinducer-2 (AI-2) produced from bacteria, using Salmonella Typhimurium as a model. Since the molecular structure of AI-2 is somewhat similar to that of ascorbic acid, it was expected that AI-2 would also act as a reducing agent and reduce Fe(III) ions in the presence of 1,10-phenanthroline to form the colored [(o-phen)3 Fe(II)]SO4 ferroin complex that could be quantified colorimetrically. In support of this, colony rinses and cell-free supernatants from cultures of all tested AI-2 producing strains, but not the AI-2 negative Sinorhizobium meliloti, formed a colored complex with a λmax of 510 nm. The OD510 values of these culture supernatants or colony rinses were in broad agreement with the % activity observed in the same samples using the standard Vibrio harveyi bioluminescence assay for AI-2 detection, and with previously reported results. This methodology could potentially be developed as an alternative method for the simple and rapid quantification of AI-2 levels produced in bacterial cultures.

  5. Radiation dose determines the method for quantification of DNA double strand breaks

    Energy Technology Data Exchange (ETDEWEB)

    Bulat, Tanja; Keta, Olitija; Korićanac, Lela; Žakula, Jelena; Petrović, Ivan; Ristić-Fira, Aleksandra [University of Belgrade, Vinča Institute of Nuclear Sciences, Belgrade (Serbia); Todorović, Danijela, E-mail: dtodorovic@medf.kg.ac.rs [University of Kragujevac, Faculty of Medical Sciences, Kragujevac (Serbia)

    2016-03-15

    Ionizing radiation induces DNA double strand breaks (DSBs) that trigger phosphorylation of the histone protein H2AX (γH2AX). Immunofluorescent staining visualizes the formation of γH2AX foci, allowing their quantification. This method, as opposed to the Western blot assay and flow cytometry, provides more accurate analysis by showing the exact position and intensity of the fluorescent signal in each single cell. In practice, however, there are problems in the quantification of γH2AX. This paper addresses two issues: which technique should be applied for a given radiation dose, and how to analyze fluorescence microscopy images obtained with different microscopes. HTB140 melanoma cells were exposed to γ-rays in the dose range from 1 to 16 Gy. Radiation effects at the DNA level were analyzed at different time intervals after irradiation by Western blot analysis and immunofluorescence microscopy. Immunochemically stained cells were visualized with two types of microscopes: an AxioVision (Zeiss, Germany) microscope comprising ApoTome software, and an AxioImagerA1 microscope (Zeiss, Germany). The results show that the level of γH2AX is time and dose dependent. Immunofluorescence microscopy provided better detection of DSBs for lower irradiation doses, while Western blot analysis was more reliable for higher irradiation doses. The AxioVision microscope containing the ApoTome software was more suitable for the detection of γH2AX foci. (author)

  6. Methods of direct (non-chromatographic) quantification of body metabolites utilizing chemical ionization mass spectrometry

    International Nuclear Information System (INIS)

    Mee, J.M.L.

    1978-01-01

    For the quantitative determination of known metabolites from biological samples by direct chemical ionization mass spectrometry (CI-MS), the method of internal standards using stable isotopically labelled analogs appears to be the method of choice. In cases where stable isotope ratio determinations cannot be applied, an alternative quantification can be achieved using non-labelled external or internal standards and a calibration curve (sum of peak heights per a given number of scans versus concentration). The technique of computer monitoring permits display and plotting of ion current profiles (TIC and SIC) or spectra per a given number of scans or a given range of mass per charge. Examples are given from areas of clinical application, and the quantitative data show very good agreement with conventional chromatographic measurements. (Auth.)

  7. A new digitized reverse correction method for hypoid gears based on a one-dimensional probe

    Science.gov (United States)

    Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo

    2017-12-01

    In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on the measurement data from a one-dimensional probe. The minimization of tooth surface geometrical deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles and the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretical designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction method of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of the method of numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed by the organic combination of numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are proved through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.
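
    The reverse-correction loop can be pictured as an iterative least-squares solve of a linearized sensitivity relation between machine-tool settings and tooth-surface deviations. The sketch below is generic and synthetic, not the authors' gear model:

```python
import numpy as np

def reverse_correct(settings, measure_deviations, sensitivity, n_iter=5):
    """Generic sketch of the reverse-correction idea: measured surface
    deviations are driven toward zero by repeatedly updating the
    machine-tool settings through a least-squares solve of
    deviations ~= sensitivity @ delta_settings."""
    for _ in range(n_iter):
        dev = measure_deviations(settings)           # grid of deviations
        delta, *_ = np.linalg.lstsq(sensitivity, -dev, rcond=None)
        settings = settings + delta
    return settings

# Hypothetical case: 3 machine settings influencing 9 grid-point deviations.
rng = np.random.default_rng(3)
J = rng.normal(size=(9, 3))
true = np.array([0.02, -0.01, 0.005])
measure = lambda p: J @ (p - true)                   # linearized stand-in
print(reverse_correct(np.zeros(3), measure, J))      # converges to `true`
```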

  8. Short overview of PSA quantification methods, pitfalls on the road from approximate to exact results

    International Nuclear Information System (INIS)

    Banov, Reni; Simic, Zdenko; Sterc, Davor

    2014-01-01

    Over time, Probabilistic Safety Assessment (PSA) models have become an invaluable companion in the identification and understanding of key nuclear power plant (NPP) vulnerabilities. PSA is an effective tool for this purpose as it assists plant management in targeting resources where the largest benefit for plant safety can be obtained. PSA has quickly become an established technique to numerically quantify risk measures in nuclear power plants. As the complexity of PSA models increases, the computational approaches become more or less feasible. The various computational approaches can basically be classified in two major groups: approximate and exact (BDD-based) methods. In recent times, modern commercially available PSA tools have started to provide both methods for PSA model quantification. Although both methods are available in proven PSA tools, they must still be used carefully, since there are many pitfalls that can lead to wrong conclusions and prevent efficient usage of the PSA tool. For example, typical pitfalls involve using a higher-precision approximation method and getting a less precise result, or mixing minimal cuts and prime implicants in the exact computation method. The exact methods are sensitive to the selected computational paths, in which case a simple human-assisted rearrangement may help and even switch from computationally infeasible to feasible methods. Further improvements to the exact methods are possible and desirable, which opens space for new research. In this paper we will show how these pitfalls may be detected and how carefully actions must be taken, especially when working with large PSA models. (authors)
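
    The gap between approximate and exact quantification can be seen on a toy fault tree: the rare-event approximation and the min cut upper bound bracket the exact union probability, which inclusion-exclusion yields for small models (assuming, purely for illustration, independent minimal cut sets; BDD-based tools handle the general case implicitly):

```python
from itertools import combinations
from math import prod

# Hypothetical minimal cut set probabilities of a small fault tree.
mcs = [1e-3, 5e-4, 2e-4]

# Rare-event approximation: simple sum (may exceed 1 for large models).
rare_event = sum(mcs)

# Min cut upper bound: 1 - prod(1 - P_i), always within [0, 1].
mcub = 1.0 - prod(1.0 - p for p in mcs)

# Exact result by inclusion-exclusion, feasible only for a few cut sets.
exact = sum((-1) ** (k + 1) * sum(prod(c) for c in combinations(mcs, k))
            for k in range(1, len(mcs) + 1))
print(rare_event, mcub, exact)
```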

  9. Surface Enhanced Raman Spectroscopy (SERS) methods for endpoint and real-time quantification of miRNA assays

    Science.gov (United States)

    Restaino, Stephen M.; White, Ian M.

    2017-03-01

    Surface Enhanced Raman spectroscopy (SERS) provides significant improvements over conventional methods for single- and multi-analyte quantification. Specifically, the spectroscopic fingerprint provided by Raman scattering allows for a direct multiplexing potential far beyond that of fluorescence and colorimetry. Additionally, SERS has a comparatively low financial and spatial footprint compared with common fluorescence-based systems. Despite these advantages, SERS has remained largely an academic pursuit. In the field of biosensing, techniques to apply SERS to molecular diagnostics are constantly under development but, most often, assay protocols are redesigned around the use of SERS as a quantification method and ultimately complicate existing protocols. Our group has sought to rethink common SERS methodologies in order to produce translational technologies capable of allowing SERS to compete in the evolving, yet often inflexible, biosensing field. This work discusses the development of two techniques for the quantification of microRNA, a promising biomarker for homeostatic and disease conditions ranging from cancer to HIV. First, an inkjet-printed paper SERS sensor has been developed to allow on-demand production of a customizable and multiplexable single-step lateral flow assay for miRNA quantification. Second, as miRNAs commonly exist at relatively low concentrations, amplification methods (e.g. PCR) are required to facilitate quantification. This work presents a novel miRNA assay alongside a novel technique for the quantification of nuclease-driven nucleic acid amplification strategies that will allow SERS to be used directly with common amplification strategies for the quantification of miRNA and other nucleic acid biomarkers.

  10. Analysis and development of methods of correcting for heterogeneities to cobalt-60: computing application

    International Nuclear Information System (INIS)

    Kappas, K.

    1982-11-01

    The purpose of this work is the analysis of the influence of inhomogeneities of the human body on the determination of the dose in Cobalt-60 radiation therapy. The first part is dedicated to the physical characteristics of inhomogeneities and to the conventional methods of correction. New methods of correction are proposed based on the analysis of the scatter. This analysis makes it possible to take into account, with greater accuracy, the physical characteristics of the inhomogeneities and the corresponding modifications of the dose: ''the differential TAR method'' and ''the beam subtraction method''. The second part is dedicated to the computer implementation of the second method of correction for routine application in hospitals.

  11. A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield.

    Science.gov (United States)

    Ringard, Justine; Seyler, Frederique; Linguet, Laurent

    2017-06-16

    Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach, without taking into account the daily gauge data of the pixel to be corrected, but rather the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that varying the scale of analysis reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale.
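
    Empirical quantile mapping itself is compact: each satellite value is replaced by the gauge value at the same quantile of the training distributions pooled over a hydroclimatic area. A sketch with synthetic rainfall data (the Guiana Shield series are not reproduced here):

```python
import numpy as np

def quantile_map(sat, gauge, sat_new):
    """Empirical quantile mapping: replace each new satellite value by
    the gauge value occupying the same quantile, using the training
    distributions from one hydroclimatic area."""
    quantiles = np.linspace(0.0, 1.0, 101)
    sat_q = np.quantile(sat, quantiles)
    gauge_q = np.quantile(gauge, quantiles)
    return np.interp(sat_new, sat_q, gauge_q)

# Hypothetical daily rainfall (mm) pooled over one hydroclimatic area.
rng = np.random.default_rng(4)
sat = rng.gamma(0.6, 12.0, 2000)      # satellite product, biased wet
gauge = rng.gamma(0.6, 9.0, 2000)     # rain gauges in the same area
print(quantile_map(sat, gauge, np.array([5.0, 20.0, 60.0])))
```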

  12. Correction of quantification errors in pelvic and spinal lesions caused by ignoring higher photon attenuation of bone in [{sup 18}F]NaF PET/MR

    Energy Technology Data Exchange (ETDEWEB)

    Schramm, Georg, E-mail: georg.schramm@kuleuven.be; Maus, Jens; Hofheinz, Frank; Petr, Jan; Lougovski, Alexandr [Helmholtz-Zentrum Dresden-Rossendorf, Institute of Radiopharmaceutical Cancer Research, Dresden 01328 (Germany); Beuthien-Baumann, Bettina; Oehme, Liane [Department of Nuclear Medicine, University Hospital Carl Gustav Carus, Dresden 01307 (Germany); Platzek, Ivan [Department of Radiology, University Hospital Carl Gustav Carus, Dresden 01307 (Germany); Hoff, Jörg van den [Helmholtz-Zentrum Dresden-Rossendorf, Institute for Radiopharmaceutical Cancer Research, Dresden 01328 (Germany); Department of Nuclear Medicine, University Hospital Carl Gustav Carus, Dresden 01307 (Germany)

    2015-11-15

    Purpose: MR-based attenuation correction (MRAC) in routine clinical whole-body positron emission tomography and magnetic resonance imaging (PET/MRI) is based on tissue type segmentation. Due to lack of MR signal in cortical bone and the varying signal of spongeous bone, standard whole-body segmentation-based MRAC ignores the higher attenuation of bone compared to the one of soft tissue (MRAC{sub nobone}). The authors aim to quantify and reduce the bias introduced by MRAC{sub nobone} in the standard uptake value (SUV) of spinal and pelvic lesions in 20 PET/MRI examinations with [{sup 18}F]NaF. Methods: The authors reconstructed 20 PET/MR [{sup 18}F]NaF patient data sets acquired with a Philips Ingenuity TF PET/MRI. The PET raw data were reconstructed with two different attenuation images. First, the authors used the vendor-provided MRAC algorithm that ignores the higher attenuation of bone to reconstruct PET{sub nobone}. Second, the authors used a threshold-based algorithm developed in their group to automatically segment bone structures in the [{sup 18}F]NaF PET images. Subsequently, an attenuation coefficient of 0.11 cm{sup −1} was assigned to the segmented bone regions in the MRI-based attenuation image (MRAC{sub bone}) which was used to reconstruct PET{sub bone}. The automatic bone segmentation algorithm was validated in six PET/CT [{sup 18}F]NaF examinations. Relative SUV{sub mean} and SUV{sub max} differences between PET{sub bone} and PET{sub nobone} of 8 pelvic and 41 spinal lesions, and of other regions such as lung, liver, and bladder, were calculated. By varying the assigned bone attenuation coefficient from 0.11 to 0.13 cm{sup −1}, the authors investigated its influence on the reconstructed SUVs of the lesions. Results: The comparison of [{sup 18}F]NaF-based and CT-based bone segmentation in the six PET/CT patients showed a Dice similarity of 0.7 with a true positive rate of 0.72 and a false discovery rate of 0.33. The [{sup 18}F]NaF-based bone
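
    The correction amounts to patching the segmentation-based attenuation map wherever bone is detected. A simplified sketch, using a plain uptake threshold as a stand-in for the authors' segmentation algorithm and synthetic arrays:

```python
import numpy as np

def add_bone_to_mumap(mumap, naf_pet, pet_threshold, mu_bone=0.11):
    """Patch an MR-based attenuation map: voxels whose [18F]NaF uptake
    exceeds a threshold are treated as bone and assigned mu_bone (1/cm),
    mirroring the idea of the threshold-based segmentation described
    above (the authors' actual algorithm may differ in detail)."""
    patched = mumap.copy()
    patched[naf_pet > pet_threshold] = mu_bone
    return patched

# Tiny synthetic example: a soft-tissue MRAC map and an NaF image with
# high uptake in two "vertebra" voxels.
mumap = np.full((4, 4), 0.096)                 # soft tissue, 1/cm
naf = np.zeros((4, 4)); naf[1:3, 2] = 12.0     # SUV-like units
print(add_bone_to_mumap(mumap, naf, pet_threshold=5.0))
```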

  13. Leak Rate Quantification Method for Gas Pressure Seals with Controlled Pressure Differential

    Science.gov (United States)

    Daniels, Christopher C.; Braun, Minel J.; Oravec, Heather A.; Mather, Janice L.; Taylor, Shawn C.

    2015-01-01

    An enhancement to the pressure decay leak rate method with mass point analysis solved deficiencies in the standard method. By adding a control system, a constant gas pressure differential across the test article was maintained. As a result, the desired pressure condition was met at the onset of the test, and the mass leak rate and measurement uncertainty were computed in real-time. The data acquisition and control system were programmed to automatically stop when specified criteria were met. Typically, the test was stopped when a specified level of measurement uncertainty was attained. Using silicone O-ring test articles, the new method was compared with the standard method that permitted the downstream pressure to be non-constant atmospheric pressure. The two methods recorded comparable leak rates, but the new method recorded leak rates with significantly lower measurement uncertainty, statistical variance, and test duration. Utilizing this new method in leak rate quantification, projects will reduce cost and schedule, improve test results, and ease interpretation between data sets.
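
    Mass point analysis under a constant pressure differential reduces to converting pressure to gas mass with the ideal gas law and regressing mass against time. A sketch with simulated data (gas properties, volume, and pressures are illustrative only):

```python
import numpy as np

R_AIR = 287.05  # specific gas constant of dry air, J/(kg*K)

def leak_rate(time_s, press_pa, temp_k, volume_m3):
    """Mass-point analysis for a pressure decay test: convert upstream
    pressure to gas mass with the ideal gas law, then take the slope of
    mass vs. time as the leak rate."""
    mass = press_pa * volume_m3 / (R_AIR * temp_k)
    slope, _ = np.polyfit(time_s, mass, 1)
    return -slope                      # kg/s leaked out of the volume

# Hypothetical test: a 2 L volume losing ~0.5 Pa/s at 296 K.
t = np.linspace(0.0, 600.0, 61)
p = 201_325.0 - 0.5 * t
T = np.full_like(t, 296.0)
print(f"{leak_rate(t, p, T, volume_m3=0.002):.3e} kg/s")
```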

  14. Quantification in single photon emission computed tomography (SPECT)

    International Nuclear Information System (INIS)

    Buvat, Irene

    2005-01-01

    The objective of this lecture is to understand the possibilities and limitations of the quantitative analysis of single photon emission computed tomography (SPECT) images. It is also to identify the conditions to be fulfilled to obtain reliable quantitative measurements from images. Content: 1 - Introduction: Quantification in emission tomography - definition and challenges; quantification biasing phenomena; 2 - Quantification in SPECT, problems and correction methods: attenuation, scattering, non-stationary spatial resolution, partial volume effect, movement, tomographic reconstruction, calibration; 3 - Synthesis: actual quantification accuracy; 4 - Beyond the activity concentration measurement

  15. Technical note: Development and validation of a new method for the quantification of soluble and micellar calcium, magnesium, and potassium in milk.

    Science.gov (United States)

    Franzoi, M; Niero, G; Penasa, M; Cassandro, M; De Marchi, M

    2018-03-01

    Milk mineral content is a key trait for its role in dairy processes such as cheese-making, its use as a source of minerals for newborns, and for all traits involving salt-protein interactions. This study investigated a new method for measuring mineral partitioning between the soluble and micellar fractions of bovine milk after rennet coagulation. A new whey dilution step was added to correct the quantification bias due to whey trapped in the curd and excluded volume. Moreover, the proposed method allowed quantification of the diffusible volume after milk coagulation. Mineral content in milk, whey, and diluted whey was quantified by acid digestion and inductively coupled plasma optical emission spectrometry. The repeatability of the method for micellar Ca, Mg, and K was between 2.07 and 8.96%, whereas reproducibility ranged from 4.01 to 9.44%. Recovery of total milk minerals over 3 spiking levels ranged from 92 to 97%. The proposed method provided an accurate estimation of micellar and soluble minerals in milk, and of the curd diffusible volume. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  16. A Method for Quantification of Epithelium Colonization Capacity by Pathogenic Bacteria

    DEFF Research Database (Denmark)

    Micha Pedersen, Rune; Grønnemose, Rasmus Birkholm; Stærk, Kristian

    2018-01-01

    Most bacterial infections initiate at the mucosal epithelium lining the gastrointestinal, respiratory, and urogenital tracts. At these sites, bacterial pathogens must adhere and increase in numbers to effectively breach the outer barrier and invade the host. If the bacterium succeeds in reaching the bloodstream, effective dissemination again requires that bacteria in the blood reestablish contact to distant endothelium sites and form secondary site foci. The infectious potential of bacteria is therefore closely linked to their ability to adhere to, colonize, and invade epithelial and endothelial surfaces. ... Here, we present a method in which epithelia/endothelia are simulated by flow chamber-grown human cell layers, and infection is induced by seeding of pathogenic bacteria on these surfaces under conditions that simulate the physiological microenvironment. Quantification of bacterial adhesion and colonization of the cell ...

  17. Comparative analysis of experimental methods for quantification of small amounts of oil in water

    DEFF Research Database (Denmark)

    Katika, Konstantina; Ahkami, Mehrdad; Fosbøl, Philip Loldrup

    2016-01-01

    ... and the quantification of oil is then difficult. In this study, we compare four approaches to determine the volume of the collected oil fraction in core flooding effluents. The four methods are: image analysis, UV/visible spectroscopy, liquid scintillation counting, and low-field nuclear magnetic resonance (NMR) ... comparison to a pre-made standard curve. Image analysis, UV/visible spectroscopy, and liquid scintillation counting quantify only the oil fraction by comparing with a pre-made standard curve. The image analysis technique is reliable when more than 0.1 ml oil is present, whereas liquid scintillation counting performs well when less than 0.6 ml oil is present. Both UV/visible spectroscopy and NMR spectrometry produced high-accuracy results in the entire studied range (0.006-1.1 ml). In terms of laboratory time, liquid scintillation counting is the fastest and least user dependent, whereas the NMR ...

  18. An HPLC-DAD method to quantification of main phenolic compounds from leaves of Cecropia species

    International Nuclear Information System (INIS)

    Costa, Geison M.; Ortmann, Caroline F.; Schenkel, Eloir P.; Reginatto, Flavio H.

    2011-01-01

    An efficient and reproducible HPLC-DAD method was developed and validated for the simultaneous quantification of the major compounds (chlorogenic acid, isoorientin, orientin and isovitexin) present in the leaves of two Cecropia species, C. glaziovii and C. pachystachya. The C-glycosylflavones isoorientin and isovitexin were isolated from the leaves of both species, and chlorogenic acid (3-O-caffeoylquinic acid) and the O-glycosylflavonol isoquercitrin were identified in both species. The C-glycosylflavone orientin was isolated only from C. pachystachya. Chlorogenic acid was the major compound in both species (11.1 mg g⁻¹ of extract for C. glaziovii and 27.2 mg g⁻¹ of extract for C. pachystachya); among the flavonoids quantified, isovitexin was the main C-glycosylflavonoid for C. glaziovii (4.6 mg g⁻¹ of extract) and isoorientin the main one for C. pachystachya (17.3 mg g⁻¹ of extract). (author)

  19. Evaluation and parameterization of ATCOR3 topographic correction method for forest cover mapping in mountain areas

    Science.gov (United States)

    Balthazar, Vincent; Vanacker, Veerle; Lambin, Eric F.

    2012-08-01

    A topographic correction of optical remote sensing data is necessary to improve the quality of quantitative forest cover change analyses in mountainous terrain. The implementation of semi-empirical correction methods requires the calibration of model parameters that are empirically defined. This study develops a method to improve the performance of topographic corrections for forest cover change detection in mountainous terrain through an iterative tuning of model parameters based on a systematic evaluation of the performance of the correction. The evaluation was based on: (i) the general matching of reflectances between sunlit and shaded slopes and (ii) the occurrence of abnormal reflectance values, qualified as statistical outliers, in very low illuminated areas. The method was tested on Landsat ETM+ data for rough (Ecuadorian Andes) and very rough mountainous terrain (Bhutan Himalayas). Compared to a reference level (no topographic correction), the ATCOR3 semi-empirical correction method resulted in a considerable reduction of dissimilarities between reflectance values of forested sites in different topographic orientations. Our results indicate that optimal parameter combinations depend on the site, sun elevation and azimuth, and spectral conditions. We demonstrate that the results of relatively simple topographic correction methods can be greatly improved through a feedback loop between parameter tuning and evaluation of the performance of the correction model.
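
    The feedback loop between parameter tuning and performance evaluation can be paraphrased as a grid search driven by the two criteria named above. The sketch below is schematic: correct_fn stands in for an ATCOR3 run, and the masks, parameter grid, and outlier weighting are assumptions rather than the authors' settings.

        import numpy as np
        from itertools import product

        def correction_score(reflectance, sunlit_mask, shaded_mask, low_illum_mask):
            """Lower is better: sunlit/shaded mismatch plus an outlier penalty
            computed over very low illuminated areas (IQR rule)."""
            mismatch = abs(reflectance[sunlit_mask].mean() - reflectance[shaded_mask].mean())
            low = reflectance[low_illum_mask]
            q1, q3 = np.percentile(low, [25, 75])
            iqr = q3 - q1
            n_outliers = np.sum((low < q1 - 1.5 * iqr) | (low > q3 + 1.5 * iqr))
            return mismatch + 0.01 * n_outliers   # relative weighting is a placeholder

        def tune_parameters(correct_fn, image, masks, param_grid):
            """Iterative tuning: evaluate every parameter combination, keep the best."""
            best_score, best_params = np.inf, None
            for values in product(*param_grid.values()):
                params = dict(zip(param_grid.keys(), values))
                score = correction_score(correct_fn(image, **params), *masks)
                if score < best_score:
                    best_score, best_params = score, params
            return best_params, best_score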

  20. Research on 3-D terrain correction methods of airborne gamma-ray spectrometry survey

    International Nuclear Information System (INIS)

    Liu Yanyang; Liu Qingcheng; Zhang Zhiyong

    2008-01-01

    The general method of height correction is not effective in complex terrain when interpreting airborne gamma-ray spectrometry data, and the 2-D terrain correction method researched in recent years is only applicable to the correction of measured sections. A new method of 3-D sector terrain correction is studied. In this method, the ground radiator is divided into many small sector radiators, the irradiation rate is calculated at a given survey distance, and the total value over all small radiating sources is regarded as the irradiation rate of the ground radiator at a given point of the aerial survey; correction coefficients are then calculated for every point and applied to the airborne gamma-ray spectrometry data. By dividing the ground radiator into many small sectors, the method can achieve forward calculation, inversion calculation and terrain correction for airborne gamma-ray spectrometry surveys in complex topography. Other factors are also considered, such as the unsaturated degree of the measurement scope and uneven radiator content on the ground. The results of a forward model and an example analysis show that the 3-D terrain correction method is proper and effective. (authors)
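
    Numerically, the sector decomposition amounts to summing point-like contributions from small ground elements and forming a flat-to-real terrain ratio. The sketch below is a simplified reading of the abstract: the air attenuation coefficient, the inverse-square kernel and the function names are assumptions, not the authors' exact formulation.

        import numpy as np

        MU_AIR = 0.006  # effective air attenuation [1/m]; placeholder value

        def irradiation_rate(detector_xyz, elements_xyz, areas, contents):
            """Total irradiation rate at the detector from many small sector
            radiators; uneven ground content enters through `contents`."""
            d = np.asarray(elements_xyz, float) - np.asarray(detector_xyz, float)
            r = np.linalg.norm(d, axis=1)
            return np.sum(contents * areas * np.exp(-MU_AIR * r) / (4.0 * np.pi * r ** 2))

        def terrain_correction(detector_xyz, flat_xyz, real_xyz, areas, contents):
            """Correction coefficient applied to the airborne spectrometry data:
            response over flat terrain divided by response over real terrain."""
            return (irradiation_rate(detector_xyz, flat_xyz, areas, contents) /
                    irradiation_rate(detector_xyz, real_xyz, areas, contents))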

  1. On the quantification of the dissolved hydroxyl radicals in the plasma-liquid system using the molecular probe method

    Science.gov (United States)

    Ma, Yupengxue; Gong, Xinning; He, Bangbang; Li, Xiaofei; Cao, Dianyu; Li, Junshuai; Xiong, Qing; Chen, Qiang; Chen, Bing Hui; Huo Liu, Qing

    2018-04-01

    Hydroxyl (OH) radical is one of the most important reactive species produced by plasma-liquid interactions, and the OH in the liquid phase (dissolved OH radical, OHdis) is effective in many plasma-based applications due to its high reactivity. Therefore, the quantification of the OHdis in a plasma-liquid system is of great importance, and a molecular probe method, commonly used for OHdis detection, can be applied. Herein, we investigate the validity of using the molecular probe method to estimate the [OHdis] in the plasma-liquid system. Dimethyl sulfoxide is used as the molecular probe to estimate the [OHdis] in an air plasma-liquid system, and the [OHdis] is usually deduced by quantifying the OHdis-induced derivative, formaldehyde (HCHO). The analysis indicates that the true concentration of the OHdis should be estimated from the sum of three terms: the HCHO formed, the OH taken by existing scavengers, and the H2O2 formed from the OHdis. The results show that the measured [HCHO] needs to be corrected, since HCHO consumption is not negligible in the plasma-liquid system. We conclude from the results and the analysis that the molecular probe method generally underestimates the [OHdis] in the plasma-liquid system. To obtain the true concentration of the OHdis in the plasma-liquid system, one needs to know the consumption behavior of the OHdis-induced derivatives, information on the OH scavengers (such as hydrated electrons and atomic hydrogen, besides the molecular probe), and the amount of H2O2 formed from the OHdis.
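
    The correction advocated here is essentially a mole balance. A minimal sketch, assuming a one-to-one OH-to-HCHO conversion for the DMSO probe (a simplification the paper refines) and the usual two-OH stoichiometry of H2O2 formation; all concentrations are in mol/L and the argument names are hypothetical:

        def oh_dis_estimate(hcho_measured, hcho_consumed, oh_scavenged, h2o2_from_oh):
            """Corrected [OH_dis]: measured HCHO plus HCHO lost to further plasma
            reactions, plus OH taken by other scavengers, plus 2x the H2O2
            formed by OH recombination (2 OH -> H2O2)."""
            return (hcho_measured + hcho_consumed) + oh_scavenged + 2.0 * h2o2_from_oh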

  2. A Method for Quantification of Epithelium Colonization Capacity by Pathogenic Bacteria

    Directory of Open Access Journals (Sweden)

    Rune M. Pedersen

    2018-02-01

    Full Text Available Most bacterial infections initiate at the mucosal epithelium lining the gastrointestinal, respiratory, and urogenital tracts. At these sites, bacterial pathogens must adhere and increase in numbers to effectively breach the outer barrier and invade the host. If the bacterium succeeds in reaching the bloodstream, effective dissemination again requires that bacteria in the blood, reestablish contact to distant endothelium sites and form secondary site foci. The infectious potential of bacteria is therefore closely linked to their ability to adhere to, colonize, and invade epithelial and endothelial surfaces. Measurement of bacterial adhesion to epithelial cells is therefore standard procedure in studies of bacterial virulence. Traditionally, such measurements have been conducted with microtiter plate cell cultures to which bacteria are added, followed by washing procedures and final quantification of retained bacteria by agar plating. This approach is fast and straightforward, but yields only a rough estimate of the adhesive properties of the bacteria upon contact, and little information on the ability of the bacterium to colonize these surfaces under relevant physiological conditions. Here, we present a method in which epithelia/endothelia are simulated by flow chamber-grown human cell layers, and infection is induced by seeding of pathogenic bacteria on these surfaces under conditions that simulate the physiological microenvironment. Quantification of bacterial adhesion and colonization of the cell layers is then performed by in situ time-lapse fluorescence microscopy and automatic detection of bacterial surface coverage. The method is demonstrated in three different infection models, simulating Staphylococcus aureus endothelial infection and Escherichia coli intestinal- and uroepithelial infection. The approach yields valuable information on the fitness of the bacterium to successfully adhere to and colonize epithelial surfaces and can be used

  3. Comparison between PET template-based method and MRI-based method for cortical quantification of florbetapir (AV-45) uptake in vivo

    Energy Technology Data Exchange (ETDEWEB)

    Saint-Aubert, L.; Nemmi, F.; Peran, P. [Inserm, Imagerie Cerebrale et Handicaps neurologiques UMR 825, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Centre Hospitalier Universitaire de Toulouse, Universite de Toulouse, UPS, Imagerie Cerebrale et Handicaps Neurologiques UMR 825, Toulouse (France); Barbeau, E.J. [Universite de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, France, CNRS, CerCo, Toulouse (France); Service de Neurologie, Pole Neurosciences, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Payoux, P. [Inserm, Imagerie Cerebrale et Handicaps neurologiques UMR 825, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Centre Hospitalier Universitaire de Toulouse, Universite de Toulouse, UPS, Imagerie Cerebrale et Handicaps Neurologiques UMR 825, Toulouse (France); Service de Medecine Nucleaire, Pole Imagerie, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Chollet, F.; Pariente, J. [Inserm, Imagerie Cerebrale et Handicaps neurologiques UMR 825, Centre Hospitalier Universitaire de Toulouse, Toulouse (France); Centre Hospitalier Universitaire de Toulouse, Universite de Toulouse, UPS, Imagerie Cerebrale et Handicaps Neurologiques UMR 825, Toulouse (France); Service de Neurologie, Pole Neurosciences, Centre Hospitalier Universitaire de Toulouse, Toulouse (France)

    2014-05-15

    Florbetapir (AV-45) has been shown to be a reliable tool for assessing in vivo amyloid load in patients with Alzheimer's disease from the early stages. However, nonspecific white matter binding has been reported in healthy subjects as well as in patients with Alzheimer's disease. To avoid this issue, cortical quantification might increase the reliability of AV-45 PET analyses. In this study, we compared two quantification methods for AV-45 binding, a classical method relying on PET template registration (route 1), and a MRI-based method (route 2) for cortical quantification. We recruited 22 patients at the prodromal stage of Alzheimer's disease and 17 matched controls. AV-45 binding was assessed using both methods, and target-to-cerebellum mean global standard uptake values (SUVr) were obtained for each of them, together with SUVr in specific regions of interest. Quantification using the two routes was compared between the clinical groups (intragroup comparison), and between groups for each route (intergroup comparison). Discriminant analysis was performed. In the intragroup comparison, differences in uptake values were observed between route 1 and route 2 in both groups. In the intergroup comparison, AV-45 uptake was higher in patients than controls in all regions of interest using both methods, but the effect size of this difference was larger using route 2. In the discriminant analysis, route 2 showed a higher specificity (94.1 % versus 70.6 %), despite a lower sensitivity (77.3 % versus 86.4 %), and D-prime values were higher for route 2. These findings suggest that, although both quantification methods enabled patients at early stages of Alzheimer's disease to be well discriminated from controls, PET template-based quantification seems adequate for clinical use, while the MRI-based cortical quantification method led to greater intergroup differences and may be more suitable for use in current clinical research. (orig.)

  4. Quantification of viral DNA during HIV-1 infection: A review of relevant clinical uses and laboratory methods.

    Science.gov (United States)

    Alidjinou, E K; Bocket, L; Hober, D

    2015-02-01

    Effective antiretroviral therapy usually leads to undetectable HIV-1 RNA in the plasma. However, the virus persists in some cells of infected patients as various DNA forms, both integrated and unintegrated. This reservoir represents the greatest challenge to the complete cure of HIV-1 infection, and its characteristics strongly influence the course of the disease. The quantification of HIV-1 DNA in blood samples currently constitutes the most practical approach to measuring this residual infection. Real-time quantitative PCR (qPCR) is the most common method used for HIV-DNA quantification, and many strategies have been developed to measure the different forms of HIV-1 DNA. In the literature, several "in-house" PCR methods have been used, and standardization is needed to obtain comparable results. In addition, qPCR is limited by background noise in the precise quantification of low levels. Among new assays in development, digital PCR was shown to allow an accurate quantification of HIV-1 DNA. Total HIV-1 DNA is most commonly measured in clinical routine. The absolute quantification of proviruses and unintegrated forms is more often used for research purposes. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  5. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    International Nuclear Information System (INIS)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo; Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro; Kato, Rikio

    2005-01-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99m Tc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I AC μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I AC μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99m Tc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)
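
    The IBSC estimate itself is a one-line image operation once the scatter function is fixed. A minimal sketch, with a Gaussian standing in for the scatter function and an assumed width; the true kernel and the image-based scatter fraction function are defined in the paper:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def ibsc_correct(i_ac, scatter_fraction, fwhm_mm=20.0, voxel_mm=2.0):
            """Subtract an image-based scatter estimate from the attenuation-
            corrected image I_AC: scatter = (I_AC convolved with kernel) x
            scatter fraction; returns the scatter-corrected image."""
            sigma_vox = fwhm_mm / (2.355 * voxel_mm)        # FWHM -> sigma in voxels
            scatter = gaussian_filter(i_ac, sigma_vox) * scatter_fraction
            return i_ac - scatter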

  6. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    Energy Technology Data Exchange (ETDEWEB)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo [National Center for Geriatrics and Gerontology Research Institute, Department of Brain Science and Molecular Imaging, Obu, Aichi (Japan); Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro [National Cardiovascular Center Research Institute, Department of Investigative Radiology, Suita (Japan); Kato, Rikio [National Center for Geriatrics and Gerontology, Department of Radiology, Obu (Japan)

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with {sup 99m}Tc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I{sub AC}{sup {mu}}{sup b} with Chang's attenuation correction factor. The scatter component image is estimated by convolving I{sub AC}{sup {mu}}{sup b} with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and {sup 99m}Tc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)

  7. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method.

    Science.gov (United States)

    Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I(mub)AC with Chang's attenuation correction factor. The scatter component image is estimated by convolving I(mub)AC with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.

  8. Quantification of organ motion based on an adaptive image-based scale invariant feature method

    Energy Technology Data Exchange (ETDEWEB)

    Paganelli, Chiara [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133 (Italy); Peroni, Marta [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133, Italy and Paul Scherrer Institut, Zentrum für Protonentherapie, WMSA/C15, CH-5232 Villigen PSI (Italy); Baroni, Guido; Riboldi, Marco [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133, Italy and Bioengineering Unit, Centro Nazionale di Adroterapia Oncologica, strada Campeggi 53, Pavia 27100 (Italy)

    2013-11-15

    Purpose: The availability of corresponding landmarks in IGRT image series allows quantifying the inter and intrafractional motion of internal organs. In this study, an approach for the automatic localization of anatomical landmarks is presented, with the aim of describing the nonrigid motion of anatomo-pathological structures in radiotherapy treatments according to local image contrast.Methods: An adaptive scale invariant feature transform (SIFT) was developed from the integration of a standard 3D SIFT approach with a local image-based contrast definition. The robustness and invariance of the proposed method to shape-preserving and deformable transforms were analyzed in a CT phantom study. The application of contrast transforms to the phantom images was also tested, in order to verify the variation of the local adaptive measure in relation to the modification of image contrast. The method was also applied to a lung 4D CT dataset, relying on manual feature identification by an expert user as ground truth. The 3D residual distance between matches obtained in adaptive-SIFT was then computed to verify the internal motion quantification with respect to the expert user. Extracted corresponding features in the lungs were used as regularization landmarks in a multistage deformable image registration (DIR) mapping the inhale vs exhale phase. The residual distances between the warped manual landmarks and their reference position in the inhale phase were evaluated, in order to provide a quantitative indication of the registration performed with the three different point sets.Results: The phantom study confirmed the method invariance and robustness properties to shape-preserving and deformable transforms, showing residual matching errors below the voxel dimension. The adapted SIFT algorithm on the 4D CT dataset provided automated and accurate motion detection of peak to peak breathing motion. The proposed method resulted in reduced residual errors with respect to standard SIFT

  9. Development and validation of a method for the quantification of fructooligosaccharides in a prebiotic ice cream

    Directory of Open Access Journals (Sweden)

    Claudia L. González-Aguirre

    2018-02-01

    Full Text Available Context: Fructooligosaccharides (FOS) are known as oligofructanes, oligosaccharides or oligofructose, which fall within the concept of prebiotics. One of the methods most commonly used in the industry for quantification and quality control of nutraceutical substances is high performance liquid chromatography (HPLC). Aims: To develop a procedure for the determination of FOS by HPLC in raw materials and a prebiotic ice cream. Methods: For the chromatographic separation, an HPLC system was used with a refractive index (RI) detector. The separation was performed using two coupled Sugar-Pak I™ columns with an isocratic procedure using type 1 water at 0.35 mL/min. Kestose (GF2), nystose (GF3) and fructofuranosylnystose (GF4) were used as standards. Robustness was assessed by applying the Youden and Steiner test. Results: Good linear correlations were obtained (y = 14191.4470x + 285684.2, r² = 0.9904) within the concentration range of 8.0-12.0 mg/mL. The FOS recoveries were 99.5%, with intra-day and inter-day relative standard deviations (RSD) less than 0.8%. The robustness test showed that column temperature and flow rate are critical factors in the method. Conclusions: This reliable, simple and cost-effective method could be applied to the routine monitoring of FOS (GF2, GF3, and GF4) in raw materials and prebiotic ice creams.
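
    With the reported calibration line, converting a peak area back to a concentration is simple arithmetic. A worked sketch, valid only inside the validated 8.0-12.0 mg/mL range:

        def fos_concentration(peak_area, slope=14191.4470, intercept=285684.2):
            """Invert the reported calibration line y = 14191.4470 x + 285684.2."""
            return (peak_area - intercept) / slope

        # Example: a peak area of about 427599 corresponds to 10.0 mg/mL FOS.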

  10. Critical assessment of three high performance liquid chromatography analytical methods for food carotenoid quantification.

    Science.gov (United States)

    Dias, M Graça; Oliveira, Luísa; Camões, M Filomena G F C; Nunes, Baltazar; Versloot, Pieter; Hulshof, Paul J M

    2010-05-21

    Three sets of extraction/saponification/HPLC conditions for food carotenoid quantification were technically and economically compared. Samples were analysed for carotenoids alpha-carotene, beta-carotene, beta-cryptoxanthin, lutein, lycopene, and zeaxanthin. All methods demonstrated good performance in the analysis of a composite food standard reference material for the analytes they are applicable to. Methods using two serial connected C(18) columns and a mobile phase based on acetonitrile, achieved a better carotenoid separation than the method using a mobile phase based on methanol and one C(18)-column. Carotenoids from leafy green vegetable matrices appeared to be better extracted with a mixture of methanol and tetrahydrofuran than with tetrahydrofuran alone. Costs of carotenoid determination in foods were lower for the method with mobile phase based on methanol. However for some food matrices and in the case of E-Z isomer separations, this was not technically satisfactory. Food extraction with methanol and tetrahydrofuran with direct evaporation of these solvents, and saponification (when needed) using pyrogallol as antioxidant, combined with a HPLC system using a slight gradient mobile phase based on acetonitrile and a stationary phase composed by two serial connected C(18) columns was the most technically and economically favourable method. 2010. Published by Elsevier B.V.

  11. A method for uncertainty quantification in the life prediction of gas turbine components

    Energy Technology Data Exchange (ETDEWEB)

    Lodeby, K.; Isaksson, O.; Jaervstraat, N. [Volvo Aero Corporation, Trolhaettan (Sweden)

    1998-12-31

    A failure in an aircraft jet engine can have severe consequences which cannot be accepted, and stringent requirements are therefore placed on engine reliability. Consequently, assessing the reliability of the life predictions used in design and maintenance is important. To assess the validity of the predicted life, a method was developed to quantify the contribution to the total uncertainty in the life prediction from different uncertainty sources. The method is a structured approach for uncertainty quantification that uses a generic description of the life prediction process. It is based on an approximate error propagation theory combined with a unified treatment of random and systematic errors. The result is an approximate statistical distribution for the predicted life. The method was applied to life predictions for three different jet engine components. The total uncertainty was of a reasonable order of magnitude, and a good qualitative picture of the distribution of the uncertainty contributions from the different sources was obtained. The relative importance of the uncertainty sources differs between the three components. It is also highly dependent on the methods and assumptions used in the life prediction. Advantages and disadvantages of this method are discussed. (orig.) 11 refs.
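
    The core of such an approach is first-order error propagation through the life-prediction model. The sketch below is illustrative only and uses stated simplifications (inputs treated as independent, random and systematic uncertainties pre-combined in quadrature); the paper's unified random/systematic treatment is richer than this.

        import numpy as np

        def propagate(f, x0, sigmas, rel_step=1e-6):
            """First-order propagation: var(f) ~= sum_i (df/dx_i)^2 var(x_i).

            f      -- life-prediction model, f(x) -> predicted life
            x0     -- nominal input values (1-D array)
            sigmas -- standard uncertainties of the inputs
            """
            x0 = np.asarray(x0, float)
            grad = np.empty_like(x0)
            for i in range(x0.size):
                h = rel_step * max(abs(x0[i]), 1.0)
                xp, xm = x0.copy(), x0.copy()
                xp[i] += h
                xm[i] -= h
                grad[i] = (f(xp) - f(xm)) / (2.0 * h)    # central difference
            life = f(x0)
            return life, float(np.sqrt(np.sum((grad * np.asarray(sigmas)) ** 2)))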

  12. Comparative quantification of dietary supplemented neural creatine concentrations with (1)H-MRS peak fitting and basis spectrum methods.

    Science.gov (United States)

    Turner, Clare E; Russell, Bruce R; Gant, Nicholas

    2015-11-01

    Magnetic resonance spectroscopy (MRS) is an analytical procedure that can be used to non-invasively measure the concentration of a range of neural metabolites. Creatine is an important neurometabolite, with dietary supplementation offering therapeutic potential for neurological disorders with dysfunctional energetic processes. Neural creatine concentrations can be probed using proton MRS and quantified using a range of software packages based on different analytical methods. This experiment examines the differences in quantification performance of two commonly used analysis packages following a creatine supplementation strategy with potential therapeutic application. Human participants followed a seven-day dietary supplementation regime in a placebo-controlled, cross-over design interspersed with a five-week wash-out period. Spectroscopy data were acquired the day immediately following supplementation and analyzed with two commonly used software packages which employ vastly different quantification methods. Results demonstrate that neural creatine concentration was augmented following creatine supplementation when analyzed using the peak fitting method of quantification (105.9% ± 10.1). In contrast, no change in neural creatine levels was detected with supplementation when analysis was conducted using the basis spectrum method of quantification (102.6% ± 8.6). Results suggest that software packages that employ the peak fitting procedure for spectral quantification are possibly more sensitive to subtle changes in neural creatine concentrations. The relative simplicity of the spectroscopy sequence and the data analysis procedure suggests that peak fitting procedures may be the most effective means of metabolite quantification when detection of subtle alterations in neural metabolites is necessary. The straightforward technique can be used on a clinical magnetic resonance imaging system. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Autocalibration method for non-stationary CT bias correction.

    Science.gov (United States)

    Vegas-Sánchez-Ferrero, Gonzalo; Ledesma-Carbayo, Maria J; Washko, George R; Estépar, Raúl San José

    2018-02-01

    Computed tomography (CT) is a widely used imaging modality for screening and diagnosis. However, the deleterious effects of radiation exposure inherent in CT imaging require the development of image reconstruction methods which can reduce exposure levels. The development of iterative reconstruction techniques is now enabling the acquisition of low-dose CT images whose quality is comparable to that of CT images acquired with much higher radiation dosages. However, the characterization and calibration of the CT signal due to changes in dosage and reconstruction approaches is crucial to provide clinically relevant data. Although CT scanners are calibrated as part of the imaging workflow, the calibration is limited to select global reference values and does not consider other inherent factors of the acquisition that depend on the subject scanned (e.g. photon starvation, partial volume effect, beam hardening) and result in a non-stationary noise response. In this work, we analyze the effect of reconstruction biases caused by non-stationary noise and propose an autocalibration methodology to compensate for it. Our contributions are: 1) the derivation of a functional relationship between observed bias and non-stationary noise, 2) a robust and accurate method to estimate the local variance, 3) an autocalibration methodology that does not necessarily rely on a calibration phantom, attenuates the bias caused by noise and removes the systematic bias observed in devices from different vendors. The validation of the proposed methodology was performed with a physical phantom and clinical CT scans acquired with different configurations (kernels, doses, algorithms including iterative reconstruction). The results confirmed the suitability of the proposed methods for removing the intra-device and inter-device reconstruction biases. Copyright © 2017 Elsevier B.V. All rights reserved.
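
    Contribution 2 (local variance estimation) and the bias compensation can be sketched with a moving-window estimator. This is an illustration only: the paper's estimator is explicitly robust, whereas the uniform filter below is the simplest possible stand-in, and bias_fn (the fitted bias-versus-variance relationship) is assumed to come from the autocalibration step.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_variance(img, size=7):
            """Moving-window variance: E[x^2] - E[x]^2 over a size^3 neighbourhood."""
            x = img.astype(float)
            mean = uniform_filter(x, size)
            return np.maximum(uniform_filter(x ** 2, size) - mean ** 2, 0.0)

        def autocalibrate(img, bias_fn, size=7):
            """Remove the noise-dependent bias predicted from the local variance."""
            return img - bias_fn(local_variance(img, size))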

  14. Simple method for correct enumeration of Staphylococcus aureus

    DEFF Research Database (Denmark)

    Haaber, J.; Cohn, M. T.; Petersen, A.

    2016-01-01

    culture. When grown in such liquid cultures, the human pathogen Staphylococcus aureus is characterized by its aggregation of single cells into clusters of variable size. Here, we show that aggregation during growth in the laboratory standard medium tryptic soy broth (TSB) is common among clinical...... and laboratory S. aureus isolates and that aggregation may introduce significant bias when applying standard enumeration methods on S. aureus growing in laboratory batch cultures. We provide a simple and efficient sonication procedure, which can be applied prior to optical density measurements to give...

  15. Comparison of methods for quantification of global DNA methylation in human cells and tissues.

    Directory of Open Access Journals (Sweden)

    Sofia Lisanti

    Full Text Available DNA methylation is a key epigenetic modification which, in mammals, occurs mainly at CpG dinucleotides. Most of the CpG methylation in the genome is found in repetitive regions, rich in dormant transposons and endogenous retroviruses. Global DNA hypomethylation, which is a common feature of several conditions such as ageing and cancer, can cause the undesirable activation of dormant repeat elements and lead to altered expression of associated genes. DNA hypomethylation can cause genomic instability and may contribute to mutations and chromosomal recombinations. Various approaches for quantification of global DNA methylation are widely used. Several of these approaches measure a surrogate for total genomic methyl cytosine, and there is uncertainty about the comparability of these methods. Here we have applied 3 different approaches (luminometric methylation assay, pyrosequencing of the methylation status of the Alu repeat element and of the LINE1 repeat element) for estimating global DNA methylation in the same human cell and tissue samples and have compared these estimates with the "gold standard" of methyl cytosine quantification by HPLC. Next to HPLC, the LINE1 approach shows the smallest variation between samples, followed by Alu. Pearson correlations and Bland-Altman analyses confirmed that global DNA methylation estimates obtained via the LINE1 approach corresponded best with HPLC-based measurements. Although we did not find compelling evidence that the gold standard measurement by HPLC could be substituted with confidence by any of the surrogate assays for detecting global DNA methylation investigated here, the LINE1 assay seems likely to be an acceptable surrogate in many cases.

  16. Quantification of methane emissions from 15 Danish landfills using the mobile tracer dispersion method

    Energy Technology Data Exchange (ETDEWEB)

    Mønster, Jacob [Department of Environmental Engineering, Technical University of Denmark, Miljøvej – Building 113, DK-2800 Lyngby (Denmark); Samuelsson, Jerker, E-mail: jerker.samuelsson@fluxsense.se [Chalmers University of Technology/FluxSense AB, SE-41296 Göteborg (Sweden); Kjeldsen, Peter [Department of Environmental Engineering, Technical University of Denmark, Miljøvej – Building 113, DK-2800 Lyngby (Denmark); Scheutz, Charlotte, E-mail: chas@env.dtu.dk [Department of Environmental Engineering, Technical University of Denmark, Miljøvej – Building 113, DK-2800 Lyngby (Denmark)

    2015-01-15

    Highlights: • Quantification of whole landfill site methane emission at 15 landfills. • Multiple on-site source identification and quantification. • Quantified methane emission from shredder waste and composting. • Large difference between measured and reported methane emissions. - Abstract: Whole-site methane emissions from 15 Danish landfills were assessed using a mobile tracer dispersion method with either Fourier transform infrared spectroscopy (FTIR), using nitrous oxide as a tracer gas, or cavity ring-down spectrometry (CRDS), using acetylene as a tracer gas. The landfills were chosen to represent the different stages of the lifetime of a landfill, including open, active, and closed covered landfills, as well as those with and without gas extraction for utilisation or flaring. Measurements also included landfills with biocover for oxidizing any fugitive methane. Methane emission rates ranged from 2.6 to 60.8 kg h{sup −1}, corresponding to 0.7–13.2 g m{sup −2} d{sup −1}, with the largest emission rates per area coming from landfills with malfunctioning gas extraction systems installed, and the smallest emission rates from landfills closed decades ago and landfills with an engineered biocover installed. Landfills with gas collection and recovery systems had a recovery efficiency of 41–81%. Landfills where shredder waste was deposited showed significant methane emissions, with the largest emission from newly deposited shredder waste. The average methane emission from the landfills was 154 tons y{sup −1}. This average was obtained from a few measurement campaigns conducted at each of the 15 landfills and extrapolating to annual emissions requires more measurements. Assuming that these landfills are representative of the average Danish landfill, the total emission from Danish landfills were calculated at 20,600 tons y{sup −1}, which is significantly lower than the 33,300 tons y{sup −1} estimated for the national greenhouse gas inventory for
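
    The tracer dispersion method rests on one ratio: the unknown methane release scales the known tracer release by the ratio of plume-integrated, background-subtracted mixing ratios, converted from a molar to a mass basis. A minimal sketch assuming equally spaced transect samples (the spacing cancels in the ratio) and the acetylene tracer used with the CRDS instrument:

        import numpy as np

        M_CH4, M_C2H2 = 16.04, 26.04   # molar masses [g/mol]

        def methane_emission(tracer_release_kg_h, ch4_plume_ppb, tracer_plume_ppb):
            """Whole-site CH4 emission [kg/h] from one background-subtracted
            plume transect, via the tracer dispersion ratio."""
            ratio = np.trapz(ch4_plume_ppb) / np.trapz(tracer_plume_ppb)
            return tracer_release_kg_h * ratio * (M_CH4 / M_C2H2)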

  17. A method for the quantification of biased signalling at constitutively active receptors.

    Science.gov (United States)

    Hall, David A; Giraldo, Jesús

    2018-06-01

    Biased agonism, the ability of an agonist to differentially activate one of several signal transduction pathways when acting at a given receptor, is an increasingly recognized phenomenon at many receptors. The Black and Leff operational model lacks a way to describe constitutive receptor activity and hence inverse agonism. Thus, it is impossible to analyse the biased signalling of inverse agonists using this model. In this theoretical work, we develop and illustrate methods for the analysis of biased inverse agonism. Methods were derived for quantifying biased signalling in systems that demonstrate constitutive activity using the modified operational model proposed by Slack and Hall. The methods were illustrated using Monte Carlo simulations. The Monte Carlo simulations demonstrated that, with an appropriate experimental design, the model parameters are 'identifiable'. The method is consistent with methods based on the measurement of intrinsic relative activity (RAi) (ΔΔlogR or ΔΔlog(τ/Ka)) proposed by Ehlert and Kenakin and their co-workers but has some advantages. In particular, it allows the quantification of ligand bias independently of 'system bias', removing the requirement to normalize to a standard ligand. In systems with constitutive activity, the Slack and Hall model provides methods for quantifying the absolute bias of agonists and inverse agonists. This provides an alternative to methods based on RAi and is complementary to the ΔΔlog(τ/Ka) method of Kenakin et al. in systems where use of that method is inappropriate due to the presence of constitutive activity. © 2018 The British Pharmacological Society.
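
    For orientation, the RAi-based comparison that this work builds on can be written in a few lines; the Slack and Hall extension for constitutive activity is not reproduced here. The pathway keys and dictionary layout are assumptions for illustration.

        def ddlog_tau_ka(test_ligand, reference_ligand):
            """Kenakin-style bias factor between two pathways.

            Each argument maps pathway -> log10(tau/KA) from operational-model
            fits; returns DDlog(tau/KA), so the bias factor is 10 ** result.
            """
            d_test = test_ligand['pathway1'] - test_ligand['pathway2']
            d_ref = reference_ligand['pathway1'] - reference_ligand['pathway2']
            return d_test - d_ref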

  18. Comparison of classical methods for blade design and the influence of tip correction on rotor performance

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær; Okulov, Valery; Mikkelsen, Robert Flemming

    2016-01-01

    The classical blade-element/momentum (BE/M) method, which is used together with different types of corrections (e.g. the Prandtl or Glauert tip correction), is today the most basic tool in the design of wind turbine rotors. However, there are other classical techniques based on a combination...

  19. Application of pulse pile-up correction spectrum to the library least-squares method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang Hoon [Kyungpook National Univ., Daegu (Korea, Republic of)

    2006-12-15

    The Monte Carlo simulation code CEARPPU has been developed and updated to provide pulse pile-up correction spectra for high counting rate cases. For neutron activation analysis, CEARPPU correction spectra were used in the library least-squares method to give better isotopic activity results than conventional library least-squares fitting with uncorrected spectra.

  20. Resistivity Correction Factor for the Four-Probe Method: Experiment I

    Science.gov (United States)

    Yamashita, Masato; Yamaguchi, Shoji; Enjoji, Hideo

    1988-05-01

    Experimental verification of the theoretically derived resistivity correction factor (RCF) is presented. Resistivity and sheet resistance measurements by the four-probe method are made on three samples: isotropic graphite, ITO film and Au film. It is indicated that the RCF can correct the apparent variations of experimental data to yield reasonable resistivities and sheet resistances.
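
    The arithmetic behind the experiment is compact: the four-probe reading V/I is scaled by the RCF, which equals pi/ln 2 for an infinite thin sheet and departs from it for finite sample geometries. A minimal sketch:

        import math

        RCF_INFINITE_SHEET = math.pi / math.log(2)   # ~4.532 for an ideal thin film

        def sheet_resistance(v_volts, i_amps, rcf=RCF_INFINITE_SHEET):
            """Four-probe sheet resistance [ohm/sq]; rcf corrects for geometry."""
            return rcf * v_volts / i_amps

        def resistivity(v_volts, i_amps, thickness_m, rcf=RCF_INFINITE_SHEET):
            """Film resistivity [ohm m] once the thickness is known."""
            return sheet_resistance(v_volts, i_amps, rcf) * thickness_m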

  1. Multiplex cDNA quantification method that facilitates the standardization of gene expression data

    Science.gov (United States)

    Gotoh, Osamu; Murakami, Yasufumi; Suyama, Akira

    2011-01-01

    Microarray-based gene expression measurement is one of the major methods for transcriptome analysis. However, current microarray data are substantially affected by microarray platforms and RNA references, because the microarray method provides merely the relative amounts of gene expression levels. Therefore, valid comparisons of microarray data require standardized platforms, internal and/or external controls, and complicated normalizations. These requirements impose limitations on the extensive comparison of gene expression data. Here, we report an effective approach to removing these unfavorable limitations by measuring the absolute amounts of gene expression levels on common DNA microarrays. We have developed a multiplex cDNA quantification method called GEP-DEAN (gene expression profiling by DCN-encoding-based analysis). The method was validated using chemically synthesized DNA strands of known quantities and cDNA samples prepared from mouse liver, demonstrating that the absolute amounts of cDNA strands were successfully measured with a sensitivity of 18 zmol in a highly multiplexed manner in 7 h. PMID:21415008

  2. Method validation using weighted linear regression models for quantification of UV filters in water samples.

    Science.gov (United States)

    da Silva, Claudia Pereira; Emídio, Elissandro Soares; de Marchi, Mary Rosa Rodrigues

    2015-01-01

    This paper describes the validation of a method consisting of solid-phase extraction followed by gas chromatography-tandem mass spectrometry for the analysis of the ultraviolet (UV) filters benzophenone-3, ethylhexyl salicylate, ethylhexyl methoxycinnamate and octocrylene. The method validation criteria included evaluation of selectivity, analytical curve, trueness, precision, limits of detection and limits of quantification. The non-weighted linear regression model has traditionally been used for calibration, but it is not necessarily the optimal model in all cases. Because the assumption of homoscedasticity was not met for the analytical data in this work, a weighted least squares linear regression was used for the calibration method. The evaluated analytical parameters were satisfactory for the analytes and showed recoveries at four fortification levels between 62% and 107%, with relative standard deviations less than 14%. The detection limits ranged from 7.6 to 24.1 ng L(-1). The proposed method was used to determine the amount of UV filters in water samples from water treatment plants in Araraquara and Jau in São Paulo, Brazil. Copyright © 2014 Elsevier B.V. All rights reserved.
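
    A weighted least-squares calibration line differs from the ordinary one only in the normal equations. A minimal sketch, with the common 1/s_i^2 weighting (weights from replicate standard deviations) as an assumed choice; the paper selects its own weighting scheme:

        import numpy as np

        def weighted_line(x, y, weights):
            """Weighted least-squares slope and intercept for a calibration
            curve with heteroscedastic responses (weights ~ 1/variance)."""
            x, y, w = (np.asarray(a, float) for a in (x, y, weights))
            A = np.vstack([x, np.ones_like(x)]).T
            slope, intercept = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
            return slope, intercept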

  3. Quantification of self pollution from two diesel school buses using three independent methods

    Science.gov (United States)

    Sally Liu, L.-J.; Phuleria, Harish C.; Webber, Whitney; Davey, Mark; Lawson, Douglas R.; Ireson, Robert G.; Zielinska, Barbara; Ondov, John M.; Weaver, Christopher S.; Lapin, Charles A.; Easter, Michael; Hesterberg, Thomas W.; Larson, Timothy

    2010-09-01

    We monitored two Seattle school buses to quantify the buses' self pollution using the dual tracer (DT), lead vehicle (LV), and chemical mass balance (CMB) methods. Each bus drove along a residential route simulating stops, with windows closed or open. Particulate matter (PM) and its constituents were monitored in the bus and from a LV. We collected source samples from the tailpipe and crankcase emissions using an on-board dilution tunnel. Concentrations of PM1, ultrafine particle counts, and elemental and organic carbon (EC/OC) were higher on the bus than on the LV. The DT method estimated that the tailpipe and the crankcase emissions contributed 1.1 and 6.8 μg m⁻³ of PM2.5 inside the bus, respectively, with significantly higher crankcase self pollution (SP) when windows were closed. Approximately two-thirds of in-cabin PM2.5 originated from background sources. Using the LV approach, SP estimates from the EC and the active personal DataRAM (pDR) measurements correlated well with the DT estimates for tailpipe and crankcase emissions, respectively, although both measurements need further calibration for accurate quantification. CMB results overestimated SP relative to the DT method but confirmed crankcase emissions as the major SP source. We confirmed the buses' SP using three independent methods and quantified crankcase emissions as the dominant contributor.

  4. A fast isocratic liquid chromatography method for the quantification of xanthophylls and their stereoisomers.

    Science.gov (United States)

    Nimalaratne, Chamila; Lopes-Lutz, Daise; Schieber, Andreas; Wu, Jianping

    2015-12-01

    A fast isocratic liquid chromatography method was developed for the simultaneous quantification of eight xanthophylls (13-Z-lutein, 13'-Z-lutein, 13-Z-zeaxanthin, all-E-lutein, all-E-zeaxanthin, all-E-canthaxanthin, all-E-β-apo-8'-carotenoic acid ethyl ester and all-E-β-apo-8'-carotenal) within 12 min, compared to 90 min by the conventional high-performance liquid chromatography method. The separation was achieved on a YMC C30 reversed-phase column (100 mm x 2.0 mm; 3 μm) operated at 20°C using a methanol/tert-butyl methyl ether/water solvent system at a flow rate of 0.8 mL/min. The method was successfully applied to quantify lutein and zeaxanthin stereoisomers in egg yolk, raw and cooked spinach, and a dietary supplement. The method can be used for the rapid analysis of xanthophyll isomers in different food products and for quality control purposes. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Subspace Correction Methods for Total Variation and $\ell_1$-Minimization

    KAUST Repository

    Fornasier, Massimo

    2009-01-01

    This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a sequence of orthogonal subspaces. On each subspace an iterative proximity-map algorithm is implemented via oblique thresholding, which is the main new tool introduced in this work. We provide convergence conditions for the algorithm in order to compute minimizers of the target energy. Analogous results are derived for a parallel variant of the algorithm. Applications are presented in domain decomposition methods for degenerate elliptic PDEs arising in total variation minimization and in accelerated sparse recovery algorithms based on $\ell_1$-minimization. We include numerical examples which show efficient solutions to classical problems in signal and image processing. © 2009 Society for Industrial and Applied Mathematics.
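
    As a concrete anchor for the $\ell_1$ part, the proximity map used in such schemes is component-wise soft thresholding. The sketch below is plain ISTA on the full space, not the paper's subspace-alternating algorithm with oblique thresholding; it only illustrates the thresholding building block.

        import numpy as np

        def soft_threshold(x, t):
            """Proximity map of the l1 norm (component-wise soft thresholding)."""
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def ista(A, b, lam, n_iter=200):
            """Iterative soft thresholding for min 0.5*||Ax-b||^2 + lam*||x||_1."""
            L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
            return x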

  6. Hepatic fat quantification using the two-point Dixon method and fat color maps based on non-alcoholic fatty liver disease activity score.

    Science.gov (United States)

    Hayashi, Tatsuya; Saitoh, Satoshi; Takahashi, Junji; Tsuji, Yoshinori; Ikeda, Kenji; Kobayashi, Masahiro; Kawamura, Yusuke; Fujii, Takeshi; Inoue, Masafumi; Miyati, Tosiaki; Kumada, Hiromitsu

    2017-04-01

    The two-point Dixon method for magnetic resonance imaging (MRI) is commonly used to non-invasively measure fat deposition in the liver. The aim of the present study was to assess the usefulness of the MRI-fat fraction (MRI-FF) from the two-point Dixon method based on the non-alcoholic fatty liver disease activity score. This retrospective study included 106 patients who underwent liver MRI and MR spectroscopy, and 201 patients who underwent liver MRI and histological assessment. The relationship between MRI-FF and MR spectroscopy-fat fraction was used to estimate the corrected MRI-FF for the multiple spectral peaks of hepatic fat. A color FF map was then generated from the corrected MRI-FF based on the non-alcoholic fatty liver disease activity score. We defined FF variability as the standard deviation of FF in regions of interest. Uniformity of hepatic fat was visually graded on a three-point scale using both gray-scale and color FF maps. Confounding effects of histology (iron, inflammation and fibrosis) on corrected MRI-FF were assessed by multiple linear regression. The linear correlations between MRI-FF and MR spectroscopy-fat fraction, and between corrected MRI-FF and histological steatosis, were strong (R² = 0.90 and R² = 0.88, respectively). Liver fat variability significantly increased with visual fat uniformity grade using both of the maps (ρ = 0.67-0.69, both P < 0.05). Hepatic iron, inflammation and fibrosis had no significant confounding effects on the corrected MRI-FF (all P > 0.05). The two-point Dixon method and the gray-scale or color FF maps based on the non-alcoholic fatty liver disease activity score were useful for fat quantification in the liver of patients without severe iron deposition. © 2016 The Japan Society of Hepatology.
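
    The signal fat fraction behind the MRI-FF maps follows directly from the two Dixon echoes. A minimal sketch (magnitude images, no multi-peak or T2* correction; the study applies an MRS-derived correction on top of this):

        import numpy as np

        def dixon_fat_fraction(in_phase, opposed_phase):
            """Two-point Dixon signal fat fraction in percent:
            water = (IP + OP)/2, fat = (IP - OP)/2, FF = fat/(water + fat)."""
            water = (in_phase + opposed_phase) / 2.0
            fat = (in_phase - opposed_phase) / 2.0
            return 100.0 * fat / np.maximum(water + fat, 1e-9)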

  7. Optimized, fast through-put UHPLC-DAD based method for carotenoid quantification in spinach, serum, chylomicrons and faeces

    DEFF Research Database (Denmark)

    Eriksen, Jane Nygaard; Madsen, Pia Lisbeth; Dragsted, Lars Ove

    2017-01-01

    An improved UHPLC-DAD based method was developed and validated for quantification of major carotenoids present in spinach, serum, chylomicrons and faeces. Separation was achieved with gradient elution within 12.5 min for 6 dietary carotenoids and the internal standard, echinenone. The proposed me...

  8. Absolute quantification method and validation of airborne snow crab allergen tropomyosin using tandem mass spectrometry

    International Nuclear Information System (INIS)

    Rahman, Anas M. Abdel; Lopata, Andreas L.; Randell, Edward W.; Helleur, Robert J.

    2010-01-01

    Measuring the levels of the major airborne allergens of snow crab in the workplace is very important in studying the prevalence of crab asthma in workers. Previously, snow crab tropomyosin (SCTM) was identified as the major aeroallergen in crab plants and a unique signature peptide was identified for this protein. The present study advances our knowledge of aeroallergens by developing a method for quantification of airborne SCTM using isotope dilution mass spectrometry. Liquid chromatography tandem mass spectrometry was developed for separation and analysis of the signature peptides. The tryptic digestion conditions were optimized to accomplish complete digestion. The validity of the method was studied using the International Conference on Harmonisation protocol, with a CV (precision) of 2-9% and an accuracy of 101-110% at three different levels of quality control. Recovery of the spiked protein from PTFE and TopTip filters was measured to be 99% and 96%, respectively. To further demonstrate the applicability and validity of the method for real samples, 45 kg of whole snow crab were processed in an enclosed (simulated) crab processing line and air samples were collected. The levels of SCTM ranged from 0.36 to 3.92 μg m⁻³ and from 1.70 to 2.31 μg m⁻³ for the butchering and cooking stations, respectively.

  9. AO–MW–PLS method applied to rapid quantification of teicoplanin with near-infrared spectroscopy

    Directory of Open Access Journals (Sweden)

    Jiemei Chen

    2017-01-01

    Full Text Available Teicoplanin (TCP) is an important lipoglycopeptide antibiotic produced by fermenting Actinoplanes teichomyceticus. The change in TCP concentration is important to measure during the fermentation process. In this study, a reagent-free and rapid quantification method for TCP in TCP–Tris–HCl mixture samples was developed using near-infrared (NIR) spectroscopy, focusing on the fermentation process for TCP. Absorbance optimization (AO) partial least squares (PLS) was proposed and integrated with moving window (MW) PLS, giving the AO–MW–PLS method, to select appropriate wavebands. A model set that includes various wavebands equivalent to the optimal AO–MW–PLS waveband was proposed based on statistical considerations. The public region of all equivalent wavebands was itself one of the equivalent wavebands. The obtained public regions were 1540–1868 nm for TCP and 1114–1310 nm for Tris. The root-mean-square error and correlation coefficient for leave-one-out cross validation were 0.046 mg mL⁻¹ and 0.9998 for TCP, and 0.235 mg mL⁻¹ and 0.9986 for Tris, respectively. All the models achieved highly accurate predictions, and the selected wavebands provide valuable references for designing specialized spectrometers. This study provides a valuable reference for further application of the proposed methods to TCP fermentation broth and to other fields of spectroscopic analysis.

  10. Development and validation of an HPLC-FLD method for milbemectin quantification in dog plasma.

    Science.gov (United States)

    Xu, Qianqian; Xiang, Wensheng; Li, Jichang; Liu, Yong; Yu, Xiaolei; Zhang, Yaoteng; Qu, Mingli

    2010-07-15

    Milbemectin is a widely used veterinary antiparasitic agent. A high-performance liquid chromatography method with fluorescence detection (HPLC-FLD) is described for the determination of milbemectin in dog plasma. The derivatization procedure involved mixing 1-methylimidazole [MI, MI-ACN (1:1, v/v), 100 microL] and trifluoroacetic anhydride [TFAA, TFAA-ACN (1:2, v/v), 150 microL] with a subsequent incubation for 3 s at room temperature to obtain a fluorescent derivative; the reaction is reproducible in different blood samples, and the derivatives proved stable for at least 80 h at room temperature. The HPLC method was developed on a C18 column with FLD detection at an excitation wavelength of 365 nm and an emission wavelength of 475 nm, with a mobile phase consisting of methanol and water in the ratio 98:2 (v/v). The assay lower limit of quantification was 1 ng/mL. The calibration curve was linear over the concentration range of 1-200 ng/mL. The intra- and inter-day accuracy was >94%, and precision expressed as % coefficient of variation was <5%. This method is specific, simple, accurate, precise and easily adaptable to measuring milbemycin in the blood of other animals. Crown Copyright 2010. Published by Elsevier B.V. All rights reserved.

  11. Absolute quantification method and validation of airborne snow crab allergen tropomyosin using tandem mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, Anas M. Abdel, E-mail: anasar@mun.ca [Department of Chemistry, Memorial University of Newfoundland, St. John's, Newfoundland A1B 3X7 (Canada); Lopata, Andreas L. [School of Applied Science, Marine Biomedical Sciences and Health Research Group, RMIT University, Bundoora, 3083 Victoria (Australia); Randell, Edward W. [Department of Laboratory Medicine, Memorial University of Newfoundland, Eastern Health, St. John's, Newfoundland and Labrador A1B 3V6 (Canada); Helleur, Robert J. [Department of Chemistry, Memorial University of Newfoundland, St. John's, Newfoundland A1B 3X7 (Canada)

    2010-11-29

    Measuring the levels of the major airborne allergens of snow crab in the workplace is very important in studying the prevalence of crab asthma in workers. Previously, snow crab tropomyosin (SCTM) was identified as the major aeroallergen in crab plants and a unique signature peptide was identified for this protein. The present study advances our knowledge on aeroallergens by developing a method of quantification of airborne SCTM by using isotope dilution mass spectrometry. Liquid chromatography tandem mass spectrometry was developed for separation and analysis of the signature peptides. The tryptic digestion conditions were optimized to accomplish complete digestion. The validity of the method was studied following the International Conference on Harmonisation protocol, with a precision (CV) of 2-9% and an accuracy of 101-110% at three different levels of quality control. Recovery of the spiked protein from PTFE and TopTip filters was measured to be 99% and 96%, respectively. To further demonstrate the applicability and the validity of the method for real samples, 45 kg of whole snow crab were processed in an enclosed (simulated) crab processing line and air samples were collected. The levels of SCTM ranged between 0.36-3.92 μg m⁻³ and 1.70-2.31 μg m⁻³ for the butchering and cooking stations, respectively.

  12. A simple and efficient method for poly-3-hydroxybutyrate quantification in diazotrophic bacteria within 5 minutes using flow cytometry

    Directory of Open Access Journals (Sweden)

    L.P.S. Alves

    Full Text Available The conventional method for quantification of polyhydroxyalkanoates based on whole-cell methanolysis and gas chromatography (GC) is laborious and time-consuming. In this work, a method based on flow cytometry of Nile red stained bacterial cells was established to quantify poly-3-hydroxybutyrate (PHB) production by the diazotrophic and plant-associated bacteria, Herbaspirillum seropedicae and Azospirillum brasilense. The method consists of three steps: i) cell permeabilization, ii) Nile red staining, and iii) analysis by flow cytometry. The method was optimized step-by-step and can be carried out in less than 5 min. The final results indicated a high correlation coefficient (R²=0.99) compared to a standard method based on methanolysis and GC. This method was successfully applied to the quantification of PHB in epiphytic bacteria isolated from rice roots.

  13. A simple and efficient method for poly-3-hydroxybutyrate quantification in diazotrophic bacteria within 5 minutes using flow cytometry.

    Science.gov (United States)

    Alves, L P S; Almeida, A T; Cruz, L M; Pedrosa, F O; de Souza, E M; Chubatsu, L S; Müller-Santos, M; Valdameri, G

    2017-01-16

    The conventional method for quantification of polyhydroxyalkanoates based on whole-cell methanolysis and gas chromatography (GC) is laborious and time-consuming. In this work, a method based on flow cytometry of Nile red stained bacterial cells was established to quantify poly-3-hydroxybutyrate (PHB) production by the diazotrophic and plant-associated bacteria, Herbaspirillum seropedicae and Azospirillum brasilense. The method consists of three steps: i) cell permeabilization, ii) Nile red staining, and iii) analysis by flow cytometry. The method was optimized step-by-step and can be carried out in less than 5 min. The final results indicated a high correlation coefficient (R2=0.99) compared to a standard method based on methanolysis and GC. This method was successfully applied to the quantification of PHB in epiphytic bacteria isolated from rice roots.

  14. A simple and efficient method for poly-3-hydroxybutyrate quantification in diazotrophic bacteria within 5 minutes using flow cytometry

    Science.gov (United States)

    Alves, L.P.S.; Almeida, A.T.; Cruz, L.M.; Pedrosa, F.O.; de Souza, E.M.; Chubatsu, L.S.; Müller-Santos, M.; Valdameri, G.

    2017-01-01

    The conventional method for quantification of polyhydroxyalkanoates based on whole-cell methanolysis and gas chromatography (GC) is laborious and time-consuming. In this work, a method based on flow cytometry of Nile red stained bacterial cells was established to quantify poly-3-hydroxybutyrate (PHB) production by the diazotrophic and plant-associated bacteria, Herbaspirillum seropedicae and Azospirillum brasilense. The method consists of three steps: i) cell permeabilization, ii) Nile red staining, and iii) analysis by flow cytometry. The method was optimized step-by-step and can be carried out in less than 5 min. The final results indicated a high correlation coefficient (R2=0.99) compared to a standard method based on methanolysis and GC. This method was successfully applied to the quantification of PHB in epiphytic bacteria isolated from rice roots. PMID:28099582
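
    The reported agreement with GC (R2 = 0.99) amounts to a linear calibration of cytometry fluorescence against GC-determined PHB content; a minimal Python sketch with hypothetical paired data:

        import numpy as np

        # Nile red median fluorescence (a.u.) vs PHB from methanolysis/GC (% CDW)
        fluo = np.array([120, 310, 540, 760, 990, 1230], dtype=float)
        phb_gc = np.array([5.1, 13.8, 24.5, 34.9, 45.2, 55.8])

        slope, intercept = np.polyfit(fluo, phb_gc, 1)
        r2 = np.corrcoef(fluo, phb_gc)[0, 1] ** 2
        phb_new = slope * 850 + intercept        # estimate for a new cytometry reading
        print(f"R2 = {r2:.3f}, predicted PHB = {phb_new:.1f} % CDW")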

  15. Evaluation of Fresnel's corrections to the eikonal approximation by the separabilization method

    International Nuclear Information System (INIS)

    Musakhanov, M.M.; Zubarev, A.L.

    1975-01-01

    A method of separabilization of the potential over approximate solutions of the Schroedinger equation, leading to Schwinger's variational principle for the scattering amplitude, is suggested. The results are applied to the calculation of the Fresnel corrections to the Glauber approximation.

  16. A software-based x-ray scatter correction method for breast tomosynthesis

    OpenAIRE

    Jia Feng, Steve Si; Sechopoulos, Ioannis

    2011-01-01

    Purpose: To develop a software-based scatter correction method for digital breast tomosynthesis (DBT) imaging and investigate its impact on the image quality of tomosynthesis reconstructions of both phantoms and patients.

  17. Discussion on Boiler Efficiency Correction Method with Low Temperature Economizer-Air Heater System

    Science.gov (United States)

    Ke, Liu; Xing-sen, Yang; Fan-jun, Hou; Zhi-hong, Hu

    2017-05-01

    This paper points out that it is incorrect to take the outlet flue gas temperature of the low temperature economizer as the exhaust gas temperature in boiler efficiency calculations based on GB10184-1988. It then proposes a new correction method, which decomposes the low temperature economizer-air heater system into two hypothetical parts, an air preheater and a pre-condensed-water heater, and takes the equivalent outlet gas temperature of the air preheater as the exhaust gas temperature in the boiler efficiency calculation. This makes the boiler efficiency calculation more concise, with no air heater correction, and provides a useful reference for dealing with this kind of problem correctly.

  18. Error analysis of motion correction method for laser scanning of moving objects

    Science.gov (United States)

    Goel, S.; Lohani, B.

    2014-05-01

    The limitation of conventional laser scanning methods is that the objects being scanned should be static. The need to scan moving objects has resulted in the development of new methods capable of generating correct 3D geometry of moving objects. Limited literature is available, showing the development of only a few methods that cater to the problem of object motion during scanning, and all the existing methods utilize their own models or sensors. Studies on error modelling or analysis of these motion correction methods are lacking in the literature. In this paper, we develop the error budget and present the analysis of one such "motion correction" method. This method assumes the availability of position and orientation information of the moving object, which in general can be obtained by installing a POS system on board or by use of tracking devices. It then uses this information along with the laser scanner data to apply a correction to the laser data, thus resulting in correct geometry despite the object being mobile during scanning. The major application of this method lies in the shipping industry, to scan ships either moving or parked in the sea, and to scan other objects such as hot air balloons or aerostats. It is to be noted that the other methods of "motion correction" explained in the literature cannot be applied to scan the objects mentioned here, making the chosen method quite unique. This paper presents some interesting insights into the functioning of the "motion correction" method as well as a detailed account of the behavior and variation of the error due to different sensor components, alone and in combination with each other. The analysis can be used to obtain insights into optimal utilization of the available components for achieving the best results.
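
    The core correction step, transforming each scanner-frame point into the world frame with the POS pose at the firing epoch, can be sketched as follows (Python; the pose values are hypothetical, and lever-arm/boresight offsets are ignored):

        import numpy as np
        from scipy.spatial.transform import Rotation

        def correct_scan_point(p_scanner, pos_xyz, rpy_deg):
            """World coordinates of one laser return: rotate by the object's
            roll/pitch/yaw, then translate by its position at that epoch."""
            R = Rotation.from_euler("xyz", rpy_deg, degrees=True).as_matrix()
            return R @ np.asarray(p_scanner) + np.asarray(pos_xyz)

        p_world = correct_scan_point([12.3, -0.4, 1.7],
                                     pos_xyz=[500.0, 220.0, 3.2],
                                     rpy_deg=[0.5, -1.2, 45.0])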

  19. Development of a reliable extraction and quantification method for glucosinolates in Moringa oleifera.

    Science.gov (United States)

    Förster, Nadja; Ulrichs, Christian; Schreiner, Monika; Müller, Carsten T; Mewis, Inga

    2015-01-01

    Glucosinolates are the characteristic secondary metabolites of plants in the order Brassicales. To date, the DIN 'desulfo glucosinolates' extraction method remains the common procedure for the determination and quantification of glucosinolates. However, the desulfation step in the extraction of glucosinolates from Moringa oleifera leaves resulted in complete conversion and degradation of the naturally occurring glucosinolates in this plant. Therefore, a method for extraction of intact Moringa glucosinolates was developed, with which no conversion or degradation of the different rhamnopyranosyloxy-benzyl glucosinolates was found. Buffered eluents (0.1 M ammonium acetate) were necessary to stabilize 4-α-rhamnopyranosyloxy-benzyl glucosinolate (Rhamno-Benzyl-GS) and acetyl-4-α-rhamnopyranosyloxy-benzyl glucosinolate isomers (Ac-Isomers-GS) during HPLC analysis. Due to the instability of intact Moringa glucosinolates at room temperature and during the purification process of single glucosinolates, the influences of different storage conditions (room temperature, frozen, thawing and refreezing) and buffer conditions on glucosinolate conversion were analysed. Conversion and degradation processes were determined especially for the Ac-Isomers-GS III. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Sensitive liquid chromatography-tandem mass spectrometry method for quantification of hydrochlorothiazide in human plasma.

    Science.gov (United States)

    Ramakrishna, N V S; Vishwottam, K N; Manoj, S; Koteshwara, M; Wishu, S; Varma, D P

    2005-12-01

    A simple, rapid, sensitive and specific liquid chromatography-tandem mass spectrometry method was developed and validated for quantification of hydrochlorothiazide (I), a common diuretic and anti-hypertensive agent. The analyte and the internal standard, tamsulosin (II), were extracted by liquid-liquid extraction with diethyl ether-dichloromethane (70:30, v/v) using a Glas-Col Multi-Pulse Vortexer. The chromatographic separation was performed on a reversed-phase column (Waters Symmetry C18) with a mobile phase of 10 mM ammonium acetate-methanol (15:85, v/v). The deprotonated analyte was quantitated in negative ionization mode by multiple reaction monitoring with a mass spectrometer. The mass transitions m/z 296.1 → 205.0 and m/z 407.2 → 184.9 were used to measure I and II, respectively. The assay exhibited a linear dynamic range of 0.5-200 ng/mL for hydrochlorothiazide in human plasma. The lower limit of quantitation was 500 pg/mL, with a relative standard deviation of less than 9%. Acceptable precision and accuracy were obtained for concentrations over the standard curve ranges. A run time of 2.5 min for each sample made it possible to analyze more than 400 human plasma samples per day. The validated method has been successfully used to analyze human plasma samples for application in pharmacokinetic, bioavailability or bioequivalence studies. (c) 2005 John Wiley & Sons, Ltd.

  1. Uncertainty analysis methods for quantification of source terms using a large computer code

    International Nuclear Information System (INIS)

    Han, Seok Jung

    1997-02-01

    Quantification of uncertainties in source term estimations by a large computer code, such as MELCOR and MAAP, is an essential process in current probabilistic safety assessments (PSAs). The main objectives of the present study are (1) to investigate the applicability of a combined procedure of the response surface method (RSM), based on input determined from a statistical design, and the Latin hypercube sampling (LHS) technique for the uncertainty analysis of CsI release fractions under a hypothetical severe accident sequence of a station blackout at the Young-Gwang nuclear power plant, using the MAAP3.0B code as a benchmark problem; and (2) to propose a new measure of uncertainty importance based on distributional sensitivity analysis. On the basis of the results obtained in the present work, the RSM is recommended as a principal tool for overall uncertainty analysis in source term quantification, while the LHS is used in the calculation of standardized regression coefficients (SRC) and standardized rank regression coefficients (SRRC) to determine the subset of the most important input parameters in the final screening step and to check the cumulative distribution functions (cdfs) obtained by RSM. Verification of the response surface model for sufficient accuracy is a prerequisite for the reliability of the final results obtained by the combined procedure proposed in the present work. In the present study a new measure has been developed that utilizes the metric distance obtained from cumulative distribution functions (cdfs). The measure has been evaluated for three different cases of distributions in order to assess its characteristics: two cases in which the distribution is known analytically, and one in which the distribution is unknown. The first case is given by symmetric analytical distributions; the second consists of two asymmetric distributions whose skewness is non-zero.

  2. An Optimized Method for Quantification of Pathogenic Leptospira in Environmental Water Samples.

    Science.gov (United States)

    Riediger, Irina N; Hoffmaster, Alex R; Casanovas-Massana, Arnau; Biondo, Alexander W; Ko, Albert I; Stoddard, Robyn A

    2016-01-01

    Leptospirosis is a zoonotic disease usually acquired by contact with water contaminated with urine of infected animals. However, few molecular methods have been used to monitor or quantify pathogenic Leptospira in environmental water samples. Here we optimized a DNA extraction method for the quantification of leptospires using a previously described Taqman-based qPCR method targeting lipL32, a gene unique to and highly conserved in pathogenic Leptospira. QIAamp DNA mini, MO BIO PowerWater DNA and PowerSoil DNA Isolation kits were evaluated to extract DNA from sewage, pond, river and ultrapure water samples spiked with leptospires. The performance of each kit varied with sample type. Sample processing methods were further evaluated and optimized using the PowerSoil DNA kit, due to its performance on turbid water samples and its reproducibility. Centrifugation speeds, water volumes and the use of Escherichia coli as a carrier were compared to improve DNA recovery. All matrices showed strong linearity over a range of concentrations from 10⁶ to 10⁰ leptospires/mL, with low limits of detection. The optimized method for quantification of pathogenic Leptospira in environmental waters (river, pond and sewage) consists of the concentration of 40 mL samples by centrifugation at 15,000×g for 20 minutes at 4°C, followed by DNA extraction with the PowerSoil DNA Isolation kit. Although the method described herein needs to be validated in environmental studies, it potentially provides the opportunity for effective, timely and sensitive assessment of environmental leptospiral burden.
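
    Quantification from such a lipL32 qPCR run reduces to reading concentrations off a log-linear standard curve; a minimal Python sketch with hypothetical Cq values:

        import numpy as np

        log_conc = np.array([6, 5, 4, 3, 2, 1, 0], dtype=float)   # log10(leptospires/mL)
        cq = np.array([16.1, 19.4, 22.8, 26.2, 29.5, 32.9, 36.3]) # hypothetical Cq values

        slope, intercept = np.polyfit(log_conc, cq, 1)
        efficiency = 10 ** (-1 / slope) - 1        # ~1.0 means 100% PCR efficiency
        cq_sample = 24.7
        leptospires_per_ml = 10 ** ((cq_sample - intercept) / slope)
        print(f"E = {efficiency:.2f}, sample = {leptospires_per_ml:.0f} leptospires/mL")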

  3. A novel method for quantification of beam's-eye-view tumor tracking performance.

    Science.gov (United States)

    Hu, Yue-Houng; Myronakis, Marios; Rottmann, Joerg; Wang, Adam; Morf, Daniel; Shedlock, Daniel; Baturin, Paul; Star-Lack, Josh; Berbeco, Ross

    2017-11-01

    In-treatment imaging using an electronic portal imaging device (EPID) can be used to confirm patient and tumor positioning. Real-time tumor tracking performance using current digital megavolt (MV) imagers is hindered by poor image quality. Novel EPID designs may help to improve quantum noise response while also preserving the high spatial resolution of the current clinical detector. Recently investigated EPID design improvements include, but are not limited to, multi-layer imager (MLI) architecture, thick crystalline and amorphous scintillators, and phosphor pixelation and focusing. The goal of the present study was to provide a method of quantifying improvement in tracking performance as well as to reveal the physical underpinnings of detector design that impact tracking quality. The study employs a generalizable ideal observer methodology for the quantification of tumor tracking performance. The analysis is applied to study both the effect of increasing scintillator thickness in a standard, single-layer imager (SLI) design and the effect of MLI architecture on tracking performance. The present study uses the ideal observer signal-to-noise ratio (d') as a surrogate for tracking performance. We employ functions that model clinically relevant tasks and generalized frequency-domain imaging metrics to connect image quality with tumor tracking. A detection task for relevant Cartesian shapes (i.e., spheres and cylinders) was used to quantify the trackability of cases employing fiducial markers. Automated lung tumor tracking algorithms often leverage the differences in benign and malignant lung tissue textures. These types of algorithms (e.g., soft-tissue localization - STiL) were simulated by designing a discrimination task that quantifies the differentiation of tissue textures, measured experimentally and fit as a power-law trend (with exponent β) using a cohort of MV images of patient lungs. The modeled MTF and NPS were used to investigate the effect of
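
    A minimal numerical sketch of the frequency-domain ideal observer SNR used here as the tracking surrogate (Python; the task function, MTF, and NPS below are illustrative toy models, not measured detector data):

        import numpy as np

        def dprime(task_ft, mtf, nps, df):
            """SKE/BKE ideal observer: d'^2 = sum |W_task|^2 MTF^2 / NPS * df^2."""
            return np.sqrt(np.sum(np.abs(task_ft) ** 2 * mtf ** 2 / nps) * df ** 2)

        f = np.linspace(-2.0, 2.0, 129)            # spatial frequency, cycles/mm
        u, v = np.meshgrid(f, f)
        rho = np.hypot(u, v)
        mtf = np.abs(np.sinc(rho / 4.0))           # toy MTF
        nps = 1e-6 * (1.0 + rho)                   # toy NPS
        task = np.exp(-rho ** 2 / 0.5)             # FT of a blob-like detection target
        print(dprime(task, mtf, nps, df=f[1] - f[0]))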

  4. A new correction method for determination on carbohydrates in lignocellulosic biomass.

    Science.gov (United States)

    Li, Hong-Qiang; Xu, Jian

    2013-06-01

    The accurate determination of the key components in lignocellulosic biomass is the premise of pretreatment and bioconversion. Currently, the widely used 72% H2SO4 two-step hydrolysis quantitative saccharification (QS) procedure uses the loss coefficients of monosaccharide standards to correct for monosaccharide loss in the secondary hydrolysis (SH) of QS, which may result in excessive correction. By studying the quantitative relationships between glucose and xylose losses under specific hydrolysis conditions and the production of HMF and furfural, a simple correction for the monosaccharide loss from both the primary hydrolysis (PH) and the SH was established, using HMF and furfural as calibrators. This method was applied to component determination on corn stover, Miscanthus and cotton stalk (raw materials and pretreated) and compared to the NREL method. It has been proved that this method can avoid excessive correction for samples with high carbohydrate contents. Copyright © 2013 Elsevier Ltd. All rights reserved.
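
    A minimal Python sketch of the calibrator idea: glucose dehydrates to HMF and xylose to furfural (each losing three H2O), so detected HMF/furfural can be converted back into lost sugar. The idealized stoichiometric mass ratios (180.16/126.11 and 150.13/96.08) are used below, whereas the paper fits empirical coefficients for its specific hydrolysis conditions:

        def correct_sugars(glucose_meas, xylose_meas, hmf, furfural):
            """All concentrations in g/L from HPLC analysis of the hydrolysate."""
            glucose_true = glucose_meas + (180.16 / 126.11) * hmf
            xylose_true = xylose_meas + (150.13 / 96.08) * furfural
            return glucose_true, xylose_true

        print(correct_sugars(glucose_meas=28.4, xylose_meas=14.1,
                             hmf=0.35, furfural=0.60))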

  5. Beam-Based Error Identification and Correction Methods for Particle Accelerators

    CERN Document Server

    AUTHOR|(SzGeCERN)692826; Tomas, Rogelio; Nilsson, Thomas

    2014-06-10

    Modern particle accelerators have tight tolerances on the acceptable deviation from their desired machine parameters. The control of these parameters is of crucial importance for safe machine operation and performance. This thesis focuses on beam-based methods and algorithms to identify and correct errors in particle accelerators. The optics measurements and corrections of the Large Hadron Collider (LHC), which resulted in an unprecedentedly low β-beat for a hadron collider, are described. The transverse coupling is another parameter which is important to control. Improvement in the reconstruction of the coupling from turn-by-turn data has resulted in a significant decrease of the measurement uncertainty. An automatic coupling correction method, which is based on the injected beam oscillations, has been successfully used in normal operation of the LHC. Furthermore, a new method to measure and correct chromatic coupling that was applied to the LHC is described. It resulted in a decrease of the chromatic coupli...

  6. Validation of an HPLC-UV method for the identification and quantification of bioactive amines in chicken meat

    Directory of Open Access Journals (Sweden)

    D.C.S. Assis

    2016-06-01

    Full Text Available ABSTRACT A high-performance liquid chromatography with ultraviolet detection (HPLC-UV) method was validated for the study of bioactive amines in chicken meat. A gradient elution system with an ultraviolet detector was used after extraction with trichloroacetic acid and pre-column derivatization with dansyl chloride. Putrescine, cadaverine, histamine, tyramine, spermidine, and spermine standards were used for the evaluation of the following performance parameters: selectivity, linearity, precision, recovery, limits of detection, limits of quantification and ruggedness. The results indicated excellent selectivity, separation of all amines, a coefficient of determination greater than 0.99 and recovery from 92.25 to 102.25% at the concentration of 47.2 mg.kg-1, with a limit of detection of 0.3 mg.kg-1 and a limit of quantification of 0.9 mg.kg-1 for all amines, with the exception of histamine, which exhibited a limit of quantification of 1 mg.kg-1. In conclusion, the performance parameters demonstrated the adequacy of the method for the detection and quantification of bioactive amines in chicken meat.

  7. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods.

    Science.gov (United States)

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-07

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long-term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. Experiment proves that the left and right masses' quadrature errors are different, so the quadrature correction systems should be arranged independently. The process leading to quadrature error is described, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, the quadrature force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, compared with 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walk (ARW) value from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  8. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    Directory of Open Access Journals (Sweden)

    Huiliang Cao

    2016-01-01

    Full Text Available This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long-term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. Experiment proves that the left and right masses' quadrature errors are different, so the quadrature correction systems should be arranged independently. The process leading to quadrature error is described, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, the quadrature force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, compared with 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walk (ARW) value from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  9. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    Science.gov (United States)

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-01

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long-term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. Experiment proves that the left and right masses' quadrature errors are different, so the quadrature correction systems should be arranged independently. The process leading to quadrature error is described, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, the quadrature force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, compared with 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walk (ARW) value from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability. PMID:26751455

  10. Quantification of methane and nitrous oxide emissions from various waste treatment facilities by tracer dilution method

    Science.gov (United States)

    Mønster, Jacob; Rella, Chris; Jacobson, Gloria; Kjeldsen, Peter; Scheutz, Charlotte

    2013-04-01

    Urban activities generate solid and liquid waste, and the handling and aftercare of the waste results in the emission of various compounds into the surrounding environment. Some of these compounds are emitted as gases into the atmosphere, including methane and nitrous oxide. Methane and nitrous oxide are strong greenhouse gases, considered to have 25 and 298 times the greenhouse gas potential of carbon dioxide, respectively, on a hundred-year horizon (Solomon et al. 2007). Global observations of both gases have shown increasing concentrations that contribute significantly to the greenhouse effect. Methane and nitrous oxide are emitted from both natural and anthropogenic sources, and inventories of source-specific fugitive emissions of methane and nitrous oxide from anthropogenic sources are often estimated on the basis of modeling and mass balance. Though these methods are well developed, actual measurements for quantification of the emissions are a very useful tool for verifying the modeling and mass balance, as well as for validating initiatives aimed at lowering the emissions of methane and nitrous oxide. One approach to performing such measurements is the tracer dilution method (Galle et al. 2001, Scheutz et al. 2011), where the exact location of the source is identified and a tracer gas is released at this source location at a known flow. The ratio of the downwind concentrations of the tracer gas to those of methane and nitrous oxide gives the emission rates of the greenhouse gases. The tracer dilution method can be performed using both stationary and mobile measurements; in both cases, real-time measurements of both the tracer and the quantified gas are required, placing high demands on the analytical detection method. To perform the methane and nitrous oxide measurements, two robust instruments capable of real-time measurements were used, based on cavity ring-down spectroscopy and operating in the near-infrared spectral region. One instrument measured the methane and
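
    The arithmetic behind the tracer dilution method is a simple concentration ratio scaled by molar mass; a minimal Python sketch (release rate and plume concentrations are hypothetical, and acetylene is assumed as the tracer gas):

        def emission_rate(q_tracer, dc_gas, dc_tracer, mw_gas, mw_tracer):
            """E_gas = Q_tracer * (dC_gas / dC_tracer) * (M_gas / M_tracer),
            with dC the background-corrected plume mixing ratios."""
            return q_tracer * (dc_gas / dc_tracer) * (mw_gas / mw_tracer)

        e_ch4 = emission_rate(q_tracer=1.2,       # kg/h released at the source
                              dc_gas=85.0,        # ppb CH4 above background
                              dc_tracer=40.0,     # ppb tracer above background
                              mw_gas=16.04, mw_tracer=26.04)
        print(f"methane emission: {e_ch4:.2f} kg/h")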

  11. Stable isotope dilution HILIC-MS/MS method for accurate quantification of glutamic acid, glutamine, pyroglutamic acid, GABA and theanine in mouse brain tissues.

    Science.gov (United States)

    Inoue, Koichi; Miyazaki, Yasuto; Unno, Keiko; Min, Jun Zhe; Todoroki, Kenichiro; Toyo'oka, Toshimasa

    2016-01-01

    In this study, we developed a stable isotope dilution hydrophilic interaction liquid chromatography with tandem mass spectrometry (HILIC-MS/MS) technique for the accurate, reasonable and simultaneous quantification of glutamic acid (Glu), glutamine (Gln), pyroglutamic acid (pGlu), γ-aminobutyric acid (GABA) and theanine in mouse brain tissues. The quantification of these analytes was accomplished using stable isotope internal standards and the HILIC separation mode to fully correct for intramolecular cyclization during electrospray ionization. Linear calibrations were available with high coefficients of correlation (r² > 0.999, range from 10 pmol/mL to 50 mol/mL). As an application to theanine intake, the determination of Glu, Gln, pGlu, GABA and theanine in hippocampus and central cortex tissues was performed with the developed method. In the hippocampus, the concentration levels of Glu and pGlu were significantly reduced during theanine intake, while the concentration level of GABA increased. This result showed that ingested theanine affects the metabolic balance of Glu analogs in the hippocampus. Copyright © 2015 John Wiley & Sons, Ltd.

  12. An HPLC-DAD method for the quantification of main phenolic compounds from leaves of Cecropia species

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Geison M.; Ortmann, Caroline F.; Schenkel, Eloir P.; Reginatto, Flavio H., E-mail: freginatto@hotmail.co [Universidade Federal de Santa Catarina (UFSC), Florianopolis (Brazil). Centro de Ciencias da Saude. Dept. de Ciencias Farmaceuticas

    2011-07-01

    An efficient and reproducible HPLC-DAD method was developed and validated for the simultaneous quantification of the major compounds (chlorogenic acid, isoorientin, orientin and isovitexin) present in the leaves of two Cecropia species, C. glaziovii and C. pachystachya. The C-glycosylflavones isoorientin and isovitexin were isolated from the leaves of both species, and chlorogenic acid (3-O-caffeoylquinic acid) and the O-glycosylflavonol isoquercitrin were identified in both; the C-glycosylflavone orientin was isolated only from C. pachystachya. Chlorogenic acid was the major compound in both species (11.1 mg g⁻¹ of extract of C. glaziovii and 27.2 mg g⁻¹ of extract of C. pachystachya) and, among the flavonoids quantified, isovitexin was the main C-glycosylflavonoid for C. glaziovii (4.6 mg g⁻¹ of extract) and isoorientin the main one for C. pachystachya (17.3 mg g⁻¹ of extract). (author)

  13. Imaging-based quantification of hepatic fat: methods and clinical applications.

    Science.gov (United States)

    Ma, Xiaozhou; Holalkere, Nagaraj-Setty; Kambadakone R, Avinash; Mino-Kenudson, Mari; Hahn, Peter F; Sahani, Dushyant V

    2009-01-01

    Fatty liver disease comprises a spectrum of conditions (simple hepatic steatosis, steatohepatitis with inflammatory changes, and end-stage liver disease with fibrosis and cirrhosis). Hepatic steatosis is often associated with diabetes and obesity and may be secondary to alcohol and drug use, toxins, viral infections, and metabolic diseases. Detection and quantification of liver fat have many clinical applications, and early recognition is crucial to institute appropriate management and prevent progression. Histopathologic analysis is the reference standard to detect and quantify fat in the liver, but results are vulnerable to sampling error. Moreover, it can cause morbidity and complications and cannot be repeated often enough to monitor treatment response. Imaging can be repeated regularly and allows assessment of the entire liver, thus avoiding sampling error. Selection of appropriate imaging methods demands understanding of their advantages and limitations and the suitable clinical setting. Ultrasonography is effective for detecting moderate or severe fatty infiltration but is limited by lack of interobserver reliability and intraobserver reproducibility. Computed tomography allows quantitative and qualitative evaluation and is generally highly accurate and reliable; however, the results may be confounded by hepatic parenchymal changes due to cirrhosis or depositional diseases. Magnetic resonance (MR) imaging with appropriate sequences (eg, chemical shift techniques) has similarly high sensitivity, and MR spectroscopy provides unique advantages for some applications. However, both are expensive and too complex to be used to monitor steatosis. (c) RSNA, 2009.

  14. Study on Meshfree Hermite Radial Point Interpolation Method for Flexural Wave Propagation Modeling and Damage Quantification

    Directory of Open Access Journals (Sweden)

    Hosein Ghaffarzadeh

    Full Text Available Abstract This paper investigates the numerical modeling of flexural wave propagation in Euler-Bernoulli beams using the Hermite-type radial point interpolation method (HRPIM) under a damage quantification approach. HRPIM employs radial basis functions (RBFs) and their derivatives for shape function construction as a meshfree technique. The performance of the multiquadric (MQ) RBF for the assessment of the reflection ratio was evaluated. HRPIM signals were compared with the theoretical and finite element responses. The results show that MQ is a suitable RBF for HRPIM and wave propagation, although the range of proper shape parameters is notable. The number of field nodes is the main parameter for accurate wave propagation modeling using HRPIM. The size of the support domain should be less than an upper bound in order to prevent high error. With regard to the number of quadrature points, the minimum number of points is adequate for a stable solution, but the existence of more points in the damage region does not necessarily lead to more accurate responses. It is concluded that pure HRPIM, without any polynomial terms, is acceptable, but considering a few terms will improve the accuracy, even though too many terms make the problem unstable and inaccurate.
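
    A minimal Python sketch of 1D multiquadric RBF shape-function construction of the kind HRPIM builds on (pure RBF point interpolation, without the polynomial terms or Hermite derivative rows; node layout and shape parameter are illustrative):

        import numpy as np

        def mq_shape_functions(x_eval, nodes, c=0.1):
            """phi(x) = R^{-1} r(x): the interpolation passes through all nodes."""
            nodes = np.asarray(nodes, dtype=float)
            R = np.sqrt((nodes[:, None] - nodes[None, :]) ** 2 + c ** 2)  # moment matrix
            r = np.sqrt((x_eval - nodes) ** 2 + c ** 2)                   # RBFs at x_eval
            return np.linalg.solve(R, r)          # R is symmetric

        nodes = np.linspace(0.0, 1.0, 11)
        phi = mq_shape_functions(0.37, nodes)
        # without polynomial augmentation the partition of unity is only approximate
        print(phi.sum())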

  15. Quantification method analysis of the relationship between occupant injury and environmental factors in traffic accidents.

    Science.gov (United States)

    Ju, Yong Han; Sohn, So Young

    2011-01-01

    Injury analysis following a vehicle crash is one of the most important research areas. However, most injury analyses have focused on one-dimensional injury variables, such as the AIS (Abbreviated Injury Scale) or the IIS (Injury Impairment Scale), one at a time, in relation to various traffic accident factors, and such studies cannot reflect the various injury phenomena that appear simultaneously. In this paper, we apply quantification method II to the NASS (National Automotive Sampling System) CDS (Crashworthiness Data System) to find the relationship between categorical injury phenomena, such as injury scale, injury position, and injury type, and various traffic accident condition factors, such as speed, collision direction, vehicle type, and seat position. Our empirical analysis indicated the importance of safety devices, such as restraint equipment and airbags. In addition, we found that narrow impact, ejection, air bag deployment, and higher speed are associated with severe rather than minor injury to the thigh, ankle, and leg, in terms of dislocation, abrasion, or laceration. Copyright © 2010 Elsevier Ltd. All rights reserved.

  16. RELIC: a novel dye-bias correction method for Illumina Methylation BeadChip.

    Science.gov (United States)

    Xu, Zongli; Langie, Sabine A S; De Boever, Patrick; Taylor, Jack A; Niu, Liang

    2017-01-03

    The Illumina Infinium HumanMethylation450 BeadChip and its successor, the Infinium MethylationEPIC BeadChip, have been extensively utilized in epigenome-wide association studies. Both arrays use two fluorescent dyes (Cy3-green/Cy5-red) to measure methylation level at CpG sites. However, performance differences between the dyes can result in biased estimates of methylation levels. Here we describe a novel method, called REgression on Logarithm of Internal Control probes (RELIC), to correct for dye bias on the whole array by utilizing the intensity values of paired internal control probes that monitor the two color channels. We evaluate the method in several datasets against other widely used dye-bias correction methods. Results on data quality improvement showed that RELIC statistically significantly outperforms alternative dye-bias correction methods. We incorporated the method into the R package ENmix, which is freely available from the Bioconductor website ( https://www.bioconductor.org/packages/release/bioc/html/ENmix.html ). RELIC is an efficient and robust method to correct for dye bias in Illumina Methylation BeadChip data. It outperforms alternative methods and is conveniently implemented in the R package ENmix to facilitate DNA methylation studies.
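
    A minimal Python sketch of the regression-on-logs idea (array layout and probe pairing are simplified relative to the actual RELIC implementation in ENmix):

        import numpy as np

        def dye_bias_correct(red, ctrl_red, ctrl_green):
            """Fit log(green) ~ log(red) on paired internal control probes,
            then map red-channel intensities onto the green-channel scale."""
            b1, b0 = np.polyfit(np.log(ctrl_red), np.log(ctrl_green), 1)
            return np.exp(b0 + b1 * np.log(red))

        ctrl_red = np.array([1500., 3200., 6400., 12800.])    # hypothetical controls
        ctrl_green = np.array([1800., 3900., 7600., 15500.])
        red_corrected = dye_bias_correct(np.array([2500., 9000.]), ctrl_red, ctrl_green)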

  17. A novel 3D absorption correction method for quantitative EDX-STEM tomography

    International Nuclear Information System (INIS)

    Burdet, Pierre; Saghi, Z.; Filippin, A.N.; Borrás, A.; Midgley, P.A.

    2016-01-01

    This paper presents a novel 3D method to correct for absorption in energy dispersive X-ray (EDX) microanalysis of heterogeneous samples of unknown structure and composition. Using STEM-based tomography coupled with EDX, an initial 3D reconstruction is used to extract the location of generated X-rays as well as the X-ray path through the sample to the surface. The absorption correction needed to retrieve the generated X-ray intensity is then calculated voxel-by-voxel, estimating the different compositions encountered by the X-ray. The method is applied to a core/shell nanowire containing carbon and oxygen, two elements generating highly absorbed low-energy X-rays. Absorption is shown to cause major reconstruction artefacts, in the form of an incomplete recovery of the oxide and an erroneous presence of carbon in the shell. By applying the correction method, these artefacts are greatly reduced. The accuracy of the method is assessed using reference X-ray lines with low absorption. - Highlights: • A novel 3D absorption correction method is proposed for 3D EDX-STEM tomography. • The absorption of X-rays along the path to the surface is calculated voxel-by-voxel. • The method is applied on highly absorbed X-rays emitted from a core/shell nanowire. • Absorption is shown to cause major artefacts in the reconstruction. • Using the absorption correction method, the reconstruction artefacts are greatly reduced.

  18. A novel 3D absorption correction method for quantitative EDX-STEM tomography

    Energy Technology Data Exchange (ETDEWEB)

    Burdet, Pierre, E-mail: pierre.burdet@a3.epfl.ch [Department of Materials Science and Metallurgy, University of Cambridge, Charles Babbage Road 27, Cambridge CB3 0FS, Cambridgeshire (United Kingdom); Saghi, Z. [Department of Materials Science and Metallurgy, University of Cambridge, Charles Babbage Road 27, Cambridge CB3 0FS, Cambridgeshire (United Kingdom); Filippin, A.N.; Borrás, A. [Nanotechnology on Surfaces Laboratory, Materials Science Institute of Seville (ICMS), CSIC-University of Seville, C/ Americo Vespucio 49, 41092 Seville (Spain); Midgley, P.A. [Department of Materials Science and Metallurgy, University of Cambridge, Charles Babbage Road 27, Cambridge CB3 0FS, Cambridgeshire (United Kingdom)

    2016-01-15

    This paper presents a novel 3D method to correct for absorption in energy dispersive X-ray (EDX) microanalysis of heterogeneous samples of unknown structure and composition. Using STEM-based tomography coupled with EDX, an initial 3D reconstruction is used to extract the location of generated X-rays as well as the X-ray path through the sample to the surface. The absorption correction needed to retrieve the generated X-ray intensity is then calculated voxel-by-voxel, estimating the different compositions encountered by the X-ray. The method is applied to a core/shell nanowire containing carbon and oxygen, two elements generating highly absorbed low-energy X-rays. Absorption is shown to cause major reconstruction artefacts, in the form of an incomplete recovery of the oxide and an erroneous presence of carbon in the shell. By applying the correction method, these artefacts are greatly reduced. The accuracy of the method is assessed using reference X-ray lines with low absorption. - Highlights: • A novel 3D absorption correction method is proposed for 3D EDX-STEM tomography. • The absorption of X-rays along the path to the surface is calculated voxel-by-voxel. • The method is applied on highly absorbed X-rays emitted from a core/shell nanowire. • Absorption is shown to cause major artefacts in the reconstruction. • Using the absorption correction method, the reconstruction artefacts are greatly reduced.
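
    A minimal Python sketch of the voxel-wise correction: step along the exit ray from the emitting voxel to the surface, accumulate attenuation from a 3D coefficient volume, and invert it (geometry handling and the estimation of local composition are heavily simplified relative to the paper):

        import numpy as np

        def absorption_correct(i_detected, origin, direction, mu, step=1.0):
            """I_gen = I_det * exp(+ integral of mu along the exit path);
            mu: 3D linear attenuation coefficients in inverse voxel units."""
            p = np.asarray(origin, dtype=float)
            d = np.asarray(direction, dtype=float)
            d = d / np.linalg.norm(d)
            path = 0.0
            while all(0 <= p[k] < mu.shape[k] for k in range(3)):
                path += mu[tuple(p.astype(int))] * step
                p = p + d * step
            return i_detected * np.exp(path)

        mu = np.full((64, 64, 64), 0.002)          # toy uniform attenuation volume
        print(absorption_correct(1000.0, origin=(32, 32, 32), direction=(0, 0, 1), mu=mu))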

  19. Simple tool for the rapid, automated quantification of glacier advance/retreat observations using multiple methods

    Science.gov (United States)

    Lea, J.

    2017-12-01

    The quantification of glacier change is a key variable within glacier monitoring, and the method used can be crucial to ensuring that data can be appropriately compared with environmental data. The topic and timescales of study (e.g. land/marine terminating environments; sub-annual/decadal/centennial/millennial timescales) often mean that different methods are more suitable for different problems. However, depending on the GIS/coding expertise of the user, some methods can be time consuming to undertake, making large-scale studies problematic. In addition, examples exist where different users have nominally applied the same methods in different studies, though with minor methodological inconsistencies in their approach. In turn, this has implications for data homogeneity where regional/global datasets may be constructed. Here, I present a simple toolbox scripted in a Matlab® environment that requires only glacier margin and glacier centreline data to quantify glacier length, glacier change between observations, and rate of change, in addition to other metrics. The toolbox includes the option to apply the established centreline or curvilinear box methods, or a new method, the variable box method, designed for tidewater margins, where box width is defined as the total width of the individual terminus observation. The toolbox is extremely flexible, and can be applied either as Matlab® functions within user scripts, or via a graphical user interface (GUI) for those unfamiliar with a coding environment. In both instances, there is potential to apply the methods quickly to large datasets (100s-1000s of glaciers, with potentially similar numbers of observations each), thus ensuring large-scale methodological consistency (and therefore data homogeneity) and allowing regional/global scale analyses to be achievable for those with limited GIS/coding experience. The toolbox has been evaluated against idealised scenarios demonstrating

  20. Methods of posture correction in junior schoolchildren by means of physical exercises

    Directory of Open Access Journals (Sweden)

    Gagara V.F.

    2012-08-01

    Full Text Available The results of the influence of physical rehabilitation methods on children's bodies are presented. The study included 16 primary school children with scoliotic changes of the thoracic spine. The complex of physical rehabilitation methods included special corrective and general health-improving exercises, therapeutic gymnastics, and positional correction. Therapeutic gymnastics sessions of 30-45 minutes were conducted 3-4 times per week. An improvement in the indices of spinal mobility and posture of the schoolchildren was noted, and the absolute indices of posture and spinal flexibility approached physiological values. A rehabilitation complex is recommended which includes elements of corrective gymnastics, therapeutic physical culture, positional correction, and massage of the trunk muscles. It is also necessary to adhere to a rational regime of day and nutrition, and to provide normative parameters of working furniture and self-control of posture.

  1. An efficient shutter-less non-uniformity correction method for infrared focal plane arrays

    Science.gov (United States)

    Huang, Xiyan; Sui, Xiubao; Zhao, Yao

    2017-02-01

    The non-uniform response of infrared focal plane array (IRFPA) detectors degrades images with fixed-pattern noise. At present, it is common to use a shutter to block the radiation of the target and to update the parameters of non-uniformity correction in the infrared imaging system. The use of a shutter causes the image to "freeze", and inevitably brings problems of system stability and reliability, power consumption, and concealment of infrared detection. In this paper, we present an efficient shutter-less non-uniformity correction (NUC) method for infrared focal plane arrays. The infrared imaging system uses data gained in a thermostat to calculate, in real time, the incident infrared radiation from the shell, and the remaining detector output, with the shell radiation removed, is corrected by the gain coefficients. This method has been tested in a real infrared imaging system, reaching a high correction level, reducing fixed-pattern noise, and adapting to a wide temperature range.
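
    Once the per-pixel shell radiation has been predicted from the thermostat calibration data, the correction itself reduces to an offset-and-gain operation; a minimal Python sketch (the shell-offset model is assumed, not the authors' exact formulation):

        import numpy as np

        def shutterless_nuc(raw, gain, shell_offset):
            """Subtract the predicted per-pixel shell radiation at the current
            shell temperature, then apply the per-pixel gain correction."""
            return gain * (raw - shell_offset)

        # shell_offset would be interpolated in real time from thermostat
        # calibration data at the measured shell temperature (assumed available)
        raw = np.random.normal(3000, 50, (288, 384))
        gain = np.ones((288, 384))
        corrected = shutterless_nuc(raw, gain, shell_offset=120.0)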

  2. Efficient color correction method for smartphone camera-based health monitoring application.

    Science.gov (United States)

    Duc Dang; Chae Ho Cho; Daeik Kim; Oh Seok Kwon; Jo Woon Chong

    2017-07-01

    Smartphone health monitoring applications have recently been highlighted due to the rapid development of the hardware and software performance of smartphones. However, the color characteristics of images captured by different smartphone models are dissimilar to each other, and this difference may give non-identical health monitoring results when smartphone health monitoring applications derive physiological information using their embedded cameras. In this paper, we investigate the differences in the color properties of images captured with different smartphone models and apply a color correction method to adjust the dissimilar color values obtained from different smartphone cameras. Experimental results show that images corrected using this method exhibit much smaller color intensity errors than the uncorrected images. These results can be applied to enhance the consistency of smartphone camera-based health monitoring applications by reducing color intensity errors among images obtained from different smartphones.
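
    One common way to implement such a correction is a least-squares color correction matrix fitted on reference patches; a minimal Python sketch (hypothetical patch values; the paper's exact correction model may differ):

        import numpy as np

        def fit_ccm(measured_rgb, reference_rgb):
            """3x3 matrix M minimizing ||measured @ M - reference||^2."""
            M, *_ = np.linalg.lstsq(measured_rgb, reference_rgb, rcond=None)
            return M

        measured = np.array([[200, 60, 50], [60, 180, 70],
                             [50, 60, 190], [230, 230, 220]], dtype=float)
        reference = np.array([[210, 50, 45], [50, 200, 60],
                              [45, 50, 205], [240, 240, 235]], dtype=float)
        corrected = measured @ fit_ccm(measured, reference)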

  3. Validated reverse transcription droplet digital PCR serves as a higher order method for absolute quantification of Potato virus Y strains.

    Science.gov (United States)

    Mehle, Nataša; Dobnik, David; Ravnikar, Maja; Pompe Novak, Maruša

    2018-05-03

    RNA viruses have a great potential for high genetic variability and rapid evolution, generated by mutation and recombination under selection pressure. This is also the case for Potato virus Y (PVY), which comprises a high diversity of different recombinant and non-recombinant strains. Consequently, it is hard to develop a reverse transcription real-time quantitative PCR (RT-qPCR) assay with the same amplification efficiency for all PVY strains, which would enable their balanced quantification; this is especially needed in mixed infections and other studies of pathogenesis. To achieve this, we initially transferred the universal PVY RT-qPCR assay to a reverse transcription droplet digital PCR (RT-ddPCR) format. RT-ddPCR is an absolute quantification method in which a calibration curve is not needed, and it is less prone to inhibitors. The RT-ddPCR developed and validated in this study achieved a dynamic range of quantification over five orders of magnitude and, in terms of sensitivity, was comparable to, or even better than, RT-qPCR, with lower measurement variability. We have shown that RT-ddPCR can be used as a reference tool for the evaluation of different RT-qPCR assays. In addition, it can be used for the quantification of RNA based on in-house reference materials that can then be used as calibrators in diagnostic laboratories.
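
    Absolute quantification in ddPCR rests on Poisson statistics over the droplet partitions; a minimal Python sketch (the droplet volume is a typical, system-dependent value):

        import numpy as np

        def ddpcr_copies_per_ul(positive, total, droplet_nl=0.85):
            """lambda = -ln(1 - p) copies per droplet, scaled to copies/uL
            of the reaction (0.85 nL droplet volume is an assumption)."""
            lam = -np.log(1.0 - positive / total)
            return lam / (droplet_nl * 1e-3)      # nL -> uL

        print(ddpcr_copies_per_ul(positive=4200, total=15000))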

  4. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    International Nuclear Information System (INIS)

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees

    2015-01-01

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts
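
    A minimal Python sketch of the HU-replacement idea: learn HU as a function of coregistered MRI intensity on a nearby artifact-free slice, then evaluate it over the corrupted region (a strong simplification of the paper's paired-intensity analysis):

        import numpy as np

        def predict_hu(mri_corrupted, mri_clean, hu_clean, deg=3):
            """Polynomial map from MRI intensity to HU, fitted on an
            artifact-free slice and applied to the corrupted slice."""
            coeff = np.polyfit(mri_clean.ravel(), hu_clean.ravel(), deg)
            return np.polyval(coeff, mri_corrupted)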

  5. Investigation of Compton scattering correction methods in cardiac SPECT by Monte Carlo simulations

    International Nuclear Information System (INIS)

    Silva, A.M. Marques da; Furlan, A.M.; Robilotta, C.C.

    2001-01-01

    The goal of this work was to use Monte Carlo simulations to investigate the effects of two scatter correction methods, the dual energy window (DEW) and the dual photopeak window (DPW), in quantitative cardiac SPECT reconstruction. The MCAT torso-cardiac phantom, with 99mTc and a non-uniform attenuation map, was simulated. Two different photopeak windows were evaluated for the DEW method: 15% and 20%. Two 10%-wide subwindows centered symmetrically within the photopeak were used in the DPW method. Iterative ML-EM reconstruction with a modified projector-backprojector for attenuation correction was applied. Results indicated that the choice of the scattering and photopeak windows determines the correction accuracy. For the 15% window, a fitted scatter fraction gives better results than k = 0.5. For the 20% window, DPW is the best method, but it requires parameter estimation using Monte Carlo simulations. (author)
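
    For reference, the DEW estimate of the primary counts is a one-line subtraction per projection bin; a minimal Python sketch:

        import numpy as np

        def dew_primary(photopeak_counts, scatter_counts, k=0.5):
            """primary = photopeak - k * scatter window; k = 0.5 is the
            classical value, while this study found a fitted scatter
            fraction performs better for the 15% window."""
            return np.maximum(photopeak_counts - k * scatter_counts, 0.0)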

  6. Ballistic deficit correction methods for large Ge detectors-high counting rate study

    International Nuclear Information System (INIS)

    Duchene, G.; Moszynski, M.

    1995-01-01

    This study presents different ballistic deficit correction methods versus input count rate (from 3 to 50 kcounts/s) using four large Ge detectors of about 70% relative efficiency. It turns out that the Tennelec TC245 linear amplifier in the BDC mode (Hinshaw method) is the best compromise for energy resolution throughout. All correction methods lead to narrow sum-peaks indistinguishable from single γ lines. The full energy peak throughput is found to be representative of the pile-up inspection dead time of the corrector circuits. This work also presents a new and simple representation, plotting simultaneously energy resolution and throughput versus input count rate. (TEC). 12 refs., 11 figs

  7. N3 Bias Field Correction Explained as a Bayesian Modeling Method

    DEFF Research Database (Denmark)

    Larsen, Christian Thode; Iglesias, Juan Eugenio; Van Leemput, Koen

    2014-01-01

    Although N3 is perhaps the most widely used method for MRI bias field correction, its underlying mechanism is in fact not well understood. Specifically, the method relies on a relatively heuristic recipe of alternating iterative steps that does not optimize any particular objective function. In t...

  8. Integrals of random fields treated by the model correction factor method

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  9. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  10. Application of the homology method for quantification of low-attenuation lung region in patients with and without COPD

    Directory of Open Access Journals (Sweden)

    Nishio M

    2016-09-01

    Full Text Available Mizuho Nishio,1 Kazuaki Nakane,2 Yutaka Tanaka3 1Clinical PET Center, Institute of Biomedical Research and Innovation, Hyogo, Japan; 2Department of Molecular Pathology, Osaka University Graduate School of Medicine and Health Science, Osaka, Japan; 3Department of Radiology, Chibune General Hospital, Osaka, Japan Background: Homology is a mathematical concept that can be used to quantify the degree of contact. Recently, image processing with the homology method has been proposed. In this study, we used the homology method and computed tomography images to quantify emphysema. Methods: This study included 112 patients who had undergone computed tomography and a pulmonary function test. Low-attenuation lung regions were evaluated by the homology method, and homology-based emphysema quantification (b0, b1, nb0, nb1, and R) was performed. For comparison, the percentage of low-attenuation lung area (LAA%) was also obtained. Relationships between the emphysema quantification and the pulmonary function test results were evaluated by Pearson's correlation coefficients. In addition, the patients were divided into three groups based on the guidelines of the Global initiative for chronic Obstructive Lung Disease: Group A, nonsmokers; Group B, smokers without COPD, mild COPD, and moderate COPD; Group C, severe COPD and very severe COPD. The homology-based emphysema quantification and LAA% were compared among these groups. Results: For forced expiratory volume in 1 second/forced vital capacity, the correlation coefficients were as follows: LAA%, -0.603; b0, -0.460; b1, -0.500; nb0, -0.449; nb1, -0.524; and R, -0.574. For forced expiratory volume in 1 second, the coefficients were as follows: LAA%, -0.461; b0, -0.173; b1, -0.314; nb0, -0.191; nb1, -0.329; and R, -0.409. Between Groups A and B, the difference in nb0 was significant (P-value = 0.00858), and the differences in the other types of quantification were not significant. Conclusion: Feasibility of the
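
    The quantities b0 and b1 are the Betti numbers of the low-attenuation region; a minimal Python sketch for a 2D binary mask (connectivity conventions are simplified, and the paper's nb0/nb1/R normalizations are not reproduced):

        import numpy as np
        from scipy import ndimage

        def betti_numbers_2d(laa_mask):
            """b0 = connected components of the mask; b1 = holes, counted as
            components of the complement not touching the image border."""
            laa_mask = laa_mask.astype(bool)
            _, b0 = ndimage.label(laa_mask)
            holes, n = ndimage.label(~laa_mask)
            border = set(np.unique(np.concatenate(
                [holes[0], holes[-1], holes[:, 0], holes[:, -1]])))
            b1 = len(set(range(1, n + 1)) - border)
            return b0, b1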

  11. A Geometric Correction Method of Plane Image Based on OpenCV

    Directory of Open Access Journals (Sweden)

    Li Xiaopeng

    2014-02-01

    Full Text Available Using OpenCV, a geometric correction method for plane images from a single grid image taken at an unknown camera position is presented. The method removes perspective and lens distortions from an image, is simple and efficient to implement, and experiments indicate that it achieves high precision, making it usable in domains such as plane measurement.
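
    A minimal OpenCV sketch of this kind of single-image perspective rectification is shown below; the corner coordinates, file names and output size are hypothetical, and the paper's own grid-detection and lens-distortion steps are omitted.

        import cv2
        import numpy as np

        # Hypothetical pixel coordinates of four grid corners in the distorted image
        src = np.float32([[102, 87], [588, 65], [612, 540], [80, 522]])
        # Where those corners should land in the rectified image
        dst = np.float32([[0, 0], [500, 0], [500, 500], [0, 500]])

        img = cv2.imread("grid.png")
        H = cv2.getPerspectiveTransform(src, dst)        # 3x3 homography
        rectified = cv2.warpPerspective(img, H, (500, 500))
        cv2.imwrite("rectified.png", rectified)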

  12. Comparison between modified Dixon MRI techniques, MR spectroscopic relaxometry, and different histologic quantification methods in the assessment of hepatic steatosis

    Energy Technology Data Exchange (ETDEWEB)

    Kukuk, Guido M.; Block, Wolfgang; Willinek, Winfried A.; Schild, Hans H.; Traeber, Frank [University of Bonn, Department of Radiology, Bonn (Germany); Hittatiya, Kanishka; Fischer, Hans-Peter [University of Bonn, Department of Pathology, Bonn (Germany); Sprinkart, Alois M. [University of Bonn, Department of Radiology, Bonn (Germany); Ruhr-University, Institute of Medical Engineering, Bochum (Germany); Eggers, Holger [Philips Research Europe, Hamburg (Germany); Gieseke, Juergen [University of Bonn, Department of Radiology, Bonn (Germany); Philips Healthcare, Best (Netherlands); Moeller, Philipp; Spengler, Ulrich; Trebicka, Jonel [University of Bonn, Department of Internal Medicine I, Bonn (Germany)

    2015-10-15

    To compare systematically quantitative MRI, MR spectroscopy (MRS), and different histological methods for liver fat quantification in order to identify possible incongruities. Fifty-nine consecutive patients with liver disorders were examined on a 3 T MRI system. Quantitative MRI was performed using a dual- and a six-echo variant of the modified Dixon (mDixon) sequence, calculating proton density fat fraction (PDFF) maps, in addition to single-voxel MRS. Histological fat quantification included estimation of the percentage of hepatocytes containing fat vesicles as well as semi-automatic quantification (qHisto) using tissue quantification software. In 33 of 59 patients, the hepatic fat fraction was >5 % as determined by MRS (maximum 45 %, mean 17 %). Dual-echo mDixon yielded systematically lower PDFF values than six-echo mDixon (mean difference 1.0 %; P < 0.001). Six-echo mDixon correlated excellently with MRS, qHisto, and the estimated percentage of hepatocytes containing fat vesicles (R = 0.984, 0.967, 0.941, respectively, all P < 0.001). Mean values obtained by the estimated percentage of hepatocytes containing fat were higher by a factor of 2.5 in comparison to qHisto. Six-echo mDixon and MRS showed the best agreement with values obtained by qHisto. Six-echo mDixon, MRS, and qHisto provide the most robust and congruent results and are therefore most appropriate for reliable quantification of liver fat. (orig.)
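
    In its simplest form, the proton density fat fraction is PDFF = F/(W + F), computed voxel-wise from the separated water (W) and fat (F) signals; a toy sketch follows. The actual six-echo mDixon reconstruction additionally models T2* decay and the multi-peak fat spectrum, which this function ignores.

        import numpy as np

        def pdff_percent(water, fat, eps=1e-9):
            """Voxel-wise proton density fat fraction (percent) from separated
            water and fat signal magnitudes (simplified; no T2* correction)."""
            water = np.asarray(water, dtype=float)
            fat = np.asarray(fat, dtype=float)
            return 100.0 * fat / (water + fat + eps)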

  13. A New Online Calibration Method Based on Lord's Bias-Correction.

    Science.gov (United States)

    He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei

    2017-09-01

    Online calibration techniques are widely employed to calibrate new items due to their advantages. Method A is the simplest online calibration method and has attracted much attention from researchers recently. However, a key assumption of Method A is that it treats the person-parameter estimates θ̂_s (obtained by maximum likelihood estimation [MLE]) as their true values θ_s; thus, when the deviation of the estimated θ̂_s from the true values is nonignorable, item calibration may become inaccurate. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂_s, which may adversely affect item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could significantly improve the ML ability estimates, and MLE-LBCI-Method A outperformed Method A in almost all experimental conditions.

  14. An Investigation on the Efficiency Correction Method of the Turbocharger at Low Speed

    Directory of Open Access Journals (Sweden)

    Jin Eun Chung

    2018-01-01

    Full Text Available Heat transfer in the turbocharger occurs due to the temperature differences between the exhaust gas and the intake air, coolant, and oil. This heat transfer distorts the measured efficiencies of the compressor and turbine, an effect known to worsen at low rotational speeds. This study therefore proposes a method to mitigate the distortion of test data caused by heat transfer in the turbocharger. In this method, a representative compressor temperature is defined, and the heat transfer rate of the compressor at low rotational speeds is calculated by considering the effects of the oil and turbine inlet temperatures, with the cold and hot gas tests performed simultaneously. The correction of compressor efficiency as a function of turbine inlet temperature was carried out through both hot and cold gas tests; the results showed a maximum error of 16% before correction and a maximum of 3% after correction. The results further show that the heat-transfer-induced efficiency distortion of the turbocharger can be corrected by computing the combined turbine efficiency from the corrected compressor efficiency.

  15. A Study of Method Development, Validation, and Forced Degradation for Simultaneous Quantification of Paracetamol and Ibuprofen in Pharmaceutical Dosage Form by RP-HPLC Method

    OpenAIRE

    Jahan, Md. Sarowar; Islam, Md. Jahirul; Begum, Rehana; Kayesh, Ruhul; Rahman, Asma

    2014-01-01

    A rapid and stability-indicating reversed phase high-performance liquid chromatography (RP-HPLC) method was developed for simultaneous quantification of paracetamol and ibuprofen in their combined dosage form especially to get some more advantages over other methods already developed for this combination. The method was validated according to United States Pharmacopeia (USP) guideline with respect to accuracy, precision, specificity, linearity, solution stability, robustness, sensitivity, and...

  16. Evaluation of bias-correction methods for ensemble streamflow volume forecasts

    Directory of Open Access Journals (Sweden)

    T. Hashino

    2007-01-01

    Full Text Available Ensemble prediction systems are used operationally to make probabilistic streamflow forecasts for seasonal time scales. However, hydrological models used for ensemble streamflow prediction often have simulation biases that degrade forecast quality and limit the operational usefulness of the forecasts. This study evaluates three bias-correction methods for ensemble streamflow volume forecasts. All three adjust the ensemble traces using a transformation derived with simulated and observed flows from a historical simulation. The quality of probabilistic forecasts issued when using the three bias-correction methods is evaluated using a distributions-oriented verification approach. Comparisons are made of retrospective forecasts of monthly flow volumes for a north-central United States basin (Des Moines River, Iowa), issued sequentially for each month over a 48-year record. The results show that all three bias-correction methods significantly improve forecast quality by eliminating unconditional biases and enhancing the potential skill. Still, subtle differences in the attributes of the bias-corrected forecasts have important implications for their use in operational decision-making. Diagnostic verification distinguishes these attributes in a context meaningful for decision-making, providing criteria to choose among bias-correction methods with comparable skill.
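
    Empirical quantile mapping is one common instance of such a transformation derived from paired simulated and observed historical flows; the sketch below is illustrative only and does not reproduce the paper's three specific methods.

        import numpy as np

        def quantile_map(sim_hist, obs_hist, sim_new):
            """Map new simulated flows onto the observed distribution via
            quantiles estimated from a historical simulation period."""
            probs = np.linspace(0.01, 0.99, 99)
            sim_q = np.quantile(sim_hist, probs)   # simulated quantiles
            obs_q = np.quantile(obs_hist, probs)   # observed quantiles
            return np.interp(sim_new, sim_q, obs_q)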

  17. A multilevel correction adaptive finite element method for Kohn-Sham equation

    Science.gov (United States)

    Hu, Guanghui; Xie, Hehu; Xu, Fei

    2018-02-01

    In this paper, an adaptive finite element method is proposed for solving the Kohn-Sham equation with the multilevel correction technique. In the method, the Kohn-Sham equation is solved on a fixed and appropriately coarse mesh with the finite element method, in which the finite element space is kept improving by solving the derived boundary value problems on a series of adaptively and successively refined meshes. A main feature of the method is that solving the large-scale Kohn-Sham system is effectively avoided, and the derived boundary value problems can be handled efficiently by classical methods such as the multigrid method. Hence, significant acceleration can be obtained in solving the Kohn-Sham equation with the proposed multilevel correction technique. The performance of the method is examined by a variety of numerical experiments.

  18. Analysis of slippery droplet on tilted plate by development of optical correction method

    Science.gov (United States)

    Ko, Han Seo; Gim, Yeonghyeon; Choi, Sung Ho; Jang, Dong Kyu; Sohn, Dong Kee

    2017-11-01

    Because of distortion effects at the surface of a sessile droplet, the inner flow field of the droplet is measured by PIV (particle image velocimetry) with low reliability. To solve this problem, many researchers have studied and developed optical correction methods. However, most such methods consider only axisymmetric droplets and cannot be applied to cases such as a tilted droplet or other asymmetrically shaped droplets. For the optical correction of an asymmetrically shaped droplet, the surface function was calculated by three-dimensional reconstruction using the ellipse curve-fitting method, and the optical correction based on this surface function was verified by numerical simulation. The developed method was then applied to reconstruct the inner flow field of a droplet on a tilted plate. A colloidal water droplet on the tilted surface was used, and the distortion effect at the droplet surface was calculated. Using these results and the PIV method, the corrected flow field for the inner and interface regions of the droplet was reconstructed. Consequently, the distortion-induced error in the velocity vectors near the apex of the droplet was removed. National Research Foundation (NRF) of Korea, (2016R1A2B4011087).

  19. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    Science.gov (United States)

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed analyte concentration to the observed urinary creatinine concentration (UCR). This ratio-based method is flawed since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, the literature shows that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should account for hydration as well as other factors, such as age, gender, and race/ethnicity, that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of the ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example, between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method, although, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example, male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group in the numerator of this ratio (for example, males), these ratios were higher for the model-based method; when estimated UCRs were lower for the group in the numerator (for example, NHW), these ratios were higher for the ratio-based method. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
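
    The contrast between the two corrections can be sketched with synthetic data: the ratio-based method divides by UCR, whereas the model-based method enters log(UCR) as a regressor alongside covariates such as age and sex. All variable names and numbers below are invented for illustration.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 200
        ucr = rng.lognormal(0.0, 0.4, n)        # synthetic urinary creatinine, g/L
        age = rng.uniform(20, 70, n)
        sex = rng.integers(0, 2, n)             # 0 = female, 1 = male
        analyte = np.exp(0.8 * np.log(ucr) - 0.004 * age + 0.1 * sex
                         + rng.normal(0, 0.3, n))

        ratio_corrected = analyte / ucr         # ratio-based correction

        # Model-based correction: UCR is adjusted for, not divided out
        X = sm.add_constant(np.column_stack([np.log(ucr), age, sex]))
        fit = sm.OLS(np.log(analyte), X).fit()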

  20. Development and validation of a bioanalytical LC-MS method for the quantification of GHRP-6 in human plasma.

    Science.gov (United States)

    Gil, Jeovanis; Cabrales, Ania; Reyes, Osvaldo; Morera, Vivian; Betancourt, Lázaro; Sánchez, Aniel; García, Gerardo; Moya, Galina; Padrón, Gabriel; Besada, Vladimir; González, Luis Javier

    2012-02-23

    Growth hormone-releasing peptide 6 (GHRP-6, His-(DTrp)-Ala-Trp-(DPhe)-Lys-NH₂, MW=872.44 Da) is a potent growth hormone secretagogue that exhibits a cytoprotective effect, maintaining tissue viability during acute ischemia/reperfusion episodes in different organs like small bowel, liver and kidneys. In the present work a quantitative method to analyze GHRP-6 in human plasma was developed and fully validated following FDA guidelines. The method uses an internal standard (IS) of GHRP-6 with ¹³C-labeled alanine for quantification. Sample processing includes a precipitation step with cold acetone to remove the most abundant plasma proteins, recovering the GHRP-6 peptide with a high yield. Quantification was achieved by LC-MS in positive full scan mode in a Q-Tof mass spectrometer. The sensitivity of the method was evaluated, establishing the lower limit of quantification at 5 ng/mL and a range for the calibration curve from 5 ng/mL to 50 ng/mL. A dilution integrity test was performed to analyze samples at higher concentrations of GHRP-6. The validation process involved five calibration curves and the analysis of quality control samples to determine accuracy and precision. The calibration curves showed R² higher than 0.988. The stability of the analyte and its IS was demonstrated under all conditions the samples would experience during routine analysis. This method was applied to the quantification of GHRP-6 in plasma from nine healthy volunteers participating in a phase I clinical trial. Copyright © 2011 Elsevier B.V. All rights reserved.
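
    A minimal sketch of the internal-standard calibration step is given below; the peak-area ratios are invented for illustration, and a real workflow would apply the validated accuracy/precision criteria described above.

        import numpy as np

        # Hypothetical calibration points: nominal GHRP-6 (ng/mL) vs. peak-area
        # ratio of the analyte to the 13C-labeled internal standard
        conc = np.array([5.0, 10.0, 20.0, 30.0, 40.0, 50.0])
        ratio = np.array([0.11, 0.21, 0.43, 0.62, 0.85, 1.04])

        slope, intercept = np.polyfit(conc, ratio, 1)
        r_squared = np.corrcoef(conc, ratio)[0, 1] ** 2   # e.g. require > 0.988

        def back_calculate(sample_ratio):
            """Concentration (ng/mL) from a measured analyte/IS area ratio."""
            return (sample_ratio - intercept) / slope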

  1. A validated and densitometric HPTLC method for the simultaneous quantification of reserpine and ajmalicine in Rauvolfia serpentina and Rauvolfia tetraphylla

    OpenAIRE

    Pandey, Devendra Kumar; Radha,; Dey, Abhijit

    2016-01-01

    ABSTRACT A high-performance thin-layer chromatographic (HPTLC) method has been developed for the quantification of reserpine and ajmalicine in the roots of two different populations of Rauvolfia serpentina (L.) Benth. ex Kurz and Rauvolfia tetraphylla L., Apocynaceae, collected from Punjab and Uttarakhand. HPTLC of the methanolic root extract containing the indole alkaloids reserpine and ajmalicine was performed on TLC Silica gel 60 F254 (10 cm × 10 cm) plates with toluene:ethyl acetate:formic...

  2. Method and apparatus for reconstructing in-cylinder pressure and correcting for signal decay

    Science.gov (United States)

    Huang, Jian

    2013-03-12

    A method comprises steps for reconstructing in-cylinder pressure data from a vibration signal collected from a vibration sensor mounted on an engine component where it can generate a signal with a high signal-to-noise ratio, and correcting the vibration signal for errors introduced by vibration signal charge decay and sensor sensitivity. The correction factors are determined as a function of estimated motoring pressure and the measured vibration signal itself with each of these being associated with the same engine cycle. Accordingly, the method corrects for charge decay and changes in sensor sensitivity responsive to different engine conditions to allow greater accuracy in the reconstructed in-cylinder pressure data. An apparatus is also disclosed for practicing the disclosed method, comprising a vibration sensor, a data acquisition unit for receiving the vibration signal, a computer processing unit for processing the acquired signal and a controller for controlling the engine operation based on the reconstructed in-cylinder pressure.

  3. Correction method of slit modulation transfer function on digital medical imaging system

    International Nuclear Information System (INIS)

    Kim, Jung Min; Jung, Hoi Woun; Min, Jung Whan; Im, Eon Kyung

    2006-01-01

    Using CR image pixel data, we examined how to calculate the MTF and the digital characteristic curve. Pixel data printed from digital X-ray equipment can be exported to a text file (Excel). We describe how to calculate and correct the sharpness of digital images via the MTF, following Fujita's method. An Excel program was used to perform the calculations from a slit radiograph. The digital characteristic curve, line spread function, discrete Fourier transform, and fast Fourier transform were computed in sequence. A major advantage of this method is that it is easy to understand and yields results without costly software or detailed knowledge of a programming language. Different correction methods yield different values, so an appropriate correction method must be chosen and many experiments performed to obtain precise MTF figures.
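
    The chain slit image → line spread function → Fourier transform → MTF can equally be written in a few lines of Python; a minimal sketch, assuming the LSF profile has already been extracted from the slit radiograph and leaving out the finer baseline and sampling corrections:

        import numpy as np

        def mtf_from_lsf(lsf, pixel_pitch_mm):
            """Presampled MTF as the normalised magnitude of the discrete
            Fourier transform of the line spread function."""
            lsf = np.asarray(lsf, dtype=float)
            lsf = lsf - lsf.min()        # crude tail/baseline correction
            lsf = lsf / lsf.sum()
            mtf = np.abs(np.fft.rfft(lsf))
            freq = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)  # cycles/mm
            return freq, mtf / mtf[0]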

  4. A third-generation dispersion and third-generation hydrogen bonding corrected PM6 method

    DEFF Research Database (Denmark)

    Kromann, Jimmy Charnley; Christensen, Anders Steen; Svendsen, Casper Steinmann

    2014-01-01

    We present new dispersion and hydrogen bond corrections to the PM6 method, PM6-D3H+, and its implementation in the GAMESS program. The method combines the DFT-D3 dispersion correction by Grimme et al. with a modified version of the H+ hydrogen bond correction by Korth. Overall, the interaction...... in GAMESS, while the corresponding numbers for PM6-DH+ implemented in MOPAC are 54, 17, 15, and 2. The PM6-D3H+ method as implemented in GAMESS offers an attractive alternative to PM6-DH+ in MOPAC in cases where the LBFGS optimizer must be used and a vibrational analysis is needed, e.g., when computing...... vibrational free energies. While the GAMESS implementation is up to 10 times slower for geometry optimizations of proteins in bulk solvent, compared to MOPAC, it is sufficiently fast to make geometry optimizations of small proteins practically feasible....

  5. Correction to the count-rate detection limit and sample/blank time-allocation methods

    International Nuclear Information System (INIS)

    Alvarez, Joseph L.

    2013-01-01

    A common form of count-rate detection limits contains a propagation of uncertainty error. This error originated in methods to minimize uncertainty in the subtraction of the blank counts from the gross sample counts by allocation of blank and sample counting times. Correct uncertainty propagation showed that the time allocation equations have no solution. This publication presents the correct form of count-rate detection limits. -- Highlights: •The paper demonstrated a proper method of propagating uncertainty of count rate differences. •The standard count-rate detection limits were in error. •Count-time allocation methods for minimum uncertainty were in error. •The paper presented the correct form of the count-rate detection limit. •The paper discussed the confusion between count-rate uncertainty and count uncertainty

  6. Standardized Method for Quantification of Developing Lymphedema in Patients Treated for Breast Cancer

    International Nuclear Information System (INIS)

    Ancukiewicz, Marek; Russell, Tara A.; Otoole, Jean; Specht, Michelle; Singer, Marybeth; Kelada, Alexandra; Murphy, Colleen D.; Pogachar, Jessica; Gioioso, Valeria; Patel, Megha; Skolny, Melissa; Smith, Barbara L.; Taghian, Alphonse G.

    2011-01-01

    Purpose: To develop a simple and practical formula for quantifying breast cancer-related lymphedema, accounting for both the asymmetry of upper extremities' volumes and their temporal changes. Methods and Materials: We analyzed bilateral perometer measurements of the upper extremity in a series of 677 women who prospectively underwent lymphedema screening during treatment for unilateral breast cancer at Massachusetts General Hospital between August 2005 and November 2008. Four sources of variation were analyzed: between repeated measurements on the same arm at the same session; between both arms at baseline (preoperative) visit; in follow-up measurements; and between patients. Effects of hand dominance, time since diagnosis and surgery, age, weight, and body mass index were also analyzed. Results: The statistical distribution of variation of measurements suggests that the ratio of volume ratios is most appropriate for quantification of both asymmetry and temporal changes. Therefore, we present the formula for relative volume change (RVC): RVC = (A2·U1)/(U2·A1) − 1, where A1 and A2 are arm volumes on the side of the treated breast at two different time points, and U1 and U2 are the corresponding volumes on the contralateral side. Relative volume change is not significantly associated with hand dominance, age, or time since diagnosis. Baseline weight correlates (p = 0.0074) with higher RVC; however, baseline body mass index or weight changes over time do not. Conclusions: We propose the use of the RVC formula to assess the presence and course of breast cancer-related lymphedema in clinical practice and research.
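
    The RVC formula is simple enough to state as a one-line function; the example volumes below are hypothetical.

        def relative_volume_change(a1, u1, a2, u2):
            """RVC = (A2*U1)/(U2*A1) - 1 for treated-side volumes A and
            contralateral volumes U at two time points (consistent units)."""
            return (a2 * u1) / (u2 * a1) - 1.0

        # Treated arm 2100 -> 2350 mL while the contralateral arm stays
        # 2050 -> 2060 mL gives RVC of about 0.11, i.e. an ~11% relative increase.
        print(relative_volume_change(2100, 2050, 2350, 2060))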

  7. A novel method for identification and quantification of consistently differentially methylated regions.

    Directory of Open Access Journals (Sweden)

    Ching-Lin Hsiao

    Full Text Available Advances in biotechnology have resulted in large-scale studies of DNA methylation. A differentially methylated region (DMR) is a genomic region with multiple adjacent CpG sites that exhibit different methylation statuses among multiple samples. Many so-called "supervised" methods have been established to identify DMRs between two or more comparison groups. Methods for the identification of DMRs without reference to phenotypic information are, however, less well studied. An alternative "unsupervised" approach was proposed, in which DMRs in the studied samples were identified with consideration of the natural dependence structure of methylation measurements between neighboring probes on tiling arrays. Through a simulation study, we investigated the effects of dependencies between neighboring probes on determining DMRs, where many spurious signals would be produced if the methylation data were analyzed as though probes were independent. In contrast, our newly proposed method could successfully correct for this effect with a well-controlled false positive rate and comparable sensitivity. By applying it to two real datasets, we demonstrated that our method could provide a global picture of methylation variation in the studied samples. R source code to implement the proposed method is freely available at http://www.csjfann.ibms.sinica.edu.tw/eag/programlist/ICDMR/ICDMR.html.

  8. Furan quantification in bread crust: development of a simple and sensitive method using headspace-trap GC-MS.

    Science.gov (United States)

    Huault, Lucie; Descharles, Nicolas; Rega, Barbara; Bistac, Sophie; Bosc, Véronique; Giampaoli, Pierre

    2016-01-01

    To study reactivity in bread crust during the baking process in the pan, we followed furan, mainly resulting from Maillard and caramelisation reactions in cereal products. Furan quantification is commonly performed with automatic HS-static GC-MS. However, we showed that the automatic HS-trap GC-MS method can improve the sensitivity of the furan quantification. Indeed, this method allowed the LOD to be decreased from 0.3 ng g⁻¹ with HS-static mode to 0.03 ng g⁻¹ with HS-trap mode under these conditions. After validation of this method for furan quantification in bread crust, a difference between the crust extracted from the bottom and from the sides of the bread was evident. The quantity of furan in the bottom crust was five times lower than in the side crust, revealing less reactivity on the bottom than on the sides of the bread during the baking process in the pan. Differences in water content may explain these variations in reactivity.

  9. Quantification of protein concentration by the Bradford method in the presence of pharmaceutical polymers.

    Science.gov (United States)

    Carlsson, Nils; Borde, Annika; Wölfel, Sebastian; Kerman, Björn; Larsson, Anette

    2011-04-01

    We investigated how the Bradford assay for measurements of protein released from a drug formulation may be affected by concomitant release of a pharmaceutical polymer used to formulate the protein delivery device. The main result is that polymer-caused perturbations of the Coomassie dye absorbance at the Bradford monitoring wavelength (595 nm) can be identified and corrected by recording absorption spectra in the region of 350-850 nm. The pharmaceutical polymers Carbopol and chitosan illustrate two potential types of perturbations in the Bradford assay, whereas the third polymer, hydroxypropylmethylcellulose (HPMC), acts as a nonperturbing control. Carbopol increases the apparent absorbance at 595 nm because the polymer aggregates at the low pH of the Bradford protocol, causing a turbidity contribution that can be corrected quantitatively at 595 nm by measuring the sample absorbance at 850 nm, outside the dye absorption band. Chitosan is a cationic polymer under Bradford conditions; it interacts directly with the anionic Coomassie dye and perturbs its absorption spectrum, including at 595 nm. In this case, the Bradford method remains useful if the polymer concentration is known, but it should be used with caution in release studies where the polymer concentration may vary and needs to be measured independently. Copyright © 2010 Elsevier Inc. All rights reserved.
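
    Under the assumption that the Carbopol turbidity adds a roughly flat baseline across the dye band, the correction reduces to a subtraction; this is a minimal sketch, and the paper's quantitative treatment of the 350-850 nm spectra may be more elaborate.

        def corrected_a595(a595, a850):
            """Bradford absorbance corrected for turbidity: the 850 nm reading,
            outside the Coomassie absorption band, estimates the scattering
            contribution and is subtracted from the 595 nm reading."""
            return a595 - a850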

  10. Characteristic of methods for prevention and correction of moral of alienation of students

    Directory of Open Access Journals (Sweden)

    Z. K. Malieva

    2014-01-01

    Full Text Available Moral alienation is a complex integrative phenomenon characterized by an individual's rejection of the universal spiritual and moral values of society. The best remaining opportunity for a purposeful, competent solution to the problem of an individual's moral alienation lies in professional education. The subject of this article is the identification of methods for the prevention and correction of moral alienation of students that can be used by teachers both in extracurricular activities and in classes in humanitarian disciplines. The purpose of the work is to study the methods and techniques that enhance the effectiveness of the prevention and correction of moral alienation of students, and to identify their characteristics and application in teachers' educational activities. The paper defines methods to prevent and correct the moral alienation of students as a system of interrelated actions of the educator and students aimed at: redefining negative values, rules, and norms of behavior; and overcoming negative mental states, negative attitudes, interests, and aptitudes of the students. The article distinguishes and characterizes the most effective methods for the prevention and correction of moral alienation of students: persuasion; the method of "Socrates"; understanding; semiotic analysis; suggestion; and the method of "explosion." It also presents the rules and necessary conditions for applying these methods in the educational process. It is ascertained that the choice of effective preventive and corrective methods and techniques is determined by the content of the intrapersonal, psychological sources of moral alienation, associated with the following: negative attitudes due to previous experience; orientation to negative values; inadequate self-esteem, with a negative impact on the development and functioning of the individual's psyche and behavior; and mental states. The conclusions of the

  11. Orbit Determination from Tracking Data of Artificial Satellite Using the Method of Differential Correction

    Directory of Open Access Journals (Sweden)

    Byoung-Sun Lee

    1988-06-01

    Full Text Available The differential correction process of determining osculating orbital elements, as accurate as possible at a given instant of time, from tracking data of an artificial satellite was accomplished. Preliminary orbital elements were used as the initial value of the differential correction procedure and iterated until the residual between the real observations (O) and the computed observations (C) was minimized. The tracked satellite was NOAA-9 of the TIROS-N series. Two types of tracking data were used: prediction data precomputed from the mean orbital elements of TBUS, and real data obtained by tracking the 1.707 GHz HRPT signal of NOAA-9 using a 5 m auto-track antenna at the Radio Research Laboratory. According to the tracking data, either the Gauss method or the Herrick-Gibbs method was applied to preliminary orbit determination. In the differential correction stage we used both Escobal's (1975) analytical method and numerical ones, which are nearly consistent. The differentially corrected orbit converged to the same value in spite of the differences between the preliminary orbits of each time span.

  12. Bias-correction of CORDEX-MENA projections using the Distribution Based Scaling method

    Science.gov (United States)

    Bosshard, Thomas; Yang, Wei; Sjökvist, Elin; Arheimer, Berit; Graham, L. Phil

    2014-05-01

    Within the Regional Initiative for the Assessment of the Impact of Climate Change on Water Resources and Socio-Economic Vulnerability in the Arab Region (RICCAR), led by UN ESCWA, CORDEX RCM projections for the Middle East North Africa (MENA) domain are used to drive hydrological impact models. Bias-correction of the newly available CORDEX-MENA projections is a central part of this project. In this study, the distribution based scaling (DBS) method has been applied to 6 regional climate model projections driven by 2 RCP emission scenarios. The DBS method uses a quantile mapping approach and features a conditional temperature correction dependent on the wet/dry state in the climate model data. The CORDEX-MENA domain is particularly challenging for bias-correction as it spans very diverse climates showing pronounced dry and wet seasons. Results show that the regional climate models simulate temperatures that are too low and often have a displaced rainfall band compared to WATCH ERA-Interim forcing data in the reference period 1979-2008. DBS is able to correct the temperature biases as well as some aspects of the precipitation biases. Special focus is given to the analysis of the influence of the dry-frequency bias (i.e. climate models simulating too few rain days) on the bias-corrected projections and on the modification of the climate change signal by the DBS method.

  13. Validation of a method for accurate and highly reproducible quantification of brain dopamine transporter SPECT studies

    DEFF Research Database (Denmark)

    Jensen, Peter S; Ziebell, Morten; Skouboe, Glenna

    2011-01-01

    In nuclear medicine brain imaging, it is important to delineate regions of interest (ROIs) so that the outcome is both accurate and reproducible. The purpose of this study was to validate a new time-saving algorithm (DATquan) for accurate and reproducible quantification of the striatal dopamine transporter (DAT) with appropriate radioligands and SPECT and without the need for structural brain scanning....

  14. A new method to make gamma-ray self-absorption correction

    International Nuclear Information System (INIS)

    Tian Dongfeng; Xie Dong; Ho Yukun; Yang Fujia

    2001-01-01

    This paper discusses a new method to directly extract the information for the geometric self-absorption correction through measurement of the characteristic γ radiation emitted spontaneously from nuclear fissile material. Numerical simulation tests show that this method can extract the original information needed for nondestructive assay from the measured γ-ray spectra, even when the geometric shape of the sample and the materials between sample and detector are not known in advance. (author)

  15. A simplified method for rapid quantification of intracellular nucleoside triphosphates by one-dimensional thin-layer chromatography

    DEFF Research Database (Denmark)

    Jendresen, Christian Bille; Kilstrup, Mogens; Martinussen, Jan

    2011-01-01

    -pyrophosphate (PRPP), and inorganic pyrophosphate (PPi) in cell extracts. The method uses one-dimensional thin-layer chromatography (TLC) and radiolabeled biological samples. Nucleotides are resolved at the level of ionic charge in an optimized acidic ammonium formate and chloride solvent, permitting...... quantification of NTPs. The method is significantly simpler and faster than both current two-dimensional methods and high-performance liquid chromatography (HPLC)-based procedures, allowing a higher throughput while common sources of inaccuracies and technical problems are avoided. For determination of PPi...

  16. Corrected entropy of Friedmann-Robertson-Walker universe in tunneling method

    International Nuclear Information System (INIS)

    Zhu, Tao; Ren, Ji-Rong; Li, Ming-Fan

    2009-01-01

    In this paper, we study the thermodynamic quantities of the Friedmann-Robertson-Walker (FRW) universe by using the tunneling formalism beyond the semiclassical approximation developed by Banerjee and Majhi [25]. For this we first calculate the corrected Hawking-like temperature on the apparent horizon by considering both scalar particle and fermion tunneling. With this corrected Hawking-like temperature, the explicit expressions of the corrected entropy of the apparent horizon for various gravity theories, including Einstein gravity, Gauss-Bonnet gravity, Lovelock gravity, f(R) gravity and scalar-tensor gravity, are computed. Our results show that the corrected entropy formulas for the different gravity theories can be written as a general expression (4.39) of the same form. It is also shown that this expression is valid for black holes. This might imply that the expression for the corrected entropy derived from the tunneling method is independent of the gravity theory, the spacetime, and the dimension of the spacetime. Moreover, it is concluded that the basic thermodynamic property that the corrected entropy on the apparent horizon is a state function is satisfied by the FRW universe.

  17. Quantification of Hydroxyl Radical reactivity in the urban environment using the Comparative Reactivity Method (CRM)

    Science.gov (United States)

    Panchal, Rikesh; Monks, Paul

    2015-04-01

    Hydroxyl (OH) radicals play an important role in 'cleansing' the atmosphere of many pollutants, such as NOx, CH4 and various VOCs, through oxidation. To measure the reactivity of OH, both the sinks and sources of OH need to be quantified, and currently the overall sinks of OH seem not to be fully constrained. In order to measure the total loss rate of OH in an ambient air sample, all OH-reactive species must be considered and their concentrations and reaction rate coefficients with OH known. Using the method pioneered by Sinha and Williams at the Max Planck Institute, Mainz, the Comparative Reactivity Method (CRM), which directly quantifies total OH reactivity in ambient air without the need to consider the concentrations of individual reactive species in the sample, has been developed and applied in an urban setting. The CRM measures the concentration of a reactive species present only at low concentrations in ambient air, in this case pyrrole, flowing through a reaction vessel and detected using Proton Transfer Reaction - Mass Spectrometry (PTR-MS). The poster will show a newly developed and tested PTR-TOF-MS system for CRM. The correction regime will be detailed to account for the influence of the varying humidity between ambient air and clean air on the pyrrole signal. Further, it will be shown how examination of the sensitivity of the PTR-MS as a function of relative humidity and of the H3O+(H2O) (m/z = 37) cluster ion allows correction for the humidity difference between the clean humid air entering the reaction vessel and ambient air. NO present in ambient air is also a potential interference and can cause recycling of OH, resulting in an overestimation of OH reactivity. Tests have been conducted on the effects of varying NO concentrations on OH reactivity, and a correction factor determined for application to data from ambient air sampling. Finally, field tests in the urban environment at the University of Leicester will be shown

  18. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers

    OpenAIRE

    Dobie, Robert A; Wojcik, Nancy C

    2015-01-01

    Objectives The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved....

  19. A Novel Flood Forecasting Method Based on Initial State Variable Correction

    Directory of Open Access Journals (Sweden)

    Kuang Li

    2017-12-01

    Full Text Available The influence of initial state variables on flood forecasting accuracy using conceptual hydrological models is analyzed in this paper, and a novel flood forecasting method based on correction of initial state variables is proposed. The new method is abbreviated as ISVC (Initial State Variable Correction). The ISVC takes the residual between the measured and forecasted flows during the initial period of the flood event as the objective function, and it uses a particle swarm optimization algorithm to correct the initial state variables, which are then used to drive the flood forecasting model. The historical flood events of 11 watersheds in south China are forecasted and verified, and important issues concerning the ISVC application are then discussed. The study results show that the ISVC is effective and applicable in flood forecasting tasks. It can significantly improve the flood forecasting accuracy in most cases.
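
    A toy version of the ISVC idea is sketched below, with a single linear reservoir standing in for the conceptual model and SciPy's differential evolution standing in for the particle swarm optimizer; the model, forcing and bounds are all invented for illustration.

        import numpy as np
        from scipy.optimize import differential_evolution

        def run_model(init_storage, rain, k=0.3):
            """Toy single linear reservoir: a placeholder conceptual model."""
            s, q = float(init_storage[0]), []
            for p in rain:
                s += p
                out = k * s
                s -= out
                q.append(out)
            return np.array(q)

        rng = np.random.default_rng(1)
        rain = rng.gamma(2.0, 3.0, 24)                        # synthetic forcing
        q_obs = run_model([55.0], rain) + rng.normal(0, 0.5, 24)

        def objective(init_storage):
            """ISVC objective: residual over the initial period (8 steps)."""
            return np.sum((q_obs[:8] - run_model(init_storage, rain)[:8]) ** 2)

        best = differential_evolution(objective, bounds=[(0.0, 200.0)])
        print(best.x)   # corrected initial storage, close to the true 55.0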

  20. Method for the determination of spectroradiometric corrections of data from multichannel aerospatial spectrometers

    International Nuclear Information System (INIS)

    Bakalova, K.P.; Bakalov, D.D.

    1984-01-01

    Various factors in the aerospatial conditions of operation may lead to changes in the transmission characteristics of the electron-optical environment of spectrometers for remote sensing of the Earth. Consequently, the data obtained need spectroradiometric corrections. In this paper, a unified approach to the determination of these corrections is suggested. The method uses measurements of standard sources with a smooth emission spectrum much wider than the width of the channels, such as an incandescent-filament lamp, the Sun, and other natural objects, without special spectral reference standards. Additional information about the character of the changes occurring in the measurements may considerably simplify the determination of the corrections through setting appropriate values of a coefficient and the spectral shift. The method has been used with the Spectrum-15 and SMP-32 spectrometers on the Salyut-7 orbital station and the 'Meteor-Priroda' satellite of the Bulgaria-1300-II project

  1. Precise method for correcting count-rate losses in scintillation cameras

    International Nuclear Information System (INIS)

    Madsen, M.T.; Nickles, R.J.

    1986-01-01

    Quantitative studies performed with scintillation detectors often require corrections for lost data because of the finite resolving time of the detector. Methods that monitor losses by means of a reference source or pulser have unacceptably large statistical fluctuations associated with their correction factors. Analytic methods that model the detector as a paralyzable system require an accurate estimate of the system resolving time. Because the apparent resolving time depends on many variables, including the window setting, source distribution, and the amount of scattering material, significant errors can be introduced by relying on a resolving time obtained from phantom measurements. These problems can be overcome by curve-fitting the data from a reference source to a paralyzable model in which the true total count rate in the selected window is estimated from the observed total rate. The resolving time becomes a free parameter in this method which is optimized to provide the best fit to the observed reference data. The fitted curve has the inherent accuracy of the reference source method with the precision associated with the observed total image count rate. Correction factors can be simply calculated from the ratio of the true reference source rate and the fitted curve. As a result, the statistical uncertainty of the data corrected by this method is not significantly increased
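
    A sketch of the fitting step with invented reference-source data: the observed rate of a paralyzable detector is modelled as m = n·exp(-n·τ), the resolving time τ is left free in the fit, and the correction factor is the ratio of the true rate to the fitted curve.

        import numpy as np
        from scipy.optimize import curve_fit

        def observed_rate(n_true, tau):
            """Paralyzable dead-time model: m = n * exp(-n * tau)."""
            return n_true * np.exp(-n_true * tau)

        # Hypothetical reference data: known true rates vs. observed rates (cps)
        n_true = np.array([1e4, 5e4, 1e5, 2e5, 3e5])
        m_obs = np.array([9.9e3, 4.8e4, 9.3e4, 1.7e5, 2.4e5])

        (tau,), _ = curve_fit(observed_rate, n_true, m_obs, p0=[1e-6])
        correction = n_true / observed_rate(n_true, tau)  # multiplies observed data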

  2. Comparison of fluorescence rejection methods of baseline correction and shifted excitation Raman difference spectroscopy

    Science.gov (United States)

    Cai, Zhijian; Zou, Wenlong; Wu, Jianhong

    2017-10-01

    Raman spectroscopy has been extensively used in biochemical tests, explosive detection, and the analysis of food additives and environmental pollutants. However, fluorescence interference poses a serious problem for the application of portable Raman spectrometers. Currently, baseline correction and shifted-excitation Raman difference spectroscopy (SERDS) are the most prevalent fluorescence-suppression methods. In this paper, we compared the performance of the baseline correction and SERDS methods, both experimentally and by simulation. The comparison demonstrates that baseline correction can produce an acceptable fluorescence-removed Raman spectrum if the original Raman signal has a good signal-to-noise ratio, but it cannot recover small Raman signals from a large noise background. With the SERDS method, Raman signals, even those very weak compared with the fluorescence intensity and noise level, can be clearly extracted, and the fluorescence background can be completely rejected; the Raman spectrum recovered by SERDS has a good signal-to-noise ratio. This shows that baseline correction is more suitable for large bench-top Raman systems with better signal quality or signal-to-noise ratio, while SERDS is more suitable for noisy devices, especially portable Raman spectrometers.

  3. Simultaneous Assessment of Cardiomyocyte DNA Synthesis and Ploidy: A Method to Assist Quantification of Cardiomyocyte Regeneration and Turnover.

    Science.gov (United States)

    Richardson, Gavin D

    2016-05-23

    Although it is accepted that the heart has a limited potential to regenerate cardiomyocytes following injury and that low levels of cardiomyocyte turnover occur during normal ageing, quantification of these events remains challenging. This is in part due to the rarity of the process and the fact that multiple cellular sources contribute to myocardial maintenance. Furthermore, DNA duplication within cardiomyocytes often leads to a polyploid cardiomyocyte and only rarely leads to new cardiomyocytes by cellular division. In order to accurately quantify cardiomyocyte turnover, discrimination between these processes is essential. The protocol described here employs long-term nucleoside labeling in order to label all nuclei which have arisen as a result of DNA replication, with cardiomyocyte nuclei identified by nuclei isolation and subsequent PCM1 immunolabeling. Together this allows the accurate and sensitive identification of nucleoside labeling of the cardiomyocyte nuclei population. Furthermore, 4',6-diamidino-2-phenylindole (DAPI) labeling and analysis of nuclear ploidy enable the discrimination of neo-cardiomyocyte nuclei from nuclei which have incorporated nucleoside during polyploidization. Although this method cannot control for cardiomyocyte binucleation, it allows a rapid and robust quantification of neo-cardiomyocyte nuclei while accounting for polyploidization. This method has a number of downstream applications, including assessing potential therapeutics to enhance cardiomyocyte regeneration or investigating the effects of cardiac disease on cardiomyocyte turnover and ploidy. This technique is also compatible with additional downstream immunohistological techniques, allowing quantification of nucleoside incorporation in all cardiac cell types.

  4. Itô-SDE MCMC method for Bayesian characterization of errors associated with data limitations in stochastic expansion methods for uncertainty quantification

    Science.gov (United States)

    Arnst, M.; Abello Álvarez, B.; Ponthot, J.-P.; Boman, R.

    2017-11-01

    This paper is concerned with the characterization and the propagation of errors associated with data limitations in polynomial-chaos-based stochastic methods for uncertainty quantification. Such an issue can arise in uncertainty quantification when only a limited amount of data is available. When the available information does not suffice to accurately determine the probability distributions that must be assigned to the uncertain variables, the Bayesian method for assigning these probability distributions becomes attractive because it allows the stochastic model to account explicitly for insufficiency of the available information. In previous work, such applications of the Bayesian method had already been implemented by using the Metropolis-Hastings and Gibbs Markov Chain Monte Carlo (MCMC) methods. In this paper, we present an alternative implementation, which uses an alternative MCMC method built around an Itô stochastic differential equation (SDE) that is ergodic for the Bayesian posterior. We draw together from the mathematics literature a number of formal properties of this Itô SDE that lend support to its use in the implementation of the Bayesian method, and we describe its discretization, including the choice of the free parameters, by using the implicit Euler method. We demonstrate the proposed methodology on a problem of uncertainty quantification in a complex nonlinear engineering application relevant to metal forming.
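
    The idea of sampling a posterior with an SDE that is ergodic for it can be sketched with the explicit-Euler Langevin scheme below; note that the paper develops an implicit Euler discretisation, which is not reproduced here.

        import numpy as np

        def langevin_mcmc(grad_log_post, x0, step=1e-3, n_steps=10_000, seed=0):
            """Explicit Euler discretisation of dX = (1/2) grad log pi(X) dt + dW,
            whose stationary distribution is the posterior pi."""
            rng = np.random.default_rng(seed)
            x = np.array(x0, dtype=float)
            chain = np.empty((n_steps, x.size))
            for i in range(n_steps):
                x = x + 0.5 * step * grad_log_post(x) \
                      + np.sqrt(step) * rng.standard_normal(x.size)
                chain[i] = x
            return chain

        # Example: standard normal posterior, grad log pi(x) = -x
        samples = langevin_mcmc(lambda x: -x, x0=[3.0])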

  5. Development and evaluation of attenuation and scatter correction techniques for SPECT using the Monte Carlo method

    International Nuclear Information System (INIS)

    Ljungberg, M.

    1990-05-01

    Quantitative scintigraphic images, obtained by NaI(Tl) scintillation cameras, are limited by photon attenuation and the contribution from scattered photons. A Monte Carlo program was developed in order to evaluate these effects. Simple source-phantom geometries and more complex nonhomogeneous cases can be simulated. Comparisons with experimental data for both homogeneous and nonhomogeneous regions and with published results have shown good agreement. The usefulness for simulation of parameters in scintillation camera systems, stationary as well as SPECT systems, has also been demonstrated. An attenuation correction method based on density maps and build-up functions has been developed. The maps were obtained from a transmission measurement using an external 57 Co flood source, and the build-up was simulated by the Monte Carlo code. Two scatter correction methods, the dual-window method and the convolution-subtraction method, have been compared using the Monte Carlo method. The aim was to compare the estimated scatter with the true scatter in the photo-peak window. It was concluded that accurate depth-dependent scatter functions are essential for a proper scatter correction. A new scatter and attenuation correction method has been developed based on scatter line-spread functions (SLSF) obtained for different depths and lateral positions in the phantom. An emission image is used to determine the source location in order to estimate the scatter in the photo-peak window. Simulation studies of a clinically realistic source in different positions in cylindrical water phantoms were made for three photon energies. The SLSF correction method was also evaluated by simulation studies for (1) a myocardial source, (2) a uniform source in the lungs, and (3) a tumour located in the lungs in a realistic, nonhomogeneous computer phantom. The results showed that quantitative images could be obtained in nonhomogeneous regions. (67 refs.)
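
    Of the two compared scatter corrections, the dual-window method has the simplest form: scatter in the photopeak window is estimated as a fixed multiple k of the counts in a lower energy window. A minimal sketch, with k treated as a constant to be calibrated (0.5 is the classic choice):

        import numpy as np

        def dual_window_correct(photopeak, scatter_window, k=0.5):
            """Subtract k times the lower-window counts from the photopeak
            window, clipping negative results to zero."""
            corrected = (np.asarray(photopeak, dtype=float)
                         - k * np.asarray(scatter_window, dtype=float))
            return np.clip(corrected, 0.0, None)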

  6. Comparatively Studied Color Correction Methods for Color Calibration of Automated Microscopy Complex of Biomedical Specimens

    Directory of Open Access Journals (Sweden)

    T. A. Kravtsova

    2016-01-01

    Full Text Available The paper considers the task of generating the requirements for, and creating, a calibration target for automated microscopy systems (AMS) of biomedical specimens, to provide the invariance of algorithms and software to the hardware configuration. The required number of color fields of the calibration target and their color coordinates are mostly determined by the color correction method, for which the coefficients of the equations are estimated during the calibration process. The paper analyses existing color calibration techniques for digital imaging systems using an optical microscope and shows that there is a lack of published comparative studies demonstrating a particularly useful color correction method for microscopic images. A comparative study of ten image color correction methods in RGB space, using polynomials and combinations of color coordinates of different orders, was carried out. The method of conditioned least squares was applied to estimate the coefficients in the color correction equations, using captured images of 217 color fields of the calibration target Kodak Q60-E3. The regularization parameter in this method was chosen experimentally. It was demonstrated that the best color correction quality characteristics are provided by the method that uses a combination of color coordinates of the 3rd order. The influence of the number and the set of color fields included in the calibration target on color correction quality for microscopic images was also studied. Six training sets containing 30, 35, 40, 50, 60 and 80 color fields, and a test set of 47 color fields not included in any of the training sets, were formed. It was found that the training set of 60 color fields minimizes the color correction error values for both operating modes of the digital camera: using "default" color settings and with automatic white balance. At the same time it was established that the use of color fields from the widely used Kodak Q60-E3 target does not
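
    A first-order version of the regularised least squares fit can be sketched as follows; the study's best-performing variant uses 3rd-order combinations of colour coordinates, and the regularisation parameter here is arbitrary.

        import numpy as np

        def fit_color_matrix(measured, reference, lam=1e-3):
            """Ridge-regularised least squares fit of an affine transform
            mapping measured RGB rows onto reference RGB rows."""
            X = np.hstack([measured, np.ones((measured.shape[0], 1))])
            A = X.T @ X + lam * np.eye(X.shape[1])
            return np.linalg.solve(A, X.T @ reference)   # (4, 3) matrix

        def apply_color_matrix(rgb, M):
            X = np.hstack([rgb, np.ones((rgb.shape[0], 1))])
            return X @ M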

  7. Assessment of Atmospheric Correction Methods for Sentinel-2 MSI Images Applied to Amazon Floodplain Lakes

    Directory of Open Access Journals (Sweden)

    Vitor Souza Martins

    2017-03-01

    Full Text Available Satellite data provide the only viable means for extensive monitoring of remote and large freshwater systems, such as the Amazon floodplain lakes. However, an accurate atmospheric correction is required to retrieve water constituents based on surface water reflectance (R_W). In this paper, we assessed three atmospheric correction methods (Second Simulation of a Satellite Signal in the Solar Spectrum (6SV), ACOLITE and Sen2Cor) applied to an image acquired by the MultiSpectral Instrument (MSI) on board the European Space Agency's Sentinel-2A platform, using concurrent in-situ measurements over four Amazon floodplain lakes in Brazil. In addition, we evaluated the correction of forest adjacency effects based on the linear spectral unmixing model, and performed a temporal evaluation of atmospheric constituents from Multi-Angle Implementation of Atmospheric Correction (MAIAC) products. The validation of MAIAC aerosol optical depth (AOD) indicated satisfactory retrievals over the Amazon region, with a correlation coefficient (R) of ~0.7 and 0.85 for Terra and Aqua products, respectively. The seasonal distribution of the cloud cover and AOD revealed a contrast between the first and second half of the year in the study area. Furthermore, simulation of top-of-atmosphere (TOA) reflectance showed a critical contribution of atmospheric effects (>50%) to all spectral bands, especially the deep blue (92%–96%) and blue (84%–92%) bands. The atmospheric correction results for the visible bands illustrate the limitation of the methods over dark lakes (R_W < 1%), and a better match of the R_W shape compared with in-situ measurements over turbid lakes, although the accuracy varied depending on the spectral bands and methods. Particularly above 705 nm, R_W was highly affected by Amazon forest adjacency, and the proposed adjacency effect correction minimized the spectral distortions in R_W (RMSE < 0.006). Finally, an extensive validation of the methods is required for

  8. Asteroseismic modelling of solar-type stars: internal systematics from input physics and surface correction methods

    Science.gov (United States)

    Nsamba, B.; Campante, T. L.; Monteiro, M. J. P. F. G.; Cunha, M. S.; Rendle, B. M.; Reese, D. R.; Verma, K.

    2018-04-01

    Asteroseismic forward modelling techniques are being used to determine fundamental properties (e.g. mass, radius, and age) of solar-type stars. The need to take into account all possible sources of error is of paramount importance towards a robust determination of stellar properties. We present a study of 34 solar-type stars for which high signal-to-noise asteroseismic data is available from multi-year Kepler photometry. We explore the internal systematics on the stellar properties, that is, associated with the uncertainty in the input physics used to construct the stellar models. In particular, we explore the systematics arising from: (i) the inclusion of the diffusion of helium and heavy elements; and (ii) the uncertainty in solar metallicity mixture. We also assess the systematics arising from (iii) different surface correction methods used in optimisation/fitting procedures. The systematics arising from comparing results of models with and without diffusion are found to be 0.5%, 0.8%, 2.1%, and 16% in mean density, radius, mass, and age, respectively. The internal systematics in age are significantly larger than the statistical uncertainties. We find the internal systematics resulting from the uncertainty in solar metallicity mixture to be 0.7% in mean density, 0.5% in radius, 1.4% in mass, and 6.7% in age. The surface correction method by Sonoi et al. and Ball & Gizon's two-term correction produce the lowest internal systematics among the different correction methods, namely, ˜1%, ˜1%, ˜2%, and ˜8% in mean density, radius, mass, and age, respectively. Stellar masses obtained using the surface correction methods by Kjeldsen et al. and Ball & Gizon's one-term correction are systematically higher than those obtained using frequency ratios.
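
    For reference, the Ball & Gizon two-term surface correction mentioned above is usually written as delta_nu = (a_-1 (nu/nu_ac)^-1 + a_3 (nu/nu_ac)^3) / I, with nu_ac the acoustic cut-off frequency and I the mode inertia; the transcription below is quoted from memory of Ball & Gizon (2014), with the coefficients a_-1 and a_3 fit to the observed-minus-model frequency differences.

        def ball_gizon_two_term(nu, nu_ac, inertia, a_minus1, a_3):
            """Ball & Gizon (2014) two-term surface correction to a model
            frequency nu: (a_-1 x^-1 + a_3 x^3) / inertia with x = nu/nu_ac."""
            x = nu / nu_ac
            return (a_minus1 / x + a_3 * x ** 3) / inertia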

  9. A novel energy conversion based method for velocity correction in molecular dynamics simulations

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Hanhui [School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027 (China); Collaborative Innovation Center of Advanced Aero-Engine, Hangzhou 310027 (China); Liu, Ningning [School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027 (China); Ku, Xiaoke, E-mail: xiaokeku@zju.edu.cn [School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027 (China); Fan, Jianren [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China)

    2017-05-01

    Molecular dynamics (MD) simulation has become an important tool for studying micro- or nano-scale dynamics and the statistical properties of fluids and solids. In MD simulations, there are mainly two approaches: equilibrium and non-equilibrium molecular dynamics (EMD and NEMD). In this paper, a new energy conversion based correction (ECBC) method for MD is developed. Unlike the traditional systematic correction based on macroscopic parameters, the ECBC method is developed strictly based on the physical interaction processes between the pair of molecules or atoms. The developed ECBC method can apply to EMD and NEMD directly. While using MD with this method, the difference between the EMD and NEMD is eliminated, and no macroscopic parameters such as external imposed potentials or coefficients are needed. With this method, many limits of using MD are lifted. The application scope of MD is greatly extended.

  10. Semiempirical Quantum-Chemical Orthogonalization-Corrected Methods: Benchmarks for Ground-State Properties.

    Science.gov (United States)

    Dral, Pavlo O; Wu, Xin; Spörkel, Lasse; Koslowski, Axel; Thiel, Walter

    2016-03-08

    The semiempirical orthogonalization-corrected OMx methods (OM1, OM2, and OM3) go beyond the standard MNDO model by including additional interactions in the electronic structure calculation. When augmented with empirical dispersion corrections, the resulting OMx-Dn approaches offer a fast and robust treatment of noncovalent interactions. Here we evaluate the performance of the OMx and OMx-Dn methods for a variety of ground-state properties using a large and diverse collection of benchmark sets from the literature, with a total of 13035 original and derived reference data. Extensive comparisons are made with the results from established semiempirical methods (MNDO, AM1, PM3, PM6, and PM7) that also use the NDDO (neglect of diatomic differential overlap) integral approximation. Statistical evaluations show that the OMx and OMx-Dn methods outperform the other methods for most of the benchmark sets.

  11. A novel energy conversion based method for velocity correction in molecular dynamics simulations

    International Nuclear Information System (INIS)

    Jin, Hanhui; Liu, Ningning; Ku, Xiaoke; Fan, Jianren

    2017-01-01

    Molecular dynamics (MD) simulation has become an important tool for studying micro- or nano-scale dynamics and the statistical properties of fluids and solids. In MD simulations, there are mainly two approaches: equilibrium and non-equilibrium molecular dynamics (EMD and NEMD). In this paper, a new energy conversion based correction (ECBC) method for MD is developed. Unlike the traditional systematic correction based on macroscopic parameters, the ECBC method is developed strictly from the physical interaction processes between pairs of molecules or atoms. The ECBC method can be applied directly to both EMD and NEMD. With this method, the difference between EMD and NEMD is eliminated, and no macroscopic parameters such as externally imposed potentials or coefficients are needed. With this method, many limits on the use of MD are lifted, and the application scope of MD is greatly extended.

  12. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers.

    Science.gov (United States)

    Dobie, Robert A; Wojcik, Nancy C

    2015-07-13

    The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Regression analysis was used to derive new age-correction values using audiometric data from the 1999-2006 US NHANES. Using the NHANES median better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20-75 years. The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20-75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61-75 years. Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers.
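
    A minimal sketch of the derivation step described above: fit median thresholds to a simple polynomial in age and read corrections off the fitted curve. The threshold numbers below are hypothetical placeholders, not NHANES values:

      import numpy as np

      # Hypothetical medians: age (years) vs. better-ear threshold (dB HL) at 4 kHz;
      # placeholders only, not NHANES data.
      age = np.array([25, 35, 45, 55, 65, 75])
      thr = np.array([3.0, 4.5, 8.0, 14.0, 22.0, 33.0])

      coef = np.polyfit(age, thr, deg=2)              # simple polynomial in age
      table = {a: round(float(np.polyval(coef, a)), 1) for a in range(20, 76)}

      # An age correction subtracts the growth of the table value between the
      # baseline audiogram age and the current age from the measured shift.
      correction = table[60] - table[30]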

  13. Regression dilution bias: tools for correction methods and sample size calculation.

    Science.gov (United States)

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
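
    The correction at the heart of the tools described above is compact. In the classical measurement-error model (a standard result, not specific to this paper), with true-exposure variance σx² and error variance σu²:

      \[ \lambda = \frac{\sigma_x^2}{\sigma_x^2 + \sigma_u^2}, \qquad \mathrm{E}\big[\hat{\beta}_{\mathrm{obs}}\big] = \lambda\,\beta_{\mathrm{true}}, \qquad \hat{\beta}_{\mathrm{corr}} = \hat{\beta}_{\mathrm{obs}} / \hat{\lambda}, \]

    where the reliability ratio λ is estimated from the repeated measurements collected in the reliability study.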

  14. A New High-Precision Correction Method of Temperature Distribution in Model Stellar Atmospheres

    Directory of Open Access Journals (Sweden)

    Sapar A.

    2013-06-01

    The main features of the temperature correction methods suggested and used in the modeling of plane-parallel stellar atmospheres are discussed, and the main features of the new method are described. A derivation is presented of the formulae for a version of the Unsöld-Lucy method used by us in the SMART (Stellar Model Atmospheres and Radiative Transport) software for modeling stellar atmospheres. The method is based on correcting the model temperature distribution by minimizing the differences of the flux from its accepted constant value and by requiring the absence of a flux gradient, meaning that the local source and sink terms of radiation must be equal. The final relative flux constancy obtainable by the method with the SMART code turned out to have a precision of the order of 0.5%. Some rapidly converging iteration steps can be useful before starting the high-precision model correction. Corrections of both the flux value and its gradient, as in the Unsöld-Lucy method, are unavoidably needed to obtain high-precision flux constancy. A new temperature correction method to obtain high-precision flux constancy for plane-parallel LTE model stellar atmospheres is proposed and studied. The non-linear optimization is carried out by least squares, in which the Levenberg-Marquardt correction method and thereafter an additional correction by a Broyden iteration loop were applied. Small finite differences of temperature (δT/T = 10⁻³) are used in the computations. A single Jacobian step appears to be mostly sufficient to get flux constancy of the order of 10⁻² %. The dual numbers and their generalization, the dual complex numbers (the duplex numbers), make it possible to obtain the derivatives automatically in the nilpotent part of the dual numbers. A version of the SMART software is being refactored to use dual and duplex numbers, which makes it possible to get rid of the finite differences as an additional source of reduced precision.
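
    The dual-number trick mentioned at the end is easy to illustrate. Below is a generic forward-mode sketch (not the SMART implementation): carrying a nilpotent ε with ε² = 0 through arithmetic yields exact derivatives with no finite differences:

      class Dual:
          """Dual number a + b*eps with eps**2 == 0; b carries the derivative."""
          def __init__(self, a, b=0.0):
              self.a, self.b = a, b
          def __add__(self, other):
              other = other if isinstance(other, Dual) else Dual(other)
              return Dual(self.a + other.a, self.b + other.b)
          __radd__ = __add__
          def __mul__(self, other):
              other = other if isinstance(other, Dual) else Dual(other)
              # (a1 + b1 eps)(a2 + b2 eps) = a1*a2 + (a1*b2 + b1*a2) eps
              return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
          __rmul__ = __mul__

      def derivative(f, x):
          """Exact derivative of f at x: no finite differences involved."""
          return f(Dual(x, 1.0)).b

      # f(x) = 3x^2 + 2x has f'(2) = 6*2 + 2 = 14
      print(derivative(lambda x: 3 * x * x + 2 * x, 2.0))   # -> 14.0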

  15. A direct qPCR method for residual DNA quantification in monoclonal antibody drugs produced in CHO cells.

    Science.gov (United States)

    Hussain, Musaddeq

    2015-11-10

    Chinese hamster ovary (CHO) cells are the host cell of choice for the manufacturing of monoclonal antibody (mAb) drugs in the biopharmaceutical industry. Host cell DNA is an impurity of such a manufacturing process and must be controlled and monitored in order to ensure drug purity and safety. A conventional method for quantification of host residual DNA in drug requires extraction of DNA from the mAb drug substance with subsequent quantification of the extracted DNA using real-time PCR (qPCR). Here we report a method where the DNA extraction step is eliminated prior to qPCR. In this method, which we have named 'direct resDNA qPCR', the mAb drug substance is digested with a protease called KAPA in a 96-well PCR plate, the protease in the digest is then denatured at high temperature, qPCR reagents are added to the resultant reaction wells in the plate along with standards and controls in other wells of the same plate, and the plate is subjected to qPCR for analysis of residual host DNA in the samples. This direct resDNA qPCR method for CHO is sensitive to 5.0 fg of DNA with high precision and accuracy and has a wide linear range of determination. The method has been successfully tested with four mAb drugs, two IgG1 and two IgG4. Both the purified drug substance as well as a number of process intermediate samples, e.g., bioreactor harvest, Protein A column eluate and ion-exchange column eluates, were tested. This method simplifies the residual DNA quantification protocol, reduces the time of analysis, and leads to increased assay sensitivity and the development of automated high-throughput methods.
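
    Whichever way the sample reaches the plate, the quantification arithmetic is the usual standard-curve interpolation. A minimal sketch with hypothetical Ct values (the assay's real standards extend down to 5.0 fg):

      import numpy as np

      def fit_standard_curve(log10_amount, ct):
          """Fit Ct = slope * log10(amount) + intercept from the DNA standards."""
          slope, intercept = np.polyfit(log10_amount, ct, 1)
          efficiency = 10 ** (-1.0 / slope) - 1.0     # ~1.0 means perfect doubling
          return slope, intercept, efficiency

      def quantify(ct, slope, intercept):
          """Interpolate unknown residual-DNA amounts from their Ct values."""
          return 10 ** ((np.asarray(ct) - intercept) / slope)

      # hypothetical standards: 5 fg to 50 pg of CHO DNA in 10-fold steps
      logq = np.log10([5, 50, 500, 5e3, 5e4])         # fg
      cts = np.array([33.1, 29.7, 26.4, 23.0, 19.6])
      m, b, e = fit_standard_curve(logq, cts)
      print(quantify([28.2], m, b))                   # -> fg residual DNA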

  16. High-order multi-implicit spectral deferred correction methods for problems of reactive flow

    International Nuclear Information System (INIS)

    Bourlioux, Anne; Layton, Anita T.; Minion, Michael L.

    2003-01-01

    Models for reacting flow are typically based on advection-diffusion-reaction (A-D-R) partial differential equations. Many practical cases correspond to situations where the relevant time scales associated with each of the three sub-processes can be widely different, leading to disparate time-step requirements for robust and accurate time-integration. In particular, interesting regimes in combustion correspond to systems in which diffusion and reaction are much faster processes than advection. The numerical strategy introduced in this paper is a general procedure to account for this time-scale disparity. The proposed methods are high-order multi-implicit generalizations of spectral deferred correction methods (MISDC methods), constructed for the temporal integration of A-D-R equations. Spectral deferred correction methods compute a high-order approximation to the solution of a differential equation by using a simple, low-order numerical method to solve a series of correction equations, each of which increases the order of accuracy of the approximation. The key feature of MISDC methods is their flexibility in handling several sub-processes implicitly but independently, while avoiding the splitting errors present in traditional operator-splitting methods and also allowing for different time steps for each process. The stability, accuracy, and efficiency of MISDC methods are first analyzed using a linear model problem and the results are compared to semi-implicit spectral deferred correction methods. Furthermore, numerical tests on simplified reacting flows demonstrate the expected convergence rates for MISDC methods of orders three, four, and five. The gain in efficiency by independently controlling the sub-process time steps is illustrated for nonlinear problems, where reaction and diffusion are much stiffer than advection. Although the paper focuses on this specific time-scales ordering, the generalization to any ordering combination is straightforward.
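
    To make the deferred-correction idea concrete, here is a generic single-rate SDC sketch for y' = f(t, y); the paper's multi-implicit variant additionally splits f into advection, diffusion and reaction terms handled separately, and the node placement and sweep counts below are illustrative:

      import numpy as np

      def integration_matrix(nodes):
          """S[m, j] = integral of the j-th Lagrange basis polynomial
          over [nodes[m], nodes[m+1]]."""
          M = len(nodes)
          S = np.zeros((M - 1, M))
          for j in range(M):
              others = np.delete(nodes, j)
              coeffs = np.poly(others) / np.prod(nodes[j] - others)  # Lagrange basis
              antider = np.polyint(coeffs)
              for m in range(M - 1):
                  S[m, j] = (np.polyval(antider, nodes[m + 1]) -
                             np.polyval(antider, nodes[m]))
          return S

      def sdc_step(f, t0, y0, dt, n_nodes=5, sweeps=4):
          """One step of spectral deferred correction for y' = f(t, y):
          forward-Euler predictor, then low-order sweeps on correction equations."""
          i = np.arange(n_nodes)
          nodes = t0 + dt * (1 - np.cos(np.pi * i / (n_nodes - 1))) / 2  # Lobatto-type
          S = integration_matrix(nodes)
          y = np.full(n_nodes, float(y0))
          for m in range(n_nodes - 1):                  # low-order predictor
              y[m + 1] = y[m] + (nodes[m + 1] - nodes[m]) * f(nodes[m], y[m])
          for _ in range(sweeps):                       # each sweep raises the order
              F = np.array([f(t, v) for t, v in zip(nodes, y)])
              for m in range(n_nodes - 1):
                  quad = S[m] @ F                       # quadrature of old iterate
                  y[m + 1] = (y[m] + (nodes[m + 1] - nodes[m]) *
                              (f(nodes[m], y[m]) - F[m]) + quad)
          return y[-1]

      # y' = -y, y(0) = 1: one step of size 0.5 vs. the exact exp(-0.5)
      print(sdc_step(lambda t, y: -y, 0.0, 1.0, 0.5), np.exp(-0.5))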

  17. The unbiasedness of a generalized mirage boundary correction method for Monte Carlo integration estimators of volume

    Science.gov (United States)

    Thomas B. Lynch; Jeffrey H. Gove

    2014-01-01

    The typical "double counting" application of the mirage method of boundary correction cannot be applied to sampling systems such as critical height sampling (CHS) that are based on a Monte Carlo sample of a tree (or debris) attribute because the critical height (or other random attribute) sampled from a mirage point is generally not equal to the critical...

  18. A non-parametric method for correction of global radiation observations

    DEFF Research Database (Denmark)

    Bacher, Peder; Madsen, Henrik; Perers, Bengt

    2013-01-01

    Errors in the observations are corrected, such as: tilt in the leveling of the sensor, shadowing from surrounding objects, clipping and saturation in the signal processing, and errors from dirt and wear. The method is based on a statistical non-parametric clear-sky model which is applied to both...

  19. An FFT-based Method for Attenuation Correction in Fluorescence Confocal Microscopy

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.; Bakker, M.

    1993-01-01

    A problem in three-dimensional imaging by a confocal scanning laser microscope (CSLM) in the (epi)fluorescence mode is the darkening of the deeper layers due to absorption and scattering of both the excitation and the fluorescence light. In this paper we propose a new method to correct for these effects.

  20. A brain MRI bias field correction method created in the Gaussian multi-scale space

    Science.gov (United States)

    Chen, Mingsheng; Qin, Mingxin

    2017-07-01

    A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to image-processing algorithms such as segmentation. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by the convolution of the inhomogeneous MR image with a two-dimensional Gaussian function. In the Gaussian multi-scale space, the method retrieves the image details from the differentiation of the original image and the convolution image. Then, it obtains an image whose inhomogeneity is eliminated by the weighted sum of the image details in each layer of the space. Next, the bias field-corrected MR image is retrieved after the γ correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 MRI and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
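
    In outline, the pipeline lends itself to a very short sketch; the scales, weights and γ below are illustrative placeholders, not the authors' settings:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def bias_correct(img, sigmas=(2, 4, 8, 16), weights=None, gamma=0.8):
          """Multi-scale sketch: detail layers = image minus its Gaussian blur,
          recombined by a weighted sum, then gamma-corrected to restore contrast."""
          img = img.astype(float)
          if weights is None:
              weights = [1.0 / len(sigmas)] * len(sigmas)
          details = [img - gaussian_filter(img, s) for s in sigmas]
          flat = sum(w * d for w, d in zip(weights, details))
          flat -= flat.min()                      # shift to non-negative
          flat /= max(flat.max(), 1e-12)          # normalize to [0, 1]
          return flat ** gamma                    # brightness/contrast adjustment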

  1. Comparison of fatty liver index with noninvasive methods for steatosis detection and quantification

    Science.gov (United States)

    Zelber-Sagi, Shira; Webb, Muriel; Assy, Nimer; Blendis, Laurie; Yeshua, Hanny; Leshno, Moshe; Ratziu, Vlad; Halpern, Zamir; Oren, Ran; Santo, Erwin

    2013-01-01

    AIM: To compare noninvasive methods presently used for steatosis detection and quantification in nonalcoholic fatty liver disease (NAFLD). METHODS: Cross-sectional study of subjects from the general population, a subgroup from the First Israeli National Health Survey, without excessive alcohol consumption or viral hepatitis. All subjects underwent anthropometric measurements and fasting blood tests. Evaluation of liver fat was performed using four noninvasive methods: the SteatoTest; the fatty liver index (FLI); regular abdominal ultrasound (AUS); and the hepatorenal ultrasound index (HRI). Two of the noninvasive methods have been validated vs liver biopsy and were considered as the reference methods: the HRI, the ratio between the median brightness level of the liver and right kidney cortex; and the SteatoTest, a biochemical surrogate marker of liver steatosis. The FLI is calculated by an algorithm based on triglycerides, body mass index, γ-glutamyl-transpeptidase and waist circumference, that has been validated only vs AUS. An FLI ≥ 60 was considered to indicate fatty liver. RESULTS: Three hundred and thirty-eight volunteers met the inclusion and exclusion criteria and had valid tests. The prevalence rate of NAFLD was 31.1% according to AUS. The FLI was very strongly correlated with SteatoTest (r = 0.91, P < 0.001). The agreement between the diagnosis of fatty liver by SteatoTest (≥ S2) and by FLI (≥ 60) was 0.74, which represented good agreement. The sensitivity of FLI vs SteatoTest was 85.5%, specificity 92.6%, positive predictive value (PPV) 74.7%, and negative predictive value (NPV) 96.1%. Most subjects (84.2%) with FLI < 60 did not have fatty liver according to HRI. The agreement between the diagnosis of fatty liver by HRI (≥ 1.5) and by FLI (≥ 60) was 0.43, which represented only moderate agreement. The sensitivity of FLI vs HRI was 56.3%, specificity 86.5%, PPV 57.0%, and NPV 86.1%. The diagnostic accuracy of FLI for steatosis > 5%, as predicted by SteatoTest, yielded an area under the receiver operating characteristic curve (AUROC) of 0.97 (95% CI: 0.95-0.98). The diagnostic accuracy of FLI for steatosis > 5%, as
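
    For reference, the FLI algorithm mentioned above is the standard one of Bedogni et al. (quoted from the original FLI publication, not from this abstract; TG in mg/dL, GGT in U/L, waist circumference WC in cm):

      \[ y = 0.953\,\ln(\mathrm{TG}) + 0.139\,\mathrm{BMI} + 0.718\,\ln(\mathrm{GGT}) + 0.053\,\mathrm{WC} - 15.745, \qquad \mathrm{FLI} = \frac{e^{y}}{1 + e^{y}} \times 100. \]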

  2. Characterisation and optimisation of a method for the detection and quantification of atmospherically relevant carbonyl compounds in aqueous medium

    Science.gov (United States)

    Rodigast, M.; Mutzel, A.; Iinuma, Y.; Haferkorn, S.; Herrmann, H.

    2015-01-01

    Carbonyl compounds are ubiquitous in the atmosphere and are either emitted primarily from anthropogenic and biogenic sources or produced secondarily from the oxidation of volatile organic compounds (VOC). Despite a number of studies on the quantification of carbonyl compounds, a comprehensive description of optimised methods for the quantification of atmospherically relevant carbonyl compounds is scarce. Thus, a method was systematically characterised and improved to quantify carbonyl compounds. Quantification with the present method can be carried out for any carbonyl compound sampled in the aqueous phase regardless of its source. The method optimisation was conducted for seven atmospherically relevant carbonyl compounds, including acrolein, benzaldehyde, glyoxal, methyl glyoxal, methacrolein, methyl vinyl ketone and 2,3-butanedione. O-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine hydrochloride (PFBHA) was used as the derivatisation reagent and the formed oximes were detected by gas chromatography/mass spectrometry (GC/MS). The main advantage of the improved method presented in this study is the low detection limit, in the range of 0.01 to 0.17 μmol L-1 depending on the carbonyl compound. Furthermore, the best results were found for extraction with dichloromethane for 30 min followed by derivatisation with PFBHA for 24 h with 0.43 mg mL-1 PFBHA at a pH value of 3. The optimised method was evaluated in the present study by the OH radical initiated oxidation of 3-methylbutanone in the aqueous phase. Methyl glyoxal and 2,3-butanedione were found to be oxidation products in the samples, with a yield of 2% for methyl glyoxal and 14% for 2,3-butanedione.

  3. A new measurement method for quantification and speciation of technetium-99 in sample at environmental concentrations

    International Nuclear Information System (INIS)

    Kasprzak, L.M.; Aubert, C.; Cossonnet, C.; Fattahi, M.

    2006-01-01

    Technetium-99 is a pure β- emitter and an important long half-lived multi-valent radionuclide to be considered in radiation protection of the environment and the public. It is a fission product of both 235 U and 239 Pu with approximately a 6% yield. The most stable and very mobile form of Tc is the pertechnetate anion (TcO 4 - ). Therefore, environmental monitoring requires knowledge of the redox and chemical properties of this element in order to predict its behaviour and transfer in the environment. Given the extremely low concentration of 99 Tc in the environment (10 -10 M to 10 -12 M), its determination currently necessitates an enrichment and separation from the sample matrix prior to instrumental measurement. To this end, the development of a suitable analytical technique is required. The advantages of Capillary Electrophoresis (CE) as a powerful separation technique can be combined with the atomic specificity, multi-elemental character and extremely high sensitivity of an inductively coupled plasma mass spectrometer (ICP-MS) for trace metal-speciation studies in different fields of interest. However, the coupling of both commercially available instruments deserves particular attention if separative resolution, high analyte transport efficiency and sensitive detection are to be achieved. In that technical vein, the interface itself may be considered as the 'key to success'. Several attempts to develop interfaces for CE-ICP-MS have been described over the last few years. However, the 99 Tc quantification by ICP-MS can be disturbed by isobaric overlap with 99 Ru and by interferences induced by the matrix, including those associated with hydride formation ( 98 Mo 1 H (23.8%), 98 Ru 1 H (1.9%)). The aim of the present study was to develop a rapid and efficient method for the determination of 99 Tc in environmental samples by CE-ICP-MS without preliminary classical radiochemical separation to eliminate the interfering elements. In this paper, we describe the development

  4. A new measurement method for quantification and speciation of technetium-99 in sample at environmental concentrations

    Energy Technology Data Exchange (ETDEWEB)

    Kasprzak, L.M. [IRSN/DEI/STEME/LMRE, ORSAY, F-91400 (France); SUBATECH, EMN-IN2P3/CNRS-Université de Nantes, F-44307 (France); Aubert, C.; Cossonnet, C. [IRSN/DEI/STEME/LMRE, ORSAY, F-91400 (France); Fattahi, M. [SUBATECH, EMN-IN2P3/CNRS-Université de Nantes, F-44307 (France)

    2006-07-01

    Technetium-99 is a pure {beta}- emitter and an important long half-lived multi-valent radionuclide to be considered in radiation protection of the environment and the public. It is a fission product of both {sup 235}U and {sup 239}Pu with approximately a 6% yield. The most stable and very mobile form of Tc is the pertechnetate anion (TcO{sub 4}{sup -}). Therefore, environmental monitoring requires knowledge of the redox and chemical properties of this element in order to predict its behaviour and transfer in the environment. Given the extremely low concentration of {sup 99}Tc in the environment (10{sup -10} M to 10{sup -12} M), its determination currently necessitates an enrichment and separation from the sample matrix prior to instrumental measurement. To this end, the development of a suitable analytical technique is required. The advantages of Capillary Electrophoresis (CE) as a powerful separation technique can be combined with the atomic specificity, multi-elemental character and extremely high sensitivity of an inductively coupled plasma mass spectrometer (ICP-MS) for trace metal-speciation studies in different fields of interest. However, the coupling of both commercially available instruments deserves particular attention if separative resolution, high analyte transport efficiency and sensitive detection are to be achieved. In that technical vein, the interface itself may be considered as the 'key to success'. Several attempts to develop interfaces for CE-ICP-MS have been described over the last few years. However, the {sup 99}Tc quantification by ICP-MS can be disturbed by isobaric overlap with {sup 99}Ru and by interferences induced by the matrix, including those associated with hydride formation ({sup 98}Mo{sup 1}H (23.8%), {sup 98}Ru{sup 1}H (1.9%)). The aim of the present study was to develop a rapid and efficient method for the determination of {sup 99}Tc in environmental samples by CE-ICP-MS without preliminary classical radiochemical separation to eliminate the interfering elements.

  5. QED radiative correction for the single-W production using a parton shower method

    International Nuclear Information System (INIS)

    Kurihara, Y.; Fujimoto, J.; Ishikawa, T.; Shimizu, Y.; Kato, K.; Tobimatsu, K.; Munehisa, T.

    2001-01-01

    A parton shower method for the photonic radiative correction is applied to single W-boson production processes. The energy scale for the evolution of the parton shower is determined so that the correct soft-photon emission is reproduced. Photon spectra radiated from the partons are compared with those from the exact matrix elements, and show good agreement. Possible errors due to an inappropriate energy-scale selection or due to the ambiguity of the energy-scale determination are also discussed, particularly for the measurements on triple gauge couplings. (orig.)

  6. Correction method for the error of diamond tool's radius in ultra-precision cutting

    Science.gov (United States)

    Wang, Yi; Yu, Jing-chi

    2010-10-01

    Compensation of the error of the diamond tool's cutting edge is a bottleneck technology that hinders the direct formation of high-accuracy aspheric surfaces by single-point diamond turning. Traditionally, compensation was done according to the measurement result from a profilometer, which required a long measurement time and resulted in low processing efficiency. A new compensation method is put forward in this article, in which the correction of the error of the diamond tool's cutting edge is done according to the measurement result from a digital interferometer. First, the detailed theoretical calculation related to the compensation method was deduced. Then, the effect after compensation was simulated by computer. Finally, a φ50 mm workpiece was diamond turned and then correction turned on a Nanotech 250. The tested surface achieved a high shape accuracy of PV 0.137λ and RMS 0.011λ, which confirmed that the new compensation method agreed with the predictive analysis and offered high accuracy and fast error convergence.

  7. A distortion correction method for image intensifier and electronic portal images used in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Ioannidis, G T; Geramani, K N; Zamboglou, N [Strahlenklinik, Stadtische Kliniken Offenbach, Offenbach (Germany); Uzunoglu, N [Department of Electrical and Computer Engineering, National Technical University of Athens, Athens (Greece)

    1999-12-31

    At most radiation therapy departments, a simulator and an 'on-line' verification system for the treated volume, in the form of an electronic portal imaging device (EPID), are available. Networking and digital handling (saving, archiving etc.) of the image information is a necessity in image processing procedures in order to evaluate verification and simulation recordings on the computer screen. Distortion correction is, on the other hand, a prerequisite for quantitative comparison of both image modalities. Another limiting factor for quantitative assertions is the fact that the irradiation fields in radiotherapy are usually bigger than the field of view of an image intensifier. Several segments of the irradiation field must therefore be acquired; using pattern recognition techniques, these segments can be composed into a single image. In this paper a distortion correction method is presented. The method is based upon a well-defined grid which is embedded on the image during the registration process. The video signal from the image intensifier is acquired and processed, and the grid is then recognised using image processing techniques. Ideally, if all grid points are recognised, various methods can be applied in order to correct the distortion. In practice, however, this is not the case: overlapping structures (bones etc.) mean that not all of the grid points can be recognised. Mathematical models from graph theory are applied in order to reconstruct the whole grid. The deviations of the grid point positions from their nominal values are then used to calculate correction coefficients. This method (well-defined grid, grid recognition, correction factors) can also be applied to verification images from the EPID or to other image modalities, and therefore a quantitative comparison in radiation treatment is possible. The distortion correction method and its application to simulator images are presented. (authors)
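
    A common way to turn measured grid deviations into correction coefficients is a least-squares polynomial warp; the sketch below is a generic version of that step (the paper's exact model is not specified in the abstract):

      import numpy as np

      def _design(pts, deg=2):
          """2-D polynomial terms x**i * y**j with i + j <= deg."""
          x, y = pts[:, 0], pts[:, 1]
          return np.column_stack([x**i * y**j
                                  for i in range(deg + 1)
                                  for j in range(deg + 1 - i)])

      def fit_warp(detected, nominal, deg=2):
          """Least-squares polynomial warp taking detected grid points to their
          nominal positions; the coefficients act as correction coefficients."""
          A = _design(detected, deg)
          cx, *_ = np.linalg.lstsq(A, nominal[:, 0], rcond=None)
          cy, *_ = np.linalg.lstsq(A, nominal[:, 1], rcond=None)
          return cx, cy

      def apply_warp(pts, cx, cy, deg=2):
          """Map arbitrary image points through the fitted correction."""
          A = _design(pts, deg)
          return np.column_stack([A @ cx, A @ cy])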

  8. Linearization of the Bradford protein assay to application in cow milk proteins quantification by UV-Vis spectrophotometry method.

    OpenAIRE

    SANTOS, A. S. de O. dos; COSTA, F. F.; ESTEVES, W. T.; BRITO, M. A. V. P. e; FURTADO, M. A. M.; MARTINS, M. F.

    2015-01-01

    Reliable methods for the determination and quantification of total protein in food are essential to ensure the quality and safety of traded food. The objective of this study was to evaluate the linearity of calibration curves obtained from different proteins (bovine serum albumin (BSA), α-LA, β-LG, and αs-, β- and κ-CAS) with the Bradford reagent. Commercial UHT skimmed bovine milk was analyzed for the determination of total protein using the Bradford method by reading at 595 nm. The determinatio...

  9. An investigation of natural genetic variation in the circadian system of Drosophila melanogaster: rhythm characteristics and methods of quantification.

    Science.gov (United States)

    Emery, P T; Morgan, E; Birley, A J

    1994-04-01

    Variation in four characteristics of the circadian locomotor activity rhythm was investigated in 24 true-breeding strains of Drosophila melanogaster with a view to establishing methods of phenotypic measurement sufficiently robust to allow subsequent biometric analysis. Between them, these strains formed a representative sample of the genetic variability of a natural population. Period, phase, definition (the degree to which a rhythmic signal was obscured by noise), and rhythm waveform were all found to vary continuously among the strains, although within each strain the rhythm phenotype was remarkably consistent. Each characteristic was found to be sufficiently robust to permit objective measurement using several different methods of quantification, which were then compared.

  10. New component-based normalization method to correct PET system models

    International Nuclear Information System (INIS)

    Kinouchi, Shoko; Miyoshi, Yuji; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji; Nishikido, Fumihiko; Tashima, Hideaki

    2011-01-01

    Normalization correction is necessary to obtain high-quality reconstructed images in positron emission tomography (PET). There are two basic types of normalization methods: the direct method and component-based methods. The former method suffers from the problem that a huge count number in the blank scan data is required. Therefore, the latter methods have been proposed to obtain high statistical accuracy normalization coefficients with a small count number in the blank scan data. In iterative image reconstruction methods, on the other hand, the quality of the obtained reconstructed images depends on the system modeling accuracy. Therefore, the normalization weighting approach, in which normalization coefficients are directly applied to the system matrix instead of to a sinogram, has been proposed. In this paper, we propose a new component-based normalization method to correct system model accuracy. In the proposed method, two components are defined and are calculated iteratively in such a way as to minimize errors of system modeling. To compare the proposed method and the direct method, we applied both methods to our small OpenPET prototype system. We achieved acceptable statistical accuracy of normalization coefficients while reducing the count number of the blank scan data to one-fortieth of that required in the direct method. (author)

  11. A Time-Walk Correction Method for PET Detectors Based on Leading Edge Discriminators.

    Science.gov (United States)

    Du, Junwei; Schmall, Jeffrey P; Judenhofer, Martin S; Di, Kun; Yang, Yongfeng; Cherry, Simon R

    2017-09-01

    The leading edge timing pick-off technique is the simplest timing extraction method for PET detectors. Due to the inherent time-walk of the leading edge technique, corrections should be made to improve timing resolution, especially for time-of-flight PET. Time-walk correction can be done by utilizing the relationship between the threshold crossing time and the event energy on an event-by-event basis. In this paper, a time-walk correction method is proposed and evaluated using timing information from two identical detectors, both using leading edge discriminators. This differs from other techniques that use an external dedicated reference detector, such as a fast PMT-based detector using constant fraction techniques to pick off timing information. In our proposed method, one detector was used as a reference detector to correct the time-walk of the other detector. Time-walk in the reference detector was minimized by using events within a small energy window (508.5-513.5 keV). To validate this method, a coincidence detector pair was assembled using two SensL MicroFB SiPMs and two 2.5 mm × 2.5 mm × 20 mm polished LYSO crystals. Coincidence timing resolutions using different time pick-off techniques were obtained at a bias voltage of 27.5 V and a fixed temperature of 20 °C. The coincidence timing resolutions without time-walk correction were 389.0 ± 12.0 ps (425-650 keV energy window) and 670.2 ± 16.2 ps (250-750 keV energy window). The timing resolution with time-walk correction improved to 367.3 ± 0.5 ps (425-650 keV) and 413.7 ± 0.9 ps (250-750 keV). For comparison, timing resolutions were 442.8 ± 12.8 ps (425-650 keV) and 476.0 ± 13.0 ps (250-750 keV) using constant fraction techniques, and 367.3 ± 0.4 ps (425-650 keV) and 413.4 ± 0.9 ps (250-750 keV) using a reference detector based on the constant fraction technique. These results show that the proposed leading-edge-based time-walk correction method works well. Timing resolution obtained
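
    A generic sketch of the event-by-event correction idea: calibrate the threshold-crossing delay against energy using coincidences with the narrow-window reference detector, then subtract the predicted walk. The 1/sqrt(E) model is an assumption for illustration, not the published calibration:

      import numpy as np

      def fit_time_walk(energy, dt):
          """Fit the threshold-crossing delay vs. energy; dt are time offsets of
          events measured against the narrow-window reference detector."""
          A = np.column_stack([np.ones_like(energy), 1.0 / np.sqrt(energy)])
          coeff, *_ = np.linalg.lstsq(A, dt, rcond=None)   # dt ~ c0 + c1/sqrt(E)
          return coeff

      def correct_walk(energy, t, coeff):
          """Subtract the predicted walk so all energies share one time origin."""
          return t - (coeff[0] + coeff[1] / np.sqrt(energy))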

  12. A method to quantify infectious airborne pathogens at concentrations below the threshold of quantification by culture

    Science.gov (United States)

    Cutler, Timothy D.; Wang, Chong; Hoff, Steven J.; Zimmerman, Jeffrey J.

    2013-01-01

    In aerobiology, dose-response studies are used to estimate the risk of infection to a susceptible host presented by exposure to a specific dose of an airborne pathogen. In the research setting, host- and pathogen-specific factors that affect the dose-response continuum can be accounted for by experimental design, but the requirement to precisely determine the dose of infectious pathogen to which the host was exposed is often challenging. By definition, quantification of viable airborne pathogens is based on the culture of micro-organisms, but some airborne pathogens are transmissible at concentrations below the threshold of quantification by culture. In this paper we present an approach to the calculation of exposure dose at microbiologically unquantifiable levels using an application of the “continuous-stirred tank reactor (CSTR) model” and the validation of this approach using rhodamine B dye as a surrogate for aerosolized microbial pathogens in a dynamic aerosol toroid (DAT). PMID:24082399
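
    The CSTR model invoked here is the standard well-mixed chamber balance; in generic notation (not necessarily the paper's symbols), with aerosol source rate S, airflow Q and chamber volume V,

      \[ V\,\frac{dC}{dt} = S - Q\,C(t) \quad\Longrightarrow\quad C(t) = \frac{S}{Q}\left(1 - e^{-Qt/V}\right), \]

    so an inhaled dose over an exposure of length T can be assigned as D = p\int_0^T C(t)\,dt for breathing rate p, even when C(t) itself lies below the culture-based limit of quantification.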

  13. Hydraulic correction method (HCM) to enhance the efficiency of SRTM DEM in flood modeling

    Science.gov (United States)

    Chen, Huili; Liang, Qiuhua; Liu, Yong; Xie, Shuguang

    2018-04-01

    Digital Elevation Model (DEM) is one of the most important controlling factors determining the simulation accuracy of hydraulic models. However, the currently available global topographic data is confronted with limitations for application in 2-D hydraulic modeling, mainly due to the existence of vegetation bias, random errors and insufficient spatial resolution. A hydraulic correction method (HCM) for the SRTM DEM is proposed in this study to improve modeling accuracy. Firstly, we employ the global vegetation corrected DEM (i.e. Bare-Earth DEM), developed from the SRTM DEM to include both vegetation height and SRTM vegetation signal. Then, a newly released DEM, removing both vegetation bias and random errors (i.e. Multi-Error Removed DEM), is employed to overcome the limitation of height errors. Last, an approach to correct the Multi-Error Removed DEM is presented to account for the insufficiency of spatial resolution, ensuring flow connectivity of the river networks. The approach involves: (a) extracting river networks from the Multi-Error Removed DEM using an automated algorithm in ArcGIS; (b) correcting the location and layout of extracted streams with the aid of Google Earth platform and Remote Sensing imagery; and (c) removing the positive biases of the raised segment in the river networks based on bed slope to generate the hydraulically corrected DEM. The proposed HCM utilizes easily available data and tools to improve the flow connectivity of river networks without manual adjustment. To demonstrate the advantages of HCM, an extreme flood event in Huifa River Basin (China) is simulated on the original DEM, Bare-Earth DEM, Multi-Error removed DEM, and hydraulically corrected DEM using an integrated hydrologic-hydraulic model. A comparative analysis is subsequently performed to assess the simulation accuracy and performance of four different DEMs and favorable results have been obtained on the corrected DEM.

  14. Validated LC-MS/MS Method for the Quantification of Ponatinib in Plasma: Application to Metabolic Stability.

    Directory of Open Access Journals (Sweden)

    Adnan A Kadi

    In the current work, a rapid, specific, sensitive and validated liquid chromatography-tandem mass spectrometric method was developed for the quantification of ponatinib (PNT) in human plasma and rat liver microsomes (RLMs), with its application to metabolic stability. Chromatographic separation of PNT and vandetanib (IS) was accomplished on an Agilent Eclipse Plus C18 analytical column (50 mm × 2.1 mm, 1.8 μm particle size) maintained at 21 ± 2 °C. The flow rate was 0.25 mL min-1 with a run time of 4 min. The mobile phase consisted of solvent A (10 mM ammonium formate, pH adjusted to 4.1 with formic acid) and solvent B (acetonitrile). Ions were generated by electrospray ionisation (ESI) and multiple reaction monitoring (MRM) was used as the basis for quantification. The results revealed a linear calibration curve in the range of 5-400 ng mL-1 (r2 ≥ 0.9998) with a lower limit of quantification (LOQ) and lower limit of detection (LOD) of 4.66 and 1.53 ng mL-1 in plasma, and 4.19 and 1.38 ng mL-1 in RLMs. The intra- and inter-day precision and accuracy in plasma ranged from 1.06 to 2.54% and -1.48 to -0.17%, respectively, whereas in RLMs they ranged from 0.97 to 2.31% and -1.65 to -0.3%. The developed procedure was applied for the quantification of PNT in human plasma and RLMs to study the metabolic stability of PNT. PNT disappeared rapidly in the first 10 min of RLM incubation and the disappearance plateaued for the rest of the incubation. The in vitro half-life (t1/2) was 6.26 min and the intrinsic clearance (CLint) was 15.182 ± 0.477.
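
    The half-life-to-clearance conversion behind these figures is the standard first-order in vitro relation; the incubation volume and protein amount that fix the units of CLint are not stated in the abstract:

      \[ k = \frac{\ln 2}{t_{1/2}} = \frac{0.693}{6.26\ \mathrm{min}} \approx 0.111\ \mathrm{min}^{-1}, \qquad \mathrm{CL}_{\mathrm{int}} = k \cdot \frac{V_{\mathrm{incubation}}}{m_{\mathrm{protein}}}. \]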

  15. The impact of reconstruction method on the quantification of DaTSCAN images

    Energy Technology Data Exchange (ETDEWEB)

    Dickson, John C.; Erlandsson, Kjell; Hutton, Brian F. [UCLH NHS Foundation Trust and University College London, Institute of Nuclear Medicine, London (United Kingdom); Tossici-Bolt, Livia [Southampton University Hospitals NHS Trust, Department of Medical Physics, Southampton (United Kingdom); Sera, Terez [University of Szeged, Department of Nuclear Medicine and Euromedic Szeged, Szeged (Hungary); Varrone, Andrea [Psychiatry Section and Stockholm Brain Institute, Karolinska Institute, Department of Clinical Neuroscience, Stockholm (Sweden); Tatsch, Klaus [EANM/European Network of Excellence for Brain Imaging, Vienna (Austria)

    2010-01-15

    Reconstruction of DaTSCAN brain studies using OS-EM iterative reconstruction offers better image quality and more accurate quantification than filtered back-projection. However, reconstruction must proceed for a sufficient number of iterations to achieve stable and accurate data. This study assessed the impact of the number of iterations on the image quantification, comparing the results of the iterative reconstruction with filtered back-projection data. A striatal phantom filled with {sup 123}I using striatal to background ratios between 2:1 and 10:1 was imaged on five different gamma camera systems. Data from each system were reconstructed using OS-EM (which included depth-independent resolution recovery) with various combinations of iterations and subsets to achieve up to 200 EM-equivalent iterations and with filtered back-projection. Using volume of interest analysis, the relationships between image reconstruction strategy and quantification of striatal uptake were assessed. For phantom filling ratios of 5:1 or less, significant convergence of measured ratios occurred close to 100 EM-equivalent iterations, whereas for higher filling ratios, measured uptake ratios did not display a convergence pattern. Assessment of the count concentrations used to derive the measured uptake ratio showed that nonconvergence of low background count concentrations caused peaking in higher measured uptake ratios. Compared to filtered back-projection, OS-EM displayed larger uptake ratios because of the resolution recovery applied in the iterative algorithm. The number of EM-equivalent iterations used in OS-EM reconstruction influences the quantification of DaTSCAN studies because of incomplete convergence and possible bias in areas of low activity due to the nonnegativity constraint in OS-EM reconstruction. Nevertheless, OS-EM using 100 EM-equivalent iterations provides the best linear discriminatory measure to quantify the uptake in DaTSCAN studies. (orig.)
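
    The VOI measure whose convergence is at issue is, in generic form (notation mine, not the paper's), the ratio of striatal to background count concentrations,

      \[ R = \frac{C_{\mathrm{striatum}}}{C_{\mathrm{background}}}, \]

    so a background count concentration that has not yet converged, and is biased low (e.g. by the non-negativity constraint), inflates the measured ratio through the denominator.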

  16. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    Science.gov (United States)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of either one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
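
    As a minimal illustration of the correction half of such schemes (the paper's full methods add a prediction step derived from the optimality-condition dynamics; the step size, sampling interval and test problem below are illustrative):

      import numpy as np

      def track(grad, x0, h=0.05, steps=200, n_corr=3, alpha=0.1):
          """Correction-only tracking of a time-varying minimizer: sample the
          problem every h seconds and take n_corr gradient steps per sample."""
          x, t, traj = np.asarray(x0, dtype=float), 0.0, []
          for _ in range(steps):
              for _ in range(n_corr):
                  x = x - alpha * grad(t, x)     # correction: descend f(t, .)
              traj.append(x.copy())
              t += h                             # data sampled at rate 1/h
          return np.array(traj)

      # track argmin of f(t, x) = (x - sin t)^2, whose optimizer is x*(t) = sin t
      traj = track(lambda t, x: 2 * (x - np.sin(t)), x0=[0.0])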

  17. A method of bias correction for maximal reliability with dichotomous measures.

    Science.gov (United States)

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  18. Method of correcting eddy current magnetic fields in particle accelerator vacuum chambers

    Science.gov (United States)

    Danby, Gordon T.; Jackson, John W.

    1991-01-01

    A method for correcting magnetic field aberrations produced by eddy currents induced in a particle accelerator vacuum chamber housing is provided wherein correction windings are attached to selected positions on the housing and the windings are energized by transformer action from secondary coils, which coils are inductively coupled to the poles of electro-magnets that are powered to confine the charged particle beam within a desired orbit as the charged particles are accelerated through the vacuum chamber by a particle-driving rf field. The power inductively coupled to the secondary coils varies as a function of variations in the power supplied by the particle-accelerating rf field to a beam of particles accelerated through the vacuum chamber, so the current in the energized correction coils is effective to cancel eddy current flux fields that would otherwise be induced in the vacuum chamber by power variations in the particle beam.

  19. Consistent calculation of the polarization electric dipole moment by the shell-correction method

    International Nuclear Information System (INIS)

    Denisov, V.Yu.

    1992-01-01

    Macroscopic calculations of the polarization electric dipole moment which arises in nuclei with an octupole deformation are discussed in detail. This dipole moment is shown to depend on the position of the center of gravity. The conditions of consistency of the radii of the proton and neutron potentials and the radii of the proton and neutron surfaces, respectively, are discussed. These conditions must be incorporated in a shell-correction calculation of this dipole moment. A correct calculation of this moment by the shell-correction method is carried out. Dipole transitions between (on the one hand) levels belonging to an octupole vibrational band and (on the other) the ground state in rare-earth nuclei with a large quadrupole deformation are studied. 19 refs., 3 figs

  20. Simultaneous quantification of carotenoids, retinol, and tocopherols in forage, bovine plasma, and milk: validation of a novel UPLC method

    Energy Technology Data Exchange (ETDEWEB)

    Chauveau-Duriot, B.; Doreau, M.; Noziere, P.; Graulet, B. [UR1213 Research Unit on Herbivores, INRA, Saint Genes Champanelle (France)

    2010-05-15

    Simultaneous quantification of various liposoluble micronutrients is not a new area of interest, since these compounds contribute to the nutritional quality of feeds, which is widely studied in human and also in animal diets. However, the development of related methods is still a concern, especially when the carotenoid composition is complex, such as in forage given to ruminants, or in lipid-rich matrices like milk. In this paper, an original method for the simultaneous extraction and quantification of all carotenoids, vitamins E, and A in milk is proposed. Moreover, a new UPLC method allowing simultaneous determination of carotenoids and vitamins A and E in forage, plasma and milk, and separation of 23 carotenoid peaks in forage, is described. This UPLC method, using a HSS T3 column and a gradient solvent system, was compared to a previously published reverse-phase HPLC method using two C18 columns in series and an isocratic solvent system. The UPLC method gave concentrations of carotenoids and vitamins A and E similar to those from the HPLC method. Moreover, UPLC allowed a better resolution for xanthophylls, especially lutein and zeaxanthin, for the three isomers of {beta}-carotene (all-E-, 9Z- and 13Z-) and for vitamins A, an equal or better sensitivity according to the gradient, and a better reproducibility of peak areas and retention times, but did not reduce the time required for analysis. (orig.)

  1. Simultaneous quantification of carotenoids, retinol, and tocopherols in forages, bovine plasma, and milk: validation of a novel UPLC method.

    Science.gov (United States)

    Chauveau-Duriot, B; Doreau, M; Nozière, P; Graulet, B

    2010-05-01

    Simultaneous quantification of various liposoluble micronutrients is not a new area of interest, since these compounds contribute to the nutritional quality of feeds, which is widely studied in human and also in animal diets. However, the development of related methods is still a concern, especially when the carotenoid composition is complex, such as in forages given to ruminants, or in lipid-rich matrices like milk. In this paper, an original method for the simultaneous extraction and quantification of all carotenoids, vitamins E, and A in milk is proposed. Moreover, a new UPLC method allowing simultaneous determination of carotenoids and vitamins A and E in forage, plasma and milk, and separation of 23 carotenoid peaks in forages, is described. This UPLC method, using a HSS T3 column and a gradient solvent system, was compared to a previously published reverse-phase HPLC method using two C18 columns in series and an isocratic solvent system. The UPLC method gave concentrations of carotenoids and vitamins A and E similar to those from the HPLC method. Moreover, UPLC allowed a better resolution for xanthophylls, especially lutein and zeaxanthin, for the three isomers of beta-carotene (all-E-, 9Z- and 13Z-) and for vitamins A, an equal or better sensitivity according to the gradient, and a better reproducibility of peak areas and retention times, but did not reduce the time required for analysis.

  2. Relative quantification of mRNA: comparison of methods currently used for real-time PCR data analysis

    Directory of Open Access Journals (Sweden)

    Koppel Juraj

    2007-12-01

    Background: Fluorescent data obtained from real-time PCR must be processed by some method of data analysis to obtain the relative quantity of target mRNA. The method chosen for data analysis can strongly influence the results of the quantification. Results: To compare the performance of six techniques which are currently used for analysing fluorescent data in real-time PCR relative quantification, we quantified four cytokine transcripts (IL-1β, IL-6, TNF-α, and GM-CSF) in an in vivo model of colonic inflammation. The accuracy of the methods was tested by quantification on samples with known relative amounts of target mRNAs. The reproducibility of the methods was estimated by the determination of the intra-assay and inter-assay variability. Cytokine expression normalized to the expression of three reference genes (ACTB, HPRT, SDHA) was then determined using the six methods for data analysis. The best results were obtained with the relative standard curve method, the comparative Ct method, and with the DART-PCR, LinRegPCR and Liu & Saint exponential methods when average amplification efficiency was used. The use of individual amplification efficiencies in the DART-PCR, LinRegPCR and Liu & Saint exponential methods significantly impaired the results. The sigmoid curve-fitting (SCF) method produced medium performance; the results indicate that the use of an appropriate type of fluorescence data and, in some instances, manual selection of the number of amplification cycles included in the analysis is necessary when the SCF method is applied. We also compared amplification efficiencies (E) and found that although the E values determined by different methods of analysis were not identical, all the methods were capable of identifying two genes whose E values significantly differed from those of other genes. Conclusion: Our results show that all the tested methods can provide quantitative values reflecting the amounts of measured mRNA in samples, but they differ in their accuracy and reproducibility.
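
    Of the six techniques, the comparative Ct method is the easiest to state compactly (a standard textbook form assuming perfect doubling per cycle; efficiency-corrected variants replace the base 2 with 1 + E):

      \[ \Delta C_t = C_t^{\mathrm{target}} - C_t^{\mathrm{reference}}, \qquad \Delta\Delta C_t = \Delta C_t^{\mathrm{sample}} - \Delta C_t^{\mathrm{calibrator}}, \qquad \mathrm{ratio} = 2^{-\Delta\Delta C_t}. \]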

  3. Decomposition and correction overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    Science.gov (United States)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

    The laser-induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. Overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting processes are performed to obtain a lower residual result. In the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm, obtained from the LIBS spectra of five different concentrations of CuSO4·5H2O solution, were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. It can be seen that the error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and can be applied to the decomposition and correction of overlapping peaks in LIBS spectra.
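
    The abstract does not spell out the exact feedback update; the sketch below shows one plausible reading, re-fitting after adding the current residual back onto the data, for a two-Gaussian Cu-Fe style overlap:

      import numpy as np
      from scipy.optimize import curve_fit

      def two_peaks(x, a1, c1, w1, a2, c2, w2):
          """Two overlapping Gaussian lines, e.g. a Cu-Fe pair near 324 nm."""
          return (a1 * np.exp(-((x - c1) / w1) ** 2) +
                  a2 * np.exp(-((x - c2) / w2) ** 2))

      def decompose(x, y, p0, n_rounds=3):
          """Initial fit, then repeated re-fits with the current residual fed
          back onto the data; returns parameters and the final residual norm."""
          popt, _ = curve_fit(two_peaks, x, y, p0=p0)
          for _ in range(n_rounds):
              residual = y - two_peaks(x, *popt)
              popt, _ = curve_fit(two_peaks, x, y + residual, p0=popt)
          return popt, np.linalg.norm(y - two_peaks(x, *popt))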

  4. Correcting for cryptic relatedness by a regression-based genomic control method

    Directory of Open Access Journals (Sweden)

    Yang Yaning

    2009-12-01

    Background: The genomic control (GC) method is a useful tool to correct for cryptic relatedness in population-based association studies. It was originally proposed for correcting the variance inflation of Cochran-Armitage's additive trend test by using information from unlinked null markers, and was later generalized to be applicable to other tests with the additional requirement that the null markers be matched with the candidate marker in allele frequencies. However, matching allele frequencies limits the number of available null markers and thus limits the applicability of the GC method. On the other hand, errors in genotype/allele frequencies may cause further bias and variance inflation and thereby aggravate the effect of the GC correction. Results: In this paper, we propose a regression-based GC method using null markers that are not necessarily matched in allele frequencies with the candidate marker. Variation of the allele frequencies of the null markers is adjusted by a regression method. Conclusion: The proposed method can be readily applied to the Cochran-Armitage trend tests other than the additive trend test, the Pearson chi-square test and other robust efficiency tests. Simulation results show that the proposed method is effective in controlling type I error in the presence of population substructure.
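
    For context, the baseline GC correction that this regression-based variant builds on is (a standard formulation; T_l are the test statistics at L unlinked null markers, and 0.456 is the median of the χ²₁ distribution):

      \[ \hat{\lambda} = \frac{\mathrm{median}(T_1, \ldots, T_L)}{0.456}, \qquad T_{\mathrm{corrected}} = T_{\mathrm{candidate}} / \hat{\lambda}, \]

    with the corrected statistic then referred to the χ²₁ distribution.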

  5. Attenuation correction of myocardial SPECT by scatter-photopeak window method in normal subjects

    International Nuclear Information System (INIS)

    Okuda, Koichi; Nakajima, Kenichi; Matsuo, Shinro; Kinuya, Seigo; Motomura, Nobutoku; Kubota, Masahiro; Yamaki, Noriyasu; Maeda, Hisato

    2009-01-01

    The segmentation with scatter and photopeak window data using attenuation correction (SSPAC) method can provide a patient-specific non-uniform attenuation coefficient map using only photopeak and scatter images, without X-ray computed tomography (CT). The purpose of this study is to evaluate the performance of attenuation correction (AC) by the SSPAC method on a normal myocardial perfusion database. A total of 32 sets of exercise-rest myocardial images with Tc-99m-sestamibi were acquired in both photopeak (140 keV ± 10%) and scatter (7% of the lower side of the photopeak window) energy windows. Myocardial perfusion databases for the SSPAC method and non-AC (NC) were created from 15 female and 17 male subjects with a low likelihood of cardiac disease using quantitative perfusion SPECT software. Segmental myocardial counts of a 17-segment model from these databases were compared on the basis of the paired t test. The AC average myocardial perfusion count was significantly higher than that of NC in the septal and inferior regions (P<0.02). On the contrary, the AC average count was significantly lower in the anterolateral and apical regions (P<0.01). The coefficient of variation of the AC count in the mid, apical and apex regions was lower than that of NC. The SSPAC method can improve average myocardial perfusion uptake in the septal and inferior regions and provide uniform distribution of myocardial perfusion. The SSPAC method could be a practical method of attenuation correction without X-ray CT. (author)

  6. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    Science.gov (United States)

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur. Local blur caused by global light transport, such as camera defocus, projector defocus, and subsurface scattering, introduces significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate for the phase errors. For defocus, the method can be applied directly. With the aid of spatially varying point spread functions and a local frontal-plane assumption, experiments show that the proposed method can effectively alleviate the systematic errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  7. Attenuation correction with region growing method used in the positron emission mammography imaging system

    Science.gov (United States)

    Gu, Xiao-Yue; Li, Lin; Yin, Peng-Fei; Yun, Ming-Kai; Chai, Pei; Huang, Xian-Chao; Sun, Xiao-Li; Wei, Long

    2015-10-01

    The Positron Emission Mammography imaging system (PEMi) provides a novel nuclear diagnosis method dedicated to breast imaging. With better resolution than whole-body PET, PEMi can detect millimeter-sized breast tumors. To address the requirement of semi-quantitative analysis with a radiotracer concentration map of the breast, a new attenuation correction method based on three-dimensional seeded region growing image segmentation (3DSRG-AC) has been developed. The method gives a 3D connected region as the segmentation result instead of image slices. The continuity of the segmentation result makes the new method insensitive to activity variation within breast tissue. Threshold selection is the key step of the segmentation method: the first valley in the gray-level histogram of the reconstructed image is set as the lower threshold, which works well in clinical application. Results show that attenuation correction for PEMi improves image quality and the quantitative accuracy of the radioactivity distribution. Attenuation correction also improves the probability of detecting small and early breast tumors. Supported by Knowledge Innovation Project of The Chinese Academy of Sciences (KJCX2-EW-N06)
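
    A compact sketch of the two steps the abstract describes — a first-valley threshold and 3D seeded region growing — is given below; the 6-connectivity, bin count, naive valley search and the uniform attenuation value are illustrative assumptions:

        import numpy as np
        from collections import deque

        def first_valley(volume, bins=128):
            # lower threshold = first local minimum of the gray-level histogram
            hist, edges = np.histogram(volume, bins=bins)
            for i in range(1, bins - 1):
                if hist[i] < hist[i - 1] and hist[i] <= hist[i + 1]:
                    return edges[i]
            return edges[bins // 2]          # fallback if no valley is found

        def grow_3d(volume, seed, lo):
            # 6-connected region growing from `seed`, keeping voxels >= lo;
            # the seed is assumed to lie inside the breast
            grown = np.zeros(volume.shape, dtype=bool)
            grown[seed] = True
            queue = deque([seed])
            while queue:
                z, y, x = queue.popleft()
                for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                    n = (z + dz, y + dy, x + dx)
                    if (all(0 <= n[i] < volume.shape[i] for i in range(3))
                            and not grown[n] and volume[n] >= lo):
                        grown[n] = True
                        queue.append(n)
            return grown

        # mask = grow_3d(recon, seed=(32, 64, 64), lo=first_valley(recon))
        # mu_map = np.where(mask, 0.096, 0.0)   # cm^-1 at 511 keV, illustrative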

  8. Validation and implementation of liquid chromatographic-mass spectrometric (LC-MS) methods for the quantification of tenofovir prodrugs.

    Science.gov (United States)

    Hummert, Pamela; Parsons, Teresa L; Ensign, Laura M; Hoang, Thuy; Marzinke, Mark A

    2018-04-15

    The nucleotide reverse transcriptase inhibitor tenofovir (TFV) is widely administered in a disoproxil prodrug form (tenofovir disoproxil fumarate, TDF) for HIV management and prevention. Recently, the novel prodrugs tenofovir alafenamide fumarate (TAF) and hexadecyloxypropyl tenofovir (CMX157) have been pursued for HIV treatment while minimizing the adverse effects associated with systemic TFV exposure. Dynamic and sensitive bioanalytical tools are required to characterize the pharmacokinetics of these prodrugs in systemic circulation. Two parallel methods have been developed for plasma: one to quantify TAF and TFV in combination, and a second for CMX157 quantification. K₂EDTA plasma was spiked with TAF and TFV, or CMX157. Following the addition of isotopically labeled internal standards and sample extraction via solid-phase extraction (TAF and TFV) or protein precipitation (CMX157), samples were subjected to liquid chromatographic-tandem mass spectrometric (LC-MS/MS) analysis. For TAF and TFV, separation occurred on a Zorbax Eclipse Plus C18 Narrow Bore RR, 2.1 × 50 mm, 3.5 μm column and analytes were detected on an API5000 mass analyzer; CMX157 was separated on a Kinetex C8, 2.1 × 50 mm, 2.6 μm column and quantified on an API4500 mass spectrometer. Methods were validated according to FDA Bioanalytical Method Validation guidelines. Analytical methods were optimized for the multiplexed monitoring of TAF and TFV, and CMX157, in plasma. The lower limits of quantification (LLOQs) for TAF, TFV, and CMX157 were 0.03, 1.0, and 0.25 ng/mL, respectively. Calibration curves were generated via weighted linear regression of standards. Intra- and inter-assay precision and accuracy studies demonstrated %CVs ≤ 14.4% and %DEVs ≤ ± 7.95%, respectively. Stability and matrix-effects studies were also performed. All results were acceptable and in accordance with the recommended guidelines for bioanalytical methods.

  9. Scatter correction method for x-ray CT using primary modulation: Phantom studies

    International Nuclear Information System (INIS)

    Gao Hewei; Fahrig, Rebecca; Bennett, N. Robert; Sun Mingshan; Star-Lack, Josh; Zhu Lei

    2010-01-01

    Purpose: Scatter correction is a major challenge in x-ray imaging using large area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: a Catphan 600 phantom, an anthropomorphic chest phantom, and the Catphan 600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency: the first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan 600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast-to-noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study investigates the impact of object size on the efficiency of the method: the scatter-to-primary ratio estimation error on the Catphan 600 phantom without any annulus (20 cm in diameter) is at the level of 0.04, and it rises to 0.07 and 0.1 on the phantom with one and two annuli, respectively.

  10. A software-based x-ray scatter correction method for breast tomosynthesis

    International Nuclear Information System (INIS)

    Jia Feng, Steve Si; Sechopoulos, Ioannis

    2011-01-01

    Purpose: To develop a software-based scatter correction method for digital breast tomosynthesis (DBT) imaging and investigate its impact on the image quality of tomosynthesis reconstructions of both phantoms and patients. Methods: A Monte Carlo (MC) simulation of x-ray scatter, with geometry matching that of the cranio-caudal (CC) view of a DBT clinical prototype, was developed using the Geant4 toolkit and used to generate maps of the scatter-to-primary ratio (SPR) of a number of homogeneous standard-shaped breasts of varying sizes. Dimension-matched SPR maps were then deformed and registered to DBT acquisition projections, allowing for the estimation of the primary x-ray signal acquired by the imaging system. Noise filtering of the estimated projections was then performed to reduce the impact of the quantum noise of the x-ray scatter. Three-dimensional (3D) reconstruction was then performed using the maximum likelihood-expectation maximization (MLEM) method. This process was tested on acquisitions of a heterogeneous 50/50 adipose/glandular tomosynthesis phantom with embedded masses, fibers, and microcalcifications and on acquisitions of patients. The image quality of the reconstructions of the scatter-corrected and uncorrected projections was analyzed by studying the signal-difference-to-noise ratio (SDNR), the integral of the signal in each mass lesion (integrated mass signal, IMS), and the modulation transfer function (MTF). Results: The reconstructions of the scatter-corrected projections demonstrated superior image quality. The SDNR of masses embedded in a 5 cm thick tomosynthesis phantom improved by 60%-66%, while the SDNR of the smallest mass in an 8 cm thick phantom improved by 59% (p < 0.01). The IMS of the masses in the 5 cm thick phantom also improved by 15%-29%, while the IMS of the masses in the 8 cm thick phantom improved by 26%-62% (p < 0.01). Some embedded microcalcifications in the tomosynthesis phantoms were visible only in the scatter-corrected reconstructions.

  11. Towards Implementing an MR-based PET Attenuation Correction Method for Neurological Studies on the MR-PET Brain Prototype

    Science.gov (United States)

    Catana, Ciprian; van der Kouwe, Andre; Benner, Thomas; Michel, Christian J.; Hamm, Michael; Fenchel, Matthias; Fischl, Bruce; Rosen, Bruce; Schmand, Matthias; Sorensen, A. Gregory

    2013-01-01

    A number of factors have to be considered for implementing an accurate attenuation correction (AC) in a combined MR-PET scanner. In this work, some of these challenges were investigated, and an AC method based entirely on the MR data obtained with a single dedicated sequence was developed and used for neurological studies performed with the MR-PET human brain scanner prototype. Methods: The focus was on the bone/air segmentation problem, the selection of the bone linear attenuation coefficient, and the RF coil positioning. The impact of these factors on PET data quantification was studied in simulations and experimental measurements performed on the combined MR-PET scanner. A novel dual-echo ultra-short echo time (DUTE) MR sequence was proposed for head imaging. Simultaneous MR-PET data were acquired, and the PET images reconstructed using the proposed MR-DUTE-based AC method were compared with the PET images reconstructed using a CT-based AC. Results: Our data suggest that incorrectly accounting for bone tissue attenuation can lead to large underestimations (>20%) of the radiotracer concentration in the cortex. Assigning a linear attenuation coefficient of 0.143 or 0.151 cm−1 to bone tissue appears to give the best trade-off between bias and variability in the resulting images. Not identifying the internal air cavities introduces large overestimations (>20%) in adjacent structures. Based on these results, the segmented CT AC method was established as the "silver standard" for the segmented MR-based AC method. Particular to an integrated MR-PET scanner, ignoring the RF coil attenuation can cause large underestimations (i.e., up to 50%) in the reconstructed images. Furthermore, the coil location in the PET field of view has to be accurately known. Good quality bone/air segmentation can be performed using the DUTE data. The PET images obtained using the MR-DUTE- and CT-based AC methods compare favorably in most of the brain structures. Conclusion: An MR-DUTE-based AC method appears feasible for neurological studies performed on a combined MR-PET scanner.

  12. Methane fugitive emissions quantification using the novel 'plume camera' (spatial correlation) method

    Science.gov (United States)

    Crosson, E.; Rella, C.

    2012-12-01

    Fugitive emissions of methane into the atmosphere are a major concern facing the natural gas production industry. Given that the global warming potential of methane is many times greater than that of carbon dioxide, the importance of quantifying methane emissions is clear. The rapidly increasing reliance on shale gas (and other unconventional sources) is only intensifying the interest in fugitive methane releases. Natural gas (which is predominantly methane) is an attractive energy source, as it emits 40% less carbon dioxide per joule of energy generated than coal. However, if just a small percentage of the natural gas consumed is lost through fugitive emissions during production, processing, or transport, this global warming benefit is lost (Howarth et al. 2012). It is therefore imperative, as production of natural gas increases, that fugitive emissions of methane are quantified accurately. Traditional direct measurement techniques often involve physical access to the leak itself, and generally require painstaking effort first to find the leak and then to quantify the emission rate. With over half a million natural gas producing wells in the U.S. (U.S. Energy Information Administration), not including the associated processing, storage, and transport facilities, and with each facility having hundreds or even thousands of fittings that can potentially leak, the need is clear for methodologies that can provide a rapid and accurate assessment of the total emission rate on a per-wellhead basis. In this paper we present a novel method for emissions quantification which uses a 'plume camera' with three 'pixels' to quantify emissions from direct measurements of methane concentration in the downwind plume. By analyzing the spatial correlation between the pixels, the spatial extent of the instantaneous plume can be inferred. This information, when combined with the wind speed through the measurement plane, provides a direct estimate of the emission rate.

  13. Joint de-blurring and nonuniformity correction method for infrared microscopy imaging

    Science.gov (United States)

    Jara, Anselmo; Torres, Sergio; Machuca, Guillermo; Ramírez, Wagner; Gutiérrez, Pablo A.; Viafora, Laura A.; Godoy, Sebastián E.; Vera, Esteban

    2018-05-01

    In this work, we present a new technique to simultaneously reduce two major degradation artifacts found in mid-wavelength infrared microscopy imagery, namely the inherent focal-plane array nonuniformity noise and the scene defocus caused by the point spread function of the infrared microscope. We correct both nuisances using a novel, recursive method that combines the constant-range nonuniformity correction algorithm with a frame-by-frame deconvolution approach. The ability of the method to jointly compensate for both nonuniformity noise and blur is demonstrated on two different real mid-wavelength infrared microscopic video sequences, captured from two microscopic living organisms using a Janos-Sofradir mid-wavelength infrared microscopy setup. The performance of the proposed method is assessed on real and simulated infrared data by computing the root mean-square error and the roughness-Laplacian pattern index, which was developed specifically for the present work.

  14. Stability Indicating HPLC Method for Simultaneous Quantification of Trihexyphenidyl Hydrochloride, Trifluoperazine Hydrochloride and Chlorpromazine Hydrochloride from Tablet Formulation

    Directory of Open Access Journals (Sweden)

    P. Shetti

    2010-01-01

    Full Text Available A new, simple, precise, rapid, selective and stability-indicating reversed-phase high performance liquid chromatographic (HPLC) method has been developed and validated for simultaneous quantification of trihexyphenidyl hydrochloride, trifluoperazine hydrochloride and chlorpromazine hydrochloride from a combined tablet formulation. The method uses a reversed-phase C-18 (250×4.6 mm, 5 μm particle size) column. The separation is achieved by isocratic elution with methanol and ammonium acetate buffer (1% w/v, pH 6.5) in the ratio of 85:15 v/v, pumped at a flow rate of 1.0 mL/min with UV detection at 215 nm. The column is maintained at 30 °C throughout the analysis. The method gives baseline resolution, with a total run time of 15 min. Stability-indicating capability is established by forced degradation experiments. The method is validated for specificity, accuracy, precision and linearity as per International Conference on Harmonisation (ICH) guidelines. The method is accurate and linear for quantification of trihexyphenidyl hydrochloride, trifluoperazine hydrochloride and chlorpromazine hydrochloride between 5-15 μg/mL, 12.5-37.5 μg/mL and 62.5-187.5 μg/mL, respectively.

  15. MERCURY QUANTIFICATION IN SOILS USING THERMAL DESORPTION AND ATOMIC ABSORPTION SPECTROMETRY: PROPOSAL FOR AN ALTERNATIVE METHOD OF ANALYSIS

    Directory of Open Access Journals (Sweden)

    Liliane Catone Soares

    2015-08-01

    Full Text Available Despite the considerable environmental importance of mercury (Hg), given its high toxicity and ability to contaminate large areas via atmospheric deposition, little is known about its activity in soils, especially tropical soils, in comparison with other heavy metals. This lack of information arises because analytical methods for the determination of Hg are more laborious and expensive than those for other heavy metals. The situation is even more precarious regarding speciation of Hg in soils, since sequential extraction methods are also inefficient for this metal. The aim of this paper is to present thermal desorption associated with atomic absorption spectrometry, TDAAS, as an efficient tool for the quantitative determination of Hg in soils. The method consists of the release of Hg by heating, followed by its quantification by atomic absorption spectrometry. It was developed by constructing calibration curves in different soil samples based on increasing volumes of standard Hg²⁺ solutions. Performance, accuracy, precision, and quantification and detection limit parameters were evaluated. No matrix interference was detected. Certified reference samples and comparison with a Direct Mercury Analyzer, DMA (another highly recognized technique), were used in validation of the method, which proved to be accurate and precise.

  16. Conservative multi-implicit integral deferred correction methods with adaptive mesh refinement

    International Nuclear Information System (INIS)

    Layton, A.T.

    2004-01-01

    In most models of reacting gas dynamics, the characteristic time scales of chemical reactions are much shorter than the hydrodynamic and diffusive time scales, rendering the reaction part of the model equations stiff. Moreover, nonlinear forcings may introduce into the solutions sharp gradients or shocks, the robust behavior and correct propagation of which require the use of specialized spatial discretization procedures. This study presents high-order conservative methods for the temporal integration of model equations of reacting flows. By means of a method of lines discretization on the flux difference form of the equations, these methods compute approximations to the cell-averaged or finite-volume solution. The temporal discretization is based on a multi-implicit generalization of integral deferred correction methods. The advection term is integrated explicitly, and the diffusion and reaction terms are treated implicitly but independently, with the splitting errors present in traditional operator splitting methods reduced via the integral deferred correction procedure. To reduce computational cost, time steps used to integrate processes with widely-differing time scales may differ in size. (author)
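
    To make the deferred-correction idea concrete, here is a single-rate, fully explicit sketch for a scalar ODE; the multi-implicit advection/diffusion/reaction splitting of the paper is beyond this fragment, and the node count, sweep count and test problem are illustrative assumptions:

        import numpy as np
        from numpy.polynomial import Polynomial

        def idc_step(f, t0, y0, dt, M=4, K=3):
            """One integral-deferred-correction step: forward-Euler prediction
            on M substeps, then K correction sweeps (each sweep raises the
            formal order by one, up to the quadrature limit)."""
            t = t0 + dt * np.arange(M + 1) / M
            h = dt / M
            eta = np.empty(M + 1)
            eta[0] = y0
            for m in range(M):                       # prediction
                eta[m + 1] = eta[m] + h * f(t[m], eta[m])
            for _ in range(K):                       # correction sweeps
                F = np.array([f(tm, em) for tm, em in zip(t, eta)])
                P = Polynomial.fit(t, F, M).integ()  # integral of interpolant of f
                delta = np.zeros(M + 1)
                for m in range(M):
                    quad = P(t[m + 1]) - P(t[m])
                    delta[m + 1] = (delta[m]
                                    + h * (f(t[m], eta[m] + delta[m]) - F[m])
                                    + quad - (eta[m + 1] - eta[m]))
                eta = eta + delta
            return eta[-1]

        # usage: y' = -y, exact solution e^{-t}
        y = 1.0
        for n in range(10):
            y = idc_step(lambda t, u: -u, 0.1 * n, y, 0.1)
        print(y, np.exp(-1.0))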

  17. Scatter measurement and correction method for cone-beam CT based on single grating scan

    Science.gov (United States)

    Huang, Kuidong; Shi, Wenlong; Wang, Xinyu; Dong, Yin; Chang, Taoqi; Zhang, Hua; Zhang, Dinghua

    2017-06-01

    In cone-beam computed tomography (CBCT) systems based on flat-panel detector imaging, the presence of scatter significantly reduces the quality of slices. Based on the concept of collimation, this paper presents a scatter measurement and correction method based on a single grating scan. First, according to the characteristics of CBCT imaging, the single-grating scan method and the design requirements of the grating are analyzed and determined. Second, by analyzing the composition of object projection images and object-and-grating projection images, a processing method for the scatter image at a single projection angle is proposed. In addition, to avoid additional scans, this paper proposes an angle-interpolation method for scatter images to reduce scan cost. Finally, the experimental results show that the scatter images obtained by this method are accurate and reliable, and the effect of scatter correction is obvious. When the additional object-and-grating projection images are collected and interpolated at intervals of 30 deg, the scatter correction error of slices can still be kept within 3%.
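
    The angle-interpolation step reduces to per-pixel linear interpolation between the two nearest measured scatter maps; in this sketch the 30° sampling, array shapes and data are illustrative assumptions:

        import numpy as np

        def scatter_at(theta, meas_angles, meas_maps):
            # linear interpolation between the two nearest measured scatter maps
            j = int(np.searchsorted(meas_angles, theta))
            i, j = max(j - 1, 0), min(j, len(meas_angles) - 1)
            if i == j:
                return meas_maps[i]
            w = (theta - meas_angles[i]) / (meas_angles[j] - meas_angles[i])
            return (1.0 - w) * meas_maps[i] + w * meas_maps[j]

        angles = np.arange(0.0, 181.0, 30.0)        # grating measurements, deg
        maps = np.random.rand(len(angles), 64, 64)  # stand-in scatter images
        proj = np.random.rand(64, 64)               # stand-in raw projection
        primary = np.clip(proj - scatter_at(47.0, angles, maps), 0.0, None)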

  18. Estimating Population Turnover Rates by Relative Quantification Methods Reveals Microbial Dynamics in Marine Sediment.

    Science.gov (United States)

    Kevorkian, Richard; Bird, Jordan T; Shumaker, Alexander; Lloyd, Karen G

    2018-01-01

    The difficulty involved in quantifying biogeochemically significant microbes in marine sediments limits our ability to assess interspecific interactions, population turnover times, and niches of uncultured taxa. We incubated surface sediments from Cape Lookout Bight, North Carolina, USA, anoxically at 21°C for 122 days. Sulfate decreased until day 68, after which methane increased, with hydrogen concentrations consistent with the predicted values of an electron donor exerting thermodynamic control. We measured turnover times using two relative quantification methods, quantitative PCR (qPCR) and the product of 16S gene read abundance and total cell abundance (FRAxC, which stands for "fraction of read abundance times cells"), to estimate the population turnover rates of uncultured clades. Most 16S rRNA reads were from deeply branching uncultured groups, and ~98% of 16S rRNA genes did not abruptly shift in relative abundance when sulfate reduction gave way to methanogenesis. Uncultured Methanomicrobiales and Methanosarcinales increased at the onset of methanogenesis, with population turnover times estimated from qPCR at 9.7 ± 3.9 and 12.6 ± 4.1 days, respectively. These were consistent with FRAxC turnover times of 9.4 ± 5.8 and 9.2 ± 3.5 days, respectively. Uncultured Syntrophaceae, which are possibly fermentative syntrophs of methanogens, and uncultured Kazan-3A-21 archaea also increased at the onset of methanogenesis, with FRAxC turnover times of 14.7 ± 6.9 and 10.6 ± 3.6 days. Kazan-3A-21 may therefore either perform methanogenesis or ferment in syntrophy with methanogens. Three genera of sulfate-reducing bacteria, Desulfovibrio, Desulfobacter, and Desulfobacterium, increased in the first 19 days before declining rapidly during sulfate reduction. We conclude that population turnover times on the order of days can be measured robustly in organic-rich marine sediment, and that the transition from sulfate-reducing to methanogenic conditions stimulates growth of methanogens and their associated uncultured clades.
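
    As a worked example of the FRAxC estimator and the turnover-time arithmetic it feeds, consider the following sketch; all read counts, cell counts and dates are hypothetical, not the study's data:

        import numpy as np

        def fraxc(taxon_reads, total_reads, total_cells):
            # "fraction of read abundance times cells" for one taxon
            return taxon_reads / total_reads * total_cells

        # taxon abundance at two incubation time points (hypothetical numbers)
        n1 = fraxc(1200, 80000, 2.1e9)   # cells/cm^3 at day 68
        n2 = fraxc(5400, 82000, 2.3e9)   # cells/cm^3 at day 80

        # apparent turnover (doubling) time, assuming exponential growth
        # between the two samplings
        t_double = (80 - 68) * np.log(2) / np.log(n2 / n1)
        print(f"turnover time ~ {t_double:.1f} days")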

  19. The study on the X-ray correction method of long fracture displacement

    International Nuclear Information System (INIS)

    Jia Bin; Huang Ailing; Chen Fuzhong; Men Chunyan; Sui Chengzong; Cui Yiming; Yang Yundong

    2010-01-01

    Objective: To explore the correction of fracture displacement on conventional X-ray images (anteroposterior and lateral views) and to verify it by computed tomography (CT). Methods: The correction method for fracture displacement was designed according to the geometry of X-ray photography. One mid-humeral fracture specimen, prepared with a designed lateral shift and angular displacement, was radiographed in the anteroposterior and lateral positions and also volume-scanned by CT; the volume data were processed using multiplanar reconstruction (MPR) and shaded surface display (SSD). The displacement data obtained from the corrected X-ray images, from CT with MPR and SSD processing, and from the actual specimen design were compared. Results: The direction and degree of displacement derived from the corrected X-ray images differed little from the MPR and SSD data and from the actual specimen design: the difference in location was <1.5 mm and the difference in angle was <1.5°. Conclusion: Measurement of fracture displacement by conventional X-ray photography with coordinate correction is reliable, and it helps to markedly improve the diagnostic accuracy of the degree of fracture displacement. (authors)

  20. Modular correction method of bending elastic modulus based on sliding behavior of contact point

    International Nuclear Information System (INIS)

    Ma, Zhichao; Zhao, Hongwei; Zhang, Qixun; Liu, Changyi

    2015-01-01

    During three-point bending tests, sliding of the contact point between the specimen and the supports was observed; this sliding was verified to affect the measurement of both deflection and span length, which directly enter the calculation of the bending elastic modulus. Based on the Hertz formula for the elastic contact deformation and a theoretical calculation of the sliding of the contact point, a theoretical model was established that precisely describes the deflection and span length as functions of the bending load. Moreover, a modular correction method for the bending elastic modulus was proposed. Via comparison between the corrected elastic moduli of three materials (H63 copper-zinc alloy, AZ31B magnesium alloy and 2026 aluminum alloy) and the standard moduli obtained from standard uniaxial tensile tests, the general feasibility of the proposed correction method was verified. The ratio of corrected to raw elastic modulus also showed a monotonically decreasing tendency as the raw elastic modulus of the material increased. (technical note)

  1. Experimental aspects of buoyancy correction in measuring reliable high-pressure excess adsorption isotherms using the gravimetric method.

    Science.gov (United States)

    Nguyen, Huong Giang T; Horn, Jarod C; Thommes, Matthias; van Zee, Roger D; Espinal, Laura

    2017-12-01

    Addressing reproducibility issues in adsorption measurements is critical to accelerating the path to discovery of new industrial adsorbents and to understanding adsorption processes. A National Institute of Standards and Technology Reference Material, RM 8852 (ammonium ZSM-5 zeolite), and two gravimetric instruments with asymmetric two-beam balances were used to measure high-pressure adsorption isotherms. This work demonstrates how common approaches to buoyancy correction, a key factor in obtaining the mass change due to surface excess gas uptake from the apparent mass change, can impact the adsorption isotherm data. Three different approaches to buoyancy correction were investigated and applied to the subcritical CO₂ and supercritical N₂ adsorption isotherms at 293 K. It was observed that measuring a collective volume for all balance components for the buoyancy correction (helium method) introduces an inherent bias in temperature partition when there is a temperature gradient (i.e. analysis temperature is not equal to instrument air bath temperature). We demonstrate that a blank subtraction is effective in mitigating the biases associated with temperature partitioning, instrument calibration, and the determined volumes of the balance components. In general, the manual and subtraction methods allow for better treatment of the temperature gradient during buoyancy correction. From the study, best practices specific to asymmetric two-beam balances and more general recommendations for measuring isotherms far from critical temperatures using gravimetric instruments are offered.
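
    The core buoyancy arithmetic, with the blank subtraction the abstract recommends, can be outlined as follows; the sign convention and exactly which volumes cancel in the blank run depend on the instrument, so this is an assumption-laden sketch rather than a prescription:

        def excess_mass(dm_apparent, rho_gas, v_displaced, dm_blank=0.0):
            """Surface-excess uptake (g): the apparent balance reading is
            corrected for the buoyancy force on the displaced volume of
            sample plus holder; subtracting a blank (empty-holder) run at
            the same conditions mitigates residual balance asymmetry and
            temperature-partition bias."""
            return (dm_apparent - dm_blank) + rho_gas * v_displaced

        # e.g. 10 mg apparent uptake, CO2 at 0.02 g/cm^3, 1.2 cm^3 displaced
        print(excess_mass(0.010, 0.02, 1.2, dm_blank=0.001))   # -> 0.033 g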

  2. A novel method to correct for pitch and yaw patient setup errors in helical tomotherapy

    International Nuclear Information System (INIS)

    Boswell, Sarah A.; Jeraj, Robert; Ruchala, Kenneth J.; Olivera, Gustavo H.; Jaradat, Hazim A.; James, Joshua A.; Gutierrez, Alonso; Pearson, Dave; Frank, Gary; Mackie, T. Rock

    2005-01-01

    An accurate means of determining and correcting for daily patient setup errors is important to the cancer outcome in radiotherapy. While many tools have been developed to detect setup errors, difficulty may arise in accurately adjusting the patient to account for the rotational error components. A novel, automated method to correct for rotational patient setup errors in helical tomotherapy is proposed for a treatment couch that is restricted to motion along translational axes. In tomotherapy, only a narrow superior/inferior section of the target receives a dose at any instant, thus rotations in the sagittal and coronal planes may be approximately corrected for by very slow continuous couch motion in a direction perpendicular to the scanning direction. Results from proof-of-principle tests indicate that the method improves the accuracy of treatment delivery, especially for long and narrow targets. Rotational corrections about an axis perpendicular to the transverse plane continue to be implemented easily in tomotherapy by adjustment of the initial gantry angle

  3. A Correction Method for UAV Helicopter Airborne Temperature and Humidity Sensor

    Directory of Open Access Journals (Sweden)

    Longqing Fan

    2017-01-01

    Full Text Available This paper presents a correction method for UAV helicopter airborne temperature and humidity measurements, comprising an error correction scheme and a bias-calibration scheme. Since rotor downwash inevitably introduces measurement error into helicopter airborne sensors, the error correction scheme constructs a model relating the rotor-induced velocity to temperature and humidity by building the heat-balance equation for the platinum resistance temperature sensor and the pressure-correction term for the humidity sensor. The induced velocity at a spatial point below the rotor disc plane can be calculated as the sum of the induced velocities excited by the center-line vortex, the rotor disk vortex and the skew cylinder vortex, based on generalized vortex theory. To minimize systematic biases, the bias-calibration scheme adopts multiple linear regression to achieve results systematically consistent with the tethered-balloon profiles. Two temperature and humidity sensors were mounted on a "Z-5" UAV helicopter in the field experiment. Overall, the result of applying the calibration method shows that the temperature and relative humidity obtained by the UAV helicopter closely align with the tethered-balloon profiles of temperature and humidity within the marine atmospheric boundary layer.

  4. Perturbation theory corrections to the two-particle reduced density matrix variational method.

    Science.gov (United States)

    Juhasz, Tamas; Mazziotti, David A

    2004-07-15

    In the variational 2-particle-reduced-density-matrix (2-RDM) method, the ground-state energy is minimized with respect to the 2-particle reduced density matrix, constrained by N-representability conditions. Consider the N-electron Hamiltonian H(lambda) as a function of the parameter lambda where we recover the Fock Hamiltonian at lambda=0 and we recover the fully correlated Hamiltonian at lambda=1. We explore using the accuracy of perturbation theory at small lambda to correct the 2-RDM variational energies at lambda=1 where the Hamiltonian represents correlated atoms and molecules. A key assumption in the correction is that the 2-RDM method will capture a fairly constant percentage of the correlation energy for lambda in (0,1] because the nonperturbative 2-RDM approach depends more significantly upon the nature rather than the strength of the two-body Hamiltonian interaction. For a variety of molecules we observe that this correction improves the 2-RDM energies in the equilibrium bonding region, while the 2-RDM energies at stretched or nearly dissociated geometries, already highly accurate, are not significantly changed. At equilibrium geometries the corrected 2-RDM energies are similar in accuracy to those from coupled-cluster singles and doubles (CCSD), but at nonequilibrium geometries the 2-RDM energies are often dramatically more accurate as shown in the bond stretching and dissociation data for water and nitrogen. (c) 2004 American Institute of Physics.

  5. A Novel Reverse-Transcriptase Real-Time PCR Method for Quantification of Viable Vibrio Parahemolyticus in Raw Shrimp Based on a Rapid Construction of Standard Curve Method

    OpenAIRE

    Mengtong Jin; Haiquan Liu; Wenshuo Sun; Qin Li; Zhaohuan Zhang; Jibing Li; Yingjie Pan; Yong Zhao

    2015-01-01

    Vibrio parahaemolyticus is an important pathogen that causes seafood-associated foodborne illness. Rapid and reliable methods to detect and quantify total viable V. parahaemolyticus in seafood are therefore needed. In this study, an RNA-based real-time reverse-transcriptase PCR (RT-qPCR) assay without an enrichment step has been developed for detection and quantification of total viable V. parahaemolyticus in shrimp. RNA standards containing the target segments were synthesized in vitro with T7 RNA polymerase.
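
    Quantification against such in vitro RNA standards reduces to linear regression of Ct on log copy number; the Ct values below are hypothetical:

        import numpy as np

        log_copies = np.array([3.0, 4.0, 5.0, 6.0, 7.0])   # log10 RNA copies
        ct = np.array([30.1, 26.8, 23.4, 20.0, 16.7])      # measured Ct values

        slope, intercept = np.polyfit(log_copies, ct, 1)
        efficiency = 10 ** (-1.0 / slope) - 1.0            # ~1.0 means 100%

        def copies_from_ct(c):
            # invert the standard curve for an unknown sample
            return 10 ** ((c - intercept) / slope)

        print(f"efficiency ~ {efficiency:.1%}, unknown ~ {copies_from_ct(24.9):.2e} copies")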

  6. Improving PET Quantification of Small Animal [68Ga]DOTA-Labeled PET/CT Studies by Using a CT-Based Positron Range Correction.

    Science.gov (United States)

    Cal-Gonzalez, Jacobo; Vaquero, Juan José; Herraiz, Joaquín L; Pérez-Liva, Mailyn; Soto-Montenegro, María Luisa; Peña-Zalbidea, Santiago; Desco, Manuel; Udías, José Manuel

    2018-01-19

    Image quality of positron emission tomography (PET) tracers that emit high-energy positrons, such as Ga-68, Rb-82, or I-124, is significantly affected by positron range (PR) effects. PR effects are especially important in small animal PET studies, since they can limit the spatial resolution and quantitative accuracy of the images. Since generator accessibility has made Ga-68 tracers widely available, the aim of this study is to show how the quantitative results of [68Ga]DOTA-labeled PET/X-ray computed tomography (CT) imaging of neuroendocrine tumors in mice can be improved using positron range correction (PRC). Eighteen scans in 12 mice were evaluated, with three different tumor models: PC12, AR42J, and meningiomas. In addition, three different [68Ga]DOTA-labeled radiotracers were used to evaluate the PRC with different tracer distributions: [68Ga]DOTANOC, [68Ga]DOTATOC, and [68Ga]DOTATATE. Two PRC methods were evaluated: a tissue-dependent correction (TD-PRC) and a tissue-dependent spatially-variant correction (TDSV-PRC). Taking a region in the liver as reference, the tissue-to-liver ratio values for tumor tissue (TLRtumor), lung (TLRlung), and necrotic areas within the tumors (TLRnecrotic), and their respective relative variations (ΔTLR), were evaluated. All TLR values in the PRC images were significantly different from those in the uncorrected images. The PRC methods improved the quantification of [68Ga]DOTA-labeled PET/CT imaging of mice with neuroendocrine tumors, hence demonstrating that these techniques could also ameliorate the deleterious effect of positron range in clinical PET imaging.

  7. Phytochemical analysis of Vernonanthura tweedieana and a validated UPLC-PDA method for the quantification of eriodictyol

    Directory of Open Access Journals (Sweden)

    Layzon Antonio Lemos da Silva

    Full Text Available Abstract Vernonanthura tweedieana (Baker) H. Rob., Asteraceae, is used in Brazilian folk medicine for the treatment of respiratory diseases. In this work, the phytochemical investigation of its ethanol extracts as well as the development and validation of an UPLC-PDA method for the quantification of eriodictyol from the leaves were performed. The phytochemical study of this species led to the identification of ethyl caffeate, naringenin and chrysoeriol in mixture, and eriodictyol, from the leaves, and of the mixture of 3-hydroxy-1-(4-hydroxy-3,5-dimethoxyphenyl)-propan-1-one and evofolin B, apigenin, the mixture of caffeic and protocatechuic acids, and luteolin from the stems with roots, all reported for the first time for V. tweedieana except eriodictyol. The structural elucidation of all isolated compounds was achieved by 1H and 2D NMR spectroscopy and by comparison with published data. An UPLC-PDA method for quantification of eriodictyol in the leaves of V. tweedieana was developed and validated for specificity, linearity, precision (repeatability and intermediate precision), limit of detection (LOD) and limit of quantification (LOQ), accuracy and robustness. Excellent linearity was obtained (r² = 0.9999), with good precision (repeatability RSD = 2%, intermediate precision RSD = 8%) and accuracy (average recovery from 98.6% to 99.7%). The content of eriodictyol in the leaf extract of V. tweedieana was 41.40 ± 0.13 mg/g. Thus, this study allowed the optimization of a simple, fast and validated UPLC-PDA method which can be used to support the quality assessment of this herbal material.

  8. HPLC MS/MS method for quantification of meprobamate in human plasma: application to 24/7 clinical toxicology.

    Science.gov (United States)

    Delavenne, Xavier; Gay-Montchamp, Jean Pierre; Basset, Thierry

    2011-01-15

    We describe the development and full validation of a rapid and accurate liquid chromatography method, coupled with tandem mass spectrometry detection, for quantification of meprobamate in human plasma with [(13)C-(2)H(3)]-meprobamate as internal standard. Plasma pretreatment involved one-step protein precipitation with acetonitrile. Separation was performed by reversed-phase chromatography on a Luna MercuryMS C18 (20 mm × 4 mm, 3 μm) column using gradient elution. The mobile phase was a mix of distilled water containing 0.1% formic acid and acetonitrile containing 0.1% formic acid. The selected reaction monitoring transitions, in electrospray positive ionization, used for quantification were 219.2→158.2 m/z and 223.1→161.1 m/z for meprobamate and the internal standard, respectively. Qualification transitions were 219.2→97.0 and 223.1→101.1 m/z for meprobamate and the internal standard, respectively. The method was linear over the concentration range of 1-300 mg/L. The intra- and inter-day precision values were below 6.4%, and accuracy was between 95.3% and 103.6% for all QC levels (5, 75 and 200 mg/L). The lower limit of quantification was 1 mg/L. Total analysis time was reduced to 6 min, including sample preparation. The method is successfully applied to 24/7 clinical toxicology and has demonstrated its usefulness in detecting meprobamate poisoning. Copyright © 2010 Elsevier B.V. All rights reserved.

  9. Development of indirect spectrophotometric method for quantification of cephalexin in pure form and commercial formulation using complexation reaction

    International Nuclear Information System (INIS)

    Khan, M.N.; Hussain, R.; Kalsoom, S.; Saadiq, M.

    2016-01-01

    A simple, accurate and indirect spectrophotometric method was developed for the quantification of cephalexin in pure form and pharmaceutical products using a complexation reaction. The developed method is based on the oxidation of cephalexin by Fe³⁺ in acidic medium; 1,10-phenanthroline then reacts with the resulting Fe²⁺ to form a red-colored complex. The absorbance of the complex was measured at 510 nm by spectrophotometer. Different experimental parameters affecting the complexation reaction were studied and optimized. Beer's law was obeyed in the concentration range 0.4-10 μg mL⁻¹ with a good correlation coefficient of 0.992. The limit of detection and limit of quantification were found to be 0.065 μg mL⁻¹ and 0.218 μg mL⁻¹, respectively. The method has good reproducibility, with a relative standard deviation of 6.26 percent (n = 6). The method was successfully applied to the determination of cephalexin in bulk powder and commercial formulations. Percent recoveries were found to range from 95.47 to 103.87 percent for the pure form and 98.62 to 103.35 percent for commercial formulations. (author)

  10. Determination and Quantification of the Vinblastine Content in Purple, Red, and White Catharanthus Roseus Leaves Using RP-HPLC Method

    Directory of Open Access Journals (Sweden)

    Rohanizah Abdul Rahim

    2018-03-01

    Full Text Available Purpose: To determine and quantify vinblastine in different varieties of Catharanthus roseus using a reversed-phase HPLC method. Methods: The liquid chromatographic separation was performed using a reversed-phase C18 Microsorb-MV column (250 mm × 4.6 mm, 5 µm) at room temperature, eluted with a mobile phase containing methanol - phosphate buffer (5 mM, pH 6.0) - acetonitrile in varying proportions (gradient elution) at a flow rate of 2.0 mL min⁻¹ with detection at 254 nm. Results: The HPLC method was utilized for the quantification of vinblastine in purple, red and white varieties of Catharanthus roseus leaves. The separation was achieved in less than 8 min. Peak confirmation was based on the retention times and UV spectra of the reference substance. The method was validated with respect to linearity, precision, recovery, limit of detection and limit of quantification. Results showed that the purple variety yields 1.2 and 1.5 times more vinblastine than the white and red varieties, respectively. Conclusion: The results obtained from the different varieties are thus useful for the production of vinblastine from the Catharanthus roseus plant.

  11. Computer method to detect and correct cycle skipping on sonic logs

    International Nuclear Information System (INIS)

    Muller, D.C.

    1985-01-01

    A simple but effective computer method has been developed to detect cycle skipping on sonic logs and to replace cycle skips with estimates of correct traveltimes. The method can be used to correct observed traveltime pairs from the transmitter to both receivers. The basis of the method is the linearity of a plot of theoretical traveltime from the transmitter to the first receiver versus theoretical traveltime from the transmitter to the second receiver. Theoretical traveltime pairs are calculated assuming that the sonic logging tool is centered in the borehole, that the borehole diameter is constant, that the borehole fluid velocity is constant, and that the formation is homogeneous. The plot is linear for the full range of possible formation-rock velocity. Plots of observed traveltime pairs from a sonic logging tool are also linear but have a large degree of scatter due to borehole rugosity, sharp boundaries exhibiting large velocity contrasts, and system measurement uncertainties. However, this scatter can be reduced to a level that is less than scatter due to cycle skipping, so that cycle skips may be detected and discarded or replaced with estimated values of traveltime. Advantages of the method are that it can be applied in real time, that it can be used with data collected by existing tools, that it only affects data that exhibit cycle skipping and leaves other data unchanged, and that a correction trace can be generated which shows where cycle skipping occurs and the amount of correction applied. The method has been successfully tested on sonic log data taken in two holes drilled at the Nevada Test Site, Nye County, Nevada
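
    In outline, the detection step amounts to fitting the expected line through the observed traveltime pairs and flagging large outliers; the MAD-based threshold below is an assumption standing in for the paper's scatter criterion, and a robust fit would be preferable in production:

        import numpy as np

        def fix_cycle_skips(t1, t2, k=3.5):
            """Flag pairs far from the fitted line t2 = a + b*t1 (cycle skips)
            and replace the skipped t2 values with the line's estimate."""
            b, a = np.polyfit(t1, t2, 1)
            resid = t2 - (a + b * t1)
            mad = np.median(np.abs(resid - np.median(resid)))
            skips = np.abs(resid) > k * 1.4826 * mad + 1e-12   # ~k-sigma via MAD
            return np.where(skips, a + b * t1, t2), skips

        # synthetic example: one pair skipped by a full cycle (+40 microseconds)
        t1 = np.linspace(100.0, 200.0, 50)
        t2 = 1.02 * t1 + 30.0 + np.random.normal(0.0, 0.5, 50)
        t2[20] += 40.0
        t2_fixed, skips = fix_cycle_skips(t1, t2)
        print("skips detected at indices:", np.flatnonzero(skips))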

  12. A new method of CCD dark current correction via extracting the dark Information from scientific images

    Science.gov (United States)

    Ma, Bin; Shang, Zhaohui; Hu, Yi; Liu, Qiang; Wang, Lifan; Wei, Peng

    2014-07-01

    We have developed a new method to correct dark current at relatively high temperatures for Charge-Coupled Device (CCD) images when dark frames cannot be obtained on the telescope. For images taken with the Antarctic Survey Telescopes (AST3) in 2012, due to the low cooling efficiency, the median CCD temperature was -46°C, resulting in a high dark current level of about 3 e-/pix/s, comparable even to the sky brightness (10 e-/pix/s). If not corrected, the nonuniformity of the dark current could even outweigh the photon noise of the sky background. However, dark frames could not be obtained during the observing season because the camera was operated in frame-transfer mode without a shutter, and the telescope was unattended in winter. Here we present an alternative, simple and effective method to derive the dark current frame from the scientific images. We can then scale this dark frame to the temperature at which the scientific images were taken and apply the dark frame correction to them. We have applied this method to the AST3 data and demonstrated that it can reduce the noise to a level roughly as low as the photon noise of the sky brightness, solving the high-noise problem and improving the photometric precision. This method will also be helpful for other projects that suffer from similar issues.
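
    The temperature-scaling step can be sketched as below; the doubling-temperature constant is a device-dependent assumption (dark current roughly doubles every 5-7 °C for typical CCDs), while deriving the per-pixel dark frame itself from the science images is the paper's contribution:

        def scale_dark(dark_ref, t_ref_c, t_obs_c, t_double_c=6.0):
            """Scale a per-pixel dark-current frame (e-/pix/s) from its
            reference temperature to the science-frame temperature, assuming
            the dark current doubles every t_double_c degrees Celsius."""
            return dark_ref * 2.0 ** ((t_obs_c - t_ref_c) / t_double_c)

        # science_corrected = raw - exptime * scale_dark(dark_frame, -50.0, -46.0)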

  13. Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method

    International Nuclear Information System (INIS)

    Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin

    2015-01-01

    The powerful nondestructive capability of computed tomography (CT) is attracting growing research interest in its use for dimensional metrology, where it offers a practical alternative to common measurement methods. However, inaccuracy and uncertainty severely limit the further utilization of CT for dimensional metrology; among the many contributing factors, the beam hardening (BH) effect plays a vital role. This paper focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a penalty term is added to the cost function, enabling more accurate measurement results to be obtained by the simple global threshold method. The proposed method is efficient, and especially suited to cases where there is a large difference in gray value between material and background. Spheres with known diameters are used to verify the accuracy of the dimensional measurement. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is generally applicable. (paper)
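
    The parameter-fitting idea — choose correction parameters that minimize the gray entropy of the reconstruction — can be sketched as follows; the power-law correction here is only a stand-in for the paper's exponential model, the penalty term is omitted, and fbp denotes whatever reconstruction operator is available:

        import numpy as np
        from scipy.optimize import minimize_scalar

        def gray_entropy(volume, bins=256):
            # Shannon entropy of the reconstructed gray-level histogram
            hist, _ = np.histogram(volume, bins=bins)
            p = hist[hist > 0] / hist.sum()
            return float(-(p * np.log(p)).sum())

        def entropy_cost(b, proj, reconstruct):
            # correct the log-projections, reconstruct, score by gray entropy
            return gray_entropy(reconstruct(proj ** b))

        # res = minimize_scalar(entropy_cost, bounds=(0.5, 2.5),
        #                       method="bounded", args=(proj, fbp))
        # b_opt = res.x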

  14. Imaging and quantification of anomaly volume using an eight-electrode 'hemiarray' EIT reconstruction method.

    Science.gov (United States)

    Sadleir, R J; Zhang, S U; Tucker, A S; Oh, Sungho

    2008-08-01

    Electrical impedance tomography (EIT) is particularly well-suited to applications where its portability, rapid acquisition speed and sensitivity give it a practical advantage over other monitoring or imaging systems. An EIT system's patient interface can potentially be adapted to match the target environment, and thereby increase its utility. It may thus be appropriate to use different electrode positions from those conventionally used in EIT in these cases. One application that may require this is the use of EIT on emergency medicine patients; in particular those who have suffered blunt abdominal trauma. In patients who have suffered major trauma, it is desirable to minimize the risk of spinal cord injury by avoiding lifting them. To adapt EIT to this requirement, we devised and evaluated a new electrode topology (the 'hemiarray') which comprises a set of eight electrodes placed only on the subject's anterior surface. Images were obtained using a two-dimensional sensitivity matrix and weighted singular value decomposition reconstruction. The hemiarray method's ability to quantify bleeding was evaluated by comparing its performance with conventional 2D reconstruction methods using data gathered from a saline phantom. We found that without applying corrections to reconstructed images it was possible to estimate blood volume in a two-dimensional hemiarray case with an uncertainty of around 27 ml. In an approximately 3D hemiarray case, volume prediction was possible with a maximum uncertainty of around 38 ml in the centre of the electrode plane. After application of a QI normalizing filter, average uncertainties in a two-dimensional hemiarray case were reduced to about 15 ml. Uncertainties in the approximate 3D case were reduced to about 30 ml.

  15. Characterization of heterogeneous reservoirs: sentinels method and quantification of uncertainties; Caracterisation des reservoirs heterogenes: methode des sentinelles et quantification des incertitudes

    Energy Technology Data Exchange (ETDEWEB)

    Mezghani, M.

    1999-02-11

    The aim of this thesis is to propose a new inversion method allowing both improved reservoir characterization and management of uncertainties. In this approach, the identification of the permeability distribution is conducted using the sentinel method in order to match the pressure data. This approach, based on optimal control theory, can be seen as an alternative to the least-squares method. Here, we prove the existence of exact sentinels under regularity hypotheses. From a numerical point of view, we consider regularized sentinels. We suggest a novel approach to updating the penalization coefficient in order to improve numerical robustness. Moreover, the flexibility of the sentinel method makes it possible to handle noisy pressure data. To deal with geostatistical modelling of the permeability distribution, we propose to link the pilot point method with sentinels for the identification of permeability, focusing in particular on the optimal location of pilot points. Finally, we present an original method, based on adjoint-state computations, to quantify the contribution of dynamic data to the characterization of a calibrated geostatistical model. (author) 67 refs.

  16. High-performance ion chromatography method for separation and quantification of inositol phosphates in diets and digesta

    DEFF Research Database (Denmark)

    Blaabjerg, Karoline; Hansen-Møller, Jens; Poulsen, Hanne Damgaard

    2010-01-01

    A gradient high-performance ion chromatographic method for separation and quantification of inositol phosphates (InsP2-InsP6) in feedstuffs, diets, and gastric and ileal digesta from pigs was developed and validated. InsP2-InsP6 were separated on a Dionex CarboPac™ PA1 column using a gradient of 1.5 mol L⁻¹ methanesulfonic acid and water. The exchange of the commonly used HCl with methanesulfonic acid has two advantages: (i) the baseline obtained during the separation is almost horizontal, and (ii) it is not necessary to use inert HPIC equipment, as the methanesulfonic acid is less corrosive than HCl.

  17. A method for correcting the depth-of-interaction blurring in PET cameras

    International Nuclear Information System (INIS)

    Rogers, J.G.

    1993-11-01

    A method is presented for the purpose of correcting PET images for the blurring caused by variations in the depth-of-interaction in position-sensitive gamma ray detectors. In the case of a fine-cut 50x50x30 mm BGO block detector, the method is shown to improve the detector resolution by about 25%, measured in the geometry corresponding to detection at the edge of the field-of-view. Strengths and weaknesses of the method are discussed and its potential usefulness for improving the images of future PET cameras is assessed. (author). 8 refs., 3 figs

  18. Hydrological modeling as an evaluation tool of EURO-CORDEX climate projections and bias correction methods

    Science.gov (United States)

    Hakala, Kirsti; Addor, Nans; Seibert, Jan

    2017-04-01

    Streamflow stemming from Switzerland's mountainous landscape will be influenced by climate change, which will pose significant challenges to the water management and policy sector. In climate change impact research, the determination of future streamflow is impeded by different sources of uncertainty, which propagate through the model chain. In this research, we explicitly considered the following sources of uncertainty: (1) climate models, (2) downscaling of the climate projections to the catchment scale, (3) bias correction method and (4) parameterization of the hydrological model. We utilize climate projections at 0.11° (approximately 12.5 km) resolution from the EURO-CORDEX project, the most recent climate projections for the European domain. EURO-CORDEX comprises regional climate model (RCM) simulations, downscaled from global climate models (GCMs) of the CMIP5 archive using both dynamical and statistical techniques. Uncertainties are explored by applying a modeling chain involving 14 GCM-RCMs to ten Swiss catchments. We utilize the rainfall-runoff model HBV Light, which has been widely used in operational hydrological forecasting. The Lindström measure, a combination of model efficiency and volume error, was used as the objective function to calibrate HBV Light. The ten best parameter sets are then obtained by calibrating with the genetic algorithm and Powell optimization (GAP) method. The GAP method is based on the evolution of parameter sets, selecting and recombining high-performing parameter sets with each other. Once HBV Light is calibrated, we perform a quantitative comparison of the influence of biases inherited from the climate model simulations and of biases stemming from the hydrological model. The evaluation is conducted over two time periods: (i) 1980-2009, to characterize the simulation realism under the current climate, and (ii) 2070-2099, to identify the magnitude of the projected change in streamflow.
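
    As one concrete example of the bias correction step in such a model chain, empirical quantile mapping is widely used; the abstract does not name the specific methods compared, so this choice and the fixed quantile grid are assumptions:

        import numpy as np

        def quantile_map(sim_hist, obs_hist, sim_fut, n_q=101):
            """Empirical quantile mapping: learn the simulated-to-observed
            quantile relationship over the historical period and apply it
            to the future simulations."""
            q = np.linspace(0.0, 1.0, n_q)
            sim_q = np.quantile(sim_hist, q)
            obs_q = np.quantile(obs_hist, q)
            ranks = np.interp(sim_fut, sim_q, q)   # empirical CDF of future sims
            return np.interp(ranks, q, obs_q)      # map ranks onto observed values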

  19. A Short Review of FDTD-Based Methods for Uncertainty Quantification in Computational Electromagnetics

    Directory of Open Access Journals (Sweden)

    Theodoros T. Zygiridis

    2017-01-01

    Full Text Available We provide a review of selected computational methodologies that are based on the deterministic finite-difference time-domain algorithm and are suitable for the investigation of electromagnetic problems involving uncertainties. As it will become apparent, several alternatives capable of performing uncertainty quantification in a variety of cases exist, each one exhibiting different qualities and ranges of applicability, which we intend to point out here. Given the numerous available approaches, the purpose of this paper is to clarify the main strengths and weaknesses of the described methodologies and help the potential readers to safely select the most suitable approach for their problem under consideration.

  20. THE EFFECT OF DIFFERENT CORRECTIVE FEEDBACK METHODS ON THE OUTCOME AND SELF CONFIDENCE OF YOUNG ATHLETES

    Directory of Open Access Journals (Sweden)

    George Tzetzis

    2008-09-01

    Full Text Available This experiment investigated the effects of three corrective feedback methods, using different combinations of correction cues, error cues and positive feedback, for learning two badminton skills of different difficulty (forehand clear - low difficulty; backhand clear - high difficulty). Outcome and self-confidence scores were used as dependent variables. The 48 participants were randomly assigned to four groups. Group A received correction cues and positive feedback. Group B received cues on errors of execution. Group C received positive feedback, correction cues and error cues. Group D was the control group. Pre-, post- and retention tests were conducted. A three-way analysis of variance (ANOVA; 4 groups × 2 task difficulties × 3 measures) with repeated measures on the last factor revealed significant interactions for each dependent variable. All corrective feedback groups increased their outcome scores over time for the easy skill, but only groups A and C did so for the difficult skill. Groups A and B had significantly better outcome scores than group C and the control group for the easy skill on the retention test. For the difficult skill, however, group C was better than groups A, B and D. The self-confidence scores of groups A and C improved over time for the easy skill, but those of groups B and D did not. Again, for the difficult skill, only group C improved over time. Finally, a regression analysis showed that the improvement in performance predicted a proportion of the improvement in self-confidence for both the easy and the difficult skill. It was concluded that when young athletes are taught skills of different difficulty, different types of instruction may be more appropriate for improving outcome and self-confidence. A more integrated approach to teaching will help coaches and physical education teachers be more efficient and effective.

  1. An improved correlated sampling method for calculating correction factor of detector

    International Nuclear Information System (INIS)

    Wu Zhen; Li Junli; Cheng Jianping

    2006-01-01

    In the case of a small detector embedded in a bulk medium, two problems arise when calculating detector correction factors: the detector is too small for enough particles to reach it and collide within it, and the ratio of the two tallied quantities is not accurate enough. The method discussed in this paper, which combines correlated sampling with modified particle-collision auto-importance sampling and has been implemented on the MCNP-4C platform, solves both problems. In addition, three other variance reduction techniques are each combined with correlated sampling to compute a simple model of detector correction factors. The results show that although every variance reduction technique combined with correlated sampling improves the calculation efficiency, the combination of modified particle-collision auto-importance sampling with correlated sampling is the most efficient. (authors)
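    The variance cancellation that makes correlated sampling attractive for ratio quantities can be shown with a toy model: when the same histories score both tallies, the statistical noise in numerator and denominator largely cancels in the quotient. The responses f_ref and f_pert below are hypothetical 1-D stand-ins for two similar tallies, not an actual MCNP transport calculation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical responses: a tally with and without a small perturbation.
f_ref  = lambda x: np.exp(-x)
f_pert = lambda x: np.exp(-1.05 * x)

def ratio_std(correlated, m=200, n=2000):
    """Spread of the estimated correction factor over m independent runs."""
    est = []
    for _ in range(m):
        x = rng.random(n)
        y = x if correlated else rng.random(n)  # same histories if correlated
        est.append(f_pert(x).mean() / f_ref(y).mean())
    return np.std(est)

print("std, independent sampling:", ratio_std(False))
print("std, correlated sampling: ", ratio_std(True))   # markedly smaller
```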

  2. A method of measuring and correcting tilt of anti-vibration wind turbines based on a screening algorithm

    Science.gov (United States)

    Xiao, Zhongxiu

    2018-04-01

    This paper proposes a method of measuring and correcting the tilt of anti-vibration wind turbines based on a screening algorithm. First, we design a device built around the ADXL203 acceleration sensor; the inclination is measured by installing it on the tower and in the nacelle of the wind turbine. Next, a state-space model of signal and noise is established so that a Kalman filter can filter the measurements effectively, and the filter is simulated in MATLAB. Because tower and nacelle vibration contaminates the collected data, the raw and filtered data are classified and stored by the screening algorithm, and the filtered data are filtered again to make the output more accurate. Finally, installation errors are eliminated algorithmically to achieve the tilt correction. A device based on this method offers high precision, low cost and good vibration immunity, and has a wide range of applications.
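    As an illustration of the filtering step, a scalar Kalman filter with a random-walk state model suffices for a slowly varying tilt angle observed through a noisy accelerometer channel; the noise variances and the simulated data below are assumptions for the sketch, not values from the paper.

```python
import numpy as np

def kalman_tilt(z, q=1e-5, r=0.01):
    """Scalar Kalman filter: random-walk tilt state, noisy measurements.
    q: process noise variance, r: measurement noise variance."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q              # predict: the state is a slow random walk
        kg = p / (p + r)       # Kalman gain
        x = x + kg * (zk - x)  # update with the new measurement
        p = (1.0 - kg) * p
        out[k] = x
    return out

# Toy usage: a constant 2-degree tilt buried in vibration-like noise.
rng = np.random.default_rng(1)
z = 2.0 + 0.3 * rng.standard_normal(500)
print(kalman_tilt(z)[-1])  # converges near 2.0
```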

  3. Corrected direct force balance method for atomic force microscopy lateral force calibration

    International Nuclear Information System (INIS)

    Asay, David B.; Hsiao, Erik; Kim, Seong H.

    2009-01-01

    This paper reports corrections and improvements to the previously reported direct force balance method (DFBM) developed for lateral force calibration in atomic force microscopy. The DFBM employs the lateral force signal obtained during a force-distance measurement on a sloped surface and relates this signal to the applied load and the slope of the surface to determine the lateral calibration factor. In the original publication [Rev. Sci. Instrum. 77, 043903 (2006)], the tip-substrate contact was assumed to be pinned at the point of contact, i.e., no slip along the slope. In control experiments, however, the tip was found to slide along the slope during the force-distance measurement. This paper presents the corrected force balance for lateral force calibration.
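    For orientation, the idealized balance can be written as follows. This is a schematic version only, not the paper's corrected expressions: for a pinned contact on a slope of angle theta under normal load F_N the lateral force is F_N tan(theta), while sliding introduces the friction coefficient mu.

```latex
% Schematic force balance (assumed textbook form, not the paper's result):
\[
F_\mathrm{lat}^{\text{pinned}} = F_N \tan\theta ,
\qquad
F_\mathrm{lat}^{\text{sliding}} = F_N\,\frac{\tan\theta + \mu}{1 - \mu\tan\theta} ,
\qquad
\alpha = \frac{F_\mathrm{lat}}{V_\mathrm{lat}} ,
\]
% where $V_\mathrm{lat}$ is the lateral photodiode signal and $\alpha$
% the lateral calibration factor being sought.
```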

  4. Calibration of an accountability tank by bubbling pressure method: correction factors to be taken into account

    International Nuclear Information System (INIS)

    Cauchetier, Ph.

    1993-01-01

    Obtaining the required precision when calibrating an accountability tank by the bubbling pressure method requires very slow bubbling. The measured data (mass and pressure) must be transformed into the physical dimensions of the vessel (height and volume). All corrections to be taken into account (buoyancy, sensor calibration curve, liquid density, weight of the gas column, bubbling overpressure, temperature...) are reviewed and evaluated, and the equations used are given. (author). 3 figs., 1 tab., 2 refs
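    To make the conversion concrete, the sketch below turns a bubbler differential pressure into a liquid height, applying two of the listed corrections (weight of the gas column and bubbling overpressure); the function, parameter names and all numerical values are hypothetical illustrations, not the author's equations.

```python
G = 9.80665  # m/s^2, standard gravity

def liquid_height(delta_p, rho_liquid, rho_gas=1.2, line_height=2.0,
                  bubbling_overpressure=5.0):
    """Convert bubbler differential pressure (Pa) to liquid height (m).
    Only two illustrative corrections are applied; the paper's full list
    (buoyancy, sensor calibration curve, temperature...) is omitted."""
    p = delta_p - bubbling_overpressure  # remove bubble-formation overpressure
    p -= rho_gas * G * line_height       # remove weight of the gas column
    return p / (rho_liquid * G)

# Toy usage: roughly 1 m of liquid with density 1200 kg/m^3.
print(liquid_height(delta_p=11800.0, rho_liquid=1200.0))
```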

  5. A method of correcting borderline anxiety-depressive disorders in patients with diabetes mellitus

    Directory of Open Access Journals (Sweden)

    A. Kozhanova

    2015-11-01

    The article presents the results of research on the effectiveness of a method, developed by the authors, for correcting borderline anxiety-depressive disorders in patients with type 2 diabetes through the use of magnetic therapy. Tags: anxiety-depressive disorder, hidden depression, diabetes, medical rehabilitation, singlet-oxygen therapy.

  6. BIOFEEDBACK: A NEW METHOD FOR CORRECTION OF MOTOR DISORDERS IN PATIENTS WITH MULTIPLE SCLEROSIS

    Directory of Open Access Journals (Sweden)