WorldWideScience

Sample records for input function quantification

  1. An open tool for input function estimation and quantification of dynamic PET FDG brain scans.

    Science.gov (United States)

    Bertrán, Martín; Martínez, Natalia; Carbajal, Guillermo; Fernández, Alicia; Gómez, Álvaro

    2016-08-01

    Positron emission tomography (PET) analysis of clinical studies is mostly restricted to qualitative evaluation. Quantitative analysis of PET studies is highly desirable in order to compute an objective measurement of the process of interest, so as to evaluate treatment response and/or compare patient data. Implementation of quantitative analysis, however, generally requires determination of the input function: the arterial blood or plasma activity, which indicates how much tracer is available for uptake in the brain. The purpose of our work was to share with the community an open software tool that can assist in the estimation of this input function and in the derivation of a quantitative map from the dynamic PET study. Arterial blood sampling during the PET study is the gold standard method for obtaining the input function, but it is uncomfortable and risky for the patient, so it is rarely used in routine studies. To overcome the lack of a direct input function, different alternatives have been devised and are available in the literature. These alternatives derive the input function from the PET image itself (image-derived input function) or from data gathered from previous similar studies (population-based input function). In this article, we present ongoing work that includes the development of a software tool that integrates several methods with novel strategies for the segmentation of blood pools and parameter estimation. The tool is available as an extension to the 3D Slicer software. Tests on phantoms were conducted in order to validate the implemented methods. We evaluated the segmentation algorithms over a range of acquisition conditions and vasculature sizes. Input function estimation algorithms were evaluated against the ground truth of the phantoms, as well as on their impact on the final quantification map. End-to-end use of the tool yields quantification maps with [Formula: see text] relative error in the estimated influx versus ground truth on phantoms.
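
    For readers unfamiliar with how an input function turns a dynamic study into a quantification map, the sketch below shows Patlak graphical analysis, one standard way to estimate the FDG influx constant Ki from an input function and a tissue curve. It is an illustrative sketch only (NumPy assumed; array names hypothetical), not the tool's actual implementation.

      import numpy as np

      def patlak_ki(t, cp, ct, t_star=20.0):
          """Estimate the FDG influx constant Ki by Patlak graphical analysis.
          t: frame mid-times (min); cp: plasma input function sampled at t;
          ct: tissue time-activity curve; t_star: start of the linear phase (min)."""
          # Running integral of the input function (trapezoidal rule).
          int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * (cp[1:] + cp[:-1]) / 2.0)))
          mask = (t >= t_star) & (cp > 0)
          x = int_cp[mask] / cp[mask]          # "Patlak time"
          y = ct[mask] / cp[mask]
          ki, intercept = np.polyfit(x, y, 1)  # slope of the late linear phase is Ki
          return ki, intercept

    Applied voxel by voxel, the slope map is the kind of influx map the abstract validates against phantom ground truth.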

  2. Hybrid image and blood sampling input function for quantification of small animal dynamic PET data

    International Nuclear Information System (INIS)

    Shoghi, Kooresh I.; Welch, Michael J.

    2007-01-01

    We describe and validate a hybrid image and blood sampling (HIBS) method to derive the input function for quantification of microPET mouse data. The HIBS algorithm derives the peak of the input function from the image, which is corrected for recovery, while the tail is derived from 5 to 6 optimally placed blood sampling points. A Bezier interpolation algorithm is used to link the rightmost image peak data point to the leftmost blood sampling point. To assess the performance of HIBS, 4 mice underwent 60-min microPET imaging sessions following a 0.40-0.50-mCi bolus administration of 18F-FDG. In total, 21 blood samples (blood-sampled plasma time-activity curve, bsPTAC) were obtained throughout the imaging session to compare against the proposed HIBS method. MicroPET images were reconstructed using filtered back projection with a zoom of 2.75 on the heart. Volumetric regions of interest (ROIs) were composed by drawing circular ROIs 3 pixels in diameter on 3-4 transverse planes of the left ventricle. Performance was characterized by kinetic simulations in terms of bias in parameter estimates when bsPTAC and HIBS are used as input functions. The peak of the bsPTAC curve was distorted in comparison to the HIBS-derived curve due to temporal limitations and delay in blood sampling, which affected the rates of bidirectional exchange between plasma and tissue. The results highlight limitations in using bsPTAC. The HIBS method, however, yields consistent results and is thus a suitable substitute for bsPTAC.
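
    The step that joins the image-derived peak to the blood-sampled tail can be illustrated with a cubic Bezier segment. The control-point placement below is a plausible choice for a smooth bridge, not necessarily the one used in the paper.

      import numpy as np

      def bezier_bridge(p0, p1, n=50):
          """Cubic Bezier segment joining the last image-derived peak point p0
          to the first blood-sample point p1, where p0 and p1 are (time, activity)
          pairs. Control points leave each endpoint horizontally -- an assumption
          made here for illustration."""
          p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
          dt = p1[0] - p0[0]
          c0 = p0 + np.array([dt / 3.0, 0.0])  # ease out of the imaged peak
          c1 = p1 - np.array([dt / 3.0, 0.0])  # ease into the sampled tail
          u = np.linspace(0.0, 1.0, n)[:, None]
          return ((1 - u) ** 3 * p0 + 3 * (1 - u) ** 2 * u * c0
                  + 3 * (1 - u) * u ** 2 * c1 + u ** 3 * p1)  # n x 2 samples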

  3. Estimating the input function non-invasively for FDG-PET quantification with multiple linear regression analysis: simulation and verification with in vivo data

    International Nuclear Information System (INIS)

    Fang, Yu-Hua; Kao, Tsair; Liu, Ren-Shyan; Wu, Liang-Chih

    2004-01-01

    A novel statistical method, namely Regression-Estimated Input Function (REIF), is proposed in this study for the purpose of non-invasive estimation of the input function for fluorine-18 2-fluoro-2-deoxy-D-glucose positron emission tomography (FDG-PET) quantitative analysis. We collected data from 44 patients who had undergone a blood sampling procedure during their FDG-PET scans. First, we generated tissue time-activity curves of the grey matter and the whole brain with a segmentation technique for every subject. Summations over different intervals of these two curves were used as a feature vector, which also included the net injection dose. Multiple linear regression analysis was then applied to find the correlation between the input function and the feature vector. After a simulation study with in vivo data, the data of 29 patients were used to calculate the regression coefficients, which were then used to estimate the input functions of the other 15 subjects. Comparing the estimated input functions with the corresponding real input functions, the averaged error percentages of the area under the curve and the cerebral metabolic rate of glucose (CMRGlc) were 12.13±8.85% and 16.60±9.61%, respectively. Regression analysis of the CMRGlc values derived from the real and estimated input functions revealed a high correlation (r=0.91). No significant difference was found between the real CMRGlc and that derived from our regression-estimated input function (Student's t test, P>0.05). The proposed REIF method demonstrated good performance in estimating both the input function and CMRGlc, and represents a reliable replacement for blood sampling procedures in FDG-PET quantification. (orig.)
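
    The regression step of REIF reduces to ordinary least squares. A sketch (NumPy assumed; array shapes hypothetical) of fitting coefficients on training subjects and predicting input functions for new ones:

      import numpy as np

      def fit_reif(features_train, aif_train):
          """Fit coefficients mapping feature vectors (interval sums of the
          grey-matter and whole-brain TACs plus net injected dose) to sampled
          input functions. features_train: (n_subjects, n_features);
          aif_train: (n_subjects, n_time_points)."""
          X = np.hstack([features_train, np.ones((len(features_train), 1))])  # bias term
          coeff, *_ = np.linalg.lstsq(X, aif_train, rcond=None)
          return coeff

      def predict_input_function(coeff, features):
          X = np.hstack([features, np.ones((len(features), 1))])
          return X @ coeff  # one estimated input function per row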

  4. Quantification of allochthonous nutrient input into freshwater bodies by herbivorous waterbirds

    NARCIS (Netherlands)

    Hahn, S.M.; Bauer, S.; Klaassen, M.R.J.

    2008-01-01

    1. Waterbirds are considered to import large quantities of nutrients to freshwater bodies, but quantification of these loadings remains problematic. We developed two general models to calculate such allochthonous nutrient inputs, considering the food intake, foraging behaviour and digestive performance of …

  5. Simplified quantification of small animal [18F]FDG PET studies using a standard arterial input function

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Philipp T. [University Hospital Aachen, Department of Neurology, Aachen (Germany); Circiumaru, Valentina; Thomas, Daniel H. [University of Pennsylvania, Department of Radiology, Philadelphia (United States); Cardi, Christopher A.; Bal, Harshali; Acton, Paul D. [Thomas Jefferson University, Department of Radiology, Philadelphia (United States)

    2006-08-15

    Arterial input function (AIF) measurement for the quantification of small animal PET studies is technically challenging and limited by the small blood volume of small laboratory animals. The present study investigated the use of a standard arterial input function (SAIF) to simplify the experimental procedure. Twelve [18F]fluorodeoxyglucose ([18F]FDG) PET studies accompanied by serial arterial blood sampling were acquired in seven male Sprague-Dawley rats under isoflurane anaesthesia without (every rat) and with additional (five rats) vibrissae stimulation. A leave-one-out procedure was employed to validate the use of a SAIF with individual scaling by one (1S) or two (2S) arterial blood samples. Automatic slow bolus infusion of [18F]FDG resulted in highly similar AIFs in all rats. The average differences of the area under the curve of the measured AIF and the individually scaled SAIF were 0.11±4.26% and 0.04±2.61% for the 1S (6-min sample) and the 2S (4-min/43-min samples) approach, respectively. The average differences between the cerebral metabolic rates of glucose (CMRglc) calculated using the measured AIF and the scaled SAIF were 1.31±5.45% and 1.30±3.84% for the 1S and the 2S approach, respectively. The use of a SAIF scaled by one or (preferably) two arterial blood samples can serve as a valid substitute for individual AIF measurements to quantify [18F]FDG PET studies in rats. The SAIF approach minimises the loss of blood and should be ideally suited for longitudinal quantitative small animal [18F]FDG PET studies. (orig.)
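
    The 1S/2S scaling itself is simple. A sketch, under the assumption that the scale factor is the measured-to-template activity ratio averaged over the available samples (the paper's exact calibration may differ):

      import numpy as np

      def scale_saif(t, saif, sample_times, sample_values):
          """Scale a standard arterial input function (SAIF) to an individual
          animal using one or two late blood samples, as in the 1S/2S schemes.
          The scale factor is the mean ratio of measured to template activity."""
          template = np.interp(sample_times, t, saif)
          scale = np.mean(np.asarray(sample_values) / template)
          return scale * saif

      # 1S: scale_saif(t, saif, [6.0], [c6]); 2S: scale_saif(t, saif, [4.0, 43.0], [c4, c43])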

  6. Input-profile-based software failure probability quantification for safety signal generation systems

    International Nuclear Information System (INIS)

    Kang, Hyun Gook; Lim, Ho Gon; Lee, Ho Jung; Kim, Man Cheol; Jang, Seung Cheol

    2009-01-01

    Approaches for software failure probability estimation are mainly based on the results of testing. Test cases represent the inputs that are encountered in actual use. The test inputs for a safety-critical application such as the reactor protection system (RPS) of a nuclear power plant are the inputs that cause the activation of a protective action such as a reactor trip. A digital system treats inputs from instrumentation sensors as discrete digital values by using an analog-to-digital converter. The input profile must be determined in consideration of these characteristics for effective software failure probability quantification. Another important characteristic of software testing is that the test need not be repeated for the same input value, since the software response is deterministic for each specific digital input. With these considerations, we propose an effective software testing method for quantifying the failure probability. As an example application, the input profile of the digital RPS is developed based on typical plant data. The proposed method is expected to provide a simple but realistic means of quantifying the software failure probability based on the input profile and system dynamics.
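
    Because the response is deterministic per discrete input, each distinct input needs only one test, and the failure probability is the profile-weighted mass of the inputs that fail. A minimal sketch with a hypothetical three-value profile:

      def failure_probability(profile, failed_inputs):
          """Profile-based software failure probability.
          profile: dict mapping each discrete digital input value to its
                   occurrence probability under the input profile.
          failed_inputs: set of input values for which the tested software
                   produced a wrong output (deterministic, so one test each)."""
          assert abs(sum(profile.values()) - 1.0) < 1e-9, "profile must sum to 1"
          return sum(p for x, p in profile.items() if x in failed_inputs)

      # Hypothetical example: a three-value trip-parameter profile, one failing input.
      print(failure_probability({100: 0.7, 101: 0.2, 102: 0.1}, {102}))  # 0.1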

  7. Noninvasive quantification of cerebral metabolic rate for glucose in rats using 18F-FDG PET and standard input function

    Science.gov (United States)

    Hori, Yuki; Ihara, Naoki; Teramoto, Noboru; Kunimi, Masako; Honda, Manabu; Kato, Koichi; Hanakawa, Takashi

    2015-01-01

    Measurement of the arterial input function (AIF) for quantitative positron emission tomography (PET) studies is technically challenging. The present study aimed to develop a method based on a standard arterial input function (SIF) to estimate the input function without blood sampling. We performed 18F-fluorodeoxyglucose studies accompanied by continuous blood sampling for measurement of the AIF in 11 rats. The SIF was calculated by averaging the AIFs of eight anesthetized rats, after normalization with body mass (BM) and injected dose (ID). Then, the individual input function was estimated using two types of SIF: (1) SIF calibrated by the individual's BM and ID (estimated individual input function, EIFNS) and (2) SIF calibrated by a single blood sample as proposed previously (EIF1S). No significant differences in the area under the curve (AUC) or the cerebral metabolic rate for glucose (CMRGlc) were found across the AIF-, EIFNS-, and EIF1S-based methods using repeated measures analysis of variance. In the correlation analysis, AUC and CMRGlc derived from EIFNS were highly correlated with those derived from AIF and EIF1S. A preliminary comparison between AIF and EIFNS in three awake rats supported the idea that the method might be applicable to behaving animals. The present study suggests that the EIFNS method might serve as a noninvasive substitute for individual AIF measurement. PMID:25966947

  8. Error correction in multi-fidelity molecular dynamics simulations using functional uncertainty quantification

    Energy Technology Data Exchange (ETDEWEB)

    Reeve, Samuel Temple; Strachan, Alejandro, E-mail: strachan@purdue.edu

    2017-04-01

    We use functional (Fréchet) derivatives to quantify how thermodynamic outputs of a molecular dynamics (MD) simulation depend on the potential used to compute atomic interactions. Our approach quantifies the sensitivity of the quantities of interest with respect to the input functions, as opposed to their parameters as is done in typical uncertainty quantification methods. We show that the functional sensitivity of the average potential energy and pressure in isothermal, isochoric MD simulations using Lennard–Jones two-body interactions can be used to accurately predict those properties for other interatomic potentials (with different functional forms) without re-running the simulations. This is demonstrated under three different thermodynamic conditions, namely a crystal at room temperature, a liquid at ambient pressure, and a high-pressure liquid. The method provides accurate predictions as long as the change in potential can be reasonably described to first order and does not significantly affect the region in phase space explored by the simulation. The functional uncertainty quantification approach can be used to estimate the uncertainties associated with constitutive models used in the simulation and to correct predictions if a more accurate representation becomes available.
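
    The first-order correction has the form A[V'] ~ A[V] + integral of (dA/dV)(r) * (V'(r) - V(r)) dr. A sketch of applying it on a discretized radial grid; the sensitivity kernel and potentials below are placeholders, since the paper derives the kernel from the MD run itself.

      import numpy as np

      def corrected_property(a_ref, sensitivity, v_ref, v_new, r):
          """First-order functional correction of a thermodynamic output:
          A[V'] ~= A[V] + trapz((dA/dV)(r) * (V'(r) - V(r)), r),
          with the functional derivative sampled on the radial grid r."""
          return a_ref + np.trapz(sensitivity * (v_new - v_ref), r)

      # Hypothetical example: correct an average energy from Lennard-Jones
      # to a softer pair potential, with a placeholder sensitivity kernel.
      r = np.linspace(0.9, 3.0, 200)
      lj = 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)
      soft = 4.0 * ((1.0 / r) ** 10 - (1.0 / r) ** 6)
      sens = np.exp(-r)
      print(corrected_property(-6.2, sens, lj, soft, r))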

  9. Simultaneous acquisition of dynamic PET-MRI: arterial input function using DSC-MRI and [18F]-FET

    Energy Technology Data Exchange (ETDEWEB)

    Caldeira, Liliana; Yun, Seong Dae; Silva, Nuno da; Filss, Christian; Scheins, Juergen; Telmann, Lutz; Herzog, Hans; Shah, Jon [Institute of Neuroscience and Medicine - 4, Forschungszentrum Juelich GmbH (Germany)

    2015-05-18

    This work focuses on the study of simultaneous dynamic MR-PET acquisition in brain tumour patients. MR-based perfusion-weighted imaging (PWI) and [18F]-FET PET are dynamic methods that allow tumour metabolism to be evaluated quantitatively. In both methods, an arterial input function (AIF) is necessary for quantification. However, AIF estimation is a challenging task. In this work, we explore the possibility of combining dynamic MR and PET AIFs.

  10. Simultaneous acquisition of dynamic PET-MRI: arterial input function using DSC-MRI and [18F]-FET

    International Nuclear Information System (INIS)

    Caldeira, Liliana; Yun, Seong Dae; Silva, Nuno da; Filss, Christian; Scheins, Juergen; Telmann, Lutz; Herzog, Hans; Shah, Jon

    2015-01-01

    This work focuses on the study of simultaneous dynamic MR-PET acquisition in brain tumour patients. MR-based perfusion-weighted imaging (PWI) and [18F]-FET PET are dynamic methods that allow tumour metabolism to be evaluated quantitatively. In both methods, an arterial input function (AIF) is necessary for quantification. However, AIF estimation is a challenging task. In this work, we explore the possibility of combining dynamic MR and PET AIFs.

  11. Combining MRI with PET for partial volume correction improves image-derived input functions in mice

    Energy Technology Data Exchange (ETDEWEB)

    Evans, Eleanor; Buonincontri, Guido [Wolfson Brain Imaging Centre, University of Cambridge, Cambridge (United Kingdom); Izquierdo, David [Athinoula A Martinos Centre, Harvard University, Cambridge, MA (United States); Methner, Carmen [Department of Medicine, University of Cambridge, Cambridge (United Kingdom); Hawkes, Rob C [Wolfson Brain Imaging Centre, University of Cambridge, Cambridge (United Kingdom); Ansorge, Richard E [Department of Physics, University of Cambridge, Cambridge (United Kingdom); Kreig, Thomas [Department of Medicine, University of Cambridge, Cambridge (United Kingdom); Carpenter, T Adrian [Wolfson Brain Imaging Centre, University of Cambridge, Cambridge (United Kingdom); Sawiak, Stephen J [Wolfson Brain Imaging Centre, University of Cambridge, Cambridge (United Kingdom); Behavioural and Clinical Neurosciences Institute, University of Cambridge, Cambridge (United Kingdom)

    2014-07-29

    Kinetic modelling in PET requires the arterial input function (AIF), defined as the time-activity curve (TAC) in plasma. This measure is challenging to obtain in mice due to low blood volumes, resulting in a reliance on image-based methods for AIF derivation. We present a comparison of PET- and MR-based region-of-interest (ROI) analysis to obtain image-derived AIFs from the left ventricle (LV) of a mouse model. ROI-based partial volume correction (PVC) was performed to improve quantification.

  12. Combining MRI with PET for partial volume correction improves image-derived input functions in mice

    International Nuclear Information System (INIS)

    Evans, Eleanor; Buonincontri, Guido; Izquierdo, David; Methner, Carmen; Hawkes, Rob C; Ansorge, Richard E; Kreig, Thomas; Carpenter, T Adrian; Sawiak, Stephen J

    2014-01-01

    Kinetic modelling in PET requires the arterial input function (AIF), defined as the time-activity curve (TAC) in plasma. This measure is challenging to obtain in mice due to low blood volumes, resulting in a reliance on image-based methods for AIF derivation. We present a comparison of PET- and MR-based region-of-interest (ROI) analysis to obtain image-derived AIFs from the left ventricle (LV) of a mouse model. ROI-based partial volume correction (PVC) was performed to improve quantification.

  13. Estimation of the input function in dynamic positron emission tomography applied to fluorodeoxyglucose

    International Nuclear Information System (INIS)

    Jouvie, Camille

    2013-01-01

    Positron Emission Tomography (PET) is a method of functional imaging, used in particular for drug development and tumor imaging. In PET, estimation of the arterial plasma activity concentration of the non-metabolized compound (the 'input function') is necessary for the extraction of the pharmacokinetic parameters. These parameters enable the quantification of the compound dynamics in the tissues. This PhD thesis contributes to the study of the input function through the development of a minimally invasive method for its estimation. This method uses the PET image and a few blood samples. In this work, the example of the FDG tracer is chosen. The proposed method relies on compartmental modeling: it deconvolves the three-compartment model. The originality of the method consists in using a large number of regions of interest (ROIs), a large number of sets of three ROIs, and an iterative process. To validate the method, simulations of PET images of increasing complexity were performed, from a simple image simulated with an analytic simulator to a complex image simulated with a Monte-Carlo simulator. After simulation of the acquisition, reconstruction and corrections, the images were segmented (through segmentation of an MRI image and registration between the PET and MRI images) and corrected for partial volume effect by a variant of Rousset's method, to obtain the kinetics in the ROIs, which are the input data of the estimation method. The evaluation of the method on simulated and real data is presented, as well as a study of the method's robustness to different error sources, such as errors in the segmentation, in the registration or in the activity of the blood samples used. (author)

  14. A curve-fitting approach to estimate the arterial plasma input function for the assessment of glucose metabolic rate and response to treatment.

    NARCIS (Netherlands)

    Vriens, D.; Geus-Oei, L.F. de; Oyen, W.J.G.; Visser, E.P.

    2009-01-01

    For the quantification of dynamic (18)F-FDG PET studies, the arterial plasma time-activity concentration curve (APTAC) needs to be available. This can be obtained using serial sampling of arterial blood or an image-derived input function (IDIF). Arterial sampling is invasive and often not feasible

  15. Reconstruction of an input function from a dynamic PET water image using multiple tissue curves

    Science.gov (United States)

    Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Yuka; Nishiyama, Yoshihiro

    2016-08-01

    Quantification of cerebral blood flow (CBF) is important for the understanding of normal and pathologic brain physiology. When CBF is assessed using PET with [15O]H2O or [15O]CO2, its calculation requires an arterial input function, which generally requires invasive arterial blood sampling. The aim of the present study was to develop a new technique to reconstruct an image derived input function (IDIF) from a dynamic [15O]H2O PET image as a completely non-invasive approach. Our technique consisted of using a formula to express the input using a tissue curve with a rate constant parameter. For multiple tissue curves extracted from the dynamic image, the rate constants were estimated so as to minimize the sum of the differences of the reproduced inputs expressed by the extracted tissue curves. The estimated rates were used to express the inputs, and the mean of the estimated inputs was used as an IDIF. The method was tested in human subjects (n = 29) and was compared to the blood sampling method. Simulation studies were performed to examine the magnitude of potential biases in CBF and to optimize the number of tissue curves used for the input reconstruction. In the PET study, the estimated IDIFs agreed well with the measured ones, and the difference between the CBF values calculated using the two methods was small. This suggests the possibility of using a completely non-invasive technique to assess CBF in patho-physiological studies.
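
    The reconstruction idea can be made concrete: for [15O]H2O a one-tissue model gives dCt/dt = K1*Ca - k2*Ct, so each tissue curve reproduces the input as Ca = (dCt/dt + k2*Ct)/K1, and the rate constants are chosen to make the reproduced inputs agree. A simplified sketch (SciPy assumed), without the constraints and preprocessing of the published method:

      import numpy as np
      from scipy.optimize import minimize

      def reproduce_input(t, ct, K1, k2):
          """Express the input from a tissue curve under a one-tissue model:
          Ca(t) = (dCt/dt + k2*Ct) / K1."""
          return (np.gradient(ct, t) + k2 * ct) / K1

      def estimate_idif(t, tissue_curves, x0):
          """Estimate (K1_i, k2_i) for each tissue curve by minimizing the
          spread between the inputs they each reproduce; the mean reproduced
          input is the IDIF. x0 stacks initial guesses [K1_1, k2_1, ...]."""
          def cost(params):
              inputs = [reproduce_input(t, c, params[2*i], params[2*i+1])
                        for i, c in enumerate(tissue_curves)]
              mean_in = np.mean(inputs, axis=0)
              return sum(np.sum((ca - mean_in) ** 2) for ca in inputs)
          p = minimize(cost, x0, method="Nelder-Mead").x
          return np.mean([reproduce_input(t, c, p[2*i], p[2*i+1])
                          for i, c in enumerate(tissue_curves)], axis=0)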

  16. Partial volume effect (PVE) on the arterial input function (AIF) in T1-weighted perfusion imaging and limitations of the multiplicative rescaling approach

    DEFF Research Database (Denmark)

    Hansen, Adam Espe; Pedersen, Henrik; Rostrup, Egill

    2009-01-01

    The partial volume effect (PVE) on the arterial input function (AIF) remains a major obstacle to absolute quantification of cerebral blood flow (CBF) using MRI. This study evaluates the validity and performance of a commonly used multiplicative rescaling of the AIF to correct for the PVE. In a gr...

  17. Plasticity of the cis-regulatory input function of a gene.

    Directory of Open Access Journals (Sweden)

    Avraham E Mayo

    2006-04-01

    Full Text Available The transcription rate of a gene is often controlled by several regulators that bind specific sites in the gene's cis-regulatory region. The combined effect of these regulators is described by a cis-regulatory input function. What determines the form of an input function, and how variable is it with respect to mutations? To address this, we employ the well-characterized lac operon of Escherichia coli, which has an elaborate input function, intermediate between Boolean AND-gate and OR-gate logic. We mapped in detail the input functions of 12 variants of the lac promoter, each with different point mutations in the regulator binding sites, by means of accurate expression measurements from living cells. We find that even a few mutations can significantly change the input function, resulting in functions that resemble pure AND gates, OR gates, or single-input switches. Other types of gates were not found. The variant input functions can be described in a unified manner by a mathematical model. The model also lets us predict which functions cannot be reached by point mutations. The input function that we studied thus appears to be plastic, in the sense that many of the mutations do not ruin the regulation completely but rather result in new ways to integrate the inputs.

  18. PREMIUM - Benchmark on the quantification of the uncertainty of the physical models in the system thermal-hydraulic codes

    International Nuclear Information System (INIS)

    Skorek, Tomasz; Crecy, Agnes de

    2013-01-01

    PREMIUM (Post BEMUSE Reflood Models Input Uncertainty Methods) is an activity launched with the aim of advancing the methods for quantifying physical model uncertainties in thermal-hydraulic codes. It is endorsed by OECD/NEA/CSNI/WGAMA. The PREMIUM benchmark is addressed to all who apply uncertainty evaluation methods based on the quantification and propagation of input uncertainties. The benchmark is based on a selected case of uncertainty analysis applied to the simulation of quench front propagation in an experimental test facility. Application to an experiment enables evaluation and confirmation of the quantified probability distribution functions on the basis of experimental data. The scope of the benchmark comprises a review of the existing methods, selection of potentially important uncertain input parameters, preliminary quantification of the ranges and distributions of the identified parameters, evaluation of the probability density functions using experimental results of tests performed on the FEBA test facility, and confirmation/validation of the performed quantification on the basis of a blind calculation of the Reflood 2-D PERICLES experiment. (authors)

  19. Recovery coefficients for the quantification of the arterial input function from dynamic pet measurements: experimental and theoretical determination

    International Nuclear Information System (INIS)

    Brix, G.; Bellemann, M.E.; Hauser, H.; Doll, J.

    2002-01-01

    Aim: For kinetic modelling of dynamic PET data, the arterial input function can be determined directly from the PET scans if a large artery is visualized on the images. It was the purpose of this study to determine, experimentally and theoretically, recovery coefficients for cylinders as a function of their diameter and of the level of background activity. Methods: The measurements were performed using a phantom with seven cylinder inserts (φ = 5-46 mm). The cylinders were filled with an aqueous 68Ga solution while the main chamber was filled with an 18F solution in order to obtain a varying concentration ratio between the cylinders and the background due to the different isotope half-lives. After iterative image reconstruction, the activity concentrations were measured in the center of the cylinders and the recovery coefficients were calculated as a function of the diameter and the background activity. Based on the imaging properties of the PET system, we also developed a model for the quantitative assessment of recovery coefficients. Results: The functional dependence of the measured recovery data on the cylinder diameter and the concentration ratio is well described by our model. For dynamic PET measurements, the recovery correction must take into account the decreasing concentration ratio between the blood vessel and the surrounding tissue. Under the realized measurement and data analysis conditions, a recovery correction is required for vessels with a diameter of up to 25 mm. Conclusions: Based on the experimentally verified model, the activity concentration in large arteries can be calculated from the measured activity concentration in the blood vessel and the background activity. The presented approach offers the possibility to determine the arterial input function for pharmacokinetic PET studies non-invasively from large arteries (especially the aorta). (orig.)
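
    Once a recovery coefficient RC(d) is known for a vessel diameter d, a first-order spill-in/spill-out model C_meas = RC*C_true + (1 - RC)*C_bg can be inverted for the true arterial concentration. The paper derives RC from the scanner's imaging properties, so the form below is an illustrative assumption.

      def true_vessel_concentration(c_measured, c_background, recovery):
          """Invert a simple partial-volume model for a vessel of known diameter:
              C_meas = RC * C_true + (1 - RC) * C_background
          where RC is the diameter-dependent recovery coefficient."""
          return (c_measured - (1.0 - recovery) * c_background) / recovery

      # Hypothetical example: a 25 mm aorta with RC = 0.85 and warm background.
      print(true_vessel_concentration(c_measured=18.4, c_background=4.0, recovery=0.85))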

  20. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes with scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times, which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' is used to estimate the sensitivity indices of each scalar model input, while the 'dispersion model' is used to derive the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.
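
    For reference, the Sobol indices mentioned above can be estimated for scalar inputs with a pick-freeze Monte Carlo scheme; a sketch on the Ishigami test function (NumPy assumed), separate from the paper's joint GLM/GAM metamodeling:

      import numpy as np

      def sobol_first_order(model, n=100_000, d=3, seed=0):
          """First-order Sobol indices by the pick-freeze estimator on
          iid uniform(-pi, pi) inputs."""
          rng = np.random.default_rng(seed)
          a = rng.uniform(-np.pi, np.pi, (n, d))
          b = rng.uniform(-np.pi, np.pi, (n, d))
          ya = model(a)
          var = ya.var()
          indices = []
          for i in range(d):
              ab = b.copy()
              ab[:, i] = a[:, i]  # freeze coordinate i from the first sample
              indices.append(np.mean(ya * (model(ab) - model(b))) / var)
          return np.array(indices)

      ishigami = lambda x: (np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2
                            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))
      print(sobol_first_order(ishigami))  # approx [0.31, 0.44, 0.00]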

  1. Uncertainty quantification in ion–solid interaction simulations

    Energy Technology Data Exchange (ETDEWEB)

    Preuss, R., E-mail: preuss@ipp.mpg.de; Toussaint, U. von

    2017-02-15

    Within the framework of Bayesian uncertainty quantification we propose a non-intrusive reduced-order spectral approach (polynomial chaos expansion) to the simulation of ion–solid interactions. The method not only reduces the number of function evaluations but provides simultaneously a quantitative measure for which combinations of inputs have the most important impact on the result. It is applied to SDTRIM-simulations (Möller et al., 1988) with several uncertain and Gaussian distributed input parameters (i.e. angle, projectile energy, surface binding energy, target composition) and the results are compared to full-grid based approaches and sampling based methods with respect to reliability, efficiency and scalability.
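
    A minimal non-intrusive polynomial chaos fit, reduced to one standard-normal input for clarity (the cited work expands in several inputs such as angle and projectile energy): the output is regressed on probabilists' Hermite polynomials, and the mean and variance follow from the coefficients.

      import math
      import numpy as np
      from numpy.polynomial.hermite_e import hermeval

      def fit_pce(xi, y, order=3):
          """Least-squares PCE in one standard-normal input xi, using
          probabilists' Hermite polynomials He_0..He_order."""
          basis = np.stack([hermeval(xi, np.eye(order + 1)[k])
                            for k in range(order + 1)], axis=1)
          coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
          mean = coef[0]
          # Orthogonality under N(0,1): E[He_j * He_k] = k! if j == k, else 0.
          variance = sum(coef[k] ** 2 * math.factorial(k) for k in range(1, order + 1))
          return coef, mean, variance

      # Hypothetical example with a scalar stand-in for a simulation output.
      rng = np.random.default_rng(0)
      xi = rng.standard_normal(2000)
      coef, mean, var = fit_pce(xi, np.exp(0.3 * xi))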

  2. [11C]Harmine Binding to Brain Monoamine Oxidase A: Test-Retest Properties and Noninvasive Quantification.

    Science.gov (United States)

    Zanderigo, Francesca; D'Agostino, Alexandra E; Joshi, Nandita; Schain, Martin; Kumar, Dileep; Parsey, Ramin V; DeLorenzo, Christine; Mann, J John

    2018-02-08

    Inhibition of the isoform A of monoamine oxidase (MAO-A), a mitochondrial enzyme catalyzing deamination of monoamine neurotransmitters, is useful in the treatment of depression and anxiety disorders. [11C]harmine, a MAO-A PET radioligand, has been used to study mood disorders and antidepressant treatment. However, the test-retest characteristics of [11C]harmine binding have to date only been partially investigated. Furthermore, since MAO-A is ubiquitously expressed, no reference region is available, thus requiring arterial blood sampling during PET scanning. Here, we investigate the test-retest properties of [11C]harmine binding measurements; assess the effects of using a minimally invasive input function estimation on binding quantification and repeatability; and explore binding potential estimation using a reference region-free approach. Quantification of [11C]harmine distribution volume (VT) via kinetic models and graphical analyses was compared based on absolute test-retest percent difference (TRPD), intraclass correlation coefficient (ICC), and identifiability. The optimal procedure was also used with a simultaneously estimated input function in place of the measured curve. Lastly, an approach for binding potential quantification in the absence of a reference region was evaluated. [11C]harmine VT estimates quantified using arterial blood and kinetic modeling showed average absolute TRPD values of 7.7 to 15.6%, and ICC values between 0.56 and 0.86, across brain regions. Using simultaneous estimation (SIME) of the input function resulted in VT estimates close to those obtained using the arterial input function (r = 0.951, slope = 1.073, intercept = -1.037), with numerically but not statistically higher test-retest differences (range 16.6 to 22.0%), but with overall poor ICC values, between 0.30 and 0.57. Prospective studies using [11C]harmine are possible given its test-retest repeatability when binding is quantified using arterial blood.

  3. Vascular input function correction of inflow enhancement for improved pharmacokinetic modeling of liver DCE-MRI.

    Science.gov (United States)

    Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B

    2018-06-01

    To propose a simple method to correct the vascular input function (VIF) for inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate the improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve the accuracy of VIF estimation and pharmacokinetic fitting. In animal study results, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived by uncorrected VIFs showed no significant changes. The proposed correction method improves the accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  4. Direct qPCR quantification using the Quantifiler® Trio DNA quantification kit.

    Science.gov (United States)

    Liu, Jason Yingjie

    2014-11-01

    The effectiveness of a direct quantification assay is essential to the adoption of the combined direct quantification/direct STR workflow. In this paper, the feasibility of using the Quantifiler® Trio DNA quantification kit for the direct quantification of forensic casework samples was investigated. Both low-level touch DNA samples and blood samples were collected on PE swabs and quantified directly. The increased sensitivity of the Quantifiler® Trio kit enabled the detection of less than 10 pg of DNA in unprocessed touch samples and also minimizes the stochastic effect experienced by different targets in the same sample. The DNA quantity information obtained from a direct quantification assay using the Quantifiler® Trio kit can also be used to accurately estimate the optimal input DNA quantity for a direct STR amplification reaction. The correlation between the direct quantification results (Quantifiler® Trio kit) and the direct STR results (GlobalFiler™ PCR amplification kit) for low-level touch DNA samples indicates that direct quantification using the Quantifiler® Trio DNA quantification kit is more reliable than the Quantifiler® Duo DNA quantification kit for predicting the STR results of unprocessed touch DNA samples containing less than 10 pg of DNA. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. Modelling of Multi Input Transfer Function for Rainfall Forecasting in Batu City

    OpenAIRE

    Priska Arindya Purnama

    2017-01-01

    The aim of this research is to model and forecast the rainfall in Batu City using a multi input transfer function model based on air temperature, humidity, wind speed and cloud. A transfer function model is a multivariate time series model which consists of an output series (Yt) expected to be affected by an input series (Xt) and other inputs in a group called a noise series (Nt). The multi input transfer function model obtained is (b1,s1,r1) (b2,s2,r2) (b3,s3,r3) (b4,s4,r4)(pn,qn) = (0,0,0)...

  6. Temporal and spatial quantification of farm and landscape functions

    DEFF Research Database (Denmark)

    Andersen, Peter Stubkjær

    … residence, habitat, and recreation; development of a method for quantifying farm functionality and assessing multifunctionality; and definition of a farm typology based on multifunctionality strategies. Empirical data from farm interviews were used in the study to test the developed methods. The results indicate that functionality generally decreases and that a tendency towards increased segregation of the rural landscape is observed. In perspective, further studies on quantification in tangible units, synergies and trade-offs between functions at different scales, and correlations between structures and functions are needed.

  7. Sequential designs for sensitivity analysis of functional inputs in computer experiments

    International Nuclear Information System (INIS)

    Fruth, J.; Roustant, O.; Kuhnt, S.

    2015-01-01

    Computer experiments are nowadays commonly used to analyze industrial processes aimed at achieving a desired outcome. Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on the response variable. In this work we focus on sensitivity analysis of a scalar-valued output of a time-consuming computer code depending on scalar and functional input parameters. We investigate a sequential methodology, based on piecewise constant functions and sequential bifurcation, which is both economical and fully interpretable. The new approach is applied to a sheet metal forming problem in three sequential steps, resulting in new insights into the behavior of the forming process over time. - Highlights: • Sensitivity analysis method for functional and scalar inputs is presented. • We focus on the discovery of most influential parts of the functional domain. • We investigate an economical sequential methodology based on piecewise constant functions. • Normalized sensitivity indices are introduced and investigated theoretically. • Successful application to sheet metal forming on two functional inputs

  8. Modelling of Multi Input Transfer Function for Rainfall Forecasting in Batu City

    Directory of Open Access Journals (Sweden)

    Priska Arindya Purnama

    2017-11-01

    Full Text Available The aim of this research is to model and forecast the rainfall in Batu City using a multi input transfer function model based on air temperature, humidity, wind speed and cloud. A transfer function model is a multivariate time series model which consists of an output series (Yt) expected to be affected by an input series (Xt) and other inputs in a group called a noise series (Nt). The multi input transfer function model obtained is (b1,s1,r1) (b2,s2,r2) (b3,s3,r3) (b4,s4,r4)(pn,qn) = (0,0,0) (23,0,0) (1,2,0) (0,0,0) ([5,8],2), which shows that rainfall on day t is affected by the air temperature on day t, by the air humidity in the previous 23 days, by the wind speed in the previous day, and by the clouds on day t. The results of rainfall forecasting in Batu City with the multi input transfer function model can be said to be accurate, because it produces relatively small RMSE values. The RMSE for the training data is 7.7921, while that for the testing data is 4.2184. The multi input transfer function model is thus suitable for forecasting rainfall in Batu City.

  9. Improved derivation of input function in dynamic mouse [18F]FDG PET using bladder radioactivity kinetics

    Science.gov (United States)

    Wong, Koon-Pong; Zhang, Xiaoli; Huang, Sung-Cheng

    2013-01-01

    Purpose: Accurate determination of the plasma input function (IF) is essential for absolute quantification of physiological parameters in positron emission tomography (PET). However, it requires an invasive and tedious procedure of arterial blood sampling that is challenging in mice because of the limited blood volume. In this study, a hybrid modeling approach is proposed to estimate the plasma IF of 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) in mice using accumulated radioactivity in the urinary bladder together with a single late-time blood sample measurement. Methods: Dynamic PET scans were performed on nine isoflurane-anesthetized male C57BL/6 mice after a bolus injection of [18F]FDG at the lateral caudal vein. During a 60- or 90-min scan, serial blood samples were taken from the femoral artery. Image data were reconstructed using filtered backprojection with CT-based attenuation correction. Total accumulated radioactivity in the urinary bladder was fitted to a renal compartmental model with the last blood sample and a 1-exponential function that described the [18F]FDG clearance in blood. Multiple late-time blood sample estimates were calculated by the blood [18F]FDG clearance equation. A sum of 4 exponentials was assumed for the plasma IF that served as a forcing function to all tissues. The estimated plasma IF was obtained by simultaneously fitting the [18F]FDG model to the time-activity curves (TACs) of liver and muscle and the forcing function to early (0–1 min) left-ventricle data (corrected for delay, dispersion, partial-volume effects and erythrocyte uptake) and the late-time blood estimates. Using only the blood sample acquired at the end of the study to estimate the IF and the use of the liver TAC as an alternative IF were also investigated. Results: The area under the plasma TACs calculated for all studies using the hybrid approach was not significantly different from that using all blood samples. [18F]FDG uptake constants in brain, myocardium, skeletal …
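
    The sum-of-4-exponentials input model is straightforward to fit once the early left-ventricle points and late-time blood estimates are assembled; a sketch with synthetic data (SciPy assumed; parameter values hypothetical):

      import numpy as np
      from scipy.optimize import curve_fit

      def if_model(t, a1, a2, a3, a4, l1, l2, l3, l4):
          """Sum-of-four-exponentials plasma input function model."""
          return (a1 * np.exp(-l1 * t) + a2 * np.exp(-l2 * t)
                  + a3 * np.exp(-l3 * t) + a4 * np.exp(-l4 * t))

      # Synthetic plasma curve (times in min) with 2% noise, then refit.
      true = [70, 20, 8, 2, 4.0, 0.6, 0.08, 0.01]
      t = np.linspace(0.1, 90, 60)
      cp = if_model(t, *true) * (1 + 0.02 * np.random.default_rng(0).normal(size=t.size))
      params, _ = curve_fit(if_model, t, cp,
                            p0=[60, 25, 6, 3, 3.0, 0.5, 0.1, 0.02], maxfev=50000)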

  10. The human motor neuron pools receive a dominant slow‐varying common synaptic input

    Science.gov (United States)

    Negro, Francesco; Yavuz, Utku Şükrü

    2016-01-01

    Key points: Motor neurons in a pool receive both common and independent synaptic inputs, although the proportion and role of their common synaptic input is debated. Classic correlation techniques between motor unit spike trains do not measure the absolute proportion of common input and have limitations as a result of the non‐linearity of motor neurons. We propose a method that for the first time allows an accurate quantification of the absolute proportion of low frequency common synaptic input to motor neuron pools. Applied to human muscles, the method shows that motor neuron pools receive a similar, large proportion (>60%) of common input, irrespective of their different functional and control properties. These results increase our knowledge about the role of common and independent input to motor neurons in force control. Abstract: Motor neurons receive both common and independent synaptic inputs. This observation is classically based on the presence of a significant correlation between pairs of motor unit spike trains. The functional significance of different relative proportions of common input across muscles, individuals and conditions is still debated. One of the limitations in our understanding of correlated input to motor neurons is that it has not been possible so far to quantify the absolute proportion of common input with respect to the total synaptic input received by the motor neurons. Indeed, correlation measures of pairs of output spike trains only allow for relative comparisons. In the present study, we report for the first time an approach for measuring the proportion of common input in the low frequency bandwidth. The results show that motor neuron pools receive a large (>60%) proportion of common low frequency oscillations with respect to their total synaptic input. These results suggest that the central nervous system provides a large amount of common input to motor neuron pools, in a similar way to that for muscles with different functional and control properties. PMID:27151459

  11. Comparison of K-means and fuzzy c-means algorithm performance for automated determination of the arterial input function.

    Science.gov (United States)

    Yin, Jiandong; Sun, Hongzan; Yang, Jiawen; Guo, Qiyong

    2014-01-01

    The arterial input function (AIF) plays a crucial role in the quantification of cerebral perfusion parameters. The traditional method for AIF detection is based on manual operation, which is time-consuming and subjective. Two automatic methods have been reported that are based on two frequently used clustering algorithms: fuzzy c-means (FCM) and K-means. However, it is still not clear which is better for AIF detection. Hence, we compared the performance of these two clustering methods using both simulated and clinical data. The results demonstrate that K-means analysis can yield more accurate and robust AIF results, although it takes longer to execute than the FCM method. We consider that this longer execution time is trivial relative to the total time required for image manipulation in a PACS setting, and is acceptable if an ideal AIF is obtained. Therefore, the K-means method is preferable to FCM in AIF detection.
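
    A sketch of the K-means variant of such automated AIF detection (scikit-learn assumed): voxel concentration-time curves are clustered, and the cluster with the earliest, highest peak is taken as arterial. The selection heuristic here is one common choice, not necessarily the paper's.

      import numpy as np
      from sklearn.cluster import KMeans

      def detect_aif(tacs, n_clusters=5):
          """Cluster voxel concentration-time curves (rows of `tacs`) and
          return the mean curve of the most 'arterial' cluster, scored here
          by peak height divided by time-to-peak."""
          km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(tacs)
          centers = km.cluster_centers_
          score = centers.max(axis=1) / (centers.argmax(axis=1) + 1)
          return centers[np.argmax(score)]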

  12. Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model

    Science.gov (United States)

    Nikbay, Melike; Heeg, Jennifer

    2017-01-01

    This paper originates from the joint efforts of an aeroelastic study team of the Applied Vehicle Technology Panel of the NATO Science and Technology Organization (Task Group AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design"). We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided to treat both structural and aerodynamic input parameters as uncertain and represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to assess the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time requirement relative to that required to assess a system deterministically. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework, and implemented reduced order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.

  13. Uncertainty quantification for nuclear density functional theory and information content of new measurements.

    Science.gov (United States)

    McDonnell, J D; Schunck, N; Higdon, D; Sarich, J; Wild, S M; Nazarewicz, W

    2015-03-27

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. The example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.

  14. Uncertainty quantification theory, implementation, and applications

    CERN Document Server

    Smith, Ralph C

    2014-01-01

    The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers ca...

  15. Variance gradients and uncertainty budgets for nonlinear measurement functions with independent inputs

    International Nuclear Information System (INIS)

    Campanelli, Mark; Kacker, Raghu; Kessel, Rüdiger

    2013-01-01

    A novel variance-based measure for global sensitivity analysis, termed a variance gradient (VG), is presented for constructing uncertainty budgets under the Guide to the Expression of Uncertainty in Measurement (GUM) framework for nonlinear measurement functions with independent inputs. The motivation behind VGs is the desire of metrologists to understand which inputs' variance reductions would most effectively reduce the variance of the measurand. VGs are particularly useful when the application of the first supplement to the GUM is indicated because of the inadequacy of measurement function linearization. However, VGs reduce to a commonly understood variance decomposition in the case of a linear(ized) measurement function with independent inputs for which the original GUM readily applies. The usefulness of VGs is illustrated by application to an example from the first supplement to the GUM, as well as to the benchmark Ishigami function. A comparison of VGs to other available sensitivity measures is made. (paper)
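
    A variance gradient dVar(Y)/dVar(Xi) can be checked numerically by Monte Carlo with common random numbers; a sketch on the toy measurement function y = x0*x1, for which the exact gradient with respect to Var(x0) is sd1^2 + mean1^2:

      import numpy as np

      def variance_gradient(f, means, sds, i, rel=0.05, n=200_000, seed=1):
          """Estimate dVar(Y)/dVar(X_i) for a measurement function f with
          independent normal inputs, by a central difference on the i-th
          input variance (common random numbers reduce Monte Carlo noise)."""
          rng = np.random.default_rng(seed)
          z = rng.standard_normal((n, len(means)))

          def var_y(scale_i):
              sd = np.array(sds, float)
              sd[i] *= scale_i
              return f(means + z * sd).var()

          v = np.array(sds, float)[i] ** 2
          return (var_y(1 + rel) - var_y(1 - rel)) / (((1 + rel) ** 2 - (1 - rel) ** 2) * v)

      # Hypothetical example: exact value is 0.2**2 + 2.0**2 = 4.04.
      f = lambda x: x[:, 0] * x[:, 1]
      print(variance_gradient(f, means=[1.0, 2.0], sds=[0.1, 0.2], i=0))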

  16. A Method to Select Test Input Cases for Safety-critical Software

    International Nuclear Information System (INIS)

    Kim, Heeeun; Kang, Hyungook; Son, Hanseong

    2013-01-01

    This paper proposes a new testing methodology for effective and realistic quantification of RPS software failure probability. Software failure probability quantification is an important factor in digital system safety assessment. In this study, the method for software test case generation is briefly described. The test cases generated by this method reflect the characteristics of safety-critical software and past inputs. Furthermore, the number of test cases can be reduced while still permitting an exhaustive test. Aspects of the software can also be reflected in the failure data, so the final failure data can include failures of the software itself as well as external influences. Software reliability is generally accepted as the key factor in software quality since it quantifies software failures, which can make a powerful system inoperative. In the KNITS (Korea Nuclear Instrumentation and Control Systems) project, the software for the fully digitalized reactor protection system (RPS) was developed under a strict procedure including unit testing and coverage measurement. Black box testing is one type of verification and validation (V and V), in which given input values are entered and the resulting output values are compared against the expected output values. Programmable logic controllers (PLCs) were used in implementing critical systems, and function block diagram (FBD) is a commonly used implementation language for PLCs.

  17. A Constrained Genetic Algorithm with Adaptively Defined Fitness Function in MRS Quantification

    Science.gov (United States)

    Papakostas, G. A.; Karras, D. A.; Mertzios, B. G.; Graveron-Demilly, D.; van Ormondt, D.

    MRS signal quantification is a rather involved procedure and has attracted the interest of the medical engineering community regarding the development of computationally efficient methodologies. Significant contributions based on Computational Intelligence tools, such as Neural Networks (NNs), demonstrated good performance but not without drawbacks already discussed by the authors. On the other hand, preliminary application of Genetic Algorithms (GA) has already been reported in the literature by the authors regarding the peak detection problem encountered in MRS quantification using the Voigt line shape model. This paper investigates a novel constrained genetic algorithm involving a generic and adaptively defined fitness function which extends the simple genetic algorithm methodology to the case of noisy signals. The applicability of this new algorithm is scrutinized through experimentation on artificial MRS signals interleaved with noise, regarding its signal fitting capabilities. Although extensive experiments with real-world MRS signals are necessary, the performance shown herein illustrates the method's potential to be established as a generic MRS metabolite quantification procedure.

  18. Automation of RELAP5 input calibration and code validation using genetic algorithm

    International Nuclear Information System (INIS)

    Phung, Viet-Anh; Kööp, Kaspar; Grishchenko, Dmitry; Vorobyev, Yury; Kudinov, Pavel

    2016-01-01

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the
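
    The calibration loop can be illustrated with a small real-coded GA minimizing a normalized, weighted discrepancy over several SRQs; `simulate` below is a hypothetical stand-in for a RELAP5 run, and the operators are simple textbook choices rather than the authors' exact settings.

      import numpy as np

      def calibrate_ga(simulate, exp_srqs, weights, bounds, pop=40, gens=60, seed=0):
          """Real-coded GA for input calibration: find uncertain input
          parameters minimizing a weighted, normalized discrepancy over
          multiple system response quantities (SRQs)."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, float).T
          x = rng.uniform(lo, hi, (pop, len(lo)))

          def fitness(p):
              sim = simulate(p)
              return sum(w * ((sim[k] - v) / v) ** 2        # normalize each SRQ
                         for (k, v), w in zip(exp_srqs.items(), weights))

          for _ in range(gens):
              f = np.array([fitness(p) for p in x])
              parents = x[np.argsort(f)[:pop // 2]]          # truncation selection
              pairs = parents[rng.integers(0, len(parents), (pop - len(parents), 2))]
              children = pairs.mean(axis=1)                  # arithmetic crossover
              children += rng.normal(0, 0.05 * (hi - lo), children.shape)  # mutation
              x = np.vstack([parents, np.clip(children, lo, hi)])
          return x[np.argmin([fitness(p) for p in x])]

      # Hypothetical demo: recover two inputs of a toy "simulator".
      def simulate(p):
          return {"max_flow": p[0] * 2.0, "period": p[1] + 1.0}

      best = calibrate_ga(simulate, {"max_flow": 1.6, "period": 1.25},
                          weights=[1.0, 1.0], bounds=[(0.0, 2.0), (0.0, 1.0)])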

  19. Automation of RELAP5 input calibration and code validation using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Phung, Viet-Anh, E-mail: vaphung@kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Kööp, Kaspar, E-mail: kaspar@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Grishchenko, Dmitry, E-mail: dmitry@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Vorobyev, Yury, E-mail: yura3510@gmail.com [National Research Center “Kurchatov Institute”, Kurchatov square 1, Moscow 123182 (Russian Federation); Kudinov, Pavel, E-mail: pavel@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden)

    2016-04-15

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously the experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in the application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs), taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and the dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so-called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used a genetic algorithm (GA), a heuristic global optimization method, to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters. We demonstrate the importance of the proper selection of SRQs and the respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the

  20. Uncertainty quantification for nuclear density functional theory and information content of new measurements

    Energy Technology Data Exchange (ETDEWEB)

    McDonnell, J. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schunck, N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Higdon, D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sarich, J. [Argonne National Lab. (ANL), Argonne, IL (United States); Wild, S. M. [Argonne National Lab. (ANL), Argonne, IL (United States); Nazarewicz, W. [Michigan State Univ., East Lansing, MI (United States); Oak Ridge National Lab., Oak Ridge, TN (United States); Univ. of Warsaw, Warsaw (Poland)

    2015-03-24

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. As a result, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.
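
    The sketch below illustrates the emulator-based workflow on a one-parameter toy problem: a Gaussian process is trained on a handful of expensive chi-square evaluations, a Metropolis sampler draws posterior samples from the emulated surface, and the samples are propagated to a derived prediction. The model, prior and proposal width are illustrative assumptions, not the Skyrme-functional setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

def chi_square(theta):                 # stand-in for an expensive model evaluation
    return (theta - 0.7) ** 2 / 0.05 ** 2

design = np.linspace(0, 1.5, 15)[:, None]             # design points
gp = GaussianProcessRegressor(kernel=RBF(0.3)).fit(design, chi_square(design[:, 0]))

# Metropolis sampling on the emulated log-posterior (flat prior on [0, 1.5])
theta, chain = 0.5, []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.05)
    if 0 <= prop <= 1.5:
        d = gp.predict(np.array([[prop]]))[0] - gp.predict(np.array([[theta]]))[0]
        if np.log(rng.random()) < -0.5 * d:           # likelihood ~ exp(-chi2/2)
            theta = prop
    chain.append(theta)
samples = np.array(chain[1000:])                      # discard burn-in

prediction = 2.0 * samples + 1.0                      # toy derived observable
print(f"theta = {samples.mean():.3f} +/- {samples.std():.3f}")
print(f"propagated prediction = {prediction.mean():.3f} +/- {prediction.std():.3f}")
```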

  1. Assessment of input function distortions on kinetic model parameters in simulated dynamic 82Rb PET perfusion studies

    International Nuclear Information System (INIS)

    Meyer, Carsten; Peligrad, Dragos-Nicolae; Weibrecht, Martin

    2007-01-01

    Cardiac 82Rb dynamic PET studies allow quantifying absolute myocardial perfusion by using tracer kinetic modeling. Here, the accurate measurement of the input function, i.e. the tracer concentration in blood plasma, is a major challenge. This measurement is degraded by inappropriate temporal sampling, spillover, etc. Such effects may influence the measured input peak value and the measured blood pool clearance. The aim of our study is to evaluate the effect of input function distortions on the myocardial perfusion estimated by the model. To this end, we simulate noise-free myocardium time activity curves (TACs) with a two-compartment kinetic model. The input function to the model is a generic analytical function. Distortions of this function have been introduced by varying its parameters. Using the distorted input function, the compartment model has been fitted to the simulated myocardium TAC. This analysis has been performed for various sets of model parameters covering a physiologically relevant range. The evaluation shows that a ±10% error in the input peak value can easily lead to a ±10-25% error in the model parameter K1, which relates to myocardial perfusion. Variations in the input function tail are generally less relevant. We conclude that an accurate estimation especially of the plasma input peak is crucial for a reliable kinetic analysis and blood flow estimation
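
    A minimal sketch of the simulation idea, reduced to a one-tissue compartment model: a noise-free TAC is generated from a gamma-variate input, then refitted with an input whose peak is scaled by +10% (a uniform scaling, the simplest peak distortion), showing the induced bias in K1. Input shape and rate constants are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 300, 601)                   # s
dt = t[1] - t[0]

def input_fn(t, peak):                         # generic analytical input function
    return peak * t * np.exp(-t / 20.0)

def tissue(t, K1, k2, cp):
    """One-tissue model: C_t = K1 * (exp(-k2*t) convolved with C_p)."""
    return K1 * np.convolve(cp, np.exp(-k2 * t))[:t.size] * dt

cp_true = input_fn(t, peak=1.0)
tac = tissue(t, 0.8, 0.15, cp_true)            # noise-free "measured" TAC

cp_bad = input_fn(t, peak=1.1)                 # +10% error in the input peak
popt, _ = curve_fit(lambda tt, K1, k2: tissue(tt, K1, k2, cp_bad),
                    t, tac, p0=[0.5, 0.1])
print(f"fitted K1 = {popt[0]:.3f} (true 0.80), bias = {100*(popt[0]/0.8-1):+.1f}%")
```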

  2. The cerebral blood flow measurement without absolute input function value for PET O-15 injection method

    International Nuclear Information System (INIS)

    Matsuda, Tadashige

    2004-01-01

    This paper presents a method for measuring the cerebral blood flow (CBF) of a region of interest (ROI) using PET data and an input function that does not require the absolute radioactivity concentration. The values of the input function and the output function are determined from the clinical data by regression analysis. The input function and the output function are transformed by the Fourier transform. The transfer function of the differential equation of the compartment model is obtained from these Fourier transforms. The CBF can then be estimated by regression analysis of the transfer function. Results are compared between the proposed and conventional methods. (author)
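
    A hedged sketch of the Fourier-domain idea: for a one-compartment model y' = f·u - (f/λ)·y, the transfer function is H(ω) = f/(iω + f/λ), so Im(1/H) = ω/f and a linear regression of Im(1/H) on ω recovers the flow f regardless of the (arbitrary) input scale. Values, sampling and the low-frequency band are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

t = np.linspace(0, 3000, 4096)                 # s
dt = t[1] - t[0]
f_true, lam = 0.009, 0.9                       # flow (1/s) and partition coefficient

u = t * np.exp(-t / 30.0)                      # input function in arbitrary units
y = np.zeros_like(u)                           # output: explicit Euler on the ODE
for k in range(1, t.size):
    y[k] = y[k-1] + dt * (f_true * u[k-1] - (f_true / lam) * y[k-1])

U, Y = np.fft.rfft(u), np.fft.rfft(y)
w = 2 * np.pi * np.fft.rfftfreq(t.size, dt)    # angular frequencies
H = Y / U                                      # empirical transfer function
band = slice(1, 40)                            # low-frequency band
# Im(1/H) = w/f, so the regression slope estimates 1/f
slope = np.polyfit(w[band], (1.0 / H[band]).imag, 1)[0]
print(f"estimated flow f = {1/slope:.4f} /s (true {f_true})")
```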

  3. EMP damage function of input port of DC solid state relay

    International Nuclear Information System (INIS)

    Sun Beiyun; Chen Xiangyue; Mao Congguang; Zhou Hui

    2009-01-01

    The principle of using a pivotal quantity to estimate a confidence interval for the cumulative distribution function at a specific value, when the distribution is assumed to be normal, is introduced. The damage function of the input port of a DC solid state relay is calculated by this method. This method can be used for vulnerability assessment. (authors)
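
    A minimal sketch of the pivotal-quantity construction: for normally distributed damage thresholds, sqrt(n)·(x0 - xbar)/s follows a noncentral t distribution with noncentrality δ = sqrt(n)·(x0 - μ)/σ, so inverting its CDF in δ yields a confidence interval for the failure probability P(X ≤ x0) = Φ(δ/sqrt(n)). The sample values are toy data, not measured relay thresholds.

```python
import numpy as np
from scipy.stats import nct, norm
from scipy.optimize import brentq

x = np.array([48., 52., 55., 50., 47., 53., 51., 49.])  # toy damage thresholds
x0, alpha = 45.0, 0.05                                   # stress level of interest
n, xbar, s = len(x), x.mean(), x.std(ddof=1)
t_obs = np.sqrt(n) * (x0 - xbar) / s                     # observed pivotal value

def delta_for(prob):
    # noncentrality delta such that nct.cdf(t_obs; df=n-1, delta) == prob
    return brentq(lambda d: nct.cdf(t_obs, n - 1, d) - prob, -50, 50)

d_lo, d_hi = delta_for(1 - alpha / 2), delta_for(alpha / 2)
ci = norm.cdf(np.array([d_lo, d_hi]) / np.sqrt(n))       # map delta -> probability
print(f"P(damage at {x0}): point estimate {norm.cdf((x0 - xbar) / s):.4f}, "
      f"95% CI [{ci[0]:.4f}, {ci[1]:.4f}]")
```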

  4. Uncertainty Quantification in Numerical Aerodynamics

    KAUST Repository

    Litvinenko, Alexander

    2017-05-16

    We consider the uncertainty quantification problem in aerodynamic simulations. We identify input uncertainties, classify them, suggest an appropriate statistical model and, finally, estimate the propagation of these uncertainties into the solution (pressure, velocity and density fields as well as the lift and drag coefficients). The deterministic problem under consideration is a compressible transonic Reynolds-averaged Navier-Stokes flow around an airfoil with random/uncertain data. Input uncertainties include: uncertain angle of attack, the Mach number, random perturbations in the airfoil geometry, mesh, shock location, turbulence model and parameters of this turbulence model. This problem requires efficient numerical/statistical methods since it is computationally expensive, especially for the uncertainties caused by random geometry variations, which involve a large number of variables. In the numerical section we compare five methods (quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature, a gradient-enhanced version of Kriging, radial basis functions, and point-collocation polynomial chaos) in their efficiency in estimating statistics of aerodynamic performance under random perturbation of the airfoil geometry [D. Liu et al. '17]. For the modeling we used the TAU code, developed at DLR, Germany.
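
    As a small illustration of one of the compared methods, the sketch below propagates an uncertain angle of attack and Mach number through a cheap lift-coefficient surrogate using scrambled Sobol quasi-Monte Carlo sampling. The surrogate and the uncertainty ranges are illustrative assumptions, not the TAU-code setup.

```python
import numpy as np
from scipy.stats import qmc

def lift_surrogate(alpha_deg, mach):
    """Toy stand-in for the RANS solve: thin-airfoil lift slope with a
    Prandtl-Glauert-like compressibility correction."""
    return 2 * np.pi * np.deg2rad(alpha_deg) / np.sqrt(np.abs(1 - mach ** 2))

sampler = qmc.Sobol(d=2, scramble=True, seed=3)
u = sampler.random_base2(m=10)                        # 1024 low-discrepancy points
# scale to alpha in [1.5, 2.5] deg and Mach in [0.70, 0.76]
x = qmc.scale(u, [1.5, 0.70], [2.5, 0.76])
cl = lift_surrogate(x[:, 0], x[:, 1])
print(f"E[CL] = {cl.mean():.4f}, Std[CL] = {cl.std():.4f}")
```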

  5. Cerebral blood flow with [15O]water PET studies using an image-derived input function and MR-defined carotid centerlines

    Science.gov (United States)

    Fung, Edward K.; Carson, Richard E.

    2013-03-01

    average aRC values, the means were unchanged, and intersubject variability was noticeably reduced. This MR-based centerline method with local re-registration to [15O]water PET yields a consistent IDIF over multiple injections in the same subject, thus permitting the absolute quantification of CBF without arterial input function measurements.

  6. Fully automated calculation of image-derived input function in simultaneous PET/MRI in a sheep model

    International Nuclear Information System (INIS)

    Jochimsen, Thies H.; Zeisig, Vilia; Schulz, Jessica; Werner, Peter; Patt, Marianne; Patt, Jörg; Dreyer, Antje Y.; Boltze, Johannes; Barthel, Henryk; Sabri, Osama; Sattler, Bernhard

    2016-01-01

    Obtaining the arterial input function (AIF) from image data in dynamic positron emission tomography (PET) examinations is a non-invasive alternative to arterial blood sampling. In simultaneous PET/magnetic resonance imaging (PET/MRI), high-resolution MRI angiographies can be used to define major arteries for correction of partial-volume effects (PVE) and point spread function (PSF) response in the PET data. The present study describes a fully automated method to obtain the image-derived input function (IDIF) in PET/MRI. Results are compared to those obtained by arterial blood sampling. To segment the trunk of the major arteries in the neck, a high-resolution time-of-flight MRI angiography was postprocessed by a vessel-enhancement filter based on the inertia tensor. Together with the measured PSF of the PET subsystem, the arterial mask was used for geometrical deconvolution, yielding the time-resolved activity concentration averaged over a major artery. The method was compared to manual arterial blood sampling at the hind leg of 21 sheep (animal stroke model) during measurement of blood flow with [15O]water. Absolute quantification of activity concentration was compared after bolus passage during steady state, i.e., between 2.5- and 5-min post injection. Cerebral blood flow (CBF) values from blood sampling and IDIF were also compared. The cross-calibration factor obtained by comparing activity concentrations in blood samples and IDIF during steady state is 0.98 ± 0.10. In all examinations, the IDIF provided a much earlier and sharper bolus peak than in the time course of activity concentration obtained by arterial blood sampling. CBF using the IDIF was 22% higher than CBF obtained by using the AIF yielded by blood sampling. The small deviation between arterial blood sampling and IDIF during steady state indicates that correction of PVE and PSF is possible with the method presented. The differences in bolus dynamics and, hence, CBF values can be explained by the
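
    A 1D sketch of the geometrical idea: blurring the artery and background masks with the known PET PSF gives the recovery and spillover coefficients, and inverting that 2×2 mixing system frame by frame recovers the arterial curve. Geometry, PSF width and curves are illustrative assumptions; this is a region-based simplification, not the paper's full deconvolution.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.arange(200)                                   # 1 mm pixels
artery = ((x > 95) & (x < 105)).astype(float)        # ~9 mm vessel mask
backgr = 1.0 - artery
psf_sigma = 3.0                                      # mm, measured PET PSF

t = np.linspace(0, 300, 60)
a_true = 100 * t * np.exp(-t / 25)                   # arterial TAC
b_true = 40 * (1 - np.exp(-t / 60))                  # surrounding-tissue TAC

# simulated PET frames: true activities through the PSF
img = (a_true[:, None] * gaussian_filter1d(artery, psf_sigma)
       + b_true[:, None] * gaussian_filter1d(backgr, psf_sigma))

# mixing matrix: mean of each blurred mask over each ROI
rois = [artery > 0.5, backgr > 0.5]
M = np.array([[gaussian_filter1d(m, psf_sigma)[r].mean()
               for m in (artery, backgr)] for r in rois])
meas = np.array([[img[k][r].mean() for r in rois] for k in range(t.size)])
recovered = np.linalg.solve(M, meas.T)[0]            # corrected arterial TAC
print("peak recovery before/after correction:",
      round(meas[:, 0].max() / a_true.max(), 2),
      round(recovered.max() / a_true.max(), 2))
```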

  7. Fully automated calculation of image-derived input function in simultaneous PET/MRI in a sheep model

    Energy Technology Data Exchange (ETDEWEB)

    Jochimsen, Thies H.; Zeisig, Vilia [Department of Nuclear Medicine, Leipzig University Hospital, Liebigstr. 18, Leipzig (Germany); Schulz, Jessica [Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, Leipzig, D-04103 (Germany); Werner, Peter; Patt, Marianne; Patt, Jörg [Department of Nuclear Medicine, Leipzig University Hospital, Liebigstr. 18, Leipzig (Germany); Dreyer, Antje Y. [Fraunhofer Institute of Cell Therapy and Immunology, Perlickstr. 1, Leipzig, D-04103 (Germany); Translational Centre for Regenerative Medicine, University Leipzig, Philipp-Rosenthal-Str. 55, Leipzig, D-04103 (Germany); Boltze, Johannes [Fraunhofer Institute of Cell Therapy and Immunology, Perlickstr. 1, Leipzig, D-04103 (Germany); Translational Centre for Regenerative Medicine, University Leipzig, Philipp-Rosenthal-Str. 55, Leipzig, D-04103 (Germany); Fraunhofer Research Institution of Marine Biotechnology and Institute for Medical and Marine Biotechnology, University of Lübeck, Lübeck (Germany); Barthel, Henryk; Sabri, Osama; Sattler, Bernhard [Department of Nuclear Medicine, Leipzig University Hospital, Liebigstr. 18, Leipzig (Germany)

    2016-02-13

    Obtaining the arterial input function (AIF) from image data in dynamic positron emission tomography (PET) examinations is a non-invasive alternative to arterial blood sampling. In simultaneous PET/magnetic resonance imaging (PET/MRI), high-resolution MRI angiographies can be used to define major arteries for correction of partial-volume effects (PVE) and point spread function (PSF) response in the PET data. The present study describes a fully automated method to obtain the image-derived input function (IDIF) in PET/MRI. Results are compared to those obtained by arterial blood sampling. To segment the trunk of the major arteries in the neck, a high-resolution time-of-flight MRI angiography was postprocessed by a vessel-enhancement filter based on the inertia tensor. Together with the measured PSF of the PET subsystem, the arterial mask was used for geometrical deconvolution, yielding the time-resolved activity concentration averaged over a major artery. The method was compared to manual arterial blood sampling at the hind leg of 21 sheep (animal stroke model) during measurement of blood flow with [15O]water. Absolute quantification of activity concentration was compared after bolus passage during steady state, i.e., between 2.5- and 5-min post injection. Cerebral blood flow (CBF) values from blood sampling and IDIF were also compared. The cross-calibration factor obtained by comparing activity concentrations in blood samples and IDIF during steady state is 0.98 ± 0.10. In all examinations, the IDIF provided a much earlier and sharper bolus peak than in the time course of activity concentration obtained by arterial blood sampling. CBF using the IDIF was 22% higher than CBF obtained by using the AIF yielded by blood sampling. The small deviation between arterial blood sampling and IDIF during steady state indicates that correction of PVE and PSF is possible with the method presented. The differences in bolus dynamics and, hence, CBF values can be explained by the

  8. Targeted quantification of functional enzyme dynamics in environmental samples for microbially mediated biogeochemical processes: Targeted quantification of functional enzyme dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Li, Minjing [School of Environmental Studies, China University of Geosciences, Wuhan 430074 People' s Republic of China; Gao, Yuqian [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Qian, Wei-Jun [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Shi, Liang [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Liu, Yuanyuan [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Nelson, William C. [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Nicora, Carrie D. [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Resch, Charles T. [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Thompson, Christopher [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Yan, Sen [School of Environmental Studies, China University of Geosciences, Wuhan 430074 People' s Republic of China; Fredrickson, James K. [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Zachara, John M. [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Liu, Chongxuan [Pacific Northwest National Laboratory, Richland, WA 99354 USA; School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055 People' s Republic of China

    2017-07-13

    Microbially mediated biogeochemical processes are catalyzed by enzymes that control the transformation of carbon, nitrogen, and other elements in environment. The dynamic linkage between enzymes and biogeochemical species transformation has, however, rarely been investigated because of the lack of analytical approaches to efficiently and reliably quantify enzymes and their dynamics in soils and sediments. Herein, we developed a signature peptide-based technique for sensitively quantifying dissimilatory and assimilatory enzymes using nitrate-reducing enzymes in a hyporheic zone sediment as an example. Moreover, the measured changes in enzyme concentration were found to correlate with the nitrate reduction rate in a way different from that inferred from biogeochemical models based on biomass or functional genes as surrogates for functional enzymes. This phenomenon has important implications for understanding and modeling the dynamics of microbial community functions and biogeochemical processes in environments. Our results also demonstrate the importance of enzyme quantification for the identification and interrogation of those biogeochemical processes with low metabolite concentrations as a result of faster enzyme-catalyzed consumption of metabolites than their production. The dynamic enzyme behaviors provide a basis for the development of enzyme-based models to describe the relationship between the microbial community and biogeochemical processes.

  9. Determination of arterial input function in dynamic susceptibility contrast MRI using group independent component analysis technique

    International Nuclear Information System (INIS)

    Chen, S.; Liu, H.-L.; Yang Yihong; Hsu, Y.-Y.; Chuang, K.-S.

    2006-01-01

    Quantification of cerebral blood flow (CBF) with dynamic susceptibility contrast (DSC) magnetic resonance imaging (MRI) requires the determination of the arterial input function (AIF). The segmentation of surrounding tissue by manual selection is error-prone due to partial volume artifacts. Independent component analysis (ICA) has the advantage of automatically decomposing the signals into interpretable components. Recently, the group ICA technique has been applied to fMRI studies and showed reduced variance caused by motion artifacts and noise. In this work, we investigated the feasibility and efficacy of using the group ICA technique to extract the AIF. Both simulated and in vivo data were analyzed in this study. The simulated data of eight phantoms were generated using randomized lesion locations and time activity curves. The clinical data were obtained from spin-echo EPI MR scans performed in seven normal subjects. The group ICA technique was applied by concatenating the data across the seven subjects. The AIFs were calculated from the weighted average of the signals in the region selected by ICA. Preliminary results of this study showed that the group ICA technique could not extract accurate AIF information from regions around the vessel. The mismatched locations of vessels within the group reduced the benefits of the group study.
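
    Although the study found that group ICA could not reliably extract the AIF, the underlying decomposition step is easy to illustrate: the sketch below unmixes simulated voxel time courses (random mixtures of an arterial and a tissue curve) with FastICA and picks the earliest-peaking component as the AIF candidate. Curves, mixing weights and noise are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
t = np.arange(60)                                    # frames
rise = np.clip(t - 5, 0, None)
aif = 10.0 * rise * np.exp(-rise / 3.0)              # early, sharp arterial curve
tissue = np.convolve(aif, np.exp(-t / 8.0))[:60] / 10.0  # delayed, dispersed

A = rng.uniform(0, 1, size=(500, 2))                 # 500 voxels, random mixtures
X = A @ np.vstack([aif, tissue]) + rng.normal(0, 0.5, (500, 60))

ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X.T).T                         # components x time
flip = np.where(np.abs(S.max(axis=1)) >= np.abs(S.min(axis=1)), 1.0, -1.0)
S *= flip[:, None]                                   # fix ICA sign indeterminacy
aif_est = S[np.argmin(np.argmax(S, axis=1))]         # earliest-peaking component
print("estimated AIF peak frame:", np.argmax(aif_est), "| true:", np.argmax(aif))
```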

  10. Distance-Ranked Fault Identification of Reconfigurable Hardware Bitstreams via Functional Input

    Directory of Open Access Journals (Sweden)

    Naveed Imran

    2014-01-01

    Distance-Ranked Fault Identification (DRFI) is a dynamic reconfiguration technique which employs runtime inputs to conduct online functional testing of fielded FPGA logic and interconnect resources without test vectors. At design time, a diverse set of functionally identical bitstream configurations is created which utilize alternate hardware resources in the FPGA fabric. An ordering is imposed on the configuration pool, updated by the PageRank indexing precedence. The configurations which utilize permanently damaged resources, and hence manifest discrepant outputs, receive a lower rank and are thus less preferred for instantiation on the FPGA. Results indicate accurate identification of fault-free configurations in a pool of pregenerated bitstreams with a low number of reconfigurations and input evaluations. For MCNC benchmark circuits, the observed reduction in input evaluations is up to 75% when comparing the DRFI technique to unguided evaluation. The DRFI diagnosis method is seen to isolate all 14 healthy configurations from a pool of 100 pregenerated configurations, thereby offering 100% isolation accuracy provided the fault-free configurations exist in the design pool. When complete recovery is not feasible, graceful degradation may be realized, as demonstrated by the PSNR improvement of images processed in a video encoder case study.

  11. Synthesis of nanodiamond derivatives carrying amino functions and quantification by a modified Kaiser test

    Directory of Open Access Journals (Sweden)

    Gerald Jarre

    2014-11-01

    Nanodiamonds functionalized with different organic moieties carrying terminal amino groups have been synthesized. These include conjugates generated by Diels–Alder reactions of ortho-quinodimethanes formed in situ from pyrazine and 5,6-dihydrocyclobuta[d]pyrimidine derivatives. For the quantification of primary amino groups a modified photometric assay based on the Kaiser test has been developed and validated for different types of aminated nanodiamond. The results correspond well to values obtained by thermogravimetry. The method represents an alternative wet-chemical quantification method in cases where other techniques like elemental analysis fail due to unfavourable combustion behaviour of the analyte or other impediments.

  12. Multi detector input and function generator for polarized neutron experiments

    International Nuclear Information System (INIS)

    De Blois, J.; Beunes, A.J.H.; Ende, P. v.d.; Osterholt, E.A.; Rekveldt, M.T.; Schipper, M.N.; Velthuis, S.G.E. te

    1998-01-01

    In this paper a VME module is described for static or stroboscopic measurements with a neutron scattering instrument, consisting essentially of a series of up to 64 3He neutron detectors around a sample environment. Each detector is provided with an amplifier and a discriminator to separate the neutrons from noise. To reduce the wiring, the discriminator outputs are connected to the module by coding boxes. Two 16-input to one-output coding boxes generate serial output codes on a fiber optic connection. This inherently fast connection reduces the dead time introduced by the coding, as well as the influence of environmental noise. In stroboscopic measurements a periodic function is used to affect the sample, which is surrounded by a field coil. Each detected neutron is labeled with a data label containing the detector number and the time of detection with respect to a time reference. The data time base can be programmed on a linear or a nonlinear scale. An external source or an attribute of the periodic function may generate the time reference pulse. The function is generated by a 12-bit DAC connected to the output of an 8 K × 16-bit memory, in which the pattern of the current has been stored beforehand. The function memory is scanned by the programmable function time base. Attributes are set by the four remaining bits of the memory. A separate detector input connects a monitor detector in the neutron beam to a 32-bit counter/timer that provides measurement with a preset count, preset time or preset frame. (orig.)

  13. Quantification of discreteness effects in cosmological N-body simulations: Initial conditions

    International Nuclear Information System (INIS)

    Joyce, M.; Marcos, B.

    2007-01-01

    The relation between the results of cosmological N-body simulations, and the continuum theoretical models they simulate, is currently not understood in a way which allows a quantification of N-dependent effects. In this first of a series of papers on this issue, we consider the quantification of such effects in the initial conditions of such simulations. A general formalism developed in [A. Gabrielli, Phys. Rev. E 70, 066131 (2004)] allows us to write down an exact expression for the power spectrum of the point distributions generated by the standard algorithm for generating such initial conditions. Expanded perturbatively in the amplitude of the input (i.e. theoretical, continuum) power spectrum, we obtain at linear order the input power spectrum, plus two terms which arise from discreteness and contribute at large wave numbers. For cosmological-type power spectra, one obtains, as expected, the input spectrum for wave numbers k smaller than that characteristic of the discreteness. The comparison of real space correlation properties is more subtle because the discreteness corrections are not as strongly localized in real space. For cosmological-type spectra the theoretical mass variance in spheres and two-point correlation function are well approximated above a finite distance. For typical initial amplitudes this distance is a few times the interparticle distance, but it diverges as this amplitude (or, equivalently, the initial redshift of the cosmological simulation) goes to zero, at fixed particle density. We discuss briefly the physical significance of these discreteness terms in the initial conditions, in particular, with respect to the definition of the continuum limit of N-body simulations

  14. Compositional Solution Space Quantification for Probabilistic Software Analysis

    Science.gov (United States)

    Borges, Mateus; Pasareanu, Corina S.; Filieri, Antonio; d'Amorim, Marcelo; Visser, Willem

    2014-01-01

    Probabilistic software analysis aims at quantifying how likely a target event is to occur during program execution. Current approaches rely on symbolic execution to identify the conditions to reach the target event and try to quantify the fraction of the input domain satisfying these conditions. Precise quantification is usually limited to linear constraints, while only approximate solutions can be provided in general through statistical approaches. However, statistical approaches may fail to converge to an acceptable accuracy within a reasonable time. We present a compositional statistical approach for the efficient quantification of solution spaces for arbitrarily complex constraints over bounded floating-point domains. The approach leverages interval constraint propagation to improve the accuracy of the estimation by focusing the sampling on the regions of the input domain containing the sought solutions. Preliminary experiments show significant improvement on previous approaches both in results accuracy and analysis time.
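
    A minimal sketch of the compositional idea: bisect the input box, use interval bounds to classify sub-boxes as fully inside or outside the constraint, and spend Monte Carlo samples only on the undecided boundary boxes. The constraint x² + y² ≤ 1 over [-2, 2]² stands in for a path condition; the true satisfying fraction is π/16.

```python
import numpy as np

rng = np.random.default_rng(5)

def sq_bounds(a, b):                      # interval bounds of x**2 on [a, b]
    cands = np.array([a * a, b * b])
    low = 0.0 if a <= 0 <= b else cands.min()
    return low, cands.max()

def classify(box):
    (x0, x1), (y0, y1) = box
    lo = sq_bounds(x0, x1)[0] + sq_bounds(y0, y1)[0]
    hi = sq_bounds(x0, x1)[1] + sq_bounds(y0, y1)[1]
    return "in" if hi <= 1 else ("out" if lo > 1 else "split")

boxes, inside, depth = [((-2.0, 2.0), (-2.0, 2.0))], 0.0, 6
for _ in range(depth):                    # breadth-first bisection
    nxt = []
    for (x0, x1), (y0, y1) in boxes:
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        for b in [((x0, xm), (y0, ym)), ((xm, x1), (y0, ym)),
                  ((x0, xm), (ym, y1)), ((xm, x1), (ym, y1))]:
            c = classify(b)
            if c == "in":                 # whole sub-box satisfies the constraint
                inside += (b[0][1] - b[0][0]) * (b[1][1] - b[1][0])
            elif c == "split":
                nxt.append(b)
    boxes = nxt

for (x0, x1), (y0, y1) in boxes:          # sample only the undecided boxes
    pts = rng.uniform([x0, y0], [x1, y1], size=(200, 2))
    frac = np.mean(pts[:, 0] ** 2 + pts[:, 1] ** 2 <= 1)
    inside += frac * (x1 - x0) * (y1 - y0)

print(f"estimated fraction = {inside / 16:.4f} (exact {np.pi / 16:.4f})")
```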

  15. How the type of input function affects the dynamic response of conducting polymer actuators

    Science.gov (United States)

    Xiang, Xingcan; Alici, Gursel; Mutlu, Rahim; Li, Weihua

    2014-10-01

    There has been a growing interest in smart actuators typified by conducting polymer actuators, especially in their (i) fabrication, modeling and control with minimum external data and (ii) applications in bio-inspired devices, robotics and mechatronics. Their control is a challenging research problem due to the complex and nonlinear properties of these actuators, which cannot be predicted accurately. Based on an input-shaping technique, we propose a new method to improve the conducting polymer actuators’ command-following ability, while minimizing their electric power consumption. We applied four input functions with smooth characteristics to a trilayer conducting polymer actuator to experimentally evaluate its command-following ability under an open-loop control strategy and a simulated feedback control strategy, and, more importantly, to quantify how the type of input function affects the dynamic response of this class of actuators. We have found that the four smooth inputs consume less electrical power than sharp inputs such as a step input with discontinuous higher-order derivatives. We also obtained an improved transient response performance from the smooth inputs, especially under the simulated feedback control strategy, which we have proposed previously [X. Xiang, R. Mutlu, G. Alici and W. Li 2014 “Control of conducting polymer actuators without physical feedback: simulated feedback control approach with particle swarm optimization”, Smart Materials and Structures, 23]. The idea of using a smooth input command, which results in lower power consumption and better control performance, can be extended to other smart actuators. Consuming less electrical energy or power will have a direct effect on enhancing the operational life of these actuators.
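
    A toy illustration of why smooth inputs help, using a series-RC stand-in for the actuator's electrical side: a step and a minimum-jerk-like profile of equal final amplitude are compared on supply energy and peak current. Component values and the profile are illustrative assumptions, not the actuator model from the paper.

```python
import numpy as np

t = np.linspace(0, 5, 5001)
dt = t[1] - t[0]
R, C, V = 10.0, 0.05, 1.0                       # ohm, farad, volt (toy values)

step = np.where(t >= 0.5, V, 0.0)
s = np.clip((t - 0.5) / 2.0, 0.0, 1.0)          # rises smoothly over 2 s
smooth = V * (10 * s**3 - 15 * s**4 + 6 * s**5) # minimum-jerk-like profile

def run(u):
    vc = np.zeros_like(u)                       # capacitor voltage
    for k in range(1, u.size):                  # Euler integration of RC charging
        vc[k] = vc[k-1] + dt * (u[k-1] - vc[k-1]) / (R * C)
    i = (u - vc) / R                            # supply current through R
    return (u * i).sum() * dt, np.abs(i).max()  # supply energy, peak current

for name, u in [("step", step), ("smooth", smooth)]:
    e, ipk = run(u)
    print(f"{name:6s}: energy {e*1e3:6.1f} mJ, peak current {ipk*1e3:6.1f} mA")
```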

  16. How the type of input function affects the dynamic response of conducting polymer actuators

    International Nuclear Information System (INIS)

    Xiang, Xingcan; Alici, Gursel; Mutlu, Rahim; Li, Weihua

    2014-01-01

    There has been a growing interest in smart actuators typified by conducting polymer actuators, especially in their (i) fabrication, modeling and control with minimum external data and (ii) applications in bio-inspired devices, robotics and mechatronics. Their control is a challenging research problem due to the complex and nonlinear properties of these actuators, which cannot be predicted accurately. Based on an input-shaping technique, we propose a new method to improve the conducting polymer actuators’ command-following ability, while minimizing their electric power consumption. We applied four input functions with smooth characteristics to a trilayer conducting polymer actuator to experimentally evaluate its command-following ability under an open-loop control strategy and a simulated feedback control strategy, and, more importantly, to quantify how the type of input function affects the dynamic response of this class of actuators. We have found that the four smooth inputs consume less electrical power than sharp inputs such as a step input with discontinuous higher-order derivatives. We also obtained an improved transient response performance from the smooth inputs, especially under the simulated feedback control strategy, which we have proposed previously [X. Xiang, R. Mutlu, G. Alici and W. Li 2014 “Control of conducting polymer actuators without physical feedback: simulated feedback control approach with particle swarm optimization”, Smart Materials and Structures, 23]. The idea of using a smooth input command, which results in lower power consumption and better control performance, can be extended to other smart actuators. Consuming less electrical energy or power will have a direct effect on enhancing the operational life of these actuators. (paper)

  17. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    Science.gov (United States)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

    A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space-searching capabilities of genetic algorithms, they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
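
    A compact sketch of the approach: candidate inputs are encoded as a bit mask, fitness is the cross-validated error of a cheap approximator (ridge regression standing in for the neural network, to keep the example fast), and standard GA operators search the mask space. Data, parsimony penalty and GA settings are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 12))                       # 12 candidate inputs
y = 2*X[:, 0] - 1.5*X[:, 3] + 0.8*X[:, 7] + rng.normal(0, 0.1, 300)

def fitness(mask):
    if not mask.any():
        return -1e9                                  # empty input set is invalid
    score = cross_val_score(Ridge(), X[:, mask], y, cv=3,
                            scoring="neg_mean_squared_error").mean()
    return score - 0.01 * mask.sum()                 # penalize large input sets

pop = rng.random((30, 12)) < 0.5                     # random initial bit masks
for gen in range(25):
    f = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(f)[::-1][:10]]          # truncation selection
    children = []
    while len(children) < len(pop) - 1:
        a, b = parents[rng.integers(0, 10, 2)]
        child = np.where(rng.random(12) < 0.5, a, b) # uniform crossover
        child ^= rng.random(12) < 0.05               # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents[:1], children])         # keep the elite

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected inputs:", np.flatnonzero(best))      # expect [0, 3, 7]
```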

  18. Evaluation of severe accident risks: Quantification of major input parameters: MAACS [MELCOR Accident Consequence Code System] input

    International Nuclear Information System (INIS)

    Sprung, J.L.; Jow, H-N; Rollstin, J.A.; Helton, J.C.

    1990-12-01

    Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs

  19. Estimating the basilar-membrane input-output function in normal-hearing and hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Dau, Torsten

    To partly characterize the function of cochlear processing in humans, the basilar membrane (BM) input-output function can be estimated. In recent studies, forward masking has been used to estimate BM compression. If an on-frequency masker is processed compressively, while an off-frequency masker is transformed more linearly, the ratio between the slopes of growth of masking (GOM) functions provides an estimate of BM compression at the signal frequency. In this study, this paradigm is extended to also estimate the knee-point of the I/O-function between linear processing at low levels and compressive processing at medium levels. If a signal can be masked by a low-level on-frequency masker such that signal and masker fall in the linear region of the I/O-function, then a steeper GOM function is expected. The knee-point can then be estimated in the input level region where the GOM changes significantly...
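
    A minimal sketch of the estimation idea: simulate an I/O-style curve that is linear below a knee-point and compressive above it, then recover the knee by a two-segment least-squares fit and read the compression ratio from the upper-segment slope. Levels, knee and noise are illustrative assumptions, not the masking data from the study.

```python
import numpy as np

rng = np.random.default_rng(7)
knee_true, comp_true = 35.0, 0.25                   # dB SPL, dB/dB

lvl = np.arange(20, 81, 5.0)                        # input levels (dB SPL)
out = np.where(lvl < knee_true, lvl,
               knee_true + comp_true * (lvl - knee_true))
out += rng.normal(0, 1.0, lvl.size)                 # 1 dB measurement jitter

def sse(knee):
    err = 0.0
    for seg in (lvl < knee, lvl >= knee):           # fit a line to each segment
        if seg.sum() > 1:
            p = np.polyfit(lvl[seg], out[seg], 1)
            err += np.sum((out[seg] - np.polyval(p, lvl[seg])) ** 2)
    return err

knees = np.arange(25, 60, 0.5)
knee_hat = knees[np.argmin([sse(k) for k in knees])]
slope_hi = np.polyfit(lvl[lvl >= knee_hat], out[lvl >= knee_hat], 1)[0]
print(f"knee ~ {knee_hat:.1f} dB SPL, compression ~ {slope_hi:.2f} dB/dB")
```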

  20. Concepts in production ecology for analysis and quantification of agricultural input-output combinations.

    NARCIS (Netherlands)

    Ittersum, van M.K.; Rabbinge, R.

    1997-01-01

    Definitions and concepts of production ecology are presented as a basis for development of alternative production technologies characterized by their input-output combinations. With these concepts the relative importance of several growth factors and inputs is investigated to explain actual yield

  1. A Microneedle Functionalized with Polyethyleneimine and Nanotubes for Highly Sensitive, Label-Free Quantification of DNA

    OpenAIRE

    Saadat-Moghaddam, Darius; Kim, Jong-Hoon

    2017-01-01

    The accurate measure of DNA concentration is necessary for many DNA-based biological applications. However, the current methods are limited in terms of sensitivity, reproducibility, human error, and contamination. Here, we present a microneedle functionalized with polyethyleneimine (PEI) and single-walled carbon nanotubes (SWCNTs) for the highly sensitive quantification of DNA. The microneedle was fabricated using ultraviolet (UV) lithography and anisotropic etching, and then functionalized w...

  2. Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS

    International Nuclear Information System (INIS)

    Brown, C.S.; Zhang, Hongbin

    2016-01-01

    VERA-CS (Virtual Environment for Reactor Applications, Core Simulator) is a coupled neutron transport and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed and a new toolkit was created to perform uncertainty quantification and sensitivity analysis. A 2 × 2 fuel assembly model was developed and simulated by VERA-CS, and uncertainty quantification and sensitivity analysis were performed with fourteen uncertain input parameters. The minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the selected figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in sensitivity analysis and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
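
    The correlation measures named here are straightforward to reproduce; the sketch below samples three uncertain inputs, evaluates a toy stand-in for the MDNBR, and reports Pearson, Spearman and partial correlation coefficients per input (partial correlation computed by correlating regression residuals). The three-input model is an illustrative assumption, not VERA-CS.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(8)
n = 500
names = ["inlet_T", "flow", "power"]
X = rng.normal(0, 1, (n, 3))
mdnbr = -0.9 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2] + rng.normal(0, 0.1, n)

def partial_corr(xj, y, rest):
    """Correlate residuals after regressing both on the remaining inputs."""
    beta_x, *_ = np.linalg.lstsq(rest, xj, rcond=None)
    beta_y, *_ = np.linalg.lstsq(rest, y, rcond=None)
    return pearsonr(xj - rest @ beta_x, y - rest @ beta_y)[0]

for j, name in enumerate(names):
    rest = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
    print(f"{name:8s} Pearson {pearsonr(X[:, j], mdnbr)[0]:+.2f}  "
          f"Spearman {spearmanr(X[:, j], mdnbr)[0]:+.2f}  "
          f"Partial {partial_corr(X[:, j], mdnbr, rest):+.2f}")
```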

  3. Minimally invasive input function for 2-18F-fluoro-A-85380 brain PET studies

    Energy Technology Data Exchange (ETDEWEB)

    Zanotti-Fregonara, Paolo [National Institute of Mental Health, NIH, Molecular Imaging Branch, Bethesda, MD (United States); Maroy, Renaud; Peyronneau, Marie-Anne; Trebossen, Regine [CEA, DSV, I2BM, Service Hospitalier Frederic Joliot, Orsay (France); Bottlaender, Michel [CEA, DSV, I2BM, NeuroSpin, Gif-sur-Yvette (France)

    2012-04-15

    Quantitative neuroreceptor positron emission tomography (PET) studies often require arterial cannulation to measure the input function. While a population-based input function (PBIF) would be a less invasive alternative, it has only rarely been used in conjunction with neuroreceptor PET tracers. The aims of this study were (1) to validate the use of PBIF for 2-18F-fluoro-A-85380, a tracer for nicotinic receptors; (2) to compare the accuracy of measures obtained via PBIF to those obtained via blood-scaled image-derived input function (IDIF) from carotid arteries; and (3) to explore the possibility of using venous instead of arterial samples for both PBIF and IDIF. Ten healthy volunteers underwent a dynamic 2-18F-fluoro-A-85380 brain PET scan with arterial and, in seven subjects, concurrent venous serial blood sampling. PBIF was obtained by averaging the normalized metabolite-corrected arterial input functions and subsequently scaling each curve with individual blood samples. IDIF was obtained from the carotid arteries using a blood-scaling method. Estimated Logan distribution volume (VT) values were compared to the reference values obtained from arterial cannulation. For all subjects, PBIF curves scaled with arterial samples were similar in shape and magnitude to the reference arterial input function. The Logan VT ratio was 1.00 ± 0.05; all subjects had an estimation error <10%. IDIF gave slightly less accurate results (VT ratio 1.03 ± 0.07; eight of ten subjects had an error <10%). PBIF scaled with venous samples yielded inaccurate results (VT ratio 1.13 ± 0.13; only three of seven subjects had an error <10%). Due to arteriovenous differences at early time points, IDIF could not be calculated using venous samples. PBIF scaled with arterial samples accurately estimates Logan VT for 2-18F-fluoro-A-85380. Results obtained with PBIF were slightly better than those obtained with IDIF. Due to arteriovenous concentration
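
    A minimal sketch of the PBIF scaling step: a unit-area population template curve is rescaled to an individual subject by least squares against a few late blood samples. Template shape, sample times and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
t = np.linspace(0, 90, 181)                          # minutes
dt = t[1] - t[0]

# population template: normalized average of metabolite-corrected input curves
template = (t / 2.0) * np.exp(-t / 1.5) + 0.05 * np.exp(-t / 60.0)
template /= template.sum() * dt                      # unit-area population shape

true_scale = 37.0                                    # subject's true global scale
sample_t = np.array([30.0, 45.0, 60.0, 75.0, 90.0])  # late blood-sample times
samples = (true_scale * np.interp(sample_t, t, template)
           * (1 + 0.03 * rng.normal(size=sample_t.size)))  # 3% assay noise

# least-squares scale factor from the individual samples
tmpl = np.interp(sample_t, t, template)
scale = tmpl @ samples / (tmpl @ tmpl)
pbif = scale * template                              # individualized input function
print(f"fitted scale {scale:.1f} vs true {true_scale}")
```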

  4. Development of Quantification Method for Bioluminescence Imaging

    International Nuclear Information System (INIS)

    Kim, Hyeon Sik; Min, Jung Joon; Lee, Byeong Il; Choi, Eun Seo; Tak, Yoon O; Choi, Heung Kook; Lee, Ju Young

    2009-01-01

    Optical molecular luminescence imaging is widely used for the detection and imaging of bio-photons emitted by luminescent luciferase activation. The photons measured with this method indicate the degree of molecular alteration or the number of cells, with the advantage of a high signal-to-noise ratio. To extract useful information from the measured results, analysis based on a proper quantification method is necessary. In this research, we propose a quantification method that gives a linear response of the measured light signal with respect to measurement time. We detected the luminescence signal by using lab-made optical imaging equipment, the animal light imaging system (ALIS), and two different kinds of light sources. One is a set of three bacterial light-emitting sources containing different numbers of bacteria. The other is a set of three different non-bacterial sources emitting very weak light. By using the concepts of the candela and the flux, we could derive a simplified linear quantification formula. After experimentally measuring the light intensity, the data were processed with the proposed quantification function. We obtained a linear response of photon counts to measurement time by applying the pre-determined quantification function. The ratio of the re-calculated photon counts to measurement time presents a constant value even though different light sources were applied. The quantification function for linear response could be applicable to a standard quantification process. The proposed method could be used for exact quantitative analysis in various light imaging equipment, presenting linear response behavior of constant light-emitting sources with respect to measurement time

  5. Investigation of dynamic SPECT measurements of the arterial input function in human subjects using simulation, phantom and human studies

    Science.gov (United States)

    Winant, Celeste D.; Aparici, Carina Mari; Zelnik, Yuval R.; Reutter, Bryan W.; Sitek, Arkadiusz; Bacharach, Stephen L.; Gullberg, Grant T.

    2012-01-01

    Computer simulations, a phantom study and a human study were performed to determine whether a slowly rotating single-photon emission computed tomography (SPECT) system could provide accurate arterial input functions for quantification of myocardial perfusion imaging using kinetic models. The errors induced by data inconsistency associated with imaging with slow camera rotation during tracer injection were evaluated with an approach called SPECT/P (dynamic SPECT from positron emission tomography (PET)) and SPECT/D (dynamic SPECT from database of SPECT phantom projections). SPECT/P simulated SPECT-like dynamic projections using reprojections of reconstructed dynamic 94Tc-methoxyisobutylisonitrile (94Tc-MIBI) PET images acquired in three human subjects (1 min infusion). This approach was used to evaluate the accuracy of estimating myocardial wash-in rate parameters K1 for rotation speeds providing 180° of projection data every 27 or 54 s. Blood input and myocardium tissue time-activity curves (TACs) were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. For the second method (SPECT/D), an anthropomorphic cardiac torso phantom was used to create real SPECT dynamic projection data of a tracer distribution derived from 94Tc-MIBI PET scans in the blood pool, myocardium, liver and background. This method introduced attenuation, collimation and scatter into the modeling of dynamic SPECT projections. Both approaches were used to evaluate the accuracy of estimating myocardial wash-in parameters for rotation speeds providing 180° of projection data every 27 and 54 s. Dynamic cardiac SPECT was also performed in a human subject at rest using a hybrid SPECT/CT scanner. Dynamic measurements of 99mTc-tetrofosmin in the myocardium were obtained using an infusion time of 2 min. Blood input, myocardium tissue and liver TACs were estimated using the same spatiotemporal splines. The spatiotemporal maximum

  6. Investigation of dynamic SPECT measurements of the arterial input function in human subjects using simulation, phantom and human studies

    International Nuclear Information System (INIS)

    Winant, Celeste D; Aparici, Carina Mari; Bacharach, Stephen L; Gullberg, Grant T; Zelnik, Yuval R; Reutter, Bryan W; Sitek, Arkadiusz

    2012-01-01

    Computer simulations, a phantom study and a human study were performed to determine whether a slowly rotating single-photon emission computed tomography (SPECT) system could provide accurate arterial input functions for quantification of myocardial perfusion imaging using kinetic models. The errors induced by data inconsistency associated with imaging with slow camera rotation during tracer injection were evaluated with an approach called SPECT/P (dynamic SPECT from positron emission tomography (PET)) and SPECT/D (dynamic SPECT from database of SPECT phantom projections). SPECT/P simulated SPECT-like dynamic projections using reprojections of reconstructed dynamic 94Tc-methoxyisobutylisonitrile (94Tc-MIBI) PET images acquired in three human subjects (1 min infusion). This approach was used to evaluate the accuracy of estimating myocardial wash-in rate parameters K1 for rotation speeds providing 180° of projection data every 27 or 54 s. Blood input and myocardium tissue time-activity curves (TACs) were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. For the second method (SPECT/D), an anthropomorphic cardiac torso phantom was used to create real SPECT dynamic projection data of a tracer distribution derived from 94Tc-MIBI PET scans in the blood pool, myocardium, liver and background. This method introduced attenuation, collimation and scatter into the modeling of dynamic SPECT projections. Both approaches were used to evaluate the accuracy of estimating myocardial wash-in parameters for rotation speeds providing 180° of projection data every 27 and 54 s. Dynamic cardiac SPECT was also performed in a human subject at rest using a hybrid SPECT/CT scanner. Dynamic measurements of 99mTc-tetrofosmin in the myocardium were obtained using an infusion time of 2 min. Blood input, myocardium tissue and liver TACs were estimated using the same spatiotemporal splines. The spatiotemporal

  7. Simultaneous determination of arterial input function of the internal carotid and middle cerebral arteries for dynamic susceptibility contrast MRI

    International Nuclear Information System (INIS)

    Scholdei, R.; Wenz, F.; Fuss, M.; Essig, M.; Knopp, M.V.

    1999-01-01

    Purpose: The determination of the arterial input function (AIF) is necessary for absolute quantification of the regional cerebral blood volume and blood flow using dynamic susceptibility contrast MRI. The suitability of different vessels (ICA: internal carotid artery; MCA: middle cerebral artery) for AIF determination was compared in this study. Methods: A standard 1.5 T MR system and a simultaneous dual FLASH sequence (TR/TE1/TE2/α = 32/15/25/10) were used to follow a bolus of contrast agent. Slice I was chosen to cut the ICA perpendicularly. Slice II included the MCA. Seventeen data sets from ten subjects were evaluated. Results: The number of AIF-relevant pixels, the area under the AIF and the maximum concentration were all lower when the AIF was determined from the MCA compared to the ICA. Additionally, the mean transit time (MTT) and the time to maximum concentration (TTM) were longer in the MCA, complicating the computerized identification of AIF-relevant pixels. Data from one subject, who was examined five times, demonstrated that the intraindividual variance of the measured parameters was markedly lower than the interpersonal variance. Conclusions: It appears to be advantageous to measure the AIF in the ICA rather than the MCA. (orig.)

  8. Entrainment and phase-shifting by centrifugation abolished in mice lacking functional vestibular input

    Science.gov (United States)

    Fuller, Charles; Ringgold, Kristyn

    The circadian pacemaker can be phase shifted and entrained by appropriately timed locomotor activity; however, the mechanism(s) involved remain poorly understood. Recent work in our lab has suggested the involvement of the vestibular otolith organs in activity-induced changes within the circadian timing system (CTS). For example, we have shown that changes in circadian period and phase in response to locomotion (wheel running) require functional macular gravity receptors. We believe the neurovestibular system is responsible for the transduction of gravitoinertial input associated with the types of locomotor activity that are known to affect the pacemaker. This study investigated the hypothesis that daily, timed gravitoinertial stimuli, as applied by centrifugation, would induce entrainment of circadian rhythms in only those animals with functional afferent vestibular input. To test this hypothesis, chemically labyrinthectomized (Labx) mice, mice lacking macular vestibular input (head tilt or hets) and wildtype (WT) littermates were implanted i.p. with biotelemetry and individually housed in a 4-meter diameter centrifuge in constant darkness (DD). After 2 weeks in DD, the mice were exposed daily to 2G via centrifugation from 1000-1200 for 9 weeks. Only WT mice showed entrainment to the daily 2G pulse. The 2G pulse was then re-set to occur at 1200-1400 for 4 weeks. Only WT mice demonstrated a phase shift in response to the re-setting of the 2G pulse and subsequent re-entrainment to the new centrifugation schedule. These results provide further evidence that gravitoinertial stimuli require a functional vestibular system to both entrain and phase shift the CTS. Entrainment among only WT mice supports the role of macular gravity-receptive cells in modulation of the CTS, while also providing a functional mechanism by which gravitoinertial stimuli, including locomotor activity, may affect the pacemaker.

  9. Input to the PRAST computer code used in the SRS probabilistic risk assessment

    International Nuclear Information System (INIS)

    Kearnaghan, D.P.

    1992-01-01

    The PRAST (Production Reactor Algorithm for Source Terms) computer code was developed by Westinghouse Savannah River Company and Science Application International Corporation for the quantification of source terms for the Savannah River Site (SRS) Reactor Probabilistic Risk Assessment. PRAST requires as input a set of release fractions, decontamination factors, transfer fractions and source term characteristics that accurately reflect the conditions that are evaluated by PRAST. This document links the analyses which form the basis for the PRAST input parameters. In addition, it gives the distributions of the input parameters that are uncertain and considered to be important to the evaluation of the source terms released to the environment

  10. Early uneven ear input induces long-lasting differences in left-right motor function.

    Science.gov (United States)

    Antoine, Michelle W; Zhu, Xiaoxia; Dieterich, Marianne; Brandt, Thomas; Vijayakumar, Sarath; McKeehan, Nicholas; Arezzo, Joseph C; Zukin, R Suzanne; Borkholder, David A; Jones, Sherri M; Frisina, Robert D; Hébert, Jean M

    2018-03-01

    How asymmetries in motor behavior become established normally or atypically in mammals remains unclear. An established model for motor asymmetry that is conserved across mammals can be obtained by experimentally inducing asymmetric striatal dopamine activity. However, the factors that can cause motor asymmetries in the absence of experimental manipulations to the brain remain unknown. Here, we show that mice with inner ear dysfunction display a robust left or right rotational preference, and this motor preference reflects an atypical asymmetry in cortico-striatal neurotransmission. By unilaterally targeting striatal activity with an antagonist of extracellular signal-regulated kinase (ERK), a downstream integrator of striatal neurotransmitter signaling, we can reverse or exaggerate rotational preference in these mice. By surgically biasing vestibular failure to one ear, we can dictate the direction of motor preference, illustrating the influence of uneven vestibular failure in establishing the outward asymmetries in motor preference. The inner ear-induced striatal asymmetries identified here intersect with non-ear-induced asymmetries previously linked to lateralized motor behavior across species and suggest that aspects of left-right brain function in mammals can be ontogenetically influenced by inner ear input. Consistent with inner ear input contributing to motor asymmetry, we also show that, in humans with normal ear function, the motor-dominant hemisphere, measured as handedness, is ipsilateral to the ear with weaker vestibular input.

  11. Estimation of an image derived input function with MR-defined carotid arteries in FDG-PET human studies using a novel partial volume correction method

    DEFF Research Database (Denmark)

    Sari, Hasan; Erlandsson, Kjell; Law, Ian

    2017-01-01

    Kinetic analysis of 18F-fluorodeoxyglucose positron emission tomography data requires accurate knowledge of the arterial input function. The gold standard method to measure the arterial input function requires the collection of arterial blood samples and is an invasive method. Measuring an image deriv... input function (p > 0.12 for grey matter and white matter). Hence, the presented image derived input function extraction method can be a practical alternative for noninvasively analyzing dynamic 18F-fluorodeoxyglucose data without the need for blood sampling....

  12. Large signal-to-noise ratio quantification in MLE for ARARMAX models

    Science.gov (United States)

    Zou, Yiqun; Tang, Xiafei

    2014-06-01

    It has been shown that closed-loop linear system identification by the indirect method can generally be transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, gradient-related optimisation with a large enough signal-to-noise ratio (SNR) can avoid the potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we build the amplitude coefficient, which is an equivalent of the SNR, and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of the threshold is achieved by the minimisation of an elaborately designed multi-variable cost function which unifies all the restrictions on the amplitude coefficient. The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. Then, the algorithm is tested on a theoretical AutoRegressive Moving Average with eXogenous input model for the derivation of the threshold, and on a real gas turbine engine system for model identification. Finally, the graphical validation of the threshold on a two-dimensional plot is discussed.

  13. Parametrically defined cerebral blood vessels as non-invasive blood input functions for brain PET studies

    International Nuclear Information System (INIS)

    Asselin, Marie-Claude; Cunningham, Vincent J; Amano, Shigeko; Gunn, Roger N; Nahmias, Claude

    2004-01-01

    A non-invasive alternative to arterial blood sampling for the generation of a blood input function for brain positron emission tomography (PET) studies is presented. The method aims to extract the dimensions of the blood vessel directly from PET images and to simultaneously correct the radioactivity concentration for partial volume and spillover. This involves simulation of the tomographic imaging process to generate images of different blood vessel and background geometries and selecting the one that best fits, in a least-squares sense, the acquired PET image. A phantom experiment was conducted to validate the method, which was then applied to eight subjects injected with 6-[18F]fluoro-L-DOPA and one subject injected with [11C]CO-labelled red blood cells. In the phantom study, the diameters of syringes filled with an 11C solution and inserted into a water-filled cylinder were estimated with an accuracy of half a pixel (1 mm). The radioactivity concentration was recovered to 100 ± 4% in the 8.7 mm diameter syringe, the one that most closely approximated the superior sagittal sinus. In the human studies, the method systematically overestimated the calibre of the superior sagittal sinus by 2-3 mm compared to measurements made in magnetic resonance venograms of the same subjects. Sources of discrepancies related to the anatomy of the blood vessel were found not to be fundamental limitations to the applicability of the method to human subjects. This method has the potential to provide accurate quantification of blood radioactivity concentration from PET images without the need for blood samples, corrections for delay and dispersion, co-registered anatomical images, or manually defined regions of interest
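
    A 1D sketch of the fitting idea: simulate PSF-blurred profiles for a range of vessel diameters (for a fixed diameter the model is linear in the vessel and background concentrations, so those are solved by least squares) and keep the diameter with the smallest residual. Geometry and PSF are illustrative assumptions, not the paper's tomographic simulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(10)
x = np.arange(-30, 30, 0.5)                          # mm grid, 0.5 mm sampling
psf_sigma = 6.0                                      # in samples (3 mm on this grid)

def profile(diam_mm, c_vessel, c_bg):
    vessel = (np.abs(x) < diam_mm / 2).astype(float)
    return gaussian_filter1d(c_vessel * vessel + c_bg * (1.0 - vessel), psf_sigma)

measured = profile(8.7, 100.0, 20.0) + rng.normal(0, 1.0, x.size)

best = None
for d in np.arange(4.0, 14.0, 0.25):                 # candidate diameters
    # for fixed diameter the model is linear in (c_vessel, c_bg)
    A = np.column_stack([profile(d, 1.0, 0.0), profile(d, 0.0, 1.0)])
    coef, *_ = np.linalg.lstsq(A, measured, rcond=None)
    sse = np.sum((measured - A @ coef) ** 2)
    if best is None or sse < best[0]:
        best = (sse, d, coef)
print(f"diameter = {best[1]:.2f} mm, vessel concentration = {best[2][0]:.1f}")
```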

  14. Estimation of the pulmonary input function in dynamic whole body PET

    International Nuclear Information System (INIS)

    Ho-Shon, K.; Buchen, P.; Meikle, S.R.; Fulham, M.J.; University of Sydney, Sydney, NSW

    1998-01-01

Full text: Dynamic data acquisition in Whole Body PET (WB-PET) has the potential to measure the metabolic rate of glucose (MRGlc) in tissue in vivo. Estimation of changes in tumoral MRGlc may be a valuable tool in cancer by providing a quantitative index of response to treatment. A necessary requirement is an input function (IF), which can be obtained from arterial, 'arterialised' venous or pulmonary arterial blood in the case of lung tumours. Our aim was to extract the pulmonary input function from dynamic WB-PET data using Principal Component Analysis (PCA), Factor Analysis (FA) and Maximum Entropy (ME) for the evaluation of patients undergoing induction chemotherapy for non-small cell lung cancer. PCA is first used as a method of dimension reduction to obtain a signal space, defined by an optimal metric and a set of vectors. FA is then used together with an ME constraint to rotate these vectors to obtain 'physiological' factors. A form of entropy function that does not require normalised data was used, which enabled the introduction of a penalty function based on the blood concentration at the last time point, providing an additional constraint. Tissue functions from 10 planes through normal lung were simulated. The model was a linear combination of an IF and a tissue time-activity curve (TAC). The proportion of IF to TAC was varied over the planes to simulate the apical-to-basal gradient in vascularity of the lung, and pseudo-Poisson noise was added. The method accurately extracted the IF at noise levels spanning the expected range for dynamic ROI data acquired with the interplane septa extended. Our method is minimally invasive because it requires only one late venous blood sample, and it is applicable to a wide range of tracers since it does not assume a particular compartmental model. Pilot data from 2 patients have been collected, enabling comparison of the estimated IF with direct blood sampling from the pulmonary artery
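
    The PCA step lends itself to a small numerical illustration. The sketch below builds tissue curves as linear combinations of a bolus-like input function and a tissue TAC with an apical-to-basal gradient, adds pseudo-Poisson noise, and shows that the signal space is essentially two-dimensional; all curve shapes and values are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.1, 60, 30)                    # frame mid-times (min)
if_true = 100 * t * np.exp(-t / 2)              # bolus-like input function
tac = 40 * (1 - np.exp(-t / 20))                # tissue uptake curve

# 10 planes with an apical-to-basal gradient in vascular (IF) fraction
fracs = np.linspace(0.8, 0.2, 10)
planes = np.array([f * if_true + (1 - f) * tac for f in fracs])
planes = rng.poisson(planes).astype(float)      # pseudo-Poisson noise

# PCA/SVD: the signal space should be ~2-dimensional (IF + tissue)
u, s, vt = np.linalg.svd(planes, full_matrices=False)
print("variance captured:", (s**2 / (s**2).sum())[:3])
# vt[0] and vt[1] span the signal space; a constrained rotation (factor
# analysis with a maximum-entropy penalty) is what recovers the IF itself
```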

  15. Feasibility study of the non-invasive estimation of the β+ arterial input function for human PET imaging

    International Nuclear Information System (INIS)

    Hubert, X.

    2009-12-01

This work deals with the estimation of the concentration of molecules in arterial blood that are labelled with positron-emitting radioelements. This concentration is called the 'β+ arterial input function' and must be estimated for a large number of pharmacokinetic analyses. Nowadays it is measured through serial arterial sampling, an accurate method that nevertheless requires a stringent protocol; because the method is invasive, complications such as hematomas and nosocomial infections can occur. The objective of this work is to overcome this risk through non-invasive estimation of the β+ input function with an external detector and a collimator, which allows the reconstruction of blood vessels and thus the discrimination of the arterial signal from signals in other tissues. Collimators used in medical imaging are not adapted to estimating the β+ input function because their sensitivity is very low. In this work, they are replaced by coded-aperture collimators, originally developed for astronomy. New methods in which coded apertures are used with statistical reconstruction algorithms are presented. Techniques for analytical ray-tracing and for the acceleration of reconstructions are proposed. A new method that decomposes reconstructions on temporal and spatial sets is also developed to efficiently estimate the arterial input function from series of temporal acquisitions. This work demonstrates that the trade-off between sensitivity and spatial resolution in PET can be improved thanks to coded-aperture collimators and statistical reconstruction algorithms; it also provides new tools to implement such improvements. (author)

  16. Direct quantification of negatively charged functional groups on membrane surfaces

    KAUST Repository

    Tiraferri, Alberto

    2012-02-01

Surface charge plays an important role in membrane-based separations of particulates, macromolecules, and dissolved ionic species. In this study, we present two experimental methods to determine the concentration of negatively charged functional groups at the surface of dense polymeric membranes. Both techniques consist of associating the membrane surface moieties with chemical probes, followed by quantification of the bound probes. Uranyl acetate and toluidine blue O dye, which interact with the membrane functional groups via complexation and electrostatic interaction, respectively, were used as probes. The amount of associated probes was quantified using liquid scintillation counting for uranium atoms and visible light spectroscopy for the toluidine blue dye. The techniques were validated using self-assembled monolayers of alkanethiols with known amounts of charged moieties. The surface density of negatively charged functional groups of hand-cast thin-film composite polyamide membranes, as well as commercial cellulose triacetate and polyamide membranes, was quantified under various conditions. Using both techniques, we measured a negatively charged functional group density of 20-30 nm^-2 for the hand-cast thin-film composite membranes. The ionization behavior of the membrane functional groups, determined from measurements with toluidine blue at varying pH, was consistent with published data for thin-film composite polyamide membranes. Similarly, the measured charge densities on commercial membranes were in general agreement with previous investigations. The relative simplicity of the two methods makes them a useful tool for quantifying the surface charge concentration of a variety of surfaces, including separation membranes. © 2011 Elsevier B.V.

  17. A curve-fitting approach to estimate the arterial plasma input function for the assessment of glucose metabolic rate and response to treatment.

    Science.gov (United States)

    Vriens, Dennis; de Geus-Oei, Lioe-Fee; Oyen, Wim J G; Visser, Eric P

    2009-12-01

For the quantification of dynamic 18F-FDG PET studies, the arterial plasma time-activity concentration curve (APTAC) needs to be available. This can be obtained using serial sampling of arterial blood or an image-derived input function (IDIF). Arterial sampling is invasive and often not feasible in practice; IDIFs are biased because of partial-volume effects and cannot be used when no large arterial blood pool is in the field of view. We propose a mathematical function, consisting of an initial linearly rising activity concentration followed by a triexponential decay, to describe the APTAC. This function was fitted in 80 oncologic patients and verified in 40 different oncologic patients by area-under-the-curve (AUC) comparison, Patlak glucose metabolic rate (MRglc) estimation, and therapy response monitoring (ΔMRglc). The proposed function was compared with the gold standard (serial arterial sampling) and the IDIF. To determine the free parameters of the function, plasma time-activity curves based on arterial samples in 80 patients were fitted after normalization for administered activity (AA) and initial distribution volume (iDV) of 18F-FDG. The medians of these free parameters were used for the model. In 40 other patients (20 baseline and 20 follow-up dynamic 18F-FDG PET scans), this model was validated. The population-based curve, individually calibrated by AA and iDV (APTAC(AA/iDV)), by 1 late arterial sample (APTAC(1 sample)), and by the individual IDIF (APTAC(IDIF)), was compared with the gold standard of serial arterial sampling (APTAC(sampled)) using the AUC. Additionally, these 3 methods of APTAC determination were evaluated with Patlak MRglc estimation and with ΔMRglc for therapy effects using serial sampling as the gold standard. Excellent individual fits to the function were derived with significantly different decay constants (P < …). Differences in AUC from APTAC(AA/iDV), APTAC(1 sample), and APTAC(IDIF) with the gold standard (APTAC(sampled)) were 0
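
    A model of this form is straightforward to fit with standard tools. The sketch below, with invented sample times and parameter values, fits a linear-rise/triexponential-decay curve to synthetic arterial samples and integrates it for an AUC comparison; it illustrates the functional form only, not the authors' calibration procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import trapezoid

def aptac(t, t_peak, a1, a2, a3, l1, l2, l3):
    """Linear rise from zero to a peak at t_peak, then triexponential decay."""
    decay = (a1 * np.exp(-l1 * (t - t_peak)) + a2 * np.exp(-l2 * (t - t_peak))
             + a3 * np.exp(-l3 * (t - t_peak)))
    return np.where(t < t_peak, (a1 + a2 + a3) * t / t_peak, decay)

# synthetic arterial samples (activity concentration vs. minutes)
t = np.array([0.25, 0.5, 0.75, 1, 1.5, 2, 3, 5, 10, 20, 30, 45, 60.0])
rng = np.random.default_rng(2)
y = aptac(t, 0.6, 80, 15, 5, 4.0, 0.4, 0.01) * (1 + 0.03 * rng.normal(size=t.size))

popt, _ = curve_fit(aptac, t, y, p0=(0.5, 60, 10, 5, 3, 0.3, 0.01), maxfev=20000)
tt = np.linspace(0, 60, 6001)
print("fitted parameters:", popt)
print("AUC:", trapezoid(aptac(tt, *popt), tt))   # input to Patlak MRglc
```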

  18. WE-D-204-07: Development of An ImageJ Plugin for Renal Function Quantification: RenalQuant

    Energy Technology Data Exchange (ETDEWEB)

    Marques da Silva, A; Narciso, L [PUCRS, Porto Alegre, RS (Brazil)

    2015-06-15

Purpose: Commercial workstations usually have their own software to calculate dynamic renal function. However, they typically offer low flexibility, and delimitation of kidney and background areas remains subjective. The aim of this paper is to present public-domain software, called RenalQuant, capable of semi-automatically drawing regions of interest on dynamic renal scintigraphies, extracting data, and generating renal function quantification parameters. Methods: The software was developed in Java and written as an ImageJ-based plugin. The preprocessing and segmentation steps include the user's selection of one time frame with high activity in the kidney regions compared with background and low activity in the liver. The chosen time frame is then smoothed using a Gaussian low-pass spatial filter (σ = 3) for noise reduction and better delimitation of the kidneys, and the maximum entropy thresholding method is used for segmentation. A background area is automatically placed below each kidney, and the user confirms that these regions are correctly segmented and positioned. Quantitative data are extracted, and each renogram and relative renal function (RRF) value is calculated and displayed. Results: The RenalQuant plugin was validated using retrospective 99mTc-DTPA exams from 20 patients and compared with results produced by commercial workstation software, referred to as the reference. The renogram intraclass correlation coefficients (ICC) were calculated, and false-negative and false-positive RRF values were analyzed. The results showed that ICC values between the RenalQuant plugin and the reference software for both kidneys' renograms were higher than 0.75, indicating excellent reliability. Conclusion: Our results indicate that the RenalQuant plugin can be used with confidence to generate renograms, using DICOM dynamic renal scintigraphy exams as input. It is user friendly, and user interaction is kept to a minimum. Further studies have to investigate how to increase RRF accuracy and explore how to solve
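
    The two processing steps named above, Gaussian smoothing followed by maximum-entropy thresholding, can be sketched compactly. Below is a minimal Python version using Kapur's maximum-entropy criterion on a synthetic frame; the plugin itself is written in Java, so this only illustrates the algorithm, with invented data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def max_entropy_threshold(img, nbins=256):
    """Kapur's maximum-entropy threshold: pick the bin that maximises the
    summed Shannon entropies of the background and foreground histograms."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    cum = np.cumsum(p)
    best_t, best_h = 1, -np.inf
    for t in range(1, nbins - 1):
        w0, w1 = cum[t], 1.0 - cum[t]
        if w0 < 1e-9 or w1 < 1e-9:
            continue
        p0, p1 = p[:t + 1] / w0, p[t + 1:] / w1
        h = -(np.sum(p0[p0 > 0] * np.log(p0[p0 > 0])) +
              np.sum(p1[p1 > 0] * np.log(p1[p1 > 0])))
        if h > best_h:
            best_h, best_t = h, t
    return edges[best_t + 1]

# synthetic frame: noisy background with one hot "kidney"-like region
rng = np.random.default_rng(3)
frame = rng.poisson(5, (128, 128)).astype(float)
frame[40:80, 30:55] += 40
smoothed = gaussian_filter(frame, sigma=3)      # noise reduction, as described
mask = smoothed > max_entropy_threshold(smoothed)
print("segmented pixels:", int(mask.sum()))
```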

  19. Sensitivity Analysis and Uncertainty Quantification for the LAMMPS Molecular Dynamics Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Picard, Richard Roy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bhat, Kabekode Ghanasham [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-18

    We examine sensitivity analysis and uncertainty quantification for molecular dynamics simulation. Extreme (large or small) output values for the LAMMPS code often occur at the boundaries of input regions, and uncertainties in those boundary values are overlooked by common SA methods. Similarly, input values for which code outputs are consistent with calibration data can also occur near boundaries. Upon applying approaches in the literature for imprecise probabilities (IPs), much more realistic results are obtained than for the complacent application of standard SA and code calibration.

  20. The effect of blood inflow and B(1)-field inhomogeneity on measurement of the arterial input function in axial 3D spoiled gradient echo dynamic contrast-enhanced MRI.

    Science.gov (United States)

    Roberts, Caleb; Little, Ross; Watson, Yvonne; Zhao, Sha; Buckley, David L; Parker, Geoff J M

    2011-01-01

    A major potential confound in axial 3D dynamic contrast-enhanced magnetic resonance imaging studies is the blood inflow effect; therefore, the choice of slice location for arterial input function measurement within the imaging volume must be considered carefully. The objective of this study was to use computer simulations, flow phantom, and in vivo studies to describe and understand the effect of blood inflow on the measurement of the arterial input function. All experiments were done at 1.5 T using a typical 3D dynamic contrast-enhanced magnetic resonance imaging sequence, and arterial input functions were extracted for each slice in the imaging volume. We simulated a set of arterial input functions based on the same imaging parameters and accounted for blood inflow and radiofrequency field inhomogeneities. Measured arterial input functions along the vessel length from both in vivo and the flow phantom agreed with simulated arterial input functions and show large overestimations in the arterial input function in the first 30 mm of the vessel, whereas arterial input functions measured more centrally achieve accurate contrast agent concentrations. Use of inflow-affected arterial input functions in tracer kinetic modeling shows potential errors of up to 80% in tissue microvascular parameters. These errors emphasize the importance of careful placement of the arterial input function definition location to avoid the effects of blood inflow. © 2010 Wiley-Liss, Inc.
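
    The inflow effect can be reproduced with the standard SPGR signal recursion: spins that enter the imaging slab fully relaxed have experienced only a few excitations and therefore give a much larger signal than the steady-state value assumed by the usual quantification. A minimal sketch, with illustrative sequence and blood parameters rather than the study's actual protocol:

```python
import numpy as np

def spgr_signal(m0, t1, tr, alpha_deg, n_pulses):
    """SPGR signal after n excitations for spins that entered fully relaxed;
    as n grows this converges to the usual steady-state (Ernst) signal."""
    a = np.deg2rad(alpha_deg)
    e1 = np.exp(-tr / t1)
    mss = m0 * (1 - e1) / (1 - e1 * np.cos(a))
    mz = mss + (m0 - mss) * (e1 * np.cos(a)) ** (n_pulses - 1)
    return mz * np.sin(a)

tr, alpha, t1_blood, v = 3e-3, 30.0, 1.4, 0.2    # s, deg, s, m/s (illustrative)
z = np.linspace(0.002, 0.06, 30)                 # depth into the slab (m)
n = np.maximum(z / (v * tr), 1.0)                # pulses experienced at depth z
s_in = spgr_signal(1.0, t1_blood, tr, alpha, n)
s_ss = spgr_signal(1.0, t1_blood, tr, alpha, 1e6)
print("signal overestimation factor at slab entry:", s_in[0] / s_ss)
```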

  1. Microbial Communities Are Well Adapted to Disturbances in Energy Input.

    Science.gov (United States)

    Fernandez-Gonzalez, Nuria; Huber, Julie A; Vallino, Joseph J

    2016-01-01

Although microbial systems are well suited for studying concepts in ecological theory, little is known about how microbial communities respond to long-term periodic perturbations beyond diel oscillations. Taking advantage of an ongoing microcosm experiment, we studied how methanotrophic microbial communities adapted to disturbances in energy input over a 20-day cycle period. Sequencing of bacterial 16S rRNA genes together with quantification of microbial abundance and ecosystem function were used to explore the long-term dynamics (510 days) of methanotrophic communities under continuous versus cyclic chemical energy supply. We observed that microbial communities appeared inherently well adapted to disturbances in energy input and that changes in community structure in both treatments were more dependent on internal dynamics than on external forcing. The results also showed that the rare biosphere was critical to seeding the internal community dynamics, perhaps due to cross-feeding or other strategies. We conclude that in our experimental system, internal feedbacks were more important than external drivers in shaping the community dynamics over time, suggesting that ecosystems can maintain their function despite inherently unstable community dynamics. IMPORTANCE: Within the broader ecological context, biological communities are often viewed as stable and as only experiencing succession or replacement when subject to external perturbations, such as changes in food availability or the introduction of exotic species. Our findings indicate that microbial communities can exhibit strong internal dynamics that may be more important in shaping community succession than external drivers. Dynamic "unstable" communities may be important for ecosystem functional stability, with rare organisms playing an important role in community restructuring. Understanding the mechanisms responsible for internal community dynamics will certainly be required for understanding and manipulating

  2. A New Ensemble of Perturbed-Input-Parameter Simulations by the Community Atmosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Covey, C; Brandon, S; Bremer, P T; Domyancis, D; Garaizar, X; Johannesson, G; Klein, R; Klein, S A; Lucas, D D; Tannahill, J; Zhang, Y

    2011-10-27

    Uncertainty quantification (UQ) is a fundamental challenge in the numerical simulation of Earth's weather and climate, and other complex systems. It entails much more than attaching defensible error bars to predictions: in particular it includes assessing low-probability but high-consequence events. To achieve these goals with models containing a large number of uncertain input parameters, structural uncertainties, etc., raw computational power is needed. An automated, self-adapting search of the possible model configurations is also useful. Our UQ initiative at the Lawrence Livermore National Laboratory has produced the most extensive set to date of simulations from the US Community Atmosphere Model. We are examining output from about 3,000 twelve-year climate simulations generated with a specialized UQ software framework, and assessing the model's accuracy as a function of 21 to 28 uncertain input parameter values. Most of the input parameters we vary are related to the boundary layer, clouds, and other sub-grid scale processes. Our simulations prescribe surface boundary conditions (sea surface temperatures and sea ice amounts) to match recent observations. Fully searching this 21+ dimensional space is impossible, but sensitivity and ranking algorithms can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. Bayesian statistical constraints, employing a variety of climate observations as metrics, also seem promising. Observational constraints will be important in the next step of our project, which will compute sea surface temperatures and sea ice interactively, and will study climate change due to increasing atmospheric carbon dioxide.

  3. Modeling DPOAE input/output function compression: comparisons with hearing thresholds.

    Science.gov (United States)

    Bhagat, Shaum P

    2014-09-01

Basilar membrane input/output (I/O) functions in mammalian animal models are characterized by linear and compressed segments when measured near the location corresponding to the characteristic frequency. A method of studying basilar membrane compression indirectly in humans involves measuring distortion-product otoacoustic emission (DPOAE) I/O functions. Previous research has linked compression estimates from behavioral growth-of-masking functions to hearing thresholds. The aim of this study was to compare compression estimates from DPOAE I/O functions and hearing thresholds at 1 and 2 kHz. A prospective correlational research design was used. The relationship between DPOAE I/O function compression estimates and hearing thresholds was evaluated with Pearson product-moment correlations. Normal-hearing adults (n = 16) aged 22-42 yr were recruited. DPOAE I/O functions (L₂ = 45-70 dB SPL) and two-interval forced-choice hearing thresholds were measured in normal-hearing adults. A three-segment linear regression model applied to DPOAE I/O functions supplied estimates of compression thresholds, defined as breakpoints between linear and compressed segments, and the slopes of the compressed segments. Pearson product-moment correlations between DPOAE compression estimates and hearing thresholds were evaluated. A high correlation between DPOAE compression thresholds and hearing thresholds was observed at 2 kHz, but not at 1 kHz. Compression slopes also correlated highly with hearing thresholds only at 2 kHz. The derivation of cochlear compression estimates from DPOAE I/O functions provides a means to characterize basilar membrane mechanics in humans and elucidates the role of compression in tone detection in the 1-2 kHz frequency range. American Academy of Audiology.
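
    A three-segment linear regression of the kind described can be sketched as a brute-force search over two breakpoints; the breakpoint between the first and second segments plays the role of the compression threshold. The data and parameter values below are invented for illustration, not the study's measurements.

```python
import numpy as np

def three_segment_fit(x, y, min_pts=2):
    """Brute-force two-breakpoint, three-segment linear regression; returns
    (rss, b1, b2, segment coefficients) minimising the residual sum of squares."""
    best = (np.inf, None, None, None)
    grid = np.linspace(x.min() + 2, x.max() - 2, 40)
    for b1 in grid:
        for b2 in grid[grid > b1 + 2]:
            seg = np.digitize(x, [b1, b2])
            rss, coeffs = 0.0, []
            for s in range(3):
                m = seg == s
                if m.sum() < min_pts:
                    rss = np.inf
                    break
                c = np.polyfit(x[m], y[m], 1)
                coeffs.append(c)
                rss += np.sum((np.polyval(c, x[m]) - y[m]) ** 2)
            if rss < best[0]:
                best = (rss, b1, b2, coeffs)
    return best

rng = np.random.default_rng(4)
L2 = np.arange(45, 71, 2.5)                   # primary level, dB SPL
dp = np.where(L2 < 55, (L2 - 55) - 5,         # linear / compressed / linear
     np.where(L2 < 65, 0.3 * (L2 - 55) - 5, 0.8 * (L2 - 65) - 2))
dp = dp + 0.3 * rng.normal(size=L2.size)      # illustrative DPOAE levels
rss, b1, b2, coeffs = three_segment_fit(L2, dp)
print(f"compression threshold ~{b1:.1f} dB SPL, "
      f"compressed slope {coeffs[1][0]:.2f} dB/dB")
```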

  4. Development of an exchange–correlation functional with uncertainty quantification capabilities for density functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Aldegunde, Manuel, E-mail: M.A.Aldegunde-Rodriguez@warwick.ac.uk; Kermode, James R., E-mail: J.R.Kermode@warwick.ac.uk; Zabaras, Nicholas

    2016-04-15

    This paper presents the development of a new exchange–correlation functional from the point of view of machine learning. Using atomization energies of solids and small molecules, we train a linear model for the exchange enhancement factor using a Bayesian approach which allows for the quantification of uncertainties in the predictions. A relevance vector machine is used to automatically select the most relevant terms of the model. We then test this model on atomization energies and also on bulk properties. The average model provides a mean absolute error of only 0.116 eV for the test points of the G2/97 set but a larger 0.314 eV for the test solids. In terms of bulk properties, the prediction for transition metals and monovalent semiconductors has a very low test error. However, as expected, predictions for types of materials not represented in the training set such as ionic solids show much larger errors.

  5. Characterizing the Input-Output Function of the Olfactory-Limbic Pathway in the Guinea Pig

    Directory of Open Access Journals (Sweden)

    Gian Luca Breschi

    2015-01-01

Nowadays the neuroscientific community is taking more and more advantage of the continuous interaction between engineers and computational neuroscientists in order to develop neuroprostheses aimed at replacing damaged brain areas with artificial devices. To this end, a technological effort is required to develop neural network models which can be fed with the recorded electrophysiological patterns to yield the correct brain stimulation to recover the desired functions. In this paper we present a machine learning approach to derive the input-output function of the olfactory-limbic pathway in the in vitro whole brain of guinea pig, less complex and more controllable than an in vivo system. We first experimentally characterized the neuronal pathway by delivering different sets of electrical stimuli from the lateral olfactory tract (LOT) and by recording the corresponding responses in the lateral entorhinal cortex (l-ERC). As a second step, we used information theory to evaluate how much information output features carry about the input. Finally we used the acquired data to learn the LOT-l-ERC “I/O function,” by means of the kernel regularized least squares method, able to predict l-ERC responses on the basis of LOT stimulation features. Our modeling approach can be further exploited for brain prostheses applications.
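
    Kernel regularized least squares has a compact closed form, alpha = (K + λI)^-1 y. Below is a minimal numpy sketch with an RBF kernel and invented stimulation features; it illustrates the regression technique only, not the actual LOT/l-ERC feature encoding used in the study.

```python
import numpy as np

def krls_fit(X, y, gamma=1.0, lam=1e-3):
    """Kernel regularized least squares with an RBF kernel:
    alpha = (K + lam*I)^-1 y; predictions are k(x, X) @ alpha."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krls_predict(Xtr, alpha, Xte, gamma=1.0):
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2) @ alpha

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, (100, 3))   # stimulation features (amplitude, rate, ...)
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=100)
alpha = krls_fit(X, y)
print(krls_predict(X, alpha, X[:5]))   # close to y[:5]
```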

  6. A simple method for the quantification of benzodiazepine receptors using iodine-123 iomazenil and single-photon emission tomography

    International Nuclear Information System (INIS)

    Ito, Hiroshi; Goto, Ryoui; Koyama, Masamichi; Kawashima, Ryuta; Ono, Shuichi; Sato, Kazunori; Fukuda, Hiroshi

    1996-01-01

Iodine-123 iomazenil (iomazenil) is a ligand for central-type benzodiazepine receptors that is suitable for single-photon emission tomography (SPET). The purpose of this study was to develop a simple method for the quantification of its binding potential (BP). The method is based on a two-compartment model (K1, influx rate constant; k2', efflux rate constant; VT' (= K1/k2'), the total distribution volume relative to the total arterial tracer concentration) and requires two SPET scans and one blood sample. For a given input function, the radioactivity ratio of the early to delayed scans can be tabulated as a function of k2'; a table lookup procedure then provides the corresponding k2' value, from which the K1 and VT' values are calculated. The arterial input function is obtained by calibrating the standard input function with the single blood sample. SPET studies were performed on 14 patients with cerebrovascular diseases, dementia or brain tumours (mean age ± SD, 56.0 ± 12.2). None of the patients had any heart, renal or liver disease. A dynamic SPET scan was performed following intravenous bolus injection of iomazenil, and a static SPET scan was performed 180 min after injection. Frequent blood sampling from the brachial artery was performed in all subjects for determination of the arterial input function. Two-compartment model analysis was validated for calculation of the VT' value of iomazenil. Good correlations were observed between VT' values calculated by three-compartment model analysis and those calculated by the present method, in which the scan time combinations (early scan/delayed scan) were 15/180 min, 30/180 min or 45/180 min (all combinations: r = 0.92), supporting the validity of this method. The present method is simple and applicable for clinical use. (orig.)
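
    The table-lookup step can be illustrated numerically: for a fixed input function, the early-to-delayed ratio of the one-tissue model output is monotonic in k2' and independent of K1, so it can be tabulated once and inverted. A minimal sketch with an invented standard input function and the 30/180 min scan combination:

```python
import numpy as np

t = np.linspace(0, 180, 1801)                            # minutes
ca = 50 * t * np.exp(-t / 1.5) + 4 * np.exp(-t / 120)    # illustrative input function

def tissue(k2, K1=0.2):
    """One-tissue (two-compartment) model: C(t) = K1 * [Ca * exp(-k2 t)](t)."""
    dt = t[1] - t[0]
    return K1 * dt * np.convolve(ca, np.exp(-k2 * t))[:t.size]

def early_late_ratio(k2, t_early=30.0, t_late=180.0):
    c = tissue(k2)
    return c[np.searchsorted(t, t_early)] / c[np.searchsorted(t, t_late)]

# K1 cancels in the ratio, which is monotonic in k2', so a lookup table
# built once for a given input function can be inverted directly
k2_grid = np.linspace(0.005, 0.2, 400)
table = np.array([early_late_ratio(k) for k in k2_grid])
measured = early_late_ratio(0.05)            # stand-in for the two actual scans
k2_est = k2_grid[np.argmin(np.abs(table - measured))]
print("k2' =", k2_est, "-> VT' = K1/k2' =", 0.2 / k2_est)
```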

  7. Uncertainty quantification tools for multiphase gas-solid flow simulations using MFIX

    Energy Technology Data Exchange (ETDEWEB)

    Fox, Rodney O. [Iowa State Univ., Ames, IA (United States); Passalacqua, Alberto [Iowa State Univ., Ames, IA (United States)

    2016-02-01

Computational fluid dynamics (CFD) has been widely studied and used in the scientific community and in industry. Various models have been proposed to solve problems in different areas. However, all models deviate from reality. The uncertainty quantification (UQ) process evaluates the overall uncertainties associated with the prediction of quantities of interest. In particular it studies the propagation of input uncertainties to the outputs of the models so that confidence intervals can be provided for the simulation results. In the present work, a non-intrusive quadrature-based uncertainty quantification (QBUQ) approach is proposed. The probability distribution function (PDF) of the system response can then be reconstructed using the extended quadrature method of moments (EQMOM) and the extended conditional quadrature method of moments (ECQMOM). The report first explains the theory of the QBUQ approach, including methods to generate samples for problems with single or multiple uncertain input parameters, low-order statistics, and the required number of samples. Methods for univariate PDF reconstruction (EQMOM) and multivariate PDF reconstruction (ECQMOM) are then explained. The implementation of the QBUQ approach in the open-source CFD code MFIX is discussed next. Finally, the QBUQ approach is demonstrated in several applications. The method is first applied to two examples: a developing flow in a channel with uncertain viscosity, and an oblique shock problem with uncertain upstream Mach number. The error in the prediction of the moment response is studied as a function of the number of samples, and the accuracy of the moments required to reconstruct the PDF of the system response is discussed. The QBUQ approach is then demonstrated on a bubbling fluidized bed as an example application. The mean particle size is assumed to be the uncertain input parameter. The system is simulated with a standard two-fluid model with kinetic theory closures for the particulate phase implemented into

  8. AN ACCURATE MODELING OF DELAY AND SLEW METRICS FOR ON-CHIP VLSI RC INTERCONNECTS FOR RAMP INPUTS USING BURR’S DISTRIBUTION FUNCTION

    Directory of Open Access Journals (Sweden)

    Rajib Kar

    2010-09-01

This work presents an accurate and efficient model to compute the delay and slew metrics of on-chip interconnect in high-speed CMOS circuits for ramp inputs. Our metric is based on the Burr distribution function, which is used to characterize the normalized homogeneous portion of the step response. We use the PERI (Probability distribution function Extension for Ramp Inputs) technique, which extends delay and slew metrics for step inputs to the more general and realistic non-step inputs. The accuracy of our models is validated against SPICE simulations.
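
    Because the Burr XII CDF inverts in closed form, delay and slew metrics follow directly once the shape parameters have been matched to the circuit moments. A minimal sketch with illustrative shape parameters (the moment-matching step itself is omitted):

```python
import numpy as np

def burr_cdf(t, c, k):
    """Burr XII CDF, approximating the normalized step response."""
    return 1.0 - (1.0 + t ** c) ** (-k)

def crossing_time(p, c, k):
    """Closed-form inverse of the Burr CDF: time of the p-fraction crossing."""
    return ((1.0 - p) ** (-1.0 / k) - 1.0) ** (1.0 / c)

c, k = 2.0, 1.5    # shape parameters matched to the RC moments (illustrative)
delay = crossing_time(0.5, c, k)                            # 50% delay metric
slew = crossing_time(0.9, c, k) - crossing_time(0.1, c, k)  # 10-90% slew metric
print(delay, slew)
assert np.isclose(burr_cdf(delay, c, k), 0.5)
```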

  9. Methods for modeling and quantification in functional imaging by positron emissions tomography and magnetic resonance imaging

    International Nuclear Information System (INIS)

    Costes, Nicolas

    2017-01-01

This report presents experience and research in the field of in vivo medical imaging by positron emission tomography (PET) and magnetic resonance imaging (MRI). In particular, advances in reconstruction, quantification and modeling in PET are described. The validation of processing and analysis methods is supported by data created through simulation of the PET imaging process. The recent advent of combined PET/MRI clinical cameras, allowing simultaneous acquisition of molecular/metabolic PET information and functional/structural MRI information, opens the door to unique methodological innovations that exploit the spatial alignment and simultaneity of the PET and MRI signals, and will lead to an increase in accuracy and sensitivity in the measurement of biological phenomena. In this context, the projects developed here address new methodological issues related to quantification and to the respective contributions of MRI and PET information for a reciprocal improvement of the signals of the two modalities. They open perspectives for combined analysis of the two imaging techniques, allowing optimal use of synchronous anatomical, molecular and functional information for brain imaging. These innovative concepts, as well as data correction and analysis methods, will be easily translated to other areas of investigation using combined PET/MRI. (author) [fr]

  10. Development of Input Function Measurement System for Small Animal PET Study

    International Nuclear Information System (INIS)

    Kim, Jong Guk; Kim, Byung Su; Kim, Jin Su

    2010-01-01

For quantitative measurement of radioactivity concentration in tissue with a validated tracer kinetic model, a highly sensitive detection system is required for blood sampling. Accurate measurement of the time-activity curves (TACs) of labeled compounds in blood (plasma) provides quantitative information on biological parameters of interest in local tissue. In particular, the development of new tracers for PET imaging requires knowledge of the kinetics of the tracer in the body and in arterial blood and plasma. The conventional approach to obtaining an input function is to sample arterial blood sequentially by hand as a function of time. Several continuous blood sampling systems have been developed and used in the nuclear medicine research field to overcome the limited temporal resolution of sampling by the conventional method. In this work, we developed a highly sensitive GSO detector with a unique geometric design for small animal blood activity measurement.

  11. Quantification of Na+,K+ pumps and their transport rate in skeletal muscle: Functional significance

    Science.gov (United States)

    2013-01-01

    During excitation, muscle cells gain Na+ and lose K+, leading to a rise in extracellular K+ ([K+]o), depolarization, and loss of excitability. Recent studies support the idea that these events are important causes of muscle fatigue and that full use of the Na+,K+-ATPase (also known as the Na+,K+ pump) is often essential for adequate clearance of extracellular K+. As a result of their electrogenic action, Na+,K+ pumps also help reverse depolarization arising during excitation, hyperkalemia, and anoxia, or from cell damage resulting from exercise, rhabdomyolysis, or muscle diseases. The ability to evaluate Na+,K+-pump function and the capacity of the Na+,K+ pumps to fill these needs require quantification of the total content of Na+,K+ pumps in skeletal muscle. Inhibition of Na+,K+-pump activity, or a decrease in their content, reduces muscle contractility. Conversely, stimulation of the Na+,K+-pump transport rate or increasing the content of Na+,K+ pumps enhances muscle excitability and contractility. Measurements of [3H]ouabain binding to skeletal muscle in vivo or in vitro have enabled the reproducible quantification of the total content of Na+,K+ pumps in molar units in various animal species, and in both healthy people and individuals with various diseases. In contrast, measurements of 3-O-methylfluorescein phosphatase activity associated with the Na+,K+-ATPase may show inconsistent results. Measurements of Na+ and K+ fluxes in intact isolated muscles show that, after Na+ loading or intense excitation, all the Na+,K+ pumps are functional, allowing calculation of the maximum Na+,K+-pumping capacity, expressed in molar units/g muscle/min. The activity and content of Na+,K+ pumps are regulated by exercise, inactivity, K+ deficiency, fasting, age, and several hormones and pharmaceuticals. Studies on the α-subunit isoforms of the Na+,K+-ATPase have detected a relative increase in their number in response to exercise and the glucocorticoid dexamethasone but have not

  12. Quantification of Na+,K+ pumps and their transport rate in skeletal muscle: functional significance.

    Science.gov (United States)

    Clausen, Torben

    2013-10-01

During excitation, muscle cells gain Na+ and lose K+, leading to a rise in extracellular K+ ([K+]o), depolarization, and loss of excitability. Recent studies support the idea that these events are important causes of muscle fatigue and that full use of the Na+,K+-ATPase (also known as the Na+,K+ pump) is often essential for adequate clearance of extracellular K+. As a result of their electrogenic action, Na+,K+ pumps also help reverse depolarization arising during excitation, hyperkalemia, and anoxia, or from cell damage resulting from exercise, rhabdomyolysis, or muscle diseases. The ability to evaluate Na+,K+-pump function and the capacity of the Na+,K+ pumps to fill these needs require quantification of the total content of Na+,K+ pumps in skeletal muscle. Inhibition of Na+,K+-pump activity, or a decrease in their content, reduces muscle contractility. Conversely, stimulation of the Na+,K+-pump transport rate or increasing the content of Na+,K+ pumps enhances muscle excitability and contractility. Measurements of [3H]ouabain binding to skeletal muscle in vivo or in vitro have enabled the reproducible quantification of the total content of Na+,K+ pumps in molar units in various animal species, and in both healthy people and individuals with various diseases. In contrast, measurements of 3-O-methylfluorescein phosphatase activity associated with the Na+,K+-ATPase may show inconsistent results. Measurements of Na+ and K+ fluxes in intact isolated muscles show that, after Na+ loading or intense excitation, all the Na+,K+ pumps are functional, allowing calculation of the maximum Na+,K+-pumping capacity, expressed in molar units/g muscle/min. The activity and content of Na+,K+ pumps are regulated by exercise, inactivity, K+ deficiency, fasting, age, and several hormones and pharmaceuticals. Studies on the α-subunit isoforms of the Na+,K+-ATPase have detected a relative increase in their

  13. Estimation of Input Function from Dynamic PET Brain Data Using Bayesian Blind Source Separation

    Czech Academy of Sciences Publication Activity Database

    Tichý, Ondřej; Šmídl, Václav

    2015-01-01

    Roč. 12, č. 4 (2015), s. 1273-1287 ISSN 1820-0214 R&D Projects: GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : blind source separation * Variational Bayes method * dynamic PET * input function * deconvolution Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.623, year: 2015 http://library.utia.cas.cz/separaty/2015/AS/tichy-0450509.pdf

  14. PERSPECTIVES ON DOE CONSEQUENCE INPUTS FOR ACCIDENT ANALYSIS APPLICATIONS

    International Nuclear Information System (INIS)

    O'Kula, K.R.; Thoman, D.C.; Lowrie, J.; Keller, A.

    2008-01-01

    Department of Energy (DOE) accident analysis for establishing the required control sets for nuclear facility safety applies a series of simplifying, reasonably conservative assumptions regarding inputs and methodologies for quantifying dose consequences. Most of the analytical practices are conservative, have a technical basis, and are based on regulatory precedent. However, others are judgmental and based on older understanding of phenomenology. The latter type of practices can be found in modeling hypothetical releases into the atmosphere and the subsequent exposure. Often the judgments applied are not based on current technical understanding but on work that has been superseded. The objective of this paper is to review the technical basis for the major inputs and assumptions in the quantification of consequence estimates supporting DOE accident analysis, and to identify those that could be reassessed in light of current understanding of atmospheric dispersion and radiological exposure. Inputs and assumptions of interest include: Meteorological data basis; Breathing rate; and Inhalation dose conversion factor. A simple dose calculation is provided to show the relative difference achieved by improving the technical bases

  15. Direct quantification of negatively charged functional groups on membrane surfaces

    KAUST Repository

    Tiraferri, Alberto; Elimelech, Menachem

    2012-01-01

In this study, we present two experimental methods to determine the concentration of negatively charged functional groups at the surface of dense polymeric membranes. Both techniques consist of associating the membrane surface moieties with chemical probes, followed by quantification of the bound probes. Uranyl acetate and toluidine blue O dye, which interact with the membrane functional groups via complexation and electrostatic interaction, respectively, were used as probes.

  16. Quantitative contrast-enhanced first-pass cardiac perfusion MRI at 3 tesla with accurate arterial input function and myocardial wall enhancement.

    Science.gov (United States)

    Breton, Elodie; Kim, Daniel; Chung, Sohae; Axel, Leon

    2011-09-01

To develop, and validate in vivo, a robust quantitative first-pass perfusion cardiovascular MR (CMR) method with accurate arterial input function (AIF) and myocardial wall enhancement. A saturation-recovery (SR) pulse sequence was modified to sequentially acquire multiple slices after a single nonselective saturation pulse at 3 Tesla. In each heartbeat, an AIF image is acquired in the aortic root with a short time delay (TD) (50 ms), followed by the acquisition of myocardial images with longer TD values (~150-400 ms). Longitudinal relaxation rates (R1 = 1/T1) were calculated using an ideal saturation-recovery equation based on the Bloch equation, and corresponding gadolinium contrast concentrations were calculated assuming the fast water exchange condition. The proposed method was validated against a reference multi-point SR method by comparing their respective R1 measurements in the blood and left ventricular myocardium, before and at multiple time points following contrast injections, in 7 volunteers. R1 measurements with the proposed method and reference multi-point method were strongly correlated (r > 0.88, P < 10^-5) and in good agreement (mean difference ± 1.96 standard deviation: 0.131 ± 0.317 / 0.018 ± 0.140 s^-1 for blood/myocardium, respectively). The proposed quantitative first-pass perfusion CMR method measured accurate R1 values for quantification of AIF and myocardial wall contrast agent concentrations in 3 cardiac short-axis slices, in a total acquisition time of 523 ms per heartbeat. Copyright © 2011 Wiley-Liss, Inc.
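
    The quantification chain described, ideal saturation recovery followed by conversion to gadolinium concentration under fast water exchange, reduces to two short formulas. A minimal sketch with illustrative relaxivity and pre-contrast values (not the study's calibration):

```python
import numpy as np

def r1_from_sr(S, S0, TD):
    """Ideal saturation recovery: S = S0 * (1 - exp(-TD * R1))  ->  R1."""
    return -np.log(1.0 - S / S0) / TD

def gd_concentration(R1_post, R1_pre, r1=4.5):
    """Fast water exchange: R1 = R1_pre + r1 * [Gd]  (r1 in s^-1 mM^-1)."""
    return (R1_post - R1_pre) / r1

TD_aif, TD_myo = 0.050, 0.300     # s, short TD for the AIF, longer for the wall
S0 = 1000.0                       # fully relaxed signal (e.g. proton-density scan)
R1_blood_pre = 1 / 1.65           # pre-contrast blood R1 at 3 T (illustrative)
S_post = S0 * (1 - np.exp(-TD_aif * 3.0))   # simulated post-contrast AIF sample
print(gd_concentration(r1_from_sr(S_post, S0, TD_aif), R1_blood_pre))  # mM
```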

  17. Autoregressive moving average (ARMA) model applied to quantification of cerebral blood flow using dynamic susceptibility contrast-enhanced magnetic resonance imaging

    International Nuclear Information System (INIS)

    Murase, Kenya; Yamazaki, Youichi; Shinohara, Masaaki

    2003-01-01

    The purpose of this study was to investigate the feasibility of the autoregressive moving average (ARMA) model for quantification of cerebral blood flow (CBF) with dynamic susceptibility contrast-enhanced magnetic resonance imaging (DSC-MRI) in comparison with deconvolution analysis based on singular value decomposition (DA-SVD). Using computer simulations, we generated a time-dependent concentration of the contrast agent in the volume of interest (VOI) from the arterial input function (AIF) modeled as a gamma-variate function under various CBFs, cerebral blood volumes and signal-to-noise ratios (SNRs) for three different types of residue function (exponential, triangular, and box-shaped). We also considered the effects of delay and dispersion in AIF. The ARMA model and DA-SVD were used to estimate CBF values from the simulated concentration-time curves in the VOI and AIFs, and the estimated values were compared with the assumed values. We found that the CBF value estimated by the ARMA model was more sensitive to the SNR and the delay in AIF than that obtained by DA-SVD. Although the ARMA model considerably overestimated CBF at low SNRs, it estimated the CBF more accurately than did DA-SVD at high SNRs for the exponential or triangular residue function. We believe this study will contribute to an understanding of the usefulness and limitations of the ARMA model when applied to quantification of CBF with DSC-MRI. (author)
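
    For reference, the DA-SVD baseline against which the ARMA model is compared can be sketched in a few lines: build the convolution matrix from a gamma-variate AIF, invert it with truncated SVD, and read CBF off the peak of the recovered residue-scaled function. All values below are illustrative, not the simulation settings of the study.

```python
import numpy as np
from scipy.linalg import toeplitz

dt = 1.0                                         # s
t = np.arange(0, 60, dt)
ts = np.clip(t - 5, 0, None)                     # bolus arrival at 5 s
aif = ts ** 3 * np.exp(-ts / 1.5)                # gamma-variate AIF
aif /= aif.max()

cbf_true, mtt = 0.01, 4.0                        # flow (a.u.), mean transit time (s)
tissue = cbf_true * dt * np.convolve(aif, np.exp(-t / mtt))[:t.size]
tissue += 5e-4 * np.random.default_rng(6).normal(size=t.size)

# DA-SVD: discretise the convolution as a lower-triangular matrix, invert
# with truncated SVD (threshold as a fraction of the largest singular value)
A = dt * np.tril(toeplitz(aif))
u, s, vt = np.linalg.svd(A)
s_inv = np.where(s > 0.2 * s[0], 1.0 / s, 0.0)
k = vt.T @ (s_inv * (u.T @ tissue))              # k(t) = CBF * R(t), R(0) = 1
print("CBF estimate:", k.max(), "vs true:", cbf_true)
```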

  18. Quantification of cellular uptake of DNA nanostructures by qPCR

    DEFF Research Database (Denmark)

    Okholm, Anders Hauge; Nielsen, Jesper Sejrup; Vinther, Mathias

    2014-01-01

    interactions and structural and functional features of the DNA delivery device must be thoroughly investigated. Here, we present a rapid and robust method for the precise quantification of the component materials of DNA origami structures capable of entering cells in vitro. The quantification is performed...

  19. Semi-parametric arterial input functions for quantitative dynamic contrast enhanced magnetic resonance imaging in mice

    Czech Academy of Sciences Publication Activity Database

    Taxt, T.; Reed, R. K.; Pavlin, T.; Rygh, C. B.; Andersen, E.; Jiřík, Radovan

    2018-01-01

Roč. 46, FEB (2018), s. 10-20 ISSN 0730-725X R&D Projects: GA ČR GA17-13830S; GA MŠk(CZ) LO1212 Institutional support: RVO:68081731 Keywords: DCE-MRI * blind deconvolution * arterial input function Subject RIV: FA - Cardiovascular Diseases incl. Cardiothoracic Surgery Impact factor: 2.225, year: 2016

  20. freeQuant: A Mass Spectrometry Label-Free Quantification Software Tool for Complex Proteome Analysis.

    Science.gov (United States)

    Deng, Ning; Li, Zhenye; Pan, Chao; Duan, Huilong

    2015-01-01

The study of complex proteomes places higher demands on mass spectrometry-based quantification methods. In this paper, we present a mass spectrometry label-free quantification tool for complex proteomes, called freeQuant, which effectively integrates quantification with functional analysis. freeQuant consists of two well-integrated modules: label-free quantification and functional analysis with biomedical knowledge. freeQuant supports label-free quantitative analysis which makes full use of tandem mass spectrometry (MS/MS) spectral counts, protein sequence length, shared peptides, and ion intensity. It adopts spectral counting for quantitative analysis and introduces a new method for shared peptides to accurately evaluate the abundance of isoforms. For proteins with low abundance, MS/MS total ion count coupled with spectral counting is included to ensure accurate protein quantification. Furthermore, freeQuant supports large-scale functional annotation for complex proteomes. Mitochondrial proteomes from the mouse heart, the mouse liver, and the human heart were used to evaluate the usability and performance of freeQuant. The evaluation showed that the quantitative algorithms implemented in freeQuant improve the accuracy of quantification and provide better dynamic range.
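
    Spectral counting is commonly normalized by protein length, as in the NSAF measure sketched below. This is a generic illustration of the quantification principle, not freeQuant's exact algorithm, which additionally handles shared peptides and folds in MS/MS total ion current for low-abundance proteins.

```python
import numpy as np

def nsaf(spectral_counts, lengths):
    """Normalized Spectral Abundance Factor: (SpC/L) / sum(SpC/L) over all
    proteins; longer proteins yield more spectra, so counts are length-scaled."""
    saf = np.asarray(spectral_counts, float) / np.asarray(lengths, float)
    return saf / saf.sum()

counts = [120, 45, 8, 300]      # MS/MS spectra matched per protein (illustrative)
lengths = [450, 210, 180, 900]  # protein sequence lengths (residues)
print(nsaf(counts, lengths))    # relative abundance estimates, summing to 1
```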

  1. Inverse uncertainty quantification of reactor simulations under the Bayesian framework using surrogate models constructed by polynomial chaos expansion

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Xu, E-mail: xuwu2@illinois.edu; Kozlowski, Tomasz

    2017-03-15

Modeling and simulations are naturally augmented by extensive Uncertainty Quantification (UQ) and sensitivity analysis requirements in nuclear reactor system design, in which uncertainties must be quantified in order to prove that the investigated design stays within acceptance criteria. Historically, expert judgment has been used to specify the nominal values, probability density functions and upper and lower bounds of the simulation code random input parameters for the forward UQ process. The purpose of this paper is to replace such ad hoc expert judgment of the statistical properties of input model parameters with an inverse UQ process. Inverse UQ seeks statistical descriptions of the model random input parameters that are consistent with the experimental data. Bayesian analysis is used to establish the inverse UQ problems based on experimental data, with systematically and rigorously derived surrogate models based on Polynomial Chaos Expansion (PCE). The methods developed here are demonstrated with the Point Reactor Kinetics Equation (PRKE) coupled with a lumped-parameter thermal-hydraulics feedback model. Three input parameters, external reactivity, Doppler reactivity coefficient and coolant temperature coefficient, are modeled as uncertain input parameters. Their uncertainties are inversely quantified based on synthetic experimental data. Compared with direct numerical simulation, the surrogate model based on PC expansion shows high efficiency and accuracy. In addition, inverse UQ with Bayesian analysis can calibrate the random input parameters such that the simulation results are in better agreement with the experimental data.
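
    The forward building block of such an inverse UQ workflow, a regression-based PCE surrogate, can be sketched in a few lines. The toy "code" and the single standard-normal input below are invented; in the paper the surrogate stands in for the PRKE model inside the Bayesian calibration loop.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

# toy "simulation code": output depends nonlinearly on one uncertain input xi
def code(xi):
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

# degree-4 PCE in probabilists' Hermite polynomials (orthogonal w.r.t. the
# standard normal prior on xi), with coefficients found by regression
xi_train = np.random.default_rng(7).normal(size=50)
V = hermevander(xi_train, 4)                 # columns: He_0(xi) .. He_4(xi)
coef, *_ = np.linalg.lstsq(V, code(xi_train), rcond=None)

# the surrogate is now cheap to evaluate inside an MCMC loop for inverse UQ
xi_test = np.linspace(-3, 3, 7)
print(hermevander(xi_test, 4) @ coef - code(xi_test))   # small residuals
```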

  2. Lessons learned using HAMMLAB experimenter systems: Input for HAMMLAB 2000 functional requirements

    International Nuclear Information System (INIS)

    Sebok, Angelia L.

    1998-02-01

    To design a usable HAMMLAB 2000, lessons learned from use of the existing HAMMLAB must be documented. User suggestions are important and must be taken into account. Different roles in HAMMLAB experimental sessions are identified, and major functions of each role were specified. A series of questionnaires were developed and administered to different users of HAMMLAB, each tailored to the individual job description. The results of those questionnaires are included in this report. Previous HAMMLAB modification recommendations were also reviewed, to provide input to this document. A trial experimental session was also conducted, to give an overview of the tasks in HAMMLAB. (author)

  3. A Method to Select Software Test Cases in Consideration of Past Input Sequence

    International Nuclear Information System (INIS)

    Kim, Hee Eun; Kim, Bo Gyung; Kang, Hyun Gook

    2015-01-01

In the Korea Nuclear I and C Systems (KNICS) project, the software for the fully digitalized reactor protection system (RPS) was developed under a strict procedure. Even though the behavior of the software is deterministic, the randomness of the input sequence produces probabilistic behavior of the software. A software failure occurs when certain inputs to the software occur and interact with the internal state of the digital system to trigger a fault that was introduced into the software during the software lifecycle. In this paper, a method to select a test set for software failure probability estimation is suggested. This test set reflects the past input sequences of the software and covers all possible cases. To obtain the profile of paired state variables, the relationships between the variables need to be considered, and the effect of input from the human operator also has to be considered. As an example, the test set for the PZR-PR-Lo-Trip logic was examined. This method provides a framework for selecting test cases of safety-critical software

  4. Demonstration of uncertainty quantification and sensitivity analysis for PWR fuel performance with BISON

    International Nuclear Information System (INIS)

    Zhang, Hongbin; Zhao, Haihua; Zou, Ling; Burns, Douglas; Ladd, Jacob

    2017-01-01

    BISON is an advanced fuels performance code being developed at Idaho National Laboratory and is the code of choice for fuels performance by the U.S. Department of Energy (DOE)’s Consortium for Advanced Simulation of Light Water Reactors (CASL) Program. An approach to uncertainty quantification and sensitivity analysis with BISON was developed and a new toolkit was created. A PWR fuel rod model was developed and simulated by BISON, and uncertainty quantification and sensitivity analysis were performed with eighteen uncertain input parameters. The maximum fuel temperature and gap conductance were selected as the figures of merit (FOM). Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in sensitivity analysis. (author)

  5. Demonstration of Uncertainty Quantification and Sensitivity Analysis for PWR Fuel Performance with BISON

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Hongbin; Ladd, Jacob; Zhao, Haihua; Zou, Ling; Burns, Douglas

    2015-11-01

    BISON is an advanced fuels performance code being developed at Idaho National Laboratory and is the code of choice for fuels performance by the U.S. Department of Energy (DOE)’s Consortium for Advanced Simulation of Light Water Reactors (CASL) Program. An approach to uncertainty quantification and sensitivity analysis with BISON was developed and a new toolkit was created. A PWR fuel rod model was developed and simulated by BISON, and uncertainty quantification and sensitivity analysis were performed with eighteen uncertain input parameters. The maximum fuel temperature and gap conductance were selected as the figures of merit (FOM). Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in sensitivity analysis.

  6. Quantification of uncertainties in source term estimates for a BWR with Mark I containment

    International Nuclear Information System (INIS)

    Khatib-Rahbar, M.; Cazzoli, E.; Davis, R.; Ishigami, T.; Lee, M.; Nourbakhsh, H.; Schmidt, E.; Unwin, S.

    1988-01-01

A methodology for quantification and uncertainty analysis of source terms for severe accidents in light water reactors (QUASAR) has been developed. The objectives of the QUASAR program are (1) to develop a framework for performing an uncertainty evaluation of the input parameters of the phenomenological models used in the Source Term Code Package (STCP), and (2) to quantify the uncertainties in certain phenomenological aspects of source terms (that are not modeled by STCP) using state-of-the-art methods. The QUASAR methodology consists of (1) screening sensitivity analysis, where the most sensitive input variables are selected for detailed uncertainty analysis, (2) uncertainty analysis, where probability density functions (PDFs) are established for the parameters identified in the screening stage and propagated through the codes to obtain PDFs for the outputs (i.e., release fractions to the environment), and (3) distribution sensitivity analysis, which is performed to determine the sensitivity of the output PDFs to the input PDFs. In this paper attention is limited to a single accident progression sequence, namely a station blackout accident in a BWR with a Mark I containment building. Identified as an important accident sequence in draft NUREG-1150, a station blackout involves loss of both off-site power and DC power, resulting in failure of the diesels to start and in the unavailability of the high-pressure injection and core isolation cooling systems.

  7. Quantification of (R)-[11C]PK11195 binding in rheumatoid arthritis

    International Nuclear Information System (INIS)

    Kropholler, M.A.; Boellaard, R.; Kloet, R.W.; Lammertsma, A.A.; Elzinga, E.H.; Voskuyl, A.E.; Laken, C.J. van der; Dijkmans, B.A.C.; Maruyama, K.

    2009-01-01

Rheumatoid arthritis (RA) involves migration of macrophages into inflamed areas. (R)-[11C]PK11195 binds to peripheral benzodiazepine receptors, expressed on macrophages, and may be used to quantify inflammation using positron emission tomography (PET). This study evaluated methods for the quantification of (R)-[11C]PK11195 binding in the knee joints of RA patients. Data from six patients with RA were analysed. Dynamic PET scans were acquired in 3-D mode following (R)-[11C]PK11195 injection. During scanning, arterial radioactivity concentrations were measured to determine the plasma (R)-[11C]PK11195 concentrations. Data were analysed using irreversible and reversible one-tissue and two-tissue compartment models and input functions with various types of metabolite correction. Model preferences according to the Akaike information criterion (AIC) and correlations between measures were evaluated. Correlations between distribution volume (Vd) and standardized uptake values (SUV) were evaluated. AIC indicated optimal performance for a one-tissue reversible compartment model including blood volume. High correlations were observed between Vd obtained using different input functions (R^2 = 0.80-1.00) and between Vd obtained with one- and two-tissue reversible compartment models (R^2 = 0.75-0.94). A high correlation was observed between optimal Vd and SUV after injection (R^2 = 0.73). (R)-[11C]PK11195 kinetics in the knee were best described by a reversible single-tissue compartment model including blood volume. Applying metabolite corrections did not increase sensitivity. Due to the high correlation with Vd, SUV is a practical alternative for clinical use. (orig.)

  8. An overview of uncertainty quantification techniques with application to oceanic and oil-spill simulations

    KAUST Repository

    Iskandarani, Mohamed; Wang, Shitao; Srinivasan, Ashwanth; Carlisle Thacker, W.; Winokur, Justin; Knio, Omar

    2016-01-01

    We give an overview of four different ensemble-based techniques for uncertainty quantification and illustrate their application in the context of oil plume simulations. These techniques share the common paradigm of constructing a model proxy that efficiently captures the functional dependence of the model output on uncertain model inputs. This proxy is then used to explore the space of uncertain inputs using a large number of samples, so that reliable estimates of the model's output statistics can be calculated. Three of these techniques use polynomial chaos (PC) expansions to construct the model proxy, but they differ in their approach to determining the expansions' coefficients; the fourth technique uses Gaussian Process Regression (GPR). An integral plume model for simulating the Deepwater Horizon oil-gas blowout provides examples for illustrating the different techniques. A Monte Carlo ensemble of 50,000 model simulations is used for gauging the performance of the different proxies. The examples illustrate how regression-based techniques can outperform projection-based techniques when the model output is noisy. They also demonstrate that robust uncertainty analysis can be performed at a fraction of the cost of the Monte Carlo calculation.

  9. An overview of uncertainty quantification techniques with application to oceanic and oil-spill simulations

    KAUST Repository

    Iskandarani, Mohamed

    2016-04-22

We give an overview of four different ensemble-based techniques for uncertainty quantification and illustrate their application in the context of oil plume simulations. These techniques share the common paradigm of constructing a model proxy that efficiently captures the functional dependence of the model output on uncertain model inputs. This proxy is then used to explore the space of uncertain inputs using a large number of samples, so that reliable estimates of the model's output statistics can be calculated. Three of these techniques use polynomial chaos (PC) expansions to construct the model proxy, but they differ in their approach to determining the expansions' coefficients; the fourth technique uses Gaussian Process Regression (GPR). An integral plume model for simulating the Deepwater Horizon oil-gas blowout provides examples for illustrating the different techniques. A Monte Carlo ensemble of 50,000 model simulations is used for gauging the performance of the different proxies. The examples illustrate how regression-based techniques can outperform projection-based techniques when the model output is noisy. They also demonstrate that robust uncertainty analysis can be performed at a fraction of the cost of the Monte Carlo calculation.

  10. Reactor protection system software test-case selection based on input-profile considering concurrent events and uncertainties

    International Nuclear Information System (INIS)

    Khalaquzzaman, M.; Lee, Seung Jun; Cho, Jaehyun; Jung, Wondea

    2016-01-01

Recently, input-profile-based testing for safety-critical software has been proposed for determining the number of test cases and quantifying the failure probability of the software. The input-profile of reactor protection system (RPS) software is the input which causes activation of the system for emergency shutdown of a reactor. This paper presents a method to determine the input-profile of RPS software which considers concurrent events/transients. A deviation of a process parameter value begins with an event and grows under concurrent multi-events, depending on the correlation of process parameters and the severity of incidents. A case of reactor trip caused by feedwater loss and main steam line break is simulated and analyzed to determine the RPS software input-profile and estimate the number of test cases. Different sizes of main steam line breaks (e.g., small, medium, large break) combined with total loss of feedwater supply are considered in constructing the input-profile. The uncertainties of the simulation related to input-profile-based software testing are also included. Our study is expected to provide an option for determining test cases and quantifying RPS software failure probability. (author)

  11. Sensitivity of the 252Cf(sf) neutron observables to the FREYA input yield functions Y(A, Z, TKE)

    Directory of Open Access Journals (Sweden)

    Randrup Jørgen

    2017-01-01

    Full Text Available Within the framework of the fission event generator FREYA, we are studying the sensitivity of various neutron observables to the yield distribution Y(A,Z,TKE) used as input to the code. Concentrating on spontaneous fission of 252Cf, we have sampled a large number of different input yield functions based on χ2 fits to the experimental data on Y(A) and Y(TKE|A). For each of these input yield distributions, we then use FREYA to generate a large sample of complete fission events from which we extract a variety of neutron observables, including the multiplicity distribution and its factorial moments, the associated correlation coefficients, the dependence of the mean neutron multiplicity on the total fragment kinetic energy TKE and on the fragment mass number A, the neutron energy spectrum, and the two-neutron angular correlation function. In this way, we can determine the variation of these observables resulting from the uncertainties in the experimental measurements. The imposition of a constraint on the resulting mean neutron multiplicity reduces the variation of the calculated neutron observables and provides a means for shrinking the uncertainties associated with the measured data.

  12. Sensitivity of the 252Cf(sf) neutron observables to the FREYA input yield functions Y(A, Z, TKE)

    Science.gov (United States)

    Randrup, Jørgen; Talou, Patrick; Vogt, Ramona

    2017-09-01

    Within the framework of the fission event generator FREYA, we are studying the sensitivity of various neutron observables to the yield distribution Y(A,Z,TKE) used as input to the code. Concentrating on spontaneous fission of 252Cf, we have sampled a large number of different input yield functions based on χ2 fits to the experimental data on Y(A) and Y(TKE|A). For each of these input yield distributions, we then use FREYA to generate a large sample of complete fission events from which we extract a variety of neutron observables, including the multiplicity distribution and its factorial moments, the associated correlation coefficients, the dependence of the mean neutron multiplicity on the total fragment kinetic energy TKE and on the fragment mass number A, the neutron energy spectrum, and the two-neutron angular correlation function. In this way, we can determine the variation of these observables resulting from the uncertainties in the experimental measurements. The imposition of a constraint on the resulting mean neutron multiplicity reduces the variation of the calculated neutron observables and provides a means for shrinking the uncertainties associated with the measured data.

  13. Preclinical In vivo Imaging for Fat Tissue Identification, Quantification, and Functional Characterization.

    Science.gov (United States)

    Marzola, Pasquina; Boschi, Federico; Moneta, Francesco; Sbarbati, Andrea; Zancanaro, Carlo

    2016-01-01

    Localization, differentiation, and quantitative assessment of fat tissues have always collected the interest of researchers. Nowadays, these topics are even more relevant as obesity (the excess of fat tissue) is considered a real pathology requiring in some cases pharmacological and surgical approaches. Several weight loss medications, acting either on the metabolism or on the central nervous system, are currently under preclinical or clinical investigation. Animal models of obesity have been developed and are widely used in pharmaceutical research. The assessment of candidate drugs in animal models requires non-invasive methods for longitudinal assessment of efficacy, the main outcome being the amount of body fat. Fat tissues can be either quantified in the entire animal or localized and measured in selected organs/regions of the body. Fat tissues are characterized by peculiar contrast in several imaging modalities, such as Magnetic Resonance Imaging (MRI), which can distinguish between fat and water protons thanks to their different magnetic resonance properties. Since fat tissues have higher carbon/hydrogen content than other soft tissues and bones, they can be easily assessed by Computed Tomography (CT) as well. Interestingly, MRI also discriminates between white and brown adipose tissue (BAT); the latter has long been regarded as a potential target for anti-obesity drugs because of its ability to enhance energy consumption through increased thermogenesis. Positron Emission Tomography (PET) performed with 18F-FDG as glucose analog radiotracer reflects well the metabolic rate in body tissues and consequently is the technique of choice for studies of BAT metabolism. This review will focus on the main, non-invasive imaging techniques (MRI, CT, and PET) that are fundamental for the assessment, quantification and functional characterization of fat deposits in small laboratory animals. The contribution of optical techniques, which are currently regarded with

  14. Preclinical in vivo imaging for fat tissue identification, quantification and functional characterization

    Directory of Open Access Journals (Sweden)

    Pasquina Marzola

    2016-09-01

    Full Text Available Localization, differentiation and quantitative assessment of fat tissues have always collected the interest of researchers. Nowadays, these topics are even more relevant as obesity (the excess of fat tissue) is considered a real pathology requiring in some cases pharmacological and surgical approaches. Several weight loss medications, acting either on the metabolism or on the central nervous system, are currently under preclinical or clinical investigation. Animal models of obesity have been developed which are widely used in pharmaceutical research. The assessment of candidate drugs in animal models requires non-invasive methods for longitudinal assessment of efficacy, the main outcome being the amount of body fat. Fat tissues can be either quantified in the entire animal or localized and measured in selected organs/regions of the body. Fat tissues are characterized by peculiar contrast in several imaging modalities, such as Magnetic Resonance Imaging (MRI), which can distinguish between fat and water protons thanks to their different magnetic resonance properties. Since fat tissues have higher carbon/hydrogen content than other soft tissues and bones, they can be easily assessed by Computed Tomography (CT) as well. Interestingly, MRI also discriminates between white and brown adipose tissue (BAT); the latter has long been regarded as a potential target for anti-obesity drugs because of its ability to enhance energy consumption through increased thermogenesis. Positron Emission Tomography (PET) performed with 18F-FDG as glucose analogue radiotracer reflects well the metabolic rate in body tissues and consequently is the technique of choice for studies of BAT metabolism. This review will focus on the main, non-invasive imaging techniques (MRI, CT and PET) that are fundamental for the assessment, quantification and functional characterization of fat deposits in small laboratory animals. The contribution of optical techniques, which are currently regarded

  15. Uncertainty Quantification given Discontinuous Climate Model Response and a Limited Number of Model Runs

    Science.gov (United States)

    Sargsyan, K.; Safta, C.; Debusschere, B.; Najm, H.

    2010-12-01

    Uncertainty quantification in complex climate models is challenged by the sparsity of available climate model predictions due to the high computational cost of model runs. Another feature that prevents classical uncertainty analysis from being readily applicable is bifurcative behavior in climate model response with respect to certain input parameters. A typical example is the Atlantic Meridional Overturning Circulation. The predicted maximum overturning stream function exhibits discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We outline a methodology for uncertainty quantification given discontinuous model response and a limited number of model runs. Our approach is two-fold. First we detect the discontinuity with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve shape and location for arbitrarily distributed input parameter values. Then, we construct spectral representations of uncertainty, using Polynomial Chaos (PC) expansions on either side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification. The approach is enabled by a Rosenblatt transformation that maps each side of the discontinuity to regular domains where desirable orthogonality properties for the spectral bases hold. We obtain PC modes by either orthogonal projection or Bayesian inference, and argue for a hybrid approach that targets a balance between the accuracy provided by the orthogonal projection and the flexibility provided by the Bayesian inference - where the latter allows obtaining reasonable expansions without extra forward model runs. The model output, and its associated uncertainty at specific design points, are then computed by taking an ensemble average over PC expansions corresponding to possible realizations of the discontinuity curve. The methodology is tested on synthetic examples of
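
    The two-sided surrogate idea can be sketched compactly. The example below is a simplified stand-in: the discontinuity curve is assumed known, and ordinary least-squares polynomial fits take the place of PC expansions on Rosenblatt-mapped domains; the point is that fitting each side separately avoids smearing the jump with a single global expansion.

```python
# Simplified stand-in for the two-sided surrogate strategy (not the paper's
# code): the discontinuity curve is assumed known, and least-squares
# quadratic fits replace PC expansions on Rosenblatt-mapped domains.
import numpy as np

def design(x):  # quadratic basis in two inputs, a stand-in for a PC basis
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

def model(x):   # synthetic bifurcative response with a jump across x2 = x1
    return np.where(x[:, 1] > x[:, 0], 1.0 + x[:, 0], -1.0 + 0.5 * x[:, 1])

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = model(X)

side = X[:, 1] > X[:, 0]                  # assumed-known discontinuity curve
coef = {}
for s in (True, False):                   # one surrogate per side of the jump
    A, b = design(X[side == s]), y[side == s]
    coef[s], *_ = np.linalg.lstsq(A, b, rcond=None)

X_new = rng.uniform(0.0, 1.0, size=(5, 2))
pred = np.array([design(x[None])[0] @ coef[bool(x[1] > x[0])] for x in X_new])
print(np.c_[model(X_new), pred])          # piecewise fit reproduces the jump
```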

  16. Dynamic Contrast-Enhanced Perfusion MRI of High Grade Brain Gliomas Obtained with Arterial or Venous Waveform Input Function.

    Science.gov (United States)

    Filice, Silvano; Crisi, Girolamo

    2016-01-01

    The aim of this study was to evaluate the differences in dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) perfusion estimates of high-grade brain gliomas (HGG) due to the use of an input function (IF) obtained from arterial (AIF) and venous (VIF) approaches, respectively, by two different commercially available software applications. This prospective study includes 20 patients with pathologically confirmed diagnosis of high-grade gliomas. The source data were processed using two dedicated commercial DCE packages, both based on the extended Tofts model, but the first customized to obtain the input function from arterial measurement and the second from sagittal sinus sampling. The quantitative parametric perfusion maps estimated by the two software packages were compared by means of a region of interest (ROI) analysis. The resulting input functions from venous and arterial data were also compared. No significant difference was found between the perfusion parameters obtained with the two software packages (at the .05 significance level). The comparison of the VIFs and AIFs obtained by the two packages showed no statistical differences. Direct comparison of DCE-MRI measurements with IF generated by means of arterial or venous waveform led to no statistical difference in quantitative metrics for evaluating HGG. However, additional research involving DCE-MRI acquisition protocols and post-processing would be beneficial to further substantiate the effectiveness of the venous approach as the IF method compared with arterial-based IF measurement. Copyright © 2015 by the American Society of Neuroimaging.
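
    For reference, the extended Tofts model on which both packages are based is conventionally written as follows (standard form, not quoted from the paper), with the tissue concentration driven by whichever input function, arterial or venous, is supplied:

```latex
% Extended Tofts model: C_t(t) tissue concentration, C_p(t) the chosen
% (arterial or venous) input function; K^trans, v_e, v_p are fitted.
C_t(t) \;=\; v_p\,C_p(t) \;+\; K^{\mathrm{trans}} \int_0^{t} C_p(\tau)\,
e^{-k_{ep}(t-\tau)}\,\mathrm{d}\tau,
\qquad k_{ep} = \frac{K^{\mathrm{trans}}}{v_e}
```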

  17. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    Science.gov (United States)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

    2018-03-01

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
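
    The global sensitivity analysis step can be illustrated in miniature. The sketch below computes first-order Sobol indices with the standard Saltelli pick-freeze estimator; the expensive large-eddy simulation is replaced by a cheap analytic stand-in (the Ishigami function), so the code demonstrates only the method, not the scramjet application.

```python
# Hedged sketch of variance-based global sensitivity analysis: first-order
# Sobol indices via the standard Saltelli pick-freeze estimator. The
# expensive scramjet LES is replaced by the cheap Ishigami test function.
import numpy as np

def model(x):  # Ishigami function (a=7, b=0.1), a common GSA benchmark
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(5)
d, N = 3, 20_000
A = rng.uniform(-np.pi, np.pi, size=(N, d))   # two independent sample blocks
B = rng.uniform(-np.pi, np.pi, size=(N, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                        # "pick-freeze": swap column i
    S1 = np.mean(fB * (model(ABi) - fA)) / var
    print(f"S1[{i}] = {S1:.2f}")               # expect ~0.31, 0.44, 0.00
```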

  18. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    Energy Technology Data Exchange (ETDEWEB)

    Huan, Xun [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Geraci, Gianluca [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eldred, Michael S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vane, Zachary P. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Lacaze, Guilhem [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Oefelein, Joseph C. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2018-02-09

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  19. Efficient uncertainty quantification of a fully nonlinear and dispersive water wave model with random inputs

    DEFF Research Database (Denmark)

    Bigoni, Daniele; Engsig-Karup, Allan Peter; Eskilsson, Claes

    2016-01-01

    A major challenge in next-generation industrial applications is to improve numerical analysis by quantifying uncertainties in predictions. In this work we present a formulation of a fully nonlinear and dispersive potential flow water wave model with random inputs for the probabilistic description...... at different points in the parameter space, allowing for the reuse of existing simulation software. The choice of the applied methods is driven by the number of uncertain input parameters and by the fact that finding the solution of the considered model is computationally intensive. We revisit experimental...... benchmarks often used for validation of deterministic water wave models. Based on numerical experiments and assumed uncertainties in boundary data, our analysis reveals that some of the known discrepancies from deterministic simulation in comparison with experimental measurements could be partially explained...

  20. Ultra-low input transcriptomics reveal the spore functional content and phylogenetic affiliations of poorly studied arbuscular mycorrhizal fungi.

    Science.gov (United States)

    Beaudet, Denis; Chen, Eric C H; Mathieu, Stephanie; Yildirir, Gokalp; Ndikumana, Steve; Dalpé, Yolande; Séguin, Sylvie; Farinelli, Laurent; Stajich, Jason E; Corradi, Nicolas

    2017-12-02

    Arbuscular mycorrhizal fungi (AMF) are a group of soil microorganisms that establish symbioses with the vast majority of land plants. To date, generation of AMF coding information has been limited to model genera that grow well axenically: Rhizoglomus and Gigaspora. Meanwhile, data on the functional gene repertoire of most AMF families are non-existent. Here, we provide primary large-scale transcriptome data from eight poorly studied AMF species (Acaulospora morrowiae, Diversispora versiforme, Scutellospora calospora, Racocetra castanea, Paraglomus brasilianum, Ambispora leptoticha, Claroideoglomus claroideum and Funneliformis mosseae) using ultra-low input ribonucleic acid (RNA)-seq approaches. Our analyses reveal that quiescent spores of many AMF species harbour substantial functional diversity and solidify known evolutionary relationships within the group. Our findings demonstrate that RNA-seq data obtained from low-input RNA are reliable in comparison to conventional RNA-seq experiments. Thus, our methodology can potentially be used to deepen our understanding of fungal microbial function and phylogeny using minute amounts of RNA material. © The Author 2017. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.

  1. Perfusion Quantification Using Gaussian Process Deconvolution

    DEFF Research Database (Denmark)

    Andersen, Irene Klærke; Have, Anna Szynkowiak; Rasmussen, Carl Edward

    2002-01-01

    The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated...

  2. Advances in forensic DNA quantification: a review.

    Science.gov (United States)

    Lee, Steven B; McCord, Bruce; Buel, Eric

    2014-11-01

    This review focuses upon a critical step in forensic biology: detection and quantification of human DNA from biological samples. Determination of the quantity and quality of human DNA extracted from biological evidence is important for several reasons. Firstly, depending on the source and extraction method, the quality (purity and length) and quantity of the resultant DNA extract can vary greatly. This affects the downstream method, as the quantity of input DNA and its relative length can determine which genotyping procedure to use: standard short-tandem repeat (STR) typing, mini-STR typing or mitochondrial DNA sequencing. Secondly, because it is important in forensic analysis to preserve as much of the evidence as possible for retesting, it is important to determine the total DNA amount available prior to utilizing any destructive analytical method. Lastly, results from initial quantitative and qualitative evaluations permit a more informed interpretation of downstream analytical results. Newer quantitative techniques involving real-time PCR can reveal the presence of degraded DNA and PCR inhibitors, which provide potential reasons for poor genotyping results and may indicate which methods to use for downstream typing success. In general, the more information available, the easier it is to interpret and process the sample, resulting in a higher likelihood of successful DNA typing. The history of the development of quantitative methods has involved two main goals: improving the precision of the analysis and increasing the information content of the result. This review covers advances in forensic DNA quantification methods and recent developments in RNA quantification. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Quantification of fossil organic matter in contaminated sediments from an industrial watershed: Validation of the quantitative multimolecular approach by radiocarbon analysis

    International Nuclear Information System (INIS)

    Jeanneau, Laurent; Faure, Pierre

    2010-01-01

    The quantitative multimolecular approach (QMA), based on an exhaustive identification and quantification of molecules from the extractable organic matter (EOM), has recently been developed in order to investigate organic contamination in sediments by a more complete method than the restrictive quantification of target contaminants. Such an approach allows (i) the comparison between natural and anthropogenic inputs, (ii) the comparison between modern and fossil organic matter and (iii) the differentiation between several anthropogenic sources. However, QMA is based on the quantification of molecules recovered by organic solvent and then analyzed by gas chromatography-mass spectrometry, which represent a small fraction of sedimentary organic matter (SOM). In order to extend the conclusions of QMA to SOM, radiocarbon analyses have been performed on organic extracts and decarbonated sediments. This analysis allows (i) the differentiation between modern biomass (contemporary 14C) and fossil organic matter (14C-free) and (ii) the calculation of the modern carbon percentage (PMC). At the confluence of the Fensch and Moselle Rivers, a catchment highly contaminated by both industrial activities and urbanization, PMC values in decarbonated sediments are well correlated with the percentage of natural molecular markers determined by QMA. This highlights that, for this type of contamination by fossil organic matter inputs, the conclusions of QMA can be scaled up to SOM. QMA is an efficient environmental diagnostic tool that leads to a more realistic quantification of fossil organic matter in sediments.
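
    The link between the radiocarbon measurement and the fossil-carbon estimate can be made explicit. Assuming a simple two-endmember mixture of modern biomass (contemporary 14C) and fossil organic matter (14C-free), the modern carbon percentage and the fossil fraction follow as:

```latex
% Percent modern carbon from measured vs. modern-reference 14C activity,
% and the fossil-carbon fraction under a two-endmember mixing assumption.
\mathrm{PMC} = 100 \times \frac{A_{\mathrm{sample}}}{A_{\mathrm{modern}}},
\qquad
f_{\mathrm{fossil}} \approx 1 - \frac{\mathrm{PMC}}{100}
```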

  4. PaCeQuant: A Tool for High-Throughput Quantification of Pavement Cell Shape Characteristics.

    Science.gov (United States)

    Möller, Birgit; Poeschl, Yvonne; Plötner, Romina; Bürstenbinder, Katharina

    2017-11-01

    Pavement cells (PCs) are the most frequently occurring cell type in the leaf epidermis and play important roles in leaf growth and function. In many plant species, PCs form highly complex jigsaw-puzzle-shaped cells with interlocking lobes. Understanding of their development is of high interest for plant science research because of their importance for leaf growth and hence for plant fitness and crop yield. Studies of PC development, however, are limited because robust methods are lacking that would enable automatic segmentation and quantification of PC shape parameters capable of reflecting their cellular complexity. Here, we present our new ImageJ-based tool, PaCeQuant, which provides a fully automatic image analysis workflow for PC shape quantification. PaCeQuant automatically detects cell boundaries of PCs from confocal input images and enables manual correction of automatic segmentation results or direct import of manually segmented cells. PaCeQuant simultaneously extracts 27 shape features that include global, contour-based, skeleton-based, and PC-specific object descriptors. In addition, we included a method for classification and analysis of lobes at two-cell and three-cell junctions. We provide an R script for graphical visualization and statistical analysis. We validated PaCeQuant by extensive comparative analysis to manual segmentation and existing quantification tools and demonstrated its usability to analyze PC shape characteristics during development and between different genotypes. PaCeQuant thus provides a platform for robust, efficient, and reproducible quantitative analysis of PC shape characteristics that can easily be applied to study PC development in large data sets. © 2017 American Society of Plant Biologists. All Rights Reserved.

  5. Pore REconstruction and Segmentation (PORES) method for improved porosity quantification of nanoporous materials

    Energy Technology Data Exchange (ETDEWEB)

    Van Eyndhoven, G., E-mail: geert.vaneyndhoven@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Kurttepeli, M. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Van Oers, C.J.; Cool, P. [Laboratory of Adsorption and Catalysis, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Batenburg, K.J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1090 GB Amsterdam (Netherlands); Mathematical Institute, Universiteit Leiden, Niels Bohrweg 1, NL-2333 CA Leiden (Netherlands); Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-01-15

    Electron tomography is currently a versatile tool to investigate the connection between the structure and properties of nanomaterials. However, a quantitative interpretation of electron tomography results is still far from straightforward. Especially accurate quantification of pore-space is hampered by artifacts introduced in all steps of the processing chain, i.e., acquisition, reconstruction, segmentation and quantification. Furthermore, most common approaches require subjective manual user input. In this paper, the PORES algorithm “POre REconstruction and Segmentation” is introduced; it is a tailor-made, integral approach, for the reconstruction, segmentation, and quantification of porous nanomaterials. The PORES processing chain starts by calculating a reconstruction with a nanoporous-specific reconstruction algorithm: the Simultaneous Update of Pore Pixels by iterative REconstruction and Simple Segmentation algorithm (SUPPRESS). It classifies the interior region to the pores during reconstruction, while reconstructing the remaining region by reducing the error with respect to the acquired electron microscopy data. The SUPPRESS reconstruction can be directly plugged into the remaining processing chain of the PORES algorithm, resulting in accurate individual pore quantification and full sample pore statistics. The proposed approach was extensively validated on both simulated and experimental data, indicating its ability to generate accurate statistics of nanoporous materials. - Highlights: • An electron tomography reconstruction/segmentation method for nanoporous materials. • The method exploits the porous nature of the scanned material. • Validated extensively on both simulation and real data experiments. • Results in increased image resolution and improved porosity quantification.

  6. Terahertz identification and quantification of penicillamine enantiomers

    International Nuclear Information System (INIS)

    Ji Te; Zhao Hongwei; Chen Min; Xiao Tiqiao; Han Pengyu

    2013-01-01

    Identification and characterization of L-, D- and DL-penicillamine were demonstrated by terahertz time-domain spectroscopy (THz-TDS). To understand the physical origins of the low frequency resonant modes, density functional theory (DFT) was adopted for theoretical calculation. It was found that the collective THz frequency motions were determined by the intramolecular and intermolecular hydrogen bond interactions. Moreover, the quantification of a penicillamine enantiomer mixture was demonstrated by a THz spectra fitting method with a relative error of less than 3.5%. This technique can be a valuable tool for the discrimination and quantification of chiral drugs in the pharmaceutical industry. (authors)

  7. Quantification of dopamine transporter density with [18F]FECNT PET in healthy humans

    International Nuclear Information System (INIS)

    Nye, Jonathon A.; Votaw, John R.; Bremner, J. Douglas; Davis, Margaret R.; Voll, Ronald J.; Camp, Vernon M.; Goodman, Mark M.

    2014-01-01

    Introduction: Fluorine-18 labeled 2β-carbomethoxy-3β-(4-chlorophenyl)-8-(2-fluoroethyl)nortropane ([18F]FECNT) binds reversibly to the dopamine transporter (DAT) with high selectivity. [18F]FECNT has been used extensively in the quantification of DAT occupancy in non-human primate brain and can distinguish between Parkinson's patients and healthy controls in humans. The purpose of this work was to develop a compartment model to characterize the kinetics of [18F]FECNT for quantification of DAT density in healthy human brain. Methods: Twelve healthy volunteers underwent 180 min dynamic [18F]FECNT PET imaging including sampling of arterial blood. Regional time-activity curves were extracted from the caudate, putamen and midbrain, including a reference region placed in the cerebellum. Binding potential, BPND, was calculated for all regions using kinetic parameters estimated from compartmental and Logan graphical model fits to the time-activity data. Simulations were performed to determine whether the compartment model could reliably fit time-activity data over a range of BPND values. Results: The kinetics of [18F]FECNT were well-described by the reversible 2-tissue arterial input and full reference tissue compartment models. Calculated binding potentials in the caudate, putamen and midbrain were in good agreement between the arterial input model, reference tissue model and the Logan graphical model. The distribution volume in the cerebellum did not reach a plateau over the duration of the study, which may be a result of non-specific binding in the cerebellum. Simulations that included non-specific binding show that the reference and arterial input models are able to estimate BPND for DAT densities well below that observed in normal volunteers. Conclusion: The kinetics of [18F]FECNT in human brain are well-described by arterial input and reference tissue compartment models. Measured and simulated data show that BPND calculated with reference tissue model
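
    For readers unfamiliar with the notation, the reversible 2-tissue compartment model referred to above is conventionally written as a pair of coupled first-order equations driven by the arterial input function, with the binding potential defined from the rate constants (standard formulation, not quoted from the paper):

```latex
% Reversible two-tissue compartment model with arterial input C_p(t);
% C_1 = free/non-displaceable pool, C_2 = specifically bound pool.
\frac{dC_1}{dt} = K_1\,C_p(t) - (k_2 + k_3)\,C_1(t) + k_4\,C_2(t),
\qquad
\frac{dC_2}{dt} = k_3\,C_1(t) - k_4\,C_2(t)
% Binding potential relative to the non-displaceable compartment:
BP_{\mathrm{ND}} = \frac{k_3}{k_4}
```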

  8. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media

  9. A Microneedle Functionalized with Polyethyleneimine and Nanotubes for Highly Sensitive, Label-Free Quantification of DNA.

    Science.gov (United States)

    Saadat-Moghaddam, Darius; Kim, Jong-Hoon

    2017-08-16

    The accurate measure of DNA concentration is necessary for many DNA-based biological applications. However, the current methods are limited in terms of sensitivity, reproducibility, human error, and contamination. Here, we present a microneedle functionalized with polyethyleneimine (PEI) and single-walled carbon nanotubes (SWCNTs) for the highly sensitive quantification of DNA. The microneedle was fabricated using ultraviolet (UV) lithography and anisotropic etching, and then functionalized with PEI and SWCNTs through a dip coating process. The electrical characteristics of the microneedle change with the accumulation of DNA on the surface. Current-voltage measurements in deionized water were conducted to study these changes in the electrical properties of the sensor. The sensitivity test found the signal to be discernible from the noise level down to 100 attomolar (aM), demonstrating higher sensitivity than currently available UV fluorescence and UV absorbance based methods. A microneedle without any surface modification only had a 100 femtomolar (fM) sensitivity. All measurement results were consistent with fluorescence microscopy.

  10. Origin and function of short-latency inputs to the neural substrates underlying the acoustic startle reflex

    Directory of Open Access Journals (Sweden)

    Ricardo eGómez-Nieto

    2014-07-01

    Full Text Available The acoustic startle reflex (ASR) is a survival mechanism of alarm, which rapidly alerts the organism to a sudden loud auditory stimulus. In rats, the primary ASR circuit encompasses three serially connected structures: cochlear root neurons (CRNs), neurons in the caudal pontine reticular nucleus (PnC), and motoneurons in the medulla and spinal cord. It is well established that both CRNs and PnC neurons receive short-latency auditory inputs to mediate the ASR. Here, we investigated the anatomical origin and functional role of these inputs using a multidisciplinary approach that combines morphological, electrophysiological and behavioural techniques. Anterograde tracer injections into the cochlea suggest that CRN somata and dendrites receive inputs depending, respectively, on their basal or apical cochlear origin. Confocal colocalization experiments demonstrated that these cochlear inputs are immunopositive for the vesicular glutamate transporter 1. Using extracellular recordings in vivo followed by subsequent tracer injections, we investigated the response of PnC neurons after contra-, ipsi-, and bilateral acoustic stimulation and identified the source of their auditory afferents. Our results showed that the binaural firing rate of PnC neurons was higher than the monaural, exhibiting higher spike discharges with contralateral than ipsilateral acoustic stimulations. Our histological analysis confirmed the CRNs as the principal source of short-latency acoustic inputs, and indicated that other areas of the cochlear nucleus complex are not likely to innervate the PnC. Behaviourally, we observed a strong reduction of ASR amplitude in monaurally earplugged rats that corresponds with the binaural summation process shown in our electrophysiological findings. Our study contributes to a better understanding of the role of neuronal mechanisms in auditory alerting behaviours and provides strong evidence that the CRNs-PnC pathway mediates fast neurotransmission and binaural

  11. Statistical identification of effective input variables

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1982-09-01

    A statistical sensitivity analysis procedure has been developed for ranking the input data of large computer codes in the order of sensitivity-importance. The method is economical for large codes with many input variables, since it uses a relatively small number of computer runs. No prior judgemental elimination of input variables is needed. The screening method is based on stagewise correlation and extensive regression analysis of output values calculated with selected input value combinations. The regression process deals with multivariate nonlinear functions, and statistical tests are also available for identifying input variables that contribute to threshold effects, i.e., discontinuities in the output variables. A computer code SCREEN has been developed for implementing the screening techniques. The efficiency has been demonstrated by several examples and applied to a fast reactor safety analysis code (Venus-II). However, the methods and the coding are general and not limited to such applications
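
    The idea of ranking inputs by stagewise correlation and regression can be sketched compactly. The code below is a generic illustration, not the SCREEN code itself: it repeatedly picks the input most correlated with the current residual, regresses it out, and returns the inputs in order of sensitivity-importance.

```python
# Generic illustration of stagewise screening (in the spirit of SCREEN, not
# its actual code): greedily pick the input most correlated with the current
# residual, regress it out, and repeat to rank inputs by importance.
import numpy as np

def stagewise_rank(X, y, n_select=None):
    n, p = X.shape
    resid = y - y.mean()
    remaining, order = list(range(p)), []
    for _ in range(n_select or p):
        r = [abs(np.corrcoef(X[:, j], resid)[0, 1]) for j in remaining]
        order.append(remaining.pop(int(np.argmax(r))))
        A = np.column_stack([np.ones(n), X[:, order]])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta                   # regress selected inputs out
    return order

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 8))                   # 60 code runs, 8 input variables
y = 3.0 * X[:, 2] - 2.0 * X[:, 5] + 0.1 * rng.normal(size=60)
print(stagewise_rank(X, y, n_select=3))        # expect inputs 2 and 5 first
```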

  12. Rapid quantification of vesicle concentration for DOPG/DOPC and Cardiolipin/DOPC mixed lipid systems of variable composition.

    Science.gov (United States)

    Elmer-Dixon, Margaret M; Bowler, Bruce E

    2018-05-19

    A novel approach to quantify mixed lipid systems is described. Traditional approaches to lipid vesicle quantification are time consuming, require large amounts of material and are destructive. We extend our recently described method for quantification of pure lipid systems to mixed lipid systems. The method only requires a UV-Vis spectrometer and does not destroy the sample. Mie scattering data from absorbance measurements are used as input to a Matlab program to calculate the total vesicle concentration and the concentrations of each lipid in the mixed lipid system. The technique is fast and accurate, which is essential for analytical lipid binding experiments. Copyright © 2018. Published by Elsevier Inc.
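
    The quantification step amounts to decomposing the measured absorbance spectrum over per-lipid basis spectra. The sketch below is a generic analogue of that calculation; the basis spectra here are fabricated Gaussians, not real DOPG/DOPC extinction curves, and the paper's own implementation is a Matlab program.

```python
# Generic analogue of the spectral-decomposition step (the paper uses a
# Matlab program). Columns of A are extinction basis spectra of the pure
# lipid vesicle populations; here they are fabricated Gaussians, not real
# DOPG/DOPC extinction curves. Concentrations follow from non-negative
# least squares on the measured mixture spectrum b.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(230.0, 330.0, 101)            # nm, UV-Vis scan
basis_dopg = np.exp(-((wavelengths - 260.0) / 40.0) ** 2)
basis_dopc = np.exp(-((wavelengths - 280.0) / 45.0) ** 2)
A = np.column_stack([basis_dopg, basis_dopc])

true_conc = np.array([0.3, 0.7])                        # synthetic ground truth
rng = np.random.default_rng(3)
b = A @ true_conc + 1e-3 * rng.normal(size=wavelengths.size)

conc, residual_norm = nnls(A, b)                        # enforce conc >= 0
print(conc)                                             # ~[0.3, 0.7]
```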

  13. Smart mobility solution with multiple input/output interface.

    Science.gov (United States)

    Sethi, Aartika; Deb, Sujay; Ranjan, Prabhat; Sardar, Arghya

    2017-07-01

    Smart wheelchairs are commonly used to provide a solution for mobility impairment. However, their usage is limited, primarily due to the high cost of the sensors required for input, lack of adaptability to different categories of input, and limited functionality. In this paper we propose a smart mobility solution using a smartphone with inbuilt sensors (accelerometer, camera and speaker) as an input interface. An Emotiv EPOC+ is also used for motor-imagery-based input control, synced with facial expressions, in cases of extreme disability. Apart from traction, additional functions like home security and automation are provided using the Internet of Things (IoT) and web interfaces. Although preliminary, our results suggest that this system can be used as an integrated and efficient solution for people suffering from mobility impairment. The results also indicate that a decent accuracy is obtained for the overall system.

  14. QUANTIFICATION OF A C-11 LABELED BETA-ADRENOCEPTOR LIGAND, S-(-)CGP-12177, IN PLASMA OF HUMANS AND RATS

    NARCIS (Netherlands)

    VANWAARDE, A; ANTHONIO, RL; ELSINGA, PH; POSTHUMUS, H; WEEMAES, AMA; BLANKSMA, PK; PAANS, AMJ; VAALBURG, W; Visser, Ton J.; Visser, Gerben

    1995-01-01

    beta-Adrenoceptors in human lungs and heart can be imaged with the radioligand 4-[3-[(1,1-dimethylethyl)amino]-2-hydroxypropoxy]-1,3-dihydro-2H-benzimidazol-2-C-11-one (CGP 12177, [C-11]I). For quantification of receptor density with compartment models by adjustment of rate constants, an 'input

  15. Short-Term Memory in Mathematics-Proficient and Mathematics-Disabled Students as a Function of Input-Modality/Output-Modality Pairings.

    Science.gov (United States)

    Webster, Raymond E.

    1980-01-01

    A significant two-way input modality by output modality interaction suggested that short term memory capacity among the groups differed as a function of the modality used to present the items in combination with the output response required. (Author/CL)

  16. Emphysema quantification from CT scans using novel application of diaphragm curvature estimation: comparison with standard quantification methods and pulmonary function data

    Science.gov (United States)

    Keller, Brad M.; Reeves, Anthony P.; Yankelevitz, David F.; Henschke, Claudia I.; Barr, R. Graham

    2009-02-01

    Emphysema is a disease of the lungs that destroys the alveolar air sacs and induces long-term respiratory dysfunction. CT scans allow for imaging of the anatomical basis of emphysema and quantification of the underlying disease state. Several measures have been introduced for the quantification of emphysema directly from CT data; most, however, are based on the analysis of density information provided by the CT scans, which varies by scanner and can be hard to standardize across sites and time. Given that one of the anatomical variations associated with the progression of emphysema is the flattening of the diaphragm due to the loss of elasticity in the lung parenchyma, curvature analysis of the diaphragm would provide information about emphysema from CT. Therefore, we propose a new, non-density-based measure of the curvature of the diaphragm that would allow for robust quantification. To evaluate the new method, 24 whole-lung scans were analyzed using the ratios of the lung height and diaphragm width to diaphragm height as curvature estimates, with the emphysema index used for comparison. Pearson correlation coefficients showed a strong trend for several of the proposed diaphragm curvature measures to have higher correlations, of up to r=0.57, with DLCO% and VA than did the emphysema index. Furthermore, we found the emphysema index to have only a 0.27 correlation with the proposed measures, indicating that the proposed measures evaluate different aspects of the disease.

  17. On the Nature of the Input in Optimality Theory

    DEFF Research Database (Denmark)

    Heck, Fabian; Müller, Gereon; Vogel, Ralf

    2002-01-01

    The input has two main functions in optimality theory (Prince and Smolensky 1993). First, the input defines the candidate set, in other words it determines which output candidates compete for optimality, and which do not. Second, the input is referred to by faithfulness constraints that prohibit...... output candidates from deviating from specifications in the input. Whereas there is general agreement concerning the relevance of the input in phonology, the nature of the input in syntax is notoriously unclear. In this article, we show that the input should not be taken to define syntactic candidate...... and syntax is due to a basic, irreducible difference between these two components of grammar: Syntax is an information preserving system, phonology is not....

  18. Measuring Input Thresholds on an Existing Board

    Science.gov (United States)

    Kuperman, Igor; Gutrich, Daniel G.; Berkun, Andrew C.

    2011-01-01

    A critical PECL (positive emitter-coupled logic) to Xilinx interface needed to be changed on an existing flight board. The new Xilinx input interface used a CMOS (complementary metal-oxide semiconductor) type of input, and the driver could meet its thresholds typically, but not in the worst case, according to the data sheet. The previous interface had been based on comparison with an external reference, but the CMOS input is based on comparison with an internal divider from the power supply. A way was needed to measure the exact input threshold of this device for 64 inputs on a flight board. The measurement technique allowed an accurate measurement of the voltage required to switch a Xilinx input from high to low for each of the 64 lines, while only probing two of them. Directly driving an external voltage was considered too risky, and tests done on any other unit could not be used to qualify the flight board. The two lines directly probed gave an absolute voltage threshold calibration, while data collected on the remaining 62 lines without probing gave relative measurements that could be used to identify any outliers. The PECL interface was forced to a long-period square wave by driving a saturated square wave into the ADC (analog to digital converter). The active pull-down circuit was turned off, causing each line to rise rapidly and fall slowly according to the input's weak pull-down circuitry. The fall time shows up as a change in the pulse width of the signal read by the Xilinx. This change in pulse width is a function of capacitance, pull-down current, and input threshold. Capacitance was known from the different trace lengths, plus a gate input capacitance, which is the same for all inputs. The pull-down current is the same for all inputs, including the two that are probed directly. The data were combined, and the Excel solver tool was used to find input thresholds for the 62 lines. This was repeated over different supply voltages and
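
    The data reduction can be reconstructed as a small least-squares problem. The sketch below assumes the simple discharge model stated above, where the added pulse width is dt_i = C_i(V_high - Vth_i)/I with a pull-down current I shared by all lines; the two probed lines calibrate I, after which each unprobed line's threshold follows directly. All numeric values are synthetic placeholders, not measurements from the flight board.

```python
# Hedged reconstruction of the data-reduction step (the original used the
# Excel solver). Model: the slow fall adds dt_i = C_i*(V_high - Vth_i)/I to
# the pulse width, with per-line capacitance C_i known, pull-down current I
# shared by all inputs, and Vth_i the unknown thresholds. All numbers are
# synthetic placeholders, not values from the flight board.
import numpy as np

V_high = 2.5                                   # V, assumed driven-high level
C = np.linspace(8e-12, 15e-12, 64)             # F, per-line trace capacitance
rng = np.random.default_rng(4)
Vth_true = rng.uniform(1.1, 1.4, size=64)      # V, synthetic "true" thresholds
I_true = 40e-6                                 # A, weak pull-down current
dt = C * (V_high - Vth_true) / I_true          # measured pulse-width changes

probed = [0, 1]                                # the two absolutely probed lines
# Calibrate the shared current I from the two probed (known) thresholds:
I_est = np.mean(C[probed] * (V_high - Vth_true[probed]) / dt[probed])

Vth_est = V_high - dt * I_est / C              # invert the model per line
print(np.max(np.abs(Vth_est - Vth_true)))      # ~0 for noise-free data
```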

  19. The functional upregulation of piriform cortex is associated with cross-modal plasticity in loss of whisker tactile inputs.

    Directory of Open Access Journals (Sweden)

    Bing Ye

    Full Text Available Cross-modal plasticity is characterized as the hypersensitivity of the remaining modalities after a sensory function is lost in rodents, which ensures their awareness to environmental changes. The cellular and molecular mechanisms underlying cross-modal sensory plasticity remain unclear. We aim to study the role of different types of neurons in cross-modal plasticity. In addition to behavioral tasks in mice, whole-cell recordings from excitatory and inhibitory neurons, together with their two-photon imaging, were conducted in the piriform cortex. We produced a mouse model of cross-modal sensory plasticity in which olfactory function was upregulated by trimming whiskers to deprive their sensory inputs. Alongside the olfactory hypersensitivity, pyramidal neurons and excitatory synapses were functionally upregulated, and GABAergic cells and inhibitory synapses were downregulated, in the piriform cortex of the mice with cross-modal sensory plasticity, compared with controls. A crosswire connection between the barrel cortex and the piriform cortex was established in cross-modal plasticity. An upregulation of pyramidal neurons and a downregulation of GABAergic neurons strengthen the activities of neuronal networks in the piriform cortex, which may be responsible for olfactory hypersensitivity after a loss of whisker tactile input. This finding provides clues for developing therapeutic strategies to promote sensory recovery and substitution.

  20. Trigeminal, Visceral and Vestibular Inputs May Improve Cognitive Functions by Acting through the Locus Coeruleus and the Ascending Reticular Activating System: A New Hypothesis

    Directory of Open Access Journals (Sweden)

    Vincenzo De Cicco

    2018-01-01

    Full Text Available It is known that sensory signals sustain the background discharge of the ascending reticular activating system (ARAS), which includes the noradrenergic locus coeruleus (LC) neurons and controls the level of attention and alertness. Moreover, LC neurons influence brain metabolic activity, gene expression and brain inflammatory processes. As a consequence of the sensory control of ARAS/LC, stimulation of a sensory channel may potentially influence neuronal activity and trophic state all over the brain, supporting cognitive functions and exerting a neuroprotective action. On the other hand, an imbalance of the same input on the two sides may lead to asymmetric hemispheric excitability and thus to an impairment in cognitive functions. Among the inputs that may drive LC neurons and the ARAS, those arising from the trigeminal region, from visceral organs and, possibly, from the vestibular system seem to be particularly relevant in regulating their activity. The trigeminal, visceral and vestibular control of ARAS/LC activity may explain why these input signals: (1) affect sensorimotor and cognitive functions which are not directly related to their specific informational content; and (2) are effective in relieving the symptoms of some brain pathologies, thus prompting peripheral activation of these input systems as a complementary approach for the treatment of cognitive impairments and neurodegenerative disorders.

  1. ESTIMATION OF COBB-DOUGLAS AND TRANSLOG PRODUCTION FUNCTIONS WITH CAPITAL AND GENDER DISAGGREGATED LABOR INPUTS IN THE USA

    Directory of Open Access Journals (Sweden)

    Gertrude Sebunya Muwanga

    2018-01-01

    Full Text Available This is an empirical investigation of the homogeneity of gender-disaggregated labor using Cobb-Douglas and single/multi-factor translog production functions, and labor productivity functions, for the USA. The results based on the single-factor translog model indicated that: an increase in the capital/female labor ratio increases aggregate output; male labor is more productive than female labor, which is more productive than capital; a simultaneous increase in the quantity allocated and in productivity leads to an increase in output; female labor productivity has grown more slowly than male labor productivity; it is much easier to substitute male labor for capital compared to female labor; and the three inputs are neither perfect substitutes nor perfect complements. As a consequence, male and female labor are not homogeneous inputs. Efforts to investigate the factors influencing gender-disaggregated labor productivity, and to design policies to achieve gender parity in numbers/productivity in the labor force and increase the ease of substitutability between male labor and female labor, are required.

  2. Separation of input function for rapid measurement of quantitative CMRO2 and CBF in a single PET scan with a dual tracer administration method

    International Nuclear Information System (INIS)

    Kudomi, Nobuyuki; Watabe, Hiroshi; Hayashi, Takuya; Iida, Hidehiro

    2007-01-01

    Cerebral metabolic rate of oxygen (CMRO2), oxygen extraction fraction (OEF) and cerebral blood flow (CBF) images can be quantified using positron emission tomography (PET) by administering 15O-labelled water (H2 15O) and oxygen (15O2). Conventionally, these images are measured with separate scans for the three tracers, C15O for CBV, H2 15O for CBF and 15O2 for CMRO2, with additional waiting times between the scans in order to minimize the influence of radioactivity from the previous tracer, which results in a relatively long study period. We have proposed a dual tracer autoradiographic (DARG) approach (Kudomi et al 2005), which enabled us to measure CBF, OEF and CMRO2 rapidly by sequentially administering H2 15O and 15O2 within a short time. Because quantitative CBF and CMRO2 values are sensitive to the arterial input function, it is necessary to obtain an accurate input function, and a drawback of this approach is that it requires separation of the measured arterial blood time-activity curve (TAC) into pure water and oxygen input functions in the presence of residual radioactivity from the first injected tracer. For this separation, frequent manual sampling was required. The present paper describes two calculation methods, namely a linear and a model-based method, to separate the measured arterial TAC into its water and oxygen components. In order to validate these methods, we first generated a blood TAC for the DARG approach by combining the water and oxygen input functions obtained in a series of PET studies on normal human subjects. The combined data were then separated into water and oxygen components by the present methods. Errors in the quantitative CBF and CMRO2 values obtained by the DARG approach remained within the acceptable range, i.e., within 5%, when the area under the curve in the input function of the second tracer
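
    The linear separation step can be illustrated with a toy version. The sketch below is not the paper's algorithm: it fits a mono-exponential to the tail of the whole-blood curve just before the second injection, extrapolates it as the residual component of the first tracer, and subtracts it to recover the second tracer's input function; all curves are synthetic.

```python
# Toy version of the linear separation idea (not the paper's algorithm):
# fit a mono-exponential to the whole-blood curve just before the second
# injection, extrapolate it as the residual of the first tracer, and
# subtract to recover the second tracer's input function.
import numpy as np

def separate(t, blood, t_second, fit_window=30.0):
    """t [s]; blood: measured whole-blood TAC; t_second: 2nd injection time."""
    sel = (t > t_second - fit_window) & (t <= t_second)  # tail of tracer 1
    slope, intercept = np.polyfit(t[sel], np.log(blood[sel]), 1)
    residual = np.exp(intercept + slope * t)             # extrapolated tracer 1
    second = np.clip(blood - residual, 0.0, None) * (t >= t_second)
    first = blood - second
    return first, second

t = np.arange(1.0, 361.0, 1.0)
water = 50.0 * t * np.exp(-t / 40.0)                     # synthetic 1st input
oxygen = np.where(t > 180.0,
                  40.0 * (t - 180.0) * np.exp(-(t - 180.0) / 40.0), 0.0)
first, second = separate(t, water + oxygen, t_second=180.0)
print(np.abs(second - oxygen).max())                     # small for clean data
```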

  3. Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B

    2004-03-01

    The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is usable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) is outlined. (author)

  4. Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem

    International Nuclear Information System (INIS)

    Reer, B.

    2004-01-01

    The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is usable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) is outlined. (author)

  5. Flexible and re-configurable optical three-input XOR logic gate of phase-modulated signals with multicast functionality for potential application in optical physical-layer network coding.

    Science.gov (United States)

    Lu, Guo-Wei; Qin, Jun; Wang, Hongxiang; Ji, XuYuefeng; Sharif, Gazi Mohammad; Yamaguchi, Shigeru

    2016-02-08

    Optical logic gates, especially the exclusive-or (XOR) gate, play an important role in photonic computing and various network functionalities in future optical networks. On the other hand, optical multicast is another indispensable functionality for efficiently delivering information in optical networks. In this paper, for the first time, we propose and experimentally demonstrate a flexible optical three-input XOR gate scheme for multiple input phase-modulated signals with a 1-to-2 multicast functionality for each XOR operation, using the four-wave mixing (FWM) effect in a single piece of highly nonlinear fiber (HNLF). Through FWM in the HNLF, all of the possible XOR operations among the input signals can be realized simultaneously by sharing a single piece of HNLF. By selecting the obtained XOR components with a subsequent wavelength selective component, the number of XOR gates and the light participating in the XOR operations can be flexibly configured. The re-configurability of the proposed XOR gate and the integration of the optical logic gate and multicast functions in a single device offer flexibility in network design and improve network efficiency. We experimentally demonstrate a flexible 3-input XOR gate for four 10-Gbaud binary phase-shift keying signals with a multicast scale of 2. Error-free operation for the obtained XOR results is achieved. A potential application of the integrated XOR and multicast function in network coding is also discussed.

  6. Self-Structured Organizing Single-Input CMAC Control for Robot Manipulator

    Directory of Open Access Journals (Sweden)

    ThanhQuyen Ngo

    2011-09-01

    Full Text Available This paper presents a self-structured organizing single-input control system based on a differentiable cerebellar model articulation controller (CMAC) for an n-link robot manipulator, to achieve high-precision position tracking. In the proposed scheme, the single-input CMAC controller is solely used to control the plant, so the input space dimension of the CMAC can be simplified and no conventional controller is needed. The structure of the single-input CMAC is also self-organized; that is, the layers of the single-input CMAC grow or prune systematically and their receptive functions can be automatically adjusted. The online tuning laws of the single-input CMAC parameters are derived using the gradient-descent learning method, and a discrete-type Lyapunov function is applied to determine the learning rates of the proposed control system so that the stability of the system can be guaranteed. Simulation results for a robot manipulator are provided to verify the effectiveness of the proposed control methodology.

  7. Six axis force feedback input device

    Science.gov (United States)

    Ohm, Timothy (Inventor)

    1998-01-01

    The present invention is a low friction, low inertia, six-axis force feedback input device comprising an arm with double-jointed, tendon-driven revolute joints, a decoupled tendon-driven wrist, and a base with encoders and motors. The input device functions as a master robot manipulator of a microsurgical teleoperated robot system including a slave robot manipulator coupled to an amplifier chassis, which is coupled to a control chassis, which is coupled to a workstation with a graphical user interface. The amplifier chassis is coupled to the motors of the master robot manipulator and the control chassis is coupled to the encoders of the master robot manipulator. A force feedback can be applied to the input device and can be generated from the slave robot to enable a user to operate the slave robot via the input device without physically viewing the slave robot. Also, the force feedback can be generated from the workstation to represent fictitious forces to constrain the input device's control of the slave robot to be within imaginary predetermined boundaries.

  8. GARFEM input deck description

    Energy Technology Data Exchange (ETDEWEB)

    Zdunek, A.; Soederberg, M. (Aeronautical Research Inst. of Sweden, Bromma (Sweden))

    1989-01-01

    The input card deck for the finite element program GARFEM version 3.2 is described in this manual. The program includes, but is not limited to, capabilities to handle the following problems: * linear bar and beam element structures, * geometrically non-linear problems (bar and beam), both static and transient dynamic analysis, * transient response dynamics from a catalog of time-varying external forcing function types or input function tables, * eigenvalue solution (modes and frequencies), * multi-point constraints (MPC) for the modelling of mechanisms and, e.g., rigid links (the MPC definition is used only in the geometrically linearized sense), * beams with disjunct shear axis and neutral axis, * beams with rigid offset. An interface exists that connects GARFEM with the program GAROS, a program for aeroelastic analysis of rotating structures. Since this interface was developed, GARFEM serves as a preprocessor in place of NASTRAN, which was formerly used. Documentation of the methods applied in GARFEM exists but is so far limited to the capabilities in existence before the GAROS interface was developed.

  9. Activity and function recognition for moving and static objects in urban environments from wide-area persistent surveillance inputs

    Science.gov (United States)

    Levchuk, Georgiy; Bobick, Aaron; Jones, Eric

    2010-04-01

    In this paper, we describe results from experimental analysis of a model designed to recognize activities and functions of moving and static objects from low-resolution wide-area video inputs. Our model is based on representing the activities and functions using three variables: (i) time; (ii) space; and (iii) structures. Activity and function recognition is achieved by imposing lexical, syntactic, and semantic constraints on the lower-level event sequences. In the reported research, we have evaluated the utility and sensitivity of several algorithms derived from the natural language processing and pattern recognition domains. We achieved high recognition accuracy for a wide range of activity and function types in experiments using Electro-Optical (EO) imagery collected by a Wide Area Airborne Surveillance (WAAS) platform.

  10. Superlattice band structure: New and simple energy quantification condition

    Energy Technology Data Exchange (ETDEWEB)

    Maiz, F., E-mail: fethimaiz@gmail.com [University of Cartage, Nabeul Engineering Preparatory Institute, Merazka, 8000 Nabeul (Tunisia); King Khalid University, Faculty of Science, Physics Department, P.O. Box 9004, Abha 61413 (Saudi Arabia)

    2014-10-01

    Assuming an approximate effective mass and using Bastard's boundary conditions, a simple method is used to calculate the subband structure of periodic semiconducting heterostructures. Our method consists of deriving and solving the energy quantification condition (EQC), a simple real equation composed of trigonometric and hyperbolic functions that requires neither programming effort nor a sophisticated machine to solve. For heterostructures with fewer than ten wells, we have derived and simplified the energy quantification conditions. The subband is built point by point; each point represents an energy level. Our simple energy quantification condition is used to calculate the subband structure of GaAs/Ga{sub 0.5}Al{sub 0.5}As heterostructures and to build their subbands point by point for 4 and 20 wells. Our findings show good agreement with previously published results.
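
    Mechanically, using such a condition amounts to scanning the energy axis for sign changes of a real function and refining each root, which is how a subband is "built point by point". The sketch below uses a hypothetical Kronig-Penney-style condition as a stand-in for the paper's EQC; the function and all constants are placeholders, not the published equation.

```python
import numpy as np
from scipy.optimize import brentq

# Placeholder energy quantification condition f(E) = 0 of the same
# trigonometric/hyperbolic flavor as in the paper; V0 (barrier height, eV),
# a, b (well/barrier widths) and the prefactor c are made-up constants.
V0, a, b, c = 0.3, 1.0, 0.5, 5.0

def eqc(E):
    k = c * np.sqrt(E)               # wave number in the well
    kappa = c * np.sqrt(V0 - E)      # decay constant in the barrier
    return (np.cos(k * a) * np.cosh(kappa * b)
            + ((kappa**2 - k**2) / (2 * k * kappa))
              * np.sin(k * a) * np.sinh(kappa * b))

# Build the levels point by point: scan E, bracket sign changes, refine.
E = np.linspace(1e-6, V0 - 1e-6, 2000)
f = eqc(E)
levels = [brentq(eqc, E[i], E[i + 1])
          for i in range(len(E) - 1) if f[i] * f[i + 1] < 0]
print([f"{lvl:.4f} eV" for lvl in levels])
```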

  11. Influential input parameters for reflood model of MARS code

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Deog Yeon; Bang, Young Seok [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2012-10-15

    Best Estimate (BE) calculation has been used more and more broadly in the nuclear industry and in regulation to reduce the significant conservatism in evaluating Loss of Coolant Accidents (LOCA). The reflood model has been identified as one of the problem areas in BE calculation. The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) program of the OECD/NEA is to make progress on the quantification of the uncertainty of the physical models in system thermal-hydraulic codes, by considering experimental results, especially for reflood. It is important to establish a methodology to identify and select the parameters influential on the response of reflood phenomena following a Large Break LOCA. To this end, a reference calculation and a sensitivity analysis to select the dominant influential parameters for the FEBA experiment are performed.

  12. Input Shaping to Reduce Solar Array Structural Vibrations

    Science.gov (United States)

    Doherty, Michael J.; Tolson, Robert J.

    1998-01-01

    Structural vibrations induced by actuators can be minimized using input shaping. Input shaping is a feedforward method in which actuator commands are convolved with shaping functions to yield a shaped set of commands. These commands are designed to perform the maneuver while minimizing the residual structural vibration. In this report, input shaping is extended to stepper motor actuators. As a demonstration, an input-shaping technique based on pole-zero cancellation was used to modify the Solar Array Drive Assembly (SADA) actuator commands for the Lewis satellite. A series of impulses were calculated as the ideal SADA output for vibration control. These impulses were then discretized for use by the SADA stepper motor actuator and simulated actuator outputs were used to calculate the structural response. The effectiveness of input shaping is limited by the accuracy of the knowledge of the modal frequencies. Assuming perfect knowledge resulted in significant vibration reduction. Errors of 10% in the modal frequencies caused notably higher levels of vibration. Controller robustness was improved by incorporating additional zeros in the shaping function. The additional zeros did not require increased performance from the actuator. Despite the identification errors, the resulting feedforward controller reduced residual vibrations to the level of the exactly modeled input shaper and well below the baseline cases. These results could be easily applied to many other vibration-sensitive applications involving stepper motor actuators.
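
    For a single flexible mode, the standard two-impulse zero-vibration (ZV) shaper illustrates the idea: its zeros sit on the mode's poles, so convolving any command with it cancels residual vibration at that frequency. The report's pole-zero-cancellation shaper and stepper-motor discretization are more elaborate; the sketch below, with made-up mode parameters, shows only the basic construction.

```python
import numpy as np

def zv_shaper(wn, zeta, dt):
    """Two-impulse zero-vibration shaper for a mode (wn [rad/s], zeta)."""
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta ** 2))
    Td = 2.0 * np.pi / (wn * np.sqrt(1.0 - zeta ** 2))   # damped period
    amps = np.array([1.0, K]) / (1.0 + K)                # impulses sum to 1
    times = np.array([0.0, Td / 2.0])
    shaper = np.zeros(int(round(times[-1] / dt)) + 1)
    for A, t in zip(amps, times):
        shaper[int(round(t / dt))] += A
    return shaper

dt = 0.01                                   # s
command = np.ones(500)                      # raw step command
shaped = np.convolve(command, zv_shaper(2 * np.pi * 0.5, 0.02, dt))
# 'shaped' reaches the same setpoint in two stages; residual vibration
# of the modeled mode is cancelled (to the accuracy of wn and zeta).
```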

  13. An EPGPT-based approach for uncertainty quantification

    International Nuclear Information System (INIS)

    Wang, C.; Abdel-Khalik, H. S.

    2012-01-01

    Generalized Perturbation Theory (GPT) has been widely used by many scientific disciplines to perform sensitivity analysis and uncertainty quantification. This manuscript employs recent developments in GPT theory, collectively referred to as Exact-to-Precision Generalized Perturbation Theory (EPGPT), to enable uncertainty quantification for computationally challenging models, e.g. nonlinear models associated with many input parameters and many output responses and with general non-Gaussian parameter distributions. The core difference between EPGPT and existing GPT is in the way the problem is formulated. GPT formulates an adjoint problem that is dependent on the response of interest and tries to capture, via the adjoint solution, the relationship between the response of interest and the constraints on the state variations. EPGPT recasts the problem in terms of a smaller set of what are referred to as the 'active' responses, which are solely dependent on the physics model and the boundary and initial conditions rather than on the responses of interest. The objective of this work is to apply the EPGPT methodology to propagate cross-section variations in typical reactor design calculations. The goal is to illustrate its use and the associated impact for situations where the typical Gaussian assumption for parameter uncertainties is not valid and when nonlinear behavior must be considered. To allow this demonstration, exaggerated variations are employed to stimulate nonlinear behavior in simple prototypical neutronics models. (authors)

  14. Parallel, but Dissociable, Processing in Discrete Corticostriatal Inputs Encodes Skill Learning.

    Science.gov (United States)

    Kupferschmidt, David A; Juczewski, Konrad; Cui, Guohong; Johnson, Kari A; Lovinger, David M

    2017-10-11

    Changes in cortical and striatal function underlie the transition from novel actions to refined motor skills. How discrete, anatomically defined corticostriatal projections function in vivo to encode skill learning remains unclear. Using novel fiber photometry approaches to assess real-time activity of associative inputs from medial prefrontal cortex to dorsomedial striatum and sensorimotor inputs from motor cortex to dorsolateral striatum, we show that associative and sensorimotor inputs co-engage early in action learning and disengage in a dissociable manner as actions are refined. Disengagement of associative, but not sensorimotor, inputs predicts individual differences in subsequent skill learning. Divergent somatic and presynaptic engagement in both projections during early action learning suggests potential learning-related in vivo modulation of presynaptic corticostriatal function. These findings reveal parallel processing within associative and sensorimotor circuits that challenges and refines existing views of corticostriatal function and expose neuronal projection- and compartment-specific activity dynamics that encode and predict action learning. Published by Elsevier Inc.

  15. A time-resolved model of the mesospheric Na layer: constraints on the meteor input function

    Directory of Open Access Journals (Sweden)

    J. M. C. Plane

    2004-01-01

    Full Text Available A time-resolved model of the Na layer in the mesosphere/lower thermosphere region is described, in which the continuity equations for the major sodium species Na, Na+ and NaHCO3 are solved explicitly, and the other short-lived species are treated in steady state. It is shown that the diurnal variation of the Na layer can only be modelled satisfactorily if sodium species are permanently removed below about 85 km, both through the dimerization of NaHCO3 and through the uptake of sodium species on meteoric smoke particles that are assumed to have formed from the recondensation of vaporized meteoroids. When the sensitivity of the Na layer to the meteoroid input function is considered, an inconsistent picture emerges. The ratio of the column abundance of Na+ to Na is shown to increase strongly with the average meteoroid velocity, because the Na is injected at higher altitudes. Comparison with a limited set of Na+ measurements indicates that the average meteoroid velocity is probably less than about 25 km s⁻¹, in agreement with velocity estimates from conventional meteor radars, and considerably slower than recent observations made by wide-aperture incoherent scatter radars. The Na column abundance is shown to be very sensitive to the meteoroid mass input rate and to the rate of vertical transport by eddy diffusion. Although the magnitude of the eddy diffusion coefficient in the 80–90 km region is uncertain, there is a consensus among recent models using parameterisations of gravity wave momentum deposition that the average value is less than 3×10⁵ cm² s⁻¹. This requires that the global meteoric mass input rate be less than about 20 t d⁻¹, which is closest to estimates from incoherent scatter radar observations. Finally, the diurnal variation in the meteoroid input rate only slightly perturbs the Na layer, because the residence time of Na in the layer is several days, so diurnal effects are effectively averaged out.

  16. Visualization and quantification of large bowel motility with functional cine-MRI

    International Nuclear Information System (INIS)

    Buhmann, S.; Wielage, C.; Fischer, T.; Reiser, M.; Lienemann, A.; Kirchhoff, C.; Mussack, T.

    2005-01-01

    Purpose: to develop and evaluate a method to visualize and quantify large bowel motility using functional cine MRI. Methods: fifteen healthy individuals (8 males, 7 females, 20 to 45 years old) with no history or present symptoms of bowel disorders underwent a functional cine MRI examination at 6 a.m., after a fasting period of at least eight hours, before and after oral administration of Senna tea (a mild stimulating purgative). Two consecutive sets of repeated measurements of the entire abdomen were performed using a 1.5-T MRI system with coronal T2-weighted HASTE sequences anatomically adjusted to the course of the large bowel. A navigator technique was used for respiratory gating at the level of the right dorsal diaphragm. The changes in diameter (given in cm) were measured at 5 different locations of the ascending (AC), transverse (TC) and descending colon (DC) and assessed as parameters of bowel motility. Results: the mean values, as a statistical measure of large bowel relaxation, were determined. Before ingestion of Senna tea, the mean diameter measured 3.41 cm (ascending colon), 3 cm (transverse colon) and 2.67 cm (descending colon). After the ingestion of Senna tea, the mean diameter increased to 3.69 cm (ascending colon), 3.4 cm (transverse colon) and 2.9 cm (descending colon). A statistically significant difference was demonstrated with the Wilcoxon test (level of confidence 0.05). To determine the dynamic increase, the changes of the statistical scatter amplitude relative to the mean value were expressed as percentages before and after the ingestion of Senna tea. Thereby, an increase in variation and dynamic range was detected for the AC (112.9%) and DC (100%), but a decrease in the dynamics for the TC (69%). Conclusion: a non-invasive method for the assessment of bowel motility was developed for the first time. The use of functional cine MRI with a prokinetic stimulus allowed visualisation and quantification of large bowel motility

  17. Image-derived and arterial blood sampled input functions for quantitative PET imaging of the angiotensin II subtype 1 receptor in the kidney

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Tao; Tsui, Benjamin M. W.; Li, Xin; Vranesic, Melin; Lodge, Martin A.; Gulaldi, Nedim C. M.; Szabo, Zsolt, E-mail: zszabo@jhmi.edu [Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins School of Medicine, Baltimore, Maryland 21287 (United States)

    2015-11-15

    Purpose: The radioligand {sup 11}C-KR31173 has been introduced for positron emission tomography (PET) imaging of the angiotensin II subtype 1 receptor in the kidney in vivo. To study the biokinetics of {sup 11}C-KR31173 with a compartmental model, the input function is needed. Collection and analysis of arterial blood samples are the established approach to obtaining the input function, but they are not feasible in patients with renal diseases. The goal of this study was to develop a quantitative technique that can provide an accurate image-derived input function (ID-IF) to replace conventional invasive arterial sampling, and to test the method in pigs with the goal of translation into human studies. Methods: The experimental animals were injected with [{sup 11}C]KR31173 and scanned for up to 90 min with dynamic PET. Arterial blood samples were collected for the artery-derived input function (AD-IF), which was used as a gold standard for the ID-IF. Before PET, magnetic resonance angiography of the kidneys was obtained to provide the anatomical information required for derivation of the recovery coefficients in the abdominal aorta, a requirement for partial volume correction of the ID-IF. Different image reconstruction methods, filtered back projection (FBP) and ordered subset expectation maximization (OS-EM), were investigated for the best trade-off between bias and variance of the ID-IF. The effects of kidney uptake on the quantitative accuracy of the ID-IF were also studied. Biological variables such as red blood cell binding and radioligand metabolism were also taken into consideration. A single blood sample was used for calibration in the later phase of the input function. Results: In the first 2 min after injection, the OS-EM based ID-IF was found to be biased, and the bias was found to be induced by the kidney uptake. No such bias was found with the FBP based image reconstruction method. However, the OS-EM based image reconstruction was found to reduce variance in the subsequent
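
    The two corrections described, recovery-coefficient (partial volume) correction of the aortic ROI curve and single-sample calibration of the tail, reduce to a few lines. The sketch below is schematic, with invented numbers; in the study the recovery coefficients were derived from the MR-measured aorta geometry of each animal.

```python
import numpy as np

# Illustrative numbers only: an aortic-ROI curve, a recovery coefficient
# (RC) for the aorta diameter, and one late blood sample for calibration.
t = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 30.0, 60.0, 90.0])          # min
idif_raw = np.array([55.0, 40.0, 22.0, 12.0, 8.0, 4.0, 2.5, 2.0])   # kBq/mL
RC = 0.72                          # hypothetical recovery coefficient

idif = idif_raw / RC               # undo partial-volume signal loss

sample_t, sample_c = 60.0, 3.1     # single calibration blood sample
scale = sample_c / np.interp(sample_t, t, idif)
idif_cal = idif * scale            # tail now matches the measured sample
print(idif_cal)
```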

  18. Latitudinal and seasonal variability of the micrometeor input function: A study using model predictions and observations from Arecibo and PFISR

    Science.gov (United States)

    Fentzke, J. T.; Janches, D.; Sparks, J. J.

    2009-05-01

    In this work, we use a semi-empirical model of the micrometeor input function (MIF) together with meteor head-echo observations obtained with two high-power and large-aperture (HPLA) radars, the 430 MHz Arecibo Observatory (AO) radar in Puerto Rico (18°N, 67°W) and the 450 MHz Poker Flat Incoherent Scatter Radar (PFISR) in Alaska (65°N, 147°W), to study the seasonal and geographical dependence of the meteoric flux in the upper atmosphere. The model, recently developed by Janches et al. [2006a. Modeling the global micrometeor input function in the upper atmosphere observed by high power and large aperture radars. Journal of Geophysical Research 111] and Fentzke and Janches [2008. A semi-empirical model of the contribution from sporadic meteoroid sources on the meteor input function observed at Arecibo. Journal of Geophysical Research (Space Physics) 113 (A03304)], includes an initial mass flux that is provided by the six known meteor sources (i.e. orbital families of dust) as well as detailed modeling of meteoroid atmospheric entry and ablation physics. In addition, we use a simple ionization model to treat radar sensitivity issues by defining the minimum electron volume density production thresholds required in the meteor head-echo plasma for detection. This simplified approach works well because we use observations from two radars with similar frequencies but different sensitivities and locations. This methodology allows us to explore the initial input of particles and how it manifests in different parts of the MLT as observed by these instruments, without the need to invoke more sophisticated plasma models, which are under current development. The comparisons between model predictions and radar observations show excellent agreement between the diurnal, seasonal, and latitudinal variability of the detected meteor rate and radial velocity distributions, allowing us to understand how individual meteoroid populations contribute to the overall flux at a particular

  19. Advancement in PET quantification using 3D-OP-OSEM point spread function reconstruction with the HRRT

    Energy Technology Data Exchange (ETDEWEB)

    Varrone, Andrea; Sjoeholm, Nils; Gulyas, Balazs; Halldin, Christer; Farde, Lars [Karolinska Hospital, Karolinska Institutet, Department of Clinical Neuroscience, Psychiatry Section and Stockholm Brain Institute, Stockholm (Sweden); Eriksson, Lars [Karolinska Hospital, Karolinska Institutet, Department of Clinical Neuroscience, Psychiatry Section and Stockholm Brain Institute, Stockholm (Sweden); Siemens Molecular Imaging, Knoxville, TN (United States); University of Stockholm, Department of Physics, Stockholm (Sweden)

    2009-10-15

    Image reconstruction including the modelling of the point spread function (PSF) is an approach improving the resolution of the PET images. This study assessed the quantitative improvements provided by the implementation of the PSF modelling in the reconstruction of the PET data using the High Resolution Research Tomograph (HRRT). Measurements were performed on the NEMA-IEC/2001 (Image Quality) phantom for image quality and on an anthropomorphic brain phantom (STEPBRAIN). PSF reconstruction was also applied to PET measurements in two cynomolgus monkeys examined with [{sup 18}F]FE-PE2I (dopamine transporter) and with [{sup 11}C]MNPA (D{sub 2} receptor), and in one human subject examined with [{sup 11}C]raclopride (D{sub 2} receptor). PSF reconstruction increased the recovery coefficient (RC) in the NEMA phantom by 11-40% and the grey to white matter ratio in the STEPBRAIN phantom by 17%. PSF reconstruction increased binding potential (BP{sub ND}) in the striatum and midbrain by 14 and 18% in the [{sup 18}F]FE-PE2I study, and striatal BP{sub ND} by 6 and 10% in the [{sup 11}C]MNPA and [{sup 11}C]raclopride studies. PSF reconstruction improved quantification by increasing the RC and thus reducing the partial volume effect. This method provides improved conditions for PET quantification in clinical studies with the HRRT system, particularly when targeting receptor populations in small brain structures. (orig.)

  20. Direct electrical stimulation as an input gate into brain functional networks: principles, advantages and limitations.

    Science.gov (United States)

    Mandonnet, Emmanuel; Winkler, Peter A; Duffau, Hugues

    2010-02-01

    While the fundamental and clinical contribution of direct electrical stimulation (DES) of the brain is now well acknowledged, its advantages and limitations have not been re-evaluated for a long time. Here, we critically review exactly what DES can tell us about cerebral function. First, we show that DES is highly sensitive for detecting the cortical and axonal eloquent structures. Moreover, DES also provides a unique opportunity to study brain connectivity, since each area responsive to stimulation is in fact an input gate into a large-scale network rather than an isolated discrete functional site. DES, however, also has a limitation: its specificity is suboptimal. Indeed, DES may lead to interpretations that a structure is crucial because of the induction of a transient functional response when stimulated, whereas (1) this effect is caused by the backward spreading of the electro-stimulation along the network to an essential area and/or (2) the stimulated region can be functionally compensated owing to long-term brain plasticity mechanisms. In brief, although DES is still the gold standard for brain mapping, its combination with new methods such as perioperative neurofunctional imaging and biomathematical modeling is now mandatory, in order to clearly differentiate those networks that are actually indispensable to function from those that can be compensated.

  1. A statistical methodology for quantification of uncertainty in best estimate code physical models

    International Nuclear Information System (INIS)

    Vinai, Paolo; Macian-Juan, Rafael; Chawla, Rakesh

    2007-01-01

    A novel uncertainty assessment methodology, based on a statistical non-parametric approach, is presented in this paper. It achieves quantification of code physical model uncertainty by making use of model performance information obtained from studies of appropriate separate-effect tests. Uncertainties are quantified in the form of estimated probability density functions (pdfs), calculated with a newly developed non-parametric estimator. The new estimator objectively predicts the probability distribution of the model's 'error' (its uncertainty) from databases reflecting the model's accuracy on the basis of available experiments. The methodology is completed by applying a novel multi-dimensional clustering technique based on the comparison of model error samples with the Kruskal-Wallis test. This takes into account the fact that a model's uncertainty depends on system conditions, since a best estimate code can give predictions whose accuracy is affected by the regions of the physical space in which the experiments occur. The final result is an objective, rigorous and accurate manner of assigning uncertainty to code models, i.e. the input information needed by the code uncertainty propagation methodologies used for assessing the accuracy of best estimate codes in nuclear systems analysis. The new methodology has been applied to the quantification of the uncertainty in the RETRAN-3D void model and then used in the analysis of an independent separate-effect experiment. This has clearly demonstrated the basic feasibility of the approach, as well as its advantages in yielding narrower uncertainty bands in quantifying the code's accuracy for void fraction predictions
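
    The workflow can be sketched with standard tools: estimate the error pdf non-parametrically, then use the Kruskal-Wallis test to decide whether error samples from different system conditions belong to the same population. The paper uses its own non-parametric estimator; scipy's Gaussian KDE stands in for it here, and the error samples are synthetic.

```python
import numpy as np
from scipy.stats import gaussian_kde, kruskal

# Hypothetical model-error samples (code prediction minus experiment)
# gathered from separate-effect tests under three system conditions.
rng = np.random.default_rng(1)
err_low  = rng.normal(0.00, 0.02, 40)   # e.g. low-pressure tests
err_mid  = rng.normal(0.01, 0.03, 40)
err_high = rng.normal(0.05, 0.05, 40)

# Non-parametric pdf of the pooled error (KDE as a stand-in estimator).
pdf = gaussian_kde(np.concatenate([err_low, err_mid, err_high]))
x = np.linspace(-0.2, 0.3, 400)
density = pdf(x)

# Kruskal-Wallis test: do the error samples differ across conditions?
H, p = kruskal(err_low, err_mid, err_high)
if p < 0.05:
    print(f"H={H:.2f}, p={p:.3g}: keep condition-specific error pdfs")
```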

  2. Pseudo-BINPUT, a free format input package for Fortran programmes

    International Nuclear Information System (INIS)

    Gubbins, M.E.

    1977-11-01

    Pseudo-BINPUT is an input package for reading free format data under codeword control in a FORTRAN programme. To a large degree it mimics in function the Winfrith Subroutine Library routine BINPUT. By using calls to the data input package DECIN to mimic the input routine BINPUT, Pseudo-BINPUT combines some of the advantages of both systems. (U.K.)

  3. Quantification of DNA in Neonatal Dried Blood Spots by Adenine Tandem Mass Spectrometry.

    Science.gov (United States)

    Durie, Danielle; Yeh, Ed; McIntosh, Nathan; Fisher, Lawrence; Bulman, Dennis E; Birnboim, H Chaim; Chakraborty, Pranesh; Al-Dirbashi, Osama Y

    2018-01-02

    Newborn screening programs have expanded to include molecular-based assays as first-tier tests, and the success of these assays depends on the quality and yield of DNA extracted from neonatal dried blood spots (DBS). To meet high-throughput and rapid turnaround time requirements, newborn screening laboratories have adopted rapid DNA extraction methods that produce crude extracts. Quantification of DNA in neonatal DBS is not routinely performed due to technical challenges; however, it may enhance the performance of assays that are sensitive to the amount of input DNA. In this study, we developed a novel high-throughput method to quantify total DNA in DBS. It is based on specific acid-catalyzed depurination of DNA followed by mass spectrometric quantification of adenine. The amount of adenine was used to calculate the DNA quantity per 3.2 mm DBS. Reference intervals were established using archived neonatal DBS (n = 501), and a median of 130.6 ng of DNA per DBS was obtained, which is in agreement with literature values. The intra- and interday variations were evaluated, and the limits of detection and quantification were 12.5 and 37.8 nmol/L adenine, respectively. We demonstrated that DNA from neonatal DBS can be successfully quantified in high-throughput settings using instruments currently deployed in NBS laboratories.
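
    The adenine-to-DNA conversion itself is simple stoichiometry. The sketch below shows the arithmetic under loudly stated assumptions: a ~30% adenine content and a ~330 g/mol mean nucleotide mass are generic textbook approximations, not the paper's calibration, and the measured adenine amount is invented.

```python
# Convert measured adenine to a DNA mass per dried blood spot (DBS).
# ASSUMPTIONS: ~30% adenine fraction and 330 g/mol mean nucleotide mass
# are rough approximations, not the calibration used in the paper.
adenine_nmol = 0.12            # hypothetical adenine in one 3.2 mm DBS punch
frac_A = 0.30                  # approximate adenine fraction in human DNA
mean_nt_mass = 330.0           # g/mol per nucleotide (approximate)

nucleotide_nmol = adenine_nmol / frac_A
dna_ng = nucleotide_nmol * mean_nt_mass    # nmol * (g/mol) = ng
print(f"{dna_ng:.0f} ng DNA per DBS")      # order of the reported median
```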

  4. Determination of the arterial input function in mouse-models using clinical MRI

    International Nuclear Information System (INIS)

    Theis, D.; Fachhochschule Giessen-Friedberg; Keil, B.; Heverhagen, J.T.; Klose, K.J.; Behe, M.; Fiebich, M.

    2008-01-01

    Dynamic contrast-enhanced magnetic resonance imaging is a promising method for quantitative analysis of tumor perfusion and is increasingly used in the study of cancer in small animal models. In such studies, the determination of the arterial input function (AIF) of the target tissue can be the first step. Series of short-axis images of the heart were acquired during administration of a bolus of Gd-DTPA using saturation-recovery gradient echo pulse sequences. The AIF was determined from the changes of the signal intensity in the left ventricle. The native T1 relaxation times and AIF were determined for 11 mice. An average value of (1.16 ± 0.09) s for the native T1 relaxation time was measured. However, the AIF showed significant inter-animal variability, as previously observed by other authors. This inter-animal variability shows that a direct measurement of the AIF is advisable to avoid significant errors. The proposed method for determination of the AIF proved to be reliable. (orig.)

  5. Uncertainty quantification metrics for whole product life cycle cost estimates in aerospace innovation

    Science.gov (United States)

    Schwabe, O.; Shehab, E.; Erkoyuncu, J.

    2015-08-01

    The lack of defensible methods for quantifying cost estimate uncertainty over the whole product life cycle of aerospace innovations such as propulsion systems or airframes poses a significant challenge to the creation of accurate and defensible cost estimates. Based on the axiomatic definition of uncertainty as the actual prediction error of the cost estimate, this paper provides a comprehensive overview of metrics used for the uncertainty quantification of cost estimates, based on a literature review, an evaluation of publicly funded projects such as those in the CORDIS or Horizon 2020 programs, and an analysis of established approaches used by organizations such as NASA, the U.S. Department of Defense, the ESA, and various commercial companies. The metrics are categorized based on their foundational character (foundations), their use in practice (state-of-practice), their availability for practice (state-of-art) and those suggested for future exploration (state-of-future). The insights gained were that a variety of uncertainty quantification metrics exist whose suitability depends on the volatility of the available relevant information, as defined by technical and cost readiness level, and on the number of whole product life cycle phases the estimate is intended to be valid for. Information volatility and the number of whole product life cycle phases can hereby be considered as defining multi-dimensional probability fields admitting various uncertainty quantification metric families with identifiable thresholds for transitioning between them. The key research gaps identified were the lack of guidance grounded in theory for the selection of uncertainty quantification metrics and the lack of practical alternatives to metrics based on the Central Limit Theorem. An innovative uncertainty quantification framework consisting of a set-theory based typology, a data library, a classification system, and a corresponding input-output model is put forward to address this research gap as the basis

  6. Quantification of the radio-metabolites of the serotonin-1A receptor radioligand [carbonyl-11C]WAY-100635 in human plasma: An HPLC-assay which enables measurement of two patients in parallel

    International Nuclear Information System (INIS)

    Nics, L.; Hahn, A.; Zeilinger, M.; Vraka, C.; Ungersboeck, J.; Haeusler, D.; Hartmann, S.; Wagner, K-H.; Lanzenberger, R.; Wadsak, W.; Mitterhauser, M.

    2012-01-01

    [Carbonyl-11C]WAY-100635 is a potent and effective antagonist for the 5-HT1A receptor subtype. We aimed to assess the status of [carbonyl-11C]WAY-100635 and its main radio-metabolites, [carbonyl-11C]desmethyl-WAY-100635 and [carbonyl-11C]cyclohexanecarboxylic acid, on the basis of an improved radio-HPLC method. Common methods were characterized by preparative HPLC columns with long runtimes and/or high flow rates. Considering the short half-life of C-11, we developed a more rapid and solvent-saving HPLC assay, allowing fast, efficient and reliable quantification of these major metabolites. - Highlights: ► We developed an HPLC assay which allows the measurement of two patients in parallel. ► It allows fast and efficient quantification of WAY-100635 and its metabolites. ► Better counting statistics with late samples for modeling the input function are achieved. ► The fastest assay so far is about 40% slower than the presented method.

  7. Uncertainties and quantification of common cause failure rates and probabilities for system analyses

    International Nuclear Information System (INIS)

    Vaurio, Jussi K.

    2005-01-01

    Simultaneous failures of multiple components due to common causes at random times are modelled by constant multiple-failure rates. A procedure is described for quantification of common cause failure (CCF) basic event probabilities for system models using plant-specific and multiple-plant failure-event data. Methodology is presented for estimating CCF-rates from event data contaminated with assessment uncertainties. Generalised impact vectors determine the moments for the rates of individual systems or plants. These moments determine the effective numbers of events and observation times to be input to a Bayesian formalism to obtain plant-specific posterior CCF-rates. The rates are used to determine plant-specific common cause event probabilities for the basic events of explicit fault tree models depending on test intervals, test schedules and repair policies. Three methods are presented to determine these probabilities such that the correct time-average system unavailability can be obtained with single fault tree quantification. Recommended numerical values are given and examples illustrate different aspects of the methodology
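
    The final Bayesian step has a compact conjugate form when the CCF rate is given a gamma prior and events are Poisson: the impact-vector moments supply an effective event count and observation time, and the posterior follows in one line. The numbers below are invented stand-ins; the generalised impact vectors described above are where the real work happens.

```python
# Conjugate gamma-Poisson update for a CCF rate (events per hour).
# alpha0/beta0: hypothetical prior (shape, rate-denominator in hours);
# n_eff/T_eff: hypothetical effective event count and observation time
# obtained from the moments of the generalised impact vectors.
alpha0, beta0 = 0.5, 1.0e4
n_eff, T_eff = 1.3, 4.2e5

alpha, beta = alpha0 + n_eff, beta0 + T_eff
print(f"posterior mean CCF rate: {alpha / beta:.2e} /h "
      f"(prior mean was {alpha0 / beta0:.2e} /h)")
```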

  8. Developmental validation of the Quantifiler(®) HP and Trio Kits for human DNA quantification in forensic samples.

    Science.gov (United States)

    Holt, Allison; Wootton, Sharon Chao; Mulero, Julio J; Brzoska, Pius M; Langit, Emanuel; Green, Robert L

    2016-03-01

    The quantification of human genomic DNA is a necessary first step in the DNA casework sample analysis workflow. DNA quantification determines optimal sample input amounts for subsequent STR (short tandem repeat) genotyping procedures, as well as being a useful screening tool to identify samples most likely to provide probative genotypic evidence. To better mesh with the capabilities of newest-generation STR analysis assays, the Quantifiler(®) HP and Quantifiler(®) Trio DNA Quantification Kits were designed for greater detection sensitivity and more robust performance with samples that contain PCR inhibitors or degraded DNA. The new DNA quantification kits use multiplex TaqMan(®) assay-based fluorescent probe technology to simultaneously quantify up to three human genomic targets, allowing samples to be assessed for total human DNA, male contributor (i.e., Y-chromosome) DNA, as well as a determination of DNA degradation state. The Quantifiler HP and Trio Kits use multiple-copy loci to allow for significantly improved sensitivity compared to earlier-generation kits that employ single-copy target loci. The kits' improved performance provides better predictive ability for results with downstream, newest-generation STR assays, and their shortened time-to-result allows more efficient integration into the forensic casework analysis workflow. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  9. Serotonin 1B Receptors Regulate Prefrontal Function by Gating Callosal and Hippocampal Inputs

    DEFF Research Database (Denmark)

    Kjaerby, Celia; Athilingam, Jegath; Robinson, Sarah E

    2016-01-01

    Both medial prefrontal cortex (mPFC) and serotonin play key roles in anxiety; however, specific mechanisms through which serotonin might act on the mPFC to modulate anxiety-related behavior remain unknown. Here, we use a combination of optogenetics and synaptic physiology to show that serotonin acts presynaptically via 5-HT1B receptors to selectively suppress inputs from the contralateral mPFC and ventral hippocampus (vHPC), while sparing those from mediodorsal thalamus. To elucidate how these actions could potentially regulate prefrontal circuit function, we infused a 5-HT1B agonist into the mPFC of freely behaving mice. Consistent with previous studies that have optogenetically inhibited vHPC-mPFC projections, activating prefrontal 5-HT1B receptors suppressed theta-frequency mPFC activity (4-12 Hz), and reduced avoidance of anxiogenic regions in the elevated plus maze. These findings...

  10. Uncertainty Quantification with Applications to Engineering Problems

    DEFF Research Database (Denmark)

    Bigoni, Daniele

    in measurements, predictions and manufacturing, and we can say that any dynamical system used in engineering is subject to some of these uncertainties. The first part of this work presents an overview of the mathematical framework used in Uncertainty Quantification (UQ) analysis and introduces the spectral tensor... and thus the UQ analysis of the associated systems will benefit greatly from the application of methods which require few function evaluations. We first consider the propagation of the uncertainty and the sensitivity analysis of the non-linear dynamics of railway vehicles with suspension components whose...-scale problems, where efficient methods are necessary with today's computational resources. The outcome of this work was also the creation of several freely available Python modules for Uncertainty Quantification, which are listed and described in the appendix.

  11. Calibrated image-derived input functions for the determination of the metabolic uptake rate of glucose with [18F]-FDG PET

    DEFF Research Database (Denmark)

    Christensen, Anders Nymark; Reichkendler, Michala H.; Larsen, Rasmus

    2014-01-01

    We investigated the use of a simple calibration method to remove bias in previously proposed approaches to image-derived input functions (IDIFs) when used to calculate the metabolic uptake rate of glucose (Km) from dynamic [18F]-FDG PET scans of the thigh. Our objective was to obtain nonbiased, low...

  12. Stabilization of (state, input)-disturbed CSTRs through the port-Hamiltonian systems approach

    OpenAIRE

    Lu, Yafei; Fang, Zhou; Gao, Chuanhou

    2017-01-01

    It is a universal phenomenon that the state and input of continuous stirred tank reactor (CSTR) systems are both disturbed. This paper proposes a (state, input)-disturbed port-Hamiltonian framework that can be used for modeling, and further designs a stochastic passivity-based controller to asymptotically stabilize, in probability, the (state, input)-disturbed CSTR (sidCSTR) systems. The opposite entropy function and the availability function are selected as the Hamiltonian for the model and con...

  13. Evidence-based quantification of uncertainties induced via simulation-based modeling

    International Nuclear Information System (INIS)

    Riley, Matthew E.

    2015-01-01

    The quantification of uncertainties in simulation-based modeling traditionally focuses upon quantifying uncertainties in the parameters input into the model, referred to as parametric uncertainties. Often neglected in such an approach are the uncertainties induced by the modeling process itself. This deficiency is often due to a lack of information regarding the problem or the models considered, which could theoretically be reduced through the introduction of additional data. Because of the nature of this epistemic uncertainty, traditional probabilistic frameworks utilized for the quantification of uncertainties are not necessarily applicable to quantify the uncertainties induced in the modeling process itself. This work develops and utilizes a methodology – incorporating aspects of Dempster–Shafer Theory and Bayesian model averaging – to quantify uncertainties of all forms for simulation-based modeling problems. The approach expands upon classical parametric uncertainty approaches, allowing for the quantification of modeling-induced uncertainties as well, ultimately providing bounds on classical probability without the loss of epistemic generality. The approach is demonstrated on two different simulation-based modeling problems: the computation of the natural frequency of a simple two degree of freedom non-linear spring mass system and the calculation of the flutter velocity coefficient for the AGARD 445.6 wing given a subset of commercially available modeling choices. - Highlights: • Modeling-induced uncertainties are often mishandled or ignored in the literature. • Modeling-induced uncertainties are epistemic in nature. • Probabilistic representations of modeling-induced uncertainties are restrictive. • Evidence theory and Bayesian model averaging are integrated. • Developed approach is applicable for simulation-based modeling problems

  14. Computer Generated Inputs for NMIS Processor Verification

    International Nuclear Information System (INIS)

    J. A. Mullens; J. E. Breeding; J. A. McEvers; R. W. Wysor; L. G. Chiang; J. R. Lenarduzzi; J. T. Mihalczo; J. K. Mattingly

    2001-01-01

    Proper operation of the Nuclear Materials Identification System (NMIS) processor can be verified using computer-generated inputs [BIST (Built-In Self-Test)] at the digital inputs. Preselected sequences of input pulses to all channels, with known correlation functions, are compared to the output of the processor. These types of verification have been utilized in NMIS-type correlation processors at the Oak Ridge National Laboratory since 1984. The use of this test confirmed a malfunction in an NMIS processor at the All-Russian Scientific Research Institute of Experimental Physics (VNIIEF) in 1998. The NMIS processor boards were returned to the U.S. for repair and subsequently used in NMIS passive and active measurements with Pu at VNIIEF in 1999

  15. Adaptive polynomial chaos techniques for uncertainty quantification of a gas cooled fast reactor transient

    International Nuclear Information System (INIS)

    Perko, Z.; Gilli, L.; Lathouwers, D.; Kloosterman, J. L.

    2013-01-01

    Uncertainty quantification plays an increasingly important role in the nuclear community, especially with the rise of Best Estimate Plus Uncertainty methodologies. Sensitivity analysis, surrogate models, Monte Carlo sampling and several other techniques can be used to propagate input uncertainties. In recent years however polynomial chaos expansion has become a popular alternative providing high accuracy at affordable computational cost. This paper presents such polynomial chaos (PC) methods using adaptive sparse grids and adaptive basis set construction, together with an application to a Gas Cooled Fast Reactor transient. Comparison is made between a new sparse grid algorithm and the traditionally used technique proposed by Gerstner. An adaptive basis construction method is also introduced and is proved to be advantageous both from an accuracy and a computational point of view. As a demonstration the uncertainty quantification of a 50% loss of flow transient in the GFR2400 Gas Cooled Fast Reactor design was performed using the CATHARE code system. The results are compared to direct Monte Carlo sampling and show the superior convergence and high accuracy of the polynomial chaos expansion. Since PC techniques are easy to implement, they can offer an attractive alternative to traditional techniques for the uncertainty quantification of large scale problems. (authors)
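
    The principle is easiest to see in one dimension. The sketch below projects a nonlinear response of a single standard-normal input onto probabilists' Hermite polynomials via Gauss-Hermite quadrature and reads the mean and variance off the coefficients; the paper's adaptive sparse grids and basis construction address the many-input generalization of exactly this projection. The model function is made up.

```python
import numpy as np
from numpy.polynomial import hermite_e as He   # probabilists' Hermite
from math import factorial

f = lambda x: np.exp(0.3 * x) + 0.1 * x**3     # hypothetical model response

order = 8
nodes, weights = He.hermegauss(order + 1)      # quadrature w.r.t. exp(-x^2/2)
weights = weights / np.sqrt(2 * np.pi)         # normalise to the N(0,1) measure

# PC coefficients: c_k = E[f(X) He_k(X)] / k!, using <He_j, He_k> = k! d_jk
coeffs = np.array([
    np.sum(weights * f(nodes) * He.hermeval(nodes, np.eye(k + 1)[k]))
    / factorial(k)
    for k in range(order + 1)
])

mean = coeffs[0]                               # He_0 = 1
var = sum(factorial(k) * coeffs[k]**2 for k in range(1, order + 1))
print(f"PCE mean = {mean:.4f}, std = {np.sqrt(var):.4f}")
```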

  16. Development and evaluation of a sandwich ELISA for quantification of the 20S proteasome in human plasma

    DEFF Research Database (Denmark)

    Dutaud, Dominique; Aubry, Laurent; Henry, Laurent

    2002-01-01

    Because quantification of the 20S proteasome by functional activity measurements is difficult and inaccurate, we have developed an indirect sandwich enzyme-linked immunosorbent assay (ELISA) for quantification of the 20S proteasome in human plasma. This sandwich ELISA uses a combination

  17. Stochastic weather inputs for improved urban water demand forecasting: application of nonlinear input variable selection and machine learning methods

    Science.gov (United States)

    Quilty, J.; Adamowski, J. F.

    2015-12-01

    Urban water supply systems are often stressed during seasonal outdoor water use, as climate-related water demands are variable in nature, making it difficult to optimize the operation of the water supply system. Urban water demand (UWD) forecasts that fail to include meteorological conditions as inputs to the forecast model may perform poorly, as they cannot account for the increase or decrease in demand related to meteorological conditions. Meteorological records stochastically simulated into the future can be used as inputs to data-driven UWD forecasts, generally resulting in improved forecast accuracy. This study aims to produce data-driven UWD forecasts for two different Canadian water utilities (Montreal and Victoria) using machine learning methods, by first selecting historical UWD and meteorological records, derived from a stochastic weather generator, using nonlinear input variable selection. The nonlinear input variable selection methods considered in this work are derived from the concept of conditional mutual information, a nonlinear dependency measure based on (multivariate) probability density functions that accounts for relevancy, conditional relevancy, and redundancy in a potential set of input variables. The results of our study indicate that stochastic weather inputs can improve UWD forecast accuracy for the two sites considered in this work. Nonlinear input variable selection is suggested as a means to identify which meteorological conditions should be utilized in the forecast.
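
    As an illustration of the input-selection step, the sketch below ranks synthetic candidate inputs for a water demand series by (unconditional) mutual information using scikit-learn; the study's method additionally handles conditional relevance and redundancy, which plain MI ranking does not. All data and names are invented.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

# Synthetic daily data: candidate inputs and a demand series that
# depends on temperature and rainfall but not on day of week.
rng = np.random.default_rng(2)
n = 365
temp   = rng.normal(20, 6, n)              # daily max temperature [C]
rain   = rng.exponential(2.0, n)           # daily rainfall [mm]
dow    = np.arange(n) % 7                  # day of week
demand = 50 + 1.5 * temp - 0.8 * rain + rng.normal(0, 3, n)  # UWD [ML/d]

X = np.column_stack([temp, rain, dow])
mi = mutual_info_regression(X, demand, random_state=0)
for name, score in sorted(zip(["temp", "rain", "dow"], mi),
                          key=lambda item: -item[1]):
    print(f"{name:5s}  MI = {score:.3f}")
```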

  18. Dynamic PET of human liver inflammation: impact of kinetic modeling with optimization-derived dual-blood input function.

    Science.gov (United States)

    Wang, Guobao; Corwin, Michael T; Olson, Kristin A; Badawi, Ramsey D; Sarkar, Souvik

    2018-05-30

    The hallmark of nonalcoholic steatohepatitis is hepatocellular inflammation and injury in the setting of hepatic steatosis. Recent work has indicated that dynamic 18F-FDG PET with kinetic modeling has the potential to assess hepatic inflammation noninvasively, while static FDG-PET has not shown promise. Because the liver has dual blood supplies, kinetic modeling of dynamic liver PET data is challenging in human studies. The objective of this study is to evaluate and identify a dual-input kinetic modeling approach for dynamic FDG-PET of human liver inflammation. Fourteen human patients with nonalcoholic fatty liver disease were included in the study. Each patient underwent a one-hour dynamic FDG-PET/CT scan and had a liver biopsy within six weeks. Three models were tested for kinetic analysis: the traditional two-tissue compartmental model with an image-derived single-blood input function (SBIF), a model with a population-based dual-blood input function (DBIF), and a modified model with an optimization-derived DBIF obtained through a joint estimation framework. The three models were compared using the Akaike information criterion (AIC), the F test, and a histopathologic inflammation reference. The results showed that the optimization-derived DBIF model improved the fitting of liver time-activity curves and achieved lower AIC values and higher F values than the SBIF and population-based DBIF models in all patients. The optimization-derived model significantly increased FDG K1 estimates by 101% and 27% as compared with the traditional SBIF and population-based DBIF, respectively. K1 from the optimization-derived model was significantly associated with histopathologic grades of liver inflammation, while the other two models did not reach statistical significance. In conclusion, modeling of the DBIF is critical for kinetic analysis of dynamic liver FDG-PET data in human studies. The optimization-derived DBIF model is more appropriate than the SBIF and population-based DBIF models for dynamic FDG-PET of liver inflammation.
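
    The dual-input construction itself is compact: the portal-vein curve is commonly modeled as a dispersed copy of the arterial curve, and the liver input is their flow-weighted mixture. The sketch below shows this construction with invented curves and parameter names (f_a, k_a); in the optimization-derived approach these parameters are estimated jointly with the FDG rate constants rather than fixed at population values.

```python
import numpy as np

def dual_input(t, C_a, f_a=0.25, k_a=1.0):
    """Dual-blood input function: arterial fraction f_a plus a portal-vein
    input modeled as the arterial curve dispersed by k_a*exp(-k_a*t)."""
    dt = t[1] - t[0]
    kernel = k_a * np.exp(-k_a * t)                   # dispersion kernel
    C_pv = np.convolve(C_a, kernel)[: len(t)] * dt    # portal-vein input
    return f_a * C_a + (1.0 - f_a) * C_pv

t = np.arange(0.0, 60.0, 0.1)                         # minutes
C_a = 30 * t * np.exp(-t / 0.8) + 2 * np.exp(-t / 40)   # toy arterial curve
C_dual = dual_input(t, C_a)
# C_dual would drive the two-tissue compartment model in place of C_a.
```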

  19. Quantification of cellular uptake of DNA nanostructures by qPCR.

    Science.gov (United States)

    Okholm, Anders Hauge; Nielsen, Jesper Sejrup; Vinther, Mathias; Sørensen, Rasmus Schøler; Schaffert, David; Kjems, Jørgen

    2014-05-15

    DNA nanostructures facilitating drug delivery are likely soon to be realized. In the past few decades, programmed self-assembly of DNA building blocks has successfully been employed to construct sophisticated nanoscale objects. By conjugating functionalities to DNA, other molecules such as peptides, proteins and polymers can be precisely positioned on DNA nanostructures. This exceptional ability to produce modular nanoscale devices with tunable and controlled behavior has initiated an interest in employing DNA nanostructures for drug delivery. However, to achieve this, the relationship between cellular interactions and the structural and functional features of the DNA delivery device must be thoroughly investigated. Here, we present a rapid and robust method for the precise quantification of the component materials of DNA origami structures capable of entering cells in vitro. The quantification is performed by quantitative polymerase chain reaction, allowing a linear dynamic range of detection of five orders of magnitude. We demonstrate the use of this method for high-throughput screening, which could prove efficient in identifying key features of DNA nanostructures that enable cell penetration. The method described here is suitable for quantification of in vitro uptake studies but should easily be extended to quantify DNA nanostructures in blood or tissue samples. Copyright © 2014 Elsevier Inc. All rights reserved.
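
    Absolute quantification by qPCR rests on a standard curve in which the cycle threshold (Ct) is linear in the log of input copies; fitting the standards and inverting the line recovers copy numbers for unknowns. A minimal sketch with invented standards:

```python
import numpy as np

# Standard curve: Ct is linear in log10(copies).  Values are made up.
std_copies = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
std_ct     = np.array([28.1, 24.8, 21.4, 18.0, 14.7])

slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0        # ~1.0 means 100% efficient

def copies_from_ct(ct):
    # Invert the standard curve for an unknown sample
    return 10 ** ((ct - intercept) / slope)

print(f"PCR efficiency: {efficiency:.1%}")
print(f"sample at Ct=22.3 -> {copies_from_ct(22.3):.3g} copies")
```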

  20. Reliability Quantification Method for Safety Critical Software Based on a Finite Test Set

    International Nuclear Information System (INIS)

    Shin, Sung Min; Kim, Hee Eun; Kang, Hyun Gook; Lee, Seung Jun

    2014-01-01

    Software inside digitalized systems plays a very important role because its failure may cause irreversible consequences and affect the whole system as a common cause failure. However, test-based reliability quantification methods for some safety-critical software have limitations caused by difficulties in developing input sets in the form of trajectories, i.e. series of successive values of variables. To address these limitations, this study proposed another method which conducts the test using combinations of single values of variables. To substitute combinations of variable values for the trajectory form of input, the possible range of each variable should be identified. For this purpose, the assigned range of each variable, logical relations between variables, plant dynamics under certain situations, and the characteristics of information acquisition by digital devices are considered. The feasibility of the proposed method was confirmed through an application to the Reactor Protection System (RPS) software trip logic
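
    For context, the classical result linking a finite number of failure-free tests to a claimed per-demand failure probability is a one-liner; the paper's contribution lies in constructing the test set from combinations of single variable values rather than full trajectories. A sketch of the standard bound:

```python
# After N independent, failure-free test cases, the per-demand failure
# probability p satisfies (1 - p)**N >= 1 - CL, i.e.
#   p <= 1 - (1 - CL)**(1/N)   at confidence level CL.
def p_upper_bound(n_tests, confidence=0.95):
    return 1.0 - (1.0 - confidence) ** (1.0 / n_tests)

for n in (3_000, 30_000, 300_000):
    print(f"N={n:>7,} failure-free tests -> p <= {p_upper_bound(n):.2e} "
          f"(95% confidence)")
```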

  1. The Influence of Prosodic Input in the Second Language Classroom: Does It Stimulate Child Acquisition of Word Order and Function Words?

    Science.gov (United States)

    Campfield, Dorota E.; Murphy, Victoria A.

    2017-01-01

    This paper reports on an intervention study with young Polish beginners (mean age: 8 years, 3 months) learning English at school. It seeks to identify whether exposure to rhythmic input improves knowledge of word order and function words. The "prosodic bootstrapping hypothesis", relevant in developmental psycholinguistics, provided the…

  2. Evaluation of the use of a standard input function for compartment analysis of [123I]iomazenil data. Factors influencing the quantitative results

    International Nuclear Information System (INIS)

    Seike, Yujiro; Hashikawa, Kazuo; Oku, Naohiko

    2004-01-01

    Adoption of a standard input function (SIF) has been proposed for kinetic analysis of receptor binding potential (BP), instead of invasive frequent arterial sampling. The purpose of this study was to assess the SIF method in the quantitative analysis of [123I]iomazenil (IMZ), a central benzodiazepine antagonist, for SPECT. SPECT studies were performed on 10 patients with cerebrovascular disease or Alzheimer disease. Intermittent dynamic SPECT scans were performed from 0 to 201 min after IMZ injection. BP values calculated from SIFs obtained from normal volunteers (BPS) were compared with those obtained from individual arterial sampling (BPO). Good correlations were shown between BPO and BPS in 9 subjects, but the maximum BPS was four times larger than the corresponding BPO in one case. There were no abnormal laboratory data in this patient, but the relative arterial input count in the late period was higher than that of the SIF. Simulation studies with modified input functions revealed that the height of the input function in the late period can produce significant errors in estimated BPs. These results suggest that the simplified method with one-point arterial sampling and a SIF cannot be applied clinically. One additional arterial sample in the late period may be useful. (author)

  3. Quantification in emission tomography

    International Nuclear Information System (INIS)

    Buvat, Irene

    2011-11-01

    The objective of this lecture is to understand the possibilities and limitations of the quantitative analysis of single photon emission computed tomography (SPECT) and positron emission tomography (PET) images, and to identify the conditions to be fulfilled to obtain reliable quantitative measurements from images. Content: 1 - Introduction: quantification in emission tomography - definition and challenges; quantification-biasing phenomena. 2 - Main problems impacting quantification in PET and SPECT: problems, consequences, correction methods, results (attenuation, scatter, partial volume effect, motion, non-stationary spatial resolution in SPECT, random coincidences in PET, standardisation in PET). 3 - Synthesis: achievable accuracy, know-how, precautions, beyond the activity measurement

  4. Accident sequence quantification with KIRAP

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Un; Han, Sang Hoon; Kim, Kil You; Yang, Jun Eon; Jeong, Won Dae; Chang, Seung Cheol; Sung, Tae Yong; Kang, Dae Il; Park, Jin Hee; Lee, Yoon Hwan; Hwang, Mi Jeong

    1997-01-01

    The tasks of probabilistic safety assessment (PSA) consist of the identification of initiating events, the construction of an event tree for each initiating event, the construction of fault trees for the event tree logic, the analysis of reliability data and, finally, accident sequence quantification. In a PSA, accident sequence quantification calculates the core damage frequency and includes importance analysis and uncertainty analysis. Accident sequence quantification requires an understanding of the whole PSA model, because it must combine all event tree and fault tree models, and it requires an efficient computer code because of the long computation times involved. The Advanced Research Group of the Korea Atomic Energy Research Institute (KAERI) has developed the PSA workstation KIRAP (Korea Integrated Reliability Analysis Code Package) for PSA work. This report describes the procedures for performing accident sequence quantification, the method for using KIRAP's cut set generator, and the method for performing accident sequence quantification with KIRAP. (author). 6 refs.

  6. Estimating the arterial input function from dynamic contrast-enhanced MRI data with compensation for flow enhancement (I): Theory, method, and phantom experiments

    NARCIS (Netherlands)

    van Schie, Jeroen J. N.; Lavini, Cristina; van Vliet, Lucas J.; Vos, Frans M.

    2017-01-01

    The arterial input function (AIF) represents the time-dependent arterial contrast agent (CA) concentration that is used in pharmacokinetic modeling. The purpose of this work was to develop a novel method for estimating the AIF from dynamic contrast-enhanced (DCE-) MRI data while compensating for flow enhancement. Signal

  7. Jointness through vessel capacity input in a multispecies fishery

    DEFF Research Database (Denmark)

    Hansen, Lars Gårn; Jensen, Carsten Lynge

    2014-01-01

    Multispecies fisheries are typically modeled either as independent single-species fisheries or using standard multispecies functional forms characterized by jointness in inputs. We argue that production of each species is essentially independent, but that jointness may be caused by competition for the fixed but allocable input of vessel capacity. We develop a fixed but allocatable input model of purse seine fisheries capturing this particular type of jointness. We estimate the model for the Norwegian purse seine fishery and find that it is characterized by nonjointness, while estimations for this fishery using the standard models imply...

  8. Quantification of thymidine kinase (TK1) mRNA in normal and leukemic cells and investigation of structure-function relationship of recombinant TK1 enzyme

    DEFF Research Database (Denmark)

    Kristensen, Tina

    Thymidine kinase (TK) catalyses the ATP-dependent phosphorylation of thymidine to thymidine monophosphate, which is subsequently phosphorylated to thymidine triphosphate and utilized for DNA synthesis. Human cytosolic TK (TK1) is cell cycle regulated, e.g. the TK1 activity increases sharply at the G...... patients with chronic lymphatic leukemia (CLL). 2: Structure-function relationship of recombinant TK1. In the first part a sensitive method (competitive PCR) for quantification of TK1 mRNA was established. The TK1 mRNA level was quantified in quiescent lymphocytes from control donors (n = 6...... are characterized as being quiescent, the TK activity was in the same range as in quiescent lymphocytes from control donors. However, quantification of the TK1 mRNA level shows that all five CLL patients had a very high level (6 to 22 × 10⁶ copies mg⁻¹ protein) of TK1 mRNA, corresponding to the level in dividing...

  9. Competition and convergence between auditory and cross-modal visual inputs to primary auditory cortical areas

    Science.gov (United States)

    Mao, Yu-Ting; Hua, Tian-Miao

    2011-01-01

    Sensory neocortex is capable of considerable plasticity after sensory deprivation or damage to input pathways, especially early in development. Although plasticity can often be restorative, sometimes novel, ectopic inputs invade the affected cortical area. Invading inputs from other sensory modalities may compromise the original function or even take over, imposing a new function and preventing recovery. Using ferrets whose retinal axons were rerouted into auditory thalamus at birth, we were able to examine the effect of varying the degree of ectopic, cross-modal input on reorganization of developing auditory cortex. In particular, we assayed whether the invading visual inputs and the existing auditory inputs competed for or shared postsynaptic targets and whether the convergence of input modalities would induce multisensory processing. We demonstrate that although the cross-modal inputs create new visual neurons in auditory cortex, some auditory processing remains. The degree of damage to auditory input to the medial geniculate nucleus was directly related to the proportion of visual neurons in auditory cortex, suggesting that the visual and residual auditory inputs compete for cortical territory. Visual neurons were not segregated from auditory neurons but shared target space even on individual target cells, substantially increasing the proportion of multisensory neurons. Thus spatial convergence of visual and auditory input modalities may be sufficient to expand multisensory representations. Together these findings argue that early, patterned visual activity does not drive segregation of visual and auditory afferents and suggest that auditory function might be compromised by converging visual inputs. These results indicate possible ways in which multisensory cortical areas may form during development and evolution. They also suggest that rehabilitative strategies designed to promote recovery of function after sensory deprivation or damage need to take into

  10. Efficient uncertainty quantification in fully-integrated surface and subsurface hydrologic simulations

    Science.gov (United States)

    Miller, K. L.; Berg, S. J.; Davison, J. H.; Sudicky, E. A.; Forsyth, P. A.

    2018-01-01

    Although high performance computers and advanced numerical methods have made the application of fully-integrated surface and subsurface flow and transport models such as HydroGeoSphere commonplace, run times for large complex basin models can still be on the order of days to weeks, thus limiting the usefulness of traditional workhorse algorithms for uncertainty quantification (UQ) such as Latin Hypercube simulation (LHS) or Monte Carlo simulation (MCS), which generally require thousands of simulations to achieve an acceptable level of accuracy. In this paper we investigate non-intrusive polynomial chaos expansion (PCE) for uncertainty quantification, which, in contrast to random sampling methods (e.g., LHS and MCS), represents a model response of interest as a weighted sum of polynomials over the random inputs. Once a chaos expansion has been constructed, approximating the mean, covariance, probability density function, cumulative distribution function, and other common statistics as well as local and global sensitivity measures is straightforward and computationally inexpensive, thus making PCE an attractive UQ method for hydrologic models with long run times. Our polynomial chaos implementation was validated through comparison with analytical solutions as well as solutions obtained via LHS for simple numerical problems. It was then used to quantify parametric uncertainty in a series of numerical problems with increasing complexity, including a two-dimensional fully-saturated, steady flow and transient transport problem with six uncertain parameters and one quantity of interest; a one-dimensional variably-saturated column test involving transient flow and transport, four uncertain parameters, and two quantities of interest at 101 spatial locations and five different times each (1010 total); and a three-dimensional fully-integrated surface and subsurface flow and transport problem for a small test catchment involving seven uncertain parameters and three quantities of interest at
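
    As a concrete illustration of the non-intrusive polynomial chaos idea summarized above, the sketch below fits a one-dimensional Hermite chaos expansion by least squares and reads the mean and variance directly off the coefficients. The model function, sample size, and polynomial degree are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermevander

# Hypothetical cheap stand-in for an expensive simulator response,
# driven by a single standard-normal random input.
def model(x):
    return np.exp(0.3 * x) + 0.1 * x**2

rng = np.random.default_rng(0)
xs = rng.standard_normal(200)        # samples of the random input
ys = model(xs)

deg = 6
V = hermevander(xs, deg)             # probabilists' Hermite basis He_k(x)
coef, *_ = np.linalg.lstsq(V, ys, rcond=None)

# Orthogonality under N(0,1): E[He_j He_k] = k! * delta_jk, so the chaos
# coefficients yield the output statistics without further sampling.
mean = coef[0]
var = sum(coef[k] ** 2 * factorial(k) for k in range(1, deg + 1))
print(f"PCE mean ~ {mean:.4f}, PCE variance ~ {var:.4f}")
```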

  11. Biosensor for label-free DNA quantification based on functionalized LPGs.

    Science.gov (United States)

    Gonçalves, Helena M R; Moreira, Luis; Pereira, Leonor; Jorge, Pedro; Gouveia, Carlos; Martins-Lopes, Paula; Fernandes, José R A

    2016-10-15

    A label-free fiber optic biosensor based on a long period grating (LPG) and a basic optical interrogation scheme using off-the-shelf components is used for the detection of in-situ DNA hybridization. A new methodology is proposed for the determination of the spectral position of the LPG mode resonance. The experimental limit of detection obtained for the DNA was 62 ± 2 nM and the limit of quantification was 209 ± 7 nM. The sample specificity was experimentally demonstrated using DNA targets with different base mismatches relative to the probe, and it was found that the system has single-base-mismatch selectivity. Copyright © 2015 Elsevier B.V. All rights reserved.
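
    For orientation, detection and quantification limits like those quoted above are commonly derived from a calibration curve using the 3.3σ/slope and 10σ/slope conventions; the sketch below applies them to invented calibration data (the concentrations and resonance shifts are not from the paper).

```python
import numpy as np

# Hypothetical calibration: LPG resonance shift (nm) vs. DNA concentration (nM)
conc = np.array([0.0, 50.0, 100.0, 200.0, 400.0])
shift = np.array([0.02, 0.24, 0.47, 0.93, 1.88])

slope, intercept = np.polyfit(conc, shift, 1)
sigma = (shift - (slope * conc + intercept)).std(ddof=2)  # residual std. dev.

print(f"LOD ~ {3.3 * sigma / slope:.0f} nM, LOQ ~ {10 * sigma / slope:.0f} nM")
```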

  12. Quantification of heterogeneity observed in medical images

    OpenAIRE

    Brooks, Frank J; Grigsby, Perry W

    2013-01-01

    Background There has been much recent interest in the quantification of visually evident heterogeneity within functional grayscale medical images, such as those obtained via magnetic resonance or positron emission tomography. In the case of images of cancerous tumors, variations in grayscale intensity imply variations in crucial tumor biology. Despite these considerable clinical implications, there is as yet no standardized method for measuring the heterogeneity observed via these imaging mod...

  13. Improved statistical models for limited datasets in uncertainty quantification using stochastic collocation

    Energy Technology Data Exchange (ETDEWEB)

    Alwan, Aravind; Aluru, N.R.

    2013-12-15

    This paper presents a data-driven framework for performing uncertainty quantification (UQ) by choosing a stochastic model that accurately describes the sources of uncertainty in a system. This model is propagated through an appropriate response surface function that approximates the behavior of this system using stochastic collocation. Given a sample of data describing the uncertainty in the inputs, our goal is to estimate a probability density function (PDF) using the kernel moment matching (KMM) method so that this PDF can be used to accurately reproduce statistics like mean and variance of the response surface function. Instead of constraining the PDF to be optimal for a particular response function, we show that we can use the properties of stochastic collocation to make the estimated PDF optimal for a wide variety of response functions. We contrast this method with other traditional procedures that rely on the Maximum Likelihood approach, like kernel density estimation (KDE) and its adaptive modification (AKDE). We argue that this modified KMM method tries to preserve what is known from the given data and is the better approach when the available data is limited in quantity. We test the performance of these methods for both univariate and multivariate density estimation by sampling random datasets from known PDFs and then measuring the accuracy of the estimated PDFs, using the known PDF as a reference. Comparing the output mean and variance estimated with the empirical moments using the raw data sample as well as the actual moments using the known PDF, we show that the KMM method performs better than KDE and AKDE in predicting these moments with greater accuracy. This improvement in accuracy is also demonstrated for the case of UQ in electrostatic and electrothermomechanical microactuators. We show how our framework results in the accurate computation of statistics in micromechanical systems.

  14. Improved statistical models for limited datasets in uncertainty quantification using stochastic collocation

    International Nuclear Information System (INIS)

    Alwan, Aravind; Aluru, N.R.

    2013-01-01

    This paper presents a data-driven framework for performing uncertainty quantification (UQ) by choosing a stochastic model that accurately describes the sources of uncertainty in a system. This model is propagated through an appropriate response surface function that approximates the behavior of this system using stochastic collocation. Given a sample of data describing the uncertainty in the inputs, our goal is to estimate a probability density function (PDF) using the kernel moment matching (KMM) method so that this PDF can be used to accurately reproduce statistics like mean and variance of the response surface function. Instead of constraining the PDF to be optimal for a particular response function, we show that we can use the properties of stochastic collocation to make the estimated PDF optimal for a wide variety of response functions. We contrast this method with other traditional procedures that rely on the Maximum Likelihood approach, like kernel density estimation (KDE) and its adaptive modification (AKDE). We argue that this modified KMM method tries to preserve what is known from the given data and is the better approach when the available data is limited in quantity. We test the performance of these methods for both univariate and multivariate density estimation by sampling random datasets from known PDFs and then measuring the accuracy of the estimated PDFs, using the known PDF as a reference. Comparing the output mean and variance estimated with the empirical moments using the raw data sample as well as the actual moments using the known PDF, we show that the KMM method performs better than KDE and AKDE in predicting these moments with greater accuracy. This improvement in accuracy is also demonstrated for the case of UQ in electrostatic and electrothermomechanical microactuators. We show how our framework results in the accurate computation of statistics in micromechanical systems
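
    The kernel moment matching method itself is not reproduced here, but the kernel density estimation baseline that the two records above compare against can be sketched in a few lines on a synthetic limited dataset; the distribution and sample size are assumptions chosen for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)
sample = rng.lognormal(mean=0.0, sigma=0.5, size=50)   # "limited" input data

kde = gaussian_kde(sample)              # the KDE baseline discussed above
xs = np.linspace(0.01, 6.0, 500)
pdf = kde(xs)
dx = xs[1] - xs[0]

# Moments implied by the estimated PDF vs. the empirical sample moments.
mean_kde = np.sum(xs * pdf) * dx
var_kde = np.sum((xs - mean_kde) ** 2 * pdf) * dx
print(f"KDE mean {mean_kde:.3f} vs empirical {sample.mean():.3f}")
print(f"KDE var  {var_kde:.3f} vs empirical {sample.var():.3f}")
```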

  15. A data input controller for an alphanumeric and function keyboard with ports to the CAMAC-dataway or the serial plasma display controller

    International Nuclear Information System (INIS)

    Zahn, J.; Komor, Z.; Geldmeyer, H.J.

    1976-01-01

    A data input controller has been developed to allow data transfer from an alphanumeric and function keyboard to the CAMAC-dataway or, via the plasma display controller SIG-8AS/S and a serial transmission line, to the TTY-/V.24 port of a computer. (orig.) [de]

  16. Modeling of the Indonesian Consumer Price Index Using a Multi-Input Intervention Model

    KAUST Repository

    Novianti, Putri Wikie; Suhartono, Suhartono

    2017-01-01

    -searches that have been done are only contain of an intervention with single input, ei-ther step or pulse function. Multi input intervention was used in Indonesia CPI case because there are some events which are expected effecting CPI. Based on the result, those

  17. Comparison of five DNA quantification methods

    DEFF Research Database (Denmark)

    Nielsen, Karsten; Mogensen, Helle Smidt; Hedman, Johannes

    2008-01-01

    Six commercial preparations of human genomic DNA were quantified using five quantification methods: UV spectrometry, SYBR-Green dye staining, slot blot hybridization with the probe D17Z1, Quantifiler Human DNA Quantification kit and RB1 rt-PCR. All methods measured higher DNA concentrations than...... Quantification kit in two experiments. The measured DNA concentrations with Quantifiler were 125 and 160% higher than expected based on the manufacturers' information. When the Quantifiler human DNA standard (Raji cell line) was replaced by the commercial human DNA preparation G147A (Promega) to generate the DNA...... standard curve in the Quantifiler Human DNA Quantification kit, the DNA quantification results of the human DNA preparations were 31% higher than expected based on the manufacturers' information. The results indicate a calibration problem with the Quantifiler human DNA standard for its use...

  18. Canonical multi-valued input Reed-Muller trees and forms

    Science.gov (United States)

    Perkowski, M. A.; Johnson, P. D.

    1991-01-01

    There has recently been increased interest in logic synthesis using EXOR gates. The paper introduces the fundamental concept of Orthogonal Expansion, which generalizes the ring form of the Shannon expansion to the logic with multiple-valued (mv) inputs. Based on this concept we are able to define a family of canonical tree circuits. Such circuits can be considered for binary and multiple-valued input cases. They can be multi-level (trees and DAGs) or flattened to two-level AND-EXOR circuits. Input decoders similar to those used in Sum of Products (SOP) PLAs are used in realizations of multiple-valued input functions. In the case of binary logic the family of flattened AND-EXOR circuits includes several forms discussed by Davio and Green. For the case of the logic with multiple-valued inputs, the family of the flattened mv AND-EXOR circuits includes three expansions known from the literature and two new expansions.
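
    For reference, in the binary case the ring (AND-EXOR) forms that the Orthogonal Expansion generalizes reduce to the classical Shannon and Davio expansions, with f0 and f1 the cofactors of f at x = 0 and x = 1:

```latex
% Binary ring expansions underlying AND-EXOR forms
\begin{align*}
\text{Shannon:}        \quad & f = \bar{x}\,f_0 \oplus x\,f_1 \\
\text{positive Davio:} \quad & f = f_0 \oplus x\,(f_0 \oplus f_1) \\
\text{negative Davio:} \quad & f = f_1 \oplus \bar{x}\,(f_0 \oplus f_1)
\end{align*}
```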

  19. The Input-Output Relationship of the Cholinergic Basal Forebrain

    Directory of Open Access Journals (Sweden)

    Matthew R. Gielow

    2017-02-01

    Basal forebrain cholinergic neurons influence cortical state, plasticity, learning, and attention. They collectively innervate the entire cerebral cortex, differentially controlling acetylcholine efflux across different cortical areas and timescales. Such control might be achieved by differential inputs driving separable cholinergic outputs, although no input-output relationship on a brain-wide level has ever been demonstrated. Here, we identify input neurons to cholinergic cells projecting to specific cortical regions by infecting cholinergic axon terminals with a monosynaptically restricted viral tracer. This approach revealed several circuit motifs, such as central amygdala neurons synapsing onto basolateral amygdala-projecting cholinergic neurons or strong somatosensory cortical input to motor cortex-projecting cholinergic neurons. The presence of input cells in the parasympathetic midbrain nuclei contacting frontally projecting cholinergic neurons suggests that the network regulating the inner eye muscles also regulates cortical state via acetylcholine efflux. This dataset enables future circuit-level experiments to identify drivers of known cortical cholinergic functions.

  20. Standardless quantification by parameter optimization in electron probe microanalysis

    International Nuclear Information System (INIS)

    Limandri, Silvina P.; Bonetto, Rita D.; Josa, Víctor Galván; Carreras, Alejo C.; Trincavelli, Jorge C.

    2012-01-01

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested in a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in the 74% of the cases studied. In addition, the performance of the method proposed is compared with the first principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 for the 66% of the cases for POEMA, GENESIS and DTSA, respectively. - Highlights: ► A method for standardless quantification in EPMA is presented. ► It gives better results than the commercial software GENESIS Spectrum. ► It gives better results than the software DTSA. ► It allows the determination of the conductive coating thickness. ► It gives an estimation for the concentration uncertainties.
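
    The core of the parameter-optimization approach, minimizing the quadratic difference between a measured spectrum and an analytical model, can be sketched as below. The peak-plus-background model and all numbers are invented for illustration and are far simpler than the spectral model implemented in POEMA.

```python
import numpy as np
from scipy.optimize import least_squares

E = np.linspace(1.0, 10.0, 400)          # energy axis (keV), illustrative

def model(p, E):
    a, b, amp, mu, w = p
    background = a * np.exp(-b * E)                    # smooth background term
    peak = amp * np.exp(-0.5 * ((E - mu) / w) ** 2)    # one characteristic line
    return background + peak

rng = np.random.default_rng(5)
true_p = (50.0, 0.3, 20.0, 6.4, 0.12)
spectrum = model(true_p, E) + rng.normal(0.0, 0.5, E.size)  # "experimental" data

# Minimize the quadratic residuals over the model parameters.
fit = least_squares(lambda p: model(p, E) - spectrum,
                    x0=(40.0, 0.2, 10.0, 6.0, 0.2))
print("recovered parameters:", np.round(fit.x, 3))
```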

  1. Final Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Conrad, Patrick [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Bigoni, Daniele [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Parno, Matthew [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2017-06-09

    QUEST (\\url{www.quest-scidac.org}) is a SciDAC Institute that is focused on uncertainty quantification (UQ) in large-scale scientific computations. Our goals are to (1) advance the state of the art in UQ mathematics, algorithms, and software; and (2) provide modeling, algorithmic, and general UQ expertise, together with software tools, to other SciDAC projects, thereby enabling and guiding a broad range of UQ activities in their respective contexts. QUEST is a collaboration among six institutions (Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University) with a history of joint UQ research. Our vision encompasses all aspects of UQ in leadership-class computing. This includes the well-founded setup of UQ problems; characterization of the input space given available data/information; local and global sensitivity analysis; adaptive dimensionality and order reduction; forward and inverse propagation of uncertainty; handling of application code failures, missing data, and hardware/software fault tolerance; and model inadequacy, comparison, validation, selection, and averaging. The nature of the UQ problem requires the seamless combination of data, models, and information across this landscape in a manner that provides a self-consistent quantification of requisite uncertainties in predictions from computational models. Accordingly, our UQ methods and tools span an interdisciplinary space across applied math, information theory, and statistics. The MIT QUEST effort centers on statistical inference and methods for surrogate or reduced-order modeling. MIT personnel have been responsible for the development of adaptive sampling methods, methods for approximating computationally intensive models, and software for both forward uncertainty propagation and statistical inverse problems. A key software product of the MIT QUEST effort is the MIT

  2. Identification and Quantification of Uncertainties Related to Using Distributed X-band Radar Estimated Precipitation as input in Urban Drainage Models

    DEFF Research Database (Denmark)

    Pedersen, Lisbeth

    The Local Area Weather Radar (LAWR) is a small scale weather radar providing distributed measurements of rainfall primarily for use as input in hydrological applications. As with any other weather radar, the LAWR measurement of the rainfall is an indirect measurement, since it does not measure...... are quantified using statistical methods. Furthermore, the present calibration method is reviewed and a new extended calibration method has been developed and tested, resulting in improved rainfall estimates. As part of the calibration analysis a number of elements affecting the LAWR performance were identified...... in connection with boundary assignment, besides generally improved understanding of the benefits and pitfalls in using distributed rainfall data as input to models. In connection with the use of LAWR data in an urban drainage context, the potential for using LAWR data for extreme rainfall statistics has been studied...

  3. Uncertainty analysis methods for quantification of source terms using a large computer code

    International Nuclear Information System (INIS)

    Han, Seok Jung

    1997-02-01

    Quantification of uncertainties in the source term estimations by large computer codes, such as MELCOR and MAAP, is an essential process of the current probabilistic safety assessments (PSAs). The main objectives of the present study are (1) to investigate the applicability of a combined procedure of the response surface method (RSM) based on input determined from a statistical design and the Latin hypercube sampling (LHS) technique for the uncertainty analysis of CsI release fractions under a hypothetical severe accident sequence of a station blackout at Young-Gwang nuclear power plant using the MAAP3.0B code as a benchmark problem; and (2) to propose a new measure of uncertainty importance based on the distributional sensitivity analysis. On the basis of the results obtained in the present work, the RSM is recommended to be used as a principal tool for an overall uncertainty analysis in source term quantifications, while using the LHS in the calculations of standardized regression coefficients (SRC) and standardized rank regression coefficients (SRRC) to determine the subset of the most important input parameters in the final screening step and to check the cumulative distribution functions (cdfs) obtained by RSM. Verification of the response surface model for its sufficient accuracy is a prerequisite for the reliability of the final results obtained by the combined procedure proposed in the present work. In the present study a new measure has been developed to utilize the metric distance obtained from cumulative distribution functions (cdfs). The measure has been evaluated for three different cases of distributions in order to assess its characteristics: in the first two cases the distributions are known analytically, while in the third the distribution is unknown. The first case is given by symmetric analytical distributions; the second consists of two asymmetric distributions whose skewness is nonzero.
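
    The screening step mentioned above ranks inputs by standardized regression coefficients (SRC) and their rank-transformed counterparts (SRRC); a minimal sketch on synthetic data follows (the model and sample are assumptions, not the MAAP benchmark).

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 3))      # LHS-style input sample (illustrative)
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(500)

def src(X, y):
    Xs = (X - X.mean(0)) / X.std(0)               # standardize inputs
    ys = (y - y.mean()) / y.std()                 # standardize output
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta                                   # standardized regression coefficients

print("SRC :", np.round(src(X, y), 3))
print("SRRC:", np.round(src(rankdata(X, axis=0), rankdata(y)), 3))
```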

  4. Response sensitivity of barrel neuron subpopulations to simulated thalamic input.

    Science.gov (United States)

    Pesavento, Michael J; Rittenhouse, Cynthia D; Pinto, David J

    2010-06-01

    Our goal is to examine the relationship between neuron- and network-level processing in the context of a well-studied cortical function, the processing of thalamic input by whisker-barrel circuits in rodent neocortex. Here we focus on neuron-level processing and investigate the responses of excitatory and inhibitory barrel neurons to simulated thalamic inputs applied using the dynamic clamp method in brain slices. Simulated inputs are modeled after real thalamic inputs recorded in vivo in response to brief whisker deflections. Our results suggest that inhibitory neurons require more input to reach firing threshold, but then fire earlier, with less variability, and respond to a broader range of inputs than do excitatory neurons. Differences in the responses of barrel neuron subtypes depend on their intrinsic membrane properties. Neurons with a low input resistance require more input to reach threshold but then fire earlier than neurons with a higher input resistance, regardless of the neuron's classification. Our results also suggest that the response properties of excitatory versus inhibitory barrel neurons are consistent with the response sensitivities of the ensemble barrel network. The short response latency of inhibitory neurons may serve to suppress ensemble barrel responses to asynchronous thalamic input. Correspondingly, whereas neurons acting as part of the barrel circuit in vivo are highly selective for temporally correlated thalamic input, excitatory barrel neurons acting alone in vitro are less so. These data suggest that network-level processing of thalamic input in barrel cortex depends on neuron-level processing of the same input by excitatory and inhibitory barrel neurons.

  5. Quantification of benign lesion regression as a function of 532-nm pulsed potassium titanyl phosphate laser parameter selection.

    Science.gov (United States)

    Mallur, Pavan S; Tajudeen, Bobby A; Aaronson, Nicole; Branski, Ryan C; Amin, Milan R

    2011-03-01

    Although the potassium titanyl phosphate (KTP) laser is versatile, the variability in laser parameters for laryngeal pathologies and the lack of clinical efficacy data remain problematic. We provide preliminary data regarding these parameters for benign lesion regression. In addition, we describe a novel method for the quantification of the effects of the KTP laser on vocal fold (VF) lesions. Retrospective chart review. Images were captured from examinations before and after in-office KTP treatment in patients with a range of benign lesions. Laser settings were noted for each patient. Imaging software was then used to calculate a ratio of lesion area to VF length. Ten percent of images were requantified to determine inter-rater reliability. Thirty-two patients underwent 47 procedures for lesions including hemorrhagic polyp, nonhemorrhagic polyp, vocal process granuloma, Reinke's edema, cyst/pseudocyst, leukoplakia, and squamous cell carcinoma in situ. No statistically significant differences were observed with regard to the laser parameters used as a function of lesion type. Regardless, by 1 month following treatment, all lesions had significantly decreased in size, except nonhemorrhagic polyps. Similar data were obtained at 2-month follow-up. We then compared the pre-KTP lesion size with the smallest lesion size quantified during the 1-year follow-up period. All lesions were significantly smaller, with the exception of Reinke's edema. Inter-rater reliability was quite good. KTP laser effectively reduced VF lesion size, irrespective of the laser parameters used. In addition, our quantification method for lesion size appeared to be both viable and reliable. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.

  6. TART input manual

    International Nuclear Information System (INIS)

    Kimlinger, J.R.; Plechaty, E.F.

    1982-01-01

    The TART code is a Monte Carlo neutron/photon transport code that runs only on the CRAY computer. All the input cards for the TART code are listed, and definitions for all input parameters are given. The execution and limitations of the code are described, and input for two sample problems is given.

  7. Lung involvement quantification in chest radiographs

    International Nuclear Information System (INIS)

    Giacomini, Guilherme; Alvarez, Matheus; Oliveira, Marcela de; Miranda, Jose Ricardo A.; Pina, Diana R.; Pereira, Paulo C.M.; Ribeiro, Sergio M.

    2014-01-01

    Tuberculosis (TB), caused by Mycobacterium tuberculosis, is an infectious disease that remains a global health problem. Chest radiography is the method commonly employed to assess the evolution of TB. Methods for quantification of chest abnormalities are usually applied to computed tomography (CT) scans. This quantification is important for assessing TB evolution and treatment and for comparing different treatments. However, precise quantification is not always feasible because of the number of CT scans required. The purpose of this work is to develop a methodology for quantification of lung damage caused by TB through chest radiographs. An algorithm for computational processing of the exams was developed in Matlab; it creates a 3D representation of the lungs, with the compromised, dilated regions inside. The quantification of lung lesions was also performed for the same patients through CT scans. The measurements from the two methods were compared, resulting in strong correlation. Applying Bland-Altman statistical analysis, all samples were within the limits of agreement, with a confidence interval of 95%. The results showed an average variation of around 13% between the two quantification methods. The results suggest the effectiveness and applicability of the method developed, providing a better risk-benefit for the patient and cost-benefit ratio for the institution. (author)
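
    The Bland-Altman agreement check used above reduces to a bias and 95% limits of agreement computed from paired measurements; a minimal sketch with invented paired lesion quantifications (radiograph-based vs. CT-based):

```python
import numpy as np

# Hypothetical paired lesion quantifications from the two methods
radiograph = np.array([12.1, 30.5, 22.3, 41.0, 18.7])
ct = np.array([13.0, 28.9, 24.1, 39.2, 20.1])

diff = radiograph - ct
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)     # 95% limits of agreement
print(f"bias = {bias:.2f}; limits of agreement = "
      f"[{bias - loa:.2f}, {bias + loa:.2f}]")
```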

  8. Fluorescent quantification of melanin.

    Science.gov (United States)

    Fernandes, Bruno; Matamá, Teresa; Guimarães, Diana; Gomes, Andreia; Cavaco-Paulo, Artur

    2016-11-01

    Melanin quantification is reportedly performed by absorption spectroscopy, commonly at 405 nm. Here, we propose the implementation of fluorescence spectroscopy for melanin assessment. In a typical in vitro assay to assess melanin production in response to an external stimulus, absorption spectroscopy clearly overvalues melanin content. This method is also incapable of distinguishing non-melanotic/amelanotic control cells from those that are actually capable of performing melanogenesis. Therefore, fluorescence spectroscopy is the best method for melanin quantification as it proved to be highly specific and accurate, detecting even small variations in the synthesis of melanin. This method can also be applied to the quantification of melanin in more complex biological matrices like zebrafish embryos and human hair. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  9. Quantification of total phosphorothioate in bacterial DNA by a bromobimane-based fluorescent method.

    Science.gov (United States)

    Xiao, Lu; Xiang, Yu

    2016-06-01

    The discovery of phosphorothioate (PT) modifications in bacterial DNA has challenged our understanding of the conserved phosphodiester backbone structure of cellular DNA. This DNA modification is exclusive to bacteria and has not yet been found in animal cells, and its biological function in bacteria is still poorly understood. Quantitative information about the bacterial PT modifications is thus important for the investigation of their possible biological functions. In this study, we have developed a simple fluorescence method for selective quantification of total PTs in bacterial DNA, based on fluorescent labeling of PTs and subsequent release of the labeled fluorophores for absolute quantification. The method was highly selective for PTs and was not subject to interference from reactive small molecules or proteins. The quantification of PTs in an E. coli DNA sample was successfully achieved using our method and gave a result of about 455 PTs per million DNA nucleotides, while almost no detectable PTs were found in a mammalian calf thymus DNA. With this new method, the content of phosphorothioate in bacterial DNA could be successfully quantified, serving as a simple method suitable for routine use in biological phosphorothioate-related studies. Copyright © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Strategies for the generation of parametric images of [11C]PIB with plasma input functions considering discriminations and reproducibility.

    Science.gov (United States)

    Edison, Paul; Brooks, David J; Turkheimer, Federico E; Archer, Hilary A; Hinz, Rainer

    2009-11-01

    Pittsburgh compound B or [11C]PIB is an amyloid imaging agent which shows a clear differentiation between subjects with Alzheimer's disease (AD) and controls. However, the observed signal difference in other forms of dementia such as dementia with Lewy bodies (DLB) is smaller, and mild cognitively impaired (MCI) subjects and some healthy elderly normals may show intermediate levels of [11C]PIB binding. The cerebellum, a commonly used reference region for non-specific tracer uptake in [11C]PIB studies in AD, may not be valid in Prion disorders or monogenic forms of AD. The aim of this work was (1) to compare methods for generating parametric maps of [11C]PIB retention in tissue using a plasma input function with respect to their ability to discriminate between AD subjects and controls, and (2) to estimate the test-retest reproducibility in AD subjects. 12 AD subjects (5 of whom underwent a repeat scan within 6 weeks) and 10 control subjects had 90-minute [11C]PIB dynamic PET scans, and arterial plasma input functions were measured. Parametric maps were generated with graphical analysis of reversible binding (Logan plot), irreversible binding (Patlak plot), and spectral analysis. Between-group differentiation was calculated using Student's t-test and comparisons between different methods were made using p values. Reproducibility was assessed by intraclass correlation coefficients (ICC). We found that the 75 min value of the impulse response function showed the best group differentiation and had a higher ICC than volume of distribution maps generated from Logan and spectral analysis. Patlak analysis of [11C]PIB binding was the least reproducible.
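
    Of the methods compared above, the Logan plot is the simplest to illustrate: after an equilibration time, integrated tissue activity over instantaneous activity becomes linear in integrated plasma input over instantaneous activity, with slope equal to the distribution volume. The sketch below demonstrates this on synthetic one-tissue-compartment data; the input function, rate constants, and time grid are assumptions.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

t = np.linspace(0.0, 90.0, 181)            # minutes, uniform grid
dt = t[1] - t[0]
cp = 10.0 * t * np.exp(-t / 4.0)           # assumed plasma input function
K1, k2 = 0.1, 0.05                         # assumed one-tissue rate constants

# Tissue curve: C_t = K1 * (exp(-k2 t) convolved with C_p)
ct = K1 * dt * np.convolve(np.exp(-k2 * t), cp)[: len(t)]

int_cp = cumulative_trapezoid(cp, t, initial=0)
int_ct = cumulative_trapezoid(ct, t, initial=0)

late = t > 30                              # use the late, linear portion
slope, _ = np.polyfit(int_cp[late] / ct[late], int_ct[late] / ct[late], 1)
print(f"Logan V_T ~ {slope:.2f} (ground truth K1/k2 = {K1 / k2:.2f})")
```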

  11. Verb aspect, alternations and quantification

    Directory of Open Access Journals (Sweden)

    Svetla Koeva

    2015-11-01

    In this paper we briefly discuss the nature of Bulgarian verb aspect and argue that the verb aspect pairs are different lexical units with different (although related) meaning, different argument structure (reflecting categories, explicitness and referential status of arguments) and different sets of semantic and syntactic alternations. The verb prefixes resulting in perfective verb derivation can in some cases be interpreted as lexical quantifiers as well. Thus Bulgarian verb aspect is related (in different ways) both to the potential for the generation of alternations and to prefixal lexical quantification. It is shown that the scope of the lexical quantification by means of verbal prefixes is the quantified verb phrase, and the scope remains constant in all derived alternations. The paper concerns the basic issues of these complex problems, while the detailed description of the conditions satisfying a particular alternation or a particular lexical quantification is the subject of a more detailed study.

  12. Sound effects: Multimodal input helps infants find displaced objects.

    Science.gov (United States)

    Shinskey, Jeanne L

    2017-09-01

    sensitive to bimodal input as multisensory functions develop across the first year. © 2016 The British Psychological Society.

  13. IFF, Full-Screen Input Menu Generator for FORTRAN Program

    International Nuclear Information System (INIS)

    Seidl, Albert

    1991-01-01

    1 - Description of program or function: The IFF-package contains input modules for use within FORTRAN programs. This package enables the programmer to easily include interactive menu-directed data input (module VTMEN1) and command-word processing (module INPCOM) into a FORTRAN program. 2 - Method of solution: No mathematical operations are performed. 3 - Restrictions on the complexity of the problem: Certain restrictions of use may arise from the dimensioning of arrays. Field lengths are defined via PARAMETER-statements

  14. MPC for LPV Systems Based on Parameter-Dependent Lyapunov Function with Perturbation on Control Input Strategy

    Directory of Open Access Journals (Sweden)

    Pornchai Bumroongsri

    2012-04-01

    In this paper, a model predictive control (MPC) algorithm for linear parameter varying (LPV) systems is proposed. The proposed algorithm consists of two steps. The first step is derived by using a parameter-dependent Lyapunov function and the second step is derived by using the perturbation on control input strategy. In order to achieve good control performance, the bounds on the rate of variation of the parameters are taken into account in the controller synthesis. The overall algorithm is proved to guarantee robust stability. The controller design is illustrated with two case studies of continuous stirred-tank reactors. Comparisons with other MPC algorithms for LPV systems have been undertaken. The results show that the proposed algorithm can achieve better control performance.

  15. Dataset exploited for the development and validation of automated cyanobacteria quantification algorithm, ACQUA

    Directory of Open Access Journals (Sweden)

    Emanuele Gandola

    2016-09-01

    The estimation and quantification of potentially toxic cyanobacteria in lakes and reservoirs are often used as a proxy of risk for water intended for human consumption and recreational activities. Here, we present data sets collected from three volcanic Italian lakes (Albano, Vico, Nemi) that present filamentous cyanobacteria strains in different environments. The presented data sets were used to estimate abundance and morphometric characteristics of potentially toxic cyanobacteria, comparing manual vs. automated estimation performed by ACQUA ("ACQUA: Automated Cyanobacterial Quantification Algorithm for toxic filamentous genera using spline curves, pattern recognition and machine learning") (Gandola et al., 2016 [1]). This strategy was used to assess the algorithm performance and to set up the denoising algorithm. Abundance and total length estimations were used for software development; to this aim we evaluated the efficiency of the statistical tools and mathematical algorithms described here. The image convolution with the Sobel filter was chosen to denoise input images from background signals, then spline curves and the least squares method were used to parameterize detected filaments and to recombine crossing and interrupted sections, aimed at performing precise abundance estimations and morphometric measurements. Keywords: Comparing data, Filamentous cyanobacteria, Algorithm, Denoising, Natural sample
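
    The Sobel step named above amounts to convolving the image with gradient kernels and thresholding the gradient magnitude so that filament borders stand out from the background; a toy sketch follows (the synthetic image is an assumption standing in for a microscopy frame).

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
img = rng.normal(0.0, 0.05, (64, 64))   # noisy background
img[30:34, 8:56] += 1.0                 # one bright "filament"

gx = ndimage.sobel(img, axis=1)         # horizontal gradient
gy = ndimage.sobel(img, axis=0)         # vertical gradient
edges = np.hypot(gx, gy)                # gradient magnitude

mask = edges > edges.mean() + 3.0 * edges.std()
print("edge pixels flagged:", int(mask.sum()))
```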

  16. Phase-based vascular input function: Improved quantitative DCE-MRI of atherosclerotic plaques

    NARCIS (Netherlands)

    van Hoof, R. H. M.; Hermeling, E.; Truijman, M. T. B.; van Oostenbrugge, R. J.; Daemen, J. W. H.; van der Geest, R. J.; van Orshoven, N. P.; Schreuder, A. H.; Backes, W. H.; Daemen, M. J. A. P.; Wildberger, J. E.; Kooi, M. E.

    2015-01-01

    Purpose: Quantitative pharmacokinetic modeling of dynamic contrast-enhanced (DCE)-MRI can be used to assess atherosclerotic plaque microvasculature, which is an important marker of plaque vulnerability. The purpose of the present study was (1) to compare magnitude- versus phase-based vascular input

  17. Kinetic modeling in pre-clinical positron emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Kuntner, Claudia [AIT Austrian Institute of Technology GmbH, Seibersdorf (Austria). Biomedical Systems, Health and Environment Dept.

    2014-07-01

    Pre-clinical positron emission tomography (PET) has evolved in the last few years from pure visualization of radiotracer uptake and distribution towards quantification of the physiological parameters. For reliable and reproducible quantification the kinetic modeling methods used to obtain relevant parameters of radiotracer tissue interaction are important. Here we present different kinetic modeling techniques with a focus on compartmental models including plasma input models and reference tissue input models. The experimental challenges of deriving the plasma input function in rodents and the effect of anesthesia are discussed. Finally, in vivo application of kinetic modeling in various areas of pre-clinical research is presented and compared to human data.

  18. Impact of regulation on English and Welsh water-only companies: an input-distance function approach.

    Science.gov (United States)

    Molinos-Senante, María; Porcher, Simon; Maziotis, Alexandros

    2017-07-01

    The assessment of productivity change over time and its drivers is of great significance for water companies and regulators when setting urban water tariffs. This issue is even more relevant in privatized water industries, such as those in England and Wales, where the price-cap regulation is adopted. In this paper, an input-distance function is used to estimate productivity change and its determinants for the English and Welsh water-only companies (WoCs) over the period of 1993-2009. The impacts of several exogenous variables on companies' efficiencies are also explored. From a policy perspective, this study describes how regulators can use this type of modeling and results to calculate illustrative X factors for the WoCs. The results indicate that the 1994 and 1999 price reviews stimulated technical change, and there were small efficiency gains. However, the 2004 price review did not accelerate efficiency change or improve technical change. The results also indicated that during the whole period of study, the excessive scale of the WoCs contributed negatively to productivity growth. On average, WoCs reported relatively high efficiency levels, which suggests that they had already been investing in technologies that reduce long-term input requirements with respect to exogenous and service-quality variables. Finally, an average WoC needs to improve its productivity toward that of the best company by 1.58%. The methodology and results of this study are of great interest to both regulators and water-company managers for evaluating the effectiveness of regulation and making informed decisions.

  19. The role of PET quantification in cardiovascular imaging.

    Science.gov (United States)

    Slomka, Piotr; Berman, Daniel S; Alexanderson, Erick; Germano, Guido

    2014-08-01

    Positron Emission Tomography (PET) has several clinical and research applications in cardiovascular imaging. Myocardial perfusion imaging with PET allows accurate global and regional measurements of myocardial perfusion, myocardial blood flow and function at stress and rest in one exam. Simultaneous assessment of function and perfusion by PET with quantitative software is currently the routine practice. Combination of ejection fraction reserve with perfusion information may improve the identification of severe disease. The myocardial viability can be estimated by quantitative comparison of fluorodeoxyglucose (18FDG) and rest perfusion imaging. The myocardial blood flow and coronary flow reserve measurements are becoming routinely included in the clinical assessment due to enhanced dynamic imaging capabilities of the latest PET/CT scanners. Absolute flow measurements allow evaluation of the coronary microvascular dysfunction and provide additional prognostic and diagnostic information for coronary disease. Standard quantitative approaches to compute myocardial blood flow from kinetic PET data in automated and rapid fashion have been developed for 13N-ammonia, 15O-water and 82Rb radiotracers. The agreement between software methods available for such analysis is excellent. Relative quantification of 82Rb PET myocardial perfusion, based on comparisons to normal databases, demonstrates high performance for the detection of obstructive coronary disease. New tracers, such as 18F-flurpiridaz may allow further improvements in the disease detection. Computerized analysis of perfusion at stress and rest reduces the variability of the assessment as compared to visual analysis. PET quantification can be enhanced by precise coregistration with CT angiography. In emerging clinical applications, the potential to identify vulnerable plaques by quantification of atherosclerotic plaque uptake of 18FDG and 18F-sodium fluoride tracers in carotids, aorta and coronary arteries

  20. Framework for Modelling Multiple Input Complex Aggregations for Interactive Installations

    DEFF Research Database (Denmark)

    Padfield, Nicolas; Andreasen, Troels

    2012-01-01

    on fuzzy logic and provides a method for variably balancing interaction and user input with the intention of the artist or director. An experimental design is presented, demonstrating an intuitive interface for parametric modelling of a complex aggregation function. The aggregation function unifies...

  1. Standardless quantification by parameter optimization in electron probe microanalysis

    Energy Technology Data Exchange (ETDEWEB)

    Limandri, Silvina P. [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina); Bonetto, Rita D. [Centro de Investigacion y Desarrollo en Ciencias Aplicadas Dr. Jorge Ronco (CINDECA), CONICET, 47 Street 257, (1900) La Plata (Argentina); Facultad de Ciencias Exactas, Universidad Nacional de La Plata, 1 and 47 Streets (1900) La Plata (Argentina); Josa, Victor Galvan; Carreras, Alejo C. [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina); Trincavelli, Jorge C., E-mail: trincavelli@famaf.unc.edu.ar [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina)

    2012-11-15

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested in a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in the 74% of the cases studied. In addition, the performance of the method proposed is compared with the first principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 for the 66% of the cases for POEMA, GENESIS and DTSA, respectively. - Highlights: ► A method for standardless quantification in EPMA is presented. ► It gives better results than the commercial software GENESIS Spectrum. ► It gives better results than the software DTSA. ► It allows the determination of the conductive coating thickness. ► It gives an estimation for the concentration uncertainties.

  2. High-Voltage-Input Level Translator Using Standard CMOS

    Science.gov (United States)

    Yager, Jeremy A.; Mojarradi, Mohammad M.; Vo, Tuan A.; Blalock, Benjamin J.

    2011-01-01

    proposed integrated circuit would translate (1) a pair of input signals having a low differential potential and a possibly high common-mode potential into (2) a pair of output signals having the same low differential potential and a low common-mode potential. As used here, "low" and "high" refer to potentials that are, respectively, below or above the nominal supply potential (3.3 V) at which standard complementary metal oxide/semiconductor (CMOS) integrated circuits are designed to operate. The input common-mode potential could lie between 0 and 10 V; the output common-mode potential would be 2 V. This translation would make it possible to process the pair of signals by use of standard 3.3-V CMOS analog and/or mixed-signal (analog and digital) circuitry on the same integrated-circuit chip. A schematic of the circuit is shown in the figure. Standard 3.3-V CMOS circuitry cannot withstand input potentials greater than about 4 V. However, there are many applications that involve low-differential-potential, high-common-mode-potential input signal pairs and in which standard 3.3-V CMOS circuitry, which is relatively inexpensive, would be the most appropriate circuitry for performing other functions on the integrated-circuit chip that handles the high-potential input signals. Thus, there is a need to combine high-voltage input circuitry with standard low-voltage CMOS circuitry on the same integrated-circuit chip. The proposed circuit would satisfy this need. In the proposed circuit, the input signals would be coupled into both a level-shifting pair and a common-mode-sensing pair of CMOS transistors. The output of the level-shifting pair would be fed as input to a differential pair of transistors. The resulting differential current output would pass through six standoff transistors to be mirrored into an output branch by four heterojunction bipolar transistors. The mirrored differential current would be converted back to potential by a pair of diode-connected transistors

  3. New efficient five-input majority gate for quantum-dot cellular automata

    International Nuclear Information System (INIS)

    Farazkish, Razieh; Navi, Keivan

    2012-01-01

    A novel fault-tolerant five-input majority gate for quantum-dot cellular automata is presented. Quantum-dot cellular automata (QCA) is an emerging technology that is considered a candidate for future computers. The two principal logic elements in QCA are the "majority gate" and the "inverter." In this paper, we propose a new approach to the design of a fault-tolerant five-input majority gate by considering two-dimensional arrays of QCA cells. We analyze the fault-tolerance properties of such a block five-input majority gate in terms of misaligned, missing, and dislocated cells. Physical proofs are used to verify the five-input majority gate circuit layout and functionality. Our results clearly demonstrate that the redundant version of the block five-input majority gate is more robust than the standard style for this gate.
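
    Functionally, a QCA majority gate is just a vote: fixing one input of a three-input gate yields AND/OR, and the five-input gate generalizes the vote to five cells. A quick self-check of these identities in plain Python (independent of any physical QCA layout):

```python
from itertools import product

def maj(*bits):
    # Majority vote over an odd number of binary inputs.
    return int(sum(bits) > len(bits) // 2)

# Fixing one input of a 3-input majority gate gives AND / OR:
assert all(maj(a, b, 0) == (a & b) for a, b in product((0, 1), repeat=2))
assert all(maj(a, b, 1) == (a | b) for a, b in product((0, 1), repeat=2))

# The five-input gate outputs 1 exactly when at least three inputs are 1:
assert all(maj(*bits) == int(sum(bits) >= 3)
           for bits in product((0, 1), repeat=5))
print("majority-gate identities verified")
```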

  4. An Uncertainty Quantification Framework for Remote Sensing Retrievals

    Science.gov (United States)

    Braverman, A. J.; Hobbs, J.

    2017-12-01

    Remote sensing data sets produced by NASA and other space agencies are the result of complex algorithms that infer geophysical state from observed radiances using retrieval algorithms. The processing must keep up with the downlinked data flow, and this necessitates computational compromises that affect the accuracies of retrieved estimates. The algorithms are also limited by imperfect knowledge of physics and of ancillary inputs that are required. All of this contributes to uncertainties that are generally not rigorously quantified by stepping outside the assumptions that underlie the retrieval methodology. In this talk we discuss a practical framework for uncertainty quantification that can be applied to a variety of remote sensing retrieval algorithms. Ours is a statistical approach that uses Monte Carlo simulation to approximate the sampling distribution of the retrieved estimates. We will discuss the strengths and weaknesses of this approach, and provide a case-study example from the Orbiting Carbon Observatory 2 mission.
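
    The Monte Carlo idea described above can be made concrete with a toy linear "retrieval": simulate noisy observations, apply the estimator repeatedly, and inspect the sampling distribution of the retrieved state. The forward operator, noise level, and least-squares retrieval are illustrative assumptions, not the OCO-2 algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
K = np.array([[1.0, 0.4], [0.3, 1.2], [0.8, 0.1]])   # assumed forward operator
x_true = np.array([2.0, 1.0])                        # "true" geophysical state
sigma = 0.1                                          # radiance noise level

estimates = []
for _ in range(2000):
    y = K @ x_true + rng.normal(0.0, sigma, size=3)  # simulated radiances
    x_hat, *_ = np.linalg.lstsq(K, y, rcond=None)    # the toy retrieval
    estimates.append(x_hat)

est = np.array(estimates)
print("bias of retrieved state:", np.round(est.mean(0) - x_true, 4))
print("std  of retrieved state:", np.round(est.std(0), 4))
```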

  5. Wavelet-Based Frequency Response Function: Comparative Study of Input Excitation

    Directory of Open Access Journals (Sweden)

    K. Dziedziech

    2014-01-01

    Time-variant systems can be found in many areas of engineering. It is widely accepted that the classical Fourier-based methods are not suitable for the analysis and identification of such systems. The time-variant frequency response function, based on the continuous wavelet transform, is used in this paper for the analysis of time-variant systems. The focus is on a comparative study of various broadband input excitations. The performance of the method is tested using simulated data from a simple MDOF system and experimental data from a frame-like structure.
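
    One ingredient of the wavelet-based FRF, the continuous wavelet transform of a response signal, can be sketched with PyWavelets (an assumed dependency; the swept-sine signal is likewise an assumption, standing in for a time-variant system response).

```python
import numpy as np
import pywt  # PyWavelets, assumed available

fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
x = np.sin(2.0 * np.pi * (20.0 + 40.0 * t) * t)   # swept-frequency response

scales = np.arange(2, 128)
coeffs, freqs = pywt.cwt(x, scales, 'morl', sampling_period=1.0 / fs)

# |coeffs| is a time-scale map; its ridge tracks the instantaneous frequency,
# which is what a time-variant FRF exploits.
print(coeffs.shape, freqs[:3])
```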

  6. Stereo-particle image velocimetry uncertainty quantification

    International Nuclear Information System (INIS)

    Bhattacharya, Sayantan; Vlachos, Pavlos P; Charonko, John J

    2017-01-01

    Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from 2014 PIV challenge. Thorough sensitivity analysis was performed to assess the relative impact of the various parameters to the overall uncertainty. The results suggest that in absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric

  7. Data-independent MS/MS quantification of neuropeptides for determination of putative feeding-related neurohormones in microdialysate.

    Science.gov (United States)

    Schmerberg, Claire M; Liang, Zhidan; Li, Lingjun

    2015-01-21

    Food consumption is an important behavior that is regulated by an intricate array of neuropeptides (NPs). Although many feeding-related NPs have been identified in mammals, precise mechanisms are unclear and difficult to study in mammals, as current methods are not highly multiplexed and require extensive a priori knowledge about analytes. New advances in data-independent acquisition (DIA) MS/MS and the open-source quantification software Skyline have opened up the possibility to identify hundreds of compounds and quantify them from a single DIA MS/MS run. An untargeted DIA MS(E) quantification method using Skyline software for multiplexed, discovery-driven quantification was developed and found to produce linear calibration curves for peptides at physiologically relevant concentrations using a protein digest as internal standard. By using this method, preliminary relative quantification of the crab Cancer borealis neuropeptidome (winnowing candidate NPs related to a behavior of interest in a functionally relevant manner, and demonstrates the success of such a UPLC-MS(E) quantification method using the open source software Skyline.

  8. Harmonize input selection for sediment transport prediction

    Science.gov (United States)

    Afan, Haitham Abdulmohsin; Keshtegar, Behrooz; Mohtar, Wan Hanna Melini Wan; El-Shafie, Ahmed

    2017-09-01

    In this paper, three modeling approaches, a Neural Network (NN), the Response Surface Method (RSM) and a response surface method based on Global Harmony Search (GHS), are applied to predict daily time series of suspended sediment load. Generally, the input variables for forecasting the suspended sediment load are selected manually, based on the maximum correlations of the input variables, in the NN- and RSM-based modeling approaches. The RSM is improved to select the input variables by using the error terms of the training data based on the GHS, yielding the response surface method with global harmony search (RSM-GHS) modeling method. A second-order polynomial function with cross terms is applied to calibrate the time series of suspended sediment load with three, four and five input variables in the proposed RSM-GHS. The linear, square and cross terms of twenty input variables of antecedent values of suspended sediment load and water discharge are investigated to achieve the best predictions of the RSM based on the GHS method. The performances of the NN, RSM and proposed RSM-GHS, including both accuracy and simplicity, are compared through several comparative prediction and error statistics. The results illustrate that the proposed RSM-GHS is as uncomplicated as the RSM but performs better, with fewer errors and better correlation (R = 0.95, MAE = 18.09 ton/day, RMSE = 25.16 ton/day) compared to the ANN (R = 0.91, MAE = 20.17 ton/day, RMSE = 33.09 ton/day) and RSM (R = 0.91, MAE = 20.06 ton/day, RMSE = 31.92 ton/day) for all types of input variables.
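
    For orientation, the second-order polynomial response surface with cross terms named above can be fitted by ordinary least squares, as in this minimal sketch; the GHS-driven input selection is not reproduced and the data are synthetic.

    ```python
    import numpy as np
    from itertools import combinations

    def design_matrix(X):
        """Columns: 1, x_i, x_i^2, and all pairwise cross terms x_i * x_j."""
        n, k = X.shape
        cols = [np.ones(n)]
        cols += [X[:, i] for i in range(k)]
        cols += [X[:, i]**2 for i in range(k)]
        cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
        return np.column_stack(cols)

    rng = np.random.default_rng(0)
    X = rng.random((200, 3))   # e.g. three antecedent load/discharge values (toy)
    y = 5 + 2*X[:, 0] - X[:, 1]**2 + 0.5*X[:, 0]*X[:, 2] \
        + 0.1 * rng.standard_normal(200)

    beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
    y_hat = design_matrix(X) @ beta
    print("RMSE:", np.sqrt(np.mean((y - y_hat)**2)))
    ```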

  9. Characterizing stroke lesions using digital templates and lesion quantification tools in a web-based imaging informatics system for a large-scale stroke rehabilitation clinical trial

    Science.gov (United States)

    Wang, Ximing; Edwardson, Matthew; Dromerick, Alexander; Winstein, Carolee; Wang, Jing; Liu, Brent

    2015-03-01

    Previously, we presented an Interdisciplinary Comprehensive Arm Rehabilitation Evaluation (ICARE) imaging informatics system that supports a large-scale phase III stroke rehabilitation trial. The ePR system is capable of displaying anonymized patient imaging studies and reports, and the system is accessible to multiple clinical trial sites and users across the United States via the web. However, prior multicenter stroke rehabilitation trials lack any significant neuroimaging analysis infrastructure. In stroke-related clinical trials, identification of stroke lesion characteristics can be meaningful, as recent research shows that lesion characteristics are related to stroke scale and functional recovery after stroke. To facilitate stroke clinical trials, we hope to gain insight into specific lesion characteristics, such as vascular territory, for patients enrolled into large stroke rehabilitation trials. To enhance the system's capability for data analysis and data reporting, we have integrated new features with the system: a digital brain template display, a lesion quantification tool and a digital case report form. The digital brain templates are compiled from published vascular territory templates at each of 5 angles of incidence. These templates were updated to include territories in the brainstem using a vascular territory atlas and the Medical Image Processing, Analysis and Visualization (MIPAV) tool. The digital templates are displayed for side-by-side comparisons and transparent template overlay onto patients' images in the image viewer. The lesion quantification tool quantifies planimetric lesion area from a user-defined contour. The digital case report form stores user input into a database, then displays the contents in the interface to allow for reviewing, editing, and new inputs. In sum, the newly integrated system features provide the user with readily accessible web-based tools to identify the vascular territory involved, estimate lesion area

  10. Non-intrusive uncertainty quantification of computational fluid dynamics simulations: notes on the accuracy and efficiency

    Science.gov (United States)

    Zimoń, Małgorzata; Sawko, Robert; Emerson, David; Thompson, Christopher

    2017-11-01

    Uncertainty quantification (UQ) is increasingly becoming an indispensable tool for assessing the reliability of computational modelling. Efficient handling of stochastic inputs, such as boundary conditions, physical properties or geometry, increases the utility of model results significantly. We discuss the application of non-intrusive generalised polynomial chaos techniques in the context of fluid engineering simulations. Deterministic and Monte Carlo integration rules are applied to a set of problems, including ordinary differential equations and the computation of aerodynamic parameters subject to random perturbations. In particular, we analyse acoustic wave propagation in a heterogeneous medium to study the effects of mesh resolution, transients, number and variability of stochastic inputs. We consider variants of multi-level Monte Carlo and perform a novel comparison of the methods with respect to numerical and parametric errors, as well as computational cost. The results provide a comprehensive view of the necessary steps in UQ analysis and demonstrate some key features of stochastic fluid flow systems.
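
    A toy version of the comparison the abstract describes, contrasting a deterministic (Gauss-Hermite) quadrature rule with plain Monte Carlo for the mean of a model output under one Gaussian-perturbed input; the model below is a cheap stand-in for an expensive CFD solve.

    ```python
    import numpy as np

    def model(x):                  # stand-in for an expensive CFD evaluation
        return np.exp(0.3 * x)

    # Deterministic rule: probabilists' Gauss-Hermite quadrature,
    # E[f(X)] = (1/sqrt(2*pi)) * sum_i w_i f(x_i) for X ~ N(0, 1)
    nodes, weights = np.polynomial.hermite_e.hermegauss(8)
    mean_quad = np.sum(weights * model(nodes)) / np.sqrt(2 * np.pi)

    # Monte Carlo integration with many random samples
    rng = np.random.default_rng(5)
    mean_mc = model(rng.standard_normal(100_000)).mean()

    # Exact mean of exp(sigma * X) is exp(sigma^2 / 2)
    print(mean_quad, mean_mc, np.exp(0.3**2 / 2))
    ```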

  11. Input-output supervisor

    International Nuclear Information System (INIS)

    Dupuy, R.

    1970-01-01

    The input-output supervisor is the program which monitors the flow of information between core storage and the peripheral equipment of a computer. This work is composed of three parts: 1 - Study of a generalized input-output supervisor; with simple modifications it resembles most of the input-output supervisors now running on computers. 2 - Application of this theory to a magnetic drum. 3 - Hardware requirements for time-sharing. (author) [fr]

  12. Reagent-Free Quantification of Aqueous Free Chlorine via Electrical Readout of Colorimetrically Functionalized Pencil Lines.

    Science.gov (United States)

    Mohtasebi, Amirmasoud; Broomfield, Andrew D; Chowdhury, Tanzina; Selvaganapathy, P Ravi; Kruse, Peter

    2017-06-21

    Colorimetric methods are commonly used to quantify free chlorine in drinking water. However, these methods are not suitable for reagent-free, continuous, and autonomous applications. Here, we demonstrate how functionalization of a pencil-drawn film with phenyl-capped aniline tetramer (PCAT) can be used for quantitative electric readout of free chlorine concentrations. The functionalized film can be implemented in a simple fluidic device for continuous sensing of aqueous free chlorine concentrations. The sensor is selective to free chlorine and can undergo a reagent-free reset for further measurements. Our sensor is superior to electrochemical methods in that it does not require a reference electrode. It is capable of quantification of free chlorine in the range of 0.1-12 ppm with higher precision than colorimetric (absorptivity) methods. The interactions of PCAT with the pencil-drawn film upon exposure to hypochlorite were characterized spectroscopically. A previously reported detection mechanism relied on the measurement of a baseline shift to quantify free chlorine concentrations. The new method demonstrated here measures the initial spike size upon exposure to free chlorine. It relies on a fast charge build-up on the sensor film due to intermittent PCAT salt formation. It has the advantage of being significantly faster than the measurement of baseline shift, but it cannot be used to detect gradual changes in free chlorine concentration without the use of frequent reset pulses. The stability of PCAT was examined in the presence of free chlorine as a function of pH. While most ions commonly present in drinking water do not interfere with free chlorine detection, other oxidants may contribute to the signal. Our sensor is easy to fabricate and robust, operates reagent-free, and has very low power requirements, making it suitable for remote deployment.

  13. Modeling qRT-PCR dynamics with application to cancer biomarker quantification.

    Science.gov (United States)

    Chervoneva, Inna; Freydin, Boris; Hyslop, Terry; Waldman, Scott A

    2017-01-01

    Quantitative reverse transcription polymerase chain reaction (qRT-PCR) is widely used for molecular diagnostics and evaluating prognosis in cancer. The utility of mRNA expression biomarkers relies heavily on the accuracy and precision of quantification, which is still challenging for low abundance transcripts. The critical step for quantification is accurate estimation of efficiency needed for computing a relative qRT-PCR expression. We propose a new approach to estimating qRT-PCR efficiency based on modeling dynamics of polymerase chain reaction amplification. In contrast, only models for fluorescence intensity as a function of polymerase chain reaction cycle have been used so far for quantification. The dynamics of qRT-PCR efficiency is modeled using an ordinary differential equation model, and the fitted ordinary differential equation model is used to obtain effective polymerase chain reaction efficiency estimates needed for efficiency-adjusted quantification. The proposed new qRT-PCR efficiency estimates were used to quantify GUCY2C (Guanylate Cyclase 2C) mRNA expression in the blood of colorectal cancer patients. Time to recurrence and GUCY2C expression ratios were analyzed in a joint model for survival and longitudinal outcomes. The joint model with GUCY2C quantified using the proposed polymerase chain reaction efficiency estimates provided clinically meaningful results for association between time to recurrence and longitudinal trends in GUCY2C expression.
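
    As a rough illustration of the modelling idea, the sketch below integrates a logistic-type ODE for amplification and reads off a cycle-dependent efficiency; this is in the spirit of, not identical to, the authors' model, and all parameters are hypothetical.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def pcr_rhs(n, F, r, K):
        # Logistic growth: amplification slows as reagents are consumed (plateau K)
        return r * F * (1 - F / K)

    # Integrate fluorescence F over 40 cycles; F0, r and K are made-up values
    sol = solve_ivp(pcr_rhs, (0, 40), [1e-6], args=(np.log(2), 1.0),
                    dense_output=True)
    cycles = np.arange(0, 41)
    F = sol.sol(cycles)[0]

    eff = F[1:] / F[:-1]             # per-cycle amplification factor
    print("early-cycle efficiency ~", eff[5], ", late ~", eff[-1])
    ```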

  14. Effects of humic acid on DNA quantification with Quantifiler® Human DNA Quantification kit and short tandem repeat amplification efficiency.

    Science.gov (United States)

    Seo, Seung Bum; Lee, Hye Young; Zhang, Ai Hua; Kim, Hye Yeon; Shin, Dong Hoon; Lee, Soong Deok

    2012-11-01

    Correct DNA quantification is essential to obtaining reliable STR typing results. Forensic DNA analysts often use commercial kits for DNA quantification; among them, real-time PCR based kits are most frequently used. Incorrect DNA quantification due to the presence of PCR inhibitors may affect experimental results. In this study, we examined the degree to which DNA quantification results are altered in DNA samples containing a PCR inhibitor, using the Quantifiler® Human DNA Quantification kit. For the experiments, we prepared approximately 0.25 ng/μl DNA samples containing various concentrations of humic acid (HA). The quantification results were 0.194-0.303 ng/μl at 0-1.6 ng/μl HA (final concentration in the Quantifiler reaction) and 0.003-0.168 ng/μl at 2.4-4.0 ng/μl HA. Most DNA quantities were undetermined when the HA concentration was higher than 4.8 ng/μl. The C_T values of the internal PCR control (IPC) were 28.0-31.0, 36.5-37.1, and undetermined at 0-1.6, 2.4, and 3.2 ng/μl HA, respectively. These results indicate that underestimated DNA quantification results may be obtained in DNA samples with high IPC C_T values; researchers should therefore interpret DNA quantification results carefully. We additionally examined the effects of HA on STR amplification using an Identifiler® kit and a MiniFiler™ kit. Based on the results of this study, a better understanding of the various effects of HA should help researchers recognize and handle samples containing HA.

  15. Quantification of Spatial Heterogeneity in Old Growth Forest of Korean Pine

    Science.gov (United States)

    Wang Zhengquan; Wang Qingcheng; Zhang Yandong

    1997-01-01

    Spatial heterogeneity is a very important issue in studying functions and processes of ecological systems at various scales. Semivariogram analysis is an effective technique to summarize spatial data and quantify spatial heterogeneity. In this paper, we propose some principles for using semivariograms to characterize and compare the spatial heterogeneity of...
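
    A minimal sketch of the underlying tool, an empirical semivariogram computed from synthetic point data; the lag binning and the data are illustrative only.

    ```python
    import numpy as np

    def semivariogram(coords, values, lags, tol):
        """Empirical semivariogram: gamma(h) = 0.5 * mean of squared value
        differences over point pairs whose separation falls in the lag bin."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        sq = 0.5 * (values[:, None] - values[None, :])**2
        upper = np.triu(np.ones_like(d, dtype=bool), 1)   # each pair once
        gamma = []
        for h in lags:
            mask = (d > h - tol) & (d <= h + tol) & upper
            gamma.append(sq[mask].mean() if mask.any() else np.nan)
        return np.array(gamma)

    rng = np.random.default_rng(1)
    coords = rng.random((300, 2)) * 100.0        # e.g. stem positions in metres
    values = np.sin(coords[:, 0] / 15.0) + 0.3 * rng.standard_normal(300)
    lags = np.arange(5.0, 50.0, 5.0)
    print(semivariogram(coords, values, lags, tol=2.5))
    ```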

  16. Quantification of local mobilities

    DEFF Research Database (Denmark)

    Zhang, Y. B.

    2018-01-01

    A new method for quantification of the mobilities of local recrystallization boundary segments is presented. The quantification is based on microstructures characterized using electron microscopy and on determination of migration velocities and driving forces for local boundary segments. Pure aluminium is investigated and the results show that even for a single recrystallization boundary, different boundary segments migrate differently, and the differences can be understood based on variations in mobilities and local deformed microstructures. The present work has important implications for understanding

  17. Impact of Personal Characteristics and Technical Factors on Quantification of Sodium 18F-Fluoride Uptake in Human Arteries

    DEFF Research Database (Denmark)

    Blomberg, Björn Alexander; Thomassen, Anders; de Jong, Pim A

    2015-01-01

    Sodium (18)F-fluoride ((18)F-NaF) PET/CT imaging is a promising imaging technique for assessment of atherosclerosis, but is hampered by a lack of validated quantification protocols. Both personal characteristics and technical factors can affect quantification of arterial (18)F-NaF uptake. This study investigated if blood activity, renal function, injected dose, circulating time, and PET/CT system affect quantification of arterial (18)F-NaF uptake. METHODS: Eighty-nine healthy subjects were prospectively examined by (18)F-NaF PET/CT imaging. Arterial (18)F-NaF uptake was quantified ... assessed the effect of personal characteristics and technical factors on quantification of arterial (18)F-NaF uptake. RESULTS: NaFmax and TBRmax/mean were dependent on blood activity (β = .34 to .44, P

  18. Optimal control of LQR for discrete time-varying systems with input delays

    Science.gov (United States)

    Yin, Yue-Zhu; Yang, Zhong-Lian; Yin, Zhi-Xiang; Xu, Feng

    2018-04-01

    In this work, we consider the optimal control problem of linear quadratic regulation for discrete time-varying systems with a single input and multiple input delays. An innovative and simple method to derive the optimal controller is given. The studied problem is first equivalently converted into a problem subject to a constraint condition. Then, with the established duality, the problem is transformed into a static mathematical optimisation problem without input delays. The optimal control input minimising the performance index function is derived by solving this optimisation problem with two methods. A numerical simulation example is carried out, and its results show that both approaches are feasible and very effective.
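
    For reference, the sketch below solves the delay-free core of such a problem, a finite-horizon discrete LQR, by backward Riccati recursion; the paper's delay-handling reduction is not reproduced here, and the system matrices are toy values.

    ```python
    import numpy as np

    # Toy discrete-time system x_{k+1} = A x_k + B u_k with quadratic cost
    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])
    B = np.array([[0.0],
                  [0.1]])
    Q = np.eye(2)                  # state weight
    R = np.array([[0.5]])          # input weight
    N = 50                         # horizon length

    # Backward Riccati recursion: P_N = Q, then
    # K_k = (R + B'PB)^{-1} B'PA,  P_k = Q + A'P(A - BK)
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()                # gains[k] is the feedback gain at step k
    print(gains[0])
    ```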

  19. Exponential convergence rate (the spectral convergence) of the fast Pade transform for exact quantification in magnetic resonance spectroscopy

    International Nuclear Information System (INIS)

    Belkic, Dzevad

    2006-01-01

    This study deals with the most challenging numerical aspect of solving the quantification problem in magnetic resonance spectroscopy (MRS). The primary goal is to investigate whether it could be feasible to carry out a rigorous computation within finite arithmetics to reconstruct exactly all the machine-accurate input spectral parameters of every resonance from a synthesized noiseless time signal. We also consider simulated time signals embedded in random Gaussian distributed noise of a level comparable to the weakest resonances in the corresponding spectrum. The present choice for this high-resolution task in MRS is the fast Pade transform (FPT). All the sought spectral parameters (complex frequencies and amplitudes) can unequivocally be reconstructed from a given input time signal by using the FPT. Moreover, the present computations demonstrate that the FPT can achieve spectral convergence, which represents an exponential convergence rate as a function of the signal length for a fixed bandwidth. Such an extraordinary feature equips the FPT with exemplary high-resolution capabilities that are, in fact, theoretically unlimited. This is illustrated in the present study by the exact reconstruction (within machine accuracy) of all the spectral parameters from an input time signal comprised of 25 harmonics, i.e. complex damped exponentials, including those for tightly overlapped and nearly degenerate resonances whose chemical shifts differ by an exceedingly small fraction of only 10^-11 ppm. Moreover, without exhausting even a quarter of the full signal length, the FPT is shown to retrieve exactly all the input spectral parameters defined with 12 digits of accuracy. Specifically, we demonstrate that when the FPT is close to the convergence region, an unprecedented phase transition occurs, since literally a few additional signal points are sufficient to reach the full 12-digit accuracy with the exponentially fast rate of convergence. This is the critical

  20. Spot quantification in two dimensional gel electrophoresis image analysis: comparison of different approaches and presentation of a novel compound fitting algorithm

    Science.gov (United States)

    2014-01-01

    Background Various computer-based methods exist for the detection and quantification of protein spots in two dimensional gel electrophoresis images. Area-based methods are commonly used for spot quantification: an area is assigned to each spot and the sum of the pixel intensities in that area, the so-called volume, is used as a measure of spot signal. Other methods use the optical density, i.e. the intensity of the most intense pixel of a spot, or calculate the volume from the parameters of a fitted function. Results In this study we compare the performance of different spot quantification methods using synthetic and real data. We propose a ready-to-use algorithm for spot detection and quantification that uses fitting of two dimensional Gaussian function curves for the extraction of data from two dimensional gel electrophoresis (2-DE) images. The algorithm implements fitting using logical compounds and is computationally efficient. The applicability of the compound fitting algorithm was evaluated for various simulated data and compared with other quantification approaches. We provide evidence that even if an incorrect bell-shaped function is used, the fitting method is superior to other approaches, especially when spots overlap. Finally, we validated the method with experimental data of urea-based 2-DE of Aβ peptides and re-analyzed published data sets. Our method showed higher precision and accuracy than other approaches when applied to exposure time series and standard gels. Conclusion Compound fitting as a quantification method for 2-DE spots shows several advantages over other approaches and could be combined with various spot detection methods. The algorithm was scripted in MATLAB (Mathworks) and is available as a supplemental file. PMID:24915860
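
    As an illustration of function-fitting quantification in general (not the paper's compound algorithm), this sketch fits a single 2D Gaussian to a synthetic spot and reports the analytic volume 2*pi*A*sx*sy; all image dimensions and parameters are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2d(xy, A, x0, y0, sx, sy, b):
        """Elliptical 2D Gaussian on a constant background b, flattened."""
        x, y = xy
        g = A * np.exp(-((x - x0)**2 / (2 * sx**2) + (y - y0)**2 / (2 * sy**2)))
        return (g + b).ravel()

    # Synthetic 40x40 spot image with Gaussian noise
    y, x = np.mgrid[0:40, 0:40].astype(float)
    true = gauss2d((x, y), 100.0, 20.0, 18.0, 3.0, 4.0, 5.0).reshape(40, 40)
    img = true + np.random.default_rng(2).normal(0.0, 2.0, true.shape)

    p0 = (img.max() - img.min(), 20, 20, 5, 5, img.min())
    popt, _ = curve_fit(gauss2d, (x, y), img.ravel(), p0=p0)
    A, _, _, sx, sy, _ = popt
    print("spot volume ~", 2 * np.pi * A * sx * sy)   # analytic Gaussian volume
    ```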

  1. Does Input Quality Drive Measured Differences in Firm Productivity?

    DEFF Research Database (Denmark)

    Fox, Jeremy T.; Smeets, Valerie Anne Rolande

    Firms in the same industry can differ in measured productivity by multiples of 3. Griliches (1957) suggests one explanation: the quality of inputs differs across firms. We add labor market history variables such as experience and firm and industry tenure, as well as general human capital measures. The resulting input-quality effect is roughly of the same order of magnitude as some competitive effects found in the literature, but input quality measures do not explain most productivity dispersion, despite economically large production function coefficients. We find that the wage bill explains as much dispersion as human capital measures.

  2. Quantification in single photon emission computed tomography (SPECT)

    International Nuclear Information System (INIS)

    Buvat, Irene

    2005-01-01

    The objective of this lecture is to understand the possibilities and limitations of the quantitative analysis of single photon emission computed tomography (SPECT) images. It is also to identify the conditions to be fulfilled to obtain reliable quantitative measurements from images. Content: 1 - Introduction: quantification in emission tomography - definition and challenges; quantification biasing phenomena; 2 - Quantification in SPECT, problems and correction methods: attenuation, scattering, non-stationary spatial resolution, partial volume effect, movement, tomographic reconstruction, calibration; 3 - Synthesis: actual quantification accuracy; 4 - Beyond the activity concentration measurement

  3. Development of a VHH-Based Erythropoietin Quantification Assay

    DEFF Research Database (Denmark)

    Kol, Stefan; Beuchert Kallehauge, Thomas; Adema, Simon

    2015-01-01

    Erythropoietin (EPO) quantification during cell line selection and bioreactor cultivation has traditionally been performed with ELISA or HPLC. As these techniques suffer from several drawbacks, we developed a novel EPO quantification assay. A camelid single-domain antibody fragment directed against human EPO was evaluated as a capturing antibody in a label-free biolayer interferometry-based quantification assay. Human recombinant EPO can be specifically detected in Chinese hamster ovary cell supernatants in a sensitive and pH-dependent manner. This method enables rapid and robust quantification

  4. Quantification procedures in micro X-ray fluorescence analysis

    International Nuclear Information System (INIS)

    Kanngiesser, Birgit

    2003-01-01

    For quantification in micro X-ray fluorescence analysis, standard-free quantification procedures have become especially important. An introduction to the basic concepts of these quantification procedures is given, followed by a short survey of the procedures currently available and the kinds of experimental situations and analytical problems they address. The last point is extended by the description of an in-house development of the fundamental parameter method, which makes it possible to include non-parallel beam geometries. Finally, open problems for the quantification procedures are discussed

  5. Inverse Tasks In The Tsunami Problem: Nonlinear Regression With Inaccurate Input Data

    Science.gov (United States)

    Lavrentiev, M.; Shchemel, A.; Simonov, K.

    A variant of a modified training functional that allows for inaccurate input data is suggested. A limiting case, in which part of the input data is completely undefined and therefore a problem of reconstruction of hidden parameters must be solved, is also considered, and some numerical experiments are presented. The classic problem definition, widely used in the majority of neural net algorithms, assumes that a dependence of known output variables on known input variables should be found. The quality of approximation is evaluated by a performance function; often the error is evaluated as the squared distance between known data and predicted data, multiplied by weighting coefficients, which may be named "precision coefficients". When the inputs are not known exactly, a natural generalization of the performance function adds a term responsible for the distance between the known inputs and shifted inputs that lessen the model's error. It is desirable that the set of variable parameters be compact for training to converge. In the above problem it is possible to choose variants of a priori compactness requirements that allow meaningful interpretation in terms of the smoothness of the model dependence. Two kinds of regularization were used: the first limits the squares of the coefficients responsible for nonlinearity, and the second limits the product of those coefficients and the linear coefficients. The asymptotic universality of a neural net's ability to approximate various smooth functions to any accuracy by increasing the number of tunable parameters is often the basis for selecting a type of neural net approximation. It can be shown that the neural net used here approaches a Fourier integral transform, whose approximation abilities are known, as the number of tunable parameters increases. In the limiting case, when the input data are given with zero precision, the problem of reconstruction of hidden parameters from observed output data appears.
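
    A minimal sketch of the general idea of such a modified training functional: model parameters and input shifts are optimised jointly, with the shifts penalised by a precision weight. The sine model, the data and the weights are hypothetical stand-ins, not the authors' formulation.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(6)
    x_true = np.linspace(0.0, 1.0, 30)
    x_obs = x_true + 0.05 * rng.standard_normal(30)   # inaccurately known inputs
    y_obs = np.sin(2 * np.pi * x_true) + 0.02 * rng.standard_normal(30)

    w_in = 1.0 / 0.05**2     # input "precision coefficient" (assumed noise level)
    w_out = 1.0 / 0.02**2    # output precision coefficient

    def objective(p):
        # p = (amplitude a, offset b, per-point input shifts delta_1..delta_30)
        a, b, delta = p[0], p[1], p[2:]
        y_fit = a * np.sin(2 * np.pi * (x_obs + delta)) + b
        return (w_out * np.sum((y_fit - y_obs)**2)    # data misfit
                + w_in * np.sum(delta**2))            # penalty on input shifts

    p0 = np.concatenate(([1.0, 0.0], np.zeros(30)))
    res = minimize(objective, p0, method="L-BFGS-B")
    print("a, b ~", res.x[:2])
    ```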

  6. On the complex quantification of risk: systems-based perspective on terrorism.

    Science.gov (United States)

    Haimes, Yacov Y

    2011-08-01

    This article highlights the complexity of the quantification of the multidimensional risk function, develops five systems-based premises on quantifying the risk of terrorism to a threatened system, and advocates the quantification of vulnerability and resilience through the states of the system. The five premises are: (i) There exists interdependence between a specific threat to a system by terrorist networks and the states of the targeted system, as represented through the system's vulnerability, resilience, and criticality-impact. (ii) A specific threat, its probability, its timing, the states of the targeted system, and the probability of consequences can be interdependent. (iii) The two questions in the risk assessment process: "What is the likelihood?" and "What are the consequences?" can be interdependent. (iv) Risk management policy options can reduce both the likelihood of a threat to a targeted system and the associated likelihood of consequences by changing the states (including both vulnerability and resilience) of the system. (v) The quantification of risk to a vulnerable system from a specific threat must be built on a systemic and repeatable modeling process, by recognizing that the states of the system constitute an essential step to construct quantitative metrics of the consequences based on intelligence gathering, expert evidence, and other qualitative information. The fact that the states of all systems are functions of time (among other variables) makes the time frame pivotal in each component of the process of risk assessment, management, and communication. Thus, risk to a system, caused by an initiating event (e.g., a threat) is a multidimensional function of the specific threat, its probability and time frame, the states of the system (representing vulnerability and resilience), and the probabilistic multidimensional consequences. © 2011 Society for Risk Analysis.

  7. Conceptual Design of GRIG (GUI Based RETRAN Input Generator)

    International Nuclear Information System (INIS)

    Lee, Gyung Jin; Hwang, Su Hyun; Hong, Soon Joon; Lee, Byung Chul; Jang, Chan Su; Um, Kil Sup

    2007-01-01

    For the development of a high performance methodology using an advanced transient analysis code, it is essential to generate the basic input of the transient analysis code by rigorous QA procedures. There are various types of operating NPPs (Nuclear Power Plants) in Korea, such as Westinghouse plants, KSNP (Korea Standard Nuclear Power Plant), APR1400 (Advanced Power Reactor), etc., so it is difficult to systematically generate and manage transient analysis code inputs that reflect the inherent characteristics of each plant type. To minimize user faults and manpower investment, and to generate the basic inputs of the transient analysis code effectively and accurately for all domestic NPPs, a program is needed that can automatically generate the basic input, directly applicable to transient analysis, from the NPP design material. ViRRE (Visual RETRAN Running Environment), developed by KEPCO (Korea Electric Power Corporation) and KAERI (Korea Atomic Energy Research Institute), provides a convenient working environment for Kori Units 1/2. ViRRE shows the calculated results through an on-line display, but its capability is limited to the convenient execution of RETRAN, so it cannot be used as an input generator. ViSA (Visual System Analyzer), developed by KAERI, is an NPA (Nuclear Plant Analyzer) using the RETRAN and MARS codes as its thermal-hydraulic engine. ViSA contains both pre-processing and post-processing functions; in pre-processing, only the trip data cards and boundary conditions can be changed through the GUI, based on a pre-prepared text input, so its input generation capability is very limited. SNAP (Symbolic Nuclear Analysis Package), developed by Applied Programming Technology, Inc. and the NRC (Nuclear Regulatory Commission), provides an efficient working environment for the use of nuclear safety analysis codes such as the RELAP5 and TRAC-M codes. SNAP covers wide aspects of thermal-hydraulic analysis from model creation through data analysis

  8. Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) Benchmark Phase II: Identification of Influential Parameters

    International Nuclear Information System (INIS)

    Kovtonyuk, A.; Petruzzi, A.; D'Auria, F.

    2015-01-01

    The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) benchmark is to progress on the issue of the quantification of the uncertainty of the physical models in system thermal-hydraulic codes by considering a concrete case: the physical models involved in the prediction of core reflooding. The PREMIUM benchmark consists of five phases. This report presents the results of Phase II, dedicated to the identification of the uncertain code parameters associated with physical models used in the simulation of reflooding conditions. This identification is made on the basis of Test 216 of the FEBA/SEFLEX programme, according to the following steps: identification of influential phenomena; identification of the associated physical models and parameters, depending on the code used; quantification of the variation range of the identified input parameters through a series of sensitivity calculations. A procedure for the identification of potentially influential code input parameters was set up in the Specifications of Phase II of the PREMIUM benchmark, and a set of quantitative criteria was also proposed for the identification of influential input parameters and their respective variation ranges. Thirteen participating organisations, using 8 different codes (7 system thermal-hydraulic codes and 1 sub-channel module of a system thermal-hydraulic code), submitted Phase II results. The base-case calculations show a spread in predicted cladding temperatures and quench front propagation that has been characterized. All participants except one predict a too-fast quench front progression. Besides, the cladding temperature time trends obtained by almost all participants show oscillatory behaviour which may have numerical origins. The criteria adopted for identification of influential input parameters differ between participants: some organisations used the set of criteria proposed in the Specifications 'as is', some modified the quantitative thresholds

  9. Volume measurement study for large scale input accountancy tank

    International Nuclear Information System (INIS)

    Uchikoshi, Seiji; Watanabe, Yuichi; Tsujino, Takeshi

    1999-01-01

    The Large Scale Tank Calibration (LASTAC) facility, including an experimental tank which has the same volume and structure as the input accountancy tank of the Rokkasho Reprocessing Plant (RRP), was constructed in the Nuclear Material Control Center of Japan. Demonstration experiments have been carried out to evaluate the precision of solution volume measurement and to establish a procedure of highly accurate pressure measurement for a large scale tank with a dip-tube bubbler probe system, to be applied to the input accountancy tank of RRP. The solution volume in a tank is determined by substituting the solution level into the calibration function obtained in advance, which expresses the relation between the solution level and its volume in the tank. Therefore, precise solution volume measurement needs a precise calibration function that is determined carefully. The LASTAC calibration experiments using pure water showed good reproducibility. (J.P.N.)
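
    A schematic of the calibration step described: fit a level-to-volume calibration function from calibration-run data, then convert a measured level to a volume. All numbers below are invented for illustration.

    ```python
    import numpy as np

    # Hypothetical calibration-run data: known water additions vs. observed level
    levels = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])   # cm
    volumes = np.array([0.8, 1.9, 3.1, 4.2, 5.5, 6.7])        # m^3

    # Fit a low-order polynomial calibration function volume = f(level)
    coeff = np.polyfit(levels, volumes, deg=2)

    # Later: read a level (e.g. from dip-tube pressure) and look up the volume
    measured_level = 37.5                                      # cm
    print("volume ~", np.polyval(coeff, measured_level), "m^3")
    ```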

  10. ColloInputGenerator

    DEFF Research Database (Denmark)

    2013-01-01

    This is a very simple program to help you put together input files for use in Gries' (2007) R-based collostruction analysis program. It basically puts together a text file with a frequency list of lexemes in the construction and inserts a column where you can add the corpus frequencies. It requires ... it as input for basic collexeme collostructional analysis (Stefanowitsch & Gries 2003) in Gries' (2007) program. ColloInputGenerator is, in its current state, based on programming commands introduced in Gries (2009). Projected updates: Generation of complete work-ready frequency lists.

  11. In vivo MRS metabolite quantification using genetic optimization

    Science.gov (United States)

    Papakostas, G. A.; Karras, D. A.; Mertzios, B. G.; van Ormondt, D.; Graveron-Demilly, D.

    2011-11-01

    The in vivo quantification of metabolite concentrations, revealed in magnetic resonance spectroscopy (MRS) spectra, constitutes the main subject under investigation in this work. Significant contributions based on artificial intelligence tools, such as neural networks (NNs), have been presented lately with good results, but show several drawbacks regarding their quantification accuracy under difficult conditions. A general framework that treats the quantification procedure as an optimization problem, solved using a genetic algorithm (GA), is proposed in this paper. Two different lineshape models are examined, while two GA configurations are applied on artificial data. Moreover, the introduced quantification technique deals with overlapping metabolite peaks, a considerably difficult situation occurring under real conditions. Appropriate experiments on artificial MRS data have proved the efficiency of the introduced methodology, establishing it as a generic metabolite quantification procedure.
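
    As a sketch of quantification-as-optimization, the example below fits a single Lorentzian-damped FID component by evolutionary search, using SciPy's differential evolution as a stand-in for the paper's genetic algorithm; the signal, model and bounds are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    t = np.arange(512) * 1e-3          # s, toy acquisition grid

    def fid(p):
        """Single exponentially damped cosine: amplitude, frequency, damping."""
        a, f, d = p[0], p[1], p[2]
        return a * np.exp(-d * t) * np.cos(2 * np.pi * f * t)

    # Synthetic "measured" signal with known parameters plus noise
    target = fid([1.0, 60.0, 8.0]) \
        + 0.05 * np.random.default_rng(3).standard_normal(t.size)

    loss = lambda p: np.sum((fid(p) - target)**2)
    res = differential_evolution(loss, bounds=[(0, 2), (40, 80), (1, 20)], seed=0)
    print(res.x)   # recovered (amplitude, frequency in Hz, damping)
    ```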

  12. In vivo MRS metabolite quantification using genetic optimization

    International Nuclear Information System (INIS)

    Papakostas, G A; Mertzios, B G; Karras, D A; Van Ormondt, D; Graveron-Demilly, D

    2011-01-01

    The in vivo quantification of metabolite concentrations, revealed in magnetic resonance spectroscopy (MRS) spectra, constitutes the main subject under investigation in this work. Significant contributions based on artificial intelligence tools, such as neural networks (NNs), have been presented lately with good results, but show several drawbacks regarding their quantification accuracy under difficult conditions. A general framework that treats the quantification procedure as an optimization problem, solved using a genetic algorithm (GA), is proposed in this paper. Two different lineshape models are examined, while two GA configurations are applied on artificial data. Moreover, the introduced quantification technique deals with overlapping metabolite peaks, a considerably difficult situation occurring under real conditions. Appropriate experiments on artificial MRS data have proved the efficiency of the introduced methodology, establishing it as a generic metabolite quantification procedure

  13. Accurate episomal HIV 2-LTR circles quantification using optimized DNA isolation and droplet digital PCR.

    Science.gov (United States)

    Malatinkova, Eva; Kiselinova, Maja; Bonczkowski, Pawel; Trypsteen, Wim; Messiaen, Peter; Vermeire, Jolien; Verhasselt, Bruno; Vervisch, Karen; Vandekerckhove, Linos; De Spiegelaere, Ward

    2014-01-01

    In HIV-infected patients on combination antiretroviral therapy (cART), the detection of episomal HIV 2-LTR circles is a potential marker for ongoing viral replication. Quantification of 2-LTR circles is based on quantitative PCR or, more recently, on digital PCR assessment, but is hampered by their low abundance. Sample pre-PCR processing is a critical step for 2-LTR circle quantification, which has not yet been sufficiently evaluated in patient-derived samples. We compared two sample processing procedures to more accurately quantify 2-LTR circles using droplet digital PCR (ddPCR). Episomal HIV 2-LTR circles were either isolated by genomic DNA isolation or by a modified plasmid DNA isolation, to separate the small episomal circular DNA from chromosomal DNA. This was performed in a dilution series of HIV-infected cells and HIV-1 infected patient-derived samples (n=59). Samples for the plasmid DNA isolation method were spiked with an internal control plasmid. Genomic DNA isolation enables robust 2-LTR circle quantification. However, in the lower ranges of detection, PCR inhibition caused by the high genomic DNA load substantially limits the amount of sample input, and this impacts sensitivity and accuracy. Moreover, total genomic DNA isolation resulted in a lower recovery of 2-LTR templates per isolate, further reducing its sensitivity. The modified plasmid DNA isolation with a spiked reference for normalization was more accurate in these low ranges compared to genomic DNA isolation. A linear correlation of both methods was observed in the dilution series (R² = 0.974) and in the patient-derived samples with 2-LTR numbers above 10 copies per million peripheral blood mononuclear cells (PBMCs) (R² = 0.671). Furthermore, Bland-Altman analysis revealed an average agreement between the methods within the 27 samples in which 2-LTR circles were detectable with both methods (bias: 0.3875±1.2657 log10). 2-LTR circle quantification in HIV-infected patients proved to be more

  14. Evaluation of two population-based input functions for quantitative neurological FDG PET studies

    International Nuclear Information System (INIS)

    Eberl, S.; Anayat, A.R.; Fulton, R.R.; Hooper, P.K.; Fulham, M.J.

    1997-01-01

    The conventional measurement of the regional cerebral metabolic rate of glucose (rCMRGlc) with fluorodexoyglucose (FDG) and positron emission tomography (PET) requires arterial or arterialised-venous (a-v) blood sampling at frequent intervals to obtain the plasma input function (IF). We evaluated the accuracy of rCMRGlc measurements using population-based IFs that were calibrated with two a-v blood samples. Population-based IFs were derived from: (1) the average of a-v IFs from 26 patients (Standard IF) and (2) a published model of FDG plasma concentration (Feng IF). Values for rCMRGlc calculated from the population-based IFs were compared with values obtained with IFs derived from frequent a-v blood sampling in 20 non-diabetic and six diabetic patients. Values for rCMRGlc calculated with the different IFs were highly correlated for both patient groups (r≥0.992) and root mean square residuals about the regression line were less than 0.24 mg/min/100 g. The Feng IF tended to underestimate high rCMRGlc. Both population-based IFs simplify the measurement of rCMRGlc with minimal loss in accuracy and require only two a-v blood samples for calibration. The reduced blood sampling requirements markedly reduce radiation exposure to the blood sampler. (orig.)
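
    A minimal sketch of the two-sample calibration described above: scale a population-based input function template so that it best matches two measured a-v samples in a least-squares sense. The template shape and sample values are synthetic.

    ```python
    import numpy as np

    t = np.linspace(0.0, 60.0, 361)                  # min
    template = t * np.exp(-t / 4.0) + 0.05           # arbitrary-units population IF

    t_samples = np.array([20.0, 45.0])               # min, two a-v sample times
    measured = np.array([0.62, 0.33])                # kBq/ml, hypothetical values

    # Least-squares scale factor so the template passes near both samples
    template_at = np.interp(t_samples, t, template)
    scale = (template_at @ measured) / (template_at @ template_at)
    calibrated_if = scale * template                 # calibrated IF in kBq/ml
    print("scale factor:", scale)
    ```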

  15. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity.

    Science.gov (United States)

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2012-12-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.
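
    For concreteness, a minimal noisy leaky integrate-and-fire (NLIF) neuron of the kind used here to model LGN spike trains; the parameters are illustrative, not those of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    dt, T = 1e-4, 1.0                   # time step and duration (s)
    tau = 20e-3                         # membrane time constant (s)
    v_th, v_reset = 1.0, 0.0            # threshold and reset (dimensionless)
    mu, sigma = 1.2, 0.5                # mean drive and noise amplitude

    v, spikes = 0.0, []
    for i in range(int(T / dt)):
        # Euler-Maruyama step of the leaky membrane equation with white noise
        v += dt / tau * (mu - v) + sigma * np.sqrt(dt / tau) * rng.standard_normal()
        if v >= v_th:                   # threshold crossing: emit spike, reset
            spikes.append(i * dt)
            v = v_reset
    print("firing rate ~", len(spikes) / T, "Hz")
    ```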

  16. Solar Wind Energy Input during Prolonged, Intense Northward Interplanetary Magnetic Fields: A New Coupling Function

    Science.gov (United States)

    Du, A. M.; Tsurutani, B. T.; Sun, W.

    2012-04-01

    Sudden energy release (ER) events in the midnight sector at auroral zone latitudes during intense (B > 10 nT), long-duration (T > 3 hr), northward (Bz > 0 nT = N) IMF magnetic clouds (MCs) during solar cycle 23 (SC23) have been examined in detail. The MCs with northward-then-southward (NS) IMFs were analyzed separately from MCs with southward-then-northward (SN) configurations. It is found that there is a lack of substorms during the N field intervals of NS clouds. In sharp contrast, ER events do occur during the N field portions of SN MCs. From the above two results it is reasonable to conclude that the latter ER events represent residual energy remaining from the preceding S portions of the SN MCs. We derive a new solar wind-magnetosphere coupling function during northward IMFs: E_NIMF = α N^(-1/12) V^(7/3) B^(1/2) + β V |Dst_min|. The first term on the right-hand side of the equation represents the energy input via "viscous interaction", and the second term indicates the residual energy stored in the magnetotail. It is empirically found that the magnetosphere/magnetotail can store energy for a maximum of ~4 hrs before it has dissipated away. This is a defining concept for ER/substorm energy storage. Our scenario indicates that the rate of solar wind energy injection into the magnetosphere/magnetotail determines the form of energy release into the magnetosphere/ionosphere. This may be more important than the dissipation mechanism itself (in understanding the form of the release). The concept of short-term energy storage is applied to the solar case. It is argued that it may be necessary to identify the rate of energy input into solar magnetic loop systems to be able to predict the occurrence of solar flares.
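
    Reading the typeset-damaged formula as E_NIMF = α N^(-1/12) V^(7/3) B^(1/2) + β V |Dst_min| (the reconstruction used above), a numeric evaluation looks like the following sketch; α, β and the solar wind values are hypothetical, as the paper's calibration constants are not given in the abstract.

    ```python
    # Evaluate the two terms of the reconstructed coupling function:
    # a "viscous interaction" input term and a residual stored-energy term.
    alpha, beta = 1.0e-6, 2.0e-4     # hypothetical calibration constants
    n, V, B = 5.0, 450.0, 12.0       # density (cm^-3), speed (km/s), field (nT)
    dst_min = -80.0                  # nT, minimum Dst of the preceding interval

    E_viscous = alpha * n**(-1/12) * V**(7/3) * B**0.5
    E_residual = beta * V * abs(dst_min)
    print(E_viscous, E_residual)     # schematic units only
    ```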

  17. Preliminary evaluation of MRI-derived input function for quantitative measurement of glucose metabolism in an integrated PET-MRI

    International Nuclear Information System (INIS)

    Anazodo, Udunna; Kewin, Matthew; Finger, Elizabeth; Thiessen, Jonathan; Hadway, Jennifer; Butler, John; Pavlosky, William; Prato, Frank; Thompson, Terry; St Lawrence, Keith

    2015-01-01

    PET semi-quantitative methods such as relative uptake value can be robust but offer no biological information and do not account for intra-subject variability in tracer administration or clearance. Simultaneous multimodal measurements that combine PET and MRI not only permit crucial multiparametric measurements but also provide a means of applying tracer kinetic modelling without the need for serial arterial blood sampling. In this study we adapted an image-derived input function (IDIF) method to improve characterization of glucose metabolism in an ongoing dementia study. Here we present preliminary results in a small group of frontotemporal dementia patients and controls. The IDIF was obtained directly from dynamic PET data, guided by regions of interest drawn on carotid vessels on high resolution T1-weighted MR images. The IDIF was corrected for contamination by non-arterial voxels. A validation of the method was performed in a porcine model in a PET-CT scanner, comparing the IDIF to direct arterial blood samples. The metabolic rate of glucose (CMRglc) was measured voxel-by-voxel in gray matter, producing maps that were compared between groups. Net influx rate (Ki) and global mean CMRglc are reported. A good correlation (r = 0.9, p<0.0001) was found between the corrected IDIF and the input function measured from direct arterial blood sampling in the validation study. In 3 FTD patients and 3 controls, a trend towards hypometabolism was found in the frontal, temporal and parietal lobes, similar to significant differences previously reported by other groups. The global mean CMRglc and Ki observed in control subjects are in line with previous reports. In general, kinetic modelling of PET-FDG using an MR-IDIF can improve characterization of glucose metabolism in dementia. This method is feasible in multimodal studies that aim to combine PET molecular imaging with MRI, as dynamic PET can be acquired along with multiple MRI measurements.
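
    The net influx rate Ki mentioned above is commonly obtained from a Patlak analysis; the sketch below shows that step on synthetic curves. This is a generic illustration, not the study's pipeline, and all curves and constants are made up.

    ```python
    import numpy as np

    # Patlak: at late times, Ct/Cp vs integral(Cp)/Cp is linear with slope Ki.
    t = np.linspace(0.01, 60.0, 240)                  # min
    Cp = 10.0 * np.exp(-t / 8.0) + 0.5                # toy plasma input function
    int_Cp = np.cumsum(Cp) * (t[1] - t[0])            # running integral of Cp

    Ki_true, V0 = 0.03, 0.2                           # hypothetical tissue values
    Ct = Ki_true * int_Cp + V0 * Cp                   # irreversible-uptake model

    late = t > 20.0                                   # quasi-steady-state window
    slope, intercept = np.polyfit(int_Cp[late] / Cp[late], Ct[late] / Cp[late], 1)
    print("Ki ~", slope)                              # recovers ~0.03 /min
    ```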

  18. Preliminary evaluation of MRI-derived input function for quantitative measurement of glucose metabolism in an integrated PET-MRI

    Energy Technology Data Exchange (ETDEWEB)

    Anazodo, Udunna; Kewin, Matthew [Lawson Health Research Institute, Department of Medical Biophysics, Western University, London, Ontario (Canada); Finger, Elizabeth [Department of Clinical Neurological Sciences, Western University, London, Ontario (Canada); Thiessen, Jonathan; Hadway, Jennifer; Butler, John [Lawson Health Research Institute, Department of Medical Biophysics, Western University, London, Ontario (Canada); Pavlosky, William [Diagnostic Imaging, St Joseph's Health Care, London, Ontario (Canada); Prato, Frank; Thompson, Terry; St Lawrence, Keith [Lawson Health Research Institute, Department of Medical Biophysics, Western University, London, Ontario (Canada)

    2015-05-18

    PET semi-quantitative methods such as relative uptake value can be robust but offer no biological information and do not account for intra-subject variability in tracer administration or clearance. Simultaneous multimodal measurements that combine PET and MRI not only permit crucial multiparametric measurements but also provide a means of applying tracer kinetic modelling without the need for serial arterial blood sampling. In this study we adapted an image-derived input function (IDIF) method to improve characterization of glucose metabolism in an ongoing dementia study. Here we present preliminary results in a small group of frontotemporal dementia patients and controls. The IDIF was obtained directly from dynamic PET data, guided by regions of interest drawn on carotid vessels on high resolution T1-weighted MR images. The IDIF was corrected for contamination by non-arterial voxels. A validation of the method was performed in a porcine model in a PET-CT scanner, comparing the IDIF to direct arterial blood samples. The metabolic rate of glucose (CMRglc) was measured voxel-by-voxel in gray matter, producing maps that were compared between groups. Net influx rate (Ki) and global mean CMRglc are reported. A good correlation (r = 0.9, p<0.0001) was found between the corrected IDIF and the input function measured from direct arterial blood sampling in the validation study. In 3 FTD patients and 3 controls, a trend towards hypometabolism was found in the frontal, temporal and parietal lobes, similar to significant differences previously reported by other groups. The global mean CMRglc and Ki observed in control subjects are in line with previous reports. In general, kinetic modelling of PET-FDG using an MR-IDIF can improve characterization of glucose metabolism in dementia. This method is feasible in multimodal studies that aim to combine PET molecular imaging with MRI, as dynamic PET can be acquired along with multiple MRI measurements.

  19. Quantification of complex modular architecture in plants.

    Science.gov (United States)

    Reeb, Catherine; Kaandorp, Jaap; Jansson, Fredrik; Puillandre, Nicolas; Dubuisson, Jean-Yves; Cornette, Raphaël; Jabbour, Florian; Coudert, Yoan; Patiño, Jairo; Flot, Jean-François; Vanderpoorten, Alain

    2018-04-01

    Morphometrics, the assignment of quantities to biological shapes, is a powerful tool to address taxonomic, evolutionary, functional and developmental questions. We propose a novel method for shape quantification of complex modular architecture in thalloid plants, whose extremely reduced morphologies, combined with the lack of a formal framework for thallus description, have long rendered taxonomic and evolutionary studies extremely challenging. Using graph theory, thalli are described as hierarchical series of nodes and edges, allowing for accurate, homologous and repeatable measurements of widths, lengths and angles. The computer program MorphoSnake was developed to extract the skeleton and contours of a thallus and automatically acquire, at each level of organization, width, length, angle and sinuosity measurements. Through the quantification of leaf architecture in Hymenophyllum ferns (Polypodiopsida) and a fully worked example of integrative taxonomy in the taxonomically challenging thalloid liverwort genus Riccardia, we show that MorphoSnake is applicable to all ramified plants. This new possibility of acquiring large numbers of quantitative traits in plants with complex modular architectures opens new perspectives of applications, from the development of rapid species identification tools to evolutionary analyses of adaptive plasticity. © 2018 The Authors. New Phytologist © 2018 New Phytologist Trust.

  20. Synchronization properties of coupled chaotic neurons: The role of random shared input

    International Nuclear Information System (INIS)

    Kumar, Rupesh; Bilal, Shakir; Ramaswamy, Ram

    2016-01-01

    Spike-time correlations of neighbouring neurons depend on their intrinsic firing properties as well as on the inputs they share. Studies have shown that periodically firing neurons, when subjected to random shared input, exhibit asynchronicity. Here, we study the effect of random shared input on the synchronization of weakly coupled chaotic neurons. The cases of so-called electrical and chemical coupling are both considered, and we observe a wide range of synchronization behaviour. When subjected to identical shared random input, there is a decrease in the threshold coupling strength needed for chaotic neurons to synchronize in-phase. The system also supports lag–synchronous states, and for these, we find that shared input can cause desynchronization. We carry out a master stability function analysis for a network of such neurons and show agreement with the numerical simulations. The contrasting role of shared random input for complete and lag synchronized neurons is useful in understanding spike-time correlations observed in many areas of the brain.

  1. Synchronization properties of coupled chaotic neurons: The role of random shared input

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Rupesh [School of Computational and Integrative Sciences, Jawaharlal Nehru University, New Delhi 110067 (India); Bilal, Shakir [Department of Physics and Astrophysics, University of Delhi, Delhi 110 007 (India); Ramaswamy, Ram [School of Computational and Integrative Sciences, Jawaharlal Nehru University, New Delhi 110067 (India); School of Physical Sciences, Jawaharlal Nehru University, New Delhi 110067 (India)

    2016-06-15

    Spike-time correlations of neighbouring neurons depend on their intrinsic firing properties as well as on the inputs they share. Studies have shown that periodically firing neurons, when subjected to random shared input, exhibit asynchronicity. Here, we study the effect of random shared input on the synchronization of weakly coupled chaotic neurons. The cases of so-called electrical and chemical coupling are both considered, and we observe a wide range of synchronization behaviour. When subjected to identical shared random input, there is a decrease in the threshold coupling strength needed for chaotic neurons to synchronize in-phase. The system also supports lag–synchronous states, and for these, we find that shared input can cause desynchronization. We carry out a master stability function analysis for a network of such neurons and show agreement with the numerical simulations. The contrasting role of shared random input for complete and lag synchronized neurons is useful in understanding spike-time correlations observed in many areas of the brain.

  2. Impact of acid atmospheric deposition on soils : quantification of chemical and hydrologic processes

    NARCIS (Netherlands)

    Grinsven, van J.J.M.

    1988-01-01

    Atmospheric deposition of SOx, NOx and NHx will cause major changes in the chemical composition of solutions in acid soils, which may affect the biological functions of the soil. This thesis deals with quantification of soil acidification by means of chemical

  3. Quantification of arbuscular mycorrhizal fungal DNA in roots: how important is material preservation?

    Science.gov (United States)

    Janoušková, Martina; Püschel, David; Hujslová, Martina; Slavíková, Renata; Jansa, Jan

    2015-04-01

    Monitoring populations of arbuscular mycorrhizal fungi (AMF) in roots is a pre-requisite for improving our understanding of AMF ecology and functioning of the symbiosis in natural conditions. Among other approaches, quantification of fungal DNA in plant tissues by quantitative real-time PCR is one of the advanced techniques with a great potential to process large numbers of samples and to deliver truly quantitative information. Its application potential would greatly increase if the samples could be preserved by drying, but little is currently known about the feasibility and reliability of fungal DNA quantification from dry plant material. We addressed this question by comparing quantification results based on dry root material to those obtained from deep-frozen roots of Medicago truncatula colonized with Rhizophagus sp. The fungal DNA was well conserved in the dry root samples with overall fungal DNA levels in the extracts comparable with those determined in extracts of frozen roots. There was, however, no correlation between the quantitative data sets obtained from the two types of material, and data from dry roots were more variable. Based on these results, we recommend dry material for qualitative screenings but advocate using frozen root materials if precise quantification of fungal DNA is required.

  4. Substitution elasticities between GHG-polluting and nonpolluting inputs in agricultural production: A meta-regression

    International Nuclear Information System (INIS)

    Liu, Boying; Richard Shumway, C.

    2016-01-01

    This paper reports meta-regressions of substitution elasticities between greenhouse gas (GHG) polluting and nonpolluting inputs in agricultural production, which is the main feedstock source for biofuel in the U.S. We treat energy, fertilizer, and manure collectively as the “polluting input” and labor, land, and capital as nonpolluting inputs. We estimate meta-regressions for samples of Morishima substitution elasticities for labor, land, and capital vs. the polluting input. Much of the heterogeneity of Morishima elasticities can be explained by type of primal or dual function, functional form, type and observational level of data, input categories, number of outputs, type of output, time period, and country categories. Each estimated long-run elasticity for the reference case, which is most relevant for assessing GHG emissions through life-cycle analysis, is greater than 1.0 and significantly different from zero. Most predicted long-run elasticities remain significantly different from zero at the data means. These findings imply that life-cycle analysis based on fixed proportion production functions could provide grossly inaccurate measures of GHG of biofuel. - Highlights: • This paper reports meta-regressions of substitution elasticities between greenhouse-gas (GHG) polluting and nonpolluting inputs in agricultural production, which is the main feedstock source for biofuel in the U.S. • We estimate meta-regressions for samples of Morishima substitution elasticities for labor, land, and capital vs. the polluting input based on 65 primary studies. • We found that each estimated long-run elasticity for the reference case, which is most relevant for assessing GHG emissions through life-cycle analysis, is greater than 1.0 and significantly different from zero. Most predicted long-run elasticities remain significantly different from zero at the data means. • These findings imply that life-cycle analysis based on fixed proportion production functions could

  5. Quantum-optical input-output relations for dispersive and lossy multilayer dielectric plates

    International Nuclear Information System (INIS)

    Gruner, T.; Welsch, D.

    1996-01-01

    Using the Green-function approach to the problem of quantization of the phenomenological Maxwell theory, the propagation of quantized radiation through dispersive and absorptive multilayer dielectric plates is studied. Input-output relations are derived, with special emphasis on the determination of the quantum noise generators associated with the absorption of radiation inside the dielectric matter. The input-output relations are used to express arbitrary correlation functions of the outgoing field in terms of correlation functions of the incoming field and those of the noise generators. To illustrate the theory, photons at dielectric tunneling barriers are considered. It is shown that inclusion in the calculations of losses in the photonic band gaps may substantially change the barrier traversal times. copyright 1996 The American Physical Society
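
    Schematically, input-output relations of this kind relate the outgoing field to the incoming field plus an operator noise term (a generic sketch of the structure; the paper's exact Green-function expressions are more detailed):

```latex
% Schematic input-output relation at a lossy, dispersive plate:
\[
  \hat{a}_{\mathrm{out}}(\omega)
    \;=\; T(\omega)\,\hat{a}_{\mathrm{in}}(\omega)
    \;+\; R(\omega)\,\hat{b}_{\mathrm{in}}(\omega)
    \;+\; \hat{F}(\omega)
\]
% For an absorbing medium |T|^2 + |R|^2 <= 1, and the quantum noise
% operator \hat{F} is exactly what restores the canonical bosonic
% commutators of the output field; its correlations are fixed by the
% absorption, i.e. by 1 - |T|^2 - |R|^2.
```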

  6. Whole-Brain Monosynaptic Afferent Inputs to Basal Forebrain Cholinergic System

    Directory of Open Access Journals (Sweden)

    Rongfeng Hu

    2016-10-01

    Full Text Available The basal forebrain cholinergic system (BFCS) robustly modulates many important behaviors, such as arousal, attention, learning and memory, through heavy projections to the cortex and hippocampus. However, the presynaptic partners governing BFCS activity still remain poorly understood. Here, we utilized a recently developed rabies-virus-based cell-type-specific retrograde tracing system to map the whole-brain afferent inputs of the BFCS. We found that the BFCS receives inputs from multiple cortical areas, such as orbital frontal cortex, motor cortex, and insular cortex, and that the BFCS also receives dense inputs from several subcortical nuclei related to motivation and stress, including the lateral septum (LS), central amygdala (CeA), paraventricular nucleus of the hypothalamus (PVH), dorsal raphe nucleus (DRN) and parabrachial nucleus (PBN). Interestingly, we found that the BFCS receives inputs from the olfactory areas and the entorhinal-hippocampal system. These results greatly expand our knowledge about the connectivity of the mouse BFCS and provide important preliminary indications for future exploration of circuit function.

  7. Quantitative assessment of multiple sclerosis lesion load using CAD and expert input

    Science.gov (United States)

    Gertych, Arkadiusz; Wong, Alexis; Sangnil, Alan; Liu, Brent J.

    2008-03-01

    Multiple sclerosis (MS) is a frequently encountered neurological disease with a progressive but variable course affecting the central nervous system. Outline-based lesion quantification in the assessment of lesion load (LL) performed on magnetic resonance (MR) images is clinically useful and provides information about development and change reflecting overall disease burden. Methods of LL assessment that rely on human input are tedious, have higher intra- and inter-observer variability, and are more time-consuming than computerized automatic (CAD) techniques. At present, methods based on human lesion identification preceded by non-interactive outlining by CAD appear to be the best LL quantification strategies. We have developed a CAD system that automatically quantifies MS lesions, displays a 3-D lesion map, and appends radiological findings to the original images according to the current DICOM standard. The CAD system is also capable of displaying and tracking changes and of making comparisons between a patient's separate MRI studies to determine disease progression. The findings are exported to a separate imaging tool for review and final approval by an expert. Capturing and standardized archiving of manual contours is also implemented. Similarity coefficients calculated from the LL quantities in the collected exams show a good correlation between CAD-derived results and those incorporated as the expert's reading. Combining the CAD approach with expert interaction may improve the diagnostic work-up of MS patients through better reproducibility of LL assessment and reduced reading time for single MR or comparative exams. Inclusion of CAD-generated outlines as DICOM-compliant overlays in the image data can serve as a better reference for MS progression tracking.
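
    The similarity coefficients mentioned above measure agreement between CAD-derived and expert outlines; the Dice coefficient is one common choice. A minimal sketch (illustrative only, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity between two binary lesion masks (1 = lesion)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Hypothetical example: CAD outline vs. expert outline on one slice.
cad = np.zeros((128, 128), dtype=bool); cad[40:60, 40:60] = True
expert = np.zeros((128, 128), dtype=bool); expert[42:62, 41:61] = True
print(f"Dice = {dice_coefficient(cad, expert):.3f}")
```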

  8. Decision peptide-driven: a free software tool for accurate protein quantification using gel electrophoresis and matrix assisted laser desorption ionization time of flight mass spectrometry.

    Science.gov (United States)

    Santos, Hugo M; Reboiro-Jato, Miguel; Glez-Peña, Daniel; Nunes-Miranda, J D; Fdez-Riverola, Florentino; Carvallo, R; Capelo, J L

    2010-09-15

    The decision peptide-driven tool implements a software application for assisting the user in a protocol for accurate protein quantification based on the following steps: (1) protein separation through gel electrophoresis; (2) in-gel protein digestion; (3) direct and inverse (18)O-labeling and (4) matrix assisted laser desorption ionization time of flight mass spectrometry (MALDI) analysis. The DPD software compares the MALDI results of the direct and inverse (18)O-labeling experiments and quickly identifies those peptides with parallel losses in different sets of a typical proteomic workflow. Those peptides are used for subsequent accurate protein quantification. The interpretation of the MALDI data from direct and inverse labeling experiments is time-consuming, requiring a significant amount of time to do all comparisons manually. The DPD software shortens and simplifies the search for the peptides that must be used for quantification from a week to just minutes. To do so, it takes several MALDI spectra as input and aids the researcher in an automatic mode (i) to compare data from direct and inverse (18)O-labeling experiments, calculating the corresponding ratios to determine those peptides with parallel losses throughout different sets of experiments; and (ii) to use those peptides as internal standards for subsequent accurate protein quantification using (18)O-labeling. In this work the DPD software is presented and explained with the quantification of the protein carbonic anhydrase. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  9. Estimating the arterial input function from dynamic contrast-enhanced MRI data with compensation for flow enhancement (II): Applications in spine diagnostics and assessment of Crohn's disease

    NARCIS (Netherlands)

    van Schie, Jeroen J. N.; Lavini, Cristina; van Vliet, Lucas J.; Kramer, Gem; Pieters-van den Bos, Indra; Marcus, J. T.; Stoker, Jaap; Vos, Frans M.

    2017-01-01

    Pharmacokinetic (PK) models can describe microvascular density and integrity. An essential component of PK models is the arterial input function (AIF) representing the time-dependent concentration of contrast agent (CA) in the blood plasma supplied to a tissue. To evaluate a novel method for

  10. Towards an uncertainty quantification methodology with CASMO-5

    International Nuclear Information System (INIS)

    Wieselquist, W.; Vasiliev, A.; Ferroukhi, H.

    2011-01-01

    We present the development of an uncertainty quantification (UQ) methodology for the CASMO-5 lattice physics code, used extensively at the Paul Scherrer Institut for standalone neutronics calculations, as well as for the generation of nuclear fuel segment libraries for the downstream core simulator, SIMULATE-3. We focus here on the propagation of nuclear data uncertainties and describe the framework required for 'black box' UQ--in this case minor modifications of the code are necessary to allow perturbation of the CASMO-5 nuclear data library. We then implement a basic first-order UQ method, direct perturbation, which directly produces sensitivity coefficients and, when folded with the input nuclear data variance-covariance matrix (VCM), yields output uncertainties in the form of an output VCM. We discuss the implementation, including how to map VCMs of a different group structure to the code library group structure (in our case the ENDF/B-VII-based 586-group library in CASMO-5), present some results for pin cell calculations, and conclude with future work. (author)
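
    A minimal sketch of the first-order direct perturbation approach described above, with a toy two-parameter model standing in for CASMO-5 (all names and numbers are illustrative assumptions):

```python
import numpy as np

def direct_perturbation_vcm(model, x0, vcm_in, rel_step=0.01):
    """Perturb each input once, build the sensitivity matrix S, and fold
    it with the input variance-covariance matrix: V_out = S V_in S^T."""
    x0 = np.asarray(x0, dtype=float)
    y0 = np.atleast_1d(model(x0))
    S = np.zeros((y0.size, x0.size))
    for j in range(x0.size):
        dx = rel_step * (x0[j] if x0[j] != 0 else 1.0)
        x = x0.copy()
        x[j] += dx
        S[:, j] = (np.atleast_1d(model(x)) - y0) / dx  # sensitivity coefficients
    return S @ vcm_in @ S.T

# Toy response standing in for a lattice-physics output (e.g. a k-eff-like ratio).
model = lambda x: np.array([x[0] / (x[0] + x[1])])
vcm_in = np.diag([1e-4, 4e-4])  # assumed input variances, no correlation
print(direct_perturbation_vcm(model, [1.0, 0.5], vcm_in))
```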

  11. Numerical Continuation Methods for Intrusive Uncertainty Quantification Studies

    Energy Technology Data Exchange (ETDEWEB)

    Safta, Cosmin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Najm, Habib N. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Phipps, Eric Todd [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-09-01

    Rigorous modeling of engineering systems relies on efficient propagation of uncertainty from input parameters to model outputs. In recent years, there has been substantial development of probabilistic polynomial chaos (PC) Uncertainty Quantification (UQ) methods, enabling studies in expensive computational models. One approach, termed "intrusive", involving reformulation of the governing equations, has been found to have superior computational performance compared to non-intrusive sampling-based methods in relevant large-scale problems, particularly in the context of emerging architectures. However, the utility of intrusive methods has been severely limited due to detrimental numerical instabilities associated with strong nonlinear physics. Previous methods for stabilizing these constructions tend to add unacceptably high computational costs, particularly in problems with many uncertain parameters. In order to address these challenges, we propose to adapt and improve numerical continuation methods for the robust time integration of intrusive PC system dynamics. We propose adaptive methods, starting with a small uncertainty for which the model has stable behavior and gradually moving to larger uncertainty where the instabilities are rampant, in a manner that provides a suitable solution.

  12. A representation result for hysteresis operators with vector valued inputs and its application to models for magnetic materials

    Energy Technology Data Exchange (ETDEWEB)

    Klein, Olaf, E-mail: Olaf.Klein@wias-berlin.de

    2014-02-15

    In this work, hysteresis operators are considered that map continuous vector-valued input functions which are piecewise monotaffine, i.e. piecewise the composition of a monotone with an affine function, to vector-valued output functions. It is shown that such an operator can be generated by a uniquely defined function on the set of convexity-triple-free strings. A formulation of a congruence property for periodic inputs is presented and reformulated as a condition for the generating string function.

  13. Quantification of regional cerebral blood flow (rCBF) measurement with one point sampling by sup 123 I-IMP SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Munaka, Masahiro [University of Occupational and Environmental Health, Kitakyushu (Japan); Iida, Hidehiro; Murakami, Matsutaro

    1992-02-01

    A practical method for quantifying regional cerebral blood flow (rCBF) measured by {sup 123}I-IMP SPECT was designed. A standard input function was constructed, and the sampling time used to calibrate this standard input function from a single blood sample was optimized. The average standard input function was obtained from continuous arterial sampling of 12 healthy adults. The optimal sampling time was the one minimizing the difference between the integral of the standard input function calibrated by one-point sampling and the integral of the input function obtained by continuous arterial sampling. This time was 8 minutes after intravenous injection of {sup 123}I-IMP, with an estimated error of ±4.1%. The rCBF values obtained by this method were evaluated by comparing them with the rCBF values derived from the input function with continuous arterial sampling in 2 healthy adults and a patient with cerebral infarction. A significant correlation (r=0.764, p<0.001) was obtained between the two. (author).
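
    The essence of the one-point calibration is to rescale a population-average (standard) input function so that it passes through the patient's single arterial sample. A minimal sketch (the curve shape and sample values below are invented for illustration; only the 8-minute sampling time comes from the abstract):

```python
import numpy as np

t = np.arange(0, 31, dtype=float)            # minutes after injection
standard_if = 100.0 * t * np.exp(-t / 4.0)   # hypothetical standard input function

def calibrate_one_point(standard_if, t, t_sample, measured):
    """Scale the standard input function through one arterial sample."""
    predicted = np.interp(t_sample, t, standard_if)
    return standard_if * (measured / predicted)

patient_if = calibrate_one_point(standard_if, t, t_sample=8.0, measured=95.0)
print(np.trapz(patient_if, t))  # integral of the input function used for rCBF
```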

  14. Total dose induced increase in input offset voltage in JFET input operational amplifiers

    International Nuclear Information System (INIS)

    Pease, R.L.; Krieg, J.; Gehlhausen, M.; Black, J.

    1999-01-01

    Four different types of commercial JFET input operational amplifiers were irradiated with ionizing radiation under a variety of test conditions. All experienced significant increases in input offset voltage (Vos). Microprobe measurement of the electrical characteristics of the de-coupled input JFETs demonstrates that the increase in Vos is a result of the mismatch of the degraded JFETs. (authors)

  15. Distributed Optimal Consensus Control for Multiagent Systems With Input Delay.

    Science.gov (United States)

    Zhang, Huaipin; Yue, Dong; Zhao, Wei; Hu, Songlin; Dou, Chunxia

    2018-06-01

    This paper addresses the problem of distributed optimal consensus control for a continuous-time heterogeneous linear multiagent system subject to time-varying input delays. First, by discretization and model transformation, the continuous-time input-delayed system is converted into a discrete-time delay-free system. Two performance index functions are defined for these two systems. It is shown that the performance index functions are equivalent and that the optimal consensus control problem of the input-delayed system can be cast into that of the delay-free system. Second, by virtue of the Hamilton-Jacobi-Bellman (HJB) equations, an optimal control policy for each agent is designed based on the delay-free system, and a novel value iteration algorithm is proposed to learn the solutions to the HJB equations online. The proposed adaptive dynamic programming algorithm is implemented on the basis of a critic-action neural network (NN) structure. Third, it is proved that the local consensus errors of the two systems and the weight estimation errors of the critic-action NNs are uniformly ultimately bounded while the approximated control policies converge to their target values. Finally, two simulation examples are presented to illustrate the effectiveness of the developed method.

  16. Quantification of Emphysema with a Three-Dimensional Chest CT Scan: Correlation with the Visual Emphysema Scoring on Chest CT, Pulmonary Function Tests and Dyspnea Severity

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hyun Jeong; Hwang, Jung Hwa [Dept. of Radiology, Soonchunhyang University Seoul Hospital, Seoul (Korea, Republic of)

    2011-09-15

    We wanted to prospectively evaluate the correlation between the quantification of emphysema using 3D CT densitometry and the visual emphysema score, pulmonary function tests (PFT) and the dyspnea score in patients with chronic obstructive pulmonary disease (COPD). Non-enhanced chest CT with 3D reconstruction was performed in 28 men with COPD (age 54-88 years). With histogram analysis, the total lung volume, mean lung density and the proportion of low attenuation lung volume below predetermined thresholds were measured. The CT parameters were compared with the visual emphysema score, the PFT and the dyspnea score. The low attenuation lung volume below -950 HU was well correlated with the DLco and FEV{sub 1}/FVC. The low attenuation lung volumes below -950 HU and -930 HU were correlated with the visual emphysema score. The low attenuation lung volume below -950 HU was correlated with the dyspnea score, although the correlations between the other CT parameters and the dyspnea score were not significant. Objective quantification of emphysema using 3D CT densitometry was correlated with the visual emphysema score. The low attenuation lung volume below -950 HU was correlated with the DLco, the FEV{sub 1}/FVC and the dyspnea score.

  17. Quantification of Emphysema with a Three-Dimensional Chest CT Scan: Correlation with the Visual Emphysema Scoring on Chest CT, Pulmonary Function Tests and Dyspnea Severity

    International Nuclear Information System (INIS)

    Park, Hyun Jeong; Hwang, Jung Hwa

    2011-01-01

    We wanted to prospectively evaluate the correlation between the quantification of emphysema using 3D CT densitometry and the visual emphysema score, pulmonary function tests (PFT) and the dyspnea score in patients with chronic obstructive pulmonary disease (COPD). Non-enhanced chest CT with 3D reconstruction was performed in 28 men with COPD (age 54-88 years). With histogram analysis, the total lung volume, mean lung density and the proportion of low attenuation lung volume below predetermined thresholds were measured. The CT parameters were compared with the visual emphysema score, the PFT and the dyspnea score. The low attenuation lung volume below -950 HU was well correlated with the DLco and FEV1/FVC. The low attenuation lung volumes below -950 HU and -930 HU were correlated with the visual emphysema score. The low attenuation lung volume below -950 HU was correlated with the dyspnea score, although the correlations between the other CT parameters and the dyspnea score were not significant. Objective quantification of emphysema using 3D CT densitometry was correlated with the visual emphysema score. The low attenuation lung volume below -950 HU was correlated with the DLco, the FEV1/FVC and the dyspnea score.
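
    The emphysema index used above is simply the fraction of lung voxels below an HU threshold. A minimal sketch on a synthetic histogram (the crude HU-range lung mask is an assumption for illustration):

```python
import numpy as np

def low_attenuation_volume_percent(hu: np.ndarray, threshold: float = -950.0) -> float:
    """Percentage of lung voxels below the HU threshold (emphysema index)."""
    lung = hu[(hu > -1024) & (hu < -250)]  # crude HU-range lung mask (assumption)
    return 100.0 * np.mean(lung < threshold)

hu = np.random.normal(-870.0, 60.0, size=100_000)  # synthetic lung HU values
print(f"LAV% below -950 HU: {low_attenuation_volume_percent(hu):.1f}")
```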

  18. Quantification of Back-End Nuclear Fuel Cycle Metrics Uncertainties Due to Cross Sections

    International Nuclear Information System (INIS)

    Tracy E. Stover Jr.

    2007-01-01

    This work examines uncertainties in the back-end fuel cycle metrics of isotopic composition, decay heat, radioactivity, and radiotoxicity. Most advanced fuel cycle scenarios, including the ones represented in this work, are limited by one or more of these metrics, so their quantification becomes of great importance for optimizing or selecting one of these scenarios. Uncertainty quantification, in this work, is performed by propagating cross-section covariance data, and later number density covariance data, through a reactor physics and depletion code sequence. Propagation of uncertainty is performed primarily via the Efficient Subspace Method (ESM). ESM decomposes the covariance data into singular pairs and perturbs the input data along independent directions of the uncertainty, and only for the most significant values of that uncertainty. Once the results of these perturbations are collected, ESM directly calculates the covariance of the observed outputs a posteriori. By exploiting the rank-deficient nature of the uncertainty data, ESM works more efficiently than traditional stochastic sampling, but is shown to produce equivalent results. ESM is beneficial for very detailed models with large amounts of input data that make stochastic sampling impractical. In this study various fuel cycle scenarios are examined. Simplified, representative models of pressurized water reactor (PWR) and boiling water reactor (BWR) fuels composed of both uranium oxide and mixed oxides are examined. These simple models are intended to give a representation of the uncertainty that can be associated with open uranium oxide fuel cycles and closed mixed oxide fuel cycles. The simplified models also serve as a demonstration that ESM and stochastic sampling produce equivalent results, because these models require minimal computer resources and have amounts of input data small enough that either method can be quickly implemented and a numerical experiment performed. The simplified
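
    The core idea of ESM, as described above, is to decompose the input covariance and run the model only along its dominant directions. A minimal first-order sketch (illustrative; the actual ESM implementation is considerably more involved):

```python
import numpy as np

def esm_output_covariance(model, x0, vcm_in, tol=1e-8):
    """Perturb only along the dominant eigen-directions of the input
    covariance and assemble the output covariance from those few runs."""
    w, U = np.linalg.eigh(vcm_in)            # vcm_in = U diag(w) U^T
    order = np.argsort(w)[::-1]
    w, U = w[order], U[:, order]
    k = int(np.sum(w > tol * w[0]))          # effective (numerical) rank
    y0 = np.atleast_1d(model(np.asarray(x0, dtype=float)))
    cols = [np.atleast_1d(model(x0 + np.sqrt(w[i]) * U[:, i])) - y0
            for i in range(k)]               # one model run per retained mode
    A = np.column_stack(cols)
    return A @ A.T                           # first-order output covariance

model = lambda x: np.array([x.sum(), x[0] - x[1]])
vcm_in = np.array([[2.0, 1.9], [1.9, 2.0]])  # nearly rank-one input covariance
print(esm_output_covariance(model, np.array([1.0, 1.0]), vcm_in))
```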

  19. Quantification of Safety-Critical Software Test Uncertainty

    International Nuclear Information System (INIS)

    Khalaquzzaman, M.; Cho, Jaehyun; Lee, Seung Jun; Jung, Wondea

    2015-01-01

    The method conservatively assumes that the failure probability of a software is 1 for untested inputs and becomes 0 after successful testing of all test cases. In reality, however, a chance of failure remains because of test uncertainty. Some studies have been carried out to identify the test attributes that affect test quality: Cao discussed the testing effort, testing coverage, and testing environment, and management of test uncertainties has also been discussed in the literature. In this study, test uncertainty has been considered in estimating the software failure probability, because the software testing process is inherently uncertain. A reliability estimate of software is very important for the probabilistic safety analysis of digital safety-critical systems of NPPs. This study focused on the estimation of the probability of a software failure that considers the uncertainty in software testing. In our study, a Bayesian belief network (BBN) has been employed as an example model for software test uncertainty quantification. Although it can be argued that direct expert elicitation of test uncertainty is much simpler than BBN estimation, the BBN approach provides more insights and a basis for uncertainty estimation

  20. A method for quantitative analysis of standard and high-throughput qPCR expression data based on input sample quantity.

    Directory of Open Access Journals (Sweden)

    Mateusz G Adamski

    Full Text Available Over the past decade rapid advances have occurred in the understanding of RNA expression and its regulation. Quantitative polymerase chain reactions (qPCR) have become the gold standard for quantifying gene expression. Microfluidic next-generation, high-throughput qPCR now permits the detection of transcript copy number in thousands of reactions simultaneously, dramatically increasing the sensitivity over standard qPCR. Here we present a gene expression analysis method applicable to both standard qPCR and high-throughput qPCR. This technique is adjusted to the input sample quantity (e.g., the number of cells) and is independent of control gene expression. It is efficiency-corrected and, with the use of a universal reference sample (commercial complementary DNA (cDNA)), permits the normalization of results between different batches and between different instruments, regardless of potential differences in transcript amplification efficiency. Modifications of the input quantity method include (1) the achievement of absolute quantification and (2) a non-efficiency-corrected analysis. When compared to other commonly used algorithms, the input quantity method proved to be valid. This method is of particular value for clinical studies of whole blood and circulating leukocytes, where cell counts are readily available.
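
    The key point of the input-quantity method is normalization to the amount of input material and to a universal reference sample rather than to a control gene. A minimal efficiency-corrected sketch (hypothetical Cq values and cell counts; not the published algorithm itself):

```python
def expression_per_cell(cq, efficiency, cells, cq_ref, cells_ref):
    """Efficiency-corrected signal, normalized to cell count and to a
    universal reference cDNA sample instead of a control gene."""
    E = 1.0 + efficiency               # e.g. 0.95 amplification efficiency -> E = 1.95
    sample = (E ** -cq) / cells        # signal per input cell
    reference = (E ** -cq_ref) / cells_ref
    return sample / reference          # expression relative to the reference

# Hypothetical run: target Cq 24.1 in 10,000 cells vs. reference Cq 22.0.
print(expression_per_cell(cq=24.1, efficiency=0.95,
                          cells=1.0e4, cq_ref=22.0, cells_ref=1.0e4))
```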

  1. Plasma input function determination for PET using a commercial laboratory robot

    International Nuclear Information System (INIS)

    Alexoff, David L.; Shea, Colleen; Fowler, Joanna S.; King, Payton; Gatley, S. John; Schlyer, David J.; Wolf, Alfred P.

    1995-01-01

    A commercial laboratory robot system (Zymate PyTechnology II Laboratory Automation System) was interfaced to standard and custom laboratory equipment and programmed to perform the rapid radiochemical assays necessary for plasma input function determination in quantitative PET studies in humans and baboons. A Zymark XP robot arm was used to carry out two assays: (1) the determination of total plasma radioactivity concentrations in a series of small-volume whole blood samples and (2) the determination of unchanged (parent) radiotracer in plasma using only solid phase extraction methods. Steady-state robotic throughput for the determination of total plasma radioactivity in whole blood samples (0.350 mL) is 14.3 samples/h, which includes automated centrifugation, pipetting, weighing and radioactivity counting. Robotic throughput for the assay of parent radiotracer in plasma is 4-6 samples/h depending on the radiotracer. Percentages of total radioactivity present as parent radiotracer at 60 min postinjection were 25 ± 5.0 (N = 25), 26 ± 6.8 (N = 68), 13 ± 4.4 (N = 30), 32 ± 7.2 (N = 18) and 16 ± 4.9 (N = 20) for carbon-11 labeled benztropine, raclopride, methylphenidate, SR 46349B (trans, 4-[(3Z)3-(2-dimethylamino-ethyl) oxyimino-3 (2-fluorophenyl)propen-1-yl]phenol) and cocaine, respectively, in baboon plasma, and 84 ± 6.4 (N = 9), 18 ± 11 (N = 10), 74 ± 5.7 (N = 118) and 16 ± 3.7 (N = 18) for carbon-11 labeled benztropine, deprenyl, raclopride and methylphenidate, respectively, in human plasma. The automated system has been used for more than 4 years for all plasma analyses for 7 different C-11 labeled compounds used routinely in our laboratory. The robotic radiotracer assay runs unattended and includes automated cleanup procedures that eliminate all human contact with plasma-contaminated containers

  2. Uncertainty quantification applied to the radiological characterization of radioactive waste.

    Science.gov (United States)

    Zaffora, B; Magistris, M; Saporta, G; Chevalier, J-P

    2017-09-01

    This paper describes the process adopted at the European Organization for Nuclear Research (CERN) to quantify uncertainties affecting the characterization of very-low-level radioactive waste. Radioactive waste is a by-product of the operation of high-energy particle accelerators. Radioactive waste must be characterized to ensure its safe disposal in final repositories. Characterizing radioactive waste means establishing the list of radionuclides together with their activities. The estimated activity levels are compared to the limits set by the national authority for the waste disposal. The quantification of the uncertainty affecting the concentration of the radionuclides is therefore essential to estimate the acceptability of the waste in the final repository, but also to control the sorting, volume reduction and packaging phases of the characterization process. The characterization method consists of estimating the activity of the produced radionuclides either by experimental methods or by statistical approaches. The uncertainties are estimated using classical statistical methods and uncertainty propagation. A mixed multivariate random vector is built to generate random input parameters for the activity calculations. The random vector is a robust tool to account for the unknown radiological history of legacy waste. This analytical technique is also particularly useful to generate random chemical compositions of materials when the trace element concentrations are not available or cannot be measured. The methodology was validated using a waste population of legacy copper activated at CERN. The methodology introduced here represents a first approach for the uncertainty quantification (UQ) of the characterization process of waste produced at particle accelerators. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Exact reliability quantification of highly reliable systems with maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Bris, Radim, E-mail: radim.bris@vsb.c [VSB-Technical University Ostrava, Faculty of Electrical Engineering and Computer Science, Department of Applied Mathematics, 17. listopadu 15, 70833 Ostrava-Poruba (Czech Republic)

    2010-12-15

    When a system is composed of highly reliable elements, exact reliability quantification may be problematic because computer accuracy is limited. Inaccuracy can arise in different ways; for example, an error may be made when subtracting two numbers that are very close to each other, or when summing many numbers of very different magnitudes. The basic objective of this paper is to find a procedure that eliminates errors made by a PC when calculations close to an error limit are executed. A highly reliable system is represented by a directed acyclic graph composed of terminal nodes (i.e., highly reliable input elements), internal nodes representing subsystems, and edges that bind all of these nodes. Three admissible unavailability models of terminal nodes are introduced, including both corrective and preventive maintenance. The algorithm for exact unavailability calculation of terminal nodes builds on the merits of MATLAB, a high-performance language for technical computing. The system unavailability quantification procedure applied to a graph structure, which considers both independent and dependent (i.e., repeatedly occurring) terminal nodes, is based on a combinatorial principle. This principle requires the summation of many very different non-negative numbers, which may be a source of inaccuracy. That is why another algorithm, for the exact summation of such numbers, is designed in the paper. The summation procedure uses the benefits of a special number system with base 2{sup 32}. The computational efficiency of the new computing methodology is compared with advanced simulation software. Various calculations on systems from the references are performed to emphasize the merits of the methodology.
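
    The abstract's point about summing many numbers of very different magnitudes is easy to reproduce. A minimal illustration of the problem, and of exact alternatives available in Python (the paper's own base-2{sup 32} algorithm is not reproduced here):

```python
from fractions import Fraction
import math

# Summands of wildly different magnitude; the exact total is 2000.0.
values = [1e16, 1.0, -1e16, 1.0] * 1000

naive = sum(values)                              # suffers catastrophic cancellation
accurate = math.fsum(values)                     # error-free floating-point summation
exact = float(sum(Fraction(v) for v in values))  # exact rational arithmetic

print(naive, accurate, exact)  # naive typically differs from the exact 2000.0
```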

  4. Land-use choices follow profitability at the expense of ecological functions in Indonesian smallholder landscapes.

    Science.gov (United States)

    Clough, Yann; Krishna, Vijesh V; Corre, Marife D; Darras, Kevin; Denmead, Lisa H; Meijide, Ana; Moser, Stefan; Musshoff, Oliver; Steinebach, Stefanie; Veldkamp, Edzo; Allen, Kara; Barnes, Andrew D; Breidenbach, Natalie; Brose, Ulrich; Buchori, Damayanti; Daniel, Rolf; Finkeldey, Reiner; Harahap, Idham; Hertel, Dietrich; Holtkamp, A Mareike; Hörandl, Elvira; Irawan, Bambang; Jaya, I Nengah Surati; Jochum, Malte; Klarner, Bernhard; Knohl, Alexander; Kotowska, Martyna M; Krashevska, Valentyna; Kreft, Holger; Kurniawan, Syahrul; Leuschner, Christoph; Maraun, Mark; Melati, Dian Nuraini; Opfermann, Nicole; Pérez-Cruzado, César; Prabowo, Walesa Edho; Rembold, Katja; Rizali, Akhmad; Rubiana, Ratna; Schneider, Dominik; Tjitrosoedirdjo, Sri Sudarmiyati; Tjoa, Aiyen; Tscharntke, Teja; Scheu, Stefan

    2016-10-11

    Smallholder-dominated agricultural mosaic landscapes are highlighted as model production systems that deliver both economic and ecological goods in tropical agricultural landscapes, but trade-offs underlying current land-use dynamics are poorly known. Here, using the most comprehensive quantification of land-use change and associated bundles of ecosystem functions, services and economic benefits to date, we show that Indonesian smallholders predominantly choose farm portfolios with high economic productivity but low ecological value. The more profitable oil palm and rubber monocultures replace forests and agroforests critical for maintaining above- and below-ground ecological functions and the diversity of most taxa. Between the monocultures, the higher economic performance of oil palm over rubber comes with the reliance on fertilizer inputs and with increased nutrient leaching losses. Strategies to achieve an ecological-economic balance and a sustainable management of tropical smallholder landscapes must be prioritized to avoid further environmental degradation.

  5. Land-use choices follow profitability at the expense of ecological functions in Indonesian smallholder landscapes

    Science.gov (United States)

    Clough, Yann; Krishna, Vijesh V.; Corre, Marife D.; Darras, Kevin; Denmead, Lisa H.; Meijide, Ana; Moser, Stefan; Musshoff, Oliver; Steinebach, Stefanie; Veldkamp, Edzo; Allen, Kara; Barnes, Andrew D.; Breidenbach, Natalie; Brose, Ulrich; Buchori, Damayanti; Daniel, Rolf; Finkeldey, Reiner; Harahap, Idham; Hertel, Dietrich; Holtkamp, A. Mareike; Hörandl, Elvira; Irawan, Bambang; Jaya, I. Nengah Surati; Jochum, Malte; Klarner, Bernhard; Knohl, Alexander; Kotowska, Martyna M.; Krashevska, Valentyna; Kreft, Holger; Kurniawan, Syahrul; Leuschner, Christoph; Maraun, Mark; Melati, Dian Nuraini; Opfermann, Nicole; Pérez-Cruzado, César; Prabowo, Walesa Edho; Rembold, Katja; Rizali, Akhmad; Rubiana, Ratna; Schneider, Dominik; Tjitrosoedirdjo, Sri Sudarmiyati; Tjoa, Aiyen; Tscharntke, Teja; Scheu, Stefan

    2016-10-01

    Smallholder-dominated agricultural mosaic landscapes are highlighted as model production systems that deliver both economic and ecological goods in tropical agricultural landscapes, but trade-offs underlying current land-use dynamics are poorly known. Here, using the most comprehensive quantification of land-use change and associated bundles of ecosystem functions, services and economic benefits to date, we show that Indonesian smallholders predominantly choose farm portfolios with high economic productivity but low ecological value. The more profitable oil palm and rubber monocultures replace forests and agroforests critical for maintaining above- and below-ground ecological functions and the diversity of most taxa. Between the monocultures, the higher economic performance of oil palm over rubber comes with the reliance on fertilizer inputs and with increased nutrient leaching losses. Strategies to achieve an ecological-economic balance and a sustainable management of tropical smallholder landscapes must be prioritized to avoid further environmental degradation.

  6. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    Science.gov (United States)

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.
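
    The "model-free" analysis referred to above recovers the flow-scaled residue function by numerically deconvolving the tissue curve with the local arterial input function; truncated SVD is one common regularized way to do this. A minimal sketch on synthetic curves (illustrative only; the actual QUASAR pipeline differs):

```python
import numpy as np

def deconvolve_svd(aif, tissue, dt=1.0, threshold=0.2):
    """Solve tissue = dt * (A @ r), with A the convolution matrix of the
    AIF, by truncated SVD; the peak of r estimates perfusion F."""
    n = len(aif)
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.array([1.0 / si if si > threshold * s[0] else 0.0 for si in s])
    r = Vt.T @ (s_inv * (U.T @ tissue))  # flow-scaled residue function F*R(t)
    return r.max()

t = np.arange(60, dtype=float)
aif = np.exp(-(t - 10.0) ** 2 / 8.0)          # synthetic arterial input function
residue = 0.6 * np.exp(-t / 12.0)             # ground truth F*R(t), with F = 0.6
tissue = np.convolve(aif, residue)[:len(t)]   # noiseless tissue curve
print(deconvolve_svd(aif, tissue))            # ~0.6, up to regularization bias
```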

  7. Biological 2-Input Decoder Circuit in Human Cells

    Science.gov (United States)

    2015-01-01

    Decoders are combinational circuits that convert information from n inputs to a maximum of 2^n outputs. This operation is of major importance in computing systems yet it is vastly underexplored in synthetic biology. Here, we present a synthetic gene network architecture that operates as a biological decoder in human cells, converting 2 inputs to 4 outputs. As a proof-of-principle, we use small molecules to emulate the two inputs and fluorescent reporters as the corresponding four outputs. The experiments are performed using transient transfections in human kidney embryonic cells and the characterization by fluorescence microscopy and flow cytometry. We show a clear separation between the ON and OFF mean fluorescent intensity states. Additionally, we adopt the integrated mean fluorescence intensity for the characterization of the circuit and show that this metric is more robust to transfection conditions when compared to the mean fluorescent intensity. To conclude, we present the first implementation of a genetic decoder. This combinational system can be valuable toward engineering higher-order circuits as well as accommodate a multiplexed interface with endogenous cellular functions. PMID:24694115

  8. Biological 2-input decoder circuit in human cells.

    Science.gov (United States)

    Guinn, Michael; Bleris, Leonidas

    2014-08-15

    Decoders are combinational circuits that convert information from n inputs to a maximum of 2^n outputs. This operation is of major importance in computing systems yet it is vastly underexplored in synthetic biology. Here, we present a synthetic gene network architecture that operates as a biological decoder in human cells, converting 2 inputs to 4 outputs. As a proof-of-principle, we use small molecules to emulate the two inputs and fluorescent reporters as the corresponding four outputs. The experiments are performed using transient transfections in human kidney embryonic cells and the characterization by fluorescence microscopy and flow cytometry. We show a clear separation between the ON and OFF mean fluorescent intensity states. Additionally, we adopt the integrated mean fluorescence intensity for the characterization of the circuit and show that this metric is more robust to transfection conditions when compared to the mean fluorescent intensity. To conclude, we present the first implementation of a genetic decoder. This combinational system can be valuable toward engineering higher-order circuits as well as accommodate a multiplexed interface with endogenous cellular functions.
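
    The decoder logic itself is simple to state: for 2 inputs, exactly one of the 4 outputs is active for each input combination. A minimal truth-table sketch (plain Boolean logic, standing in for the small-molecule inputs and fluorescent outputs):

```python
def decoder_2to4(a: int, b: int) -> list:
    """2-to-4 line decoder: exactly one output is ON per input pattern."""
    return [int(not a and not b),  # output 0: neither input present
            int(a and not b),      # output 1: input A only
            int(not a and b),      # output 2: input B only
            int(a and b)]          # output 3: both inputs present

for a in (0, 1):
    for b in (0, 1):
        print(f"inputs ({a}, {b}) -> outputs {decoder_2to4(a, b)}")
```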

  9. Digital PCR for direct quantification of viruses without DNA extraction.

    Science.gov (United States)

    Pavšič, Jernej; Žel, Jana; Milavec, Mojca

    2016-01-01

    DNA extraction before amplification is considered an essential step for quantification of viral DNA using real-time PCR (qPCR). However, this can directly affect the final measurements due to variable DNA yields and removal of inhibitors, which leads to increased inter-laboratory variability of qPCR measurements and reduced agreement on viral loads. Digital PCR (dPCR) might be an advantageous methodology for the measurement of virus concentrations, as it does not depend on any calibration material and it has higher tolerance to inhibitors. DNA quantification without an extraction step (i.e. direct quantification) was performed here using dPCR and two different human cytomegalovirus whole-virus materials. Two dPCR platforms were used for this direct quantification of the viral DNA, and these were compared with quantification of the extracted viral DNA in terms of yield and variability. Direct quantification of both whole-virus materials present in simple matrices like cell lysate or Tris-HCl buffer provided repeatable measurements of virus concentrations that were probably in closer agreement with the actual viral load than when estimated through quantification of the extracted DNA. Direct dPCR quantification of other viruses, reference materials and clinically relevant matrices is now needed to show the full versatility of this very promising and cost-efficient development in virus quantification.

  10. Biventricular MR volumetric analysis and MR flow quantification in the ascending aorta and pulmonary trunk for quantification of valvular regurgitation

    International Nuclear Information System (INIS)

    Rominger, M.B.

    2004-01-01

    Purpose: To test the value of biventricular volumetric analysis and the combination of biventricular volumetric analysis with flow quantification in the ascending aorta (Ao) and pulmonary trunk (Pu) for quantification of regurgitation volume and cardiac function in valvular regurgitation (VR) according to location and presence of single or multivalvular disease. Materials and Methods: In 106 patients, the stroke volumes were assessed by measuring the biventricular volumes and the forward-stroke volumes in the great and small circulation by measuring the flow in the Ao and Pu. Valve regurgitation volumes and quotients were calculated for single and multivalvular disease and correlated with semiquantitative 2D-echocardiography (grade I-IV). For the assessment of the cardiac function in VR, the volumetric parameters of ejection fraction and end-diastolic (EDV) and end-systolic (ESV) volumes were determined. Results: The detection rate was 49% for left ventricular (LV) VR and 42% for right ventricular (RV) VR. Low LV VR and RV VR usually could not be detected quantitatively, with the detection rate improving with echocardiographically higher insufficiency grades. Quantitative MRI could detect a higher grade solitary aortic valve insufficiency (≥2) in 11 of 12 patients and higher grade mitral valve insufficiency in 4 of 10 patients. A significant increase in RV and LV ventricular EDV and ESV was seen more often with increased MR regurgitation volumes. Aortic stenosis did not interfere with flow measurements in the Ao. Conclusions: Biventricular volumetry combined with flow measurements in Ao and Pu is a robust, applicable and simple method to assess higher grade regurgitation volumes and the cardiac function in single and multivalvular regurgitation at different locations. It is an important application for the diagnosis of VR by MRI

  11. Improved Method for PD-Quantification in Power Cables

    DEFF Research Database (Denmark)

    Holbøll, Joachim T.; Villefrance, Rasmus; Henriksen, Mogens

    1999-01-01

    In this paper, a method is described for improved quantification of partial discharges (PD) in power cables. The method is suitable for PD detection and location systems in the MHz range, where pulse attenuation and distortion along the cable cannot be neglected. The system transfer function was calculated and measured in order to form the basis for magnitude calculation after each measurement. --- Limitations and capabilities of the method are discussed and related to relevant field applications of high-frequency PD measurements. --- Methods for increased signal-to-noise ratio are easily implemented...

  12. Inverse Problems and Uncertainty Quantification

    KAUST Repository

    Litvinenko, Alexander

    2014-01-06

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ) - the propagation of uncertainty through a computational (forward) model - are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.

  13. Inverse Problems and Uncertainty Quantification

    KAUST Repository

    Litvinenko, Alexander; Matthies, Hermann G.

    2014-01-01

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ) - the propagation of uncertainty through a computational (forward) model - are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.

  14. Inverse problems and uncertainty quantification

    KAUST Repository

    Litvinenko, Alexander

    2013-12-18

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ)—the propagation of uncertainty through a computational (forward) model—are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.

  15. Wideband Small-Signal Input dq Admittance Modeling of Six-Pulse Diode Rectifiers

    DEFF Research Database (Denmark)

    Yue, Xiaolong; Wang, Xiongfei; Blaabjerg, Frede

    2018-01-01

    This paper studies the wideband small-signal input dq admittance of six-pulse diode rectifiers. Considering the frequency coupling introduced by ripple-frequency harmonics of the d- and q-channel switching functions, the proposed model successfully predicts the small-signal input dq admittance of six-pulse diode rectifiers in high-frequency regions that existing models fail to explain. Simulation and experimental results verify the accuracy of the proposed model.

  16. Investigation of bacterial hopanoid inputs to soils from Western Canada

    International Nuclear Information System (INIS)

    Shunthirasingham, Chubashini; Simpson, Myrna J.

    2006-01-01

    Hopanoids have been widely used as characteristic biomarkers to study inputs of bacterial biomass to sediments because they are preserved in the geologic record. A limited number of studies have been performed on hopanoid biomarkers in soils. The present study examined the distribution and potential preservation of hopanoids in soils developed under different climatic conditions and varying vegetative inputs. Solvent extraction and sequential chemical degradation methods were employed to extract both 'free' and 'bound' hopanoids from three grassland soils, a grassland-forest transition soil, and a forest soil from Western Canada. Identification and quantification of hopanoids in the soil samples were carried out by gas chromatography-mass spectrometry. Methylbishomohopanol, bishomohopanol and bishomohopanoic acid were detected in all solvent extracts. The base hydrolysis and ruthenium tetroxide extracts contained only bishomohopanoic acid, at concentration ranges of 0.8-8.8 μg/gC and 2.2-28.3 μg/gC, respectively. The acid hydrolysis procedure did not release detectable amounts of hopanoids. The solvent extraction yielded the greatest amounts of 'free' hopanoids in two of the grassland soils (Dark Brown and Black Chernozems) and in the forest soil (Gray Luvisol). In contrast, the chemical degradation methods resulted in higher amounts of 'bound' hopanoids in the third grassland soil (Brown Chernozem) and the transition soil (Dark Gray Chernozem), indicating that more hopanoids exist in the 'bound' form in these soils. Overall, the forest and the transition soils contained more hopanoids than the grassland soils. This is hypothesized to be due to greater degradation of hopanoids in the grassland soils and/or sorption to clay minerals, as compared to the forest and transition soils

  17. Input and execution

    International Nuclear Information System (INIS)

    Carr, S.; Lane, G.; Rowling, G.

    1986-11-01

    This document describes the input procedures, input data files and operating instructions for the SYVAC A/C 1.03 computer program. SYVAC A/C 1.03 simulates the groundwater mediated movement of radionuclides from underground facilities for the disposal of low and intermediate level wastes to the accessible environment, and provides an estimate of the subsequent radiological risk to man. (author)

  18. The quantification of risk and tourism

    Directory of Open Access Journals (Sweden)

    Piet Croucamp

    2014-01-01

    Full Text Available Tourism in South Africa comprises 9.5% of Gross Domestic Product (GDP), but remains an under-researched industry, especially regarding the quantification of the risks prevailing in the social, political and economic environment in which the industry operates. Risk prediction and extrapolation forecasting are conducted largely in the context of a qualitative methodology. This article reflects on the quantification of social constructs as variables of risk in the tourism industry with reference to South Africa. The theory and methodology of quantification are briefly reviewed, and the indicators of risk are conceptualized and operationalized. The identified indicators are scaled in indices for purposes of quantification. Risk assessments and the quantification of constructs rely heavily on the experience - often personal - of the researcher, and this scholarly endeavour is therefore not inclusive of all possible identified indicators of risk. It is accepted that tourism in South Africa is an industry comprising a large diversity of sectors, each with a different set of risk indicators and risk profiles. The emphasis of this article is thus on the methodology to be applied to a risk profile. A secondary endeavour is to provide clarity about the conceptual and operational confines of risk in general, as well as how quantified risk relates to the tourism industry. The indices provided include both domestic and international risk indicators. The motivation for the article is to encourage a greater emphasis on quantitative research in our efforts to understand and manage a risk profile for the tourism industry.

  19. Quantification of cardiolipin by liquid chromatography-electrospray ionization mass spectrometry.

    Science.gov (United States)

    Garrett, Teresa A; Kordestani, Reza; Raetz, Christian R H

    2007-01-01

    Cardiolipin (CL), a tetra-acylated glycerophospholipid composed of two phosphatidyl moieties linked by a bridging glycerol, plays an important role in mitochondrial function in eukaryotic cells. Alterations to the content and acylation state of CL cause mitochondrial dysfunction and may be associated with pathologies such as ischemia, hypothyroidism, aging, and heart failure. The structure of CL is very complex because of microheterogeneity among its four acyl chains. Here we have developed a method for the quantification of CL molecular species by liquid chromatography-electrospray ionization mass spectrometry. We quantify the [M-2H](2-) ion of a CL of a given molecular formula and identify the CLs by their total number of carbons and unsaturations in the acyl chains. This method, developed using mouse macrophage RAW 264.7 tumor cells, is broadly applicable to other cell lines, tissues, bacteria and yeast. Furthermore, this method could be used for the quantification of lyso-CLs and bis-lyso-CLs.

  20. ICN_Atlas: Automated description and quantification of functional MRI activation patterns in the framework of intrinsic connectivity networks.

    Science.gov (United States)

    Kozák, Lajos R; van Graan, Louis André; Chaudhary, Umair J; Szabó, Ádám György; Lemieux, Louis

    2017-12-01

    Generally, the interpretation of functional MRI (fMRI) activation maps continues to rely on assessing their relationship to anatomical structures, mostly in a qualitative and often subjective way. Recently, the existence of persistent and stable brain networks of a functional nature has been revealed; in particular, these so-called intrinsic connectivity networks (ICNs) appear to link patterns of resting-state and task-related-state connectivity. These networks provide an opportunity for functionally derived description and interpretation of fMRI maps, which may be especially important in cases where the maps are predominantly task-unrelated, such as studies of spontaneous brain activity, e.g. seizure-related fMRI maps in epilepsy patients or sleep states. Here we present a new toolbox (ICN_Atlas) aimed at facilitating the interpretation of fMRI data in the context of ICNs. More specifically, the new methodology was designed to describe fMRI maps in a function-oriented, objective and quantitative way, using a set of 15 metrics conceived to quantify the degree of 'engagement' of ICNs for any given fMRI-derived statistical map of interest. We demonstrate that the proposed framework provides a highly reliable quantification of fMRI activation maps using a publicly available longitudinal (test-retest) resting-state fMRI dataset. The utility of ICN_Atlas is also illustrated on a parametric task-modulation fMRI dataset, and on a dataset from a patient who had repeated seizures during resting-state fMRI, confirmed on simultaneously recorded EEG. The proposed ICN_Atlas toolbox is freely available for download at http://icnatlas.com and at http://www.nitrc.org for researchers to use in their fMRI investigations. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  1. PLEXOS Input Data Generator

    Energy Technology Data Exchange (ETDEWEB)

    2017-02-01

    The PLEXOS Input Data Generator (PIDG) is a tool that enables PLEXOS users to better version their data, automate data processing, collaborate in developing inputs, and transfer data between different production cost modeling and other power systems analysis software. PIDG can process data that is in a generalized format from multiple input sources, including CSV files, PostgreSQL databases, and PSS/E .raw files and write it to an Excel file that can be imported into PLEXOS with only limited manual intervention.

  2. Spectroscopic analysis and in vitro imaging applications of a pH responsive AIE sensor with a two-input inhibit function.

    Science.gov (United States)

    Zhou, Zhan; Gu, Fenglong; Peng, Liang; Hu, Ying; Wang, Qianming

    2015-08-04

    A novel terpyridine derivative formed stable aggregates in aqueous media (DMSO/H2O = 1/99) with dramatically enhanced fluorescence compared to its organic solution. Moreover, the ultraviolet absorption spectra also demonstrated specific responses to the incorporation of water. The yellow emission at 557 nm changed to intense greenish luminescence only in the presence of protons, and the system conformed to a molecular logic gate with a two-input INHIBIT function. This molecular material could permeate into live cells and remain undissociated in the cytoplasm. The new aggregation-induced emission (AIE) pH-type bio-probe permitted easy collection of yellow luminescence images on a fluorescence microscope. As designed, it displayed striking green emission in organelles at low internal pH. This feature gives the self-assembled structure a whole new function for pH detection within the field of cell imaging.
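
    A two-input INHIBIT gate produces output only when one input is present and the second (inhibiting) input is absent, i.e. A AND (NOT B). A minimal truth-table sketch (generic logic; the mapping to this sensor's particular chemical inputs is not spelled out here):

```python
def inhibit(a: int, b: int) -> int:
    """Two-input INHIBIT gate: ON only for A present and B absent."""
    return int(bool(a) and not b)

for a in (0, 1):
    for b in (0, 1):
        print(f"A={a}, B={b} -> output {inhibit(a, b)}")
```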

  3. Joint estimation of the fractional differentiation orders and the unknown input for linear fractional non-commensurate system

    KAUST Repository

    Belkhatir, Zehor

    2015-11-05

    This paper deals with the joint estimation of the unknown input and the fractional differentiation orders of a linear fractional-order system. A two-stage algorithm combining the modulating functions method with a first-order Newton method is applied to solve this estimation problem. First, the modulating functions approach is used to estimate the unknown input for given fractional differentiation orders. Then, the method is combined with a first-order Newton technique to identify the fractional orders jointly with the input. To show the efficiency of the proposed method, numerical examples illustrating the estimation of the neural activity, considered as the input of a fractional model of neurovascular coupling, along with the fractional differentiation orders, are presented in both noise-free and noisy cases.

  4. SU-E-J-86: Lobar Lung Function Quantification by PET Galligas and CT Ventilation Imaging in Lung Cancer Patients

    International Nuclear Information System (INIS)

    Eslick, E; Kipritidis, J; Keall, P; Bailey, D; Bailey, E

    2014-01-01

    Purpose: The purpose of this study was to quantify lobar lung function using novel PET Galligas ([68Ga]-carbon nanoparticle) ventilation imaging and investigational CT ventilation imaging in lung cancer patients pre-treatment. Methods: We present results for our first three lung cancer patients (2 male, mean age 78 years) as part of an ongoing ethics-approved study. For each patient, a PET Galligas ventilation (PET-V) image and a pair of breath-hold CT images (end-exhale and end-inhale tidal volumes) were acquired using a Siemens Biograph PET CT. CT ventilation (CT-V) images were created from the pair of CT images using deformable image registration (DIR) algorithms and the Hounsfield unit (HU) ventilation metric. Ventilation quantification from the two modalities was compared at the lobar level and at the voxel level. A Bland-Altman plot was used to assess the difference in the mean percentage contribution of each lobe to total lung function between the two modalities. For each patient, a voxel-wise Spearman's correlation was calculated for the whole lungs between the two modalities. Results: The Bland-Altman plot demonstrated strong agreement between PET-V and CT-V for assessment of lobar function (r=0.99, p<0.001; range of mean difference: −5.5 to 3.0). The correlation between PET-V and CT-V at the voxel level was moderate (r=0.60, p<0.001). Conclusion: This preliminary study on three patients' data sets demonstrated strong agreement between PET and CT ventilation imaging for the assessment of pre-treatment lung function at the lobar level. Agreement was only moderate at the level of voxel correlations. These results indicate that CT ventilation imaging has potential for assessing pre-treatment lobar lung function in lung cancer patients
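
    For readers unfamiliar with the two agreement measures used here, a minimal sketch follows, with synthetic per-lobe percentages standing in for the PET-V and CT-V measurements.

```python
# Bland-Altman statistics and Spearman correlation on synthetic lobar data.
import numpy as np
from scipy.stats import spearmanr

pet_v = np.array([32.1, 25.4, 12.3, 18.0, 12.2])  # % of total function per lobe
ct_v = np.array([30.5, 27.0, 13.1, 17.2, 12.2])

diff = pet_v - ct_v
mean_diff = diff.mean()
loa = (mean_diff - 1.96 * diff.std(ddof=1),       # Bland-Altman 95% limits
       mean_diff + 1.96 * diff.std(ddof=1))
rho, p = spearmanr(pet_v, ct_v)                   # voxel-level analog uses all voxels
print(mean_diff, loa, rho, p)
```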

  5. Higher order visual input to the mushroom bodies in the bee, Bombus impatiens.

    Science.gov (United States)

    Paulk, Angelique C; Gronenberg, Wulfila

    2008-11-01

    To produce appropriate behaviors based on biologically relevant associations, sensory pathways conveying different modalities are integrated by higher-order central brain structures, such as insect mushroom bodies. To address this function of sensory integration, we characterized the structure and response of optic lobe (OL) neurons projecting to the calyces of the mushroom bodies in bees. Bees are well known for their visual learning and memory capabilities and their brains possess major direct visual input from the optic lobes to the mushroom bodies. To functionally characterize these visual inputs to the mushroom bodies, we recorded intracellularly from neurons in bumblebees (Apidae: Bombus impatiens) and a single neuron in a honeybee (Apidae: Apis mellifera) while presenting color and motion stimuli. All of the mushroom body input neurons were color sensitive while a subset was motion sensitive. Additionally, most of the mushroom body input neurons would respond to the first, but not to subsequent, presentations of repeated stimuli. In general, the medulla or lobula neurons projecting to the calyx signaled specific chromatic, temporal, and motion features of the visual world to the mushroom bodies, which included sensory information required for the biologically relevant associations bees form during foraging tasks.

  6. (1) H-MRS processing parameters affect metabolite quantification

    DEFF Research Database (Denmark)

    Bhogal, Alex A; Schür, Remmelt R; Houtepen, Lotte C

    2017-01-01

    investigated the influence of model parameters and spectral quantification software on fitted metabolite concentration values. Sixty spectra in 30 individuals (repeated measures) were acquired using a 7-T MRI scanner. Data were processed by four independent research groups with the freedom to choose their own...... + NAAG/Cr + PCr and Glu/Cr + PCr, respectively. Metabolite quantification using identical (1) H-MRS data was influenced by processing parameters, basis sets and software choice. Locally preferred processing choices affected metabolite quantification, even when using identical software. Our results......Proton magnetic resonance spectroscopy ((1) H-MRS) can be used to quantify in vivo metabolite levels, such as lactate, γ-aminobutyric acid (GABA) and glutamate (Glu). However, there are considerable analysis choices which can alter the accuracy or precision of (1) H-MRS metabolite quantification...

  7. Liver kinetics of glucose analogs measured in pigs by PET: importance of dual-input blood sampling

    DEFF Research Database (Denmark)

    Munk, O L; Bass, L; Roelsgaard, K

    2001-01-01

    -input functions were very similar. CONCLUSION: Compartmental analysis of MG and FDG kinetics using dynamic PET data requires measurements of dual-input activity concentrations. Using the dual-input function, physiologically reasonable parameter estimates of K1, k2, and Vp were obtained, whereas the use......Metabolic processes studied by PET are quantified traditionally using compartmental models, which relate the time course of the tracer concentration in tissue to that in arterial blood. For liver studies, the use of arterial input may, however, cause systematic errors to the estimated kinetic...... parameters, because of ignorance of the dual blood supply from the hepatic artery and the portal vein to the liver. METHODS: Six pigs underwent PET after [15O]carbon monoxide inhalation, 3-O-[11C]methylglucose (MG) injection, and [18F]FDG injection. For the glucose scans, PET data were acquired for 90 min...

  8. Benchmarking common quantification strategies for large-scale phosphoproteomics

    DEFF Research Database (Denmark)

    Hogrebe, Alexander; von Stechow, Louise; Bekker-Jensen, Dorte B

    2018-01-01

    Comprehensive mass spectrometry (MS)-based proteomics is now feasible, but reproducible quantification remains challenging, especially for post-translational modifications such as phosphorylation. Here, we compare the most popular quantification techniques for global phosphoproteomics: label-free...

  9. Quality of early parent input predicts child vocabulary 3 years later.

    Science.gov (United States)

    Cartmill, Erica A; Armstrong, Benjamin F; Gleitman, Lila R; Goldin-Meadow, Susan; Medina, Tamara N; Trueswell, John C

    2013-07-09

    Children vary greatly in the number of words they know when they enter school, a major factor influencing subsequent school and workplace success. This variability is partially explained by the differential quantity of parental speech to preschoolers. However, the contexts in which young learners hear new words are also likely to vary in referential transparency; that is, in how clearly word meaning can be inferred from the immediate extralinguistic context, an aspect of input quality. To examine this aspect, we asked 218 adult participants to guess 50 parents' words from (muted) videos of their interactions with their 14- to 18-mo-old children. We found systematic differences in how easily individual parents' words could be identified purely from this socio-visual context. Differences in this kind of input quality correlated with the size of the children's vocabulary 3 y later, even after controlling for differences in input quantity. Although input quantity differed as a function of socioeconomic status, input quality (as here measured) did not, suggesting that the quality of nonverbal cues to word meaning that parents offer to their children is an individual matter, widely distributed across the population of parents.

  10. Quantification of Cannabinoid Content in Cannabis

    Science.gov (United States)

    Tian, Y.; Zhang, F.; Jia, K.; Wen, M.; Yuan, Ch.

    2015-09-01

    Cannabis is an economically important plant that is used in many fields, in addition to being the most commonly consumed illicit drug worldwide. Monitoring the spatial distribution of cannabis cultivation and judging whether it is drug- or fiber-type cannabis is critical for governments and international communities to understand the scale of the illegal drug trade. The aim of this study was to investigate whether the cannabinoids content in cannabis could be spectrally quantified using a spectrometer and to identify the optimal wavebands for quantifying the cannabinoid content. Spectral reflectance data of dried cannabis leaf samples and the cannabis canopy were measured in the laboratory and in the field, respectively. Correlation analysis and the stepwise multivariate regression method were used to select the optimal wavebands for cannabinoid content quantification based on the laboratory-measured spectral data. The results indicated that the delta-9-tetrahydrocannabinol (THC) content in cannabis leaves could be quantified using laboratory-measured spectral reflectance data and that the 695 nm band is the optimal band for THC content quantification. This study provides prerequisite information for designing spectral equipment to enable immediate quantification of THC content in cannabis and to discriminate drug- from fiber-type cannabis based on THC content quantification in the field.
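
    A minimal sketch of the band-selection step is given below on synthetic reflectance and THC values; the study's actual data and full stepwise regression are not reproduced.

```python
# Correlate reflectance at each waveband with THC content and keep the
# strongest band (synthetic data with a planted signal at 695 nm).
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.arange(400, 1000)            # nm
thc = rng.uniform(0.2, 20.0, size=40)         # % THC per leaf sample
reflectance = rng.normal(0.5, 0.05, size=(40, wavelengths.size))
band = np.searchsorted(wavelengths, 695)
reflectance[:, band] -= 0.01 * thc            # plant a THC-dependent band

r = np.array([np.corrcoef(reflectance[:, j], thc)[0, 1]
              for j in range(wavelengths.size)])
best = int(np.argmax(np.abs(r)))
print(f"optimal band: {wavelengths[best]} nm, r = {r[best]:.2f}")
```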

  11. IM-135-562-00 IDIM instruction manual for the isolated digital input module for SLC

    International Nuclear Information System (INIS)

    Kieffer, J.

    1983-01-01

    This unit is designed as a general purpose digital input module. Each input is opto-isolated, and is designed to operate over a wide range of positive input voltages. The unit is nonlatching, each CAMAC Read of the unit presenting the data as seen at the inputs at the time of the Read command. The manual includes the following sections: specifications; front panel, lights and connectors; reference list; functional description; 82S100 logic equations; test and checkout procedures; appendix A, SLAC 82S100 programming data; and appendix B, JXK-FORTH 135-562 program listing

  12. SSYST-3. Input description

    International Nuclear Information System (INIS)

    Meyder, R.

    1983-12-01

    The code system SSYST-3 is designed to analyse the thermal and mechanical behaviour of a fuel rod during a LOCA. The report contains a complete input-list for all modules and several tested inputs for a LOCA analysis. (orig.)

  13. Identification and quantification of bio-actives and metabolites in physiological matrices by automated HPLC-MS

    NARCIS (Netherlands)

    van Platerink, C.J.

    2010-01-01

    Food plays an important role in human health. Nowadays there is an increasing interest in the health effects of so-called functional foods, e.g. effects on blood pressure and cholesterol

  14. Stochastic approach for radionuclides quantification

    Science.gov (United States)

    Clement, A.; Saurel, N.; Perrin, G.

    2018-01-01

    Gamma spectrometry is a passive non-destructive assay used to quantify radionuclides present in more or less complex objects. Basic methods that use empirical calibration with a standard to quantify the activity of nuclear materials, by determining a calibration coefficient, cannot be applied to non-reproducible, complex, one-of-a-kind nuclear objects such as waste packages. Package specifications such as composition or geometry change from one package to another, resulting in high variability between objects. The current quantification process uses numerical modelling of the measured scene with the few data available, such as geometry or composition. These data are density, material, screen, geometric shape, matrix composition, and matrix and source distribution. Some of them depend strongly on knowledge of the package and on operator background. The French Commissariat à l'Energie Atomique (CEA) is developing a new methodology to quantify nuclear materials in waste packages and waste drums without operator adjustment or knowledge of the internal package configuration. This method combines a global stochastic approach, which uses, among others, surrogate models to simulate the gamma attenuation behaviour; a Bayesian approach, which considers conditional probability densities of the problem inputs; and Markov chain Monte Carlo (MCMC) algorithms, which solve the inverse problem given the gamma-ray emission spectrum of the radionuclide and the outside dimensions of the objects of interest. The methodology is being tested by quantifying actinide activity in different kinds of matrix, composition, and source configuration against standards with known actinide masses, locations and distributions. Activity uncertainties are taken into account by this adjustment methodology.
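
    A minimal Metropolis-Hastings sketch of the Bayesian inversion idea follows, with a toy one-parameter attenuation model standing in for the full surrogate models described above; all numbers are invented.

```python
# Infer a source activity A from a gamma count via Metropolis-Hastings,
# using a toy surrogate for the attenuation physics.
import numpy as np

rng = np.random.default_rng(1)

def surrogate_count(activity, attenuation=0.3):
    return activity * np.exp(-attenuation)        # toy detector model

observed = 740.0                                  # measured net counts

def log_post(a):                                  # flat prior on a > 0
    if a <= 0:
        return -np.inf
    return -0.5 * (observed - surrogate_count(a)) ** 2 / observed  # ~Poisson

chain, a = [], 1000.0
for _ in range(20000):
    prop = a + rng.normal(0, 25.0)                # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(a):
        a = prop
    chain.append(a)

post = np.array(chain[5000:])                     # discard burn-in
print(post.mean(), post.std())                    # activity estimate and uncertainty
```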

  15. Generalization of some hidden subgroup algorithms for input sets of arbitrary size

    Science.gov (United States)

    Poslu, Damla; Say, A. C. Cem

    2006-05-01

    We consider the problem of generalizing some quantum algorithms so that they will work on input domains whose cardinalities are not necessarily powers of two. When analyzing the algorithms, we assume that it is possible to perfectly generate superpositions of arbitrary subsets of basis states whose cardinalities are not necessarily powers of two. We have taken Ballhysa's model as a template and have extended it to Chi, Kim and Lee's generalizations of the Deutsch-Jozsa algorithm and to Simon's algorithm. With perfectly equal superpositions over input sets of arbitrary size, Chi, Kim and Lee's generalized Deutsch-Jozsa algorithms, both for evenly-distributed and evenly-balanced functions, worked with a one-sided error property. For Simon's algorithm, the success probability of the generalized algorithm on input sets of arbitrary cardinality with equiprobable superpositions is the same as that of the original, since the property that the measured strings are exactly those having dot product zero with the searched string (in the case where the function is 2-to-1) is preserved.

  16. Material input of nuclear fuel

    International Nuclear Information System (INIS)

    Rissanen, S.; Tarjanne, R.

    2001-01-01

    The Material Input (MI) of nuclear fuel, expressed in terms of the total amount of natural material needed for manufacturing a product, is examined. The suitability of the MI method for assessing the environmental impacts of fuels is also discussed. Material input is expressed as a Material Input Coefficient (MIC), equal to the total mass of natural material divided by the mass of the completed product. The material input coefficient is, however, only an intermediate result, which should not be used as such for the comparison of different fuels, because the energy content of nuclear fuel is about 100 000-fold that of fossil fuels. As a final result, the material input is expressed in proportion to the amount of generated electricity, which is called MIPS (Material Input Per Service unit). Material input is a simplified and commensurable indicator for the use of natural material, but because it does not take into account the harmfulness of materials or the way the residual material is processed, it does not by itself express the amount of environmental impact. Examining the amount alone does not differentiate between, for example, coal, natural gas, or waste rock, which usually consists of just sand; natural gas is, however, substantially more harmful to the ecosystem than sand. Therefore, other methods should also be used to assess the environmental load of a product. The material input coefficient of nuclear fuel is calculated using data from different types of mines. The calculations are made, among other things, using data from an open-pit mine (Key Lake, Canada), an underground mine (McArthur River, Canada) and a by-product mine (Olympic Dam, Australia). Furthermore, the coefficient is calculated for nuclear fuel corresponding to the nuclear fuel supply of the Teollisuuden Voima (TVO) company in 2001. Because there is some uncertainty in the initial data, the inaccuracy of the final results can be as much as 20-50 per cent. The value
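
    A worked example of the two indicators follows; the numbers are purely illustrative, not the study's data.

```python
# MIC and MIPS from their definitions, with made-up magnitudes.
total_natural_material_kg = 1.0e6   # all rock, ore, water etc. moved for the fuel
fuel_mass_kg = 1.0e3
electricity_kwh = 4.0e8             # electricity generated from that fuel

mic = total_natural_material_kg / fuel_mass_kg        # kg material per kg fuel
mips = total_natural_material_kg / electricity_kwh    # kg material per kWh served
print(f"MIC = {mic:.0f} kg/kg, MIPS = {mips:.2e} kg/kWh")
```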

  17. Colour thresholding and objective quantification in bioimaging

    Science.gov (United States)

    Fermin, C. D.; Gerber, M. A.; Torre-Bueno, J. R.

    1992-01-01

    Computer imaging is rapidly becoming an indispensable tool for the quantification of variables in research and medicine. Whilst its use in medicine has largely been limited to qualitative observations, imaging in applied basic sciences, medical research and biotechnology demands objective quantification of the variables in question. In black-and-white densitometry (256 levels of intensity), the separation of subtle differences between closely related hues from stains is sometimes very difficult. True-colour and real-time video microscopy analysis offer choices not previously available with monochrome systems. In this paper we demonstrate the usefulness of colour thresholding, which has so far proven indispensable for proper objective quantification of the products of histochemical reactions and/or subtle differences in tissue and cells. In addition, we provide interested, but untrained, readers with basic information that may assist decisions regarding the most suitable set-up for a project under consideration. Data from projects in progress at Tulane are shown to illustrate the advantage of colour thresholding over monochrome densitometry for objective quantification of subtle colour differences between experimental and control samples.
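
    A minimal sketch of hue-based thresholding follows; the image is synthetic and the threshold window is chosen arbitrarily for illustration.

```python
# Count pixels whose hue falls inside a target window, something a
# monochrome intensity threshold cannot separate.
import numpy as np
from matplotlib.colors import rgb_to_hsv

rng = np.random.default_rng(2)
image = rng.uniform(size=(128, 128, 3))         # stand-in for an RGB micrograph

hsv = rgb_to_hsv(image)                         # hue, saturation, value in [0, 1]
mask = (hsv[..., 0] > 0.55) & (hsv[..., 0] < 0.70) & (hsv[..., 1] > 0.3)
stained_fraction = mask.mean()                  # objective, reproducible quantity
print(f"stained area: {100 * stained_fraction:.1f}% of the field")
```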

  18. Quantification analysis of CT for aphasic patients

    International Nuclear Information System (INIS)

    Watanabe, Shunzo; Ooyama, Hiroshi; Hojo, Kei; Tasaki, Hiroichi; Hanazono, Toshihide; Sato, Tokijiro; Metoki, Hirobumi; Totsuka, Motokichi; Oosumi, Noboru.

    1987-01-01

    Using a microcomputer, the locus and extent of the lesions, as demonstrated by computed tomography, for 44 aphasic patients with various types of aphasia were superimposed onto standardized matrices, composed of 10 slices with 3000 points (50 by 60). The relationships between the foci of the lesions and types of aphasia were investigated on the slices numbered 3, 4, 5, and 6 using a quantification theory, Type 3 (pattern analysis). Some types of regularities were observed on Slices 3, 4, 5, and 6. The group of patients with Broca's aphasia and the group with Wernicke's aphasia were generally separated on the 1st component and the 2nd component of the quantification theory, Type 3. On the other hand, the group with global aphasia existed between the group with Broca's aphasia and that with Wernicke's aphasia. The group of patients with amnestic aphasia had no specific findings, and the group with conduction aphasia existed near those with Wernicke's aphasia. The above results serve to establish the quantification theory, Type 2 (discrimination analysis) and the quantification theory, Type 1 (regression analysis). (author)

  19. Cues, quantification, and agreement in language comprehension.

    Science.gov (United States)

    Tanner, Darren; Bulkes, Nyssa Z

    2015-12-01

    We investigated factors that affect the comprehension of subject-verb agreement in English, using quantification as a window into the relationship between morphosyntactic processes in language production and comprehension. Event-related brain potentials (ERPs) were recorded while participants read sentences with grammatical and ungrammatical verbs, in which the plurality of the subject noun phrase was either doubly marked (via overt plural quantification and morphological marking on the noun) or singly marked (via only plural morphology on the noun). Both acceptability judgments and the ERP data showed heightened sensitivity to agreement violations when quantification provided an additional cue to the grammatical number of the subject noun phrase, over and above plural morphology. This is consistent with models of grammatical comprehension that emphasize feature prediction in tandem with cue-based memory retrieval. Our results additionally contrast with those of prior studies that showed no effects of plural quantification on agreement in language production. These findings therefore highlight some nontrivial divergences in the cues and mechanisms supporting morphosyntactic processing in language production and comprehension.

  20. Anatomical Inputs From the Sensory and Value Structures to the Tail of the Rat Striatum

    Directory of Open Access Journals (Sweden)

    Haiyan Jiang

    2018-05-01

    The caudal region of the rodent striatum, called the tail of the striatum (TS), is a relatively small area but might have a distinct function from other striatal subregions. Recent primate studies showed that this part of the striatum has a unique function in encoding long-term value memory of visual objects for habitual behavior. This function might be due to its specific connectivity. We identified inputs to the rat TS and compared them with inputs to the dorsomedial striatum (DMS) in the same animals. The TS directly received anatomical inputs from both sensory structures and value-coding regions, but the DMS did not. First, inputs from the sensory cortex and sensory thalamus to the TS were found; visual, auditory, somatosensory and gustatory cortex and thalamus projected to the TS but not to the DMS. Second, two value systems innervated the TS: dopamine neurons in the lateral part of the substantia nigra pars compacta (SNc) and serotonin neurons in the dorsal raphe nucleus projected to the TS. The DMS received inputs from a separate group of dopamine neurons in the medial part of the SNc. In addition, learning-related regions of the limbic system innervated the TS; the temporal areas and the basolateral amygdala selectively innervated the TS, but not the DMS. Our data showed that both sensory and value-processing structures innervate the TS, suggesting a plausible role in value-guided sensory-motor association for habitual behavior.

  1. Performance of the Real-Q EBV Quantification Kit for Epstein-Barr Virus DNA Quantification in Whole Blood.

    Science.gov (United States)

    Huh, Hee Jae; Park, Jong Eun; Kim, Ji Youn; Yun, Sun Ae; Lee, Myoung Keun; Lee, Nam Yong; Kim, Jong Won; Ki, Chang Seok

    2017-03-01

    There has been increasing interest in standardized and quantitative Epstein-Barr virus (EBV) DNA testing for the management of EBV disease. We evaluated the performance of the Real-Q EBV Quantification Kit (BioSewoom, Korea) in whole blood (WB). Nucleic acid extraction and real-time PCR were performed by using the MagNA Pure 96 (Roche Diagnostics, Germany) and 7500 Fast real-time PCR system (Applied Biosystems, USA), respectively. Assay sensitivity, linearity, and conversion factor were determined by using the World Health Organization international standard diluted in EBV-negative WB. We used 81 WB clinical specimens to compare performance of the Real-Q EBV Quantification Kit and artus EBV RG PCR Kit (Qiagen, Germany). The limit of detection (LOD) and limit of quantification (LOQ) for the Real-Q kit were 453 and 750 IU/mL, respectively. The conversion factor from EBV genomic copies to IU was 0.62. The linear range of the assay was from 750 to 10⁶ IU/mL. Viral load values measured with the Real-Q assay were on average 0.54 log₁₀ copies/mL higher than those measured with the artus assay. The Real-Q assay offered good analytical performance for EBV DNA quantification in WB.
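
    A worked example of the reported conversion factor and linear range follows; the sample value is invented.

```python
# Copies -> IU conversion with the kit's reported factor of 0.62, plus a
# check against the assay's linear range (750 to 1e6 IU/mL).
copies_per_ml = 12000.0
iu_per_ml = copies_per_ml * 0.62
in_linear_range = 750 <= iu_per_ml <= 1e6
print(f"{iu_per_ml:.0f} IU/mL (quantifiable: {in_linear_range})")
```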

  2. Development and validation of an open source quantification tool for DSC-MRI studies.

    Science.gov (United States)

    Gordaliza, P M; Mateos-Pérez, J M; Montesinos, P; Guzmán-de-Villoria, J A; Desco, M; Vaquero, J J

    2015-03-01

    This work presents the development of an open source tool for the quantification of dynamic susceptibility-weighted contrast-enhanced (DSC) perfusion studies. The development of this tool is motivated by the lack of open source tools implemented on open platforms that allow external developers to implement their own quantification methods easily and without the need to pay for a development license. The quantification tool was developed as a plugin for the ImageJ image analysis platform using the Java programming language. A modular approach was used in the implementation of the components, in such a way that new methods can be added without breaking any of the existing functionality. For the validation process, images from seven patients with brain tumors were acquired and quantified with the presented tool and with a widely used clinical software package, and the resulting perfusion parameters were compared. Perfusion parameters and the corresponding parametric images were obtained. When no gamma-fitting is used, excellent agreement with the tool used as a gold standard was obtained (R² > 0.8, with values within the 95% CI limits in Bland-Altman plots). An open source tool that performs quantification of perfusion studies using magnetic resonance imaging has been developed and validated against a clinical software package. It works as an ImageJ plugin and the source code has been published with an open source license.
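
    The core computation such a DSC plugin performs can be sketched from standard DSC theory: convert the signal drop to a contrast-agent concentration curve and integrate it for relative CBV. The signal below is synthetic and the constants arbitrary.

```python
# Signal -> Delta R2* concentration curve -> relative CBV (arbitrary units).
import numpy as np
from scipy.integrate import trapezoid

t = np.arange(0, 90, 1.5)                        # s, one point per dynamic frame
TE = 0.03                                        # s, echo time
s0 = 600.0                                       # baseline signal
signal = s0 * np.exp(-4.0 * np.exp(-0.5 * ((t - 30) / 6) ** 2) * TE)  # toy bolus

conc = -np.log(signal / s0) / TE                 # Delta R2*, proportional to C(t)
rcbv = trapezoid(conc, t)                        # relative CBV
print(f"rCBV = {rcbv:.1f} a.u.")
```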

  3. Chemical sensors are hybrid-input memristors

    Science.gov (United States)

    Sysoev, V. I.; Arkhipov, V. E.; Okotrub, A. V.; Pershin, Y. V.

    2018-04-01

    Memristors are two-terminal electronic devices whose resistance depends on the history of the input signal (voltage or current). Here we demonstrate that chemical gas sensors can be considered as memristors with a generalized (hybrid) input, namely, an input consisting of the voltage, analyte concentrations and applied temperature. The concept of hybrid-input memristors is demonstrated experimentally using a single-walled carbon nanotube chemical sensor. It is shown that with respect to the hybrid input, the sensor exhibits some features common with memristors, such as hysteretic input-output characteristics. This different perspective on chemical gas sensors may open new possibilities for smart sensor applications.

  4. FED, Geometry Input Generator for Program TRUMP

    International Nuclear Information System (INIS)

    Schauer, D.A.; Elrod, D.C.

    1996-01-01

    1 - Description of program or function: FED reduces the effort required to obtain the necessary geometric input for problems which are to be solved using the heat-transfer code, TRUMP (NESC 771). TRUMP calculates transient and steady-state temperature distributions in multidimensional systems. FED can properly zone any body of revolution in one, or three dimensions. 2 - Method of solution: The region of interest must first be divided into areas which may consist of a common material. The boundaries of these areas are the required FED input. Each area is subdivided into volume nodes, and the geometrical properties are calculated. Finally, FED connects the adjacent nodes to one another, using the proper surface area, interface distance, and, if specified, radiation form factor and interface conductance. 3 - Restrictions on the complexity of the problem: Rectangular bodies can only be approximated by using a very large radius of revolution compared to the total radial thickness and by considering only a small angular segment in the circumferential direction

  5. Image-derived input function obtained in a 3 T MR-BrainPET

    Energy Technology Data Exchange (ETDEWEB)

    Silva, N.A. da [Institute of Biophysics and Biomedical Engineering, University of Lisbon (Portugal); Institute of Neurosciences and Medicine - 4, Juelich (Germany); Herzog, H., E-mail: h.herzog@fz-juelich.de [Institute of Neurosciences and Medicine - 4, Juelich (Germany); Weirich, C.; Tellmann, L.; Rota Kops, E. [Institute of Neurosciences and Medicine - 4, Juelich (Germany); Hautzel, H. [Department of Nuclear Medicine (KME), University of Duesseldorf, Medical Faculty at Research Center Juelich, Juelich (Germany); Almeida, P. [Institute of Biophysics and Biomedical Engineering, University of Lisbon (Portugal)

    2013-02-21

    Aim: The combination of a high-resolution MR-compatible BrainPET insert operated within a 3 T MAGNETOM Trio MR scanner is an excellent tool for obtaining an image-derived input function (IDIF), owing to simultaneous imaging. In this work, we explore the possibility of obtaining an IDIF from volumes of interest (VOI) defined over the carotid arteries (CAs) using the MR data. Material and methods: FDG data from three patients without brain disorders were included. VOIs were drawn bilaterally over the CAs on an MPRAGE image using a 50% isocontour (MR50VOI). CA PET/MR co-registration was examined based on an individual and a combined CA co-registration. After that, to estimate the IDIF, the MR50VOI average (IDIF-A), the four hottest pixels per plane (IDIF-4H) and the four hottest pixels in the VOI (IDIF-4V) were considered. A model-based correction for residual partial volume effects involving venous blood samples was applied, from which partial volume (PV) and spillover (SP) coefficients were estimated. Additionally, a theoretical PV coefficient (PVt) was calculated based on the MR50VOI. Results: The results show excellent co-registration between MR and PET, with an area-under-the-curve ratio between the two co-registration methods of 1.00±0.04. Good agreement between PV and PVt was found for IDIF-A, with a PV of 0.39±0.06 and a PVt of 0.40±0.03, and for IDIF-4H, with a PV of 0.47±0.05 and a PVt of 0.47±0.03. The SPs were 0.20±0.03 and 0.21±0.03 for IDIF-A and IDIF-4H, respectively. Conclusion: The integration of a high-resolution BrainPET in an MR scanner allows an IDIF to be obtained from an MR-based VOI, provided it is corrected for residual partial volume effects.
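
    A generic least-squares sketch of a model-based recovery/spillover correction of the form voi(t) = PV·blood(t) + SP·tissue(t) follows; the paper's exact model may differ, and all numbers are invented.

```python
# Solve for PV and SP at the late venous-sample times, then correct the IDIF.
import numpy as np

# Late-frame values at the venous sampling times (synthetic numbers).
voi = np.array([21.0, 17.5, 15.2, 13.8])      # kBq/mL, carotid VOI mean
blood = np.array([42.0, 34.0, 29.0, 26.0])    # kBq/mL, venous samples
tissue = np.array([18.0, 17.0, 16.5, 16.0])   # kBq/mL, surrounding-tissue VOI

A = np.column_stack([blood, tissue])
(pv, sp), *_ = np.linalg.lstsq(A, voi, rcond=None)
corrected_idif = (voi - sp * tissue) / pv     # apply over the whole curve
print(f"PV = {pv:.2f}, SP = {sp:.2f}")
```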

  6. Construction of an input sensitivity variable CAMAC module for measuring DC voltage

    International Nuclear Information System (INIS)

    Noda, Nobuaki.

    1979-03-01

    In on-line experimental data processing systems, the collection of DC voltage data is frequently required. In plasma confinement experiments, for example, the range of input voltages is very wide, from over 1 kV applied to photomultiplier tubes down to the 10 mV full scale of the controller output for ionization vacuum gauges. An inexpensive DC-voltmeter CAMAC module with a variable input range, convenient for plasma experiments, has been constructed as a prototype. The number of input channels is 16, and the input range is changeable in six steps from ±10 mV to ±200 V; both are set by commands from a computer. The module is in actual use in the on-line data processing system for the JIPP T-2 experiment. The ideas behind its development, and the functions, features and usage of the module, are described in this report. (J.P.N.)

  7. Quantitative and Functional Requirements for Bioluminescent Cancer Models.

    Science.gov (United States)

    Feys, Lynn; Descamps, Benedicte; Vanhove, Christian; Vermeulen, Stefan; Vandesompele, J O; Vanderheyden, Katrien; Messens, Kathy; Bracke, Marc; De Wever, Olivier

    2016-01-01

    Bioluminescent cancer models are widely used, but detailed quantification of the luciferase signal and functional comparison with a non-transfected control cell line are generally lacking. In the present study, we provide quantitative and functional tests for luciferase-transfected cells. We quantified the luciferase expression in BLM and HCT8/E11 transfected cancer cells and examined the effect of long-term luciferin exposure. The present study also investigated functional differences between parental and transfected cancer cells. Our results showed that quantification of different single-cell-derived populations is superior with droplet digital polymerase chain reaction. Quantification of luciferase protein level and luciferase bioluminescent activity is only useful when there is a significant difference in copy number. Continuous exposure of cell cultures to luciferin leads to inhibitory effects on mitochondrial activity, cell growth and bioluminescence, and these inhibitory effects correlate with luciferase copy number. Cell culture and mouse xenograft assays showed no significant functional differences between luciferase-transfected and parental cells. Luciferase-transfected cells should be validated by quantitative and functional assays before starting large-scale experiments.

  8. Quantification and visualization of alveolar bone resorption from 3D dental CT images

    International Nuclear Information System (INIS)

    Nagao, Jiro; Mori, Kensaku; Kitasaka, Takayuki; Suenaga, Yasuhito; Yamada, Shohzoh; Naitoh, Munetaka

    2007-01-01

    Purpose: A computer-aided diagnosis (CAD) system for quantifying and visualizing alveolar bone resorption caused by periodontitis was developed, based on three-dimensional (3D) image processing of dental CT images. Methods: The proposed system enables visualization and quantification of resorption of the alveolar bone surrounding and between the roots of teeth. It has the following functions: (1) vertical measurement of the depth of resorption surrounding the tooth in 3D images, avoiding physical obstruction; (2) quantification of the amount of resorption in the furcation area; and (3) visualization of quantification results by pseudo-color maps, graphs, and motion pictures. The resorption measurement accuracy in the area surrounding teeth was evaluated by comparison with a dentist's readings on five real patient CT images, giving an average absolute difference of 0.87 mm. An artificial image with mathematical ground truth was also used for measurement evaluation. Results: The average absolute difference was 0.36 and 0.10 mm for the surrounding and furcation areas, respectively. The system provides an intuitive presentation of the measurement results. Conclusion: Computer-aided diagnosis of 3D dental CT scans is feasible, and the technique is a promising new tool for the quantitative evaluation of periodontal bone loss. (orig.)

  9. Computational Techniques for Model Predictive Control of Large-Scale Systems with Continuous-Valued and Discrete-Valued Inputs

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2013-01-01

    We propose computational techniques for model predictive control of large-scale systems with both continuous-valued control inputs and discrete-valued control inputs, which are a class of hybrid systems. In the proposed method, we introduce the notion of virtual control inputs, which are obtained by relaxing discrete-valued control inputs to continuous variables. In online computation, first, we find continuous-valued control inputs and virtual control inputs minimizing a cost function. Next, using the obtained virtual control inputs, only discrete-valued control inputs at the current time are computed in each subsystem. In addition, we also discuss the effect of quantization errors. Finally, the effectiveness of the proposed method is shown by a numerical example. The proposed method enables us to reduce and decentralize the computation load.
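
    A toy one-step sketch of the relax-and-round idea behind virtual control inputs follows; it is not the paper's full MPC formulation, and the dynamics and weights are invented.

```python
# Relax the discrete input to a continuous "virtual" input, optimize, then
# snap the virtual input back to the admissible set {0, 1}.
import numpy as np
from scipy.optimize import minimize

x = 3.0                                         # current state

def cost(z):
    u_cont, u_virt = z                          # continuous + relaxed discrete input
    x_next = 0.9 * x + 0.5 * u_cont + 1.0 * u_virt
    return x_next**2 + 0.1 * u_cont**2 + 0.1 * u_virt**2

res = minimize(cost, x0=[0.0, 0.5],
               bounds=[(-2.0, 2.0), (0.0, 1.0)])  # u_discrete relaxed to [0, 1]
u_cont, u_virt = res.x
u_discrete = round(u_virt)                      # per-subsystem snap to {0, 1}
print(u_cont, u_virt, u_discrete)
```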

  10. Connectivity in the human brain dissociates entropy and complexity of auditory inputs.

    Science.gov (United States)

    Nastase, Samuel A; Iacovella, Vittorio; Davis, Ben; Hasson, Uri

    2015-03-01

    Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators.
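
    A minimal sketch of the entropy side of this analysis follows; quantifying complexity would need a separate estimator (e.g. Lempel-Ziv), which is not shown.

```python
# Shannon entropy of a discretized signal: a random input has high entropy,
# a constant input has zero entropy.
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(3)

def signal_entropy(x, bins=16):
    counts, _ = np.histogram(x, bins=bins)
    return entropy(counts / counts.sum(), base=2)   # bits per sample

noisy = rng.normal(size=4096)         # high-entropy input
flat = np.zeros(4096)                 # zero-entropy input
print(signal_entropy(noisy), signal_entropy(flat))
```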

  11. Guided Wave Delamination Detection and Quantification With Wavefield Data Analysis

    Science.gov (United States)

    Tian, Zhenhua; Campbell Leckey, Cara A.; Seebo, Jeffrey P.; Yu, Lingyu

    2014-01-01

    Unexpected damage can occur in aerospace composites due to impact events or material stress during off-nominal loading events. In particular, laminated composites are susceptible to delamination damage due to weak transverse tensile and inter-laminar shear strengths. Developments of reliable and quantitative techniques to detect delamination damage in laminated composites are imperative for safe and functional optimally-designed next-generation composite structures. In this paper, we investigate guided wave interactions with delamination damage and develop quantification algorithms by using wavefield data analysis. The trapped guided waves in the delamination region are observed from the wavefield data and further quantitatively interpreted by using different wavenumber analysis methods. The frequency-wavenumber representation of the wavefield shows that new wavenumbers are present and correlate to trapped waves in the damage region. These new wavenumbers are used to detect and quantify the delamination damage through the wavenumber analysis, which can show how the wavenumber changes as a function of wave propagation distance. The location and spatial duration of the new wavenumbers can be identified, providing a useful means not only for detecting the presence of delamination damage but also allowing for estimation of the delamination size. Our method has been applied to detect and quantify real delamination damage with complex geometry (grown using a quasi-static indentation technique). The detection and quantification results show the location, size, and shape of the delamination damage.
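
    A minimal sketch of the frequency-wavenumber analysis follows, on a synthetic single-mode wavefield u(x, t); real delamination data would show additional, spatially localized wavenumbers.

```python
# A 2-D FFT of wavefield data separates wave modes by frequency and
# wavenumber, which is how trapped-wave wavenumbers are identified.
import numpy as np

dx, dt = 1e-3, 1e-6                          # m, s: scan pitch and sample period
x = np.arange(0, 0.2, dx)
t = np.arange(0, 2e-4, dt)
k0, f0 = 800.0, 200e3                        # rad/m, Hz: one guided mode
u = np.sin(k0 * x[None, :] - 2 * np.pi * f0 * t[:, None])   # u[t, x]

U = np.fft.fftshift(np.abs(np.fft.fft2(u)))  # rows: frequency, cols: wavenumber
freqs = np.fft.fftshift(np.fft.fftfreq(t.size, dt))
wavenums = np.fft.fftshift(np.fft.fftfreq(x.size, dx)) * 2 * np.pi
fi, ki = np.unravel_index(np.argmax(U), U.shape)
print(f"peak at |f| = {abs(freqs[fi]):.0f} Hz, |k| = {abs(wavenums[ki]):.0f} rad/m")
```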

  12. Selective Distance-Based K+ Quantification on Paper-Based Microfluidics.

    Science.gov (United States)

    Gerold, Chase T; Bakker, Eric; Henry, Charles S

    2018-04-03

    In this study, paper-based microfluidic devices (μPADs) capable of K⁺ quantification in aqueous samples, as well as in human serum, using both colorimetric and distance-based methods are described. A lipophilic phase containing potassium ionophore I (valinomycin) was utilized to achieve highly selective quantification of K⁺ in the presence of Na⁺, Li⁺, and Mg²⁺ ions. Successful addition of a suspended lipophilic phase to a wax-printed paper-based device is described and offers a solution to current approaches that rely on organic solvents, which damage wax barriers. The approach provides an avenue for future alkali/alkaline quantification utilizing μPADs. Colorimetric spot tests allowed for K⁺ quantification from 0.1-5.0 mM using only 3.00 μL of sample solution. Selective distance-based quantification required small sample volumes (6.00 μL) and gave responses sensitive enough to distinguish between 1.0 and 2.5 mM of sample K⁺. μPADs using distance-based methods were also capable of differentiating between 4.3 and 6.9 mM K⁺ in human serum samples. Distance-based methods required no digital analysis, electronic hardware, or pumps; any steps required for quantification could be carried out using the naked eye.

  13. PET Quantification of the Norepinephrine Transporter in Human Brain with (S,S)-18F-FMeNER-D2.

    Science.gov (United States)

    Moriguchi, Sho; Kimura, Yasuyuki; Ichise, Masanori; Arakawa, Ryosuke; Takano, Harumasa; Seki, Chie; Ikoma, Yoko; Takahata, Keisuke; Nagashima, Tomohisa; Yamada, Makiko; Mimura, Masaru; Suhara, Tetsuya

    2017-07-01

    Norepinephrine transporter (NET) in the brain plays important roles in human cognition and the pathophysiology of psychiatric disorders. Two radioligands, (S,S)-11C-MRB and (S,S)-18F-FMeNER-D2, have been used for imaging NETs in the thalamus and midbrain (including locus coeruleus) using PET in humans. However, NET density in the equally important cerebral cortex has not been well quantified because of unfavorable kinetics with (S,S)-11C-MRB and defluorination with (S,S)-18F-FMeNER-D2, which can complicate NET quantification in the cerebral cortex adjacent to the skull containing defluorinated 18F radioactivity. In this study, we have established analysis methods for quantification of NET density in the brain, including the cerebral cortex, using (S,S)-18F-FMeNER-D2 PET. Methods: We analyzed our previous (S,S)-18F-FMeNER-D2 PET data of 10 healthy volunteers dynamically acquired for 240 min with arterial blood sampling. The effect of defluorination on NET quantification in the superficial cerebral cortex was evaluated by establishing the time stability of NET density estimates with an arterial-input 2-tissue-compartment model, which guided the less-invasive reference tissue model and area-under-the-time-activity-curve methods to accurately quantify NET density in all brain regions, including the cerebral cortex. Results: Defluorination of (S,S)-18F-FMeNER-D2 became prominent toward the latter half of the 240-min scan. Total distribution volumes in the superficial cerebral cortex increased with scan durations beyond 120 min. We verified that 90-min dynamic scans provided a sufficient amount of data for quantification of NET density unaffected by defluorination. Reference tissue model binding potential values from the 90-min scan data and area-under-the-curve ratios of the 70- to 90-min data allowed for accurate quantification of NET density in the cerebral cortex. Conclusion: We have established
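
    A sketch of the AUC-ratio step on synthetic 70-90 min time-activity curves follows. Treating the target-to-reference ratio minus one as an approximation of binding potential is a common convention and an assumption here, not necessarily the paper's exact outcome measure.

```python
# Integrate target- and reference-region curves over the late window and
# form their ratio.
import numpy as np
from scipy.integrate import trapezoid

t = np.array([70, 75, 80, 85, 90.0])            # min, frame mid-times
target = np.array([5.1, 4.9, 4.7, 4.5, 4.3])    # kBq/mL, e.g. thalamus
reference = np.array([3.0, 2.9, 2.8, 2.7, 2.6]) # kBq/mL, NET-poor region

auc_ratio = trapezoid(target, t) / trapezoid(reference, t)
print(f"AUC ratio = {auc_ratio:.2f}, BP_ND ~ {auc_ratio - 1:.2f}")
```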

  14. Sensitivity Analysis of Input Parameters for a Dynamic Food Chain Model DYNACON

    International Nuclear Information System (INIS)

    Hwang, Won Tae; Lee, Geun Chang; Han, Moon Hee; Cho, Gyu Seong

    2000-01-01

    The sensitivity analysis of input parameters for the dynamic food chain model DYNACON was conducted as a function of deposition data for the long-lived radionuclides 137Cs and 90Sr. The influence of input parameters on the short- and long-term contamination of selected foodstuffs (cereals, leafy vegetables, milk) was also investigated. The input parameters were sampled using the LHS (Latin hypercube sampling) technique, and their sensitivity indices were expressed as partial rank correlation coefficients (PRCCs). The sensitivity index was strongly dependent on the contamination period as well as on the deposition data. For deposition during the growing stages of plants, the input parameters associated with contamination by foliar absorption were relatively important for long-term as well as short-term contamination. They were also important for short-term contamination when deposition occurred during the non-growing stages. For long-term contamination, the influence of input parameters associated with foliar absorption decreased, while the influence of those associated with root uptake increased. These phenomena were more pronounced for deposition during non-growing stages than during growing stages, and for 90Sr deposition than for 137Cs deposition. For deposition during the growing stages of pasture, the input parameters associated with the characteristics of cattle, such as the feed-to-milk transfer factor and the daily intake rate, were relatively important for the contamination of milk
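
    A generic LHS-plus-PRCC recipe is sketched below, with a toy model and hypothetical parameter names rather than DYNACON's actual inputs.

```python
# Sample parameters with Latin hypercube sampling, run a toy model, and
# compute partial rank correlation coefficients.
import numpy as np
from scipy.stats import qmc, rankdata

rng = np.random.default_rng(4)
n, names = 500, ["transfer_factor", "weathering_rate", "intake_rate"]
X = qmc.LatinHypercube(d=3, seed=4).random(n)          # unit-cube LHS design
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=n)  # toy model output

R = np.column_stack([rankdata(X[:, j]) for j in range(3)])
ry = rankdata(y)

def prcc(j):
    # Correlate the residuals of parameter j and the output, after
    # regressing both on the ranks of the other parameters.
    others = np.column_stack([np.ones(n), np.delete(R, j, axis=1)])
    res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
    res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
    return np.corrcoef(res_x, res_y)[0, 1]

for j, name in enumerate(names):
    print(f"PRCC({name}) = {prcc(j):+.2f}")
```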

  15. Synthesis and Review: Advancing agricultural greenhouse gas quantification

    International Nuclear Information System (INIS)

    Olander, Lydia P; Wollenberg, Eva; Tubiello, Francesco N; Herold, Martin

    2014-01-01

    Reducing emissions of agricultural greenhouse gases (GHGs), such as methane and nitrous oxide, and sequestering carbon in the soil or in living biomass can help reduce the impact of agriculture on climate change while improving productivity and reducing resource use. There is an increasing demand for improved, low cost quantification of GHGs in agriculture, whether for national reporting to the United Nations Framework Convention on Climate Change (UNFCCC), underpinning and stimulating improved practices, establishing crediting mechanisms, or supporting green products. This ERL focus issue highlights GHG quantification to call attention to our existing knowledge and opportunities for further progress. In this article we synthesize the findings of 21 papers on the current state of global capability for agricultural GHG quantification and visions for its improvement. We conclude that strategic investment in quantification can lead to significant global improvement in agricultural GHG estimation in the near term. (paper)

  16. Investigation of bacterial hopanoid inputs to soils from Western Canada

    Energy Technology Data Exchange (ETDEWEB)

    Shunthirasingham, Chubashini [Department of Physical and Environmental Sciences, University of Toronto, Scarborough College, 1265 Military Trail, Toronto, Ont., M1C1A4 (Canada); Simpson, Myrna J. [Department of Physical and Environmental Sciences, University of Toronto, Scarborough College, 1265 Military Trail, Toronto, Ont., M1C1A4 (Canada)]. E-mail: myrna.simpson@utoronto.ca

    2006-06-15

    Hopanoids have been widely used as characteristic biomarkers to study inputs of bacterial biomass to sediments because they are preserved in the geologic record. A limited number of studies have been performed on hopanoid biomarkers in soils. The present study examined the distribution and potential preservation of hopanoids in soils that developed under different climatic conditions and varying vegetative inputs. Solvent extraction and sequential chemical degradation methods were employed to extract both 'free' and 'bound' hopanoids from three grassland soils, a grassland-forest transition soil, and a forest soil from Western Canada. Identification and quantification of hopanoids in the soil samples were carried out by gas chromatography-mass spectrometry. Methylbishomohopanol, bishomohopanol and bishomohopanoic acid were detected in all solvent extracts. The base hydrolysis and ruthenium tetroxide extracts contained only bishomohopanoic acid, at concentration ranges of 0.8-8.8 μg/gC and 2.2-28.3 μg/gC, respectively. The acid hydrolysis procedure did not release detectable amounts of hopanoids. The solvent extraction yielded the greatest amounts of 'free' hopanoids in two of the grassland soils (Dark Brown and Black Chernozems) and in the forest soil (Gray Luvisol). In contrast, the chemical degradation methods resulted in higher amounts of 'bound' hopanoids in the third grassland soil (Brown Chernozem) and the transition soil (Dark Gray Chernozem), indicating that more hopanoids exist in the 'bound' form in these soils. Overall, the forest and transition soils contained more hopanoids than the grassland soils. This is hypothesized to be due to the greater degradation of hopanoids in the grassland soils and/or sorption to clay minerals, as compared to the forest and transition soils.

  17. Atomic Resolution Imaging and Quantification of Chemical Functionality of Surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Schwarz, Udo D. [Yale Univ., New Haven, CT (United States). Dept. of Mechanical Engineering and Materials Science; Altman, Eric I. [Yale Univ., New Haven, CT (United States). Dept. of Chemical and Environmental Engineering

    2014-12-10

    The work carried out from 2006-2014 under DoE support was targeted at developing new approaches to the atomic-scale characterization of surfaces that include species-selective imaging and an ability to quantify chemical surface interactions with site-specific accuracy. The newly established methods were subsequently applied to gain insight into the local chemical interactions that govern the catalytic properties of model catalysts of interest to DoE. The foundation of our work was the development of three-dimensional atomic force microscopy (3DAFM), a new measurement mode that allows the mapping of the complete surface force and energy fields with picometer resolution in space (x, y, and z) and piconewton/millielectron volts in force/energy. From this experimental platform, we further expanded by adding the simultaneous recording of tunneling current (3D-AFM/STM) using chemically well-defined tips. Through comparison with simulations, we were able to achieve precise quantification and assignment of local chemical interactions to exact positions within the lattice. During the course of the project, the novel techniques were applied to surface-oxidized copper, titanium dioxide, and silicon oxide. On these materials, defect-induced changes to the chemical surface reactivity and electronic charge density were characterized with site-specific accuracy.

  18. Positron emission tomography quantification of serotonin transporter in suicide attempters with major depressive disorder.

    Science.gov (United States)

    Miller, Jeffrey M; Hesselgrave, Natalie; Ogden, R Todd; Sullivan, Gregory M; Oquendo, Maria A; Mann, J John; Parsey, Ramin V

    2013-08-15

    Several lines of evidence implicate abnormal serotonergic function in suicidal behavior and completed suicide, including low serotonin transporter binding in postmortem studies of completed suicide. We have also reported low in vivo serotonin transporter binding in major depressive disorder (MDD) during a major depressive episode using positron emission tomography (PET) with [(11)C]McN5652. We quantified regional brain serotonin transporter binding in vivo in depressed suicide attempters, depressed nonattempters, and healthy controls using PET and a superior radiotracer, [(11)C]DASB. Fifty-one subjects with DSM-IV current MDD, 15 of whom were past suicide attempters, and 32 healthy control subjects underwent PET scanning with [(11)C]DASB to quantify in vivo regional brain serotonin transporter binding. Metabolite-corrected arterial input functions and plasma free-fraction were acquired to improve quantification. Depressed suicide attempters had lower serotonin transporter binding in midbrain compared with depressed nonattempters (p = .031) and control subjects (p = .0093). There was no difference in serotonin transporter binding comparing all depressed subjects with healthy control subjects considering six a priori regions of interest simultaneously (p = .41). Low midbrain serotonin transporter binding appears to be related to the pathophysiology of suicidal behavior rather than of major depressive disorder. This is consistent with postmortem work showing low midbrain serotonin transporter binding capacity in depressed suicides and may partially explain discrepant in vivo findings quantifying serotonin transporter in depression. Future studies should investigate midbrain serotonin transporter binding as a predictor of suicidal behavior in MDD and determine the cause of low binding.

  19. Critical points of DNA quantification by real-time PCR--effects of DNA extraction method and sample matrix on quantification of genetically modified organisms.

    Science.gov (United States)

    Cankar, Katarina; Stebih, Dejan; Dreo, Tanja; Zel, Jana; Gruden, Kristina

    2006-08-14

    Real-time PCR is the technique of choice for nucleic acid quantification. In the field of detection of genetically modified organisms (GMOs) quantification of biotech products may be required to fulfil legislative requirements. However, successful quantification depends crucially on the quality of the sample DNA analyzed. Methods for GMO detection are generally validated on certified reference materials that are in the form of powdered grain material, while detection in routine laboratories must be performed on a wide variety of sample matrixes. Due to food processing, the DNA in sample matrixes can be present in low amounts and also degraded. In addition, molecules of plant origin or from other sources that affect PCR amplification of samples will influence the reliability of the quantification. Further, the wide variety of sample matrixes presents a challenge for detection laboratories. The extraction method must ensure high yield and quality of the DNA obtained and must be carefully selected, since even components of DNA extraction solutions can influence PCR reactions. GMO quantification is based on a standard curve, therefore similarity of PCR efficiency for the sample and standard reference material is a prerequisite for exact quantification. Little information on the performance of real-time PCR on samples of different matrixes is available. Five commonly used DNA extraction techniques were compared and their suitability for quantitative analysis was assessed. The effect of sample matrix on nucleic acid quantification was assessed by comparing 4 maize and 4 soybean matrixes. In addition 205 maize and soybean samples from routine analysis were analyzed for PCR efficiency to assess variability of PCR performance within each sample matrix. Together with the amount of DNA needed for reliable quantification, PCR efficiency is the crucial parameter determining the reliability of quantitative results, therefore it was chosen as the primary criterion by which to
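
    A worked sketch of the standard-curve mathematics on which such quantification rests follows, with illustrative dilution-series values; PCR efficiency is derived from the slope of Cq against log10 template amount.

```python
# Fit a qPCR standard curve and estimate efficiency and an unknown sample.
import numpy as np

log10_copies = np.array([5, 4, 3, 2, 1.0])       # dilution series
cq = np.array([18.1, 21.5, 24.9, 28.2, 31.6])    # measured quantification cycles

slope, intercept = np.polyfit(log10_copies, cq, 1)
efficiency = 10 ** (-1 / slope) - 1              # 1.0 means perfect doubling
unknown_cq = 23.4
estimate = 10 ** ((unknown_cq - intercept) / slope)
print(f"E = {100 * efficiency:.0f}%, unknown ~ {estimate:.0f} copies")
```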

  20. Quantification in Kabiye: a linguistic approach | Pali ...

    African Journals Online (AJOL)

    ... which is denoted by lexical quantifiers. Quantification with specific reference is provided by different types of linguistic units (nouns, numerals, adjectives, adverbs, ideophones and verbs) in arguments/noun phrases and in the predicative phrase in the sense of Chomsky. Keywords: quantification, class, number, reference, ...

  1. Enhanced Input in LCTL Pedagogy

    Directory of Open Access Journals (Sweden)

    Marilyn S. Manley

    2010-08-01

    Language materials for the more-commonly-taught languages (MCTLs) often include visual input enhancement (Sharwood Smith 1991, 1993), which makes use of typographical cues like bolding and underlining to enhance the saliency of targeted forms. For a variety of reasons, this paper argues that the use of enhanced input, both visual and oral, is especially important as a tool for the less-commonly-taught languages (LCTLs). As there continues to be a scarcity of teaching resources for the LCTLs, individual teachers must take it upon themselves to incorporate enhanced input into their own self-made materials. Specific examples of how to incorporate both visual and oral enhanced input into language teaching are drawn from the author's own experiences teaching Cuzco Quechua. Additionally, survey results are presented from the author's Fall 2010 semester Cuzco Quechua language students, supporting the use of both visual and oral enhanced input.

  3. EPC: A Provably Secure Permutation Based Compression Function

    DEFF Research Database (Denmark)

    Bagheri, Nasour; Gauravaram, Praveen; Naderi, Majid

    2010-01-01

    The security of permutation-based hash functions in the ideal permutation model has been studied when the input-length of compression function is larger than the input-length of the permutation function. In this paper, we consider permutation based compression functions that have input lengths sh...

  4. Measurements of the dynamic input impedance of a dc SQUID

    International Nuclear Information System (INIS)

    Hilbert, C.; Clarke, J.

    1985-01-01

    The impedance of a circuit coupled magnetically via a mutual inductance M_i to a dc SQUID of geometric inductance L is modified by the dynamic input impedance of the SQUID, which can be characterized by the flux-to-current transfer function J_Φ ≈ ∂J/∂Φ; J is the current circulating in the SQUID loop and Φ is the flux applied to the loop. At the same time, the SQUID is modified by the presence of the input circuit: in the lumped circuit approximation, one expects its inductance to be reduced to L' = (1 - α_e²)L, where α_e is an effective coupling coefficient. Calculations of J_Φ using an analog simulator are described and presented in the form of a dynamic inductance L_d and a dynamic resistance R_d versus bias current I and flux Φ. Experimental measurements of L_d and R_d were made on a planar, thin-film SQUID tightly coupled to a spiral input coil that was connected in series with a capacitor C_i to form a resonant circuit. Thus, J_Φ was determined from the change in the resonant frequency and quality factor of this circuit as a function of I and Φ. At low bias currents (low Josephson frequencies) the measured values of L_d were in reasonable agreement with values simulated for the reduced SQUID, while at higher bias currents (higher Josephson frequencies) the measured values were in better agreement with values simulated for the unscreened SQUID. Similar conclusions were reached in the comparison of the experimental and simulated values of the flux-to-voltage transfer function V_Φ.

  5. A new environmental Kuznets curve? Relationship between direct material input and income per capita. Evidence from industrialised countries

    Energy Technology Data Exchange (ETDEWEB)

    Canas, Angela; Ferrao, Paulo; Conceicao, Pedro [IN+-Centre for Innovation, Technology and Policy Research, IST-Instituto Superior Tecnico, Av. Rovisco Pais, 1049-001 Lisbon (Portugal)

    2003-09-01

    Many studies have focused on the quantification of the input of materials into the economy. However, the insights provided by those studies have generally been limited. This paper attempts to bring analytical value to the discussion on 'dematerialization' by considering direct material input (DMI) per capita as the dependent variable in a test of the environmental Kuznets curve (EKC). The explanatory variable is, as usual, gross domestic product per capita. The quadratic and cubic versions of the EKC are tested econometrically, using panel data ranging from 1960 to 1998 for 16 industrialised countries. The results indicate strong and robust support for both the quadratic and cubic EKC relationships between material input and income in industrialised economies. While the statistical support for both types of evolution may seem contradictory, it suggests that as industrialised economies grow, the intensity of material consumption first increases, but eventually starts exhibiting a decreasing trend after a certain income threshold is reached. The inverted-U, or quadratic, relationship is confirmed, even though, for the ranges of income considered in this study, the trend is mostly on the increasing part of the inverted-U curve. However, the statistical support for the cubic specification suggests that these results need to be regarded with caution. Overall, the statistically stronger models are the quadratic and cubic models with country random effects, and the cubic model with country and year fixed effects.
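
    Since the quadratic and cubic EKC specifications are polynomial regressions of DMI per capita on income, the core computation can be sketched as below; the data are synthetic, and plain least squares stands in for the study's panel estimators with country and year effects.

```python
import numpy as np

rng = np.random.default_rng(1)
gdp = rng.uniform(5_000, 40_000, 300)                      # income per capita (synthetic)
dmi = 2 + 8e-4 * gdp - 1.2e-8 * gdp**2 + rng.normal(0, 1, gdp.size)

# Quadratic EKC: DMI = b0 + b1*gdp + b2*gdp^2; an inverted U needs b1 > 0, b2 < 0.
b2, b1, b0 = np.polyfit(gdp, dmi, 2)
turning_point = -b1 / (2 * b2)                             # income where DMI peaks

# The cubic specification adds a gdp^3 term and allows the curve to turn up again.
c3, c2, c1, c0 = np.polyfit(gdp, dmi, 3)
print(f"b2 = {b2:.2e} (< 0?), turning point ~ {turning_point:,.0f}")
```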

  6. Flow cytometric quantification of reticulocytes: application to radiation-induced bone marrow aplasia

    International Nuclear Information System (INIS)

    Dubner, D.; Perez, M.; Gisone, P.

    1996-01-01

    Flow cytometric reticulocyte quantification was performed in ten patients undergoing bone marrow transplantation (BMT) with previous conditioning by chemotherapy and total body irradiation. A reticulocyte maturity index (RMI) was determined taking into account the RNA content. With the aim of testing the utility of the RMI as an early predictor of functional recovery in marrow aplasia, other hematological indicators such as neutrophil counts were comparatively evaluated. The mean time elapsed between BMT and engraftment evidence by RMI was 17.6 days. In six patients the RMI was the earliest indicator of functional recovery. The applicability of this assay in the follow-up of radiation-induced bone marrow aplasia is discussed. (authors). 4 refs., 4 figs., 2 tabs

  7. Quantification analysis of CT for aphasic patients

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, S.; Ooyama, H.; Hojo, K.; Tasaki, H.; Hanazono, T.; Sato, T.; Metoki, H.; Totsuka, M.; Oosumi, N.

    1987-02-01

    Using a microcomputer, the locus and extent of the lesions, as demonstrated by computed tomography, for 44 aphasic patients with various types of aphasia were superimposed onto standardized matrices, composed of 10 slices with 3000 points (50 by 60). The relationships between the foci of the lesions and the types of aphasia were investigated on slices 3, 4, 5, and 6 using quantification theory Type 3 (pattern analysis). Some regularities were observed on slices 3, 4, 5, and 6. The group of patients with Broca's aphasia and the group with Wernicke's aphasia were generally separated on the 1st and 2nd components of quantification theory Type 3. On the other hand, the group with global aphasia lay between the groups with Broca's and Wernicke's aphasia. The group of patients with amnestic aphasia had no specific findings, and the group with conduction aphasia lay near that with Wernicke's aphasia. The above results provide a basis for subsequent analyses using quantification theory Type 2 (discriminant analysis) and quantification theory Type 1 (regression analysis).

  8. Rapid quantification and sex determination of forensic evidence materials.

    Science.gov (United States)

    Andréasson, Hanna; Allen, Marie

    2003-11-01

    DNA quantification of forensic evidence is very valuable for an optimal use of the available biological material. Moreover, sex determination is of great importance as additional information in criminal investigations as well as in identification of missing persons, no suspect cases, and ancient DNA studies. While routine forensic DNA analysis based on short tandem repeat markers includes a marker for sex determination, analysis of samples containing scarce amounts of DNA is often based on mitochondrial DNA, and sex determination is not performed. In order to allow quantification and simultaneous sex determination on minute amounts of DNA, an assay based on real-time PCR analysis of a marker within the human amelogenin gene has been developed. The sex determination is based on melting curve analysis, while an externally standardized kinetic analysis allows quantification of the nuclear DNA copy number in the sample. This real-time DNA quantification assay has proven to be highly sensitive, enabling quantification of single DNA copies. Although certain limitations were apparent, the system is a rapid, cost-effective, and flexible assay for analysis of forensic casework samples.
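
    A toy illustration of the melting-curve sex call this kind of assay performs; the melting temperatures and tolerance below are invented placeholders, not the published assay's values.

```python
def call_sex(melt_peaks_c, tm_x=78.5, tm_y=84.0, tol=0.8):
    """Classify a sample from detected melt-peak temperatures (deg C).

    Assumes the X and Y amelogenin amplicons melt at distinct, known
    temperatures (tm_x and tm_y are hypothetical placeholders)."""
    has_x = any(abs(t - tm_x) <= tol for t in melt_peaks_c)
    has_y = any(abs(t - tm_y) <= tol for t in melt_peaks_c)
    if has_x and has_y:
        return "male (X and Y products)"
    if has_x:
        return "female (X product only)"
    return "undetermined"

print(call_sex([78.4, 83.9]))   # -> male (X and Y products)
```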

  9. Spatiotemporal coding of inputs for a system of globally coupled phase oscillators

    Science.gov (United States)

    Wordsworth, John; Ashwin, Peter

    2008-12-01

    We investigate the spatiotemporal coding of low-amplitude inputs to a simple system of globally coupled phase oscillators with coupling function g(ϕ)=-sin(ϕ+α)+rsin(2ϕ+β) that has robust heteroclinic cycles (slow switching between cluster states). The inputs correspond to detuning of the oscillators. It was recently noted that globally coupled phase oscillators can encode their frequencies in the form of spatiotemporal codes of a sequence of cluster states [P. Ashwin, G. Orosz, J. Wordsworth, and S. Townley, SIAM J. Appl. Dyn. Syst. 6, 728 (2007)]. Concentrating on the case of N=5 oscillators, we show in detail how the spatiotemporal coding can be used to resolve all of the information that relates the individual inputs to each other, provided that a long enough time series is considered. We investigate robustness to the addition of noise and find a remarkable stability, especially of the temporal coding, even for noise of a magnitude comparable to the inputs.
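
    A sketch of the underlying dynamics: N = 5 globally coupled phase oscillators with the coupling function quoted above, integrated by a forward-Euler step. The parameter values are illustrative, not those used in the paper.

```python
import numpy as np

N, alpha, beta, r = 5, 1.8, -2.0, 0.2     # illustrative parameters only

def g(phi):
    # coupling function from the abstract: g(phi) = -sin(phi+alpha) + r*sin(2*phi+beta)
    return -np.sin(phi + alpha) + r * np.sin(2.0 * phi + beta)

def step(theta, omega, dt=1e-2):
    # global coupling: dtheta_i/dt = omega_i + (1/N) * sum_j g(theta_j - theta_i)
    diff = theta[None, :] - theta[:, None]          # diff[i, j] = theta_j - theta_i
    return theta + dt * (omega + g(diff).mean(axis=1))

rng = np.random.default_rng(0)
theta = 2.0 * np.pi * rng.random(N)
omega = 1.0 + 1e-3 * rng.standard_normal(N)         # small detunings act as the "inputs"
for _ in range(100_000):
    theta = step(theta, omega)                      # the sequence of cluster states encodes them
```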

  10. Input and output constraints-based stabilisation of switched nonlinear systems with unstable subsystems and its application

    Science.gov (United States)

    Chen, Chao; Liu, Qian; Zhao, Jun

    2018-01-01

    This paper studies the problem of stabilisation of switched nonlinear systems with output and input constraints. We propose a recursive approach to solve this issue. None of the subsystems is assumed to be stabilisable, while the switched system is stabilised by dual design of controllers for the subsystems and a switching law. When dealing only with bounded input, we provide nested switching controllers using an extended backstepping procedure. If both input and output constraints are taken into consideration, a Barrier Lyapunov Function is employed to construct multiple Lyapunov functions for the switched nonlinear system in the backstepping procedure. As a practical example, the control design of an equilibrium manifold expansion model of an aero-engine is given to demonstrate the effectiveness of the proposed design method.

  11. MDS MIC Catalog Inputs

    Science.gov (United States)

    Johnson-Throop, Kathy A.; Vowell, C. W.; Smith, Byron; Darcy, Jeannette

    2006-01-01

    This viewgraph presentation reviews the inputs to the MDS Medical Information Communique (MIC) catalog. The purpose of the group is to provide input for updating the MDS MIC Catalog and to request that MMOP assign Action Item to other working groups and FSs to support the MITWG Process for developing MIC-DDs.

  12. Metabolic Profiling and Quantification of Neurotransmitters in Mouse Brain by Gas Chromatography-Mass Spectrometry.

    Science.gov (United States)

    Jäger, Christian; Hiller, Karsten; Buttini, Manuel

    2016-09-01

    Metabolites are key mediators of cellular functions, and have emerged as important modulators in a variety of diseases. Recent developments in translational biomedicine have highlighted the importance of not looking at just one disease marker or disease-inducing molecule, but at populations thereof, to gain a global understanding of cellular function in health and disease. The goal of metabolomics is the systematic identification and quantification of metabolite populations. One of the most pressing issues of our times is the understanding of normal and diseased nervous tissue functions. To ensure high-quality data, proper sample processing is crucial. Here, we present a method for the extraction of metabolites from brain tissue and their subsequent preparation for non-targeted gas chromatography-mass spectrometry (GC-MS) measurement, and give some guidelines for processing of raw data. In addition, we present a sensitive screening method for neurotransmitters based on GC-MS in selected ion monitoring mode. The precise multi-analyte detection and quantification of amino acid and monoamine neurotransmitters can be used for further studies such as metabolic modeling. Our protocol can be applied to shed light on nervous tissue function in health, as well as on neurodegenerative disease mechanisms and the effect of experimental therapeutics at the metabolic level. © 2016 by John Wiley & Sons, Inc.

  13. Supplementary High-Input Impedance Voltage-Mode Universal Biquadratic Filter Using DVCCs

    Directory of Open Access Journals (Sweden)

    Jitendra Mohan

    2012-01-01

    Full Text Available To further extend the existing knowledge on voltage-mode universal biquadratic filters, in this paper, a new biquadratic filter circuit with single input and multiple outputs is proposed, employing three differential voltage current conveyors (DVCCs), three resistors, and two grounded capacitors. The proposed circuit realizes all the standard filter functions, that is, high-pass, band-pass, low-pass, notch, and all-pass filters simultaneously. The circuit enjoys the features of high input impedance, orthogonal control of the resonance angular frequency (ωo) and quality factor (Q) via grounded resistors, and the use of grounded capacitors, which is ideal for IC implementation.

  14. A methodology for uncertainty quantification in quantitative technology valuation based on expert elicitation

    Science.gov (United States)

    Akram, Muhammad Farooq Bin

    The management of technology portfolios is an important element of aerospace system design. New technologies are often applied to new product designs to ensure their competitiveness at the time they are introduced to market. The future performance of yet-to-be designed components is inherently uncertain, necessitating subject matter expert knowledge, statistical methods and financial forecasting. Estimates of the appropriate parameter settings often come from disciplinary experts, who may disagree with each other because of varying experience and background. Due to the inherently uncertain nature of expert elicitation in the technology valuation process, appropriate uncertainty quantification and propagation is very critical. The uncertainty in defining the impact of an input on performance parameters of a system makes it difficult to use traditional probability theory. Often the available information is not enough to assign appropriate probability distributions to uncertain inputs. Another problem faced during technology elicitation pertains to technology interactions in a portfolio. When multiple technologies are applied simultaneously on a system, their cumulative impact is often non-linear. Current methods assume that technologies are either incompatible or linearly independent. It is observed that in case of lack of knowledge about the problem, epistemic uncertainty is the most suitable representation of the process. It reduces the number of assumptions during the elicitation process, when experts are forced to assign probability distributions to their opinions without sufficient knowledge. Epistemic uncertainty can be quantified by many techniques. In the present research it is proposed that interval analysis and the Dempster-Shafer theory of evidence are better suited for the quantification of epistemic uncertainty in the technology valuation process. The proposed technique seeks to offset some of the problems faced when using deterministic or traditional probabilistic approaches for uncertainty quantification in technology valuation.
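
    Of the two techniques proposed, Dempster-Shafer combination is compact enough to sketch; the focal sets and masses below are invented expert opinions about a technology's impact level.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions over frozenset focal elements (Dempster's rule)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                 # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict; sources cannot be combined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two hypothetical expert opinions on a technology's impact {low, med, high}:
m1 = {frozenset({"low"}): 0.6, frozenset({"low", "med"}): 0.4}
m2 = {frozenset({"med"}): 0.5, frozenset({"low", "med", "high"}): 0.5}
print(dempster_combine(m1, m2))
```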

  15. Inhibitory Gating of Basolateral Amygdala Inputs to the Prefrontal Cortex.

    Science.gov (United States)

    McGarry, Laura M; Carter, Adam G

    2016-09-07

    Interactions between the prefrontal cortex (PFC) and basolateral amygdala (BLA) regulate emotional behaviors. However, a circuit-level understanding of functional connections between these brain regions remains incomplete. The BLA sends prominent glutamatergic projections to the PFC, but the overall influence of these inputs is predominantly inhibitory. Here we combine targeted recordings and optogenetics to examine the synaptic underpinnings of this inhibition in the mouse infralimbic PFC. We find that BLA inputs preferentially target layer 2 corticoamygdala over neighboring corticostriatal neurons. However, these inputs make even stronger connections onto neighboring parvalbumin- and somatostatin-expressing interneurons. Inhibitory connections from these two populations of interneurons are also much stronger onto corticoamygdala neurons. Consequently, BLA inputs are able to drive robust feedforward inhibition via two parallel interneuron pathways. Moreover, the contributions of these interneurons shift during repetitive activity, due to differences in short-term synaptic dynamics. Thus, parvalbumin interneurons are activated at the start of stimulus trains, whereas somatostatin interneuron activation builds during these trains. Together, these results reveal how the BLA impacts the PFC through a complex interplay of direct excitation and feedforward inhibition. They also highlight the roles of targeted connections onto multiple projection neurons and interneurons in this cortical circuit. Our findings provide a mechanistic understanding for how the BLA can influence the PFC circuit, with important implications for how this circuit participates in the regulation of emotion. The prefrontal cortex (PFC) and basolateral amygdala (BLA) interact to control emotional behaviors. Here we show that BLA inputs elicit direct excitation and feedforward inhibition of layer 2 projection neurons in infralimbic PFC. BLA inputs are much stronger at corticoamygdala neurons compared with neighboring corticostriatal neurons.

  16. PCC/SRC, PCC and SRC Calculation from Multivariate Input for Sensitivity Analysis

    International Nuclear Information System (INIS)

    Iman, R.L.; Shortencarier, M.J.; Johnson, J.D.

    1995-01-01

    1 - Description of program or function: PCC/SRC is designed for use in conjunction with sensitivity analyses of complex computer models. PCC/SRC calculates the partial correlation coefficients (PCC) and the standardized regression coefficients (SRC) from the multivariate input to, and output from, a computer model. 2 - Method of solution: PCC/SRC calculates the coefficients on either the original observations or on the ranks of the original observations. These coefficients provide alternative measures of the relative contribution (importance) of each of the various input variables to the observed variations in output. Relationships between the coefficients and differences in their interpretations are identified. If the computer model output has an associated time or spatial history, PCC/SRC will generate a graph of the coefficients over time or space for each input-variable, output-variable combination of interest, indicating the importance of each input value over time or space. 3 - Restrictions on the complexity of the problem - Maxima of: 100 observations, 100 different time steps or intervals between successive dependent variable readings, 50 independent variables (model input), 20 dependent variables (model output), 10 ordered triples specifying intervals between dependent variable readings.
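
    A sketch of the two coefficient types PCC/SRC computes, on raw values or on ranks; it follows the standard statistical definitions rather than the Fortran program's internals.

```python
import numpy as np
from scipy.stats import rankdata

def pcc_src(X, y, on_ranks=False):
    """Partial correlation (PCC) and standardized regression (SRC) coefficients.

    X: (n_obs, n_inputs) model inputs; y: (n_obs,) model output."""
    if on_ranks:                                   # rank version gives PRCC / SRRC
        X = np.apply_along_axis(rankdata, 0, X)
        y = rankdata(y)
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)  # SRC = coefs on standardized data
    pcc = np.empty(X.shape[1])
    for k in range(X.shape[1]):
        others = np.delete(Xs, k, axis=1)
        # residualize x_k and y on the remaining inputs, then correlate residuals
        rx = Xs[:, k] - others @ np.linalg.lstsq(others, Xs[:, k], rcond=None)[0]
        ry = ys - others @ np.linalg.lstsq(others, ys, rcond=None)[0]
        pcc[k] = np.corrcoef(rx, ry)[0, 1]
    return pcc, src
```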

  17. 'Motion frozen' quantification and display of myocardial perfusion gated SPECT

    International Nuclear Information System (INIS)

    Slomka, P.J.; Hurwitz, G.A.; Baddredine, M.; Baranowski, J.; Aladl, U.E.

    2002-01-01

    Aim: Gated SPECT imaging incorporates both functional and perfusion information of the left ventricle (LV). However, perfusion data are confounded by the effect of ventricular motion. Most existing quantification paradigms simply add all gated frames and then proceed to extract the perfusion information from static images, discarding the effects of cardiac motion. In an attempt to improve the reliability and accuracy of cardiac SPECT quantification, we propose to eliminate the LV motion prior to the perfusion quantification via an automated image warping algorithm. Methods: A pilot series of 14 male and 11 female gated stress SPECT images acquired with 8 time bins was co-registered to the coordinates of the 3D normal templates. Subsequently, the LV endo- and epicardial 3D points (300-500) were identified on end-systolic (ES) and end-diastolic (ED) frames, defining the ES-ED motion vectors. A nonlinear image warping algorithm (thin-plate spline) was then applied to warp the end-systolic frame onto the end-diastolic frame using the corresponding ES-ED motion vectors. The remaining 6 intermediate frames were also transformed to the ED coordinates using fractions of the motion vectors. Such warped images were then summed to provide the LV perfusion image in the ED phase but with counts from the full cycle. Results: The identification of the ED/ES corresponding points was successful in all cases. The corrected displacement between ED and ES images was up to 25 mm. The summed images had the appearance of the ED frames but were much less noisy, since all the counts were used. The spatial resolution of such images appeared higher than that of summed gated images, especially in the female scans. These 'motion frozen' images could be displayed and quantified as regular non-gated tomograms, including the polar map paradigm. Conclusions: This image processing technique may improve the effective image resolution of summed gated myocardial perfusion images used for quantification.
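
    In outline, the 'motion frozen' summation warps each gated frame toward the ED geometry with a fraction of the full ED-ES displacement and then sums. The sketch below assumes a dense displacement field is already available (in the study it is interpolated from the 300-500 contour points with thin-plate splines).

```python
import numpy as np
from scipy.ndimage import map_coordinates

def motion_freeze(frames, disp_ed_to_es):
    """Sum gated frames after warping each to end-diastolic (ED) coordinates.

    frames: sequence of 3-D gated volumes, frames[0] = ED, frames[-1] = ES.
    disp_ed_to_es: dense displacement field, shape (3, *vol.shape), assumed given."""
    grid = np.indices(frames[0].shape).astype(float)
    summed = np.asarray(frames[0], dtype=float).copy()
    n = len(frames) - 1
    for k in range(1, len(frames)):
        # sample frame k at the ED grid displaced by a fraction k/n of the motion
        coords = grid + (k / n) * disp_ed_to_es
        summed += map_coordinates(frames[k], coords, order=1)
    return summed
```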

  18. Real-time PCR for the quantification of fungi in planta.

    Science.gov (United States)

    Klosterman, Steven J

    2012-01-01

    Methods enabling quantification of fungi in planta can be useful for a variety of applications. In combination with information on plant disease severity, indirect quantification of fungi in planta offers an additional tool in the screening of plants that are resistant to fungal diseases. In this chapter, a method is described for the quantification of DNA from a fungus in plant leaves using real-time PCR (qPCR). Although the method described entails quantification of the fungus Verticillium dahliae in lettuce leaves, the methodology described would be useful for other pathosystems as well. The method utilizes primers that are specific for amplification of a β-tubulin sequence from V. dahliae and a lettuce actin gene sequence as a reference for normalization. This approach enabled quantification of V. dahliae in the amount of 2.5 fg/ng of lettuce leaf DNA at 21 days following plant inoculation.
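
    A minimal sketch of the two-curve readout used in such assays: the fungal target and the plant reference are each read off their own standard curve and expressed as a ratio. The curve parameters are placeholders, not the published assay's.

```python
# Hypothetical standard-curve parameters (slope, intercept), fitted as Cq vs log10(amount):
FUNGUS_CURVE = (-3.35, 22.0)   # V. dahliae beta-tubulin target, amount in fg
PLANT_CURVE = (-3.40, 18.5)    # lettuce actin reference, amount in ng

def amount_from_cq(cq, curve):
    slope, intercept = curve
    return 10.0 ** ((cq - intercept) / slope)

def fungal_load(cq_fungus, cq_plant):
    """fg of fungal DNA per ng of plant leaf DNA."""
    return amount_from_cq(cq_fungus, FUNGUS_CURVE) / amount_from_cq(cq_plant, PLANT_CURVE)

print(f"{fungal_load(27.5, 17.2):.2f} fg/ng")
```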

  19. Upper Limb Evaluation in Duchenne Muscular Dystrophy: Fat-Water Quantification by MRI, Muscle Force and Function Define Endpoints for Clinical Trials.

    Science.gov (United States)

    Ricotti, Valeria; Evans, Matthew R B; Sinclair, Christopher D J; Butler, Jordan W; Ridout, Deborah A; Hogrel, Jean-Yves; Emira, Ahmed; Morrow, Jasper M; Reilly, Mary M; Hanna, Michael G; Janiczek, Robert L; Matthews, Paul M; Yousry, Tarek A; Muntoni, Francesco; Thornton, John S

    2016-01-01

    A number of promising experimental therapies for Duchenne muscular dystrophy (DMD) are emerging. Clinical trials currently rely on invasive biopsies or motivation-dependent functional tests to assess outcome. Quantitative muscle magnetic resonance imaging (MRI) could offer a valuable alternative and permit inclusion of non-ambulant DMD subjects. The aims of our study were to explore the responsiveness of upper-limb MRI muscle-fat measurement as a non-invasive objective endpoint for clinical trials in non-ambulant DMD, and to investigate the relationship of these MRI measures to those of muscle force and function. 15 non-ambulant DMD boys (mean age 13.3 y) and 10 age- and gender-matched healthy controls (mean age 14.6 y) were recruited. 3-Tesla MRI fat-water quantification was used to measure forearm muscle fat transformation in non-ambulant DMD boys compared with healthy controls. DMD boys were assessed at 4 time-points over 12 months, using 3-point Dixon MRI to measure muscle fat-fraction (f.f.). Images from ten forearm muscles were segmented and mean f.f. and cross-sectional area recorded. DMD subjects also underwent comprehensive upper limb function and force evaluation. Overall mean baseline forearm f.f. was higher in DMD than in healthy controls, and the results support muscle f.f. as a biomarker to monitor disease progression in the upper limb in non-ambulant DMD, with sensitivity adequate to detect group-level change over time intervals practical for use in clinical trials. Clinical validity is supported by the association of the progressive fat transformation of muscle with loss of muscle force and function.
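
    The reported muscle f.f. is, per voxel, fat/(fat + water), averaged over a segmented muscle ROI; a minimal sketch, assuming the fat and water images have already been reconstructed from the 3-point Dixon acquisition:

```python
import numpy as np

def fat_fraction(fat: np.ndarray, water: np.ndarray) -> np.ndarray:
    """Voxel-wise fat fraction from separated Dixon fat/water images."""
    eps = 1e-9                      # avoid division by zero in background
    return fat / (fat + water + eps)

def muscle_ff(fat, water, mask):
    """Mean f.f. over a segmented muscle ROI (boolean mask)."""
    return float(fat_fraction(fat, water)[mask].mean())
```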

  20. A machine learning approach for efficient uncertainty quantification using multiscale methods

    Science.gov (United States)

    Chan, Shing; Elsheikh, Ahmed H.

    2018-02-01

    Several multiscale methods account for sub-grid scale features using coarse scale basis functions. For example, in the Multiscale Finite Volume method the coarse scale basis functions are obtained by solving a set of local problems over dual-grid cells. We introduce a data-driven approach for the estimation of these coarse scale basis functions. Specifically, we employ a neural network predictor fitted using a set of solution samples from which it learns to generate subsequent basis functions at a lower computational cost than solving the local problems. The computational advantage of this approach is realized for uncertainty quantification tasks where a large number of realizations has to be evaluated. We attribute the ability to learn these basis functions to the modularity of the local problems and the redundancy of the permeability patches between samples. The proposed method is evaluated on elliptic problems yielding very promising results.
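
    The idea reduces to supervised regression from local permeability patches to the basis-function values a local solve would return. In the sketch below, random arrays stand in for the training pairs, and scikit-learn's MLPRegressor stands in for the paper's network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder training set: permeability patch per dual-grid cell -> basis values
# that solving the local problem would produce (both are random stand-ins here).
n_samples, patch_size, n_basis = 2000, 7 * 7, 8 * 8
X = np.random.lognormal(size=(n_samples, patch_size))
Y = np.random.rand(n_samples, n_basis)

model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=300)
model.fit(np.log(X), Y)            # log-permeability is a common feature choice

# At prediction time the network replaces the local solve for each new realization,
# which is where the speed-up for uncertainty quantification comes from.
basis_pred = model.predict(np.log(np.random.lognormal(size=(1, patch_size))))
```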

  1. Unconventional barometry and rheometry: new quantification approaches for mechanically-controlled microstructures

    Science.gov (United States)

    Tajcmanova, L.; Moulas, E.; Vrijmoed, J.; Podladchikov, Y.

    2016-12-01

    Estimation of pressure-temperature (P-T) conditions from petrographic observations in metamorphic rocks has become common practice in petrology studies during the last 50 years. These data often serve as a key input in geodynamic reconstructions and thus directly influence our understanding of lithospheric processes. Such an approach might have led the metamorphic geology field to a certain level of quiescence. In the classical view of metamorphic quantification approaches, fast viscous relaxation (and therefore constant pressure across the rock microstructure) is assumed, with chemical diffusion being the limiting factor in equilibration. Recently, we have focused on the other possible scenario - fast chemical diffusion and slow viscous relaxation - which brings an alternative interpretation of chemical zoning found in high-grade rocks. The aim has been to provide insight into the role of mechanically maintained pressure variations on multi-component chemical zoning in minerals. Furthermore, we used the pressure information from the mechanically-controlled microstructure for rheological constraints. We show an unconventional way of relating direct microstructural observations in rocks to the nonlinearity of rheology at time scales unattainable by laboratory measurements. Our analysis documents that mechanically controlled microstructures that have been preserved over geological times can be used to deduce flow-law parameters and in turn estimate stress levels of minerals in their natural environment. The development of the new quantification approaches has opened new horizons in understanding phase transformations in the Earth's lithosphere. Furthermore, the new data generated can serve as food for thought for the next generation of fully coupled numerical codes that involve reacting materials while respecting conservation of mass, momentum and energy.

  2. Quantification of brain images using Korean standard templates and structural and cytoarchitectonic probabilistic maps

    International Nuclear Information System (INIS)

    Lee, Jae Sung; Lee, Dong Soo; Kim, Yu Kyeong

    2004-01-01

    Population-based structural and functional maps of the brain provide effective tools for the analysis and interpretation of complex and individually variable brain data. Brain MRI and PET standard templates and statistical probabilistic maps based on image data of Korean normal volunteers have been developed, and probabilistic maps based on cytoarchitectonic data have been introduced. A quantification method using these data was developed for the objective assessment of regional intensity in brain images. Age-, gender- and ethnicity-specific anatomical and functional brain templates based on MR and PET images of Korean normal volunteers were developed. Korean structural probabilistic maps for 89 brain regions and cytoarchitectonic probabilistic maps for 13 Brodmann areas were transformed onto the standard templates. Brain FDG PET and SPGR MR images of normal volunteers were spatially normalized onto the template of each modality and gender. Regional uptake of radiotracers in PET and gray matter concentration in MR images were then quantified by averaging (or summing) regional intensities weighted using the probabilistic maps of brain regions. Regionally specific effects of aging on glucose metabolism in the cingulate cortex were also examined. The quantification program could generate results for a single spatially normalized image in 20 seconds. Glucose metabolism change in the cingulate gyrus was regionally specific: ratios of glucose metabolism in the rostral anterior cingulate vs. posterior cingulate and the caudal anterior cingulate vs. posterior cingulate decreased significantly as age increased. 'Rostral anterior' / 'posterior' decreased by 3.1% per decade of age (p < 10⁻¹¹, r=0.81) and 'caudal anterior' / 'posterior' decreased by 1.7% (p < 10⁻⁸, r=0.72). The ethnicity-specific standard templates, probabilistic maps and quantification program developed in this study will be useful for the analysis of brain images of Korean people, since differences between ethnic groups are taken into account.
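
    The core quantification step, weighting a spatially normalized image by a region's probabilistic map, can be sketched as:

```python
import numpy as np

def regional_uptake(image: np.ndarray, prob_map: np.ndarray) -> float:
    """Probability-weighted mean intensity of one region.

    image: spatially normalized PET (or MR gray-matter) volume;
    prob_map: per-voxel probability of belonging to the region."""
    return float((image * prob_map).sum() / prob_map.sum())

# e.g. the ratio used in the aging analysis (hypothetical variable names):
# ratio = regional_uptake(pet, rostral_anterior_cing) / regional_uptake(pet, posterior_cing)
```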

  4. Quantification of video-taped images in microcirculation research using inexpensive imaging software (Adobe Photoshop).

    Science.gov (United States)

    Brunner, J; Krummenauer, F; Lehr, H A

    2000-04-01

    Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that inexpensive, commercially available computer software (Adobe Photoshop), run on a Macintosh G3 computer with an inbuilt graphic capture board, provides versatile, easy-to-use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density, and (iii) of postischemic leakage of FITC-labeled high-molecular-weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive, commercially available imaging programs (i.e., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.

  5. Transforming the Way We Teach Function Transformations

    Science.gov (United States)

    Faulkenberry, Eileen Durand; Faulkenberry, Thomas J.

    2010-01-01

    In this article, the authors discuss "function," a well-defined rule that relates inputs to outputs. They have found that by using the input-output definition of "function," they can examine transformations of functions simply by looking at changes to input or output and the respective changes to the graph. Applying transformations to the input…
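
    The input-output view translates directly into code: changes to the input move the graph horizontally, changes to the output move it vertically. A minimal illustration:

```python
def transform(f, a=0.0, b=1.0, c=0.0, d=1.0):
    """Return g(x) = d * f(b * (x - a)) + c.

    Changes to the *input* (a shifts, b stretches horizontally) move the graph
    along x; changes to the *output* (c shifts, d stretches) act along y."""
    return lambda x: d * f(b * (x - a)) + c

f = lambda x: x ** 2
g = transform(f, a=3, c=-2)      # shift right 3 (input), down 2 (output)
assert g(3) == -2                # the vertex (0, 0) maps to (3, -2)
```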

  6. PREP-45, Input Preparation for CITATION-2

    International Nuclear Information System (INIS)

    Ramalho Carlos, C.A.

    1995-01-01

    1 - Description of program or function: A Fortran program has been created, which saves much effort in preparing sections 004 (intervals in the coordinates) and 005 (zone numbers) of the input data file for the multigroup theory code CITATION (version CITATION-2, NESC0387/09), particularly when a thin complicated mesh is used. 2 - Method of solution: A domain is defined for CITATION calculations through specifying its sub-domains (e.g. graphite, lead, beryllium, water and fuel sub-domains) in a compact and simple way. An independent and previous geometrical specification is made of the various types of elements which are envisaged to constitute the contents of the reactor core grid positions. Then the load table for the configuration is input and scanned throughout, thus enabling the geometric mesh description to be produced (section 004). Also the zone placement (section 005) is achieved by means of element description subroutines for the different types of element (which may require appropriate but simple changes in the actual cases). The output of PREP45 is directly obtained in a format which is compatible with CITATION-2 input. 3 - Restrictions on the complexity of the problem: Only rectangular two-dimensional Cartesian coordinates are considered. A maximum of 12 sub-domains in the x direction (18 in the y direction) and up to 8 distinct element types are considered in this version. Other limitations exist which can nevertheless be overcome with simple changes in the source program

  7. Life Cycle Assessment (LCA for Wheat (Triticum aestivum L. Production Systems of Iran: 1- Comparison of Inputs Level

    Directory of Open Access Journals (Sweden)

    Mahdi Nassiri Mahallati

    2018-02-01

    Full Text Available Introduction: Agricultural intensification has serious environmental consequences such as depletion of non-renewable resources, emission of greenhouse gases, threats to biodiversity and pollution of both surface and underground water resources. Life cycle assessment (LCA) provides a standard method for assessing environmental impacts from various economic activities, including agriculture, and covers a wide range of impact categories across the entire production chain. Over the past few decades, food production in Iran has increased drastically due to heavier use of chemical inputs. Since the LCA method has been overlooked for assessing the effects of agricultural intensification in Iran and few studies have been conducted at the local level (such as provinces or cities), the purpose of this research is the evaluation of wheat production systems throughout the country, based on the level of intensification, using the LCA method. Materials and Methods: Fourteen provinces covering 80 percent of the total cultivated area of wheat production in the country were subjected to a cradle-to-gate LCA study using the standard method. The selected provinces were classified as low, medium and high input based on the level of intensification, and all inputs and emissions were estimated within the system boundaries during the inventory stage. Required data for yield and level of applied inputs for the 14 provinces were collected from the official databases of the Ministry of Jihad Agriculture. The various environmental impacts, including abiotic resource depletion, land use, global warming potential, acidification and eutrophication potential, and human, aquatic and terrestrial ecotoxicity potential of wheat production systems over the country, were studied based on emission coefficients and characterization factors provided by standard literature. The integrated effects of emission of each impact category were calculated per functional unit (one hectare of cultivated area as well as one ton of wheat grain).

  8. Input filter compensation for switching regulators

    Science.gov (United States)

    Lee, F. C.; Kelkar, S. S.

    1982-01-01

    The problems caused by the interaction between the input filter, output filter, and the control loop are discussed. The input filter design is made more complicated because of the need to avoid performance degradation and also stay within the weight and loss limitations. Conventional input filter design techniques are then discussed. The concept of pole-zero cancellation is reviewed; this concept is the basis for an approach to control the peaking of the output impedance of the input filter and thus mitigate some of the problems caused by the input filter. The proposed approach for control of the peaking of the output impedance of the input filter is to use a feedforward loop working in conjunction with feedback loops, thus forming a total state control scheme. The design of the feedforward loop for a buck regulator is described. A possible implementation of the feedforward loop design is suggested.
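
    The peaking that the feedforward loop is meant to tame can be seen by computing the output impedance of a lightly damped LC input filter; the component values below are illustrative only.

```python
import numpy as np

# Lightly damped LC input filter: Zout(s) = (R_L + sL) in parallel with 1/(sC).
L, C, R_L = 100e-6, 50e-6, 0.05          # henry, farad, ohm (illustrative)
f = np.logspace(2, 5, 400)               # 100 Hz .. 100 kHz
s = 2j * np.pi * f
z_ind, z_cap = R_L + s * L, 1.0 / (s * C)
z_out = z_ind * z_cap / (z_ind + z_cap)

# The output impedance peaks near the resonance f0 = 1/(2*pi*sqrt(L*C)).
k = np.abs(z_out).argmax()
print(f"|Zout| peaks at ~{np.abs(z_out)[k]:.1f} ohm near {f[k]:.0f} Hz")
```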

  9. Application of the homology method for quantification of low-attenuation lung region in patients with and without COPD

    Directory of Open Access Journals (Sweden)

    Nishio M

    2016-09-01

    Full Text Available Mizuho Nishio,1 Kazuaki Nakane,2 Yutaka Tanaka3 1Clinical PET Center, Institute of Biomedical Research and Innovation, Hyogo, Japan; 2Department of Molecular Pathology, Osaka University Graduate School of Medicine and Health Science, Osaka, Japan; 3Department of Radiology, Chibune General Hospital, Osaka, Japan. Background: Homology is a mathematical concept that can be used to quantify the degree of contact. Recently, image processing with the homology method has been proposed. In this study, we used the homology method and computed tomography images to quantify emphysema. Methods: This study included 112 patients who had undergone computed tomography and pulmonary function tests. Low-attenuation lung regions were evaluated by the homology method, and homology-based emphysema quantification (b0, b1, nb0, nb1, and R) was performed. For comparison, the percentage of low-attenuation lung area (LAA%) was also obtained. Relationships between emphysema quantification and pulmonary function test results were evaluated by Pearson's correlation coefficients. In addition to the correlation analysis, the patients were divided into the following three groups based on guidelines of the Global initiative for chronic Obstructive Lung Disease: Group A, nonsmokers; Group B, smokers without COPD, mild COPD, and moderate COPD; Group C, severe COPD and very severe COPD. The homology-based emphysema quantification and LAA% were compared among these groups. Results: For forced expiratory volume in 1 second/forced vital capacity, the correlation coefficients were as follows: LAA%, -0.603; b0, -0.460; b1, -0.500; nb0, -0.449; nb1, -0.524; and R, -0.574. For forced expiratory volume in 1 second, the coefficients were as follows: LAA%, -0.461; b0, -0.173; b1, -0.314; nb0, -0.191; nb1, -0.329; and R, -0.409. Between Groups A and B, the difference in nb0 was significant (P-value = 0.00858), and the differences in the other types of quantification were not significant. Conclusion: The feasibility of homology-based emphysema quantification was demonstrated.
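
    LAA% and the simplest homology number, b0 (the count of connected low-attenuation components, with nb0 its area-normalized version), can be sketched as below. The -950 HU cutoff is a commonly used emphysema threshold assumed here, and b1 (the number of holes) is not computed.

```python
import numpy as np
from scipy import ndimage

def laa_b0(hu, lung_mask, thresh=-950.0):
    """LAA%, b0 and area-normalized nb0 for one CT slice.

    hu: array of Hounsfield units; lung_mask: boolean lung segmentation."""
    laa = (hu < thresh) & lung_mask
    laa_percent = 100.0 * laa.sum() / lung_mask.sum()
    _, b0 = ndimage.label(laa)            # b0 = number of connected components
    nb0 = b0 / lung_mask.sum()            # normalization akin to nb0 in the study
    return laa_percent, b0, nb0
```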

  10. Development and validation of gui based input file generation code for relap

    International Nuclear Information System (INIS)

    Anwar, M.M.; Khan, A.A.; Chughati, I.R.; Chaudri, K.S.; Inyat, M.H.; Hayat, T.

    2009-01-01

    Reactor Excursion and Leak Analysis Program (RELAP) is a widely accepted computer code for thermal hydraulics modeling of Nuclear Power Plants. It calculates thermal-hydraulic transients in water-cooled nuclear reactors by solving approximations to the one-dimensional, two-phase equations of hydraulics in an arbitrarily connected system of nodes. However, the preparation of the input file and the subsequent analysis of results in this code is a tedious task. A Graphical User Interface (GUI) for preparation of the RELAP-5 input file was therefore developed, and the GUI-generated input file was validated. The GUI is developed in Microsoft Visual Studio using Visual C Sharp (C#) as the programming language. The nodalization diagram is drawn graphically, and the program contains various component forms along with the starting data form, which are launched for property assignment to generate the input file cards, serving as a GUI for the user. The GUI is provided with an Open/Save function to store and recall the nodalization diagram along with the components' properties. The GUI-generated input file was validated for several case studies, and individual component cards were compared with the originally required format. The generated input file was found to be consistent with the requirements of RELAP. The GUI provides a useful platform for simulating complex hydrodynamic problems efficiently with RELAP. (author)

  11. Quantification and visualization of alveolar bone resorption from 3D dental CT images

    Energy Technology Data Exchange (ETDEWEB)

    Nagao, Jiro; Mori, Kensaku; Kitasaka, Takayuki; Suenaga, Yasuhito [Nagoya University, Graduate School of Information Science, Nagoya (Japan); Yamada, Shohzoh; Naitoh, Munetaka [Aichi-Gakuin University, School of Dentistry, Nagoya (Japan)

    2007-06-15

    Purpose: A computer-aided diagnosis (CAD) system for quantifying and visualizing alveolar bone resorption caused by periodontitis was developed based on three-dimensional (3D) image processing of dental CT images. Methods: The proposed system enables visualization and quantification of resorption of alveolar bone surrounding and between the roots of teeth. It has the following functions: (1) vertical measurement of the depth of resorption surrounding the tooth in 3D images, avoiding physical obstruction; (2) quantification of the amount of resorption in the furcation area; and (3) visualization of quantification results by pseudo-color maps, graphs, and motion pictures. The resorption measurement accuracy in the area surrounding teeth was evaluated by comparison with a dentist's assessment of five real patient CT images, giving an average absolute difference of 0.87 mm. An artificial image with known mathematical ground truth was also used for measurement evaluation. Results: The average absolute difference was 0.36 and 0.10 mm for the surrounding and furcation areas, respectively. The system provides an intuitive presentation of the measurement results. Conclusion: Computer-aided diagnosis of 3D dental CT scans is feasible, and the technique is a promising new tool for the quantitative evaluation of periodontal bone loss. (orig.)

  12. Computer-assisted radiological quantification of rheumatoid arthritis

    International Nuclear Information System (INIS)

    Peloschek, P.L.

    2000-03-01

    The specific objective was to develop the layout and structure of a platform for effective quantification of rheumatoid arthritis (RA). A fully operative Java stand-alone application software (RheumaCoach) was developed to support the efficacy of the scoring process in RA (Web address: http://www.univie.ac.at/radio/radio.htm). Addressed as potential users of such a program are physicians enrolled in clinical trials to evaluate the course of RA and its modulation with drug therapies, and scientists developing new scoring modalities. The software 'RheumaCoach' consists of three major modules: The Tutorial starts with 'Rheumatoid Arthritis', to teach the basic pathology of the disease. Afterwards the section 'Imaging Standards' explains how to produce proper radiographs. 'Principles - How to use the Larsen Score', 'Radiographic Findings' and 'Quantification by Scoring' explain the requirements for unbiased scoring of RA. At the Data Input Sheet care was taken to follow the radiologist's approach in analysing films as published previously. At the Compute Sheet the calculated Larsen score may be compared with former scores, and the further possibilities (calculate, export, print, send) are easily accessible. In a first pre-clinical study the system was tested in an unstructured evaluation. Two structured evaluations (30 fully documented and blinded cases of RA; four radiologists scored hands and feet with or without the RheumaCoach) followed. Between the evaluations we continuously improved the software. For all readers, use of the RheumaCoach speeded up the procedure; altogether, scoring without computer assistance needed about 20% more time. Availability of the programme via the internet provides common access for potential quality control in multi-centre studies. Documentation of results in a specifically designed printout improves communication between radiologists and rheumatologists. The possibilities of direct export to other programmes and electronic archiving further extend the system's utility.

  13. Strategy study of quantification harmonization of SUV in PET/CT images

    International Nuclear Information System (INIS)

    Fischer, Andreia Caroline Fischer da Silveira

    2014-01-01

    In clinical practice, PET/CT images are often analyzed qualitatively by visual comparison of tumor lesion and normal tissue uptake, and semi-quantitatively by means of a parameter called SUV (Standardized Uptake Value). To ensure that longitudinal studies acquired on different scanners are interchangeable, and that quantification information is comparable, it is necessary to establish a strategy to harmonize SUV quantification. The aim of this study is to evaluate a strategy to harmonize the quantification of PET/CT images performed with different scanner models and manufacturers. For this purpose, a survey of the technical characteristics of equipment and acquisition protocols of clinical images of different PET/CT services in the state of Rio Grande do Sul was conducted. For each scanner, the accuracy of SUV quantification and the Recovery Coefficient (RC) curves were determined, using the clinically relevant and available reconstruction parameters. From these data, harmonized performance specifications among the evaluated scanners were identified, as well as the algorithm that produces, for each one, the most accurate quantification. Finally, the most appropriate reconstruction parameters to harmonize the SUV quantification in each scanner, either regionally or internationally, were identified. It was found that the RC values of the analyzed scanners were overestimated by up to 38%, particularly for objects larger than 17 mm. These results demonstrate the need for further optimization through modification of the reconstruction parameters, and even a change of the reconstruction algorithm used in each scanner. It was observed that there is a decoupling between the best image for PET/CT qualitative analysis and the best image for quantification studies. Thus, the choice of reconstruction method should be tied to the purpose of the PET/CT study in question, since the same reconstruction algorithm is not adequate, in one scanner, for both qualitative analysis and quantification.
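
    For reference, the two quantities the harmonization strategy turns on, body-weight SUV and the recovery coefficient, reduce to one-line formulas (a sketch; the injected dose is assumed decay-corrected to scan time):

```python
import numpy as np

def suv(activity_bq_ml: np.ndarray, injected_dose_bq: float, weight_g: float):
    """Body-weight SUV: tissue concentration / (injected dose / body weight).

    Assumes activity in Bq/mL and a decay-corrected injected dose."""
    return activity_bq_ml / (injected_dose_bq / weight_g)

def recovery_coefficient(measured_mean: float, true_concentration: float) -> float:
    """RC for a phantom sphere: measured / true. RC-vs-size curves are what
    harmonization programs constrain across scanners and reconstructions."""
    return measured_mean / true_concentration
```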

  14. Application of Fuzzy Comprehensive Evaluation Method in Trust Quantification

    Directory of Open Access Journals (Sweden)

    Shunan Ma

    2011-10-01

    Full Text Available Trust can play an important role for the sharing of resources and information in open network environments. Trust quantification is thus an important issue in dynamic trust management. By considering the fuzziness and uncertainty of trust, in this paper, we propose a fuzzy comprehensive evaluation method to quantify trust along with a trust quantification algorithm. Simulation results show that the trust quantification algorithm that we propose can effectively quantify trust and the quantified value of an entity's trust is consistent with the behavior of the entity.
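
    A minimal sketch of fuzzy comprehensive evaluation using the weighted-average composition b = w · R; the criteria, grades and numbers are invented, and the paper may use a different composition operator.

```python
import numpy as np

def fuzzy_comprehensive(weights: np.ndarray, R: np.ndarray) -> np.ndarray:
    """Fuzzy comprehensive evaluation, weighted-average M(., +) composition.

    R[i, j]: membership of criterion i in trust grade j (rows sum to 1);
    weights: importance of each criterion (sums to 1)."""
    b = weights @ R
    return b / b.sum()

# Hypothetical: 3 evidence criteria, 3 trust grades (low, medium, high).
w = np.array([0.5, 0.3, 0.2])
R = np.array([[0.1, 0.3, 0.6],
              [0.2, 0.5, 0.3],
              [0.6, 0.3, 0.1]])
grades = fuzzy_comprehensive(w, R)   # membership of the entity in each trust grade
```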

  15. Negative dielectrophoresis spectroscopy for rare analyte quantification in biological samples

    Science.gov (United States)

    Kirmani, Syed Abdul Mannan; Gudagunti, Fleming Dackson; Velmanickam, Logeeshan; Nawarathna, Dharmakeerthi; Lima, Ivan T., Jr.

    2017-03-01

    We propose the use of negative dielectrophoresis (DEP) spectroscopy as a technique to improve the detection limit of rare analytes in biological samples. We observe a significant dependence of the negative DEP force on functionalized polystyrene beads at the edges of interdigitated electrodes with respect to the frequency of the electric field. We measured the resulting velocity of repulsion for 0% and 0.8% conjugation of avidin with biotin-functionalized polystyrene beads using our automated software, through real-time image processing that monitors the Rayleigh scattering from the beads. A significant difference in the velocity of the beads was observed in the presence of as few as 80 molecules of avidin per biotin-functionalized bead. This technology can be applied to the detection and quantification of rare analytes, which can be useful in the diagnosis and treatment of diseases such as cancer and myocardial infarction, with the use of polystyrene beads functionalized with antibodies for the target biomarkers.

  16. 7 CFR 3430.607 - Stakeholder input.

    Science.gov (United States)

    2010-01-01

    § 3430.607 Stakeholder input (7 CFR, 2010). CSREES shall seek and obtain stakeholder input through a variety of forums...

  17. Isotope correlation techniques for verifying input accountability measurements at a reprocessing plant

    International Nuclear Information System (INIS)

    Umezawa, H.; Nakahara, Y.

    1983-01-01

    Isotope correlation techniques were studied to verify input accountability measurements at a reprocessing plant. On the basis of a historical data bank, correlation between plutonium-to-uranium ratio and isotopic variables was derived as a function of burnup. The burnup was determined from the isotopic ratios of uranium and plutonium, too. Data treatment was therefore made in an iterative manner. The isotopic variables were defined to cover a wide spectrum of isotopes of uranium and plutonium. The isotope correlation techniques evaluated important parameters such as the fuel burnup, the most probable ratio of plutonium to uranium, and the amounts of uranium and plutonium in reprocessing batches in connection with fresh fuel fabrication data. In addition, the most probable values of isotope abundance of plutonium and uranium could be estimated from the plutonium-to-uranium ratio determined, being compared with the reported data for verification. A pocket-computer-based system was developed to enable inspectors to collect and evaluate data in a timely fashion at the input accountability measurement point by the isotope correlation techniques. The device is supported by battery power and completely independent of the operator's system. The software of the system was written in BASIC. The data input can be stored in a cassette tape and transferred into a higher level computer. The correlations used for the analysis were given as a form of analytical function. Coefficients for the function were provided relevant to the type of reactor and the initial enrichment of fuel. (author)
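
    The iterative data treatment can be sketched as a fixed-point loop; f_burnup and g_pu_u stand for the fitted correlation functions (whose coefficients depend on reactor type and initial enrichment) and are hypothetical placeholders.

```python
def estimate_burnup_and_pu_u(ratios, f_burnup, g_pu_u, tol=1e-6, max_iter=100):
    """Iterate between burnup and Pu/U until the correlations agree.

    f_burnup(ratios, pu_u): burnup from U/Pu isotope ratios (hypothetical)
    g_pu_u(burnup): most probable Pu/U ratio at that burnup (hypothetical)"""
    pu_u = 0.0
    burnup = 0.0
    for _ in range(max_iter):
        burnup = f_burnup(ratios, pu_u)
        new_pu_u = g_pu_u(burnup)
        if abs(new_pu_u - pu_u) < tol:
            break
        pu_u = new_pu_u
    return burnup, pu_u
```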

  18. Target-specific M1 inputs to infragranular S1 pyramidal neurons

    Science.gov (United States)

    Fanselow, Erika E.; Simons, Daniel J.

    2016-01-01

    The functional role of input from the primary motor cortex (M1) to primary somatosensory cortex (S1) is unclear; one key to understanding this pathway may lie in elucidating the cell-type specific microcircuits that connect S1 and M1. Recently, we discovered that a subset of pyramidal neurons in the infragranular layers of S1 receive especially strong input from M1 (Kinnischtzke AK, Simons DJ, Fanselow EE. Cereb Cortex 24: 2237–2248, 2014), suggesting that M1 may affect specific classes of pyramidal neurons differently. Here, using combined optogenetic and retrograde labeling approaches in the mouse, we examined the strengths of M1 inputs to five classes of infragranular S1 neurons categorized by their projections to particular cortical and subcortical targets. We found that the magnitude of M1 synaptic input to S1 pyramidal neurons varies greatly depending on the projection target of the postsynaptic neuron. Of the populations examined, M1-projecting corticocortical neurons in L6 received the strongest M1 inputs, whereas ventral posterior medial nucleus-projecting corticothalamic neurons, also located in L6, received the weakest. Each population also possessed distinct intrinsic properties. The results suggest that M1 differentially engages specific classes of S1 projection neurons, thereby regulating the motor-related influence S1 exerts over subcortical structures. PMID:27334960

  19. Iron overload in the liver diagnostic and quantification

    Energy Technology Data Exchange (ETDEWEB)

    Alustiza, Jose M. [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain)]. E-mail: jmalustiza@osatek.es; Castiella, Agustin [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Juan, Maria D. de [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Emparanza, Jose I. [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Artetxe, Jose [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Uranga, Maite [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain)

    2007-03-15

    Hereditary Hemochromatosis is the most frequent modality of iron overload. Since 1996 genetic tests have facilitated significantly the non-invasive diagnosis of the disease. There are however many cases of negative genetic tests that require confirmation by hepatic iron quantification which is traditionally performed by hepatic biopsy. There are many studies that have demonstrated the possibility of performing hepatic iron quantification with Magnetic Resonance. However, a consensus has not been reached yet regarding the technique or the possibility to reproduce the same method of calculus in different machines. This article reviews the state of the art of the question and delineates possible future lines to standardise this non-invasive method of hepatic iron quantification.

  1. World Input-Output Network.

    Directory of Open Access Journals (Sweden)

    Federica Cerina

    Full Text Available Production systems, traditionally analyzed as almost independent national systems, are increasingly connected on a global scale. Only recently becoming available, the World Input-Output Database (WIOD) is one of the first efforts to construct the global multi-regional input-output (GMRIO) tables. By viewing the world input-output system as an interdependent network where the nodes are the individual industries in different economies and the edges are the monetary goods flows between industries, we analyze respectively the global, regional, and local network properties of the so-called world input-output network (WION) and document its evolution over time. At the global level, we find that the industries are highly but asymmetrically connected, which implies that micro shocks can lead to macro fluctuations. At the regional level, we find that world production is still operated nationally or at most regionally, as the communities detected are either individual economies or geographically well-defined regions. Finally, at the local level, for each industry we compare the network-based measures with the traditional measures of backward linkages. We find that network-based measures such as PageRank centrality and community coreness can give valuable insights into identifying the key industries.
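
    As an illustration of the network-based measures mentioned above, the sketch below builds a toy directed, weighted input-output graph and ranks industries by edge-weighted PageRank. The flows, industry names, and parameter values are invented for illustration; this is not the WIOD data or the authors' pipeline.

```python
# Sketch: ranking industries in a toy input-output network by PageRank,
# in the spirit of the WION analysis (illustrative data, not WIOD itself).
import networkx as nx

# Hypothetical monetary flows (supplier -> user, arbitrary units).
flows = [
    ("DEU.Manufacturing", "USA.Construction", 120.0),
    ("CHN.Mining", "DEU.Manufacturing", 80.0),
    ("USA.Services", "USA.Construction", 60.0),
    ("DEU.Manufacturing", "CHN.Mining", 25.0),
    ("USA.Construction", "USA.Services", 40.0),
]

G = nx.DiGraph()
G.add_weighted_edges_from(flows)

# Edge-weighted PageRank: industries receiving large flows from other
# central industries score highly -- one of the centrality notions used
# to identify key industries.
scores = nx.pagerank(G, alpha=0.85, weight="weight")
for industry, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{industry:25s} {score:.3f}")
```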

  2. 7 CFR 3430.15 - Stakeholder input.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Stakeholder input. 3430.15 Section 3430.15... Stakeholder input. Section 103(c)(2) of the Agricultural Research, Extension, and Education Reform Act of 1998... RFAs for competitive programs. CSREES will provide instructions for submission of stakeholder input in...

  3. Disease quantification in dermatology

    DEFF Research Database (Denmark)

    Greve, Tanja Maria; Kamp, Søren; Jemec, Gregor B E

    2013-01-01

    Accurate documentation of disease severity is a prerequisite for clinical research and the practice of evidence-based medicine. The quantification of skin diseases such as psoriasis currently relies heavily on clinical scores. Although these clinical scoring methods are well established and very ...

  4. Input description for BIOPATH

    International Nuclear Information System (INIS)

    Marklund, J.E.; Bergstroem, U.; Edlund, O.

    1980-01-01

    The computer program BIOPATH describes the flow of radioactivity within a given ecosystem after a postulated release of radioactive material and the resulting dose for specified population groups. The present report accounts for the input data necessary to run BIOPATH. The report also contains descriptions of possible control cards and an input example as well as a short summary of the basic theory. (author)

  5. Simplifying BRDF input data for optical signature modeling

    Science.gov (United States)

    Hallberg, Tomas; Pohl, Anna; Fagerström, Jan

    2017-05-01

    Scene simulations of optical signature properties using signature codes normally require input of various parameterized measurement data of surfaces and coatings in order to achieve realistic scene object features. Some of the most important parameters are used in the model of the Bidirectional Reflectance Distribution Function (BRDF) and are normally determined by surface reflectance and scattering measurements. Reflectance measurements of the spectral Directional Hemispherical Reflectance (DHR) at various incident angles can normally be performed in most spectroscopy labs, while measuring the BRDF is more complicated and may not be available at all in many optical labs. We present a method for obtaining the necessary BRDF data directly from DHR measurements for modeling software using the Sandford-Robertson BRDF model. The accuracy of the method is tested by modeling a test surface and comparing results obtained with estimated and with measured BRDF data as input to the model. These results show that using this method gives no significant loss in modeling accuracy.

  6. MEG evidence for conceptual combination but not numeral quantification in the left anterior temporal lobe during language production

    Directory of Open Access Journals (Sweden)

    Paul eDel Prato

    2014-06-01

    Full Text Available The left anterior temporal lobe (LATL) has risen as a leading candidate for a brain locus of composition in language; yet the computational details of its function are unknown. Although most literature discusses it as a combinatory region in very general terms, it has also been proposed to reflect the more specific function of conceptual combination, which in the classic use of this term mainly pertains to the combination of open class words with obvious conceptual contributions. We aimed to distinguish between these two possibilities by contrasting plural nouns in contexts where they were either preceded by a color modifier (red cups), eliciting conceptual combination, or by a number word (two cups), eliciting numeral quantification but no conceptual combination. This contrast was chosen because within a production task, it allows the manipulation of composition type while keeping the physical stimulus constant: a display of two red cups can be named as two cups or red cups depending on the task instruction. These utterances were compared to productions of two-word number and color lists, intended as noncombinatory control conditions. MEG activity was recorded during the planning for production, prior to motion artifacts. As expected on the basis of comprehension studies, color modification elicited increased LATL activity as compared to color lists, demonstrating that this basic combinatory effect is strongly crossmodal. However, numeral quantification did not elicit a parallel effect, suggesting that the function of the LATL is (i) semantic and not syntactic (given that both color modification and numeral quantification involve syntactic composition) and (ii) corresponds more closely to the classical psychological notion of conceptual combination as opposed to a more general semantic combinatory function.

  7. Regge-like initial input and evolution of non-singlet structure ...

    Indian Academy of Sciences (India)

    Regge-like initial input and evolution of non-singlet structure functions from the DGLAP equation up to next-next-to-leading order at low x and low Q². Nayan Mani Nath, Mrinal Kumar Das and Jayanta Kumar Sarma. Department of Physics, Tezpur University, Tezpur 784 028, India; Department of Physics ...

  8. The Generalization Complexity Measure for Continuous Input Data

    Directory of Open Access Journals (Sweden)

    Iván Gómez

    2014-01-01

    The Generalization Complexity measure, defined in Boolean space, quantifies the complexity of data in relationship to the prediction accuracy that can be expected when using a supervised classifier like a neural network, SVM, and so forth. We first extend the original measure for its use with continuous functions and then, using an approach based on the set of Walsh functions, consider the case of having a finite number of data points (input/output pairs), that is, the usual practical case. Using a set of trigonometric functions, a model is constructed that relates the size of the hidden layer of a neural network to the complexity. Finally, we demonstrate the application of the introduced complexity measure, by using the generated model, to the problem of estimating an adequate neural network architecture for real-world data sets.

  9. Comparison of Suitability of the Most Common Ancient DNA Quantification Methods.

    Science.gov (United States)

    Brzobohatá, Kristýna; Drozdová, Eva; Smutný, Jiří; Zeman, Tomáš; Beňuš, Radoslav

    2017-04-01

    Ancient DNA (aDNA) extracted from historical bones is damaged and fragmented into short segments, present in low quantity, and usually copurified with microbial DNA. A wide range of DNA quantification methods are available. The aim of this study was to compare the five most common DNA quantification methods for aDNA. Quantification methods were tested on DNA extracted from skeletal material originating from an early medieval burial site. The tested methods included ultraviolet (UV) absorbance, real-time quantitative polymerase chain reaction (qPCR) based on SYBR® Green detection, real-time qPCR based on a forensic kit, quantification via fluorescent dyes bonded to DNA, and fragmentary analysis. Differences between groups were tested using a paired t-test. Methods that measure total DNA present in the sample (NanoDrop™ UV spectrophotometer and Qubit® fluorometer) showed the highest concentrations. Methods based on real-time qPCR underestimated the quantity of aDNA. The most accurate method of aDNA quantification was fragmentary analysis, which also allows DNA quantification of the desired length and is not affected by PCR inhibitors. Methods based on the quantification of the total amount of DNA in samples are unsuitable for ancient samples as they overestimate the amount of DNA presumably due to the presence of microbial DNA. Real-time qPCR methods give undervalued results due to DNA damage and the presence of PCR inhibitors. DNA quantification methods based on fragment analysis show not only the quantity of DNA but also fragment length.
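
    The study compares methods on the same extracts using paired t-tests; the sketch below shows that comparison pattern with scipy on invented concentration values (the numbers and method labels are hypothetical, not the study's data).

```python
# Minimal sketch: paired comparison of two DNA quantification methods
# applied to the same extracts (illustrative numbers, not the study's data).
import numpy as np
from scipy import stats

# Concentration estimates (ng/uL) for the same 8 extracts.
fluorometer = np.array([1.9, 2.4, 0.8, 3.1, 1.2, 2.0, 0.6, 1.5])
qpcr        = np.array([0.7, 1.1, 0.3, 1.6, 0.5, 0.9, 0.2, 0.6])

t, p = stats.ttest_rel(fluorometer, qpcr)  # paired t-test, as in the study
print(f"mean difference = {np.mean(fluorometer - qpcr):.2f} ng/uL, "
      f"t = {t:.2f}, p = {p:.4f}")
```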

  10. Critical points of DNA quantification by real-time PCR – effects of DNA extraction method and sample matrix on quantification of genetically modified organisms

    Directory of Open Access Journals (Sweden)

    Žel Jana

    2006-08-01

    Full Text Available Abstract Background Real-time PCR is the technique of choice for nucleic acid quantification. In the field of detection of genetically modified organisms (GMOs quantification of biotech products may be required to fulfil legislative requirements. However, successful quantification depends crucially on the quality of the sample DNA analyzed. Methods for GMO detection are generally validated on certified reference materials that are in the form of powdered grain material, while detection in routine laboratories must be performed on a wide variety of sample matrixes. Due to food processing, the DNA in sample matrixes can be present in low amounts and also degraded. In addition, molecules of plant origin or from other sources that affect PCR amplification of samples will influence the reliability of the quantification. Further, the wide variety of sample matrixes presents a challenge for detection laboratories. The extraction method must ensure high yield and quality of the DNA obtained and must be carefully selected, since even components of DNA extraction solutions can influence PCR reactions. GMO quantification is based on a standard curve, therefore similarity of PCR efficiency for the sample and standard reference material is a prerequisite for exact quantification. Little information on the performance of real-time PCR on samples of different matrixes is available. Results Five commonly used DNA extraction techniques were compared and their suitability for quantitative analysis was assessed. The effect of sample matrix on nucleic acid quantification was assessed by comparing 4 maize and 4 soybean matrixes. In addition 205 maize and soybean samples from routine analysis were analyzed for PCR efficiency to assess variability of PCR performance within each sample matrix. Together with the amount of DNA needed for reliable quantification, PCR efficiency is the crucial parameter determining the reliability of quantitative results, therefore it was

  11. Critical points of DNA quantification by real-time PCR – effects of DNA extraction method and sample matrix on quantification of genetically modified organisms

    Science.gov (United States)

    Cankar, Katarina; Štebih, Dejan; Dreo, Tanja; Žel, Jana; Gruden, Kristina

    2006-01-01

    Background Real-time PCR is the technique of choice for nucleic acid quantification. In the field of detection of genetically modified organisms (GMOs) quantification of biotech products may be required to fulfil legislative requirements. However, successful quantification depends crucially on the quality of the sample DNA analyzed. Methods for GMO detection are generally validated on certified reference materials that are in the form of powdered grain material, while detection in routine laboratories must be performed on a wide variety of sample matrixes. Due to food processing, the DNA in sample matrixes can be present in low amounts and also degraded. In addition, molecules of plant origin or from other sources that affect PCR amplification of samples will influence the reliability of the quantification. Further, the wide variety of sample matrixes presents a challenge for detection laboratories. The extraction method must ensure high yield and quality of the DNA obtained and must be carefully selected, since even components of DNA extraction solutions can influence PCR reactions. GMO quantification is based on a standard curve, therefore similarity of PCR efficiency for the sample and standard reference material is a prerequisite for exact quantification. Little information on the performance of real-time PCR on samples of different matrixes is available. Results Five commonly used DNA extraction techniques were compared and their suitability for quantitative analysis was assessed. The effect of sample matrix on nucleic acid quantification was assessed by comparing 4 maize and 4 soybean matrixes. In addition 205 maize and soybean samples from routine analysis were analyzed for PCR efficiency to assess variability of PCR performance within each sample matrix. Together with the amount of DNA needed for reliable quantification, PCR efficiency is the crucial parameter determining the reliability of quantitative results, therefore it was chosen as the primary
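
    Both records above identify PCR efficiency as the crucial parameter for standard-curve quantification. Below is a minimal sketch of how efficiency is conventionally derived from a dilution series, via the standard relation E = 10^(-1/slope) - 1 with the slope taken from the Cq-versus-log10(quantity) regression; the dilution data are invented.

```python
# Sketch: PCR amplification efficiency from a dilution-series standard curve.
import numpy as np

# Hypothetical dilution series: known copy numbers and measured Cq values.
copies = np.array([1e5, 1e4, 1e3, 1e2, 1e1])
cq     = np.array([18.1, 21.5, 24.9, 28.3, 31.8])

slope, intercept = np.polyfit(np.log10(copies), cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0   # 1.0 corresponds to 100% efficiency
print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")

# An unknown sample is then quantified from its Cq via the same line:
cq_unknown = 26.0
est_copies = 10 ** ((cq_unknown - intercept) / slope)
print(f"estimated copies: {est_copies:.0f}")
```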

  12. Responses of tree and insect herbivores to elevated nitrogen inputs: A meta-analysis

    Science.gov (United States)

    Li, Furong; Dudley, Tom L.; Chen, Baoming; Chang, Xiaoyu; Liang, Liyin; Peng, Shaolin

    2016-11-01

    Increasing atmospheric nitrogen (N) inputs have the potential to alter terrestrial ecosystem function through impacts on plant-herbivore interactions. The goal of our study is to search for a general pattern in the responses of tree characteristics important for herbivores, and of insect herbivore performance, to elevated N inputs. We conducted a meta-analysis based on 109 papers describing impacts of nitrogen inputs on tree characteristics and 16 papers on insect performance. The differences in plant characteristics and insect performance between broadleaves and conifers were also explored. Tree aboveground biomass, leaf biomass and leaf N concentration significantly increased under elevated N inputs. Elevated N inputs had no significant overall effect on concentrations of phenolic compounds and lignin but adversely affected tannin, a defensive chemical against insect herbivores. Additionally, overall insect herbivore performance (including development time, insect biomass, relative growth rate, and so on) significantly increased under elevated N inputs. Given the inconsistent responses between broadleaves and conifers, broadleaves would be more likely than conifers to respond to elevated N inputs by increasing growth through light interception and photosynthesis rather than by producing more defensive chemicals. Moreover, the overall carbohydrate concentration was significantly reduced by 13.12% in broadleaves while it increased slightly in conifers. The overall tannin concentration decreased significantly by 39.21% in broadleaves, whereas a 5.8% decrease in conifers was not significant. The results of the analysis indicated that elevated N inputs would provide more food sources and ameliorate tree palatability for insects, while the resistance of trees against their insect herbivores was weakened, especially for broadleaves. Thus, global forest insect pest problems would be aggravated by elevated N inputs. As N inputs continue to rise in the future, forest
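
    Meta-analyses of this kind typically work with log response ratios. Below is a hedged sketch of computing a single study's lnRR and its sampling variance using the standard Hedges-Gurevitch-Curtis formula; the group summaries are invented, and the study's actual effect-size choice may differ.

```python
# Sketch: log response ratio (lnRR) effect size for one study comparing
# elevated-N vs. control tree biomass (illustrative numbers).
import numpy as np

# Treatment (elevated N) and control group summaries: mean, SD, n.
mt, sdt, nt = 14.2, 3.1, 10
mc, sdc, nc = 11.5, 2.8, 10

lnrr = np.log(mt / mc)
# Sampling variance of lnRR (Hedges, Gurevitch & Curtis 1999).
var = sdt**2 / (nt * mt**2) + sdc**2 / (nc * mc**2)
ci = 1.96 * np.sqrt(var)
print(f"lnRR = {lnrr:.3f} (95% CI {lnrr - ci:.3f} to {lnrr + ci:.3f})")
# Across studies, such effect sizes are combined with inverse-variance weights.
```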

  13. CometQ: An automated tool for the detection and quantification of DNA damage using comet assay image analysis.

    Science.gov (United States)

    Ganapathy, Sreelatha; Muraleedharan, Aparna; Sathidevi, Puthumangalathu Savithri; Chand, Parkash; Rajkumar, Ravi Philip

    2016-09-01

    DNA damage analysis plays an important role in determining the approaches for treatment and prevention of various diseases like cancer, schizophrenia and other heritable diseases. The comet assay is a sensitive and versatile method for DNA damage analysis. The main objective of this work is to implement a fully automated tool for the detection and quantification of DNA damage by analysing comet assay images. The comet assay image analysis consists of four stages: (1) classification, (2) comet segmentation, (3) comet partitioning and (4) comet quantification. The main features of the proposed software are the design and development of four comet segmentation methods, and the automatic routing of the input comet assay image to the most suitable one among these methods depending on the type of the image (silver stained or fluorescent stained) as well as the level of DNA damage (heavily damaged or lightly/moderately damaged). A classifier stage, based on a support vector machine (SVM), is designed and implemented at the front end to categorise the input image into one of the above four groups and ensure proper routing. Comet segmentation is followed by comet partitioning, which is implemented using a novel technique coined modified fuzzy clustering. Comet parameters are calculated in the comet quantification stage and are saved in an Excel file. Our dataset consists of 600 silver stained images obtained from 40 schizophrenia patients with different levels of severity, admitted to a tertiary hospital in South India, and 56 fluorescent stained images obtained from different internet sources. The performance of "CometQ", the proposed standalone application for automated analysis of comet assay images, is evaluated by a clinical expert and is also compared with that of a recent, closely related software tool, OpenComet. CometQ gave 90.26% positive predictive value (PPV) and 93.34% sensitivity, which are much higher than those of OpenComet, especially in the case of silver stained images. The
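
    After segmentation and partitioning into head and tail, common comet parameters reduce to simple intensity sums. The sketch below computes % DNA in tail and an extent-style tail moment (tail length times tail DNA fraction) from hypothetical background-subtracted profiles; it is a generic illustration, not CometQ's implementation.

```python
# Sketch: standard comet parameters from a segmented comet
# (hypothetical intensity profiles; not CometQ's own code).
import numpy as np

head = np.array([220., 240., 230., 210.])   # background-subtracted intensities
tail = np.array([90., 70., 50., 30., 10.])

total = head.sum() + tail.sum()
pct_tail_dna = 100.0 * tail.sum() / total            # % DNA in tail
tail_length = float(len(tail))                       # in pixels, for illustration
tail_moment = tail_length * pct_tail_dna / 100.0     # extent-style tail moment

print(f"%DNA in tail = {pct_tail_dna:.1f}, tail moment = {tail_moment:.2f}")
```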

  14. Network and neuronal membrane properties in hybrid networks reciprocally regulate selectivity to rapid thalamocortical inputs.

    Science.gov (United States)

    Pesavento, Michael J; Pinto, David J

    2012-11-01

    Rapidly changing environments require rapid processing from sensory inputs. Varying deflection velocities of a rodent's primary facial vibrissa cause varying temporal neuronal activity profiles within the ventral posteromedial thalamic nucleus. Local neuron populations in a single somatosensory layer 4 barrel transform sparsely coded input into a spike count based on the input's temporal profile. We investigate this transformation by creating a barrel-like hybrid network with whole cell recordings of in vitro neurons from a cortical slice preparation, embedding the biological neuron in the simulated network by presenting virtual synaptic conductances via a conductance clamp. Utilizing the hybrid network, we examine the reciprocal network properties (local excitatory and inhibitory synaptic convergence) and neuronal membrane properties (input resistance) by altering the barrel population response to diverse thalamic input. In the presence of local network input, neurons are more selective to thalamic input timing; this arises from strong feedforward inhibition. Strongly inhibitory (damping) network regimes are more selective to timing and less selective to the magnitude of input but require stronger initial input. Input selectivity relies heavily on the different membrane properties of excitatory and inhibitory neurons. When inhibitory and excitatory neurons had identical membrane properties, the sensitivity of in vitro neurons to temporal vs. magnitude features of input was substantially reduced. Increasing the mean leak conductance of the inhibitory cells decreased the network's temporal sensitivity, whereas increasing excitatory leak conductance enhanced magnitude sensitivity. Local network synapses are essential in shaping thalamic input, and differing membrane properties of functional classes reciprocally modulate this effect.
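
    The conductance-clamp idea in this record is that the momentary membrane potential sets the injected synaptic current, I = g_e(t)(E_e - V) + g_i(t)(E_i - V). Below is a minimal sketch with a leaky integrate-and-fire cell standing in for the recorded in vitro neuron; all conductance traces and parameter values are illustrative.

```python
# Sketch of the conductance-clamp loop: at each time step the "recorded"
# membrane potential determines the injected virtual synaptic current.
# An LIF cell stands in for the biological neuron (parameters illustrative).
import numpy as np

dt, T = 0.1, 500.0                      # ms
steps = int(T / dt)
rng = np.random.default_rng(3)

# Virtual excitatory/inhibitory conductances from the simulated network (nS).
g_e = np.clip(rng.normal(4.0, 1.5, steps), 0, None)
g_i = np.clip(rng.normal(8.0, 3.0, steps), 0, None)

E_e, E_i, E_l = 0.0, -75.0, -70.0       # reversal/leak potentials (mV)
g_l, C = 10.0, 200.0                    # leak conductance (nS), capacitance (pF)
v_thr, v_reset = -50.0, -65.0

v, spikes = E_l, 0
for i in range(steps):
    i_syn = g_e[i] * (E_e - v) + g_i[i] * (E_i - v)    # conductance clamp (pA)
    v += dt * (g_l * (E_l - v) + i_syn) / C
    if v >= v_thr:
        v, spikes = v_reset, spikes + 1
print(f"{spikes} spikes in {T:.0f} ms")
```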

  15. Modeling and generating input processes

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, M.E.

    1987-01-01

    This tutorial paper provides information relevant to the selection and generation of stochastic inputs to simulation studies. The primary area considered is multivariate, but much of the philosophy is relevant to univariate inputs as well. 14 refs.

  16. Exploring Heterogeneous Multicore Architectures for Advanced Embedded Uncertainty Quantification.

    Energy Technology Data Exchange (ETDEWEB)

    Phipps, Eric T.; Edwards, Harold C.; Hu, Jonathan J.

    2014-09-01

    We explore rearrangements of classical uncertainty quantification methods with the aim of achieving higher aggregate performance for uncertainty quantification calculations on emerging multicore and manycore architectures. We show that a rearrangement of the stochastic Galerkin method leads to improved performance and scalability on several computational architectures, whereby uncertainty information is propagated at the lowest levels of the simulation code, improving memory access patterns, exposing new dimensions of fine-grained parallelism, and reducing communication. We also develop a general framework for implementing such rearrangements for a diverse set of uncertainty quantification algorithms as well as the computational simulation codes to which they are applied.

  17. GMO quantification: valuable experience and insights for the future.

    Science.gov (United States)

    Milavec, Mojca; Dobnik, David; Yang, Litao; Zhang, Dabing; Gruden, Kristina; Zel, Jana

    2014-10-01

    Cultivation and marketing of genetically modified organisms (GMOs) have been unevenly adopted worldwide. To facilitate international trade and to provide information to consumers, labelling requirements have been set up in many countries. Quantitative real-time polymerase chain reaction (qPCR) is currently the method of choice for detection, identification and quantification of GMOs. This has been critically assessed and the requirements for the method performance have been set. Nevertheless, there are challenges that should still be highlighted, such as measuring the quantity and quality of DNA, and determining the qPCR efficiency, possible sequence mismatches, characteristics of taxon-specific genes and appropriate units of measurement, as these remain potential sources of measurement uncertainty. To overcome these problems and to cope with the continuous increase in the number and variety of GMOs, new approaches are needed. Statistical strategies of quantification have already been proposed and expanded with the development of digital PCR. The first attempts have been made to use new generation sequencing also for quantitative purposes, although accurate quantification of the contents of GMOs using this technology is still a challenge for the future, and especially for mixed samples. New approaches are needed also for the quantification of stacks, and for potential quantification of organisms produced by new plant breeding techniques.
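
    One of the newer approaches mentioned above, digital PCR, quantifies target copies absolutely from the fraction of positive partitions via Poisson statistics. A minimal sketch of that calculation (partition counts and volume are invented):

```python
# Sketch: absolute quantification in digital PCR via Poisson statistics.
import numpy as np

n_partitions = 20000          # total droplets/partitions
n_positive   = 6500           # partitions showing amplification
vol_uL       = 0.00085        # hypothetical partition volume (uL)

p = n_positive / n_partitions
lam = -np.log(1.0 - p)                    # mean copies per partition
copies_per_uL = lam / vol_uL
print(f"lambda = {lam:.3f} copies/partition, {copies_per_uL:.0f} copies/uL")
```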

  18. Exploiting multicompartment effects in triple-echo steady-state T2 mapping for fat fraction quantification.

    Science.gov (United States)

    Liu, Dian; Steingoetter, Andreas; Curcic, Jelena; Kozerke, Sebastian

    2018-01-01

    To investigate and exploit the effect of intravoxel off-resonance compartments in the triple-echo steady-state (TESS) sequence without fat suppression for T2 mapping and to leverage the results for fat fraction quantification. In multicompartment tissue, where at least one compartment is excited off-resonance, the total signal exhibits periodic modulations as a function of echo time (TE). Simulated multicompartment TESS signals were synthesized at various TEs. Fat emulsion phantoms were prepared and scanned at the same TE combinations using TESS. In vivo knee data were obtained with TESS to validate the simulations. The multicompartment effect was exploited for fat fraction quantification in the stomach by acquiring TESS signals at two TE combinations. Simulated and measured multicompartment signal intensities were in good agreement. Multicompartment effects caused erroneous T2 offsets, even at low water-fat ratios. The choice of TE caused T2 variations of as much as 28% in cartilage. The feasibility of fat fraction quantification to monitor the decrease of fat content in the stomach during digestion is demonstrated. Intravoxel off-resonance compartments are a confounding factor for T2 quantification using TESS, causing errors that are dependent on the TE. At the same time, off-resonance effects may allow for efficient fat fraction mapping using steady-state imaging. Magn Reson Med 79:423-429, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  19. The Quantification Process for the PRiME-U34i

    International Nuclear Information System (INIS)

    Hwang, Mee-Jeong; Han, Sang-Hoon; Yang, Joon-Eon

    2006-01-01

    In this paper, we introduce the quantification process for PRiME-U34i, which is the merged model of ETs (Event Trees) and FTs (Fault Trees) for the level 1 internal PSA of UCN 3 and 4. PRiME-U34i has one top event; therefore, the quantification process is simplified compared to the past one. In the past, we used a text file called a user file to control the quantification process. However, this user file is so complicated that it is difficult for a non-expert to understand. Moreover, in the past PSA the ET and FT were separate, but in PRiME-U34i they are merged together; thus, the quantification process is different. This paper is composed of five sections. In section 2, we introduce the construction of the one-top model. Section 3 shows the quantification process used in PRiME-U34i. Section 4 describes the post-processing. The last section presents the conclusions.

  20. Wave energy input into the Ekman layer

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    This paper is concerned with the wave energy input into the Ekman layer, based on three observational facts that surface waves can significantly affect the profile of the Ekman layer. Under the assumption of constant vertical diffusivity, the analytical form of the wave energy input into the Ekman layer is derived. Analysis of the energy balance shows that the energy input to the Ekman layer through the wind stress and the interaction of the Stokes drift with planetary vorticity can be divided into two kinds. One is the wind energy input, and the other is the wave energy input, which depends on wind speed, wave characteristics and the wind direction relative to the wave direction. Estimates show that the wave energy input can be up to 10% in high-latitude, high-wind-speed areas and higher than 20% in the Antarctic Circumpolar Current, compared with the wind energy input into the classical Ekman layer. The results of this paper are of significance to the study of wave-induced large-scale effects.

  1. College Oral English teaching from the perspective of input and output theory

    Directory of Open Access Journals (Sweden)

    Xiangxiang Yuan

    2017-11-01

    Full Text Available With the development of society and the deepening of economic globalization, communicative competence in spoken English has become an important indicator of talent. Therefore, how to improve college students' oral English proficiency has become the focus of college English teaching. The long-standing phenomenon of "heavy input and light output" in college English teaching in China has led to the emergence of "dumb English" and low efficiency. Aiming at these problems, this paper discusses the functions of input and output and their relationship, and puts forward some views on oral English teaching.

  2. Design of a Programmable Gain, Temperature Compensated Current-Input Current-Output CMOS Logarithmic Amplifier.

    Science.gov (United States)

    Ming Gu; Chakrabartty, Shantanu

    2014-06-01

    This paper presents the design of a programmable gain, temperature compensated, current-mode CMOS logarithmic amplifier that can be used for biomedical signal processing. Unlike conventional logarithmic amplifiers that use a transimpedance technique to generate a voltage signal as a logarithmic function of the input current, the proposed approach directly produces a current output as a logarithmic function of the input current. Also, unlike a conventional transimpedance amplifier the gain of the proposed logarithmic amplifier can be programmed using floating-gate trimming circuits. The synthesis of the proposed circuit is based on the Hart's extended translinear principle which involves embedding a floating-voltage source and a linear resistive element within a translinear loop. Temperature compensation is then achieved using a translinear-based resistive cancelation technique. Measured results from prototypes fabricated in a 0.5 μm CMOS process show that the amplifier has an input dynamic range of 120 dB and a temperature sensitivity of 230 ppm/°C (27 °C- 57°C), while consuming less than 100 nW of power.

  3. Effect of rehabilitation worker input on visual function outcomes in individuals with low vision: study protocol for a randomised controlled trial.

    Science.gov (United States)

    Acton, Jennifer H; Molik, Bablin; Binns, Alison; Court, Helen; Margrain, Tom H

    2016-02-24

    Visual Rehabilitation Officers help people with a visual impairment maintain their independence. This intervention adopts a flexible, goal-centred approach, which may include training in mobility, use of optical and non-optical aids, and performance of activities of daily living. Although Visual Rehabilitation Officers are an integral part of the low vision service in the United Kingdom, evidence that they are effective is lacking. The purpose of this exploratory trial is to estimate the impact of a Visual Rehabilitation Officer on self-reported visual function, psychosocial and quality-of-life outcomes in individuals with low vision. In this exploratory, assessor-masked, parallel group, randomised controlled trial, participants will be allocated either to receive home visits from a Visual Rehabilitation Officer (n = 30) or to a waiting list control group (n = 30) in a 1:1 ratio. Adult volunteers with a visual impairment, who have been identified as needing rehabilitation officer input by a social worker, will take part. Those with an urgent need for a Visual Rehabilitation Officer or who have a cognitive impairment will be excluded. The primary outcome measure will be self-reported visual function (48-item Veterans Affairs Low Vision Visual Functioning Questionnaire). Secondary outcome measures will include psychological and quality-of-life metrics: the Patient Health Questionnaire (PHQ-9), the Warwick-Edinburgh Mental Well-being Scale (WEMWBS), the Adjustment to Age-related Visual Loss Scale (AVL-12), the Standardised Health-related Quality of Life Questionnaire (EQ-5D) and the UCLA Loneliness Scale. The interviewer collecting the outcomes will be masked to the group allocations. The analysis will be undertaken on a complete case and intention-to-treat basis. Analysis of covariance (ANCOVA) will be applied to follow-up questionnaire scores, with the baseline score as a covariate. This trial is expected to provide robust effect size estimates of the intervention
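
    The protocol's stated analysis, ANCOVA on follow-up questionnaire scores with the baseline score as covariate, can be sketched with statsmodels as below; the column names and values are hypothetical.

```python
# Minimal ANCOVA sketch matching the stated analysis: follow-up score
# regressed on group with baseline score as covariate (hypothetical data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "followup": [52, 61, 49, 70, 44, 58, 47, 66],
    "baseline": [50, 55, 48, 62, 46, 52, 45, 60],
    "group":    ["rehab", "rehab", "control", "rehab",
                 "control", "rehab", "control", "control"],
})

model = smf.ols("followup ~ baseline + C(group)", data=df).fit()
print(model.summary())  # the C(group) coefficient estimates the intervention effect
```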

  4. Final Technical Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    Energy Technology Data Exchange (ETDEWEB)

    Knio, Omar M. [Duke Univ., Durham, NC (United States). Dept. of Mechanical Engineering and Materials Science

    2017-06-06

    QUEST is a SciDAC Institute comprising Sandia National Laboratories, Los Alamos National Laboratory, University of Southern California, Massachusetts Institute of Technology, University of Texas at Austin, and Duke University. The mission of QUEST is to: (1) develop a broad class of uncertainty quantification (UQ) methods/tools, and (2) provide UQ expertise and software to other SciDAC projects, thereby enabling/guiding their UQ activities. The Duke effort focused on the development of algorithms and utility software for non-intrusive sparse UQ representations, and on participation in the organization of annual workshops and tutorials to disseminate UQ tools to the community, and to gather input in order to adapt approaches to the needs of SciDAC customers. In particular, fundamental developments were made in (a) multiscale stochastic preconditioners, (b) gradient-based approaches to inverse problems, (c) adaptive pseudo-spectral approximations, (d) stochastic limit cycles, and (e) sensitivity analysis tools for noisy systems. In addition, large-scale demonstrations were performed, namely in the context of ocean general circulation models.

  5. Quantification of 5-HT1A receptors in human brain using p-MPPF kinetic modelling and PET

    Energy Technology Data Exchange (ETDEWEB)

    Sanabria-Bohorquez, S.M.; Veraart, C. [Neural Rehabilitation Engineering Lab., Univ. Catholique de Louvain, Brussels (Belgium); Biver, F.; Damhaut, P.; Wikler, D.; Goldman, S. [PET/Biomedical Cyclotron Unit, Univ. Libre de Bruxelles (Belgium)

    2002-01-01

    Serotonin-1A (5-HT1A) receptors are implicated in neurochemical mechanisms underlying anxiety and depression and their treatment. Animal studies have suggested that 4-(2'-methoxyphenyl)-1-[2'-[N-(2''-pyridinyl)-p-[18F]fluorobenzamido]ethyl]piperazine (p-MPPF) may be a suitable positron emission tomography (PET) tracer of 5-HT1A receptors. To test p-MPPF in humans, we performed 60-min dynamic PET scans in 13 healthy volunteers after single bolus injection. Metabolite quantification revealed a fast decrease in tracer plasma concentration, such that at 5 min post injection about 25% of the total radioactivity in plasma corresponded to p-MPPF. Radioactivity concentration was highest in hippocampus, intermediate in neocortex and lowest in basal ganglia and cerebellum. The interactions between p-MPPF and 5-HT1A receptors were described using linear compartmental models with plasma input and reference tissue approaches. The two quantification methods provided similar results, which are in agreement with previous reports on 5-HT1A receptor brain distribution. In conclusion, our results show that p-MPPF is a suitable PET radioligand for 5-HT1A receptor human studies. (orig.)
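
    The record describes linear compartmental models with plasma input. The sketch below fits the simpler one-tissue variant, Ct(t) = K1 (Cp ⊗ e^(-k2 t)), to synthetic data, just to show the mechanics of plasma-input modelling; the input function, noise level and parameter values are invented, and the study's own models may differ.

```python
# Sketch: fitting a one-tissue compartment model driven by a plasma input
# function (synthetic data; a simplified stand-in for the study's models).
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 60, 121)            # minutes
dt = t[1] - t[0]
cp = 100 * t * np.exp(-t / 2.0)        # hypothetical plasma input Cp(t)

def tissue_curve(t, K1, k2):
    # Ct(t) = K1 * integral of Cp(tau) * exp(-k2 (t - tau)) dtau,
    # evaluated by discrete convolution.
    irf = np.exp(-k2 * t)
    return K1 * np.convolve(cp, irf)[: len(t)] * dt

# Synthetic "measured" tissue curve with noise.
rng = np.random.default_rng(0)
ct = tissue_curve(t, 0.1, 0.05) + rng.normal(0, 2.0, len(t))

(K1, k2), _ = curve_fit(tissue_curve, t, ct, p0=(0.05, 0.1))
print(f"K1 = {K1:.3f} (assumed mL/min/g), k2 = {k2:.3f} /min")
```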

  6. Objective automated quantification of fluorescence signal in histological sections of rat lens.

    Science.gov (United States)

    Talebizadeh, Nooshin; Hagström, Nanna Zhou; Yu, Zhaohua; Kronschläger, Martin; Söderberg, Per; Wählby, Carolina

    2017-08-01

    Visual quantification and classification of fluorescent signals is the gold standard in microscopy. The purpose of this study was to develop an automated method to delineate cells and to quantify the expression of fluorescent signal of biomarkers in the nucleus and cytoplasm of each lens epithelial cell in a histological section. A region of interest representing the lens epithelium was manually demarcated in each input image. Thereafter, individual cell nuclei within the region of interest were automatically delineated based on watershed segmentation and thresholding with an algorithm developed in Matlab™. Fluorescence signal was quantified within nuclei, cytoplasms and juxtaposed backgrounds. The classification of cells as labelled or not labelled was based on comparison of the fluorescence signal within cells with the local background. The classification rule was thereafter optimized by comparison with visual classification of a limited dataset. The performance of the automated classification was evaluated by asking 11 independent blinded observers to classify all cells (n = 395) in one lens image. The time consumed by the automatic algorithm and by visual classification was recorded. On average, 77% of the cells were correctly classified as compared with the majority vote of the visual observers. The average agreement among visual observers was 83%. However, variation among visual observers was high, and agreement between two visual observers was as low as 71% in the worst case. Automated classification was on average 10 times faster than visual scoring. The presented method enables objective and fast detection of lens epithelial cells and quantification of the expression of fluorescent signal with an accuracy comparable to the variability among visual observers. © 2017 International Society for Advancement of Cytometry.

  7. Artifacts Quantification of Metal Implants in MRI

    Science.gov (United States)

    Vrachnis, I. N.; Vlachopoulos, G. F.; Maris, T. G.; Costaridou, L. I.

    2017-11-01

    The presence of materials with different magnetic properties, such as metal implants, causes local distortion of the magnetic field, resulting in signal voids and pile-ups, i.e. susceptibility artifacts, in MRI. Quantitative and unbiased measurement of the artifact is a prerequisite for optimization of acquisition parameters. In this study an image gradient based segmentation method is proposed for susceptibility artifact quantification. The method captures abrupt signal alterations by calculation of the image gradient. The artifact is then quantified in terms of its extent, as an image area percentage, by an automated cross-entropy thresholding method. The proposed quantification method was tested in phantoms containing two orthopedic implants with significantly different magnetic permeabilities. The method was compared against a method proposed in the literature, considered as a reference, demonstrating moderate to good correlation (Spearman's rho = 0.62 and 0.802 for the titanium and stainless steel implants, respectively). The automated character of the proposed quantification method seems promising towards MRI acquisition parameter optimization.

  8. Quantification of the impact of a confounding variable on functional connectivity confirms anti-correlated networks in the resting-state.

    Science.gov (United States)

    Carbonell, F; Bellec, P; Shmuel, A

    2014-02-01

    The effect of regressing out the global average signal (GAS) in resting state fMRI data has become a concern for interpreting functional connectivity analyses. It is not clear whether the reported anti-correlations between the Default Mode and the Dorsal Attention Networks are intrinsic to the brain, or are artificially created by regressing out the GAS. Here we introduce a concept, Impact of the Global Average on Functional Connectivity (IGAFC), for quantifying the sensitivity of seed-based correlation analyses to the regression of the GAS. This voxel-wise IGAFC index is defined as the product of two correlation coefficients: the correlation between the GAS and the fMRI time course of a voxel, times the correlation between the GAS and the seed time course. This definition enables the calculation of a threshold at which the impact of regressing-out the GAS would be large enough to introduce spurious negative correlations. It also yields a post-hoc impact correction procedure via thresholding, which eliminates spurious correlations introduced by regressing out the GAS. In addition, we introduce an Artificial Negative Correlation Index (ANCI), defined as the absolute difference between the IGAFC index and the impact threshold. The ANCI allows a graded confidence scale for ranking voxels according to their likelihood of showing artificial correlations. By applying this method, we observed regions in the Default Mode and Dorsal Attention Networks that were anti-correlated. These findings confirm that the previously reported negative correlations between the Dorsal Attention and Default Mode Networks are intrinsic to the brain and not the result of statistical manipulations. Our proposed quantification of the impact that a confound may have on functional connectivity can be generalized to global effect estimators other than the GAS. It can be readily applied to other confounds, such as systemic physiological or head movement interferences, in order to quantify their
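
    The IGAFC index is defined explicitly above as a product of two correlation coefficients, which makes it straightforward to compute; a minimal numpy sketch on synthetic time courses:

```python
# Sketch of the IGAFC index as defined above: for each voxel, the product
# of corr(GAS, voxel time course) and corr(GAS, seed time course).
# Synthetic data; a real analysis would use preprocessed fMRI time series.
import numpy as np

rng = np.random.default_rng(1)
T, V = 200, 5                    # time points, voxels
data = rng.standard_normal((T, V))
gas  = data.mean(axis=1)         # global average signal
seed = data[:, 0]                # seed region time course

r_gas_seed = np.corrcoef(gas, seed)[0, 1]
igafc = np.array([np.corrcoef(gas, data[:, v])[0, 1] for v in range(V)]) * r_gas_seed
print(igafc)   # voxels with large IGAFC are most affected by GAS regression
```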

  9. The effectiveness of aided augmented input techniques for persons with developmental disabilities: a systematic review.

    Science.gov (United States)

    Allen, Anna A; Schlosser, Ralf W; Brock, Kristofer L; Shane, Howard C

    2017-09-01

    When working with individuals with little or no functional speech, clinicians often recommend that communication partners use the client's augmentative and alternative communication (AAC) device when speaking to the client. This is broadly known as "augmented input" and is thought to enhance the client's learning of language form and content. The purpose of this systematic review was to determine the effects of augmented input on communication outcomes in persons with developmental disabilities and persons with childhood apraxia of speech who use aided AAC. Nineteen studies met the inclusion criteria. Each included study was reviewed in terms of participant characteristics, terminology used, symbol format, augmented input characteristics, outcomes measured, effectiveness, and study quality. Results indicate that augmented input can improve single-word vocabulary skills and expression of multi-symbol utterances; however, comprehension beyond the single word level has not been explored. Additionally, it is difficult to form conclusions about the effect of augmented input on specific diagnostic populations. Directions for future research are posited.

  10. Metal Stable Isotope Tagging: Renaissance of Radioimmunoassay for Multiplex and Absolute Quantification of Biomolecules.

    Science.gov (United States)

    Liu, Rui; Zhang, Shixi; Wei, Chao; Xing, Zhi; Zhang, Sichun; Zhang, Xinrong

    2016-05-17

    is the development and application of the mass cytometer, which fully exploited the multiplexing potential of metal stable isotope tagging. It realized the simultaneous detection of dozens of parameters in single cells, accurate immunophenotyping in cell populations, through modeling of intracellular signaling network and undoubted discrimination of function and connection of cell subsets. Metal stable isotope tagging has great potential applications in hematopoiesis, immunology, stem cells, cancer, and drug screening related research and opened a post-fluorescence era of cytometry. Herein, we review the development of biomolecule quantification using metal stable isotope tagging. Particularly, the power of multiplex and absolute quantification is demonstrated. We address the advantages, applicable situations, and limitations of metal stable isotope tagging strategies and propose suggestions for future developments. The transfer of enzymatic or fluorescent tagging to metal stable isotope tagging may occur in many aspects of biological and clinical practices in the near future, just as the revolution from radioactive isotope tagging to fluorescent tagging happened in the past.

  11. Quantification of trace-level DNA by real-time whole genome amplification.

    Science.gov (United States)

    Kang, Min-Jung; Yu, Hannah; Kim, Sook-Kyung; Park, Sang-Ryoul; Yang, Inchul

    2011-01-01

    Quantification of trace amounts of DNA is a challenge in analytical applications where the concentration of a target DNA is very low or only limited amounts of samples are available for analysis. PCR-based methods including real-time PCR are highly sensitive and widely used for quantification of low-level DNA samples. However, ordinary PCR methods require at least one copy of a specific gene sequence for amplification and may not work for a sub-genomic amount of DNA. We suggest a real-time whole genome amplification method adopting the degenerate oligonucleotide primed PCR (DOP-PCR) for quantification of sub-genomic amounts of DNA. This approach enabled quantification of sub-picogram amounts of DNA independently of their sequences. When the method was applied to the human placental DNA of which amount was accurately determined by inductively coupled plasma-optical emission spectroscopy (ICP-OES), an accurate and stable quantification capability for DNA samples ranging from 80 fg to 8 ng was obtained. In blind tests of laboratory-prepared DNA samples, measurement accuracies of 7.4%, -2.1%, and -13.9% with analytical precisions around 15% were achieved for 400-pg, 4-pg, and 400-fg DNA samples, respectively. A similar quantification capability was also observed for other DNA species from calf, E. coli, and lambda phage. Therefore, when provided with an appropriate standard DNA, the suggested real-time DOP-PCR method can be used as a universal method for quantification of trace amounts of DNA.

  12. Gestures and multimodal input

    OpenAIRE

    Keates, Simeon; Robinson, Peter

    1999-01-01

    For users with motion impairments, the standard keyboard and mouse arrangement for computer access often presents problems. Other approaches have to be adopted to overcome this. In this paper, we will describe the development of a prototype multimodal input system based on two gestural input channels. Results from extensive user trials of this system are presented. These trials showed that the physical and cognitive loads on the user can quickly become excessive and detrimental to the interac...

  13. Sampling-based nuclear data uncertainty quantification for continuous energy Monte-Carlo codes

    International Nuclear Information System (INIS)

    Zhu, T.

    2015-01-01

    nuclear data uncertainty format. The first stage of NUSS development focuses on applying the simple random sampling (SRS) algorithm for uncertainty quantification. The effect of combining multigroup and ACE formats on the propagated nuclear data uncertainties is assessed. It is found that the number of energy groups has a minor impact on the precision of the κ_eff uncertainty as long as the group structure reflects the neutron flux spectrum. Successful verification of the NUSS tool for propagating nuclear data uncertainties through MCNPX and quantifying MCNPX output parameter uncertainties is obtained. The second stage of NUSS development is motivated by the need for an efficient sensitivity analysis methodology based on global sampling and coupled with MCNPX. For complex systems, the computing time for obtaining a breakdown of total uncertainty contributions by individual inputs becomes prohibitive when many MCNPX runs are required. The capability of determining simultaneously the total uncertainty and the individual nuclear data uncertainty contributions is thus researched and implemented in the NUSS-RF tool. It is based on the Random Balance Design algorithm and is validated by three mathematical test cases for both linear and nonlinear models and correlated inputs. NUSS-RF is then applied to demonstrate the efficient decomposition of total uncertainty by individual nuclear data. However, an attempt to decompose total uncertainty into individual contributions using the conventional S/U method shows different decomposition results when the inputs are correlated. The investigation and findings of this PhD work are valuable because of the introduction of global sensitivity analysis into the existing repertoire of nuclear data uncertainty quantification methods. The NUSS tool is expected to be useful for expanding the types of MCNPX-related applications, such as an upgrade to the current PSI criticality safety assessment methodology for Swiss application, for which nuclear data

  14. Sampling-based nuclear data uncertainty quantification for continuous energy Monte-Carlo codes

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, T.

    2015-07-01

    nuclear data uncertainty format. The first stage of NUSS development focuses on applying the simple random sampling (SRS) algorithm for uncertainty quantification. The effect of combining multigroup and ACE formats on the propagated nuclear data uncertainties is assessed. It is found that the number of energy groups has a minor impact on the precision of the κ_eff uncertainty as long as the group structure reflects the neutron flux spectrum. Successful verification of the NUSS tool for propagating nuclear data uncertainties through MCNPX and quantifying MCNPX output parameter uncertainties is obtained. The second stage of NUSS development is motivated by the need for an efficient sensitivity analysis methodology based on global sampling and coupled with MCNPX. For complex systems, the computing time for obtaining a breakdown of total uncertainty contributions by individual inputs becomes prohibitive when many MCNPX runs are required. The capability of determining simultaneously the total uncertainty and the individual nuclear data uncertainty contributions is thus researched and implemented in the NUSS-RF tool. It is based on the Random Balance Design algorithm and is validated by three mathematical test cases for both linear and nonlinear models and correlated inputs. NUSS-RF is then applied to demonstrate the efficient decomposition of total uncertainty by individual nuclear data. However, an attempt to decompose total uncertainty into individual contributions using the conventional S/U method shows different decomposition results when the inputs are correlated. The investigation and findings of this PhD work are valuable because of the introduction of global sensitivity analysis into the existing repertoire of nuclear data uncertainty quantification methods. The NUSS tool is expected to be useful for expanding the types of MCNPX-related applications, such as an upgrade to the current PSI criticality safety assessment methodology for Swiss application, for which nuclear data
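
    The first stage described in the two records above is simple random sampling: perturb the uncertain inputs, rerun the code, and summarize the output spread. The sketch below shows that SRS pattern with a toy analytic function standing in for the MCNPX transport calculation (the function and uncertainties are invented).

```python
# Generic simple-random-sampling (SRS) sketch for uncertainty propagation.
# A toy analytic model stands in for the MCNPX transport calculation.
import numpy as np

rng = np.random.default_rng(42)

def model(sigma_capture, sigma_fission):
    # Invented stand-in for k_eff as a function of two cross sections.
    return 2.43 * sigma_fission / (sigma_fission + sigma_capture)

n = 1000
# 5% relative (1-sigma) uncertainties around nominal cross sections.
cap = rng.normal(0.10, 0.005, n)
fis = rng.normal(0.17, 0.0085, n)

keff = model(cap, fis)
print(f"k_eff = {keff.mean():.4f} +/- {keff.std(ddof=1):.4f} (SRS, n={n})")
```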

  15. The Importance of Input and Interaction in SLA

    Institute of Scientific and Technical Information of China (English)

    党春花

    2009-01-01

    As is known to us, input and interaction play crucial roles in second language acquisition (SLA). Different linguistic schools have different explanations of input and interaction. Behaviorist theories hold the view that input is composed of stimuli and responses, putting more emphasis on the importance of input, while mentalist theories regard input as a necessary but not sufficient condition for SLA. At present, social interaction theories, a branch of cognitive linguistics, suggest that besides input, interaction is also essential to language acquisition. This essay discusses how input and interaction result in SLA.

  16. Noninvasive quantification of 18F-FLT human brain PET for the assessment of tumour proliferation in patients with high-grade glioma

    International Nuclear Information System (INIS)

    Backes, Heiko; Ullrich, Roland; Neumaier, Bernd; Kracht, Lutz; Wienhard, Klaus; Jacobs, Andreas H.

    2009-01-01

    Compartmental modelling of 3'-deoxy-3'-[18F]fluorothymidine (18F-FLT) PET-derived kinetics provides a method for noninvasive assessment of the proliferation rate of gliomas. Such analyses, however, require an input function generally derived by serial blood sampling and counting. In the current study, 18F-FLT kinetic parameters obtained from image-derived input functions were compared with those from input functions derived from arterialized blood samples. Based on the analysis of 11 patients with glioma (WHO grade II-IV), a procedure for the automated extraction of an input function from 18F-FLT brain PET data was derived. The time-activity curve of the volume of interest with the maximum difference in 18F-FLT uptake between the first 5 min after injection and the period from 60 to 90 min was corrected for partial-volume effects and in vivo metabolism of 18F-FLT. For each patient a two-compartment kinetic model was applied to the tumour tissue using the image-derived input function. The resulting kinetic rate constants K1 (transport across the blood-brain barrier) and Ki (metabolic rate constant or net influx constant) were compared with those obtained from the same data using the input function derived from blood samples. Additionally, the metabolic rate constant was correlated with the frequency of tumour cells stained with Ki-67, a widely used immunohistochemical marker of cell proliferation. The rate constants from kinetic modelling were comparable when the blood sample-derived input functions were replaced by the image-derived functions (K1,img vs. K1,sample: r = 0.95, p < 10^-5; Ki,img vs. Ki,sample: r = 0.86, p < 10^-3), with no significant paired differences (K1,img vs. K1,sample: p = 0.20; Ki,img vs. Ki,sample: p = 0.92). Furthermore, a significant correlation between Ki,img and the percentage of Ki-67-positive cells was observed (r = 0.73, p = 0.01). Kinetic modelling of 18F-FLT brain PET data using image-derived input functions extracted from human brain PET data with the practical
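
    Given an input function, image-derived or sampled, the net influx constant Ki can also be estimated by Patlak graphical analysis, a common simpler alternative to the full compartment fit used in the study. A sketch on synthetic curves (all values invented):

```python
# Sketch: Patlak graphical estimate of the net influx constant Ki from a
# tissue curve and an input function (synthetic data; the study itself
# used full compartmental modelling).
import numpy as np
from scipy.integrate import cumulative_trapezoid

t = np.linspace(0.1, 90, 90)                  # minutes
cp = 80 * np.exp(-t / 15) + 5                 # hypothetical input function
int_cp = cumulative_trapezoid(cp, t, initial=0)

Ki_true, V0 = 0.03, 0.4
ct = Ki_true * int_cp + V0 * cp               # irreversible-uptake tissue model

x = int_cp / cp                               # "Patlak time"
y = ct / cp
late = t > 20                                 # fit the linear (late) portion
Ki, intercept = np.polyfit(x[late], y[late], 1)
print(f"estimated Ki = {Ki:.4f} /min (true {Ki_true})")
```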

  17. Data Acquisition for Quality Loss Function Modelling

    DEFF Research Database (Denmark)

    Pedersen, Søren Nygaard; Howard, Thomas J.

    2016-01-01

    Quality loss functions can be a valuable tool when assessing the impact of variation on product quality. Typically, the input for the quality loss function would be a measure of the varying product performance and the output would be a measure of quality. While the unit of the input is given by the product function in focus, the quality output can be measured and quantified in a number of ways. In this article a structured approach for acquiring stakeholder satisfaction data for use in quality loss function modelling is introduced.
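
    A common concrete choice for such a loss model is the Taguchi nominal-the-best quadratic loss, L(y) = k (y - m)^2; the sketch below uses that textbook form with invented constants, not the authors' stakeholder-derived model.

```python
# Sketch: textbook Taguchi nominal-the-best quality loss function,
# L(y) = k * (y - m)^2 (target m and cost constant k are illustrative).
def quality_loss(y, target=10.0, k=2.5):
    """Monetary loss for performance y deviating from the target m."""
    return k * (y - target) ** 2

for y in (9.5, 10.0, 10.8):
    print(f"y = {y:4.1f} -> loss = {quality_loss(y):.2f}")
```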

  18. Collagen Quantification in Tissue Specimens.

    Science.gov (United States)

    Coentro, João Quintas; Capella-Monsonís, Héctor; Graceffa, Valeria; Wu, Zhuning; Mullen, Anne Maria; Raghunath, Michael; Zeugolis, Dimitrios I

    2017-01-01

    Collagen is the major extracellular protein in mammals. Accurate quantification of collagen is essential in the biomaterials (e.g., reproducible collagen scaffold fabrication), drug discovery (e.g., assessment of collagen in pathophysiologies, such as fibrosis), and tissue engineering (e.g., quantification of cell-synthesized collagen) fields. Although measuring hydroxyproline content is the most widely used method to quantify collagen in biological specimens, the process is very laborious. To this end, the Sircol™ Collagen Assay is widely used due to its inherent simplicity and convenience. However, this method leads to overestimation of collagen content due to the interaction of Sirius red with basic amino acids of non-collagenous proteins. Herein, we describe the addition of an ultrafiltration purification step in the process to accurately determine collagen content in tissues.

  19. Quantification of margins and mixed uncertainties using evidence theory and stochastic expansions

    International Nuclear Information System (INIS)

    Shah, Harsheel; Hosder, Serhat; Winter, Tyler

    2015-01-01

    The objective of this paper is to implement Dempster–Shafer Theory of Evidence (DSTE) in the presence of mixed (aleatory and multiple sources of epistemic) uncertainty to the reliability and performance assessment of complex engineering systems through the use of quantification of margins and uncertainties (QMU) methodology. This study focuses on quantifying the simulation uncertainties, both in the design condition and the performance boundaries along with the determination of margins. To address the possibility of multiple sources and intervals for epistemic uncertainty characterization, DSTE is used for uncertainty quantification. An approach to incorporate aleatory uncertainty in Dempster–Shafer structures is presented by discretizing the aleatory variable distributions into sets of intervals. In view of excessive computational costs for large scale applications and repetitive simulations needed for DSTE analysis, a stochastic response surface based on point-collocation non-intrusive polynomial chaos (NIPC) has been implemented as the surrogate for the model response. The technique is demonstrated on a model problem with non-linear analytical functions representing the outputs and performance boundaries of two coupled systems. Finally, the QMU approach is demonstrated on a multi-disciplinary analysis of a high speed civil transport (HSCT). - Highlights: • Quantification of margins and uncertainties (QMU) methodology with evidence theory. • Treatment of both inherent and epistemic uncertainties within evidence theory. • Stochastic expansions for representation of performance metrics and boundaries. • Demonstration of QMU on an analytical problem. • QMU analysis applied to an aerospace system (high speed civil transport)
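
    In DSTE, epistemic uncertainty is carried as focal elements (here, intervals) with basic probability assignments, and the belief/plausibility pair brackets the probability that a performance boundary is met. A minimal sketch with invented intervals:

```python
# Sketch: belief and plausibility that a response stays below a limit,
# given interval focal elements with basic probability assignments
# (illustrative intervals, not the paper's HSCT application).
focal = [((0.0, 0.6), 0.5),   # (interval for the response, mass)
         ((0.4, 0.9), 0.3),
         ((0.7, 1.2), 0.2)]
limit = 1.0                   # performance boundary: response < 1.0

belief = sum(m for (lo, hi), m in focal if hi < limit)   # entirely safe intervals
plaus  = sum(m for (lo, hi), m in focal if lo < limit)   # possibly safe intervals
print(f"Bel = {belief:.2f} <= P(safe) <= Pl = {plaus:.2f}")
```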

  20. Cutset Quantification Error Evaluation for Shin-Kori 1 and 2 PSA model

    International Nuclear Information System (INIS)

    Choi, Jong Soo

    2009-01-01

    Probabilistic safety assessments (PSA) for nuclear power plants (NPPs) are based on the minimal cut set (MCS) quantification method. In PSAs, the risk and importance measures are computed from a cutset equation mainly by using approximations. The conservatism of the approximations is also a source of quantification uncertainty. In this paper, exact MCS quantification methods based on the 'sum of disjoint products (SDP)' logic and the inclusion-exclusion formula are applied, and the conservatism of the MCS quantification results in the Shin-Kori 1 and 2 PSA is evaluated
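
    The contrast between the usual rare-event approximation and exact quantification via the inclusion-exclusion formula can be shown in a few lines; the cut sets and basic-event probabilities below are invented and independent basic events are assumed:

        import itertools

        # Minimal cut sets as sets of basic-event ids, with basic-event probabilities.
        p = {"A": 1e-3, "B": 2e-3, "C": 5e-4, "D": 1e-3}
        mcs = [{"A", "B"}, {"B", "C"}, {"A", "D"}]

        def prob(events):  # probability that all (independent) events occur
            out = 1.0
            for e in events:
                out *= p[e]
            return out

        rare_event = sum(prob(c) for c in mcs)  # common conservative approximation

        # Exact top-event probability by inclusion-exclusion over unions of cut sets.
        exact = 0.0
        for r in range(1, len(mcs) + 1):
            for combo in itertools.combinations(mcs, r):
                exact += (-1) ** (r + 1) * prob(set().union(*combo))
        print(rare_event, exact, (rare_event - exact) / exact)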

  1. Automatic Construction of Java Programs from Functional Program Specifications

    OpenAIRE

    Md. Humayun Kabir

    2015-01-01

    This paper presents a novel approach to constructing Java programs automatically from constructive proofs of input functional program specifications on natural numbers, using an inductive theorem prover called Poiti'n. The construction of a Java program from the input functional program specification involves two phases. The theorem prover is used to construct a higher order functional (HOF) program from the input specification expressed as an existential the...

  2. Quantification of heterogeneity observed in medical images

    International Nuclear Information System (INIS)

    Brooks, Frank J; Grigsby, Perry W

    2013-01-01

    There has been much recent interest in the quantification of visually evident heterogeneity within functional grayscale medical images, such as those obtained via magnetic resonance or positron emission tomography. In the case of images of cancerous tumors, variations in grayscale intensity imply variations in crucial tumor biology. Despite these considerable clinical implications, there is as yet no standardized method for measuring the heterogeneity observed via these imaging modalities. In this work, we motivate and derive a statistical measure of image heterogeneity. This statistic measures the distance-dependent average deviation from the smoothest intensity gradation feasible. We show how this statistic may be used to automatically rank images of in vivo human tumors in order of increasing heterogeneity. We test this method against the current practice of ranking images via expert visual inspection. We find that this statistic provides a means of heterogeneity quantification beyond that given by other statistics traditionally used for the same purpose. We demonstrate the effect of tumor shape upon our ranking method and find the method applicable to a wide variety of clinically relevant tumor images. We find that the automated heterogeneity rankings agree very closely with those performed visually by experts. These results indicate that our automated method may be used reliably to rank, in order of increasing heterogeneity, tumor images whether or not object shape is considered to contribute to that heterogeneity. Automated heterogeneity ranking yields objective results which are more consistent than visual rankings. Reducing variability in image interpretation will enable more researchers to better study potential clinical implications of observed tumor heterogeneity

  3. Quantification of heterogeneity observed in medical images.

    Science.gov (United States)

    Brooks, Frank J; Grigsby, Perry W

    2013-03-02

    There has been much recent interest in the quantification of visually evident heterogeneity within functional grayscale medical images, such as those obtained via magnetic resonance or positron emission tomography. In the case of images of cancerous tumors, variations in grayscale intensity imply variations in crucial tumor biology. Despite these considerable clinical implications, there is as yet no standardized method for measuring the heterogeneity observed via these imaging modalities. In this work, we motivate and derive a statistical measure of image heterogeneity. This statistic measures the distance-dependent average deviation from the smoothest intensity gradation feasible. We show how this statistic may be used to automatically rank images of in vivo human tumors in order of increasing heterogeneity. We test this method against the current practice of ranking images via expert visual inspection. We find that this statistic provides a means of heterogeneity quantification beyond that given by other statistics traditionally used for the same purpose. We demonstrate the effect of tumor shape upon our ranking method and find the method applicable to a wide variety of clinically relevant tumor images. We find that the automated heterogeneity rankings agree very closely with those performed visually by experts. These results indicate that our automated method may be used reliably to rank, in order of increasing heterogeneity, tumor images whether or not object shape is considered to contribute to that heterogeneity. Automated heterogeneity ranking yields objective results which are more consistent than visual rankings. Reducing variability in image interpretation will enable more researchers to better study potential clinical implications of observed tumor heterogeneity.

  4. The Method of Manufactured Universes for validating uncertainty quantification methods

    KAUST Repository

    Stripling, H.F.

    2011-09-01

    The Method of Manufactured Universes is presented as a validation framework for uncertainty quantification (UQ) methodologies and as a tool for exploring the effects of statistical and modeling assumptions embedded in these methods. The framework calls for a manufactured reality from which experimental data are created (possibly with experimental error), an imperfect model (with uncertain inputs) from which simulation results are created (possibly with numerical error), the application of a system for quantifying uncertainties in model predictions, and an assessment of how accurately those uncertainties are quantified. The application presented in this paper manufactures a particle-transport universe, models it using diffusion theory with uncertain material parameters, and applies both Gaussian process and Bayesian MARS algorithms to make quantitative predictions about new experiments within the manufactured reality. The results of this preliminary study indicate that, even in a simple problem, the improper application of a specific UQ method or unrealized effects of a modeling assumption may produce inaccurate predictions. We conclude that the validation framework presented in this paper is a powerful and flexible tool for the investigation and understanding of UQ methodologies. © 2011 Elsevier Ltd. All rights reserved.

  5. Quantification of landscape multifunctionality based on farm functionality indices

    DEFF Research Database (Denmark)

    Andersen, Peter Stubkjær; Vejre, Henrik; Dalgaard, Tommy

    2011-01-01

    We present a bottom-up method in which landscape multifunctionality is quantified by using functional indices developed from farm questionnaire data. The interview survey comprised 382 farms in a rural area of Denmark. The functional classes included in the method are: (1) production, (2) residence, (3) wildlife habitats, and (4) recreation. At farm level each of these functions is defined by data on a number of farmers' activities as well as farm characteristics which can be harvested from a selection of the interview questions. The selected interview questions are attached as indicators to the relevant function. A score spectrum is assigned to each indicator to enable a representation of its relative contribution to the function on each farm depending on the question responses from the interviewees. The values for each indicator are weighted in relation to each of the others and all the values are summed...

  6. Analysis on relation between safety input and accidents

    Institute of Scientific and Technical Information of China (English)

    YAO Qing-guo; ZHANG Xue-mu; LI Chun-hui

    2007-01-01

    The amount of safety input directly determines the level of safety, and there exists a dialectical and unified relationship between safety input and accidents. Based on field investigation and reliable data, this paper studies the dialectical relationship between safety input and accidents in depth and reaches the following conclusions: the safety situation of coal enterprises is related to the safety input rate and is affected little by the safety input scale. On this basis, a relationship model between safety input and accidents, i.e. the accident model, is built.

  7. Fast metabolite identification with Input Output Kernel Regression

    Science.gov (United States)

    Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho

    2016-01-01

    Motivation: An important problem in metabolomics is to identify metabolites using tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output spaces and can handle structured output spaces such as the molecule space. Results: We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. This method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated with the output kernel. The second phase is a preimage problem, consisting of mapping the predicted output feature vectors back to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method has the advantage of decreasing the running times for the training step and the test step by several orders of magnitude over the preceding methods. Availability and implementation: Contact: celine.brouard@aalto.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307628
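
    A toy sketch of the two-phase scheme under a strong simplification: with a linear output kernel, the output feature map is the fingerprint itself, so phase one reduces to kernel ridge regression onto fingerprints and phase two to a nearest-candidate search. All data, dimensions and kernel parameters below are invented:

        import numpy as np

        rng = np.random.default_rng(1)
        X_train = rng.normal(size=(50, 20))                         # stand-in "spectra"
        Y_train = rng.integers(0, 2, size=(50, 64)).astype(float)   # "fingerprints"
        candidates = rng.integers(0, 2, size=(500, 64)).astype(float)  # "database"

        def rbf(A, B, gamma=0.05):  # Gaussian input kernel on spectra features
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        K = rbf(X_train, X_train)                  # input (spectrum) kernel matrix
        lam = 1e-2                                 # ridge regularization
        alpha = np.linalg.solve(K + lam * np.eye(len(K)), Y_train)

        def identify(x_new):
            k = rbf(x_new[None, :], X_train)       # similarities to training spectra
            psi = (k @ alpha).ravel()              # predicted output-feature vector
            # Pre-image step: return the candidate closest in the output space.
            return candidates[np.argmax(candidates @ psi)]

        print(identify(rng.normal(size=20))[:10])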

  8. Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy

    Science.gov (United States)

    Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng

    2018-06-01

    To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.

  9. Stereological quantification of mast cells in human synovium

    DEFF Research Database (Denmark)

    Damsgaard, T E; Sørensen, Flemming Brandt; Herlin, T

    1999-01-01

    Mast cells participate in both the acute allergic reaction as well as in chronic inflammatory diseases. Earlier studies have revealed divergent results regarding the quantification of mast cells in the human synovium. The aim of the present study was therefore to quantify these cells in the human synovium using stereological techniques. Different methods of staining and quantification have previously been used for mast cell quantification in human synovium. Stereological techniques provide precise and unbiased information on the number of cell profiles in two-dimensional tissue sections of, in this case, human synovium. In 10 patients suffering from osteoarthritis a median of 3.6 mast cells/mm2 synovial membrane was found. The total number of cells (synoviocytes, fibroblasts, lymphocytes, leukocytes) present was 395.9 cells/mm2 (median). The mast cells constituted 0.8% of all the cell profiles...

  10. Can Simulation Credibility Be Improved Using Sensitivity Analysis to Understand Input Data Effects on Model Outcome?

    Science.gov (United States)

    Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.

    2015-01-01

    Model and simulation (MS) credibility is defined as the quality to elicit belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility, and provides guidance to the model developers, analysts, and end users for assessing the MS credibility. Of the eight characteristics, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS applications. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs, and maximizes the communication of the potential level of risk of using model outputs. Unfortunately, in practice, this may result in overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results parameter robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.

  11. Mars 2.2 code manual: input requirements

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Lee, Won Jae; Jeong, Jae Jun; Lee, Young Jin; Hwang, Moon Kyu; Kim, Kyung Doo; Lee, Seung Wook; Bae, Sung Won

    2003-07-01

    Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF codes. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-of-State (EOS) for the light water was unified by replacing the EOS of COBRA-TF by that of RELAP5. This input manual provides a complete list of input required to run MARS. The manual is divided largely into two parts, namely, the one-dimensional part and the multi-dimensional part. The inputs for auxiliary parts such as minor edit requests and graph formatting inputs are shared by the two parts and as such mixed input is possible. The overall structure of the input is modeled on the structure of the RELAP5 and as such the layout of the manual is very similar to that of the RELAP. This similitude to RELAP5 input is intentional as this input scheme will allow minimum modification between the inputs of RELAP5 and MARS. The MARS development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible

  12. MARS code manual volume II: input requirements

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Kim, Kyung Doo; Bae, Sung Won; Jeong, Jae Jun; Lee, Seung Wook; Hwang, Moon Kyu

    2010-02-01

    Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF codes. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-Of-State (EOS) for the light water was unified by replacing the EOS of COBRA-TF by that of RELAP5. This input manual provides a complete list of input required to run MARS. The manual is divided largely into two parts, namely, the one-dimensional part and the multi-dimensional part. The inputs for auxiliary parts such as minor edit requests and graph formatting inputs are shared by the two parts and as such mixed input is possible. The overall structure of the input is modeled on the structure of the RELAP5 and as such the layout of the manual is very similar to that of the RELAP. This similitude to RELAP5 input is intentional as this input scheme will allow minimum modification between the inputs of RELAP5 and MARS3.1. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible

  13. Modeling the Indonesian Consumer Price Index Using a Multi-Input Intervention Model

    KAUST Repository

    Novianti, Putri Wikie

    2017-01-24

    There are some events which are expected to affect the CPI's fluctuation, i.e. the 1997/1998 financial crisis, fuel price increases, base year changes, the independence of Timor-Timur (October 1999), and the tsunami disaster in Aceh (December 2004). During the research period, there were eight fuel price increases and four base year changes. The objective of this research is to obtain a multi-input intervention model which can describe the magnitude and duration of the effect of each event on the CPI. Most intervention analyses that have been carried out contain only a single intervention input, either a step or a pulse function. A multi-input intervention was used in the Indonesian CPI case because there are several events which are expected to affect the CPI. Based on the results, those events did affect the CPI. Additionally, other events, such as Eid in January 1999 and events in April 2002, July 2003, December 2005, and September 2008, affected the CPI as well. In general, those events had a positive effect on the CPI, except the events of April 2002 and July 2003, which had negative effects.
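
    A minimal sketch of the step/pulse intervention idea on synthetic data; the full transfer-function intervention model used in such studies would add ARMA noise dynamics, which are omitted here, and all dates and magnitudes are invented:

        import numpy as np

        rng = np.random.default_rng(2)
        n = 120                            # monthly CPI-like series (synthetic)
        t = np.arange(n)
        step = (t >= 60).astype(float)     # permanent shift, e.g. a fuel price increase
        pulse = (t == 90).astype(float)    # one-off shock, e.g. a disaster month

        y = 100 + 0.3 * t + 4.0 * step + 6.0 * pulse + rng.normal(0, 1, n)

        # OLS with trend plus intervention dummies (noise dynamics omitted for brevity).
        X = np.column_stack([np.ones(n), t, step, pulse])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(dict(zip(["const", "trend", "step_effect", "pulse_effect"], beta.round(2))))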

  14. Robust input design for nonlinear dynamic modeling of AUV.

    Science.gov (United States)

    Nouri, Nowrouz Mohammad; Valadi, Mehrdad

    2017-09-01

    Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to generate the good quality dynamic model of AUVs. In a problem with optimal input design, the desired input signal depends on the unknown system which is intended to be identified. In this paper, the input design approach which is robust to uncertainties in model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constraint robust optimization problem. The presented algorithm is used for designing the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, proposed input design can satisfy both robustness of constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Development of Input/Output System for the Reactor Transient Analysis System (RETAS)

    International Nuclear Information System (INIS)

    Suh, Jae Seung; Kang, Doo Hyuk; Cho, Yeon Sik; Ahn, Seung Hoon; Cho, Yong Jin

    2009-01-01

    A Korea Institute of Nuclear Safety Reactor Transient Analysis System (KINS-RETAS) aims at providing a realistic prediction of core and RCS response to the potential or actual event scenarios in Korean nuclear power plants (NPPs). The thermal hydraulic system code MARS is the pivot code of the RETAS, and is used to predict thermal hydraulic (TH) behaviors in the core and associated systems. MARS alone can be applied to many types of transients, but is sometimes coupled with other codes developed for different objectives. Many tools have been developed to aid users in preparing input and displaying the transient information and output data. Graphical User Interfaces (GUIs) that help prepare input decks can be seen in SNAP (Gitnick, 1998) and VISA (K.D. Kim, 2007), and display aids include eFAST (KINS, 2007). The tools listed above are graphical interfaces. The input deck builders allow the user to create a functional diagram of the plant, pictorially on the screen. The functional diagram, when annotated with control volume and junction numbers, is a nodalization diagram. Data required for an input deck are entered for volumes and junctions through a mouse-driven menu and pop-up dialogs; after the information is complete, an input deck is generated. Display GUIs show data from MARS calculations, either during or after the transient. The RETAS requires the user to first generate a set of 'input', two-dimensional pictures of the plant on which some of the data is displayed either numerically or with a color map. The RETAS can generate XY-plots of the data. Time histories of plant conditions can be seen via the plots or through the RETAS's replay mode. The user input was combined with design input from MARS developers and experts from both the GUI and ergonomics fields. A partial list of capabilities follows. - 3D display for neutronics. - Easier method (less user time and effort) to generate 'input' for the 3D displays. - Detailed view of data at volume or

  16. Development of Input/Output System for the Reactor Transient Analysis System (RETAS)

    Energy Technology Data Exchange (ETDEWEB)

    Suh, Jae Seung; Kang, Doo Hyuk; Cho, Yeon Sik [ENESYS, Daejeon (Korea, Republic of); Ahn, Seung Hoon; Cho, Yong Jin [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2009-05-15

    A Korea Institute of Nuclear Safety Reactor Transient Analysis System (KINS-RETAS) aims at providing a realistic prediction of core and RCS response to the potential or actual event scenarios in Korean nuclear power plants (NPPs). The thermal hydraulic system code MARS is the pivot code of the RETAS, and is used to predict thermal hydraulic (TH) behaviors in the core and associated systems. MARS alone can be applied to many types of transients, but is sometimes coupled with other codes developed for different objectives. Many tools have been developed to aid users in preparing input and displaying the transient information and output data. Graphical User Interfaces (GUIs) that help prepare input decks can be seen in SNAP (Gitnick, 1998) and VISA (K.D. Kim, 2007), and display aids include eFAST (KINS, 2007). The tools listed above are graphical interfaces. The input deck builders allow the user to create a functional diagram of the plant, pictorially on the screen. The functional diagram, when annotated with control volume and junction numbers, is a nodalization diagram. Data required for an input deck are entered for volumes and junctions through a mouse-driven menu and pop-up dialogs; after the information is complete, an input deck is generated. Display GUIs show data from MARS calculations, either during or after the transient. The RETAS requires the user to first generate a set of 'input', two-dimensional pictures of the plant on which some of the data is displayed either numerically or with a color map. The RETAS can generate XY-plots of the data. Time histories of plant conditions can be seen via the plots or through the RETAS's replay mode. The user input was combined with design input from MARS developers and experts from both the GUI and ergonomics fields. A partial list of capabilities follows. - 3D display for neutronics. - Easier method (less user time and effort) to generate 'input' for the 3D displays. - Detailed view

  17. Preclinical evaluation and quantification of [18F]MK-9470 as a radioligand for PET imaging of the type 1 cannabinoid receptor in rat brain

    International Nuclear Information System (INIS)

    Casteels, Cindy; Koole, Michel; Laere, Koen van; Celen, Sofie; Bormans, Guy

    2012-01-01

    [18F]MK-9470 is an inverse agonist for the type 1 cannabinoid (CB1) receptor allowing its use in PET imaging. We characterized the kinetics of [18F]MK-9470 and evaluated its ability to quantify CB1 receptor availability in the rat brain. Dynamic small-animal PET scans with [18F]MK-9470 were performed in Wistar rats on a FOCUS-220 system for up to 10 h. Both plasma and perfused brain homogenates were analysed using HPLC to quantify radiometabolites. Displacement and blocking experiments were done using cold MK-9470 and another inverse agonist, SR141716A. The distribution volume (V_T) of [18F]MK-9470 was used as a quantitative measure and compared to the use of brain uptake, expressed as SUV, a simplified method of quantification. The percentage of intact [18F]MK-9470 in arterial plasma samples was 80 ± 23% at 10 min, 38 ± 30% at 40 min and 13 ± 14% at 210 min. A polar radiometabolite fraction was detected in plasma and brain tissue. The brain radiometabolite concentration was uniform across the whole brain. Displacement and pretreatment studies showed that 56% of the tracer binding was specific and reversible. V_T values obtained with a one-tissue compartment model plus constrained radiometabolite input had good identifiability (≤10%). Ignoring the radiometabolite contribution using a one-tissue compartment model alone, i.e. without constrained radiometabolite input, overestimated the [18F]MK-9470 V_T, but was correlated. A correlation between [18F]MK-9470 V_T and SUV in the brain was also found (R^2 = 0.26-0.33; p ≤ 0.03). While the presence of a brain-penetrating radiometabolite fraction complicates the quantification of [18F]MK-9470 in the rat brain, its tracer kinetics can be modelled using a one-tissue compartment model with and without constrained radiometabolite input. (orig.)
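
    For orientation, a minimal forward simulation of the one-tissue compartment model named in the record, with V_T = K1/k2; the plasma input curve, assumed parent fraction and rate constants are invented, and the constrained-radiometabolite fitting itself is not reproduced:

        import numpy as np

        # One-tissue compartment model: dC_t/dt = K1*Cp(t) - k2*C_t(t), V_T = K1/k2.
        t = np.linspace(0, 180, 1801)                 # minutes
        Cp = 5.0 * t * np.exp(-t / 8.0)               # synthetic plasma input curve
        parent = np.exp(-t / 60.0)                    # assumed parent fraction (metabolite correction)
        Cp_corr = Cp * parent

        K1, k2 = 0.12, 0.06                           # illustrative rate constants (1/min)
        dt = t[1] - t[0]
        Ct = np.zeros_like(t)
        for i in range(1, len(t)):                    # forward-Euler integration of the ODE
            Ct[i] = Ct[i - 1] + dt * (K1 * Cp_corr[i - 1] - k2 * Ct[i - 1])

        print("V_T =", K1 / k2)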

  18. Quantification of differential gene expression by multiplexed targeted resequencing of cDNA

    Science.gov (United States)

    Arts, Peer; van der Raadt, Jori; van Gestel, Sebastianus H.C.; Steehouwer, Marloes; Shendure, Jay; Hoischen, Alexander; Albers, Cornelis A.

    2017-01-01

    Whole-transcriptome or RNA sequencing (RNA-Seq) is a powerful and versatile tool for functional analysis of different types of RNA molecules, but sample reagent and sequencing cost can be prohibitive for hypothesis-driven studies where the aim is to quantify differential expression of a limited number of genes. Here we present an approach for quantification of differential mRNA expression by targeted resequencing of complementary DNA using single-molecule molecular inversion probes (cDNA-smMIPs) that enable highly multiplexed resequencing of cDNA target regions of ∼100 nucleotides and counting of individual molecules. We show that accurate estimates of differential expression can be obtained from molecule counts for hundreds of smMIPs per reaction and that smMIPs are also suitable for quantification of relative gene expression and allele-specific expression. Compared with low-coverage RNA-Seq and a hybridization-based targeted RNA-Seq method, cDNA-smMIPs are a cost-effective high-throughput tool for hypothesis-driven expression analysis in large numbers of genes (10 to 500) and samples (hundreds to thousands). PMID:28474677

  19. Feasibility Study for Applicability of the Wavelet Transform to Code Accuracy Quantification

    International Nuclear Information System (INIS)

    Kim, Jong Rok; Choi, Ki Yong

    2012-01-01

    A purpose of the assessment process of large thermal-hydraulic system codes is verifying their quality by comparing code predictions against experimental data. This process is essential for reliable safety analysis of nuclear power plants. Extensive experimental programs have been conducted in order to support the development and validation activities of best estimate thermal-hydraulic codes. So far, the Fast Fourier Transform Based Method (FFTBM) has been used widely for quantification of the prediction accuracy regardless of its limitation that it does not provide any time resolution for a local event. As alternative options, several time windowing methods (running average, short time Fourier transform, etc.) can be utilized, but such time windowing methods also have the limitation of a fixed resolution. This limitation can be overcome by a wavelet transform because the resolution of the wavelet transform effectively varies in the time-frequency plane depending on the choice of basis functions, which are not necessarily sinusoidal. In this study, the feasibility of a new code accuracy quantification methodology using the wavelet transform is pursued
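
    A minimal sketch of the idea using the PyWavelets package (not the authors' metric): decomposing both the experimental and the predicted signal and comparing coefficients per decomposition level localizes the discrepancy in both time and scale; the signals are synthetic stand-ins:

        import numpy as np
        import pywt  # PyWavelets

        t = np.linspace(0, 100, 1024)
        exp_data = np.exp(-t / 30) * np.cos(0.4 * t)            # stand-in "experiment"
        code_data = 0.95 * np.exp(-t / 28) * np.cos(0.42 * t)   # stand-in "code prediction"

        cE = pywt.wavedec(exp_data, "db4", level=4)
        cC = pywt.wavedec(code_data, "db4", level=4)

        # Per-level discrepancy: each level corresponds to a frequency band, and
        # each coefficient index to a time location, unlike a global FFT measure.
        for lvl, (e, c) in enumerate(zip(cE, cC)):
            err = np.abs(c - e).sum() / (np.abs(e).sum() + 1e-12)
            print(f"level {lvl}: relative discrepancy {err:.3f}")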

  20. PCR amplification of repetitive sequences as a possible approach in relative species quantification

    DEFF Research Database (Denmark)

    Ballin, Nicolai Zederkopff; Vogensen, Finn Kvist; Karlsson, Anders H

    2012-01-01

    Both relative and absolute quantification are possible in species quantification when single copy genomic DNA is used. However, amplification of single copy genomic DNA does not allow a limit of detection as low as one obtained from amplification of repetitive sequences. Amplification of repetitive sequences is therefore frequently used in absolute quantification, but problems occur in relative quantification as the number of repetitive sequences is unknown. A promising approach was developed where data from amplification of repetitive sequences were used in relative quantification of species ... to relatively quantify the amount of chicken DNA in a binary mixture of chicken DNA and pig DNA. However, the designed PCR primers lack the specificity required for regulatory species control.
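
    For orientation, a sketch of relative quantification from quantitative PCR signals, assuming the amplification efficiency and the repeat copy numbers per genome are known; the latter is exactly the unknown the record highlights. All numbers are invented:

        # Relative quantity from qPCR: Q is proportional to E**(-Cq), with E the
        # amplification efficiency (2.0 = perfect doubling per cycle).
        E = 1.9                                    # assumed efficiency
        cq_chicken, cq_pig = 21.3, 23.8            # measured quantification cycles
        copies_chicken, copies_pig = 120.0, 90.0   # assumed repeats per haploid genome

        q_chicken = E ** (-cq_chicken) / copies_chicken
        q_pig = E ** (-cq_pig) / copies_pig
        frac_chicken = q_chicken / (q_chicken + q_pig)
        print(f"estimated chicken DNA fraction: {frac_chicken:.2%}")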

  1. Strategies of Transition to Sustainable Agriculture in Iran II- Inputs Replacement and Designing Agroecosystem

    Directory of Open Access Journals (Sweden)

    Alireza Koocheki

    2018-02-01

    Full Text Available Abstract Introduction Sustainable agricultural development is an important goal in economic planning and human development worldwide. A range of processes and relationships are transformed, beginning with aspects of basic soil structure, organic matter content, and diversity and activity of soil biota. Eventually, major changes also occur in the relationships among weed, insect, and disease populations, and in the balance between beneficial and pest organisms. Ultimately, nutrient dynamics and cycling, energy use efficiency, and overall system productivity are impacted. Measuring and monitoring these changes during the conversion period helps the farmer evaluate the success of the conversion process, and provides a framework to determine the requirements for sustainability. After improving resource use efficiency, replacement of chemical inputs with ecological inputs is the second step, and redesign of agro-ecosystems is the final step in the transition from conventional to sustainable agriculture. The study was performed to evaluate the status of Iran's agricultural systems. Materials and Methods Replacing chemical inputs with organic and ecological inputs is the second step of the transition to sustainable agriculture. This study was performed to assess the status of input replacement and agro-ecosystem design based on ecological principles in Iran. For this purpose, 223 published studies on agronomic and medicinal plants were collected, analyzed on the basis of functional and structural characteristics, and then used. Considering the importance of multi-functionality in sustainable agriculture, this study considered multiple managements for input replacement. The functions examined were: improving soil fertility and bio-chemical characteristics, ecological management of pests and diseases, reducing energy usage, and increasing biodiversity. Using organic and biological inputs, retaining plant residues on the soil, using...

  2. Stochastic Systems Uncertainty Quantification and Propagation

    CERN Document Server

    Grigoriu, Mircea

    2012-01-01

    Uncertainty is an inherent feature of both properties of physical systems and the inputs to these systems that needs to be quantified for cost effective and reliable designs. The states of these systems satisfy equations with random entries, referred to as stochastic equations, so that they are random functions of time and/or space. The solution of stochastic equations poses notable technical difficulties that are frequently circumvented by heuristic assumptions at the expense of accuracy and rigor. The main objective of Stochastic Systems is to promote the development of accurate and efficient methods for solving stochastic equations and to foster interactions between engineers, scientists, and mathematicians. To achieve these objectives Stochastic Systems presents: a clear and brief review of essential concepts on probability theory, random functions, stochastic calculus, Monte Carlo simulation, and functional analysis; probabilistic models for random variables an...

  3. Design of a Code-Maker Translator Assistive Input Device with a Contest Fuzzy Recognition Algorithm for the Severely Disabled

    Directory of Open Access Journals (Sweden)

    Chung-Min Wu

    2015-01-01

    Full Text Available This study developed an assistive system for people with severe physical disabilities, named "code-maker translator assistive input device", which utilizes a contest fuzzy recognition algorithm and Morse code encoding to provide keyboard and mouse functions for users to access a standard personal computer, smartphone, and tablet PC. This assistive input device has seven features: small size, easy installation, modular design, simple maintenance, functionality, very flexible input interface selection, and scalability of system functions when the device is combined with computer application software or APP programs. Users with severe physical disabilities can use this device to operate the various functions of computers, smartphones, and tablet PCs, such as sending e-mail, Internet browsing, playing games, and controlling home appliances. A patient with a brain artery malformation participated in this study. The analysis result showed that the subject became familiar with operating the long/short tones of Morse code in one month. In the future, we hope this system can help more people in need.

  4. Development and operation of K-URT data input system

    International Nuclear Information System (INIS)

    Kim, Yun Jae; Myoung, Noh Hoon; Kim, Jong Hyun; Han, Jae Jun

    2010-05-01

    Activities for TSPA (Total System Performance Assessment) of the permanent disposal of high level radioactive waste include production of input data, safety assessment using input data, the licensing procedure, and others. These activities are performed in 5 steps as follows: (1) adequate planning, (2) controlled execution, (3) complete documentation, (4) thorough review, (5) independent oversight. For confidence building, it is very important to record and manage the materials obtained from research work transparently. For the documentation of disposal research work from the planning stage to the data management stage, KAERI developed CYPRUS (CYBER R&D Platform for Radwaste Disposal in Underground System) with a QA (Quality Assurance) system. In CYPRUS, the QA system affects other functions such as data management, project management and others. This report analyzes the structure of CYPRUS and proposes to accumulate qualified data, to provide a convenient application and to promote access and use of CYPRUS for a future-oriented system

  5. Categorical Inputs, Sensitivity Analysis, Optimization and Importance Tempering with tgp Version 2, an R Package for Treed Gaussian Process Models

    Directory of Open Access Journals (Sweden)

    Robert B. Gramacy

    2010-02-01

    Full Text Available This document describes the new features in version 2.x of the tgp package for R, implementing treed Gaussian process (GP) models. The topics covered include methods for dealing with categorical inputs and excluding inputs from the tree or GP part of the model; fully Bayesian sensitivity analysis for inputs/covariates; sequential optimization of black-box functions; and a new Monte Carlo method for inference in multi-modal posterior distributions that combines simulated tempering and importance sampling. These additions extend the functionality of tgp across all models in the hierarchy: from Bayesian linear models, to classification and regression trees (CART), to treed Gaussian processes with jumps to the limiting linear model. It is assumed that the reader is familiar with the baseline functionality of the package, outlined in the first vignette (Gramacy 2007).

  6. On a multi-channel transportation loss system with controlled input and controlled service

    Directory of Open Access Journals (Sweden)

    Jewgeni Dshalalow

    1987-01-01

    Full Text Available A multi-channel loss queueing system is investigated. The input stream is a controlled point process. The service in each of m parallel channels depends on the state of the system at certain moments of time when input and service may be controlled. To obtain explicitly the limiting distribution of the main process Z_t (the number of busy channels) in equilibrium, an auxiliary three-dimensional process with two additional components (one of them a semi-Markov process) is treated as a semi-regenerative process. An optimization problem is discussed. Simple expressions for an objective function are derived.

  7. Leaders’ receptivity to subordinates’ creative input: the role of achievement goals and composition of creative input

    NARCIS (Netherlands)

    Sijbom, R.B.L.; Janssen, O.; van Yperen, N.W.

    2015-01-01

    We identified leaders’ achievement goals and composition of creative input as important factors that can clarify when and why leaders are receptive to, and supportive of, subordinates’ creative input. As hypothesized, in two experimental studies, we found that relative to mastery goal leaders,

  8. High-frequency matrix converter with square wave input

    Science.gov (United States)

    Carr, Joseph Alexander; Balda, Juan Carlos

    2015-03-31

    A device for producing an alternating current output voltage from a high-frequency, square-wave input voltage, comprising a high-frequency, square-wave input, a matrix converter, and a control system. The matrix converter comprises a plurality of electrical switches. The high-frequency input and the matrix converter are electrically connected to each other. The control system is connected to each switch of the matrix converter. The control system is electrically connected to the input of the matrix converter. The control system is configured to operate each electrical switch of the matrix converter, converting a high-frequency, square-wave input voltage across the first input port of the matrix converter and the second input port of the matrix converter to an alternating current output voltage at the output of the matrix converter.

  9. Quantification of Confocal Images Using LabVIEW for Tissue Engineering Applications.

    Science.gov (United States)

    Sfakis, Lauren; Kamaldinov, Tim; Larsen, Melinda; Castracane, James; Khmaladze, Alexander

    2016-11-01

    Quantifying confocal images to enable localization of specific proteins of interest in three dimensions (3D) is important for many tissue engineering (TE) applications. Quantification of protein localization is essential for evaluation of specific scaffold constructs for cell growth and differentiation for application in TE and tissue regeneration strategies. Although obtaining information regarding protein expression levels is important, the location of proteins within cells grown on scaffolds is often the key to evaluating scaffold efficacy. Functional epithelial cell monolayers must be organized with apicobasal polarity, with proteins specifically localized to the apical or basolateral regions of cells in many organs. In this work, a customized program was developed using the LabVIEW platform to quantify protein positions in Z-stacks of confocal images of epithelial cell monolayers. The program's functionality is demonstrated through salivary gland TE, since functional salivary epithelial cells must correctly orient many proteins on the apical and basolateral membranes. Bio-LabVIEW Image Matrix Evaluation (Bio-LIME) takes 3D information collected from confocal Z-stack images and processes the fluorescence at each pixel to determine cell heights, nuclei heights, nuclei widths, protein localization, and cell count. As a demonstration of its utility, Bio-LIME was used to quantify the 3D location of the Zonula occludens-1 protein contained within tight junctions and its change in 3D position in response to chemical modification of the scaffold with laminin. Additionally, Bio-LIME was used to demonstrate that there is no advantage of sub-100 nm poly(lactic-co-glycolic acid) nanofibers over 250 nm fibers for epithelial apicobasal polarization. Bio-LIME will be broadly applicable for quantification of proteins in 3D that are grown in many different contexts.
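
    Bio-LIME itself is not reproduced here, but the core computation it describes can be sketched: per-pixel, intensity-weighted centroids along z locate a protein apically or basally within a synthetic Z-stack; all dimensions and thresholds below are invented:

        import numpy as np

        rng = np.random.default_rng(3)
        # Synthetic confocal Z-stack: (z, y, x) fluorescence of one labeled protein.
        z_planes, h, w = 30, 64, 64
        zz = np.arange(z_planes)[:, None, None]
        stack = np.exp(-0.5 * ((zz - 22) / 2.5) ** 2) + 0.05 * rng.random((z_planes, h, w))

        # Per-pixel intensity-weighted centroid along z: an apical marker should sit
        # near the top of the monolayer (large z), a basal one near the bottom.
        weights = stack.sum(axis=0)
        z_centroid = (stack * zz).sum(axis=0) / weights
        print("mean protein height (planes):", z_centroid.mean().round(2))

        # Thickness proxy: number of planes where the signal exceeds a threshold.
        height_map = (stack > 0.5).sum(axis=0)
        print("mean occupied thickness:", height_map.mean().round(2))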

  10. Application of adaptive hierarchical sparse grid collocation to the uncertainty quantification of nuclear reactor simulators

    Energy Technology Data Exchange (ETDEWEB)

    Yankov, A.; Downar, T. [University of Michigan, 2355 Bonisteel Blvd, Ann Arbor, MI 48109 (United States)

    2013-07-01

    Recent efforts in the application of uncertainty quantification to nuclear systems have utilized methods based on generalized perturbation theory and stochastic sampling. While these methods have proven to be effective they both have major drawbacks that may impede further progress. A relatively new approach based on spectral elements for uncertainty quantification is applied in this paper to several problems in reactor simulation. Spectral methods based on collocation attempt to couple the approximation-free nature of stochastic sampling methods with the determinism of generalized perturbation theory. The specific spectral method used in this paper employs both the Smolyak algorithm and adaptivity by using Newton-Cotes collocation points along with linear hat basis functions. Using this approach, a surrogate model for the outputs of a computer code is constructed hierarchically by adaptively refining the collocation grid until the interpolant is converged to a user-defined threshold. The method inherently fits into the framework of parallel computing and allows for the extraction of meaningful statistics and data that are not within reach of stochastic sampling and generalized perturbation theory. This paper aims to demonstrate the advantages of spectral methods, especially when compared to current methods used in reactor physics for uncertainty quantification, and to illustrate their full potential. (authors)
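
    A one-dimensional sketch of the adaptive hierarchical collocation idea with linear hat basis functions: each new node carries a hierarchical surplus, and child nodes are spawned only where the surplus exceeds a threshold; the model response and tolerance are invented:

        import numpy as np

        def f(x):  # black-box response to be surrogated (illustrative)
            return np.exp(-3 * x) * np.sin(6 * x)

        def interp(x, terms):  # evaluate the hierarchical hat-function interpolant
            y = np.zeros_like(x, dtype=float)
            for center, width, surplus in terms:
                y += surplus * np.maximum(0.0, 1.0 - np.abs(x - center) / width)
            return y

        tol, terms = 1e-3, []
        for e in (0.0, 1.0):  # boundary nodes at the coarsest level
            terms.append((e, 1.0, f(e) - interp(np.array([e]), terms)[0]))
        active = [(0.5, 0.5)]  # (node, hat half-width)
        while active:
            nxt = []
            for center, width in active:
                s = f(center) - interp(np.array([center]), terms)[0]  # hierarchical surplus
                terms.append((center, width, s))
                if abs(s) > tol and width > 1e-3:  # refine only where surplus is large
                    nxt += [(center - width / 2, width / 2), (center + width / 2, width / 2)]
            active = nxt

        x = np.linspace(0, 1, 201)
        print("terms:", len(terms), "max error:", np.abs(interp(x, terms) - f(x)).max())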

  11. Molecular quantification of genes encoding for green-fluorescent proteins

    DEFF Research Database (Denmark)

    Felske, A; Vandieken, V; Pauling, B V

    2003-01-01

    A quantitative PCR approach is presented to analyze the amount of recombinant green fluorescent protein (gfp) genes in environmental DNA samples. The quantification assay is a combination of specific PCR amplification and temperature gradient gel electrophoresis (TGGE). Gene quantification ... The PCR strategy is a highly specific and sensitive way to monitor recombinant DNA in environments like the efflux of a biotechnological plant.

  12. A highly sensitive method for quantification of iohexol

    DEFF Research Database (Denmark)

    Schulz, A.; Boeringer, F.; Swifka, J.

    2014-01-01

    A liquid chromatography-electrospray-mass spectrometry (LC-ESI-MS) approach using the multiple reaction monitoring mode was assessed for iohexol quantification, in order to test whether a significantly decreased amount of iohexol is sufficient for reliable quantification. We analyzed the kinetics of iohexol in rats after application of different amounts of iohexol (15 mg to 150 µg per rat). Blood sampling was conducted at four time points, at 15, 30, 60, and 90 min after iohexol injection. The analyte (iohexol) and the internal standard (iothalamic acid) were separated from serum proteins using a centrifugal filtration device with a cut-off of 3 kDa. The chromatographic separation was achieved on an analytical Zorbax SB C18 column. The detection and quantification were performed on a high capacity trap mass spectrometer using positive ion ESI in the multiple reaction monitoring (MRM) mode. Furthermore, using real-time polymerase...

  13. Cluster analysis in kinetic modelling of the brain: A noninvasive alternative to arterial sampling

    DEFF Research Database (Denmark)

    Liptrot, Matthew George; Adams, K.H.; Martiny, L.

    2004-01-01

    In emission tomography, quantification of brain tracer uptake, metabolism or binding requires knowledge of the cerebral input function. Traditionally, this is achieved with arterial blood sampling. We propose a noninvasive alternative via the use of a blood vessel time-activity curve (TAC)... © 2003 Elsevier Inc. All rights reserved.
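
    A minimal sketch of the clustering idea on synthetic TACs: k-means groups voxel curves, and the cluster with the earliest, sharpest peak is taken as the blood-pool (image-derived) input function. The curve shapes, noise level and the selection heuristic below are invented:

        import numpy as np

        rng = np.random.default_rng(4)
        t = np.linspace(0, 60, 120)                              # minutes
        aif = 30 * t * np.exp(-t / 1.5) + 2 * np.exp(-t / 40)    # sharp early peak (blood)
        tissue = 5 * (1 - np.exp(-t / 10))                       # slow uptake (tissue)

        # Synthetic voxel TACs: a few blood-like, many tissue-like, plus noise.
        tacs = np.vstack([aif + rng.normal(0, 0.3, t.size) for _ in range(40)] +
                         [tissue + rng.normal(0, 0.3, t.size) for _ in range(400)])

        # Plain k-means (numpy only) on the TACs.
        k = 2
        centers = tacs[rng.choice(len(tacs), k, replace=False)]
        for _ in range(50):
            d = ((tacs[:, None, :] - centers[None]) ** 2).sum(-1)
            labels = d.argmin(1)
            centers = np.array([tacs[labels == j].mean(0) if np.any(labels == j)
                                else centers[j] for j in range(k)])

        # Heuristic: the blood-pool cluster peaks earliest.
        blood = min(range(k), key=lambda j: centers[j].argmax())
        idif = centers[blood]
        print("estimated input-function peak time:", t[idif.argmax()], "min")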

  14. Spatially Resolved MR-Compatible Doppler Ultrasound: Proof of Concept for Triggering of Diagnostic Quality Cardiovascular MRI for Function and Flow Quantification at 3T.

    Science.gov (United States)

    Crowe, Lindsey Alexandra; Manasseh, Gibran; Chmielewski, Aneta; Hachulla, Anne-Lise; Speicher, Daniel; Greiser, Andreas; Muller, Hajo; de Perrot, Thomas; Vallee, Jean-Paul; Salomir, Rares

    2018-02-01

    We demonstrate the use of a magnetic-resonance (MR)-compatible ultrasound (US) imaging probe using spatially resolved Doppler for diagnostic quality cardiovascular MR imaging (MRI) as an initial step toward hybrid US/MR fetal imaging. A newly developed technology for a dedicated MR-compatible phased array ultrasound-imaging probe acquired pulsed color Doppler carotid images, which were converted in near-real time to a trigger signal for cardiac cine and flow quantification MRI. Ultrasound and MR data acquired simultaneously were interference free. Conventional electrocardiogram (ECG) and the proposed spatially resolved Doppler triggering were compared in 10 healthy volunteers. A synthetic "false-triggered" image was retrospectively processed using metric optimized gating (MOG). Images were scored by expert readers, and sharpness, cardiac function and aortic flow were quantified. Four-dimensional (4-D) flow (two volunteers) showed feasibility of Doppler triggering over a long acquisition time. Imaging modalities were compatible. US probe positioning was stable and comfortable. Image quality scores and quantified sharpness were statistically equal for Doppler- and ECG-triggering (p ). ECG-, Doppler-triggered, and MOG ejection fractions were equivalent (p ), with false-triggered values significantly lower (p 0.05). 4-D flow quantification gave consistent results between ECG and Doppler triggering. We report interference-free pulsed color Doppler ultrasound during MR data acquisition. Cardiovascular MRI of diagnostic quality was successfully obtained with pulsed color Doppler triggering. The hardware platform could further enable advanced free-breathing cardiac imaging. Doppler ultrasound triggering is applicable where ECG is compromised due to pathology or interference at higher magnetic fields, and where direct ECG is impossible, i.e., fetal imaging.

  15. Controlling uncertain neutral dynamic systems with delay in control input

    International Nuclear Information System (INIS)

    Park, Ju H.; Kwon, O.

    2005-01-01

    This article gives a novel criterion for the asymptotic stabilization of the zero solutions of a class of neutral systems with delays in control input. By constructing Lyapunov functionals, we have obtained the criterion which is expressed in terms of matrix inequalities. The solutions of the inequalities can be easily solved by efficient convex optimization algorithms. A numerical example is included to illustrate the design procedure of the proposed method

  16. Automatic individual arterial input functions calculated from PCA outperform manual and population-averaged approaches for the pharmacokinetic modeling of DCE-MR images.

    Science.gov (United States)

    Sanz-Requena, Roberto; Prats-Montalbán, José Manuel; Martí-Bonmatí, Luis; Alberich-Bayarri, Ángel; García-Martí, Gracián; Pérez, Rosario; Ferrer, Alberto

    2015-08-01

    To introduce a segmentation method to calculate an automatic arterial input function (AIF) based on principal component analysis (PCA) of dynamic contrast enhanced MR (DCE-MR) imaging and compare it with individual manually selected and population-averaged AIFs using calculated pharmacokinetic parameters. The study included 65 individuals with prostate examinations (27 tumors and 38 controls). Manual AIFs were individually extracted and also averaged to obtain a population AIF. Automatic AIFs were individually obtained by applying PCA to volumetric DCE-MR imaging data and finding the highest correlation of the PCs with a reference AIF. Variability was assessed using coefficients of variation and repeated measures tests. The different AIFs were used as inputs to the pharmacokinetic model and correlation coefficients, Bland-Altman plots and analysis of variance tests were obtained to compare the results. Automatic PCA-based AIFs were successfully extracted in all cases. The manual and PCA-based AIFs showed good correlation (r between pharmacokinetic parameters ranging from 0.74 to 0.95), with differences below the manual individual variability (RMSCV up to 27.3%). The population-averaged AIF showed larger differences (r from 0.30 to 0.61). The automatic PCA-based approach minimizes the variability associated to obtaining individual volume-based AIFs in DCE-MR studies of the prostate. © 2014 Wiley Periodicals, Inc.
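
    A minimal sketch of the PCA step described above, on synthetic DCE curves: principal components of the mean-centered voxel curves are correlated against a reference AIF shape, and the best-matching, sign-corrected component is retained. The data, component count and reference curve are invented:

        import numpy as np

        rng = np.random.default_rng(5)
        t = np.linspace(0, 5, 60)                                     # minutes, DCE sampling
        aif_shape = 8 * t * np.exp(-t / 0.35) + 1.2 * np.exp(-t / 3)  # arterial-like curve
        slow = 1 - np.exp(-t / 2.0)                                   # tissue-like enhancement

        # Volume of voxel curves: mixtures of the two sources plus noise.
        w = rng.random((2000, 1))
        curves = w * aif_shape + (1 - w) * 2.0 * slow + rng.normal(0, 0.05, (2000, t.size))

        # PCA on mean-centered curves via SVD.
        Xc = curves - curves.mean(0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        pcs = Vt[:5]                                                  # first principal components

        # Pick the PC best correlated with a reference (e.g. population) AIF shape,
        # and sign-correct it before use.
        corr = [abs(np.corrcoef(pc, aif_shape)[0, 1]) for pc in pcs]
        best = pcs[int(np.argmax(corr))]
        best = best * np.sign(np.corrcoef(best, aif_shape)[0, 1])
        print("best-PC correlation with reference AIF: %.3f" % max(corr))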

  17. Upper Limb Evaluation in Duchenne Muscular Dystrophy: Fat-Water Quantification by MRI, Muscle Force and Function Define Endpoints for Clinical Trials.

    Directory of Open Access Journals (Sweden)

    Valeria Ricotti

    Full Text Available A number of promising experimental therapies for Duchenne muscular dystrophy (DMD) are emerging. Clinical trials currently rely on invasive biopsies or motivation-dependent functional tests to assess outcome. Quantitative muscle magnetic resonance imaging (MRI) could offer a valuable alternative and permit inclusion of non-ambulant DMD subjects. The aims of our study were to explore the responsiveness of upper-limb MRI muscle-fat measurement as a non-invasive objective endpoint for clinical trials in non-ambulant DMD, and to investigate the relationship of these MRI measures to those of muscle force and function. 15 non-ambulant DMD boys (mean age 13.3 y) and 10 age-gender matched healthy controls (mean age 14.6 y) were recruited. 3-Tesla MRI fat-water quantification was used to measure forearm muscle fat transformation in non-ambulant DMD boys compared with healthy controls. DMD boys were assessed at 4 time-points over 12 months, using 3-point Dixon MRI to measure muscle fat-fraction (f.f.). Images from ten forearm muscles were segmented and mean f.f. and cross-sectional area recorded. DMD subjects also underwent comprehensive upper limb function and force evaluation. Overall mean baseline forearm f.f. was higher in DMD than in healthy controls (p<0.001). A progressive f.f. increase was observed in DMD over 12 months, reaching significance from 6 months (p<0.001, n = 7), accompanied by a significant loss in pinch strength at 6 months (p<0.001, n = 9) and a loss of upper limb function and grip force observed over 12 months (p<0.001, n = 8). These results support the use of MRI muscle f.f. as a biomarker to monitor disease progression in the upper limb in non-ambulant DMD, with sensitivity adequate to detect group-level change over time intervals practical for use in clinical trials. Clinical validity is supported by the association of the progressive fat transformation of muscle with loss of muscle force and function.
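
    Once the Dixon reconstruction has separated the water and fat images, the fat-fraction map and its per-muscle summary are each a one-line computation; the synthetic images and circular ROI below stand in for segmented muscles:

        import numpy as np

        rng = np.random.default_rng(6)
        # Assume water (W) and fat (F) magnitude images have already been separated
        # by the 3-point Dixon reconstruction; here they are synthetic stand-ins.
        W = np.clip(rng.normal(0.7, 0.1, (128, 128)), 0, None)
        F = np.clip(rng.normal(0.3, 0.1, (128, 128)), 0, None)

        ff = F / (W + F + 1e-9)  # per-voxel fat fraction, in [0, 1]

        # Mean f.f. inside a (here circular) ROI, as in per-muscle reporting.
        yy, xx = np.mgrid[:128, :128]
        roi = (yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2
        print("ROI mean fat fraction: %.1f%%" % (100 * ff[roi].mean()))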

  18. Comparison between radionuclide ventriculography and echocardiography for quantification of left ventricular systolic function in rats exposed to doxorubicin

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Luciano Fonseca Lemos de; Carvalho, Eduardo Elias Vieira de; Romano, Minna Moreira Dias; Maciel, Benedito Carlos; Simões, Marcus Vinicius, E-mail: msimoes@fmrp.usp.br [Universidade de Sao Paulo (USP), Ribeirao Preto, SP (Brazil). Centro de Cardiologia; O' Connell, João Lucas; Pulici, Érica Carolina Campos [Universidade Federal de Uberlândia (UFU), MG (Brazil)

    2017-01-15

    Background: Radionuclide ventriculography (RV) is a validated method to evaluate the left ventricular systolic function (LVSF) in small rodents. However, no prior study has compared the results of RV with those obtained by other imaging methods in this context. Objectives: To compare the results of LVSF obtained by RV and echocardiography (ECHO) in an experimental model of cardiotoxicity due to doxorubicin (DXR) in rats. Methods: Adult male Wistar rats serving as controls (n = 7) or receiving DXR (n = 22) in accumulated doses of 8, 12, and 16 mg/kg were evaluated with ECHO performed with a Sonos 5500 Philips equipment (12-MHz transducer) and RV obtained with an Orbiter-Siemens gamma camera using a pinhole collimator with a 4-mm aperture. Histopathological quantification of myocardial fibrosis was performed after euthanasia. Results: The control animals showed comparable results in the LVSF analysis obtained with ECHO and RV (83.5 ± 5% and 82.8 ± 2.8%, respectively, p > 0.05). The animals that received DXR presented lower LVSF values when compared with controls (p < 0.05); however, the LVSF values obtained by RV (60.6 ± 12.5%) were lower than those obtained by ECHO (71.8 ± 10.1%, p = 0.0004) in this group. An analysis of the correlation between the LVSF and myocardial fibrosis showed a moderate correlation when the LVSF was assessed by ECHO (r = -0.69, p = 0.0002) and a stronger correlation when it was assessed by RV (r = -0.79, p < 0.0001). On multiple regression analysis, only RV correlated independently with myocardial fibrosis. Conclusion: RV is an alternative method to assess the left ventricular function in small rodents in vivo. When compared with ECHO, RV showed a better correlation with the degree of myocardial injury in a model of DXR-induced cardiotoxicity. (author)

  19. Comparison between radionuclide ventriculography and echocardiography for quantification of left ventricular systolic function in rats exposed to doxorubicin

    International Nuclear Information System (INIS)

    Oliveira, Luciano Fonseca Lemos de; Carvalho, Eduardo Elias Vieira de; Romano, Minna Moreira Dias; Maciel, Benedito Carlos; Simões, Marcus Vinicius

    2017-01-01

    Background: Radionuclide ventriculography (RV) is a validated method to evaluate the left ventricular systolic function (LVSF) in small rodents. However, no prior study has compared the results of RV with those obtained by other imaging methods in this context. Objectives: To compare the results of LVSF obtained by RV and echocardiography (ECHO) in an experimental model of cardiotoxicity due to doxorubicin (DXR) in rats. Methods: Adult male Wistar rats serving as controls (n = 7) or receiving DXR (n = 22) in accumulated doses of 8, 12, and 16 mg/kg were evaluated with ECHO performed with a Sonos 5500 Philips equipment (12-MHz transducer) and RV obtained with an Orbiter-Siemens gamma camera using a pinhole collimator with a 4-mm aperture. Histopathological quantification of myocardial fibrosis was performed after euthanasia. Results: The control animals showed comparable results in the LVSF analysis obtained with ECHO and RV (83.5 ± 5% and 82.8 ± 2.8%, respectively, p > 0.05). The animals that received DXR presented lower LVSF values when compared with controls (p < 0.05); however, the LVSF values obtained by RV (60.6 ± 12.5%) were lower than those obtained by ECHO (71.8 ± 10.1%, p = 0.0004) in this group. An analysis of the correlation between the LVSF and myocardial fibrosis showed a moderate correlation when the LVSF was assessed by ECHO (r = -0.69, p = 0.0002) and a stronger correlation when it was assessed by RV (r = -0.79, p < 0.0001). On multiple regression analysis, only RV correlated independently with myocardial fibrosis. Conclusion: RV is an alternative method to assess the left ventricular function in small rodents in vivo. When compared with ECHO, RV showed a better correlation with the degree of myocardial injury in a model of DXR-induced cardiotoxicity. (author)

  20. Comparison between Radionuclide Ventriculography and Echocardiography for Quantification of Left Ventricular Systolic Function in Rats Exposed to Doxorubicin

    Directory of Open Access Journals (Sweden)

    Luciano Fonseca Lemos de Oliveira

    Full Text Available Abstract Background: Radionuclide ventriculography (RV) is a validated method to evaluate the left ventricular systolic function (LVSF) in small rodents. However, no prior study has compared the results of RV with those obtained by other imaging methods in this context. Objectives: To compare the results of LVSF obtained by RV and echocardiography (ECHO) in an experimental model of cardiotoxicity due to doxorubicin (DXR) in rats. Methods: Adult male Wistar rats serving as controls (n = 7) or receiving DXR (n = 22) in accumulated doses of 8, 12, and 16 mg/kg were evaluated with ECHO performed with a Sonos 5500 Philips equipment (12-MHz transducer) and RV obtained with an Orbiter-Siemens gamma camera using a pinhole collimator with a 4-mm aperture. Histopathological quantification of myocardial fibrosis was performed after euthanasia. Results: The control animals showed comparable results in the LVSF analysis obtained with ECHO and RV (83.5 ± 5% and 82.8 ± 2.8%, respectively, p > 0.05). The animals that received DXR presented lower LVSF values when compared with controls (p < 0.05); however, the LVSF values obtained by RV (60.6 ± 12.5%) were lower than those obtained by ECHO (71.8 ± 10.1%, p = 0.0004) in this group. An analysis of the correlation between the LVSF and myocardial fibrosis showed a moderate correlation when the LVSF was assessed by ECHO (r = -0.69, p = 0.0002) and a stronger correlation when it was assessed by RV (r = -0.79, p < 0.0001). On multiple regression analysis, only RV correlated independently with myocardial fibrosis. Conclusion: RV is an alternative method to assess the left ventricular function in small rodents in vivo. When compared with ECHO, RV showed a better correlation with the degree of myocardial injury in a model of DXR-induced cardiotoxicity.

  1. Aspects of input processing in the numerical control of electron beam machines

    International Nuclear Information System (INIS)

    Chowdhury, A.K.

    1981-01-01

    A high-performance Numerical Control has been developed for an Electron Beam Machine. The system is structured into 3 hierarchical levels: Input Processing, Realtime Processing (such as Geometry Interpolation) and the Interfaces to the Electron Beam Machine. The author considers the Input Processing. In conventional Numerical Controls the interface to the control is given by the control language as defined in DIN 66025. The state of the art in NC technology offers programming systems of differing competence, covering the spectrum from manual programming in the control language to highly sophisticated systems such as APT. This software interface has been used to define an Input Processor that, in cooperation with the host computer, meets the requirements of a sophisticated NC system but at the same time provides a modest stand-alone system with all the basic functions such as interactive program editing, program storage, program execution simultaneous with the development of another program, etc. Software aspects such as adapting DIN 66025 for Electron Beam Machining and the organisation and modularisation of the Input Processor software have been considered and solutions have been proposed. Hardware aspects considered are the interconnections of the Input Processor with the Host and the Realtime Processors. Because of economical and development-time considerations, available software and hardware has been liberally used and own development has been kept to a minimum. The proposed system is modular in software and hardware and therefore very flexible and open-ended to future expansion. (Auth.)

  2. Textual Enhancement of Input: Issues and Possibilities

    Science.gov (United States)

    Han, ZhaoHong; Park, Eun Sung; Combs, Charles

    2008-01-01

    The input enhancement hypothesis proposed by Sharwood Smith (1991, 1993) has stimulated considerable research over the last 15 years. This article reviews the research on textual enhancement of input (TE), an area where the majority of input enhancement studies have aggregated. Methodological idiosyncrasies are the norm of this body of research.…

  3. Comparison of DNA Quantification Methods for Next Generation Sequencing.

    Science.gov (United States)

    Robin, Jérôme D; Ludlow, Andrew T; LaRanger, Ryan; Wright, Woodring E; Shay, Jerry W

    2016-04-06

    Next Generation Sequencing (NGS) is a powerful tool that depends on loading a precise amount of DNA onto a flowcell. NGS strategies have expanded our ability to investigate genomic phenomena by referencing mutations in cancer and diseases through large-scale genotyping, developing methods to map rare chromatin interactions (4C, 5C and Hi-C) and identifying chromatin features associated with regulatory elements (ChIP-seq, Bis-Seq, ChiA-PET). While many methods are available for DNA library quantification, there is no unambiguous gold standard. Most techniques use PCR to amplify DNA libraries to obtain sufficient quantities for optical density measurement. However, increased PCR cycles can distort the library's heterogeneity and prevent the detection of rare variants. In this analysis, we compared new digital PCR technologies (droplet digital PCR: ddPCR, ddPCR-Tail) with standard methods for the titration of NGS libraries. The ddPCR-Tail approach is comparable to qPCR and fluorometry (QuBit) and allows sensitive quantification by analysis of barcode repartition after sequencing of multiplexed samples. This study provides a direct comparison between quantification methods throughout a complete sequencing experiment and provides the impetus to use ddPCR-based quantification for improvement of NGS quality.

  4. Preclinical evaluation and quantification of [18F]MK-9470 as a radioligand for PET imaging of the type 1 cannabinoid receptor in rat brain

    Energy Technology Data Exchange (ETDEWEB)

    Casteels, Cindy [K.U. Leuven, University Hospital Leuven, Division of Nuclear Medicine, Leuven (Belgium); K.U. Leuven, MoSAIC, Molecular Small Animal Imaging Center, Leuven (Belgium); University Hospital Gasthuisberg, Division of Nuclear Medicine, Leuven (Belgium); Koole, Michel; Laere, Koen van [K.U. Leuven, University Hospital Leuven, Division of Nuclear Medicine, Leuven (Belgium); K.U. Leuven, MoSAIC, Molecular Small Animal Imaging Center, Leuven (Belgium); Celen, Sofie; Bormans, Guy [K.U. Leuven, MoSAIC, Molecular Small Animal Imaging Center, Leuven (Belgium); K.U. Leuven, Laboratory for Radiopharmacy, Leuven (Belgium)

    2012-09-15

    [18F]MK-9470 is an inverse agonist for the type 1 cannabinoid (CB1) receptor allowing its use in PET imaging. We characterized the kinetics of [18F]MK-9470 and evaluated its ability to quantify CB1 receptor availability in the rat brain. Dynamic small-animal PET scans with [18F]MK-9470 were performed in Wistar rats on a FOCUS-220 system for up to 10 h. Both plasma and perfused brain homogenates were analysed using HPLC to quantify radiometabolites. Displacement and blocking experiments were done using cold MK-9470 and another inverse agonist, SR141716A. The distribution volume (VT) of [18F]MK-9470 was used as a quantitative measure and compared to the use of brain uptake, expressed as SUV, a simplified method of quantification. The percentage of intact [18F]MK-9470 in arterial plasma samples was 80 ± 23% at 10 min, 38 ± 30% at 40 min and 13 ± 14% at 210 min. A polar radiometabolite fraction was detected in plasma and brain tissue. The brain radiometabolite concentration was uniform across the whole brain. Displacement and pretreatment studies showed that 56% of the tracer binding was specific and reversible. VT values obtained with a one-tissue compartment model plus constrained radiometabolite input had good identifiability (≤10%). Ignoring the radiometabolite contribution using a one-tissue compartment model alone, i.e. without constrained radiometabolite input, overestimated the [18F]MK-9470 VT, but was correlated. A correlation between [18F]MK-9470 VT and SUV in the brain was also found (R² = 0.26-0.33; p ≤ 0.03). While the presence of a brain-penetrating radiometabolite fraction complicates the quantification of [18F]MK-9470 in the rat brain, its tracer kinetics can be modelled using a one-tissue compartment model with and without constrained radiometabolite input. (orig.)
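
    A minimal numerical sketch of the one-tissue compartment model (1TCM) underlying the VT quantification above may help; the rate constants and the bi-exponential plasma curve below are invented for illustration, and the study's constrained radiometabolite input is omitted.

        # 1TCM sketch: dCt/dt = K1*Cp(t) - k2*Ct, distribution volume VT = K1/k2.
        # All numbers are illustrative, not taken from the record.
        import numpy as np
        from scipy.integrate import odeint

        K1, k2 = 0.3, 0.1                  # hypothetical rate constants
        t = np.linspace(0.0, 60.0, 601)    # minutes

        def plasma_input(tau):
            # toy arterial plasma curve: fast bolus plus slow washout
            return 6.0 * np.exp(-1.5 * tau) + 0.5 * np.exp(-0.05 * tau)

        def dCt_dt(Ct, tau):
            return K1 * plasma_input(tau) - k2 * Ct

        Ct = odeint(dCt_dt, 0.0, t).ravel()
        print("VT = K1/k2 =", K1 / k2)     # 3.0 in this toy setting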

  5. SPECT quantification of regional radionuclide distributions

    International Nuclear Information System (INIS)

    Jaszczak, R.J.; Greer, K.L.; Coleman, R.E.

    1986-01-01

    SPECT quantification of regional radionuclide activities within the human body is affected by several physical and instrumental factors including attenuation of photons within the patient, Compton scattered events, the system's finite spatial resolution and object size, finite number of detected events, partial volume effects, the radiopharmaceutical biokinetics, and patient and/or organ motion. Furthermore, other instrumentation factors such as calibration of the center-of-rotation, sampling, and detector nonuniformities will affect the SPECT measurement process. These factors are described, together with examples of compensation methods that are currently available for improving SPECT quantification. SPECT offers the potential to improve in vivo estimates of absorbed dose, provided the acquisition, reconstruction, and compensation procedures are adequately implemented and utilized. 53 references, 2 figures

  6. FlaME: Flash Molecular Editor - a 2D structure input tool for the web

    Directory of Open Access Journals (Sweden)

    Dallakian Pavel

    2011-02-01

    Full Text Available Abstract Background: So far, there have been no Flash-based web tools available for chemical structure input. The authors herein present a feasibility study, aiming at the development of a compact and easy-to-use 2D structure editor, using Adobe's Flash technology and its programming language, ActionScript. As a reference model application from the Java world, we selected the Java Molecular Editor (JME). In this feasibility study, we made an attempt to realize a subset of JME's functionality in the Flash Molecular Editor (FlaME) utility. These basic capabilities are: structure input, editing and depiction of single molecules, data import and export in molfile format. Implementation: The result of molecular diagram sketching in FlaME is accessible in V2000 molfile format. By integrating the molecular editor into a web page, its communication with the HTML elements on this page is established using the two JavaScript functions, getMol() and setMol(). In addition, structures can be copied to the system clipboard. Conclusion: A first attempt was made to create a compact single-file application for 2D molecular structure input/editing on the web, based on Flash technology. With the application examples presented in this article, it could be demonstrated that the Flash methods are principally well-suited to provide the requisite communication between the Flash object (application) and the HTML elements on a web page, using JavaScript functions.

  7. Automatic detection of arterial input function in dynamic contrast enhanced MRI based on affinity propagation clustering.

    Science.gov (United States)

    Shi, Lin; Wang, Defeng; Liu, Wen; Fang, Kui; Wang, Yi-Xiang J; Huang, Wenhua; King, Ann D; Heng, Pheng Ann; Ahuja, Anil T

    2014-05-01

    To automatically and robustly detect the arterial input function (AIF) with high detection accuracy and low computational cost in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). In this study, we developed an automatic AIF detection method using an accelerated version (Fast-AP) of affinity propagation (AP) clustering. The validity of this Fast-AP-based method was proved on two DCE-MRI datasets, i.e., rat kidney and human head and neck. The detailed AIF detection performance of this proposed method was assessed in comparison with other clustering-based methods, namely original AP and K-means, as well as the manual AIF detection method. Both the automatic AP- and Fast-AP-based methods achieved satisfactory AIF detection accuracy, but the computational cost of Fast-AP could be reduced by 64.37-92.10% on rat dataset and 73.18-90.18% on human dataset compared with the cost of AP. The K-means yielded the lowest computational cost, but resulted in the lowest AIF detection accuracy. The experimental results demonstrated that both the AP- and Fast-AP-based methods were insensitive to the initialization of cluster centers, and had superior robustness compared with K-means method. The Fast-AP-based method enables automatic AIF detection with high accuracy and efficiency. Copyright © 2013 Wiley Periodicals, Inc.
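
    A minimal sketch of the clustering idea, assuming scikit-learn and synthetic time-activity curves; the Fast-AP acceleration and the paper's exact feature choices are not reproduced here.

        # Cluster synthetic time-activity curves with affinity propagation and
        # pick the cluster that looks most arterial (tall, early peak).
        import numpy as np
        from sklearn.cluster import AffinityPropagation

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 120.0, 60)                         # seconds
        arterial = 5.0 * (t / 10.0) * np.exp(1.0 - t / 10.0)    # sharp early peak
        tissue = 1.5 * (t / 40.0) * np.exp(1.0 - t / 40.0)      # slow, low peak
        curves = np.vstack([arterial + 0.1 * rng.standard_normal((30, t.size)),
                            tissue + 0.1 * rng.standard_normal((300, t.size))])

        labels = AffinityPropagation(random_state=0).fit_predict(curves)

        def arterialness(k):
            # peak height over time-to-peak of the cluster's mean curve
            mean_curve = curves[labels == k].mean(axis=0)
            return mean_curve.max() / (t[mean_curve.argmax()] + 1e-6)

        aif = curves[labels == max(set(labels), key=arterialness)].mean(axis=0)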

  8. Direct quantification of creatinine in human urine by using isotope dilution extractive electrospray ionization tandem mass spectrometry

    International Nuclear Information System (INIS)

    Li Xue; Fang Xiaowei; Yu Zhiqiang; Sheng Guoying; Wu Minghong; Fu Jiamo; Chen Huanwen

    2012-01-01

    Highlights: ► High throughput analysis of urinary creatinine is achieved by using ID-EESI–MS/MS. ► Urine sample is directly analyzed and no sample pre-treatment is required. ► Accurate quantification is accomplished with isotope dilution technique. - Abstract: Urinary creatinine (CRE) is an important biomarker of renal function. Fast and accurate quantification of CRE in human urine is required by clinical research. By using isotope dilution extractive electrospray ionization tandem mass spectrometry (EESI–MS/MS), a high-throughput method for direct and accurate quantification of urinary CRE was developed in this study. Under optimized conditions, the method detection limit was lower than 50 μg L⁻¹. Over the concentration range investigated (0.05–10 mg L⁻¹), the calibration curve was obtained with satisfactory linearity (R² = 0.9861), and the relative standard deviation (RSD) values for CRE and isotope-labeled CRE (CRE-d3) were 7.1–11.8% (n = 6) and 4.1–11.3% (n = 6), respectively. The isotope dilution EESI–MS/MS method was validated by analyzing six human urine samples, and the results were comparable with the conventional spectrophotometric method (based on the Jaffe reaction). Recoveries for individual urine samples were 85–111% and less than 0.3 min was taken for each measurement, indicating that the present isotope dilution EESI–MS/MS method is a promising strategy for the fast and accurate quantification of urinary CRE in clinical laboratories.
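
    The quantification step reduces to an internal-standard calibration: the analyte-to-labelled-standard intensity ratio is regressed on standard concentration, and an unknown is read off the fitted line. A sketch with invented numbers:

        import numpy as np

        conc = np.array([0.05, 0.5, 2.0, 5.0, 10.0])        # mg/L CRE standards
        ratio = np.array([0.011, 0.105, 0.41, 1.02, 2.05])  # I(CRE)/I(CRE-d3)
        slope, intercept = np.polyfit(conc, ratio, 1)

        unknown_ratio = 0.62                                # measured in a sample
        print((unknown_ratio - intercept) / slope, "mg/L")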

  9. Effect of input compression and input frequency response on music perception in cochlear implant users.

    Science.gov (United States)

    Halliwell, Emily R; Jones, Linor L; Fraser, Matthew; Lockley, Morag; Hill-Feltham, Penelope; McKay, Colette M

    2015-06-01

    A study was conducted to determine whether modifications to input compression and input frequency response characteristics can improve music-listening satisfaction in cochlear implant users. Experiment 1 compared three pre-processed versions of music and speech stimuli in a laboratory setting: original, compressed, and flattened frequency response. Music excerpts comprised three music genres (classical, country, and jazz), and a running speech excerpt was compared. Experiment 2 implemented a flattened input frequency response in the speech processor program. In a take-home trial, participants compared unaltered and flattened frequency responses. Ten and twelve adult Nucleus Freedom cochlear implant users participated in Experiments 1 and 2, respectively. Experiment 1 revealed a significant preference for music stimuli with a flattened frequency response compared to both original and compressed stimuli, whereas there was a significant preference for the original (rising) frequency response for speech stimuli. Experiment 2 revealed no significant mean preference for the flattened frequency response, with 9 of 11 subjects preferring the rising frequency response. Input compression did not alter music enjoyment. Comparison of the two experiments indicated that individual frequency response preferences may depend on the genre or familiarity, and particularly whether the music contained lyrics.

  10. Spacecraft reorientation control in presence of attitude constraint considering input saturation and stochastic disturbance

    Science.gov (United States)

    Cheng, Yu; Ye, Dong; Sun, Zhaowei; Zhang, Shijie

    2018-03-01

    This paper proposes a novel feedback control law for spacecraft to deal with attitude constraint, input saturation, and stochastic disturbance during the attitude reorientation maneuver process. Applying the parameter selection method to improving the existence conditions for the repulsive potential function, the universality of the potential-function-based algorithm is enhanced. Moreover, utilizing the auxiliary system driven by the difference between saturated torque and command torque, a backstepping control law, which satisfies the input saturation constraint and guarantees the spacecraft stability, is presented. Unlike some methods that passively rely on the inherent characteristic of the existing controller to stabilize the adverse effects of external stochastic disturbance, this paper puts forward a nonlinear disturbance observer to compensate the disturbance in real-time, which achieves a better performance of robustness. The simulation results validate the effectiveness, reliability, and universality of the proposed control law.

  11. Quantification of Airfoil Geometry-Induced Aerodynamic Uncertainties---Comparison of Approaches

    KAUST Repository

    Liu, Dishi

    2015-04-14

    Uncertainty quantification in aerodynamic simulations calls for efficient numerical methods to reduce computational cost, especially for uncertainties caused by random geometry variations which involve a large number of variables. This paper compares five methods, including quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature and by point collocation, radial basis function and a gradient-enhanced version of kriging, and examines their efficiency in estimating statistics of aerodynamic performance upon random perturbation to the airfoil geometry which is parameterized by independent Gaussian variables. The results show that gradient-enhanced surrogate methods achieve better accuracy than direct integration methods with the same computational cost.

  12. Quantification of Airfoil Geometry-Induced Aerodynamic Uncertainties---Comparison of Approaches

    KAUST Repository

    Liu, Dishi; Litvinenko, Alexander; Schillings, Claudia; Schulz, Volker

    2015-01-01

    Uncertainty quantification in aerodynamic simulations calls for efficient numerical methods to reduce computational cost, especially for uncertainties caused by random geometry variations which involve a large number of variables. This paper compares five methods, including quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature and by point collocation, radial basis function and a gradient-enhanced version of kriging, and examines their efficiency in estimating statistics of aerodynamic performance upon random perturbation to the airfoil geometry which is parameterized by independent Gaussian variables. The results show that gradient-enhanced surrogate methods achieve better accuracy than direct integration methods with the same computational cost.
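
    To make the sampling contrast concrete, here is a minimal quasi-Monte Carlo versus plain Monte Carlo comparison on a toy function of independent Gaussian variables, assuming SciPy's qmc module; the integrand is a stand-in, not an aerodynamic model.

        import numpy as np
        from scipy.stats import qmc, norm

        d, n = 8, 1024
        f = lambda x: np.exp(0.1 * x.sum(axis=1))   # toy response surface

        mc = f(np.random.default_rng(1).standard_normal((n, d))).mean()
        u = qmc.Sobol(d=d, scramble=True, seed=1).random(n)          # [0,1)^d points
        qmc_est = f(norm.ppf(np.clip(u, 1e-12, 1 - 1e-12))).mean()   # map to Gaussians
        print(mc, qmc_est)   # exact mean is exp(0.5 * d * 0.1**2) ~ 1.0408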

  13. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media.

    Science.gov (United States)

    Crevillén-García, D; Power, H

    2017-08-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.

  14. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media

    Science.gov (United States)

    Crevillén-García, D.; Power, H.

    2017-08-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.
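
    A bare-bones sketch of the multilevel telescoping estimator E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}] on a toy travel-time integrand; the level hierarchy, sample counts and two-term random field below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        def travel_time(xi, level):
            # toy quantity of interest: integral of 1/K(x) on a 2**(level+2) grid
            x = np.linspace(0.0, 1.0, 2 ** (level + 2))
            k = np.exp(xi[0] * np.sin(np.pi * x) + xi[1] * np.cos(np.pi * x))
            return np.trapz(1.0 / k, x)

        samples = [4000, 1000, 250, 60]   # many cheap coarse, few costly fine samples
        est = 0.0
        for level, m in enumerate(samples):
            xs = rng.standard_normal((m, 2))
            if level == 0:
                est += np.mean([travel_time(xi, 0) for xi in xs])
            else:
                # same random input on both levels, so the correction has low variance
                est += np.mean([travel_time(xi, level)
                                - travel_time(xi, level - 1) for xi in xs])
        print("MLMC estimate of the mean travel time:", est)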

  15. Quantification of susceptibility change at high-concentrated SPIO-labeled target by characteristic phase gradient recognition.

    Science.gov (United States)

    Zhu, Haitao; Nie, Binbin; Liu, Hua; Guo, Hua; Demachi, Kazuyuki; Sekino, Masaki; Shan, Baoci

    2016-05-01

    Phase map cross-correlation detection and quantification may produce highlighted signal at superparamagnetic iron oxide nanoparticles, and distinguish them from other hypointensities. The method may quantify susceptibility change by performing least squares analysis between a theoretically generated magnetic field template and an experimentally scanned phase image. Because characteristic phase recognition requires the removal of phase wrap and phase background, the additional steps of phase unwrapping and filtering may increase the chance of computing error and enlarge the inconsistency among algorithms. To solve this problem, a phase gradient cross-correlation and quantification method is developed that recognizes the characteristic phase gradient pattern instead of the phase image, because the phase gradient operation inherently includes unwrapping and filtering functions. However, few studies have mentioned the detectable limit of currently used phase gradient calculation algorithms. The limit may lead to an underestimation of large magnetic susceptibility change caused by high-concentrated iron accumulation. In this study, mathematical derivation points out the value of the maximum detectable phase gradient calculated by the differential chain algorithm in both the spatial and Fourier domains. To break through the limit, a modified quantification method is proposed that uses unwrapped forward differentiation for phase gradient generation. The method enlarges the detectable range of phase gradient measurement and avoids the underestimation of magnetic susceptibility. Simulation and phantom experiments were used to quantitatively compare the different methods. In vivo application performs MRI scanning on nude mice implanted with iron-labeled human cancer cells. Results validate the limit of detectable phase gradient and the consequent susceptibility underestimation. Results also demonstrate the advantage of unwrapped forward differentiation compared with differential chain algorithms for susceptibility
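
    The detectability limit can be checked numerically in one dimension. The sketch below shows only the aliasing of the differential-chain estimator; the unwrapped forward differentiation proposed above relies on spatial context that a 1-D toy necessarily omits.

        import numpy as np

        true_slope = 4.0                             # rad/voxel, above the pi limit
        z = np.exp(1j * true_slope * np.arange(32))  # complex signal, linear phase
        chain = np.angle(z[1:] * np.conj(z[:-1]))    # differential-chain gradient,
                                                     # confined to (-pi, pi]
        print(chain[0])                              # 4.0 - 2*pi ~ -2.283: aliased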

  16. Rapid and Easy Protocol for Quantification of Next-Generation Sequencing Libraries.

    Science.gov (United States)

    Hawkins, Steve F C; Guest, Paul C

    2018-01-01

    The emergence of next-generation sequencing (NGS) over the last 10 years has increased the efficiency of DNA sequencing in terms of speed, ease, and price. However, the exact quantification of a NGS library is crucial in order to obtain good data on sequencing platforms developed by the current market leader Illumina. Different approaches for DNA quantification are available currently and the most commonly used are based on analysis of the physical properties of the DNA through spectrophotometric or fluorometric methods. Although these methods are technically simple, they do not allow exact quantification as can be achieved using a real-time quantitative PCR (qPCR) approach. A qPCR protocol for DNA quantification with applications in NGS library preparation studies is presented here. This can be applied in various fields of study such as medical disorders resulting from nutritional programming disturbances.
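
    The arithmetic behind such a qPCR quantification is a standard curve: Cq regresses linearly on log10(concentration), the slope gives the amplification efficiency, and an unknown's Cq is converted back to a concentration. A sketch with invented values:

        import numpy as np

        conc = np.array([10.0, 1.0, 0.1, 0.01])   # pM library standards
        cq = np.array([12.1, 15.5, 18.8, 22.2])   # measured quantification cycles
        slope, intercept = np.polyfit(np.log10(conc), cq, 1)

        efficiency = 10.0 ** (-1.0 / slope) - 1.0  # ~1.0 means 100% efficient
        unknown_cq = 17.0
        unknown = 10.0 ** ((unknown_cq - intercept) / slope)
        print(f"efficiency={efficiency:.2f}, unknown library ~ {unknown:.3f} pM")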

  17. Specificity and affinity quantification of protein-protein interactions.

    Science.gov (United States)

    Yan, Zhiqiang; Guo, Liyong; Hu, Liang; Wang, Jin

    2013-05-01

    Most biological processes are mediated by protein-protein interactions. Determination of protein-protein structures and insight into their interactions are vital to understand the mechanisms of protein functions. Currently, compared with isolated protein structures, only a small fraction of protein-protein structures have been experimentally solved. Therefore, computational docking methods play an increasing role in predicting the structures and interactions of protein-protein complexes. The scoring function of protein-protein interactions is the key factor responsible for the accuracy of computational docking. Previous scoring functions were mostly developed by optimizing the binding affinity, which determines the stability of the protein-protein complex, but they often lack consideration of specificity, which determines the discrimination of the native protein-protein complex against competitive ones. We developed a scoring function (named SPA-PP, for specificity and affinity of protein-protein interactions) by incorporating both specificity and affinity into the optimization strategy. The testing results and comparisons with other scoring functions show that SPA-PP performs remarkably well on both predictions of binding pose and binding affinity. Thus, SPA-PP is a promising quantification of protein-protein interactions, which can be implemented into protein docking tools and applied to predictions of protein-protein structure and affinity. The algorithm is implemented in C language, and the code can be downloaded from http://dl.dropbox.com/u/1865642/Optimization.cpp.

  18. Relative sensitivities of DCE-MRI pharmacokinetic parameters to arterial input function (AIF) scaling.

    Science.gov (United States)

    Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V; Rooney, William D; Garzotto, Mark G; Springer, Charles S

    2016-08-01

    Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data that extracts quantitative contrast reagent/tissue-specific model parameters is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIFs measured from individual arteries sometimes require amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol and the perfect scaling factor for reconstructing the ground truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water exchange sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (K(trans)) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both AIF scaling insensitive. This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging
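
    The scaling behaviour can be reproduced numerically with the FXL Tofts model, Ct(t) = K^trans ∫ Cp(u) e^{-kep (t-u)} du. In the sketch below (synthetic AIF and parameters, not the study's data), doubling the AIF amplitude is absorbed by the fitted K^trans and ve while kep is untouched.

        import numpy as np
        from scipy.optimize import curve_fit

        t = np.linspace(0.0, 6.0, 121)                # minutes
        dt = t[1] - t[0]
        cp = 5.0 * (t / 0.5) * np.exp(1.0 - t / 0.5)  # toy AIF with an early peak

        def tofts(tt, ktrans, kep, aif):
            # discrete convolution of the AIF with the exponential kernel
            return ktrans * np.convolve(aif, np.exp(-kep * tt))[: tt.size] * dt

        ct = tofts(t, 0.25, 0.60, cp)                 # synthetic tissue curve

        for s in (1.0, 2.0):
            model = lambda tt, ktrans, kep: tofts(tt, ktrans, kep, s * cp)
            (ktrans, kep), _ = curve_fit(model, t, ct, p0=(0.1, 0.5))
            print(f"AIF x{s}: Ktrans={ktrans:.3f}, kep={kep:.3f}, "
                  f"ve={ktrans / kep:.3f}")
        # Ktrans and ve halve when the AIF doubles; kep is scale-insensitive.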

  19. The Effects of Type and Quantity of Input on Iranian EFL Learners’ Oral Language Proficiency

    Directory of Open Access Journals (Sweden)

    Zahra Hassanzadeh

    2014-05-01

    Full Text Available In the written texts on foreign language learning, a group of studies has stressed the function of learning context and learning chances for learners' language input. The present study had two main goals: on the one hand, the different types of input to which Iranian grade-four high school EFL learners are exposed were examined; on the other hand, the possible relationship between the types and quantity of input and Iranian EFL learners' oral proficiency was investigated. It was supposed that EFL learners who have access to more input will show better oral proficiency than those who do not. Instruments used in the present study for the purpose of data collection included a PET test, a researcher-made questionnaire, an oral language proficiency test and a face-to-face interview. Data were gathered from 50 Iranian female grade-four high school foreign language learners who were selected from among 120 students whose scores on the PET test were +1SD from the mean score. The results of the Spearman rank-order correlation test for the types of input and oral language proficiency scores showed that the participants' oral proficiency score significantly correlated with the intended four sources of input, including spoken (rho = 0.416, sig = 0.003), written (rho = 0.364, sig = 0.009), aural (rho = 0.343, sig = 0.015) and visual or audio-visual types of input (rho = 0.47, sig = 0.00). The findings of the Spearman rank-order correlation test for the quantity of input and oral language proficiency scores also showed a significant relationship between quantity of input and oral language proficiency (rho = 0.543, sig = 0.00). The findings showed that EFL learners' oral proficiency is significantly correlated with efficient and effective input. The findings may also suggest answers to the question of why most Iranian English learners fail to speak English fluently, which might be due to a lack of effective input. This may emphasize the importance of the types and quantity of

  20. Two-Phase Microfluidic Systems for High Throughput Quantification of Agglutination Assays

    KAUST Repository

    Castro, David

    2018-01-01

    assay, with a minimum detection limit of 50 ng/mL using optical image analysis. We compare optical image analysis and light scattering as quantification methods, and demonstrate the first light scattering quantification of agglutination assays in a two

  1. Linear and quadratic models of point process systems: contributions of patterned input to output.

    Science.gov (United States)

    Lindsay, K A; Rosenberg, J R

    2012-08-01

    In the 1880s Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940s, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970s, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point-process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Energy and the non-energy inputs substitution: evidence for Italy, Portugal and Spain

    International Nuclear Information System (INIS)

    Medina, J.; Vega-Cervera, J.A.

    2001-01-01

    The factor demand is modeled for Italy, Portugal and Spain. We estimated a translog cost function with capital, labor and energy over the 1980-1996 period. Our objective regarding energy as input was two-fold: on the one hand, to verify its incorporation as a productive factor, and, on the other, to observe its degree of substitutability with the other classical factors, given the high level of energy dependency of these countries. Using a separability test and confidence intervals for the Allen and price elasticities, our estimates confirmed both the nonseparability of the energy input and the existence of consistent substitution between energy and labor only for Italy. (author)
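
    For reference, the textbook three-input translog cost specification and the Allen elasticities derived from its coefficients are as follows (generic notation; the authors' exact parameterisation may differ):

        \ln C = \alpha_0 + \sum_i \alpha_i \ln p_i
              + \tfrac{1}{2}\sum_i \sum_j \gamma_{ij} \ln p_i \ln p_j
              + \alpha_Y \ln Y + \sum_i \gamma_{iY} \ln p_i \ln Y,
        \qquad i, j \in \{K, L, E\},

        \sigma_{ij} = \frac{\gamma_{ij} + s_i s_j}{s_i s_j} \quad (i \neq j),
        \qquad s_i = \frac{\partial \ln C}{\partial \ln p_i},

    where the s_i are the fitted cost shares.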

  3. Fully probabilistic control for stochastic nonlinear control systems with input dependent noise.

    Science.gov (United States)

    Herzallah, Randa

    2015-03-01

    Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper, a generalised probabilistic controller design is presented for the minimisation of the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed-loop control system and an ideal joint pdf, emphasising how the uncertainty can be systematically incorporated in the absence of reliable system models. To achieve this objective, all probabilistic models of the system are estimated from process data using mixture density networks (MDNs), where all the parameters of the estimated pdfs are taken to be state and control input dependent. Based on this dependency of the density parameters on the input values, explicit formulations for the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm and encouraging results are obtained. Copyright © 2014 Elsevier Ltd. All rights reserved.
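
    Concretely, the design objective described above is the divergence (generic notation):

        \mathcal{D}_{\mathrm{KL}}\left(f \,\middle\|\, f^{I}\right)
          = \int f(F)\,\ln\frac{f(F)}{f^{I}(F)}\,\mathrm{d}F,

    where f is the actual joint pdf of the closed-loop system and f^I the ideal one.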

  4. Two- and three-input TALE-based AND logic computation in embryonic stem cells.

    Science.gov (United States)

    Lienert, Florian; Torella, Joseph P; Chen, Jan-Hung; Norsworthy, Michael; Richardson, Ryan R; Silver, Pamela A

    2013-11-01

    Biological computing circuits can enhance our ability to control cellular functions and have potential applications in tissue engineering and medical treatments. Transcriptional activator-like effectors (TALEs) represent attractive components of synthetic gene regulatory circuits, as they can be designed de novo to target a given DNA sequence. We here demonstrate that TALEs can perform Boolean logic computation in mammalian cells. Using a split-intein protein-splicing strategy, we show that a functional TALE can be reconstituted from two inactive parts, thus generating two-input AND logic computation. We further demonstrate three-piece intein splicing in mammalian cells and use it to perform three-input AND computation. Using methods for random as well as targeted insertion of these relatively large genetic circuits, we show that TALE-based logic circuits are functional when integrated into the genome of mouse embryonic stem cells. Comparing construct variants in the same genomic context, we modulated the strength of the TALE-responsive promoter to improve the output of these circuits. Our work establishes split TALEs as a tool for building logic computation with the potential of controlling expression of endogenous genes or transgenes in response to a combination of cellular signals.

  5. JAKEF, Gradient or Jacobian Function from Objective Function or Vector Function

    International Nuclear Information System (INIS)

    Hillstrom, K.E.

    1988-01-01

    1 - Description of program or function: JAKEF is a language processor that accepts as input a single- or double-precision ANSI standard 1977 FORTRAN subroutine defining an objective function f(x), or a vector function F(x), and produces as output a single- or double- precision ANSI standard 1977 FORTRAN subroutine defining the gradient of f(x), or the Jacobian of F(x). 2 - Method of solution: JAKEF is a four-pass compiler consisting of a lexical preprocessor, a parser, a tree-building and flow analysis pass, and a differentiator and output construction pass. The lexical preprocessor reworks the input FORTRAN program to give it a recognizable lexical structure. The parser transforms the pre-processed input into a string of tokens in a post-fix representation of the program tree. The tree-building and flow analysis pass constructs a tree out of the post-fix token string. The differentiator identifies relevant assignment statements; then, if necessary, it analyzes them into component statements governed by a single differentiation rule and augments each of these statements with a call to a member of the run-time support package which implements the differentiation rule. After completing the construction of the main body of the routine, JAKEF inserts calls to support package routines that complete the differentiation. This results in a modified program tree in a form compatible with FORTRAN rules. 3 - Restrictions on the complexity of the problem: Statement functions and EQUIVALENCE statements that involve the independent variables are not handled correctly. Variables, constants, or functions of type COMPLEX are not recognized. Character sub-string expressions and alternate returns are not permitted
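
    JAKEF transforms FORTRAN source and is not reproduced here; as a language-neutral illustration of the per-statement differentiation rules such a tool applies, here is a minimal forward-mode dual-number sketch:

        # Each arithmetic operation carries (value, derivative), mirroring how a
        # source transformer augments each assignment with a differentiation rule.
        class Dual:
            def __init__(self, val, der=0.0):
                self.val, self.der = val, der

            def _coerce(self, other):
                return other if isinstance(other, Dual) else Dual(other)

            def __add__(self, other):
                other = self._coerce(other)   # sum rule
                return Dual(self.val + other.val, self.der + other.der)
            __radd__ = __add__

            def __mul__(self, other):
                other = self._coerce(other)   # product rule
                return Dual(self.val * other.val,
                            self.der * other.val + self.val * other.der)
            __rmul__ = __mul__

        x = Dual(3.0, 1.0)            # seed dx/dx = 1
        f = x * x + 2 * x + 1         # f(x) = x**2 + 2x + 1
        print(f.val, f.der)           # 16.0 and f'(3) = 8.0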

  6. Molecular quantification of environmental DNA using microfluidics and digital PCR.

    Science.gov (United States)

    Hoshino, Tatsuhiko; Inagaki, Fumio

    2012-09-01

    Real-time PCR has been widely used to evaluate gene abundance in natural microbial habitats. However, PCR-inhibitory substances often reduce the efficiency of PCR, leading to the underestimation of target gene copy numbers. Digital PCR using microfluidics is a new approach that allows absolute quantification of DNA molecules. In this study, digital PCR was applied to environmental samples, and the effect of PCR inhibitors on DNA quantification was tested. In the control experiment using λ DNA and humic acids, underestimation of λ DNA at 1/4400 of the theoretical value was observed with 6.58 ng μL(-1) humic acids. In contrast, digital PCR provided accurate quantification data with a concentration of humic acids up to 9.34 ng μL(-1). The inhibitory effect of paddy field soil extract on quantification of the archaeal 16S rRNA gene was also tested. By diluting the DNA extract, quantified copy numbers from real-time PCR and digital PCR became similar, indicating that dilution was a useful way to remedy PCR inhibition. The dilution strategy was, however, not applicable to all natural environmental samples. For example, when marine subsurface sediment samples were tested the copy number of archaeal 16S rRNA genes was 1.04×10(3) copies/g-sediment by digital PCR, whereas real-time PCR only resulted in 4.64×10(2) copies/g-sediment, which was most likely due to an inhibitory effect. The data from this study demonstrated that inhibitory substances had little effect on DNA quantification using microfluidics and digital PCR, and showed the great advantages of digital PCR in accurate quantifications of DNA extracted from various microbial habitats. Copyright © 2012 Elsevier GmbH. All rights reserved.
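
    The absolute quantification rests on Poisson statistics of partition occupancy, which is why endpoint inhibition matters less than in real-time PCR. A sketch of the arithmetic, with an assumed chip-specific partition volume:

        import math

        n_total, n_neg = 20000, 14000     # partitions and negative partitions
        v_partition_nl = 0.85             # nL per partition (assumed value)

        lam = -math.log(n_neg / n_total)  # mean copies per partition (Poisson)
        copies_per_ul = lam / (v_partition_nl * 1e-3)
        print(f"{copies_per_ul:.0f} copies/uL")   # ~420 in this example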

  7. Synchronization of motor unit firings: an epiphenomenon of firing rate characteristics not common inputs.

    Science.gov (United States)

    Kline, Joshua C; De Luca, Carlo J

    2016-01-01

    Synchronous motor unit firing instances have been attributed to anatomical inputs shared by motoneurons. Yet, there is a lack of empirical evidence confirming the notion that common inputs elicit synchronization under voluntary conditions. We tested this notion by measuring synchronization between motor unit action potential trains (MUAPTs) as their firing rates progressed within a contraction from a relatively low force level to a higher one. On average, the degree of synchronization decreased as the force increased. The common input notion provides no empirically supported explanation for the observed synchronization behavior. Therefore, we investigated a more probable explanation for synchronization. Our data set of 17,546 paired MUAPTs revealed that the degree of synchronization varies as a function of two characteristics of the motor unit firing rate: the similarity and the slope as a function of force. Both are measures of the excitation of the motoneurons. As the force generated by the muscle increases, the firing rate slope decreases, and the synchronization correspondingly decreases. Different muscles have motor units with different firing rate characteristics and display different amounts of synchronization. Although this association is not proof of causality, it consistently explains our observations and strongly suggests further investigation. So viewed, synchronization is likely an epiphenomenon, subject to countless unknown neural interactions. As such, synchronous firing instances may not be the product of a specific design and may not serve a specific physiological purpose. Our explanation for synchronization has the advantage of being supported by empirical evidence, whereas the common input does not. Copyright © 2016 the American Physiological Society.

  8. Low cost high performance uncertainty quantification

    KAUST Repository

    Bekas, C.; Curioni, A.; Fedulova, I.

    2009-01-01

    Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques, which employ matrix factorizations, incur a cubic cost
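
    One standard way to avoid that cubic cost is a stochastic probing estimator of diag(A⁻¹) that needs only solves with A; the sketch below is generic and in the spirit of this record, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        n, probes = 200, 400
        A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
        A = A @ A.T + n * np.eye(n)               # SPD, covariance-like matrix

        num = np.zeros(n)
        den = np.zeros(n)
        for _ in range(probes):
            v = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe
            num += v * np.linalg.solve(A, v)      # in practice: iterative solves
            den += v * v
        diag_estimate = num / den                 # E[v * A^-1 v] = diag(A^-1)
        print(np.abs(diag_estimate - np.diag(np.linalg.inv(A))).max())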

  9. AVQS: Attack Route-Based Vulnerability Quantification Scheme for Smart Grid

    Directory of Open Access Journals (Sweden)

    Jongbin Ko

    2014-01-01

    Full Text Available A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. Based on the data, a smart grid system has a potential security threat in its network connectivity. To solve this problem, we develop and apply a novel scheme to measure the vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it can help prioritize the security problems. However, existing vulnerability quantification schemes are not suitable for smart grid because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that we need to consider network connectivity for more optimized vulnerability quantification.

  10. AVQS: attack route-based vulnerability quantification scheme for smart grid.

    Science.gov (United States)

    Ko, Jongbin; Lim, Hyunwoo; Lee, Seokjun; Shon, Taeshik

    2014-01-01

    A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. Based on the data, a smart grid system has a potential security threat in its network connectivity. To solve this problem, we develop and apply a novel scheme to measure the vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it can help prioritize the security problems. However, existing vulnerability quantification schemes are not suitable for smart grid because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that we need to consider network connectivity for more optimized vulnerability quantification.

  11. Comparison of machine learning and semi-quantification algorithms for (I123)FP-CIT classification: the beginning of the end for semi-quantification?

    Science.gov (United States)

    Taylor, Jonathan Christopher; Fenner, John Wesley

    2017-11-29

    Semi-quantification methods are well established in the clinic for assisted reporting of (I123) Ioflupane images. Arguably, these are limited diagnostic tools. Recent research has demonstrated the potential for improved classification performance offered by machine learning algorithms. A direct comparison between methods is required to establish whether a move towards widespread clinical adoption of machine learning algorithms is justified. This study compared three machine learning algorithms with a range of semi-quantification methods, using the Parkinson's Progression Markers Initiative (PPMI) research database and a locally derived clinical database for validation. Machine learning algorithms were based on support vector machine classifiers with three different sets of features: (1) voxel intensities; (2) principal components of image voxel intensities; (3) striatal binding ratios from the putamen and caudate. Semi-quantification methods were based on striatal binding ratios (SBRs) from both putamina, with and without consideration of the caudates. Normal limits for the SBRs were defined through four different methods: (1) minimum of age-matched controls; (2) mean minus 1/1.5/2 standard deviations from age-matched controls; (3) linear regression of normal patient data against age (minus 1/1.5/2 standard errors); (4) selection of the optimum operating point on the receiver operator characteristic curve from normal and abnormal training data. Each machine learning and semi-quantification technique was evaluated with stratified, nested 10-fold cross-validation, repeated 10 times. The mean accuracy of the semi-quantitative methods for classification of local data into Parkinsonian and non-Parkinsonian groups varied from 0.78 to 0.87, contrasting with 0.89 to 0.95 for classifying PPMI data into healthy controls and Parkinson's disease groups. The machine learning algorithms gave mean accuracies between 0.88 and 0.92, and between 0.95 and 0.97, for local and PPMI data respectively. Classification
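
    A skeleton of the third feature set and the validation scheme, assuming scikit-learn; the SBR features here are random placeholders rather than PPMI or clinical data, and the inner loop used for hyperparameter tuning is omitted.

        import numpy as np
        from sklearn.model_selection import StratifiedKFold, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal([2.8, 3.0], 0.4, (60, 2)),   # "normal" SBRs
                       rng.normal([1.4, 2.2], 0.4, (60, 2))])  # "Parkinsonian" SBRs
        y = np.repeat([0, 1], 60)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
        print(cross_val_score(clf, X, y, cv=cv).mean())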

  12. Inputs to the dorsal striatum of the mouse reflect the parallel circuit architecture of the forebrain.

    Science.gov (United States)

    Pan, Weixing X; Mao, Tianyi; Dudman, Joshua T

    2010-01-01

    The basal ganglia play a critical role in the regulation of voluntary action in vertebrates. Our understanding of the function of the basal ganglia relies heavily upon anatomical information, but continued progress will require an understanding of the specific functional roles played by diverse cell types and their connectivity. An increasing number of mouse lines allow extensive identification, characterization, and manipulation of specified cell types in the basal ganglia. Despite the promise of genetically modified mice for elucidating the functional roles of diverse cell types, there is relatively little anatomical data obtained directly in the mouse. Here we have characterized the retrograde labeling obtained from a series of tracer injections throughout the dorsal striatum of adult mice. We found systematic variations in input along both the medial-lateral and anterior-posterior neuraxes in close agreement with canonical features of basal ganglia anatomy in the rat. In addition to the canonical features we have provided experimental support for the importance of non-canonical inputs to the striatum from the raphe nuclei and the amygdala. To look for organization at a finer scale we have analyzed the correlation structure of labeling intensity across our entire dataset. Using this analysis we found substantial local heterogeneity within the large-scale order. From this analysis we conclude that individual striatal sites receive varied combinations of cortical and thalamic input from multiple functional areas, consistent with some earlier studies in the rat that have suggested the presence of a combinatorial map.

  13. Logarithmic-function generator

    Science.gov (United States)

    Caron, P. R.

    1975-01-01

    Solid-state logarithmic-function generator is compact and provides improved accuracy. Generator includes a stable multivibrator feeding into RC circuit. Resulting exponentially decaying voltage is compared with input signal. Generator output is proportional to time required for exponential voltage to decay from preset reference level to level of input signal.
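
    The logarithmic relationship follows directly from the RC decay (component symbols generic, not taken from the record):

        V(t) = V_0\, e^{-t/RC}, \qquad
        V(t^{*}) = V_{\mathrm{in}} \;\Rightarrow\;
        t^{*} = RC \,\ln\!\left(\frac{V_0}{V_{\mathrm{in}}}\right),

    so the measured decay time is proportional to the logarithm of the input level.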

  14. Rapid Quantification and Validation of Lipid Concentrations within Liposomes

    Directory of Open Access Journals (Sweden)

    Carla B. Roces

    2016-09-01

    Full Text Available Quantification of the lipid content in liposomal adjuvants for subunit vaccine formulation is of extreme importance, since this concentration impacts both efficacy and stability. In this paper, we outline a high performance liquid chromatography-evaporative light scattering detector (HPLC-ELSD) method that allows for the rapid and simultaneous quantification of lipid concentrations within liposomal systems prepared by three liposomal manufacturing techniques (lipid film hydration, high shear mixing, and microfluidics). The ELSD system was used to quantify four lipids: 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC), cholesterol, dimethyldioctadecylammonium (DDA) bromide, and ᴅ-(+)-trehalose 6,6′-dibehenate (TDB). The developed method offers rapidity, high sensitivity, direct linearity, and a good consistency on the responses (R² > 0.993) for the four lipids tested. The corresponding limit of detection (LOD) and limit of quantification (LOQ) were 0.11 and 0.36 mg/mL (DMPC), 0.02 and 0.80 mg/mL (cholesterol), 0.06 and 0.20 mg/mL (DDA), and 0.05 and 0.16 mg/mL (TDB), respectively. HPLC-ELSD was shown to be a rapid and effective method for the quantification of lipids within liposome formulations without the need for lipid extraction processes.
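
    For reference, the usual ICH-style criteria behind LOD/LOQ figures of this kind are as follows (an assumption: the record does not state which criterion was applied; σ is the residual standard deviation of the calibration and S its slope):

        \mathrm{LOD} = \frac{3.3\,\sigma}{S}, \qquad
        \mathrm{LOQ} = \frac{10\,\sigma}{S}.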

  15. Design and Implementation of Kana-Input Navigation System for Kids based on the Cyber Assistant

    Directory of Open Access Journals (Sweden)

    Hiroshi Matsuda

    2004-02-01

    Full Text Available In Japan, the opportunities for young children to use personal computers in elementary schools have increased. However, many domestic barriers confront young children (Kids), because they cannot read difficult Kanji characters and have not yet learnt the Roman alphabet. As a result, they cannot input text strings with the JIS Kana keyboard. In this research, we developed the Kana-Input NaVigation System for kids (KINVS), based on the Cyber Assistant System (CAS). CAS is a human-style software robot based on 3D-CG real-time animation and voice-synthesis technology. KINVS enables Hiragana/Katakana characters to be input by mouse operation only (without keyboard operation), and CAS supports the children by speaking, facial expression, body action and sound effects. KINVS displays a 3D stage like a classroom. In this room, a blackboard, interactive parts to input Kana characters, and CAS are placed. As some results of preliminary experiments showed, it is definitely unfit for Kids to double-click objects quickly or to move a scrollbar by mouse dragging. So, the mouse input methods of KINVS are designed to use only single clicks and wheel rotation. To input characters, Kids click or rotate the Interactive Parts. KINVS reports all information by voice speaking and Kana subtitles instead of Kanji text. Furthermore, to verify the functional features of KINVS, we measured how long Kids took to input long text using KINVS.

  16. Adaptive control of a quadrotor aerial vehicle with input constraints and uncertain parameters

    Science.gov (United States)

    Tran, Trong-Toan; Ge, Shuzhi Sam; He, Wei

    2018-05-01

    In this paper, we address the problem of adaptive bounded control for the trajectory tracking of a Quadrotor Aerial Vehicle (QAV) while simultaneously taking into account input saturations and uncertain parameters with known bounds. First, to deal with the underactuated property of the QAV model, we decouple and construct the QAV model as a cascaded structure which consists of two fully actuated subsystems. Second, to handle the input constraints and uncertain parameters, we use a combination of a smooth saturation function and a smooth projection operator in the control design. Third, to ensure the stability of the overall system of the QAV, we develop the technique for the cascaded system in the presence of both input constraints and uncertain parameters. Finally, the region of stability of the closed-loop system is constructed explicitly, and our design ensures the asymptotic convergence of the tracking errors to the origin. Simulation results are provided to illustrate the effectiveness of the proposed method.
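
    One common smooth saturation function of the kind referred to above is the tanh form (an illustrative assumption; the abstract does not specify the authors' choice):

        g(u) = u_{\max}\,\tanh\!\left(\frac{u}{u_{\max}}\right), \qquad
        |g(u)| < u_{\max}, \quad g(u) \approx u \ \text{for}\ |u| \ll u_{\max}.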

  17. Quantifying input uncertainty in an assemble-to-order system simulation with correlated input variables of mixed types

    NARCIS (Netherlands)

    Akçay, A.E.; Biller, B.

    2014-01-01

    We consider an assemble-to-order production system where the product demands and the time since the last customer arrival are not independent. The simulation of this system requires a multivariate input model that generates random input vectors with correlated discrete and continuous components. In
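
    A standard construction for correlated mixed discrete/continuous input vectors of this kind is the Gaussian copula (NORTA). A sketch with illustrative marginals follows; note that the copula correlation fed in is not exactly the target product-moment correlation, which full NORTA procedures adjust for.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        corr = np.array([[1.0, 0.6],
                         [0.6, 1.0]])
        z = rng.multivariate_normal(np.zeros(2), corr, size=10000)
        u = stats.norm.cdf(z)                        # correlated uniforms

        demand = stats.poisson.ppf(u[:, 0], mu=4)    # discrete marginal
        gap = stats.expon.ppf(u[:, 1], scale=2.0)    # continuous marginal
        print(np.corrcoef(demand, gap)[0, 1])        # induced correlation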

  18. Joint analysis of input and parametric uncertainties in watershed water quality modeling: A formal Bayesian approach

    Science.gov (United States)

    Han, Feng; Zheng, Yi

    2018-06-01

    Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead the modeling based management decisions, if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.

  19. Gating of Long-Term Potentiation by Nicotinic Acetylcholine Receptors at the Cerebellum Input Stage

    NARCIS (Netherlands)

    F. Prestori (Francesca); C. Bonardi (Claudia); L. Mapelli (Lisa); P. Lombardo (Paola); R. Goselink (Rianne); M.E. de Stefano (Maria Egle); D. Gandolfi (Daniela); J. Mapelli (Jonathan); D. Bertrand (Daniel); M. Schonewille (Martijn); C.I. de Zeeuw (Chris); E. D'Angelo (Egidio)

    2013-01-01

    The brain needs mechanisms able to correlate plastic changes with local circuit activity and internal functional states. At the cerebellum input stage, uncontrolled induction of long-term potentiation or depression (LTP or LTD) between mossy fibres and granule cells can saturate synaptic

  20. Influence of active dendritic currents on input-output processing in spinal motoneurons in vivo.

    Science.gov (United States)

    Lee, R H; Kuo, J J; Jiang, M C; Heckman, C J

    2003-01-01

    The extensive dendritic tree of the adult spinal motoneuron generates a powerful persistent inward current (PIC). We investigated how this dendritic PIC influenced conversion of synaptic input to rhythmic firing. A linearly increasing, predominantly excitatory synaptic input was generated in triceps ankle extensor motoneurons by slow stretch (duration: 2-10 s) of the Achilles tendon in the decerebrate cat preparation. The firing pattern evoked by stretch was measured by injecting a steady current to depolarize the cell to threshold for firing. The effective synaptic current (I(N), the net synaptic current reaching the soma of the cell) evoked by stretch was measured during voltage clamp. Hyperpolarized holding potentials were used to minimize the activation of the dendritic PIC and thus estimate stretch-evoked I(N) for a passive dendritic tree (I(N,PASS)). Depolarized holding potentials that approximated the average membrane potential during rhythmic firing allowed strong activation of the dendritic PIC and thus resulted in marked enhancement of the total stretch-evoked I(N) (I(N,TOT)). The net effect of the dendritic PIC on the generation of rhythmic firing was assessed by plotting stretch-evoked firing (strong PIC activation) versus stretch-evoked I(N,PASS) (minimal PIC activation). The gain of this input-output function for the neuron (I-O(N)) was found to be ~2.7 times as high as for the standard injected frequency current (F-I) function in low-input conductance neurons. However, about halfway through the stretch, firing rate tended to become constant, resulting in a sharp saturation in I-O(N) that was not present in F-I. In addition, the gain of I-O(N) decreased sharply with increasing input conductance, resulting in much lower stretch-evoked firing rates in high-input conductance cells. All three of these phenomena (high initial gain, saturation, and differences in low- and high-input conductance cells) were also readily apparent in the differences between