WorldWideScience

Sample records for input function quantification

  1. An open tool for input function estimation and quantification of dynamic PET FDG brain scans.

    Science.gov (United States)

    Bertrán, Martín; Martínez, Natalia; Carbajal, Guillermo; Fernández, Alicia; Gómez, Álvaro

    2016-08-01

Positron emission tomography (PET) analysis of clinical studies is mostly restricted to qualitative evaluation. Quantitative analysis of PET studies is highly desirable because it provides an objective measurement of the process of interest, which can be used to evaluate treatment response and/or compare patient data. However, quantitative analysis generally requires determination of the input function: the arterial blood or plasma activity, which indicates how much tracer is available for uptake in the brain. The purpose of our work was to share with the community an open software tool that can assist in the estimation of this input function and the derivation of a quantitative map from the dynamic PET study. Arterial blood sampling during the PET study is the gold-standard method for obtaining the input function, but it is uncomfortable and risky for the patient, so it is rarely used in routine studies. To overcome the lack of a direct input function, different alternatives have been devised and are available in the literature. These alternatives derive the input function from the PET image itself (image-derived input function) or from data gathered from previous similar studies (population-based input function). In this article, we present ongoing work that includes the development of a software tool that integrates several methods with novel strategies for the segmentation of blood pools and parameter estimation. The tool is available as an extension to the 3D Slicer software. Tests on phantoms were conducted in order to validate the implemented methods. We evaluated the segmentation algorithms over a range of acquisition conditions and vasculature sizes. Input function estimation algorithms were evaluated against the ground truth of the phantoms, as well as on their impact on the final quantification map. End-to-end use of the tool yields quantification maps with [Formula: see text] relative error in the estimated influx versus ground truth on phantoms.

  2. Hybrid image and blood sampling input function for quantification of small animal dynamic PET data

    International Nuclear Information System (INIS)

    Shoghi, Kooresh I.; Welch, Michael J.

    2007-01-01

We describe and validate a hybrid image and blood sampling (HIBS) method to derive the input function for quantification of microPET mouse data. The HIBS algorithm derives the peak of the input function from the image, which is corrected for recovery, while the tail is derived from 5 to 6 optimally placed blood sampling points. A Bezier interpolation algorithm is used to link the rightmost image peak data point to the leftmost blood sampling point. To assess the performance of HIBS, 4 mice underwent 60-min microPET imaging sessions following a 0.40-0.50-mCi bolus administration of 18F-FDG. In total, 21 blood samples (blood-sampled plasma time-activity curve, bsPTAC) were obtained throughout the imaging session to compare against the proposed HIBS method. MicroPET images were reconstructed using filtered back projection with a zoom of 2.75 on the heart. Volumetric regions of interest (ROIs) were composed by drawing circular ROIs 3 pixels in diameter on 3-4 transverse planes of the left ventricle. Performance was characterized by kinetic simulations in terms of bias in parameter estimates when bsPTAC and HIBS are used as input functions. The peak of the bsPTAC curve was distorted in comparison to the HIBS-derived curve due to temporal limitations and delays in blood sampling, which affected the rates of bidirectional exchange between plasma and tissue. The results highlight limitations in using bsPTAC. The HIBS method, however, yields consistent results and is thus a suitable substitute for bsPTAC.
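The bridging step described in this record can be sketched in a few lines: an image-derived peak and a blood-sampled tail are joined by a Bezier segment. The quadratic form and the control-point placement below are illustrative assumptions, not the published HIBS parameters.

```python
import numpy as np

def bezier_link(p0, p1, n=20):
    """Quadratic Bezier segment joining the last image-derived peak point p0
    to the first blood-sample point p1 (each a (time, activity) pair).
    The control point is placed at (t1, a0) to give a smooth decay; this
    placement is an assumption, not the published HIBS rule."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    ctrl = np.array([p1[0], p0[1]])
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * ctrl + t ** 2 * p1

def hybrid_input_function(image_peak, blood_tail):
    """Concatenate image-derived peak, Bezier bridge, and blood-sampled tail."""
    bridge = bezier_link(image_peak[-1], blood_tail[0])
    return np.vstack([image_peak, bridge[1:-1], blood_tail])

# Toy data: (time in min, activity in kBq/mL).
peak = np.array([[0.0, 0.0], [0.25, 80.0], [0.5, 60.0]])
tail = np.array([[2.0, 20.0], [10.0, 8.0], [60.0, 2.0]])
curve = hybrid_input_function(peak, tail)
```

The resulting curve is monotone in time, with the bridge filling the gap between the last image frame and the first blood sample.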

  3. Simplified quantification of small animal [{sup 18}F]FDG PET studies using a standard arterial input function

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Philipp T. [University Hospital Aachen, Department of Neurology, Aachen (Germany); Circiumaru, Valentina; Thomas, Daniel H. [University of Pennsylvania, Department of Radiology, Philadelphia (United States); Cardi, Christopher A.; Bal, Harshali; Acton, Paul D. [Thomas Jefferson University, Department of Radiology, Philadelphia (United States)

    2006-08-15

Arterial input function (AIF) measurement for quantification of small animal PET studies is technically challenging and limited by the small blood volume of small laboratory animals. The present study investigated the use of a standard arterial input function (SAIF) to simplify the experimental procedure. Twelve [{sup 18}F]fluorodeoxyglucose ([{sup 18}F]FDG) PET studies accompanied by serial arterial blood sampling were acquired in seven male Sprague-Dawley rats under isoflurane anaesthesia without (every rat) and with additional (five rats) vibrissae stimulation. A leave-one-out procedure was employed to validate the use of a SAIF with individual scaling by one (1S) or two (2S) arterial blood samples. Automatic slow bolus infusion of [{sup 18}F]FDG resulted in highly similar AIFs in all rats. The average differences between the areas under the curve of the measured AIF and the individually scaled SAIF were 0.11{+-}4.26% and 0.04{+-}2.61% for the 1S (6-min sample) and the 2S (4-min/43-min samples) approach, respectively. The average differences between the cerebral metabolic rates of glucose (CMR{sub glc}) calculated using the measured AIF and the scaled SAIF were 1.31{+-}5.45% and 1.30{+-}3.84% for the 1S and the 2S approach, respectively. The use of a SAIF scaled by one or (preferably) two arterial blood samples can serve as a valid substitute for individual AIF measurements to quantify [{sup 18}F]FDG PET studies in rats. The SAIF approach minimises the loss of blood and should be ideally suited for longitudinal quantitative small animal [{sup 18}F]FDG PET studies. (orig.)
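The 1S/2S scaling idea can be illustrated with a minimal sketch. It assumes the scale factor is the mean ratio of the measured sample activity to the template value at the sample times; this is a plausible reading of the abstract, not the paper's exact recipe.

```python
import numpy as np

def scale_saif(t, saif, sample_times, sample_values):
    """Scale a standard arterial input function (SAIF) to an individual using
    one or two late blood samples. The scale factor is the mean ratio of the
    measured sample activity to the template activity at the sample times
    (an assumption; the published 1S/2S schemes may differ in detail)."""
    template_at_samples = np.interp(sample_times, t, saif)
    scale = np.mean(np.asarray(sample_values) / template_at_samples)
    return scale * saif

t = np.linspace(0, 45, 91)                 # minutes
saif = 100 * np.exp(-0.1 * t) + 5          # toy population template
# 2S scheme: samples at 4 and 43 min from an individual whose true AIF is
# 1.2x the template, so the recovered curve should be 1.2 * saif.
samples_t = [4.0, 43.0]
samples_v = [1.2 * np.interp(x, t, saif) for x in samples_t]
aif = scale_saif(t, saif, samples_t, samples_v)
```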

  4. Noninvasive quantification of cerebral metabolic rate for glucose in rats using 18F-FDG PET and standard input function

    Science.gov (United States)

    Hori, Yuki; Ihara, Naoki; Teramoto, Noboru; Kunimi, Masako; Honda, Manabu; Kato, Koichi; Hanakawa, Takashi

    2015-01-01

Measurement of the arterial input function (AIF) for quantitative positron emission tomography (PET) studies is technically challenging. The present study aimed to develop a method based on a standard arterial input function (SIF) to estimate the input function without blood sampling. We performed 18F-fluorodeoxyglucose studies accompanied by continuous blood sampling for measurement of the AIF in 11 rats. The SIF was calculated by averaging AIFs from eight anesthetized rats, after normalization by body mass (BM) and injected dose (ID). Then, the individual input function was estimated using two types of SIF: (1) SIF calibrated by the individual's BM and ID (estimated individual input function, EIFNS) and (2) SIF calibrated by a single blood sample as proposed previously (EIF1S). No significant differences in the area under the curve (AUC) or cerebral metabolic rate for glucose (CMRGlc) were found across the AIF-, EIFNS-, and EIF1S-based methods using repeated-measures analysis of variance. In the correlation analysis, AUC and CMRGlc derived from EIFNS were highly correlated with those derived from AIF and EIF1S. A preliminary comparison between AIF and EIFNS in three awake rats supported the idea that the method might be applicable to behaving animals. The present study suggests that the EIFNS method might serve as a noninvasive substitute for individual AIF measurement. PMID:25966947

  5. Estimating the input function non-invasively for FDG-PET quantification with multiple linear regression analysis: simulation and verification with in vivo data

    International Nuclear Information System (INIS)

    Fang, Yu-Hua; Kao, Tsair; Liu, Ren-Shyan; Wu, Liang-Chih

    2004-01-01

A novel statistical method, namely Regression-Estimated Input Function (REIF), is proposed in this study for the purpose of non-invasive estimation of the input function for fluorine-18 2-fluoro-2-deoxy-d-glucose positron emission tomography (FDG-PET) quantitative analysis. We collected data from 44 patients who had undergone a blood sampling procedure during their FDG-PET scans. First, we generated tissue time-activity curves of the grey matter and the whole brain with a segmentation technique for every subject. Summations over different intervals of these two curves were used as a feature vector, which also included the net injection dose. Multiple linear regression analysis was then applied to find the correlation between the input function and the feature vector. After a simulation study with in vivo data, the data of 29 patients were used to calculate the regression coefficients, which were then used to estimate the input functions of the other 15 subjects. Comparing the estimated input functions with the corresponding real input functions, the average error percentages of the area under the curve and the cerebral metabolic rate of glucose (CMRGlc) were 12.13±8.85% and 16.60±9.61%, respectively. Regression analysis of the CMRGlc values derived from the real and estimated input functions revealed a high correlation (r=0.91). No significant difference was found between the real CMRGlc and that derived from our regression-estimated input function (Student's t test, P>0.05). The proposed REIF method demonstrated good performance in estimating both the input function and CMRGlc, and represents a reliable replacement for blood sampling procedures in FDG-PET quantification. (orig.)
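The regression step can be sketched with synthetic data. The feature dimensions, data shapes, and random generator below are all made up for illustration; the actual REIF feature vector uses interval sums of the grey-matter and whole-brain TACs plus the injected dose.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: each feature row stands in for [interval sums of the
# grey-matter TAC, interval sums of the whole-brain TAC, injected dose];
# each target row is the input function sampled at a few time points.
n_train, n_feat, n_times = 29, 5, 8
true_W = rng.normal(size=(n_feat, n_times))          # hidden linear mapping
X_train = rng.uniform(1, 10, size=(n_train, n_feat))
Y_train = X_train @ true_W + 0.01 * rng.normal(size=(n_train, n_times))

# Fit one ordinary-least-squares regression per input-function time point.
W, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)

# Estimate the input function of a new subject from its features alone.
x_new = rng.uniform(1, 10, size=n_feat)
input_fn_estimate = x_new @ W
```

In the study itself, 29 patients supplied the regression coefficients and the remaining 15 served as the evaluation set.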

  6. Recovery coefficients for the quantification of the arterial input function from dynamic pet measurements: experimental and theoretical determination

    International Nuclear Information System (INIS)

    Brix, G.; Bellemann, M.E.; Hauser, H.; Doll, J.

    2002-01-01

Aim: For kinetic modelling of dynamic PET data, the arterial input function can be determined directly from the PET scans if a large artery is visualized on the images. The purpose of this study was to experimentally and theoretically determine recovery coefficients for cylinders as a function of their diameter and the level of background activity. Methods: The measurements were performed using a phantom with seven cylinder inserts (φ = 5-46 mm). The cylinders were filled with an aqueous 68Ga solution while the main chamber was filled with an 18F solution in order to obtain a varying concentration ratio between the cylinders and the background due to the different isotope half-lives. After iterative image reconstruction, the activity concentrations were measured in the center of the cylinders and the recovery coefficients were calculated as a function of the diameter and the background activity. Based on the imaging properties of the PET system, we also developed a model for the quantitative assessment of recovery coefficients. Results: The functional dependence of the measured recovery data on the cylinder diameter and the concentration ratio is well described by our model. For dynamic PET measurements, the recovery correction must take into account the decreasing concentration ratio between the blood vessel and the surrounding tissue. Under the realized measurement and data analysis conditions, a recovery correction is required for vessels with diameters of up to 25 mm. Conclusions: Based on the experimentally verified model, the activity concentration in large arteries can be calculated from the measured activity concentration in the blood vessel and the background activity. The presented approach offers the possibility to determine the arterial input function for pharmacokinetic PET studies non-invasively from large arteries (especially the aorta). (orig.) [de
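The basic inversion such a recovery correction performs can be sketched with the simplest hot-spot model, in which counts lost from the vessel are replaced by spill-in from the background. The paper's experimentally verified model is more detailed; this is only the first-order idea.

```python
def recovery_correct(measured, recovery, background):
    """Recover the true vessel activity from the measured value, given a
    recovery coefficient for the vessel diameter and the surrounding tissue
    (background) activity. Assumes the simple model
        measured = recovery * true + (1 - recovery) * background,
    which is an illustrative simplification of the published model."""
    return (measured - (1.0 - recovery) * background) / recovery

# A vessel with recovery 0.6: a true activity of 100 in a background of 20
# is measured as 0.6*100 + 0.4*20 = 68; the correction inverts this.
true_activity = recovery_correct(68.0, 0.6, 20.0)
```

Note how the required correction grows as the concentration ratio between vessel and background falls, which is exactly the dynamic-scan complication the abstract points out.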

  7. Quantification of allochthonous nutrient input into freshwater bodies by herbivorous waterbirds

    NARCIS (Netherlands)

    Hahn, S.M.; Bauer, S.; Klaassen, M.R.J.

    2008-01-01

    1. Waterbirds are considered to import large quantities of nutrients to freshwater bodies but quantification of these loadings remains problematic. We developed two general models to calculate such allochthonous nutrient inputs considering food intake, foraging behaviour and digestive performance of

  8. Input-profile-based software failure probability quantification for safety signal generation systems

    International Nuclear Information System (INIS)

    Kang, Hyun Gook; Lim, Ho Gon; Lee, Ho Jung; Kim, Man Cheol; Jang, Seung Cheol

    2009-01-01

Approaches for software failure probability estimation are mainly based on the results of testing. Test cases represent the inputs that are encountered in actual use. The test inputs for a safety-critical application such as the reactor protection system (RPS) of a nuclear power plant are the inputs that cause the activation of a protective action such as a reactor trip. A digital system treats inputs from instrumentation sensors as discrete digital values by using an analog-to-digital converter. The input profile must be determined in consideration of these characteristics for effective software failure probability quantification. Another important characteristic of software testing is that a test does not have to be repeated for the same input value, since the software response is deterministic for each specific digital input. With these considerations, we propose an effective software testing method for quantifying the failure probability. As an example application, the input profile of the digital RPS is developed based on typical plant data. The proposed method is expected to provide a simple but realistic means to quantify the software failure probability based on the input profile and system dynamics.
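The core accounting described here can be sketched in a few lines: because the software is deterministic, each discrete input needs only one test, and the failure probability is the total profile weight of the failing inputs. This is an illustration of the idea only, not the paper's full method.

```python
def failure_probability(profile, fails):
    """Input-profile-based failure probability for deterministic software.

    profile : dict mapping each discrete digital input value to its
              probability of occurring in actual use (must sum to 1).
    fails   : set of input values on which the software was observed to fail.

    Since the software response is deterministic, each input is tested once;
    the failure probability is the summed profile weight of failing inputs.
    """
    assert abs(sum(profile.values()) - 1.0) < 1e-9
    return sum(p for x, p in profile.items() if x in fails)

# Toy profile over 4 discretized sensor readings (made-up numbers).
profile = {0: 0.70, 1: 0.20, 2: 0.09, 3: 0.01}
p_fail = failure_probability(profile, fails={3})   # only the rare input fails
```

The example shows why the profile matters: a failure on a rare input contributes little to the overall probability, however alarming it looks in isolation.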

  9. Evaluation of severe accident risks: Quantification of major input parameters: MAACS [MELCOR Accident Consequence Code System] input

    International Nuclear Information System (INIS)

    Sprung, J.L.; Jow, H-N; Rollstin, J.A.; Helton, J.C.

    1990-12-01

    Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs

  10. Combining MRI with PET for partial volume correction improves image-derived input functions in mice

    Energy Technology Data Exchange (ETDEWEB)

    Evans, Eleanor; Buonincontri, Guido [Wolfson Brain Imaging Centre, University of Cambridge, Cambridge (United Kingdom); Izquierdo, David [Athinoula A Martinos Centre, Harvard University, Cambridge, MA (United States); Methner, Carmen [Department of Medicine, University of Cambridge, Cambridge (United Kingdom); Hawkes, Rob C [Wolfson Brain Imaging Centre, University of Cambridge, Cambridge (United Kingdom); Ansorge, Richard E [Department of Physics, University of Cambridge, Cambridge (United Kingdom); Kreig, Thomas [Department of Medicine, University of Cambridge, Cambridge (United Kingdom); Carpenter, T Adrian [Wolfson Brain Imaging Centre, University of Cambridge, Cambridge (United Kingdom); Sawiak, Stephen J [Wolfson Brain Imaging Centre, University of Cambridge, Cambridge (United Kingdom); Behavioural and Clinical Neurosciences Institute, University of Cambridge, Cambridge (United Kingdom)

    2014-07-29

    Kinetic modelling in PET requires the arterial input function (AIF), defined as the time-activity curve (TAC) in plasma. This measure is challenging to obtain in mice due to low blood volumes, resulting in a reliance on image-based methods for AIF derivation. We present a comparison of PET- and MR-based region-of-interest (ROI) analysis to obtain image-derived AIFs from the left ventricle (LV) of a mouse model. ROI-based partial volume correction (PVC) was performed to improve quantification.

  11. Combining MRI with PET for partial volume correction improves image-derived input functions in mice

    International Nuclear Information System (INIS)

    Evans, Eleanor; Buonincontri, Guido; Izquierdo, David; Methner, Carmen; Hawkes, Rob C; Ansorge, Richard E; Kreig, Thomas; Carpenter, T Adrian; Sawiak, Stephen J

    2014-01-01

    Kinetic modelling in PET requires the arterial input function (AIF), defined as the time-activity curve (TAC) in plasma. This measure is challenging to obtain in mice due to low blood volumes, resulting in a reliance on image-based methods for AIF derivation. We present a comparison of PET- and MR-based region-of-interest (ROI) analysis to obtain image-derived AIFs from the left ventricle (LV) of a mouse model. ROI-based partial volume correction (PVC) was performed to improve quantification.

  12. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes with scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times, which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows estimation of the sensitivity indices of each scalar model input, while the 'dispersion model' allows derivation of the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.
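For scalar inputs, the variance-based Sobol indices mentioned above can be estimated with the standard pick-freeze Monte Carlo scheme; the paper's joint GLM/GAM metamodeling for functional inputs is omitted here. The test function is an arbitrary analytical example.

```python
import numpy as np

def sobol_first_order(f, n=400_000, seed=1):
    """First-order Sobol indices of f(x1, x2) with independent U(0,1) inputs,
    via the pick-freeze estimator S_i = E[f(A) * (f(A_B^i) - f(B))] / Var(f),
    where A_B^i takes column i from sample A and the rest from sample B."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, 2))
    B = rng.random((n, 2))
    yA, yB = f(A), f(B)
    var = yA.var()
    S = []
    for i in range(2):
        AB = B.copy()
        AB[:, i] = A[:, i]
        S.append(np.mean(yA * (f(AB) - yB)) / var)
    return S

# For f = x1 + 2*x2, the input variances are 1/12 and 4/12, so S1 = 0.2
# and S2 = 0.8 analytically.
S1, S2 = sobol_first_order(lambda x: x[:, 0] + 2.0 * x[:, 1])
```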

  13. Reconstruction of an input function from a dynamic PET water image using multiple tissue curves

    Science.gov (United States)

    Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Yuka; Nishiyama, Yoshihiro

    2016-08-01

Quantification of cerebral blood flow (CBF) is important for the understanding of normal and pathologic brain physiology. When CBF is assessed using PET with H2(15)O or C(15)O2, its calculation requires an arterial input function, which generally requires invasive arterial blood sampling. The aim of the present study was to develop a new technique to reconstruct an image-derived input function (IDIF) from a dynamic H2(15)O PET image as a completely non-invasive approach. Our technique consists of a formula that expresses the input in terms of a tissue curve with a rate constant parameter. For multiple tissue curves extracted from the dynamic image, the rate constants were estimated so as to minimize the sum of the differences between the reproduced inputs expressed by the extracted tissue curves. The estimated rates were used to express the inputs, and the mean of the estimated inputs was used as the IDIF. The method was tested in human subjects (n  =  29) and was compared to the blood sampling method. Simulation studies were performed to examine the magnitude of potential biases in CBF and to optimize the number of tissue curves used for the input reconstruction. In the PET study, the estimated IDIFs reproduced the measured ones well, and the difference between the CBF values calculated using the two methods was small. This suggests the possibility of using a completely non-invasive technique to assess CBF in patho-physiological studies.
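The forward and inverse relations behind this reconstruction can be sketched with a one-tissue compartment model. In the paper the rate constants are estimated by minimizing disagreement between the inputs reproduced from different tissue curves; for brevity the sketch below uses known (made-up) constants to show only how each tissue curve regenerates the input.

```python
import numpy as np

def tissue_curve(t, ca, K1, k2):
    """One-tissue compartment model, dCt/dt = K1*Ca(t) - k2*Ct(t),
    integrated with forward Euler on a fine grid."""
    ct = np.zeros_like(t)
    dt = t[1] - t[0]
    for i in range(1, len(t)):
        ct[i] = ct[i - 1] + dt * (K1 * ca[i - 1] - k2 * ct[i - 1])
    return ct

def reproduce_input(t, ct, K1, k2):
    """Invert the model: Ca(t) = (dCt/dt + k2*Ct) / K1."""
    return (np.gradient(ct, t) + k2 * ct) / K1

t = np.linspace(0, 3, 30001)                       # minutes, fine grid
ca_true = 50 * t * np.exp(-2 * t)                  # toy input function
regions = [(0.5, 0.9), (0.3, 0.5), (0.8, 1.2)]     # assumed (K1, k2) per ROI
curves = [tissue_curve(t, ca_true, K1, k2) for K1, k2 in regions]

# With the rate constants in hand, each tissue curve reproduces the input;
# the IDIF is taken as the mean of the reproduced inputs.
idif = np.mean([reproduce_input(t, c, K1, k2)
                for c, (K1, k2) in zip(curves, regions)], axis=0)
```

Averaging over many regions is what suppresses region-specific noise in the reconstructed input, which is why the paper optimizes the number of tissue curves used.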

  14. Concepts in production ecology for analysis and quantification of agricultural input-output combinations.

    NARCIS (Netherlands)

    Ittersum, van M.K.; Rabbinge, R.

    1997-01-01

    Definitions and concepts of production ecology are presented as a basis for development of alternative production technologies characterized by their input-output combinations. With these concepts the relative importance of several growth factors and inputs is investigated to explain actual yield

  15. Simultaneous acquisition of dynamic PET-MRI: arterial input function using DSC-MRI and [18F]-FET

    Energy Technology Data Exchange (ETDEWEB)

    Caldeira, Liliana; Yun, Seong Dae; Silva, Nuno da; Filss, Christian; Scheins, Juergen; Telmann, Lutz; Herzog, Hans; Shah, Jon [Institute of Neuroscience and Medicine - 4, Forschungszentrum Juelich GmbH (Germany)

    2015-05-18

This work focuses on simultaneous dynamic MR-PET acquisition in brain tumour patients. MR-based perfusion-weighted imaging (PWI) and [18F]-FET PET are dynamic methods that allow tumour metabolism to be evaluated quantitatively. In both methods, an arterial input function (AIF) is necessary for quantification. However, AIF estimation is a challenging task. In this work, we explore the possibilities of combining dynamic MR and PET AIFs.

  16. Simultaneous acquisition of dynamic PET-MRI: arterial input function using DSC-MRI and [18F]-FET

    International Nuclear Information System (INIS)

    Caldeira, Liliana; Yun, Seong Dae; Silva, Nuno da; Filss, Christian; Scheins, Juergen; Telmann, Lutz; Herzog, Hans; Shah, Jon

    2015-01-01

This work focuses on simultaneous dynamic MR-PET acquisition in brain tumour patients. MR-based perfusion-weighted imaging (PWI) and [18F]-FET PET are dynamic methods that allow tumour metabolism to be evaluated quantitatively. In both methods, an arterial input function (AIF) is necessary for quantification. However, AIF estimation is a challenging task. In this work, we explore the possibilities of combining dynamic MR and PET AIFs.

  17. Multi detector input and function generator for polarized neutron experiments

    International Nuclear Information System (INIS)

    De Blois, J.; Beunes, A.J.H.; Ende, P. v.d.; Osterholt, E.A.; Rekveldt, M.T.; Schipper, M.N.; Velthuis, S.G.E. te

    1998-01-01

In this paper a VME module is described for static or stroboscopic measurements with a neutron scattering instrument, consisting essentially of a series of up to 64 3He neutron detectors around a sample environment. Each detector is provided with an amplifier and a discriminator to separate the neutrons from noise. To reduce the wiring, the discriminator outputs are connected to the module by coding boxes. Two 16-input to one-output coding boxes generate serial output codes on a fiber optic connection. This inherently fast connection reduces the dead time introduced by the coding and the influence of environmental noise. In stroboscopic measurements, a periodic function is used to affect the sample, which is surrounded by a field coil. Each detected neutron is labeled with the detector number and the time of detection with respect to a time reference. The data time base can be programmed on a linear or a nonlinear scale. An external source or an attribute of the periodic function may generate the time reference pulse. A 12-bit DAC connected to the output of an 8 K, 16-bit memory, in which the current pattern has been stored beforehand, generates the function. The function memory is scanned by the programmable function time base. Attributes are set by the four remaining bits of the memory. One separate detector input connects a monitor detector in the neutron beam to a 32-bit counter/timer that provides measurement at a preset count, preset time or preset frame. (orig.)

  18. Estimation of the input function in dynamic positron emission tomography applied to fluorodeoxyglucose

    International Nuclear Information System (INIS)

    Jouvie, Camille

    2013-01-01

Positron Emission Tomography (PET) is a method of functional imaging, used in particular for drug development and tumor imaging. In PET, estimation of the arterial plasma activity concentration of the non-metabolized compound (the 'input function') is necessary for the extraction of the pharmacokinetic parameters. These parameters enable the quantification of the compound dynamics in the tissues. This PhD thesis contributes to the study of the input function through the development of a minimally invasive method to estimate it. The method uses the PET image and a few blood samples. In this work, the example of the FDG tracer is chosen. The proposed method relies on compartmental modeling: it deconvolves the three-compartment model. The originality of the method consists in using a large number of regions of interest (ROIs), a large number of sets of three ROIs, and an iterative process. To validate the method, simulations of PET images of increasing complexity were performed, from a simple image simulated with an analytic simulator to a complex image simulated with a Monte-Carlo simulator. After simulation of the acquisition, reconstruction and corrections, the images were segmented (through segmentation of an MR image and registration between the PET and MR images) and corrected for partial volume effect by a variant of Rousset's method, to obtain the kinetics in the ROIs, which are the input data of the estimation method. The evaluation of the method on simulated and real data is presented, as well as a study of the method's robustness to different error sources, for example in the segmentation, in the registration, or in the activity of the blood samples used. (author) [fr

  19. Efficient uncertainty quantification of a fully nonlinear and dispersive water wave model with random inputs

    DEFF Research Database (Denmark)

    Bigoni, Daniele; Engsig-Karup, Allan Peter; Eskilsson, Claes

    2016-01-01

    A major challenge in next-generation industrial applications is to improve numerical analysis by quantifying uncertainties in predictions. In this work we present a formulation of a fully nonlinear and dispersive potential flow water wave model with random inputs for the probabilistic description...... at different points in the parameter space, allowing for the reuse of existing simulation software. The choice of the applied methods is driven by the number of uncertain input parameters and by the fact that finding the solution of the considered model is computationally intensive. We revisit experimental...... benchmarks often used for validation of deterministic water wave models. Based on numerical experiments and assumed uncertainties in boundary data, our analysis reveals that some of the known discrepancies from deterministic simulation in comparison with experimental measurements could be partially explained...

  20. Vascular input function correction of inflow enhancement for improved pharmacokinetic modeling of liver DCE-MRI.

    Science.gov (United States)

    Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B

    2018-06-01

To propose a simple method to correct the vascular input function (VIF) for inflow effects, and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate the improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve the accuracy of VIF estimation and pharmacokinetic fitting. In the animal studies, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived from uncorrected VIFs showed no significant changes. The proposed correction method improves the accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
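The spoiled-gradient-echo signal model underlying such concentration estimation can be sketched as a round trip: signal to relaxation rate to concentration. The sketch assumes ideal spoiling and no inflow enhancement, which is precisely the error source the paper corrects for; all parameter values are illustrative.

```python
import numpy as np

def spgr_signal(R1, TR, flip_deg, M0=1.0):
    """Spoiled gradient echo (SPGR) steady-state signal."""
    a = np.deg2rad(flip_deg)
    E1 = np.exp(-TR * R1)
    return M0 * np.sin(a) * (1 - E1) / (1 - np.cos(a) * E1)

def concentration_from_signal(S, S0, R10, TR, flip_deg, r1=4.5):
    """Convert a dynamic SPGR signal to contrast agent concentration:
    solve the SPGR equation for E1 using the pre-contrast baseline S0 to
    cancel M0*sin(a), then C = (R1 - R10) / r1 with relaxivity r1 in
    1/(mM*s). Assumes full spoiling and no inflow enhancement."""
    a = np.deg2rad(flip_deg)
    E10 = np.exp(-TR * R10)
    B = (S / S0) * (1 - E10) / (1 - np.cos(a) * E10)
    E1 = (1 - B) / (1 - np.cos(a) * B)
    R1 = -np.log(E1) / TR
    return (R1 - R10) / r1

# Round trip: a concentration of 1.0 mM should be recovered exactly.
TR, flip, R10, r1 = 0.005, 15.0, 1.0 / 1.4, 4.5   # s, deg, 1/s, 1/(mM*s)
S0 = spgr_signal(R10, TR, flip)
S = spgr_signal(R10 + r1 * 1.0, TR, flip)
c = concentration_from_signal(S, S0, R10, TR, flip, r1)
```

Inflow enhancement inflates S in the vessel beyond what this model predicts, which is why an uncorrected VIF overestimates concentration near the vessel inlet.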

  1. Temporal and spatial quantification of farm and landscape functions

    DEFF Research Database (Denmark)

    Andersen, Peter Stubkjær

, residence, habitat, and recreation; development of a method for quantifying farm functionality and assessing multifunctionality; and definition of a farm typology based on multifunctionality strategies. Empirical data from farm interviews were used in the study to test the developed methods. The results...... generally decreases, and a tendency toward increased segregation of the rural landscape is observed. In perspective, further studies on quantification in tangible units, synergies and trade-offs between functions at different scales, and correlations between structures and functions are needed....

  2. Partial volume effect (PVE) on the arterial input function (AIF) in T1-weighted perfusion imaging and limitations of the multiplicative rescaling approach

    DEFF Research Database (Denmark)

    Hansen, Adam Espe; Pedersen, Henrik; Rostrup, Egill

    2009-01-01

    The partial volume effect (PVE) on the arterial input function (AIF) remains a major obstacle to absolute quantification of cerebral blood flow (CBF) using MRI. This study evaluates the validity and performance of a commonly used multiplicative rescaling of the AIF to correct for the PVE. In a gr...

  3. A curve-fitting approach to estimate the arterial plasma input function for the assessment of glucose metabolic rate and response to treatment.

    NARCIS (Netherlands)

    Vriens, D.; Geus-Oei, L.F. de; Oyen, W.J.G.; Visser, E.P.

    2009-01-01

    For the quantification of dynamic (18)F-FDG PET studies, the arterial plasma time-activity concentration curve (APTAC) needs to be available. This can be obtained using serial sampling of arterial blood or an image-derived input function (IDIF). Arterial sampling is invasive and often not feasible

  4. Error correction in multi-fidelity molecular dynamics simulations using functional uncertainty quantification

    Energy Technology Data Exchange (ETDEWEB)

    Reeve, Samuel Temple; Strachan, Alejandro, E-mail: strachan@purdue.edu

    2017-04-01

We use functional, Fréchet, derivatives to quantify how thermodynamic outputs of a molecular dynamics (MD) simulation depend on the potential used to compute atomic interactions. Our approach quantifies the sensitivity of the quantities of interest with respect to the input functions, as opposed to their parameters, as is done in typical uncertainty quantification methods. We show that the functional sensitivity of the average potential energy and pressure in isothermal, isochoric MD simulations using Lennard–Jones two-body interactions can be used to accurately predict those properties for other interatomic potentials (with different functional forms) without re-running the simulations. This is demonstrated under three different thermodynamic conditions, namely a crystal at room temperature, a liquid at ambient pressure, and a high-pressure liquid. The method provides accurate predictions as long as the change in potential can be reasonably described to first order and does not significantly affect the region in phase space explored by the simulation. The functional uncertainty quantification approach can be used to estimate the uncertainties associated with constitutive models used in the simulation and to correct predictions if a more accurate representation becomes available.
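The first-order bookkeeping can be illustrated on a toy functional of a Lennard-Jones potential. For the linear functional below the correction is exact; the paper's MD observables are only approximately linear in the potential, which is where the stated first-order caveat comes from. The "pair correlation" weight g(r) is made up for illustration.

```python
import numpy as np

r = np.linspace(0.9, 3.0, 500)
dr = r[1] - r[0]

def lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6 ** 2 - sr6)

# Toy observable: Q[V] = \int g(r) V(r) dr with a fixed weight g(r).
# Its functional derivative is dQ/dV(r) = g(r), so the first-order
# correction for a modified potential V' is
#   Q[V'] ~ Q[V] + \int (dQ/dV)(r) * (V'(r) - V(r)) dr.
g = np.exp(-((r - 1.5) ** 2) / 0.1)
V = lj(r)
Q = np.sum(g * V) * dr
dQ_dV = g                                   # functional derivative of Q

V_new = lj(r, eps=1.05)                     # a slightly different potential
Q_pred = Q + np.sum(dQ_dV * (V_new - V)) * dr
Q_direct = np.sum(g * V_new) * dr
```

In the paper the functional derivative is estimated from the MD trajectory itself rather than known in closed form; the correction step, however, has exactly this shape.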

  5. Determination of arterial input function in dynamic susceptibility contrast MRI using group independent component analysis technique

    International Nuclear Information System (INIS)

    Chen, S.; Liu, H.-L.; Yang Yihong; Hsu, Y.-Y.; Chuang, K.-S.

    2006-01-01

    Quantification of cerebral blood flow (CBF) with dynamic susceptibility contrast (DSC) magnetic resonance imaging (MRI) requires the determination of the arterial input function (AIF). The segmentation of surrounding tissue by manual selection is error-prone due to partial volume artifacts. Independent component analysis (ICA) has the advantage of automatically decomposing the signals into interpretable components. Recently, the group ICA technique has been applied to fMRI studies and has been shown to reduce the variance caused by motion artifacts and noise. In this work, we investigated the feasibility and efficacy of using the group ICA technique to extract the AIF. Both simulated and in vivo data were analyzed in this study. The simulated data of eight phantoms were generated using randomized lesion locations and time-activity curves. The clinical data were obtained from spin-echo EPI MR scans performed in seven normal subjects. The group ICA technique was applied to analyze the data by concatenating across the seven subjects. The AIFs were calculated from the weighted average of the signals in the region selected by ICA. Preliminary results of this study showed that the group ICA technique could not extract accurate AIF information from regions around the vessel. The mismatched locations of vessels within the group reduced the benefits of the group analysis
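The decomposition step can be illustrated on a single synthetic study (the record's group analysis concatenates subjects; the curves, mixing weights, and noise here are invented):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic DSC-like data: every voxel is a positive mixture of an
# arterial-like curve and a tissue-like curve, plus noise.
rng = np.random.default_rng(0)
t = np.arange(60.0)
aif = np.clip(t - 5, 0, None) ** 3 * np.exp(-np.clip(t - 5, 0, None) / 1.5)
tissue = np.clip(t - 12, 0, None) ** 3 * np.exp(-np.clip(t - 12, 0, None) / 3.0)

A = rng.uniform(0.1, 1.0, (200, 2))            # 200 voxels, random weights
X = A @ np.vstack([aif / aif.max(), tissue / tissue.max()])
X += 0.01 * rng.standard_normal(X.shape)

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(X.T).T             # (components, time)

# Pick the component most correlated (up to sign) with the true AIF
corrs = [abs(np.corrcoef(s, aif)[0, 1]) for s in sources]
print(max(corrs))
```

With well-separated sources ICA recovers the arterial shape cleanly; the record's negative finding concerns the harder group setting, where vessel locations differ across subjects.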

  6. Plasticity of the cis-regulatory input function of a gene.

    Directory of Open Access Journals (Sweden)

    Avraham E Mayo

    2006-04-01

    Full Text Available The transcription rate of a gene is often controlled by several regulators that bind specific sites in the gene's cis-regulatory region. The combined effect of these regulators is described by a cis-regulatory input function. What determines the form of an input function, and how variable is it with respect to mutations? To address this, we employ the well-characterized lac operon of Escherichia coli, which has an elaborate input function, intermediate between Boolean AND-gate and OR-gate logic. We mapped in detail the input function of 12 variants of the lac promoter, each with different point mutations in the regulator binding sites, by means of accurate expression measurements from living cells. We find that even a few mutations can significantly change the input function, resulting in functions that resemble pure AND gates, OR gates, or single-input switches. Other types of gates were not found. The variant input functions can be described in a unified manner by a mathematical model. The model also lets us predict which functions cannot be reached by point mutations. The input function that we studied thus appears to be plastic, in the sense that many of the mutations do not ruin the regulation completely but rather result in new ways to integrate the inputs.
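A minimal way to see the AND-to-OR plasticity is to write a two-input function as combinations of Hill terms; the thresholds and Hill coefficients below are illustrative, not the measured lac parameters:

```python
import numpy as np

def hill(x, k, n):
    # activating Hill term: 0 at low input, 1 at saturating input
    return x**n / (k**n + x**n)

def and_gate(x, y):
    # product of Hill terms: high only when BOTH inputs are high
    return hill(x, 1.0, 2) * hill(y, 1.0, 2)

def or_gate(x, y):
    # probabilistic-sum of Hill terms: high when EITHER input is high
    h1, h2 = hill(x, 1.0, 2), hill(y, 1.0, 2)
    return h1 + h2 - h1 * h2

print(and_gate(10, 0.01), or_gate(10, 0.01))
```

Mutations that change binding-site affinities correspond, in this sketch, to moving the k and n parameters, which smoothly deforms one gate type into another.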

  7. The cerebral blood flow measurement without absolute input function value for PET O-15 injection method

    International Nuclear Information System (INIS)

    Matsuda, Tadashige

    2004-01-01

    This paper presents a method for measuring the cerebral blood flow (CBF) in a region of interest (ROI) using PET data and an input function that lacks an absolute radioactivity concentration value. The shapes of the input function and the output function are determined from the clinical data by regression analysis. The input function and the output function are then Fourier transformed, and the transfer function of the differential equation of the compartment model is obtained from these Fourier transforms. The CBF can be estimated by regression analysis on the transfer function. Results are compared between the proposed and conventional methods. (author)
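The Fourier-domain step can be sketched with a synthetic input and a one-compartment impulse response standing in for the clinical data (all constants below are assumptions):

```python
import numpy as np

dt = 0.5
t = np.arange(0, 60, dt)
K1, k2 = 0.1, 0.05                       # assumed compartment constants
h = K1 * np.exp(-k2 * t)                 # impulse response of the model
x = t * np.exp(-t / 4.0)                 # synthetic (relative) input function
y = np.convolve(x, h) * dt               # simulated tissue curve

# Transfer function from the Fourier transforms of input and output,
# then back-transform to recover the impulse response (noiseless toy;
# measured data would require regularisation of the division).
n = 2 * t.size
H = np.fft.rfft(y, n) / np.fft.rfft(x, n)
h_est = np.fft.irfft(H, n)[: t.size] / dt
K1_est = h_est[0]
print(K1_est)
```

Because only the ratio of the two transforms enters, a common unknown scale factor on the input function cancels in the shape of the recovered transfer function, which is the point of working without absolute activity values.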

  8. Direct quantification of negatively charged functional groups on membrane surfaces

    KAUST Repository

    Tiraferri, Alberto

    2012-02-01

    Surface charge plays an important role in membrane-based separations of particulates, macromolecules, and dissolved ionic species. In this study, we present two experimental methods to determine the concentration of negatively charged functional groups at the surface of dense polymeric membranes. Both techniques consist of associating the membrane surface moieties with chemical probes, followed by quantification of the bound probes. Uranyl acetate and toluidine blue O dye, which interact with the membrane functional groups via complexation and electrostatic interaction, respectively, were used as probes. The amount of associated probes was quantified using liquid scintillation counting for uranium atoms and visible light spectroscopy for the toluidine blue dye. The techniques were validated using self-assembled monolayers of alkanethiols with known amounts of charged moieties. The surface density of negatively charged functional groups of hand-cast thin-film composite polyamide membranes, as well as commercial cellulose triacetate and polyamide membranes, was quantified under various conditions. Using both techniques, we measured a negatively charged functional group density of 20-30 nm^-2 for the hand-cast thin-film composite membranes. The ionization behavior of the membrane functional groups, determined from measurements with toluidine blue at varying pH, was consistent with published data for thin-film composite polyamide membranes. Similarly, the measured charge densities on commercial membranes were in general agreement with previous investigations. The relative simplicity of the two methods makes them a useful tool for quantifying the surface charge concentration of a variety of surfaces, including separation membranes. © 2011 Elsevier B.V.
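The conversion from a bound-probe measurement to a surface density is simple bookkeeping; the numbers below are hypothetical, chosen only so the result lands in the reported 20-30 nm^-2 range:

```python
# Converting a bound-probe amount to a surface group density,
# assuming 1:1 probe-to-group binding (hypothetical inputs).
N_A = 6.022e23            # Avogadro's number, 1/mol
bound_probe = 4e-8        # mol of probe measured on the sample
area_cm2 = 12.0           # geometric membrane sample area

groups = bound_probe * N_A            # number of charged groups
area_nm2 = area_cm2 * 1e14            # 1 cm^2 = 1e14 nm^2
density = groups / area_nm2           # groups per nm^2
print(density)
```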

  9. Modelling of Multi Input Transfer Function for Rainfall Forecasting in Batu City

    OpenAIRE

    Priska Arindya Purnama

    2017-01-01

    The aim of this research is to model and forecast the rainfall in Batu City using multi input transfer function model based on air temperature, humidity, wind speed and cloud. Transfer function model is a multivariate time series model which consists of an output series (Yt) expected to be affected by an input series (Xt) and other inputs in a group called a noise series (Nt). Multi input transfer function model obtained is (b1,s1,r1) (b2,s2,r2) (b3,s3,r3) (b4,s4,r4)(pn,qn) = (0,0,0)...

  10. Targeted quantification of functional enzyme dynamics in environmental samples for microbially mediated biogeochemical processes: Targeted quantification of functional enzyme dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Li, Minjing [School of Environmental Studies, China University of Geosciences, Wuhan 430074 People's Republic of China; Gao, Yuqian [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Qian, Wei-Jun [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Shi, Liang [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Liu, Yuanyuan [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Nelson, William C. [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Nicora, Carrie D. [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Resch, Charles T. [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Thompson, Christopher [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Yan, Sen [School of Environmental Studies, China University of Geosciences, Wuhan 430074 People's Republic of China; Fredrickson, James K. [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Zachara, John M. [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Liu, Chongxuan [Pacific Northwest National Laboratory, Richland, WA 99354 USA; School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055 People's Republic of China

    2017-07-13

    Microbially mediated biogeochemical processes are catalyzed by enzymes that control the transformation of carbon, nitrogen, and other elements in the environment. The dynamic linkage between enzymes and biogeochemical species transformation has, however, rarely been investigated because of the lack of analytical approaches to efficiently and reliably quantify enzymes and their dynamics in soils and sediments. Herein, we developed a signature peptide-based technique for sensitively quantifying dissimilatory and assimilatory enzymes, using nitrate-reducing enzymes in a hyporheic zone sediment as an example. Moreover, the measured changes in enzyme concentration were found to correlate with the nitrate reduction rate in a way different from that inferred from biogeochemical models based on biomass or functional genes as surrogates for functional enzymes. This phenomenon has important implications for understanding and modeling the dynamics of microbial community functions and biogeochemical processes in the environment. Our results also demonstrate the importance of enzyme quantification for the identification and interrogation of those biogeochemical processes with low metabolite concentrations as a result of faster enzyme-catalyzed consumption of metabolites than their production. The dynamic enzyme behaviors provide a basis for the development of enzyme-based models to describe the relationship between the microbial community and biogeochemical processes.

  11. School District Inputs and Biased Estimation of Educational Production Functions.

    Science.gov (United States)

    Watts, Michael

    1985-01-01

    In 1979, Eric Hanushek pointed out a potential problem in estimating educational production functions, particularly at the precollege level. He observed that it is frequently inappropriate to include school-system variables in equations using the individual student as the unit of observation. This study offers limited evidence supporting this…

  12. Direct quantification of negatively charged functional groups on membrane surfaces

    KAUST Repository

    Tiraferri, Alberto; Elimelech, Menachem

    2012-01-01

    groups at the surface of dense polymeric membranes. Both techniques consist of associating the membrane surface moieties with chemical probes, followed by quantification of the bound probes. Uranyl acetate and toluidine blue O dye, which interact

  13. Parametrically defined cerebral blood vessels as non-invasive blood input functions for brain PET studies

    International Nuclear Information System (INIS)

    Asselin, Marie-Claude; Cunningham, Vincent J; Amano, Shigeko; Gunn, Roger N; Nahmias, Claude

    2004-01-01

    A non-invasive alternative to arterial blood sampling for the generation of a blood input function for brain positron emission tomography (PET) studies is presented. The method aims to extract the dimensions of the blood vessel directly from PET images and to simultaneously correct the radioactivity concentration for partial volume and spillover. This involves simulation of the tomographic imaging process to generate images of different blood vessel and background geometries and selecting the one that best fits, in a least-squares sense, the acquired PET image. A phantom experiment was conducted to validate the method, which was then applied to eight subjects injected with 6-[18F]fluoro-L-DOPA and one subject injected with [11C]CO-labelled red blood cells. In the phantom study, the diameters of syringes filled with an 11C solution and inserted into a water-filled cylinder were estimated with an accuracy of half a pixel (1 mm). The radioactivity concentration was recovered to 100 ± 4% in the 8.7 mm diameter syringe, the one that most closely approximated the superior sagittal sinus. In the human studies, the method systematically overestimated the calibre of the superior sagittal sinus by 2-3 mm compared to measurements made in magnetic resonance venograms of the same subjects. Sources of discrepancies related to the anatomy of the blood vessel were found not to be fundamental limitations to the applicability of the method to human subjects. This method has the potential to provide accurate quantification of blood radioactivity concentration from PET images without the need for blood samples, corrections for delay and dispersion, co-registered anatomical images, or manually defined regions of interest

  14. A linear functional differential equation with distributions in the input

    Directory of Open Access Journals (Sweden)

    Vadim Z. Tsalyuk

    2003-10-01

    Full Text Available This paper studies the functional differential equation $$ \dot x(t) = \int_a^t d_s R(t,s)\, x(s) + F'(t), \quad t \in [a,b], $$ where $F'$ is a generalized derivative, and $R(t,\cdot)$ and $F$ are functions of bounded variation. A solution is defined by the difference $x - F$ being absolutely continuous and satisfying the inclusion $$ \frac{d}{dt}\,\bigl(x(t) - F(t)\bigr) \in \int_a^t d_s R(t,s)\, x(s). $$ Here, the integral on the right is the multivalued Stieltjes integral presented in [VTs1] (in this article we review and extend the results of [VTs1]). We show that the solution set for the initial-value problem is nonempty, compact, and convex. A solution $x$ is said to have memory if there exists a function $\tilde x$ such that $\tilde x(a) = x(a)$, $\tilde x(b) = x(b)$, $\tilde x(t) \in [x(t-0), x(t+0)]$ for $t \in (a,b)$, and $\frac{d}{dt}\,\bigl(x(t) - F(t)\bigr) = \int_a^t d_s R(t,s)\, \tilde x(s)$, where the Lebesgue-Stieltjes integral is used. We show that such solutions form a nonempty, compact, and convex set. It is shown that solutions with memory obey the Cauchy-type formula $$ x(t) \in C(t,a)\, x(a) + \int_a^t C(t,s)\, dF(s). $$

  15. Comparison of K-means and fuzzy c-means algorithm performance for automated determination of the arterial input function.

    Science.gov (United States)

    Yin, Jiandong; Sun, Hongzan; Yang, Jiawen; Guo, Qiyong

    2014-01-01

    The arterial input function (AIF) plays a crucial role in the quantification of cerebral perfusion parameters. The traditional method for AIF detection is based on manual operation, which is time-consuming and subjective. Two automatic methods have been reported that are based on two frequently used clustering algorithms: fuzzy c-means (FCM) and K-means. However, it is still not clear which is better for AIF detection. Hence, we compared the performance of these two clustering methods using both simulated and clinical data. The results demonstrate that K-means analysis can yield more accurate and robust AIF results, although it takes longer to execute than the FCM method. We consider that this longer execution time is trivial relative to the total time required for image manipulation in a PACS setting, and is acceptable if an ideal AIF is obtained. Therefore, the K-means method is preferable to FCM in AIF detection.
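The clustering step can be sketched on synthetic curves: cluster the voxel time courses with K-means and pick the cluster whose mean curve has the tall, early peak characteristic of arteries. The curves, noise level, and selection heuristic below are assumptions, not the paper's exact pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
t = np.linspace(0, 60, 60)

def gamma_var(t0, amp, alpha=3.0, beta=1.5):
    # gamma-variate bolus shape, delayed by t0
    s = np.clip(t - t0, 0, None)
    return amp * s**alpha * np.exp(-s / beta)

# 20 arterial-like and 80 tissue-like curves (synthetic, noisy)
arterial = [gamma_var(5, 1.0) + rng.normal(0, 0.5, t.size) for _ in range(20)]
tissue = [gamma_var(12, 0.25) + rng.normal(0, 0.5, t.size) for _ in range(80)]
curves = np.array(arterial + tissue)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(curves)
means = np.array([curves[km.labels_ == k].mean(axis=0) for k in range(2)])

# Heuristic AIF pick: the cluster mean with the higher peak
aif_cluster = int(np.argmax(means.max(axis=1)))
aif = means[aif_cluster]
print(np.argmax(aif))
```

Averaging within the selected cluster is what gives the method its robustness to single-voxel noise, at the cost of the longer execution time the record discusses.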

  16. Uncertainty quantification for nuclear density functional theory and information content of new measurements.

    Science.gov (United States)

    McDonnell, J D; Schunck, N; Higdon, D; Sarich, J; Wild, S M; Nazarewicz, W

    2015-03-27

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. The example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.
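The propagation step can be miniaturised: sample a model-parameter posterior from a chi-square likelihood with a random-walk Metropolis chain, then push the samples through a prediction function to get an uncertainty on a derived quantity. The one-parameter model and "measurements" below are toy stand-ins for the Skyrme-functional fit:

```python
import numpy as np

rng = np.random.default_rng(2)
x_obs = np.array([1.0, 2.0, 3.0])
y_obs = np.array([2.1, 3.9, 6.2])       # synthetic "measurements"
sigma = 0.2                             # assumed measurement error

def chi2(theta):
    return np.sum((y_obs - theta * x_obs) ** 2) / sigma**2

# Random-walk Metropolis on the posterior of the single parameter theta
theta, samples = 1.0, []
for _ in range(20000):
    prop = theta + 0.05 * rng.standard_normal()
    if np.log(rng.random()) < 0.5 * (chi2(theta) - chi2(prop)):
        theta = prop
    samples.append(theta)
samples = np.array(samples[5000:])      # discard burn-in

# Propagated prediction at a new point x = 4, with its uncertainty
pred = samples * 4.0
print(pred.mean(), pred.std())
```

The paper replaces the cheap likelihood here with a Gaussian process emulator of the expensive density-functional calculations; the propagation logic is the same.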

  17. Uncertainty quantification for nuclear density functional theory and information content of new measurements

    Energy Technology Data Exchange (ETDEWEB)

    McDonnell, J. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schunck, N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Higdon, D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sarich, J. [Argonne National Lab. (ANL), Argonne, IL (United States); Wild, S. M. [Argonne National Lab. (ANL), Argonne, IL (United States); Nazarewicz, W. [Michigan State Univ., East Lansing, MI (United States); Oak Ridge National Lab., Oak Ridge, TN (United States); Univ. of Warsaw, Warsaw (Poland)

    2015-03-24

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. As a result, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.

  18. Modelling of Multi Input Transfer Function for Rainfall Forecasting in Batu City

    Directory of Open Access Journals (Sweden)

    Priska Arindya Purnama

    2017-11-01

    Full Text Available The aim of this research is to model and forecast the rainfall in Batu City using a multi-input transfer function model based on air temperature, humidity, wind speed and cloud. A transfer function model is a multivariate time series model which consists of an output series (Yt) expected to be affected by an input series (Xt) and other inputs in a group called a noise series (Nt). The multi-input transfer function model obtained is (b1,s1,r1)(b2,s2,r2)(b3,s3,r3)(b4,s4,r4)(pn,qn) = (0,0,0)(23,0,0)(1,2,0)(0,0,0)([5,8],2), which shows that air temperature on day t affects rainfall on day t, rainfall on day t is influenced by air humidity in the previous 23 days, rainfall on day t is affected by wind speed on the previous day, and rainfall on day t is affected by cloud on day t. The results of rainfall forecasting in Batu City with the multi-input transfer function model can be considered accurate, because it produces relatively small RMSE values: 7.7921 for the training data and 4.2184 for the testing data. The multi-input transfer function model is thus suitable for rainfall in Batu City.

  19. Variance gradients and uncertainty budgets for nonlinear measurement functions with independent inputs

    International Nuclear Information System (INIS)

    Campanelli, Mark; Kacker, Raghu; Kessel, Rüdiger

    2013-01-01

    A novel variance-based measure for global sensitivity analysis, termed a variance gradient (VG), is presented for constructing uncertainty budgets under the Guide to the Expression of Uncertainty in Measurement (GUM) framework for nonlinear measurement functions with independent inputs. The motivation behind VGs is the desire of metrologists to understand which inputs' variance reductions would most effectively reduce the variance of the measurand. VGs are particularly useful when the application of the first supplement to the GUM is indicated because of the inadequacy of measurement function linearization. However, VGs reduce to a commonly understood variance decomposition in the case of a linear(ized) measurement function with independent inputs for which the original GUM readily applies. The usefulness of VGs is illustrated by application to an example from the first supplement to the GUM, as well as to the benchmark Ishigami function. A comparison of VGs to other available sensitivity measures is made. (paper)
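A variance gradient can be estimated by finite differences: shrink one input's variance slightly and measure the change in the output variance. The sketch below does this for X3 of the Ishigami benchmark with common random numbers so the two variance estimates share their Monte Carlo error (the step size and sample count are choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
a, b, n = 7.0, 0.1, 400_000
x = rng.uniform(-np.pi, np.pi, (3, n))   # inputs uniform on [-pi, pi]

def ishigami(x1, x2, x3):
    return np.sin(x1) + a * np.sin(x2) ** 2 + b * x3**4 * np.sin(x1)

def var_y(s3):
    # shrink X3 about its (zero) mean, scaling Var(X3) by s3**2
    return ishigami(x[0], x[1], s3 * x[2]).var()

s = 0.98
var_x3 = np.pi**2 / 3                    # variance of U(-pi, pi)
vg3 = (var_y(1.0) - var_y(s)) / (var_x3 - s**2 * var_x3)
print(vg3)
```

The resulting number answers the metrologist's question directly: reducing Var(X3) by one unit reduces Var(Y) by roughly vg3 units, which is the budgeting information a linearized decomposition cannot give for this nonlinear function.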

  20. EMP damage function of input port of DC solid state relay

    International Nuclear Information System (INIS)

    Sun Beiyun; Chen Xiangyue; Mao Congguang; Zhou Hui

    2009-01-01

    The principle of using a pivotal quantity to estimate a confidence interval for the cumulative distribution function at a specific value, when the distribution is assumed to be normal, is introduced. The damage function of the input port of a DC solid state relay is calculated by this method. This method can be used for vulnerability assessment. (authors)
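A standard pivotal construction for this problem uses the noncentral t distribution of T = sqrt(n)(xbar − x0)/s; inverting its CDF in the noncentrality parameter gives a confidence interval for P(X ≤ x0). The threshold and data below are synthetic, and this is one common construction rather than necessarily the authors' exact procedure:

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

rng = np.random.default_rng(4)
x0 = 2.0                                 # specific value of interest
data = rng.normal(1.0, 1.0, 30)          # synthetic sample
n = data.size
T = np.sqrt(n) * (data.mean() - x0) / data.std(ddof=1)

def delta_for(prob):
    # solve nct.cdf(T; df=n-1, delta) == prob for the noncentrality delta
    return brentq(lambda d: stats.nct.cdf(T, n - 1, d) - prob, -30, 30)

# p = P(X <= x0) relates to delta via p = Phi(-delta / sqrt(n))
lo = stats.norm.cdf(-delta_for(0.025) / np.sqrt(n))
hi = stats.norm.cdf(-delta_for(0.975) / np.sqrt(n))
lo, hi = min(lo, hi), max(lo, hi)
print(lo, hi)
```

For the damage-function application, p is the probability of damage at a given stress level, so the interval bounds the damage probability at the specified input level.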

  1. How the type of input function affects the dynamic response of conducting polymer actuators

    Science.gov (United States)

    Xiang, Xingcan; Alici, Gursel; Mutlu, Rahim; Li, Weihua

    2014-10-01

    There has been a growing interest in smart actuators typified by conducting polymer actuators, especially in their (i) fabrication, modeling and control with minimum external data and (ii) applications in bio-inspired devices, robotics and mechatronics. Their control is a challenging research problem due to the complex and nonlinear properties of these actuators, which cannot be predicted accurately. Based on an input-shaping technique, we propose a new method to improve the conducting polymer actuators’ command-following ability, while minimizing their electric power consumption. We applied four input functions with smooth characteristics to a trilayer conducting polymer actuator to experimentally evaluate its command-following ability under an open-loop control strategy and a simulated feedback control strategy, and, more importantly, to quantify how the type of input function affects the dynamic response of this class of actuators. We have found that the four smooth inputs consume less electrical power than sharp inputs such as a step input with discontinuous higher-order derivatives. We also obtained an improved transient response performance from the smooth inputs, especially under the simulated feedback control strategy, which we have proposed previously [X Xiang, R Mutlu, G Alici, and W Li, 2014 “Control of conducting polymer actuators without physical feedback: simulated feedback control approach with particle swarm optimization”, Smart Materials and Structures, 23]. The idea of using a smooth input command, which results in lower power consumption and better control performance, can be extended to other smart actuators. Consuming less electrical energy or power will have a direct effect on enhancing the operational life of these actuators.
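The contrast between sharp and smooth commands can be made concrete by comparing a step with a minimum-jerk style polynomial; the peak rate of change is used here only as a rough proxy for the current demand discussed in the record, not the authors' power model:

```python
import numpy as np

t = np.linspace(0, 1, 1001)
step = (t >= 0.2).astype(float)                   # sharp command

tau = np.clip((t - 0.2) / 0.6, 0, 1)              # 0.6 s rise window
minjerk = 10 * tau**3 - 15 * tau**4 + 6 * tau**5  # smooth 0 -> 1 command

# Peak first derivative of each command (finite differences)
peak_step = np.max(np.abs(np.gradient(step, t)))
peak_smooth = np.max(np.abs(np.gradient(minjerk, t)))
print(peak_step, peak_smooth)
```

The minimum-jerk profile reaches the same setpoint with continuous first and second derivatives, which is exactly the "smooth characteristics" property the study exploits.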

  2. How the type of input function affects the dynamic response of conducting polymer actuators

    International Nuclear Information System (INIS)

    Xiang, Xingcan; Alici, Gursel; Mutlu, Rahim; Li, Weihua

    2014-01-01

    There has been a growing interest in smart actuators typified by conducting polymer actuators, especially in their (i) fabrication, modeling and control with minimum external data and (ii) applications in bio-inspired devices, robotics and mechatronics. Their control is a challenging research problem due to the complex and nonlinear properties of these actuators, which cannot be predicted accurately. Based on an input-shaping technique, we propose a new method to improve the conducting polymer actuators’ command-following ability, while minimizing their electric power consumption. We applied four input functions with smooth characteristics to a trilayer conducting polymer actuator to experimentally evaluate its command-following ability under an open-loop control strategy and a simulated feedback control strategy, and, more importantly, to quantify how the type of input function affects the dynamic response of this class of actuators. We have found that the four smooth inputs consume less electrical power than sharp inputs such as a step input with discontinuous higher-order derivatives. We also obtained an improved transient response performance from the smooth inputs, especially under the simulated feedback control strategy, which we have proposed previously [X Xiang, R Mutlu, G Alici, and W Li, 2014 “Control of conducting polymer actuators without physical feedback: simulated feedback control approach with particle swarm optimization”, Smart Materials and Structures, 23]. The idea of using a smooth input command, which results in lower power consumption and better control performance, can be extended to other smart actuators. Consuming less electrical energy or power will have a direct effect on enhancing the operational life of these actuators. (paper)

  3. Development of an exchange–correlation functional with uncertainty quantification capabilities for density functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Aldegunde, Manuel, E-mail: M.A.Aldegunde-Rodriguez@warwick.ac.uk; Kermode, James R., E-mail: J.R.Kermode@warwick.ac.uk; Zabaras, Nicholas

    2016-04-15

    This paper presents the development of a new exchange–correlation functional from the point of view of machine learning. Using atomization energies of solids and small molecules, we train a linear model for the exchange enhancement factor using a Bayesian approach which allows for the quantification of uncertainties in the predictions. A relevance vector machine is used to automatically select the most relevant terms of the model. We then test this model on atomization energies and also on bulk properties. The average model provides a mean absolute error of only 0.116 eV for the test points of the G2/97 set but a larger 0.314 eV for the test solids. In terms of bulk properties, the prediction for transition metals and monovalent semiconductors has a very low test error. However, as expected, predictions for types of materials not represented in the training set such as ionic solids show much larger errors.
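The uncertainty-quantifying ingredient is the Bayesian linear model: a posterior over the basis weights yields a predictive variance alongside each prediction. The basis, data, and hyperparameters below are a toy sketch, not the actual exchange-enhancement model or training set:

```python
import numpy as np

rng = np.random.default_rng(5)
s = rng.uniform(0, 2, 40)                       # toy "descriptor" values
Phi = np.vander(s, 4, increasing=True)          # polynomial basis [1, s, s^2, s^3]
y = 1 + 0.5 * s - 0.2 * s**2 + 0.05 * rng.standard_normal(s.size)

alpha, beta = 1.0, 1.0 / 0.05**2                # prior and noise precision
S_inv = alpha * np.eye(4) + beta * Phi.T @ Phi
S = np.linalg.inv(S_inv)                        # posterior weight covariance
m = beta * S @ Phi.T @ y                        # posterior mean weights

phi_new = np.array([1.0, 1.0, 1.0, 1.0])        # basis evaluated at s = 1
pred_mean = phi_new @ m
pred_var = 1.0 / beta + phi_new @ S @ phi_new   # predictive variance
print(pred_mean, np.sqrt(pred_var))
```

The predictive variance grows for inputs far from the training data, which mirrors the paper's observation that errors are largest for material types absent from the training set. A relevance vector machine adds per-weight precisions on top of this to prune irrelevant basis terms.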

  4. Semi-parametric arterial input functions for quantitative dynamic contrast enhanced magnetic resonance imaging in mice

    Czech Academy of Sciences Publication Activity Database

    Taxt, T.; Reed, R. K.; Pavlin, T.; Rygh, C. B.; Andersen, E.; Jiřík, Radovan

    2018-01-01

    Vol. 46, Feb (2018), pp. 10-20 ISSN 0730-725X R&D Projects: GA ČR GA17-13830S; GA MŠk(CZ) LO1212 Institutional support: RVO:68081731 Keywords: DCE-MRI * blind deconvolution * arterial input function Subject RIV: FA - Cardiovascular Diseases incl. Cardiothoracic Surgery Impact factor: 2.225, year: 2016

  5. Sequential designs for sensitivity analysis of functional inputs in computer experiments

    International Nuclear Information System (INIS)

    Fruth, J.; Roustant, O.; Kuhnt, S.

    2015-01-01

    Computer experiments are nowadays commonly used to analyze industrial processes aimed at achieving a desired outcome. Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on the response variable. In this work we focus on sensitivity analysis of a scalar-valued output of a time-consuming computer code depending on scalar and functional input parameters. We investigate a sequential methodology, based on piecewise constant functions and sequential bifurcation, which is both economical and fully interpretable. The new approach is applied to a sheet metal forming problem in three sequential steps, resulting in new insights into the behavior of the forming process over time. - Highlights: • A sensitivity analysis method for functional and scalar inputs is presented. • We focus on the discovery of the most influential parts of the functional domain. • We investigate an economical sequential methodology based on piecewise constant functions. • Normalized sensitivity indices are introduced and investigated theoretically. • Successful application to sheet metal forming on two functional inputs
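The bifurcation idea can be sketched as a binary search over the functional input's domain: perturb the input on half-intervals (piecewise constant perturbations) and repeatedly split the half that moves the output most. The scalar-output "simulation" g below is a cheap stand-in, not the sheet metal forming code:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 256)
dt = t[1] - t[0]

def g(u):
    # toy simulation output: a weighted integral where only early
    # times matter, so the influential region is near t = 0
    return np.sum(np.exp(-5.0 * t) * u) * dt

def effect(lo, hi):
    # output change from a piecewise-constant bump on [lo, hi)
    u = np.where((t >= lo) & (t < hi), 1.0, 0.0)
    return abs(g(u) - g(np.zeros_like(t)))

lo, hi = 0.0, 1.0
for _ in range(4):                     # four bifurcation steps
    mid = 0.5 * (lo + hi)
    if effect(lo, mid) >= effect(mid, hi):
        hi = mid
    else:
        lo = mid
print(lo, hi)                          # most influential sub-interval
```

Each step costs only two code runs, which is what makes the sequential design economical for expensive simulators.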

  6. Atomic Resolution Imaging and Quantification of Chemical Functionality of Surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Schwarz, Udo D. [Yale Univ., New Haven, CT (United States). Dept. of Mechanical Engineering and Materials Science; Altman, Eric I. [Yale Univ., New Haven, CT (United States). Dept. of Chemical and Environmental Engineering

    2014-12-10

    The work carried out from 2006-2014 under DoE support was targeted at developing new approaches to the atomic-scale characterization of surfaces that include species-selective imaging and an ability to quantify chemical surface interactions with site-specific accuracy. The newly established methods were subsequently applied to gain insight into the local chemical interactions that govern the catalytic properties of model catalysts of interest to DoE. The foundation of our work was the development of three-dimensional atomic force microscopy (3D-AFM), a new measurement mode that allows the mapping of the complete surface force and energy fields with picometer resolution in space (x, y, and z) and piconewton/millielectronvolt resolution in force/energy. We then extended this experimental platform by adding the simultaneous recording of tunneling current (3D-AFM/STM) using chemically well-defined tips. Through comparison with simulations, we were able to achieve precise quantification and assignment of local chemical interactions to exact positions within the lattice. During the course of the project, the novel techniques were applied to surface-oxidized copper, titanium dioxide, and silicon oxide. On these materials, defect-induced changes to the chemical surface reactivity and electronic charge density were characterized with site-specific accuracy.

  7. Simultaneous determination of arterial input function of the internal carotid and middle cerebral arteries for dynamic susceptibility contrast MRI

    International Nuclear Information System (INIS)

    Scholdei, R.; Wenz, F.; Fuss, M.; Essig, M.; Knopp, M.V.

    1999-01-01

    Purpose: The determination of the arterial input function (AIF) is necessary for absolute quantification of the regional cerebral blood volume and blood flow using dynamic susceptibility contrast MRI. The suitability of different vessels (ICA-internal carotid artery, MCA-middle cerebral artery) for AIF determination was compared in this study. Methods: A standard 1.5 T MR system and a simultaneous dual FLASH sequence (TR/TE1/TE2/α=32/15/25/10) were used to follow a bolus of contrast agent. Slice I was chosen to cut the ICA perpendicularly. Slice II included the MCA. Seventeen data sets from ten subjects were evaluated. Results: The number of AIF-relevant pixels, the area under the AIF and the maximum concentration were all lower when the AIF was determined from the MCA compared to the ICA. Additionally, the mean transit time (MTT) and the time to maximum concentration (TTM) were longer in the MCA, complicating the computerized identification of AIF-relevant pixels. Data from one subject, who was examined five times, demonstrated that the intraindividual variance of the measured parameters was markedly lower than the interpersonal variance. Conclusions: It appears to be advantageous to measure the AIF in the ICA rather than the MCA. (orig.) [de]

  8. Synthesis of nanodiamond derivatives carrying amino functions and quantification by a modified Kaiser test

    Directory of Open Access Journals (Sweden)

    Gerald Jarre

    2014-11-01

    Nanodiamonds functionalized with different organic moieties carrying terminal amino groups have been synthesized. These include conjugates generated by Diels–Alder reactions of ortho-quinodimethanes formed in situ from pyrazine and 5,6-dihydrocyclobuta[d]pyrimidine derivatives. For the quantification of primary amino groups a modified photometric assay based on the Kaiser test has been developed and validated for different types of aminated nanodiamond. The results correspond well to values obtained by thermogravimetry. The method represents an alternative wet-chemical quantification method in cases where other techniques like elemental analysis fail due to unfavourable combustion behaviour of the analyte or other impediments.
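
The photometric arithmetic behind a Kaiser-type amine quantification can be sketched as below. The extinction coefficient is the commonly cited literature value for Ruhemann's purple and the sample numbers are invented; the paper's modified, validated assay would use its own calibration instead.

```python
def amine_loading_umol_per_g(absorbance, volume_ml, sample_mg,
                             eps=15000.0, path_cm=1.0):
    """Primary-amine loading from a Kaiser-type photometric readout.

    Beer-Lambert law: c [mol/L] = A / (eps * l). The molar amount in the
    assay volume is then referred to the nanodiamond mass. eps is the
    commonly cited extinction coefficient of Ruhemann's purple
    (~15000 L mol^-1 cm^-1), an assumed value for illustration only.
    """
    conc_mol_per_l = absorbance / (eps * path_cm)
    amount_umol = conc_mol_per_l * (volume_ml / 1000.0) * 1e6
    return amount_umol / (sample_mg / 1000.0)   # micromol per gram

# Example: absorbance 0.30 in a 2 mL assay volume from 5 mg of material
loading = amine_loading_umol_per_g(0.30, 2.0, 5.0)
```

With these invented inputs the loading works out to 8 µmol of primary amine per gram of nanodiamond, the kind of figure that would be cross-checked against thermogravimetry.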

  9. Quantification of landscape multifunctionality based on farm functionality indices

    DEFF Research Database (Denmark)

    Andersen, Peter Stubkjær; Vejre, Henrik; Dalgaard, Tommy

    2011-01-01

    The authors present a bottom-up method in which landscape multifunctionality is quantified using functional indices developed from farm questionnaire data. The interview survey comprised 382 farms in a rural area of Denmark. The functional classes included in the method are: (1) production, (2) residence, (3) wildlife habitats, and (4) recreation. At farm level, each of these functions is defined by data on a number of farmers' activities as well as farm characteristics, which can be harvested from a selection of the interview questions. The selected interview questions are attached as indicators to the relevant function. A score spectrum is assigned to each indicator to enable a representation of its relative contribution to the function on each farm, depending on the question responses from the interviewees. The values for each indicator are weighted in relation to each of the others, and all the values are summed...
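
A minimal sketch of the indicator-to-index aggregation described above might look as follows. The indicator names, scores, equal weights, and threshold are all hypothetical; the paper's actual score spectra and weightings come from the questionnaire design.

```python
# Hypothetical indicator scores (0-1) per function for one farm
farm = {
    "production": {"livestock_density": 0.8, "crop_area": 0.6},
    "residence":  {"household_size": 0.5, "building_use": 0.7},
    "habitat":    {"hedgerow_length": 0.4, "set_aside_area": 0.3},
    "recreation": {"public_access": 0.2, "facilities": 0.1},
}

def function_index(indicators, weights=None):
    """Weighted sum of indicator scores, normalised by the total weight."""
    if weights is None:
        weights = {k: 1.0 for k in indicators}   # equal weights by default
    total_w = sum(weights[k] for k in indicators)
    return sum(weights[k] * v for k, v in indicators.items()) / total_w

indices = {f: function_index(ind) for f, ind in farm.items()}
# One simple multifunctionality measure: count functions above a threshold
multifunctionality = sum(1 for v in indices.values() if v >= 0.3)
```

For this invented farm, three of the four functions clear the 0.3 threshold, so the farm would count as trifunctional under this toy aggregation rule.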

  10. A Constrained Genetic Algorithm with Adaptively Defined Fitness Function in MRS Quantification

    Science.gov (United States)

    Papakostas, G. A.; Karras, D. A.; Mertzios, B. G.; Graveron-Demilly, D.; van Ormondt, D.

    MRS signal quantification is a rather involved procedure that has attracted the interest of the medical engineering community regarding the development of computationally efficient methodologies. Significant contributions based on Computational Intelligence tools, such as Neural Networks (NNs), demonstrated good performance, but not without drawbacks already discussed by the authors. On the other hand, preliminary application of Genetic Algorithms (GAs) to the peak detection problem encountered in MRS quantification using the Voigt line shape model has already been reported in the literature by the authors. This paper investigates a novel constrained genetic algorithm involving a generic, adaptively defined fitness function which extends the simple genetic algorithm methodology to the case of noisy signals. The applicability of this new algorithm is scrutinized through experimentation on artificial MRS signals interleaved with noise, regarding its signal-fitting capabilities. Although extensive experiments with real-world MRS signals are still necessary, the performance shown herein illustrates the method's potential to be established as a generic MRS metabolite quantification procedure.
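
A toy version of such a peak-fitting GA can be sketched as below. It fits a single Gaussian (rather than the Voigt line shape) to a noisy synthetic signal, with a fixed fitness of 1/(1 + RMSE) instead of the paper's adaptively defined fitness; the population size, operators, and bounds are arbitrary choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectrum": a single Gaussian peak plus noise
t = np.linspace(0.0, 10.0, 200)
truth = 1.0 * np.exp(-0.5 * ((t - 5.0) / 1.0) ** 2)
signal = truth + rng.normal(0.0, 0.05, t.size)

def model(p):
    amp, mu, sig = p
    return amp * np.exp(-0.5 * ((t - mu) / sig) ** 2)

def rmse(p):
    return float(np.sqrt(np.mean((model(p) - signal) ** 2)))

lo = np.array([0.0, 0.0, 0.2])        # bounds: amplitude, centre, width
hi = np.array([2.0, 10.0, 3.0])
pop = rng.uniform(lo, hi, (80, 3))

for _ in range(150):
    err = np.array([rmse(p) for p in pop])
    fitness = 1.0 / (1.0 + err)       # fixed fitness; the paper adapts this
    order = np.argsort(err)
    elite = pop[order[:10]]           # elitism keeps the 10 best

    def select():                     # binary tournament selection
        i, j = rng.integers(80, size=2)
        return pop[i] if err[i] < err[j] else pop[j]

    children = []
    while len(children) < 70:
        w = rng.uniform(0.0, 1.0, 3)                    # blend crossover
        child = w * select() + (1.0 - w) * select()
        child += rng.normal(0.0, 0.05, 3) * (hi - lo)   # Gaussian mutation
        children.append(np.clip(child, lo, hi))
    pop = np.vstack([elite, children])

best = min(pop, key=rmse)
```

Because the elite survives unchanged, the best residual is non-increasing across generations; a constrained, adaptive fitness as in the paper would additionally penalise parameter sets violating prior knowledge about the peaks.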

  11. Estimation of Input Function from Dynamic PET Brain Data Using Bayesian Blind Source Separation

    Czech Academy of Sciences Publication Activity Database

    Tichý, Ondřej; Šmídl, Václav

    2015-01-01

    Roč. 12, č. 4 (2015), s. 1273-1287 ISSN 1820-0214 R&D Projects: GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : blind source separation * Variational Bayes method * dynamic PET * input function * deconvolution Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.623, year: 2015 http://library.utia.cas.cz/separaty/2015/AS/tichy-0450509.pdf

  12. IMPROVED DERIVATION OF INPUT FUNCTION IN DYNAMIC MOUSE [18F]FDG PET USING BLADDER RADIOACTIVITY KINETICS

    Science.gov (United States)

    Wong, Koon-Pong; Zhang, Xiaoli; Huang, Sung-Cheng

    2013-01-01

    Purpose Accurate determination of the plasma input function (IF) is essential for absolute quantification of physiological parameters in positron emission tomography (PET). However, it requires an invasive and tedious procedure of arterial blood sampling that is challenging in mice because of the limited blood volume. In this study, a hybrid modeling approach is proposed to estimate the plasma IF of 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) in mice using accumulated radioactivity in the urinary bladder together with a single late-time blood sample measurement. Methods Dynamic PET scans were performed on nine isoflurane-anesthetized male C57BL/6 mice after a bolus injection of [18F]FDG at the lateral caudal vein. During a 60- or 90-min scan, serial blood samples were taken from the femoral artery. Image data were reconstructed using filtered backprojection with CT-based attenuation correction. Total accumulated radioactivity in the urinary bladder was fitted to a renal compartmental model with the last blood sample and a 1-exponential function that described the [18F]FDG clearance in blood. Multiple late-time blood sample estimates were calculated by the blood [18F]FDG clearance equation. A sum of four exponentials was assumed for the plasma IF that served as a forcing function to all tissues. The estimated plasma IF was obtained by simultaneously fitting the [18F]FDG model to the time-activity curves (TACs) of liver and muscle and the forcing function to early (0–1 min) left-ventricle data (corrected for delay, dispersion, partial-volume effects and erythrocyte uptake) and the late-time blood estimates. Using only the blood sample acquired at the end of the study to estimate the IF and the use of liver TAC as an alternative IF were also investigated. Results The area under the plasma TACs calculated for all studies using the hybrid approach was not significantly different from that using all blood samples. [18F]FDG uptake constants in brain, myocardium, skeletal
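
Two ingredients of the hybrid approach can be sketched in isolation: extrapolating late-time blood estimates from a single sample via a mono-exponential clearance equation, and evaluating a sum-of-four-exponentials input-function model. All constants and units below are invented for illustration and are not the paper's fitted values.

```python
import numpy as np

def late_blood_estimates(t_s, c_s, k, times):
    """Extrapolate late-time blood activity from a single sample,
    assuming mono-exponential clearance c(t) = c_s * exp(-k * (t - t_s))."""
    return c_s * np.exp(-k * (np.asarray(times, dtype=float) - t_s))

def plasma_input(t, amps, rates):
    """Sum-of-four-exponentials plasma input-function model."""
    t = np.asarray(t, dtype=float)[:, None]
    return np.sum(np.asarray(amps) * np.exp(-np.asarray(rates) * t), axis=1)

# Invented numbers: one blood sample of 2.0 (a.u.) at 60 min, with an
# assumed clearance constant of 0.01 min^-1
est = late_blood_estimates(60.0, 2.0, 0.01, [60.0, 75.0, 90.0])
cp0 = plasma_input([0.0], [1.0, 2.0, 3.0, 4.0], [0.1, 0.2, 0.3, 0.4])
```

In the paper, curves of this form are constrained jointly by the bladder model, the early left-ventricle data, and the extrapolated late-time estimates, rather than evaluated in isolation as here.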

  13. A Microneedle Functionalized with Polyethyleneimine and Nanotubes for Highly Sensitive, Label-Free Quantification of DNA

    OpenAIRE

    Saadat-Moghaddam, Darius; Kim, Jong-Hoon

    2017-01-01

    The accurate measure of DNA concentration is necessary for many DNA-based biological applications. However, the current methods are limited in terms of sensitivity, reproducibility, human error, and contamination. Here, we present a microneedle functionalized with polyethyleneimine (PEI) and single-walled carbon nanotubes (SWCNTs) for the highly sensitive quantification of DNA. The microneedle was fabricated using ultraviolet (UV) lithography and anisotropic etching, and then functionalized w...

  14. Functional Groups Quantification of Chondritic Organics by XANES Spectroscopy

    Science.gov (United States)

    Le Guillou, C.; Bernard, S.

    2017-07-01

    We have developed a new method to quantify the functional group concentration of organics using STXM-XANES. Applied to IOM and in situ FIB sections measurement, it reveals a lower aromaticity than expected from previous NMR results (35% vs. 60%).

  15. Minimally invasive input function for 2-{sup 18}F-fluoro-A-85380 brain PET studies

    Energy Technology Data Exchange (ETDEWEB)

    Zanotti-Fregonara, Paolo [National Institute of Mental Health, NIH, Molecular Imaging Branch, Bethesda, MD (United States); Maroy, Renaud; Peyronneau, Marie-Anne; Trebossen, Regine [CEA, DSV, I2BM, Service Hospitalier Frederic Joliot, Orsay (France); Bottlaender, Michel [CEA, DSV, I2BM, NeuroSpin, Gif-sur-Yvette (France)

    2012-04-15

    Quantitative neuroreceptor positron emission tomography (PET) studies often require arterial cannulation to measure input function. While population-based input function (PBIF) would be a less invasive alternative, it has only rarely been used in conjunction with neuroreceptor PET tracers. The aims of this study were (1) to validate the use of PBIF for 2-{sup 18}F-fluoro-A-85380, a tracer for nicotinic receptors; (2) to compare the accuracy of measures obtained via PBIF to those obtained via blood-scaled image-derived input function (IDIF) from carotid arteries; and (3) to explore the possibility of using venous instead of arterial samples for both PBIF and IDIF. Ten healthy volunteers underwent a dynamic 2-{sup 18}F-fluoro-A-85380 brain PET scan with arterial and, in seven subjects, concurrent venous serial blood sampling. PBIF was obtained by averaging the normalized metabolite-corrected arterial input function and subsequently scaling each curve with individual blood samples. IDIF was obtained from the carotid arteries using a blood-scaling method. Estimated Logan distribution volume (V{sub T}) values were compared to the reference values obtained from arterial cannulation. For all subjects, PBIF curves scaled with arterial samples were similar in shape and magnitude to the reference arterial input function. The Logan V{sub T} ratio was 1.00 {+-} 0.05; all subjects had an estimation error <10%. IDIF gave slightly less accurate results (V{sub T} ratio 1.03 {+-} 0.07; eight of ten subjects had an error <10%). PBIF scaled with venous samples yielded inaccurate results (V{sub T} ratio 1.13 {+-} 0.13; only three of seven subjects had an error <10%). Due to arteriovenous differences at early time points, IDIF could not be calculated using venous samples. PBIF scaled with arterial samples accurately estimates Logan V{sub T} for 2-{sup 18}F-fluoro-A-85380. Results obtained with PBIF were slightly better than those obtained with IDIF. 
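
The Logan V_T estimation used above to compare input functions can be sketched as follows. The one-tissue toy curves and rate constants are invented, chosen so that the true V_T = K1/k2 = 2.0 is known in advance; the paper applies the same graphical analysis to measured TACs and its various input functions.

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral, same length as y (starts at 0)."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

def logan_vt(t, ct, cp, t_star):
    """Logan graphical analysis: after t_star, plotting int(ct)/ct
    against int(cp)/ct gives a line whose slope estimates V_T."""
    ict, icp = cumtrapz(ct, t), cumtrapz(cp, t)
    m = t >= t_star
    slope, _intercept = np.polyfit(icp[m] / ct[m], ict[m] / ct[m], 1)
    return slope

# One-tissue toy example: K1 = 0.1, k2 = 0.05  =>  V_T = K1/k2 = 2.0
t = np.linspace(0.0, 120.0, 1201)                    # minutes
cp = np.exp(-0.1 * t)                                # toy plasma input
ct = 2.0 * (np.exp(-0.05 * t) - np.exp(-0.1 * t))    # analytic tissue TAC
vt = logan_vt(t, ct, cp, t_star=30.0)
```

For a one-tissue model the Logan relation is exact, so the recovered slope matches V_T up to trapezoidal integration error; for real multi-compartment data the linearity only holds after the equilibration time t*.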

  16. Fully automated calculation of image-derived input function in simultaneous PET/MRI in a sheep model

    International Nuclear Information System (INIS)

    Jochimsen, Thies H.; Zeisig, Vilia; Schulz, Jessica; Werner, Peter; Patt, Marianne; Patt, Jörg; Dreyer, Antje Y.; Boltze, Johannes; Barthel, Henryk; Sabri, Osama; Sattler, Bernhard

    2016-01-01

    Obtaining the arterial input function (AIF) from image data in dynamic positron emission tomography (PET) examinations is a non-invasive alternative to arterial blood sampling. In simultaneous PET/magnetic resonance imaging (PET/MRI), high-resolution MRI angiographies can be used to define major arteries for correction of partial-volume effects (PVE) and point spread function (PSF) response in the PET data. The present study describes a fully automated method to obtain the image-derived input function (IDIF) in PET/MRI. Results are compared to those obtained by arterial blood sampling. To segment the trunk of the major arteries in the neck, a high-resolution time-of-flight MRI angiography was postprocessed by a vessel-enhancement filter based on the inertia tensor. Together with the measured PSF of the PET subsystem, the arterial mask was used for geometrical deconvolution, yielding the time-resolved activity concentration averaged over a major artery. The method was compared to manual arterial blood sampling at the hind leg of 21 sheep (animal stroke model) during measurement of blood flow with O15-water. Absolute quantification of activity concentration was compared after bolus passage during steady state, i.e., between 2.5- and 5-min post injection. Cerebral blood flow (CBF) values from blood sampling and IDIF were also compared. The cross-calibration factor obtained by comparing activity concentrations in blood samples and IDIF during steady state is 0.98 ± 0.10. In all examinations, the IDIF provided a much earlier and sharper bolus peak than in the time course of activity concentration obtained by arterial blood sampling. CBF using the IDIF was 22 % higher than CBF obtained by using the AIF yielded by blood sampling. The small deviation between arterial blood sampling and IDIF during steady state indicates that correction of PVE and PSF is possible with the method presented. The differences in bolus dynamics and, hence, CBF values can be explained by the
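
The geometrical partial-volume idea behind such an IDIF can be illustrated in one dimension: blur a segmented vessel mask with the scanner PSF to obtain a recovery coefficient, then divide the measured in-mask average by it. The vessel width, PSF FWHM, and activity value below are invented, and the study's full 3-D deconvolution with an MRI-derived mask is not reproduced.

```python
import numpy as np

def gaussian_psf(x, fwhm):
    """Normalised 1-D Gaussian point spread function."""
    sigma = fwhm / 2.3548          # FWHM -> standard deviation
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / g.sum()

# 1-D toy: a 6 mm wide "artery" of true activity 2.5 (a.u.),
# imaged with a 5 mm FWHM scanner PSF
x = np.arange(-50.0, 50.25, 0.25)              # mm grid
mask = (np.abs(x) <= 3.0).astype(float)        # segmented vessel mask
psf = gaussian_psf(x, fwhm=5.0)
measured = np.convolve(2.5 * mask, psf, mode="same")   # blurred "image"

# Recovery coefficient: fraction of a unit-activity vessel that the
# PSF leaves inside the mask
rc = np.convolve(mask, psf, mode="same")[mask > 0].mean()
idif_raw = measured[mask > 0].mean()           # PVE-biased estimate
idif_corr = idif_raw / rc                      # geometrical correction
```

With zero background the correction is exact by linearity; in practice spill-in from surrounding tissue makes the real problem a deconvolution rather than a single scalar division.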

  17. Fully automated calculation of image-derived input function in simultaneous PET/MRI in a sheep model

    Energy Technology Data Exchange (ETDEWEB)

    Jochimsen, Thies H.; Zeisig, Vilia [Department of Nuclear Medicine, Leipzig University Hospital, Liebigstr. 18, Leipzig (Germany); Schulz, Jessica [Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, Leipzig, D-04103 (Germany); Werner, Peter; Patt, Marianne; Patt, Jörg [Department of Nuclear Medicine, Leipzig University Hospital, Liebigstr. 18, Leipzig (Germany); Dreyer, Antje Y. [Fraunhofer Institute of Cell Therapy and Immunology, Perlickstr. 1, Leipzig, D-04103 (Germany); Translational Centre for Regenerative Medicine, University Leipzig, Philipp-Rosenthal-Str. 55, Leipzig, D-04103 (Germany); Boltze, Johannes [Fraunhofer Institute of Cell Therapy and Immunology, Perlickstr. 1, Leipzig, D-04103 (Germany); Translational Centre for Regenerative Medicine, University Leipzig, Philipp-Rosenthal-Str. 55, Leipzig, D-04103 (Germany); Fraunhofer Research Institution of Marine Biotechnology and Institute for Medical and Marine Biotechnology, University of Lübeck, Lübeck (Germany); Barthel, Henryk; Sabri, Osama; Sattler, Bernhard [Department of Nuclear Medicine, Leipzig University Hospital, Liebigstr. 18, Leipzig (Germany)

    2016-02-13

    Obtaining the arterial input function (AIF) from image data in dynamic positron emission tomography (PET) examinations is a non-invasive alternative to arterial blood sampling. In simultaneous PET/magnetic resonance imaging (PET/MRI), high-resolution MRI angiographies can be used to define major arteries for correction of partial-volume effects (PVE) and point spread function (PSF) response in the PET data. The present study describes a fully automated method to obtain the image-derived input function (IDIF) in PET/MRI. Results are compared to those obtained by arterial blood sampling. To segment the trunk of the major arteries in the neck, a high-resolution time-of-flight MRI angiography was postprocessed by a vessel-enhancement filter based on the inertia tensor. Together with the measured PSF of the PET subsystem, the arterial mask was used for geometrical deconvolution, yielding the time-resolved activity concentration averaged over a major artery. The method was compared to manual arterial blood sampling at the hind leg of 21 sheep (animal stroke model) during measurement of blood flow with O15-water. Absolute quantification of activity concentration was compared after bolus passage during steady state, i.e., between 2.5- and 5-min post injection. Cerebral blood flow (CBF) values from blood sampling and IDIF were also compared. The cross-calibration factor obtained by comparing activity concentrations in blood samples and IDIF during steady state is 0.98 ± 0.10. In all examinations, the IDIF provided a much earlier and sharper bolus peak than in the time course of activity concentration obtained by arterial blood sampling. CBF using the IDIF was 22 % higher than CBF obtained by using the AIF yielded by blood sampling. The small deviation between arterial blood sampling and IDIF during steady state indicates that correction of PVE and PSF is possible with the method presented. The differences in bolus dynamics and, hence, CBF values can be explained by the

  18. Distance-Ranked Fault Identification of Reconfigurable Hardware Bitstreams via Functional Input

    Directory of Open Access Journals (Sweden)

    Naveed Imran

    2014-01-01

    Distance-Ranked Fault Identification (DRFI) is a dynamic reconfiguration technique which employs runtime inputs to conduct online functional testing of fielded FPGA logic and interconnect resources without test vectors. At design time, a diverse set of functionally identical bitstream configurations is created which utilize alternate hardware resources in the FPGA fabric. An ordering is imposed on the configuration pool, updated by the PageRank indexing precedence. Configurations which utilize permanently damaged resources, and hence manifest discrepant outputs, receive a lower rank and are thus less preferred for instantiation on the FPGA. Results indicate accurate identification of fault-free configurations in a pool of pregenerated bitstreams with a low number of reconfigurations and input evaluations. For MCNC benchmark circuits, the observed reduction in input evaluations is up to 75% when comparing the DRFI technique to unguided evaluation. The DRFI diagnosis method is seen to isolate all 14 healthy configurations from a pool of 100 pregenerated configurations, thereby offering 100% isolation accuracy provided the fault-free configurations exist in the design pool. When a complete recovery is not feasible, graceful degradation may be realized, as demonstrated by the PSNR improvement of images processed in a video encoder case study.
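
The core ranking idea, that configurations whose outputs disagree with the pool sink toward the bottom, can be sketched with a toy majority-vote reference. DRFI's actual PageRank-style index and bitstream machinery are not reproduced here, and all configuration names and outputs are invented.

```python
from collections import Counter

# Hypothetical outputs of five pregenerated configurations on four
# runtime inputs; cfg_c and cfg_d each disagree with the majority once.
outputs = {
    "cfg_a": (3, 7, 1, 9),
    "cfg_b": (3, 7, 1, 9),
    "cfg_c": (3, 0, 1, 9),
    "cfg_d": (3, 7, 1, 2),
    "cfg_e": (3, 7, 1, 9),
}

# The majority output per runtime input acts as the reference,
# so no precomputed test vectors are needed
majority = [Counter(col).most_common(1)[0][0]
            for col in zip(*outputs.values())]

def agreement(name):
    return sum(o == m for o, m in zip(outputs[name], majority))

# Configurations with discrepant outputs receive a lower rank
ranking = sorted(outputs, key=agreement, reverse=True)
```

This majority-agreement score is a crude stand-in for DRFI's indexing precedence, but it shows how runtime inputs alone can separate suspect configurations from healthy ones when faulty configurations are a minority of the pool.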

  19. Early uneven ear input induces long-lasting differences in left-right motor function.

    Science.gov (United States)

    Antoine, Michelle W; Zhu, Xiaoxia; Dieterich, Marianne; Brandt, Thomas; Vijayakumar, Sarath; McKeehan, Nicholas; Arezzo, Joseph C; Zukin, R Suzanne; Borkholder, David A; Jones, Sherri M; Frisina, Robert D; Hébert, Jean M

    2018-03-01

    How asymmetries in motor behavior become established normally or atypically in mammals remains unclear. An established model for motor asymmetry that is conserved across mammals can be obtained by experimentally inducing asymmetric striatal dopamine activity. However, the factors that can cause motor asymmetries in the absence of experimental manipulations to the brain remain unknown. Here, we show that mice with inner ear dysfunction display a robust left or right rotational preference, and this motor preference reflects an atypical asymmetry in cortico-striatal neurotransmission. By unilaterally targeting striatal activity with an antagonist of extracellular signal-regulated kinase (ERK), a downstream integrator of striatal neurotransmitter signaling, we can reverse or exaggerate rotational preference in these mice. By surgically biasing vestibular failure to one ear, we can dictate the direction of motor preference, illustrating the influence of uneven vestibular failure in establishing the outward asymmetries in motor preference. The inner ear-induced striatal asymmetries identified here intersect with non-ear-induced asymmetries previously linked to lateralized motor behavior across species and suggest that aspects of left-right brain function in mammals can be ontogenetically influenced by inner ear input. Consistent with inner ear input contributing to motor asymmetry, we also show that, in humans with normal ear function, the motor-dominant hemisphere, measured as handedness, is ipsilateral to the ear with weaker vestibular input.

  20. Entrainment and phase-shifting by centrifugation abolished in mice lacking functional vestibular input

    Science.gov (United States)

    Fuller, Charles; Ringgold, Kristyn

    The circadian pacemaker can be phase shifted and entrained by appropriately timed locomotor activity; however, the mechanism(s) involved remain poorly understood. Recent work in our lab has suggested the involvement of the vestibular otolith organs in activity-induced changes within the circadian timing system (CTS). For example, we have shown that changes in circadian period and phase in response to locomotion (wheel running) require functional macular gravity receptors. We believe the neurovestibular system is responsible for the transduction of gravitoinertial input associated with the types of locomotor activity that are known to affect the pacemaker. This study investigated the hypothesis that daily, timed gravitoinertial stimuli, as applied by centrifugation, would induce entrainment of circadian rhythms in only those animals with functional afferent vestibular input. To test this hypothesis, chemically labyrinthectomized (Labx) mice, mice lacking macular vestibular input (head tilt or hets) and wildtype (WT) littermates were implanted i.p. with biotelemetry and individually housed in a 4-meter diameter centrifuge in constant darkness (DD). After 2 weeks in DD, the mice were exposed daily to 2G via centrifugation from 1000-1200 for 9 weeks. Only WT mice showed entrainment to the daily 2G pulse. The 2G pulse was then re-set to occur at 1200-1400 for 4 weeks. Only WT mice demonstrated a phase shift in response to the re-setting of the 2G pulse and subsequent re-entrainment to the new centrifugation schedule. These results provide further evidence that gravitoinertial stimuli require a functional vestibular system to both entrain and phase shift the CTS. Entrainment among only WT mice supports the role of macular gravity-receptive cells in modulation of the CTS, while also providing a functional mechanism by which gravitoinertial stimuli, including locomotor activity, may affect the pacemaker.

  1. Investigation of dynamic SPECT measurements of the arterial input function in human subjects using simulation, phantom and human studies

    Science.gov (United States)

    Winant, Celeste D.; Aparici, Carina Mari; Zelnik, Yuval R.; Reutter, Bryan W.; Sitek, Arkadiusz; Bacharach, Stephen L.; Gullberg, Grant T.

    2012-01-01

    Computer simulations, a phantom study and a human study were performed to determine whether a slowly rotating single-photon emission computed tomography (SPECT) system could provide accurate arterial input functions for quantification of myocardial perfusion imaging using kinetic models. The errors induced by data inconsistency associated with imaging with slow camera rotation during tracer injection were evaluated with an approach called SPECT/P (dynamic SPECT from positron emission tomography (PET)) and SPECT/D (dynamic SPECT from database of SPECT phantom projections). SPECT/P simulated SPECT-like dynamic projections using reprojections of reconstructed dynamic 94Tc-methoxyisobutylisonitrile (94Tc-MIBI) PET images acquired in three human subjects (1 min infusion). This approach was used to evaluate the accuracy of estimating myocardial wash-in rate parameters K1 for rotation speeds providing 180° of projection data every 27 or 54 s. Blood input and myocardium tissue time-activity curves (TACs) were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. For the second method (SPECT/D), an anthropomorphic cardiac torso phantom was used to create real SPECT dynamic projection data of a tracer distribution derived from 94Tc-MIBI PET scans in the blood pool, myocardium, liver and background. This method introduced attenuation, collimation and scatter into the modeling of dynamic SPECT projections. Both approaches were used to evaluate the accuracy of estimating myocardial wash-in parameters for rotation speeds providing 180° of projection data every 27 and 54 s. Dynamic cardiac SPECT was also performed in a human subject at rest using a hybrid SPECT/CT scanner. Dynamic measurements of 99mTc-tetrofosmin in the myocardium were obtained using an infusion time of 2 min. Blood input, myocardium tissue and liver TACs were estimated using the same spatiotemporal splines. The spatiotemporal maximum

  2. Investigation of dynamic SPECT measurements of the arterial input function in human subjects using simulation, phantom and human studies

    International Nuclear Information System (INIS)

    Winant, Celeste D; Aparici, Carina Mari; Bacharach, Stephen L; Gullberg, Grant T; Zelnik, Yuval R; Reutter, Bryan W; Sitek, Arkadiusz

    2012-01-01

    Computer simulations, a phantom study and a human study were performed to determine whether a slowly rotating single-photon emission computed tomography (SPECT) system could provide accurate arterial input functions for quantification of myocardial perfusion imaging using kinetic models. The errors induced by data inconsistency associated with imaging with slow camera rotation during tracer injection were evaluated with an approach called SPECT/P (dynamic SPECT from positron emission tomography (PET)) and SPECT/D (dynamic SPECT from database of SPECT phantom projections). SPECT/P simulated SPECT-like dynamic projections using reprojections of reconstructed dynamic 94Tc-methoxyisobutylisonitrile (94Tc-MIBI) PET images acquired in three human subjects (1 min infusion). This approach was used to evaluate the accuracy of estimating myocardial wash-in rate parameters K1 for rotation speeds providing 180° of projection data every 27 or 54 s. Blood input and myocardium tissue time-activity curves (TACs) were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. For the second method (SPECT/D), an anthropomorphic cardiac torso phantom was used to create real SPECT dynamic projection data of a tracer distribution derived from 94Tc-MIBI PET scans in the blood pool, myocardium, liver and background. This method introduced attenuation, collimation and scatter into the modeling of dynamic SPECT projections. Both approaches were used to evaluate the accuracy of estimating myocardial wash-in parameters for rotation speeds providing 180° of projection data every 27 and 54 s. Dynamic cardiac SPECT was also performed in a human subject at rest using a hybrid SPECT/CT scanner. Dynamic measurements of 99mTc-tetrofosmin in the myocardium were obtained using an infusion time of 2 min. Blood input, myocardium tissue and liver TACs were estimated using the same spatiotemporal splines. The spatiotemporal

  3. Assessment of input function distortions on kinetic model parameters in simulated dynamic 82Rb PET perfusion studies

    International Nuclear Information System (INIS)

    Meyer, Carsten; Peligrad, Dragos-Nicolae; Weibrecht, Martin

    2007-01-01

    Cardiac 82Rb dynamic PET studies allow quantifying absolute myocardial perfusion by using tracer kinetic modeling. Here, the accurate measurement of the input function, i.e. the tracer concentration in blood plasma, is a major challenge. This measurement is deteriorated by inappropriate temporal sampling, spillover, etc. Such effects may influence the measured input peak value and the measured blood pool clearance. The aim of our study is to evaluate the effect of input function distortions on the myocardial perfusion as estimated by the model. To this end, we simulate noise-free myocardium time-activity curves (TACs) with a two-compartment kinetic model. The input function to the model is a generic analytical function. Distortions of this function have been introduced by varying its parameters. Using the distorted input function, the compartment model has been fitted to the simulated myocardium TAC. This analysis has been performed for various sets of model parameters covering a physiologically relevant range. The evaluation shows that a ±10% error in the input peak value can easily lead to a ±10-25% error in the model parameter K1, which relates to myocardial perfusion. Variations in the input function tail are generally less relevant. We conclude that an accurate estimation, especially of the plasma input peak, is crucial for a reliable kinetic analysis and blood flow estimation.
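
The sensitivity experiment can be reproduced in miniature: simulate a tissue TAC from a toy input, inflate the measured input's first pass by 10%, and refit. The input shape, rate constants, and grid-search fit below are invented stand-ins for the paper's two-compartment setup (a one-tissue model is used here for brevity).

```python
import numpy as np

dt = 0.05
t = np.arange(0.0, 10.0, dt)                  # minutes
cp = (t / 0.5) * np.exp(1.0 - t / 0.5)        # toy bolus input, peak 1 at t = 0.5

def tissue(cp_, k1, k2):
    """One-tissue model: ct = k1 * (cp convolved with exp(-k2 t))."""
    return k1 * dt * np.convolve(cp_, np.exp(-k2 * t))[: t.size]

ct_meas = tissue(cp, 0.8, 0.4)                # "measured" myocardium TAC

def fit(cp_):
    """Coarse grid search for (k1, k2) minimising the residual to ct_meas."""
    best = (np.inf, None, None)
    for k1 in np.linspace(0.5, 1.1, 61):
        for k2 in np.linspace(0.25, 0.55, 31):
            r = float(np.sum((tissue(cp_, k1, k2) - ct_meas) ** 2))
            if r < best[0]:
                best = (r, k1, k2)
    return best[1], best[2]

k1_base, _ = fit(cp)                          # undistorted input recovers K1

cp_distorted = cp.copy()                      # inflate the measured peak:
cp_distorted[t <= 2.0] *= 1.1                 # +10% error on the first pass
k1_dist, _ = fit(cp_distorted)                # fitted K1 is biased low
```

Because the model output scales with the input, overestimating the bolus peak forces the fitted K1 down by roughly the same relative amount, which is the direction of bias the paper quantifies.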

  4. Characterizing the Input-Output Function of the Olfactory-Limbic Pathway in the Guinea Pig

    Directory of Open Access Journals (Sweden)

    Gian Luca Breschi

    2015-01-01

    Nowadays the neuroscientific community takes increasing advantage of the continuous interaction between engineers and computational neuroscientists in order to develop neuroprostheses aimed at replacing damaged brain areas with artificial devices. To this end, a technological effort is required to develop neural network models which can be fed with the recorded electrophysiological patterns to yield the correct brain stimulation to recover the desired functions. In this paper we present a machine learning approach to derive the input-output function of the olfactory-limbic pathway in the in vitro whole brain of guinea pig, which is less complex and more controllable than an in vivo system. We first experimentally characterized the neuronal pathway by delivering different sets of electrical stimuli from the lateral olfactory tract (LOT) and by recording the corresponding responses in the lateral entorhinal cortex (l-ERC). As a second step, we used information theory to evaluate how much information output features carry about the input. Finally we used the acquired data to learn the LOT-l-ERC “I/O function,” by means of the kernel regularized least squares method, able to predict l-ERC responses on the basis of LOT stimulation features. Our modeling approach can be further exploited for brain prostheses applications.

  5. Lessons learned using HAMMLAB experimenter systems: Input for HAMMLAB 2000 functional requirements

    International Nuclear Information System (INIS)

    Sebok, Angelia L.

    1998-02-01

    To design a usable HAMMLAB 2000, lessons learned from use of the existing HAMMLAB must be documented. User suggestions are important and must be taken into account. Different roles in HAMMLAB experimental sessions are identified, and major functions of each role were specified. A series of questionnaires were developed and administered to different users of HAMMLAB, each tailored to the individual job description. The results of those questionnaires are included in this report. Previous HAMMLAB modification recommendations were also reviewed, to provide input to this document. A trial experimental session was also conducted, to give an overview of the tasks in HAMMLAB. (author)

  6. A time-resolved model of the mesospheric Na layer: constraints on the meteor input function

    Directory of Open Access Journals (Sweden)

    J. M. C. Plane

    2004-01-01

    A time-resolved model of the Na layer in the mesosphere/lower thermosphere region is described, where the continuity equations for the major sodium species Na, Na+ and NaHCO3 are solved explicitly, and the other short-lived species are treated in steady state. It is shown that the diurnal variation of the Na layer can only be modelled satisfactorily if sodium species are permanently removed below about 85 km, both through the dimerization of NaHCO3 and the uptake of sodium species on meteoric smoke particles that are assumed to have formed from the recondensation of vaporized meteoroids. When the sensitivity of the Na layer to the meteoroid input function is considered, an inconsistent picture emerges. The ratio of the column abundance of Na+ to Na is shown to increase strongly with the average meteoroid velocity, because the Na is injected at higher altitudes. Comparison with a limited set of Na+ measurements indicates that the average meteoroid velocity is probably less than about 25 km s⁻¹, in agreement with velocity estimates from conventional meteor radars, and considerably slower than recent observations made by wide aperture incoherent scatter radars. The Na column abundance is shown to be very sensitive to the meteoroid mass input rate, and to the rate of vertical transport by eddy diffusion. Although the magnitude of the eddy diffusion coefficient in the 80–90 km region is uncertain, there is a consensus between recent models using parameterisations of gravity wave momentum deposition that the average value is less than 3×10⁵ cm² s⁻¹. This requires that the global meteoric mass input rate is less than about 20 t d⁻¹, which is closest to estimates from incoherent scatter radar observations. Finally, the diurnal variation in the meteoroid input rate only slightly perturbs the Na layer, because the residence time of Na in the layer is several days, and diurnal effects are effectively averaged out.

  7. Estimation of the pulmonary input function in dynamic whole body PET

    International Nuclear Information System (INIS)

    Ho-Shon, K.; Buchen, P.; Meikle, S.R.; Fulham, M.J.; University of Sydney, Sydney, NSW

    1998-01-01

Full text: Dynamic data acquisition in Whole Body PET (WB-PET) has the potential to measure the metabolic rate of glucose (MRGlc) in tissue in-vivo. Estimation of changes in tumoral MRGlc may be a valuable tool in cancer by providing a quantitative index of response to treatment. A necessary requirement is an input function (IF) that can be obtained from arterial, 'arterialised' venous or pulmonary arterial blood in the case of lung tumours. Our aim was to extract the pulmonary input function from dynamic WB-PET data using Principal Component Analysis (PCA), Factor Analysis (FA) and Maximum Entropy (ME) for the evaluation of patients undergoing induction chemotherapy for non-small cell lung cancer. PCA is first used as a method of dimension reduction to obtain a signal space, defined by an optimal metric and a set of vectors. FA is used together with an ME constraint to rotate these vectors to obtain 'physiological' factors. A form of entropy function that does not require normalised data was used. This enabled the introduction of a penalty function based on the blood concentration at the last time point, which provides an additional constraint. Tissue functions from 10 planes through normal lung were simulated. The model was a linear combination of an IF and a tissue time activity curve (TAC). The proportion of the IF to TAC was varied over the planes to simulate the apical-to-basal gradient in vascularity of the lung, and pseudo-Poisson noise was added. The method accurately extracted the IF at noise levels spanning the expected range for dynamic ROI data acquired with the interplane septa extended. Our method is minimally invasive because it requires only one late venous blood sample, and is applicable to a wide range of tracers since it does not assume a particular compartmental model. Pilot data from 2 patients have been collected, enabling comparison of the estimated IF with direct blood sampling from the pulmonary artery

  8. Development of Input Function Measurement System for Small Animal PET Study

    International Nuclear Information System (INIS)

    Kim, Jong Guk; Kim, Byung Su; Kim, Jin Su

    2010-01-01

Quantitative measurement of radioactivity concentration in tissue with a validated tracer kinetic model requires a highly sensitive detection system for blood sampling. Accurate measurement of time activity curves (TACs) of labeled compounds in blood (plasma) provides quantitative information on biological parameters of interest in local tissue. In particular, the development of new tracers for PET imaging requires knowledge of the kinetics of the tracer in the body and in arterial blood and plasma. The conventional approach to obtaining an input function is to sample arterial blood sequentially by hand as a function of time. Several continuous blood sampling systems have been developed and used in nuclear medicine research to overcome the limited temporal resolution of manual sampling. In this work, we developed a highly sensitive GSO detector with a unique geometric design for small animal blood activity measurement.

  9. Direct electrical stimulation as an input gate into brain functional networks: principles, advantages and limitations.

    Science.gov (United States)

    Mandonnet, Emmanuel; Winkler, Peter A; Duffau, Hugues

    2010-02-01

    While the fundamental and clinical contribution of direct electrical stimulation (DES) of the brain is now well acknowledged, its advantages and limitations have not been re-evaluated for a long time. Here, we critically review exactly what DES can tell us about cerebral function. First, we show that DES is highly sensitive for detecting the cortical and axonal eloquent structures. Moreover, DES also provides a unique opportunity to study brain connectivity, since each area responsive to stimulation is in fact an input gate into a large-scale network rather than an isolated discrete functional site. DES, however, also has a limitation: its specificity is suboptimal. Indeed, DES may lead to interpretations that a structure is crucial because of the induction of a transient functional response when stimulated, whereas (1) this effect is caused by the backward spreading of the electro-stimulation along the network to an essential area and/or (2) the stimulated region can be functionally compensated owing to long-term brain plasticity mechanisms. In brief, although DES is still the gold standard for brain mapping, its combination with new methods such as perioperative neurofunctional imaging and biomathematical modeling is now mandatory, in order to clearly differentiate those networks that are actually indispensable to function from those that can be compensated.

  10. Modeling DPOAE input/output function compression: comparisons with hearing thresholds.

    Science.gov (United States)

    Bhagat, Shaum P

    2014-09-01

Basilar membrane input/output (I/O) functions in mammalian animal models are characterized by linear and compressed segments when measured near the location corresponding to the characteristic frequency. A method of studying basilar membrane compression indirectly in humans involves measuring distortion-product otoacoustic emission (DPOAE) I/O functions. Previous research has linked compression estimates from behavioral growth-of-masking functions to hearing thresholds. The aim of this study was to compare compression estimates from DPOAE I/O functions and hearing thresholds at 1 and 2 kHz. A prospective correlational research design was used. The relationship between DPOAE I/O function compression estimates and hearing thresholds was evaluated with Pearson product-moment correlations. Normal-hearing adults (n = 16) aged 22-42 yr were recruited. DPOAE I/O functions (L₂ = 45-70 dB SPL) and two-interval forced-choice hearing thresholds were measured in normal-hearing adults. A three-segment linear regression model applied to DPOAE I/O functions supplied estimates of compression thresholds, defined as breakpoints between linear and compressed segments, and the slopes of the compressed segments. Pearson product-moment correlations between DPOAE compression estimates and hearing thresholds were evaluated. A high correlation between DPOAE compression thresholds and hearing thresholds was observed at 2 kHz, but not at 1 kHz. Compression slopes also correlated highly with hearing thresholds only at 2 kHz. The derivation of cochlear compression estimates from DPOAE I/O functions provides a means to characterize basilar membrane mechanics in humans and elucidates the role of compression in tone detection in the 1-2 kHz frequency range. American Academy of Audiology.
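The breakpoint idea behind the three-segment regression can be illustrated with a simplified two-segment fit: grid-search candidate knee-points, fit a straight line on each side, and keep the knee that minimizes total squared error. This is a hypothetical sketch on synthetic data, not the study's implementation:

```python
import numpy as np

def fit_two_segment(L2, dp, knees):
    """Grid-search a knee-point: fit one line below and one line above each
    candidate knee, and keep the knee minimizing total squared error. A
    simplified two-segment version of the abstract's three-segment model."""
    best = None
    for knee in knees:
        lo, hi = L2 <= knee, L2 >= knee
        if lo.sum() < 2 or hi.sum() < 2:
            continue
        p_lo = np.polyfit(L2[lo], dp[lo], 1)
        p_hi = np.polyfit(L2[hi], dp[hi], 1)
        sse = (np.sum((np.polyval(p_lo, L2[lo]) - dp[lo]) ** 2)
               + np.sum((np.polyval(p_hi, L2[hi]) - dp[hi]) ** 2))
        if best is None or sse < best[0]:
            best = (sse, knee, p_lo[0], p_hi[0])
    _, knee, slope_linear, slope_compressed = best
    return knee, slope_linear, slope_compressed

# Synthetic I/O function: slope 1 dB/dB below a 55 dB SPL knee, slope 0.3
# (compression) above it. All values are illustrative, not measured data.
L2 = np.arange(45.0, 71.0, 5.0)
dp = np.where(L2 <= 55.0, L2 - 60.0, -5.0 + 0.3 * (L2 - 55.0))
knee, s_lin, s_comp = fit_two_segment(L2, dp, knees=np.arange(48.0, 68.0))
```

With noiseless synthetic data the slopes are recovered exactly; the knee is identified only up to the spacing of the measured L₂ levels, which is one reason finer level steps help in practice.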

  11. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    Science.gov (United States)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space-searching capabilities of genetic algorithms, they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
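The input-selection loop described above can be sketched as a small genetic algorithm over binary feature masks. Everything below (the synthetic data, the least-squares fitness with a parsimony penalty, the GA settings) is illustrative and not drawn from the paper:

```python
import random
import numpy as np

rng = random.Random(0)
np.random.seed(0)

# Synthetic task: 10 candidate inputs, but only features 0 and 3 drive the target.
X = np.random.randn(200, 10)
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * np.random.randn(200)

def fitness(mask):
    """Score a feature subset by least-squares fit quality, penalizing size."""
    idx = [i for i, bit in enumerate(mask) if bit]
    if not idx:
        return -1e9
    A = X[:, idx]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    sse = float(np.sum((y - A @ coef) ** 2))
    return -sse - 0.5 * len(idx)   # parsimony pressure against extra inputs

def evolve(pop_size=30, n_gen=40, n_feat=10):
    pop = [[rng.randint(0, 1) for _ in range(n_feat)] for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_feat)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                # mutation: flip one bit
                child[rng.randrange(n_feat)] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

The GA reliably rediscovers the two informative inputs; the size penalty is what discourages it from dragging along uninformative ones.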

  12. Serotonin 1B Receptors Regulate Prefrontal Function by Gating Callosal and Hippocampal Inputs

    DEFF Research Database (Denmark)

    Kjaerby, Celia; Athilingam, Jegath; Robinson, Sarah E

    2016-01-01

    Both medial prefrontal cortex (mPFC) and serotonin play key roles in anxiety; however, specific mechanisms through which serotonin might act on the mPFC to modulate anxiety-related behavior remain unknown. Here, we use a combination of optogenetics and synaptic physiology to show that serotonin...... acts presynaptically via 5-HT1B receptors to selectively suppress inputs from the contralateral mPFC and ventral hippocampus (vHPC), while sparing those from mediodorsal thalamus. To elucidate how these actions could potentially regulate prefrontal circuit function, we infused a 5-HT1B agonist...... into the mPFC of freely behaving mice. Consistent with previous studies that have optogenetically inhibited vHPC-mPFC projections, activating prefrontal 5-HT1B receptors suppressed theta-frequency mPFC activity (4-12 Hz), and reduced avoidance of anxiogenic regions in the elevated plus maze. These findings...

  13. Solar Wind Energy Input during Prolonged, Intense Northward Interplanetary Magnetic Fields: A New Coupling Function

    Science.gov (United States)

    Du, A. M.; Tsurutani, B. T.; Sun, W.

    2012-04-01

Sudden energy release (ER) events in the midnight sector at auroral zone latitudes during intense (B > 10 nT), long-duration (T > 3 hr), northward (Bz > 0 nT = N) IMF magnetic clouds (MCs) during solar cycle 23 (SC23) have been examined in detail. The MCs with northward-then-southward (NS) IMFs were analyzed separately from MCs with southward-then-northward (SN) configurations. It is found that there is a lack of substorms during the N field intervals of NS clouds. In sharp contrast, ER events do occur during the N field portions of SN MCs. From the above two results it is reasonable to conclude that the latter ER events represent residual energy remaining from the preceding S portions of the SN MCs. We derive a new solar wind-magnetosphere coupling function during northward IMFs: E_NIMF = α N^(-1/12) V^(7/3) B^(1/2) + β V |Dst_min|. The first term on the right-hand side of the equation represents the energy input via "viscous interaction", and the second term indicates the residual energy stored in the magnetotail. It is empirically found that the magnetosphere/magnetotail can store energy for a maximum of ~4 hrs before it is dissipated away. This concept is a defining one for ER/substorm energy storage. Our scenario indicates that the rate of solar wind energy injection into the magnetosphere/magnetotail determines the form of energy release into the magnetosphere/ionosphere. This may be more important than the dissipation mechanism itself (in understanding the form of the release). The concept of short-term energy storage is applied to the solar case. It is argued that it may be necessary to identify the rate of energy input into solar magnetic loop systems to be able to predict the occurrence of solar flares.
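As a purely illustrative sketch, the quoted coupling function can be evaluated directly. The abstract gives no values for the coefficients α and β, so the defaults below are placeholders and the result is in arbitrary units:

```python
def coupling_energy(N, V, B, dst_min, alpha=1.0, beta=1.0):
    """Evaluate the northward-IMF coupling function quoted in the abstract,
    E_NIMF = alpha * N**(-1/12) * V**(7/3) * B**(1/2) + beta * V * |Dst_min|.
    alpha and beta are not given in the abstract; the defaults here are
    placeholders, so only the functional dependence is meaningful."""
    viscous = alpha * N ** (-1.0 / 12.0) * V ** (7.0 / 3.0) * B ** 0.5
    residual = beta * V * abs(dst_min)
    return viscous + residual

# Illustrative solar wind values: density N (cm^-3), speed V (km/s), |B| (nT).
e = coupling_energy(N=5.0, V=450.0, B=12.0, dst_min=-80.0)
```

Note the strong V^(7/3) speed dependence of the viscous term, and that the stored-energy term depends only on |Dst_min|, not on its sign.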

  14. Methods for modeling and quantification in functional imaging by positron emissions tomography and magnetic resonance imaging

    International Nuclear Information System (INIS)

    Costes, Nicolas

    2017-01-01

This report presents experience and research in the field of in vivo medical imaging by positron emission tomography (PET) and magnetic resonance imaging (MRI). In particular, advances in terms of reconstruction, quantification and modeling in PET are described. The validation of processing and analysis methods is supported by the creation of data by simulation of the imaging process in PET. The recent advances in combined PET/MRI clinical cameras, allowing simultaneous acquisition of molecular/metabolic PET information and functional/structural MRI information, open the door to unique methodological innovations, exploiting spatial alignment and simultaneity of the PET and MRI signals. They will lead to an increase in accuracy and sensitivity in the measurement of biological phenomena. In this context, the developed projects address new methodological issues related to quantification, and to the respective contributions of MRI or PET information for a reciprocal improvement of the signals of the two modalities. They open perspectives for combined analysis of the two imaging techniques, allowing optimal use of synchronous, anatomical, molecular and functional information for brain imaging. These innovative concepts, as well as data correction and analysis methods, will be easily translated into other areas of investigation using combined PET/MRI. (author) [fr

  15. Determination of the arterial input function in mouse-models using clinical MRI

    International Nuclear Information System (INIS)

    Theis, D.; Fachhochschule Giessen-Friedberg; Keil, B.; Heverhagen, J.T.; Klose, K.J.; Behe, M.; Fiebich, M.

    2008-01-01

Dynamic contrast enhanced magnetic resonance imaging is a promising method for quantitative analysis of tumor perfusion and is increasingly used in the study of cancer in small animal models. In such studies, determination of the arterial input function (AIF) of the target tissue can be the first step. Series of short-axis images of the heart were acquired during administration of a bolus of Gd-DTPA using saturation-recovery gradient echo pulse sequences. The AIF was determined from the changes of the signal intensity in the left ventricle. The native T1 relaxation times and AIFs were determined for 11 mice. An average value of (1.16 ± 0.09) s for the native T1 relaxation time was measured. However, the AIF showed significant inter-animal variability, as previously observed by other authors. This inter-animal variability shows that a direct measurement of the AIF is advisable to avoid significant errors. The proposed method for determination of the AIF proved to be reliable. (orig.)
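One common way to turn a left-ventricle T1 measurement into an AIF is the linear relaxivity model R1(t) = R1(0) + r1·C(t). The sketch below assumes that model and a nominal Gd-DTPA relaxivity; it is a generic last step of this kind of analysis, not the authors' exact procedure:

```python
import numpy as np

R1_RELAXIVITY = 3.8   # Gd-DTPA relaxivity in 1/(mM*s) -- assumed nominal value
T10 = 1.16            # native T1 of blood in seconds, as reported in the abstract

def gd_concentration(R1_t, T10=T10, r1=R1_RELAXIVITY):
    """Convert measured R1(t) = 1/T1(t) into Gd-DTPA concentration using the
    linear relaxivity model R1(t) = R1(0) + r1*C(t). The study derived R1 from
    saturation-recovery signal intensities in the left ventricle; this sketch
    only covers the final conversion step."""
    return (np.asarray(R1_t, dtype=float) - 1.0 / T10) / r1

# Hypothetical R1 time series (1/s): baseline, rise to bolus peak, washout.
R1_series = np.array([1.0 / T10, 1.4, 2.5, 2.0, 1.5])
aif = gd_concentration(R1_series)   # concentration in mM
```

The baseline point maps to zero concentration by construction, which is a useful sanity check on any measured series.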

  16. Evaluation of two population-based input functions for quantitative neurological FDG PET studies

    International Nuclear Information System (INIS)

    Eberl, S.; Anayat, A.R.; Fulton, R.R.; Hooper, P.K.; Fulham, M.J.

    1997-01-01

The conventional measurement of the regional cerebral metabolic rate of glucose (rCMRGlc) with fluorodeoxyglucose (FDG) and positron emission tomography (PET) requires arterial or arterialised-venous (a-v) blood sampling at frequent intervals to obtain the plasma input function (IF). We evaluated the accuracy of rCMRGlc measurements using population-based IFs that were calibrated with two a-v blood samples. Population-based IFs were derived from: (1) the average of a-v IFs from 26 patients (Standard IF) and (2) a published model of FDG plasma concentration (Feng IF). Values for rCMRGlc calculated from the population-based IFs were compared with values obtained with IFs derived from frequent a-v blood sampling in 20 non-diabetic and six diabetic patients. Values for rCMRGlc calculated with the different IFs were highly correlated for both patient groups (r≥0.992) and root mean square residuals about the regression line were less than 0.24 mg/min/100 g. The Feng IF tended to underestimate high rCMRGlc. Both population-based IFs simplify the measurement of rCMRGlc with minimal loss in accuracy and require only two a-v blood samples for calibration. The reduced blood sampling requirements markedly reduce radiation exposure to the blood sampler. (orig.)
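The calibration step, anchoring a population-based IF to two a-v samples, can be sketched as a least-squares scale factor. The curves and numbers below are hypothetical, and the study's exact calibration may differ:

```python
import numpy as np

def calibrate_population_if(t_pop, c_pop, t_samples, c_samples):
    """Scale a population-based input function so it best matches two measured
    arterialised-venous samples (closed-form least-squares scale factor)."""
    c_at_samples = np.interp(t_samples, t_pop, c_pop)
    scale = np.dot(c_at_samples, c_samples) / np.dot(c_at_samples, c_at_samples)
    return scale * np.asarray(c_pop, dtype=float)

# Hypothetical population curve (arbitrary units, minutes) and two late a-v
# samples from a patient whose activity runs at twice the population level.
t_pop = np.array([0.0, 1, 2, 5, 10, 20, 40, 60])
c_pop = np.array([0.0, 80, 40, 20, 12, 8, 5, 4])
t_samples = np.array([20.0, 60.0])
c_samples = np.array([16.0, 8.0])
c_cal = calibrate_population_if(t_pop, c_pop, t_samples, c_samples)
```

With both samples exactly twice the population values, the recovered scale is 2 and the whole curve is doubled; with noisy samples the least-squares factor splits the difference.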

  17. Automatic detection of arterial input function in dynamic contrast enhanced MRI based on affinity propagation clustering.

    Science.gov (United States)

    Shi, Lin; Wang, Defeng; Liu, Wen; Fang, Kui; Wang, Yi-Xiang J; Huang, Wenhua; King, Ann D; Heng, Pheng Ann; Ahuja, Anil T

    2014-05-01

To automatically and robustly detect the arterial input function (AIF) with high detection accuracy and low computational cost in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). In this study, we developed an automatic AIF detection method using an accelerated version (Fast-AP) of affinity propagation (AP) clustering. The validity of this Fast-AP-based method was proved on two DCE-MRI datasets, i.e., rat kidney and human head and neck. The detailed AIF detection performance of this proposed method was assessed in comparison with other clustering-based methods, namely original AP and K-means, as well as the manual AIF detection method. Both the automatic AP- and Fast-AP-based methods achieved satisfactory AIF detection accuracy, but the computational cost of Fast-AP could be reduced by 64.37-92.10% on the rat dataset and 73.18-90.18% on the human dataset compared with the cost of AP. K-means yielded the lowest computational cost, but resulted in the lowest AIF detection accuracy. The experimental results demonstrated that both the AP- and Fast-AP-based methods were insensitive to the initialization of cluster centers, and had superior robustness compared with the K-means method. The Fast-AP-based method enables automatic AIF detection with high accuracy and efficiency. Copyright © 2013 Wiley Periodicals, Inc.
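The clustering idea can be sketched with plain k-means (the baseline the study compares against) on synthetic voxel time-courses, picking the cluster whose mean curve peaks earliest as the arterial candidate. All data shapes, noise levels, and settings below are illustrative:

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain k-means on voxel time-courses. The paper itself uses affinity
    propagation; k-means is the simpler baseline it compares against."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Synthetic data: 20 "arterial" voxels peak early and high; 80 "tissue"
# voxels enhance late and slowly. Shapes and noise levels are made up.
t = np.linspace(0.0, 60.0, 30)
arterial = 5.0 * t * np.exp(-t / 4.0)
tissue = 1.5 * (1.0 - np.exp(-t / 20.0))
noise = np.random.default_rng(1)
X = np.vstack([arterial + 0.1 * noise.standard_normal((20, t.size)),
               tissue + 0.1 * noise.standard_normal((80, t.size))])

labels, centers = kmeans(X, k=2)
# The cluster whose mean curve peaks earliest is the candidate AIF.
aif_cluster = int(np.argmin([c.argmax() for c in centers]))
aif = centers[aif_cluster]
```

On well-separated synthetic curves even k-means succeeds; the study's point is that on real data its sensitivity to initialization hurts accuracy, which AP-style methods avoid.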

  18. WE-D-204-07: Development of An ImageJ Plugin for Renal Function Quantification: RenalQuant

    Energy Technology Data Exchange (ETDEWEB)

    Marques da Silva, A; Narciso, L [PUCRS, Porto Alegre, RS (Brazil)

    2015-06-15

Purpose: Commercial workstations usually have their own software to calculate dynamic renal function. However, such software usually offers little flexibility and involves subjective delimitation of kidney and background areas. The aim of this paper is to present public-domain software, called RenalQuant, capable of semi-automatically drawing regions of interest on dynamic renal scintigraphies, extracting data and generating renal function quantification parameters. Methods: The software was developed in Java and written as an ImageJ-based plugin. The preprocessing and segmentation steps include the user’s selection of one time frame with higher activity in the kidneys’ region, compared with background, and low activity in the liver. Next, the chosen time frame is smoothed using a Gaussian low pass spatial filter (σ = 3) for noise reduction and better delimitation of the kidneys. The maximum entropy thresholding method is used for segmentation. A background area is automatically placed below each kidney, and the user confirms whether these regions are correctly segmented and positioned. Quantitative data are extracted, and each renogram and relative renal function (RRF) value is calculated and displayed. Results: The RenalQuant plugin was validated using retrospective 99mTc-DTPA exams from 20 patients, and compared with results produced by commercial workstation software, referred to as the reference. The renograms’ intraclass correlation coefficients (ICC) were calculated, and false-negative and false-positive RRF values were analyzed. The results showed that ICC values between the RenalQuant plugin and the reference software for both kidneys’ renograms were higher than 0.75, showing excellent reliability. Conclusion: Our results indicated that the RenalQuant plugin can be reliably used to generate renograms, using DICOM dynamic renal scintigraphy exams as input. It is user-friendly and requires minimal user interaction. Further studies have to investigate how to increase RRF accuracy and explore how to solve
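The RRF computation itself reduces to background-corrected kidney counts. A minimal sketch, using the standard definition, which may differ in detail from RenalQuant's implementation:

```python
def relative_renal_function(left_counts, right_counts, left_bg, right_bg):
    """Relative renal function (RRF, %) of each kidney from background-
    corrected ROI counts during the uptake phase. Standard definition;
    RenalQuant's exact implementation may differ."""
    left = max(left_counts - left_bg, 0.0)
    right = max(right_counts - right_bg, 0.0)
    total = left + right
    return 100.0 * left / total, 100.0 * right / total

# Hypothetical counts from the two kidney ROIs and their background areas.
rrf_left, rrf_right = relative_renal_function(5200.0, 4100.0, 700.0, 650.0)
```

The two percentages sum to 100 by construction, which is why accurate background placement (the step RenalQuant automates) dominates the error budget.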

  19. Estimating the basilar-membrane input-output function in normal-hearing and hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Dau, Torsten

To partly characterize the function of cochlear processing in humans, the basilar membrane (BM) input-output function can be estimated. In recent studies, forward masking has been used to estimate BM compression. If an on-frequency masker is processed compressively, while an off-frequency masker...... is transformed more linearly, the ratio between the slopes of growth of masking (GOM) functions provides an estimate of BM compression at the signal frequency. In this study, this paradigm is extended to also estimate the knee-point of the I/O-function between linear processing at low levels and compressive...... processing at medium levels. If a signal can be masked by a low-level on-frequency masker such that signal and masker fall in the linear region of the I/O-function, then a steeper GOM function is expected. The knee-point can then be estimated in the input level region where the GOM changes significantly...

  20. Plasma input function determination for PET using a commercial laboratory robot

    International Nuclear Information System (INIS)

    Alexoff, David L.; Shea, Colleen; Fowler, Joanna S.; King, Payton; Gatley, S. John; Schlyer, David J.; Wolf, Alfred P.

    1995-01-01

A commercial laboratory robot system (Zymate PyTechnology II Laboratory Automation System) was interfaced to standard and custom laboratory equipment and programmed to perform rapid radiochemical assays necessary for plasma input function determination in quantitative PET studies in humans and baboons. A Zymark XP robot arm was used to carry out two assays: (1) the determination of total plasma radioactivity concentrations in a series of small-volume whole blood samples and (2) the determination of unchanged (parent) radiotracer in plasma using only solid phase extraction methods. Steady state robotic throughput for determination of total plasma radioactivity in whole blood samples (0.350 mL) is 14.3 samples/h, which includes automated centrifugation, pipetting, weighing and radioactivity counting. Robotic throughput for the assay of parent radiotracer in plasma is 4-6 samples/h depending on the radiotracer. Percents of total radioactivity present as parent radiotracer at 60 min postinjection of 25 ± 5.0 (N = 25), 26 ± 6.8 (N = 68), 13 ± 4.4 (N = 30), 32 ± 7.2 (N = 18), 16 ± 4.9 (N = 20), were obtained for carbon-11 labeled benztropine, raclopride, methylphenidate, SR 46349B (trans, 4-[(3Z)3-(2-dimethylamino-ethyl) oxyimino-3 (2-fluorophenyl)propen-1-yl]phenol), and cocaine respectively in baboon plasma and 84 ± 6.4 (N = 9), 18 ± 11 (N = 10), 74 ± 5.7 (N = 118) and 16 ± 3.7 (N = 18) for carbon-11 labeled benztropine, deprenyl, raclopride, and methylphenidate respectively in human plasma. The automated system has been used for more than 4 years for all plasma analyses for 7 different C-11 labeled compounds used routinely in our laboratory. The robotic radiotracer assay runs unattended and includes automated cleanup procedures that eliminate all human contact with plasma-contaminated containers.

  1. Cerebral blood flow with [15O]water PET studies using an image-derived input function and MR-defined carotid centerlines

    Science.gov (United States)

    Fung, Edward K.; Carson, Richard E.

    2013-03-01

    average aRC values, the means were unchanged, and intersubject variability was noticeably reduced. This MR-based centerline method with local re-registration to [15O]water PET yields a consistent IDIF over multiple injections in the same subject, thus permitting the absolute quantification of CBF without arterial input function measurements.

  2. Quantification of Na+,K+ pumps and their transport rate in skeletal muscle: Functional significance

    Science.gov (United States)

    2013-01-01

    During excitation, muscle cells gain Na+ and lose K+, leading to a rise in extracellular K+ ([K+]o), depolarization, and loss of excitability. Recent studies support the idea that these events are important causes of muscle fatigue and that full use of the Na+,K+-ATPase (also known as the Na+,K+ pump) is often essential for adequate clearance of extracellular K+. As a result of their electrogenic action, Na+,K+ pumps also help reverse depolarization arising during excitation, hyperkalemia, and anoxia, or from cell damage resulting from exercise, rhabdomyolysis, or muscle diseases. The ability to evaluate Na+,K+-pump function and the capacity of the Na+,K+ pumps to fill these needs require quantification of the total content of Na+,K+ pumps in skeletal muscle. Inhibition of Na+,K+-pump activity, or a decrease in their content, reduces muscle contractility. Conversely, stimulation of the Na+,K+-pump transport rate or increasing the content of Na+,K+ pumps enhances muscle excitability and contractility. Measurements of [3H]ouabain binding to skeletal muscle in vivo or in vitro have enabled the reproducible quantification of the total content of Na+,K+ pumps in molar units in various animal species, and in both healthy people and individuals with various diseases. In contrast, measurements of 3-O-methylfluorescein phosphatase activity associated with the Na+,K+-ATPase may show inconsistent results. Measurements of Na+ and K+ fluxes in intact isolated muscles show that, after Na+ loading or intense excitation, all the Na+,K+ pumps are functional, allowing calculation of the maximum Na+,K+-pumping capacity, expressed in molar units/g muscle/min. The activity and content of Na+,K+ pumps are regulated by exercise, inactivity, K+ deficiency, fasting, age, and several hormones and pharmaceuticals. 
Studies on the α-subunit isoforms of the Na+,K+-ATPase have detected a relative increase in their number in response to exercise and the glucocorticoid dexamethasone but have not

  3. Quantification of Na+,K+ pumps and their transport rate in skeletal muscle: functional significance.

    Science.gov (United States)

    Clausen, Torben

    2013-10-01

    During excitation, muscle cells gain Na(+) and lose K(+), leading to a rise in extracellular K(+) ([K(+)]o), depolarization, and loss of excitability. Recent studies support the idea that these events are important causes of muscle fatigue and that full use of the Na(+),K(+)-ATPase (also known as the Na(+),K(+) pump) is often essential for adequate clearance of extracellular K(+). As a result of their electrogenic action, Na(+),K(+) pumps also help reverse depolarization arising during excitation, hyperkalemia, and anoxia, or from cell damage resulting from exercise, rhabdomyolysis, or muscle diseases. The ability to evaluate Na(+),K(+)-pump function and the capacity of the Na(+),K(+) pumps to fill these needs require quantification of the total content of Na(+),K(+) pumps in skeletal muscle. Inhibition of Na(+),K(+)-pump activity, or a decrease in their content, reduces muscle contractility. Conversely, stimulation of the Na(+),K(+)-pump transport rate or increasing the content of Na(+),K(+) pumps enhances muscle excitability and contractility. Measurements of [(3)H]ouabain binding to skeletal muscle in vivo or in vitro have enabled the reproducible quantification of the total content of Na(+),K(+) pumps in molar units in various animal species, and in both healthy people and individuals with various diseases. In contrast, measurements of 3-O-methylfluorescein phosphatase activity associated with the Na(+),K(+)-ATPase may show inconsistent results. Measurements of Na(+) and K(+) fluxes in intact isolated muscles show that, after Na(+) loading or intense excitation, all the Na(+),K(+) pumps are functional, allowing calculation of the maximum Na(+),K(+)-pumping capacity, expressed in molar units/g muscle/min. The activity and content of Na(+),K(+) pumps are regulated by exercise, inactivity, K(+) deficiency, fasting, age, and several hormones and pharmaceuticals. Studies on the α-subunit isoforms of the Na(+),K(+)-ATPase have detected a relative increase in their

  4. Feasibility study of the non-invasive estimation of the β+ arterial input function for human PET imaging

    International Nuclear Information System (INIS)

    Hubert, X.

    2009-12-01

This work deals with the estimation of the concentration of molecules in arterial blood which are labelled with positron-emitting radioelements. This concentration is called the 'β+ arterial input function'. It has to be estimated for a large number of pharmacokinetic analyses. Nowadays it is measured through serial arterial sampling, which is an accurate method but requires a stringent protocol. Complications might occur during arterial blood sampling because this method is invasive (hematomas, nosocomial infections). The objective of this work is to overcome this risk through a non-invasive estimation of the β+ input function with an external detector and a collimator. This allows the reconstruction of blood vessels and thus the discrimination of the arterial signal from signals in other tissues. Collimators in medical imaging are not adapted to estimating the β+ input function because their sensitivity is very low. In this work, they are replaced by coded-aperture collimators, originally developed for astronomy. New methods where coded apertures are used with statistical reconstruction algorithms are presented. Techniques for analytical ray-tracing and for the acceleration of reconstructions are proposed. A new method which decomposes reconstructions on temporal sets and on spatial sets is also developed to efficiently estimate the arterial input function from series of temporal acquisitions. This work demonstrates that the trade-off between sensitivity and spatial resolution in PET can be improved thanks to coded-aperture collimators and statistical reconstruction algorithms; it also provides new tools to implement such improvements. (author)

  5. Average arterial input function for quantitative dynamic contrast enhanced magnetic resonance imaging of neck nodal metastases

    International Nuclear Information System (INIS)

    Shukla-Dave, Amita; Lee, Nancy; Stambuk, Hilda; Wang, Ya; Huang, Wei; Thaler, Howard T; Patel, Snehal G; Shah, Jatin P; Koutcher, Jason A

    2009-01-01

The present study determines the feasibility of generating an average arterial input function (Avg-AIF) from a limited population of patients with neck nodal metastases to be used for pharmacokinetic modeling of dynamic contrast-enhanced MRI (DCE-MRI) data in clinical trials of larger populations. Twenty patients (mean age 50 years [range 27–77 years]) with neck nodal metastases underwent pretreatment DCE-MRI studies with a temporal resolution of 3.75 to 7.5 sec on a 1.5T clinical MRI scanner. Eleven individual AIFs (Ind-AIFs) met the criteria of expected enhancement pattern and were used to generate the Avg-AIF. The Tofts model was used to calculate pharmacokinetic DCE-MRI parameters. Bland-Altman plots and paired Student t-tests were used to describe significant differences between the pharmacokinetic parameters obtained from individual and average AIFs. Ind-AIFs obtained from eleven patients were used to calculate the Avg-AIF. No overall significant difference (bias) was observed for the transfer constant (Ktrans) measured with Ind-AIFs compared to Avg-AIF (p = 0.20 for region-of-interest (ROI) analysis and p = 0.18 for histogram median analysis). Similarly, no overall significant difference was observed for the interstitial fluid space volume fraction (ve) measured with Ind-AIFs compared to Avg-AIF (p = 0.48 for ROI analysis and p = 0.93 for histogram median analysis). However, the Bland-Altman plot suggests that as Ktrans increases, the Ind-AIF estimates tend to become proportionally higher than the Avg-AIF estimates. We found no statistically significant overall bias in Ktrans or ve estimates derived from the Avg-AIF, generated from a limited population, as compared with Ind-AIFs. However, further study is needed to determine whether calibration is needed across the range of Ktrans. The Avg-AIF obtained from a limited population may be used for pharmacokinetic modeling of DCE-MRI data in larger population studies with neck nodal metastases. Further validation of
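The Bland-Altman comparison used in the study can be sketched in a few lines: per-case differences between the two estimates give the bias and the 95% limits of agreement. The Ktrans values below are hypothetical, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics for two measurement methods:
    returns the mean difference (bias) and the 95% limits of agreement."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical Ktrans estimates (1/min) from individual vs. average AIFs.
k_ind = np.array([0.12, 0.25, 0.31, 0.18, 0.40, 0.22])
k_avg = np.array([0.11, 0.24, 0.33, 0.19, 0.37, 0.23])
bias, (loa_low, loa_high) = bland_altman(k_ind, k_avg)
```

A near-zero bias with narrow limits of agreement is what supports substituting the Avg-AIF for individual AIFs; a trend of the differences with the magnitude (as the study observed for high Ktrans) is exactly what the plot, rather than the bias alone, reveals.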

  6. Image-derived input function obtained in a 3TMR-brainPET

    Energy Technology Data Exchange (ETDEWEB)

    Silva, N.A. da [Institute of Biophysics and Biomedical Engineering, University of Lisbon (Portugal); Institute of Neurosciences and Medicine - 4, Juelich (Germany); Herzog, H., E-mail: h.herzog@fz-juelich.de [Institute of Neurosciences and Medicine - 4, Juelich (Germany); Weirich, C.; Tellmann, L.; Rota Kops, E. [Institute of Neurosciences and Medicine - 4, Juelich (Germany); Hautzel, H. [Department of Nuclear Medicine (KME), University of Duesseldorf, Medical Faculty at Research Center Juelich, Juelich (Germany); Almeida, P. [Institute of Biophysics and Biomedical Engineering, University of Lisbon (Portugal)

    2013-02-21

    Aim: The combination of a high-resolution MR-compatible BrainPET insert operated within a 3 T MAGNETOM Trio MR scanner is an excellent tool for obtaining an image-derived input function (IDIF), due to simultaneous imaging. In this work, we explore the possibility of obtaining an IDIF from volumes of interest (VOI) defined over the carotid arteries (CAs) using the MR data. Material and methods: FDG data from three patients without brain disorders were included. VOIs were drawn bilaterally over the CAs on an MPRAGE image using a 50% isocontour (MR50VOI). CA PET/MR co-registration was examined based on an individual and combined CA co-registration. Then, to estimate the IDIF, the MR50VOI average (IDIF-A), the four hottest pixels per plane (IDIF-4H), and the four hottest pixels in the VOI (IDIF-4V) were considered. A model-based correction for residual partial volume effects involving venous blood samples was applied, from which partial volume (PV) and spillover (SP) coefficients were estimated. Additionally, a theoretical PV coefficient (PVt) was calculated based on MR50VOI. Results: The results show an excellent co-registration between MR and PET, with an area-under-the-curve ratio between the two co-registration methods of 1.00±0.04. A good agreement between PV and PVt was found for IDIF-A, with a PV of 0.39±0.06 and a PVt of 0.40±0.03, and for IDIF-4H, with a PV of 0.47±0.05 and a PVt of 0.47±0.03. The SPs were 0.20±0.03 and 0.21±0.03 for IDIF-A and IDIF-4H, respectively. Conclusion: The integration of a high-resolution BrainPET in an MR scanner makes it possible to obtain an IDIF from an MR-based VOI, which must be corrected for a residual partial volume effect.
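The model-based partial-volume correction described above can be illustrated with a small sketch. The mixture model and variable names below are assumptions for illustration, not the authors' exact formulation: the measured carotid time course is treated as PV times the true blood curve plus SP times a spill-in tissue curve, and the two coefficients follow from least squares against (here synthetic) blood-sample data:

```python
import numpy as np

def estimate_pv_sp(c_meas, c_blood, c_tissue):
    """Solve c_meas(t) ≈ PV * c_blood(t) + SP * c_tissue(t) by ordinary least squares."""
    design = np.column_stack([c_blood, c_tissue])
    coeffs, *_ = np.linalg.lstsq(design, c_meas, rcond=None)
    return coeffs  # [PV, SP]

# Synthetic check with known coefficients (values are illustrative only)
t = np.linspace(0.0, 60.0, 25)                  # minutes
c_blood = 100.0 * np.exp(-0.1 * t)              # "true" blood activity
c_tissue = 40.0 * (1.0 - np.exp(-0.05 * t))     # surrounding-tissue activity
c_meas = 0.39 * c_blood + 0.20 * c_tissue       # VOI curve mixing both
pv, sp = estimate_pv_sp(c_meas, c_blood, c_tissue)
```

On noiseless synthetic data the least-squares solve recovers the mixing coefficients exactly; with real venous samples the fit is over the late, sampled time points only.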

  7. Relative sensitivities of DCE-MRI pharmacokinetic parameters to arterial input function (AIF) scaling.

    Science.gov (United States)

    Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V; Rooney, William D; Garzotto, Mark G; Springer, Charles S

    2016-08-01

    Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data that extracts quantitative contrast reagent/tissue-specific model parameters is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIFs measured from individual arteries sometimes require amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol and the perfect scaling factor for reconstructing the ground truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water exchange sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (K(trans)) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both AIF scaling insensitive. 
This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging
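The reported scaling behaviour can be reproduced in a toy FXL Tofts simulation: rescaling the AIF amplitude by a factor s changes the fitted Ktrans and ve by the same factor as each other (here inversely with s), while kep = Ktrans/ve is unchanged. All curves and parameter values below are hypothetical:

```python
import numpy as np

def tofts(t, ktrans, kep, cp):
    """FXL Tofts forward model: Ct = Ktrans * conv(Cp, exp(-kep*t))."""
    dt = t[1] - t[0]
    return ktrans * np.convolve(cp, np.exp(-kep * t))[: len(t)] * dt

def fit_tofts(t, ct, cp):
    """Grid search on kep; for each kep, Ktrans follows by linear least squares."""
    best = None
    for kep in np.linspace(0.05, 3.0, 600):
        basis = tofts(t, 1.0, kep, cp)
        ktrans = float(basis @ ct / (basis @ basis))
        rss = float(np.sum((ct - ktrans * basis) ** 2))
        if best is None or rss < best[0]:
            best = (rss, ktrans, kep)
    return best[1], best[2]  # (Ktrans, kep)

t = np.linspace(0.0, 6.0, 361)                            # minutes
cp = 4.0 * (np.exp(-1.0 * t) + 0.25 * np.exp(-0.1 * t))   # hypothetical AIF shape
ct = tofts(t, 0.30, 0.75, cp)                             # ground truth: Ktrans=0.30, kep=0.75 (ve=0.40)

# Fit the same tissue curve against AIFs rescaled by s
results = {s: fit_tofts(t, ct, s * cp) for s in (0.5, 1.0, 2.0)}
```

The fitted kep is identical for every scaling factor, while the fitted Ktrans (and hence ve = Ktrans/kep) absorbs the amplitude change, matching the abstract's conclusion that kep is AIF-scaling insensitive.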

  8. Phase-based vascular input function: Improved quantitative DCE-MRI of atherosclerotic plaques

    NARCIS (Netherlands)

    van Hoof, R. H. M.; Hermeling, E.; Truijman, M. T. B.; van Oostenbrugge, R. J.; Daemen, J. W. H.; van der Geest, R. J.; van Orshoven, N. P.; Schreuder, A. H.; Backes, W. H.; Daemen, M. J. A. P.; Wildberger, J. E.; Kooi, M. E.

    2015-01-01

    Purpose: Quantitative pharmacokinetic modeling of dynamic contrast-enhanced (DCE)-MRI can be used to assess atherosclerotic plaque microvasculature, which is an important marker of plaque vulnerability. The purpose of the present study was (1) to compare magnitude- versus phase-based vascular input

  9. A curve-fitting approach to estimate the arterial plasma input function for the assessment of glucose metabolic rate and response to treatment.

    Science.gov (United States)

    Vriens, Dennis; de Geus-Oei, Lioe-Fee; Oyen, Wim J G; Visser, Eric P

    2009-12-01

    For the quantification of dynamic (18)F-FDG PET studies, the arterial plasma time-activity concentration curve (APTAC) needs to be available. This can be obtained using serial sampling of arterial blood or an image-derived input function (IDIF). Arterial sampling is invasive and often not feasible in practice; IDIFs are biased because of partial-volume effects and cannot be used when no large arterial blood pool is in the field of view. We propose a mathematic function, consisting of an initial linear rising activity concentration followed by a triexponential decay, to describe the APTAC. This function was fitted to 80 oncologic patients and verified for 40 different oncologic patients by area-under-the-curve (AUC) comparison, Patlak glucose metabolic rate (MR(glc)) estimation, and therapy response monitoring (Delta MR(glc)). The proposed function was compared with the gold standard (serial arterial sampling) and the IDIF. To determine the free parameters of the function, plasma time-activity curves based on arterial samples in 80 patients were fitted after normalization for administered activity (AA) and initial distribution volume (iDV) of (18)F-FDG. The medians of these free parameters were used for the model. In 40 other patients (20 baseline and 20 follow-up dynamic (18)F-FDG PET scans), this model was validated. The population-based curve, individually calibrated by AA and iDV (APTAC(AA/iDV)), by 1 late arterial sample (APTAC(1 sample)), and by the individual IDIF (APTAC(IDIF)), was compared with the gold standard of serial arterial sampling (APTAC(sampled)) using the AUC. Additionally, these 3 methods of APTAC determination were evaluated with Patlak MR(glc) estimation and with Delta MR(glc) for therapy effects using serial sampling as the gold standard. 
Excellent individual fits to the function were derived with significantly different decay constants (P AUC from APTAC(AA/iDV), APTAC(1 sample), and APTAC(IDIF) with the gold standard (APTAC(sampled)) were 0

  10. Identification and Quantification of Uncertainties Related to Using Distributed X-band Radar Estimated Precipitation as input in Urban Drainage Models

    DEFF Research Database (Denmark)

    Pedersen, Lisbeth

    The Local Area Weather Radar (LAWR) is a small-scale weather radar providing distributed measurements of rainfall, primarily for use as input in hydrological applications. As with any other weather radar, the LAWR measurement of rainfall is an indirect measurement, since it does not measure the rainfall... are quantified using statistical methods. Furthermore, the present calibration method is reviewed, and a new extended calibration method has been developed and tested, resulting in improved rainfall estimates. As part of the calibration analysis, a number of elements affecting the LAWR performance were identified... in connection with boundary assignment, besides a generally improved understanding of the benefits and pitfalls of using distributed rainfall data as input to models. In connection with the use of LAWR data in an urban drainage context, the potential for using LAWR data for extreme rainfall statistics has been studied...

  11. A Model to Determinate the Influence of Probability Density Functions (PDFs) of Input Quantities in Measurements

    Directory of Open Access Journals (Sweden)

    Jesús Caja

    2016-06-01

    Full Text Available A method is presented for analysing the effect of different hypotheses about the types of distributions of the input quantities of a measurement model, so that the developed algorithms can be simplified. As an example, a model of indirect measurements with an optical coordinate measuring machine was employed to evaluate these different hypotheses. As a result of the different experiments, the assumption that the different variables of the model can be modelled as normal distributions is proved.
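The kind of comparison this abstract describes, propagating different assumed input PDFs through a measurement model, can be sketched with a Monte Carlo experiment. The product model and all numbers below are illustrative assumptions; matching the standard deviations of the candidate distributions makes the output statistics nearly shape-independent, consistent with the paper's conclusion:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def propagate(dist):
    """Push samples of x1 and x2 through a hypothetical model y = x1 * x2."""
    x1 = dist(10.0, 0.1)    # nominal value 10, standard uncertainty 0.1
    x2 = dist(5.0, 0.05)    # nominal value 5, standard uncertainty 0.05
    y = x1 * x2
    return y.mean(), y.std()

# Two input hypotheses with identical means and standard deviations
normal = lambda mu, s: rng.normal(mu, s, n)
uniform = lambda mu, s: rng.uniform(mu - s * np.sqrt(3), mu + s * np.sqrt(3), n)

m_n, s_n = propagate(normal)
m_u, s_u = propagate(uniform)
```

Both hypotheses give an output mean near 50 and an output standard deviation near sqrt((5·0.1)² + (10·0.05)²) ≈ 0.71, so the normality assumption changes little here.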

  12. Hepatoprotective effects of kombucha tea: identification of functional strains and quantification of functional components.

    Science.gov (United States)

    Wang, Yong; Ji, Baoping; Wu, Wei; Wang, Ruojun; Yang, Zhiwei; Zhang, Di; Tian, Wenli

    2014-01-30

    Kombucha tea (KT), a traditional health beverage containing potential hepatoprotective agents, is fermented from sugared tea by a symbiotic culture of yeast and bacteria for 8 days. However, the functional strains that produce the components responsible for the hepatoprotective property of KT remain unclear. Multiple strains are involved in traditional KT production; therefore, KT has not been standardized or produced commercially. This study aimed to identify the functional strains and quantify the functional components with hepatoprotective effects in kombucha tea. Gluconacetobacter sp. A4 was one of the key microorganisms in KT: the D-saccharic acid-1,4-lactone (DSL) produced by G. sp. A4 was significantly higher than that produced by the original tea fungus at 8 days of fermentation. Traditional KT (TKT, tea broth fermented by mixed tea fungus), modified KT (MKT, fermented by single G. sp. A4), and DSL significantly inhibited the acetaminophen-induced increases of alanine aminotransferase, alkaline phosphatase, triglyceride, and malondialdehyde, as well as the reduction of total antioxidant capacity, in mice. Furthermore, MKT and TKT are both similar to DSL in terms of protection against acetaminophen-induced liver injury in mice. These results suggested a positive relationship between DSL content and the hepatoprotective effect in the TKT, MKT, and DSL groups. G. sp. A4 was concluded to be a potential functional strain, and DSL might be the key functional component for the hepatoprotective property of KT. The stronger capability of G. sp. A4 in producing DSL makes it a better choice for the commercial production of KT. © 2013 Society of Chemical Industry.

  13. A Microneedle Functionalized with Polyethyleneimine and Nanotubes for Highly Sensitive, Label-Free Quantification of DNA.

    Science.gov (United States)

    Saadat-Moghaddam, Darius; Kim, Jong-Hoon

    2017-08-16

    The accurate measure of DNA concentration is necessary for many DNA-based biological applications. However, the current methods are limited in terms of sensitivity, reproducibility, human error, and contamination. Here, we present a microneedle functionalized with polyethyleneimine (PEI) and single-walled carbon nanotubes (SWCNTs) for the highly sensitive quantification of DNA. The microneedle was fabricated using ultraviolet (UV) lithography and anisotropic etching, and then functionalized with PEI and SWCNTs through a dip coating process. The electrical characteristics of the microneedle change with the accumulation of DNA on the surface. Current-voltage measurements in deionized water were conducted to study these changes in the electrical properties of the sensor. The sensitivity test found the signal to be discernable from the noise level down to 100 attomolar (aM), demonstrating higher sensitivity than currently available UV fluorescence and UV absorbance based methods. A microneedle without any surface modification only had a 100 femtomolar (fM) sensitivity. All measurement results were consistent with fluorescence microscopy.

  14. Preclinical In vivo Imaging for Fat Tissue Identification, Quantification, and Functional Characterization.

    Science.gov (United States)

    Marzola, Pasquina; Boschi, Federico; Moneta, Francesco; Sbarbati, Andrea; Zancanaro, Carlo

    2016-01-01

    Localization, differentiation, and quantitative assessment of fat tissues have always attracted the interest of researchers. Nowadays, these topics are even more relevant as obesity (the excess of fat tissue) is considered a real pathology requiring in some cases pharmacological and surgical approaches. Several weight loss medications, acting either on the metabolism or on the central nervous system, are currently under preclinical or clinical investigation. Animal models of obesity have been developed and are widely used in pharmaceutical research. The assessment of candidate drugs in animal models requires non-invasive methods for longitudinal assessment of efficacy, the main outcome being the amount of body fat. Fat tissues can be either quantified in the entire animal or localized and measured in selected organs/regions of the body. Fat tissues show peculiar contrast in several imaging modalities, such as Magnetic Resonance Imaging (MRI), which can distinguish between fat and water protons thanks to their different magnetic resonance properties. Since fat tissues have higher carbon/hydrogen content than other soft tissues and bones, they can be easily assessed by Computed Tomography (CT) as well. Interestingly, MRI also discriminates between white and brown adipose tissue (BAT); the latter has long been regarded as a potential target for anti-obesity drugs because of its ability to enhance energy consumption through increased thermogenesis. Positron Emission Tomography (PET) performed with 18F-FDG as a glucose analog radiotracer reflects well the metabolic rate in body tissues and consequently is the technique of choice for studies of BAT metabolism. This review will focus on the main non-invasive imaging techniques (MRI, CT, and PET) that are fundamental for the assessment, quantification and functional characterization of fat deposits in small laboratory animals. The contribution of optical techniques, which are currently regarded with

  15. Preclinical in vivo imaging for fat tissue identification, quantification and functional characterization

    Directory of Open Access Journals (Sweden)

    Pasquina Marzola

    2016-09-01

    Full Text Available Localization, differentiation and quantitative assessment of fat tissues have always attracted the interest of researchers. Nowadays, these topics are even more relevant as obesity (the excess of fat tissue) is considered a real pathology requiring in some cases pharmacological and surgical approaches. Several weight loss medications, acting either on the metabolism or on the central nervous system, are currently under preclinical or clinical investigation. Animal models of obesity have been developed which are widely used in pharmaceutical research. The assessment of candidate drugs in animal models requires non-invasive methods for longitudinal assessment of efficacy, the main outcome being the amount of body fat. Fat tissues can be either quantified in the entire animal or localized and measured in selected organs/regions of the body. Fat tissues show peculiar contrast in several imaging modalities, such as Magnetic Resonance Imaging (MRI), which can distinguish between fat and water protons thanks to their different magnetic resonance properties. Since fat tissues have higher carbon/hydrogen content than other soft tissues and bones, they can be easily assessed by Computed Tomography (CT) as well. Interestingly, MRI also discriminates between white and brown adipose tissue; the latter has long been regarded as a potential target for anti-obesity drugs because of its ability to enhance energy consumption through increased thermogenesis. Positron Emission Tomography (PET) performed with 18F-FDG as a glucose analogue radiotracer reflects well the metabolic rate in body tissues and consequently is the technique of choice for studies of BAT metabolism. This review will focus on the main non-invasive imaging techniques (MRI, CT and PET) that are fundamental for the assessment, quantification and functional characterization of fat deposits in small laboratory animals. 
The contribution of optical techniques, which are currently regarded

  16. Wavelet-Based Frequency Response Function: Comparative Study of Input Excitation

    Directory of Open Access Journals (Sweden)

    K. Dziedziech

    2014-01-01

    Full Text Available Time-variant systems can be found in many areas of engineering. It is widely accepted that the classical Fourier-based methods are not suitable for the analysis and identification of such systems. The time-variant frequency response function—based on the continuous wavelet transform—is used in this paper for the analysis of time-variant systems. The focus is on the comparative study of various broadband input excitations. The performance of the method is tested using simulated data from a simple MDOF system and experimental data from a frame-like structure.
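As a sanity check on the idea of a wavelet-based, time-variant FRF, the sketch below implements a plain Morlet CWT by direct convolution (an illustration, not the paper's implementation) and estimates H(f, t) as the ratio of the output and input transforms; for a constant-gain system the estimate should equal the gain wherever the input has energy:

```python
import numpy as np

def morlet_cwt(x, dt, freqs, w0=6.0):
    """Continuous wavelet transform with a complex Morlet wavelet,
    evaluated at the requested analysis frequencies."""
    n = len(x)
    tau = (np.arange(n) - n // 2) * dt              # centred wavelet time axis
    out = np.empty((len(freqs), n), dtype=complex)
    for i, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)                    # scale giving centre frequency f
        psi = np.exp(1j * w0 * tau / s) * np.exp(-(tau / s) ** 2 / 2) / np.sqrt(s)
        out[i] = np.convolve(x, np.conj(psi[::-1]), mode="same") * dt
    return out

# Broadband chirp through a trivial gain-2 "system": the time-variant FRF
# estimate W_y / W_u should be 2 at all times and frequencies with input energy.
fs, T = 256.0, 8.0
t = np.arange(0, T, 1 / fs)
u = np.sin(2 * np.pi * (2.0 + 2.0 * t) * t)         # input sweeping upward in frequency
y = 2.0 * u                                         # output of a constant-gain system
freqs = np.array([2.0, 4.0, 8.0])
Wu = morlet_cwt(u, 1 / fs, freqs)
Wy = morlet_cwt(y, 1 / fs, freqs)
H = Wy / np.where(np.abs(Wu) > 1e-3, Wu, np.nan)    # time-variant FRF estimate
```

For a genuinely time-variant system, |H(f, t)| would change along the time axis, which is exactly what the classical Fourier FRF cannot show.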

  17. GRUKON - A package of applied computer programs system input and operating procedures of functional modules

    International Nuclear Information System (INIS)

    Sinitsa, V.V.; Rineiskij, A.A.

    1993-04-01

    This manual describes a software package for the production of multigroup neutron cross-sections from evaluated nuclear data files. It presents the information necessary to run the program's modules, including the operating procedures of the program, the data input, the macrocommand language, and the assignment of the system's procedures. This report also presents the methodology used in the coding of the individual modules: the rules, the syntax, and the method of procedures. An example of the application of the data processing module is also presented. (author)

  18. Study on embodied CO2 transfer between the Jing-Jin-Ji region and other regions in China: a quantification using an interregional input-output model.

    Science.gov (United States)

    Chen, Mengmeng; Wu, Sanmang; Lei, Yalin; Li, Shantong

    2018-03-08

    The Jing-Jin-Ji region (i.e., Beijing, Tianjin, and Hebei) is China's key development region, but it is also the region with the most serious air pollution in China. High fossil fuel consumption is the major source of both carbon dioxide (CO2) emissions and air pollutants. Therefore, it is important to reveal the sources of CO2 emissions to control air pollution in the Jing-Jin-Ji region. In this study, an interregional input-output model was applied to quantitatively estimate the embodied CO2 transfer between the Jing-Jin-Ji region and other regions in China using China's interregional input-output data for 2010. The results indicated that there was a significant difference in production-based CO2 emissions across China, and that the Jing-Jin-Ji region and its surrounding regions were the main sources of production-based CO2 emissions in China. Hebei Province exported a large amount of embodied CO2 to meet the investment, consumption, and export demands of Beijing and Tianjin. The Jing-Jin-Ji region exported a great deal of embodied CO2 to the coastal provinces of southeast China and imported it from neighboring provinces.
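The embodied-transfer accounting rests on the standard Leontief framework: total output x = (I - A)^-1 y, and the emissions in region i attributable to region r's final demand follow by weighting output with direct emission intensities. A toy two-region example (illustrative numbers, not the paper's 2010 China tables):

```python
import numpy as np

# Technical coefficients: input from region i required per unit output of region j
A = np.array([[0.20, 0.10],
              [0.15, 0.25]])
# Final demand: column r is the demand originating in region r
y = np.array([[100.0,  20.0],
              [ 30.0, 150.0]])
# Direct CO2 emission intensity of each region's output (illustrative units)
f = np.array([0.8, 1.2])

L = np.linalg.inv(np.eye(2) - A)   # Leontief inverse
x = L @ y                          # output in region i driven by region r's demand
E = f[:, None] * x                 # emissions in region i embodied in region r's demand
```

`E[0, 1]` is then the embodied transfer from region 0 to region 1: emissions occurring in region 0 that are caused by final demand in region 1, which is the quantity the paper reports between Hebei and Beijing/Tianjin.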

  19. Estimation of an image derived input function with MR-defined carotid arteries in FDG-PET human studies using a novel partial volume correction method

    DEFF Research Database (Denmark)

    Sari, Hasan; Erlandsson, Kjell; Law, Ian

    2017-01-01

    Kinetic analysis of 18F-fluorodeoxyglucose positron emission tomography data requires accurate knowledge of the arterial input function. The gold standard method to measure the arterial input function requires the collection of arterial blood samples and is invasive. Measuring an image-deriv... input function (p > 0.12 for grey matter and white matter). Hence, the presented image-derived input function extraction method can be a practical alternative for noninvasively analyzing dynamic 18F-fluorodeoxyglucose data without the need for blood sampling.

  20. Reagent-Free Quantification of Aqueous Free Chlorine via Electrical Readout of Colorimetrically Functionalized Pencil Lines.

    Science.gov (United States)

    Mohtasebi, Amirmasoud; Broomfield, Andrew D; Chowdhury, Tanzina; Selvaganapathy, P Ravi; Kruse, Peter

    2017-06-21

    Colorimetric methods are commonly used to quantify free chlorine in drinking water. However, these methods are not suitable for reagent-free, continuous, and autonomous applications. Here, we demonstrate how functionalization of a pencil-drawn film with phenyl-capped aniline tetramer (PCAT) can be used for quantitative electric readout of free chlorine concentrations. The functionalized film can be implemented in a simple fluidic device for continuous sensing of aqueous free chlorine concentrations. The sensor is selective to free chlorine and can undergo a reagent-free reset for further measurements. Our sensor is superior to electrochemical methods in that it does not require a reference electrode. It is capable of quantification of free chlorine in the range of 0.1-12 ppm with higher precision than colorimetric (absorptivity) methods. The interactions of PCAT with the pencil-drawn film upon exposure to hypochlorite were characterized spectroscopically. A previously reported detection mechanism relied on the measurement of a baseline shift to quantify free chlorine concentrations. The new method demonstrated here measures the initial spike size upon exposure to free chlorine. It relies on a fast charge build-up on the sensor film due to intermittent PCAT salt formation. It has the advantage of being significantly faster than the measurement of baseline shift, but it cannot be used to detect gradual changes in free chlorine concentration without the use of frequent reset pulses. The stability of PCAT was examined in the presence of free chlorine as a function of pH. While most ions commonly present in drinking water do not interfere with the free chlorine detection, other oxidants may contribute to the signal. Our sensor is easy to fabricate and robust, operates reagent-free, and has very low power requirements, making it suitable for remote deployment.

  1. Dynamic Contrast-Enhanced Perfusion MRI of High Grade Brain Gliomas Obtained with Arterial or Venous Waveform Input Function.

    Science.gov (United States)

    Filice, Silvano; Crisi, Girolamo

    2016-01-01

    The aim of this study was to evaluate the differences in dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) perfusion estimates of high-grade brain gliomas (HGG) due to the use of an input function (IF) obtained from arterial (AIF) and venous (VIF) approaches, respectively, by two different commercially available software applications. This prospective study includes 20 patients with a pathologically confirmed diagnosis of high-grade glioma. The data were processed using two dedicated commercial DCE packages, both based on the extended Tofts model, the first customized to obtain the input function from arterial measurement and the second from sagittal sinus sampling. The quantitative parametric perfusion maps estimated by the two software packages were compared by means of a region-of-interest (ROI) analysis. The resulting input functions from venous and arterial data were also compared. No significant difference was found between the perfusion parameters obtained with the two software packages at the .05 significance level. The comparison of the VIFs and AIFs obtained by the two packages also showed no statistical differences. Direct comparison of DCE-MRI measurements with an IF generated by means of an arterial or venous waveform led to no statistical difference in quantitative metrics for evaluating HGG. However, additional research involving DCE-MRI acquisition protocols and post-processing would be beneficial to further substantiate the effectiveness of the venous approach as the IF method compared with arterial-based IF measurement. Copyright © 2015 by the American Society of Neuroimaging.

  2. Latitudinal and seasonal variability of the micrometeor input function: A study using model predictions and observations from Arecibo and PFISR

    Science.gov (United States)

    Fentzke, J. T.; Janches, D.; Sparks, J. J.

    2009-05-01

    In this work, we use a semi-empirical model of the micrometeor input function (MIF) together with meteor head-echo observations obtained with two high power and large aperture (HPLA) radars, the 430 MHz Arecibo Observatory (AO) radar in Puerto Rico (18°N, 67°W) and the 450 MHz Poker Flat Incoherent Scatter Radar (PFISR) in Alaska (65°N, 147°W), to study the seasonal and geographical dependence of the meteoric flux in the upper atmosphere. The model, recently developed by Janches et al. [2006a. Modeling the global micrometeor input function in the upper atmosphere observed by high power and large aperture radars. Journal of Geophysical Research 111] and Fentzke and Janches [2008. A semi-empirical model of the contribution from sporadic meteoroid sources on the meteor input function observed at Arecibo. Journal of Geophysical Research (Space Physics) 113 (A03304)], includes an initial mass flux that is provided by the six known meteor sources (i.e. orbital families of dust) as well as detailed modeling of meteoroid atmospheric entry and ablation physics. In addition, we use a simple ionization model to treat radar sensitivity issues by defining minimum electron volume density production thresholds required in the meteor head-echo plasma for detection. This simplified approach works well because we use observations from two radars with similar frequencies, but different sensitivities and locations. This methodology allows us to explore the initial input of particles and how it manifests in different parts of the MLT as observed by these instruments without the need to invoke more sophisticated plasma models, which are under current development. The comparisons between model predictions and radar observations show excellent agreement between diurnal, seasonal, and latitudinal variability of the detected meteor rate and radial velocity distributions, allowing us to understand how individual meteoroid populations contribute to the overall flux at a particular

  3. Origin and function of short-latency inputs to the neural substrates underlying the acoustic startle reflex

    Directory of Open Access Journals (Sweden)

    Ricardo eGómez-Nieto

    2014-07-01

    Full Text Available The acoustic startle reflex (ASR is a survival mechanism of alarm, which rapidly alerts the organism to a sudden loud auditory stimulus. In rats, the primary ASR circuit encompasses three serially connected structures: cochlear root neurons (CRNs, neurons in the caudal pontine reticular nucleus (PnC, and motoneurons in the medulla and spinal cord. It is well established that both CRNs and PnC neurons receive short-latency auditory inputs to mediate the ASR. Here, we investigated the anatomical origin and functional role of these inputs using a multidisciplinary approach that combines morphological, electrophysiological and behavioural techniques. Anterograde tracer injections into the cochlea suggest that CRNs somata and dendrites receive inputs depending, respectively, on their basal or apical cochlear origin. Confocal colocalization experiments demonstrated that these cochlear inputs are immunopositive for the vesicular glutamate transporter 1. Using extracellular recordings in vivo followed by subsequent tracer injections, we investigated the response of PnC neurons after contra-, ipsi-, and bilateral acoustic stimulation and identified the source of their auditory afferents. Our results showed that the binaural firing rate of PnC neurons was higher than the monaural, exhibiting higher spike discharges with contralateral than ipsilateral acoustic stimulations. Our histological analysis confirmed the CRNs as the principal source of short-latency acoustic inputs, and indicated that other areas of the cochlear nucleus complex are not likely to innervate PnC. Behaviourally, we observed a strong reduction of ASR amplitude in monaural earplugged rats that corresponds with the binaural summation process shown in our electrophysiological findings. 
Our study contributes to a better understanding of the role of neuronal mechanisms in auditory alerting behaviours and provides strong evidence that the CRNs-PnC pathway mediates fast neurotransmission and binaural

  4. Sensitivity of the 252Cf(sf) neutron observables to the FREYA input yield functions Y(A, Z, TKE)

    Directory of Open Access Journals (Sweden)

    Randrup Jørgen

    2017-01-01

    Full Text Available Within the framework of the fission event generator FREYA, we are studying the sensitivity of various neutron observables to the yield distribution Y(A,Z,TKE) used as input to the code. Concentrating on spontaneous fission of 252Cf, we have sampled a large number of different input yield functions based on χ2 fits to the experimental data on Y(A) and Y(TKE|A). For each of these input yield distributions, we then use FREYA to generate a large sample of complete fission events from which we extract a variety of neutron observables, including the multiplicity distribution, the associated correlation coefficients, and its factorial moments, the dependence of the mean neutron multiplicity on the total fragment kinetic energy TKE and on the fragment mass number A, the neutron energy spectrum, and the two-neutron angular correlation function. In this way, we can determine the variation of these observables resulting from the uncertainties in the experimental measurements. The imposition of a constraint on the resulting mean neutron multiplicity reduces the variation of the calculated neutron observables and provides a means for shrinking the uncertainties associated with the measured data.

  5. Sensitivity of the 252Cf(sf) neutron observables to the FREYA input yield functions Y(A, Z, TKE)

    Science.gov (United States)

    Randrup, Jørgen; Talou, Patrick; Vogt, Ramona

    2017-09-01

    Within the framework of the fission event generator FREYA, we are studying the sensitivity of various neutron observables to the yield distribution Y (A,Z,TKE) used as input to the code. Concentrating on spontaneous fission of 252Cf, we have sampled a large number of different input yield functions based on χ2 fits to the experimental data on Y (A) and Y (TKE|A). For each of these input yield distributions, we then use FREYA to generate a large sample of complete fission events from which we extract a variety of neutron observables, including the multiplicity distribution, the associated correlation coefficients, and its factorial moments, the dependence of the mean neutron multiplicity on the total fragment kinetic energy TKE and on the fragment mass number A, the neutron energy spectrum, and the two-neutron angular correlation function. In this way, we can determine the variation of these observables resulting from the uncertainties in the experimental measurements. The imposition of a constraint on the resulting mean neutron multiplicity reduces the variation of the calculated neutron observables and provides a means for shrinking the uncertainties associated with the measured data.
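The resampling idea, propagating experimental yield uncertainties through to an observable, can be sketched generically. The double-Gaussian yield curve, the assumed 5% errors, and the "mean fragment mass" observable below are illustrative stand-ins, not FREYA physics:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fragment-mass yield: two Gaussian humps with assumed 5% errors
A_grid = np.arange(80, 173)
y_central = (np.exp(-0.5 * ((A_grid - 108) / 7.0) ** 2)
             + np.exp(-0.5 * ((A_grid - 144) / 7.0) ** 2))
y_central /= y_central.sum()
y_err = 0.05 * y_central

def observable(y):
    """Stand-in observable: mean fragment mass under yield y (illustration only)."""
    return np.sum(A_grid * y)

samples = []
for _ in range(2000):
    y = np.clip(y_central + rng.normal(0.0, y_err), 0.0, None)
    y /= y.sum()                    # each sampled yield function is renormalized
    samples.append(observable(y))
samples = np.array(samples)
spread = samples.std()              # observable variation induced by yield uncertainty
```

The spread of the sampled observable plays the role of the uncertainty band FREYA derives for its neutron observables; imposing a constraint (e.g. fixing the mean multiplicity) corresponds to rejecting or reweighting samples, which shrinks this spread.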

  6. Quantification of brain perfusion SPECT with N-isopropyl-p-iodoamphetamine using noninvasive microsphere method. Estimation of arterial input by dynamic imaging

    International Nuclear Information System (INIS)

    Yonekura, Yoshiharu; Sugihara, Hideki; Taniguchi, Yoshimitsu; Aoki, Etsuo; Furuichi, Kenji; Miyazaki, Yoshiharu.

    1997-01-01

    We have developed a noninvasive method to quantify brain perfusion SPECT with 123I-N-isopropyl-p-iodoamphetamine (IMP) using serial dynamic planar imaging of the initial transit phase. The method is based on the microsphere model, but does not require arterial sampling. Serial dynamic planar imaging was performed for 6 min after the bolus injection of IMP (167 MBq in 1.5 ml), followed by additional planar imaging at 20 min and a SPECT scan thereafter. The total arterial input to the brain during the initial 5 min after injection was estimated from the injected dose, corrected for lung retention, divided by cardiac output (CO). CO was estimated from the initial transit of IMP in the right heart. The cardiac output index (COI), obtained from the integral of the first transit of IMP in the right heart divided by the injected dose, was calibrated against CO measured by Doppler ultrasonography. Regional cerebral blood flow (rCBF) obtained by this method in normal subjects was acceptable. However, the results may be influenced by the injection technique, and careful attention is required in clinical applications of this method. (author)
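
    The microsphere arithmetic described above can be sketched as follows. This is a hedged, minimal reconstruction with hypothetical numbers, not the authors' implementation: the arterial input is approximated by the injected dose corrected for lung retention and divided by cardiac output, and regional flow is proportional to regional uptake.

```python
def microsphere_rcbf(brain_counts_mbq, injected_dose_mbq, lung_retention_frac,
                     cardiac_output_l_min, brain_volume_ml):
    """rCBF (ml/100 ml/min) under the microsphere model: the fraction of the
    lung-corrected dose trapped in the brain times cardiac output, per volume."""
    available_dose = injected_dose_mbq * (1.0 - lung_retention_frac)
    uptake_fraction = brain_counts_mbq / available_dose   # trapped fraction
    flow_ml_min = uptake_fraction * cardiac_output_l_min * 1000.0
    return 100.0 * flow_ml_min / brain_volume_ml
```

    With hypothetical values (25 MBq brain uptake from a 167 MBq injection, 25% lung retention, CO of 5 L/min, 1400 ml brain) this yields a flow near the normal gray-and-white-matter average.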

  7. Activity and function recognition for moving and static objects in urban environments from wide-area persistent surveillance inputs

    Science.gov (United States)

    Levchuk, Georgiy; Bobick, Aaron; Jones, Eric

    2010-04-01

    In this paper, we describe results from experimental analysis of a model designed to recognize activities and functions of moving and static objects from low-resolution wide-area video inputs. Our model is based on representing the activities and functions using three variables: (i) time; (ii) space; and (iii) structures. The activity and function recognition is achieved by imposing lexical, syntactic, and semantic constraints on the lower-level event sequences. In the reported research, we have evaluated the utility and sensitivity of several algorithms derived from natural language processing and pattern recognition domains. We achieved high recognition accuracy for a wide range of activity and function types in the experiments using Electro-Optical (EO) imagery collected by Wide Area Airborne Surveillance (WAAS) platform.

  8. Interprovincial transfer of embodied energy between the Jing-Jin-Ji area and other provinces in China: A quantification using interprovincial input-output model.

    Science.gov (United States)

    Chen, Weiming; Wu, Sanmang; Lei, Yalin; Li, Shantong

    2017-04-15

    Commodity trade between regions implies a large amount of energy transfer. As an important economic growth pole of China, the Jing-Jin-Ji area (Beijing-Tianjin-Hebei) is also one of the areas with the largest energy consumption in China. Moreover, the primary energy consumer goods in this area are fossil fuels, such as coal, which has led to serious air pollution in the area. Therefore, reducing energy consumption while maintaining sustained economic growth is an important task for the Jing-Jin-Ji area. In this study, an interprovincial input-output model was applied to quantitatively estimate the embodied energy transfer between the Jing-Jin-Ji area and other provinces in China. The results indicated that the Metal and nonmetal mineral processing industry and the Electrical, gas and water industry in the Jing-Jin-Ji area exported a large amount of embodied energy to the Yangtze River Delta and the Pearl River Delta. However, the embodied energy export of the Jing-Jin-Ji area was mainly accounted for by Hebei province; Beijing and Tianjin even showed some net import of embodied energy. The embodied energy transfer between Tianjin, Hebei and other provinces was mainly driven by investment, while the main medium of embodied energy transfer between Beijing and other provinces was consumption. Therefore, we suggest that the Jing-Jin-Ji area should further increase its degree of dependence on other provinces' energy-intensive products and reduce its export of energy-intensive products. In addition, there should be differences in the energy and industrial policies among Beijing, Tianjin and Hebei, and the problems of high energy consumption and a high proportion of heavy industry in Hebei should be resolved first. Copyright © 2017 Elsevier B.V. All rights reserved.
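
    The core of such an input-output quantification is the Leontief accounting identity: the energy embodied in a final-demand vector y is e·x, where x = (I − A)⁻¹y is the gross output required to deliver y. The two-sector sketch below uses hypothetical coefficients, not the interprovincial table of the study.

```python
def solve2(M, b):
    """Solve the 2x2 linear system M x = b by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - b[0] * M[1][0]) / det]

def embodied_energy(A, e, y):
    # Gross output x needed to deliver final demand y: (I - A) x = y
    IA = [[1.0 - A[0][0], -A[0][1]],
          [-A[1][0], 1.0 - A[1][1]]]
    x = solve2(IA, y)
    # Total (direct + indirect) energy embodied in y
    return e[0] * x[0] + e[1] * x[1]

A = [[0.1, 0.2], [0.3, 0.1]]  # interindustry technical coefficients (toy)
e = [2.0, 0.5]                # direct energy intensity of each sector
y = [10.0, 5.0]               # final demand, e.g. exports to another region
```

    The embodied total exceeds the direct content e·y because upstream intermediate inputs are counted as well; this indirect component is exactly what makes exports of energy-intensive products a vehicle for interprovincial energy transfer.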

  9. Visualization and quantification of large bowel motility with functional cine-MRI

    International Nuclear Information System (INIS)

    Buhmann, S.; Wielage, C.; Fischer, T.; Reiser, M.; Lienemann, A.; Kirchhoff, C.; Mussack, T.

    2005-01-01

    Purpose: to develop and evaluate a method to visualize and quantify large bowel motility using functional cine MRI. Methods: fifteen healthy individuals (8 males, 7 females, 20 to 45 years old) with no history or present symptoms of bowel disorders underwent a functional cine MRI examination at 6 a.m., after fasting for at least eight hours, before and after oral administration of Senna tea (a mild stimulating purgative). Two consecutive sets of repeated measurements of the entire abdomen were performed using a 1.5T MRI system with coronal T2-weighted HASTE sequences anatomically adjusted to the course of the large bowel. A navigator technique was used for respiratory gating at the level of the right dorsal diaphragm. The changes in diameter (given in cm) were measured at 5 different locations of the ascending (AC), transverse (TC) and descending colon (DC), and assessed as parameters of bowel motility. Results: the mean values as a statistical measure of large bowel relaxation were determined. Before ingestion of Senna tea, the mean diameter measured 3.41 cm (ascending colon), 3 cm (transverse colon) and 2.67 cm (descending colon). After the ingestion of Senna tea, the mean diameter increased to 3.69 cm (ascending colon), 3.4 cm (transverse colon) and 2.9 cm (descending colon). A statistically significant difference was demonstrated with the Wilcoxon test (level of confidence 0.05). To determine the dynamic increase, the changes of the statistical scatter amplitude relative to the mean value were expressed as percentages before and after the ingestion of Senna tea. Thereby, an increase in variation and dynamic range was detected for the AC (112.9%) and DC (100%), but a decrease in the dynamics for the TC (69%). Conclusion: a non-invasive method for the assessment of bowel motility was developed for the first time. The use of functional cine MRI utilizing a prokinetic stimulus allowed visualisation and quantification of large bowel motility.

  10. AN ACCURATE MODELING OF DELAY AND SLEW METRICS FOR ON-CHIP VLSI RC INTERCONNECTS FOR RAMP INPUTS USING BURR’S DISTRIBUTION FUNCTION

    Directory of Open Access Journals (Sweden)

    Rajib Kar

    2010-09-01

    Full Text Available This work presents an accurate and efficient model to compute the delay and slew metrics of on-chip interconnects in high-speed CMOS circuits for ramp inputs. Our metric is based on Burr's distribution function, which is used to characterize the normalized homogeneous portion of the step response. We used the PERI (Probability distribution function Extension for Ramp Inputs) technique, which extends delay and slew metrics for step inputs to the more general and realistic non-step inputs. The accuracy of our models is justified by comparing the results with those of SPICE simulations.
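
    The appeal of fitting the normalized step response with a two-parameter CDF is that percentile points, and hence delay and slew metrics, come out in closed form. The sketch below assumes the Burr XII form F(t) = 1 − (1 + t^c)^(−k); the parameters c and k are hypothetical here, standing in for values that would be matched to the RC circuit's moments.

```python
def burr_percentile(p, c, k):
    """Time at which the Burr XII step response F(t) = 1 - (1 + t**c)**(-k)
    reaches fraction p, obtained by inverting the CDF."""
    return ((1.0 - p) ** (-1.0 / k) - 1.0) ** (1.0 / c)

def delay_and_slew(c, k):
    t50 = burr_percentile(0.5, c, k)                       # 50% delay metric
    slew = burr_percentile(0.9, c, k) - burr_percentile(0.1, c, k)  # 10-90%
    return t50, slew
```

    For example, with c = 2 and k = 1 the 50% delay is exactly (2^(1/k) − 1)^(1/c) = 1 in normalized time units.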

  11. The functional upregulation of piriform cortex is associated with cross-modal plasticity in loss of whisker tactile inputs.

    Directory of Open Access Journals (Sweden)

    Bing Ye

    Full Text Available Cross-modal plasticity is characterized as the hypersensitivity of remaining modalities after a sensory function is lost in rodents, which ensures their awareness of environmental changes. The cellular and molecular mechanisms underlying cross-modal sensory plasticity remain unclear. We aim to study the role of different types of neurons in cross-modal plasticity. In addition to behavioral tasks in mice, whole-cell recordings at the excitatory and inhibitory neurons, and their two-photon imaging, were conducted in piriform cortex. We produced a mouse model of cross-modal sensory plasticity in which olfactory function was upregulated by trimming whiskers to deprive their sensory inputs. Alongside olfactory hypersensitivity, pyramidal neurons and excitatory synapses were functionally upregulated, while GABAergic cells and inhibitory synapses were downregulated, in piriform cortex from the mice with cross-modal sensory plasticity, compared with controls. A crosswire connection between barrel cortex and piriform cortex was established in cross-modal plasticity. An upregulation of pyramidal neurons and a downregulation of GABAergic neurons strengthen the activities of neuronal networks in piriform cortex, which may be responsible for olfactory hypersensitivity after a loss of whisker tactile input. This finding provides clues for developing therapeutic strategies to promote sensory recovery and substitution.

  12. Ultra-low input transcriptomics reveal the spore functional content and phylogenetic affiliations of poorly studied arbuscular mycorrhizal fungi.

    Science.gov (United States)

    Beaudet, Denis; Chen, Eric C H; Mathieu, Stephanie; Yildirir, Gokalp; Ndikumana, Steve; Dalpé, Yolande; Séguin, Sylvie; Farinelli, Laurent; Stajich, Jason E; Corradi, Nicolas

    2017-12-02

    Arbuscular mycorrhizal fungi (AMF) are a group of soil microorganisms that establish symbioses with the vast majority of land plants. To date, generation of AMF coding information has been limited to model genera that grow well axenically: Rhizoglomus and Gigaspora. Meanwhile, data on the functional gene repertoire of most AMF families are non-existent. Here, we provide primary large-scale transcriptome data from eight poorly studied AMF species (Acaulospora morrowiae, Diversispora versiforme, Scutellospora calospora, Racocetra castanea, Paraglomus brasilianum, Ambispora leptoticha, Claroideoglomus claroideum and Funneliformis mosseae) using ultra-low input ribonucleic acid (RNA)-seq approaches. Our analyses reveal that quiescent spores of many AMF species harbour a broad functional repertoire, and they solidify known evolutionary relationships within the group. Our findings demonstrate that RNA-seq data obtained from low-input RNA are reliable in comparison to conventional RNA-seq experiments. Thus, our methodology can potentially be used to deepen our understanding of fungal microbial function and phylogeny using minute amounts of RNA material. © The Author 2017. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.

  13. Transcutaneous measurement of the arterial input function in positron emission tomography

    International Nuclear Information System (INIS)

    Litton, J.E.; Eriksson, L.

    1990-01-01

    Positron emission tomography (PET) provides a powerful tool in medical research. Biochemical function can be both precisely localized and quantitatively measured. To achieve reliable quantitation, it is necessary to know the time course of activity concentration in the arterial blood during the measurement. In this study, the arterial blood curve from the brachial artery is compared to the activity measured in the internal carotid artery with a new transcutaneous detector.

  14. Quantitative contrast-enhanced first-pass cardiac perfusion MRI at 3 tesla with accurate arterial input function and myocardial wall enhancement.

    Science.gov (United States)

    Breton, Elodie; Kim, Daniel; Chung, Sohae; Axel, Leon

    2011-09-01

    To develop, and validate in vivo, a robust quantitative first-pass perfusion cardiovascular MR (CMR) method with accurate arterial input function (AIF) and myocardial wall enhancement. A saturation-recovery (SR) pulse sequence was modified to sequentially acquire multiple slices after a single nonselective saturation pulse at 3 Tesla. In each heartbeat, an AIF image is acquired in the aortic root with a short time delay (TD) (50 ms), followed by the acquisition of myocardial images with longer TD values (∼150-400 ms). Longitudinal relaxation rates (R1 = 1/T1) were calculated using an ideal saturation recovery equation based on the Bloch equation, and corresponding gadolinium contrast concentrations were calculated assuming the fast water exchange condition. The proposed method was validated against a reference multi-point SR method by comparing their respective R1 measurements in the blood and left ventricular myocardium, before and at multiple time-points following contrast injections, in 7 volunteers. R1 measurements with the proposed method and the reference multi-point method were strongly correlated (r > 0.88, P < 10^-5) and in good agreement (mean difference ± 1.96 standard deviations: 0.131 ± 0.317 s^-1 for blood, 0.018 ± 0.140 s^-1 for myocardium). The proposed quantitative first-pass perfusion CMR method measured accurate R1 values for quantification of AIF and myocardial wall contrast agent concentrations in 3 cardiac short-axis slices, in a total acquisition time of 523 ms per heartbeat. Copyright © 2011 Wiley-Liss, Inc.
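
    The two conversion steps named above have simple closed forms. A minimal sketch, assuming the ideal saturation-recovery signal model S = S0(1 − exp(−TD·R1)) and the fast-exchange linear relaxivity relation; the relaxivity value used below is a typical literature figure, not taken from this record.

```python
import math

def r1_from_sr(signal, s0, td_s):
    """Invert the ideal saturation-recovery relation
    S = S0 * (1 - exp(-TD * R1)) for R1 = 1/T1 (in 1/s)."""
    return -math.log(1.0 - signal / s0) / td_s

def gd_concentration(r1_post, r1_pre, relaxivity_per_s_mM=4.5):
    """[Gd] in mM assuming fast water exchange: R1_post = R1_pre + r1 * [Gd].
    The relaxivity default is a typical 3T value, an assumption here."""
    return (r1_post - r1_pre) / relaxivity_per_s_mM
```

    The short TD used for the AIF image keeps the blood signal on the steep, unsaturated part of this recovery curve, which is why the same equation can serve both the blood pool and the more slowly sampled myocardium.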

  15. Functional decomposition with an efficient input support selection for sub-functions based on information relationship measures

    NARCIS (Netherlands)

    Rawski, M.; Jozwiak, L.; Luba, T.

    2001-01-01

    The functional decomposition of binary and multi-valued discrete functions and relations has been gaining more and more recognition. It has important applications in many fields of modern digital system engineering, such as combinational and sequential logic synthesis for VLSI systems, pattern

  16. Dynamic PET of human liver inflammation: impact of kinetic modeling with optimization-derived dual-blood input function.

    Science.gov (United States)

    Wang, Guobao; Corwin, Michael T; Olson, Kristin A; Badawi, Ramsey D; Sarkar, Souvik

    2018-05-30

    The hallmark of nonalcoholic steatohepatitis is hepatocellular inflammation and injury in the setting of hepatic steatosis. Recent work has indicated that dynamic 18F-FDG PET with kinetic modeling has the potential to assess hepatic inflammation noninvasively, while static FDG-PET did not show promise. Because the liver has dual blood supplies, kinetic modeling of dynamic liver PET data is challenging in human studies. The objective of this study is to evaluate and identify a dual-input kinetic modeling approach for dynamic FDG-PET of human liver inflammation. Fourteen human patients with nonalcoholic fatty liver disease were included in the study. Each patient underwent a one-hour dynamic FDG-PET/CT scan and had a liver biopsy within six weeks. Three models were tested for kinetic analysis: the traditional two-tissue compartmental model with an image-derived single-blood input function (SBIF), a model with population-based dual-blood input function (DBIF), and a modified model with optimization-derived DBIF through a joint estimation framework. The three models were compared using the Akaike information criterion (AIC), F test and histopathologic inflammation reference. The results showed that the optimization-derived DBIF model improved the fitting of liver time activity curves and achieved lower AIC values and higher F values than the SBIF and population-based DBIF models in all patients. The optimization-derived model significantly increased FDG K1 estimates by 101% and 27% as compared with traditional SBIF and population-based DBIF. K1 by the optimization-derived model was significantly associated with histopathologic grades of liver inflammation while the other two models did not provide statistical significance. In conclusion, modeling of DBIF is critical for kinetic analysis of dynamic liver FDG-PET data in human studies. The optimization-derived DBIF model is more appropriate than SBIF and population-based DBIF for dynamic FDG-PET of liver inflammation.

  17. ESTIMATION OF COBB-DOUGLAS AND TRANSLOG PRODUCTION FUNCTIONS WITH CAPITAL AND GENDER DISAGGREGATED LABOR INPUTS IN THE USA

    Directory of Open Access Journals (Sweden)

    Gertrude Sebunya Muwanga

    2018-01-01

    Full Text Available This is an empirical investigation of the homogeneity of gender-disaggregated labor using the Cobb-Douglas and single-/multi-factor translog production functions, and labor productivity functions, for the USA. The results, based on the single-factor translog model, indicated that: an increase in the capital/female labor ratio increases aggregate output; male labor is more productive than female labor, which is more productive than capital; a simultaneous increase in the quantity allocated and the productivity of an input leads to an increase in output; female labor productivity has grown more slowly than male labor productivity; it is much easier to substitute male labor for capital than female labor; and the three inputs are neither perfect substitutes nor perfect complements. As a consequence, male and female labor are not homogeneous inputs. Efforts to investigate the factors influencing gender-disaggregated labor productivity, and to design policies that achieve gender parity in numbers/productivity in the labor force and increase the ease of substitutability between male and female labor, are required.
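
    The functional form at issue can be made concrete with a small sketch. The exponents below are hypothetical, chosen only so that male labor is the most productive input, as the abstract reports; they are not the paper's estimates.

```python
# Cobb-Douglas with capital and gender-disaggregated labor:
#   Y = A * K**alpha * Lm**beta * Lf**gamma   (hypothetical exponents)
ALPHA, BETA, GAMMA = 0.25, 0.45, 0.30

def cobb_douglas(K, Lm, Lf, A=1.0):
    return A * K**ALPHA * Lm**BETA * Lf**GAMMA

def marginal_product(which, K, Lm, Lf):
    """For Cobb-Douglas, the MP of an input is its output elasticity times Y/X."""
    y = cobb_douglas(K, Lm, Lf)
    elasticity = {"K": ALPHA, "Lm": BETA, "Lf": GAMMA}[which]
    quantity = {"K": K, "Lm": Lm, "Lf": Lf}[which]
    return elasticity * y / quantity
```

    With equal input quantities, the ordering of marginal products is fixed entirely by the exponents, which is one way the homogeneity (or not) of male and female labor shows up: identical inputs would require beta = gamma.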

  18. Temporal dynamics of hot desert microbial communities reveal structural and functional responses to water input.

    Science.gov (United States)

    Armstrong, Alacia; Valverde, Angel; Ramond, Jean-Baptiste; Makhalanyane, Thulani P; Jansson, Janet K; Hopkins, David W; Aspray, Thomas J; Seely, Mary; Trindade, Marla I; Cowan, Don A

    2016-09-29

    The temporal dynamics of desert soil microbial communities are poorly understood. Given the implications for ecosystem functioning under a global change scenario, a better understanding of desert microbial community stability is crucial. Here, we sampled soils in the central Namib Desert on sixteen different occasions over a one-year period. Using Illumina-based amplicon sequencing of the 16S rRNA gene, we found that α-diversity (richness) was more variable at a given sampling date (spatial variability) than over the course of one year (temporal variability). Community composition remained essentially unchanged across the first 10 months, indicating that spatial sampling might be more important than temporal sampling when assessing β-diversity patterns in desert soils. However, a major shift in microbial community composition was found following a single precipitation event. This shift in composition was associated with a rapid increase in CO2 respiration and productivity, supporting the view that desert soil microbial communities respond rapidly to re-wetting and that this response may be the result of both taxon-specific selection and changes in the availability or accessibility of organic substrates. Recovery to quasi pre-disturbance community composition was achieved within one month after rainfall.

  19. Temporal dynamics of hot desert microbial communities reveal structural and functional responses to water input

    Energy Technology Data Exchange (ETDEWEB)

    Armstrong, Alacia; Valverde, Angel; Ramond, Jean-Baptiste; Makhalanyane, Thulani P.; Jansson, Janet K.; Hopkins, David W.; Aspray, Thomas J.; Seely, Mary; Trindade, Marla I.; Cowan, Don A.

    2016-09-29

    The temporal dynamics of desert soil microbial communities are poorly understood. Given the implications for ecosystem functioning under a global change scenario, a better understanding of desert microbial community stability is crucial. Here, we sampled soils in the central Namib Desert on sixteen different occasions over a one-year period. Using Illumina-based amplicon sequencing of the 16S rRNA gene, we found that α-diversity (richness) was more variable at a given sampling date (spatial variability) than over the course of one year (temporal variability). Community composition remained essentially unchanged across the first 10 months, indicating that spatial sampling might be more important than temporal sampling when assessing β-diversity patterns in desert soils. However, a major shift in microbial community composition was found following a single precipitation event. This shift in composition was associated with a rapid increase in CO2 respiration and productivity, supporting the view that desert soil microbial communities respond rapidly to re-wetting and that this response may be the result of both taxon-specific selection and changes in the availability or accessibility of organic substrates. Recovery to quasi pre-disturbance community composition was achieved within one month after rainfall.

  20. RET Functions as a Dual-Specificity Kinase that Requires Allosteric Inputs from Juxtamembrane Elements

    Directory of Open Access Journals (Sweden)

    Iván Plaza-Menacho

    2016-12-01

    Full Text Available Receptor tyrosine kinases exhibit a variety of activation mechanisms despite highly homologous catalytic domains. Such diversity arises through coupling of extracellular ligand-binding portions with highly variable intracellular sequences flanking the tyrosine kinase domain and specific patterns of autophosphorylation sites. Here, we show that the juxtamembrane (JM) segment enhances RET catalytic domain activity through Y687. This phospho-site is also required by the JM region to rescue an otherwise catalytically deficient RET activation-loop mutant lacking tyrosines. Structure-function analyses identified interactions between the JM hinge, αC helix, and an unconventional activation-loop serine phosphorylation site that engages the HRD motif and promotes phospho-tyrosine conformational accessibility and regulatory spine assembly. We demonstrate that this phospho-S909 arises from an intrinsic RET dual-specificity kinase activity and show that an equivalent serine is required for RET signaling in Drosophila. Our findings reveal dual-specificity and allosteric components for the mechanism of RET activation and signaling with direct implications for drug discovery.

  1. Strategies for the generation of parametric images of [11C]PIB with plasma input functions considering discriminations and reproducibility.

    Science.gov (United States)

    Edison, Paul; Brooks, David J; Turkheimer, Federico E; Archer, Hilary A; Hinz, Rainer

    2009-11-01

    Pittsburgh compound B, or [11C]PIB, is an amyloid imaging agent which shows a clear differentiation between subjects with Alzheimer's disease (AD) and controls. However, the observed signal difference in other forms of dementia, such as dementia with Lewy bodies (DLB), is smaller, and mildly cognitively impaired (MCI) subjects and some healthy elderly normals may show intermediate levels of [11C]PIB binding. The cerebellum, a commonly used reference region for non-specific tracer uptake in [11C]PIB studies in AD, may not be valid in prion disorders or monogenic forms of AD. The aims of this work were: (1) to compare methods for generating parametric maps of [11C]PIB retention in tissue using a plasma input function with respect to their ability to discriminate between AD subjects and controls, and (2) to estimate the test-retest reproducibility in AD subjects. 12 AD subjects (5 of whom underwent a repeat scan within 6 weeks) and 10 control subjects had 90-minute [11C]PIB dynamic PET scans, and arterial plasma input functions were measured. Parametric maps were generated with graphical analysis of reversible binding (Logan plot), irreversible binding (Patlak plot), and spectral analysis. Between-group differentiation was calculated using Student's t-test and comparisons between different methods were made using p values. Reproducibility was assessed by intraclass correlation coefficients (ICC). We found that the 75 min value of the impulse response function showed the best group differentiation and had a higher ICC than volume of distribution maps generated from Logan and spectral analysis. Patlak analysis of [11C]PIB binding was the least reproducible.
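
    Of the plasma-input methods compared above, the Logan plot is the most compact to illustrate. The sketch below runs it on synthetic one-tissue-compartment data (not the paper's [11C]PIB data): for reversible kinetics, the late-time slope of ∫Ct/Ct(t) versus ∫Cp/Ct(t) estimates the total volume of distribution VT = K1/k2. The rate constants and input curve are hypothetical.

```python
import math

def simulate_1tc(K1, k2, dt=0.01, T=60.0):
    """Forward-Euler simulation of dCt/dt = K1*Cp - k2*Ct with a decaying input."""
    t, Cp, Ct = [], [], []
    ct = 0.0
    for i in range(int(T / dt)):
        time = i * dt
        cp = math.exp(-0.1 * time)        # simple mono-exponential plasma input
        ct += dt * (K1 * cp - k2 * ct)    # Euler step of the tissue ODE
        t.append(time); Cp.append(cp); Ct.append(ct)
    return t, Cp, Ct

def logan_slope(t, Cp, Ct, t_start=30.0):
    """Least-squares slope of the Logan plot, restricted to its linear tail."""
    int_cp = int_ct = 0.0
    xs, ys = [], []
    for i in range(1, len(t)):
        h = t[i] - t[i - 1]
        int_cp += 0.5 * h * (Cp[i] + Cp[i - 1])  # trapezoidal running integrals
        int_ct += 0.5 * h * (Ct[i] + Ct[i - 1])
        if t[i] >= t_start:
            xs.append(int_cp / Ct[i])
            ys.append(int_ct / Ct[i])
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

    With K1 = 0.5 and k2 = 0.25 the recovered slope approaches the true VT of 2, up to discretization error; in practice the choice of the linear-tail start time is one source of the reproducibility differences the study quantifies.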

  2. MPC for LPV Systems Based on Parameter-Dependent Lyapunov Function with Perturbation on Control Input Strategy

    Directory of Open Access Journals (Sweden)

    Pornchai Bumroongsri

    2012-04-01

    Full Text Available In this paper, the model predictive control (MPC) algorithm for linear parameter varying (LPV) systems is proposed. The proposed algorithm consists of two steps. The first step is derived by using a parameter-dependent Lyapunov function and the second step is derived by using the perturbation on control input strategy. In order to achieve good control performance, the bounds on the rate of variation of the parameters are taken into account in the controller synthesis. An overall algorithm is proved to guarantee robust stability. The controller design is illustrated with two case studies of continuous stirred-tank reactors. Comparisons with other MPC algorithms for LPV systems have been undertaken. The results show that the proposed algorithm can achieve better control performance.

  3. Preliminary evaluation of MRI-derived input function for quantitative measurement of glucose metabolism in an integrated PET-MRI

    International Nuclear Information System (INIS)

    Anazodo, Udunna; Kewin, Matthew; Finger, Elizabeth; Thiessen, Jonathan; Hadway, Jennifer; Butler, John; Pavlosky, William; Prato, Frank; Thompson, Terry; St Lawrence, Keith

    2015-01-01

    PET semi-quantitative methods such as relative uptake value can be robust but offer no biological information and do not account for intra-subject variability in tracer administration or clearance. Simultaneous multimodal measurements that combine PET and MRI not only permit crucial multiparametric measurements but also provide a means of applying tracer kinetic modelling without the need for serial arterial blood sampling. In this study we adapted an image-derived input function (IDIF) method to improve characterization of glucose metabolism in an ongoing dementia study. Here we present preliminary results in a small group of frontotemporal dementia patients and controls. The IDIF was obtained directly from dynamic PET data guided by regions of interest drawn on the carotid vessels on high resolution T1-weighted MR images, and was corrected for contamination by non-arterial voxels. A validation of the method was performed in a porcine model in a PET-CT scanner, comparing the IDIF to direct arterial blood samples. The metabolic rate of glucose (CMRglc) was measured voxel-by-voxel in gray matter, producing maps that were compared between groups. The net influx rate (Ki) and global mean CMRglc are reported. A good correlation (r = 0.9, p < 0.0001) was found between the corrected IDIF and the input function measured from direct arterial blood sampling in the validation study. In 3 FTD patients and 3 controls, a trend towards hypometabolism was found in the frontal, temporal and parietal lobes, similar to significant differences previously reported by other groups. The global mean CMRglc and Ki observed in control subjects are in line with previous reports. In general, kinetic modelling of PET-FDG using an MR-IDIF can improve characterization of glucose metabolism in dementia. This method is feasible in multimodal studies that aim to combine PET molecular imaging with MRI, as dynamic PET can be acquired along with multiple MRI measurements.

  4. Preliminary evaluation of MRI-derived input function for quantitative measurement of glucose metabolism in an integrated PET-MRI

    Energy Technology Data Exchange (ETDEWEB)

    Anazodo, Udunna; Kewin, Matthew [Lawson Health Research Institute, Department of Medical Biophysics, Western University, London, Ontario (Canada); Finger, Elizabeth [Department of Clinical Neurological Sciences, Western University, London, Ontario (Canada); Thiessen, Jonathan; Hadway, Jennifer; Butler, John [Lawson Health Research Institute, Department of Medical Biophysics, Western University, London, Ontario (Canada); Pavlosky, William [Diagnostic Imaging, St Joseph' s Health Care, London, Ontario (Canada); Prato, Frank; Thompson, Terry; St Lawrence, Keith [Lawson Health Research Institute, Department of Medical Biophysics, Western University, London, Ontario (Canada)

    2015-05-18

    PET semi-quantitative methods such as relative uptake value can be robust but offer no biological information and do not account for intra-subject variability in tracer administration or clearance. Simultaneous multimodal measurements that combine PET and MRI not only permit crucial multiparametric measurements but also provide a means of applying tracer kinetic modelling without the need for serial arterial blood sampling. In this study we adapted an image-derived input function (IDIF) method to improve characterization of glucose metabolism in an ongoing dementia study. Here we present preliminary results in a small group of frontotemporal dementia patients and controls. The IDIF was obtained directly from dynamic PET data guided by regions of interest drawn on the carotid vessels on high resolution T1-weighted MR images, and was corrected for contamination by non-arterial voxels. A validation of the method was performed in a porcine model in a PET-CT scanner, comparing the IDIF to direct arterial blood samples. The metabolic rate of glucose (CMRglc) was measured voxel-by-voxel in gray matter, producing maps that were compared between groups. The net influx rate (Ki) and global mean CMRglc are reported. A good correlation (r = 0.9, p < 0.0001) was found between the corrected IDIF and the input function measured from direct arterial blood sampling in the validation study. In 3 FTD patients and 3 controls, a trend towards hypometabolism was found in the frontal, temporal and parietal lobes, similar to significant differences previously reported by other groups. The global mean CMRglc and Ki observed in control subjects are in line with previous reports. In general, kinetic modelling of PET-FDG using an MR-IDIF can improve characterization of glucose metabolism in dementia. This method is feasible in multimodal studies that aim to combine PET molecular imaging with MRI, as dynamic PET can be acquired along with multiple MRI measurements.
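
    The final step connecting the net influx rate Ki to the reported CMRglc has a standard closed form. A minimal sketch, assuming the usual operational equation; the lumped constant and plasma glucose values below are typical literature numbers, not those of this study.

```python
def cmrglc(ki_ml_g_min, plasma_glucose_mmol_l, lumped_constant=0.81):
    """CMRglc in umol/100 g/min: (Cglu / LC) * Ki.
    Uses 1 mmol/L = 1 umol/ml for plasma glucose; LC default is an
    assumed typical value for FDG in brain."""
    return 100.0 * (plasma_glucose_mmol_l / lumped_constant) * ki_ml_g_min
```

    For instance, a gray-matter Ki of 0.03 ml/g/min at a plasma glucose of 5 mmol/L gives a CMRglc in the physiological range, which is why IDIF errors propagate directly into the metabolic maps compared between groups.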

  5. Impact of regulation on English and Welsh water-only companies: an input-distance function approach.

    Science.gov (United States)

    Molinos-Senante, María; Porcher, Simon; Maziotis, Alexandros

    2017-07-01

    The assessment of productivity change over time and its drivers is of great significance for water companies and regulators when setting urban water tariffs. This issue is even more relevant in privatized water industries, such as those in England and Wales, where the price-cap regulation is adopted. In this paper, an input-distance function is used to estimate productivity change and its determinants for the English and Welsh water-only companies (WoCs) over the period of 1993-2009. The impacts of several exogenous variables on companies' efficiencies are also explored. From a policy perspective, this study describes how regulators can use this type of modeling and results to calculate illustrative X factors for the WoCs. The results indicate that the 1994 and 1999 price reviews stimulated technical change, and there were small efficiency gains. However, the 2004 price review did not accelerate efficiency change or improve technical change. The results also indicated that during the whole period of study, the excessive scale of the WoCs contributed negatively to productivity growth. On average, WoCs reported relatively high efficiency levels, which suggests that they had already been investing in technologies that reduce long-term input requirements with respect to exogenous and service-quality variables. Finally, an average WoC needs to improve its productivity toward that of the best company by 1.58%. The methodology and results of this study are of great interest to both regulators and water-company managers for evaluating the effectiveness of regulation and making informed decisions.

  6. Biosensor for label-free DNA quantification based on functionalized LPGs.

    Science.gov (United States)

    Gonçalves, Helena M R; Moreira, Luis; Pereira, Leonor; Jorge, Pedro; Gouveia, Carlos; Martins-Lopes, Paula; Fernandes, José R A

    2016-10-15

    A label-free fiber optic biosensor based on a long period grating (LPG) and a basic optical interrogation scheme using off-the-shelf components is used for the detection of in situ DNA hybridization. A new methodology is proposed for the determination of the spectral position of the LPG mode resonance. The experimental limit of detection obtained for the DNA was 62 ± 2 nM and the limit of quantification was 209 ± 7 nM. The sample specificity was experimentally demonstrated using DNA targets with different base mismatches relative to the probe, and the system was found to have single-base-mismatch selectivity. Copyright © 2015 Elsevier B.V. All rights reserved.
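Limits of detection and quantification of the kind reported above are conventionally estimated from the blank noise and the calibration slope (the 3.3σ/S and 10σ/S rule); the abstract does not state how its limits were derived, so the sketch below is a generic illustration with made-up numbers, not the paper's method or data.

```python
def detection_limits(sigma_blank, slope):
    """ICH-style estimates: LOD = 3.3*sigma/S, LOQ = 10*sigma/S.

    sigma_blank: standard deviation of the blank response
    slope: calibration-curve slope (response per concentration unit)
    """
    lod = 3.3 * sigma_blank / slope
    loq = 10.0 * sigma_blank / slope
    return lod, loq

# Illustrative values: blank noise 1.5 (a.u.), slope 0.08 a.u. per nM
lod, loq = detection_limits(sigma_blank=1.5, slope=0.08)
```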

  7. Advancement in PET quantification using 3D-OP-OSEM point spread function reconstruction with the HRRT

    Energy Technology Data Exchange (ETDEWEB)

    Varrone, Andrea; Sjoeholm, Nils; Gulyas, Balazs; Halldin, Christer; Farde, Lars [Karolinska Hospital, Karolinska Institutet, Department of Clinical Neuroscience, Psychiatry Section and Stockholm Brain Institute, Stockholm (Sweden); Eriksson, Lars [Karolinska Hospital, Karolinska Institutet, Department of Clinical Neuroscience, Psychiatry Section and Stockholm Brain Institute, Stockholm (Sweden); Siemens Molecular Imaging, Knoxville, TN (United States); University of Stockholm, Department of Physics, Stockholm (Sweden)

    2009-10-15

    Image reconstruction including the modelling of the point spread function (PSF) is an approach improving the resolution of the PET images. This study assessed the quantitative improvements provided by the implementation of the PSF modelling in the reconstruction of the PET data using the High Resolution Research Tomograph (HRRT). Measurements were performed on the NEMA-IEC/2001 (Image Quality) phantom for image quality and on an anthropomorphic brain phantom (STEPBRAIN). PSF reconstruction was also applied to PET measurements in two cynomolgus monkeys examined with [{sup 18}F]FE-PE2I (dopamine transporter) and with [{sup 11}C]MNPA (D{sub 2} receptor), and in one human subject examined with [{sup 11}C]raclopride (D{sub 2} receptor). PSF reconstruction increased the recovery coefficient (RC) in the NEMA phantom by 11-40% and the grey to white matter ratio in the STEPBRAIN phantom by 17%. PSF reconstruction increased binding potential (BP{sub ND}) in the striatum and midbrain by 14 and 18% in the [{sup 18}F]FE-PE2I study, and striatal BP{sub ND} by 6 and 10% in the [{sup 11}C]MNPA and [{sup 11}C]raclopride studies. PSF reconstruction improved quantification by increasing the RC and thus reducing the partial volume effect. This method provides improved conditions for PET quantification in clinical studies with the HRRT system, particularly when targeting receptor populations in small brain structures. (orig.)

  8. Improving the arterial input function in dynamic contrast enhanced MRI by fitting the signal in the complex plane.

    Science.gov (United States)

    Simonis, Frank F J; Sbrizzi, Alessandro; Beld, Ellis; Lagendijk, Jan J W; van den Berg, Cornelis A T

    2016-10-01

    Dynamic contrast enhanced (DCE) imaging is a widely used technique in oncologic imaging. An essential prerequisite for obtaining quantitative values from DCE-MRI is the determination of the arterial input function (AIF). However, it is very challenging to accurately estimate the AIF using MR. A comprehensive model, which uses complex data instead of either magnitude or phase, was developed to improve AIF estimation. The model was first applied to simulated data. Subsequently, the accuracy of the estimated contrast agent concentration was validated in a phantom. Finally the method was applied to existing DCE scans of 13 prostate cancer patients. The complex signal method combines the complementary strengths of the magnitude and phase methods, increasing the precision and accuracy of concentration estimation in simulated and phantom data. The in vivo AIFs show a good agreement between arterial voxels (standard deviations in the peak and tail equal 0.4 mM and 0.12 mM, respectively). Furthermore, the dynamic behavior closely followed the AIF obtained with DCE-CT in the same patients (mean correlation coefficient: 0.92). By using the complex signal, the AIF estimation becomes more accurate and precise. This might enable patient-specific AIFs, thereby improving the quantitative values obtained from DCE-MRI. Magn Reson Med 76:1236-1245, 2016. © 2015 Wiley Periodicals, Inc.

  9. Emphysema quantification from CT scans using novel application of diaphragm curvature estimation: comparison with standard quantification methods and pulmonary function data

    Science.gov (United States)

    Keller, Brad M.; Reeves, Anthony P.; Yankelevitz, David F.; Henschke, Claudia I.; Barr, R. Graham

    2009-02-01

    Emphysema is a disease of the lungs that destroys the alveolar air sacs and induces long-term respiratory dysfunction. CT scans allow for imaging of the anatomical basis of emphysema and quantification of the underlying disease state. Several measures have been introduced for the quantification of emphysema directly from CT data; most, however, are based on the analysis of density information provided by the CT scans, which varies by scanner and can be hard to standardize across sites and time. Given that one of the anatomical variations associated with the progression of emphysema is the flattening of the diaphragm due to the loss of elasticity in the lung parenchyma, curvature analysis of the diaphragm can provide information about emphysema from CT. Therefore, we propose a new, non-density-based measure of the curvature of the diaphragm that allows for robust quantification. To evaluate the new method, 24 whole-lung scans were analyzed using the ratios of lung height and diaphragm width to diaphragm height as curvature estimates, with the emphysema index used for comparison. Pearson correlation coefficients showed a strong trend for several of the proposed diaphragm curvature measures to have higher correlations, of up to r=0.57, with DLCO% and VA than did the emphysema index. Furthermore, we found the emphysema index to have only a 0.27 correlation with the proposed measures, indicating that the proposed measures evaluate different aspects of the disease.
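The Pearson correlation used above to compare curvature measures against DLCO% and VA is a short computation; a minimal plain-Python sketch with made-up sample values:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linear (invented) data gives r = 1.0
r = pearson_r([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```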

  10. Absolute quantification of pharmacokinetic distribution of RES colloids in individuals with normal liver function

    International Nuclear Information System (INIS)

    Herzog, H.; Spohr, G.; Notohamiprodjo, G.; Feinendegen, L.E.

    1987-01-01

    Estimates of the radiation dose resulting from liver-spleen scintigraphy with 99mTc-labelled colloids are based on pharmacokinetic data mainly determined in animals. The aim of this study was to check these pharmacokinetic data by direct, absolute in vivo quantification in man. Liver and spleen activities were directly measured using a double-energy-window technique. Activities in other organs were quantified by conjugate whole-body scans. All measurement procedures were checked using the whole-body Alderson phantom. Pharmacokinetic data for sulphur colloid, tin colloid, human serum albumin (HSA) millimicrospheres, and phytate were obtained in 13 to 20 normal subjects for each type of colloid. Depending on the colloid type, liver uptake was between 54 and 75% of the total administered dose (TAD) and spleen uptake was 3.5 to 21% TAD. Activity measured in blood, urine, lung and thyroid proved to be far from negligible. The results of this work suggest a correction of the animal-based data on colloid distribution and radiation dose on the basis of the direct measurement of absolute uptake in man. (author)

  11. The anterior cerebral artery is an appropriate arterial input function for perfusion-CT processing in patients with acute stroke

    International Nuclear Information System (INIS)

    Wintermark, Max; Lau, Benison C.; Chien, Jeffrey; Arora, Sandeep

    2008-01-01

    Dynamic perfusion-CT (PCT) with deconvolution requires an arterial input function (AIF) for postprocessing. In clinical settings, the anterior cerebral artery (ACA) is often chosen for simplicity. The goals of this study were to determine how the AIF selection influences PCT results in acute stroke patients and whether the ACA is an appropriate default AIF. We retrospectively identified consecutive patients suspected of hemispheric stroke of less than 48 h duration who were evaluated on admission by PCT. PCT datasets were postprocessed using multiple AIFs, and cerebral blood volume (CBV), cerebral blood flow (CBF), and mean transit time (MTT) values were measured in the corresponding territories. Results from corresponding territories in the same patients were compared using paired t-tests. The volumes of infarct core and tissue at risk obtained with different AIFs were compared to the final infarct volume. Of 113 patients who met the inclusion criteria, 55 with stroke were considered for analysis. The MTT values obtained with an "ischemic" AIF tended to be shorter (P=0.055) and the CBF values higher (P=0.108) than those obtained using a "nonischemic" AIF. CBV values were not influenced by the selection of the AIF. No statistically significant difference was observed between the sizes of the PCT infarct core (P=0.121) and tissue at risk (P=0.178), regardless of AIF selection. In acute stroke patients, the selection of the AIF has no statistically significant impact on the PCT results; standardization of PCT postprocessing using the ACA as the default AIF is adequate. (orig.)
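The paired t-tests above reduce to the t statistic on the per-patient differences between the two AIF conditions; a minimal sketch (invented MTT values, no p-value lookup):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(a, b):
    """t statistic for paired samples: mean difference over its standard error."""
    d = [x - y for x, y in zip(a, b)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

# MTT values (s) in the same territories under two AIF selections (made up)
t = paired_t([3.0, 4.0, 5.0], [1.0, 2.0, 4.0])
```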

  12. Quantification of benign lesion regression as a function of 532-nm pulsed potassium titanyl phosphate laser parameter selection.

    Science.gov (United States)

    Mallur, Pavan S; Tajudeen, Bobby A; Aaronson, Nicole; Branski, Ryan C; Amin, Milan R

    2011-03-01

    Although the potassium titanyl phosphate (KTP) laser is versatile, the variability in laser parameters for laryngeal pathologies and the lack of clinical efficacy data remain problematic. We provide preliminary data regarding these parameters for benign lesion regression. In addition, we describe a novel method for the quantification of the effects of the KTP laser on vocal fold (VF) lesions. Retrospective chart review. Images were captured from examinations before and after in-office KTP treatment in patients with a range of benign lesions. Laser settings were noted for each patient. Imaging software was then used to calculate a ratio of lesion area to VF length. Ten percent of images were requantified to determine inter-rater reliability. Thirty-two patients underwent 47 procedures for lesions including hemorrhagic polyp, nonhemorrhagic polyp, vocal process granuloma, Reinke's edema, cyst/pseudocyst, leukoplakia, and squamous cell carcinoma in situ. No statistically significant differences were observed with regard to the laser parameters used as a function of lesion type. Regardless, by 1 month following treatment, all lesions had significantly decreased in size, except nonhemorrhagic polyps. Similar data were obtained at 2-month follow-up. We then compared the pre-KTP lesion size with the smallest lesion size quantified during the 1-year follow-up period. All lesions were significantly smaller, with the exception of Reinke's edema. Inter-rater reliability was quite good. KTP laser effectively reduced VF lesion size, irrespective of the laser parameters used. In addition, our quantification method for lesion size appeared to be both viable and reliable. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.

  13. The Influence of Prosodic Input in the Second Language Classroom: Does It Stimulate Child Acquisition of Word Order and Function Words?

    Science.gov (United States)

    Campfield, Dorota E.; Murphy, Victoria A.

    2017-01-01

    This paper reports on an intervention study with young Polish beginners (mean age: 8 years, 3 months) learning English at school. It seeks to identify whether exposure to rhythmic input improves knowledge of word order and function words. The "prosodic bootstrapping hypothesis", relevant in developmental psycholinguistics, provided the…

  14. Short-Term Memory in Mathematics-Proficient and Mathematics-Disabled Students as a Function of Input-Modality/Output-Modality Pairings.

    Science.gov (United States)

    Webster, Raymond E.

    1980-01-01

    A significant two-way input modality by output modality interaction suggested that short term memory capacity among the groups differed as a function of the modality used to present the items in combination with the output response required. (Author/CL)

  15. Estimating the arterial input function from dynamic contrast-enhanced MRI data with compensation for flow enhancement (II): Applications in spine diagnostics and assessment of crohn's disease

    NARCIS (Netherlands)

    van Schie, Jeroen J. N.; Lavini, Cristina; van Vliet, Lucas J.; Kramer, Gem; Pieters-van den Bos, Indra; Marcus, J. T.; Stoker, Jaap; Vos, Frans M.

    2017-01-01

    Pharmacokinetic (PK) models can describe microvascular density and integrity. An essential component of PK models is the arterial input function (AIF) representing the time-dependent concentration of contrast agent (CA) in the blood plasma supplied to a tissue. To evaluate a novel method for

  16. Estimating the arterial input function from dynamic contrast-enhanced MRI data with compensation for flow enhancement (I): Theory, method, and phantom experiments

    NARCIS (Netherlands)

    van Schie, Jeroen J. N.; Lavini, Cristina; van Vliet, Lucas J.; Vos, Frans M.

    2017-01-01

    The arterial input function (AIF) represents the time-dependent arterial contrast agent (CA) concentration that is used in pharmacokinetic modeling. To develop a novel method for estimating the AIF from dynamic contrast-enhanced (DCE-) MRI data, while compensating for flow enhancement. Signal

  17. A data input controller for an alphanumeric and function keyboard with ports to the CAMAC-dataway or the serial plasma display controller

    International Nuclear Information System (INIS)

    Zahn, J.; Komor, Z.; Geldmeyer, H.J.

    1976-01-01

    A data input controller has been developed to allow the data transfer from an alphanumeric and function keyboard to the CAMAC-dataway or via the plasma display controller SIG-8AS/S and a serial transmission line to the TTY-/V.24-port of a computer. (orig.) [de

  18. Calibrated image-derived input functions for the determination of the metabolic uptake rate of glucose with [18F]-FDG PET

    DEFF Research Database (Denmark)

    Christensen, Anders Nymark; Reichkendler, Michala H.; Larsen, Rasmus

    2014-01-01

    We investigated the use of a simple calibration method to remove bias in previously proposed approaches to image-derived input functions (IDIFs) when used to calculate the metabolic uptake rate of glucose (Km) from dynamic [18F]-FDG PET scans of the thigh. Our objective was to obtain nonbiased, low...

  19. Quantification of right and left ventricular function by cardiovascular magnetic resonance

    International Nuclear Information System (INIS)

    Bellenger, N.G.; Smith, G.C.; Pennell, D.J.; Grothues, F.

    2000-01-01

    Cardiac dysfunction is a major cause of cardiovascular morbidity and mortality. Accurate and reproducible assessment of cardiac function is essential for the diagnosis, the assessment of prognosis and evaluation of a patient's response to therapy. Cardiovascular magnetic resonance (CMR) provides a measure of global and regional function that is not only accurate and reproducible but is noninvasive, free of ionising radiation, and independent of the geometric assumptions and acoustic windows that limit echocardiography. With the advent of faster scanners, automated analysis, increasing availability and reducing costs, CMR is fast becoming a clinically tenable reference standard for the measurement of cardiac function. (orig.) [de

  20. Functional group quantification of polymer nanomembranes with soft x-rays

    Science.gov (United States)

    Sunday, Daniel F.; Chan, Edwin P.; Orski, Sara V.; Nieuwendaal, Ryan C.; Stafford, Christopher M.

    2018-03-01

    Polyamide nanomembranes are at the heart of water desalination, a process which plays a critical role in clean water production. Improving their efficiency requires a better understanding of the relationship between chemistry, network structure, and performance but few techniques afford compositional information in ultrathin films (reference materials to establish quantitative relationships between changes in the optical constants and functional group density, and then use the results to evaluate the functional group concentrations of polyamide nanomembranes. We demonstrate that the difference in the amide carbonyl and carboxylic acid group concentrations can be used to calculate the crosslink density, which is shown to vary significantly across three different polyamide chemistries. A clear relationship is established between the functional group density and the permselectivity (α), indicating that more densely crosslinked materials result in a higher α of the nanomembranes. Finally, measurements on a polyamide/poly(acrylic acid) bilayer demonstrate the ability of this approach to quantify depth-dependent functional group concentrations in thin films.

  1. SU-E-J-86: Lobar Lung Function Quantification by PET Galligas and CT Ventilation Imaging in Lung Cancer Patients

    International Nuclear Information System (INIS)

    Eslick, E; Kipritidis, J; Keall, P; Bailey, D; Bailey, E

    2014-01-01

    Purpose: The purpose of this study was to quantify lobar lung function using the novel PET Galligas ([68Ga]-carbon nanoparticle) ventilation imaging and the investigational CT ventilation imaging in lung cancer patients pre-treatment. Methods: We present results on our first three lung cancer patients (2 male, mean age 78 years) as part of an ongoing ethics-approved study. For each patient a PET Galligas ventilation (PET-V) image and a pair of breath-hold CT images (end-exhale and end-inhale tidal volumes) were acquired using a Siemens Biograph PET CT. CT-ventilation (CT-V) images were created from the pair of CT images using deformable image registration (DIR) algorithms and the Hounsfield Unit (HU) ventilation metric. A comparison of ventilation quantification from each modality was done at the lobar level and the voxel level. A Bland-Altman plot was used to assess the difference in mean percentage contribution of each lobe to the total lung function between the two modalities. For each patient, a voxel-wise Spearman's correlation was calculated for the whole lungs between the two modalities. Results: The Bland-Altman plot demonstrated strong agreement between PET-V and CT-V for assessment of lobar function (r=0.99, p<0.001; range of mean difference: −5.5 to 3.0). The correlation between PET-V and CT-V at the voxel level was moderate (r=0.60, p<0.001). Conclusion: This preliminary study on the three patients' data sets demonstrated strong agreement between PET and CT ventilation imaging for the assessment of pre-treatment lung function at the lobar level. Agreement was only moderate at the level of voxel correlations. These results indicate that CT ventilation imaging has potential for assessing pre-treatment lobar lung function in lung cancer patients
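The Bland-Altman analysis above comes down to the mean difference between paired measurements and its 1.96-SD limits of agreement; a minimal sketch with invented lobar percentage contributions:

```python
from statistics import mean, stdev

def bland_altman_limits(a, b):
    """Mean difference and 95% limits of agreement between paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    md, sd = mean(diffs), stdev(diffs)
    return md, md - 1.96 * sd, md + 1.96 * sd

# Lobar % contributions to total ventilation from two modalities (made up)
md, lo, hi = bland_altman_limits([22.0, 18.5, 30.0], [21.0, 19.5, 29.0])
```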

  2. Advances in the quantification of mitochondrial function in primary human immune cells through extracellular flux analysis.

    Directory of Open Access Journals (Sweden)

    Dequina Nicholas

    Full Text Available Numerous studies show that mitochondrial energy generation determines the effectiveness of immune responses. Furthermore, changes in mitochondrial function may regulate lymphocyte function in inflammatory diseases like type 2 diabetes. Analysis of lymphocyte mitochondrial function has been facilitated by introduction of 96-well format extracellular flux (XF96 analyzers, but the technology remains imperfect for analysis of human lymphocytes. Limitations in XF technology include the lack of practical protocols for analysis of archived human cells, and inadequate data analysis tools that require manual quality checks. Current analysis tools for XF outcomes are also unable to automatically assess data quality and delete untenable data from the relatively high number of biological replicates needed to power complex human cell studies. The objectives of work presented herein are to test the impact of common cellular manipulations on XF outcomes, and to develop and validate a new automated tool that objectively analyzes a virtually unlimited number of samples to quantitate mitochondrial function in immune cells. We present significant improvements on previous XF analyses of primary human cells that will be absolutely essential to test the prediction that changes in immune cell mitochondrial function and fuel sources support immune dysfunction in chronic inflammatory diseases like type 2 diabetes.
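The mitochondrial-function outcomes quantified by extracellular flux analyzers are conventionally simple differences between oxygen consumption rates (OCR) measured after each injection of the stress-test protocol. The arithmetic below follows that common convention (rotenone/antimycin A floor as non-mitochondrial respiration); it is a generic sketch with invented OCR values, not the automated tool described in the abstract.

```python
def mito_stress_outcomes(basal_ocr, post_oligomycin, post_fccp, post_rot_aa):
    """Derive standard mitochondrial function metrics from a four-phase OCR trace."""
    non_mito = post_rot_aa  # rotenone/antimycin A abolishes mitochondrial respiration
    return {
        "basal_respiration": basal_ocr - non_mito,
        "atp_linked": basal_ocr - post_oligomycin,       # blocked by oligomycin
        "proton_leak": post_oligomycin - non_mito,
        "maximal_respiration": post_fccp - non_mito,     # FCCP uncouples
        "spare_capacity": post_fccp - basal_ocr,
    }

# Illustrative OCR values (pmol O2/min)
outcomes = mito_stress_outcomes(100.0, 40.0, 150.0, 10.0)
```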

  3. Trigeminal, Visceral and Vestibular Inputs May Improve Cognitive Functions by Acting through the Locus Coeruleus and the Ascending Reticular Activating System: A New Hypothesis

    Directory of Open Access Journals (Sweden)

    Vincenzo De Cicco

    2018-01-01

    Full Text Available It is known that sensory signals sustain the background discharge of the ascending reticular activating system (ARAS, which includes the noradrenergic locus coeruleus (LC neurons and controls the level of attention and alertness. Moreover, LC neurons influence brain metabolic activity, gene expression and brain inflammatory processes. As a consequence of the sensory control of ARAS/LC, stimulation of a sensory channel may potentially influence neuronal activity and trophic state all over the brain, supporting cognitive functions and exerting a neuroprotective action. On the other hand, an imbalance of the same input on the two sides may lead to an asymmetric hemispheric excitability, leading to an impairment in cognitive functions. Among the inputs that may drive LC neurons and ARAS, those arising from the trigeminal region, from visceral organs and, possibly, from the vestibular system seem to be particularly relevant in regulating their activity. The trigeminal, visceral and vestibular control of ARAS/LC activity may explain why these input signals: (1) affect sensorimotor and cognitive functions which are not directly related to their specific informational content; and (2) are effective in relieving the symptoms of some brain pathologies, thus prompting peripheral activation of these input systems as a complementary approach for the treatment of cognitive impairments and neurodegenerative disorders.

  4. The effect of blood inflow and B(1)-field inhomogeneity on measurement of the arterial input function in axial 3D spoiled gradient echo dynamic contrast-enhanced MRI.

    Science.gov (United States)

    Roberts, Caleb; Little, Ross; Watson, Yvonne; Zhao, Sha; Buckley, David L; Parker, Geoff J M

    2011-01-01

    A major potential confound in axial 3D dynamic contrast-enhanced magnetic resonance imaging studies is the blood inflow effect; therefore, the choice of slice location for arterial input function measurement within the imaging volume must be considered carefully. The objective of this study was to use computer simulations, flow phantom, and in vivo studies to describe and understand the effect of blood inflow on the measurement of the arterial input function. All experiments were done at 1.5 T using a typical 3D dynamic contrast-enhanced magnetic resonance imaging sequence, and arterial input functions were extracted for each slice in the imaging volume. We simulated a set of arterial input functions based on the same imaging parameters and accounted for blood inflow and radiofrequency field inhomogeneities. Measured arterial input functions along the vessel length from both in vivo and the flow phantom agreed with simulated arterial input functions and show large overestimations in the arterial input function in the first 30 mm of the vessel, whereas arterial input functions measured more centrally achieve accurate contrast agent concentrations. Use of inflow-affected arterial input functions in tracer kinetic modeling shows potential errors of up to 80% in tissue microvascular parameters. These errors emphasize the importance of careful placement of the arterial input function definition location to avoid the effects of blood inflow. © 2010 Wiley-Liss, Inc.
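The inflow effect described above can be illustrated with the steady-state spoiled gradient echo (Ernst) signal equation: blood entering the imaging slab is fully relaxed, has not yet been driven down to steady state by repeated excitations, and therefore appears artificially bright. The parameter values below (blood-like T1, short TR, 20° flip) are illustrative assumptions, not the study's protocol.

```python
from math import sin, cos, exp, radians

def spgr_steady_state(m0, t1_s, tr_s, flip_deg):
    """Steady-state SPGR signal (Ernst equation), neglecting T2* decay."""
    a, e1 = radians(flip_deg), exp(-tr_s / t1_s)
    return m0 * sin(a) * (1 - e1) / (1 - e1 * cos(a))

def spgr_fresh_spins(m0, flip_deg):
    """Signal from fully relaxed inflowing spins experiencing their first pulse."""
    return m0 * sin(radians(flip_deg))

# Illustrative blood-like parameters: T1 = 1.4 s, TR = 5 ms, flip = 20 degrees
steady = spgr_steady_state(1.0, 1.4, 0.005, 20.0)
fresh = spgr_fresh_spins(1.0, 20.0)
# fresh >> steady: inflowing blood appears far brighter, inflating the AIF
```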

  5. ICN_Atlas: Automated description and quantification of functional MRI activation patterns in the framework of intrinsic connectivity networks.

    Science.gov (United States)

    Kozák, Lajos R; van Graan, Louis André; Chaudhary, Umair J; Szabó, Ádám György; Lemieux, Louis

    2017-12-01

    Generally, the interpretation of functional MRI (fMRI) activation maps continues to rely on assessing their relationship to anatomical structures, mostly in a qualitative and often subjective way. Recently, the existence of persistent and stable brain networks of functional nature has been revealed; in particular these so-called intrinsic connectivity networks (ICNs) appear to link patterns of resting state and task-related state connectivity. These networks provide an opportunity of functionally-derived description and interpretation of fMRI maps, that may be especially important in cases where the maps are predominantly task-unrelated, such as studies of spontaneous brain activity e.g. in the case of seizure-related fMRI maps in epilepsy patients or sleep states. Here we present a new toolbox (ICN_Atlas) aimed at facilitating the interpretation of fMRI data in the context of ICN. More specifically, the new methodology was designed to describe fMRI maps in function-oriented, objective and quantitative way using a set of 15 metrics conceived to quantify the degree of 'engagement' of ICNs for any given fMRI-derived statistical map of interest. We demonstrate that the proposed framework provides a highly reliable quantification of fMRI activation maps using a publicly available longitudinal (test-retest) resting-state fMRI dataset. The utility of the ICN_Atlas is also illustrated on a parametric task-modulation fMRI dataset, and on a dataset of a patient who had repeated seizures during resting-state fMRI, confirmed on simultaneously recorded EEG. The proposed ICN_Atlas toolbox is freely available for download at http://icnatlas.com and at http://www.nitrc.org for researchers to use in their fMRI investigations. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Quantification of heterogeneity in lung disease with image-based pulmonary function testing.

    Science.gov (United States)

    Stahr, Charlene S; Samarage, Chaminda R; Donnelley, Martin; Farrow, Nigel; Morgan, Kaye S; Zosky, Graeme; Boucher, Richard C; Siu, Karen K W; Mall, Marcus A; Parsons, David W; Dubsky, Stephen; Fouras, Andreas

    2016-07-27

    Computed tomography (CT) and spirometry are the mainstays of clinical pulmonary assessment. Spirometry is effort dependent and only provides a single global measure that is insensitive for regional disease, and as such, poor for capturing the early onset of lung disease, especially patchy disease such as cystic fibrosis lung disease. CT sensitively measures change in structure associated with advanced lung disease. However, obstructions in the peripheral airways and early onset of lung stiffening are often difficult to detect. Furthermore, CT imaging poses a radiation risk, particularly for young children, and dose reduction tends to result in reduced resolution. Here, we apply a series of lung tissue motion analyses, to achieve regional pulmonary function assessment in β-ENaC-overexpressing mice, a well-established model of lung disease. The expiratory time constants of regional airflows in the segmented airway tree were quantified as a measure of regional lung function. Our results showed marked heterogeneous lung function in β-ENaC-Tg mice compared to wild-type littermate controls; identified locations of airway obstruction, and quantified regions of bimodal airway resistance demonstrating lung compensation. These results demonstrate the applicability of regional lung function derived from lung motion as an effective alternative respiratory diagnostic tool.
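An expiratory time constant of the kind quantified above can be recovered by a log-linear least-squares fit to a monoexponential emptying model V(t) = V0·exp(−t/τ). This is a generic sketch on synthetic data, not the paper's motion-analysis pipeline.

```python
from math import exp, log

def expiratory_time_constant(times, volumes):
    """Fit V(t) = V0 * exp(-t / tau) by least squares on log(V); return tau."""
    n = len(times)
    ys = [log(v) for v in volumes]
    mt, my = sum(times) / n, sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(times, ys))
             / sum((t - mt) ** 2 for t in times))
    return -1.0 / slope

# Synthetic emptying curve with a known time constant of 2.0 s
ts = [0.0, 0.5, 1.0, 1.5, 2.0]
vs = [exp(-t / 2.0) for t in ts]
tau = expiratory_time_constant(ts, vs)  # ~2.0
```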

  7. 99mTc-GSA dynamic SPECT for regional hepatic functional reserve estimation. Assessment of quantification

    International Nuclear Information System (INIS)

    Hwang, Eui-Hyo

    1999-01-01

    The aim of this study is to assess the physiological implications of the estimated parameters and the clinical value of this analysis method for hepatic functional reserve estimation. After venous injection of 185 MBq of GSA, fifteen sequential sets of SPECT data were acquired over 15 minutes. The first 5 sets of SPECT images were analyzed by Patlak plot and hepatic GSA clearance was obtained in each matrix. The sum of the hepatic GSA clearance over all matrices (total hepatic GSA clearance) was calculated as an index of whole-liver functional reserve. Total hepatic GSA clearance was compared with the receptor index and the effective hepatic blood flow (EHBF) of the whole liver, which were analyzed by the Direct Integral Linear Least Square Regression (DILS) method, to assess the physiological implications of hepatic GSA clearance. The clinical value of total hepatic GSA clearance was assessed in comparison with conventional hepatic function tests. Very good correlations were observed between total hepatic GSA clearance and the receptor index, whereas the correlations between total hepatic GSA clearance and EHBF were not significant. Significant correlations were also observed between total hepatic GSA clearance and conventional hepatic function tests, such as choline esterase, albumin, hepaplastin test, and ICG R15. (K.H.)
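Patlak graphical analysis as applied per matrix above reduces to a linear fit of C_tissue/C_plasma against ∫C_plasma dt / C_plasma, whose slope estimates the influx (clearance) rate. The sketch below uses rectangular integration and synthetic curves constructed with a known slope; it is a generic illustration, not this study's implementation.

```python
def patlak_slope(cp, ct, dt):
    """Slope of the Patlak plot from plasma (cp) and tissue (ct) activity curves.

    cp, ct: activity sampled at a fixed interval dt.
    Assumes the data are already in the linear (equilibrated) phase.
    """
    xs, ys, integral = [], [], 0.0
    for p, t in zip(cp, ct):
        integral += p * dt               # rectangular integration of plasma curve
        xs.append(integral / p)          # "normalized time"
        ys.append(t / p)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic curves constructed so the true influx rate is 0.05 per frame
cp = [1.0] * 10
ct = [0.05 * (i + 1) + 0.2 for i in range(10)]
ki = patlak_slope(cp, ct, dt=1.0)
```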

  8. The challenge of preoperative quantification of functional tricuspid regurgitation and of right ventricle function: what information is clinically relevant?

    Science.gov (United States)

    Hahn, Rebecca T

    2017-10-01

    Functional or secondary tricuspid regurgitation (TR) is the most common etiology of severe TR in the western world. The presence of functional TR, either isolated or in combination with left heart disease is associated with unfavorable natural history however surgical mortality for isolated tricuspid valve interventions remain higher than for any other single valve surgery. Determining the severity of TR remains a controversial area and will continue to evolve as new techniques for assessing this valve as well as the right ventricle, are investigated. The following review will describe tricuspid anatomy, define echocardiographic views for evaluating tricuspid valve and right heart morphology and function, that are relevant to the pre-procedural assessment of functional TR.

  9. Quantification of amine functional groups and their influence on OM/OC in the IMPROVE network

    Science.gov (United States)

    Kamruzzaman, Mohammed; Takahama, Satoshi; Dillner, Ann M.

    2018-01-01

    Recently, we developed a method using FT-IR spectroscopy coupled with partial least squares (PLS) regression to measure the four most abundant organic functional groups, aliphatic C-H, alcohol OH, carboxylic acid OH and carbonyl C=O, in atmospheric particulate matter. These functional groups are summed to estimate organic matter (OM), while the carbon from the functional groups is summed to estimate organic carbon (OC). With this method, OM and OM/OC can be estimated for each sample rather than relying on one assumed value to convert OC measurements to OM. This study continues the development of the FT-IR and PLS method for estimating OM and OM/OC by including the amine functional group. Amines are ubiquitous in the atmosphere and come from motor vehicle exhaust, animal husbandry, biomass burning, and vegetation, among other sources. In this study, calibration standards for amines are produced by aerosolizing individual amine compounds and collecting them on PTFE filters using an IMPROVE sampler, thereby mimicking the filter media and collection geometry of ambient samples. The moles of amine functional group on each standard and a narrow range of amine-specific wavenumbers in the FT-IR spectra (wavenumber range 1550-1500 cm-1) are used to develop a PLS calibration model. The PLS model is validated using three methods: prediction of a set of laboratory standards not included in the model, a peak-height analysis, and a PLS model with a broader wavenumber range. The model is then applied to the ambient samples collected throughout 2013 from 16 IMPROVE sites in the USA. Urban sites have higher amine concentrations than most rural sites, but amine functional groups account for a lower fraction of OM at urban sites. Amine concentrations, contributions to OM, and seasonality vary by site and sample. Amine has a small impact on the annual average OM/OC for urban sites, but for some rural sites, including amine in the OM/OC calculation increased OM/OC by 0.1 or more.
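The peak-height validation mentioned above amounts to an ordinary least-squares calibration line relating absorbance peak height to moles of amine on each standard. A single-variable sketch follows (the actual method uses PLS over the 1550-1500 cm-1 window, which is not reproduced here); all numbers are invented.

```python
def calibration_line(heights, moles):
    """Least-squares line: moles = intercept + slope * peak_height."""
    n = len(heights)
    mx, my = sum(heights) / n, sum(moles) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(heights, moles))
             / sum((x - mx) ** 2 for x in heights))
    return my - slope * mx, slope

# Invented standards: absorbance peak heights vs amine amount (umol)
intercept, slope = calibration_line([0.02, 0.04, 0.06], [0.10, 0.20, 0.30])
predicted = intercept + slope * 0.05  # amine amount for an unknown sample
```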

  10. Quantification of vascular function changes under different emotion states: A pilot study.

    Science.gov (United States)

    Xia, Yirong; Yang, Licai; Mao, Xueqin; Zheng, Dingchang; Liu, Chengyu

    2017-01-01

    Recent studies have indicated that physiological parameters change with different emotion states. This study aimed to quantify the changes of vascular function at different emotion and sub-emotion states. Twenty young subjects were studied with their finger photoplethysmographic (PPG) pulses recorded at three distinct emotion states: natural (1 minute), happiness and sadness (10 minutes each). Within the happiness and sadness emotion states, two sub-emotion states (calmness and outburst) were identified with the synchronously recorded videos. Reflection index (RI) and stiffness index (SI), two widely used indices of vascular function, were derived from the PPG pulses to quantify their differences between the three emotion states, as well as between the two sub-emotion states. The results showed that, when compared with the natural emotion, RI and SI decreased in both happiness and sadness emotions, with the decreases in RI being significant for both. Between the two sub-emotion states, RI was significantly lower in the outburst state than in the calmness state for both happiness and sadness emotions, whereas SI differed significantly only in the sadness emotion. This pilot study confirmed that vascular function changes with different emotion states could be quantified by the simple PPG measurement.
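
    RI and SI are conventionally computed from the amplitudes and timing of the systolic and diastolic peaks of the pulse wave (RI = diastolic/systolic amplitude as a percentage; SI = body height divided by the peak-to-peak transit time). A toy sketch on a synthetic two-peak pulse; the peak-picking heuristic and all waveform parameters are assumptions, not the authors' algorithm.

```python
import numpy as np

def ppg_indices(pulse, fs, subject_height_m):
    """Estimate reflection index (RI, %) and stiffness index (SI, m/s)
    from a single PPG pulse. Toy heuristic, not a clinical algorithm."""
    sys_i = int(np.argmax(pulse))                 # systolic peak = global maximum
    tail = pulse[sys_i + 1:]
    # diastolic peak: the largest local maximum after the systolic peak
    local_max = np.where((tail[1:-1] > tail[:-2]) & (tail[1:-1] > tail[2:]))[0] + 1
    dia_i = sys_i + 1 + local_max[np.argmax(tail[local_max])]
    ri = 100.0 * pulse[dia_i] / pulse[sys_i]
    si = subject_height_m / ((dia_i - sys_i) / fs)
    return ri, si

# synthetic pulse: systolic peak at 0.15 s, diastolic peak at 0.45 s
fs = 1000
t = np.arange(0, 1, 1 / fs)
pulse = np.exp(-((t - 0.15) / 0.04) ** 2) + 0.5 * np.exp(-((t - 0.45) / 0.06) ** 2)
ri, si = ppg_indices(pulse, fs, subject_height_m=1.75)
```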

  11. Development of program for renal function study with quantification analysis of nuclear medicine image

    International Nuclear Information System (INIS)

    Song, Ju Young; Lee, Hyoung Koo; Suh, Tae Suk; Choe, Bo Young; Shinn, Kyung Sub; Chung, Yong An; Kim, Sung Joon; Chung, Soo Kyo

    2001-01-01

    In this study, we developed a new software tool for the analysis of renal scintigraphy which can be modified more easily by a user who needs to study new clinical applications, and the appropriateness of the results from our program was studied. The analysis tool was programmed with IDL5.2 and designed for use on a personal computer running Windows. For testing the developed tool and studying the appropriateness of the calculated glomerular filtration rate (GFR), 99m Tc-DTPA was administered to 10 adults in normal condition. In order to study the appropriateness of the calculated mean transit time (MTT), 99m Tc-DTPA and 99m Tc-MAG3 were administered to 11 adults in normal condition and 22 kidneys were analyzed. All the images were acquired with ORBITOR, the Siemens gamma camera. With the developed tool, we could show dynamic renal images and the time activity curve (TAC) in each ROI and calculate clinical parameters of renal function. The results calculated by the developed tool were not statistically different from the results obtained by the Siemens application program (Tmax: p=0.68, Relative Renal Function: p=1.0, GFR: p=0.25) and the developed program proved reasonable. The MTT calculation tool proved to be reasonable by the evaluation of the influence of hydration status on MTT. We have obtained reasonable clinical parameters for the evaluation of renal function with the software tool developed in this study. The developed tool could prove more practical than conventional, commercial programs

  12. Quantification of left ventricular function by gated myocardial perfusion SPECT using Multidim TM

    International Nuclear Information System (INIS)

    Huang Jin; Song Wenzhong; Chen Mingxi

    2004-01-01

    Objective: To evaluate the accuracy of left ventricular function by gated SPECT using the software Multidim TM . Methods: Rest gated SPECT was performed on 42 cases, comprising 26 normal subjects and 16 patients with myocardial infarct (MI). All cases underwent rest equilibrium radionuclide angiocardiography (ERNA) within 1 week. Results: End diastolic volume (EDV), end systolic volume (ESV) and left ventricular ejection fraction (LVEF) were calculated using the software Multidim TM and ERNA. The correlation coefficients between the two methods were 0.90, 0.89 and 0.92, respectively. Conclusion: Left ventricular function can be quantified by gated myocardial perfusion SPECT using Multidim TM , but the EDV and ESV values measured by this software may be overestimated in small hearts. (authors)

  13. Survey of Quantification and Distance Functions Used for Internet-based Weak-link Sociological Phenomena

    Science.gov (United States)

    2016-03-01

    (No abstract was captured for this record; the extracted text consists only of reference fragments, citing work on ranking hubs and authorities via matrix functions, PageRank, and matrix completion, and quoting the PageRank power iteration P_1 = 1/N, P_{k+1} = G P_k, iterated until successive vectors agree within a given tolerance.)
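
    The power iteration quoted among this record's extracted fragments (P_1 = 1/N, P_{k+1} = G P_k) is the standard PageRank computation. A minimal sketch on a hypothetical three-page graph; the damping factor and the example links are assumptions, not from the survey.

```python
import numpy as np

def pagerank(G, tol=1e-10, max_iter=1000):
    """Power iteration: P_1 = 1/N, P_{k+1} = G @ P_k,
    stopped when successive vectors agree within a tolerance."""
    n = G.shape[0]
    p = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        p_next = G @ p
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
    return p

# hypothetical 3-page web: a directed cycle 0 -> 1 -> 2 -> 0
S = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])    # column j lists where page j links to
d = 0.85                        # damping factor (standard choice, an assumption here)
G = d * S + (1 - d) / 3.0       # "Google matrix": damped, fully mixing
p = pagerank(G)                 # the symmetric cycle yields uniform ranks
```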

  14. Separation of input function for rapid measurement of quantitative CMRO2 and CBF in a single PET scan with a dual tracer administration method

    International Nuclear Information System (INIS)

    Kudomi, Nobuyuki; Watabe, Hiroshi; Hayashi, Takuya; Iida, Hidehiro

    2007-01-01

    Cerebral metabolic rate of oxygen (CMRO 2 ), oxygen extraction fraction (OEF) and cerebral blood flow (CBF) images can be quantified using positron emission tomography (PET) by administrating 15 O-labelled water (H 2 15 O) and oxygen ( 15 O 2 ). Conventionally, those images are measured with separate scans for three tracers, C 15 O for CBV, H 2 15 O for CBF and 15 O 2 for CMRO 2 , with additional waiting times between the scans to minimize the influence of the radioactivity from the previous tracers, which results in a relatively long study period. We have proposed a dual tracer autoradiographic (DARG) approach (Kudomi et al 2005), which enabled us to measure CBF, OEF and CMRO 2 rapidly by sequentially administrating H 2 15 O and 15 O 2 within a short time. Because quantitative CBF and CMRO 2 values are sensitive to the arterial input function, it is necessary to obtain an accurate input function, and a drawback of this approach is that it requires separation of the measured arterial blood time-activity curve (TAC) into pure water and oxygen input functions in the presence of residual radioactivity from the first injected tracer. For this separation, frequent manual sampling was required. The present paper describes two calculation methods, namely a linear and a model-based method, to separate the measured arterial TAC into its water and oxygen components. In order to validate these methods, we first generated a blood TAC for the DARG approach by combining the water and oxygen input functions obtained in a series of PET studies on normal human subjects. The combined data were then separated into water and oxygen components by the present methods. CBF and CMRO 2 were calculated using those separated input functions and tissue TAC. 
The errors in the CBF and CMRO 2 values by the DARG approach did not exceed the acceptable range, i.e., errors in those values were within 5%, when the area under the curve in the input function of the second tracer

  15. Quantification of left ventricular regional functions using ECG-gated myocardial perfusion SPECT. Validation of left ventricular systolic functions

    International Nuclear Information System (INIS)

    Yamamoto, Akira; Takahashi, Naoto; Iwahara, Shin-ichiro; Munakata, Kazuo; Hosoya, Tetsuo

    2006-01-01

    We have developed a program to quantify regional left ventricular (LV) function and wall motion synchrony using electrocardiogram (ECG)-gated myocardial perfusion SPECT (MPS). This preliminary study was undertaken to validate the use of this program for estimating regional LV systolic function. Patients were subjected to MPS by 99m Tc-sestamibi at rest. The study included 20 patients who were confirmed to have a low probability of coronary artery disease (LPG; low probability group), 19 heart disease patients who were examined by MPS and equilibrium radionuclide angiography (ERNA) (ERG; ERNA group), and 24 patients who were examined by MPS and 2-dimensional echocardiography (2DE) (2DEG; 2DE group). The values of the ejection fraction (EF) and peak ejection rate (PER) were estimated. The global functions evaluated by this program were compared with those obtained by ERNA in the ERG. For regional assessment, the reference values of the functional indices were obtained for 17 LV segments in the LPG. The Z score, defined as (reference average value of the segment - patient's value for the segment)/reference standard deviation of the segment, was used for the evaluation of regional functions; a score equal to or greater than 2 was defined as abnormal. Semiquantitative visual interpretation of 2DE was used as the standard to assess wall motion. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of these criteria and the relationship between 2DE grading and Z scoring were validated in the 2DEG. The values of the global EF and PER evaluated by this program correlated with those determined by ERNA (r=0.76 and 0.58, respectively; p<10 -10 ). The potential of this program to quantify the regional systolic function was validated. (author)
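
    The paper's Z-score criterion reduces to a one-line formula; a minimal sketch with illustrative numbers (the threshold of 2 is from the abstract, the reference values are invented).

```python
def regional_z_score(patient_value, ref_mean, ref_sd):
    """Z = (reference mean of the segment - patient's value) / reference SD,
    per the definition used for the 17 LV segments."""
    return (ref_mean - patient_value) / ref_sd

def is_abnormal(z, threshold=2.0):
    """A segment is flagged abnormal when Z >= 2 (the paper's criterion)."""
    return z >= threshold

# hypothetical segment: reference EF 60 +/- 8 in the low-probability group
z = regional_z_score(patient_value=40.0, ref_mean=60.0, ref_sd=8.0)  # 2.5
```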

  16. Functional Characterization of Bean Zaragoza Starch (Phaseolus lunatus L.) and Quantification of the Resistant Starch

    Directory of Open Access Journals (Sweden)

    Piedad M. Montero-Castillo

    2013-06-01

    Full Text Available Legumes are a potential source of starch, which represents between 30 and 50% of their dry weight and is an essential energy source for humans. Starch is currently widespread in the food industry as an additive or raw material in compound foods, owing to its nutritional and functional properties as a thickening agent and as a stabilizer of suspensions and dispersions. We evaluated several functional properties of starch from the red zaragoza bean variety: the initial and final gelatinization temperatures were 71°C and 81°C respectively, the solubility was 8.3% at 90°C, the swelling power was 6.6% at 80°C, and the water retention capacity was 4.4% at 80°C. The apparent viscosity was evaluated between 20 and 75°C, giving viscosities between 1.096 and 0.98 cP respectively. The results showed that the tested temperatures significantly affect the solubility, swelling power, water holding capacity and viscosity of the starch. The amylose and amylopectin contents were 21.1% and 78.19%. Finally, 9.24% resistant starch was obtained and compared with other conventional starch sources in order to acquire new knowledge about this material native to the Colombian Caribbean coast.

  17. Non-invasive quantification of pancreatic exocrine function using secretin-stimulated MRCP

    International Nuclear Information System (INIS)

    Punwani, S.; Gillams, A.R.; Lees, W.R.

    2003-01-01

    Our objective was to quantify water volume using magnetic resonance cholangiopancreatography (MRCP) sequences and apply this to secretin-stimulated studies with the aim of quantifying pancreatic exocrine function. A commercially available single-shot MRCP sequence was used in conjunction with a body phased-array coil and a 1.5-T MR system. Signal intensity was measured in samples of water, pancreatic, duodenal juice, and secretin-stimulated pancreatic juice. A water phantom was made and MR calculated volumes compared with known water volumes within the phantom. Changes in small intestinal volume in response to secretin were measured in a group of 11 patients with no evidence of pancreatic disease. Changes in water volume were plotted over time. The pancreatic duct diameter before and after secretin was noted and filling defects were sought. All patients also underwent an axial breath-hold T1-weighted gradient-echo sequence and the pancreatic parenchyma was evaluated for size and signal intensity. There was no difference in the signal intensity of the different juice samples. There was excellent correlation between known and calculated MRCP volumes (χ 2 =0.99). All patients demonstrated normal duct morphology on MRCP and normal pancreatic parenchyma on T1-weighted imaging. The mean flow rate in the patient population was 8.1±2.5 ml/s over a median of 7 min (range 5-9 min). The MRCP sequence can be used to measure water volume. Sequential MRCP measurements following secretin permitted calculation of volume change and flow rate. This should prove useful as an indicator of pancreatic exocrine function. (orig.)

  18. Optimization model of peach production relevant to input energies – Yield function in Chaharmahal va Bakhtiari province, Iran

    International Nuclear Information System (INIS)

    Ghatrehsamani, Shirin; Ebrahimi, Rahim; Kazi, Salim Newaz; Badarudin Badry, Ahmad; Sadeghinezhad, Emad

    2016-01-01

    The aim of this study was to determine the amount of input-output energy used in peach production and to develop an optimal model of production in Chaharmahal va Bakhtiari province, Iran. Data were collected from 100 producers by administering a questionnaire in face-to-face interviews. Farms were selected based on a random sampling method. Results revealed that the total energy of production is 47,951.52 MJ/ha and the highest share of energy consumption belongs to chemical fertilizers (35.37%). Consumption of direct energy was 47.4% while indirect energy was 52.6%. Total energy consumption was also divided into two groups, renewable and non-renewable (19.2% and 80.8% respectively). Energy use efficiency, energy productivity, specific energy and net energy were calculated as 0.433, 0.228 (kg/MJ), 4.38 (MJ/kg) and −27,161.722 (MJ/ha), respectively. The negative sign of the net energy indicates that production currently consumes more energy than it yields; with an appropriate strategy, energy losses could be reduced and the negative effect of some parameters removed. In addition, energy efficiency was not high. Some of the input energies were applied to machinery, chemical fertilizer, water irrigation and electricity, which had a significant effect on increasing production, and MPP (marginal physical productivity) was determined for the variables. This parameter was positive for the energy groups machinery, diesel fuel, chemical fertilizer, water irrigation and electricity, while it was negative for other kinds of energy such as chemical pesticides and human labor. Finally, there is a need to pursue a new policy to force producers to undertake energy-efficient practices to establish sustainable production systems without disrupting the natural resources. In addition, extension activities are needed to improve the efficiency of energy consumption and to sustain the natural resources. - Highlights: • Replacing non-renewable energy with renewable
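
    The reported indices follow the standard definitions used in agricultural energy analyses (energy use efficiency = output energy / input energy, energy productivity = yield / input energy, specific energy = input energy / yield, net energy = output − input). A sketch reproducing them approximately; the yield is back-calculated from the reported energy productivity and the peach energy equivalent (1.9 MJ/kg) is a common literature value, neither of which is stated in the abstract.

```python
# Reported and assumed inputs
energy_input = 47_951.52      # MJ/ha, from the study
crop_yield = 10_933.0         # kg/ha, back-calculated (assumption)
energy_equivalent = 1.9       # MJ/kg of peach (literature value, assumption)

# Standard energy indices
energy_output = crop_yield * energy_equivalent               # MJ/ha
energy_use_efficiency = energy_output / energy_input         # ~0.433 (dimensionless)
energy_productivity = crop_yield / energy_input              # ~0.228 kg/MJ
specific_energy = energy_input / crop_yield                  # ~4.38 MJ/kg
net_energy = energy_output - energy_input                    # negative: net energy loss
```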

  19. Comparison between Radionuclide Ventriculography and Echocardiography for Quantification of Left Ventricular Systolic Function in Rats Exposed to Doxorubicin

    Directory of Open Access Journals (Sweden)

    Luciano Fonseca Lemos de Oliveira

    Full Text Available Abstract Background: Radionuclide ventriculography (RV) is a validated method to evaluate the left ventricular systolic function (LVSF) in small rodents. However, no prior study has compared the results of RV with those obtained by other imaging methods in this context. Objectives: To compare the results of LVSF obtained by RV and echocardiography (ECHO) in an experimental model of cardiotoxicity due to doxorubicin (DXR) in rats. Methods: Adult male Wistar rats serving as controls (n = 7) or receiving DXR (n = 22) in accumulated doses of 8, 12, and 16 mg/kg were evaluated with ECHO performed with a Sonos 5500 Philips equipment (12-MHz transducer) and RV obtained with an Orbiter-Siemens gamma camera using a pinhole collimator with a 4-mm aperture. Histopathological quantification of myocardial fibrosis was performed after euthanasia. Results: The control animals showed comparable results in the LVSF analysis obtained with ECHO and RV (83.5 ± 5% and 82.8 ± 2.8%, respectively, p > 0.05). The animals that received DXR presented lower LVSF values when compared with controls (p < 0.05); however, the LVSF values obtained by RV (60.6 ± 12.5%) were lower than those obtained by ECHO (71.8 ± 10.1%, p = 0.0004) in this group. An analysis of the correlation between the LVSF and myocardial fibrosis showed a moderate correlation when the LVSF was assessed by ECHO (r = -0.69, p = 0.0002) and a stronger correlation when it was assessed by RV (r = -0.79, p < 0.0001). On multiple regression analysis, only RV correlated independently with myocardial fibrosis. Conclusion: RV is an alternative method to assess the left ventricular function in small rodents in vivo. When compared with ECHO, RV showed a better correlation with the degree of myocardial injury in a model of DXR-induced cardiotoxicity.

  20. Comparison between radionuclide ventriculography and echocardiography for quantification of left ventricular systolic function in rats exposed to doxorubicin

    International Nuclear Information System (INIS)

    Oliveira, Luciano Fonseca Lemos de; Carvalho, Eduardo Elias Vieira de; Romano, Minna Moreira Dias; Maciel, Benedito Carlos; Simões, Marcus Vinicius

    2017-01-01

    Background: Radionuclide ventriculography (RV) is a validated method to evaluate the left ventricular systolic function (LVSF) in small rodents. However, no prior study has compared the results of RV with those obtained by other imaging methods in this context. Objectives: To compare the results of LVSF obtained by RV and echocardiography (ECHO) in an experimental model of cardiotoxicity due to doxorubicin (DXR) in rats. Methods: Adult male Wistar rats serving as controls (n = 7) or receiving DXR (n = 22) in accumulated doses of 8, 12, and 16 mg/kg were evaluated with ECHO performed with a Sonos 5500 Philips equipment (12-MHz transducer) and RV obtained with an Orbiter-Siemens gamma camera using a pinhole collimator with a 4-mm aperture. Histopathological quantification of myocardial fibrosis was performed after euthanasia. Results: The control animals showed comparable results in the LVSF analysis obtained with ECHO and RV (83.5 ± 5% and 82.8 ± 2.8%, respectively, p > 0.05). The animals that received DXR presented lower LVSF values when compared with controls (p < 0.05); however, the LVSF values obtained by RV (60.6 ± 12.5%) were lower than those obtained by ECHO (71.8 ± 10.1%, p = 0.0004) in this group. An analysis of the correlation between the LVSF and myocardial fibrosis showed a moderate correlation when the LVSF was assessed by ECHO (r = -0.69, p = 0.0002) and a stronger correlation when it was assessed by RV (r = -0.79, p < 0.0001). On multiple regression analysis, only RV correlated independently with myocardial fibrosis. Conclusion: RV is an alternative method to assess the left ventricular function in small rodents in vivo. When compared with ECHO, RV showed a better correlation with the degree of myocardial injury in a model of DXR-induced cardiotoxicity. (author)

  1. Comparison between radionuclide ventriculography and echocardiography for quantification of left ventricular systolic function in rats exposed to doxorubicin

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Luciano Fonseca Lemos de; Carvalho, Eduardo Elias Vieira de; Romano, Minna Moreira Dias; Maciel, Benedito Carlos; Simões, Marcus Vinicius, E-mail: msimoes@fmrp.usp.br [Universidade de Sao Paulo (USP), Ribeirao Preto, SP (Brazil). Centro de Cardiologia; O'Connell, João Lucas; Pulici, Érica Carolina Campos [Universidade Federal de Uberlândia (UFU), MG (Brazil)

    2017-01-15

    Background: Radionuclide ventriculography (RV) is a validated method to evaluate the left ventricular systolic function (LVSF) in small rodents. However, no prior study has compared the results of RV with those obtained by other imaging methods in this context. Objectives: To compare the results of LVSF obtained by RV and echocardiography (ECHO) in an experimental model of cardiotoxicity due to doxorubicin (DXR) in rats. Methods: Adult male Wistar rats serving as controls (n = 7) or receiving DXR (n = 22) in accumulated doses of 8, 12, and 16 mg/kg were evaluated with ECHO performed with a Sonos 5500 Philips equipment (12-MHz transducer) and RV obtained with an Orbiter-Siemens gamma camera using a pinhole collimator with a 4-mm aperture. Histopathological quantification of myocardial fibrosis was performed after euthanasia. Results: The control animals showed comparable results in the LVSF analysis obtained with ECHO and RV (83.5 ± 5% and 82.8 ± 2.8%, respectively, p > 0.05). The animals that received DXR presented lower LVSF values when compared with controls (p < 0.05); however, the LVSF values obtained by RV (60.6 ± 12.5%) were lower than those obtained by ECHO (71.8 ± 10.1%, p = 0.0004) in this group. An analysis of the correlation between the LVSF and myocardial fibrosis showed a moderate correlation when the LVSF was assessed by ECHO (r = -0.69, p = 0.0002) and a stronger correlation when it was assessed by RV (r = -0.79, p < 0.0001). On multiple regression analysis, only RV correlated independently with myocardial fibrosis. Conclusion: RV is an alternative method to assess the left ventricular function in small rodents in vivo. When compared with ECHO, RV showed a better correlation with the degree of myocardial injury in a model of DXR-induced cardiotoxicity. (author)

  2. Quantification of lacrimal function after D-shaped field irradiation for retinoblastoma

    International Nuclear Information System (INIS)

    Imhof, S.M.; Tan, K.E.W.P.; Hofman, P.

    1993-01-01

    To study the quantitative effects of megavoltage external beam irradiation in a D-shaped field in patients with retinoblastoma, biomicroscopy was performed in 61 patients and tear function tests (Schirmer, lactoferrin and lysozyme tests) on 45 eyes in 34 irradiated patients. The results were compared with those obtained in 25 non-irradiated control eyes. The Schirmer test was significantly diminished in irradiated eyes, as were the lactoferrin and lysozyme values. A mild to severe keratitis was found in 17 of the 61 patients (28%). A significant correlation (p<0.005) was found between severe keratitis and the mean Schirmer values; the mean lactoferrin and lysozyme values were diminished in all patients but did not correlate significantly with the corneal abnormalities. These quantitative data, obtained in patients treated for retinoblastoma, affirm the qualitative data found in patients irradiated for other reasons such as orbital or sinus tumours. Irradiation for retinoblastoma is not a harmless treatment and serious late side effects have to be considered. (Author)

  3. Fault Detection Coverage Quantification of Automatic Test Functions of Digital I and C System in NPPs

    International Nuclear Information System (INIS)

    Choi, Jong Gyun; Lee, Seung Jun; Hur, Seop; Lee, Young Jun; Jang, Seung Cheol

    2011-01-01

    Recently, analog instrument and control (I and C) systems in nuclear power plants (NPPs) have been replaced with digital systems for safer and more efficient operations. Digital I and C systems have adopted various fault-tolerant techniques that help the system correctly and safely perform the specific required functions in spite of the presence of faults. Each fault-tolerant technique has a different inspection period from real-time monitoring to monthly testing. The range covered by each fault-tolerant technique is also different. The digital I and C system, therefore, adopts multiple barriers consisting of various fault-tolerant techniques to increase total fault detection coverage. Even though these fault-tolerant techniques are adopted to ensure and improve the safety of a system, their effects have not been properly considered yet in most PSA models. Therefore, it is necessary to develop an evaluation method that can describe these features of a digital I and C system. Several issues must be considered in the fault coverage estimation of a digital I and C system, and two of them were handled in this work. The first is to quantify the fault coverage of each fault-tolerant technique implemented in the system, and the second is to exclude the duplicated effect of fault-tolerant techniques implemented simultaneously at each level of the system's hierarchy, as a fault occurring in a system might be detected by one or more fault-tolerant techniques. For this work, fault injection experiment was used to obtain the exact relations between faults and multiple barriers of fault-tolerant techniques. This experiment was applied to a bistable processor (BP) of a reactor protection system
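
    The duplicated-effect problem described above can be made concrete: a naive sum of per-technique coverages double-counts faults that more than one barrier detects (it can even exceed 1). If detection events were independent across techniques, an idealizing assumption (the paper instead measures the actual overlaps by fault injection), total coverage combines as a union:

```python
def combined_coverage(coverages):
    """Total fault detection coverage of stacked fault-tolerant techniques,
    assuming independent detection: a fault escapes only if every technique
    misses it, so C_total = 1 - prod(1 - c_i)."""
    miss = 1.0
    for c in coverages:
        miss *= (1.0 - c)
    return 1.0 - miss

# e.g. techniques with 90% and 50% coverage jointly detect 95% of faults,
# not the 140% a naive sum would suggest
c_total = combined_coverage([0.9, 0.5])
```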

  4. Immunodetection and quantification of insulin-like antigens in sprouts: development of an efficient functional food

    Directory of Open Access Journals (Sweden)

    Manju Pathak

    2011-11-01

    Full Text Available Background: The hormone insulin is a drug used for the treatment of type 1 and type 2 diabetes mellitus. Insulin used clinically is derived from bovine and porcine pancreas, as well as through recombinant technology. Patients with diabetes mellitus are unable to utilize glucose from blood due to either reduced secretion of insulin or the inability of the insulin to act; as a result, glucose levels in the blood rise. The prevention and treatment of type 2 diabetes mellitus is one of the world's major public health issues, and natural alternatives have a big role to play in this field. This study aims at discovering functional foods rich in insulin-like proteins. Here we report insulin-like proteins synthesized during the embryo development stage of Glycine max (soybean), Vigna radiata (moong) and Vigna unguiculata (cowpea) seeds. Hence, germination transforms these seeds into a source of human insulin-like proteins. Methods: In our investigation, protein extraction was followed by enzyme-linked immunosorbent assay (ELISA). The plant materials weighing 1 g were crushed with a mortar and pestle, and the protein was extracted with 20 ml of 0.05 M sodium phosphate buffer (pH 7.6). The suspensions were centrifuged at 6000 rpm for 15 min, and the clear supernatants were subjected to ELISA for the detection of insulin-like proteins. We used USDA nutritional data sources for the analysis of new products. Results: Our results demonstrate that insulin is not expressed in dry mature dormant seeds, but only during the embryo development stage. Dry mature dormant seeds and seeds germinated for 24 hours, 48 hours, 72 hours, and 96 hours of Glycine max, Vigna radiata and Vigna unguiculata were investigated for expression of insulin through immunodetection using anti-insulin antibodies. Dry dormant seeds of all three species showed zero expression at 450 nm for insulin, while

  5. Uncertainty quantification in ion–solid interaction simulations

    Energy Technology Data Exchange (ETDEWEB)

    Preuss, R., E-mail: preuss@ipp.mpg.de; Toussaint, U. von

    2017-02-15

    Within the framework of Bayesian uncertainty quantification we propose a non-intrusive reduced-order spectral approach (polynomial chaos expansion) to the simulation of ion–solid interactions. The method not only reduces the number of function evaluations but provides simultaneously a quantitative measure for which combinations of inputs have the most important impact on the result. It is applied to SDTRIM-simulations (Möller et al., 1988) with several uncertain and Gaussian distributed input parameters (i.e. angle, projectile energy, surface binding energy, target composition) and the results are compared to full-grid based approaches and sampling based methods with respect to reliability, efficiency and scalability.
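
    The non-intrusive regression variant of polynomial chaos can be illustrated on a toy two-input model; the function, sample count and polynomial degree below are assumptions for illustration, not the SDTRIM application. Sampling standard-normal inputs, running the model, and fitting probabilists' Hermite polynomials by least squares yields both the output statistics and the per-input variance shares the abstract refers to.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)

def simulator(x):
    """Toy stand-in for an expensive code such as SDTRIM."""
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

# non-intrusive step: sample the uncertain inputs, run the model per sample
n_samples, deg = 500, 3
X = rng.standard_normal((n_samples, 2))
y = simulator(X)

# total-degree-3 expansion in probabilists' Hermite polynomials, fit by
# least squares (regression form of polynomial chaos)
H0 = hermevander(X[:, 0], deg)   # He_0..He_3 at x1
H1 = hermevander(X[:, 1], deg)   # He_0..He_3 at x2
orders = [(i, j) for i in range(deg + 1) for j in range(deg + 1) if i + j <= deg]
V = np.column_stack([H0[:, i] * H1[:, j] for i, j in orders])
coef, *_ = np.linalg.lstsq(V, y, rcond=None)

mean = coef[orders.index((0, 0))]          # PCE mean; exact value here is 0.5
factorial = [1, 1, 2, 6]
# each term contributes c^2 * i! * j! to the variance (norm of He_i * He_j)
var_terms = {o: c ** 2 * factorial[o[0]] * factorial[o[1]]
             for o, c in zip(orders, coef) if o != (0, 0)}
total_var = sum(var_terms.values())
# which input matters most: variance share carried by each input alone
s1 = sum(v for (i, j), v in var_terms.items() if j == 0) / total_var
s2 = sum(v for (i, j), v in var_terms.items() if i == 0) / total_var
```

    Here `s2 > s1`, i.e. the quadratic input dominates the output variance, which is the kind of input ranking the paper extracts for the SDTRIM parameters.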

  6. Input signal shaping based on harmonic frequency response function for suppressing nonlinear optical frequency in frequency-scanning interferometry

    Science.gov (United States)

    Zhu, Yu; Liu, Zhigang; Deng, Wen; Deng, Zhongwen

    2018-05-01

    Frequency-scanning interferometry (FSI) using an external cavity diode laser (ECDL) is essential for many applications of the absolute distance measurement. However, owing to the hysteresis and creep of the piezoelectric actuator inherent in the ECDL, the optical frequency scanning exhibits a nonlinearity that seriously affects the phase extraction accuracy of the interference signal and results in the reduction of the measurement accuracy. To suppress the optical frequency nonlinearity, a harmonic frequency synthesis method for shaping the desired input signal instead of the original triangular wave is presented. The effectiveness of the presented shaping method is demonstrated through the comparison of the experimental results. Compared with an incremental Renishaw interferometer, the standard deviation of the displacement measurement of the FSI system is less than 2.4 μm when driven by the shaped signal.

  7. Evaluation of the use of a standard input function for compartment analysis of [123I]iomazenil data. Factors influencing the quantitative results

    International Nuclear Information System (INIS)

    Seike, Yujiro; Hashikawa, Kazuo; Oku, Naohiko

    2004-01-01

    Adoption of a standard input function (SIF) has been proposed for kinetic analysis of receptor binding potential (BP), instead of invasive frequent arterial samplings. The purpose of this study was to assess the SIF method in quantitative analysis of [ 123 I]iomazenil (IMZ), a central benzodiazepine antagonist, for SPECT. SPECT studies were performed on 10 patients with cerebrovascular disease or Alzheimer disease. Intermittent dynamic SPECT scans were performed from 0 to 201 min after IMZ injection. BPs calculated from SIFs obtained from normal volunteers (BP S ) were compared with those calculated from individual arterial samplings (BP O ). Good correlations were shown between BP O s and BP S s in 9 subjects, but the maximum BP S was four times larger than the corresponding BP O in one case. There were no abnormal laboratory data in this patient, but the relative arterial input count in the late period was higher than the SIF. Simulation studies with modified input functions revealed that excess height in the late period can produce significant errors in estimated BPs. These results suggested that the simplified method with one-point arterial sampling and SIF cannot be applied clinically. One additional arterial sampling in the late period may be useful. (author)

  8. The human motor neuron pools receive a dominant slow‐varying common synaptic input

    Science.gov (United States)

    Negro, Francesco; Yavuz, Utku Şükrü

    2016-01-01

    Key points Motor neurons in a pool receive both common and independent synaptic inputs, although the proportion and role of their common synaptic input is debated. Classic correlation techniques between motor unit spike trains do not measure the absolute proportion of common input and have limitations as a result of the non-linearity of motor neurons. We propose a method that for the first time allows an accurate quantification of the absolute proportion of low frequency common synaptic input to motor neurons. All investigated muscles received a large (>60%) proportion of common input, irrespective of their different functional and control properties. These results increase our knowledge about the role of common and independent input to motor neurons in force control. Abstract Motor neurons receive both common and independent synaptic inputs. This observation is classically based on the presence of a significant correlation between pairs of motor unit spike trains. The functional significance of different relative proportions of common input across muscles, individuals and conditions is still debated. One of the limitations in our understanding of correlated input to motor neurons is that it has not been possible so far to quantify the absolute proportion of common input with respect to the total synaptic input received by the motor neurons. Indeed, correlation measures of pairs of output spike trains only allow for relative comparisons. In the present study, we report for the first time an approach for measuring the proportion of common input in the low frequency bandwidth. The investigated muscles received a large (>60%) proportion of common low frequency oscillations with respect to their total synaptic input. These results suggest that the central nervous system provides a large amount of common input to motor neuron pools, in a similar way to that for muscles with different functional and control properties. PMID:27151459

  9. Quantification of renal function

    International Nuclear Information System (INIS)

    Mubarak, Amani Hayder

    1999-06-01

    The evaluation of glomerular filtration rate (GFR) with Tc99m-DTPA using a single injection with the multiple-blood-sample method (plasma clearance) is a standard and reliable method, but the procedure is complicated and may not be suitable for routine clinical use. Alternatively, estimation of GFR using Tc99m-DTPA and a gamma camera computer system is very simple, does not require sampling of blood or urine, and provides individual kidney values of GFR (integral, uptake index methods)

  10. Quantification of thymidine kinase (TK1) mRNA in normal and leukemic cells and investigation of structure-function relationship of recombinant TK1 enzyme

    DEFF Research Database (Denmark)

    Kristensen, Tina

    Thymidine kinase (TK) catalyses the ATP-dependent phosphorylation of thymidine to thymidine monophosphate, which is subsequently phosphorylated to thymidine triphosphate and utilized for DNA synthesis. Human cytosolic TK (TK1) is cell cycle regulated, e.g. the TK1 activity increases sharply at the G...... patients with chronic lymphatic leukemia (CLL). 2: Structure-function relationship of recombinant TK1. In the first part a sensitive method (competitive PCR) for quantification of TK1 mRNA was established. The TK1 mRNA level was quantified in quiescent lymphocytes from control donors (n = 6...... are characterized as being quiescent, the TK activity was in the same range as in quiescent lymphocytes from control donors. However, quantification of the TK1 mRNA level showed that all five CLL patients had a very high level (6 to 22 x 10⁶ copies mg⁻¹ protein) of TK1 mRNA, corresponding to the level in dividing...

  11. Image-derived and arterial blood sampled input functions for quantitative PET imaging of the angiotensin II subtype 1 receptor in the kidney

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Tao; Tsui, Benjamin M. W.; Li, Xin; Vranesic, Melin; Lodge, Martin A.; Gulaldi, Nedim C. M.; Szabo, Zsolt, E-mail: zszabo@jhmi.edu [Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins School of Medicine, Baltimore, Maryland 21287 (United States)

    2015-11-15

    Purpose: The radioligand {sup 11}C-KR31173 has been introduced for positron emission tomography (PET) imaging of the angiotensin II subtype 1 receptor in the kidney in vivo. To study the biokinetics of {sup 11}C-KR31173 with a compartmental model, the input function is needed. Collection and analysis of arterial blood samples are the established approach to obtain the input function but they are not feasible in patients with renal diseases. The goal of this study was to develop a quantitative technique that can provide an accurate image-derived input function (ID-IF) to replace the conventional invasive arterial sampling and test the method in pigs with the goal of translation into human studies. Methods: The experimental animals were injected with [{sup 11}C]KR31173 and scanned up to 90 min with dynamic PET. Arterial blood samples were collected for the artery derived input function (AD-IF) and used as a gold standard for ID-IF. Before PET, magnetic resonance angiography of the kidneys was obtained to provide the anatomical information required for derivation of the recovery coefficients in the abdominal aorta, a requirement for partial volume correction of the ID-IF. Different image reconstruction methods, filtered back projection (FBP) and ordered subset expectation maximization (OS-EM), were investigated for the best trade-off between bias and variance of the ID-IF. The effects of kidney uptakes on the quantitative accuracy of ID-IF were also studied. Biological variables such as red blood cell binding and radioligand metabolism were also taken into consideration. A single blood sample was used for calibration in the later phase of the input function. Results: In the first 2 min after injection, the OS-EM based ID-IF was found to be biased, and the bias was found to be induced by the kidney uptake. No such bias was found with the FBP based image reconstruction method. 
However, the OS-EM based image reconstruction was found to reduce variance in the subsequent

  12. Uncertainty Quantification in Numerical Aerodynamics

    KAUST Repository

    Litvinenko, Alexander

    2017-05-16

    We consider the uncertainty quantification problem in aerodynamic simulations. We identify input uncertainties, classify them, suggest an appropriate statistical model and, finally, estimate the propagation of these uncertainties into the solution (pressure, velocity and density fields as well as the lift and drag coefficients). The deterministic problem under consideration is a compressible transonic Reynolds-averaged Navier-Stokes flow around an airfoil with random/uncertain data. Input uncertainties include: uncertain angle of attack, the Mach number, random perturbations in the airfoil geometry, mesh, shock location, turbulence model and parameters of this turbulence model. This problem requires efficient numerical/statistical methods since it is computationally expensive, especially for the uncertainties caused by random geometry variations which involve a large number of variables. In the numerical section we compare five methods, including quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature and a gradient-enhanced version of Kriging, radial basis functions and point collocation polynomial chaos, in their efficiency in estimating statistics of aerodynamic performance upon random perturbation to the airfoil geometry [D.Liu et al '17]. For modeling we used the TAU code, developed at DLR, Germany.

  13. Quantifying dynamic contrast-enhanced MRI of the knee in children with juvenile rheumatoid arthritis using an arterial input function (AIF) extracted from popliteal artery enhancement, and the effect of the choice of the AIF on the kinetic parameters.

    Science.gov (United States)

    Workie, Dagnachew W; Dardzinski, Bernard J

    2005-09-01

    Quantification of dynamic contrast-enhanced (DCE) MRI based on pharmacokinetic modeling requires specification of the arterial input function (AIF). A full representation of the plasma concentration data, including the initial rise and decay parts, considering the delay and dispersion of the bolus contrast is important. This work deals with modeling of DCE-MRI data from the knees of children with a history of juvenile rheumatoid arthritis (JRA) by using an AIF extracted from the signal enhancement data from the nearby popliteal artery. Three models for the AIFs were considered: a triexponential (AIF1), a gamma-variate plus a biexponential (AIF2), and a biexponential (AIF3). The pharmacokinetic parameters obtained from the model were Ktrans', kep, and V'p. The results from AIF1 and AIF2 showed no statistically significant difference. However, some statistically significant differences were seen with AIF3, particularly for parameters Ktrans' and V'p in the synovium (SNVM). These results suggest the importance of obtaining an appropriate AIF representation in pharmacokinetic modeling of JRA. Specifically, the initial rising part of the AIF should be incorporated for optimal pharmacokinetic modeling results. The pharmacokinetic parameters (mean ± SD) derived from AIF1, using the average plasma concentration data, were as follows: SNVM Ktrans' (min⁻¹) = 0.52 ± 0.34, kep (min⁻¹) = 0.71 ± 0.39, and V'p = 0.33 ± 0.16, and for the distal femoral physis (DFP) Ktrans' (min⁻¹) = 1.83 ± 1.78, kep (min⁻¹) = 2.65 ± 1.80, and V'p = 0.46 ± 0.31. The pharmacokinetic parameters in the SNVM may be useful for investigating activity and therapeutic efficacy in studies of JRA. Longitudinal studies are necessary to find or demonstrate the parameter that is more sensitive to disease activity. Copyright (c) 2005 Wiley-Liss, Inc.
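
The kind of pharmacokinetic model described above can be sketched numerically. The following is a minimal, illustrative Python sketch, not the authors' code: it builds a biexponential AIF of the kind the study calls AIF3 (with Weinmann-style constants chosen purely for illustration), simulates a noiseless tissue curve with the extended Tofts equation, and recovers Ktrans, kep and vp by least squares.

```python
# Illustrative extended Tofts fit against a biexponential AIF (AIF3-style).
# All constants and parameter values are assumptions for the sketch,
# not values from the study.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 10, 0.05)   # minutes
dt = t[1] - t[0]

def aif_biexp(tt, a1=3.99, m1=0.144, a2=4.78, m2=0.0111):
    """Biexponential plasma curve Cp(t); Weinmann-style constants (illustrative)."""
    return a1 * np.exp(-m1 * tt) + a2 * np.exp(-m2 * tt)

def ext_tofts(tt, ktrans, kep, vp):
    """Extended Tofts: Ct(t) = Ktrans * (Cp convolved with exp(-kep*t)) + vp * Cp(t)."""
    cp = aif_biexp(tt)
    irf = np.exp(-kep * tt)                       # impulse response of the tissue
    conv = np.convolve(cp, irf)[: tt.size] * dt   # discrete causal convolution
    return ktrans * conv + vp * cp

true_params = (0.52, 0.71, 0.05)        # Ktrans (min^-1), kep (min^-1), vp
ct = ext_tofts(t, *true_params)          # noiseless synthetic tissue curve
fit_params, _ = curve_fit(ext_tofts, t, ct, p0=(0.2, 0.3, 0.02))
```

With noiseless data the fit recovers the generating parameters; with real data the choice of AIF shape drives the fitted values, which is the abstract's point.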

  14. A new procedure for automatic fitting of the basilar-membrane input-output function to individual behavioral data

    DEFF Research Database (Denmark)

    Kowalewski, Borys; Fereczkowski, Michal; MacDonald, Ewen

    2016-01-01

    system and, potentially, for clinical diagnostics. Computational algorithms are available that mimic the functioning of the nonlinear cochlear processing. One such algorithm is the dual resonance non-linear (DRNL) filterbank [6]. Its parameters can be modified to account for individual hearing loss, e.......g., based on behavioral, temporal masking curves (TMC) data. This approach was used within the framework of the computational auditory signal-processing and perception (CASP) model to account for various aspects of SNHL [4]. However, due to the computational complexity, on-line fitting of the DRNL...

  15. Spectroscopic analysis and in vitro imaging applications of a pH responsive AIE sensor with a two-input inhibit function.

    Science.gov (United States)

    Zhou, Zhan; Gu, Fenglong; Peng, Liang; Hu, Ying; Wang, Qianming

    2015-08-04

    A novel terpyridine derivative formed stable aggregates in aqueous media (DMSO/H2O = 1/99) with dramatically enhanced fluorescence compared to its organic solution. Moreover, the ultraviolet absorption spectra also demonstrated specific responses to the incorporation of water. The yellow emission at 557 nm changed to intense greenish luminescence only in the presence of protons, and the system conformed to a molecular logic gate with a two-input INHIBIT function. This molecular-based material could permeate into live cells and remain undissociated in the cytoplasm. The new aggregation-induced emission (AIE) pH-type bio-probe permitted easy collection of yellow luminescence images on a fluorescence microscope. As designed, it displayed striking green emission in organelles at low internal pH. This feature enabled the self-assembled structure to serve a whole new function for pH detection within the field of cell imaging.
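
The two-input INHIBIT logic function realized by the sensor fires only when one input is present and the inhibiting input is absent. A trivial Python sketch of the truth table with abstract Boolean inputs; the chemical identities of the two inputs are as described in the paper and are not modelled here.

```python
# Two-input INHIBIT gate: output = A AND (NOT B).
# A and B are abstract Boolean inputs standing in for the sensor's
# chemical inputs (an assumption of this sketch).
def inhibit(a: bool, b: bool) -> bool:
    """Two-input INHIBIT logic gate."""
    return a and not b

# Print the full truth table
for a in (False, True):
    for b in (False, True):
        print(f"A={a!s:5} B={b!s:5} -> {inhibit(a, b)}")
```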

  16. Automatic individual arterial input functions calculated from PCA outperform manual and population-averaged approaches for the pharmacokinetic modeling of DCE-MR images.

    Science.gov (United States)

    Sanz-Requena, Roberto; Prats-Montalbán, José Manuel; Martí-Bonmatí, Luis; Alberich-Bayarri, Ángel; García-Martí, Gracián; Pérez, Rosario; Ferrer, Alberto

    2015-08-01

    To introduce a segmentation method to calculate an automatic arterial input function (AIF) based on principal component analysis (PCA) of dynamic contrast enhanced MR (DCE-MR) imaging and compare it with individual manually selected and population-averaged AIFs using calculated pharmacokinetic parameters. The study included 65 individuals with prostate examinations (27 tumors and 38 controls). Manual AIFs were individually extracted and also averaged to obtain a population AIF. Automatic AIFs were individually obtained by applying PCA to volumetric DCE-MR imaging data and finding the highest correlation of the PCs with a reference AIF. Variability was assessed using coefficients of variation and repeated measures tests. The different AIFs were used as inputs to the pharmacokinetic model, and correlation coefficients, Bland-Altman plots and analysis of variance tests were obtained to compare the results. Automatic PCA-based AIFs were successfully extracted in all cases. The manual and PCA-based AIFs showed good correlation (r between pharmacokinetic parameters ranging from 0.74 to 0.95), with differences below the manual individual variability (RMSCV up to 27.3%). The population-averaged AIF showed larger differences (r from 0.30 to 0.61). The automatic PCA-based approach minimizes the variability associated with obtaining individual volume-based AIFs in DCE-MR studies of the prostate. © 2014 Wiley Periodicals, Inc.
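
The selection step of the PCA-based method, decomposing the dynamic curves and keeping the component best correlated with a reference AIF, can be sketched on synthetic data. Everything below (curve shapes, voxel counts, variable names) is our own illustration, not the authors' implementation.

```python
# Sketch of PCA-based automatic AIF selection on synthetic dynamic curves.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 60)                 # minutes
ref_aif = t * np.exp(-1.5 * t)            # reference AIF shape (illustrative)

# Synthetic dynamic data: 20 arterial-like voxels, 180 slowly enhancing tissue voxels
arterial = ref_aif + 0.02 * rng.standard_normal((20, t.size))
tissue = 0.3 * (1 - np.exp(-0.4 * t)) + 0.02 * rng.standard_normal((180, t.size))
X = np.vstack([arterial, tissue])

# PCA via SVD on the mean-centred curves
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)

# Keep the component whose time course best matches the reference AIF
corrs = [abs(np.corrcoef(pc, ref_aif)[0, 1]) for pc in Vt[:5]]
best = int(np.argmax(corrs))
auto_aif = Vt[best] * np.sign(np.corrcoef(Vt[best], ref_aif)[0, 1])
```

Because the arterial voxels differ strongly from the bulk tissue enhancement, the arterial shape dominates a leading principal component, which the correlation test then picks out.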

  17. Quantification of Emphysema with a Three-Dimensional Chest CT Scan: Correlation with the Visual Emphysema Scoring on Chest CT, Pulmonary Function Tests and Dyspnea Severity

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hyun Jeong; Hwang, Jung Hwa [Dept. of Radiology, Soonchunhyang University Seoul Hospital, Seoul (Korea, Republic of)

    2011-09-15

    We wanted to prospectively evaluate the correlation between the quantification of emphysema using 3D CT densitometry with the visual emphysema score, pulmonary function tests (PFT) and the dyspnea score in patients with chronic obstructive pulmonary disease (COPD). Non-enhanced chest CT with 3D reconstruction was performed in 28 men with COPD (age 54-88 years). With histogram analysis, the total lung volume, mean lung density and proportion of low attenuation lung volume below predetermined thresholds were measured. The CT parameters were compared with the visual emphysema score, the PFT and the dyspnea score. A low attenuation lung volume below -950 HU was well correlated with the DLco and FEV{sub 1}/FVC. A low attenuation lung volume below -950 HU and -930 HU was correlated with the visual emphysema score. A low attenuation lung volume below -950 HU was correlated with the dyspnea score, although the correlations between the other CT parameters and the dyspnea score were not significant. Objective quantification of emphysema using 3D CT densitometry was correlated with the visual emphysema score. A low attenuation lung volume below -950 HU was correlated with the DLco, the FEV{sub 1}/FVC and the dyspnea score.

  18. Quantification of Emphysema with a Three-Dimensional Chest CT Scan: Correlation with the Visual Emphysema Scoring on Chest CT, Pulmonary Function Tests and Dyspnea Severity

    International Nuclear Information System (INIS)

    Park, Hyun Jeong; Hwang, Jung Hwa

    2011-01-01

    We wanted to prospectively evaluate the correlation between the quantification of emphysema using 3D CT densitometry with the visual emphysema score, pulmonary function tests (PFT) and the dyspnea score in patients with chronic obstructive pulmonary disease (COPD). Non-enhanced chest CT with 3D reconstruction was performed in 28 men with COPD (age 54-88 years). With histogram analysis, the total lung volume, mean lung density and proportion of low attenuation lung volume below predetermined thresholds were measured. The CT parameters were compared with the visual emphysema score, the PFT and the dyspnea score. A low attenuation lung volume below -950 HU was well correlated with the DLco and FEV1/FVC. A low attenuation lung volume below -950 HU and -930 HU was correlated with the visual emphysema score. A low attenuation lung volume below -950 HU was correlated with the dyspnea score, although the correlations between the other CT parameters and the dyspnea score were not significant. Objective quantification of emphysema using 3D CT densitometry was correlated with the visual emphysema score. A low attenuation lung volume below -950 HU was correlated with the DLco, the FEV1/FVC and the dyspnea score.
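
The histogram measure used in both records above, the proportion of lung volume below a fixed attenuation threshold, reduces to a one-line computation once the lung voxels are segmented. A hedged Python sketch on synthetic HU values; the -950/-930 HU thresholds are the abstracts', but the voxel distributions are invented.

```python
# Sketch of CT densitometric emphysema quantification: fraction of lung
# voxels below a HU threshold. A real pipeline would first segment the lungs;
# the voxel values here are synthetic.
import numpy as np

def low_attenuation_fraction(hu_values, threshold=-950):
    """Percent of lung voxels below `threshold` HU (low attenuation volume %)."""
    hu = np.asarray(hu_values)
    return 100.0 * np.count_nonzero(hu < threshold) / hu.size

rng = np.random.default_rng(1)
# Toy 'segmented lung': ~85% normal parenchyma, ~15% emphysematous voxels
lung_hu = np.concatenate([
    rng.normal(-860, 30, 85_000),    # normal lung around -860 HU
    rng.normal(-980, 10, 15_000),    # emphysema near -980 HU
])
lav_950 = low_attenuation_fraction(lung_hu, -950)
lav_930 = low_attenuation_fraction(lung_hu, -930)
```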

  19. Effect of rehabilitation worker input on visual function outcomes in individuals with low vision: study protocol for a randomised controlled trial.

    Science.gov (United States)

    Acton, Jennifer H; Molik, Bablin; Binns, Alison; Court, Helen; Margrain, Tom H

    2016-02-24

    Visual Rehabilitation Officers help people with a visual impairment maintain their independence. This intervention adopts a flexible, goal-centred approach, which may include training in mobility, use of optical and non-optical aids, and performance of activities of daily living. Although Visual Rehabilitation Officers are an integral part of the low vision service in the United Kingdom, evidence that they are effective is lacking. The purpose of this exploratory trial is to estimate the impact of a Visual Rehabilitation Officer on self-reported visual function, psychosocial and quality-of-life outcomes in individuals with low vision. In this exploratory, assessor-masked, parallel group, randomised controlled trial, participants will be allocated either to receive home visits from a Visual Rehabilitation Officer (n = 30) or to a waiting list control group (n = 30) in a 1:1 ratio. Adult volunteers with a visual impairment, who have been identified as needing rehabilitation officer input by a social worker, will take part. Those with an urgent need for a Visual Rehabilitation Officer or who have a cognitive impairment will be excluded. The primary outcome measure will be self-reported visual function (48-item Veterans Affairs Low Vision Visual Functioning Questionnaire). Secondary outcome measures will include psychological and quality-of-life metrics: the Patient Health Questionnaire (PHQ-9), the Warwick-Edinburgh Mental Well-being Scale (WEMWBS), the Adjustment to Age-related Visual Loss Scale (AVL-12), the Standardised Health-related Quality of Life Questionnaire (EQ-5D) and the UCLA Loneliness Scale. The interviewer collecting the outcomes will be masked to the group allocations. The analysis will be undertaken on a complete case and intention-to-treat basis. Analysis of covariance (ANCOVA) will be applied to follow-up questionnaire scores, with the baseline score as a covariate. 
This trial is expected to provide robust effect size estimates of the intervention

  20. Semiautomatic determination of arterial input functions for quantitative dynamic contrast-enhanced magnetic resonance imaging in non-small cell lung cancer patients.

    Science.gov (United States)

    Chung, Julius; Kim, Jae-Hun; Lee, Eun Ju; Kim, Yoo Na; Yi, Chin A

    2015-03-01

    The aim of this study was to validate a semiautomatic detection method for the arterial input functions (AIFs) using the Kendall coefficient of concordance (KCC) for quantitative analysis of dynamic contrast-enhanced magnetic resonance imaging in non-small cell lung cancer patients. We prospectively enrolled 28 patients (17 men, 11 women; mean age, 62 years) who had biopsy-proven non-small cell lung cancer. All enrolled patients underwent dynamic contrast-enhanced magnetic resonance imaging of the entire thorax. For the quantitative measurement of the pharmacokinetic parameters, Ktrans and ve, of the lung cancers, AIFs were determined in 2 different ways: a manual method that involved 3 independent thoracic radiologists selecting a region of interest (ROI) within the aortic arch in the 2D coronal plane and a semiautomatic method that used in-house software to establish a KCC score, which provided a measure of similarity to the typical AIF pattern. Three independent readers selected voxel clusters with high KCC scores calculated 3-dimensionally across planes in the data set. Ktrans and ve were correlated using intraclass correlation coefficients (ICCs), and Bland-Altman plots were used to examine agreement across methods and reproducibility within a method. Arterial input functions were determined using the data from ROI volumes that were significantly larger in the semiautomatic method (mean ± SD, 3360 ± 768 mm3) than in the manual method (677 ± 380 mm3) (P < 0.001). Ktrans showed very strong agreement (ICC, 0.927) and ve showed moderately strong agreement (ICC, 0.718) between the semiautomatic and manual methods. The reproducibility for Ktrans (ICCmanual, 0.813 and ICCsemiautomatic, 0.998; P < 0.001) and ve (ICCmanual, 0.455 and ICCsemiautomatic, 0.985; P < 0.001) was significantly better with the semiautomatic method than the manual method. We found semiautomated detection using KCC to be a robust method for determining the AIF. This method allows for larger ROIs specified in 3D across planes
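
Kendall's coefficient of concordance, the similarity score the semiautomatic method thresholds to pick AIF voxels, is computed from ranked time curves. A minimal Python sketch on synthetic curves with our own function names; in the study the KCC measures how closely a voxel's enhancement curve follows a typical AIF pattern.

```python
# Kendall's W (coefficient of concordance) for a set of time curves.
# Curves and names are illustrative, not the study's implementation.
import numpy as np
from scipy.stats import rankdata

def kendalls_w(curves):
    """Kendall's W for m curves over n time points (1 = perfect concordance)."""
    m, n = curves.shape
    ranks = np.apply_along_axis(rankdata, 1, curves)  # rank each curve over time
    col_sums = ranks.sum(axis=0)
    s = np.sum((col_sums - col_sums.mean()) ** 2)
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

t = np.linspace(0.01, 1, 30)
base = t * np.exp(-4 * t)                              # AIF-like bolus shape
aif_like = np.vstack([0.8 * base, base, 1.2 * base])   # same shape, different scale
w_concordant = kendalls_w(aif_like)                    # rank-identical curves -> W = 1
```

Because W depends only on ranks, amplitude differences between voxels do not matter; only the temporal shape does, which is what makes it usable as an AIF-shape detector.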

  1. GARFEM input deck description

    Energy Technology Data Exchange (ETDEWEB)

    Zdunek, A.; Soederberg, M. (Aeronautical Research Inst. of Sweden, Bromma (Sweden))

    1989-01-01

    The input card deck for the finite element program GARFEM version 3.2 is described in this manual. The program includes, but is not limited to, capabilities to handle the following problems: * Linear bar and beam element structures, * Geometrically non-linear problems (bar and beam), both static and transient dynamic analysis, * Transient response dynamics from a catalog of time varying external forcing function types or input function tables, * Eigenvalue solution (modes and frequencies), * Multi point constraints (MPC) for the modelling of mechanisms and e.g. rigid links. The MPC definition is used only in the geometrically linearized sense, * Beams with disjunct shear axis and neutral axis, * Beams with rigid offset. An interface exists that connects GARFEM with the program GAROS. GAROS is a program for aeroelastic analysis of rotating structures. Since this interface was developed, GARFEM now serves as a preprocessor program in place of NASTRAN, which was formerly used. Documentation of the methods applied in GARFEM exists but is so far limited to the capacities in existence before the GAROS interface was developed.

  2. Ecosystem function and service quantification and valuation in a conventional winter wheat production system with DAISY model in Denmark

    DEFF Research Database (Denmark)

    Ghaley, Bhim Bahadur; Porter, John Roy

    2014-01-01

    and ES provision. The objective was to quantify two EF: soil water storage and nitrogen mineralization and three ES: food and fodder production and carbon sequestration, in a conventional winter wheat production system at 2.6% SOM compared to 50% lower (1.3%) and 50% higher (3.9%) SOM in Denmark by DAISY...... model. At 2.6% SOM, the food and fodder production was 6.49 and 6.86 t ha⁻¹ year⁻¹ respectively, whereas carbon sequestration and soil water storage were 9.73 t ha⁻¹ year⁻¹ and 684 mm ha⁻¹ year⁻¹ respectively, and nitrogen mineralisation was 83.58 kg ha⁻¹ year⁻¹. At 2.6% SOM, the two EF and three ES values were US......$ 177 and US$ 2542 ha⁻¹ year⁻¹ respectively, equivalent to US$ 96 and US$ 1370 million year⁻¹ respectively in Denmark. The EF and ES quantities and values were positively correlated with SOM content. Hence, the quantification and valuation of EF and ES provides an empirical tool for optimising the EF and ES...

  3. Moment analysis of the time-dependent transmission of a step-function input of a radioactive gas through an adsorber bed

    International Nuclear Information System (INIS)

    Lee, T.V.; Rothstein, D.; Madey, R.

    1986-01-01

    The time-dependent concentration of a radioactive gas at the outlet of an adsorber bed for a step change in the input concentration is analyzed by the method of moments. This moment analysis yields analytical expressions for calculating the kinetic parameters of a gas adsorbed on a porous solid in terms of observables from a time-dependent transmission curve. Transmission is the ratio of the adsorbate outlet concentration to that at the inlet. The three nonequilibrium parameters are the longitudinal diffusion coefficient, the solid-phase diffusion coefficient, and the interfacial mass-transfer coefficient. Three quantities that can be extracted in principle from an experimental transmission curve are the equilibrium transmission, the average residence (or propagation) time, and the first moment relative to the propagation time. The propagation time for a radioactive gas is given by the time integral of one minus the transmission (expressed as a fraction of the steady-state transmission). The steady-state transmission, the propagation time, and the first-order moment are functions of the three kinetic parameters and the equilibrium adsorption capacity. The equilibrium adsorption capacity is extracted from an experimental transmission curve for a stable gaseous isotope. The three kinetic parameters can be obtained by solving the three analytical expressions simultaneously. No empirical correlations are required.
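
The propagation-time definition in the abstract, the time integral of one minus the transmission expressed as a fraction of its steady-state value, is easy to check numerically. A small Python sketch using an illustrative first-order breakthrough curve, for which the integral should equal the time constant; the curve and constants are assumptions, not the paper's data.

```python
# Numerical check of t_p = integral of (1 - T(t)/T_ss) dt for an
# illustrative first-order transmission curve.
import numpy as np

def propagation_time(t, transmission, t_ss):
    """Propagation time by the trapezoid rule over the transmission curve."""
    integrand = 1.0 - transmission / t_ss
    return float(np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(t)))

tau, t_ss = 5.0, 0.8                   # illustrative time constant and plateau
t = np.linspace(0.0, 100.0, 5001)      # integrate well past the transient
T = t_ss * (1.0 - np.exp(-t / tau))    # first-order breakthrough curve
t_p = propagation_time(t, T, t_ss)     # should recover tau
```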

  4. Comparison of arterial input functions measured from ultra-fast dynamic contrast enhanced MRI and dynamic contrast enhanced computed tomography in prostate cancer patients

    Science.gov (United States)

    Wang, Shiyang; Lu, Zhengfeng; Fan, Xiaobing; Medved, Milica; Jiang, Xia; Sammet, Steffen; Yousuf, Ambereen; Pineda, Federico; Oto, Aytekin; Karczmar, Gregory S.

    2018-02-01

    The purpose of this study was to evaluate the accuracy of arterial input functions (AIFs) measured from dynamic contrast enhanced (DCE) MRI following a low dose of contrast media injection. The AIFs measured from DCE computed tomography (CT) were used as the ‘gold standard’. A total of twenty patients received CT and MRI scans on the same day. Patients received 120 ml Iohexol in DCE-CT and a low dose (0.015 mM kg-1) of gadobenate dimeglumine in DCE-MRI. The AIFs were measured in the iliac artery and normalized to the CT and MRI contrast agent doses. To correct for the different temporal resolution and sampling periods of CT and MRI, an empirical mathematical model (EMM) was used to fit the AIFs first. Then numerical AIFs (AIFCT and AIFMRI) were calculated based on the fitting parameters. The AIFMRI was convolved with a ‘contrast agent injection’ function (AIFMRICON) to correct for the difference between MRI and CT contrast agent injection times (~1.5 s versus 30 s). The results show that the EMMs accurately fitted the AIFs measured from CT and MRI. There was no significant difference (p > 0.05) between the maximum peak amplitude of the AIFs from CT (22.1 ± 4.1 mM/dose) and MRI after convolution (22.3 ± 5.2 mM/dose). The shapes of the AIFCT and AIFMRICON were very similar. Our results demonstrated that AIFs can be accurately measured by MRI following low dose contrast agent injection.
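
The injection correction described above, convolving the MRI-derived AIF with a 'contrast agent injection' function to mimic the ~30 s CT injection, can be sketched as follows. Shapes and constants are illustrative, not taken from the study; the point is that a unit-area rectangular kernel lowers and broadens the peak while preserving the administered dose.

```python
# Convolve a sharp-bolus AIF with a rectangular injection function to
# simulate a slower injection. All curve parameters are assumptions.
import numpy as np

dt = 0.5                                     # s per frame
t = np.arange(0.0, 300.0, dt)
aif_fast = (t / 10.0) * np.exp(-t / 12.0)    # sharp bolus (gamma-variate-like)

inj_duration = 30.0                          # s, the slower CT-style injection
kernel = np.ones(int(inj_duration / dt))
kernel /= kernel.sum()                       # unit area: total dose is preserved
aif_slow = np.convolve(aif_fast, kernel)[: t.size]
```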

  5. Superposition Quantification

    Science.gov (United States)

    Chang, Li-Na; Luo, Shun-Long; Sun, Yuan

    2017-11-01

    The principle of superposition is universal and lies at the heart of quantum theory. Although superposition has occupied a central and pivotal place ever since the inception of quantum mechanics a century ago, rigorous and systematic studies of the quantification issue have attracted significant interest only in recent years, and many related problems remain to be investigated. In this work we introduce a figure of merit which quantifies superposition from an intuitive and direct perspective, investigate its fundamental properties, connect it to some coherence measures, illustrate it through several examples, and apply it to analyze wave-particle duality. Supported by the Science Challenge Project under Grant No. TZ2016002, the Laboratory of Computational Physics, Institute of Applied Physics and Computational Mathematics, Beijing, and the Key Laboratory of Random Complex Structures and Data Science, Chinese Academy of Sciences, under Grant No. 2008DP173182

  6. Combined Quantification of the Global Proteome, Phosphoproteome, and Proteolytic Cleavage to Characterize Altered Platelet Functions in the Human Scott Syndrome.

    Science.gov (United States)

    Solari, Fiorella A; Mattheij, Nadine J A; Burkhart, Julia M; Swieringa, Frauke; Collins, Peter W; Cosemans, Judith M E M; Sickmann, Albert; Heemskerk, Johan W M; Zahedi, René P

    2016-10-01

    The Scott syndrome is a very rare and likely underdiagnosed bleeding disorder associated with mutations in the gene encoding anoctamin-6. Platelets from Scott patients are impaired in various Ca2+-dependent responses, including phosphatidylserine exposure, integrin closure, intracellular protein cleavage, and cytoskeleton-dependent morphological changes. Given the central role of anoctamin-6 in the platelet procoagulant response, we used quantitative proteomics to understand the underlying molecular mechanisms and the complex phenotypic changes in Scott platelets compared with control platelets. Therefore, we applied an iTRAQ-based multi-pronged strategy to quantify changes in (1) the global proteome, (2) the phosphoproteome, and (3) proteolytic events between resting and stimulated Scott and control platelets. Our data indicate a limited number of proteins with decreased (70) or increased (64) expression in Scott platelets; among these we confirmed the absence of anoctamin-6 and the strong up-regulation of aquaporin-1 by parallel reaction monitoring. The quantification of 1566 phosphopeptides revealed major differences between Scott and control platelets after stimulation with thrombin/convulxin or ionomycin. In Scott platelets, phosphorylation levels of proteins regulating cytoskeletal or signaling events were increased. Finally, we quantified 1596 N-terminal peptides in activated Scott and control platelets, 180 of which we identified as calpain-regulated, whereas a distinct set of 23 neo-N termini was caspase-regulated. In Scott platelets, calpain-induced cleavage of cytoskeleton-linked and signaling proteins was downregulated, in accordance with an increased phosphorylation state. Thus, multipronged proteomic profiling of Scott platelets provides detailed insight into their protection against detrimental Ca2+-dependent changes that are normally associated with phosphatidylserine exposure. © 2016 by The American Society for Biochemistry and Molecular

  7. Spatially Resolved MR-Compatible Doppler Ultrasound: Proof of Concept for Triggering of Diagnostic Quality Cardiovascular MRI for Function and Flow Quantification at 3T.

    Science.gov (United States)

    Crowe, Lindsey Alexandra; Manasseh, Gibran; Chmielewski, Aneta; Hachulla, Anne-Lise; Speicher, Daniel; Greiser, Andreas; Muller, Hajo; de Perrot, Thomas; Vallee, Jean-Paul; Salomir, Rares

    2018-02-01

    We demonstrate the use of a magnetic-resonance (MR)-compatible ultrasound (US) imaging probe using spatially resolved Doppler for diagnostic quality cardiovascular MR imaging (MRI) as an initial step toward hybrid US/MR fetal imaging. A newly developed technology for a dedicated MR-compatible phased array ultrasound-imaging probe acquired pulsed color Doppler carotid images, which were converted in near-real time to a trigger signal for cardiac cine and flow quantification MRI. Ultrasound and MR data acquired simultaneously were interference free. Conventional electrocardiogram (ECG) and the proposed spatially resolved Doppler triggering were compared in 10 healthy volunteers. A synthetic "false-triggered" image was retrospectively processed using metric optimized gating (MOG). Images were scored by expert readers, and sharpness, cardiac function and aortic flow were quantified. Four-dimensional (4-D) flow (two volunteers) showed feasibility of Doppler triggering over a long acquisition time. Imaging modalities were compatible. US probe positioning was stable and comfortable. Image quality scores and quantified sharpness were statistically equal for Doppler- and ECG-triggering (p > 0.05). ECG-, Doppler-triggered, and MOG ejection fractions were equivalent (p > 0.05), with false-triggered values significantly lower (p < 0.05). 4-D flow quantification gave consistent results between ECG and Doppler triggering. We report interference-free pulsed color Doppler ultrasound during MR data acquisition. Cardiovascular MRI of diagnostic quality was successfully obtained with pulsed color Doppler triggering. The hardware platform could further enable advanced free-breathing cardiac imaging. Doppler ultrasound triggering is applicable where ECG is compromised due to pathology or interference at higher magnetic fields, and where direct ECG is impossible, i.e., fetal imaging.

  8. PREMIUM - Benchmark on the quantification of the uncertainty of the physical models in the system thermal-hydraulic codes

    International Nuclear Information System (INIS)

    Skorek, Tomasz; Crecy, Agnes de

    2013-01-01

    PREMIUM (Post BEMUSE Reflood Models Input Uncertainty Methods) is an activity launched with the aim to push forward the methods of quantification of physical model uncertainties in thermal-hydraulic codes. It is endorsed by OECD/NEA/CSNI/WGAMA. The PREMIUM benchmark is addressed to all who apply uncertainty evaluation methods based on input uncertainty quantification and propagation. The benchmark is based on a selected case of uncertainty analysis applied to the simulation of quench front propagation in an experimental test facility. Application to an experiment enables evaluation and confirmation of the quantified probability distribution functions on the basis of experimental data. The scope of the benchmark comprises a review of the existing methods, selection of potentially important uncertain input parameters, preliminary quantification of the ranges and distributions of the identified parameters, evaluation of the probability density functions using experimental results of tests performed on the FEBA test facility, and confirmation/validation of the performed quantification on the basis of a blind calculation of the Reflood 2-D PERICLES experiment. (authors)
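
The benchmark's core loop, quantifying input PDFs and propagating them through a code to examine the output distribution, can be illustrated with a toy Monte Carlo sketch. The input distributions and the response function below are stand-ins of our own invention, not PREMIUM's quantified parameters or a thermal-hydraulic model.

```python
# Toy Monte Carlo propagation of input uncertainties through a model.
# Distributions and the response function are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 20_000

# Quantified input PDFs (illustrative): a multiplicative wall heat-transfer
# uncertainty and an additive interfacial-friction bias.
htc_mult = rng.normal(1.0, 0.1, n)
fric_bias = rng.uniform(-0.2, 0.2, n)

def quench_front_speed(htc, fric):
    """Toy response surface standing in for the thermal-hydraulic code."""
    return 2.0 * htc / (1.5 + 0.5 * fric)

speed = quench_front_speed(htc_mult, fric_bias)
band_lo, band_hi = np.percentile(speed, [2.5, 97.5])   # 95% output uncertainty band
```

In the benchmark itself the propagation step is a full code run per sample, which is why efficient sampling and validated input distributions matter.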

  9. A novel framework for linking functional diversity of plants with other trophic levels for the quantification of ecosystem services.

    Czech Academy of Sciences Publication Activity Database

    Lavorel, S.; Storkey, J.; Bardgett, R.D.; de Bello, Francesco; Berg, M. P.; Le Roux, X.; Moretti, M.; Mulder, Ch.; Pakeman, R. J.; Diaz, S.; Harrington, R.

    2013-01-01

    Vol. 24, No. 5 (2013), 942-948, ISSN 1100-9233 R&D Projects: GA ČR GAP505/12/1296 Institutional support: RVO:67985939 Keywords: Functional trait * Ecosystem functioning * Biotic interactions Subject RIV: EF - Botanics Impact factor: 3.372, year: 2013

  10. Corrections of arterial input function for dynamic H₂¹⁵O PET to assess perfusion of pelvic tumours: arterial blood sampling versus image extraction

    International Nuclear Information System (INIS)

    Luedemann, L; Sreenivasa, G; Michel, R; Rosner, C; Plotkin, M; Felix, R; Wust, P; Amthauer, H

    2006-01-01

    Assessment of perfusion with ¹⁵O-labelled water (H₂¹⁵O) requires measurement of the arterial input function (AIF). The arterial time activity curve (TAC) measured using the peripheral sampling scheme requires corrections for delay and dispersion. In this study, parametrizations with and without arterial spillover correction for fitting of the tissue curve are evaluated. Additionally, a completely noninvasive method for generation of the AIF from a dynamic positron emission tomography (PET) acquisition is applied to assess perfusion of pelvic tumours. This method uses a volume of interest (VOI) to extract the TAC from the femoral artery. The VOI TAC is corrected for spillover using a separate tissue TAC, and for recovery by determining the recovery coefficient on a coregistered CT data set. The techniques were applied in five patients with pelvic tumours who underwent a total of 11 examinations. Delay and dispersion correction of the blood TAC without arterial spillover correction yielded physiologically inconsistent solutions in seven examinations. Correction of arterial spillover increased the fitting accuracy and yielded consistent results in all patients. Generation of an AIF from PET image data was investigated as an alternative to arterial blood sampling and was shown to have an intrinsic potential to determine the AIF noninvasively and reproducibly. The AIF extracted from a VOI in a dynamic PET scan was similar in shape to the blood AIF but yielded significantly higher tissue perfusion values (mean of 104.0 ± 52.0%) and lower partition coefficients (-31.6 ± 24.2%). The perfusion values and partition coefficients determined with the VOI technique have to be corrected in order to compare the results with those of studies using a blood AIF
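
    The spillover and recovery corrections described above have a common generic form: subtract a spillover fraction of the tissue curve, then divide by the recovery coefficient. The sketch below uses that generic form with invented toy curves and coefficients; the paper's actual estimation of the spillover fraction and the CT-derived recovery coefficient is more involved.

```python
def correct_voi_aif(c_voi, c_tissue, spill_frac, recovery):
    """Correct an image-derived TAC for tissue spillover and partial-volume
    recovery: C_blood(t) = (C_VOI(t) - spill_frac * C_tissue(t)) / recovery."""
    return [(v - spill_frac * t) / recovery for v, t in zip(c_voi, c_tissue)]

# Toy time activity curves (arbitrary units), illustrative coefficients
c_voi = [0.0, 5.0, 8.0, 6.0, 4.0]
c_tissue = [0.0, 1.0, 2.0, 2.5, 2.5]
aif = correct_voi_aif(c_voi, c_tissue, spill_frac=0.3, recovery=0.7)
print([round(a, 2) for a in aif])  # → [0.0, 6.71, 10.57, 7.5, 4.64]
```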

  11. TART input manual

    International Nuclear Information System (INIS)

    Kimlinger, J.R.; Plechaty, E.F.

    1982-01-01

    The TART code is a Monte Carlo neutron/photon transport code that runs only on the CRAY computer. All the input cards for the TART code are listed, and definitions for all input parameters are given. The execution and limitations of the code are described, and input for two sample problems is given

  12. Evaluation of severe accident risks: Quantification of major input parameters

    International Nuclear Information System (INIS)

    Harper, F.T.; Breeding, R.J.; Brown, T.D.; Gregory, J.J.; Payne, A.C.; Gorham, E.D.; Amos, C.N.

    1990-12-01

    This report records part of the vast amount of information received during the expert judgment elicitation process that took place in support of the NUREG-1150 effort sponsored by the US Nuclear Regulatory Commission. The results of the In-Vessel Expert Panel are presented in this part of Volume 2 of NUREG/CR-4551. The In-Vessel Panel considered six issues: temperature-induced pressurized water reactor (PWR) hot leg or surge line failure before vessel breach; temperature-induced steam generator tube rupture (SGTR) before vessel breach; boiling water reactor (BWR) in-vessel hydrogen production; BWR bottom head failure; PWR in-vessel hydrogen generation; and PWR bottom head failure. 83 refs., 58 figs., 56 tabs

  13. Evaluation of severe accident risks: Quantification of major input parameters

    International Nuclear Information System (INIS)

    Harper, F.T.; Payne, A.C.; Breeding, R.J.; Gorham, E.D.; Brown, T.D.; Rightley, G.S.; Gregory, J.J.; Murfin, W.; Amos, C.N.

    1991-04-01

    This report records part of the vast amount of information received during the expert judgment elicitation process that took place in support of the NUREG-1150 effort sponsored by the U.S. Nuclear Regulatory Commission. The results of the Containment Loads and Molten Core/Containment Interaction Expert Panel Elicitation are presented in this part of Volume 2 of NUREG/CR-4551. The Containment Loads Expert Panel considered seven issues: (1) hydrogen phenomena at Grand Gulf; (2) hydrogen burn at vessel breach at Sequoyah; (3) BWR reactor building failure due to hydrogen; (4) Grand Gulf containment loads at vessel breach; (5) pressure increment in the Sequoyah containment at vessel breach; (6) loads at vessel breach: Surry; and (7) pressure increment in the Zion containment at vessel breach. The report begins with a brief discussion of the methods used to elicit the information from the experts. The information for each issue is then presented in five sections: (1) a brief definition of the issue, (2) a brief summary of the technical rationale supporting the distributions developed by each of the experts, (3) a brief description of the operations that the project staff performed on the raw elicitation results in order to aggregate the distributions, (4) the aggregated distributions, and (5) the individual expert elicitation summaries. The Molten Core/Containment Interaction Panel considered three issues. The results of the following two of these issues are presented in this document: (1) Peach Bottom drywell shell meltthrough; and (2) Grand Gulf pedestal erosion. 89 figs., 154 tabs
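
    The "operations that the project staff performed ... in order to aggregate the distributions" are not detailed in this record. A simple equal-weight linear opinion pool — averaging the experts' cumulative distribution functions, the approach commonly associated with NUREG-1150 — can be sketched as follows; the expert curves here are invented uniform distributions, purely for illustration.

```python
def pooled_cdf(expert_cdfs, x):
    """Equal-weight linear opinion pool: average the experts' CDFs at x."""
    return sum(cdf(x) for cdf in expert_cdfs) / len(expert_cdfs)

def uniform_cdf(a, b):
    # CDF of a uniform distribution on [a, b]
    return lambda x: min(1.0, max(0.0, (x - a) / (b - a)))

# Toy expert judgments for, e.g., a containment load pressure increment (MPa)
experts = [uniform_cdf(0.2, 0.6), uniform_cdf(0.3, 0.9), uniform_cdf(0.1, 0.5)]
print(round(pooled_cdf(experts, 0.5), 3))  # → 0.694
```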

  14. Direct qPCR quantification using the Quantifiler(®) Trio DNA quantification kit.

    Science.gov (United States)

    Liu, Jason Yingjie

    2014-11-01

    The effectiveness of a direct quantification assay is essential to the adoption of a combined direct quantification/direct STR workflow. In this paper, the feasibility of using the Quantifiler(®) Trio DNA quantification kit for the direct quantification of forensic casework samples was investigated. Both low-level touch DNA samples and blood samples were collected on PE swabs and quantified directly. The increased sensitivity of the Quantifiler(®) Trio kit enabled the detection of less than 10 pg of DNA in unprocessed touch samples and also minimized the stochastic effects experienced by different targets in the same sample. The DNA quantity information obtained from a direct quantification assay using the Quantifiler(®) Trio kit can also be used to accurately estimate the optimal input DNA quantity for a direct STR amplification reaction. The correlation between the direct quantification results (Quantifiler(®) Trio kit) and the direct STR results (GlobalFiler™ PCR amplification kit(*)) for low-level touch DNA samples indicates that direct quantification using the Quantifiler(®) Trio DNA quantification kit is more reliable than the Quantifiler(®) Duo DNA quantification kit for predicting the STR results of unprocessed touch DNA samples containing less than 10 pg of DNA. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  15. Flexible and re-configurable optical three-input XOR logic gate of phase-modulated signals with multicast functionality for potential application in optical physical-layer network coding.

    Science.gov (United States)

    Lu, Guo-Wei; Qin, Jun; Wang, Hongxiang; Ji, XuYuefeng; Sharif, Gazi Mohammad; Yamaguchi, Shigeru

    2016-02-08

    Optical logic gates, especially the exclusive-or (XOR) gate, play an important role in accomplishing photonic computing and various network functionalities in future optical networks. On the other hand, optical multicast is another indispensable functionality for efficiently delivering information in optical networks. In this paper, for the first time, we propose and experimentally demonstrate a flexible optical three-input XOR gate scheme for multiple phase-modulated input signals with a 1-to-2 multicast functionality for each XOR operation, using the four-wave mixing (FWM) effect in a single piece of highly nonlinear fiber (HNLF). Through FWM in the HNLF, all of the possible XOR operations among the input signals can be realized simultaneously by sharing a single piece of HNLF. By selecting the obtained XOR components with a subsequent wavelength-selective component, the number of XOR gates and the participating light in the XOR operations can be flexibly configured. The re-configurability of the proposed XOR gate and the integration of the optical logic gate and multicast functions in a single device offer flexibility in network design and improve network efficiency. We experimentally demonstrate a flexible 3-input XOR gate for four 10-Gbaud binary phase-shift keying signals with a multicast scale of 2. Error-free operation of the obtained XOR results is achieved. A potential application of the integrated XOR and multicast function in network coding is also discussed.
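
    The logical behaviour behind an FWM-based XOR of phase-modulated signals can be verified numerically: an idler generated at ω1 + ω2 − ω3 carries the phase φ1 + φ2 − φ3, which for BPSK (phases 0 or π) reduces modulo 2π to the three-input XOR of the data bits. The toy check below is my illustration of that phase relation, not the authors' signal processing.

```python
from itertools import product
import math

def fwm_idler_bit(b1, b2, b3):
    # Idler phase p1 + p2 - p3, mapped back to a bit modulo 2*pi
    phase = math.pi * b1 + math.pi * b2 - math.pi * b3
    return round(phase / math.pi) % 2

for b1, b2, b3 in product([0, 1], repeat=3):
    assert fwm_idler_bit(b1, b2, b3) == b1 ^ b2 ^ b3
print("idler phase reproduces 3-input XOR for all 8 BPSK input combinations")
```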

  16. A new method based on fractal variance function for analysis and quantification of sympathetic and vagal activity in variability of R-R time series in ECG signals

    Energy Technology Data Exchange (ETDEWEB)

    Conte, Elio [Department of Pharmacology and Human Physiology and Tires, Center for Innovative Technologies for Signal Detection and Processing, University of Bari, Bari (Italy); School of Advanced International Studies on Nuclear, Theoretical and Nonlinear Methodologies-Bari (Italy)], E-mail: fisio2@fisiol.uniba.it; Federici, Antonio [Department of Pharmacology and Human Physiology and Tires, Center for Innovative Technologies for Signal Detection and Processing, University of Bari, Bari (Italy); Zbilut, Joseph P. [Department of Molecular Biophysics and Physiology, Rush University Medical Center, 1653W Congress, Chicago, IL 60612 (United States)

    2009-08-15

    It is known that R-R time series calculated from a recorded ECG are strongly correlated with the sympathetic and vagal regulation of the sinus pacemaker activity. In human physiology it is a crucial question to estimate such components with accuracy. Fourier analysis still dominates the analysis of such data, ignoring that the FFT is valid only under crucial restrictions, notably linearity and stationarity, which are largely violated in R-R time series data. To move beyond this approach, we introduce a new method, called CZF. It is based on variogram analysis and stems from a profound link with Recurrence Quantification Analysis, a basic tool for the investigation of nonlinear and non-stationary time series. A relevant feature of the method is therefore that it may also be applied to nonlinear and non-stationary time series analysis. In addition, the method enables analysis of the fractal variance function, the Generalized Fractal Dimension and, finally, the relative probability density function of the data. The CZF gives very satisfactory results. In the present paper it is applied to experimental cases of normal subjects, patients with hypertension before and after therapy, and children under several different experimental conditions.
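
    The variogram at the core of the method can be sketched directly. This is my reading of the abstract — the empirical semivariogram gamma(h) = 0.5 · mean((x[i+h] − x[i])²) applied to a synthetic R-R series — not the authors' CZF implementation, which goes further (fractal variance function, Generalized Fractal Dimension).

```python
import math
import random

def variogram(x, max_lag):
    """Empirical semivariogram: gamma(h) = 0.5 * mean((x[i+h] - x[i])**2)."""
    gam = {}
    for h in range(1, max_lag + 1):
        diffs = [(x[i + h] - x[i]) ** 2 for i in range(len(x) - h)]
        gam[h] = 0.5 * sum(diffs) / len(diffs)
    return gam

# Toy R-R series (seconds): slow drift plus beat-to-beat variability
random.seed(1)
rr = [0.8 + 0.05 * math.sin(i / 20.0) + random.gauss(0, 0.01) for i in range(500)]
gam = variogram(rr, max_lag=10)
print({h: round(gam[h], 6) for h in (1, 5, 10)})
```

For a non-stationary series the variogram keeps growing with the lag h instead of flattening, which is what makes it usable where the FFT's stationarity assumption fails.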

  17. A new method based on fractal variance function for analysis and quantification of sympathetic and vagal activity in variability of R-R time series in ECG signals

    International Nuclear Information System (INIS)

    Conte, Elio; Federici, Antonio; Zbilut, Joseph P.

    2009-01-01

    It is known that R-R time series calculated from a recorded ECG are strongly correlated with the sympathetic and vagal regulation of the sinus pacemaker activity. In human physiology it is a crucial question to estimate such components with accuracy. Fourier analysis still dominates the analysis of such data, ignoring that the FFT is valid only under crucial restrictions, notably linearity and stationarity, which are largely violated in R-R time series data. To move beyond this approach, we introduce a new method, called CZF. It is based on variogram analysis and stems from a profound link with Recurrence Quantification Analysis, a basic tool for the investigation of nonlinear and non-stationary time series. A relevant feature of the method is therefore that it may also be applied to nonlinear and non-stationary time series analysis. In addition, the method enables analysis of the fractal variance function, the Generalized Fractal Dimension and, finally, the relative probability density function of the data. The CZF gives very satisfactory results. In the present paper it is applied to experimental cases of normal subjects, patients with hypertension before and after therapy, and children under several different experimental conditions.

  18. Assessment of automatic quantification of myocardial perfusion and left ventricular function derived from ECG-gated myocardial SPECT with 99mTc-tetrofosmin in ischemic heart disease

    Energy Technology Data Exchange (ETDEWEB)

    Abe, Mitsunori; Habara, Hirokazu; Tatsuno, Hironari; Fukuda, Hiroshi; Hamada, Noriko; Kazatani, Yukio [Ehime Prefectural Central Hospital (Japan)

    1999-09-01

    Non-invasive assessment of ischemic heart disease (IHD) requires information on both myocardial perfusion and left ventricular (LV) function. Recently, automatic quantification of ECG-gated myocardial scintigraphy with 99mTc-tetrofosmin (QGS) has become able to provide both. QGS, coronary angiography (CAG) and left ventriculography (LVG) were performed in 83 patients with severe IHD in the same period. Significant coronary artery stenoses on CAG were assessed by QGS. The sensitivity, specificity and accuracy of QGS for significant stenosis were excellent (85%, 93% and 88%). The LV end-diastolic and end-systolic volumes (EDV and ESV), LV ejection fraction (EF) and regional LV wall motion determined by QGS were compared with LVG. There was a good correlation between the values obtained from QGS and LVG (EDV: r=0.86, ESV: r=0.94, EF: r=0.84, p<0.0001), but QGS tended to underestimate EDV and EF. Complete agreement on regional LV wall motion was obtained in 427 (74.0%) of the 581 segments. In conclusion, QGS data were considered useful for the assessment of significant stenosis and LV function in severe IHD. (author)
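
    The ejection fraction compared between QGS and LVG above follows directly from the measured volumes, EF = (EDV − ESV) / EDV. A one-line sketch with illustrative numbers (not values from the study):

```python
def ejection_fraction(edv_ml, esv_ml):
    """LV ejection fraction in percent from end-diastolic/end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

print(round(ejection_fraction(120.0, 55.0), 1))  # → 54.2
```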

  19. Quantification of mutation-derived bias for alternate mating functionalities of the Saccharomyces cerevisiae Ste2p pheromone receptor.

    Science.gov (United States)

    Choudhary, Pooja; Loewen, Michele C

    2016-01-01

    Although well documented for mammalian G-protein-coupled receptors, alternate functionalities and associated alternate signalling remain to be unequivocally established for the Saccharomyces cerevisiae pheromone Ste2p receptor. Here, evidence supporting alternate functionalities for Ste2p is re-evaluated, extended and quantified. In particular, strong mating and constitutive signalling mutations, focusing on residues S254, P258 and S259 in TM6 of Ste2p, are stacked and investigated in terms of their effects on classical G-protein-mediated signal transduction associated with cell cycle arrest, and alternatively, their impact on downstream mating projection and zygote formation events. In relative dose response experiments, accounting for systemic and observational bias, mutational-derived functional differences were observed, validating the S254L-derived bias for downstream mating responses and highlighting complex relationships between TM6-mutation derived constitutive signalling and ligand-induced functionalities. Mechanistically, localization studies suggest that alterations to receptor trafficking may contribute to mutational bias, in addition to expected receptor conformational stabilization effects. Overall, these results extend previous observations and quantify the contributions of Ste2p variants to mediating cell cycle arrest versus downstream mating functionalities. © Crown copyright 2015.

  20. A function-behavior-structure framework for quantification and reproduction of emotional haptic experience in using an electronic device

    International Nuclear Information System (INIS)

    Bae, Il Ju; Lee, Soo Hong; Ok, Hyung Seok; Lee, Jae In

    2013-01-01

    A user's haptic experience in using an electronic device is related to the continuous and dynamic variation of the structural state of the device. Since changes in the structural components cause complex changes in the dynamics, it is difficult to predict the user's experience. We propose a function-behavior-structure framework to predict and improve the user's experience. The framework consists of a function layer model, a behavior layer model, and a structure layer model. Notably, the behavior layer model is independent of the device, being based on physical phenomena. Finally, an optimized structure which produces an ideal haptic experience for a cell phone is suggested.

  1. Upper Limb Evaluation in Duchenne Muscular Dystrophy: Fat-Water Quantification by MRI, Muscle Force and Function Define Endpoints for Clinical Trials.

    Science.gov (United States)

    Ricotti, Valeria; Evans, Matthew R B; Sinclair, Christopher D J; Butler, Jordan W; Ridout, Deborah A; Hogrel, Jean-Yves; Emira, Ahmed; Morrow, Jasper M; Reilly, Mary M; Hanna, Michael G; Janiczek, Robert L; Matthews, Paul M; Yousry, Tarek A; Muntoni, Francesco; Thornton, John S

    2016-01-01

    A number of promising experimental therapies for Duchenne muscular dystrophy (DMD) are emerging. Clinical trials currently rely on invasive biopsies or motivation-dependent functional tests to assess outcome. Quantitative muscle magnetic resonance imaging (MRI) could offer a valuable alternative and permit inclusion of non-ambulant DMD subjects. The aims of our study were to explore the responsiveness of upper-limb MRI muscle-fat measurement as a non-invasive objective endpoint for clinical trials in non-ambulant DMD, and to investigate the relationship of these MRI measures to those of muscle force and function. 15 non-ambulant DMD boys (mean age 13.3 y) and 10 age- and gender-matched healthy controls (mean age 14.6 y) were recruited. 3-Tesla MRI fat-water quantification was used to measure forearm muscle fat transformation in non-ambulant DMD boys compared with healthy controls. DMD boys were assessed at 4 time-points over 12 months, using 3-point Dixon MRI to measure muscle fat-fraction (f.f.). Images from ten forearm muscles were segmented and mean f.f. and cross-sectional area recorded. DMD subjects also underwent comprehensive upper limb function and force evaluation. Overall mean baseline forearm f.f. was higher in DMD than in healthy controls. The results support muscle f.f. as a biomarker to monitor disease progression in the upper limb in non-ambulant DMD, with sensitivity adequate to detect group-level change over time intervals practical for use in clinical trials. Clinical validity is supported by the association of the progressive fat transformation of muscle with loss of muscle force and function.
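
    Once Dixon imaging has separated the water (W) and fat (F) signals, the fat-fraction map reported above is the per-voxel ratio f.f. = F / (W + F). The sketch below shows only that final step on invented 2×2 toy arrays; the study's 3-point Dixon reconstruction (which also corrects field inhomogeneity) is considerably more involved.

```python
def fat_fraction(water, fat, eps=1e-12):
    """Per-voxel fat fraction F / (W + F) from separated water/fat images."""
    return [[f / (w + f + eps) for w, f in zip(wr, fr)]
            for wr, fr in zip(water, fat)]

water = [[80.0, 60.0], [40.0, 20.0]]   # toy signal intensities
fat   = [[20.0, 40.0], [60.0, 80.0]]
ff = fat_fraction(water, fat)
print([[round(v, 2) for v in row] for row in ff])  # → [[0.2, 0.4], [0.6, 0.8]]
```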

  2. Quantification of the impact of a confounding variable on functional connectivity confirms anti-correlated networks in the resting-state.

    Science.gov (United States)

    Carbonell, F; Bellec, P; Shmuel, A

    2014-02-01

    The effect of regressing out the global average signal (GAS) in resting state fMRI data has become a concern for interpreting functional connectivity analyses. It is not clear whether the reported anti-correlations between the Default Mode and the Dorsal Attention Networks are intrinsic to the brain, or are artificially created by regressing out the GAS. Here we introduce a concept, Impact of the Global Average on Functional Connectivity (IGAFC), for quantifying the sensitivity of seed-based correlation analyses to the regression of the GAS. This voxel-wise IGAFC index is defined as the product of two correlation coefficients: the correlation between the GAS and the fMRI time course of a voxel, times the correlation between the GAS and the seed time course. This definition enables the calculation of a threshold at which the impact of regressing-out the GAS would be large enough to introduce spurious negative correlations. It also yields a post-hoc impact correction procedure via thresholding, which eliminates spurious correlations introduced by regressing out the GAS. In addition, we introduce an Artificial Negative Correlation Index (ANCI), defined as the absolute difference between the IGAFC index and the impact threshold. The ANCI allows a graded confidence scale for ranking voxels according to their likelihood of showing artificial correlations. By applying this method, we observed regions in the Default Mode and Dorsal Attention Networks that were anti-correlated. These findings confirm that the previously reported negative correlations between the Dorsal Attention and Default Mode Networks are intrinsic to the brain and not the result of statistical manipulations. Our proposed quantification of the impact that a confound may have on functional connectivity can be generalized to global effect estimators other than the GAS. It can be readily applied to other confounds, such as systemic physiological or head movement interferences, in order to quantify their
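
    The IGAFC index as defined in the abstract — the product of corr(GAS, voxel time course) and corr(GAS, seed time course) — is straightforward to compute. A minimal sketch with invented toy time courses (the thresholding and ANCI steps are omitted):

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def igafc(gas, voxel, seed):
    """Impact of the Global Average on Functional Connectivity (per voxel)."""
    return pearson(gas, voxel) * pearson(gas, seed)

# Toy time courses: the seed tracks the GAS, the voxel anti-correlates with it
gas   = [0, 1, 2, 3, 4, 5, 4, 3, 2, 1]
seed  = [0, 1, 2, 2, 4, 5, 5, 3, 2, 1]
voxel = [5, 4, 3, 3, 1, 0, 1, 2, 3, 4]
print(round(igafc(gas, voxel, seed), 3))
```

A strongly negative IGAFC at a voxel flags a seed correlation that regressing out the GAS could push below zero, which is exactly the case the impact threshold is designed to screen.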

  3. Impact of arterial input function selection on the accuracy of dynamic contrast-enhanced MRI quantitative analysis for the diagnosis of clinically significant prostate cancer.

    Science.gov (United States)

    Azahaf, Mustapha; Haberley, Marc; Betrouni, Nacim; Ernst, Olivier; Behal, Hélène; Duhamel, Alain; Ouzzane, Adil; Puech, Philippe

    2016-03-01

    We used a limited temporal resolution dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) dataset to assess the impact of the arterial input function (AIF) choice on the transfer constant (Ktrans) used to distinguish prostate carcinoma (PCa) from benign tissue. Thirty-eight patients with clinically significant peripheral PCa (≥0.5 cc) were retrospectively studied. These patients underwent 1.5T multiparametric prostate MRI, with PCa and benign regions of interest (ROIs) selected using visual registration with the morphometric reconstruction obtained from radical prostatectomy. Using three pharmacokinetic (PK) analysis software programs, the mean Ktrans of the ROIs was computed using three AIFs: an individual AIF (Ind-AIF) and two literature population-averaged AIFs, of Weinmann (W-AIF) and of Fritz-Hansen (FH-AIF). A pairwise comparison of the areas under the receiver operating characteristic curves (AUROCCs) obtained with the different AIFs was performed. AUROCCs obtained with W-AIF (ranging from 0.801 to 0.843) were significantly higher than with FH-AIF (ranging from 0.698 to 0.780, 0.002 ≤ P ≤ 0.045) and similar to or higher than with Ind-AIF (ranging from 0.591 to 0.839, 0.014 ≤ P ≤ 0.9). Ind-AIF and FH-AIF provided similar AUROCCs (0.34 ≤ P ≤ 0.81). The pairwise correlation of Ktrans values was moderate to very strong when comparing W-AIF with FH-AIF (Spearman's correlation coefficients [SCCs] ranged from 0.55 to 0.93) and very weak to moderate when comparing W-AIF with Ind-AIF (SCCs ranged from 0.018 to 0.59) or FH-AIF with Ind-AIF (SCCs ranged from 0.30 to 0.51). W-AIF yielded a higher performance than FH-AIF and a similar or higher performance than Ind-AIF in distinguishing PCa from benign tissue. © 2015 Wiley Periodicals, Inc.

  4. Estimating the arterial input function from dynamic contrast-enhanced MRI data with compensation for flow enhancement (I): Theory, method, and phantom experiments.

    Science.gov (United States)

    van Schie, Jeroen J N; Lavini, Cristina; van Vliet, Lucas J; Vos, Frans M

    2018-05-01

    The arterial input function (AIF) represents the time-dependent arterial contrast agent (CA) concentration that is used in pharmacokinetic modeling. To develop a novel method for estimating the AIF from dynamic contrast-enhanced (DCE-) MRI data, while compensating for flow enhancement. Signal simulation and phantom measurements. Time-intensity curves (TICs) were simulated for different numbers of excitation pulses modeling flow effects. A phantom experiment was performed in which a solution (without CA) was passed through a straight tube at constant flow velocity. Dynamic fast spoiled gradient echo (FSPGR) at 3T MRI, both in the simulations and in the phantom experiment. TICs were generated for a duration of 373 seconds and sampled at intervals of 1.247 seconds (300 timepoints). The proposed method first estimates the number of pulses that spins have received, and then uses this knowledge to accurately estimate the CA concentration. The difference between the median of the estimated number of pulses and the true value was determined, as well as the interquartile range (IQR) of the estimates. The estimated CA concentrations were evaluated in the same way. The estimated number of pulses was also used to calculate flow velocity. The difference between the median estimated and reference number of pulses varied from -0.005 to -1.371 (corresponding IQRs: 0.853 and 48.377) at true values of 10 and 180 pulses, respectively. The difference between the median estimated CA concentration and the reference value varied from -0.00015 to 0.00306 mmol/L (corresponding IQRs: 0.01989 and 1.51013 mmol/L) at true values of 0.5 and 8.0 mmol/L, respectively, at an intermediate value of 100 pulses. The estimated flow velocities in the phantom were within 10% of the reference value. The proposed method accurately corrects the MRI signal affected by the inflow effect. Level of Evidence: 1. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2018;47:1190-1196. © 2017 International Society for Magnetic

  5. Estimating the arterial input function from dynamic contrast-enhanced MRI data with compensation for flow enhancement (II): Applications in spine diagnostics and assessment of Crohn's disease.

    Science.gov (United States)

    van Schie, Jeroen J N; Lavini, Cristina; van Vliet, Lucas J; Kramer, Gem; Pieters-van den Bos, Indra; Marcus, J T; Stoker, Jaap; Vos, Frans M

    2018-05-01

    Pharmacokinetic (PK) models can describe microvascular density and integrity. An essential component of PK models is the arterial input function (AIF), representing the time-dependent concentration of contrast agent (CA) in the blood plasma supplied to a tissue. To evaluate a novel method for subject-specific AIF estimation that takes inflow effects into account. Retrospective study. Thirteen clinical patients referred for spine-related complaints; 21 patients from a study into luminal Crohn's disease with known Crohn's Disease Endoscopic Index of Severity (CDEIS). Dynamic fast spoiled gradient echo (FSPGR) at 3T. A population-averaged AIF, AIFs derived from distally placed regions of interest (ROIs), and the new AIF method were applied. Tofts' PK model parameters (including v_p and K_trans) obtained with the three AIFs were compared. In the Crohn's patients, K_trans was correlated with CDEIS. The median values of the PK model parameters from the three methods were compared using a Mann-Whitney U-test. The associated variances were statistically assessed by the Brown-Forsythe test. Spearman's rank correlation coefficient was computed to test the correlation of K_trans with CDEIS. The median v_p was significantly larger when using the distal ROI approach compared with the two other methods (P < 0.05 for both comparisons, in both applications). Also, the variances in v_p were significantly larger with the ROI approach (P < 0.05 for all comparisons). In the Crohn's disease study, the estimated K_trans parameter correlated better with the CDEIS (r = 0.733, P < 0.001) when the proposed AIF was used, compared with AIFs from the distal ROI method (r = 0.429, P = 0.067) or the population-averaged AIF (r = 0.567, P = 0.011). The proposed method yielded realistic PK model parameters and improved the correlation of the K_trans parameter with CDEIS, compared with existing approaches. Level of Evidence: 3. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2018;47:1197-1204. © 2017
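
    The Tofts-type model linking the AIF to the fitted parameters in the records above can be sketched numerically. In the extended Tofts form, the tissue concentration is Ct(t) = v_p·Cp(t) + K_trans·∫₀ᵗ Cp(u)·exp(−k_ep·(t−u)) du with k_ep = K_trans/v_e. The AIF shape and parameter values below are invented for illustration; real fitting inverts this relation per voxel.

```python
import math

def extended_tofts(cp, dt, ktrans, ve, vp):
    """Tissue CA concentration from an AIF via the extended Tofts model
    (discrete convolution on a uniform time grid)."""
    kep = ktrans / ve
    ct = []
    for i in range(len(cp)):
        conv = sum(cp[j] * math.exp(-kep * (i - j) * dt) for j in range(i + 1)) * dt
        ct.append(vp * cp[i] + ktrans * conv)
    return ct

dt = 2.0  # seconds
t = [i * dt for i in range(150)]
# Toy AIF: gamma-variate-like bolus (not a measured or population curve)
cp = [3.0 * (ti / 30.0) ** 2 * math.exp(-ti / 30.0) for ti in t]
# Illustrative parameters: K_trans = 0.25/min, v_e = 0.3, v_p = 0.05
ct = extended_tofts(cp, dt, ktrans=0.25 / 60.0, ve=0.3, vp=0.05)
print(round(max(ct), 4))
```

The tissue curve lags and smooths the AIF, which is why an error in the AIF (e.g., inflow enhancement) propagates directly into K_trans and v_p.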

  6. Uncertainty quantification theory, implementation, and applications

    CERN Document Server

    Smith, Ralph C

    2014-01-01

    The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers ca...

  7. A function-behavior-structure framework for quantification and reproduction of emotional haptic experience in using an electronic device

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Il Ju; Lee, Soo Hong [Yonsei University, Seoul (Korea, Republic of); Ok, Hyung Seok; Lee, Jae In [LG Electronics Inc, Seoul (Korea, Republic of)

    2013-08-15

    A user's haptic experience in using an electronic device is related to the continuous and dynamic variation of the structural state of the device. Since changes in the structural components cause complex changes in the dynamics, it is difficult to predict the user's experience. We propose a function-behavior-structure framework to predict and improve the user's experience. The framework consists of a function layer model, a behavior layer model, and a structure layer model. Notably, the behavior layer model is independent of the device, being based on physical phenomena. Finally, an optimized structure which produces an ideal haptic experience for a cell phone is suggested.

  8. Parametric estimation of covariance function in Gaussian-process based Kriging models. Application to uncertainty quantification for computer experiments

    International Nuclear Information System (INIS)

    Bachoc, F.

    2013-01-01

    The parametric estimation of the covariance function of a Gaussian process is studied in the framework of the Kriging model. Maximum Likelihood and Cross Validation estimators are considered. The correctly specified case, in which the covariance function of the Gaussian process belongs to the parametric set used for estimation, is first studied in an increasing-domain asymptotic framework. The sampling considered is a randomly perturbed multidimensional regular grid. Consistency and asymptotic normality are proved for the two estimators. It is then shown that strong perturbations of the regular grid are always beneficial to Maximum Likelihood estimation. The incorrectly specified case, in which the covariance function of the Gaussian process does not belong to the parametric set used for estimation, is then studied. It is shown that Cross Validation is more robust than Maximum Likelihood in this case. Finally, two applications of the Kriging model with Gaussian processes are carried out on industrial data. For a validation problem of the friction model of the thermal-hydraulic code FLICA 4, where experimental results are available, it is shown that Gaussian-process modeling of the FLICA 4 code model error considerably improves its predictions. Finally, for a metamodeling problem of the GERMINAL thermal-mechanical code, the advantage of the Kriging model with Gaussian processes over neural network methods is shown. (author) [fr]
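
    The Maximum Likelihood step studied in this thesis can be illustrated in one dimension: select the length-scale of a squared-exponential covariance by minimizing the negative log-likelihood, then use it in the Kriging (GP posterior mean) predictor. This is a generic textbook sketch with invented data, not the thesis' estimators or asymptotic setting.

```python
import numpy as np

def neg_log_lik(theta, x, y, noise=1e-4):
    """Negative log-likelihood (up to a constant) of a zero-mean GP with a
    squared-exponential covariance of length-scale theta plus a noise nugget."""
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / theta ** 2)
    K += noise * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 30)
y = np.sin(x) + 0.01 * rng.standard_normal(30)

# Maximum Likelihood over a grid of candidate length-scales
thetas = np.linspace(0.2, 3.0, 50)
best = min(thetas, key=lambda th: neg_log_lik(th, x, y))

# Kriging predictor at a new point with the selected length-scale
xs = 2.37
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / best ** 2) + 1e-4 * np.eye(30)
k = np.exp(-0.5 * (x - xs) ** 2 / best ** 2)
pred = k @ np.linalg.solve(K, y)
print(round(float(best), 2), round(float(pred), 3))
```

The grid search stands in for the gradient-based optimization one would normally use; the point is that the covariance parameters are themselves estimated from the data before any prediction is made.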

  9. Quantification of flash flood economic risk using ultra-detailed stage-damage functions and 2-D hydraulic models

    Science.gov (United States)

    Garrote, J.; Alvarenga, F. M.; Díez-Herrero, A.

    2016-10-01

    The village of Pajares de Pedraza (Segovia, Spain) is located in the floodplain of the Cega River, a left bank tributary of the Douro River. Repeated flash flood events occur in this small village because of its upstream catchment area, mountainous character and impermeable lithology, which reduce concentration time to just a few hours. River overbank flow has frequently caused flooding and property damage to homes and rural properties, most notably in 1927, 1991, 1996, 2001, 2013 and 2014. Consequently, a detailed analysis was carried out to quantify the economic risk of flash floods in peri-urban and rural areas. Magnitudes and exceedance probabilities were obtained from a flood frequency analysis of maximum discharges. To determine the extent and characteristics of the flooded area, we performed 2D hydraulic modeling (Iber 2.0 software) based on LIDAR (1 m) topography and considering three different scenarios associated with the initial construction (1997) and subsequent extension (2013) of a linear defense structure (rockfill dike or levee) to protect the population. Specific stage-damage functions were expressly developed using in situ data collection for exposed elements, with special emphasis on urban-type categories. The average number of elements and their unit value were established. The relationship between water depth and the height at which electric outlets, furniture, household goods, etc. were located was analyzed, owing to its effect on the form of the function. Other nonspecific magnitude-damage functions were used in order to compare both economic estimates. The results indicate that the use of non-specific magnitude-damage functions leads to a significant overestimation of economic losses, partly linked to the use of general economic cost data. Furthermore, a detailed classification and financial assessment of exposed assets is the most important step to ensure a correct estimate of financial losses. In both cases, this should include a
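    A stage-damage function of the kind described above maps water depth to a damage ratio of the exposed asset value. The sketch below uses piecewise-linear interpolation between surveyed depths; all depth/damage pairs are invented for illustration, not values from the Pajares de Pedraza study.

```python
# Hypothetical depth-damage curve: damage ratio of asset value vs. water depth (m).
DEPTHS = [0.0, 0.5, 1.0, 2.0, 3.0]
DAMAGE = [0.0, 0.15, 0.40, 0.70, 0.85]

def damage_ratio(depth):
    """Piecewise-linear interpolation of the stage-damage function."""
    if depth <= DEPTHS[0]:
        return DAMAGE[0]
    if depth >= DEPTHS[-1]:
        return DAMAGE[-1]
    for (d0, r0), (d1, r1) in zip(zip(DEPTHS, DAMAGE),
                                  zip(DEPTHS[1:], DAMAGE[1:])):
        if d0 <= depth <= d1:
            return r0 + (r1 - r0) * (depth - d0) / (d1 - d0)

def expected_loss(depth, asset_value):
    """Economic loss for one exposed element at the given water depth."""
    return damage_ratio(depth) * asset_value

print(round(expected_loss(0.75, 100000.0), 2))
```

    In a full risk quantification, such curves would be evaluated per asset category and combined with the exceedance probabilities from the flood frequency analysis.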

  10. Input-output supervisor

    International Nuclear Information System (INIS)

    Dupuy, R.

    1970-01-01

    The input-output supervisor is the program which monitors the flow of information between core storage and the peripheral equipment of a computer. This work is composed of three parts: 1 - Study of a generalized input-output supervisor. With simple modifications it resembles most of the input-output supervisors now running on computers. 2 - Application of this theory to a magnetic drum. 3 - Hardware requirements for time-sharing. (author) [fr

  11. A nanobody:GFP bacterial platform that enables functional enzyme display and easy quantification of display capacity.

    Science.gov (United States)

    Wendel, Sofie; Fischer, Emil C; Martínez, Virginia; Seppälä, Susanna; Nørholm, Morten H H

    2016-05-03

    Bacterial surface display is an attractive technique for the production of cell-anchored, functional proteins and engineering of whole-cell catalysts. Although various outer membrane proteins have been used for surface display, an easy and versatile high-throughput-compatible assay for evaluating and developing surface display systems is missing. Using a single domain antibody (also called nanobody) with high affinity for green fluorescent protein (GFP), we constructed a system that allows for fast, fluorescence-based detection of displayed proteins. The outer membrane hybrid protein LppOmpA and the autotransporter C-IgAP exposed the nanobody on the surface of Escherichia coli with very different efficiency. Both anchors were capable of functionally displaying the enzyme Chitinase A as a fusion with the nanobody, and this considerably increased expression levels compared to displaying the nanobody alone. We used flow cytometry to analyse display capability on single-cell versus population level and found that the signal peptide of the anchor has great effect on display efficiency. We have developed an inexpensive and easy read-out assay for surface display using nanobody:GFP interactions. The assay is compatible with the most common fluorescence detection methods, including multi-well plate whole-cell fluorescence detection, SDS-PAGE in-gel fluorescence, microscopy and flow cytometry. We anticipate that the platform will facilitate future in-depth studies on the mechanism of protein transport to the surface of living cells, as well as the optimisation of applications in industrial biotech.

  12. Quantification of functional groups and modeling of their ionization behavior in the active layer of FT30 reverse osmosis membrane.

    Science.gov (United States)

    Coronell, Orlando; Mariñas, Benito J; Zhang, Xijing; Cahill, David G

    2008-07-15

    A new experimental approach was developed to measure the concentration of charged functional groups (FGs) in the active layer of thin-film composite reverse osmosis (RO) and nanofiltration (NF) membranes as a function of solution pH. FT30 RO membrane, with a fully aromatic polyamide (PA) active layer sandwiched between a polysulfone support and a coating layer, was used. The experiments consisted of saturating charged FGs with heavy ion probes, and determining the ion probe concentration by Rutherford backscattering spectrometry (RBS). Deprotonated carboxylic groups were saturated with Ag+, and protonated amine groups with WO4(2-). The ionization behavior of carboxylic and amine groups was modeled based on acid-base equilibrium theory. While the ionization behavior of amine groups was satisfactorily described by one dissociation constant (pKa = 4.74), two pKa values (5.23 and 8.97) were necessary to describe the titration curve of carboxylic groups. These results were consistent with the bimodal pore size distribution (PSD) of FT30 active layer reported in the literature. The calculated total concentrations of carboxylic and amine groups in the active layer of the FT30 RO membrane studied were 0.432 and 0.036 M, respectively, and the isoelectric point (IEP) was 4.7. The total concentration of carboxylic and amine groups revealed that the degree of cross-linking of the PA active layer of the FT30 RO membrane studied was 94%.
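    The acid-base equilibrium model can be sketched with the Henderson-Hasselbalch relation. The pKa values below are those reported in the abstract; the equal weighting of the two carboxylic populations is an assumption for illustration, not a value from the paper.

```python
def frac_deprotonated(pH, pKa):
    """Henderson-Hasselbalch: fraction of an acid group that is ionized."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

def carboxylate_fraction(pH, w1=0.5):
    """Ionized fraction of carboxylic groups modeled with two apparent pKa
    values (5.23 and 8.97); the 50/50 split between the two populations is
    an illustrative assumption."""
    return w1 * frac_deprotonated(pH, 5.23) + (1.0 - w1) * frac_deprotonated(pH, 8.97)

# At pH = pKa, exactly half of the groups are ionized.
print(round(frac_deprotonated(4.74, 4.74), 2))  # → 0.5
```

    The same relation (with the sign of the exponent flipped) gives the protonated fraction of the amine groups, so the titration curves of both group types follow from the fitted pKa values alone.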

  13. Quantification of Right and Left Ventricular Function in Cardiac MR Imaging: Comparison of Semiautomatic and Manual Segmentation Algorithms

    Directory of Open Access Journals (Sweden)

    Jose Martin Carreira

    2013-04-01

    The purpose of this study was to evaluate the performance of a semiautomatic segmentation method for the anatomical and functional assessment of both ventricles from cardiac cine magnetic resonance (MR) examinations, reducing user interaction to a “mouse-click”. Fifty-two patients with cardiovascular diseases were examined using a 1.5-T MR imaging unit. Several parameters of both ventricles, such as end-diastolic volume (EDV), end-systolic volume (ESV) and ejection fraction (EF), were quantified by an experienced operator using the conventional method based on manually-defined contours, as the standard of reference; and a novel semiautomatic segmentation method based on edge detection, iterative thresholding and region growing techniques, for evaluation purposes. No statistically significant differences were found between the two measurement values obtained for each parameter (p > 0.05). Correlation to estimate right ventricular function was good (r > 0.8) and turned out to be excellent (r > 0.9) for the left ventricle (LV). Bland-Altman plots revealed acceptable limits of agreement between the two methods (95%). Our study findings indicate that the proposed technique allows a fast and accurate assessment of both ventricles. However, further improvements are needed to equal results achieved for the right ventricle (RV) using the conventional methodology.
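    The Bland-Altman limits of agreement used above for method comparison can be computed in a few lines: the bias is the mean of the paired differences and the 95% limits are bias ± 1.96 standard deviations. The EF values below are synthetic, not data from the study.

```python
import numpy as np

# Synthetic paired EF measurements (%): manual vs. semiautomatic segmentation.
manual = np.array([55.0, 60.0, 48.0, 62.0, 51.0, 58.0])
semi   = np.array([54.0, 61.0, 47.0, 63.0, 52.0, 57.0])

diff = semi - manual
bias = diff.mean()                       # systematic difference between methods
sd = diff.std(ddof=1)                    # sample SD of the differences
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits of agreement
print(round(float(bias), 2), round(float(loa_low), 2), round(float(loa_high), 2))
```

    If most differences fall within these limits and the limits are clinically acceptable, the two methods can be considered interchangeable for that parameter.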

  14. A nanobody:GFP bacterial platform that enables functional enzyme display and easy quantification of display capacity

    DEFF Research Database (Denmark)

    Wendel, Sofie; Christian Fischer, Emil; Martinez, Virginia

    2016-01-01

    Background: Bacterial surface display is an attractive technique for the production of cell-anchored, functional proteins and engineering of whole-cell catalysts. Although various outer membrane proteins have been used for surface display, an easy and versatile high-throughput-compatible assay...... to displaying the nanobody alone. We used flow cytometry to analyse display capability on single-cell versus population level and found that the signal peptide of the anchor has great effect on display efficiency.Conclusions: We have developed an inexpensive and easy read-out assay for surface display using...... nanobody: GFP interactions. The assay is compatible with the most common fluorescence detection methods, including multi-well plate whole-cell fluorescence detection, SDS-PAGE in-gel fluorescence, microscopy and flow cytometry. We anticipate that the platform will facilitate future in-depth studies...

  15. Upper Limb Evaluation in Duchenne Muscular Dystrophy: Fat-Water Quantification by MRI, Muscle Force and Function Define Endpoints for Clinical Trials.

    Directory of Open Access Journals (Sweden)

    Valeria Ricotti

    A number of promising experimental therapies for Duchenne muscular dystrophy (DMD) are emerging. Clinical trials currently rely on invasive biopsies or motivation-dependent functional tests to assess outcome. Quantitative muscle magnetic resonance imaging (MRI) could offer a valuable alternative and permit inclusion of non-ambulant DMD subjects. The aims of our study were to explore the responsiveness of upper-limb MRI muscle-fat measurement as a non-invasive objective endpoint for clinical trials in non-ambulant DMD, and to investigate the relationship of these MRI measures to those of muscle force and function. 15 non-ambulant DMD boys (mean age 13.3 y) and 10 age- and gender-matched healthy controls (mean age 14.6 y) were recruited. 3-Tesla MRI fat-water quantification was used to measure forearm muscle fat transformation in non-ambulant DMD boys compared with healthy controls. DMD boys were assessed at 4 time-points over 12 months, using 3-point Dixon MRI to measure muscle fat-fraction (f.f.). Images from ten forearm muscles were segmented and mean f.f. and cross-sectional area recorded. DMD subjects also underwent comprehensive upper limb function and force evaluation. Overall mean baseline forearm f.f. was higher in DMD than in healthy controls (p<0.001). A progressive f.f. increase was observed in DMD over 12 months, reaching significance from 6 months (p<0.001, n = 7), accompanied by a significant loss in pinch strength at 6 months (p<0.001, n = 9) and a loss of upper limb function and grip force observed over 12 months (p<0.001, n = 8). These results support the use of MRI muscle f.f. as a biomarker to monitor disease progression in the upper limb in non-ambulant DMD, with sensitivity adequate to detect group-level change over time intervals practical for use in clinical trials. Clinical validity is supported by the association of the progressive fat transformation of muscle with loss of muscle force and function.

  16. SSYST-3. Input description

    International Nuclear Information System (INIS)

    Meyder, R.

    1983-12-01

    The code system SSYST-3 is designed to analyse the thermal and mechanical behaviour of a fuel rod during a LOCA. The report contains a complete input-list for all modules and several tested inputs for a LOCA analysis. (orig.)

  17. MDS MIC Catalog Inputs

    Science.gov (United States)

    Johnson-Throop, Kathy A.; Vowell, C. W.; Smith, Byron; Darcy, Jeannette

    2006-01-01

    This viewgraph presentation reviews the inputs to the MDS Medical Information Communique (MIC) catalog. The purpose of the group is to provide input for updating the MDS MIC Catalog and to request that MMOP assign Action Items to other working groups and FSs to support the MITWG Process for developing MIC-DDs.

  18. Quantification of Kinetic Rate Law Parameters of Uranium Release from Sodium Autunite as a Function of Aqueous Bicarbonate Concentrations

    Energy Technology Data Exchange (ETDEWEB)

    Gudavalli, Ravi; Katsenovich, Yelena; Wellman, Dawn M.; Lagos, Leonel; Tansel, Berrin

    2013-09-05

    Hydrogen carbonate is one of the most significant components within the uranium geochemical cycle. In aqueous solutions, hydrogen carbonate forms strong complexes with uranium. As such, aqueous bicarbonate may significantly increase the rate of uranium release from uranium minerals. Quantifying the relationship of aqueous hydrogen carbonate solutions to the rate of uranium release during dissolution is critical to understanding the long-term fate of uranium within the environment. Single-pass flow-through (SPFT) experiments were conducted to estimate the rate of uranium release from Na meta-autunite as a function of bicarbonate concentration (0.0005-0.003 M) under the pH range of 6-11 and temperatures of 5-60°C. Consistent with the results of previous investigations, the rate of uranium release from sodium autunite exhibited minimal dependency on temperature, but was strongly dependent on pH and on increasing concentrations of bicarbonate. Most notably, at pH 7 the rate of uranium release exhibited a 370-fold increase relative to the rate of uranium release in the absence of bicarbonate. However, the effect of increasing bicarbonate concentrations on the release of uranium was significantly smaller under higher pH conditions. It is postulated that at high pH values, surface sites are saturated with carbonate, so the addition of more bicarbonate has less effect on uranium release. Results indicate that the activation energies were unaffected by temperature and bicarbonate concentration variations, but were strongly dependent on pH conditions. As pH increased from 6 to 11, activation energy values were observed to decrease from 29.94 kJ mol-1 to 13.07 kJ mol-1. The calculated activation energies suggest a surface-controlled dissolution mechanism.
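    An activation energy like those reported above can be estimated from rates at two temperatures with the two-point Arrhenius relation. The sketch below round-trips the pH 6 value of 29.94 kJ mol-1; the rate r1 is an arbitrary synthetic number, not SPFT data.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def activation_energy(r1, T1, r2, T2):
    """Two-point Arrhenius estimate: Ea = R * ln(r2/r1) / (1/T1 - 1/T2)."""
    return R * math.log(r2 / r1) / (1.0 / T1 - 1.0 / T2)

# Round-trip check: generate r2 from an assumed Ea, then recover Ea.
Ea = 29.94e3                  # J/mol, the value reported at pH 6
T1, T2 = 278.15, 333.15       # 5 and 60 degrees C, in kelvin
r1 = 1.0e-9                   # synthetic dissolution rate at T1
r2 = r1 * math.exp(-Ea / R * (1.0 / T2 - 1.0 / T1))
print(round(activation_energy(r1, T1, r2, T2) / 1000.0, 2))  # → 29.94
```

    The weak temperature dependence reported in the abstract corresponds to a small r2/r1 ratio over this temperature span, which is what yields the low activation energies.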

  19. Quantification of landfill methane using modified Intergovernmental Panel on Climate Change's waste model and error function analysis.

    Science.gov (United States)

    Govindan, Siva Shangari; Agamuthu, P

    2014-10-01

    Waste management can be regarded as a cross-cutting environmental 'mega-issue'. Sound waste management practices support the provision of basic needs for general health, such as clean air, clean water and a safe supply of food. In addition, climate change mitigation efforts can be achieved through reduction of greenhouse gas emissions from waste management operations, such as landfills. Landfills generate landfill gas, especially methane, as a result of anaerobic degradation of the degradable components of municipal solid waste. Evaluating the mode of generation and collection of landfill gas has posed a challenge over time. Scientifically, landfill gas generation rates are presently estimated using numerical models. In this study the Intergovernmental Panel on Climate Change's Waste Model is used to estimate the methane generated from a Malaysian sanitary landfill. Key parameters of the model, which are the decay rate and degradable organic carbon, are analysed in two different approaches: the bulk waste approach and the waste composition approach. The model is then validated using error function analysis, and the optimum decay rate and degradable organic carbon for both approaches are obtained. The best fitting values for the bulk waste approach are a decay rate of 0.08 y(-1) and a degradable organic carbon value of 0.12; for the waste composition approach the decay rate was found to be 0.09 y(-1) and the degradable organic carbon value 0.08. From this validation exercise, the estimated error was reduced by 81% and 69% for the bulk waste and waste composition approach, respectively. In conclusion, this type of modelling could constitute a sensible starting point for landfills to introduce careful planning for efficient gas recovery in individual landfills. © The Author(s) 2014.
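    The IPCC waste model is built on first-order decay (FOD): methane generated in year t from waste deposited in year i decays as k·exp(-k·(t-i)). A minimal sketch of that idea follows; the decay rate 0.08 y(-1) matches the bulk-waste best fit above, while the deposit tonnages and the per-tonne methane potential L0 are invented for illustration.

```python
import math

def methane_generated(deposits, k, L0, year):
    """First-order-decay methane generation in a given year.

    deposits: {deposit_year: tonnes of waste landfilled that year}
    k:        decay rate (1/y); L0: methane potential per tonne of waste.
    """
    return sum(mass * L0 * k * math.exp(-k * (year - i))
               for i, mass in deposits.items() if i <= year)

# Three illustrative deposit years; generation in year 2 sums contributions
# from all waste in place, each discounted by its age.
deposits = {0: 1000.0, 1: 1200.0, 2: 900.0}
print(round(methane_generated(deposits, k=0.08, L0=0.05, year=2), 3))
```

    Error function analysis as in the study would then compare such modelled values against measured gas recovery and search over k and degradable organic carbon for the best fit.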

  20. PLEXOS Input Data Generator

    Energy Technology Data Exchange (ETDEWEB)

    2017-02-01

    The PLEXOS Input Data Generator (PIDG) is a tool that enables PLEXOS users to better version their data, automate data processing, collaborate in developing inputs, and transfer data between different production cost modeling and other power systems analysis software. PIDG can process data that is in a generalized format from multiple input sources, including CSV files, PostgreSQL databases, and PSS/E .raw files and write it to an Excel file that can be imported into PLEXOS with only limited manual intervention.

  1. ColloInputGenerator

    DEFF Research Database (Denmark)

    2013-01-01

    This is a very simple program to help you put together input files for use in Gries' (2007) R-based collostruction analysis program. It basically puts together a text file with a frequency list of lexemes in the construction and inserts a column where you can add the corpus frequencies. It requires...... it as input for basic collexeme collostructional analysis (Stefanowitsch & Gries 2003) in Gries' (2007) program. ColloInputGenerator is, in its current state, based on programming commands introduced in Gries (2009). Projected updates: Generation of complete work-ready frequency lists....

  2. Development of Quantification Method for Bioluminescence Imaging

    International Nuclear Information System (INIS)

    Kim, Hyeon Sik; Min, Jung Joon; Lee, Byeong Il; Choi, Eun Seo; Tak, Yoon O; Choi, Heung Kook; Lee, Ju Young

    2009-01-01

    Optical molecular luminescence imaging is widely used for detection and imaging of bio-photons emitted by luminescent luciferase activation. The photons measured in this method indicate the degree of molecular alteration or cell numbers, with the advantage of a high signal-to-noise ratio. To extract useful information from the measured results, analysis based on a proper quantification method is necessary. In this research, we propose a quantification method presenting a linear response of the measured light signal to measurement time. We detected the luminescence signal using lab-made optical imaging equipment, the animal light imaging system (ALIS), and two different kinds of light sources. One is three bacterial light-emitting sources containing different numbers of bacteria. The other is three different non-bacterial light sources emitting very weak light. By using the concepts of the candela and the flux, we could derive a simplified linear quantification formula. After experimentally measuring light intensity, the data were processed with the proposed quantification function. We obtained a linear response of photon counts to measurement time by applying the pre-determined quantification function. The ratio of the re-calculated photon counts to measurement time presented a constant value even though different light sources were applied. The quantification function for linear response could be applicable to the standard quantification process. The proposed method could be used for exact quantitative analysis in various light imaging equipment exhibiting a linear response of constant light-emitting sources to measurement time.
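    The linearity property at the heart of the method can be stated very simply: for a constant source, total photon counts grow linearly with measurement time, so counts divided by time is a constant independent of the integration window. The rate value below is synthetic, not from the paper.

```python
# Constant-source model: counts(t) = rate * t, so counts(t) / t == rate
# for every measurement time (the assumed rate is purely illustrative).
rate = 1500.0  # photons per second from a constant source

def counts(measurement_time):
    """Total photon counts accumulated over the measurement time."""
    return rate * measurement_time

# The ratio is the same whether we integrate for 1 s, 5 s, or 30 s.
ratios = {t: counts(t) / t for t in (1.0, 5.0, 30.0)}
print(sorted(set(ratios.values())))  # → [1500.0]
```

    In the actual experiment, the quantification function is what maps raw measured intensity onto counts that behave this way; deviations of the ratio from a constant would indicate a nonlinear detector response.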

  3. Automation of RELAP5 input calibration and code validation using genetic algorithm

    International Nuclear Information System (INIS)

    Phung, Viet-Anh; Kööp, Kaspar; Grishchenko, Dmitry; Vorobyev, Yury; Kudinov, Pavel

    2016-01-01

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the

  4. Automation of RELAP5 input calibration and code validation using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Phung, Viet-Anh, E-mail: vaphung@kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Kööp, Kaspar, E-mail: kaspar@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Grishchenko, Dmitry, E-mail: dmitry@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Vorobyev, Yury, E-mail: yura3510@gmail.com [National Research Center “Kurchatov Institute”, Kurchatov square 1, Moscow 123182 (Russian Federation); Kudinov, Pavel, E-mail: pavel@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden)

    2016-04-15

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the
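    The calibration loop described in these two records can be sketched as a toy genetic algorithm: two uncertain input parameters of a stand-in model are tuned so that two simulated system response quantities (SRQs) match "experimental" values, with fitness defined as a sum of squared normalized discrepancies. The model, target values, and GA settings below are all invented; this is not the RELAP5 setup.

```python
import random

random.seed(1)
EXP_SRQ = (7.0, 10.0)          # synthetic "measurements" (true inputs: a=2, b=5)

def model(a, b):
    """Stand-in for the thermal-hydraulic code: returns two SRQs."""
    return (a + b, a * b)

def fitness(ind):
    """Sum of squared normalized discrepancies over the SRQs (lower is better)."""
    srq = model(*ind)
    return sum(((srq[i] - EXP_SRQ[i]) / EXP_SRQ[i]) ** 2 for i in range(2))

# Elitist GA: keep the 10 best, refill the population with Gaussian mutants.
pop = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(40)]
for _ in range(60):
    pop.sort(key=fitness)
    parents = pop[:10]
    pop = parents + [(max(0.0, a + random.gauss(0, 0.2)),
                      max(0.0, b + random.gauss(0, 0.2)))
                     for a, b in parents for _ in range(3)]
best = min(pop, key=fitness)
print(round(fitness(best), 4))
```

    The normalization inside `fitness` plays the role of the weighting factors the abstract emphasizes: without it, the SRQ with the larger magnitude would dominate the calibration.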

  5. Input description for BIOPATH

    International Nuclear Information System (INIS)

    Marklund, J.E.; Bergstroem, U.; Edlund, O.

    1980-01-01

    The computer program BIOPATH describes the flow of radioactivity within a given ecosystem after a postulated release of radioactive material and the resulting dose for specified population groups. The present report describes the input data necessary to run BIOPATH. The report also contains descriptions of possible control cards and an input example, as well as a short summary of the basic theory. (author)

  6. Input and execution

    International Nuclear Information System (INIS)

    Carr, S.; Lane, G.; Rowling, G.

    1986-11-01

    This document describes the input procedures, input data files and operating instructions for the SYVAC A/C 1.03 computer program. SYVAC A/C 1.03 simulates the groundwater mediated movement of radionuclides from underground facilities for the disposal of low and intermediate level wastes to the accessible environment, and provides an estimate of the subsequent radiological risk to man. (author)

  7. Gestures and multimodal input

    OpenAIRE

    Keates, Simeon; Robinson, Peter

    1999-01-01

    For users with motion impairments, the standard keyboard and mouse arrangement for computer access often presents problems. Other approaches have to be adopted to overcome this. In this paper, we will describe the development of a prototype multimodal input system based on two gestural input channels. Results from extensive user trials of this system are presented. These trials showed that the physical and cognitive loads on the user can quickly become excessive and detrimental to the interac...

  8. Impacts of Sensation, Perception, and Motor Abilities of the Ipsilesional Upper Limb on Hand Functions in Unilateral Stroke: Quantifications From Biomechanical and Functional Perspectives.

    Science.gov (United States)

    Hsu, Hsiu-Yun; Ke, Chia-Wen; Kuan, Ta-Shen; Yang, Hsiu-Ching; Tsai, Ching-Liang; Kuo, Li-Chieh

    2018-02-01

    The presence of subtle losses in hand dexterity after stroke affects the regaining of independence with regard to activities of daily living. Therefore, awareness of ipsilesional upper extremity (UE) function may be of importance when developing a comprehensive rehabilitation program. However, current hand function tests seem to be unable to identify asymptomatic UE impairments. To assess the motor coordination as well as the sensory perception of an ipsilesional UE using biomechanical analysis of performance-oriented tasks and conducting a Manual Tactile Test (MTT). Case-controlled study. A university hospital. A total of 21 patients with unilateral stroke, along with 21 matched healthy control subjects, were recruited. Each participant was requested to perform a pinch-holding-up activity (PHUA) test, object-transport task, and reach-to-grasp task via motion capture, as well as the MTT. The kinetic data of the PHUA test, kinematics analysis of functional movements, and time requirement of MTT were analyzed. Patients had an inferior ability to scale and produce pinch force precisely with the ipsilesional hand when conducting the PHUA test compared to the healthy controls, and also showed poorer sensory perception, indicating subtle impairment across the sensation-perception-motor system in the ipsilesional UE. Integration of sensorimotor training programs for ipsilesional UE in future neuro-rehabilitation strategies may provide more beneficial effects to regain patients' motor recovery and to promote daily living activity independence than focusing on paretic arm motor training alone. III. Copyright © 2018 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.

  9. Hypothesis: Low frequency heart rate variability (LF-HRV) is an input for undisclosed yet biological adaptive control, governing the cardiovascular regulations to assure optimal functioning.

    Science.gov (United States)

    Gabbay, Uri; Bobrovsky, Ben Zion

    2012-02-01

    Cardiovascular regulation is considered today as having three levels: autoregulation, neural regulation and hormonal regulation. We hypothesize that cardiovascular regulation has an additional (fourth) control level, an outer, hierarchical (adaptive) loop in which LF-HRV amplitude serves as a reference input that the neural cardiovascular center detects and responds to in order to maintain LF-HRV around some prescribed level. Supporting evidence: the absence of LF-HRV during artificial cardiac pacing may be associated with "pacemaker syndrome", which has not been sufficiently understood despite apparently unimpaired cardiovascular performance. The hypothesis may provide an essential basis for understanding several cardiovascular morbidities and insight toward diagnostic measures and treatments (including, but not limited to, adding variability to the pulse generator of artificial pacemakers to eliminate "pacemaker syndrome"). Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model

    Science.gov (United States)

    Nikbay, Melike; Heeg, Jennifer

    2017-01-01

    This paper originates from the joint efforts of an aeroelastic study team in the Applied Vehicle Technology Panel of the NATO Science and Technology Organization, Task Group AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design." We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport (S4T) wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided to treat both structural and aerodynamic input parameters as uncertain and to represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to assess the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time requirement relative to that required to assess a system deterministically. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework, and implemented reduced order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.
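    The sampling-based propagation described above can be illustrated with plain Monte Carlo: draw uncertain inputs from assumed distributions, push each sample through a cheap surrogate of the response, and summarize the spread of the output. The surrogate, distributions, and parameter names below are invented stand-ins, not the S4T models.

```python
import random
import statistics

random.seed(0)

def flutter_speed(stiffness, dyn_pressure):
    """Illustrative surrogate response, not a real aeroelastic model."""
    return 100.0 * (stiffness / dyn_pressure) ** 0.5

# Assumed input distributions: normalized stiffness ~ N(1.0, 0.05),
# normalized dynamic pressure ~ N(0.8, 0.04).
samples = [flutter_speed(random.gauss(1.0, 0.05), random.gauss(0.8, 0.04))
           for _ in range(5000)]
mean = statistics.fmean(samples)
sd = statistics.pstdev(samples)
print(round(mean, 1), round(sd, 1))
```

    Reduced-order models such as Polynomial Chaos Expansion replace the thousands of surrogate evaluations here with a small number of carefully chosen solver runs, which is what makes the approach tractable with high-fidelity analyses.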

  11. Microbial Communities Are Well Adapted to Disturbances in Energy Input.

    Science.gov (United States)

    Fernandez-Gonzalez, Nuria; Huber, Julie A; Vallino, Joseph J

    2016-01-01

    Although microbial systems are well suited for studying concepts in ecological theory, little is known about how microbial communities respond to long-term periodic perturbations beyond diel oscillations. Taking advantage of an ongoing microcosm experiment, we studied how methanotrophic microbial communities adapted to disturbances in energy input over a 20-day cycle period. Sequencing of bacterial 16S rRNA genes together with quantification of microbial abundance and ecosystem function were used to explore the long-term dynamics (510 days) of methanotrophic communities under continuous versus cyclic chemical energy supply. We observed that microbial communities appeared inherently well adapted to disturbances in energy input and that changes in community structure in both treatments were more dependent on internal dynamics than on external forcing. The results also showed that the rare biosphere was critical to seeding the internal community dynamics, perhaps due to cross-feeding or other strategies. We conclude that in our experimental system, internal feedbacks were more important than external drivers in shaping the community dynamics over time, suggesting that ecosystems can maintain their function despite inherently unstable community dynamics. IMPORTANCE Within the broader ecological context, biological communities are often viewed as stable and as only experiencing succession or replacement when subject to external perturbations, such as changes in food availability or the introduction of exotic species. Our findings indicate that microbial communities can exhibit strong internal dynamics that may be more important in shaping community succession than external drivers. Dynamic "unstable" communities may be important for ecosystem functional stability, with rare organisms playing an important role in community restructuring. Understanding the mechanisms responsible for internal community dynamics will certainly be required for understanding and manipulating

  12. A Method to Select Test Input Cases for Safety-critical Software

    International Nuclear Information System (INIS)

    Kim, Heeeun; Kang, Hyungook; Son, Hanseong

    2013-01-01

    This paper proposes a new testing methodology for effective and realistic quantification of RPS software failure probability. Software failure probability quantification is an important factor in digital system safety assessment. In this study, the method for software test case generation is briefly described. The test cases generated by this method reflect the characteristics of safety-critical software and of past inputs. Furthermore, the number of test cases can be reduced while still permitting exhaustive testing. Aspects of the software itself can also be reflected in the failure data, so the final failure data can include both failures of the software itself and external influences. Software reliability is generally accepted as the key factor in software quality, since it quantifies the software failures that can render a powerful system inoperative. In the KNITS (Korea Nuclear Instrumentation and Control Systems) project, the software for the fully digitalized reactor protection system (RPS) was developed under a strict procedure including unit testing and coverage measurement. Black box testing is one type of verification and validation (V&V), in which given input values are entered and the resulting output values are compared against the expected output values. Programmable logic controllers (PLCs) were used in implementing critical systems, and function block diagram (FBD) is a commonly used implementation language for PLCs

  13. Presynaptic muscarinic receptors, calcium channels, and protein kinase C modulate the functional disconnection of weak inputs at polyinnervated neonatal neuromuscular synapses.

    Science.gov (United States)

    Santafe, M M; Garcia, N; Lanuza, M A; Tomàs, M; Besalduch, N; Tomàs, J

    2009-04-01

    We studied the relation among calcium inflows, voltage-dependent calcium channels (VDCC), presynaptic muscarinic acetylcholine receptors (mAChRs), and protein kinase C (PKC) activity in the modulation of synapse elimination. We used intracellular recording to determine the synaptic efficacy in dually innervated endplates of the levator auris longus muscle of newborn rats during axonal competition in the postnatal synaptic elimination period. In these dual junctions, the weak nerve terminal was potentiated by partially reducing calcium entry (P/Q-, N-, or L-type VDCC-specific block or 500 μM magnesium ions), by M1- or M4-type selective mAChR block, or by PKC block. Moreover, reducing calcium entry or blocking PKC or mAChRs resulted in the unmasking of functionally silent nerve endings, which then recovered neurotransmitter release. Our results show interactions between these molecules and indicate that there is a release inhibition mechanism based on an mAChR-PKC-VDCC intracellular cascade. When it is fully active in certain weak motor axons, it can depress ACh release and even disconnect synapses. We suggest that this mechanism plays a central role in the elimination of redundant neonatal synapses, because functional axonal withdrawal can indeed be reversed by mAChR, VDCC, or PKC block.

  14. Statistical identification of effective input variables

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1982-09-01

    A statistical sensitivity analysis procedure has been developed for ranking the input data of large computer codes in order of sensitivity-importance. The method is economical for large codes with many input variables, since it uses a relatively small number of computer runs. No prior judgemental elimination of input variables is needed. The screening method is based on stagewise correlation and extensive regression analysis of output values calculated with selected input value combinations. The regression process deals with multivariate nonlinear functions, and statistical tests are also available for identifying input variables that contribute to threshold effects, i.e., discontinuities in the output variables. A computer code, SCREEN, has been developed for implementing the screening techniques. Its efficiency has been demonstrated by several examples and applied to a fast reactor safety analysis code (Venus-II). However, the methods and the coding are general and not limited to such applications
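
The core screening idea in this record, ranking inputs by regressing sampled outputs on sampled inputs, can be illustrated with a minimal sketch. The function name `rank_inputs`, the toy model, and the use of plain standardized regression coefficients in place of SCREEN's stagewise procedure are illustrative assumptions, not the SCREEN code itself:

```python
import numpy as np

def rank_inputs(X, y):
    """Rank code input variables by standardized regression coefficients.

    X: (runs, variables) sampled input values; y: (runs,) code outputs.
    Returns variable indices sorted from most to least influential,
    plus the coefficients themselves.
    """
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize inputs
    ys = (y - y.mean()) / y.std()               # standardize output
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return np.argsort(-np.abs(beta)), beta

# Toy "code": output driven strongly by x0, weakly by x2, not at all by x1.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = 5.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0.0, 0.01, 200)
order, beta = rank_inputs(X, y)
```

With only 200 runs the ranking recovers the true ordering (x0, then x2, then the inert x1), which is the economy the abstract points to.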

  15. Dynamical Analysis of a Class of Prey-Predator Model with Beddington-DeAngelis Functional Response, Stochastic Perturbation, and Impulsive Toxicant Input

    Directory of Open Access Journals (Sweden)

    Feifei Bian

    2017-01-01

    Full Text Available A stochastic prey-predator system in a polluted environment with Beddington-DeAngelis functional response is proposed and analyzed. First, for the system with white-noise perturbation, sufficient conditions for the existence of boundary periodic solutions and positive periodic solutions are derived by analyzing the limit system. Then, for the stochastic system with Markov regime switching, sufficient conditions for the extinction or persistence of the system are obtained. Furthermore, we prove that the system is ergodic and has a stationary distribution when the concentration of toxicant is a positive constant. Finally, two examples with numerical simulations illustrate the theoretical results.
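
A minimal numerical sketch of this class of model: an Euler-Maruyama simulation of a prey-predator system with the standard Beddington-DeAngelis response f(x, y) = axy/(b + x + cy) and multiplicative white noise. All parameter values are illustrative placeholders, not those of the cited analysis:

```python
import numpy as np

def simulate(T=50.0, dt=1e-3, seed=1):
    """Euler-Maruyama simulation of a stochastic prey (x) / predator (y)
    model with Beddington-DeAngelis functional response."""
    rng = np.random.default_rng(seed)
    r, K = 1.0, 5.0          # prey growth rate and carrying capacity
    a, b, c = 1.0, 1.0, 0.5  # Beddington-DeAngelis parameters
    e, d = 0.6, 0.3          # conversion efficiency, predator death rate
    s1, s2 = 0.05, 0.05      # white-noise intensities
    x, y = 2.0, 1.0
    for _ in range(int(T / dt)):
        f = a * x * y / (b + x + c * y)          # functional response
        dW1, dW2 = rng.normal(0.0, np.sqrt(dt), 2)
        x += (r * x * (1.0 - x / K) - f) * dt + s1 * x * dW1
        y += (e * f - d * y) * dt + s2 * y * dW2
        x, y = max(x, 0.0), max(y, 0.0)          # keep populations nonnegative
    return x, y

x, y = simulate()   # final population sizes after T time units
```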

  16. Improved correlation between CT emphysema quantification and pulmonary function test by density correction of volumetric CT data based on air and aortic density

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Song Soo [Department of Radiology, Chungnam National University Hospital, Chungnam National University School of Medicine (Korea, Republic of); Seo, Joon Beom, E-mail: seojb@amc.seoul.kr [Department of Radiology, University of Ulsan College of Medicine, Asan Medical Center (Korea, Republic of); Kim, Namkug; Chae, Eun Jin [Department of Radiology, University of Ulsan College of Medicine, Asan Medical Center (Korea, Republic of); Lee, Young Kyung [Department of Radiology, Kyung Hee University Hospital at Gangdong (Korea, Republic of); Oh, Yeon Mok; Lee, Sang Do [Division of Pulmonology, Department of Internal Medicine, University of Ulsan College of Medicine, Asan Medical Center (Korea, Republic of)

    2014-01-15

    Objectives: To determine the improvement of emphysema quantification with density correction and to determine the optimal site to use for air density correction on volumetric computed tomography (CT). Methods: Seventy-eight CT scans of COPD patients (GOLD II–IV, smoking history 39.2 ± 25.3 pack-years) were obtained from several single-vendor 16-MDCT scanners. After density measurement of aorta, tracheal- and external air, volumetric CT density correction was conducted (reference values: air −1000 HU; blood +50 HU). Using in-house software, emphysema index (EI) and mean lung density (MLD) were calculated. Differences in air densities, MLD and EI prior to and after density correction were evaluated (paired t-test). Correlation between those parameters and FEV1 and FEV1/FVC were compared (age- and sex-adjusted partial correlation analysis). Results: Measured densities (HU) of tracheal- and external air differed significantly (−990 ± 14, −1016 ± 9, P < 0.001). MLD and EI on original CT data, after density correction using tracheal- and external air also differed significantly (MLD: −874.9 ± 27.6 vs. −882.3 ± 24.9 vs. −860.5 ± 26.6; EI: 16.8 ± 13.4 vs. 21.1 ± 14.5 vs. 9.7 ± 10.5, respectively, P < 0.001). The correlation coefficients between CT quantification indices and FEV1, and FEV1/FVC increased after density correction. The tracheal air correction showed better results than the external air correction. Conclusion: Density correction of volumetric CT data can improve correlations of emphysema quantification and PFT.
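
The density correction described here amounts to a linear rescaling of HU values so that the measured air and blood densities map onto the reference values, followed by recomputation of the emphysema indices. A sketch under that assumption (the function names and the common −950 HU emphysema threshold are illustrative; the paper's exact pipeline may differ):

```python
import numpy as np

AIR_REF, BLOOD_REF = -1000.0, 50.0   # reference densities (HU)

def correct_density(hu, air_measured, blood_measured):
    """Linearly rescale HU values so that the measured tracheal-air and
    aortic-blood densities map onto the reference values."""
    scale = (BLOOD_REF - AIR_REF) / (blood_measured - air_measured)
    return (hu - air_measured) * scale + AIR_REF

def emphysema_metrics(lung_hu, threshold=-950.0):
    """Emphysema index (% voxels at or below threshold) and mean lung density."""
    ei = 100.0 * float(np.mean(lung_hu <= threshold))
    mld = float(np.mean(lung_hu))
    return ei, mld

# Air measured at -990 HU maps back to -1000 HU; measured blood to +50 HU.
corrected = correct_density(np.array([-990.0, -960.0, 48.0]), -990.0, 48.0)
ei, mld = emphysema_metrics(corrected[:2])
```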

  17. Improved correlation between CT emphysema quantification and pulmonary function test by density correction of volumetric CT data based on air and aortic density

    International Nuclear Information System (INIS)

    Kim, Song Soo; Seo, Joon Beom; Kim, Namkug; Chae, Eun Jin; Lee, Young Kyung; Oh, Yeon Mok; Lee, Sang Do

    2014-01-01

    Objectives: To determine the improvement of emphysema quantification with density correction and to determine the optimal site to use for air density correction on volumetric computed tomography (CT). Methods: Seventy-eight CT scans of COPD patients (GOLD II–IV, smoking history 39.2 ± 25.3 pack-years) were obtained from several single-vendor 16-MDCT scanners. After density measurement of aorta, tracheal- and external air, volumetric CT density correction was conducted (reference values: air −1000 HU; blood +50 HU). Using in-house software, emphysema index (EI) and mean lung density (MLD) were calculated. Differences in air densities, MLD and EI prior to and after density correction were evaluated (paired t-test). Correlation between those parameters and FEV1 and FEV1/FVC were compared (age- and sex-adjusted partial correlation analysis). Results: Measured densities (HU) of tracheal- and external air differed significantly (−990 ± 14, −1016 ± 9, P < 0.001). MLD and EI on original CT data, after density correction using tracheal- and external air also differed significantly (MLD: −874.9 ± 27.6 vs. −882.3 ± 24.9 vs. −860.5 ± 26.6; EI: 16.8 ± 13.4 vs. 21.1 ± 14.5 vs. 9.7 ± 10.5, respectively, P < 0.001). The correlation coefficients between CT quantification indices and FEV1, and FEV1/FVC increased after density correction. The tracheal air correction showed better results than the external air correction. Conclusion: Density correction of volumetric CT data can improve correlations of emphysema quantification and PFT

  18. Quantification in emission tomography

    International Nuclear Information System (INIS)

    Buvat, Irene

    2011-11-01

    The objective of this lecture is to understand the possibilities and limitations of the quantitative analysis of single photon emission computed tomography (SPECT) and positron emission tomography (PET) images. It is also to identify the conditions to be fulfilled to obtain reliable quantitative measurements from images. Content: 1 - Introduction: Quantification in emission tomography - definition and challenges; quantification biasing phenomena 2 - Main problems impacting quantification in PET and SPECT: problems, consequences, correction methods, results (Attenuation, scattering, partial volume effect, movement, un-stationary spatial resolution in SPECT, fortuitous coincidences in PET, standardisation in PET); 3 - Synthesis: accessible efficiency, know-how, Precautions, beyond the activity measurement

  19. FLUTAN input specifications

    International Nuclear Information System (INIS)

    Borgwaldt, H.; Baumann, W.; Willerding, G.

    1991-05-01

    FLUTAN is a highly vectorized computer code for 3-D fluid-dynamic and thermal-hydraulic analyses in Cartesian and cylinder coordinates. It is related to the family of COMMIX codes originally developed at Argonne National Laboratory, USA. To a large extent, FLUTAN relies on basic concepts and structures imported from COMMIX-1B and COMMIX-2, which were made available to KfK in the frame of cooperation contracts in the fast reactor safety field. While on the one hand not all features of the original COMMIX versions have been implemented in FLUTAN, the code on the other hand includes some essential innovative options: the CRESOR solution algorithm, a general 3-dimensional rebalancing scheme for solving the pressure equation, and LECUSSO-QUICK-FRAM techniques suitable for reducing 'numerical diffusion' in both the enthalpy and momentum equations. This report provides users with detailed input instructions, presents formulations of the various model options, and explains by means of a comprehensive sample input how to use the code. (orig.) [de

  20. Input or intimacy

    Directory of Open Access Journals (Sweden)

    Judit Navracsics

    2014-01-01

    Full Text Available According to the critical period hypothesis, the earlier the acquisition of a second language starts, the better. Owing to the plasticity of the brain, on this view a second language can be acquired successfully up until a certain age. Early second language learners are commonly said to have an advantage over later ones, especially in phonetic/phonological acquisition, and native-like pronunciation is said to be most likely to be achieved by young learners. However, there is evidence of accent-free speech in second languages learnt after puberty as well. Occasionally, on the other hand, a non-native accent may appear even in early second (or third) language acquisition. Cross-linguistic influences are natural in multilingual development, and we would expect the dominant language to have an impact on the weaker one(s). The dominant language is usually the one that provides the largest amount of input for the child. But is it always the amount that counts? Perhaps sometimes other factors, such as emotions, come into play? In this paper, data obtained from an English-Persian-Hungarian trilingual pair of siblings (under ages 4 and 3, respectively) is analyzed, with a special focus on cross-linguistic influences at the phonetic/phonological levels. It will be shown that beyond the amount of input there are more important factors that trigger interference in multilingual development.

  1. Measuring Input Thresholds on an Existing Board

    Science.gov (United States)

    Kuperman, Igor; Gutrich, Daniel G.; Berkun, Andrew C.

    2011-01-01

    A critical PECL (positive emitter-coupled logic) to Xilinx interface needed to be changed on an existing flight board. The new Xilinx input interface used a CMOS (complementary metal-oxide semiconductor) type of input, and the driver could meet its thresholds typically, but not in the worst case, according to the data sheet. The previous interface had been based on comparison with an external reference, but the CMOS input is based on comparison with an internal divider from the power supply. A way was needed to measure the exact input threshold of this device for 64 inputs on a flight board. The measurement technique allowed an accurate measurement of the voltage required to switch a Xilinx input from high to low for each of the 64 lines, while only probing two of them. Directly driving an external voltage was considered too risky, and tests done on any other unit could not be used to qualify the flight board. The two lines directly probed gave an absolute voltage threshold calibration, while data collected on the remaining 62 lines without probing gave relative measurements that could be used to identify any outliers. The PECL interface was forced to a long-period square wave by driving a saturated square wave into the ADC (analog to digital converter). The active pull-down circuit was turned off, causing each line to rise rapidly and fall slowly according to the input's weak pull-down circuitry. The fall time shows up as a change in the pulse width of the signal read by the Xilinx. This change in pulse width is a function of capacitance, pull-down current, and input threshold. Capacitance was known from the different trace lengths, plus a gate input capacitance, which is the same for all inputs. The pull-down current is the same for all inputs, including the two that are probed directly. The data was combined, and the Excel solver tool was used to find input thresholds for the 62 lines. This was repeated over different supply voltages and
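
If the weak pull-down behaves as a roughly constant current source, the line ramps down linearly and the observed fall time maps directly to the switching threshold. A hypothetical sketch of that relation (the component values below are invented for illustration, not taken from the flight board):

```python
def input_threshold(v_high, i_pulldown, capacitance, fall_time):
    """Switching threshold of a CMOS input under a linear ramp-down.

    A constant pull-down current I discharges capacitance C from v_high,
    so the line voltage after t seconds is v_high - I*t/C; the fall time
    inferred from the pulse-width change gives the crossing voltage.
    """
    return v_high - i_pulldown * fall_time / capacitance

# Hypothetical values: 3.3 V rail, 10 uA pull-down, 20 pF line, 2.4 us fall
vth = input_threshold(3.3, 10e-6, 20e-12, 2.4e-6)
```

Solving this relation simultaneously for all 64 lines, with shared current and per-line capacitance, is what the Excel-solver step in the abstract performs.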

  2. Quantification of local mobilities

    DEFF Research Database (Denmark)

    Zhang, Y. B.

    2018-01-01

    A new method for quantification of mobilities of local recrystallization boundary segments is presented. The quantification is based on microstructures characterized using electron microscopy and on determination of migration velocities and driving forces for local boundary segments. Pure aluminium...... is investigated and the results show that even for a single recrystallization boundary, different boundary segments migrate differently, and the differences can be understood based on variations in mobilities and local deformed microstructures. The present work has important implications for understanding...

  3. GAROS input deck description

    Energy Technology Data Exchange (ETDEWEB)

    Vollan, A.; Soederberg, M. (Aeronautical Research Inst. of Sweden, Bromma (Sweden))

    1989-01-01

    This report describes the input for the programs GAROS1 and GAROS2, version 5.8 and later, February 1988. The GAROS system, developed by Arne Vollan, Omega GmbH, is used for the analysis of the mechanical and aeroelastic properties for general rotating systems. It has been specially designed to meet the requirements of aeroelastic stability and dynamic response of horizontal axis wind energy converters. Some of the special characteristics are: * The rotor may have one or more blades. * The blades may be rigidly attached to the hub, or they may be fully articulated. * The full elastic properties of the blades, the hub, the machine house and the tower are taken into account. * With the same basic model, a number of different analyses can be performed: Snap-shot analysis, Floquet method, transient response analysis, frequency response analysis etc.

  4. Access to Research Inputs

    DEFF Research Database (Denmark)

    Czarnitzki, Dirk; Grimpe, Christoph; Pellens, Maikel

    2015-01-01

    The viability of modern open science norms and practices depends on public disclosure of new knowledge, methods, and materials. However, increasing industry funding of research can restrict the dissemination of results and materials. We show, through a survey sample of 837 German scientists in life...... sciences, natural sciences, engineering, and social sciences, that scientists who receive industry funding are twice as likely to deny requests for research inputs as those who do not. Receiving external funding in general does not affect denying others access. Scientists who receive external funding...... of any kind are, however, 50 % more likely to be denied access to research materials by others, but this is not affected by being funded specifically by industry...

  5. Access to Research Inputs

    DEFF Research Database (Denmark)

    Czarnitzki, Dirk; Grimpe, Christoph; Pellens, Maikel

    The viability of modern open science norms and practices depends on public disclosure of new knowledge, methods, and materials. However, increasing industry funding of research can restrict the dissemination of results and materials. We show, through a survey sample of 837 German scientists in life...... sciences, natural sciences, engineering, and social sciences, that scientists who receive industry funding are twice as likely to deny requests for research inputs as those who do not. Receiving external funding in general does not affect denying others access. Scientists who receive external funding...... of any kind are, however, 50% more likely to be denied access to research materials by others, but this is not affected by being funded specifically by industry....

  6. Modeling and generating input processes

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, M.E.

    1987-01-01

    This tutorial paper provides information relevant to the selection and generation of stochastic inputs to simulation studies. The primary area considered is multivariate but much of the philosophy at least is relevant to univariate inputs as well. 14 refs.

  7. Reprocessing input data validation

    International Nuclear Information System (INIS)

    Persiani, P.J.; Bucher, R.G.; Pond, R.B.; Cornella, R.J.

    1990-01-01

    The Isotope Correlation Technique (ICT), in conjunction with the gravimetric (Pu/U ratio) method for mass determination, provides an independent verification of the input accountancy at the dissolver or accountancy stage of the reprocessing plant. The Isotope Correlation Technique has been applied to many classes of domestic and international reactor systems (light-water, heavy-water, graphite, and liquid-metal) operating in a variety of modes (power, research, production, and breeder), and for a variety of reprocessing fuel cycle management strategies. Analysis of reprocessing operations data, based on isotopic correlations derived for assemblies in a PWR environment and fuel management scheme, yielded differences between the measurement-derived and ICT-derived plutonium mass determinations of (-0.02 ± 0.23)% for the measured U-235 and (+0.50 ± 0.31)% for the measured Pu-239, for a core campaign. The ICT analysis has also been applied to the plutonium isotopics in a depleted uranium assembly in a heavy-water, enriched-uranium system and to the uranium isotopes in the fuel assemblies of light-water, highly-enriched systems. 7 refs., 5 figs., 4 tabs

  8. Inverse uncertainty quantification of reactor simulations under the Bayesian framework using surrogate models constructed by polynomial chaos expansion

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Xu, E-mail: xuwu2@illinois.edu; Kozlowski, Tomasz

    2017-03-15

    Modeling and simulations are naturally augmented by extensive Uncertainty Quantification (UQ) and sensitivity analysis requirements in the nuclear reactor system design, in which uncertainties must be quantified in order to prove that the investigated design stays within acceptance criteria. Historically, expert judgment has been used to specify the nominal values, probability density functions and upper and lower bounds of the simulation code random input parameters for the forward UQ process. The purpose of this paper is to replace such ad-hoc expert judgment of the statistical properties of input model parameters with inverse UQ process. Inverse UQ seeks statistical descriptions of the model random input parameters that are consistent with the experimental data. Bayesian analysis is used to establish the inverse UQ problems based on experimental data, with systematic and rigorously derived surrogate models based on Polynomial Chaos Expansion (PCE). The methods developed here are demonstrated with the Point Reactor Kinetics Equation (PRKE) coupled with lumped parameter thermal-hydraulics feedback model. Three input parameters, external reactivity, Doppler reactivity coefficient and coolant temperature coefficient are modeled as uncertain input parameters. Their uncertainties are inversely quantified based on synthetic experimental data. Compared with the direct numerical simulation, surrogate model by PC expansion shows high efficiency and accuracy. In addition, inverse UQ with Bayesian analysis can calibrate the random input parameters such that the simulation results are in a better agreement with the experimental data.
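
The two-stage idea, replacing the expensive forward model with a cheap polynomial surrogate and then evaluating a Bayesian posterior over the uncertain input, can be sketched in one dimension. Here an ordinary polynomial fit stands in for a true PCE, and a grid evaluation stands in for sampling; the model and all numbers are illustrative, not the PRKE application of the paper:

```python
import numpy as np

def simulate(theta):
    """Stand-in for an expensive simulation with one uncertain input."""
    return np.sin(theta) + 0.5 * theta

# 1) Forward surrogate: fit a polynomial (a crude 1-D stand-in for a PCE)
#    to a handful of training runs of the "expensive" model.
train = np.linspace(-2.0, 2.0, 9)
coeffs = np.polyfit(train, simulate(train), deg=5)

def surrogate(theta):
    return np.polyval(coeffs, theta)

# 2) Inverse UQ: posterior over theta given one noisy synthetic measurement,
#    evaluated on a grid using only the cheap surrogate (flat prior).
rng = np.random.default_rng(0)
theta_true, sigma = 0.8, 0.05
data = simulate(theta_true) + rng.normal(0.0, sigma)

grid = np.linspace(-2.0, 2.0, 2001)
log_post = -0.5 * ((data - surrogate(grid)) / sigma) ** 2
theta_map = grid[np.argmax(log_post)]   # posterior mode, near theta_true
```

Every posterior evaluation calls only the surrogate, which is where the efficiency gain over direct numerical simulation comes from.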

  9. Fluorescent quantification of melanin.

    Science.gov (United States)

    Fernandes, Bruno; Matamá, Teresa; Guimarães, Diana; Gomes, Andreia; Cavaco-Paulo, Artur

    2016-11-01

    Melanin quantification is reportedly performed by absorption spectroscopy, commonly at 405 nm. Here, we propose the implementation of fluorescence spectroscopy for melanin assessment. In a typical in vitro assay to assess melanin production in response to an external stimulus, absorption spectroscopy clearly overvalues melanin content. This method is also incapable of distinguishing non-melanotic/amelanotic control cells from those that are actually capable of performing melanogenesis. Therefore, fluorescence spectroscopy is the best method for melanin quantification as it proved to be highly specific and accurate, detecting even small variations in the synthesis of melanin. This method can also be applied to the quantification of melanin in more complex biological matrices like zebrafish embryos and human hair. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  10. Perfusion Quantification Using Gaussian Process Deconvolution

    DEFF Research Database (Denmark)

    Andersen, Irene Klærke; Have, Anna Szynkowiak; Rasmussen, Carl Edward

    2002-01-01

    The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated...
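
The deconvolution problem here is linear: the tissue curve is the AIF convolved with the IRF. A minimal stand-in for the GP approach is regularized least squares with a second-difference penalty, which encodes the same "the IRF is smooth" prior in a cruder way (synthetic curves and regularization weight are illustrative, not the paper's method):

```python
import numpy as np

def deconvolve(aif, tissue, dt, lam=1e-2):
    """Recover the residue function R from tissue = dt * (aif convolved R).

    Regularized least squares with a second-difference smoothness penalty,
    a simple stand-in for the smoothness prior a Gaussian process encodes.
    """
    n = len(aif)
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])        # discrete convolution matrix
    D = np.diff(np.eye(n), n=2, axis=0)           # second-difference operator
    return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ tissue)

# Synthetic test: gamma-variate-like AIF, exponential residue function.
dt = 1.0
t = np.arange(40) * dt
aif = t**2 * np.exp(-t / 2.0)
irf_true = np.exp(-t / 8.0)
tissue = dt * np.convolve(aif, irf_true)[:len(t)]
irf_est = deconvolve(aif, tissue, dt)
```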

  11. Quantification of global myocardial function by cine MRI deformable registration-based analysis: Comparison with MR feature tracking and speckle-tracking echocardiography.

    Science.gov (United States)

    Lamacie, Mariana M; Thavendiranathan, Paaladinesh; Hanneman, Kate; Greiser, Andreas; Jolly, Marie-Pierre; Ward, Richard; Wintersperger, Bernd J

    2017-04-01

    To evaluate deformable registration algorithm (DRA)-based quantification of cine steady-state free-precession (SSFP) for myocardial strain assessment in comparison with feature tracking (FT) and speckle-tracking echocardiography (STE). Data sets of 28 patients/10 volunteers, undergoing same-day 1.5T cardiac MRI and echocardiography, were included. LV global longitudinal (GLS), circumferential (GCS) and radial (GRS) peak systolic strain were assessed on cine SSFP data using commercially available FT algorithms and prototype DRA-based algorithms. STE was applied as the standard of reference for accuracy, precision and intra-/interobserver reproducibility testing. DRA showed narrower limits of agreement compared to STE for GLS (-4.0 [-0.9,-7.9]) and GCS (-5.1 [1.1,-11.2]) than FT (3.2 [11.2,-4.9]; 3.8 [13.9,-6.3], respectively). While both DRA and FT demonstrated significant differences to STE for GLS and GCS (all p < 0.001), only DRA correlated significantly to STE for GLS (r = 0.47; p = 0.006). • Inverse DRA demonstrated superior reproducibility compared to feature-tracking (FT) methods. • Cine MRI DRA and FT analyses demonstrate differences to speckle-tracking echocardiography. • DRA demonstrated better correlation with STE than FT for MR-derived global strain data.

  12. Terahertz identification and quantification of penicillamine enantiomers

    International Nuclear Information System (INIS)

    Ji Te; Zhao Hongwei; Chen Min; Xiao Tiqiao; Han Pengyu

    2013-01-01

    Identification and characterization of L-, D- and DL- penicillamine were demonstrated by Terahertz time-domain spectroscopy (THz-TDS). To understand the physical origins of the low frequency resonant modes, the density functional theory (DFT) was adopted for theoretical calculation. It was found that the collective THz frequency motions were decided by the intramolecular and intermolecular hydrogen bond interactions. Moreover, the quantification of penicillamine enantiomers mixture was demonstrated by a THz spectra fitting method with a relative error of less than 3.5%. This technique can be a valuable tool for the discrimination and quantification of chiral drugs in pharmaceutical industry. (authors)

  13. [11C]Harmine Binding to Brain Monoamine Oxidase A: Test-Retest Properties and Noninvasive Quantification.

    Science.gov (United States)

    Zanderigo, Francesca; D'Agostino, Alexandra E; Joshi, Nandita; Schain, Martin; Kumar, Dileep; Parsey, Ramin V; DeLorenzo, Christine; Mann, J John

    2018-02-08

    Inhibition of the isoform A of monoamine oxidase (MAO-A), a mitochondrial enzyme catalyzing deamination of monoamine neurotransmitters, is useful in the treatment of depression and anxiety disorders. [11C]harmine, a MAO-A PET radioligand, has been used to study mood disorders and antidepressant treatment. However, the test-retest characteristics of [11C]harmine binding have to date only been partially investigated. Furthermore, since MAO-A is ubiquitously expressed, no reference region is available, thus requiring arterial blood sampling during PET scanning. Here, we investigate the test-retest properties of [11C]harmine binding measurements; assess the effects of using a minimally invasive input function estimation on binding quantification and repeatability; and explore binding potential estimation using a reference region-free approach. Quantification of [11C]harmine distribution volume (VT) via kinetic models and graphical analyses was compared based on absolute test-retest percent difference (TRPD), intraclass correlation coefficient (ICC), and identifiability. The optimal procedure was also used with a simultaneously estimated input function in place of the measured curve. Lastly, an approach for binding potential quantification in the absence of a reference region was evaluated. [11C]harmine VT estimates quantified using arterial blood and kinetic modeling showed average absolute TRPD values of 7.7 to 15.6 %, and ICC values between 0.56 and 0.86, across brain regions. Using simultaneous estimation (SIME) of the input function resulted in VT estimates close to those obtained using the arterial input function (r = 0.951, slope = 1.073, intercept = -1.037), with numerically but not statistically higher test-retest difference (range 16.6 to 22.0 %), but with overall poor ICC values, between 0.30 and 0.57. Prospective studies using [11C]harmine are possible given its test-retest repeatability when binding is quantified using arterial blood. Results with SIME of

  14. Image-derived input function in dynamic human PET/CT: methodology and validation with 11C-acetate and 18F-fluorothioheptadecanoic acid in muscle and 18F-fluorodeoxyglucose in brain

    International Nuclear Information System (INIS)

    Croteau, Etienne; Lavallee, Eric; Hubert, Laurent; Rousseau, Jacques A.; Lecomte, Roger; Labbe, Sebastien M.; Carpentier, Andre C.; Pifferi, Fabien; Cunnane, Stephen C.; Benard, Francois

    2010-01-01

    Despite current advances in PET/CT systems, blood sampling still remains the standard method to obtain the radiotracer input function for tracer kinetic modelling. The purpose of this study was to validate the use of image-derived input functions (IDIF) of the carotid and femoral arteries to measure the arterial input function (AIF) in PET imaging. The data were obtained from two different research studies, one using 18F-FDG for brain imaging and the other using 11C-acetate and 18F-fluoro-6-thioheptadecanoic acid (18F-FTHA) in femoral muscles. The method was validated with two phantom systems. First, a static phantom consisting of syringes of different diameters containing radioactivity was used to determine the recovery coefficient (RC) and spill-in factors. Second, a dynamic phantom built to model bolus injection and clearance of tracers was used to establish the correlation between blood sampling, AIF and IDIF. The RC was then applied to the femoral artery data from PET imaging studies with 11C-acetate and 18F-FTHA and to carotid artery data from brain imaging with 18F-FDG. These IDIF data were then compared to actual AIFs from patients. With 11C-acetate, the perfusion index in the femoral muscle was 0.34±0.18 min⁻¹ when estimated from the actual time-activity blood curve, 0.29±0.15 min⁻¹ when estimated from the corrected IDIF, and 0.66±0.41 min⁻¹ when the IDIF data were not corrected for RC. One-way repeated-measures ANOVA and Tukey's test showed a statistically significant difference for the IDIF not corrected for RC. With 18F-FTHA there was a strong correlation between the Patlak slopes, the plasma-to-tissue transfer rates calculated using the true plasma radioactivity content and the corrected IDIF for the femoral muscles (vastus lateralis r=0.86, p=0.027; biceps femoris r=0.90, p=0.017). On the other hand, there was no correlation between the values derived using the AIF and those derived using the uncorrected IDIF. Finally, in the brain imaging
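
A recovery-coefficient correction of this kind can be written in one line: subtract the estimated spill-in from surrounding tissue, then divide by the RC for the vessel size. The exact correction model and all numbers below are illustrative assumptions, not the phantom-derived factors of the study:

```python
def correct_idif(c_measured, c_background, rc, spill_in):
    """Partial-volume correction of an image-derived input function sample.

    c_measured:   activity measured in the vessel ROI
    c_background: activity in surrounding tissue (source of spill-in)
    rc:           recovery coefficient for this vessel diameter (0..1)
    spill_in:     fraction of background activity spilling into the ROI
    """
    return (c_measured - spill_in * c_background) / rc

# Hypothetical sample: 70 % recovery, 10 % spill-in from muscle background.
c_true = correct_idif(8.5, 5.0, rc=0.70, spill_in=0.10)
```

Applying this to every time point of the IDIF yields the corrected curve that is then compared against the sampled AIF.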

  15. Quantification of global myocardial function by cine MRI deformable registration-based analysis: Comparison with MR feature tracking and speckle-tracking echocardiography

    International Nuclear Information System (INIS)

    Lamacie, Mariana M.; Thavendiranathan, Paaladinesh; Hanneman, Kate; Greiser, Andreas; Jolly, Marie-Pierre; Ward, Richard; Wintersperger, Bernd J.

    2017-01-01

    To evaluate deformable registration algorithm (DRA)-based quantification of cine steady-state free-precession (SSFP) imaging for myocardial strain assessment in comparison with feature tracking (FT) and speckle-tracking echocardiography (STE). Data sets of 28 patients/10 volunteers undergoing same-day 1.5T cardiac MRI and echocardiography were included. LV global longitudinal (GLS), circumferential (GCS) and radial (GRS) peak systolic strain were assessed on cine SSFP data using commercially available FT algorithms and prototype DRA-based algorithms. STE was applied as the standard of reference for accuracy, precision and intra-/interobserver reproducibility testing. DRA showed narrower limits of agreement with STE for GLS (-4.0 [-0.9,-7.9]) and GCS (-5.1 [1.1,-11.2]) than FT (3.2 [11.2,-4.9]; 3.8 [13.9,-6.3], respectively). While both DRA and FT demonstrated significant differences to STE for GLS and GCS (all p<0.001), only DRA correlated significantly with STE for GLS (r=0.47; p=0.006). However, good correlation was demonstrated between MR techniques (GLS: r=0.74; GCS: r=0.80; GRS: r=0.45, all p<0.05). Comparing DRA with FT, the intra-/interobserver coefficient of variation was lower (1.6 %/3.2 % vs. 6.4 %/5.7 %) and the intraclass correlation coefficient was higher. DRA GCS and GRS data presented zero variability for repeated observations. DRA is an automated method that allows myocardial deformation assessment with superior reproducibility compared to FT. (orig.)

  16. Quantification of global myocardial function by cine MRI deformable registration-based analysis: Comparison with MR feature tracking and speckle-tracking echocardiography

    Energy Technology Data Exchange (ETDEWEB)

    Lamacie, Mariana M. [University Health Network, Department of Medical Imaging, Toronto, Ontario (Canada); Thavendiranathan, Paaladinesh [University Health Network, Department of Medical Imaging, Toronto, Ontario (Canada); University of Toronto, Department of Medicine, Division of Cardiology, Toronto, Ontario (Canada); Hanneman, Kate [University Health Network, Department of Medical Imaging, Toronto, Ontario (Canada); University of Toronto, Department of Medical Imaging, Toronto, Ontario (Canada); Greiser, Andreas [Siemens Healthcare, Erlangen (Germany); Jolly, Marie-Pierre [Medical Imaging Technologies, Siemens Healthcare, Princeton, NJ (United States); Ward, Richard [University of Toronto, Department of Medicine, Division of Cardiology, Toronto, Ontario (Canada); Wintersperger, Bernd J. [University Health Network, Department of Medical Imaging, Toronto, Ontario (Canada); University of Toronto, Department of Medical Imaging, Toronto, Ontario (Canada); Toronto General Hospital, Department of Medical Imaging, Toronto, Ontario (Canada)

    2017-04-15

    To evaluate deformable registration algorithm (DRA)-based quantification of cine steady-state free-precession (SSFP) imaging for myocardial strain assessment in comparison with feature tracking (FT) and speckle-tracking echocardiography (STE). Data sets of 28 patients/10 volunteers undergoing same-day 1.5T cardiac MRI and echocardiography were included. LV global longitudinal (GLS), circumferential (GCS) and radial (GRS) peak systolic strain were assessed on cine SSFP data using commercially available FT algorithms and prototype DRA-based algorithms. STE was applied as the standard of reference for accuracy, precision and intra-/interobserver reproducibility testing. DRA showed narrower limits of agreement with STE for GLS (-4.0 [-0.9,-7.9]) and GCS (-5.1 [1.1,-11.2]) than FT (3.2 [11.2,-4.9]; 3.8 [13.9,-6.3], respectively). While both DRA and FT demonstrated significant differences to STE for GLS and GCS (all p<0.001), only DRA correlated significantly with STE for GLS (r=0.47; p=0.006). However, good correlation was demonstrated between MR techniques (GLS: r=0.74; GCS: r=0.80; GRS: r=0.45, all p<0.05). Comparing DRA with FT, the intra-/interobserver coefficient of variation was lower (1.6 %/3.2 % vs. 6.4 %/5.7 %) and the intraclass correlation coefficient was higher. DRA GCS and GRS data presented zero variability for repeated observations. DRA is an automated method that allows myocardial deformation assessment with superior reproducibility compared to FT. (orig.)

  17. Non-invasive liver fibrosis score calculated by combination of virtual touch tissue quantification and serum liver functional tests in chronic hepatitis C patients.

    Science.gov (United States)

    Takaki, Shintaro; Kawakami, Yoshiiku; Miyaki, Daisuke; Nakahara, Takashi; Naeshiro, Noriaki; Murakami, Eisuke; Tanaka, Mio; Honda, Yohji; Yokoyama, Satoe; Nagaoki, Yuko; Kawaoka, Tomokazu; Hiramatsu, Akira; Tsuge, Masataka; Hiraga, Nobuhiko; Imamura, Michio; Hyogo, Hideyuki; Aikata, Hiroshi; Takahashi, Shoichi; Arihiro, Koji; Chayama, Kazuaki

    2014-03-01

    Acoustic radiation force impulse (ARFI) technology, involving measurement of the shear wave velocity (SWV) with virtual touch tissue quantification (VTTQ), is currently available for the assessment of liver fibrosis, but no index has yet been derived from the combination of SWV and blood tests. The aim of this study was to develop a new index for the assessment of liver fibrosis. The subjects were 176 consecutive patients with hepatitis C (training set [n = 120] and validation set [n = 56]) who underwent liver biopsy in our institution. In the training set, SWV, the international normalized ratio (INR) and alanine aminotransferase (ALT) correlated independently and significantly with fibrosis. Based on this, we developed the VIA index = -1.282 + 0.965 × SWV + 1.785 × INR + 0.00185 × ALT. The areas under the receiver operating characteristic curve (AUROC) of the VIA index were 0.838 for the diagnosis of significant fibrosis (≥F2), 0.904 for severe fibrosis (≥F3) and 0.958 for cirrhosis (F4) in the training set. In the validation set, the AUROCs of the VIA index were 0.917 for F2 or higher, 0.906 for F3 or higher and 1.000 for F4, respectively. The AUROC of the VIA index was improved compared to SWV alone; it was equivalent to the FIB-4 index and aspartate aminotransferase-to-platelet ratio index for the diagnosis of F2 or higher, and superior to both for the diagnosis of F3 or higher and F4. The VIA index is potentially more useful for assessment of liver fibrosis than SWV alone, and easily and accurately measures liver fibrosis stage. © 2013 The Japan Society of Hepatology.
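
The VIA index formula reported above is straightforward to compute. A minimal sketch; the input values below are illustrative, not taken from the study:

```python
def via_index(swv, inr, alt):
    """VIA index as given in the abstract:
    VIA = -1.282 + 0.965*SWV + 1.785*INR + 0.00185*ALT
    (SWV: shear wave velocity in m/s; INR: dimensionless; ALT in IU/L)."""
    return -1.282 + 0.965 * swv + 1.785 * inr + 0.00185 * alt

# Illustrative input values only (not from the study):
print(via_index(swv=1.5, inr=1.0, alt=40.0))
```

The computed score would then be compared against the study's stage-specific cut-offs, which are not reproduced here.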

  18. Disease quantification in dermatology

    DEFF Research Database (Denmark)

    Greve, Tanja Maria; Kamp, Søren; Jemec, Gregor B E

    2013-01-01

    Accurate documentation of disease severity is a prerequisite for clinical research and the practice of evidence-based medicine. The quantification of skin diseases such as psoriasis currently relies heavily on clinical scores. Although these clinical scoring methods are well established and very ...

  19. Stochastic approach for radionuclides quantification

    Science.gov (United States)

    Clement, A.; Saurel, N.; Perrin, G.

    2018-01-01

    Gamma spectrometry is a passive non-destructive assay used to quantify radionuclides present in more or less complex objects. Basic methods using empirical calibration with a standard, which quantify the activity of nuclear materials by determining a calibration coefficient, are useless on non-reproducible, complex and one-of-a-kind nuclear objects such as waste packages. Package specifications such as composition or geometry change from one package to another, resulting in high variability between objects. The current quantification process uses numerical modelling of the measured scene with the few available data, such as geometry or composition. These data include density, material, screening, geometric shape, matrix composition, and matrix and source distribution. Some of them depend strongly on knowledge of the package and on operator background. The French Commissariat à l'Energie Atomique (CEA) is developing a new methodology to quantify nuclear materials in waste packages and waste drums without operator adjustment or knowledge of the internal package configuration. This method combines a global stochastic approach, which uses, among other things, surrogate models to simulate the gamma attenuation behaviour; a Bayesian approach, which considers conditional probability densities of the problem inputs; and Markov chain Monte Carlo (MCMC) algorithms, which solve the inverse problem, with the gamma-ray emission spectrum of the radionuclides and the outside dimensions of the objects of interest as inputs. The methodology is being tested by quantifying actinide activity in standards of different matrix, composition and source configuration with known actinide masses, locations and distributions. Activity uncertainties are taken into account by this adjustment methodology.
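
The combination of a surrogate forward model, a Bayesian posterior and an MCMC sampler that the abstract describes can be illustrated with a toy one-parameter example. Everything here is a hedged sketch: the forward model, the numbers and the function names are invented stand-ins, not the CEA methodology.

```python
import math, random

random.seed(0)

measured = 10.0   # observed net count rate (arbitrary units)
sigma = 0.5       # measurement uncertainty

def forward(activity, attenuation=0.2):
    # Toy surrogate for the gamma-attenuation model: the detected rate is
    # the source activity scaled by a known attenuation factor.
    return activity * attenuation

def log_post(activity):
    # Flat prior on positive activities + Gaussian likelihood
    if activity <= 0.0:
        return -math.inf
    r = (measured - forward(activity)) / sigma
    return -0.5 * r * r

# Metropolis random walk over the unknown activity
a = 30.0
lp = log_post(a)
samples = []
for _ in range(20000):
    prop = a + random.gauss(0.0, 2.0)
    lp_prop = log_post(prop)
    if math.log(random.random()) < lp_prop - lp:
        a, lp = prop, lp_prop
    samples.append(a)

posterior_mean = sum(samples[5000:]) / len(samples[5000:])
print(posterior_mean)   # close to measured / attenuation = 50
```

The spread of the retained samples, not shown here, is what carries the activity uncertainty the abstract mentions.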

  20. Computer Generated Inputs for NMIS Processor Verification

    International Nuclear Information System (INIS)

    J. A. Mullens; J. E. Breeding; J. A. McEvers; R. W. Wysor; L. G. Chiang; J. R. Lenarduzzi; J. T. Mihalczo; J. K. Mattingly

    2001-01-01

    Proper operation of the Nuclear Materials Identification System (NMIS) processor can be verified using computer-generated inputs [BIST (Built-In Self-Test)] at the digital inputs. Preselected sequences of input pulses to all channels with known correlation functions are compared to the output of the processor. These types of verification have been utilized in NMIS-type correlation processors at the Oak Ridge National Laboratory since 1984. The use of this test confirmed a malfunction in an NMIS processor at the All-Russian Scientific Research Institute of Experimental Physics (VNIIEF) in 1998. The NMIS processor boards were returned to the U.S. for repair and subsequently used in NMIS passive and active measurements with Pu at VNIIEF in 1999.
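
The verification principle above, comparing the processor's computed correlation of preselected pulse sequences against a known reference, can be sketched as follows. The function and test vectors are hypothetical, not the actual NMIS BIST.

```python
import numpy as np

def bist_check(channel_a, channel_b, expected, tol=1e-9):
    """Compare the correlation computed from known test sequences against
    a stored reference, as a built-in self-test would."""
    computed = np.correlate(channel_a, channel_b, mode="full")
    return bool(np.allclose(computed, expected, atol=tol))

# Preselected test sequences with a correlation known in advance;
# the reference would normally be computed offline and stored.
a = np.array([1.0, 0.0, 1.0, 1.0])
b = np.array([1.0, 1.0, 0.0, 1.0])
reference = np.correlate(a, b, mode="full")
print(bist_check(a, b, reference))   # True
```

A failing check on any channel would flag the kind of processor malfunction described in the record.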

  1. Six axis force feedback input device

    Science.gov (United States)

    Ohm, Timothy (Inventor)

    1998-01-01

    The present invention is a low friction, low inertia, six-axis force feedback input device comprising an arm with double-jointed, tendon-driven revolute joints, a decoupled tendon-driven wrist, and a base with encoders and motors. The input device functions as a master robot manipulator of a microsurgical teleoperated robot system including a slave robot manipulator coupled to an amplifier chassis, which is coupled to a control chassis, which is coupled to a workstation with a graphical user interface. The amplifier chassis is coupled to the motors of the master robot manipulator and the control chassis is coupled to the encoders of the master robot manipulator. A force feedback can be applied to the input device and can be generated from the slave robot to enable a user to operate the slave robot via the input device without physically viewing the slave robot. Also, the force feedback can be generated from the workstation to represent fictitious forces to constrain the input device's control of the slave robot to be within imaginary predetermined boundaries.

  2. Enhanced Input in LCTL Pedagogy

    Directory of Open Access Journals (Sweden)

    Marilyn S. Manley

    2009-08-01

    Language materials for the more-commonly-taught languages (MCTLs) often include visual input enhancement (Sharwood Smith 1991, 1993), which makes use of typographical cues like bolding and underlining to enhance the saliency of targeted forms. For a variety of reasons, this paper argues that the use of enhanced input, both visual and oral, is especially important as a tool for the less-commonly-taught languages (LCTLs). As there continues to be a scarcity of teaching resources for the LCTLs, individual teachers must take it upon themselves to incorporate enhanced input into their own self-made materials. Specific examples of how to incorporate both visual and oral enhanced input into language teaching are drawn from the author's own experiences teaching Cuzco Quechua. Additionally, survey results are presented from the author's Fall 2010 semester Cuzco Quechua language students, supporting the use of both visual and oral enhanced input.

  4. A New Ensemble of Perturbed-Input-Parameter Simulations by the Community Atmosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Covey, C; Brandon, S; Bremer, P T; Domyancis, D; Garaizar, X; Johannesson, G; Klein, R; Klein, S A; Lucas, D D; Tannahill, J; Zhang, Y

    2011-10-27

    Uncertainty quantification (UQ) is a fundamental challenge in the numerical simulation of Earth's weather and climate, and other complex systems. It entails much more than attaching defensible error bars to predictions: in particular it includes assessing low-probability but high-consequence events. To achieve these goals with models containing a large number of uncertain input parameters, structural uncertainties, etc., raw computational power is needed. An automated, self-adapting search of the possible model configurations is also useful. Our UQ initiative at the Lawrence Livermore National Laboratory has produced the most extensive set to date of simulations from the US Community Atmosphere Model. We are examining output from about 3,000 twelve-year climate simulations generated with a specialized UQ software framework, and assessing the model's accuracy as a function of 21 to 28 uncertain input parameter values. Most of the input parameters we vary are related to the boundary layer, clouds, and other sub-grid scale processes. Our simulations prescribe surface boundary conditions (sea surface temperatures and sea ice amounts) to match recent observations. Fully searching this 21+ dimensional space is impossible, but sensitivity and ranking algorithms can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. Bayesian statistical constraints, employing a variety of climate observations as metrics, also seem promising. Observational constraints will be important in the next step of our project, which will compute sea surface temperatures and sea ice interactively, and will study climate change due to increasing atmospheric carbon dioxide.
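
A sensitivity-ranking step of the kind mentioned above (identifying input parameters with relatively little effect on an output field) can be illustrated with a toy ensemble. The synthetic data and the simple correlation-based ranking are assumptions for illustration, not the LLNL framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble: 500 runs of a "model" with 5 uncertain input parameters
X = rng.uniform(-1.0, 1.0, size=(500, 5))
# Synthetic output: strong dependence on parameter 0, weak on parameter 3,
# none on the rest (plus a little noise)
y = 3.0 * X[:, 0] + 0.3 * X[:, 3] + 0.05 * rng.normal(size=500)

# Rank inputs by absolute Pearson correlation with the output
scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
ranking = np.argsort(scores)[::-1]
print(ranking[0])   # parameter 0 dominates
```

Real UQ studies use more robust measures (variance-based indices, Bayesian constraints), but the screening idea is the same: parameters with near-zero scores can be dropped from the search.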

  5. Quantification of Global Left Ventricular Function: Comparison of Multidetector Computed Tomography and Magnetic Resonance Imaging. A Meta-analysis and Review of the Current Literature

    International Nuclear Information System (INIS)

    Vleuten, P.A. van der; Willems, T.P.; Goette, M.J.; Tio, R.A.; Greuter, M.J.; Zijlstra, F.; Oudkerk, M.

    2006-01-01

    Cardiac morbidity and mortality are closely related to cardiac volumes and global left ventricular (LV) function, expressed as the left ventricular ejection fraction. Accurate assessment of these parameters is required for the prediction of prognosis in individual patients as well as in entire cohorts. The current standard of reference for left ventricular function is analysis by short-axis magnetic resonance imaging. In recent years, major technological improvements have been achieved in computed tomography. The most marked development has been the introduction of multidetector CT (MDCT), which has significantly improved temporal and spatial resolution. In order to assess the current status of MDCT for the analysis of LV function, the available literature on this subject was reviewed. The data presented in this review indicate that the global left ventricular functional parameters measured by contemporary multi-detector row systems, combined with adequate reconstruction algorithms and post-processing tools, show a narrow diagnostic window and are interchangeable with those obtained by MRI

  6. On the Nature of the Input in Optimality Theory

    DEFF Research Database (Denmark)

    Heck, Fabian; Müller, Gereon; Vogel, Ralf

    2002-01-01

    The input has two main functions in optimality theory (Prince and Smolensky 1993). First, the input defines the candidate set; in other words, it determines which output candidates compete for optimality, and which do not. Second, the input is referred to by faithfulness constraints that prohibit output candidates from deviating from specifications in the input. Whereas there is general agreement concerning the relevance of the input in phonology, the nature of the input in syntax is notoriously unclear. In this article, we show that the input should not be taken to define syntactic candidate sets … and syntax is due to a basic, irreducible difference between these two components of grammar: syntax is an information-preserving system, phonology is not.

  7. Harmonize input selection for sediment transport prediction

    Science.gov (United States)

    Afan, Haitham Abdulmohsin; Keshtegar, Behrooz; Mohtar, Wan Hanna Melini Wan; El-Shafie, Ahmed

    2017-09-01

    In this paper, three modeling approaches using a neural network (NN), the response surface method (RSM) and a response surface method based on global harmony search (GHS) are applied to predict the daily time series of suspended sediment load. Generally, the input variables for forecasting the suspended sediment load are manually selected based on the maximum correlations of input variables in the modeling approaches based on NN and RSM. The RSM is improved to select the input variables by using the error terms of the training data based on the GHS, referred to as the response surface method with global harmony search (RSM-GHS) modeling method. The second-order polynomial function with cross terms is applied to calibrate the time series of suspended sediment load with three, four and five input variables in the proposed RSM-GHS. The linear, square and cross terms of twenty input variables of antecedent values of suspended sediment load and water discharge are investigated to achieve the best predictions of the RSM based on the GHS method. The performances of the NN, RSM and proposed RSM-GHS, including both accuracy and simplicity, are compared through several comparative prediction and error statistics. The results illustrate that the proposed RSM-GHS is as uncomplicated as the RSM but performed better, with fewer errors and better correlation (R = 0.95, MAE = 18.09 ton/day, RMSE = 25.16 ton/day) compared to the NN (R = 0.91, MAE = 20.17 ton/day, RMSE = 33.09 ton/day) and RSM (R = 0.91, MAE = 20.06 ton/day, RMSE = 31.92 ton/day) for all types of input variables.
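
A second-order response surface with linear, square and cross terms, as used by the RSM above, amounts to an ordinary least-squares fit on an expanded design matrix. This is a hedged sketch with synthetic data; the function names are assumptions, not the paper's code.

```python
import numpy as np
from itertools import combinations

def second_order_design(X):
    """Design matrix of a second-order response surface:
    constant, linear, square, and cross terms of the inputs."""
    n, p = X.shape
    cols = [np.ones(n)]
    cols += [X[:, j] for j in range(p)]                                 # linear
    cols += [X[:, j] ** 2 for j in range(p)]                            # square
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(p), 2)]   # cross
    return np.column_stack(cols)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))     # e.g. three antecedent load/discharge values
y = 2.0 + X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.8 * X[:, 0] * X[:, 2]

A = second_order_design(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(A @ coef, y, atol=1e-6))   # True: y lies in the basis
```

The GHS step in the paper then searches over which antecedent variables enter this basis, using the training-error terms to guide the selection.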

  8. Material input of nuclear fuel

    International Nuclear Information System (INIS)

    Rissanen, S.; Tarjanne, R.

    2001-01-01

    The Material Input (MI) of nuclear fuel, expressed in terms of the total amount of natural material needed for manufacturing a product, is examined. The suitability of the MI method for assessing the environmental impacts of fuels is also discussed. Material input is expressed as a Material Input Coefficient (MIC), equal to the total mass of natural material divided by the mass of the completed product. The material input coefficient is, however, only an intermediate result, which should not be used as such for the comparison of different fuels, because the energy content of nuclear fuel is about 100,000-fold that of fossil fuels. As a final result, the material input is expressed in proportion to the amount of generated electricity, which is called MIPS (Material Input Per Service unit). Material input is a simplified and commensurable indicator for the use of natural material, but because it does not take into account the harmfulness of materials or the way the residual material is processed, it does not alone express the amount of environmental impact. Examination of the mere amount does not differentiate between, for example, coal, natural gas or waste rock, which usually contains just sand. Natural gas is, however, substantially more harmful to the ecosystem than sand. Therefore, other methods should also be used to assess the environmental load of a product. The material input coefficient of nuclear fuel is calculated using data from different types of mines. The calculations are made, among other things, using data from an open-pit mine (Key Lake, Canada), an underground mine (McArthur River, Canada) and a by-product mine (Olympic Dam, Australia). Furthermore, the coefficient is calculated for nuclear fuel corresponding to the nuclear fuel supply of the Teollisuuden Voima (TVO) company in 2001. Because there is some uncertainty in the initial data, the inaccuracy of the final results can be as much as 20-50 per cent. The value
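
The MIC and MIPS definitions above amount to two ratios. A sketch with purely hypothetical numbers (none are taken from the record):

```python
def material_input_coefficient(natural_mass_kg, product_mass_kg):
    """MIC: total mass of natural material per mass of finished product."""
    return natural_mass_kg / product_mass_kg

def mips(natural_mass_kg, electricity_kwh):
    """Material Input Per Service unit: natural material per kWh generated."""
    return natural_mass_kg / electricity_kwh

# Purely hypothetical numbers for illustration:
mic = material_input_coefficient(natural_mass_kg=5.0e6, product_mass_kg=1.0e3)
print(mic)                   # 5000.0 kg of natural material per kg of fuel
print(mips(5.0e6, 1.0e10))   # 0.0005 kg per kWh
```

Dividing by electricity rather than product mass is what makes fuels with very different energy densities comparable, which is the point the abstract makes about MIC versus MIPS.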

  9. Quantification of discreteness effects in cosmological N-body simulations: Initial conditions

    International Nuclear Information System (INIS)

    Joyce, M.; Marcos, B.

    2007-01-01

    The relation between the results of cosmological N-body simulations and the continuum theoretical models they simulate is currently not understood in a way which allows a quantification of N-dependent effects. In this first of a series of papers on this issue, we consider the quantification of such effects in the initial conditions of such simulations. A general formalism developed in [A. Gabrielli, Phys. Rev. E 70, 066131 (2004)] allows us to write down an exact expression for the power spectrum of the point distributions generated by the standard algorithm for generating such initial conditions. Expanded perturbatively in the amplitude of the input (i.e. theoretical, continuum) power spectrum, we obtain at linear order the input power spectrum, plus two terms which arise from discreteness and contribute at large wave numbers. For cosmological-type power spectra, one obtains, as expected, the input spectrum for wave numbers k smaller than that characteristic of the discreteness. The comparison of real-space correlation properties is more subtle because the discreteness corrections are not as strongly localized in real space. For cosmological-type spectra the theoretical mass variance in spheres and the two-point correlation function are well approximated above a finite distance. For typical initial amplitudes this distance is a few times the interparticle distance, but it diverges as this amplitude (or, equivalently, the initial redshift of the cosmological simulation) goes to zero, at fixed particle density. We discuss briefly the physical significance of these discreteness terms in the initial conditions, in particular with respect to the definition of the continuum limit of N-body simulations

  10. Phasing Out a Polluting Input

    OpenAIRE

    Eriksson, Clas

    2015-01-01

    This paper explores economic policies related to the potential conflict between economic growth and the environment. It applies a model with directed technological change and focuses on the case with low elasticity of substitution between clean and dirty inputs in production. New technology is substituted for the polluting input, which results in a gradual decline in pollution along the optimal long-run growth path. In contrast to some recent work, the era of pollution and environmental polic...

  11. NeutrAvidin Functionalization of CdSe/CdS Quantum Nanorods and Quantification of Biotin Binding Sites using Biotin-4-Fluorescein Fluorescence Quenching

    OpenAIRE

    Lippert, Lisa G.; Hallock, Jeffrey T.; Dadosh, Tali; Diroll, Benjamin T.; Murray, Christopher B.; Goldman, Yale E.

    2016-01-01

    We developed methods to solubilize, coat, and functionalize with NeutrAvidin elongated semiconductor nanocrystals (quantum nanorods, QRs) for use in single-molecule polarized fluorescence microscopy. Three different ligands were compared with regard to efficacy for attaching NeutrAvidin using the “zero-length cross-linker” 1-ethyl-3-[3-(dimethylamino)propyl]carbodiimide (EDC). Biotin-4-fluorescein (B4F), a fluorophore that is quenched when bound to avidin proteins, was used to quantify bioti...

  12. Solvent-assisted dispersive micro-SPE by using aminopropyl-functionalized magnetite nanoparticle followed by GC-PID for quantification of parabens in aqueous matrices.

    Science.gov (United States)

    Abbasghorbani, Maryam; Attaran, Abdolmohammad; Payehghadr, Mahmood

    2013-01-01

    In this research, solvent-assisted dispersive micro-SPE was introduced as a simple modified technique for the determination of parabens in water and cosmetic samples. Aminopropyl-functionalized magnetite nanoparticles (MNPs) were successfully synthesized and applied. GC with a photoionization detector was used for the separation and detection of the parabens. In this method, hexyl acetate (15 μL) as a solvent and aminopropyl-functionalized MNPs (5 μg) as a sorbent were added to an aqueous sample (10 mL), and the sample was then sonicated. The dispersed magnetite was collected at the bottom of a conical tube using a strong magnet, and ACN was then added as a desorption solvent. Forty microliters of this solvent was transferred into a microvial, acetic anhydride and pyridine were added, and derivatization was performed with acetic anhydride. After evaporation, 1 μL of the derivatized sample was injected into a gas chromatograph for analysis. Several important parameters, such as the type of organic solvent, the desorption solvent and its volume, the amount of aminopropyl-functionalized MNPs and the effect of salt addition, were investigated. Under optimum conditions, the limits of detection achieved were between 50 and 300 ng/L, with RSDs (n = 5) lower than 8%. Under the optimum conditions, the enrichment factors ranged from 217 to 1253 and the extraction recoveries ranged from 10 to 62%. The recoveries obtained for the analytes in river water, mouthwash solution and hand cream were in the range of 87-103%. The advantages of the proposed method are simplicity of operation, rapidity, high extraction yields, and its environmentally friendly character. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
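
The enrichment factors and extraction recoveries reported above follow from standard definitions. A sketch with hypothetical concentrations; the 10 mL sample and 40 μL extract volumes echo the abstract, but the concentrations are invented:

```python
def enrichment_factor(c_extract, c_sample):
    """EF: analyte concentration in the final extract over the sample."""
    return c_extract / c_sample

def extraction_recovery(c_extract, v_extract_ml, c_sample, v_sample_ml):
    """Recovery (%): fraction of analyte mass transferred to the extract."""
    return 100.0 * (c_extract * v_extract_ml) / (c_sample * v_sample_ml)

# Hypothetical: 10 mL sample concentrated into 40 uL of desorption solvent
ef = enrichment_factor(c_extract=125.0, c_sample=1.0)
rec = extraction_recovery(125.0, 0.040, 1.0, 10.0)
print(ef, round(rec, 1))   # 125.0 50.0
```

The much larger factors in the abstract are plausible because an evaporation step further concentrates the extract before injection.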

  13. Heart deformation analysis for automated quantification of cardiac function and regional myocardial motion patterns: A proof of concept study in patients with cardiomyopathy and healthy subjects

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Kai, E-mail: kai-lin@northwestern.edu; Collins, Jeremy D.; Chowdhary, Varun; Markl, Michael; Carr, James C.

    2016-10-15

    Highlights: • Heart deformation analysis (HDA) can quantify global and regional cardiac function. • HDA works on cine CMR images without the need for operator interaction. • HDA-derived cardiac motion indices are reproducible. - Abstract: Objective: To test the performance of HDA in characterizing left ventricular (LV) function and regional myocardial motion patterns in the context of cardiomyopathy based on cine cardiovascular magnetic resonance (CMR). Materials and methods: Following approval of the institutional review board (IRB), standard cine images of 45 subjects, including 15 healthy volunteers, 15 patients with hypertrophic cardiomyopathy (HCM) and 15 patients with dilated cardiomyopathy (DCM), were retrospectively analyzed using HDA. The variations of LV ejection fraction (LVEF), LV mass (LVM), and regional myocardial motion indices, including radial (Drr) and circumferential (Dcc) displacement, radial (Vrr) and circumferential (Vcc) velocity, radial (Err), circumferential (Ecc) and shear (Ess) strain, and radial (SRr) and circumferential (SRc) strain rate, were calculated and compared among subject groups. Inter-study reproducibility of HDA-derived myocardial motion indices was tested on 15 volunteers by using the intra-class correlation coefficient (ICC) and coefficient of variation (CoV). Results: HDA identified significant differences in cardiac function and motion indices between subject groups. DCM patients had significantly lower LVEF (33.5 ± 9.65%), LVM (105.88 ± 21.93 g), peak Drr (0.29 ± 0.11 cm), Vrr-sys (2.14 ± 0.72 cm/s), Err (0.17 ± 0.08), Ecc (−0.08 ± 0.03), SRr-sys (0.91 ± 0.44 s⁻¹) and SRc-sys (−0.64 ± 0.27 s⁻¹) compared to the other two groups. HCM patients demonstrated increased LVM (171.69 ± 34.19 g) and lower peak Vcc-dia (0.78 ± 0.30 cm/s) than other subjects. Good inter-study reproducibility was found for all HDA-derived myocardial indices in healthy volunteers (ICC = 0.664–0.942, CoV = 15.1%–37

  14. Heart deformation analysis for automated quantification of cardiac function and regional myocardial motion patterns: A proof of concept study in patients with cardiomyopathy and healthy subjects

    International Nuclear Information System (INIS)

    Lin, Kai; Collins, Jeremy D.; Chowdhary, Varun; Markl, Michael; Carr, James C.

    2016-01-01

    Highlights: • Heart deformation analysis (HDA) can quantify global and regional cardiac function. • HDA works on cine CMR images without the need for operator interaction. • HDA-derived cardiac motion indices are reproducible. - Abstract: Objective: To test the performance of HDA in characterizing left ventricular (LV) function and regional myocardial motion patterns in the context of cardiomyopathy based on cine cardiovascular magnetic resonance (CMR). Materials and methods: Following approval of the institutional review board (IRB), standard cine images of 45 subjects, including 15 healthy volunteers, 15 patients with hypertrophic cardiomyopathy (HCM) and 15 patients with dilated cardiomyopathy (DCM), were retrospectively analyzed using HDA. The variations of LV ejection fraction (LVEF), LV mass (LVM), and regional myocardial motion indices, including radial (Drr) and circumferential (Dcc) displacement, radial (Vrr) and circumferential (Vcc) velocity, radial (Err), circumferential (Ecc) and shear (Ess) strain, and radial (SRr) and circumferential (SRc) strain rate, were calculated and compared among subject groups. Inter-study reproducibility of HDA-derived myocardial motion indices was tested on 15 volunteers by using the intra-class correlation coefficient (ICC) and coefficient of variation (CoV). Results: HDA identified significant differences in cardiac function and motion indices between subject groups. DCM patients had significantly lower LVEF (33.5 ± 9.65%), LVM (105.88 ± 21.93 g), peak Drr (0.29 ± 0.11 cm), Vrr-sys (2.14 ± 0.72 cm/s), Err (0.17 ± 0.08), Ecc (−0.08 ± 0.03), SRr-sys (0.91 ± 0.44 s⁻¹) and SRc-sys (−0.64 ± 0.27 s⁻¹) compared to the other two groups. HCM patients demonstrated increased LVM (171.69 ± 34.19 g) and lower peak Vcc-dia (0.78 ± 0.30 cm/s) than other subjects. Good inter-study reproducibility was found for all HDA-derived myocardial indices in healthy volunteers (ICC = 0.664–0.942, CoV = 15.1%–37
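
The strain and strain-rate indices above (e.g. Ecc, SRc) are, in essence, relative length changes and their time derivatives. A minimal sketch of those definitions, not the HDA algorithm; the segment lengths are hypothetical:

```python
import numpy as np

def lagrangian_strain(length, length0):
    """Strain as relative length change, e.g. Ecc or Err: (L - L0) / L0."""
    return (length - length0) / length0

def strain_rate(strain_curve, dt_s):
    """Strain rate (1/s) as the time derivative of a strain-time curve."""
    return np.gradient(strain_curve, dt_s)

# Hypothetical segment shortening circumferentially from 60 mm to 55.2 mm
print(round(lagrangian_strain(55.2, 60.0), 2))   # -0.08
# A linearly growing strain curve has a constant strain rate
print(strain_rate(np.array([0.0, 0.1, 0.2]), 1.0))   # [0.1 0.1 0.1]
```

The negative sign for circumferential strain reflects shortening, which is why the group-mean Ecc values in the abstract are negative.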

  15. Quantification of pancreatic exocrine function with secretin-enhanced magnetic resonance cholangiopancreatography: normal values and short-term effects of pancreatic duct drainage procedures in chronic pancreatitis. Initial results

    International Nuclear Information System (INIS)

    Bali, M.A.; Sztantics, A.; Metens, T.; Matos, C.; Arvanitakis, M.; Delhaye, M.; Deviere, J.

    2005-01-01

    The aim of this study was to quantify pancreatic exocrine function in normal subjects and in patients with chronic pancreatitis (CP) before and after pancreatic duct drainage procedures (PDDP) with dynamic secretin-enhanced magnetic resonance (MR) cholangiopancreatography (S-MRCP). Pancreatic exocrine secretions [quantified by pancreatic flow output (PFO) and total excreted volume (TEV)] were quantified twice in ten healthy volunteers and before and after treatment in 20 CP patients (18 classified as severe, one as moderate, and one as mild according to the Cambridge classification). PFO and TEV were derived from a linear regression between MR-calculated volumes and time. In all subjects, pancreatic exocrine fluid volume initially increased linearly with time during secretin stimulation. In controls, the mean PFO and TEV were 6.8 ml/min and 97 ml; intra-individual deviations were 0.8 ml/min and 16 ml. In 10/20 patients with impaired exocrine secretions before treatment, a significant increase of PFO and TEV was observed after treatment (P<0.05); 3/20 patients presented post-procedural acute pancreatitis and a reduced PFO. The S-MRCP quantification method used in the present study is reproducible and provides normal values for PFO and TEV in the range of those obtained from previous published intubation studies. The initial results in CP patients have demonstrated non-invasively a significant short-term improvement of PFO and TEV after PDDP. (orig.)

  16. Quantification of heterogeneity observed in medical images

    OpenAIRE

    Brooks, Frank J; Grigsby, Perry W

    2013-01-01

    Background There has been much recent interest in the quantification of visually evident heterogeneity within functional grayscale medical images, such as those obtained via magnetic resonance or positron emission tomography. In the case of images of cancerous tumors, variations in grayscale intensity imply variations in crucial tumor biology. Despite these considerable clinical implications, there is as yet no standardized method for measuring the heterogeneity observed via these imaging mod...
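One simple candidate index for the grayscale heterogeneity discussed above is the Shannon entropy of the intensity histogram. This is only one of many possible measures and not necessarily the one the authors adopt; it is shown here as a minimal illustration:

```python
import numpy as np

def intensity_entropy(image, bins=64):
    """Shannon entropy (bits) of the grayscale intensity histogram,
    one simple heterogeneity index: 0 for a uniform image, larger
    for images whose intensities spread across many histogram bins."""
    hist, _ = np.histogram(np.asarray(image).ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 = 0)
    return -np.sum(p * np.log2(p))
```

A perfectly homogeneous region scores 0, while an image whose intensities fill all 64 bins evenly approaches log2(64) = 6 bits.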

  17. Compositional Solution Space Quantification for Probabilistic Software Analysis

    Science.gov (United States)

    Borges, Mateus; Pasareanu, Corina S.; Filieri, Antonio; d'Amorim, Marcelo; Visser, Willem

    2014-01-01

    Probabilistic software analysis aims at quantifying how likely a target event is to occur during program execution. Current approaches rely on symbolic execution to identify the conditions to reach the target event and try to quantify the fraction of the input domain satisfying these conditions. Precise quantification is usually limited to linear constraints, while only approximate solutions can be provided in general through statistical approaches. However, statistical approaches may fail to converge to an acceptable accuracy within a reasonable time. We present a compositional statistical approach for the efficient quantification of solution spaces for arbitrarily complex constraints over bounded floating-point domains. The approach leverages interval constraint propagation to improve the accuracy of the estimation by focusing the sampling on the regions of the input domain containing the sought solutions. Preliminary experiments show significant improvement on previous approaches both in results accuracy and analysis time.
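The core idea, estimating what fraction of a bounded input domain satisfies a constraint while focusing the sampling on a sub-region known to contain all the solutions, can be sketched as follows. Here the focus box is supplied by hand rather than derived by interval constraint propagation, so this is a toy stand-in for the approach described above:

```python
import random

def estimate_satisfaction(constraint, box, focus_box, n=100_000, seed=0):
    """Estimate the fraction of the bounded domain `box` satisfying
    `constraint` by sampling only inside `focus_box`, a sub-box assumed
    to contain all solutions (in the paper this box would come from
    interval constraint propagation; here it is chosen manually)."""
    rng = random.Random(seed)

    def volume(b):
        v = 1.0
        for lo, hi in b:
            v *= hi - lo
        return v

    hits = sum(
        1 for _ in range(n)
        if constraint([rng.uniform(lo, hi) for lo, hi in focus_box])
    )
    # rescale the in-box estimate to the full domain
    return (hits / n) * volume(focus_box) / volume(box)
```

Concentrating samples in the focus box reduces the variance of the estimate compared to sampling the whole domain, which is the intuition behind the paper's accuracy improvement.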

  18. Electrocardiograph-gated single photon emission computed tomography radionuclide angiography presents good interstudy reproducibility for the quantification of global systolic right ventricular function.

    Science.gov (United States)

    Daou, Doumit; Coaguila, Carlos; Vilain, Didier

    2007-05-01

    Electrocardiograph-gated single photon emission computed tomography (SPECT) radionuclide angiography provides accurate measurement of right ventricular ejection fraction and end-diastolic and end-systolic volumes. In this study, we report the interstudy precision and reliability of SPECT radionuclide angiography for the measurement of global systolic right ventricular function using two three-dimensional volume processing methods (SPECT-QBS, SPECT-35%). These were compared with equilibrium planar radionuclide angiography. Ten patients with chronic coronary artery disease having two SPECT and planar radionuclide angiography acquisitions were included. For the right ventricular ejection fraction, end-diastolic volume and end-systolic volume, the interstudy precision and reliability were better with SPECT-35% than with SPECT-QBS. The sample sizes needed to demonstrate a change in right ventricular volumes or ejection fraction were lower with SPECT-35% than with SPECT-QBS. The interstudy precision and reliability of SPECT-35% and SPECT-QBS for the right ventricle were better than those of equilibrium planar radionuclide angiography, but poorer than those previously reported for the left ventricle with SPECT radionuclide angiography in the same population. SPECT-35% and SPECT-QBS present good interstudy precision and reliability for right ventricular function, with the results favouring the use of SPECT-35%. The results are better than those of equilibrium planar radionuclide angiography, but poorer than those previously reported for the left ventricle with SPECT radionuclide angiography. They need to be confirmed in a larger population.

  19. Quantification of beta-cell function during IVGTT in Type II and non-diabetic subjects: assessment of insulin secretion by mathematical methods

    DEFF Research Database (Denmark)

    Kjems, L L; Vølund, A; Madsbad, Sten

    2001-01-01

    AIMS/HYPOTHESIS: We compared four methods to assess their accuracy in measuring insulin secretion during an intravenous glucose tolerance test in patients with Type II (non-insulin-dependent) diabetes mellitus and with varying beta-cell function and matched control subjects. METHODS: Eight control...... subjects and eight Type II diabetic patients underwent an intravenous glucose tolerance test with tolbutamide and an intravenous bolus injection of C-peptide to assess C-peptide kinetics. Insulin secretion rates were determined by the Eaton deconvolution (reference method), the Insulin SECretion method...... (ISEC) based on population kinetic parameters as well as one-compartment and two-compartment versions of the combined model of insulin and C-peptide kinetics. To allow a comparison of the accuracy of the four methods, fasting rates and amounts of insulin secreted during the first phase (0-10 min...

  20. Improving the spatial and temporal resolution with quantification of uncertainty and errors in earth observation data sets using Data Interpolating Empirical Orthogonal Functions methodology

    Science.gov (United States)

    El Serafy, Ghada; Gaytan Aguilar, Sandra; Ziemba, Alexander

    2016-04-01

    There is an increasing use of process-based models in the investigation of ecological systems and scenario predictions. The accuracy and quality of these models are improved when run with high spatial and temporal resolution data sets. However, ecological data can often be difficult to collect, which manifests itself as irregularities in the spatial and temporal domain of these data sets. Through the use of the Data INterpolating Empirical Orthogonal Functions (DINEOF) methodology, earth observation products can be improved to have full spatial coverage within the desired domain as well as increased temporal resolution to daily and weekly time steps, which are frequently required by process-based models [1]. The DINEOF methodology results in a degree of error being affixed to the refined data product. In order to determine the degree of error introduced through this process, suspended particulate matter and chlorophyll-a data from MERIS are used with DINEOF to produce high resolution products for the Wadden Sea. These new data sets are then compared with in-situ and other data sources to determine the error. Also, artificial cloud cover scenarios are conducted in order to substantiate the findings from the MERIS data experiments. Secondly, the accuracy of DINEOF is explored to evaluate the variance of the methodology. The degree of accuracy is combined with the overall error produced by the methodology and reported in an assessment of the quality of DINEOF when applied to resolution refinement of chlorophyll-a and suspended particulate matter in the Wadden Sea. References [1] Sirjacobs, D.; Alvera-Azcárate, A.; Barth, A.; Lacroix, G.; Park, Y.; Nechad, B.; Ruddick, K.G.; Beckers, J.-M. (2011). Cloud filling of ocean colour and sea surface temperature remote sensing products over the Southern North Sea by the Data Interpolating Empirical Orthogonal Functions methodology. J. Sea Res. 65(1): 114-130. dx.doi.org/10.1016/j.seares.2010.08.002
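The heart of DINEOF-style gap filling is an iterative truncated-SVD reconstruction of the space-time data matrix. The sketch below captures that idea only; the real method also cross-validates the number of retained modes, which is fixed by hand here:

```python
import numpy as np

def eof_gap_fill(data, n_modes=1, n_iter=100):
    """Minimal sketch of EOF-based gap filling in the spirit of DINEOF:
    missing values (NaN) in a space-time matrix are filled iteratively
    with a truncated-SVD reconstruction. The operational method also
    selects n_modes by cross-validation; here it is a fixed parameter."""
    X = np.array(data, float)
    missing = np.isnan(X)
    X[missing] = np.nanmean(data)          # initial guess: field mean
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        recon = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
        X[missing] = recon[missing]        # update only the gaps
    return X
```

For data dominated by a few spatial-temporal modes (as with cloud gaps in ocean-colour fields), the reconstruction at the gap locations converges toward values consistent with the leading modes.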

  1. Detection and quantification by deformation imaging of the functional impact of septal compared to free wall preexcitation in the Wolff-Parkinson-White syndrome.

    Science.gov (United States)

    De Boeck, Bart W L; Teske, Arco J; Leenders, Geert E; Mohamed Hoesein, Firdaus A A; Loh, Peter; van Driel, Vincent J; Doevendans, Pieter A; Prinzen, Frits W; Cramer, Maarten J

    2010-08-15

    Pacing experiments in healthy animal hearts have suggested a larger detrimental effect of septal compared to free wall preexcitation. We investigated the intrinsic relation among the site of electrical preexcitation, mechanical dyssynchrony, and dysfunction in human patients. In 33 patients with Wolff-Parkinson-White (WPW) syndrome and 18 controls, regional myocardial deformation was assessed by speckle tracking mapping (ST-Map) to assess the preexcitation site, shortening sequences and dyssynchrony, and the extent of local and global ejecting shortening. The ST-Map data in patients with accessory atrioventricular pathways correctly diagnosed as located in the interventricular septum (IVS) (n = 11) or left ventricular free wall (LFW) (n = 12) were compared to the corresponding control values. A local ejecting shortening of <2 SD of the control values identified hypokinetic segments. The localization of the atrioventricular pathways by ST-Map matched with the invasive electrophysiology findings in 23 of 33 patients and was one segment different in 5 of 33 patients. In both WPW-IVS and WPW-LFW, local ejecting shortening was impaired at the preexcitation site (p <0.01). However, at similar electrical and mechanical dyssynchrony, WPW-IVS had more extensive hypokinesia than did WPW-LFW (3.6 +/- 0.9 vs 1.8 +/- 1.3 segments, p <0.01). Compared to controls, the left ventricular function was significantly reduced only in WPW-IVS (global ejecting shortening 17 +/- 2% vs 19 +/- 2%, p = 0.01; ejection fraction 55 +/- 5% vs 59 +/- 3%, p = 0.02). In conclusion, preexcitation is associated with local hypokinesia, which at comparable preexcitation is more extensive in WPW-IVS than in WPW-LFW and could adversely affect ventricular function. ST-Map might have a future role in detecting and guiding treatment of septal pathways with significant mechanical effects.

  2. Quantification of uncertainties in source term estimates for a BWR with Mark I containment

    International Nuclear Information System (INIS)

    Khatib-Rahbar, M.; Cazzoli, E.; Davis, R.; Ishigami, T.; Lee, M.; Nourbakhsh, H.; Schmidt, E.; Unwin, S.

    1988-01-01

    A methodology for quantification and uncertainty analysis of source terms for severe accidents in light water reactors (QUASAR) has been developed. The objectives of the QUASAR program are (1) to develop a framework for performing an uncertainty evaluation of the input parameters of the phenomenological models used in the Source Term Code Package (STCP), and (2) to quantify the uncertainties in certain phenomenological aspects of source terms (that are not modeled by STCP) using state-of-the-art methods. The QUASAR methodology consists of (1) screening sensitivity analysis, where the most sensitive input variables are selected for detailed uncertainty analysis, (2) uncertainty analysis, where probability density functions (PDFs) are established for the parameters identified by the screening stage and propagated through the codes to obtain PDFs for the outputs (i.e., release fractions to the environment), and (3) distribution sensitivity analysis, which is performed to determine the sensitivity of the output PDFs to the input PDFs. In this paper attention is limited to a single accident progression sequence, namely a station blackout accident in a BWR with a Mark I containment building. Identified as an important accident in draft NUREG-1150, a station blackout involves loss of both off-site power and DC power, resulting in failure of the diesels to start and in the unavailability of the high pressure injection and core isolation cooling systems
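Step (2) of the methodology, propagating input PDFs through a code to obtain output PDFs, can be sketched generically. The model and distributions below are toy stand-ins, not the STCP or any QUASAR parameter:

```python
import random

def propagate_uncertainty(model, input_pdfs, n_samples=10_000, seed=1):
    """Sketch of the propagation step: draw each uncertain input from
    its assigned PDF, run the model, and collect the outputs as an
    empirical distribution. `model` is any callable standing in for
    the actual source-term code, which this sketch does not reproduce."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        inputs = {name: sample(rng) for name, sample in input_pdfs.items()}
        outputs.append(model(**inputs))
    return sorted(outputs)   # sorted sample = empirical output CDF

# toy example (hypothetical): release fraction = source * (1 - retention)
pdfs = {
    "source": lambda r: r.uniform(0.8, 1.2),
    "retention": lambda r: r.betavariate(8, 2),   # material mostly retained
}
out = propagate_uncertainty(lambda source, retention: source * (1 - retention), pdfs)
median_release = out[len(out) // 2]
```

Percentiles of the sorted output sample then characterize the output PDF, which is what the distribution sensitivity analysis in step (3) would compare across different input PDF choices.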

  3. Large signal-to-noise ratio quantification in MLE for ARARMAX models

    Science.gov (United States)

    Zou, Yiqun; Tang, Xiafei

    2014-06-01

    It has been shown that closed-loop linear system identification by the indirect method can generally be transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, gradient-related optimisation with a large enough signal-to-noise ratio (SNR) can avoid potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we build the amplitude coefficient, which is equivalent to the SNR, and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of the threshold is achieved by the minimisation of an elaborately designed multi-variable cost function which unifies all the restrictions on the amplitude coefficient. The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. Then, the algorithm is tested on a theoretical AutoRegressive Moving Average with eXogenous input model for the derivation of the threshold, and on a real gas turbine engine system for model identification, respectively. Finally, the graphical validation of the threshold on a two-dimensional plot is discussed.

  4. Input to the PRAST computer code used in the SRS probabilistic risk assessment

    International Nuclear Information System (INIS)

    Kearnaghan, D.P.

    1992-01-01

    The PRAST (Production Reactor Algorithm for Source Terms) computer code was developed by Westinghouse Savannah River Company and Science Applications International Corporation for the quantification of source terms for the Savannah River Site (SRS) Reactor Probabilistic Risk Assessment. PRAST requires as input a set of release fractions, decontamination factors, transfer fractions and source term characteristics that accurately reflect the conditions that are evaluated by PRAST. This document links the analyses which form the basis for the PRAST input parameters. In addition, it gives the distribution of the input parameters that are uncertain and considered to be important to the evaluation of the source terms to the environment

  5. Quantification of biventricular myocardial function using cardiac magnetic resonance feature tracking, endocardial border delineation and echocardiographic speckle tracking in patients with repaired tetralogy of fallot and healthy controls

    Science.gov (United States)

    2012-01-01

    Background Parameters of myocardial deformation have been suggested to be superior to conventional measures of ventricular function in patients with tetralogy of Fallot (ToF), but have required non-routine, tagged cardiovascular magnetic resonance (CMR) techniques. We assessed biventricular myocardial function using CMR cine-based feature tracking (FT) and compared it to speckle tracking echocardiography (STE) and to simple endocardial border delineation (EBD). In addition, the relation between parameters of myocardial deformation and clinical parameters was assessed. Methods Overall, 28 consecutive adult patients with repaired ToF (age 40.4 ± 13.3 years) underwent standard steady-state free precession sequence CMR, echocardiography, and cardiopulmonary exercise testing. In addition, 25 healthy subjects served as controls. Myocardial deformation was assessed by CMR based FT (TomTec Diogenes software), CMR based EBD (using custom written software) and STE (TomTec Cardiac Performance Analysis software). Results Feature tracking was feasible in all subjects. A close agreement was found between measures of global left (LV) and right ventricular (RV) global strain. Interobserver agreement for FT and STE was similar for longitudinal LV global strain, but FT showed better inter-observer reproducibility than STE for circumferential or radial LV and longitudinal RV global strain. Reproducibility of regional strain on FT was, however, poor. The relative systolic length change of the endocardial border measured by EBD yielded similar results to FT global strain. Clinically, biventricular longitudinal strain on FT was reduced compared to controls (P < 0.0001) and was related to the number of previous cardiac operations. In addition, FT derived RV strain was related to exercise capacity and VE/VCO2-slope. Conclusions Although neither the inter-study reproducibility nor accuracy of FT software were investigated, and its inter-observer reproducibility for regional

  6. Quantification of Functional Marker Genes for Denitrifying Microbial Populations in the Chandeleur Islands Impacted by the 2010 Gulf of Mexico Oil Spill

    Science.gov (United States)

    Crawford, P.; Flournoy, N.; Taylor, C.; Tatariw, C.; Mortazavi, B.; Sobecky, P.

    2017-12-01

    Barrier island ecosystems provide protection by reducing storm surges, dissipating wave energy, and economically through services such as fisheries, water catchment, and water quality. As these ecosystems are deteriorating and threatened in this century, the services they provide to humans are being valued monetarily to communicate their importance. Events such as the 2010 Gulf of Mexico oil spill act as catalysts that accelerate deterioration and further loss of these vital ecosystem services. The oil spill impacted the Chandeleur Islands, barrier islands in Louisiana waters located forty miles south of Gulfport, MS. Island chain vegetation, i.e., Avicennia germinans and native Spartina alterniflora, was heavily damaged as a result of the oil spill. As oil was deposited differentially, it was important to investigate the microbiology of oil-impacted areas, as marsh vegetation is directly linked to microbe-driven ecosystem services such as denitrification, a nitrogen (N) cycle pathway. The objectives of this study were: i) characterize the biodiversity of microorganisms; ii) quantify denitrifying microbial populations using functional marker genes; and iii) measure rates of denitrification during a one-year period. Eco-functional marker genes narG, nirS, norB, nosZ, and nrfA were selected to represent denitrification. Three different marsh sites were selected for study based upon estimated amounts of prior oiling. The highest rates of denitrification were in September, while the lowest rates were observed in February. The highest nirS abundance was detected at two of the three sites (Sites 1 and 2) in September, while Site 3 exhibited the highest abundance in November. Similarly, the highest abundances observed for norB and nosZ varied by site and by month. Weathered oil was also detected in some of the marsh sediment cores and chemically typed to Macondo oil. Studies such as this one are designed to characterize the barrier island microbial biodiversity and N cycle processes to

  7. Quantification of hand function by power grip and pinch strength force measurements in ulnar nerve lesion simulated by ulnar nerve block.

    Science.gov (United States)

    Wachter, Nikolaus Johannes; Mentzel, Martin; Krischak, Gert D; Gülke, Joachim

    2017-06-24

    In the assessment of hand and upper limb function, grip strength is of major importance. Measurement by dynamometers is well established. In this study, the effect of a simulated ulnar nerve lesion on different grip force measurements was evaluated. In 25 healthy volunteers, grip force measurement was done with the JAMAR dynamometer (Fabrication Enterprises Inc, Irvington, NY) for power grip and with a pinch strength dynamometer for tip pinch strength, tripod grip, and key pinch strength. A within-subject research design was used in this prospective study. Each subject served as the control by preinjection measurements of grip and pinch strength. Subsequent measurements after ulnar nerve block were used to examine within-subject change. In power grip, there was a significant reduction of maximum grip force of 26.9% with ulnar nerve block compared with grip force without block (P force could be confirmed. However, the assessment of other dimensions of hand strength such as tip pinch, tripod pinch and key pinch had more relevance in demonstrating hand strength changes resulting from a distal ulnar nerve lesion. The measurement of tip pinch, tripod grip and key pinch can improve the follow-up in hand rehabilitation. II. Copyright © 2017 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.

  8. Multi-objective optimization strategy based on desirability functions used for electrophoratic separation and quantification of rosiglitazone and glimepiride in plasma and formulations.

    Science.gov (United States)

    Hefnawy, Mohamed M; Sultan, Maha A; Al-Johar, Haya I; Kassem, Mohamed G; Aboul-Enein, Hassan Y

    2012-01-01

    Multiple response simultaneous optimization employing Derringer's desirability function was used for the development of a capillary electrophoresis method for the simultaneous determination of rosiglitazone (RSG) and glimepiride (GLM) in plasma and formulations. Twenty experiments, taking the two resolutions, the analysis time, and the capillary current as the responses with three important factors--buffer molarity, voltage and column temperature--were used to design mathematical models. The experimental responses were fitted into a second order polynomial and the six responses were simultaneously optimized to predict the optimum conditions for the effective separation of the studied compounds. The separation was carried out using capillary zone electrophoresis (CZE) with a silica capillary column and a diode array detector at 210 nm. The optimum assay conditions were 52 mmol l⁻¹ phosphate buffer, pH 7, and a voltage of 22 kV at 29 °C. The method showed good agreement between the experimental data and predicted values throughout the studied parameter space. The assay limit of detection was 0.02 µg ml⁻¹ and the effective working range at a relative standard deviation (RSD) of ≤ 5% was 0.05-16 µg ml⁻¹ (r = 0.999) for both drugs. Analytical recoveries of the studied drugs from spiked plasma were 97.2-101.9 ± 0.31-3.0%. The precision of the assay was satisfactory; RSD was 1.07 and 1.14 for intra- and inter-assay precision, respectively. The proposed method is of great value in the routine analysis of RSG and GLM for therapeutic monitoring and pharmacokinetic studies. Copyright © 2011 John Wiley & Sons, Ltd.
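Derringer's approach maps each response onto a desirability in [0, 1] and combines them with a geometric mean, so any fully undesirable response vetoes the overall score. A minimal sketch (the ramp shape and ranges here are generic illustrations, not the fitted models of the study):

```python
def desirability_larger_is_better(y, lo, hi, weight=1.0):
    """Derringer-type one-sided desirability: 0 at or below `lo`,
    1 at or above `hi`, and a power-law ramp in between."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return ((y - lo) / (hi - lo)) ** weight

def overall_desirability(ds):
    """Geometric mean of individual desirabilities; a single response
    with d = 0 drives the overall score to 0."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))
```

In an optimization run, the overall desirability would be evaluated on each candidate condition (buffer molarity, voltage, temperature) and the condition maximizing it selected.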

  9. TH-C-19A-09: Quantification of Transmission and Backscatter Factors as a function of Distance to Inhomogeneity Interface for Three Types of Surgical Implant Plates

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, D; Mills, M; Wang, B [University of Louisville, Louisville, KY (United States)

    2014-06-15

    Purpose: Carbon fiber materials have been increasingly used clinically, mainly in orthopedics, as an alternative to metallic implants because of their minimal artifacts on CT and MRI images. This study characterizes the transmission and backscatter properties of carbon fiber plates (CarboFix Orthopedics, Herzeliya, Israel) with measurements for radiation therapy applications, and compares them to traditional stainless steel (SS) and titanium (Ti) metal materials. Methods: For the transmission measurements, a 1-mm-thick test plate was placed upstream from a plane parallel Markus chamber, separated by various thicknesses of polystyrene plates in 0.5 cm increments between 0 and 5 cm. With this setup, we quantified the radiation transmission as a function of distance to the inhomogeneity interface. The LINAC source to detector distance was maintained at 100 cm and 200 MU was delivered for each measurement. Two 3-cm solid water phantoms were placed at the top and bottom to provide buildup. All the measurements were performed for 6 MV and 18 MV photons. The backscatter measurements had an identical setup, except that the test plate was downstream of the chamber. Results: The carbon fiber plates did not introduce any measurable inhomogeneity effect on the transmission and backscatter factors because of their low atomic number. In contrast, traditional metal implant materials caused up to a 15% dose difference upstream and 25% backscatter downstream. Such differences decrease as the distance to the inhomogeneity interface increases and become unmeasurable at distances of 3 cm and 1 cm for upstream and downstream, respectively. Conclusion: A new type of carbon fiber implant plate was evaluated and found to have a minimal inhomogeneity effect in MV radiation beams. Patients would benefit from a carbon-based implant over metal for radiation therapy due to the minimal backscatter and imaging artifacts.

  10. NeutrAvidin Functionalization of CdSe/CdS Quantum Nanorods and Quantification of Biotin Binding Sites using Biotin-4-Fluorescein Fluorescence Quenching.

    Science.gov (United States)

    Lippert, Lisa G; Hallock, Jeffrey T; Dadosh, Tali; Diroll, Benjamin T; Murray, Christopher B; Goldman, Yale E

    2016-03-16

    We developed methods to solubilize elongated semiconductor nanocrystals (quantum nanorods, QRs), coat them, and functionalize them with NeutrAvidin for use in single molecule polarized fluorescence microscopy. Three different ligands were compared with regard to efficacy for attaching NeutrAvidin using the "zero-length cross-linker" 1-ethyl-3-[3-(dimethylamino)propyl]carbodiimide (EDC). Biotin-4-fluorescein (B4F), a fluorophore that is quenched when bound to avidin proteins, was used to quantify the biotin binding activity of the NeutrAvidin-coated QRs and of commercially available streptavidin-coated quantum dots (QDs). All three coating methods produced QRs with a NeutrAvidin coating density comparable to the streptavidin coating density of the commercially available QDs in the B4F assay. One type of QD available from the supplier (ITK QDs) exhibited ∼5-fold higher streptavidin surface density compared to our QRs, whereas the other type of QD (PEG QDs) had 5-fold lower density. The number of streptavidins per QD increased from ∼7 streptavidin tetramers for the smallest QDs emitting fluorescence at 525 nm (QD525) to ∼20 tetramers for larger, longer wavelength QDs (QD655, QD705, and QD800). QRs coated with NeutrAvidin using mercaptoundecanoic acid (MUA) and QDs coated with streptavidin bound to biotinylated cytoplasmic dynein in single molecule TIRF microscopy assays, whereas poly(maleic anhydride-alt-1-octadecene) (PMAOD) or glutathione (GSH) QRs did not bind cytoplasmic dynein. The coating methods require optimization of conditions and concentrations to balance substantial NeutrAvidin binding against the tendency of QRs to aggregate and degrade over time.

  11. Response sensitivity of barrel neuron subpopulations to simulated thalamic input.

    Science.gov (United States)

    Pesavento, Michael J; Rittenhouse, Cynthia D; Pinto, David J

    2010-06-01

    Our goal is to examine the relationship between neuron- and network-level processing in the context of a well-studied cortical function, the processing of thalamic input by whisker-barrel circuits in rodent neocortex. Here we focus on neuron-level processing and investigate the responses of excitatory and inhibitory barrel neurons to simulated thalamic inputs applied using the dynamic clamp method in brain slices. Simulated inputs are modeled after real thalamic inputs recorded in vivo in response to brief whisker deflections. Our results suggest that inhibitory neurons require more input to reach firing threshold, but then fire earlier, with less variability, and respond to a broader range of inputs than do excitatory neurons. Differences in the responses of barrel neuron subtypes depend on their intrinsic membrane properties. Neurons with a low input resistance require more input to reach threshold but then fire earlier than neurons with a higher input resistance, regardless of the neuron's classification. Our results also suggest that the response properties of excitatory versus inhibitory barrel neurons are consistent with the response sensitivities of the ensemble barrel network. The short response latency of inhibitory neurons may serve to suppress ensemble barrel responses to asynchronous thalamic input. Correspondingly, whereas neurons acting as part of the barrel circuit in vivo are highly selective for temporally correlated thalamic input, excitatory barrel neurons acting alone in vitro are less so. These data suggest that network-level processing of thalamic input in barrel cortex depends on neuron-level processing of the same input by excitatory and inhibitory barrel neurons.

  12. Validation and uncertainty quantification of detector response functions for a 1″×2″ NaI collimated detector intended for inverse radioisotope source mapping applications

    Science.gov (United States)

    Nelson, N.; Azmy, Y.; Gardner, R. P.; Mattingly, J.; Smith, R.; Worrall, L. G.; Dewji, S.

    2017-11-01

    Detector response functions (DRFs) are often used for inverse analysis. We compute the DRF of a sodium iodide (NaI) nuclear material holdup field detector using the code named g03 developed by the Center for Engineering Applications of Radioisotopes (CEAR) at NC State University. Three measurement campaigns were performed in order to validate the DRF's constructed by g03: on-axis detection of calibration sources, off-axis measurements of a highly enriched uranium (HEU) disk, and on-axis measurements of the HEU disk with steel plates inserted between the source and the detector to provide attenuation. Furthermore, this work quantifies the uncertainty of the Monte Carlo simulations used in and with g03, as well as the uncertainties associated with each semi-empirical model employed in the full DRF representation. Overall, for the calibration source measurements, the response computed by the DRF for the prediction of the full-energy peak region of responses was good, i.e. within two standard deviations of the experimental response. In contrast, the DRF tended to overestimate the Compton continuum by about 45-65% due to inadequate tuning of the electron range multiplier fit variable that empirically represents physics associated with electron transport that is not modeled explicitly in g03. For the HEU disk measurements, computed DRF responses tended to significantly underestimate (more than 20%) the secondary full-energy peaks (any peak of lower energy than the highest-energy peak computed) due to scattering in the detector collimator and aluminum can, which is not included in the g03 model. We ran a sufficiently large number of histories to ensure for all of the Monte Carlo simulations that the statistical uncertainties were lower than their experimental counterpart's Poisson uncertainties. 
The uncertainties associated with least-squares fits to the experimental data tended to have parameter relative standard deviations lower than the peak channel relative standard
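    The convergence criterion described above (Monte Carlo statistical uncertainty kept below the experimental Poisson uncertainty) can be sketched as follows; the square-root-of-N error scaling and its unit prefactor are hypothetical placeholders, not g03 quantities:

```python
import math

def poisson_rel_uncertainty(counts):
    """Relative 1-sigma Poisson uncertainty of a measured count."""
    return 1.0 / math.sqrt(counts)

def histories_needed(experimental_counts, mc_scaling=1.0):
    # MC relative error is assumed to scale as mc_scaling/sqrt(N); it
    # falls below 1/sqrt(counts) once N >= mc_scaling^2 * counts.
    return math.ceil(mc_scaling ** 2 * experimental_counts)

# A peak channel with 10,000 counts carries a 1% Poisson uncertainty,
# so under this scaling at least 10,000 histories are required.
print(histories_needed(10_000))  # 10000
```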

  13. Accident sequence quantification with KIRAP

    International Nuclear Information System (INIS)

    Kim, Tae Un; Han, Sang Hoon; Kim, Kil You; Yang, Jun Eon; Jeong, Won Dae; Chang, Seung Cheol; Sung, Tae Yong; Kang, Dae Il; Park, Jin Hee; Lee, Yoon Hwan; Hwang, Mi Jeong.

    1997-01-01

    The tasks of probabilistic safety assessment (PSA) consist of the identification of initiating events, the construction of an event tree for each initiating event, the construction of fault trees for the event tree logic, the analysis of reliability data and, finally, accident sequence quantification. Within a PSA, accident sequence quantification serves to calculate the core damage frequency and to support importance and uncertainty analyses. Accident sequence quantification requires an understanding of the whole PSA model, because it has to combine all event tree and fault tree models, and it requires an efficient computer code because of the long computation times involved. The Advanced Research Group of the Korea Atomic Energy Research Institute (KAERI) has developed the PSA workstation KIRAP (Korea Integrated Reliability Analysis Code Package) for PSA work. This report describes the procedures for performing accident sequence quantification, the use of KIRAP's cut set generator, and how to perform accident sequence quantification with KIRAP. (author). 6 refs
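    The core of cut-set-based quantification can be sketched with the rare-event approximation: sequence frequency is the initiating-event frequency times the sum over minimal cut sets of the product of basic-event probabilities. The event names and numbers below are invented for illustration, and this is not KIRAP code, which adds cut set truncation, importance and uncertainty analysis on top:

```python
# Rare-event approximation for one accident sequence (illustrative only).
def sequence_frequency(cut_sets, event_prob, init_freq):
    total = 0.0
    for cut_set in cut_sets:
        p = 1.0
        for event in cut_set:
            p *= event_prob[event]  # basic events assumed independent
        total += p                  # sum over minimal cut sets
    return init_freq * total

event_prob = {"DG-A fails": 1e-2, "DG-B fails": 1e-2, "battery fails": 1e-3}
cut_sets = [("DG-A fails", "DG-B fails"), ("battery fails",)]
print(sequence_frequency(cut_sets, event_prob, init_freq=0.1))  # ~1.1e-4
```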

  14. Accident sequence quantification with KIRAP

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Un; Han, Sang Hoon; Kim, Kil You; Yang, Jun Eon; Jeong, Won Dae; Chang, Seung Cheol; Sung, Tae Yong; Kang, Dae Il; Park, Jin Hee; Lee, Yoon Hwan; Hwang, Mi Jeong

    1997-01-01

    The tasks of probabilistic safety assessment (PSA) consist of the identification of initiating events, the construction of an event tree for each initiating event, the construction of fault trees for the event tree logic, the analysis of reliability data and, finally, accident sequence quantification. Within a PSA, accident sequence quantification serves to calculate the core damage frequency and to support importance and uncertainty analyses. Accident sequence quantification requires an understanding of the whole PSA model, because it has to combine all event tree and fault tree models, and it requires an efficient computer code because of the long computation times involved. The Advanced Research Group of the Korea Atomic Energy Research Institute (KAERI) has developed the PSA workstation KIRAP (Korea Integrated Reliability Analysis Code Package) for PSA work. This report describes the procedures for performing accident sequence quantification, the use of KIRAP's cut set generator, and how to perform accident sequence quantification with KIRAP. (author). 6 refs.

  15. Inverse Problems and Uncertainty Quantification

    KAUST Repository

    Litvinenko, Alexander

    2014-01-06

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ), the propagation of uncertainty through a computational (forward) model, are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as, together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
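    The simplest instance of the conditional-expectation update described above is the scalar linear (Kalman-type) Bayesian update. The numbers below are a toy case, not the Lorenz 84 experiment:

```python
# Scalar linear Bayesian update: observe y = x + noise, update the prior.
prior_mean, prior_var = 0.0, 1.0
obs, obs_var = 1.0, 0.5

gain = prior_var / (prior_var + obs_var)      # K = C_x (C_x + C_eps)^-1
post_mean = prior_mean + gain * (obs - prior_mean)
post_var = (1.0 - gain) * prior_var

print(post_mean, post_var)  # posterior shifts toward the observation
```

The sampling-free updates in the paper generalize this linear map to nonlinear (e.g. quadratic) functions of the observation, discretized in a polynomial basis.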

  16. Inverse Problems and Uncertainty Quantification

    KAUST Repository

    Litvinenko, Alexander; Matthies, Hermann G.

    2014-01-01

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ), the propagation of uncertainty through a computational (forward) model, are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as, together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.

  17. Inverse problems and uncertainty quantification

    KAUST Repository

    Litvinenko, Alexander

    2013-12-18

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ), the propagation of uncertainty through a computational (forward) model, are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as, together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.

  18. World Input-Output Network.

    Directory of Open Access Journals (Sweden)

    Federica Cerina

    Full Text Available Production systems, traditionally analyzed as almost independent national systems, are increasingly connected on a global scale. Only recently becoming available, the World Input-Output Database (WIOD) is one of the first efforts to construct global multi-regional input-output (GMRIO) tables. By viewing the world input-output system as an interdependent network, where the nodes are the individual industries in different economies and the edges are the monetary goods flows between industries, we analyze the global, regional, and local network properties of the so-called world input-output network (WION) and document its evolution over time. At the global level, we find that the industries are highly but asymmetrically connected, which implies that micro shocks can lead to macro fluctuations. At the regional level, we find that world production is still operated nationally or at most regionally, as the communities detected are either individual economies or geographically well-defined regions. Finally, at the local level, we compare, for each industry, the network-based measures with the traditional methods of backward linkages. We find that network-based measures such as PageRank centrality and the community coreness measure can give valuable insights into identifying the key industries.
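    The network-based view can be illustrated on a tiny toy flow table (all economies, industries and flow values below are invented, not WIOD data), computing PageRank by power iteration over the out-flow-normalized edges:

```python
# Toy "world input-output network": nodes are (economy, industry) pairs,
# directed edges carry hypothetical monetary flows. Every node here has
# outgoing flow, so no dangling-node handling is needed in this sketch.
flows = {
    ("DE", "autos"):       {("DE", "steel"): 30.0, ("CN", "electronics"): 10.0},
    ("DE", "steel"):       {("CN", "electronics"): 5.0},
    ("CN", "electronics"): {("DE", "autos"): 20.0},
}
nodes = sorted({n for src, outs in flows.items() for n in (src, *outs)})

def pagerank(flows, nodes, damping=0.85, iters=200):
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for src, outs in flows.items():
            total = sum(outs.values())
            for dst, w in outs.items():
                new[dst] += damping * rank[src] * w / total
        rank = new
    return rank

ranks = pagerank(flows, nodes)
print(max(ranks, key=ranks.get))  # the most "central" industry in the toy net
```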

  19. Parameter setting and input reduction

    NARCIS (Netherlands)

    Evers, A.; van Kampen, N.J.|info:eu-repo/dai/nl/126439737

    2008-01-01

    The language acquisition procedure identifies certain properties of the target grammar before others. The evidence from the input is processed in a stepwise order. Section 1 equates that order and its typical effects with an order of parameter setting. The question is how the acquisition procedure

  20. Constituency Input into Budget Management.

    Science.gov (United States)

    Miller, Norman E.

    1995-01-01

    Presents techniques for ensuring constituency involvement in district- and site-level budget management. Outlines four models for securing constituent input and focuses on strategies to orchestrate the more complex model for staff and community participation. Two figures are included. (LMI)

  1. Remote input/output station

    CERN Multimedia

    1972-01-01

    A general view of the remote input/output station installed in building 112 (ISR) and used for submitting jobs to the CDC 6500 and 6600. The card reader on the left and the line printer on the right are operated by programmers on a self-service basis.

  2. Lithium inputs to subduction zones

    NARCIS (Netherlands)

    Bouman, C.; Elliott, T.R.; Vroon, P.Z.

    2004-01-01

    We have studied the sedimentary and basaltic inputs of lithium to subduction zones. Various sediments from DSDP and ODP drill cores in front of the Mariana, South Sandwich, Banda, East Sunda and Lesser Antilles island arcs have been analysed and show highly variable Li contents and δ

  3. Uncertainty Quantification with Applications to Engineering Problems

    DEFF Research Database (Denmark)

    Bigoni, Daniele

    in measurements, predictions and manufacturing, and we can say that any dynamical system used in engineering is subject to some of these uncertainties. The first part of this work presents an overview of the mathematical framework used in Uncertainty Quantification (UQ) analysis and introduces the spectral tensor...... and thus the UQ analysis of the associated systems will benefit greatly from the application of methods which require few function evaluations. We first consider the propagation of the uncertainty and the sensitivity analysis of the non-linear dynamics of railway vehicles with suspension components whose......-scale problems, where efficient methods are necessary with today’s computational resources. The outcome of this work was also the creation of several freely available Python modules for Uncertainty Quantification, which are listed and described in the appendix....

  4. Advances in forensic DNA quantification: a review.

    Science.gov (United States)

    Lee, Steven B; McCord, Bruce; Buel, Eric

    2014-11-01

    This review focuses upon a critical step in forensic biology: detection and quantification of human DNA from biological samples. Determination of the quantity and quality of human DNA extracted from biological evidence is important for several reasons. Firstly, depending on the source and extraction method, the quality (purity and length) and quantity of the resultant DNA extract can vary greatly. This affects the downstream method, as the quantity of input DNA and its relative length can determine which genotyping procedure to use: standard short-tandem repeat (STR) typing, mini-STR typing or mitochondrial DNA sequencing. Secondly, because it is important in forensic analysis to preserve as much of the evidence as possible for retesting, it is important to determine the total DNA amount available prior to utilizing any destructive analytical method. Lastly, results from initial quantitative and qualitative evaluations permit a more informed interpretation of downstream analytical results. Newer quantitative techniques involving real-time PCR can reveal the presence of degraded DNA and PCR inhibitors, which suggest potential reasons for poor genotyping results and may indicate which methods to use for downstream typing success. In general, the more information available, the easier it is to interpret and process the sample, resulting in a higher likelihood of successful DNA typing. The history of the development of quantitative methods has involved two main goals: improving the precision of the analysis and increasing the information content of the result. This review covers advances in forensic DNA quantification methods and recent developments in RNA quantification. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
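    The absolute quantification behind such real-time PCR assays rests on Ct being linear in the log of the input DNA amount: fit a standard curve, then invert it for the unknown. The concentrations and Ct values below are illustrative, chosen to give a slope of -3.32 Ct per decade (roughly 100% amplification efficiency); this is a generic sketch, not a method from the review:

```python
import math

# Standard-curve quantification: Ct = slope * log10(amount) + intercept.
standards = [(10.0, 30.0), (100.0, 26.68), (1000.0, 23.36)]  # (pg, Ct)

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

slope, intercept = fit_line([math.log10(c) for c, _ in standards],
                            [ct for _, ct in standards])

def quantify(ct):
    """Invert the standard curve: Ct -> DNA amount (pg)."""
    return 10 ** ((ct - intercept) / slope)

print(round(quantify(28.34), 2))  # ~31.62 pg for an unknown at Ct 28.34
```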

  5. Advancing agricultural greenhouse gas quantification*

    Science.gov (United States)

    Olander, Lydia; Wollenberg, Eva; Tubiello, Francesco; Herold, Martin

    2013-03-01

    given to methods appropriate to low-income countries, where strategies are needed for getting robust data with extremely limited resources in order to support national mitigation planning within widely accepted standards and thus provide access to essential international support, including climate funding. Managing agricultural emissions needs to occur in tandem with managing for agricultural productivity, resilience to climate change, and ecosystem impacts. Management decisions and priorities will require measures and information that identify GHG efficiencies in production and reduce inputs without reducing yields, while addressing climate resilience and maintaining other essential environmental services, such as water quality and support for pollinators. Another set of papers in this issue considers the critical synergies and tradeoffs possible between these multiple objectives of mitigation, resilience, and production efficiency to help us understand how we need to tackle these in our quantification systems. Significant capacity to quantify greenhouse gases is already built, and with some near-term strategic investment, could become an increasingly robust and useful tool for planning and development in the agricultural sector around the world. Acknowledgments The Climate Change Agriculture and Food Security Program of the Consultative Group on International Agricultural Research, the Technical Working Group on Agricultural Greenhouse Gases (T-AGG) at Duke University's Nicholas Institute for Environmental Policy Solutions, and the United Nations Food and Agriculture Organization (FAO) have come together to guide the development of this focus issue and associated activities and papers, given their common desire to improve our understanding of the state of agricultural greenhouse gas (GHG) quantification and to advance ideas for building data and methods that will help mitigation policy and programs move forward around the world. 
We thank the David and Lucile Packard

  6. PaCeQuant: A Tool for High-Throughput Quantification of Pavement Cell Shape Characteristics.

    Science.gov (United States)

    Möller, Birgit; Poeschl, Yvonne; Plötner, Romina; Bürstenbinder, Katharina

    2017-11-01

    Pavement cells (PCs) are the most frequently occurring cell type in the leaf epidermis and play important roles in leaf growth and function. In many plant species, PCs form highly complex jigsaw-puzzle-shaped cells with interlocking lobes. Understanding of their development is of high interest for plant science research because of their importance for leaf growth and hence for plant fitness and crop yield. Studies of PC development, however, are limited, because robust methods are lacking that enable automatic segmentation and quantification of PC shape parameters suitable to reflect their cellular complexity. Here, we present our new ImageJ-based tool, PaCeQuant, which provides a fully automatic image analysis workflow for PC shape quantification. PaCeQuant automatically detects cell boundaries of PCs from confocal input images and enables manual correction of automatic segmentation results or direct import of manually segmented cells. PaCeQuant simultaneously extracts 27 shape features that include global, contour-based, skeleton-based, and PC-specific object descriptors. In addition, we included a method for classification and analysis of lobes at two-cell junctions and three-cell junctions, respectively. We provide an R script for graphical visualization and statistical analysis. We validated PaCeQuant by extensive comparative analysis to manual segmentation and existing quantification tools and demonstrated its usability to analyze PC shape characteristics during development and between different genotypes. PaCeQuant thus provides a platform for robust, efficient, and reproducible quantitative analysis of PC shape characteristics that can easily be applied to study PC development in large data sets. © 2017 American Society of Plant Biologists. All Rights Reserved.
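    One of the contour-based descriptors such tools report is circularity, 4·pi·A/P², which is 1 for a circle and drops for lobed, jigsaw-puzzle-shaped cells. A shoelace-formula sketch on toy polygons (this is standalone illustration, not PaCeQuant code):

```python
import math

def circularity(poly):
    """4*pi*area/perimeter^2 for a simple polygon given as (x, y) vertices."""
    n = len(poly)
    area = abs(sum(poly[i][0] * poly[(i + 1) % n][1]
                   - poly[(i + 1) % n][0] * poly[i][1]
                   for i in range(n))) / 2.0  # shoelace formula
    perimeter = sum(math.dist(poly[i], poly[(i + 1) % n]) for i in range(n))
    return 4.0 * math.pi * area / perimeter ** 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
cross = [(1, 0), (2, 0), (2, 1), (3, 1), (3, 2), (2, 2),
         (2, 3), (1, 3), (1, 2), (0, 2), (0, 1), (1, 1)]  # plus-shaped "cell"
print(round(circularity(square), 3), round(circularity(cross), 3))  # 0.785 0.436
```

The lobed plus-shape scores markedly lower than the square, which is the kind of discrimination a shape-feature panel relies on.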

  7. A simple method for the quantification of benzodiazepine receptors using iodine-123 iomazenil and single-photon emission tomography

    International Nuclear Information System (INIS)

    Ito, Hiroshi; Goto, Ryoui; Koyama, Masamichi; Kawashima, Ryuta; Ono, Shuichi; Sato, Kazunori; Fukuda, Hiroshi

    1996-01-01

    Iodine-123 iomazenil (Iomazenil) is a ligand for central-type benzodiazepine receptors that is suitable for single-photon emission tomography (SPET). The purpose of this study was to develop a simple method for the quantification of its binding potential (BP). The method is based on a two-compartment model (K1, influx rate constant; k2', efflux rate constant; VT' (= K1/k2'), the total distribution volume relative to the total arterial tracer concentration), and requires two SPET scans and one blood sample. For a given input function, the radioactivity ratio of the early to the delayed scan can be tabulated as a function of k2', and a table lookup procedure provides the corresponding k2' value, from which the K1 and VT' values are then calculated. The arterial input function is obtained by calibrating the standard input function with the single blood sample. SPET studies were performed on 14 patients with cerebrovascular diseases, dementia or brain tumours (mean age ±SD, 56.0±12.2 years). None of the patients had any heart, renal or liver disease. A dynamic SPET scan was performed following intravenous bolus injection of Iomazenil. A static SPET scan was performed at 180 min after injection. Frequent blood sampling from the brachial artery was performed on all subjects for determination of the arterial input function. Two-compartment model analysis was validated for calculation of the VT' value of Iomazenil. Good correlations were observed between VT' values calculated by three-compartment model analysis and those calculated by the present method, in which the scan time combinations (early scan/delayed scan) were 15/180 min, 30/180 min or 45/180 min (all combinations: r=0.92), supporting the validity of this method. The present method is simple and applicable for clinical use. (orig.)
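    The table-lookup idea can be sketched numerically. For a one-tissue-compartment model, C(t) = K1 · ∫ Cp(s)·exp(-k2'(t-s)) ds, so the early/delayed activity ratio cancels K1 and depends only on k2': tabulate the ratio over a k2' grid, look up the measured ratio, then recover K1 and VT' = K1/k2'. The mono-exponential input function and all rate constants below are made-up stand-ins for the calibrated standard input function of the paper:

```python
import math

def tissue(t, k1, k2, cp=lambda s: math.exp(-0.05 * s), n=500):
    """Trapezoidal-rule convolution of a toy input function Cp."""
    dt = t / n
    acc = sum((0.5 if i in (0, n) else 1.0) * cp(i * dt)
              * math.exp(-k2 * (t - i * dt)) for i in range(n + 1))
    return k1 * acc * dt

def lookup_k2(ratio, t_early=15.0, t_late=180.0):
    """Invert the early/delayed ratio (independent of K1) via table lookup."""
    grid = [0.001 * i for i in range(1, 201)]
    return min(grid, key=lambda k2: abs(tissue(t_early, 1.0, k2)
                                        / tissue(t_late, 1.0, k2) - ratio))

true_k1, true_k2 = 0.3, 0.04                       # "unknown" ground truth
early, late = tissue(15.0, true_k1, true_k2), tissue(180.0, true_k1, true_k2)
k2_est = lookup_k2(early / late)                   # table lookup on the ratio
k1_est = late / tissue(180.0, 1.0, k2_est)         # scale from the delayed scan
print(round(k2_est, 3), round(k1_est / k2_est, 1))  # recovers k2' and VT'
```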

  8. PREP-45, Input Preparation for CITATION-2

    International Nuclear Information System (INIS)

    Ramalho Carlos, C.A.

    1995-01-01

    1 - Description of program or function: A Fortran program has been created which saves much effort in preparing sections 004 (intervals in the coordinates) and 005 (zone numbers) of the input data file for the multigroup theory code CITATION (version CITATION-2, NESC0387/09), particularly when a fine, complicated mesh is used. 2 - Method of solution: A domain is defined for CITATION calculations by specifying its sub-domains (e.g. graphite, lead, beryllium, water and fuel sub-domains) in a compact and simple way. An independent and prior geometrical specification is made of the various types of elements which are envisaged to constitute the contents of the reactor core grid positions. Then the load table for the configuration is input and scanned throughout, thus enabling the geometric mesh description to be produced (section 004). The zone placement (section 005) is also achieved by means of element description subroutines for the different types of element (which may require appropriate but simple changes in actual cases). The output of PREP-45 is obtained directly in a format which is compatible with CITATION-2 input. 3 - Restrictions on the complexity of the problem: Only rectangular two-dimensional Cartesian coordinates are considered. A maximum of 12 sub-domains in the x direction (18 in the y direction) and up to 8 distinct element types are allowed in this version. Other limitations exist which can nevertheless be overcome with simple changes in the source program.
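    The kind of mesh bookkeeping such a preprocessor automates can be sketched as expanding a compact sub-domain description (width, number of equal intervals) into mesh-line coordinates for a section-004-style interval list. This is illustration only, not actual CITATION-2 input syntax:

```python
def mesh_lines(subdomains):
    """Expand (width, n_intervals) pairs into cumulative mesh coordinates."""
    coords = [0.0]
    for width, n_intervals in subdomains:
        start, step = coords[-1], width / n_intervals
        coords += [start + step * (i + 1) for i in range(n_intervals)]
    return coords

# e.g. 10 cm of fuel in 4 intervals followed by 5 cm of graphite in 2
print(mesh_lines([(10.0, 4), (5.0, 2)]))
# [0.0, 2.5, 5.0, 7.5, 10.0, 12.5, 15.0]
```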

  9. Input measurements in reprocessing plants

    International Nuclear Information System (INIS)

    Trincherini, P.R.; Facchetti, S.

    1980-01-01

    The aim of this work is to give a review of the methods used, and the problems encountered, in measurements in the 'input accountability tanks' of irradiated fuel treatment plants. This study was prompted by the conviction that more and more precise techniques and methods should be at the service of safeguards organizations, and that ever greater efforts should be directed towards promoting knowledge of them among operators and all those whose general area of interest includes the nuclear fuel cycle. The overall intent is to show the necessity of selecting methods which produce measurements that are not only more precise but also absolutely reliable, both for routine plant operation and for safety checks in the input area. A description and a critical evaluation of the most common physical and chemical methods are provided, together with an estimate of the precision and accuracy obtained under real operating conditions.

  10. Real-Time PCR Quantification and Diversity Analysis of the Functional Genes aprA and dsrA of Sulfate-Reducing Prokaryotes in Marine Sediments of the Peru Continental Margin and the Black Sea.

    Science.gov (United States)

    Blazejak, Anna; Schippers, Axel

    2011-01-01

    Sulfate-reducing prokaryotes (SRP) are ubiquitous and quantitatively important members of many ecosystems, especially marine sediments. However, their abundance and diversity in subsurface marine sediments are poorly understood. In this study, the abundance and diversity of the functional genes for the enzymes adenosine 5'-phosphosulfate reductase (aprA) and dissimilatory sulfite reductase (dsrA) of SRP in marine sediments of the Peru continental margin and the Black Sea were analyzed, including samples from the deep biosphere (ODP site 1227). For aprA quantification a Q-PCR assay was designed and evaluated. Depth profiles of the aprA and dsrA copy numbers were almost equal for all sites. Gene copy numbers decreased concomitantly with depth, from around 10^8/g sediment close to the sediment surface to less than 10^5/g sediment at 5 mbsf. The 16S rRNA gene copy numbers of total Bacteria were much higher than those of the functional genes at all sediment depths and were used to calculate the proportion of SRP relative to total Bacteria. The aprA and dsrA copy numbers comprised on average 0.5-1% of the 16S rRNA gene copy numbers of total Bacteria in the sediments up to a depth of ca. 40 mbsf. In the zone without detectable sulfate in the pore water, from about 40-121 mbsf (Peru margin ODP site 1227), only dsrA (but not aprA) was detected, with copy numbers of less than 10^4/g sediment, comprising ca. 14% of the 16S rRNA gene copy numbers of total Bacteria. In this zone, sulfate might be provided for SRP by anaerobic sulfide oxidation. Clone libraries of aprA showed that all isolated sequences originate from SRP, either showing a close relationship to the aprA of characterized species or forming a new cluster with only distant relation to the aprA of isolated SRP. For dsrA, a high diversity was detected, even up to 121 m sediment depth in the deep biosphere.
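    The proportions quoted above are ratios of functional-gene copy numbers to total bacterial 16S rRNA gene copies; a back-of-envelope check, using illustrative round copy numbers per gram of sediment rather than the study's measured values:

```python
def srp_percentage(functional_copies, rrna_copies):
    """Functional-gene copies as a percentage of 16S rRNA gene copies."""
    return 100.0 * functional_copies / rrna_copies

near_surface = srp_percentage(1e6, 1e8)      # aprA or dsrA vs 16S, shallow
deep_no_sulfate = srp_percentage(7e3, 5e4)   # dsrA only, below ~40 mbsf
print(near_surface, round(deep_no_sulfate, 1))  # 1.0 14.0
```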

  11. Verb aspect, alternations and quantification

    Directory of Open Access Journals (Sweden)

    Svetla Koeva

    2015-11-01

    Full Text Available In this paper we briefly discuss the nature of Bulgarian verb aspect and argue that verb aspect pairs are different lexical units with different (although related) meanings, different argument structures (reflecting the categories, explicitness and referential status of arguments) and different sets of semantic and syntactic alternations. The verb prefixes that derive perfective verbs can in some cases also be interpreted as lexical quantifiers. Thus Bulgarian verb aspect is related (in different ways) both to the potential for the generation of alternations and to prefixal lexical quantification. It is shown that the scope of lexical quantification by means of verbal prefixes is the quantified verb phrase, and that this scope remains constant in all derived alternations. The paper addresses the basic issues of these complex problems, while a detailed description of the conditions satisfying a particular alternation or a particular lexical quantification is the subject of a more detailed study.

  12. Quantification of informed opinion

    International Nuclear Information System (INIS)

    Rasmuson, D.M.

    1985-01-01

    The objective of this session, Quantification of Informed Opinion, is to provide the statistician with a better understanding of this important area. The NRC uses informed opinion, sometimes called engineering judgment or subjective judgment, in many areas. Sometimes informed opinion is the only source of information that exists, especially in phenomenological areas, such as steam explosions, where experiments are costly and phenomena are very difficult to measure. There are many degrees of informed opinion. These vary from the weatherman who makes predictions concerning relatively high probability events with a large data base to the phenomenological expert who must use his intuition tempered with basic knowledge and little or no measured data to predict the behavior of events with a low probability of occurrence. The first paper in this session provides the reader with an overview of the subject area. The second paper provides some aspects that must be considered in the collection of informed opinion to improve the quality of the information. The final paper contains an example of the use of informed opinion in the area of seismic hazard characterization. These papers should be useful to researchers and statisticians who need to collect and use informed opinion in their work

  13. Quantification In Neurology

    Directory of Open Access Journals (Sweden)

    Netravati M

    2005-01-01

    Full Text Available There has been a distinct shift of emphasis in clinical neurology in the last few decades. A few years ago, it was sufficient for a clinician to precisely record the history, document signs, establish a diagnosis and write a prescription. In the present context, there has been a significant intrusion of scientific culture into clinical practice. Several criteria have been proposed, refined and redefined to ascertain accurate diagnoses for many neurological disorders. Introduction of the concepts of impairment, disability, handicap and quality of life has added a new dimension to the measurement of health and disease, and neurological disorders are no exception. "Best guess" treatment modalities are no longer accepted, and evidence-based medicine has become an integral component of medical care. Traditional treatments need validation and new therapies require rigorous trials. Thus, proper quantification in neurology has become essential, both in practice and in research methodology. While this aspect is widely acknowledged, there is limited access to a comprehensive document pertaining to measurements in neurology. The following description is a critical appraisal of various measurements and also presents certain commonly used rating scales/scores in neurological practice.

  14. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media
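    The polynomial chaos representation mentioned above can be shown in its simplest scalar form, a classical textbook case unrelated to the paper's data: expanding a lognormal variable in probabilists' Hermite polynomials, whose coefficients are known in closed form.

```python
import math

# Hermite polynomial-chaos sketch: Y = exp(X) with X ~ N(0,1) has the
# expansion Y = sum_k y_k He_k(X) with y_k = e^{1/2}/k! (closed form).
# By orthogonality, E[Y] = y_0 and Var[Y] = sum_{k>=1} k! * y_k^2,
# which converges to e^2 - e as the truncation order grows.
order = 10
coeffs = [math.exp(0.5) / math.factorial(k) for k in range(order + 1)]

mean = coeffs[0]
var = sum(math.factorial(k) * c * c for k, c in enumerate(coeffs) if k >= 1)

print(round(mean, 4), round(var, 4))  # 1.6487 4.6708
```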

  15. PERSPECTIVES ON DOE CONSEQUENCE INPUTS FOR ACCIDENT ANALYSIS APPLICATIONS

    International Nuclear Information System (INIS)

    O'Kula, K.R.; Thoman, D.C.; Lowrie, J.; Keller, A.

    2008-01-01

    Department of Energy (DOE) accident analysis for establishing the required control sets for nuclear facility safety applies a series of simplifying, reasonably conservative assumptions regarding inputs and methodologies for quantifying dose consequences. Most of the analytical practices are conservative, have a technical basis, and are based on regulatory precedent. However, others are judgmental and based on older understanding of phenomenology. The latter type of practices can be found in modeling hypothetical releases into the atmosphere and the subsequent exposure. Often the judgments applied are not based on current technical understanding but on work that has been superseded. The objective of this paper is to review the technical basis for the major inputs and assumptions in the quantification of consequence estimates supporting DOE accident analysis, and to identify those that could be reassessed in light of current understanding of atmospheric dispersion and radiological exposure. Inputs and assumptions of interest include: Meteorological data basis; Breathing rate; and Inhalation dose conversion factor. A simple dose calculation is provided to show the relative difference achieved by improving the technical bases
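    The structure of the "simple dose calculation" referred to above is a product of a source term, an atmospheric dispersion factor, a breathing rate, and an inhalation dose conversion factor. All numbers below are illustrative placeholders, not DOE-endorsed values, but the sketch shows how much the breathing-rate assumption alone moves the result:

```python
# inhalation dose (Sv) = source (Bq) * chi/Q (s/m^3) * breathing rate
# (m^3/s) * dose conversion factor (Sv/Bq); all inputs are hypothetical.
def inhalation_dose_sv(source_bq, chi_over_q, breathing_rate, dcf):
    return source_bq * chi_over_q * breathing_rate * dcf

args = dict(source_bq=1e12, chi_over_q=1e-4, dcf=5e-8)
light = inhalation_dose_sv(breathing_rate=3.33e-4, **args)   # light activity
heavy = inhalation_dose_sv(breathing_rate=5.7e-4, **args)    # heavy activity
print(f"{light * 1e3:.2f} mSv vs {heavy * 1e3:.2f} mSv")
# the breathing-rate choice alone shifts the estimate by ~70%
```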

  16. Smart mobility solution with multiple input Output interface.

    Science.gov (United States)

    Sethi, Aartika; Deb, Sujay; Ranjan, Prabhat; Sardar, Arghya

    2017-07-01

    Smart wheelchairs are commonly used to provide a solution for mobility impairment. However, their usage is limited, primarily due to the high cost of the sensors required for giving input, a lack of adaptability to different categories of input, and limited functionality. In this paper we propose a smart mobility solution using a smartphone with inbuilt sensors (accelerometer, camera and speaker) as an input interface. An Emotiv EPOC+ is also used for motor-imagery-based input control, synced with facial expressions, in cases of extreme disability. Apart from traction, additional functions like home security and automation are provided using the Internet of Things (IoT) and web interfaces. Although preliminary, our results suggest that this system can be used as an integrated and efficient solution for people suffering from mobility impairment. The results also indicate that decent accuracy is obtained for the overall system.
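    A smartphone-accelerometer input interface of the kind described can be sketched as a tilt-to-command mapping; the axis conventions and threshold below are invented for illustration, not taken from the paper:

```python
# Hypothetical mapping from phone tilt (gravity components on the x and
# y axes, in m/s^2) to discrete wheelchair motion commands.
def tilt_to_command(ax, ay, threshold=3.0):
    if abs(ax) < threshold and abs(ay) < threshold:
        return "stop"                        # phone held roughly level
    if abs(ay) >= abs(ax):                   # dominant axis wins
        return "forward" if ay > 0 else "reverse"
    return "right" if ax > 0 else "left"

print(tilt_to_command(0.5, 6.2))   # forward
print(tilt_to_command(-4.8, 1.0))  # left
print(tilt_to_command(0.1, -0.2))  # stop
```

A real controller would debounce these commands and scale speed with tilt magnitude rather than emit discrete states.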

  17. Jointness through vessel capacity input in a multispecies fishery

    DEFF Research Database (Denmark)

    Hansen, Lars Gårn; Jensen, Carsten Lynge

    2014-01-01

    capacity. We develop a fixed but allocatable input model of purse seine fisheries capturing this particular type of jointness. We estimate the model for the Norwegian purse seine fishery and find that it is characterized by nonjointness, while estimations for this fishery using the standard models imply...... are typically modeled as either independent single species fisheries or using standard multispecies functional forms characterized by jointness in inputs. We argue that production of each species is essentially independent but that jointness may be caused by competition for fixed but allocable input of vessel...

  18. IFF, Full-Screen Input Menu Generator for FORTRAN Program

    International Nuclear Information System (INIS)

    Seidl, Albert

    1991-01-01

    1 - Description of program or function: The IFF-package contains input modules for use within FORTRAN programs. This package enables the programmer to easily include interactive menu-directed data input (module VTMEN1) and command-word processing (module INPCOM) into a FORTRAN program. 2 - Method of solution: No mathematical operations are performed. 3 - Restrictions on the complexity of the problem: Certain restrictions of use may arise from the dimensioning of arrays. Field lengths are defined via PARAMETER-statements

  19. Quantification of cellular uptake of DNA nanostructures by qPCR

    DEFF Research Database (Denmark)

    Okholm, Anders Hauge; Nielsen, Jesper Sejrup; Vinther, Mathias

    2014-01-01

    interactions and structural and functional features of the DNA delivery device must be thoroughly investigated. Here, we present a rapid and robust method for the precise quantification of the component materials of DNA origami structures capable of entering cells in vitro. The quantification is performed...

  20. Framework for Modelling Multiple Input Complex Aggregations for Interactive Installations

    DEFF Research Database (Denmark)

    Padfield, Nicolas; Andreasen, Troels

    2012-01-01

    on fuzzy logic and provides a method for variably balancing interaction and user input with the intention of the artist or director. An experimental design is presented, demonstrating an intuitive interface for parametric modelling of a complex aggregation function. The aggregation function unifies...

  1. An overview of uncertainty quantification techniques with application to oceanic and oil-spill simulations

    KAUST Repository

    Iskandarani, Mohamed; Wang, Shitao; Srinivasan, Ashwanth; Carlisle Thacker, W.; Winokur, Justin; Knio, Omar

    2016-01-01

    We give an overview of four different ensemble-based techniques for uncertainty quantification and illustrate their application in the context of oil plume simulations. These techniques share the common paradigm of constructing a model proxy that efficiently captures the functional dependence of the model output on uncertain model inputs. This proxy is then used to explore the space of uncertain inputs using a large number of samples, so that reliable estimates of the model's output statistics can be calculated. Three of these techniques use polynomial chaos (PC) expansions to construct the model proxy, but they differ in their approach to determining the expansions' coefficients; the fourth technique uses Gaussian Process Regression (GPR). An integral plume model for simulating the Deepwater Horizon oil-gas blowout provides examples for illustrating the different techniques. A Monte Carlo ensemble of 50,000 model simulations is used for gauging the performance of the different proxies. The examples illustrate how regression-based techniques can outperform projection-based techniques when the model output is noisy. They also demonstrate that robust uncertainty analysis can be performed at a fraction of the cost of the Monte Carlo calculation.
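The proxy-construction paradigm described in this abstract can be sketched in a few lines: fit a polynomial chaos expansion to a small, noisy training ensemble by least-squares regression (the regression approach noted above to outperform projection for noisy outputs), then query the cheap proxy with many samples. The one-dimensional forward model, polynomial degree, and ensemble sizes below are illustrative stand-ins, not the plume model of the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward model: expensive in reality, cheap here for illustration.
def model(x):
    return np.exp(0.5 * x) + 0.05 * rng.standard_normal(x.shape)  # noisy output

# Small training ensemble: samples of the uncertain input on [-1, 1].
x_train = rng.uniform(-1.0, 1.0, 40)
y_train = model(x_train)

# Regression-based PC: fit Legendre coefficients by least squares.
degree = 4
V = np.polynomial.legendre.legvander(x_train, degree)
coeffs, *_ = np.linalg.lstsq(V, y_train, rcond=None)

# Cheap proxy evaluated on a large sample to estimate output statistics.
x_big = rng.uniform(-1.0, 1.0, 50_000)
y_proxy = np.polynomial.legendre.legval(x_big, coeffs)
print(y_proxy.mean(), y_proxy.std())
```

Here the 40-run training ensemble plays the role of the model runs, while the 50,000 output statistics come from evaluating the polynomial rather than the model, which is the cost saving the abstract refers to.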

  2. An overview of uncertainty quantification techniques with application to oceanic and oil-spill simulations

    KAUST Repository

    Iskandarani, Mohamed

    2016-04-22

    We give an overview of four different ensemble-based techniques for uncertainty quantification and illustrate their application in the context of oil plume simulations. These techniques share the common paradigm of constructing a model proxy that efficiently captures the functional dependence of the model output on uncertain model inputs. This proxy is then used to explore the space of uncertain inputs using a large number of samples, so that reliable estimates of the model's output statistics can be calculated. Three of these techniques use polynomial chaos (PC) expansions to construct the model proxy, but they differ in their approach to determining the expansions' coefficients; the fourth technique uses Gaussian Process Regression (GPR). An integral plume model for simulating the Deepwater Horizon oil-gas blowout provides examples for illustrating the different techniques. A Monte Carlo ensemble of 50,000 model simulations is used for gauging the performance of the different proxies. The examples illustrate how regression-based techniques can outperform projection-based techniques when the model output is noisy. They also demonstrate that robust uncertainty analysis can be performed at a fraction of the cost of the Monte Carlo calculation.

  3. Influential input parameters for reflood model of MARS code

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Deog Yeon; Bang, Young Seok [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2012-10-15

    Best Estimate (BE) calculation has been more broadly used in nuclear industries and regulations to reduce the significant conservatism in evaluating Loss of Coolant Accidents (LOCA). The reflood model has been identified as one of the problems in BE calculation. The objective of the Post BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) program of OECD/NEA is to make progress on the quantification of the uncertainty of the physical models in system thermal-hydraulic codes, by considering experimental results, especially for reflood. It is important to establish a methodology to identify and select the parameters influential on the response of reflood phenomena following a Large Break LOCA. To this end, a reference calculation and a sensitivity analysis are performed to select the dominant influential parameters for the FEBA experiment.

  4. Sound effects: Multimodal input helps infants find displaced objects.

    Science.gov (United States)

    Shinskey, Jeanne L

    2017-09-01

    sensitive to bimodal input as multisensory functions develop across the first year. © 2016 The British Psychological Society.

  5. Uncertainty Quantification given Discontinuous Climate Model Response and a Limited Number of Model Runs

    Science.gov (United States)

    Sargsyan, K.; Safta, C.; Debusschere, B.; Najm, H.

    2010-12-01

    Uncertainty quantification in complex climate models is challenged by the sparsity of available climate model predictions due to the high computational cost of model runs. Another feature that prevents classical uncertainty analysis from being readily applicable is bifurcative behavior in climate model response with respect to certain input parameters. A typical example is the Atlantic Meridional Overturning Circulation. The predicted maximum overturning stream function exhibits discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We outline a methodology for uncertainty quantification given discontinuous model response and a limited number of model runs. Our approach is two-fold. First we detect the discontinuity with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve shape and location for arbitrarily distributed input parameter values. Then, we construct spectral representations of uncertainty, using Polynomial Chaos (PC) expansions on either side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification. The approach is enabled by a Rosenblatt transformation that maps each side of the discontinuity to regular domains where desirable orthogonality properties for the spectral bases hold. We obtain PC modes by either orthogonal projection or Bayesian inference, and argue for a hybrid approach that targets a balance between the accuracy provided by the orthogonal projection and the flexibility provided by the Bayesian inference - where the latter allows obtaining reasonable expansions without extra forward model runs. The model output, and its associated uncertainty at specific design points, are then computed by taking an ensemble average over PC expansions corresponding to possible realizations of the discontinuity curve. The methodology is tested on synthetic examples of
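A crude, non-Bayesian stand-in for the discontinuity-detection step can illustrate the idea in one dimension: scan candidate split points and fit separate polynomials on each side, keeping the split that minimizes the residual (the study instead infers the discontinuity curve probabilistically in a two-dimensional parameter space; the data below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic model response with a jump at x0 (stand-in for a bifurcative response).
x0 = 0.3
x = rng.uniform(-1, 1, 200)
y = np.where(x < x0, np.sin(x), 2.0 + 0.5 * x) + 0.02 * rng.standard_normal(x.size)

def split_fit_rss(c):
    """Residual sum of squares of separate cubic fits on either side of c."""
    rss = 0.0
    for mask in (x < c, x >= c):
        if mask.sum() > 4:
            coef = np.polyfit(x[mask], y[mask], 3)
            rss += ((y[mask] - np.polyval(coef, x[mask])) ** 2).sum()
        else:
            return np.inf   # too few points on one side
    return rss

cands = np.linspace(-0.9, 0.9, 181)
c_best = cands[np.argmin([split_fit_rss(c) for c in cands])]
print(c_best)
```

Once the discontinuity location is known, a separate smooth surrogate (here a cubic, in the study a PC expansion) can be fitted on each side, which is exactly what makes the piecewise spectral representation well behaved.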

  6. Chemical sensors are hybrid-input memristors

    Science.gov (United States)

    Sysoev, V. I.; Arkhipov, V. E.; Okotrub, A. V.; Pershin, Y. V.

    2018-04-01

    Memristors are two-terminal electronic devices whose resistance depends on the history of the input signal (voltage or current). Here we demonstrate that chemical gas sensors can be considered as memristors with a generalized (hybrid) input, namely, an input consisting of the voltage, analyte concentrations and applied temperature. The concept of hybrid-input memristors is demonstrated experimentally using a single-walled carbon nanotube chemical sensor. It is shown that, with respect to the hybrid input, the sensor exhibits some features common with memristors, such as hysteretic input-output characteristics. This different perspective on chemical gas sensors may open new possibilities for smart sensor applications.

  7. FED, Geometry Input Generator for Program TRUMP

    International Nuclear Information System (INIS)

    Schauer, D.A.; Elrod, D.C.

    1996-01-01

    1 - Description of program or function: FED reduces the effort required to obtain the necessary geometric input for problems which are to be solved using the heat-transfer code, TRUMP (NESC 771). TRUMP calculates transient and steady-state temperature distributions in multidimensional systems. FED can properly zone any body of revolution in one or three dimensions. 2 - Method of solution: The region of interest must first be divided into areas which may consist of a common material. The boundaries of these areas are the required FED input. Each area is subdivided into volume nodes, and the geometrical properties are calculated. Finally, FED connects the adjacent nodes to one another, using the proper surface area, interface distance, and, if specified, radiation form factor and interface conductance. 3 - Restrictions on the complexity of the problem: Rectangular bodies can only be approximated by using a very large radius of revolution compared to the total radial thickness and by considering only a small angular segment in the circumferential direction

  8. Pseudo-BINPUT, a free formal input package for Fortran programmes

    International Nuclear Information System (INIS)

    Gubbins, M.E.

    1977-11-01

    Pseudo-BINPUT is an input package for reading free-format data under codeword control in a FORTRAN programme. To a large degree it mimics in function the Winfrith Subroutine Library routine BINPUT. By using calls of the data input package DECIN to mimic the input routine BINPUT, Pseudo-BINPUT combines some of the advantages of both systems. (U.K.)

  9. Input Shaping to Reduce Solar Array Structural Vibrations

    Science.gov (United States)

    Doherty, Michael J.; Tolson, Robert J.

    1998-01-01

    Structural vibrations induced by actuators can be minimized using input shaping. Input shaping is a feedforward method in which actuator commands are convolved with shaping functions to yield a shaped set of commands. These commands are designed to perform the maneuver while minimizing the residual structural vibration. In this report, input shaping is extended to stepper motor actuators. As a demonstration, an input-shaping technique based on pole-zero cancellation was used to modify the Solar Array Drive Assembly (SADA) actuator commands for the Lewis satellite. A series of impulses were calculated as the ideal SADA output for vibration control. These impulses were then discretized for use by the SADA stepper motor actuator and simulated actuator outputs were used to calculate the structural response. The effectiveness of input shaping is limited by the accuracy of the knowledge of the modal frequencies. Assuming perfect knowledge resulted in significant vibration reduction. Errors of 10% in the modal frequencies caused notably higher levels of vibration. Controller robustness was improved by incorporating additional zeros in the shaping function. The additional zeros did not require increased performance from the actuator. Despite the identification errors, the resulting feedforward controller reduced residual vibrations to the level of the exactly modeled input shaper and well below the baseline cases. These results could be easily applied to many other vibration-sensitive applications involving stepper motor actuators.
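The convolution idea can be illustrated with the classic two-impulse zero-vibration (ZV) shaper; the flexible-mode frequency, damping, step command, and simulation below are hypothetical stand-ins, not the SADA values:

```python
import numpy as np

# Hypothetical flexible mode (values chosen for illustration).
omega = 2 * np.pi * 1.0      # 1 Hz natural frequency, rad/s
zeta = 0.05                  # damping ratio
dt = 0.001
wd = omega * np.sqrt(1 - zeta**2)   # damped frequency

# Two-impulse zero-vibration (ZV) shaper.
K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
amps = np.array([1.0, K]) / (1.0 + K)
t2 = np.pi / wd                      # second impulse at half the damped period
shaper = np.zeros(int(round(t2 / dt)) + 1)
shaper[0], shaper[-1] = amps

def simulate(u):
    """Semi-implicit Euler for x'' + 2*zeta*omega*x' + omega**2*x = omega**2*u."""
    x = v = 0.0
    for uk in u:
        a = omega**2 * (uk - x) - 2 * zeta * omega * v
        v += a * dt
        x += v * dt
    return x, v

n = int(5.0 / dt)
step = np.ones(n)
shaped = np.convolve(step, shaper)[:n]   # shaped command, same length

x_r, v_r = simulate(step)
x_s, v_s = simulate(shaped)
# Residual vibration amplitude about the final setpoint (u -> 1, x -> 1).
res_raw = np.hypot(x_r - 1.0, v_r / wd)
res_shaped = np.hypot(x_s - 1.0, v_s / wd)
print(res_shaped, res_raw)
```

Adding a repeated impulse (a ZVD shaper) would place the extra zero the abstract mentions, trading a longer shaper for robustness to errors in the modal frequency.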

  10. Repositioning Recitation Input in College English Teaching

    Science.gov (United States)

    Xu, Qing

    2009-01-01

    This paper tries to discuss how recitation input helps overcome the negative influences on the basis of second language acquisition theory and confirms the important role that recitation input plays in improving college students' oral and written English.

  11. Stochastic weather inputs for improved urban water demand forecasting: application of nonlinear input variable selection and machine learning methods

    Science.gov (United States)

    Quilty, J.; Adamowski, J. F.

    2015-12-01

    Urban water supply systems are often stressed during seasonal outdoor water use, as climate-related water demands are variable in nature, making it difficult to optimize the operation of the water supply system. Urban water demand (UWD) forecasts failing to include meteorological conditions as inputs to the forecast model may produce poor forecasts, as they cannot account for the increase or decrease in demand related to meteorological conditions. Meteorological records stochastically simulated into the future can be used as inputs to data-driven UWD forecasts, generally resulting in improved forecast accuracy. This study aims to produce data-driven UWD forecasts for two different Canadian water utilities (Montreal and Victoria) using machine learning methods, by first selecting historical UWD and meteorological records derived from a stochastic weather generator using nonlinear input variable selection. The nonlinear input variable selection methods considered in this work are derived from the concept of conditional mutual information, a nonlinear dependency measure based on (multivariate) probability density functions that accounts for relevancy, conditional relevancy, and redundancy within a potential set of input variables. The results of our study indicate that stochastic weather inputs can improve UWD forecast accuracy for the two sites considered in this work. Nonlinear input variable selection is suggested as a means to identify which meteorological conditions should be utilized in the forecast.
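As a toy version of the input-selection step, the dependence between a candidate weather input and demand can be scored with a histogram estimate of plain mutual information (the study uses the richer conditional variant; the data and variable names here are synthetic):

```python
import numpy as np

def mutual_info(x, y, bins=16):
    """Histogram estimate of mutual information I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
n = 20_000
temp = rng.normal(size=n)                         # relevant weather input
wind = rng.normal(size=n)                         # irrelevant candidate
demand = np.sin(temp) + 0.3 * rng.normal(size=n)  # nonlinear dependence

mi_temp = mutual_info(temp, demand)
mi_wind = mutual_info(wind, demand)
print(mi_temp, mi_wind)
```

Ranking candidates by such a score (and, in the conditional variant, penalizing redundancy with already-selected inputs) is what picks out which meteorological series enter the forecast model.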

  12. Textual Enhancement of Input: Issues and Possibilities

    Science.gov (United States)

    Han, ZhaoHong; Park, Eun Sung; Combs, Charles

    2008-01-01

    The input enhancement hypothesis proposed by Sharwood Smith (1991, 1993) has stimulated considerable research over the last 15 years. This article reviews the research on textual enhancement of input (TE), an area where the majority of input enhancement studies have aggregated. Methodological idiosyncrasies are the norm of this body of research.…

  13. 7 CFR 3430.607 - Stakeholder input.

    Science.gov (United States)

    2010-01-01

    7 CFR 3430.607 (2010-01-01) — Stakeholder input. Agriculture Regulations of the Department of Agriculture (Continued), COOPERATIVE STATE RESEARCH, EDUCATION... § 3430.607 Stakeholder input. CSREES shall seek and obtain stakeholder input through a variety of forums...

  14. 7 CFR 3430.15 - Stakeholder input.

    Science.gov (United States)

    2010-01-01

    7 CFR 3430.15 (2010-01-01) — Stakeholder input. Section 103(c)(2) of the Agricultural Research, Extension, and Education Reform Act of 1998... RFAs for competitive programs. CSREES will provide instructions for submission of stakeholder input in...

  15. Superlattice band structure: New and simple energy quantification condition

    Energy Technology Data Exchange (ETDEWEB)

    Maiz, F., E-mail: fethimaiz@gmail.com [University of Cartage, Nabeul Engineering Preparatory Institute, Merazka, 8000 Nabeul (Tunisia); King Khalid University, Faculty of Science, Physics Department, P.O. Box 9004, Abha 61413 (Saudi Arabia)

    2014-10-01

    Assuming an approximated effective mass and using Bastard's boundary conditions, a simple method is used to calculate the subband structure of periodic semiconducting heterostructures. Our method consists of deriving and solving the energy quantification condition (EQC), a simple real equation composed of trigonometric and hyperbolic functions that requires no programming effort or sophisticated machine to solve. For heterostructures with fewer than ten wells, we have derived and simplified the energy quantification conditions. The subband is built point by point; each point represents an energy level. Our simple energy quantification condition is used to calculate the subband structure of GaAs/Ga{sub 0.5}Al{sub 0.5}As heterostructures and to build its subband point by point for 4 and 20 wells. Our findings show good agreement with previously published results.
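The paper's exact EQC is derived under Bastard's boundary conditions; as a generic stand-in with the same trigonometric-plus-hyperbolic structure, the textbook Kronig-Penney condition for a periodic well/barrier stack can be scanned numerically (all parameters below are illustrative, in units where hbar^2/2m = 1):

```python
import numpy as np

# Illustrative well width, barrier width, and barrier height.
a, b, V = 1.0, 0.2, 50.0

def eqc(E):
    """f(E); energies with |f(E)| <= 1 belong to an allowed miniband."""
    alpha, beta = np.sqrt(E), np.sqrt(V - E)   # wave numbers in well and barrier
    return (np.cos(alpha * a) * np.cosh(beta * b)
            + (beta**2 - alpha**2) / (2 * alpha * beta)
            * np.sin(alpha * a) * np.sinh(beta * b))

E = np.linspace(0.5, V - 0.5, 5000)   # energies below the barrier
allowed = np.abs(eqc(E)) <= 1.0

# Band edges appear where the allowed mask switches on or off.
edges = E[np.flatnonzero(np.diff(allowed.astype(int))) + 1]
print(allowed.sum(), edges[:4])
```

Sweeping the energy and marking where the condition admits a real superlattice wave vector builds the miniband structure point by point, in the same spirit as the abstract describes.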

  16. Stereo-particle image velocimetry uncertainty quantification

    International Nuclear Information System (INIS)

    Bhattacharya, Sayantan; Vlachos, Pavlos P; Charonko, John J

    2017-01-01

    Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources, and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation, and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from the 2014 PIV challenge. A thorough sensitivity analysis was performed to assess the relative impact of the various parameters on the overall uncertainty. The results suggest that in the absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall, the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric

  17. Autoregressive moving average (ARMA) model applied to quantification of cerebral blood flow using dynamic susceptibility contrast-enhanced magnetic resonance imaging

    International Nuclear Information System (INIS)

    Murase, Kenya; Yamazaki, Youichi; Shinohara, Masaaki

    2003-01-01

    The purpose of this study was to investigate the feasibility of the autoregressive moving average (ARMA) model for quantification of cerebral blood flow (CBF) with dynamic susceptibility contrast-enhanced magnetic resonance imaging (DSC-MRI) in comparison with deconvolution analysis based on singular value decomposition (DA-SVD). Using computer simulations, we generated a time-dependent concentration of the contrast agent in the volume of interest (VOI) from the arterial input function (AIF) modeled as a gamma-variate function under various CBFs, cerebral blood volumes and signal-to-noise ratios (SNRs) for three different types of residue function (exponential, triangular, and box-shaped). We also considered the effects of delay and dispersion in AIF. The ARMA model and DA-SVD were used to estimate CBF values from the simulated concentration-time curves in the VOI and AIFs, and the estimated values were compared with the assumed values. We found that the CBF value estimated by the ARMA model was more sensitive to the SNR and the delay in AIF than that obtained by DA-SVD. Although the ARMA model considerably overestimated CBF at low SNRs, it estimated the CBF more accurately than did DA-SVD at high SNRs for the exponential or triangular residue function. We believe this study will contribute to an understanding of the usefulness and limitations of the ARMA model when applied to quantification of CBF with DSC-MRI. (author)
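A minimal sketch of the DA-SVD baseline that the ARMA model is compared against: simulate a gamma-variate AIF, convolve it with an exponential residue function, and deconvolve by truncated SVD (the AIF parameters, noise level, and truncation threshold below are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 1.0
t = np.arange(0, 60, dt)

# Gamma-variate arterial input function (illustrative parameters).
aif = np.where(t > 5, np.maximum(t - 5, 0) ** 2.5 * np.exp(-(t - 5) / 2.0), 0.0)
aif /= aif.max()

# Assumed exponential residue function R(t) = exp(-t/MTT), scaled by CBF.
cbf_true, mtt = 0.6, 4.0
residue = np.exp(-t / mtt)
tissue = dt * np.convolve(aif, cbf_true * residue)[:t.size]
tissue += 0.01 * rng.standard_normal(t.size)          # measurement noise

# Lower-triangular convolution matrix, inverted via truncated SVD.
idx = np.subtract.outer(np.arange(t.size), np.arange(t.size))
A = dt * np.where(idx >= 0, aif[idx], 0.0)
U, s, Vt = np.linalg.svd(A)
s_inv = np.where(s > 0.10 * s.max(), 1.0 / s, 0.0)    # illustrative threshold
k = Vt.T @ (s_inv * (U.T @ tissue))                   # estimated CBF * R(t)
cbf_est = k.max()
print(cbf_est)
```

The truncation threshold is the regularization knob: too low and noise amplifies, too high and the recovered residue is oversmoothed so CBF is underestimated, which is the kind of sensitivity the comparison in the abstract probes.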

  18. Turn customer input into innovation.

    Science.gov (United States)

    Ulwick, Anthony W

    2002-01-01

    It's difficult to find a company these days that doesn't strive to be customer-driven. Too bad, then, that most companies go about the process of listening to customers all wrong--so wrong, in fact, that they undermine innovation and, ultimately, the bottom line. What usually happens is this: companies ask their customers what they want. Customers offer solutions in the form of products or services. Companies then deliver these tangibles, and customers just don't buy. The reason is simple--customers aren't expert or informed enough to come up with solutions. That's what your R&D team is for. Rather, customers should be asked only for outcomes--what they want a new product or service to do for them. The form the solutions take should be up to you, and you alone. Using Cordis Corporation as an example, this article describes, in fine detail, a series of effective steps for capturing, analyzing, and utilizing customer input. First come in-depth interviews, in which a moderator works with customers to deconstruct a process or activity in order to unearth "desired outcomes." Addressing participants' comments one at a time, the moderator rephrases them to be both unambiguous and measurable. Once the interviews are complete, researchers then compile a comprehensive list of outcomes that participants rank in order of importance and degree to which they are satisfied by existing products. Finally, using a simple mathematical formula called the "opportunity calculation," researchers can learn the relative attractiveness of key opportunity areas. These data can be used to uncover opportunities for product development, to properly segment markets, and to conduct competitive analysis.
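The "opportunity calculation" is, in Ulwick's formulation, importance plus the satisfaction shortfall, with both rated on roughly a 0-10 scale. A tiny sketch with hypothetical outcome ratings:

```python
def opportunity(importance, satisfaction):
    """Ulwick's opportunity score: imp + max(imp - sat, 0)."""
    return importance + max(importance - satisfaction, 0)

# Hypothetical customer-rated outcomes: (importance, satisfaction).
outcomes = {
    "minimize recovery time": (9.1, 3.2),   # important and underserved
    "reduce procedure cost":  (7.4, 6.8),
    "simplify device setup":  (6.0, 8.5),   # already overserved
}
scores = {name: opportunity(*vals) for name, vals in outcomes.items()}
print(scores)
```

The max(…, 0) clamp keeps overserved outcomes (satisfaction above importance) from dragging the score below the importance rating, so ranking by score surfaces the important-but-unsatisfied outcomes first.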

  19. PREVIMER : Meteorological inputs and outputs

    Science.gov (United States)

    Ravenel, H.; Lecornu, F.; Kerléguer, L.

    2009-09-01

    PREVIMER is a pre-operational system aiming to provide a wide range of users, from private individuals to professionals, with short-term forecasts about the coastal environment along the French coastlines bordering the English Channel, the Atlantic Ocean, and the Mediterranean Sea. Observation data and digital modelling tools first provide 48-hour (probably 96-hour by summer 2009) forecasts of sea states, currents, sea water levels and temperatures. The follow-up of an increasing number of biological parameters will, in time, complete this overview of the coastal environment. Working in partnership with the French Naval Hydrographic and Oceanographic Service (Service Hydrographique et Océanographique de la Marine, SHOM), the French National Weather Service (Météo-France), the French public science and technology research institute (Institut de Recherche pour le Développement, IRD), the European Institute of Marine Studies (Institut Universitaire Européen de la Mer, IUEM) and many others, IFREMER (the French public institute for marine research) is supplying the technologies needed to ensure this pertinent information, available daily on the Internet at http://www.previmer.org, and stored at the Operational Coastal Oceanographic Data Centre. Since 2006, PREVIMER has published the results of demonstrators assigned to limited geographic areas and to specific applications. This system remains experimental. The following topics are covered: hydrodynamic circulation, sea states, follow-up of passive tracers, conservative or non-conservative (specifically of microbiological origin), biogeochemical state, and primary production. Lastly, PREVIMER provides researchers and R&D departments with modelling tools and access to the database, in which the observation data and the modelling results are stored, to undertake environmental studies on new sites. The communication will focus on meteorological inputs to and outputs from PREVIMER. It will draw the lessons from almost 3 years during

  20. Quantification of complex modular architecture in plants.

    Science.gov (United States)

    Reeb, Catherine; Kaandorp, Jaap; Jansson, Fredrik; Puillandre, Nicolas; Dubuisson, Jean-Yves; Cornette, Raphaël; Jabbour, Florian; Coudert, Yoan; Patiño, Jairo; Flot, Jean-François; Vanderpoorten, Alain

    2018-04-01

    Morphometrics, the assignment of quantities to biological shapes, is a powerful tool to address taxonomic, evolutionary, functional and developmental questions. We propose a novel method for shape quantification of complex modular architecture in thalloid plants, whose extremely reduced morphologies, combined with the lack of a formal framework for thallus description, have long rendered taxonomic and evolutionary studies extremely challenging. Using graph theory, thalli are described as hierarchical series of nodes and edges, allowing for accurate, homologous and repeatable measurements of widths, lengths and angles. The computer program MorphoSnake was developed to extract the skeleton and contours of a thallus and automatically acquire, at each level of organization, width, length, angle and sinuosity measurements. Through the quantification of leaf architecture in Hymenophyllum ferns (Polypodiopsida) and a fully worked example of integrative taxonomy in the taxonomically challenging thalloid liverwort genus Riccardia, we show that MorphoSnake is applicable to all ramified plants. This new possibility of acquiring large numbers of quantitative traits in plants with complex modular architectures opens new perspectives of applications, from the development of rapid species identification tools to evolutionary analyses of adaptive plasticity. © 2018 The Authors. New Phytologist © 2018 New Phytologist Trust.

  1. Input and output constraints affecting irrigation development

    Science.gov (United States)

    Schramm, G.

    1981-05-01

    In many developing countries the expansion of irrigated agriculture is used as a major development tool for bringing about increases in agricultural output, rural economic growth and income distribution. Apart from constraints imposed by water availability, the major limitations to any acceleration of such programs are usually thought to be those of costs and financial resources. However, as is shown on the basis of empirical data drawn from Mexico, in reality the feasibility and effectiveness of such development programs is even more constrained by the lack of specialized physical and human factors on the input side and by market limitations on the output side. On the input side, the limited availability of complementary factors, such as truly functioning credit systems for small-scale farmers or effective agricultural extension services, imposes long-term constraints on development. On the output side, the limited availability, high risk, and relatively slow growth of markets for high-value crops sharply reduce the usually hoped-for and projected profitable crop mix that would warrant the frequently high costs of irrigation investments. Three conclusions are drawn: (1) Factors in limited supply have to be shadow-priced to reflect their high opportunity costs in alternative uses. (2) Re-allocation of financial resources from immediate construction of projects to a longer-term increase in the supply of scarce, highly trained manpower is necessary in order to optimize development over time. (3) Inclusion of high-value, high-income-producing crops in the benefit-cost analysis of new projects is inappropriate if these crops could potentially be grown in already existing projects.

  2. Stabilization of (state, input)-disturbed CSTRs through the port-Hamiltonian systems approach

    OpenAIRE

    Lu, Yafei; Fang, Zhou; Gao, Chuanhou

    2017-01-01

    It is a universal phenomenon that the state and input of continuous stirred tank reactor (CSTR) systems are both disturbed. This paper proposes a (state, input)-disturbed port-Hamiltonian framework that can be used for modeling, and further designs a stochastic passivity-based controller to asymptotically stabilize in probability the (state, input)-disturbed CSTR (sidCSTR) systems. The opposite entropy function and the availability function are selected as the Hamiltonian for the model and con...

  3. Praxis input/output interface

    International Nuclear Information System (INIS)

    Shapiro, R.E.; Evans, A. Jr.

    1981-01-01

    This document is intended as an introduction to the use of RMS facilities via Praxis (this interface hereafter called Praxis-RMS). It is presumed that the reader is familiar with Praxis conventions as well as with RMS use (at the MACRO level). Since Praxis-RMS was designed to be functionally equivalent to MACRO-RMS, the explanations follow the pattern of the DEC MACRO-RMS documentation (particularly the programmer's reference manual). A complete list of the procedures that make up Praxis-RMS appears at the end of this document (with parameters), along with the constants (grouped by type) that can be used as actual parameters

  4. Input filter compensation for switching regulators

    Science.gov (United States)

    Lee, F. C.; Kelkar, S. S.

    1982-01-01

    The problems caused by the interaction between the input filter, the output filter, and the control loop are discussed. Input filter design is complicated by the need to avoid performance degradation while staying within weight and loss limitations. Conventional input filter design techniques are then discussed. The concept of pole-zero cancellation is reviewed; this concept is the basis for an approach to control the peaking of the output impedance of the input filter and thus mitigate some of the problems the input filter causes. The proposed approach for controlling this peaking is to use a feedforward loop working in conjunction with feedback loops, thus forming a total state control scheme. The design of the feedforward loop for a buck regulator is described, and a possible implementation of the feedforward loop design is suggested.

  5. A statistical methodology for quantification of uncertainty in best estimate code physical models

    International Nuclear Information System (INIS)

    Vinai, Paolo; Macian-Juan, Rafael; Chawla, Rakesh

    2007-01-01

    A novel uncertainty assessment methodology, based on a statistical non-parametric approach, is presented in this paper. It achieves quantification of code physical model uncertainty by making use of model performance information obtained from studies of appropriate separate-effect tests. Uncertainties are quantified in the form of estimated probability density functions (pdf's), calculated with a newly developed non-parametric estimator. The new estimator objectively predicts the probability distribution of the model's 'error' (its uncertainty) from databases reflecting the model's accuracy on the basis of available experiments. The methodology is completed by applying a novel multi-dimensional clustering technique based on the comparison of model error samples with the Kruskall-Wallis test. This takes into account the fact that a model's uncertainty depends on system conditions, since a best estimate code can give predictions for which the accuracy is affected by the regions of the physical space in which the experiments occur. The final result is an objective, rigorous and accurate manner of assigning uncertainty to coded models, i.e. the input information needed by code uncertainty propagation methodologies used for assessing the accuracy of best estimate codes in nuclear systems analysis. The new methodology has been applied to the quantification of the uncertainty in the RETRAN-3D void model and then used in the analysis of an independent separate-effect experiment. This has clearly demonstrated the basic feasibility of the approach, as well as its advantages in yielding narrower uncertainty bands in quantifying the code's accuracy for void fraction predictions
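
    As an illustrative sketch of the two building blocks described above (a non-parametric pdf estimate of model error, and Kruskal-Wallis comparison of error samples from different regions of the physical space), the following uses SciPy's `gaussian_kde` as a stand-in for the paper's custom estimator, on invented error samples from two hypothetical flow regimes:

```python
import numpy as np
from scipy.stats import gaussian_kde, kruskal

rng = np.random.default_rng(0)
# Invented model-error samples (predicted minus measured void fraction)
# collected from separate-effect tests in two flow regimes.
err_regime_a = rng.normal(0.00, 0.02, size=200)
err_regime_b = rng.normal(0.05, 0.03, size=200)

# Non-parametric pdf estimate of the model error in regime A.
pdf_a = gaussian_kde(err_regime_a)

# Kruskal-Wallis test: may the two regimes share one error pdf?
stat, p_value = kruskal(err_regime_a, err_regime_b)
pool_regimes = p_value > 0.05   # pool only if not significantly different
```

Here the two regimes differ, so the test rejects pooling and each regime keeps its own error pdf, mirroring the clustering step of the methodology.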

  6. Uncertainty analysis methods for quantification of source terms using a large computer code

    International Nuclear Information System (INIS)

    Han, Seok Jung

    1997-02-01

    Quantification of uncertainties in the source term estimations by a large computer code, such as MELCOR and MAAP, is an essential process of the current probabilistic safety assessments (PSAs). The main objectives of the present study are (1) to investigate the applicability of a combined procedure of the response surface method (RSM) based on input determined from a statistical design and the Latin hypercube sampling (LHS) technique for the uncertainty analysis of CsI release fractions under a hypothetical severe accident sequence of a station blackout at Young-Gwang nuclear power plant using MAAP3.0B code as a benchmark problem; and (2) to propose a new measure of uncertainty importance based on the distributional sensitivity analysis. On the basis of the results obtained in the present work, the RSM is recommended to be used as a principal tool for an overall uncertainty analysis in source term quantifications, while using the LHS in the calculations of standardized regression coefficients (SRC) and standardized rank regression coefficients (SRRC) to determine the subset of the most important input parameters in the final screening step and to check the cumulative distribution functions (cdfs) obtained by RSM. Verification of the response surface model for its sufficient accuracy is a prerequisite for the reliability of the final results obtained by the combined procedure proposed in the present work. In the present study a new measure has been developed that utilizes the metric distance obtained from the cdfs. The measure has been evaluated for three different cases of distributions in order to assess its characteristics: in the first two cases the distribution is known analytically, while in the third it is unknown. The first case is given by symmetric analytical distributions; the second consists of two asymmetric distributions with nonzero skewness.
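
    The LHS-plus-SRC/SRRC screening step described above can be sketched as follows; the response function and sample sizes are invented for illustration, and `scipy.stats.qmc.LatinHypercube` stands in for whatever sampler the study used:

```python
import numpy as np
from scipy.stats import qmc, rankdata

rng = np.random.default_rng(1)
n, k = 100, 3                                    # samples, uncertain inputs
X = qmc.LatinHypercube(d=k, seed=1).random(n)    # LHS design on [0,1]^k

# Invented code response: dominated by input 0, nearly insensitive to input 2.
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2] + rng.normal(0, 0.05, n)

def standardized_coeffs(X, y):
    """Standardized regression coefficients: least squares on z-scored data."""
    Xz = (X - X.mean(0)) / X.std(0)
    yz = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta

src = standardized_coeffs(X, y)                        # SRC on raw values
srrc = standardized_coeffs(rankdata(X, axis=0).astype(float),
                           rankdata(y).astype(float))  # SRRC on ranks
```

Ranking inputs by |SRC| (or |SRRC| when the response is monotone but nonlinear) identifies the subset of important parameters to carry into the final screening step.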

  7. Collagen Quantification in Tissue Specimens.

    Science.gov (United States)

    Coentro, João Quintas; Capella-Monsonís, Héctor; Graceffa, Valeria; Wu, Zhuning; Mullen, Anne Maria; Raghunath, Michael; Zeugolis, Dimitrios I

    2017-01-01

    Collagen is the major extracellular protein in mammals. Accurate quantification of collagen is essential in the biomaterials (e.g., reproducible collagen scaffold fabrication), drug discovery (e.g., assessment of collagen in pathophysiologies, such as fibrosis), and tissue engineering (e.g., quantification of cell-synthesized collagen) fields. Although measuring hydroxyproline content is the most widely used method to quantify collagen in biological specimens, the process is very laborious. To this end, the Sircol™ Collagen Assay is widely used due to its inherent simplicity and convenience. However, this method leads to overestimation of collagen content due to the interaction of Sirius red with basic amino acids of non-collagenous proteins. Herein, we describe the addition of an ultrafiltration purification step in the process to accurately determine collagen content in tissues.

  8. Does Input Quality Drive Measured Differences in Firm Productivity?

    DEFF Research Database (Denmark)

    Fox, Jeremy T.; Smeets, Valerie Anne Rolande

    Firms in the same industry can differ in measured productivity by multiples of 3. Griliches (1957) suggests one explanation: the quality of inputs differs across firms. We add labor market history variables such as experience and firm and industry tenure, as well as general human capital measures… …is roughly of the same order of magnitude as some competitive effects found in the literature, but input quality measures do not explain most productivity dispersion, despite economically large production function coefficients. We find that the wage bill explains as much dispersion as human capital measures.

  9. Quantification of (R)-[11C]PK11195 binding in rheumatoid arthritis

    International Nuclear Information System (INIS)

    Kropholler, M.A.; Boellaard, R.; Kloet, R.W.; Lammertsma, A.A.; Elzinga, E.H.; Voskuyl, A.E.; Laken, C.J. van der; Dijkmans, B.A.C.; Maruyama, K.

    2009-01-01

    Rheumatoid arthritis (RA) involves migration of macrophages into inflamed areas. (R)-[11C]PK11195 binds to peripheral benzodiazepine receptors, expressed on macrophages, and may be used to quantify inflammation using positron emission tomography (PET). This study evaluated methods for the quantification of (R)-[11C]PK11195 binding in the knee joints of RA patients. Data from six patients with RA were analysed. Dynamic PET scans were acquired in 3-D mode following (R)-[11C]PK11195 injection. During scanning, arterial radioactivity concentrations were measured to determine the plasma (R)-[11C]PK11195 concentrations. Data were analysed using irreversible and reversible one-tissue and two-tissue compartment models and input functions with various types of metabolite correction. Model preferences according to the Akaike information criterion (AIC) and correlations between measures were evaluated, as were correlations between distribution volume (Vd) and standardized uptake values (SUV). AIC indicated optimal performance for a one-tissue reversible compartment model including blood volume. High correlations were observed between Vd obtained using different input functions (R² = 0.80-1.00) and between Vd obtained with one- and two-tissue reversible compartment models (R² = 0.75-0.94). A high correlation was observed between optimal Vd and SUV after injection (R² = 0.73). (R)-[11C]PK11195 kinetics in the knee were best described by a reversible single-tissue compartment model including blood volume. Applying metabolite corrections did not increase sensitivity. Due to the high correlation with Vd, SUV is a practical alternative for clinical use. (orig.)
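
    A minimal sketch of the winning model class, a reversible one-tissue compartment model with a blood-volume term, together with the AIC used for model preference, might look like this (rate constants and the plasma input curve are invented, not patient data):

```python
import numpy as np

t = np.linspace(0.0, 60.0, 601)            # minutes, 0.1-min grid
Cp = 100.0 * t * np.exp(-t / 4.0)          # invented plasma input function

def one_tissue(t, Cp, K1, k2, Vb):
    """Reversible one-tissue model with blood volume:
    C_pet = (1 - Vb) * K1 * (Cp convolved with exp(-k2 t)) + Vb * Cp."""
    dt = t[1] - t[0]
    Ct = K1 * np.convolve(Cp, np.exp(-k2 * t))[:len(t)] * dt
    return (1.0 - Vb) * Ct + Vb * Cp

def aic(y, y_fit, n_params):
    """Akaike information criterion from the residual sum of squares."""
    n = len(y)
    rss = np.sum((y - y_fit) ** 2)
    return n * np.log(rss / n) + 2 * n_params

K1, k2, Vb = 0.1, 0.05, 0.05               # invented rate constants (1/min)
Vd = K1 / k2                               # distribution volume
tac = one_tissue(t, Cp, K1, k2, Vb)        # simulated tissue activity curve
```

In a real analysis the rate constants would be fitted to the measured tissue curve, the AIC would arbitrate between candidate models, and Vd = K1/k2 would be the reported outcome measure.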

  10. High-Voltage-Input Level Translator Using Standard CMOS

    Science.gov (United States)

    Yager, Jeremy A.; Mojarradi, Mohammad M.; Vo, Tuan A.; Blalock, Benjamin J.

    2011-01-01

    The proposed integrated circuit would translate (1) a pair of input signals having a low differential potential and a possibly high common-mode potential into (2) a pair of output signals having the same low differential potential and a low common-mode potential. As used here, "low" and "high" refer to potentials that are, respectively, below or above the nominal supply potential (3.3 V) at which standard complementary metal oxide/semiconductor (CMOS) integrated circuits are designed to operate. The input common-mode potential could lie between 0 and 10 V; the output common-mode potential would be 2 V. This translation would make it possible to process the pair of signals by use of standard 3.3-V CMOS analog and/or mixed-signal (analog and digital) circuitry on the same integrated-circuit chip. A schematic of the circuit is shown in the figure. Standard 3.3-V CMOS circuitry cannot withstand input potentials greater than about 4 V. However, there are many applications that involve low-differential-potential, high-common-mode-potential input signal pairs and in which standard 3.3-V CMOS circuitry, which is relatively inexpensive, would be the most appropriate circuitry for performing other functions on the integrated-circuit chip that handles the high-potential input signals. Thus, there is a need to combine high-voltage input circuitry with standard low-voltage CMOS circuitry on the same integrated-circuit chip. The proposed circuit would satisfy this need. In the proposed circuit, the input signals would be coupled into both a level-shifting pair and a common-mode-sensing pair of CMOS transistors. The output of the level-shifting pair would be fed as input to a differential pair of transistors. The resulting differential current output would pass through six standoff transistors to be mirrored into an output branch by four heterojunction bipolar transistors. The mirrored differential current would be converted back to potential by a pair of diode-connected transistors.

  11. Uncertainty quantification tools for multiphase gas-solid flow simulations using MFIX

    Energy Technology Data Exchange (ETDEWEB)

    Fox, Rodney O. [Iowa State Univ., Ames, IA (United States); Passalacqua, Alberto [Iowa State Univ., Ames, IA (United States)

    2016-02-01

    Computational fluid dynamics (CFD) has been widely studied and used in the scientific community and in industry. Various models have been proposed to solve problems in different areas. However, all models deviate from reality. The uncertainty quantification (UQ) process evaluates the overall uncertainties associated with the prediction of quantities of interest. In particular, it studies the propagation of input uncertainties to the outputs of the models, so that confidence intervals can be provided for the simulation results. In the present work, a non-intrusive quadrature-based uncertainty quantification (QBUQ) approach is proposed. The probability distribution function (PDF) of the system response can then be reconstructed using the extended quadrature method of moments (EQMOM) and the extended conditional quadrature method of moments (ECQMOM). The report first explains the theory of the QBUQ approach, including methods to generate samples for problems with single or multiple uncertain input parameters, low-order statistics, and the required number of samples. Then methods for univariate PDF reconstruction (EQMOM) and multivariate PDF reconstruction (ECQMOM) are explained. The implementation of the QBUQ approach into the open-source CFD code MFIX is discussed next. Finally, the QBUQ approach is demonstrated in several applications. The method is first applied to two examples: a developing flow in a channel with uncertain viscosity, and an oblique shock problem with uncertain upstream Mach number. The error in the prediction of the moment response is studied as a function of the number of samples, and the accuracy of the moments required to reconstruct the PDF of the system response is discussed. The QBUQ approach is then demonstrated by considering a bubbling fluidized bed as example application. The mean particle size is assumed to be the uncertain input parameter. 
The system is simulated with a standard two-fluid model with kinetic theory closures for the particulate phase implemented into

  12. READDATA: a FORTRAN 77 codeword input package

    International Nuclear Information System (INIS)

    Lander, P.A.

    1983-07-01

    A new codeword input package has been produced as a result of the incompatibility between different dialects of FORTRAN, especially when character variables are passed as parameters. This report is for those who wish to use a codeword input package with FORTRAN 77. The package, called ''Readdata'', attempts to combine the best features of its predecessors such as BINPUT and pseudo-BINPUT. (author)

  13. CREATING INPUT TABLES FROM WAPDEG FOR RIP

    International Nuclear Information System (INIS)

    K.G. Mon

    1998-01-01

    The purpose of this calculation is to create tables for input into RIP ver. 5.18 (Integrated Probabilistic Simulator for Environmental Systems) from WAPDEG ver. 3.06 (Waste Package Degradation) output. This calculation details the creation of the RIP input tables for TSPA-VA REV.00

  14. Wave energy input into the Ekman layer

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    This paper is concerned with the wave energy input into the Ekman layer, motivated by observational evidence that surface waves can significantly affect the profile of the Ekman layer. Under the assumption of constant vertical diffusivity, the analytical form of wave energy input into the Ekman layer is derived. Analysis of the energy balance shows that the energy input to the Ekman layer through the wind stress and the interaction of the Stokes drift with planetary vorticity can be divided into two kinds. One is the wind energy input; the other is the wave energy input, which depends on wind speed, wave characteristics and the wind direction relative to the wave direction. Estimates show that the wave energy input can be up to 10% in high-latitude, high-wind-speed areas and more than 20% in the Antarctic Circumpolar Current, compared with the wind energy input into the classical Ekman layer. The results of this paper are of significance to the study of wave-induced large scale effects.

  15. Input Enhancement and L2 Question Formation.

    Science.gov (United States)

    White, Lydia; And Others

    1991-01-01

    Investigated the extent to which form-focused instruction and corrective feedback (i.e., "input enhancement"), provided within a primarily communicative program, contribute to learners' accuracy in question formation. Study results are interpreted as evidence that input enhancement can bring about genuine changes in learners' interlanguage…

  16. The Input-Output Relationship of the Cholinergic Basal Forebrain

    Directory of Open Access Journals (Sweden)

    Matthew R. Gielow

    2017-02-01

    Full Text Available Basal forebrain cholinergic neurons influence cortical state, plasticity, learning, and attention. They collectively innervate the entire cerebral cortex, differentially controlling acetylcholine efflux across different cortical areas and timescales. Such control might be achieved by differential inputs driving separable cholinergic outputs, although no input-output relationship on a brain-wide level has ever been demonstrated. Here, we identify input neurons to cholinergic cells projecting to specific cortical regions by infecting cholinergic axon terminals with a monosynaptically restricted viral tracer. This approach revealed several circuit motifs, such as central amygdala neurons synapsing onto basolateral amygdala-projecting cholinergic neurons or strong somatosensory cortical input to motor cortex-projecting cholinergic neurons. The presence of input cells in the parasympathetic midbrain nuclei contacting frontally projecting cholinergic neurons suggests that the network regulating the inner eye muscles additionally regulates cortical state via acetylcholine efflux. This dataset enables future circuit-level experiments to identify drivers of known cortical cholinergic functions.

  17. Network and neuronal membrane properties in hybrid networks reciprocally regulate selectivity to rapid thalamocortical inputs.

    Science.gov (United States)

    Pesavento, Michael J; Pinto, David J

    2012-11-01

    Rapidly changing environments require rapid processing from sensory inputs. Varying deflection velocities of a rodent's primary facial vibrissa cause varying temporal neuronal activity profiles within the ventral posteromedial thalamic nucleus. Local neuron populations in a single somatosensory layer 4 barrel transform sparsely coded input into a spike count based on the input's temporal profile. We investigate this transformation by creating a barrel-like hybrid network with whole cell recordings of in vitro neurons from a cortical slice preparation, embedding the biological neuron in the simulated network by presenting virtual synaptic conductances via a conductance clamp. Utilizing the hybrid network, we examine the reciprocal network properties (local excitatory and inhibitory synaptic convergence) and neuronal membrane properties (input resistance) by altering the barrel population response to diverse thalamic input. In the presence of local network input, neurons are more selective to thalamic input timing; this arises from strong feedforward inhibition. Strongly inhibitory (damping) network regimes are more selective to timing and less selective to the magnitude of input but require stronger initial input. Input selectivity relies heavily on the different membrane properties of excitatory and inhibitory neurons. When inhibitory and excitatory neurons had identical membrane properties, the sensitivity of in vitro neurons to temporal vs. magnitude features of input was substantially reduced. Increasing the mean leak conductance of the inhibitory cells decreased the network's temporal sensitivity, whereas increasing excitatory leak conductance enhanced magnitude sensitivity. Local network synapses are essential in shaping thalamic input, and differing membrane properties of functional classes reciprocally modulate this effect.

  18. Development and evaluation of a sandwich ELISA for quantification of the 20S proteasome in human plasma

    DEFF Research Database (Denmark)

    Dutaud, Dominique; Aubry, Laurent; Henry, Laurent

    2002-01-01

    Because quantification of the 20S proteasome by functional activity measurements is difficult and inaccurate, we have developed an indirect sandwich enzyme-linked immunosorbent assays (ELISA) for quantification of the 20S proteasome in human plasma. This sandwich ELISA uses a combination...
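
    Sandwich ELISA readouts are typically converted to concentrations through a fitted standard curve; a common choice is the 4-parameter logistic (4PL) model, sketched below with invented standards (the abstract does not specify which curve model the authors used):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, d, c, b):
    """4-parameter logistic: a = zero-dose response, d = infinite-dose
    response, c = inflection concentration (EC50), b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Invented 20S proteasome standards (ng/mL) and noisy absorbance readings.
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0, 200.0])
rng = np.random.default_rng(4)
od = four_pl(conc, 0.05, 2.0, 25.0, 1.3) + rng.normal(0.0, 0.01, conc.size)

popt, _ = curve_fit(four_pl, conc, od, p0=[0.1, 2.0, 20.0, 1.0], maxfev=10000)

def backcalc(y, a, d, c, b):
    """Invert the fitted 4PL to read a sample concentration off the curve."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)
```

Unknown plasma samples are then quantified by measuring their absorbance and back-calculating through the fitted curve.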

  19. Standardless quantification by parameter optimization in electron probe microanalysis

    International Nuclear Information System (INIS)

    Limandri, Silvina P.; Bonetto, Rita D.; Josa, Víctor Galván; Carreras, Alejo C.; Trincavelli, Jorge C.

    2012-01-01

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 for 66% of the cases for POEMA, GENESIS and DTSA, respectively. - Highlights: ► A method for standardless quantification in EPMA is presented. ► It gives better results than the commercial software GENESIS Spectrum. ► It gives better results than the software DTSA. ► It allows the determination of the conductive coating thickness. ► It gives an estimation for the concentration uncertainties.
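
    The core idea, least-squares adjustment of an analytical spectrum model against a measured spectrum, can be sketched on a toy spectrum (one Gaussian line over a linear background; all values invented and far simpler than POEMA's physical model):

```python
import numpy as np
from scipy.optimize import least_squares

E = np.linspace(0.5, 10.0, 500)                 # energy axis (keV)

def model(p, E):
    """Analytical spectrum: linear background plus one Gaussian line
    parameterized by area, center and width."""
    b0, b1, area, mu, sigma = p
    peak = area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((E - mu) / sigma) ** 2)
    return b0 + b1 * E + peak

true_p = [50.0, -2.0, 400.0, 6.4, 0.08]         # invented "ground truth" line
rng = np.random.default_rng(2)
spectrum = model(true_p, E) + rng.normal(0.0, 2.0, E.size)

# Minimize the quadratic differences between model and measurement.
fit = least_squares(lambda p: model(p, E) - spectrum,
                    x0=[40.0, -1.0, 300.0, 6.3, 0.1])
area_hat = fit.x[2]                             # recovered line intensity
```

In the real method the optimized parameters are tied to a physical description of x-ray generation, so the fitted line intensities translate into elemental concentrations with uncertainties.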

  20. Competition and convergence between auditory and cross-modal visual inputs to primary auditory cortical areas

    Science.gov (United States)

    Mao, Yu-Ting; Hua, Tian-Miao

    2011-01-01

    Sensory neocortex is capable of considerable plasticity after sensory deprivation or damage to input pathways, especially early in development. Although plasticity can often be restorative, sometimes novel, ectopic inputs invade the affected cortical area. Invading inputs from other sensory modalities may compromise the original function or even take over, imposing a new function and preventing recovery. Using ferrets whose retinal axons were rerouted into auditory thalamus at birth, we were able to examine the effect of varying the degree of ectopic, cross-modal input on reorganization of developing auditory cortex. In particular, we assayed whether the invading visual inputs and the existing auditory inputs competed for or shared postsynaptic targets and whether the convergence of input modalities would induce multisensory processing. We demonstrate that although the cross-modal inputs create new visual neurons in auditory cortex, some auditory processing remains. The degree of damage to auditory input to the medial geniculate nucleus was directly related to the proportion of visual neurons in auditory cortex, suggesting that the visual and residual auditory inputs compete for cortical territory. Visual neurons were not segregated from auditory neurons but shared target space even on individual target cells, substantially increasing the proportion of multisensory neurons. Thus spatial convergence of visual and auditory input modalities may be sufficient to expand multisensory representations. Together these findings argue that early, patterned visual activity does not drive segregation of visual and auditory afferents and suggest that auditory function might be compromised by converging visual inputs. These results indicate possible ways in which multisensory cortical areas may form during development and evolution. They also suggest that rehabilitative strategies designed to promote recovery of function after sensory deprivation or damage need to take into

  1. Improved statistical models for limited datasets in uncertainty quantification using stochastic collocation

    Energy Technology Data Exchange (ETDEWEB)

    Alwan, Aravind; Aluru, N.R.

    2013-12-15

    This paper presents a data-driven framework for performing uncertainty quantification (UQ) by choosing a stochastic model that accurately describes the sources of uncertainty in a system. This model is propagated through an appropriate response surface function that approximates the behavior of this system using stochastic collocation. Given a sample of data describing the uncertainty in the inputs, our goal is to estimate a probability density function (PDF) using the kernel moment matching (KMM) method so that this PDF can be used to accurately reproduce statistics like mean and variance of the response surface function. Instead of constraining the PDF to be optimal for a particular response function, we show that we can use the properties of stochastic collocation to make the estimated PDF optimal for a wide variety of response functions. We contrast this method with other traditional procedures that rely on the Maximum Likelihood approach, like kernel density estimation (KDE) and its adaptive modification (AKDE). We argue that this modified KMM method tries to preserve what is known from the given data and is the better approach when the available data is limited in quantity. We test the performance of these methods for both univariate and multivariate density estimation by sampling random datasets from known PDFs and then measuring the accuracy of the estimated PDFs, using the known PDF as a reference. Comparing the output mean and variance estimated with the empirical moments using the raw data sample as well as the actual moments using the known PDF, we show that the KMM method performs better than KDE and AKDE in predicting these moments with greater accuracy. This improvement in accuracy is also demonstrated for the case of UQ in electrostatic and electrothermomechanical microactuators. We show how our framework results in the accurate computation of statistics in micromechanical systems.
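
    A toy version of the kernel moment matching idea, fixing Gaussian kernels and solving for non-negative weights so the mixture reproduces the sample's low-order raw moments, could look like this (kernel placement and bandwidth are arbitrary choices for illustration, not the authors' algorithm):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
data = rng.normal(1.0, 0.5, size=50)             # limited data sample

# Fixed Gaussian kernels N(c, h^2) at spread centers; weights to be found.
centers = np.linspace(data.min(), data.max(), 8)
h = 0.3

def kernel_moments(c, h):
    """Raw moments 0..3 of a Gaussian kernel N(c, h^2)."""
    return np.array([1.0, c, c**2 + h**2, c**3 + 3 * c * h**2])

# Match normalization and the first three raw moments of the data.
A = np.column_stack([kernel_moments(c, h) for c in centers])
b = np.array([1.0, data.mean(), np.mean(data**2), np.mean(data**3)])
w, _ = nnls(A, b)                                # non-negative weights
w /= w.sum()                                     # renormalize the pdf

mixture_mean = np.dot(w, centers)
mixture_var = np.dot(w, centers**2 + h**2) - mixture_mean**2
```

Because the mixture's moments are linear in the weights, matching them is a small constrained linear problem, which is why the approach remains stable when the available data is limited, in contrast to likelihood-based KDE bandwidth selection.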

  2. Improved statistical models for limited datasets in uncertainty quantification using stochastic collocation

    International Nuclear Information System (INIS)

    Alwan, Aravind; Aluru, N.R.

    2013-01-01

    This paper presents a data-driven framework for performing uncertainty quantification (UQ) by choosing a stochastic model that accurately describes the sources of uncertainty in a system. This model is propagated through an appropriate response surface function that approximates the behavior of this system using stochastic collocation. Given a sample of data describing the uncertainty in the inputs, our goal is to estimate a probability density function (PDF) using the kernel moment matching (KMM) method so that this PDF can be used to accurately reproduce statistics like mean and variance of the response surface function. Instead of constraining the PDF to be optimal for a particular response function, we show that we can use the properties of stochastic collocation to make the estimated PDF optimal for a wide variety of response functions. We contrast this method with other traditional procedures that rely on the Maximum Likelihood approach, like kernel density estimation (KDE) and its adaptive modification (AKDE). We argue that this modified KMM method tries to preserve what is known from the given data and is the better approach when the available data is limited in quantity. We test the performance of these methods for both univariate and multivariate density estimation by sampling random datasets from known PDFs and then measuring the accuracy of the estimated PDFs, using the known PDF as a reference. Comparing the output mean and variance estimated with the empirical moments using the raw data sample as well as the actual moments using the known PDF, we show that the KMM method performs better than KDE and AKDE in predicting these moments with greater accuracy. This improvement in accuracy is also demonstrated for the case of UQ in electrostatic and electrothermomechanical microactuators. We show how our framework results in the accurate computation of statistics in micromechanical systems

  3. QUANTIFICATION OF AN C-11 LABELED BETA-ADRENOCEPTOR LIGAND, S-(-)CGP-12177, IN PLASMA OF HUMANS AND RATS

    NARCIS (Netherlands)

    VANWAARDE, A; ANTHONIO, RL; ELSINGA, PH; POSTHUMUS, H; WEEMAES, AMA; BLANKSMA, PK; PAANS, AMJ; VAALBURG, W; Visser, Ton J.; Visser, Gerben

    1995-01-01

    beta-Adrenoceptors in human lungs and heart can be imaged with the radioligand 4-[3-[(1,1-dimethylethyl)amino]-2-hydroxypropoxy]-1,3-dihydro-2H-benzimidazol-2-C-11-one (CGP 12177, [C-11]I). For quantification of receptor density with compartment models by adjustment of rate constants, an 'input

  4. Inhibitory Gating of Basolateral Amygdala Inputs to the Prefrontal Cortex.

    Science.gov (United States)

    McGarry, Laura M; Carter, Adam G

    2016-09-07

    Interactions between the prefrontal cortex (PFC) and basolateral amygdala (BLA) regulate emotional behaviors. However, a circuit-level understanding of functional connections between these brain regions remains incomplete. The BLA sends prominent glutamatergic projections to the PFC, but the overall influence of these inputs is predominantly inhibitory. Here we combine targeted recordings and optogenetics to examine the synaptic underpinnings of this inhibition in the mouse infralimbic PFC. We find that BLA inputs preferentially target layer 2 corticoamygdala over neighboring corticostriatal neurons. However, these inputs make even stronger connections onto neighboring parvalbumin and somatostatin expressing interneurons. Inhibitory connections from these two populations of interneurons are also much stronger onto corticoamygdala neurons. Consequently, BLA inputs are able to drive robust feedforward inhibition via two parallel interneuron pathways. Moreover, the contributions of these interneurons shift during repetitive activity, due to differences in short-term synaptic dynamics. Thus, parvalbumin interneurons are activated at the start of stimulus trains, whereas somatostatin interneuron activation builds during these trains. Together, these results reveal how the BLA impacts the PFC through a complex interplay of direct excitation and feedforward inhibition. They also highlight the roles of targeted connections onto multiple projection neurons and interneurons in this cortical circuit. Our findings provide a mechanistic understanding for how the BLA can influence the PFC circuit, with important implications for how this circuit participates in the regulation of emotion. The prefrontal cortex (PFC) and basolateral amygdala (BLA) interact to control emotional behaviors. Here we show that BLA inputs elicit direct excitation and feedforward inhibition of layer 2 projection neurons in infralimbic PFC. 
BLA inputs are much stronger at corticoamygdala neurons compared

  5. MARS code manual volume II: input requirements

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Kim, Kyung Doo; Bae, Sung Won; Jeong, Jae Jun; Lee, Seung Wook; Hwang, Moon Kyu

    2010-02-01

    Korea Advanced Energy Research Institute (KAERI) conceived and started the development of MARS code with the main objective of producing a state-of-the-art realistic thermal hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF codes. The method of integration of the two codes is based on the dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-Of-State (EOS) for the light water was unified by replacing the EOS of COBRA-TF by that of the RELAP5. This input manual provides a complete list of input required to run MARS. The manual is divided largely into two parts, namely, the one-dimensional part and the multi-dimensional part. The inputs for auxiliary parts such as minor edit requests and graph formatting inputs are shared by the two parts and as such mixed input is possible. The overall structure of the input is modeled on the structure of the RELAP5 and as such the layout of the manual is very similar to that of the RELAP. This similitude to RELAP5 input is intentional as this input scheme will allow minimum modification between the inputs of RELAP5 and MARS3.1. MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible

  6. Total dose induced increase in input offset voltage in JFET input operational amplifiers

    International Nuclear Information System (INIS)

    Pease, R.L.; Krieg, J.; Gehlhausen, M.; Black, J.

    1999-01-01

    Four different types of commercial JFET input operational amplifiers were irradiated with ionizing radiation under a variety of test conditions. All experienced significant increases in input offset voltage (Vos). Microprobe measurements of the electrical characteristics of the decoupled input JFETs demonstrate that the increase in Vos results from the mismatch of the degraded JFETs. (authors)

  7. Investigation of bacterial hopanoid inputs to soils from Western Canada

    Energy Technology Data Exchange (ETDEWEB)

    Shunthirasingham, Chubashini [Department of Physical and Environmental Sciences, University of Toronto, Scarborough College, 1265 Military Trail, Toronto, Ont., M1C1A4 (Canada); Simpson, Myrna J. [Department of Physical and Environmental Sciences, University of Toronto, Scarborough College, 1265 Military Trail, Toronto, Ont., M1C1A4 (Canada)]. E-mail: myrna.simpson@utoronto.ca

    2006-06-15

    Hopanoids have been widely used as characteristic biomarkers to study inputs of bacterial biomass to sediments because they are preserved in the geologic record. A limited number of studies have been performed on hopanoid biomarkers in soils. The present study examined the distribution and potential preservation of hopanoids in soils developed under different climatic conditions and varying vegetative inputs. Solvent extraction and sequential chemical degradation methods were employed to extract both 'free' and 'bound' hopanoids from three grassland soils, a grassland-forest transition soil, and a forest soil from Western Canada. Identification and quantification of hopanoids in the soil samples were carried out by gas chromatography-mass spectrometry. Methylbishomohopanol, bishomohopanol and bishomohopanoic acid were detected in all solvent extracts. The base hydrolysis and ruthenium tetroxide extracts contained only bishomohopanoic acid, at concentration ranges of 0.8-8.8 μg/gC and 2.2-28.3 μg/gC, respectively. The acid hydrolysis procedure did not release detectable amounts of hopanoids. The solvent extraction yielded the greatest amounts of 'free' hopanoids in two of the grassland soils (Dark Brown and Black Chernozems) and in the forest soil (Gray Luvisol). In contrast, the chemical degradation methods resulted in higher amounts of 'bound' hopanoids in the third grassland soil (Brown Chernozem) and the transition soil (Dark Gray Chernozem), indicating that more hopanoids exist in the 'bound' form in these soils. Overall, the forest and the transition soils contained more hopanoids than the grassland soils. This is hypothesized to be due to the greater degradation of hopanoids in the grassland soils and/or sorption to clay minerals, as compared to the forest and transition soils.
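The reported concentrations are normalized to the sample's organic-carbon content (μg/gC). A minimal sketch of that normalization, with hypothetical input values chosen to fall inside the paper's reported 0.8-8.8 μg/gC range:

```python
def conc_per_g_carbon(analyte_ug, sample_g, oc_fraction):
    """Normalize an extracted analyte mass to organic carbon (ug per g C)."""
    if not 0 < oc_fraction <= 1:
        raise ValueError("organic-carbon fraction must be in (0, 1]")
    return analyte_ug / (sample_g * oc_fraction)

# hypothetical: 0.11 ug bishomohopanoic acid from 5 g soil at 2.5% organic carbon
print(round(conc_per_g_carbon(0.11, 5.0, 0.025), 2))  # 0.88
```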

  8. Investigation of bacterial hopanoid inputs to soils from Western Canada

    International Nuclear Information System (INIS)

    Shunthirasingham, Chubashini; Simpson, Myrna J.

    2006-01-01

    Hopanoids have been widely used as characteristic biomarkers to study inputs of bacterial biomass to sediments because they are preserved in the geologic record. A limited number of studies have been performed on hopanoid biomarkers in soils. The present study examined the distribution and potential preservation of hopanoids in soils developed under different climatic conditions and varying vegetative inputs. Solvent extraction and sequential chemical degradation methods were employed to extract both 'free' and 'bound' hopanoids from three grassland soils, a grassland-forest transition soil, and a forest soil from Western Canada. Identification and quantification of hopanoids in the soil samples were carried out by gas chromatography-mass spectrometry. Methylbishomohopanol, bishomohopanol and bishomohopanoic acid were detected in all solvent extracts. The base hydrolysis and ruthenium tetroxide extracts contained only bishomohopanoic acid, at concentration ranges of 0.8-8.8 μg/gC and 2.2-28.3 μg/gC, respectively. The acid hydrolysis procedure did not release detectable amounts of hopanoids. The solvent extraction yielded the greatest amounts of 'free' hopanoids in two of the grassland soils (Dark Brown and Black Chernozems) and in the forest soil (Gray Luvisol). In contrast, the chemical degradation methods resulted in higher amounts of 'bound' hopanoids in the third grassland soil (Brown Chernozem) and the transition soil (Dark Gray Chernozem), indicating that more hopanoids exist in the 'bound' form in these soils. Overall, the forest and the transition soils contained more hopanoids than the grassland soils. This is hypothesized to be due to the greater degradation of hopanoids in the grassland soils and/or sorption to clay minerals, as compared to the forest and transition soils.

  9. Quantification of heterogeneity observed in medical images

    International Nuclear Information System (INIS)

    Brooks, Frank J; Grigsby, Perry W

    2013-01-01

    There has been much recent interest in the quantification of visually evident heterogeneity within functional grayscale medical images, such as those obtained via magnetic resonance or positron emission tomography. In the case of images of cancerous tumors, variations in grayscale intensity imply variations in crucial tumor biology. Despite these considerable clinical implications, there is as yet no standardized method for measuring the heterogeneity observed via these imaging modalities. In this work, we motivate and derive a statistical measure of image heterogeneity. This statistic measures the distance-dependent average deviation from the smoothest intensity gradation feasible. We show how this statistic may be used to automatically rank images of in vivo human tumors in order of increasing heterogeneity. We test this method against the current practice of ranking images via expert visual inspection. We find that this statistic provides a means of heterogeneity quantification beyond that given by other statistics traditionally used for the same purpose. We demonstrate the effect of tumor shape upon our ranking method and find the method applicable to a wide variety of clinically relevant tumor images. We find that the automated heterogeneity rankings agree very closely with those performed visually by experts. These results indicate that our automated method may be used reliably to rank, in order of increasing heterogeneity, tumor images whether or not object shape is considered to contribute to that heterogeneity. Automated heterogeneity ranking yields objective results which are more consistent than visual rankings. Reducing variability in image interpretation will enable more researchers to better study potential clinical implications of observed tumor heterogeneity
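The statistic is described here only qualitatively, as a distance-dependent average deviation from the smoothest feasible intensity gradation. As a rough illustrative stand-in, and not the authors' exact formula, one can score heterogeneity as the mean absolute deviation of an image from its best-fit linear gradation (least-squares plane), so a smooth ramp scores near zero and a patchy image scores high:

```python
def heterogeneity_score(img):
    """Mean absolute deviation of a grayscale image (list of rows) from its
    least-squares plane a + b*x + c*y. Illustrative heterogeneity proxy only."""
    pts = [(x, y, v) for y, row in enumerate(img) for x, v in enumerate(row)]
    n = len(pts)
    sx = sum(x for x, _, _ in pts); sy = sum(y for _, y, _ in pts)
    sv = sum(v for _, _, v in pts)
    sxx = sum(x * x for x, _, _ in pts); syy = sum(y * y for _, y, _ in pts)
    sxy = sum(x * y for x, y, _ in pts)
    sxv = sum(x * v for x, _, v in pts); syv = sum(y * v for _, y, v in pts)
    # Solve the 3x3 normal equations for (a, b, c) by Cramer's rule.
    A = [[n, sx, sy], [sx, sxx, sxy], [sy, sxy, syy]]
    rhs = [sv, sxv, syv]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(A)
    coeffs = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = rhs[r]
        coeffs.append(det3(Ai) / d)
    a, b, c = coeffs
    return sum(abs(v - (a + b * x + c * y)) for x, y, v in pts) / n

ramp = [[x + y for x in range(4)] for y in range(4)]           # smooth gradation
checker = [[(x + y) % 2 for x in range(4)] for y in range(4)]  # patchy
print(heterogeneity_score(ramp), heterogeneity_score(checker))  # 0.0 0.5
```

The ramp is fit exactly by a plane and scores zero, while the checkerboard's best plane is its mean value, leaving maximal deviation.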

  10. Quantification of heterogeneity observed in medical images.

    Science.gov (United States)

    Brooks, Frank J; Grigsby, Perry W

    2013-03-02

    There has been much recent interest in the quantification of visually evident heterogeneity within functional grayscale medical images, such as those obtained via magnetic resonance or positron emission tomography. In the case of images of cancerous tumors, variations in grayscale intensity imply variations in crucial tumor biology. Despite these considerable clinical implications, there is as yet no standardized method for measuring the heterogeneity observed via these imaging modalities. In this work, we motivate and derive a statistical measure of image heterogeneity. This statistic measures the distance-dependent average deviation from the smoothest intensity gradation feasible. We show how this statistic may be used to automatically rank images of in vivo human tumors in order of increasing heterogeneity. We test this method against the current practice of ranking images via expert visual inspection. We find that this statistic provides a means of heterogeneity quantification beyond that given by other statistics traditionally used for the same purpose. We demonstrate the effect of tumor shape upon our ranking method and find the method applicable to a wide variety of clinically relevant tumor images. We find that the automated heterogeneity rankings agree very closely with those performed visually by experts. These results indicate that our automated method may be used reliably to rank, in order of increasing heterogeneity, tumor images whether or not object shape is considered to contribute to that heterogeneity. Automated heterogeneity ranking yields objective results which are more consistent than visual rankings. Reducing variability in image interpretation will enable more researchers to better study potential clinical implications of observed tumor heterogeneity.

  11. 36 CFR 1193.41 - Input, control, and mechanical functions.

    Science.gov (United States)

    2010-07-01

    ... little or no color perception. Provide at least one mode that does not require user color perception. (d) ... Provide at least one mode that minimizes the cognitive, memory, language, and learning skills required of...

  12. Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B

    2004-03-01

    The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is usable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) is outlined. (author)

  13. Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem

    International Nuclear Information System (INIS)

    Reer, B.

    2004-01-01

    The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is usable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) is outlined. (author)

  14. Sensitivity Analysis and Uncertainty Quantification for the LAMMPS Molecular Dynamics Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Picard, Richard Roy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bhat, Kabekode Ghanasham [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-18

    We examine sensitivity analysis and uncertainty quantification for molecular dynamics simulation. Extreme (large or small) output values for the LAMMPS code often occur at the boundaries of input regions, and uncertainties in those boundary values are overlooked by common SA methods. Similarly, input values for which code outputs are consistent with calibration data can also occur near boundaries. Upon applying approaches in the literature for imprecise probabilities (IPs), much more realistic results are obtained than for the complacent application of standard SA and code calibration.
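The record's central point, that extreme outputs sit on the boundaries of input regions and are missed by interior-only sampling, can be illustrated with a toy response function unrelated to LAMMPS itself:

```python
import itertools
import random

def screen(f, bounds, n_interior=200, seed=1):
    """Toy boundary check: compare the max of f over an interior random
    sample with its max over the explicit corner points of the input box."""
    random.seed(seed)
    interior = [tuple(random.uniform(lo + 0.1 * (hi - lo), hi - 0.1 * (hi - lo))
                      for lo, hi in bounds) for _ in range(n_interior)]
    corners = list(itertools.product(*bounds))
    return max(map(f, interior)), max(map(f, corners))

# hypothetical response whose maximum lies at a corner of [0, 1]^3
f = lambda x: x[0] ** 2 + 2 * x[1] - 0.5 * x[2]
inner_max, corner_max = screen(f, [(0.0, 1.0)] * 3)
print(inner_max < corner_max)  # True: only the boundary reveals the extreme
```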

  15. A simple method for measurement of cerebral blood flow using 123I-IMP SPECT with calibrated standard input function by one point blood sampling. Validation of calibration by one point venous blood sampling as a substitute for arterial blood sampling

    International Nuclear Information System (INIS)

    Ito, Hiroshi; Akaizawa, Takashi; Goto, Ryoui

    1994-01-01

    In a simplified method for measuring cerebral blood flow using one 123I-IMP SPECT scan and one-point arterial blood sampling (the autoradiography method), the input function is obtained by calibrating a standard input function with one arterial blood sample. The purpose of this study is to validate calibration by one-point venous blood sampling as a substitute for one-point arterial blood sampling. After intravenous infusion of 123I-IMP, frequent arterial and venous blood samples were taken simultaneously from 12 patients with CNS disease (without any heart or lung disease) and 5 normal volunteers. The ratios of whole-blood radioactivity in venous blood obtained from a cutaneous cubital vein to that in arterial blood were 0.76±0.08, 0.80±0.05, 0.81±0.06 and 0.83±0.11 at 10, 20, 30 and 50 min after 123I-IMP infusion, respectively; venous radioactivity remained about 20% lower than arterial radioactivity throughout the 50 min. However, the corresponding ratios for a cutaneous dorsal hand vein were 0.93±0.02, 0.94±0.05, 0.98±0.04 and 0.98±0.03, and venous radioactivity was consistent with arterial radioactivity. This indicates that the arterio-venous difference in radioactivity is minimal in a peripheral cutaneous vein such as a dorsal hand vein, owing to arteriovenous shunting in the palm. Therefore, blood sampling from a cutaneous dorsal hand vein can substitute for arterial sampling. The optimal time for venous blood sampling, evaluated by error analysis, was 20 min after 123I-IMP infusion, 10 min later than that for arterial blood sampling. (author)
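The calibration step described above, scaling a population-based standard input function so that it passes through a single measured blood sample, can be sketched as follows (curve values, units, and sampling time are hypothetical):

```python
def calibrate_input_function(std_times, std_activity, t_sample, measured):
    """Scale a standard (population-based) input function by the ratio of a
    one-point blood measurement to the standard curve at the sampling time.
    Illustrative sketch of the one-point calibration idea only."""
    pts = list(zip(std_times, std_activity))
    for (t0, a0), (t1, a1) in zip(pts, pts[1:]):
        if t0 <= t_sample <= t1:
            # linear interpolation of the standard curve at t_sample
            std_at_t = a0 + (a1 - a0) * (t_sample - t0) / (t1 - t0)
            break
    else:
        raise ValueError("sampling time outside the standard curve")
    k = measured / std_at_t  # one-point calibration factor
    return [k * a for a in std_activity]

# hypothetical standard curve (kBq/mL) at 0, 10, 20, 30, 50 min post-infusion
times = [0, 10, 20, 30, 50]
std = [0.0, 120.0, 90.0, 70.0, 50.0]
cal = calibrate_input_function(times, std, 20, 108.0)  # measured 108 at 20 min
print(round(cal[1], 1))  # 144.0: whole curve scaled by 108/90 = 1.2
```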

  16. freeQuant: A Mass Spectrometry Label-Free Quantification Software Tool for Complex Proteome Analysis.

    Science.gov (United States)

    Deng, Ning; Li, Zhenye; Pan, Chao; Duan, Huilong

    2015-01-01

    The study of complex proteomes places greater demands on mass spectrometry-based quantification methods. In this paper, we present a mass spectrometry label-free quantification tool for complex proteomes, called freeQuant, which effectively integrates quantification with functional analysis. freeQuant consists of two well-integrated modules: label-free quantification and functional analysis with biomedical knowledge. freeQuant supports label-free quantitative analysis that makes full use of tandem mass spectrometry (MS/MS) spectral counts, protein sequence length, shared peptides, and ion intensity. It adopts spectral counting for quantitative analysis and introduces a new method for shared peptides to accurately evaluate the abundance of isoforms. For proteins with low abundance, MS/MS total ion count coupled with spectral count is included to ensure accurate protein quantification. Furthermore, freeQuant supports large-scale functional annotation for complex proteomes. Mitochondrial proteomes from the mouse heart, the mouse liver, and the human heart were used to evaluate the usability and performance of freeQuant. The evaluation showed that the quantitative algorithms implemented in freeQuant improve quantification accuracy with a better dynamic range.
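freeQuant's own algorithm (with its handling of shared peptides and ion intensities) is not given here. A common spectral-count scheme in the same spirit, the Normalized Spectral Abundance Factor, divides each protein's spectral count by its sequence length before normalizing:

```python
def nsaf(spectral_counts, lengths):
    """Normalized Spectral Abundance Factor: count / length, normalized so the
    factors sum to 1. A standard spectral-count scheme, shown for illustration;
    not freeQuant's exact algorithm."""
    saf = {p: spectral_counts[p] / lengths[p] for p in spectral_counts}
    total = sum(saf.values())
    return {p: v / total for p, v in saf.items()}

counts = {"P1": 50, "P2": 50}     # hypothetical proteins, equal spectral counts
lengths = {"P1": 250, "P2": 750}  # residues
result = nsaf(counts, lengths)
print(round(result["P1"], 3))  # 0.75: same counts, but P1 is three times shorter
```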

  17. Input-output rearrangement of isolated converters

    DEFF Research Database (Denmark)

    Madsen, Mickey Pierre; Kovacevic, Milovan; Mønster, Jakob Døllner

    2015-01-01

    This paper presents a new way of rearranging the input and output of isolated converters. The new arrangement possesses several advantages, such as an increased voltage range, higher power handling capability, reduced voltage stress and improved efficiency, for applications where galvanic isolation...

  18. Multiple Input - Multiple Output (MIMO) SAR

    Data.gov (United States)

    National Aeronautics and Space Administration — This effort will research and implement advanced Multiple-Input Multiple-Output (MIMO) Synthetic Aperture Radar (SAR) techniques which have the potential to improve...

  19. Pore REconstruction and Segmentation (PORES) method for improved porosity quantification of nanoporous materials

    Energy Technology Data Exchange (ETDEWEB)

    Van Eyndhoven, G., E-mail: geert.vaneyndhoven@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Kurttepeli, M. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Van Oers, C.J.; Cool, P. [Laboratory of Adsorption and Catalysis, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Batenburg, K.J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1090 GB Amsterdam (Netherlands); Mathematical Institute, Universiteit Leiden, Niels Bohrweg 1, NL-2333 CA Leiden (Netherlands); Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-01-15

    Electron tomography is currently a versatile tool to investigate the connection between the structure and properties of nanomaterials. However, a quantitative interpretation of electron tomography results is still far from straightforward. In particular, accurate quantification of pore space is hampered by artifacts introduced in all steps of the processing chain, i.e., acquisition, reconstruction, segmentation and quantification. Furthermore, most common approaches require subjective manual user input. In this paper, the PORES algorithm “POre REconstruction and Segmentation” is introduced; it is a tailor-made, integral approach for the reconstruction, segmentation, and quantification of porous nanomaterials. The PORES processing chain starts by calculating a reconstruction with a nanoporous-specific reconstruction algorithm: the Simultaneous Update of Pore Pixels by iterative REconstruction and Simple Segmentation algorithm (SUPPRESS). It assigns the interior region to the pores during reconstruction, while reconstructing the remaining region by reducing the error with respect to the acquired electron microscopy data. The SUPPRESS reconstruction can be directly plugged into the remaining processing chain of the PORES algorithm, resulting in accurate individual pore quantification and full sample pore statistics. The proposed approach was extensively validated on both simulated and experimental data, indicating its ability to generate accurate statistics of nanoporous materials. - Highlights: • An electron tomography reconstruction/segmentation method for nanoporous materials. • The method exploits the porous nature of the scanned material. • Validated extensively on both simulation and real data experiments. • Results in increased image resolution and improved porosity quantification.

  20. Volume measurement study for large scale input accountancy tank

    International Nuclear Information System (INIS)

    Uchikoshi, Seiji; Watanabe, Yuichi; Tsujino, Takeshi

    1999-01-01

    The Large Scale Tank Calibration (LASTAC) facility, including an experimental tank with the same volume and structure as the input accountancy tank of the Rokkasho Reprocessing Plant (RRP), was constructed at the Nuclear Material Control Center of Japan. Demonstration experiments have been carried out to evaluate the precision of solution volume measurement and to establish a procedure for highly accurate pressure measurement in a large-scale tank with a dip-tube bubbler probe system, to be applied to the input accountancy tank of RRP. Solution volume in a tank is determined by substituting the measured solution level into a calibration function obtained in advance, which expresses the relation between the solution level and the volume in the tank. Precise solution volume measurement therefore requires a carefully determined calibration function. The LASTAC calibration experiments using pure water showed good reproducibility. (J.P.N.)
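Reading a volume off a previously determined calibration function, as described above, amounts to an interpolation between calibration points (the level/volume pairs below are hypothetical):

```python
from bisect import bisect_right

def volume_from_level(level, cal_levels, cal_volumes):
    """Piecewise-linear lookup of solution volume from the measured level,
    using a calibration function determined in advance. Illustrative only."""
    if not cal_levels[0] <= level <= cal_levels[-1]:
        raise ValueError("level outside calibrated range")
    i = min(bisect_right(cal_levels, level), len(cal_levels) - 1)
    l0, l1 = cal_levels[i - 1], cal_levels[i]
    v0, v1 = cal_volumes[i - 1], cal_volumes[i]
    return v0 + (v1 - v0) * (level - l0) / (l1 - l0)

levels = [0.0, 50.0, 100.0, 150.0]  # cm, from dip-tube pressure head
volumes = [0.0, 2.0, 4.1, 6.3]      # m^3
print(round(volume_from_level(75.0, levels, volumes), 2))  # 3.05
```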

  1. Estimating the input of wastewater-born micropollutants in a rural karst catchment (Gallusquelle, Germany)

    Science.gov (United States)

    Zirlewagen, Johannes; Hillebrand, Olav; Nödler, Karsten; Schiperski, Ferry; Scheytt, Traugott; Licha, Tobias

    2013-04-01

    The main focus of the AGRO research project is on the use of various micropollutants as indicators (e.g. for wastewater) in the catchment of the karst spring Gallusquelle, Swabian Alb. For modeling the micropollutants' fate in the subsurface and their occurrence in spring water, reliable estimates of the spatio-temporal input, i.e. input functions, are crucial. Therefore, potential sources of wastewater-born substances are identified. These are the combined sewer system with a stormwater retention basin (untreated wastewater) and the river Fehla (treated wastewater). The micropollutants' concentrations and loads in the potentially infiltrating waters are estimated on the one hand from local water and substance consumption data and on the other hand from water sample analysis and stream gauging. The spring's discharge varies from 0.2 to 2.0 m³/s with an average of 0.5 m³/s. Treated spring water serves as drinking water for 45 000 people. The catchment area measures 45 km² and is rural in character, with 55% forest, 27% grassland, 15% agriculture and 3% residential/industrial. Industrial activity is restricted to a few minor textile and metal works. There are around 4 000 inhabitants and, except for a few farms, all households are connected to the public sewer system. The only surface water within the catchment is the stream Fehla, which forms part of the catchment boundary. It was formerly identified as a sinking stream with an ephemeral part in the lower course. Connections to the Gallusquelle spring were proven by several tracer tests conducted in the 1960s, when the river started to become perennial over its whole course due to heavy colmatation. During a one-week campaign, samples of wastewater and river water were taken three times per day. Additionally, hourly samples were taken during a 24 h period. Water samples were analysed for major ions and 58 micropollutants, including pharmaceuticals, stimulants (such as caffeine), the artificial sweeteners acesulfame and
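Estimating loads from water sample analysis and stream gauging reduces to multiplying concentration by discharge with the appropriate unit conversions. A sketch with a hypothetical concentration, using the reported mean spring discharge of 0.5 m³/s:

```python
def daily_load_g(concentration_ng_per_L, discharge_m3_per_s):
    """Substance load in g/day from a measured concentration and discharge.
    Conversions: 1 m^3 = 1000 L, 1 ng = 1e-9 g, 86400 s per day."""
    return concentration_ng_per_L * 1e-9 * discharge_m3_per_s * 1000 * 86400

# hypothetical: 500 ng/L caffeine at the mean discharge of 0.5 m^3/s
print(round(daily_load_g(500, 0.5), 1))  # 21.6 g/day
```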

  2. Outsourcing, public Input provision and policy cooperation

    OpenAIRE

    Aronsson, Thomas; Koskela, Erkki

    2009-01-01

    This paper concerns public input provision as an instrument for redistribution under international outsourcing by using a model-economy comprising two countries, North and South, where firms in the North may outsource part of their low-skilled labor intensive production to the South. We consider two interrelated issues: (i) the incentives for each country to modify the provision of public input goods in response to international outsourcing, and (ii) whether international outsourcing justifie...

  3. Controlling uncertain neutral dynamic systems with delay in control input

    International Nuclear Information System (INIS)

    Park, Ju H.; Kwon, O.

    2005-01-01

    This article gives a novel criterion for the asymptotic stabilization of the zero solutions of a class of neutral systems with delays in the control input. By constructing Lyapunov functionals, we obtain a criterion expressed in terms of matrix inequalities. The inequalities can be solved efficiently by convex optimization algorithms. A numerical example is included to illustrate the design procedure of the proposed method.

  4. Conceptual Design of GRIG (GUI Based RETRAN Input Generator)

    International Nuclear Information System (INIS)

    Lee, Gyung Jin; Hwang, Su Hyun; Hong, Soon Joon; Lee, Byung Chul; Jang, Chan Su; Um, Kil Sup

    2007-01-01

    For the development of a high-performance methodology using an advanced transient analysis code, it is essential to generate the basic input of the transient analysis code under rigorous QA procedures. Various types of NPPs (Nuclear Power Plants) operate in Korea, such as Westinghouse plants, the KSNP (Korea Standard Nuclear Power Plant) and the APR1400 (Advanced Power Reactor), which makes it difficult to systematically generate and manage transient analysis code inputs reflecting the inherent characteristics of each plant type. To minimize user error and manpower investment, and to generate the basic inputs of the transient analysis code effectively and accurately for all domestic NPPs, a program is needed that can automatically generate, from NPP design material, basic input directly applicable to transient analysis. ViRRE (Visual RETRAN Running Environment), developed by KEPCO (Korea Electric Power Corporation) and KAERI (Korea Atomic Energy Research Institute), provides a convenient working environment for Kori Units 1/2. ViRRE shows the calculated results through an on-line display, but its capability is limited to the convenient execution of RETRAN, so it cannot be used as an input generator. ViSA (Visual System Analyzer), developed by KAERI, is an NPA (Nuclear Plant Analyzer) using the RETRAN and MARS codes as its thermal-hydraulic engines. ViSA contains both pre-processing and post-processing functions. In pre-processing, only the trip data cards and boundary conditions can be changed through the GUI, based on pre-prepared text input, so its input generation capability is very limited. SNAP (Symbolic Nuclear Analysis Package), developed by Applied Programming Technology, Inc. and the NRC (Nuclear Regulatory Commission), provides an efficient working environment for the use of nuclear safety analysis codes such as the RELAP5 and TRAC-M codes.
SNAP covers wide aspects of thermal-hydraulic analysis from model creation through data analysis

  5. Comparison of five DNA quantification methods

    DEFF Research Database (Denmark)

    Nielsen, Karsten; Mogensen, Helle Smidt; Hedman, Johannes

    2008-01-01

    Six commercial preparations of human genomic DNA were quantified using five quantification methods: UV spectrometry, SYBR-Green dye staining, slot blot hybridization with the probe D17Z1, the Quantifiler Human DNA Quantification kit and RB1 rt-PCR. All methods measured higher DNA concentrations than … Quantification kit in two experiments. The measured DNA concentrations with Quantifiler were 125 and 160% higher than expected based on the manufacturers' information. When the Quantifiler human DNA standard (Raji cell line) was replaced by the commercial human DNA preparation G147A (Promega) to generate the DNA … standard curve in the Quantifiler Human DNA Quantification kit, the DNA quantification results of the human DNA preparations were 31% higher than expected based on the manufacturers' information. The results indicate a calibration problem with the Quantifiler human DNA standard for its use…

  6. MODELING THE INDONESIAN CONSUMER PRICE INDEX USING A MULTI-INPUT INTERVENTION MODEL

    KAUST Repository

    Novianti, Putri Wikie; Suhartono, Suhartono

    2017-01-01

    …research to date has considered only interventions with a single input, either a step or a pulse function. A multi-input intervention was used in the Indonesian CPI case because several events are expected to affect the CPI. Based on the results, those

  7. Final Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Conrad, Patrick [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Bigoni, Daniele [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Parno, Matthew [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2017-06-09

    QUEST (www.quest-scidac.org) is a SciDAC Institute that is focused on uncertainty quantification (UQ) in large-scale scientific computations. Our goals are to (1) advance the state of the art in UQ mathematics, algorithms, and software; and (2) provide modeling, algorithmic, and general UQ expertise, together with software tools, to other SciDAC projects, thereby enabling and guiding a broad range of UQ activities in their respective contexts. QUEST is a collaboration among six institutions (Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University) with a history of joint UQ research. Our vision encompasses all aspects of UQ in leadership-class computing. This includes the well-founded setup of UQ problems; characterization of the input space given available data/information; local and global sensitivity analysis; adaptive dimensionality and order reduction; forward and inverse propagation of uncertainty; handling of application code failures, missing data, and hardware/software fault tolerance; and model inadequacy, comparison, validation, selection, and averaging. The nature of the UQ problem requires the seamless combination of data, models, and information across this landscape in a manner that provides a self-consistent quantification of requisite uncertainties in predictions from computational models. Accordingly, our UQ methods and tools span an interdisciplinary space across applied math, information theory, and statistics. The MIT QUEST effort centers on statistical inference and methods for surrogate or reduced-order modeling. MIT personnel have been responsible for the development of adaptive sampling methods, methods for approximating computationally intensive models, and software for both forward uncertainty propagation and statistical inverse problems.
A key software product of the MIT QUEST effort is the MIT

  8. IM-135-562-00 IDIM instruction manual for the isolated digital input module for SLC

    International Nuclear Information System (INIS)

    Kieffer, J.

    1983-01-01

    This unit is designed as a general purpose digital input module. Each input is opto-isolated, and is designed to operate over a wide range of positive input voltages. The unit is nonlatching, each CAMAC Read of the unit presenting the data as seen at the inputs at the time of the Read command. The manual includes the following sections: specifications; front panel, lights and connectors; reference list; functional description; 82S100 logic equations; test and checkout procedures; appendix A, SLAC 82S100 programming data; and appendix B, JXK-FORTH 135-562 program listing
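Since each CAMAC Read presents the input states as seen at read time (the module is non-latching), decoding a read word is a simple bitmask operation. The 16-channel width and bit ordering below are assumptions for illustration:

```python
def decode_inputs(word, n_channels=16):
    """Unpack a CAMAC read word into per-channel boolean input states.
    Channel count and bit order are illustrative assumptions, not the
    module's documented register map."""
    return [bool((word >> ch) & 1) for ch in range(n_channels)]

# e.g. a read returning 0x0005 means channels 0 and 2 are asserted
states = decode_inputs(0x0005)
print([ch for ch, on in enumerate(states) if on])  # [0, 2]
```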

  9. Optimal control of LQR for discrete time-varying systems with input delays

    Science.gov (United States)

    Yin, Yue-Zhu; Yang, Zhong-Lian; Yin, Zhi-Xiang; Xu, Feng

    2018-04-01

    In this work, we consider the optimal control problem of linear quadratic regulation for discrete time-varying systems with a single input and multiple input delays. An innovative and simple method to derive the optimal controller is given. The studied problem is first equivalently converted into a problem subject to a constraint condition. Then, using the established duality, the problem is transformed into a static mathematical optimisation problem without input delays. The optimal control input minimising the performance index is derived by solving this optimisation problem with two methods. A numerical simulation example is carried out, and its results show that our two approaches are both feasible and very effective.
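For reference, the delay-free finite-horizon LQR problem that the paper's duality transformation reduces to is solved by a backward Riccati recursion. A scalar sketch of that standard recursion (not the paper's delay-handling method itself):

```python
def lqr_gains(a, b, q, r, qf):
    """Finite-horizon discrete LQR for the scalar time-varying system
    x[k+1] = a[k]*x[k] + b[k]*u[k] with cost sum(q[k]*x^2 + r[k]*u^2) + qf*x_N^2.
    Backward Riccati recursion; returns gains for the feedback u[k] = -K[k]*x[k]."""
    N = len(a)
    p = qf
    gains = [0.0] * N
    for k in reversed(range(N)):
        K = b[k] * p * a[k] / (r[k] + b[k] * p * b[k])
        gains[k] = K
        p = q[k] + a[k] * p * (a[k] - b[k] * K)  # Riccati update
    return gains

# hypothetical unstable scalar plant regulated over a 4-step horizon
gains = lqr_gains([1.2] * 4, [1.0] * 4, [1.0] * 4, [1.0] * 4, 1.0)
x = 1.0
for K in gains:
    x = 1.2 * x - K * x
print(0.0 < x < 1.0)  # True: the closed loop contracts the state
```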

  10. Topographic and functional neuroanatomical study of GABAergic disinhibitory striatum-nigral inputs and inhibitory nigrocollicular pathways: neural hodology recruiting the substantia nigra, pars reticulata, for the modulation of the neural activity in the inferior colliculus involved with panic-like emotions.

    Science.gov (United States)

    Castellan-Baldan, Lissandra; da Costa Kawasaki, Mateus; Ribeiro, Sandro José; Calvo, Fabrício; Corrêa, Vani Maria Alves; Coimbra, Norberto Cysne

    2006-08-01

    Considering the influence of the substantia nigra on mesencephalic neurons involved with fear-induced reactions organized in rostral aspects of the dorsal midbrain, the present work investigated the topographical and functional neuroanatomy of similar influence on caudal division of the corpora quadrigemina, addressing: (a) the neural hodology connecting the neostriatum, the substantia nigra, periaqueductal gray matter and inferior colliculus (IC) neural networks; (b) the influence of the inhibitory neostriatonigral-nigrocollicular GABAergic links on the control of the defensive behavior organized in the IC. The effects of the increase or decrease of activity of nigrocollicular inputs on defensive responses elicited by either electrical or chemical stimulation of the IC were also determined. Electrolytic or chemical lesions of the substantia nigra, pars reticulata (SNpr), decreased the freezing and escape behaviors thresholds elicited by electrical stimulation of the IC, and increased the behavioral responses evoked by the GABAA blockade in the same sites of the mesencephalic tectum (MT) electrically stimulated. These findings were corroborated by similar effects caused by microinjections of the GABAA-receptor agonist muscimol in the SNpr, followed by electrical and chemical stimulations of the IC. The GABAA blockade in the SNpr caused a significant increase in the defensive behavior thresholds elicited by electrical stimulation of the IC and a decrease in the mean incidence of panic-like responses induced by microinjections of bicuculline in the mesencephalic tectum (inferior colliculus). These findings suggest that the substantia nigra receives GABAergic inputs that modulate local and also inhibitory GABAergic outputs toward the IC. In fact, neurotracing experiments with fast blue and iontophoretic microinjections of biotinylated dextran amine either into the inferior colliculus or in the reticular division of the substantia nigra demonstrated a neural link

  11. An Uncertainty Quantification Framework for Remote Sensing Retrievals

    Science.gov (United States)

    Braverman, A. J.; Hobbs, J.

    2017-12-01

    Remote sensing data sets produced by NASA and other space agencies are the result of complex algorithms that infer geophysical state from observed radiances using retrieval algorithms. The processing must keep up with the downlinked data flow, and this necessitates computational compromises that affect the accuracies of retrieved estimates. The algorithms are also limited by imperfect knowledge of physics and of the ancillary inputs that are required. All of this contributes to uncertainties that are generally not rigorously quantified, since quantifying them requires stepping outside the assumptions that underlie the retrieval methodology. In this talk we discuss a practical framework for uncertainty quantification that can be applied to a variety of remote sensing retrieval algorithms. Ours is a statistical approach that uses Monte Carlo simulation to approximate the sampling distribution of the retrieved estimates. We will discuss the strengths and weaknesses of this approach, and provide a case-study example from the Orbiting Carbon Observatory 2 mission.
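
    The Monte Carlo strategy described above can be sketched in a few lines: repeatedly perturb a toy forward model with noise, re-run the retrieval, and summarize the resulting sampling distribution. The linear model and all numbers below are illustrative assumptions, not values from any mission.

```python
import random
import statistics

# Toy linear forward model: radiance y = a*x + b + noise, retrieval by
# direct inversion. All values are illustrative assumptions.
a, b, true_state = 2.0, 1.0, 3.0
noise_sd = 0.2

def retrieve(y):
    """Invert the (noise-free) forward model for the state."""
    return (y - b) / a

random.seed(0)
estimates = [retrieve(a * true_state + b + random.gauss(0.0, noise_sd))
             for _ in range(10_000)]

# The Monte Carlo sample approximates the sampling distribution of the
# retrieved estimate; its mean offset and spread quantify bias and noise.
bias = statistics.mean(estimates) - true_state
spread = statistics.stdev(estimates)
print(f"bias ~ {bias:.4f}, spread ~ {spread:.4f}")
```

    For this linear toy model the spread should approach noise_sd / a; a real retrieval would replace `retrieve` with the full iterative inversion.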

  12. An EPGPT-based approach for uncertainty quantification

    International Nuclear Information System (INIS)

    Wang, C.; Abdel-Khalik, H. S.

    2012-01-01

    Generalized Perturbation Theory (GPT) has been widely used across scientific disciplines to perform sensitivity analysis and uncertainty quantification. This manuscript employs recent developments in GPT theory, collectively referred to as Exact-to-Precision Generalized Perturbation Theory (EPGPT), to enable uncertainty quantification for computationally challenging models, e.g. nonlinear models with many input parameters, many output responses, and general non-Gaussian parameter distributions. The core difference between EPGPT and existing GPT is in the way the problem is formulated. GPT formulates an adjoint problem that is dependent on the response of interest; it tries to capture via the adjoint solution the relationship between the response of interest and the constraints on the state variations. EPGPT recasts the problem in terms of a smaller set of so-called 'active' responses, which are solely dependent on the physics model and the boundary and initial conditions rather than on the responses of interest. The objective of this work is to apply an EPGPT methodology to propagate cross-section variations in typical reactor design calculations. The goal is to illustrate its use and the associated impact for situations where the typical Gaussian assumption for parameter uncertainties is not valid and when nonlinear behavior must be considered. To allow this demonstration, exaggerated variations are employed to stimulate nonlinear behavior in simple prototypical neutronics models. (authors)
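
    The failure mode the authors target, linear/Gaussian propagation applied to a nonlinear model with a non-Gaussian parameter, can be illustrated with a deliberately simple sketch. This is generic first-order sensitivity propagation, not the EPGPT algorithm; the model R(k) = k^2 and the lognormal parameter are assumptions for illustration.

```python
import random
import statistics

# A nonlinear response R(k) = k^2 with a non-Gaussian (lognormal)
# parameter k. First-order propagation around the nominal point
# underestimates the true output variance here.
def response(k):
    return k ** 2

k0 = 1.0            # nominal parameter value
sens = 2.0 * k0     # dR/dk evaluated at the nominal point

random.seed(1)
samples = [random.lognormvariate(0.0, 0.3) for _ in range(20_000)]

mc_var = statistics.variance([response(k) for k in samples])
lin_var = sens ** 2 * statistics.variance(samples)

print(f"Monte Carlo output variance: {mc_var:.4f}")
print(f"First-order (linear) estimate: {lin_var:.4f}")
```

    The gap between the two variance estimates grows with the nonlinearity of the response and the skewness of the parameter distribution, which is exactly the regime the abstract describes.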

  13. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity.

    Science.gov (United States)

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2012-12-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.
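
    A minimal sketch of the two ingredients, an inhomogeneous Poisson spike generator (via Lewis-Shedler thinning) and a noisy leaky integrate-and-fire neuron, is given below. All rates, weights, and time constants are arbitrary illustrative values, not fitted to LGN or V1 data.

```python
import math
import random

random.seed(2)

def poisson_spikes(rate_fn, rate_max, duration):
    """Inhomogeneous Poisson spikes by Lewis-Shedler thinning: draw
    candidates at rate_max, keep each with prob rate_fn(t)/rate_max."""
    spikes, t = [], 0.0
    while True:
        t += random.expovariate(rate_max)
        if t > duration:
            return spikes
        if random.random() < rate_fn(t) / rate_max:
            spikes.append(t)

def lif_spikes(input_spikes, duration, dt=1e-4, tau=0.02,
               v_th=1.0, w=0.5, noise_sd=0.05):
    """Euler-integrated noisy leaky integrate-and-fire neuron receiving
    instantaneous excitatory kicks of size w per input spike."""
    v, out, idx = 0.0, [], 0
    for i in range(int(duration / dt)):
        t = i * dt
        v += -v / tau * dt + noise_sd * math.sqrt(dt) * random.gauss(0.0, 1.0)
        while idx < len(input_spikes) and input_spikes[idx] <= t:
            v += w
            idx += 1
        if v >= v_th:
            out.append(t)
            v = 0.0
    return out

def rate(t):
    """Sinusoidally modulated firing rate in Hz (illustrative)."""
    return 40.0 * (1.0 + math.sin(2.0 * math.pi * 4.0 * t))

lgn = poisson_spikes(rate, rate_max=80.0, duration=2.0)
v1 = lif_spikes(lgn, duration=2.0)
print(len(lgn), "LGN spikes ->", len(v1), "V1 spikes")
```

    Replacing `poisson_spikes` with spike trains drawn from a population of NLIF cells is the substitution the paper studies; the point of the comparison is that the input statistics, not just the mean rate, shape the downstream response.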

  14. Distributed Optimal Consensus Control for Multiagent Systems With Input Delay.

    Science.gov (United States)

    Zhang, Huaipin; Yue, Dong; Zhao, Wei; Hu, Songlin; Dou, Chunxia

    2018-06-01

    This paper addresses the problem of distributed optimal consensus control for a continuous-time heterogeneous linear multiagent system subject to time-varying input delays. First, by discretization and model transformation, the continuous-time input-delayed system is converted into a discrete-time delay-free system, and a performance index function is defined for each of the two systems. It is shown that the performance index functions are equivalent, so that the optimal consensus control problem of the input-delayed system can be cast into that of the delay-free system. Second, by virtue of the Hamilton-Jacobi-Bellman (HJB) equations, an optimal control policy for each agent is designed based on the delay-free system, and a novel value iteration algorithm is proposed to learn the solutions to the HJB equations online. The proposed adaptive dynamic programming algorithm is implemented on the basis of a critic-action neural network (NN) structure. Third, it is proved that the local consensus errors of the two systems and the weight estimation errors of the critic-action NNs are uniformly ultimately bounded, while the approximated control policies converge to their target values. Finally, two simulation examples are presented to illustrate the effectiveness of the developed method.
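
    The value-iteration idea can be illustrated on the simplest possible case: a scalar, delay-free, discrete-time LQR problem, where the Bellman backup has a closed form (the Riccati recursion). This is only a sketch of the iterative scheme, not the paper's multiagent, delayed, neural-network algorithm; all numbers are assumed.

```python
# Scalar discrete-time LQR: dynamics x' = a*x + b*u, stage cost
# q*x^2 + r*u^2, quadratic value function V(x) = p*x^2. Value iteration
# repeats the Bellman backup until p converges to the Riccati fixed point.
a, b, q, r = 1.1, 0.5, 1.0, 1.0

p = 0.0   # V(x) = p*x^2, initialized at zero
for _ in range(200):
    # Bellman backup: min_u [q*x^2 + r*u^2 + p*(a*x + b*u)^2] = p_next*x^2
    p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)

gain = a * b * p / (r + b * b * p)   # optimal feedback u = -gain * x
print(f"p* ~ {p:.4f}, feedback gain ~ {gain:.4f}")
```

    The paper's algorithm performs conceptually the same backup, but on coupled multiagent HJB equations whose solutions are approximated online by critic-action networks instead of being computed in closed form.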

  15. Shaped input distributions for structural damage localization

    DEFF Research Database (Denmark)

    Ulriksen, Martin Dalgaard; Bernal, Dionisio; Damkilde, Lars

    2018-01-01

    …localization method is cast that operates on the premise of shaping inputs—whose spatial distribution is fixed—by use of a model, such that these inputs, in one structural subdomain at a time, suppress certain steady-state vibration quantities (depending on the type of damage one seeks to interrogate for…). Accordingly, damage is localized when the vibration signature induced by the shaped inputs in the damaged state corresponds to that in the reference state, hereby implying that the approach does not point directly to damage. Instead, it operates with interrogation based on postulated damage patterns…

  16. A parallel input composite transimpedance amplifier

    Science.gov (United States)

    Kim, D. J.; Kim, C.

    2018-01-01

    A new approach to high-performance current-to-voltage preamplifier design is presented. The design, using multiple operational amplifiers (op-amps), has a parasitic-capacitance compensation network and a composite amplifier topology for fast, precise, low-noise performance. The input stage, consisting of parallel-linked JFET op-amps, and a high-speed bipolar junction transistor (BJT) gain stage driving the output in the composite amplifier topology, cooperating with the capacitance-compensation feedback network, ensure wide-bandwidth stability in the presence of input capacitance above 40 nF. The design is ideal for any two-probe measurement, including high-impedance transport and scanning tunneling microscopy measurements.
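
    The stability concern with large input capacitance can be made concrete with a standard back-of-envelope transimpedance calculation. The rule of thumb below and all component values are assumptions for illustration, not the paper's design.

```python
import math

# Assumed example values: 1 Mohm transimpedance, 40 nF input capacitance
# (the regime quoted in the abstract), and a 10 MHz gain-bandwidth op-amp.
R_f = 1e6        # feedback resistor, ohms
C_in = 40e-9     # total input capacitance, farads
gbw = 10e6       # op-amp gain-bandwidth product, Hz

# Common rule of thumb for the feedback capacitor giving a roughly
# Butterworth-like closed-loop response: C_f = sqrt(C_in / (2*pi*R_f*GBW)).
C_f = math.sqrt(C_in / (2.0 * math.pi * R_f * gbw))
f_3db = 1.0 / (2.0 * math.pi * R_f * C_f)   # resulting signal bandwidth

print(f"C_f ~ {C_f * 1e12:.0f} pF, bandwidth ~ {f_3db:.0f} Hz")
```

    The calculation shows why tens of nanofarads at the input force either a large compensation capacitor (killing bandwidth) or a composite topology like the paper's, which raises the effective gain-bandwidth product instead.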

  17. Nuclear reaction inputs based on effective interactions

    Energy Technology Data Exchange (ETDEWEB)

    Hilaire, S.; Peru, S.; Dubray, N.; Dupuis, M.; Bauge, E. [CEA, DAM, DIF, Arpajon (France)]; Goriely, S. [Universite Libre de Bruxelles, Institut d'Astronomie et d'Astrophysique, CP-226, Brussels (Belgium)]

    2016-11-15

    Extensive nuclear structure studies have been performed for decades using effective interactions as the sole input. They have shown a remarkable ability to describe rather accurately many types of nuclear properties. In the early 2000s, a major effort was launched to produce nuclear reaction input data from the Gogny interaction, in order to challenge its quality also with respect to nuclear reaction observables. The status of this project, well advanced today thanks to the use of modern computers as well as modern nuclear reaction codes, is reviewed and future developments are discussed. (orig.)

  18. Simulation of a Multidimensional Input Quantum Perceptron

    Science.gov (United States)

    Yamamoto, Alexandre Y.; Sundqvist, Kyle M.; Li, Peng; Harris, H. Rusty

    2018-06-01

    In this work, we demonstrate the improved data separation capabilities of the Multidimensional Input Quantum Perceptron (MDIQP), a fundamental cell for the construction of more complex Quantum Artificial Neural Networks (QANNs). This is done by using input controlled alterations of ancillary qubits in combination with phase estimation and learning algorithms. The MDIQP is capable of processing quantum information and classifying multidimensional data that may not be linearly separable, extending the capabilities of the classical perceptron. With this powerful component, we get much closer to the achievement of a feedforward multilayer QANN, which would be able to represent and classify arbitrary sets of data (both quantum and classical).

  19. Load Estimation from Natural input Modal Analysis

    DEFF Research Database (Denmark)

    Aenlle, Manuel López; Brincker, Rune; Canteli, Alfonso Fernández

    2005-01-01

    One application of Natural Input Modal Analysis consists in estimating the unknown loads acting on structures, such as wind loads, wave loads, traffic loads, etc. In this paper, a procedure to determine loading from a truncated modal model, as well as the results of an experimental testing programme… estimation. In the experimental program a small structure subjected to vibration was used to estimate the loading from the measurements and the experimental modal space. The modal parameters were estimated by Natural Input Modal Analysis and the scaling factors of the mode shapes obtained by the mass change…

  20. Efficient uncertainty quantification in fully-integrated surface and subsurface hydrologic simulations

    Science.gov (United States)

    Miller, K. L.; Berg, S. J.; Davison, J. H.; Sudicky, E. A.; Forsyth, P. A.

    2018-01-01

    Although high performance computers and advanced numerical methods have made the application of fully-integrated surface and subsurface flow and transport models such as HydroGeoSphere commonplace, run times for large complex basin models can still be on the order of days to weeks, thus limiting the usefulness of traditional workhorse algorithms for uncertainty quantification (UQ) such as Latin Hypercube simulation (LHS) or Monte Carlo simulation (MCS), which generally require thousands of simulations to achieve an acceptable level of accuracy. In this paper we investigate non-intrusive polynomial chaos expansion (PCE) for uncertainty quantification, which, in contrast to random sampling methods (e.g., LHS and MCS), represents a model response of interest as a weighted sum of polynomials over the random inputs. Once a chaos expansion has been constructed, approximating the mean, covariance, probability density function, cumulative distribution function, and other common statistics, as well as local and global sensitivity measures, is straightforward and computationally inexpensive, thus making PCE an attractive UQ method for hydrologic models with long run times. Our polynomial chaos implementation was validated through comparison with analytical solutions as well as solutions obtained via LHS for simple numerical problems.
It was then used to quantify parametric uncertainty in a series of numerical problems with increasing complexity, including a two-dimensional fully-saturated, steady flow and transient transport problem with six uncertain parameters and one quantity of interest; a one-dimensional variably-saturated column test involving transient flow and transport, four uncertain parameters, and two quantities of interest at 101 spatial locations and five different times each (1010 total); and a three-dimensional fully-integrated surface and subsurface flow and transport problem for a small test catchment involving seven uncertain parameters and three quantities of interest at

  1. Reactor protection system software test-case selection based on input-profile considering concurrent events and uncertainties

    International Nuclear Information System (INIS)

    Khalaquzzaman, M.; Lee, Seung Jun; Cho, Jaehyun; Jung, Wondea

    2016-01-01

    Recently, input-profile-based testing for safety-critical software has been proposed for determining the number of test cases and quantifying the failure probability of the software. The input-profile of reactor protection system (RPS) software is the input which causes activation of the system for emergency shutdown of a reactor. This paper presents a method to determine the input-profile of RPS software that considers concurrent events/transients. A deviation of a process parameter value begins with an event and increases owing to concurrent multiple events, depending on the correlation of process parameters and the severity of incidents. A case of reactor trip caused by feedwater loss and a main steam line break is simulated and analyzed to determine the RPS software input-profile and estimate the number of test cases. Different sizes of main steam line breaks (e.g., small, medium, large) combined with total loss of feedwater supply are considered in constructing the input-profile. The uncertainties of the simulation related to input-profile-based software testing are also included. Our study is expected to provide an option for determining test cases and quantifying RPS software failure probability. (author)

  2. Development and operation of K-URT data input system

    International Nuclear Information System (INIS)

    Kim, Yun Jae; Myoung, Noh Hoon; Kim, Jong Hyun; Han, Jae Jun

    2010-05-01

    Activities for TSPA (Total System Performance Assessment) of the permanent disposal of high-level radioactive waste include production of input data, safety assessment using those input data, the licensing procedure, and others. These activities are performed in 5 steps: (1) adequate planning, (2) controlled execution, (3) complete documentation, (4) thorough review, (5) independent oversight. For confidence building, it is very important to record and manage the materials obtained from research works transparently. For the documentation of disposal research work from the planning stage to the data management stage, KAERI developed CYPRUS (CYBER R&D Platform for Radwaste Disposal in Underground System) with a QA (Quality Assurance) system. In CYPRUS, the QA system affects other functions such as data management, project management and others. This report analyzes the structure of CYPRUS and proposes to accumulate qualified data, to provide a convenient application and to promote access and use of CYPRUS for a future-oriented system

  3. Rapid quantification of vesicle concentration for DOPG/DOPC and Cardiolipin/DOPC mixed lipid systems of variable composition.

    Science.gov (United States)

    Elmer-Dixon, Margaret M; Bowler, Bruce E

    2018-05-19

    A novel approach to quantify mixed lipid systems is described. Traditional approaches to lipid vesicle quantification are time consuming, require large amounts of material and are destructive. We extend our recently described method for quantification of pure lipid systems to mixed lipid systems. The method only requires a UV-Vis spectrometer and does not destroy the sample. Mie scattering data from absorbance measurements are used as input to a Matlab program that calculates the total vesicle concentration and the concentrations of each lipid in the mixed lipid system. The technique is fast and accurate, which is essential for analytical lipid binding experiments. Copyright © 2018. Published by Elsevier Inc.

  4. Exponential convergence rate (the spectral convergence) of the fast Padé transform for exact quantification in magnetic resonance spectroscopy

    International Nuclear Information System (INIS)

    Belkic, Dzevad

    2006-01-01

    This study deals with the most challenging numerical aspect for solving the quantification problem in magnetic resonance spectroscopy (MRS). The primary goal is to investigate whether it could be feasible to carry out a rigorous computation within finite arithmetics to reconstruct exactly all the machine accurate input spectral parameters of every resonance from a synthesized noiseless time signal. We also consider simulated time signals embedded in random Gaussian distributed noise of the level comparable to the weakest resonances in the corresponding spectrum. The present choice for this high-resolution task in MRS is the fast Padé transform (FPT). All the sought spectral parameters (complex frequencies and amplitudes) can unequivocally be reconstructed from a given input time signal by using the FPT. Moreover, the present computations demonstrate that the FPT can achieve the spectral convergence, which represents the exponential convergence rate as a function of the signal length for a fixed bandwidth. Such an extraordinary feature equips the FPT with the exemplary high-resolution capabilities that are, in fact, theoretically unlimited. This is illustrated in the present study by the exact reconstruction (within machine accuracy) of all the spectral parameters from an input time signal comprised of 25 harmonics, i.e. complex damped exponentials, including those for tightly overlapped and nearly degenerate resonances whose chemical shifts differ by an exceedingly small fraction of only 10^-11 ppm. Moreover, without exhausting even a quarter of the full signal length, the FPT is shown to retrieve exactly all the input spectral parameters defined with 12 digits of accuracy. Specifically, we demonstrate that when the FPT is close to the convergence region, an unprecedented phase transition occurs, since literally a few additional signal points are sufficient to reach the full 12 digit accuracy with the exponentially fast rate of convergence. This is the critical
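
    The FPT itself is beyond a short sketch, but the class of problem it solves, exact recovery of complex damped exponentials from a noiseless time signal, can be illustrated with a minimal Prony-type linear-prediction scheme (a deliberately simpler, swapped-in technique; the poles and amplitudes below are arbitrary illustrative values).

```python
import cmath

# Two complex damped exponentials c_n = d1*z1^n + d2*z2^n; four noiseless
# samples suffice to recover poles and amplitudes exactly.
z1 = cmath.exp(complex(-0.05, 0.9))
z2 = cmath.exp(complex(-0.02, 0.4))
d1, d2 = 1.0 + 0.5j, 0.7 - 0.2j
c = [d1 * z1 ** n + d2 * z2 ** n for n in range(4)]

# Linear prediction: c[n+2] = -p1*c[n+1] - p2*c[n]; solve the 2x2 system.
det = c[1] * c[1] - c[0] * c[2]
p1 = (c[0] * c[3] - c[1] * c[2]) / det
p2 = (c[2] * c[2] - c[1] * c[3]) / det

# The signal poles are the roots of z^2 + p1*z + p2 = 0.
disc = cmath.sqrt(p1 * p1 - 4.0 * p2)
r1, r2 = (-p1 + disc) / 2.0, (-p1 - disc) / 2.0

# Amplitudes from the first two samples: c0 = a1 + a2, c1 = a1*r1 + a2*r2.
a2 = (c[1] - r1 * c[0]) / (r2 - r1)
a1 = c[0] - a2
print("pole 1:", r1, "amplitude:", a1)
print("pole 2:", r2, "amplitude:", a2)
```

    With noiseless data the recovery is exact to machine precision; the FPT's contribution is doing this robustly for dozens of tightly overlapped resonances and converging exponentially fast in the signal length.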

  5. A Method to Select Software Test Cases in Consideration of Past Input Sequence

    International Nuclear Information System (INIS)

    Kim, Hee Eun; Kim, Bo Gyung; Kang, Hyun Gook

    2015-01-01

    In the Korea Nuclear I and C Systems (KNICS) project, the software for the fully-digitalized reactor protection system (RPS) was developed under a strict procedure. Even though the behavior of the software is deterministic, the randomness of the input sequence produces probabilistic behavior of the software. A software failure occurs when some inputs to the software occur and interact with the internal state of the digital system to trigger a fault that was introduced into the software during the software lifecycle. In this paper, a method to select a test set for software failure probability estimation is suggested. This test set reflects the past input sequence of the software and covers all possible cases. To obtain the profile of paired state variables, the relationships of the variables need to be considered, and the effect of input from the human operator also has to be considered. As an example, a test set for the PZR-PR-Lo-Trip logic was examined. This method provides a framework for selecting test cases of safety-critical software

  6. 7 CFR 3430.907 - Stakeholder input.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Stakeholder input. 3430.907 Section 3430.907 Agriculture Regulations of the Department of Agriculture (Continued) COOPERATIVE STATE RESEARCH, EDUCATION, AND EXTENSION SERVICE, DEPARTMENT OF AGRICULTURE COMPETITIVE AND NONCOMPETITIVE NON-FORMULA FEDERAL...

  7. Input and Intake in Language Acquisition

    Science.gov (United States)

    Gagliardi, Ann C.

    2012-01-01

    This dissertation presents an approach for a productive way forward in the study of language acquisition, sealing the rift between claims of an innate linguistic hypothesis space and powerful domain general statistical inference. This approach breaks language acquisition into its component parts, distinguishing the input in the environment from…

  8. A summary of WIMSD4 input option

    International Nuclear Information System (INIS)

    Halsall, M.J.

    1980-07-01

    A description is given of all the available input data options in the ICL 4/70 and IBM 370 versions of WIMSD4, with little more than a reference where there is already adequate documentation but with rather more detail where no such documentation exists. (author)

  9. Representations of space based on haptic input

    NARCIS (Netherlands)

    Zuidhoek, S.

    2005-01-01

    The present thesis focused on the representations of grasping space based on haptic input. We aimed at identifying their characteristics, and the underlying neurocognitive processes and mechanisms. To this end, we studied the systematic distortions in performance on several orientation perception

  10. Hydrogen Generation Rate Model Calculation Input Data

    International Nuclear Information System (INIS)

    KUFAHL, M.A.

    2000-01-01

    This report documents the procedures and techniques utilized in the collection and analysis of analyte input data values in support of the flammable gas hazard safety analyses. This document represents the analyses of data current at the time of its writing and does not account for data available since then

  11. Leaders’ receptivity to subordinates’ creative input: the role of achievement goals and composition of creative input

    NARCIS (Netherlands)

    Sijbom, R.B.L.; Janssen, O.; van Yperen, N.W.

    2015-01-01

    We identified leaders’ achievement goals and composition of creative input as important factors that can clarify when and why leaders are receptive to, and supportive of, subordinates’ creative input. As hypothesized, in two experimental studies, we found that relative to mastery goal leaders,

  12. Quantification of the vocal folds’ dynamic displacements

    International Nuclear Information System (INIS)

    Hernández-Montes, María del Socorro; Muñoz, Silvino; De La Torre, Manuel; Flores, Mauricio; Pérez, Carlos; Mendoza-Santoyo, Fernando

    2016-01-01

    Fast dynamic data acquisition techniques are required to investigate the motional behavior of the vocal folds (VFs) when they are subjected to a steady air-flow through the trachea. High-speed digital holographic interferometry (DHI) is a non-invasive full-field-of-view technique that has proved its usefulness to study rapid and non-repetitive object movements. Hence it is an ideal technique used here to measure VF displacements and vibration patterns at 2000 fps. Analyses from a set of 200 displacement images showed that VFs’ vibration cycles are established along their width (y) and length (x). Furthermore, the maximum deformation for the right and left VFs’ area may be quantified from these images, which in itself represents an important result in the characterization of this structure. At a controlled air pressure, VF displacements fall within the range ∼100–1740 nm, with a calculated precision and accuracy that yields a variation coefficient of 1.91%. High-speed acquisition of full-field images of VFs and their displacement quantification are on their own significant data in the study of their functional and physiological behavior since voice quality and production depend on how they vibrate, i.e. their displacement amplitude and frequency. Additionally, the use of high speed DHI avoids prolonged examinations and represents a significant scientific and technological alternative contribution in advancing the knowledge and working mechanisms of these tissues. (paper)
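
    The link between measured interferometric phase and displacement can be sketched with the standard out-of-plane DHI sensitivity relation d = λΔφ/(4π), which assumes near-normal illumination and observation; the wavelength below is an assumed value (a typical green laser), not stated in the abstract.

```python
import math

# Out-of-plane DHI sensitivity: displacement d = lambda * dphi / (4*pi),
# valid for near-normal illumination and observation geometry.
wavelength = 532e-9   # metres (assumed)

for dphi in (math.pi / 2.0, math.pi, 4.0 * math.pi):
    d = wavelength * dphi / (4.0 * math.pi)
    print(f"dphi = {dphi:6.3f} rad -> d = {d * 1e9:6.1f} nm")
```

    Under this assumption, one full 2π phase cycle corresponds to half a wavelength of displacement, so the reported ~100-1740 nm range spans several interference fringes per frame.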

  14. Characterizing stroke lesions using digital templates and lesion quantification tools in a web-based imaging informatics system for a large-scale stroke rehabilitation clinical trial

    Science.gov (United States)

    Wang, Ximing; Edwardson, Matthew; Dromerick, Alexander; Winstein, Carolee; Wang, Jing; Liu, Brent

    2015-03-01

    Previously, we presented an Interdisciplinary Comprehensive Arm Rehabilitation Evaluation (ICARE) imaging informatics system that supports a large-scale phase III stroke rehabilitation trial. The ePR system is capable of displaying anonymized patient imaging studies and reports, and the system is accessible to multiple clinical trial sites and users across the United States via the web. However, prior multicenter stroke rehabilitation trials lacked any significant neuroimaging analysis infrastructure. In stroke-related clinical trials, identification of stroke lesion characteristics can be meaningful, as recent research shows that lesion characteristics are related to stroke scale and functional recovery after stroke. To facilitate stroke clinical trials, we hope to gain insight into specific lesion characteristics, such as vascular territory, for patients enrolled in large stroke rehabilitation trials. To enhance the system's capability for data analysis and reporting, we have integrated new features into the system: a digital brain template display, a lesion quantification tool and a digital case report form. The digital brain templates are compiled from published vascular territory templates at each of 5 angles of incidence. These templates were updated to include territories in the brainstem using a vascular territory atlas and the Medical Image Processing, Analysis and Visualization (MIPAV) tool. The digital templates are displayed for side-by-side comparison and transparent template overlay onto patients' images in the image viewer. The lesion quantification tool quantifies planimetric lesion area from a user-defined contour. The digital case report form stores user input in a database, then displays its contents in the interface to allow reviewing, editing, and new inputs. In sum, the newly integrated system features provide the user with readily-accessible web-based tools to identify the vascular territory involved, estimate lesion area

  15. Standardless quantification by parameter optimization in electron probe microanalysis

    Energy Technology Data Exchange (ETDEWEB)

    Limandri, Silvina P. [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina); Bonetto, Rita D. [Centro de Investigacion y Desarrollo en Ciencias Aplicadas Dr. Jorge Ronco (CINDECA), CONICET, 47 Street 257, (1900) La Plata (Argentina); Facultad de Ciencias Exactas, Universidad Nacional de La Plata, 1 and 47 Streets (1900) La Plata (Argentina); Josa, Victor Galvan; Carreras, Alejo C. [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina); Trincavelli, Jorge C., E-mail: trincavelli@famaf.unc.edu.ar [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina)

    2012-11-15

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested in a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 in 66% of the cases for POEMA, GENESIS and DTSA, respectively. - Highlights: • A method for standardless quantification in EPMA is presented. • It gives better results than the commercial software GENESIS Spectrum. • It gives better results than the software DTSA. • It allows the determination of the conductive coating thickness. • It gives an estimation for the concentration uncertainties.
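
    The core idea, adjusting model parameters to minimize the quadratic difference between an experimental spectrum and an analytical prediction, can be sketched on a one-peak toy spectrum. This is a drastic simplification of POEMA: the Gaussian model, the brute-force grid search, and all values are illustrative assumptions.

```python
import math

# Synthetic one-peak "spectrum" and a least-squares fit of the peak's
# amplitude and width by brute-force grid search.
energies = [0.1 * i for i in range(100)]   # keV grid (illustrative)
A_true, mu, sigma_true = 50.0, 5.0, 0.3

def peak(e, amp, sigma):
    return amp * math.exp(-0.5 * ((e - mu) / sigma) ** 2)

spectrum = [peak(e, A_true, sigma_true) for e in energies]

def chi2(amp, sigma):
    """Quadratic difference between data and model."""
    return sum((s - peak(e, amp, sigma)) ** 2
               for e, s in zip(energies, spectrum))

best = min(((chi2(a, s), a, s)
            for a in range(30, 71)                          # amplitude grid
            for s in (0.1 + 0.02 * j for j in range(26))),  # width grid
           key=lambda t: t[0])
_, A_fit, sigma_fit = best
print(f"fitted amplitude = {A_fit}, fitted width = {sigma_fit:.2f}")
```

    A real implementation fits many characteristic lines, bremsstrahlung background, and detector effects simultaneously with a proper optimizer, and propagates the fit covariance into concentration uncertainties.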

  16. A representation result for hysteresis operators with vector valued inputs and its application to models for magnetic materials

    Energy Technology Data Exchange (ETDEWEB)

    Klein, Olaf, E-mail: Olaf.Klein@wias-berlin.de

    2014-02-15

    In this work, hysteresis operators mapping continuous vector-valued input functions that are piecewise monotaffine, i.e. piecewise the composition of a monotone with an affine function, to vector-valued output functions are considered. It is shown that the operator can be generated by a uniquely defined function on the convexity triple free strings. A formulation of a congruence property for periodic inputs is presented and reformulated as a condition on the generating string function.

  17. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge

    Science.gov (United States)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.

  18. Uncertainty Quantification in Aerodynamics Simulations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of the proposed work (Phases I and II) is to develop uncertainty quantification methodologies and software suitable for use in CFD simulations of...

  19. Quantification of virus syndrome in chili peppers

    African Journals Online (AJOL)

    Jane

    2011-06-15

    Jun 15, 2011 ... alternative for the quantification of the disease syndromes in regards to this crop. The result of these ... comparison of treatments such as cultivars or control measures ... Vascular discoloration and stem necrosis.

  20. Low cost high performance uncertainty quantification

    KAUST Repository

    Bekas, C.; Curioni, A.; Fedulova, I.

    2009-01-01

    Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques, that employ matrix factorizations, incur a cubic cost

  1. Computational Techniques for Model Predictive Control of Large-Scale Systems with Continuous-Valued and Discrete-Valued Inputs

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2013-01-01

    Full Text Available We propose computational techniques for model predictive control of large-scale systems with both continuous-valued control inputs and discrete-valued control inputs, which are a class of hybrid systems. In the proposed method, we introduce the notion of virtual control inputs, which are obtained by relaxing discrete-valued control inputs to continuous variables. In online computation, first, we find continuous-valued control inputs and virtual control inputs minimizing a cost function. Next, using the obtained virtual control inputs, only discrete-valued control inputs at the current time are computed in each subsystem. In addition, we also discuss the effect of quantization errors. Finally, the effectiveness of the proposed method is shown by a numerical example. The proposed method enables us to reduce and decentralize the computation load.
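
The relax-then-round idea behind the virtual control inputs can be sketched on a one-step toy problem (the cost matrix, discrete set, and dimensions below are invented, not taken from the paper): the discrete-valued input is first treated as a continuous "virtual" variable, the relaxed problem is solved, and the virtual value is then projected back onto the discrete set.

```python
import numpy as np

# Toy quadratic cost 0.5*u^T Q u + c^T u over u = (u_cont, u_virtual), where
# u_virtual is the continuous relaxation of a discrete-valued input in {0, 1}.
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # positive-definite cost Hessian (illustrative)
c = np.array([-1.0, -0.8])   # linear cost term (illustrative)

# Step 1: solve the relaxed problem (unconstrained quadratic minimization)
u_relaxed = np.linalg.solve(Q, -c)
u_cont, u_virtual = u_relaxed

# Step 2: project the virtual input onto the discrete set {0, 1}
discrete_set = [0.0, 1.0]
u_disc = min(discrete_set, key=lambda d: abs(d - u_virtual))

print(round(float(u_cont), 3), u_disc)
```

The paper's method additionally decentralizes step 2 across subsystems and analyzes the quantization error introduced by the projection; this sketch only shows the two-phase structure.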

  2. Positron emission tomography quantification of serotonin transporter in suicide attempters with major depressive disorder.

    Science.gov (United States)

    Miller, Jeffrey M; Hesselgrave, Natalie; Ogden, R Todd; Sullivan, Gregory M; Oquendo, Maria A; Mann, J John; Parsey, Ramin V

    2013-08-15

    Several lines of evidence implicate abnormal serotonergic function in suicidal behavior and completed suicide, including low serotonin transporter binding in postmortem studies of completed suicide. We have also reported low in vivo serotonin transporter binding in major depressive disorder (MDD) during a major depressive episode using positron emission tomography (PET) with [(11)C]McN5652. We quantified regional brain serotonin transporter binding in vivo in depressed suicide attempters, depressed nonattempters, and healthy controls using PET and a superior radiotracer, [(11)C]DASB. Fifty-one subjects with DSM-IV current MDD, 15 of whom were past suicide attempters, and 32 healthy control subjects underwent PET scanning with [(11)C]DASB to quantify in vivo regional brain serotonin transporter binding. Metabolite-corrected arterial input functions and plasma free-fraction were acquired to improve quantification. Depressed suicide attempters had lower serotonin transporter binding in midbrain compared with depressed nonattempters (p = .031) and control subjects (p = .0093). There was no difference in serotonin transporter binding comparing all depressed subjects with healthy control subjects considering six a priori regions of interest simultaneously (p = .41). Low midbrain serotonin transporter binding appears to be related to the pathophysiology of suicidal behavior rather than of major depressive disorder. This is consistent with postmortem work showing low midbrain serotonin transporter binding capacity in depressed suicides and may partially explain discrepant in vivo findings quantifying serotonin transporter in depression. Future studies should investigate midbrain serotonin transporter binding as a predictor of suicidal behavior in MDD and determine the cause of low binding. Copyright © 2013 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  3. Joint analysis of input and parametric uncertainties in watershed water quality modeling: A formal Bayesian approach

    Science.gov (United States)

    Han, Feng; Zheng, Yi

    2018-06-01

    Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead the modeling-based management decisions, if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.
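
A simplified sketch of the error model's structure: the residuals are standardized by a heteroscedastic scale and whitened against lag-1 autocorrelation before the likelihood is evaluated. The SEP density of the paper is replaced by a Gaussian here purely for brevity, and all coefficients are illustrative placeholders.

```python
import numpy as np

def log_likelihood(obs, sim, phi=0.5, a=0.1, b=0.05):
    """Gaussian stand-in for a lag-1 autocorrelated, heteroscedastic
    likelihood (the BAIPU uses an SEP density instead)."""
    resid = obs - sim
    sigma = a + b * np.abs(sim)            # heteroscedastic standard deviation
    eta = resid / sigma                    # standardized residuals
    innov = eta[1:] - phi * eta[:-1]       # whiten lag-1 autocorrelation
    innov_sd = np.sqrt(1.0 - phi ** 2)
    ll = -0.5 * np.sum((innov / innov_sd) ** 2)
    ll -= np.sum(np.log(sigma[1:])) + len(innov) * np.log(innov_sd)
    ll -= 0.5 * len(innov) * np.log(2 * np.pi)
    return ll

obs = np.linspace(1.0, 2.0, 10)
# A perfect simulation scores higher than a biased one, as expected
print(log_likelihood(obs, obs) > log_likelihood(obs, obs + 0.5))
```

In the full method this likelihood is evaluated inside the MCMC sampler, with the error-model coefficients themselves treated as uncertain.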

  4. Lung involvement quantification in chest radiographs

    International Nuclear Information System (INIS)

    Giacomini, Guilherme; Alvarez, Matheus; Oliveira, Marcela de; Miranda, Jose Ricardo A.; Pina, Diana R.; Pereira, Paulo C.M.; Ribeiro, Sergio M.

    2014-01-01

    Tuberculosis (TB), caused by Mycobacterium tuberculosis, is an infectious disease which remains a global health problem. Chest radiography is the method commonly employed to assess TB's evolution. Methods for quantification of chest abnormalities are usually applied to computed tomography (CT) scans. This quantification is important for assessing TB evolution and treatment and for comparing different treatments. However, precise quantification is not feasible given the number of CT scans required. The purpose of this work is to develop a methodology for quantification of lung damage caused by TB through chest radiographs. An algorithm for computational processing of the exams was developed in Matlab, which creates a 3D representation of the lungs, with compromised dilated regions inside. The quantification of lung lesions was also performed for the same patients through CT scans. The measurements from the two methods were compared, resulting in strong correlation. Applying Bland-Altman statistical analysis, all samples were within the limits of agreement, with a confidence interval of 95%. The results showed an average variation of around 13% between the two quantification methods. The results suggest the effectiveness and applicability of the developed method, providing a better risk-benefit ratio for the patient and cost-benefit ratio for the institution. (author)
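
The Bland-Altman agreement check used above can be sketched as follows; the paired measurements here are synthetic stand-ins for the radiograph-based and CT-based lesion quantifications, not the study's data.

```python
import numpy as np

# Synthetic paired measurements (arbitrary units, invented for illustration)
rng = np.random.default_rng(3)
ct = rng.uniform(10, 60, 25)                # CT-based lesion quantification
xray = ct + rng.normal(0, 2.0, 25)          # radiograph-based estimate

diff = xray - ct
bias = diff.mean()                          # mean difference between methods
half_width = 1.96 * diff.std(ddof=1)        # 95% limits of agreement
lower, upper = bias - half_width, bias + half_width
inside = np.mean((diff > lower) & (diff < upper))

print(inside)  # fraction of samples within the limits of agreement
```

Agreement is claimed when (nearly) all paired differences fall between the limits, as reported in the abstract.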

  5. Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS

    International Nuclear Information System (INIS)

    Brown, C.S.; Zhang, Hongbin

    2016-01-01

    VERA-CS (Virtual Environment for Reactor Applications, Core Simulator) is a coupled neutron transport and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed and a new toolkit was created to perform uncertainty quantification and sensitivity analysis. A 2 × 2 fuel assembly model was developed and simulated by VERA-CS, and uncertainty quantification and sensitivity analysis were performed with fourteen uncertain input parameters. The minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the selected figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in sensitivity analysis and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
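
The correlation-based sensitivity ranking described above can be sketched with synthetic samples (the input distributions and response model below are invented; they only mimic the finding that inlet temperature dominates a figure of merit such as MDNBR):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Hypothetical sampled inputs and a toy response in which inlet temperature
# has the larger effect (coefficients are illustrative, not from VERA-CS)
inlet_T = rng.normal(560.0, 5.0, n)
flow = rng.normal(1.0, 0.05, n)
mdnbr = (-0.8 * (inlet_T - 560.0) / 5.0
         + 0.2 * (flow - 1.0) / 0.05
         + rng.normal(0, 0.3, n))

def pearson(x, y):
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

def spearman(x, y):
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(x), rank(y))

# The dominant input has the larger |correlation| with the figure of merit
print(abs(pearson(inlet_T, mdnbr)) > abs(pearson(flow, mdnbr)))
print(abs(spearman(inlet_T, mdnbr)) > abs(spearman(flow, mdnbr)))
```

Partial correlation coefficients, also used in the study, additionally control for the other inputs before correlating; the ranking logic is the same.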

  6. Biological 2-Input Decoder Circuit in Human Cells

    Science.gov (United States)

    2015-01-01

    Decoders are combinational circuits that convert information from n inputs to a maximum of 2^n outputs. This operation is of major importance in computing systems yet it is vastly underexplored in synthetic biology. Here, we present a synthetic gene network architecture that operates as a biological decoder in human cells, converting 2 inputs to 4 outputs. As a proof-of-principle, we use small molecules to emulate the two inputs and fluorescent reporters as the corresponding four outputs. The experiments are performed using transient transfections in human kidney embryonic cells and the characterization by fluorescence microscopy and flow cytometry. We show a clear separation between the ON and OFF mean fluorescent intensity states. Additionally, we adopt the integrated mean fluorescence intensity for the characterization of the circuit and show that this metric is more robust to transfection conditions when compared to the mean fluorescent intensity. To conclude, we present the first implementation of a genetic decoder. This combinational system can be valuable toward engineering higher-order circuits as well as accommodate a multiplexed interface with endogenous cellular functions. PMID:24694115

  7. Biological 2-input decoder circuit in human cells.

    Science.gov (United States)

    Guinn, Michael; Bleris, Leonidas

    2014-08-15

    Decoders are combinational circuits that convert information from n inputs to a maximum of 2^n outputs. This operation is of major importance in computing systems yet it is vastly underexplored in synthetic biology. Here, we present a synthetic gene network architecture that operates as a biological decoder in human cells, converting 2 inputs to 4 outputs. As a proof-of-principle, we use small molecules to emulate the two inputs and fluorescent reporters as the corresponding four outputs. The experiments are performed using transient transfections in human kidney embryonic cells and the characterization by fluorescence microscopy and flow cytometry. We show a clear separation between the ON and OFF mean fluorescent intensity states. Additionally, we adopt the integrated mean fluorescence intensity for the characterization of the circuit and show that this metric is more robust to transfection conditions when compared to the mean fluorescent intensity. To conclude, we present the first implementation of a genetic decoder. This combinational system can be valuable toward engineering higher-order circuits as well as accommodate a multiplexed interface with endogenous cellular functions.
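
The 2-to-4 decoder logic that the gene circuit implements can be written out as a truth table: each of the four outputs is ON for exactly one combination of the two (small-molecule) inputs.

```python
# Logical sketch of the 2-to-4 decoder the circuit implements
def decoder_2to4(a: bool, b: bool):
    return [
        (not a) and (not b),  # output 0: neither input present
        (not a) and b,        # output 1: only input b present
        a and (not b),        # output 2: only input a present
        a and b,              # output 3: both inputs present
    ]

# Exactly one output is ON for each of the four input combinations
for a in (False, True):
    for b in (False, True):
        print(a, b, decoder_2to4(a, b))
```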

  8. Fast metabolite identification with Input Output Kernel Regression

    Science.gov (United States)

    Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho

    2016-01-01

    Motivation: An important problematic of metabolomics is to identify metabolites using tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output space and can handle structured output space such as the molecule space. Results: We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. This method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated to the output kernel. The second phase is a preimage problem, consisting in mapping back the predicted output feature vectors to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method has the advantage of decreasing the running times for the training step and the test step by several orders of magnitude over the preceding methods. Availability and implementation: Contact: celine.brouard@aalto.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307628
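
The two-phase structure (kernel regression into the output feature space, then a preimage search) can be sketched on synthetic data. Everything below is a stand-in: random vectors replace spectra and fingerprints, a Gaussian kernel replaces the spectral kernel, and the preimage step is a nearest-neighbour search over a candidate set.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_kernel(X, Y, gamma=2.0):
    """Gaussian kernel between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

X_train = rng.normal(size=(20, 4))                    # stand-in "spectra"
Y_train = np.tanh(X_train @ rng.normal(size=(4, 6)))  # stand-in "fingerprints"

# Phase 1: kernel ridge regression from input space to output feature space
K = gauss_kernel(X_train, X_train)
alpha = np.linalg.solve(K + 0.1 * np.eye(len(K)), Y_train)

def predict(x, candidates):
    k = gauss_kernel(x[None, :], X_train)  # similarities to training spectra
    y_feat = (k @ alpha).ravel()           # predicted output feature vector
    # Phase 2: preimage step, here a nearest-neighbour search over candidates
    d2 = ((candidates - y_feat) ** 2).sum(1)
    return int(np.argmin(d2))

# Sanity check: a training spectrum retrieves its own fingerprint
print(predict(X_train[3], Y_train))
```

The real method uses structured output kernels over molecules rather than Euclidean distance, but the regression-then-preimage decomposition is the same.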

  9. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    Science.gov (United States)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

    2018-03-01

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  10. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    Energy Technology Data Exchange (ETDEWEB)

    Huan, Xun [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Geraci, Gianluca [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eldred, Michael S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vane, Zachary P. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Lacaze, Guilhem [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Oefelein, Joseph C. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2018-02-09

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  11. Measurements of the dynamic input impedance of a dc SQUID

    International Nuclear Information System (INIS)

    Hilbert, C.; Clarke, J.

    1985-01-01

    The impedance of a circuit coupled magnetically via a mutual inductance M_i to a dc SQUID of geometric inductance L is modified by the dynamic input impedance of the SQUID, which can be characterized by the flux-to-current transfer function J_Phi ≈ ∂J/∂Phi; J is the current circulating in the SQUID loop and Phi is the flux applied to the loop. At the same time, the SQUID is modified by the presence of the input circuit: in the lumped circuit approximation, one expects its inductance to be reduced to L' = (1 - α_e²)L, where α_e is an effective coupling coefficient. Calculations of J_Phi using an analog simulator are described and presented in the form of a dynamic inductance L and a dynamic resistance R versus bias current I and Phi. Experimental measurements of L and R were made on a planar, thin-film SQUID tightly coupled to a spiral input coil that was connected in series with a capacitor C_i to form a resonant circuit. Thus, J_Phi was determined from the change in the resonant frequency and quality factor of this circuit as a function of I and Phi. At low bias currents (low Josephson frequencies) the measured values of L were in reasonable agreement with values simulated for the reduced SQUID, while at higher bias currents (higher Josephson frequencies) the measured values were in better agreement with values simulated for the unscreened SQUID. Similar conclusions were reached in the comparison of the experimental and simulated values of the flux-to-voltage transfer function V_Phi
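
The measurement principle, inferring an inductance change from a resonant-frequency shift via f = 1/(2π√(L_eff·C)), can be illustrated numerically. The component values and the 2% shift below are invented, not taken from the paper.

```python
import math

# Hypothetical input-circuit values
C_i = 1e-9      # series capacitance (F), assumed
L_coil = 1e-6   # input coil inductance (H), assumed

# Bare resonance of the input circuit: f = 1 / (2*pi*sqrt(L*C))
f0 = 1.0 / (2 * math.pi * math.sqrt(L_coil * C_i))

# Suppose coupling to the SQUID shifts the measured resonance up by 2%
f_meas = 1.02 * f0

# Invert the resonance formula to recover the effective inductance
L_eff = 1.0 / ((2 * math.pi * f_meas) ** 2 * C_i)
delta_L = L_eff - L_coil   # reflected dynamic inductance of the SQUID

print(delta_L < 0)  # a higher resonance implies a reduced effective inductance
```

The quality-factor change is used analogously to extract the dynamic resistance.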

  12. The Generalization Complexity Measure for Continuous Input Data

    Directory of Open Access Journals (Sweden)

    Iván Gómez

    2014-01-01

    The Generalization Complexity measure, originally defined in Boolean space, quantifies the complexity of data in relationship to the prediction accuracy that can be expected when using a supervised classifier like a neural network, SVM, and so forth. We first extend the original measure for use with continuous functions and then, using an approach based on the set of Walsh functions, consider the case of having a finite number of data points (input/output pairs), which is usually the practical case. Using a set of trigonometric functions, a model that gives a relationship between the size of the hidden layer of a neural network and the complexity is constructed. Finally, we demonstrate the application of the introduced complexity measure, by using the generated model, to the problem of estimating an adequate neural network architecture for real-world data sets.

  13. Do efficiency scores depend on input mix?

    DEFF Research Database (Denmark)

    Asmild, Mette; Hougaard, Jens Leth; Kronborg, Dorte

    2013-01-01

    In this paper we examine the possibility of using the standard Kruskal-Wallis (KW) rank test to evaluate whether the distribution of efficiency scores resulting from Data Envelopment Analysis (DEA) is independent of the input (or output) mix of the observations. Since the DEA frontier is estimated, many standard assumptions for evaluating the KW test statistic are violated. Therefore, we propose to explore its statistical properties by the use of simulation studies. The simulations are performed conditional on the observed input mixes. The method, unlike existing approaches ... When the assumption of mix independence is rejected, the implication is that it is, for example, impossible to determine whether machine-intensive projects are more or less efficient than labor-intensive projects.
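
The testing idea can be sketched as follows: compute the KW statistic across input-mix groups and compare it against a simulated null distribution rather than the usual chi-squared approximation. The sketch below uses a simple permutation scheme and synthetic "efficiency scores" as stand-ins; the paper's simulations are instead performed conditional on the observed input mixes with DEA-estimated frontiers.

```python
import numpy as np

rng = np.random.default_rng(2)

def kruskal_wallis(groups):
    """KW statistic H for a list of 1-D samples (assumes no ties)."""
    data = np.concatenate(groups)
    ranks = np.argsort(np.argsort(data)) + 1.0  # 1-based ranks
    N = len(data)
    H, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        H += len(g) * (r.mean() - (N + 1) / 2) ** 2
        start += len(g)
    return 12.0 / (N * (N + 1)) * H

scores_a = rng.uniform(0.5, 1.0, 30)  # efficiency scores, mix group A
scores_b = rng.uniform(0.5, 1.0, 30)  # mix group B (same distribution here)
H_obs = kruskal_wallis([scores_a, scores_b])

# Simulated null distribution via label permutation
pooled = np.concatenate([scores_a, scores_b])
null = []
for _ in range(500):
    rng.shuffle(pooled)
    null.append(kruskal_wallis([pooled[:30], pooled[30:]]))
p_value = float(np.mean([h >= H_obs for h in null]))

print(p_value)
```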

  14. Application of computer voice input/output

    International Nuclear Information System (INIS)

    Ford, W.; Shirk, D.G.

    1981-01-01

    The advent of microprocessors and other large-scale integration (LSI) circuits is making voice input and output for computers and instruments practical; specialized LSI chips for speech processing are appearing on the market. Voice can be used to input data or to issue instrument commands; this allows the operator to engage in other tasks, move about, and use standard data entry systems. Voice synthesizers can generate audible, easily understood instructions. Using voice characteristics, a control system can verify speaker identity for security purposes. Two simple voice-controlled systems have been designed at Los Alamos for nuclear safeguards applications. Each can easily be expanded as time allows. The first system is for instrument control that accepts voice commands and issues audible operator prompts. The second system is for access control. The speaker's voice is used to verify his identity and to actuate external devices.

  15. Sensory Synergy as Environmental Input Integration

    Directory of Open Access Journals (Sweden)

    Fady eAlnajjar

    2015-01-01

    Full Text Available The development of a method to feed proper environmental inputs back to the central nervous system (CNS) remains one of the challenges in achieving natural movement when part of the body is replaced with an artificial device. Muscle synergies are widely accepted as a biologically plausible interpretation of the neural dynamics between the CNS and the muscular system. Yet the sensorineural dynamics of environmental feedback to the CNS has not been investigated in detail. In this study, we address this issue by exploring the concept of sensory synergy. In contrast to muscle synergy, we hypothesize that sensory synergy plays an essential role in integrating the overall environmental inputs to provide low-dimensional information to the CNS. We assume that sensor synergy and muscle synergy communicate using these low-dimensional signals. To examine our hypothesis, we conducted posture control experiments involving lateral disturbance with 9 healthy participants. Proprioceptive information represented by the changes in muscle lengths was estimated by using the musculoskeletal model analysis software SIMM. Changes in muscle lengths were then used to compute sensory synergies. The experimental results indicate that the environmental inputs were translated into two-dimensional signals and used to move the upper limb to the desired position immediately after the lateral disturbance. Participants who showed high skill in posture control were found to be likely to have a strong correlation between sensory and muscle signaling as well as high coordination between the utilized sensory synergies. These results suggest the importance of integrating environmental inputs into suitable low-dimensional signals before providing them to the CNS. This mechanism should be essential when designing the prosthesis' sensory system to make the controller simpler.

  16. Sensory synergy as environmental input integration.

    Science.gov (United States)

    Alnajjar, Fady; Itkonen, Matti; Berenz, Vincent; Tournier, Maxime; Nagai, Chikara; Shimoda, Shingo

    2014-01-01

    The development of a method to feed proper environmental inputs back to the central nervous system (CNS) remains one of the challenges in achieving natural movement when part of the body is replaced with an artificial device. Muscle synergies are widely accepted as a biologically plausible interpretation of the neural dynamics between the CNS and the muscular system. Yet the sensorineural dynamics of environmental feedback to the CNS has not been investigated in detail. In this study, we address this issue by exploring the concept of sensory synergy. In contrast to muscle synergy, we hypothesize that sensory synergy plays an essential role in integrating the overall environmental inputs to provide low-dimensional information to the CNS. We assume that sensor synergy and muscle synergy communicate using these low-dimensional signals. To examine our hypothesis, we conducted posture control experiments involving lateral disturbance with nine healthy participants. Proprioceptive information represented by the changes in muscle lengths was estimated by using the musculoskeletal model analysis software SIMM. Changes in muscle lengths were then used to compute sensory synergies. The experimental results indicate that the environmental inputs were translated into two-dimensional signals and used to move the upper limb to the desired position immediately after the lateral disturbance. Participants who showed high skill in posture control were found to be likely to have a strong correlation between sensory and muscle signaling as well as high coordination between the utilized sensory synergies. These results suggest the importance of integrating environmental inputs into suitable low-dimensional signals before providing them to the CNS. This mechanism should be essential when designing the prosthesis' sensory system to make the controller simpler.
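
The core computation, reducing many muscle-length signals to a few "synergies", is a dimensionality reduction. The sketch below uses PCA (via SVD) on synthetic muscle-length data driven by two latent signals; the studies above compute synergies from SIMM-estimated muscle lengths, and the data and dimensions here are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
T, M = 200, 10                        # time samples, muscles (illustrative)
latent = rng.normal(size=(T, 2))      # two underlying environmental signals
W = rng.normal(size=(2, M))           # synergy weights (illustrative)
lengths = latent @ W + 0.05 * rng.normal(size=(T, M))  # muscle-length changes

# PCA via SVD of the centered data
X = lengths - lengths.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = (S ** 2) / (S ** 2).sum()

# Two components suffice, matching the two-dimensional signals reported above
print(explained[:2].sum())
```

Non-negative matrix factorization is another common choice for synergy extraction; PCA is used here only because it is self-contained.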

  17. Negative dielectrophoresis spectroscopy for rare analyte quantification in biological samples

    Science.gov (United States)

    Kirmani, Syed Abdul Mannan; Gudagunti, Fleming Dackson; Velmanickam, Logeeshan; Nawarathna, Dharmakeerthi; Lima, Ivan T., Jr.

    2017-03-01

    We propose the use of negative dielectrophoresis (DEP) spectroscopy as a technique to improve the detection limit of rare analytes in biological samples. We observe a significant dependence of the negative DEP force on functionalized polystyrene beads at the edges of interdigitated electrodes with respect to the frequency of the electric field. We measured the velocity of repulsion for 0% and 0.8% conjugation of avidin with biotin-functionalized polystyrene beads with our automated software through real-time image processing that monitors the Rayleigh scattering from the beads. A significant difference in the velocity of the beads was observed in the presence of as little as 80 molecules of avidin per biotin-functionalized bead. This technology can be applied in the detection and quantification of rare analytes that can be useful in the diagnosis and the treatment of diseases, such as cancer and myocardial infarction, with the use of polystyrene beads functionalized with antibodies for the target biomarkers.

  18. Prediction-Based Control for Nonlinear Systems with Input Delay

    Directory of Open Access Journals (Sweden)

    I. Estrada-Sánchez

    2017-01-01

    Full Text Available This work has two primary objectives. First, it presents a state prediction strategy for a class of nonlinear Lipschitz systems subject to constant time delay in the input signal. As a result of a suitable change of variable, the state predictor asymptotically provides the value of the state τ units of time ahead. Second, it proposes a solution to the stabilization and trajectory tracking problems for the considered class of systems using predicted states. The predictor-controller convergence is proved by considering a complete Lyapunov functional. The proposed predictor-based controller strategy is evaluated using numerical simulations.
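
The prediction step can be illustrated on a scalar linear system with input delay (the paper treats a class of nonlinear Lipschitz systems; the model, delay, and forward-Euler integration below are a simplified stand-in): the predictor replays the buffered, not-yet-applied inputs through the model to obtain the state τ units of time ahead.

```python
import math

# Toy system x' = -x + u(t - tau): inputs act after a constant delay tau
tau, dt = 0.5, 0.001
steps = int(tau / dt)
u_buffer = [1.0] * steps   # inputs already issued but not yet applied

def predict(x, buffer):
    """Integrate the model over the delay window using buffered inputs."""
    for u in buffer:
        x = x + dt * (-x + u)   # forward Euler step
    return x

x_now = 0.0
x_pred = predict(x_now, u_buffer)

# Exact solution with constant u = 1: x(tau) = 1 - exp(-tau)
print(abs(x_pred - (1 - math.exp(-tau))) < 1e-3)
```

The predicted state is then fed to the controller in place of the current state, which is what allows the delayed closed loop to behave like an undelayed one.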

  19. Molecular structure input on the web

    Directory of Open Access Journals (Sweden)

    Ertl Peter

    2010-02-01

    Full Text Available A molecule editor, that is, a program for inputting and editing molecules, is an indispensable part of every cheminformatics or molecular processing system. This review focuses on a special type of molecule editor, namely those that are used for molecule structure input on the web. Scientific computing is now moving more and more in the direction of web services and cloud computing, with servers scattered all around the Internet. Thus a web browser has become the universal scientific user interface, and a tool to edit molecules directly within the web browser is essential. The review covers the history of web-based structure input, starting with simple text entry boxes and early molecule editors based on clickable maps, before moving to the current situation dominated by Java applets. One typical example - the popular JME Molecule Editor - will be described in more detail. Modern Ajax server-side molecule editors are also presented. And finally, the possible future direction of web-based molecule editing, based on technologies like JavaScript and Flash, is discussed.

  20. Improved Method for PD-Quantification in Power Cables

    DEFF Research Database (Denmark)

    Holbøll, Joachim T.; Villefrance, Rasmus; Henriksen, Mogens

    1999-01-01

    In this paper, a method is described for improved quantification of partial discharges (PD) in power cables. The method is suitable for PD-detection and location systems in the MHz range, where pulse attenuation and distortion along the cable cannot be neglected. The system transfer function was calculated and measured in order to form the basis for magnitude calculation after each measurement. Limitations and capabilities of the method are discussed and related to relevant field applications of high-frequency PD measurements. Methods for increased signal/noise ratio are easily implemented.
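
The magnitude-correction idea, scaling a measured pulse back up according to how much the cable attenuated it on the way to the detector, can be sketched with a simple exponential attenuation model. The per-frequency attenuation coefficients and distances below are invented for illustration; the paper derives the correction from a calculated and measured system transfer function.

```python
import math

# Assumed cable attenuation coefficients (Np/m) at a few frequencies
alpha_per_m = {1e6: 0.002, 5e6: 0.006, 10e6: 0.012}

def corrected_magnitude(measured, distance_m, freq):
    """Undo exponential attenuation over the propagation distance."""
    return measured * math.exp(alpha_per_m[freq] * distance_m)

q_meas = 50.0  # apparent charge seen at the cable terminal (pC, illustrative)
q_true = corrected_magnitude(q_meas, 200.0, 5e6)

print(q_true > q_meas)  # the PD at its origin is larger than what is measured
```

Knowing the PD location (from the detection system) is what makes the distance, and hence the correction, available.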

  1. B Plant function analysis report

    International Nuclear Information System (INIS)

    Lund, D.P.

    1995-09-01

    The document contains the functions, function definitions, function interfaces, function interface definitions, Integrated Computer-Aided Manufacturing DEFinition (IDEF0) diagrams, and a function hierarchy chart that describe what needs to be performed to deactivate B Plant

  2. Uncertainty quantification for environmental models

    Science.gov (United States)

    Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming

    2012-01-01

    Environmental models are used to evaluate the fate of fertilizers in agricultural settings (including soil denitrification), the degradation of hydrocarbons at spill sites, and water supply for people and ecosystems in small to large basins and cities—to mention but a few applications of these models. They also play a role in understanding and diagnosing potential environmental impacts of global climate change. The models are typically mildly to extremely nonlinear. The persistent demand for enhanced dynamics and resolution to improve model realism [17] means that lengthy individual model execution times will remain common, notwithstanding continued enhancements in computer power. In addition, high-dimensional parameter spaces are often defined, which increases the number of model runs required to quantify uncertainty [2]. Some environmental modeling projects have access to extensive funding and computational resources; many do not. The many recent studies of uncertainty quantification in environmental model predictions have focused on uncertainties related to data error and sparsity of data, expert judgment expressed mathematically through prior information, poorly known parameter values, and model structure (see, for example, [1,7,9,10,13,18]). Approaches for quantifying uncertainty include frequentist (potentially with prior information [7,9]), Bayesian [13,18,19], and likelihood-based. 
A few of the numerous methods, including some sensitivity and inverse methods with consequences for understanding and quantifying uncertainty, are as follows: Bayesian hierarchical modeling and Bayesian model averaging; single-objective optimization with error-based weighting [7] and multi-objective optimization [3]; methods based on local derivatives [2,7,10]; screening methods like OAT (one at a time) and the method of Morris [14]; FAST (Fourier amplitude sensitivity testing) [14]; the Sobol' method [14]; randomized maximum likelihood [10]; Markov chain Monte Carlo (MCMC) [10
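    Of the screening methods listed, OAT and Morris-style elementary effects are the simplest to sketch: perturb one input at a time from random base points and average the resulting effects. The following is an illustrative implementation under assumed unit-cube sampling; the function name and sampling choices are ours, not from any of the cited studies:

```python
import numpy as np

def oat_elementary_effects(model, n_params, delta=0.1, n_repeats=20, seed=0):
    """One-at-a-time (Morris-style) screening: from random base points,
    perturb each input in turn and record the elementary effect
    (f(x + delta*e_i) - f(x)) / delta. Returns the mean absolute effect
    per input, a cheap importance ranking."""
    rng = np.random.default_rng(seed)
    effects = np.zeros((n_repeats, n_params))
    for r in range(n_repeats):
        x = rng.uniform(0.0, 1.0, size=n_params)
        fx = model(x)
        for i in range(n_params):
            xp = x.copy()
            xp[i] += delta
            effects[r, i] = (model(xp) - fx) / delta
    return np.abs(effects).mean(axis=0)
```

For a linear model the elementary effects recover the coefficients exactly; for nonlinear models the spread of the effects across repeats also signals interactions, which is what the full Morris method exploits.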

  3. Real-time PCR quantification and diversity analysis of the functional genes aprA and dsrA of sulfate-reducing bacteria in marine sediments of the Peru continental margin and the Black Sea

    OpenAIRE

    Axel eSchippers; Anna eBlazejak

    2011-01-01

    A quantitative, real-time PCR (Q-PCR) assay for the functional gene adenosine 5´-phosphosulfate reductase (aprA) of sulfate-reducing bacteria (SRB) was designed. This assay was applied together with described Q-PCR assays for dissimilatory sulfite reductase (dsrA) and the 16S rRNA gene of total Bacteria to marine sediments from the Peru margin (0 – 121 meters below seafloor (mbsf)) and the Black Sea (0 – 6 mbsf). Clone libraries of aprA show that all isolated sequences originate from SRB...

  4. T2{sup *} mapping from multi-echo dixon sequence on gadoxetic acid-enhanced magnetic resonance imaging for the hepatic fat quantification: Can it be used for hepatic function assessment?

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Hyun Suk; Lee, Jeong Min; Yoon, Jeong Hee; Kang, Hyo Jin; Lee, Sang Min; Yang, Hyun Kyung; Han, Joon Koo [Dept. of Radiology, Seoul National University Hospital, Seoul (Korea, Republic of)

    2017-08-01

    To evaluate the diagnostic value of T2{sup *} mapping using 3D multi-echo Dixon gradient echo acquisition on gadoxetic acid-enhanced liver magnetic resonance imaging (MRI) as a tool to evaluate hepatic function. This retrospective study was approved by the IRB and the requirement of informed consent was waived. 242 patients who underwent liver MRIs, including 3D multi-echo Dixon fast gradient-recalled echo (GRE) sequence at 3T, before and after administration of gadoxetic acid, were included. Based on clinico-laboratory manifestation, the patients were classified as having normal liver function (NLF, n = 50), mild liver damage (MLD, n = 143), or severe liver damage (SLD, n = 30). The 3D multi-echo Dixon GRE sequence was obtained before, and 10 minutes after, gadoxetic acid administration. Pre- and post-contrast T2{sup *} values, as well as T2{sup *} reduction rates, were measured from T2{sup *} maps, and compared among the three groups. There was a significant difference in T2{sup *} reduction rates between the NLF and SLD groups (−0.2 ± 4.9% vs. 5.0 ± 6.9%, p = 0.002), and between the MLD and SLD groups (3.2 ± 6.0% vs. 5.0 ± 6.9%, p = 0.003). However, there was no significant difference in both the pre- and post-contrast T2{sup *} values among different liver function groups (p = 0.735 and 0.131, respectively). A receiver operating characteristic (ROC) curve analysis showed that the area under the ROC curve for using T2{sup *} reduction rates to differentiate the SLD group from the NLF group was 0.74 (95% confidence interval: 0.63–0.83). Incorporation of T2{sup *} mapping using 3D multi-echo Dixon GRE sequence in gadoxetic acid-enhanced liver MRI protocol may provide supplemental information for liver function deterioration in patients with SLD.

  5. Canonical multi-valued input Reed-Muller trees and forms

    Science.gov (United States)

    Perkowski, M. A.; Johnson, P. D.

    1991-01-01

    There has recently been increased interest in logic synthesis using EXOR gates. The paper introduces the fundamental concept of Orthogonal Expansion, which generalizes the ring form of the Shannon expansion to logic with multiple-valued (mv) inputs. Based on this concept we are able to define a family of canonical tree circuits. Such circuits can be considered for binary and multiple-valued input cases. They can be multi-level (trees and DAGs) or flattened to two-level AND-EXOR circuits. Input decoders similar to those used in Sum-of-Products (SOP) PLAs are used in realizations of multiple-valued input functions. In the case of binary logic, the family of flattened AND-EXOR circuits includes several forms discussed by Davio and Green. For the case of logic with multiple-valued inputs, the family of flattened mv AND-EXOR circuits includes three expansions known from the literature and two new expansions.
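    The ring-form (EXOR-based) expansion can be made concrete for the binary case: the positive-polarity Reed-Muller coefficients of a Boolean function follow from its truth table by an in-place XOR butterfly. A small sketch for binary inputs only (the mv generalization developed in the paper is not shown):

```python
def reed_muller_coefficients(truth_table):
    """Positive-polarity Reed-Muller (XOR-of-ANDs) coefficients of a Boolean
    function, from its truth table of length 2**n, via the standard in-place
    XOR butterfly. Coefficient c[m] multiplies the AND of the variables
    whose bits are set in m; the function is the XOR of those products."""
    coeffs = list(truth_table)
    step = 1
    while step < len(coeffs):
        for i in range(len(coeffs)):
            if i & step:
                coeffs[i] ^= coeffs[i ^ step]
        step <<= 1
    return coeffs
```

For example, the truth table of x0 XOR x1 yields coefficients selecting exactly the two single-variable products, while x0 AND x1 yields the single top product term.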

  6. New efficient five-input majority gate for quantum-dot cellular automata

    International Nuclear Information System (INIS)

    Farazkish, Razieh; Navi, Keivan

    2012-01-01

    A novel fault-tolerant five-input majority gate for quantum-dot cellular automata is presented. Quantum-dot cellular automata (QCA) is an emerging technology which is considered to be presented in future computers. Two principle logic elements in QCA are “majority gate” and “inverter.” In this paper, we propose a new approach to the design of fault-tolerant five-input majority gate by considering two-dimensional arrays of QCA cells. We analyze fault tolerance properties of such block five-input majority gate in terms of misalignment, missing, and dislocation cells. Some physical proofs are used for verifying five-input majority gate circuit layout and functionality. Our results clearly demonstrate that the redundant version of the block five-input majority gate is more robust than the standard style for this gate.
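    As a functional reference point, the five-input majority gate is simply a threshold function, and the redundancy idea can be illustrated by majority-voting over replicated gates. A toy sketch (this models logical redundancy only, not the QCA cell-level layout and misalignment analysis of the paper):

```python
def majority5(a, b, c, d, e):
    """Five-input majority gate: 1 iff at least three of the five inputs are 1."""
    return int(a + b + c + d + e >= 3)

def tmr(gate, inputs, faulty_index=None):
    """Triple-modular redundancy around a gate: three replicas vote, so a
    single faulty replica (modelled here as an inverted output) is out-voted."""
    outputs = []
    for k in range(3):
        out = gate(*inputs)
        if k == faulty_index:
            out ^= 1  # inject a fault into one replica
        outputs.append(out)
    return int(sum(outputs) >= 2)
```

Exactly 16 of the 32 possible input vectors make the gate output 1, which gives a quick exhaustive check of the truth table.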

  7. Demonstration of uncertainty quantification and sensitivity analysis for PWR fuel performance with BISON

    International Nuclear Information System (INIS)

    Zhang, Hongbin; Zhao, Haihua; Zou, Ling; Burns, Douglas; Ladd, Jacob

    2017-01-01

    BISON is an advanced fuels performance code being developed at Idaho National Laboratory and is the code of choice for fuels performance by the U.S. Department of Energy (DOE)’s Consortium for Advanced Simulation of Light Water Reactors (CASL) Program. An approach to uncertainty quantification and sensitivity analysis with BISON was developed and a new toolkit was created. A PWR fuel rod model was developed and simulated by BISON, and uncertainty quantification and sensitivity analysis were performed with eighteen uncertain input parameters. The maximum fuel temperature and gap conductance were selected as the figures of merit (FOM). Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in sensitivity analysis. (author)
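    The Pearson and Spearman coefficients used here for sensitivity ranking can be computed directly from sampled input/output pairs. A minimal sketch, assuming a matrix of input samples and a vector holding a figure of merit (the names are illustrative; this is not the BISON toolkit itself):

```python
import numpy as np

def correlation_sensitivity(samples, response):
    """Pearson and Spearman rank correlation of each uncertain input column
    against a scalar figure of merit, for sampling-based sensitivity ranking.
    `samples` is (n_runs, n_params); `response` has length n_runs."""
    def pearson(x, y):
        return float(np.corrcoef(x, y)[0, 1])
    def ranks(v):
        # rank transform (assumes no ties, adequate for continuous samples)
        return np.argsort(np.argsort(v)).astype(float)
    n_params = samples.shape[1]
    pear = np.array([pearson(samples[:, j], response) for j in range(n_params)])
    spear = np.array([pearson(ranks(samples[:, j]), ranks(response))
                      for j in range(n_params)])
    return pear, spear
```

Spearman's coefficient is just Pearson's applied to ranks, which makes it robust to monotone nonlinearity between an input and the figure of merit.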

  8. Demonstration of Uncertainty Quantification and Sensitivity Analysis for PWR Fuel Performance with BISON

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Hongbin; Ladd, Jacob; Zhao, Haihua; Zou, Ling; Burns, Douglas

    2015-11-01

    BISON is an advanced fuels performance code being developed at Idaho National Laboratory and is the code of choice for fuels performance by the U.S. Department of Energy (DOE)’s Consortium for Advanced Simulation of Light Water Reactors (CASL) Program. An approach to uncertainty quantification and sensitivity analysis with BISON was developed and a new toolkit was created. A PWR fuel rod model was developed and simulated by BISON, and uncertainty quantification and sensitivity analysis were performed with eighteen uncertain input parameters. The maximum fuel temperature and gap conductance were selected as the figures of merit (FOM). Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in sensitivity analysis.

  9. Stochastic Systems Uncertainty Quantification and Propagation

    CERN Document Server

    Grigoriu, Mircea

    2012-01-01

    Uncertainty is an inherent feature of both the properties of physical systems and the inputs to these systems, and it needs to be quantified for cost-effective and reliable designs. The states of these systems satisfy equations with random entries, referred to as stochastic equations, so that they are random functions of time and/or space. The solution of stochastic equations poses notable technical difficulties that are frequently circumvented by heuristic assumptions at the expense of accuracy and rigor. The main objective of Stochastic Systems is to promote the development of accurate and efficient methods for solving stochastic equations and to foster interactions between engineers, scientists, and mathematicians. To achieve these objectives, Stochastic Systems presents: a clear and brief review of essential concepts in probability theory, random functions, stochastic calculus, Monte Carlo simulation, and functional analysis; probabilistic models for random variables an...

  10. Functionalized synchrotron in-line phase-contrast computed tomography: a novel approach for simultaneous quantification of structural alterations and localization of barium-labelled alveolar macrophages within mouse lung samples

    Energy Technology Data Exchange (ETDEWEB)

    Dullin, Christian, E-mail: christian.dullin@med.uni-goettingen.de [University Medical Center Göttingen, Robert Koch Strasse 40, 37075 Göttingen (Germany); Monego, Simeone dal [Cluster in Biomedicine, AREA Science Park Basovizza, Trieste (Italy); Larsson, Emanuel [Elettra Sincrotrone Trieste, Strada Statale 14, km 163.5 in AREA Science Park, 34149 Basovizza (Trieste) (Italy); University of Trieste, Trieste (Italy); Linköping University, SE-581 83 Linkoeping (Sweden); Mohammadi, Sara [Elettra Sincrotrone Trieste, Strada Statale 14, km 163.5 in AREA Science Park, 34149 Basovizza (Trieste) (Italy); Krenkel, Martin [University of Göttingen, Göttingen (Germany); Garrovo, Chiara; Biffi, Stefania [IRCCS Burlo Garofolo, Trieste (Italy); Lorenzon, Andrea [Cluster in Biomedicine, AREA Science Park Basovizza, Trieste (Italy); Markus, Andrea [University Medical Center Göttingen, Robert Koch Strasse 40, 37075 Göttingen (Germany); Napp, Joanna [University Medical Center Göttingen, Robert Koch Strasse 40, 37075 Göttingen (Germany); Max Planck Institute for Experimental Medicine, Hermann-Rein-Strasse 3, 37075 Göttingen (Germany); University Medical Center Göttingen, Robert Koch Strasse 40, 37075 Göttingen (Germany); Salditt, Tim [University of Göttingen, Göttingen (Germany); Accardo, Agostino [University of Trieste, Trieste (Italy); Alves, Frauke [University Medical Center Göttingen, Robert Koch Strasse 40, 37075 Göttingen (Germany); Max Planck Institute for Experimental Medicine, Hermann-Rein-Strasse 3, 37075 Göttingen (Germany); University Medical Center Göttingen, Robert Koch Strasse 40, 37075 Göttingen (Germany); Tromba, Giuliana [Elettra Sincrotrone Trieste, Strada Statale 14, km 163.5 in AREA Science Park, 34149 Basovizza (Trieste) (Italy)

    2015-01-01

    This study presents an approach to increase the sensitivity of lung computed tomography (CT) imaging by utilizing in-line phase-contrast CT in combination with single-distance phase-retrieval algorithms and a dedicated image-processing regime. As demonstrated here, functional CT imaging can be achieved for the assessment of both structural alterations in asthmatic mouse lung tissue and the accumulation pattern of instilled barium-sulfate-labelled macrophages in comparison with healthy controls. Functionalized computed tomography (CT) in combination with labelled cells is virtually non-existent due to the limited sensitivity of X-ray-absorption-based imaging, but would be highly desirable to realise cell tracking studies in entire organisms. In this study we applied in-line free-propagation X-ray phase-contrast CT (XPCT) in an allergic asthma mouse model to assess structural changes as well as the biodistribution of barium-labelled macrophages in lung tissue. Alveolar macrophages that were barium-sulfate-loaded and fluorescent-labelled were instilled intratracheally into asthmatic and control mice. Mice were sacrificed after 24 h; lungs were kept in situ, inflated with air and scanned utilizing XPCT at the SYRMEP beamline (Elettra Synchrotron Light Source, Italy). Single-distance phase retrieval was used to generate data sets with ten times greater contrast-to-noise ratio than absorption-based CT (in our setup), making it possible to depict and quantify structural hallmarks of asthmatic lungs such as reduced air volume, obstruction of airways and increased soft-tissue content. Furthermore, we found a higher concentration as well as a specific accumulation of the barium-labelled macrophages in asthmatic lung tissue. It is believed that XPCT will be beneficial in preclinical asthma research for both the assessment of therapeutic response and the analysis of the role of the recruitment of macrophages to inflammatory sites.

  11. The role of PET quantification in cardiovascular imaging.

    Science.gov (United States)

    Slomka, Piotr; Berman, Daniel S; Alexanderson, Erick; Germano, Guido

    2014-08-01

    Positron Emission Tomography (PET) has several clinical and research applications in cardiovascular imaging. Myocardial perfusion imaging with PET allows accurate global and regional measurements of myocardial perfusion, myocardial blood flow and function at stress and rest in one exam. Simultaneous assessment of function and perfusion by PET with quantitative software is currently the routine practice. Combination of ejection fraction reserve with perfusion information may improve the identification of severe disease. The myocardial viability can be estimated by quantitative comparison of fluorodeoxyglucose (18FDG) and rest perfusion imaging. The myocardial blood flow and coronary flow reserve measurements are becoming routinely included in the clinical assessment due to enhanced dynamic imaging capabilities of the latest PET/CT scanners. Absolute flow measurements allow evaluation of the coronary microvascular dysfunction and provide additional prognostic and diagnostic information for coronary disease. Standard quantitative approaches to compute myocardial blood flow from kinetic PET data in automated and rapid fashion have been developed for 13N-ammonia, 15O-water and 82Rb radiotracers. The agreement between software methods available for such analysis is excellent. Relative quantification of 82Rb PET myocardial perfusion, based on comparisons to normal databases, demonstrates high performance for the detection of obstructive coronary disease. New tracers, such as 18F-flurpiridaz, may allow further improvements in the disease detection. Computerized analysis of perfusion at stress and rest reduces the variability of the assessment as compared to visual analysis. PET quantification can be enhanced by precise coregistration with CT angiography. In emerging clinical applications, the potential to identify vulnerable plaques by quantification of atherosclerotic plaque uptake of 18FDG and 18F-sodium fluoride tracers in carotids, aorta and coronary arteries
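    Kinetic quantification of the kind described typically rests on compartment models driven by an arterial input function. A generic one-tissue-compartment sketch (K1 and k2 are the standard rate constants; this is a textbook model, not any specific vendor implementation):

```python
import numpy as np

def one_tissue_ct(t, input_fn, K1, k2):
    """Tissue time-activity curve for the one-tissue-compartment model,
    C_T(t) = K1 * exp(-k2*t) convolved with the arterial input C_a(t),
    approximated by a discrete (rectangle-rule) convolution on grid t."""
    dt = t[1] - t[0]
    ca = input_fn(t)                  # sampled arterial input function
    kernel = K1 * np.exp(-k2 * t)     # impulse response of the compartment
    return np.convolve(ca, kernel)[:len(t)] * dt
```

With a mono-exponential input the convolution has a closed form, which makes the discretization easy to validate against the analytic curve.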

  12. Functional Programming

    OpenAIRE

    Chitil, Olaf

    2009-01-01

    Functional programming is a programming paradigm like object-oriented programming and logic programming. Functional programming comprises both a specific programming style and a class of programming languages that encourage and support this programming style. Functional programming enables the programmer to describe an algorithm on a high-level, in terms of the problem domain, without having to deal with machine-related details. A program is constructed from functions that only map inputs to ...
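    The idea of building a program purely from input-to-output mappings can be illustrated in a few lines. A small sketch (Python used for illustration; `compose` is our helper, not a language builtin):

```python
from functools import reduce

def compose(*fns):
    """Right-to-left composition of pure functions:
    compose(f, g)(x) == f(g(x)). The resulting 'program' is just a mapping
    from inputs to outputs, with no mutable state."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

# Build a tiny text-normalizing program from two primitive mappings:
normalize = compose(str.strip, str.lower)
```

Because every piece is a pure function, the whole pipeline can be reasoned about, tested, and recombined without worrying about hidden state.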

  13. The quantification of risk and tourism

    Directory of Open Access Journals (Sweden)

    Piet Croucamp

    2014-01-01

    Tourism in South Africa comprises 9.5% of Gross Domestic Product (GDP), but remains an under-researched industry, especially regarding the quantification of the risks prevailing in the social, political and economic environment in which the industry operates. Risk prediction and extrapolation forecasting are conducted largely in the context of a qualitative methodology. This article reflects on the quantification of social constructs as variables of risk in the tourism industry with reference to South Africa. The theory and methodology of quantification are briefly reviewed, and the indicators of risk are conceptualized and operationalized. The identified indicators are scaled in indices for purposes of quantification. Risk assessments and the quantification of constructs rely heavily on the experience - often personal - of the researcher, and this scholarly endeavour is, therefore, not inclusive of all possible identified indicators of risk. It is accepted that tourism in South Africa is an industry comprising a large diversity of sectors, each with a different set of risk indicators and risk profiles. The emphasis of this article is thus on the methodology to be applied to a risk profile. A secondary endeavour is to provide clarity about the conceptual and operational confines of risk in general, as well as how quantified risk relates to the tourism industry. The indices provided include both domestic and international risk indicators. The motivation for the article is to encourage a greater emphasis on quantitative research in our efforts to understand and manage a risk profile for the tourism industry.

  14. Can Simulation Credibility Be Improved Using Sensitivity Analysis to Understand Input Data Effects on Model Outcome?

    Science.gov (United States)

    Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.

    2015-01-01

    Model and simulation (MS) credibility is defined as "the quality to elicit belief or trust in MS results." NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility, and provides guidance to model developers, analysts, and end users for assessing MS credibility. Of the eight characteristics, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS applications. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs and maximizes communication of the potential level of risk in using model outputs. Unfortunately, in practice, this may result in overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results parameter robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.
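    The contrast between the standard's lowest-score rule and a sensitivity-informed alternative can be sketched as follows (the weighted variant is our illustration of the presentation's idea, not a scoring rule defined in NASA-STD-7009):

```python
def pedigree_score_lowest(scores):
    """Conservative NASA-STD-7009-style rule: the whole input set scores
    as low as its lowest-quality member."""
    return min(scores)

def pedigree_score_weighted(scores, sensitivities):
    """Illustrative alternative (an assumption, not part of the standard):
    weight each input's quality score by the model's normalized sensitivity
    to that input, so low-quality data with little influence on the output
    does not dominate the assessment."""
    total = float(sum(sensitivities))
    return sum(s * w for s, w in zip(scores, sensitivities)) / total
```

With one low-quality but nearly irrelevant input, the lowest-score rule collapses to that input's score while the weighted variant stays close to the scores of the inputs that actually drive the output.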

  15. Response of spiking neurons to correlated inputs

    International Nuclear Information System (INIS)

    Moreno, Ruben; Rocha, Jaime de la; Renart, Alfonso; Parga, Nestor

    2002-01-01

    The effect of a temporally correlated afferent current on the firing rate of a leaky integrate-and-fire neuron is studied. This current is characterized in terms of rates, autocorrelations, cross-correlations, and the correlation time scale τ_c of excitatory and inhibitory inputs. The output rate ν_out is calculated in the Fokker-Planck formalism in the limit of both small and large τ_c compared to the membrane time constant τ of the neuron. By simulations we check the analytical results, provide an interpolation valid for all τ_c, and study the neuron's response to rapid changes in the correlation magnitude.
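    The setup can be reproduced numerically: a leaky integrate-and-fire neuron driven by an Ornstein-Uhlenbeck current with correlation time τ_c. A simulation sketch with illustrative parameter values (the paper's analytical Fokker-Planck results are not reproduced here):

```python
import numpy as np

def lif_rate_ou_input(mu=0.1, sigma=0.2, tau_c=5.0, tau_m=20.0,
                      v_th=1.0, v_reset=0.0, dt=0.1, t_total=1e4, seed=0):
    """Leaky integrate-and-fire neuron driven by an Ornstein-Uhlenbeck
    current I with correlation time tau_c (all times in ms):
        dI = -(I/tau_c) dt + sigma*sqrt(2/tau_c) dW
        dv = (-v/tau_m + mu + I) dt,  spike and reset when v >= v_th.
    Returns the output firing rate nu_out in spikes/ms."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_total / dt)
    v, current, n_spikes = 0.0, 0.0, 0
    for _ in range(n_steps):
        current += (-current / tau_c) * dt \
                   + sigma * np.sqrt(2.0 * dt / tau_c) * rng.standard_normal()
        v += dt * (-v / tau_m + mu + current)
        if v >= v_th:
            n_spikes += 1
            v = v_reset
    return n_spikes / t_total
```

Sweeping tau_c in such a simulation is the kind of numerical check the authors use to validate the small- and large-τ_c analytical limits and to build an interpolation between them.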

  16. Culture Input in Foreign Language Teaching

    Institute of Scientific and Technical Information of China (English)

    胡晶

    2009-01-01

    Language and culture are highly interrelated; that is to say, language is not only the carrier of culture, it is also restricted by culture. Therefore, foreign language teaching aiming to cultivate students' intercultural communication should take cultural differences into consideration. In this paper, the relationship between language and culture will be discussed. Then I will illustrate the importance of intercultural communication. Finally, according to the present situation of foreign language teaching in China, several strategies for cultural input in and out of class will be suggested.

  17. Mobile gaze input system for pervasive interaction

    DEFF Research Database (Denmark)

    2017-01-01

    feedback to the user in response to the received command input. The unit provides feedback to the user on how to position the mobile unit in front of his eyes. The gaze tracking unit interacts with one or more controlled devices via wireless or wired communications. Example devices include a lock......, a thermostat, a light or a TV. The connection between the gaze tracking unit may be temporary or longer-lasting. The gaze tracking unit may detect features of the eye that provide information about the identity of the user....

  18. Flexible input, dazzling output with IBM i

    CERN Document Server

    Victória-Pereira, Rafael

    2014-01-01

    Link your IBM i system to the modern business server world! This book presents easier and more flexible ways to get data into your IBM i system, along with rather surprising methods to export and present the vital business data it contains. You'll learn how to automate file transfers, seamlessly connect PC applications with your RPG programs, and much more. Input operations will become more flexible and user-proof, with self-correcting import processes and direct file transfers that require a minimum of user intervention. Also learn novel ways to present information: your DB2 data will look gr

  19. The Importance of Input and Interaction in SLA

    Institute of Scientific and Technical Information of China (English)

    党春花

    2009-01-01

    As is known to us, input and interaction play crucial roles in second language acquisition (SLA). Different linguistic schools have different explanations of input and interaction. Behaviorist theories hold the view that input is composed of stimuli and responses, putting more emphasis on the importance of input, while mentalist theories find input to be a necessary condition for SLA, but not a sufficient one. At present, social interaction theories, a type of cognitive linguistics, suggest that besides input, interaction is also essential to language acquisition. This essay will discuss how input and interaction result in SLA.

  20. Real-time PCR quantification and diversity analysis of the functional genes aprA and dsrA of sulfate-reducing bacteria in marine sediments of the Peru continental margin and the Black Sea

    Directory of Open Access Journals (Sweden)

    Axel eSchippers

    2011-12-01

    A quantitative, real-time PCR (Q-PCR) assay for the functional gene adenosine 5´-phosphosulfate reductase (aprA) of sulfate-reducing bacteria (SRB) was designed. This assay was applied together with described Q-PCR assays for dissimilatory sulfite reductase (dsrA) and the 16S rRNA gene of total Bacteria to marine sediments from the Peru margin (0–121 meters below seafloor (mbsf)) and the Black Sea (0–6 mbsf). Clone libraries of aprA show that all isolated sequences originate from SRB, showing a close relationship to aprA of characterised species or forming a new cluster with only distant relation to aprA of isolated SRB. Below 40 mbsf no aprA genes could be amplified. This finding corresponds with results of the applied new Q-PCR assay for aprA. In contrast to aprA, the dsrA gene could be amplified at sediment depths of up to 121 mbsf. Even in such an extreme environment a high diversity of this gene was detected. The 16S rRNA gene copy numbers of total Bacteria were much higher than those of the functional genes at all sediment depths and were used to calculate the proportion of SRB to total Bacteria. The aprA and dsrA copy numbers comprised on average 0.5–1% of the 16S rRNA gene copy numbers of total Bacteria in the sediments up to a depth of ca. 40 mbsf. Depth profiles of the aprA and dsrA copy numbers were almost equal for all sites. Gene copy numbers decreased concomitantly with depth from around 10^8/g sediment close to the sediment surface to less than 10^5/g sediment at 5 mbsf. In the zone without detectable sulfate in the pore water, from ca. 40–121 mbsf (Peru margin, ODP site 1227), only dsrA (but not aprA) was detected, with copy numbers of less than 10^4/g sediment, comprising ca. 14% of the 16S rRNA gene copy numbers of total Bacteria. In this zone sulfate might be provided for SRB by anaerobic sulfide oxidation.
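    The proportion of SRB among total Bacteria reported here is simply the ratio of functional-gene to 16S rRNA gene copy numbers. A trivial but explicit sketch of that calculation (the caveat in the comment is ours):

```python
def srb_fraction_percent(functional_gene_copies, total_16s_copies):
    """Percentage of SRB among total Bacteria, approximated by the ratio of
    functional-gene (aprA or dsrA) copies to 16S rRNA gene copies.
    (A rough proxy: it assumes comparable per-cell gene copy numbers.)"""
    return 100.0 * functional_gene_copies / total_16s_copies
```

For instance, 10^6 dsrA copies against 10^8 16S copies gives the ~1% figure reported for the upper sediment layers.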

  1. A hybrid multibreath wash-in wash-out lung function quantification scheme in human subjects using hyperpolarized 3He MRI for simultaneous assessment of specific ventilation, alveolar oxygen tension, oxygen uptake, and air trapping.

    Science.gov (United States)

    Hamedani, Hooman; Kadlecek, Stephen; Xin, Yi; Siddiqui, Sarmad; Gatens, Heather; Naji, Joseph; Ishii, Masaru; Cereda, Maurizio; Rossman, Milton; Rizi, Rahim

    2017-08-01

    To present a method for simultaneous acquisition of alveolar oxygen tension (PAO2), specific ventilation (SV), and apparent diffusion coefficient (ADC) of hyperpolarized (HP) gas in the human lung, allowing reinterpretation of the PAO2 and SV maps to produce a map of oxygen uptake (R). An imaging scheme was designed with a series of identical normoxic HP gas wash-in breaths to measure ADC, SV, PAO2, and R in less than 2 min. Signal dynamics were fit to an iterative recursive model that regionally solved for these parameters. This measurement was successfully performed in 12 subjects classified into three cohorts: healthy, smoker, and chronic obstructive pulmonary disease (COPD). The overall whole-lung ADC, SV, PAO2, and R were 0.20 ± 0.03 cm^2/s, 0.39 ± 0.06, 113 ± 2 Torr, and 1.55 ± 0.35 Torr/s, respectively, in healthy subjects; 0.21 ± 0.03 cm^2/s, 0.33 ± 0.06, 115.9 ± 4 Torr, and 0.97 ± 0.2 Torr/s, respectively, in smokers; and 0.25 ± 0.06 cm^2/s, 0.23 ± 0.08, 114.8 ± 6.0 Torr, and 0.94 ± 0.12 Torr/s, respectively, in subjects with COPD. Heterogeneity of SV, PAO2, and R was an indicator of both smoking-related changes and disease, and the severity of the disease correlated with the degree of this heterogeneity. Subjects with symptoms showed reduced oxygen uptake and specific ventilation. High-resolution, nearly coregistered and quantitative measures of lung function and structure were obtained with less than 1 L of HP gas. This hybrid multibreath technique produced measures of lung function that revealed clear differences among the cohorts and subjects and were confirmed by correlations with global lung measurements. Magn Reson Med 78:611-624, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  2. Functionalized synchrotron in-line phase-contrast computed tomography: a novel approach for simultaneous quantification of structural alterations and localization of barium-labelled alveolar macrophages within mouse lung samples.

    Science.gov (United States)

    Dullin, Christian; dal Monego, Simeone; Larsson, Emanuel; Mohammadi, Sara; Krenkel, Martin; Garrovo, Chiara; Biffi, Stefania; Lorenzon, Andrea; Markus, Andrea; Napp, Joanna; Salditt, Tim; Accardo, Agostino; Alves, Frauke; Tromba, Giuliana

    2015-01-01

    Functionalized computed tomography (CT) in combination with labelled cells is virtually non-existent due to the limited sensitivity of X-ray-absorption-based imaging, but would be highly desirable to realise cell tracking studies in entire organisms. In this study we applied in-line free-propagation X-ray phase-contrast CT (XPCT) in an allergic asthma mouse model to assess structural changes as well as the biodistribution of barium-labelled macrophages in lung tissue. Alveolar macrophages that were barium-sulfate-loaded and fluorescent-labelled were instilled intratracheally into asthmatic and control mice. Mice were sacrificed after 24 h; lungs were kept in situ, inflated with air and scanned utilizing XPCT at the SYRMEP beamline (Elettra Synchrotron Light Source, Italy). Single-distance phase retrieval was used to generate data sets with ten times greater contrast-to-noise ratio than absorption-based CT (in our setup), making it possible to depict and quantify structural hallmarks of asthmatic lungs such as reduced air volume, obstruction of airways and increased soft-tissue content. Furthermore, we found a higher concentration as well as a specific accumulation of the barium-labelled macrophages in asthmatic lung tissue. It is believed that XPCT will be beneficial in preclinical asthma research for both the assessment of therapeutic response and the analysis of the role of the recruitment of macrophages to inflammatory sites.

  3. Uncertainty quantification applied to the radiological characterization of radioactive waste.

    Science.gov (United States)

    Zaffora, B; Magistris, M; Saporta, G; Chevalier, J-P

    2017-09-01

    This paper describes the process adopted at the European Organization for Nuclear Research (CERN) to quantify uncertainties affecting the characterization of very-low-level radioactive waste. Radioactive waste is a by-product of the operation of high-energy particle accelerators. Radioactive waste must be characterized to ensure its safe disposal in final repositories. Characterizing radioactive waste means establishing the list of radionuclides together with their activities. The estimated activity levels are compared to the limits given by the national authority of the waste disposal. The quantification of the uncertainty affecting the concentration of the radionuclides is therefore essential to estimate the acceptability of the waste in the final repository but also to control the sorting, volume reduction and packaging phases of the characterization process. The characterization method consists of estimating the activity of produced radionuclides either by experimental methods or statistical approaches. The uncertainties are estimated using classical statistical methods and uncertainty propagation. A mixed multivariate random vector is built to generate random input parameters for the activity calculations. The random vector is a robust tool to account for the unknown radiological history of legacy waste. This analytical technique is also particularly useful to generate random chemical compositions of materials when the trace element concentrations are not available or cannot be measured. The methodology was validated using a waste population of legacy copper activated at CERN. The methodology introduced here represents a first approach for the uncertainty quantification (UQ) of the characterization process of waste produced at particle accelerators. Copyright © 2017 Elsevier Ltd. All rights reserved.
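    Uncertainty propagation with a random input vector, as described, can be sketched by Monte Carlo sampling. The distributions below are purely illustrative placeholders, not CERN's actual activation model or measured marginals:

```python
import numpy as np

def propagate_activity(n_samples=100_000, seed=0):
    """Monte Carlo sketch of uncertainty propagation: a toy activity
    A = flux_factor * content, with a log-normal flux factor and a normal
    (clipped at zero) trace-element content. Both distributions stand in
    for the paper's mixed multivariate random vector. Returns the mean
    and a 95% interval of the activity."""
    rng = np.random.default_rng(seed)
    flux_factor = rng.lognormal(mean=0.0, sigma=0.3, size=n_samples)
    content = np.clip(rng.normal(loc=1.0, scale=0.1, size=n_samples), 0.0, None)
    activity = flux_factor * content
    low, high = np.percentile(activity, [2.5, 97.5])
    return float(activity.mean()), (float(low), float(high))
```

Comparing the resulting interval against a repository's activity limit is the kind of acceptability check the characterization process performs.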

  4. Exact reliability quantification of highly reliable systems with maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Bris, Radim, E-mail: radim.bris@vsb.c [VSB-Technical University Ostrava, Faculty of Electrical Engineering and Computer Science, Department of Applied Mathematics, 17. listopadu 15, 70833 Ostrava-Poruba (Czech Republic)

    2010-12-15

    When a system is composed of highly reliable elements, exact reliability quantification may be problematic because computer accuracy is limited. Inaccuracy can arise in different ways: for example, an error may be made when subtracting two numbers that are very close to each other, or during the summation of many very different numbers. The basic objective of this paper is to find a procedure that eliminates the errors made by the computer when calculations close to its accuracy limit are executed. The highly reliable system is represented by a directed acyclic graph composed of terminal nodes, i.e. highly reliable input elements, internal nodes representing subsystems, and edges that bind all of these nodes. Three admissible unavailability models of terminal nodes are introduced, including both corrective and preventive maintenance. The algorithm for exact unavailability calculation of terminal nodes builds on the merits of MATLAB, a high-performance language for technical computing. The system unavailability quantification procedure applied to a graph structure, which considers both independent and dependent (i.e. repeatedly occurring) terminal nodes, is based on a combinatorial principle. This principle requires summation of many very different non-negative numbers, which may be a source of inaccuracy. That is why another algorithm, for exact summation of such numbers, is designed in the paper. The summation procedure exploits a special number system with base 2^32. The computational efficiency of the new computing methodology is compared with advanced simulation software, and various calculations on systems from the references are performed to emphasize its merits.
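
    The rounding problem motivating the base-2^32 summation scheme is easy to reproduce. Python's standard library offers an analogous exact facility: math.fsum tracks exact partial sums, and arbitrary-precision Fraction arithmetic serves as a ground truth. A minimal sketch (not the paper's algorithm) of how naive floating-point summation of many very different numbers silently drops the small terms:

```python
import math
from fractions import Fraction

# One large term plus many tiny ones: each tiny term individually falls
# below half an ulp of the running sum, so naive accumulation rounds it away.
values = [1.0] + [1e-16] * 1000

naive = sum(values)                                   # loses every tiny term
exact = math.fsum(values)                             # exactly rounded sum
reference = float(sum(Fraction(v) for v in values))   # arbitrary-precision check
```

    Here naive stays at exactly 1.0, while the exact sum retains the accumulated contribution of the thousand small terms, which is the same failure mode the combinatorial unavailability formula runs into.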

  5. The Source Inversion Validation (SIV) Initiative: A Collaborative Study on Uncertainty Quantification in Earthquake Source Inversions

    Science.gov (United States)

    Mai, P. M.; Schorlemmer, D.; Page, M.

    2012-04-01

    Earthquake source inversions image the spatio-temporal rupture evolution on one or more fault planes using seismic and/or geodetic data. Such studies are critically important for earthquake seismology in general, and for advancing seismic hazard analysis in particular, as they reveal earthquake source complexity and help (i) to investigate earthquake mechanics; (ii) to develop spontaneous dynamic rupture models; (iii) to build models for generating rupture realizations for ground-motion simulations. In applications (i)-(iii), the underlying finite-fault source models are regarded as "data" (input information), but their uncertainties are essentially unknown. After all, source models are obtained by solving an inherently ill-posed inverse problem to which many a priori assumptions and uncertain observations are applied. The Source Inversion Validation (SIV) project is a collaborative effort to better understand the variability between rupture models for a single earthquake (as manifested in the finite-source rupture model database) and to develop robust uncertainty quantification for earthquake source inversions. The SIV project highlights the need to develop a long-standing and rigorous testing platform to examine the current state of the art in earthquake source inversion, and to develop and test novel source inversion approaches. We will review the current status of the SIV project, and report the findings and conclusions of the recent workshops. We will briefly discuss several source-inversion methods, how they treat uncertainties in data, and how they assess the posterior model uncertainty. Case studies include initial forward-modeling tests on Green's function calculations, and inversion results for synthetic data from a spontaneous dynamic crack-like strike-slip earthquake on a steeply dipping fault, embedded in a layered crustal velocity-density structure.

  6. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    Science.gov (United States)

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was further extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancy between model-free and model-based analysis were attributed to the effects of dispersion and to the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.
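
    The core of the model-free approach is numerical deconvolution of the tissue curve with the local arterial input function. The sketch below solves the discrete convolution equation by noise-free forward substitution on synthetic curves; actual QUASAR pipelines use a regularized SVD inversion instead, because measured curves are noisy, and all numbers here are illustrative:

```python
def deconvolve(tissue, aif, dt):
    """Recover f(t) = CBF * R(t) from tissue(t) = dt * (aif ⊛ f)(t)
    by forward substitution on the lower-triangular convolution system.
    Requires a non-zero leading AIF sample; noise-free sketch only."""
    n = len(tissue)
    f = [0.0] * n
    for i in range(n):
        acc = dt * sum(aif[i - j] * f[j] for j in range(i))
        f[i] = (tissue[i] - acc) / (aif[0] * dt)
    return f

# Synthetic noise-free example: CBF = 0.6 scales the residue function R(t).
dt = 1.0
aif = [1.0, 0.8, 0.5, 0.2, 0.05, 0.0, 0.0, 0.0]
truth = [0.6, 0.55, 0.3, 0.1, 0.0, 0.0, 0.0, 0.0]
tissue = [dt * sum(aif[i - j] * truth[j] for j in range(i + 1))
          for i in range(len(truth))]
f = deconvolve(tissue, aif, dt)   # recovers truth; CBF is then max(f)
```

    In the noisy clinical setting the forward-substitution step is replaced by a truncated or Tikhonov-regularized SVD of the convolution matrix, which is what "model-free" QUASAR analysis refers to.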

  7. Modality of Input and Vocabulary Acquisition

    Directory of Open Access Journals (Sweden)

    Tetyana Sydorenko

    2010-06-01

    This study examines the effect of input modality (video, audio, and captions, i.e., on-screen text in the same language as the audio) on (a) the learning of written and aural word forms, (b) overall vocabulary gains, (c) attention to input, and (d) the vocabulary learning strategies of beginning L2 learners. Twenty-six second-semester learners of Russian participated in this study. Group one (N = 8) saw video with audio and captions (VAC); group two (N = 9) saw video with audio (VA); group three (N = 9) saw video with captions (VC). All participants completed written and aural vocabulary tests and a final questionnaire. The results indicate that the groups with captions (VAC and VC) scored higher on written than on aural recognition of word forms, while the reverse applied to the VA group. The VAC group learned more word meanings than the VA group. Results from the questionnaire suggest that learners paid most attention to captions, followed by video and audio, and acquired most words by associating them with visual images. The pedagogical implications of this study are that captioned video tends to aid recognition of written word forms and the learning of word meaning, while non-captioned video tends to improve listening comprehension as it facilitates recognition of aural word forms.

  8. [Prosody, speech input and language acquisition].

    Science.gov (United States)

    Jungheim, M; Miller, S; Kühn, D; Ptok, M

    2014-04-01

    In order to acquire language, children require speech input, and the prosody of that input plays an important role. In most cultures adults modify their code when communicating with children; compared to normal speech this code differs especially with regard to prosody. For this review a selective literature search in PubMed and Scopus was performed. Prosodic characteristics are a key feature of spoken language. By analysing prosodic features, children gain knowledge about underlying grammatical structures. Child-directed speech (CDS) is modified in a way that acoustically highlights meaningful sequences, so that important information can be extracted from the continuous speech flow more easily. CDS is said to enhance the representation of linguistic signs. Taking into consideration what has previously been described in the literature regarding the perception of suprasegmentals, CDS seems able to support language acquisition due to the correspondence of prosodic and syntactic units. However, no findings have been reported indicating that the linguistically reduced CDS could hinder first language acquisition.

  9. FLUTAN 2.0. Input specifications

    International Nuclear Information System (INIS)

    Willerding, G.; Baumann, W.

    1996-05-01

    FLUTAN is a highly vectorized computer code for 3D fluid-dynamic and thermal-hydraulic analyses in Cartesian or cylindrical coordinates. It belongs to the family of COMMIX codes originally developed at Argonne National Laboratory, USA, and is particularly related to COMMIX-1A and COMMIX-1B, which were made available to FZK within the framework of cooperation contracts in the fast reactor safety field. FLUTAN 2.0 is an improved version of the FLUTAN code released in 1992. It offers several additional innovations, e.g. the QUICK-LECUSSO-FRAM techniques for reducing numerical diffusion in the k-ε turbulence model equations; a more sophisticated wall model for specifying a mass flow outside the surface walls together with its flow path and its associated inlet and outlet flow temperatures; and a revised and upgraded pressure boundary condition that fully includes the outlet cells in the solution process of the conservation equations. Last but not least, a so-called visualization option based on VISART standards has been provided. This report contains detailed input instructions, presents formulations of the various model options, and explains how to use the code by means of comprehensive sample input. (orig.)

  10. Radionuclides in the oceans inputs and inventories

    International Nuclear Information System (INIS)

    Guegueniat, P.; Germain, P.; Metivier, H.

    1996-01-01

    Ten years after Chernobyl, following the decision by France to end nuclear weapon testing in the Pacific Ocean, after the end of the OECD-NEA Coordinated Research and Environmental Surveillance programme related to low-level waste dumping in the deep ocean, and one hundred years after the discovery of radioactivity, the IPSN wanted to compile and review the available information on artificial radioactivity levels in seas and oceans. International experts were invited to present data on inputs and inventories of radionuclides in the marine environment, and to describe the evolution of radioactivity levels in water, sediments and living organisms. The different sources of radionuclides present in the aquatic environment are described: atmospheric fallout before and after Chernobyl, industrial wastes, dumped wastes and ships, nuclear ship accidents, river inputs, earth-sea atmospheric transfers and experimental sites for nuclear testing. Radioactivity levels due to these sources are presented at the scale of oceans (Atlantic, Pacific and Indian) and of individual seas (the Channel, North Sea, Irish Sea, Mediterranean, Baltic, Black Sea and Arctic seas). The data collected in the present book give an up-to-date assessment of radionuclide distributions which will be very useful for addressing scientific and wider public concerns about radionuclides found in the aquatic environment. It gives many references useful to those who want to deepen their understanding of particular aspects of marine radioecology. (authors)

  11. Ground motion input in seismic evaluation studies

    International Nuclear Information System (INIS)

    Sewell, R.T.; Wu, S.C.

    1996-07-01

    This report documents research pertaining to conservatism and variability in seismic risk estimates. Specifically, it examines whether or not artificial motions produce unrealistic evaluation demands, i.e., demands significantly inconsistent with those expected from real earthquake motions. To study these issues, two types of artificial motions are considered: (a) motions with smooth response spectra, and (b) motions with realistic variations in spectral amplitude across vibration frequency. For both types of artificial motion, time histories are generated to match target spectral shapes. For comparison, empirical motions representative of those that might result from strong earthquakes in the Eastern U.S. are also considered. The study findings suggest that artificial motions resulting from typical simulation approaches (aimed at matching a given target spectrum) are generally adequate and appropriate in representing the peak-response demands that may be induced in linear structures and equipment responding to real earthquake motions. Also, given similar input Fourier energies at high frequencies, the levels of input Fourier energy at low frequencies observed for artificial motions are substantially similar to the levels noted in real earthquake motions. In addition, the study reveals specific problems resulting from the application of Western U.S.-type motions for the seismic evaluation of Eastern U.S. nuclear power plants.

  12. Uncertainty quantification metrics for whole product life cycle cost estimates in aerospace innovation

    Science.gov (United States)

    Schwabe, O.; Shehab, E.; Erkoyuncu, J.

    2015-08-01

    The lack of defensible methods for quantifying cost estimate uncertainty over the whole product life cycle of aerospace innovations such as propulsion systems or airframes poses a significant challenge to the creation of accurate and defensible cost estimates. Based on the axiomatic definition of uncertainty as the actual prediction error of the cost estimate, this paper provides a comprehensive overview of metrics used for the uncertainty quantification of cost estimates, drawing on a literature review, an evaluation of publicly funded projects such as those within the CORDIS or Horizon 2020 programs, and an analysis of established approaches used by organizations such as NASA, the U.S. Department of Defense, the ESA, and various commercial companies. The metrics are categorized by their foundational character (foundations), their use in practice (state-of-practice), their availability for practice (state-of-art) and those suggested for future exploration (state-of-future). Insights gained were that a variety of uncertainty quantification metrics exist whose suitability depends on the volatility of available relevant information, as defined by technical and cost readiness level, and on the number of whole product life cycle phases the estimate is intended to be valid for. Information volatility and the number of whole product life cycle phases can hereby be considered as defining multi-dimensional probability fields admitting various uncertainty quantification metric families with identifiable thresholds for transitioning between them. The key research gaps identified were the lack of guidance grounded in theory for the selection of uncertainty quantification metrics and the lack of practical alternatives to metrics based on the Central Limit Theorem. An innovative uncertainty quantification framework consisting of a set-theory-based typology, a data library, a classification system, and a corresponding input-output model is put forward to address this research gap as the basis
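
    As a concrete instance of the Central Limit Theorem-based metrics the review refers to, a prediction interval for a total cost estimate can be formed by summing component means and variances. The component figures below are illustrative, and independence between components is an assumption:

```python
import math

def clt_interval(component_means, component_sds, z=1.96):
    """Approximate 95% interval for a total cost built from independent
    component estimates: by the CLT the total is roughly normal with
    mean = sum of means and variance = sum of variances."""
    mean = sum(component_means)
    sd = math.sqrt(sum(s * s for s in component_sds))
    return mean - z * sd, mean + z * sd

# Three hypothetical work packages with their estimated cost spreads.
lo, hi = clt_interval([10.0, 20.0, 30.0], [3.0, 4.0, 0.0])
```

    The width of this interval is one of the simplest defensible uncertainty metrics; the paper's point is precisely that alternatives are needed when the normality or independence assumptions are too volatile to hold.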

  13. Wideband Small-Signal Input dq Admittance Modeling of Six-Pulse Diode Rectifiers

    DEFF Research Database (Denmark)

    Yue, Xiaolong; Wang, Xiongfei; Blaabjerg, Frede

    2018-01-01

    This paper studies the wideband small-signal input dq admittance of six-pulse diode rectifiers. Considering the frequency coupling introduced by ripple-frequency harmonics of the d- and q-channel switching functions, the proposed model successfully predicts the small-signal input dq admittance of six-pulse diode rectifiers in high-frequency regions that existing models fail to explain. Simulation and experimental results verify the accuracy of the proposed model.

  14. Quantification of Spatial Heterogeneity in Old Growth Forest of Korean Pine

    Science.gov (United States)

    Wang Zhengquan; Wang Qingcheng; Zhang Yandong

    1997-01-01

    Spatial heterogeneity is a very important issue in studying the functions and processes of ecological systems at various scales. Semivariogram analysis is an effective technique for summarizing spatial data and quantifying spatial heterogeneity. In this paper, we propose some principles for using semivariograms to characterize and compare the spatial heterogeneity of...
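
    The empirical semivariogram at lag h averages half the squared differences of all sample pairs separated by h. A minimal 1-D version for an equally spaced transect (a sketch of the standard estimator, not the authors' code):

```python
def semivariogram(z, max_lag):
    """Empirical semivariogram of a 1-D transect of equally spaced samples:
    gamma(h) = (1 / 2N(h)) * sum over the N(h) pairs at lag h of (z[i+h] - z[i])^2.
    Returns {lag: gamma} for lags 1..max_lag."""
    gamma = {}
    n = len(z)
    for h in range(1, max_lag + 1):
        diffs = [(z[i + h] - z[i]) ** 2 for i in range(n - h)]
        gamma[h] = sum(diffs) / (2 * len(diffs))
    return gamma
```

    A flat (homogeneous) transect yields gamma = 0 at every lag, while the rate at which gamma rises with lag, and the range at which it levels off, are the quantities used to characterize and compare spatial heterogeneity.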

  15. Impact of acid atmospheric deposition on soils : quantification of chemical and hydrologic processes

    NARCIS (Netherlands)

    Grinsven, van J.J.M.

    1988-01-01

    Atmospheric deposition of SOx, NOx and NHx will cause major changes in the chemical composition of solutions in acid soils, which may affect the biological functions of the soil. This thesis deals with quantification of soil acidification by means of chemical

  16. Inverse problems and uncertainty quantification

    KAUST Repository

    Litvinenko, Alexander; Matthies, Hermann G.

    2013-01-01

    computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example

  17. Guided Wave Delamination Detection and Quantification With Wavefield Data Analysis

    Science.gov (United States)

    Tian, Zhenhua; Campbell Leckey, Cara A.; Seebo, Jeffrey P.; Yu, Lingyu

    2014-01-01

    Unexpected damage can occur in aerospace composites due to impact events or material stress during off-nominal loading events. In particular, laminated composites are susceptible to delamination damage due to weak transverse tensile and inter-laminar shear strengths. The development of reliable and quantitative techniques to detect delamination damage in laminated composites is imperative for safe and functional, optimally designed next-generation composite structures. In this paper, we investigate guided wave interactions with delamination damage and develop quantification algorithms using wavefield data analysis. Trapped guided waves in the delamination region are observed in the wavefield data and further quantitatively interpreted using different wavenumber analysis methods. The frequency-wavenumber representation of the wavefield shows that new wavenumbers are present and correlate with trapped waves in the damage region. These new wavenumbers are used to detect and quantify the delamination damage through wavenumber analysis, which shows how the wavenumber changes as a function of wave propagation distance. The location and spatial duration of the new wavenumbers can be identified, providing a useful means not only for detecting the presence of delamination damage but also for estimating the delamination size. Our method has been applied to detect and quantify real delamination damage with complex geometry (grown using a quasi-static indentation technique). The detection and quantification results show the location, size, and shape of the delamination damage.
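
    Wavenumber analysis rests on transforming wavefield data into the wavenumber domain and locating spectral peaks; new peaks appearing locally indicate trapped waves. The sketch below applies a plain DFT to one synthetic spatial snapshot (full wavefield analysis works on the 2-D frequency-wavenumber spectrum, and the signal here is illustrative):

```python
import cmath
import math

def dominant_wavenumber(u, dx):
    """Return the dominant spatial wavenumber (rad per unit length) of a
    wavefield snapshot u sampled every dx, via a plain discrete Fourier
    transform over the positive-wavenumber half of the spectrum."""
    n = len(u)
    mags = [abs(sum(u[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                    for j in range(n)))
            for k in range(n // 2)]
    k_peak = max(range(n // 2), key=lambda k: mags[k])
    return 2 * math.pi * k_peak / (n * dx)

# Synthetic snapshot: 5 spatial cycles over 64 samples spaced 0.01 m apart.
n, dx = 64, 0.01
u = [math.cos(2 * math.pi * 5 * j / n) for j in range(n)]
k_est = dominant_wavenumber(u, dx)
```

    Comparing the wavenumber estimated inside a suspect region against the surrounding pristine plate is what localizes the delamination and bounds its size.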

  18. Three-dimensional whole-brain perfusion quantification using pseudo-continuous arterial spin labeling MRI at multiple post-labeling delays: accounting for both arterial transit time and impulse response function.

    Science.gov (United States)

    Qin, Qin; Huang, Alan J; Hua, Jun; Desmond, John E; Stevens, Robert D; van Zijl, Peter C M

    2014-02-01

    Measurement of the cerebral blood flow (CBF) with whole-brain coverage is challenging in terms of both acquisition and quantitative analysis. In order to fit arterial spin labeling-based perfusion kinetic curves, an empirical three-parameter model which characterizes the effective impulse response function (IRF) is introduced, allowing the determination of CBF, the arterial transit time (ATT) and T1,eff. The accuracy and precision of the proposed model were compared with those of more complicated models with four or five parameters through Monte Carlo simulations. Pseudo-continuous arterial spin labeling images were acquired on a clinical 3-T scanner in 10 normal volunteers using a three-dimensional multi-shot gradient and spin echo scheme at multiple post-labeling delays to sample the kinetic curves. Voxel-wise fitting was performed using the three-parameter model and other models containing two, four or five unknown parameters. For the two-parameter model, T1,eff values close to tissue and blood were assumed separately. Standard statistical analysis was conducted to compare these fitting models in various brain regions. The fitted results indicated that: (i) the estimated CBF values using the two-parameter model show appreciable dependence on the assumed T1,eff values; (ii) the proposed three-parameter model achieves the optimal balance between goodness of fit and model complexity when compared among the models with explicit IRF fitting; (iii) both the two-parameter model using fixed blood T1 values for T1,eff and the three-parameter model provide reasonable fitting results. Using the proposed three-parameter model, the estimated CBF (46 ± 14 mL/100 g/min) and ATT (1.4 ± 0.3 s) values averaged over different brain regions are close to literature reports; the estimated T1,eff values (1.9 ± 0.4 s) are higher than the tissue T1 values, possibly reflecting a contribution from the microvascular arterial blood compartment.
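
    Fitting such a multi-parameter kinetic model can be sketched with a coarse grid search in which the CBF scale factor is solved in closed form for each candidate (ATT, T1,eff) pair. The mono-exponential IRF shape after the transit delay, the grids, and all numbers below are illustrative assumptions, not the paper's exact model:

```python
import math

def model(t, cbf, att, t1eff):
    """Toy kinetic curve: zero before the arterial transit time, then a
    mono-exponential effective IRF with time constant t1eff."""
    return 0.0 if t < att else cbf * math.exp(-(t - att) / t1eff)

def fit(times, data):
    """Grid search over (ATT, T1,eff); for each candidate shape the CBF
    scale is the closed-form linear least-squares factor."""
    best_sse, best = float("inf"), None
    for att in [i / 10 for i in range(8, 21)]:          # ATT grid: 0.8..2.0 s
        for t1eff in [j / 10 for j in range(10, 26)]:   # T1,eff grid: 1.0..2.5 s
            shape = [model(t, 1.0, att, t1eff) for t in times]
            denom = sum(s * s for s in shape)
            if denom == 0.0:
                continue
            cbf = sum(s * d for s, d in zip(shape, data)) / denom
            sse = sum((cbf * s - d) ** 2 for s, d in zip(shape, data))
            if sse < best_sse:
                best_sse, best = sse, (cbf, att, t1eff)
    return best
```

    On noise-free synthetic data sampled densely enough to pin down the transit time, the search recovers the generating parameters; real voxel-wise fitting would use a continuous optimizer and noisy multi-delay samples.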

  19. Responses of tree and insect herbivores to elevated nitrogen inputs: A meta-analysis

    Science.gov (United States)

    Li, Furong; Dudley, Tom L.; Chen, Baoming; Chang, Xiaoyu; Liang, Liyin; Peng, Shaolin

    2016-11-01

    Increasing atmospheric nitrogen (N) inputs have the potential to alter terrestrial ecosystem function through impacts on plant-herbivore interactions. The goal of our study is to search for a general pattern in the responses of tree characteristics important for herbivores, and of insect herbivore performance, to elevated N inputs. We conducted a meta-analysis based on 109 papers describing impacts of nitrogen inputs on tree characteristics and 16 papers on insect performance. Differences in plant characteristics and insect performance between broadleaves and conifers were also explored. Tree aboveground biomass, leaf biomass and leaf N concentration significantly increased under elevated N inputs. Elevated N inputs had no significant overall effect on concentrations of phenolic compounds and lignin but adversely affected tannin, a defensive chemical against insect herbivores. Additionally, overall insect herbivore performance (including development time, insect biomass, relative growth rate, and so on) was significantly increased by elevated N inputs. Given the inconsistent responses between broadleaves and conifers, broadleaves would be more likely than conifers to respond to elevated N inputs by increasing growth through light interception and photosynthesis rather than by producing more defensive chemicals. Moreover, the overall carbohydrate concentration was significantly reduced by 13.12% in broadleaves while it increased slightly in conifers. The overall tannin concentration decreased significantly by 39.21% in broadleaves, while the 5.8% decrease in conifers was not significant. The results of the analysis indicate that elevated N inputs would provide more food sources and improve tree palatability for insects, while the resistance of trees against their insect herbivores is weakened, especially for broadleaves. Thus, global forest insect pest problems would be aggravated by elevated N inputs. As N inputs continue to rise in the future, forest

  20. Benchmarking common quantification strategies for large-scale phosphoproteomics

    DEFF Research Database (Denmark)

    Hogrebe, Alexander; von Stechow, Louise; Bekker-Jensen, Dorte B

    2018-01-01

    Comprehensive mass spectrometry (MS)-based proteomics is now feasible, but reproducible quantification remains challenging, especially for post-translational modifications such as phosphorylation. Here, we compare the most popular quantification techniques for global phosphoproteomics: label-free...

  1. Do dyslexics have auditory input processing difficulties?

    DEFF Research Database (Denmark)

    Poulsen, Mads

    2011-01-01

    Word production difficulties are well documented in dyslexia, whereas the results are mixed for receptive phonological processing. This asymmetry raises the possibility that the core phonological deficit of dyslexia is restricted to output processing stages. The present study investigated whether a group of dyslexics had word-level receptive difficulties, using an auditory lexical decision task with long words and nonsense words. The dyslexics were slower and less accurate than chronological-age controls, with disproportionately low performance on nonsense words. The finding suggests that input processing difficulties are associated with the phonological deficit, but that these difficulties may be stronger above the level of phoneme perception.

  2. Auto Draw from Excel Input Files

    Science.gov (United States)

    Strauss, Karl F.; Goullioud, Renaud; Cox, Brian; Grimes, James M.

    2011-01-01

    The design process often involves the use of Excel files during project development. To facilitate communication of the information in the Excel files, drawings are often generated. During the design process, the Excel files are updated often to reflect new input. The problem is that the drawings often lag behind the updates, leading to confusion about the current state of the design. The use of this program allows visualization of complex data in a format that is more easily understandable than pages of numbers. Because the graphical output can be updated automatically, the manual labor of diagram drawing can be eliminated. More frequent updates of system diagrams can reduce confusion and errors, and are likely to uncover systemic problems earlier in the design cycle, thus reducing rework and redesign.

  3. Optimizing microwave photodetection: input-output theory

    Science.gov (United States)

    Schöndorf, M.; Govia, L. C. G.; Vavilov, M. G.; McDermott, R.; Wilhelm, F. K.

    2018-04-01

    High fidelity microwave photon counting is an important tool for various areas from background radiation analysis in astronomy to the implementation of circuit quantum electrodynamic architectures for the realization of a scalable quantum information processor. In this work we describe a microwave photon counter coupled to a semi-infinite transmission line. We employ input-output theory to examine a continuously driven transmission line as well as traveling photon wave packets. Using analytic and numerical methods, we calculate the conditions on the system parameters necessary to optimize measurement and achieve high detection efficiency. With this we can derive a general matching condition depending on the different system rates, under which the measurement process is optimal.

  4. Distribution Development for STORM Ingestion Input Parameters

    Energy Technology Data Exchange (ETDEWEB)

    Fulton, John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-07-01

    The Sandia-developed Transport of Radioactive Materials (STORM) code suite is used as part of the Radioisotope Power System Launch Safety (RPSLS) program to perform statistical modeling of the consequences due to release of radioactive material given a launch accident. As part of this modeling, STORM samples input parameters from probability distributions, with some parameters treated as constants. This report describes the work done to convert four of these constant inputs (Consumption Rate, Average Crop Yield, Cropland to Landuse Database Ratio, and Crop Uptake Factor) to sampled values. Consumption Rate changed from a constant value of 557.68 kg/yr to a normal distribution with a mean of 102.96 kg/yr and a standard deviation of 2.65 kg/yr. Average Crop Yield changed from a constant value of 3.783 kg edible/m^2 to a normal distribution with a mean of 3.23 kg edible/m^2 and a standard deviation of 0.442 kg edible/m^2. The Cropland to Landuse Database Ratio changed from a constant value of 0.0996 (9.96%) to a normal distribution with a mean of 0.0312 (3.12%) and a standard deviation of 0.00292 (0.29%). Finally, the Crop Uptake Factor changed from a constant value of 6.37e-4 (Bq_crop/kg)/(Bq_soil/kg) to a lognormal distribution with a geometric mean of 3.38e-4 (Bq_crop/kg)/(Bq_soil/kg) and a standard deviation of 3.33 (Bq_crop/kg)/(Bq_soil/kg).
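
    Drawing the four converted inputs can be sketched with Python's standard random module, using the distribution parameters quoted above. One caveat: the 3.33 spread quoted for the lognormal Crop Uptake Factor is interpreted here as a geometric standard deviation, which is an assumption about the report's parameterization:

```python
import math
import random

rng = random.Random(0)

def sample_inputs():
    """One joint draw of the four previously constant STORM ingestion
    inputs, using the distribution parameters from the report."""
    consumption = rng.gauss(102.96, 2.65)       # kg/yr
    crop_yield = rng.gauss(3.23, 0.442)         # kg edible / m^2
    land_ratio = rng.gauss(0.0312, 0.00292)     # dimensionless fraction
    # Lognormal via its log-space parameters; 3.33 read as geometric SD.
    uptake = rng.lognormvariate(math.log(3.38e-4), math.log(3.33))
    return consumption, crop_yield, land_ratio, uptake
```

    Over many draws the sample mean of the normal inputs converges to the quoted means, and the median uptake converges to the quoted geometric mean, which is a quick sanity check on the parameterization.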

  5. Analysis on relation between safety input and accidents

    Institute of Scientific and Technical Information of China (English)

    YAO Qing-guo; ZHANG Xue-mu; LI Chun-hui

    2007-01-01

    The amount of safety input directly determines the level of safety, and a dialectical, unified relationship exists between safety input and accidents. Based on field investigation and reliable data, this paper studies this dialectical relationship in depth and draws conclusions: the safety situation of coal enterprises is related to the safety input rate and is affected little by the safety input scale. On this basis, a relationship model between safety input and accidents, i.e. the accident model, is built.

  6. Modal Parameter Identification from Responses of General Unknown Random Inputs

    DEFF Research Database (Denmark)

    Ibrahim, S. R.; Asmussen, J. C.; Brincker, Rune

    1996-01-01

    Modal parameter identification from ambient responses due to general unknown random inputs is investigated. Existing identification techniques, which are based on assumptions of white-noise and/or stationary random inputs, are utilized even though these input conditions are not satisfied. … This is accomplished by adding, in cascade, a force conversion system to the structural system under consideration. The input to the force conversion system is white noise, and its output is the actual force(s) applied to the structure. The white-noise input(s) and the structural responses are then used…

  7. Impact of Personal Characteristics and Technical Factors on Quantification of Sodium 18F-Fluoride Uptake in Human Arteries

    DEFF Research Database (Denmark)

    Blomberg, Björn Alexander; Thomassen, Anders; de Jong, Pim A

    2015-01-01

    Sodium (18)F-fluoride ((18)F-NaF) PET/CT imaging is a promising imaging technique for assessment of atherosclerosis, but is hampered by a lack of validated quantification protocols. Both personal characteristics and technical factors can affect quantification of arterial (18)F-NaF uptake. This study investigated whether blood activity, renal function, injected dose, circulating time, and PET/CT system affect quantification of arterial (18)F-NaF uptake. METHODS: Eighty-nine healthy subjects were prospectively examined by (18)F-NaF PET/CT imaging. Arterial (18)F-NaF uptake was quantified … assessed the effect of personal characteristics and technical factors on quantification of arterial (18)F-NaF uptake. RESULTS: NaFmax and TBRmax/mean were dependent on blood activity (β = .34 to .44, P

  8. Colour thresholding and objective quantification in bioimaging

    Science.gov (United States)

    Fermin, C. D.; Gerber, M. A.; Torre-Bueno, J. R.

    1992-01-01

    Computer imaging is rapidly becoming an indispensable tool for the quantification of variables in research and medicine. Whilst its use in medicine has largely been limited to qualitative observations, imaging in applied basic sciences, medical research and biotechnology demands objective quantification of the variables in question. In black-and-white densitometry (0-256 levels of intensity) the separation of subtle differences between closely related hues from stains is sometimes very difficult. True-colour and real-time video microscopy analysis offer choices not previously available with monochrome systems. In this paper we demonstrate the usefulness of colour thresholding, which has so far proven indispensable for proper objective quantification of the products of histochemical reactions and/or subtle differences in tissue and cells. In addition, we provide interested but untrained readers with basic information that may assist decisions regarding the most suitable set-up for a project under consideration. Data from projects in progress at Tulane are shown to illustrate the advantage of colour thresholding over monochrome densitometry for objective quantification of subtle colour differences between experimental and control samples.
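
    Colour thresholding of this kind can be sketched by converting pixels to HSV and gating on hue: two stains of similar intensity but different hue, which map to nearly identical grey levels in monochrome densitometry, separate cleanly. A minimal illustration with synthetic pixel values (the hue windows and saturation cutoff are illustrative choices):

```python
import colorsys

def count_in_hue_range(pixels, hue_lo, hue_hi, min_sat=0.2):
    """Count 8-bit RGB pixels whose hue falls within [hue_lo, hue_hi]
    (hue normalized to [0, 1)). Low-saturation (greyish) pixels are
    excluded, since their hue is unstable and carries no stain signal."""
    hits = 0
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        if s >= min_sat and hue_lo <= h <= hue_hi:
            hits += 1
    return hits

# Synthetic "image": one reddish, one greenish, one bluish, one grey pixel.
pixels = [(200, 30, 30), (30, 200, 30), (30, 30, 200), (128, 128, 128)]
```

    Counting (or summing intensity over) the pixels inside each hue window is the objective quantification step: the grey pixel is rejected by the saturation gate, and each stain colour lands in exactly one window.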

  9. Recurrence quantification analysis in Liu's attractor