WorldWideScience

Sample records for providing quantitative estimates

  1. Quantitative estimation of cholinesterase-specific drug metabolism of carbamate inhibitors provided by the analysis of the area under the inhibition-time curve.

    Science.gov (United States)

    Zhou, Huimin; Xiao, Qiaoling; Tan, Wen; Zhan, Yiyi; Pistolozzi, Marco

    2017-09-10

    Several molecules containing carbamate groups are metabolized by cholinesterases. This metabolism includes a time-dependent catalytic step which temporarily inhibits the enzymes. In this paper we demonstrate that the analysis of the area under the inhibition versus time curve (AUIC) can be used to obtain a quantitative estimate of the amount of carbamate metabolized by the enzyme. (R)-bambuterol monocarbamate and plasma butyrylcholinesterase were used as a model carbamate-cholinesterase system. The inhibition of different concentrations of the enzyme was monitored for 5 h upon incubation with different concentrations of carbamate, and the resulting AUICs were analyzed. The approach allows the amount of carbamate metabolized by cholinesterases to be estimated in a selected compartment in which the cholinesterase is confined (e.g. in vitro solutions, tissues or body fluids), either in vitro or in vivo. Copyright © 2017 Elsevier B.V. All rights reserved.
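
    The AUIC itself is straightforward to compute from an inhibition time course. Below is a minimal sketch (not the authors' code; all measurements are hypothetical) using trapezoidal integration:

    ```python
    # Estimate the area under the inhibition-versus-time curve (AUIC)
    # by trapezoidal integration of hypothetical inhibition measurements.
    import numpy as np

    t = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0])                  # hours of incubation
    inhibition = np.array([0.00, 0.42, 0.55, 0.58, 0.50, 0.38, 0.25])  # fraction of enzyme inhibited

    auic = np.trapz(inhibition, t)  # area in (fraction inhibited) * h
    print(f"AUIC = {auic:.3f} fraction*h")
    ```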

  2. Quantitative Estimation for the Effectiveness of Automation

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun

    2012-01-01

    In advanced main control rooms (MCRs), various automation systems are applied to enhance human performance and reduce human errors in industrial fields. Automation is expected to provide greater efficiency, lower workload, and fewer human errors. However, these promises are not always fulfilled. As new types of events arise from the application of imperfect and complex automation, the effects of automation systems on the performance of human operators must be analyzed. We therefore suggest a quantitative method for estimating the effectiveness of automation systems according to the Level of Automation (LOA) classification, which has been developed over the past 30 years. The effectiveness of automation is estimated by calculating the failure probability of human performance related to the cognitive activities

  3. Unrecorded Alcohol Consumption: Quantitative Methods of Estimation

    OpenAIRE

    Razvodovsky, Y. E.

    2010-01-01

    Keywords: unrecorded alcohol; methods of estimation. In this paper we focus on methods for estimating the level of unrecorded alcohol consumption. Present methods allow only an approximate estimate of unrecorded alcohol consumption. Taking into consideration the extreme importance of such data, further investigation is necessary to improve the reliability of methods for estimating unrecorded alcohol consumption.

  4. Smile line assessment comparing quantitative measurement and visual estimation

    NARCIS (Netherlands)

    Geld, P. Van der; Oosterveld, P.; Schols, J.; Kuijpers-Jagtman, A.M.

    2011-01-01

    INTRODUCTION: Esthetic analysis of dynamic functions such as spontaneous smiling is feasible by using digital videography and computer measurement for lip line height and tooth display. Because quantitative measurements are time-consuming, digital videography and semiquantitative (visual) estimation

  5. Quantitative Compactness Estimates for Hamilton-Jacobi Equations

    Science.gov (United States)

    Ancona, Fabio; Cannarsa, Piermarco; Nguyen, Khai T.

    2016-02-01

    We study quantitative compactness estimates in $W^{1,1}_{\mathrm{loc}}$ for the map $S_t$, $t > 0$, that associates with given initial data $u_0 \in \mathrm{Lip}(\mathbb{R}^N)$ the corresponding solution $S_t u_0$ of a Hamilton-Jacobi equation $u_t + H(\nabla_x u) = 0$, $t \geq 0$, $x \in \mathbb{R}^N$, with a uniformly convex Hamiltonian $H = H(p)$. We provide upper and lower estimates of order $1/\varepsilon^N$ on the Kolmogorov $\varepsilon$-entropy in $W^{1,1}$ of the image through the map $S_t$ of sets of bounded, compactly supported initial data. Estimates of this type are inspired by a question posed by Lax (Course on Hyperbolic Systems of Conservation Laws, XXVII Scuola Estiva di Fisica Matematica, Ravello, 2002) within the context of conservation laws, and could provide a measure of the order of "resolution" of a numerical method implemented for this equation.

  6. Quantitative genetic tools for insecticide resistance risk assessment: estimating the heritability of resistance

    Science.gov (United States)

    Michael J. Firko; Jane Leslie Hayes

    1990-01-01

    Quantitative genetic studies of resistance can provide estimates of genetic parameters not available with other types of genetic analyses. Three methods are discussed for estimating the amount of additive genetic variation in resistance to individual insecticides and the subsequent estimation of heritability (h²) of resistance. Sibling analysis and...
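
    One standard route to h² in the quantitative-genetics literature is parent-offspring regression; the sketch below (illustrative only, with hypothetical resistance scores, and not necessarily one of the methods discussed in this record) estimates h² as the slope of offspring phenotype on midparent value:

    ```python
    # Narrow-sense heritability from midparent-offspring regression:
    # the slope of offspring value on midparent value estimates h^2.
    import numpy as np

    midparent = np.array([1.2, 1.5, 1.8, 2.0, 2.3, 2.6, 2.9, 3.1])  # e.g., log LC50
    offspring = np.array([1.3, 1.4, 1.9, 1.9, 2.2, 2.4, 2.8, 2.9])

    slope, intercept = np.polyfit(midparent, offspring, 1)
    print(f"Estimated heritability h^2 = {slope:.2f}")
    ```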

  7. WetLab-2: Providing Quantitative PCR Capabilities on ISS

    Science.gov (United States)

    Parra, Macarena; Jung, Jimmy Kar Chuen; Almeida, Eduardo; Boone, Travis David; Schonfeld, Julie; Tran, Luan Hoang

    2015-01-01

    The objective of NASA Ames Research Center's WetLab-2 Project is to place on the ISS a system capable of conducting gene expression analysis via quantitative real-time PCR (qRT-PCR) of biological specimens sampled or cultured on orbit. The WetLab-2 system is capable of processing sample types ranging from microbial cultures to animal tissues dissected on-orbit. The project has developed an RNA preparation module that can lyse cells and extract RNA of sufficient quality and quantity for use as templates in qRT-PCR reactions. Our protocol has the advantage that it uses no toxic chemicals, alcohols, or other organics. The resulting RNA is transferred into a pipette and then dispensed into reaction tubes that contain all lyophilized reagents needed to perform qRT-PCR reactions. These reaction tubes are mounted on rotors to centrifuge the liquid to the reaction window of the tube using a cordless drill. System operations require simple and limited crew actions including syringe pushes, valve turns and pipette dispenses. The resulting process takes less than 30 min to have tubes ready for loading into the qRT-PCR unit. The project has selected a Commercial-Off-The-Shelf (COTS) qRT-PCR unit, the Cepheid SmartCycler, that will fly in its COTS configuration. The SmartCycler has a number of advantages including modular design (16 independent PCR modules), low power consumption, rapid thermal ramp times and four-color detection. The ability to detect up to four fluorescent channels will enable multiplex assays that can be used to normalize for RNA concentration and integrity, and to study multiple genes of interest in each module. The WetLab-2 system will have the capability to downlink data from the ISS to the ground after a completed run and to uplink new programs. The ability to conduct qRT-PCR on-orbit eliminates the confounding effects on gene expression of reentry stresses and shock acting on live cells and organisms or the concern of RNA degradation of fixed samples. The
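
    For context, relative gene expression from qRT-PCR runs such as these is commonly computed with the delta-delta-Ct method, normalizing a gene of interest to a reference gene. A hedged sketch (not WetLab-2 software; Ct values hypothetical):

    ```python
    # Delta-delta-Ct: fold change of a target gene in a flight sample vs. a
    # ground control, normalized to a reference gene in each sample.
    target_ct_flight, ref_ct_flight = 24.1, 18.3   # hypothetical Ct values
    target_ct_ground, ref_ct_ground = 26.0, 18.5

    ddct = (target_ct_flight - ref_ct_flight) - (target_ct_ground - ref_ct_ground)
    fold_change = 2 ** (-ddct)   # assumes ~100% amplification efficiency
    print(f"Fold change (flight vs. ground) = {fold_change:.2f}")
    ```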

  8. Smile line assessment comparing quantitative measurement and visual estimation.

    Science.gov (United States)

    Van der Geld, Pieter; Oosterveld, Paul; Schols, Jan; Kuijpers-Jagtman, Anne Marie

    2011-02-01

    Esthetic analysis of dynamic functions such as spontaneous smiling is feasible by using digital videography and computer measurement for lip line height and tooth display. Because quantitative measurements are time-consuming, digital videography and semiquantitative (visual) estimation according to a standard categorization are more practical for regular diagnostics. Our objective in this study was to compare 2 semiquantitative methods with quantitative measurements for reliability and agreement. The faces of 122 male participants were individually registered by using digital videography. Spontaneous and posed smiles were captured. On the records, maxillary lip line heights and tooth display were digitally measured on each tooth and also visually estimated according to 3-grade and 4-grade scales. Two raters were involved. An error analysis was performed. Reliability was established with kappa statistics. Interexaminer and intraexaminer reliability values were high, with median kappa values from 0.79 to 0.88. Agreement of the 3-grade scale estimation with quantitative measurement showed higher median kappa values (0.76) than the 4-grade scale estimation (0.66). Differentiating high and gummy smile lines (4-grade scale) resulted in greater inaccuracies. The estimation of a high, average, or low smile line for each tooth showed high reliability close to quantitative measurements. Smile line analysis can be performed reliably with a 3-grade scale (visual) semiquantitative estimation. For a more comprehensive diagnosis, additional measuring is proposed, especially in patients with disproportional gingival display. Copyright © 2011 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
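
    The kappa statistics used above can be reproduced with standard tools; the following is a minimal sketch (hypothetical ratings, not the study's data):

    ```python
    # Inter-rater agreement on a 3-grade smile-line scale via Cohen's kappa.
    from sklearn.metrics import cohen_kappa_score

    rater1 = ["low", "average", "high", "average", "high", "low", "average"]
    rater2 = ["low", "average", "high", "high",    "high", "low", "low"]

    print(f"Cohen's kappa = {cohen_kappa_score(rater1, rater2):.2f}")
    ```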

  9. Quantitative estimation of diacetylmorphine by preparative TLC and UV spectroscopy

    International Nuclear Information System (INIS)

    Khan, L.; Siddiqui, M.T.; Ahmad, N.; Shafi, N.

    2001-01-01

    A simple and efficient method for the quantitative estimation of diacetylmorphine in narcotic products is described. Comparative TLC of narcotic specimens with standards showed the presence of morphine, monoacetylmorphine, diacetylmorphine, papaverine and noscapine. Resolution of the mixtures was achieved by preparative TLC. Bands corresponding to diacetylmorphine were scraped and eluted, the UV absorption of the extracts was measured, and the contents were quantified. (author)
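
    The final quantitation step from UV absorbance typically rests on the Beer-Lambert law, A = ε·l·c. A small illustrative calculation (hypothetical values, not from the paper):

    ```python
    # Beer-Lambert quantitation: concentration from measured UV absorbance.
    absorbance = 0.62            # measured at the analyte's lambda_max
    molar_absorptivity = 1500.0  # L mol^-1 cm^-1 (hypothetical value)
    path_length_cm = 1.0

    conc = absorbance / (molar_absorptivity * path_length_cm)   # mol/L
    print(f"Concentration = {conc * 1e3:.3f} mmol/L")
    ```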

  10. Quantitative estimation of pollution in groundwater and surface ...

    African Journals Online (AJOL)

    Quantitative estimation of pollution in groundwater and surface water in Benin City and environs. ... Ethiopian Journal of Environmental Studies and Management ... Physico-chemical parameters were compared with regulatory standards from Federal Ministry of Environment for drinking water and they all fell within ...

  11. Quantitative Estimates of Bio-Remodeling on Coastal Rock Surfaces

    Directory of Open Access Journals (Sweden)

    Marta Pappalardo

    2016-05-01

    Full Text Available Remodeling of rocky coasts and erosion rates have been widely studied in past years, but not all the processes acting on rock surfaces have been quantitatively evaluated yet. The first goal of this paper is to review the different methodologies employed to quantify the effect of biotic agents on rocks exposed to coastal morphologic agents, comparing their efficiency. Secondly, we focus on geological methods to assess and quantify bio-remodeling, presenting some case studies in an area of the Mediterranean Sea in which different geological methods, inspired by the reviewed literature, have been tested in order to provide a quantitative assessment of the effects some biological covers exert over rocky platforms in tidal and supra-tidal environments. In particular, different experimental designs based on Schmidt hammer test results have been applied in order to estimate rock hardness related to different orders of littoral platforms and the bio-erosive/bio-protective role of Chthamalus ssp. and Verrucaria adriatica. All data collected have been analyzed using statistical tests to evaluate the significance of the measures and methodologies. The effectiveness of this approach is analyzed, and its limits are highlighted. In order to overcome the latter, a strategy combining geological and experimental-computational approaches is proposed, potentially capable of revealing novel clues on bio-erosion dynamics. An experimental-computational proposal to assess the indirect effects of biofilm coverage of rocky shores is presented in this paper, focusing on the shear forces exerted during hydration-dehydration cycles. The results of computational modeling can be compared to experimental evidence, from nanoscopic to macroscopic scales.

  12. Stochastic evaluation of tsunami inundation and quantitative estimating tsunami risk

    International Nuclear Information System (INIS)

    Fukutani, Yo; Anawat, Suppasri; Abe, Yoshi; Imamura, Fumihiko

    2014-01-01

    We performed a stochastic evaluation of tsunami inundation by using the results of a stochastic tsunami hazard assessment at the Soma port in the Tohoku coastal area. Eleven fault zones along the Japan trench were selected as earthquake faults generating tsunamis. The results show that the estimated inundation area for a return period of about 1200 years agrees well with that observed in the 2011 Tohoku earthquake. In addition, we quantitatively evaluated tsunami risk for four types of buildings (reinforced concrete, steel, brick and wood) at the Soma port by combining the results of the inundation assessment and a tsunami fragility assessment. The resulting quantitative risk estimates properly reflect the vulnerability of the buildings: the wood building has high risk and the reinforced concrete building has low risk. (author)
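
    The combination step described above is essentially a sum over hazard levels of hazard probability times fragility. A minimal sketch under stated assumptions (hypothetical hazard and fragility curves, not the authors' data):

    ```python
    # Annual damage probability = sum over inundation-depth bins of
    # P(depth occurs in a year) * P(damage | depth), per building type.
    import numpy as np

    depths_m = np.array([1.0, 2.0, 4.0, 8.0])          # inundation depth bins
    annual_prob = np.array([1e-2, 3e-3, 8e-4, 1e-4])   # hypothetical hazard curve

    fragility = {  # hypothetical P(damage | depth) for two building types
        "wood":                np.array([0.30, 0.70, 0.95, 1.00]),
        "reinforced concrete": np.array([0.01, 0.05, 0.20, 0.60]),
    }

    for building, p_damage in fragility.items():
        risk = float(np.sum(annual_prob * p_damage))
        print(f"{building}: annual damage probability ~ {risk:.2e}")
    ```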

  13. A uniform quantitative stiff stability estimate for BDF schemes

    Directory of Open Access Journals (Sweden)

    Winfried Auzinger

    2006-01-01

    Full Text Available The concepts of stability regions, $A$- and $A(\alpha)$-stability - albeit based on scalar models - turned out to be essential for the identification of implicit methods suitable for the integration of stiff ODEs. However, for multistep methods, knowledge of the stability region provides no information on the quantitative stability behavior of the scheme. In this paper we fill this gap for the important class of Backward Differentiation Formulas (BDF). Quantitative stability bounds are derived which are uniformly valid in the stability region of the method. Our analysis is based on a study of the separation of the characteristic roots and a special similarity decomposition of the associated companion matrix.
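
    To make the setting concrete, here is an illustrative run (not the paper's analysis) of the two-step BDF scheme on the stiff scalar test problem y' = λy, the kind of model on which such stability bounds are formulated:

    ```python
    # BDF2 on y' = lam * y with h*lam = -100 (strongly stiff).
    # BDF2: y_{n+2} - (4/3) y_{n+1} + (1/3) y_n = (2/3) h f(y_{n+2}).
    lam, h, steps = -1e4, 0.01, 20
    y = [1.0, 1.0 / (1.0 - h * lam)]   # start-up step via backward Euler

    for _ in range(steps):
        y_next = ((4.0 / 3.0) * y[-1] - (1.0 / 3.0) * y[-2]) / (1.0 - (2.0 / 3.0) * h * lam)
        y.append(y_next)

    print(y[-1])   # decays toward 0 without oscillation, as A-stability suggests
    ```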

  14. Quantitative Estimation of Transmitted and Reflected Lamb Waves at Discontinuity

    International Nuclear Information System (INIS)

    Lim, Hyung Jin; Sohn, Hoon

    2010-01-01

    For the application of Lamb waves to structural health monitoring (SHM), understanding their physical characteristics and the interaction between Lamb waves and defects in the host structure is an important issue. In this study, reflected, transmitted and mode-converted Lamb waves at a discontinuity of a plate structure were simulated, and the amplitude ratios were calculated theoretically using the modal decomposition method. The predicted results were verified by comparison with finite element method (FEM) and experimental results simulating attached PZTs. The results show that the theoretical prediction agrees closely with the FEM and experimental verification. Moreover, a quantitative estimation method is suggested using the amplitude ratio of Lamb waves at the discontinuity

  15. Methodologies for Quantitative Systems Pharmacology (QSP) Models: Design and Estimation.

    Science.gov (United States)

    Ribba, B; Grimm, H P; Agoram, B; Davies, M R; Gadkar, K; Niederer, S; van Riel, N; Timmis, J; van der Graaf, P H

    2017-08-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early Development to focus discussions on two critical methodological aspects of QSP model development: optimal structural granularity and parameter estimation. We here report in a perspective article a summary of presentations and discussions. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  16. Quantitative estimation of Nipah virus replication kinetics in vitro

    Directory of Open Access Journals (Sweden)

    Hassan Sharifah

    2006-06-01

    Full Text Available Abstract Background Nipah virus is a zoonotic virus isolated from an outbreak in Malaysia in 1998. The virus causes infections in humans, pigs, and several other domestic animals. It has also been isolated from fruit bats. The pathogenesis of Nipah virus infection is still not well described. In the present study, Nipah virus replication kinetics were estimated from infection of African green monkey kidney cells (Vero) using the one-step SYBR® Green I-based quantitative real-time reverse transcriptase-polymerase chain reaction (qRT-PCR) assay. Results The qRT-PCR had a dynamic range of at least seven orders of magnitude and can detect Nipah virus from as low as one PFU/μL. Following initiation of infection, it was estimated that Nipah virus RNA doubles every ~40 minutes and attained a peak intracellular virus RNA level of ~8.4 log PFU/μL at about 32 hours post-infection (PI). Significant extracellular Nipah virus RNA release occurred only after 8 hours PI and the level peaked at ~7.9 log PFU/μL at 64 hours PI. The estimated rate of Nipah virus RNA release into the cell culture medium was ~0.07 log PFU/μL per hour, and less than 10% of the released Nipah virus RNA was infectious. Conclusion The SYBR® Green I-based qRT-PCR assay enabled quantitative assessment of Nipah virus RNA synthesis in Vero cells. A low rate of Nipah virus extracellular RNA release and low infectious virus yield together with extensive syncytial formation during the infection support a cell-to-cell spread mechanism for Nipah virus infection.
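
    A doubling time like the ~40 minutes reported above can be estimated from a log-linear fit to the exponential phase; a hedged sketch with hypothetical numbers:

    ```python
    # Doubling time from qRT-PCR time-course data: fit log2(RNA) vs. time;
    # doubling time = 1 / slope (slope in doublings per hour).
    import numpy as np

    hours = np.array([4.0, 8.0, 12.0, 16.0, 20.0, 24.0])
    log2_rna = np.array([2.1, 8.0, 14.2, 20.1, 26.0, 32.2])  # hypothetical log2 units

    slope, _ = np.polyfit(hours, log2_rna, 1)
    print(f"Doubling time ~ {60.0 / slope:.0f} minutes")
    ```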

  17. Quantitative hyperbolicity estimates in one-dimensional dynamics

    International Nuclear Information System (INIS)

    Day, S; Kokubu, H; Pilarczyk, P; Luzzatto, S; Mischaikow, K; Oka, H

    2008-01-01

    We develop a rigorous computational method for estimating the Lyapunov exponents in uniformly expanding regions of the phase space for one-dimensional maps. Our method uses rigorous numerics and graph algorithms to provide results that are mathematically meaningful and can be achieved in an efficient way
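
    For orientation, the quantity being bounded is the Lyapunov exponent λ = lim (1/n) Σ log|f'(xᵢ)|. The sketch below is a plain floating-point estimate for the logistic map, a non-rigorous stand-in for the rigorous interval computations the paper performs:

    ```python
    # Naive Lyapunov-exponent estimate for the logistic map f(x) = r x (1 - x).
    import math

    r, x, n = 4.0, 0.3, 100_000
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))  # log |f'(x)|
        x = r * x * (1.0 - x)

    print(f"Estimated Lyapunov exponent: {total / n:.4f}")  # ~ ln 2 = 0.693 for r = 4
    ```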

  18. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    International Nuclear Information System (INIS)

    Bindschadler, Michael; Alessio, Adam M; Modgil, Dimple; La Riviere, Patrick J; Branch, Kelley R

    2014-01-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)⁻¹; cardiac output = 3, 5, 8 L min⁻¹). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (a two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by 47.5% on average, and the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and range of techniques evaluated. This
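
    As a simplified illustration of model-based MBF estimation (a one-compartment stand-in, not the study's exact two-compartment or distributed models; all data synthetic):

    ```python
    # Fit a one-compartment kinetic model, C_t(t) = flow * (AIF conv exp(-k t)),
    # to a noisy synthetic time-attenuation curve and recover the flow parameter.
    import numpy as np
    from scipy.optimize import curve_fit

    dt = 1.0                                      # s
    t = np.arange(0.0, 60.0, dt)
    aif = 300.0 * (t / 8.0) * np.exp(-t / 8.0)    # synthetic arterial input (HU)

    def tissue_model(t, flow, k):
        irf = np.exp(-k * t)                      # impulse residue function
        return flow * np.convolve(aif, irf)[: len(t)] * dt

    rng = np.random.default_rng(0)
    true_flow, true_k = 0.02, 0.10                # hypothetical per-second units
    c_tissue = tissue_model(t, true_flow, true_k) + rng.normal(0.0, 2.0, t.size)

    popt, _ = curve_fit(tissue_model, t, c_tissue, p0=[0.01, 0.05])
    print(f"Estimated flow parameter: {popt[0]:.4f} (true value {true_flow})")
    ```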

  19. Radar-Derived Quantitative Precipitation Estimation Based on Precipitation Classification

    Directory of Open Access Journals (Sweden)

    Lili Yang

    2016-01-01

    Full Text Available A method for improving radar-derived quantitative precipitation estimation is proposed. Tropical vertical profiles of reflectivity (VPRs) are first determined from multiple VPRs. Upon identifying a tropical VPR, the event can be further classified as either tropical-stratiform or tropical-convective rainfall by a fuzzy logic (FL) algorithm. Based on the precipitation-type fields, the reflectivity values are converted into rainfall rate using a Z-R relationship. In order to evaluate the performance of this rainfall classification scheme, three experiments were conducted using three months of data and two study cases. In Experiment I, the Weather Surveillance Radar-1988 Doppler (WSR-88D) default Z-R relationship was applied. In Experiment II, the precipitation regime was separated into convective and stratiform rainfall using the FL algorithm, and the corresponding Z-R relationships were used. In Experiment III, the precipitation regime was separated into convective, stratiform, and tropical rainfall, and the corresponding Z-R relationships were applied. The results show that the rainfall rates obtained from all three experiments match closely with the gauge observations; Experiment II partly addressed the underestimation seen in Experiment I, and Experiment III significantly reduced this underestimation and generated the most accurate radar estimates of rain rate among the three experiments.
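
    The Z-R conversion step works as follows; the sketch uses standard published coefficient pairs, which are not necessarily those calibrated in the paper:

    ```python
    # Convert reflectivity (dBZ) to rain rate (mm/h) with Z = a * R^b,
    # choosing (a, b) according to the diagnosed precipitation class.
    def rain_rate(dbz, a, b):
        z = 10.0 ** (dbz / 10.0)        # linear reflectivity, mm^6 m^-3
        return (z / a) ** (1.0 / b)

    relations = {
        "convective (WSR-88D default)": (300.0, 1.4),
        "stratiform (Marshall-Palmer)": (200.0, 1.6),
        "tropical (Rosenfeld)":         (250.0, 1.2),
    }

    for name, (a, b) in relations.items():
        print(f"{name}: 40 dBZ -> {rain_rate(40.0, a, b):.1f} mm/h")
    ```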

  20. The APEX Quantitative Proteomics Tool: Generating protein quantitation estimates from LC-MS/MS proteomics results

    Directory of Open Access Journals (Sweden)

    Saeed Alexander I

    2008-12-01

    Full Text Available Abstract Background Mass spectrometry (MS) based label-free protein quantitation has mainly focused on analysis of ion peak heights and peptide spectral counts. Most analyses of tandem mass spectrometry (MS/MS) data begin with an enzymatic digestion of a complex protein mixture to generate smaller peptides that can be separated and identified by an MS/MS instrument. Peptide spectral counting techniques attempt to quantify protein abundance by counting the number of detected tryptic peptides and their corresponding MS spectra. However, spectral counting is confounded by the fact that peptide physicochemical properties severely affect MS detection, resulting in each peptide having a different detection probability. Lu et al. (2007) described a modified spectral counting technique, Absolute Protein Expression (APEX), which improves on basic spectral counting methods by including a correction factor for each protein (called the Oi value) that accounts for variable peptide detection by MS techniques. The technique uses machine learning classification to derive peptide detection probabilities that are used to predict the number of tryptic peptides expected to be detected for one molecule of a particular protein (Oi). This predicted spectral count is compared to the protein's observed MS total spectral count during APEX computation of protein abundances. Results The APEX Quantitative Proteomics Tool, introduced here, is a free open source Java application that supports the APEX protein quantitation technique. The APEX tool uses data from standard tandem mass spectrometry proteomics experiments and provides computational support for APEX protein abundance quantitation through a set of graphical user interfaces that partition the parameter controls for the various processing tasks. The tool also provides a Z-score analysis for identification of significant differential protein expression, a utility to assess APEX classifier performance via cross validation, and a
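
    The core APEX computation reduces to normalizing observed spectral counts by each protein's Oi; a minimal sketch (hypothetical counts and Oi values, not the tool's code):

    ```python
    # APEX-style relative abundance: weight each protein's observed spectral
    # count by 1/Oi (expected detectable peptides per molecule), then normalize.
    observed_counts = {"protA": 120, "protB": 40, "protC": 15}
    oi = {"protA": 6.0, "protB": 1.0, "protC": 3.0}

    weighted = {p: observed_counts[p] / oi[p] for p in observed_counts}
    total = sum(weighted.values())
    apex = {p: w / total for p, w in weighted.items()}
    print(apex)   # protB dominates once its low detectability (Oi) is accounted for
    ```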

  1. Quantitative Hydraulic Models Of Early Land Plants Provide Insight Into Middle Paleozoic Terrestrial Paleoenvironmental Conditions

    Science.gov (United States)

    Wilson, J. P.; Fischer, W. W.

    2010-12-01

    Fossil plants provide useful proxies of Earth's climate because plants are closely connected, through physiology and morphology, to the environments in which they lived. Recent advances in quantitative hydraulic models of plant water transport provide new insight into the history of climate by allowing fossils to speak directly to environmental conditions based on preserved internal anatomy. We report results of a quantitative hydraulic model applied to one of the earliest terrestrial plants preserved in three dimensions, the ~396 million-year-old vascular plant Asteroxylon mackiei. This model combines equations describing the rate of fluid flow through plant tissues with detailed observations of plant anatomy; this allows quantitative estimates of two critical aspects of plant function. First and foremost, results from these models quantify the supply of water to evaporative surfaces; second, results describe the ability of plant vascular systems to resist tensile damage from extreme environmental events, such as drought or frost. This approach permits quantitative comparisons of functional aspects of Asteroxylon with other extinct and extant plants, informs the quality of plant-based environmental proxies, and provides concrete data that can be input into climate models. Results indicate that despite their small size, water transport cells in Asteroxylon could supply a large volume of water to the plant's leaves, even greater than cells from some later-evolved seed plants. The smallest Asteroxylon tracheids have conductivities exceeding 0.015 m² MPa⁻¹ s⁻¹, whereas Paleozoic conifer tracheids do not reach this threshold until they are three times wider. However, this increase in conductivity came at the cost of little to no adaptation for transport safety, placing the plant's vegetative organs in jeopardy during drought events. Analysis of the thickness-to-span ratio of Asteroxylon's tracheids suggests that environmental conditions of reduced relative
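
    The physical core of such hydraulic models is capillary flow. As a rough illustration (an assumption for this sketch, not the authors' full model), single-conduit conductivity can be approximated with the Hagen-Poiseuille relation, which makes the strong fourth-power dependence on conduit radius clear:

    ```python
    # Hagen-Poiseuille conductivity of a single water-conducting conduit.
    import math

    radius_m = 10e-6       # hypothetical tracheid lumen radius (10 micrometres)
    viscosity = 1.0e-3     # Pa*s, water near 20 C

    k_tube = math.pi * radius_m ** 4 / (8.0 * viscosity)   # m^4 / (Pa*s)
    print(f"Single-conduit conductivity ~ {k_tube:.3e} m^4/(Pa*s)")
    # Conductivity scales with radius^4: a conduit 3x wider carries ~81x the flow.
    ```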

  2. A combined usage of stochastic and quantitative risk assessment methods in the worksites: Application on an electric power provider

    International Nuclear Information System (INIS)

    Marhavilas, P.K.; Koulouriotis, D.E.

    2012-01-01

    A single method cannot by itself build a realistic forecasting model or a risk assessment process for worksites; future perspectives should focus on combined forecasting/estimation approaches. The main purpose of this paper is to gain insight into a methodological framework for risk prediction and estimation, using the combination of three different methods: the proportional quantitative-risk-assessment technique (PRAT), the time-series stochastic process (TSP), and the method of estimating societal risk (SRE) by F-N curves. In order to demonstrate the usefulness of the combined usage of stochastic and quantitative risk assessment methods, an application to an electric power provider industry is presented, using empirical data.

  3. Quantitative Risk reduction estimation Tool For Control Systems, Suggested Approach and Research Needs

    Energy Technology Data Exchange (ETDEWEB)

    Miles McQueen; Wayne Boyer; Mark Flynn; Sam Alessi

    2006-03-01

    For the past year we have applied a variety of risk assessment technologies to evaluate the risk to critical infrastructure from cyber attacks on control systems. More recently, we identified the need for a stand-alone control system risk reduction estimation tool to provide owners and operators of control systems with a more usable, reliable, and credible method for managing the risks from cyber attack. Risk is defined as the probability of a successful attack times the value of the resulting loss, typically measured in lives and dollars. Qualitative and ad hoc techniques for measuring risk do not provide sufficient support for cost-benefit analyses associated with cyber security mitigation actions. To address the need for better quantitative risk reduction models, we surveyed previous quantitative risk assessment research; evaluated currently available tools; developed new quantitative techniques [17] [18]; implemented a prototype analysis tool to demonstrate how such a tool might be used; used the prototype to test a variety of underlying risk calculational engines (e.g. attack tree, attack graph); and identified technical and research needs. We concluded that significant gaps still exist and difficult research problems remain for quantitatively assessing the risk to control system components and networks, but that a usable quantitative risk reduction estimation tool is not beyond reach.

  4. Novel whole brain segmentation and volume estimation using quantitative MRI

    Energy Technology Data Exchange (ETDEWEB)

    West, J. [Linkoeping University, Radiation Physics, Department of Medical and Health Sciences, Faculty of Health Sciences, Linkoeping (Sweden); Linkoeping University, Center for Medical Imaging Science and Visualization (CMIV), Linkoeping (Sweden); SyntheticMR AB, Linkoeping (Sweden); Warntjes, J.B.M. [Linkoeping University, Center for Medical Imaging Science and Visualization (CMIV), Linkoeping (Sweden); SyntheticMR AB, Linkoeping (Sweden); Linkoeping University and Department of Clinical Physiology UHL, County Council of Oestergoetland, Clinical Physiology, Department of Medical and Health Sciences, Faculty of Health Sciences, Linkoeping (Sweden); Lundberg, P. [Linkoeping University, Center for Medical Imaging Science and Visualization (CMIV), Linkoeping (Sweden); Linkoeping University and Department of Radiation Physics UHL, County Council of Oestergoetland, Radiation Physics, Department of Medical and Health Sciences, Faculty of Health Sciences, Linkoeping (Sweden); Linkoeping University and Department of Radiology UHL, County Council of Oestergoetland, Radiology, Department of Medical and Health Sciences, Faculty of Health Sciences, Linkoeping (Sweden)

    2012-05-15

    Brain segmentation and volume estimation of grey matter (GM), white matter (WM) and cerebro-spinal fluid (CSF) are important for many neurological applications. Volumetric changes are observed in multiple sclerosis (MS), Alzheimer's disease and dementia, and in normal aging. A novel method is presented to segment brain tissue based on quantitative magnetic resonance imaging (qMRI) of the longitudinal relaxation rate R1, the transverse relaxation rate R2 and the proton density, PD. Previously reported qMRI values for WM, GM and CSF were used to define tissues and a Bloch simulation performed to investigate R1, R2 and PD for tissue mixtures in the presence of noise. Based on the simulations a lookup grid was constructed to relate tissue partial volume to the R1-R2-PD space. The method was validated in 10 healthy subjects. MRI data were acquired using six resolutions and three geometries. Repeatability for different resolutions was 3.2% for WM, 3.2% for GM, 1.0% for CSF and 2.2% for total brain volume. Repeatability for different geometries was 8.5% for WM, 9.4% for GM, 2.4% for CSF and 2.4% for total brain volume. We propose a new robust qMRI-based approach which we demonstrate in a patient with MS. (orig.)

  5. Novel whole brain segmentation and volume estimation using quantitative MRI

    International Nuclear Information System (INIS)

    West, J.; Warntjes, J.B.M.; Lundberg, P.

    2012-01-01

    Brain segmentation and volume estimation of grey matter (GM), white matter (WM) and cerebro-spinal fluid (CSF) are important for many neurological applications. Volumetric changes are observed in multiple sclerosis (MS), Alzheimer's disease and dementia, and in normal aging. A novel method is presented to segment brain tissue based on quantitative magnetic resonance imaging (qMRI) of the longitudinal relaxation rate R1, the transverse relaxation rate R2 and the proton density, PD. Previously reported qMRI values for WM, GM and CSF were used to define tissues and a Bloch simulation performed to investigate R1, R2 and PD for tissue mixtures in the presence of noise. Based on the simulations a lookup grid was constructed to relate tissue partial volume to the R1-R2-PD space. The method was validated in 10 healthy subjects. MRI data were acquired using six resolutions and three geometries. Repeatability for different resolutions was 3.2% for WM, 3.2% for GM, 1.0% for CSF and 2.2% for total brain volume. Repeatability for different geometries was 8.5% for WM, 9.4% for GM, 2.4% for CSF and 2.4% for total brain volume. We propose a new robust qMRI-based approach which we demonstrate in a patient with MS. (orig.)
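
    The lookup-grid idea common to both versions of this record can be illustrated schematically (linear mixing and the hypothetical pure-tissue values are assumptions of this sketch; the published method derives the grid from Bloch simulations):

    ```python
    # Map a voxel's measured (R1, R2, PD) to the nearest precomputed
    # WM/GM/CSF mixture on a lookup grid.
    import numpy as np
    from scipy.spatial import cKDTree

    pure = {  # hypothetical pure-tissue (R1 [1/s], R2 [1/s], PD [%]) values
        "WM":  np.array([1.70, 13.0, 65.0]),
        "GM":  np.array([1.00, 11.0, 80.0]),
        "CSF": np.array([0.24, 1.5, 100.0]),
    }

    fractions, points = [], []
    for w in np.linspace(0.0, 1.0, 21):
        for g in np.linspace(0.0, 1.0 - w, 21):
            c = 1.0 - w - g
            fractions.append((round(w, 2), round(g, 2), round(c, 2)))
            points.append(w * pure["WM"] + g * pure["GM"] + c * pure["CSF"])

    tree = cKDTree(points)
    _, idx = tree.query([1.4, 12.0, 70.0])   # a measured voxel
    print("Estimated (WM, GM, CSF) fractions:", fractions[idx])
    ```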

  6. Quantitative measures of walking and strength provide insight into brain corticospinal tract pathology in multiple sclerosis

    Directory of Open Access Journals (Sweden)

    Nora E Fritz

    2017-01-01

    Quantitative measures of strength and walking are associated with brain corticospinal tract pathology. The addition of these quantitative measures to basic clinical information explains more of the variance in corticospinal tract fractional anisotropy and magnetization transfer ratio than the basic clinical information alone. Outcome measurement for multiple sclerosis clinical trials has been notoriously challenging; the use of quantitative measures of strength and walking along with tract-specific imaging methods may improve our ability to monitor disease change over time, with intervention, and provide needed guidelines for developing more effective targeted rehabilitation strategies.

  7. Physiological frailty index (PFI): quantitative in-life estimate of individual biological age in mice.

    Science.gov (United States)

    Antoch, Marina P; Wrobel, Michelle; Kuropatwinski, Karen K; Gitlin, Ilya; Leonova, Katerina I; Toshkov, Ilia; Gleiberman, Anatoli S; Hutson, Alan D; Chernova, Olga B; Gudkov, Andrei V

    2017-03-19

    The development of healthspan-extending pharmaceuticals requires quantitative estimation of age-related progressive physiological decline. In humans, individual health status can be quantitatively assessed by means of a frailty index (FI), a parameter which reflects the scale of accumulation of age-related deficits. However, adaptation of this methodology to animal models is a challenging task since it includes multiple subjective parameters. Here we report the development of a quantitative non-invasive procedure to estimate the biological age of an individual animal by creating a physiological frailty index (PFI). We demonstrated the dynamics of PFI increase during chronological aging of male and female NIH Swiss mice. We also demonstrated accelerated growth of PFI in animals placed on a high-fat diet, reflecting aging acceleration by obesity, and provide a tool for its quantitative assessment. Additionally, we showed that PFI could reveal the anti-aging effect of the mTOR inhibitor rapatar (a bioavailable formulation of rapamycin) prior to registration of its effects on longevity. PFI revealed substantial sex-related differences in normal chronological aging and in the efficacy of detrimental (high-fat diet) or beneficial (rapatar) aging-modulatory factors. Together, these data introduce PFI as a reliable, non-invasive, quantitative tool suitable for testing potential anti-aging pharmaceuticals in pre-clinical studies.
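
    The underlying index is simple: the fraction of measured parameters that deviate from reference values. A hedged sketch (parameter names and the threshold are hypothetical):

    ```python
    # Frailty index = deficits present / parameters measured.
    reference = {"grip_strength": 1.00, "gait_speed": 1.00, "hematocrit": 1.00}
    animal    = {"grip_strength": 0.70, "gait_speed": 0.95, "hematocrit": 0.80}

    threshold = 0.85   # hypothetical: >15% below reference counts as a deficit
    deficits = sum(1 for k in reference if animal[k] / reference[k] < threshold)
    print(f"FI = {deficits / len(reference):.2f}")   # 2 of 3 deficient -> 0.67
    ```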

  8. Estimating the Cost of Providing Foundational Public Health Services.

    Science.gov (United States)

    Mamaril, Cezar Brian C; Mays, Glen P; Branham, Douglas Keith; Bekemeier, Betty; Marlowe, Justin; Timsina, Lava

    2017-12-28

    To estimate the cost of resources required to implement a set of Foundational Public Health Services (FPHS) as recommended by the Institute of Medicine. A stochastic simulation model was used to generate probability distributions of input and output costs across 11 FPHS domains. We used an implementation attainment scale to estimate the costs of fully implementing FPHS. We used data collected from a diverse cohort of 19 public health agencies located in three states that implemented the FPHS cost estimation methodology in their agencies during 2014-2015. The average agency incurred costs of $48 per capita implementing FPHS at its current attainment level, with a coefficient of variation (CV) of 16 percent. Achieving full FPHS implementation would require $82 per capita (CV = 19 percent), indicating an estimated resource gap of $34 per capita. Substantial variation in costs exists across communities in resources currently devoted to implementing FPHS, with even larger variation in resources needed for full attainment. Reducing geographic inequities in FPHS may require novel financing mechanisms and delivery models that allow health agencies to have robust roles within the health system and realize a minimum package of public health services for the nation. © Health Research and Educational Trust.
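
    A stochastic cost simulation of the kind described can be sketched as follows (all distributions hypothetical; this is not the study's model):

    ```python
    # Monte Carlo per-capita cost: draw per-domain costs, sum across the
    # 11 FPHS domains, and summarize with the mean and CV.
    import numpy as np

    rng = np.random.default_rng(42)
    n_draws, n_domains = 10_000, 11
    domain_costs = rng.lognormal(mean=np.log(4.0), sigma=0.3, size=(n_draws, n_domains))
    total = domain_costs.sum(axis=1)

    print(f"Per-capita cost: ${total.mean():.0f} (CV = {100 * total.std() / total.mean():.0f}%)")
    ```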

  9. Quantitative estimates of the volatility of ambient organic aerosol

    Directory of Open Access Journals (Sweden)

    C. D. Cappa

    2010-06-01

    Full Text Available Measurements of the sensitivity of organic aerosol (OA, and its components) mass to changes in temperature were recently reported by Huffman et al. (2009) using a tandem thermodenuder-aerosol mass spectrometer (TD-AMS) system in Mexico City and the Los Angeles area. Here, we use these measurements to derive quantitative estimates of aerosol volatility within the framework of absorptive partitioning theory using a kinetic model of aerosol evaporation in the TD. OA volatility distributions (or "basis-sets") are determined using several assumptions as to the enthalpy of vaporization (ΔHvap). We present two definitions of "non-volatile OA," one being a global and one a local definition. Based on these definitions, our analysis indicates that a substantial fraction of the organic aerosol is comprised of non-volatile components that will not evaporate under any atmospheric conditions; on the order of 50-80% when the most realistic ΔHvap assumptions are considered. The sensitivity of the total OA mass to dilution and ambient changes in temperature has been assessed for the various ΔHvap assumptions. The temperature sensitivity is relatively independent of the particular ΔHvap assumptions whereas dilution sensitivity is found to be greatest for the low (ΔHvap = 50 kJ/mol) and lowest for the high (ΔHvap = 150 kJ/mol) assumptions. This difference arises from the high ΔHvap assumptions yielding volatility distributions with a greater fraction of non-volatile material than the low ΔHvap assumptions. If the observations are fit using a 1 or 2-component model the sensitivity of the OA to dilution is unrealistically high. An empirical method introduced by Faulhaber et al. (2009) has also been used to independently estimate a volatility distribution for the ambient OA and is found to give results consistent with the

  10. Quantitative estimates of the volatility of ambient organic aerosol

    Science.gov (United States)

    Cappa, C. D.; Jimenez, J. L.

    2010-06-01

    Measurements of the sensitivity of organic aerosol (OA, and its components) mass to changes in temperature were recently reported by Huffman et al. (2009) using a tandem thermodenuder-aerosol mass spectrometer (TD-AMS) system in Mexico City and the Los Angeles area. Here, we use these measurements to derive quantitative estimates of aerosol volatility within the framework of absorptive partitioning theory using a kinetic model of aerosol evaporation in the TD. OA volatility distributions (or "basis-sets") are determined using several assumptions as to the enthalpy of vaporization (ΔHvap). We present two definitions of "non-volatile OA," one being a global and one a local definition. Based on these definitions, our analysis indicates that a substantial fraction of the organic aerosol is comprised of non-volatile components that will not evaporate under any atmospheric conditions; on the order of 50-80% when the most realistic ΔHvap assumptions are considered. The sensitivity of the total OA mass to dilution and ambient changes in temperature has been assessed for the various ΔHvap assumptions. The temperature sensitivity is relatively independent of the particular ΔHvap assumptions whereas dilution sensitivity is found to be greatest for the low (ΔHvap = 50 kJ/mol) and lowest for the high (ΔHvap = 150 kJ/mol) assumptions. This difference arises from the high ΔHvap assumptions yielding volatility distributions with a greater fraction of non-volatile material than the low ΔHvap assumptions. If the observations are fit using a 1 or 2-component model the sensitivity of the OA to dilution is unrealistically high. An empirical method introduced by Faulhaber et al. (2009) has also been used to independently estimate a volatility distribution for the ambient OA and is found to give results consistent with the high and variable ΔHvap assumptions. Our results also show that the amount of semivolatile gas-phase organics in equilibrium with the OA could range from ~20
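
    The partitioning framework behind both versions of this record assigns each volatility bin a particle-phase fraction of (1 + C*/C_OA)⁻¹, with C* shifted in temperature via Clausius-Clapeyron. A simplified sketch (illustrative bin values; C_OA held fixed here rather than solved self-consistently):

    ```python
    # Absorptive partitioning across volatility basis-set bins.
    import numpy as np

    R = 8.314                                  # J/(mol K)
    c_star_298 = np.array([0.01, 1.0, 100.0])  # saturation concentrations, ug/m^3
    mass = np.array([5.0, 3.0, 2.0])           # OA mass in each bin, ug/m^3
    dh_vap = 100e3                             # J/mol, one assumed enthalpy value

    def particle_fraction(T, c_oa):
        c_star_T = c_star_298 * (298.0 / T) * np.exp(dh_vap / R * (1.0 / 298.0 - 1.0 / T))
        return 1.0 / (1.0 + c_star_T / c_oa)

    c_oa = mass.sum()
    print("Particle-phase fractions at 298 K:", particle_fraction(298.0, c_oa).round(3))
    print("Particle-phase fractions at 310 K:", particle_fraction(310.0, c_oa).round(3))
    ```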

  11. A quantitative framework for estimating water resources in India

    Digital Repository Service at National Institute of Oceanography (India)

    Shankar, D.; Kotamraju, V.; Shetye, S.R

    of information on the variables associated with hydrology, and second, the absence of an easily accessible quantitative framework to put these variables in perspective. In this paper, we discuss a framework that has been assembled to address both these issues...

  12. Quantitative estimation of seafloor features from photographs and their application to nodule mining

    Digital Repository Service at National Institute of Oceanography (India)

    Sharma, R.

    Methods developed for quantitative estimation of seafloor features from seabed photographs and their application for estimation of nodule sizes, coverage, abundance, burial, sediment thickness, extent of rock exposure, density of benthic organisms...

  13. Methodologies for quantitative systems pharmacology (QSP) models : Design and Estimation

    NARCIS (Netherlands)

    Ribba, B.; Grimm, Hp; Agoram, B.; Davies, M.R.; Gadkar, K.; Niederer, S.; van Riel, N.; Timmis, J.; van der Graaf, Ph.

    2017-01-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early

  14. Methodologies for Quantitative Systems Pharmacology (QSP) Models: Design and Estimation

    NARCIS (Netherlands)

    Ribba, B.; Grimm, H. P.; Agoram, B.; Davies, M. R.; Gadkar, K.; Niederer, S.; van Riel, N.; Timmis, J.; van der Graaf, P. H.

    2017-01-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early

  15. Gender differences in pension wealth: estimates using provider data.

    Science.gov (United States)

    Johnson, R W; Sambamoorthi, U; Crystal, S

    1999-06-01

    Information from pension providers was examined to investigate gender differences in pension wealth at midlife. For full-time wage and salary workers approaching retirement age who had pension coverage, median pension wealth on the current job was 76% greater for men than women. Differences in wages, years of job tenure, and industry between men and women accounted for most of the gender gap in pension wealth on the current job. Less than one third of the wealth difference could not be explained by gender differences in education, demographics, or job characteristics. The less-advantaged employment situation of working women currently in midlife carries over into worse retirement income prospects. However, the gender gap in pensions is likely to narrow in the future as married women's employment experiences increasingly resemble those of men.

  16. Proficiency testing as a basis for estimating uncertainty of measurement: application to forensic alcohol and toxicology quantitations.

    Science.gov (United States)

    Wallace, Jack

    2010-05-01

    While forensic laboratories will soon be required to estimate uncertainties of measurement for those quantitations reported to the end users of the information, the procedures for estimating this have been little discussed in the forensic literature. This article illustrates how proficiency test results provide the basis for estimating uncertainties in three instances: (i) For breath alcohol analyzers the interlaboratory precision is taken as a direct measure of uncertainty. This approach applies when the number of proficiency tests is small. (ii) For blood alcohol, the uncertainty is calculated from the differences between the laboratory's proficiency testing results and the mean quantitations determined by the participants; this approach applies when the laboratory has participated in a large number of tests. (iii) For toxicology, either of these approaches is useful for estimating comparability between laboratories, but not for estimating absolute accuracy. It is seen that data from proficiency tests enable estimates of uncertainty that are empirical, simple, thorough, and applicable to a wide range of concentrations.
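
    Approach (ii) above can be made concrete in a few lines; the values below are hypothetical blood-alcohol proficiency results, not from the article:

    ```python
    # Standard uncertainty from a lab's differences against proficiency-test
    # consensus means, taken as the RMS difference.
    import numpy as np

    lab_results     = np.array([0.081, 0.152, 0.104, 0.249, 0.078])  # g/100 mL
    consensus_means = np.array([0.080, 0.150, 0.100, 0.250, 0.080])

    u = np.sqrt(np.mean((lab_results - consensus_means) ** 2))
    print(f"u = {u:.4f} g/100 mL; expanded U (k=2) = {2 * u:.4f} g/100 mL")
    ```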

  17. Providing Open-Access Know How for Directors of Quantitative and Mathematics Support Centers

    Directory of Open Access Journals (Sweden)

    Michael Schuckers

    2017-01-01

    Full Text Available The purpose of this editorial is to introduce the quantitative literacy community to the newly published A Handbook for Directors of Quantitative and Mathematics Centers. QMaSCs (pronounced “Q-masks”) can be broadly defined as centers that have supporting students in quantitative fields of study as part of their mission. Some focus only on calculus or mathematics; others concentrate on numeracy or quantitative literacy, and some do all of that. A QMaSC may be embedded in a mathematics department, or part of a learning commons, or a stand-alone center. There are hundreds of these centers in the U.S. The new handbook, which is the outgrowth of a 2013 NSF-sponsored national workshop attended by 23 QMaSC directors from all quarters of the U.S., is available open access on the USF Scholar Commons and in hard copy from Amazon.com. This editorial by the handbook’s editors provides background and an overview of the 20 detailed chapters on center leadership and management; community interactions; staffing, hiring and training; center assessment; and starting a center; and then a collection of ten case studies from research universities, four-year state colleges, liberal arts colleges, and a community college. The editorial ends by pointing out the need for and potential benefits of a professional organization for QMaSC directors.

  18. Quantitative CT: technique dependence of volume estimation on pulmonary nodules

    Science.gov (United States)

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Colsher, James; Amurao, Maxwell; Samei, Ehsan

    2012-03-01

    Current estimation of lung nodule size typically relies on uni- or bi-dimensional techniques. While new three-dimensional volume estimation techniques using MDCT have improved size estimation of nodules with irregular shapes, the effect of acquisition and reconstruction parameters on accuracy (bias) and precision (variance) of the new techniques has not been fully investigated. To characterize the volume estimation performance dependence on these parameters, an anthropomorphic chest phantom containing synthetic nodules was scanned and reconstructed with protocols across various acquisition and reconstruction parameters. Nodule volumes were estimated by a clinical lung analysis software package, LungVCAR. Precision and accuracy of the volume assessment were calculated across the nodules and compared between protocols via a generalized estimating equation analysis. Results showed that the precision and accuracy of nodule volume quantifications were dependent on slice thickness, with different dependences for different nodule characteristics. Other parameters including kVp, pitch, and reconstruction kernel had lower impact. Determining these technique dependences enables better volume quantification via protocol optimization and highlights the importance of consistent imaging parameters in sequential examinations.

  19. Comparison of conventional, model-based quantitative planar, and quantitative SPECT image processing methods for organ activity estimation using In-111 agents

    International Nuclear Information System (INIS)

    He, Bin; Frey, Eric C

    2006-01-01

    Accurate quantification of organ radionuclide uptake is important for patient-specific dosimetry. The quantitative accuracy of conventional conjugate view methods is limited by overlap of projections from different organs and background activity, and by attenuation and scatter. In this work, we propose and validate a quantitative planar (QPlanar) processing method based on maximum likelihood (ML) estimation of organ activities using 3D organ VOIs and a projector that models the image degrading effects. Both a physical phantom experiment and Monte Carlo simulation (MCS) studies were used to evaluate the new method. In these studies, the accuracies and precisions of organ activity estimates for the QPlanar method were compared with those from conventional planar (CPlanar) processing methods with various corrections for scatter, attenuation and organ overlap, and a quantitative SPECT (QSPECT) processing method. Experimental planar and SPECT projections and registered CT data from an RSD Torso phantom were obtained using a GE Millennium VH/Hawkeye system. The MCS data were obtained from the 3D NCAT phantom with organ activity distributions that modelled the uptake of In-111 ibritumomab tiuxetan. The simulations were performed using parameters appropriate for the same system used in the RSD torso phantom experiment. The organ activity estimates obtained from the CPlanar, QPlanar and QSPECT methods from both experiments were compared. From the results of the MCS experiment, even with ideal organ overlap correction and background subtraction, CPlanar methods provided limited quantitative accuracy. The QPlanar method with accurate modelling of the physical factors increased the quantitative accuracy at the cost of requiring estimates of the organ VOIs in 3D. The accuracy of QPlanar approached that of QSPECT, but required much less acquisition and computation time. Similar results were obtained from the physical phantom experiment. We conclude that the QPlanar method, based
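
    The ML estimation idea can be illustrated with a toy Poisson model and MLEM updates (a generic stand-in, not the authors' projector or code):

    ```python
    # Estimate organ activities from projection counts via MLEM,
    # assuming a known system matrix A (pixels x organ VOIs).
    import numpy as np

    A = np.array([[0.8, 0.3],
                  [0.2, 0.6],
                  [0.1, 0.4]])                  # hypothetical projector
    true_activity = np.array([100.0, 50.0])
    counts = np.random.default_rng(1).poisson(A @ true_activity)

    activity = np.ones(2)                       # initial estimate
    for _ in range(200):
        expected = A @ activity
        activity *= (A.T @ (counts / expected)) / A.sum(axis=0)

    print("Estimated organ activities:", activity.round(1))
    ```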

  20. Qualitative and quantitative cost estimation : a methodology analysis

    NARCIS (Netherlands)

    Aram, S.; Eastman, C.; Beetz, J.; Issa, R.; Flood, I.

    2014-01-01

    This paper reports on the first part of ongoing research with the goal of designing a framework and a knowledge-based system for 3D parametric model-based quantity take-off and cost estimation in the Architecture, Engineering and Construction (AEC) industry. The authors have studied and analyzed

  1. Quantitative pre-surgical lung function estimation with SPECT/CT

    International Nuclear Information System (INIS)

    Bailey, D. L.; Willowson, K. P.; Timmins, S.; Harris, B. E.; Bailey, E. A.; Roach, P. J.

    2009-01-01

    Full text: Objectives: To develop methodology to predict lobar lung function based on SPECT/CT ventilation and perfusion (V/Q) scanning in candidates for lobectomy for lung cancer. Methods: This combines two development areas from our group: quantitative SPECT based on CT-derived corrections for scattering and attenuation of photons, and SPECT V/Q scanning with lobar segmentation from CT. Eight patients underwent baseline pulmonary function testing (PFT) including spirometry, measurement of DLCO and cardio-pulmonary exercise testing. A SPECT/CT V/Q scan was acquired at baseline. Using in-house software, each lobe was anatomically defined using CT to provide lobar ROIs which could be applied to the SPECT data. From these, the individual lobar contribution to overall function was calculated from counts within the lobe, and post-operative FEV1, DLCO and VO2 peak were predicted. This was compared with the quantitative planar scan method using 3 rectangular ROIs over each lung. Results: Post-operative FEV1 most closely matched that predicted by the planar quantification method, with SPECT V/Q over-estimating the loss of function by 8% (range −7% to +23%). However, post-operative DLCO and VO2 peak were both accurately predicted by SPECT V/Q (average error of 0 and 2%, respectively) compared with planar. Conclusions: More accurate anatomical definition of lobar anatomy provides better estimates of post-operative loss of function for DLCO and VO2 peak than traditional planar methods. SPECT/CT provides the tools for accurate anatomical definitions of the surgical target as well as being useful in producing quantitative 3D functional images for ventilation and perfusion.
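
    The prediction step itself follows the standard counts-based formula; a hedged sketch with hypothetical numbers:

    ```python
    # Predicted post-operative function = baseline x (1 - fraction of V/Q
    # counts in the lobe to be resected).
    baseline_fev1 = 2.40   # litres
    lobe_counts = {"RUL": 0.18, "RML": 0.08, "RLL": 0.22,
                   "LUL": 0.28, "LLL": 0.24}   # fractional counts per lobe

    resect = "RUL"
    predicted = baseline_fev1 * (1.0 - lobe_counts[resect])
    print(f"Predicted post-op FEV1 after {resect} lobectomy: {predicted:.2f} L")
    ```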

  2. A Quantitative Property-Property Relationship for Estimating Packaging-Food Partition Coefficients of Organic Compounds

    DEFF Research Database (Denmark)

    Huang, L.; Ernstoff, Alexi; Xu, H.

    2017-01-01

    Organic chemicals encapsulated in beverage and food packaging can migrate into the food and lead to human exposures via ingestion. The packaging-food (Kpf) partition coefficient is a key parameter for estimating chemical migration from packaging materials. Previous studies have simply set Kpf to 1 or 1000, or provided separate linear correlations for several discrete values of ethanol equivalencies of food simulants (EtOH-eq). The aim of the present study is to develop a single quantitative property-property relationship (QPPR) valid for different chemical-packaging combinations and for water... because only two packaging types are included. This preliminary QPPR demonstrates that the Kpf for various chemical-packaging-food combinations can be estimated by a single linear correlation. Based on more than 1000 collected Kpf in 15 materials, we will present extensive results for other packaging types...

  3. Estimation of the number of fluorescent end-members for quantitative analysis of multispectral FLIM data.

    Science.gov (United States)

    Gutierrez-Navarro, Omar; Campos-Delgado, Daniel U; Arce-Santana, Edgar R; Maitland, Kristen C; Cheng, Shuna; Jabbour, Joey; Malik, Bilal; Cuenca, Rodrigo; Jo, Javier A

    2014-05-19

    Multispectral fluorescence lifetime imaging (m-FLIM) can potentially allow identifying the endogenous fluorophores present in biological tissue. Quantitative description of such data requires estimating the number of components in the sample, their characteristic fluorescent decays, and their relative contributions or abundances. Unfortunately, this inverse problem usually requires prior knowledge about the data, which is seldom available in biomedical applications. This work presents a new methodology to estimate the number of potential endogenous fluorophores present in biological tissue samples from time-domain m-FLIM data. Furthermore, a completely blind linear unmixing algorithm is proposed. The method was validated using both synthetic and experimental m-FLIM data. The experimental m-FLIM data include in-vivo measurements from healthy and cancerous hamster cheek-pouch epithelial tissue, and ex-vivo measurements from human coronary atherosclerotic plaques. The analysis of m-FLIM data from in-vivo hamster oral mucosa distinguished healthy from precancerous lesions, based on the relative concentration of their characteristic fluorophores. The algorithm also provided a better description of atherosclerotic plaques in terms of their endogenous fluorophores. These results demonstrate the potential of this methodology to provide quantitative descriptions of tissue biochemical composition.
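
    As a generic illustration of blind linear unmixing (nonnegative matrix factorization here, a stand-in rather than the authors' algorithm; data synthetic):

    ```python
    # Factor nonnegative m-FLIM-like decay data into end-member decays
    # and per-pixel abundances.
    import numpy as np
    from sklearn.decomposition import NMF

    t = np.linspace(0.0, 10.0, 200)                            # ns
    decays = np.vstack([np.exp(-t / 0.8), np.exp(-t / 3.0)])   # two end-members
    rng = np.random.default_rng(0)
    abund = rng.dirichlet([1.0, 1.0], size=50)                 # pixel abundances
    data = abund @ decays + 0.01 * rng.random((50, 200))

    model = NMF(n_components=2, init="nndsvda", max_iter=500)
    est_abund = model.fit_transform(data)    # relative contributions per pixel
    est_decays = model.components_           # characteristic decay profiles
    print(est_abund.shape, est_decays.shape)
    ```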

  4. 49 CFR 375.409 - May household goods brokers provide estimates?

    Science.gov (United States)

    2010-10-01

    49 CFR Part 375, Subpart D (Estimating Charges), § 375.409 May household goods brokers provide estimates? A household goods broker must not... there is a written agreement between the broker and you, the carrier, adopting the broker's estimate as...

  5. Can genetic estimators provide robust estimates of the effective number of breeders in small populations?

    Directory of Open Access Journals (Sweden)

    Marion Hoehn

    The effective population size (Ne) is proportional to the loss of genetic diversity and the rate of inbreeding, and its accurate estimation is crucial for the monitoring of small populations. Here, we integrate temporal studies of the gecko Oedura reticulata to compare genetic and demographic estimators of Ne. Because geckos have overlapping generations, our goal was to demographically estimate NbI, the inbreeding effective number of breeders, and to calculate the NbI/Na ratio (Na = number of adults) for four populations. Demographically estimated NbI ranged from 1 to 65 individuals. The mean reduction in the effective number of breeders relative to census size (NbI/Na) was 0.1 to 1.1. We identified the variance in reproductive success as the most important variable contributing to reduction of this ratio. We used four methods to estimate the genetic-based inbreeding effective number of breeders NbI(gen) and the variance effective population size NeV(gen) from the genotype data. Two of these methods - a temporal moment-based (MBT) and a likelihood-based approach (TM3) - require at least two samples in time, while the other two were single-sample estimators - the linkage disequilibrium method with bias correction (LDNe) and the program ONeSAMP. The genetic-based estimates were fairly similar across methods and also similar to the demographic estimates, excluding those estimates in which upper confidence interval boundaries were uninformative. For example, LDNe and ONeSAMP estimates ranged from 14-55 and 24-48 individuals, respectively. However, temporal methods suffered from a large variation in confidence intervals and concerns about the prior information. We conclude that the single-sample estimators are an acceptable short-cut to estimate NbI for species such as geckos and will be of great importance for the monitoring of species in fragmented landscapes.

  6. Quantitative PET Imaging in Drug Development: Estimation of Target Occupancy.

    Science.gov (United States)

    Naganawa, Mika; Gallezot, Jean-Dominique; Rossano, Samantha; Carson, Richard E

    2017-12-11

    Positron emission tomography, an imaging tool using radiolabeled tracers in humans and preclinical species, has been widely used in recent years in drug development, particularly in the central nervous system. One important goal of PET in drug development is assessing the occupancy of various molecular targets (e.g., receptors, transporters, enzymes) by exogenous drugs. The current linear mathematical approaches used to determine occupancy using PET imaging experiments are presented. These algorithms use results from multiple regions with different target content in two scans, a baseline (pre-drug) scan and a post-drug scan. New mathematical estimation approaches to determine target occupancy, using maximum likelihood, are presented. A major challenge in these methods is the proper definition of the covariance matrix of the regional binding measures, accounting for different variance of the individual regional measures and their nonzero covariance, factors that have been ignored by conventional methods. The novel methods are compared to standard methods using simulation and real human occupancy data. The simulation data showed the expected reduction in variance and bias using the proper maximum likelihood methods, when the assumptions of the estimation method matched those in simulation. Between-method differences for data from human occupancy studies were less obvious, in part due to small dataset sizes. These maximum likelihood methods form the basis for development of improved PET covariance models, in order to minimize bias and variance in PET occupancy studies.
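    For context, the conventional linear approach the paper builds on can be sketched: across regions with different target content, the reduction in binding between baseline and post-drug scans is regressed on baseline binding, and the slope is the occupancy (the Lassen-plot idea). A minimal sketch with hypothetical regional values; the paper's maximum-likelihood weighting and covariance modeling are not reproduced:

```python
import numpy as np

# Hypothetical baseline binding potentials (BPnd) in five regions
bp_base = np.array([2.1, 1.6, 1.2, 0.9, 0.5])
occ_true = 0.6
rng = np.random.default_rng(0)
bp_drug = bp_base * (1 - occ_true) + rng.normal(0, 0.02, bp_base.size)

# Regress the reduction in binding on baseline binding: slope = occupancy
delta = bp_base - bp_drug
slope, intercept = np.polyfit(bp_base, delta, 1)
print(f"Estimated occupancy: {slope:.2f}")
```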

  7. Developing Daily Quantitative Damage Estimates From Geospatial Layers To Support Post Event Recovery

    Science.gov (United States)

    Woods, B. K.; Wei, L. H.; Connor, T. C.

    2014-12-01

    With the growth of natural hazard data available in near real-time it is increasingly feasible to deliver damage estimates caused by natural disasters. These estimates can be used in a disaster management setting or by commercial entities to optimize the deployment of resources and/or routing of goods and materials. This work outlines an end-to-end, modular process to generate estimates of damage caused by severe weather. The processing stream consists of five generic components: 1) Hazard modules that provide quantitative data layers for each peril. 2) Standardized methods to map the hazard data to an exposure layer based on atomic geospatial blocks. 3) Peril-specific damage functions that compute damage metrics at the atomic geospatial block level. 4) Standardized data aggregators, which map damage to user-specific geometries. 5) Data dissemination modules, which provide resulting damage estimates in a variety of output forms. This presentation provides a description of this generic tool set, and an illustrated example using HWRF-based hazard data for Hurricane Arthur (2014). In this example, the Python-based real-time processing ingests GRIB2 output from the HWRF numerical model, dynamically downscales it in conjunction with a land cover database using a multiprocessing pool and a just-in-time compiler (JIT). The resulting wind fields are contoured and ingested into a PostGIS database using OGR. Finally, the damage estimates are calculated at the atomic block level and aggregated to user-defined regions using PostgreSQL queries to construct application-specific tabular and graphics output.
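    The block-level damage step (components 2-4) can be illustrated with a toy table: hazard values mapped onto atomic blocks, a peril-specific damage function applied per block, and losses aggregated to user geometries. A sketch with a hypothetical damage function and made-up exposure values, not the authors' production code:

```python
import pandas as pd

# Hypothetical atomic geospatial blocks with peak wind speed (m/s)
blocks = pd.DataFrame({
    "block_id": [1, 2, 3, 4],
    "region":   ["A", "A", "B", "B"],
    "exposure": [1.0e6, 2.5e6, 0.8e6, 1.2e6],   # insured value, $
    "wind":     [18.0, 32.0, 41.0, 25.0],
})

def wind_damage_ratio(v, v_min=20.0, v_max=70.0):
    """Toy peril-specific damage function: no damage below v_min,
    total loss above v_max, quadratic ramp in between."""
    if v <= v_min:
        return 0.0
    if v >= v_max:
        return 1.0
    return ((v - v_min) / (v_max - v_min)) ** 2

# Damage metric at the atomic block level
blocks["loss"] = blocks.exposure * blocks.wind.map(wind_damage_ratio)
# Aggregate damage to user-specific geometries (here, regions)
print(blocks.groupby("region")["loss"].sum())
```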

  8. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative-quantitative modeling.

    Science.gov (United States)

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-05-01

    Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and frequently only available in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLab(TM)-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/

  9. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling

    Science.gov (United States)

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-01-01

    Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and frequently only available in the form of qualitative if–then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLabTM-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270

  10. Nuclear magnetic resonance provides a quantitative description of protein conformational flexibility on physiologically important time scales.

    Science.gov (United States)

    Salmon, Loïc; Bouvignies, Guillaume; Markwick, Phineus; Blackledge, Martin

    2011-04-12

    A complete description of biomolecular activity requires an understanding of the nature and the role of protein conformational dynamics. In recent years, novel nuclear magnetic resonance-based techniques that provide hitherto inaccessible detail concerning biomolecular motions occurring on physiologically important time scales have emerged. Residual dipolar couplings (RDCs) provide precise information about time- and ensemble-averaged structural and dynamic processes with correlation times up to the millisecond and thereby encode key information for understanding biological activity. In this review, we present the application of two very different approaches to the quantitative description of protein motion using RDCs. The first is purely analytical, describing backbone dynamics in terms of diffusive motions of each peptide plane, using extensive statistical analysis to validate the proposed dynamic modes. The second is based on restraint-free accelerated molecular dynamics simulation, providing statistically sampled free energy-weighted ensembles that describe conformational fluctuations occurring on time scales from pico- to milliseconds, at atomic resolution. Remarkably, the results from these two approaches converge closely in terms of distribution and absolute amplitude of motions, suggesting that this kind of combination of analytical and numerical models is now capable of providing a unified description of protein conformational dynamics in solution.

  11. Reef-associated crustacean fauna: biodiversity estimates using semi-quantitative sampling and DNA barcoding

    Science.gov (United States)

    Plaisance, L.; Knowlton, N.; Paulay, G.; Meyer, C.

    2009-12-01

    The cryptofauna associated with coral reefs accounts for a major part of the biodiversity in these ecosystems but has been largely overlooked in biodiversity estimates because the organisms are hard to collect and identify. We combine a semi-quantitative sampling design and a DNA barcoding approach to provide metrics for the diversity of reef-associated crustaceans. Twenty-two similar-sized dead heads of Pocillopora were sampled at 10 m depth from five central Pacific Ocean localities (four atolls in the Northern Line Islands and in Moorea, French Polynesia). All crustaceans were removed, and partial cytochrome oxidase subunit I was sequenced from 403 individuals, yielding 135 distinct taxa using a species-level criterion of 5% similarity. Most crustacean species were rare; 44% of the OTUs were represented by a single individual, and an additional 33% were represented by several specimens found only in one of the five localities. The Northern Line Islands and Moorea shared only 11 OTUs. Total numbers estimated by species richness statistics (Chao1 and ACE) suggest at least 90 species of crustaceans in Moorea and 150 in the Northern Line Islands for this habitat type. However, rarefaction curves for each region failed to approach an asymptote, and Chao1 and ACE estimators did not stabilize after sampling eight heads in Moorea, so even these diversity figures are underestimates. Nevertheless, even this modest sampling effort from a very limited habitat resulted in surprisingly high species numbers.
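    The Chao1 statistic cited above has a closed form driven by the rarest classes: S_chao1 = S_obs + F1^2 / (2 F2), where F1 and F2 are the numbers of singleton and doubleton OTUs. A small sketch with hypothetical abundance data:

```python
from collections import Counter

# Individuals observed per OTU (hypothetical abundance data)
otu_counts = [1] * 60 + [2] * 20 + [3] * 10 + [5] * 8 + [12] * 4

def chao1(counts):
    """Chao1 richness estimate: S_obs + F1^2 / (2*F2), where F1 and F2
    are the numbers of singleton and doubleton OTUs."""
    freq = Counter(counts)
    s_obs = len(counts)
    f1, f2 = freq.get(1, 0), freq.get(2, 0)
    if f2 == 0:  # bias-corrected form when no doubletons are observed
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 ** 2 / (2.0 * f2)

print(f"Observed OTUs: {len(otu_counts)}, Chao1: {chao1(otu_counts):.1f}")
```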

  12. Quantitative ultrasound characterization of locally advanced breast cancer by estimation of its scatterer properties

    Energy Technology Data Exchange (ETDEWEB)

    Tadayyon, Hadi [Physical Sciences, Sunnybrook Research Institute, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5 (Canada); Department of Medical Biophysics, Faculty of Medicine, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Sadeghi-Naini, Ali; Czarnota, Gregory, E-mail: Gregory.Czarnota@sunnybrook.ca [Physical Sciences, Sunnybrook Research Institute, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5 (Canada); Department of Medical Biophysics, Faculty of Medicine, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, Odette Cancer Centre, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5 (Canada); Department of Radiation Oncology, Faculty of Medicine, University of Toronto, Toronto, Ontario M5T 1P5 (Canada); Wirtzfeld, Lauren [Department of Physics, Ryerson University, Toronto, Ontario M5B 2K3 (Canada); Wright, Frances C. [Division of Surgical Oncology, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5 (Canada)

    2014-01-15

    Purpose: Tumor grading is an important part of breast cancer diagnosis and currently requires biopsy as its standard. Here, the authors investigate quantitative ultrasound parameters in locally advanced breast cancers that can potentially separate tumors from normal breast tissue and differentiate tumor grades. Methods: Ultrasound images and radiofrequency data from 42 locally advanced breast cancer patients were acquired and analyzed. Parameters related to the linear regression of the power spectrum—midband fit, slope, and 0-MHz-intercept—were determined from breast tumors and normal breast tissues. Mean scatterer spacing was estimated from the spectral autocorrelation, and the effective scatterer diameter and effective acoustic concentration were estimated from the Gaussian form factor. Parametric maps of each quantitative ultrasound parameter were constructed from the gated radiofrequency segments in tumor and normal tissue regions of interest. In addition to the mean values of the parametric maps, higher order statistical features, computed from gray-level co-occurrence matrices were also determined and used for characterization. Finally, linear and quadratic discriminant analyses were performed using combinations of quantitative ultrasound parameters to classify breast tissues. Results: Quantitative ultrasound parameters were found to be statistically different between tumor and normal tissue (p < 0.05). The combination of effective acoustic concentration and mean scatterer spacing could separate tumor from normal tissue with 82% accuracy, while the addition of effective scatterer diameter to the combination did not provide significant improvement (83% accuracy). Furthermore, the two advanced parameters, including effective scatterer diameter and mean scatterer spacing, were found to be statistically differentiating among grade I, II, and III tumors (p = 0.014 for scatterer spacing, p = 0.035 for effective scatterer diameter). The separation of the tumor

  13. Quantitative ultrasound characterization of locally advanced breast cancer by estimation of its scatterer properties

    International Nuclear Information System (INIS)

    Tadayyon, Hadi; Sadeghi-Naini, Ali; Czarnota, Gregory; Wirtzfeld, Lauren; Wright, Frances C.

    2014-01-01

    Purpose: Tumor grading is an important part of breast cancer diagnosis and currently requires biopsy as its standard. Here, the authors investigate quantitative ultrasound parameters in locally advanced breast cancers that can potentially separate tumors from normal breast tissue and differentiate tumor grades. Methods: Ultrasound images and radiofrequency data from 42 locally advanced breast cancer patients were acquired and analyzed. Parameters related to the linear regression of the power spectrum—midband fit, slope, and 0-MHz-intercept—were determined from breast tumors and normal breast tissues. Mean scatterer spacing was estimated from the spectral autocorrelation, and the effective scatterer diameter and effective acoustic concentration were estimated from the Gaussian form factor. Parametric maps of each quantitative ultrasound parameter were constructed from the gated radiofrequency segments in tumor and normal tissue regions of interest. In addition to the mean values of the parametric maps, higher order statistical features, computed from gray-level co-occurrence matrices were also determined and used for characterization. Finally, linear and quadratic discriminant analyses were performed using combinations of quantitative ultrasound parameters to classify breast tissues. Results: Quantitative ultrasound parameters were found to be statistically different between tumor and normal tissue (p < 0.05). The combination of effective acoustic concentration and mean scatterer spacing could separate tumor from normal tissue with 82% accuracy, while the addition of effective scatterer diameter to the combination did not provide significant improvement (83% accuracy). Furthermore, the two advanced parameters, including effective scatterer diameter and mean scatterer spacing, were found to be statistically differentiating among grade I, II, and III tumors (p = 0.014 for scatterer spacing, p = 0.035 for effective scatterer diameter). The separation of the tumor
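    The first set of parameters named in Methods comes from a straight-line fit to the calibrated power spectrum over the usable bandwidth: the slope, the 0-MHz intercept, and the fit value at the band centre (the midband fit). A minimal sketch with a synthetic spectrum (band limits and values hypothetical):

```python
import numpy as np

# Hypothetical calibrated power spectrum (dB) over the usable bandwidth
rng = np.random.default_rng(0)
freq = np.linspace(3.0, 8.0, 50)                       # MHz
spectrum = -40 + 2.5 * freq + rng.normal(0, 1.0, freq.size)

# Linear regression of the power spectrum
slope, intercept = np.polyfit(freq, spectrum, 1)       # dB/MHz, dB
midband = slope * freq.mean() + intercept              # fit at band centre

print(f"slope = {slope:.2f} dB/MHz, 0-MHz intercept = {intercept:.1f} dB, "
      f"midband fit = {midband:.1f} dB")
```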

  14. Improving statistical inference on pathogen densities estimated by quantitative molecular methods: malaria gametocytaemia as a case study.

    Science.gov (United States)

    Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S

    2015-01-16

    Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, are compared; a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.

  15. Improved dose–volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    International Nuclear Information System (INIS)

    Cheng Lishui; Hobbs, Robert F; Sgouros, George; Frey, Eric C; Segars, Paul W

    2013-01-01

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose–volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator–detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less

  16. Improved dose-volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    Science.gov (United States)

    Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.

    2013-06-01

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less
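    The cumulative DVH at the end of this pipeline is simple to compute once a 3D dose (or dose-rate) array is available: for each dose level, record the fraction of organ voxels receiving at least that dose. A sketch with a synthetic dose distribution (the phantom, QSPECT reconstruction and voxel S kernel are not reproduced):

```python
import numpy as np

# Hypothetical voxel doses for one organ (Gy), flattened
rng = np.random.default_rng(0)
dose = rng.gamma(shape=4.0, scale=0.5, size=10_000)

def cumulative_dvh(dose_voxels, n_bins=100):
    """Return (dose levels, % of volume receiving >= that dose)."""
    levels = np.linspace(0, dose_voxels.max(), n_bins)
    volume_pct = np.array([(dose_voxels >= d).mean() * 100 for d in levels])
    return levels, volume_pct

d, v = cumulative_dvh(dose)
# e.g. D50: minimum dose received by the hottest 50% of the volume
print(f"D50 ~ {d[np.argmin(np.abs(v - 50))]:.2f} Gy")
```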

  17. Providing Quantitative Information and a Nudge to Undergo Stool Testing in a Colorectal Cancer Screening Decision Aid: A Randomized Clinical Trial.

    Science.gov (United States)

    Schwartz, Peter H; Perkins, Susan M; Schmidt, Karen K; Muriello, Paul F; Althouse, Sandra; Rawl, Susan M

    2017-08-01

    Guidelines recommend that patient decision aids should provide quantitative information about probabilities of potential outcomes, but the impact of this information is unknown. Behavioral economics suggests that patients confused by quantitative information could benefit from a "nudge" towards one option. We conducted a pilot randomized trial to estimate the effect sizes of presenting quantitative information and a nudge. Primary care patients (n = 213) eligible for colorectal cancer screening viewed basic screening information and were randomized to view (a) quantitative information (quantitative module), (b) a nudge towards stool testing with the fecal immunochemical test (FIT) (nudge module), (c) neither a nor b, or (d) both a and b. Outcome measures were perceived colorectal cancer risk, screening intent, preferred test, and decision conflict, measured before and after viewing the decision aid, and screening behavior at 6 months. Patients viewing the quantitative module were more likely to be screened than those who did not (P = 0.012). Patients viewing the nudge module had a greater increase in perceived colorectal cancer risk than those who did not (P = 0.041). Those viewing the quantitative module had a smaller increase in perceived risk than those who did not (P = 0.046), and the effect was moderated by numeracy. Among patients with high numeracy who did not view the nudge module, those who viewed the quantitative module had a greater increase in intent to undergo FIT (P = 0.028) than did those who did not. The limitations of this study were the limited sample size and single healthcare system. Adding quantitative information to a decision aid increased uptake of colorectal cancer screening, while adding a nudge to undergo FIT did not increase uptake. Further research on quantitative information in decision aids is warranted.

  18. Quantitative estimation of muscle fatigue using surface electromyography during static muscle contraction.

    Science.gov (United States)

    Soo, Yewguan; Sugi, Masao; Nishino, Masataka; Yokoi, Hiroshi; Arai, Tamio; Kato, Ryu; Nakamura, Tatsuhiro; Ota, Jun

    2009-01-01

    Muscle fatigue is commonly associated with the musculoskeletal disorder problem. Previously, various techniques were proposed to index muscle fatigue from the electromyography signal. However, quantitative measurement is still difficult to achieve. This study aimed at proposing a method to estimate the degree of muscle fatigue quantitatively. A fatigue model was first constructed using a handgrip dynamometer by conducting a series of static contraction tasks. The degree of muscle fatigue could then be estimated from the electromyography signal with reasonable accuracy. The error of the estimated muscle fatigue was less than 10% MVC, and no significant difference was found between the estimated value and the one measured using a force sensor. Although the results were promising, there were still some limitations that need to be overcome in future study.
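    The paper's dynamometer-calibrated fatigue model is not reproduced here, but a spectral indicator commonly used in such EMG studies is easy to illustrate: the median frequency of the EMG power spectrum, which declines as a muscle fatigues. A sketch on synthetic surrogate signals (sampling rate, frequencies and noise are hypothetical):

```python
import numpy as np
from scipy.signal import welch

fs = 1000  # Hz, hypothetical sampling rate

def median_frequency(emg, fs):
    """Median frequency of the EMG power spectrum; its decline over
    successive windows is a common fatigue indicator."""
    f, pxx = welch(emg, fs=fs, nperseg=256)
    cumulative = np.cumsum(pxx)
    return f[np.searchsorted(cumulative, cumulative[-1] / 2.0)]

# Synthetic surrogates: spectral content shifts downwards with fatigue
rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / fs)
fresh = np.sin(2 * np.pi * 80 * t) + 0.5 * rng.standard_normal(t.size)
tired = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.standard_normal(t.size)
print(median_frequency(fresh, fs), median_frequency(tired, fs))
```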

  19. Global quantitative indices reflecting provider process-of-care: data-base derivation.

    Science.gov (United States)

    Moran, John L; Solomon, Patricia J

    2010-04-19

    Controversy has attended the relationship between risk-adjusted mortality and process-of-care. There would be advantage in the establishment, at the data-base level, of global quantitative indices subsuming the diversity of process-of-care. A retrospective, cohort study of patients identified in the Australian and New Zealand Intensive Care Society Adult Patient Database, 1993-2003, at the level of geographic and ICU-level descriptors (n = 35), for both hospital survivors and non-survivors. Process-of-care indices were established by analysis of: (i) the smoothed time-hazard curve of individual patient discharge and determined by pharmaco-kinetic methods as area under the hazard-curve (AUC), reflecting the integrated experience of the discharge process, and time-to-peak-hazard (TMAX, in days), reflecting the time to maximum rate of hospital discharge; and (ii) individual patient ability to optimize output (as length-of-stay) for recorded data-base physiological inputs; estimated as a technical production-efficiency (TE, scaled [0,(maximum)1]), via the econometric technique of stochastic frontier analysis. For each descriptor, multivariate correlation-relationships between indices and summed mortality probability were determined. The data-set consisted of 223129 patients from 99 ICUs with mean (SD) age and APACHE III score of 59.2(18.9) years and 52.7(30.6) respectively; 41.7% were female and 45.7% were mechanically ventilated within the first 24 hours post-admission. For survivors, AUC was maximal in rural and for-profit ICUs, whereas TMAX (≥ 7.8 days) and TE (≥ 0.74) were maximal in tertiary-ICUs. For non-survivors, AUC was maximal in tertiary-ICUs, but TMAX (≥ 4.2 days) and TE (≥ 0.69) were maximal in for-profit ICUs. Across descriptors, significant differences in indices were demonstrated (analysis-of-variance, P ≤ 0.0001). Total explained variance, for survivors (0.89) and non-survivors (0.89), was maximized by combinations of indices demonstrating a low correlation with...

  20. Global quantitative indices reflecting provider process-of-care: data-base derivation

    Directory of Open Access Journals (Sweden)

    Solomon Patricia J

    2010-04-01

    Abstract Background: Controversy has attended the relationship between risk-adjusted mortality and process-of-care. There would be advantage in the establishment, at the data-base level, of global quantitative indices subsuming the diversity of process-of-care. Methods: A retrospective, cohort study of patients identified in the Australian and New Zealand Intensive Care Society Adult Patient Database, 1993-2003, at the level of geographic and ICU-level descriptors (n = 35), for both hospital survivors and non-survivors. Process-of-care indices were established by analysis of: (i) the smoothed time-hazard curve of individual patient discharge and determined by pharmaco-kinetic methods as area under the hazard-curve (AUC), reflecting the integrated experience of the discharge process, and time-to-peak-hazard (TMAX, in days), reflecting the time to maximum rate of hospital discharge; and (ii) individual patient ability to optimize output (as length-of-stay) for recorded data-base physiological inputs; estimated as a technical production-efficiency (TE, scaled [0,(maximum)1]), via the econometric technique of stochastic frontier analysis. For each descriptor, multivariate correlation-relationships between indices and summed mortality probability were determined. Results: The data-set consisted of 223129 patients from 99 ICUs with mean (SD) age and APACHE III score of 59.2 (18.9) years and 52.7 (30.6) respectively; 41.7% were female and 45.7% were mechanically ventilated within the first 24 hours post-admission. For survivors, AUC was maximal in rural and for-profit ICUs, whereas TMAX (≥ 7.8 days) and TE (≥ 0.74) were maximal in tertiary-ICUs. For non-survivors, AUC was maximal in tertiary-ICUs, but TMAX (≥ 4.2 days) and TE (≥ 0.69) were maximal in for-profit ICUs. Across descriptors, significant differences in indices were demonstrated (analysis-of-variance, P ≤ 0.0001). Total explained variance, for survivors (0.89) and non-survivors (0.89), was maximized by...

  1. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    Science.gov (United States)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal to noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome. Hence, the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for a Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a-posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of the SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and

  2. Quantitative estimation of time-variable earthquake hazard by using fuzzy set theory

    Science.gov (United States)

    Deyi, Feng; Ichikawa, M.

    1989-11-01

    In this paper, the various methods of fuzzy set theory, called fuzzy mathematics, have been applied to the quantitative estimation of the time-variable earthquake hazard. The results obtained consist of the following. (1) Quantitative estimation of the earthquake hazard on the basis of seismicity data. By using some methods of fuzzy mathematics, seismicity patterns before large earthquakes can be studied more clearly and more quantitatively, highly active periods in a given region and quiet periods of seismic activity before large earthquakes can be recognized, similarities in temporal variation of seismic activity and seismic gaps can be examined and, on the other hand, the time-variable earthquake hazard can be assessed directly on the basis of a series of statistical indices of seismicity. Two methods of fuzzy clustering analysis, the method of fuzzy similarity, and the direct method of fuzzy pattern recognition have been studied in particular. One method of fuzzy clustering analysis is based on fuzzy netting, and another is based on the fuzzy equivalent relation. (2) Quantitative estimation of the earthquake hazard on the basis of observational data for different precursors. The direct method of fuzzy pattern recognition has been applied to research on earthquake precursors of different kinds. On the basis of the temporal and spatial characteristics of recognized precursors, earthquake hazards in different terms can be estimated. This paper mainly deals with medium-short-term precursors observed in Japan and China.

  3. Dual respiratory and cardiac motion estimation in PET imaging: Methods design and quantitative evaluation.

    Science.gov (United States)

    Feng, Tao; Wang, Jizhe; Tsui, Benjamin M W

    2018-04-01

    The goal of this study was to develop and evaluate four post-reconstruction respiratory and cardiac (R&C) motion vector field (MVF) estimation methods for cardiac 4D PET data. In Method 1, the dual R&C motions were estimated directly from the dual R&C gated images. In Method 2, respiratory motion (RM) and cardiac motion (CM) were separately estimated from the respiratory gated only and cardiac gated only images. The effects of RM on CM estimation were modeled in Method 3 by applying an image-based RM correction on the cardiac gated images before CM estimation, while the effects of CM on RM estimation were neglected. Method 4 iteratively models the mutual effects of RM and CM during dual R&C motion estimation. Realistic simulation data were generated for quantitative evaluation of the four methods. Almost noise-free PET projection data were generated from the 4D XCAT phantom with realistic R&C MVF using Monte Carlo simulation. Poisson noise was added to the scaled projection data to generate additional datasets of two more noise levels. All the projection data were reconstructed using a 4D image reconstruction method to obtain dual R&C gated images. The four dual R&C MVF estimation methods were applied to the dual R&C gated images and the accuracy of motion estimation was quantitatively evaluated using the root mean square error (RMSE) of the estimated MVFs. Results show that among the four estimation methods, Method 2 performed the worst for the noise-free case while Method 1 performed the worst for noisy cases in terms of quantitative accuracy of the estimated MVF. Methods 3 and 4 showed comparable results and achieved RMSEs up to 35% lower than Method 1 for noisy cases. In conclusion, we have developed and evaluated four different post-reconstruction R&C MVF estimation methods for use in 4D PET imaging. Comparison of the performance of the four methods on simulated data indicates separate R&C estimation with modeling of RM before CM estimation (Method 3) to be
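    The evaluation metric is straightforward to reproduce: the RMSE between estimated and ground-truth motion vector fields, taken over all voxels and vector components. A minimal sketch on hypothetical MVF arrays:

```python
import numpy as np

# Hypothetical motion vector fields on a small grid: shape (z, y, x, 3)
rng = np.random.default_rng(1)
mvf_true = rng.normal(0, 2.0, size=(8, 16, 16, 3))       # mm
mvf_est = mvf_true + rng.normal(0, 0.5, size=mvf_true.shape)

# RMSE over all voxels and vector components
rmse = np.sqrt(np.mean((mvf_est - mvf_true) ** 2))
print(f"MVF RMSE: {rmse:.3f} mm")
```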

  4. [Quantitative estimation of vegetation cover and management factor in USLE and RUSLE models by using remote sensing data: a review].

    Science.gov (United States)

    Wu, Chang-Guang; Li, Sheng; Ren, Hua-Dong; Yao, Xiao-Hua; Huang, Zi-Jie

    2012-06-01

    Soil loss prediction models such as the universal soil loss equation (USLE) and the revised universal soil loss equation (RUSLE) are useful tools for risk assessment of soil erosion and planning of soil conservation at regional scale. Rational estimation of the vegetation cover and management factor, one of the most important parameters in USLE and RUSLE, is particularly important for the accurate prediction of soil erosion. The traditional estimation based on field survey and measurement is time-consuming, laborious, and costly, and cannot rapidly extract the vegetation cover and management factor at macro-scale. In recent years, the development of remote sensing technology has provided both data and methods for the estimation of the vegetation cover and management factor over broad geographic areas. This paper summarizes the research findings on the quantitative estimation of the vegetation cover and management factor by using remote sensing data, and analyzes the advantages and disadvantages of the various methods, aiming to provide a reference for further research and quantitative estimation of the vegetation cover and management factor at large scale.
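    One widely cited remote-sensing parameterization of this factor, the exponential NDVI relation of van der Knijff et al. with α ≈ 2 and β ≈ 1, illustrates the kind of estimation the review surveys; treat the coefficients as context-dependent assumptions rather than universal constants:

```python
import numpy as np

def c_factor_from_ndvi(ndvi, alpha=2.0, beta=1.0):
    """C factor from NDVI using the exponential relation of
    van der Knijff et al.: C = exp(-alpha * NDVI / (beta - NDVI))."""
    ndvi = np.clip(ndvi, 0.0, 0.99)      # keep the denominator positive
    return np.exp(-alpha * ndvi / (beta - ndvi))

ndvi = np.array([0.1, 0.3, 0.5, 0.7])    # hypothetical pixel values
print(c_factor_from_ndvi(ndvi).round(3))
```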

  5. Estimation of genetic parameters and detection of quantitative trait loci for metabolites in Danish Holstein milk

    DEFF Research Database (Denmark)

    Buitenhuis, Albert Johannes; Sundekilde, Ulrik; Poulsen, Nina Aagaard

    2013-01-01

    Small components and metabolites in milk are significant for the utilization of milk, not only in dairy food production but also as disease predictors in dairy cattle. This study focused on estimation of genetic parameters and detection of quantitative trait loci for metabolites in bovine milk. F... for lactic acid to >0.8 for orotic acid and β-hydroxybutyrate. A single SNP association analysis revealed 7 genome-wide significant quantitative trait loci [malonate: Bos taurus autosome (BTA)2 and BTA7; galactose-1-phosphate: BTA2; cis-aconitate: BTA11; urea: BTA12; carnitine: BTA25...

  6. Estimating bioerosion rate on fossil corals: a quantitative approach from Oligocene reefs (NW Italy)

    Science.gov (United States)

    Silvestri, Giulia

    2010-05-01

    Bioerosion of coral reefs, especially when related to the activity of macroborers, is considered to be one of the major processes influencing framework development in present-day reefs. Macroboring communities affecting both living and dead corals are widely distributed also in the fossil record and their role is supposed to be analogously important in determining flourishing vs demise of coral bioconstructions. Nevertheless, many aspects concerning environmental factors controlling the incidence of bioerosion, shifts in composition of macroboring communities and estimation of bioerosion rate in different contexts are still poorly documented and understood. This study presents an attempt to quantify bioerosion rate on reef limestones characteristic of some Oligocene outcrops of the Tertiary Piedmont Basin (NW Italy) and deposited under terrigenous sedimentation within prodelta and delta fan systems. Branching coral rubble-dominated facies have been recognized as prevailing in this context. Depositional patterns, textures, and the generally low incidence of taphonomic features, such as fragmentation and abrasion, suggest relatively quiet waters where coral remains were deposited almost in situ. Thus taphonomic signatures occurring on corals can be reliably used to reconstruct environmental parameters affecting these particular branching coral assemblages during their life and to compare them with those typical of classical clear-water reefs. Bioerosion is sparsely distributed within coral facies and consists of a limited suite of traces, mostly referred to clionid sponges and polychaete and sipunculid worms. The incidence of boring bivalves seems to be generally lower. Together with semi-quantitative analysis of bioerosion rate along vertical logs and horizontal levels, two quantitative methods have been assessed and compared. These consist of the elaboration of high resolution scanned thin sections through software for image analysis (Photoshop CS3) and point

  7. Quantitative estimation of land surface evapotranspiration in Taiwan based on MODIS data

    Directory of Open Access Journals (Sweden)

    Che-sheng Zhan

    2011-09-01

    Land surface evapotranspiration (ET) determines the local and regional water-heat balances. Accurate estimation of regional surface ET provides a scientific basis for the formulation and implementation of water conservation programs. This study set up a table of the momentum roughness length and zero-plane displacement related to land cover and an empirical relationship between land surface temperature and air temperature. A revised quantitative remote sensing ET model, the SEBS-Taiwan model, was developed. Based on Moderate Resolution Imaging Spectroradiometer (MODIS) data, SEBS-Taiwan was used to simulate and evaluate the typical actual daily ET values in different seasons of 2002 and 2003 in Taiwan. SEBS-Taiwan generally performed well and could accurately simulate the actual daily ET. The simulated daily ET values matched the observed values satisfactorily. The results indicate that the net regional solar radiation, evaporation ratio, and surface ET values for the whole area of Taiwan are larger in summer than in spring, and larger in autumn than in winter. The results also show that the regional average daily ET values of 2002 are a little higher than those of 2003. Through analysis of the ET values from different types of land cover, we found that forest has the largest ET value, while water areas, bare land, and urban areas have the lowest ET values. Generally, the Northern Taiwan area, including Ilan County, Nantou County, and Hualien County, has higher ET values, while other cities, such as Chiayi, Taichung, and Tainan, have lower ET values.
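    A common step in SEBS-type models is upscaling the instantaneous satellite estimate to daily ET via the evaporative fraction, assumed constant over the day. A back-of-the-envelope sketch of that conversion (all input values hypothetical):

```python
# Upscale an instantaneous estimate to daily ET via the evaporative
# fraction, assumed constant over the day (a standard assumption in
# SEBS-type models; all values below are hypothetical).

LAMBDA = 2.45e6                  # latent heat of vaporization, J/kg

evaporative_fraction = 0.6       # latent heat flux / available energy
rn_daily = 14.0e6                # daily net radiation, J/m^2
g_daily = 0.0                    # daily soil heat flux, ~0 over a day

latent_heat_daily = evaporative_fraction * (rn_daily - g_daily)
et_daily_mm = latent_heat_daily / LAMBDA   # kg/m^2 of water == mm
print(f"Daily ET: {et_daily_mm:.2f} mm")
```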

  8. The Relative Performance of High Resolution Quantitative Precipitation Estimates in the Russian River Basin

    Science.gov (United States)

    Bytheway, J. L.; Biswas, S.; Cifelli, R.; Hughes, M.

    2017-12-01

    The Russian River carves a 110 mile path through Mendocino and Sonoma counties in western California, providing water for thousands of residents and acres of agriculture as well as a home for several species of endangered fish. The Russian River basin receives almost all of its precipitation during the October through March wet season, and the systems bringing this precipitation are often impacted by atmospheric river events as well as the complex topography of the region. This study will examine the performance of several high-resolution (hourly) precipitation products and forecasts over the 2015-2016 and 2016-2017 wet seasons. Comparisons of event total rainfall as well as hourly rainfall will be performed using 1) rain gauges operated by the National Oceanic and Atmospheric Administration (NOAA) Physical Sciences Division (PSD), 2) products from the Multi-Radar/Multi-Sensor (MRMS) QPE dataset, and 3) quantitative precipitation forecasts from the High Resolution Rapid Refresh (HRRR) model at 1, 3, 6, and 12 hour lead times. Further attention will be given to cases or locations representing large disparities between the estimates.

  9. Quantitative analysis of low-density SNP data for parentage assignment and estimation of family contributions to pooled samples.

    Science.gov (United States)

    Henshall, John M; Dierens, Leanne; Sellars, Melony J

    2014-09-02

    While much attention has focused on the development of high-density single nucleotide polymorphism (SNP) assays, the costs of developing and running low-density assays have fallen dramatically. This makes it feasible to develop and apply SNP assays for agricultural species beyond the major livestock species. Although low-cost low-density assays may not have the accuracy of the high-density assays widely used in human and livestock species, we show that when combined with statistical analysis approaches that use quantitative instead of discrete genotypes, their utility may be improved. The data used in this study are from a 63-SNP marker Sequenom® iPLEX Platinum panel for the Black Tiger shrimp, for which high-density SNP assays are not currently available. For quantitative genotypes that could be estimated, in 5% of cases the most likely genotype for an individual at a SNP had a probability of less than 0.99. Matrix formulations of maximum likelihood equations for parentage assignment were developed for the quantitative genotypes and also for discrete genotypes perturbed by an assumed error term. Assignment rates that were based on maximum likelihood with quantitative genotypes were similar to those based on maximum likelihood with perturbed genotypes but, for more than 50% of cases, the two methods resulted in individuals being assigned to different families. Treating genotypes as quantitative values allows the same analysis framework to be used for pooled samples of DNA from multiple individuals. Resulting correlations between allele frequency estimates from pooled DNA and individual samples were consistently greater than 0.90, and as high as 0.97 for some pools. Estimates of family contributions to the pools based on quantitative genotypes in pooled DNA had a correlation of 0.85 with estimates of contributions from DNA-derived pedigree. Even with low numbers of SNPs of variable quality, parentage testing and family assignment from pooled samples are
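    The quantitative-genotype idea can be illustrated in a few lines: if each genotype is an expected allele dose in [0, 2] rather than a discrete call, allele frequencies follow from the mean dose, and the same arithmetic extends to pooled DNA via signal ratios. A sketch with hypothetical values (not the authors' likelihood formulation):

```python
import numpy as np

# Quantitative genotypes: expected dose of allele B per individual,
# e.g. a posterior mean in [0, 2] rather than a discrete call
# (hypothetical values for one SNP).
quant_genotypes = np.array([0.02, 0.97, 1.95, 1.02, 0.06, 1.99])

# Allele frequency estimate is simply the mean dose divided by 2
freq_individuals = quant_genotypes.mean() / 2.0

# For pooled DNA, an analogous per-SNP signal-ratio estimate
# (hypothetical assay intensities for alleles B and A)
pool_signal_b, pool_signal_a = 0.62, 0.38
freq_pool = pool_signal_b / (pool_signal_a + pool_signal_b)

print(f"Individual-based: {freq_individuals:.3f}, "
      f"pool-based: {freq_pool:.3f}")
```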

  10. Using the ''Epiquant'' automatic analyzer for quantitative estimation of grain size

    Energy Technology Data Exchange (ETDEWEB)

    Tsivirko, E I; Ulitenko, A N; Stetsenko, I A; Burova, N M [Zaporozhskij Mashinostroitel' nyj Inst. (Ukrainian SSR)

    1979-01-01

    The possibility of applying the ''Epiquant'' automatic analyzer to quantitatively estimate austenite grain in 18Kh2N4VA steel has been investigated. Austenite grain boundaries were revealed using the methods of cementation, oxidation and etching of the grain boundaries. The average linear grain size over a traverse length of 15 mm was determined from the total length of the grain intersection line and the number of intersections with grain boundaries. It is shown that the ''Epiquant'' analyzer ensures quantitative estimation of austenite grain size with a relative error of 2-4%.

  11. Parallel factor ChIP provides essential internal control for quantitative differential ChIP-seq.

    Science.gov (United States)

    Guertin, Michael J; Cullen, Amy E; Markowetz, Florian; Holding, Andrew N

    2018-04-17

    A key challenge in quantitative ChIP combined with high-throughput sequencing (ChIP-seq) is the normalization of data in the presence of genome-wide changes in occupancy. Analysis-based normalization methods were developed for transcriptomic data and these are dependent on the underlying assumption that total transcription does not change between conditions. For genome-wide changes in transcription factor (TF) binding, these assumptions do not hold true. The challenges in normalization are confounded by experimental variability during sample preparation, processing and recovery. We present a novel normalization strategy utilizing an internal standard of unchanged peaks for reference. Our method can be readily applied to monitor genome-wide changes by ChIP-seq that are otherwise lost or misrepresented through analytical normalization. We compare our approach to normalization by total read depth and two alternative methods that utilize external experimental controls to study TF binding. We successfully resolve the key challenges in quantitative ChIP-seq analysis and demonstrate its application by monitoring the loss of Estrogen Receptor-alpha (ER) binding upon fulvestrant treatment, ER binding in response to estradiol, ER-mediated change in H4K12 acetylation and profiling ER binding in patient-derived xenografts. This is supported by an adaptable pipeline to normalize and quantify differential TF binding genome-wide and generate metrics for differential binding at individual sites.
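    The internal-standard idea reduces to computing a scale factor from peaks assumed unchanged between conditions and applying it genome-wide. A toy sketch with hypothetical read counts (the published pipeline is not reproduced):

```python
import numpy as np

# Read counts in a reference set of peaks assumed unchanged between
# conditions (hypothetical values); genome-wide signal may change.
control_cond1 = np.array([520, 310, 880, 450, 290])
control_cond2 = np.array([260, 150, 430, 230, 140])

# Per-condition scale factor from the unchanged internal-standard peaks
scale = np.median(control_cond1 / control_cond2)

# Apply to a peak of interest: an apparently unchanged peak is revealed
# as a ~2-fold gain once recovery differences are corrected
peak_cond1, peak_cond2 = 600, 610
print(f"Unnormalized ratio: {peak_cond2 / peak_cond1:.2f}, "
      f"normalized ratio: {scale * peak_cond2 / peak_cond1:.2f}")
```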

  12. Do group-specific equations provide the best estimates of stature?

    Science.gov (United States)

    Albanese, John; Osley, Stephanie E; Tuck, Andrew

    2016-04-01

    An estimate of stature can be used by a forensic anthropologist with the preliminary identification of an unknown individual when human skeletal remains are recovered. Fordisc is a computer application that can be used to estimate stature; like many other methods it requires the user to assign an unknown individual to a specific group defined by sex, race/ancestry, and century of birth before an equation is applied. The assumption is that a group-specific equation controls for group differences and should provide the best results most often. In this paper we assess the utility and benefits of using group-specific equations to estimate stature using Fordisc. Using the maximum length of the humerus and the maximum length of the femur from individuals with documented stature, we address the question: do sex-, race/ancestry- and century-specific stature equations provide the best results when estimating stature? The data for our sample of 19th Century White males (n=28) were entered into Fordisc and stature was estimated using 22 different equation options for a total of 616 trials: 19th and 20th Century Black males, 19th and 20th Century Black females, 19th and 20th Century White females, 19th and 20th Century White males, 19th and 20th Century any, and 20th Century Hispanic males. The equations were assessed for utility in any one case (how many times the estimated range bracketed the documented stature) and in aggregate using 1-way ANOVA and other approaches. The group-specific equation that should have provided the best results was outperformed by several other equations for both the femur and humerus. These results suggest that group-specific equations do not provide better results for estimating stature while at the same time being more difficult to apply, because an unknown must be allocated to a given group before stature can be estimated. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
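    For readers unfamiliar with how such equations work, stature estimation is a linear regression on a long-bone length with an associated prediction interval. The coefficients below follow the widely used Trotter and Gleser equation for White males (stature_cm = 2.38 × femur_cm + 61.41, SE ≈ 3.27 cm), not the specific Fordisc options tested in the paper:

```python
# Stature from maximum femur length via a classic linear regression.
# Coefficients follow the widely used Trotter & Gleser equation for
# White males; other groups would use different coefficients.

def estimate_stature(femur_cm, a=2.38, b=61.41, se=3.27):
    """Return a point estimate (cm) and an approximate 95% interval."""
    point = a * femur_cm + b
    return point, (point - 1.96 * se, point + 1.96 * se)

stature, interval = estimate_stature(46.0)   # hypothetical femur length
print(f"Point estimate: {stature:.1f} cm, "
      f"95% interval: {interval[0]:.1f}-{interval[1]:.1f} cm")
```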

  13. Software project estimation the fundamentals for providing high quality information to decision makers

    CERN Document Server

    Abran, Alain

    2015-01-01

    Software projects are often late and over-budget, and this leads to major problems for software customers. Clearly, there is a serious issue in estimating a realistic software project budget. Furthermore, generic estimation models cannot be trusted to provide credible estimates for projects as complex as software projects. This book presents a number of examples using data collected over the years from various organizations building software. It also presents an overview of the not-for-profit organization which collects data on software projects, the International Software Benchmarking Standards Group.

  14. Noninvasive IDH1 mutation estimation based on a quantitative radiomics approach for grade II glioma.

    Science.gov (United States)

    Yu, Jinhua; Shi, Zhifeng; Lian, Yuxi; Li, Zeju; Liu, Tongtong; Gao, Yuan; Wang, Yuanyuan; Chen, Liang; Mao, Ying

    2017-08-01

    The status of isocitrate dehydrogenase 1 (IDH1) is highly correlated with the development, treatment and prognosis of glioma. We explored a noninvasive method to reveal IDH1 status by using a quantitative radiomics approach for grade II glioma. A primary cohort consisting of 110 patients pathologically diagnosed with grade II glioma was retrospectively studied. The radiomics method developed in this paper includes image segmentation, high-throughput feature extraction, radiomics sequencing, feature selection and classification. Using the leave-one-out cross-validation (LOOCV) method, the classification result was compared with the real IDH1 situation from Sanger sequencing. Another independent validation cohort containing 30 patients was utilised to further test the method. A total of 671 high-throughput features were extracted and quantized. 110 features were selected by improved genetic algorithm. In LOOCV, the noninvasive IDH1 status estimation based on the proposed approach presented an estimation accuracy of 0.80, sensitivity of 0.83 and specificity of 0.74. Area under the receiver operating characteristic curve reached 0.86. Further validation on the independent cohort of 30 patients produced similar results. Radiomics is a potentially useful approach for estimating IDH1 mutation status noninvasively using conventional T2-FLAIR MRI images. The estimation accuracy could potentially be improved by using multiple imaging modalities. • Noninvasive IDH1 status estimation can be obtained with a radiomics approach. • Automatic and quantitative processes were established for noninvasive biomarker estimation. • High-throughput MRI features are highly correlated to IDH1 states. • Area under the ROC curve of the proposed estimation method reached 0.86.
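    The estimation loop described (feature selection plus classification inside leave-one-out cross-validation) can be sketched generically; here the paper's genetic-algorithm selector is replaced by a simple univariate filter for illustration, and the data are random, so the expected accuracy is about 0.5:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical radiomics matrix: 110 patients x 671 features, with
# binary IDH1 labels; in practice these come from segmented MRI.
rng = np.random.default_rng(42)
X = rng.standard_normal((110, 671))
y = rng.integers(0, 2, 110)

# Selection inside the pipeline keeps it within each training fold,
# avoiding information leakage into the held-out case.
model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=110),
                      LogisticRegression(max_iter=1000))
acc = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy on random data: {acc:.2f}")  # ~0.5 by construction
```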

  15. Distribution and Quantitative Estimates of Variant Creutzfeldt-Jakob Disease Prions in Tissues of Clinical and Asymptomatic Patients.

    Science.gov (United States)

    Douet, Jean Y; Lacroux, Caroline; Aron, Naima; Head, Mark W; Lugan, Séverine; Tillier, Cécile; Huor, Alvina; Cassard, Hervé; Arnold, Mark; Beringue, Vincent; Ironside, James W; Andréoletti, Olivier

    2017-06-01

    In the United Kingdom, ≈1 of 2,000 persons could be infected with variant Creutzfeldt-Jakob disease (vCJD). Therefore, risk of transmission of vCJD by medical procedures remains a major concern for public health authorities. In this study, we used in vitro amplification of prions by protein misfolding cyclic amplification (PMCA) to estimate the distribution and level of the vCJD agent in 21 tissues from 4 patients who died of clinical vCJD and from 1 asymptomatic person with vCJD. PMCA identified major levels of vCJD prions in a range of tissues, including liver, salivary gland, kidney, lung, and bone marrow. Bioassays confirmed that the quantitative estimates of vCJD prion accumulation provided by PMCA are indicative of vCJD infectivity levels in tissues. Findings provide critical data for the design of measures to minimize the risk for iatrogenic transmission of vCJD.

  16. FPGA-based fused smart-sensor for tool-wear area quantitative estimation in CNC machine inserts.

    Science.gov (United States)

    Trejo-Hernandez, Miguel; Osornio-Rios, Roque Alfredo; de Jesus Romero-Troncoso, Rene; Rodriguez-Donate, Carlos; Dominguez-Gonzalez, Aurelio; Herrera-Ruiz, Gilberto

    2010-01-01

    Manufacturing processes are of great relevance nowadays, when there is constant demand for better productivity with high quality at low cost. The contribution of this work is the development of an FPGA-based fused smart-sensor to improve the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier and a 3-axis accelerometer. Results from experimentation show that fusing both parameters yields three times better accuracy than that obtained from the current or vibration signals used individually.

  17. A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system

    Science.gov (United States)

    Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob

    2013-01-01

    Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541

  18. Epithelium percentage estimation facilitates epithelial quantitative protein measurement in tissue specimens.

    Science.gov (United States)

    Chen, Jing; Toghi Eshghi, Shadi; Bova, George Steven; Li, Qing Kay; Li, Xingde; Zhang, Hui

    2013-12-01

    The rapid advancement of high-throughput tools for quantitative measurement of proteins has demonstrated the potential for the identification of proteins associated with cancer. However, quantitative results on cancer tissue specimens are usually confounded by tissue heterogeneity, e.g. regions with cancer usually have significantly higher epithelium content yet lower stromal content. It is therefore necessary to develop a tool to facilitate the interpretation of the results of protein measurements in tissue specimens. Epithelial cell adhesion molecule (EpCAM) and cathepsin L (CTSL) are two epithelial proteins whose expression in normal and tumorous prostate tissues was confirmed by measuring staining intensity with immunohistochemical (IHC) staining. The expression of these proteins was measured by ELISA in protein extracts from OCT-embedded frozen prostate tissues. To eliminate the influence of tissue heterogeneity on epithelial protein quantification measured by ELISA, a color-based segmentation method was developed in-house for estimation of epithelium content using H&E histology slides from the same prostate tissues, and the estimated epithelium percentage was used to normalize the ELISA results. The epithelium contents of the same slides were also estimated by a pathologist and used to normalize the ELISA results, and the computer-based results were compared with the pathologist's reading. We found that both EpCAM and CTSL levels, as measured by the ELISA assays themselves, were greatly affected by the epithelium content of the tissue specimens. Without adjusting for epithelium percentage, both EpCAM and CTSL levels appeared significantly higher in tumor tissues than in normal tissues, with a p value less than 0.001. However, after normalization by the epithelium percentage, ELISA measurements of both EpCAM and CTSL were in agreement with IHC staining results, showing a significant increase only in EpCAM and no difference in CTSL expression in cancer tissues. These results indicate that estimating epithelium percentage facilitates the interpretation of quantitative protein measurements in heterogeneous tissue specimens.
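
    A minimal sketch of the normalization step described above, assuming hypothetical ELISA readings and epithelium-percentage estimates; the names and values are illustrative, not taken from the paper's data.

```python
# Normalize ELISA protein measurements by estimated epithelium content.
# All values are hypothetical stand-ins for the paper's measurements.

elisa_ng_per_mg = {"tumor": 48.0, "normal": 21.0}       # raw EpCAM by ELISA
epithelium_fraction = {"tumor": 0.80, "normal": 0.35}   # from H&E segmentation

for tissue, raw in elisa_ng_per_mg.items():
    normalized = raw / epithelium_fraction[tissue]  # expression per unit epithelium
    print(f"{tissue}: raw={raw:.1f}, normalized={normalized:.1f} ng/mg epithelium")
```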

  19. Speech graphs provide a quantitative measure of thought disorder in psychosis.

    Science.gov (United States)

    Mota, Natalia B; Vasconcelos, Nivaldo A P; Lemos, Nathalia; Pieretti, Ana C; Kinouchi, Osame; Cecchi, Guillermo A; Copelli, Mauro; Ribeiro, Sidarta

    2012-01-01

    Psychosis has various causes, including mania and schizophrenia. Since the differential diagnosis of psychosis is exclusively based on subjective assessments of oral interviews with patients, an objective quantification of the speech disturbances that characterize mania and schizophrenia is in order. In principle, such quantification could be achieved by the analysis of speech graphs. A graph represents a network with nodes connected by edges; in speech graphs, nodes correspond to words and edges correspond to semantic and grammatical relationships. To quantify speech differences related to psychosis, interviews with schizophrenics, manics and normal subjects were recorded and represented as graphs. Manics scored significantly higher than schizophrenics in ten graph measures. Psychopathological symptoms such as logorrhea, poor speech, and flight of thoughts were grasped by the analysis even when verbosity differences were discounted. Binary classifiers based on speech graph measures sorted schizophrenics from manics with up to 93.8% of sensitivity and 93.7% of specificity. In contrast, sorting based on the scores of two standard psychiatric scales (BPRS and PANSS) reached only 62.5% of sensitivity and specificity. The results demonstrate that alterations of the thought process manifested in the speech of psychotic patients can be objectively measured using graph-theoretical tools, developed to capture specific features of the normal and dysfunctional flow of thought, such as divergence and recurrence. The quantitative analysis of speech graphs is not redundant with standard psychometric scales but rather complementary, as it yields a very accurate sorting of schizophrenics and manics. Overall, the results point to automated psychiatric diagnosis based not on what is said, but on how it is said.
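
    A minimal sketch of the speech-graph idea, using a toy transcript; here nodes are words and edges link consecutive words, which is one simple choice of relationship for illustration rather than the authors' exact graph construction.

```python
import networkx as nx

# Toy transcript; in the study, recorded interviews were transcribed and graphed.
words = "i went home and then i went to the market and then home".split()

G = nx.DiGraph()
G.add_edges_from(zip(words, words[1:]))  # edge between each pair of consecutive words

# A few graph measures of the kind compared across diagnostic groups.
print("nodes:", G.number_of_nodes())
print("edges:", G.number_of_edges())
print("largest strongly connected component:",
      len(max(nx.strongly_connected_components(G), key=len)))
```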

  20. Speech graphs provide a quantitative measure of thought disorder in psychosis.

    Directory of Open Access Journals (Sweden)

    Natalia B Mota

    Full Text Available BACKGROUND: Psychosis has various causes, including mania and schizophrenia. Since the differential diagnosis of psychosis is exclusively based on subjective assessments of oral interviews with patients, an objective quantification of the speech disturbances that characterize mania and schizophrenia is in order. In principle, such quantification could be achieved by the analysis of speech graphs. A graph represents a network with nodes connected by edges; in speech graphs, nodes correspond to words and edges correspond to semantic and grammatical relationships. METHODOLOGY/PRINCIPAL FINDINGS: To quantify speech differences related to psychosis, interviews with schizophrenics, manics and normal subjects were recorded and represented as graphs. Manics scored significantly higher than schizophrenics in ten graph measures. Psychopathological symptoms such as logorrhea, poor speech, and flight of thoughts were grasped by the analysis even when verbosity differences were discounted. Binary classifiers based on speech graph measures sorted schizophrenics from manics with up to 93.8% of sensitivity and 93.7% of specificity. In contrast, sorting based on the scores of two standard psychiatric scales (BPRS and PANSS) reached only 62.5% of sensitivity and specificity. CONCLUSIONS/SIGNIFICANCE: The results demonstrate that alterations of the thought process manifested in the speech of psychotic patients can be objectively measured using graph-theoretical tools, developed to capture specific features of the normal and dysfunctional flow of thought, such as divergence and recurrence. The quantitative analysis of speech graphs is not redundant with standard psychometric scales but rather complementary, as it yields a very accurate sorting of schizophrenics and manics. Overall, the results point to automated psychiatric diagnosis based not on what is said, but on how it is said.

  1. A direct method for estimating the alpha/beta ratio from quantitative dose-response data

    International Nuclear Information System (INIS)

    Stuschke, M.

    1989-01-01

    A one-step optimization method based on a least squares fit of the linear quadratic model to quantitative tissue response data after fractionated irradiation is proposed. Suitable end-points that can be analysed by this method are growth delay, host survival and quantitative biochemical or clinical laboratory data. The functional dependence between the transformed dose and the measured response is approximated by a polynomial. The method allows for the estimation of the alpha/beta ratio and its confidence limits from all observed responses of the different fractionation schedules. Censored data can be included in the analysis. A method to test the appropriateness of the fit is presented. A computer simulation illustrates the method and its accuracy, as exemplified by the growth delay end point. A comparison with a fit of the linear quadratic model to interpolated isoeffect doses shows the advantages of the direct method. (orig./HP)
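
    A minimal sketch of a direct least-squares fit of the linear-quadratic model, assuming hypothetical growth-delay data: the response is modeled as proportional to the total LQ effect n(αd + βd²) of n fractions of dose d, which follows the standard LQ formalism rather than the paper's exact polynomial link. The data below were constructed to be consistent with α/β ≈ 10 Gy.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical fractionation schedules: n fractions of dose d (Gy).
n = np.array([1, 2, 4, 8, 16], dtype=float)
d = np.array([10.0, 6.0, 3.5, 2.0, 1.2])

# Hypothetical measured growth delays (days), roughly iso-effective schedules.
delay = np.array([20.1, 19.0, 19.0, 19.3, 21.4])

def lq_effect(X, a, b):
    n, d = X
    return n * (a * d + b * d**2)  # a, b absorb the proportionality constant

(a, b), _ = curve_fit(lq_effect, (n, d), delay, p0=(0.5, 0.05))
print(f"alpha/beta ratio = {a / b:.1f} Gy")  # the constant cancels in the ratio
```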

  2. Store turnover as a predictor of food and beverage provider turnover and associated dietary intake estimates in very remote Indigenous communities.

    Science.gov (United States)

    Wycherley, Thomas; Ferguson, Megan; O'Dea, Kerin; McMahon, Emma; Liberato, Selma; Brimblecombe, Julie

    2016-12-01

    Determine how very-remote Indigenous community (RIC) food and beverage (F&B) turnover quantities and associated dietary intake estimates derived from stores only compare with values derived from all community F&B providers. F&B turnover quantity and associated dietary intake estimates (energy, micro/macronutrients and major contributing food types) were derived from 12 months of transaction data from all F&B providers in three RICs (NT, Australia). F&B turnover quantities and dietary intake estimates from stores only (plus only the primary store in multiple-store communities) were expressed as a proportion of complete F&B provider turnover values. Food types and macronutrient distribution (%E) estimates were quantitatively compared. Combined-stores F&B turnover accounted for the majority of F&B quantity (98.1%) and absolute dietary intake estimates (energy [97.8%], macronutrients [≥96.7%] and micronutrients [≥83.8%]). Macronutrient distribution estimates from combined stores and from only the primary store closely aligned with complete provider estimates (≤0.9% absolute). Food types were similar using combined stores, primary store or complete provider turnover. Evaluating combined-stores F&B turnover represents an efficient method to estimate total F&B turnover quantity and associated dietary intake in RICs. In multiple-store communities, evaluating only primary store F&B turnover provides an efficient estimate of macronutrient distribution and major food types. © 2016 Public Health Association of Australia.

  3. Estimates of economic burden of providing inpatient care in childhood rotavirus gastroenteritis from Malaysia.

    Science.gov (United States)

    Lee, Way Seah; Poo, Muhammad Izzuddin; Nagaraj, Shyamala

    2007-12-01

    To estimate the cost of an episode of inpatient care and the economic burden of hospitalisation for childhood rotavirus gastroenteritis (GE) in Malaysia. A 12-month prospective, hospital-based study of children less than 14 years of age with rotavirus GE, admitted to the University of Malaya Medical Centre, Kuala Lumpur, was conducted in 2002. Data on human resource expenditure, costs of investigations, treatment and consumables were collected. Published estimates of rotavirus disease incidence in Malaysia were searched. The economic burden of hospital care for rotavirus GE in Malaysia was estimated by multiplying the cost of each episode of hospital admission for rotavirus GE by the national rotavirus incidence in Malaysia. In 2002, the per capita health expenditure by the Malaysian Government was US$71.47. Rotavirus was positive in 85 (22%) of the 393 patients with acute GE admitted during the study period. The median cost of providing inpatient care for an episode of rotavirus GE was US$211.91 (range US$68.50-880.60). The estimated average number of children hospitalised for rotavirus GE in Malaysia (1999-2000) was 8571 annually. The financial burden of providing inpatient care for rotavirus GE in Malaysian children was estimated to be US$1.8 million (range US$0.6 million-7.5 million) annually. The cost of providing inpatient care for childhood rotavirus GE in Malaysia was estimated to be US$1.8 million annually. The financial burden of rotavirus disease would be higher if the costs of outpatient visits, non-medical costs and societal costs were included.
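
    A minimal worked check of the burden arithmetic reported above, using the paper's own published figures (median cost per episode and estimated annual admissions).

```python
# Burden = cost per episode x annual number of hospitalised cases.
median_cost_usd = 211.91   # median cost per admission (US$)
annual_cases = 8571        # estimated hospitalisations per year

burden = median_cost_usd * annual_cases
print(f"Estimated annual burden: US${burden / 1e6:.1f} million")  # ~US$1.8 million
```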

  4. Analytical performance of refractometry in quantitative estimation of isotopic concentration of heavy water in nuclear reactor

    International Nuclear Information System (INIS)

    Dhole, K.; Ghosh, S.; Datta, A.; Tripathy, M.K.; Bose, H.; Roy, M.; Tyagi, A.K.

    2011-01-01

    The method of refractometry has been investigated for the quantitative estimation of the isotopic concentration of D2O (heavy water) in a simulated water sample. The viability of refractometry as an excellent analytical technique for rapid and non-invasive determination of D2O concentration in water samples has been demonstrated. The temperature of the samples was precisely controlled to eliminate the effect of temperature fluctuation on refractive index measurement. Calibration by this technique exhibited a reasonable analytical response over a wide range (1-100%) of D2O concentration. (author)
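
    A minimal sketch of one way such a calibration could be used, assuming hypothetical refractive-index readings at known D2O concentrations; a simple linear fit inverts a measured index into an estimated concentration. The numbers are invented for illustration only.

```python
import numpy as np

# Hypothetical calibration data: refractive index vs. D2O concentration (%).
conc = np.array([1, 10, 25, 50, 75, 100], dtype=float)
ri = np.array([1.33296, 1.33253, 1.33181, 1.33062, 1.32943, 1.32824])

slope, intercept = np.polyfit(conc, ri, 1)  # linear calibration curve

def estimate_conc(measured_ri):
    return (measured_ri - intercept) / slope  # invert the calibration

print(f"estimated concentration: {estimate_conc(1.33100):.1f}% D2O")
```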

  5. Quantitative estimation of hemorrhage in chronic subdural hematoma using the 51Cr erythrocyte labeling method

    International Nuclear Information System (INIS)

    Ito, H.; Yamamoto, S.; Saito, K.; Ikeda, K.; Hisada, K.

    1987-01-01

    Red cell survival studies using an infusion of chromium-51-labeled erythrocytes were performed to quantitatively estimate hemorrhage in the chronic subdural hematoma cavity of 50 patients. The amount of hemorrhage was determined during craniotomy. Between 6 and 24 hours after infusion of the labeled red cells, hemorrhage accounted for a mean of 6.7% of the hematoma content, indicating continuous or intermittent hemorrhage into the cavity. The clinical state of the patients and the density of the chronic subdural hematoma on computerized tomography scans were related to the amount of hemorrhage. Chronic subdural hematomas with a greater amount of hemorrhage frequently consisted of clots rather than fluid

  6. Reproducibility of CSF quantitative culture methods for estimating rate of clearance in cryptococcal meningitis.

    Science.gov (United States)

    Dyal, Jonathan; Akampurira, Andrew; Rhein, Joshua; Morawski, Bozena M; Kiggundu, Reuben; Nabeta, Henry W; Musubire, Abdu K; Bahr, Nathan C; Williams, Darlisha A; Bicanic, Tihana; Larsen, Robert A; Meya, David B; Boulware, David R

    2016-05-01

    Quantitative cerebrospinal fluid (CSF) cultures provide a measure of disease severity in cryptococcal meningitis. The fungal clearance rate by quantitative cultures has become a primary endpoint for phase II clinical trials. This study determined the inter-assay accuracy of three different quantitative culture methodologies. Among 91 participants with meningitis symptoms in Kampala, Uganda, during August-November 2013, 305 CSF samples were prospectively collected from patients at multiple time points during treatment. Samples were simultaneously cultured by three methods: (1) the St. George's method, using a 100 mcl input volume of CSF with five 1:10 serial dilutions; (2) the AIDS Clinical Trials Group (ACTG) method, using 1000, 100, and 10 mcl input volumes and two 1:100 dilutions with 100 and 10 mcl input volume per dilution on seven agar plates; and (3) a 10 mcl calibrated loop of undiluted and 1:100 diluted CSF (loop). Quantitative culture values did not statistically differ between the St. George and ACTG methods (P = .09) but did differ between the St. George and 10 mcl loop methods; correlation between methods was high (r ≥ 0.88). For detecting sterility, the ACTG method had the highest negative predictive value of 97% (91% St. George, 60% loop), but the ACTG method had occasional (∼10%) difficulties in quantification due to colony clumping. For CSF clearance rate, the St. George and ACTG methods did not differ overall (mean -0.05 ± 0.07 log10 CFU/ml/day; P = .14) at the group level; however, individual-level clearance varied. The St. George and ACTG quantitative CSF culture methods produced comparable but not identical results. Quantitative cultures can inform treatment management strategies. © The Author 2016. Published by Oxford University Press on behalf of The International Society for Human and Animal Mycology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
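
    A minimal sketch of how quantitative-culture counts translate into CFU/ml and a clearance rate, assuming hypothetical colony counts; the clearance rate is the slope of log10 CFU/ml against day of treatment, in the spirit of the trial endpoint described above.

```python
import numpy as np

def cfu_per_ml(colonies, plated_volume_ml, dilution_factor):
    """Back-calculate CFU/ml from a colony count on a single plate."""
    return colonies * dilution_factor / plated_volume_ml

# Hypothetical counts at days 0, 3, 7, and 14 of treatment (0.1 ml plated).
days = np.array([0, 3, 7, 14])
cfu = np.array([
    cfu_per_ml(150, 0.1, 1000),  # day 0, 1:1000 dilution
    cfu_per_ml(90, 0.1, 100),    # day 3
    cfu_per_ml(40, 0.1, 10),     # day 7
    cfu_per_ml(12, 0.1, 1),      # day 14, undiluted plate
])

slope = np.polyfit(days, np.log10(cfu), 1)[0]
print(f"clearance rate: {slope:.2f} log10 CFU/ml/day")
```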

  7. Psychological impact of providing women with personalised 10-year breast cancer risk estimates.

    Science.gov (United States)

    French, David P; Southworth, Jake; Howell, Anthony; Harvie, Michelle; Stavrinos, Paula; Watterson, Donna; Sampson, Sarah; Evans, D Gareth; Donnelly, Louise S

    2018-05-08

    The Predicting Risk of Cancer at Screening (PROCAS) study estimated 10-year breast cancer risk for 53,596 women attending the NHS Breast Screening Programme. The present study, nested within the PROCAS study, aimed to assess the psychological impact of receiving breast cancer risk estimates, based on: (a) the Tyrer-Cuzick (T-C) algorithm including breast density or (b) T-C including breast density plus single-nucleotide polymorphisms (SNPs), versus (c) comparison women awaiting results. A sample of 2138 women from the PROCAS study was stratified by testing groups: T-C only, T-C(+SNPs) and comparison women; and by 10-year risk estimates received: 'moderate' (5-7.99%), 'average' (2-4.99%) or 'below average' (<1.99%) risk. Postal questionnaires were returned by 765 (36%) women. Overall state anxiety and cancer worry were low, and similar for women in the T-C only and T-C(+SNPs) groups. Women in both the T-C only and T-C(+SNPs) groups showed lower state anxiety but slightly higher cancer worry than comparison women awaiting results. Risk information had no consistent effects on intentions to change behaviour. Most women were satisfied with the information provided. There was considerable variation in understanding. No major harms of providing women with 10-year breast cancer risk estimates were detected. Research to establish the feasibility of risk-stratified breast screening is warranted.

  8. Health Impacts of Increased Physical Activity from Changes in Transportation Infrastructure: Quantitative Estimates for Three Communities

    Science.gov (United States)

    2015-01-01

    Recently, two quantitative tools have emerged for predicting the health impacts of projects that change population physical activity: the Health Economic Assessment Tool (HEAT) and Dynamic Modeling for Health Impact Assessment (DYNAMO-HIA). HEAT has been used to support health impact assessments of transportation infrastructure projects, but DYNAMO-HIA has not been previously employed for this purpose nor have the two tools been compared. To demonstrate the use of DYNAMO-HIA for supporting health impact assessments of transportation infrastructure projects, we employed the model in three communities (urban, suburban, and rural) in North Carolina. We also compared DYNAMO-HIA and HEAT predictions in the urban community. Using DYNAMO-HIA, we estimated benefit-cost ratios of 20.2 (95% C.I.: 8.7–30.6), 0.6 (0.3–0.9), and 4.7 (2.1–7.1) for the urban, suburban, and rural projects, respectively. For a 40-year time period, the HEAT predictions of deaths avoided by the urban infrastructure project were three times as high as DYNAMO-HIA's predictions due to HEAT's inability to account for changing population health characteristics over time. Quantitative health impact assessment coupled with economic valuation is a powerful tool for integrating health considerations into transportation decision-making. However, to avoid overestimating benefits, such quantitative HIAs should use dynamic, rather than static, approaches. PMID:26504832

  9. Quantitative estimation of carbonation and chloride penetration in reinforced concrete by laser-induced breakdown spectroscopy

    Science.gov (United States)

    Eto, Shuzo; Matsuo, Toyofumi; Matsumura, Takuro; Fujii, Takashi; Tanaka, Masayoshi Y.

    2014-11-01

    The penetration profile of chlorine in a reinforced concrete (RC) specimen was determined by laser-induced breakdown spectroscopy (LIBS). The concrete core was prepared from RC beams with cracking damage induced by bending load and salt water spraying. LIBS was performed using a specimen that was obtained by splitting the concrete core, and the line scan of laser pulses gave the two-dimensional emission intensity profiles of 100 × 80 mm² within one hour. The two-dimensional profile of the emission intensity suggests that the presence of the crack had less effect on the emission intensity when the measurement interval was larger than the crack width. The chlorine emission spectrum was measured without using the buffer gas, which is usually used for chlorine measurement, by collinear double-pulse LIBS. The apparent diffusion coefficient, which is one of the most important parameters for chloride penetration in concrete, was estimated using the depth profile of chlorine emission intensity and Fick's law. The carbonation depth was estimated on the basis of the relationship between carbon and calcium emission intensities. When the carbon emission intensity was statistically higher than the calcium emission intensity at the measurement point, we determined that the point was carbonated. The estimation results were consistent with the spraying test results using phenolphthalein solution. These results suggest that the quantitative estimation by LIBS of carbonation depth and chloride penetration can be performed simultaneously.
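
    A minimal sketch of estimating an apparent diffusion coefficient from a depth profile, assuming hypothetical chlorine intensities; the profile is fit to the standard erfc solution of Fick's second law for a constant surface concentration, C(x,t) = C_s · erfc(x / (2·sqrt(D·t))). Exposure time, intensities, and the resulting coefficient are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

t = 3.0 * 365 * 24 * 3600  # hypothetical exposure time: 3 years in seconds

def chloride_profile(x_mm, c_s, d12):
    D = d12 * 1e-12          # fit D in units of 1e-12 m^2/s for better scaling
    x = x_mm / 1000.0        # depth in metres
    return c_s * erfc(x / (2.0 * np.sqrt(D * t)))

# Hypothetical depth profile of chlorine emission intensity (arbitrary units).
depth_mm = np.array([0, 5, 10, 15, 20, 30, 40], dtype=float)
intensity = np.array([1.00, 0.81, 0.62, 0.45, 0.31, 0.12, 0.04])

(c_s, d12), _ = curve_fit(chloride_profile, depth_mm, intensity, p0=(1.0, 10.0))
print(f"apparent diffusion coefficient: {d12 * 1e-12:.1e} m^2/s")
```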

  10. Noninvasive IDH1 mutation estimation based on a quantitative radiomics approach for grade II glioma

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Jinhua [Fudan University, Department of Electronic Engineering, Shanghai (China); Computing and Computer-Assisted Intervention, Key Laboratory of Medical Imaging, Shanghai (China); Shi, Zhifeng; Chen, Liang; Mao, Ying [Fudan University, Department of Neurosurgery, Huashan Hospital, Shanghai (China); Lian, Yuxi; Li, Zeju; Liu, Tongtong; Gao, Yuan; Wang, Yuanyuan [Fudan University, Department of Electronic Engineering, Shanghai (China)

    2017-08-15

    The status of isocitrate dehydrogenase 1 (IDH1) is highly correlated with the development, treatment and prognosis of glioma. We explored a noninvasive method to reveal IDH1 status by using a quantitative radiomics approach for grade II glioma. A primary cohort consisting of 110 patients pathologically diagnosed with grade II glioma was retrospectively studied. The radiomics method developed in this paper includes image segmentation, high-throughput feature extraction, radiomics sequencing, feature selection and classification. Using the leave-one-out cross-validation (LOOCV) method, the classification result was compared with the real IDH1 situation from Sanger sequencing. Another independent validation cohort containing 30 patients was utilised to further test the method. A total of 671 high-throughput features were extracted and quantized. 110 features were selected by improved genetic algorithm. In LOOCV, the noninvasive IDH1 status estimation based on the proposed approach presented an estimation accuracy of 0.80, sensitivity of 0.83 and specificity of 0.74. Area under the receiver operating characteristic curve reached 0.86. Further validation on the independent cohort of 30 patients produced similar results. Radiomics is a potentially useful approach for estimating IDH1 mutation status noninvasively using conventional T2-FLAIR MRI images. The estimation accuracy could potentially be improved by using multiple imaging modalities. (orig.)

  11. Quantifying the Extent of Emphysema : Factors Associated with Radiologists' Estimations and Quantitative Indices of Emphysema Severity Using the ECLIPSE Cohort

    NARCIS (Netherlands)

    Gietema, Hester A.; Mueller, Nestor L.; Fauerbach, Paola V. Nasute; Sharma, Sanjay; Edwards, Lisa D.; Camp, Pat G.; Coxson, Harvey O.

    Rationale and Objectives: This study investigated what factors radiologists take into account when estimating emphysema severity and assessed quantitative computed tomography (CT) measurements of low attenuation areas. Materials and Methods: CT scans and spirometry were obtained on 1519 chronic obstructive pulmonary disease (COPD) patients enrolled in the ECLIPSE cohort.

  12. Can administrative health utilisation data provide an accurate diabetes prevalence estimate for a geographical region?

    Science.gov (United States)

    Chan, Wing Cheuk; Papaconstantinou, Dean; Lee, Mildred; Telfer, Kendra; Jo, Emmanuel; Drury, Paul L; Tobias, Martin

    2018-05-01

    To validate the New Zealand Ministry of Health (MoH) Virtual Diabetes Register (VDR) using longitudinal laboratory results and to develop an improved algorithm for estimating diabetes prevalence at a population level. The assigned diabetes status of individuals based on the 2014 version of the MoH VDR is compared to the diabetes status based on the laboratory results stored in the Auckland regional laboratory result repository (TestSafe) using the New Zealand diabetes diagnostic criteria. The existing VDR algorithm is refined by reviewing the sensitivity and positive predictive value of each of the VDR algorithm rules individually and in combination. The diabetes prevalence estimate based on the original 2014 MoH VDR was 17% higher (n = 108,505) than the corresponding TestSafe prevalence estimate (n = 92,707). Compared to the diabetes prevalence based on TestSafe, the original VDR has a sensitivity of 89%, specificity of 96%, positive predictive value of 76% and negative predictive value of 98%. The modified VDR algorithm improved the positive predictive value by 6.1% and the specificity by 1.4%, with modest reductions in sensitivity of 2.2% and negative predictive value of 0.3%. At an aggregated level the overall diabetes prevalence estimated by the modified VDR is 5.7% higher than the corresponding estimate based on TestSafe. The Ministry of Health Virtual Diabetes Register algorithm has been refined to provide a more accurate diabetes prevalence estimate at a population level. The comparison highlights the potential value of a national population long-term-condition register constructed from both laboratory results and administrative data. Copyright © 2018 Elsevier B.V. All rights reserved.
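
    A minimal sketch of the validation metrics used above, computed from a 2×2 confusion matrix; the counts are hypothetical, not the actual VDR/TestSafe figures.

```python
# Hypothetical confusion matrix: register status vs. laboratory-confirmed status.
tp, fp, fn, tn = 82_000, 26_000, 10_000, 900_000

sensitivity = tp / (tp + fn)  # true positive rate
specificity = tn / (tn + fp)  # true negative rate
ppv = tp / (tp + fp)          # positive predictive value
npv = tn / (tn + fn)          # negative predictive value

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
      f"PPV={ppv:.2f}, NPV={npv:.2f}")
```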

  13. The perspective of healthcare providers and patients on health literacy: a systematic review of the quantitative and qualitative studies.

    Science.gov (United States)

    Rajah, Retha; Ahmad Hassali, Mohamed Azmi; Jou, Lim Ching; Murugiah, Muthu Kumar

    2018-03-01

    Health literacy (HL) is a multifaceted concept, so understanding the perspectives of healthcare providers, patients, and the system is vital. This systematic review examines and synthesises the available studies on HL-related knowledge, attitude, practice, and perceived barriers. CINAHL and Medline (via EBSCOhost), Google Scholar, PubMed, ProQuest, Sage Journals, and Science Direct were searched. Both quantitative and/or qualitative studies in the English language were included. Intervention studies and studies focusing on HL assessment tools and the prevalence of low HL were excluded. The risk of bias was reduced by having two reviewers independently assess study eligibility and quality. A total of 30 studies were included, consisting of 19 quantitative, 9 qualitative, and 2 mixed-method studies. Of 17 studies, 13 reported deficient HL-related knowledge among healthcare providers and 1 among patients. Three studies showed a positive attitude of healthcare providers towards learning about HL. Another three studies demonstrated that patients feel shame about exposing their literacy and undergoing HL assessment. Common HL communication techniques that healthcare providers reported practicing were the use of everyday language, the teach-back method, and providing patients with reading materials and aids, while time constraints were the most frequently reported perceived barrier for both healthcare providers and patients. Significant gaps exist in HL knowledge among healthcare providers and patients that need immediate intervention, such as greater effort placed in creating a health system that provides opportunities for healthcare providers to learn about HL and for patients to access health information, taking into consideration their perceived barriers.

  14. A Novel Method of Quantitative Anterior Chamber Depth Estimation Using Temporal Perpendicular Digital Photography.

    Science.gov (United States)

    Zamir, Ehud; Kong, George Y X; Kowalski, Tanya; Coote, Michael; Ang, Ghee Soon

    2016-07-01

    We hypothesize that: (1) Anterior chamber depth (ACD) is correlated with the relative anteroposterior position of the pupillary image, as viewed from the temporal side. (2) Such a correlation may be used as a simple quantitative tool for estimation of ACD. Two hundred sixty-six phakic eyes had lateral digital photographs taken from the temporal side, perpendicular to the visual axis, and underwent optical biometry (Nidek AL scanner). The relative anteroposterior position of the pupillary image was expressed using the ratio between: (1) lateral photographic temporal limbus to pupil distance ("E") and (2) lateral photographic temporal limbus to cornea distance ("Z"). In the first chronological half of patients (Correlation Series), the E:Z ratio (EZR) was correlated with optical biometric ACD. The correlation equation was then used to predict ACD in the second half of patients (Prediction Series) and compared to their biometric ACD for agreement analysis. A strong linear correlation was found between EZR and ACD, R = -0.91, R² = 0.81. Bland-Altman analysis showed good agreement between the ACD predicted using this method and the optical biometric ACD. The mean error was -0.013 mm (range -0.377 to 0.336 mm), standard deviation 0.166 mm. The 95% limits of agreement were ±0.33 mm. Lateral digital photography and EZR calculation is a novel method to quantitatively estimate ACD, requiring minimal equipment and training. The EZ ratio may be employed in screening for angle closure glaucoma. It may also be helpful in outpatient medical clinic settings, where doctors need to judge the safety of topical or systemic pupil-dilating medications versus their risk of triggering acute angle closure glaucoma. Similarly, non-ophthalmologists may use it to estimate the likelihood of acute angle closure glaucoma in emergency presentations.
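
    A minimal sketch of the two-stage procedure described above, assuming hypothetical EZR and biometric ACD values: fit a line on a correlation series, predict ACD in a prediction series, then compute Bland-Altman limits of agreement. All numbers are invented for illustration.

```python
import numpy as np

# Hypothetical correlation series: EZR vs. biometric ACD (mm).
ezr_fit = np.array([0.30, 0.35, 0.40, 0.45, 0.50, 0.55])
acd_fit = np.array([3.60, 3.35, 3.05, 2.80, 2.55, 2.25])
slope, intercept = np.polyfit(ezr_fit, acd_fit, 1)  # negative slope, as R = -0.91

# Hypothetical prediction series: predict ACD and compare with biometry.
ezr_new = np.array([0.33, 0.42, 0.52])
acd_measured = np.array([3.45, 2.95, 2.40])
acd_predicted = slope * ezr_new + intercept

diff = acd_predicted - acd_measured
print(f"mean error {diff.mean():+.3f} mm, "
      f"95% limits of agreement ±{1.96 * diff.std(ddof=1):.2f} mm")
```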

  15. Application of short-wave infrared (SWIR) spectroscopy in quantitative estimation of clay mineral contents

    International Nuclear Information System (INIS)

    You, Jinfeng; Xing, Lixin; Pan, Jun; Meng, Tao; Liang, Liheng

    2014-01-01

    Clay minerals are significant constituents of soil which are necessary for life. This paper studied three types of clay minerals, kaolinite, illite, and montmorillonite, as they are not only the most common soil-forming materials, but also important indicators of soil expansion and shrinkage potential. These clay minerals show diagnostic absorption bands resulting from vibrations of hydroxyl groups and structural water molecules in the SWIR wavelength region. The short-wave infrared reflectance spectra of the soil were obtained from a Portable Near Infrared Spectrometer (PNIS, spectrum range: 1300∼2500 nm, interval: 2 nm). Due to its simplicity, speed, and non-destructiveness, SWIR spectroscopy has been widely used in geological prospecting, chemical engineering and many other fields. The aim of this study was to use multiple linear regression (MLR) and partial least squares (PLS) regression to establish optimized quantitative estimation models of the kaolinite, illite and montmorillonite contents from soil reflectance spectra. Here, the soil reflectance spectra mainly refer to the spectral reflectivity of soil (SRS) corresponding to the absorption-band positions (AP) of the kaolinite, illite, and montmorillonite representative spectra from the USGS spectral library, the SRS corresponding to the AP of the soil spectra, and the overall soil spectrum reflectance values. The optimal estimation models of the three clay mineral contents showed satisfactory retrieval accuracy (kaolinite content: a Root Mean Square Error of Calibration (RMSEC) of 1.671 with a coefficient of determination (R²) of 0.791; illite content: a RMSEC of 1.126 with a R² of 0.616; montmorillonite content: a RMSEC of 1.814 with a R² of 0.707). Thus, the reflectance spectra of soil obtained from the PNIS could be used for quantitative estimation of kaolinite, illite and montmorillonite contents in soil.
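
    A minimal sketch of a PLS calibration of the kind described above, assuming a hypothetical matrix of SWIR reflectance spectra and known kaolinite contents; scikit-learn's PLSRegression stands in for whatever implementation the authors used, and the synthetic data are illustrative only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)

# Hypothetical data: 60 soil spectra (600 SWIR bands) and kaolinite content (%).
X = rng.normal(size=(60, 600))
y = 5.0 + 2.0 * X[:, 100] + 1.5 * X[:, 350] + rng.normal(scale=0.5, size=60)

pls = PLSRegression(n_components=5).fit(X[:40], y[:40])  # calibration set
y_pred = pls.predict(X[40:]).ravel()                     # validation set

rmse = mean_squared_error(y[40:], y_pred) ** 0.5
print(f"RMSE = {rmse:.3f}, R^2 = {r2_score(y[40:], y_pred):.3f}")
```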

  16. Providing low-budget estimations of carbon sequestration and greenhouse gas emissions in agricultural wetlands

    International Nuclear Information System (INIS)

    Lloyd, Colin R; Rebelo, Lisa-Maria; Max Finlayson, C

    2013-01-01

    The conversion of wetlands to agriculture through drainage and flooding, and the burning of wetland areas for agriculture have important implications for greenhouse gas (GHG) production and changing carbon stocks. However, the estimation of net GHG changes from mitigation practices in agricultural wetlands is complex compared to dryland crops. Agricultural wetlands have more complicated carbon and nitrogen cycles with both above- and below-ground processes and export of carbon via vertical and horizontal movement of water through the wetland. This letter reviews current research methodologies in estimating greenhouse gas production and provides guidance on the provision of robust estimates of carbon sequestration and greenhouse gas emissions in agricultural wetlands through the use of low cost reliable and sustainable measurement, modelling and remote sensing applications. The guidance is highly applicable to, and aimed at, wetlands such as those in the tropics and sub-tropics, where complex research infrastructure may not exist, or agricultural wetlands located in remote regions, where frequent visits by monitoring scientists prove difficult. In conclusion, the proposed measurement-modelling approach provides guidance on an affordable solution for mitigation and for investigating the consequences of wetland agricultural practice on GHG production, ecological resilience and possible changes to agricultural yields, variety choice and farming practice. (letter)

  17. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities.

    Science.gov (United States)

    Helb, Danica A; Tetteh, Kevin K A; Felgner, Philip L; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R; Beeson, James G; Tappero, Jordan; Smith, David L; Crompton, Peter D; Rosenthal, Philip J; Dorsey, Grant; Drakeley, Christopher J; Greenhouse, Bryan

    2015-08-11

    Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual's recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86-0.93), whereas responses to six antigens accurately estimated an individual's malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs.

  18. Using extended genealogy to estimate components of heritability for 23 quantitative and dichotomous traits.

    Directory of Open Access Journals (Sweden)

    Noah Zaitlen

    2013-05-01

    Full Text Available Important knowledge about the determinants of complex human phenotypes can be obtained from the estimation of heritability, the fraction of phenotypic variation in a population that is determined by genetic factors. Here, we make use of extensive phenotype data in Iceland, long-range phased genotypes, and a population-wide genealogical database to examine the heritability of 11 quantitative and 12 dichotomous phenotypes in a sample of 38,167 individuals. Most previous estimates of heritability are derived from family-based approaches such as twin studies, which may be biased upwards by epistatic interactions or shared environment. Our estimates of heritability, based on both closely and distantly related pairs of individuals, are significantly lower than those from previous studies. We examine phenotypic correlations across a range of relationships, from siblings to first cousins, and find that the excess phenotypic correlation in these related individuals is predominantly due to shared environment as opposed to dominance or epistasis. We also develop a new method to jointly estimate narrow-sense heritability and the heritability explained by genotyped SNPs. Unlike existing methods, this approach permits the use of information from both closely and distantly related pairs of individuals, thereby reducing the variance of estimates of heritability explained by genotyped SNPs while preventing upward bias. Our results show that common SNPs explain a larger proportion of the heritability than previously thought, with SNPs present on Illumina 300K genotyping arrays explaining more than half of the heritability for the 23 phenotypes examined in this study. Much of the remaining heritability is likely to be due to rare alleles that are not captured by standard genotyping arrays.

  19. Using extended genealogy to estimate components of heritability for 23 quantitative and dichotomous traits.

    Science.gov (United States)

    Zaitlen, Noah; Kraft, Peter; Patterson, Nick; Pasaniuc, Bogdan; Bhatia, Gaurav; Pollack, Samuela; Price, Alkes L

    2013-05-01

    Important knowledge about the determinants of complex human phenotypes can be obtained from the estimation of heritability, the fraction of phenotypic variation in a population that is determined by genetic factors. Here, we make use of extensive phenotype data in Iceland, long-range phased genotypes, and a population-wide genealogical database to examine the heritability of 11 quantitative and 12 dichotomous phenotypes in a sample of 38,167 individuals. Most previous estimates of heritability are derived from family-based approaches such as twin studies, which may be biased upwards by epistatic interactions or shared environment. Our estimates of heritability, based on both closely and distantly related pairs of individuals, are significantly lower than those from previous studies. We examine phenotypic correlations across a range of relationships, from siblings to first cousins, and find that the excess phenotypic correlation in these related individuals is predominantly due to shared environment as opposed to dominance or epistasis. We also develop a new method to jointly estimate narrow-sense heritability and the heritability explained by genotyped SNPs. Unlike existing methods, this approach permits the use of information from both closely and distantly related pairs of individuals, thereby reducing the variance of estimates of heritability explained by genotyped SNPs while preventing upward bias. Our results show that common SNPs explain a larger proportion of the heritability than previously thought, with SNPs present on Illumina 300K genotyping arrays explaining more than half of the heritability for the 23 phenotypes examined in this study. Much of the remaining heritability is likely to be due to rare alleles that are not captured by standard genotyping arrays.

  20. Estimating the development assistance for health provided to faith-based organizations, 1990-2013.

    Science.gov (United States)

    Haakenstad, Annie; Johnson, Elizabeth; Graves, Casey; Olivier, Jill; Duff, Jean; Dieleman, Joseph L

    2015-01-01

    Faith-based organizations (FBOs) have been active in the health sector for decades. Recently, the role of FBOs in global health has been of increased interest. However, little is known about the magnitude and trends in development assistance for health (DAH) channeled through these organizations. Data were collected from the 21 most recent editions of the Report of Voluntary Agencies. These reports provide information on the revenue and expenditure of organizations. Project-level data were also collected and reviewed from the Bill & Melinda Gates Foundation and the Global Fund to Fight AIDS, Tuberculosis and Malaria. More than 1,900 non-governmental organizations received funds from at least one of these three organizations. Background information on these organizations was examined by two independent reviewers to identify the amount of funding channeled through FBOs. In 2013, total spending by the FBOs identified in the VolAg amounted to US$1.53 billion. In 1990, FBOs spent 34.1% of total DAH provided by private voluntary organizations reported in the VolAg. In 2013, FBOs expended 31.0%. Funds provided by the Global Fund to FBOs have grown since 2002, amounting to $80.9 million in 2011, or 16.7% of the Global Fund's contributions to NGOs. In 2011, the Gates Foundation's contributions to FBOs amounted to $7.1 million, or 1.1% of the total provided to NGOs. Development assistance partners exhibit a range of preferences with respect to the amount of funds provided to FBOs. Overall, estimates show that FBOs have maintained a substantial and consistent share over time, in line with overall spending in global health on NGOs. These estimates provide the foundation for further research on the spending trends and effectiveness of FBOs in global health.

  1. Estimating the development assistance for health provided to faith-based organizations, 1990-2013.

    Directory of Open Access Journals (Sweden)

    Annie Haakenstad

    Full Text Available Faith-based organizations (FBOs) have been active in the health sector for decades. Recently, the role of FBOs in global health has been of increased interest. However, little is known about the magnitude and trends in development assistance for health (DAH) channeled through these organizations. Data were collected from the 21 most recent editions of the Report of Voluntary Agencies. These reports provide information on the revenue and expenditure of organizations. Project-level data were also collected and reviewed from the Bill & Melinda Gates Foundation and the Global Fund to Fight AIDS, Tuberculosis and Malaria. More than 1,900 non-governmental organizations received funds from at least one of these three organizations. Background information on these organizations was examined by two independent reviewers to identify the amount of funding channeled through FBOs. In 2013, total spending by the FBOs identified in the VolAg amounted to US$1.53 billion. In 1990, FBOs spent 34.1% of total DAH provided by private voluntary organizations reported in the VolAg. In 2013, FBOs expended 31.0%. Funds provided by the Global Fund to FBOs have grown since 2002, amounting to $80.9 million in 2011, or 16.7% of the Global Fund's contributions to NGOs. In 2011, the Gates Foundation's contributions to FBOs amounted to $7.1 million, or 1.1% of the total provided to NGOs. Development assistance partners exhibit a range of preferences with respect to the amount of funds provided to FBOs. Overall, estimates show that FBOs have maintained a substantial and consistent share over time, in line with overall spending in global health on NGOs. These estimates provide the foundation for further research on the spending trends and effectiveness of FBOs in global health.

  2. Estimating the Development Assistance for Health Provided to Faith-Based Organizations, 1990–2013

    Science.gov (United States)

    Haakenstad, Annie; Johnson, Elizabeth; Graves, Casey; Olivier, Jill; Duff, Jean; Dieleman, Joseph L.

    2015-01-01

    Background Faith-based organizations (FBOs) have been active in the health sector for decades. Recently, the role of FBOs in global health has been of increased interest. However, little is known about the magnitude and trends in development assistance for health (DAH) channeled through these organizations. Material and Methods Data were collected from the 21 most recent editions of the Report of Voluntary Agencies. These reports provide information on the revenue and expenditure of organizations. Project-level data were also collected and reviewed from the Bill & Melinda Gates Foundation and the Global Fund to Fight AIDS, Tuberculosis and Malaria. More than 1,900 non-governmental organizations received funds from at least one of these three organizations. Background information on these organizations was examined by two independent reviewers to identify the amount of funding channeled through FBOs. Results In 2013, total spending by the FBOs identified in the VolAg amounted to US$1.53 billion. In 1990, FBOs spent 34.1% of total DAH provided by private voluntary organizations reported in the VolAg. In 2013, FBOs expended 31.0%. Funds provided by the Global Fund to FBOs have grown since 2002, amounting to $80.9 million in 2011, or 16.7% of the Global Fund’s contributions to NGOs. In 2011, the Gates Foundation’s contributions to FBOs amounted to $7.1 million, or 1.1% of the total provided to NGOs. Conclusion Development assistance partners exhibit a range of preferences with respect to the amount of funds provided to FBOs. Overall, estimates show that FBOs have maintained a substantial and consistent share over time, in line with overall spending in global health on NGOs. These estimates provide the foundation for further research on the spending trends and effectiveness of FBOs in global health. PMID:26042731

  3. Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation

    Science.gov (United States)

    Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien

    2018-04-01

    We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.

  4. Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation

    Science.gov (United States)

    Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien

    2018-06-01

    We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.

  5. A test for Improvement of high resolution Quantitative Precipitation Estimation for localized heavy precipitation events

    Science.gov (United States)

    Lee, Jung-Hoon; Roh, Joon-Woo; Park, Jeong-Gyun

    2017-04-01

    Accurate estimation of precipitation is one of the most difficult and significant tasks in weather diagnosis and forecasting. On the Korean Peninsula, heavy precipitation is caused by various physical mechanisms, including shortwave troughs, quasi-stationary moisture convergence zones among varying air masses, and the direct or indirect effects of tropical cyclones. In addition, varied geography and topography make the temporal and spatial distribution of precipitation very complicated. In particular, localized heavy rainfall events in South Korea generally arise from mesoscale convective systems embedded in these synoptic-scale disturbances. Even with weather radar data of high temporal and spatial resolution, accurate estimation of rain rate from radar reflectivity is difficult. The Z-R relationship (Marshall and Palmer 1948) is the most widely adopted approach, and several other methods, such as support vector machines (SVM), neural networks, fuzzy logic, and kriging, have been utilized to improve the accuracy of rain-rate estimates. These methods yield different quantitative precipitation estimation (QPE) results, and their accuracy differs across heavy precipitation cases. In this study, in order to improve the accuracy of QPE for localized heavy precipitation, an ensemble method combining different Z-R relationships and calibration techniques was tested. This QPE ensemble method was developed to exploit the respective advantages of the individual precipitation calibration methods, with ensemble members produced from combinations of different Z-R coefficients and calibration methods.
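
    A minimal sketch of rain-rate estimation from radar reflectivity via a Z-R power law, using the classic Marshall-Palmer coefficients Z = 200·R^1.6; the toy ensemble over a few coefficient pairs is an illustrative assumption, not the study's actual member set.

```python
import numpy as np

def rain_rate(dbz, a=200.0, b=1.6):
    """Invert Z = a * R**b; dBZ = 10*log10(Z) with Z in mm^6/m^3 and R in mm/h."""
    z = 10.0 ** (dbz / 10.0)
    return (z / a) ** (1.0 / b)

dbz = np.array([20.0, 35.0, 50.0])

# Toy ensemble over a few (a, b) pairs; members could instead be weighted by skill.
members = [(200.0, 1.6), (300.0, 1.4), (250.0, 1.2)]
rates = np.mean([rain_rate(dbz, a, b) for a, b in members], axis=0)
for z_dbz, r in zip(dbz, rates):
    print(f"{z_dbz:.0f} dBZ -> {r:.2f} mm/h (ensemble mean)")
```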

  6. Cancer and the LGBTQ Population: Quantitative and Qualitative Results from an Oncology Providers' Survey on Knowledge, Attitudes, and Practice Behaviors.

    Science.gov (United States)

    Tamargo, Christina L; Quinn, Gwendolyn P; Sanchez, Julian A; Schabath, Matthew B

    2017-10-07

    Despite growing social acceptance, the LGBTQ population continues to face barriers to healthcare, including fear of stigmatization by healthcare providers and providers' lack of knowledge about LGBTQ-specific health issues. This analysis focuses on the assessment of quantitative and qualitative responses from a subset of providers who identified as specialists treating one or more of the seven cancers that may be disproportionately common in LGBTQ patients. A 32-item web-based survey was emailed to 388 oncology providers at a single institution. The survey assessed demographics, knowledge, attitudes, and practice behaviors. Oncology providers specializing in these seven cancer types had poor knowledge of LGBTQ-specific health needs, with fewer than half of the surveyed providers (49.5%) correctly answering knowledge questions. Most providers had overall positive attitudes toward LGBTQ patients, with 91.7% agreeing they would be comfortable treating this population and would support education and/or training on LGBTQ-related cancer health issues. Results suggest that despite generally positive attitudes toward the LGBTQ population, oncology providers who treat the cancer types most prevalent among this population lack knowledge of their unique health issues. Knowledge and practice behaviors may improve with enhanced education and training on this population's specific needs.

  7. Unit rupture work as a criterion for quantitative estimation of hardenability in steel

    International Nuclear Information System (INIS)

    Kramarov, M.A.; Orlov, E.D.; Rybakov, A.B.

    1980-01-01

    The high sensitivity of the fracture resistance of structural steel to the degree of hardenability attained during hardening can be used for quantitative estimation of the latter. A criterion κ is proposed: the ratio of the unit rupture work of the steel under investigation in the case of incomplete hardenability, a_T(ih), to the analogous value obtained in the case of complete hardenability, A_T(ch), at a testing temperature corresponding to the critical temperature T_100(M). The high sensitivity of the criterion to the structure of hardened steel was confirmed by experimental investigation of the 40Kh, 38KhNM and 38KhNMFA steels after isothermal holds at different temperatures, corresponding to the production of various austenite decomposition products.

  8. Parameter estimation using the genetic algorithm and its impact on quantitative precipitation forecast

    Directory of Open Access Journals (Sweden)

    Y. H. Lee

    2006-12-01

    Full Text Available In this study, optimal parameter estimation is performed for both physical and computational parameters in a mesoscale meteorological model, and its impact on quantitative precipitation forecasting (QPF) is assessed for a heavy rainfall case that occurred on the Korean Peninsula in June 2005. Experiments are carried out using the PSU/NCAR MM5 model and the genetic algorithm (GA) for two parameters: the reduction rate of the convective available potential energy in the Kain-Fritsch (KF) scheme for cumulus parameterization, and the Asselin filter parameter for numerical stability. The fitness function is defined based on a QPF skill score. It turns out that each optimized parameter significantly improves the QPF skill. Such improvement is maximized when the two optimized parameters are used simultaneously. Our results indicate that optimization of computational parameters as well as physical parameters, and their adequate application, are essential in improving model performance.
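
    A minimal sketch of a genetic algorithm tuning two scalar parameters against a skill-score fitness; a stand-in quadratic fitness replaces an actual MM5 run, and the population size, mutation scale, and optimum location are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(params):
    # Stand-in for "run the model, score the QPF"; peak skill placed at (0.9, 0.1).
    cape_reduction, asselin = params
    return -((cape_reduction - 0.9) ** 2 + (asselin - 0.1) ** 2)

pop = rng.uniform([0.0, 0.0], [1.0, 0.5], size=(20, 2))  # initial population
for generation in range(50):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]               # keep the fittest half
    children = parents[rng.integers(0, 10, size=10)] \
        + rng.normal(scale=0.02, size=(10, 2))            # mutate copies of parents
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("best (CAPE reduction rate, Asselin parameter):", best.round(3))
```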

  9. The effect of volume-of-interest misregistration on quantitative planar activity and dose estimation

    International Nuclear Information System (INIS)

    Song, N; Frey, E C; He, B

    2010-01-01

    In targeted radionuclide therapy (TRT), dose estimation is essential for treatment planning and tumor dose response studies. Dose estimates are typically based on a time series of whole-body conjugate view planar or SPECT scans of the patient acquired after administration of a planning dose. Quantifying the activity in the organs from these studies is an essential part of dose estimation. The quantitative planar (QPlanar) processing method involves accurate compensation for image degrading factors and correction for organ and background overlap via the combination of computational models of the image formation process and 3D volumes of interest defining the organs to be quantified. When the organ VOIs are accurately defined, the method intrinsically compensates for attenuation, scatter and partial volume effects, as well as overlap with other organs and the background. However, alignment between the 3D organ volumes of interest (VOIs) used in QPlanar processing and the true organ projections in the planar images is required. The aim of this research was to study the effects of VOI misregistration on the accuracy and precision of organ activity estimates obtained using the QPlanar method. In this work, we modeled the degree of residual misregistration that would be expected after an automated registration procedure by randomly misaligning 3D SPECT/CT images, from which the VOI information was derived, and planar images. Mutual information-based image registration was used to align the realistic simulated 3D SPECT images with the 2D planar images. The residual image misregistration was used to simulate realistic levels of misregistration and allow investigation of the effects of misregistration on the accuracy and precision of the QPlanar method. We observed that accurate registration is especially important for small organs or ones with low activity concentrations compared to neighboring organs. In addition, residual misregistration gave rise to a loss of precision in the organ activity estimates.

  10. Accuracy in the estimation of quantitative minimal area from the diversity/area curve.

    Science.gov (United States)

    Vives, Sergi; Salicrú, Miquel

    2005-05-01

    The problem of representativity is fundamental in ecological studies. A qualitative minimal area that gives a good representation of species pool [C.M. Bouderesque, Methodes d'etude qualitative et quantitative du benthos (en particulier du phytobenthos), Tethys 3(1) (1971) 79] can be discerned from a quantitative minimal area which reflects the structural complexity of community [F.X. Niell, Sobre la biologia de Ascophyllum nosodum (L.) Le Jolis en Galicia, Invest. Pesq. 43 (1979) 501]. This suggests that the populational diversity can be considered as the value of the horizontal asymptote corresponding to the curve sample diversity/biomass [F.X. Niell, Les applications de l'index de Shannon a l'etude de la vegetation interdidale, Soc. Phycol. Fr. Bull. 19 (1974) 238]. In this study we develop an expression to determine minimal areas and use it to obtain certain information about the community structure based on diversity/area curve graphs. This expression is based on the functional relationship between the expected value of the diversity and the sample size used to estimate it. In order to establish the quality of the estimation process, we obtained the confidence intervals as a particularization of the functional (h-phi)-entropies proposed in [M. Salicru, M.L. Menendez, D. Morales, L. Pardo, Asymptotic distribution of (h,phi)-entropies, Commun. Stat. (Theory Methods) 22 (7) (1993) 2015]. As an example used to demonstrate the possibilities of this method, and only for illustrative purposes, data from a study on the rocky intertidal seaweed populations in the Ria of Vigo (N.W. Spain) are analyzed [F.X. Niell, Estudios sobre la estructura, dinamica y produccion del Fitobentos intermareal (Facies rocosa) de la Ria de Vigo. Ph.D. Mem. University of Barcelona, Barcelona, 1979].
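
    A minimal numerical sketch of the idea follows, using an assumed saturating form for the diversity/area curve rather than the expression derived in the paper: sample diversity is computed at increasing areas, an asymptote is fitted, and the quantitative minimal area is read off at 95% of that asymptote.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Synthetic community: 30 species with geometric relative abundances.
p = 0.7 ** np.arange(30)
p /= p.sum()

def shannon(counts):
    q = counts[counts > 0] / counts.sum()
    return -(q * np.log(q)).sum()

# Sample diversity at increasing "areas" (here: individuals sampled).
areas = np.arange(10, 501, 10)
H = np.array([shannon(rng.multinomial(a, p)) for a in areas])

# Saturating model H(A) = Hmax * A / (k + A); this functional form is
# an illustrative choice, not the expression derived in the paper.
(par, _) = curve_fit(lambda A, Hmax, k: Hmax * A / (k + A),
                     areas, H, p0=(3.0, 20.0))
Hmax, k = par
A_min = 0.95 * k / (1 - 0.95)   # area where H reaches 95% of Hmax

print(f"estimated asymptotic diversity: {Hmax:.2f} nats")
print(f"quantitative minimal area (95% criterion): {A_min:.0f} units")
```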

  11. Fatalities in high altitude mountaineering: a review of quantitative risk estimates.

    Science.gov (United States)

    Weinbruch, Stephan; Nordby, Karl-Christian

    2013-12-01

    Quantitative estimates for mortality in high altitude mountaineering are reviewed. Special emphasis is placed on the heterogeneity of the risk estimates and on confounding. Crude estimates for mortality are on the order of 1/1000 to 40/1000 persons above base camp, for both expedition members and high altitude porters. High altitude porters have mostly a lower risk than expedition members (risk ratio for all Nepalese peaks requiring an expedition permit: 0.73; 95 % confidence interval 0.59-0.89). The summit bid is generally the most dangerous part of an expedition for members, whereas most high altitude porters die during route preparation. On 8000 m peaks, the mortality during descent from summit varies between 4/1000 and 134/1000 summiteers (members plus porters). The risk estimates are confounded by human and environmental factors. Information on confounding by gender and age is contradictory and requires further work. There are indications for safety segregation of men and women, with women being more risk averse than men. Citizenship appears to be a significant confounder. Prior high altitude mountaineering experience in Nepal has no protective effect. Commercial expeditions in the Nepalese Himalayas have a lower mortality than traditional expeditions, though after controlling for confounding, the difference is not statistically significant. The overall mortality is increasing with increasing peak altitude for expedition members but not for high altitude porters. In the Nepalese Himalayas and in Alaska, a significant decrease of mortality with calendar year was observed. A few suggestions for further work are made at the end of the article.
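
    For orientation, a risk ratio with its confidence interval, like the one quoted above, can be reproduced from raw counts with the standard log-scale (Katz) method; the counts in the example below are hypothetical.

```python
import math

def risk_ratio_ci(d1, n1, d2, n2, z=1.96):
    """Risk ratio of group 1 vs group 2 with a 95% CI computed on the
    log scale (standard Katz method). d = deaths, n = persons at risk."""
    rr = (d1 / n1) / (d2 / n2)
    se = math.sqrt(1 / d1 - 1 / n1 + 1 / d2 - 1 / n2)
    lo, hi = (rr * math.exp(s * z * se) for s in (-1, +1))
    return rr, lo, hi

# Hypothetical counts chosen only to illustrate the calculation.
rr, lo, hi = risk_ratio_ci(d1=87, n1=40000, d2=120, n2=40000)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```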

  12. Performance of refractometry in quantitative estimation of isotopic concentration of heavy water in nuclear reactor

    International Nuclear Information System (INIS)

    Dhole, K.; Roy, M.; Ghosh, S.; Datta, A.; Tripathy, M.K.; Bose, H.

    2013-01-01

    Highlights: ► Rapid analysis of heavy water samples, with precise temperature control. ► Entire composition range covered. ► Both variations in mole and wt.% of D2O in the heavy water sample studied. ► Standard error of calibration and prediction were estimated. - Abstract: The method of refractometry has been investigated for the quantitative estimation of isotopic concentration of heavy water (D2O) in a simulated water sample. Feasibility of refractometry as an excellent analytical technique for rapid and non-invasive determination of D2O concentration in water samples has been amply demonstrated. Temperature of the samples has been precisely controlled to eliminate the effect of temperature fluctuation on refractive index measurement. The method is found to exhibit a reasonable analytical response to its calibration performance over the purity range of 0–100% D2O. An accuracy of below ±1% in the measurement of isotopic purity of heavy water for the entire range could be achieved.
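
    A hedged sketch of the calibration step such a method implies: fit refractive index against D2O fraction and invert the fit for prediction. The refractive indices below are approximate literature values near 20 °C and the noise level is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Approximate refractive indices near 20 C (indicative values only).
n_h2o, n_d2o = 1.3330, 1.3284

# Synthetic calibration set: known D2O fractions plus measurement noise.
frac = np.linspace(0.0, 1.0, 11)                      # 0-100% D2O
n_meas = n_h2o + (n_d2o - n_h2o) * frac + rng.normal(0, 2e-5, frac.size)

slope, intercept = np.polyfit(frac, n_meas, 1)
pred = (n_meas - intercept) / slope                   # inverse prediction
sec = np.sqrt(np.sum((pred - frac) ** 2) / (frac.size - 2))

print(f"calibration: n = {intercept:.5f} + {slope:.5f} * x(D2O)")
print(f"standard error of calibration: {100 * sec:.2f}% D2O")
```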

  13. Training anesthesiology residents in providing anesthesia for awake craniotomy: learning curves and estimate of needed case load.

    Science.gov (United States)

    Bilotta, Federico; Titi, Luca; Lanni, Fabiana; Stazi, Elisabetta; Rosa, Giovanni

    2013-08-01

    To measure the learning curves of residents in anesthesiology in providing anesthesia for awake craniotomy, and to estimate the case load needed to achieve a "good-excellent" level of competence. Prospective study. Operating room of a university hospital. 7 volunteer residents in anesthesiology. Residents underwent a dedicated training program on the clinical characteristics of anesthesia for awake craniotomy. The program was divided into three tasks: local anesthesia, sedation-analgesia, and intraoperative hemodynamic management. The learning curve for each resident for each task was recorded over 10 procedures. Quantitative assessment of the individual's ability was based on the resident's self-assessment score and the attending anesthesiologist's judgment, rated on a modified 12 mm Likert-type scale and reported as an ability score visual analog scale (VAS). This ability VAS score ranged from 1 to 12 (ie, very poor, mild, moderate, sufficient, good, excellent). The number of requests for advice also was recorded (ie, resident requests for practical help and theoretical notions to accomplish the procedures). Each task had a specific learning rate; the numbers of procedures necessary to achieve "good-excellent" ability with confidence, as determined by the recorded results, were 10 procedures for local anesthesia, 15 to 25 procedures for sedation-analgesia, and 20 to 30 procedures for intraoperative hemodynamic management. Awake craniotomy is an approach used increasingly in neuroanesthesia. A dedicated training program based on learning specific tasks and building confidence with essential features provides "good-excellent" ability. © 2013 Elsevier Inc. All rights reserved.

  14. Merging Radar Quantitative Precipitation Estimates (QPEs) from the High-resolution NEXRAD Reanalysis over CONUS with Rain-gauge Observations

    Science.gov (United States)

    Prat, O. P.; Nelson, B. R.; Stevens, S. E.; Nickl, E.; Seo, D. J.; Kim, B.; Zhang, J.; Qi, Y.

    2015-12-01

    The processing of radar-only precipitation via the reanalysis from the National Mosaic and Multi-Sensor Quantitative (NMQ/Q2) based on the WSR-88D Next-generation Radar (Nexrad) network over the Continental United States (CONUS) is completed for the period covering from 2002 to 2011. While this constitutes a unique opportunity to study precipitation processes at higher resolution than conventionally possible (1-km, 5-min), the long-term radar-only product needs to be merged with in-situ information in order to be suitable for hydrological, meteorological and climatological applications. The radar-gauge merging is performed by using rain gauge information at daily (Global Historical Climatology Network-Daily: GHCN-D), hourly (Hydrometeorological Automated Data System: HADS), and 5-min (Automated Surface Observing Systems: ASOS; Climate Reference Network: CRN) resolution. The challenges related to incorporating differing resolution and quality networks to generate long-term large-scale gridded estimates of precipitation are enormous. In that perspective, we are implementing techniques for merging the rain gauge datasets and the radar-only estimates such as Inverse Distance Weighting (IDW), Simple Kriging (SK), Ordinary Kriging (OK), and Conditional Bias-Penalized Kriging (CBPK). An evaluation of the different radar-gauge merging techniques is presented and we provide an estimate of uncertainty for the gridded estimates. In addition, comparisons with a suite of lower resolution QPEs derived from ground based radar measurements (Stage IV) are provided in order to give a detailed picture of the improvements and remaining challenges.
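
    Of the merging techniques listed, inverse distance weighting is the simplest to sketch: interpolate the gauge-minus-radar errors onto the radar grid and add them back. The code below is a generic IDW bias adjustment under simplified collocation assumptions, not the NMQ/Q2 implementation.

```python
import numpy as np

def idw_adjust(radar, radar_xy, gauges, gauge_xy, power=2.0):
    """Merge a gridded radar QPE with point gauges by interpolating
    the gauge-minus-radar error field with inverse distance weighting.
    radar: (n,) rainfall at grid points; radar_xy: (n, 2) coordinates;
    gauges: (m,) gauge accumulations; gauge_xy: (m, 2) coordinates."""
    # Radar value at each gauge: nearest grid point (crude collocation).
    d_gr = np.linalg.norm(gauge_xy[:, None, :] - radar_xy[None, :, :], axis=2)
    err = gauges - radar[d_gr.argmin(axis=1)]
    # IDW of the error field back onto the grid.
    d = np.linalg.norm(radar_xy[:, None, :] - gauge_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    w /= w.sum(axis=1, keepdims=True)
    return np.maximum(radar + w @ err, 0.0)

# Tiny worked example on a 1-D transect (units: mm).
grid = np.array([[0.0, 0], [1, 0], [2, 0], [3, 0]])
radar = np.array([4.0, 5.0, 6.0, 7.0])
print(idw_adjust(radar, grid, np.array([6.0, 6.5]),
                 np.array([[0.5, 0.0], [2.5, 0.0]])))
```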

  15. SU-F-I-33: Estimating Radiation Dose in Abdominal Fat Quantitative CT

    Energy Technology Data Exchange (ETDEWEB)

    Li, X; Yang, K; Liu, B [Massachusetts General Hospital, Boston, MA (United States)

    2016-06-15

    Purpose: To compare size-specific dose estimate (SSDE) in abdominal fat quantitative CT with another dose estimate D_size,L that also takes into account scan length. Methods: This study complied with the requirements of the Health Insurance Portability and Accountability Act. At our institution, abdominal fat CT is performed with scan length = 1 cm and CTDIvol = 4.66 mGy (referenced to body CTDI phantom). A previously developed CT simulation program was used to simulate single rotation axial scans of 6–55 cm diameter water cylinders, and dose integral of the longitudinal dose profile over the central 1 cm length was used to predict the dose at the center of one-cm scan range. SSDE and D_size,L were assessed for 182 consecutive abdominal fat CT examinations with mean water-equivalent diameter (WED) of 27.8 cm ± 6.0 (range, 17.9 - 42.2 cm). Patient age ranged from 18 to 75 years, and weight ranged from 39 to 163 kg. Results: Mean SSDE was 6.37 mGy ± 1.33 (range, 3.67–8.95 mGy); mean D_size,L was 2.99 mGy ± 0.85 (range, 1.48 - 4.88 mGy); and mean D_size,L/SSDE ratio was 0.46 ± 0.04 (range, 0.40 - 0.55). Conclusion: The conversion factors for size-specific dose estimate in AAPM Report No. 204 were generated using 15 - 30 cm scan lengths. One needs to be cautious in applying SSDE to small length CT scans. For abdominal fat CT, SSDE was 80–150% higher than the dose of 1 cm scan length.
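
    For reference, SSDE is the phantom-referenced CTDIvol scaled by a size-dependent conversion factor. The sketch below uses the exponential fit commonly quoted for the AAPM Report 204 body-phantom coefficients; treat the coefficients as an assumption and verify them against the report before relying on them.

```python
import math

def ssde_body(ctdi_vol_mgy, wed_cm):
    """Size-specific dose estimate from the AAPM Report 204 exponential
    fit for the 32 cm body phantom (coefficients as commonly quoted)."""
    f = 3.704369 * math.exp(-0.03671937 * wed_cm)   # conversion factor
    return f * ctdi_vol_mgy

# Values from this record: CTDIvol = 4.66 mGy, mean WED = 27.8 cm.
print(f"SSDE ~ {ssde_body(4.66, 27.8):.2f} mGy")    # near the 6.37 mGy mean
```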

  16. Estimated Nutritive Value of Low-Price Model Lunch Sets Provided to Garment Workers in Cambodia

    Directory of Open Access Journals (Sweden)

    Jan Makurat

    2017-07-01

    Full Text Available Background: The establishment of staff canteens is expected to improve the nutritional situation of Cambodian garment workers. The objective of this study is to assess the nutritive value of low-price model lunch sets provided at a garment factory in Phnom Penh, Cambodia. Methods: Exemplary lunch sets were served to female workers through a temporary canteen at a garment factory in Phnom Penh. Dish samples were collected repeatedly to examine mean serving sizes of individual ingredients. Food composition tables and NutriSurvey software were used to assess mean amounts and contributions to recommended dietary allowances (RDAs or adequate intake of energy, macronutrients, dietary fiber, vitamin C (VitC, iron, vitamin A (VitA, folate and vitamin B12 (VitB12. Results: On average, lunch sets provided roughly one third of RDA or adequate intake of energy, carbohydrates, fat and dietary fiber. Contribution to RDA of protein was high (46% RDA. The sets contained a high mean share of VitC (159% RDA, VitA (66% RDA, and folate (44% RDA, but were low in VitB12 (29% RDA and iron (20% RDA. Conclusions: Overall, lunches satisfied recommendations of caloric content and macronutrient composition. Sets on average contained a beneficial amount of VitC, VitA and folate. Adjustments are needed for a higher iron content. Alternative iron-rich foods are expected to be better suited, compared to increasing portions of costly meat/fish components. Lunch provision at Cambodian garment factories holds the potential to improve food security of workers, approximately at costs of <1 USD/person/day at large scale. Data on quantitative total dietary intake as well as physical activity among workers are needed to further optimize the concept of staff canteens.

  17. Quantitative estimation of brain atrophy and function with PET and MRI two-dimensional projection images

    International Nuclear Information System (INIS)

    Saito, Reiko; Uemura, Koji; Uchiyama, Akihiko; Toyama, Hinako; Ishii, Kenji; Senda, Michio

    2001-01-01

    The purpose of this paper is to estimate the extent of atrophy and the decline in brain function objectively and quantitatively. Two-dimensional (2D) projection images of three-dimensional (3D) transaxial images of positron emission tomography (PET) and magnetic resonance imaging (MRI) were made by means of the Mollweide method, which preserves the area of the brain surface. A correlation image was generated between 2D projection images of MRI and cerebral blood flow (CBF) or 18F-fluorodeoxyglucose (FDG) PET images, and the sulcus was extracted from the correlation image clustered by the K-means method. Furthermore, the extent of atrophy was evaluated from the extracted sulcus on 2D-projection MRI and the cerebral cortical function such as blood flow or glucose metabolic rate was assessed in the cortex excluding sulcus on the 2D-projection PET image, and then the relationship between the cerebral atrophy and function was evaluated. This method was applied to two groups, the young and the aged normal subjects, and the relationship between the age and the rate of atrophy or the cerebral blood flow was investigated. This method was also applied to FDG-PET and MRI studies in the normal controls and in patients with corticobasal degeneration. The mean rate of atrophy in the aged group was found to be higher than that in the young. The mean value and the variance of the cerebral blood flow for the young are greater than those of the aged. The sulci were similarly extracted using either CBF or FDG PET images. The proposed method using 2D projection images of MRI and PET is clinically useful for quantitative assessment of atrophic change and functional disorder of cerebral cortex. (author)
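
    The area-preserving mapping named in the abstract is the standard Mollweide projection, which can be sketched directly; the Newton iteration below solves its auxiliary-angle equation. This is the generic cartographic formula, not the authors' brain-surface pipeline.

```python
import math

def mollweide(lat, lon, R=1.0):
    """Equal-area Mollweide projection of a point on a sphere
    (latitude and longitude in radians): solve the auxiliary angle
    theta from 2*theta + sin(2*theta) = pi*sin(lat), then map."""
    theta = lat
    for _ in range(50):                     # Newton iteration
        f = 2 * theta + math.sin(2 * theta) - math.pi * math.sin(lat)
        theta -= f / (2 + 2 * math.cos(2 * theta) + 1e-12)
    x = R * (2 * math.sqrt(2) / math.pi) * lon * math.cos(theta)
    y = R * math.sqrt(2) * math.sin(theta)
    return x, y

print(mollweide(math.radians(45), math.radians(30)))
```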

  18. Myocardial blood flow estimates from dynamic contrast-enhanced magnetic resonance imaging: three quantitative methods

    Science.gov (United States)

    Borrazzo, Cristian; Galea, Nicola; Pacilio, Massimiliano; Altabella, Luisa; Preziosi, Enrico; Carnì, Marco; Ciolina, Federica; Vullo, Francesco; Francone, Marco; Catalano, Carlo; Carbone, Iacopo

    2018-02-01

    Dynamic contrast-enhanced cardiovascular magnetic resonance imaging can be used to quantitatively assess the myocardial blood flow (MBF), recovering the tissue impulse response function for the transit of a gadolinium bolus through the myocardium. Several deconvolution techniques are available, using various models for the impulse response. The method of choice may influence the results, producing differences that have not been deeply investigated yet. Three methods for quantifying myocardial perfusion have been compared: Fermi function modelling (FFM), the Tofts model (TM) and the gamma function model (GF), with the latter traditionally used in brain perfusion MRI. Thirty human subjects were studied at rest as well as under cold pressor test stress (submerging hands in ice-cold water), and a single bolus of gadolinium (0.1 ± 0.05 mmol kg-1) was injected. Perfusion estimate differences between the methods were analysed by paired comparisons with Student's t-test, linear regression analysis, and Bland-Altman plots, as well as by using the two-way ANOVA, considering the MBF values of all patients grouped according to two categories: calculation method and rest/stress conditions. Perfusion estimates obtained by the various methods in both rest and stress conditions were not significantly different, and were in good agreement with the literature. The results obtained during the first-pass transit time (20 s) yielded p-values in the range 0.20-0.28 for Student's t-test, linear regression analysis slopes between 0.98-1.03, and R values between 0.92-1.01. From the Bland-Altman plots, the paired comparisons yielded a bias (and a 95% CI), expressed as ml/min/g, for FFM versus TM of -0.01 (-0.20, 0.17) or 0.02 (-0.49, 0.52) at rest or under stress respectively; for FFM versus GF, -0.05 (-0.29, 0.20) or -0.07 (-0.55, 0.41) at rest or under stress; and for TM versus GF, -0.03 (-0.30, 0.24) or -0.09 (-0.43, 0.26) at rest or under stress. With the
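
    A minimal sketch of the Fermi-function modelling step, under simplifying assumptions (synthetic arterial input function, discrete convolution, least-squares fit): the flow index is the amplitude of the fitted impulse response. All parameter values below are illustrative, not from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

dt = 0.5                              # sampling interval (s), illustrative
t = np.arange(0, 60, dt)

def fermi_response(t, F, tau, k):
    """Fermi-model impulse response: amplitude F (a flow index up to
    unit scaling), shoulder at tau, decay width k."""
    return F / (1.0 + np.exp((t - tau) / k))

def tissue_curve(t, F, tau, k, aif):
    # Myocardial curve = AIF convolved with the impulse response.
    return np.convolve(aif, fermi_response(t, F, tau, k))[: t.size] * dt

# Synthetic arterial input function (gamma-variate-like bolus).
aif = (t / 6.0) ** 3 * np.exp(-t / 3.0)

true = (0.9, 8.0, 4.0)                # F, tau, k used to fabricate "data"
y = tissue_curve(t, *true, aif) + np.random.default_rng(3).normal(0, 0.02, t.size)

popt, _ = curve_fit(lambda tt, F, tau, k: tissue_curve(tt, F, tau, k, aif),
                    t, y, p0=(0.5, 5.0, 3.0))
print(f"recovered flow index F: {popt[0]:.2f} (true {true[0]})")
```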

  19. Quantitative Analysis of VIIRS DNB Nightlight Point Source for Light Power Estimation and Stability Monitoring

    Directory of Open Access Journals (Sweden)

    Changyong Cao

    2014-12-01

    Full Text Available The high sensitivity and advanced onboard calibration of the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) enables accurate measurements of low light radiances which leads to enhanced quantitative applications at night. The finer spatial resolution of DNB also allows users to examine socioeconomic activities at urban scales. Given the growing interest in the use of the DNB data, there is a pressing need for better understanding of the calibration stability and absolute accuracy of the DNB at low radiances. The low light calibration accuracy was previously estimated at a moderate 15% using extended sources while the long-term stability has yet to be characterized. There are also several science related questions to be answered, for example, how the Earth's atmosphere and surface variability contribute to the stability of the DNB measured radiances; how to separate them from instrument calibration stability; whether or not SI (International System of Units) traceable active light sources can be designed and installed at selected sites to monitor the calibration stability, radiometric and geolocation accuracy, and point spread functions of the DNB; furthermore, whether or not such active light sources can be used for detecting environmental changes, such as aerosols. This paper explores the quantitative analysis of nightlight point sources, such as those from fishing vessels, bridges, and cities, using fundamental radiometry and radiative transfer, which would be useful for a number of applications including search and rescue in severe weather events, as well as calibration/validation of the DNB. Time series of the bridge light data are used to assess the stability of the light measurements and the calibration of VIIRS DNB. It was found that the light radiant power computed from the VIIRS DNB data matched relatively well with independent assessments based on the in situ light installations, although estimates have to be

  20. Improving Radar Quantitative Precipitation Estimation over Complex Terrain in the San Francisco Bay Area

    Science.gov (United States)

    Cifelli, R.; Chen, H.; Chandrasekar, V.

    2017-12-01

    A recent study by the State of California's Department of Water Resources has emphasized that the San Francisco Bay Area is at risk of catastrophic flooding. Therefore, accurate quantitative precipitation estimation (QPE) and forecast (QPF) are critical for protecting life and property in this region. Compared to rain gauges and meteorological satellites, ground-based radar has shown great advantages for high-resolution precipitation observations in both the space and time domains. In addition, the polarization diversity shows great potential to characterize precipitation microphysics through identification of different hydrometeor types and their size and shape information. Currently, all the radars comprising the U.S. National Weather Service (NWS) Weather Surveillance Radar-1988 Doppler (WSR-88D) network are operating in dual-polarization mode. Enhancement of QPE is one of the main considerations of the dual-polarization upgrade. The San Francisco Bay Area is covered by two S-band WSR-88D radars, namely, KMUX and KDAX. However, in complex terrain like the Bay Area, it is still challenging to obtain an optimal rainfall algorithm for a given set of dual-polarization measurements. In addition, the accuracy of rain rate estimates is contingent on additional factors such as bright band contamination, vertical profile of reflectivity (VPR) correction, and partial beam blockages. This presentation aims to improve radar QPE for the Bay Area using advanced dual-polarization rainfall methodologies. The benefit brought by the dual-polarization upgrade of the operational radar network is assessed. In addition, a pilot study of gap-filling X-band radar performance is conducted in support of regional QPE system development. This paper also presents a detailed comparison between the dual-polarization radar-derived rainfall products with various operational products including the NSSL's Multi-Radar/Multi-Sensor (MRMS) system. Quantitative evaluation of various rainfall products is achieved
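
    As an illustration of the dual-polarization rainfall methodologies referred to, a common strategy blends an R(Z) relation at low reflectivity with an R(Kdp) power law in heavy rain. The coefficients below are typical S-band literature values, offered as assumptions rather than those tuned for KMUX/KDAX.

```python
import numpy as np

def rain_rate(z_dbz, kdp):
    """Simple dual-polarization blending: the NEXRAD default Z-R
    relation (Z = 300 R^1.4) in light/moderate rain, switching to an
    R(Kdp) power law in heavy rain where Z can saturate or be
    hail-contaminated. Coefficients are generic S-band values."""
    z_lin = 10.0 ** (z_dbz / 10.0)                 # dBZ -> mm^6 m^-3
    r_z = (z_lin / 300.0) ** (1.0 / 1.4)
    r_kdp = 44.0 * np.sign(kdp) * np.abs(kdp) ** 0.822
    return np.where(z_dbz < 40.0, r_z, r_kdp)      # rain rate in mm/h

print(rain_rate(np.array([30.0, 48.0]), np.array([0.1, 1.8])))
```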

  1. Quantitative assessment of the microbial risk of leafy greens from farm to consumption: preliminary framework, data, and risk estimates.

    Science.gov (United States)

    Danyluk, Michelle D; Schaffner, Donald W

    2011-05-01

    This project was undertaken to relate what is known about the behavior of Escherichia coli O157:H7 under laboratory conditions and integrate this information with what is known regarding the 2006 E. coli O157:H7 spinach outbreak in the context of a quantitative microbial risk assessment. The risk model explicitly assumes that all contamination arises from exposure in the field. Extracted data, models, and user inputs were entered into an Excel spreadsheet, and the modeling software @RISK was used to perform Monte Carlo simulations. The model predicts that cut leafy greens that are temperature abused will support the growth of E. coli O157:H7, and populations of the organism may increase by as much as 1 log CFU/day under optimal temperature conditions. When the risk model used a starting level of -1 log CFU/g, with 0.1% of incoming servings contaminated, the predicted numbers of cells per serving were within the range of best available estimates of pathogen levels during the outbreak. The model predicts that levels in the field of -1 log CFU/g and 0.1% prevalence could have resulted in an outbreak approximately the size of the 2006 E. coli O157:H7 outbreak. This quantitative microbial risk assessment model represents a preliminary framework that identifies available data and provides initial risk estimates for pathogenic E. coli in leafy greens. Data gaps include retail storage times, correlations between storage time and temperature, determining the importance of E. coli O157:H7 in leafy greens lag time models, and validation of the importance of cross-contamination during the washing process.
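
    The spreadsheet model itself is not reproduced here, but the stated inputs (0.1% prevalence, -1 log CFU/g in the field, growth up to 1 log CFU/day under abuse) can be explored with a few lines of Monte Carlo. The serving size, storage times and abused fraction below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000                                   # simulated servings

# Inputs stated in the record: -1 log CFU/g field contamination and
# 0.1% prevalence; everything else here is an illustrative assumption.
contaminated = rng.random(n) < 0.001
log_conc = np.full(n, -1.0)                   # log10 CFU/g in the field
days = rng.uniform(0, 3, n)                   # retail/home storage (days)
abused = rng.random(n) < 0.3                  # fraction temperature-abused
growth = np.where(abused, 1.0 * days, 0.0)    # up to 1 log10 per day
serving_g = 85.0

cfu_per_serving = np.where(
    contaminated, 10.0 ** (log_conc + growth) * serving_g, 0.0)

pos = cfu_per_serving[contaminated]
print(f"contaminated servings: {contaminated.sum()}")
print(f"median dose in those servings: {np.median(pos):.1f} CFU")
print(f"95th percentile dose: {np.percentile(pos, 95):.0f} CFU")
```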

  2. QUANTITATIVE ESTIMATION OF VOLUMETRIC ICE CONTENT IN FROZEN GROUND BY DIPOLE ELECTROMAGNETIC PROFILING METHOD

    Directory of Open Access Journals (Sweden)

    L. G. Neradovskiy

    2018-01-01

    Full Text Available Volumetric estimation of the ice content in frozen soils is known as one of the main problems in engineering geocryology and permafrost geophysics. A new way to use the known method of dipole electromagnetic profiling for the quantitative estimation of the volumetric ice content in frozen soils is discussed. Investigations of the foundation of the railroad in Yakutia (i.e., in the permafrost zone) were used as an example for this new approach. Unlike the conventional way, in which the permafrost is investigated by its resistivity and the construction of geo-electrical cross-sections, the new approach is aimed at the study of the dynamics of the process of attenuation in the layer of annual heat cycle in the field of a high-frequency vertical magnetic dipole. This task is simplified if not all the characteristics of the polarization ellipse are measured but only one, the vertical component of the dipole field, which can be the most easily measured. The collected measurement data were used to analyze the computational errors of the average values of the volumetric ice content derived from the amplitude attenuation of the vertical component of the dipole field. Note that the volumetric ice content is very important for construction. It is shown that usually the relative error of computation of this characteristic of a frozen soil does not exceed 20% if the works are performed by the above procedure using the key-site methodology. This level of accuracy meets the requirements of the design-and-survey works for quick, inexpensive, and environmentally friendly zoning of built-up remote and sparsely populated territories of the Russian permafrost zone according to the degree of ice content in frozen foundations of engineering constructions.

  3. A quantitative model for estimating mean annual soil loss in cultivated land using 137Cs measurements

    International Nuclear Information System (INIS)

    Yang Hao; Zhao Qiguo; Du Mingyuan; Minami, Katsuyuki; Hatta, Tamao

    2000-01-01

    The radioisotope 137Cs has been widely used to determine rates of cultivated soil loss. Many calibration relationships (including both empirical relationships and theoretical models) have been employed to estimate erosion rates from the amount of 137Cs lost from the cultivated soil profile. However, there are important limitations which restrict the reliability of these models, which consider only a uniform distribution of 137Cs in the plough layer and the plough depth. As a result, erosion rates may be overestimated or underestimated. This article presents a quantitative model for the relation between the amount of 137Cs lost from the cultivated soil profile and the rate of soil erosion. During the construction of this mass balance model we considered the following parameters: the remaining fraction of the surface enrichment layer (F_R), the thickness of the surface enrichment layer (H_s), the depth of the plough layer (H_p), the input fraction of the total 137Cs fallout deposition during a given year t (F_t), the radioactive decay of 137Cs (k), and the sampling year (t). The simulation results showed that the erosion rates estimated using this model were very sensitive to changes in the values of the parameters F_R, H_s, and H_p. We also observed that the relationship between the rate of soil loss and 137Cs depletion is neither linear nor logarithmic, and is very complex. Although the model is an improvement over existing approaches to derive calibration relationships for cultivated soil, it requires empirical information on local soil properties and the behavior of 137Cs in the soil profile. There is clearly still a need for more precise information on the latter aspect and, in particular, on the retention of 137Cs fallout in the top few millimeters of the soil profile and on the enrichment and depletion effects associated with soil redistribution (i.e. for determining accurate values of F_R and H_s). (author)
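
    A simplified, uniform-mixing version of such a mass balance can be sketched as below; unlike the model in the record it ignores the surface enrichment layer (F_R, H_s), so treat it purely as an illustration of the annual bookkeeping. The fallout history is invented.

```python
import math

LAMBDA = math.log(2) / 30.17          # 137Cs decay constant (1/yr)

def cs137_inventory(erosion_cm_per_yr, fallout, H_p=20.0):
    """Toy annual mass balance for the 137Cs inventory A (Bq/m2) of a
    ploughed soil: each year add the fallout input, remove the fraction
    of the plough layer (depth H_p, cm) lost to erosion, and apply
    radioactive decay. A uniform-mixing simplification only."""
    A = 0.0
    for yr in sorted(fallout):
        A += fallout[yr]                               # deposition input
        A *= (1.0 - erosion_cm_per_yr / H_p)           # erosional loss
        A *= math.exp(-LAMBDA)                         # one year of decay
    return A

# Illustrative fallout history (Bq/m2/yr), peaking in the early 1960s.
fallout = {y: 200.0 for y in range(1954, 1964)} | {y: 30.0 for y in range(1964, 1981)}
ref = cs137_inventory(0.0, fallout)            # undisturbed reference site
eroded = cs137_inventory(0.2, fallout)         # 0.2 cm/yr soil loss
print(f"137Cs depletion: {100 * (1 - eroded / ref):.1f}%")
```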

  4. Improving Satellite Quantitative Precipitation Estimation Using GOES-Retrieved Cloud Optical Depth

    Energy Technology Data Exchange (ETDEWEB)

    Stenz, Ronald; Dong, Xiquan; Xi, Baike; Feng, Zhe; Kuligowski, Robert J.

    2016-02-01

    To address significant gaps in ground-based radar coverage and rain gauge networks in the U.S., geostationary satellite quantitative precipitation estimates (QPEs) such as the Self-Calibrating Multivariate Precipitation Retrievals (SCaMPR) can be used to fill in both the spatial and temporal gaps of ground-based measurements. Additionally, with the launch of GOES-R, the temporal resolution of satellite QPEs may be comparable to that of Weather Service Radar-1988 Doppler (WSR-88D) volume scans as GOES images will be available every five minutes. However, while satellite QPEs have strengths in spatial coverage and temporal resolution, they face limitations particularly during convective events. Deep Convective Systems (DCSs) have large cloud shields with similar brightness temperatures (BTs) over nearly the entire system, but widely varying precipitation rates beneath these clouds. Geostationary satellite QPEs relying on the indirect relationship between BTs and precipitation rates often suffer from large errors because anvil regions (little/no precipitation) cannot be distinguished from rain-cores (heavy precipitation) using only BTs. However, a combination of BTs and optical depth (τ) has been found to reduce overestimates of precipitation in anvil regions (Stenz et al. 2014). A new rain mask algorithm incorporating both τ and BTs has been developed, and its application to the existing SCaMPR algorithm was evaluated. The performance of the modified SCaMPR was evaluated using traditional skill scores and a more detailed analysis of performance in individual DCS components by utilizing the Feng et al. (2012) classification algorithm. SCaMPR estimates with the new rain mask applied benefited from significantly reduced overestimates of precipitation in anvil regions and overall improvements in skill scores.
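
    The essence of the modified rain mask can be sketched as a joint threshold test on brightness temperature and optical depth; the threshold values below are placeholders, not the tuned SCaMPR values.

```python
import numpy as np

def rain_mask(bt_k, tau, bt_max=235.0, tau_min=20.0):
    """Screen out anvil pixels before applying a BT-based rain-rate
    regression: require both a cold cloud top (BT below bt_max) and a
    large optical depth (tau above tau_min). Thresholds are illustrative."""
    return (bt_k < bt_max) & (tau > tau_min)

bt = np.array([[220.0, 218.0], [210.0, 240.0]])     # brightness temperature (K)
tau = np.array([[45.0, 8.0], [60.0, 30.0]])         # retrieved optical depth
print(rain_mask(bt, tau))
# [[ True False]   <- second pixel is cold but optically thin: anvil
#  [ True False]]  <- last pixel is too warm, masked regardless of tau
```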

  5. Quantitative Estimation of the Climatic Effects of Carbon Transferred by International Trade.

    Science.gov (United States)

    Wei, Ting; Dong, Wenjie; Moore, John; Yan, Qing; Song, Yi; Yang, Zhiyong; Yuan, Wenping; Chou, Jieming; Cui, Xuefeng; Yan, Xiaodong; Wei, Zhigang; Guo, Yan; Yang, Shili; Tian, Di; Lin, Pengfei; Yang, Song; Wen, Zhiping; Lin, Hui; Chen, Min; Feng, Guolin; Jiang, Yundi; Zhu, Xian; Chen, Juan; Wei, Xin; Shi, Wen; Zhang, Zhiguo; Dong, Juan; Li, Yexin; Chen, Deliang

    2016-06-22

    Carbon transfer via international trade affects the spatial pattern of global carbon emissions by redistributing emissions related to production of goods and services. It has potential impacts on attribution of the responsibility of various countries for climate change and formulation of carbon-reduction policies. However, the effect of carbon transfer on climate change has not been quantified. Here, we present a quantitative estimate of climatic impacts of carbon transfer based on a simple CO2 Impulse Response Function and three Earth System Models. The results suggest that carbon transfer leads to a migration of CO2 by 0.1-3.9 ppm or 3-9% of the rise in the global atmospheric concentrations from developed countries to developing countries during 1990-2005 and potentially reduces the effectiveness of the Kyoto Protocol by up to 5.3%. However, the induced atmospheric CO2 concentration and climate changes (e.g., in temperature, ocean heat content, and sea-ice) are very small and lie within observed interannual variability. Given continuous growth of transferred carbon emissions and their proportion in global total carbon emissions, the climatic effect of traded carbon is likely to become more significant in the future, highlighting the need to consider carbon transfer in future climate negotiations.
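
    The first ingredient, a CO2 impulse response function, lends itself to a compact sketch: superpose decaying pulses for a yearly series of transferred emissions. The IRF coefficients below approximate the Joos et al. (2013) multi-model mean, and the transferred-emission series is hypothetical; both are assumptions for illustration.

```python
import numpy as np

# CO2 impulse response: fraction of an emitted pulse still airborne
# after t years (coefficients approximate, for illustration only).
A = [0.2173, 0.2240, 0.2824, 0.2763]
TAU = [np.inf, 394.4, 36.54, 4.304]

def irf(t):
    return sum(a * np.exp(-t / tau) for a, tau in zip(A, TAU))

PPM_PER_GTC = 1.0 / 2.12        # ~2.12 GtC per ppm of atmospheric CO2

def delta_co2(emissions_gtc, years):
    """Atmospheric CO2 perturbation (ppm) from a yearly emission series
    (GtC/yr), e.g. emissions transferred via trade, by superposing
    decaying pulses."""
    t = np.asarray(years, dtype=float)
    out = np.zeros_like(t)
    for ti, e in zip(t, emissions_gtc):
        later = t >= ti
        out[later] += e * PPM_PER_GTC * irf(t[later] - ti)
    return out

years = np.arange(1990, 2006)
transferred = np.full(years.size, 0.3)   # hypothetical 0.3 GtC/yr transfer
print(f"CO2 migration by 2005: {delta_co2(transferred, years)[-1]:.2f} ppm")
```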

  6. Quantitative Precipitation Estimation over Ocean Using Bayesian Approach from Microwave Observations during the Typhoon Season

    Directory of Open Access Journals (Sweden)

    Jen-Chi Hu

    2009-01-01

    Full Text Available We have developed a new Bayesian approach to retrieve oceanic rain rate from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI), with an emphasis on typhoon cases in the West Pacific. Retrieved rain rates are validated with measurements of rain gauges located on Japanese islands. To demonstrate improvement, retrievals are also compared with those from the TRMM/Precipitation Radar (PR), the Goddard Profiling Algorithm (GPROF), and a multi-channel linear regression statistical method (MLRS). We have found that, qualitatively, all methods retrieved similar horizontal distributions in terms of locations of eyes and rain bands of typhoons. Quantitatively, our new Bayesian retrievals have the best linearity and the smallest root mean square (RMS) error against rain gauge data for 16 typhoon overpasses in 2004. The correlation coefficient and RMS of our retrievals are 0.95 and ~2 mm hr-1, respectively. In particular, at heavy rain rates, our Bayesian retrievals outperform those retrieved from GPROF and MLRS. Overall, the new Bayesian approach accurately retrieves surface rain rate for typhoon cases. Accurate rain rate estimates from this method can be assimilated into models to improve forecasts and prevent potential damage in Taiwan during typhoon seasons.
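
    A Bayesian retrieval of this kind reduces to a likelihood-weighted average over an a-priori database. The sketch below uses a synthetic two-channel database and Gaussian channel errors; in a real system the database would come from cloud-resolving model and radiative transfer simulations.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic a-priori database: simulated brightness temperatures
# (2 channels, K) paired with the rain rates that produced them.
rain_db = rng.gamma(shape=0.8, scale=6.0, size=5000)          # mm/h
tb_db = np.column_stack([280 - 3.0 * rain_db, 150 + 4.0 * rain_db])
tb_db += rng.normal(0, 2.0, tb_db.shape)

def bayes_retrieve(tb_obs, sigma=3.0):
    """Posterior-mean rain rate: weight every database entry by a
    Gaussian likelihood of the observed TB vector (channel errors
    assumed independent with standard deviation sigma)."""
    d2 = ((tb_db - tb_obs) ** 2).sum(axis=1) / sigma ** 2
    w = np.exp(-0.5 * (d2 - d2.min()))        # subtract min for stability
    return (w * rain_db).sum() / w.sum()

print(f"retrieved rain rate: {bayes_retrieve(np.array([250.0, 190.0])):.1f} mm/h")
```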

  7. Quantitative Estimation of the Climatic Effects of Carbon Transferred by International Trade

    Science.gov (United States)

    Wei, Ting; Dong, Wenjie; Moore, John; Yan, Qing; Song, Yi; Yang, Zhiyong; Yuan, Wenping; Chou, Jieming; Cui, Xuefeng; Yan, Xiaodong; Wei, Zhigang; Guo, Yan; Yang, Shili; Tian, Di; Lin, Pengfei; Yang, Song; Wen, Zhiping; Lin, Hui; Chen, Min; Feng, Guolin; Jiang, Yundi; Zhu, Xian; Chen, Juan; Wei, Xin; Shi, Wen; Zhang, Zhiguo; Dong, Juan; Li, Yexin; Chen, Deliang

    2016-06-01

    Carbon transfer via international trade affects the spatial pattern of global carbon emissions by redistributing emissions related to production of goods and services. It has potential impacts on attribution of the responsibility of various countries for climate change and formulation of carbon-reduction policies. However, the effect of carbon transfer on climate change has not been quantified. Here, we present a quantitative estimate of climatic impacts of carbon transfer based on a simple CO2 Impulse Response Function and three Earth System Models. The results suggest that carbon transfer leads to a migration of CO2 by 0.1-3.9 ppm or 3-9% of the rise in the global atmospheric concentrations from developed countries to developing countries during 1990-2005 and potentially reduces the effectiveness of the Kyoto Protocol by up to 5.3%. However, the induced atmospheric CO2 concentration and climate changes (e.g., in temperature, ocean heat content, and sea-ice) are very small and lie within observed interannual variability. Given continuous growth of transferred carbon emissions and their proportion in global total carbon emissions, the climatic effect of traded carbon is likely to become more significant in the future, highlighting the need to consider carbon transfer in future climate negotiations.

  8. Radar-derived quantitative precipitation estimation in complex terrain over the eastern Tibetan Plateau

    Science.gov (United States)

    Gou, Yabin; Ma, Yingzhao; Chen, Haonan; Wen, Yixin

    2018-05-01

    Quantitative precipitation estimation (QPE) is one of the important applications of weather radars. However, in complex terrain such as the Tibetan Plateau, it is a challenging task to obtain an optimal Z-R relation due to the complex spatial and temporal variability in precipitation microphysics. This paper develops two radar QPE schemes respectively based on Reflectivity Threshold (RT) and Storm Cell Identification and Tracking (SCIT) algorithms using observations from 11 Doppler weather radars and 3264 rain gauges over the Eastern Tibetan Plateau (ETP). These two QPE methodologies are evaluated extensively using four precipitation events that are characterized by different meteorological features. Precipitation characteristics of independent storm cells associated with these four events, as well as the storm-scale differences, are investigated using short-term vertical profile of reflectivity (VPR) clusters. Evaluation results show that the SCIT-based rainfall approach performs better than the simple RT-based method for all precipitation events in terms of score comparison using validation gauge measurements as references. It is also found that the SCIT-based approach can effectively mitigate the local error of radar QPE and represent the precipitation spatiotemporal variability better than the RT-based scheme.

  9. Quantitative estimation of the right ventricular overloading by thallium-201 myocardial scintigraphy

    International Nuclear Information System (INIS)

    Owada, Kenji; Machii, Kazuo; Tsukahara, Yasunori

    1982-01-01

    Thallium-201 myocardial scintigraphy was performed on 55 patients with various types of right ventricular overloading. The right ventricular (RV) free wall was visualized in 39 out of the 55 patients (71%). The mean values of right ventricular systolic pressure (RVSP) and pulmonary artery mean pressure (PAMP) in the visualized cases (uptakers) were 54.6 ± 24.1 and 30.5 ± 15.3 mmHg, respectively. These values were significantly higher than those of the non-visualized cases (non-uptakers). There were 12 RVSP-"normotensive" uptakers and 15 PAMP-"normotensive" uptakers. The RV free wall images were classified into three types according to their morphological features. Type I was predominantly seen in cases of RV pressure overloading, type II in RV volume overloading and type III in combined ventricular overloading. RVSP in the type III group was significantly higher than that in the other two groups. The ratio of radioactivity in the RV free wall to that in the interventricular septum (IVS), the RV/IVS uptake ratio, was calculated using left anterior oblique (LAO) view images. The RV/IVS uptake ratio closely correlated with RVSP and PAMP (r = 0.88 and 0.82, respectively). In each group of RV free wall image, there were also close correlations between the RV/IVS uptake ratio and both RVSP and PAMP. Our results indicate that the RV/IVS uptake ratio can be used as a parameter for the semi-quantitative estimation of right ventricular overloading. (author)

  10. Development and testing of transfer functions for generating quantitative climatic estimates from Australian pollen data

    Science.gov (United States)

    Cook, Ellyn J.; van der Kaars, Sander

    2006-10-01

    We review attempts to derive quantitative climatic estimates from Australian pollen data, including the climatic envelope, climatic indicator and modern analogue approaches, and outline the need to pursue alternatives for use as input to, or validation of, simulations by models of past, present and future climate patterns. To this end, we have constructed and tested modern pollen-climate transfer functions for mainland southeastern Australia and Tasmania using the existing southeastern Australian pollen database, and for northern Australia using a new pollen database we are developing. After testing for statistical significance, 11 parameters were selected for mainland southeastern Australia, seven for Tasmania and six for northern Australia. The functions are based on weighted-averaging partial least squares regression and their predictive ability was evaluated against modern observational climate data using leave-one-out cross-validation. Functions for summer, annual and winter rainfall and temperatures are most robust for southeastern Australia, while in Tasmania functions for minimum temperature of the coldest period, mean winter and mean annual temperature are the most reliable. In northern Australia, annual and summer rainfall and annual and summer moisture indexes are the strongest. The validation of all functions means that all can be applied to Quaternary pollen records from these three areas with confidence.
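
    The leave-one-out validation step can be sketched with ordinary PLS regression standing in for weighted-averaging PLS; the pollen/climate training set below is synthetic, so the scores are illustrative only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(6)

# Synthetic "modern" training set: 80 sites, 25 pollen taxa, and one
# climate parameter (e.g. mean annual temperature). Plain PLS stands
# in for the weighted-averaging PLS used in the study.
n_sites, n_taxa = 80, 25
climate = rng.uniform(5, 25, n_sites)                     # deg C
optima = rng.uniform(5, 25, n_taxa)                       # taxon optima
pollen = np.exp(-0.5 * ((climate[:, None] - optima) / 4) ** 2)
pollen += rng.normal(0, 0.05, pollen.shape)
pollen = 100 * pollen / pollen.sum(axis=1, keepdims=True) # percentages

pred = np.empty(n_sites)
for train, test in LeaveOneOut().split(pollen):
    model = PLSRegression(n_components=3).fit(pollen[train], climate[train])
    pred[test] = model.predict(pollen[test]).ravel()

rmsep = np.sqrt(np.mean((pred - climate) ** 2))
r2 = np.corrcoef(pred, climate)[0, 1] ** 2
print(f"LOO RMSEP: {rmsep:.2f} deg C, r2: {r2:.2f}")
```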

  11. An Experimental Study for Quantitative Estimation of Rebar Corrosion in Concrete Using Ground Penetrating Radar

    Directory of Open Access Journals (Sweden)

    Md Istiaque Hasan

    2016-01-01

    Full Text Available Corrosion of steel rebar in reinforced concrete is one of the most important durability issues in the service life of a structure. In this paper, an investigation is conducted to find out the relationship between the amount of rebar corrosion and the GPR maximum positive amplitude. Accelerated corrosion was simulated in the lab by impressing direct current into steel rebar that was submerged in a 5% salt water solution. The amount of corrosion was varied in the rebars with different levels of mass loss ranging from 0% to 45%. The corroded rebars were then placed into three different oil emulsion tanks having different dielectric properties similar to concrete. The maximum amplitudes from the corroded bars were recorded. A linear relationship between the maximum positive amplitudes and the amount of corrosion in terms of percentage loss of area was observed. It was proposed that the relationship between the GPR maximum amplitude and the amount of corrosion can be used as the basis of an NDE technique for quantitative estimation of corrosion.
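
    The proposed NDE use amounts to fitting and then inverting a straight line; a sketch with hypothetical calibration data:

```python
import numpy as np

# Hypothetical calibration data in the spirit of the experiment:
# percent loss of rebar cross-section vs GPR maximum positive
# amplitude (arbitrary digitizer units).
loss_pct = np.array([0, 5, 10, 15, 20, 25, 30, 35, 40, 45], dtype=float)
amplitude = 5200 - 38.0 * loss_pct + np.random.default_rng(7).normal(0, 60, 10)

slope, intercept = np.polyfit(loss_pct, amplitude, 1)

def estimate_loss(amp):
    """Invert the fitted line to estimate corrosion (% area loss)."""
    return (amp - intercept) / slope

print(f"fit: amplitude = {intercept:.0f} {slope:+.1f} * loss")
print(f"amplitude 4400 -> estimated loss {estimate_loss(4400):.0f}%")
```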

  12. Ultrasonic 3-D Vector Flow Method for Quantitative In Vivo Peak Velocity and Flow Rate Estimation

    DEFF Research Database (Denmark)

    Holbek, Simon; Ewertsen, Caroline; Bouzari, Hamed

    2017-01-01

    Current clinical ultrasound (US) systems are limited to showing blood flow movement in either 1-D or 2-D. In this paper, a method for estimating 3-D vector velocities in a plane using the transverse oscillation method, a 32×32 element matrix array, and the experimental US scanner SARUS is presented... The method is validated in two phantom studies, where flow rates are measured in a flow-rig, providing a constant parabolic flow, and in a straight-vessel phantom (∅ = 8 mm) connected to a flow pump capable of generating time-varying waveforms. Flow rates are estimated to be 82.1 ± 2.8 L/min in the flow-rig compared

  13. Motor unit number estimation in the quantitative assessment of severity and progression of motor unit loss in Hirayama disease.

    Science.gov (United States)

    Zheng, Chaojun; Zhu, Yu; Zhu, Dongqing; Lu, Feizhou; Xia, Xinlei; Jiang, Jianyuan; Ma, Xiaosheng

    2017-06-01

    To investigate motor unit number estimation (MUNE) as a method to quantitatively evaluate the severity and progression of motor unit loss in Hirayama disease (HD). Multipoint incremental MUNE was performed bilaterally on both abductor digiti minimi and abductor pollicis brevis muscles in 46 patients with HD and 32 controls, along with handgrip strength examination. MUNE was re-evaluated approximately 1 year after the initial examination in 17 patients with HD. The MUNE values were significantly lower in all the tested muscles in the HD group (P < 0.05) and decreased with disease duration (P < 0.05). Follow-up examination demonstrated significant motor unit loss in patients with HD within approximately 1 year (P < 0.05), particularly in those with disease duration under 4 years. A reduction in the functioning motor units was found in patients with HD compared with that in controls, even in the early asymptomatic stages. Moreover, the motor unit loss in HD progresses gradually as the disease advances. These results have provided evidence for the application of MUNE in estimating the reduction of motor units in HD and confirming the validity of MUNE for tracking the progression of HD in a clinical setting. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
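
    For context, the arithmetic behind incremental MUNE is a simple ratio: the maximal CMAP amplitude divided by the mean single-motor-unit potential (SMUP) amplitude estimated from stimulus-response increments. The values below are illustrative, not patient data.

```python
import numpy as np

def incremental_mune(cmap_max_mv, increment_amplitudes_mv):
    """Multipoint incremental MUNE: the mean amplitude of the sampled
    SMUPs (estimated from discrete all-or-none increments of the CMAP
    at threshold stimulation) is divided into the maximal CMAP."""
    mean_smup = np.mean(increment_amplitudes_mv)
    return cmap_max_mv / mean_smup

increments = np.array([0.11, 0.09, 0.13, 0.08, 0.10])  # mV, 5 sampled SMUPs
print(f"MUNE ~ {incremental_mune(9.5, increments):.0f} motor units")
```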

  14. Estimation of the contribution of private providers in tuberculosis case notification and treatment outcome in Pakistan.

    Science.gov (United States)

    Chughtai, A A; Qadeer, E; Khan, W; Hadi, H; Memon, I A

    2013-03-01

    To improve involvement of the private sector in the national tuberculosis (TB) programme in Pakistan, various public-private mix projects were set up between 2004 and 2009. A retrospective analysis of data was made to study 6 different public-private mix models for TB control in Pakistan and estimate the contribution of the various private providers to TB case notification and treatment outcome. The number of TB cases notified through the private sector increased significantly from 77 cases in 2004 to 37,656 in 2009. Among the models, the nongovernmental organization model made the greatest contribution to case notification (58.3%), followed by the hospital-based model (18.9%). Treatment success was highest for the district-led model (94.1%) and lowest for the hospital-based model (74.2%). The private sector made an important contribution to the national data through the various public-private mix projects. Issues of sustainability and the lack of treatment supporters are discussed as reasons for the lack of success of some projects.

  15. Investigation of Weather Radar Quantitative Precipitation Estimation Methodologies in Complex Orography

    Directory of Open Access Journals (Sweden)

    Mario Montopoli

    2017-02-01

    Full Text Available Near surface quantitative precipitation estimation (QPE) from weather radar measurements is an important task for feeding hydrological models, limiting the impact of severe rain events at the ground as well as aiding validation studies of satellite-based rain products. To date, several works have analyzed the performance of various QPE algorithms using actual and synthetic experiments, possibly trained by measurement of particle size distributions and electromagnetic models. Most of these studies support the use of dual polarization radar variables not only to ensure a good level of data quality but also as a direct input to rain estimation equations. One of the most important limiting factors in radar QPE accuracy is the vertical variability of particle size distribution, which affects all the acquired radar variables as well as estimated rain rates at different levels. This is particularly impactful in mountainous areas, where the sampled altitudes are likely several hundred meters above the surface. In this work, we analyze the impact of the vertical profile variations of rain precipitation on several dual polarization radar QPE algorithms when they are tested in a complex orography scenario. So far, in weather radar studies, more emphasis has been given to the extrapolation strategies that use the signature of the vertical profiles in terms of radar co-polar reflectivity. This may limit the use of the radar vertical profiles when dual polarization QPE algorithms are considered. In that case, all the radar variables used in the rain estimation process should be consistently extrapolated at the surface to try and maintain the correlations among them. To avoid facing such a complexity, especially with a view to operational implementation, we propose looking at the features of the vertical profile of rain (VPR), i.e., after performing the rain estimation. This procedure allows characterization of a single variable (i.e., rain) when dealing with

  16. Quantitative estimation of the extent of alkylation of DNA following treatment of mammalian cells with non-radioactive alkylating agents

    Energy Technology Data Exchange (ETDEWEB)

    Snyder, R.D. (Univ. of Tennessee, Oak Ridge); Regan, J.D.

    1981-01-01

    Alkaline sucrose sedimentation has been used to quantitate phosphotriester formation following treatment of human cells with the monofunctional alkylating agents methyl and ethyl methanesulfonate. These persistent alkaline-labile lesions are not repaired during short-term culture conditions and thus serve as a useful and precise index of the total alkylation of the DNA. Estimates of alkylation by this procedure compare favorably with direct estimates by use of labeled alkylating agents.

  17. Quantitative estimation of groundwater recharge with special reference to the use of natural radioactive isotopes and hydrological simulation

    International Nuclear Information System (INIS)

    Bredenkamp, D.B.

    1978-01-01

    Methods of quantitative estimation of groundwater recharge have been evaluated to 1) illustrate uncertainties associated with the methods usually applied, 2) indicate some of the simplifying assumptions inherent to a specific method, 3) promote the use of more than one technique in order to improve the reliability of the combined recharge estimate, and 4) propose a hydrological model by which the annual recharge and annual variability of recharge could be ascertained. Classical methods such as the water balance equation and flow nets have been reviewed. The use of environmental tritium and radiocarbon has been illustrated as a means of obtaining qualitative answers on the occurrence of recharge and of revealing the effective mechanism of groundwater recharge through the soil. Quantitative estimation of recharge from the ratio of recharge to storage has been demonstrated for the Kuruman recharge basin. Methods of interpreting tritium profiles in order to obtain a quantitative estimate of recharge have been shown, with application of the technique to Rietondale and a dolomitic aquifer in the Western Transvaal. The major part of the thesis has been devoted to the use of a hydrological model as a means of estimating groundwater recharge. Subsequent to a general discussion of the conceptual logic, various models have been proposed and tested

  18. Quantitative Estimation of Temperature Variations in Plantar Angiosomes: A Study Case for Diabetic Foot

    Directory of Open Access Journals (Sweden)

    H. Peregrina-Barreto

    2014-01-01

    Full Text Available Thermography is a useful tool since it provides information that may help in the diagnosis of several diseases in a noninvasive and fast way. Particularly, thermography has been applied in the study of the diabetic foot. However, most of these studies report only qualitative information, making it difficult to measure significant parameters such as temperature variations. These variations are important in the analysis of the diabetic foot, since they could bring knowledge, for instance, regarding ulceration risks. The early detection of ulceration risks is considered an important research topic in the medicine field, as its objective is to avoid major complications that might lead to a limb amputation. The absence of symptoms in the early phase of ulceration is the main obstacle to providing a timely diagnosis in subjects with neuropathy. Since the relation between temperature and ulceration risks is well established in the literature, a methodology that obtains quantitative temperature differences in the plantar area of the diabetic foot to detect ulceration risks is proposed in this work. Such methodology is based on the angiosome concept and image processing.

  19. Estimation of quantitative levels of diesel exhaust exposure and the health impact in the contemporary Australian mining industry

    NARCIS (Netherlands)

    Peters, Susan; de Klerk, Nicholas; Reid, Alison; Fritschi, Lin; Musk, Aw Bill; Vermeulen, Roel

    2017-01-01

    OBJECTIVES: To estimate quantitative levels of exposure to diesel exhaust expressed by elemental carbon (EC) in the contemporary mining industry and to describe the excess risk of lung cancer that may result from those levels. METHODS: EC exposure has been monitored in Western Australian miners

  20. Quantitative precipitation estimation in complex orography using quasi-vertical profiles of dual polarization radar variables

    Science.gov (United States)

    Montopoli, Mario; Roberto, Nicoletta; Adirosi, Elisa; Gorgucci, Eugenio; Baldini, Luca

    2017-04-01

    Weather radars are nowadays a unique tool to estimate quantitatively the rain precipitation near the surface. This is an important task for plenty of applications, for example, to feed hydrological models, mitigate the impact of severe storms at the ground using radar information in modern warning tools, as well as aid the validation studies of satellite-based rain products. With respect to the latter application, several ground validation studies of the Global Precipitation Mission (GPM) products have recently highlighted the importance of accurate QPE from ground-based weather radars. To date, plenty of works have analyzed the performance of various QPE algorithms making use of actual and synthetic experiments, possibly trained by measurement of particle size distributions and electromagnetic models. Most of these studies support the use of dual polarization variables not only to ensure a good level of radar data quality but also as a direct input in the rain estimation equations. Among others, one of the most important limiting factors in radar QPE accuracy is the vertical variability of particle size distribution that affects, at different levels, all the radar variables acquired as well as rain rates. This is particularly impactful in mountainous areas where the altitudes of the radar sampling are likely several hundred meters above the surface. In this work, we analyze the impact of the vertical profile variations of rain precipitation on several dual polarization radar QPE algorithms when they are tested in a complex orography scenario. So far, in weather radar studies, more emphasis has been given to the extrapolation strategies that make use of the signature of the vertical profiles in terms of radar co-polar reflectivity. This may limit the use of the radar vertical profiles when dual polarization QPE algorithms are considered because in that case all the radar variables used in the rain estimation process should be consistently extrapolated at the surface

  1. GENE ACTION AND HERITABILITY ESTIMATES OF QUANTITATIVE CHARACTERS AMONG LINES DERIVED FROM VARIETAL CROSSES OF SOYBEAN

    Directory of Open Access Journals (Sweden)

    Lukman Hakim

    2017-09-01

    Full Text Available The knowledge of genetic action, heritability and genetic variability is useful and permits the plant breeder to design efficient breeding strategies in soybean. The objectives of this study were to determine gene action, genetic variability, heritability and genetic advance of quantitative characters that could be realized through selection of segregating progenies. The F1 population and F2 progenies of six crosses among five soybean varieties were evaluated at Muneng Experimental Station, East Java during the dry season of 2014. The lines were planted in a randomized block design with four replications. The seeds of each F1 and F2 progeny and the parents were planted in four rows of 3 m long, 40 cm x 20 cm plant spacing, one plant per hill. The results showed that pod number per plant, seed yield, plant yield and harvest index were predominantly controlled by additive gene effects. Seed size was also controlled by additive gene effects, with small seed dominant to large seed size. Plant height was found to be controlled by both additive and nonadditive gene effects. Similarly, days to maturity was due mainly to additive and nonadditive gene effects, with earliness dominant to lateness. Days to maturity had the highest heritability estimate of 49.3%, followed by seed size (47.0%), harvest index (45.8%), and pod number per plant (45.5%). Therefore, they could be used in the selection of a high yielding soybean genotype in the F3 generation.
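
    The heritability figures quoted are textbook variance-component ratios; for orientation, here is a sketch of the arithmetic with invented variance components (chosen so h2 lands near the reported 49.3%).

```python
def heritability_and_gain(var_additive, var_env, k=2.06):
    """Narrow-sense heritability h2 = VA / VP from variance components,
    and the expected genetic advance under truncation selection,
    GA = k * h2 * sqrt(VP) (k = 2.06 for 5% selection intensity).
    Textbook arithmetic with invented inputs, not data from the study."""
    var_phen = var_additive + var_env
    h2 = var_additive / var_phen
    gain = k * h2 * var_phen ** 0.5
    return h2, gain

# Illustrative variance components for days to maturity (days^2).
h2, gain = heritability_and_gain(var_additive=4.9, var_env=5.1)
print(f"h2 = {h2:.1%}, expected genetic advance = {gain:.1f} days")
```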

  2. A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation

    Science.gov (United States)

    Kim, Ji Chul

    2017-01-01

    Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework. PMID:28522983

  3. A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation

    Directory of Open Access Journals (Sweden)

    Ji Chul Kim

    2017-05-01

    Full Text Available Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework.

  4. Quantitative Phosphoproteomic Analysis Provides Insight into the Response to Short-Term Drought Stress in Ammopiptanthus mongolicus Roots

    Directory of Open Access Journals (Sweden)

    Huigai Sun

    2017-10-01

    Full Text Available Drought is one of the major abiotic stresses that negatively affects plant growth and development. Ammopiptanthus mongolicus is an ecologically important shrub in the mid-Asia desert region and is used as a model for abiotic tolerance research in trees. Protein phosphorylation participates in the regulation of various biological processes; however, knowledge of the phosphorylation events associated with drought stress signaling and response in plants is still limited. Here, we conducted a quantitative phosphoproteomic analysis of the response of A. mongolicus roots to short-term drought stress. Data are available via the iProX database with project ID IPX0000971000. In total, 7841 phosphorylation sites were found from the 2019 identified phosphopeptides, corresponding to 1060 phosphoproteins. Drought stress results in significant changes in the abundance of 103 phosphopeptides, corresponding to 90 differentially-phosphorylated phosphoproteins (DPPs). Motif-x analysis identified two motifs, [pSP] and [RXXpS], from these DPPs. Functional enrichment and protein-protein interaction analysis showed that the DPPs were mainly involved in signal transduction and transcriptional regulation, osmotic adjustment, stress response and defense, RNA splicing and transport, protein synthesis, folding and degradation, and epigenetic regulation. These drought-responsive phosphoproteins and the related signaling and metabolic pathways probably play important roles in drought stress signaling and response in A. mongolicus roots. Our results provide new information for understanding the molecular mechanism of the abiotic stress response in plants at the posttranslational level.

  5. The effects of dominance, regular inbreeding and sampling design on Q(ST), an estimator of population differentiation for quantitative traits.

    Science.gov (United States)

    Goudet, Jérôme; Büchi, Lucie

    2006-02-01

    To test whether quantitative traits are under directional or homogenizing selection, it is common practice to compare population differentiation estimates at molecular markers (F(ST)) and quantitative traits (Q(ST)). If the trait is neutral and its determinism is additive, then theory predicts that Q(ST) = F(ST), while Q(ST) > F(ST) is predicted under directional selection for different local optima, and Q(ST) < F(ST) under homogenizing selection. We compare different sampling designs and find that it is always best to sample many populations (>20) with few families (five) rather than few populations with many families. Provided that estimates of Q(ST) are derived from individuals originating from many populations, we conclude that the pattern Q(ST) > F(ST), and hence the inference of directional selection for different local optima, is robust to the effect of nonadditive gene actions.
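
    For orientation, Q(ST) is conventionally computed from the between-population and the additive within-population variance components of the trait. A minimal sketch (the variance components are assumed to be already estimated, e.g. from a half-sib breeding design; the numbers are invented):

        def q_st(var_between: float, var_within: float) -> float:
            """Q_ST = s2_B / (s2_B + 2 * s2_W), with s2_B the between-population
            and s2_W the additive within-population variance component."""
            return var_between / (var_between + 2.0 * var_within)

        print(q_st(var_between=0.8, var_within=1.5))  # ~0.21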

  6. Disdrometer-based C-Band Radar Quantitative Precipitation Estimation (QPE) in a highly complex terrain region in tropical Colombia.

    Science.gov (United States)

    Sepúlveda, J.; Hoyos Ortiz, C. D.

    2017-12-01

    An adequate quantification of precipitation over land is critical for many societal applications, including agriculture, hydroelectricity generation, water supply, and risk management associated with extreme events. Rain gauges, the traditional method for precipitation estimation, and an excellent one for estimating the volume of liquid water during a particular precipitation event, do not fully capture the high spatial variability of the phenomenon, which is a requirement for almost all practical applications. The weather radar, on the other hand, an active remote sensing instrument, provides a proxy for rainfall with fine spatial resolution and adequate temporal sampling; however, it does not measure surface precipitation. In order to fully exploit the capabilities of the weather radar, it is necessary to develop quantitative precipitation estimation (QPE) techniques combining radar information with in-situ measurements. Different QPE methodologies are explored and adapted to local observations in a highly complex terrain region in tropical Colombia using a C-band radar and a relatively dense network of rain gauges and disdrometers. One important result is that the expressions reported in the literature for extratropical locations are not representative of the conditions found in the tropical region studied. In addition to reproducing the state-of-the-art techniques, a new multi-stage methodology based on radar-derived variables and disdrometer data is proposed in order to achieve the best QPE possible. The main motivation for this new methodology is that most traditional QPE methods do not directly take into account the different uncertainty sources involved in the process. The main advantage of the multi-stage model compared to traditional models is that it allows assessing and quantifying the uncertainty in the surface rain rate estimation. The sub-hourly rainfall estimations using the multi-stage methodology are realistic.

  7. Identification and uncertainty estimation of vertical reflectivity profiles using a Lagrangian approach to support quantitative precipitation measurements by weather radar

    Science.gov (United States)

    Hazenberg, P.; Torfs, P. J. J. F.; Leijnse, H.; Delrieu, G.; Uijlenhoet, R.

    2013-09-01

    This paper presents a novel approach to estimate the vertical profile of reflectivity (VPR) from volumetric weather radar data using both a traditional Eulerian as well as a newly proposed Lagrangian implementation. For the latter implementation, the recently developed Rotational Carpenter Square Cluster Algorithm (RoCaSCA) is used to delineate precipitation regions at different reflectivity levels. A piecewise linear VPR is estimated for precipitation that is either stratiform or neither stratiform nor convective. As a second aspect of this paper, a novel approach is presented which is able to account for the impact of VPR uncertainty on the estimated radar rainfall variability. Results show that implementation of the VPR identification and correction procedure has a positive impact on quantitative precipitation estimates from radar. Unfortunately, visibility problems severely limit the impact of the Lagrangian implementation beyond distances of 100 km. However, by combining this procedure with the global Eulerian VPR estimation procedure for a given rainfall type (stratiform, and neither stratiform nor convective), the quality of the quantitative precipitation estimates increases up to a distance of 150 km. Analyses of the impact of VPR uncertainty show that this aspect accounts for a large fraction of the differences between weather radar rainfall estimates and rain gauge measurements.
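
    The correction step such studies rely on can be summarized compactly: reflectivity measured aloft is scaled by the normalized profile value at the beam height before rain rates are computed. A minimal sketch, assuming a piecewise-linear normalized VPR (all heights and factors invented):

        import numpy as np

        vpr_heights = np.array([0.0, 1.5, 2.5, 4.0])  # km above ground, illustrative
        vpr_factors = np.array([1.0, 1.0, 0.6, 0.2])  # profile normalized to 1 at ground

        def vpr_correct(dbz_aloft: float, beam_height_km: float) -> float:
            """Extrapolate reflectivity measured aloft down to the surface by
            removing the (linear-units) profile factor at the beam height."""
            f = np.interp(beam_height_km, vpr_heights, vpr_factors)
            z_lin = 10.0 ** (dbz_aloft / 10.0)     # dBZ -> mm^6 m^-3
            return 10.0 * np.log10(z_lin / f)      # surface-level dBZ

        print(vpr_correct(30.0, beam_height_km=3.0))  # ~33.3 dBZ at the surface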

  8. Water, sanitation and hygiene interventions for acute childhood diarrhea: a systematic review to provide estimates for the Lives Saved Tool.

    Science.gov (United States)

    Darvesh, Nazia; Das, Jai K; Vaivada, Tyler; Gaffey, Michelle F; Rasanathan, Kumanan; Bhutta, Zulfiqar A

    2017-11-07

    In the Sustainable Development Goals (SDGs) era, there is growing recognition of the responsibilities of non-health sectors in improving the health of children. Interventions to improve access to clean water, sanitation facilities, and hygiene behaviours (WASH) represent key opportunities to improve child health and well-being by preventing the spread of infectious diseases and improving nutritional status. We conducted a systematic review of studies evaluating the effects of WASH interventions on childhood diarrhea in children 0-5 years old. Searches were run up to September 2016. We screened the titles and abstracts of retrieved articles, followed by screening of the full-text reports of relevant studies. We abstracted study characteristics and quantitative data, and assessed study quality. Meta-analyses were performed for similar intervention and outcome pairs. Pooled analyses showed diarrhea risk reductions from the following interventions: point-of-use water filtration (pooled risk ratio (RR): 0.47, 95% confidence interval (CI): 0.36-0.62), point-of-use water disinfection (pooled RR: 0.69, 95% CI: 0.60-0.79), and hygiene education with soap provision (pooled RR: 0.73, 95% CI: 0.57-0.94). Quality ratings were low or very low for most studies, and heterogeneity was high in pooled analyses. Improvements to the water supply and water disinfection at source did not show significant effects on diarrhea risk, nor did the one eligible study examining the effect of latrine construction. Various WASH interventions show diarrhea risk reductions between 27% and 53% in children 0-5 years old, depending on intervention type, providing ample evidence to support the scale-up of WASH in low and middle-income countries (LMICs). Due to the overall low quality of the evidence and high heterogeneity, further research is required to accurately estimate the magnitude of the effects of these interventions in different contexts.
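
    Pooled risk ratios of this kind are conventionally obtained by inverse-variance weighting on the log scale. A generic sketch (the per-study numbers are invented, not the reviewed trials):

        import numpy as np

        def pooled_rr(rrs, ci_los, ci_his):
            """Fixed-effect inverse-variance pooling of risk ratios; standard
            errors are recovered from the 95% CIs on the log scale."""
            log_rr = np.log(rrs)
            se = (np.log(ci_his) - np.log(ci_los)) / (2 * 1.96)
            w = 1.0 / se ** 2
            pooled = np.sum(w * log_rr) / np.sum(w)
            pooled_se = np.sqrt(1.0 / np.sum(w))
            lo, hi = np.exp(pooled - 1.96 * pooled_se), np.exp(pooled + 1.96 * pooled_se)
            return np.exp(pooled), (lo, hi)

        # Three hypothetical point-of-use filtration studies:
        print(pooled_rr([0.45, 0.52, 0.40], [0.30, 0.38, 0.22], [0.68, 0.71, 0.73]))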

  9. Integrating field plots, lidar, and landsat time series to provide temporally consistent annual estimates of biomass from 1990 to present

    Science.gov (United States)

    Warren B. Cohen; Hans-Erik Andersen; Sean P. Healey; Gretchen G. Moisen; Todd A. Schroeder; Christopher W. Woodall; Grant M. Domke; Zhiqiang Yang; Robert E. Kennedy; Stephen V. Stehman; Curtis Woodcock; Jim Vogelmann; Zhe Zhu; Chengquan. Huang

    2015-01-01

    We are developing a system that provides temporally consistent biomass estimates for national greenhouse gas inventory reporting to the United Nations Framework Convention on Climate Change. Our model-assisted estimation framework relies on remote sensing to scale from plot measurements to lidar strip samples, to Landsat time series-based maps. As a demonstration, new...

  10. Quantitative estimation of compliance of human systemic veins by occlusion plethysmography with radionuclide

    International Nuclear Information System (INIS)

    Takatsu, Hisato; Gotoh, Kohshi; Suzuki, Takahiko; Ohsumi, Yukio; Yagi, Yasuo; Tsukamoto, Tatsuo; Terashima, Yasushi; Nagashima, Kenshi; Hirakawa, Senri

    1989-01-01

    Volume-pressure relationship and compliance of human systemic veins were estimated quantitatively and noninvasively using radionuclide. The effect of nitroglycerin (NTG) on these parameters was examined. Plethysmography with radionuclide (RN) was performed using the occlusion method on the forearm in 56 patients with various cardiac diseases after RN angiocardiography with 99mTc-RBC. The RN counts-venous pressure curve was constructed from (1) the changes in radioactivity from the region of interest on the forearm that were considered to reflect the changes in the blood volume of the forearm, and (2) the changes in the pressure of the forearm vein (fv) due to venous occlusion. The specific compliance of the forearm veins (Csp.fv; (1/V)·(ΔV/ΔP)) was obtained graphically from this curve at each patient's venous pressure (Pv). Csp.fv was 0.044±0.012 mmHg⁻¹ in class I (mean±SD; n=13), 0.033±0.007 mmHg⁻¹ in class II (n=30), and 0.019±0.007 mmHg⁻¹ in class III (n=13), of the previous NYHA classification of work tolerance. There were significant differences in Csp.fv among the three classes. The systemic venous blood volume (Vsv) was determined by subtracting the central blood volume, measured by RN-angiocardiography, from total blood volume, measured by the indicator dilution method utilizing 131I-human serum albumin. Systemic venous compliance (Csv) was calculated from Csv=Csp.fv·Vsv. The Csv was 127.2±24.8 ml·mmHg⁻¹ (mean±SD) in class I, 101.1±24.1 ml·mmHg⁻¹ in class II and 62.2±28.1 ml·mmHg⁻¹ in class III. There were significant differences in Csv among the three classes. The class I Csv value was calculated to be 127.2±24.8 ml·mmHg⁻¹ and the Csv/body weight was calculated to be 2.3±0.7 ml·mmHg⁻¹·kg⁻¹ of body weight. The administration of NTG increased Csv significantly in all cases. (J.P.N.)
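
    The arithmetic linking these quantities is compact enough to state directly. A sketch of the two relations used (Csp.fv is the class I mean reported above; the blood volumes are invented placeholders):

        # Csp.fv = (1/V)*(dV/dP) is read graphically off the RN counts-venous
        # pressure curve; systemic venous compliance then follows from
        # Csv = Csp.fv * Vsv, with Vsv = total blood volume - central blood volume.
        csp_fv = 0.044                 # mmHg^-1, class I mean reported above
        total_blood_volume = 5000.0    # ml, illustrative
        central_blood_volume = 2100.0  # ml, illustrative
        vsv = total_blood_volume - central_blood_volume
        csv = csp_fv * vsv
        print(f"Csv = {csv:.1f} ml/mmHg")  # 127.6 ml/mmHg, near the class I mean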

  11. A quantitative framework for estimating risk of collision between marine mammals and boats

    Science.gov (United States)

    Martin, Julien; Sabatier, Quentin; Gowan, Timothy A.; Giraud, Christophe; Gurarie, Eliezer; Calleson, Scott; Ortega-Ortiz, Joel G.; Deutsch, Charles J.; Rycyk, Athena; Koslovsky, Stacie M.

    2016-01-01

    Speed regulations of watercraft in protected areas are designed to reduce lethal collisions with wildlife but can have economic consequences. We present a quantitative framework for investigating the risk of deadly collisions between boats and wildlife.

  12. Mesoscale and Local Scale Evaluations of Quantitative Precipitation Estimates by Weather Radar Products during a Heavy Rainfall Event

    Directory of Open Access Journals (Sweden)

    Basile Pauthier

    2016-01-01

    Full Text Available A 24-hour heavy rainfall event occurred in northeastern France from November 3 to 4, 2014. The accuracy of the quantitative precipitation estimation (QPE) by the PANTHERE and ANTILOPE radar-based gridded products during this particular event is examined at both mesoscale and local scale, in comparison with two reference rain-gauge networks. Mesoscale accuracy was assessed for the total rainfall accumulated during the 24-hour event, using the Météo France operational rain-gauge network. Local-scale accuracy was assessed for both total event rainfall and hourly rainfall accumulations, using the recently developed HydraVitis high-resolution rain-gauge network. Evaluation shows that (1) the PANTHERE radar-based QPE underestimates rainfall fields at mesoscale and local scale; (2) both PANTHERE and ANTILOPE successfully reproduce the spatial variability of rainfall at local scale; (3) PANTHERE underestimates can be significantly improved at local scale by merging these data with rain-gauge data interpolation (i.e., ANTILOPE). This study provides a preliminary evaluation of radar-based QPE at local scale, suggesting that merged products are invaluable for applications at very high resolution. The results obtained underline the importance of using high-density rain-gauge networks to obtain information at high spatial and temporal resolution, for a better understanding of local rainfall variation and to calibrate remotely sensed rainfall products.

  13. Model developments for quantitative estimates of the benefits of the signals on nuclear power plant availability and economics

    International Nuclear Information System (INIS)

    Seong, Poong Hyun

    1993-01-01

    A novel framework for quantitative estimates of the benefits of signals on nuclear power plant availability and economics has been developed in this work. The models developed in this work quantify how the perfect signals affect the human operator's success in restoring the power plant to the desired state when it enters undesirable transients. Also, the models quantify the economic benefits of these perfect signals. The models have been applied to the condensate feedwater system of the nuclear power plant for demonstration. (Author)

  14. Qualitative and quantitative estimations of the effect of geomagnetic field variations on human brain functional state

    International Nuclear Information System (INIS)

    Belisheva, N.K.; Popov, A.N.; Petukhova, N.V.; Pavlova, L.P.; Osipov, K.S.; Tkachenko, S.Eh.; Baranova, T.I.

    1995-01-01

    The functional dynamics of the human brain were compared with reference to qualitative and quantitative characteristics of local geomagnetic field (GMF) variations. Steady and unsteady states of the human brain can be determined: by geomagnetic disturbances before the observation period; by the structure and doses of GMF variations; and by different combinations of qualitative and quantitative characteristics of GMF variations. A decrease in the optimal GMF activity level and the appearance of aperiodic GMF disturbances can be causes of an unsteady brain state. 18 refs.; 3 figs

  15. Quantitative analysis of oyster larval proteome provides new insights into the effects of multiple climate change stressors

    KAUST Repository

    Dineshram, Ramadoss; Chandramouli, Kondethimmanahalli; Ko, Ginger Wai Kuen; Zhang, Huoming; Qian, Pei-Yuan; Ravasi, Timothy; Thiyagarajan, Vengatesen

    2016-01-01

    might be affected in a future ocean, we examined changes in the proteome of metamorphosing larvae under multiple stressors: decreased pH (pH 7.4), increased temperature (30 °C), and reduced salinity (15 psu). Quantitative protein expression profiling

  16. The influence of the design matrix on treatment effect estimates in the quantitative analyses of single-subject experimental design research.

    Science.gov (United States)

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M; Beretvas, S Natasha; Van den Noortgate, Wim

    2014-09-01

    The quantitative methods for analyzing single-subject experimental data have expanded during the last decade, including the use of regression models to statistically analyze the data, but still a lot of questions remain. One question is how to specify predictors in a regression model to account for the specifics of the design and estimate the effect size of interest. These quantitative effect sizes are used in retrospective analyses and allow synthesis of single-subject experimental study results which is informative for evidence-based decision making, research and theory building, and policy discussions. We discuss different design matrices that can be used for the most common single-subject experimental designs (SSEDs), namely, the multiple-baseline designs, reversal designs, and alternating treatment designs, and provide empirical illustrations. The purpose of this article is to guide single-subject experimental data analysts interested in analyzing and meta-analyzing SSED data. © The Author(s) 2014.

  17. Validation and measurement uncertainty estimation in food microbiology: differences between quantitative and qualitative methods

    Directory of Open Access Journals (Sweden)

    Vesna Režić Dereani

    2010-09-01

    Full Text Available The aim of this research is to describe quality control procedures, procedures for validation and measurement uncertainty (MU) determination as important elements of quality assurance in the food microbiology laboratory, for both qualitative and quantitative types of analysis. Accreditation is conducted according to the standard ISO 17025:2007, General requirements for the competence of testing and calibration laboratories, which guarantees compliance with standard operating procedures and the technical competence of the staff involved in the tests; it has recently been widely introduced in food microbiology laboratories in Croatia. In addition to the introduction of a quality manual and many general documents, some of the most demanding procedures in routine microbiology laboratories are the establishment of measurement uncertainty (MU) procedures and of validation experiment designs. Those procedures are not yet standardized even at the international level, and they require practical microbiological knowledge together with statistical competence. Differences between validation experiment designs for quantitative and qualitative food microbiology analysis are discussed in this research, and practical solutions are briefly described. MU for quantitative determinations is a more demanding issue than qualitative MU calculation. MU calculations are based on external proficiency testing data and internal validation data. In this paper, practical schematic descriptions of both procedures are shown.

  18. Stereological estimation of nuclear volume and other quantitative histopathological parameters in the prognostic evaluation of supraglottic laryngeal squamous cell carcinoma

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Bennedbaek, O; Pilgaard, J

    1989-01-01

    The aim of this study was to investigate various approaches to the grading of malignancy in pre-treatment biopsies from patients with supraglottic laryngeal squamous cell carcinoma. The prospects of objective malignancy grading based on stereological estimation of the volume-weighted mean nuclear volume … observers of the latter was poor in the material which consisted of 35 biopsy specimens. Unbiased estimates of nuclear Vv were on the average 385 μm³ (CV = 0.44), with more than 90% of the associated variance attributable to differences in nuclear Vv among individual lesions. Nuclear Vv was positively … None of the investigated categorical and quantitative parameters (cutoff points = means) reached the level of significance with respect to prognostic value. However, nuclear Vv showed the best information concerning survival (2p = 0.08), and this estimator offers optimal features for objective…

  19. Estimating the cost of skin cancer detection by dermatology providers in a large health care system.

    Science.gov (United States)

    Matsumoto, Martha; Secrest, Aaron; Anderson, Alyce; Saul, Melissa I; Ho, Jonhan; Kirkwood, John M; Ferris, Laura K

    2018-04-01

    Data on the cost and efficiency of skin cancer detection through total body skin examination are scarce. To determine the number needed to screen (NNS) and biopsy (NNB) and cost per skin cancer diagnosed in a large dermatology practice in patients undergoing total body skin examination. This is a retrospective observational study. During 2011-2015, a total of 20,270 patients underwent 33,647 visits for total body skin examination; 9956 lesion biopsies were performed yielding 2763 skin cancers, including 155 melanomas. The NNS to detect 1 skin cancer was 12.2 (95% confidence interval [CI] 11.7-12.6) and 1 melanoma was 215 (95% CI 185-252). The NNB to detect 1 skin cancer was 3.0 (95% CI 2.9-3.1) and 1 melanoma was 27.8 (95% CI 23.3-33.3). In a multivariable model for NNS, age and personal history of melanoma were significant factors. Age switched from a protective factor to a risk factor at 51 years of age. The estimated cost per melanoma detected was $32,594 (95% CI $27,326-$37,475). Data are from a single health care system and based on physician coding. Melanoma detection through total body skin examination is most efficient in patients ≥50 years of age and those with a personal history of melanoma. Our findings will be helpful in modeling the cost effectiveness of melanoma screening by dermatologists. Copyright © 2017 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.

  1. Semi-quantitative estimation by IR of framework, extraframework and defect Al species of HBEA zeolites.

    Science.gov (United States)

    Marques, João P; Gener, Isabelle; Ayrault, Philippe; Lopes, José M; Ribeiro, F Ramôa; Guisnet, Michel

    2004-10-21

    A simple method based on the characterization (composition, Brønsted and Lewis acidities) of acid-treated HBEA zeolites was developed for estimating the concentrations of framework, extraframework and defect Al species.

  2. Quantitative estimation of itopride hydrochloride and rabeprazole sodium from capsule formulation

    OpenAIRE

    Pillai S; Singhvi I

    2008-01-01

    Two simple, accurate, economical and reproducible UV spectrophotometric methods and one HPLC method for simultaneous estimation of two component drug mixture of itopride hydrochloride and rabeprazole sodium from combined capsule dosage form have been developed. First developed method involves formation and solving of simultaneous equations using 265.2 nm and 290.8 nm as two wavelengths. Second method is based on two wavelength calculation, wavelengths selected for estimation of itopride hydro...

  3. Quantitative estimation of defects from measurement obtained by remote field eddy current inspection

    International Nuclear Information System (INIS)

    Davoust, M.E.; Fleury, G.

    1999-01-01

    The remote field eddy current technique is used for dimensioning grooves that may occur in ferromagnetic pipes. This paper proposes a method to estimate the depth and the length of corrosion grooves from measurements of a pick-up coil signal phase at different positions close to the defect. Dimensioning grooves requires knowledge of the physical relation between measurements and defect dimensions, so finite element calculations are performed to obtain a parametric algebraic function of the physical phenomena. By means of this model and a previously defined general approach, an estimate of groove size may be given. In this approach, the algebraic function parameters and the groove dimensions are linked through a polynomial function. In order to validate this estimation procedure, a statistical study has been performed. The approach is proved to be suitable for real measurements. (authors)
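
    The final estimation step described here reduces to a polynomial calibration fitted on simulated data and applied to measurements. A minimal single-parameter sketch (the training pairs are invented; the actual work links several algebraic-function parameters to both depth and length):

        import numpy as np

        # Hypothetical finite-element training data: signal-phase parameter
        # versus groove depth (mm).
        signal_param = np.array([0.8, 1.1, 1.5, 2.0, 2.6, 3.3])
        groove_depth = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])

        coeffs = np.polyfit(signal_param, groove_depth, deg=2)  # least-squares fit

        def estimate_depth(measured_param: float) -> float:
            """Apply the polynomial calibration to a measured signal parameter."""
            return float(np.polyval(coeffs, measured_param))

        print(estimate_depth(1.8))  # depth estimate for a new measurement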

  4. Observation-based Quantitative Uncertainty Estimation for Realtime Tsunami Inundation Forecast using ABIC and Ensemble Simulation

    Science.gov (United States)

    Takagawa, T.

    2016-12-01

    An ensemble forecasting scheme for tsunami inundation is presented. The scheme consists of three elemental methods. The first is a hierarchical Bayesian inversion using Akaike's Bayesian Information Criterion (ABIC). The second is Monte Carlo sampling from a probability density function of a multidimensional normal distribution. The third is ensemble analysis of tsunami inundation simulations with multiple tsunami sources. Simulation-based validation of the model was conducted. A tsunami scenario of an M9.1 Nankai earthquake was chosen as the validation target. Tsunami inundation around Nagoya Port was estimated using synthetic tsunami waveforms at offshore GPS buoys. The error in the estimated tsunami inundation area was about 10% even when only ten minutes of observation data were used. The estimation accuracy of waveforms on/off land and of the spatial distribution of maximum tsunami inundation depth is demonstrated.
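
    The second step, drawing an ensemble of sources from a multivariate normal distribution, is standard. A sketch, assuming the mean vector and covariance matrix are outputs of the ABIC-based inversion (all values invented):

        import numpy as np

        rng = np.random.default_rng(0)

        mean = np.array([1.2, 0.8, 1.5])       # e.g. slip amplitudes of 3 subfaults
        cov = np.array([[0.04, 0.01, 0.00],
                        [0.01, 0.09, 0.02],
                        [0.00, 0.02, 0.16]])   # posterior covariance, illustrative

        # Each ensemble member would be propagated through an inundation
        # simulation; the spread of the results quantifies forecast uncertainty.
        ensemble = rng.multivariate_normal(mean, cov, size=100)
        print(ensemble.shape)  # (100, 3)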

  5. THE QUADRANTS METHOD TO ESTIMATE QUANTITATIVE VARIABLES IN MANAGEMENT PLANS IN THE AMAZON

    Directory of Open Access Journals (Sweden)

    Gabriel da Silva Oliveira

    2015-12-01

    Full Text Available This work aimed to evaluate the accuracy of estimates of abundance, basal area and commercial volume per hectare obtained by the quadrants method applied to an area of 1,000 hectares of rain forest in the Amazon. Samples were simulated by random and systematic processes with different sample sizes, ranging from 100 to 200 sampling points. The quantities estimated from the samples were compared with the parametric values recorded in the census. In the analysis, we considered as the population all trees with a diameter at breast height equal to or greater than 40 cm. The quadrants method did not reach the desired level of accuracy for the variables basal area and commercial volume, overestimating the values recorded in the census. However, the accuracy of the estimates of abundance, basal area and commercial volume was satisfactory for applying the method in forest inventories for management plans in the Amazon.

  6. Bottom-up modeling approach for the quantitative estimation of parameters in pathogen-host interactions.

    Science.gov (United States)

    Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo

    2015-01-01

    Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. In the future, spatio-temporal simulations of whole-blood samples may enable timely
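
    The global optimization step named here, simulated annealing, can be sketched generically; this is not the authors' implementation, and the stand-in model (a single exponential decay rate fitted to synthetic data) is invented for illustration:

        import numpy as np

        rng = np.random.default_rng(1)

        def cost(rates, data):
            """Least-squares distance between a stand-in model (exponential
            decay of a cell population) and the observed time series."""
            t = np.arange(len(data))
            pred = data[0] * np.exp(-rates[0] * t)
            return np.sum((pred - data) ** 2)

        def simulated_annealing(data, x0, n_iter=5000, t0=1.0):
            x = np.array(x0, dtype=float)
            fx = cost(x, data)
            best, fbest = x.copy(), fx
            for k in range(n_iter):
                temp = t0 * (1 - k / n_iter)                 # linear cooling
                cand = np.abs(x + rng.normal(0, 0.05, size=x.shape))
                fc = cost(cand, data)
                # Accept improvements always, worse moves with Boltzmann probability.
                if fc < fx or rng.random() < np.exp(-(fc - fx) / max(temp, 1e-9)):
                    x, fx = cand, fc
                if fx < fbest:
                    best, fbest = x.copy(), fx
            return best

        data = 100 * np.exp(-0.3 * np.arange(10)) + rng.normal(0, 1, 10)
        print(simulated_annealing(data, x0=[1.0]))  # recovers a rate near 0.3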

  7. Methods for the quantitative comparison of molecular estimates of clade age and the fossil record.

    Science.gov (United States)

    Clarke, Julia A; Boyd, Clint A

    2015-01-01

    Approaches quantifying the relative congruence, or incongruence, of molecular divergence estimates and the fossil record have been limited. Previously proposed methods are largely node specific, assessing incongruence at particular nodes for which both fossil data and molecular divergence estimates are available. These existing metrics, and other methods that quantify incongruence across topologies including entirely extinct clades, have so far not taken into account uncertainty surrounding both the divergence estimates and the ages of fossils. They have also treated molecular divergence estimates younger than previously assessed fossil minimum estimates of clade age as if they were the same as cases in which they were older. However, these cases are not the same. Recovered divergence dates younger than compared oldest known occurrences require prior hypotheses regarding the phylogenetic position of the compared fossil record and standard assumptions about the relative timing of morphological and molecular change to be incorrect. Older molecular dates, by contrast, are consistent with an incomplete fossil record and do not require prior assessments of the fossil record to be unreliable in some way. Here, we compare previous approaches and introduce two new descriptive metrics. Both metrics explicitly incorporate information on uncertainty by utilizing the 95% confidence intervals on estimated divergence dates and data on stratigraphic uncertainty concerning the age of the compared fossils. Metric scores are maximized when these ranges are overlapping. MDI (minimum divergence incongruence) discriminates between situations where molecular estimates are younger or older than known fossils reporting both absolute fit values and a number score for incompatible nodes. DIG range (divergence implied gap range) allows quantification of the minimum increase in implied missing fossil record induced by enforcing a given set of molecular-based estimates. These metrics are used
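
    The comparison both metrics build on can be captured by one small function: a signed gap between the 95% CI of a molecular divergence date and the stratigraphic age range of the oldest fossil. The sketch below illustrates that comparison only; it is not the published definition of MDI or DIG range (ages invented, in Ma):

        def signed_gap(molecular_ci, fossil_range):
            """0.0 -> ranges overlap (congruent given uncertainty)
            > 0 -> molecular estimate older: implied missing fossil record
            < 0 -> molecular estimate younger: incompatible node"""
            mol_young, mol_old = molecular_ci
            fos_young, fos_old = fossil_range
            if mol_young > fos_old:      # whole CI older than the fossil range
                return mol_young - fos_old
            if mol_old < fos_young:      # whole CI younger than the fossil range
                return mol_old - fos_young
            return 0.0

        print(signed_gap((66.0, 72.0), (60.5, 64.0)))  # 2.0: implied 2 Ma gap
        print(signed_gap((55.0, 58.0), (60.5, 64.0)))  # -2.5: incompatible node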

  8. Combining real-time PCR and next-generation DNA sequencing to provide quantitative comparisons of fungal aerosol populations

    Science.gov (United States)

    Dannemiller, Karen C.; Lang-Yona, Naama; Yamamoto, Naomichi; Rudich, Yinon; Peccia, Jordan

    2014-02-01

    We examined fungal communities associated with the PM10 mass of outdoor air samples collected in Rehovot, Israel, in the spring and fall seasons. Fungal communities were described by 454 pyrosequencing of the internal transcribed spacer (ITS) region of the fungal ribosomal RNA encoding gene. To allow for a more quantitative comparison of fungal exposure in humans, the relative abundance values of specific taxa were transformed to absolute concentrations by multiplying these values by the sample's total fungal spore concentration (derived from universal fungal qPCR). Next, the sequencing-based absolute concentrations for Alternaria alternata, Cladosporium cladosporioides, Epicoccum nigrum, and Penicillium/Aspergillus spp. were compared to taxon-specific qPCR concentrations for A. alternata, C. cladosporioides, E. nigrum, and Penicillium/Aspergillus spp. derived from the same spring and fall aerosol samples. Results of these comparisons showed that the absolute concentration values generated from pyrosequencing were strongly associated with the concentration values derived from taxon-specific qPCR (statistically significant for all four species, with correlation coefficients above 0.70). The correlation coefficients were greater for species present in higher concentrations. Our microbial aerosol population analyses demonstrated that fungal diversity (number of fungal operational taxonomic units) was higher in the spring compared to the fall (p = 0.02), and principal coordinate analysis showed distinct seasonal differences in taxa distribution (ANOSIM p = 0.004). Among genera containing allergenic and/or pathogenic species, the absolute concentrations of Alternaria, Aspergillus, Fusarium, and Cladosporium were greater in the fall, while Cryptococcus, Penicillium, and Ulocladium concentrations were greater in the spring. The transformation of pyrosequencing fungal population relative abundance data to absolute concentrations can improve next-generation DNA sequencing-based quantitative aerosol exposure assessment.
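
    The transformation at the heart of the approach is a single multiplication per taxon. A sketch (the fractions and the qPCR total are invented):

        relative_abundance = {                     # sequencing-derived fractions
            "Alternaria alternata": 0.12,
            "Cladosporium cladosporioides": 0.30,
            "Epicoccum nigrum": 0.05,
            "Penicillium/Aspergillus spp.": 0.18,
        }
        total_spores_per_m3 = 4.2e4                # universal fungal qPCR, illustrative

        # Absolute concentration = relative abundance x total spore concentration.
        absolute = {taxon: frac * total_spores_per_m3
                    for taxon, frac in relative_abundance.items()}
        print(absolute["Alternaria alternata"])    # 5040 spores per m^3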

  9. Estimating marginal properties of quantitative real-time PCR data using nonlinear mixed models

    DEFF Research Database (Denmark)

    Gerhard, Daniel; Bremer, Melanie; Ritz, Christian

    2014-01-01

    A unified modeling framework based on a set of nonlinear mixed models is proposed for flexible modeling of gene expression in real-time PCR experiments. Focus is on estimating the marginal or population-based derived parameters: cycle thresholds and ΔΔc(t), but retaining the conditional mixed mod...

  10. Soil carbon storage estimation in a forested watershed using quantitative soil-landscape modeling

    Science.gov (United States)

    James A. Thompson; Randall K. Kolka

    2005-01-01

    Carbon storage in soils is important to forest ecosystems. Moreover, forest soils may serve as important C sinks for ameliorating excess atmospheric CO2. Spatial estimates of soil organic C (SOC) storage have traditionally relied upon soil survey maps and laboratory characterization data. This approach does not account for inherent variability...

  11. A subagging regression method for estimating the qualitative and quantitative state of groundwater

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young

    2017-08-01

    A subsample aggregating (subagging) regression (SBR) method is proposed for the analysis of groundwater data, addressing the uncertainty associated with trend estimation. The SBR method is validated against synthetic data competitively with other conventional robust and non-robust methods. The results verify that the estimation accuracies of the SBR method are consistent and superior to those of the other methods, and that the uncertainties are reasonably estimated; the other methods offer no uncertainty analysis option. For further validation, actual groundwater data are employed and analyzed comparatively with Gaussian process regression (GPR). For all cases, the trend and the associated uncertainties are reasonably estimated by both SBR and GPR, regardless of whether the data are Gaussian or non-Gaussian skewed. However, GPR is expected to have a limitation in applications to data severely corrupted by outliers, owing to its non-robustness. From the implementations, it is determined that the SBR method has the potential to be further developed as an effective tool for anomaly detection or outlier identification in groundwater state data such as the groundwater level and contaminant concentration.
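
    The idea of subagging is easy to demonstrate: fit the regression on many random subsamples and aggregate, with the spread of the subsample estimates serving as the uncertainty measure. A minimal linear-trend sketch (the synthetic series is invented; the published method's details differ):

        import numpy as np

        rng = np.random.default_rng(2)

        def subagging_trend(t, y, n_sub=200, frac=0.5):
            """Mean and standard deviation of slopes fitted on random
            subsamples drawn without replacement."""
            n = len(t)
            m = max(2, int(frac * n))
            slopes = np.empty(n_sub)
            for i in range(n_sub):
                idx = rng.choice(n, size=m, replace=False)
                slopes[i] = np.polyfit(t[idx], y[idx], 1)[0]
            return slopes.mean(), slopes.std(ddof=1)

        # Synthetic groundwater levels: declining trend + noise + two outliers.
        t = np.arange(120, dtype=float)                  # months
        y = 25.0 - 0.02 * t + rng.normal(0, 0.1, 120)    # level in m
        y[[30, 70]] += 2.0                               # spurious spikes
        print(subagging_trend(t, y))  # slope near -0.02 with its uncertainty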

  12. Evaluation of quantitative imaging methods for organ activity and residence time estimation using a population of phantoms having realistic variations in anatomy and uptake

    International Nuclear Information System (INIS)

    He Bin; Du Yong; Segars, W. Paul; Wahl, Richard L.; Sgouros, George; Jacene, Heather; Frey, Eric C.

    2009-01-01

    Estimating organ residence times is an essential part of patient-specific dosimetry for radioimmunotherapy (RIT). Quantitative imaging methods for RIT are often evaluated using a single physical or simulated phantom but are intended to be applied clinically where there is variability in patient anatomy, biodistribution, and biokinetics. To provide a more relevant evaluation, the authors have thus developed a population of phantoms with realistic variations in these factors and applied it to the evaluation of quantitative imaging methods both to find the best method and to demonstrate the effects of these variations. Using whole body scans and SPECT/CT images, organ shapes and time-activity curves of 111In ibritumomab tiuxetan were measured in dosimetrically important organs in seven patients undergoing a high dose therapy regimen. Based on these measurements, we created a 3D NURBS-based cardiac-torso (NCAT) phantom population. SPECT and planar data at realistic count levels were then simulated using previously validated Monte Carlo simulation tools. The projections from the population were used to evaluate the accuracy and variation in accuracy of residence time estimation methods that used a time series of SPECT and planar scans. Quantitative SPECT (QSPECT) reconstruction methods were used that compensated for attenuation, scatter, and the collimator-detector response. Planar images were processed with a conventional (CPlanar) method that used geometric mean attenuation and triple-energy window scatter compensation and a quantitative planar (QPlanar) processing method that used model-based compensation for image degrading effects. Residence times were estimated from activity estimates made at each of five time points. The authors also evaluated hybrid methods that used CPlanar or QPlanar time-activity curves rescaled to the activity estimated from a single QSPECT image. The methods were evaluated in terms of mean relative error and standard deviation of the

  13. Theoretical implications of quantitative properties of interval timing and probability estimation in mouse and rat.

    Science.gov (United States)

    Kheifets, Aaron; Freestone, David; Gallistel, C R

    2017-07-01

    In three experiments with mice (Mus musculus) and rats (Rattus norvegicus), we used a switch paradigm to measure quantitative properties of the interval-timing mechanism. We found that: 1) Rodents adjusted the precision of their timed switches in response to changes in the interval between the short and long feed latencies (the temporal goalposts). 2) The variability in the timing of the switch response was reduced or unchanged in the face of large trial-to-trial random variability in the short and long feed latencies. 3) The adjustment in the distribution of switch latencies in response to changes in the relative frequency of short and long trials was sensitive to the asymmetry in the Kullback-Leibler divergence. The three results suggest that durations are represented with adjustable precision, that they are timed by multiple timers, and that there is a trial-by-trial (episodic) record of feed latencies in memory. © 2017 Society for the Experimental Analysis of Behavior.
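
    The asymmetry referred to in point 3 is the generic property D(P||Q) != D(Q||P) of the Kullback-Leibler divergence. A quick illustration with Bernoulli distributions over trial types (probabilities invented):

        import numpy as np

        def kl_bernoulli(p: float, q: float) -> float:
            """D(P||Q) for P = Bernoulli(p) and Q = Bernoulli(q), in nats."""
            return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

        p, q = 0.2, 0.5   # relative frequency of short trials, before/after a change
        print(kl_bernoulli(p, q))  # ~0.19 nats
        print(kl_bernoulli(q, p))  # ~0.22 nats: the divergence is asymmetric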

  14. Skill Assessment of An Hybrid Technique To Estimate Quantitative Precipitation Forecast For Galicia (nw Spain)

    Science.gov (United States)

    Lage, A.; Taboada, J. J.

    Precipitation is the most obvious of the weather elements in its effects on everyday life. Numerical weather prediction (NWP) is generally used to produce quantitative precipitation forecasts (QPF) beyond the 1-3 h time frame. These models often fail to predict small-scale variations of rain because of spin-up problems and their coarse spatial and temporal resolution (Antolik, 2000). Moreover, there are some uncertainties about the behaviour of NWP models in extreme situations (de Bruijn and Brandsma, 2000). Hybrid techniques, combining the benefits of NWP and statistical approaches in a flexible way, are very useful for achieving a good QPF. In this work, a new QPF technique for Galicia (NW Spain) is presented. This region has a percentage of rainy days per year greater than 50%, with quantities that may cause floods, with human and economic damage. The technique is composed of an NWP model (ARPS) and a statistical downscaling process based on an automated classification scheme of atmospheric circulation patterns for the Iberian Peninsula (J. Ribalaygua and R. Boren, 1995). Results show that QPF for Galicia is improved using this hybrid technique. [1] Antolik, M.S., 2000. "An overview of the National Weather Service's centralized statistical quantitative precipitation forecasts". Journal of Hydrology, 239, pp. 306-337. [2] de Bruijn, E.I.F. and Brandsma, T. "Rainfall prediction for a flooding event in Ireland caused by the remnants of Hurricane Charley". Journal of Hydrology, 239, pp. 148-161. [3] Ribalaygua, J. and Boren, R. "Clasificación de patrones espaciales de precipitación diaria sobre la España Peninsular" [Classification of spatial patterns of daily precipitation over peninsular Spain]. Informes N 3 y 4 del Servicio de Análisis e Investigación del Clima. Instituto Nacional de Meteorología, Madrid. 53 pp.

  15. Evaluation of two "integrated" polarimetric Quantitative Precipitation Estimation (QPE) algorithms at C-band

    Science.gov (United States)

    Tabary, Pierre; Boumahmoud, Abdel-Amin; Andrieu, Hervé; Thompson, Robert J.; Illingworth, Anthony J.; Le Bouar, Erwan; Testud, Jacques

    2011-08-01

    Two so-called "integrated" polarimetric rate estimation techniques, ZPHI (Testud et al., 2000) and ZZDR (Illingworth and Thompson, 2005), are evaluated using 12 episodes of the year 2005 observed by the French C-band operational Trappes radar, located near Paris. The term "integrated" means that the concentration parameter of the drop size distribution is assumed to be constant over some area and the algorithms retrieve it using the polarimetric variables in that area. The evaluation is carried out in ideal conditions (no partial beam blocking, no ground-clutter contamination, no bright-band contamination, a posteriori calibration of the radar variables ZH and ZDR) using hourly rain gauges located at distances less than 60 km from the radar. Also included in the comparison, for the sake of benchmarking, is a conventional Z = 282R^1.66 estimator, with and without attenuation correction and with and without adjustment by rain gauges as currently done operationally at Météo France. Under those ideal conditions, the two polarimetric algorithms, which rely solely on radar data, appear to perform as well as if not better than the conventional algorithms, depending on the measurement conditions (attenuation, rain rates, …), even when the latter take into account rain gauges through the adjustment scheme. ZZDR with attenuation correction is the best estimator for hourly rain gauge accumulations lower than 5 mm h⁻¹ and ZPHI is the best one above that threshold. A perturbation analysis has been conducted to assess the sensitivity of the various estimators with respect to biases on ZH and ZDR, taking into account the typical accuracy and stability that can reasonably be achieved with modern operational radars these days (1 dB on ZH and 0.2 dB on ZDR). A +1 dB positive bias on ZH (radar too hot) results in a +14% overestimation of the rain rate with the conventional estimator used in this study (Z = 282R^1.66), a -19% underestimation with ZPHI and a +23
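
    For reference, the conventional benchmark converts reflectivity to rain rate by inverting the Z-R power law quoted above; a short sketch:

        def rain_rate_from_dbz(dbz: float, a: float = 282.0, b: float = 1.66) -> float:
            """Invert Z = a * R**b for the rain rate R (mm/h); Z is in linear
            units (mm^6 m^-3) while the radar reports dBZ."""
            z_lin = 10.0 ** (dbz / 10.0)
            return (z_lin / a) ** (1.0 / b)

        print(rain_rate_from_dbz(30.0))  # ~2.1 mm/h (light-to-moderate rain)
        print(rain_rate_from_dbz(45.0))  # ~17 mm/h (heavy rain)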

  16. Greenhouse effect and waste sector in Italy: Analysis and quantitative estimates of methane emissions

    International Nuclear Information System (INIS)

    Pizzullo, Marcello; Tognotti, Leonardo

    1997-01-01

    Methane is, after carbon dioxide, the most important atmospheric gas with a considerable effect on climate change. In this work, methane emissions from waste have been evaluated. Estimates include emissions resulting from the anaerobic degradation of landfilled municipal solid waste and from anaerobic treatment of industrial and municipal wastewater. The adopted methodology follows the specific guidelines drawn up by the IPCC (Intergovernmental Panel on Climate Change), the scientific reference body for the Framework Convention on Climate Change signed in 1992 during the Earth Summit in Rio de Janeiro. Some factors used in the methodology for landfill emissions have been modified and adapted to the Italian situation. The estimate of emissions from industrial wastewater anaerobic treatment required a preliminary evaluation of the annual wastewater quantities produced by some significant industrial sectors.

  17. Quantitative estimation of itopride hydrochloride and rabeprazole sodium from capsule formulation.

    Science.gov (United States)

    Pillai, S; Singhvi, I

    2008-09-01

    Two simple, accurate, economical and reproducible UV spectrophotometric methods and one HPLC method for the simultaneous estimation of the two-component drug mixture of itopride hydrochloride and rabeprazole sodium in a combined capsule dosage form have been developed. The first method involves forming and solving simultaneous equations using 265.2 nm and 290.8 nm as the two wavelengths. The second method is based on two-wavelength calculation; the wavelengths selected for the estimation of itopride hydrochloride were 278.0 nm and 298.8 nm, and for rabeprazole sodium, 253.6 nm and 275.2 nm. The HPLC method is a reverse-phase chromatographic method using a Phenomenex C18 column and acetonitrile:phosphate buffer (35:65 v/v), pH 7.0, as the mobile phase. All developed methods obey Beer's law in the concentration ranges employed. The results of the analysis were validated statistically and by recovery studies.
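
    The first method, solving simultaneous equations (Vierordt's method), reduces to a 2x2 linear system: the mixture's absorbance at each wavelength is the sum of each drug's absorptivity times its concentration. A sketch with invented absorptivity and absorbance values (in practice they come from pure-standard calibration curves):

        import numpy as np

        # Rows: 265.2 nm and 290.8 nm; columns: itopride, rabeprazole.
        E = np.array([[0.045, 0.021],
                      [0.012, 0.060]])   # absorptivities (A per ug/mL), hypothetical
        A = np.array([0.52, 0.43])       # measured absorbances of the mixture

        conc = np.linalg.solve(E, A)     # concentrations in ug/mL
        print(f"itopride: {conc[0]:.1f} ug/mL, rabeprazole: {conc[1]:.1f} ug/mL")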

  18. Contemporary group estimates adjusted for climatic effects provide a finer definition of the unknown environmental challenges experienced by growing pigs.

    Science.gov (United States)

    Guy, S Z Y; Li, L; Thomson, P C; Hermesch, S

    2017-12-01

    Environmental descriptors derived from mean performances of contemporary groups (CGs) are assumed to capture any known and unknown environmental challenges. The objective of this paper was to obtain a finer definition of the unknown challenges, by adjusting CG estimates for the known climatic effects of monthly maximum air temperature (MaxT), minimum air temperature (MinT) and monthly rainfall (Rain). As the unknown component could include infection challenges, these refined descriptors may help to better model varying responses of sire progeny to environmental infection challenges for the definition of disease resilience. Data were recorded from 1999 to 2013 at a piggery in south-east Queensland, Australia (n = 31,230). Firstly, CG estimates of average daily gain (ADG) and backfat (BF) were adjusted for MaxT, MinT and Rain, which were fitted as splines. In the models used to derive CG estimates for ADG, MaxT and MinT were significant variables. The models that contained these significant climatic variables had CG estimates with a lower variance compared to models without significant climatic variables. Variance component estimates were similar across all models, suggesting that these significant climatic variables accounted for some known environmental variation captured in CG estimates. No climatic variables were significant in the models used to derive the CG estimates for BF. These CG estimates were used to categorize environments. There was no observable sire by environment interaction (Sire×E) for ADG when using the environmental descriptors based on CG estimates on BF. For the environmental descriptors based on CG estimates of ADG, there was significant Sire×E only when MinT was included in the model (p = .01). Therefore, this new definition of the environment, preadjusted by MinT, increased the ability to detect Sire×E. While the unknown challenges captured in refined CG estimates need verification for infection challenges, this may provide a

  19. Quantitative analysis of oyster larval proteome provides new insights into the effects of multiple climate change stressors

    KAUST Repository

    Dineshram, Ramadoss

    2016-03-19

    The metamorphosis of planktonic larvae of the Pacific oyster (Crassostrea gigas) underpins their complex life-history strategy by switching on the molecular machinery required for sessile life and building calcite shells. Metamorphosis becomes a survival bottleneck, which will be pressured by different anthropogenically induced climate change-related variables. Therefore, it is important to understand how metamorphosing larvae interact with emerging climate change stressors. To predict how larvae might be affected in a future ocean, we examined changes in the proteome of metamorphosing larvae under multiple stressors: decreased pH (pH 7.4), increased temperature (30 °C), and reduced salinity (15 psu). Quantitative protein expression profiling using iTRAQ-LC-MS/MS identified more than 1300 proteins. Decreased pH had a negative effect on metamorphosis by down-regulating several proteins involved in energy production, metabolism, and protein synthesis. However, warming switched on these down-regulated pathways at pH 7.4. Under multiple stressors, cell signaling, energy production, growth, and developmental pathways were up-regulated, although metamorphosis was still reduced. Despite the lack of lethal effects, significant physiological responses to both individual and interacting climate change related stressors were observed at proteome level. The metamorphosing larvae of the C. gigas population in the Yellow Sea appear to have adequate phenotypic plasticity at the proteome level to survive in future coastal oceans, but with developmental and physiological costs. © 2016 John Wiley & Sons Ltd.

  20. Comprehensive Quantitative Profiling of Tau and Phosphorylated Tau Peptides in Cerebrospinal Fluid by Mass Spectrometry Provides New Biomarker Candidates.

    Science.gov (United States)

    Russell, Claire L; Mitra, Vikram; Hansson, Karl; Blennow, Kaj; Gobom, Johan; Zetterberg, Henrik; Hiltunen, Mikko; Ward, Malcolm; Pike, Ian

    2017-01-01

    Aberrant tau phosphorylation is a hallmark in Alzheimer's disease (AD), believed to promote formation of paired helical filaments, the main constituent of neurofibrillary tangles in the brain. While cerebrospinal fluid (CSF) levels of total tau and tau phosphorylated at threonine residue 181 (pThr181) are established core biomarkers for AD, the value of alternative phosphorylation sites, which may have more direct relevance to pathology, for early diagnosis is not yet known, largely due to their low levels in CSF and lack of standardized detection methods. To overcome sensitivity limitations for analysis of phosphorylated tau in CSF, we have applied an innovative mass spectrometry (MS) workflow, TMTcalibrator™, to enrich and enhance the detection of phosphoproteome components of AD brain tissue in CSF, and enable the quantitation of these analytes. We aimed to identify which tau species present in the AD brain are also detectable in CSF and which, if any, are differentially regulated with disease. Over 75% coverage of full-length (2N4R) tau was detected in the CSF with 47 phosphopeptides covering 31 different phosphorylation sites. Of these, 11 phosphopeptides were upregulated by at least 40%, along with an overall increase in tau levels in the CSF of AD patients relative to controls. Use of the TMTcalibrator™ workflow dramatically improved our ability to detect tau-derived peptides that are directly related to human AD pathology. Further validation of regulated tau peptides as early biomarkers of AD is warranted and is currently being undertaken.

  1. The Influence of Reconstruction Kernel on Bone Mineral and Strength Estimates Using Quantitative Computed Tomography and Finite Element Analysis.

    Science.gov (United States)

    Michalski, Andrew S; Edwards, W Brent; Boyd, Steven K

    2017-10-17

    Quantitative computed tomography has been posed as an alternative imaging modality to investigate osteoporosis. We examined the influence of computed tomography convolution back-projection reconstruction kernels on the analysis of bone quantity and estimated mechanical properties in the proximal femur. Eighteen computed tomography scans of the proximal femur were reconstructed using both a standard smoothing reconstruction kernel and a bone-sharpening reconstruction kernel. Following phantom-based density calibration, we calculated typical bone quantity outcomes of integral volumetric bone mineral density, bone volume, and bone mineral content. Additionally, we performed finite element analysis in a standard sideways fall on the hip loading configuration. Significant differences for all outcome measures, except integral bone volume, were observed between the 2 reconstruction kernels. Volumetric bone mineral density measured using images reconstructed by the standard kernel was significantly lower (6.7%) than that measured using the bone-sharpening kernel. Furthermore, the whole-bone stiffness and the failure load measured in images reconstructed by the standard kernel were significantly lower (16.5%) than those measured using the bone-sharpening kernel. These data suggest that for future quantitative computed tomography studies, a standardized reconstruction kernel will maximize reproducibility, independent of the use of a quantitative calibration phantom. Copyright © 2017 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
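
    Phantom-based density calibration, the step mentioned above, is typically a linear fit from CT numbers to the known equivalent densities of calibration rods. A sketch with invented rod values:

        import numpy as np

        rod_density = np.array([0.0, 75.0, 150.0, 300.0])  # mg/cm^3, known
        rod_hu = np.array([2.0, 81.0, 157.0, 311.0])       # measured HU, hypothetical

        slope, intercept = np.polyfit(rod_hu, rod_density, 1)

        def hu_to_bmd(hu: float) -> float:
            """Map a CT number to volumetric bone mineral density (mg/cm^3)."""
            return slope * hu + intercept

        print(hu_to_bmd(220.0))  # vBMD estimate for a voxel measuring 220 HU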

  2. Optimisation of information influences on problems of consequences of Chernobyl accident and quantitative criteria for estimation of information actions

    International Nuclear Information System (INIS)

    Sobaleu, A.

    2004-01-01

    Consequences of the Chernobyl NPP accident are still very important for Belarus. About 2 million Belarusians live in districts polluted by Chernobyl radionuclides. Modern approaches to solving post-Chernobyl problems in Belarus assume more active use of information and educational actions to develop a new radiological culture. This would make it possible to reduce the internal radiation dose without spending large amounts of money and other resources. Experience of information work with the population affected by Chernobyl from 1986 to 2004 has shown that information and educational influences do not always reach their final aim - the application of the received knowledge on radiation safety in practice and a change in lifestyle. Taking into account limited funds and facilities, information work should be optimized. The optimization can be achieved on the basis of quantitative estimations of the effectiveness of information actions. Two parameters can be used for these quantitative estimations: 1) the increase in knowledge of the population and experts on radiation safety, calculated by a new method based on the applied theory of information (Claude E. Shannon's Mathematical Theory of Communication), and 2) the reduction of the internal radiation dose, calculated on the basis of measurements with a human irradiation counter (HIC) before and after an information or educational influence. (author)
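
    The record names Shannon's information theory as the basis for scoring knowledge increase but gives no formula. One plausible, assumption-laden reading, sketched below, is to measure the gain as the reduction in Shannon entropy of the distribution of answers to a radiation-safety question before and after an information action; the distributions and this interpretation are illustrative, not the author's actual method.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical answer distributions over four options on a radiation-safety
# question, before and after an educational action (illustrative numbers).
before = [0.25, 0.25, 0.25, 0.25]   # maximal uncertainty
after = [0.70, 0.10, 0.10, 0.10]    # answers concentrate on one option

knowledge_gain = entropy(before) - entropy(after)  # bits
print(round(knowledge_gain, 2))
```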

  3. Quantitative estimation of myocardial thickness by the wall thickness map with Tl-201 myocardial SPECT and its clinical use

    International Nuclear Information System (INIS)

    Sekiai, Yasuhiro; Sawai, Michihiko; Murayama, Susumu

    1988-01-01

    To estimate the wall thickness of the left ventricular myocardium objectively and quantitatively, we adopted the wall thickness map (WTM) with Tl-201 myocardial SPECT. To validate the measurement of left ventricular wall thickness with SPECT, fundamental studies were carried out with phantom models, and clinical studies were performed in 10 cases comparing the results from SPECT with those from echocardiography. To draw the WTM, left ventricular wall thickness was measured using the cut-off method from SPECT images obtained at 5.6 mm intervals from the base and middle of the left ventricle: short-axis images for the base and middle of the left ventricle, and vertical and horizontal long-axis images for the apical region. Wall thickness was defined from the number of pixels above the cut-off level. The fundamental studies disclosed that Tl-201 myocardial SPECT cannot evaluate wall thicknesses of less than 10 mm but can discriminate wall thicknesses of 10 mm, 15 mm, and 20 mm. Echocardiographic results supported the validity of the WTM, showing a good linear correlation (r = 0.96) between the two methods for measuring the wall thickness of the left ventricle. We conclude that the WTM applied in this report may be useful for objective and quantitative estimation of myocardial hypertrophy. (author)
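
    A minimal sketch of the cut-off method as described: pixels in a count profile across the wall that exceed a fixed fraction of the profile maximum are counted as myocardium, and thickness is that count times the pixel size. The profile values, the 50% cut-off fraction, and the in-plane pixel size are all hypothetical.

```python
import numpy as np

def wall_thickness(profile, pixel_mm, cutoff_frac=0.5):
    """Estimate wall thickness from a count profile across the myocardium.

    Counts at or above cutoff_frac * max(profile) are taken to lie inside
    the wall; thickness is the number of such pixels times the pixel size.
    """
    profile = np.asarray(profile, dtype=float)
    cutoff = cutoff_frac * profile.max()
    n_inside = int(np.sum(profile >= cutoff))
    return n_inside * pixel_mm

# Illustrative count profile across the left-ventricular wall.
profile = [10, 30, 80, 140, 150, 130, 70, 25, 12]
print(wall_thickness(profile, pixel_mm=3.2))  # hypothetical pixel size (mm)
```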

  4. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    International Nuclear Information System (INIS)

    Wang Dongxu; Mackie, T Rockwell; Tome, Wolfgang A

    2011-01-01

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ∼0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy.
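
    The straight-line path is simply a linear interpolation between the measured entry and exit positions; the cubic spline path additionally matches the measured entry and exit direction vectors. A minimal 2D sketch of such a cubic (Hermite) spline construction follows, under the assumption that tangents are scaled by the chord length; it illustrates the general idea, not the paper's implementation.

```python
import numpy as np

def cubic_spline_path(p_in, d_in, p_out, d_out, n=100):
    """Cubic Hermite spline between entry and exit points, matching both
    the measured positions and unit direction vectors (2D illustration)."""
    p_in, d_in = np.asarray(p_in, float), np.asarray(d_in, float)
    p_out, d_out = np.asarray(p_out, float), np.asarray(d_out, float)
    L = np.linalg.norm(p_out - p_in)      # chord length sets tangent scale
    t = np.linspace(0.0, 1.0, n)[:, None]
    h00 = 2*t**3 - 3*t**2 + 1             # Hermite basis functions
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*p_in + h10*L*d_in + h01*p_out + h11*L*d_out

# Proton enters at the origin heading +x, exits 200 mm away slightly deflected.
path = cubic_spline_path([0, 0], [1, 0], [200, 6], [0.995, 0.1], n=5)
print(path)
```

    Dropping the two tangent terms recovers the straight-line path, which is why the SLP is the cheaper but slightly less accurate estimate.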

  5. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    Energy Technology Data Exchange (ETDEWEB)

    Wang Dongxu; Mackie, T Rockwell; Tome, Wolfgang A, E-mail: tome@humonc.wisc.edu [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, WI 53705 (United States)

    2011-02-07

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ~0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy.

  6. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    Science.gov (United States)

    Wang, Dongxu; Mackie, T Rockwell

    2015-01-01

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ~0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy. PMID:21212472

  7. Raman spectroscopy of human skin: looking for a quantitative algorithm to reliably estimate human age

    Science.gov (United States)

    Pezzotti, Giuseppe; Boffelli, Marco; Miyamori, Daisuke; Uemura, Takeshi; Marunaka, Yoshinori; Zhu, Wenliang; Ikegaya, Hiroshi

    2015-06-01

    The possibility of examining soft tissues by Raman spectroscopy is explored in an attempt to probe human age through the changes in the biochemical composition of skin that accompany aging. We present a proof-of-concept report for explicating the biophysical links between vibrational characteristics and the specific compositional and chemical changes associated with aging. The actual existence of such links is then phenomenologically proved. In an attempt to establish the basics for a quantitative use of Raman spectroscopy in assessing age from human skin samples, a precise spectral deconvolution is performed as a function of donors' ages on five cadaveric samples, which emphasizes the physical significance and the morphological modifications of the Raman bands. The outputs suggest the presence of spectral markers for age identification from skin samples. Some of them appeared as authentic "biological clocks" because of the apparent exactness with which they relate to age. Our spectroscopic approach yields clear compositional information on protein folding and crystallization of lipid structures, which can lead to a precise identification of age from infants to adults. Once statistically validated, these parameters might be used to link vibrational aspects at the molecular scale for practical forensic purposes.
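
    Spectral deconvolution of the kind described usually means fitting a measured band envelope as a sum of sub-bands and tracking the fitted parameters across donors. The sketch below fits two Gaussian sub-bands to a synthetic spectrum fragment; the band positions, widths, and the two-component model are illustrative assumptions, not the study's actual deconvolution.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, cen, wid):
    return amp * np.exp(-((x - cen) ** 2) / (2 * wid ** 2))

def two_bands(x, a1, c1, w1, a2, c2, w2):
    # Sum of two Gaussian sub-bands; real spectra need more components.
    return gaussian(x, a1, c1, w1) + gaussian(x, a2, c2, w2)

# Synthetic fragment around the amide I region (wavenumbers in cm^-1).
x = np.linspace(1600, 1700, 200)
rng = np.random.default_rng(0)
y = two_bands(x, 1.0, 1655, 8, 0.6, 1675, 10) + rng.normal(0, 0.02, x.size)

p0 = [1.0, 1650, 10, 0.5, 1680, 10]       # initial parameter guesses
popt, _ = curve_fit(two_bands, x, y, p0=p0)
print(popt[:3], popt[3:])                 # fitted band parameters
```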

  8. Quantitative estimation of pulegone in Mentha longifolia growing in Saudi Arabia. Is it safe to use?

    Science.gov (United States)

    Alam, Prawez; Saleh, Mahmoud Fayez; Abdel-Kader, Maged Saad

    2016-03-01

    Our TLC study of the volatile oil isolated from Mentha longifolia showed a major UV-active spot with a higher Rf value than menthol. Based on the fact that the components of the oil from the same plant differ quantitatively due to environmental conditions, the major spot was isolated using different chromatographic techniques and identified by spectroscopic means as pulegone. The presence of pulegone in M. longifolia, a plant widely used in Saudi Arabia, raised a hot debate due to its known toxicity. The Scientific Committee on Food, Health & Consumer Protection Directorate General, European Commission, set a limit for the presence of pulegone in foodstuffs and beverages. In this paper we attempted to determine the exact amount of pulegone in different extracts, the volatile oil, and tea flavoured with M. longifolia (Habak) by validated densitometric HPTLC methods using normal phase (Method I) and reverse phase (Method II) TLC plates. The study indicated that the style of use of Habak in Saudi Arabia resulted in much lower amounts of pulegone than the allowed limit.

  9. Quantitative estimation of intestinal dilation as a predictor of obstruction in the dog.

    Science.gov (United States)

    Graham, J P; Lord, P F; Harrison, J M

    1998-11-01

    Mechanical obstruction is a major differential diagnosis for dogs presented with gastrointestinal problems. Small intestinal dilation is a cardinal sign of obstruction but its recognition depends upon the observer's experience and anecdotally derived parameters for normal small intestinal diameter. The objective of this study was to formulate a quantitative index for normal intestinal diameter and evaluate its usefulness in predicting small intestinal obstruction. The material consisted of survey abdominal radiographs of 50 normal dogs, 44 cases of intestinal obstruction and 86 patients which subsequently had an upper gastrointestinal examination. A ratio of the maximum small intestinal diameter (SI) and the height of the body of the fifth lumbar vertebra at its narrowest point (L5) was used, and a logistic regression model employed to determine the probability of an obstruction existing with varying degrees of intestinal dilation. A value of 1.6 for SI/L5 is recommended as the upper limit of normal intestinal diameter for clinical use. The model showed that obstruction is very unlikely if the SI/L5 value is less than this. Higher values were significantly associated with obstruction.
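
    A minimal sketch of the kind of logistic model described, mapping the SI/L5 ratio to a probability of obstruction. The record reports the model form and the 1.6 clinical cut-off but not the fitted coefficients, so b0 and b1 below are illustrative placeholders.

```python
import numpy as np

# Hypothetical logistic model for P(obstruction) as a function of SI/L5.
# b0 and b1 are illustrative, not the study's fitted coefficients.
b0, b1 = -8.0, 4.0

def p_obstruction(si_l5_ratio):
    z = b0 + b1 * si_l5_ratio
    return 1.0 / (1.0 + np.exp(-z))

for r in (1.2, 1.6, 2.0, 2.4):
    print(r, round(p_obstruction(r), 3))
```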

  10. Quantitative estimation of intestinal dilation as a predictor of obstruction in the dog

    International Nuclear Information System (INIS)

    Graham, J.P.; Lord, P.F.; Harrison, J.M.

    1998-01-01

    Mechanical obstruction is a major differential diagnosis for dogs presented with gastrointestinal problems. Small intestinal dilation is a cardinal sign of obstruction but its recognition depends upon the observer's experience and anecdotally derived parameters for normal small intestinal diameter. The objective of this study was to formulate a quantitative index for normal intestinal diameter and evaluate its usefulness in predicting small intestinal obstruction. The material consisted of survey abdominal radiographs of 50 normal dogs, 44 cases of intestinal obstruction and 86 patients which subsequently had an upper gastrointestinal examination. A ratio of the maximum small intestinal diameter (SI) and the height of the body of the fifth lumbar vertebra at its narrowest point (L5) was used, and a logistic regression model employed to determine the probability of an obstruction existing with varying degrees of intestinal dilation. A value of 1.6 for SI/L5 is recommended as the upper limit of normal intestinal diameter for clinical use. The model showed that obstruction is very unlikely if the SI/L5 value is less than this. Higher values were significantly associated with obstruction

  11. Estimation of quantitative levels of diesel exhaust exposure and the health impact in the contemporary Australian mining industry.

    Science.gov (United States)

    Peters, Susan; de Klerk, Nicholas; Reid, Alison; Fritschi, Lin; Musk, Aw Bill; Vermeulen, Roel

    2017-03-01

    To estimate quantitative levels of exposure to diesel exhaust expressed by elemental carbon (EC) in the contemporary mining industry and to describe the excess risk of lung cancer that may result from those levels. EC exposure has been monitored in Western Australian miners since 2003. Mixed-effects models were used to estimate EC levels for five surface and five underground occupation groups (as a fixed effect) and specific jobs within each group (as a random effect). Further fixed effects included sampling year and duration, and mineral mined. On the basis of published risk functions, we estimated excess lifetime risk of lung cancer mortality for several employment scenarios. Personal EC measurements (n=8614) were available for 146 different jobs at 124 mine sites. The mean estimated EC exposure level for surface occupations in 2011 was 14 µg/m3 for 12-hour shifts. Levels for underground occupation groups ranged from 18 to 44 µg/m3. Underground diesel loader operator was the specific job with the highest exposure: 59 µg/m3. A lifetime career (45 years) as a surface worker or underground miner, experiencing exposure levels as estimated for 2011 (14 and 44 µg/m3 EC), was associated with 5.5 and 38 extra lung cancer deaths per 1000 males, respectively. EC exposure levels in the contemporary Australian mining industry are still substantial, particularly for underground workers. The estimated excess numbers of lung cancer deaths associated with these exposures support the need for implementation of stringent occupational exposure limits for diesel exhaust. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
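
    A minimal sketch of a mixed-effects model of this general form, with occupation group and year as fixed effects and specific job as a random effect. The simulated data, the log-scale response, and the variable names are assumptions, not the study's dataset or exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated monitoring records: log EC level by occupation group (fixed
# effect) with specific job as a random effect. Values are illustrative.
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "log_ec": rng.normal(3.0, 0.5, n),
    "group": rng.choice(["surface", "underground"], n),
    "job": rng.choice([f"job_{i}" for i in range(10)], n),
    "year": rng.integers(2003, 2012, n),
})

model = smf.mixedlm("log_ec ~ group + year", df, groups=df["job"])
print(model.fit().summary())
```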

  12. Quantitative falls risk estimation through multi-sensor assessment of standing balance.

    Science.gov (United States)

    Greene, Barry R; McGrath, Denise; Walsh, Lorcan; Doheny, Emer P; McKeown, David; Garattini, Chiara; Cunningham, Clodagh; Crosby, Lisa; Caulfield, Brian; Kenny, Rose A

    2012-12-01

    Falls are the most common cause of injury and hospitalization and one of the principal causes of death and disability in older adults worldwide. Measures of postural stability have been associated with the incidence of falls in older adults. The aim of this study was to develop a model that accurately classifies fallers and non-fallers using novel multi-sensor quantitative balance metrics that can be easily deployed into a home or clinic setting. We compared the classification accuracy of our model with an established method for falls risk assessment, the Berg balance scale. Data were acquired using two sensor modalities--a pressure sensitive platform sensor and a body-worn inertial sensor, mounted on the lower back--from 120 community dwelling older adults (65 with a history of falls, 55 without, mean age 73.7 ± 5.8 years, 63 female) while performing a number of standing balance tasks in a geriatric research clinic. Results obtained using a support vector machine yielded a mean classification accuracy of 71.52% (95% CI: 68.82-74.28) in classifying falls history, obtained using one model classifying all data points. Considering male and female participant data separately yielded classification accuracies of 72.80% (95% CI: 68.85-77.17) and 73.33% (95% CI: 69.88-76.81) respectively, leading to a mean classification accuracy of 73.07% in identifying participants with a history of falls. Results compare favourably to those obtained using the Berg balance scale (mean classification accuracy: 59.42% (95% CI: 56.96-61.88)). Results from the present study could lead to a robust method for assessing falls risk in both supervised and unsupervised environments.
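
    A minimal sketch of the classification step under stated assumptions: simulated stand-ins for the multi-sensor balance metrics, an RBF-kernel support vector machine, and 10-fold cross-validation. The feature construction and group sizes mirror the record's cohort only loosely.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Simulated stand-ins for multi-sensor balance metrics (e.g. sway area,
# acceleration RMS); labels indicate falls history. Illustrative only.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (55, 6)),    # non-fallers
               rng.normal(0.6, 1.0, (65, 6))])   # fallers
y = np.array([0] * 55 + [1] * 65)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=10)
print(round(scores.mean(), 3))
```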

  13. Quantitative falls risk estimation through multi-sensor assessment of standing balance

    International Nuclear Information System (INIS)

    Greene, Barry R; McGrath, Denise; Walsh, Lorcan; Doheny, Emer P; McKeown, David; Garattini, Chiara; Cunningham, Clodagh; Crosby, Lisa; Caulfield, Brian; Kenny, Rose A

    2012-01-01

    Falls are the most common cause of injury and hospitalization and one of the principal causes of death and disability in older adults worldwide. Measures of postural stability have been associated with the incidence of falls in older adults. The aim of this study was to develop a model that accurately classifies fallers and non-fallers using novel multi-sensor quantitative balance metrics that can be easily deployed into a home or clinic setting. We compared the classification accuracy of our model with an established method for falls risk assessment, the Berg balance scale. Data were acquired using two sensor modalities—a pressure sensitive platform sensor and a body-worn inertial sensor, mounted on the lower back—from 120 community dwelling older adults (65 with a history of falls, 55 without, mean age 73.7 ± 5.8 years, 63 female) while performing a number of standing balance tasks in a geriatric research clinic. Results obtained using a support vector machine yielded a mean classification accuracy of 71.52% (95% CI: 68.82–74.28) in classifying falls history, obtained using one model classifying all data points. Considering male and female participant data separately yielded classification accuracies of 72.80% (95% CI: 68.85–77.17) and 73.33% (95% CI: 69.88–76.81) respectively, leading to a mean classification accuracy of 73.07% in identifying participants with a history of falls. Results compare favourably to those obtained using the Berg balance scale (mean classification accuracy: 59.42% (95% CI: 56.96–61.88)). Results from the present study could lead to a robust method for assessing falls risk in both supervised and unsupervised environments. (paper)

  14. Quantitative estimation of lithofacies from seismic data in a tertiary turbidite system in the North Sea

    Energy Technology Data Exchange (ETDEWEB)

    Joerstad, A.K.; Avseth, P.Aa; Mukerji, T.; Mavko, G.; Granli, J.R.

    1998-12-31

    Deep water clastic systems and associated turbidite reservoirs are often characterized by very complex sand distributions, and reservoir description based on conventional seismic and well-log stratigraphic analysis may be very uncertain in these depositional environments. It has been shown that reservoirs in turbidite systems have been produced very inefficiently in conventional development. More than 70% of the mobile oil is commonly left behind because of the heterogeneous nature of these reservoirs. In this study, a turbidite system in the North Sea with five available wells and a 3-D seismic near and far offset stack is examined to establish the most likely estimates of facies and pore fluid within the cube. 5 figs.

  15. Provider report of the existence of detection and care of perinatal depression: quantitative evidence from public obstetric units in Mexico

    Directory of Open Access Journals (Sweden)

    Filipa de Castro

    2016-07-01

    Full Text Available Objective. To provide evidence on perinatal mental healthcare in Mexico. Materials and methods. Descriptive and bivariate analyses of data from a cross-sectional probabilistic survey of 211 public obstetric units. Results. Over half (64.0%) of units offer mental healthcare; fewer offer perinatal depression (PND) detection (37.1%) and care (40.3%). More units had protocols/guidelines for PND detection and for care, respectively, in Mexico City-Mexico state (76.7%; 78.1%) than in Southern (26.5%; 36.4%), Northern (27.3%; 28.1%) and Central Mexico (50.0%; 52.7%). Conclusion. Protocols and provider training in PND, implementation of brief screening tools and psychosocial interventions delivered by non-clinical personnel are needed. DOI: http://dx.doi.org/10.21149/spm.v58i4.8028

  16. Provider report of the existence of detection and care of perinatal depression: quantitative evidence from public obstetric units in Mexico.

    Science.gov (United States)

    Castro, Filipa de; Place, Jean Marie; Allen-Leigh, Betania; Rivera-Rivera, Leonor; Billings, Deborah

    2016-08-01

    To provide evidence on perinatal mental healthcare in Mexico. Descriptive and bivariate analyses of data from a cross-sectional probabilistic survey of 211 public obstetric units. Over half (64.0%) of units offer mental healthcare; fewer offer perinatal depression (PND) detection (37.1%) and care (40.3%). More units had protocols/guidelines for PND detection and for care, respectively, in Mexico City-Mexico state (76.7%; 78.1%) than in Southern (26.5%; 36.4%), Northern (27.3%; 28.1%) and Central Mexico (50.0%; 52.7%). Protocols and provider training in PND, implementation of brief screening tools and psychosocial interventions delivered by non-clinical personnel are needed.

  17. Systematic feasibility analysis of a quantitative elasticity estimation for breast anatomy using supine/prone patient postures.

    Science.gov (United States)

    Hasse, Katelyn; Neylon, John; Sheng, Ke; Santhanam, Anand P

    2016-03-01

    Breast elastography is a critical tool for improving the targeted radiotherapy treatment of breast tumors. Current breast radiotherapy imaging protocols only involve prone and supine CT scans. There is a lack of knowledge on the quantitative accuracy with which breast elasticity can be systematically measured using only prone and supine CT datasets. The purpose of this paper is to describe a quantitative elasticity estimation technique for breast anatomy using only these supine/prone patient postures. Using biomechanical, high-resolution breast geometry obtained from CT scans, a systematic assessment was performed in order to determine the feasibility of this methodology for clinically relevant elasticity distributions. A model-guided inverse analysis approach is presented in this paper. A graphics processing unit (GPU)-based linear elastic biomechanical model was employed as a forward model for the inverse analysis with the breast geometry in a prone position. The elasticity estimation was performed using a gradient-based iterative optimization scheme and a fast simulated annealing (FSA) algorithm. Numerical studies were conducted to systematically analyze the feasibility of elasticity estimation. For simulating gravity-induced breast deformation, the breast geometry was anchored at its base, resembling the chest-wall/breast tissue interface. Ground-truth elasticity distributions were assigned to the model, representing tumor presence within breast tissue. Model geometry resolution was varied to estimate its influence on convergence of the system. A priori information was approximated and utilized to record the effect on time and accuracy of convergence. The role of the FSA process was also recorded. A novel error metric that combined elasticity and displacement error was used to quantify the systematic feasibility study. For the authors' purposes, convergence was set to be obtained when each voxel of tissue was within 1 mm of ground-truth deformation. The authors

  18. A generalized estimating equations approach to quantitative trait locus detection of non-normal traits

    Directory of Open Access Journals (Sweden)

    Thomson Peter C

    2003-05-01

    Full Text Available To date, most statistical developments in QTL detection methodology have been directed at continuous traits with an underlying normal distribution. This paper presents a method for QTL analysis of non-normal traits using a generalized linear mixed model approach. Development of this method has been motivated by a backcross experiment involving two inbred lines of mice that was conducted in order to locate a QTL for litter size. A Poisson regression form is used to model litter size, with allowances made for under- as well as over-dispersion, as suggested by the experimental data. In addition to fixed parity effects, random animal effects have also been included in the model. However, the method is not fully parametric as the model is specified only in terms of means, variances and covariances, and not as a full probability model. Consequently, a generalized estimating equations (GEE) approach is used to fit the model. For statistical inferences, permutation tests and bootstrap procedures are used. This method is illustrated with simulated as well as experimental mouse data. Overall, the method is found to be quite reliable, and with modification, can be used for QTL detection for a range of other non-normally distributed traits.
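
    A minimal sketch of a GEE fit of this general shape, assuming a Poisson-form mean, parity as a fixed effect, and clustering by animal with an exchangeable working correlation; the simulated data and coefficients are illustrative, not the mouse experiment.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated backcross-style data: litter size (a count) with a QTL
# genotype effect, repeated records clustered by animal. Illustrative only.
rng = np.random.default_rng(5)
df = pd.DataFrame({
    "animal": np.repeat(np.arange(40), 6),
    "parity": np.tile(np.arange(1, 7), 40),
    "qtl_genotype": np.repeat(rng.integers(0, 2, 40), 6),
})
mu = np.exp(1.6 + 0.25 * df["qtl_genotype"] + 0.02 * df["parity"])
df["litter_size"] = rng.poisson(mu)

model = smf.gee("litter_size ~ qtl_genotype + parity", groups="animal",
                data=df, family=sm.families.Poisson(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```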

  19. Quantitative Estimation of Risks for Production Unit Based on OSHMS and Process Resilience

    Science.gov (United States)

    Nyambayar, D.; Koshijima, I.; Eguchi, H.

    2017-06-01

    Three principal elements in the production field of the chemical/petrochemical industry are (i) Production Units, (ii) Production Plant Personnel and (iii) the Production Support System (a computer system introduced to improve productivity). Each principal element has production process resilience, i.e. a capability to restrain disruptive signals occurring in and outside the production field. For each principal element, risk assessment is indispensable in the production field. In a production facility, an occupational safety and health management system (hereafter referred to as OSHMS) has been introduced to reduce the risk of accidents and troubles that may occur during production. In OSHMS, a risk assessment is specified to reduce potential risks in production facilities such as factories, and PDCA activities are required for continual improvement of safe production environments. However, there is no clear statement on how to adopt the OSHMS standard in the production field. This study introduces a metric to estimate the resilience of the production field by using the resilience generated by the production plant personnel and the result of the risk assessment in the production field. A method for evaluating how OSHMS functions are systematically installed in the production field is also discussed based on the resilience of the three principal elements.

  20. A scintillation camera technique for quantitative estimation of separate kidney function and its use before nephrectomy

    International Nuclear Information System (INIS)

    Larsson, I.; Lindstedt, E.; Ohlin, P.; Strand, S.E.; White, T.

    1975-01-01

    A scintillation camera technique was used for measuring renal uptake of [131I]Hippuran 80-110 s after injection. Externally measured Hippuran uptake was markedly influenced by kidney depth, which was measured by a lateral-view image after injection of [99Tc]iron ascorbic acid complex or [197Hg]chlormerodrine. When one kidney was nearer to the dorsal surface of the body than the other, it was necessary to correct the externally measured Hippuran uptake for kidney depth to obtain reliable information on the true partition of Hippuran between the two kidneys. In some patients the glomerular filtration rate (GFR) was measured before and after nephrectomy. Measured postoperative GFR was compared with preoperative predicted GFR, which was calculated by multiplying the preoperative Hippuran uptake of the kidney to be left in situ, as a fraction of the preoperative Hippuran uptake of both kidneys, by the measured preoperative GFR. The measured postoperative GFR was usually moderately higher than the preoperatively predicted GFR. The difference could be explained by a postoperative compensatory increase in function of the remaining kidney. Thus, the present method offers a possibility of estimating separate kidney function without arterial or ureteric catheterization. (auth)
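
    The prediction rule described reduces to a one-line calculation: predicted postoperative GFR is the measured preoperative GFR multiplied by the remaining kidney's fraction of total depth-corrected Hippuran uptake. A sketch with illustrative numbers:

```python
def predicted_postop_gfr(gfr_preop, uptake_remaining, uptake_total):
    """Predicted GFR after nephrectomy: preoperative GFR multiplied by the
    remaining kidney's fraction of total (depth-corrected) Hippuran uptake."""
    return gfr_preop * (uptake_remaining / uptake_total)

# Illustrative: 80 mL/min preoperative GFR, remaining kidney accounts
# for 60% of the depth-corrected Hippuran uptake.
print(predicted_postop_gfr(80.0, 0.6, 1.0))  # 48.0 mL/min
```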

  1. Usefulness of the automatic quantitative estimation tool for cerebral blood flow: clinical assessment of the application software tool AQCEL.

    Science.gov (United States)

    Momose, Mitsuhiro; Takaki, Akihiro; Matsushita, Tsuyoshi; Yanagisawa, Shin; Yano, Kesato; Miyasaka, Tadashi; Ogura, Yuka; Kadoya, Masumi

    2011-01-01

    AQCEL enables automatic reconstruction of single-photon emission computed tomography (SPECT) images without image degradation and quantitative analysis of cerebral blood flow (CBF) after the input of simple parameters. We ascertained the usefulness and quality of images obtained by the application software AQCEL in clinical practice. Twelve patients underwent brain perfusion SPECT using technetium-99m ethyl cysteinate dimer at rest and after acetazolamide (ACZ) loading. Images reconstructed using AQCEL were compared with those reconstructed using the conventional filtered back projection (FBP) method for qualitative estimation. Two experienced nuclear medicine physicians interpreted the image quality using the following visual scores: 0, same; 1, slightly superior; 2, superior. For quantitative estimation, the mean CBF values of the normal hemisphere of the 12 patients using ACZ calculated by the AQCEL method were compared with those calculated by the conventional method. The CBF values of the 24 regions of the 3-dimensional stereotaxic region of interest template (3DSRT) calculated by the AQCEL method at rest and after ACZ loading were compared to those calculated by the conventional method. No significant qualitative difference was observed between the AQCEL and conventional FBP methods in the rest study. The average score by the AQCEL method was 0.25 ± 0.45 and that by the conventional method was 0.17 ± 0.39 (P = 0.34). There was a significant qualitative difference between the AQCEL and conventional methods in the ACZ loading study. The average score for AQCEL was 0.83 ± 0.58 and that for the conventional method was 0.08 ± 0.29 (P = 0.003). During quantitative estimation using ACZ, the mean CBF values of 12 patients calculated by the AQCEL method were 3-8% higher than those calculated by the conventional method. The square of the correlation coefficient between these methods was 0.995. While comparing the 24 3DSRT regions of 12 patients, the squares of the correlation

  2. Spiritual care competence for contemporary nursing practice: A quantitative exploration of the guidance provided by fundamental nursing textbooks.

    Science.gov (United States)

    Timmins, Fiona; Neill, Freda; Murphy, Maryanne; Begley, Thelma; Sheaf, Greg

    2015-11-01

    Spirituality is receiving unprecedented attention in the nursing literature. Both the volume and scope of literature on the topic are expanding, and it is clear that this topic is of interest to nurses. There is consensus that the spiritual care required by clients receiving healthcare ought to be an integrated effort across the healthcare team. Although undergraduate nurses receive some education on the topic, this is ad hoc and inconsistent across universities. Textbooks are clearly a key resource in this area; however, the extent to which they form a comprehensive guide for nursing students and nurses is unclear. This study provides a hitherto unperformed analysis of core nursing textbooks to ascertain spirituality-related content. 543 books were examined, and this provides a range of useful information about inclusions and omissions in this field. Findings revealed that spirituality is not strongly portrayed as a component of holistic care and that specific direction for the provision of spiritual care is lacking. Fundamental textbooks used by nurses and nursing students ought to inform and guide integrated spiritual care and reflect a more holistic approach to nursing care. The religious and/or spiritual needs of an increasingly diverse community need to be taken seriously within scholarly texts so that this commitment to individual clients' needs can be mirrored in practice. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Quantitative and qualitative estimates of cross-border tobacco shopping and tobacco smuggling in France.

    Science.gov (United States)

    Lakhdar, C Ben

    2008-02-01

    In France, cigarette sales have fallen sharply, especially in border areas, since the price increases of 2003 and 2004. It was proposed that these falls were not due to people quitting smoking but rather to increased cross-border sales of tobacco and/or smuggling. This paper aims to test this proposition. Three approaches have been used. First, cigarette sales data from French sources for the period 1999-2006 were collected, and a simulation of the changes seen within these sales was carried out in order to estimate what the sales situation would have looked like without the presence of foreign tobacco. Second, the amounts of tobacco the French population reported consuming were compared with registered tobacco sales. Finally, in order to identify the countries of origin of foreign tobacco entering France, we collected a random sample of cigarette packs from a waste collection centre. According to the first method, cross-border shopping and smuggling of tobacco accounted for 8635 tonnes of tobacco in 2004, 9934 in 2005, and 9930 in 2006, i.e., between 14% and 17% of total sales. The second method gave larger results: the difference between registered cigarette sales and cigarettes declared as being smoked was around 12,000 to 13,000 tonnes in 2005, equivalent to 20% of legal sales. The collection of cigarette packs at a waste collection centre showed that foreign cigarettes accounted for 18.6% of our sample in 2005 and 15.5% in 2006. France seems mainly to be a victim of cross-border purchasing of tobacco products, with the contraband market for tobacco remaining modest. In order to avoid cross-border purchases, an increased harmonization of national policies on the taxation of tobacco products needs to be envisaged by the European Union.

  4. Quantitative precipitation estimation based on high-resolution numerical weather prediction and data assimilation with WRF – a performance test

    Directory of Open Access Journals (Sweden)

    Hans-Stefan Bauer

    2015-04-01

    Full Text Available Quantitative precipitation estimation and forecasting (QPE and QPF) are among the most challenging tasks in atmospheric sciences. In this work, QPE based on numerical modelling and data assimilation is investigated. Key components are the Weather Research and Forecasting (WRF) model in combination with its 3D variational assimilation scheme, applied on the convection-permitting scale with sophisticated model physics over central Europe. The system is operated in a 1-hour rapid update cycle and processes a large set of in situ observations, data from French radar systems, the European GPS network and satellite sensors. Additionally, a free forecast driven by the ECMWF operational analysis is included as a reference run representing current operational precipitation forecasting. The verification is done both qualitatively and quantitatively by comparisons of reflectivity, accumulated precipitation fields and derived verification scores for a complex synoptic situation that developed on 26 and 27 September 2012. The investigation shows that even the downscaling from ECMWF represents the synoptic situation reasonably well. However, significant improvements are seen in the results of the WRF QPE setup, especially when the French radar data are assimilated. The frontal structure is more defined and the timing of the frontal movement is improved compared with observations. Even mesoscale band-like precipitation structures on the rear side of the cold front are reproduced, as seen by radar. The improvement in performance is also confirmed by a quantitative comparison of the 24-hourly accumulated precipitation over Germany. The mean correlation of the model simulations with observations improved from 0.2 in the downscaling experiment and 0.29 in the assimilation experiment without radar data to 0.56 in the WRF QPE experiment including the assimilation of French radar data.
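
    The quantitative comparison described, a correlation of 24-hourly accumulated precipitation between model and observations, can be sketched as a gridpoint correlation; the synthetic fields below are illustrative stand-ins for the verification data.

```python
import numpy as np

def field_correlation(model_precip, observed_precip):
    """Pearson correlation between model and observed 24-h accumulated
    precipitation fields, flattened over grid points."""
    m = np.asarray(model_precip, float).ravel()
    o = np.asarray(observed_precip, float).ravel()
    return np.corrcoef(m, o)[0, 1]

# Illustrative 2D fields (mm per 24 h) on a 50 x 50 grid.
rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 5.0, size=(50, 50))
model = obs * 0.8 + rng.normal(0, 4.0, size=obs.shape)
print(round(field_correlation(model, obs), 2))
```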

  5. Stereological estimates of nuclear volume and other quantitative variables in supratentorial brain tumors. Practical technique and use in prognostic evaluation

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Braendgaard, H; Chistiansen, A O

    1991-01-01

    The use of morphometry and modern stereology in malignancy grading of brain tumors is only poorly investigated. The aim of this study was to present these quantitative methods. A retrospective feasibility study of 46 patients with supratentorial brain tumors was carried out to demonstrate the practical technique. The continuous variables were correlated with the subjective, qualitative WHO classification of brain tumors, and the prognostic value of the parameters was assessed. Well differentiated astrocytomas (n = 14) had smaller estimates of the volume-weighted mean nuclear volume and mean nuclear profile area than those of anaplastic astrocytomas (n = 13) (2p = 3.1 × 10^-3 and 2p = 4.8 × 10^-3, respectively). No differences were seen between the latter type of tumor and glioblastomas (n = 19). The nuclear index was of the same magnitude in all three tumor types, whereas the mitotic index...

  6. Hawaii Clean Energy Initiative (HCEI) Scenario Analysis: Quantitative Estimates Used to Facilitate Working Group Discussions (2008-2010)

    Energy Technology Data Exchange (ETDEWEB)

    Braccio, R.; Finch, P.; Frazier, R.

    2012-03-01

    This report provides details on the Hawaii Clean Energy Initiative (HCEI) Scenario Analysis, which identifies potential policy options and evaluates their impact on reaching the 70% HCEI goal, presents possible pathways to attain the goal based on currently available technology (with an eye to initiatives under way in Hawaii), and provides an 'order-of-magnitude' cost estimate and a jump-start to action that can be adjusted as understanding of the technologies and market improves.

  7. Agreement between clinical estimation and a new quantitative analysis by Photoshop software in fundus and angiographic image variables.

    Science.gov (United States)

    Ramezani, Alireza; Ahmadieh, Hamid; Azarmina, Mohsen; Soheilian, Masoud; Dehghan, Mohammad H; Mohebbi, Mohammad R

    2009-12-01

    To evaluate the validity of a new method for the quantitative analysis of fundus or angiographic images using Photoshop 7.0 (Adobe, USA) software by comparing it with clinical evaluation. Four hundred and eighteen fundus and angiographic images of diabetic patients were evaluated by three retina specialists and then by computation using Photoshop 7.0 software. Four variables were selected for comparison: amount of hard exudates (HE) on color pictures, amount of HE on red-free pictures, severity of leakage, and the size of the foveal avascular zone (FAZ). The coefficients of agreement (kappa) between the two methods for the amount of HE on color and red-free photographs were 85% (0.69) and 79% (0.59), respectively. The agreement for severity of leakage was 72% (0.46). For the evaluation of the FAZ size using the magic wand and lasso tools, the agreement was 54% (0.09) and 89% (0.77), respectively. Agreement in the estimation of the FAZ size by the magnetic lasso tool was excellent, and agreement was almost as good in the quantification of HE on color and on red-free images. Considering the agreement of this new technique for the measurement of variables in fundus images using Photoshop software with the clinical evaluation, this method seems to have sufficient validity to be used for the quantitative analysis of HE, leakage, and FAZ size on the angiograms of diabetic patients.
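
    Cohen's kappa, the agreement coefficient reported alongside the percentages above, can be computed directly; the graded ratings below are illustrative stand-ins for the clinical and software scores.

```python
from sklearn.metrics import cohen_kappa_score

# Illustrative graded ratings (e.g. hard-exudate amount on a 0-3 scale)
# for the same 10 images, scored clinically and by the software method.
clinical = [0, 1, 2, 2, 3, 1, 0, 2, 3, 1]
software = [0, 1, 2, 3, 3, 1, 0, 2, 3, 2]

print(round(cohen_kappa_score(clinical, software), 2))
```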

  8. Patient and healthcare provider barriers to hypertension awareness, treatment and follow up: a systematic review and meta-analysis of qualitative and quantitative studies.

    Directory of Open Access Journals (Sweden)

    Rasha Khatib

    Full Text Available BACKGROUND: Although the importance of detecting, treating, and controlling hypertension has been recognized for decades, the majority of patients with hypertension remain uncontrolled. The path from evidence to practice contains many potential barriers, but their role has not been reviewed systematically. This review aimed to synthesize and identify important barriers to hypertension control as reported by patients and healthcare providers. METHODS: Electronic databases MEDLINE, EMBASE and Global Health were searched systematically up to February 2013. Two reviewers independently selected eligible studies. Two reviewers categorized barriers based on a theoretical framework of behavior change. The theoretical framework suggests that a change in behavior requires a strong commitment to change [intention], the necessary skills and abilities to adopt the behavior [capability], and an absence of health system and support constraints. FINDINGS: Twenty-five qualitative studies and 44 quantitative studies met the inclusion criteria. In qualitative studies, health system barriers were most commonly discussed in studies of patients and health care providers. Quantitative studies identified disagreement with clinical recommendations as the most common barrier among health care providers. Quantitative studies of patients yielded different results: lack of knowledge was the most common barrier to hypertension awareness. Stress, anxiety and depression were most commonly reported as barriers that hindered or delayed adoption of a healthier lifestyle. In terms of hypertension treatment adherence, patients mostly reported forgetting to take their medication. Finally, priority setting barriers were most commonly reported by patients in terms of following up with their health care providers. CONCLUSIONS: This review identified a wide range of barriers facing patients and health care providers pursuing hypertension control, indicating the need for targeted multi

  9. Logistic quantile regression provides improved estimates for bounded avian counts: A case study of California Spotted Owl fledgling production

    Science.gov (United States)

    Cade, Brian S.; Noon, Barry R.; Scherer, Rick D.; Keane, John J.

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical conditional distribution of a bounded discrete random variable. The logistic quantile regression model requires that counts are randomly jittered to a continuous random variable, logit transformed to bound them between specified lower and upper values, then estimated in conventional linear quantile regression, repeating the 3 steps and averaging estimates. Back-transformation to the original discrete scale relies on the fact that quantiles are equivariant to monotonic transformations. We demonstrate this statistical procedure by modeling 20 years of California Spotted Owl fledgling production (0−3 per territory) on the Lassen National Forest, California, USA, as related to climate, demographic, and landscape habitat characteristics at territories. Spotted Owl fledgling counts increased nonlinearly with decreasing precipitation in the early nesting period, in the winter prior to nesting, and in the prior growing season; with increasing minimum temperatures in the early nesting period; with adult compared to subadult parents; when there was no fledgling production in the prior year; and when percentage of the landscape surrounding nesting sites (202 ha) with trees ≥25 m height increased. Changes in production were primarily driven by changes in the proportion of territories with 2 or 3 fledglings. Average variances of the discrete cumulative distributions of the estimated fledgling counts indicated that temporal changes in climate and parent age class explained 18% of the annual variance in owl fledgling production, which was 34% of the total variance. Prior fledgling production explained as much of
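
    A minimal sketch of the three-step procedure as described: jitter the bounded counts to a continuous variable, logit-transform between the bounds, fit a linear quantile regression, repeat and average, then back-transform by the inverse logit and undo the jitter with a floor. The simulated covariate and counts are illustrative; the bounds follow the 0-3 fledgling range.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 300
x = rng.normal(0, 1, n)                            # e.g. a climate covariate
counts = rng.binomial(3, 1 / (1 + np.exp(-0.8 * x)))  # fledglings in {0..3}

lower, upper = 0.0, 4.0          # bounds for counts jittered into [0, 4)
tau, n_jitter = 0.5, 100         # target quantile and jitter repeats

betas = []
for _ in range(n_jitter):
    y = counts + rng.uniform(0, 1, n)          # 1. jitter to continuous
    z = np.log((y - lower) / (upper - y))      # 2. logit-transform to bounds
    fit = sm.QuantReg(z, sm.add_constant(x)).fit(q=tau)  # 3. quantile regression
    betas.append(fit.params)
beta = np.mean(betas, axis=0)                  # average over jitter repeats

# Back-transform a predicted quantile to the count scale, then floor
# to undo the jitter.
z_hat = beta[0] + beta[1] * 1.0                # prediction at x = 1.0
y_hat = (lower + upper * np.exp(z_hat)) / (1 + np.exp(z_hat))
print(int(np.floor(y_hat)))
```

    The back-transformation to the count scale is justified because quantiles are equivariant to monotonic transformations, as the record notes.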

  10. The Impact of Quantitative Data Provided by a Multi-spectral Digital Skin Lesion Analysis Device on Dermatologists'Decisions to Biopsy Pigmented Lesions.

    Science.gov (United States)

    Farberg, Aaron S; Winkelmann, Richard R; Tucker, Natalie; White, Richard; Rigel, Darrell S

    2017-09-01

    BACKGROUND: Early diagnosis of melanoma is critical to survival. New technologies, such as a multi-spectral digital skin lesion analysis (MSDSLA) device [MelaFind, STRATA Skin Sciences, Horsham, Pennsylvania], may be useful to enhance clinician evaluation of concerning pigmented skin lesions. Previous studies evaluated the effect of only the binary output. OBJECTIVE: The objective of this study was to determine how the decisions dermatologists make regarding pigmented lesion biopsies are impacted by providing both the underlying classifier score (CS) and the associated probability risk provided by multi-spectral digital skin lesion analysis. This outcome was also compared against the improvement reported with the provision of only the binary output. METHODS: Dermatologists attending an educational conference evaluated 50 pigmented lesions (25 melanomas and 25 benign lesions). Participants were asked if they would biopsy the lesion based on clinical images, and were asked this question again after being shown multi-spectral digital skin lesion analysis data that included the probability graphs and classifier score. RESULTS: Data were analyzed from a total of 160 United States board-certified dermatologists. Biopsy sensitivity for melanoma improved from 76 percent following clinical evaluation to 92 percent after the quantitative multi-spectral digital skin lesion analysis information was provided. Negative predictive value also increased (68% vs. 91%), as did specificity following multi-spectral digital skin lesion analysis (64% vs. 86%). Incorporating the quantitative data into physician evaluation of pigmented lesions led to both increased sensitivity and specificity, thereby resulting in more accurate biopsy decisions.

  11. PEPIS: A Pipeline for Estimating Epistatic Effects in Quantitative Trait Locus Mapping and Genome-Wide Association Studies.

    Directory of Open Access Journals (Sweden)

    Wenchao Zhang

    2016-05-01

    Full Text Available The term epistasis refers to interactions between multiple genetic loci. Genetic epistasis is important in regulating biological function and is considered to explain part of the 'missing heritability,' which involves marginal genetic effects that cannot be accounted for in genome-wide association studies. Thus, the study of epistasis is of great interest to geneticists. However, estimating epistatic effects for quantitative traits is challenging due to the large number of interaction effects that must be estimated, thus significantly increasing computing demands. Here, we present a new web server-based tool, the Pipeline for estimating EPIStatic genetic effects (PEPIS, for analyzing polygenic epistatic effects. The PEPIS software package is based on a new linear mixed model that has been used to predict the performance of hybrid rice. The PEPIS includes two main sub-pipelines: the first for kinship matrix calculation, and the second for polygenic component analyses and genome scanning for main and epistatic effects. To accommodate the demand for high-performance computation, the PEPIS utilizes C/C++ for mathematical matrix computing. In addition, the modules for kinship matrix calculations and main and epistatic-effect genome scanning employ parallel computing technology that effectively utilizes multiple computer nodes across our networked cluster, thus significantly improving the computational speed. For example, when analyzing the same immortalized F2 rice population genotypic data examined in a previous study, the PEPIS returned identical results at each analysis step with the original prototype R code, but the computational time was reduced from more than one month to about five minutes. These advances will help overcome the bottleneck frequently encountered in genome wide epistatic genetic effect analysis and enable accommodation of the high computational demand. The PEPIS is publically available at http://bioinfo.noble.org/PolyGenic_QTL/.
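
    The kinship-matrix step can be sketched for the additive component: standardize a genotype matrix by allele frequency and form ZZ'/m. The 0/1/2 coding, the monomorphic-marker filter, and the simulated genotypes below are generic illustrations, not PEPIS's exact kernels; epistatic kinship matrices are commonly built from element-wise products of such additive matrices.

```python
import numpy as np

def additive_kinship(genotypes):
    """Additive kinship from an (individuals x markers) 0/1/2 genotype
    matrix: column-standardize by allele frequency, then form ZZ'/m."""
    G = np.asarray(genotypes, dtype=float)
    p = G.mean(axis=0) / 2.0                 # per-marker allele frequency
    keep = (p > 0.0) & (p < 1.0)             # drop monomorphic markers
    G, p = G[:, keep], p[keep]
    Z = (G - 2.0 * p) / np.sqrt(2.0 * p * (1.0 - p))
    return Z @ Z.T / G.shape[1]

rng = np.random.default_rng(4)
G = rng.integers(0, 3, size=(8, 100))        # 8 individuals, 100 markers
K = additive_kinship(G)
print(K.shape)
```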

  12. PEPIS: A Pipeline for Estimating Epistatic Effects in Quantitative Trait Locus Mapping and Genome-Wide Association Studies.

    Science.gov (United States)

    Zhang, Wenchao; Dai, Xinbin; Wang, Qishan; Xu, Shizhong; Zhao, Patrick X

    2016-05-01

    The term epistasis refers to interactions between multiple genetic loci. Genetic epistasis is important in regulating biological function and is considered to explain part of the 'missing heritability,' which involves marginal genetic effects that cannot be accounted for in genome-wide association studies. Thus, the study of epistasis is of great interest to geneticists. However, estimating epistatic effects for quantitative traits is challenging due to the large number of interaction effects that must be estimated, thus significantly increasing computing demands. Here, we present a new web server-based tool, the Pipeline for estimating EPIStatic genetic effects (PEPIS), for analyzing polygenic epistatic effects. The PEPIS software package is based on a new linear mixed model that has been used to predict the performance of hybrid rice. The PEPIS includes two main sub-pipelines: the first for kinship matrix calculation, and the second for polygenic component analyses and genome scanning for main and epistatic effects. To accommodate the demand for high-performance computation, the PEPIS utilizes C/C++ for mathematical matrix computing. In addition, the modules for kinship matrix calculations and main and epistatic-effect genome scanning employ parallel computing technology that effectively utilizes multiple computer nodes across our networked cluster, thus significantly improving the computational speed. For example, when analyzing the same immortalized F2 rice population genotypic data examined in a previous study, the PEPIS returned identical results at each analysis step with the original prototype R code, but the computational time was reduced from more than one month to about five minutes. These advances will help overcome the bottleneck frequently encountered in genome wide epistatic genetic effect analysis and enable accommodation of the high computational demand. The PEPIS is publically available at http://bioinfo.noble.org/PolyGenic_QTL/.

  13. Evaluation of species richness estimators based on quantitative performance measures and sensitivity to patchiness and sample grain size

    Science.gov (United States)

    Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc

    2012-11-01

    Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. We suggest that communities should first be sampled thoroughly using appropriate taxon sampling
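
    Two of the estimators evaluated have closed forms that are easy to state: bias-corrected Chao1 from singleton and doubleton counts, and first-order jackknife (Jack1) from the number of species occurring in exactly one sample. The toy abundance and incidence data below are illustrative.

```python
import numpy as np

def chao1(abundances):
    """Bias-corrected Chao1: S_obs + F1*(F1-1) / (2*(F2+1)), with F1 the
    number of singleton species and F2 the number of doubletons."""
    a = np.asarray(abundances)
    s_obs = int(np.sum(a > 0))
    f1, f2 = int(np.sum(a == 1)), int(np.sum(a == 2))
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

def jack1(incidence):
    """First-order jackknife: S_obs + Q1*(m-1)/m, with Q1 the number of
    species occurring in exactly one of m samples."""
    inc = np.asarray(incidence)              # samples x species, 0/1
    m = inc.shape[0]
    per_species = inc.sum(axis=0)
    s_obs = int(np.sum(per_species > 0))
    q1 = int(np.sum(per_species == 1))
    return s_obs + q1 * (m - 1) / m

print(chao1([5, 1, 1, 2, 8, 1]))             # abundances per species
print(jack1([[1, 0, 1, 0], [1, 0, 0, 0], [1, 1, 0, 0]]))
```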

  14. Size-specific dose estimate (SSDE) provides a simple method to calculate organ dose for pediatric CT examinations

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Bria M.; Brady, Samuel L., E-mail: samuel.brady@stjude.org; Kaufman, Robert A. [Department of Radiological Sciences, St Jude Children' s Research Hospital, Memphis, Tennessee 38105 (United States); Mirro, Amy E. [Department of Biomedical Engineering, Washington University, St Louis, Missouri 63130 (United States)

    2014-07-15

    previously published pediatric patient doses that accounted for patient size in their dose calculation, and was found to agree in the chest to better than an average of 5% (27.6/26.2) and in the abdominopelvic region to better than 2% (73.4/75.0). Conclusions: For organs fully covered within the scan volume, the average correlation of SSDE and organ absolute dose was found to be better than ±10%. In addition, this study provides a complete list of organ dose correlation factors (CF_SSDE^organ) for the chest and abdominopelvic regions, and describes a simple methodology to estimate individual pediatric patient organ dose based on patient SSDE.
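
    The methodology reduces to scaling SSDE by an organ-specific correlation factor. The sketch below shows the arithmetic only; the factor value is a placeholder, not one of the study's published CF_SSDE^organ coefficients.

```python
def organ_dose(ssde_mgy, cf_organ):
    """Organ dose estimated as SSDE times an organ-specific correlation
    factor (CF_SSDE^organ); the factor here is illustrative only."""
    return ssde_mgy * cf_organ

print(organ_dose(10.0, 1.05))   # e.g. 10 mGy SSDE -> 10.5 mGy organ dose
```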

  15. Digital photography provides a fast, reliable, and noninvasive method to estimate anthocyanin pigment concentration in reproductive and vegetative plant tissues.

    Science.gov (United States)

    Del Valle, José C; Gallardo-López, Antonio; Buide, Mª Luisa; Whittall, Justen B; Narbona, Eduardo

    2018-03-01

    Anthocyanin pigments have become a model trait for evolutionary ecology as they often provide adaptive benefits for plants. Anthocyanins have been traditionally quantified biochemically or more recently using spectral reflectance. However, both methods require destructive sampling and can be labor intensive and challenging with small samples. Recent advances in digital photography and image processing make it the method of choice for measuring color in the wild. Here, we use digital images as a quick, noninvasive method to estimate relative anthocyanin concentrations in species exhibiting color variation. Using a consumer-level digital camera and a free image processing toolbox, we extracted RGB values from digital images to generate color indices. We tested petals, stems, pedicels, and calyces of six species, which contain different types of anthocyanin pigments and exhibit different pigmentation patterns. Color indices were assessed by their correlation to biochemically determined anthocyanin concentrations. For comparison, we also calculated color indices from spectral reflectance and tested the correlation with anthocyanin concentration. Indices perform differently depending on the nature of the color variation. For both digital images and spectral reflectance, the most accurate estimates of anthocyanin concentration emerge from anthocyanin content-chroma ratio, anthocyanin content-chroma basic, and strength of green indices. Color indices derived from both digital images and spectral reflectance strongly correlate with biochemically determined anthocyanin concentration; however, the estimates from digital images performed better than spectral reflectance in terms of r² and normalized root-mean-square error. This was particularly noticeable in a species with striped petals, but in the case of striped calyces, both methods showed a comparable relationship with anthocyanin concentration. Using digital images brings new opportunities to accurately quantify the
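
    The general recipe, extracting mean RGB values from an image region and forming a ratio-based color index, can be sketched as follows. The redness index shown is a generic example of the idea; it is not one of the study's named indices, whose exact formulas are not given in this record.

```python
import numpy as np

def mean_rgb(image_region):
    """Mean R, G, B of a region sampled from a digital image (H x W x 3)."""
    region = np.asarray(image_region, dtype=float)
    return region.reshape(-1, 3).mean(axis=0)

def redness_index(rgb):
    """A simple relative-redness ratio, R / (R + G + B); the study's named
    indices (e.g. 'strength of green') follow the same ratio-based idea."""
    r, g, b = rgb
    return r / (r + g + b)

# Illustrative 2x2 patch of an anthocyanin-rich petal (8-bit RGB values).
patch = [[[180, 40, 60], [170, 50, 70]],
         [[185, 45, 65], [175, 42, 68]]]
print(round(redness_index(mean_rgb(patch)), 2))
```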

  16. Technique for Determination of Rational Boundaries in Combining Construction and Installation Processes Based on Quantitative Estimation of Technological Connections

    Science.gov (United States)

    Gusev, E. V.; Mukhametzyanov, Z. R.; Razyapov, R. V.

    2017-11-01

    The shortcomings of existing methods for determining which technologically interlinked construction processes and activities can be combined are considered under the modern conditions of constructing various facilities. The need to identify common parameters that characterize the nature of the interaction among all technologically related construction and installation processes and activities is shown. The technologies of construction and installation processes for buildings and structures were studied with the goal of determining a common parameter for evaluating the relationship between technologically interconnected processes and construction works. The result of this research is a quantitative evaluation of the interaction of construction and installation processes and activities, expressed as the minimum technologically necessary volume of the preceding process that allows one to plan and organize the execution of a subsequent, technologically interconnected process. This quantitative evaluation serves as the basis for calculating the optimum range over which processes and activities can be combined. The calculation method is based on graph theory. The authors applied a generic characterization parameter to reveal the technological links between construction and installation processes, and the proposed technique has adaptive properties that are key to its wide use in forming organizational decisions. The article also describes the practical significance of the developed technique.

  17. Seeing the Forest through the Trees: Citizen Scientists Provide Critical Data to Refine Aboveground Carbon Estimates in Restored Riparian Forests

    Science.gov (United States)

    Viers, J. H.

    2013-12-01

    Integrating citizen scientists into ecological informatics research can be difficult due to limited opportunities for meaningful engagement given vast data streams. This is particularly true for analysis of remotely sensed data, which are increasingly being used to quantify ecosystem services over space and time, and to understand how land uses deliver differing values to humans and thus inform choices about future human actions. Carbon storage and sequestration are such ecosystem services, and recent environmental policy advances in California (i.e., AB 32) have resulted in a nascent carbon market that is helping fuel the restoration of riparian forests in agricultural landscapes. Methods to inventory and monitor aboveground carbon for market accounting are increasingly relying on hyperspatial remotely sensed data, particularly the use of light detection and ranging (LiDAR) technologies, to estimate biomass. Because airborne discrete return LiDAR can inexpensively capture vegetation structural differences at high spatial resolution ( 1000 ha), its use is rapidly increasing, resulting in vast stores of point cloud and derived surface raster data. While established algorithms can quantify forest canopy structure efficiently, the highly complex nature of native riparian forests can result in highly uncertain estimates of biomass due to differences in composition (e.g., species richness, age class) and structure (e.g., stem density). This study presents the comparative results of standing carbon estimates refined with field data collected by citizen scientists at three different sites, each capturing a range of agricultural, remnant forest, and restored forest cover types. These citizen science data resolve uncertainty in composition and structure, and improve allometric scaling models of biomass and thus estimates of aboveground carbon. Results indicate that agricultural land and horticulturally restored riparian forests store similar amounts of aboveground carbon
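
    The allometric refinement described above amounts to refitting a power-law model, biomass = a * H^b, against field plots and applying it to LiDAR canopy heights. A minimal sketch follows, with entirely illustrative heights, biomasses, and carbon fraction.

```python
import numpy as np

# Power-law allometry refit from citizen-science field plots (all numbers
# below are illustrative, not the study's data).
lidar_height_m = np.array([4.2, 7.9, 11.5, 15.3])       # canopy heights from LiDAR
field_biomass_kg = np.array([55., 310., 820., 1900.])   # plot measurements

# Fit in log space: ln(biomass) = ln(a) + b * ln(H)
b, ln_a = np.polyfit(np.log(lidar_height_m), np.log(field_biomass_kg), 1)
a = np.exp(ln_a)

def aboveground_carbon(height_m, carbon_fraction=0.47):
    """Carbon stock (kg) from canopy height via the refit allometry."""
    return carbon_fraction * a * height_m ** b

print(aboveground_carbon(10.0))   # carbon estimate for a 10 m canopy cell
```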

  18. Estimating true human and animal host source contribution in quantitative microbial source tracking using the Monte Carlo method.

    Science.gov (United States)

    Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan

    2010-09-01

    Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genome. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It could perform reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvement on the precision of sample processing and q
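
    The correction has the flavor of the following Monte Carlo sketch: sample assay-performance probabilities and measurement error, then back out the true marker concentration. The Beta and normal parameters below are illustrative stand-ins, and the single-equation correction is a simplification of the paper's full set of Law of Total Probability equations.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

# Measured marker concentration (log10 copies per 100 mL) and the replicate
# qPCR precision error (standard deviation); both values are illustrative.
log10_measured, sd_precision = 4.0, 0.1

# Distributions of assay performance estimated from fecal samples of known
# origin. Beta/normal parameters are placeholders, not the study's fits.
p_true_positive = rng.beta(45, 5, N)         # P(detect | marker present)
log10_false_pos = rng.normal(1.5, 0.3, N)    # background from false positives

# Monte Carlo: sample the measurement error, remove the expected
# false-positive contribution, and rescale by the true-positive probability.
measured = 10 ** rng.normal(log10_measured, sd_precision, N)
true_conc = np.clip(measured - 10 ** log10_false_pos, 0, None) / p_true_positive

lo, mid, hi = np.percentile(true_conc, [2.5, 50, 97.5])
print(f"true concentration: median {mid:.3g}, 95% CI [{lo:.3g}, {hi:.3g}]")
```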

  19. Improved accuracy of quantitative parameter estimates in dynamic contrast-enhanced CT study with low temporal resolution

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sun Mo, E-mail: Sunmo.Kim@rmp.uhn.on.ca [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9 (Canada); Haider, Masoom A. [Department of Medical Imaging, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5, Canada and Department of Medical Imaging, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Jaffray, David A. [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9, Canada and Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Yeung, Ivan W. T. [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9 (Canada); Department of Medical Physics, Stronach Regional Cancer Centre, Southlake Regional Health Centre, Newmarket, Ontario L3Y 2P9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada)

    2016-01-15

    quantitative histogram parameters of volume transfer constant [standard deviation (SD), 98th percentile, and range], rate constant (SD), blood volume fraction (mean, SD, 98th percentile, and range), and blood flow (mean, SD, median, 98th percentile, and range) for sampling intervals between 10 and 15 s. Conclusions: The proposed method of PCA filtering combined with the AIF estimation technique allows low frequency scanning for DCE-CT study to reduce patient radiation dose. The results indicate that the method is useful in pixel-by-pixel kinetic analysis of DCE-CT data for patients with cervical cancer.
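
    Although the abstract is truncated, the PCA filtering it refers to can be sketched as a truncated singular-value reconstruction of the pixel time-attenuation curves. The component count, toy curves, and noise level below are illustrative.

```python
import numpy as np

def pca_filter(curves, n_components=3):
    """Denoise DCE-CT time-attenuation curves (rows: pixels, cols: time points)
    by reconstructing them from the leading principal components only."""
    mean = curves.mean(axis=0)
    u, s, vt = np.linalg.svd(curves - mean, full_matrices=False)
    s[n_components:] = 0.0                 # zero out the noise-dominated modes
    return u @ np.diag(s) @ vt + mean

# Toy data: 500 pixel curves sampled every 10 s for 3 min, plus noise
rng = np.random.default_rng(0)
t = np.arange(0, 180, 10.0)
clean = np.outer(rng.random(500), 1.0 - np.exp(-t / 40.0))
noisy = clean + rng.normal(0.0, 0.05, clean.shape)
filtered = pca_filter(noisy)
print(np.abs(filtered - clean).mean() < np.abs(noisy - clean).mean())  # True
```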

  20. Towards a quantitative, measurement-based estimate of the uncertainty in photon mass attenuation coefficients at radiation therapy energies

    Science.gov (United States)

    Ali, E. S. M.; Spencer, B.; McEwen, M. R.; Rogers, D. W. O.

    2015-02-01

    In this study, a quantitative estimate is derived for the uncertainty in the XCOM photon mass attenuation coefficients in the energy range of interest to external beam radiation therapy—i.e. 100 keV (orthovoltage) to 25 MeV—using direct comparisons of experimental data against Monte Carlo models and theoretical XCOM data. Two independent datasets are used. The first dataset is from our recent transmission measurements and the corresponding EGSnrc calculations (Ali et al 2012 Med. Phys. 39 5990-6003) for 10-30 MV photon beams from the research linac at the National Research Council Canada. The attenuators are graphite and lead, with a total of 140 data points and an experimental uncertainty of ~0.5% (k = 1). An optimum energy-independent cross section scaling factor that minimizes the discrepancies between measurements and calculations is used to deduce cross section uncertainty. The second dataset is from the aggregate of cross section measurements in the literature for graphite and lead (49 experiments, 288 data points). The dataset is compared to the sum of the XCOM data plus the IAEA photonuclear data. Again, an optimum energy-independent cross section scaling factor is used to deduce the cross section uncertainty. Using the average result from the two datasets, the energy-independent cross section uncertainty estimate is 0.5% (68% confidence) and 0.7% (95% confidence). The potential for energy-dependent errors is discussed. Photon cross section uncertainty is shown to be smaller than the current qualitative ‘envelope of uncertainty’ of the order of 1-2%, as given by Hubbell (1999 Phys. Med. Biol. 44 R1-22).
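
    The optimum energy-independent scaling factor described here has a closed-form least-squares solution. A minimal sketch with illustrative attenuation coefficients (not the paper's measured values):

```python
import numpy as np

def optimal_scaling_factor(mu_theory, mu_measured):
    """Energy-independent factor k minimizing sum((k*mu_theory - mu_measured)^2),
    used to deduce the cross-section uncertainty. Closed-form least squares."""
    return np.dot(mu_theory, mu_measured) / np.dot(mu_theory, mu_theory)

# Illustrative mass attenuation coefficients (cm^2/g) at a few energies
mu_xcom = np.array([0.0636, 0.0578, 0.0495, 0.0458])
mu_meas = np.array([0.0641, 0.0581, 0.0499, 0.0460])
k = optimal_scaling_factor(mu_xcom, mu_meas)
print(f"scaling factor {k:.4f} -> cross-section shift {abs(k - 1) * 100:.2f}%")
```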

  1. Quantitative microbial risk assessment to estimate the health risk from exposure to noroviruses in polluted surface water in South Africa.

    Science.gov (United States)

    Van Abel, Nicole; Mans, Janet; Taylor, Maureen B

    2017-10-01

    This study assessed the risks posed by noroviruses (NoVs) in surface water used for drinking, domestic, and recreational purposes in South Africa (SA), using a quantitative microbial risk assessment (QMRA) methodology that took a probabilistic approach coupling an exposure assessment with four dose-response models to account for uncertainty. Water samples from three rivers were found to be contaminated with NoV GI (80-1,900 gc/L) and GII (420-9,760 gc/L), leading to risk estimates that were lower for GI than GII. The volume of water consumed and the probabilities of infection were lower for domestic (2.91 × 10^-8 to 5.19 × 10^-1) than drinking water exposures (1.04 × 10^-5 to 7.24 × 10^-1). The annual probabilities of illness varied depending on the type of recreational water exposure, with boating (3.91 × 10^-6 to 5.43 × 10^-1) and swimming (6.20 × 10^-6 to 6.42 × 10^-1) being slightly greater than playing next to/in the river (5.30 × 10^-7 to 5.48 × 10^-1). The QMRA was sensitive to the choice of dose-response model. The risk of NoV infection or illness from contaminated surface water is extremely high in SA, especially for lower socioeconomic individuals, but is similar to reported risks from limited international studies.
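
    The exposure-to-risk chain in a QMRA of this kind can be sketched in a few lines: dose = concentration times ingested volume, a dose-response model gives the per-event infection probability, and independent events combine into an annual risk. The beta-Poisson parameters, ingestion volume, and event count below are placeholders, not the study's fitted values.

```python
def p_infection_beta_poisson(dose, alpha=0.04, beta=0.055):
    """Approximate beta-Poisson dose-response; parameter values are
    placeholders, not the study's fitted NoV parameters."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

def annual_risk(per_event_risk, events_per_year):
    """Combine independent exposure events into an annual probability."""
    return 1.0 - (1.0 - per_event_risk) ** events_per_year

conc_gc_per_l = 1900.0            # NoV GI concentration in river water
volume_l = 0.05                   # accidental ingestion while swimming (assumed)
dose = conc_gc_per_l * volume_l
p_inf = p_infection_beta_poisson(dose)
print(f"per-event risk {p_inf:.3f}, annual risk {annual_risk(p_inf, 20):.3f}")
```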

  2. An operational weather radar-based Quantitative Precipitation Estimation and its application in catchment water resources modeling

    DEFF Research Database (Denmark)

    He, Xin; Vejen, Flemming; Stisen, Simon

    2011-01-01

    of precipitation compared with rain-gauge-based methods, thus providing the basis for better water resources assessments. The radar QPE algorithm called ARNE is a distance-dependent areal estimation method that merges radar data with ground surface observations. The method was applied to the Skjern River catchment...... in western Denmark where alternative precipitation estimates were also used as input to an integrated hydrologic model. The hydrologic responses from the model were analyzed by comparing radar- and ground-based precipitation input scenarios. Results showed that radar QPE products are able to generate...... reliable simulations of stream flow and water balance. The potential of using radar-based precipitation was found to be especially high at a smaller scale, where the impact of spatial resolution was evident from the stream discharge results. Also, groundwater recharge was shown to be sensitive...

  3. Quantitative fluorescence kinetic analysis of NADH and FAD in human plasma using three- and four-way calibration methods capable of providing the second-order advantage

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Chao [School of Chemistry and Chemical Engineering, Guizhou University, Guiyang 550025 (China); Wu, Hai-Long, E-mail: hlwu@hnu.edu.cn [State Key Laboratory of Chemo/Biosensing and Chemometrics, College of Chemistry and Chemical Engineering, Hunan University, Changsha 410082 (China); Zhou, Chang; Xiang, Shou-Xia; Zhang, Xiao-Hua; Yu, Yong-Jie; Yu, Ru-Qin [State Key Laboratory of Chemo/Biosensing and Chemometrics, College of Chemistry and Chemical Engineering, Hunan University, Changsha 410082 (China)

    2016-03-03

    The metabolic coenzymes reduced nicotinamide adenine dinucleotide (NADH) and flavin adenine dinucleotide (FAD) are the primary electron donor and acceptor, respectively, and participate in almost all biological metabolic pathways. This study develops a novel method for the quantitative kinetic analysis of the degradation reaction of NADH and the formation reaction of FAD in human plasma containing an uncalibrated interferent, by using three-way calibration based on multi-way fluorescence technique. In the three-way analysis, by using the calibration set in a static manner, we directly predicted the concentrations of both analytes in the mixture at any time after the start of their reactions, even in the presence of an uncalibrated spectral interferent and a varying background interferent. The satisfactory quantitative results indicate that the proposed method allows one to directly monitor the concentration of each analyte in the mixture as a function of time in real-time and nondestructively, instead of determining the concentration after the analytical separation. Thereafter, we fitted the first-order rate law to their concentration data throughout their reactions. Additionally, a four-way calibration procedure is developed as an alternative for highly collinear systems. The results of the four-way analysis confirmed the results of the three-way analysis and revealed that both the degradation reaction of NADH and the formation reaction of FAD in human plasma fit the first-order rate law. The proposed methods could be expected to provide promising tools for simultaneous kinetic analysis of multiple reactions in complex systems in real-time and nondestructively. - Highlights: • A novel three-way calibration method for the quantitative kinetic analysis of NADH and FAD in human plasma is proposed. • The method can directly monitor the concentration of each analyte in the reaction in real-time and nondestructively. • The method has the second-order advantage. • A

  4. Quantitative fluorescence kinetic analysis of NADH and FAD in human plasma using three- and four-way calibration methods capable of providing the second-order advantage

    International Nuclear Information System (INIS)

    Kang, Chao; Wu, Hai-Long; Zhou, Chang; Xiang, Shou-Xia; Zhang, Xiao-Hua; Yu, Yong-Jie; Yu, Ru-Qin

    2016-01-01

    The metabolic coenzymes reduced nicotinamide adenine dinucleotide (NADH) and flavin adenine dinucleotide (FAD) are the primary electron donor and acceptor, respectively, and participate in almost all biological metabolic pathways. This study develops a novel method for the quantitative kinetic analysis of the degradation reaction of NADH and the formation reaction of FAD in human plasma containing an uncalibrated interferent, by using three-way calibration based on multi-way fluorescence technique. In the three-way analysis, by using the calibration set in a static manner, we directly predicted the concentrations of both analytes in the mixture at any time after the start of their reactions, even in the presence of an uncalibrated spectral interferent and a varying background interferent. The satisfactory quantitative results indicate that the proposed method allows one to directly monitor the concentration of each analyte in the mixture as a function of time in real-time and nondestructively, instead of determining the concentration after the analytical separation. Thereafter, we fitted the first-order rate law to their concentration data throughout their reactions. Additionally, a four-way calibration procedure is developed as an alternative for highly collinear systems. The results of the four-way analysis confirmed the results of the three-way analysis and revealed that both the degradation reaction of NADH and the formation reaction of FAD in human plasma fit the first-order rate law. The proposed methods could be expected to provide promising tools for simultaneous kinetic analysis of multiple reactions in complex systems in real-time and nondestructively. - Highlights: • A novel three-way calibration method for the quantitative kinetic analysis of NADH and FAD in human plasma is proposed. • The method can directly monitor the concentration of each analyte in the reaction in real-time and nondestructively. • The method has the second-order advantage. • A
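
    Fitting the first-order rate law mentioned in both records is a linear regression on log-concentration, ln C = ln C0 - k*t. A minimal sketch with illustrative concentration data (not the study's measurements):

```python
import numpy as np

# Concentrations predicted by the multi-way calibration at each time point
# (values are illustrative). The first-order rate law, ln C = ln C0 - k*t,
# is fit by linear regression on the log-concentrations.
t_min = np.array([0., 10., 20., 30., 40., 50.])
nadh_uM = np.array([50.0, 38.1, 29.4, 22.3, 17.2, 13.1])

slope, ln_c0 = np.polyfit(t_min, np.log(nadh_uM), 1)
k = -slope                          # first-order rate constant
half_life = np.log(2) / k
print(f"k = {k:.4f} min^-1, t1/2 = {half_life:.1f} min, C0 = {np.exp(ln_c0):.1f} uM")
```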

  5. Quantitative investigation of the edge enhancement in in-line phase contrast projections and tomosynthesis provided by distributing microbubbles on the interface between two tissues: a phantom study

    Science.gov (United States)

    Wu, Di; Donovan Wong, Molly; Li, Yuhua; Fajardo, Laurie; Zheng, Bin; Wu, Xizeng; Liu, Hong

    2017-12-01

    The objective of this study was to quantitatively investigate the ability to distribute microbubbles along the interface between two tissues, in an effort to improve the edge and/or boundary features in phase contrast imaging. The experiments were conducted by employing a custom designed tissue simulating phantom, which also simulated a clinical condition where the ligand-targeted microbubbles are self-aggregated on the endothelium of blood vessels surrounding malignant cells. Four different concentrations of microbubble suspensions were injected into the phantom: 0%, 0.1%, 0.2%, and 0.4%. A time delay of 5 min was implemented before image acquisition to allow the microbubbles to become distributed at the interface between the acrylic and the cavity simulating a blood vessel segment. For comparison purposes, images were acquired using three system configurations for both projection and tomosynthesis imaging with a fixed radiation dose delivery: conventional low-energy contact mode, low-energy in-line phase contrast and high-energy in-line phase contrast. The resultant images illustrate the edge feature enhancements in the in-line phase contrast imaging mode when the microbubble concentration is extremely low. The quantitative edge-enhancement-to-noise ratio calculations not only agree with the direct image observations, but also indicate that the edge feature enhancement can be improved by increasing the microbubble concentration. In addition, high-energy in-line phase contrast imaging provided better performance in detecting low-concentration microbubble distributions.

  6. Rapid non-destructive quantitative estimation of urania/ thoria in mixed thorium uranium di-oxide pellets by high-resolution gamma-ray spectrometry

    International Nuclear Information System (INIS)

    Shriwastwa, B.B.; Kumar, Anil; Raghunath, B.; Nair, M.R.; Abani, M.C.; Ramachandran, R.; Majumdar, S.; Ghosh, J.K.

    2001-01-01

    A non-destructive technique using high-resolution gamma-ray spectrometry has been standardised for quantitative estimation of uranium/thorium in mixed (ThO2-UO2) fuel pellets of varying composition. Four gamma energies were selected, two each from the uranium and thorium series, and the counting time was optimised. This technique can be used for rapid estimation of the U/Th percentage in a large number of mixed fuel pellets from a production campaign.

  7. A decision tree model to estimate the value of information provided by a groundwater quality monitoring network

    Science.gov (United States)

    Khader, A. I.; Rosenberg, D. E.; McKee, M.

    2013-05-01

    Groundwater contaminated with nitrate poses a serious health risk to infants when this contaminated water is used for culinary purposes. To avoid this health risk, people need to know whether their culinary water is contaminated or not. Therefore, there is a need to design an effective groundwater monitoring network, acquire information on groundwater conditions, and use acquired information to inform management options. These actions require time, money, and effort. This paper presents a method to estimate the value of information (VOI) provided by a groundwater quality monitoring network located in an aquifer whose water poses a spatially heterogeneous and uncertain health risk. A decision tree model describes the structure of the decision alternatives facing the decision-maker and the expected outcomes from these alternatives. The alternatives include (i) ignore the health risk of nitrate-contaminated water, (ii) switch to alternative water sources such as bottled water, or (iii) implement a previously designed groundwater quality monitoring network that takes into account uncertainties in aquifer properties, contaminant transport processes, and climate (Khader, 2012). The VOI is estimated as the difference between the expected costs of implementing the monitoring network and the lowest-cost uninformed alternative. We illustrate the method for the Eocene Aquifer, West Bank, Palestine, where methemoglobinemia (blue baby syndrome) is the main health problem associated with the principal contaminant nitrate. The expected cost of each alternative is estimated as the weighted sum of the costs and probabilities (likelihoods) associated with the uncertain outcomes resulting from the alternative. Uncertain outcomes include actual nitrate concentrations in the aquifer, concentrations reported by the monitoring system, whether people abide by manager recommendations to use/not use aquifer water, and whether people get sick from drinking contaminated water. Outcome costs
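
    The decision-tree comparison underlying the VOI estimate can be sketched as an expected-cost calculation over the three alternatives. All probabilities and costs below are hypothetical placeholders, and the tree is far shallower than the one in the study.

```python
# Hypothetical inputs (not the study's values)
P_CONTAMINATED = 0.3      # prior probability that nitrate exceeds the limit
COST_ILLNESS = 50_000     # expected cost if contaminated water is consumed
COST_BOTTLED = 8_000      # cost of switching to an alternative water source
COST_NETWORK = 2_000      # cost of operating the monitoring network
P_DETECT = 0.9            # probability the network reports true contamination

# Uninformed alternatives: ignore the risk, or always switch sources.
cost_ignore = P_CONTAMINATED * COST_ILLNESS
cost_switch = COST_BOTTLED

# Informed alternative: monitor, and switch only when contamination is reported.
cost_monitor = (COST_NETWORK
                + P_CONTAMINATED * P_DETECT * COST_BOTTLED
                + P_CONTAMINATED * (1 - P_DETECT) * COST_ILLNESS)

# VOI = expected cost of the best uninformed action minus that of monitoring.
voi = min(cost_ignore, cost_switch) - cost_monitor
print(f"value of information = {voi:,.0f}")
```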

  8. The health system burden of chronic disease care: an estimation of provider costs of selected chronic diseases in Uganda.

    Science.gov (United States)

    Settumba, Stella Nalukwago; Sweeney, Sedona; Seeley, Janet; Biraro, Samuel; Mutungi, Gerald; Munderi, Paula; Grosskurth, Heiner; Vassall, Anna

    2015-06-01

    To explore the chronic disease services in Uganda: their level of utilisation, the total service costs and unit costs per visit. Full financial and economic cost data were collected from 12 facilities in two districts, from the provider's perspective. A combination of ingredients-based and step-down allocation costing approaches was used. The diseases under study were diabetes, hypertension, chronic obstructive pulmonary disease (COPD), epilepsy and HIV infection. Data were collected through a review of facility records, direct observation and structured interviews with health workers. Provision of chronic care services was concentrated at higher-level facilities. Excluding drugs, the total costs for NCD care fell below 2% of total facility costs. Unit costs per visit varied widely, both across different levels of the health system, and between facilities of the same level. This variability was driven by differences in clinical and drug prescribing practices. Most patients reported directly to higher-level facilities, bypassing nearby peripheral facilities. NCD services in Uganda are underfunded particularly at peripheral facilities. There is a need to estimate the budget impact of improving NCD care and to standardise treatment guidelines. © 2015 The Authors. Tropical Medicine & International Health Published by John Wiley & Sons Ltd.

  9. Intra-rater reliability of motor unit number estimation and quantitative motor unit analysis in subjects with amyotrophic lateral sclerosis.

    Science.gov (United States)

    Ives, Colleen T; Doherty, Timothy J

    2014-01-01

    To assess the intra-rater reliability of decomposition-enhanced spike-triggered averaging (DE-STA) motor unit number estimation (MUNE) and quantitative motor unit potential analysis in the upper trapezius (UT) and biceps brachii (BB) of subjects with amyotrophic lateral sclerosis (ALS) and to compare the results from the UT to control data. Patients diagnosed with clinically probable or definite ALS completed the experimental protocol twice with the same evaluator for the UT (n=10) and BB (n=9). Intra-rater reliability for the UT was good for the maximum compound muscle action potential (CMAP) (ICC=0.88), mean surface-detected motor unit potential (S-MUP) (ICC=0.87) and MUNE (ICC=0.88), and for the BB was moderate for maximum CMAP (ICC=0.61), and excellent for mean S-MUP (ICC=0.94) and MUNE (ICC=0.93). A significant difference between tests was found for UT MUNE. Comparing subjects with ALS to control subjects, UT maximum CMAP (p<0.01) and MUNE (p<0.001) values were significantly lower, and mean S-MUP values significantly greater (p<0.05) in subjects with ALS. This study has demonstrated the ability of the DE-STA MUNE technique to collect highly reliable data from two separate muscle groups and to detect the underlying pathophysiology of the disease. This was the first study to examine the reliability of this technique in subjects with ALS, and demonstrates its potential for future use as an outcome measure in ALS clinical trials and studies of ALS disease severity and natural history. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
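
    The ICC values reported here are the standard two-way random-effects intraclass correlation for test-retest designs. A minimal sketch computing ICC(2,1) from the ANOVA mean squares follows, with hypothetical MUNE test-retest data.

```python
import numpy as np

def icc_2_1(data):
    """Two-way random-effects, absolute-agreement ICC(2,1) for an
    (n subjects x k sessions) matrix, from the ANOVA mean squares."""
    n, k = data.shape
    grand = data.mean()
    ms_rows = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
    ms_cols = n * np.sum((data.mean(axis=0) - grand) ** 2) / (k - 1)   # sessions
    resid = data - data.mean(axis=1, keepdims=True) - data.mean(axis=0) + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

# Hypothetical test-retest MUNE values for 8 subjects
mune = np.array([[112, 118], [74, 70], [95, 101], [60, 55],
                 [130, 126], [88, 92], [47, 50], [105, 99]], float)
print(f"ICC(2,1) = {icc_2_1(mune):.2f}")
```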

  10. Field data provide estimates of effective permeability, fracture spacing, well drainage area and incremental production in gas shales

    KAUST Repository

    Eftekhari, Behzad; Marder, M.; Patzek, Tadeusz

    2018-01-01

    the external unstimulated reservoir. This allows us to estimate for the first time the effective permeability of the unstimulated shale and the spacing of fractures in the stimulated region. From an analysis of wells in the Barnett shale, we find

  11. Quantitative volcanic susceptibility analysis of Lanzarote and Chinijo Islands based on kernel density estimation via a linear diffusion process

    Science.gov (United States)

    Galindo, I.; Romero, M. C.; Sánchez, N.; Morales, J. M.

    2016-06-01

    Risk management stakeholders in highly populated volcanic islands should be provided with the latest high-quality volcanic information. We present here the first volcanic susceptibility map of Lanzarote and Chinijo Islands and their submarine flanks, based on updated chronostratigraphical and volcano-structural data, as well as on the geomorphological analysis of the bathymetric data of the submarine flanks. The role of the structural elements in the volcanic susceptibility analysis has been reviewed: vents have been considered since they indicate where previous eruptions took place; eruptive fissures provide information about the stress field as they are the superficial expression of the dyke conduit; eroded dykes have been discarded since they are single non-feeder dykes intruded in deep parts of Miocene-Pliocene volcanic edifices; main faults have been taken into account only in those cases where they could modify the superficial movement of magma. Kernel density estimation via a linear diffusion process has been applied successfully to Lanzarote for the volcanic susceptibility assessment and could be applied to other fissure volcanic fields worldwide, since the results provide information not only about the probable area where an eruption could take place but also about the main direction of the probable volcanic fissures.
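
    As a simplified stand-in for the kernel density estimation described (the study uses a linear-diffusion kernel; a Gaussian kernel is shown here), susceptibility can be sketched as a normalized density surface over vent and fissure locations. Coordinates below are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Vent and fissure locations (UTM km; coordinates are illustrative).
vents = np.array([[618.2, 3210.5], [620.1, 3212.0], [619.4, 3211.2],
                  [625.7, 3208.8], [626.3, 3209.5], [621.0, 3213.4]]).T

kde = gaussian_kde(vents)   # Gaussian KDE over the structural elements

# Evaluate relative susceptibility on a regular grid covering the island
x, y = np.mgrid[615:630:100j, 3205:3218:100j]
susceptibility = kde(np.vstack([x.ravel(), y.ravel()])).reshape(x.shape)
susceptibility /= susceptibility.sum()      # normalize to a probability map
print(susceptibility.max())                 # most probable eruption cell
```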

  12. Effects on the estimated cause-specific mortality fraction of providing physician reviewers with different formats of verbal autopsy data

    Directory of Open Access Journals (Sweden)

    Chow Clara

    2011-08-01

    a cause of death did not substantively influence the pattern of mortality estimated. Substantially abbreviated and simplified verbal autopsy questionnaires might provide robust information about high-level mortality patterns.

  13. Quantitative Estimation of Above Ground Crop Biomass using Ground-based, Airborne and Spaceborne Low Frequency Polarimetric Synthetic Aperture Radar

    Science.gov (United States)

    Koyama, C.; Watanabe, M.; Shimada, M.

    2016-12-01

    Estimation of crop biomass is one of the important challenges in environmental remote sensing for agricultural, hydrological and meteorological applications. Usually, passive optical data (photographs, spectral data) operating in the visible and near-infrared bands are used for such purposes. The value of optical remote sensing for yield estimation, however, is rather limited, as visible light can only provide information about the chemical characteristics of the canopy surface. Low frequency microwave signals with wavelengths longer than 20 cm have the potential to penetrate through the canopy and provide information about the whole vertical structure of vegetation, from the top of the canopy down to the very soil surface. This phenomenon has long been known and exploited to detect targets under vegetation in the military radar application known as FOPEN (foliage penetration). With the availability of polarimetric interferometric SAR data, the use of PolInSAR techniques to retrieve vertical vegetation structure has become an attractive tool. However, PolInSAR is still highly experimental and suitable data are not yet widely available. In this study we focus on the use of operational dual-polarization L-band (1.27 GHz) SAR, which has been available worldwide since the launch of Japan's Advanced Land Observing Satellite (ALOS, 2006-2011). Since 2014, ALOS-2 has continued to deliver such partial polarimetric data for the entire land surface. In addition to these spaceborne datasets, we use airborne L-band SAR data acquired by the Japanese Pi-SAR-L2 as well as ultra-wideband (UWB) ground-based SAR data operating in the frequency range from 1-4 GHz. By exploiting the complex dual-polarization [C2] covariance matrix information, the scattering contributions from the canopy can be well separated from the ground reflections, allowing for the establishment of semi-empirical relationships between measured radar reflectivity and the amount of fresh-weight above

  14. Quantitative Estimation of Coastal Changes along Selected Locations of Karnataka, India: A GIS and Remote Sensing Approach

    Digital Repository Service at National Institute of Oceanography (India)

    Vinayaraj, P.; Johnson, G.; Dora, G.U.; Philip, C.S.; SanilKumar, V.; Gowthaman, R.

    Qualitative and quantitative studies of changes in the coastal geomorphology and shoreline of Karnataka, India, have been carried out using Survey of India toposheets and satellite imagery (IRS-P6 and IRS-1D). Changes during a 30-year period...

  15. Towards a Quantitative Use of Satellite Remote Sensing in Crop Growth Models for Large Scale Agricultural Production Estimate (Invited)

    Science.gov (United States)

    Defourny, P.

    2013-12-01

    such as the Green Area Index (GAI), fAPAR and fcover, usually retrieved from MODIS, MERIS and SPOT-Vegetation, described the quality of the green vegetation development. The GLOBAM (Belgium) and EU FP-7 MOCCCASIN (Russia) projects improved the standard products and were demonstrated at large scale. The GAI retrieved from MODIS time series using a purity index criterion successfully depicted the inter-annual variability. Furthermore, the quantitative assimilation of these GAI time series into a crop growth model improved the yield estimate over years. These results showed that GAI assimilation works best at the district or provincial level. In the context of the GEO Ag., the Joint Experiment of Crop Assessment and Monitoring (JECAM) was designed to enable the global agricultural monitoring community to compare such methods and results over a variety of regional cropping systems. For a network of test sites around the world, satellite and field measurements are currently being collected and will be made available for collaborative effort. This experiment should facilitate international standards for data products and reporting, eventually supporting the development of a global system of systems for agricultural crop assessment and monitoring.

  16. Impact of a nationwide study for surveillance of maternal near-miss on the quality of care provided by participating centers: a quantitative and qualitative approach

    Science.gov (United States)

    2014-01-01

    Background The Brazilian Network for Surveillance of Severe Maternal Morbidity was established in 27 centers in different regions of Brazil to investigate the frequency of severe maternal morbidity (near-miss and potentially life-threatening conditions) and associated factors, and to create a collaborative network for studies on perinatal health. It also allowed interventions aimed at improving the quality of care in the participating institutions. The objective of this study was to evaluate the perception of the professionals involved regarding the effect of participating in such network on the quality of care provided to women. Methods A mixed quantitative and qualitative study interviewed coordinators, investigators and managers from all the 27 obstetric units that had participated in the network. Following verbal informed consent, data were collected six and twelve months after the surveillance period using structured and semi-structured interviews that were conducted by telephone and recorded. A descriptive analysis for the quantitative and categorical data, and a thematic content analysis for the answers to the open questions were performed. Results The vast majority (93%) of interviewees considered it was important to have participated in the network and 95% that their ability to identify cases of severe maternal morbidity had improved. They also considered that the study had a positive effect, leading to changes in how cases were identified, better organization/standardization of team activities, changes in routines/protocols, implementation of auditing for severe cases, dissemination of knowledge at local/regional level and a contribution to local and/or national identification of maternal morbidity. After 12 months, interviewees mentioned the need to improve prenatal care and the scientific importance of the results. Some believed that there had been little or no impact due to the poor dissemination of information and the resistance of professionals to

  17. Simplifying ART cohort monitoring: Can pharmacy stocks provide accurate estimates of patients retained on antiretroviral therapy in Malawi?

    Directory of Open Access Journals (Sweden)

    Tweya Hannock

    2012-07-01

    Background Routine monitoring of patients on antiretroviral therapy (ART) is crucial for measuring program success and accurate drug forecasting. However, compiling data from patient registers to measure retention in ART is labour-intensive. To address this challenge, we conducted a pilot study in Malawi to assess whether patient ART retention could be determined using pharmacy records as compared to estimates of retention based on standardized paper- or electronic-based cohort reports. Methods Twelve ART facilities were included in the study: six used paper-based registers and six used electronic data systems. One ART facility implemented an electronic data system in quarter three and was included as a paper-based system facility in quarter two only. Routine patient retention cohort reports, paper or electronic, were collected from facilities for both quarter two [April–June] and quarter three [July–September], 2010. Pharmacy stock data were also collected from the 12 ART facilities over the same period. Numbers of ART continuation bottles recorded on pharmacy stock cards at the beginning and end of each quarter were documented. These pharmacy data were used to calculate the total bottles dispensed to patients in each quarter with intent to estimate the number of patients retained on ART. Information for time required to determine ART retention was gathered through interviews with clinicians tasked with compiling the data. Results Among ART clinics with paper-based systems, three of six facilities in quarter two and four of five facilities in quarter three had similar numbers of patients retained on ART comparing cohort reports to pharmacy stock records. In ART clinics with electronic systems, five of six facilities in quarter two and five of seven facilities in quarter three had similar numbers of patients retained on ART when comparing retention numbers from electronically generated cohort reports to pharmacy stock records. Among

  18. Rapid non-destructive quantitative estimation of urania/ thoria in mixed thorium uranium di-oxide pellets by high-resolution gamma-ray spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Shriwastwa, B.B.; Kumar, Anil; Raghunath, B.; Nair, M.R.; Abani, M.C.; Ramachandran, R.; Majumdar, S.; Ghosh, J.K

    2001-06-01

    A non-destructive technique using high-resolution gamma-ray spectrometry has been standardised for quantitative estimation of uranium/thorium in mixed (ThO{sub 2}-UO{sub 2}) fuel pellets of varying composition. Four gamma energies were selected, two each from the uranium and thorium series, and the counting time was optimised. This technique can be used for rapid estimation of the U/Th percentage in a large number of mixed fuel pellets from a production campaign.

  19. Visually estimated ejection fraction by two dimensional and triplane echocardiography is closely correlated with quantitative ejection fraction by real-time three dimensional echocardiography

    Directory of Open Access Journals (Sweden)

    Manouras Aristomenis

    2009-08-01

    Background Visual assessment of left ventricular ejection fraction (LVEF) is often used in clinical routine despite general recommendations to use quantitative biplane Simpson's (BPS) measurements. Even though quantitative methods are well validated and for many reasons preferable, the feasibility of visual assessment (eyeballing) is superior. There is to date only sparse data comparing visual EF assessment with quantitative methods. The aim of this study was to compare visual EF assessment by two-dimensional echocardiography (2DE) and triplane echocardiography (TPE) using quantitative real-time three-dimensional echocardiography (RT3DE) as the reference method. Methods Thirty patients were enrolled in the study. Eyeballing EF was assessed using apical 4- and 2-chamber views and TP mode by two experienced readers blinded to all clinical data. The measurements were compared to quantitative RT3DE. Results There was an excellent correlation between eyeballing EF by 2D and TP vs 3DE (r = 0.91 and 0.95, respectively) without any significant bias (-0.5 ± 3.7% and -0.2 ± 2.9%, respectively). Intraobserver variability was 3.8% for eyeballing 2DE, 3.2% for eyeballing TP and 2.3% for quantitative 3D-EF. Interobserver variability was 7.5% for eyeballing 2D and 8.4% for eyeballing TP. Conclusion Visual estimation of LVEF both using 2D and TP by an experienced reader correlates well with quantitative EF determined by RT3DE. There is an apparent trend towards smaller variability using TP in comparison to 2D; this was however not statistically significant.

  20. Visually estimated ejection fraction by two dimensional and triplane echocardiography is closely correlated with quantitative ejection fraction by real-time three dimensional echocardiography.

    Science.gov (United States)

    Shahgaldi, Kambiz; Gudmundsson, Petri; Manouras, Aristomenis; Brodin, Lars-Ake; Winter, Reidar

    2009-08-25

    Visual assessment of left ventricular ejection fraction (LVEF) is often used in clinical routine despite general recommendations to use quantitative biplane Simpson's (BPS) measurements. Even though quantitative methods are well validated and for many reasons preferable, the feasibility of visual assessment (eyeballing) is superior. There is to date only sparse data comparing visual EF assessment with quantitative methods. The aim of this study was to compare visual EF assessment by two-dimensional echocardiography (2DE) and triplane echocardiography (TPE) using quantitative real-time three-dimensional echocardiography (RT3DE) as the reference method. Thirty patients were enrolled in the study. Eyeballing EF was assessed using apical 4- and 2-chamber views and TP mode by two experienced readers blinded to all clinical data. The measurements were compared to quantitative RT3DE. There was an excellent correlation between eyeballing EF by 2D and TP vs 3DE (r = 0.91 and 0.95, respectively) without any significant bias (-0.5 +/- 3.7% and -0.2 +/- 2.9%, respectively). Intraobserver variability was 3.8% for eyeballing 2DE, 3.2% for eyeballing TP and 2.3% for quantitative 3D-EF. Interobserver variability was 7.5% for eyeballing 2D and 8.4% for eyeballing TP. Visual estimation of LVEF both using 2D and TP by an experienced reader correlates well with quantitative EF determined by RT3DE. There is an apparent trend towards smaller variability using TP in comparison to 2D; this was however not statistically significant.
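
    The agreement statistics used in these two records (correlation plus bias) can be reproduced with a Bland-Altman-style calculation. The paired EF readings below are hypothetical, chosen only to mimic the scale of the reported values.

```python
import numpy as np

# Hypothetical paired EF readings (%): visual 2D estimates vs quantitative 3DE
ef_eyeball = np.array([55., 42., 60., 35., 48., 65., 30., 58., 50., 45.])
ef_3de     = np.array([56., 44., 59., 36., 50., 64., 32., 57., 52., 44.])

r = np.corrcoef(ef_eyeball, ef_3de)[0, 1]   # Pearson correlation

# Bland-Altman style agreement: bias and 95% limits of agreement
diff = ef_eyeball - ef_3de
bias, sd = diff.mean(), diff.std(ddof=1)
print(f"r = {r:.2f}, bias = {bias:.1f}%, "
      f"LoA = [{bias - 1.96 * sd:.1f}, {bias + 1.96 * sd:.1f}]%")
```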

  1. Comparison Of Quantitative Precipitation Estimates Derived From Rain Gauge And Radar Derived Algorithms For Operational Flash Flood Support.

    Science.gov (United States)

    Streubel, D. P.; Kodama, K.

    2014-12-01

    To provide continuous flash flood situational awareness and to better differentiate the severity of individual precipitation events, the National Weather Service Research Distributed Hydrologic Model (RDHM) is being implemented over Hawaii and Alaska. In the implementation of RDHM, three gridded precipitation analyses are used as forcing. The first analysis is a radar-only precipitation estimate derived from WSR-88D digital hybrid reflectivity and a Z-R relationship, aggregated onto an hourly ¼ HRAP grid. The second analysis is derived from a rain gauge network and interpolated onto an hourly ¼ HRAP grid using PRISM climatology. The third analysis is derived from a rain gauge network where rain gauges are assigned static pre-determined weights to derive a uniform mean areal precipitation that is applied over a catchment on a ¼ HRAP grid. To assess the effect of the different QPE analyses on the accuracy of RDHM simulations, and to potentially identify a preferred analysis for operational use, each QPE was used to force RDHM to simulate stream flow for 20 USGS peak flow events. The evaluation of the RDHM simulations focused on peak flow magnitude, peak flow timing, and event volume accuracy, the measures most relevant for operational use. Results showed that RDHM simulations based on the observed rain gauge amounts were more accurate in simulating peak flow magnitude and event volume relative to the radar-derived analysis. However, this result was not consistent for all 20 events, nor for a few of the rainfall events where an annual peak flow was recorded at more than one USGS gage. This indicates that a more robust QPE forcing, incorporating uncertainty derived from the three analyses, may provide a better input for simulating extreme peak flow events.

  2. Quantitative estimation of the influence of external vibrations on the measurement error of a coriolis mass-flow meter

    NARCIS (Netherlands)

    van de Ridder, Bert; Hakvoort, Wouter; van Dijk, Johannes; Lötters, Joost Conrad; de Boer, Andries; Dimitrovova, Z.; de Almeida, J.R.

    2013-01-01

    In this paper the quantitative influence of external vibrations on the measurement value of a Coriolis Mass-Flow Meter for low flows is investigated, with the eventual goal of reducing the influence of vibrations. Model results are compared with experimental results to improve the knowledge on how

  3. Quantitative coronary angiography in the estimation of the functional significance of coronary stenosis: correlations with dobutamine-atropine stress test

    NARCIS (Netherlands)

    J.M.P. Baptista da Silva (José); M. Arnese (Mariarosaria); J.R.T.C. Roelandt (Jos); P.M. Fioretti (Paolo); D.T.J. Keane (David); J. Escaned (Javier); C. di Mario (Carlo); P.W.J.C. Serruys (Patrick); H. Boersma (Eric)

    1994-01-01

    OBJECTIVES. The purpose of this study was to determine the predictive value of quantitative coronary angiography in the assessment of the functional significance of coronary stenosis as judged from the development of left ventricular wall motion abnormalities during dobutamine-atropine

  4. Interleaved quantitative BOLD: Combining extravascular R2'- and intravascular R2-measurements for estimation of deoxygenated blood volume and hemoglobin oxygen saturation.

    Science.gov (United States)

    Lee, Hyunyeol; Englund, Erin K; Wehrli, Felix W

    2018-03-23

    Quantitative BOLD (qBOLD), a non-invasive MRI method for assessment of hemodynamic and metabolic properties of the brain in the baseline state, provides spatial maps of deoxygenated blood volume fraction (DBV) and hemoglobin oxygen saturation (HbO2) by means of an analytical model for the temporal evolution of free-induction-decay signals in the extravascular compartment. However, mutual coupling between DBV and HbO2 in the signal model results in considerable estimation uncertainty precluding achievement of a unique set of solutions. To address this problem, we developed an interleaved qBOLD method (iqBOLD) that combines extravascular R2' and intravascular R2 mapping techniques so as to obtain prior knowledge for the two unknown parameters. To achieve these goals, asymmetric spin echo and velocity-selective spin-labeling (VSSL) modules were interleaved in a single pulse sequence. Prior to VSSL, arterial blood and CSF signals were suppressed to produce reliable estimates for cerebral venous blood volume fraction (CBVv) as well as venous blood R2 (to yield HbO2). Parameter maps derived from the VSSL module were employed to initialize DBV and HbO2 in the qBOLD processing. Numerical simulations and in vivo experiments at 3 T were performed to evaluate the performance of iqBOLD in comparison to the parent qBOLD method. Data obtained in eight healthy subjects yielded plausible values averaging 60.1 ± 3.3% for HbO2 and 3.1 ± 0.5 and 2.0 ± 0.4% for DBV in gray and white matter, respectively. Furthermore, the results show that prior estimates of CBVv and HbO2 from the VSSL component enhance the solution stability in the qBOLD processing, and thus suggest the feasibility of iqBOLD as a promising alternative to the conventional technique for quantifying neurometabolic parameters. Copyright © 2018. Published by Elsevier Inc.

  5. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    Science.gov (United States)

    Curtis, Tyler E; Roeder, Ryan K

    2017-10-01

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in
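
    Supervised material decomposition of this kind solves, per pixel, a small linear system built from the calibrated basis matrix. The sketch below uses non-negative least squares as a simple stand-in for the study's maximum a posteriori estimator; the basis-matrix entries are illustrative, not calibrated values.

```python
import numpy as np
from scipy.optimize import nnls

# Rows: energy bins; columns: basis materials (gadolinium, calcium, water).
# Entries are effective attenuation per unit concentration in each bin, as
# would be obtained from calibration-phantom scans (values are illustrative).
basis_matrix = np.array([[0.90, 0.55, 0.20],
                         [1.40, 0.48, 0.19],   # bin above the Gd k-edge
                         [0.70, 0.40, 0.18],
                         [0.50, 0.33, 0.17],
                         [0.35, 0.28, 0.16]])

def decompose(measured_attenuation):
    """Per-pixel material concentrations via non-negative least squares,
    a simple stand-in for the study's maximum a posteriori estimator."""
    conc, _ = nnls(basis_matrix, measured_attenuation)
    return conc

pixel = basis_matrix @ np.array([30.0, 5.0, 1.0])   # 30 mM Gd ground truth
noisy = pixel + np.random.default_rng(1).normal(0, 0.05, 5)
print(decompose(noisy))                             # approx [30, 5, 1]
```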

  6. Field data provide estimates of effective permeability, fracture spacing, well drainage area and incremental production in gas shales

    KAUST Repository

    Eftekhari, Behzad

    2018-05-23

    About half of US natural gas comes from gas shales. It is valuable to study field production well by well. We present a field data-driven solution for long-term shale gas production from a horizontal, hydrofractured well far from other wells and reservoir boundaries. Our approach is a hybrid between an unstructured big-data approach and physics-based models. We extend a previous two-parameter scaling theory of shale gas production by adding a third parameter that incorporates gas inflow from the external unstimulated reservoir. This allows us to estimate for the first time the effective permeability of the unstimulated shale and the spacing of fractures in the stimulated region. From an analysis of wells in the Barnett shale, we find that on average stimulation fractures are spaced every 20 m, and the effective permeability of the unstimulated region is 100 nanodarcy. We estimate that over 30 years on production the Barnett wells will produce on average about 20% more gas because of inflow from the outside of the stimulated volume. There is a clear tradeoff between production rate and ultimate recovery in shale gas development. In particular, our work has strong implications for well spacing in infill drilling programs.

  7. Large-Scale Survey Findings Inform Patients’ Experiences in Using Secure Messaging to Engage in Patient-Provider Communication and Self-Care Management: A Quantitative Assessment

    Science.gov (United States)

    Patel, Nitin R; Lind, Jason D; Antinori, Nicole

    2015-01-01

    Background Secure email messaging is part of a national transformation initiative in the United States to promote new models of care that support enhanced patient-provider communication. To date, only a limited number of large-scale studies have evaluated users’ experiences in using secure email messaging. Objective To quantitatively assess veteran patients’ experiences in using secure email messaging in a large patient sample. Methods A cross-sectional mail-delivered paper-and-pencil survey study was conducted with a sample of respondents identified as registered for the Veterans Health Administration’s Web-based patient portal (My HealtheVet) and having opted to use secure messaging. The survey collected demographic data, assessed computer and health literacy, and secure messaging use. Analyses conducted on survey data include frequencies and proportions, chi-square tests, and one-way analysis of variance. Results The majority of respondents (N=819) reported using secure messaging 6 months or longer (n=499, 60.9%). They reported secure messaging to be helpful for completing medication refills (n=546, 66.7%), managing appointments (n=343, 41.9%), looking up test results (n=350, 42.7%), and asking health-related questions (n=340, 41.5%). Notably, some respondents reported using secure messaging to address sensitive health topics (n=67, 8.2%). Survey responses indicated that younger age (P=.039) and higher levels of education (P=.025) and income (P=.003) were associated with more frequent use of secure messaging. Females were more likely to report using secure messaging more often, compared with their male counterparts (P=.098). Minorities were more likely to report using secure messaging more often, at least once a month, compared with nonminorities (P=.086). Individuals with higher levels of health literacy reported more frequent use of secure messaging (P=.007), greater satisfaction (P=.002), and indicated that secure messaging is a useful (P=.002) and easy

  8. Large-Scale Survey Findings Inform Patients' Experiences in Using Secure Messaging to Engage in Patient-Provider Communication and Self-Care Management: A Quantitative Assessment.

    Science.gov (United States)

    Haun, Jolie N; Patel, Nitin R; Lind, Jason D; Antinori, Nicole

    2015-12-21

    Secure email messaging is part of a national transformation initiative in the United States to promote new models of care that support enhanced patient-provider communication. To date, only a limited number of large-scale studies have evaluated users' experiences in using secure email messaging. To quantitatively assess veteran patients' experiences in using secure email messaging in a large patient sample. A cross-sectional mail-delivered paper-and-pencil survey study was conducted with a sample of respondents identified as registered for the Veterans Health Administration's Web-based patient portal (My HealtheVet) who had opted to use secure messaging. The survey collected demographic data, assessed computer and health literacy, and secure messaging use. Analyses conducted on survey data include frequencies and proportions, chi-square tests, and one-way analysis of variance. The majority of respondents (N=819) reported using secure messaging 6 months or longer (n=499, 60.9%). They reported secure messaging to be helpful for completing medication refills (n=546, 66.7%), managing appointments (n=343, 41.9%), looking up test results (n=350, 42.7%), and asking health-related questions (n=340, 41.5%). Notably, some respondents reported using secure messaging to address sensitive health topics (n=67, 8.2%). Survey responses indicated that younger age (P=.039) and higher levels of education (P=.025) and income (P=.003) were associated with more frequent use of secure messaging. Females were more likely to report using secure messaging more often, compared with their male counterparts (P=.098). Minorities were more likely to report using secure messaging more often, at least once a month, compared with nonminorities (P=.086). Individuals with higher levels of health literacy reported more frequent use of secure messaging (P=.007), greater satisfaction (P=.002), and indicated that secure messaging is a useful (P=.002) and easy-to-use (P≤.001) communication tool, compared

  9. Derelict fishing line provides a useful proxy for estimating levels of non-compliance with no-take marine reserves.

    Directory of Open Access Journals (Sweden)

    David H Williamson

    No-take marine reserves (NTMRs) are increasingly being established to conserve or restore biodiversity and to enhance the sustainability of fisheries. Although effectively designed and protected NTMR networks can yield conservation and fishery benefits, reserve effects often fail to manifest in systems where there are high levels of non-compliance by fishers (poaching). Obtaining reliable estimates of NTMR non-compliance can be expensive and logistically challenging, particularly in areas with limited or non-existent resources for conducting surveillance and enforcement. Here we assess the utility of density estimates and re-accumulation rates of derelict (lost and abandoned) fishing line as a proxy for fishing effort and NTMR non-compliance on fringing coral reefs in three island groups of the Great Barrier Reef Marine Park (GBRMP), Australia. Densities of derelict fishing line were consistently lower on reefs within old (>20 year) NTMRs than on non-NTMR reefs (significantly in the Palm and Whitsunday Islands), whereas line densities did not differ significantly between reefs in new NTMRs (5 years of protection) and non-NTMR reefs. A manipulative experiment in which derelict fishing lines were removed from a subset of the monitoring sites demonstrated that lines re-accumulated on NTMR reefs at approximately one third (32.4%) of the rate observed on non-NTMR reefs over a thirty-two month period. Although these inshore NTMRs have long been considered some of the best protected within the GBRMP, evidence presented here suggests that the level of non-compliance with NTMR regulations is higher than previously assumed.

  10. A quantitative magnetic resonance histology atlas of postnatal rat brain development with regional estimates of growth and variability.

    Science.gov (United States)

    Calabrese, Evan; Badea, Alexandra; Watson, Charles; Johnson, G Allan

    2013-05-01

    There has been growing interest in the role of postnatal brain development in the etiology of several neurologic diseases. The rat has long been recognized as a powerful model system for studying neuropathology and the safety of pharmacologic treatments. However, the complex spatiotemporal changes that occur during rat neurodevelopment remain to be elucidated. This work establishes the first magnetic resonance histology (MRH) atlas of the developing rat brain, with an emphasis on quantitation. The atlas comprises five specimens at each of nine time points, imaged with eight distinct MR contrasts and segmented into 26 developmentally defined brain regions. The atlas was used to establish a timeline of morphometric changes and variability throughout neurodevelopment and represents a quantitative database of rat neurodevelopment for characterizing rat models of human neurologic disease. Published by Elsevier Inc.

  11. Quantitative estimation of viable myocardium in the infarcted zone by infarct-redistribution map from images of exercise thallium-201 emission computed tomography

    International Nuclear Information System (INIS)

    Sekiai, Yasuhiro

    1988-01-01

    To quantitatively evaluate the viable myocardium in the infarcted zone, we devised an infarct-redistribution map produced from images of exercise thallium-201 emission computed tomography performed on 10 healthy subjects and 20 patients with myocardial infarction. The map displays a left ventricle in which the infarcted areas with and without redistribution, the redistribution area without infarction, and the normally perfused area are shown separately on the same screen. In this display, the non-redistribution infarct lesion appears surrounded by the redistribution area. Indices of infarct and redistribution extent (defect score, % defect, redistribution ratio (RR) and redistribution index (RI)) were derived from the map and used for quantitative analysis of the redistribution area and as the basis for comparison with regional wall motion of the left ventricle. The quantitative indices of defect score, % defect, RR and RI were consistent with the visual assessment of planar images in detecting the extent of redistribution. Furthermore, defect score and % defect had an inverse linear relationship with % shortening (r = -0.573; p < 0.05 and r = -0.536; p < 0.05, respectively), and RI had a good linear relationship with % shortening (r = 0.669; p < 0.01). We conclude that the infarct-redistribution map accurately reflects myocardial viability and may therefore be useful for quantitative estimation of viable myocardium in the infarcted zone. (author)

  12. Probabilistic quantitative microbial risk assessment model of norovirus from wastewater irrigated vegetables in Ghana using genome copies and fecal indicator ratio conversion for estimating exposure dose

    DEFF Research Database (Denmark)

    Owusu-Ansah, Emmanuel de-Graft Johnson; Sampson, Angelina; Amponsah, Samuel K.

    2017-01-01

    The need to replace the commonly applied fecal indicator conversion ratio (an assumption of 1:10^-5 virus to fecal indicator organism) in Quantitative Microbial Risk Assessment (QMRA) with models based on quantitative data on the virus of interest has gained prominence due to the different physical and environmental factors that might influence the reliability of using indicator organisms in microbial risk assessment. The challenges facing analytical studies on virus enumeration (genome copies or particles) have contributed to the already existing lack of data in QMRA modelling. This study attempts to fit a QMRA model to genome copies of norovirus data. The model estimates the risk of norovirus infection from the intake of vegetables irrigated with wastewater from different sources. The results were compared to the results of a corresponding model using the fecal indicator conversion ratio to estimate the norovirus count.

  13. Stereological estimation of nuclear volume and other quantitative histopathological parameters in the prognostic evaluation of supraglottic laryngeal squamous cell carcinoma

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Bennedbaek, O; Pilgaard, J

    1989-01-01

    The aim of this study was to investigate various approaches to the grading of malignancy in pre-treatment biopsies from patients with supraglottic laryngeal squamous cell carcinoma. The prospects of objective malignancy grading based on stereological estimation of the volume-weighted mean nuclear volume, nuclear Vv, and nuclear volume fraction, Vv(nuc/tis), along with morphometrical 2-dimensional estimation of the nuclear density index, NI, and mitotic activity index, MI, were investigated and compared with the current morphological, multifactorial grading system. The reproducibility of the latter between two observers was poor in the material, which consisted of 35 biopsy specimens. Unbiased estimates of nuclear Vv were on average 385 microns3 (CV = 0.44), with more than 90% of the associated variance attributable to differences in nuclear Vv among individual lesions. Nuclear Vv was positively...

  14. Comparison of NIS and NHIS/NIPRCS vaccination coverage estimates. National Immunization Survey. National Health Interview Survey/National Immunization Provider Record Check Study.

    Science.gov (United States)

    Bartlett, D L; Ezzati-Rice, T M; Stokley, S; Zhao, Z

    2001-05-01

    The National Immunization Survey (NIS) and the National Health Interview Survey (NHIS) produce national coverage estimates for children aged 19 to 35 months. The NIS is a cost-effective, random-digit-dialing telephone survey that produces national and state-level vaccination coverage estimates. The National Immunization Provider Record Check Study (NIPRCS) is conducted in conjunction with the annual NHIS, which is a face-to-face household survey. Because the NIS is a telephone survey, potential coverage bias exists in that the survey excludes children living in nontelephone households. To assess the validity of vaccine coverage estimates from the NIS, we compared 1995 and 1996 NIS national estimates with results from the NHIS/NIPRCS for the same years. Both the NIS and the NHIS/NIPRCS produce similar results, and the NHIS/NIPRCS supports the findings of the NIS.

  15. A quantitative real time polymerase chain reaction approach for estimating processed animal proteins in feed: preliminary data

    Directory of Open Access Journals (Sweden)

    Maria Cesarina Abete

    2013-04-01

    Lifting of the ban on the use of processed animal proteins (PAPs) from non-ruminants in non-ruminant feed is under consideration, provided that intraspecies recycling is avoided. Discrimination of species will be performed through the polymerase chain reaction (PCR), which is at the moment a merely qualitative method. Nevertheless, quantification of PAPs in feed is needed. The aim of this study was to approach the quantitative determination of PAPs in feed through the Real Time PCR (RT-PCR) technique; three different protocols taken from the literature were tested. Three different kinds of matrices were examined: pure animal meals (bovine, chicken and pork); one feed sample certified by the European Reference Laboratory on animal proteins in feed (EURL AP), spiked with 0.1% bovine meal; and genomic DNA from bovine, chicken and pork muscle. The limit of detection (LOD) of each of the three protocols was established. All three protocols failed in the quantification process, most likely due to the uncertain copy numbers of the analytical targets chosen. This preliminary study will allow us to direct further investigations towards developing a quantitative RT-PCR method.

  16. A Concurrent Mixed Methods Approach to Examining the Quantitative and Qualitative Meaningfulness of Absolute Magnitude Estimation Scales in Survey Research

    Science.gov (United States)

    Koskey, Kristin L. K.; Stewart, Victoria C.

    2014-01-01

    This small "n" observational study used a concurrent mixed methods approach to address a void in the literature with regard to the qualitative meaningfulness of the data yielded by absolute magnitude estimation scaling (MES) used to rate subjective stimuli. We investigated whether respondents' scales progressed from less to more and…

  17. A study on deep geological environment for the radwaste disposal - Estimation of roughness for the quantitative analysis of fracture transmissivity

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Yul; Kim, J. Y.; Kim, Y. S.; Hyun, H. J. [Korea Institute of Geology, Mining and Materials, Taejon (Korea)

    2000-03-01

    Estimation of fracture roughness - one of the basic hydraulic fracture parameters - is very important in assessing groundwater flow described by discrete fracture network modeling. Manual estimation of the roughness for each fracture surface of drill cores is a tedious, time-consuming task and often introduces ambiguities of roughness interpretation, partly due to the subjective judgement of observers and partly due to the measuring procedure itself. Recently, however, the high reliability of Televiewer data for fracture discrimination has suggested developing a relationship between the traditional roughness method, based on linear profiles, and a method based on the ellipsoidal profile a fracture traces in the Televiewer image. Hence, the aim of this work was to develop an automatic evaluation algorithm for measuring roughness from Televiewer images. Reliable software named 'RAF' has been developed and has proven its utility. In the development procedure, various problems were considered, such as the examination of a new (ellipsoidal) baseline for measuring the unevenness of fractures, the elimination of overlapping fracture signatures and noise, wavelet estimation according to the type of fracture, and the digitalization of roughness. With these considerations in mind, the newly devised algorithm for the estimation of roughness curves showed great potential, not only for avoiding ambiguities of roughness interpretation but also for roughness classification. 12 refs., 23 figs. (Author)
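
    As a rough illustration of the idea behind an automatic algorithm of this kind: a planar fracture intersecting a cylindrical borehole traces a sinusoid in the unrolled image, so roughness can be read from residuals about a fitted sinusoidal baseline. The sketch below is hypothetical and is not the RAF implementation:

```python
# Hypothetical sketch: roughness as the RMS residual about a sinusoidal
# baseline fitted to a fracture trace picked from an unrolled borehole image.
import numpy as np
from scipy.optimize import curve_fit

def baseline(az_deg, amp, phase, offset):
    # Trace of a planar fracture in an unrolled cylindrical image.
    return amp * np.sin(np.radians(az_deg) + phase) + offset

az = np.linspace(0.0, 360.0, 181)            # azimuth around the hole (deg)
rng = np.random.default_rng(0)
trace_mm = baseline(az, 25.0, 0.4, 1200.0) + rng.normal(0.0, 2.0, az.size)  # synthetic pick

popt, _ = curve_fit(baseline, az, trace_mm, p0=[20.0, 0.0, 1150.0])
residual = trace_mm - baseline(az, *popt)

rms_roughness = float(np.sqrt(np.mean(residual**2)))  # one simple roughness figure (mm)
print(f"RMS deviation from the sinusoidal baseline: {rms_roughness:.2f} mm")
```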

  18. Bayesian estimation and use of high-throughput remote sensing indices for quantitative genetic analyses of leaf growth.

    Science.gov (United States)

    Baker, Robert L; Leong, Wen Fung; An, Nan; Brock, Marcus T; Rubin, Matthew J; Welch, Stephen; Weinig, Cynthia

    2018-02-01

    We develop Bayesian function-valued trait models that mathematically isolate genetic mechanisms underlying leaf growth trajectories by factoring out genotype-specific differences in photosynthesis. Remote sensing data can be used instead of leaf-level physiological measurements. Characterizing the genetic basis of traits that vary during ontogeny and affect plant performance is a major goal in evolutionary biology and agronomy. Describing genetic programs that specifically regulate morphological traits can be complicated by genotypic differences in physiological traits. We describe the growth trajectories of leaves using novel Bayesian function-valued trait (FVT) modeling approaches in Brassica rapa recombinant inbred lines raised in heterogeneous field settings. While frequentist approaches estimate parameter values by treating each experimental replicate discretely, Bayesian models can utilize information in the global dataset, potentially leading to more robust trait estimation. We illustrate this principle by estimating growth asymptotes in the face of missing data and comparing heritabilities of growth trajectory parameters estimated by Bayesian and frequentist approaches. Using pseudo-Bayes factors, we compare the performance of an initial Bayesian logistic growth model and a model that incorporates carbon assimilation (Amax) as a cofactor, thus statistically accounting for genotypic differences in carbon resources. We further evaluate two remotely sensed spectroradiometric indices, photochemical reflectance (pri2) and MERIS Terrestrial Chlorophyll Index (mtci), as covariates in lieu of Amax, because these two indices were genetically correlated with Amax across years and treatments yet allow much higher throughput compared to direct leaf-level gas-exchange measurements. For leaf lengths in uncrowded settings, including Amax improves model fit over the initial model. The mtci and pri2 indices also outperform direct Amax measurements. Of particular
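
    For orientation, a bare-bones sketch of the logistic growth curve underlying the FVT parameters discussed above (asymptote, rate, inflection). The study estimates these in a Bayesian framework; here a single hypothetical leaf-length series is fitted by least squares only to show the parameterization:

```python
# Minimal sketch of a logistic function-valued trait (FVT) curve.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, L_max, r, t_mid):
    # L_max: asymptote, r: growth rate, t_mid: inflection time.
    return L_max / (1.0 + np.exp(-r * (t - t_mid)))

days = np.array([5, 8, 11, 14, 17, 20, 23, 26], dtype=float)
leaf_mm = np.array([4.1, 7.9, 14.5, 22.0, 28.3, 31.8, 33.2, 33.9])  # hypothetical series

(L_max, r, t_mid), _ = curve_fit(logistic, days, leaf_mm, p0=[35.0, 0.3, 14.0])
print(f"asymptote={L_max:.1f} mm, rate={r:.2f}/day, inflection day={t_mid:.1f}")
```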

  19. Logistic quantile regression provides improved estimates for bounded avian counts: a case study of California Spotted Owl fledgling production

    Science.gov (United States)

    Brian S. Cade; Barry R. Noon; Rick D. Scherer; John J. Keane

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical...
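
    A minimal sketch of the general idea, under assumptions of our own (hypothetical data, a made-up covariate, and a simple logit transform with boundary jitter); the paper's actual model and estimation details may differ:

```python
# Logistic quantile regression for counts bounded on [0, N] (fledglings 0..3).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, N = 200, 3
habitat = rng.uniform(0, 1, n)                        # hypothetical covariate
y = np.clip(np.round(N * habitat + rng.normal(0, 0.6, n)), 0, N)

eps = 0.5                                             # jitter keeps the logit finite at the bounds
z = np.log((y + eps) / (N + eps - y))                 # logit transform of the bounded count

fit = smf.quantreg("z ~ habitat", pd.DataFrame({"z": z, "habitat": habitat})).fit(q=0.5)

z_hat = fit.predict(pd.DataFrame({"habitat": [0.2, 0.8]}))
y_hat = (np.exp(z_hat) * (N + eps) - eps) / (1 + np.exp(z_hat))   # back-transform to counts
print(fit.params, y_hat.round(2).tolist())
```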

  20. Estimation of genetic parameters and their sampling variances for quantitative traits in the type 2 modified augmented design

    OpenAIRE

    Frank M. You; Qijian Song; Gaofeng Jia; Yanzhao Cheng; Scott Duguid; Helen Booker; Sylvie Cloutier

    2016-01-01

    The type 2 modified augmented design (MAD2) is an efficient unreplicated experimental design used for evaluating large numbers of lines in plant breeding and for assessing genetic variation in a population. Statistical methods and data adjustment for soil heterogeneity have been previously described for this design. In the absence of replicated test genotypes in MAD2, their total variance cannot be partitioned into genetic and error components as required to estimate heritability and genetic ...

  1. Apparent polyploidization after gamma irradiation: pitfalls in the use of quantitative polymerase chain reaction (qPCR) for the estimation of mitochondrial and nuclear DNA gene copy numbers.

    Science.gov (United States)

    Kam, Winnie W Y; Lake, Vanessa; Banos, Connie; Davies, Justin; Banati, Richard

    2013-05-30

    Quantitative polymerase chain reaction (qPCR) has been widely used to quantify changes in gene copy numbers after radiation exposure. Here, we show that gamma irradiation of cells and cell-free DNA samples, at doses ranging from 10 to 100 Gy, significantly affects the measured qPCR yield, due to radiation-induced fragmentation of the DNA template, and therefore introduces errors into the estimation of gene copy numbers. The radiation-induced DNA fragmentation, and thus the measured qPCR yield, varies with temperature not only in living cells but also in isolated DNA irradiated under cell-free conditions. In summary, the variability in measured qPCR yield from irradiated samples introduces a significant error into the estimation of both mitochondrial and nuclear gene copy numbers and may give spurious evidence for polyploidization.

  2. Toward Quantitative Estimation of the Effect of Aerosol Particles in the Global Climate Model and Cloud Resolving Model

    Science.gov (United States)

    Eskes, H.; Boersma, F.; Dirksen, R.; van der A, R.; Veefkind, P.; Levelt, P.; Brinksma, E.; van Roozendael, M.; de Smedt, I.; Gleason, J.

    2005-05-01

    Based on measurements of GOME on ESA ERS-2, SCIAMACHY on ESA-ENVISAT, and the Ozone Monitoring Instrument (OMI) on the NASA EOS-Aura satellite, there is now a unique 11-year dataset of global tropospheric nitrogen dioxide measurements from space. The retrieval approach consists of two steps. The first step is an application of the DOAS (Differential Optical Absorption Spectroscopy) approach, which delivers the total absorption optical thickness along the light path (the slant column). For GOME and SCIAMACHY this is based on the DOAS implementation developed by BIRA/IASB. For OMI the DOAS implementation was developed in a collaboration between KNMI and NASA. The second retrieval step, developed at KNMI, estimates the tropospheric vertical column of NO2 based on the slant column, cloud fraction and cloud top height retrievals, stratospheric column estimates derived from a data assimilation approach, and vertical profile estimates from space-time collocated profiles from the TM chemistry-transport model. The second step was applied with only minor modifications to all three instruments to generate a uniform 11-year data set. In our talk we will address the following topics: - A short summary of the retrieval approach and results - Comparisons with other retrievals - Comparisons with global and regional-scale models - OMI-SCIAMACHY and SCIAMACHY-GOME comparisons - Validation with independent measurements - Trend studies of NO2 for the past 11 years
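
    The second retrieval step described above amounts to simple column arithmetic once the slant column, the stratospheric estimate, and the air-mass factors (AMFs) are in hand. A sketch with hypothetical placeholder numbers (the operational algorithm derives these inputs from assimilation and model profiles):

```python
# Sketch: converting a DOAS slant column into a tropospheric vertical column.
scd_total = 8.0e15        # total NO2 slant column (molecules/cm^2), from DOAS
vcd_strat = 3.0e15        # stratospheric vertical column, from data assimilation
amf_strat = 2.5           # stratospheric air-mass factor (long slant path)
amf_trop = 1.3            # tropospheric AMF (depends on clouds, profile, albedo)

scd_trop = scd_total - vcd_strat * amf_strat   # remove the stratospheric part
vcd_trop = scd_trop / amf_trop                 # convert slant to vertical column
print(f"Tropospheric NO2 VCD: {vcd_trop:.2e} molecules/cm^2")
```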

  3. A new computational scheme on quantitative inner pipe boundary identification based on the estimation of effective thermal conductivity

    International Nuclear Information System (INIS)

    Fan Chunli; Sun Fengrui; Yang Li

    2008-01-01

    In this paper, the irregular configuration of the inner pipe boundary is identified based on the estimation of the circumferential distribution of the effective thermal conductivity of the pipe wall. In order to simulate true temperature measurements in the numerical examples, the finite element method is used to calculate the temperature distribution at the outer pipe surface for the irregularly shaped inner pipe boundary to be determined. Based on this simulated temperature distribution, the inverse identification is conducted by employing the modified one-dimensional correction method, along with the finite volume method, to estimate the circumferential distribution of the effective thermal conductivity of the pipe wall. The inner pipe boundary shape is then calculated from the conductivity estimate. A series of numerical experiments with different temperature measurement errors and different thermal conductivities of the pipe wall confirmed the effectiveness of the method. The method proves to be a simple, fast and accurate one for this inverse heat conduction problem.

  4. Quantitative estimation of renal function with dynamic contrast-enhanced MRI using a modified two-compartment model.

    Directory of Open Access Journals (Sweden)

    Bin Chen

    To establish a simple two-compartment model for glomerular filtration rate (GFR) and renal plasma flow (RPF) estimation by dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), a total of eight New Zealand white rabbits were included in DCE-MRI. The two-compartment model was modified with the impulse residue function in this study. First, the reliability of the GFR measurement of the proposed model was compared with other published models in Monte Carlo simulations at different noise levels. Then, functional parameters were estimated in six healthy rabbits to test the feasibility of the new model. Moreover, in order to investigate the validity of its GFR estimation, two rabbits underwent an acute ischemia surgical procedure in one kidney before DCE-MRI, and pixel-wise measurements were implemented to detect cortical GFR alterations between the normal and abnormal kidneys. The proposed model showed the lowest variability of GFR and RPF measurements in the comparison. Mean GFR was 3.03±1.1 ml/min and mean RPF was 2.64±0.5 ml/g/min in normal animals, in good agreement with published values. Moreover, a large GFR decline was found in the dysfunctional kidneys compared to the contralateral controls. The results of our study demonstrate that measurement of renal kinetic parameters based on the proposed model is feasible and that it has the ability to discriminate GFR changes in healthy and diseased kidneys.

  5. Accuracy of the visual estimation method as a predictor of food intake in Alzheimer's patients provided with different types of food.

    Science.gov (United States)

    Amano, Nobuko; Nakamura, Tomiyo

    2018-02-01

    The visual estimation method is commonly used in hospitals and other care facilities to evaluate food intake through estimation of plate waste. In Japan, no previous studies have investigated the validity and reliability of this method under routine hospital conditions. The present study aimed to evaluate the validity and reliability of the visual estimation method in long-term inpatients with different levels of eating disability caused by Alzheimer's disease, who were provided different therapeutic diets presented in various food types. The study was performed between February and April 2013 and included 82 patients with Alzheimer's disease. Plate waste was evaluated for the 3 main daily meals over a total of 21 days (7 consecutive days in each of the 3 months), yielding a total of 4851 meals, of which 3984 were included. Plate waste was measured by the nurses through the visual estimation method, and by the hospital's registered dietitians through the actual measurement method. The actual measurement method was first validated to serve as a reference, and the level of agreement between both methods was then determined. The month, time of day, type of food provided, and patients' physical characteristics were considered in the analysis. For the 3984 meals included in the analysis, the level of agreement between the measurement methods was 78.4%. Disagreement consisted of 3.8% underestimation and 17.8% overestimation. Cronbach's α (0.60) indicated that the reliability of the visual estimation method was within the acceptable range. The visual estimation method was found to be a valid and reliable method for estimating food intake in patients with different levels of eating impairment. Successful implementation and use of the method depends upon adequate training and motivation of the nurses and care staff involved. Copyright © 2017 European Society for Clinical Nutrition and Metabolism. Published by Elsevier Ltd. All rights reserved.

  6. Probabilistic quantitative microbial risk assessment model of norovirus from wastewater irrigated vegetables in Ghana using genome copies and fecal indicator ratio conversion for estimating exposure dose.

    Science.gov (United States)

    Owusu-Ansah, Emmanuel de-Graft Johnson; Sampson, Angelina; Amponsah, Samuel K; Abaidoo, Robert C; Dalsgaard, Anders; Hald, Tine

    2017-12-01

    The need to replace the commonly applied fecal indicator conversion ratio (an assumption of 1:10^-5 virus to fecal indicator organism) in Quantitative Microbial Risk Assessment (QMRA) with models based on quantitative data on the virus of interest has gained prominence due to the different physical and environmental factors that might influence the reliability of using indicator organisms in microbial risk assessment. The challenges facing analytical studies on virus enumeration (genome copies or particles) have contributed to the already existing lack of data in QMRA modelling. This study attempts to fit a QMRA model to genome copies of norovirus data. The model estimates the risk of norovirus infection from the intake of vegetables irrigated with wastewater from different sources. The results were compared to the results of a corresponding model using the fecal indicator conversion ratio to estimate the norovirus count. In all scenarios of using different water sources, the application of the fecal indicator conversion ratio underestimated the norovirus disease burden, measured by the Disability Adjusted Life Years (DALYs), when compared to results using the genome copies norovirus data. In some cases the difference was >2 orders of magnitude. All scenarios using genome copies met the 10^-4 DALY per person per year threshold for consumption of vegetables irrigated with wastewater, although these results are considered to be highly conservative risk estimates. The fecal indicator conversion ratio model of stream-water and drain-water sources of wastewater achieved the 10^-6 DALY per person per year threshold, which tends to indicate an underestimation of health risk when compared to using genome copies for estimating the dose. Copyright © 2017 Elsevier B.V. All rights reserved.
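
    For orientation, a sketch of the exposure-to-risk chain in a QMRA of this kind, using the approximate beta-Poisson dose-response form often applied to norovirus; every parameter value below is a hypothetical placeholder, not the paper's:

```python
# Toy QMRA chain: irrigation-water concentration -> ingested dose -> infection risk.
import numpy as np

rng = np.random.default_rng(7)
n_sim = 100_000

conc = rng.lognormal(mean=2.0, sigma=1.0, size=n_sim)  # genome copies per mL of water
water_on_veg = rng.uniform(0.01, 0.05, n_sim)          # mL of water clinging per g of vegetable
intake_g = 25.0                                        # g of vegetables per exposure event

dose = conc * water_on_veg * intake_g                  # genome copies ingested

alpha, beta = 0.04, 0.055                              # hypothetical beta-Poisson parameters
p_inf = 1.0 - (1.0 + dose / beta) ** (-alpha)          # approximate beta-Poisson model

print(f"Mean per-event infection risk: {p_inf.mean():.3f}")
```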

  7. Deterministic quantitative risk assessment development

    Energy Technology Data Exchange (ETDEWEB)

    Dawson, Jane; Colquhoun, Iain [PII Pipeline Solutions Business of GE Oil and Gas, Cramlington Northumberland (United Kingdom)

    2009-07-01

    Current risk assessment practice in pipeline integrity management is to use a semi-quantitative, index-based or model-based methodology. This approach has been found to be very flexible and to provide useful results for identifying high-risk areas and for prioritizing physical integrity assessments. However, as pipeline operators progressively adopt an operating strategy of continual risk reduction with a view to minimizing total expenditures within safety, environmental, and reliability constraints, the need for quantitative assessments of risk levels is becoming evident. Whereas reliability-based quantitative risk assessments can be and are routinely carried out on a site-specific basis, they require significant amounts of quantitative data for the results to be meaningful. This need for detailed and reliable data tends to make these methods unwieldy for system-wide risk assessment applications. This paper describes methods for estimating risk quantitatively through the calibration of semi-quantitative estimates to failure rates for peer pipeline systems. The methods involve the analysis of the failure rate distribution and techniques for mapping the rate to the distribution of likelihoods available from currently available semi-quantitative programs. By applying point-value probabilities to the failure rates, deterministic quantitative risk assessment (QRA) provides greater rigor and objectivity than can usually be achieved through the implementation of semi-quantitative risk assessment results. The method permits a fully quantitative approach or a mixture of QRA and semi-QRA to suit the operator's data availability, data quality, and analysis needs. For example, consequence analysis can be quantitative or can address qualitative ranges for consequence categories. Likewise, failure likelihoods can be output as classical probabilities or as expected failure frequencies, as required. (author)
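
    A minimal sketch of the calibration idea, assuming a log-linear relation between semi-quantitative index scores and peer failure rates; all scores, rates, and costs are hypothetical illustration values:

```python
# Calibrating semi-quantitative index scores to peer-system failure rates.
import numpy as np

scores = np.array([20.0, 40.0, 60.0, 80.0])    # index scores for peer segments
rates = np.array([2e-5, 8e-5, 3e-4, 1.2e-3])   # failures per km-year (peer data)

# Fit log(rate) = intercept + slope * score.
slope, intercept = np.polyfit(scores, np.log(rates), 1)

def failure_rate(score):
    """Deterministic failure-rate estimate for a semi-quantitative score."""
    return float(np.exp(intercept + slope * score))

consequence = 5.0e6                            # cost per failure (hypothetical)
score_new = 55.0
risk = failure_rate(score_new) * consequence   # expected cost per km-year
print(f"rate={failure_rate(score_new):.2e}/km-yr, risk=${risk:,.0f}/km-yr")
```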

  8. Identification and quantitative grade estimation of Uranium mineralization based on gross-count gamma ray log at Lemajung sector West Kalimantan

    International Nuclear Information System (INIS)

    Adi Gunawan Muhammad

    2014-01-01

    Lemajung sector is one of the uranium-potential sectors in the Kalan area, West Kalimantan. Uranium mineralization is found in metasiltstone and schistose metapelite rock, with a general east-west direction of mineralization tilted ±70° to the north, parallel with the schistosity pattern (S1). A drilling evaluation was carried out in 2013 in the Lemajung sector at borehole R-05 (LEML-40), logged with gross-count gamma ray. The purpose of this activity is to determine the uranium mineralization grade quantitatively in the rocks and to determine the geological conditions around the drilling area. The methodology involves determining the value of the k-factor, geological mapping around the drill hole, and determination of the thickness and grade of uranium mineralization from the gross-count gamma ray log. From the quantitative grade estimation using the gross-count gamma ray log, the highest % eU3O8 in hole R-05 (LEML-40) reaches 0.7493% (≈6354 ppm eU), found at the depth interval from 30.1 to 34.96 m. Uranium mineralization is present as fracture filling (veins) or as tectonic breccia matrix filling in metasiltstone, with thicknesses from 0.10 to 2.40 m, associated with sulphide (pyrite) and characterized by a high U/Th ratio. (author)
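
    As a sketch of gross-count grade estimation in general terms: the area under the gamma-log anomaly, scaled by a tool-specific k-factor, gives a grade-thickness product. All values below are hypothetical placeholders, not this survey's calibration:

```python
# Toy gross-count gamma grade estimate: grade-thickness = k-factor * anomaly area.
import numpy as np

k_factor = 2.0e-5                  # %U3O8 per (cps*m), hypothetical tool calibration
depth = np.arange(30.0, 35.0, 0.1)                       # m
counts = 4.0e4 * np.exp(-((depth - 32.5) / 0.8) ** 2)    # synthetic anomaly (cps)

# Trapezoidal area under the anomaly, in cps*m.
area = float(np.sum(0.5 * (counts[1:] + counts[:-1]) * np.diff(depth)))

thickness = 2.4                                          # m, mineralized interval
grade = k_factor * area / thickness                      # mean %U3O8 over the interval

print(f"GT product: {k_factor * area:.3f} %*m, mean grade: {grade:.3f} % eU3O8")
```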

  9. Comparison of quantitative estimation of intracerebral hemorrhage and infarct volumes after thromboembolism in an embolic stroke model

    DEFF Research Database (Denmark)

    Eriksen, Nina; Rasmussen, Rune Skovgaard; Overgaard, Karsten

    2014-01-01

    Group 1 was treated with saline, and group 2 was treated with 20 mg/kg recombinant tissue plasminogen activator to promote intracerebral hemorrhages. Stereology, semiautomated computer estimation, and manual erythrocyte counting were used to test the precision and efficiency of determining the size... measurements, the stereological method was the most efficient and advantageous. CONCLUSIONS: We found that stereology was the superior method for quantification of hemorrhagic volume, especially for rodent petechial bleeding, which is otherwise difficult to measure. Our results suggest the possibility...

  10. Liquid chromatography/tandem mass spectrometry method for quantitative estimation of solutol HS15 and its applications

    OpenAIRE

    Bhaskar, V. Vijaya; Middha, Anil; Srivastava, Pratima; Rajagopal, Sriram

    2015-01-01

    A rapid, sensitive and selective pseudoMRM (pMRM)-based method for the determination of solutol HS15 (SHS15) in rat plasma was developed using liquid chromatography/tandem mass spectrometry (LC-MS/MS). The most abundant ions corresponding to SHS15 free polyethyleneglycol (PEG) oligomers at m/z 481, 525, 569, 613, 657, 701, 745, 789, 833, 877, 921 and 965 were selected for pMRM in electrospray mode of ionization. Purity of the lipophilic and hydrophilic components of SHS15 was estimated using ...

  11. Quantitative estimation of electro-osmosis force on charged particles inside a borosilicate resistive-pulse sensor.

    Science.gov (United States)

    Ghobadi, Mostafa; Yuqian Zhang; Rana, Ankit; Esfahani, Ehsan T; Esfandiari, Leyla

    2016-08-01

    Nano- and micron-scale pore sensors have been widely used for biomolecular sensing applications due to their sensitivity, label-free operation, and potentially low cost. Electrophoretic and electroosmotic forces play significant roles in the sensor's performance. In this work, we developed a mathematical model based on experimental and simulation results for negatively charged particles passing through a 2 μm diameter solid-state borosilicate pore under a constant applied electric field. The model estimated the ratio of the electroosmotic force to the electrophoretic force on the particles to be 77.5%.

  12. Modeling number of bacteria per food unit in comparison to bacterial concentration in quantitative risk assessment: impact on risk estimates.

    Science.gov (United States)

    Pouillot, Régis; Chen, Yuhuan; Hoelzer, Karin

    2015-02-01

    When developing quantitative risk assessment models, a fundamental consideration for risk assessors is to decide whether to evaluate changes in bacterial levels in terms of concentrations or in terms of bacterial numbers. Although modeling bacteria in terms of integer numbers may be regarded as a more intuitive and rigorous choice, modeling bacterial concentrations is more popular as it is generally less mathematically complex. We tested three different modeling approaches in a simulation study. The first approach considered bacterial concentrations; the second considered the number of bacteria in contaminated units, and the third considered the expected number of bacteria in contaminated units. Simulation results indicate that modeling concentrations tends to overestimate risk compared to modeling the number of bacteria. A sensitivity analysis using a regression tree suggests that processes which include drastic scenarios consisting of combinations of large bacterial inactivation followed by large bacterial growth frequently lead to a >10-fold overestimation of the average risk when modeling concentrations as opposed to bacterial numbers. Alternatively, the approach of modeling the expected number of bacteria in positive units generates results similar to the second method and is easier to use, thus potentially representing a promising compromise. Published by Elsevier Ltd.
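
    A toy simulation of the effect described above: after a large inactivation, a number-based model leaves most units with zero cells, which subsequent growth cannot resurrect, whereas a concentration-based model regrows the fractional mean everywhere. All parameters are hypothetical:

```python
# Toy comparison: concentration-based vs number-based propagation of a
# low contamination level through drastic inactivation and growth.
import numpy as np

rng = np.random.default_rng(42)
n_units, unit_mass_g = 100_000, 25.0
c0 = 0.1                                   # cells/g before processing
log10_kill, log10_growth = 4.0, 4.0        # large inactivation, then large growth

# Concentration-based: deterministic arithmetic on the mean.
dose_conc = c0 * 10.0**(-log10_kill) * 10.0**log10_growth * unit_mass_g

# Number-based: integer survivors; extinct units can never regrow.
n0 = rng.poisson(c0 * unit_mass_g, n_units)
survivors = rng.binomial(n0, 10.0**(-log10_kill))
dose_num = survivors * 10.0**log10_growth

r = 0.01                                   # hypothetical exponential dose-response parameter
risk_conc = 1.0 - np.exp(-r * dose_conc)                # every unit gets the mean dose
risk_num = float(np.mean(1.0 - np.exp(-r * dose_num)))  # most units carry zero cells

print(f"concentration-based risk: {risk_conc:.4f}")
print(f"number-based risk:        {risk_num:.4f}")      # roughly 100x lower here
```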

  13. SU-G-IeP1-06: Estimating Relative Tissue Density From Quantitative MR Images: A Novel Perspective for MRI-Only Heterogeneity Corrected Dose Calculation

    International Nuclear Information System (INIS)

    Soliman, A; Hashemi, M; Safigholi, H; Tchistiakova, E; Song, W

    2016-01-01

    Purpose: To explore the feasibility of extracting relative density from quantitative MRI measurements and to estimate a correlation between the extracted measures and CT Hounsfield units. Methods: MRI has the ability to separate water and fat signals, producing two separate images for each component. By performing appropriate corrections on the separated images, quantitative measurements of water and fat mass density can be estimated. This work aims to test this hypothesis at 1.5T. Peanut oil was used as fat-representative and agar as water-representative. Gadolinium chloride III and sodium chloride were added to the agar solution to adjust the relaxation times and the medium conductivity, respectively. Peanut oil was added to the agar solution at different percentages: 0%, 3%, 5%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90% and 100%. The phantom was scanned on a 1.5T GE Optima 450W with the body coil using a multi-gradient echo sequence. Water/fat separation was performed while correcting for main field (B0) inhomogeneity and T2* relaxation time. B1+ inhomogeneities were ignored. The phantom was subsequently scanned on a Philips Brilliance CT Big Bore. MR-corrected fat signals from all vials were normalized to the 100% fat signal. CT Hounsfield values were then compared to those obtained from the normalized MR-corrected fat values as well as to the phantom composition for validation. Results: Good agreement was found between CT HU and the MR-extracted fat values (R² = 0.98). CT HU also showed excellent agreement with the prepared fat fractions (R² = 0.99). Vials with 70%, 80%, and 90% fat showed inhomogeneous distributions; however, their results were included for completeness. Conclusion: Quantitative MRI water/fat imaging can potentially be used to extract relative tissue density. Further in-vivo validation is required.

  14. SU-G-IeP1-06: Estimating Relative Tissue Density From Quantitative MR Images: A Novel Perspective for MRI-Only Heterogeneity Corrected Dose Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Soliman, A; Hashemi, M; Safigholi, H [Sunnybrook Research Institute, Toronto, ON (Canada); Sunnybrook Health Sciences Centre, Toronto, ON (Canada); Tchistiakova, E [Sunnybrook Health Sciences Centre, Toronto, ON (Canada); University of Toronto, Toronto, ON (Canada); Song, W [Sunnybrook Research Institute, Toronto, ON (Canada); Sunnybrook Health Sciences Centre, Toronto, ON (Canada); University of Toronto, Toronto, ON (Canada)

    2016-06-15

    Purpose: To explore the feasibility of extracting relative density from quantitative MRI measurements and to estimate a correlation between the extracted measures and CT Hounsfield units. Methods: MRI has the ability to separate water and fat signals, producing two separate images for each component. By performing appropriate corrections on the separated images, quantitative measurements of water and fat mass density can be estimated. This work aims to test this hypothesis at 1.5T. Peanut oil was used as fat-representative and agar as water-representative. Gadolinium chloride III and sodium chloride were added to the agar solution to adjust the relaxation times and the medium conductivity, respectively. Peanut oil was added to the agar solution at different percentages: 0%, 3%, 5%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90% and 100%. The phantom was scanned on a 1.5T GE Optima 450W with the body coil using a multi-gradient echo sequence. Water/fat separation was performed while correcting for main field (B0) inhomogeneity and T2* relaxation time. B1+ inhomogeneities were ignored. The phantom was subsequently scanned on a Philips Brilliance CT Big Bore. MR-corrected fat signals from all vials were normalized to the 100% fat signal. CT Hounsfield values were then compared to those obtained from the normalized MR-corrected fat values as well as to the phantom composition for validation. Results: Good agreement was found between CT HU and the MR-extracted fat values (R² = 0.98). CT HU also showed excellent agreement with the prepared fat fractions (R² = 0.99). Vials with 70%, 80%, and 90% fat showed inhomogeneous distributions; however, their results were included for completeness. Conclusion: Quantitative MRI water/fat imaging can potentially be used to extract relative tissue density. Further in-vivo validation is required.

  15. Quantitative estimates of coral reef substrate and species type derived objectively from photographic images taken at twenty-eight sites in the Hawaiian islands, 2002-2004 (NODC Accession 0002313)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of CRAMP surveys taken in 2002-2004 and includes quantitative estimates of substrate and species type. From the data percent coverage of a...

  16. An attempt and significance of using scandium (Sc) indication for quantitative estimation of soil ingested by pastured cattle

    International Nuclear Information System (INIS)

    Koyama, Takeo; Sudo, Madoka; Miyamoto, Susumu; Kikuchi, Takeaki; Takahashi, Masayoshi; Kuma, Tadashi.

    1985-01-01

    Pastured beef cattle constantly ingest soil together with grass, and dried grass and silage used in winter also contain some soil. Sc occurs in soil in much greater amounts than in grass, is not absorbed by the digestive tract, and its content can be determined accurately by activation analysis. In view of this, a technique was devised which uses Sc as an indicator for estimating the amount of soil ingested by cattle, and this new method was found to be better than the conventional one based on Ti. Accordingly, dung was collected from the same cattle at the end of the pastured and housed periods; the samples were dried, ground, activated and analysed. On the basis of this analysis, the amount of soil ingested at the end of the pastured and housed periods is estimated at 106 ± 120 and 129 ± 171 g/day, respectively, which broadly agrees with previously reported values. An evaluation of the amounts of Se and Zn taken in by cattle from soil is also carried out. (Nogami, K.)

  17. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    Directory of Open Access Journals (Sweden)

    Philip J Kellman

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert

  18. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    Science.gov (United States)

    Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and

  19. Health care providers' perceptions of and attitudes towards induced abortions in sub-Saharan Africa and Southeast Asia : a systematic literature review of qualitative and quantitative data.

    OpenAIRE

    Rehnström Loi, Ulrika; Gemzell-Danielsson, Kristina; Faxelid, Elisabeth; Klingberg-Allvin, Marie

    2015-01-01

    Background Unsafe abortions are a serious public health problem and a major human rights issue. In low-income countries, where restrictive abortion laws are common, safe abortion care is not always available to women in need. Health care providers have an important role in the provision of abortion services. However, the shortage of health care providers in low-income countries is critical and exacerbated by the unwillingness of some health care providers to provide abortion services. The aim...

  20. Estimation of pulmonary artery pressure in patients with primary pulmonary hypertension by quantitative analysis of magnetic resonance images.

    Science.gov (United States)

    Murray, T I; Boxt, L M; Katz, J; Reagan, K; Barst, R J

    1994-01-01

    The use of magnetic resonance (MR) images for estimating mean pulmonary artery pressure (PAP) was tested by comparing main pulmonary artery (MPA) and mid-descending thoracic aorta (AO) caliber in 12 patients with primary pulmonary hypertension (PPH) with measurements made in eight patients observed for diseases other than heart disease (controls). The ratio MPA/AO and the vessel calibers normalized to body surface area (MPAI and AOI, respectively) were computed. PAP was obtained in all PPH patients and compared with the caliber measurements. The PPH MPA (3.6 ± 0.8 cm) was significantly larger than the control MPA (2.9 ± 0.3 cm, p = 0.02); the PPH MPAI (2.8 ± 0.7 cm/m2) was significantly greater than the control MPAI (1.7 ± 0.2 cm/m2, p < 0.0001). Control AO (2.2 ± 0.3 cm) was significantly greater than PPH AO (1.6 ± 0.4 cm, p < 0.0001); there was no significant difference between control AOI (1.3 ± 0.2 cm/m2) and PPH AOI (1.2 ± 0.2 cm/m2, p = 0.25). The PPH MPA/AO (2.3 ± 0.6) was significantly greater than the control MPA/AO (1.3 ± 0.1, p < 0.0001); the overlap between MPA values in the two groups was eliminated by indexing to AO caliber (MPA/AO). Among PPH patients there was a strong correlation between PAP and MPA/AO (PAP = 24 x MPA/AO + 3.7, r = 0.7, p < 0.01). An increased MPA/AO denotes the presence of pulmonary hypertension and may be used to estimate PAP.
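
    Applying the regression reported above (PAP = 24 x MPA/AO + 3.7) to illustrative caliber measurements; the input values below are hypothetical:

```python
# Estimating mean PAP from MR caliber measurements via the published regression.
mpa_cm, ao_cm = 3.6, 1.6          # main pulmonary artery and descending aorta calibers
ratio = mpa_cm / ao_cm
pap_mmHg = 24.0 * ratio + 3.7     # regression from the study (r = 0.7, p < 0.01)
print(f"MPA/AO = {ratio:.2f} -> estimated mean PAP of about {pap_mmHg:.0f} mmHg")
```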

  1. Secondary dentine as a sole parameter for age estimation: Comparison and reliability of qualitative and quantitative methods among North Western adult Indians

    Directory of Open Access Journals (Sweden)

    Jasbir Arora

    2016-06-01

    The indestructible nature of teeth against most environmental abuses makes them useful in disaster victim identification (DVI). The present study was undertaken to examine the reliability of Gustafson's qualitative method and Kedici's quantitative method of measuring secondary dentine for age estimation among North Western adult Indians. 196 (M = 85; F = 111) single-rooted teeth were collected from the Department of Oral Health Sciences, PGIMER, Chandigarh. Ground sections were prepared and the amount of secondary dentine formed was scored qualitatively according to Gustafson's 0-3 scoring system (method 1) and quantitatively following Kedici's micrometric measurement method (method 2). Of the 196 teeth, 180 samples (M = 80; F = 100) were found suitable for measuring secondary dentine following Kedici's method. The absolute mean error of age was calculated for both methodologies. Results clearly showed that in the pooled data, method 1 gave an error of ±10.4 years whereas method 2 exhibited an error of approximately ±13 years. A statistically significant difference was noted in the absolute mean error of age between the two methods of measuring secondary dentine for age estimation. Further, it was revealed that teeth extracted for periodontal reasons severely decreased the accuracy of Kedici's method; however, the disease had no effect when estimating age by Gustafson's method. No significant gender differences were noted in the absolute mean error of age by either method, which suggests that there is no need to separate data on the basis of gender.

  2. Quantitative testing of the methodology for genome size estimation in plants using flow cytometry: a case study of the Primulina genus

    Directory of Open Access Journals (Sweden)

    Jing eWang

    2015-05-01

    Flow cytometry (FCM) is a commonly used method for estimating genome size in many organisms. The use of flow cytometry in plants is influenced by endogenous fluorescence inhibitors, which may cause inaccurate estimation of genome size and thus falsify the relationship between genome size and phenotypic traits or ecological performance. Quantitative optimization of the FCM methodology minimizes such errors, yet there are few studies detailing this methodology. We selected the genus Primulina, one of the most representative and diverse genera of the Old World Gesneriaceae, to evaluate the effect of methodology on genome size determination. Our results showed that buffer choice significantly affected the genome size estimate in six of the eight species examined and altered the 2C-value (DNA content) by as much as 21.4%. The staining duration and propidium iodide (PI) concentration slightly affected the 2C-value. Our experiments showed better histogram quality when the samples were stained for 40 minutes at a PI concentration of 100 µg ml-1. The quality of the estimates was not improved by one-day incubation in the dark at 4 °C or by centrifugation. Thus, our study determined an optimum protocol for genome size measurement in Primulina: LB01 buffer supplemented with 100 µg ml-1 PI and staining for 40 minutes. This protocol also demonstrated high universality in other Gesneriaceae genera. We report the genome size of nine Gesneriaceae species for the first time. The results showed substantial genome size variation both within and among the species, with the 2C-value ranging between 1.62 and 2.71 pg. Our study highlights the necessity of optimizing the FCM methodology prior to obtaining reliable genome size estimates in a given taxon.
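
    For context, the standard FCM genome-size calculation assumed to underlie 2C-values like these scales the sample's fluorescence against an internal reference standard of known DNA content. Illustrative numbers only:

```python
# Standard flow-cytometry genome-size calculation (hypothetical peak values).
sample_peak_mean = 48_200.0      # PI fluorescence, sample G0/G1 peak
reference_peak_mean = 61_500.0   # PI fluorescence, reference-standard peak
reference_2c_pg = 2.50           # known 2C DNA content of the standard (pg)

sample_2c_pg = reference_2c_pg * sample_peak_mean / reference_peak_mean
print(f"Estimated 2C value: {sample_2c_pg:.2f} pg")
# ~1.96 pg here, which falls within the 1.62-2.71 pg range reported above.
```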

  3. Liquid chromatography/tandem mass spectrometry method for quantitative estimation of solutol HS15 and its applications

    Directory of Open Access Journals (Sweden)

    V. Vijaya Bhaskar

    2015-04-01

    A rapid, sensitive and selective pseudoMRM (pMRM)-based method for the determination of solutol HS15 (SHS15) in rat plasma was developed using liquid chromatography/tandem mass spectrometry (LC–MS/MS). The most abundant ions corresponding to SHS15 free polyethyleneglycol (PEG) oligomers at m/z 481, 525, 569, 613, 657, 701, 745, 789, 833, 877, 921 and 965 were selected for pMRM in electrospray mode of ionization. Purity of the lipophilic and hydrophilic components of SHS15 was estimated using an evaporative light scattering detector (ELSD). Plasma concentrations of SHS15 were measured after oral administration at a 2.50 g/kg dose and intravenous administration at a 1.00 g/kg dose in male Sprague Dawley rats. SHS15 has a poor oral bioavailability of 13.74% in rats. Differences in the pharmacokinetics of the oligomers were studied. A novel proposal was conveyed to the scientific community, whereby a formulation excipient could be analyzed as a qualifier in the analysis of new chemical entities (NCEs) to address spiky plasma concentration profiles. Keywords: SHS15, LC–MS/MS, Spiky profiles, Validation

  4. Liquid chromatography/tandem mass spectrometry method for quantitative estimation of solutol HS15 and its applications.

    Science.gov (United States)

    Bhaskar, V Vijaya; Middha, Anil; Srivastava, Pratima; Rajagopal, Sriram

    2015-04-01

    A rapid, sensitive and selective pseudoMRM (pMRM)-based method for the determination of solutol HS15 (SHS15) in rat plasma was developed using liquid chromatography/tandem mass spectrometry (LC-MS/MS). The most abundant ions corresponding to SHS15 free polyethyleneglycol (PEG) oligomers at m/z 481, 525, 569, 613, 657, 701, 745, 789, 833, 877, 921 and 965 were selected for pMRM in electrospray mode of ionization. Purity of the lipophilic and hydrophilic components of SHS15 was estimated using an evaporative light scattering detector (ELSD). Plasma concentrations of SHS15 were measured after oral administration at a 2.50 g/kg dose and intravenous administration at a 1.00 g/kg dose in male Sprague Dawley rats. SHS15 has a poor oral bioavailability of 13.74% in rats. Differences in the pharmacokinetics of the oligomers were studied. A novel proposal was conveyed to the scientific community, whereby a formulation excipient could be analyzed as a qualifier in the analysis of new chemical entities (NCEs) to address spiky plasma concentration profiles.

  5. Quantitative estimation of the cost of parasitic castration in a Helisoma anceps population using a matrix population model.

    Science.gov (United States)

    Negovetich, N J; Esch, G W

    2008-10-01

    Larval trematodes frequently castrate their snail intermediate hosts. When castrated, the snails do not contribute offspring to the population, yet they persist and compete with the uninfected individuals for the available food resources. Parasitic castration should reduce the population growth rate lambda, but the magnitude of this decrease is unknown. The present study attempted to quantify the cost of parasitic castration at the level of the population by mathematically modeling the population of the planorbid snail Helisoma anceps in Charlie's Pond, North Carolina. Analysis of the model identified the life-history trait that most affects lambda, and the degree to which parasitic castration can lower lambda. A period matrix product model was constructed with estimates of fecundity, survival, growth rates, and infection probabilities calculated in a previous study. Elasticity analysis was performed by increasing the values of the life-history traits by 10% and recording the percentage change in lambda. Parasitic castration resulted in a 40% decrease in lambda of H. anceps. Analysis of the model suggests that decreasing the size at maturity was more effective at reducing the cost of castration than increasing survival or growth rates of the snails. The current matrix model was the first to mathematically describe a snail population, and the predictions of the model are in agreement with published research.
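
    The core computation in any matrix population model is the asymptotic growth rate lambda, the dominant eigenvalue of the projection matrix. A sketch with a hypothetical 3-stage matrix (not the study's period matrix product), contrasting scenarios with and without castration:

```python
# Lambda as the dominant eigenvalue of a stage-structured projection matrix.
import numpy as np

def lam(A):
    return float(max(np.linalg.eigvals(A).real))

# Stages: juvenile, uninfected adult, castrated (infected) adult.
with_castration = np.array([
    [0.0, 6.0, 0.0],   # only uninfected adults reproduce
    [0.3, 0.5, 0.0],   # 0.2 of adult survival is diverted to castration below
    [0.0, 0.2, 0.6],   # castrated adults persist but contribute no offspring
])
no_castration = np.array([
    [0.0, 6.0, 0.0],
    [0.3, 0.7, 0.0],   # the diverted survival stays in the reproducing class
    [0.0, 0.0, 0.0],
])

print(f"lambda with castration:    {lam(with_castration):.3f}")
print(f"lambda without castration: {lam(no_castration):.3f}")
# Castration diverts surviving adults into a non-reproducing class,
# lowering lambda relative to the castration-free scenario.
```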

  6. Contrast-enhanced 3T MR perfusion of musculoskeletal tumours. T1 value heterogeneity assessment and evaluation of the influence of T1 estimation methods on quantitative parameters

    Energy Technology Data Exchange (ETDEWEB)

    Gondim Teixeira, Pedro Augusto; Leplat, Christophe; Verbizier, Jacques de; Blum, Alain [Hopital Central, CHRU-Nancy, Service d' Imagerie Guilloz, Nancy (France); Chen, Bailiang; Beaumont, Marine [Universite de Lorraine, Laboratoire IADI, UMR S 947, Nancy (France); Badr, Sammy; Cotten, Anne [CHRU Lille Centre de Consultations et d' Imagerie de l' Appareil Locomoteur, Department of Radiology and Musculoskeletal Imaging, Lille (France)

    2017-12-15

    To evaluate intra-tumour and striated muscle T1 value heterogeneity and the influence of different methods of T1 estimation on the variability of quantitative perfusion parameters. Eighty-two patients with a histologically confirmed musculoskeletal tumour were prospectively included in this study and, with ethics committee approval, underwent contrast-enhanced MR perfusion and T1 mapping. T1 value variations in viable tumour areas and in normal-appearing striated muscle were assessed. In 20 cases, normal muscle perfusion parameters were calculated using three different methods: signal-based, and gadolinium-concentration-based with either fixed or variable T1 values. Tumour and normal muscle T1 values were significantly different (p = 0.0008). T1 value heterogeneity was higher in tumours than in normal muscle (variation of 19.8% versus 13%). The T1 estimation method had a considerable influence on the variability of perfusion parameters. Fixed T1 values yielded higher coefficients of variation than variable T1 values (mean 109.6 ± 41.8% and 58.3 ± 14.1%, respectively). The area under the curve was the least variable parameter (36%). T1 values in musculoskeletal tumours are significantly different from, and more heterogeneous than, those of normal muscle. Patient-specific T1 estimation is needed for direct inter-patient comparison of perfusion parameters. (orig.)

  7. Precipitation evidences on X-Band Synthetic Aperture Radar imagery: an approach for quantitative detection and estimation

    Science.gov (United States)

    Mori, Saverio; Marzano, Frank S.; Montopoli, Mario; Pulvirenti, Luca; Pierdicca, Nazzareno

    2017-04-01

    (...et al. 2014 and Mori et al. 2012); ancillary data, such as local incident angle and land cover, are used. This stage is necessary to tune the precipitation-mapping stage and to avoid severe misinterpretations in the precipitation map routines. The second stage consists of estimating the local cloud attenuation. Finally, the precipitation map is estimated using the retrieval algorithm developed by Marzano et al. (2011), applied only to pixels where rain is known to be present. Within the FP7 project EartH2Observe we have applied this methodology to 14 study cases acquired within the TSX and CSK missions over Italy and the United States. This choice allows analysing both hurricane-like intense events and continental mid-latitude precipitation, with the possibility of verifying and validating the proposed methodology against the available weather radar networks. Moreover, it allows, to some extent, analysing the contribution of orography and the quality of ancillary data (i.e. land cover). In this work we discuss the results obtained so far in terms of improved rain-cell localization and precipitation quantification.

  8. A hybrid method for the estimation of ground motion in sedimentary basins: Quantitative modelling for Mexico City

    International Nuclear Information System (INIS)

    Faeh, D.; Suhadolc, P.; Mueller, S.; Panza, G.F.

    1994-04-01

    To estimate the ground motion in two-dimensional, laterally heterogeneous, anelastic media, a hybrid technique has been developed which combines modal summation and the finite difference method. In the calculation of the local wavefield due to a seismic event, both for small and large epicentral distances, it is possible to take into account source, path, and local soil effects. As a practical application, we have simulated the ground motion in Mexico City caused by the Michoacan earthquake of September 19, 1985. By studying the one-dimensional response of the two sedimentary layers present in Mexico City, it is possible to explain the difference in amplitudes observed between records for receivers inside and outside the lake-bed zone. These simple models show that the sedimentary cover produces the concentration of high-frequency waves (0.2-0.5 Hz) on the horizontal components of motion. The large-amplitude coda of ground motion observed inside the lake-bed zone, and the spectral ratios between signals observed inside and outside the lake-bed zone, can only be explained by two-dimensional models of the sedimentary basin. In such models, the ground motion is mainly controlled by the response of the uppermost clay layer. The synthetic signals explain the major characteristics (relative amplitudes, spectral ratios, and frequency content) of the observed ground motion. The large-amplitude coda of the ground motion observed in the lake-bed zone can be explained by resonance effects and the excitation of local surface waves in the laterally heterogeneous clay layer. Also, for the 1985 Michoacan event, the energy contributions of the three subevents are important for explaining the observed durations. (author). 39 refs, 15 figs, 1 tab
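
    As a much simpler, hedged companion to the hybrid 2D modelling above, the sketch below computes the classic 1D SH-wave amplification of a single soft layer over a stiffer halfspace. The layer properties are illustrative assumptions, not the study's Mexico City model, but they show how a soft clay layer concentrates energy near its resonance frequency in the 0.2-0.5 Hz band discussed above.

```python
import numpy as np

# Single elastic layer over a halfspace: |F(f)| = 1/sqrt(cos^2(kH) +
# (alpha*sin(kH))^2), with alpha the impedance ratio. Assumed values:
vs_clay, rho_clay = 80.0, 1300.0    # m/s, kg/m^3 (soft lake-bed clay)
vs_base, rho_base = 800.0, 2000.0   # m/s, kg/m^3 (stiffer deposits)
H = 40.0                            # clay thickness, m

alpha = (rho_clay * vs_clay) / (rho_base * vs_base)  # impedance ratio

f = np.linspace(0.05, 1.0, 200)     # frequency band of interest, Hz
kH = 2 * np.pi * f * H / vs_clay
amplification = 1.0 / np.sqrt(np.cos(kH) ** 2 + (alpha * np.sin(kH)) ** 2)

f0 = vs_clay / (4 * H)              # fundamental resonance, Hz
print(f"fundamental frequency ~ {f0:.2f} Hz, "
      f"peak amplification ~ {amplification.max():.1f}")
```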

  9. Quantitative estimation of the pathways followed in the conversion to glycogen of glucose administered to the fasted rat

    International Nuclear Information System (INIS)

    Scofield, R.F.; Kosugi, K.; Schumann, W.C.; Kumaran, K.; Landau, B.R.

    1985-01-01

    When [6-3H,6-14C]glucose was given in glucose loads to fasted rats, the average 3H/14C ratios in the glycogens deposited in their livers, relative to that in the glucoses administered, were 0.85 and 0.88. When [3-3H,3-14C]lactate was given in trace quantity along with unlabeled glucose loads, the average 3H/14C ratio in the glycogens deposited was 0.08. This indicates that a major fraction of the carbons of the glucose loads was converted to liver glycogen without first being converted to lactate. When [3-3H,6-14C]glucose was given in glucose loads, the 3H/14C ratios in the glycogens deposited averaged 0.44. This indicates that a significant amount of H bound to C-3, but not C-6, of glucose is removed within the liver in the conversion of the carbons of the glucose to glycogen. This can occur in the pentose cycle and by cycling of glucose-6-P via triose phosphates. The contributions of these pathways were estimated by giving glucose loads labeled with [1-14C]glucose, [2-14C]glucose, [5-14C]glucose, and [6-14C]glucose and degrading the glucoses obtained by hydrolyzing the deposited glycogens. Between 4 and 9% of the glucose utilized by the liver was metabolized in the pentose cycle. While these are relatively small percentages, a major portion of the difference between the ratios obtained with [3-3H]glucose and with [6-3H]glucose is attributable to metabolism in the pentose cycle.

  10. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods

    Science.gov (United States)

    He, Bin; Frey, Eric C.

    2010-06-01

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is the accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations were
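
    The shift experiment described above is easy to emulate: move a reconstructed volume by sub-voxel amounts with interpolation and compare the activity summed inside a fixed VOI against the unshifted value. The phantom volume and cubic VOI below are synthetic stand-ins, not the NCAT setup.

```python
import numpy as np
from scipy.ndimage import shift

# Synthetic "reconstruction": Poisson counts in a 64^3 volume.
rng = np.random.default_rng(0)
volume = rng.poisson(5.0, size=(64, 64, 64)).astype(float)

voi = np.zeros(volume.shape, dtype=bool)
voi[20:40, 20:40, 20:40] = True            # a cubic "organ" VOI

reference = volume[voi].sum()
for dx in np.arange(-1.0, 1.01, 0.1):       # -1 to 1 voxels, 0.1 steps
    shifted = shift(volume, (dx, 0.0, 0.0), order=1)   # linear interp.
    err = 100 * (shifted[voi].sum() - reference) / reference
    print(f"shift {dx:+.1f} voxels: activity error {err:+.2f}%")
```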

  11. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods

    Energy Technology Data Exchange (ETDEWEB)

    He Bin [Division of Nuclear Medicine, Department of Radiology, New York Presbyterian Hospital-Weill Medical College of Cornell University, New York, NY 10021 (United States); Frey, Eric C, E-mail: bih2006@med.cornell.ed, E-mail: efrey1@jhmi.ed [Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutions, Baltimore, MD 21287-0859 (United States)

    2010-06-21

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is the accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations

  12. Quantitative Analysis of the Usage of a Pedagogical Tool Combining Questions Listed as Learning Objectives and Answers Provided as Online Videos

    Directory of Open Access Journals (Sweden)

    Odette Laneuville

    2015-05-01

    To improve the learning of basic concepts in molecular biology in an undergraduate science class, a pedagogical tool was developed, consisting of learning objectives listed at the end of each lecture and answers to those objectives made available as videos online. The aim of this study was to determine if the pedagogical tool was used by students as instructed, and to explore students' perception of its usefulness. A combination of quantitative survey data and measures of online viewing was used to evaluate the usage of the pedagogical practice. A total of 77 short videos linked to 11 lectures were made available to 71 students, and 64 completed the survey. Using online tracking tools, a total of 7046 views were recorded. Survey data indicated that most students (73.4%) accessed all videos, and the majority (98.4%) found the videos to be useful in assisting their learning. Interestingly, approximately half of the students (53.1%) always or most of the time used the pedagogical tool as recommended, and consistently answered the learning objectives before watching the videos. While the proposed pedagogical tool was used by the majority of students outside the classroom, only half used it as recommended, limiting its impact on students' involvement in learning the material presented in class.

  13. Quantitative Proteome Analysis of Mouse Liver Lysosomes Provides Evidence for Mannose 6-phosphate-independent Targeting Mechanisms of Acid Hydrolases in Mucolipidosis II.

    Science.gov (United States)

    Markmann, Sandra; Krambeck, Svenja; Hughes, Christopher J; Mirzaian, Mina; Aerts, Johannes M F G; Saftig, Paul; Schweizer, Michaela; Vissers, Johannes P C; Braulke, Thomas; Damme, Markus

    2017-03-01

    The efficient receptor-mediated targeting of soluble lysosomal proteins to lysosomes requires the modification with mannose 6-phosphate (M6P) residues. Although the absence of M6P results in misrouting and hypersecretion of lysosomal enzymes in many cells, normal levels of lysosomal enzymes have been reported in the liver of patients lacking the M6P-generating phosphotransferase (PT). The identity of the lysosomal proteins that depend on M6P has not yet been comprehensively analyzed. In this study, we purified lysosomes from the liver of PT-defective mice and identified 67 known soluble lysosomal proteins whose quantitative changes were characterized using an ion mobility-assisted, data-independent, label-free LC-MS approach. After validation of various differentially expressed lysosomal components by Western blotting and enzyme activity assays, the data revealed a small number of lysosomal proteins depending on M6P, including neuraminidase 1, cathepsin F, Npc2, and cathepsin L, whereas the majority reach lysosomes by alternative pathways. These data were compared with findings on cultured hepatocytes and liver sinusoid endothelial cells isolated from the liver of wild-type and PT-defective mice. Our findings show that the relative expression, targeting efficiency and lysosomal localization of lysosomal proteins tested in cultured hepatic cells resemble their proportion in isolated liver lysosomes. Hypersecretion of newly synthesized nonphosphorylated lysosomal proteins suggests that secretion-recapture mechanisms contribute to maintaining major lysosomal functions in the liver. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.

  14. Quantitative Estimation of Yeast on Maxillary Denture in Patients with Denture Stomatitis and the Effect of Chlorhexidine Gluconate in Reduction of Yeast

    Directory of Open Access Journals (Sweden)

    Jaykumar R Gade

    2011-01-01

    Denture stomatitis is a condition associated with the wearing of a denture. Predisposing factors leading to denture stomatitis include poor oral hygiene, an ill-fitting denture, and relief areas. Around 30 patients with denture stomatitis were advised to rinse with chlorhexidine gluconate mouthwash for 14 days and were directed to immerse the upper denture in the chlorhexidine solution for 8 hours. The samples were collected by scraping the maxillary denture in saline at three intervals: prior to treatment, at the end of 24 hours, and after 14 days of treatment. The samples were then inoculated, and quantitative estimation of yeast growth on Sabouraud's dextrose agar plates was performed. It was observed that after a period of 14 days, there was a reduction in the growth of yeast and also an improvement in the clinical picture of the oral mucosa.

  15. QUANTITATIVE ESTIMATION OF SOIL EROSION IN THE DRĂGAN RIVER WATERSHED WITH THE U.S.L.E. TYPE ROMSEM MODEL

    Directory of Open Access Journals (Sweden)

    Csaba HORVÁTH

    2008-05-01

    Quantitative estimation of soil erosion in the Drăgan river watershed with the U.S.L.E. type ROMSEM model. Sediment delivered from water erosion causes substantial waterway damage and water quality degradation. A number of factors, such as drainage area size, basin slope, climate, and land use/land cover, may affect sediment delivery processes. The goal of this study is to define a computationally efficient, suitable soil erosion model for the Drăgan river watershed for future sedimentation studies. A Geographic Information System (GIS) is used to determine the Universal Soil Loss Equation model (U.S.L.E.) values for the studied water basin. The methods and approaches used in this study are expected to be applicable in future research and to watersheds in other regions.
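
    Since ROMSEM is a U.S.L.E.-type model, the underlying computation is the familiar product of six factors, A = R·K·L·S·C·P. A minimal sketch with purely illustrative factor values (not the Drăgan watershed parameters):

```python
# Universal Soil Loss Equation: average annual soil loss per unit area.
# All factor values below are illustrative assumptions.
R = 80.0          # rainfall-runoff erosivity
K = 0.30          # soil erodibility
L, S = 1.2, 1.5   # slope length and steepness factors
C = 0.10          # cover-management factor
P = 1.0           # support-practice factor (none)

A = R * K * L * S * C * P
print(f"estimated soil loss A = {A:.1f} (units implied by R and K)")
```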

  16. Processing of next generation weather radar-multisensor precipitation estimates and quantitative precipitation forecast data for the DuPage County streamflow simulation system

    Science.gov (United States)

    Bera, Maitreyee; Ortel, Terry W.

    2018-01-12

    The U.S. Geological Survey, in cooperation with DuPage County Stormwater Management Department, is testing a near real-time streamflow simulation system that assists in the management and operation of reservoirs and other flood-control structures in the Salt Creek and West Branch DuPage River drainage basins in DuPage County, Illinois. As part of this effort, the U.S. Geological Survey maintains a database of hourly meteorological and hydrologic data for use in this near real-time streamflow simulation system. Among these data are next generation weather radar-multisensor precipitation estimates and quantitative precipitation forecast data, which are retrieved from the North Central River Forecasting Center of the National Weather Service. The DuPage County streamflow simulation system uses these quantitative precipitation forecast data to create streamflow predictions for the two simulated drainage basins. This report discusses in detail how these data are processed for inclusion in the Watershed Data Management files used in the streamflow simulation system for the Salt Creek and West Branch DuPage River drainage basins.

  17. Ecosystem services - from assessements of estimations to quantitative, validated, high-resolution, continental-scale mapping via airborne LIDAR

    Science.gov (United States)

    Zlinszky, András; Pfeifer, Norbert

    2016-04-01

    service potential" which is the ability of the local ecosystem to deliver various functions (water retention, carbon storage etc.), but can't quantify how much of these are actually used by humans or what the estimated monetary value is. Due to its ability to measure both terrain relief and vegetation structure in high resolution, airborne LIDAR supports direct quantification of the properties of an ecosystem that lead to it delivering a given service (such as biomass, water retention, micro-climate regulation or habitat diversity). In addition, its high resolution allows direct calibration with field measurements: routine harvesting-based ecological measurements, local biodiversity indicator surveys or microclimate recordings all take place at the human scale and can be directly linked to the local value of LIDAR-based indicators at meter resolution. Therefore, if some field measurements with standard ecological methods are performed on site, the accuracy of LIDAR-based ecosystem service indicators can be rigorously validated. With this conceptual and technical approach high resolution ecosystem service assessments can be made with well established credibility. These would consolidate the concept of ecosystem services and support both scientific research and evidence-based environmental policy at local and - as data coverage is continually increasing - continental scale.

  18. Contrast-enhanced 3T MR Perfusion of Musculoskeletal Tumours: T1 Value Heterogeneity Assessment and Evaluation of the Influence of T1 Estimation Methods on Quantitative Parameters.

    Science.gov (United States)

    Gondim Teixeira, Pedro Augusto; Leplat, Christophe; Chen, Bailiang; De Verbizier, Jacques; Beaumont, Marine; Badr, Sammy; Cotten, Anne; Blum, Alain

    2017-12-01

    To evaluate intra-tumour and striated muscle T1 value heterogeneity and the influence of different methods of T1 estimation on the variability of quantitative perfusion parameters. Eighty-two patients with a histologically confirmed musculoskeletal tumour were prospectively included in this study and, with ethics committee approval, underwent contrast-enhanced MR perfusion and T1 mapping. T1 value variations in viable tumour areas and in normal-appearing striated muscle were assessed. In 20 cases, normal muscle perfusion parameters were calculated using three different methods: signal-based, and gadolinium-concentration-based with either fixed or variable T1 values. Tumour and normal muscle T1 values were significantly different (p = 0.0008). T1 value heterogeneity was higher in tumours than in normal muscle (variation of 19.8% versus 13%). The T1 estimation method had a considerable influence on the variability of perfusion parameters. Fixed T1 values yielded higher coefficients of variation than variable T1 values (mean 109.6 ± 41.8% and 58.3 ± 14.1%, respectively). Area under the curve was the least variable parameter (36%). T1 values in musculoskeletal tumours are significantly different from, and more heterogeneous than, those of normal muscle. Patient-specific T1 estimation is needed for direct inter-patient comparison of perfusion parameters. • T1 value variation in musculoskeletal tumours is considerable. • T1 values in muscle and tumours are significantly different. • Patient-specific T1 estimation is needed for inter-patient comparison of perfusion parameters. • Technical variation is higher in permeability parameters than in semiquantitative perfusion parameters.

  19. Weight and its relationship to adolescent perceptions of their providers (WRAP): a qualitative and quantitative assessment of teen weight-related preferences and concerns.

    Science.gov (United States)

    Cohen, Marc L; Tanofsky-Kraff, Marian; Young-Hyman, Deborah; Yanovski, Jack A

    2005-08-01

    To examine the relationship of body weight to satisfaction with care in adolescents, and to obtain qualitative data on preferences for general and weight-related medical care in normal weight and overweight adolescents. The Weight and its Relationship to Adolescent Perceptions of their Providers survey, a 4-page questionnaire containing previously validated satisfaction scales and open-ended qualitative questions regarding health care preferences, was administered to 62 severely overweight (body mass index [BMI] 38.9 +/- 8.4 kg/m2) and 29 normal weight (BMI 22.5 +/- 4.0 kg/m2) adolescents (age 13.9 +/- 1.7 years; 57% female; 50% Caucasian, 47% African-American, 3% Hispanic). The affective subscale of the medical satisfaction scale was negatively correlated with BMI standard deviation score (r = -.22, p teens. Seventy-nine percent of overweight adolescents stated their health care provider discussed their weight with them; however, only 41% of overweight adolescents desired to discuss their weight. Compared to normal-weight adolescents, overweight teens were more likely to report that their provider raised topics of weight (p teens expressed concerns regarding the public location of their provider's office scale. Satisfaction with affective aspects of the provider-patient relationship is negatively correlated with BMI standard deviation score. Length of experience with one's provider is also a strong predictor of teen satisfaction with their medical care. Teens prefer the term "overweight" for those with high body weight. Sensitivity to confidentiality, privacy, and embarrassment regarding physical examination and weight are important for teen satisfaction.

  20. A bottom-up approach in estimating the measurement uncertainty and other important considerations for quantitative analyses in drug testing for horses.

    Science.gov (United States)

    Leung, Gary N W; Ho, Emmie N M; Kwok, W Him; Leung, David K K; Tang, Francis P W; Wan, Terence S M; Wong, April S Y; Wong, Colton H F; Wong, Jenny K Y; Yu, Nola H

    2007-09-07

    Quantitative determination, particularly for threshold substances in biological samples, is much more demanding than qualitative identification. A proper assessment of any quantitative determination is the measurement uncertainty (MU) associated with the determined value. The International Standard ISO/IEC 17025, "General requirements for the competence of testing and calibration laboratories", has more prescriptive requirements on the MU than the document it superseded, ISO/IEC Guide 25. Under the 2005 or 1999 versions of the new standard, an estimation of the MU is mandatory for all quantitative determinations. To comply with the new requirement, a protocol was established in the authors' laboratory in 2001. The protocol has since evolved based on our practical experience, and a refined version was adopted in 2004. This paper describes our approach to establishing the MU, as well as some other important considerations, for the quantification of threshold substances in biological samples as applied in the area of doping control for horses. The testing of threshold substances can be viewed as a compliance test (or testing to a specified limit). As such, it should only be necessary to establish the MU at the threshold level. The steps in the "Bottom-Up" approach adopted by us are similar to those described in the EURACHEM/CITAC guide, "Quantifying Uncertainty in Analytical Measurement". They involve first specifying the measurand, including the relationship between the measurand and the input quantities upon which it depends. This is followed by identifying all applicable uncertainty contributions using a "cause and effect" diagram. The magnitude of each uncertainty component is then calculated and converted to a standard uncertainty. A recovery study is also conducted to determine if the method bias is significant and whether a recovery (or correction) factor needs to be applied. All standard uncertainties with values greater than 30% of the largest one are then used to
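
    The arithmetic behind such a bottom-up budget is compact enough to sketch: individual standard uncertainties are combined in quadrature and expanded with a coverage factor (k = 2 for roughly 95% confidence), and, following the screening rule mentioned above, only components larger than 30% of the biggest contributor are carried into the combination. The component names and values below are illustrative assumptions.

```python
import math

# Hypothetical relative standard uncertainties (%) from a budget.
components = {
    "calibration standard": 1.5,
    "recovery": 2.0,
    "repeatability": 2.5,
    "volume/weighing": 0.8,
}

# Screening rule from the text: keep components > 30% of the largest.
largest = max(components.values())
significant = {k: u for k, u in components.items() if u > 0.3 * largest}

# Combine in quadrature, then expand with coverage factor k = 2.
u_combined = math.sqrt(sum(u ** 2 for u in significant.values()))
U_expanded = 2.0 * u_combined

print(f"combined standard uncertainty: {u_combined:.2f}%")
print(f"expanded uncertainty (k=2):    {U_expanded:.2f}%")
```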

  1. Weight and its relationship to adolescent perceptions of their providers (WRAP): A qualitative and quantitative assessment of teen weight-related preferences and concerns

    Science.gov (United States)

    Cohen, Marc L.; Tanofsky-Kraff, Marian; Young-Hyman, Deborah; Yanovski, Jack A.

    2008-01-01

    Purpose To examine the relationship of body weight to satisfaction with care in adolescents, and to obtain qualitative data on preferences for general and weight-related medical care in normal weight and overweight adolescents. Methods The Weight and its Relationship to Adolescent Perceptions of their Providers survey, a 4-page questionnaire containing previously validated satisfaction scales and open-ended qualitative questions regarding health care preferences, was administered to 62 severely overweight (body mass index [BMI] 38.9 ± 8.4 kg/m2) and 29 normal weight (BMI 22.5 ± 4.0 kg/m2) adolescents (age 13.9 ± 1.7 years; 57% female; 50% Caucasian, 47% African-American, 3% Hispanic). Results The affective subscale of the medical satisfaction scale was negatively correlated with BMI standard deviation score (r = −.22, p teens. Seventy-nine percent of overweight adolescents stated their health care provider discussed their weight with them; however, only 41% of overweight adolescents desired to discuss their weight. Compared to normal-weight adolescents, overweight teens were more likely to report that their provider raised topics of weight (p teens expressed concerns regarding the public location of their provider’s office scale. Conclusions Satisfaction with affective aspects of the provider-patient relationship is negatively correlated with BMI standard deviation score. Length of experience with one’s provider is also a strong predictor of teen satisfaction with their medical care. Teens prefer the term “overweight” for those with high body weight. Sensitivity to confidentiality, privacy, and embarrassment regarding physical examination and weight are important for teen satisfaction. PMID:16026727

  2. Novel approach for quantitatively estimating element retention and material balances in soil profiles of recharge basins used for wastewater reclamation

    Energy Technology Data Exchange (ETDEWEB)

    Eshel, Gil, E-mail: eshelgil@gmail.com [Soil Erosion Research Station, Ministry of Agriculture and Rural Development, HaMaccabim Road, Rishon-Lezion. P.O.B. 30, Beit-Dagan, 50250 (Israel); Lin, Chunye [School of Environment, Beijing Normal University, 19 Xinjiekouwaidajie St., Beijing, 100875 (China); Banin, Amos [Department of Soil and Water Sciences, Faculty of Agricultural, Food and Environmental Quality Sciences, The Hebrew University of Jerusalem, P.O. Box 12, Rehovot (Israel)

    2015-01-01

    We investigated changes in element content and distribution in soil profiles in a study designed to monitor the geochemical changes accruing in soil due to long-term secondary effluent recharge, and its impact on the sustainability of the Soil Aquifer Treatment (SAT) system. Since the initial elemental contents of the soils at the studied site were not available, we reconstructed them using scandium (Sc) as a conservative tracer. By using this approach, we were able to produce a mass balance for 18 elements and evaluate the geochemical changes resulting from 19 years of effluent recharge. This approach also provides a better understanding of the role of soils as an adsorption filter for the heavy metals contained in the effluent. The soil mass balance suggests that 19 years of effluent recharge caused significant enrichment of Cu, Cr, Ni, Zn, Mg, K, Na, S and P contents in the upper 4 m of the soil profile. Combining the element load records over the 19 years suggests that Cr, Ni, and P inputs may not reach the groundwater (20 m deep), whereas the other elements may. Conversely, we found that 58, 60, and 30% of the initial content of Mn, Ca and Co, respectively, leached from the upper 2 m of the soil profile. These high percentages of Mn and Ca depletion from the basin soils may reduce the soil's ability to buffer decreases in redox potential (pe) and pH, respectively, which could initiate a reduction in the soil's holding capacity for heavy metals. - Highlights: • Sc proved a reliable tracer for reconstructing the initial soil elemental contents. • A mass balance for 18 elements resulting from 19 years of SAT operation is presented. • After 19 years of operation, Cr, Ni, and P inputs may not reach the groundwater. • The inputs of the other 15 elements may reach the groundwater. • 58, 60, and 30% of the initial soil contents of Mn, Ca, and Co, respectively, leached from the upper 2 m.
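
    The conservative-tracer reconstruction lends itself to a one-line mass-balance formula, often written as the open-system mass transport fraction tau: assuming Sc is immobile, the fractional gain or loss of element j follows from its concentration change relative to the tracer. A minimal sketch with purely illustrative concentrations (the example output of about -0.58 merely mirrors the scale of the reported ~58% Mn loss, it is not the study's data):

```python
# tau_j = (C_j,soil / C_j,initial) * (Sc_initial / Sc_soil) - 1
# tau < 0 means net loss of element j; tau > 0 means net enrichment.
def tau(c_j_soil, c_j_init, sc_soil, sc_init):
    return (c_j_soil / c_j_init) * (sc_init / sc_soil) - 1.0

# Hypothetical concentrations in mg/kg (Sc unchanged for simplicity):
print(f"tau_Mn = {tau(250.0, 600.0, 12.0, 12.0):+.2f}")  # ~ -0.58 => ~58% loss
```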

  3. Inter- and intra-observer agreement of BI-RADS-based subjective visual estimation of amount of fibroglandular breast tissue with magnetic resonance imaging: comparison to automated quantitative assessment

    International Nuclear Information System (INIS)

    Wengert, G.J.; Helbich, T.H.; Woitek, R.; Kapetas, P.; Clauser, P.; Baltzer, P.A.; Vogl, W.D.; Weber, M.; Meyer-Baese, A.; Pinker, Katja

    2016-01-01

    To evaluate the inter-/intra-observer agreement of BI-RADS-based subjective visual estimation of the amount of fibroglandular tissue (FGT) with magnetic resonance imaging (MRI), and to investigate whether FGT assessment benefits from an automated, observer-independent, quantitative MRI measurement by comparing both approaches. Eighty women with no imaging abnormalities (BI-RADS 1 and 2) were included in this institutional review board (IRB)-approved prospective study. All women underwent un-enhanced breast MRI. Four radiologists independently assessed FGT with MRI by subjective visual estimation according to BI-RADS. Automated observer-independent quantitative measurement of FGT with MRI was performed using a previously described measurement system. Inter-/intra-observer agreements of qualitative and quantitative FGT measurements were assessed using Cohen's kappa (k). Inexperienced readers achieved moderate inter-/intra-observer agreement and experienced readers a substantial inter- and perfect intra-observer agreement for subjective visual estimation of FGT. Practice and experience reduced observer-dependency. Automated observer-independent quantitative measurement of FGT was successfully performed and revealed only fair to moderate agreement (k = 0.209-0.497) with subjective visual estimations of FGT. Subjective visual estimation of FGT with MRI shows moderate intra-/inter-observer agreement, which can be improved by practice and experience. Automated observer-independent quantitative measurements of FGT are necessary to allow a standardized risk evaluation. (orig.)
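
    The agreement statistic used in this record, Cohen's kappa, corrects the observed agreement between two readers for the agreement expected by chance: k = (p_o - p_e)/(1 - p_e). A minimal sketch with hypothetical four-category (BI-RADS-style a-d) ratings from two readers:

```python
from collections import Counter

# Hypothetical category assignments by two independent readers.
reader1 = list("aabbccddabcdbbcc")
reader2 = list("aabbccdcabcdbccc")

n = len(reader1)
p_o = sum(r1 == r2 for r1, r2 in zip(reader1, reader2)) / n  # observed
c1, c2 = Counter(reader1), Counter(reader2)
p_e = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n ** 2  # by chance

kappa = (p_o - p_e) / (1 - p_e)
print(f"observed {p_o:.2f}, chance {p_e:.2f}, kappa = {kappa:.2f}")
```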

  4. Inter- and intra-observer agreement of BI-RADS-based subjective visual estimation of amount of fibroglandular breast tissue with magnetic resonance imaging: comparison to automated quantitative assessment

    Energy Technology Data Exchange (ETDEWEB)

    Wengert, G.J.; Helbich, T.H.; Woitek, R.; Kapetas, P.; Clauser, P.; Baltzer, P.A. [Medical University of Vienna/ Vienna General Hospital, Department of Biomedical Imaging and Image-guided Therapy, Division of Molecular and Gender Imaging, Vienna (Austria); Vogl, W.D. [Medical University of Vienna, Department of Biomedical Imaging and Image-guided Therapy, Computational Imaging Research Lab, Wien (Austria); Weber, M. [Medical University of Vienna, Department of Biomedical Imaging and Image-guided Therapy, Division of General and Pediatric Radiology, Wien (Austria); Meyer-Baese, A. [State University of Florida, Department of Scientific Computing in Medicine, Tallahassee, FL (United States); Pinker, Katja [Medical University of Vienna/ Vienna General Hospital, Department of Biomedical Imaging and Image-guided Therapy, Division of Molecular and Gender Imaging, Vienna (Austria); State University of Florida, Department of Scientific Computing in Medicine, Tallahassee, FL (United States); Memorial Sloan-Kettering Cancer Center, Department of Radiology, Molecular Imaging and Therapy Services, New York City, NY (United States)

    2016-11-15

    To evaluate the inter-/intra-observer agreement of BI-RADS-based subjective visual estimation of the amount of fibroglandular tissue (FGT) with magnetic resonance imaging (MRI), and to investigate whether FGT assessment benefits from an automated, observer-independent, quantitative MRI measurement by comparing both approaches. Eighty women with no imaging abnormalities (BI-RADS 1 and 2) were included in this institutional review board (IRB)-approved prospective study. All women underwent un-enhanced breast MRI. Four radiologists independently assessed FGT with MRI by subjective visual estimation according to BI-RADS. Automated observer-independent quantitative measurement of FGT with MRI was performed using a previously described measurement system. Inter-/intra-observer agreements of qualitative and quantitative FGT measurements were assessed using Cohen's kappa (k). Inexperienced readers achieved moderate inter-/intra-observer agreement and experienced readers a substantial inter- and perfect intra-observer agreement for subjective visual estimation of FGT. Practice and experience reduced observer-dependency. Automated observer-independent quantitative measurement of FGT was successfully performed and revealed only fair to moderate agreement (k = 0.209-0.497) with subjective visual estimations of FGT. Subjective visual estimation of FGT with MRI shows moderate intra-/inter-observer agreement, which can be improved by practice and experience. Automated observer-independent quantitative measurements of FGT are necessary to allow a standardized risk evaluation. (orig.)

  5. Quantitative skeletal maturation estimation using cone-beam computed tomography-generated cervical vertebral images: a pilot study in 5- to 18-year-old Japanese children.

    Science.gov (United States)

    Byun, Bo-Ram; Kim, Yong-Il; Yamaguchi, Tetsutaro; Maki, Koutaro; Ko, Ching-Chang; Hwang, Dae-Seok; Park, Soo-Byung; Son, Woo-Sung

    2015-11-01

    The purpose of this study was to establish multivariable regression models for the estimation of skeletal maturation status in Japanese boys and girls using the cone-beam computed tomography (CBCT)-based cervical vertebral maturation (CVM) assessment method and hand-wrist radiography. The analyzed sample consisted of hand-wrist radiographs and CBCT images from 47 boys and 57 girls. To quantitatively evaluate the correlation between skeletal maturation status and the measurement ratios, a CBCT-based CVM assessment method was applied to the second, third, and fourth cervical vertebrae. Pearson's correlation coefficient analysis and multivariable regression analysis were used to determine the ratios for each of the cervical vertebrae (p < 0.05). For the Japanese boys, the estimated skeletal maturation status according to the CBCT-based quantitative cervical vertebral maturation (QCVM) assessment was 5.90 + 99.11 × AH3/W3 - 14.88 × (OH2 + AH2)/W2 + 13.24 × D2; for the Japanese girls, it was 41.39 + 59.52 × AH3/W3 - 15.88 × (OH2 + PH2)/W2 + 10.93 × D2. The CBCT-generated CVM images proved very useful for defining the cervical vertebral body and the odontoid process. The newly developed CBCT-based QCVM assessment method showed a high correlation with the ratios derived from the second cervical vertebral body and odontoid process. There are high correlations between skeletal maturation status and the ratios of the second cervical vertebra based on the remnant of the dentocentral synchondrosis.
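
    Because the record reports the fitted regressions explicitly, they can be transcribed directly into code; the functions below do only that, with the ratio arguments assumed to follow the paper's abbreviations (AH = anterior height, OH/PH = odontoid/posterior heights, W = width, D2 = a second-vertebra ratio). The example input ratios are hypothetical.

```python
# Direct transcription of the reported multivariable regressions.
def qcvm_boys(ah3_w3, oh2_ah2_w2, d2):
    return 5.90 + 99.11 * ah3_w3 - 14.88 * oh2_ah2_w2 + 13.24 * d2

def qcvm_girls(ah3_w3, oh2_ph2_w2, d2):
    return 41.39 + 59.52 * ah3_w3 - 15.88 * oh2_ph2_w2 + 10.93 * d2

# Hypothetical measurement ratios, for illustration only:
print(qcvm_boys(0.55, 1.10, 0.80))
print(qcvm_girls(0.55, 1.10, 0.80))
```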

  6. Wavelet-based resolution recovery using an anatomical prior provides quantitative recovery for human population phantom PET [11C]raclopride data

    International Nuclear Information System (INIS)

    Shidahara, M; Tamura, H; Tsoumpas, C; McGinnity, C J; Hammers, A; Turkheimer, F E; Kato, T; Watabe, H

    2012-01-01

    The objective of this study was to evaluate a resolution recovery (RR) method using a variety of simulated human brain [11C]raclopride positron emission tomography (PET) images. Simulated datasets of 15 numerical human phantoms were processed by a wavelet-based RR method using an anatomical prior. The anatomical prior was in the form of a hybrid segmented atlas, which combined an atlas for anatomical labelling and a PET image for functional labelling of each anatomical structure. We applied RR to both 60 min static and dynamic PET images. Recovery was quantified in 84 regions, comparing the typical 'true' value for the simulation, as obtained in normal subjects, with the simulated and RR PET images. The radioactivity concentration in the white matter, striatum and other cortical regions was successfully recovered for the 60 min static image of all 15 human phantoms; the dependence of the solution on accurate anatomical information was demonstrated by the technique's difficulty in retrieving the subthalamic nuclei, due to a mismatch between the two atlases used for data simulation and recovery. Structural and functional synergy for resolution recovery (SFS-RR) improved quantification in the caudate and putamen, the main regions of interest, from -30.1% and -26.2% to -17.6% and -15.1%, respectively, for the 60 min static image, and from -51.4% and -38.3% to -27.6% and -20.3%, respectively, for the binding potential (BPND) image. The proposed methodology proved effective in the RR of small structures from brain [11C]raclopride PET images. The improvement is consistent across the anatomical variability of a simulated population as long as accurate anatomical segmentations are provided. (paper)

  7. MO-E-17A-04: Size-Specific Dose Estimate (SSDE) Provides a Simple Method to Calculate Organ Dose for Pediatric CT Examinations

    International Nuclear Information System (INIS)

    Moore, B; Brady, S; Kaufman, R; Mirro, A

    2014-01-01

    Purpose: Investigate the correlation of SSDE with organ dose in a pediatric population. Methods: Four anthropomorphic phantoms, representing a range of pediatric body habitus, were scanned with MOSFET dosimeters placed at 23 organ locations to determine absolute organ dosimetry. Phantom organ dosimetry was divided by phantom SSDE to determine the correlation between organ dose and SSDE. Correlation factors were then multiplied by patient SSDE to estimate patient organ dose. Patient demographics consisted of 352 chest and 241 abdominopelvic CT examinations, 22 ± 15 kg (range 5−55 kg) mean weight, and 6 ± 5 years (range 4 months to 23 years) mean age. Patient organ dose estimates were compared to published pediatric Monte Carlo study results. Results: Phantom effective diameters were matched with patient population effective diameters to within 4 cm. 23 organ correlation factors were determined in the chest and abdominopelvic region across nine pediatric weight subcategories. For organs fully covered by the scan volume, correlation in the chest (average 1.1; range 0.7−1.4) and abdominopelvic region (average 0.9; range 0.7−1.3) was near unity. For organs that extended beyond the scan volume (i.e., skin, bone marrow, and bone surface), correlation was determined to be poor (average 0.3; range 0.1−0.4) for both the chest and abdominopelvic regions. Pediatric organ dosimetry was compared to published values and was found to agree in the chest to better than an average of 5% (27.6/26.2) and in the abdominopelvic region to better than 2% (73.4/75.0). Conclusion: Average correlation of SSDE and organ dosimetry was found to be better than ± 10% for fully covered organs within the scan volume. This study provides a list of organ dose correlation factors for the chest and abdominopelvic regions, and describes a simple methodology to estimate individual pediatric patient organ dose based on patient SSDE.
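
    The method itself reduces to a lookup and a multiplication: estimated organ dose = (phantom-derived organ-to-SSDE correlation factor) × (patient SSDE). A minimal sketch with hypothetical factor values, chosen only in the spirit of the near-unity results reported for fully covered organs:

```python
# Hypothetical organ-dose / SSDE correlation factors for a chest exam.
# Partially covered organs (e.g. skin) correlate poorly, per the study.
correlation_factors_chest = {
    "lung": 1.2,
    "heart": 1.1,
    "thymus": 1.0,
    "skin": 0.3,   # extends beyond the scan volume: poor correlation
}

ssde_mGy = 4.5     # patient-specific SSDE from the CT exam (assumed)

for organ, factor in correlation_factors_chest.items():
    print(f"{organ}: estimated dose ~ {factor * ssde_mGy:.1f} mGy")
```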

  8. MO-E-17A-04: Size-Specific Dose Estimate (SSDE) Provides a Simple Method to Calculate Organ Dose for Pediatric CT Examinations

    Energy Technology Data Exchange (ETDEWEB)

    Moore, B; Brady, S; Kaufman, R [St Jude Children' s Research Hospital, Memphis, TN (United States); Mirro, A [Washington University, St. Louis, MO (United States)

    2014-06-15

    Purpose: Investigate the correlation of SSDE with organ dose in a pediatric population. Methods: Four anthropomorphic phantoms, representing a range of pediatric body habitus, were scanned with MOSFET dosimeters placed at 23 organ locations to determine absolute organ dosimetry. Phantom organ dosimetry was divided by phantom SSDE to determine the correlation between organ dose and SSDE. Correlation factors were then multiplied by patient SSDE to estimate patient organ dose. Patient demographics consisted of 352 chest and 241 abdominopelvic CT examinations, 22 ± 15 kg (range 5−55 kg) mean weight, and 6 ± 5 years (range 4 months to 23 years) mean age. Patient organ dose estimates were compared to published pediatric Monte Carlo study results. Results: Phantom effective diameters were matched with patient population effective diameters to within 4 cm. 23 organ correlation factors were determined in the chest and abdominopelvic region across nine pediatric weight subcategories. For organs fully covered by the scan volume, correlation in the chest (average 1.1; range 0.7−1.4) and abdominopelvic region (average 0.9; range 0.7−1.3) was near unity. For organs that extended beyond the scan volume (i.e., skin, bone marrow, and bone surface), correlation was determined to be poor (average 0.3; range 0.1−0.4) for both the chest and abdominopelvic regions. Pediatric organ dosimetry was compared to published values and was found to agree in the chest to better than an average of 5% (27.6/26.2) and in the abdominopelvic region to better than 2% (73.4/75.0). Conclusion: Average correlation of SSDE and organ dosimetry was found to be better than ± 10% for fully covered organs within the scan volume. This study provides a list of organ dose correlation factors for the chest and abdominopelvic regions, and describes a simple methodology to estimate individual pediatric patient organ dose based on patient SSDE.

  9. Model-based approach for quantitative estimates of skin, heart, and lung toxicity risk for left-side photon and proton irradiation after breast-conserving surgery.

    Science.gov (United States)

    Tommasino, Francesco; Durante, Marco; D'Avino, Vittoria; Liuzzi, Raffaele; Conson, Manuel; Farace, Paolo; Palma, Giuseppe; Schwarz, Marco; Cella, Laura; Pacelli, Roberto

    2017-05-01

    Proton beam therapy represents a promising modality for left-side breast cancer (BC) treatment, but concerns have been raised about skin toxicity and poor cosmesis. The aim of this study is to apply a skin normal tissue complication probability (NTCP) model to intensity modulated proton therapy (IMPT) optimization in left-side BC. Ten left-side BC patients undergoing photon irradiation after breast-conserving surgery were randomly selected from our clinical database. Intensity modulated photon (IMRT) and IMPT plans were calculated with iso-tumor-coverage criteria and according to RTOG 1005 guidelines. Proton plans were computed with and without skin optimization. Published NTCP models were employed to estimate the risk of different toxicity endpoints for the skin, lung, and heart and its substructures. Acute skin NTCP evaluation suggests a lower toxicity level with IMPT compared to IMRT when the skin is included in the proton optimization strategy (0.1% versus 1.7%, p < 0.001). Dosimetric results show that, with the same level of tumor coverage, IMPT attains significant heart and lung dose sparing compared with IMRT. By NTCP model-based analysis, an overall reduction in the predicted cardiopulmonary toxicity risk can be observed for all IMPT plans compared to IMRT plans: the relative risk reduction from protons varies between 0.1 and 0.7 depending on the considered toxicity endpoint. Our analysis suggests that IMPT might be safely applied without increasing the risk of severe acute radiation-induced skin toxicity. The quantitative risk estimates also support the potential clinical benefits of IMPT for left-side BC irradiation due to the lower risk of cardiac and pulmonary morbidity. The applied approach might be relevant in the long term for the setup of cost-effectiveness evaluation strategies based on NTCP predictions.
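
    The study relies on published organ-specific NTCP models rather than a single formula, but the widely used Lyman-Kutcher-Burman form conveys the idea: NTCP = Phi((gEUD - TD50)/(m·TD50)), with Phi the standard normal CDF. The sketch below uses illustrative parameter values, not those of the models actually applied in the paper.

```python
import math

def lkb_ntcp(gEUD, TD50, m):
    """Lyman-Kutcher-Burman NTCP via the standard normal CDF."""
    t = (gEUD - TD50) / (m * TD50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical comparison of a photon vs. proton plan for one endpoint:
print(f"IMRT: {lkb_ntcp(gEUD=30.0, TD50=39.3, m=0.14):.3f}")
print(f"IMPT: {lkb_ntcp(gEUD=24.0, TD50=39.3, m=0.14):.3f}")
```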

  10. Towards cheminformatics-based estimation of drug therapeutic index: Predicting the protective index of anticonvulsants using a new quantitative structure-index relationship approach.

    Science.gov (United States)

    Chen, Shangying; Zhang, Peng; Liu, Xin; Qin, Chu; Tao, Lin; Zhang, Cheng; Yang, Sheng Yong; Chen, Yu Zong; Chui, Wai Keung

    2016-06-01

    The overall efficacy and safety profile of a new drug is partially evaluated by the therapeutic index in clinical studies and by the protective index (PI) in preclinical studies. In-silico predictive methods may facilitate the assessment of these indicators. Although QSAR and QSTR models can be used for predicting PI, their predictive capability has not been evaluated. To test this capability, we developed QSAR and QSTR models for predicting the activity and toxicity of anticonvulsants at accuracy levels above the literature-reported threshold (LT) of good QSAR models, as tested by both internal 5-fold cross-validation and external validation. These models showed significantly compromised PI predictive capability due to the cumulative errors of the QSAR and QSTR models. Therefore, in this investigation, a new quantitative structure-index relationship (QSIR) model was devised, and it showed improved PI predictive capability that superseded the LT of good QSAR models. The QSAR, QSTR and QSIR models were developed using the support vector regression (SVR) method with parameters optimized by a greedy search. The molecular descriptors relevant to the prediction of anticonvulsant activities, toxicities and PIs were analyzed by a recursive feature elimination method. The selected molecular descriptors are primarily associated with drug-like, pharmacological and toxicological features and with those used in the published anticonvulsant QSAR and QSTR models. This study suggested that QSIR is useful for estimating the therapeutic index of drug candidates. Copyright © 2016. Published by Elsevier Inc.
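
    A hedged sketch of the modelling pattern named above: support vector regression combined with recursive feature elimination to retain the most relevant descriptors. Everything here is synthetic stand-in data (random "descriptors" and a constructed endpoint), not the study's descriptor set, and it omits the paper's greedy parameter search.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVR

# Synthetic stand-in: 120 "compounds" with 20 random descriptors and an
# endpoint that truly depends on descriptors 0 and 3 only.
rng = np.random.default_rng(42)
X = rng.normal(size=(120, 20))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.3, size=120)

# RFE needs a linear kernel so that feature weights (coef_) exist.
selector = RFE(SVR(kernel="linear", C=1.0), n_features_to_select=5)
selector.fit(X, y)
print("selected descriptor indices:", np.flatnonzero(selector.support_))
```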

  11. Estimated cost savings associated with the transfer of office-administered specialty pharmaceuticals to a specialty pharmacy provider in a Medical Injectable Drug program.

    Science.gov (United States)

    Baldini, Christopher G; Culley, Eric J

    2011-01-01

    A large managed care organization (MCO) in western Pennsylvania initiated a Medical Injectable Drug (MID) program in 2002 that transferred a specific subset of specialty drugs from physician reimbursement under the traditional "buy-and-bill" model in the medical benefit to MCO purchase from a specialty pharmacy provider (SPP) that supplied physician offices with the MIDs. The MID program was initiated with 4 drugs in 2002 (palivizumab and 3 hyaluronate products/derivatives) growing to more than 50 drugs by 2007-2008. To (a) describe the MID program as a method to manage the cost and delivery of this subset of specialty drugs, and (b) estimate the MID program cost savings in 2007 and 2008 in an MCO with approximately 4.6 million members. Cost savings generated by the MID program were calculated by comparing the total actual expenditure (plan cost plus member cost) on medications included in the MID program for calendar years 2007 and 2008 with the total estimated expenditure that would have been paid to physicians during the same time period for the same medication if reimbursement had been made using HCPCS (J code) billing under the physician "buy-and-bill" reimbursement rates. For the approximately 50 drugs in the MID program in 2007 and 2008, the drug cost savings in 2007 were estimated to be $15.5 million (18.2%) or $290 per claim ($0.28 per member per month [PMPM]) and about $13 million (12.7%) or $201 per claim ($0.23 PMPM) in 2008. Although 28% of MID claims continued to be billed by physicians using J codes in 2007 and 22% in 2008, all claims for MIDs were limited to the SPP reimbursement rates. This MID program was associated with health plan cost savings of approximately $28.5 million over 2 years, achieved by the transfer of about 50 physician-administered injectable pharmaceuticals from reimbursement to physicians to reimbursement to a single SPP and payment of physician claims for MIDs at the SPP reimbursement rates.

  12. Radar-based quantitative precipitation estimation for the identification of debris flow occurrence over earthquake-affected regions in Sichuan, China

    Science.gov (United States)

    Shi, Zhao; Wei, Fangqiang; Chandrasekar, Venkatachalam

    2018-03-01

    Both the Ms 8.0 Wenchuan earthquake on 12 May 2008 and the Ms 7.0 Lushan earthquake on 20 April 2013 occurred in the province of Sichuan, China. In the earthquake-affected mountainous area, a large amount of loose material caused a high occurrence of debris flows during the rainy season. In order to evaluate the rainfall intensity-duration (I-D) threshold of debris flows in the earthquake-affected area, and to fill the observational gaps caused by the relatively scarce and low-altitude deployment of rain gauges in this area, raw data from two S-band China New Generation Doppler Weather Radars (CINRAD) were captured for six rainfall events that triggered 519 debris flows between 2012 and 2014. Due to the challenges of radar quantitative precipitation estimation (QPE) over mountainous areas, a series of improvement measures are considered: a hybrid scan mode, a vertical reflectivity profile (VPR) correction, a mosaic of reflectivity, a merged rainfall-reflectivity (R-Z) relationship for convective and stratiform rainfall, and rainfall bias adjustment with a Kalman filter (KF). For validating rainfall accumulation over complex terrain, the study areas are divided into two kinds of regions by a height threshold of 1.5 km above the ground. Three kinds of radar rainfall estimates are compared with rain gauge measurements. It is observed that the normalized mean bias (NMB) is decreased by 39% and the fitted linear ratio between radar and rain gauge observations reaches 0.98. Furthermore, the radar-based I-D threshold derived by the frequentist method is I = 10.1D^-0.52, which is underestimated when uncorrected raw radar data are used. In order to verify the impact of spatial variation on the observations, I-D thresholds are identified from the nearest rain gauge observations and from radar observations at the rain gauge locations. It is found that both kinds of observations have similar I-D thresholds and likewise underestimate I-D thresholds due to undershooting at the core of convective cells.
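
    Two of the quantitative ingredients above are compact enough to sketch: converting reflectivity to rain rate through a power-law Z-R relationship, and testing the resulting intensity against the reported threshold I = 10.1D^-0.52. The Z-R coefficients below are the classic Marshall-Palmer values, used here as an assumption; the study itself merges separate convective and stratiform relationships.

```python
def rain_rate(dbz, a=200.0, b=1.6):
    """Invert Z = a * R**b, with Z in linear units (mm^6/m^3)."""
    z_linear = 10.0 ** (dbz / 10.0)
    return (z_linear / a) ** (1.0 / b)   # rain rate in mm/h

def exceeds_debris_flow_threshold(intensity_mm_h, duration_h):
    """Reported radar-based I-D threshold: I = 10.1 * D**-0.52."""
    return intensity_mm_h >= 10.1 * duration_h ** -0.52

I = rain_rate(45.0)                       # mm/h from a 45 dBZ echo
print(f"I = {I:.1f} mm/h, "
      f"triggers at D=2h: {exceeds_debris_flow_threshold(I, 2.0)}")
```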

  13. Quantitative estimation of massive gas hydrate in gas chimney structures, the eastern margin of Japan Sea, from the physical property anomalies obtained by LWD.

    Science.gov (United States)

    Tanahashi, M.; Morita, S.; Matsumoto, R.

    2017-12-01

    Two dedicated LWD (Logging While Drilling) cruises, GR14 and HR15, were conducted in the summers of 2014 and 2015, respectively, by Meiji University and the Geological Survey of Japan, AIST, to explore gas chimney structures, which are characterized by columnar acoustic blanking below topographic mounds and/or pockmarks in the eastern margin of the Japan Sea. Thirty-three shallow LWD holes (33 to 172 m-bsf, average 136 m-bsf) were drilled during the two cruises, generally in and around gas chimney structures in the Oki Trough, Off-Joetsu, and Mogami Trough areas of the eastern margin of the Japan Sea. The Schlumberger LWD tools GeoVISION (resistivity), TeleScope, ProVISION (NMR) and SonicVISION (sonic) were applied during GR14. NeoScope (neutron) was added and SonicVISION was replaced by SonicScope during HR15. The presence of thick, highly anomalous intervals in the LWD data at site J24L suggests the development of massive gas hydrate within Off-Joetsu: very high resistivity (~10,000 Ωm), high Vp (~3,700 m/s) and Vs (370-1,839 m/s), high neutron porosity (~1.2), low natural gamma ray intensity (~0 API), low neutron gamma density (~0.8 g/cm3), low NMR porosity (~0.0), low permeability (10^-2-10^-4 mD), and low formation neutron sigma (26-28). These extreme physical-property intervals suggest the development of almost pure hydrate. Because of the clear contrast between pure hydrate and seawater-saturated fine sediments, the hydrate amount can be estimated quantitatively under the assumption of a two-component system of pure hydrate and uniform seawater-saturated fine sediments. This study was conducted as a part of the methane hydrate research project funded by METI (the Ministry of Economy, Trade and Industry, Japan).
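
    Treating the logged interval as a two-component mixture, as the record describes, reduces the hydrate estimate to inverting a linear mixing rule between the pure-hydrate and water-saturated-sediment end members. The end-member values below are illustrative assumptions, and a strictly linear rule is itself a simplification for some log responses:

```python
# Invert measured = f * hydrate_value + (1 - f) * sediment_value
# for the hydrate fraction f, given assumed end-member log responses.
def hydrate_fraction(measured, sediment_value, hydrate_value):
    return (measured - sediment_value) / (hydrate_value - sediment_value)

# e.g. a bulk-density log (g/cm3): sediment ~1.9, pure hydrate ~0.9.
print(f"hydrate fraction ~ {hydrate_fraction(1.4, 1.9, 0.9):.2f}")
```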

  14. Rapid quantitative estimation of chlorinated methane utilizing bacteria in drinking water and the effect of nanosilver on biodegradation of the trichloromethane in the environment.

    Science.gov (United States)

    Zamani, Isaac; Bouzari, Majid; Emtiazi, Giti; Fanaei, Maryam

    2015-03-01

    Halomethanes are toxic and carcinogenic chemicals that are widely used in industry. They can also be formed during water disinfection by chlorine. Biodegradation by methylotrophs is the most important way to remove these pollutants from the environment. This study aimed to present a simple and rapid method for the quantitative study of halomethane-utilizing bacteria in drinking water, and also a method to facilitate the biodegradation of these compounds in the environment compared to cometabolism. Enumeration of chlorinated methane-utilizing bacteria in drinking water was carried out by the most probable number (MPN) method in two steps. First, the presence and number of methylotroph bacteria were confirmed on methanol-containing medium. Then, utilization of dichloromethane was determined by measuring the released chloride after the addition of 0.04 mol/L of it to the growth medium. Also, the effect of nanosilver particles on the biodegradation of multiply chlorinated methanes was studied by bacterial growth on Bushnell-Haas Broth containing chloroform (trichloromethane) treated with 0.2 ppm nanosilver. The most probable numbers of methylotrophs and chlorinated methane-utilizing bacteria in the tested drinking water were 10 and 4 MPN index/L, respectively. Chloroform treatment with nanosilver leads to dechlorination and the production of formaldehyde. The highest bacterial growth and formic acid production were observed in the tubes containing 1% chloroform treated with nanosilver. By combining the two tests, a rapid approach to estimating the most probable number of chlorinated methane-utilizing bacteria is introduced. Treatment with nanosilver particles resulted in easier and faster biodegradation of chloroform by bacteria. Thus, degradation of these chlorinated compounds is more efficient compared to cometabolism.

  15. Usefulness of R2* maps generated by iterative decomposition of water and fat with echo asymmetry and least-squares estimation quantitation sequence for cerebral artery dissection

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Ayumi; Shinohara, Yuki; Fujii, Shinya; Miyoshi, Fuminori; Kuya, Keita; Ogawa, Toshihide [Tottori University, Division of Radiology, Department of Pathophysiological, and Therapeutic Science, Faculty of Medicine, Yonago (Japan); Yamashita, Eijiro [Tottori University Hospital, Division of Clinical Radiology, Yonago (Japan)

    2015-09-15

    Acute intramural hematoma resulting from cerebral artery dissection is usually visualized as a region of intermediate signal intensity on T1-weighted images (WI). This often causes problems with distinguishing acute atheromatous lesions from surrounding parenchyma and dissection. The present study aimed to determine whether or not R2* maps generated by the iterative decomposition of water and fat with echo asymmetry and least-squares estimation quantitation sequence (IDEAL IQ) can distinguish cerebral artery dissection more effectively than three-dimensional variable refocusing flip angle TSE T1WI (T1-CUBE) and T2*WI. We reviewed data from nine patients with arterial dissection who were assessed by MR images including R2* maps, T2*WI, T1-CUBE, and 3D time-of-flight (TOF)-MRA. We visually assessed intramural hematomas in each patient as positive (clearly visible susceptibility effect reflecting intramural hematoma as hyperintensity on R2* map and hypointensity on T2*WI), negative (absent intramural hematoma), equivocal (difficult to distinguish between intramural hematoma and other paramagnetic substances such as veins, vessel wall calcification, or hemorrhage) and not evaluable (difficult to determine intramural hematoma due to susceptibility artifacts arising from skull base). Eight of nine patients were assessed during the acute phase. Lesions in all eight patients were positive for intramural hematoma corresponding to dissection sites on R2* maps, while two lesions were positive on T2*WI and three lesions showed high-intensity on T1-CUBE reflected intramural hematoma during the acute phase. R2* maps generated using IDEAL IQ can detect acute intramural hematoma associated with cerebral artery dissection more effectively than T2*WI and earlier than T1-CUBE. (orig.)

  16. Quantitative analysis of oyster larval proteome provides new insights into the effects of multiple climate change stressors, supplement to: Dineshram, R; Chandramouli, K; Ko, W K Ginger; Zhang, Huoming; Qian, Pei Yuan; Ravasi, Timothy; Thiyagarajan, Vengatesen (2016): Quantitative analysis of oyster larval proteome provides new insights into the effects of multiple climate change stressors. Global Change Biology, 22(6), 2054-2068

    KAUST Repository

    Dineshram, R

    2016-01-01

    The metamorphosis of planktonic larvae of the Pacific oyster (Crassostrea gigas) underpins their complex life-history strategy by switching on the molecular machinery required for sessile life and building calcite shells. Metamorphosis is thus a survival bottleneck that will come under pressure from anthropogenically induced, climate change-related variables. Therefore, it is important to understand how metamorphosing larvae interact with emerging climate change stressors. To predict how larvae might be affected in a future ocean, we examined changes in the proteome of metamorphosing larvae under multiple stressors: decreased pH (pH 7.4), increased temperature (30 °C), and reduced salinity (15 psu). Quantitative protein expression profiling using iTRAQ-LC-MS/MS identified more than 1300 proteins. Decreased pH had a negative effect on metamorphosis by down-regulating several proteins involved in energy production, metabolism, and protein synthesis. However, warming switched on these down-regulated pathways at pH 7.4. Under multiple stressors, cell signaling, energy production, growth, and developmental pathways were up-regulated, although metamorphosis was still reduced. Despite the lack of lethal effects, significant physiological responses to both individual and interacting climate change-related stressors were observed at the proteome level. The metamorphosing larvae of the C. gigas population in the Yellow Sea appear to have adequate phenotypic plasticity at the proteome level to survive in future coastal oceans, but with developmental and physiological costs.

  17. Correlation between special brain area and blood perfusion in patients with cerebral infarction at convalescent period: Feasibility of quantitative determination and estimation of learning and memory function

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Correlation of rCBF in different brain regions and learning-memory ability in patients with cerebral infarction. RESULTS: ① The rCBF of the hippocampus, amygdala, temporal cortex and prefrontal cortex in the good learning-memory function group was significantly higher than in the poor learning-memory function group (P < 0.05). ② In the good learning-memory function group, rCBF of the hippocampus, amygdala, temporal cortex and prefrontal cortex was significantly positively correlated with memory scale scores (r = 0.961, 0.926, 0.954, 0.907; P < 0.05), as it was in the poor learning-memory function group (r = 0.979, 0.976, 0.991, 0.953; P < 0.05). CONCLUSION: The rCBF of the hippocampus, amygdala, temporal cortex and prefrontal cortex of patients with cerebral infarction is significantly positively correlated with memory scale scores. Predicting learning-memory ability from quantitative determination of rCBF provides a quantitative and objective method for evaluating learning-memory ability.

  18. Ionization Energies, Electron Affinities, and Polarization Energies of Organic Molecular Crystals: Quantitative Estimations from a Polarizable Continuum Model (PCM)–Tuned Range-Separated Density Functional Approach

    KAUST Repository

    Sun, Haitao

    2016-05-16

    We propose a new methodology for the first-principles description of the electronic properties relevant for charge transport in organic molecular crystals. This methodology, which is based on the combination of a non-empirical, optimally tuned range-separated hybrid functional with the polarizable continuum model, is applied to a series of eight representative molecular semiconductor crystals. We show that it provides ionization energies, electron affinities, and transport gaps in very good agreement with experimental values as well as with the results of many-body perturbation theory within the GW approximation at a fraction of the computational costs. Hence, this approach represents an easily applicable and computationally efficient tool to estimate the gas-to-crystal-phase shifts of the frontier-orbital quasiparticle energies in organic electronic materials.
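
    The abstract does not spell out the tuning condition; in this literature the range-separation parameter ω is typically chosen, system by system, to enforce the ionization-potential theorem for the neutral and anionic species, i.e. to minimize a target function of the form

```latex
J^{2}(\omega) \;=\; \sum_{i=0}^{1} \left[ \varepsilon_{\mathrm{HOMO}}(N+i;\,\omega) + \mathrm{IP}(N+i;\,\omega) \right]^{2}
```

    where ε_HOMO(N; ω) is the HOMO eigenvalue and IP(N; ω) the ΔSCF ionization potential of the N-electron system. This is the standard optimal-tuning recipe, quoted here for context rather than from the paper itself.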

  19. Development and quantitative effect estimation of an integrated decision support system to aid operator's cognitive activities for NPP advanced main control rooms

    International Nuclear Information System (INIS)

    Lee, Seung Jun

    2007-02-01

    As digital and computer technologies have grown, human-machine interfaces (HMIs) have evolved. In safety-critical systems, especially nuclear power plants (NPPs), HMIs are important for reducing operational costs, the number of necessary operators, and the probability of accident occurrence. Efforts have been made to improve main control room (MCR) interface design and to develop automation or support systems to ensure convenient operation and maintenance. In this paper, an integrated decision support system to aid the cognitive activities of operators is proposed for advanced MCRs in future NPPs. The proposed system supports not merely a particular task but the entire operation process, based on a human cognitive process model: it integrates decision support systems that each support one cognitive activity. In this paper, operators' operation processes are analyzed based on a human cognitive process model, and appropriate support systems for each activity of the human cognitive process are suggested. Two decision support systems were developed. The first is the fault diagnosis advisory system (FDAS), which detects and diagnoses faults and provides operators with a list of possible faults and expected causes. It was implemented using two kinds of neural networks for more reliable diagnosis results. The second is the multifunctional operator support system for operation guidance, which includes the FDAS and the operation guidance system; the latter is designed to prevent operators' commission and omission errors. Furthermore, the effect of the proposed system was estimated, because evaluating decision support systems to validate their efficiency is as important as designing highly reliable ones. The effect estimations were performed theoretically and experimentally. The Bayesian

  20. Rigour in quantitative research.

    Science.gov (United States)

    Claydon, Leica Sarah

    2015-07-22

    This article which forms part of the research series addresses scientific rigour in quantitative research. It explores the basis and use of quantitative research and the nature of scientific rigour. It examines how the reader may determine whether quantitative research results are accurate, the questions that should be asked to determine accuracy and the checklists that may be used in this process. Quantitative research has advantages in nursing, since it can provide numerical data to help answer questions encountered in everyday practice.

  1. Quantifying the Impact of Natural Immunity on Rotavirus Vaccine Efficacy Estimates: A Clinical Trial in Dhaka, Bangladesh (PROVIDE) and a Simulation Study.

    Science.gov (United States)

    Rogawski, Elizabeth T; Platts-Mills, James A; Colgate, E Ross; Haque, Rashidul; Zaman, K; Petri, William A; Kirkpatrick, Beth D

    2018-03-05

    The low efficacy of rotavirus vaccines in clinical trials performed in low-resource settings may be partially explained by acquired immunity from natural exposure, especially in settings with high disease incidence. In a clinical trial of monovalent rotavirus vaccine in Bangladesh, we compared the original per-protocol efficacy estimate to efficacy derived from a recurrent events survival model in which children were considered naturally exposed and potentially immune after their first rotavirus diarrhea (RVD) episode. We then simulated trial cohorts to estimate the expected impact of prior exposure on efficacy estimates for varying rotavirus incidence rates and vaccine efficacies. Accounting for natural immunity increased the per-protocol vaccine efficacy estimate against severe RVD from 63.1% (95% confidence interval [CI], 33.0%-79.7%) to 70.2% (95% CI, 44.5%-84.0%) in the postvaccination period, and original year 2 efficacy was underestimated by 14%. The simulations demonstrated that this expected impact increases linearly with RVD incidence, will be greatest for vaccine efficacies near 50%, and can reach 20% in settings with high incidence and low efficacy. High rotavirus incidence leads to predictably lower vaccine efficacy estimates due to the acquisition of natural immunity in unvaccinated children, and this phenomenon should be considered when comparing efficacy estimates across settings. NCT01375647.
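
    To make the mechanism concrete, here is a deliberately crude recurrent-events simulation (all parameters hypothetical, not the trial's model): children who experience a natural episode keep a reduced hazard thereafter, which attenuates the estimated efficacy below its true value.

```python
import numpy as np

def simulated_efficacy(true_ve, incidence, n=20_000, days=730,
                       immunity=0.5, seed=0):
    """Toy recurrent-events cohort: each natural episode leaves partial
    immunity (hazard multiplier `immunity`) against later episodes."""
    rng = np.random.default_rng(seed)
    daily = incidence / 365.0
    totals = []
    for arm_hazard in (daily * (1 - true_ve), daily):   # vaccinated, placebo
        hazard = np.full(n, arm_hazard)
        episodes = 0
        for _ in range(days):
            events = rng.random(n) < hazard
            episodes += int(events.sum())
            hazard[events] = arm_hazard * immunity      # acquired natural immunity
        totals.append(episodes)
    return 1.0 - totals[0] / totals[1]                  # estimated VE (1 - rate ratio)

# hypothetical: true VE = 50%, 0.5 episodes/child-year -> estimate comes out lower
print(simulated_efficacy(0.5, 0.5))
```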

  2. Estimativa de estro em vacas leiteiras utilizando métodos quantitativos preditivos Dairy cows estrus estimation using predictive and quantitative methods

    Directory of Open Access Journals (Sweden)

    Irenilza de Alencar Nääs

    2008-11-01

    in milk production was due to the use of several technologies that have been developed for the sector, mainly those related to genetics and herd management. Accurate estrus detection in dairy cows is a limiting factor in the reproduction efficiency of dairy cattle, and it has been considered the most important deficiency in the field of reproduction. Failure to detect estrus efficiently may cause losses for the producer. Quantitative predictive methods based on historical data and specialist knowledge may allow, from an organized database, the prediction of the estrus pattern with lower error. This research compared the precision of estrus prediction techniques for freestall-confined Holstein dairy cows using quantitative predictive methods, through the interpolation of intermediate points of a historical herd data set. A base of rules was formulated, with the weight values for each statement lying within the interval 0 to 1; these limits were used to generate a fuzzy membership function whose output was the estrus prediction. In the following stage a data mining technique was applied, using the parameters movement rate, milk production, days of lactation and mounting behavior, and a decision tree was built to identify the most significant parameters for predicting estrus in dairy cows. The results indicate that the incidence of estrus may be predicted using either the association of the cow's movement (87%, with an estimated error of 4%) or the observation of mounting behavior (78%, with an estimated error of 11%).
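
    As an illustration of the decision-tree stage (the study's data and rule base are not reproduced here), a minimal sketch with synthetic herd records and the four parameters named above; the feature distributions and the toy label rule are invented:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 400
X = np.column_stack([
    rng.normal(60, 15, n),        # movement rate (steps/h)
    rng.normal(28, 5, n),         # milk production (kg/day)
    rng.integers(5, 300, n),      # days of lactation
    rng.integers(0, 2, n),        # mounting behavior observed (0/1)
])
# toy label: estrus days show elevated activity or mounting behavior
y = ((X[:, 0] > 75) | (X[:, 3] == 1)).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(tree.predict([[90.0, 27.0, 120.0, 0.0]]))  # classify a new cow-day
```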

  3. Long-Term Quantitative Precipitation Estimates (QPE) at High Spatial and Temporal Resolution over CONUS: Bias-Adjustment of the Radar-Only National Mosaic and Multi-sensor QPE (NMQ/Q2) Precipitation Reanalysis (2001-2012)

    Science.gov (United States)

    Prat, Olivier; Nelson, Brian; Stevens, Scott; Seo, Dong-Jun; Kim, Beomgeun

    2015-04-01

    The processing of radar-only precipitation via the reanalysis of the National Mosaic and Multi-Sensor QPE (NMQ/Q2), based on the WSR-88D Next-Generation Radar (NEXRAD) network over the Continental United States (CONUS), is complete for the period 2001 to 2012. This important milestone constitutes a unique opportunity to study precipitation processes at 1-km spatial and 5-min temporal resolution. However, in order to be suitable for hydrological, meteorological and climatological applications, the radar-only product needs to be bias-adjusted and merged with in-situ rain gauge information. Several in-situ datasets are available to assess the biases of the radar-only product and to adjust for them to provide a multi-sensor QPE. The rain gauge networks used, such as the Global Historical Climatology Network-Daily (GHCN-D), the Hydrometeorological Automated Data System (HADS), the Automated Surface Observing Systems (ASOS), and the Climate Reference Network (CRN), have different spatial densities and temporal resolutions. The challenges of incorporating non-homogeneous networks over a vast area and a long-term record are enormous: among them are the difficulties of incorporating surface measurements of differing resolution and quality into the adjustment of gridded precipitation estimates, and the choice of adjustment technique. The objective of this work is threefold. First, we investigate how the different in-situ networks impact the precipitation estimates as a function of spatial density, sensor type, and temporal resolution. Second, we assess conditional and unconditional biases of the radar-only QPE at various time scales (daily, hourly, 5-min) using in-situ precipitation observations. Finally, after assessing the bias and applying reduction or elimination techniques, we use a unique in-situ dataset merging the different RG networks (CRN, ASOS, HADS, GHCN-D) to
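
    The simplest of the adjustment techniques alluded to above is a multiplicative mean-field bias factor computed from collocated gauge-radar pairs; the reanalysis itself uses more elaborate merging, so treat this as a sketch of the idea only:

```python
import numpy as np

def mean_field_bias(gauge, radar, eps=0.01):
    """Multiplicative mean-field bias factor from collocated gauge/radar pairs."""
    gauge, radar = np.asarray(gauge, float), np.asarray(radar, float)
    wet = (gauge > eps) & (radar > eps)   # conditional on both reporting rain
    return gauge[wet].sum() / radar[wet].sum()

# hypothetical daily accumulations (mm) at collocated points
g = [4.1, 0.0, 12.5, 7.8, 0.3]
r = [3.2, 0.1, 10.9, 6.5, 0.2]
radar_field_adjusted = mean_field_bias(g, r) * np.array(r)
print(radar_field_adjusted)
```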

  4. Pyrolysis and co-composting of municipal organic waste in Bangladesh: A quantitative estimate of recyclable nutrients, greenhouse gas emissions, and economic benefits.

    Science.gov (United States)

    Mia, Shamim; Uddin, Md Ektear; Kader, Md Abdul; Ahsan, Amimul; Mannan, M A; Hossain, Mohammad Monjur; Solaiman, Zakaria M

    2018-05-01

    Waste causes environmental pollution and greenhouse gas (GHG) emissions when it is not managed sustainably. In Bangladesh, municipal organic waste (MOW) is only partially collected and is landfilled, deteriorating the environment and urging a recycle-oriented waste management system. In this study, we propose a waste management system based on pyrolysis of selected MOW for biochar production and composting of the remainder with biochar as an additive. We estimated the carbon (C), nitrogen (N), phosphorus (P) and potassium (K) recycling potentials of the new waste management techniques. The waste generation of a city was calculated using population density and the per capita waste generation rate (PWGR). Two indicators of economic development, gross domestic product (GDP) and per capita gross national income (GNI), were used to adapt the PWGR with a projected contribution of 5-20% to waste generation. The projected PWGR was then validated with a survey. Waste generation from the urban areas of Bangladesh in 2016 was estimated at between 15,507 and 15,888 t day⁻¹, with a large share (~75%) of organic waste. Adoption of the proposed system could produce 3936 t day⁻¹ of biochar-blended compost with an annual return of US $210 million in 2016, while reducing GHG emissions substantially (-503 CO₂e t⁻¹ of municipal waste). Moreover, the proposed system would be able to recover ~46%, 54%, 54% and 61% of the total C, N, P and K content of the initial waste, respectively. We also provide a projection of waste generation and nutrient recycling potentials for the year 2035. The proposed method could be a self-sustaining policy option for waste management, as it would generate ~US$51 from each tonne of waste, recycle a significant amount of nutrients to agriculture, and contribute to the reduction of environmental pollution and GHG emissions. Copyright © 2018 Elsevier Ltd. All rights reserved.
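
    The core waste-generation arithmetic (population × PWGR) is easy to reproduce; the numbers below are invented for illustration, chosen only so the total lands near the paper's reported range:

```python
# Hypothetical inputs for illustration only (not the paper's values):
population = 45_000_000     # urban population
pwgr_kg = 0.35              # per capita waste generation rate, kg/person/day
organic_share = 0.75        # organic fraction of municipal waste

waste_t_per_day = population * pwgr_kg / 1000.0   # tonnes/day
mow_t_per_day = waste_t_per_day * organic_share
print(f"{waste_t_per_day:,.0f} t/day total, {mow_t_per_day:,.0f} t/day organic")
```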

  5. Rapid exposure and loss estimates for the May 12, 2008 Mw 7.9 Wenchuan earthquake provided by the U.S. Geological Survey's PAGER system

    Science.gov (United States)

    Earle, P.S.; Wald, D.J.; Allen, T.I.; Jaiswal, K.S.; Porter, K.A.; Hearne, M.G.

    2008-01-01

    One half-hour after the May 12th Mw 7.9 Wenchuan, China earthquake, the U.S. Geological Survey’s Prompt Assessment of Global Earthquakes for Response (PAGER) system distributed an automatically generated alert stating that 1.2 million people were exposed to severe-to-extreme shaking (Modified Mercalli Intensity VIII or greater). It was immediately clear that a large-scale disaster had occurred. These alerts were widely distributed and referenced by the major media outlets and used by governments, scientific, and relief agencies to guide their responses. The PAGER alerts and Web pages included predictive ShakeMaps showing estimates of ground shaking, maps of population density, and a list of estimated intensities at impacted cities. Manual, revised alerts were issued in the following hours that included the dimensions of the fault rupture. Within a half-day, PAGER’s estimates of the population exposed to strong shaking levels stabilized at 5.2 million people. A coordinated research effort is underway to extend PAGER’s capability to include estimates of the number of casualties. We are pursuing loss models that will allow PAGER the flexibility to use detailed inventory and engineering results in regions where these data are available while also calculating loss estimates in regions where little is known about the type and strength of the built infrastructure. Prototype PAGER fatality estimates are currently implemented and can be manually triggered. In the hours following the Wenchuan earthquake, these models predicted fatalities in the tens of thousands.

  6. Liquid Chromatography with Electrospray Ionization and Tandem Mass Spectrometry Applied in the Quantitative Analysis of Chitin-Derived Glucosamine for a Rapid Estimation of Fungal Biomass in Soil

    Directory of Open Access Journals (Sweden)

    Madelen A. Olofsson

    2016-01-01

    This method employs liquid chromatography-tandem mass spectrometry to rapidly quantify chitin-derived glucosamine for estimating fungal biomass. Analyte retention was achieved using hydrophilic interaction liquid chromatography with a zwitterionic stationary phase (ZIC-HILIC) and isocratic elution with 60% 5 mM ammonium formate buffer (pH 3.0) and 40% ACN. Inclusion of muramic acid and its chromatographic separation from glucosamine enabled calculation of the bacterial contribution to the latter. Galactosamine, an isobaric isomer of glucosamine found in significant amounts in soil samples, was also investigated. The two isomers form the same precursor and product ions and could not be chromatographically separated using this rapid method. Instead, glucosamine and galactosamine were distinguished mathematically, using the linear relationships describing the differences in product ion intensities for the two analytes. The m/z transitions of 180 → 72 and 180 → 84 were applied for the detection of glucosamine and galactosamine, and that of 252 → 126 for muramic acid. Limits of detection were in the nanomolar range for all included analytes. The total analysis time was 6 min, providing a high-sample-throughput method.
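
    Distinguishing the two isobaric isomers "mathematically, using linear relationships" amounts to solving a small linear system built from the two MRM channels. A sketch with hypothetical response factors (the paper's calibration values are not given in the abstract):

```python
import numpy as np

# Hypothetical response factors (signal per µM) for each analyte in each MRM
# channel; in practice these come from standards of pure GlcN and GalN.
R = np.array([[120.0, 40.0],    # m/z 180→72: [glucosamine, galactosamine]
              [35.0, 150.0]])   # m/z 180→84

intensities = np.array([5000.0, 3000.0])     # measured signals in a sample
glc, gal = np.linalg.solve(R, intensities)   # concentrations, µM
print(f"GlcN = {glc:.1f} µM, GalN = {gal:.1f} µM")
```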

  7. Statistical modelling and RCS detrending methods provide similar estimates of long-term trend in radial growth of common beech in north-eastern France

    OpenAIRE

    Bontemps , Jean-Daniel; Esper , Jan

    2011-01-01

    Dendrochronological methods have greatly contributed to the documentation of past long-term trends in forest growth. These methods primarily focus on the high-frequency signals of tree ring chronologies. They require the removal of the ageing trend in tree growth, known as 'standardisation' or 'detrending', as a prerequisite to the estimation of such trends. Because the approach is sequential, it may however absorb part of the low-frequency historical signal. In this s...

  8. Quantitative analysis of O-isopropyl methylphosphonic acid in serum samples of Japanese citizens allegedly exposed to sarin: Estimation of internal dosage

    NARCIS (Netherlands)

    Noort, D.; Hulst, A.G.; Platenburg, D.H.J.M.; Polhuijs, M.; Benschop, H.P.

    1998-01-01

    A convenient and rapid micro-anion exchange liquid chromatography (LC) tandem electrospray mass spectrometry (MS) procedure was developed for quantitative analysis in serum of O-isopropyl methylphosphonic acid (IMPA), the hydrolysis product of the nerve agent sarin. The mass spectrometric procedure

  9. Bias in the Cq value observed with hydrolysis probe based quantitative PCR can be corrected with the estimated PCR efficiency value

    NARCIS (Netherlands)

    Tuomi, Jari Michael; Voorbraak, Frans; Jones, Douglas L.; Ruijter, Jan M.

    2010-01-01

    For real-time monitoring of PCR amplification of DNA, quantitative PCR (qPCR) assays use various fluorescent reporters. DNA binding molecules and hybridization reporters (primers and probes) only fluoresce when bound to DNA and result in the non-cumulative increase in observed fluorescence.
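
    The correction the authors refer to follows from the exponential model of PCR: the starting amount N₀ relates to the amount at the quantification threshold N_q via N₀ = N_q / E^Cq, where E is the estimated amplification efficiency (2.0 for perfect doubling). A minimal sketch:

```python
def starting_quantity(cq, efficiency, n_q=1.0):
    """Efficiency-corrected starting amount: N0 = Nq / E**Cq,
    with E the estimated amplification efficiency (2.0 = perfect doubling)."""
    return n_q / efficiency ** cq

# hypothetical: identical Cq values but different efficiencies give different N0
print(starting_quantity(25, 2.0), starting_quantity(25, 1.85))
```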

  10. Comparison of visual scoring and quantitative planimetry methods for estimation of global infarct size on delayed enhanced cardiac MRI and validation with myocardial enzymes

    Energy Technology Data Exchange (ETDEWEB)

    Mewton, Nathan, E-mail: nmewton@gmail.com [Hopital Cardiovasculaire Louis Pradel, 28, Avenue Doyen Lepine, 69677 Bron cedex, Hospices Civils de Lyon (France); CREATIS-LRMN (Centre de Recherche et d'Applications en Traitement de l'Image et du Signal), Universite Claude Bernard Lyon 1, UMR CNRS 5220, U 630 INSERM (France); Revel, Didier [Hopital Cardiovasculaire Louis Pradel, 28, Avenue Doyen Lepine, 69677 Bron cedex, Hospices Civils de Lyon (France); CREATIS-LRMN (Centre de Recherche et d'Applications en Traitement de l'Image et du Signal), Universite Claude Bernard Lyon 1, UMR CNRS 5220, U 630 INSERM (France); Bonnefoy, Eric [Hopital Cardiovasculaire Louis Pradel, 28, Avenue Doyen Lepine, 69677 Bron cedex, Hospices Civils de Lyon (France); Ovize, Michel [Hopital Cardiovasculaire Louis Pradel, 28, Avenue Doyen Lepine, 69677 Bron cedex, Hospices Civils de Lyon (France); INSERM Unite 886 (France); Croisille, Pierre [Hopital Cardiovasculaire Louis Pradel, 28, Avenue Doyen Lepine, 69677 Bron cedex, Hospices Civils de Lyon (France); CREATIS-LRMN (Centre de Recherche et d'Applications en Traitement de l'Image et du Signal), Universite Claude Bernard Lyon 1, UMR CNRS 5220, U 630 INSERM (France)

    2011-04-15

    Purpose: Although delayed enhanced CMR has become a reference method for infarct size quantification, there is no ideal method to quantify total infarct size in routine clinical practice. In a prospective study we compared the performance and post-processing time of a global visual scoring method to standard quantitative planimetry, and we compared both methods to the peak values of myocardial biomarkers. Materials and methods: This study had local ethics committee approval; all patients gave written informed consent. One hundred and three patients admitted with reperfused AMI to our intensive care unit had a complete CMR study with gadolinium-contrast injection 4 ± 2 days after admission. A global visual score was defined on a 17-segment model and compared with the quantitative planimetric evaluation of hyperenhancement. The peak values of serum troponin I (TnI) and creatine kinase (CK) release were measured in each patient. Results: The mean percentage of total left ventricular myocardium with hyperenhancement determined by the quantitative planimetry method was (20.1 ± 14.6)% with a range of 1-68%. There was an excellent correlation between quantitative planimetry and visual global scoring for the measurement of hyperenhancement extent (r = 0.94; y = 1.093x + 0.87; SEE = 1.2; P < 0.001). The Bland-Altman plot showed good concordance between the two approaches (mean of the differences = 1.9% with a standard deviation of 4.7). Mean post-processing time for quantitative planimetry was significantly longer than visual scoring post-processing time (23.7 ± 5.7 min vs 5.0 ± 1.1 min, respectively, P < 0.001). Correlation between peak CK and quantitative planimetry was r = 0.82 (P < 0.001) and r = 0.83 (P < 0.001) with visual global scoring. Correlation between peak troponin I and quantitative planimetry was r = 0.86 (P < 0.001) and r = 0.85 (P < 0.001) with visual global scoring. Conclusion: A visual approach based on a 17-segment model allows a rapid

  11. Comparison of visual scoring and quantitative planimetry methods for estimation of global infarct size on delayed enhanced cardiac MRI and validation with myocardial enzymes

    International Nuclear Information System (INIS)

    Mewton, Nathan; Revel, Didier; Bonnefoy, Eric; Ovize, Michel; Croisille, Pierre

    2011-01-01

    Purpose: Although delayed enhanced CMR has become a reference method for infarct size quantification, there is no ideal method to quantify total infarct size in routine clinical practice. In a prospective study we compared the performance and post-processing time of a global visual scoring method to standard quantitative planimetry, and we compared both methods to the peak values of myocardial biomarkers. Materials and methods: This study had local ethics committee approval; all patients gave written informed consent. One hundred and three patients admitted with reperfused AMI to our intensive care unit had a complete CMR study with gadolinium-contrast injection 4 ± 2 days after admission. A global visual score was defined on a 17-segment model and compared with the quantitative planimetric evaluation of hyperenhancement. The peak values of serum troponin I (TnI) and creatine kinase (CK) release were measured in each patient. Results: The mean percentage of total left ventricular myocardium with hyperenhancement determined by the quantitative planimetry method was (20.1 ± 14.6)% with a range of 1-68%. There was an excellent correlation between quantitative planimetry and visual global scoring for the measurement of hyperenhancement extent (r = 0.94; y = 1.093x + 0.87; SEE = 1.2; P < 0.001). The Bland-Altman plot showed good concordance between the two approaches (mean of the differences = 1.9% with a standard deviation of 4.7). Mean post-processing time for quantitative planimetry was significantly longer than visual scoring post-processing time (23.7 ± 5.7 min vs 5.0 ± 1.1 min, respectively, P < 0.001). Correlation between peak CK and quantitative planimetry was r = 0.82 (P < 0.001) and r = 0.83 (P < 0.001) with visual global scoring. Correlation between peak troponin I and quantitative planimetry was r = 0.86 (P < 0.001) and r = 0.85 (P < 0.001) with visual global scoring. Conclusion: A visual approach based on a 17-segment model allows a rapid and accurate

  12. Evaluation of bone involvement in patients with Gaucher disease: a semi-quantitative magnetic resonance imaging method (using ROI estimation of bone lesion) as an alternative method to semi-quantitative methods used so far.

    Science.gov (United States)

    Komninaka, Veroniki; Kolomodi, Dionysia; Christoulas, Dimitrios; Marinakis, Theodoros; Papatheodorou, Athanasios; Repa, Konstantina; Voskaridou, Ersi; Revenas, Konstantinos; Terpos, Evangelos

    2015-10-01

    The aim of this study was to evaluate bone involvement in patients with Gaucher disease (GD) and to propose a novel semi-quantitative magnetic resonance imaging (MRI) staging. MRI of the lumbar spine, femur, and tibia was performed in 24 patients with GD and 24 healthy controls. We also measured circulating levels of C-C motif ligand-3 (CCL-3) chemokine, C-telopeptide of collagen type-1 (CTX), and tartrate-resistant acid phosphatase isoform type-b (TRACP-5b). We used the following staging based on MRI data: stage I: region of interest (ROI) 1/2 of normal values and bone infiltration up to 30%; stage II: ROI 1/3 of normal values and bone infiltration from 30 to 60%; stage III: ROI 1/4 of normal values and bone infiltration from 60% to 80%; and stage IV: detection of epiphyseal infiltration, osteonecrosis and deformity regardless of the ROI's values. All but two patients had abnormal MRI findings: 9 (37.5%), 6 (25%), 3 (12.5%), and 4 (16.7%) had stages I-IV, respectively. Patients with GD had elevated chitotriosidase, serum TRACP-5b, and CCL-3 levels (P < 0.001). We propose an easily reproducible semi-quantitative scoring system and confirm that patients with GD have abnormal MRI bone findings and enhanced osteoclast activity possibly due to elevated CCL-3. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
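
    The proposed staging is rule-based and translates directly into code; a literal transcription follows, with boundary handling (≤ vs <) as our assumption, since the abstract does not specify it:

```python
def gaucher_mri_stage(roi_ratio, infiltration, epiphyseal_involvement=False):
    """Stage assignment following the staging quoted in the abstract.
    roi_ratio: lesion ROI value as a fraction of normal (0.5 = 1/2 of normal).
    infiltration: fraction of bone infiltrated (0-1).
    Boundary handling is our assumption, not stated in the abstract."""
    if epiphyseal_involvement:                 # stage IV regardless of ROI values
        return "IV"
    if roi_ratio <= 0.25 and 0.60 < infiltration <= 0.80:
        return "III"
    if roi_ratio <= 1 / 3 and 0.30 < infiltration <= 0.60:
        return "II"
    if roi_ratio <= 0.50 and infiltration <= 0.30:
        return "I"
    return "unclassified"

print(gaucher_mri_stage(0.3, 0.45))   # -> "II"
```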

  13. Quantitative estimation of infarct size by simultaneous dual radionuclide single photon emission computed tomography: comparison with peak serum creatine kinase activity

    International Nuclear Information System (INIS)

    Kawaguchi, K.; Sone, T.; Tsuboi, H.; Sassa, H.; Okumura, K.; Hashimoto, H.; Ito, T.; Satake, T.

    1991-01-01

    To test the hypothesis that simultaneous dual-energy single photon emission computed tomography (SPECT) with technetium-99m (99mTc) pyrophosphate and thallium-201 (201Tl) can provide an accurate estimate of the size of myocardial infarction, and to assess the correlation between infarct size and peak serum creatine kinase activity, 165 patients with acute myocardial infarction underwent SPECT 3.2 ± 1.3 (SD) days after the onset of acute myocardial infarction. In the present study, the difference in the intensity of 99mTc-pyrophosphate accumulation was assumed to be attributable to differences in the volume of infarcted myocardium, and the infarct volume was corrected by the ratio of the myocardial activity to the osseous activity to quantify the intensity of 99mTc-pyrophosphate accumulation. The correlation of measured infarct volume with peak serum creatine kinase activity was significant (r = 0.60, p < 0.01). There was also a significant linear correlation between the corrected infarct volume and peak serum creatine kinase activity (r = 0.71, p < 0.01). Subgroup analysis showed a high correlation between corrected volume and peak creatine kinase activity in patients with anterior infarctions (r = 0.75, p < 0.01) but a poor correlation in patients with inferior or posterior infarctions (r = 0.50, p < 0.01). In both the early reperfusion and the no-reperfusion groups, a good correlation was found between corrected infarct volume and peak serum creatine kinase activity (r = 0.76 and r = 0.76, respectively; p < 0.01)

  14. Pilot-scale data provide enhanced estimates of the life cycle energy and emissions profile of algae biofuels produced via hydrothermal liquefaction.

    Science.gov (United States)

    Liu, Xiaowei; Saydah, Benjamin; Eranki, Pragnya; Colosi, Lisa M; Greg Mitchell, B; Rhodes, James; Clarens, Andres F

    2013-11-01

    Life cycle assessment (LCA) has been used widely to estimate the environmental implications of deploying algae-to-energy systems even though no full-scale facilities have yet been built. Here, data from a pilot-scale facility using hydrothermal liquefaction (HTL) are used to estimate the life cycle profiles at full scale. Three scenarios (lab-, pilot-, and full-scale) were defined to understand how development in the industry could impact its life cycle burdens. HTL-derived algae fuels were found to have lower greenhouse gas (GHG) emissions than petroleum fuels. Algae-derived gasoline had significantly lower GHG emissions than corn ethanol. Most algae-based fuels have an energy return on investment between 1 and 3, which is lower than that of petroleum fuels. Sensitivity analyses reveal several areas in which improvements by algae bioenergy companies (e.g., biocrude yields, nutrient recycle) and by supporting industries (e.g., CO2 supply chains) could reduce the burdens of the industry. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Critical Quantitative Inquiry in Context

    Science.gov (United States)

    Stage, Frances K.; Wells, Ryan S.

    2014-01-01

    This chapter briefly traces the development of the concept of critical quantitative inquiry, provides an expanded conceptualization of the tasks of critical quantitative research, offers theoretical explanation and justification for critical research using quantitative methods, and previews the work of quantitative criticalists presented in this…

  16. Quantitative Thermochronology

    Science.gov (United States)

    Braun, Jean; van der Beek, Peter; Batt, Geoffrey

    2006-05-01

    Thermochronology, the study of the thermal history of rocks, enables us to quantify the nature and timing of tectonic processes. Quantitative Thermochronology is a robust review of isotopic ages, and presents a range of numerical modeling techniques to allow the physical implications of isotopic age data to be explored. The authors provide analytical, semi-analytical, and numerical solutions to the heat transfer equation in a range of tectonic settings and under varying boundary conditions. They then illustrate their modeling approach, built around a large number of case studies. The benefits of different thermochronological techniques are also described. Computer programs on an accompanying website at www.cambridge.org/9780521830577 are introduced through the text and provide a means of solving the heat transport equation in the deforming Earth to predict the ages of rocks and compare them directly to geological and geochronological data. Several short tutorials, with hints and solutions, are also included. Numerous case studies help geologists to interpret age data and relate them to Earth processes; essential background material aids the understanding and use of thermochronological data; the book provides a thorough treatise on numerical modeling of heat transport in the Earth's crust; and it is supported by a website hosting relevant computer programs and colour slides of figures from the book for use in teaching.

  17. The Need to Provide for Security in Old Age in Hierarchy of Needs-An Estimation of Its Ranking within the Polish Population

    Science.gov (United States)

    Roszkiewicz, Malgorzata

    2004-01-01

    The results of studies conducted in the last 5 years in Poland formed the basis for the assumption that amongst many needs an individual or a Polish household seeks to satisfy, the need to provide for security in old age takes a prominent position. Determining the position of this need among other needs as defined in Schrab's classification…

  18. Assessment of the health impact of an environmental pollution and quantitative assessment of health risks; Estimation de l'impact sanitaire d'une pollution environnementale et evaluation quantitative des risques sanitaires

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-09-15

    The report, made by a working group, is written for experts in health risk assessment and for professionals involved in risk management. It proposes a methodological and conceptual framework on which a unified approach to the quantitative assessment of health risks could be built. The first part, in the form of questions and answers, defines the health impact; describes how to assess the excess individual risk and the underlying hypotheses, how to pass from the excess individual risk to the health impact, how to express the results of a health impact calculation, and how to take the lack of knowledge into account at the different steps of this calculation; and discusses the significance of the result of such a calculation and how useful a health impact assessment can be. The second part gives a more detailed presentation of the scientific background for the health impact calculation, with its indicators, its uncertainties, its practice in other countries, its relevance, and its fields of application. Then, after a discussion of the dose-response relationship, it examines the scientific validity of the assessment of a number of cases.

  19. Quantitative lymphography

    International Nuclear Information System (INIS)

    Mostbeck, A.; Lofferer, O.; Kahn, P.; Partsch, H.; Koehn, H.; Bialonczyk, Ch.; Koenig, B.

    1984-01-01

    Labelled colloids and macromolecules are removed lymphatically. The uptake of tracer in the regional lymph nodes is a parameter of lymphatic flow. Due to great variations in patient shape (obesity, cachexia) and accompanying variations in counting efficiencies, quantitative measurements with reasonable accuracy have not been reported to date. A new approach to regional absorption correction is based on the combination of transmission and emission scans for each patient. The transmission scan is used for calculation of an absorption correction matrix. Accurate superposition of the correction matrix and the emission scan is achieved by computing the centers of gravity of point sources and, in the case of aligning opposite views, by cross-correlation of binary images. In phantom studies the recovery was high (98.3%) and the coefficient of variation of repeated measurements below 1%. In patient studies a standardized stress is a prerequisite for reliable and comparable results. Discrimination between normals (14.3 ± 4.2 D%) and patients with lymphedema (2.05 ± 2.5 D%) was highly significant using prefascial lymphography and sc injection. Clearance curve analysis of the activities at the injection site, however, gave no reliable data for this purpose. In normals, the uptake in lymph nodes after im injection is one order of magnitude lower than the uptake after sc injection. The discrimination between normals and patients with postthrombotic syndrome was significant. Lymphography after ic injection was in the normal range in 2/3 of the patients with lymphedema and is therefore of no diagnostic value. The difference in uptake after ic and sc injection, demonstrated for the first time by our quantitative method, provides new insights into the pathophysiology of lymphedema and needs further investigation. (Author)

  20. Evaluation of sanitary impact of environmental pollution and quantitative evaluation of sanitary risks; Estimation de l'impact sanitaire d'une pollution environnementale et evaluation quantitative des risques sanitaires

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-09-15

    The calculation of a health impact is of great interest for decision-makers and all the actors concerned. It constitutes a first step in organizing a social debate around risk acceptance, in analyzing the feasibility of an inquiry or of epidemiological surveillance, or in scaling an activity that emits pollutants into the natural environment. Several conclusions emerge. It is justified to estimate a health impact from an excess of health risk, even one derived from animal data. It is conceivable to go beyond an estimate of individual risk alone and to calculate a number of excess cases in the concerned population. The working group underlines that the characteristics of the situation are the determining factor for the type of response to provide. The size of the population is an important element, and a situation must not be underestimated on the pretext that the excess calculation leads to a number of cases below one, suggesting the impact is minor or negligible, while the individual probability is high. The health impact, expressed as the number of excess cancer cases in an exposed population, is quantified from the average value of the excess health risk multiplied by the population size, and expressed with a confidence interval. The health impact can also be expressed as a percentage of the population present in the exposure area, which goes beyond the comparison benchmarks usually cited; this practice is to be encouraged. An analysis of uncertainties must be made as often as possible. (N.C.)
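
    The impact calculation described above is a simple product with an uncertainty interval; a minimal sketch with invented numbers:

```python
# Hypothetical inputs: individual excess lifetime risk with 95% CI, exposed population
population = 120_000
risk, risk_lo, risk_hi = 2e-5, 8e-6, 5e-5

cases = population * risk                        # expected excess cancer cases
ci = (population * risk_lo, population * risk_hi)
share_pct = risk * 100                           # impact as % of the exposed population
print(f"{cases:.1f} excess cases (95% CI {ci[0]:.1f}-{ci[1]:.1f}), {share_pct:.4f}%")
```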

  1. Using laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) to characterize copper, zinc and mercury along grizzly bear hair providing estimate of diet

    Energy Technology Data Exchange (ETDEWEB)

    Noël, Marie, E-mail: marie.noel@stantec.com [Stantec Consulting Ltd. 2042 Mills Road, Unit 11, Sidney BC V8L 4X2 (Canada); Christensen, Jennie R., E-mail: jennie.christensen@stantec.com [Stantec Consulting Ltd. 2042 Mills Road, Unit 11, Sidney BC V8L 4X2 (Canada); Spence, Jody, E-mail: jodys@uvic.ca [School of Earth and Ocean Sciences, Bob Wright Centre A405, University of Victoria, PO BOX 3065 STN CSC, Victoria, BC V8W 3V6 (Canada); Robbins, Charles T., E-mail: ctrobbins@wsu.edu [School of the Environment and School of Biological Sciences, Washington State University, Pullman, WA 99164-4236 (United States)

    2015-10-01

    We enhanced an existing technique, laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), to function as a non-lethal tool in the temporal characterization of trace element exposure in wild mammals. Mercury (Hg), copper (Cu), cadmium (Cd), lead (Pb), iron (Fe) and zinc (Zn) were analyzed along the hair of captive and wild grizzly bears (Ursus arctos horribilis). Laser parameters were optimized (consecutive 2000 μm line scans along the middle line of the hair at a speed of 50 μm/s; spot size = 30 μm) for consistent ablation of the hair. A pressed pellet of reference material DOLT-2 and sulfur were used as external and internal standards, respectively. Our newly adapted method passed the quality control tests with strong correlations between trace element concentrations obtained using LA-ICP-MS and those obtained with regular solution-ICP-MS (r² = 0.92, 0.98, 0.63, 0.57, 0.99 and 0.90 for Hg, Fe, Cu, Zn, Cd and Pb, respectively). Cross-correlation analyses revealed good reproducibility between trace element patterns obtained from hair collected from the same bear. One exception was Cd, for which external contamination was observed, resulting in poor reproducibility. In order to validate the method, we used LA-ICP-MS on the hair of five captive grizzly bears fed known and varying amounts of cutthroat trout over a period of 33 days. Trace element patterns along the hair revealed strong Hg, Cu and Zn signals coinciding with fish consumption. Accordingly, significant correlations between Hg, Cu, and Zn in the hair and Hg, Cu, and Zn intake were evident and we were able to develop accumulation models for each of these elements. While the use of LA-ICP-MS for the monitoring of trace elements in wildlife is in its infancy, this study highlights the robustness and applicability of this newly adapted method. - Highlights: • LA-ICP-MS provides temporal trace metal exposure information for wild grizzly bears. • Cu and Zn temporal exposures provide

  2. Using laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) to characterize copper, zinc and mercury along grizzly bear hair providing estimate of diet

    International Nuclear Information System (INIS)

    Noël, Marie; Christensen, Jennie R.; Spence, Jody; Robbins, Charles T.

    2015-01-01

    We enhanced an existing technique, laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), to function as a non-lethal tool in the temporal characterization of trace element exposure in wild mammals. Mercury (Hg), copper (Cu), cadmium (Cd), lead (Pb), iron (Fe) and zinc (Zn) were analyzed along the hair of captive and wild grizzly bears (Ursus arctos horribilis). Laser parameters were optimized (consecutive 2000 μm line scans along the middle line of the hair at a speed of 50 μm/s; spot size = 30 μm) for consistent ablation of the hair. A pressed pellet of reference material DOLT-2 and sulfur were used as external and internal standards, respectively. Our newly adapted method passed the quality control tests with strong correlations between trace element concentrations obtained using LA-ICP-MS and those obtained with regular solution-ICP-MS (r² = 0.92, 0.98, 0.63, 0.57, 0.99 and 0.90 for Hg, Fe, Cu, Zn, Cd and Pb, respectively). Cross-correlation analyses revealed good reproducibility between trace element patterns obtained from hair collected from the same bear. One exception was Cd, for which external contamination was observed, resulting in poor reproducibility. In order to validate the method, we used LA-ICP-MS on the hair of five captive grizzly bears fed known and varying amounts of cutthroat trout over a period of 33 days. Trace element patterns along the hair revealed strong Hg, Cu and Zn signals coinciding with fish consumption. Accordingly, significant correlations between Hg, Cu, and Zn in the hair and Hg, Cu, and Zn intake were evident and we were able to develop accumulation models for each of these elements. While the use of LA-ICP-MS for the monitoring of trace elements in wildlife is in its infancy, this study highlights the robustness and applicability of this newly adapted method. - Highlights: • LA-ICP-MS provides temporal trace metal exposure information for wild grizzly bears. • Cu and Zn temporal exposures provide

  3. Medicare Provider Data - Hospice Providers

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Hospice Utilization and Payment Public Use File provides information on services provided to Medicare beneficiaries by hospice providers. The Hospice PUF...

  4. Bone Microarchitecture and Estimated Strength in 499 Adult Danish Women and Men: A Cross-Sectional, Population-Based High-Resolution Peripheral Quantitative Computed Tomographic Study on Peak Bone Structure

    DEFF Research Database (Denmark)

    Hansen, Stinus; Shanbhogue, V.; Folkestad, L.

    2014-01-01

    High-resolution peripheral quantitative computed tomography (HR-pQCT) allows in vivo assessment of cortical and trabecular bone mineral density (BMD), geometry, and microarchitecture at the distal radius and tibia in unprecedented detail. In this cross-sectional study, we provide normative and descriptive HR-pQCT data from a large population-based sample of Danish Caucasian women and men (n = 499) aged 20-80 years. In young adults (...

  5. Are Universities Providing Non-STEM Students the Mathematics Preparation Required by Their Programs?: A Case Study of A Quantitative Literacy Pathway and Vertical Alignment from Remediation to Degree Completion

    Science.gov (United States)

    Allen, Charles

    2017-01-01

    Informed by Gagne's belief in the necessity of prerequisite knowledge for new learning, and Bruner's Spiral Curriculum Theory, the objective of this case study was to explore the postsecondary pathway from remedial mathematics, through one gateway mathematics course, and into the quantitative literacy requirements of various non-STEM programs of…

  6. Use of the tritium thermonuclear peak in the deep unsaturated zone for quantitative estimate of aquifer recharge under semi-arid conditions: first application in Sahel

    International Nuclear Information System (INIS)

    Cheikh Becaye Gaye; Aranyossy, J.F.

    1992-01-01

    The location of the bomb tritium signal at 20 and 12 m depth in the unsaturated sand dunes of the semi-arid part of North Senegal leads to a quantitative estimate of the effective infiltration of 22 and 26 mm yr⁻¹. These figures correspond respectively to 6.5 and 8% of the total precipitation since 1963. The tritium content distribution in interstitial water is modelled by convolution of the analytical solution of the dispersion equation. The best fit of the complete 12 m depth tritium peak is obtained with a dispersion coefficient of 0.03 m² yr⁻¹.
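
    A sketch of the convolution model mentioned above: yearly tritium pulses are propagated to the observation depth with a 1-D advection-dispersion impulse response and decayed radioactively. The input function and velocity below are invented; only the dispersion coefficient (0.03 m² yr⁻¹) and depth (12 m) echo the record:

```python
import numpy as np

def impulse_response(z, t, v, D):
    """First-passage density for 1-D advection-dispersion transport to depth z."""
    t = np.asarray(t, dtype=float)
    g = np.zeros_like(t)
    ok = t > 0
    g[ok] = (z / np.sqrt(4.0 * np.pi * D * t[ok] ** 3)
             * np.exp(-((z - v * t[ok]) ** 2) / (4.0 * D * t[ok])))
    return g

# hypothetical annual tritium input since the 1963 bomb peak (values invented)
years = np.arange(1963, 1990)
c_in = 500.0 * np.exp(-0.15 * (years - 1963))   # TU, decaying input function
t = 1990 - years                                 # transit time offered to each pulse
v, D, z = 0.45, 0.03, 12.0                       # m/yr, m²/yr, observation depth (m)
decay = np.exp(-np.log(2) / 12.32 * t)           # radioactive decay of tritium
tritium_at_12m = np.sum(c_in * impulse_response(z, t, v, D) * decay)
print(tritium_at_12m)
```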

  7. Biotransformation of dichlorodiphenyltrichloroethane in the benthic polychaete, Nereis succinea: quantitative estimation by analyzing the partitioning of chemicals between gut fluid and lipid.

    Science.gov (United States)

    Wang, Fei; Pei, Yuan-yuan; You, Jing

    2015-02-01

    Biotransformation plays an important role in the bioaccumulation and toxicity of a chemical in biota. Dichlorodiphenyltrichloroethane (DDT) commonly co-occurs with its metabolites, dichlorodiphenyldichloroethane (DDD) and dichlorodiphenyldichloroethylene (DDE), in the environment; it is therefore a challenge to accurately quantify the biotransformation rates of DDT and distinguish the sources of the accumulated metabolites in an organism. The present study describes a method developed to quantitatively analyze the biotransformation of p,p'-DDT in the benthic polychaete, Nereis succinea. The lugworms were exposed to sediments spiked with DDT at various concentrations for 28 d. Degradation of DDT to DDD and DDE occurred in the sediments during the aging period, and approximately two-thirds of the DDT remained in the sediment. To calculate the biotransformation rates, residues of the individual compounds measured in the bioaccumulation testing (after biotransformation) were compared with residues predicted by analyzing the partitioning of the parent and metabolite compounds between gut fluid and tissue lipid (before biotransformation). The results suggest that sediment ingestion rates decreased when DDT concentrations in sediment increased. Extensive biotransformation of DDT occurred in N. succinea, with 86% of the DDT being metabolized to DDD; approximately 70% of the accumulated DDD derived from this biotransformation, and the remaining 30% came from direct uptake of sediment-associated DDD. In addition, the biotransformation was not dependent on bulk sediment concentrations, but rather on the bioaccessible concentrations of the chemicals in sediment, which were quantified by gut fluid extraction. The newly established method improved the accuracy of prediction of the bioaccumulation and toxicity of DDTs. © 2014 SETAC.

  8. Qualitative and quantitative estimation of comprehensive synaptic connectivity in short- and long-term cultured rat hippocampal neurons with new analytical methods inspired by Scatchard and Hill plots

    Energy Technology Data Exchange (ETDEWEB)

    Tanamoto, Ryo; Shindo, Yutaka; Niwano, Mariko [Department of Biosciences and Informatics, Faculty of Science and Technology, Keio University (Japan); Matsumoto, Yoshinori [Department of Applied Physics and Physico-Informatics, Faculty of Science and Technology, Keio University (Japan); Miki, Norihisa [Department of Mechanical Engineering, Faculty of Science and Technology, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama, Kanagawa, 223-8522 (Japan); Hotta, Kohji [Department of Biosciences and Informatics, Faculty of Science and Technology, Keio University (Japan); Oka, Kotaro, E-mail: oka@bio.keio.ac.jp [Department of Biosciences and Informatics, Faculty of Science and Technology, Keio University (Japan)

    2016-03-18

    To investigate comprehensive synaptic connectivity, we examined Ca²⁺ responses with quantitative electric current stimulation by an indium-tin-oxide (ITO) glass electrode with transparency and high electro-conductivity. The number of neurons with Ca²⁺ responses was low during the application of a stepwise increase of electric current in short-term cultured neurons (less than 17 days in vitro (DIV)). The neurons cultured over 17 DIV showed two types of responses, S-shaped (sigmoid) and monotonous saturated responses, and Scatchard plots well illustrated the difference between these two responses. Furthermore, sigmoid-like neural network responses over 17 DIV were altered to monotonous saturated ones by the application of a mixture of AP5 and CNQX, specific blockers of NMDA and AMPA receptors, respectively. This alteration was also characterized by the change of Hill coefficients. These findings indicate that the neural network with sigmoid-like responses has strong synergetic or cooperative synaptic connectivity via excitatory glutamate synapses. - Highlights: • We succeeded in evaluating the maturation of neural networks by Scatchard and Hill plots. • Long-term cultured neurons showed two types of responses: sigmoid and monotonous. • The sigmoid-like increase indicates the cooperativity of neural networks. • Excitatory glutamate synapses cause the cooperativity of neural networks.
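
    A Hill coefficient like the one used above can be estimated by fitting the Hill equation to stimulus-response data; a sketch with invented response amplitudes (n well above 1 would indicate the cooperative connectivity the authors describe):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(i, top, i50, n):
    """Hill equation: response vs. stimulation current i."""
    return top * i ** n / (i50 ** n + i ** n)

# hypothetical Ca2+ response amplitudes vs. stimulus current (µA)
current = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)
response = np.array([0.02, 0.05, 0.2, 0.55, 0.85, 0.97, 1.0])

(top, i50, n), _ = curve_fit(hill, current, response, p0=(1.0, 10.0, 1.0))
print(f"Hill coefficient n = {n:.2f}")   # n >> 1 suggests cooperative connectivity
```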

  9. Prospective longitudinal assessment of parotid gland function using dynamic quantitative pertechnetate scintigraphy and estimation of dose–response relationship of parotid-sparing radiotherapy in head-neck cancers

    International Nuclear Information System (INIS)

    Gupta, Tejpal; Hotwani, Chandni; Kannan, Sadhana; Master, Zubin; Rangarajan, Venkatesh; Murthy, Vedang; Budrukkar, Ashwini; Ghosh-Laskar, Sarbani; Agarwal, Jai Prakash

    2015-01-01

    To estimate the dose-response relationship using dynamic quantitative 99mTc-pertechnetate scintigraphy in head-neck cancer patients treated with parotid-sparing conformal radiotherapy. Dynamic quantitative pertechnetate salivary scintigraphy was performed pre-treatment and subsequently periodically after definitive radiotherapy. Reduction in salivary function following radiotherapy was quantified by salivary excretion fraction (SEF) ratios. Dose-response curves were modeled using standardized methodology to calculate the tolerance dose 50 (TD50) for the parotid glands. Salivary gland function was significantly affected by radiotherapy, with the maximal decrease in SEF ratios at 3 months and moderate functional recovery over time. There was a significant inverse correlation between SEF ratios and mean parotid doses at 3 months (r = −0.589, p < 0.001), 12 months (r = −0.554, p < 0.001), 24 months (r = −0.371, p = 0.002), and 36 months (r = −0.350, p = 0.005). Using a post-treatment SEF ratio <45% as the scintigraphic criterion for severe salivary toxicity, the estimated TD50 value with its 95% confidence interval (95% CI) for the parotid gland was 35.1 Gy (23.6-42.6 Gy), 41.3 Gy (34.6-48.8 Gy), 55.9 Gy (47.4-70.0 Gy) and 64.3 Gy (55.8-70.0 Gy) at 3, 12, 24, and 36 months, respectively. There is a consistent decline in parotid function even after conformal radiotherapy, with moderate recovery over time. Dynamic quantitative pertechnetate scintigraphy is a simple, reproducible, and minimally invasive test of major salivary gland function. The online version of this article (doi:10.1186/s13014-015-0371-2) contains supplementary material, which is available to authorized users
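
    TD50 is the dose at which the modeled complication probability reaches 50%. The abstract does not give the exact model specification; a common logistic parameterization, fitted here to invented dose-toxicity data, looks like this:

```python
import numpy as np
from scipy.optimize import curve_fit

def ntcp(dose, td50, gamma50):
    """Logistic dose-response; gamma50 is the normalized slope at TD50."""
    return 1.0 / (1.0 + (td50 / dose) ** (4.0 * gamma50))

# hypothetical (mean parotid dose in Gy, fraction of patients with SEF < 45%)
dose = np.array([15, 25, 35, 45, 55, 65], dtype=float)
ptox = np.array([0.05, 0.2, 0.45, 0.65, 0.8, 0.9])

(td50, g50), _ = curve_fit(ntcp, dose, ptox, p0=(40.0, 1.0))
print(f"TD50 ≈ {td50:.1f} Gy")
```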

  10. Quantitative estimates of Asian dust input to the western Philippine Sea in the mid-late Quaternary and its potential significance for paleoenvironment

    Science.gov (United States)

    Xu, Zhaokai; Li, Tiegang; Clift, Peter D.; Lim, Dhongil; Wan, Shiming; Chen, Hongjin; Tang, Zheng; Jiang, Fuqing; Xiong, Zhifang

    2015-09-01

    We present a new high-resolution multiproxy data set of Sr-Nd isotopes, rare earth element, soluble iron, and total organic carbon data from International Marine Global Change Study Core MD06-3047, located in the western Philippine Sea. We integrate our new data with published clay mineralogy, rare earth element chemistry, thermocline depth, and δ13C differences between benthic and planktonic foraminifera, in order to quantitatively constrain Asian dust input to the basin. We explore the relationship between Philippine Sea and high-latitude Pacific eolian fluxes, as well as its significance for marine productivity and atmospheric CO2 during the mid-late Quaternary. Three different indices indicate that Asian dust contributes between ~15% and ~50% of the detrital fraction of the sediments. The eolian dust flux in Core MD06-3047 is similar to that in polar southern Pacific sediment. Coherent changes in most dust flux maxima/minima indicate that dust generation in interhemispheric source areas might have had a common response to climatic variation over the mid-late Quaternary. Furthermore, we note relatively good coherence between Asian dust input, soluble iron concentration, local marine productivity, and even global atmospheric CO2 concentration over the entire study interval. This suggests that dust-borne iron fertilization of marine phytoplankton might have been a periodic process operating at glacial/interglacial time scales over the past 700 ka. We suggest that strengthening of the biological pump in the Philippine Sea, and elsewhere in the tropical western Pacific, during the mid-late Quaternary glacial periods may have contributed to the lowering of atmospheric CO2 concentrations during ice ages.

  11. Estimating travel reduction associated with the use of telemedicine by patients and healthcare professionals: proposal for quantitative synthesis in a systematic review

    Directory of Open Access Journals (Sweden)

    Bahaadinbeigy Kambiz

    2011-08-01

    Background: A major benefit offered by telemedicine is the avoidance of travel by patients, their carers and health care professionals. Unfortunately, there is very little published information about the extent of avoided travel. We propose to undertake a systematic review of the literature which reports credible data on the reductions in travel associated with the use of telemedicine. Method: The conventional approach to quantitative synthesis of the results from multiple studies is to conduct a meta-analysis. However, too much heterogeneity exists between the available studies to allow a meaningful meta-analysis of the avoided travel when telemedicine is used across all possible settings. We propose instead to consider all credible evidence on avoided travel through telemedicine by fitting a linear model which takes into account the relevant factors in the circumstances of the studies performed. We propose the use of stepwise multiple regression to identify which factors are significant. Discussion: Our proposed approach is illustrated by the example of teledermatology. In a preliminary review of the literature we found 20 studies in which the percentage of avoided travel through telemedicine could be inferred (a total of 5199 patients). The mean percentage of avoided travel reported in the 12 store-and-forward studies was 43%. In the 7 real-time studies and in a single study with a hybrid technique, 70% of the patients avoided travel. A simplified model based on the modality of telemedicine employed (i.e. real-time or store-and-forward) explained 29% of the variance. The use of store-and-forward teledermatology alone was associated with 43% of avoided travel. The increase in the proportion of patients who avoided travel (25%) when real-time telemedicine was employed was significant (P = 0.014). Service planners can use this information to weigh up the costs and benefits of the two approaches.

  12. Estimating travel reduction associated with the use of telemedicine by patients and healthcare professionals: proposal for quantitative synthesis in a systematic review.

    Science.gov (United States)

    Wootton, Richard; Bahaadinbeigy, Kambiz; Hailey, David

    2011-08-08

    A major benefit offered by telemedicine is the avoidance of travel, by patients, their carers and health care professionals. Unfortunately, there is very little published information about the extent of avoided travel. We propose to undertake a systematic review of literature which reports credible data on the reductions in travel associated with the use of telemedicine. The conventional approach to quantitative synthesis of the results from multiple studies is to conduct a meta analysis. However, too much heterogeneity exists between available studies to allow a meaningful meta analysis of the avoided travel when telemedicine is used across all possible settings. We propose instead to consider all credible evidence on avoided travel through telemedicine by fitting a linear model which takes into account the relevant factors in the circumstances of the studies performed. We propose the use of stepwise multiple regression to identify which factors are significant. Our proposed approach is illustrated by the example of teledermatology. In a preliminary review of the literature we found 20 studies in which the percentage of avoided travel through telemedicine could be inferred (a total of 5199 patients). The mean percentage avoided travel reported in the 12 store-and-forward studies was 43%. In the 7 real-time studies and in a single study with a hybrid technique, 70% of the patients avoided travel. A simplified model based on the modality of telemedicine employed (i.e. real-time or store and forward) explained 29% of the variance. The use of store and forward teledermatology alone was associated with 43% of avoided travel. The increase in the proportion of patients who avoided travel (25%) when real-time telemedicine was employed was significant (P = 0.014). Service planners can use this information to weigh up the costs and benefits of the two approaches.
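
    As a rough illustration of the proposed synthesis, the sketch below fits a weighted linear model of the percentage of avoided travel on telemedicine modality. The study table is invented for demonstration; only the modelling pattern (weighted least squares with a modality factor, extensible to stepwise selection over further factors) reflects the proposal.

    ```python
    # Hedged sketch: pool study-level avoided-travel percentages with WLS.
    # The rows below are hypothetical studies, not the 20 from the review.
    import pandas as pd
    import statsmodels.formula.api as smf

    studies = pd.DataFrame({
        "avoided_pct": [40, 45, 38, 72, 68, 74, 43, 70],
        "modality": ["SF", "SF", "SF", "RT", "RT", "RT", "SF", "RT"],
        "n_patients": [120, 340, 95, 210, 60, 150, 400, 80],
    })

    # Larger studies carry more weight, mimicking pooling of all credible
    # evidence rather than a conventional meta analysis.
    fit = smf.wls("avoided_pct ~ C(modality, Treatment('SF'))",
                  data=studies, weights=studies["n_patients"]).fit()
    print(fit.params)  # the RT coefficient estimates the extra avoided travel
    ```

    In the paper's data this modality effect was about 25 percentage points (P = 0.014).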

  13. [Study on the quantitative estimation method for VOCs emission from petrochemical storage tanks based on tanks 4.0.9d model].

    Science.gov (United States)

    Li, Jing; Wang, Min-Yan; Zhang, Jian; He, Wan-Qing; Nie, Lei; Shao, Xia

    2013-12-01

    VOCs emission from petrochemical storage tanks is one of the important emission sources in the petrochemical industry. To determine the VOCs emission amount from petrochemical storage tanks, the Tanks 4.0.9d model was used to calculate VOCs emissions from different kinds of storage tanks. As examples, VOCs emissions from a horizontal tank, a vertical fixed roof tank, an internal floating roof tank and an external floating roof tank were calculated. Practical considerations for applying Tanks 4.0.9d in China, including site meteorological information, sealing information, tank content information and unit conversion, were also discussed. Tanks 4.0.9d can be used to estimate VOCs emissions from petrochemical storage tanks in China as a simple and highly accurate method.

  14. Quantitative relationships between thallium-201 estimated myocardial infarct size and left ventricular function in the acute or convalescent phase of the first attack of myocardial infarction

    Energy Technology Data Exchange (ETDEWEB)

    Kataoka, Hajime (Kagoshima Univ. (Japan). Faculty of Medicine); Ueda, Keiji; Sakai, Makoto (and others)

    1983-07-01

    Correlations between left ventricular (LV) function and infarct size estimated by computer-assisted thallium (Tl)-201 scintigraphy were studied in 16 patients in the acute or convalescent phase of the first attack of transmural myocardial infarction (MI). Tl-201 estimation of the infarct size was done using a "corrected" circumferential profile method, by which the total defect score could be obtained. The LV function was evaluated by radionuclide angiography, echocardiography and cardiac catheterization study. The following results were obtained: 1) A close inverse relationship was found between the defect score and the ejection fraction (r = -0.649, p < 0.01). 2) The linear correlation coefficient was 0.540 (p < 0.05) between the defect score and the pulmonary arterial end-diastolic pressure and -0.616 (p < 0.02) between the defect score and the stroke volume index. There was no significant correlation between the defect score and the cardiac index. 3) There was a linear correlation between the defect score and the LV end-diastolic dimension (r = -0.852, p < 0.001). However, there was no relation between the defect score and the left atrial dimension. When the LV indices were compared between the small (S) and the large (L) defect score groups, the L defect group had a faster heart rate, a larger LV chamber size and a smaller stroke volume index than the S defect group. However, there was no significant difference in the cardiac index between these 2 groups. These results suggest that LV dilatation in the acute or convalescent phase of the first attack of transmural MI is an ominous sign because it was usually accompanied by a large infarct size. The present study also indicates that LV dilatation accompanying a large infarct does not satisfactorily compensate for LV dysfunction by the Frank-Starling mechanism.

  15. Comparison of blood levels of riboflavin and folate with dietary correlates estimated from a semi-quantitative food-frequency questionnaire in older persons in Portugal.

    Science.gov (United States)

    Tavares, Nelson R; Moreira, Pedro S; Amaral, Teresa F

    2012-01-01

    Since information regarding biochemical parameters of riboflavin and folate status is limited in some populations of older adults, a food-frequency questionnaire is often used to estimate riboflavin and folate status. However, the performance of this type of questionnaire among this age group has not been comprehensively evaluated. Thus, we sought to assess riboflavin and folate status in older adults living in Portugal and to validate findings from a semiquantitative food-frequency questionnaire (FFQ), by comparison with these blood measures. We used a cross-sectional study to investigate riboflavin in red blood cells (as the glutathione reductase activation coefficient, EGRAC) and folate in the serum of 88 older persons (66.7% female), aged between 60 and 94 years, recruited from seven adult day care community centers in Porto, Portugal. Forty-six subjects had low EGRAC levels. For riboflavin dietary intakes from the FFQ, the mean was 3.34 mg, the median 3.37 mg, and the range 0.66-4.81 mg. The Spearman correlation between these two measures was r = 0.073 (P = 0.497) and the Pearson correlation, after adjustment for energy, was r = 0.263 (P = 0.013). All participants were above the 7 nmol/L serum folate cut-off for adequacy. The Spearman correlation coefficient between serum and FFQ measures was r = -0.10 (P = 0.359), and the Pearson correlation, after adjustment for energy and following log(e) transformation, was r = -0.58 (P = 0.593). Thus riboflavin and folate intakes estimated by FFQ correlated poorly with EGRAC and serum folate values.
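
    The energy adjustment mentioned above is commonly done with the residual method (regress nutrient intake on energy intake, then correlate the residuals with the biomarker). The sketch below illustrates that pattern with synthetic numbers; the residual method is an assumption here, not a reproduction of the study's analysis.

    ```python
    # Minimal sketch of an energy-adjusted nutrient-biomarker correlation
    # using the residual method. All arrays are synthetic placeholders.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    energy = rng.normal(2000, 400, 88)                     # kcal/day from the FFQ
    riboflavin = 0.0015 * energy + rng.normal(0, 0.4, 88)  # mg/day from the FFQ
    egrac = rng.normal(1.3, 0.2, 88)                       # blood biomarker

    # Remove the energy component of intake, keep the residual.
    slope, intercept, *_ = stats.linregress(energy, riboflavin)
    residual = riboflavin - (intercept + slope * energy)

    r, p = stats.pearsonr(residual, egrac)
    print(f"energy-adjusted r = {r:.3f} (P = {p:.3f})")
    ```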

  16. Steps toward a CONUS-wide reanalysis with archived NEXRAD data using National Mosaic and Multisensor Quantitative Precipitation Estimation (NMQ/Q2) algorithms

    Science.gov (United States)

    Stevens, S. E.; Nelson, B. R.; Langston, C.; Qi, Y.

    2012-12-01

    The National Mosaic and Multisensor QPE (NMQ/Q2) software suite, developed at NOAA's National Severe Storms Laboratory (NSSL) in Norman, OK, addresses a large deficiency in the resolution of currently archived precipitation datasets. Current standards, both radar- and satellite-based, provide nationwide precipitation data with a spatial resolution no finer than 4-5 km and a temporal resolution as fine as one hour. Efforts are ongoing to process archived NEXRAD data for the period of record (1996 - present), producing a continuous dataset providing precipitation data at a spatial resolution of 1 km, on a timescale of only five minutes. In addition, radar-derived precipitation data are adjusted hourly using a wide variety of automated gauge networks spanning the United States. Applications for such a product range widely, from emergency management and flash flood guidance, to hydrological studies and drought monitoring. Results are presented from a subset of the NEXRAD dataset, providing basic statistics on the distribution of rainrates, relative frequency of precipitation types, and several other variables which demonstrate the variety of output provided by the software. Precipitation data from select case studies are also presented to highlight the increased resolution provided by this reanalysis and the possibilities that arise from the availability of data on such fine scales. A previously completed pilot project and steps toward a nationwide implementation are presented along with proposed strategies for managing and processing such a large dataset. Reprocessing efforts span several institutions in both North Carolina and Oklahoma, and data/software coordination are key in producing a homogeneous record of precipitation to be archived alongside NOAA's other Climate Data Records. Methods are presented for utilizing supercomputing capability to expedite processing and allow for the iterative nature of a reanalysis effort.

  17. Quantitative cardiac computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Thelen, M.; Dueber, C.; Wolff, P.; Erbel, R.; Hoffmann, T.

    1985-06-01

    The scope and limitations of quantitative cardiac CT have been evaluated in a series of experimental and clinical studies. The left ventricular muscle mass was estimated by computed tomography in 19 dogs (using volumetric methods, measurements in two axes and planes, and a reference volume). There was good correlation with anatomical findings. The end-diastolic volume of the left ventricle was estimated in 22 patients with cardiomyopathies; using angiography as a reference, CT led to systematic underestimation. It is also shown that ECG-triggered magnetic resonance tomography results in improved visualisation and may be expected to improve measurements of cardiac morphology.

  18. Environmental contamination with Toxocara eggs: a quantitative approach to estimate the relative contributions of dogs, cats and foxes, and to assess the efficacy of advised interventions in dogs.

    Science.gov (United States)

    Nijsse, Rolf; Mughini-Gras, Lapo; Wagenaar, Jaap A; Franssen, Frits; Ploeger, Harm W

    2015-07-28

    Environmental contamination with Toxocara eggs is considered the main source of human toxocariasis. The contribution of different groups of hosts to this contamination is largely unknown. Current deworming advice focuses mainly on dogs. However, controversy exists about blind deworming regimens for >6-month-old dogs, as most of them do not actually shed Toxocara eggs. We aim to estimate the contribution of different non-juvenile hosts to the environmental Toxocara egg contamination and to assess the effects of different Toxocara-reducing interventions for dogs. A stochastic model was developed to quantify the relative contribution to the environmental contamination with Toxocara eggs of household dogs, household cats, stray cats, and foxes, all older than 6 months, in areas with varying urbanization degrees. The model was built upon an existing model developed by Morgan et al. (2013). We used both original and published data on host density, prevalence and intensity of infection, coprophagic behaviour, faeces disposal by owners, and cats' outdoor access. Scenario analyses were performed to assess the expected reduction in dogs' egg output according to different deworming regimens and faeces clean-up compliances. Estimates referred to the Netherlands, a country free of stray dogs. Household dogs accounted for 39% of the overall egg output of >6-month-old hosts in the Netherlands, followed by stray cats (27%), household cats (19%), and foxes (15%). In urban areas, egg output was dominated by stray cats (81%). Intervention scenarios revealed that only with a high compliance (90%) to the four times a year deworming advice, dogs' contribution would drop from 39 to 28%. Alternatively, when 50% of owners would always remove their dogs' faeces, dogs' contribution would drop to 20%. Among final hosts of Toxocara older than 6 months, dogs are the main contributors to the environmental egg contamination, though cats in total (i.e. both owned and stray) transcend this
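
    To make the contribution calculation concrete, the sketch below re-implements the core of such a stochastic model: per-group egg output = host density x prevalence x egg shedding per infected host, with uncertainty propagated by Monte Carlo sampling. Every parameter value is an invented placeholder, not a figure from Nijsse et al. or Morgan et al.

    ```python
    # Toy Monte Carlo version of the relative-contribution model.
    import numpy as np

    rng = np.random.default_rng(42)
    groups = {
        # name:           (hosts/km2, beta(a, b) prevalence, lognormal egg output)
        "household dogs": (200, (8, 92), (9.0, 0.8)),
        "household cats": (150, (6, 94), (8.5, 0.8)),
        "stray cats":     (40, (25, 75), (9.2, 0.8)),
        "foxes":          (2, (45, 55), (9.5, 0.8)),
    }

    n = 10_000
    output = {name: density * rng.beta(a, b, n) * rng.lognormal(mu, sigma, n)
              for name, (density, (a, b), (mu, sigma)) in groups.items()}
    total = sum(output.values())
    for name, out in output.items():
        print(f"{name:15s} median share = {np.median(out / total):.0%}")
    ```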

  19. Contribution to the unified formalization of functional and organizational knowledge of an industrial system for a quantitative risks assessment and an estimation of barrier impacts

    International Nuclear Information System (INIS)

    Leger, A.

    2009-01-01

    Since the industrial revolution, humans have developed industrial systems to meet their production needs, but the operation of such facilities involves risks for their users. As a result, risk analysis has expanded over recent decades. While studies in the Seventies focused on technological failures, several major accidents underlined the importance of human and organisational factors in their occurrence and changed this initial way of thinking, so that in the Eighties different methods for identifying these factors emerged. These methods, involving different fields of expertise, were until recently developed and applied independently, leading to sector-based analyses and preventing an overall view of the studied situation. Recently, some methodologies have proposed to (partially) integrate these different methods to study risks in a global approach. This lack of integration remains a scientific and industrial issue for the owners of critical systems. Our contribution is therefore the development of a methodology enabling risk analysis of socio-technical systems in operation. This kind of analysis aims to estimate risks probabilistically in support of decision-making. To that end, we propose an approach that formalises, integrates, characterises and represents the different kinds of knowledge about the system. Our model allows identification of the whole set of causes that lead to the occurrence of a critical event, by considering the technical data of the system together with data related to human operators and organisational features. (author)

  20. Quantitative analysis of receptor imaging

    International Nuclear Information System (INIS)

    Fu Zhanli; Wang Rongfu

    2004-01-01

    Model-based methods for quantitative analysis of receptor imaging, including kinetic, graphical and equilibrium methods, are introduced in detail. Some technical problems facing quantitative analysis of receptor imaging, such as the correction for in vivo metabolism of the tracer, the radioactivity contribution from blood volume within the ROI, and the estimation of the nondisplaceable ligand concentration, are also reviewed briefly

  1. Quantitative research.

    Science.gov (United States)

    Watson, Roger

    2015-04-01

    This article describes the basic tenets of quantitative research. The concepts of dependent and independent variables are addressed and the concept of measurement and its associated issues, such as error, reliability and validity, are explored. Experiments and surveys – the principal research designs in quantitative research – are described and key features explained. The importance of the double-blind randomised controlled trial is emphasised, alongside the importance of longitudinal surveys, as opposed to cross-sectional surveys. Essential features of data storage are covered, with an emphasis on safe, anonymous storage. Finally, the article explores the analysis of quantitative data, considering what may be analysed and the main uses of statistics in analysis.

  2. Quantitative habitability.

    Science.gov (United States)

    Shock, Everett L; Holland, Melanie E

    2007-12-01

    A framework is proposed for a quantitative approach to studying habitability. Considerations of environmental supply and organismal demand of energy lead to the conclusions that power units are most appropriate and that the units for habitability become watts per organism. Extreme and plush environments are revealed to be on a habitability continuum, and extreme environments can be quantified as those where power supply only barely exceeds demand. Strategies for laboratory and field experiments are outlined that would quantify power supplies, power demands, and habitability. An example involving a comparison of various metabolisms pursued by halophiles is shown to be well on the way to a quantitative habitability analysis.

  3. Quantitative Estimation of Soil Carbon Sequestration in Three Land Use Types (Orchard, Paddy Rice and Forest) in a Part of Ramsar Lands, Northern Iran

    Directory of Open Access Journals (Sweden)

    zakieh pahlavan yali

    2017-02-01

    Full Text Available Introduction: The increase of greenhouse gases in the atmosphere is the main cause of climate and ecosystem change. The most important greenhouse gas is CO2, which causes global warming through the greenhouse effect. One known solution that reduces atmospheric carbon and helps to improve the situation is carbon sequestration in vegetation cover and soil. Carbon sequestration refers to the conversion of atmospheric CO2 into organic carbon compounds by plants and its capture for a certain time. Ecosystems with different vegetation have a marked influence on soil carbon sequestration (SCS). Soil, as the main component of these ecosystems, is a worldwide indicator known to play an important role in the global balance of carbon sequestration. Furthermore, carbon sequestration can become a standardized and guaranteed global trade: the cost of transferring CO2 from the atmosphere into the soil keeps rising, given the negative effects of increased CO2 on climate, and this can offer developing countries a new industry, especially when rangeland conservation and restoration follow. This research was therefore aimed at the estimation of SCS in three land use types (orchard, paddy rice and forest) in a part of Ramsar lands, northern Iran. Materials and Methods: Ramsar city, with an area of about 729.7 km2, is located in the western part of Mazandaran province, at about 20 meters above sea level, in a temperate and humid climate. The land area is covered by forest, orchard and paddy rice. After field inspection of the area, detailed topographic maps of the study zone were also examined. In each of the three land types, 500 hectares were selected, giving a total study area of 1,500 hectares. To evaluate carbon sequestration in the different vegetation systems, 15 soil profiles were selected and sampled from depths of 0 to 100 centimetres of each profile

  4. Contribution of long-term accounting for raindrop size distribution variations on quantitative precipitation estimation by weather radar: Disdrometers vs parameter optimization

    Science.gov (United States)

    Hazenberg, P.; Uijlenhoet, R.; Leijnse, H.

    2015-12-01

    Volumetric weather radars provide information on the characteristics of precipitation at high spatial and temporal resolution. Unfortunately, rainfall measurements by radar are affected by multiple error sources, which can be subdivided into two main groups: 1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, vertical profile of reflectivity, attenuation, etc.), and 2) errors related to the conversion of the observed reflectivity (Z) values into rainfall intensity (R) and specific attenuation (k). Until the recent wide-scale implementation of dual-polarimetric radar, this second group of errors received relatively little attention, focusing predominantly on precipitation type-dependent Z-R and Z-k relations. The current work accounts for the impact of variations of the drop size distribution (DSD) on the radar QPE performance. We propose to link the parameters of the Z-R and Z-k relations directly to those of the normalized gamma DSD. The benefit of this procedure is that it reduces the number of unknown parameters. In this work, the DSD parameters are obtained using 1) surface observations from a Parsivel and Thies LPM disdrometer, and 2) a Monte Carlo optimization procedure using surface rain gauge observations. The impact of both approaches for a given precipitation type is assessed for 45 days of summertime precipitation observed within The Netherlands. Accounting for DSD variations using disdrometer observations leads to an improved radar QPE product as compared to applying climatological Z-R and Z-k relations. However, overall precipitation intensities are still underestimated. This underestimation is expected to result from unaccounted errors (e.g. transmitter calibration, erroneous identification of precipitation as clutter, overshooting and small-scale variability). In case the DSD parameters are optimized, the performance of the radar is further improved, resulting in the best performance of the radar QPE product. However
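
    The proposed link between the DSD and the Z-R pair can be illustrated from first principles: given normalized gamma DSD parameters, reflectivity and rain rate follow from moments of the distribution. The sketch below assumes the common Testud-style normalized gamma form and the Atlas-Ulbrich fall speed v(D) = 3.78 D^0.67 m/s, both textbook choices rather than necessarily the exact relations used in the study; the attenuation relation Z-k would additionally require scattering calculations and is omitted.

    ```python
    # Sketch: Z (6th moment) and R (flux-weighted 3rd moment) from a
    # normalized gamma DSD with parameters Nw, Dm, mu. Demo values only.
    import numpy as np
    from scipy.special import gamma as G
    from scipy.integrate import quad

    def gamma_dsd(D, Nw, Dm, mu):
        """N(D) in mm^-1 m^-3; D and Dm in mm."""
        f = (6.0 / 4**4) * (4 + mu) ** (mu + 4) / G(mu + 4)
        return Nw * f * (D / Dm) ** mu * np.exp(-(4 + mu) * D / Dm)

    def reflectivity(Nw, Dm, mu):
        """Z in mm^6 m^-3 (6th moment of the DSD)."""
        z, _ = quad(lambda D: gamma_dsd(D, Nw, Dm, mu) * D**6, 0, 8)
        return z

    def rain_rate(Nw, Dm, mu):
        """R in mm/h with fall speed v(D) = 3.78 D^0.67 m/s."""
        r, _ = quad(lambda D: 6e-4 * np.pi * 3.78 * D**0.67
                    * gamma_dsd(D, Nw, Dm, mu) * D**3, 0, 8)
        return r

    Z = reflectivity(Nw=8000, Dm=1.5, mu=3)
    R = rain_rate(Nw=8000, Dm=1.5, mu=3)
    print(f"Z = {10 * np.log10(Z):.1f} dBZ, R = {R:.1f} mm/h")
    ```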

  5. Quantitative Finance

    Science.gov (United States)

    James, Jessica

    2017-01-01

    Quantitative finance is a field that has risen to prominence over the last few decades. It encompasses the complex models and calculations that value financial contracts, particularly those which reference events in the future, and apply probabilities to these events. While adding greatly to the flexibility of the market available to corporations and investors, it has also been blamed for worsening the impact of financial crises. But what exactly does quantitative finance encompass, and where did these ideas and models originate? We show that the mathematics behind finance and behind games of chance have tracked each other closely over the centuries and that many well-known physicists and mathematicians have contributed to the field.

  6. Estimating rice yield related traits and quantitative trait loci analysis under different nitrogen treatments using a simple tower-based field phenotyping system with modified single-lens reflex cameras

    Science.gov (United States)

    Naito, Hiroki; Ogawa, Satoshi; Valencia, Milton Orlando; Mohri, Hiroki; Urano, Yutaka; Hosoi, Fumiki; Shimizu, Yo; Chavez, Alba Lucia; Ishitani, Manabu; Selvaraj, Michael Gomez; Omasa, Kenji

    2017-03-01

    Application of field based high-throughput phenotyping (FB-HTP) methods for monitoring plant performance in real field conditions has a high potential to accelerate the breeding process. In this paper, we discuss the use of a simple tower-based remote sensing platform using modified single-lens reflex cameras for phenotyping yield traits in rice under different nitrogen (N) treatments over three years. This tower-based phenotyping platform has the advantages of simplicity, ease and stability in terms of introduction, maintenance and continual operation under field conditions. Out of six phenological stages of rice analyzed, the flowering stage was the most useful in the estimation of yield performance under field conditions. We found a high correlation between several vegetation indices (simple ratio (SR), normalized difference vegetation index (NDVI), transformed vegetation index (TVI), corrected transformed vegetation index (CTVI), soil-adjusted vegetation index (SAVI) and modified soil-adjusted vegetation index (MSAVI)) and multiple yield traits (panicle number, grain weight and shoot biomass) across three trials. Among all of the indices studied, SR exhibited the best performance with regard to the estimation of grain weight (R2 = 0.80). Under our tower-based field phenotyping system (TBFPS), we identified quantitative trait loci (QTL) for yield related traits using a mapping population of chromosome segment substitution lines (CSSLs) and a single nucleotide polymorphism data set. Our findings suggest the TBFPS can be useful for the estimation of yield performance during early crop development. This can be a major opportunity for rice breeders who desire high-throughput phenotypic selection for yield performance traits.
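
    The indices named above are simple algebraic functions of red and near-infrared reflectance. A sketch with the usual literature definitions follows; these may differ in detail from the implementation in the paper.

    ```python
    # Common definitions of the six vegetation indices, from red and
    # near-infrared (NIR) reflectance. Input values are demonstration only.
    import numpy as np

    def vegetation_indices(red, nir, L=0.5):
        ndvi = (nir - red) / (nir + red)
        return {
            "SR": nir / red,                          # simple ratio
            "NDVI": ndvi,
            "TVI": np.sqrt(ndvi + 0.5),               # transformed VI
            "CTVI": (ndvi + 0.5) / np.abs(ndvi + 0.5)
                    * np.sqrt(np.abs(ndvi + 0.5)),    # corrected TVI
            "SAVI": (1 + L) * (nir - red) / (nir + red + L),
            "MSAVI": 0.5 * (2 * nir + 1
                     - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))),
        }

    print(vegetation_indices(red=np.array([0.06]), nir=np.array([0.45])))
    ```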

  7. Quantitative and qualitative coronary arteriography. 1

    International Nuclear Information System (INIS)

    Brown, B.G.; Simpson, Paul; Dodge, J.T. Jr; Bolson, E.L.; Dodge, H.T.

    1991-01-01

    The clinical objectives of arteriography are to obtain information that contributes to an understanding of the mechanisms of the clinical syndrome, provides prognostic information, facilitates therapeutic decisions, and guides invasive therapy. Quantitative and improved qualitative assessments of arterial disease provide us with a common descriptive language which has the potential to accomplish these objectives more effectively and thus to improve clinical outcome. In certain situations, this potential has been demonstrated. Clinical investigation using quantitative techniques has definitely contributed to our understanding of disease mechanisms and of atherosclerosis progression/regression. Routine quantitation of clinical images should permit more accurate and repeatable estimates of disease severity and promises to provide useful estimates of coronary flow reserve. But routine clinical QCA awaits more cost- and time-efficient methods and clear proof of a clinical advantage. Careful inspection of highly magnified, high-resolution arteriographic images reveals morphologic features related to the pathophysiology of the clinical syndrome and to the likelihood of future progression or regression of obstruction. Features that have been found useful include thrombus in its various forms, ulceration and irregularity, eccentricity, flexing and dissection. The description of such high-resolution features should be included among, rather than excluded from, the goals of image processing, since they contribute substantially to the understanding and treatment of the clinical syndrome. (author). 81 refs.; 8 figs.; 1 tab

  8. Quantitative LC-MS Provides No Evidence for m6dA or m4dC in the Genome of Mouse Embryonic Stem Cells and Tissues.

    Science.gov (United States)

    Schiffers, Sarah; Ebert, Charlotte; Rahimoff, René; Kosmatchev, Olesea; Steinbacher, Jessica; Bohne, Alexandra-Viola; Spada, Fabio; Michalakis, Stylianos; Nickelsen, Jörg; Müller, Markus; Carell, Thomas

    2017-09-04

    Until recently, it was believed that the genomes of higher organisms contain, in addition to the four canonical DNA bases, only 5-methyl-dC (m5dC) as a modified base to control epigenetic processes. In recent years, this view has changed dramatically with the discovery of 5-hydroxymethyl-dC (hmdC), 5-formyl-dC (fdC), and 5-carboxy-dC (cadC) in DNA from stem cells and brain tissue. N6-methyldeoxyadenosine (m6dA) is the most recent base reported to be present in the genome of various eukaryotic organisms. This base, together with N4-methyldeoxycytidine (m4dC), was first reported to be a component of bacterial genomes. In this work, we investigated the levels and distribution of these potentially epigenetically relevant DNA bases by using a novel ultrasensitive UHPLC-MS method. We further report quantitative data for m5dC, hmdC, fdC, and cadC, but we were unable to detect either m4dC or m6dA in DNA isolated from mouse embryonic stem cells or brain and liver tissue, which calls into question their epigenetic relevance. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Novel method for quantitative estimation of biofilms

    DEFF Research Database (Denmark)

    Syal, Kirtimaan

    2017-01-01

    Biofilm protects bacteria from stress and hostile environments. The crystal violet (CV) assay is the most popular method for biofilm determination adopted by different laboratories so far. However, the biofilm layer formed at the liquid-air interphase, known as the pellicle, is extremely sensitive to the washing and staining steps, and early phase biofilms are also prone to damage by these steps. In bacteria like mycobacteria, biofilm formation occurs largely at the liquid-air interphase, which is susceptible to loss. In the proposed protocol, loss of such biofilm layer was prevented. In place of inverting and discarding the media, which can lead to the loss of the aerobic biofilm layer in the CV assay, media was removed from the formed biofilm with the help of a syringe and the biofilm layer was allowed to dry. The staining and washing steps were avoided, and an organic solvent, tetrahydrofuran (THF), was deployed…

  10. Quantitative Risks

    Science.gov (United States)

    2015-02-24

    design failure modes and effects analysis (DFMEA), (b) Fault Tree Analysis (FTA) for all essential functions listed in the Failure Definition and... subsystem reliability data from Accomplishment 3, completed (a) updated DFMEA, (b) updated FTA, (c) updated reliability and maintainability estimates, (d)... www.gao.gov/assets/660/658615.pdf [4] Karen Richey. Update to GAO's Cost Estimating Assessment Guide and Scheduling Guide (draft). GAO. Mar 2013

  11. Quantitative tools for addressing hospital readmissions

    Directory of Open Access Journals (Sweden)

    Lagoe Ronald J

    2012-11-01

    Full Text Available Abstract Background Increased interest in health care cost containment is focusing attention on reduction of hospital readmissions. Major payors have already developed financial penalties for providers that generate excess readmissions. This subject has benefitted from the development of resources such as the Potentially Preventable Readmissions software. This process has encouraged hospitals to renew efforts to improve these outcomes. The aim of this study was to describe quantitative tools such as definitions, risk estimation, and tracking of patients for reducing hospital readmissions. Findings This study employed the Potentially Preventable Readmissions software to develop quantitative tools for addressing hospital readmissions. These tools included two definitions of readmissions that support identification and management of patients. They also included analytical approaches for estimation of the risk of readmission for individual patients by age, discharge status of the initial admission, and severity of illness. They also included patient specific spreadsheets for tracking of target populations and for evaluation of the impact of interventions. Conclusions The study demonstrated that quantitative tools including the development of definitions of readmissions, estimation of the risk of readmission, and patient specific spreadsheets could contribute to the improvement of patient outcomes in hospitals.
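
    A minimal sketch of the kind of quantitative tooling described: estimate readmission risk as the observed proportion within age and severity strata, and attach it to a patient-level tracking table. Column names and records are hypothetical, not drawn from the Potentially Preventable Readmissions software.

    ```python
    # Stratified readmission risk estimates plus a patient tracking sheet.
    import pandas as pd

    admissions = pd.DataFrame({
        "age_band":  ["65+", "65+", "<65", "<65", "65+", "<65"],
        "severity":  [3, 4, 2, 3, 4, 1],
        "readmit30": [1, 1, 0, 0, 1, 0],   # 30-day readmission flag
    })

    # Risk estimate per stratum = observed readmission proportion.
    risk = (admissions.groupby(["age_band", "severity"])["readmit30"]
            .mean().rename("est_risk"))

    # Patient-level tracking: attach the stratum risk to each record.
    tracking = admissions.merge(risk.reset_index(), on=["age_band", "severity"])
    print(tracking)
    ```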

  12. How to estimate the health benefits of additional research and changing clinical practice

    OpenAIRE

    Claxton, Karl; Griffin, Susan; Koffijberg, Hendrik; McKenna, Claire

    2015-01-01

    A simple extension of standard meta-analysis can provide quantitative estimates of the potential health benefits of further research and of implementing the findings of existing research, which can help inform research prioritisation and efforts to change clinical practice

  14. A comparative quantitative analysis of the IDEAL (iterative decomposition of water and fat with echo asymmetry and least-squares estimation) and the CHESS (chemical shift selection suppression) techniques in 3.0 T L-spine MRI

    Science.gov (United States)

    Kim, Eng-Chan; Cho, Jae-Hwan; Kim, Min-Hye; Kim, Ki-Hong; Choi, Cheon-Woong; Seok, Jong-min; Na, Kil-Ju; Han, Man-Seok

    2013-03-01

    This study was conducted on 20 patients who had undergone pedicle screw fixation between March and December 2010 to quantitatively compare a conventional fat suppression technique, CHESS (chemical shift selection suppression), and a new technique, IDEAL (iterative decomposition of water and fat with echo asymmetry and least squares estimation). The general efficacy and usefulness of the IDEAL technique was also evaluated. Fat-suppressed transverse-relaxation-weighted images and longitudinal-relaxation-weighted images were obtained before and after contrast injection by using these two techniques with a 1.5T MR (magnetic resonance) scanner. The obtained images were analyzed for image distortion, susceptibility artifacts and homogeneous fat removal in the target region. The results showed that the image distortion due to the susceptibility artifacts caused by implanted metal was lower in the images obtained using the IDEAL technique compared to those obtained using the CHESS technique. The results of a qualitative analysis also showed that, compared to the CHESS technique, fewer susceptibility artifacts and more homogeneous fat removal were found in the images obtained using the IDEAL technique in a comparative image evaluation of the axial plane images before and after contrast injection. In summary, compared to the CHESS technique, the IDEAL technique showed fewer susceptibility artifacts caused by metal and lower image distortion. In addition, more homogeneous fat removal was shown in the IDEAL technique.

  15. Quantitative radiography

    International Nuclear Information System (INIS)

    Brase, J.M.; Martz, H.E.; Waltjen, K.E.; Hurd, R.L.; Wieting, M.G.

    1986-01-01

    Radiographic techniques have been used in nondestructive evaluation primarily to develop qualitative information (i.e., defect detection). This project applies and extends the techniques developed in medical x-ray imaging, particularly computed tomography (CT), to develop quantitative information (both spatial dimensions and material quantities) on the three-dimensional (3D) structure of solids. Accomplishments in FY 86 include (1) improvements in experimental equipment - an improved microfocus system that will give 20-μm resolution and has potential for increased imaging speed, and (2) development of a simple new technique for displaying 3D images so as to clearly show the structure of the object. Image reconstruction and data analysis for a series of synchrotron CT experiments conducted by LLNL's Chemistry Department has begun

  16. Infrared thermography quantitative image processing

    Science.gov (United States)

    Skouroliakou, A.; Kalatzis, I.; Kalyvas, N.; Grivas, TB

    2017-11-01

    Infrared thermography is an imaging technique that has the ability to provide a map of temperature distribution of an object's surface. It is considered for a wide range of applications in medicine as well as in non-destructive testing procedures. One of its promising medical applications is in orthopaedics and diseases of the musculoskeletal system, where temperature distribution of the body's surface can contribute to the diagnosis and follow up of certain disorders. Although the thermographic image can give a fairly good visual estimation of distribution homogeneity and temperature pattern differences between two symmetric body parts, it is important to extract a quantitative measurement characterising temperature. Certain approaches use the temperature of enantiomorphic anatomical points, or parameters extracted from a Region of Interest (ROI), and a number of indices have been developed by researchers to that end. In this study a quantitative approach to thermographic image processing is attempted, based on extracting different indices for symmetric ROIs on thermograms of the lower back area of scoliotic patients. The indices are based on first-order statistical parameters describing temperature distribution. Analysis and comparison of these indices result in evaluating the temperature distribution pattern of the back trunk expected in subjects who are healthy with regard to spinal problems.
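
    As an illustration, first-order statistical indices for two symmetric ROIs might be computed as below; the specific index definitions (differences of means and of standard deviations) are plausible examples, not necessarily those used in the study.

    ```python
    # Asymmetry indices from first-order statistics of two symmetric ROIs.
    # The ROI arrays are synthetic stand-ins for thermogram patches.
    import numpy as np

    def roi_indices(left, right):
        d_mean = float(np.mean(left) - np.mean(right))  # asymmetry of mean T
        d_std = float(np.std(left) - np.std(right))     # asymmetry of spread
        return {"delta_mean": d_mean, "delta_std": d_std}

    rng = np.random.default_rng(3)
    left = rng.normal(32.5, 0.4, (50, 50))    # degrees C, synthetic ROI
    right = rng.normal(32.1, 0.5, (50, 50))
    print(roi_indices(left, right))
    ```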

  17. Quantitative sexing (Q-Sexing) and relative quantitative sexing (RQ ...

    African Journals Online (AJOL)

    samer

    Key words: Polymerase chain reaction (PCR), quantitative real time polymerase chain reaction (qPCR), quantitative sexing, Siberian tiger.

  18. Magnetic Resonance Imaging Provides Added Value to the Prostate Cancer Prevention Trial Risk Calculator for Patients With Estimated Risk of High-grade Prostate Cancer Less Than or Equal to 10.

    Science.gov (United States)

    Kim, Eric H; Weaver, John K; Shetty, Anup S; Vetter, Joel M; Andriole, Gerald L; Strope, Seth A

    2017-04-01

    To determine the added value of prostate magnetic resonance imaging (MRI) to the Prostate Cancer Prevention Trial risk calculator. Between January 2012 and December 2015, 339 patients underwent prostate MRI prior to biopsy at our institution. MRI was considered positive if there was at least 1 Prostate Imaging Reporting and Data System (PI-RADS) 4 or 5 suspicious region. Logistic regression was used to develop 2 models: biopsy outcome as a function of the Prostate Cancer Prevention Trial risk calculator (1) alone and (2) combined with MRI findings. When including all patients, the models with and without MRI performed similarly (area under the curve [AUC] = 0.74 and 0.78, P = .06). When restricting the cohort to patients with estimated risk of high-grade (Gleason ≥7) prostate cancer ≤10%, the model with MRI outperformed the Prostate Cancer Prevention Trial alone model (AUC = 0.69 and 0.60, P = .01). Within this cohort of patients, there was no significant difference in discrimination between models for those with previous negative biopsy (AUC = 0.61 vs 0.63, P = .76), whereas there was a significant improvement in discrimination with the MRI model for biopsy-naïve patients (AUC = 0.72 vs 0.60, P = .01). The use of prostate MRI in addition to the Prostate Cancer Prevention Trial risk calculator provides a significant improvement in clinical risk discrimination for patients with estimated risk of high-grade (Gleason ≥7) prostate cancer ≤10%. Prebiopsy prostate MRI should be strongly considered for these patients. Copyright © 2016 Elsevier Inc. All rights reserved.
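
    The model comparison described above can be sketched as two logistic regressions scored by ROC AUC. The data below are simulated stand-ins for the 339-patient cohort, so only the comparison pattern mirrors the study.

    ```python
    # PCPT risk alone vs. PCPT risk + MRI positivity, compared by AUC.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    n = 339
    pcpt = rng.uniform(0.01, 0.4, n)               # PCPT high-grade risk
    mri_pos = rng.binomial(1, 0.35, n)             # PI-RADS 4-5 region present
    logit = -2 + 4 * pcpt + 1.5 * mri_pos
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # biopsy Gleason >= 7

    X1 = pcpt.reshape(-1, 1)
    X2 = np.column_stack([pcpt, mri_pos])
    auc1 = roc_auc_score(y, LogisticRegression().fit(X1, y).predict_proba(X1)[:, 1])
    auc2 = roc_auc_score(y, LogisticRegression().fit(X2, y).predict_proba(X2)[:, 1])
    print(f"PCPT alone AUC = {auc1:.2f}; PCPT + MRI AUC = {auc2:.2f}")
    ```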

  19. Quantitative Communication Research: Review, Trends, and Critique

    Directory of Open Access Journals (Sweden)

    Timothy R. Levine

    2013-01-01

    Full Text Available Trends in quantitative communication research are reviewed. A content analysis of 48 articles reporting original communication research published in 1988-1991 and 2008-2011 is reported. Survey research and self-report measurement remain common approaches to research. Null hypothesis significance testing remains the dominant approach to statistical analysis. Reporting the shapes of distributions, estimates of statistical power, and confidence intervals remain uncommon. Trends over time include the increased popularity of health communication and computer mediated communication as topics of research, and increased attention to mediator and moderator variables. The implications of these practices for scientific progress are critically discussed, and suggestions for the future are provided.

  20. Comparison of two quantitative indexes for the estimation of alfalfa development stages

    Directory of Open Access Journals (Sweden)

    M.L. Bernáldez

    2006-12-01

    Full Text Available The developmental stage of alfalfa (Medicago sativa L.) is a usual variable of study when evaluating cultivars because of its relationship with the chemical composition and growth rate of the pasture. Determination of quantitative indexes such as "mean stage by count" and "mean stage by weight" (MSC and MSW, respectively) makes it possible to describe the developmental phenological stages of alfalfa pastures in a more objective and reproducible way. Both the MSC and MSW indexes describe the developmental stage of alfalfa equally well when the pasture is close to the recommended utilisation time in practice. The advantage of estimating MSC rather than MSW lies in the higher operative efficiency of generating the data needed for its calculation.
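
    Both indexes are weighted means of morphological stage codes, following the usual Kalu and Fick definitions: MSC weights stages by stem count, MSW by stem dry weight. A sketch with illustrative sample values:

    ```python
    # MSC = sum(S_i * N_i) / sum(N_i); MSW = sum(S_i * W_i) / sum(W_i).
    # Stage codes, counts and weights below are made-up demonstration data.
    def mean_stage_by_count(stages, counts):
        return sum(s * n for s, n in zip(stages, counts)) / sum(counts)

    def mean_stage_by_weight(stages, weights):
        return sum(s * w for s, w in zip(stages, weights)) / sum(weights)

    stages = [0, 1, 2, 3, 4]             # vegetative ... early flower
    counts = [5, 12, 20, 8, 2]           # stems per stage
    weights = [0.8, 3.1, 7.5, 4.2, 1.3]  # g dry matter per stage

    print(f"MSC = {mean_stage_by_count(stages, counts):.2f}")
    print(f"MSW = {mean_stage_by_weight(stages, weights):.2f}")
    ```

    The abstract's point about operative efficiency follows directly: MSC needs only stem counts per stage, while MSW requires separating and drying the material of each stage before weighing.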

  1. The quantitative Morse theorem

    OpenAIRE

    Loi, Ta Le; Phien, Phan

    2013-01-01

    In this paper, we give a proof of the quantitative Morse theorem stated by Y. Yomdin in [Y1]. The proof is based on the quantitative Sard theorem, the quantitative inverse function theorem and the quantitative Morse lemma.

  2. Conjugate whole-body scanning system for quantitative measurement of organ distribution in vivo

    International Nuclear Information System (INIS)

    Tsui, B.M.W.; Chen, C.T.; Yasillo, N.J.; Ortega, C.J.; Charleston, D.B.; Lathrop, K.A.

    1979-01-01

    The determination of accurate, quantitative, biokinetic distribution of an internally dispersed radionuclide in humans is important in making realistic radiation absorbed dose estimates, studying biochemical transformations in health and disease, and developing clinical procedures indicative of abnormal functions. In order to collect these data, a whole-body imaging system is required which provides both adequate spatial resolution and some means of absolute quantitation. Based on these considerations, a new whole-body scanning system has been designed and constructed that employs the conjugate counting technique. The conjugate whole-body scanning system provides an efficient and accurate means of collecting absolute quantitative organ distribution data of radioactivity in vivo

  3. Toxicity Estimation Software Tool (TEST)

    Science.gov (United States)

    The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...

  4. Estimation of morbidity effects

    International Nuclear Information System (INIS)

    Ostro, B.

    1994-01-01

    Many researchers have related exposure to ambient air pollution to respiratory morbidity. To be included in this review and analysis, however, several criteria had to be met. First, a careful study design and a methodology that generated quantitative dose-response estimates were required. Therefore, there was a focus on time-series regression analyses relating daily incidence of morbidity to air pollution in a single city or metropolitan area. Studies that used weekly or monthly average concentrations or that involved particulate measurements in poorly characterized metropolitan areas (e.g., one monitor representing a large region) were not included in this review. Second, studies that minimized confounding and omitted variables were included. For example, research that compared two cities or regions and characterized them as 'high' and 'low' pollution areas was not included because of potential confounding by other factors in the respective areas. Third, concern for the effects of seasonality and weather had to be demonstrated. This could be accomplished by either stratifying and analyzing the data by season, by examining the independent effects of temperature and humidity, and/or by correcting the model for possible autocorrelation. A fourth criterion for study inclusion was that the study had to include a reasonably complete analysis of the data. Such analysis would include a careful exploration of the primary hypothesis as well as possible examination of the robustness and sensitivity of the results to alternative functional forms, specifications, and influential data points. When studies reported the results of these alternative analyses, the quantitative estimates that were judged as most representative of the overall findings were those that were summarized in this paper. Finally, for inclusion in the review of particulate matter, the study had to provide a measure of particle concentration that could be converted into PM10, particulate matter below 10 μm

  5. Understanding quantitative research: part 1.

    Science.gov (United States)

    Hoe, Juanita; Hoare, Zoë

    This article, which is the first in a two-part series, provides an introduction to understanding quantitative research, basic statistics and terminology used in research articles. Critical appraisal of research articles is essential to ensure that nurses remain up to date with evidence-based practice to provide consistent and high-quality nursing care. This article focuses on developing critical appraisal skills and understanding the use and implications of different quantitative approaches to research. Part two of this article will focus on explaining common statistical terms and the presentation of statistical data in quantitative research.

  6. Quantitative EPR A Practitioners Guide

    CERN Document Server

    Eaton, Gareth R; Barr, David P; Weber, Ralph T

    2010-01-01

    This is the first comprehensive yet practical guide for people who perform quantitative EPR measurements. No existing book provides this level of practical guidance to ensure the successful use of EPR. There is a growing need in both industrial and academic research to provide meaningful and accurate quantitative EPR results. This text discusses the various sample, instrument and software related aspects required for EPR quantitation. Specific topics include: choosing a reference standard, resonator considerations (Q, B1, Bm), power saturation characteristics, sample positioning, and finally, putting all the factors together to obtain an accurate spin concentration of a sample.

  7. Quantitative reconstruction of past Danish landscapes: The first results

    DEFF Research Database (Denmark)

    Odgaard, Bent Vad; Nielsen, Anne Birgitte

    We present a first attempt at pollen-based quantitative reconstruction of land cover around 9 Danish lake sites for the past 2500 years, based on models of pollen dispersal and deposition (Prentice, 1985; Sugita, 1993, 1994) and pollen productivity estimates produced from a historical calibration. These estimates are then used in the Landscape Reconstruction Algorithm, LRA (Sugita & Walker, 2000; Sugita, in press), to separate background and local pollen signals at small sites, thus providing reconstructions of local vegetation around the sites.

  8. Boundary methods for mode estimation

    Science.gov (United States)

    Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.

    1999-08-01

    This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable in terms of both accuracy and computations to other popular mode estimation techniques currently found in the literature and automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. Also, this paper quickly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to other mode estimation techniques. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion of the MOG and k-means techniques is the Akaike Information Criterion (AIC).
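
    A sketch of the MOG baseline mentioned above: fit mixtures of increasing order and take the order minimizing AIC as the mode estimate. The data are synthetic, and this illustrates only the comparison baseline, not the BM algorithm itself.

    ```python
    # Mode (model order) estimation by Gaussian mixtures with AIC stopping.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(7)
    # Three well-separated clusters -> expected answer is 3.
    data = np.concatenate([rng.normal(m, 0.3, 200) for m in (0, 3, 6)])
    data = data.reshape(-1, 1)

    aic = {k: GaussianMixture(n_components=k, random_state=0).fit(data).aic(data)
           for k in range(1, 7)}
    print(f"estimated number of modes: {min(aic, key=aic.get)}")
    ```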

  9. The estimation of quantitative genetic parameters in open-pollinated progenies of Eucalyptus urophylla

    Directory of Open Access Journals (Sweden)

    Gabriel Costa ROCHA

    2016-12-01

    The aim of this study was to estimate quantitative genetic parameters in open-pollinated progenies of Eucalyptus urophylla, considering different ages. The experiment was established in an area belonging to the Eucatex company, located in the city of Itatinga/SP, in a randomized block design with 20 progenies, nine replicates and five plants per plot, totaling 900 plants. The following evaluations were performed: (a) plant height, (b) diameter at breast height and (c) wood volume in cubic meters. The estimation of quantitative genetic parameters was executed by adopting the REML/BLUP procedure. The analysis of variance presented significant differences (p < 0.05) among the studied progenies. The average heritability of progenies (h²mp%) for the characters plant height at 36 months of age (H: 96), diameter at breast height (DBH: 94) and wood volume (VOL: 95) indicated high genetic control of the expression of these characters. The phenotypic and genetic correlations, computed based on DBH, showed high values (70 to 97%), optimizing the work of the breeder

  10. Simulation evaluation of quantitative myocardial perfusion assessment from cardiac CT

    Science.gov (United States)

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.

    2014-03-01

    Contrast enhancement on cardiac CT provides valuable information about myocardial perfusion and methods have been proposed to assess perfusion with static and dynamic acquisitions. There is a lack of knowledge and consensus on the appropriate approach to ensure 1) sufficient diagnostic accuracy for clinical decisions and 2) low radiation doses for patient safety. This work developed a thorough dynamic CT simulation and several accepted blood flow estimation techniques to evaluate the performance of perfusion assessment across a range of acquisition and estimation scenarios. Cardiac CT acquisitions were simulated for a range of flow states (Flow = 0.5, 1, 2, 3 ml/g/min, cardiac output = 3, 5, 8 L/min). CT acquisitions were simulated with a validated CT simulator incorporating polyenergetic data acquisition and realistic x-ray flux levels for dynamic acquisitions with a range of scenarios including 1, 2, 3 sec sampling for 30 sec with 25, 70, 140 mAs. Images were generated using conventional image reconstruction with additional image-based beam hardening correction to account for iodine content. Time attenuation curves were extracted for multiple regions around the myocardium and used to estimate flow. In total, 2,700 independent realizations of dynamic sequences were generated and multiple MBF estimation methods were applied to each of these. Evaluation of quantitative kinetic modeling yielded blood flow estimates with a root mean square error (RMSE) of ~0.6 ml/g/min averaged across multiple scenarios. Semi-quantitative modeling and qualitative static imaging resulted in significantly more error (RMSE = ~1.2 and ~1.2 ml/g/min, respectively). For quantitative methods, dose reduction through reduced temporal sampling or reduced tube current had comparable impact on the MBF estimate fidelity. On average, half dose acquisitions increased the RMSE of estimates by only 18%, suggesting that substantial dose reductions can be employed in the context of quantitative myocardial
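
    As a sketch of the quantitative kinetic-modeling step, a time-attenuation curve can be fitted with a simple single-compartment model in which the tissue curve is a flow-scaled convolution of the arterial input with an exponential residue function. The model form and all numbers below are illustrative assumptions, not the estimators evaluated in the study.

    ```python
    # Fit a flow parameter to a synthetic time-attenuation curve.
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.arange(0, 30, 2.0)                  # s, dynamic sampling times
    aif = 300 * np.exp(-((t - 8) / 3.0) ** 2)  # synthetic arterial input (HU)

    def tissue_curve(t, flow, k):
        # Discrete convolution of the AIF with a flow-scaled residue.
        dt = t[1] - t[0]
        resid = flow * np.exp(-k * t)
        return np.convolve(aif, resid)[: t.size] * dt

    meas = tissue_curve(t, 0.02, 0.15) + np.random.default_rng(9).normal(0, 2, t.size)
    (flow, k), _ = curve_fit(tissue_curve, t, meas, p0=(0.01, 0.1))
    print(f"estimated flow parameter: {flow:.3f} (true 0.020)")
    ```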

  11. Applied quantitative finance

    CERN Document Server

    Chen, Cathy; Overbeck, Ludger

    2017-01-01

    This volume provides practical solutions and introduces recent theoretical developments in risk management, pricing of credit derivatives, quantification of volatility and copula modeling. This third edition is devoted to modern risk analysis based on quantitative methods and textual analytics to meet the current challenges in banking and finance. It includes 14 new contributions and presents a comprehensive, state-of-the-art treatment of cutting-edge methods and topics, such as collateralized debt obligations, the high-frequency analysis of market liquidity, and realized volatility. The book is divided into three parts: Part 1 revisits important market risk issues, while Part 2 introduces novel concepts in credit risk and its management along with updated quantitative methods. The third part discusses the dynamics of risk management and includes risk analysis of energy markets and for cryptocurrencies. Digital assets, such as blockchain-based currencies, have become popular but are theoretically challenging...

  12. Quantitative subsurface analysis using frequency modulated thermal wave imaging

    Science.gov (United States)

    Subhani, S. K.; Suresh, B.; Ghali, V. S.

    2018-01-01

    Quantitative depth analysis of subsurface anomalies with enhanced depth resolution is a challenging task in thermography. Frequency modulated thermal wave imaging, introduced earlier, provides a complete depth scan of the object by stimulating it with a suitable band of frequencies and then analyzing the subsequent thermal response with a suitable post-processing approach to resolve subsurface details. However, the conventional Fourier transform based methods used for post-processing unscramble the frequencies with limited frequency resolution and thus yield a finite depth resolution. Spectral zooming provided by the chirp z-transform offers enhanced frequency resolution, which can further improve the depth resolution to axially explore the finest subsurface features. Quantitative depth analysis with this augmented depth resolution is proposed to provide a closer estimate of the actual depth of a subsurface anomaly. This manuscript experimentally validates the enhanced depth resolution using non-stationary thermal wave imaging and offers a first solution for quantitative depth estimation in frequency modulated thermal wave imaging.
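
    The spectral zooming idea can be sketched with SciPy's chirp z-transform, which evaluates the spectrum on a dense grid restricted to a band of interest instead of the fixed bin spacing of the FFT. The signal and band edges below are arbitrary demonstration values, not the thermal-response frequencies of the study.

    ```python
    # Zoom a narrow frequency band with the chirp z-transform (CZT).
    import numpy as np
    from scipy.signal import czt

    fs = 100.0                                   # Hz sampling rate
    t = np.arange(0, 20, 1 / fs)                 # 20 s record
    x = np.sin(2 * np.pi * 5.00 * t) + 0.8 * np.sin(2 * np.pi * 5.23 * t)

    # Evaluate 1000 points over 4.5-5.5 Hz (1 mHz spacing), versus the
    # 50 mHz bin spacing of a plain FFT of this record.
    f1, f2, m = 4.5, 5.5, 1000
    w = np.exp(-2j * np.pi * (f2 - f1) / (m * fs))   # ratio between points
    a = np.exp(2j * np.pi * f1 / fs)                 # starting point on unit circle
    spectrum = np.abs(czt(x, m=m, w=w, a=a))
    freqs = f1 + (f2 - f1) * np.arange(m) / m
    print(f"dominant band component near {freqs[np.argmax(spectrum)]:.3f} Hz")
    ```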

  13. Quantitative ion implantation

    International Nuclear Information System (INIS)

    Gries, W.H.

    1976-06-01

    This is a report of the study of the implantation of heavy ions at medium keV-energies into electrically conducting mono-elemental solids, at ion doses too small to cause significant loss of the implanted ions by resputtering. The study has been undertaken to investigate the possibility of accurate portioning of matter in submicrogram quantities, with some specific applications in mind. The problem is extensively investigated both on a theoretical level and in practice. A mathematical model is developed for calculating the loss of implanted ions by resputtering as a function of the implanted ion dose and the sputtering yield. Numerical data are produced therefrom which permit a good order-of-magnitude estimate of the loss for any ion/solid combination in which the ions are heavier than the solid atoms, and for any ion energy from 10 to 300 keV. The implanted ion dose is measured by integration of the ion beam current, and equipment and techniques are described which make possible the accurate integration of an ion current in an electromagnetic isotope separator. The methods are applied to two sample cases, one being a stable isotope, the other a radioisotope. In both cases independent methods are used to show that the implantation is indeed quantitative, as predicted. At the same time the sample cases are used to demonstrate two possible applications for quantitative ion implantation, viz. firstly for the manufacture of calibration standards for instrumental micromethods of elemental trace analysis in metals, and secondly for the determination of the half-lives of long-lived radioisotopes by a specific activity method. It is concluded that the present study has advanced quantitative ion implantation to the state where it can be successfully applied to the solution of problems in other fields

  14. Quantitative image fusion in infrared radiometry

    Science.gov (United States)

    Romm, Iliya; Cukurel, Beni

    2018-05-01

    Towards high-accuracy infrared radiance estimates, measurement practices and processing techniques aimed to achieve quantitative image fusion using a set of multi-exposure images of a static scene are reviewed. The conventional non-uniformity correction technique is extended, as the original is incompatible with quantitative fusion. Recognizing the inherent limitations of even the extended non-uniformity correction, an alternative measurement methodology, which relies on estimates of the detector bias using self-calibration, is developed. Combining data from multi-exposure images, two novel image fusion techniques that ultimately provide high tonal fidelity of a photoquantity are considered: ‘subtract-then-fuse’, which conducts image subtraction in the camera output domain and partially negates the bias frame contribution common to both the dark and scene frames; and ‘fuse-then-subtract’, which reconstructs the bias frame explicitly and conducts image fusion independently for the dark and the scene frames, followed by subtraction in the photoquantity domain. The performances of the different techniques are evaluated for various synthetic and experimental data, identifying the factors contributing to potential degradation of the image quality. The findings reflect the superiority of the ‘fuse-then-subtract’ approach, conducting image fusion via per-pixel nonlinear weighted least squares optimization.
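
    The 'fuse-then-subtract' estimate can be pictured as a per-pixel weighted least-squares combination of exposure-normalized frames after bias removal. The following is a hedged single-pixel sketch of that general idea only; the linear camera model, the inverse-variance weights, and all numbers are assumptions for illustration, not the paper's exact estimator.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    q_true = 50.0                          # photoquantity (counts per ms)
    t_exp = np.array([1.0, 4.0, 16.0])     # exposure times, ms
    bias, full_well = 100.0, 600.0         # assumed camera parameters

    raw = bias + q_true * t_exp + rng.normal(0, 5, 3)  # camera outputs
    valid = raw < full_well                # the 16 ms frame saturates; drop it
    rate = (raw - bias) / t_exp            # per-frame estimates of q
    w = t_exp ** 2                         # read-noise-limited: var(rate) ~ 1/t^2
    q_hat = np.sum(w[valid] * rate[valid]) / np.sum(w[valid])
    print(f"fused photoquantity: {q_hat:.1f} counts/ms")
    ```

    In the paper's scheme this fusion would be run separately for the dark and scene frames, with subtraction then performed in the photoquantity domain.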

  15. Developments in quantitative electron probe microanalysis

    International Nuclear Information System (INIS)

    Tixier, R.

    1977-01-01

    A study of the range of validity of the correction formulae used in massive specimen analysis is made. The method used is original; we have shown that it is possible to use an invariance property of corrected intensity ratios for standards. This invariance property provides a test for the self-consistency of the theory. The theoretical and experimental conditions required for quantitative electron probe microanalysis of thin transmission electron microscope specimens are examined. The correction formulae for atomic number, absorption and fluorescence effects are calculated. Several examples of experimental results are given, relative to the quantitative analysis of intermetallic precipitates and carbides in steels. Advances in applications of electron probe instruments related to the use of computers and the present development of fully automated instruments are reviewed. The statistics necessary for measurements of X-ray count data are studied. Estimation procedures and tests are developed. These methods are used to perform a statistical check of electron probe microanalysis measurements and to reject rogue values. An estimator of the confidence interval of the apparent concentration is derived. Formulae were also obtained to optimize the counting time in order to obtain the best precision in a minimum amount of time [fr]

  16. Reliability and precision of pellet-group counts for estimating landscape-level deer density

    Science.gov (United States)

    David S. deCalesta

    2013-01-01

    This study provides hitherto unavailable methodology for reliably and precisely estimating deer density within forested landscapes, enabling quantitative rather than qualitative deer management. Reliability and precision of the deer pellet-group technique were evaluated in 1 small and 2 large forested landscapes. Density estimates, adjusted to reflect deer harvest and...

  17. Estimation of Concrete Corrosion Due to Attack of Chloride Salt

    Directory of Open Access Journals (Sweden)

    V. V. Babitski

    2005-01-01

    The paper provides results of experimental research on concrete under conditions of concentrated chloride salt solutions. General principles for forecasting the corrosion resistance of concrete under physical salt corrosion are given in the paper. Analytical dependences for the quantitative estimation of corroded concrete have been obtained.

  18. Meta-analysis for quantitative microbiological risk assessments and benchmarking data

    NARCIS (Netherlands)

    Besten, den H.M.W.; Zwietering, M.H.

    2012-01-01

    Meta-analysis studies are increasingly being conducted in the food microbiology area to quantitatively integrate the findings of many individual studies on specific questions or kinetic parameters of interest. Meta-analyses provide global estimates of parameters and quantify their variabilities, and

  19. Quantitative Decision Support Requires Quantitative User Guidance

    Science.gov (United States)

    Smith, L. A.

    2009-12-01

    Is it conceivable that models run on 2007 computer hardware could provide robust and credible probabilistic information for decision support and user guidance at the ZIP code level for sub-daily meteorological events in 2060? In 2090? Retrospectively, how informative would output from today’s models have proven in 2003? Or in the 1930’s? Consultancies in the United Kingdom, including the Met Office, are offering services to “future-proof” their customers from climate change. How is a US or European based user or policy maker to determine the extent to which exciting new Bayesian methods are relevant here? Or when a commercial supplier is vastly overselling the insights of today’s climate science? How are policy makers and academic economists to make the closely related decisions facing them? How can we communicate deep uncertainty in the future at small length-scales without undermining the firm foundation established by climate science regarding global trends? Three distinct aspects of the communication of the uses of climate model output targeting users and policy makers, as well as other specialist adaptation scientists, are discussed. First, a brief scientific evaluation of the length and time scales at which climate model output is likely to become uninformative is provided, including a note on the applicability of the latest Bayesian methodology to the output of current state-of-the-art general circulation models. Second, a critical evaluation of the language often employed in communicating climate model output, language which accurately states that models are “better”, have “improved”, and now “include” and “simulate” relevant meteorological processes, without clearly identifying where the current information is thought to be uninformative or misleading, both for the current climate and as a function of the state of each climate simulation. And third, a general approach for evaluating the relevance of quantitative climate model output

  20. Quantitative autoradiography - a method of radioactivity measurement

    International Nuclear Information System (INIS)

    Treutler, H.C.; Freyer, K.

    1988-01-01

    In recent years autoradiography has been developed into a quantitative method of radioactivity measurement. Operating techniques of quantitative autoradiography are demonstrated using special standard objects. Influences of irradiation quality, of backscattering in sample and detector materials, and of sensitivity and fading of the detectors are considered. Furthermore, questions of the quantitative evaluation of autoradiograms are dealt with, and measuring errors are discussed. Finally, some practical uses of quantitative autoradiography are demonstrated by means of the estimation of activity distributions in radioactive foil samples. (author)

  1. Quantitative performance monitoring

    International Nuclear Information System (INIS)

    Heller, A.S.

    1987-01-01

    In the recently published update of NUREG/CR 3883, it was shown that Japanese plants of size and design similar to those in the US have significantly fewer trips in a given year of operation. One way to reduce such an imbalance is the efficient use of available plant data. Since plant data are recorded and monitored continuously for management feedback and timely resolution of problems, these data should be actively used to increase the efficiency of operations and, ultimately, to reduce trips in power plants. A great deal of information is lost, however, if the analytical tools available for data evaluation are misapplied or not adopted at all. This paper deals with a program developed to use quantitative techniques to monitor personnel performance in an operating power plant. Visual comparisons of ongoing performance with predetermined quantitative performance goals are made. Continuous feedback is provided to management for early detection of adverse trends and timely resolution of problems. Ultimately, costs are reduced through effective resource management and timely decision making

  2. Quantitative investment analysis

    CERN Document Server

    DeFusco, Richard

    2007-01-01

    In the "Second Edition" of "Quantitative Investment Analysis," financial experts Richard DeFusco, Dennis McLeavey, Jerald Pinto, and David Runkle outline the tools and techniques needed to understand and apply quantitative methods to today's investment process.

  3. Generalized PSF modeling for optimized quantitation in PET imaging.

    Science.gov (United States)

    Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman

    2017-06-21

    Point-spread function (PSF) modeling offers the ability to account for resolution degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying PSF-modelled kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF
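
    To make the under-/matched-/over-estimated PSF comparison concrete, here is a minimal one-dimensional MLEM sketch in which a Gaussian kernel plays the role of the PSF inside the system model. It illustrates the general mechanism only, not the paper's reconstruction framework; the kernel widths, counts, and phantom are assumed values.

    ```python
    import numpy as np

    def gaussian_kernel(sigma, half=10):
        x = np.arange(-half, half + 1)
        k = np.exp(-0.5 * (x / sigma) ** 2)
        return k / k.sum()

    def mlem(data, sigma_model, n_iter=50):
        """1-D MLEM with a convolutional (PSF) system model."""
        k = gaussian_kernel(sigma_model)
        est = np.ones_like(data)
        for _ in range(n_iter):
            fwd = np.convolve(est, k, mode="same")           # forward project
            ratio = data / np.maximum(fwd, 1e-9)
            est *= np.convolve(ratio, k[::-1], mode="same")  # back project
        return est

    rng = np.random.default_rng(0)
    truth = np.zeros(128); truth[60:68] = 10.0               # small hot lesion
    blurred = np.convolve(truth, gaussian_kernel(2.0), mode="same")
    data = rng.poisson(blurred).astype(float)

    for s in (1.0, 2.0, 3.0):   # under-, matched, over-estimated PSF width
        rec = mlem(data, s)
        print(f"sigma={s}: max={rec.max():.1f}, mean(ROI)={rec[60:68].mean():.1f}")
    ```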

  4. Quantitative Microbial Risk Assessment Tutorial - Primer

    Science.gov (United States)

    This document provides a Quantitative Microbial Risk Assessment (QMRA) primer that organizes QMRA tutorials. The tutorials describe functionality of a QMRA infrastructure, guide the user through software use and assessment options, provide step-by-step instructions for implementi...

  5. Quantitation: clinical applications

    International Nuclear Information System (INIS)

    Britton, K.E.

    1982-01-01

    Single photon emission tomography may be used quantitatively if its limitations are recognized and quantitation is made in relation to some reference area on the image. Relative quantitation is discussed in outline in relation to the liver, brain and pituitary, thyroid, adrenals, and heart. (U.K.)

  6. Quantitative Reasoning in Problem Solving

    Science.gov (United States)

    Ramful, Ajay; Ho, Siew Yin

    2015-01-01

    In this article, Ajay Ramful and Siew Yin Ho explain the meaning of quantitative reasoning, describing how it is used to solve mathematical problems. They also describe a diagrammatic approach to represent relationships among quantities and provide examples of problems and their solutions.

  7. The additivity of radionuclide and chemical risk estimates in performance evaluation of mixed-waste sites

    International Nuclear Information System (INIS)

    Till, J.E.; Meyer, K.R.

    1990-01-01

    Methods for assessing radioactive waste sites that contain chemical constituents are in the formative stages. In evaluating these sites, a key concern will be the hazard to personnel involved in cleanup work and to the general population. This paper focuses on what we have learned from pathway analysis and risk assessment about providing a combined estimate of risk from exposure to both chemicals and radionuclides. Quantitative radiation risk assessment involves a high degree of uncertainty. Chemical risk assessment generally does not provide quantitative results. Thus, it is not currently possible to develop a useful, quantitative combined risk assessment for mixed-waste sites

  8. PCA-based groupwise image registration for quantitative MRI

    NARCIS (Netherlands)

    Huizinga, W.; Poot, D. H. J.; Guyader, J.-M.; Klaassen, R.; Coolen, B. F.; van Kranenburg, M.; van Geuns, R. J. M.; Uitterdijk, A.; Polfliet, M.; Vandemeulebroucke, J.; Leemans, A.; Niessen, W. J.; Klein, S.

    2016-01-01

    Quantitative magnetic resonance imaging (qMRI) is a technique for estimating quantitative tissue properties, such as the T1 and T2 relaxation times, apparent diffusion coefficient (ADC), and various perfusion measures. This estimation is achieved by acquiring multiple images with different
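
    As a concrete instance of this estimation step, the T2 relaxation time can be recovered per voxel from images acquired at several echo times by fitting a mono-exponential decay. A minimal single-voxel sketch on synthetic data follows; the echo times, noise level, and true values are illustrative assumptions.

    ```python
    import numpy as np

    te = np.array([10, 20, 40, 60, 80, 100.0])   # echo times, ms (assumed)
    t2_true, s0_true = 55.0, 1200.0
    rng = np.random.default_rng(1)
    # Signal model S(TE) = S0 * exp(-TE / T2), with multiplicative noise
    s = s0_true * np.exp(-te / t2_true) * rng.normal(1.0, 0.02, te.size)

    # Log-linearize and fit a straight line: log S = log S0 - TE / T2
    slope, intercept = np.polyfit(te, np.log(s), 1)
    print(f"T2 ~ {-1 / slope:.1f} ms, S0 ~ {np.exp(intercept):.0f}")
    ```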

  9. Quantitative evaluation of flow systems, groundwater recharge and transmissivities using environmental tracers

    Energy Technology Data Exchange (ETDEWEB)

    Adar, E M [Ben-Gurion Univ. of Negev, Sede Boker Campus (Israel). Water Resources Center

    1996-10-01

    This chapter provides an overview of the basic concepts and formulations on the compartmental (mixing-cell) approach for interpretation of isotope and natural tracer data to arrive at quantitative estimates related to groundwater systems. The theoretical basis of the models and the specific solution algorithms used are described. The application of this approach to field cases are described as illustrative examples. Results of sensitivity analyses of the model to different parameters are provided. (author). 81 refs, 13 figs, 8 tabs.
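
    In the mixing-cell formulation, each cell contributes a water-balance equation plus one tracer-balance equation per tracer, which are solved for the unknown mixing fractions under non-negativity constraints. A hedged single-cell sketch of that idea follows; the tracer signatures and the measured mixture are invented numbers.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Tracer signatures (rows: e.g. 18O, Cl, SO4) of three candidate sources
    C = np.array([[ -8.0,  -5.5,  -7.2],
                  [ 40.0, 120.0,  60.0],
                  [ 15.0,  80.0,  20.0]])
    c_cell = np.array([-6.85, 75.0, 40.0])   # measured mixture in the cell

    # Water-balance row (fractions sum to 1), up-weighted so closure is
    # enforced against the larger-magnitude tracer rows
    A = np.vstack([100.0 * np.ones(3), C])
    b = np.concatenate([[100.0], c_cell])
    frac, resid = nnls(A, b)                 # non-negative mixing fractions
    print("mixing fractions:", frac.round(3), "residual:", round(resid, 3))
    ```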

  10. Quantitative evaluation of flow systems, groundwater recharge and transmissivities using environmental tracers

    International Nuclear Information System (INIS)

    Adar, E.M.

    1996-01-01

    This chapter provides an overview of the basic concepts and formulations on the compartmental (mixing-cell) approach for interpretation of isotope and natural tracer data to arrive at quantitative estimates related to groundwater systems. The theoretical basis of the models and the specific solution algorithms used are described. The application of this approach to field cases are described as illustrative examples. Results of sensitivity analyses of the model to different parameters are provided. (author). 81 refs, 13 figs, 8 tabs

  11. EFSA Panel on Biological Hazards (BIOHAZ); Scientific Opinion on a quantitative estimation of the public health impact of setting a new target for the reduction of Salmonella in broilers

    DEFF Research Database (Denmark)

    Hald, Tine

    This assessment relates the percentage of broiler-associated human salmonellosis cases to different Salmonella prevalences in broiler flocks in the European Union. It considers the contribution and relevance of different Salmonella serovars found in broilers to human salmonellosis. The model (the …-SAM model) employs data from the EU Baseline Surveys and EU statutory monitoring on Salmonella in animal-food sources, data on the incidence of human salmonellosis and food availability data. It is estimated that around 2.4%, 65%, 28% and 4.5% of the human salmonellosis cases are attributable to broilers, laying hens (eggs), pigs and turkeys, respectively. Of the broiler-associated human salmonellosis cases, around 42% and 23% are estimated to be due to the serovars Salmonella Enteritidis and Salmonella Infantis respectively, while other serovars individually contributed less than 5%. Different scenarios

  12. Radiation risk estimation

    International Nuclear Information System (INIS)

    Schull, W.J.; Texas Univ., Houston, TX

    1992-01-01

    Estimation of the risk of cancer following exposure to ionizing radiation remains largely empirical, and the models used to adduce risk incorporate few, if any, of the advances in molecular biology of the past decade or so. These facts compromise the estimation of risk where the epidemiological data are weakest, namely, at low doses and dose rates. Without a better understanding of the molecular and cellular events that ionizing radiation initiates or promotes, it seems unlikely that this situation will improve. Nor will the situation improve without further attention to the identification and quantitative estimation of the effects of those host and environmental factors that enhance or attenuate risk. (author)

  13. Validity and Reproducibility of a Self-Administered Semi-Quantitative Food-Frequency Questionnaire for Estimating Usual Daily Fat, Fibre, Alcohol, Caffeine and Theobromine Intakes among Belgian Post-Menopausal Women

    Directory of Open Access Journals (Sweden)

    Selin Bolca

    2009-01-01

    A novel food-frequency questionnaire (FFQ) was developed and validated to assess the usual daily fat, saturated, mono-unsaturated and poly-unsaturated fatty acid, fibre, alcohol, caffeine, and theobromine intakes among Belgian post-menopausal women participating in dietary intervention trials with phyto-oestrogens. The relative validity of the FFQ was estimated by comparison with 7 day (d) estimated diet records (EDR, n 64) and its reproducibility was evaluated by repeated administrations 6 weeks apart (n 79). Although the questionnaire underestimated significantly all intakes compared to the 7 d EDR, it had a good ranking ability (r 0.47-0.94; weighted κ 0.25-0.66) and it could reliably distinguish extreme intakes for all the estimated nutrients, except for saturated fatty acids. Furthermore, the correlation between repeated administrations was high (r 0.71-0.87) with a maximal misclassification of 7% (weighted κ 0.33-0.80). In conclusion, these results compare favourably with those reported by others and indicate that the FFQ is a satisfactorily reliable and valid instrument for ranking individuals within this study population.

  14. Validity and reproducibility of a self-administered semi-quantitative food-frequency questionnaire for estimating usual daily fat, fibre, alcohol, caffeine and theobromine intakes among Belgian post-menopausal women.

    Science.gov (United States)

    Bolca, Selin; Huybrechts, Inge; Verschraegen, Mia; De Henauw, Stefaan; Van de Wiele, Tom

    2009-01-01

    A novel food-frequency questionnaire (FFQ) was developed and validated to assess the usual daily fat, saturated, mono-unsaturated and poly-unsaturated fatty acid, fibre, alcohol, caffeine, and theobromine intakes among Belgian post-menopausal women participating in dietary intervention trials with phyto-oestrogens. The relative validity of the FFQ was estimated by comparison with 7 day (d) estimated diet records (EDR, n 64) and its reproducibility was evaluated by repeated administrations 6 weeks apart (n 79). Although the questionnaire underestimated significantly all intakes compared to the 7 d EDR, it had a good ranking ability (r 0.47-0.94; weighted kappa 0.25-0.66) and it could reliably distinguish extreme intakes for all the estimated nutrients, except for saturated fatty acids. Furthermore, the correlation between repeated administrations was high (r 0.71-0.87) with a maximal misclassification of 7% (weighted kappa 0.33-0.80). In conclusion, these results compare favourably with those reported by others and indicate that the FFQ is a satisfactorily reliable and valid instrument for ranking individuals within this study population.

  15. Good practices for quantitative bias analysis.

    Science.gov (United States)

    Lash, Timothy L; Fox, Matthew P; MacLehose, Richard F; Maldonado, George; McCandless, Lawrence C; Greenland, Sander

    2014-12-01

    Quantitative bias analysis serves several objectives in epidemiological research. First, it provides a quantitative estimate of the direction, magnitude and uncertainty arising from systematic errors. Second, the acts of identifying sources of systematic error, writing down models to quantify them, assigning values to the bias parameters and interpreting the results combat the human tendency towards overconfidence in research results, syntheses and critiques and the inferences that rest upon them. Finally, by suggesting aspects that dominate uncertainty in a particular research result or topic area, bias analysis can guide efficient allocation of sparse research resources. The fundamental methods of bias analysis have been known for decades, and there have been calls for more widespread use for nearly as long. There was a time when some believed that bias analyses were rarely undertaken because the methods were not widely known and because automated computing tools were not readily available to implement the methods. These shortcomings have been largely resolved. We must, therefore, contemplate other barriers to implementation. One possibility is that practitioners avoid the analyses because they lack confidence in the practice of bias analysis. The purpose of this paper is therefore to describe what we view as good practices for applying quantitative bias analysis to epidemiological data, directed towards those familiar with the methods. We focus on answering questions often posed to those of us who advocate incorporation of bias analysis methods into teaching and research. These include the following. When is bias analysis practical and productive? How does one select the biases that ought to be addressed? How does one select a method to model biases? How does one assign values to the parameters of a bias model? How does one present and interpret a bias analysis? We hope that our guide to good practices for conducting and presenting bias analyses will encourage
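
    As a small worked instance of the kind of analysis the authors advocate, the sketch below applies a simple (deterministic) correction for nondifferential exposure misclassification to a 2x2 case-control table. The counts and the sensitivity/specificity bias parameters are invented for illustration.

    ```python
    def correct_cell(a_obs, n_total, se, sp):
        """Back-calculate the true number exposed from an observed count,
        given classification sensitivity (se) and specificity (sp)."""
        return (a_obs - (1 - sp) * n_total) / (se + sp - 1)

    def odds_ratio(a, b, c, d):
        return (a * d) / (b * c)

    # Observed exposed/unexposed counts among cases and controls (invented)
    a_obs, b_obs = 300, 700          # cases
    c_obs, d_obs = 200, 800          # controls
    se, sp = 0.80, 0.90              # assumed bias parameters

    a = correct_cell(a_obs, a_obs + b_obs, se, sp)
    c = correct_cell(c_obs, c_obs + d_obs, se, sp)
    b, d = (a_obs + b_obs) - a, (c_obs + d_obs) - c

    print(f"observed OR       = {odds_ratio(a_obs, b_obs, c_obs, d_obs):.2f}")
    print(f"bias-corrected OR = {odds_ratio(a, b, c, d):.2f}")
    ```

    A probabilistic bias analysis would repeat this correction while drawing se and sp from prior distributions, yielding an uncertainty interval for the corrected estimate.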

  16. Quantitative densitometry of neurotransmitter receptors

    International Nuclear Information System (INIS)

    Rainbow, T.C.; Bleisch, W.V.; Biegon, A.; McEwen, B.S.

    1982-01-01

    An autoradiographic procedure is described that allows the quantitative measurement of neurotransmitter receptors by optical density readings. Frozen brain sections are labeled in vitro with [3H]ligands under conditions that maximize specific binding to neurotransmitter receptors. The labeled sections are then placed against 3H-sensitive LKB Ultrofilm to produce the autoradiograms. These autoradiograms resemble those produced by [14C]deoxyglucose autoradiography and are suitable for quantitative analysis with a densitometer. Muscarinic cholinergic receptors in rat and zebra finch brain and 5-HT receptors in rat brain were visualized by this method. When the proper combination of ligand concentration and exposure time is used, the method provides quantitative information about the amount and affinity of neurotransmitter receptors in brain sections. This was established by comparisons of densitometric readings with parallel measurements made by scintillation counting of sections. (Auth.)

  17. Energy Education: The Quantitative Voice

    Science.gov (United States)

    Wolfson, Richard

    2010-02-01

    A serious study of energy use and its consequences has to be quantitative. It makes little sense to push your favorite renewable energy source if it can't provide enough energy to make a dent in humankind's prodigious energy consumption. Conversely, it makes no sense to dismiss alternatives, solar in particular, that supply Earth with energy at some 10,000 times our human energy consumption rate. But being quantitative, especially with nonscience students or the general public, is a delicate business. This talk draws on the speaker's experience presenting energy issues to diverse audiences through single lectures, entire courses, and a textbook. The emphasis is on developing a quick, "back-of-the-envelope" approach to quantitative understanding of energy issues.

  18. QUANTITATIVE CONFOCAL LASER SCANNING MICROSCOPY

    Directory of Open Access Journals (Sweden)

    Merete Krog Raarup

    2011-05-01

    This paper discusses recent advances in confocal laser scanning microscopy (CLSM) for imaging of 3D structure as well as quantitative characterization of biomolecular interactions and diffusion behaviour by means of one- and two-photon excitation. The use of CLSM for improved stereological length estimation in thick (up to 0.5 mm) tissue is proposed. The techniques of FRET (Fluorescence Resonance Energy Transfer), FLIM (Fluorescence Lifetime Imaging Microscopy), FCS (Fluorescence Correlation Spectroscopy) and FRAP (Fluorescence Recovery After Photobleaching) are introduced and their applicability for quantitative imaging of biomolecular (co-)localization and trafficking in live cells described. The advantage of two-photon versus one-photon excitation in relation to these techniques is discussed.

  19. Quantitative imaging methods in osteoporosis.

    Science.gov (United States)

    Oei, Ling; Koromani, Fjorda; Rivadeneira, Fernando; Zillikens, M Carola; Oei, Edwin H G

    2016-12-01

    Osteoporosis is characterized by a decreased bone mass and quality resulting in an increased fracture risk. Quantitative imaging methods are critical in the diagnosis and follow-up of treatment effects in osteoporosis. Prior radiographic vertebral fractures and bone mineral density (BMD) as a quantitative parameter derived from dual-energy X-ray absorptiometry (DXA) are among the strongest known predictors of future osteoporotic fractures. Therefore, current clinical decision making relies heavily on accurate assessment of these imaging features. Further, novel quantitative techniques are being developed to appraise additional characteristics of osteoporosis including three-dimensional bone architecture with quantitative computed tomography (QCT). Dedicated high-resolution (HR) CT equipment is available to enhance image quality. At the other end of the spectrum, by utilizing post-processing techniques such as the trabecular bone score (TBS) information on three-dimensional architecture can be derived from DXA images. Further developments in magnetic resonance imaging (MRI) seem promising to not only capture bone micro-architecture but also characterize processes at the molecular level. This review provides an overview of various quantitative imaging techniques based on different radiological modalities utilized in clinical osteoporosis care and research.

  20. Estimating blood and brain concentrations and blood-to-brain influx by magnetic resonance imaging with step-down infusion of Gd-DTPA in focal transient cerebral ischemia and confirmation by quantitative autoradiography with Gd-[14C]DTPA

    OpenAIRE

    Knight, Robert A; Karki, Kishor; Ewing, James R; Divine, George W; Fenstermacher, Joseph D; Patlak, Clifford S; Nagaraja, Tavarekere N

    2009-01-01

    An intravenous step-down infusion procedure that maintained a constant gadolinium-diethylene-triaminepentaacetic acid (Gd-DTPA) blood concentration and magnetic resonance imaging (MRI) were used to localize and quantify the blood–brain barrier (BBB) opening in a rat model of transient cerebral ischemia (n = 7). Blood-to-brain influx rate constant (Ki) values of Gd-DTPA from such regions were estimated using MRI–Patlak plots and compared with the Ki values of Gd-[14C]DTPA, determined minutes l...

  1. Application of sensitivity analysis to a quantitative assessment of neutron cross-section requirements for the TFTR: an interim report

    International Nuclear Information System (INIS)

    Gerstl, S.A.W.; Dudziak, D.J.; Muir, D.W.

    1975-09-01

    A computational method to determine cross-section requirements quantitatively is described and applied to the Tokamak Fusion Test Reactor (TFTR). In order to provide a rational basis for the priorities assigned to new cross-section measurements or evaluations, this method includes quantitative estimates of the uncertainty of currently available data, the sensitivity of important nuclear design parameters to selected cross sections, and the accuracy desired in predicting nuclear design parameters. Perturbation theory is used to combine estimated cross-section uncertainties with calculated sensitivities to determine the variance of any nuclear design parameter of interest
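
    The combination step described here is commonly written as the "sandwich rule": the relative variance of a nuclear design parameter is S^T C S, with S the vector of relative sensitivities and C the relative covariance matrix of the cross sections. A small numerical sketch with invented values:

    ```python
    import numpy as np

    # Relative sensitivities dR/R per dsigma/sigma for three cross sections
    S = np.array([0.80, -0.30, 0.15])
    # Relative covariance matrix of those cross sections (fractional^2)
    C = np.array([[0.010, 0.002, 0.0],
                  [0.002, 0.025, 0.0],
                  [0.0,   0.0,   0.04]])

    var = S @ C @ S   # sandwich rule: variance of the design parameter
    print(f"relative std. dev. of design parameter: {np.sqrt(var):.1%}")
    ```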

  2. Quantitative analysis chemistry

    International Nuclear Information System (INIS)

    Ko, Wansuk; Lee, Choongyoung; Jun, Kwangsik; Hwang, Taeksung

    1995-02-01

    This book is about quantitative analytical chemistry. It is divided into ten chapters, covering the basic concepts of the subject, including the meaning of analytical chemistry and SI units; chemical equilibrium; basic preparation for quantitative analysis; an introduction to volumetric analysis; acid-base titration, with an outline and experimental examples; chelate titration; oxidation-reduction titration, with an introduction, titration curves, and diazotization titration; precipitation titration; electrometric titration; and quantitative analysis.

  3. Using Popular Culture to Teach Quantitative Reasoning

    Science.gov (United States)

    Hillyard, Cinnamon

    2007-01-01

    Popular culture provides many opportunities to develop quantitative reasoning. This article describes a junior-level, interdisciplinary, quantitative reasoning course that uses examples from movies, cartoons, television, magazine advertisements, and children's literature. Some benefits from and cautions to using popular culture to teach…

  4. Quantitative Algebraic Reasoning

    DEFF Research Database (Denmark)

    Mardare, Radu Iulian; Panangaden, Prakash; Plotkin, Gordon

    2016-01-01

    We develop a quantitative analogue of equational reasoning which we call quantitative algebra. We define an equality relation indexed by rationals: a =ε b, which we think of as saying that “a is approximately equal to b up to an error of ε”. We have four interesting examples where we have a quantitative equational theory whose free algebras correspond to well-known structures. In each case we have finitary and continuous versions. The four cases are: Hausdorff metrics from quantitative semilattices; p-Wasserstein metrics (hence also the Kantorovich metric) from barycentric algebras and also from pointed...

  5. Quantitative autoradiography of neurochemicals

    International Nuclear Information System (INIS)

    Rainbow, T.C.; Biegon, A.; Bleisch, W.V.

    1982-01-01

    Several new methods have been developed that apply quantitative autoradiography to neurochemistry. These methods are derived from the 2-deoxyglucose (2DG) technique of Sokoloff (1), which uses quantitative autoradiography to measure the rate of glucose utilization in brain structures. The new methods allow the measurement of the rate of cerebral protein synthesis and the levels of particular neurotransmitter receptors by quantitative autoradiography. As with the 2DG method, the new techniques can measure molecular levels in micron-sized brain structures and can be used in conjunction with computerized systems of image processing. It is possible that many neurochemical measurements could be made by computerized analysis of quantitative autoradiograms

  6. Quantitative renal cinescintigraphy with iodine-123 hippuran methodological aspects, kit for labeling of hippuran

    International Nuclear Information System (INIS)

    Mehdaoui, A.; Pecking, A.; Delorme, G.; Mathonnat, F.; Debaud, B.; Bardy, A.; Coornaert, S.; Merlin, L.; Vinot, J.M.; Desgrez, A.; Gambini, D.; Vernejoul, P. de.

    1981-08-01

    The development of an extemporaneous kit for the labeling of ortho-iodo-hippuric acid (Hippuran) with iodine-123 allows routine quantitative renal cinescintigraphy, providing in 20 minutes, and in a completely non-traumatic way, a very complete renal morphofunctional study including: cortical renal scintigraphy, sequential scintigraphies of the excretory tract, renal functional curves, and tubular, global, and separate clearances for each kidney. This quantitative functional investigation method should take a preferential place in the routine renal work-up. The methodology of the technique is explained and compared to classical methods for the estimation of tubular, global and separate clearances [fr]

  7. Quantitative Methods in the Study of Local History

    Science.gov (United States)

    Davey, Pene

    1974-01-01

    The author suggests how the quantitative analysis of data from census records, assessment roles, and newspapers may be integrated into the classroom. Suggestions for obtaining quantitative data are provided. (DE)

  8. Quantitative Methods for Molecular Diagnostic and Therapeutic Imaging

    OpenAIRE

    Li, Quanzheng

    2013-01-01

    This theme issue provides an overview on the basic quantitative methods, an in-depth discussion on the cutting-edge quantitative analysis approaches as well as their applications for both static and dynamic molecular diagnostic and therapeutic imaging.

  9. Therapy Provider Phase Information

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Therapy Provider Phase Information dataset is a tool for providers to search by their National Provider Identifier (NPI) number to determine their phase for...

  10. Effect of long-term impact-loading on mass, size, and estimated strength of humerus and radius of female racquet-sports players: a peripheral quantitative computed tomography study between young and old starters and controls.

    Science.gov (United States)

    Kontulainen, Saija; Sievänen, Harri; Kannus, Pekka; Pasanen, Matti; Vuori, Ilkka

    2003-02-01

    Bone characteristics of the humeral shaft and distal radius were measured from 64 female tennis and squash players and their 27 age-, height-, and weight-matched controls with peripheral quantitative computed tomography (pQCT) and dual energy X-ray absorptiometry (DXA). The players were divided into two groups according to the starting age of their tennis or squash training (either before or after menarche) to examine the possible differences in the loading-induced changes in bone structure and volumetric density. The following pQCT variables were used: bone mineral content, total cross-sectional area of bone (TotA), cross-sectional area of the marrow cavity (CavA) and that of the cortical bone (CoA), cortical wall thickness (CWT), volumetric density of the cortical bone (CoD) and trabecular bone (TrD), and torsional bone strength index for the shaft (BSIt) and compressional bone strength index for the bone end (BSIc). These bone strength indices were compared with the DXA-derived areal bone mineral density (aBMD) to assess how well the latter represents the effect of mechanical loading on apparent bone strength. At the humeral shaft, the loaded arm's greater bone mineral content (an average 19% side-to-side difference in young starters and 9% in old starters) was caused by an enlarged cortex (CoA; side-to-side differences 20% and 9%, respectively). The loaded humerus seemed to have grown periosteally (the CavA did not differ between the sites), leading to 26% and 11% side-to-side BSIt differences in the young and old starters, respectively. CoD was equal between the arms (-1% difference in both player groups). The side-to-side differences in the young starters' bone mineral content, CoA, TotA, CWT, and BSIt were 8-22% higher than those of the controls and 8-14% higher than those of the old starters. Old starters' bone mineral content, CoA, and BSIt side-to-side differences were 6-7% greater than those in the controls. The DXA-derived side-to-side aBMD difference was 7

  11. Electrical estimating methods

    CERN Document Server

    Del Pico, Wayne J

    2014-01-01

    Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el

  12. Quantitative assessment of CA1 local circuits: knowledge base for interneuron-pyramidal cell connectivity.

    Science.gov (United States)

    Bezaire, Marianne J; Soltesz, Ivan

    2013-09-01

    In this work, through a detailed literature review, data-mining, and extensive calculations, we provide a current, quantitative estimate of the cellular and synaptic constituents of the CA1 region of the rat hippocampus. Beyond estimating the cell numbers of GABAergic interneuron types, we calculate their convergence onto CA1 pyramidal cells and compare it with the known input synapses on CA1 pyramidal cells. The convergence calculation and comparison are also made for excitatory inputs to CA1 pyramidal cells. In addition, we provide a summary of the excitatory and inhibitory convergence onto interneurons. The quantitative knowledge base assembled and synthesized here forms the basis for data-driven, large-scale computational modeling efforts. Additionally, this work highlights specific instances where the available data are incomplete, which should inspire targeted experimental projects toward a more complete quantification of the CA1 neurons and their connectivity. Copyright © 2013 Wiley Periodicals, Inc.
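
    The convergence bookkeeping described above reduces to simple arithmetic once cell counts and per-cell bouton numbers are fixed. A back-of-the-envelope sketch follows; the counts below are placeholders for illustration, not the paper's values.

    ```python
    # Convergence of one interneuron type onto CA1 pyramidal cells
    n_interneurons = 5500       # presynaptic cells of this type (placeholder)
    boutons_per_cell = 10000    # output synapses per interneuron (placeholder)
    frac_onto_pc = 0.9          # fraction of boutons targeting pyramidal cells
    n_pyramidal = 311500        # CA1 pyramidal cells, one hemisphere (placeholder)

    synapses_onto_pc = n_interneurons * boutons_per_cell * frac_onto_pc
    convergence = synapses_onto_pc / n_pyramidal
    print(f"~{convergence:.0f} synapses of this type per pyramidal cell")
    ```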

  13. Development of a simple estimation tool for LMFBR construction cost

    International Nuclear Information System (INIS)

    Yoshida, Kazuo; Kinoshita, Izumi

    1999-01-01

    A simple tool for estimating the construction costs of liquid-metal-cooled fast breeder reactors (LMFBRs), 'Simple Cost', was developed in this study. Simple Cost is based on a new estimation formula that reduces the amount of design data required to estimate construction costs. Consequently, Simple Cost can be used to estimate the construction costs of innovative LMFBR concepts for which detailed design has not been carried out. The results of test calculations show that Simple Cost provides cost estimates equivalent to those obtained with conventional methods within the range of plant power from 325 to 1500 MWe. Sensitivity analyses for typical design parameters were conducted using Simple Cost. The effects of four major parameters - reactor vessel diameter, core outlet temperature, sodium handling area and number of secondary loops - on the construction costs of LMFBRs were evaluated quantitatively. The results show that the reduction of sodium handling area is particularly effective in reducing construction costs. (author)

  14. Quantitative histological models suggest endothermy in plesiosaurs

    Directory of Open Access Journals (Sweden)

    Corinna V. Fleischle

    2018-06-01

    Background. Plesiosaurs are marine reptiles that arose in the Late Triassic and survived to the Late Cretaceous. They have a unique and uniform bauplan and are known for their very long neck and hydrofoil-like flippers. Plesiosaurs are among the most successful vertebrate clades in Earth’s history. Based on bone mass decrease and cosmopolitan distribution, both of which affect lifestyle, indications of parental care, and oxygen isotope analyses, evidence for endothermy in plesiosaurs has accumulated. Recent bone histological investigations also provide evidence of fast growth and elevated metabolic rates. However, quantitative estimations of metabolic rates and bone growth rates in plesiosaurs have not been attempted before. Methods. Phylogenetic eigenvector maps is a method for estimating trait values from a predictor variable while taking into account phylogenetic relationships. As the predictor variable, this study employs vascular density, measured in bone histological sections of fossil eosauropterygians and extant comparative taxa. We quantified vascular density as primary osteon density, i.e., the proportion of vascular area (including lamellar infillings of primary osteons) to total bone area. Our response variables are bone growth rate (expressed as local bone apposition rate) and resting metabolic rate (RMR). Results. Our models reveal bone growth rates and RMRs for plesiosaurs that are in the range of birds, suggesting that plesiosaurs were endothermic. Even for basal eosauropterygians we estimate values in the range of mammals or higher. Discussion. Our models are influenced by the availability of comparative data, which are lacking for large marine amniotes, potentially skewing our results. However, our statistically robust inference of fast growth and fast metabolism is in accordance with other evidence for plesiosaurian endothermy. Endothermy may explain the success of plesiosaurs, including their survival of the end-Triassic extinction

  15. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

    2011-01-01

    In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
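
    For the rate-constant case mentioned above, here is a minimal sketch of the estimation by nonlinear least squares (equivalent to maximum likelihood under i.i.d. Gaussian measurement error) on synthetic first-order decay data; the mechanism, data, and initial guess are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def first_order(t, c0, k):
        """A -> B with dC/dt = -k*C, so C(t) = C0 * exp(-k*t)."""
        return c0 * np.exp(-k * t)

    t = np.array([0, 1, 2, 4, 6, 8, 10.0])            # sampling times
    c = np.array([1.00, 0.74, 0.55, 0.30, 0.17, 0.09, 0.05])  # concentrations

    (c0_hat, k_hat), cov = curve_fit(first_order, t, c, p0=(1.0, 0.1))
    k_se = np.sqrt(cov[1, 1])                         # standard error of k
    print(f"k = {k_hat:.3f} +/- {k_se:.3f} per unit time")
    ```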

  16. Quantitative film radiography

    International Nuclear Information System (INIS)

    Devine, G.; Dobie, D.; Fugina, J.; Hernandez, J.; Logan, C.; Mohr, P.; Moss, R.; Schumacher, B.; Updike, E.; Weirup, D.

    1991-01-01

    We have developed a system of quantitative radiography in order to produce quantitative images displaying homogeneity of parts. The materials that we characterize are synthetic composites and may contain important subtle density variations not discernible by examining a raw film x-radiograph. In order to quantitatively interpret film radiographs, it is necessary to digitize, interpret, and display the images. Our integrated system of quantitative radiography displays accurate, high-resolution pseudo-color images in units of density. We characterize approximately 10,000 parts per year in hundreds of different configurations and compositions with this system. This report discusses: the method; film processor monitoring and control; verifying film and processor performance; and correction of scatter effects

  17. Quantitative phase imaging of arthropods

    Science.gov (United States)

    Sridharan, Shamira; Katz, Aron; Soto-Adames, Felipe; Popescu, Gabriel

    2015-11-01

    Classification of arthropods is performed by characterization of fine features such as setae and cuticles. An unstained whole arthropod specimen mounted on a slide can be preserved for many decades, but is difficult to study since current methods require sample manipulation or tedious image processing. Spatial light interference microscopy (SLIM) is a quantitative phase imaging (QPI) technique that is an add-on module to a commercial phase contrast microscope. We use SLIM to image a whole organism springtail Ceratophysella denticulata mounted on a slide. This is the first time, to our knowledge, that an entire organism has been imaged using QPI. We also demonstrate the ability of SLIM to image fine structures in addition to providing quantitative data that cannot be obtained by traditional bright field microscopy.

  18. Applications of Microfluidics in Quantitative Biology.

    Science.gov (United States)

    Bai, Yang; Gao, Meng; Wen, Lingling; He, Caiyun; Chen, Yuan; Liu, Chenli; Fu, Xiongfei; Huang, Shuqiang

    2018-05-01

    Quantitative biology is dedicated to taking advantage of quantitative reasoning and advanced engineering technologies to make biology more predictable. Microfluidics, as an emerging technique, provides new approaches to precisely control fluidic conditions on small scales and collect data in high-throughput and quantitative manners. In this review, the authors present the relevant applications of microfluidics to quantitative biology based on two major categories (channel-based microfluidics and droplet-based microfluidics), and their typical features. We also envision some other microfluidic techniques that may not be employed in quantitative biology right now, but have great potential in the near future. © 2017 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences. Biotechnology Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  19. Evolutionary Quantitative Genomics of Populus trichocarpa.

    Directory of Open Access Journals (Sweden)

    Ilga Porth

    Forest trees generally show high levels of local adaptation, and efforts focusing on understanding adaptation to climate will be crucial for species survival and management. Here, we address fundamental questions regarding the molecular basis of adaptation in undomesticated forest tree populations to past climatic environments by employing an integrative quantitative genetics and landscape genomics approach. Using this comprehensive approach, we studied the molecular basis of climate adaptation in 433 Populus trichocarpa (black cottonwood) genotypes originating across western North America. Variation in 74 field-assessed traits (growth, ecophysiology, phenology, leaf stomata, wood, and disease resistance) was investigated for signatures of selection (comparing QST-FST) using clustering of individuals by climate of origin (temperature and precipitation). 29,354 SNPs were investigated employing three different outlier detection methods, and marker-inferred relatedness was estimated to obtain the narrow-sense estimate of population differentiation in wild populations. In addition, we compared our results with previously assessed selection of candidate SNPs using the 25 topographical units (drainages) across the P. trichocarpa sampling range as population groupings. Narrow-sense QST for 53% of distinct field traits was significantly divergent from expectations of neutrality (indicating adaptive trait variation); 2,855 SNPs showed signals of diversifying selection and of these, 118 SNPs (within 81 genes) were associated with adaptive traits (based on significant QST). Many SNPs were putatively pleiotropic for functionally uncorrelated adaptive traits, such as autumn phenology, height, and disease resistance. Evolutionary quantitative genomics in P. trichocarpa provides an enhanced understanding regarding the molecular basis of climate-driven selection in forest trees and we highlight that important loci underlying adaptive trait variation also show

  20. Objective evaluation of reconstruction methods for quantitative SPECT imaging in the absence of ground truth.

    Science.gov (United States)

    Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C

    2015-04-13

    Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.

  1. Providing Southern Perspectives on CSR

    DEFF Research Database (Denmark)

    Jeppesen, Søren; Kothuis, Bas

    The article seeks to contribute to the SMEs and CSR literature in developing countries by providing: a) a ‘Southern’ SME perspective, which includes the voices of managers and workers, b) a perspective of CSR which opens up to the informal CSR practices that SMEs undertake, and c) an analysis of the key institutional issues affecting the CSR practices of SMEs. It presents perceptions of CSR practices among 21 SMEs in the garment industry in South Africa, based on 40 interviews with managers and 19 interviews with workers through the use of qualitative and quantitative interview frameworks. It highlights a high degree of similarity between managers and workers, though knowledge (at the cognitive level) of the concept ‘CSR’ differs considerably. Informal practices are widespread and of key importance to the SMEs, as expressed by managers and workers alike. History, industry and manager-workers relations

  2. Medical service provider networks.

    Science.gov (United States)

    Mougeot, Michel; Naegelen, Florence

    2018-05-17

    In many countries, health insurers or health plans choose to contract either with any willing providers or with preferred providers. We compare these mechanisms when two medical services are imperfect substitutes in demand and are supplied by two different firms. In both cases, the reimbursement is higher when patients select the in-network provider(s). We show that these mechanisms yield lower prices, lower providers' and insurer's profits, and lower expense than in the uniform-reimbursement case. Whatever the degree of product differentiation, a not-for-profit insurer should prefer selective contracting and select a reimbursement such that the out-of-pocket expense is null. Although all providers join the network under any-willing-provider contracting in the absence of third-party payment, an asymmetric equilibrium may exist when this billing arrangement is implemented. Copyright © 2018 John Wiley & Sons, Ltd.

  3. Uncertainty estimation with a small number of measurements, part II: a redefinition of uncertainty and an estimator method

    Science.gov (United States)

    Huang, Hening

    2018-01-01

    This paper is the second (Part II) in a series of two papers (Part I and Part II). Part I has quantitatively discussed the fundamental limitations of the t-interval method for uncertainty estimation with a small number of measurements. This paper (Part II) reveals that the t-interval is an ‘exact’ answer to a wrong question; it is actually misused in uncertainty estimation. This paper proposes a redefinition of uncertainty, based on the classical theory of errors and the theory of point estimation, and a modification of the conventional approach to estimating measurement uncertainty. It also presents an asymptotic procedure for estimating the z-interval. The proposed modification is to replace the t-based uncertainty with an uncertainty estimator (mean- or median-unbiased). The uncertainty estimator method is an approximate answer to the right question to uncertainty estimation. The modified approach provides realistic estimates of uncertainty, regardless of whether the population standard deviation is known or unknown, or if the sample size is small or large. As an application example of the modified approach, this paper presents a resolution to the Du-Yang paradox (i.e. Paradox 2), one of the three paradoxes caused by the misuse of the t-interval in uncertainty estimation.
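
    In the spirit of the proposed modification, the sketch below contrasts the conventional t-interval half-width with a standard uncertainty based on a mean-unbiased estimator of the population standard deviation (the c4-corrected sample s). This illustrates the general idea only, not necessarily the paper's exact estimator; the measurement values are invented.

    ```python
    import numpy as np
    from scipy.stats import t
    from scipy.special import gammaln

    def c4(n):
        """Bias factor for normal samples: E[s] = c4(n) * sigma."""
        return np.sqrt(2 / (n - 1)) * np.exp(gammaln(n / 2) - gammaln((n - 1) / 2))

    x = np.array([10.2, 9.8, 10.5, 9.9])         # n = 4 measurements (invented)
    n, s = x.size, x.std(ddof=1)

    u_t = t.ppf(0.975, n - 1) * s / np.sqrt(n)   # t-interval half-width (95%)
    u_est = (s / c4(n)) / np.sqrt(n)             # mean-unbiased standard uncertainty
    print(f"t-based U = {u_t:.3f}, estimator-based u = {u_est:.3f}")
    ```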

  4. Mathematics of quantitative kinetic PCR and the application of standard curves.

    Science.gov (United States)

    Rutledge, R G; Côté, C

    2003-08-15

    Fluorescent monitoring of DNA amplification is the basis of real-time PCR, from which target DNA concentration can be determined from the fractional cycle at which a threshold amount of amplicon DNA is produced. Absolute quantification can be achieved using a standard curve constructed by amplifying known amounts of target DNA. In this study, the mathematics of quantitative PCR are examined in detail, from which several fundamental aspects of the threshold method and the application of standard curves are illustrated. The construction of five replicate standard curves for two pairs of nested primers was used to examine the reproducibility and degree of quantitative variation using SYBR Green I fluorescence. Based upon this analysis, the application of a single, well-constructed standard curve could provide an estimated precision of +/-6-21%, depending on the number of cycles required to reach threshold. A simplified method for absolute quantification is also proposed, in which quantitative scale is determined by DNA mass at threshold.
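
    A worked sketch of the threshold method and standard curve: known standards define a linear map from log10(initial copies) to the threshold cycle Ct, and the slope of that line also yields the amplification efficiency. All numbers below are invented for illustration.

    ```python
    import numpy as np

    log10_n0 = np.array([7, 6, 5, 4, 3])           # known standards (copies)
    ct = np.array([14.2, 17.6, 21.0, 24.4, 27.8])  # their threshold cycles

    slope, intercept = np.polyfit(log10_n0, ct, 1)
    efficiency = 10 ** (-1 / slope) - 1            # slope of -3.32 -> E = 100%
    print(f"slope = {slope:.2f}, amplification efficiency = {efficiency:.1%}")

    ct_unknown = 19.3                              # unknown sample's Ct
    copies = 10 ** ((ct_unknown - intercept) / slope)
    print(f"estimated initial copies: {copies:.3g}")
    ```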

  5. Short Course Introduction to Quantitative Mineral Resource Assessments

    Science.gov (United States)

    Singer, Donald A.

    2007-01-01

    This is an abbreviated text supplementing the content of three sets of slides used in a short course that has been presented by the author at several workshops. The slides should be viewed in the order of (1) Introduction and models, (2) Delineation and estimation, and (3) Combining estimates and summary. References cited in the slides are listed at the end of this text. The purpose of the three-part form of mineral resource assessments discussed in the accompanying slides is to make unbiased quantitative assessments in a format needed in decision-support systems so that consequences of alternative courses of action can be examined. The three-part form of mineral resource assessments was developed to assist policy makers evaluate the consequences of alternative courses of action with respect to land use and mineral-resource development. The audience for three-part assessments is a governmental or industrial policy maker, a manager of exploration, a planner of regional development, or similar decision-maker. Some of the tools and models presented here will be useful for selection of exploration sites, but that is a side benefit, not the goal. To provide unbiased information, we recommend the three-part form of mineral resource assessments where general locations of undiscovered deposits are delineated from a deposit type's geologic setting, frequency distributions of tonnages and grades of well-explored deposits serve as models of grades and tonnages of undiscovered deposits, and number of undiscovered deposits are estimated probabilistically by type. The internally consistent descriptive, grade and tonnage, deposit density, and economic models used in the design of the three-part form of assessments reduce the chances of biased estimates of the undiscovered resources. What and why quantitative resource assessments: The kind of assessment recommended here is founded in decision analysis in order to provide a framework for making decisions concerning mineral

  6. Providing free autopoweroff plugs

    DEFF Research Database (Denmark)

    Jensen, Carsten Lynge; Hansen, Lars Gårn; Fjordbak, Troels

    2012-01-01

    Experimental evidence of the effect of providing households with cheap energy saving technology is sparse. We present results from a field experiment in which autopoweroff plugs were provided free of charge to randomly selected households. We use propensity score matching to find treatment effects...
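
    For readers unfamiliar with the estimator mentioned above, a generic nearest-neighbour propensity score matching sketch on synthetic data might look as follows; it is not the authors' specification or data:

```python
# Generic propensity score matching sketch: fit a treatment model, match
# each treated unit to the control with the nearest propensity score, and
# average the outcome differences. Synthetic data, not the paper's design.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 1000
x = rng.normal(size=(n, 2))                              # covariates
p_treat = 1.0 / (1.0 + np.exp(-x[:, 0]))                 # selection on x0
treat = rng.binomial(1, p_treat)
y = 2.0 * x[:, 0] - 0.5 * treat + rng.normal(size=n)     # true effect: -0.5

ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]
t_idx = np.flatnonzero(treat == 1)
c_idx = np.flatnonzero(treat == 0)

# Nearest-neighbour match on the propensity score (with replacement)
nearest = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]
att = np.mean(y[t_idx] - y[nearest])
print(f"estimated effect on the treated: {att:.2f} (true -0.5)")
```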

  7. Quantitative genetics of disease traits.

    Science.gov (United States)

    Wray, N R; Visscher, P M

    2015-04-01

    John James authored two key papers on the theory of risk to relatives for binary disease traits and the relationship between parameters on the observed binary scale and an unobserved scale of liability (James Annals of Human Genetics, 1971; 35: 47; Reich, James and Morris Annals of Human Genetics, 1972; 36: 163). These two papers are John James' most cited papers (198 and 328 citations, November 2014). They have been influential in human genetics and have recently gained renewed popularity because of their relevance to the estimation of quantitative genetics parameters for disease traits using SNP data. In this review, we summarize the two early papers and put them into context. We show recent extensions of the theory for ascertained case-control data and review recent applications in human genetics. © 2015 Blackwell Verlag GmbH.
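
    The observed-to-liability transformation that underlies this theory can be written in a few lines; the sketch below uses the standard formulas, including the case-control ascertainment adjustment of Lee et al. (2011) that builds on the Reich, James and Morris results:

```python
# Conversion of heritability from the observed 0/1 disease scale to the
# liability scale (Dempster-Lerner / Reich-James-Morris), including the
# ascertainment correction for case-control samples (Lee et al. 2011).
from scipy.stats import norm

def h2_liability(h2_obs, K, P=None):
    """K: population prevalence; P: case proportion in an ascertained
    case-control sample, or None for a population sample."""
    t = norm.ppf(1 - K)          # liability threshold
    z = norm.pdf(t)              # normal density at the threshold
    h2 = h2_obs * K * (1 - K) / z**2
    if P is not None:            # case-control ascertainment adjustment
        h2 *= K * (1 - K) / (P * (1 - P))
    return h2

print(h2_liability(0.1, K=0.01))          # population sample
print(h2_liability(0.1, K=0.01, P=0.5))   # 50/50 case-control design
```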

  8. Quantitative approaches in climate change ecology

    DEFF Research Database (Denmark)

    Brown, Christopher J.; Schoeman, David S.; Sydeman, William J.

    2011-01-01

    Contemporary impacts of anthropogenic climate change on ecosystems are increasingly being recognized. Documenting the extent of these impacts requires quantitative tools for analyses of ecological observations to distinguish climate impacts in noisy data and to understand interactions between climate variability and other drivers of change. To assist the development of reliable statistical approaches, we review the marine climate change literature and provide suggestions for quantitative approaches in climate change ecology. We compiled 267 peer-reviewed articles that examined relationships...

  9. Quantitative secondary electron detection

    Science.gov (United States)

    Agrawal, Jyoti; Joy, David C.; Nayak, Subuhadarshi

    2018-05-08

    Quantitative secondary electron detection (QSED) using an array of solid-state device (SSD) based electron counters enables critical-dimension metrology measurements in materials such as semiconductors, nanomaterials, and biological samples. Methods and devices effect quantitative detection of secondary electrons with an array of solid-state detectors comprising a number of solid-state detectors. The array senses the number of secondary electrons with a plurality of solid-state detectors, counting the number of secondary electrons with a time-to-digital converter circuit in counter mode.

  10. [Methods of quantitative proteomics].

    Science.gov (United States)

    Kopylov, A T; Zgoda, V G

    2007-01-01

    In modern science, proteomic analysis is inseparable from other fields of systems biology. Possessing huge resources, quantitative proteomics handles colossal amounts of information on the molecular mechanisms of life. Advances in proteomics help researchers to solve complex problems of cell signaling, posttranslational modification, structural and functional homology of proteins, molecular diagnostics, etc. More than 40 different methods have been developed in proteomics for the quantitative analysis of proteins. Although each method is unique and has certain advantages and disadvantages, almost all of them use isotope labels (tags). In this review we consider the most popular and effective methods, employing chemical modification of proteins as well as metabolic and enzymatic isotope labeling.

  11. Quantitative tectonic reconstructions of Zealandia based on crustal thickness estimates

    Science.gov (United States)

    Grobys, Jan W. G.; Gohl, Karsten; Eagles, Graeme

    2008-01-01

    Zealandia is a key piece in the plate reconstruction of Gondwana. The positions of its submarine plateaus are major constraints on the best fit and breakup involving New Zealand, Australia, Antarctica, and associated microplates. As the submarine plateaus surrounding New Zealand consist of extended and highly extended continental crust, classic plate tectonic reconstructions assuming rigid plates and narrow plate boundaries fail to reconstruct these areas correctly. However, if the early breakup history is to be reconstructed, it is crucial to consider crustal stretching in a plate-tectonic reconstruction. We present a reconstruction of the basins around New Zealand (Great South Basin, Bounty Trough, and New Caledonia Basin) based on crustal balancing, an approach that takes into account the rifting and thinning processes affecting continental crust. In a first step, we computed a crustal thickness map of Zealandia using seismic, seismological, and gravity data. The crustal thickness map shows the submarine plateaus to have a uniform crustal thickness of 20-24 km and the basins to have a thickness of 12-16 km. We assumed that a reconstruction of Zealandia should close the basins and lead to the most uniform crustal thickness possible. We used the standard deviation of the reconstructed crustal thickness as a measure of uniformity. The reconstruction of the Campbell Plateau area shows that the amount of extension in the Bounty Trough and the Great South Basin is far smaller than previously thought. Our results indicate that the extension of the Bounty Trough and Great South Basin occurred simultaneously.
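
    The uniformity criterion is straightforward to operationalize: among candidate reconstructions, prefer the one that minimizes the standard deviation of the restored crustal thickness. A schematic sketch with invented grid values:

```python
# Schematic version of the uniformity criterion used above: score candidate
# reconstructions by the standard deviation of the restored crustal-thickness
# grid and keep the most uniform one. Grids here are invented placeholders.
import numpy as np

def uniformity_score(thickness_grid):
    """Lower is better: std dev (km) of reconstructed crustal thickness."""
    return np.nanstd(thickness_grid)

candidates = {
    "small closure": np.array([[22., 21., 14.], [23., 20., 13.]]),
    "full closure":  np.array([[22., 21., 20.], [23., 21., 22.]]),
}
best = min(candidates, key=lambda k: uniformity_score(candidates[k]))
scores = {k: round(uniformity_score(v), 2) for k, v in candidates.items()}
print("preferred reconstruction:", best, scores)
```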

  12. 611 A QUANTITATIVE ESTIMATE OF WEEDS OF SUGARCANE ...

    African Journals Online (AJOL)

    Osondu

    Abstract. A survey was conducted in the sugarcane fields of Unilorin Sugar Research Institute, Ilorin in the southern. Guinea savanna agro-ecological zone of Nigeria during 2011 and 2012 cropping seasons with an objective to identify the current status of prevalent weeds in rainfed and irrigated sugarcane fields. A.

  13. Quantitative Reasoning Learning Progressions for Environmental Science: Developing a Framework

    Directory of Open Access Journals (Sweden)

    Robert L. Mayes

    2013-01-01

    Quantitative reasoning is a complex concept with many definitions and a diverse account in the literature. The purpose of this article is to establish a working definition of quantitative reasoning within the context of science, construct a quantitative reasoning framework, and summarize research on key components in that framework. Context underlies all quantitative reasoning; for this review, environmental science serves as the context. In the framework, we identify four components of quantitative reasoning: the quantification act, quantitative literacy, quantitative interpretation of a model, and quantitative modeling. Within each of these components, the framework provides elements that comprise the four components. The quantification act includes the elements of variable identification, communication, context, and variation. Quantitative literacy includes the elements of numeracy, measurement, proportional reasoning, and basic probability/statistics. Quantitative interpretation includes the elements of representations, science diagrams, statistics and probability, and logarithmic scales. Quantitative modeling includes the elements of logic, problem solving, modeling, and inference. A brief comparison of the quantitative reasoning framework with the AAC&U Quantitative Literacy VALUE rubric is presented, demonstrating a mapping of the components and illustrating differences in structure. The framework serves as a precursor for a quantitative reasoning learning progression which is currently under development.

  14. Quantitative information in medical imaging

    International Nuclear Information System (INIS)

    Deconinck, F.

    1985-01-01

    When developing new imaging or image processing techniques, one constantly has in mind that the new technique should provide a better, or more optimal, answer to medical tasks than existing techniques do. 'Better' or 'more optimal' imply some kind of standard by which one can measure imaging or image processing performance. The choice of a particular imaging modality to answer a diagnostic task, such as the detection of coronary artery stenosis, is also based on an implicit optimization of performance criteria. Performance is measured by the ability to provide information about an object (patient) to the person (referring doctor) who ordered a particular task. In medical imaging the task is generally to find quantitative information on bodily function (biochemistry, physiology) and structure (histology, anatomy). In medical imaging, a wide range of techniques is available. Each technique has its own characteristics. The techniques discussed in this paper are: nuclear magnetic resonance, X-ray fluorescence, scintigraphy, positron emission tomography, applied potential tomography, computerized tomography, and Compton tomography. This paper provides a framework for the comparison of imaging performance, based on the way the quantitative information flow is altered by the characteristics of the modality.

  15. Credential Service Provider (CSP)

    Data.gov (United States)

    Department of Veterans Affairs — Provides a VA operated Level 1 and Level 2 credential for individuals who require access to VA applications, yet cannot obtain a credential from another VA accepted...

  16. MAX Provider Characteristics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The MAX Provider Characteristics (PC) File Implementation Report describes the design, implementation, and results of the MAXPC prototype, which was based on three...

  17. A new quantitative model of ecological compensation based on ecosystem capital in Zhejiang Province, China*

    Science.gov (United States)

    Jin, Yan; Huang, Jing-feng; Peng, Dai-liang

    2009-01-01

    Ecological compensation is becoming a key multidisciplinary issue in the field of resources and environmental management. Considering the changing relation between gross domestic product (GDP) and ecological capital (EC) based on remote sensing estimation, we construct a new quantitative estimation model for ecological compensation, using the county as the study unit, and determine a standard value so as to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in ecological compensation were significant among the counties and districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile conflicts among benefits in the economic, social, and ecological sectors. PMID:19353749

  18. Overconfidence in Interval Estimates

    Science.gov (United States)

    Soll, Jack B.; Klayman, Joshua

    2004-01-01

    Judges were asked to make numerical estimates (e.g., "In what year was the first flight of a hot air balloon?"). Judges provided high and low estimates such that they were X% sure that the correct answer lay between them. They exhibited substantial overconfidence: The correct answer fell inside their intervals much less than X% of the time. This…

  19. Ecosystem services provided by bats.

    Science.gov (United States)

    Kunz, Thomas H; Braun de Torrez, Elizabeth; Bauer, Dana; Lobova, Tatyana; Fleming, Theodore H

    2011-03-01

    Ecosystem services are the benefits obtained from the environment that increase human well-being. Economic valuation is conducted by measuring the human welfare gains or losses that result from changes in the provision of ecosystem services. Bats have long been postulated to play important roles in arthropod suppression, seed dispersal, and pollination; however, only recently have these ecosystem services begun to be thoroughly evaluated. Here, we review the available literature on the ecological and economic impact of ecosystem services provided by bats. We describe dietary preferences, foraging behaviors, adaptations, and phylogenetic histories of insectivorous, frugivorous, and nectarivorous bats worldwide in the context of their respective ecosystem services. For each trophic ensemble, we discuss the consequences of these ecological interactions on both natural and agricultural systems. Throughout this review, we highlight the research needed to fully determine the ecosystem services in question. Finally, we provide a comprehensive overview of economic valuation of ecosystem services. Unfortunately, few studies estimating the economic value of ecosystem services provided by bats have been conducted to date; however, we outline a framework that could be used in future studies to more fully address this question. Consumptive goods provided by bats, such as food and guano, are often exchanged in markets where the market price indicates an economic value. Nonmarket valuation methods can be used to estimate the economic value of nonconsumptive services, including inputs to agricultural production and recreational activities. Information on the ecological and economic value of ecosystem services provided by bats can be used to inform decisions regarding where and when to protect or restore bat populations and associated habitats, as well as to improve public perception of bats. © 2011 New York Academy of Sciences.

  20. A General Model for Estimating Macroevolutionary Landscapes.

    Science.gov (United States)

    Boucher, Florian C; Démery, Vincent; Conti, Elena; Harmon, Luke J; Uyeda, Josef

    2018-03-01

    The evolution of quantitative characters over long timescales is often studied using stochastic diffusion models. The current toolbox available to students of macroevolution is however limited to two main models: Brownian motion and the Ornstein-Uhlenbeck process, plus some of their extensions. Here, we present a very general model for inferring the dynamics of quantitative characters evolving under both random diffusion and deterministic forces of any possible shape and strength, which can accommodate interesting evolutionary scenarios like directional trends, disruptive selection, or macroevolutionary landscapes with multiple peaks. This model is based on a general partial differential equation widely used in statistical mechanics: the Fokker-Planck equation, also known in population genetics as the Kolmogorov forward equation. We thus call the model FPK, for Fokker-Planck-Kolmogorov. We first explain how this model can be used to describe macroevolutionary landscapes over which quantitative traits evolve and, more importantly, we detail how it can be fitted to empirical data. Using simulations, we show that the model has good behavior both in terms of discrimination from alternative models and in terms of parameter inference. We provide R code to fit the model to empirical data using either maximum-likelihood or Bayesian estimation, and illustrate the use of this code with two empirical examples of body mass evolution in mammals. FPK should greatly expand the set of macroevolutionary scenarios that can be studied since it opens the way to estimating macroevolutionary landscapes of any conceivable shape. [Adaptation; bounds; diffusion; FPK model; macroevolution; maximum-likelihood estimation; MCMC methods; phylogenetic comparative data; selection.].
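
    For reference, the equation behind the FPK model is the standard one-dimensional Fokker-Planck (Kolmogorov forward) equation; the notation below is generic and not necessarily the paper's:

```latex
% One-dimensional Fokker-Planck (Kolmogorov forward) equation for the
% probability density p(x,t) of a quantitative trait x:
\[
  \frac{\partial p(x,t)}{\partial t}
    = -\frac{\partial}{\partial x}\bigl[\mu(x)\,p(x,t)\bigr]
    + \frac{\sigma^{2}}{2}\,\frac{\partial^{2} p(x,t)}{\partial x^{2}}
\]
% A macroevolutionary landscape with multiple peaks corresponds to a drift
% term derived from a potential, mu(x) = -V'(x), with several minima in V;
% Brownian motion (mu = 0) and Ornstein-Uhlenbeck (linear mu) are special
% cases.
```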

  1. Extending Quantitative Easing

    DEFF Research Database (Denmark)

    Hallett, Andrew Hughes; Fiedler, Salomon; Kooths, Stefan

    The notes in this compilation address the pros and cons associated with the extension of the ECB's quantitative easing programme of asset purchases. The notes were requested by the Committee on Economic and Monetary Affairs as input for the February 2017 session of the Monetary Dialogue....

  2. Quantitative Moessbauer analysis

    International Nuclear Information System (INIS)

    Collins, R.L.

    1978-01-01

    The quantitative analysis of Moessbauer data, as in the measurement of the Fe³⁺/Fe²⁺ concentration ratio, has not been possible because of the different mean-square displacements ⟨x²⟩ of Moessbauer nuclei at chemically different sites. A method is now described which, based on Moessbauer data at several temperatures, permits the comparison of absorption areas at ⟨x²⟩=0. (Auth.)
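
    The relation the method exploits is standard Mössbauer bookkeeping; the summary below is ours, not the paper's notation:

```latex
% Absorption area A_i of site i is proportional to its population n_i times
% the recoil-free fraction f_i, which depends on the mean-square
% displacement of the Moessbauer nucleus at that site:
\[
  A_i \propto n_i\, f_i , \qquad f_i = e^{-k^{2}\langle x_i^{2}\rangle} ,
\]
% so areas extrapolated to \langle x_i^{2} \rangle = 0 have f_i \to 1, and
% their ratios then reflect concentration ratios such as Fe^{3+}/Fe^{2+}.
```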

  3. Estimation of hydrologic properties of an unsaturated, fractured rock mass

    International Nuclear Information System (INIS)

    Klavetter, E.A.; Peters, R.R.

    1986-07-01

    In this document, two distinctly different approaches are used to develop continuum models to evaluate water movement in a fractured rock mass. Both models provide methods for estimating rock-mass hydrologic properties. Comparisons made over a range of different tuff properties show good qualitative and quantitative agreement between estimates of rock-mass hydrologic properties made by the two models. This document presents a general discussion of: (1) the hydrology of Yucca Mountain, and the conceptual hydrological model currently being used for the Yucca Mountain site, (2) the development of two models that may be used to estimate the hydrologic properties of a fractured, porous rock mass, and (3) a comparison of the hydrologic properties estimated by these two models. Although the models were developed in response to hydrologic characterization requirements at Yucca Mountain, they can be applied to water movement in any fractured rock mass that satisfies the given assumptions

  4. Quantitative Evidence Synthesis with Power Priors

    NARCIS (Netherlands)

    Rietbergen, C.|info:eu-repo/dai/nl/322847796

    2016-01-01

    The aim of this thesis is to provide the applied researcher with a practical approach for quantitative evidence synthesis using the conditional power prior, which allows for subjective input and thereby provides an alternative way to deal with the difficulties associated with the joint power prior
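
    In a conjugate setting the power prior reduces to simple arithmetic. A minimal beta-binomial sketch with a fixed discounting parameter a0 (the thesis's conditional power prior treats a0 more carefully):

```python
# Beta-binomial power prior sketch: historical data D0 enter the prior with
# their likelihood raised to a discounting power a0 in [0, 1]. With a
# Beta(a, b) initial prior this stays conjugate. a0 is fixed here; the
# conditional power prior in the thesis treats a0 more carefully.
from scipy import stats

a, b = 1.0, 1.0            # initial Beta prior
y0, n0 = 12, 40            # historical data D0: 12 successes in 40 trials
y, n = 9, 30               # current data D
a0 = 0.5                   # discounting power on the historical likelihood

# Power prior: Beta(a + a0*y0, b + a0*(n0 - y0)); then update with D
post = stats.beta(a + a0 * y0 + y, b + a0 * (n0 - y0) + (n - y))
print(f"posterior mean {post.mean():.3f}, 95% interval {post.interval(0.95)}")
```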

  5. Assessing the robustness of quantitative fatty acid signature analysis to assumption violations

    Science.gov (United States)

    Bromaghin, Jeffrey F.; Budge, Suzanne M.; Thiemann, Gregory W.; Rode, Karyn D.

    2016-01-01

      Knowledge of animal diets can provide important insights into life history and ecology, relationships among species in a community and potential response to ecosystem change or perturbation. Quantitative fatty acid signature analysis (QFASA) is a method of estimating diets from data on the composition, or signature, of fatty acids stored in adipose tissue. Given data on signatures of potential prey, a predator diet is estimated by minimizing the distance between its signature and a mixture of prey signatures. Calibration coefficients, constants derived from feeding trials, are used to account for differential metabolism of individual fatty acids. QFASA has been widely applied since its introduction and several variants of the original estimator have appeared in the literature. However, work to compare the statistical properties of QFASA estimators has been limited.
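
    A stripped-down version of the QFASA optimization, with toy signatures, a squared-error distance in place of the Kullback-Leibler or Aitchison distances used in practice, and calibration coefficients omitted:

```python
# Toy QFASA estimator: find non-negative diet proportions summing to one
# whose mixture of prey fatty acid signatures is closest to the predator's
# signature. Squared-error distance here; published variants use KL or
# Aitchison distances and first apply calibration coefficients.
import numpy as np
from scipy.optimize import minimize

prey = np.array([                  # rows: prey species; columns: fatty acids
    [0.50, 0.30, 0.20],
    [0.10, 0.60, 0.30],
    [0.30, 0.20, 0.50],
])
predator = np.array([0.28, 0.39, 0.33])   # mixture of prey at (0.3, 0.4, 0.3)

def distance(p):
    return np.sum((prey.T @ p - predator) ** 2)

res = minimize(distance, x0=np.full(3, 1.0 / 3.0), bounds=[(0.0, 1.0)] * 3,
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}])
print("estimated diet proportions:", np.round(res.x, 3))  # ~ [0.3, 0.4, 0.3]
```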

  6. Health service providers in Somalia: their readiness to provide malaria case-management.

    Science.gov (United States)

    Noor, Abdisalan M; Rage, Ismail A; Moonen, Bruno; Snow, Robert W

    2009-05-13

    Studies have highlighted the inadequacies of the public health sector in sub-Saharan African countries in providing appropriate malaria case management. The readiness of the public health sector to provide malaria case-management in Somalia, a country where there has been no functioning central government for almost two decades, was investigated. Three districts were purposively sampled in each of the two self-declared states of Puntland and Somaliland and the south-central region of Somalia, in April-November 2007. A survey and mapping of all public and private health service providers was undertaken. Information was recorded on services provided, types of anti-malarial drugs used and stock, numbers and qualifications of staff, sources of financial support and presence of malaria diagnostic services, new treatment guidelines and job aids for malaria case-management. All settlements were mapped and a semi-quantitative approach was used to estimate their population size. Distances from settlements to public health services were computed. There were 45 public health facilities, 227 public health professionals, and 194 private pharmacies for approximately 0.6 million people in the three districts. The median distance to public health facilities was 6 km. 62.3% of public health facilities prescribed the nationally recommended anti-malarial drug and 37.7% prescribed chloroquine as first-line therapy. 66.7% of public facilities did not have in stock the recommended first-line malaria therapy. Diagnosis of malaria using rapid diagnostic tests (RDT) or microscopy was performed routinely in over 90% of the recommended public facilities but only 50% of these had RDT in stock at the time of survey. National treatment guidelines were available in 31.3% of public health facilities recommended by the national strategy. Only 8.8% of the private pharmacies prescribed artesunate plus sulphadoxine/pyrimethamine, while 53.1% prescribed chloroquine as first-line therapy. 31.4% of

  7. Health service providers in Somalia: their readiness to provide malaria case-management

    Directory of Open Access Journals (Sweden)

    Moonen Bruno

    2009-05-01

    Abstract Background Studies have highlighted the inadequacies of the public health sector in sub-Saharan African countries in providing appropriate malaria case management. The readiness of the public health sector to provide malaria case-management in Somalia, a country where there has been no functioning central government for almost two decades, was investigated. Methods Three districts were purposively sampled in each of the two self-declared states of Puntland and Somaliland and the south-central region of Somalia, in April-November 2007. A survey and mapping of all public and private health service providers was undertaken. Information was recorded on services provided, types of anti-malarial drugs used and stock, numbers and qualifications of staff, sources of financial support and presence of malaria diagnostic services, new treatment guidelines and job aids for malaria case-management. All settlements were mapped and a semi-quantitative approach was used to estimate their population size. Distances from settlements to public health services were computed. Results There were 45 public health facilities, 227 public health professionals, and 194 private pharmacies for approximately 0.6 million people in the three districts. The median distance to public health facilities was 6 km. 62.3% of public health facilities prescribed the nationally recommended anti-malarial drug and 37.7% prescribed chloroquine as first-line therapy. 66.7% of public facilities did not have in stock the recommended first-line malaria therapy. Diagnosis of malaria using rapid diagnostic tests (RDT) or microscopy was performed routinely in over 90% of the recommended public facilities but only 50% of these had RDT in stock at the time of survey. National treatment guidelines were available in 31.3% of public health facilities recommended by the national strategy. Only 8.8% of the private pharmacies prescribed artesunate plus sulphadoxine/pyrimethamine, while 53

  8. Estimating directional epistasis

    Science.gov (United States)

    Le Rouzic, Arnaud

    2014-01-01

    Epistasis, i.e., the fact that gene effects depend on the genetic background, is a direct consequence of the complexity of genetic architectures. Despite this, most of the models used in evolutionary and quantitative genetics pay scant attention to genetic interactions. For instance, the traditional decomposition of genetic effects models epistasis as noise around the evolutionarily-relevant additive effects. Such an approach is only valid if it is assumed that there is no general pattern among interactions—a highly speculative scenario. Systematic interactions generate directional epistasis, which has major evolutionary consequences. In spite of its importance, directional epistasis is rarely measured or reported by quantitative geneticists, not only because its relevance is generally ignored, but also due to the lack of simple, operational, and accessible methods for its estimation. This paper describes conceptual and statistical tools that can be used to estimate directional epistasis from various kinds of data, including QTL mapping results, phenotype measurements in mutants, and artificial selection responses. As an illustration, I measured directional epistasis from a real-life example. I then discuss the interpretation of the estimates, showing how they can be used to draw meaningful biological inferences. PMID:25071828
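
    One simple operational check for directional epistasis, in the spirit of the tools the paper describes (though far simpler than its full framework), is to regress double-mutant effects on the sum of the corresponding single-mutant effects:

```python
# Toy test for directional epistasis: regress observed double-mutant effects
# on the sum of the corresponding single-mutant effects. A slope above 1
# suggests positive (reinforcing) directional epistasis, below 1 negative
# (diminishing-returns). Synthetic data; far simpler than the paper's tools.
import numpy as np

rng = np.random.default_rng(5)
n_pairs = 50
e1 = rng.normal(0.5, 0.2, n_pairs)           # single-mutant effects
e2 = rng.normal(0.5, 0.2, n_pairs)
expected = e1 + e2                           # additive expectation
observed = 0.8 * expected + rng.normal(0, 0.05, n_pairs)  # diminishing returns

slope = np.polyfit(expected, observed, 1)[0]
direction = "negative" if slope < 1 else "positive"
print(f"slope = {slope:.2f} ({direction} directional epistasis)")
```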

  9. Provider software buyer's guide.

    Science.gov (United States)

    1994-03-01

    To help long term care providers find new ways to improve quality of care and efficiency, Provider magazine presents the fourth annual listing of software firms marketing computer programs for all areas of nursing facility operations. On the following five pages, more than 80 software firms display their wares, with programs such as minimum data set and care planning, dietary, accounting and financials, case mix, and medication administration records. The guide also charts compatible hardware, integration ability, telephone numbers, company contacts, and easy-to-use reader service numbers.

  10. What HERA may provide?

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Hannes [DESY, Hamburg (Germany); De Roeck, Albert [CERN, Genf (Switzerland); Bartles, Jochen [Univ. Hamburg (DE). Institut fuer Theoretische Physik II] (and others)

    2008-09-15

    More than 100 people participated in a discussion session at the DIS08 workshop on the topic What HERA may provide. A summary of the discussion with a structured outlook and list of desirable measurements and theory calculations is given. (orig.)

  11. What HERA may provide?

    International Nuclear Information System (INIS)

    Jung, Hannes; De Roeck, Albert; Bartles, Jochen

    2008-09-01

    More than 100 people participated in a discussion session at the DIS08 workshop on the topic What HERA may provide. A summary of the discussion with a structured outlook and list of desirable measurements and theory calculations is given. (orig.)

  12. Provider of Services File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The POS file consists of two data files, one for CLIA labs and one for 18 other provider types. The file names are CLIA and OTHER. If downloading the file, note it...

  13. Adaptive Spectral Doppler Estimation

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jakobsson, Andreas; Jensen, Jørgen Arendt

    2009-01-01

    In this paper, 2 adaptive spectral estimation techniques are analyzed for spectral Doppler ultrasound. The purpose is to minimize the observation window needed to estimate the spectrogram, to provide a better temporal resolution, and to gain more flexibility when designing the data acquisition sequence. The methods can also provide better quality of the estimated power spectral density (PSD) of the blood signal. Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window is very short. The 2 adaptive techniques are tested and compared with the averaged periodogram (Welch's method). The blood power spectral Capon (BPC) method is based on a standard minimum variance technique adapted to account for both averaging over slow-time and depth. The blood amplitude and phase estimation technique (BAPES) is based on finding a set...
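
    As a rough illustration of why a minimum variance (Capon-type) estimator helps with short windows, the sketch below compares Welch's averaged periodogram with a generic Capon spectrum on a synthetic slow-time signal; it is not the paper's BPC or BAPES implementation:

```python
# Sketch comparing Welch's averaged periodogram with a generic Capon
# (minimum variance) spectral estimate on a synthetic slow-time signal.
# Illustrative only; not the BPC/BAPES implementations from the paper.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
N = 32                        # short observation window (slow-time samples)
f_true = 0.2                  # normalized Doppler frequency
t = np.arange(N)
x = np.exp(2j * np.pi * f_true * t) + 0.1 * (
    rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Welch's method: average periodograms of overlapping segments
f_w, P_w = welch(x, nperseg=16, return_onesided=False)
print("Welch peak at f =", round(float(f_w[np.argmax(P_w)]), 3))

# Capon: P(f) = 1 / (a(f)^H R^{-1} a(f)) for a length-M steering vector a(f)
M = 8
snaps = np.array([x[i:i + M] for i in range(N - M + 1)])
R = snaps.conj().T @ snaps / snaps.shape[0]        # sample covariance (M x M)
R += 1e-3 * np.real(np.trace(R)) / M * np.eye(M)   # diagonal loading
Rinv = np.linalg.inv(R)

def capon(f):
    a = np.exp(2j * np.pi * f * np.arange(M))      # steering vector
    return 1.0 / np.real(a.conj() @ Rinv @ a)

freqs = np.linspace(-0.5, 0.5, 501)
P_c = np.array([capon(f) for f in freqs])
print("Capon peak at f =", round(float(freqs[np.argmax(P_c)]), 3))  # ~0.2
```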

  14. Quantitative Characterization of Nanostructured Materials

    Energy Technology Data Exchange (ETDEWEB)

    Dr. Frank (Bud) Bridges, University of California-Santa Cruz

    2010-08-05

    The two-and-a-half day symposium on the "Quantitative Characterization of Nanostructured Materials" will be the first comprehensive meeting on this topic held under the auspices of a major U.S. professional society. Spring MRS Meetings provide a natural venue for this symposium as they attract a broad audience of researchers that represents a cross-section of the state-of-the-art regarding synthesis, structure-property relations, and applications of nanostructured materials. Close interactions among the experts in local structure measurements and materials researchers will help both to identify measurement needs pertinent to real-world materials problems and to familiarize the materials research community with the state-of-the-art local structure measurement techniques. We have chosen invited speakers that reflect the multidisciplinary and international nature of this topic and the need to continually nurture productive interfaces among university, government and industrial laboratories. The intent of the symposium is to provide an interdisciplinary forum for discussion and exchange of ideas on the recent progress in quantitative characterization of structural order in nanomaterials using different experimental techniques and theory. The symposium is expected to facilitate discussions on optimal approaches for determining atomic structure at the nanoscale using combined inputs from multiple measurement techniques.

  15. Study of quantitative genetics of gum arabic production complicated by variability in ploidy level of Acacia senegal (L.) Willd

    DEFF Research Database (Denmark)

    Diallo, Adja Madjiguene; Nielsen, Lene Rostgaard; Hansen, Jon Kehlet

    2015-01-01

    Gum arabic is an important international commodity produced by trees of Acacia senegal across Sahelian Africa, but documented results of breeding activities are limited. The objective of this study was to provide reliable estimates of quantitative genetic parameters in order to shed light on the ... It was found that progenies consisted of both diploid and polyploid trees, and growth, gum yield, and gum quality varied substantially among ploidy levels, populations, and progenies. Analysis of molecular variance and estimates of outcrossing rate supported that trees within open-pollinated families of diploids were half-sibs, while the open-pollinated families of polyploids showed low variation within families. The difference in sibling relationship observed between ploidy levels complicated estimation of genetic parameters. However, based on the diploid trees, we conclude that heritability in gum arabic production ... These results stress the importance of testing ploidy levels of selected material and use of genetic markers to qualify the assumptions in the quantitative genetic analysis.
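
    For the diploid (half-sib) families, heritability estimation follows the classical one-way ANOVA route; the sketch below uses invented data, and the paper's actual analysis is more involved:

```python
# Hypothetical sketch: half-sib family heritability from a one-way ANOVA,
# as is standard for open-pollinated (half-sib) progeny trials. Values are
# made up; the paper's actual data and model are not reproduced here.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_fam, n_per = 30, 10
fam_eff = rng.normal(0, 1.0, n_fam)                 # family effects
obs = pd.DataFrame({
    "family": np.repeat(np.arange(n_fam), n_per),
    "yield_": np.repeat(fam_eff, n_per) + rng.normal(0, 3.0, n_fam * n_per),
})

# One-way ANOVA variance components (balanced design)
grand = obs["yield_"].mean()
fam_means = obs.groupby("family")["yield_"].mean()
ms_between = n_per * ((fam_means - grand) ** 2).sum() / (n_fam - 1)
ms_within = ((obs["yield_"] - obs["family"].map(fam_means)) ** 2).sum() / (
    n_fam * (n_per - 1))
var_fam = (ms_between - ms_within) / n_per          # sigma^2_family
var_within = ms_within

# Half-sibs share 1/4 of the additive variance: sigma^2_A = 4 * sigma^2_family
h2 = 4 * var_fam / (var_fam + var_within)
print(f"narrow-sense heritability estimate: {h2:.2f}")
```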

  16. Balance between qualitative and quantitative verification methods

    International Nuclear Information System (INIS)

    Nidaira, Kazuo

    2012-01-01

    The amount of inspection effort for verification of declared nuclear material needs to be optimized in situations where qualitative and quantitative measures are applied. Game theory was used to investigate the relation between detection probability and deterrence of diversion. Payoffs used in the theory were quantified for cases of conventional safeguards and integrated safeguards by using AHP (Analytic Hierarchy Process). It then became possible to estimate the detection probability under integrated safeguards that has deterrence capability equivalent to the detection probability under conventional safeguards. In addition, the distribution of inspection effort for qualitative and quantitative