WorldWideScience

Sample records for model quantitatively reproduces

  1. Eccentric Contraction-Induced Muscle Injury: Reproducible, Quantitative, Physiological Models to Impair Skeletal Muscle’s Capacity to Generate Force

    Science.gov (United States)

    Call, Jarrod A.; Lowe, Dawn A.

    2018-01-01

    In order to investigate the molecular and cellular mechanisms of muscle regeneration, an experimental injury model is required. Advantages of eccentric contraction-induced injury are that it is a controllable, reproducible, and physiologically relevant model of muscle injury, with injury defined as a loss of force-generating capacity. While eccentric contractions can be incorporated into conscious animal study designs such as downhill treadmill running, electrophysiological approaches to elicit eccentric contractions and examine muscle contractility, for example before and after the injurious eccentric contractions, allow researchers to circumvent common issues in determining muscle function in a conscious animal (e.g., unwillingness to participate). Herein, we describe in vitro and in vivo methods that are reliable, repeatable, and truly maximal because the muscle contractions are evoked in a controlled, quantifiable manner independent of subject motivation. Both methods can be used to initiate eccentric contraction-induced injury and are suitable for monitoring functional muscle regeneration hours to days to weeks post-injury. PMID:27492161
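
    Injury in this model is quantified as a percent force deficit between pre- and post-injury maximal tetanic force. A minimal sketch of that arithmetic in Python (the function name and example forces are illustrative, not from the protocol):

      def force_deficit(pre_force_mN: float, post_force_mN: float) -> float:
          """Percent loss of force-generating capacity after eccentric injury."""
          return 100.0 * (pre_force_mN - post_force_mN) / pre_force_mN

      # e.g., maximal tetanic force of 250 mN before injury, 150 mN after:
      print(force_deficit(250.0, 150.0))  # -> 40.0 (% force deficit)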

  2. Eccentric Contraction-Induced Muscle Injury: Reproducible, Quantitative, Physiological Models to Impair Skeletal Muscle's Capacity to Generate Force.

    Science.gov (United States)

    Call, Jarrod A; Lowe, Dawn A

    2016-01-01

    In order to investigate the molecular and cellular mechanisms of muscle regeneration, an experimental injury model is required. Advantages of eccentric contraction-induced injury are that it is a controllable, reproducible, and physiologically relevant model of muscle injury, with injury defined as a loss of force-generating capacity. While eccentric contractions can be incorporated into conscious animal study designs such as downhill treadmill running, electrophysiological approaches to elicit eccentric contractions and examine muscle contractility, for example before and after the injurious eccentric contractions, allow researchers to circumvent common issues in determining muscle function in a conscious animal (e.g., unwillingness to participate). Herein, we describe in vitro and in vivo methods that are reliable, repeatable, and truly maximal because the muscle contractions are evoked in a controlled, quantifiable manner independent of subject motivation. Both methods can be used to initiate eccentric contraction-induced injury and are suitable for monitoring functional muscle regeneration hours to days to weeks post-injury.

  3. Reproducibility and Reliability of Repeated Quantitative Fluorescence Angiography

    DEFF Research Database (Denmark)

    Nerup, Nikolaj; Knudsen, Kristine Bach Korsholm; Ambrus, Rikard

    2017-01-01

    … that the camera can detect. As the emission of fluorescence is dependent on the excitatory light intensity, reduction of this may solve the problem. The aim of the present study was to investigate the reproducibility and reliability of repeated quantitative FA during a reduction of excitatory light. …

  4. Reproducibility in Computational Neuroscience Models and Simulations

    Science.gov (United States)

    McDougal, Robert A.; Bulanova, Anna S.; Lytton, William W.

    2016-01-01

    Objective: Like all scientific research, computational neuroscience research must be reproducible. Big data science, including simulation research, cannot depend exclusively on journal articles as the method to provide the sharing and transparency required for reproducibility. Methods: Ensuring model reproducibility requires the use of multiple standard software practices and tools, including version control, strong commenting and documentation, and code modularity. Results: Building on these standard practices, model-sharing sites and tools have been developed that fit into several categories: (1) standardized neural simulators, (2) shared computational resources, (3) declarative model descriptors, ontologies, and standardized annotations, and (4) model-sharing repositories and sharing standards. Conclusion: A number of complementary innovations have been proposed to enhance sharing, transparency, and reproducibility. The individual user can be encouraged to make use of version control, commenting, documentation, and modularity in the development of models. The community can help by requiring model sharing as a condition of publication and funding. Significance: Model management will become increasingly important as multiscale models become larger, more detailed, and correspondingly more difficult to manage by any single investigator or single laboratory. Additional big-data management complexity will come as the models become more useful in interpreting experiments, thus increasing the need to ensure clear alignment between modeling data, both parameters and results, and experiment. PMID:27046845

  5. Reproducibility of Quantitative Structural and Physiological MRI Measurements

    Science.gov (United States)

    2017-08-09

    … metabolites with percent standard deviation Cramer-Rao lower bounds ≤20% were included in statistical analyses. One subject's MRI #1 and one sub… relative to the mean, as it is calculated as the standard deviation normalized by the average between visits. MRD provides information about the … inherent technical and physiological consistency of these measurements. This longitudinal study examined the variance and reproducibility of commonly …
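
    The snippet above defines a relative-difference metric as the standard deviation normalized by the between-visit average. A minimal sketch of that computation for one test-retest pair (the names and values are illustrative, not study data):

      import statistics

      def mean_relative_difference(visit1: float, visit2: float) -> float:
          """SD of the two visits normalized by their mean, per the definition above."""
          return statistics.stdev([visit1, visit2]) / statistics.mean([visit1, visit2])

      print(round(mean_relative_difference(1.02, 0.95), 3))  # ~0.05, i.e. ~5% between-visit variability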

  6. A reproducible canine model of esophageal varices.

    Science.gov (United States)

    Jensen, D M; Machicado, G A; Tapia, J I; Kauffman, G; Franco, P; Beilin, D

    1983-03-01

    One of the most promising nonoperative techniques for control of variceal hemorrhage is sclerosis via the fiberoptic endoscope. Many questions remain, however, about sclerosing agents, guidelines for effective use, and limitations of endoscopic techniques. A reproducible large-animal model of esophageal varices would facilitate the critical evaluation of techniques for variceal hemostasis or sclerosis. Our purpose was to develop a large-animal model of esophageal varices. Studies in pigs and dogs are described which led to the development of a reproducible canine model of esophageal varices. For the final model, mongrel dogs had laparotomy, side-to-side portacaval shunt, inferior vena cava ligation, placement of an ameroid constrictor around the portal vein, and liver biopsy. The mean (± SE) pre- and postshunt portal pressure increased significantly from 12 ± 0.4 to 23 ± 1 cm saline. Weekly endoscopies were performed to grade varix size. Two-thirds of the animals developed medium- or large-sized esophageal varices after the first operation. Three to six weeks later, a second laparotomy with complete ligation of the portal vein and liver biopsy was performed in animals with varices (one-third of the animals). All dogs developed esophageal varices and abdominal wall collateral veins of variable size 3-6 wk after the first operation. After the second operation, the varices became larger. Shunting of blood through esophageal varices via splenic and gastric veins was demonstrated by angiography. Sequential liver biopsies were normal. There was no morbidity or mortality. Ascites, encephalopathy, and spontaneous variceal bleeding did not occur. We have documented the lack of size change and the persistence of medium to large esophageal varices and abdominal collateral veins in all animals followed for more than 6 mo. Variceal bleeding could be induced by venipuncture for testing endoscopic hemostatic and sclerosis methods. We suggest other potential uses of this …

  7. Modeling reproducibility of porescale multiphase flow experiments

    Science.gov (United States)

    Ling, B.; Tartakovsky, A. M.; Bao, J.; Oostrom, M.; Battiato, I.

    2017-12-01

    Multi-phase flow in porous media is widely encountered in geological systems. Understanding immiscible fluid displacement is crucial for processes including, but not limited to, CO2 sequestration, non-aqueous phase liquid contamination and oil recovery. Microfluidic devices and pore-scale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-pore-scale geometry differences in the microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using the commercial software STAR-CCM+, both with constant and with randomly varying injection rate. Stochastic simulations are able to capture the variability in the experiments associated with the varying pump injection rate.
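
    A stochastic boundary condition of the kind described, an injection rate fluctuating around the nominal pump setting, can be sampled as below. This is only a sketch under assumptions (Gaussian noise, 5% relative fluctuation, and all names are illustrative; the study's STAR-CCM+ setup is not reproduced here):

      import numpy as np

      def noisy_injection_rate(q_nominal, rel_sd, n_steps, seed=0):
          """Per-time-step injection rates with random pump fluctuation."""
          rng = np.random.default_rng(seed)
          return q_nominal * (1.0 + rel_sd * rng.standard_normal(n_steps))

      print(noisy_injection_rate(0.1, 0.05, 5))  # mL/min, 5% relative fluctuation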

  8. The reproducibility of quantitative measurements in lumbar magnetic resonance imaging of children from the general population

    DEFF Research Database (Denmark)

    Masharawi, Y; Kjær, Per; Bendix, T

    2008-01-01

    STUDY DESIGN: Quantitative lumbar magnetic resonance imaging (MRI) measurements in children were taken twice and analyzed for intra- and intertester reproducibility. OBJECTIVE: To evaluate the reproducibility of a variety of lumbar quantitative measurements taken from MRIs of children from … The following parameters were measured using the iQ-VIEW system (IMAGE Information Systems Ltd., version 1.2.2, Plauen, Germany): linear measurements (zygapophyseal facet and interfacet widths, and vertebral body (VB), pedicle, and intervertebral disc heights, widths, and lengths) and angular measurements …

  9. Quantitative susceptibility mapping of human brain at 3T: a multisite reproducibility study.

    Science.gov (United States)

    Lin, P-Y; Chao, T-C; Wu, M-L

    2015-03-01

    Quantitative susceptibility mapping of the human brain has demonstrated strong potential in examining iron deposition, which may help in investigating possible brain pathology. This study assesses the reproducibility of quantitative susceptibility mapping across different imaging sites. In this study, the susceptibility values of 5 regions of interest in the human brain were measured on 9 healthy subjects following calibration by using phantom experiments. Each of the subjects was imaged 5 times on 1 scanner with the same procedure repeated on 3 different 3T systems so that both within-site and cross-site quantitative susceptibility mapping precision levels could be assessed. Two quantitative susceptibility mapping algorithms, similar in principle, one by using iterative regularization (iterative quantitative susceptibility mapping) and the other with analytic optimal solutions (deterministic quantitative susceptibility mapping), were implemented, and their performances were compared. Results show that while deterministic quantitative susceptibility mapping had nearly 700 times faster computation speed, residual streaking artifacts seem to be more prominent compared with iterative quantitative susceptibility mapping. With quantitative susceptibility mapping, the putamen, globus pallidus, and caudate nucleus showed smaller imprecision on the order of 0.005 ppm, whereas the red nucleus and substantia nigra, closer to the skull base, had a somewhat larger imprecision of approximately 0.01 ppm. Cross-site errors were not significantly larger than within-site errors. Possible sources of estimation errors are discussed. The reproducibility of quantitative susceptibility mapping in the human brain in vivo is regionally dependent, and the precision levels achieved with quantitative susceptibility mapping should allow longitudinal and multisite studies such as aging-related changes in brain tissue magnetic susceptibility. © 2015 by American Journal of Neuroradiology.

  10. On the Reproducibility of Label-Free Quantitative Cross-Linking/Mass Spectrometry

    Science.gov (United States)

    Müller, Fränze; Fischer, Lutz; Chen, Zhuo Angel; Auchynnikava, Tania; Rappsilber, Juri

    2018-02-01

    Quantitative cross-linking/mass spectrometry (QCLMS) is an emerging approach to study conformational changes of proteins and multi-subunit complexes. Distinguishing protein conformations requires reproducibly identifying and quantifying cross-linked peptides. Here we analyzed the variation between multiple cross-linking reactions using bis[sulfosuccinimidyl] suberate (BS3)-cross-linked human serum albumin (HSA) and evaluated how reproducibly cross-linked peptides can be identified and quantified by LC-MS analysis. To make QCLMS accessible to a broader research community, we developed a workflow that integrates the established software tools MaxQuant for spectra preprocessing, Xi for cross-linked peptide identification, and finally Skyline for quantification (MS1 filtering). Out of the 221 unique residue pairs identified in our sample, 124 were subsequently quantified across 10 analyses with coefficient of variation (CV) values of 14% (injection replicas) and 32% (reaction replicas). Thus, our results demonstrate that the reproducibility of QCLMS is in line with the reproducibility of general quantitative proteomics, and we establish a robust workflow for MS1-based quantitation of cross-linked peptides.
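
    The replica-to-replica spread reported here is an ordinary coefficient of variation per cross-linked residue pair. A minimal sketch (the intensities are hypothetical MS1 areas, not data from the study):

      import statistics

      def cv_percent(intensities):
          """Coefficient of variation (%) of one residue pair's quantities."""
          return 100.0 * statistics.stdev(intensities) / statistics.mean(intensities)

      # Hypothetical areas of one residue pair across injection replicas:
      print(round(cv_percent([8.1e6, 9.4e6, 7.7e6]), 1))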

  11. On the Reproducibility of Label-Free Quantitative Cross-Linking/Mass Spectrometry

    Science.gov (United States)

    Müller, Fränze; Fischer, Lutz; Chen, Zhuo Angel; Auchynnikava, Tania; Rappsilber, Juri

    2017-12-01

    Quantitative cross-linking/mass spectrometry (QCLMS) is an emerging approach to study conformational changes of proteins and multi-subunit complexes. Distinguishing protein conformations requires reproducibly identifying and quantifying cross-linked peptides. Here we analyzed the variation between multiple cross-linking reactions using bis[sulfosuccinimidyl] suberate (BS3)-cross-linked human serum albumin (HSA) and evaluated how reproducibly cross-linked peptides can be identified and quantified by LC-MS analysis. To make QCLMS accessible to a broader research community, we developed a workflow that integrates the established software tools MaxQuant for spectra preprocessing, Xi for cross-linked peptide identification, and finally Skyline for quantification (MS1 filtering). Out of the 221 unique residue pairs identified in our sample, 124 were subsequently quantified across 10 analyses with coefficient of variation (CV) values of 14% (injection replicas) and 32% (reaction replicas). Thus, our results demonstrate that the reproducibility of QCLMS is in line with the reproducibility of general quantitative proteomics, and we establish a robust workflow for MS1-based quantitation of cross-linked peptides.

  12. Towards reproducible descriptions of neuronal network models.

    Directory of Open Access Journals (Sweden)

    Eilen Nordlie

    2009-08-01

    Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience, as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing, and thinking about, complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain.

  13. Repeatability, reproducibility, and accuracy of quantitative MRI of the breast in the community radiology setting.

    Science.gov (United States)

    Sorace, Anna G; Wu, Chengyue; Barnes, Stephanie L; Jarrett, Angela M; Avery, Sarah; Patt, Debra; Goodgame, Boone; Luci, Jeffery J; Kang, Hakmook; Abramson, Richard G; Yankeelov, Thomas E; Virostko, John

    2018-03-23

    Quantitative diffusion-weighted MRI (DW-MRI) and dynamic contrast-enhanced MRI (DCE-MRI) have the potential to impact patient care by providing noninvasive biological information in breast cancer. To quantify the repeatability, reproducibility, and accuracy of apparent diffusion coefficient (ADC) and T1-mapping of the breast in community radiology practices. Prospective. Ice-water DW-MRI and T1 gel phantoms were used to assess accuracy. Normal subjects (n = 3) and phantoms across three sites (one academic, two community) were used to assess reproducibility. Test-retest analysis at one site in normal subjects (n = 12) was used to assess repeatability. 3T Siemens Skyra MRI; quantitative DW-MRI and T1-mapping. Quantitative DW-MRI and T1-mapping parametric maps of phantoms and of fibroglandular and adipose tissue of the breast. Average values of breast tissue were quantified and Bland-Altman analysis was performed to assess the repeatability of the MRI techniques, while the Friedman test assessed reproducibility. ADC measurements were reproducible across sites, with an average difference of 1.6% in an ice-water phantom and 7.0% in breast fibroglandular tissue. T1 measurements in gel phantoms had an average difference of 2.8% across three sites, whereas breast fibroglandular and adipose tissue had 8.4% and 7.5% average differences, respectively. In the repeatability study, we found no bias between first and second scanning sessions (P = 0.1). The difference between repeated measurements was independent of the mean for each MRI metric (P = 0.156, P = 0.862, P = 0.197 for ADC, T1 of fibroglandular tissue, and T1 of adipose tissue, respectively). Community radiology practices can perform repeatable, reproducible, and accurate quantitative T1-mapping and DW-MRI. This has the potential to dramatically expand the number of sites that can participate in multisite clinical trials and increase clinical translation of quantitative MRI techniques.
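
    The repeatability analysis named here, Bland-Altman, reduces to a bias (the mean test-retest difference) and 95% limits of agreement. A minimal sketch with hypothetical ADC values (×10⁻³ mm²/s; not study data):

      import statistics

      def bland_altman(scan1, scan2):
          """Bias and 95% limits of agreement between repeated sessions."""
          diffs = [a - b for a, b in zip(scan1, scan2)]
          bias = statistics.mean(diffs)
          sd = statistics.stdev(diffs)
          return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

      print(bland_altman([1.51, 1.48, 1.62, 1.55], [1.49, 1.52, 1.58, 1.57]))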

  14. Reproducibility of quantitative high-throughput BI-RADS features extracted from ultrasound images of breast cancer.

    Science.gov (United States)

    Hu, Yuzhou; Qiao, Mengyun; Guo, Yi; Wang, Yuanyuan; Yu, Jinhua; Li, Jiawei; Chang, Cai

    2017-07-01

    Digital Breast Imaging Reporting and Data System (BI-RADS) features extracted from ultrasound images are essential in computer-aided diagnosis, prediction, and prognosis of breast cancer. This study focuses on the reproducibility of quantitative high-throughput BI-RADS features in the presence of variations due to different segmentation results, various ultrasound machine models, and multiple ultrasound machine settings. Dataset 1 consists of 399 patients with invasive breast cancer and is used as the training set to measure the reproducibility of features, while dataset 2 consists of 138 other patients and is a validation set used to evaluate the diagnostic performance of the final reproducible features. Four hundred and sixty high-throughput BI-RADS features are designed and quantized according to the BI-RADS lexicon. Concordance Correlation Coefficient (CCC) and Deviation (Dev) are used to assess the effect of the segmentation methods, and Between-class Distance (BD) is used to study the influences of the machine models. In addition, the features jointly shared by the two methodologies are further investigated for their behavior under multiple machine settings. Subsequently, the absolute value of the Pearson Correlation Coefficient (Rabs) is applied for redundancy elimination. Finally, the features that are reproducible and not redundant are preserved as the stable feature set. A 10-fold Support Vector Machine (SVM) classifier is employed to verify the diagnostic ability. One hundred and fifty-three features were found to have high reproducibility (CCC > 0.9 and Dev < …); the machine models and settings influenced the BI-RADS features to various degrees. Our 46 reproducible features were robust to these factors and were capable of distinguishing benign and malignant breast tumors. © 2017 American Association of Physicists in Medicine.
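
    The reproducibility screen here relies on Lin's Concordance Correlation Coefficient. A minimal sketch of the standard estimator (population-form moments; the feature values are hypothetical):

      def ccc(x, y):
          """Lin's concordance correlation coefficient between two readings."""
          n = len(x)
          mx, my = sum(x) / n, sum(y) / n
          vx = sum((a - mx) ** 2 for a in x) / n
          vy = sum((b - my) ** 2 for b in y) / n
          cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
          return 2 * cov / (vx + vy + (mx - my) ** 2)

      # One feature under two segmentation methods; CCC > 0.9 would pass the screen:
      print(ccc([0.81, 0.92, 0.64, 0.77], [0.79, 0.95, 0.66, 0.74]))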

  15. Inter-laboratory evaluation of instrument platforms and experimental workflows for quantitative accuracy and reproducibility assessment

    Directory of Open Access Journals (Sweden)

    Andrew J. Percy

    2015-09-01

    The reproducibility of plasma protein quantitation between laboratories and between instrument types was examined in a large-scale international study involving 16 laboratories and 19 LC–MS/MS platforms, using two kits designed to evaluate instrument performance and one kit designed to evaluate the entire bottom-up workflow. There was little effect of instrument type on the quality of the results, demonstrating the robustness of LC/MRM-MS with isotopically labeled standards. Technician skill was a factor, as errors in sample preparation and sub-optimal LC–MS performance were evident. This highlights the importance of proper training and routine quality control before quantitation is done on patient samples.

  16. Reproducibility of quantitative susceptibility mapping in the brain at two field strengths from two vendors.

    Science.gov (United States)

    Deh, Kofi; Nguyen, Thanh D; Eskreis-Winkler, Sarah; Prince, Martin R; Spincemaille, Pascal; Gauthier, Susan; Kovanlikaya, Ilhami; Zhang, Yan; Wang, Yi

    2015-12-01

    To assess the reproducibility of brain quantitative susceptibility mapping (QSM) in healthy subjects and in patients with multiple sclerosis (MS) on 1.5 and 3T scanners from two vendors. Ten healthy volunteers and 10 patients were scanned twice on a 3T scanner from one vendor. The healthy volunteers were also scanned on a 1.5T scanner from the same vendor and on a 3T scanner from a second vendor. Similar imaging parameters were used for all scans. QSM images were reconstructed using a recently developed nonlinear morphology-enabled dipole inversion (MEDI) algorithm with L1 regularization. Region-of-interest (ROI) measurements were obtained for 20 major brain structures. Reproducibility was evaluated with voxel-wise and ROI-based Bland-Altman plots and linear correlation analysis. ROI-based QSM measurements showed excellent correlation between all repeated scans (correlation coefficient R ≥ 0.97), with a mean difference of less than 1.24 ppb (healthy subjects) and 4.15 ppb (patients), and 95% limits of agreement within -25.5 to 25.0 ppb (healthy subjects) and -35.8 to 27.6 ppb (patients). Voxel-based QSM measurements had good correlation (0.64 ≤ R ≤ 0.88) and limits of agreement of -60 to 60 ppb or less. Brain QSM measurements have good interscanner and same-scanner reproducibility for healthy and MS subjects, respectively, on the systems evaluated in this study. © 2015 Wiley Periodicals, Inc.

  17. Reproducible uniform coronary vasomotor tone with nitrocompounds: prerequisite of quantitative coronary angiographic trials.

    Science.gov (United States)

    Jost, S; Rafflenbeul, W; Reil, G H; Gulba, D; Knop, I; Hecker, H; Lichtlen, P R

    1990-07-01

    In quantitative analysis of repeated coronary angiograms, a variable vasomotor tone of the epicardial coronary arteries may influence the accuracy of the results. Therefore, we evaluated the extent and reproducibility of coronary artery dilation with nitrocompounds. In 32 patients with coronary artery disease, the vasodilatory response of angiographically normal coronary segments to different nitrocompounds was analyzed with the computer-assisted contour detection system CAAS. Twenty patients received 5 mg or 10 mg of isosorbide dinitrate sublingually. After 10 to 15 min, a maximal diameter increase was measured, averaging 16 ± 11% (5 mg: P < 0.01) and 28 ± 13% (10 mg: P < 0.001) over control. Another 12 patients received 0.025 mg per kg body weight of SIN-1, the active metabolite of molsidomine, as an intravenous infusion over 5 min. A comparable maximal dilation (29 ± 5%; P < 0.001) occurred after 10 to 15 min and could not be enhanced further with 0.8 mg nitroglycerin administered sublingually (28 ± 7%; n.s.). One hour after SIN-1, coronary dilation was still 24 ± 8% (P < 0.001 compared with control), and 0.8 mg of nitroglycerin sublingually reestablished the previous maximal dilation of 28 ± 8%. We conclude that high doses of nitrocompounds induce a reproducible maximal coronary dilation that eliminates a substantial source of error in quantitative analysis of repeated coronary angiograms. At present, sublingual administration of either 10 mg isosorbide dinitrate once or 0.8 mg nitroglycerin repeatedly seems to represent the easiest practicable mode to achieve maximal coronary vasodilation for an adequate period.

  18. Cyberinfrastructure to Support Collaborative and Reproducible Computational Hydrologic Modeling

    Science.gov (United States)

    Goodall, J. L.; Castronova, A. M.; Bandaragoda, C.; Morsy, M. M.; Sadler, J. M.; Essawy, B.; Tarboton, D. G.; Malik, T.; Nijssen, B.; Clark, M. P.; Liu, Y.; Wang, S. W.

    2017-12-01

    Creating cyberinfrastructure to support reproducibility of computational hydrologic models is an important research challenge. Addressing this challenge requires open and reusable code and data with machine and human readable metadata, organized in ways that allow others to replicate results and verify published findings. Specific digital objects that must be tracked for reproducible computational hydrologic modeling include (1) raw initial datasets, (2) data processing scripts used to clean and organize the data, (3) processed model inputs, (4) model results, and (5) the model code with an itemization of all software dependencies and computational requirements. HydroShare is a cyberinfrastructure under active development designed to help users store, share, and publish digital research products in order to improve reproducibility in computational hydrology, with an architecture supporting hydrologic-specific resource metadata. Researchers can upload data required for modeling, add hydrology-specific metadata to these resources, and use the data directly within HydroShare.org for collaborative modeling using tools like CyberGIS, Sciunit-CLI, and JupyterHub that have been integrated with HydroShare to run models using notebooks, Docker containers, and cloud resources. Current research aims to implement the Structure For Unifying Multiple Modeling Alternatives (SUMMA) hydrologic model within HydroShare to support hypothesis-driven hydrologic modeling while also taking advantage of the HydroShare cyberinfrastructure. The goal of this integration is to create the cyberinfrastructure that supports hypothesis-driven model experimentation, education, and training efforts by lowering barriers to entry, reducing the time spent on informatics technology and software development, and supporting collaborative research within and across research groups.

  19. Reproducibility and accuracy of quantitative myocardial blood flow assessment with 82Rb PET: comparison with 13N-ammonia PET.

    Science.gov (United States)

    El Fakhri, Georges; Kardan, Arash; Sitek, Arkadiusz; Dorbala, Sharmila; Abi-Hatem, Nathalie; Lahoud, Youmna; Fischman, Alan; Coughlan, Martha; Yasuda, Tsunehiro; Di Carli, Marcelo F

    2009-07-01

    82Rb cardiac PET allows the assessment of myocardial perfusion with a column generator in clinics that lack a cyclotron. There is evidence that the quantitation of myocardial blood flow (MBF) and coronary flow reserve (CFR) with dynamic 82Rb PET is feasible. The objectives of this study were to determine the accuracy and reproducibility of MBF estimates from dynamic 82Rb PET by using our methodology for generalized factor analysis of dynamic sequences (GFADS) and compartment analysis. Reproducibility was evaluated in 22 subjects undergoing dynamic rest and dipyridamole stress 82Rb PET studies at a 2-wk interval. The inter- and intraobserver variability of MBF quantitation with dynamic 82Rb PET was assessed with 4 repeated estimations by each of 4 observers. Accuracy was evaluated in 20 subjects undergoing dynamic rest and dipyridamole stress PET studies with 82Rb and 13N-ammonia, respectively. The left ventricular and right ventricular blood pool and left ventricular tissue time-activity curves were estimated by GFADS. MBF was estimated by fitting the blood pool and tissue time-activity curves to a 2-compartment kinetic model for 82Rb and to a 3-compartment model for 13N-ammonia. CFR was estimated as the ratio of peak MBF to baseline MBF. The reproducibility of the MBF estimates in repeated 82Rb studies was very good at rest and during peak stress (R² = 0.935), as was the reproducibility of the CFR estimates (R² = 0.841). The slope of the correlation line was very close to one for the estimation of MBF (0.986) and CFR (0.960) in repeated 82Rb studies. The intraobserver reliability was less than 3% for the estimation of MBF at rest and during peak stress, as well as for the estimation of CFR. The interobserver reliabilities were 0.950 at rest and 0.975 at peak stress. The correlation between myocardial flow estimates obtained at rest and those obtained during peak stress in 82Rb and 13N-ammonia studies was …
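
    CFR is defined in this record as the ratio of peak to baseline MBF, so once the kinetic fits are done the estimate is a one-line computation (the values below are illustrative, in mL/g/min):

      def coronary_flow_reserve(stress_mbf: float, rest_mbf: float) -> float:
          """CFR = peak-stress MBF / rest (baseline) MBF."""
          return stress_mbf / rest_mbf

      # e.g., 2.4 mL/g/min at dipyridamole stress vs 0.8 mL/g/min at rest:
      print(coronary_flow_reserve(2.4, 0.8))  # -> 3.0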

  20. Reproducibility and relative validity of a semi-quantitative food-frequency questionnaire in an adult population of Rosario, Argentina

    OpenAIRE

    María Elisa Zapata; Romina Buffarini; Nadia Lingiardi; Ana Luiza Gonçalves-Soares

    2016-01-01

    Introduction: Dietary assessment of nutrients and food groups by food frequency questionnaire needs to be validated in each population. The objective of this cross-sectional study was to evaluate the reproducibility and relative validity of a semi-quantitative food frequency questionnaire among adults of Rosario, Argentina. Material and Methods: Two food frequency questionnaires and four 24-hour dietary recalls were applied in a sample of 88 adults. Reproducibility of food frequency questionna…

  1. Reproducibility and Transparency in Ocean-Climate Modeling

    Science.gov (United States)

    Hannah, N.; Adcroft, A.; Hallberg, R.; Griffies, S. M.

    2015-12-01

    Reproducibility is a cornerstone of the scientific method. Within geophysical modeling and simulation, achieving reproducibility can be difficult, especially given the complexity of numerical codes, enormous and disparate data sets, and the variety of supercomputing technology. We have made progress on this problem in the context of a large project: the development of new ocean and sea ice models, MOM6 and SIS2. Here we present useful techniques and experience. We use version control not only for code but for the entire experiment working directory, including configuration (run-time parameters, component versions), input data, and checksums on experiment output. This allows us to document when the solutions to experiments change, whether due to code updates or changes in input data. To avoid distributing large input datasets, we provide the tools for generating these from the sources, rather than providing raw input data. Bugs can be a source of non-determinism and hence irreproducibility, e.g., reading from or branching on uninitialized memory. To expose these we routinely run system tests, using a memory debugger, multiple compilers, and different machines. Additional confidence in the code comes from specialised tests, for example automated dimensional analysis and domain transformations. This has entailed adopting a code style where we deliberately restrict what a compiler can do when re-arranging mathematical expressions. In the spirit of open science, all development is in the public domain. This leads to a positive feedback, where increased transparency and reproducibility make using the model easier for external collaborators, who in turn provide valuable contributions. To facilitate users installing and running the model, we provide (version-controlled) digital notebooks that illustrate and record analysis of output. These have the dual role of providing a gross, platform-independent testing capability and a means to document model output and analysis.
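
    One lightweight way to implement the "checksums on experiment output" practice described above is a version-controlled manifest of output hashes: when a code or input change alters the solutions, the manifest diff exposes it. A sketch under assumed file layout (paths and names are illustrative, not from the MOM6/SIS2 workflow):

      import hashlib, json, pathlib

      def checksum_manifest(outdir: str) -> dict:
          """SHA-256 checksum for every file under an experiment output directory."""
          manifest = {}
          for path in sorted(pathlib.Path(outdir).rglob("*")):
              if path.is_file():
                  manifest[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
          return manifest

      # Commit the manifest alongside run-time parameters and component versions:
      print(json.dumps(checksum_manifest("."), indent=2)[:200])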

  2. Reproducibility of X-ray bone densitometry and quantitative ultrasound - data from clinical practice

    International Nuclear Information System (INIS)

    Boyanov, M.; Shinkov, A.; Nestorova, R.

    2005-01-01

    Quality control (QC) plays a crucial role in the proper functioning of any bone density measurement device. Our study aimed to evaluate: (1) several years of QC for a dual-energy X-ray (DXA) bone densitometry unit (Hologic QDR-4500), and (2) the in vivo precision of DXA at the lumbar spine and total hip, as well as of quantitative ultrasound (QUS) of the calcaneus and radius. Three groups of postmenopausal women, with mean ages between 52 and 58 years, were each scanned twice on the same device (1st group: Hologic QDR 4500A; 2nd group: QUS Sahara, Hologic; 3rd group: QUS Sunlight Omnisense 5000/7000). Long-term BMD reproducibility of the QDR 4500 unit in vitro, expressed as a coefficient of variation, was 0.39%. At the lumbar spine the in vivo precision error was 1.35%. Among the different regions of interest in the proximal femur, precision decreased as follows: total hip (0.95%) → trochanter (1.29%) → femoral neck (1.31%) → Ward's (3.14%). The precision of QUS of the distal radius (0.59%) was much better than that of the calcaneus (3.36%). Our precision results are in line with those published abroad, which is due to the high qualification of the Bulgarian technicians. We discuss the clinical importance of the least significant change with the different methods. Our data underline the need for one leading bone densitometry centre to be accredited as the Bulgarian Reference Centre.
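
    The least significant change mentioned in the discussion is conventionally derived from the precision error as LSC = 1.96 × √2 × CV ≈ 2.77 × CV at 95% confidence. A worked example using the lumbar spine precision reported above (this is the common convention, not the study's own computation):

      import math

      def least_significant_change(precision_cv_percent: float) -> float:
          """95%-confidence LSC from a precision error given as CV%."""
          return 1.96 * math.sqrt(2) * precision_cv_percent

      print(round(least_significant_change(1.35), 2))  # -> 3.74, i.e. ~3.7% BMD change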

  3. Paleomagnetic analysis of curved thrust belts reproduced by physical models

    Science.gov (United States)

    Costa, Elisabetta; Speranza, Fabio

    2003-12-01

    This paper presents a new methodology for studying the evolution of curved mountain belts by means of paleomagnetic analyses performed on analogue models. Eleven models were designed to reproduce various tectonic settings in thin-skinned tectonics. The models analyze in particular those features reported in the literature as possible causes of peculiar rotational patterns in the outermost as well as the more internal fronts. In all the models the sedimentary cover was reproduced by frictional low-cohesion materials (sand and glass micro-beads), which detached either on frictional or on viscous layers; the latter were reproduced in the models by silicone. The sand forming the models had previously been mixed with magnetite-dominated powder. Before deformation, the models were magnetized by means of two permanent magnets generating within each model a quasi-linear magnetic field with intensity varying between 20 and 100 mT. After deformation, the models were cut into closely spaced vertical sections and sampled by means of 1×1-cm Plexiglas cylinders at several locations along the curved fronts. Care was taken to collect paleomagnetic samples only within virtually undeformed thrust sheets, avoiding zones affected by pervasive shear. Afterwards, the natural remanent magnetization of these samples was measured, and alternating-field demagnetization was used to isolate the principal components. The characteristic components of magnetization isolated were used to estimate the vertical-axis rotations occurring during model deformation. We find that indenters pushing into deforming belts from behind form non-rotational curved outer fronts. The more internal fronts show oroclinal-type rotations of smaller magnitude than expected for a perfect orocline. Lateral symmetrical obstacles in the foreland colliding with forward-propagating belts also produce non-rotational outer curved fronts, whereas in between and inside the obstacles a perfect orocline forms.

  4. Reproducibility and relative validity of a semi-quantitative food frequency questionnaire for Chinese pregnant women.

    Science.gov (United States)

    Zhang, Hongmin; Qiu, Xiang; Zhong, Chunrong; Zhang, Kewei; Xiao, Mei; Yi, Nianhua; Xiong, Guoping; Wang, Jing; Yao, Jing; Hao, Liping; Wei, Sheng; Yang, Nianhong; Yang, Xuefeng

    2015-06-04

    A food frequency questionnaire (FFQ) is a reliable tool to estimate dietary intake in large nutritional epidemiological studies, but there is a lack of a current, validated FFQ for use in urban Chinese pregnant women. This study aimed to evaluate the reproducibility and validity of a semi-quantitative FFQ designed to estimate dietary intake among urban pregnant women in a cohort study conducted in central China. In the reproducibility study, a sample of 123 healthy pregnant women completed the first FFQ at 12-13 weeks gestation and the second FFQ 3-4 weeks later. To validate the FFQ, the pregnant women completed three 24-h recalls (24HRs) between the intervals of the two FFQs. The intraclass correlation coefficients of the two administrations of the FFQ ranged from 0.23 (nuts) to 0.49 (fruits) for foods and from 0.24 (iodine) to 0.58 (selenium) for nutrients, and all coefficients were statistically significant. The unadjusted Pearson correlation coefficients between the two methods ranged from 0.28 (beans) to 0.53 (fruits) for foods and from 0.15 (iodine) to 0.59 (protein) for nutrients. Energy-adjusted and de-attenuated correlation coefficients ranged from 0.35 (beans) to 0.56 (fruits) for foods and from 0.11 (iodine) to 0.63 (protein) for nutrients, with all correlations statistically significant except for iodine, sodium and riboflavin. On average, 67.0% (51.2%-80.5%) of women were classified by both methods into the same or adjacent quintiles based on their food intakes, while 68.5% (56.1%-77.2%) were classified as such based on nutrient intakes. Extreme misclassifications were very low for both foods (average of 2.0%) and nutrients (average of 2.2%). Bland-Altman plots also showed reasonably acceptable agreement between the two methods. This FFQ is a reasonably reliable and valid tool for assessing most food and nutrient intakes of urban pregnant women in central China.
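
    The "de-attenuated" coefficients reported above correct the FFQ-vs-recall correlation for random within-person error in the reference method. A sketch of one standard form of that correction (Rosner-Willett style; the variance ratio and r below are hypothetical, and the study's exact variant is not stated):

      import math

      def deattenuated_r(observed_r, within_var, between_var, n_recalls):
          """Correct an observed correlation for within-person error in n recalls."""
          lam = within_var / between_var
          return observed_r * math.sqrt(1.0 + lam / n_recalls)

      print(round(deattenuated_r(0.45, 1.5, 1.0, 3), 2))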

  5. Can global chemistry-climate models reproduce air quality extremes?

    Science.gov (United States)

    Schnell, J.; Prather, M. J.; Holmes, C. D.

    2013-12-01

    We identify and characterize extreme ozone pollution episodes over the USA and EU through a novel analysis of ten years (2000-2010) of surface ozone measurements. An optimal interpolation scheme is developed to create grid-cell-averaged values of surface ozone that can be compared with gridded model simulations; it also allows a comparison of two non-coincident observational networks in the EU. The scheme incorporates techniques borrowed from inverse distance weighting and kriging, and uses all representative observational site data while still recognizing the heterogeneity of surface ozone. Individual, grid-cell-level events are identified as exceedances of a historical percentile (the 10 worst days in a year, i.e., the 97.3rd percentile). A clustering algorithm is then used to construct ozone episodes from the individual events. We then test the skill of a high-resolution (100 km) two-year (2005-2006) hindcast from the UCI global chemistry transport model in reproducing the events and episodes identified in the observations, using the same identification criteria. Although the UCI CTM has substantial biases in surface ozone, we find that it has considerable skill in reproducing both individual grid-cell-level extreme events and their connectedness in space and time, with an overall skill of 24% (32%) for the US (EU). The grid-cell-level extreme ozone events in both the observations and the UCI CTM are found to occur mostly (~75%) in coherent, multi-day, connected episodes covering areas greater than 1000 × 1000 km. In addition, the UCI CTM has greater skill in reproducing these larger episodes. We conclude that even at relatively coarse resolution, global chemistry-climate models can be used to project major synoptic pollution episodes driven by large-scale climate and chemistry changes, even with their known biases.
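
    The grid-cell event definition used here, exceedance of the 97.3rd historical percentile, flags roughly the 10 worst days per year. A minimal sketch with a synthetic ozone series (the distribution and values are illustrative, not study data):

      import numpy as np

      def extreme_event_mask(daily_ozone: np.ndarray, pct: float = 97.3) -> np.ndarray:
          """Boolean mask of days exceeding the historical percentile."""
          return daily_ozone > np.percentile(daily_ozone, pct)

      rng = np.random.default_rng(0)
      ozone_ppb = rng.gamma(shape=9.0, scale=5.0, size=365)  # one synthetic year
      print(int(extreme_event_mask(ozone_ppb).sum()))  # ~10 flagged days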

  6. From alginate impressions to digital virtual models: accuracy and reproducibility.

    Science.gov (United States)

    Dalstra, Michel; Melsen, Birte

    2009-03-01

    To compare the accuracy and reproducibility of measurements performed on digital virtual models with those taken on plaster casts poured immediately after the impression was taken (the 'gold standard') and on plaster models poured after a 3-5 day shipping procedure of the alginate impression. Direct comparison of two measuring techniques. The study was conducted at the Department of Orthodontics, School of Dentistry, University of Aarhus, Denmark in 2006/2007. Twelve randomly selected orthodontic graduate students participated with informed consent. Three sets of alginate impressions were taken from the participants within 1 hour. Plaster models were poured immediately from two of the sets, while the third set was kept in transit in the mail for 3-5 days; upon return, a plaster model was poured from it as well. Finally, digital models were made from the plaster models. A number of measurements were performed on the plaster casts with a digital calliper and on the corresponding digital models using the virtual measuring tool of the accompanying software, and the measurements were then compared statistically. No statistical differences were found between the three sets of plaster models. The intra- and inter-observer variability was smaller for the measurements performed on the digital models. Sending alginate impressions by mail does not affect the quality and accuracy of plaster casts poured from them afterwards. Virtual measurements performed on digital models display less variability than the corresponding measurements performed with a calliper on the actual models.

  7. A reproducible oral microcosm biofilm model for testing dental materials.

    Science.gov (United States)

    Rudney, J D; Chen, R; Lenton, P; Li, J; Li, Y; Jones, R S; Reilly, C; Fok, A S; Aparicio, C

    2012-12-01

    Most studies of biofilm effects on dental materials use single-species biofilms, or consortia. Microcosm biofilms grown directly from saliva or plaque are much more diverse, but difficult to characterize. We used the Human Oral Microbial Identification Microarray (HOMIM) to validate a reproducible oral microcosm model. Saliva and dental plaque were collected from adults and children. Hydroxyapatite and dental composite discs were inoculated with either saliva or plaque, and microcosm biofilms were grown in a CDC biofilm reactor. In later experiments, the reactor was pulsed with sucrose. DNA from inocula and microcosms was analysed by HOMIM for 272 species. Microcosms included about 60% of species from the original inoculum. Biofilms grown on hydroxyapatite and composites were extremely similar. Sucrose pulsing decreased diversity and pH, but increased the abundance of Streptococcus and Veillonella. Biofilms from the same donor, grown at different times, clustered together. This model produced reproducible microcosm biofilms that were representative of the oral microbiota. Sucrose induced changes associated with dental caries. This is the first use of HOMIM to validate an oral microcosm model that can be used to study the effects of complex biofilms on dental materials. © 2012 The Society for Applied Microbiology.

  8. Feasibility and reproducibility of fetal lung texture analysis by Automatic Quantitative Ultrasound Analysis and correlation with gestational age.

    Science.gov (United States)

    Cobo, Teresa; Bonet-Carne, Elisenda; Martínez-Terrón, Mónica; Perez-Moreno, Alvaro; Elías, Núria; Luque, Jordi; Amat-Roldan, Ivan; Palacio, Montse

    2012-01-01

    To evaluate the feasibility and reproducibility of fetal lung texture analysis using a novel automatic quantitative ultrasound analysis and to assess its correlation with gestational age. Prospective cross-sectional observational study. To evaluate texture features, 957 left and right lung images in a 2D four-cardiac-chamber view plane were previously delineated from fetuses between 20 and 41 weeks of gestation. Quantification of lung texture was performed by the Automatic Quantitative Ultrasound Analysis (AQUA) software to extract image features. A standard learning approach composed of feature transformation and a regression model was used to evaluate the association between texture features and gestational age. The association between weeks of gestation and fetal lung texture quantified by the AQUA software presented a Pearson correlation of 0.97. The association was not influenced by delineation parameters such as region of interest (ROI) localization, ROI size, right/left lung selected or sonographic parameters such as ultrasound equipment or transducer used. Fetal lung texture analysis measured by the AQUA software demonstrated a strong correlation with gestational age. This supports further research to explore the use of this technology to the noninvasive prediction of fetal lung maturity. Copyright © 2012 S. Karger AG, Basel.

  9. Can a coupled meteorology–chemistry model reproduce the ...

    Science.gov (United States)

    The ability of a coupled meteorology–chemistry model, i.e., Weather Research and Forecast and Community Multiscale Air Quality (WRF-CMAQ), to reproduce the historical trend in aerosol optical depth (AOD) and clear-sky shortwave radiation (SWR) over the Northern Hemisphere has been evaluated through a comparison of 21-year simulated results with observation-derived records from 1990 to 2010. Six satellite-retrieved AOD products including AVHRR, TOMS, SeaWiFS, MISR, MODIS-Terra and MODIS-Aqua as well as long-term historical records from 11 AERONET sites were used for the comparison of AOD trends. Clear-sky SWR products derived by CERES at both the top of atmosphere (TOA) and surface as well as surface SWR data derived from seven SURFRAD sites were used for the comparison of trends in SWR. The model successfully captured increasing AOD trends along with the corresponding increased TOA SWR (upwelling) and decreased surface SWR (downwelling) in both eastern China and the northern Pacific. The model also captured declining AOD trends along with the corresponding decreased TOA SWR (upwelling) and increased surface SWR (downwelling) in the eastern US, Europe and the northern Atlantic for the period of 2000–2010. However, the model underestimated the AOD over regions with substantial natural dust aerosol contributions, such as the Sahara Desert, Arabian Desert, central Atlantic and northern Indian Ocean. Estimates of the aerosol direct radiative effect (DRE) at TOA a…

  10. Reproducibility and quantitativity of oblique-angle reconstruction in single photon emission computed tomography using Tl-201 myocardial phantom

    International Nuclear Information System (INIS)

    Bunko, Hisashi; Nanbu, Ichiro; Seki, Hiroyasu

    1984-01-01

    This study was carried out in order to evaluate the reproducibility and quantitativity of oblique-angle reconstruction in myocardial phantom SPECT. A myocardial phantom with transmural and subendocardial defects, and an off-axis phantom with wall thickness changing continuously from 0 to 23 mm, were used. Sixty projections at 6° intervals were acquired using a dual camera (ZLC) with high-resolution collimators connected to a Scintipac-2400 computer system. Oblique-angle reconstructed images were obtained by manually indicating the long axis of the phantom in the transaxial and vertical long-axis tomograms. Reproducibility and quantitativity were evaluated by creating circumferential profiles (CFP) of the finally reconstructed short-axis images. Inter- and intraoperator reproducibility of the relative counting ratio was less than 6.7% (C.V.) and 3.3% (C.V.), respectively. Both inter- and intraoperator reproducibility of absolute counts was better than that of the counting ratio (less than 5.1% (C.V.) and 2.9% (C.V.), respectively). Variation of defect location in the reconstructed image and between slices was less than 1 sampling interval of the CFP (6°) and 0.6 slice, respectively. Quantitativity of counts in the reconstructed images was poor in the transmural defect, but fair in the subendocardial defect. The counting ratio was greatly affected by wall thickness. Temporal quantitativity, or linearity of the counts in sequential SPECTs, was good in non-defect areas, especially when wall thickness was greater than 70% (16 mm) of maximum. In conclusion, three-dimensional oblique-angle reconstruction in Tl-201 myocardial SPECT could be applicable to relative and temporal quantitation of local myocardial activity outside the defect area for the quantitative evaluation of Tl-201 myocardial wash-out. (J.P.N.)

  11. The substorm cycle as reproduced by global MHD models

    Science.gov (United States)

    Gordeev, E.; Sergeev, V.; Tsyganenko, N.; Kuznetsova, M.; Rastätter, L.; Raeder, J.; Tóth, G.; Lyon, J.; Merkin, V.; Wiltberger, M.

    2017-01-01

    Recently, Gordeev et al. (2015) suggested a method to test global MHD models against statistical empirical data. They showed that four community-available global MHD models supported by the Community Coordinated Modeling Center (CCMC) produce reasonable agreement with reality for those key parameters (magnetospheric size, magnetic field, and pressure) that are directly related to the large-scale equilibria in the outer magnetosphere. Based on the same set of simulation runs, here we investigate how the models reproduce the global loading-unloading cycle. We found that, in terms of global magnetic flux transport, the three examined CCMC models display systematically different responses to an idealized 2 h north, then 2 h south, interplanetary magnetic field (IMF) Bz variation. The LFM model shows depressed return convection and a high loading rate during the growth phase, as well as enhanced return convection and a high unloading rate during the expansion phase, with the amount of loaded/unloaded magnetotail flux and the growth phase duration being the closest to their empirical values observed during isolated substorms. The two other models exhibit drastically different behavior. In the BATS-R-US model, the plasma sheet convection shows a smooth transition to the steady convection regime after the IMF southward turning. In the Open GGCM, a weak plasma sheet convection has comparable intensities during both the growth phase and the following slow unloading phase. We also demonstrate a potential technical problem in the publicly available simulations, related to post-processing interpolation, which could affect the accuracy of magnetic field tracing and of other related procedures.

  12. A reproducible brain tumour model established from human glioblastoma biopsies

    Directory of Open Access Journals (Sweden)

    Li Xingang

    2009-12-01

    Background: Establishing clinically relevant animal models of glioblastoma multiforme (GBM) remains a challenge, and many commonly used cell line-based models do not recapitulate the invasive growth patterns of patient GBMs. Previously, we have reported the formation of highly invasive tumour xenografts in nude rats from human GBMs. However, implementing tumour models based on primary tissue requires that these models can be sufficiently standardised with consistently high take rates. Methods: In this work, we collected data on growth kinetics from a material of 29 biopsies xenografted in nude rats, and characterised this model with an emphasis on neuropathological and radiological features. Results: The tumour take rate for xenografted GBM biopsies was 96% and remained close to 100% at subsequent passages in vivo, whereas only one of four lower-grade tumours engrafted. Average time from transplantation to the onset of symptoms was 125 ± 11.5 (SEM) days. Histologically, the primary xenografts recapitulated the invasive features of the parent tumours, while endothelial cell proliferations and necrosis were mostly absent. After 4-5 in vivo passages, the tumours became more vascular with necrotic areas, but also appeared more circumscribed. MRI typically revealed changes related to tumour growth several months prior to the onset of symptoms. Conclusions: In vivo passaging of patient GBM biopsies produced tumours representative of the patient tumours, with high take rates and a reproducible disease course. The model provides combinations of angiogenic and invasive phenotypes and represents a good alternative to in vitro propagated cell lines for dissecting mechanisms of brain tumour progression.

  13. Development of a Consistent and Reproducible Porcine Scald Burn Model

    Science.gov (United States)

    Kempf, Margit; Kimble, Roy; Cuttle, Leila

    2016-01-01

    There are very few porcine burn models that replicate scald injuries similar to those encountered by children. We have developed a robust porcine burn model capable of creating reproducible scald burns for a wide range of burn conditions. The study was conducted with juvenile Large White pigs, creating replicates of burn combinations: 50°C for 1, 2, 5 and 10 minutes and 60°C, 70°C, 80°C and 90°C for 5 seconds. Visual wound examination, biopsies and Laser Doppler Imaging were performed at 1 and 24 hours and at 3 and 7 days post-burn. A consistent water temperature was maintained within the scald device for long durations (49.8 ± 0.1°C when set at 50°C). The macroscopic and histologic appearance was consistent between replicates of burn conditions. For 50°C water, 10 minute duration burns showed significantly deeper tissue injury than all shorter durations at 24 hours post-burn (p ≤ 0.0001), with damage seen to increase until day 3 post-burn. For 5 second duration burns, by day 7 post-burn the 80°C and 90°C scalds had damage detected significantly deeper in the tissue than the 70°C scalds (p ≤ 0.001). A reliable and safe model of porcine scald burn injury has been successfully developed. The novel apparatus with continually refreshed water improves consistency of scald creation for long exposure times. This model allows the pathophysiology of scald burn wound creation and progression to be examined. PMID:27612153

  14. Establishment of reproducible osteosarcoma rat model using orthotopic implantation technique.

    Science.gov (United States)

    Yu, Zhe; Sun, Honghui; Fan, Qingyu; Long, Hua; Yang, Tongtao; Ma, Bao'an

    2009-05-01

    … osteosarcoma model was shown to be feasible: the take rate was high, surgical mortality was negligible, and the procedure was simple to perform and easily reproduced. It may be a useful tool in the investigation of antiangiogenic and anticancer therapeutics. Ultrasound was found to be a highly accurate tool for tumor diagnosis, localization and measurement and may be recommended for monitoring tumor growth in this model.

  15. Compositional and Quantitative Model Checking

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand

    2010-01-01

    This paper gives a survey of a compositional model-checking methodology and its successful instantiation to the model checking of networks of finite-state, timed, hybrid and probabilistic systems with respect to suitable quantitative versions of the modal mu-calculus [Koz82]. The method is based...

  16. Qualitative and quantitative histopathology in transitional cell carcinomas of the urinary bladder. An international investigation of intra- and interobserver reproducibility

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Sasaki, M; Fukuzawa, S

    1994-01-01

    of both qualitative and quantitative grading methods. Grading of malignancy was performed by one observer in Japan (using the World Health Organization scheme), and by two observers in Denmark (using the Bergkvist system). A "translation" between the systems, grade for grade, and kappa statistics were...... used in evaluating the reproducibility. Unbiased estimates of nuclear mean volume, nuclear mean profile area, nuclear volume fraction, nuclear profile density index, and mitotic profile density index were obtained twice in 55 of the studied cases by one observer in Japan and one in Denmark, using...... a random, systematic sampling scheme. RESULTS: The results were compared by bivariate correlation analyses and Kendall's tau. The international interobserver reproducibility of qualitative gradings was rather poor (kappa = 0.51), especially for grade 2 tumors (kappa = 0.28). Likewise, the interobserver...
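
    The kappa statistic invoked above measures chance-corrected agreement between two observers. As a point of reference, here is a minimal sketch of an unweighted Cohen's kappa, with entirely hypothetical grade assignments standing in for the Japanese and Danish gradings:

```python
import numpy as np

def cohens_kappa(r1, r2, n_categories):
    """Unweighted Cohen's kappa: chance-corrected agreement of two raters."""
    conf = np.zeros((n_categories, n_categories))
    for a, b in zip(r1, r2):
        conf[a, b] += 1
    conf /= conf.sum()
    p_obs = np.trace(conf)                        # observed agreement
    p_exp = conf.sum(axis=1) @ conf.sum(axis=0)   # agreement expected by chance
    return (p_obs - p_exp) / (1.0 - p_exp)

# entirely hypothetical malignancy grades (0/1/2) for ten tumours
observer_japan   = [0, 1, 1, 2, 0, 1, 2, 2, 1, 0]
observer_denmark = [0, 1, 2, 2, 0, 1, 1, 2, 1, 1]
print(round(cohens_kappa(observer_japan, observer_denmark, 3), 2))  # ~0.54
```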

  17. Quantitative Aortic Distensibility Measurement Using CT in Patients with Abdominal Aortic Aneurysm: Reproducibility and Clinical Relevance

    Directory of Open Access Journals (Sweden)

    Yunfei Zha

    2017-01-01

    Purpose. To investigate the reproducibility of aortic distensibility (D) measurement using CT and assess its clinical relevance in patients with infrarenal abdominal aortic aneurysm (AAA). Methods. 54 patients with infrarenal AAA were studied to determine their distensibility by using 64-MDCT. Aortic cross-sectional area changes were determined at two positions of the aorta, immediately below the lowest renal artery (level 1) and at the level of its maximal diameter (level 2), by semiautomatic segmentation. Measurement reproducibility was assessed using the intraclass correlation coefficient (ICC) and Bland-Altman analyses. Stepwise multiple regression analysis was performed to assess linear associations between aortic D and anthropometric and biochemical parameters. Results. A mean distensibility of D(level 1) = (1.05 ± 0.22) × 10−5 Pa−1 and D(level 2) = (0.49 ± 0.18) × 10−5 Pa−1 was found. ICC proved excellent consistency between readers at both locations: 0.92 for intraobserver and 0.89 for interobserver differences at level 1, and 0.85 and 0.79 at level 2. Multivariate analysis of all these variables showed sac distensibility to be independently related (R² = 0.68) to BMI, diastolic blood pressure, and AAA diameter. Conclusions. Aortic distensibility measurement in patients with AAA demonstrated high inter- and intraobserver agreement and may be valuable when choosing a graft of optimal dimensions for AAA before endovascular aneurysm repair.
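
    The record does not spell out its distensibility formula; a minimal sketch assuming the common area-based definition D = ΔA / (A_min · ΔP), with pulse pressure converted from mmHg to Pa and purely illustrative numbers:

```python
MMHG_TO_PA = 133.322  # 1 mmHg in pascal

def distensibility(a_max_mm2, a_min_mm2, systolic_mmhg, diastolic_mmhg):
    """Area-based aortic distensibility D = dA / (A_min * pulse pressure), in Pa^-1."""
    pulse_pressure_pa = (systolic_mmhg - diastolic_mmhg) * MMHG_TO_PA
    return (a_max_mm2 - a_min_mm2) / (a_min_mm2 * pulse_pressure_pa)

# hypothetical level-1 reading: 5% cyclic area change over a 45 mmHg pulse pressure
print(f"D = {distensibility(420.0, 400.0, 130.0, 85.0):.2e} Pa^-1")  # ~8.3e-06
```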

  18. Qualitative and quantitative histopathology in transitional cell carcinomas of the urinary bladder. An international investigation of intra- and interobserver reproducibility

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Sasaki, M; Fukuzawa, S

    1994-01-01

    .54). This can probably be related to the manual design of the sampling scheme and may be solved by introducing a motorized object stage in the systematic selection of fields of vision for quantitative measurements. However, the nuclear mean size estimators are unaffected by such sampling variability...... results. Using objective, unbiased stereologic techniques and ordinary histomorphometry, such problems may be solved. EXPERIMENTAL DESIGN: A study of 110 patients with papillary or solid transitional cell carcinomas of the urinary bladder in stage Ta through T4 was carried out, addressing reproducibility...... of both qualitative and quantitative grading methods. Grading of malignancy was performed by one observer in Japan (using the World Health Organization scheme), and by two observers in Denmark (using the Bergkvist system). A "translation" between the systems, grade for grade, and kappa statistics were...

  19. Highly Reproducible Automated Proteomics Sample Preparation Workflow for Quantitative Mass Spectrometry.

    Science.gov (United States)

    Fu, Qin; Kowalski, Michael P; Mastali, Mitra; Parker, Sarah J; Sobhani, Kimia; van den Broek, Irene; Hunter, Christie L; Van Eyk, Jennifer E

    2018-01-05

    Sample preparation for protein quantification by mass spectrometry requires multiple processing steps including denaturation, reduction, alkylation, protease digestion, and peptide cleanup. Scaling these procedures for the analysis of numerous complex biological samples can be tedious and time-consuming, as there are many liquid transfer steps and timed reactions where technical variations can be introduced and propagated. We established an automated sample preparation workflow with a total processing time for 96 samples of 5 h, including a 2 h incubation with trypsin. Peptide cleanup is accomplished by online diversion during the LC/MS/MS analysis. In a selected reaction monitoring (SRM) assay targeting 6 plasma biomarkers and spiked β-galactosidase, intraday and interday coefficients of variation (CVs) were assessed for 5 serum and 5 plasma samples over 5 days; samples repeated on 3 separate days had total CVs below 20%. Similar results were obtained when the workflow was transferred to a second site: 93% of peptides had CVs below 20%. An automated trypsin digestion workflow yields uniformly processed samples in less than 5 h. Reproducible quantification of peptides was observed across replicates, days, instruments, and laboratory sites, demonstrating the broad applicability of this approach.
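
    As a point of reference for the CV figures quoted above, a minimal sketch of a percent coefficient of variation across replicate peptide measurements (hypothetical peak areas):

```python
import numpy as np

def percent_cv(values):
    """Percent coefficient of variation: 100 * SD / mean."""
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

# hypothetical SRM peak areas for one peptide, the same sample digested on 3 days
replicates = [1.02e6, 1.10e6, 0.97e6]
print(f"total CV = {percent_cv(replicates):.1f}%")  # well below the 20% criterion
```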

  20. How Modeling Standards, Software, and Initiatives Support Reproducibility in Systems Biology and Systems Medicine.

    Science.gov (United States)

    Waltemath, Dagmar; Wolkenhauer, Olaf

    2016-10-01

    Only reproducible results are of significance to science. The lack of suitable standards and appropriate support of standards in software tools has led to numerous publications with irreproducible results. Our objectives are to identify the key challenges of reproducible research and to highlight existing solutions. In this paper, we summarize problems concerning reproducibility in systems biology and systems medicine. We focus on initiatives, standards, and software tools that aim to improve the reproducibility of simulation studies. The long-term success of systems biology and systems medicine depends on trustworthy models and simulations. This requires openness to ensure reusability and transparency to enable reproducibility of results in these fields.

  1. Reproducibility of the assessment of myocardial function using gated Tc-99m-MIBI SPECT and quantitative software

    International Nuclear Information System (INIS)

    Lee, Dong Soo; Cheon, Gi Jeong; Ahn, Ji Young; Jeong, Joon Ki; Lee, Myung Chul

    1998-01-01

    We investigated the reproducibility of the quantification of left ventricular volume and ejection fraction, and of the grading of myocardial wall motion and systolic thickening, using gated myocardial SPECT and the Cedars quantification software. We performed gated Tc-99m-MIBI myocardial SPECT twice in the same position in 33 consecutive patients. We used 16 frames per cycle for the gating of the sequential Tc-99m-MIBI SPECT studies. After reconstruction, we used Cedars quantitative gated SPECT to calculate ventricular volume and ejection fraction (EF). Wall motion was graded using a 5-point score and wall thickening using a 4-point score. Coefficients of variation for repeated measurement of volumes and EF were calculated, and kappa values (κ) were calculated to assess the reproducibility of the wall motion and wall thickening grades. End-diastolic volumes (EDV) ranged from 58 to 248 ml (122 ± 42 ml), end-systolic volumes (ESV) from 20 to 174 ml (65 ± 39 ml), and EF from 20% to 68% (51 ± 14%). The geometric mean of the standard deviations across the 33 patients was 5.0 ml for EDV, 3.9 ml for ESV and 1.9% for EF; the average differences were not different from zero (p > 0.05). The κ-value for wall motion between the two consecutive images was 0.76 (confidence interval: 0.71-0.81), and 0.87 (confidence interval: 0.83-0.90) for wall thickening. We conclude that quantification of functional indices and assessment of wall motion and wall thickening using gated Tc-99m-MIBI SPECT are reproducible, and that this method can be used for the evaluation of short-acting drug effects

  2. Validation and reproducibility of a semi-quantitative FFQ as a measure of dietary intake in adults from Puerto Rico.

    Science.gov (United States)

    Palacios, Cristina; Trak, Maria Angelica; Betancourt, Jesmari; Joshipura, Kaumudi; Tucker, Katherine L

    2015-10-01

    We aimed to assess the relative validity and reproducibility of a semi-quantitative FFQ in Puerto Rican adults. Participants completed an FFQ, followed by a 6 d food record and a second administration of the FFQ, 30 d later. All nutrients were log transformed and adjusted for energy intake. Statistical analyses included correlations, paired t tests, cross-classification and Bland-Altman plots. Setting: Medical Sciences Campus, University of Puerto Rico. Subjects: Convenience sample of students, employees and faculty members (n = 100, ≥21 years). Data were collected in 2010. A total of ninety-two participants completed the study. Most were young overweight females. All nutrients were significantly correlated between the two FFQ, with an average correlation of 0·61 (range 0·43-0·73) and an average difference of 4·8 % between them. Most energy-adjusted nutrients showed significant correlations between the FFQ and food record, which improved with de-attenuation and averaged 0·38 (range 0·11-0·63). The lowest non-significant correlations (≤0·20) were for trans-fat, n-3 fatty acids, thiamin and vitamin E. Intakes assessed by the FFQ were higher than those from the food record by a mean of 19 % (range 4-44 %). Bland-Altman plots showed that there was a systematic trend towards higher estimates with the FFQ, particularly for energy, carbohydrate and Ca. Most participants were correctly classified into the same or adjacent quintile (average 66 %) by both methods with only 3 % gross misclassification. This semi-quantitative FFQ is a tool that offers relatively valid and reproducible estimates of energy and certain nutrients in this group of mostly female Puerto Ricans.
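
    The de-attenuation step mentioned above corrects an observed FFQ-record correlation for within-person day-to-day variation in the food records; a sketch assuming the standard Rosner-Willett correction, with made-up variance components:

```python
import math

def deattenuate(r_observed, within_var, between_var, n_record_days):
    """Rosner-Willett de-attenuation of an FFQ-vs-record correlation for
    within-person variation averaged over n_record_days of food records."""
    return r_observed * math.sqrt(1.0 + (within_var / between_var) / n_record_days)

# made-up inputs: observed r = 0.30, within/between variance ratio = 2, 6 d of records
print(round(deattenuate(0.30, 2.0, 1.0, 6), 2))  # ~0.35
```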

  3. Reproducibility of immunostaining quantification and description of a new digital image processing procedure for quantitative evaluation of immunohistochemistry in pathology.

    Science.gov (United States)

    Bernardo, Vagner; Lourenço, Simone Q C; Cruz, Renato; Monteiro-Leal, Luiz H; Silva, Licínio E; Camisasca, Danielle R; Farina, Marcos; Lins, Ulysses

    2009-08-01

    Quantification of immunostaining is a widely used technique in pathology. Nonetheless, techniques that rely on human vision are prone to inter- and intraobserver variability, and they are tedious and time consuming. Digital image analysis (DIA), now available in a variety of platforms, improves quantification performance; however, the stability of these different DIA systems is largely unknown. Here, we describe a method to measure the reproducibility of DIA systems. In addition, we describe a new image-processing strategy for quantitative evaluation of immunostained tissue sections using DAB/hematoxylin-stained slides. This approach is based on image subtraction, using a blue low-pass filter in the optical train, followed by digital contrast and brightness enhancement. Results showed that our DIA system yields stable counts, and that this method can be used to evaluate the performance of DIA systems. The new image-processing approach creates an image that aids both human visual observation and DIA systems in assessing immunostained slides, delivers a quantitative performance similar to that of bright-field imaging, gives thresholds with smaller ranges, and allows the segmentation of strongly immunostained areas, all resulting in a higher probability of representing specific staining. We believe that our approach offers important advantages for immunostaining quantification in pathology.

  4. Hippocampal volume change measurement: quantitative assessment of the reproducibility of expert manual outlining and the automated methods FreeSurfer and FIRST.

    Science.gov (United States)

    Mulder, Emma R; de Jong, Remko A; Knol, Dirk L; van Schijndel, Ronald A; Cover, Keith S; Visser, Pieter J; Barkhof, Frederik; Vrenken, Hugo

    2014-05-15

    To measure hippocampal volume change in Alzheimer's disease (AD) or mild cognitive impairment (MCI), expert manual delineation is often used because of its supposed accuracy. It has been suggested that expert outlining yields poorer reproducibility as compared to automated methods, but this has not been investigated. To determine the reproducibilities of expert manual outlining and two common automated methods for measuring hippocampal atrophy rates in healthy aging, MCI and AD. From the Alzheimer's Disease Neuroimaging Initiative (ADNI), 80 subjects were selected: 20 patients with AD, 40 patients with mild cognitive impairment (MCI) and 20 healthy controls (HCs). Left and right hippocampal volume change between the baseline and month-12 visits was assessed by using expert manual delineation, and by the automated software packages FreeSurfer (longitudinal processing stream) and FIRST. To assess reproducibility of the measured hippocampal volume change, both back-to-back (BTB) MPRAGE scans available for each visit were analyzed. Hippocampal volume change was expressed in μL, and as a percentage of baseline volume. Reproducibility of the 1-year hippocampal volume change was estimated from the BTB measurements by using a linear mixed model to calculate the limits of agreement (LoA) of each method, reflecting its measurement uncertainty. Using the delta method, approximate p-values were calculated for the pairwise comparisons between methods. Statistical analyses were performed both with inclusion and exclusion of visibly incorrect segmentations. Visibly incorrect automated segmentation in either one or both scans of a longitudinal scan pair occurred in 7.5% of the hippocampi for FreeSurfer and in 6.9% of the hippocampi for FIRST. After excluding these failed cases, reproducibility analysis for 1-year percentage volume change yielded LoA of ±7.2% for FreeSurfer, ±9.7% for expert manual delineation, and ±10.0% for FIRST. Methods ranked the same for reproducibility of 1
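
    The study derives its limits of agreement from a linear mixed model; the classical Bland-Altman version, computed here from hypothetical back-to-back volume-change pairs, conveys the same idea in simplified form:

```python
import numpy as np

def limits_of_agreement(x1, x2):
    """Classical Bland-Altman 95% limits of agreement for paired measurements."""
    d = np.asarray(x1, float) - np.asarray(x2, float)
    half_width = 1.96 * d.std(ddof=1)
    return d.mean() - half_width, d.mean() + half_width

# hypothetical 1-year % hippocampal volume change from back-to-back scan pairs
run_a = [-3.1, -1.8, -4.5, -0.9, -2.7, -3.8]
run_b = [-2.5, -2.4, -3.9, -1.6, -3.3, -3.0]
print(limits_of_agreement(run_a, run_b))
```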

  5. Hydrological Modeling Reproducibility Through Data Management and Adaptors for Model Interoperability

    Science.gov (United States)

    Turner, M. A.

    2015-12-01

    Because of a lack of centralized planning and no widely-adopted standards among hydrological modeling research groups, research communities, and the data management teams meant to support research, there is chaos when it comes to data formats, spatio-temporal resolutions, ontologies, and data availability. All this makes true scientific reproducibility and collaborative integrated modeling impossible without some glue to piece it all together. Our Virtual Watershed Integrated Modeling System provides the tools and modeling framework hydrologists need to accelerate and fortify new scientific investigations by tracking provenance and providing adaptors for integrated, collaborative hydrologic modeling and data management. Under global warming trends where water resources are under increasing stress, reproducible hydrological modeling will be increasingly important to improve transparency and understanding of the scientific facts revealed through modeling. The Virtual Watershed Data Engine is capable of ingesting a wide variety of heterogeneous model inputs, outputs, model configurations, and metadata. We will demonstrate one example, starting from real-time raw weather station data packaged with station metadata. Our integrated modeling system will then create gridded input data via geostatistical methods along with error and uncertainty estimates. These gridded data are then used as input to hydrological models, all of which are available as web services wherever feasible. Models may be integrated in a data-centric way where the outputs too are tracked and used as inputs to "downstream" models. This work is part of an ongoing collaborative Tri-state (New Mexico, Nevada, Idaho) NSF EPSCoR Project, WC-WAVE, comprised of researchers from multiple universities in each of the three states. The tools produced and presented here have been developed collaboratively alongside watershed scientists to address specific modeling problems with an eye on the bigger picture of

  6. Reproducing Phenomenology of Peroxidation Kinetics via Model Optimization

    Science.gov (United States)

    Ruslanov, Anatole D.; Bashylau, Anton V.

    2010-06-01

    We studied mathematical modeling of lipid peroxidation using a biochemical model system of iron (II)-ascorbate-dependent lipid peroxidation of rat hepatocyte mitochondrial fractions. We found that antioxidants extracted from plants demonstrate a high intensity of peroxidation inhibition. We simplified the system of differential equations that describes the kinetics of the mathematical model to a first order equation, which can be solved analytically. Moreover, we endeavor to algorithmically and heuristically recreate the processes and construct an environment that closely resembles the corresponding natural system. Our results demonstrate that it is possible to theoretically predict both the kinetics of oxidation and the intensity of inhibition without resorting to analytical and biochemical research, which is important for cost-effective discovery and development of medical agents with antioxidant action from the medicinal plants.
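
    The reduction to a first-order equation means the substrate concentration decays as c(t) = c0·exp(−kt); a tiny numerical illustration with made-up constants:

```python
import numpy as np

# first-order consumption of an oxidizable substrate: dc/dt = -k*c  =>  c(t) = c0*exp(-k*t)
c0, k = 1.0, 0.15                 # hypothetical initial concentration and rate constant
t = np.linspace(0.0, 30.0, 7)     # minutes
print(np.round(c0 * np.exp(-k * t), 3))
# an antioxidant can be modelled as lowering the effective k, slowing the decay
```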

  7. Using a 1-D model to reproduce diurnal SST signals

    DEFF Research Database (Denmark)

    Karagali, Ioanna; Høyer, Jacob L.

    2014-01-01

    of measurement. A generally preferred approach to bridge the gap between in situ and remotely obtained measurements is through modelling of the upper-ocean temperature. This ESA-supported study focuses on the implementation of the 1-dimensional General Ocean Turbulence Model (GOTM), in order to resolve...... an additional parametrisation for the total outgoing long-wave radiation and a 9-band parametrisation for the light extinction. New parametrisations for the stability functions, associated with vertical mixing, have been included. GOTM is tested using experimental data from the Woods Hole Oceanographic

  8. Physiologically based quantitative modeling of unihemispheric sleep.

    Science.gov (United States)

    Kedziora, D J; Abeysuriya, R G; Phillips, A J K; Robinson, P A

    2012-12-07

    Unihemispheric sleep has been observed in numerous species, including birds and aquatic mammals. While knowledge of its functional role has been improved in recent years, the physiological mechanisms that generate this behavior remain poorly understood. Here, unihemispheric sleep is simulated using a physiologically based quantitative model of the mammalian ascending arousal system. The model includes mutual inhibition between wake-promoting monoaminergic nuclei (MA) and sleep-promoting ventrolateral preoptic nuclei (VLPO), driven by circadian and homeostatic drives as well as cholinergic and orexinergic input to MA. The model is extended here to incorporate two distinct hemispheres and their interconnections. It is postulated that inhibitory connections between VLPO nuclei in opposite hemispheres are responsible for unihemispheric sleep, and it is shown that contralateral inhibitory connections promote unihemispheric sleep while ipsilateral inhibitory connections promote bihemispheric sleep. The frequency of alternating unihemispheric sleep bouts is chiefly determined by sleep homeostasis and its corresponding time constant. It is shown that the model reproduces dolphin sleep, and that the sleep regimes of humans, cetaceans, and fur seals, the latter both terrestrially and in a marine environment, require only modest changes in contralateral connection strength and homeostatic time constant. It is further demonstrated that fur seals can potentially switch between their terrestrial bihemispheric and aquatic unihemispheric sleep patterns by varying just the contralateral connection strength. These results provide experimentally testable predictions regarding the differences between species that sleep bihemispherically and unihemispherically.

  9. Reproducible Infection Model for Clostridium perfringens in Broiler Chickens

    DEFF Research Database (Denmark)

    Pedersen, Karl; Friis-Holm, Lotte Bjerrum; Heuer, Ole Eske

    2008-01-01

    Experiments were carried out to establish an infection and disease model for Clostridium perfringens in broiler chickens. Previous experiments had failed to induce disease and only a transient colonization with challenge strains had been obtained. In the present study, two series of experiments...

  10. Synaptic augmentation in a cortical circuit model reproduces serial dependence in visual working memory.

    Directory of Open Access Journals (Sweden)

    Daniel P Bliss

    Recent work has established that visual working memory is subject to serial dependence: current information in memory blends with that from the recent past as a function of their similarity. This tuned temporal smoothing likely promotes the stability of memory in the face of noise and occlusion. Serial dependence accumulates over several seconds in memory and deteriorates with increased separation between trials. While this phenomenon has been extensively characterized in behavior, its neural mechanism is unknown. In the present study, we investigate the circuit-level origins of serial dependence in a biophysical model of cortex. We explore two distinct kinds of mechanisms: stable persistent activity during the memory delay period and dynamic "activity-silent" synaptic plasticity. We find that networks endowed with both strong reverberation to support persistent activity and dynamic synapses can closely reproduce behavioral serial dependence. Specifically, elevated activity drives synaptic augmentation, which biases activity on the subsequent trial, giving rise to a spatiotemporally tuned shift in the population response. Our hybrid neural model is a theoretical advance beyond abstract mathematical characterizations, offers testable hypotheses for physiological research, and demonstrates the power of biological insights to provide a quantitative explanation of human behavior.

  11. COMBINE archive and OMEX format : One file to share all information to reproduce a modeling project

    NARCIS (Netherlands)

    Bergmann, Frank T.; Olivier, Brett G.; Soiland-Reyes, Stian

    2014-01-01

    Background: With the ever-increasing use of computational models in the biosciences, the need to share models and reproduce the results of published studies efficiently and easily is becoming more important. To this end, various standards have been proposed that can be used to describe models,

  12. Relative validity and reproducibility of a parent-administered semi-quantitative FFQ for assessing food intake in Danish children aged 3-9 years

    DEFF Research Database (Denmark)

    Buch-Andersen, Tine; Perez-Cueto, Armando; Toft, Ulla Marie Nørgaard

    2016-01-01

    OBJECTIVE: To assess the relative validity and reproducibility of the semi-quantitative FFQ (SFFQ) applied in the evaluation of a community intervention study, SoL-Bornholm, for estimating food intakes. DESIGN: The reference measure was a 4 d estimated food record. The SFFQ was completed two time...

  13. Voxel-level reproducibility assessment of modality independent elastography in a pre-clinical murine model

    Science.gov (United States)

    Flint, Katelyn M.; Weis, Jared A.; Yankeelov, Thomas E.; Miga, Michael I.

    2015-03-01

    Changes in tissue mechanical properties, measured non-invasively by elastography methods, have been shown to be an important diagnostic tool, particularly for cancer. Tissue elasticity information, tracked over the course of therapy, may be an important prognostic indicator of tumor response to treatment. While many elastography techniques exist, this work reports on the use of a novel form of elastography that uses image texture to reconstruct elastic property distributions in tissue (i.e., a modality independent elastography (MIE) method) within the context of a pre-clinical breast cancer system [1,2]. The elasticity results have previously shown good correlation with independent mechanical testing [1]. Furthermore, MIE has been successfully utilized to localize and characterize lesions in both phantom experiments and simulation experiments with clinical data [2,3]. However, the reproducibility of this method has not been characterized in previous work. The goal of this study is to evaluate voxel-level reproducibility of MIE in a pre-clinical model of breast cancer. Bland-Altman analysis of co-registered repeat MIE scans in this preliminary study showed a reproducibility index of 24.7% (scaled to a percent of maximum stiffness) at the voxel level. As opposed to many reports in the magnetic resonance elastography (MRE) literature that speak to reproducibility measures of the bulk organ, these results establish MIE reproducibility at the voxel level; i.e., the reproducibility of locally-defined mechanical property measurements throughout the tumor volume.

  14. Reproducibility of an automatic quantitation of regional myocardial wall motion and systolic thickening on gated Tc-99m-MIBI myocardial SPECT

    International Nuclear Information System (INIS)

    Paeng, Jin Chul; Lee, Dong Soo; Cheon, Gi Jeong; Kim, Yu Kyeong; Chung, June Key; Lee, Myung Chul

    2000-01-01

    The aim of this study is to investigate the reproducibility of the quantitative assessment of segmental wall motion and systolic thickening provided by an automatic quantitation algorithm. Tc-99m-MIBI gated myocardial SPECT with dipyridamole stress was performed twice consecutively in the same position in 31 patients with known or suspected coronary artery disease (4 with single-, 6 with two-, and 11 with triple-vessel disease; ejection fraction 51 ± 14%). The myocardium was divided into 20 segments. Segmental wall motion and systolic thickening were calculated and expressed in mm and % increase, respectively, using AutoQUANT™ software. The reproducibility of this quantitative measurement of wall motion and thickening was tested. Correlations between repeated measurements on consecutive gated SPECT were excellent for wall motion (r = 0.95) and systolic thickening (r = 0.88). On Bland-Altman analysis, the two-standard-deviation limit was 2 mm for repeated measurement of segmental wall motion and 20% for that of systolic thickening. The weighted kappa values of repeated measurements were 0.807 for wall motion and 0.708 for systolic thickening. Sex, perfusion, and segmental location had no influence on reproducibility. Segmental wall motion and systolic thickening quantified using AutoQUANT™ software on gated myocardial SPECT offer good reproducibility; a change can be considered significant when it exceeds 2 mm for wall motion or 20% for systolic thickening
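
    Unlike the unweighted kappa sketched earlier, the weighted kappa used here credits partial agreement between nearby ordinal scores; a sketch with linear weights and hypothetical 5-point wall-motion scores:

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat):
    """Linearly weighted kappa: near-misses on an ordinal scale earn partial credit."""
    conf = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        conf[a, b] += 1
    conf /= conf.sum()
    w = 1.0 - np.abs(np.subtract.outer(np.arange(n_cat), np.arange(n_cat))) / (n_cat - 1)
    p_obs = (w * conf).sum()
    p_exp = (w * np.outer(conf.sum(axis=1), conf.sum(axis=0))).sum()
    return (p_obs - p_exp) / (1.0 - p_exp)

# hypothetical 5-point wall-motion scores from two consecutive gated SPECT studies
first_scan  = [4, 3, 4, 2, 1, 4, 3, 0, 2, 4]
second_scan = [4, 3, 3, 2, 1, 4, 4, 0, 2, 4]
print(round(weighted_kappa(first_scan, second_scan, 5), 2))
```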

  15. Investigation of dimensional variation in parts manufactured by fused deposition modeling using Gauge Repeatability and Reproducibility

    Science.gov (United States)

    Mohamed, Omar Ahmed; Hasan Masood, Syed; Lal Bhowmik, Jahar

    2018-02-01

    In the additive manufacturing (AM) market, industry and AM users frequently ask how reproducible and repeatable the fused deposition modeling (FDM) process is in providing good dimensional accuracy. This paper aims to investigate and evaluate the repeatability and reproducibility of the FDM process through a systematic approach to answer this frequently asked question. A case study based on the statistical gage repeatability and reproducibility (gage R&R) technique is proposed to investigate the dimensional variations in parts printed by the FDM process. After running the simulation and analysis of the data, the FDM process capability is evaluated, which should help the industry better understand the performance of FDM technology.
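
    A crossed gage R&R study of the kind described partitions measurement variance into repeatability, reproducibility, and part-to-part components via two-way ANOVA; a sketch on synthetic operator × part × trial data (all values hypothetical, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
o, p, r = 3, 5, 2                            # operators, parts, replicate trials
part_dim = rng.normal(20.0, 0.15, p)         # hypothetical FDM part dimensions (mm)
op_bias = rng.normal(0.0, 0.03, o)           # hypothetical operator biases
data = (part_dim[None, :, None] + op_bias[:, None, None]
        + rng.normal(0.0, 0.05, (o, p, r)))  # repeatability noise

grand = data.mean()
cell = data.mean(axis=2)                     # operator x part means
ms_op = p * r * ((data.mean(axis=(1, 2)) - grand) ** 2).sum() / (o - 1)
ms_part = o * r * ((data.mean(axis=(0, 2)) - grand) ** 2).sum() / (p - 1)
ms_int = r * ((cell - data.mean(axis=(1, 2))[:, None]
               - data.mean(axis=(0, 2))[None, :] + grand) ** 2).sum() / ((o - 1) * (p - 1))
ms_rep = ((data - cell[:, :, None]) ** 2).sum() / (o * p * (r - 1))

repeatability = ms_rep
reproducibility = max((ms_op - ms_int) / (p * r), 0.0) + max((ms_int - ms_rep) / r, 0.0)
part_to_part = max((ms_part - ms_int) / (o * r), 0.0)
grr = repeatability + reproducibility
print(f"%GRR = {100.0 * np.sqrt(grr / (grr + part_to_part)):.1f}%")
```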

  16. Reproducibility and relative validity of a brief quantitative food frequency questionnaire for assessing fruit and vegetable intakes in North-African women

    OpenAIRE

    Landais, Edwige; Gartner, Agnès; Bour, A.; McCullough, F.; Delpeuch, Francis; Holdsworth, Michelle

    2014-01-01

    Background: In the context of a rapidly increasing prevalence of noncommunicable diseases, fruit and vegetables could play a key preventive role. To date, there is no rapid assessment tool available for measuring the fruit and vegetable intakes of North-African women. The present study aimed to investigate the reproducibility and relative validity of an eight-item quantitative food frequency questionnaire that measures the fruit and vegetable intakes (FV-FFQ) of Moroccan women. Methods: During a ...

  17. (1)H HR-MAS spectroscopy for quantitative measurement of choline concentration in amniotic fluid as a marker of fetal lung maturity: inter- and intraobserver reproducibility study.

    Science.gov (United States)

    Joe, Bonnie N; Vahidi, Kiarash; Zektzer, Andrew; Chen, Mei-Hsiu; Clifton, Matthew S; Butler, Thomas; Keshari, Kayvan; Kurhanewicz, John; Coakley, Fergus; Swanson, Mark G

    2008-12-01

    To determine the intra- and interobserver reproducibility of human amniotic fluid metabolite concentration measurements (including potential markers of fetal lung maturity) detectable by MR spectroscopy. (1)H high-resolution magic angle spinning (HR-MAS) spectroscopy was performed at 11.7 T on 23 third-trimester amniotic fluid samples. Samples were analyzed quantitatively using 3-(trimethylsilyl)propionic-2,2,3,3-d(4) acid (TSP) as a reference. Four observers independently quantified eight metabolite regions (TSP, lactate doublet and quartet, alanine, citrate, creatinine, choline, and glucose) twice from anonymized, randomized spectra using a semiautomated software program. Excellent inter- and intraobserver reproducibility was found for all metabolites. Intraclass correlation as a measure of interobserver agreement for the four readers ranged from 0.654 to 0.995. A high correlation of 0.973 was seen for choline in particular, a major component of surfactant. Pearson correlation as a measure of intraobserver reproducibility ranged from 0.478 to 0.999. Quantification of choline and other metabolite concentrations in amniotic fluid by high-resolution MR spectroscopy can be performed with high inter- and intraobserver reproducibility. Demonstration of reproducible metabolite concentration measurements is a critical first step in the search for biomarkers of fetal lung maturity.
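
    The intraclass correlation used above can be computed in several ways; the record does not state which variant was used, so here is a sketch of the common two-way random, absolute-agreement, single-measure form ICC(2,1), on hypothetical four-observer choline quantifications:

```python
import numpy as np

def icc_2_1(Y):
    """Two-way random, absolute-agreement, single-measure ICC(2,1).
    Y is an (n subjects) x (k raters) matrix."""
    Y = np.asarray(Y, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    ms_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    resid = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0, keepdims=True) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# hypothetical choline concentrations (mM): 5 samples quantified by 4 observers
Y = [[1.10, 1.12, 1.08, 1.11],
     [0.95, 0.97, 0.93, 0.96],
     [1.30, 1.28, 1.33, 1.31],
     [0.88, 0.90, 0.86, 0.89],
     [1.05, 1.07, 1.02, 1.06]]
print(round(icc_2_1(Y), 3))
```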

  18. Using the mouse to model human disease: increasing validity and reproducibility

    Directory of Open Access Journals (Sweden)

    Monica J. Justice

    2016-02-01

    Experiments that use the mouse as a model for disease have recently come under scrutiny because of the repeated failure of data, particularly derived from preclinical studies, to be replicated or translated to humans. The usefulness of mouse models has been questioned because of irreproducibility and poor recapitulation of human conditions. Newer studies, however, point to bias in reporting results and improper data analysis as key factors that limit reproducibility and validity of preclinical mouse research. Inaccurate and incomplete descriptions of experimental conditions also contribute. Here, we provide guidance on best practice in mouse experimentation, focusing on appropriate selection and validation of the model, sources of variation and their influence on phenotypic outcomes, minimum requirements for control sets, and the importance of rigorous statistics. Our goal is to raise the standards in mouse disease modeling to enhance reproducibility, reliability and clinical translation of findings.

  19. Qualitative and quantitative histopathology in transitional cell carcinomas of the urinary bladder. An international investigation of intra- and interobserver reproducibility

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Sasaki, M; Fukuzawa, S

    1994-01-01

    BACKGROUND: Histopathologic, prognosis-related grading of malignancy by means of morphologic examination in transitional cell carcinomas of the urinary bladder (TCC) may be subject to observer variation, resulting in a reduced level of reproducibility. This may confound comparisons of treatment r...

  20. Quantitative assessment of left ventricular mechanical dyssynchrony using cine cardiovascular magnetic resonance imaging: Inter-study reproducibility

    Directory of Open Access Journals (Sweden)

    Johannes T Kowallick

    2017-05-01

    Objectives: To determine the inter-study reproducibility of left ventricular (LV) mechanical dyssynchrony measures based on standard cardiovascular magnetic resonance (CMR) cine images. Design: Steady-state free precession (SSFP) LV short-axis stacks and three long-axes were acquired on the same day at three time points. Circumferential strain systolic dyssynchrony indexes (SDI), area-SDI, and circumferential and radial uniformity ratio estimates (CURE and RURE, respectively) were derived from CMR myocardial feature-tracking (CMR-FT) based on the tracking of three SSFP short-axis planes. Furthermore, 4D-LV-analysis based on SSFP short-axis stacks and longitudinal planes was performed to quantify 4D-volume-SDI. Setting: A single-centre London teaching hospital. Participants: 16 healthy volunteers. Main outcome measures: Inter-study reproducibility between the repeated exams. Results: CURE and RURE as well as 4D-volume-SDI showed good inter-study reproducibility (coefficient of variation [CoV] 6.4%–12.9%). Circumferential strain and area-SDI showed higher variability between the repeated measurements (CoV 24.9%–37.5%). Uniformity ratio estimates showed the lowest inter-study variability (CoV 6.4%–8.5%). Conclusions: Derivation of LV mechanical dyssynchrony measures from standard cine images is feasible using CMR-FT and 4D-LV-analysis tools. Uniformity ratio estimates and 4D-volume-SDI showed good inter-study reproducibility. Their clinical value should next be explored in patients who potentially benefit from cardiac resynchronization therapy.

  1. Reproducibility of Quantitative Brain Imaging Using a PET-Only and a Combined PET/MR System

    Directory of Open Access Journals (Sweden)

    Martin L. Lassen

    2017-07-01

    The purpose of this study was to test the feasibility of migrating a quantitative brain imaging protocol from a positron emission tomography (PET)-only system to an integrated PET/MR system. Potential differences in both absolute radiotracer concentration as well as in the derived kinetic parameters as a function of PET system choice have been investigated. Five healthy volunteers underwent dynamic (R)-[11C]verapamil imaging on the same day using a GE-Advance (PET-only) and a Siemens Biograph mMR (PET/MR) system. PET emission data were reconstructed using a transmission-based attenuation correction (AC) map (PET-only), whereas a standard MR-DIXON as well as a low-dose CT AC map was applied to PET/MR emission data. Kinetic modeling based on arterial blood sampling was performed using a 1-tissue-2-rate-constant compartment model, yielding kinetic parameters (K1 and k2) and distribution volume (VT). Differences between parametric values obtained on the PET-only and the PET/MR systems were analyzed using a two-way analysis of variance (ANOVA). Comparison of DIXON-based AC (PET/MR) with emission data derived from the PET-only system revealed average inter-system differences of −33 ± 14% (p < 0.05) for the K1 parameter and −19 ± 9% (p < 0.05) for k2. Using a CT-based AC for PET/MR resulted in slightly lower systematic differences of −16 ± 18% for K1 and −9 ± 10% for k2. The average differences in VT were −18 ± 10% (p < 0.05) for DIXON- and −8 ± 13% for CT-based AC. Significant systematic differences were observed for kinetic parameters derived from emission data obtained from PET/MR and PET-only imaging due to the different standard AC methods employed. Therefore, a transfer of imaging protocols from PET-only to PET/MR systems is not straightforward without application of proper correction methods. Clinical Trial Registration: www.clinicaltrialsregister.eu, identifier 2013-001724-19
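
    The 1-tissue-2-rate-constant model referenced above is the ODE dCt/dt = K1·Cp(t) − k2·Ct(t), with distribution volume VT = K1/k2; a forward-Euler sketch with an entirely hypothetical input function and rate constants:

```python
import numpy as np

def one_tissue_model(t, cp, K1, k2):
    """Forward-Euler solution of dCt/dt = K1*Cp(t) - k2*Ct(t)."""
    ct = np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        ct[i] = ct[i - 1] + dt * (K1 * cp[i - 1] - k2 * ct[i - 1])
    return ct

t = np.linspace(0.0, 60.0, 601)        # minutes
cp = 10.0 * np.exp(-0.2 * t)           # hypothetical arterial input function
K1, k2 = 0.05, 0.025                   # illustrative rate constants
ct = one_tissue_model(t, cp, K1, k2)
print(f"VT = K1/k2 = {K1 / k2:.1f}")   # distribution volume
```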

  2. Quantitative structure - mesothelioma potency model ...

    Science.gov (United States)

    Cancer potencies of mineral and synthetic elongated particle (EP) mixtures, including asbestos fibers, are influenced by changes in fiber dose composition, bioavailability, and biodurability in combination with relevant cytotoxic dose-response relationships. A unique and comprehensive rat intra-pleural (IP) dose characterization data set with a wide variety of EP size, shape, crystallographic, chemical, and bio-durability properties facilitated extensive statistical analyses of 50 rat IP exposure test results for evaluation of alternative dose pleural mesothelioma response models. Utilizing logistic regression, maximum likelihood evaluations of thousands of alternative dose metrics based on hundreds of individual EP dimensional variations within each test sample, four major findings emerged: (1) data for simulations of short-term EP dose changes in vivo (mild acid leaching) provide superior predictions of tumor incidence compared to non-acid-leached data; (2) the sum of the EP surface areas (ΣSA) from these mildly acid-leached samples provides the optimum holistic dose-response model; (3) progressive removal of dose associated with very short and/or thin EPs significantly degrades resultant ΣEP or ΣSA dose-based predictive model fits, as judged by Akaike's Information Criterion (AIC); and (4) alternative, biologically plausible model adjustments provide evidence for reduced potency of EPs with length/width (aspect) ratios 80 µm. Regar
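
    The analysis fits logistic dose-response models and ranks candidate dose metrics by AIC; a sketch of that workflow on synthetic data, using a hypothetical log summed-surface-area metric (not the study's actual data or descriptors):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
log_ssa = rng.uniform(0.0, 3.0, 50)    # hypothetical log10 summed EP surface area
p_true = 1.0 / (1.0 + np.exp(-(-4.0 + 2.5 * log_ssa)))
tumour = rng.binomial(1, p_true)       # simulated mesothelioma outcome per test

result = sm.Logit(tumour, sm.add_constant(log_ssa)).fit(disp=False)
print(result.params, f"AIC = {result.aic:.1f}")
# refitting with each candidate dose metric and comparing AICs ranks the metrics
```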

  3. Reproducibility of the coil positioning in Nb$_3$Sn magnet models through magnetic measurements

    CERN Document Server

    Borgnolutti, F; Ferracin, P; Kashikhin, V V; Sabbi, G; Velev, G; Todesco, E; Zlobin, A V

    2009-01-01

    The random part of the integral field harmonics in a series of superconducting magnets has been used in the past to identify the reproducibility of the coil positioning. Using a magnetic model and a Monte Carlo approach, coil blocks are randomly moved and the amplitude that best fits the magnetic measurements is interpreted as the reproducibility of the coil position. Previous values for r.m.s. coil displacements for Nb-Ti magnets range from 0.05 to 0.01 mm. In this paper, we use this approach to estimate the reproducibility of the coil position for Nb3Sn short models that have been built in the framework of the FNAL core program (HFDA dipoles) and of the LARP program (TQ quadrupoles). Our analysis shows that the Nb3Sn models manufactured in the past years correspond to r.m.s. coil displacements of at least 5 times what is found for the series production of a mature Nb-Ti technology. On the other hand, the variability of the field harmonics along the magnet axis shows that Nb3Sn magnets have already reached va...

  4. Quantitative analysis of relationships between irradiation parameters and the reproducibility of cyclotron-produced 99mTc yields

    Science.gov (United States)

    Tanguay, J.; Hou, X.; Buckley, K.; Schaffer, P.; Bénard, F.; Ruth, T. J.; Celler, A.

    2015-05-01

    Cyclotron production of 99mTc through the 100Mo(p,2n) 99mTc reaction channel is actively being investigated as an alternative to reactor-based 99Mo generation by nuclear fission of 235U. An exciting aspect of this approach is that it can be implemented using currently-existing cyclotron infrastructure to supplement, or potentially replace, conventional 99mTc production methods that are based on aging and increasingly unreliable nuclear reactors. Successful implementation will require consistent production of large quantities of high-radionuclidic-purity 99mTc. However, variations in proton beam currents and the thickness and isotopic composition of enriched 100Mo targets, in addition to other irradiation parameters, may degrade reproducibility of both radionuclidic purity and absolute 99mTc yields. The purpose of this article is to present a method for quantifying relationships between random variations in production parameters, including 100Mo target thicknesses and proton beam currents, and reproducibility of absolute 99mTc yields (defined as the end of bombardment (EOB) 99mTc activity). Using the concepts of linear error propagation and the theory of stochastic point processes, we derive a mathematical expression that quantifies the influence of variations in various irradiation parameters on yield reproducibility, quantified in terms of the coefficient of variation of the EOB 99mTc activity. The utility of the developed formalism is demonstrated with an example. We show that achieving less than 20% variability in 99mTc yields will require highly-reproducible target thicknesses and proton currents. These results are related to the service rate which is defined as the percentage of 99mTc production runs that meet the minimum daily requirement of one (or many) nuclear medicine departments. For example, we show that achieving service rates of 84.0%, 97.5% and 99.9% with 20% variations in target thicknesses requires producing on average 1.2, 1.5 and 1.9 times the
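
    The paper derives a full expression for the yield CV; as a drastically simplified illustration of the linear error propagation idea, if EOB activity is roughly proportional to beam current times target thickness (a thin-target assumption, not the paper's formalism), independent fractional variations add in quadrature:

```python
import math

def yield_cv(cv_beam_current, cv_target_thickness):
    """First-order propagation for A ∝ I * d (thin-target simplification):
    independent fractional variations add in quadrature."""
    return math.sqrt(cv_beam_current ** 2 + cv_target_thickness ** 2)

# 5% beam-current and 20% thickness variability (illustrative numbers)
print(f"EOB activity CV ≈ {100 * yield_cv(0.05, 0.20):.1f}%")  # ≈ 20.6%
```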

  5. Validation of EURO-CORDEX regional climate models in reproducing the variability of precipitation extremes in Romania

    Science.gov (United States)

    Dumitrescu, Alexandru; Busuioc, Aristita

    2016-04-01

    EURO-CORDEX is the European branch of the international CORDEX initiative that aims to provide improved regional climate change projections for Europe. The main objective of this paper is to document the performance of the individual models in reproducing the variability of precipitation extremes in Romania. Here an ensemble of three EURO-CORDEX regional climate models (RCMs) (scenario RCP4.5) is analysed and inter-compared: DMI-HIRHAM5, KNMI-RACMO2.2 and MPI-REMO. Compared to previous studies, in which RCM validation for the Romanian climate was mainly based on the mean state at station scale, a more quantitative approach to precipitation extremes is proposed. In this respect, to allow a more reliable comparison with observations, a high-resolution daily precipitation gridded data set was used as the observational reference (CLIMHYDEX project). The comparison between the RCM outputs and observed grid-point values was made by calculating three precipitation extremes indices, recommended by the Expert Team on Climate Change Detection Indices (ETCCDI), for the 1976-2005 period: R10MM, the annual count of days with precipitation ≥ 10 mm; RX5DAY, the annual maximum 5-day precipitation; and R95P, the fraction of annual total precipitation due to daily precipitation above the 95th percentile. The RCMs' capability to reproduce the mean state of these variables, as well as the main modes of their spatial variability (given by the first three EOF patterns), is analysed. The investigation confirms the ability of RCMs to simulate the main features of precipitation extreme variability over Romania, but some deficiencies in reproducing their regional characteristics were found (for example, overestimation of the mean state, especially over the extra-Carpathian regions). This work has been realised within the research project "Changes in climate extremes and associated impact in hydrological events in Romania" (CLIMHYDEX), code PN II-ID-2011-2-0073, financed by the Romanian
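
    A sketch of the three ETCCDI indices for one year of daily precipitation; note that the official R95P threshold is the 95th percentile of wet days over a fixed base period, whereas this toy version computes it from the same series:

```python
import numpy as np

def etccdi_indices(daily_pr_mm):
    """R10MM, RX5DAY and R95P for one year of daily precipitation (mm)."""
    pr = np.asarray(daily_pr_mm, dtype=float)
    r10mm = int((pr >= 10.0).sum())                               # heavy-precip days
    rx5day = max(pr[i:i + 5].sum() for i in range(len(pr) - 4))   # wettest pentad
    wet = pr[pr >= 1.0]                                           # wet days only
    r95p = 100.0 * wet[wet > np.percentile(wet, 95)].sum() / pr.sum()
    return r10mm, rx5day, r95p

rng = np.random.default_rng(7)
pr = rng.gamma(shape=0.4, scale=6.0, size=365)   # synthetic daily series, mm
print(etccdi_indices(pr))
```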

  6. Compositional and Quantitative Model Checking

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand

    2010-01-01

    on the existence of a quotient construction, allowing a property φ of a parallel system A||B to be transformed into a sufficient and necessary quotient-property φ/A to be satisfied by the component B. Given a model checking problem involving a network P1 || ... || Pn and a property φ, the method gradually moves (by...... quotienting) components Pi from the network into the formula φ. Crucial to the success of the method is the ability to manage the size of the intermediate quotient-properties by a suitable collection of efficient minimization heuristics....

  7. Reproducibility of Quantitative Brain Imaging Using a PET-Only and a Combined PET/MR System

    DEFF Research Database (Denmark)

    Lassen, Martin L; Muzik, Otto; Beyer, Thomas

    2017-01-01

    The purpose of this study was to test the feasibility of migrating a quantitative brain imaging protocol from a positron emission tomography (PET)-only system to an integrated PET/MR system. Potential differences in both absolute radiotracer concentration as well as in the derived kinetic...

  8. Cellular automaton model in the fundamental diagram approach reproducing the synchronized outflow of wide moving jams

    International Nuclear Information System (INIS)

    Tian, Jun-fang; Yuan, Zhen-zhou; Jia, Bin; Fan, Hong-qiang; Wang, Tao

    2012-01-01

    Velocity effect and critical velocity are incorporated into the average space gap cellular automaton model [J.F. Tian, et al., Phys. A 391 (2012) 3129], which was able to reproduce many spatiotemporal dynamics reported by the three-phase theory except the synchronized outflow of wide moving jams. The physics of traffic breakdown has been explained. Various congested patterns induced by the on-ramp are reproduced. It is shown that the occurrence of synchronized outflow and of free outflow of wide moving jams is closely related to drivers' time delay in acceleration at the downstream jam front and to the critical velocity, respectively. -- Highlights: ► Velocity effect is added into the average space gap cellular automaton model. ► The physics of traffic breakdown has been explained. ► The probabilistic nature of traffic breakdown is simulated. ► Various congested patterns induced by the on-ramp are reproduced. ► The occurrence of synchronized outflow of jams depends on drivers' time delay.

  9. Acute multi-sgRNA knockdown of KEOPS complex genes reproduces the microcephaly phenotype of the stable knockout zebrafish model.

    Directory of Open Access Journals (Sweden)

    Tilman Jobst-Schwan

    Until recently, morpholino oligonucleotides were widely employed in zebrafish as an acute and efficient loss-of-function assay. However, off-target effects and reproducibility issues when compared to stable knockout lines have compromised their further use. Here we employed an acute CRISPR/Cas approach using multiple single guide RNAs simultaneously targeting different positions in two exemplar genes (osgep or tprkb) to increase the likelihood of generating mutations on both alleles in the injected F0 generation and to achieve an effect similar to that of morpholinos but with the reproducibility of stable lines. This multi-sgRNA approach resulted in median likelihoods for at least one mutation on each allele of >99% and sgRNA-specific insertion/deletion profiles as revealed by deep sequencing. Immunoblot showed a significant reduction of the Osgep and Tprkb proteins. For both genes, the acute multi-sgRNA knockout recapitulated the microcephaly phenotype and reduction in survival that we observed previously in stable knockout lines, though the effects were milder in the acute multi-sgRNA knockout. Finally, we quantify the degree of mutagenesis by deep sequencing, and provide a mathematical model to quantitate the chance of a biallelic loss-of-function mutation. Our findings can be generalized to acute and stable CRISPR/Cas targeting for any zebrafish gene of interest.
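
    The paper's own mathematical model is not reproduced in the record; under the simple assumption that each sgRNA i independently mutates a given allele with probability p_i, the chance of at least one mutation on each of the two alleles is (1 − Π(1 − p_i))², sketched below with hypothetical rates:

```python
from math import prod  # Python >= 3.8

def biallelic_probability(per_guide_rates):
    """P(mutation on both alleles), assuming each sgRNA i independently
    mutates a given allele with probability p_i (illustrative model only)."""
    p_one_allele = 1.0 - prod(1.0 - p for p in per_guide_rates)
    return p_one_allele ** 2

# hypothetical per-allele editing rates for four sgRNAs against one gene
print(round(biallelic_probability([0.6, 0.5, 0.7, 0.4]), 4))  # ~0.93
```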

  10. NRFixer: Sentiment Based Model for Predicting the Fixability of Non-Reproducible Bugs

    Directory of Open Access Journals (Sweden)

    Anjali Goyal

    2017-08-01

    Software maintenance is an essential step in the software development life cycle. Nowadays, software companies spend approximately 45% of total cost on maintenance activities. Large software projects maintain bug repositories to collect, organize and resolve bug reports. Sometimes it is difficult to reproduce the reported bug with the information present in a bug report, and such a bug is marked with resolution non-reproducible (NR). When NR bugs are reconsidered, a few of them might get fixed (NR-to-fix), leaving the others with the same resolution (NR). To analyse the behaviour of developers towards NR-to-fix and NR bugs, a sentiment analysis of NR bug report textual contents has been conducted. The sentiment analysis of bug reports shows that NR bugs' sentiments incline towards more negativity than those of reproducible bugs. Also, there is a noticeable opinion drift in the sentiments of NR-to-fix bug reports. Observations drawn from this analysis were an inspiration to develop a model that can judge the fixability of NR bugs. Thus a framework, NRFixer, which predicts the probability of NR bug fixation, is proposed. NRFixer was evaluated in two dimensions: the first considers meta-fields of bug reports (model 1), and the other additionally incorporates the sentiments of developers (model 2) for prediction. Both models were compared using various machine learning classifiers (Zero-R, naive Bayes, J48, random tree and random forest). The bug reports of the Firefox and Eclipse projects were used to test NRFixer. In the Firefox and Eclipse projects, the J48 and naive Bayes classifiers achieved the best prediction accuracy, respectively. It was observed that the inclusion of sentiments in the prediction model raises prediction accuracy by 2 to 5% for the various classifiers.

  11. Reproducing Sea-Ice Deformation Distributions With Viscous-Plastic Sea-Ice Models

    Science.gov (United States)

    Bouchat, A.; Tremblay, B.

    2016-02-01

    High-resolution sea-ice dynamic models offer the potential to discriminate between sea-ice rheologies based on their ability to reproduce the satellite-derived deformation fields. Recent studies have shown that sea-ice viscous-plastic (VP) models do not reproduce the observed statistical properties of the strain-rate distributions of the RADARSAT Geophysical Processor System (RGPS) deformation fields [1][2]. We use the elliptical VP rheology and we compute the probability density functions (PDFs) for simulated strain-rate invariants (divergence and maximum shear) and compare against the deformations obtained with the 3-day gridded products from RGPS. We find that the large shear deformations are well reproduced by the elliptical VP model and that the deformations do not follow a Gaussian distribution as reported in Girard et al. [1][2]. On the other hand, we do find an overestimation of the shear in the range of mid-magnitude deformations in all of our VP simulations tested with different spatial resolutions and numerical parameters. Runs with no internal stress (free drift) or with constant viscosity coefficients (Newtonian fluid) also show this overestimation. We trace this discrepancy back to the elliptical yield curve aspect ratio (e = 2) having too little shear strength, hence not resisting enough the inherent shear in the wind forcing associated with synoptic weather systems. Experiments where we simply increase the shear resistance of the ice by modifying the ellipse ratio confirm the need for a rheology with an increased shear strength. [1] Girard et al. (2009), Evaluation of high-resolution sea ice models [...], Journal of Geophysical Research, 114. [2] Girard et al. (2011), A new modeling framework for sea-ice mechanics [...], Annals of Glaciology, 57, 123-132.
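
    The strain-rate invariants compared against RGPS are the divergence and maximum shear rate of the horizontal velocity field; a sketch computing both on a synthetic grid (random velocities, with an RGPS-like 12.5 km spacing assumed purely for illustration):

```python
import numpy as np

def strain_rate_invariants(u, v, dx):
    """Divergence and maximum shear rate from gridded ice velocities (m/s)."""
    u_x = np.gradient(u, dx, axis=1); u_y = np.gradient(u, dx, axis=0)
    v_x = np.gradient(v, dx, axis=1); v_y = np.gradient(v, dx, axis=0)
    divergence = u_x + v_y
    max_shear = 0.5 * np.sqrt((u_x - v_y) ** 2 + (u_y + v_x) ** 2)
    return divergence, max_shear

rng = np.random.default_rng(3)
u = rng.normal(0.0, 0.1, (50, 50))      # synthetic zonal velocity field
v = rng.normal(0.0, 0.1, (50, 50))      # synthetic meridional velocity field
div, shear = strain_rate_invariants(u, v, 12500.0)   # 12.5 km grid assumed
# histograms/PDFs of div and shear would then be compared against RGPS products
print(float(div.std()), float(shear.mean()))
```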

  12. Evaluation of fecal mRNA reproducibility via a marginal transformed mixture modeling approach

    Directory of Open Access Journals (Sweden)

    Davidson Laurie A

    2010-01-01

    Background: Developing and evaluating new technology that enables researchers to recover gene-expression levels of colonic cells from fecal samples could be key to a non-invasive screening tool for early detection of colon cancer. The current study, to the best of our knowledge, is the first to investigate and report the reproducibility of fecal microarray data. Using the intraclass correlation coefficient (ICC) as a measure of reproducibility, together with a preliminary analysis of fecal and mucosal data, we assessed the reliability of mixture density estimation and the reproducibility of fecal microarray data. Using Monte Carlo-based methods, we explored whether ICC values should be modeled as a beta-mixture or transformed first and fitted with a normal-mixture. We used outcomes from bootstrapped goodness-of-fit tests to determine which approach is less sensitive toward potential violation of distributional assumptions. Results: The graphical examination of the distributions of both ICC and probit-transformed ICC (PT-ICC) clearly shows that there are two components in the distributions. For ICC measurements, which lie between 0 and 1, the practice in the literature has been to assume that the data points come from a beta-mixture distribution. Nevertheless, in our study we show that the use of a normal-mixture modeling approach on PT-ICC can provide superior performance. Conclusions: When modeling ICC values of gene expression levels, using a mixture of normals on the probit-transformed (PT) scale is less sensitive toward model mis-specification than using a mixture of betas. We show that a biased conclusion could be made if we follow the traditional approach and model the two sets of ICC values using a mixture of betas directly. The problematic estimation arises from the sensitivity of beta-mixtures toward model mis-specification, particularly when there are observations in the neighborhood of the boundary points, 0 or 1. Since beta-mixture modeling
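
    A sketch of the probit-transform-then-normal-mixture approach favoured above, on simulated ICC values; the two beta components and all parameters are invented for illustration:

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
# simulated per-gene ICCs: a low- and a high-reproducibility component (invented)
icc = np.concatenate([rng.beta(2, 8, 300), rng.beta(8, 2, 300)])
icc = np.clip(icc, 1e-4, 1 - 1e-4)     # keep away from the 0/1 boundary

pt_icc = norm.ppf(icc)                 # probit transform onto the real line
gm = GaussianMixture(n_components=2, random_state=0).fit(pt_icc.reshape(-1, 1))
print(gm.means_.ravel(), gm.weights_)  # two components recovered on the PT scale
```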

  13. Reproducing the optical properties of fine desert dust aerosols using ensembles of simple model particles

    International Nuclear Information System (INIS)

    Kahnert, Michael

    2004-01-01

    Single scattering optical properties are calculated for a proxy of fine dust aerosols at a wavelength of 0.55 μm. Spherical and spheroidal model particles are employed to fit the aerosol optical properties and to retrieve information about the physical parameters characterising the aerosols. It is found that spherical particles are capable of reproducing the scalar optical properties and the forward peak of the phase function of the dust aerosols. The effective size parameter of the aerosol ensemble is retrieved with high accuracy by using spherical model particles. Significant improvements are achieved by using spheroidal model particles. The aerosol phase function and the other diagonal elements of the Stokes scattering matrix can be fitted with high accuracy, whereas the off-diagonal elements are poorly reproduced. More elongated prolate and more flattened oblate spheroids contribute disproportionately strongly to the optimised shape distribution of the model particles and appear to be particularly useful for achieving a good fit of the scattering matrix. However, the clear discrepancies between the shape distribution of the aerosols and the shape distribution of the spheroidal model particles suggest that the possibilities of extracting shape information from optical observations are rather limited

  14. QSAR model reproducibility and applicability: a case study of rate constants of hydroxyl radical reaction models applied to polybrominated diphenyl ethers and (benzo-)triazoles.

    Science.gov (United States)

    Roy, Partha Pratim; Kovarich, Simona; Gramatica, Paola

    2011-08-01

The crucial importance of the three central OECD principles for quantitative structure-activity relationship (QSAR) model validation is highlighted in a case study of tropospheric degradation of volatile organic compounds (VOCs) by OH, applied to two CADASTER chemical classes (PBDEs and (benzo-)triazoles). The application of any QSAR model to chemicals without experimental data largely depends on model reproducibility by the user. The reproducibility of an unambiguous algorithm (OECD Principle 2) is guaranteed by redeveloping MLR models based on both an updated version of the DRAGON software for molecular descriptor calculation and some freely available online descriptors. The Genetic Algorithm has confirmed its ability to always select the most informative descriptors independently of the input pool of variables. The ability of the GA-selected descriptors to model chemicals not used in model development is verified by three different splittings (random by response, K-ANN and K-means clustering), thus ensuring the external predictivity of the new models, independently of the training/prediction set composition (OECD Principle 5). The relevance of checking the structural applicability domain becomes very evident on comparing the predictions for CADASTER chemicals, using the new models proposed herein, with those obtained by EPI Suite. Copyright © 2011 Wiley Periodicals, Inc.
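
The workflow described here (descriptor selection plus externally validated MLR) can be sketched as follows; the genetic algorithm is replaced by a simple random-subset search, and the descriptor matrix, response and subset size are all invented for illustration.

```python
# Sketch: externally validated MLR with descriptor-subset selection, in the
# spirit of the GA-MLR workflow above. A full genetic algorithm is replaced by
# a random-subset search; X and y are synthetic stand-ins for descriptors/activity.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 30))               # 30 candidate molecular descriptors
y = X[:, 3] - 2 * X[:, 7] + 0.5 * X[:, 11] + rng.normal(scale=0.3, size=100)

# Training/prediction-set split stands in for OECD Principle 5 external validation.
X_tr, X_ext, y_tr, y_ext = train_test_split(X, y, test_size=0.3, random_state=1)

best_subset, best_q2 = None, -np.inf
for _ in range(2000):                        # random search over 3-descriptor subsets
    subset = rng.choice(X.shape[1], size=3, replace=False)
    q2 = cross_val_score(LinearRegression(), X_tr[:, subset], y_tr, cv=5).mean()
    if q2 > best_q2:
        best_subset, best_q2 = subset, q2

ext_r2 = LinearRegression().fit(X_tr[:, best_subset], y_tr).score(X_ext[:, best_subset], y_ext)
print("selected descriptors:", sorted(best_subset),
      "Q2(cv):", round(best_q2, 3), "external R2:", round(ext_r2, 3))
```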

  15. Quantitative interface models for simulating microstructure evolution

    International Nuclear Information System (INIS)

    Zhu, J.Z.; Wang, T.; Zhou, S.H.; Liu, Z.K.; Chen, L.Q.

    2004-01-01

To quantitatively simulate microstructural evolution in real systems, we investigated three different interface models: a sharp-interface model implemented by the software DICTRA and two diffuse-interface models which use either physical order parameters or artificial order parameters. A particular example is considered, the diffusion-controlled growth of a γ′ precipitate in a supersaturated γ matrix in Ni-Al binary alloys. All three models use the thermodynamic and kinetic parameters from the same databases. The temporal evolution profiles of composition from different models are shown to agree with each other. The focus is on examining the advantages and disadvantages of each model as applied to microstructure evolution in alloys.

  16. Computed Tomography of the Human Pineal Gland for Study of the Sleep-Wake Rhythm: Reproducibility of a Semi-Quantitative Approach

    Energy Technology Data Exchange (ETDEWEB)

    Schmitz, S.A.; Platzek, I.; Kunz, D.; Mahlberg, R.; Wolf, K.J.; Heidenreich, J.O. [Charite - Universitaetsmedizin Berlin, Campus Benjamin Franklin, Berlin (Germany). Dept. of Radiology and Nuclear Medicine

    2006-10-15

Purpose: To propose a semi-quantitative computed tomography (CT) protocol for determining uncalcified pineal tissue (UCPT), and to evaluate its reproducibility, in modification of studies showing that the degree of calcification is a potential marker of deficient melatonin production and may prove an instability marker of circadian rhythm. Material and Methods: Twenty-two pineal gland autopsy specimens were scanned twice in a skull phantom at different slice thicknesses, and the uncalcified tissue was visually assessed using a four-point scale. The maximum gland density was measured and its inverse graded on a non-linear four-point scale. The sum of both scores was multiplied by the gland volume to yield the UCPT. The within-subject variance of UCPT was determined and compared between scans of different slice thickness. Results: The UCPT of the first measurement, in arbitrary units, was 39±52.5 for 1 mm slice thickness, 44±51.1 for 2 mm, 45±34.8 for 4 mm, and 84±58.0 for 8 mm. Significant differences in the within-subject variance of UCPT were found between 1 and 4 mm, 1 and 8 mm, and 2 and 8 mm slice thicknesses (P < 0.05). Conclusion: Superior reproducibility of the semi-quantitative CT determination of UCPT was found using 1 and 2 mm slice thicknesses. These data support the use of thin slices of 1 and 2 mm. The benefit in reproducibility from thin slices has to be carefully weighed against their considerably higher radiation exposure.
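
The UCPT arithmetic described above is simple enough to sketch directly; scores and volumes below are invented, and the within-subject variance is computed from paired scans as mean(d^2)/2.

```python
# Sketch of the semi-quantitative UCPT arithmetic:
# UCPT = (visual score + inverse-density score) * gland volume, with the
# within-subject variance compared across repeated scans. Values are invented.
import numpy as np

def ucpt(visual_score, inv_density_score, volume_ml):
    """Uncalcified pineal tissue, in arbitrary units."""
    return (visual_score + inv_density_score) * volume_ml

# Two repeated scans of the same specimens (hypothetical four-point scores).
scan1 = np.array([ucpt(3, 2, 0.9), ucpt(1, 1, 0.5), ucpt(4, 3, 1.2)])
scan2 = np.array([ucpt(3, 3, 0.9), ucpt(1, 1, 0.5), ucpt(4, 2, 1.2)])

# Within-subject variance from duplicate measurements: var = mean(d^2) / 2.
within_subject_var = np.mean((scan1 - scan2) ** 2) / 2
print("within-subject variance of UCPT:", within_subject_var)
```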

  17. Computed Tomography of the Human Pineal Gland for Study of the Sleep-Wake Rhythm: Reproducibility of a Semi-Quantitative Approach

    International Nuclear Information System (INIS)

    Schmitz, S.A.; Platzek, I.; Kunz, D.; Mahlberg, R.; Wolf, K.J.; Heidenreich, J.O.

    2006-01-01

Purpose: To propose a semi-quantitative computed tomography (CT) protocol for determining uncalcified pineal tissue (UCPT), and to evaluate its reproducibility, in modification of studies showing that the degree of calcification is a potential marker of deficient melatonin production and may prove an instability marker of circadian rhythm. Material and Methods: Twenty-two pineal gland autopsy specimens were scanned twice in a skull phantom at different slice thicknesses, and the uncalcified tissue was visually assessed using a four-point scale. The maximum gland density was measured and its inverse graded on a non-linear four-point scale. The sum of both scores was multiplied by the gland volume to yield the UCPT. The within-subject variance of UCPT was determined and compared between scans of different slice thickness. Results: The UCPT of the first measurement, in arbitrary units, was 39±52.5 for 1 mm slice thickness, 44±51.1 for 2 mm, 45±34.8 for 4 mm, and 84±58.0 for 8 mm. Significant differences in the within-subject variance of UCPT were found between 1 and 4 mm, 1 and 8 mm, and 2 and 8 mm slice thicknesses (P < 0.05). Conclusion: Superior reproducibility of the semi-quantitative CT determination of UCPT was found using 1 and 2 mm slice thicknesses. These data support the use of thin slices of 1 and 2 mm. The benefit in reproducibility from thin slices has to be carefully weighed against their considerably higher radiation exposure.

  18. Quantitative system validation in model driven design

    DEFF Research Database (Denmark)

    Hermanns, Hilger; Larsen, Kim Guldstrand; Raskin, Jean-Francois

    2010-01-01

The European STREP project Quasimodo develops theory, techniques and tool components for handling quantitative constraints in model-driven development of real-time embedded systems, covering in particular real-time, hybrid and stochastic aspects. This tutorial highlights the advances made...

  19. A novel highly reproducible and lethal nonhuman primate model for orthopox virus infection.

    Directory of Open Access Journals (Sweden)

    Marit Kramski

Full Text Available The intentional re-introduction of Variola virus (VARV), the agent of smallpox, into the human population is of great concern due to its bio-terroristic potential. Moreover, zoonotic infections with Cowpox (CPXV) and Monkeypox virus (MPXV) cause severe diseases in humans. Smallpox vaccines presently available can have severe adverse effects that are no longer acceptable. The efficacy and safety of new vaccines and antiviral drugs for use in humans can only be demonstrated in animal models. The existing nonhuman primate models, using VARV and MPXV, need very high viral doses that have to be applied intravenously or intratracheally to induce a lethal infection in macaques. To overcome these drawbacks, the infectivity and pathogenicity of a particular CPXV was evaluated in the common marmoset (Callithrix jacchus). A CPXV named calpox virus was isolated from a lethal orthopox virus (OPV) outbreak in New World monkeys. We demonstrated that marmosets infected with calpox virus, not only via the intravenous but also the intranasal route, reproducibly develop symptoms resembling smallpox in humans. Infected animals died within 1-3 days after onset of symptoms, even when very low infectious viral doses of 5x10^2 pfu were applied intranasally. Infectious virus was demonstrated in blood, saliva and all organs analyzed. We present the first characterization of a new OPV infection model inducing a disease in common marmosets comparable to smallpox in humans. Intranasal virus inoculation mimicking the natural route of smallpox infection led to reproducible infection. In vivo titration resulted in an MID50 (minimal monkey infectious dose, 50%) of 8.3x10^2 pfu of calpox virus, which is approximately 10,000-fold lower than the MPXV and VARV doses applied in the macaque models. Therefore, the calpox virus/marmoset model is a suitable nonhuman primate model for the validation of vaccines and antiviral drugs. Furthermore, this model can help study mechanisms of OPV pathogenesis.
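
The abstract reports an MID50 from in vivo titration without specifying the calculation; the sketch below uses the classical Reed-Muench method as a plausible stand-in, with invented dose groups and outcomes.

```python
# Sketch: a 50% infectious dose from titration data via the Reed-Muench method.
# Doses and outcomes are invented, not the study's data.
import numpy as np

doses = np.array([1e1, 1e2, 1e3, 1e4])      # pfu, ascending
infected = np.array([0, 2, 5, 6])           # animals infected per dose group
total = np.array([6, 6, 6, 6])

# An animal infected at a low dose would also be infected at higher doses, so
# infected counts accumulate upward in dose; non-infected accumulate downward.
cum_inf = np.cumsum(infected)
cum_noninf = np.cumsum((total - infected)[::-1])[::-1]
pct = cum_inf / (cum_inf + cum_noninf)      # cumulative proportion infected

i = np.where(pct < 0.5)[0][-1]              # last dose with < 50% infected
prop = (0.5 - pct[i]) / (pct[i + 1] - pct[i])
log_mid50 = np.log10(doses[i]) + prop * (np.log10(doses[i + 1]) - np.log10(doses[i]))
print(f"MID50 ~ {10 ** log_mid50:.0f} pfu")
```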

  20. Recent trends in social systems quantitative theories and quantitative models

    CERN Document Server

    Hošková-Mayerová, Šárka; Soitu, Daniela-Tatiana; Kacprzyk, Janusz

    2017-01-01

    The papers collected in this volume focus on new perspectives on individuals, society, and science, specifically in the field of socio-economic systems. The book is the result of a scientific collaboration among experts from “Alexandru Ioan Cuza” University of Iaşi (Romania), “G. d’Annunzio” University of Chieti-Pescara (Italy), "University of Defence" of Brno (Czech Republic), and "Pablo de Olavide" University of Sevilla (Spain). The heterogeneity of the contributions presented in this volume reflects the variety and complexity of social phenomena. The book is divided in four Sections as follows. The first Section deals with recent trends in social decisions. Specifically, it aims to understand which are the driving forces of social decisions. The second Section focuses on the social and public sphere. Indeed, it is oriented on recent developments in social systems and control. Trends in quantitative theories and models are described in Section 3, where many new formal, mathematical-statistical to...

  1. Reproducing tailing in breakthrough curves: Are statistical models equally representative and predictive?

    Science.gov (United States)

    Pedretti, Daniele; Bianchi, Marco

    2018-03-01

Breakthrough curves (BTCs) observed during tracer tests in highly heterogeneous aquifers display strong tailing. Power laws are popular models for both the empirical fitting of these curves and the prediction of transport using upscaling models based on best-fitted estimated parameters (e.g. the power law slope or exponent). The predictive capacity of power law based upscaling models can however be questioned due to the difficulty of linking model parameters to the aquifers' physical properties. This work analyzes two aspects that can limit the use of power laws as effective predictive tools: (a) the implication of statistical subsampling, which often renders power laws undistinguishable from other heavily tailed distributions, such as the logarithmic (LOG); (b) the difficulty of reconciling fitting parameters obtained from models with different formulations, such as the presence of a late-time cutoff in the power law model. Two rigorous and systematic stochastic analyses, one based on benchmark distributions and the other on BTCs obtained from transport simulations, are considered. It is found that a power law model without cutoff (PL) results in best-fitted exponents (αPL) falling in the range of typical experimental values reported in the literature (αPL > 1.5), whereas the power law model with cutoff (PLCO) yields an approximately constant αCO ≈ 1. In the PLCO model, the cutoff rate (λ) is the parameter that fully reproduces the persistence of the tailing and is shown to be inversely correlated to the LOG scale parameter (i.e. with the skewness of the distribution). The theoretical results are consistent with the fitting analysis of a tracer test performed during the MADE-5 experiment. It is shown that a simple mechanistic upscaling model based on the PLCO formulation is able to predict the ensemble of BTCs from the stochastic transport simulations without the need of any fitted parameters. The model embeds the constant αCO = 1 and relies on a stratified description of the transport mechanisms to estimate λ. The PL fails to
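
A minimal sketch of the PL-versus-PLCO comparison discussed here, fitting both models to the log of a synthetic late-time tail with scipy; the notation (alpha_PL, alpha_CO, lambda) follows the abstract, but the true parameters and noise level are invented.

```python
# Sketch: fit a pure power law (PL) and a power law with exponential cutoff
# (PLCO) to a synthetic BTC tail, then compare the fitted slopes.
import numpy as np
from scipy.optimize import curve_fit

def log_pl(t, ln_a, alpha):                  # log of a * t^-alpha
    return ln_a - alpha * np.log(t)

def log_plco(t, ln_a, alpha, lam):           # log of a * t^-alpha * exp(-lam * t)
    return ln_a - alpha * np.log(t) - lam * t

rng = np.random.default_rng(2)
t = np.logspace(0, 3, 60)
ln_c = log_plco(t, 0.0, 1.0, 2e-3) + rng.normal(scale=0.05, size=t.size)

p_pl, _ = curve_fit(log_pl, t, ln_c, p0=[0.0, 1.5])
p_plco, _ = curve_fit(log_plco, t, ln_c, p0=[0.0, 1.5, 1e-3])
# A PL fit absorbs the cutoff into a steeper slope; PLCO recovers alpha ~ 1.
print(f"PL fit:   alpha_PL = {p_pl[1]:.2f}")
print(f"PLCO fit: alpha_CO = {p_plco[1]:.2f}, lambda = {p_plco[2]:.2e}")
```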

  2. [Reproducibility and repeatability of the determination of occlusal plane on digital dental models].

    Science.gov (United States)

    Qin, Yi-fei; Xu, Tian-min

    2015-06-18

To assess the repeatability (intraobserver comparison) and reproducibility (interobserver comparison) of two different methods for establishing the occlusal plane on digital dental models. With Angle's classification as a stratification factor, 48 cases were randomly extracted from 806 cases that had complete clinical data and had received orthodontic treatment from July 2004 to August 2008 in the Department of Orthodontics, Peking University School and Hospital of Stomatology. Post-treatment plaster casts of the 48 cases were scanned by a Roland LPX-1200 3D laser scanner to generate geometry data as research subjects. In a locally developed software package, one observer repeated, 5 times at intervals of at least one week, the localization of prescriptive landmarks on each digital model to establish a group of functional occlusal planes and a group of anatomic occlusal planes, while 6 observers established two other groups of functional and anatomic occlusal planes independently. Standard deviations of the dihedral angles of each group on each model were calculated and compared between the related groups. The models with the five largest standard deviations of each group were studied to explore possible factors that might influence the identification of the landmarks on the digital models. A significant difference in intraobserver variability was not detected between the functional occlusal plane and the anatomic occlusal plane (P > 0.1), while a difference in interobserver variability was detected (P < 0.05); the interobserver variability of the functional occlusal plane was 0.2° smaller than that of the anatomic occlusal plane. The functional occlusal plane's intraobserver and interobserver variability did not differ significantly (P > 0.1), while the anatomic occlusal plane's intraobserver variability was significantly smaller than its interobserver variability (P < 0.05). For a single observer, both occlusal planes are suitable as a reference plane with equal repeatability. When several observers measure a large number of digital models, the functional occlusal plane is more reproducible than the anatomic occlusal plane.

  3. Reproducibility, reliability and validity of measurements obtained from Cecile3 digital models

    Directory of Open Access Journals (Sweden)

    Gustavo Adolfo Watanabe-Kanno

    2009-09-01

Full Text Available The aim of this study was to determine the reproducibility, reliability and validity of measurements in digital models compared to plaster models. Fifteen pairs of plaster models were obtained from orthodontic patients with permanent dentition before treatment. These were digitized to be evaluated with the program Cécile3 v2.554.2 beta. Two examiners measured, three times each, the mesiodistal width of all the teeth present, the intercanine, interpremolar and intermolar distances, and the overjet and overbite. The plaster models were measured using a digital vernier. Student's t-test for paired samples and the intraclass correlation coefficient (ICC) were used for statistical analysis. The ICC of the digital models was 0.84 ± 0.15 (intra-examiner) and 0.80 ± 0.19 (inter-examiner). The average mean difference of the digital models was 0.23 ± 0.14 and 0.24 ± 0.11 for each examiner, respectively. When the two types of measurements were compared, the values obtained from the digital models were lower than those obtained from the plaster models (p < 0.05), although the differences were considered clinically insignificant (differences < 0.1 mm). The Cécile digital models are a clinically acceptable alternative for use in Orthodontics.
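
The statistics used in this record (a paired t-test and the ICC) can be reproduced with a few lines of scipy/numpy; the measurement values below are invented, and ICC(2,1) computed from two-way ANOVA mean squares is one common convention, since the paper does not state which ICC form it used.

```python
# Sketch: paired Student's t-test plus a two-way random-effects ICC(2,1)
# comparing digital-model vs plaster-cast measurements. Values are invented.
import numpy as np
from scipy.stats import ttest_rel

plaster = np.array([8.1, 7.4, 9.0, 6.8, 7.9, 8.6])   # mm, digital vernier
digital = np.array([8.0, 7.3, 8.9, 6.7, 7.8, 8.4])   # mm, software measurements

t_stat, p_val = ttest_rel(digital, plaster)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_val:.3f}")

def icc_2_1(ratings):
    """ICC(2,1) for an (n subjects x k raters) matrix, from ANOVA mean squares."""
    n, k = ratings.shape
    grand = ratings.mean()
    ms_r = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    ms_c = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # raters
    sse = ((ratings - ratings.mean(axis=1, keepdims=True)
            - ratings.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

print("ICC(2,1):", round(icc_2_1(np.column_stack([plaster, digital])), 3))
```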

  4. Reproducing the nonlinear dynamic behavior of a structured beam with a generalized continuum model

    Science.gov (United States)

    Vila, J.; Fernández-Sáez, J.; Zaera, R.

    2018-04-01

In this paper we study the coupled axial-transverse nonlinear vibrations of a class of one-dimensional structured solids by application of the so-called Inertia Gradient Nonlinear continuum model. To show the accuracy of this axiomatic model, previously proposed by the authors, its predictions are compared with numerical results from a previously defined finite discrete chain of lumped masses and springs, for several numbers of particles. A continualization of the discrete model equations based on Taylor series allowed us to set equivalent values of the mechanical properties in both the discrete and axiomatic continuum models. Contrary to the classical continuum model, the inertia gradient nonlinear continuum model used herein is able to capture scale effects, which arise for modes in which the wavelength is comparable to the characteristic distance of the structured solid. The main conclusion of the work is that the proposed generalized continuum model captures the scale effects in both linear and nonlinear regimes, reproducing the behavior of the 1D nonlinear discrete model adequately.

  5. Assessment of the reliability of reproducing two-dimensional resistivity models using an image processing technique.

    Science.gov (United States)

    Ishola, Kehinde S; Nawawi, Mohd Nm; Abdullah, Khiruddin; Sabri, Ali Idriss Aboubakar; Adiat, Kola Abdulnafiu

    2014-01-01

This study attempts to combine the results of geophysical images obtained from three commonly used electrode configurations using an image processing technique in order to assess their capabilities to reproduce two-dimensional (2-D) resistivity models. All the inverse resistivity models were processed using the PCI Geomatica software package commonly used for remote-sensing data sets. Preprocessing of the 2-D inverse models was carried out to facilitate further processing and statistical analyses. Four raster layers were created: three of these layers were used for the input images and the fourth layer was used as the output of the combined images. The data sets were merged using a basic statistical approach. Interpreted results show that all images resolved and reconstructed the essential features of the models. An assessment of the accuracy of the images for the four geologic models was performed using four criteria: the mean absolute error and mean percentage absolute error, the resistivity values of the reconstructed blocks, and their displacements from the true models. Generally, the blocks of the images combined using the maximum approach give the smallest estimated errors. Also, the displacement of the reconstructed blocks from the true blocks is the least, and the reconstructed resistivities of the blocks are closer to the true blocks than with any other combination used. Thus, it is corroborated that when inverse resistivity models are combined, more reliable and detailed information about the geologic models is obtained than using individual data sets.
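
A rough sketch of the image-combination idea: merge several inverse-model sections per pixel with simple statistics and score each result against the true model by mean absolute error. The grids, noise level and block geometry are invented, and plain numpy stands in for the PCI Geomatica workflow used in the study.

```python
# Sketch: per-pixel merging of three resistivity "inversions" and accuracy scoring.
import numpy as np

rng = np.random.default_rng(3)
true_model = np.full((20, 40), 100.0)        # background resistivity, ohm-m
true_model[8:12, 15:25] = 500.0              # embedded resistive block

# Three noisy stand-ins for inversions from different electrode configurations.
inversions = [true_model + rng.normal(scale=60, size=true_model.shape)
              for _ in range(3)]

combined_mean = np.mean(inversions, axis=0)
combined_max = np.max(inversions, axis=0)

for name, img in [("mean", combined_mean), ("max", combined_max)]:
    mae = np.abs(img - true_model).mean()
    mpae = 100 * (np.abs(img - true_model) / true_model).mean()
    print(f"{name}: MAE = {mae:.1f} ohm-m, MPAE = {mpae:.1f}%")
```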

  6. The substorm loading-unloading cycle as reproduced by community-available global MHD magnetospheric models

    Science.gov (United States)

    Gordeev, Evgeny; Sergeev, Victor; Tsyganenko, Nikolay; Kuznetsova, Maria; Rastaetter, Lutz; Raeder, Joachim; Toth, Gabor; Lyon, John; Merkin, Vyacheslav; Wiltberger, Michael

    2017-04-01

In this study we investigate how well the three community-available global MHD models, supported by the Community Coordinated Modeling Center (CCMC, NASA), reproduce the global magnetospheric dynamics, including the loading-unloading substorm cycle. We found that in terms of global magnetic flux transport the CCMC models display systematically different responses to an idealized 2-hour north then 2-hour south IMF Bz variation. The LFM model shows a depressed return convection in the tail plasma sheet and a high rate of magnetic flux loading into the lobes during the growth phase, as well as enhanced return convection and a high unloading rate during the expansion phase, with the amount of loaded/unloaded magnetotail flux and the growth phase duration being the closest to their observed empirical values during isolated substorms. The BATS-R-US and Open GGCM models exhibit drastically different behavior. In the BATS-R-US model the plasma sheet convection shows a smooth transition to the steady convection regime after the IMF southward turning. In the Open GGCM a weak plasma sheet convection has comparable intensities during both the growth phase and the following slow unloading phase. Our study shows that different CCMC models under the same solar wind conditions (north to south IMF variation) produce essentially different solutions in terms of global magnetospheric convection.

  7. Mouse Models of Diet-Induced Nonalcoholic Steatohepatitis Reproduce the Heterogeneity of the Human Disease

    Science.gov (United States)

    Machado, Mariana Verdelho; Michelotti, Gregory Alexander; Xie, Guanhua; de Almeida, Thiago Pereira; Boursier, Jerome; Bohnic, Brittany; Guy, Cynthia D.; Diehl, Anna Mae

    2015-01-01

    Background and aims Non-alcoholic steatohepatitis (NASH), the potentially progressive form of nonalcoholic fatty liver disease (NAFLD), is the pandemic liver disease of our time. Although there are several animal models of NASH, consensus regarding the optimal model is lacking. We aimed to compare features of NASH in the two most widely-used mouse models: methionine-choline deficient (MCD) diet and Western diet. Methods Mice were fed standard chow, MCD diet for 8 weeks, or Western diet (45% energy from fat, predominantly saturated fat, with 0.2% cholesterol, plus drinking water supplemented with fructose and glucose) for 16 weeks. Liver pathology and metabolic profile were compared. Results The metabolic profile associated with human NASH was better mimicked by Western diet. Although hepatic steatosis (i.e., triglyceride accumulation) was also more severe, liver non-esterified fatty acid content was lower than in the MCD diet group. NASH was also less severe and less reproducible in the Western diet model, as evidenced by less liver cell death/apoptosis, inflammation, ductular reaction, and fibrosis. Various mechanisms implicated in human NASH pathogenesis/progression were also less robust in the Western diet model, including oxidative stress, ER stress, autophagy deregulation, and hedgehog pathway activation. Conclusion Feeding mice a Western diet models metabolic perturbations that are common in humans with mild NASH, whereas administration of a MCD diet better models the pathobiological mechanisms that cause human NAFLD to progress to advanced NASH. PMID:26017539

  8. Mouse models of diet-induced nonalcoholic steatohepatitis reproduce the heterogeneity of the human disease.

    Directory of Open Access Journals (Sweden)

    Mariana Verdelho Machado

Full Text Available Non-alcoholic steatohepatitis (NASH), the potentially progressive form of nonalcoholic fatty liver disease (NAFLD), is the pandemic liver disease of our time. Although there are several animal models of NASH, consensus regarding the optimal model is lacking. We aimed to compare features of NASH in the two most widely-used mouse models: methionine-choline deficient (MCD) diet and Western diet. Mice were fed standard chow, MCD diet for 8 weeks, or Western diet (45% energy from fat, predominantly saturated fat, with 0.2% cholesterol, plus drinking water supplemented with fructose and glucose) for 16 weeks. Liver pathology and metabolic profile were compared. The metabolic profile associated with human NASH was better mimicked by the Western diet. Although hepatic steatosis (i.e., triglyceride accumulation) was also more severe, liver non-esterified fatty acid content was lower than in the MCD diet group. NASH was also less severe and less reproducible in the Western diet model, as evidenced by less liver cell death/apoptosis, inflammation, ductular reaction, and fibrosis. Various mechanisms implicated in human NASH pathogenesis/progression were also less robust in the Western diet model, including oxidative stress, ER stress, autophagy deregulation, and hedgehog pathway activation. Feeding mice a Western diet models metabolic perturbations that are common in humans with mild NASH, whereas administration of an MCD diet better models the pathobiological mechanisms that cause human NAFLD to progress to advanced NASH.

  9. Circuit modeling of the electrical impedance: II. Normal subjects and system reproducibility

    International Nuclear Information System (INIS)

    Shiffman, C A; Rutkove, S B

    2013-01-01

    Part I of this series showed that the five-element circuit model accurately mimics impedances measured using multi-frequency electrical impedance myography (MFEIM), focusing on changes brought on by disease. This paper addresses two requirements which must be met if the method is to qualify for clinical use. First, the extracted parameters must be reproducible over long time periods such as those involved in the treatment of muscular disease, and second, differences amongst normal subjects should be attributable to known differences in the properties of healthy muscle. It applies the method to five muscle groups in 62 healthy subjects, closely following the procedure used earlier for the diseased subjects. Test–retest comparisons show that parameters are reproducible at levels from 6 to 16% (depending on the parameter) over time spans of up to 267 days, levels far below the changes occurring in serious disease. Also, variations with age, gender and muscle location are found to be consistent with established expectations for healthy muscle tissue. We conclude that the combination of MFEIM measurements and five-element circuit analysis genuinely reflects properties of muscle and is reliable enough to recommend its use in following neuromuscular disease. (paper)

  10. Building a Database for a Quantitative Model

    Science.gov (United States)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not aid in linking the Basic Events to the data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate for how the data is used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility for the entire model often rests on the credibility and traceability of the data.
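
A minimal sketch of the spreadsheet-database pattern described here, using pandas in place of Excel: Basic Events link to data sources through a unique metadata key, and a stress-factor manipulation is applied on the way in. All table contents are invented.

```python
# Sketch: linking model Basic Events to failure-data sources via a metadata key.
import pandas as pd

basic_events = pd.DataFrame({
    "event_id": ["BE-001", "BE-002", "BE-003"],
    "description": ["valve fails to open", "pump fails to run", "sensor drift"],
    "data_key": ["SRC-17", "SRC-04", "SRC-17"],   # metadata link into the database
    "stress_factor": [1.0, 2.5, 1.0],             # duty-cycle / dormancy adjustment
})

data_sources = pd.DataFrame({
    "data_key": ["SRC-04", "SRC-17"],
    "source": ["handbook data", "operating experience"],
    "base_failure_rate": [1.2e-5, 3.0e-6],        # per hour
})

# Every Basic Event stays traceable to its data source through "data_key".
model = basic_events.merge(data_sources, on="data_key", how="left")
model["failure_rate"] = model["base_failure_rate"] * model["stress_factor"]
print(model[["event_id", "source", "failure_rate"]])
```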

  11. Reproducibility analysis of measurements with a mechanical semiautomatic eye model for evaluation of intraocular lenses

    Science.gov (United States)

    Rank, Elisabet; Traxler, Lukas; Bayer, Natascha; Reutterer, Bernd; Lux, Kirsten; Drauschke, Andreas

    2014-03-01

Mechanical eye models are used to validate ex vivo the optical quality of intraocular lenses (IOLs). The quality measurement and test instructions for IOLs are defined in ISO 11979-2. However, it has been mentioned in the literature that these test instructions could lead to inaccurate measurements for some modern IOL designs. Reproducibility of the alignment and measurement processes is presented, performed with a semiautomatic mechanical ex vivo eye model based on optical properties published by Liou and Brennan, at a scale of 1:1. The cornea, the iris aperture and the IOL itself are separately changeable within the eye model. The adjustment of the IOL can be manipulated by automatic decentration and tilt of the IOL with reference to the optical axis of the whole system, which is defined by the line connecting the central point of the artificial cornea and the iris aperture. With the presented measurement setup two quality criteria are measurable: the modulation transfer function (MTF) and the Strehl ratio. First, the reproducibility of the alignment process for defining the initial conditions of lateral position and tilt with reference to the optical axis of the system is investigated. Furthermore, different IOL holders are tested with regard to stable holding of the IOL. The measurement is performed by a before-after comparison of the lens position using a typical decentration and tilt tolerance analysis path. The modulation transfer function (MTF) and Strehl ratio S before and after this tolerance analysis are compared, and requirements for lens holder construction are deduced from the presented results.
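
The two quality criteria named in this record can be computed from a sampled point-spread function as sketched below; the Gaussian PSFs are stand-ins for measured IOL and diffraction-limited reference PSFs, and the sampling choices are arbitrary.

```python
# Sketch: MTF (modulus of the PSF's Fourier transform) and Strehl ratio
# (on-axis peak relative to an ideal system) from sampled PSFs.
import numpy as np

def gaussian_psf(n, sigma):
    x = np.arange(n) - n // 2
    xx, yy = np.meshgrid(x, x)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()                   # normalise total energy to 1

psf_ref = gaussian_psf(256, sigma=2.0)       # "diffraction-limited" reference
psf_iol = gaussian_psf(256, sigma=3.5)       # measured lens, slightly degraded

# MTF: normalised at zero spatial frequency (the DC term equals the PSF sum).
mtf = np.abs(np.fft.fftshift(np.fft.fft2(psf_iol)))
mtf /= mtf.max()

strehl = psf_iol.max() / psf_ref.max()       # equal-energy peak-intensity ratio
print("Strehl ratio:", round(strehl, 3))
print("MTF at a mid frequency:", round(mtf[128, 160], 3))
```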

  12. Reproducibility and relative validity of a brief quantitative food frequency questionnaire for assessing fruit and vegetable intakes in North-African women.

    Science.gov (United States)

    Landais, E; Gartner, A; Bour, A; McCullough, F; Delpeuch, F; Holdsworth, M

    2014-04-01

In the context of a rapidly increasing prevalence of noncommunicable diseases, fruit and vegetables could play a key preventive role. To date, there is no rapid assessment tool available for measuring the fruit and vegetable intakes of North-African women. The present study aimed to investigate the reproducibility and relative validity of an eight-item quantitative food frequency questionnaire that measures the fruit and vegetable intakes (FV-FFQ) of Moroccan women. During a 1-week period, 100 women living in the city of Rabat, Morocco (aged 20-49 years) completed the short FV-FFQ twice: once at baseline (FV-FFQ1) and once at the end of the study (FV-FFQ2). In the meantime, participants completed three 24-h dietary recalls. All questionnaires were administered by interviewers. Reproducibility was assessed by computing Spearman's correlation coefficients, intraclass correlation (ICC) coefficients and kappa statistics. Relative validity was assessed by computing Wilcoxon signed-rank tests and Spearman's correlation coefficients, as well as by performing Bland-Altman plots. In terms of reproducibility, Spearman's correlation coefficient was 0.56; the ICC coefficient was 0.68; and weighted kappa was 0.35. In terms of relative validity, compared with the three 24-h recalls, the FV-FFQ slightly underestimated mean fruit and vegetable intakes (-10.9%; P = 0.006); Spearman's correlation coefficient was 0.69; at the individual level, intakes measured by the FV-FFQ were between 0.39 and 2.19 times those measured by the 24-h recalls. The brief eight-item FV-FFQ is a reliable and relatively valid tool for measuring mean fruit and vegetable intakes at the population level, although this is not the case at the individual level. © 2013 The Authors Journal of Human Nutrition and Dietetics © 2013 The British Dietetic Association Ltd.
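
A sketch of the reproducibility/validity statistics reported above (Spearman correlation, weighted kappa, Wilcoxon test, Bland-Altman limits of agreement), computed on invented intake data with scipy and scikit-learn.

```python
# Sketch: FFQ-vs-recall agreement statistics on synthetic intake data (g/day).
import numpy as np
from scipy.stats import spearmanr, wilcoxon
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(4)
recalls = rng.gamma(shape=4, scale=80, size=100)               # mean of three 24-h recalls
ffq = recalls * rng.lognormal(mean=-0.1, sigma=0.3, size=100)  # FFQ slightly underestimates

rho, _ = spearmanr(ffq, recalls)
stat, p = wilcoxon(ffq, recalls)
tertile = lambda x: np.digitize(x, np.quantile(x, [1 / 3, 2 / 3]))
kappa = cohen_kappa_score(tertile(ffq), tertile(recalls), weights="quadratic")

diff = ffq - recalls                                           # Bland-Altman quantities
loa = 1.96 * diff.std(ddof=1)
print(f"Spearman rho = {rho:.2f}, weighted kappa = {kappa:.2f}, Wilcoxon p = {p:.3f}")
print(f"Bland-Altman bias = {diff.mean():.1f} g/day, "
      f"limits of agreement = {diff.mean() - loa:.1f} to {diff.mean() + loa:.1f}")
```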

  13. Assessment of a climate model to reproduce rainfall variability and extremes over Southern Africa

    Science.gov (United States)

    Williams, C. J. R.; Kniveton, D. R.; Layberry, R.

    2010-01-01

It is increasingly accepted that any possible climate change will not only have an influence on mean climate but may also significantly alter climatic variability. A change in the distribution and magnitude of extreme rainfall events (associated with changing variability), such as droughts or flooding, may have a far greater impact on human and natural systems than a changing mean. This issue is of particular importance for environmentally vulnerable regions such as southern Africa. The sub-continent is considered especially vulnerable to, and ill-equipped (in terms of adaptation) for, extreme events, due to a number of factors including extensive poverty, famine, disease and political instability. Rainfall variability and the identification of rainfall extremes is a function of scale, so high spatial and temporal resolution data are preferred to identify extreme events and accurately predict future variability. The majority of previous climate model verification studies have compared model output with observational data at monthly timescales. In this research, the assessment of the ability of a state-of-the-art climate model to simulate climate at daily timescales is carried out using satellite-derived rainfall data from the Microwave Infrared Rainfall Algorithm (MIRA). This dataset covers the period from 1993 to 2002 and the whole of southern Africa at a spatial resolution of 0.1° longitude/latitude. This paper concentrates primarily on the ability of the model to simulate the spatial and temporal patterns of present-day rainfall variability over southern Africa and is not intended to discuss possible future changes in climate as these have been documented elsewhere. Simulations of current climate from the UK Meteorological Office Hadley Centre's climate model, in both regional and global mode, are firstly compared to the MIRA dataset at daily timescales. Secondly, the ability of the model to reproduce daily rainfall extremes is assessed, again by a comparison with

  14. Why are models unable to reproduce multi-decadal trends in lower tropospheric baseline ozone levels?

    Science.gov (United States)

    Hu, L.; Liu, J.; Mickley, L. J.; Strahan, S. E.; Steenrod, S.

    2017-12-01

    Assessments of tropospheric ozone radiative forcing rely on accurate model simulations. Parrish et al (2014) found that three chemistry-climate models (CCMs) overestimate present-day O3 mixing ratios and capture only 50% of the observed O3 increase over the last five decades at 12 baseline sites in the northern mid-latitudes, indicating large uncertainties in our understanding of the ozone trends and their implications for radiative forcing. Here we present comparisons of outputs from two chemical transport models (CTMs) - GEOS-Chem and the Global Modeling Initiative model - with O3 observations from the same sites and from the global ozonesonde network. Both CTMs are driven by reanalysis meteorological data (MERRA or MERRA2) and thus are expected to be different in atmospheric transport processes relative to those freely running CCMs. We test whether recent model developments leading to more active ozone chemistry affect the computed ozone sensitivity to perturbations in emissions. Preliminary results suggest these CTMs can reproduce present-day ozone levels but fail to capture the multi-decadal trend since 1980. Both models yield widespread overpredictions of free tropospheric ozone in the 1980s. Sensitivity studies in GEOS-Chem suggest that the model estimate of natural background ozone is too high. We discuss factors that contribute to the variability and trends of tropospheric ozone over the last 30 years, with a focus on intermodel differences in spatial resolution and in the representation of stratospheric chemistry, stratosphere-troposphere exchange, halogen chemistry, and biogenic VOC emissions and chemistry. We also discuss uncertainty in the historical emission inventories used in models, and how these affect the simulated ozone trends.

  15. A stable and reproducible human blood-brain barrier model derived from hematopoietic stem cells.

    Directory of Open Access Journals (Sweden)

    Romeo Cecchelli

Full Text Available The human blood brain barrier (BBB) is a selective barrier formed by human brain endothelial cells (hBECs), which is important to ensure adequate neuronal function and protect the central nervous system (CNS) from disease. The development of human in vitro BBB models is thus of utmost importance for drug discovery programs related to CNS diseases. Here, we describe a method to generate a human BBB model using cord blood-derived hematopoietic stem cells. The cells were initially differentiated into ECs, followed by the induction of BBB properties by co-culture with pericytes. The brain-like endothelial cells (BLECs) express tight junctions and transporters typically observed in brain endothelium and maintain expression of most in vivo BBB properties for at least 20 days. The model is very reproducible, since it can be generated from stem cells isolated from different donors and in different laboratories, and could be used to predict CNS distribution of compounds in humans. Finally, we provide evidence that the Wnt/β-catenin signaling pathway mediates in part the BBB inductive properties of pericytes.

  16. Stochastic model of financial markets reproducing scaling and memory in volatility return intervals

    Science.gov (United States)

    Gontis, V.; Havlin, S.; Kononovicius, A.; Podobnik, B.; Stanley, H. E.

    2016-11-01

We investigate the volatility return intervals in the NYSE and FOREX markets. We explain previous empirical findings using a model based on the interacting agent hypothesis instead of the widely-used efficient market hypothesis. We derive macroscopic equations based on the microscopic herding interactions of agents and find that they are able to reproduce various stylized facts of different markets and different assets with the same set of model parameters. We show that the power-law properties and the scaling of return intervals and other financial variables have a similar origin and could be a result of a general class of non-linear stochastic differential equations derived from a master equation of an agent system that is coupled by herding interactions. Specifically, we find that this approach enables us to recover the volatility return interval statistics as well as volatility probability and spectral densities for the NYSE and FOREX markets, for different assets, and for different time-scales. We find also that the historical S&P500 monthly series exhibits the same volatility return interval properties recovered by our proposed model. Our statistical results suggest that human herding is so strong that it persists even when other evolving fluctuations perturb the financial system.

  17. How well do CMIP5 Climate Models Reproduce the Hydrologic Cycle of the Colorado River Basin?

    Science.gov (United States)

    Gautam, J.; Mascaro, G.

    2017-12-01

The Colorado River, which is the primary source of water for nearly 40 million people in the arid Southwestern states of the United States, has been experiencing an extended drought since 2000, which has led to a significant reduction in water supply. As water demands increase, one of the major challenges for water management in the region has been the quantification of uncertainties associated with streamflow predictions in the Colorado River Basin (CRB) under potential changes of future climate. Hence, testing the reliability of model predictions in the CRB is critical in addressing this challenge. In this study, we evaluated the performances of 17 General Circulation Models (GCMs) from the Coupled Model Intercomparison Project Phase Five (CMIP5) and 4 Regional Climate Models (RCMs) in reproducing the statistical properties of the hydrologic cycle in the CRB. We evaluated the water balance components at four nested sub-basins along with the inter-annual and intra-annual changes of precipitation (P), evaporation (E), runoff (R) and temperature (T) from 1979 to 2005. Most of the models captured the net water balance fairly well in the most-upstream basin but simulated a weak hydrological cycle in the evaporation channel at the downstream locations. The simulated monthly variability of P had different patterns, with correlation coefficients ranging from -0.6 to 0.8 depending on the sub-basin, and with models from the same parent institution clustering together. Apart from the most-upstream sub-basin, where the models were mainly characterized by a negative seasonal bias in SON (of up to -50%), most of them had a positive bias in all seasons (of up to +260%) in the other three sub-basins. The models, however, captured the monthly variability of T well at all sites, with small inter-model variabilities and a relatively similar range of bias (-7 °C to +5 °C) across all seasons. The Mann-Kendall test was applied to the annual P and T time-series, where the majority of the models
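
Since the record applies the Mann-Kendall test, here is a compact implementation (normal approximation, no tie correction) run on a synthetic annual series; the trend and noise magnitudes are invented.

```python
# Sketch: Mann-Kendall trend test for an annual time series, written out
# directly since SciPy has no built-in MK test.
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    x = np.asarray(x)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18          # variance of S, no ties
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))                  # two-sided p-value
    return s, z, p

rng = np.random.default_rng(5)
years = np.arange(1979, 2006)
precip = 500 + 1.5 * (years - years[0]) + rng.normal(scale=40, size=years.size)
s, z, p = mann_kendall(precip)
print(f"S = {s}, Z = {z:.2f}, p = {p:.3f}")
```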

  18. Fast bootstrapping and permutation testing for assessing reproducibility and interpretability of multivariate fMRI decoding models.

    Directory of Open Access Journals (Sweden)

    Bryan R Conroy

Full Text Available Multivariate decoding models are increasingly being applied to functional magnetic resonance imaging (fMRI) data to interpret the distributed neural activity in the human brain. These models are typically formulated to optimize an objective function that maximizes decoding accuracy. For decoding models trained on full-brain data, this can result in multiple models that yield the same classification accuracy, though some may be more reproducible than others; i.e., small changes to the training set may result in very different voxels being selected. This issue of reproducibility can be partially controlled by regularizing the decoding model. Regularization, along with the cross-validation used to estimate decoding accuracy, typically requires retraining many (often on the order of thousands) of related decoding models. In this paper we describe an approach that uses a combination of bootstrapping and permutation testing to construct both a measure of cross-validated prediction accuracy and model reproducibility of the learned brain maps. This requires re-training our classification method on many re-sampled versions of the fMRI data. Given the size of fMRI datasets, this is normally a time-consuming process. Our approach leverages an algorithm called fast simultaneous training of generalized linear models (FaSTGLZ) to create a family of classifiers in the space of accuracy vs. reproducibility. The convex hull of this family of classifiers can be used to identify a subset of Pareto optimal classifiers, with a single optimal classifier selectable based on the relative cost of accuracy vs. reproducibility. We demonstrate our approach using full-brain analysis of elastic-net classifiers trained to discriminate stimulus type in an auditory and visual oddball event-related fMRI design. Our approach and results argue for a computational approach to fMRI decoding models in which the value of the interpretation of the decoding model ultimately depends upon optimizing a
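
The accuracy-versus-reproducibility idea can be sketched without the FaSTGLZ machinery: bootstrap a family of regularized decoders, measure map reproducibility as the mean pairwise weight-map correlation, and keep the Pareto-optimal models. The data, the logistic decoders and all settings below are toy assumptions.

```python
# Sketch: score decoders on accuracy vs reproducibility, then Pareto-filter.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 500))                       # 200 "trials" x 500 "voxels"
y = (X[:, :10].sum(axis=1) + rng.normal(scale=2, size=200) > 0).astype(int)

results = []
for C in [0.01, 0.1, 1.0, 10.0]:                      # regularization strengths
    clf = LogisticRegression(penalty="l2", C=C, max_iter=2000)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    maps = []
    for _ in range(20):                               # bootstrap resamples
        idx = rng.integers(0, len(y), len(y))
        maps.append(clf.fit(X[idx], y[idx]).coef_.ravel())
    corr = np.corrcoef(np.array(maps))
    repro = corr[np.triu_indices(20, k=1)].mean()     # mean pairwise map correlation
    results.append((C, acc, repro))

# Pareto filter: keep models not dominated in both accuracy and reproducibility.
pareto = [r for r in results
          if not any(o[1] >= r[1] and o[2] >= r[2] and o != r for o in results)]
print("Pareto-optimal (C, accuracy, reproducibility):", pareto)
```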

  19. Expert judgement models in quantitative risk assessment

    International Nuclear Information System (INIS)

    Rosqvist, T.; Tuominen, R.

    1999-01-01

    Expert judgement is a valuable source of information in risk management. Especially, risk-based decision making relies significantly on quantitative risk assessment, which requires numerical data describing the initiator event frequencies and conditional probabilities in the risk model. This data is seldom found in databases and has to be elicited from qualified experts. In this report, we discuss some modelling approaches to expert judgement in risk modelling. A classical and a Bayesian expert model is presented and applied to real case expert judgement data. The cornerstone in the models is the log-normal distribution, which is argued to be a satisfactory choice for modelling degree-of-belief type probability distributions with respect to the unknown parameters in a risk model. Expert judgements are qualified according to bias, dispersion, and dependency, which are treated differently in the classical and Bayesian approaches. The differences are pointed out and related to the application task. Differences in the results obtained from the different approaches, as applied to real case expert judgement data, are discussed. Also, the role of a degree-of-belief type probability in risk decision making is discussed
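
A minimal sketch of log-normal expert aggregation in the spirit of this record: each expert supplies a median and an error factor, and estimates are pooled with equal weights (one of several defensible choices; the report's actual classical and Bayesian schemes treat bias, dispersion and dependency more carefully). All numbers are invented.

```python
# Sketch: equal-weight geometric pooling of log-normal expert estimates of an
# initiator-event frequency.
import numpy as np

medians = np.array([1e-4, 3e-4, 8e-5])       # per year, three experts
error_factors = np.array([3.0, 10.0, 5.0])   # EF = 95th percentile / median

mu = np.log(medians)                         # log-normal location parameters
sigma = np.log(error_factors) / 1.645        # scale, since EF = exp(1.645 * sigma)

mu_pool = mu.mean()                          # equal-weight geometric mean
sigma_pool = np.sqrt((sigma ** 2).mean())    # one simple dispersion-pooling choice
print(f"pooled median = {np.exp(mu_pool):.2e}/yr, "
      f"pooled EF = {np.exp(1.645 * sigma_pool):.1f}")
```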

  20. Rainfall variability and extremes over southern Africa: Assessment of a climate model to reproduce daily extremes

    Science.gov (United States)

    Williams, C. J. R.; Kniveton, D. R.; Layberry, R.

    2009-04-01

It is increasingly accepted that any possible climate change will not only have an influence on mean climate but may also significantly alter climatic variability. A change in the distribution and magnitude of extreme rainfall events (associated with changing variability), such as droughts or flooding, may have a far greater impact on human and natural systems than a changing mean. This issue is of particular importance for environmentally vulnerable regions such as southern Africa. The subcontinent is considered especially vulnerable to, and ill-equipped (in terms of adaptation) for, extreme events, due to a number of factors including extensive poverty, famine, disease and political instability. Rainfall variability and the identification of rainfall extremes is a function of scale, so high spatial and temporal resolution data are preferred to identify extreme events and accurately predict future variability. The majority of previous climate model verification studies have compared model output with observational data at monthly timescales. In this research, the assessment of the ability of a state-of-the-art climate model to simulate climate at daily timescales is carried out using satellite-derived rainfall data from the Microwave Infra-Red Algorithm (MIRA). This dataset covers the period from 1993-2002 and the whole of southern Africa at a spatial resolution of 0.1 degree longitude/latitude. The ability of a climate model to simulate current climate provides some indication of how much confidence can be applied to its future predictions. In this paper, simulations of current climate from the UK Meteorological Office Hadley Centre's climate model, in both regional and global mode, are firstly compared to the MIRA dataset at daily timescales. This concentrates primarily on the ability of the model to simulate the spatial and temporal patterns of rainfall variability over southern Africa. Secondly, the ability of the model to reproduce daily rainfall extremes will

  1. Cross-species analysis of gene expression in non-model mammals: reproducibility of hybridization on high density oligonucleotide microarrays

    Directory of Open Access Journals (Sweden)

    Pita-Thomas Wolfgang

    2007-04-01

Full Text Available Abstract Background Gene expression profiles of non-model mammals may provide valuable data for biomedical and evolutionary studies. However, due to the lack of sequence information for other species, DNA microarrays are currently restricted to humans and a few model species. This limitation may be overcome by using arrays developed for a given species to analyse gene expression in a related one, an approach known as "cross-species analysis". In spite of its potential usefulness, the accuracy and reproducibility of the gene expression measures obtained in this way are still open to doubt. The present study examines whether or not hybridization values from cross-species analyses are as reproducible as those from same-species analyses when using Affymetrix oligonucleotide microarrays. Results The reproducibility of the probe data obtained by hybridizing deer, Old-World primate, and human RNA samples to the Affymetrix human GeneChip® U133 Plus 2.0 was compared. The results show that cross-species hybridization affected neither the distribution of hybridization reproducibility among different categories, nor the reproducibility values of the individual probes. Our analyses also show that 0.5% of the probes analysed in the U133 Plus 2.0 GeneChip are significantly associated with un-reproducible hybridizations. Such probes (called in the text un-reproducible probe sequences) do not increase in number in cross-species analyses. Conclusion Our study demonstrates that cross-species analyses do not significantly affect hybridization reproducibility of GeneChips, at least within the range of the mammal species analysed here. The differences in reproducibility between same-species and cross-species analyses observed in previous studies were probably caused by the analytical methods used to calculate the gene expression measures. Together with previous observations on the accuracy of GeneChips for cross-species analysis, our analyses demonstrate that cross

  2. Model for a reproducible curriculum infrastructure to provide international nurse anesthesia continuing education.

    Science.gov (United States)

    Collins, Shawn Bryant

    2011-12-01

    There are no set standards for nurse anesthesia education in developing countries, yet one of the keys to the standards in global professional practice is competency assurance for individuals. Nurse anesthetists in developing countries have difficulty obtaining educational materials. These difficulties include, but are not limited to, financial constraints, lack of anesthesia textbooks, and distance from educational sites. There is increasing evidence that the application of knowledge in developing countries is failing. One reason is that many anesthetists in developing countries are trained for considerably less than acceptable time periods and are often supervised by poorly trained practitioners, who then pass on less-than-desirable practice skills, thus exacerbating difficulties. Sustainability of development can come only through anesthetists who are both well trained and able to pass on their training to others. The international nurse anesthesia continuing education project was developed in response to the difficulty that nurse anesthetists in developing countries face in accessing continuing education. The purpose of this project was to develop a nonprofit, volunteer-based model for providing nurse anesthesia continuing education that can be reproduced and used in any developing country.

  3. Composite model to reproduce the mechanical behaviour of methane hydrate bearing soils

    Science.gov (United States)

    De la Fuente, Maria

    2016-04-01

Methane hydrate bearing sediments (MHBS) are naturally-occurring materials containing different components in the pores that may suffer phase changes under relatively small temperature and pressure variations, for conditions typically prevailing a few hundred meters below sea level. Their modelling needs to account for heat and mass balance equations of the different components, and several strategies already exist to combine them (e.g., Rutqvist & Moridis, 2009; Sánchez et al. 2014). These equations have to be completed by restrictions and constitutive laws reproducing the phenomenology of heat and fluid flows, phase change conditions and mechanical response. While the formulation of the non-mechanical laws generally includes explicitly the mass fraction of methane in each phase, which allows for a natural update of parameters during phase changes, mechanical laws are, in most cases, stated for the whole solid skeleton (Uchida et al., 2012; Soga et al. 2006). In this paper, a mechanical model is proposed to cope with the response of MHBS. It is based on a composite approach that allows defining the thermo-hydro-mechanical response of the mineral skeleton and solid hydrates independently. The global stress-strain-temperature response of the solid phase (grains + hydrate) is then obtained by combining both responses according to an energy principle, following the work by Pinyol et al. (2007). In this way, dissociation of MH can be assessed on the basis of the stress state and temperature prevailing locally within the hydrate component. Besides, its structuring effect is naturally accounted for by the model according to patterns of MH inclusions within soil pores. This paper describes the fundamental hypotheses behind the model and its formulation. Its performance is assessed by comparison with laboratory data presented in the literature. An analysis of the MHBS response to several stress-temperature paths representing potential field cases is finally presented.

  4. Global Quantitative Modeling of Chromatin Factor Interactions

    Science.gov (United States)

    Zhou, Jian; Troyanskaya, Olga G.

    2014-01-01

Chromatin is the driver of gene regulation, yet understanding the molecular interactions underlying chromatin factor combinatorial patterns (or the “chromatin codes”) remains a fundamental challenge in chromatin biology. Here we developed a global modeling framework that leverages chromatin profiling data to produce a systems-level view of the macromolecular complex of chromatin. Our model utilizes maximum entropy modeling with regularization-based structure learning to statistically dissect dependencies between chromatin factors and produce an accurate probability distribution of the chromatin code. Our unsupervised quantitative model, trained on genome-wide chromatin profiles of 73 histone marks and chromatin proteins from modENCODE, enabled making various data-driven inferences about chromatin profiles and interactions. We provided a highly accurate predictor of chromatin factor pairwise interactions validated by known experimental evidence, and for the first time enabled higher-order interaction prediction. Our predictions can thus help guide future experimental studies. The model can also serve as an inference engine for predicting unknown chromatin profiles: we demonstrated that with this approach we can leverage data from well-characterized cell types to help understand less-studied cell types or conditions. PMID:24675896
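
As a tractable stand-in for the maximum-entropy model with regularization-based structure learning described here, the sketch below fits a sparse Gaussian graphical model (sklearn's GraphicalLasso) to synthetic chromatin-factor profiles; with real data each row would be a genomic bin and each column a histone mark or chromatin protein.

```python
# Sketch: regularized structure learning over chromatin-factor profiles.
# Nonzero off-diagonal precision entries = inferred direct factor dependencies.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(7)
n_bins, n_factors = 5000, 12
latent = rng.normal(size=(n_bins, 3))                # shared regulatory "programs"
loadings = rng.normal(size=(3, n_factors))
profiles = latent @ loadings + rng.normal(scale=0.5, size=(n_bins, n_factors))

model = GraphicalLasso(alpha=0.05).fit(profiles)
precision = model.precision_                         # sparse inverse covariance

pairs = np.argwhere(np.triu(np.abs(precision) > 1e-3, k=1))
print(f"{len(pairs)} direct interactions inferred among {n_factors} factors")
```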

  5. Quantitative Modeling of Landscape Evolution, Treatise on Geomorphology

    NARCIS (Netherlands)

    Temme, A.J.A.M.; Schoorl, J.M.; Claessens, L.F.G.; Veldkamp, A.; Shroder, F.S.

    2013-01-01

    This chapter reviews quantitative modeling of landscape evolution – which means that not just model studies but also modeling concepts are discussed. Quantitative modeling is contrasted with conceptual or physical modeling, and four categories of model studies are presented. Procedural studies focus

  6. Can Computational Sediment Transport Models Reproduce the Observed Variability of Channel Networks in Modern Deltas?

    Science.gov (United States)

    Nesvold, E.; Mukerji, T.

    2017-12-01

    River deltas display complex channel networks that can be characterized through the framework of graph theory, as shown by Tejedor et al. (2015). Deltaic patterns may also be useful in a Bayesian approach to uncertainty quantification of the subsurface, but this requires a prior distribution of the networks of ancient deltas. By considering subaerial deltas, one can at least obtain a snapshot in time of the channel network spectrum across deltas. In this study, the directed graph structure is semi-automatically extracted from satellite imagery using techniques from statistical processing and machine learning. Once the network is labeled with vertices and edges, spatial trends and width and sinuosity distributions can also be found easily. Since imagery is inherently 2D, computational sediment transport models can serve as a link between 2D network structure and 3D depositional elements; the numerous empirical rules and parameters built into such models makes it necessary to validate the output with field data. For this purpose we have used a set of 110 modern deltas, with average water discharge ranging from 10 - 200,000 m3/s, as a benchmark for natural variability. Both graph theoretic and more general distributions are established. A key question is whether it is possible to reproduce this deltaic network spectrum with computational models. Delft3D was used to solve the shallow water equations coupled with sediment transport. The experimental setup was relatively simple; incoming channelized flow onto a tilted plane, with varying wave and tidal energy, sediment types and grain size distributions, river discharge and a few other input parameters. Each realization was run until a delta had fully developed: between 50 and 500 years (with a morphology acceleration factor). It is shown that input parameters should not be sampled independently from the natural ranges, since this may result in deltaic output that falls well outside the natural spectrum. Since we are
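
A sketch of the graph-theoretic representation mentioned above, using networkx on a hand-made toy delta network (apex, bifurcations, outlets) with channel widths as edge attributes; the width-based flux partitioning is a common simple proxy, not necessarily the one used in the study.

```python
# Sketch: a delta channel network as a directed graph with width attributes.
import networkx as nx

G = nx.DiGraph()
edges = [("apex", "b1", 120.0), ("b1", "b2", 70.0), ("b1", "b3", 60.0),
         ("b2", "o1", 40.0), ("b2", "o2", 35.0), ("b3", "o3", 55.0)]
G.add_weighted_edges_from(edges, weight="width_m")

outlets = [n for n in G if G.out_degree(n) == 0]
print("number of outlets:", len(outlets))
print("bifurcation nodes:", [n for n in G if G.out_degree(n) > 1])

# Width-based flux partitioning at each node (a simple proxy for discharge split).
for node in G:
    total = sum(d["width_m"] for _, _, d in G.out_edges(node, data=True))
    for _, child, d in G.out_edges(node, data=True):
        if total:
            print(f"{node} -> {child}: fraction ~ {d['width_m'] / total:.2f}")
```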

  7. Conceptual model suitability for reproducing preferential flow paths in waste rock piles

    Science.gov (United States)

    Broda, S.; Blessent, D.; Aubertin, M.

    2012-12-01

    Waste rocks are typically deposited on mining sites forming waste rock piles (WRP). Acid mine drainage (AMD) or contaminated neutral drainage (CND) with metal leaching from the sulphidic minerals adversely impact soil and water composition on and beyond the mining sites. The deposition method and the highly heterogeneous hydrogeological and geochemical properties of waste rock have a major impact on water and oxygen movement and pore water pressure distribution in the WRP, controlling AMD/CND production. However, the prediction and interpretation of water distribution in WRP is a challenging problem and many attempted numerical investigations of short and long term forecasts were found unreliable. Various forms of unsaturated localized preferential flow processes have been identified, for instance flow in macropores and fractures, heterogeneity-driven and gravity-driven unstable flow, with local hydraulic conductivities reaching several dozen meters per day. Such phenomena have been entirely neglected in numerical WRP modelling and are unattainable with the classical equivalent porous media conceptual approach typically used in this field. An additional complicating circumstance is the unknown location of macropores and fractures a priori. In this study, modeling techniques originally designed for massive fractured rock aquifers are applied. The properties of the waste rock material, found at the Tio mine at Havre Saint-Pierre, Québec (Canada), used in this modelling study were retrieved from laboratory permeability and water retention tests. These column tests were reproduced with the numerical 3D fully-integrated surface/subsurface flow model HydroGeoSphere, where material heterogeneity is represented by means of i) the dual continuum approach, ii) discrete fractures, and iii) a stochastic facies distribution framework using TPROGS. Comparisons with measured pore water pressures, tracer concentrations and exiting water volumes allowed defining limits and

  8. The quantitative modelling of human spatial habitability

    Science.gov (United States)

    Wise, J. A.

    1985-01-01

    A model for the quantitative assessment of human spatial habitability is presented in the space station context. The visual aspect assesses how interior spaces appear to the inhabitants. This aspect concerns criteria such as sensed spaciousness and the affective (emotional) connotations of settings' appearances. The kinesthetic aspect evaluates the available space in terms of its suitability to accommodate human movement patterns, as well as the postural and anthropometric changes due to microgravity. Finally, social logic concerns how the volume and geometry of available space either affirms or contravenes established social and organizational expectations for spatial arrangements. Here, the criteria include privacy, status, social power, and proxemics (the uses of space as a medium of social communication).

  9. A novel, comprehensive, and reproducible porcine model for determining the timing of bruises in forensic pathology

    DEFF Research Database (Denmark)

    Barington, Kristiane; Jensen, Henrik Elvang

    2016-01-01

    in order to identify gross and histological parameters that may be useful in determining the age of a bruise. Methods The mechanical device was able to apply a single reproducible stroke with a plastic tube that was equivalent to being struck by a man. In each of 10 anesthetized pigs, four strokes...

  10. Toward quantitative modeling of silicon phononic thermocrystals

    Energy Technology Data Exchange (ETDEWEB)

    Lacatena, V. [STMicroelectronics, 850, rue Jean Monnet, F-38926 Crolles (France); IEMN UMR CNRS 8520, Institut d' Electronique, de Microélectronique et de Nanotechnologie, Avenue Poincaré, F-59652 Villeneuve d' Ascq (France); Haras, M.; Robillard, J.-F., E-mail: jean-francois.robillard@isen.iemn.univ-lille1.fr; Dubois, E. [IEMN UMR CNRS 8520, Institut d' Electronique, de Microélectronique et de Nanotechnologie, Avenue Poincaré, F-59652 Villeneuve d' Ascq (France); Monfray, S.; Skotnicki, T. [STMicroelectronics, 850, rue Jean Monnet, F-38926 Crolles (France)

    2015-03-16

    The wealth of technological patterning technologies of deca-nanometer resolution brings opportunities to artificially modulate thermal transport properties. A promising example is given by the recent concepts of 'thermocrystals' or 'nanophononic crystals' that introduce regular nano-scale inclusions using a pitch scale in between the thermal phonons mean free path and the electron mean free path. In such structures, the lattice thermal conductivity is reduced by up to two orders of magnitude with respect to its bulk value. Beyond the promise held by these materials to overcome the well-known 'electron crystal-phonon glass' dilemma faced in thermoelectrics, the quantitative prediction of their thermal conductivity poses a challenge. This work paves the way toward understanding and designing silicon nanophononic membranes by means of molecular dynamics simulation. Several systems are studied in order to distinguish the shape contribution: bulk, ultra-thin membranes (8 to 15 nm), 2D phononic crystals, and finally 2D phononic membranes. After having discussed the equilibrium properties of these structures from 300 K to 400 K, the Green-Kubo methodology is used to quantify the thermal conductivity. The results account for several experimental trends and models. It is confirmed that the thin-film geometry as well as the phononic structure act towards a reduction of the thermal conductivity. The further decrease in the phononic-engineered membrane clearly demonstrates that both phenomena are cumulative. Finally, limitations of the model and further perspectives are discussed.
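
    The Green-Kubo estimator referred to above integrates the equilibrium heat-flux autocorrelation, kappa = V / (kB T^2) * integral of <J(0) J(t)> dt. A minimal numerical sketch follows, with a synthetic exponentially correlated flux series standing in for molecular dynamics output; all constants are illustrative placeholders, not values from the study.

        import numpy as np

        kB = 1.380649e-23          # Boltzmann constant, J/K
        T = 300.0                  # temperature, K
        V = 1.0e-24                # illustrative simulation-cell volume, m^3
        dt = 1.0e-15               # flux sampling interval, s

        # AR(1) surrogate for the heat flux (correlation time tau); in
        # practice J would be recorded during an equilibrium MD run.
        rng = np.random.default_rng(0)
        n, tau = 100_000, 5.0e-13
        a = np.exp(-dt / tau)
        J = np.empty(n)
        J[0] = 0.0
        for i in range(1, n):
            J[i] = a * J[i - 1] + rng.normal()

        def autocorr(x, nlags):
            x = x - x.mean()
            m = len(x)
            return np.array([np.dot(x[:m - k], x[k:]) / (m - k)
                             for k in range(nlags)])

        acf = autocorr(J, 2000)
        kappa = V / (kB * T**2) * np.trapz(acf, dx=dt)
        print(f"Green-Kubo estimate: {kappa:.3e} (units follow those of J)")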

  11. Quantitative comparisons of analogue models of brittle wedge dynamics

    Science.gov (United States)

    Schreurs, Guido

    2010-05-01

    Analogue model experiments are widely used to gain insights into the evolution of geological structures. In this study, we present a direct comparison of experimental results of 14 analogue modelling laboratories using prescribed set-ups. A quantitative analysis of the results will document the variability among models and will allow an appraisal of reproducibility and limits of interpretation. This has direct implications for comparisons between structures in analogue models and natural field examples. All laboratories used the same frictional analogue materials (quartz and corundum sand) and prescribed model-building techniques (sieving and levelling). Although each laboratory used its own experimental apparatus, the same type of self-adhesive foil was used to cover the base and all the walls of the experimental apparatus in order to guarantee identical boundary conditions (i.e. identical shear stresses at the base and walls). Three experimental set-ups using only brittle frictional materials were examined. In each of the three set-ups the model was shortened by a vertical wall, which moved with respect to the fixed base and the three remaining sidewalls. The minimum width of the model (dimension parallel to mobile wall) was also prescribed. In the first experimental set-up, a quartz sand wedge with a surface slope of ~20° was pushed by a mobile wall. All models conformed to the critical taper theory, maintained a stable surface slope and did not show internal deformation. In the next two experimental set-ups, a horizontal sand pack consisting of alternating quartz sand and corundum sand layers was shortened from one side by the mobile wall. In one of the set-ups a thin rigid sheet covered part of the model base and was attached to the mobile wall (i.e. a basal velocity discontinuity distant from the mobile wall). In the other set-up a basal rigid sheet was absent and the basal velocity discontinuity was located at the mobile wall. In both types of experiments

  12. Quantitative modeling of the ionospheric response to geomagnetic activity

    Directory of Open Access Journals (Sweden)

    T. J. Fuller-Rowell

    2000-07-01

    Full Text Available A physical model of the coupled thermosphere and ionosphere has been used to determine the accuracy of model predictions of the ionospheric response to geomagnetic activity, and assess our understanding of the physical processes. The physical model is driven by empirical descriptions of the high-latitude electric field and auroral precipitation, as measures of the strength of the magnetospheric sources of energy and momentum to the upper atmosphere. Both sources are keyed to the time-dependent TIROS/NOAA auroral power index. The output of the model is the departure of the ionospheric F region from the normal climatological mean. A 50-day interval towards the end of 1997 has been simulated with the model for two cases. The first simulation uses only the electric fields and auroral forcing from the empirical models, and the second has an additional source of random electric field variability. In both cases, output from the physical model is compared with F-region data from ionosonde stations. Quantitative model/data comparisons have been performed to move beyond the conventional "visual" scientific assessment, in order to determine the value of the predictions for operational use. For this study, the ionosphere at two ionosonde stations has been studied in depth, one each from the northern and southern mid-latitudes. The model clearly captures the seasonal dependence in the ionospheric response to geomagnetic activity at mid-latitude, reproducing the tendency for decreased ion density in the summer hemisphere and increased densities in winter. In contrast to the "visual" success of the model, the detailed quantitative comparisons, which are necessary for space weather applications, are less impressive. The accuracy, or value, of the model has been quantified by evaluating the daily standard deviation, the root-mean-square error, and the correlation coefficient between the data and model predictions. The modeled quiet-time variability, or standard
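
    The skill measures named above (daily standard deviation, root-mean-square error, correlation coefficient) are straightforward to compute; a small illustration with synthetic observed and modeled series, not the study's ionosonde data:

        import numpy as np

        # Synthetic stand-ins for an observed ionospheric quantity and the
        # corresponding model prediction (model carries bias and noise).
        rng = np.random.default_rng(1)
        obs = 10.0 + 2.0 * np.sin(np.linspace(0, 8 * np.pi, 50)) \
              + rng.normal(0.0, 0.5, 50)
        mod = obs + rng.normal(0.3, 1.0, 50)

        rmse = np.sqrt(np.mean((mod - obs) ** 2))
        corr = np.corrcoef(mod, obs)[0, 1]
        print(f"obs std: {obs.std(ddof=1):.2f}  model std: {mod.std(ddof=1):.2f}")
        print(f"RMSE: {rmse:.2f}  correlation: {corr:.2f}")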

  13. Pharmacokinetic Modelling to Predict FVIII:C Response to Desmopressin and Its Reproducibility in Nonsevere Haemophilia A Patients.

    Science.gov (United States)

    Schütte, Lisette M; van Hest, Reinier M; Stoof, Sara C M; Leebeek, Frank W G; Cnossen, Marjon H; Kruip, Marieke J H A; Mathôt, Ron A A

    2018-04-01

    Nonsevere haemophilia A (HA) patients can be treated with desmopressin. Response of factor VIII activity (FVIII:C) differs between patients and is difficult to predict. Our aims were to describe FVIII:C response after desmopressin and its reproducibility by population pharmacokinetic (PK) modelling. Retrospective data of 128 nonsevere HA patients (age 7-75 years) receiving an intravenous or intranasal dose of desmopressin were used. PK modelling of FVIII:C was performed by nonlinear mixed effect modelling. Reproducibility of FVIII:C response was defined as less than 25% difference in peak FVIII:C between administrations. A total of 623 FVIII:C measurements from 142 desmopressin administrations were available; 14 patients had received two administrations at different occasions. The FVIII:C time profile was best described by a two-compartment model with first-order absorption and elimination. Interindividual variability of the estimated baseline FVIII:C, central volume of distribution and clearance was 37, 43 and 50%, respectively. The most recently measured FVIII:C (FVIII-recent) was significantly associated with FVIII:C response to desmopressin, with a median FVIII:C increase of 0.47 IU/mL (interquartile range: 0.32-0.65 IU/mL, n = 142). FVIII:C response was reproducible in 6 out of 14 patients receiving two desmopressin administrations. FVIII:C response to desmopressin in nonsevere HA patients was adequately described by a population PK model. Large variability in FVIII:C response was observed, which could only partially be explained by FVIII-recent. FVIII:C response was not reproducible in a small subset of patients. Therefore, monitoring FVIII:C around surgeries or bleeding might be considered. Research is needed to study this further.
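
    A sketch of the structural model named above, two compartments with first-order absorption and elimination, simulated for a single dose; all rate constants, volumes, the dose and the baseline are illustrative placeholders, not the published population estimates.

        import numpy as np
        from scipy.integrate import solve_ivp

        ka, CL, Vc, Q, Vp = 2.0, 0.15, 3.0, 0.1, 2.0   # 1/h, L/h, L, L/h, L
        baseline, dose = 0.15, 1.0                      # illustrative values

        def rhs(t, y):
            a, c, p = y          # absorption, central, peripheral amounts
            return [-ka * a,
                    ka * a - (CL + Q) / Vc * c + Q / Vp * p,
                    Q / Vc * c - Q / Vp * p]

        sol = solve_ivp(rhs, (0.0, 24.0), [dose, 0.0, 0.0], dense_output=True)
        t = np.linspace(0.0, 24.0, 97)
        fviii = baseline + sol.sol(t)[1] / Vc   # baseline plus central conc.
        print(f"peak response (model units): {fviii.max():.2f} "
              f"at t = {t[fviii.argmax()]:.1f} h")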

  14. Quantitative modelling of the biomechanics of the avian syrinx

    DEFF Research Database (Denmark)

    Elemans, Coen P. H.; Larsen, Ole Næsbye; Hoffmann, Marc R.

    2003-01-01

    We review current quantitative models of the biomechanics of bird sound production. A quantitative model of the vocal apparatus was proposed by Fletcher (1988). He represented the syrinx (i.e. the portions of the trachea and bronchi with labia and membranes) as a single membrane. This membrane acts...

  15. Hippocampal volume change measurement: Quantitative assessment of the reproducibility of expert manual outlining and the automated methods FreeSurfer and FIRST

    NARCIS (Netherlands)

    Mulder, E.R.; de Jong, R.A.; Knol, D.L.; van Schijndel, R.A.; Cover, K.S.; Visser, P.J.; Barkhof, F.; Vrenken, H.

    2014-01-01

    Background: To measure hippocampal volume change in Alzheimer's disease (AD) or mild cognitive impairment (MCI), expert manual delineation is often used because of its supposed accuracy. It has been suggested that expert outlining yields poorer reproducibility as compared to automated methods, but

  16. Reproducibility of detection of tyrosinase and MART-1 transcripts in the peripheral blood of melanoma patients: a quality control study using real-time quantitative RT-PCR

    NARCIS (Netherlands)

    de Vries, T. J.; Fourkour, A.; Punt, C. J.; van de Locht, L. T.; Wobbes, T.; van den Bosch, S.; de Rooij, M. J.; Mensink, E. J.; Ruiter, D. J.; van Muijen, G. N.

    1999-01-01

    In recent years, large discrepancies were described in the success rate of the tyrosinase reverse transcription polymerase chain reaction (RT-PCR) for detecting melanoma cells in the peripheral blood of melanoma patients. We present a quality control study in which we analysed the reproducibility of

  17. A Quantitative Software Risk Assessment Model

    Science.gov (United States)

    Lee, Alice

    2002-01-01

    This slide presentation reviews a risk assessment model as applied to software development. The presentation uses graphs to demonstrate basic concepts of software reliability. It also discusses the application of the risk model to the software development life cycle.

  18. A novel porcine model of ataxia telangiectasia reproduces neurological features and motor deficits of human disease.

    Science.gov (United States)

    Beraldi, Rosanna; Chan, Chun-Hung; Rogers, Christopher S; Kovács, Attila D; Meyerholz, David K; Trantzas, Constantin; Lambertz, Allyn M; Darbro, Benjamin W; Weber, Krystal L; White, Katherine A M; Rheeden, Richard V; Kruer, Michael C; Dacken, Brian A; Wang, Xiao-Jun; Davis, Bryan T; Rohret, Judy A; Struzynski, Jason T; Rohret, Frank A; Weimer, Jill M; Pearce, David A

    2015-11-15

    Ataxia telangiectasia (AT) is a progressive multisystem disorder caused by mutations in the AT-mutated (ATM) gene. AT is a neurodegenerative disease primarily characterized by cerebellar degeneration in children leading to motor impairment. The disease progresses with other clinical manifestations including oculocutaneous telangiectasia, immune disorders, increased susceptibility to cancer and respiratory infections. Although genetic investigations and physiological models have established the linkage of ATM with AT onset, the mechanisms linking ATM to neurodegeneration remain undetermined, hindering therapeutic development. Several murine models of AT have been successfully generated showing some of the clinical manifestations of the disease; however, they do not fully recapitulate the hallmark neurological phenotype, thus highlighting the need for a more suitable animal model. We engineered a novel porcine model of AT to better phenocopy the disease and bridge the gap between human and current animal models. The initial characterization of AT pigs revealed early cerebellar lesions including loss of Purkinje cells (PCs) and altered cytoarchitecture, suggesting a developmental etiology for AT, which could advocate for early therapies for AT patients. In addition, similar to patients, AT pigs show growth retardation and develop motor deficit phenotypes. By using the porcine system to model human AT, we established the first animal model showing PC loss and motor features of the human disease. The novel AT pig provides new opportunities to unmask functions and roles of ATM in AT disease and in physiological conditions. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. A novel porcine model of ataxia telangiectasia reproduces neurological features and motor deficits of human disease

    Science.gov (United States)

    Beraldi, Rosanna; Chan, Chun-Hung; Rogers, Christopher S.; Kovács, Attila D.; Meyerholz, David K.; Trantzas, Constantin; Lambertz, Allyn M.; Darbro, Benjamin W.; Weber, Krystal L.; White, Katherine A.M.; Rheeden, Richard V.; Kruer, Michael C.; Dacken, Brian A.; Wang, Xiao-Jun; Davis, Bryan T.; Rohret, Judy A.; Struzynski, Jason T.; Rohret, Frank A.; Weimer, Jill M.; Pearce, David A.

    2015-01-01

    Ataxia telangiectasia (AT) is a progressive multisystem disorder caused by mutations in the AT-mutated (ATM) gene. AT is a neurodegenerative disease primarily characterized by cerebellar degeneration in children leading to motor impairment. The disease progresses with other clinical manifestations including oculocutaneous telangiectasia, immune disorders, increased susceptibility to cancer and respiratory infections. Although genetic investigations and physiological models have established the linkage of ATM with AT onset, the mechanisms linking ATM to neurodegeneration remain undetermined, hindering therapeutic development. Several murine models of AT have been successfully generated showing some of the clinical manifestations of the disease; however, they do not fully recapitulate the hallmark neurological phenotype, thus highlighting the need for a more suitable animal model. We engineered a novel porcine model of AT to better phenocopy the disease and bridge the gap between human and current animal models. The initial characterization of AT pigs revealed early cerebellar lesions including loss of Purkinje cells (PCs) and altered cytoarchitecture, suggesting a developmental etiology for AT, which could advocate for early therapies for AT patients. In addition, similar to patients, AT pigs show growth retardation and develop motor deficit phenotypes. By using the porcine system to model human AT, we established the first animal model showing PC loss and motor features of the human disease. The novel AT pig provides new opportunities to unmask functions and roles of ATM in AT disease and in physiological conditions. PMID:26374845

  20. Using a 1-D model to reproduce the diurnal variability of SST

    DEFF Research Database (Denmark)

    Karagali, Ioanna; Høyer, Jacob L.; Donlon, Craig J.

    2017-01-01

    preferred approach to bridge the gap between in situ and remotely sensed measurements and obtain diurnal warming estimates at large spatial scales is modeling of the upper ocean temperature. This study uses the one-dimensional General Ocean Turbulence Model (GOTM) to resolve diurnal signals identified from...... forcing fields and is able to resolve daily SST variability seen both from satellite and in situ measurements. As such, and due to its low computational cost, it is proposed as a candidate model for diurnal variability estimates....
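
    A toy slab analogue of the diurnal warming problem that GOTM resolves in full: a thin surface layer heated by an idealized solar cycle and relaxed back to a foundation temperature. Every constant below is invented for illustration; GOTM itself solves the full turbulence closure rather than this single ODE.

        import numpy as np

        rho, cp, h = 1025.0, 3990.0, 2.0   # density, heat capacity, layer depth
        tau = 6 * 3600.0                   # relaxation time scale, s
        T_f = 18.0                         # foundation SST, deg C
        dt = 600.0
        t = np.arange(0.0, 2 * 86400.0, dt)

        # Idealized net surface heat flux: daytime solar heating minus a
        # constant 100 W/m^2 background cooling.
        Q = np.maximum(np.sin(2 * np.pi * (t / 86400.0 - 0.25)), 0.0) * 800.0 - 100.0

        T = np.empty_like(t)
        T[0] = T_f
        for i in range(1, t.size):         # forward-Euler integration
            dT = Q[i] / (rho * cp * h) - (T[i - 1] - T_f) / tau
            T[i] = T[i - 1] + dt * dT

        print(f"peak diurnal warming: {T.max() - T_f:.2f} K")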

  1. Energy and nutrient deposition and excretion in the reproducing sow: model development and evaluation

    DEFF Research Database (Denmark)

    Hansen, A V; Strathe, A B; Theil, Peter Kappel

    2014-01-01

    was related to predictions of body fat and protein loss from the lactation model. Nitrogen intake, urine N, fecal N, and milk N were predicted with RMSPE as percentage of observed mean of 9.7, 17.9, 10.0, and 7.7%, respectively. The model provided a framework, but more refinements and improvements in accuracy......Air and nutrient emissions from swine operations raise environmental concerns. During the reproduction phase, sows consume and excrete large quantities of nutrients. The objective of this study was to develop a mathematical model to describe energy and nutrient partitioning and predict manure...... excretion and composition and methane emissions on a daily basis. The model was structured to contain gestation and lactation modules, which can be run separately or sequentially, with outputs from the gestation module used as inputs to the lactation module. In the gestating module, energy and protein...
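
    The fit statistic quoted above, RMSPE expressed as a percentage of the observed mean, can be computed as follows; the observed and predicted values are made-up numbers, not the study's data.

        import numpy as np

        obs = np.array([420.0, 455.0, 470.0, 510.0, 540.0])   # e.g. N intake, g/d
        pred = np.array([400.0, 460.0, 500.0, 505.0, 520.0])

        rmspe = np.sqrt(np.mean((obs - pred) ** 2))
        print(f"RMSPE = {rmspe:.1f}, "
              f"i.e. {100 * rmspe / obs.mean():.1f}% of the observed mean")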

  2. Do on/off time series models reproduce emerging stock market comovements?

    OpenAIRE

    Mohamed el hédi Arouri; Fredj Jawadi

    2011-01-01

    Using nonlinear modeling tools, this study investigates the comovements between the Mexican and the world stock markets over the last three decades. While the previous works only highlight some evidence of comovements, our paper aims to specify the different time-varying links and mechanisms characterizing the Mexican stock market through the comparison of two nonlinear error correction models (NECMs). Our findings point out strong evidence of time-varying and nonlinear mean-reversion and lin...

  3. The Computable Catchment: An executable document for model-data software sharing, reproducibility and interactive visualization

    Science.gov (United States)

    Gil, Y.; Duffy, C.

    2015-12-01

    This paper proposes the concept of a "Computable Catchment" which is used to develop a collaborative platform for watershed modeling and data analysis. The object of the research is a sharable, executable document similar to a PDF, but one that includes documentation of the underlying theoretical concepts, interactive computational/numerical resources, linkage to essential data repositories and the ability for interactive model-data visualization and analysis. The executable document for each catchment is stored in the cloud with automatic provisioning and a unique identifier, allowing collaborative model and data enhancements for historical hydroclimatic reconstruction and/or future land-use or climate change scenarios to be easily reconstructed or extended. The Computable Catchment adopts metadata standards for naming all variables in the model and the data. The a-priori or initial data is derived from national data sources for soils, hydrogeology, climate, and land cover available from the www.hydroterre.psu.edu data service (Leonard and Duffy, 2015). The executable document is based on Wolfram CDF or Computable Document Format with an interactive open-source reader accessible by any modern computing platform. The CDF file and contents can be uploaded to a website or simply shared as a normal document, maintaining all interactive features of the model and data. The Computable Catchment concept represents one application for Geoscience Papers of the Future: an extensible document that combines theory, models, data and analysis that are digitally shared, documented and reused among research collaborators, students, educators and decision makers.

  4. Quantitative models for sustainable supply chain management

    DEFF Research Database (Denmark)

    Brandenburg, M.; Govindan, Kannan; Sarkis, J.

    2014-01-01

    Sustainability, the consideration of environmental factors and social aspects, in supply chain management (SCM) has become a highly relevant topic for researchers and practitioners. The application of operations research methods and related models, i.e. formal modeling, for closed-loop SCM...... and reverse logistics has been effectively reviewed in previously published research. This situation is in contrast to the understanding and review of mathematical models that focus on environmental or social factors in forward supply chains (SC), which has seen less investigation. To evaluate developments...

  5. "High-precision, reconstructed 3D model" of skull scanned by conebeam CT: Reproducibility verified using CAD/CAM data.

    Science.gov (United States)

    Katsumura, Seiko; Sato, Keita; Ikawa, Tomoko; Yamamura, Keiko; Ando, Eriko; Shigeta, Yuko; Ogawa, Takumi

    2016-01-01

    Computed tomography (CT) scanning has recently been introduced into forensic medicine and dentistry. However, the presence of metal restorations in the dentition can adversely affect the quality of three-dimensional reconstruction from CT scans. In this study, we aimed to evaluate the reproducibility of a "high-precision, reconstructed 3D model" obtained from a conebeam CT scan of dentition, a method that might be particularly helpful in forensic medicine. We took conebeam CT and helical CT images of three dry skulls marked with 47 measuring points; reconstructed three-dimensional images; and measured the distances between the points in the 3D images with a computer-aided design/computer-aided manufacturing (CAD/CAM) marker. We found that, in comparison with helical CT, conebeam CT is capable of reproducing measurements closer to those obtained from the actual samples. In conclusion, our study indicated that the image reproduction from a conebeam CT scan was more accurate than that from a helical CT scan. Furthermore, the "high-precision reconstructed 3D model" facilitates reliable visualization of full-sized oral and maxillofacial regions in both helical and conebeam CT scans. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. A novel porcine model of ataxia telangiectasia reproduces neurological features and motor deficits of human disease

    OpenAIRE

    Beraldi, Rosanna; Chan, Chun-Hung; Rogers, Christopher S.; Kovács, Attila D.; Meyerholz, David K.; Trantzas, Constantin; Lambertz, Allyn M.; Darbro, Benjamin W.; Weber, Krystal L.; White, Katherine A.M.; Rheeden, Richard V.; Kruer, Michael C.; Dacken, Brian A.; Wang, Xiao-Jun; Davis, Bryan T.

    2015-01-01

    Ataxia telangiectasia (AT) is a progressive multisystem disorder caused by mutations in the AT-mutated (ATM) gene. AT is a neurodegenerative disease primarily characterized by cerebellar degeneration in children leading to motor impairment. The disease progresses with other clinical manifestations including oculocutaneous telangiectasia, immune disorders, increased susceptibility to cancer and respiratory infections. Although genetic investigations and physiological models have established the l...

  7. Establishing a Reproducible Hypertrophic Scar following Thermal Injury: A Porcine Model

    Directory of Open Access Journals (Sweden)

    Scott J. Rapp, MD

    2015-02-01

    Conclusions: Deep partial-thickness thermal injury to the back of domestic swine produces an immature hypertrophic scar by 10 weeks following burn, with thickness appearing to coincide with the location along the dorsal axis. With minimal pig-to-pig variation, we describe our technique to provide a testable immature scar model.

  8. Reproducibility of a novel model of murine asthma-like pulmonary inflammation.

    Science.gov (United States)

    McKinley, L; Kim, J; Bolgos, G L; Siddiqui, J; Remick, D G

    2004-05-01

    Sensitization to cockroach allergens (CRA) has been implicated as a major cause of asthma, especially among inner-city populations. Endotoxin from Gram-negative bacteria has also been investigated for its role in attenuating or exacerbating the asthmatic response. We have created a novel model utilizing house dust extract (HDE) containing high levels of both CRA and endotoxin to induce pulmonary inflammation (PI) and airway hyperresponsiveness (AHR). A potential drawback of this model is that the HDE is in limited supply and preparation of new HDE will not contain the exact components of the HDE used to define our model system. The present study involved testing HDEs collected from various homes for their ability to cause PI and AHR. Dust collected from five homes was extracted in phosphate-buffered saline overnight. The levels in the supernatants varied from 7.1 to 49.5 mg/ml for CRA and from 1.7 to 6 µg/ml for endotoxin. Following immunization and two pulmonary exposures to HDE, all five HDEs induced AHR, PI and plasma IgE levels substantially higher than in normal mice. This study shows that HDE containing high levels of cockroach allergens and endotoxin collected from different sources can induce an asthma-like response in our murine model.

  9. Reproducibility and repeatability of semi-quantitative 18F-fluorodihydrotestosterone (FDHT) uptake metrics in castration-resistant prostate cancer metastases: a prospective multi-center study.

    Science.gov (United States)

    Vargas, Hebert Alberto; Kramer, Gem M; Scott, Andrew M; Weickhardt, Andrew; Meier, Andreas A; Parada, Nicole; Beattie, Bradley J; Humm, John L; Staton, Kevin D; Zanzonico, Pat B; Lyashchenko, Serge K; Lewis, Jason S; Yaqub, Maqsood; Sosa, Ramon E; van den Eertwegh, Alfons J; Davis, Ian D; Ackermann, Uwe; Pathmaraj, Kunthi; Schuit, Robert C; Windhorst, Albert D; Chua, Sue; Weber, Wolfgang A; Larson, Steven M; Scher, Howard I; Lammertsma, Adriaan A; Hoekstra, Otto; Morris, Michael J

    2018-04-06

    18F-fluorodihydrotestosterone (18F-FDHT) is a radiolabeled analogue of the androgen receptor's primary ligand that is currently being credentialed as a biomarker for prognosis, response, and pharmacodynamic effects of new therapeutics. As part of the biomarker qualification process, we prospectively assessed its reproducibility and repeatability in men with metastatic castration-resistant prostate cancer (mCRPC). Methods: We conducted a prospective multi-institutional study of mCRPC patients undergoing two (test/re-test) 18F-FDHT PET/CT scans on two consecutive days. Two independent readers evaluated all examinations and recorded standardized uptake values (SUVs), androgen receptor-positive tumor volumes (ARTV), and total lesion uptake (TLU) for the most avid lesion detected in each of 32 pre-defined anatomical regions. The relative absolute difference and reproducibility coefficient (RC) of each metric were calculated between the test and re-test scans. Linear regression analyses, intra-class correlation coefficients (ICC), and Bland-Altman plots were used to evaluate repeatability of 18F-FDHT metrics. The coefficient of variation (COV) and ICC were used to assess inter-observer reproducibility. Results: Twenty-seven patients with 140 18F-FDHT-avid regions were included. The best repeatability among 18F-FDHT uptake metrics was found for SUV metrics (SUVmax, SUVmean, and SUVpeak), with no significant differences in repeatability found among them. Correlations between the test and re-test scans were strong for all SUV metrics (R2 ≥ 0.92; ICC ≥ 0.97). The RCs of the SUV metrics ranged from 21.3% for SUVpeak to 24.6% for SUVmax. The test and re-test ARTV and TLU, respectively, were highly correlated (R2 and ICC ≥ 0.97), although variability was significantly higher than that for SUV (RCs > 46.4%). The PSA levels, Gleason score, weight, and age did not affect repeatability, nor did total injected activity, uptake measurement time, or differences in
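
    The agreement metrics used here can be illustrated on synthetic test/re-test SUV pairs. In the sketch below, RC is taken as 1.96 times the standard deviation of the paired percent differences (one common convention, not necessarily the study's exact definition), and the ICC is the one-way random-effects ICC(1,1).

        import numpy as np

        rng = np.random.default_rng(7)
        test = rng.uniform(3.0, 12.0, 30)
        retest = test * rng.normal(1.0, 0.08, 30)   # ~8% test/re-test noise

        pct_diff = 200.0 * (retest - test) / (retest + test)
        rc = 1.96 * pct_diff.std(ddof=1)

        # One-way random-effects ICC(1,1) for k = 2 measurements per subject.
        x = np.stack([test, retest], axis=1)
        ms_between = 2 * x.mean(axis=1).var(ddof=1)
        ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / x.shape[0]
        icc = (ms_between - ms_within) / (ms_between + ms_within)

        print(f"mean |relative difference|: {np.abs(pct_diff).mean():.1f}%")
        print(f"repeatability coefficient: {rc:.1f}%   ICC: {icc:.3f}")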

  10. A computational model incorporating neural stem cell dynamics reproduces glioma incidence across the lifespan in the human population.

    Directory of Open Access Journals (Sweden)

    Roman Bauer

    Full Text Available Glioma is the most common form of primary brain tumor. Demographically, the risk of occurrence increases until old age. Here we present a novel computational model to reproduce the probability of glioma incidence across the lifespan. Previous mathematical models explaining glioma incidence are framed in a rather abstract way, and do not directly relate to empirical findings. To decrease this gap between theory and experimental observations, we incorporate recent data on cellular and molecular factors underlying gliomagenesis. Since evidence implicates the adult neural stem cell as the likely cell-of-origin of glioma, we have incorporated empirically-determined estimates of neural stem cell number, cell division rate, mutation rate and oncogenic potential into our model. We demonstrate that our model yields results which match actual demographic data in the human population. In particular, this model accounts for the observed peak incidence of glioma at approximately 80 years of age, without the need to assert differential susceptibility throughout the population. Overall, our model supports the hypothesis that glioma is caused by randomly-occurring oncogenic mutations within the neural stem cell population. Based on this model, we assess the influence of the (experimentally indicated) decrease in the number of neural stem cells and increase of cell division rate during aging. Our model provides multiple testable predictions, and suggests that different temporal sequences of oncogenic mutations can lead to tumorigenesis. Finally, we conclude that four or five oncogenic mutations are sufficient for the formation of glioma.
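
    A minimal Monte Carlo rendering of this hypothesis: a fixed pool of stem cells accumulates oncogenic mutations at random divisions, and a cell reaching a threshold number of hits initiates a tumour. All rates and counts below are illustrative placeholders, not the paper's fitted values; the point is that a multi-hit process concentrates onset at old age.

        import numpy as np

        rng = np.random.default_rng(42)
        n_cells = 10_000        # stem-cell pool, assumed constant here
        div_per_year = 10.0     # mean divisions per cell per year
        mu = 2e-4               # oncogenic mutation probability per division
        k_hit = 4               # mutations required for gliomagenesis
        years, n_people = 100, 200

        onset_ages = []
        for _ in range(n_people):
            hits = np.zeros(n_cells, dtype=np.int64)
            for age in range(1, years + 1):
                n_div = rng.poisson(div_per_year, n_cells)
                hits += rng.binomial(n_div, mu)
                if (hits >= k_hit).any():
                    onset_ages.append(age)
                    break

        print(f"simulated lifetime incidence: {len(onset_ages) / n_people:.2f}")
        if onset_ages:
            print(f"median age at onset: {np.median(onset_ages):.0f} y")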

  11. [Renaissance of training in general surgery in Cambodia: a unique experience or reproducible model].

    Science.gov (United States)

    Dumurgier, C; Baulieux, J

    2005-01-01

    Is the new surgical training program at the University of Phnom Penh, Cambodia a unique experience or can it serve as a model for developing countries? This report describes the encouraging first results of this didactic and hands-on surgical program. Based on their findings the authors recommend not only continuing the program in Phnom Penh but also proposing slightly modified versions to new medical universities not currently offering specialization in surgery.

  12. Evaluation of Nitinol staples for the Lapidus arthrodesis in a reproducible biomechanical model

    Directory of Open Access Journals (Sweden)

    Nicholas Alexander Russell

    2015-12-01

    Full Text Available While the Lapidus procedure is a widely accepted technique for treatment of hallux valgus, the optimal fixation method to maintain joint stability remains controversial. The purpose of this study was to evaluate the biomechanical properties of new Shape Memory Alloy staples arranged in different configurations in a repeatable 1st Tarsometatarsal arthrodesis model. Ten sawbones models of the whole foot (n=5 per group) were reconstructed using a single dorsal staple or two staples in a delta configuration. Each construct was mechanically tested in dorsal four-point bending, medial four-point bending, dorsal three-point bending and plantar cantilever bending with the staples activated at 37°C. The peak load, stiffness and plantar gapping were determined for each test. Pressure sensors were used to measure the contact force and area of the joint footprint in each group. There was a significant (p < 0.05) increase in peak load in the two staple constructs compared to the single staple constructs for all testing modalities. Stiffness also increased significantly in all tests except dorsal four-point bending. Pressure sensor readings showed a significantly higher contact force at time zero and contact area following loading in the two staple constructs (p < 0.05). Both groups completely recovered any plantar gapping following unloading and restored their initial contact footprint. The biomechanical integrity and repeatability of the models was demonstrated with no construct failures due to hardware or model breakdown. Shape memory alloy staples provide fixation with the ability to dynamically apply and maintain compression across a simulated arthrodesis following a range of loading conditions.

  13. Can lagrangian models reproduce the migration time of European eel obtained from otolith analysis?

    Science.gov (United States)

    Rodríguez-Díaz, L.; Gómez-Gesteira, M.

    2017-12-01

    European eel can be found in the Bay of Biscay after a long migration across the Atlantic. The duration of migration, which takes place at the larval stage, is of primary importance to understand eel ecology and, hence, its survival. This duration is still a controversial matter since it can range from 7 months to > 4 years depending on the method used to estimate it. The minimum migration duration estimated from our Lagrangian model is similar to the duration obtained from the microstructure of eel otoliths, which is typically on the order of 7-9 months. The Lagrangian model proved to be sensitive to different conditions such as spatial and time resolution, release depth, release area and initial distribution. In general, migration was faster when the release depth was decreased and the model resolution increased. On average, the fastest migration was obtained when only advective horizontal movement was considered. However, even faster migration was obtained in some cases when locally oriented random migration was taken into account.
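
    A toy Lagrangian tracker in the spirit of such studies: passive particles are advected through an invented steady jet, with a random-walk term standing in for unresolved motion, and arrival statistics give a migration-duration estimate. The flow field, diffusivity and arrival threshold are all assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(3)

        def velocity(x, y):
            """Steady zonal jet (m/s): eastward core at y = 0."""
            u = 0.4 * np.exp(-(y / 2.0e5) ** 2)
            return u, np.zeros_like(x)

        n, dt, days = 500, 3600.0, 270
        x = np.zeros(n)                      # common release longitude
        y = rng.normal(0.0, 5.0e4, n)        # spread across the jet
        kh = 100.0                           # horizontal diffusivity, m^2/s

        for _ in range(int(days * 86400 / dt)):
            u, v = velocity(x, y)
            x += u * dt + rng.normal(0.0, np.sqrt(2 * kh * dt), n)
            y += v * dt + rng.normal(0.0, np.sqrt(2 * kh * dt), n)

        target = 5.0e6                       # "arrival" distance, m
        print(f"{(x >= target).mean():.0%} of particles covered "
              f"{target / 1e3:.0f} km within {days} days")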

  14. A Quantitative Model of Expert Transcription Typing

    Science.gov (United States)

    1993-03-08

    The model addresses an established catalogue of transcription-typing phenomena, among them phenomena 1-3, how degradation of the text away from normal prose affects the rate of typing (phenomena 4-6), and patterns of interkey intervals (phenomena 7-11). A more detailed analysis of these phenomena is based on the work of West and Sabban (1932), who used progressively degraded copy to test typing performance.

  15. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction

    OpenAIRE

    Cobbs, Gary

    2012-01-01

    Background: Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most pote...
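
    The constant-efficiency baseline model mentioned above is F_n = F_0 (1 + E)^n over the exponential phase. A sketch that fits it by linear regression of log fluorescence on cycle number, using simulated data rather than any real qPCR run:

        import numpy as np

        rng = np.random.default_rng(5)
        cycles = np.arange(12, 22)          # assumed exponential-phase window
        E_true, F0 = 0.92, 1e-6
        F = F0 * (1 + E_true) ** cycles * rng.normal(1.0, 0.02, cycles.size)

        # log F is linear in cycle number; the slope gives the efficiency.
        slope, intercept = np.polyfit(cycles, np.log(F), 1)
        E_hat = np.exp(slope) - 1.0
        print(f"estimated efficiency: {E_hat:.2f} (true {E_true})")
        print(f"estimated F0: {np.exp(intercept):.2e}")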

  16. Quantitative Models and Analysis for Reactive Systems

    DEFF Research Database (Denmark)

    Thrane, Claus

    The majority of modern software and hardware systems are reactive systems, where input provided by the user (possibly another system) and the output of the system is exchanged continuously throughout the (possibly) indefinite execution of the system. Natural examples include control systems, mobi......, energy consumption, latency, mean-time to failure, and cost. For systems integrated in mass-market products, the ability to quantify trade-offs between performance and robustness, under given technical and economic constraints, is of strategic importance....... by the environment in which they are embedded. This thesis studies the semantics and properties of a model-based framework for reactive systems, in which models and specifications are assumed to contain quantifiable information, such as references to time or energy. Our goal is to develop a theory of approximation......, in terms of a new mathematical basis for systems modeling which can encompass behavioural properties as well as environmental constraints. They continue by pointing out that continuous performance and robustness measures are paramount when dealing with physical resource levels such as clock frequency

  17. Realizing the Living Paper using the ProvONE Model for Reproducible Research

    Science.gov (United States)

    Jones, M. B.; Jones, C. S.; Ludäscher, B.; Missier, P.; Walker, L.; Slaughter, P.; Schildhauer, M.; Cuevas-Vicenttín, V.

    2015-12-01

    Science has advanced through traditional publications that codify research results as a permanent part of the scientific record. But because publications are static and atomic, researchers can only cite and reference a whole work when building on prior work of colleagues. The open source software model has demonstrated a new approach in which strong version control in an open environment can nurture an open ecosystem of software. Developers now commonly fork and extend software giving proper credit, with less repetition, and with confidence in the relationship to original software. Through initiatives like 'Beyond the PDF', an analogous model has been imagined for open science, in which software, data, analyses, and derived products become first class objects within a publishing ecosystem that has evolved to be finer-grained and is realized through a web of linked open data. We have prototyped a Living Paper concept by developing the ProvONE provenance model for scientific workflows, with prototype deployments in DataONE. ProvONE promotes transparency and openness by describing the authenticity, origin, structure, and processing history of research artifacts and by detailing the steps in computational workflows that produce derived products. To realize the Living Paper, we decompose scientific papers into their constituent products and publish these as compound objects in the DataONE federation of archival repositories. Each individual finding and sub-product of a research project (such as a derived data table, a workflow or script, a figure, an image, or a finding) can be independently stored, versioned, and cited. ProvONE provenance traces link these fine-grained products within and across versions of a paper, and across related papers that extend an original analysis. This allows for open scientific publishing in which researchers extend and modify findings, creating a dynamic, evolving web of results that collectively represent the scientific enterprise. The

  18. A discrete particle model reproducing collective dynamics of a bee swarm.

    Science.gov (United States)

    Bernardi, Sara; Colombi, Annachiara; Scianna, Marco

    2018-02-01

    In this article, we present a microscopic discrete mathematical model describing collective dynamics of a bee swarm. More specifically, each bee is set to move according to individual strategies and social interactions, the former involving the desire to reach a target destination, the latter accounting for repulsive/attractive stimuli and for alignment processes. The insects tend in fact to remain sufficiently close to the rest of the population, while avoiding collisions, and they are able to track and synchronize their movement to the flight of a given set of neighbors within their visual field. The resulting collective behavior of the bee cloud therefore emerges from non-local short/long-range interactions. Unlike similar approaches in the literature, we test different alignment mechanisms (i.e., based either on a Euclidean or on a topological neighborhood metric), which have an impact also on the other social components characterizing insect behavior. A series of numerical realizations then shows the phenomenology of the swarm (in terms of pattern configuration, collective productive movement, and flight synchronization) in different regions of the space of free model parameters (i.e., strength of attractive/repulsive forces, extension of the interaction regions). In this respect, constraints on the possible variations of such coefficients are given both by reasonable empirical observations and by analytical results on some stability characteristics of the defined pairwise interaction kernels, which have to assure a realistic crystalline configuration of the swarm. An analysis of the effect of unconscious random fluctuations of bee dynamics is also provided. Copyright © 2018 Elsevier Ltd. All rights reserved.
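
    A skeleton of such a discrete model with the three ingredients described: target-seeking, short-range repulsion with mid-range attraction, and alignment within a Euclidean interaction radius. All coefficients are illustrative choices, not the paper's calibrated values.

        import numpy as np

        rng = np.random.default_rng(11)
        n, steps, dt = 80, 400, 0.05
        pos = rng.normal(0.0, 1.0, (n, 2))
        vel = rng.normal(0.0, 0.1, (n, 2))
        target = np.array([20.0, 0.0])
        w_t, w_a, w_r, w_al, r_int = 0.5, 0.3, 2.0, 0.4, 2.5

        for _ in range(steps):
            d = pos[None, :, :] - pos[:, None, :]        # d[i, j] = pos_j - pos_i
            dist = np.linalg.norm(d, axis=2) + np.eye(n) # guard the diagonal
            near = (dist < r_int) & ~np.eye(n, dtype=bool)
            cnt = np.maximum(near.sum(1), 1)[:, None]
            rep = -(d / dist[..., None] ** 2 * near[..., None]).sum(axis=1)
            att = (d * near[..., None]).sum(axis=1) / cnt
            align = (vel[None, :, :] * near[..., None]).sum(axis=1) / cnt - vel
            goal = target - pos
            goal /= np.linalg.norm(goal, axis=1, keepdims=True)
            vel += dt * (w_t * goal + w_a * att + w_r * rep + w_al * align)
            pos += dt * vel

        polar = np.linalg.norm(vel.mean(0)) / np.linalg.norm(vel, axis=1).mean()
        print("swarm centroid:", pos.mean(axis=0).round(2))
        print(f"flight polarization: {polar:.2f}")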

  19. Sprague-Dawley rats are a sustainable and reproducible animal model for induction and study of oral submucous fibrosis

    Directory of Open Access Journals (Sweden)

    Shilpa Maria

    2015-01-01

    Full Text Available Background: Oral submucous fibrosis (OSF) is a chronic debilitating disease predominantly affecting the oral cavity and oropharynx. Characteristic histological traits of OSF include epithelial atrophy, inflammation, and a generalized submucosal fibrosis. Several studies and epidemiological surveys provide substantial evidence that areca nut is the main etiological factor for OSF. Hesitance of patients to undergo biopsy procedures, together with clinicians becoming increasingly reluctant to take biopsies in cases of OSF, has prompted researchers to develop animal models to study the disease process. Materials and Methods: The present study evaluates the efficacy, sustainability, and reproducibility of using Sprague-Dawley (SD) rats as a possible model for the induction and progression of OSF. Buccal mucosa of SD rats was injected with areca nut and pan masala solutions on alternate days over a period of 48 weeks. The control group was treated with saline. The influence of areca nut and pan masala on the oral epithelium and connective tissue was evaluated by light microscopy. Results: Oral submucous fibrosis-like lesions were seen in both the areca nut and pan masala treated groups. The histological changes observed included atrophic epithelium, partial or complete loss of rete ridges, juxta-epithelial hyalinization, inflammation and accumulation of dense bundles of collagen fibers subepithelially. Conclusions: Histopathological changes in SD rats following treatment with areca nut and pan masala solutions bear a close resemblance to those seen in humans with OSF. SD rats appear to be an inexpensive, efficient, sustainable and reproducible model for the induction and development of OSF.

  20. The diverse broad-band light-curves of Swift GRBs reproduced with the cannonball model

    CERN Document Server

    Dado, Shlomo; De Rújula, A

    2009-01-01

    Two radiation mechanisms, inverse Compton scattering (ICS) and synchrotron radiation (SR), suffice within the cannonball (CB) model of long gamma ray bursts (LGRBs) and X-ray flashes (XRFs) to provide a very simple and accurate description of their observed prompt emission and afterglows. Simple as they are, the two mechanisms and the burst environment generate the rich structure of the light curves at all frequencies and times. This is demonstrated for 33 selected Swift LGRBs and XRFs, which are well sampled from early time until late time and well represent the entire diversity of the broad band light curves of Swift LGRBs and XRFs. Their prompt gamma-ray and X-ray emission is dominated by ICS of glory light. During their fast decline phase, ICS is taken over by SR which dominates their broad band afterglow. The pulse shape and spectral evolution of the gamma-ray peaks and the early-time X-ray flares, and even the delayed optical `humps' in XRFs, are correctly predicted. The canonical and non-canonical X-ra...

  1. Quantitative sociodynamics stochastic methods and models of social interaction processes

    CERN Document Server

    Helbing, Dirk

    1995-01-01

    Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioural changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics but they have very often proved their explanatory power in chemistry, biology, economics and the social sciences. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces the most important concepts from nonlinear dynamics (synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches a very fundamental dynamic model is obtained which seems to open new perspectives in the social sciences. It includes many established models as special cases, e.g. the log...

  2. Quantitative Sociodynamics Stochastic Methods and Models of Social Interaction Processes

    CERN Document Server

    Helbing, Dirk

    2010-01-01

    This new edition of Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioral changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics and mathematics, but they have very often proven their explanatory power in chemistry, biology, economics and the social sciences as well. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces important concepts from nonlinear dynamics (e.g. synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches, a fundamental dynamic model is obtained, which opens new perspectives in the social sciences. It includes many established models a...

  3. Hidden Markov Model for quantitative prediction of snowfall

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in Pir-Panjal and Great Himalayan mountain ranges of Indian Himalaya. The model predicts snowfall for two days in advance using daily recorded nine meteorological variables of past 20 winters from 1992–2012. There are six ...
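
    A minimal discrete HMM of the general kind described, with invented two-state transition and three-class emission matrices rather than the paper's six-variable model; the forward recursion yields the predictive distribution of the next day's observation.

        import numpy as np

        A = np.array([[0.7, 0.3],            # transitions (dry, snowy regimes)
                      [0.4, 0.6]])
        B = np.array([[0.8, 0.15, 0.05],     # emissions: none/light/heavy snow
                      [0.2, 0.5, 0.3]])
        pi = np.array([0.6, 0.4])            # initial state distribution

        obs = [0, 1, 1, 2]                   # observed snowfall classes so far

        alpha = pi * B[:, obs[0]]            # forward recursion
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]

        state_prob = alpha / alpha.sum()     # P(state | observations)
        next_obs_prob = (state_prob @ A) @ B
        print("P(next-day snowfall class):", next_obs_prob.round(3))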

  4. Methodologies for Quantitative Systems Pharmacology (QSP) Models: Design and Estimation

    NARCIS (Netherlands)

    Ribba, B.; Grimm, H. P.; Agoram, B.; Davies, M. R.; Gadkar, K.; Niederer, S.; van Riel, N.; Timmis, J.; van der Graaf, P. H.

    2017-01-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early

  5. Hidden Markov Model for quantitative prediction of snowfall and ...

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in Pir-Panjal and Great Himalayan mountain ranges of Indian Himalaya. The model predicts snowfall for two days in advance using daily recorded nine meteorological variables of past 20 winters from 1992–2012. There are six ...

  6. Comparative analysis of 5 lung cancer natural history and screening models that reproduce outcomes of the NLST and PLCO trials.

    Science.gov (United States)

    Meza, Rafael; ten Haaf, Kevin; Kong, Chung Yin; Erdogan, Ayca; Black, William C; Tammemagi, Martin C; Choi, Sung Eun; Jeon, Jihyoun; Han, Summer S; Munshi, Vidit; van Rosmalen, Joost; Pinsky, Paul; McMahon, Pamela M; de Koning, Harry J; Feuer, Eric J; Hazelton, William D; Plevritis, Sylvia K

    2014-06-01

    The National Lung Screening Trial (NLST) demonstrated that low-dose computed tomography screening is an effective way of reducing lung cancer (LC) mortality. However, optimal screening strategies have not been determined to date and it is uncertain whether lighter smokers than those examined in the NLST may also benefit from screening. To address these questions, it is necessary to first develop LC natural history models that can reproduce NLST outcomes and simulate screening programs at the population level. Five independent LC screening models were developed using common inputs and calibration targets derived from the NLST and the Prostate, Lung, Colorectal and Ovarian Cancer Screening Trial (PLCO). Imputation of missing information regarding smoking, histology, and stage of disease for a small percentage of individuals and diagnosed LCs in both trials was performed. Models were calibrated to LC incidence, mortality, or both outcomes simultaneously. Initially, all models were calibrated to the NLST and validated against PLCO. Models were found to validate well against individuals in PLCO who would have been eligible for the NLST. However, all models required further calibration to PLCO to adequately capture LC outcomes in PLCO never-smokers and light smokers. Final versions of all models produced incidence and mortality outcomes in the presence and absence of screening that were consistent with both trials. The authors developed 5 distinct LC screening simulation models based on the evidence in the NLST and PLCO. The results of their analyses demonstrated that the NLST and PLCO have produced consistent results. The resulting models can be important tools to generate additional evidence to determine the effectiveness of lung cancer screening strategies using low-dose computed tomography. © 2014 American Cancer Society.

  7. Generalized PSF modeling for optimized quantitation in PET imaging

    Science.gov (United States)

    Ashrafinia, Saeed; Mohy-ud-Din, Hassan; Karakatsanis, Nicolas A.; Jha, Abhinav K.; Casey, Michael E.; Kadrmas, Dan J.; Rahmim, Arman

    2017-06-01

    Point-spread function (PSF) modeling offers the ability to account for resolution degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying modelled PSF kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF
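
    A one-dimensional caricature of PSF modelling in reconstruction: a tumour profile blurred by a "true" scanner PSF is deblurred Richardson-Lucy-style with modelled kernels that under-match, match, or over-estimate the true width, and contrast recovery is compared. Everything here is synthetic, and Richardson-Lucy stands in for the study's OS-EM on 3D data.

        import numpy as np

        def gauss(fwhm, n=21):
            s = fwhm / 2.355
            x = np.arange(n) - n // 2
            k = np.exp(-x ** 2 / (2 * s ** 2))
            return k / k.sum()

        true = np.zeros(200)
        true[90:110] = 4.0          # tumour contrast over background
        true += 1.0                 # background level
        meas = np.convolve(true, gauss(6.0), mode="same")   # true PSF: 6 px FWHM

        def rl(meas, kernel, iters=30):
            """Richardson-Lucy deconvolution with the modelled kernel."""
            est = np.full_like(meas, meas.mean())
            for _ in range(iters):
                blur = np.convolve(est, kernel, mode="same")
                ratio = meas / np.maximum(blur, 1e-12)
                est *= np.convolve(ratio, kernel[::-1], mode="same")
            return est

        for fwhm in (4.0, 6.0, 8.0):   # under-matched, matched, over-estimated
            est = rl(meas, gauss(fwhm))
            crc = (est[90:110].mean() - 1.0) / 4.0   # contrast recovery coeff.
            print(f"modelled FWHM {fwhm}: CRC = {crc:.2f}")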

  8. Oblique lateral cephalometric radiographs of the mandible in implantology: usefulness and reproducibility of the technique in quantitative densitometric measurements of the mandible in vivo.

    Science.gov (United States)

    Verhoeven, J W; Ruijter, J; Cune, M S; Terlou, M

    2000-10-01

    In the literature, intraoral periapical radiographs are commonly used for densitometric measurements of the mandible with implants. These films give detailed information on the implant and the surrounding bone. However, in extreme mandibular atrophy it can be difficult to obtain intraoral radiographs of adequate diagnostic quality. Extraoral Oblique Lateral Cephalometric Radiographs (OLCRs) can then be the alternative: reproducible images of large parts of the mandible can be obtained. In vitro, the results of densitometry using periapical films and OLCRs were shown to be similar. The present study aims to determine the measurement error of densitometry with OLCRs in vivo. In 16 patients (group I) with atrophic mandibles and implants, duplicate OLCRs of one side of the jaw were obtained. The error of measurement for the densitometric measurements of the mandibular bone was 5.5%. The use of a specially developed correction program to compensate for undesired variations in the projection of the soft tissues of the face (tongue, lips, cheek and neck) on the radiographs resulted in a 40% reduction of that measurement error, to 3.5%. This remaining error is mainly brought about by imperfect repositioning of the patient when the duplicate OLCRs are obtained. The error caused by the image acquisition, processing and measurement is less than 1%. Deliberate variation of up to 7.5 degrees in the horizontal angle at which the OLCRs are made results in a large error of measurement of 13.5% (group II: 17 patients). The additional soft tissue correction program is unsuitable for reducing this variation. It is concluded from this study that the described radiographic and image analysis technique is a promising tool for prospective densitometric studies of the mandible with or without implants. Especially in mandibles with bone grafts and implants, substantial changes in the graft can occur. The described technique may be particularly valuable in analyzing these changes.
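
    Duplicate-measurement errors of the kind quoted above are commonly obtained with Dahlberg's formula, sqrt(sum d_i^2 / 2n), expressed here as a percentage of the mean; the paired values below are invented, not the study's measurements.

        import numpy as np

        m1 = np.array([1.82, 1.65, 2.10, 1.95, 1.74])   # densitometric value, scan 1
        m2 = np.array([1.90, 1.60, 2.02, 2.04, 1.70])   # same site, duplicate scan

        d = m1 - m2
        dahlberg = np.sqrt((d ** 2).sum() / (2 * d.size))
        mean_val = np.concatenate([m1, m2]).mean()
        print(f"measurement error: {100 * dahlberg / mean_val:.1f}% of the mean")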

  9. Can the CMIP5 models reproduce interannual to interdecadal southern African summer rainfall variability and their teleconnections?

    Science.gov (United States)

    Dieppois, Bastien; Pohl, Benjamin; Crétat, Julien; Keenlyside, Noel; New, Mark

    2017-04-01

    This study examines for the first time the ability of 28 global climate models from the Coupled Model Intercomparison Project 5 (CMIP5) to reproduce southern African summer rainfall variability and its teleconnections with large-scale modes of climate variability across the dominant timescales. In observations, summer southern African rainfall exhibits three significant timescales of variability over the twentieth century: interdecadal (15-28 years), quasi-decadal (8-13 years), and interannual (2-8 years). Most CMIP5 simulations underestimate southern African summer rainfall variability at these three timescales, and this bias is proportionally stronger from high to low frequencies. The inter-model spread is as important as the spread between the ensemble members of a given model, which suggests a strong influence of internal climate variability, and/or large model uncertainties. The underestimated amplitude of rainfall variability at each timescale is linked to unrealistic spatial distributions of these fluctuations over the subcontinent in most CMIP5 models. This is, at least partially, due to a poor representation of the tropical/subtropical teleconnections, which are known to favour wet conditions over southern Africa in the observations. Most CMIP5 realisations (85%) fail at simulating sea-surface temperature (SST) anomalies related to a negative Pacific Decadal Oscillation during wetter conditions at the interdecadal timescale. At the quasi-decadal timescale, only one-third of simulations display a negative Interdecadal Pacific Oscillation during wetter conditions, but these SST anomalies are anomalously shifted westward and poleward when compared to observed anomalies. Similar biases in simulating La Niña SST anomalies are identified in more than 50% of CMIP5 simulations at the interannual timescale. These biases in Pacific SST anomalies result in important shifts in the Walker circulation. This impacts southern African rainfall variability

  10. Consistency and reproducibility of next-generation sequencing and other multigene mutational assays: A worldwide ring trial study on quantitative cytological molecular reference specimens.

    Science.gov (United States)

    Malapelle, Umberto; Mayo-de-Las-Casas, Clara; Molina-Vila, Miguel A; Rosell, Rafael; Savic, Spasenija; Bihl, Michel; Bubendorf, Lukas; Salto-Tellez, Manuel; de Biase, Dario; Tallini, Giovanni; Hwang, David H; Sholl, Lynette M; Luthra, Rajyalakshmi; Weynand, Birgit; Vander Borght, Sara; Missiaglia, Edoardo; Bongiovanni, Massimo; Stieber, Daniel; Vielh, Philippe; Schmitt, Fernando; Rappa, Alessandra; Barberis, Massimo; Pepe, Francesco; Pisapia, Pasquale; Serra, Nicola; Vigliar, Elena; Bellevicine, Claudio; Fassan, Matteo; Rugge, Massimo; de Andrea, Carlos E; Lozano, Maria D; Basolo, Fulvio; Fontanini, Gabriella; Nikiforov, Yuri E; Kamel-Reid, Suzanne; da Cunha Santos, Gilda; Nikiforova, Marina N; Roy-Chowdhuri, Sinchita; Troncone, Giancarlo

    2017-08-01

    Molecular testing of cytological lung cancer specimens includes, beyond epidermal growth factor receptor (EGFR), emerging predictive/prognostic genomic biomarkers such as Kirsten rat sarcoma viral oncogene homolog (KRAS), neuroblastoma RAS viral [v-ras] oncogene homolog (NRAS), B-Raf proto-oncogene, serine/threonine kinase (BRAF), and phosphatidylinositol-4,5-bisphosphate 3-kinase catalytic subunit α (PIK3CA). Next-generation sequencing (NGS) and other multigene mutational assays are suitable for cytological specimens, including smears. However, the current literature reflects single-institution studies rather than multicenter experiences. Quantitative cytological molecular reference slides were produced with cell lines designed to harbor concurrent mutations in the EGFR, KRAS, NRAS, BRAF, and PIK3CA genes at various allelic ratios, including low allele frequencies (AFs; 1%). This interlaboratory ring trial study included 14 institutions across the world that performed multigene mutational assays, from tissue extraction to data analysis, on these reference slides, with each laboratory using its own mutation analysis platform and methodology. All laboratories using NGS (n = 11) successfully detected the study's set of mutations with minimal variations in the means and standard errors of variant fractions at dilution points of 10% (P = .171) and 5% (P = .063) despite the use of different sequencing platforms (Illumina, Ion Torrent/Proton, and Roche). However, when mutations at a low AF of 1% were analyzed, the concordance of the NGS results was low, and this reflected the use of different thresholds for variant calling among the institutions. In contrast, laboratories using matrix-assisted laser desorption/ionization-time of flight mass spectrometry (n = 2) showed lower concordance in terms of mutation detection and mutant AF quantification. Quantitative molecular reference slides are a useful tool for monitoring the performance of different multigene mutational
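
    The low concordance at a 1% allele fraction is largely a sampling and thresholding effect, which a small simulation can illustrate; the binomial read-count model below is a simplification and is not any laboratory's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)
true_af, depth, n_runs = 0.01, 500, 10_000

# Observed variant allele fractions across simulated sequencing runs
observed_af = rng.binomial(depth, true_af, size=n_runs) / depth

# Different variant-calling thresholds give very different detection rates at 1% AF
for threshold in (0.005, 0.01, 0.02):
    called = (observed_af >= threshold).mean()
    print(f"calling threshold {threshold:.1%}: variant detected in {called:.1%} of runs")
```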

  11. Can CFMIP2 models reproduce the leading modes of cloud vertical structure in the CALIPSO-GOCCP observations?

    Science.gov (United States)

    Wang, Fang; Yang, Song

    2018-02-01

    Using principal component (PC) analysis, three leading modes of cloud vertical structure (CVS) are revealed by the GCM-Oriented CALIPSO Cloud Product (GOCCP), i.e. the tropical high, subtropical anticyclonic and extratropical cyclonic cloud modes (THCM, SACM and ECCM, respectively). THCM mainly reflects the contrast between tropical high clouds and clouds in middle/high latitudes. SACM is closely associated with middle-high clouds in tropical convective cores, few-cloud regimes under subtropical anticyclones and stratocumulus over subtropical eastern oceans. ECCM mainly corresponds to clouds along extratropical cyclonic regions. Models from phase 2 of the Cloud Feedback Model Intercomparison Project (CFMIP2) reproduce the THCM well, but SACM and ECCM are generally poorly simulated compared to GOCCP. Standardized PCs corresponding to the CVS modes are generally captured, whereas original PCs (OPCs) are consistently underestimated (overestimated) for THCM (SACM and ECCM) by the CFMIP2 models. The effects of the CVS modes on relative shortwave/longwave cloud radiative forcing (RSCRF/RLCRF), with RSCRF calculated at the surface and RLCRF at the top of the atmosphere, are studied by means of a principal component regression method. Results show that CFMIP2 models tend to overestimate (underestimate or simulate with the opposite sign) the RSCRF/RLCRF radiative effects (REs) of ECCM (THCM and SACM) per unit global mean OPC compared to observations. These RE biases may be attributed to two factors: one is the underestimation (overestimation) of low/middle clouds (high clouds) in simulated global mean cloud profiles, equivalent to stronger (weaker) REs per unit low/middle (high) cloud; the other is eigenvector biases in the CVS modes (especially SACM and ECCM). It is suggested that much more attention should be paid to the improvement of CVS, especially cloud parameterization associated with particular physical processes (e.g. downwelling regimes with the Hadley circulation, extratropical storm tracks and others), which
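
    The leading CVS modes come from a standard PC (EOF) decomposition of cloud-fraction profiles; a minimal sketch of that decomposition, on random stand-in data rather than GOCCP, might look like this.

```python
import numpy as np

# Hypothetical cloud-fraction profiles: (n_gridpoints, n_levels)
rng = np.random.default_rng(0)
profiles = rng.random((5000, 40))

anom = profiles - profiles.mean(axis=0)          # remove the mean profile
cov = np.cov(anom, rowvar=False)                 # level-by-level covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]                # sort modes by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

pcs = anom @ eigvecs[:, :3]                      # principal components of 3 leading modes
explained = eigvals[:3] / eigvals.sum()
print("variance explained by three leading CVS modes:", np.round(explained, 3))
```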

  12. Reproducing Electric Field Observations during Magnetic Storms by means of Rigorous 3-D Modelling and Distortion Matrix Co-estimation

    Science.gov (United States)

    Püthe, Christoph; Manoj, Chandrasekharan; Kuvshinov, Alexey

    2015-04-01

    Electric fields induced in the conducting Earth during magnetic storms drive currents in power transmission grids, telecommunication lines and buried pipelines. These geomagnetically induced currents (GIC) can cause severe service disruptions. The prediction of GIC is thus of great importance for the public and for industry. A key step in predicting the hazard to technological systems during magnetic storms is the calculation of the geoelectric field. To address this issue for mid-latitude regions, we developed a method that involves 3-D modelling of induction processes in a heterogeneous Earth and the construction of a model of the magnetospheric source. The latter is described by low-degree spherical harmonics; its temporal evolution is derived from observatory magnetic data. Time series of the electric field can be computed for every location on Earth's surface. The actual electric field, however, is known to be perturbed by galvanic effects, arising from very local near-surface heterogeneities or topography, which cannot be included in the conductivity model. Galvanic effects are commonly accounted for with a real-valued, time-independent distortion matrix, which linearly relates measured and computed electric fields. Using data from various magnetic storms that occurred between 2000 and 2003, we estimated distortion matrices for observatory sites onshore and on the ocean bottom. Strong correlations between modelled and measured fields validate our method. The distortion matrix estimates prove to be reliable, as they are accurately reproduced for different magnetic storms. We further show that 3-D modelling is crucial for a correct separation of galvanic and inductive effects and a precise prediction of electric field time series during magnetic storms. Since the required computational resources are negligible, our approach is suitable for a real-time prediction of GIC. For this purpose, a reliable forecast of the source field, e.g. based on data from satellites
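
    The distortion correction amounts to estimating a real-valued 2x2 matrix D such that E_meas ≈ D E_comp over a storm time series; a least-squares sketch on synthetic data (D_true and the noise level are invented for illustration) is below.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000                                   # time samples during a magnetic storm

# Computed (modelled) horizontal electric field, shape (n, 2): Ex, Ey
E_comp = rng.normal(size=(n, 2))

# Synthetic "measured" field: galvanic distortion plus noise
D_true = np.array([[1.3, -0.4],
                   [0.2,  0.8]])
E_meas = E_comp @ D_true.T + 0.05 * rng.normal(size=(n, 2))

# Least-squares estimate of the real-valued, time-independent distortion matrix:
# solve E_comp @ D.T ≈ E_meas for D
D_est_T, *_ = np.linalg.lstsq(E_comp, E_meas, rcond=None)
print("estimated distortion matrix:\n", np.round(D_est_T.T, 3))
```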

  13. Quantitative modelling in design and operation of food supply systems

    NARCIS (Netherlands)

    Beek, van P.

    2004-01-01

    During the last two decades, food supply systems have attracted interest not only from food technologists but also from the field of Operations Research and Management Science. Operations Research (OR) is concerned with quantitative modelling and can be used to get insight into the optimal configuration and

  14. Modeling Logistic Performance in Quantitative Microbial Risk Assessment

    NARCIS (Netherlands)

    Rijgersberg, H.; Tromp, S.O.; Jacxsens, L.; Uyttendaele, M.

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage

  15. CSML2SBML: a novel tool for converting quantitative biological pathway models from CSML into SBML.

    Science.gov (United States)

    Li, Chen; Nagasaki, Masao; Ikeda, Emi; Sekiya, Yayoi; Miyano, Satoru

    2014-07-01

    CSML and SBML are XML-based model definition standards developed with the aim of creating exchange formats for modeling, visualizing and simulating biological pathways. In this article we report the release of a format convertor for quantitative pathway models, namely CSML2SBML. It translates models encoded in CSML into SBML without loss of structural and kinetic information. Simulation and parameter estimation of the resulting SBML model can be carried out with the SBML-compliant tool CellDesigner for further analysis. The convertor is based on the standards CSML version 3.0 and SBML Level 2 Version 4. In our experiments, 11 out of 15 pathway models in the CSML model repository and 228 models in the Macrophage Pathway Knowledgebase (MACPAK) were successfully converted to SBML models. The consistency of the resulting models was validated by the libSBML Consistency Check of CellDesigner. Furthermore, a converted SBML model assigned the kinetic parameters translated from its CSML original reproduces with CellDesigner the same dynamics as the CSML model running on Cell Illustrator. CSML2SBML, along with instructions and examples for use, is available at http://csml2sbml.csml.org. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
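
    The abstract mentions validation with the libSBML consistency check; a minimal stand-alone sketch of the same kind of check, using the python-libsbml bindings rather than CellDesigner's built-in dialog, could read as follows (the file name is hypothetical).

```python
import libsbml  # pip install python-libsbml

doc = libsbml.readSBML("converted_model.xml")   # hypothetical CSML2SBML output file

# Report XML/read errors first, then run the full consistency check
if doc.getNumErrors() > 0:
    doc.printErrors()
else:
    n_issues = doc.checkConsistency()
    for i in range(n_issues):
        err = doc.getError(i)
        print(err.getSeverityAsString(), err.getMessage())
    print("consistent" if n_issues == 0 else f"{n_issues} issue(s) found")
```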

  16. Validity and reproducibility of a revised semi-quantitative food frequency questionnaire (SQFFQ) for women of age-group 12-44 years in Chengdu.

    Science.gov (United States)

    Tang, Ying; Liu, Ying; Xu, Liangzhi; Jia, Yujian; Shan, Dan; Li, Wenjuan; Pan, Xin; Kang, Deying; Huang, Chengyu; Li, Xiaosong; Zhang, Jing; Hu, Ying; Konglin, Lingli; Zhuang, Jing

    2015-03-01

    To find a credible nutritional screening tool for evaluating the relationship between nutritional status and disease in Chengdu female residents, the reliability and validity of a revised semi-quantitative food frequency questionnaire (SQFFQ) were tested. Validity was assessed by comparing the SQFFQ with the 'standard' method of 3 days' dietary recall, and reliability was assessed by comparing the first SQFFQ with a second SQFFQ administered at a 4-week interval. Correlation analysis showed that, for reliability, the average correlation coefficient (CC) across 22 nutrients was 0.66, falling to 0.60 after adjustment for energy; the average intra-class correlation coefficient (ICC) was 0.65. For validity, the average CC was 0.35 and remained stable after adjusting for energy. Seventeen nutrients measured by the SQFFQ correlated with the results of the 3 days' dietary recall. The results showed that the revised SQFFQ can be used for investigating the role of nutrients in the development of disease in Chengdu female residents.

  17. QSAR DataBank repository: open and linked qualitative and quantitative structure-activity relationship models.

    Science.gov (United States)

    Ruusmann, V; Sild, S; Maran, U

    2015-01-01

    Structure-activity relationship models have been used to gain insight into chemical and physical processes in biomedicine, toxicology, biotechnology, etc. for almost a century. They have been recognized as valuable tools in decision support workflows for qualitative and quantitative predictions. The main obstacle preventing broader adoption of quantitative structure-activity relationships [(Q)SARs] is that published models are still relatively difficult to discover, retrieve and redeploy in a modern computer-oriented environment. This publication describes a digital repository that makes in silico (Q)SAR-type descriptive and predictive models archivable, citable and usable in a novel way for most common research and applied science purposes. The QSAR DataBank (QsarDB) repository aims to make the processes and outcomes of in silico modelling work transparent, reproducible and accessible. Briefly, the models are represented in the QsarDB data format and stored in a content-aware repository (a.k.a. smart repository). Content awareness has two dimensions. First, models are organized into collections and then into collection hierarchies based on their metadata. Second, the repository is not only an environment for browsing and downloading models (the QDB archive) but also offers integrated services, such as model analysis and visualization and prediction making. The QsarDB repository unlocks the potential of descriptive and predictive in silico (Q)SAR-type models by allowing new and different types of collaboration between model developers and model users. The key enabling factor is the representation of (Q)SAR models in the QsarDB data format, which makes it easy to preserve and share all relevant data, information and knowledge. Model developers can become more productive by effectively reusing prior art. Model users can make more confident decisions by relying on supporting information that is larger and more diverse than before. Furthermore, the smart repository

  18. Reproducibility of regional and global longitudinal strains derived from two-dimensional speckle-tracking and doppler tissue imaging between expert and novice readers during quantitative dobutamine stress echocardiography.

    Science.gov (United States)

    Yamada, Akira; Luis, Sushil A; Sathianathan, Daniel; Khandheria, Bijoy K; Cafaro, James; Hamilton-Craig, Christian R; Platts, David G; Haseler, Luke; Burstow, Darryl; Chan, Jonathan

    2014-08-01

    Longitudinal strain (LS) is a quantitative parameter that adds incremental value to wall motion analysis. The aim of this study was to compare the reproducibility of LS derived from Doppler tissue imaging and speckle-tracking between an expert and a novice strain reader during dobutamine stress echocardiography (DSE). Forty-one patients (mean age, 65 ± 15 years; mean ejection fraction, 58 ± 11%) underwent DSE per clinical protocol. Global LS derived from speckle-tracking and regional LS derived from both speckle-tracking and Doppler tissue imaging were measured twice by an expert strain reader and also measured twice by a novice strain reader. Intraobserver and interobserver analyses were performed using intraclass correlation coefficients (ICC), Bland-Altman analysis, and absolute difference values (mean ± SD). Global LS measured by the expert strain reader demonstrated high intraobserver measurement reproducibility (rest: ICC = 0.95, absolute difference = 5.5 ± 4.9%; low dose: ICC = 0.96, absolute difference = 5.7 ± 3.7%; peak dose: ICC = 0.87, absolute difference = 11.4 ± 8.4%). Global LS measured by the novice strain reader also demonstrated high intraobserver reproducibility (rest: ICC = 0.97, absolute difference = 4.1 ± 3.4%; low dose: ICC = 0.94, absolute difference = 5.4 ± 5.9%; peak dose: ICC = 0.94, absolute difference = 6.1 ± 4.8%). Global LS also showed high interobserver agreement between the expert and novice readers at all stages of DSE (rest: ICC = 0.90, absolute difference = 8.5 ± 7.5%; low dose: ICC = 0.90, absolute difference = 8.9 ± 7.1%; peak dose: ICC = 0.87, absolute difference = 10.8 ± 8.4%). Of all parameters studied, LS derived from Doppler tissue imaging had relatively low interobserver and intraobserver agreement. Global LS is highly reproducible during all stages of DSE. This variable is a potentially reliable and reproducible measure of myocardial deformation. Copyright © 2014 American Society of Echocardiography
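
    The reproducibility statistics used here (ICC and Bland-Altman limits of agreement) are easy to reproduce on paired readings; the sketch below applies a simple one-way ICC variant to simulated strain values, which may differ from the exact ICC form chosen in the study.

```python
import numpy as np

def icc_one_way(x, y):
    """One-way random-effects ICC for paired readings (a simple variant;
    the study may have used a different ICC form)."""
    pairs = np.column_stack([x, y])
    n = pairs.shape[0]
    ms_between = 2 * pairs.mean(axis=1).var(ddof=1)                       # k = 2 readings
    ms_within = np.sum((pairs - pairs.mean(axis=1, keepdims=True)) ** 2) / n
    return (ms_between - ms_within) / (ms_between + ms_within)

rng = np.random.default_rng(7)
true_gls = rng.normal(-18, 3, size=41)              # hypothetical global LS, 41 patients
reader1 = true_gls + rng.normal(0, 1.0, size=41)
reader2 = true_gls + rng.normal(0, 1.0, size=41)

diff = reader1 - reader2
print(f"ICC = {icc_one_way(reader1, reader2):.2f}")
print(f"Bland-Altman bias = {diff.mean():.2f}, 95% limits of agreement = "
      f"{diff.mean() - 1.96 * diff.std(ddof=1):.2f} to "
      f"{diff.mean() + 1.96 * diff.std(ddof=1):.2f}")
```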

  19. A rat model of post-traumatic stress disorder reproduces the hippocampal deficits seen in the human syndrome

    Directory of Open Access Journals (Sweden)

    Sonal eGoswami

    2012-06-01

    Despite recent progress, the causes and pathophysiology of post-traumatic stress disorder (PTSD) remain poorly understood, partly because of ethical limitations inherent to human studies. One approach to circumvent this obstacle is to study PTSD in a valid animal model of the human syndrome. In one such model, extreme and long-lasting behavioral manifestations of anxiety develop in a subset of Lewis rats after exposure to an intense predatory threat that mimics the type of life-and-death situation known to precipitate PTSD in humans. This study aimed to assess whether the hippocampus-associated deficits observed in the human syndrome are reproduced in this rodent model. Prior to predatory threat, different groups of rats were each tested on one of three object recognition memory tasks that varied in the types of contextual cues the rats could use to identify novel items (i.e., cues that require the hippocampus or not). After task completion, the rats were subjected to predatory threat and, one week later, tested on the elevated plus maze. Based on their exploratory behavior in the plus maze, rats were then classified as resilient or PTSD-like and their performance on the pre-threat object recognition tasks compared. The performance of PTSD-like rats was inferior to that of resilient rats, but only when subjects relied on an allocentric frame of reference to identify novel items, a process thought to be critically dependent on the hippocampus. Therefore, these results suggest that even prior to trauma, PTSD-like rats show a deficit in hippocampal-dependent functions, as reported in twin studies of human PTSD.

  20. Quantitative and logic modelling of gene and molecular networks

    Science.gov (United States)

    Le Novère, Nicolas

    2015-01-01

    Behaviours of complex biomolecular systems are often irreducible to the elementary properties of their individual components. Explanatory and predictive mathematical models are therefore useful for fully understanding and precisely engineering cellular functions. The development and analyses of these models require their adaptation to the problems that need to be solved and the type and amount of available genetic or molecular data. Quantitative and logic modelling are among the main methods currently used to model molecular and gene networks. Each approach comes with inherent advantages and weaknesses. Recent developments show that hybrid approaches will become essential for further progress in synthetic biology and in the development of virtual organisms. PMID:25645874

  1. Reproducing the organic matter model of anthropogenic dark earth of Amazonia and testing the ecotoxicity of functionalized charcoal compounds

    Directory of Open Access Journals (Sweden)

    Carolina Rodrigues Linhares

    2012-05-01

    The objective of this work was to obtain organic compounds similar to the ones found in the organic matter of anthropogenic dark earth of Amazonia (ADE) using a chemical functionalization procedure on activated charcoal, as well as to determine their ecotoxicity. Based on the study of the organic matter from ADE, an organic model was proposed and an attempt to reproduce it was described. Activated charcoal was oxidized with sodium hypochlorite at different concentrations. Nuclear magnetic resonance was performed to verify whether the spectra of the obtained products were similar to those of humic acids from ADE. The similarity between spectra indicated that the obtained products were polycondensed aromatic structures with carboxyl groups: a soil amendment that can contribute to soil fertility and to its sustainable use. An ecotoxicological test with Daphnia similis was performed on the more soluble fraction (fulvic acids) of the produced soil amendment. Aryl chloride was formed during the synthesis of the organic compounds from activated charcoal functionalization and partially removed through a purification process. However, it is probable that some aryl chloride remained in the final product, since the ecotoxicological test indicated that the chemically functionalized soil amendment is moderately toxic.

  2. Validity, reliability, and reproducibility of linear measurements on digital models obtained from intraoral and cone-beam computed tomography scans of alginate impressions

    NARCIS (Netherlands)

    Wiranto, Matthew G.; Engelbrecht, W. Petrie; Nolthenius, Heleen E. Tutein; van der Meer, W. Joerd; Ren, Yijin

    INTRODUCTION: Digital 3-dimensional models are widely used for orthodontic diagnosis. The aim of this study was to assess the validity, reliability, and reproducibility of digital models obtained from the Lava Chairside Oral scanner (3M ESPE, Seefeld, Germany) and cone-beam computed tomography scans

  3. Reproducibility and accuracy of linear measurements on dental models derived from cone-beam computed tomography compared with digital dental casts

    NARCIS (Netherlands)

    Waard, O. de; Rangel, F.A.; Fudalej, P.S.; Bronkhorst, E.M.; Kuijpers-Jagtman, A.M.; Breuning, K.H.

    2014-01-01

    INTRODUCTION: The aim of this study was to determine the reproducibility and accuracy of linear measurements on 2 types of dental models derived from cone-beam computed tomography (CBCT) scans: CBCT images, and Anatomodels (InVivoDental, San Jose, Calif); these were compared with digital models

  4. Coupled RipCAS-DFLOW (CoRD) Software and Data Management System for Reproducible Floodplain Vegetation Succession Modeling

    Science.gov (United States)

    Turner, M. A.; Miller, S.; Gregory, A.; Cadol, D. D.; Stone, M. C.; Sheneman, L.

    2016-12-01

    We present the Coupled RipCAS-DFLOW (CoRD) modeling system, created to encapsulate the workflow for analyzing the effects of stream flooding on vegetation succession. CoRD provides an intuitive command-line and web interface to run DFLOW and RipCAS in succession over many years automatically, which is a challenge because, for our application, DFLOW must be run on a supercomputing cluster via the PBS job scheduler. RipCAS is a vegetation succession model, and DFLOW is a 2D open channel flow model. Data adaptors have been developed to seamlessly convert DFLOW outputs into RipCAS inputs, and vice versa. CoRD provides automated statistical analysis and visualization, plus automatic syncing of input and output files and model-run metadata to the hydrological data management system HydroShare using its excellent Python REST client. This combination of technologies and data management techniques allows the results to be shared with collaborators and eventually published. Perhaps most importantly, it allows results to be easily reproduced via either the command-line or web user interface. This system is the result of collaboration between software developers and hydrologists participating in the Western Consortium for Watershed Analysis, Visualization, and Exploration (WC-WAVE). Because of the computing-intensive nature of this particular workflow, including automating job submission/monitoring and data adaptors, software engineering expertise is required. However, the hydrologists provide the software developers with a purpose and ensure a useful, intuitive tool is developed. Our hydrologists contribute software, too: RipCAS was developed from scratch by hydrologists on the team as a specialized, open-source version of the Computer Aided Simulation Model for Instream Flow and Riparia (CASiMiR) vegetation model; our hydrologists running DFLOW provided numerous examples and help with the supercomputing system. This project is written in Python, a popular language in the

  5. A sensitive and reproducible in vivo imaging mouse model for evaluation of drugs against late-stage human African trypanosomiasis.

    Science.gov (United States)

    Burrell-Saward, Hollie; Rodgers, Jean; Bradley, Barbara; Croft, Simon L; Ward, Theresa H

    2015-02-01

    To optimize the Trypanosoma brucei brucei GVR35 VSL-2 bioluminescent strain as an innovative drug evaluation model for late-stage human African trypanosomiasis. An IVIS® Lumina II imaging system was used to detect bioluminescent T. b. brucei GVR35 parasites in mice to evaluate parasite localization and disease progression. Drug treatment was assessed using qualitative bioluminescence imaging and real-time quantitative PCR (qPCR). We have shown that drug dose-response can be evaluated using bioluminescence imaging and confirmed quantification of tissue parasite load using qPCR. The model was also able to detect drug relapse earlier than traditional blood film detection, even in the absence of any detectable peripheral parasites. We have developed and optimized a new, efficient method to evaluate novel anti-trypanosomal drugs in vivo and reduce the current 180-day drug relapse experiment to a 90-day model. The non-invasive in vivo imaging model reduces the time required to assess preclinical efficacy of new anti-trypanosomal drugs. © The Author 2014. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  6. Currency risk and prices of oil and petroleum products: a simulation with a quantitative model

    International Nuclear Information System (INIS)

    Aniasi, L.; Ottavi, D.; Rubino, E.; Saracino, A.

    1992-01-01

    This paper analyzes the relationship between the exchange rates of the US Dollar against the four major European currencies and the prices of oil and its main products in those countries. The sensitivity of prices to exchange-rate movements is of fundamental importance for the refining and distribution industries of importing countries. The result of the analysis shows that neither under free-market conditions, such as those in Great Britain, France and Germany, nor in regulated markets, i.e. the Italian one, do variations of petroleum product prices fully absorb the variation of the exchange rates. In order to assess the above relationship, we first tested the order of co-integration of the time series of exchange rates of EMS currencies with those of international prices of oil and its derivative products; we then used a transfer-function model to reproduce the quantitative relationships between those variables. Using these results, we reproduced domestic price functions with partial adjustment mechanisms. Finally, we used the above model to run a simulation of the deviation from the steady-state pattern caused by exogenous exchange-rate shocks. 21 refs., 5 figs., 3 tabs
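
    The first step described, testing for co-integration between exchange rates and product prices, can be illustrated with an Engle-Granger test; the sketch below uses statsmodels on synthetic monthly series (the pass-through coefficient and noise levels are invented).

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(11)
n = 240  # monthly observations

# Hypothetical log USD exchange rate: a random walk (non-stationary)
log_fx = np.cumsum(rng.normal(0, 0.02, n))

# Hypothetical log domestic fuel price: partial pass-through of the
# exchange rate plus stationary noise -> cointegrated with log_fx
log_price = 0.6 * log_fx + rng.normal(0, 0.01, n)

t_stat, p_value, _ = coint(log_price, log_fx)
print(f"Engle-Granger cointegration test: t = {t_stat:.2f}, p = {p_value:.3f}")
```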

  7. Quantitative modelling in cognitive ergonomics: predicting signals passed at danger.

    Science.gov (United States)

    Moray, Neville; Groeger, John; Stanton, Neville

    2017-02-01

    This paper shows how to combine field observations, experimental data and mathematical modelling to produce quantitative explanations and predictions of complex events in human-machine interaction. As an example, we consider a major railway accident. In 1999, a commuter train passed a red signal near Ladbroke Grove, UK, into the path of an express. We use the Public Inquiry Report, 'black box' data, and accident and engineering reports to construct a case history of the accident. We show how to combine field data with mathematical modelling to estimate the probability that the driver observed and identified the state of the signals, and checked their status. Our methodology can explain the SPAD ('Signal Passed At Danger'), generate recommendations about signal design and placement, and provide quantitative guidance for the design of safer railway systems, including speed limits and the location of signals. Practitioner Summary: Detailed ergonomic analysis of railway signals and rail infrastructure reveals problems of signal identification at this location. A record of driver eye movements measures attention, from which a quantitative model for signal placement and permitted speeds can be derived. The paper is an example of how to combine field data, basic research and mathematical modelling to solve ergonomic design problems.

  8. Quantitative comparison of canopy conductance models using a Bayesian approach

    Science.gov (United States)

    Samanta, S.; Clayton, M. K.; Mackay, D. S.; Kruger, E. L.; Ewers, B. E.

    2008-09-01

    A quantitative model comparison methodology based on the deviance information criterion, a Bayesian measure of the trade-off between model complexity and goodness of fit, is developed and demonstrated by comparing semiempirical transpiration models. This methodology accounts for parameter and prediction uncertainties associated with such models and facilitates objective selection of the simplest model, out of available alternatives, which does not significantly compromise the ability to accurately model observations. We use this methodology to compare various Jarvis canopy conductance model configurations, embedded within a larger transpiration model, against canopy transpiration measured by sap flux. The results indicate that descriptions of the dependence of stomatal conductance on vapor pressure deficit, photosynthetic radiation, and temperature, as well as the gradual variation in canopy conductance through the season, are essential in the transpiration model. Use of soil moisture was moderately significant, but only when used with a hyperbolic vapor pressure deficit relationship. Subtle differences in model quality could be clearly associated with small structural changes through the use of this methodology. The results also indicate that increments in model complexity are not always accompanied by improvements in model quality and that such improvements are conditional on model structure. Possible application of this methodology to compare complex semiempirical models of natural systems in general is also discussed.
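
    The deviance information criterion itself is straightforward to compute from MCMC output: DIC = D-bar + pD, where D-bar is the posterior mean deviance and pD = D-bar minus the deviance at the posterior mean. A sketch with placeholder numbers:

```python
import numpy as np

def dic(log_lik_samples, deviance_at_mean):
    """DIC = mean deviance + effective number of parameters pD,
    where pD = mean deviance - deviance at the posterior mean."""
    mean_dev = np.mean(-2.0 * log_lik_samples)
    p_d = mean_dev - deviance_at_mean
    return mean_dev + p_d, p_d

# Hypothetical inputs: log-likelihood at each posterior draw, and the
# deviance evaluated at the posterior mean of the parameters
rng = np.random.default_rng(5)
log_lik_samples = rng.normal(-520.0, 3.0, size=4000)
deviance_at_mean = 1034.0

dic_value, p_d = dic(log_lik_samples, deviance_at_mean)
print(f"DIC = {dic_value:.1f}  (pD = {p_d:.1f}); lower DIC is preferred")
```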

  9. Quantitative versus qualitative modeling: a complementary approach in ecosystem study.

    Science.gov (United States)

    Bondavalli, C; Favilla, S; Bodini, A

    2009-02-01

    Natural disturbance or human perturbation act upon ecosystems by changing some dynamical parameters of one or more species. Foreseeing these modifications is necessary before embarking on an intervention: predictions may help to assess management options and define hypotheses for interventions. Models become valuable tools for studying and making predictions only when they capture the types of interactions and their magnitude. Quantitative models are more precise and specific about a system, but require a large effort in model construction. Because of this, ecological systems very often remain only partially specified, and one possible approach to their description and analysis comes from qualitative modelling. Qualitative models yield predictions as directions of change in species abundance, but in complex systems these predictions are often ambiguous, being the result of opposite actions exerted on the same species by way of multiple pathways of interactions. Again, to avoid such ambiguities one needs to know the intensity of all links in the system. One way to make link magnitude explicit in a form usable in qualitative analysis is described in this paper; it takes advantage of another type of ecosystem representation: ecological flow networks. These flow diagrams contain the structure, the relative position and the connections between the components of a system, and the quantity of matter flowing along every connection. In this paper it is shown how these ecological flow networks can be used to produce a quantitative model similar to the qualitative counterpart. Analyzed through the apparatus of loop analysis, this quantitative model yields unambiguous predictions, solving in an elegant way the basic problem of qualitative analysis. The approach adopted in this work is still preliminary and we must be careful in its application.
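
    A core quantitative step in loop analysis is the press-perturbation prediction: with a quantified community matrix A, the shift of the equilibrium under a sustained input to species j is read from the j-th column of -A^(-1). The sketch below uses an invented 3-species matrix, not the paper's system.

```python
import numpy as np

# Hypothetical quantified community (interaction) matrix A for 3 species:
# a_ij = effect of species j on the growth of species i
A = np.array([[-0.5,  0.0, -0.3],   # basal resource
              [ 0.4, -0.2, -0.6],   # consumer
              [ 0.0,  0.5, -0.1]])  # predator

# Press perturbation: a sustained increase in species j's growth rate shifts
# the equilibrium by the j-th column of -A^{-1} (the loop-analysis result)
sensitivity = -np.linalg.inv(A)

for j, name in enumerate(["resource", "consumer", "predator"]):
    responses = sensitivity[:, j]
    print(f"press on {name:8s} -> responses: {np.round(responses, 2)}")
```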

  10. A Review on Quantitative Models for Sustainable Food Logistics Management

    Directory of Open Access Journals (Sweden)

    M. Soysal

    2012-12-01

    Over the last two decades, food logistics systems have seen the transition from a focus on traditional supply chain management to food supply chain management, and successively, to sustainable food supply chain management. The main aim of this study is to identify the key logistical aims in these three phases and to analyse currently available quantitative models to point out modelling challenges in sustainable food logistics management (SFLM). A literature review of quantitative studies is conducted, and qualitative studies are also consulted to understand the key logistical aims more clearly and to identify relevant system scope issues. Results show that research on SFLM has been progressively developing according to the needs of the food industry. However, the intrinsic characteristics of food products and processes have not yet been handled properly in the identified studies. The majority of the works reviewed have not contemplated sustainability problems, apart from a few recent studies. Therefore, the study concludes that new and advanced quantitative models are needed that take specific SFLM requirements from practice into consideration to support business decisions and capture food supply chain dynamics.

  11. Genomic value prediction for quantitative traits under the epistatic model

    Directory of Open Access Journals (Sweden)

    Xu Shizhong

    2011-01-01

    Background: Most quantitative traits are controlled by multiple quantitative trait loci (QTL). The contribution of each locus may be negligible, but the collective contribution of all loci is usually significant. Genome selection, which uses markers of the entire genome to predict the genomic values of individual plants or animals, can be more efficient than selection on phenotypic values and pedigree information alone for genetic improvement. When a quantitative trait is contributed by epistatic effects, using all markers (main effects) and marker pairs (epistatic effects) to predict the genomic values of plants can achieve the maximum efficiency for genetic improvement. Results: In this study, we created 126 recombinant inbred lines of soybean and genotyped 80 markers across the genome. We applied the genome selection technique to predict the genomic value of somatic embryo number (a quantitative trait) for each line. Cross validation analysis showed that the squared correlation coefficient between the observed and predicted embryo numbers was 0.33 when only main (additive) effects were used for prediction. When the interaction (epistatic) effects were also included in the model, the squared correlation coefficient reached 0.78. Conclusions: This study provides an excellent example of the application of genome selection to plant breeding.
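
    The main-plus-epistatic prediction idea can be sketched with ridge regression on marker and marker-pair features; the study itself used a different (Bayesian) estimation method, and all data below are simulated with the paper's dimensions (126 lines, 80 markers).

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n_lines, n_markers = 126, 80
X = rng.choice([-1.0, 1.0], size=(n_lines, n_markers))     # RIL genotypes

# Pairwise marker products encode epistatic (interaction) effects
pairs = list(combinations(range(n_markers), 2))
X_epi = np.column_stack([X[:, i] * X[:, j] for i, j in pairs])
X_full = np.hstack([X, X_epi])

# Simulate a trait with additive and sparse epistatic contributions
beta_add = rng.normal(0, 0.3, n_markers)
beta_epi = np.zeros(len(pairs))
beta_epi[rng.choice(len(pairs), 30)] = rng.normal(0, 0.5, 30)
y = X @ beta_add + X_epi @ beta_epi + rng.normal(0, 1.0, n_lines)

for name, features in [("additive only", X), ("additive + epistatic", X_full)]:
    y_hat = cross_val_predict(Ridge(alpha=10.0), features, y, cv=5)
    r2 = np.corrcoef(y, y_hat)[0, 1] ** 2
    print(f"{name:22s} squared correlation: {r2:.2f}")
```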

  12. A Transformative Model for Undergraduate Quantitative Biology Education

    Science.gov (United States)

    Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.

    2010-01-01

    The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions. PMID:20810949

  13. A transformative model for undergraduate quantitative biology education.

    Science.gov (United States)

    Usher, David C; Driscoll, Tobin A; Dhurjati, Prasad; Pelesko, John A; Rossi, Louis F; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B

    2010-01-01

    The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions.

  14. QuantUM: Quantitative Safety Analysis of UML Models

    Directory of Open Access Journals (Sweden)

    Florian Leitner-Fischer

    2011-07-01

    When developing a safety-critical system it is essential to obtain an assessment of different design alternatives. In particular, an early safety assessment of the architectural design of a system is desirable. In spite of the plethora of available formal quantitative analysis methods it is still difficult for software and system architects to integrate these techniques into their every day work. This is mainly due to the lack of methods that can be directly applied to architecture level models, for instance given as UML diagrams. Also, it is necessary that the description methods used do not require a profound knowledge of formal methods. Our approach bridges this gap and improves the integration of quantitative safety analysis methods into the development process. All inputs of the analysis are specified at the level of a UML model. This model is then automatically translated into the analysis model, and the results of the analysis are consequently represented on the level of the UML model. Thus the analysis model and the formal methods used during the analysis are hidden from the user. We illustrate the usefulness of our approach using an industrial strength case study.

  15. Quantitative analysis of a wind energy conversion model

    International Nuclear Information System (INIS)

    Zucker, Florian; Gräbner, Anna; Strunz, Andreas; Meyn, Jan-Peter

    2015-01-01

    A rotor of 12 cm diameter is attached to a precision electric motor, used as a generator, to make a model wind turbine. Output power of the generator is measured in a wind tunnel with up to 15 m s-1 air velocity. The maximum power is 3.4 W, the power conversion factor from kinetic to electric energy is cp = 0.15. The v3 power law is confirmed. The model illustrates several technically important features of industrial wind turbines quantitatively. (paper)
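
    The reported power coefficient follows directly from the stated numbers, assuming an air density of about 1.2 kg/m^3 (not given in the abstract):

```python
import numpy as np

diameter = 0.12          # rotor diameter in m
v = 15.0                 # air velocity in m/s
P_el = 3.4               # measured electrical power in W
rho = 1.2                # assumed air density in kg/m^3

A = np.pi * (diameter / 2) ** 2           # swept rotor area
P_wind = 0.5 * rho * A * v ** 3           # kinetic power in the air stream
print(f"P_wind = {P_wind:.1f} W, cp = {P_el / P_wind:.2f}")   # ~0.15
```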

  16. Towards Quantitative Systems Pharmacology Models of Chemotherapy-Induced Neutropenia.

    Science.gov (United States)

    Craig, M

    2017-05-01

    Neutropenia is a serious toxic complication of chemotherapeutic treatment. For years, mathematical models have been developed to better predict hematological outcomes during chemotherapy in both the traditional pharmaceutical sciences and mathematical biology disciplines. An increasing number of quantitative systems pharmacology (QSP) models that combine systems approaches, physiology, and pharmacokinetics/pharmacodynamics have been successfully developed. Here, I detail the shift towards QSP efforts, emphasizing the importance of incorporating systems-level physiological considerations in pharmacometrics. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
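
    A canonical example of the semi-mechanistic models this field builds on is the Friberg transit-compartment model of neutropenia; the sketch below integrates it with illustrative (not study-specific) parameters and a hypothetical mono-exponential drug concentration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Friberg-type transit-compartment model of chemotherapy-induced neutropenia.
# Parameter values are illustrative only.
circ0, mtt, gamma, slope = 5.0, 125.0, 0.17, 0.1   # baseline ANC, transit time (h), feedback, drug effect
ktr = 4.0 / mtt                                    # 3 transit compartments -> 4 rate constants

def conc(t):
    """Hypothetical drug concentration: mono-exponential decay after a bolus at t = 0."""
    return 10.0 * np.exp(-0.1 * t)

def rhs(t, y):
    prol, t1, t2, t3, circ = y
    e_drug = slope * conc(t)                       # linear drug effect on proliferation
    feedback = (circ0 / circ) ** gamma             # rebound feedback from circulating cells
    return [ktr * prol * ((1 - e_drug) * feedback - 1),
            ktr * (prol - t1),
            ktr * (t1 - t2),
            ktr * (t2 - t3),
            ktr * (t3 - circ)]

sol = solve_ivp(rhs, (0, 600), [circ0] * 5, dense_output=True, max_step=1.0)
t = np.linspace(0, 600, 601)
anc = sol.sol(t)[4]
print(f"nadir ANC = {anc.min():.2f} at t = {t[anc.argmin()]:.0f} h")
```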

  17. Quantitative analysis of a wind energy conversion model

    Science.gov (United States)

    Zucker, Florian; Gräbner, Anna; Strunz, Andreas; Meyn, Jan-Peter

    2015-03-01

    A rotor of 12 cm diameter is attached to a precision electric motor, used as a generator, to make a model wind turbine. Output power of the generator is measured in a wind tunnel with up to 15 m s-1 air velocity. The maximum power is 3.4 W, the power conversion factor from kinetic to electric energy is cp = 0.15. The v3 power law is confirmed. The model illustrates several technically important features of industrial wind turbines quantitatively.

  18. Frequency-Domain Response Analysis for Quantitative Systems Pharmacology Models.

    Science.gov (United States)

    Schulthess, Pascal; Post, Teun M; Yates, James; van der Graaf, Piet H

    2017-11-28

    The drug dosing regimen can significantly impact drug effect and, thus, the success of treatments. Nevertheless, trial and error is still the method most commonly used by conventional pharmacometric approaches to optimize dosing regimens. In this tutorial, we utilize four distinct classes of quantitative systems pharmacology models to introduce frequency-domain response analysis, a method widely used in electrical and control engineering that allows the analytical optimization of drug treatment regimens from the dynamics of the model. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
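
    The idea is to treat the dose-to-concentration (or dose-to-effect) relationship as a linear system and inspect its frequency response; for a one-compartment model with volume V and clearance CL, the transfer function is G(s) = 1/(V·s + CL). A sketch with invented V and CL values:

```python
import numpy as np
from scipy import signal

# One-compartment PK model: V*dC/dt = -CL*C + u(t)  =>  G(s) = 1 / (V*s + CL)
V, CL = 40.0, 5.0                       # illustrative volume (L) and clearance (L/h)
system = signal.TransferFunction([1.0], [V, CL])

# Frequency response: how strongly dosing at a given frequency shows up in concentration
w = np.logspace(-3, 1, 200)             # rad/h
w, mag, phase = signal.bode(system, w)

corner = CL / V                         # corner frequency (rad/h)
idx = np.argmin(np.abs(w - 10 * corner))
print(f"corner frequency = {corner:.3f} rad/h")
print(f"response at 10x the corner frequency is {mag[idx] - mag[0]:.1f} dB below the DC gain")
```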

  19. Development of quantitative atomic modeling for tungsten transport study using LHD plasma with tungsten pellet injection

    Science.gov (United States)

    Murakami, I.; Sakaue, H. A.; Suzuki, C.; Kato, D.; Goto, M.; Tamura, N.; Sudo, S.; Morita, S.

    2015-09-01

    Quantitative study of tungsten with reliable atomic modeling is important for the successful achievement of ITER and fusion reactors. We have developed tungsten atomic modeling for understanding tungsten behavior in fusion plasmas. The modeling is applied to the analysis of tungsten spectra observed from plasmas of the Large Helical Device (LHD) with tungsten pellet injection. We found that extreme ultraviolet (EUV) emission of W24+ to W33+ ions at 1.5-3.5 nm is sensitive to electron temperature and useful for examining tungsten behavior in edge plasmas. We can reproduce the measured EUV spectra at 1.5-3.5 nm with spectra calculated from the tungsten atomic model and obtain charge state distributions of tungsten ions in LHD plasmas at different temperatures around 1 keV. Our model is applied to calculate the unresolved transition array (UTA) seen in tungsten spectra at 4.5-7 nm. We analyze in detail the effect of configuration interaction on population kinetics related to the UTA structure and find the importance of two-electron-one-photon transitions between 4p^5 4d^(n+1) and 4p^6 4d^(n-1) 4f. The radiation power rate of tungsten due to line emissions is also estimated with the model and is consistent with other models within a factor of 2.

  20. Digital clocks: simple Boolean models can quantitatively describe circadian systems.

    Science.gov (United States)

    Akman, Ozgur E; Watterson, Steven; Parton, Andrew; Binns, Nigel; Millar, Andrew J; Ghazal, Peter

    2012-09-07

    The gene networks that comprise the circadian clock modulate biological function across a range of scales, from gene expression to performance and adaptive behaviour. The clock functions by generating endogenous rhythms that can be entrained to the external 24-h day-night cycle, enabling organisms to optimally time biochemical processes relative to dawn and dusk. In recent years, computational models based on differential equations have become useful tools for dissecting and quantifying the complex regulatory relationships underlying the clock's oscillatory dynamics. However, optimizing the large parameter sets characteristic of these models places intense demands on both computational and experimental resources, limiting the scope of in silico studies. Here, we develop an approach based on Boolean logic that dramatically reduces the parametrization, making the state and parameter spaces finite and tractable. We introduce efficient methods for fitting Boolean models to molecular data, successfully demonstrating their application to synthetic time courses generated by a number of established clock models, as well as experimental expression levels measured using luciferase imaging. Our results indicate that despite their relative simplicity, logic models can (i) simulate circadian oscillations with the correct, experimentally observed phase relationships among genes and (ii) flexibly entrain to light stimuli, reproducing the complex responses to variations in daylength generated by more detailed differential equation formulations. Our work also demonstrates that logic models have sufficient predictive power to identify optimal regulatory structures from experimental data. By presenting the first Boolean models of circadian circuits together with general techniques for their optimization, we hope to establish a new framework for the systematic modelling of more complex clocks, as well as other circuits with different qualitative dynamics. In particular, we anticipate
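
    A minimal example of the kind of Boolean logic described, not one of the paper's fitted clock models, is a three-gene repression ring updated synchronously; it settles into a sustained period-6 oscillation.

```python
# Three-gene repression ring (A -| B -| C -| A) with synchronous Boolean updates
def step(state):
    a, b, c = state
    return (int(not c), int(not a), int(not b))

state = (1, 0, 0)
trajectory = [state]
for _ in range(12):
    state = step(state)
    trajectory.append(state)

# The trajectory cycles with period 6, a crude analogue of a free-running rhythm
for t, s in enumerate(trajectory):
    print(t, s)
```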

  1. Digital clocks: simple Boolean models can quantitatively describe circadian systems

    Science.gov (United States)

    Akman, Ozgur E.; Watterson, Steven; Parton, Andrew; Binns, Nigel; Millar, Andrew J.; Ghazal, Peter

    2012-01-01

    The gene networks that comprise the circadian clock modulate biological function across a range of scales, from gene expression to performance and adaptive behaviour. The clock functions by generating endogenous rhythms that can be entrained to the external 24-h day–night cycle, enabling organisms to optimally time biochemical processes relative to dawn and dusk. In recent years, computational models based on differential equations have become useful tools for dissecting and quantifying the complex regulatory relationships underlying the clock's oscillatory dynamics. However, optimizing the large parameter sets characteristic of these models places intense demands on both computational and experimental resources, limiting the scope of in silico studies. Here, we develop an approach based on Boolean logic that dramatically reduces the parametrization, making the state and parameter spaces finite and tractable. We introduce efficient methods for fitting Boolean models to molecular data, successfully demonstrating their application to synthetic time courses generated by a number of established clock models, as well as experimental expression levels measured using luciferase imaging. Our results indicate that despite their relative simplicity, logic models can (i) simulate circadian oscillations with the correct, experimentally observed phase relationships among genes and (ii) flexibly entrain to light stimuli, reproducing the complex responses to variations in daylength generated by more detailed differential equation formulations. Our work also demonstrates that logic models have sufficient predictive power to identify optimal regulatory structures from experimental data. By presenting the first Boolean models of circadian circuits together with general techniques for their optimization, we hope to establish a new framework for the systematic modelling of more complex clocks, as well as other circuits with different qualitative dynamics. In particular, we

  2. Archiving Reproducible Research with R and Dataverse

    DEFF Research Database (Denmark)

    Leeper, Thomas

    2014-01-01

    Reproducible research and data archiving are increasingly important issues in research involving statistical analyses of quantitative data. This article introduces the dvn package, which allows R users to publicly archive datasets, analysis files, codebooks, and associated metadata in Dataverse...

  3. A Cytomorphic Chip for Quantitative Modeling of Fundamental Bio-Molecular Circuits.

    Science.gov (United States)

    2015-08-01

    We describe a 0.35 μm BiCMOS silicon chip that quantitatively models fundamental molecular circuits via efficient log-domain cytomorphic transistor equivalents. These circuits include those for biochemical binding with automatic representation of non-modular and loading behavior, e.g., in cascade and fan-out topologies; for representing variable Hill-coefficient operation and cooperative binding; for representing inducer, transcription-factor, and DNA binding; for probabilistic gene transcription with analogic representations of log-linear and saturating operation; for gain, degradation, and dynamics of mRNA and protein variables in transcription and translation; and, for faithfully representing biological noise via tunable stochastic transistor circuits. The use of on-chip DACs and ADCs enables multiple chips to interact via incoming and outgoing molecular digital data packets and thus create scalable biochemical reaction networks. The use of off-chip digital processors and on-chip digital memory enables programmable connectivity and parameter storage. We show that published static and dynamic MATLAB models of synthetic biological circuits including repressilators, feed-forward loops, and feedback oscillators are in excellent quantitative agreement with those from transistor circuits on the chip. Computationally intensive stochastic Gillespie simulations of molecular production are also rapidly reproduced by the chip and can be reliably tuned over the range of signal-to-noise ratios observed in biological cells.
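
    The Gillespie simulations mentioned can be sketched in a few lines for a birth-death gene-expression model (production and first-order degradation of one species); the rate constants below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(9)
k_prod, k_deg = 2.0, 0.1      # production and degradation rate constants
m, t, t_end = 0, 0.0, 500.0   # molecule count, time, simulation horizon

times, counts = [0.0], [0]
while t < t_end:
    rates = np.array([k_prod, k_deg * m])
    total = rates.sum()
    t += rng.exponential(1.0 / total)          # waiting time to the next reaction
    if rng.random() < rates[0] / total:        # choose which reaction fires
        m += 1
    else:
        m -= 1
    times.append(t)
    counts.append(m)

counts = np.array(counts)
print(f"mean = {counts.mean():.1f} (theory {k_prod / k_deg:.0f}), "
      f"Fano factor ≈ {counts.var() / counts.mean():.2f}")
```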

  4. Quantitative insight into models of Hedgehog signal transduction.

    Science.gov (United States)

    Farzan, Shohreh F; Ogden, Stacey K; Robbins, David J

    2010-01-01

    The Hedgehog (Hh) signaling pathway is an essential regulator of embryonic development and a key factor in carcinogenesis.(1,2) Hh, a secreted morphogen, activates intracellular signaling events via downstream effector proteins, which translate the signal to regulate target gene transcription.(3,4) In a recent publication, we quantitatively compared two commonly accepted models of Hh signal transduction.(5) Each model requires a different ratio of signaling components to be feasible. Thus, we hypothesized that knowing the steady-state ratio of core signaling components might allow us to distinguish between models. We reported vast differences in the molar concentrations of endogenous effectors of Hh signaling, with Smo present in limiting concentrations.(5) This extra view summarizes the implications of this endogenous ratio in relation to current models of Hh signaling and places our results in the context of recent work describing the involvement of guanine nucleotide binding protein Galphai and Cos2 motility.

  5. The Need for Reproducibility

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Laboratory

    2016-06-27

    The purpose of this presentation is to consider issues of reproducibility: specifically, whether bitwise-reproducible computation is possible, whether computational research in DOE improves its publication process, and whether reproducible results can be achieved apart from the peer-review process.

  6. Reliability versus reproducibility

    International Nuclear Information System (INIS)

    Lautzenheiser, C.E.

    1976-01-01

    Defect detection and reproducibility of results are two separate but closely related subjects. It is axiomatic that a defect must be detected from examination to examination, or reproducibility of results is very poor. On the other hand, a defect can be detected on each of several subsequent examinations, giving high reliability, and still have poor reproducibility of results.

  7. Model for Quantitative Evaluation of Enzyme Replacement Treatment

    Directory of Open Access Journals (Sweden)

    Radeva B.

    2009-12-01

    Gaucher disease is the most frequent lysosomal disorder. Its enzyme replacement treatment, an achievement of modern biotechnology, has been used successfully in recent years. Evaluating the optimal dose for each patient is important for both health and economic reasons: enzyme replacement is the most expensive treatment and must be administered continuously, without interruption. Since 2001, enzyme replacement therapy with Cerezyme (Genzyme) has been formally introduced in Bulgaria, but after some time it was interrupted for 1-2 months, and the doses given to patients were not optimal. The aim of our work is to find a mathematical model for the quantitative evaluation of ERT in Gaucher disease. The model applies the software package "Statistika 6" to the individual data of 5-year-old children having Gaucher disease treated with Cerezyme. The output of the model offers possibilities for quantitative evaluation of the individual trends in the development of the disease of each child and their correlations. On the basis of these results, we might recommend suitable changes in ERT.

  8. Quantitative aspects and dynamic modelling of glucosinolate metabolism

    DEFF Research Database (Denmark)

    Vik, Daniel

    Advancements in 'omics technologies now allow acquisition of enormous amounts of quantitative information about biomolecules. This has led to the emergence of new scientific sub-disciplines, e.g. computational, systems and 'quantitative' biology. These disciplines examine complex biological behaviour through computational and mathematical approaches and have resulted in substantial insights and advances in molecular biology and physiology. Capitalizing on the accumulated knowledge and data, it is possible to construct dynamic models of complex biological systems, thereby initiating the so… The nutritionally and ecologically important glucosinolate (GLS) compounds of cruciferous plants – including the model plant Arabidopsis thaliana – have been studied extensively with regards to their biosynthesis and degradation. However, efforts to construct a dynamic model unifying the regulatory aspects have not been made…

  9. Quantitative Methods in Supply Chain Management Models and Algorithms

    CERN Document Server

    Christou, Ioannis T

    2012-01-01

    Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...

  10. Quantifying Zika: Advancing the Epidemiology of Zika With Quantitative Models.

    Science.gov (United States)

    Keegan, Lindsay T; Lessler, Justin; Johansson, Michael A

    2017-12-16

    When Zika virus (ZIKV) emerged in the Americas, little was known about its biology, pathogenesis, and transmission potential, and the scope of the epidemic was largely hidden, owing to generally mild infections and no established surveillance systems. Surges in congenital defects and Guillain-Barré syndrome alerted the world to the danger of ZIKV. In the context of limited data, quantitative models were critical in reducing uncertainties and guiding the global ZIKV response. Here, we review some of the models used to assess the risk of ZIKV-associated severe outcomes, the potential speed and size of ZIKV epidemics, and the geographic distribution of ZIKV risk. These models provide important insights and highlight significant unresolved questions related to ZIKV and other emerging pathogens.

  11. From Peer-Reviewed to Peer-Reproduced in Scholarly Publishing: The Complementary Roles of Data Models and Workflows in Bioinformatics.

    Directory of Open Access Journals (Sweden)

    Alejandra González-Beltrán

    Full Text Available Reproducing the results from a scientific paper can be challenging due to the absence of the data and the computational tools required for their analysis. In addition, details relating to the procedures used to obtain the published results can be difficult to discern due to the use of natural language when reporting how experiments have been performed. The Investigation/Study/Assay (ISA), Nanopublications (NP), and Research Objects (RO) models are conceptual data modelling frameworks that can structure such information from scientific papers. Computational workflow platforms can also be used to reproduce analyses of data in a principled manner. We assessed the extent to which the ISA, NP, and RO models, together with the Galaxy workflow system, can capture the experimental processes and reproduce the findings of a previously published paper reporting on the development of SOAPdenovo2, a de novo genome assembler. Executable workflows were developed using Galaxy, which reproduced results that were consistent with the published findings. A structured representation of the information in the SOAPdenovo2 paper was produced by combining the use of the ISA, NP, and RO models. By structuring the information in the published paper using these data and scientific workflow modelling frameworks, it was possible to explicitly declare elements of experimental design, variables, and findings. The models served as guides in the curation of scientific information, and this led to the identification of inconsistencies in the original published paper, thereby allowing its authors to publish corrections in the form of an erratum. SOAPdenovo2 scripts, data, and results are available through the GigaScience Database: http://dx.doi.org/10.5524/100044; the workflows are available from GigaGalaxy: http://galaxy.cbiit.cuhk.edu.hk; and the representations using the ISA, NP, and RO models are available through the SOAPdenovo2 case study website http://isa-tools.github.io/soapdenovo2

  12. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.

    Science.gov (United States)

    Cobbs, Gary

    2012-08-16

    Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the literature. They also give better estimates of
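    A short, hypothetical sketch of the simultaneous fitting strategy described above: kinetic constants are shared across all curves, while each curve gets its own initial target amount. The saturating-efficiency function is an invented stand-in for the paper's equilibrium solutions, and all parameter values and data are synthetic.

    ```python
    # Sketch: joint fit of several qPCR curves with shared "rate" constants
    # (eff_max, K) and one initial target amount per curve. Not the paper's
    # actual equilibrium model; amplify() is an invented surrogate.
    import numpy as np
    from scipy.optimize import least_squares

    def amplify(x0, eff_max, K, n_cycles=40):
        # Per-cycle efficiency decays as product accumulates (primer depletion).
        signal = np.empty(n_cycles)
        amount = float(x0)
        for c in range(n_cycles):
            eff = eff_max / (1.0 + amount / K)
            amount *= 1.0 + eff
            signal[c] = amount
        return signal

    def residuals(params, curves):
        # params = [eff_max, K, x0_curve1, x0_curve2, ...]
        eff_max, K = params[0], params[1]
        res = []
        for x0, observed in zip(params[2:], curves):
            model = amplify(x0, eff_max, K, len(observed))
            res.append(np.log10(model) - np.log10(observed))
        return np.concatenate(res)

    rng = np.random.default_rng(0)
    true_curves = [amplify(x0, 0.95, 1e11) for x0 in (1e3, 1e5)]
    curves = [c * (1.0 + 0.02 * rng.standard_normal(c.size)) for c in true_curves]

    fit = least_squares(residuals, x0=[0.8, 1e10, 1e2, 1e4],
                        bounds=(1e-12, np.inf),
                        x_scale=[1.0, 1e10, 1e2, 1e4], args=(curves,))
    print("shared eff_max, K:", fit.x[:2], "per-curve x0:", fit.x[2:])
    ```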

  13. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction

    Directory of Open Access Journals (Sweden)

    Cobbs Gary

    2012-08-01

    Full Text Available Abstract Background Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Results Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the

  14. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    Science.gov (United States)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  15. 3D-modeling of the spine using EOS imaging system: Inter-reader reproducibility and reliability.

    Science.gov (United States)

    Rehm, Johannes; Germann, Thomas; Akbar, Michael; Pepke, Wojciech; Kauczor, Hans-Ulrich; Weber, Marc-André; Spira, Daniel

    2017-01-01

    To retrospectively assess the interreader reproducibility and reliability of EOS 3D full spine reconstructions in patients with adolescent idiopathic scoliosis (AIS). 73 patients with a mean age of 17 years and moderate AIS (median Cobb angle 18.2°) obtained low-dose standing biplanar radiographs with EOS. Two independent readers performed 3D reconstructions of the spine with the "full-spine" method, adjusting the bone contour of every thoracic and lumbar vertebra (Th1-L5). Interreader reproducibility was assessed regarding rotation of every single vertebra in the coronal (i.e. frontal), sagittal (i.e. lateral), and axial plane, T1/T12 kyphosis, T4/T12 kyphosis, L1/L5 lordosis, L1/S1 lordosis and pelvic parameters. Radiation exposure, scan time and 3D reconstruction time were recorded. Intraclass correlation (ICC) ranged between 0.83 and 0.98 for frontal vertebral rotation, between 0.94 and 0.99 for lateral vertebral rotation and between 0.51 and 0.88 for axial vertebral rotation. ICC was 0.92 for T1/T12 kyphosis, 0.95 for T4/T12 kyphosis, 0.90 for L1/L5 lordosis, 0.85 for L1/S1 lordosis, 0.97 for pelvic incidence, 0.96 for sacral slope, 0.98 for sagittal pelvic tilt and 0.94 for lateral pelvic tilt. The mean time for reconstruction was 14.9 minutes (reader 1: 14.6 minutes, reader 2: 15.2 minutes, p ...). 3D angle measurement of vertebral rotation proved to be reliable and was performed in an acceptable reconstruction time. Interreader reproducibility of axial rotation was limited to some degree in the upper and middle thoracic spine due to the obtuse angulation of the pedicles and the processi spinosi in the frontal view, somewhat complicating their delineation.
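    As a hedged illustration of the agreement statistics quoted above, the sketch below computes a two-way random-effects ICC(2,1) for two readers from first principles; the four kyphosis measurements are invented.

    ```python
    # Two-way random-effects, single-measurement ICC(2,1):
    # (MSR - MSE) / (MSR + (k-1)*MSE + k*(MSC - MSE)/n)
    import numpy as np

    def icc2_1(data):
        data = np.asarray(data, float)        # rows: subjects, cols: readers
        n, k = data.shape
        grand = data.mean()
        ms_rows = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
        ms_cols = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)
        resid = (data - data.mean(axis=1, keepdims=True)
                 - data.mean(axis=0, keepdims=True) + grand)
        ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (
            ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

    # e.g. T4/T12 kyphosis (degrees) measured by two readers in four patients
    kyphosis = [[31.2, 30.8], [42.5, 41.9], [25.0, 26.1], [38.7, 37.5]]
    print(f"ICC(2,1) = {icc2_1(kyphosis):.3f}")
    ```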

  16. Dynamic contrast-enhanced computed tomography in metastatic nasopharyngeal carcinoma: reproducibility analysis and observer variability of the distributed parameter model.

    Science.gov (United States)

    Ng, Quan-Sing; Thng, Choon Hua; Lim, Wan Teck; Hartono, Septian; Thian, Yee Liang; Lee, Puor Sherng; Tan, Daniel Shao-Weng; Tan, Eng Huat; Koh, Tong San

    2012-01-01

    To determine the reproducibility and observer variability of distributed parameter analysis of dynamic contrast-enhanced computed tomography (DCE-CT) data in metastatic nasopharyngeal carcinoma, and to compare 2 approaches of region-of-interest (ROI) analyses. Following ethical approval and informed consent, 17 patients with nasopharyngeal carcinoma underwent paired DCE-CT examinations on a 64-detector scanner, measuring tumor blood flow (F, mL/100 mL/min), permeability surface area product (PS, mL/100 mL/min), fractional intravascular blood volume (v1, mL/100 mL), and fractional extracellular-extravascular volume (v2, mL/100 mL). Tumor parameters were derived by fitting (i) the ROI-averaged concentration-time curve, and (ii) the median value of parameters from voxel-level concentration-time curves. Measurement reproducibility and inter- and intraobserver variability were estimated using Bland-Altman statistics. Mean F, PS, v1, and v2 are 44.9, 20.4, 7.1, and 34.1 for ROI analysis, and 49.0, 18.7, 6.7, and 34.0 for voxel analysis, respectively. Within-subject coefficients of variation are 38.8%, 49.5%, 54.2%, and 35.9% for ROI analysis, and 15.0%, 35.1%, 33.0%, and 21.0% for voxel analysis, respectively. Repeatability coefficients are 48.2, 28.0, 10.7, and 33.9 for ROI analysis, and 20.3, 18.2, 6.1 and 19.8 for voxel analysis, respectively. Intra- and interobserver correlation coefficient ranged from 0.94 to 0.97 and 0.90 to 0.95 for voxel analysis, and 0.73 to 0.87 and 0.72 to 0.94 for ROI analysis, respectively. Measurements of F and v2 appear more reproducible than PS and v1. Voxel-level analysis improves both reproducibility and observer variability compared with ROI-averaged analysis and may retain information about tumor spatial heterogeneity.
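    The test-retest statistics reported above (within-subject coefficient of variation and repeatability coefficient) follow directly from the within-subject standard deviation of paired measurements. A minimal sketch, with invented blood-flow values:

    ```python
    # Bland-Altman style reproducibility metrics for paired test-retest data.
    import numpy as np

    def reproducibility(m1, m2):
        m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
        diffs = m1 - m2
        # within-subject SD from paired replicates: sd_w^2 = mean(d^2) / 2
        sd_w = np.sqrt(np.mean(diffs ** 2) / 2.0)
        wcv = 100.0 * sd_w / np.concatenate([m1, m2]).mean()   # percent
        rc = 1.96 * np.sqrt(2.0) * sd_w       # repeatability coefficient
        return wcv, rc

    # e.g. tumor blood flow F (mL/100 mL/min) from two same-day examinations
    f_visit1 = [44.1, 52.3, 39.8, 61.0, 47.5]
    f_visit2 = [40.7, 55.9, 42.1, 57.2, 50.3]
    wcv, rc = reproducibility(f_visit1, f_visit2)
    print(f"wCV = {wcv:.1f}%, repeatability coefficient = {rc:.1f}")
    ```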

  17. Measurement of cerebral blood flow by intravenous xenon-133 technique and a mobile system. Reproducibility using the Obrist model compared to total curve analysis

    DEFF Research Database (Denmark)

    Schroeder, T; Holstein, P; Lassen, N A

    1986-01-01

    The recent development of a mobile 10 detector unit, using the i.v. Xenon-133 technique, has made it possible to perform repeated bedside measurements of cerebral blood flow (CBF). Test-retest studies were carried out in 38 atherosclerotic subjects, in order to evaluate the reproducibility of CBF level... Hemispheric asymmetry was considerably more reproducible than CBF level. Using a single detector instead of five regional values averaged as the hemispheric flow increased the standard deviation of CBF level by 10-20%, while the variation in asymmetry was doubled. In optimal measuring conditions the two models revealed no significant differences, but in low flow situations the artifact model yielded significantly more stable results. The present apparatus, equipped with 3-5 detectors covering each hemisphere, offers the opportunity of performing serial CBF measurements in situations not otherwise feasible.

  18. Testing Reproducibility in Earth Sciences

    Science.gov (United States)

    Church, M. A.; Dudill, A. R.; Frey, P.; Venditti, J. G.

    2017-12-01

    Reproducibility represents how closely the results of independent tests agree when undertaken using the same materials but different conditions of measurement, such as operator, equipment or laboratory. The concept of reproducibility is fundamental to the scientific method, as it prevents the persistence of incorrect or biased results. Yet currently the production of scientific knowledge emphasizes rapid publication of previously unreported findings, a culture that has emerged from pressures related to hiring, publication criteria and funding requirements. Awareness and critique of the disconnect between how scientific research should be undertaken, and how it actually is conducted, has been prominent in biomedicine for over a decade, with the fields of economics and psychology more recently joining the conversation. The purpose of this presentation is to stimulate the conversation in the earth sciences where, despite implicit evidence in widely accepted classifications, formal testing of reproducibility is rare. As a formal test of reproducibility, two sets of experiments were undertaken with the same experimental procedure, at the same scale, but in different laboratories. Using narrow, steep flumes and spherical glass beads, grain size sorting was examined by introducing fine sediment of varying size and quantity into a mobile coarse bed. The general setup was identical, including flume width and slope; however, there were some variations in the materials, construction and lab environment. Comparison of the results includes examination of the infiltration profiles, sediment mobility and transport characteristics. The physical phenomena were qualitatively reproduced but not quantitatively replicated. Reproduction of results encourages more robust research and reporting, and facilitates exploration of possible variations in data in various specific contexts. Following the lead of other fields, testing of reproducibility can be incentivized through changes to journal

  19. Reproducibility study of [{sup 18}F]FPP(RGD){sub 2} uptake in murine models of human tumor xenografts

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Edwin; Liu, Shuangdong; Chin, Frederick; Cheng, Zhen [Stanford University, Molecular Imaging Program at Stanford, Department of Radiology, School of Medicine, Stanford, CA (United States); Gowrishankar, Gayatri; Yaghoubi, Shahriar [Stanford University, Molecular Imaging Program at Stanford, Department of Radiology, School of Medicine, Stanford, CA (United States); Stanford University, Molecular Imaging Program at Stanford, Department of Bioengineering, School of Medicine, Stanford, CA (United States); Wedgeworth, James Patrick [Stanford University, Molecular Imaging Program at Stanford, Department of Bioengineering, School of Medicine, Stanford, CA (United States); Berndorff, Dietmar; Gekeler, Volker [Bayer Schering Pharma AG, Global Drug Discovery, Berlin (Germany); Gambhir, Sanjiv S. [Stanford University, Molecular Imaging Program at Stanford, Department of Radiology, School of Medicine, Stanford, CA (United States); Stanford University, Molecular Imaging Program at Stanford, Department of Bioengineering, School of Medicine, Stanford, CA (United States); Canary Center at Stanford for Cancer Early Detection, Nuclear Medicine, Departments of Radiology and Bioengineering, Molecular Imaging Program at Stanford, Stanford, CA (United States)

    2011-04-15

    An {sup 18}F-labeled PEGylated arginine-glycine-aspartic acid (RGD) dimer [{sup 18}F]FPP(RGD){sub 2} has been used to image tumor {alpha}{sub v}{beta}{sub 3} integrin levels in preclinical and clinical studies. Serial positron emission tomography (PET) studies may be useful for monitoring antiangiogenic therapy response or for drug screening; however, the reproducibility of serial scans has not been determined for this PET probe. The purpose of this study was to determine the reproducibility of the integrin {alpha}{sub v}{beta}{sub 3}-targeted PET probe, [{sup 18}F ]FPP(RGD){sub 2} using small animal PET. Human HCT116 colon cancer xenografts were implanted into nude mice (n = 12) in the breast and scapular region and grown to mean diameters of 5-15 mm for approximately 2.5 weeks. A 3-min acquisition was performed on a small animal PET scanner approximately 1 h after administration of [{sup 18}F]FPP(RGD){sub 2} (1.9-3.8 MBq, 50-100 {mu}Ci) via the tail vein. A second small animal PET scan was performed approximately 6 h later after reinjection of the probe to assess for reproducibility. Images were analyzed by drawing an ellipsoidal region of interest (ROI) around the tumor xenograft activity. Percentage injected dose per gram (%ID/g) values were calculated from the mean or maximum activity in the ROIs. Coefficients of variation and differences in %ID/g values between studies from the same day were calculated to determine the reproducibility. The coefficient of variation (mean {+-}SD) for %ID{sub mean}/g and %ID{sub max}/g values between [{sup 18}F]FPP(RGD){sub 2} small animal PET scans performed 6 h apart on the same day were 11.1 {+-} 7.6% and 10.4 {+-} 9.3%, respectively. The corresponding differences in %ID{sub mean}/g and %ID{sub max}/g values between scans were -0.025 {+-} 0.067 and -0.039 {+-} 0.426. Immunofluorescence studies revealed a direct relationship between extent of {alpha}{sub {nu}}{beta}{sub 3} integrin expression in tumors and tumor vasculature

  20. [Reproducing and evaluating a rabbit model of multiple organ dysfunction syndrome after cardiopulmonary resuscitation resulted from asphyxia].

    Science.gov (United States)

    Zhang, Dong; Li, Nan; Chen, Ying; Wang, Yu-shan

    2013-02-01

    To evaluate the reproduction of a model of post-resuscitation multiple organ dysfunction syndrome (PR-MODS) after cardiac arrest (CA) in rabbits, in order to provide new methods for post-CA treatment. Thirty-five rabbits were randomly divided into three groups: the sham group (n=5), the 7-minute asphyxia group (n=15), and the 8-minute asphyxia group (n=15). The asphyxia CA model was reproduced with tracheal occlusion. After cardiopulmonary resuscitation (CPR), the rate of return of spontaneous circulation (ROSC), the mortality at different time points and the incidence of systemic inflammatory response syndrome (SIRS) were observed in the two asphyxia groups. Creatine kinase isoenzyme (CK-MB), alanine aminotransferase (ALT), creatinine (Cr), glucose (Glu) and arterial partial pressure of oxygen (PaO2) levels in blood were measured in the two asphyxia groups before CPR and 12, 24 and 48 hours after ROSC. The surviving rabbits were euthanized at 48 hours after ROSC, and the heart, brain, lung, kidney, liver, and intestine were harvested for pathological examination under a light microscope. PR-MODS after CA was defined based on the function of the main organs and their pathological changes. (1) The incidence of ROSC was 100.0% in the 7-minute asphyxia group and 86.7% in the 8-minute asphyxia group, respectively (P > 0.05). The 6-hour mortality in the 8-minute asphyxia group was significantly higher than that in the 7-minute asphyxia group (46.7% vs. 6.7%, P < 0.05). (2) There was a variety of organ dysfunctions in surviving rabbits after ROSC, including chemosis, respiratory distress, hypotension, abdominal distension, weakened or absent bowel peristalsis, and oliguria. (3) There was no SIRS or associated change in major organ function in the sham group. SIRS was observed at 12-24 hours after ROSC in the two asphyxia groups. CK-MB was increased significantly at 12 hours after ROSC compared with that before asphyxia (7-minute asphyxia group: 786.88±211.84 U/L vs. 468.20±149.45 U/L, 8

  1. Current Challenges in the First Principle Quantitative Modelling of the Lower Hybrid Current Drive in Tokamaks

    Science.gov (United States)

    Peysson, Y.; Bonoli, P. T.; Chen, J.; Garofalo, A.; Hillairet, J.; Li, M.; Qian, J.; Shiraiwa, S.; Decker, J.; Ding, B. J.; Ekedahl, A.; Goniche, M.; Zhai, X.

    2017-10-01

    The Lower Hybrid (LH) wave is widely used in existing tokamaks for tailoring the current density profile or extending pulse duration to steady-state regimes. Its high efficiency makes it particularly attractive for a fusion reactor, and it is being considered for this purpose in the ITER tokamak. Nevertheless, while the basics of the LH wave in tokamak plasmas are well known, quantitative modeling of experimental observations based on first principles remains a highly challenging exercise, despite the considerable numerical efforts achieved so far. In this context, a rigorous methodology must be applied in the simulations to identify the minimum number of physical mechanisms that must be considered to reproduce shot-to-shot experimental observations and also scalings (density, power spectrum). Based on recent simulations carried out for the EAST, Alcator C-Mod and Tore Supra tokamaks, the state of the art in LH modeling is reviewed. The capability of fast electron bremsstrahlung, internal inductance li and LH driven current at zero loop voltage to constrain LH simulations all together is discussed, as well as the need for further improvements (diagnostics, codes, LH model) for robust interpretative and predictive simulations.

  2. Towards Quantitative Spatial Models of Seabed Sediment Composition.

    Directory of Open Access Journals (Sweden)

    David Stephens

    Full Text Available There is a need for fit-for-purpose maps for accurately depicting the types of seabed substrate and habitat and the properties of the seabed for the benefits of research, resource management, conservation and spatial planning. The aim of this study is to determine whether it is possible to predict substrate composition across a large area of seabed using legacy grain-size data and environmental predictors. The study area includes the North Sea up to approximately 58.44°N and the United Kingdom's parts of the English Channel and the Celtic Seas. The analysis combines outputs from hydrodynamic models as well as optical remote sensing data from satellite platforms and bathymetric variables, which are mainly derived from acoustic remote sensing. We build a statistical regression model to make quantitative predictions of sediment composition (fractions of mud, sand and gravel using the random forest algorithm. The compositional data is analysed on the additive log-ratio scale. An independent test set indicates that approximately 66% and 71% of the variability of the two log-ratio variables are explained by the predictive models. A EUNIS substrate model, derived from the predicted sediment composition, achieved an overall accuracy of 83% and a kappa coefficient of 0.60. We demonstrate that it is feasible to spatially predict the seabed sediment composition across a large area of continental shelf in a repeatable and validated way. We also highlight the potential for further improvements to the method.
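    A minimal sketch of this modelling approach under invented data: two additive log-ratio (ALR) responses are predicted with a random forest and back-transformed to mud/sand/gravel fractions that sum to one. This is not the authors' code, and the predictors are placeholders.

    ```python
    # ALR-transformed compositional regression with a random forest.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 4))                    # stand-ins for bathymetry etc.
    comp = rng.dirichlet(alpha=[2, 5, 1], size=500)  # fake mud/sand/gravel fractions

    # ALR with gravel as the common denominator: two log-ratio responses
    alr = np.log(comp[:, :2] / comp[:, 2:3])

    X_tr, X_te, y_tr, y_te = train_test_split(X, alr, random_state=0)
    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

    # back-transform predictions to compositions summing to one
    pred = rf.predict(X_te)
    expanded = np.column_stack([np.exp(pred), np.ones(len(pred))])
    fractions = expanded / expanded.sum(axis=1, keepdims=True)
    print(fractions[:3])                             # predicted [mud, sand, gravel]
    ```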

  3. Quantitative Modeling of Human-Environment Interactions in Preindustrial Time

    Science.gov (United States)

    Sommer, Philipp S.; Kaplan, Jed O.

    2017-04-01

    Quantifying human-environment interactions and anthropogenic influences on the environment prior to the Industrial revolution is essential for understanding the current state of the earth system. This is particularly true for the terrestrial biosphere, but marine ecosystems and even climate were likely modified by human activities centuries to millennia ago. Direct observations are however very sparse in space and time, especially as one considers prehistory. Numerical models are therefore essential to produce a continuous picture of human-environment interactions in the past. Agent-based approaches, while widely applied to quantifying human influence on the environment in localized studies, are unsuitable for global spatial domains and Holocene timescales because of computational demands and large parameter uncertainty. Here we outline a new paradigm for the quantitative modeling of human-environment interactions in preindustrial time that is adapted to the global Holocene. Rather than attempting to simulate agency directly, the model is informed by a suite of characteristics describing those things about society that cannot be predicted on the basis of environment, e.g., diet, presence of agriculture, or range of animals exploited. These categorical data are combined with the properties of the physical environment in a coupled human-environment model. The model is, at its core, a dynamic global vegetation model with a module for simulating crop growth that is adapted for preindustrial agriculture. This allows us to simulate yield and calories for feeding both humans and their domesticated animals. We couple this basic caloric availability with a simple demographic model to calculate potential population, and, constrained by labor requirements and land limitations, we create scenarios of land use and land cover on a moderate-resolution grid. We further implement a feedback loop where anthropogenic activities lead to changes in the properties of the physical

  4. Development and reproducibility evaluation of a Monte Carlo-based standard LINAC model for quality assurance of multi-institutional clinical trials.

    Science.gov (United States)

    Usmani, Muhammad Nauman; Takegawa, Hideki; Takashina, Masaaki; Numasaki, Hodaka; Suga, Masaki; Anetai, Yusuke; Kurosu, Keita; Koizumi, Masahiko; Teshima, Teruki

    2014-11-01

    Technical developments in radiotherapy (RT) have created a need for systematic quality assurance (QA) to ensure that clinical institutions deliver prescribed radiation doses consistent with the requirements of clinical protocols. For QA, an ideal dose verification system should be independent of the treatment-planning system (TPS). This paper describes the development and reproducibility evaluation of a Monte Carlo (MC)-based standard LINAC model as a preliminary requirement for independent verification of dose distributions. The BEAMnrc MC code is used for characterization of the 6-, 10- and 15-MV photon beams for a wide range of field sizes. The modeling of the LINAC head components is based on the specifications provided by the manufacturer. MC dose distributions are tuned to match Varian Golden Beam Data (GBD). For reproducibility evaluation, calculated beam data were compared with beam data measured at individual institutions. For all energies and field sizes, the MC and GBD agreed to within 1.0% for percentage depth doses (PDDs), 1.5% for beam profiles and 1.2% for total scatter factors (Scps). Reproducibility evaluation showed that the maximum average local differences were 1.3% and 2.5% for PDDs and beam profiles, respectively. MC and institutions' mean Scps agreed to within 2.0%. An MC-based standard LINAC model developed to independently verify dose distributions for QA of multi-institutional clinical trials and routine clinical practice has proven to be highly accurate and reproducible and can thus help ensure that prescribed doses delivered are consistent with the requirements of clinical protocols.

  5. Quantitative Modelling of Trace Elements in Hard Coal.

    Science.gov (United States)

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy has remained unquestionable for decades, and coal is expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal, and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on robust scales were required. These enabled the development of correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all but one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three of the models. The study is of both cognitive and applicative importance: it presents a unique application of chemometric methods of data exploration in modeling the content of trace elements in coal, and in this way contributes to the development of useful tools for coal quality assessment.
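    The PLS workflow above can be sketched as follows, with synthetic data standing in for the 24 coal parameters and one trace-element response; the robust PLS variants the authors used for outlier handling are not reproduced here.

    ```python
    # PLS regression with 10-fold cross-validated RMSE (RMSECV).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(2)
    X = rng.normal(size=(132, 24))       # 132 samples x 24 coal/ash parameters
    beta = rng.normal(size=24)
    y = X @ beta + 0.5 * rng.normal(size=132)   # stand-in for one element (e.g. As)

    pls = PLSRegression(n_components=5)
    y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
    rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
    print(f"RMSECV = {rmsecv:.3f}")
    ```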

  6. Advantages of Relative versus Absolute Data for the Development of Quantitative Structure-Activity Relationship Classification Models.

    Science.gov (United States)

    Ruiz, Irene Luque; Gómez-Nieto, Miguel Ángel

    2017-11-27

    The appropriate selection of a chemical space represented by the data set, the selection of its chemical data representation, the development of a correct modeling process using a robust and reproducible algorithm, and the performance of an exhaustive training and external validation determine the usability and reproducibility of a quantitative structure-activity relationship (QSAR) classification model. In this paper, we show that the use of relative versus absolute data in the representation of the data sets produces better classification models when the other processes are not modified. Relative data considers a reference frame to measure the chemical characteristics involved in the classification model, refining the data set representation and smoothing the lack of chemical information. Three data sets with different characteristics have been used in this study, and classifications models have been built applying the support vector machine algorithm. For randomly selected training and test sets, values of accuracy and area under the receiver operating characteristic curve close to 100% have been obtained for the generation of the models and external validations in all cases.

  7. Melanoma screening: Informing public health policy with quantitative modelling.

    Directory of Open Access Journals (Sweden)

    Stephen Gilmore

    Full Text Available Australia and New Zealand share the highest incidence rates of melanoma worldwide. Despite the substantial increase in public and physician awareness of melanoma in Australia over the last 30 years, a result of the publicly funded mass media campaigns introduced in the early 1980s, mortality has steadily increased during this period. This increased mortality has led investigators to question the relative merits of primary versus secondary prevention; that is, sensible sun exposure practices versus early detection. Increased melanoma vigilance on the part of the public and among physicians has resulted in large increases in public health expenditure, primarily from screening costs and increased rates of office surgery. Has this attempt at secondary prevention been effective? Unfortunately, epidemiologic studies addressing the causal relationship between the level of secondary prevention and mortality are prohibitively difficult to implement: it is currently unknown whether increased melanoma surveillance reduces mortality, and if so, whether such an approach is cost-effective. Here I address the issue of secondary prevention of melanoma with respect to incidence and mortality (and cost per life saved) by developing a Markov model of melanoma epidemiology based on Australian incidence and mortality data. The advantages of developing a methodology that can determine constraint-based surveillance outcomes are twofold: first, it can address the issue of effectiveness; and second, it can quantify the trade-off between cost and utilisation of medical resources on one hand, and reduced morbidity and lives saved on the other. With respect to melanoma, implementing the model facilitates the quantitative determination of the relative effectiveness and trade-offs associated with different levels of secondary and tertiary prevention, both retrospectively and prospectively. For example, I show that the surveillance enhancement that began in
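    A Markov cohort model of the kind mentioned above can be illustrated with a toy three-state chain; the transition probabilities below are invented and bear no relation to the author's calibrated Australian data.

    ```python
    # Toy Markov cohort model: yearly transitions between health states.
    import numpy as np

    P = np.array([[0.995, 0.004, 0.001],   # healthy  -> healthy/melanoma/dead
                  [0.000, 0.900, 0.100],   # melanoma -> melanoma/dead
                  [0.000, 0.000, 1.000]])  # dead is absorbing

    cohort = np.array([1.0, 0.0, 0.0])     # everyone starts healthy
    for year in range(30):
        cohort = cohort @ P                # one-year transition
    print("state occupancy after 30 years:", cohort.round(4))
    ```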

  8. Pangea breakup and northward drift of the Indian subcontinent reproduced by a numerical model of mantle convection.

    Science.gov (United States)

    Yoshida, Masaki; Hamano, Yozo

    2015-02-12

    Since around 200 Ma, the most notable event in the process of the breakup of Pangea has been the high speed (up to 20 cm yr(-1)) of the northward drift of the Indian subcontinent. Our numerical simulations of 3-D spherical mantle convection approximately reproduced the process of continental drift from the breakup of Pangea at 200 Ma to the present-day continental distribution. These simulations revealed that a major factor in the northward drift of the Indian subcontinent was the large-scale cold mantle downwelling that developed spontaneously in the North Tethys Ocean, attributed to the overall shape of Pangea. The strong lateral mantle flow caused by the high-temperature anomaly beneath Pangea, due to the thermal insulation effect, enhanced the acceleration of the Indian subcontinent during the early stage of the Pangea breakup. The large-scale hot upwelling plumes from the lower mantle, initially located under Africa, might have contributed to the formation of the large-scale cold mantle downwelling in the North Tethys Ocean.

  9. Three-dimensional surgical modelling with an open-source software protocol: study of precision and reproducibility in mandibular reconstruction with the fibula free flap.

    Science.gov (United States)

    Ganry, L; Quilichini, J; Bandini, C M; Leyder, P; Hersant, B; Meningaud, J P

    2017-08-01

    Very few surgical teams currently use totally independent and free solutions to perform three-dimensional (3D) surgical modelling for osseous free flaps in reconstructive surgery. This study assessed the precision and technical reproducibility of a 3D surgical modelling protocol using free open-source software in mandibular reconstruction with fibula free flaps and surgical guides. Precision was assessed through comparisons of the 3D surgical guide to the sterilized 3D-printed guide, determining accuracy to the millimetre level. Reproducibility was assessed in three surgical cases by volumetric comparison to the millimetre level. For the 3D surgical modelling, a difference of less than 0.1 mm was observed. Almost no deformations (...) were observed; the precision of the free flap modelling was between 0.1 mm and 0.4 mm, and the average precision of the complete reconstructed mandible was less than 1 mm. The open-source software protocol demonstrated high accuracy without complications. However, the precision of the surgical case depends on the surgeon's 3D surgical modelling. Therefore, surgeons need training on the use of this protocol before applying it to surgical cases; this constitutes a limitation. Further studies should address the transfer of expertise.

  10. The Proximal Medial Sural Nerve Biopsy Model: A Standardised and Reproducible Baseline Clinical Model for the Translational Evaluation of Bioengineered Nerve Guides

    Directory of Open Access Journals (Sweden)

    Ahmet Bozkurt

    2014-01-01

    Full Text Available Autologous nerve transplantation (ANT) is the clinical gold standard for the reconstruction of peripheral nerve defects. A large number of bioengineered nerve guides have been tested under laboratory conditions as an alternative to the ANT. The step from experimental studies to the implementation of the device in the clinical setting is often substantial and the outcome is unpredictable. This is mainly linked to the heterogeneity of clinical peripheral nerve injuries, which is very different from standardized animal studies. In search of a reproducible human model for the implantation of bioengineered nerve guides, we propose the reconstruction of sural nerve defects after routine nerve biopsy as a first or baseline study. Our concept uses the medial sural nerve of patients undergoing diagnostic nerve biopsy (≥2 cm). The biopsy-induced nerve gap was immediately reconstructed by implantation of the novel microstructured nerve guide, Neuromaix, as part of an ongoing first-in-human study. Here we present (i) a detailed list of inclusion and exclusion criteria, (ii) a detailed description of the surgical procedure, and (iii) a follow-up concept with multimodal sensory evaluation techniques. The proximal medial sural nerve biopsy model can serve as a preliminary or baseline nerve lesion model. In a subsequent step, newly developed nerve guides could be tested in more unpredictable and challenging clinical peripheral nerve lesions (e.g., following trauma), which have reduced comparability due to the different nature of the injuries (e.g., site of injury and length of nerve gap).

  11. Reproducibility in Seismic Imaging

    Directory of Open Access Journals (Sweden)

    González-Verdejo O.

    2012-04-01

    Full Text Available Within the field of exploration seismology, there is interest at the national level in integrating reproducibility into applied, educational and research activities related to seismic processing and imaging. This reproducibility implies the description and organization of the elements involved in numerical experiments, so that a researcher, teacher or student can study, verify, repeat, and modify them independently. In this work, we document and adapt reproducibility in seismic processing and imaging to spread this concept and its benefits, and to encourage the use of open-source software in this area within our academic and professional environment. We present an enhanced seismic imaging example, of interest in both academic and professional environments, using Mexican seismic data. As a result of this research, we show that it is possible to assimilate, adapt and transfer technology at low cost, using open-source software and following a reproducible research scheme.

  12. Quantitative Model for Supply Chain Visibility: Process Capability Perspective

    Directory of Open Access Journals (Sweden)

    Youngsu Lee

    2016-01-01

    Full Text Available The intensity of enterprise competition has increased as a result of a greater diversity of customer needs as well as the persistence of a long-term recession. The results of competition are becoming severe enough to determine the survival of a company. To survive global competition, each firm must focus on achieving innovation excellence and operational excellence as core competencies for sustainable competitive advantage. Supply chain management is now regarded as one of the most effective innovation initiatives to achieve operational excellence, and its importance has become ever more apparent. However, few companies effectively manage their supply chains, and the greatest difficulty is in achieving supply chain visibility. Many companies still suffer from a lack of visibility, and in spite of extensive research and the availability of modern technologies, the concepts and quantification methods to increase supply chain visibility are still ambiguous. Based on the extant research on supply chain visibility, this study proposes an extended visibility concept focusing on a process capability perspective and suggests a more quantitative model using the Z score from Six Sigma methodology to evaluate and improve the level of supply chain visibility.
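    One way to read the proposed process-capability measure is as an ordinary Z score against a specification limit. A minimal sketch, assuming a visibility metric such as order-status update latency with an invented upper spec limit:

    ```python
    # Process-capability style Z score for a supply chain visibility metric.
    import numpy as np

    latency = np.array([3.1, 4.2, 2.8, 5.0, 3.6, 4.4, 3.9, 4.8])  # hours, invented
    usl = 6.0                                # upper specification limit (assumed)
    z = (usl - latency.mean()) / latency.std(ddof=1)
    print(f"visibility Z score (sigma level) = {z:.2f}")
    ```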

  13. Accuracy and reproducibility of voxel based superimposition of cone beam computed tomography models on the anterior cranial base and the zygomatic arches.

    Science.gov (United States)

    Nada, Rania M; Maal, Thomas J J; Breuning, K Hero; Bergé, Stefaan J; Mostafa, Yehya A; Kuijpers-Jagtman, Anne Marie

    2011-02-09

    Superimposition of serial Cone Beam Computed Tomography (CBCT) scans has become a valuable tool for three-dimensional (3D) assessment of treatment effects and stability. Voxel based image registration is a newly developed semi-automated technique for superimposition and comparison of two CBCT scans. The accuracy and reproducibility of CBCT superimposition on the anterior cranial base or the zygomatic arches using voxel based image registration was tested in this study. 16 pairs of 3D CBCT models were constructed from pre- and post-treatment CBCT scans of 16 adult dysgnathic patients. Each pair was registered on the anterior cranial base three times and on the left zygomatic arch twice. Following each superimposition, the mean absolute distances between the two models were calculated at four regions: anterior cranial base, forehead, left and right zygomatic arches. The mean distances between the models ranged from 0.2 to 0.37 mm (SD 0.08-0.16) for the anterior cranial base registration and from 0.2 to 0.45 mm (SD 0.09-0.27) for the zygomatic arch registration. The mean differences between the two registration zones ranged from 0.12 to 0.19 mm at the four regions. Voxel based image registration on both zones can be considered an accurate and reproducible method for CBCT superimposition. The left zygomatic arch could be used as a stable structure for the superimposition of smaller field of view CBCT scans where the anterior cranial base is not visible.

  14. A Transformative Model for Undergraduate Quantitative Biology Education

    OpenAIRE

    Usher, David C.; Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.

    2010-01-01

    The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematic...

  15. A theoretical quantitative model for evolution of cancer chemotherapy resistance

    Directory of Open Access Journals (Sweden)

    Gatenby Robert A

    2010-04-01

    Full Text Available Abstract Background Disseminated cancer remains a nearly uniformly fatal disease. While a number of effective chemotherapies are available, tumors inevitably evolve resistance to these drugs, ultimately resulting in treatment failure and cancer progression. Causes of chemotherapy failure in cancer treatment reside at multiple levels: poor vascularization, hypoxia, high intratumoral interstitial fluid pressure, and phenotypic resistance to drug-induced toxicity through upregulated xenobiotic metabolism or DNA repair mechanisms and silencing of apoptotic pathways. We propose that in order to understand the evolutionary dynamics that allow tumors to develop chemoresistance, a comprehensive quantitative model must be used to describe the interactions of cell resistance mechanisms and tumor microenvironment during chemotherapy. Ultimately, the purpose of this model is to identify the best strategies to treat different types of tumor (tumor microenvironment, genetic/phenotypic tumor heterogeneity, tumor growth rate, etc.). We predict that the most promising strategies are those that are both cytotoxic and apply a selective pressure for a phenotype that is less fit than that of the original cancer population. This strategy, known as a double bind, is different from the selection process imposed by standard chemotherapy, which tends to produce a resistant population that simply upregulates xenobiotic metabolism. In order to achieve this goal we propose to simulate different tumor progression and therapy strategies (chemotherapy and glucose restriction) targeting stabilization of tumor size and minimization of chemoresistance. Results This work confirms the prediction of previous mathematical models and simulations that suggested that administration of chemotherapy with the goal of tumor stabilization instead of eradication would yield better results (longer subject survival) than the use of maximum tolerated doses. Our simulations also indicate that the
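    A minimal sketch of the kind of two-population dynamics implied above: sensitive and resistant cells under pulsed chemotherapy, with a fitness cost of resistance. All rates, doses and capacities are invented; this is not the authors' simulation.

    ```python
    # Logistic growth of sensitive (s) and resistant (r) cells; the drug kills
    # sensitive cells only, and resistance carries a growth-rate cost.
    import numpy as np
    from scipy.integrate import solve_ivp

    def tumor(t, y, dose):
        s, r = y
        crowding = 1.0 - (s + r) / 1e9          # shared carrying capacity
        ds = 0.05 * s * crowding - 0.8 * dose(t) * s
        dr = 0.03 * r * crowding                # slower growth: cost of resistance
        return [ds, dr]

    dose = lambda t: 1.0 if (t % 28.0) < 5.0 else 0.0   # 5-day pulse every 28 days
    sol = solve_ivp(tumor, (0.0, 365.0), [1e8, 1e5], args=(dose,), max_step=0.5)
    print(f"after 1 year: sensitive {sol.y[0, -1]:.2e}, resistant {sol.y[1, -1]:.2e}")
    ```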

  16. Assessment of a numerical model to reproduce event‐scale erosion and deposition distributions in a braided river

    Science.gov (United States)

    Measures, R.; Hicks, D. M.; Brasington, J.

    2016-01-01

    Abstract Numerical morphological modeling of braided rivers, using a physics‐based approach, is increasingly used as a technique to explore controls on river pattern and, from an applied perspective, to simulate the impact of channel modifications. This paper assesses a depth‐averaged nonuniform sediment model (Delft3D) to predict the morphodynamics of a 2.5 km long reach of the braided Rees River, New Zealand, during a single high‐flow event. Evaluation of model performance primarily focused upon using high‐resolution Digital Elevation Models (DEMs) of Difference, derived from a fusion of terrestrial laser scanning and optical empirical bathymetric mapping, to compare observed and predicted patterns of erosion and deposition and reach‐scale sediment budgets. For the calibrated model, this was supplemented with planform metrics (e.g., braiding intensity). Extensive sensitivity analysis of model functions and parameters was executed, including consideration of numerical scheme for bed load component calculations, hydraulics, bed composition, bed load transport and bed slope effects, bank erosion, and frequency of calculations. Total predicted volumes of erosion and deposition corresponded well to those observed. The difference between predicted and observed volumes of erosion was less than the factor of two that characterizes the accuracy of the Gaeuman et al. bed load transport formula. Grain size distributions were best represented using two φ intervals. For unsteady flows, results were sensitive to the morphological time scale factor. The approach of comparing observed and predicted morphological sediment budgets shows the value of using natural experiment data sets for model testing. Sensitivity results are transferable to guide Delft3D applications to other rivers. PMID:27708477
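    The DEM-of-Difference evaluation described above reduces to differencing two elevation grids and integrating positive and negative changes. A hedged sketch with synthetic DEMs and an assumed level-of-detection threshold:

    ```python
    # Morphological sediment budget from a DEM of Difference (DoD).
    import numpy as np

    rng = np.random.default_rng(5)
    cell_area = 1.0                            # m^2 per grid cell (assumed)
    dem_pre = rng.normal(100.0, 0.5, size=(200, 300))
    dem_post = dem_pre + rng.normal(0.0, 0.05, size=dem_pre.shape)

    dod = dem_post - dem_pre                   # elevation change per cell
    lod = 0.05                                 # level of detection, m (assumed)
    deposition = dod[dod > lod].sum() * cell_area
    erosion = -dod[dod < -lod].sum() * cell_area
    print(f"deposition {deposition:.1f} m^3, erosion {erosion:.1f} m^3")
    ```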

  17. Skills of General Circulation and Earth System Models in reproducing streamflow to the ocean: the case of Congo river

    Science.gov (United States)

    Santini, M.; Caporaso, L.

    2017-12-01

    Despite the importance of water resources in the context of climate change, it is still difficult to correctly simulate the freshwater cycle over land via General Circulation and Earth System Models (GCMs and ESMs). Existing efforts from the Climate Model Intercomparison Project 5 (CMIP5) were mainly devoted to the validation of atmospheric variables like temperature and precipitation, with little attention to discharge. Here we investigate the present-day performance of GCMs and ESMs participating in CMIP5 in simulating the discharge of the river Congo to the sea, thanks to: i) the long-term availability of discharge data for the Kinshasa hydrological station, representative of more than 95% of the water flowing in the whole catchment; and ii) the river's still-limited human influence, which enables comparison with the (mostly) natural streamflow simulated within CMIP5. Our findings suggest that most models overestimate the streamflow seasonal cycle, especially in late winter and spring, while overestimation and variability across models are lower in late summer. Weighted ensemble means are also calculated, based on simulation performance according to several metrics, showing some improvement in the results. Although simulated inter-monthly and inter-annual percent anomalies do not appear significantly different from those in observed data, when translated into well-consolidated indicators of drought attributes (frequency, magnitude, timing, duration), usually adopted for more immediate communication to stakeholders and decision makers, such anomalies can be misleading. These inconsistencies produce incorrect assessments for water management planning and infrastructure (e.g. dams or irrigated areas), especially if models are used instead of measurements, as in the case of ungauged basins or basins with insufficient data, as well as when relying on models for future estimates without a preliminary quantification of model biases.

  18. Assessment of a numerical model to reproduce event-scale erosion and deposition distributions in a braided river

    Science.gov (United States)

    Williams, R. D.; Measures, R.; Hicks, D. M.; Brasington, J.

    2016-08-01

    Numerical morphological modeling of braided rivers, using a physics-based approach, is increasingly used as a technique to explore controls on river pattern and, from an applied perspective, to simulate the impact of channel modifications. This paper assesses a depth-averaged nonuniform sediment model (Delft3D) to predict the morphodynamics of a 2.5 km long reach of the braided Rees River, New Zealand, during a single high-flow event. Evaluation of model performance primarily focused upon using high-resolution Digital Elevation Models (DEMs) of Difference, derived from a fusion of terrestrial laser scanning and optical empirical bathymetric mapping, to compare observed and predicted patterns of erosion and deposition and reach-scale sediment budgets. For the calibrated model, this was supplemented with planform metrics (e.g., braiding intensity). Extensive sensitivity analysis of model functions and parameters was executed, including consideration of numerical scheme for bed load component calculations, hydraulics, bed composition, bed load transport and bed slope effects, bank erosion, and frequency of calculations. Total predicted volumes of erosion and deposition corresponded well to those observed. The difference between predicted and observed volumes of erosion was less than the factor of two that characterizes the accuracy of the Gaeuman et al. bed load transport formula. Grain size distributions were best represented using two φ intervals. For unsteady flows, results were sensitive to the morphological time scale factor. The approach of comparing observed and predicted morphological sediment budgets shows the value of using natural experiment data sets for model testing. Sensitivity results are transferable to guide Delft3D applications to other rivers.

  19. Quantitative Analysis of Probabilistic Models of Software Product Lines with Statistical Model Checking

    DEFF Research Database (Denmark)

    ter Beek, Maurice H.; Legay, Axel; Lluch Lafuente, Alberto

    2015-01-01

    We investigate the suitability of statistical model checking techniques for analysing quantitative properties of software product line models with probabilistic aspects. For this purpose, we enrich the feature-oriented language FLAN with action rates, which specify the likelihood of exhibiting particular behaviour or of installing features at a specific moment or in a specific order. The enriched language (called PFLAN) allows us to specify models of software product lines with probabilistic configurations and behaviour, e.g. by considering a PFLAN semantics based on discrete-time Markov chains. The Maude implementation of PFLAN is combined with the distributed statistical model checker MultiVeStA to perform quantitative analyses of a simple product line case study. The presented analyses include the likelihood of certain behaviour of interest (e.g. product malfunctioning) and the expected average cost of products.

  20. Quantitative Analysis of Probabilistic Models of Software Product Lines with Statistical Model Checking

    Directory of Open Access Journals (Sweden)

    Maurice H. ter Beek

    2015-04-01

    Full Text Available We investigate the suitability of statistical model checking techniques for analysing quantitative properties of software product line models with probabilistic aspects. For this purpose, we enrich the feature-oriented language FLan with action rates, which specify the likelihood of exhibiting particular behaviour or of installing features at a specific moment or in a specific order. The enriched language (called PFLan) allows us to specify models of software product lines with probabilistic configurations and behaviour, e.g. by considering a PFLan semantics based on discrete-time Markov chains. The Maude implementation of PFLan is combined with the distributed statistical model checker MultiVeStA to perform quantitative analyses of a simple product line case study. The presented analyses include the likelihood of certain behaviour of interest (e.g. product malfunctioning) and the expected average cost of products.
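    The core idea of statistical model checking, estimating a quantitative property from many simulated runs of a probabilistic model, can be shown with a toy discrete-time Markov chain. This is not PFLan or MultiVeStA; states and probabilities are invented.

    ```python
    # Monte Carlo estimate of P(reach "malfunction" within 50 steps) in a DTMC.
    import numpy as np

    P = np.array([[0.90, 0.08, 0.02],   # states: ok, degraded, malfunction
                  [0.10, 0.80, 0.10],
                  [0.00, 0.00, 1.00]])  # malfunction is absorbing

    def hits_malfunction(rng, horizon=50):
        state = 0
        for _ in range(horizon):
            state = rng.choice(3, p=P[state])
            if state == 2:
                return True
        return False

    rng = np.random.default_rng(4)
    runs = 20_000
    p_hat = sum(hits_malfunction(rng) for _ in range(runs)) / runs
    half_width = 1.96 * np.sqrt(p_hat * (1.0 - p_hat) / runs)
    print(f"P(malfunction within 50 steps) = {p_hat:.3f} +/- {half_width:.3f}")
    ```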

  1. Herd immunity and pneumococcal conjugate vaccine: a quantitative model.

    Science.gov (United States)

    Haber, Michael; Barskey, Albert; Baughman, Wendy; Barker, Lawrence; Whitney, Cynthia G; Shaw, Kate M; Orenstein, Walter; Stephens, David S

    2007-07-20

    Invasive pneumococcal disease in older children and adults declined markedly after introduction in 2000 of the pneumococcal conjugate vaccine for young children. An empirical quantitative model was developed to estimate the herd (indirect) effects on the incidence of invasive disease among persons ≥5 years of age induced by vaccination of young children with 1, 2, or ≥3 doses of the pneumococcal conjugate vaccine, Prevnar (PCV7), containing serotypes 4, 6B, 9V, 14, 18C, 19F and 23F. From 1994 to 2003, cases of invasive pneumococcal disease were prospectively identified in Georgia Health District-3 (eight metropolitan Atlanta counties) by Active Bacterial Core surveillance (ABCs). From 2000 to 2003, vaccine coverage levels of PCV7 for children aged 19-35 months in Fulton and DeKalb counties (of Atlanta) were estimated from the National Immunization Survey (NIS). Based on incidence data and the estimated average number of doses received by 15 months of age, a Poisson regression model was fit, describing the trend in invasive pneumococcal disease in groups not targeted for vaccination (i.e., adults and older children) before and after the introduction of PCV7. Highly significant declines in all the serotypes contained in PCV7 in all unvaccinated populations (5-19, 20-39, 40-64, and >64 years) from 2000 to 2003 were found under the model. No significant change in incidence was seen from 1994 to 1999, indicating rates were stable prior to vaccine introduction. Among unvaccinated persons 5+ years of age, the modeled incidence of disease caused by PCV7 serotypes as a group dropped 38.4%, 62.0%, and 76.6% for 1, 2, and 3 doses, respectively, received on average by the population of children by the time they are 15 months of age. Incidence of serotypes 14 and 23F had consistent significant declines in all unvaccinated age groups. In contrast, the herd immunity effects on vaccine-related serotype 6A incidence were inconsistent. Increasing trends of non
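    The trend model described above can be sketched as a Poisson GLM with a log population offset, where the covariate is the average number of PCV7 doses received by 15 months of age. All numbers below (coverage ramp, population, baseline rate) are invented.

    ```python
    # Poisson regression of invasive disease counts on average PCV7 doses.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    years = np.arange(1994, 2004)
    doses = np.where(years < 2000, 0.0, (years - 1999) * 0.8)  # fake dose ramp
    pop = np.full(years.size, 250_000.0)                       # person-years
    true_rate = 30e-5 * np.exp(-0.35 * doses)                  # cases/person-year
    cases = rng.poisson(true_rate * pop)

    X = sm.add_constant(doses)
    fit = sm.GLM(cases, X, family=sm.families.Poisson(),
                 offset=np.log(pop)).fit()
    decline = (1.0 - np.exp(fit.params[1])) * 100.0
    print(f"estimated incidence decline per average dose: {decline:.1f}%")
    ```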

  2. Repeatability and Reproducibility of Corneal Biometric Measurements Using the Visante Omni and a Rabbit Experimental Model of Post-Surgical Corneal Ectasia

    Science.gov (United States)

    Liu, Yu-Chi; Konstantopoulos, Aris; Riau, Andri K.; Bhayani, Raj; Lwin, Nyein C.; Teo, Ericia Pei Wen; Yam, Gary Hin Fai; Mehta, Jodhbir S.

    2015-01-01

    Purpose: To investigate the repeatability and reproducibility of the Visante Omni topography in obtaining topography measurements of rabbit corneas and to develop a post-surgical model of corneal ectasia. Methods: Eight rabbits were used to study the repeatability and reproducibility by assessing the intra- and interobserver bias and limits of agreement. Another nine rabbits, which underwent different diopters (D) of laser in situ keratomileusis (LASIK), were used for the development of the ectasia model. All eyes were examined with the Visante Omni, and corneal ultrastructure was evaluated with transmission electron microscopy (TEM). Results: There was no significant intra- or interobserver difference for mean steep and flat keratometry (K) values of simulated K, anterior, and posterior elevation measurements. Eyes that underwent −5 D LASIK had a significant increase in mean amplitude of astigmatism and posterior surface elevation with time (P for trend significant), yielding a model of corneal ectasia that was gradual in development and simulated the human condition. Translational Relevance: The results provide the foundations for the future evaluation of novel treatment modalities for post-surgical ectasia and keratoconus. PMID:25938004

  3. A Reliable and Reproducible Model for Assessing the Effect of Different Concentrations of α-Solanine on Rat Bone Marrow Mesenchymal Stem Cells

    Directory of Open Access Journals (Sweden)

    Adriana Ordóñez-Vásquez

    2017-01-01

    Full Text Available Alpha-solanine (α-solanine) is a glycoalkaloid present in potato (Solanum tuberosum). It has been of particular interest because of its toxicity and potential teratogenic effects that include abnormalities of the central nervous system, such as exencephaly, encephalocele, and anophthalmia. Various types of cell culture have been used as experimental models to determine the effect of α-solanine on cell physiology. The morphological changes in the mesenchymal stem cell upon exposure to α-solanine have not been established. This study aimed to describe a reliable and reproducible model for assessing the structural changes induced by exposure of mouse bone marrow mesenchymal stem cells (MSCs) to different concentrations of α-solanine for 24 h. The results demonstrate that nonlethal concentrations of α-solanine (2–6 μM) changed the morphology of the cells, including an increase in the number of nucleoli, suggesting elevated protein synthesis, and the formation of spicules. In addition, treatment with α-solanine reduced the number of adherent cells and the formation of colonies in culture. Immunophenotypic characterization and staining of MSCs are proposed as a reproducible method that allows description of cells exposed to the glycoalkaloid, α-solanine.

  4. Attempting to train a digital human model to reproduce human subject reach capabilities in an ejection seat aircraft

    NARCIS (Netherlands)

    Zehner, G.F.; Hudson, J.A.; Oudenhuijzen, A.

    2006-01-01

    From 1997 through 2002, the Air Force Research Lab and TNO Defence, Security and Safety (Business Unit Human Factors) were involved in a series of tests to quantify the accuracy of five Human Modeling Systems (HMSs) in determining accommodation limits of ejection seat aircraft. The results of these

  5. Isokinetic eccentric exercise as a model to induce and reproduce pathophysiological alterations related to delayed onset muscle soreness

    DEFF Research Database (Denmark)

    Lund, Henrik; Vestergaard-Poulsen, P; Kanstrup, I.L.

    1998-01-01

    Physiological alterations following unaccustomed eccentric exercise in an isokinetic dynamometer of the right m. quadriceps until exhaustion were studied, in order to create a model in which the physiological responses to physiotherapy could be measured. In experiment I (exp. I), seven selected p...

  6. Developing a Collection of Composable Data Translation Software Units to Improve Efficiency and Reproducibility in Ecohydrologic Modeling Workflows

    Science.gov (United States)

    Olschanowsky, C.; Flores, A. N.; FitzGerald, K.; Masarik, M. T.; Rudisill, W. J.; Aguayo, M.

    2017-12-01

    Dynamic models of the spatiotemporal evolution of water, energy, and nutrient cycling are important tools to assess impacts of climate and other environmental changes on ecohydrologic systems. These models require spatiotemporally varying environmental forcings like precipitation, temperature, humidity, windspeed, and solar radiation. These input data originate from a variety of sources, including global and regional weather and climate models, global and regional reanalysis products, and geostatistically interpolated surface observations. Data translation steps, often subsetting in space and/or time and transforming or converting variable units, represent a seemingly mundane but critical part of application workflows. Translation steps can introduce errors, misrepresent data, slow execution, and interrupt data provenance. We leverage a workflow that subsets a large regional dataset derived from the Weather Research and Forecasting (WRF) model and prepares inputs to the Parflow integrated hydrologic model to demonstrate the impact of translation tool software quality on scientific workflow results and performance. We propose that such workflows will benefit from a community approved collection of data transformation components. The components should be self-contained composable units of code. This design pattern enables automated parallelization and software verification, improving performance and reliability. Ensuring that individual translation components are self-contained and target minute tasks increases reliability. The small code size of each component enables effective unit and regression testing. The components can be automatically composed for efficient execution. An efficient data translation framework should be written to minimize data movement. Composing components within a single streaming process reduces data movement. Each component will typically have a low arithmetic intensity, meaning that it requires about the same number of
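
    As a sketch of the composability idea in the record above, the snippet below chains two small, self-contained translation units, a temporal subset and a unit conversion, into one streaming pipeline. Field names and the conversion factor are invented; this is not the WRF-to-Parflow workflow itself.

```python
from functools import reduce
from typing import Callable, Iterable

Record = dict
Translator = Callable[[Iterable[Record]], Iterable[Record]]

def subset_time(t0: float, t1: float) -> Translator:
    """Keep only records whose timestamp falls in [t0, t1)."""
    def step(records):
        return (r for r in records if t0 <= r["time"] < t1)
    return step

def convert_units(field: str, factor: float) -> Translator:
    """Multiply one field by a unit-conversion factor, lazily."""
    def step(records):
        for r in records:
            r = dict(r)          # avoid mutating upstream data
            r[field] *= factor
            yield r
    return step

def compose(*steps: Translator) -> Translator:
    """Chain translators into a single streaming process."""
    return lambda records: reduce(lambda rs, s: s(rs), steps, records)

# Hypothetical forcing records: convert precip rate from kg/m^2/s to mm/h.
pipeline = compose(subset_time(0, 24), convert_units("precip", 3600.0))
forcing = [{"time": 6, "precip": 2.5e-4}, {"time": 30, "precip": 1.0e-4}]
print(list(pipeline(forcing)))
```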

  7. Synchronized mammalian cell culture: part II--population ensemble modeling and analysis for development of reproducible processes.

    Science.gov (United States)

    Jandt, Uwe; Barradas, Oscar Platas; Pörtner, Ralf; Zeng, An-Ping

    2015-01-01

    The consideration of inherent population inhomogeneities of mammalian cell cultures becomes increasingly important for systems biology study and for developing more stable and efficient processes. However, variations of cellular properties belonging to different sub-populations and their potential effects on cellular physiology and kinetics of culture productivity under bioproduction conditions have not yet been much in the focus of research. Culture heterogeneity is strongly determined by the advance of the cell cycle. The assignment of cell-cycle specific cellular variations to large-scale process conditions can be optimally determined based on the combination of (partially) synchronized cultivation under otherwise physiological conditions and subsequent population-resolved model adaptation. The first step has been achieved using the physical selection method of countercurrent flow centrifugal elutriation, recently established in our group for different mammalian cell lines which is presented in Part I of this paper series. In this second part, we demonstrate the successful adaptation and application of a cell-cycle dependent population balance ensemble model to describe and understand synchronized bioreactor cultivations performed with two model mammalian cell lines, AGE1.HNAAT and CHO-K1. Numerical adaptation of the model to experimental data allows for detection of phase-specific parameters and for determination of significant variations between different phases and different cell lines. It shows that special care must be taken with regard to the sampling frequency in such oscillation cultures to minimize phase shift (jitter) artifacts. Based on predictions of long-term oscillation behavior of a culture depending on its start conditions, optimal elutriation setup trade-offs between high cell yields and high synchronization efficiency are proposed. © 2014 American Institute of Chemical Engineers.
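
    A population balance ensemble model is too large for a short sketch, but the underlying phase-resolved dynamics can be illustrated with a deterministic three-compartment cell-cycle model: a culture synchronized in G1 shows phase-fraction oscillations that damp out as the population desynchronizes. All rate constants below are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Cell-cycle compartments: G1 -> S -> G2/M -> division (back to 2x G1).
k = np.array([0.12, 0.18, 0.25])   # invented phase-exit rates, 1/h

def rhs(t, y):
    g1, s, g2m = y
    return [2 * k[2] * g2m - k[0] * g1,   # each dividing cell yields two G1 cells
            k[0] * g1 - k[1] * s,
            k[1] * s - k[2] * g2m]

y0 = [1.0, 0.0, 0.0]   # fully synchronized culture starting in G1
sol = solve_ivp(rhs, (0, 72), y0, dense_output=True)

# Phase fractions oscillate, then damp as the culture desynchronizes.
for ti in np.linspace(0, 72, 7):
    g1, s, g2m = sol.sol(ti)
    tot = g1 + s + g2m
    print(f"t={ti:5.1f} h  G1={g1/tot:.2f}  S={s/tot:.2f}  G2/M={g2m/tot:.2f}")
```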

  8. Preserve specimens for reproducibility

    Czech Academy of Sciences Publication Activity Database

    Krell, F.-T.; Klimeš, Petr; Rocha, L. A.; Fikáček, M.; Miller, S. E.

    2016-01-01

    Vol. 539, No. 7628 (2016), p. 168 ISSN 0028-0836 Institutional support: RVO:60077344 Keywords: reproducibility * specimen * biodiversity Subject RIV: EH - Ecology, Behaviour Impact factor: 40.137, year: 2016 http://www.nature.com/nature/journal/v539/n7628/full/539168b.html

  9. Minimum Information about a Cardiac Electrophysiology Experiment (MICEE): Standardised Reporting for Model Reproducibility, Interoperability, and Data Sharing

    Science.gov (United States)

    Quinn, TA; Granite, S; Allessie, MA; Antzelevitch, C; Bollensdorff, C; Bub, G; Burton, RAB; Cerbai, E; Chen, PS; Delmar, M; DiFrancesco, D; Earm, YE; Efimov, IR; Egger, M; Entcheva, E; Fink, M; Fischmeister, R; Franz, MR; Garny, A; Giles, WR; Hannes, T; Harding, SE; Hunter, PJ; Iribe, G; Jalife, J; Johnson, CR; Kass, RS; Kodama, I; Koren, G; Lord, P; Markhasin, VS; Matsuoka, S; McCulloch, AD; Mirams, GR; Morley, GE; Nattel, S; Noble, D; Olesen, SP; Panfilov, AV; Trayanova, NA; Ravens, U; Richard, S; Rosenbaum, DS; Rudy, Y; Sachs, F; Sachse, FB; Saint, DA; Schotten, U; Solovyova, O; Taggart, P; Tung, L; Varró, A; Volders, PG; Wang, K; Weiss, JN; Wettwer, E; White, E; Wilders, R; Winslow, RL; Kohl, P

    2011-01-01

    Cardiac experimental electrophysiology is in need of a well-defined Minimum Information Standard for recording, annotating, and reporting experimental data. As a step toward establishing this, we present a draft standard, called Minimum Information about a Cardiac Electrophysiology Experiment (MICEE). The ultimate goal is to develop a useful tool for cardiac electrophysiologists which facilitates and improves dissemination of the minimum information necessary for reproduction of cardiac electrophysiology research, allowing for easier comparison and utilisation of findings by others. It is hoped that this will enhance the integration of individual results into experimental, computational, and conceptual models. In its present form, this draft is intended for assessment and development by the research community. We invite the reader to join this effort, and, if deemed productive, implement the Minimum Information about a Cardiac Electrophysiology Experiment standard in their own work. PMID:21745496

  10. Quantitative analysis of prediction models for hot cracking in ...

    Indian Academy of Sciences (India)

    A Rodríguez-Prieto

    2017-11-16

    Nov 16, 2017 ... enhancing safety margins and adding greater precision to quantitative accident prediction [45]. One deterministic methodology is the stringency level (SL) approach, which is recognized as a valuable decision tool in the selection of standardized materials specifications to prevent potential failures [3].

  11. Hidden Markov Model for quantitative prediction of snowfall and ...

    Indian Academy of Sciences (India)

    forecasting of quantitative snowfall at 10 meteorological stations in Pir-Panjal and Great Himalayan mountain ranges of Indian Himalaya. At these stations of Snow and Avalanche Study Establishment (SASE), snow and meteorological data have been recorded twice daily at 08:30 and 17:30 hrs for more than the last four decades ...
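
    The record above is a truncated preview, but the machinery of an HMM forecast is compact: the forward algorithm accumulates the probability of an observation sequence under transition and emission matrices. The three hypothetical snowfall states and all numbers below are invented, not taken from the paper.

```python
import numpy as np

# Hidden states: 0 = dry, 1 = light snow, 2 = heavy snow (invented).
pi = np.array([0.6, 0.3, 0.1])        # initial state distribution
A = np.array([[0.7, 0.2, 0.1],        # state-transition probabilities
              [0.3, 0.5, 0.2],
              [0.1, 0.4, 0.5]])
B = np.array([[0.80, 0.15, 0.05],     # P(observation category | state)
              [0.20, 0.60, 0.20],
              [0.05, 0.35, 0.60]])

obs = [0, 1, 1, 2]  # coded twice-daily observation categories (invented)

# Forward algorithm: alpha[i] = P(obs so far, current state = i).
alpha = pi * B[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]

print("P(observation sequence):", alpha.sum())
print("P(state | observations):", alpha / alpha.sum())
```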

  12. A Transformative Model for Undergraduate Quantitative Biology Education

    Science.gov (United States)

    Usher, David C.; Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.

    2010-01-01

    The "BIO2010" report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3)…

  13. Reproducibility of haemodynamical simulations in a subject-specific stented aneurysm model--a report on the Virtual Intracranial Stenting Challenge 2007.

    Science.gov (United States)

    Radaelli, A G; Augsburger, L; Cebral, J R; Ohta, M; Rüfenacht, D A; Balossino, R; Benndorf, G; Hose, D R; Marzo, A; Metcalfe, R; Mortier, P; Mut, F; Reymond, P; Socci, L; Verhegghe, B; Frangi, A F

    2008-07-19

    This paper presents the results of the Virtual Intracranial Stenting Challenge (VISC) 2007, an international initiative whose aim was to establish the reproducibility of state-of-the-art haemodynamical simulation techniques in subject-specific stented models of intracranial aneurysms (IAs). IAs are pathological dilatations of the cerebral artery walls, which are associated with high mortality and morbidity rates due to subarachnoid haemorrhage following rupture. The deployment of a stent as flow diverter has recently been indicated as a promising treatment option, which has the potential to protect the aneurysm by reducing the action of haemodynamical forces and facilitating aneurysm thrombosis. The direct assessment of changes in aneurysm haemodynamics after stent deployment is hampered by limitations in existing imaging techniques and currently requires resorting to numerical simulations. Numerical simulations also have the potential to assist in the personalized selection of an optimal stent design prior to intervention. However, from the current literature it is difficult to assess the level of technological advancement and the reproducibility of haemodynamical predictions in stented patient-specific models. The VISC 2007 initiative engaged in the development of a multicentre-controlled benchmark to analyse differences induced by diverse grid generation and computational fluid dynamics (CFD) technologies. The challenge also represented an opportunity to provide a survey of available technologies currently adopted by international teams from both academic and industrial institutions for constructing computational models of stented aneurysms. The results demonstrate the ability of current strategies in consistently quantifying the performance of three commercial intracranial stents, and contribute to reinforce the confidence in haemodynamical simulation, thus taking a step forward towards the introduction of simulation tools to support diagnostics and

  14. Accuracy and reproducibility of voxel based superimposition of cone beam computed tomography models on the anterior cranial base and the zygomatic arches.

    Directory of Open Access Journals (Sweden)

    Rania M Nada

    Full Text Available Superimposition of serial Cone Beam Computed Tomography (CBCT) scans has become a valuable tool for three dimensional (3D) assessment of treatment effects and stability. Voxel based image registration is a newly developed semi-automated technique for superimposition and comparison of two CBCT scans. The accuracy and reproducibility of CBCT superimposition on the anterior cranial base or the zygomatic arches using voxel based image registration was tested in this study. 16 pairs of 3D CBCT models were constructed from pre and post treatment CBCT scans of 16 adult dysgnathic patients. Each pair was registered on the anterior cranial base three times and on the left zygomatic arch twice. Following each superimposition, the mean absolute distances between the 2 models were calculated at 4 regions: anterior cranial base, forehead, left and right zygomatic arches. The mean distances between the models ranged from 0.2 to 0.37 mm (SD 0.08-0.16) for the anterior cranial base registration and from 0.2 to 0.45 mm (SD 0.09-0.27) for the zygomatic arch registration. The mean differences between the two registration zones ranged between 0.12 to 0.19 mm at the 4 regions. Voxel based image registration on both zones could be considered as an accurate and a reproducible method for CBCT superimposition. The left zygomatic arch could be used as a stable structure for the superimposition of smaller field of view CBCT scans where the anterior cranial base is not visible.
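
    The headline metric in this record, the mean absolute distance between two registered surface models, is straightforward to compute once the models are point clouds. A sketch using nearest-neighbour queries on synthetic points (not CBCT data):

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_absolute_distance(model_a, model_b):
    """Symmetric mean absolute distance between two registered
    surface point clouds (N x 3 arrays of coordinates)."""
    d_ab, _ = cKDTree(model_b).query(model_a)  # each a-point to nearest b-point
    d_ba, _ = cKDTree(model_a).query(model_b)  # and vice versa
    return (d_ab.mean() + d_ba.mean()) / 2.0

# Synthetic example: a surface and a slightly perturbed copy (invented data).
rng = np.random.default_rng(0)
pre = rng.normal(size=(5000, 3))
post = pre + rng.normal(scale=0.3, size=(5000, 3))
print(f"mean absolute distance: {mean_absolute_distance(pre, post):.3f}")
```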

  15. Quantitative modelling and analysis of a Chinese smart grid: a stochastic model checking case study

    DEFF Research Database (Denmark)

    Yuksel, Ender; Nielson, Hanne Riis; Nielson, Flemming

    2014-01-01

    Cyber-physical systems integrate information and communication technology with the physical elements of a system, mainly for monitoring and controlling purposes. The conversion of the traditional power grid into a smart grid, a fundamental example of a cyber-physical system, raises a number of issues that require novel methods and applications. One of the important issues in this context is the verification of certain quantitative properties of the system. In this paper, we consider a specific Chinese smart grid implementation as a case study and address the verification problem for performance and energy consumption. We employ a stochastic model checking approach and present our modelling and analysis study using the PRISM model checker.
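
    For an absorbing discrete-time Markov model, one such quantitative property, expected total energy consumption, reduces to a linear solve: with transient-to-transient transition block Q and per-step reward r, the expected cumulative reward is v = (I − Q)⁻¹ r. A toy example with invented numbers (not the PRISM case study):

```python
import numpy as np

# Transient-state transition block Q; remaining probability mass in each row
# flows to an absorbing "done" state. Rewards are per-step energy costs.
# All numbers are invented for illustration.
Q = np.array([[0.2, 0.6],
              [0.1, 0.3]])
r = np.array([5.0, 2.0])   # energy units consumed per step in each state

# Expected total reward until absorption: solve (I - Q) v = r.
v = np.linalg.solve(np.eye(2) - Q, r)
print("expected total energy from each start state:", v)
```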

  16. ASSETS MANAGEMENT - A CONCEPTUAL MODEL DECOMPOSING VALUE FOR THE CUSTOMER AND A QUANTITATIVE MODEL

    Directory of Open Access Journals (Sweden)

    Susana Nicola

    2015-03-01

    Full Text Available In this paper we describe the application of a modeling framework, the so-called Conceptual Model Decomposing Value for the Customer (CMDVC), in a Footwear Industry case study, to ascertain the usefulness of this approach. The value networks were used to identify the participants, both tangible and intangible deliverables/endogenous and exogenous assets, and the analysis of their interactions as the indication for an adequate value proposition. The quantitative model of benefits and sacrifices, using the Fuzzy AHP method, enables the discussion of how the CMDVC can be applied and used in the enterprise environment and provided new relevant relations between perceived benefits (PBs).

  17. A quantitative method for defining high-arched palate using the Tcof1+/− mutant mouse as a model

    Science.gov (United States)

    Conley, Zachary R.; Hague, Molly; Kurosaka, Hiroshi; Dixon, Jill; Dixon, Michael J.; Trainor, Paul A.

    2016-01-01

    The palate functions as the roof of the mouth in mammals, separating the oral and nasal cavities. Its complex embryonic development and assembly poses unique susceptibilities to intrinsic and extrinsic disruptions. Such disruptions may cause failure of the developing palatal shelves to fuse along the midline resulting in a cleft. In other cases the palate may fuse at an arch, resulting in a vaulted oral cavity, termed high-arched palate. There are many models available for studying the pathogenesis of cleft palate but a relative paucity for high-arched palate. One condition exhibiting either cleft palate or high-arched palate is Treacher Collins syndrome, a congenital disorder characterized by numerous craniofacial anomalies. We quantitatively analyzed palatal perturbations in the Tcof1+/− mouse model of Treacher Collins syndrome, which phenocopies the condition in humans. We discovered that 46% of Tcof1+/− mutant embryos and newborn pups exhibit either soft clefts or full clefts. In addition, 17% of Tcof1+/− mutants were found to exhibit high-arched palate, defined as two sigma above the corresponding wild-type population mean for height and angular based arch measurements. Furthermore, palatal shelf length and shelf width were decreased in all Tcof1+/− mutant embryos and pups compared to controls. Interestingly, these phenotypes were subsequently ameliorated through genetic inhibition of p53. The results of our study therefore provide a simple, reproducible and quantitative method for investigating models of high-arched palate. PMID:26772999
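
    The paper's quantitative definition, high-arched palate as a measurement more than two sigma above the wild-type population mean, translates directly into code. The arch-height values below are invented:

```python
import numpy as np

def high_arched(measurements, wild_type):
    """Flag specimens whose palatal arch measurement exceeds the
    wild-type mean by more than two standard deviations (sigma)."""
    mu = wild_type.mean()
    sigma = wild_type.std(ddof=1)   # sample standard deviation
    return measurements > mu + 2 * sigma

wt = np.array([1.10, 1.15, 1.08, 1.12, 1.11])   # invented arch heights
mut = np.array([1.12, 1.35, 1.09, 1.41])
print(high_arched(mut, wt))   # -> [False  True False  True]
```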

  18. A quantitative method for defining high-arched palate using the Tcof1(+/-) mutant mouse as a model.

    Science.gov (United States)

    Conley, Zachary R; Hague, Molly; Kurosaka, Hiroshi; Dixon, Jill; Dixon, Michael J; Trainor, Paul A

    2016-07-15

    The palate functions as the roof of the mouth in mammals, separating the oral and nasal cavities. Its complex embryonic development and assembly poses unique susceptibilities to intrinsic and extrinsic disruptions. Such disruptions may cause failure of the developing palatal shelves to fuse along the midline resulting in a cleft. In other cases the palate may fuse at an arch, resulting in a vaulted oral cavity, termed high-arched palate. There are many models available for studying the pathogenesis of cleft palate but a relative paucity for high-arched palate. One condition exhibiting either cleft palate or high-arched palate is Treacher Collins syndrome, a congenital disorder characterized by numerous craniofacial anomalies. We quantitatively analyzed palatal perturbations in the Tcof1(+/-) mouse model of Treacher Collins syndrome, which phenocopies the condition in humans. We discovered that 46% of Tcof1(+/-) mutant embryos and newborn pups exhibit either soft clefts or full clefts. In addition, 17% of Tcof1(+/-) mutants were found to exhibit high-arched palate, defined as two sigma above the corresponding wild-type population mean for height and angular based arch measurements. Furthermore, palatal shelf length and shelf width were decreased in all Tcof1(+/-) mutant embryos and pups compared to controls. Interestingly, these phenotypes were subsequently ameliorated through genetic inhibition of p53. The results of our study therefore provide a simple, reproducible and quantitative method for investigating models of high-arched palate. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Reproducibility of ultrasonic testing

    International Nuclear Information System (INIS)

    Lecomte, J.-C.; Thomas, Andre; Launay, J.-P.; Martin, Pierre

    The reproducibility of amplitude quotations for both artificial and natural reflectors was studied for several combinations of instrument/search unit, all being of the same type. This study shows that in industrial inspection if a range of standardized equipment is used, a margin of error of about 6 decibels has to be taken into account (confidence interval of 95%). This margin is about 4 to 5 dB for natural or artificial defects located in the central area and about 6 to 7 dB for artificial defects located on the back surface. This lack of reproducibility seems to be attributable first to the search unit and then to the instrument and operator. These results were confirmed by analysis of calibration data obtained from 250 tests performed by 25 operators under shop conditions. The margin of error was higher than the 6 dB obtained in the study [fr]

  20. A Review of Quantitative Situation Assessment Models for Nuclear Power Plant Operators

    International Nuclear Information System (INIS)

    Lee, Hyun Chul; Seong, Poong Hyun

    2009-01-01

    Situation assessment is the process of developing situation awareness, and situation awareness is defined as 'the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future.' Situation awareness is an important element influencing human actions because human decision making is based on the result of situation assessment or situation awareness. There are many models of situation awareness, and those models can be categorized as qualitative or quantitative. Because the effects of input factors on situation awareness can be investigated with quantitative models, they are more useful than qualitative models for the design of operator interfaces, automation strategies, training programs, and so on. This study presents a review of two quantitative models of situation assessment (SA) for nuclear power plant operators

  1. Retrospective Correction of Physiological Noise: Impact on Sensitivity, Specificity, and Reproducibility of Resting-State Functional Connectivity in a Reading Network Model.

    Science.gov (United States)

    Krishnamurthy, Venkatagiri; Krishnamurthy, Lisa C; Schwam, Dina M; Ealey, Ashley; Shin, Jaemin; Greenberg, Daphne; Morris, Robin D

    2018-03-01

    It is well accepted that physiological noise (PN) obscures the detection of neural fluctuations in resting-state functional connectivity (rsFC) magnetic resonance imaging. However, a clear consensus for an optimal PN correction (PNC) methodology and how it can impact the rsFC signal characteristics is still lacking. In this study, we probe the impact of three PNC methods: RETROICOR (Glover et al., 2000), ANATICOR (Jo et al., 2010), and RVTMBPM (Bianciardi et al., 2009). Using a reading network model, we systematically explore the effects of PNC optimization on sensitivity, specificity, and reproducibility of rsFC signals. In terms of specificity, ANATICOR was found to be effective in removing local white matter (WM) fluctuations and also resulted in aggressive removal of expected cortical-to-subcortical functional connections. The ability of RETROICOR to remove PN was equivalent to removal of simulated random PN such that it artificially inflated the connection strength, thereby decreasing sensitivity. RVTMBPM maintained specificity and sensitivity by balanced removal of vasodilatory PN and local WM nuisance edges. Another aspect of this work was exploring the effects of PNC on identifying reading group differences. Most PNC methods accounted for between-subject PN variability resulting in reduced intersession reproducibility. This effect facilitated the detection of the most consistent group differences. RVTMBPM was most effective in detecting significant group differences due to its inherent sensitivity to removing spatially structured and temporally repeating PN arising from dense vasculature. Finally, results suggest that combining all three PNC methods resulted in "overcorrection" by removing signal along with noise.
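
    All three correction methods share a common core step: regressing nuisance time courses out of each voxel's signal. A generic least-squares sketch of that step, with synthetic cardiac and respiratory regressors (not the specific RETROICOR, ANATICOR, or RVTMBPM implementations):

```python
import numpy as np

def remove_physio_noise(ts, nuisance):
    """Project nuisance regressors out of a voxel time series via
    ordinary least squares, keeping the signal mean."""
    X = np.column_stack([np.ones(len(ts)), nuisance])
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
    return ts - X @ beta + beta[0]

# Synthetic voxel: mean + cardiac + respiratory components + noise (invented).
t = np.arange(300) * 2.0                    # 300 volumes, TR = 2 s
cardiac = np.sin(2 * np.pi * 1.0 * t)       # aliased cardiac phase regressor
resp = np.sin(2 * np.pi * 0.3 * t)          # respiratory regressor
rng = np.random.default_rng(1)
voxel = 100 + 0.8 * cardiac + 0.5 * resp + rng.normal(0, 0.2, t.size)

clean = remove_physio_noise(voxel, np.column_stack([cardiac, resp]))
print("std before/after correction:", voxel.std().round(3), clean.std().round(3))
```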

  2. Development and validation of open-source software for DNA mixture interpretation based on a quantitative continuous model.

    Science.gov (United States)

    Manabe, Sho; Morimoto, Chie; Hamano, Yuya; Fujimoto, Shuntaro; Tamaki, Keiji

    2017-01-01

    In criminal investigations, forensic scientists need to evaluate DNA mixtures. The estimation of the number of contributors and evaluation of the contribution of a person of interest (POI) from these samples are challenging. In this study, we developed a new open-source software "Kongoh" for interpreting DNA mixtures based on a quantitative continuous model. The model uses quantitative information of peak heights in the DNA profile and considers the effect of artifacts and allelic drop-out. By using this software, the likelihoods of 1-4 persons' contributions are calculated, and the most optimal number of contributors is automatically determined; this differs from other open-source software. Therefore, we can eliminate the need to manually determine the number of contributors before the analysis. Kongoh also considers allele- or locus-specific effects of biological parameters based on the experimental data. We then validated Kongoh by calculating the likelihood ratio (LR) of a POI's contribution in true contributors and non-contributors by using 2-4 person mixtures analyzed through a 15 short tandem repeat typing system. Most LR values obtained from Kongoh during true-contributor testing strongly supported the POI's contribution even for small amounts or degraded DNA samples. Kongoh correctly rejected a false hypothesis in the non-contributor testing, generated reproducible LR values, and demonstrated higher accuracy of the estimated number of contributors than another software package based on the quantitative continuous model. Therefore, Kongoh is useful in accurately interpreting DNA evidence like mixtures and small amounts or degraded DNA samples.
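
    At its root, such software reports a likelihood ratio LR = P(evidence | POI contributed) / P(evidence | unknown contributor). The sketch below shows that skeleton for a single locus and a single-source trace with invented allele frequencies; Kongoh's actual continuous model additionally weighs peak heights, drop-out, and artifacts.

```python
# Toy likelihood ratio for one locus, single-source trace.
# Allele frequencies and genotypes are invented for illustration.
def genotype_probability(genotype, freqs):
    """Hardy-Weinberg probability of an unrelated person's genotype."""
    a, b = genotype
    return freqs[a] ** 2 if a == b else 2 * freqs[a] * freqs[b]

freqs = {"12": 0.18, "13": 0.25, "14": 0.30}   # population allele frequencies
evidence = ("12", "14")                        # alleles observed in the trace
poi = ("12", "14")                             # person of interest's genotype

# Hp: the POI is the source -> evidence is certain given the POI's genotype.
# Hd: an unknown, unrelated person is the source.
p_e_given_hp = 1.0 if poi == evidence else 0.0
p_e_given_hd = genotype_probability(evidence, freqs)

print("LR =", p_e_given_hp / p_e_given_hd)   # ~9.3: supports Hp moderately
```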

  3. Reproducible research in palaeomagnetism

    Science.gov (United States)

    Lurcock, Pontus; Florindo, Fabio

    2015-04-01

    The reproducibility of research findings is attracting increasing attention across all scientific disciplines. In palaeomagnetism as elsewhere, computer-based analysis techniques are becoming more commonplace, complex, and diverse. Analyses can often be difficult to reproduce from scratch, both for the original researchers and for others seeking to build on the work. We present a palaeomagnetic plotting and analysis program designed to make reproducibility easier. Part of the problem is the divide between interactive and scripted (batch) analysis programs. An interactive desktop program with a graphical interface is a powerful tool for exploring data and iteratively refining analyses, but usually cannot operate without human interaction. This makes it impossible to re-run an analysis automatically, or to integrate it into a larger automated scientific workflow - for example, a script to generate figures and tables for a paper. In some cases the parameters of the analysis process itself are not saved explicitly, making it hard to repeat or improve the analysis even with human interaction. Conversely, non-interactive batch tools can be controlled by pre-written scripts and configuration files, allowing an analysis to be 'replayed' automatically from the raw data. However, this advantage comes at the expense of exploratory capability: iteratively improving an analysis entails a time-consuming cycle of editing scripts, running them, and viewing the output. Batch tools also tend to require more computer expertise from their users. PuffinPlot is a palaeomagnetic plotting and analysis program which aims to bridge this gap. First released in 2012, it offers both an interactive, user-friendly desktop interface and a batch scripting interface, both making use of the same core library of palaeomagnetic functions. We present new improvements to the program that help to integrate the interactive and batch approaches, allowing an analysis to be interactively explored and refined

  4. 76 FR 28819 - NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection...

    Science.gov (United States)

    2011-05-18

    ... COMMISSION NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection... issued for public comment a document entitled: NUREG/CR-XXXX, ``Development of Quantitative Software... development of regulatory guidance for using risk information related to digital systems in the licensing...

  5. Dynamics of childhood growth and obesity development and validation of a quantitative mathematical model

    Science.gov (United States)

    Clinicians and policy makers need the ability to predict quantitatively how childhood bodyweight will respond to obesity interventions. We developed and validated a mathematical model of childhood energy balance that accounts for healthy growth and development of obesity, and that makes quantitative...

  6. Towards Reproducibility in Computational Hydrology

    Science.gov (United States)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei; Duffy, Chris; Arheimer, Berit

    2017-04-01

    Reproducibility is a foundational principle in scientific research. The ability to independently re-run an experiment helps to verify the legitimacy of individual findings, and evolve (or reject) hypotheses and models of how environmental systems function, and move them from specific circumstances to more general theory. Yet in computational hydrology (and in environmental science more widely) the code and data that produces published results are not regularly made available, and even if they are made available, there remains a multitude of generally unreported choices that an individual scientist may have made that impact the study result. This situation strongly inhibits the ability of our community to reproduce and verify previous findings, as all the information and boundary conditions required to set up a computational experiment simply cannot be reported in an article's text alone. In Hutton et al. (2016) [1], we argue that a cultural change is required in the computational hydrological community, in order to advance and make more robust the process of knowledge creation and hypothesis testing. We need to adopt common standards and infrastructures to: (1) make code readable and re-useable; (2) create well-documented workflows that combine re-useable code together with data to enable published scientific findings to be reproduced; (3) make code and workflows available, easy to find, and easy to interpret, using code and code metadata repositories. To create change we argue for improved graduate training in these areas. In this talk we reflect on our progress in achieving reproducible, open science in computational hydrology, which is relevant to the broader computational geoscience community. In particular, we draw on our experience in the Switch-On (EU funded) virtual water science laboratory (http://www.switch-on-vwsl.eu/participate/), which is an open platform for collaboration in hydrological experiments (e.g. [2]). While we use computational hydrology as

  7. Linear regression models for quantitative assessment of left ...

    African Journals Online (AJOL)

    STORAGESEVER

    2008-07-04

    Jul 4, 2008 ... computed. Linear regression models for the prediction of left ventricular structures were established. Prediction models for ... study aimed at establishing linear regression models that could be used in the prediction ..... Is white coat hypertension associated with arterial disease or left ventricular hypertrophy?

  8. Opening Reproducible Research

    Science.gov (United States)

    Nüst, Daniel; Konkol, Markus; Pebesma, Edzer; Kray, Christian; Klötgen, Stephanie; Schutzeichel, Marc; Lorenz, Jörg; Przibytzin, Holger; Kussmann, Dirk

    2016-04-01

    Open access is not only a form of publishing such that research papers become available to the large public free of charge, it also refers to a trend in science that the act of doing research becomes more open and transparent. When science transforms to open access, we mean not only access to papers and to the research data being collected or generated, but also access to the data used and the procedures carried out in the research paper. Increasingly, scientific results are generated by numerical manipulation of data that were already collected, and may involve simulation experiments that are completely carried out computationally. Reproducibility of research findings, the ability to repeat experimental procedures and confirm previously found results, is at the heart of the scientific method (Pebesma, Nüst and Bivand, 2012). As opposed to the collection of experimental data in labs or nature, computational experiments lend themselves very well for reproduction. Some of the reasons why scientists do not publish data and computational procedures that allow reproduction will be hard to change, e.g. privacy concerns in the data, fear for embarrassment or of losing a competitive advantage. Other reasons, however, involve technical aspects, and include the lack of standard procedures to publish such information and the lack of benefits after publishing them. We aim to resolve these two technical aspects. We propose a system that supports the evolution of scientific publications from static papers into dynamic, executable research documents. The DFG-funded experimental project Opening Reproducible Research (ORR) aims for the main aspects of open access, by improving the exchange of, by facilitating productive access to, and by simplifying reuse of research results that are published over the Internet. Central to the project is a new form for creating and providing research results, the executable research compendium (ERC), which not only enables third parties to

  9. What should a quantitative model of masking look like and why would we want it?

    Science.gov (United States)

    Francis, Gregory

    2008-07-15

    Quantitative models of backward masking appeared almost as soon as computing technology was available to simulate them, and continued interest in masking has led to the development of new models. Despite this long history, the impact of the models on the field has been limited because they have fundamental shortcomings. This paper discusses these shortcomings and outlines what future quantitative models should look like. It also discusses several issues about modeling and how a model could be used by researchers to better explore masking and other aspects of cognition.

  10. Inhibition of basophil activation by histamine: a sensitive and reproducible model for the study of the biological activity of high dilutions.

    Science.gov (United States)

    Sainte-Laudy, J; Belon, Ph

    2009-10-01

    (another human basophil activation marker). Results were expressed as mean fluorescence intensity of the CD203c positive population (MFI-CD203c) and an activation index calculated by an algorithm. For the mouse basophil model, histamine was measured spectrofluorimetrically. The main result obtained over 28 years of work was the demonstration of a reproducible inhibition of human basophil activation by high dilutions of histamine, with the effect peaking in the range of 15-17CH. The effect was not significant when histamine was replaced by histidine (a histamine precursor) or when cimetidine (a histamine H2 receptor antagonist) was added to the incubation medium. These results were confirmed by flow cytometry. Using the latter technique, we also showed that 4-Methyl histamine (an H2 agonist) induced a similar effect, in contrast to 1-Methyl histamine, an inactive histamine metabolite. Using the mouse model, we showed that histamine high dilutions, in the same range of dilutions, inhibited histamine release. Successively, using different models of human and murine basophil activation, we demonstrated that high dilutions of histamine, in the range of 15-17CH, induced a reproducible biological effect. This phenomenon has been confirmed by a multi-center study using the HBDT model and by at least three independent laboratories by flow cytometry. The specificity of the observed effect was confirmed, versus the water controls at the same dilution level, by the absence of biological activity of inactive compounds such as histidine and 1-Methyl histamine and by the reversibility of this effect in the presence of a histamine H2 receptor antagonist.

  11. A quantitative risk-based model for reasoning over critical system properties

    Science.gov (United States)

    Feather, M. S.

    2002-01-01

    This position paper suggests the use of a quantitative risk-based model to help support reasoning and decision making that spans many of the critical properties such as security, safety, survivability, fault tolerance, and real-time.

  12. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    Science.gov (United States)

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, an...

  13. Using Integrated Environmental Modeling to Automate a Process-Based Quantitative Microbial Risk Assessment (presentation)

    Science.gov (United States)

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and...

  14. Linear regression models for quantitative assessment of left ...

    African Journals Online (AJOL)

    Changes in left ventricular structures and function have been reported in cardiomyopathies. No prediction models have been established in this environment. This study established regression models for prediction of left ventricular structures in normal subjects. A sample of normal subjects was drawn from a large urban ...

  15. Hidden Markov Model for quantitative prediction of snowfall and ...

    Indian Academy of Sciences (India)

    used to simulate large-scale atmospheric circulation patterns and for determining the effect of changes ... to simulate precipitation and snow cover over the Himalaya. Though this model underestimated pre- ...... Wilks D and Wilby R 1999 The weather generation game: A review of stochastic weather models; Progr. Phys.

  16. A Quantitative Causal Model Theory of Conditional Reasoning

    Science.gov (United States)

    Fernbach, Philip M.; Erb, Christopher D.

    2013-01-01

    The authors propose and test a causal model theory of reasoning about conditional arguments with causal content. According to the theory, the acceptability of modus ponens (MP) and affirming the consequent (AC) reflect the conditional likelihood of causes and effects based on a probabilistic causal model of the scenario being judged. Acceptability…

  17. 77 FR 41985 - Use of Influenza Disease Models To Quantitatively Evaluate the Benefits and Risks of Vaccines: A...

    Science.gov (United States)

    2012-07-17

    ...] Use of Influenza Disease Models To Quantitatively Evaluate the Benefits and Risks of Vaccines: A... Influenza Disease Models to Quantitatively Evaluate the Benefits and Risks of Vaccines: A Technical Workshop... model to quantitatively estimate the benefits and risks of a hypothetical influenza vaccine, and to seek...

  18. Quantitative modeling of chronic myeloid leukemia: insights from radiobiology

    Science.gov (United States)

    Radivoyevitch, Tomas; Hlatky, Lynn; Landaw, Julian

    2012-01-01

    Mathematical models of chronic myeloid leukemia (CML) cell population dynamics are being developed to improve CML understanding and treatment. We review such models in light of relevant findings from radiobiology, emphasizing 3 points. First, the CML models almost all assert that the latency time, from CML initiation to diagnosis, is at most ∼ 10 years. Meanwhile, current radiobiologic estimates, based on Japanese atomic bomb survivor data, indicate a substantially higher maximum, suggesting longer-term relapses and extra resistance mutations. Second, different CML models assume different numbers, between 400 and 10⁶, of normal HSCs. Radiobiologic estimates favor values > 10⁶ for the number of normal cells (often assumed to be the HSCs) that are at risk for a CML-initiating BCR-ABL translocation. Moreover, there is some evidence for an HSC dead-band hypothesis, consistent with HSC numbers being very different across different healthy adults. Third, radiobiologists have found that sporadic (background, age-driven) chromosome translocation incidence increases with age during adulthood. BCR-ABL translocation incidence increasing with age would provide a hitherto underanalyzed contribution to observed background adult-onset CML incidence acceleration with age, and would cast some doubt on stage-number inferences from multistage carcinogenesis models in general. PMID:22353999

  19. Evaluating quantitative and qualitative models: An application for nationwide water erosion assessment in Ethiopia

    NARCIS (Netherlands)

    Sonneveld, B.G.J.S.; Keyzer, M.A.; Stroosnijder, L

    2011-01-01

    This paper tests the candidacy of one qualitative response model and two quantitative models for a nationwide water erosion hazard assessment in Ethiopia. After a descriptive comparison of model characteristics the study conducts a statistical comparison to evaluate the explanatory power of the

  20. Evaluating quantitative and qualitative models: an application for nationwide water erosion assessment in Ethiopia

    NARCIS (Netherlands)

    Sonneveld, B.G.J.S.; Keyzer, M.A.; Stroosnijder, L.

    2011-01-01

    This paper tests the candidacy of one qualitative response model and two quantitative models for a nationwide water erosion hazard assessment in Ethiopia. After a descriptive comparison of model characteristics the study conducts a statistical comparison to evaluate the explanatory power of the

  1. A suite of models to support the quantitative assessment of spread in pest risk analysis

    NARCIS (Netherlands)

    Robinet, C.; Kehlenbeck, H.; Werf, van der W.

    2012-01-01

    In the frame of the EU project PRATIQUE (KBBE-2007-212459 Enhancements of pest risk analysis techniques) a suite of models was developed to support the quantitative assessment of spread in pest risk analysis. This dataset contains the model codes (R language) for the four models in the suite. Three

  2. The place of quantitative energy models in a prospective approach

    International Nuclear Information System (INIS)

    Taverdet-Popiolek, N.

    2009-01-01

    Futurology above all depends on having the right mind set. Gaston Berger summarizes the prospective approach in five main thrusts: prepare for the distant future, be open-minded (have a systems and multidisciplinary approach), carry out in-depth analyses (draw out the factors which are really determinant for the future, as well as established trends), take risks (imagine risky but flexible projects) and finally think about humanity, futurology being a technique at the service of man to help him build a desirable future. On the other hand, forecasting is based on quantified models so as to deduce 'conclusions' about the future. In the field of energy, models are used to draw up scenarios which allow, for instance, measuring the medium- or long-term effects of energy policies on greenhouse gas emissions or global welfare. Scenarios are shaped by the model's inputs (parameters, sets of assumptions) and outputs. Resorting to a model or projecting by scenario is useful in a prospective approach as it ensures coherence for most of the variables that have been identified through systems analysis and that the mind on its own has difficulty grasping. Interpretation of each scenario must be carried out in the light of the underlying framework of assumptions (the backdrop) developed during the prospective stage. When the horizon is far away (very long term), the worlds imagined by the futurologist contain breaks (technological, behavioural and organizational) which are hard to integrate into the models. This is the main limit on the use of models in futurology. (author)

  3. Improved Mental Acuity Forecasting with an Individualized Quantitative Sleep Model

    Directory of Open Access Journals (Sweden)

    Brent D. Winslow

    2017-04-01

    Full Text Available Sleep impairment significantly alters human brain structure and cognitive function, but available evidence suggests that adults in developed nations are sleeping less. A growing body of research has sought to use sleep to forecast cognitive performance by modeling the relationship between the two, but has generally focused on vigilance rather than other cognitive constructs affected by sleep, such as reaction time, executive function, and working memory. Previous modeling efforts have also utilized subjective, self-reported sleep durations and were restricted to laboratory environments. In the current effort, we addressed these limitations by employing wearable systems and mobile applications to gather objective sleep information, assess multi-construct cognitive performance, and model/predict changes to mental acuity. Thirty participants were recruited for participation in the study, which lasted 1 week. Using the Fitbit Charge HR and a mobile version of the automated neuropsychological assessment metric called CogGauge, we gathered a series of features and utilized the unified model of performance to predict mental acuity based on sleep records. Our results suggest that individuals poorly rate their sleep duration, supporting the need for objective sleep metrics to model circadian changes to mental acuity. Participant compliance in using the wearable throughout the week and responding to the CogGauge assessments was 80%. Specific biases were identified in temporal metrics across mobile devices and operating systems and were excluded from the mental acuity metric development. Individualized prediction of mental acuity consistently outperformed group modeling. This effort indicates the feasibility of creating an individualized, mobile assessment and prediction of mental acuity, compatible with the majority of current mobile devices.
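
    The unified model of performance combines a sleep-dependent homeostatic process with a circadian oscillation. A deliberately minimal two-process-style sketch (all parameters below are invented; this is not the published model or its individualized fits):

```python
import numpy as np

def alertness(hours_awake, clock_hour, reservoir=1.0):
    """Toy two-process prediction of mental acuity: a homeostatic
    reservoir that depletes with time awake, plus a circadian term
    peaking in the late afternoon. All constants are invented."""
    S = reservoir * np.exp(-hours_awake / 18.0)               # homeostatic
    C = 0.15 * np.cos(2 * np.pi * (clock_hour - 16) / 24.0)   # circadian
    return S + C

# Predicted acuity at a few invented wake-duration / clock-time points.
for awake, clock in [(2, 9), (8, 15), (16, 23), (22, 5)]:
    print(f"awake {awake:2d} h at {clock:02d}:00 -> "
          f"predicted acuity {alertness(awake, clock):.2f}")
```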

  4. Improved Mental Acuity Forecasting with an Individualized Quantitative Sleep Model.

    Science.gov (United States)

    Winslow, Brent D; Nguyen, Nam; Venta, Kimberly E

    2017-01-01

    Sleep impairment significantly alters human brain structure and cognitive function, but available evidence suggests that adults in developed nations are sleeping less. A growing body of research has sought to use sleep to forecast cognitive performance by modeling the relationship between the two, but has generally focused on vigilance rather than other cognitive constructs affected by sleep, such as reaction time, executive function, and working memory. Previous modeling efforts have also utilized subjective, self-reported sleep durations and were restricted to laboratory environments. In the current effort, we addressed these limitations by employing wearable systems and mobile applications to gather objective sleep information, assess multi-construct cognitive performance, and model/predict changes to mental acuity. Thirty participants were recruited for participation in the study, which lasted 1 week. Using the Fitbit Charge HR and a mobile version of the automated neuropsychological assessment metric called CogGauge, we gathered a series of features and utilized the unified model of performance to predict mental acuity based on sleep records. Our results suggest that individuals poorly rate their sleep duration, supporting the need for objective sleep metrics to model circadian changes to mental acuity. Participant compliance in using the wearable throughout the week and responding to the CogGauge assessments was 80%. Specific biases were identified in temporal metrics across mobile devices and operating systems and were excluded from the mental acuity metric development. Individualized prediction of mental acuity consistently outperformed group modeling. This effort indicates the feasibility of creating an individualized, mobile assessment and prediction of mental acuity, compatible with the majority of current mobile devices.

  5. First principles pharmacokinetic modeling: A quantitative study on Cyclosporin

    DEFF Research Database (Denmark)

    Mošat', Andrej; Lueshen, Eric; Heitzig, Martina

    2013-01-01

    renal and hepatic clearances, elimination half-life, and mass transfer coefficients, to establish drug biodistribution dynamics in all organs and tissues. This multi-scale model satisfies first principles and conservation of mass, species and momentum. Prediction of organ drug bioaccumulation...... as a function of cardiac output, physiology, pathology or administration route may be possible with the proposed PBPK framework. Successful application of our model-based drug development method may lead to more efficient preclinical trials, accelerated knowledge gain from animal experiments, and shortened time-to-market...
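
    A whole-body PBPK model is a set of coupled organ mass balances. Its flavor can be conveyed by a flow-limited sketch with blood plus two clearing organs; all flows, volumes, partition coefficients, and clearances below are invented and far simpler than the paper's whole-body cyclosporine model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Flow-limited PBPK sketch: blood + liver + kidney (all values invented).
Q_li, Q_ki = 90.0, 60.0              # organ blood flows, L/h
V_bl, V_li, V_ki = 5.0, 1.8, 0.3     # compartment volumes, L
K_li, K_ki = 8.0, 4.0                # tissue:blood partition coefficients
CL_li, CL_ki = 20.0, 10.0            # intrinsic clearances, L/h

def rhs(t, y):
    c_bl, c_li, c_ki = y
    # Mass balances: organs exchange with blood at venous-equilibrium
    # concentration c_tissue / K; clearance removes drug from organs.
    dli = Q_li * (c_bl - c_li / K_li) - CL_li * c_li / K_li
    dki = Q_ki * (c_bl - c_ki / K_ki) - CL_ki * c_ki / K_ki
    dbl = Q_li * (c_li / K_li - c_bl) + Q_ki * (c_ki / K_ki - c_bl)
    return [dbl / V_bl, dli / V_li, dki / V_ki]

sol = solve_ivp(rhs, (0, 24), [10.0, 0.0, 0.0])   # 10 mg/L IV bolus in blood
print("blood/liver/kidney concentrations at 24 h:", sol.y[:, -1].round(3))
```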

  6. Essays on Quantitative Marketing Models and Monte Carlo Integration Methods

    NARCIS (Netherlands)

    R.D. van Oest (Rutger)

    2005-01-01

    The last few decades have led to an enormous increase in the availability of large detailed data sets and in the computing power needed to analyze such data. Furthermore, new models and new computing techniques have been developed to exploit both sources. All of this has allowed for

  7. Quantitative Research: A Dispute Resolution Model for FTC Advertising Regulation.

    Science.gov (United States)

    Richards, Jef I.; Preston, Ivan L.

    Noting the lack of a dispute mechanism for determining whether an advertising practice is truly deceptive without generating the costs and negative publicity produced by traditional Federal Trade Commission (FTC) procedures, this paper proposes a model based upon early termination of the issues through jointly commissioned behavioral research. The…

  8. Quantitative modeling of human performance in complex, dynamic systems

    National Research Council Canada - National Science Library

    Baron, Sheldon; Kruser, Dana S; Huey, Beverly Messick

    1990-01-01

    ... Sheldon Baron, Dana S. Kruser, and Beverly Messick Huey, editors. Panel on Human Performance Modeling, Committee on Human Factors, Commission on Behavioral and Social Sciences and Education, National Research Council. National Academy Press, Washington, D.C., 1990.

  9. A quantitative risk model for early lifecycle decision making

    Science.gov (United States)

    Feather, M. S.; Cornford, S. L.; Dunphy, J.; Hicks, K.

    2002-01-01

    Decisions made in the earliest phases of system development have the most leverage to influence the success of the entire development effort, and yet must be made when information is incomplete and uncertain. We have developed a scalable cost-benefit model to support this critical phase of early-lifecycle decision-making.
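
    One common way to make an early-lifecycle cost-benefit model concrete is to score candidate mitigations by expected risk reduction per unit cost. The sketch below does exactly that with invented risks and mitigations; it is not the authors' actual model.

```python
# Toy risk-leverage trade study (all names and numbers invented).
likelihood = {"sensor fault": 0.30, "software defect": 0.20, "thermal": 0.10}
impact = {"sensor fault": 8.0, "software defect": 9.0, "thermal": 5.0}

mitigations = [
    # (name, targeted risk, fractional likelihood reduction, cost)
    ("redundant sensor",   "sensor fault",    0.8, 4.0),
    ("formal code review", "software defect", 0.5, 2.0),
    ("extra insulation",   "thermal",         0.6, 3.0),
]

def leverage(m):
    """Expected risk reduction (likelihood x impact x reduction) per cost."""
    name, risk, reduction, cost = m
    return likelihood[risk] * impact[risk] * reduction / cost

for m in sorted(mitigations, key=leverage, reverse=True):
    print(f"{m[0]:20s} benefit/cost = {leverage(m):.2f}")
```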

  10. Quantitative Comparison Between Crowd Models for Evacuation Planning and Evaluation

    NARCIS (Netherlands)

    Viswanathan, V.; Lee, C.E.; Lees, M.H.; Cheong, S.A.; Sloot, P.M.A.

    2014-01-01

    Crowd simulation is rapidly becoming a standard tool for evacuation planning and evaluation. However, the many crowd models in the literature are structurally different, and few have been rigorously calibrated against real-world egress data, especially in emergency situations. In this paper we

  11. Three-Dimensional Quantitative Morphometric Analysis (QMA) for In Situ Joint and Tissue Assessment of Osteoarthritis in a Preclinical Rabbit Disease Model.

    Directory of Open Access Journals (Sweden)

    Kathryn S Stok

    Full Text Available This work utilises advances in multi-tissue imaging, and incorporates new metrics which define in situ joint changes and individual tissue changes in osteoarthritis (OA). The aims are to (1) demonstrate a protocol for processing intact animal joints for microCT to visualise relevant joint, bone and cartilage structures for understanding OA in a preclinical rabbit model, and (2) introduce a comprehensive three-dimensional (3D) quantitative morphometric analysis (QMA), including an assessment of reproducibility. Sixteen rabbit joints with and without transection of the anterior cruciate ligament were scanned with microCT and contrast agents, and processed for histology. Semi-quantitative evaluation was performed on matching two-dimensional (2D) histology and microCT images. Subsequently, 3D QMA was performed, including measures of cartilage, subchondral cortical and epiphyseal bone, and novel tibio-femoral joint metrics. Reproducibility of the QMA was tested on seven additional joints. A significant correlation was observed in cartilage thickness from matching histology-microCT pairs. The lateral compartment of operated joints had larger joint space width, thicker femoral cartilage and reduced bone volume, while osteophytes could be detected quantitatively. Measures between the in situ tibia and femur indicated an altered loading scenario. High measurement reproducibility was observed for all new parameters, with ICC ranging from 0.754 to 0.998. In conclusion, this study provides a novel 3D QMA to quantify macro and micro tissue measures in the joint of a rabbit OA model. New metrics were established consisting of: an angle to quantitatively measure osteophytes (σ), an angle to indicate erosion between the lateral and medial femoral condyles (ρ), a vector defining altered angulation (λ, α, β, γ), a twist angle (τ) measuring instability and tissue degeneration between the femur and tibia, a length measure of joint space width (JSW), and a slope and
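
    The reproducibility figure reported above is an intraclass correlation coefficient (ICC), computable from a targets-by-repeats matrix. A sketch of the two-way random-effects, absolute-agreement form ICC(2,1), with invented data (the record does not state which ICC variant was used):

```python
import numpy as np

def icc_2_1(Y):
    """Two-way random-effects, absolute-agreement ICC(2,1) for an
    n-targets x k-measurements matrix (Shrout & Fleiss convention)."""
    n, k = Y.shape
    mean_r = Y.mean(axis=1, keepdims=True)   # per-target means
    mean_c = Y.mean(axis=0, keepdims=True)   # per-measurement-occasion means
    grand = Y.mean()
    ssr = k * ((mean_r - grand) ** 2).sum()  # between-target sum of squares
    ssc = n * ((mean_c - grand) ** 2).sum()  # between-occasion sum of squares
    sse = ((Y - mean_r - mean_c + grand) ** 2).sum()
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Invented example: 7 joints measured 3 times each with small error.
rng = np.random.default_rng(2)
truth = rng.normal(10, 2, size=(7, 1))
Y = truth + rng.normal(0, 0.2, size=(7, 3))
print(f"ICC(2,1) = {icc_2_1(Y):.3f}")
```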

  12. Quantitative properties of clustering within modern microscopic nuclear models

    International Nuclear Information System (INIS)

    Volya, A.; Tchuvil’sky, Yu. M.

    2016-01-01

    A method for studying cluster spectroscopic properties of nuclear fragmentation, such as spectroscopic amplitudes, cluster form factors, and spectroscopic factors, is developed on the basis of modern precision nuclear models that take into account the mixing of large-scale shell-model configurations. Alpha-cluster channels are considered as an example. A mathematical proof of the need for taking into account the channel-wave-function renormalization generated by exchange terms of the antisymmetrization operator (Fliessbach effect) is given. Examples where this effect is confirmed by a high quality of the description of experimental data are presented. By and large, the method in question extends substantially the possibilities for studying clustering phenomena in nuclei and for improving the quality of their description.

  13. Quantitative modeling of selective lysosomal targeting for drug design

    DEFF Research Database (Denmark)

    Trapp, Stefan; Rosania, G.; Horobin, R.W.

    2008-01-01

    Lysosomes are acidic organelles and are involved in various diseases, the most prominent being malaria. Accumulation of molecules in the cell by diffusion from the external solution into the cytosol, lysosome and mitochondrion was calculated with the Fick–Nernst–Planck equation. The cell model considers...... the diffusion of neutral and ionic molecules across biomembranes, protonation to mono- or bivalent ions, adsorption to lipids, and electrical attraction or repulsion. Based on simulation results, high and selective accumulation in lysosomes was found for weak mono- and bivalent bases with intermediate to high...... predicted by the model and three were close. Five of the antimalarial drugs were lipophilic weak dibasic compounds. The predicted optimum properties for a selective accumulation of weak bivalent bases in lysosomes are consistent with experimental values and are more accurate than any prior calculation...
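
    A minimal sketch of the pH-partitioning (ion-trapping) limit that underlies such lysosomal accumulation, assuming only the neutral species of a weak monobasic drug is membrane-permeant; this is a deliberate simplification of the full Fick-Nernst-Planck cell model described above, and the pKa/pH values are illustrative.

```python
# pH-partitioning limit for a weak monobasic drug: the neutral species
# equilibrates across membranes, and the ionised excess is trapped in the
# more acidic compartment. A simplification of the paper's full model.

def accumulation_ratio(pKa: float, pH_compartment: float, pH_outside: float = 7.4) -> float:
    """Total-drug concentration ratio (compartment / outside) for a weak base."""
    ionised_in = 1.0 + 10 ** (pKa - pH_compartment)
    ionised_out = 1.0 + 10 ** (pKa - pH_outside)
    return ionised_in / ionised_out

# Lysosome (pH ~5.0) vs cytosol (pH ~7.2) for a base of intermediate pKa:
print(f"lysosome: {accumulation_ratio(8.0, 5.0):.0f}x")
print(f"cytosol : {accumulation_ratio(8.0, 7.2):.1f}x")
```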

  14. Quantitative Risk Modeling of Fire on the International Space Station

    Science.gov (United States)

    Castillo, Theresa; Haught, Megan

    2014-01-01

    The International Space Station (ISS) Program has worked to prevent fire events and to mitigate their impacts should they occur. Hardware is designed to reduce sources of ignition, oxygen systems are designed to control leaking, flammable materials are prevented from flying to ISS whenever possible, the crew is trained in fire response, and fire response equipment improvements are sought out and funded. Fire prevention and mitigation are a top ISS Program priority - however, programmatic resources are limited; thus, risk trades are made to ensure an adequate level of safety is maintained onboard the ISS. In support of these risk trades, the ISS Probabilistic Risk Assessment (PRA) team has modeled the likelihood of fire occurring in the ISS pressurized cabin, a phenomenological event that has never before been probabilistically modeled in a microgravity environment. This paper will discuss the genesis of the ISS PRA fire model, its enhancement in collaboration with fire experts, and the results which have informed ISS programmatic decisions and will continue to be used throughout the life of the program.

  15. An Integrated Qualitative and Quantitative Biochemical Model Learning Framework Using Evolutionary Strategy and Simulated Annealing.

    Science.gov (United States)

    Wu, Zujian; Pang, Wei; Coghill, George M

    2015-01-01

    Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by the application of a qualitative model learning approach with an evolution strategy. Kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that our proposed integrative framework is able to learn the relationships between biochemical reactants qualitatively and to make the model replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned from the proposed framework, biologists can further perform experimental studies in a wet laboratory. In this way, natural biochemical systems can be better understood.
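
    As a hedged sketch of the quantitative stage, the code below uses simulated annealing to tune a single kinetic rate so that a toy one-reaction model reproduces a target time course; the paper's models and move sets are richer, and all numbers here are invented.

```python
# Minimal simulated annealing: recover a kinetic rate so that a toy
# exponential-decay model matches a target time course. Illustrative only.

import math, random

t = [i * 0.5 for i in range(10)]
k_true = 0.8
target = [math.exp(-k_true * ti) for ti in t]          # "observed" behaviour

def cost(k: float) -> float:
    return sum((math.exp(-k * ti) - yi) ** 2 for ti, yi in zip(t, target))

random.seed(0)
k, temp = 2.0, 1.0
while temp > 1e-4:
    k_new = max(1e-6, k + random.gauss(0.0, 0.1))       # perturb the rate
    d = cost(k_new) - cost(k)
    if d < 0 or random.random() < math.exp(-d / temp):  # Metropolis acceptance
        k = k_new
    temp *= 0.99                                        # geometric cooling

print(f"recovered rate k = {k:.3f} (true {k_true})")
```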

  16. A right to reproduce?

    Science.gov (United States)

    Emson, H E

    1992-10-31

    Conscious control of the environment by homo sapiens has brought almost total release from the controls of ecology that limit the population of all other species. After a mere 10,000 years, humans have brought the planet close to collapse, and all the debate in the world seems unlikely to save it. A combination of uncontrolled breeding and rapacity is propelling us down the slippery slope first envisioned by Malthus, dragging the rest of the planet along. Only the conscious, and most likely voluntary, reimposition of controls on breeding will reduce the overgrowth of humans, and we have far to go in that direction. "According to the United Nations Universal Declaration of Human Rights (1948, articles 16[I] and 16[III]), Men and women of full age without any limitation due to race, nationality or religion have the right to marry and to found a family ... the family is the natural and fundamental group unit of society." The rhetoric of rights without the balancing of responsibilities is wrong in health care, and even more wrong in the context of world population. The mind-set of dominance and exploitation over the rest of creation has meant human reluctance to admit participation in a system where every part is interdependent. We must balance the right to reproduce with its responsible use, valuing interdependence, understanding, and respect, with a duty not to unbalance, damage, or destroy. It is long overdue that we discard every statement of right that is unmatched by the equivalent duty and responsibility.

  17. A quantitative confidence signal detection model: 1. Fitting psychometric functions

    Science.gov (United States)

    Yi, Yongwoo

    2016-01-01

    Perceptual thresholds are commonly assayed in the laboratory and clinic. When precision and accuracy are required, thresholds are quantified by fitting a psychometric function to forced-choice data. The primary shortcoming of this approach is that it typically requires 100 trials or more to yield accurate (i.e., small bias) and precise (i.e., small variance) psychometric parameter estimates. We show that confidence probability judgments combined with a model of confidence can yield psychometric parameter estimates that are markedly more precise and/or markedly more efficient than conventional methods. Specifically, both human data and simulations show that including confidence probability judgments for just 20 trials can yield psychometric parameter estimates that match the precision of those obtained from 100 trials using conventional analyses. Such an efficiency advantage would be especially beneficial for tasks (e.g., taste, smell, and vestibular assays) that require more than a few seconds for each trial, but this potential benefit could accrue for many other tasks. PMID:26763777
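
    For contrast with the confidence-based approach, the sketch below shows the conventional baseline it improves on: a maximum-likelihood fit of a cumulative-Gaussian psychometric function to binary forced-choice data. Stimulus levels and response counts are invented.

```python
# Conventional psychometric-function fitting (without the paper's
# confidence-judgment extension): binomial maximum likelihood for a
# cumulative Gaussian. All data are invented.

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

levels   = np.array([-4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0])  # stimulus values
n_trials = np.array([20, 20, 20, 20, 20, 20, 20])
n_right  = np.array([2, 5, 8, 10, 13, 16, 19])               # "rightward" choices

def neg_log_lik(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    p = norm.cdf(levels, loc=mu, scale=sigma).clip(1e-9, 1 - 1e-9)
    return -np.sum(n_right * np.log(p) + (n_trials - n_right) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x
print(f"bias mu = {mu_hat:.2f}, spread sigma = {sigma_hat:.2f}")
```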

  18. High-response piezoelectricity modeled quantitatively near a phase boundary

    Science.gov (United States)

    Newns, Dennis M.; Kuroda, Marcelo A.; Cipcigan, Flaviu S.; Crain, Jason; Martyna, Glenn J.

    2017-01-01

    Interconversion of mechanical and electrical energy via the piezoelectric effect is fundamental to a wide range of technologies. The discovery in the 1990s of giant piezoelectric responses in certain materials has therefore opened new application spaces, but the origin of these properties remains a challenge to our understanding. A key role is played by the presence of a structural instability in these materials at compositions near the "morphotropic phase boundary" (MPB) where the crystal structure changes abruptly and the electromechanical responses are maximal. Here we formulate a simple, unified theoretical description which accounts for extreme piezoelectric response, its observation at compositions near the MPB, accompanied by ultrahigh dielectric constant and mechanical compliances with rather large anisotropies. The resulting model, based upon a Landau free energy expression, is capable of treating the important domain engineered materials and is found to be predictive while maintaining simplicity. It therefore offers a general and powerful means of accounting for the full set of signature characteristics in these functional materials including volume conserving sum rules and strong substrate clamping effects.
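
    As a schematic of the kind of Landau free energy such a model is built on (the paper's expression also carries strain, anisotropy and domain terms, so this is only the generic single-order-parameter form):

```latex
% Generic single-order-parameter Landau expansion (schematic):
\begin{align}
  F(P) &= \tfrac{1}{2}\, a(T,x)\, P^{2} + \tfrac{1}{4}\, b\, P^{4}
          + \tfrac{1}{6}\, c\, P^{6} - E\,P \\
  \chi^{-1} &= \left.\frac{\partial^{2} F}{\partial P^{2}}\right|_{P_{0}}
          = a + 3 b P_{0}^{2} + 5 c P_{0}^{4}
\end{align}
% Near the morphotropic phase boundary the relevant quadratic (or anisotropy)
% coefficient softens, so the susceptibility \chi diverges and a schematically
% electrostrictive piezoelectric coefficient d ~ Q \chi P_0 becomes large.
```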

  19. Validity and reproducibility of a self-administered semi-quantitative food-frequency questionnaire for estimating usual daily fat, fibre, alcohol, caffeine and theobromine intakes among Belgian post-menopausal women.

    Science.gov (United States)

    Bolca, Selin; Huybrechts, Inge; Verschraegen, Mia; De Henauw, Stefaan; Van de Wiele, Tom

    2009-01-01

    A novel food-frequency questionnaire (FFQ) was developed and validated to assess the usual daily fat, saturated, mono-unsaturated and poly-unsaturated fatty acid, fibre, alcohol, caffeine, and theobromine intakes among Belgian post-menopausal women participating in dietary intervention trials with phyto-oestrogens. The relative validity of the FFQ was estimated by comparison with 7 day (d) estimated diet records (EDR, n 64) and its reproducibility was evaluated by repeated administrations 6 weeks apart (n 79). Although the questionnaire underestimated significantly all intakes compared to the 7 d EDR, it had a good ranking ability (r 0.47-0.94; weighted kappa 0.25-0.66) and it could reliably distinguish extreme intakes for all the estimated nutrients, except for saturated fatty acids. Furthermore, the correlation between repeated administrations was high (r 0.71-0.87) with a maximal misclassification of 7% (weighted kappa 0.33-0.80). In conclusion, these results compare favourably with those reported by others and indicate that the FFQ is a satisfactorily reliable and valid instrument for ranking individuals within this study population.

  20. Validity and Reproducibility of a Self-Administered Semi-Quantitative Food-Frequency Questionnaire for Estimating Usual Daily Fat, Fibre, Alcohol, Caffeine and Theobromine Intakes among Belgian Post-Menopausal Women

    Directory of Open Access Journals (Sweden)

    Selin Bolca

    2009-01-01

    Full Text Available A novel food-frequency questionnaire (FFQ) was developed and validated to assess the usual daily fat, saturated, mono-unsaturated and poly-unsaturated fatty acid, fibre, alcohol, caffeine, and theobromine intakes among Belgian post-menopausal women participating in dietary intervention trials with phyto-oestrogens. The relative validity of the FFQ was estimated by comparison with 7 day (d) estimated diet records (EDR, n 64) and its reproducibility was evaluated by repeated administrations 6 weeks apart (n 79). Although the questionnaire underestimated significantly all intakes compared to the 7 d EDR, it had a good ranking ability (r 0.47-0.94; weighted κ 0.25-0.66) and it could reliably distinguish extreme intakes for all the estimated nutrients, except for saturated fatty acids. Furthermore, the correlation between repeated administrations was high (r 0.71-0.87) with a maximal misclassification of 7% (weighted κ 0.33-0.80). In conclusion, these results compare favourably with those reported by others and indicate that the FFQ is a satisfactorily reliable and valid instrument for ranking individuals within this study population.

  1. Quantitative analysis of crossflow model of the COBRA-IV.1 code

    International Nuclear Information System (INIS)

    Lira, C.A.B.O.

    1983-01-01

    Based on experimental data from a rod bundle test section, the crossflow model of the COBRA-IV.1 code was quantitatively analysed. The analysis showed that it is possible to establish operational conditions in which the results of the theoretical model are acceptable. (author) [pt

  2. Development of probabilistic models for quantitative pathway analysis of plant pests introduction for the EU territory

    NARCIS (Netherlands)

    Douma, J.C.; Robinet, C.; Hemerik, L.; Mourits, M.C.M.; Roques, A.; Werf, van der W.

    2015-01-01

    The aim of this report is to provide EFSA with probabilistic models for quantitative pathway analysis of plant pest introduction for the EU territory through non-edible plant products or plants. We first provide a conceptualization of two types of pathway models. The individual based PM simulates an

  3. Quantitative comparison of Zeiss-Humphrey model 840 and Rion UX-02 systems of ultrasound biomicroscopy.

    Science.gov (United States)

    Kobayashi, H; Kobayashi, K

    1999-05-01

    Our objective was to estimate the agreement between two different ultrasound biomicroscopes (UBMs) and to evaluate the clinical implications of the measurements obtained. We measured the anterior chamber depth, trabecular-iris angle, angle opening distance at 250 and 500 μm from the scleral spur, iris thickness and scleral-iris angle using the Humphrey UBM model 840 and Rion UBM UX-02 in 25 eyes of 25 normal volunteers. No significant difference was found in the mean values of any parameters measured by the Humphrey and Rion systems. Correlation coefficients of greater than 90% were observed for the parameters studied. Each system showed high reproducibility for all measured parameters. There were significant differences between the two systems in coefficients of variation for all parameters measured except the anterior chamber depth. The parameters measured with the Humphrey and Rion systems showed correlation coefficients of greater than 90%. The Humphrey system showed better reproducibility than the Rion system.

  4. Impact of implementation choices on quantitative predictions of cell-based computational models

    Science.gov (United States)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
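
    A minimal sketch of the implementation details at issue: an overdamped forward-Euler update of vertex positions and a length-threshold test for T1 (edge-rearrangement) transitions. The force values and thresholds are placeholders, not those of any specific vertex-model code.

```python
# Overdamped forward-Euler vertex update. Too large a time step `dt` can
# overshoot short edges and suppress or corrupt cell rearrangements, one of
# the sensitivities reported above. Forces and thresholds are placeholders.

import numpy as np

def step(vertices: np.ndarray, forces: np.ndarray, dt: float, mobility: float = 1.0):
    """vertices, forces: (n_vertices, 2) arrays; returns updated positions."""
    return vertices + dt * mobility * forces

def needs_t1_transition(v_a, v_b, threshold: float) -> bool:
    # T1 check: triggered when an edge shrinks below a non-physical length
    # threshold, another implementation parameter predictions depend on.
    return np.linalg.norm(v_a - v_b) < threshold

verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9]])
f = np.array([[0.1, 0.0], [-0.1, 0.0], [0.0, -0.05]])
verts = step(verts, f, dt=0.01)
print(verts, needs_t1_transition(verts[0], verts[1], threshold=0.05))
```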

  5. Detection of Prostate Cancer: Quantitative Multiparametric MR Imaging Models Developed Using Registered Correlative Histopathology.

    Science.gov (United States)

    Metzger, Gregory J; Kalavagunta, Chaitanya; Spilseth, Benjamin; Bolan, Patrick J; Li, Xiufeng; Hutter, Diane; Nam, Jung W; Johnson, Andrew D; Henriksen, Jonathan C; Moench, Laura; Konety, Badrinath; Warlick, Christopher A; Schmechel, Stephen C; Koopmeiners, Joseph S

    2016-06-01

    Purpose To develop multiparametric magnetic resonance (MR) imaging models to generate a quantitative, user-independent, voxel-wise composite biomarker score (CBS) for detection of prostate cancer by using coregistered correlative histopathologic results, and to compare performance of CBS-based detection with that of single quantitative MR imaging parameters. Materials and Methods Institutional review board approval and informed consent were obtained. Patients with a diagnosis of prostate cancer underwent multiparametric MR imaging before surgery for treatment. All MR imaging voxels in the prostate were classified as cancer or noncancer on the basis of coregistered histopathologic data. Predictive models were developed by using more than one quantitative MR imaging parameter to generate CBS maps. Model development and evaluation of quantitative MR imaging parameters and CBS were performed separately for the peripheral zone and the whole gland. Model accuracy was evaluated by using the area under the receiver operating characteristic curve (AUC), and confidence intervals were calculated with the bootstrap procedure. The improvement in classification accuracy was evaluated by comparing the AUC for the multiparametric model and the single best-performing quantitative MR imaging parameter at the individual level and in aggregate. Results Quantitative T2, apparent diffusion coefficient (ADC), volume transfer constant (K(trans)), reflux rate constant (kep), and area under the gadolinium concentration curve at 90 seconds (AUGC90) were significantly different between cancer and noncancer voxels (P models demonstrated the best performance in both the peripheral zone (AUC, 0.85; P = .010 vs ADC alone) and whole gland (AUC, 0.77; P = .043 vs ADC alone). Individual-level analysis showed statistically significant improvement in AUC in 82% (23 of 28) and 71% (24 of 34) of patients with peripheral-zone and whole-gland models, respectively, compared with ADC alone. Model-based CBS
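
    The paper's exact model form is not reproduced here; as a hedged stand-in, the sketch below builds a voxel-wise composite biomarker score by logistic regression over synthetic ADC, T2 and Ktrans values.

```python
# Stand-in for a voxel-wise composite biomarker score (CBS): logistic
# regression over quantitative MR parameters. All data are synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Columns: ADC, T2, Ktrans per voxel (arbitrary units); label 1 = cancer.
X_cancer = rng.normal([0.9, 80, 0.25], [0.2, 15, 0.08], size=(n, 3))
X_benign = rng.normal([1.4, 110, 0.12], [0.2, 15, 0.08], size=(n, 3))
X = np.vstack([X_cancer, X_benign])
y = np.concatenate([np.ones(n), np.zeros(n)])

model = LogisticRegression(max_iter=1000).fit(X, y)
cbs = model.predict_proba(X)[:, 1]       # composite score per voxel
print(f"mean CBS cancer={cbs[:n].mean():.2f} benign={cbs[n:].mean():.2f}")
```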

  6. Osteolytica: An automated image analysis software package that rapidly measures cancer-induced osteolytic lesions in in vivo models with greater reproducibility compared to other commonly used methods.

    Science.gov (United States)

    Evans, H R; Karmakharm, T; Lawson, M A; Walker, R E; Harris, W; Fellows, C; Huggins, I D; Richmond, P; Chantry, A D

    2016-02-01

    Methods currently used to analyse osteolytic lesions caused by malignancies such as multiple myeloma and metastatic breast cancer vary from basic 2-D X-ray analysis to 2-D images of micro-CT datasets analysed with non-specialised image software such as ImageJ. However, these methods have significant limitations. They do not capture 3-D data, they are time-consuming and they often suffer from inter-user variability. We therefore sought to develop a rapid and reproducible method to analyse 3-D osteolytic lesions in mice with cancer-induced bone disease. To this end, we have developed Osteolytica, an image analysis software method featuring an easy to use, step-by-step interface to measure lytic bone lesions. Osteolytica utilises novel graphics card acceleration (parallel computing) and 3-D rendering to provide rapid reconstruction and analysis of osteolytic lesions. To evaluate the use of Osteolytica we analysed tibial micro-CT datasets from murine models of cancer-induced bone disease and compared the results to those obtained using a standard ImageJ analysis method. Firstly, to assess inter-user variability we deployed four independent researchers to analyse tibial datasets from the U266-NSG murine model of myeloma. Using ImageJ, inter-user variability between the bones was substantial (±19.6%), in contrast to using Osteolytica, which demonstrated minimal variability (±0.5%). Secondly, tibial datasets from U266-bearing NSG mice or BALB/c mice injected with the metastatic breast cancer cell line 4T1 were compared to tibial datasets from aged and sex-matched non-tumour control mice. Analyses by both Osteolytica and ImageJ showed significant increases in bone lesion area in tumour-bearing mice compared to control mice. These results confirm that Osteolytica performs as well as the current 2-D ImageJ osteolytic lesion analysis method. However, Osteolytica is advantageous in that it analyses over the entirety of the bone volume (as opposed to selected 2-D images), it

  7. Interpretation of Quantitative Structure-Activity Relationship Models: Past, Present, and Future.

    Science.gov (United States)

    Polishchuk, Pavel

    2017-11-27

    This paper is an overview of the most significant and impactful interpretation approaches of quantitative structure-activity relationship (QSAR) models, their development, and application. The evolution of the interpretation paradigm from "model → descriptors → (structure)" to "model → structure" is indicated. The latter makes all models interpretable regardless of machine learning methods or descriptors used for modeling. This opens wide prospects for application of corresponding interpretation approaches to retrieve structure-property relationships captured by any models. Issues of separate approaches are discussed as well as general issues and prospects of QSAR model interpretation.

  8. Quantitative modeling of gene networks of biological systems using fuzzy Petri nets and fuzzy sets

    Directory of Open Access Journals (Sweden)

    Raed I. Hamed

    2018-01-01

    Full Text Available Quantitative modelling of biological systems has become an essential computational approach in the design of novel and the analysis of existing biological systems. However, kinetic data that describe the system's dynamics need to be known in order to obtain relevant results with conventional modelling techniques. These data are frequently hard or even impossible to obtain. Here, we present a quantitative fuzzy logic modelling approach that can cope with unknown kinetic data and thus produce relevant results even though the dynamic data are incomplete or only vaguely defined. Moreover, the approach can be used in combination with existing state-of-the-art quantitative modelling techniques in certain parts of the system, i.e., where information is missing. The case study of the approach proposed in this paper is performed on a model of a nine-gene network. We propose a form of fuzzy Petri net (FPN) model based on fuzzy sets to handle the quantitative modelling of biological systems. Tests of our model show that it is practical and effective for knowledge representation and reasoning in fuzzy expert systems.

  9. A Quantitative Geochemical Target for Modeling the Formation of the Earth and Moon

    Science.gov (United States)

    Boyce, Jeremy W.; Barnes, Jessica J.; McCubbin, Francis M.

    2017-01-01

    The past decade has been one of geochemical, isotopic, and computational advances that are bringing the laboratory measurement and computational modeling neighborhoods of the Earth-Moon community to ever closer proximity. We are now, however, in the position to become even better neighbors: modelers can generate testable hypotheses for geochemists; and geochemists can provide quantitative targets for modelers. Here we present a robust example of the latter based on Cl isotope measurements of mare basalts.

  10. Implementation of a combined association-linkage model for quantitative traits in linear mixed model procedures of statistical packages

    NARCIS (Netherlands)

    Beem, A. Leo; Boomsma, Dorret I.

    2006-01-01

    A transmission disequilibrium test for quantitative traits which combines association and linkage analyses is currently available in several dedicated software packages. We describe how to implement such models in linear mixed model procedures that are available in widely used statistical packages

  11. Production process reproducibility and product quality consistency of transient gene expression in HEK293 cells with anti-PD1 antibody as the model protein.

    Science.gov (United States)

    Ding, Kai; Han, Lei; Zong, Huifang; Chen, Junsheng; Zhang, Baohong; Zhu, Jianwei

    2017-03-01

    Demonstration of reproducibility and consistency of process and product quality is one of the most crucial issues in using transient gene expression (TGE) technology for biopharmaceutical development. In this study, we challenged the production consistency of TGE by expressing nine batches of recombinant IgG antibody in human embryonic kidney 293 cells to evaluate reproducibility including viable cell density, viability, apoptotic status, and antibody yield in cell culture supernatant. Product quality including isoelectric point, binding affinity, secondary structure, and thermal stability was assessed as well. In addition, major glycan forms of antibody from different batches of production were compared to demonstrate glycosylation consistency. Glycan compositions of the antibody harvested at different time periods were also measured to illustrate N-glycan distribution over the culture time. From the results, it has been demonstrated that different TGE batches are reproducible from lot to lot in overall cell growth, product yield, and product qualities including isoelectric point, binding affinity, secondary structure, and thermal stability. Furthermore, major N-glycan compositions are consistent among different TGE batches and conserved during cell culture time.

  12. Quantitative modelling of interaction of propafenone with sodium channels in cardiac cells

    Czech Academy of Sciences Publication Activity Database

    Pásek, Michal; Šimurda, J.

    2004-01-01

    Roč. 42, č. 2 (2004), s. 151-157 ISSN 0140-0118 R&D Projects: GA ČR GP204/02/D129 Institutional research plan: CEZ:AV0Z2076919 Keywords : cardiac cell * sodium current block * quantitative modelling Subject RIV: BO - Biophysics Impact factor: 1.070, year: 2004

  13. Quantitative analyses and modelling to support achievement of the 2020 goals for nine neglected tropical diseases

    NARCIS (Netherlands)

    T.D. Hollingsworth (T. Déirdre); E.R. Adams (Emily R.); R.M. Anderson (Roy); K. Atkins (Katherine); S. Bartsch (Sarah); M-G. Basáñez (María-Gloria); M. Behrend (Matthew); D.J. Blok (David); L.A.C. Chapman (Lloyd A. C.); L.E. Coffeng (Luc); O. Courtenay (Orin); R.E. Crump (Ron E.); S.J. de Vlas (Sake); A.P. Dobson (Andrew); L. Dyson (Louise); H. Farkas (Hajnal); A.P. Galvani (Alison P.); M. Gambhir (Manoj); D. Gurarie (David); M.A. Irvine (Michael A.); S. Jervis (Sarah); M.J. Keeling (Matt J.); L. Kelly-Hope (Louise); C. King (Charles); B.Y. Lee (Bruce Y.); E.A. le Rutte (Epke); T.M. Lietman (Thomas M.); M. Ndeffo-Mbah (Martial); G.F. Medley (Graham F.); E. Michael (Edwin); A. Pandey (Abhishek); J.K. Peterson (Jennifer K.); A. Pinsent (Amy); T.C. Porco (Travis C.); J.H. Richardus (Jan Hendrik); L. Reimer (Lisa); K.S. Rock (Kat S.); B.K. Singh (Brajendra K.); W.A. Stolk (Wilma); S. Swaminathan (Subramanian); S.J. Torr (Steve J.); J. Townsend (Jeffrey); J. Truscott (James); M. Walker (Martin); A. Zoueva (Alexandra)

    2015-01-01

    Quantitative analysis and mathematical models are useful tools in informing strategies to control or eliminate disease. Currently, there is an urgent need to develop these tools to inform policy to achieve the 2020 goals for neglected tropical diseases (NTDs). In this paper we give an

  14. A quantitative model of the cardiac ventricular cell incorporating the transverse-axial tubular system

    Czech Academy of Sciences Publication Activity Database

    Pásek, Michal; Christé, G.; Šimurda, J.

    2003-01-01

    Roč. 22, č. 3 (2003), s. 355-368 ISSN 0231-5882 R&D Projects: GA ČR GP204/02/D129 Institutional research plan: CEZ:AV0Z2076919 Keywords : cardiac cell * tubular system * quantitative modelling Subject RIV: BO - Biophysics Impact factor: 0.794, year: 2003

  15. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT.

    Science.gov (United States)

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R; La Riviere, Patrick J; Alessio, Adam M

    2014-04-07

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)-1, cardiac output = 3, 5, 8 L min-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5% and the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and range of techniques evaluated. This
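
    As a hedged illustration of the qualitative slope method (the quantitative compartment models are considerably more involved), the sketch below approximates a flow index as the maximum upslope of a synthetic myocardial time-attenuation curve divided by the peak of the arterial input function; converting this index to ml/min/g would require further density and hematocrit factors.

```python
# Slope (upslope) method sketch: flow index = max tissue upslope / peak AIF.
# Curves are synthetic stand-ins, not simulated patient kinetics.

import numpy as np

t = np.arange(0, 30.0, 1.0)                                  # s
aif = 400 * np.exp(-0.5 * ((t - 10) / 3.0) ** 2)             # arterial input (HU)
tissue = 40 * (1 - np.exp(-np.clip(t - 8, 0, None) / 6.0))   # myocardium (HU)

max_upslope = np.max(np.gradient(tissue, t))                 # HU / s
mbf_index = max_upslope / aif.max()                          # per second
print(f"slope-method flow index = {mbf_index:.4f} 1/s")
```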

  16. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    Science.gov (United States)

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.

    2014-04-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)-1, cardiac output = 3, 5, 8 L min-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5% and the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and range of techniques evaluated. This suggests that

  17. Repeatability and reproducibility of Population Viability Analysis (PVA) and the implications for threatened species management

    Directory of Open Access Journals (Sweden)

    Clare Morrison

    2016-08-01

    Full Text Available Conservation triage focuses on prioritizing species, populations or habitats based on urgency, biodiversity benefits, recovery potential as well as cost. Population Viability Analysis (PVA) is frequently used in population focused conservation prioritizations. The critical nature of many of these management decisions requires that PVA models are repeatable and reproducible to reliably rank species and/or populations quantitatively. This paper assessed the repeatability and reproducibility of a subset of previously published PVA models. We attempted to rerun baseline models from 90 publicly available PVA studies published between 2000-2012 using the two most common PVA modelling software programs, VORTEX and RAMAS-GIS. Forty percent (n = 36) failed, 50% (n = 45) were both repeatable and reproducible, and 10% (n = 9) had missing baseline models. Repeatability was not linked to taxa, IUCN category, PVA program version used, year published or the quality of publication outlet, suggesting that the problem is systemic within the discipline. Complete and systematic presentation of PVA parameters and results are needed to ensure that the scientific input into conservation planning is both robust and reliable, thereby increasing the chances of making decisions that are both beneficial and defensible. The implications for conservation triage may be far reaching if population viability models cannot be reproduced with confidence, thus undermining their intended value.

  18. Statistical analysis of probabilistic models of software product lines with quantitative constraints

    DEFF Research Database (Denmark)

    Beek, M.H. ter; Legay, A.; Lluch Lafuente, Alberto

    2015-01-01

    We investigate the suitability of statistical model checking for the analysis of probabilistic models of software product lines with complex quantitative constraints and advanced feature installation options. Such models are specified in the feature-oriented language QFLan, a rich process algebra...... whose operational behaviour interacts with a store of constraints, neatly separating product configuration from product behaviour. The resulting probabilistic configurations and behaviour converge seamlessly in a semantics based on DTMCs, thus enabling quantitative analyses ranging from the likelihood...... of certain behaviour to the expected average cost of products. This is supported by a Maude implementation of QFLan, integrated with the SMT solver Z3 and the distributed statistical model checker MultiVeStA. Our approach is illustrated with a bikes product line case study....

  19. Contextual sensitivity in scientific reproducibility.

    Science.gov (United States)

    Van Bavel, Jay J; Mende-Siedlecki, Peter; Brady, William J; Reinero, Diego A

    2016-06-07

    In recent years, scientists have paid increasing attention to reproducibility. For example, the Reproducibility Project, a large-scale replication attempt of 100 studies published in top psychology journals, found that only 39% could be unambiguously reproduced. There is a growing consensus among scientists that the lack of reproducibility in psychology and other fields stems from various methodological factors, including low statistical power, researcher degrees of freedom, and an emphasis on publishing surprising positive results. However, there is a contentious debate about the extent to which failures to reproduce certain results might also reflect contextual differences (often termed "hidden moderators") between the original research and the replication attempt. Although psychologists have found extensive evidence that contextual factors alter behavior, some have argued that context is unlikely to influence the results of direct replications precisely because these studies use the same methods as those used in the original research. To help resolve this debate, we recoded the 100 original studies from the Reproducibility Project on the extent to which the research topic of each study was contextually sensitive. Results suggested that the contextual sensitivity of the research topic was associated with replication success, even after statistically adjusting for several methodological characteristics (e.g., statistical power, effect size). The association between contextual sensitivity and replication success did not differ across psychological subdisciplines. These results suggest that researchers, replicators, and consumers should be mindful of contextual factors that might influence a psychological process. We offer several guidelines for dealing with contextual sensitivity in reproducibility.

  20. [Feasibility of the extended application of near infrared universal quantitative models].

    Science.gov (United States)

    Lei, De-Qing; Hu, Chang-Qin; Feng, Yan-Chun; Feng, Fang

    2010-11-01

    Construction of a successful near infrared analysis model is a complex task. It requires much manpower and material, and is restricted by sample collection and model optimization. It is therefore important to study the extended application of existing near infrared (NIR) models. In this paper, a cephradine capsules universal quantitative model was used as an example to study the feasibility of its extended application. Slope/bias correction and piecewise direct standardization correction methods were used to adapt the universal model to predict intermediates in the manufacturing process of cephradine capsules, such as the content of the powder blend or granules. The results showed that the corrected NIR universal quantitative model can be used for process control, although the results of model correction by slope/bias or piecewise direct standardization were not as good as those of model updating. They also indicated that the model corrected by slope/bias is better than that corrected by piecewise direct standardization. Model correction provides a new application for NIR universal models in process control.
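
    A minimal sketch of slope/bias correction under the usual assumptions: regress reference values on the universal model's predictions for a handful of transfer samples, then apply the resulting linear map to new predictions. The numbers are invented.

```python
# Slope/bias correction: a first-order fit between universal-model
# predictions and lab reference values on transfer samples.

import numpy as np

pred_transfer = np.array([9.8, 11.2, 12.5, 14.1, 15.6])   # universal-model output
ref_transfer  = np.array([10.5, 11.9, 13.3, 14.8, 16.4])  # lab reference values

slope, bias = np.polyfit(pred_transfer, ref_transfer, 1)  # linear correction

def correct(pred_new: np.ndarray) -> np.ndarray:
    return slope * pred_new + bias

print(correct(np.array([12.0, 13.0])))  # corrected predictions
```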

  1. Quantitative agent based model of user behavior in an Internet discussion forum.

    Science.gov (United States)

    Sobkowicz, Pawel

    2013-01-01

    The paper applies an agent based simulation of opinion evolution, based on nonlinear emotion/information/opinion (E/I/O) individual dynamics, to an actual Internet discussion forum. The goal is to reproduce the results of two-year long observations and analyses of the user communication behavior and of the expressed opinions and emotions, via simulations using an agent based model. The model allowed us to derive various characteristics of the forum, including the distribution of user activity and popularity (outdegree and indegree), the distribution of length of dialogs between the participants, their political sympathies and the emotional content and purpose of the comments. The parameters used in the model have intuitive meanings, and can be translated into psychological observables.

  2. Quantitative agent based model of user behavior in an Internet discussion forum.

    Directory of Open Access Journals (Sweden)

    Pawel Sobkowicz

    Full Text Available The paper applies an agent based simulation of opinion evolution, based on nonlinear emotion/information/opinion (E/I/O) individual dynamics, to an actual Internet discussion forum. The goal is to reproduce the results of two-year long observations and analyses of the user communication behavior and of the expressed opinions and emotions, via simulations using an agent based model. The model allowed us to derive various characteristics of the forum, including the distribution of user activity and popularity (outdegree and indegree), the distribution of length of dialogs between the participants, their political sympathies and the emotional content and purpose of the comments. The parameters used in the model have intuitive meanings, and can be translated into psychological observables.

  3. Quantitative analysis of Terminal Restriction Fragment Length Polymorphism (T-RFLP) microbial community profiles: peak height data shown to be more reproducible than peak area

    Directory of Open Access Journals (Sweden)

    Roberto A. Caffaro-Filho

    2007-12-01

    Full Text Available Terminal Restriction Fragment Length Polymorphism (T-RFLP) is a culture-independent fingerprinting method for microbial community analysis. Profiles generated by an automated electrophoresis system can be analysed quantitatively using either peak height or peak area data. Statistical testing demonstrated that peak height data are more reproducible than peak area data.

  4. Development of quantitative atomic modeling for tungsten transport study Using LHD plasma with tungsten pellet injection

    International Nuclear Information System (INIS)

    Murakami, I.; Sakaue, H.A.; Suzuki, C.; Kato, D.; Goto, M.; Tamura, N.; Sudo, S.; Morita, S.

    2014-10-01

    Quantitative tungsten study with reliable atomic modeling is important for the successful achievement of ITER and fusion reactors. We have developed tungsten atomic modeling for understanding the tungsten behavior in fusion plasmas. The modeling is applied to the analysis of tungsten spectra observed from currentless plasmas of the Large Helical Device (LHD) with tungsten pellet injection. We found that extreme ultraviolet (EUV) lines of W24+ to W33+ ions are very sensitive to electron temperature (Te) and useful for examining the tungsten behavior in edge plasmas. Based on the first quantitative analysis of the measured spatial profile of the W44+ ion, the tungsten concentration is determined to be n(W44+)/ne = 1.4x10^-4 and the total radiation loss is estimated as ∼4 MW, a value roughly half the total NBI power. (author)

  5. Group Active Engagements Using Quantitative Modeling of Physiology Concepts in Large-Enrollment Biology Classes

    Directory of Open Access Journals (Sweden)

    Karen L. Carleton

    2016-12-01

    Full Text Available Organismal Biology is the third introductory biology course taught at the University of Maryland. Students learn about the geometric, physical, chemical, and thermodynamic constraints that are common to all life, and their implications for the evolution of multicellular organisms based on a common genetic “toolbox.” An additional goal is helping students to improve their scientific logic and comfort with quantitative modeling. We recently developed group active engagement exercises (GAEs) for this Organismal Biology class. Currently, our class is built around twelve GAE activities implemented in an auditorium lecture hall in a large enrollment class. The GAEs examine scientific concepts using a variety of models including physical models, qualitative models, and Excel-based quantitative models. Three quantitative GAEs give students an opportunity to build their understanding of key physiological ideas. (1) The Escape from Planet Ranvier exercise reinforces student understanding that membrane permeability means that ions move through open channels in the membrane. (2) The Stressing and Straining exercise requires students to quantify the elastic modulus from data gathered either in class or from the scientific literature. (3) In the Leveraging Your Options exercise, students learn about lever systems and apply this knowledge to biological systems.

  6. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    Science.gov (United States)

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
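
    A hedged sketch of the sub-model idea using PLS regression on mock spectra: train low- and high-concentration sub-models plus a full-range model, then blend the sub-model outputs according to the full-range first-pass estimate. The ranges and blending weights here are illustrative, not ChemCam's calibration.

```python
# Sub-model blending sketch: a full-range PLS model provides a first-pass
# estimate that selects how to weight low- and high-range sub-models.
# Spectra and compositions are random stand-ins.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))                       # mock spectra
y = rng.uniform(0, 100, size=200)                    # mock composition (wt%)

full = PLSRegression(n_components=5).fit(X, y)
low  = PLSRegression(n_components=5).fit(X[y < 50], y[y < 50])
high = PLSRegression(n_components=5).fit(X[y >= 50], y[y >= 50])

def blended_predict(x):
    x = x.reshape(1, -1)
    y0 = full.predict(x).item()                      # first-pass estimate
    w = np.clip((y0 - 25.0) / 50.0, 0.0, 1.0)        # 0 -> low model, 1 -> high
    return (1 - w) * low.predict(x).item() + w * high.predict(x).item()

print(f"blended estimate: {blended_predict(X[0]):.1f} wt%")
```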

  7. Extended characterisation of the serotonin 2A (5-HT2A) receptor-selective PET radiotracer 11C-MDL100907 in humans: quantitative analysis, test-retest reproducibility, and vulnerability to endogenous 5-HT tone.

    Science.gov (United States)

    Talbot, Peter S; Slifstein, Mark; Hwang, Dah-Ren; Huang, Yiyun; Scher, Erica; Abi-Dargham, Anissa; Laruelle, Marc

    2012-01-02

    Scanning properties and analytic methodology of the 5-HT2A receptor-selective positron emission tomography (PET) tracer 11C-MDL100907 have been partially characterised in previous reports. We present an extended characterisation in healthy human subjects. 64 11C-MDL100907 PET scans with metabolite-corrected arterial input function were performed in 39 healthy adults (18-55 years). 12 subjects were scanned twice (duration 150 min) to provide data on plasma analysis, model order estimation, and stability and test-retest characteristics of outcome measures. All other scans were 90 min duration. 3 subjects completed scanning at baseline and following 5-HT2A receptor antagonist medication (risperidone or cyproheptadine) to provide definitive data on the suitability of the cerebellum as reference region. 10 subjects were scanned under reduced 5-HT and control conditions using rapid tryptophan depletion to investigate vulnerability to competition with endogenous 5-HT. 13 subjects were scanned as controls in clinical protocols. Pooled data were used to analyse the relationship between tracer injected mass and receptor occupancy, and age-related decline in 5-HT2A receptors. Optimum analytic method was a 2-tissue compartment model with arterial input function. However, basis function implementation of SRTM may be suitable for measuring between-group differences non-invasively and warrants further investigation. Scan duration of 90 min achieved stable outcome measures in all cortical regions except orbitofrontal which required 120 min. Binding potential (BPP and BPND) test-retest variability was very good (7-11%) in neocortical regions other than orbitofrontal, and moderately good (14-20%) in orbitofrontal cortex and medial temporal lobe. Saturation occupancy of 5-HT2A receptors by risperidone validates the use of the cerebellum as a region devoid of specific binding for the purposes of PET. We advocate a mass limit of 4.6 μg to remain below 5% receptor occupancy. 11C

  8. Quantitative chemogenomics: machine-learning models of protein-ligand interaction.

    Science.gov (United States)

    Andersson, Claes R; Gustafsson, Mats G; Strömbergsson, Helena

    2011-01-01

    Chemogenomics is an emerging interdisciplinary field that lies in the interface of biology, chemistry, and informatics. Most of the currently used drugs are small molecules that interact with proteins. Understanding protein-ligand interaction is therefore central to drug discovery and design. In the subfield of chemogenomics known as proteochemometrics, protein-ligand-interaction models are induced from data matrices that consist of both protein and ligand information along with some experimentally measured variable. The two general aims of this quantitative multi-structure-property-relationship modeling (QMSPR) approach are to exploit sparse/incomplete information sources and to obtain more general models covering larger parts of the protein-ligand space than traditional approaches, which focus mainly on specific targets or ligands. The data matrices, usually obtained from multiple sparse/incomplete sources, typically contain series of proteins and ligands together with quantitative information about their interactions. A useful model should ideally be easy to interpret and generalize well to new unseen protein-ligand combinations. Resolving this requires sophisticated machine-learning methods for model induction, combined with adequate validation. This review is intended to provide a guide to methods and data sources suitable for this kind of protein-ligand-interaction modeling. An overview of the modeling process is presented including data collection, protein and ligand descriptor computation, data preprocessing, machine-learning-model induction and validation. Concerns and issues specific for each step in this kind of data-driven modeling will be discussed. © 2011 Bentham Science Publishers

  9. Quantitative Analysis of Intra Urban Growth Modeling using socio economic agents by combining cellular automata model with agent based model

    Science.gov (United States)

    Singh, V. K.; Jha, A. K.; Gupta, K.; Srivastav, S. K.

    2017-12-01

    Recent studies indicate that there is a significant improvement in urban land use dynamics through modeling at finer spatial resolutions. Geo-computational models such as cellular automata and agent based models have provided evidence regarding the quantification of urban growth patterns within the urban boundary. In recent studies, socio-economic factors such as demography, education rate, household density, parcel price of the current year, and distance to roads, schools, hospitals, commercial centers and police stations are considered the major factors influencing the Land Use Land Cover (LULC) pattern of a city. These factors have a unidirectional relationship to the land use pattern, which makes it difficult to analyze the spatial aspects of model results both quantitatively and qualitatively. In this study, a cellular automata model is combined with a generic model known as an agent based model to evaluate the impact of socio-economic factors on the land use pattern. For this purpose, Dehradun, an Indian city, is selected as a case study. Socio-economic factors were collected from field surveys, the Census of India, and the Directorate of Economic Census, Uttarakhand, India. A 3x3 simulating window is used to consider the impact on LULC. The cellular automata model results are examined for the identification of hot spot areas within the urban area, and the agent based model uses a logistic regression approach to identify the correlation between each factor and LULC and to classify the available area into low density residential, medium density residential, high density residential or commercial area. In the modeling phase, transition rules, neighborhood effects and cell change factors are used to improve the representation of built-up classes. A significant improvement is observed in the built-up classes, from 84% to 89%. However, after incorporating the agent based model with the cellular automata model, the accuracy improved from 89% to 94% in three urban classes, i.e. low density, medium density and commercial classes.

  10. [Study on temperature correctional models of quantitative analysis with near infrared spectroscopy].

    Science.gov (United States)

    Zhang, Jun; Chen, Hua-cai; Chen, Xing-dan

    2005-06-01

    The effect of environment temperature on near infrared spectroscopic quantitative analysis was studied. The temperature correction model was calibrated with 45 wheat samples at different environment temperatures and with the temperature as an external variable. The constant temperature model was calibrated with 45 wheat samples at the same temperature. The predicted results of the two models for the protein contents of wheat samples at different temperatures were compared. The results showed that the mean standard error of prediction (SEP) of the temperature correction model was 0.333, but the SEP of the constant temperature (22 degrees C) model increased as the temperature difference enlarged, and the SEP was up to 0.602 when using this model at 4 degrees C. It was suggested that the temperature correction model improves the analysis precision.

  11. Quantitative explanation of circuit experiments and real traffic using the optimal velocity model

    Science.gov (United States)

    Nakayama, Akihiro; Kikuchi, Macoto; Shibata, Akihiro; Sugiyama, Yuki; Tadaki, Shin-ichi; Yukawa, Satoshi

    2016-04-01

    We have experimentally confirmed that the occurrence of a traffic jam is a dynamical phase transition (Tadaki et al 2013 New J. Phys. 15 103034, Sugiyama et al 2008 New J. Phys. 10 033001). In this study, we investigate whether the optimal velocity (OV) model can quantitatively explain the results of experiments. The occurrence and non-occurrence of jammed flow in our experiments agree with the predictions of the OV model. We also propose a scaling rule for the parameters of the model. Using this rule, we obtain critical density as a function of a single parameter. The obtained critical density is consistent with the observed values for highway traffic.
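
    For reference, a minimal integration of the OV model in its standard form, dv_n/dt = a [V(h_n) - v_n] with a tanh optimal-velocity function. The parameter values are illustrative rather than the calibrated ones from the paper, but with this choice the uniform flow is linearly unstable and a jam emerges spontaneously.

```python
# Optimal velocity (OV) model on a ring: each car relaxes toward a
# headway-dependent optimal velocity. Parameters are illustrative only.

import numpy as np

N, L = 30, 60.0                   # cars, circuit length
a = 1.0                           # sensitivity (1/s); a < 2 V'(h*) -> unstable
dt, steps = 0.05, 4000

def V(h):                         # optimal velocity function (Bando form)
    return np.tanh(h - 2.0) + np.tanh(2.0)

rng = np.random.default_rng(2)
x = np.linspace(0, L, N, endpoint=False) + 0.1 * rng.normal(size=N)
v = V(L / N) * np.ones(N)

for _ in range(steps):
    headway = (np.roll(x, -1) - x) % L        # distance to the car ahead
    v += a * (V(headway) - v) * dt            # dv/dt = a (V(h) - v)
    x = (x + v * dt) % L

print(f"velocity spread after {steps*dt:.0f} s: {v.std():.3f} (large spread = jam)")
```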

  12. Study on quantitative reliability analysis by multilevel flow models for nuclear power plants

    International Nuclear Information System (INIS)

    Yang Ming; Zhang Zhijian

    2011-01-01

    Multilevel Flow Models (MFM) is a goal-oriented system modeling method. MFM explicitly describes how a system performs the required functions under stated conditions for a stated period of time. This paper presents a novel system reliability analysis method based on MFM (MRA). The proposed method allows describing the system knowledge at different levels of abstraction, which makes the reliability model easy to understand, establish, modify and extend. The success probabilities of all main goals and sub-goals are available from a single quantitative analysis. The proposed method is suitable for system analysis and scheme comparison for complex industrial systems such as nuclear power plants. (authors)

  13. A quantitative analysis of instabilities in the linear chiral sigma model

    International Nuclear Information System (INIS)

    Nemes, M.C.; Nielsen, M.; Oliveira, M.M. de; Providencia, J. da

    1990-08-01

    We present a method to construct a complete set of stationary states corresponding to small amplitude motion which naturally includes the continuum solution. The energy-weighted sum rule (EWSR) is shown to provide a quantitative criterion for the importance of instabilities, which are known to occur in non-asymptotically free theories. Our results for the linear σ model should be valid for a large class of models. A unified description of baryon and meson properties in terms of the linear σ model is also given. (author)

  14. Modelling Framework and the Quantitative Analysis of Distributed Energy Resources in Future Distribution Networks

    DEFF Research Database (Denmark)

    Han, Xue; Sandels, Claes; Zhu, Kun

    2013-01-01

    operation will be changed by various parameters of DERs. This article proposed a modelling framework for an overview analysis on the correlation between DERs. Furthermore, to validate the framework, the authors described the reference models of different categories of DERs with their unique characteristics......, comprising distributed generation, active demand and electric vehicles. Subsequently, quantitative analysis was made on the basis of the current and envisioned DER deployment scenarios proposed for Sweden. Simulations are performed in two typical distribution network models for four seasons. The simulation...

  15. Polymorphic ethyl alcohol as a model system for the quantitative study of glassy behaviour

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, H.E.; Schober, H.; Gonzalez, M.A. [Institut Max von Laue - Paul Langevin (ILL), 38 - Grenoble (France); Bermejo, F.J.; Fayos, R.; Dawidowski, J. [Consejo Superior de Investigaciones Cientificas, Madrid (Spain); Ramos, M.A.; Vieira, S. [Universidad Autonoma de Madrid (Spain)

    1997-04-01

    The nearly universal transport and dynamical properties of amorphous materials or glasses are investigated. Reasonably successful phenomenological models have been developed to account for these properties as well as the behaviour near the glass transition, but quantitative microscopic models have had limited success. One hindrance to these investigations has been the lack of a material which exhibits glass-like properties in more than one phase at a given temperature. This report presents results of neutron-scattering experiments for one such material, ordinary ethyl alcohol, which promises to be a model system for future investigations of glassy behaviour. (author). 8 refs.

  16. Incorporation of caffeine into a quantitative model of fatigue and sleep.

    Science.gov (United States)

    Puckeridge, M; Fulcher, B D; Phillips, A J K; Robinson, P A

    2011-03-21

    A recent physiologically based model of human sleep is extended to incorporate the effects of caffeine on sleep-wake timing and fatigue. The model includes the sleep-active neurons of the hypothalamic ventrolateral preoptic area (VLPO), the wake-active monoaminergic brainstem populations (MA), their interactions with cholinergic/orexinergic (ACh/Orx) input to MA, and circadian and homeostatic drives. We model two effects of caffeine on the brain due to competitive antagonism of adenosine (Ad): (i) a reduction in the homeostatic drive and (ii) an increase in cholinergic activity. By comparing the model output to experimental data, constraints are determined on the parameters that describe the action of caffeine on the brain. In accord with experiment, the ranges of these parameters imply significant variability in caffeine sensitivity between individuals, with caffeine's effectiveness in reducing fatigue being highly dependent on an individual's tolerance, and past caffeine and sleep history. Although there are wide individual differences in caffeine sensitivity and thus in parameter values, once the model is calibrated for an individual it can be used to make quantitative predictions for that individual. A number of applications of the model are examined, using exemplar parameter values, including: (i) quantitative estimation of the sleep loss and the delay to sleep onset after taking caffeine for various doses and times; (ii) an analysis of the system's stable states showing that the wake state during sleep deprivation is stabilized after taking caffeine; and (iii) comparing model output successfully to experimental values of subjective fatigue reported in a total sleep deprivation study examining the reduction of fatigue with caffeine. This model provides a framework for quantitatively assessing optimal strategies for using caffeine, on an individual basis, to maintain performance during sleep deprivation. Copyright © 2011 Elsevier Ltd. All rights reserved.
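
    A deliberately simplified, two-process-style toy version of effect (i) above — caffeine transiently attenuating the homeostatic drive, with first-order elimination — can make the mechanism concrete. All parameters are invented and this is not the physiologically based model of the paper:

```python
# Toy sketch: homeostatic drive builds during wake; caffeine (adenosine
# antagonist) transiently reduces the perceived drive with exponential decay.
import math

chi_rise = 18.0        # wake buildup time constant (h), assumed
half_life = 5.0        # caffeine elimination half-life (h), typical value
k = math.log(2) / half_life

def perceived_drive(t_awake, dose_time=None, dose_effect=0.3, dt=0.1):
    H, t, out = 0.0, 0.0, []
    while t < t_awake:
        H += dt * (1.0 - H) / chi_rise            # drive builds during wake
        c = 0.0
        if dose_time is not None and t >= dose_time:
            c = dose_effect * math.exp(-k * (t - dose_time))
        out.append(H * (1.0 - c))                 # caffeine masks the drive
        t += dt
    return out

print(f"peak drive, no caffeine:   {max(perceived_drive(16)):.3f}")
print(f"peak drive, dose at t=8 h: {max(perceived_drive(16, dose_time=8)):.3f}")
```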

  17. From classical genetics to quantitative genetics to systems biology: modeling epistasis.

    Directory of Open Access Journals (Sweden)

    David L Aylor

    2008-03-01

    Full Text Available Gene expression data has been used in lieu of phenotype in both classical and quantitative genetic settings. These two disciplines have separate approaches to measuring and interpreting epistasis, which is the interaction between alleles at different loci. We propose a framework for estimating and interpreting epistasis from a classical experiment that combines the strengths of each approach. A regression analysis step accommodates the quantitative nature of expression measurements by estimating the effect of gene deletions plus any interaction. Effects are selected by significance such that a reduced model describes each expression trait. We show how the resulting models correspond to specific hierarchical relationships between two regulator genes and a target gene. These relationships are the basic units of genetic pathways and genomic system diagrams. Our approach can be extended to analyze data from a variety of experiments, multiple loci, and multiple environments.
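
    A hedged sketch of the regression step described above, for one target gene and two regulator deletions: indicator variables code the deletions and an interaction term carries the epistasis. All expression values are invented:

```python
# Sketch: fit y = mu + b1*d1 + b2*d2 + b12*d1*d2 by least squares.
import numpy as np

# rows: WT, delta2, delta1, delta1+delta2 (two replicates each); log expression
d1 = np.array([0, 0, 1, 1, 0, 0, 1, 1])
d2 = np.array([0, 1, 0, 1, 0, 1, 0, 1])
y = np.array([5.0, 3.1, 2.9, 3.0, 5.2, 2.8, 3.1, 2.9])

X = np.column_stack([np.ones_like(d1), d1, d2, d1 * d2])
(mu, b1, b2, b12), *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"deletion effects: {b1:.2f}, {b2:.2f}; interaction term: {b12:.2f}")
# A significant interaction term whose size mirrors a deletion effect is the
# kind of pattern the paper maps onto hierarchical regulator-target models.
```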

  18. Refining the statistical model for quantitative immunostaining of surface-functionalized nanoparticles by AFM.

    Science.gov (United States)

    MacCuspie, Robert I; Gorka, Danielle E

    2013-10-01

    Recently, an atomic force microscopy (AFM)-based approach for quantifying the number of biological molecules conjugated to a nanoparticle surface at low number densities was reported. The number of target molecules conjugated to the analyte nanoparticle can be determined with single nanoparticle fidelity using antibody-mediated self-assembly to decorate the analyte nanoparticles with probe nanoparticles (i.e., quantitative immunostaining). This work refines the statistical models used to quantitatively interpret the observations when AFM is used to image the resulting structures. The refinements add terms to the previous statistical models to account for the physical sizes of the analyte nanoparticles, conjugated molecules, antibodies, and probe nanoparticles. Thus, a more physically realistic statistical computation can be implemented for a given sample of known qualitative composition, using the software scripts provided. Example AFM data sets, using horseradish peroxidase conjugated to gold nanoparticles, are presented to illustrate how to implement this method successfully.

  19. Accounting for genetic interactions improves modeling of individual quantitative trait phenotypes in yeast.

    Science.gov (United States)

    Forsberg, Simon K G; Bloom, Joshua S; Sadhu, Meru J; Kruglyak, Leonid; Carlborg, Örjan

    2017-04-01

    Experiments in model organisms report abundant genetic interactions underlying biologically important traits, whereas quantitative genetics theory predicts, and data support, the notion that most genetic variance in populations is additive. Here we describe networks of capacitating genetic interactions that contribute to quantitative trait variation in a large yeast intercross population. The additive variance explained by individual loci in a network is highly dependent on the allele frequencies of the interacting loci. Modeling of phenotypes for multilocus genotype classes in the epistatic networks is often improved by accounting for the interactions. We discuss the implications of these results for attempts to dissect genetic architectures and to predict individual phenotypes and long-term responses to selection.

  20. Tannin structural elucidation and quantitative ³¹P NMR analysis. 1. Model compounds.

    Science.gov (United States)

    Melone, Federica; Saladino, Raffaele; Lange, Heiko; Crestini, Claudia

    2013-10-02

    Tannins and flavonoids are secondary metabolites of plants that display a wide array of biological activities. This peculiarity is related to the inhibition of extracellular enzymes that occurs through the complexation of peptides by tannins. Not only the nature of these interactions, but more fundamentally also the structure of these heterogeneous polyphenolic molecules are not completely clear. This first paper describes the development of a new analytical method for the structural characterization of tannins on the basis of tannin model compounds employing an in situ labeling of all labile H groups (aliphatic OH, phenolic OH, and carboxylic acids) with a phosphorus reagent. The ³¹P NMR analysis of ³¹P-labeled samples allowed the unprecedented quantitative and qualitative structural characterization of hydrolyzable tannins, proanthocyanidins, and catechin tannin model compounds, forming the foundations for the quantitative structural elucidation of a variety of actual tannin samples described in part 2 of this series.

  1. Quantitative determination of Auramine O by terahertz spectroscopy with 2DCOS-PLSR model

    Science.gov (United States)

    Zhang, Huo; Li, Zhi; Chen, Tao; Qin, Binyi

    2017-09-01

    Residues of harmful dyes such as Auramine O (AO) in herb and food products threaten the health of people, so fast and sensitive detection techniques for the residues are needed. As a powerful tool for substance detection, terahertz (THz) spectroscopy was used for the quantitative determination of AO by combining it with an improved partial least-squares regression (PLSR) model in this paper. Absorbance of herbal samples with different concentrations was obtained by THz-TDS in the band between 0.2 THz and 1.6 THz. We applied two-dimensional correlation spectroscopy (2DCOS) to improve the PLSR model. This method highlighted the spectral differences between different concentrations, provided a clear criterion for the input interval selection, and improved the accuracy of the detection result. The experimental results indicated that the combination of THz spectroscopy and 2DCOS-PLSR is an excellent quantitative analysis method.
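
    A hedged sketch of the 2DCOS-then-PLSR idea: compute the synchronous two-dimensional correlation spectrum of the concentration-perturbed absorbances, use its diagonal (autopeaks) to select the most concentration-sensitive frequency points, then calibrate a PLSR model on that subset. The data, band position, and threshold below are all invented:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
freqs = np.linspace(0.2, 1.6, 200)              # THz axis
conc = np.linspace(0.0, 1.0, 15)                # AO concentrations (a.u.)
peak = np.exp(-((freqs - 1.0) / 0.05) ** 2)     # one invented absorption line
A = conc[:, None] * peak + 0.01 * rng.normal(size=(15, 200))

# Synchronous 2D correlation spectrum of the mean-centered dynamic spectra
A_dyn = A - A.mean(axis=0)
phi = A_dyn.T @ A_dyn / (len(conc) - 1)

auto = phi.diagonal()                           # autopeaks mark sensitive bands
band = auto > 0.1 * auto.max()                  # threshold is an assumption
model = PLSRegression(n_components=2).fit(A[:, band], conc)
print(f"kept {band.sum()}/{len(freqs)} points, "
      f"R^2 = {model.score(A[:, band], conc):.3f}")
```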

  2. Enhancing reproducibility: Failures from Reproducibility Initiatives underline core challenges.

    Science.gov (United States)

    Mullane, Kevin; Williams, Michael

    2017-08-15

    Efforts to address reproducibility concerns in biomedical research include: initiatives to improve journal publication standards and peer review; increased attention to publishing methodological details that enable experiments to be reconstructed; guidelines on standards for study design, implementation, analysis and execution; meta-analyses of multiple studies within a field to synthesize a common conclusion; and the formation of consortia to adopt uniform protocols and internally reproduce data. Another approach to addressing reproducibility is Reproducibility Initiatives (RIs), well-intended, high-profile, systematically peer-vetted initiatives that are intended to replace the traditional process of scientific self-correction. Outcomes from the RIs reported to date have questioned the usefulness of this approach, particularly when the RI outcome differs from other independent self-correction studies that have reproduced the original finding. As a failed RI attempt is a single outcome distinct from the original study, it cannot provide any definitive conclusions, necessitating additional studies that the RI approach has neither the ability nor the intent of conducting, making it a questionable replacement for self-correction. A failed RI attempt also has the potential to damage the reputation of the author of the original finding. Reproduction is frequently confused with replication, an issue that is more than semantic, with the former denoting "similarity" and the latter an "exact copy" - an impossible outcome in research because of known and unknown technical, environmental and motivational differences between the original and reproduction studies. To date, the RI framework has negatively impacted efforts to improve reproducibility, confounding attempts to determine whether a research finding is real. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. A method to isolate bacterial communities and characterize ecosystems from food products: Validation and utilization as a reproducible chicken meat model.

    Science.gov (United States)

    Rouger, Amélie; Remenant, Benoit; Prévost, Hervé; Zagorec, Monique

    2017-04-17

    Influenced by production and storage processes and by seasonal changes, the diversity of meat product microbiotas can be very variable. Because microbiotas influence meat quality and safety, characterizing and understanding their dynamics during processing and storage is important for proposing innovative and efficient storage conditions. Challenge tests are usually performed using meat from the same batch, inoculated at high levels with one or a few strains. Such experiments do not reflect the true microbial situation, and the global ecosystem is not taken into account. Our purpose was to constitute live stocks of chicken meat microbiotas to create standard and reproducible ecosystems. We searched for the best method to collect contaminating bacterial communities from chicken cuts to store as frozen aliquots. We tested several methods to extract DNA from these stored communities for subsequent PCR amplification. We determined the best moment to collect bacteria in sufficient amounts during the product shelf life. Results showed that the rinsing method combined with the use of the Mobio DNA extraction kit was the most reliable way to collect bacteria and obtain DNA for subsequent PCR amplification. Then, 23 different chicken meat microbiotas were collected using this procedure. Microbiota aliquots were stored at -80°C without important loss of viability. Their characterization by cultural methods confirmed the large variability (richness and abundance) of bacterial communities present on chicken cuts. Four of these bacterial communities were used to estimate their ability to regrow on meat matrices. Challenge tests performed on sterile matrices showed that these microbiotas were successfully inoculated and could overgrow the natural microbiota of chicken meat. They can therefore be used for performing reproducible challenge tests mimicking a true meat ecosystem and enabling the possibility to test the influence of various processing or storage conditions on complex meat

  4. [Application of DOSC combined with SBC in batch transfer of NIR quantitative models].

    Science.gov (United States)

    Jia, Yi-Fei; Zhang, Ying-Ying; Xu, Bing; Wang, An-Dong; Zhan, Xue-Yan

    2017-06-01

    A near infrared model established under one set of conditions can be applied to new sample, environmental, or instrument conditions through model transfer. Spectral background correction and model updating are two types of data processing methods for NIR quantitative model transfer. Orthogonal signal regression (OSR) is a method based on spectral background correction, in which virtual standard spectra are used to fit a linear relation between master batch spectra and slave batch spectra and to map the slave batch spectra onto the master batch spectra, realizing the transfer of the NIR quantitative model. However, this data processing method requires the virtual standard spectra to be representative; otherwise large errors occur in the regression process. Therefore, the direct orthogonal signal correction-slope and bias correction (DOSC-SBC) method was proposed in this paper to solve the problem of PLS models failing to accurately predict the content of target components in formulations from different batches, and to analyze the difference between the spectral backgrounds of samples from different sources and the prediction error of PLS models. The DOSC method was used to eliminate differences in spectral background unrelated to the target value, and after being combined with the SBC method, the systematic errors between different batches of samples were corrected so that the NIR quantitative model could be transferred between batches. After the DOSC-SBC method was applied to the water extraction and ethanol precipitation processes of Lonicerae Japonicae Flos in this paper, the prediction error for new batches of samples decreased from 32.3% to 7.30% and from 237% to 4.34%, with significantly improved prediction accuracy, so that the target component in new batch samples can be quickly quantified. The DOSC-SBC model transfer method realized the transfer of an NIR quantitative model between different batches, and this method does
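
    The SBC half of the method is simple enough to sketch: predictions made on a new batch with the master-batch model are regressed against reference values for a few transfer samples, and the fitted slope and bias then correct all subsequent predictions. The DOSC filtering step is omitted here, and all numbers are synthetic:

```python
# Sketch: slope/bias correction (SBC) for model transfer between batches.
import numpy as np

rng = np.random.default_rng(3)
y_ref = rng.uniform(10, 30, 8)                        # transfer-sample references
y_pred = 1.15 * y_ref + 2.0 + rng.normal(0, 0.2, 8)   # biased new-batch output

slope, bias = np.polyfit(y_pred, y_ref, 1)            # y_ref ~ slope*y_pred + bias

def sbc_correct(pred):
    """Apply the fitted slope/bias correction to new-batch predictions."""
    return slope * pred + bias

print("corrected:", sbc_correct(np.array([20.5, 27.3])))
```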

  5. Variable selection in near infrared spectroscopy for quantitative models of homologous analogs of cephalosporins

    Directory of Open Access Journals (Sweden)

    Yan-Chun Feng

    2014-07-01

    Full Text Available Two universal spectral ranges (4550–4100 cm⁻¹ and 6190–5510 cm⁻¹) for the construction of quantitative models of homologous analogs of cephalosporins were proposed by evaluating the performance of five spectral ranges and their combinations, using three data sets of cephalosporins for injection, i.e., cefuroxime sodium, ceftriaxone sodium and cefoperazone sodium. Subsequently, the proposed ranges were validated by using eight calibration sets of other homologous analogs of cephalosporins for injection, namely cefmenoxime hydrochloride, ceftezole sodium, cefmetazole, cefoxitin sodium, cefotaxime sodium, cefradine, cephazolin sodium and ceftizoxime sodium. All the constructed quantitative models for the eight kinds of cephalosporins using these universal ranges could fulfill the requirements for quick quantification. After that, the competitive adaptive reweighted sampling (CARS) algorithm and infrared (IR)–near infrared (NIR) two-dimensional (2D) correlation spectral analysis were used to determine the scientific basis of these two spectral ranges as the universal regions for the construction of quantitative models of cephalosporins. The CARS algorithm demonstrated that the ranges of 4550–4100 cm⁻¹ and 6190–5510 cm⁻¹ included some key wavenumbers which could be attributed to content changes of cephalosporins. The IR–NIR 2D spectral analysis showed that certain wavenumbers in these two regions have strong correlations to the structures of those cephalosporins that were easy to degrade.

  6. Qualitative and quantitative guidelines for the comparison of environmental model predictions

    International Nuclear Information System (INIS)

    Scott, M.

    1995-03-01

    The question of how to assess or compare predictions from a number of models is one of concern in the validation of models, in understanding the effects of different models and model parameterizations on model output, and ultimately in assessing model reliability. Comparison of model predictions with observed data is the basic tool of model validation while comparison of predictions amongst different models provides one measure of model credibility. The guidance provided here is intended to provide qualitative and quantitative approaches (including graphical and statistical techniques) to such comparisons for use within the BIOMOVS II project. It is hoped that others may find it useful. It contains little technical information on the actual methods but several references are provided for the interested reader. The guidelines are illustrated on data from the VAMP CB scenario. Unfortunately, these data do not permit all of the possible approaches to be demonstrated since predicted uncertainties were not provided. The questions considered are concerned with a) intercomparison of model predictions and b) comparison of model predictions with the observed data. A series of examples illustrating some of the different types of data structure and some possible analyses have been constructed. A bibliography of references on model validation is provided. It is important to note that the results of the various techniques discussed here, whether qualitative or quantitative, should not be considered in isolation. Overall model performance must also include an evaluation of model structure and formulation, i.e. conceptual model uncertainties, and results for performance measures must be interpreted in this context. Consider a number of models which are used to provide predictions of a number of quantities at a number of time points. In the case of the VAMP CB scenario, the results include predictions of total deposition of Cs-137 and time dependent concentrations in various

  7. Cancer imaging phenomics toolkit: quantitative imaging analytics for precision diagnostics and predictive modeling of clinical outcome.

    Science.gov (United States)

    Davatzikos, Christos; Rathore, Saima; Bakas, Spyridon; Pati, Sarthak; Bergman, Mark; Kalarot, Ratheesh; Sridharan, Patmaa; Gastounioti, Aimilia; Jahani, Nariman; Cohen, Eric; Akbari, Hamed; Tunc, Birkan; Doshi, Jimit; Parker, Drew; Hsieh, Michael; Sotiras, Aristeidis; Li, Hongming; Ou, Yangming; Doot, Robert K; Bilello, Michel; Fan, Yong; Shinohara, Russell T; Yushkevich, Paul; Verma, Ragini; Kontos, Despina

    2018-01-01

    The growth of multiparametric imaging protocols has paved the way for quantitative imaging phenotypes that predict treatment response and clinical outcome, reflect underlying cancer molecular characteristics and spatiotemporal heterogeneity, and can guide personalized treatment planning. This growth has underlined the need for efficient quantitative analytics to derive high-dimensional imaging signatures of diagnostic and predictive value in this emerging era of integrated precision diagnostics. This paper presents cancer imaging phenomics toolkit (CaPTk), a new and dynamically growing software platform for analysis of radiographic images of cancer, currently focusing on brain, breast, and lung cancer. CaPTk leverages the value of quantitative imaging analytics along with machine learning to derive phenotypic imaging signatures, based on two-level functionality. First, image analysis algorithms are used to extract comprehensive panels of diverse and complementary features, such as multiparametric intensity histogram distributions, texture, shape, kinetics, connectomics, and spatial patterns. At the second level, these quantitative imaging signatures are fed into multivariate machine learning models to produce diagnostic, prognostic, and predictive biomarkers. Results from clinical studies in three areas are shown: (i) computational neuro-oncology of brain gliomas for precision diagnostics, prediction of outcome, and treatment planning; (ii) prediction of treatment response for breast and lung cancer, and (iii) risk assessment for breast cancer.

  8. Fixing the cracks in the crystal ball: A maturity model for quantitative risk assessment

    International Nuclear Information System (INIS)

    Rae, Andrew; Alexander, Rob; McDermid, John

    2014-01-01

    Quantitative risk assessment (QRA) is widely practiced in system safety, but there is insufficient evidence that QRA in general is fit for purpose. Defenders of QRA draw a distinction between poor or misused QRA and correct, appropriately used QRA, but this distinction is only useful if we have robust ways to identify the flaws in an individual QRA. In this paper we present a comprehensive maturity model for QRA which covers all the potential flaws discussed in the risk assessment literature and in a collection of risk assessment peer reviews. We provide initial validation of the completeness and realism of the model. Our risk assessment maturity model provides a way to prioritise both process development within an organisation and empirical research within the QRA community.

    Highlights:
    • Quantitative risk assessment (QRA) is widely practiced, but there is insufficient evidence that it is fit for purpose.
    • A given QRA may be good, or it may not – we need systematic ways to distinguish this.
    • We have created a maturity model for QRA which covers all the potential flaws discussed in the risk assessment literature.
    • We have provided initial validation of the completeness and realism of the model.
    • The maturity model can also be used to prioritise QRA research discipline-wide.

  9. Comparison of semi-quantitative and quantitative dynamic contrast-enhanced MRI evaluations of vertebral marrow perfusion in a rat osteoporosis model.

    Science.gov (United States)

    Zhu, Jingqi; Xiong, Zuogang; Zhang, Jiulong; Qiu, Yuyou; Hua, Ting; Tang, Guangyu

    2017-11-14

    This study aims to investigate the technical feasibility of semi-quantitative and quantitative dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in the assessment of longitudinal changes of marrow perfusion in a rat osteoporosis model, using bone mineral density (BMD) measured by micro-computed tomography (micro-CT) and histopathology as the gold standards. Fifty rats were randomly assigned to the control group (n=25) and the ovariectomy (OVX) group, whose bilateral ovaries were excised (n=25). Semi-quantitative and quantitative DCE-MRI, micro-CT, and histopathological examinations were performed on lumbar vertebrae at baseline and 3, 6, 9, and 12 weeks after operation. The differences between the two groups in terms of the semi-quantitative DCE-MRI parameter (maximum enhancement, Emax), quantitative DCE-MRI parameters (volume transfer constant, Ktrans; interstitial volume, Ve; and efflux rate constant, Kep), micro-CT parameter (BMD), and histopathological parameter (microvessel density, MVD) were compared at each of the time points using an independent-sample t test. The differences in these parameters between baseline and other time points in each group were assessed via Bonferroni's multiple comparison test. A Pearson correlation analysis was applied to assess the relationships between DCE-MRI, micro-CT, and histopathological parameters. In the OVX group, the Emax values decreased significantly compared with those of the control group at weeks 6 and 9 (p=0.003 and 0.004, respectively). The Ktrans values decreased significantly compared with those of the control group from week 3 onward. Compared with semi-quantitative DCE-MRI, the quantitative DCE-MRI parameter Ktrans is a more sensitive and accurate index for detecting early reduced perfusion in osteoporotic bone.

  10. PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool

    KAUST Repository

    AlTurki, Musab

    2011-01-01

    Statistical model checking is an attractive formal analysis method for probabilistic systems such as, for example, cyber-physical systems which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov Chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties that it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.
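
    The core of statistical model checking is easy to illustrate: estimate the probability that a property holds by sampling many independent runs of a stochastic model, and parallelize the sampling. The toy "model" below (a biased random walk reaching a goal within a deadline) and all parameters are invented; this illustrates the principle only and is not PVeStA itself:

```python
# Sketch: embarrassingly parallel Monte Carlo estimation of P(property).
import random
from concurrent.futures import ProcessPoolExecutor

def run_satisfies_property(seed, steps=100, goal=10):
    """One simulation run: does a biased walk reach `goal` within `steps`?"""
    rng = random.Random(seed)
    x = 0
    for _ in range(steps):
        x += 1 if rng.random() < 0.55 else -1
        if x >= goal:
            return True
    return False

def estimate(n=20000, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as ex:
        hits = sum(ex.map(run_satisfies_property, range(n), chunksize=500))
    return hits / n

if __name__ == "__main__":
    print(f"P(property) ~= {estimate():.3f}")
```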

  11. Curating and Preparing High-Throughput Screening Data for Quantitative Structure-Activity Relationship Modeling.

    Science.gov (United States)

    Kim, Marlene T; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao

    2016-01-01

    Publicly available bioassay data often contain errors. Curating massive bioassay data, especially high-throughput screening (HTS) data, for Quantitative Structure-Activity Relationship (QSAR) modeling requires the assistance of automated data curation tools. Using automated data curation tools is beneficial to users, especially those without prior computer skills, because many platforms have been developed and optimized based on standardized requirements. As a result, users do not need to extensively configure the curation tool prior to the application procedure. In this chapter, a freely available automatic tool to curate and prepare HTS data for QSAR modeling purposes is described.

  12. Validation of Dynamic Contrast-Enhanced Magnetic Resonance Imaging-Derived Vascular Permeability Measurements Using Quantitative Autoradiography in the RG2 Rat Brain Tumor Model

    Directory of Open Access Journals (Sweden)

    Moira C. Ferrier

    2007-07-01

    Full Text Available Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is widely used to evaluate tumor permeability, yet measurements have not been directly validated in brain tumors. Our purpose was to compare estimates of the forward leakage constant Ktrans derived from DCE-MRI to the estimates Ki obtained using [14C]aminoisobutyric acid quantitative autoradiography ([14C]AIB QAR), an established method of evaluating blood-tumor barrier permeability. Both DCE-MRI and [14C]AIB QAR were performed in five rats 9 to 11 days following tumor implantation. Ktrans in the tumor was estimated from DCE-MRI using the three-parameter general kinetic model and a measured vascular input function. Ki was estimated from QAR data using regions of interest (ROIs) closely corresponding to those used to estimate Ktrans. Ktrans and Ki correlated with each other for two independent sets of central tumor ROIs (R = 0.905, P = .035; R = 0.933, P = .021). In an additional six rats, Ktrans was estimated on two occasions to show reproducibility (intraclass coefficient = 0.9993; coefficient of variance = 6.07%). In vivo blood-tumor permeability parameters derived from DCE-MRI are reproducible and correlate with the gold standard for quantifying blood-tumor barrier permeability, [14C]AIB QAR.

  13. A quantitative model to assess Social Responsibility in Environmental Science and Technology.

    Science.gov (United States)

    Valcárcel, M; Lucena, R

    2014-01-01

    The awareness of the impact of human activities on society and the environment is known as "Social Responsibility" (SR). It has been a topic of growing interest in many enterprises since the 1950s, and its implementation/assessment is nowadays supported by international standards. There is a tendency to extend its scope of application to other areas of human activity, such as Research, Development and Innovation (R + D + I). In this paper, a model for the quantitative assessment of Social Responsibility in Environmental Science and Technology (SR EST) is described in detail. This model is based on well-established written standards such as the EFQM Excellence model and the ISO 26000:2010 Guidance on SR. The definition of five hierarchies of indicators, the transformation of qualitative information into quantitative data and the dual procedure of self-evaluation and external evaluation are the milestones of the proposed model, which can be applied to Environmental Research Centres and institutions. In addition, a simplified model that facilitates its implementation is presented in the article. © 2013 Elsevier B.V. All rights reserved.

  14. A Quantitative Risk Evaluation Model for Network Security Based on Body Temperature

    Directory of Open Access Journals (Sweden)

    Y. P. Jiang

    2016-01-01

    Full Text Available Traditional network security risk evaluation models have certain limitations in terms of real-time performance, accuracy, and characterization. This paper proposes a quantitative risk evaluation model for network security based on body temperature (QREM-BT), which refers to the mechanism of the biological immune system, in which an imbalance of the immune system can result in body temperature changes. First, the r-contiguous bits nonconstant matching rate algorithm is used to improve the detection quality of the detector and reduce the missing rate and false detection rate. The dynamic evolution process of the detector is then described in detail. The mechanism of increased antibody concentration, which comprises activating mature detectors and cloning memory detectors, is mainly used to assess the network risk caused by various species of attacks. On this basis, the paper not only establishes the equation for the antibody concentration increase factor but also puts forward a quantitative calculation model for antibody concentration. Finally, because the mechanism of antibody concentration change is reasonable and effective in reflecting network risk, a body temperature evaluation model is established. The simulation results showed that, based on the body temperature value, the proposed model provides a more effective, real-time assessment of network security risk.
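
    The r-contiguous bits matching rule mentioned above is a standard device from artificial immune systems and is worth making concrete: a detector matches an antigen string if the two bit strings agree on at least r consecutive positions. A minimal sketch (the strings and r are arbitrary examples; the paper's nonconstant matching-rate variant builds on this rule):

```python
def r_contiguous_match(detector: str, antigen: str, r: int) -> bool:
    """True if detector and antigen agree on >= r consecutive bit positions."""
    run = 0
    for d, a in zip(detector, antigen):
        run = run + 1 if d == a else 0
        if run >= r:
            return True
    return False

print(r_contiguous_match("10110100", "00110111", r=4))  # True: 4-bit agreement
```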

  15. Quantitative structure-activity relationship (QSAR) for insecticides: development of predictive in vivo insecticide activity models.

    Science.gov (United States)

    Naik, P K; Singh, T; Singh, H

    2009-07-01

    Quantitative structure-activity relationship (QSAR) analyses were performed independently on data sets belonging to two groups of insecticides, namely the organophosphates and carbamates. Several types of descriptors including topological, spatial, thermodynamic, information content, lead likeness and E-state indices were used to derive quantitative relationships between insecticide activities and structural properties of chemicals. A systematic search approach based on missing value, zero value, simple correlation and multi-collinearity tests as well as the use of a genetic algorithm allowed the optimal selection of the descriptors used to generate the models. The QSAR models developed for the organophosphate and carbamate groups revealed good predictability with r(2) values of 0.949 and 0.838 as well as [image omitted] values of 0.890 and 0.765, respectively. In addition, a linear correlation was observed between the predicted and experimental LD(50) values for the test set data, with r(2) of 0.871 and 0.788 for the organophosphate and carbamate groups respectively, indicating that the prediction accuracy of the QSAR models was acceptable. The models were also tested successfully against external validation criteria. The QSAR models developed in this study should help the further design of novel potent insecticides.

  16. A Structured Review of Quantitative Models of the Pharmaceutical Supply Chain

    Directory of Open Access Journals (Sweden)

    Carlos Franco

    2017-01-01

    Full Text Available The aim of this review is to identify and provide a structured overview of quantitative models of the pharmaceutical supply chain, a subject not exhaustively covered in previous reviews on healthcare logistics, which relate mostly to quantitative models in healthcare or logistics studies in hospitals. The models are classified into three categories: network design, inventory models, and optimization of the pharmaceutical supply chain. A taxonomy for each category is presented, describing the principal features of each echelon included in the review; this taxonomy allows readers to easily identify a paper based on the actors of the pharmaceutical supply chain. The search process included research articles published in the databases between 1984 and November 2016. In total, 46 studies were included. In the review process we found that, across the three fields, the most common source of uncertainty used is demand, in 56% of the cases. We conclude that most of the articles in the literature focus on the optimization of the pharmaceutical supply chain and on inventory models, while the field of supply chain network design remains less deeply studied.

  17. Enhancing the Quantitative Representation of Socioeconomic Conditions in the Shared Socio-economic Pathways (SSPs) using the International Futures Model

    Science.gov (United States)

    Rothman, D. S.; Siraj, A.; Hughes, B.

    2013-12-01

    The international research community is currently in the process of developing new scenarios for climate change research. One component of these scenarios is the Shared Socio-economic Pathways (SSPs), which describe a set of possible future socioeconomic conditions. These are presented as narrative storylines with associated quantitative drivers. The core quantitative drivers include total population, average GDP per capita, educational attainment, and urbanization at the global, regional, and national levels. At the same time there have been calls, particularly from the IAV community, for the SSPs to include additional quantitative information on other key social factors, such as income inequality, governance, health, and access to key infrastructures, which are discussed in the narratives. The International Futures system (IFs), based at the Pardee Center at the University of Denver, is able to provide forecasts of many of these indicators. IFs cannot use the SSP drivers as exogenous inputs, but we are able to create development pathways that closely reproduce the core quantitative drivers defined by the different SSPs, as well as incorporating assumptions on other key driving factors described in the qualitative narratives. In this paper, we present forecasts for additional quantitative indicators based upon the implementation of the SSP development pathways in IFs. These results will be of value to many researchers.

  18. Quantitative immunohistochemical method for detection of wheat protein in model sausage

    Directory of Open Access Journals (Sweden)

    Zuzana Řezáčová Lukášková

    2014-01-01

    Full Text Available Since gluten can induce coeliac symptoms in hypersensitive consumers with coeliac disease, it is necessary to label foodstuffs containing it. In order to label foodstuffs, it is essential to find reliable methods to accurately determine the amount of wheat protein in food. The objective of this study was to compare the quantitative detection of wheat protein in model sausages by ELISA and immunohistochemical methods. Immunohistochemistry was combined with stereology to achieve quantitative results. A high correlation between the added wheat protein and both compared methods was confirmed. For the ELISA method the determined values were r = 0.98, P < 0.01. Although ELISA is an accredited method, it was not reliable, unlike the immunohistochemical methods (stereology, SD = 3.1).

  19. European cold winter 2009-2010: How unusual in the instrumental record and how reproducible in the ARPEGE-Climat model?

    Science.gov (United States)

    Ouzeau, G.; Cattiaux, J.; Douville, H.; Ribes, A.; Saint-Martin, D.

    2011-06-01

    Boreal winter 2009-2010 made headlines for cold anomalies in many countries of the northern mid-latitudes. Northern Europe was severely hit by this harsh winter in line with a record persistence of the negative phase of the North Atlantic Oscillation (NAO). In the present study, we first provide a wider perspective on how unusual this winter was by using the recent 20th Century Reanalysis. A weather regime analysis shows that the frequency of the negative NAO was unprecedented since winter 1939-1940, which is then used as a dynamical analog of winter 2009-2010 to demonstrate that the latter might have been much colder without the background global warming observed during the twentieth century. We then use an original nudging technique in ensembles of global atmospheric simulations driven by observed sea surface temperature (SST) and radiative forcings to highlight the relevance of the stratosphere for understanding if not predicting such anomalous winter seasons. Our results demonstrate that an improved representation of the lower stratosphere is necessary to reproduce not only the seasonal mean negative NAO signal, but also its intraseasonal distribution and the corresponding increased probability of cold waves over northern Europe.

  20. Reproducibility of a reaming test

    DEFF Research Database (Denmark)

    Pilny, Lukas; Müller, Pavel; De Chiffre, Leonardo

    2012-01-01

    The reproducibility of a reaming test was analysed to document its applicability as a performance test for cutting fluids. Reaming tests were carried out on a drilling machine using HSS reamers. Workpiece material was an austenitic stainless steel, machined using 4.75 m∙min-1 cutting speed and 0....

  1. Reproducibility of a reaming test

    DEFF Research Database (Denmark)

    Pilny, Lukas; Müller, Pavel; De Chiffre, Leonardo

    2014-01-01

    The reproducibility of a reaming test was analysed to document its applicability as a performance test for cutting fluids. Reaming tests were carried out on a drilling machine using HSS reamers. Workpiece material was an austenitic stainless steel, machined using 4.75 m•min−1 cutting speed and 0....

  2. Reproducible Bioinformatics Research for Biologists

    Science.gov (United States)

    This book chapter describes the current Big Data problem in Bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools/techniques that a noncomputational researcher would need to learn to pe...

  3. Quantitative assessment of manual and robotic microcannulation for eye surgery using new eye model.

    Science.gov (United States)

    Tanaka, Shinichi; Harada, Kanako; Ida, Yoshiki; Tomita, Kyohei; Kato, Ippei; Arai, Fumihito; Ueta, Takashi; Noda, Yasuo; Sugita, Naohiko; Mitsuishi, Mamoru

    2015-06-01

    Microcannulation, a surgical procedure for the eye that requires drug injection into a 60-90 µm retinal vein, is difficult to perform manually. Robotic assistance has been proposed; however, its effectiveness in comparison to manual operation has not been quantified. An eye model has been developed to quantify the performance of manual and robotic microcannulation. The eye model, which is implemented with a force sensor and microchannels, also simulates the mechanical constraints of the instrument's movement. Ten subjects performed microcannulation using the model, with and without robotic assistance. The results showed that the robotic assistance was useful for motion stability when the drug was injected, whereas its positioning accuracy offered no advantage. An eye model was used to quantitatively assess the robotic microcannulation performance in comparison to manual operation. This approach could be valid for a better evaluation of surgical robotic assistance. Copyright © 2014 John Wiley & Sons, Ltd.

  4. SOME USES OF MODELS OF QUANTITATIVE GENETIC SELECTION IN SOCIAL SCIENCE.

    Science.gov (United States)

    Weight, Michael D; Harpending, Henry

    2017-01-01

    The theory of selection of quantitative traits is widely used in evolutionary biology, agriculture and other related fields. The fundamental model known as the breeder's equation is simple, robust over short time scales, and it is often possible to estimate plausible parameters. In this paper it is suggested that the results of this model provide useful yardsticks for the description of social traits and the evaluation of transmission models. The differences on a standard personality test between samples of Old Order Amish and Indiana rural young men from the same county and the decline of homicide in Medieval Europe are used as illustrative examples of the overall approach. It is shown that the decline of homicide is unremarkable under a threshold model while the differences between rural Amish and non-Amish young men are too large to be a plausible outcome of simple genetic selection in which assortative mating by affiliation is equivalent to truncation selection.
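
    For reference, the breeder's equation referred to above, in its standard single-trait form:

```latex
R = h^2 S, \qquad S = i\,\sigma_P \ \ \text{(under truncation selection)}
```

    where R is the response to selection (the change in mean trait value per generation), h² the narrow-sense heritability, S the selection differential, i the selection intensity, and σ_P the phenotypic standard deviation. Its simplicity is what makes it usable as a yardstick for social traits in the way the paper suggests.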

  5. Bad Behavior: Improving Reproducibility in Behavior Testing.

    Science.gov (United States)

    Andrews, Anne M; Cheng, Xinyi; Altieri, Stefanie C; Yang, Hongyan

    2018-01-24

    Systems neuroscience research is increasingly possible through the use of integrated molecular and circuit-level analyses. These studies depend on the use of animal models and, in many cases, behavior testing associated with genetic, pharmacologic, epigenetic, and other types of environmental manipulations. We illustrate typical pitfalls resulting from poor validation of behavior tests. We describe experimental designs and enumerate the controls needed to improve reproducibility in the investigation and reporting of behavioral phenotypes.

  6. Quantitative analysis of anaerobic oxidation of methane (AOM) in marine sediments: A modeling perspective

    Science.gov (United States)

    Regnier, P.; Dale, A. W.; Arndt, S.; LaRowe, D. E.; Mogollón, J.; Van Cappellen, P.

    2011-05-01

    Recent developments in the quantitative modeling of methane dynamics and anaerobic oxidation of methane (AOM) in marine sediments are critically reviewed. The first part of the review begins with a comparison of alternative kinetic models for AOM. The roles of bioenergetic limitations, intermediate compounds and biomass growth are highlighted. Next, the key transport mechanisms in multi-phase sedimentary environments affecting AOM and methane fluxes are briefly treated, while attention is also given to additional controls on methane and sulfate turnover, including organic matter mineralization, sulfur cycling and methane phase transitions. In the second part of the review, the structure, forcing functions and parameterization of published models of AOM in sediments are analyzed. The six-orders-of-magnitude range in rate constants reported for the widely used bimolecular rate law for AOM emphasizes the limited transferability of this simple kinetic model and, hence, the need for more comprehensive descriptions of the AOM reaction system. The derivation and implementation of more complete reaction models, however, are limited by the availability of observational data. In this context, we attempt to rank the relative benefits of potential experimental measurements that should help to better constrain AOM models. The last part of the review presents a compilation of reported depth-integrated AOM rates (ΣAOM). These rates reveal the extreme variability of ΣAOM in marine sediments. The model results are further used to derive quantitative relationships between ΣAOM and the magnitude of externally impressed fluid flow, as well as between ΣAOM and the depth of the sulfate-methane transition zone (SMTZ). This review contributes to an improved understanding of the global significance of the AOM process, and helps identify outstanding questions and future directions in the modeling of methane cycling and AOM in marine sediments.
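
    The widely used bimolecular rate law discussed in the review is usually written as:

```latex
R_{\mathrm{AOM}} = k_{\mathrm{AOM}}\,[\mathrm{CH_4}]\,[\mathrm{SO_4^{2-}}]
```

    where the concentrations are those of porewater methane and sulfate, and k_AOM is the empirical rate constant whose reported values span the six orders of magnitude noted above.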

  7. Quantitative Analysis of the Security of Software-Defined Network Controller Using Threat/Effort Model

    Directory of Open Access Journals (Sweden)

    Zehui Wu

    2017-01-01

    Full Text Available The SDN-based controller, which is responsible for the configuration and management of the network, is the core of Software-Defined Networks. Current methods, which focus on the security mechanism, use qualitative analysis to estimate the security of controllers, frequently leading to inaccurate results. In this paper, we employ a quantitative approach to overcome this shortcoming. Based on an analysis of the controller threat model, we give formal models of the APIs, the protocol interfaces, and the data items of the controller, and further provide our Threat/Effort quantitative calculation model. With the help of the Threat/Effort model, we are able to compare not only the security of different versions of the same controller but also different kinds of controllers, providing a basis for controller selection and secure development. We evaluated our approach on four widely used SDN-based controllers: POX, OpenDaylight, Floodlight, and Ryu. The tests, whose outcomes are similar to those of traditional qualitative analysis, demonstrate that our approach yields specific security values for different controllers and produces more accurate results.

  8. Universally applicable model for the quantitative determination of lake sediment composition using fourier transform infrared spectroscopy.

    Science.gov (United States)

    Rosén, Peter; Vogel, Hendrik; Cunningham, Laura; Hahn, Annette; Hausmann, Sonja; Pienitz, Reinhard; Zolitschka, Bernd; Wagner, Bernd; Persson, Per

    2011-10-15

    Fourier transform infrared spectroscopy (FTIRS) can provide detailed information on organic and minerogenic constituents of sediment records. Based on a large number of sediment samples of varying age (0-340,000 yrs) and from very diverse lake settings in Antarctica, Argentina, Canada, Macedonia/Albania, Siberia, and Sweden, we have developed universally applicable calibration models for the quantitative determination of biogenic silica (BSi; n = 816), total inorganic carbon (TIC; n = 879), and total organic carbon (TOC; n = 3164) using FTIRS. These models are based on the differential absorbance of infrared radiation at specific wavelengths with varying concentrations of individual parameters, due to molecular vibrations associated with each parameter. The calibration models have low prediction errors and the predicted values are highly correlated with conventionally measured values (R = 0.94-0.99). Robustness tests indicate the accuracy of the newly developed FTIRS calibration models is similar to that of conventional geochemical analyses. Consequently FTIRS offers a useful and rapid alternative to conventional analyses for the quantitative determination of BSi, TIC, and TOC. The rapidity, cost-effectiveness, and small sample size required enables FTIRS determination of geochemical properties to be undertaken at higher resolutions than would otherwise be possible with the same resource allocation, thus providing crucial sedimentological information for climatic and environmental reconstructions.

  9. Nonparametric modeling of longitudinal covariance structure in functional mapping of quantitative trait loci.

    Science.gov (United States)

    Yap, John Stephen; Fan, Jianqing; Wu, Rongling

    2009-12-01

    Estimation of the covariance structure of longitudinal processes is a fundamental prerequisite for the practical deployment of functional mapping designed to study the genetic regulation and network of quantitative variation in dynamic complex traits. We present a nonparametric approach for estimating the covariance structure of a quantitative trait measured repeatedly at a series of time points. Specifically, we adopt Huang et al.'s (2006, Biometrika 93, 85-98) approach of invoking the modified Cholesky decomposition and converting the problem into modeling a sequence of regressions of responses. A regularized covariance estimator is obtained using a normal penalized likelihood with an L2 penalty. This approach, embedded within a mixture likelihood framework, leads to enhanced accuracy, precision, and flexibility of functional mapping while preserving its biological relevance. Simulation studies are performed to reveal the statistical properties and advantages of the proposed method. A real example from a mouse genome project is analyzed to illustrate the utilization of the methodology. The new method will provide a useful tool for genome-wide scanning for the existence and distribution of quantitative trait loci underlying a dynamic trait important to agriculture, biology, and health sciences.
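
    For readers unfamiliar with the device, the modified Cholesky decomposition re-expresses the covariance of the repeated measures through sequential autoregressions. In the usual notation (a sketch of the standard construction; the paper's exact penalty and mixture embedding are more involved):

```latex
y_t - \mu_t = \sum_{j=1}^{t-1} \phi_{tj}\,(y_j - \mu_j) + \varepsilon_t,
\qquad
T \Sigma T^{\top} = D = \mathrm{diag}(\sigma_1^2, \dots, \sigma_T^2)
```

    where T is unit lower-triangular with entries −φ_tj below the diagonal, so estimating Σ reduces to fitting the regressions; the regularized estimator then adds an L2 penalty of the form λ Σ φ_tj² to the normal log-likelihood.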

  10. Effects of Noninhibitory Serpin Maspin on the Actin Cytoskeleton: A Quantitative Image Modeling Approach.

    Science.gov (United States)

    Al-Mamun, Mohammed; Ravenhill, Lorna; Srisukkham, Worawut; Hossain, Alamgir; Fall, Charles; Ellis, Vincent; Bass, Rosemary

    2016-04-01

    Recent developments in quantitative image analysis allow us to interrogate confocal microscopy images to answer biological questions. Clumped and layered cell nuclei and cytoplasm in confocal images challenge the ability to identify subcellular compartments. To date, there is no perfect image analysis method to identify cytoskeletal changes in confocal images. Here, we present a multidisciplinary study in which an image analysis model was developed to allow quantitative measurement of changes in the cytoskeleton of cells with different maspin exposure. Maspin, a noninhibitory serpin, influences cell migration, adhesion, invasion, proliferation, and apoptosis in ways that are consistent with its identification as a tumor metastasis suppressor. Using different cell types, we tested the hypothesis that reduction in cell migration by maspin would be reflected in the architecture of the actin cytoskeleton. A hybrid marker-controlled watershed segmentation technique was used to segment the nuclei, cytoplasm, and ruffling regions before measuring cytoskeletal changes. This was informed by immunohistochemical staining of cells transfected stably or transiently with maspin proteins, or with added bioactive peptides or protein. Image analysis results showed that the effects of maspin were mirrored by effects on cell architecture, in a way that could be described quantitatively.
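
    Marker-controlled watershed segmentation of clumped nuclei, the kind of step described above, follows a standard recipe in scikit-image; the sketch below is illustrative only, as the paper's hybrid pipeline is more involved:

```python
# Sketch: split touching nuclei via distance-transform markers and watershed.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_nuclei(binary: np.ndarray) -> np.ndarray:
    """Label touching objects in a binary mask using marker-based watershed."""
    distance = ndi.distance_transform_edt(binary)
    coords = peak_local_max(distance, min_distance=5, labels=binary)
    peaks = np.zeros(distance.shape, dtype=bool)
    peaks[tuple(coords.T)] = True
    markers, _ = ndi.label(peaks)
    return watershed(-distance, markers, mask=binary)

# Two overlapping disks stand in for clumped nuclei
yy, xx = np.mgrid[0:60, 0:80]
mask = ((yy - 30)**2 + (xx - 30)**2 < 15**2) | ((yy - 30)**2 + (xx - 50)**2 < 15**2)
print("objects found:", segment_nuclei(mask).max())
```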

  11. Data Science Innovations That Streamline Development, Documentation, Reproducibility, and Dissemination of Models in Computational Thermodynamics: An Application of Image Processing Techniques for Rapid Computation, Parameterization and Modeling of Phase Diagrams

    Science.gov (United States)

    Ghiorso, M. S.

    2014-12-01

    Computational thermodynamics (CT) represents a collection of numerical techniques that are used to calculate quantitative results from thermodynamic theory. In the Earth sciences, CT is most often applied to estimate the equilibrium properties of solutions, to calculate phase equilibria from models of the thermodynamic properties of materials, and to approximate irreversible reaction pathways by modeling these as a series of local equilibrium steps. The thermodynamic models that underlie CT calculations relate the energy of a phase to temperature, pressure and composition. These relationships are not intuitive and they are seldom well constrained by experimental data; often, intuition must be applied to generate a robust model that satisfies the expectations of use. As a consequence of this situation, the models and databases that support CT applications in geochemistry and petrology are tedious to maintain as new data and observations arise. What is required to make the process more streamlined and responsive is a computational framework that permits the rapid generation of observable outcomes from the underlying data/model collections, and importantly, the ability to update and re-parameterize the constitutive models through direct manipulation of those outcomes. CT procedures that take models/data to the experiential reference frame of phase equilibria involve function minimization, gradient evaluation, the calculation of implicit lines, curves and surfaces, contour extraction, and other related geometrical measures. All these procedures are the mainstay of image processing analysis. Since the commercial escalation of video game technology, open source image processing libraries have emerged (e.g., VTK) that permit real-time manipulation and analysis of images. These tools find immediate application to CT calculations of phase equilibria by permitting rapid calculation and real-time feedback between model outcome and the underlying model parameters.

  12. Quantitative model for the generic 3D shape of ICMEs at 1 AU

    Science.gov (United States)

    Démoulin, P.; Janvier, M.; Masías-Meza, J. J.; Dasso, S.

    2016-10-01

    Context. Interplanetary imagers provide 2D projected views of the densest plasma parts of interplanetary coronal mass ejections (ICMEs), while in situ measurements provide magnetic field and plasma parameter measurements along the spacecraft trajectory, that is, along a 1D cut. The data therefore only give a partial view of the 3D structures of ICMEs. Aims: By studying a large number of ICMEs, crossed at different distances from their apex, we develop statistical methods to obtain a quantitative generic 3D shape of ICMEs. Methods: In a first approach we theoretically obtained the expected statistical distribution of the shock-normal orientation from assuming simple models of 3D shock shapes, including distorted profiles, and compared their compatibility with observed distributions. In a second approach we used the shock normal and the flux rope axis orientations together with the impact parameter to provide statistical information across the spacecraft trajectory. Results: The study of different 3D shock models shows that the observations are compatible with a shock that is symmetric around the Sun-apex line as well as with an asymmetry up to an aspect ratio of around 3. Moreover, flat or dipped shock surfaces near their apex can only be rare cases. Next, the sheath thickness and the ICME velocity have no global trend along the ICME front. Finally, regrouping all these new results and those of our previous articles, we provide a quantitative ICME generic 3D shape, including the global shape of the shock, the sheath, and the flux rope. Conclusions: The obtained quantitative generic ICME shape will have implications for several aims. For example, it constrains the output of typical ICME numerical simulations. It is also a base for studying the transport of high-energy solar and cosmic particles during an ICME propagation as well as for modeling and forecasting space weather conditions near Earth.

  13. Computational modeling in nanomedicine: prediction of multiple antibacterial profiles of nanoparticles using a quantitative structure-activity relationship perturbation model.

    Science.gov (United States)

    Speck-Planche, Alejandro; Kleandrova, Valeria V; Luan, Feng; Cordeiro, Maria Natália D S

    2015-01-01

    We introduce the first quantitative structure-activity relationship (QSAR) perturbation model for probing multiple antibacterial profiles of nanoparticles (NPs) under diverse experimental conditions. The dataset is based on 300 nanoparticles containing dissimilar chemical compositions, sizes, shapes and surface coatings. In general terms, the NPs were tested against different bacteria, by considering several measures of antibacterial activity and diverse assay times. The QSAR perturbation model was created from 69,231 nanoparticle-nanoparticle (NP-NP) pairs, which were randomly generated using a recently reported perturbation theory approach. The model displayed an accuracy rate of approximately 98% for classifying NPs as active or inactive, and a new copper-silver nanoalloy was correctly predicted by this model with consensus accuracy of 77.73%. Our QSAR perturbation model can be used as an efficacious tool for the virtual screening of antibacterial nanomaterials.

  14. Spectral Quantitative Analysis Model with Combining Wavelength Selection and Topology Structure Optimization

    Directory of Open Access Journals (Sweden)

    Qian Wang

    2016-01-01

    Spectroscopy is an efficient and widely used quantitative analysis method. In this paper, a spectral quantitative analysis model combining wavelength selection and topology structure optimization is proposed. For the proposed method, a backpropagation neural network is adopted for building the component prediction model, and the simultaneous optimization of the wavelength selection and the topology structure of the neural network is realized by nonlinear adaptive evolutionary programming (NAEP). The hybrid chromosome in the binary scheme of NAEP has three parts. The first part represents the topology structure of the neural network, the second part represents the selection of wavelengths in the spectral data, and the third part represents the parameters of mutation of NAEP. Two real flue gas datasets are used in the experiments. In order to present the effectiveness of the methods, the partial least squares with full spectrum, the partial least squares combined with genetic algorithm, the uninformative variable elimination method, the backpropagation neural network with full spectrum, the backpropagation neural network combined with genetic algorithm, and the proposed method are performed for building the component prediction model. Experimental results verify that the proposed method predicts more accurately and more robustly, making it a practical spectral analysis tool.
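
    The three-part hybrid chromosome lends itself to a compact illustration. Below is a minimal Python sketch of such an encoding, assuming hypothetical sizes for each part (N_TOPOLOGY_BITS, N_WAVELENGTHS, N_MUTATION_BITS) and a self-adaptive bit-flip mutation; it illustrates the hybrid-chromosome idea only, not the authors' actual NAEP implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_TOPOLOGY_BITS = 6   # encodes the hidden-layer size (1..64); assumed width
N_WAVELENGTHS = 64    # hypothetical number of spectral channels
N_MUTATION_BITS = 8   # encodes the self-adaptive mutation rate

def random_chromosome():
    """Hybrid binary chromosome: [topology | wavelength mask | mutation params]."""
    return rng.integers(0, 2, N_TOPOLOGY_BITS + N_WAVELENGTHS + N_MUTATION_BITS)

def decode(chrom):
    topo_bits = chrom[:N_TOPOLOGY_BITS]
    mask = chrom[N_TOPOLOGY_BITS:N_TOPOLOGY_BITS + N_WAVELENGTHS].astype(bool)
    mut_bits = chrom[-N_MUTATION_BITS:]
    hidden = 1 + int("".join(map(str, topo_bits)), 2)  # hidden-layer size
    p_mut = int("".join(map(str, mut_bits)), 2) / (2**N_MUTATION_BITS - 1)
    return hidden, mask, max(p_mut, 1.0 / chrom.size)

def mutate(chrom):
    """Bit-flip mutation whose rate is itself encoded in the chromosome."""
    _, _, p = decode(chrom)
    child = chrom.copy()
    flips = rng.random(chrom.size) < p
    child[flips] ^= 1
    return child

c = random_chromosome()
hidden, mask, p = decode(c)
print(f"hidden units: {hidden}, wavelengths kept: {mask.sum()}, p_mut: {p:.3f}")
```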

  15. Spatiotemporal microbiota dynamics from quantitative in vitro and in silico models of the gut

    Science.gov (United States)

    Hwa, Terence

    The human gut harbors a dynamic microbial community whose composition bears great importance for the health of the host. Here, we investigate how colonic physiology impacts bacterial growth behaviors, which ultimately dictate the gut microbiota composition. Combining measurements of bacterial growth physiology with analysis of published data on human physiology into a quantitative modeling framework, we show how hydrodynamic forces in the colon, in concert with other physiological factors, determine the abundances of the major bacterial phyla in the gut. Our model quantitatively explains the observed variation of microbiota composition among healthy adults, and predicts colonic water absorption (manifested as stool consistency) and nutrient intake to be two key factors determining this composition. The model further reveals that both factors, which have been identified in recent correlative studies, exert their effects through the same mechanism: changes in colonic pH that differentially affect the growth of different bacteria. Our findings show that a predictive and mechanistic understanding of microbial ecology in the human gut is possible, and offer the hope for the rational design of intervention strategies to actively control the microbiota. This work is supported by the Bill and Melinda Gates Foundation.

  16. Gene Level Meta-Analysis of Quantitative Traits by Functional Linear Models.

    Science.gov (United States)

    Fan, Ruzong; Wang, Yifan; Boehnke, Michael; Chen, Wei; Li, Yun; Ren, Haobo; Lobach, Iryna; Xiong, Momiao

    2015-08-01

    Meta-analysis of genetic data must account for differences among studies including study designs, markers genotyped, and covariates. The effects of genetic variants may differ from population to population, i.e., heterogeneity. Thus, meta-analysis combining data from multiple studies is difficult. Novel statistical methods for meta-analysis are needed. In this article, functional linear models are developed for meta-analyses that connect genetic data to quantitative traits, adjusting for covariates. The models can be used to analyze rare variants, common variants, or a combination of the two. Both likelihood-ratio test (LRT) and F-distributed statistics are introduced to test association between quantitative traits and multiple variants in one genetic region. Extensive simulations are performed to evaluate empirical type I error rates and power performance of the proposed tests. The proposed LRT and F-distributed statistics control the type I error very well and have higher power than the existing methods of the meta-analysis sequence kernel association test (MetaSKAT). We analyze four blood lipid levels in data from a meta-analysis of eight European studies. The proposed methods detect more significant associations than MetaSKAT and the P-values of the proposed LRT and F-distributed statistics are usually much smaller than those of MetaSKAT. The functional linear models and related test statistics can be useful in whole-genome and whole-exome association studies. Copyright © 2015 by the Genetics Society of America.

  17. Quantitative Analysis of Situation Awareness (QASA): modelling and measuring situation awareness using signal detection theory.

    Science.gov (United States)

    Edgar, Graham K; Catherwood, Di; Baker, Steven; Sallis, Geoff; Bertels, Michael; Edgar, Helen E; Nikolla, Dritan; Buckle, Susanna; Goodwin, Charlotte; Whelan, Allana

    2017-12-29

    This paper presents a model of situation awareness (SA) that emphasises that SA is necessarily built using a subset of available information. A technique (Quantitative Analysis of Situation Awareness - QASA), based around signal detection theory, has been developed from this model that provides separate measures of actual SA (ASA) and perceived SA (PSA), together with a feature unique to QASA, a measure of bias (information acceptance). These measures allow the exploration of the relationship between actual SA, perceived SA and information acceptance. QASA can also be used for the measurement of dynamic ASA, PSA and bias. Example studies are presented and full details of the implementation of the QASA technique are provided. Practitioner Summary: This paper presents a new model of situation awareness (SA) together with an associated tool (Quantitative Analysis of Situation Awareness - QASA) that employs signal detection theory to measure several aspects of SA, including actual and perceived SA and information acceptance. Full details are given of the implementation of the tool.
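
    The signal-detection core of such a technique is easy to illustrate. The sketch below computes the two standard SDT indices, discrimination (d') and bias (criterion c), from hypothetical probe-response counts; the counts are invented, and QASA's own ASA/PSA definitions are described in the paper rather than reproduced here.

```python
from scipy.stats import norm

# Hypothetical responses to "true" (signal) and "false" (noise) probe items.
hits, misses = 18, 6
false_alarms, correct_rejections = 5, 19

# Log-linear correction avoids infinite z-scores at rates of exactly 0 or 1.
hit_rate = (hits + 0.5) / (hits + misses + 1)
fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)

d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)            # discrimination
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))  # response bias

print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```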

  18. Model development for quantitative evaluation of proliferation resistance of nuclear fuel cycles

    International Nuclear Information System (INIS)

    Ko, Won Il; Kim, Ho Dong; Yang, Myung Seung

    2000-07-01

    This study addresses the quantitative evaluation of proliferation resistance, which is an important factor of alternative nuclear fuel cycle systems. In this study, a model was developed to quantitatively evaluate the proliferation resistance of nuclear fuel cycles. The proposed models were then applied to the Korean environment as a sample study to provide better references for the determination of a future nuclear fuel cycle system in Korea. In order to quantify the proliferation resistance of the nuclear fuel cycle, the proliferation resistance index was defined in imitation of an electrical circuit with an electromotive force and various electrical resistance components. The analysis of the proliferation resistance of nuclear fuel cycles has shown that the resistance index as defined herein can be used as an international measure of the relative risk of nuclear proliferation if the motivation index is appropriately defined. It has also shown that the proposed model can include political issues as well as technical ones relevant to proliferation resistance, and consider all facilities and activities in a specific nuclear fuel cycle (from mining to disposal). In addition, sensitivity analyses on the sample study indicate that the direct disposal option in a country with high nuclear propensity may give rise to a higher risk of nuclear proliferation than the reprocessing option in a country with low nuclear propensity.

  19. Searching for recursive causal structures in multivariate quantitative genetics mixed models.

    Science.gov (United States)

    Valente, Bruno D; Rosa, Guilherme J M; de Los Campos, Gustavo; Gianola, Daniel; Silva, Martinho A

    2010-06-01

    Biology is characterized by complex interactions between phenotypes, such as recursive and simultaneous relationships between substrates and enzymes in biochemical systems. Structural equation models (SEMs) can be used to study such relationships in multivariate analyses, e.g., with multiple traits in a quantitative genetics context. Nonetheless, the number of different recursive causal structures that can be used for fitting a SEM to multivariate data can be huge, even when only a few traits are considered. In recent applications of SEMs in mixed-model quantitative genetics settings, causal structures were preselected on the basis of prior biological knowledge alone. Therefore, the wide range of possible causal structures has not been properly explored. Alternatively, causal structure spaces can be explored using algorithms that, using data-driven evidence, can search for structures that are compatible with the joint distribution of the variables under study. However, the search cannot be performed directly on the joint distribution of the phenotypes as it is possibly confounded by genetic covariance among traits. In this article we propose to search for recursive causal structures among phenotypes using the inductive causation (IC) algorithm after adjusting the data for genetic effects. A standard multiple-trait model is fitted using Bayesian methods to obtain a posterior covariance matrix of phenotypes conditional to unobservable additive genetic effects, which is then used as input for the IC algorithm. As an illustrative example, the proposed methodology was applied to simulated data related to multiple traits measured on a set of inbred lines.

  20. Development and Validation of Quantitative Structure-Activity Relationship Models for Compounds Acting on Serotoninergic Receptors

    Directory of Open Access Journals (Sweden)

    Grażyna Żydek

    2012-01-01

    Full Text Available A quantitative structure-activity relationship (QSAR study has been made on 20 compounds with serotonin (5-HT receptor affinity. Thin-layer chromatographic (TLC data and physicochemical parameters were applied in this study. RP2 TLC 60F254 plates (silanized impregnated with solutions of propionic acid, ethylbenzene, 4-ethylphenol, and propionamide (used as analogues of the key receptor amino acids and their mixtures (denoted as S1–S7 biochromatographic models were used in two developing phases as a model of drug-5-HT receptor interaction. The semiempirical method AM1 (HyperChem v. 7.0 program and ACD/Labs v. 8.0 program were employed to calculate a set of physicochemical parameters for the investigated compounds. Correlation and multiple linear regression analysis were used to search for the best QSAR equations. The correlations obtained for the compounds studied represent their interactions with the proposed biochromatographic models. The good multivariate relationships (R2=0.78–0.84 obtained by means of regression analysis can be used for predicting the quantitative effect of biological activity of different compounds with 5-HT receptor affinity. “Leave-one-out” (LOO and “leave-N-out” (LNO cross-validation methods were used to judge the predictive power of final regression equations.

  2. Multi-factor models and signal processing techniques application to quantitative finance

    CERN Document Server

    Darolles, Serges; Jay, Emmanuelle

    2013-01-01

    With recent outbreaks of multiple large-scale financial crises, amplified by interconnected risk sources, a new paradigm of fund management has emerged. This new paradigm leverages "embedded" quantitative processes and methods to provide more transparent, adaptive, reliable and easily implemented "risk assessment-based" practices. This book surveys the most widely used factor models employed within the field of financial asset pricing. Through the concrete application of evaluating risks in the hedge fund industry, the authors demonstrate that signal processing techniques are an intere[…]

  3. Tumour-cell killing by X-rays and immunity quantitated in a mouse model system

    International Nuclear Information System (INIS)

    Porteous, D.D.; Porteous, K.M.; Hughes, M.J.

    1979-01-01

    As part of an investigation of the interaction of X-rays and immune cytotoxicity in tumour control, an experimental mouse model system has been used in which quantitative anti-tumour immunity was raised in prospective recipients of tumour-cell suspensions exposed to varying doses of X-rays in vitro before injection. Findings reported here indicate that, whilst X-rays kill a proportion of cells, induced immunity deals with a fixed number dependent upon the immune status of the host, and that X-rays and anti-tumour immunity do not act synergistically in tumour-cell killing. The tumour used was the ascites sarcoma BP8. (author)

  4. Modeling of microfluidic microbial fuel cells using quantitative bacterial transport parameters

    Science.gov (United States)

    Mardanpour, Mohammad Mahdi; Yaghmaei, Soheila; Kalantar, Mohammad

    2017-02-01

    The objective of the present study is to analyze the dynamic modeling of bioelectrochemical processes and improve the performance of previous models using quantitative data on bacterial transport parameters. The main deficiency of previous MFC models concerning the spatial distribution of biocatalysts is the assumption of an initial distribution of attached/suspended bacteria on the electrode or in the anolyte bulk, which is the foundation for biofilm formation. In order to remedy this imperfection, the quantification of chemotactic motility, used to understand the mechanisms of the suspended microorganisms' distribution in the anolyte and/or their attachment to the anode surface to extend the biofilm, is implemented numerically. The spatial and temporal distributions of the bacteria, as well as the dynamic behavior of the anolyte and biofilm, are simulated. The performance of the microfluidic MFC as a chemotaxis assay is assessed by analyzing the bacterial activity, substrate variation, bioelectricity production rate and the influence of external resistance on the features of the biofilm and anolyte.

  5. Web Applications Vulnerability Management using a Quantitative Stochastic Risk Modeling Method

    Directory of Open Access Journals (Sweden)

    Sergiu SECHEL

    2017-01-01

    The aim of this research is to propose a quantitative risk modeling method that reduces the guesswork and uncertainty in the vulnerability and risk assessment activities of web-based applications, while providing users the flexibility to assess risk according to their risk appetite and tolerance with a high degree of assurance. The research method is based on work done by the OWASP Foundation on this subject, but their risk rating methodology needed debugging and updates in key areas that are presented in this paper. The modified risk modeling method uses Monte Carlo simulations to model risk characteristics that can't be determined without guesswork. It was tested in vulnerability assessment activities on real production systems and, in theory, by assigning discrete uniform assumptions to all risk characteristics (risk attributes) and evaluating the results after 1.5 million rounds of Monte Carlo simulations.
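
    A minimal sketch of the core idea, assuming OWASP-style factors scored 0-9 and modeled as discrete uniform distributions; the factor counts, the 0-9 scale, and the likelihood times impact aggregation are assumptions standing in for the paper's calibrated attributes.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 150_000  # simulation rounds (the paper evaluated 1.5 million)

# Hypothetical OWASP-style factors, each scored 0-9; discrete uniform
# assumptions stand in for attributes that cannot be determined
# without guesswork.
likelihood_factors = rng.integers(0, 10, size=(N, 8))  # threat + vulnerability factors
impact_factors = rng.integers(0, 10, size=(N, 8))      # technical + business impact

likelihood = likelihood_factors.mean(axis=1)
impact = impact_factors.mean(axis=1)
risk = likelihood * impact  # 0..81 scale

print("median risk:", np.median(risk))
print("95th percentile:", np.percentile(risk, 95))
```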

  6. Evaluation of Land Surface Models in Reproducing Satellite-Derived LAI over the High-Latitude Northern Hemisphere. Part I: Uncoupled DGVMs

    Directory of Open Access Journals (Sweden)

    Ning Zeng

    2013-10-01

    Leaf Area Index (LAI) represents the total surface area of leaves above a unit area of ground and is a key variable in any vegetation model, as well as in climate models. New high-resolution LAI satellite data are now available covering a period of several decades. This provides a unique opportunity to validate LAI estimates from multiple vegetation models. The objective of this paper is to compare new, satellite-derived LAI measurements with modeled output for the Northern Hemisphere. We compare monthly LAI output from eight land surface models from the TRENDY compendium with satellite data from an Artificial Neural Network (ANN) from the latest version (third generation) of GIMMS AVHRR NDVI data over the period 1986–2005. Our results show that all the models overestimate the mean LAI, particularly over the boreal forest. We also find that seven out of the eight models overestimate the length of the active vegetation-growing season, mostly due to a late dormancy as a result of a late summer phenology. Finally, we find that the models report a much larger positive trend in LAI over this period than the satellite observations suggest, which translates into a higher trend in the growing season length. These results highlight the need to incorporate a larger number of more accurate plant functional types in all models and, in particular, to improve the phenology of deciduous trees.
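
    The kind of model-versus-satellite comparison described reduces to bias and trend statistics. Below is a toy sketch on synthetic monthly LAI series; both series are invented for illustration, whereas a real comparison would use gridded TRENDY output and the GIMMS ANN product.

```python
import numpy as np

# Hypothetical monthly LAI series, 1986-2005, for one grid cell.
months = np.arange(20 * 12)
lai_sat = 2.0 + 0.8 * np.sin(2 * np.pi * months / 12) + 0.0005 * months
lai_model = 2.6 + 0.9 * np.sin(2 * np.pi * months / 12) + 0.0020 * months

def decadal_trend(series, t):
    """Least-squares linear trend, expressed per decade."""
    slope_per_month = np.polyfit(t, series, 1)[0]
    return slope_per_month * 12 * 10

print("mean bias (model - satellite):", np.mean(lai_model - lai_sat))
print("satellite trend [LAI/decade]:", decadal_trend(lai_sat, months))
print("model trend     [LAI/decade]:", decadal_trend(lai_model, months))
```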

  7. Quantitative Hydraulic Models Of Early Land Plants Provide Insight Into Middle Paleozoic Terrestrial Paleoenvironmental Conditions

    Science.gov (United States)

    Wilson, J. P.; Fischer, W. W.

    2010-12-01

    Fossil plants provide useful proxies of Earth’s climate because plants are closely connected, through physiology and morphology, to the environments in which they lived. Recent advances in quantitative hydraulic models of plant water transport provide new insight into the history of climate by allowing fossils to speak directly to environmental conditions based on preserved internal anatomy. We report results of a quantitative hydraulic model applied to one of the earliest terrestrial plants preserved in three dimensions, the ~396 million-year-old vascular plant Asteroxylon mackiei. This model combines equations describing the rate of fluid flow through plant tissues with detailed observations of plant anatomy; this allows quantitative estimates of two critical aspects of plant function. First and foremost, results from these models quantify the supply of water to evaporative surfaces; second, results describe the ability of plant vascular systems to resist tensile damage from extreme environmental events, such as drought or frost. This approach permits quantitative comparisons of functional aspects of Asteroxylon with other extinct and extant plants, informs the quality of plant-based environmental proxies, and provides concrete data that can be input into climate models. Results indicate that despite their small size, water transport cells in Asteroxylon could supply a large volume of water to the plant's leaves, even greater than cells from some later-evolved seed plants. The smallest Asteroxylon tracheids have conductivities exceeding 0.015 m^2 MPa^-1 s^-1, whereas Paleozoic conifer tracheids do not reach this threshold until they are three times wider. However, this increase in conductivity came at the cost of little to no adaptations for transport safety, placing the plant’s vegetative organs in jeopardy during drought events. Analysis of the thickness-to-span ratio of Asteroxylon’s tracheids suggests that environmental conditions of reduced relative […]
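
    A worked order-of-magnitude check using the Hagen-Poiseuille relation, the standard starting point for conduit-level hydraulic models (whether it matches the authors' exact formulation is an assumption). With an illustrative 20 μm lumen, the lumen-specific conductivity lands near the 0.015 m^2 MPa^-1 s^-1 threshold quoted above.

```python
import numpy as np

# Single-conduit hydraulic conductivity from the Hagen-Poiseuille law.
# The 20 um lumen diameter is illustrative, not a measured value.
eta = 1.0e-3  # viscosity of water [Pa s]
d = 20e-6     # tracheid lumen diameter [m]

K_tube = np.pi * d**4 / (128 * eta)  # flow per unit pressure gradient [m^4 Pa^-1 s^-1]
area = np.pi * (d / 2) ** 2          # lumen cross-sectional area [m^2]
k_specific = K_tube / area           # lumen-specific conductivity [m^2 Pa^-1 s^-1]

# Converted to the abstract's units; ~0.013, close to the 0.015 threshold.
print(f"k = {k_specific * 1e6:.4f} m^2 MPa^-1 s^-1")
```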

  8. Evaluation of multichannel reproduced sound

    DEFF Research Database (Denmark)

    Choisel, Sylvain; Wickelmaier, Florian Maria

    2007-01-01

    A study was conducted with the goal of quantifying auditory attributes which underlie listener preference for multichannel reproduced sound. Short musical excerpts were presented in mono, stereo and several multichannel formats to a panel of forty selected listeners. Scaling of auditory attributes […] from the quantified attributes predict overall preference well. The findings allow for some generalizations within musical program genres regarding the perception of and preference for certain spatial reproduction modes, but for limited generalizations across selections from different musical genres.

  9. Quantitative Structure-activity Relationship (QSAR) Models for Docking Score Correction.

    Science.gov (United States)

    Fukunishi, Yoshifumi; Yamasaki, Satoshi; Yasumatsu, Isao; Takeuchi, Koh; Kurosawa, Takashi; Nakamura, Haruki

    2017-01-01

    In order to improve docking score correction, we developed several structure-based quantitative structure activity relationship (QSAR) models by protein-drug docking simulations and applied these models to public affinity data. The prediction models used descriptor-based regression, and the compound descriptor was a set of docking scores against multiple (∼600) proteins including nontargets. The binding free energy that corresponded to the docking score was approximated by a weighted average of docking scores for multiple proteins, and we tried linear, weighted linear and polynomial regression models considering the compound similarities. In addition, we tried a combination of these regression models for individual data sets such as IC50, Ki, and %inhibition values. The cross-validation results showed that the weighted linear model was more accurate than the simple linear regression model. Thus, the QSAR approaches based on the affinity data of public databases should improve docking scores. © 2016 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
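
    A minimal sketch of the descriptor-based regression idea, fitting affinities from a matrix of multi-protein docking scores. The dimensions, noise model, and ridge regularization are assumptions; the paper's weighted and polynomial variants are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Descriptor = docking scores of each compound against ~600 proteins.
n_compounds, n_proteins = 200, 600
X = rng.normal(-7.0, 1.5, size=(n_compounds, n_proteins))  # synthetic scores
w_true = rng.normal(0, 0.05, size=n_proteins)
y = X @ w_true + rng.normal(0, 0.3, size=n_compounds)      # synthetic affinities

# Approximate the binding free energy as a weighted average of docking
# scores; fit the weights by ridge-regularized least squares.
lam = 10.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_proteins), X.T @ y)
y_hat = X @ w
print("training R^2:", 1 - np.var(y - y_hat) / np.var(y))
```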

  10. Quantitative computational models of molecular self-assembly in systems biology

    Science.gov (United States)

    Thomas, Marcus; Schwartz, Russell

    2017-06-01

    Molecular self-assembly is the dominant form of chemical reaction in living systems, yet efforts at systems biology modeling are only beginning to appreciate the need for and challenges to accurate quantitative modeling of self-assembly. Self-assembly reactions are essential to nearly every important process in cell and molecular biology and handling them is thus a necessary step in building comprehensive models of complex cellular systems. They present exceptional challenges, however, to standard methods for simulating complex systems. While the general systems biology world is just beginning to deal with these challenges, there is an extensive literature dealing with them for more specialized self-assembly modeling. This review will examine the challenges of self-assembly modeling, nascent efforts to deal with these challenges in the systems modeling community, and some of the solutions offered in prior work on self-assembly specifically. The review concludes with some consideration of the likely role of self-assembly in the future of complex biological system models more generally.

  12. Quantitative Microbial Risk Assessment Tutorial Installation of Software for Watershed Modeling in Support of QMRA - Updated 2017

    Science.gov (United States)

    This tutorial provides instructions for accessing, retrieving, and downloading the following software to install on a host computer in support of Quantitative Microbial Risk Assessment (QMRA) modeling: • QMRA Installation • SDMProjectBuilder (which includes the Mi...

  13. Satellite Contributions to the Quantitative Characterization of Biomass Burning for Climate Modeling

    Science.gov (United States)

    Ichoku, Charles; Kahn, Ralph; Chin, Mian

    2012-01-01

    Characterization of biomass burning from space has been the subject of an extensive body of literature published over the last few decades. Given the importance of this topic, we review how satellite observations contribute toward improving the representation of biomass burning quantitatively in climate and air-quality modeling and assessment. Satellite observations related to biomass burning may be classified into five broad categories: (i) active fire location and energy release, (ii) burned areas and burn severity, (iii) smoke plume physical disposition, (iv) aerosol distribution and particle properties, and (v) trace gas concentrations. Each of these categories involves multiple parameters used in characterizing specific aspects of the biomass-burning phenomenon. Some of the parameters are merely qualitative, whereas others are quantitative, although all are essential for improving the scientific understanding of the overall distribution (both spatial and temporal) and impacts of biomass burning. Some of the qualitative satellite datasets, such as fire locations, aerosol index, and gas estimates have fairly long-term records. They date back as far as the 1970s, following the launches of the DMSP, Landsat, NOAA, and Nimbus series of earth observation satellites. Although there were additional satellite launches in the 1980s and 1990s, space-based retrieval of quantitative biomass burning data products began in earnest following the launch of Terra in December 1999. Starting in 2000, fire radiative power, aerosol optical thickness and particle properties over land, smoke plume injection height and profile, and essential trace gas concentrations at improved resolutions became available. The 2000s also saw a large list of other new satellite launches, including Aqua, Aura, Envisat, Parasol, and CALIPSO, carrying a host of sophisticated instruments providing high quality measurements of parameters related to biomass burning and other phenomena. These improved data

  14. Quartz Crystal Microbalance Model for Quantitatively Probing the Deformation of Adsorbed Particles at Low Surface Coverage.

    Science.gov (United States)

    Gillissen, Jurriaan J J; Jackman, Joshua A; Tabaei, Seyed R; Yoon, Bo Kyeong; Cho, Nam-Joon

    2017-11-07

    Characterizing the deformation of nanoscale, soft-matter particulates at solid-liquid interfaces is a demanding task, and there are limited experimental options to perform quantitative measurements in a nonperturbative manner. Previous attempts, based on the quartz crystal microbalance (QCM) technique, focused on the high surface coverage regime and modeled the adsorbed particles as a homogeneous film, while not considering the coupling between particles and surrounding fluid and hence resulting in an underestimation of the known particle height. In this work, we develop a model for the hydrodynamic coupling between adsorbed particles and surrounding fluid in the limit of a low surface coverage, which can be used to extract shape information from QCM measurement data. We tackle this problem by using hydrodynamic simulations of an ellipsoidal particle on an oscillating surface. From the simulation results, we derived a phenomenological relation between the aspect ratio r of the absorbed particles and the slope and intercept of the line that fits instantaneous, overtone-dependent QCM data on (δ/a, -Δf/n) coordinates where δ is the viscous penetration depth, a is the particle radius, Δf is the QCM frequency shift, and n is the overtone number. The model was applied to QCM measurement data pertaining to the adsorption of 34 nm radius, fluid-phase and gel-phase liposomes onto a titanium oxide-coated surface. The osmotic pressure across the liposomal bilayer was varied to induce shape deformation. By combining these results with a membrane bending model, we determined the membrane bending energy for the gel-phase liposomes, and the results are consistent with literature values. In summary, a phenomenological model is presented and validated in order to show for the first time that QCM experiments can quantitatively measure the deformation of adsorbed particles at low surface coverage.
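
    The overtone-dependent line fit at the heart of the extraction step can be sketched as follows. The Δf values are invented, the penetration depth uses the standard relation δ = sqrt(2η/(ρω)), and the paper's phenomenological slope/intercept-to-aspect-ratio relation is not reproduced here.

```python
import numpy as np

# Hypothetical overtone-dependent QCM data for 34 nm radius liposomes.
a = 34e-9                                                 # particle radius [m]
overtones = np.array([3, 5, 7, 9, 11])                    # odd overtone numbers n
delta_f = np.array([-30.0, -26.5, -24.0, -22.1, -20.6])   # Δf [Hz], made up

# Viscous penetration depth in water at each overtone frequency.
eta, rho, f0 = 1.0e-3, 1.0e3, 5.0e6  # Pa s, kg/m^3, fundamental [Hz]
omega = 2 * np.pi * f0 * overtones
delta = np.sqrt(2 * eta / (rho * omega))

# Fit the line on (delta/a, -Δf/n) coordinates; per the paper, slope and
# intercept feed a phenomenological relation for the aspect ratio r.
x = delta / a
y = -delta_f / overtones
slope, intercept = np.polyfit(x, y, 1)
print(f"slope = {slope:.3f}, intercept = {intercept:.3f}")
```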

  15. Quantitative Modeling of Acid Wormholing in Carbonates- What Are the Gaps to Bridge

    KAUST Repository

    Qiu, Xiangdong

    2013-01-01

    Carbonate matrix acidization extends a well's effective drainage radius by dissolving rock and forming conductive channels (wormholes) from the wellbore. Wormholing is a dynamic process that involves a balance between the acid injection rate and the reaction rate. Generally, the injection rate is well defined where injection profiles can be controlled, whereas the reaction rate can be difficult to obtain due to its complex dependency on interstitial velocity, fluid composition, rock surface properties, etc. Conventional wormhole propagation models largely ignore the impact of reaction products; when implemented in a job design, this can result in significant errors in treatment fluid schedule, rate, and volume. A more accurate method to simulate carbonate matrix acid treatments would accommodate the effect of reaction products on reaction kinetics. It is the purpose of this work to properly account for these effects. This is an important step in achieving quantitative predictability of wormhole penetration during an acidizing treatment. This paper describes the laboratory procedures taken to obtain the reaction-product-impacted kinetics at downhole conditions using a rotating disk apparatus, and how this new set of kinetics data was implemented in a 3D wormholing model to predict wormhole morphology and penetration velocity. The model explains some of the differences in wormhole morphology observed in limestone core flow experiments where injection pressure impacts the mass transfer of hydrogen ions to the rock surface. The model uses a CT-scan-rendered porosity field to capture the finer details of the rock fabric and then simulates the fluid flow through the rock coupled with reactions. Such a validated model can serve as a base to scale up to the near-wellbore reservoir and 3D radial flow geometry, allowing a more quantitative acid treatment design.

  16. Qualitative and quantitative examination of the performance of regional air quality models representing different modeling approaches

    International Nuclear Information System (INIS)

    Bhumralkar, C.M.; Ludwig, F.L.; Shannon, J.D.; McNaughton, D.

    1985-04-01

    The calculations of three different air quality models were compared with the best available observations. The comparisons were made without calibrating the models to improve agreement with the observations. Model performance was poor for short averaging times (less than 24 hours). Some of the poor performance can be traced to errors in the input meteorological fields, but errors exist at all levels. It should be noted that these models were not originally designed for treating short-term episodes. For short-term episodes, much of the variance in the data can arise from small spatial scale features that tend to be averaged out over longer periods. These small spatial scale features cannot be resolved with the coarse grids that are used for the meteorological and emissions inputs. Thus, it is not surprising that the models performed better for the longer averaging times. The models compared were RTM-II, ENAMAP-2 and ACID. (17 refs., 5 figs., 4 tabs)

  17. Reservoir architecture modeling: Nonstationary models for quantitative geological characterization. Final report, April 30, 1998

    Energy Technology Data Exchange (ETDEWEB)

    Kerr, D.; Epili, D.; Kelkar, M.; Redner, R.; Reynolds, A.

    1998-12-01

    The study comprised four investigations: facies architecture; seismic modeling and interpretation; Markov random field and Boolean models for geologic modeling of facies distribution; and estimation of geological architecture using the Bayesian/maximum entropy approach. This report discusses results from all four investigations. Investigations were performed using data from the E and F units of the Middle Frio Formation, Stratton Field, one of the major reservoir intervals in the Gulf Coast Basin.

  18. Quantitative evaluation and modeling of two-dimensional neovascular network complexity: the surface fractal dimension

    International Nuclear Information System (INIS)

    Grizzi, Fabio; Russo, Carlo; Colombo, Piergiuseppe; Franceschini, Barbara; Frezza, Eldo E; Cobos, Everardo; Chiriva-Internati, Maurizio

    2005-01-01

    Modeling the complex development and growth of tumor angiogenesis using mathematics and biological data is a burgeoning area of cancer research. Architectural complexity is the main feature of every anatomical system, including organs, tissues, cells and sub-cellular entities. The vascular system is a complex network whose geometrical characteristics cannot be properly defined using the principles of Euclidean geometry, which is only capable of interpreting regular and smooth objects that are almost impossible to find in Nature. However, fractal geometry is a more powerful means of quantifying the spatial complexity of real objects. This paper introduces the surface fractal dimension (D_s) as a numerical index of the two-dimensional (2-D) geometrical complexity of tumor vascular networks, and their behavior during computer-simulated changes in vessel density and distribution. We show that D_s significantly depends on the number of vessels and their pattern of distribution. This demonstrates that the quantitative evaluation of the 2-D geometrical complexity of tumor vascular systems can be useful not only to measure its complex architecture, but also to model its development and growth. Studying the fractal properties of neovascularity induces reflections upon the real significance of the complex form of branched anatomical structures, in an attempt to define more appropriate methods of describing them quantitatively. This knowledge can be used to predict the aggressiveness of malignant tumors and design compounds that can halt the process of angiogenesis and influence tumor growth.
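
    A standard box-counting estimator, applied here to a toy binary image, serves as a stand-in for the paper's surface fractal dimension D_s (the abstract does not specify the authors' exact estimator).

```python
import numpy as np

def box_counting_dimension(img):
    """Estimate the fractal dimension of a binary 2-D pattern by box counting."""
    sizes = [2, 4, 8, 16, 32]
    counts = []
    for s in sizes:
        # Trim so the image tiles exactly, then count occupied s-by-s boxes.
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        tiled = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append((tiled.sum(axis=(1, 3)) > 0).sum())
    # D is the negative slope of log(count) versus log(box size).
    return -np.polyfit(np.log(sizes), np.log(counts), 1)[0]

# Toy "vessel network": a sparse random binary pattern stands in for real data.
rng = np.random.default_rng(3)
img = rng.random((256, 256)) < 0.1
print("estimated D:", box_counting_dimension(img))
```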

  19. Quantitative Outline-based Shape Analysis and Classification of Planetary Craterforms using Supervised Learning Models

    Science.gov (United States)

    Slezak, Thomas Joseph; Radebaugh, Jani; Christiansen, Eric

    2017-10-01

    The shapes of craterform morphology on planetary surfaces provide rich information about their origins and evolution. While morphologic information provides rich visual clues to geologic processes and properties, the ability to quantitatively communicate this information is less easily accomplished. This study examines the morphology of craterforms using the quantitative outline-based shape methods of geometric morphometrics, commonly used in biology and paleontology. We examine and compare landforms on planetary surfaces using shape, a property of morphology that is invariant to translation, rotation, and size. We quantify the shapes of paterae on Io, martian calderas, terrestrial basaltic shield calderas, terrestrial ash-flow calderas, and lunar impact craters using elliptic Fourier analysis (EFA) and the Zahn and Roskies (Z-R) shape function, or tangent angle approach, to produce multivariate shape descriptors. These shape descriptors are subjected to multivariate statistical analysis including canonical variate analysis (CVA), a multiple-comparison variant of discriminant analysis, to investigate the link between craterform shape and classification. Paterae on Io are most similar in shape to terrestrial ash-flow calderas and the shapes of terrestrial basaltic shield volcanoes are most similar to martian calderas. The shapes of lunar impact craters, including simple, transitional, and complex morphology, are classified with a 100% rate of success in all models. Multiple CVA models effectively predict and classify different craterforms using shape-based identification and demonstrate significant potential for use in the analysis of planetary surfaces.
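
    The elliptic Fourier analysis step admits a compact sketch following the standard Kuhl-Giardina formulation; normalization for rotation and size, and the downstream CVA, are omitted, and the toy outline is invented.

```python
import numpy as np

def elliptic_fourier_descriptors(contour, order=10):
    """Kuhl-Giardina elliptic Fourier coefficients of a closed 2-D outline.

    contour: (N, 2) array of x, y points tracing the craterform rim.
    Returns an (order, 4) array of harmonics (a_n, b_n, c_n, d_n).
    """
    d = np.diff(np.vstack([contour, contour[:1]]), axis=0)  # close the loop
    dt = np.hypot(d[:, 0], d[:, 1])
    t = np.concatenate([[0.0], np.cumsum(dt)])
    T = t[-1]
    coeffs = np.zeros((order, 4))
    for n in range(1, order + 1):
        c = np.cos(2 * n * np.pi * t / T)
        s = np.sin(2 * n * np.pi * t / T)
        k = T / (2 * n**2 * np.pi**2)
        coeffs[n - 1] = [
            k * np.sum(d[:, 0] / dt * np.diff(c)),
            k * np.sum(d[:, 0] / dt * np.diff(s)),
            k * np.sum(d[:, 1] / dt * np.diff(c)),
            k * np.sum(d[:, 1] / dt * np.diff(s)),
        ]
    return coeffs

# Toy outline: a squashed circle standing in for a digitized crater rim.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
outline = np.column_stack([np.cos(theta), 0.8 * np.sin(theta)])
print(elliptic_fourier_descriptors(outline, order=3))
```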

  20. How well do environmental archives of atmospheric mercury deposition in the Arctic reproduce rates and trends depicted by atmospheric models and measurements?

    Science.gov (United States)

    Goodsite, M E; Outridge, P M; Christensen, J H; Dastoor, A; Muir, D; Travnikov, O; Wilson, S

    2013-05-01

    This review compares the reconstruction of atmospheric Hg deposition rates and historical trends over recent decades in the Arctic, inferred from Hg profiles in natural archives such as lake and marine sediments, peat bogs and glacial firn (permanent snowpack), against those predicted by three state-of-the-art atmospheric models based on global Hg emission inventories from 1990 onwards. Model veracity was first tested against atmospheric Hg measurements. Most of the natural archive and atmospheric data came from the Canadian-Greenland sectors of the Arctic, whereas spatial coverage was poor in other regions. In general, for the Canadian-Greenland Arctic, models provided good agreement with atmospheric gaseous elemental Hg (GEM) concentrations and trends measured instrumentally. However, there are few instrumented deposition data with which to test the model estimates of Hg deposition, and these data suggest models over-estimated deposition fluxes under Arctic conditions. Reconstructed GEM data from glacial firn on Greenland Summit showed the best agreement with the known decline in global Hg emissions after about 1980, and were corroborated by archived aerosol filter data from Resolute, Nunavut. The relatively stable or slowly declining firn and model GEM trends after 1990 were also corroborated by real-time instrument measurements at Alert, Nunavut, after 1995. However, Hg fluxes and trends in northern Canadian lake sediments and a southern Greenland peat bog did not exhibit good agreement with model predictions of atmospheric deposition since 1990, the Greenland firn GEM record, direct GEM measurements, or trends in global emissions since 1980. Various explanations are proposed to account for these discrepancies between atmosphere and archives, including problems with the accuracy of archive chronologies, climate-driven changes in Hg transfer rates from air to catchments, waters and subsequently into sediments, and post-depositional diagenesis in peat bogs

  1. Quantitative Circulatory Physiology: an integrative mathematical model of human physiology for medical education.

    Science.gov (United States)

    Abram, Sean R; Hodnett, Benjamin L; Summers, Richard L; Coleman, Thomas G; Hester, Robert L

    2007-06-01

    We have developed Quantitative Circulatory Physiology (QCP), a mathematical model of integrative human physiology containing over 4,000 variables of biological interactions. This model provides a teaching environment that mimics clinical problems encountered in the practice of medicine. The model structure is based on documented physiological responses within peer-reviewed literature and serves as a dynamic compendium of physiological knowledge. The model is solved using a desktop, Windows-based program, allowing students to calculate time-dependent solutions and interactively alter over 750 parameters that modify physiological function. The model can be used to understand proposed mechanisms of physiological function and the interactions among physiological variables that may not be otherwise intuitively evident. In addition to open-ended or unstructured simulations, we have developed 30 physiological simulations, including heart failure, anemia, diabetes, and hemorrhage. Additional simulations include 29 patients in which students are challenged to diagnose the pathophysiology based on their understanding of integrative physiology. In summary, QCP allows students to examine, integrate, and understand a host of physiological factors without causing harm to patients. This model is available as a free download for Windows computers at http://physiology.umc.edu/themodelingworkshop.

  2. Modelling and Quantitative Analysis of LTRACK–A Novel Mobility Management Algorithm

    Directory of Open Access Journals (Sweden)

    Benedek Kovács

    2006-01-01

    This paper discusses the improvements and parameter optimization issues of LTRACK, a recently proposed mobility management algorithm. Mathematical modelling of the algorithm and the behavior of the Mobile Node (MN) are used to optimize the parameters of LTRACK. A numerical method is given to determine the optimal values of the parameters. Markov chains are used to model both the base algorithm and the so-called loop removal effect. An extended qualitative and quantitative analysis is carried out to compare LTRACK to existing handover mechanisms such as MIP, Hierarchical Mobile IP (HMIP), Dynamic Hierarchical Mobility Management Strategy (DHMIP), Telecommunication Enhanced Mobile IP (TeleMIP), Cellular IP (CIP) and HAWAII. LTRACK is sensitive to network topology and MN behavior, so MN movement modelling is also introduced and discussed with different topologies. The techniques presented here can be used to model not only the LTRACK algorithm but other algorithms too. Many discussions and calculations are given to support the adequacy of our mathematical model in many cases. The model is valid on various network levels, scalable vertically in the ISO-OSI layers, and also scales well with the number of network elements.
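
    As an illustration of the Markov-chain machinery such an analysis rests on, the sketch below computes a stationary distribution and a long-run signalling cost for a hypothetical three-state mobile-node model; the states, transition probabilities, and costs are invented and are not LTRACK's actual model.

```python
import numpy as np

# Hypothetical 3-state Markov model of a Mobile Node; states might be
# "idle", "moving within a tracking chain", "performing a location update".
P = np.array([
    [0.90, 0.08, 0.02],
    [0.20, 0.70, 0.10],
    [0.50, 0.30, 0.20],
])

# Stationary distribution: the left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

# Hypothetical per-state signalling costs; the long-run cost rate is the
# stationary average, the kind of quantity optimized over the parameters.
cost = np.array([0.0, 1.0, 5.0])
print("stationary distribution:", pi)
print("long-run signalling cost per step:", pi @ cost)
```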

  3. A pulsatile flow model for in vitro quantitative evaluation of prosthetic valve regurgitation

    Directory of Open Access Journals (Sweden)

    S. Giuliatti

    2000-03-01

    A pulsatile pressure-flow model was developed for in vitro quantitative color Doppler flow mapping studies of valvular regurgitation. The flow through the system was generated by a piston which was driven by stepper motors controlled by a computer. The piston was connected to acrylic chambers designed to simulate "ventricular" and "atrial" heart chambers. Inside the "ventricular" chamber, a prosthetic heart valve was placed at the inflow connection with the "atrial" chamber while another prosthetic valve was positioned at the outflow connection with flexible tubes, elastic balloons and a reservoir arranged to mimic the peripheral circulation. The flow model was filled with a 0.25% corn starch/water suspension to improve Doppler imaging. A continuous flow pump transferred the liquid from the peripheral reservoir to another one connected to the "atrial" chamber. The dimensions of the flow model were designed to permit adequate imaging by Doppler echocardiography. Acoustic windows allowed placement of transducers distal and perpendicular to the valves, so that the ultrasound beam could be positioned parallel to the valvular flow. Strain-gauge and electromagnetic transducers were used for measurements of pressure and flow in different segments of the system. The flow model was also designed to fit different sizes and types of prosthetic valves. This pulsatile flow model was able to generate pressure and flow in the physiological human range, with independent adjustment of pulse duration and rate as well as of stroke volume. This model mimics flow profiles observed in patients with regurgitant prosthetic valves.

  4. Reply to the comment of S. Rayne on "QSAR model reproducibility and applicability: A case study of rate constants of hydroxyl radical reaction models applied to polybrominated diphenyl ethers and (benzo-)triazoles".

    Science.gov (United States)

    Gramatica, Paola; Kovarich, Simona; Roy, Partha Pratim

    2013-07-30

    We appreciate the interest of Dr. Rayne on our article and we completely agree that the dataset of (benzo-)triazoles, which were screened by the hydroxyl radical reaction quantitative structure-activity relationship (QSAR) model, was not only composed of benzo-triazoles but also included some simpler triazoles (without the condensed benzene ring), such as the chemicals listed by Dr. Rayne, as well as some related heterocycles (also few not aromatic). We want to clarify that in this article (as well as in other articles in which the same dataset was screened), for conciseness, the abbreviations (B)TAZs and BTAZs were used as general (and certainly too simplified) notations meaning an extended dataset of benzo-triazoles, triazoles, and related compounds. Copyright © 2013 Wiley Periodicals, Inc.

  5. Chow-Liu trees are sufficient predictive models for reproducing key features of functional networks of periictal EEG time-series.

    Science.gov (United States)

    Steimer, Andreas; Zubler, Frédéric; Schindler, Kaspar

    2015-09-01

    Seizure freedom in patients suffering from pharmacoresistant epilepsies is still not achieved in 20-30% of all cases. Hence, current therapies need to be improved, based on a more complete understanding of ictogenesis. In this respect, the analysis of functional networks derived from intracranial electroencephalographic (iEEG) data has recently become a standard tool. Functional networks, however, are purely descriptive models and thus are conceptually unable to predict fundamental features of iEEG time-series, e.g., in the context of therapeutical brain stimulation. In this paper we present some first steps towards overcoming the limitations of functional network analysis, by showing that its results are implied by a simple predictive model of time-sliced iEEG time-series. More specifically, we learn distinct graphical models (so-called Chow-Liu (CL) trees) as models for the spatial dependencies between iEEG signals. Bayesian inference is then applied to the CL trees, allowing for an analytic derivation/prediction of functional networks, based on thresholding of the absolute value Pearson correlation coefficient (CC) matrix. Using various measures, the thus obtained networks are then compared to those which were derived in the classical way from the empirical CC-matrix. In the high threshold limit we find (a) an excellent agreement between the two networks and (b) key features of periictal networks as they have previously been reported in the literature. Apart from functional networks, both matrices are also compared element-wise, showing that the CL approach leads to a sparse representation, by setting small correlations to values close to zero while preserving the larger ones. Overall, this paper shows the validity of CL-trees as simple, spatially predictive models for periictal iEEG data. Moreover, we suggest straightforward generalizations of the CL-approach for modeling also the temporal features of iEEG signals. Copyright © 2015 Elsevier Inc. All rights reserved.
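
    The CL-tree construction itself is classical: estimate pairwise mutual information, then take a maximum-weight spanning tree. A minimal sketch, assuming a simple histogram-based MI estimate on synthetic "channels" (the paper's preprocessing and time-slicing are not reproduced):

```python
import numpy as np
from itertools import combinations
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def mutual_information(x, y, bins=8):
    """Plug-in MI estimate between two discretized signals."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

def chow_liu_edges(data):
    """Chow-Liu tree: maximum spanning tree over pairwise MI.

    data: (n_channels, n_samples) array, e.g. one time slice of iEEG.
    Returns the list of tree edges (i, j).
    """
    n = data.shape[0]
    W = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        W[i, j] = mutual_information(data[i], data[j])
    # A max spanning tree is a min spanning tree on negated weights.
    mst = minimum_spanning_tree(csr_matrix(-W))
    return list(zip(*mst.nonzero()))

rng = np.random.default_rng(7)
eeg = rng.normal(size=(5, 1000))
eeg[1] += 0.8 * eeg[0]  # induce one strong dependency
print(chow_liu_edges(eeg))
```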

  6. Theoretical Modeling and Computer Simulations for the Origins and Evolution of Reproducing Molecular Systems and Complex Systems with Many Interactive Parts

    Science.gov (United States)

    Liang, Shoudan

    2000-01-01

    Our research effort has produced nine publications in peer-reviewed journals listed at the end of this report. The work reported here is in the following areas: (1) genetic network modeling; (2) autocatalytic model of pre-biotic evolution; (3) theoretical and computational studies of strongly correlated electron systems; (4) reducing thermal oscillations in the atomic force microscope; (5) transcription termination mechanism in prokaryotic cells; and (6) the low glutamine usage in thermophiles obtained by studying completely sequenced genomes. We discuss the main accomplishments of these publications.

  7. Use of a plant level logic model for quantitative assessment of systems interactions

    International Nuclear Information System (INIS)

    Chu, B.B.; Rees, D.C.; Kripps, L.P.; Hunt, R.N.; Bradley, M.

    1985-01-01

    The Electric Power Research Institute (EPRI) has sponsored a research program to investigate methods for identifying systems interactions (SIs) and for the evaluation of their importance. Phase 1 of the EPRI research project focused on the evaluation of methods for identification of SIs. Major results of the Phase 1 activities are the documentation of four different methodologies for identification of potential SIs and the development of guidelines for performing an effective plant walkdown in support of an SI analysis. Phase II of the project, currently being performed, is utilizing a plant level logic model of a pressurized water reactor (PWR) to determine the quantitative importance of identified SIs. In Phase II, previously reported events involving interactions between systems were screened and selected on the basis of their relevance to the Baltimore Gas and Electric (BG&E) Calvert Cliffs Nuclear Power Plant design and perceived potential safety significance. Selected events were then incorporated into the BG&E plant level GO logic model. The model is being exercised to calculate the relative importance of these events. Five previously identified event scenarios, extracted from licensee event reports (LERs), are being evaluated during the course of the study. A key feature of the approach being used in Phase II is the use of a logic model in a manner to effectively evaluate the impact of events on the system level and the plant level for the mitigation of transients. Preliminary study results indicate that the developed methodology can be a viable and effective means for determining the quantitative significance of SIs.

  8. Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.

    Science.gov (United States)

    Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong

    2015-05-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. © 2015 WILEY PERIODICALS, INC.
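
    The multivariate test statistics named above are standard MANOVA quantities. The sketch below computes the Pillai-Bartlett trace for a toy genotype-trait dataset; the functional-linear-model smoothing and the approximate F transformation are omitted, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy multivariate association test: q traits, p variants, n subjects.
n, p, q = 500, 4, 3
G = rng.binomial(2, 0.3, size=(n, p)).astype(float)  # genotype dosages
Y = rng.normal(size=(n, q))
Y[:, 0] += 0.2 * G[:, 1]                             # one true association

# Full model: intercept plus variants.
X = np.column_stack([np.ones(n), G])
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
E = (Y - X @ B).T @ (Y - X @ B)                      # error SSCP

# Null model: intercept only; H is the extra SSCP explained by variants.
X0 = np.ones((n, 1))
B0, *_ = np.linalg.lstsq(X0, Y, rcond=None)
H = (Y - X0 @ B0).T @ (Y - X0 @ B0) - E

# Pillai-Bartlett trace V = tr(H (H + E)^-1)
V = np.trace(H @ np.linalg.inv(H + E))
print("Pillai trace:", V)
```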

  9. A quantitative quasispecies theory-based model of virus escape mutation under immune selection.

    Science.gov (United States)

    Woo, Hyung-June; Reifman, Jaques

    2012-08-07

    Viral infections involve a complex interplay of the immune response and escape mutation of the virus quasispecies inside a single host. Although fundamental aspects of such a balance of mutation and selection pressure have been established by the quasispecies theory decades ago, its implications have largely remained qualitative. Here, we present a quantitative approach to model the virus evolution under cytotoxic T-lymphocyte immune response. The virus quasispecies dynamics are explicitly represented by mutations in the combined sequence space of a set of epitopes within the viral genome. We stochastically simulated the growth of a viral population originating from a single wild-type founder virus and its recognition and clearance by the immune response, as well as the expansion of its genetic diversity. Applied to the immune escape of a simian immunodeficiency virus epitope, model predictions were quantitatively comparable to the experimental data. Within the model parameter space, we found two qualitatively different regimes of infectious disease pathogenesis, each representing alternative fates of the immune response: It can clear the infection in finite time or eventually be overwhelmed by viral growth and escape mutation. The latter regime exhibits the characteristic disease progression pattern of human immunodeficiency virus, while the former is bounded by maximum mutation rates that can be suppressed by the immune response. Our results demonstrate that, by explicitly representing epitope mutations and thus providing a genotype-phenotype map, the quasispecies theory can form the basis of a detailed sequence-specific model of real-world viral pathogens evolving under immune selection.
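
    A deterministic, mean-field caricature of the escape dynamics described can still exhibit the two regimes, clearance versus escape. The paper's model is stochastic and sequence-specific; here all rates are invented, genotypes are bitmasks over a few epitopes, and immune clearance acts on any genotype that is not fully escaped.

```python
import numpy as np

L = 3        # epitope positions (bits); bit set = escaped at that epitope
mu = 1e-4    # per-position mutation probability per replication (assumed)
r = 1.5      # offspring per virion per generation (assumed)
kill = 0.6   # per-generation clearance of recognized genotypes (assumed)
cap = 1e6    # crude carrying capacity

counts = np.zeros(2**L)
counts[0] = 1.0  # single wild-type founder

for gen in range(300):
    # Logistic-style replication with saturation near the carrying capacity.
    counts *= r / (1 + counts.sum() / cap)
    # Mutation: a fraction mu per site flows to the bit-flipped genotype.
    new = counts.copy()
    for g in range(2**L):
        for site in range(L):
            flow = counts[g] * mu
            new[g] -= flow
            new[g ^ (1 << site)] += flow
    counts = new
    # Immune clearance spares only the fully escaped genotype.
    for g in range(2**L - 1):
        counts[g] *= (1 - kill)
    if counts.sum() < 1e-3 or counts.sum() > 1e8:
        break  # cleared, or growth has escaped control

print("generation:", gen, "total:", counts.sum(),
      "escaped fraction:", counts[-1] / max(counts.sum(), 1e-12))
```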

  10. Adjustments in the Almod 3W2 code models for reproducing the net load trip test in Angra I nuclear power plant

    International Nuclear Information System (INIS)

    Camargo, C.T.M.; Madeira, A.A.; Pontedeiro, A.C.; Dominguez, L.

    1986-09-01

    The recorded traces obtained from the net load trip test in the Angra I NPP yielded the opportunity to make fine adjustments to the ALMOD 3W2 code models. The changes are described and the results are compared against real plant data. (Author) [pt]

  11. Multi-Parametric Neuroimaging Reproducibility: A 3T Resource Study

    Science.gov (United States)

    Landman, Bennett A.; Huang, Alan J.; Gifford, Aliya; Vikram, Deepti S.; Lim, Issel Anne L.; Farrell, Jonathan A.D.; Bogovic, John A.; Hua, Jun; Chen, Min; Jarso, Samson; Smith, Seth A.; Joel, Suresh; Mori, Susumu; Pekar, James J.; Barker, Peter B.; Prince, Jerry L.; van Zijl, Peter C.M.

    2010-01-01

    Modern MRI image processing methods have yielded quantitative, morphometric, functional, and structural assessments of the human brain. These analyses typically exploit carefully optimized protocols for specific imaging targets. Algorithm investigators have several excellent public data resources to use to test, develop, and optimize their methods. Recently, there has been an increasing focus on combining MRI protocols in multi-parametric studies. Notably, these have included innovative approaches for fusing connectivity inferences with functional and/or anatomical characterizations. Yet, validation of the reproducibility of these interesting and novel methods has been severely hampered by the limited availability of appropriate multi-parametric data. We present an imaging protocol optimized to include state-of-the-art assessment of brain function, structure, micro-architecture, and quantitative parameters within a clinically feasible 60-minute protocol on a 3T MRI scanner. We present scan-rescan reproducibility of these imaging contrasts based on 21 healthy volunteers (11 M/10 F, 22–61 y/o). The cortical gray matter, cortical white matter, ventricular cerebrospinal fluid, thalamus, putamen, caudate, cerebellar gray matter, cerebellar white matter, and brainstem were identified with mean volume-wise reproducibility of 3.5%. We tabulate the mean intensity, variability and reproducibility of each contrast in a region of interest approach, which is essential for prospective study planning and retrospective power analysis considerations. Anatomy was highly consistent on structural acquisition (~1–5% variability), while variation on diffusion and several other quantitative scans was higher […] multi-parametric imaging protocols. PMID:21094686
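
    Scan-rescan reproducibility of the kind tabulated here is straightforward to compute. A minimal sketch for one region (the volumes and noise level below are simulated, not taken from the study):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_subjects = 21
    true_vol = rng.normal(7.5, 0.8, size=n_subjects)                  # hypothetical regional volumes (mL)
    scans = true_vol[:, None] + rng.normal(0, 0.15, (n_subjects, 2))  # scan and rescan with noise

    # volume-wise reproducibility: scan-rescan difference relative to the per-subject mean
    rep = np.abs(scans[:, 0] - scans[:, 1]) / scans.mean(axis=1)
    print(f"mean volume-wise reproducibility: {100 * rep.mean():.1f}%")
    ```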

  12. Evaluating the Impacts of Spatial Uncertainties in Quantitative Precipitation Estimation (QPE) Products on Flood Modelling

    Science.gov (United States)

    Gao, Z.; Wu, H.; Li, J.; Hong, Y.; Huang, J.

    2017-12-01

    Precipitation is often the major uncertainty source of hydrologic modelling, e.g., for flood simulation. Quantitative precipitation estimation (QPE) products, when used as input for hydrologic modelling, can cause significant differences in model performance because of the large variations in their estimation of precipitation intensity, duration, and spatial distribution. Objectively evaluating QPE and deriving the best estimation of precipitation at the river basin scale represent a bottleneck for the hydrometeorological community, despite being needed by many research applications, including flood simulation, such as the Global Flood Monitoring System using the Dominant river tracing-Routing Integrated with VIC Environment (DRIVE) model (Wu et al., 2014). Recently we developed a Multiple-product-driven hydrological Modeling Framework (MMF) for objective evaluation of QPE products using the DRIVE model (Wu et al., 2017). In this study, based on the MMF, we (1) compare location, spatial characteristics, and geometric patterns of precipitation among QPE products at various temporal scales by adopting an object-oriented method; (2) demonstrate their effects on flood magnitude and timing simulation through the DRIVE model; and (3) further investigate and understand how different precipitation spatial patterns evolve and result in differences in streamflow and flood peak (magnitude and timing), through a linear routing scheme which is employed to decompose the contributions to the flood peak during rain-flood events. This study shows that there can be significant differences in spatial patterns of accumulated precipitation at various temporal scales (from daily to hourly) among QPE products, which cause significant differences in flood simulation, particularly in peak timing prediction. Therefore, the evaluation of the spatial pattern of precipitation should be considered an important part of the framework for objective evaluation of QPE and the derivation of the best

  13. An International Ki67 Reproducibility Study

    Science.gov (United States)

    2013-01-01

    Background In breast cancer, immunohistochemical assessment of proliferation using the marker Ki67 has potential use in both research and clinical management. However, lack of consistency across laboratories has limited Ki67’s value. A working group was assembled to devise a strategy to harmonize Ki67 analysis and increase scoring concordance. Toward that goal, we conducted a Ki67 reproducibility study. Methods Eight laboratories received 100 breast cancer cases arranged into 1-mm core tissue microarrays—one set stained by the participating laboratory and one set stained by the central laboratory, both using antibody MIB-1. Each laboratory scored Ki67 as percentage of positively stained invasive tumor cells using its own method. Six laboratories repeated scoring of 50 locally stained cases on 3 different days. Sources of variation were analyzed using random effects models with log2-transformed measurements. Reproducibility was quantified by intraclass correlation coefficient (ICC), and the approximate two-sided 95% confidence intervals (CIs) for the true intraclass correlation coefficients in these experiments were provided. Results Intralaboratory reproducibility was high (ICC = 0.94; 95% CI = 0.93 to 0.97). Interlaboratory reproducibility was only moderate (central staining: ICC = 0.71, 95% CI = 0.47 to 0.78; local staining: ICC = 0.59, 95% CI = 0.37 to 0.68). Geometric mean of Ki67 values for each laboratory across the 100 cases ranged 7.1% to 23.9% with central staining and 6.1% to 30.1% with local staining. Factors contributing to interlaboratory discordance included tumor region selection, counting method, and subjective assessment of staining positivity. Formal counting methods gave more consistent results than visual estimation. Conclusions Substantial variability in Ki67 scoring was observed among some of the world’s most experienced laboratories. Ki67 values and cutoffs for clinical decision-making cannot be transferred between laboratories without
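
    The intraclass correlation used above can be derived from a one-way random-effects ANOVA. A minimal sketch on simulated log2-transformed Ki67 scores (the scoring-day structure and noise magnitude are assumptions for illustration):

    ```python
    import numpy as np

    def icc_oneway(x):
        """One-way random-effects ICC from a (cases x repeated scorings) matrix."""
        n, k = x.shape
        row_means = x.mean(axis=1)
        ms_between = k * np.sum((row_means - x.mean()) ** 2) / (n - 1)
        ms_within = np.sum((x - row_means[:, None]) ** 2) / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    rng = np.random.default_rng(3)
    true_ki67 = rng.uniform(2.0, 60.0, size=50)                              # % positive cells per case
    scores = np.log2(true_ki67[:, None] * rng.lognormal(0.0, 0.1, (50, 3)))  # 3 scoring days
    print(f"intralaboratory ICC: {icc_oneway(scores):.2f}")
    ```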

  14. Systematic Methodology for Reproducible Optimizing Batch Operation

    DEFF Research Database (Denmark)

    Bonné, Dennis; Jørgensen, Sten Bay

    2006-01-01

    This contribution presents a systematic methodology for rapid acquirement of discrete-time state space model representations of batch processes based on their historical operation data. These state space models are parsimoniously parameterized as a set of local, interdependent models. The present contribution furthermore presents how the asymptotic convergence of Iterative Learning Control is combined with the closed-loop performance of Model Predictive Control to form a robust and asymptotically stable optimal controller for ensuring reliable and reproducible operation of batch processes. This controller may also be used for optimizing control. The modeling and control performance is demonstrated on a fed-batch protein cultivation example. The presented methodologies lend themselves directly for application as Process Analytical Technologies (PAT).

  15. Need for collection of quantitative distribution data for dosimetry and metabolic modeling

    International Nuclear Information System (INIS)

    Lathrop, K.A.

    1976-01-01

    Problems in radiation dose distribution studies in humans are discussed. Data show that the effective half-times for 7Be and 75Se in the mouse, rat, monkey, dog, and human show no correlation with weight, body surface, or other readily apparent factor that could be used to equate nonhuman and human data. Another problem sometimes encountered in attempting to extrapolate animal data to humans involves equivalent doses of the radiopharmaceutical. A usual human dose for a radiopharmaceutical is 1 ml, or 0.017 ml/kg. The same solution injected into a mouse in a convenient volume of 0.1 ml results in a dose of 4 ml/kg, or 240 times that received by the human. The effect on whole-body retention produced by a dose difference of similar magnitude for selenium in the rat shows the retention is at least twice as great with the smaller amount. With the development of methods for the collection of data throughout the body representing the fractional distribution of radioactivity versus time, not only can more realistic dose estimates be made, but also the tools will be provided for the study of physiological and biochemical interrelationships in the intact subject, from which compartmental models may be made which have diagnostic significance. The unique requirement for quantitative biologic data needed for calculation of radiation absorbed doses is the same as the unique scientific contribution that nuclear medicine can make, which is the quantitative in vivo study of physiologic and biochemical processes. The technique involved is not the same as quantitation of a radionuclide image, but is a step beyond
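
    A quick back-of-the-envelope check of the dose-per-kilogram disparity described above (a minimal sketch; the body masses are assumed, not stated in the record):

    ```python
    # Worked example of the dose-per-kg disparity; body masses are assumed values.
    human_dose_ml, human_mass_kg = 1.0, 60.0    # usual human dose of ~1 ml
    mouse_dose_ml, mouse_mass_kg = 0.1, 0.025   # convenient mouse volume, ~25 g mouse

    human_per_kg = human_dose_ml / human_mass_kg   # ~0.017 ml/kg
    mouse_per_kg = mouse_dose_ml / mouse_mass_kg   # 4 ml/kg
    print(f"mouse-to-human dose ratio: {mouse_per_kg / human_per_kg:.0f}x")  # ~240x
    ```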

  16. Tree Root System Characterization and Volume Estimation by Terrestrial Laser Scanning and Quantitative Structure Modeling

    Directory of Open Access Journals (Sweden)

    Aaron Smith

    2014-12-01

    Full Text Available The accurate characterization of three-dimensional (3D) root architecture, volume, and biomass is important for a wide variety of applications in forest ecology and to better understand tree and soil stability. Technological advancements have led to increasingly more digitized and automated procedures, which have been used to more accurately and quickly describe the 3D structure of root systems. Terrestrial laser scanners (TLS) have successfully been used to describe aboveground structures of individual trees and stand structure, but have only recently been applied to the 3D characterization of whole root systems. In this study, 13 recently harvested Norway spruce root systems were mechanically pulled from the soil, cleaned, and their volumes were measured by displacement. The root systems were suspended, scanned with TLS from three different angles, and the root surfaces from the co-registered point clouds were modeled with the 3D Quantitative Structure Model to determine root architecture and volume. The modeling procedure facilitated the rapid derivation of root volume, diameters, break point diameters, linear root length, cumulative percentages, and root fraction counts. The modeled root systems underestimated root system volume by 4.4%. The modeling procedure is widely applicable and easily adapted to derive other important topological and volumetric root variables.

  17. Quantitative Agent Based Model of Opinion Dynamics: Polish Elections of 2015

    Science.gov (United States)

    Sobkowicz, Pawel

    2016-01-01

    We present results of an abstract, agent based model of opinion dynamics simulations based on the emotion/information/opinion (E/I/O) approach, applied to a strongly polarized society, corresponding to the Polish political scene between 2005 and 2015. Under certain conditions the model leads to metastable coexistence of two subcommunities of comparable size (supporting the corresponding opinions)—which corresponds to the bipartisan split found in Poland. Spurred by the recent breakdown of this political duopoly, which occurred in 2015, we present a model extension that describes both the long term coexistence of the two opposing opinions and a rapid, transitory change due to the appearance of a third party alternative. We provide quantitative comparison of the model with the results of polls and elections in Poland, testing the assumptions related to the modeled processes and the parameters used in the simulations. It is shown that, when the propaganda messages of the two incumbent parties differ in emotional tone, the political status quo may be unstable. The asymmetry of the emotions within the support bases of the two parties allows one of them to be ‘invaded’ by a newcomer third party very quickly, while the second remains immune to such invasion. PMID:27171226

  18. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    Directory of Open Access Journals (Sweden)

    Alexander Mitsos

    Full Text Available Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next and have led to the construction of models that simulate the cells response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell specific data to result in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: (i) excessive CPU time requirements and (ii) a loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type specific pathways in normal and transformed hepatocytes using medium and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state of the art optimization algorithms.

  19. Antiproliferative Pt(IV) complexes: synthesis, biological activity, and quantitative structure-activity relationship modeling.

    Science.gov (United States)

    Gramatica, Paola; Papa, Ester; Luini, Mara; Monti, Elena; Gariboldi, Marzia B; Ravera, Mauro; Gabano, Elisabetta; Gaviglio, Luca; Osella, Domenico

    2010-09-01

    Several Pt(IV) complexes of the general formula [Pt(L)2(L')2(L'')2] [axial ligands L are Cl-, RCOO-, or OH-; equatorial ligands L' are two am(m)ine or one diamine; and equatorial ligands L'' are Cl- or glycolato] were rationally designed and synthesized in the attempt to develop a predictive quantitative structure-activity relationship (QSAR) model. Numerous theoretical molecular descriptors were used alongside physicochemical data (i.e., reduction peak potential, Ep, and partition coefficient, log Po/w) to obtain a validated QSAR between in vitro cytotoxicity (half maximal inhibitory concentrations, IC50, on A2780 ovarian and HCT116 colon carcinoma cell lines) and some features of Pt(IV) complexes. In the resulting best models, a lipophilic descriptor (log Po/w or the number of secondary sp3 carbon atoms) plus an electronic descriptor (Ep, the number of oxygen atoms, or the topological polar surface area expressed as the N,O polar contribution) is necessary for modeling, supporting the general finding that the biological behavior of Pt(IV) complexes can be rationalized on the basis of their cellular uptake, the Pt(IV) → Pt(II) reduction, and the structure of the corresponding Pt(II) metabolites. Novel compounds were synthesized on the basis of their predicted cytotoxicity in the preliminary QSAR model, and were experimentally tested. A final QSAR model, based solely on theoretical molecular descriptors to ensure its general applicability, is proposed.
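
    The descriptor combination described above (one lipophilic plus one electronic term) amounts to a two-variable linear regression. A minimal sketch with invented descriptor values and potencies (pIC50 = -log10 IC50):

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # invented descriptors: log Po/w (lipophilicity) and Ep (reduction peak potential, V)
    X = np.array([[0.5, -0.60], [1.2, -0.45], [-0.3, -0.75],
                  [2.0, -0.30], [0.9, -0.50], [1.6, -0.38]])
    pIC50 = np.array([4.2, 4.9, 3.8, 5.6, 4.6, 5.2])   # illustrative potencies

    model = LinearRegression().fit(X, pIC50)
    print("coefficients (log Po/w, Ep):", model.coef_)
    print("r2 on training data:", model.score(X, pIC50))
    ```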

  20. Quantitative evaluation of ultrasonic sound fields in anisotropic austenitic welds using 2D ray tracing model

    Science.gov (United States)

    Kolkoori, S. R.; Rahaman, M.-U.; Chinta, P. K.; Kreutzbruck, M.; Prager, J.

    2012-05-01

    Ultrasonic investigation of inhomogeneous anisotropic materials such as austenitic welds is complicated because its columnar grain structure leads to curved energy paths, beam splitting and asymmetrical beam profiles. A ray tracing model has a potential advantage in analyzing the ultrasonic sound field propagation and therewith optimizing the inspection parameters. In this contribution we present a 2D ray tracing model to predict energy ray paths, ray amplitudes and travel times for the three wave modes quasi longitudinal, quasi shear vertical, and shear horizontal waves in austenitic weld materials. Inhomogeneity in the austenitic weld material is represented by discretizing the inhomogeneous region into several homogeneous layers. At each interface between the layers the reflection and transmission problem is computed and yields energy direction, amplitude and energy coefficients. The ray amplitudes are computed accurately by taking into account directivity, divergence and density of rays, phase relations as well as transmission coefficients. Ultrasonic sound fields obtained from the ray tracing model are compared quantitatively with the 2D Elastodynamic Finite Integration Technique (EFIT). The excellent agreement between both models confirms the validity of the presented ray tracing results. Experiments are conducted on austenitic weld samples with longitudinal beam transducer as transmitting probe and amplitudes at the rear surface are scanned by means of electrodynamical probes. Finally, the ray tracing model results are also validated through the experiments.
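
    The layered-medium idea can be illustrated with an isotropic simplification: at each interface the horizontal slowness (ray parameter) is conserved, per Snell's law. A minimal sketch (the anisotropic case traced in the paper additionally requires direction-dependent group velocities; velocities and thicknesses below are invented):

    ```python
    import numpy as np

    def trace_ray(theta0_deg, velocities, thicknesses):
        """Trace one ray through horizontal homogeneous layers using Snell's law."""
        p = np.sin(np.radians(theta0_deg)) / velocities[0]   # ray parameter, conserved
        x = t = 0.0
        for v, h in zip(velocities, thicknesses):
            s = p * v
            if abs(s) >= 1.0:
                raise ValueError("post-critical incidence: ray is totally reflected")
            theta = np.arcsin(s)
            x += h * np.tan(theta)          # horizontal advance in the layer
            t += h / (v * np.cos(theta))    # travel time along the slanted path
        return x, t

    # illustrative layered stand-in for a weld region: velocities (m/s), thicknesses (m)
    offset, time = trace_ray(30.0, [5700.0, 5200.0, 6100.0], [0.002, 0.003, 0.002])
    print(f"exit offset {offset * 1e3:.2f} mm, travel time {time * 1e6:.2f} µs")
    ```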

  1. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    Science.gov (United States)

    Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next and have led to the construction of models that simulate the cells response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell specific data to result in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type specific pathways in normal and transformed hepatocytes using medium and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state of the art optimization algorithms.
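
    At its core, the reformulation turns pathway training into a continuous least-squares problem over transfer-function parameters. A minimal sketch for a single edge, fitting a normalized Hill-type transfer function with a general-purpose NLP solver (the data points and parameter bounds are invented):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    x = np.array([0.0, 0.1, 0.25, 0.5, 0.75, 1.0])          # normalized stimulus levels
    y_obs = np.array([0.02, 0.10, 0.30, 0.62, 0.78, 0.85])  # invented phosphoprotein readout

    def hill(params, x):
        k, n, w = params                    # half-activation, cooperativity, edge weight
        return w * x**n / (k**n + x**n)

    def sse(params):
        return np.sum((hill(params, x) - y_obs) ** 2)

    res = minimize(sse, x0=[0.5, 2.0, 1.0], method="L-BFGS-B",
                   bounds=[(1e-3, 1.0), (0.5, 6.0), (0.0, 1.0)])
    print("fitted (k, n, w):", res.x, " SSE:", res.fun)
    ```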

  2. Daphnia and fish toxicity of (benzo)triazoles: validated QSAR models, and interspecies quantitative activity-activity modelling.

    Science.gov (United States)

    Cassani, Stefano; Kovarich, Simona; Papa, Ester; Roy, Partha Pratim; van der Wal, Leon; Gramatica, Paola

    2013-08-15

    Due to their chemical properties, synthetic triazoles and benzo-triazoles ((B)TAZs) are mainly distributed to the water compartments in the environment, and because of their wide use the potential effects on aquatic organisms are cause of concern. Non-testing approaches like those based on quantitative structure-activity relationships (QSARs) are valuable tools to maximize the information contained in existing experimental data and predict missing information while minimizing animal testing. In the present study, externally validated QSAR models for the prediction of acute (B)TAZs toxicity in Daphnia magna and Oncorhynchus mykiss have been developed according to the principles for the validation of QSARs and their acceptability for regulatory purposes, proposed by the Organization for Economic Co-operation and Development (OECD). These models are based on theoretical molecular descriptors, and are statistically robust, externally predictive and characterized by a verifiable structural applicability domain. They have been applied to predict acute toxicity for over 300 (B)TAZs without experimental data, many of which are in the pre-registration list of the REACH regulation. Additionally, a model based on quantitative activity-activity relationships (QAAR) has been developed, which allows for interspecies extrapolation from daphnids to fish. The importance of QSAR/QAAR, especially when dealing with specific chemical classes like (B)TAZs, for screening and prioritization of pollutants under REACH, has been highlighted. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Simulation of the hydrodynamic conditions of the eye to better reproduce the drug release from hydrogel contact lenses: experiments and modeling.

    Science.gov (United States)

    Pimenta, A F R; Valente, A; Pereira, J M C; Pereira, J C F; Filipe, H P; Mata, J L G; Colaço, R; Saramago, B; Serro, A P

    2016-12-01

    Currently, most in vitro drug release studies for ophthalmic applications are carried out in static sink conditions. Although this procedure is simple and useful to make comparative studies, it does not describe adequately the drug release kinetics in the eye, considering the small tear volume and flow rates found in vivo. In this work, a microfluidic cell was designed and used to mimic the continuous, volumetric flow rate of tear fluid and its low volume. The suitable operation of the cell, in terms of uniformity and symmetry of flux, was proved using a numerical model based on the Navier-Stokes and continuity equations. The release profile of a model system (a hydroxyethyl methacrylate-based hydrogel (HEMA/PVP) for soft contact lenses (SCLs) loaded with diclofenac) obtained with the microfluidic cell was compared with that obtained in static conditions, showing that the kinetics of release in dynamic conditions is slower. The application of the numerical model demonstrated that the designed cell can be used to simulate the drug release in the whole range of the human eye tear film volume and allowed estimation of the drug concentration in the volume of liquid in direct contact with the hydrogel. The knowledge of this concentration, which is significantly different from that measured in the experimental tests during the first hours of release, is critical to predict the toxicity of the drug release system and its in vivo efficacy. In conclusion, the use of the microfluidic cell in conjunction with the numerical model shall be a valuable tool to design and optimize new therapeutic drug-loaded SCLs.

  4. Reproducibility of isotope ratio measurements

    International Nuclear Information System (INIS)

    Elmore, D.

    1981-01-01

    The use of an accelerator as part of a mass spectrometer has improved the sensitivity for measuring low levels of long-lived radionuclides by several orders of magnitude. However, the complexity of a large tandem accelerator and beam transport system has made it difficult to match the precision of low energy mass spectrometry. Although uncertainties for accelerator measured isotope ratios as low as 1% have been obtained under favorable conditions, most errors quoted in the literature for natural samples are in the 5 to 20% range. These errors are dominated by statistics and generally the reproducibility is unknown since the samples are only measured once

  5. Reproducibility of esophageal scintigraphy using semi-solid yoghurt

    Energy Technology Data Exchange (ETDEWEB)

    Imai, Yukinori; Kinoshita, Manabu; Asakura, Yasushi; Kakinuma, Tohru; Shimoji, Katsunori; Fujiwara, Kenji; Suzuki, Kenji; Miyamae, Tatsuya [Saitama Medical School, Moroyama (Japan)]

    1999-10-01

    Esophageal scintigraphy is a non-invasive method which evaluates esophageal function quantitatively. We applied a new technique using semi-solid yoghurt, which can evaluate esophageal function in a sitting position. To evaluate the reproducibility of this method, scintigraphy was performed in 16 healthy volunteers. From the results of the four swallows following the first one, the mean coefficients of variation in esophageal transit time and esophageal emptying time were 12.8% and 13.4%, respectively (intraday variation). As regards the interday variation, this method also had good reproducibility, based on the results obtained on 2 separate days. (author)

  6. Linear approaches to intramolecular Förster resonance energy transfer probe measurements for quantitative modeling.

    Directory of Open Access Journals (Sweden)

    Marc R Birtwistle

    Full Text Available Numerous unimolecular, genetically-encoded Förster Resonance Energy Transfer (FRET) probes for monitoring biochemical activities in live cells have been developed over the past decade. As these probes allow for collection of high frequency, spatially resolved data on signaling events in live cells and tissues, they are an attractive technology for obtaining data to develop quantitative, mathematical models of spatiotemporal signaling dynamics. However, to be useful for such purposes the observed FRET from such probes should be related to a biological quantity of interest through a defined mathematical relationship, which is straightforward when this relationship is linear, and can be difficult otherwise. First, we show that only in rare circumstances is the observed FRET linearly proportional to a biochemical activity. Therefore in most cases FRET measurements should only be compared either to explicitly modeled probes or to concentrations of products of the biochemical activity, but not to activities themselves. Importantly, we find that FRET measured by standard intensity-based, ratiometric methods is inherently non-linear with respect to the fraction of probes undergoing FRET. Alternatively, we find that quantifying FRET either via (1) fluorescence lifetime imaging (FLIM) or (2) ratiometric methods where the donor emission intensity is divided by the directly-excited acceptor emission intensity (denoted R_alt) is linear with respect to the fraction of probes undergoing FRET. This linearity property allows one to calculate the fraction of active probes based on the FRET measurement. Thus, our results suggest that either FLIM or ratiometric methods based on R_alt are the preferred techniques for obtaining quantitative data from FRET probe experiments for mathematical modeling purposes.
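
    The linearity claim is easy to verify numerically. A minimal sketch contrasting the classic sensitized-emission ratio with R_alt (all photophysical constants are invented):

    ```python
    import numpy as np

    f = np.linspace(0.0, 1.0, 5)           # fraction of probes undergoing FRET
    E, Qd, Qa = 0.4, 0.6, 0.5              # assumed FRET efficiency and quantum yields

    donor = Qd * (1 - f * E)               # donor emission, quenched by FRET
    acceptor_fret = Qa * f * E             # sensitized acceptor emission
    acceptor_direct = np.full_like(f, Qa)  # acceptor excited directly: independent of f

    ratio_classic = acceptor_fret / donor  # standard intensity ratio
    r_alt = donor / acceptor_direct        # donor / directly-excited acceptor

    print("classic ratio steps:", np.diff(ratio_classic))  # unequal steps: nonlinear in f
    print("R_alt steps:        ", np.diff(r_alt))          # equal steps: linear in f
    ```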

  7. Quantitative utilization of prior biological knowledge in the Bayesian network modeling of gene expression data

    Directory of Open Access Journals (Sweden)

    Gao Shouguo

    2011-08-01

    Full Text Available Abstract Background: Bayesian Network (BN) is a powerful approach to reconstructing genetic regulatory networks from gene expression data. However, expression data by itself suffers from high noise and lack of power. Incorporating prior biological knowledge can improve the performance. As each type of prior knowledge on its own may be incomplete or limited by quality issues, integrating multiple sources of prior knowledge to utilize their consensus is desirable. Results: We introduce a new method to incorporate the quantitative information from multiple sources of prior knowledge. It first uses the Naïve Bayesian classifier to assess the likelihood of functional linkage between gene pairs based on prior knowledge. In this study we included co-citation in PubMed and semantic similarity in Gene Ontology annotation. A candidate network edge reservoir is then created in which the copy number of each edge is proportional to the estimated likelihood of linkage between the two corresponding genes. In network simulation the Markov Chain Monte Carlo sampling algorithm is adopted, sampling from this reservoir at each iteration to generate new candidate networks. We evaluated the new algorithm using both simulated and real gene expression data, including that from a yeast cell cycle and a mouse pancreas development/growth study. Incorporating prior knowledge led to a ~2-fold increase in the number of known transcription regulations recovered, without significant change in false positive rate. In contrast, without the prior knowledge BN modeling is not always better than a random selection, demonstrating the necessity in network modeling to supplement the gene expression data with additional information. Conclusion: Our new development provides a statistical means to utilize the quantitative information in prior biological knowledge in the BN modeling of gene expression data, which significantly improves the performance.
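
    The reservoir idea maps directly onto code: edges are replicated in proportion to their prior likelihood, so uniform sampling of the reservoir is biased sampling of edges. A minimal sketch (gene names and likelihoods are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    # assumed prior likelihoods of functional linkage (e.g. from co-citation + GO similarity)
    prior = {("A", "B"): 0.60, ("A", "C"): 0.10, ("B", "D"): 0.45, ("C", "D"): 0.05}

    # candidate-edge reservoir: copy number proportional to the linkage likelihood
    reservoir = [edge for edge, p in prior.items() for _ in range(max(1, round(100 * p)))]

    # inside an MCMC iteration, a new candidate network would mutate the current
    # graph using an edge drawn uniformly from the reservoir:
    proposal = reservoir[rng.integers(len(reservoir))]
    print("proposed edge:", proposal)
    ```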

  8. A Quantitative, Time-Dependent Model of Oxygen Isotopes in the Solar Nebula: Step one

    Science.gov (United States)

    Nuth, J. A.; Paquette, J. A.; Farquhar, A.; Johnson, N. M.

    2011-01-01

    The remarkable discovery that oxygen isotopes in primitive meteorites were fractionated along a line of slope 1 rather than along the typical slope-0.52 terrestrial fractionation line occurred almost 40 years ago. However, a satisfactory, quantitative explanation for this observation has yet to be found, though many different explanations have been proposed. The first of these explanations proposed that the observed line represented the final product produced by mixing molecular cloud dust with a nucleosynthetic component, rich in O-16, possibly resulting from a nearby supernova explosion. Donald Clayton suggested that Galactic Chemical Evolution would gradually change the oxygen isotopic composition of the interstellar grain population by steadily producing O-16 in supernovae, then producing the heavier isotopes as secondary products in lower mass stars. Thiemens and collaborators proposed a chemical mechanism that relied on the availability of additional active rotational and vibrational states in otherwise-symmetric molecules, such as CO2, O3 or SiO2, containing two different oxygen isotopes, and a second, photochemical process that suggested that differential photochemical dissociation processes could fractionate oxygen. This second line of research has been pursued by several groups, though none of the current models is quantitative.

  9. The JBEI quantitative metabolic modeling library (jQMM): a python library for modeling microbial metabolism

    DEFF Research Database (Denmark)

    Birkel, Garrett W.; Ghosh, Amit; Kumar, Vinay S.

    2017-01-01

    analysis, new methods for the effective use of the ever more readily available and abundant -omics data (i.e. transcriptomics, proteomics and metabolomics) are urgently needed. Results: The jQMM library presented here provides an open-source, Python-based framework for modeling internal metabolic fluxes

  10. Benchmarking the Sandbox: Quantitative Comparisons of Numerical and Analogue Models of Brittle Wedge Dynamics (Invited)

    Science.gov (United States)

    Buiter, S.; Schreurs, G.; Geomod2008 Team

    2010-12-01

    When numerical and analogue models are used to investigate the evolution of deformation processes in crust and lithosphere, they face specific challenges related to, among others, large contrasts in material properties, the heterogeneous character of continental lithosphere, the presence of a free surface, the occurrence of large deformations including viscous flow and offset on shear zones, and the observation that several deformation mechanisms may be active simultaneously. These pose specific demands on numerical software and laboratory models. By combining the two techniques, we can utilize the strengths of each individual method and test the model-independence of our results. We can perhaps even consider our findings to be more robust if we find similar-to-same results irrespective of the modeling method that was used. To assess the role of modeling method and to quantify the variability among models with identical setups, we have performed a direct comparison of results of 11 numerical codes and 15 analogue experiments. We present three experiments that describe shortening of brittle wedges and that resemble setups frequently used, especially by analogue modelers. Our first experiment translates a non-accreting wedge with a stable surface slope. In agreement with critical wedge theory, all models maintain their surface slope and do not show internal deformation. This experiment serves as a reference that allows for testing against analytical solutions for taper angle, root-mean-square velocity and gravitational rate of work. The next two experiments investigate an unstable wedge, which deforms by inward translation of a mobile wall. The models accommodate shortening by formation of forward and backward shear zones. We compare surface slope, rate of dissipation of energy, root-mean-square velocity, and the location, dip angle and spacing of shear zones. All models show similar cross-sectional evolutions that demonstrate reproducibility to first order. However

  11. Nonlinear quantitative radiation sensitivity prediction model based on NCI-60 cancer cell lines.

    Science.gov (United States)

    Zhang, Chunying; Girard, Luc; Das, Amit; Chen, Sun; Zheng, Guangqiang; Song, Kai

    2014-01-01

    We proposed a nonlinear model to perform a novel quantitative radiation sensitivity prediction. We used the NCI-60 panel, which consists of nine different cancer types, as the platform to train our model. Important radiation therapy (RT) related genes were selected by significance analysis of microarrays (SAM). Orthogonal latent variables (LVs) were then extracted by the partial least squares (PLS) method as the new compressive input variables. Finally, support vector machine (SVM) regression model was trained with these LVs to predict the SF2 (the surviving fraction of cells after a radiation dose of 2 Gy γ-ray) values of the cell lines. Comparison with the published results showed significant improvement of the new method in various ways: (a) reducing the root mean square error (RMSE) of the radiation sensitivity prediction model from 0.20 to 0.011; and (b) improving prediction accuracy from 62% to 91%. To test the predictive performance of the gene signature, three different types of cancer patient datasets were used. Survival analysis across these different types of cancer patients strongly confirmed the clinical potential utility of the signature genes as a general prognosis platform. The gene regulatory network analysis identified six hub genes that are involved in canonical cancer pathways.
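
    The SAM → PLS → SVM pipeline can be mocked up with scikit-learn. A minimal sketch on synthetic expression data (gene counts, component number and kernel settings are assumptions, not the paper's tuned values):

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.svm import SVR

    rng = np.random.default_rng(5)
    X = rng.normal(size=(60, 500))          # expression of pre-selected RT-related genes
    sf2 = 0.5 + 0.02 * X[:, :5].sum(axis=1) + rng.normal(0, 0.02, 60)  # synthetic SF2

    pls = PLSRegression(n_components=5).fit(X, sf2)   # extract orthogonal latent variables
    lv = pls.transform(X)
    svr = SVR(kernel="rbf", C=10.0).fit(lv, sf2)      # regress SF2 on the latent variables

    rmse = np.sqrt(np.mean((svr.predict(lv) - sf2) ** 2))
    print(f"training RMSE: {rmse:.3f}")
    ```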

  12. Nonlinear Quantitative Radiation Sensitivity Prediction Model Based on NCI-60 Cancer Cell Lines

    Directory of Open Access Journals (Sweden)

    Chunying Zhang

    2014-01-01

    Full Text Available We proposed a nonlinear model to perform a novel quantitative radiation sensitivity prediction. We used the NCI-60 panel, which consists of nine different cancer types, as the platform to train our model. Important radiation therapy (RT) related genes were selected by significance analysis of microarrays (SAM). Orthogonal latent variables (LVs) were then extracted by the partial least squares (PLS) method as the new compressive input variables. Finally, a support vector machine (SVM) regression model was trained with these LVs to predict the SF2 (the surviving fraction of cells after a radiation dose of 2 Gy γ-ray) values of the cell lines. Comparison with the published results showed significant improvement of the new method in various ways: (a) reducing the root mean square error (RMSE) of the radiation sensitivity prediction model from 0.20 to 0.011; and (b) improving prediction accuracy from 62% to 91%. To test the predictive performance of the gene signature, three different types of cancer patient datasets were used. Survival analysis across these different types of cancer patients strongly confirmed the clinical potential utility of the signature genes as a general prognosis platform. The gene regulatory network analysis identified six hub genes that are involved in canonical cancer pathways.

  13. Quantitative structure-activity relationship models of chemical transformations from matched pairs analyses.

    Science.gov (United States)

    Beck, Jeremy M; Springer, Clayton

    2014-04-28

    The concepts of activity cliffs and matched molecular pairs (MMP) are recent paradigms for analysis of data sets to identify structural changes that may be used to modify the potency of lead molecules in drug discovery projects. Analysis of MMPs was recently demonstrated as a feasible technique for quantitative structure-activity relationship (QSAR) modeling of prospective compounds. Within a small data set, however, the lack of matched pairs and the lack of knowledge about specific chemical transformations limit prospective applications. Here we present an alternative technique that determines pairwise descriptors for each matched pair and then uses a QSAR model to estimate the activity change associated with a chemical transformation. The descriptors effectively group similar transformations and incorporate information about the transformation and its local environment. Use of a transformation QSAR model allows one to estimate the activity change for novel transformations and therefore returns predictions for a larger fraction of test set compounds. Application of the proposed methodology to four public data sets results in increased model performance over a benchmark random forest and direct application of chemical transformations using QSAR-by-matched molecular pairs analysis (QSAR-by-MMPA).
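
    A minimal sketch of the transformation-QSAR idea: represent each matched pair by pairwise descriptors and regress the activity change, so that novel transformations absent from the MMP table still receive predictions. Features and labels below are synthetic:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(6)
    # each matched pair -> pairwise descriptors of the transformation and its local
    # environment (8 invented features), labelled with the observed activity change
    pair_desc = rng.normal(size=(300, 8))
    delta_pIC50 = 0.7 * pair_desc[:, 0] - 0.4 * pair_desc[:, 3] + rng.normal(0, 0.1, 300)

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(pair_desc, delta_pIC50)

    # a novel transformation, never seen as an exact matched pair, still gets an estimate
    new_pair = rng.normal(size=(1, 8))
    print("predicted activity change:", model.predict(new_pair)[0])
    ```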

  14. Data-driven interdisciplinary mathematical modelling quantitatively unveils competition dynamics of co-circulating influenza strains.

    Science.gov (United States)

    Ho, Bin-Shenq; Chao, Kun-Mao

    2017-07-28

    Co-circulation of influenza strains is common to seasonal epidemics and pandemic emergence. Competition was considered to be involved in the vicissitudes of co-circulating influenza strains but had never been quantitatively studied at the human population level. The main purpose of the study was to explore the competition dynamics of co-circulating influenza strains in a quantitative way. We constructed a heterogeneous dynamic transmission model and ran the model to fit the weekly A/H1N1 influenza virus isolation rate through an influenza season. The construction process started with the 2007-2008 single-clade influenza season and, with the contribution from the clade-based A/H1N1 epidemiological curves, advanced to the 2008-2009 two-clade influenza season. The Pearson method was used to estimate the correlation coefficient between the simulated epidemic curve and the observed weekly A/H1N1 influenza virus isolation rate curve. The model found a best-fit simulation with a correlation coefficient of up to 96%, with all successful simulations converging to the best fit. The annual effective reproductive number of each co-circulating influenza strain was estimated. We found that, during the 2008-2009 influenza season, the annual effective reproductive number of the succeeding A/H1N1 clade 2B-2, carrying the H275Y mutation in the neuraminidase, was estimated at around 1.65. As to the preceding A/H1N1 clade 2C-2, the annual effective reproductive number would originally have been equivalent to 1.65 but finally settled at around 0.75 after the emergence of clade 2B-2. The model reported that clade 2B-2 outcompeted during the 2008-2009 influenza season mainly because clade 2C-2 suffered a reduction of transmission fitness of around 71% on encountering the former. We conclude that interdisciplinary data-driven mathematical modelling could bring to light the transmission dynamics of the A/H1N1 H275Y strains during the 2007-2009 influenza seasons worldwide and may inspire us to tackle the
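
    A minimal two-strain sketch of the mechanism the model quantifies: the resident clade's transmission rate drops by 71% once the invader emerges, and each strain's effective reproductive number is β·S/γ (all rates, timings and the simple discrete-time scheme are illustrative, not the paper's heterogeneous model):

    ```python
    # minimal two-strain SIR-type sketch with a fitness penalty on the resident clade
    beta_res, beta_inv, gamma = 0.45, 0.50, 0.33      # illustrative per-day rates
    S, I_res, I_inv = 0.99, 0.01, 0.0                 # fractions of the population

    for day in range(200):
        if day == 40:                                 # invading clade (e.g. H275Y) appears
            I_inv = 1e-4
            beta_res *= 1 - 0.71                      # resident loses 71% of its transmission fitness
        new_res, new_inv = beta_res * S * I_res, beta_inv * S * I_inv
        S -= new_res + new_inv
        I_res += new_res - gamma * I_res
        I_inv += new_inv - gamma * I_inv

    print("final effective R (resident, invader):", beta_res * S / gamma, beta_inv * S / gamma)
    ```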

  15. Immortalized keratinocytes derived from patients with epidermolytic ichthyosis reproduce the disease phenotype: a useful in vitro model for testing new treatments.

    Science.gov (United States)

    Chamcheu, J C; Pihl-Lundin, I; Mouyobo, C E; Gester, T; Virtanen, M; Moustakas, A; Navsaria, H; Vahlquist, A; Törmä, H

    2011-02-01

    Epidermolytic ichthyosis (EI) is a skin fragility disorder caused by mutations in genes encoding suprabasal keratins 1 and 10. While the aetiology of EI is known, model systems are needed for pathophysiological studies and development of novel therapies. To generate immortalized keratinocyte lines from patients with EI for studies of EI cell pathology and the effects of chemical chaperones as putative therapies. We derived keratinocytes from three patients with EI and one healthy control and established immortalized keratinocytes using human papillomavirus 16-E6/E7. Growth and differentiation characteristics, ability to regenerate organotypic epidermis, keratin expression, formation of cytoskeletal aggregates, and responses to heat shock and chemical chaperones were assessed. The cell lines EH11 (K1_p.Val176_Lys197del), EH21 (K10_p.156Arg>Gly), EH31 (K10_p.Leu161_Asp162del) and NKc21 (wild-type) currently exceed 160 population doublings and differentiate when exposed to calcium. At resting state, keratin aggregates were detected in 9% of calcium-differentiated EH31 cells, but not in any other cell line. Heat stress further increased this proportion to 30% and also induced aggregates in 3% of EH11 cultures. Treatment with trimethylamine N-oxide and 4-phenylbutyrate (4-PBA) reduced the fraction of aggregate-containing cells and affected the mRNA expression of keratins 1 and 10 while 4-PBA also modified heat shock protein 70 (HSP70) expression. Furthermore, in situ proximity ligation assay suggested a colocalization between HSP70 and keratins 1 and 10. Reconstituted epidermis from EI cells cornified but EH21 and EH31 cells produced suprabasal cytolysis, closely resembling the in vivo phenotype. These immortalized cell lines represent a useful model for studying EI biology and novel therapies. © 2011 The Authors. BJD © 2011 British Association of Dermatologists.

  16. MOLNs: A CLOUD PLATFORM FOR INTERACTIVE, REPRODUCIBLE, AND SCALABLE SPATIAL STOCHASTIC COMPUTATIONAL EXPERIMENTS IN SYSTEMS BIOLOGY USING PyURDME

    Science.gov (United States)

    Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas

    2017-01-01

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948

  17. Reproducible Hydrogeophysical Inversions through the Open-Source Library pyGIMLi

    Science.gov (United States)

    Wagner, F. M.; Rücker, C.; Günther, T.

    2017-12-01

    Many tasks in applied geosciences cannot be solved by a single measurement method and require the integration of geophysical, geotechnical and hydrological methods. In the emerging field of hydrogeophysics, researchers strive to gain quantitative information on process-relevant subsurface parameters by means of multi-physical models, which simulate the dynamic process of interest as well as its geophysical response. However, such endeavors are associated with considerable technical challenges, since they require coupling of different numerical models. This represents an obstacle for many practitioners and students. Even technically versatile users tend to build individually tailored solutions by coupling different (and potentially proprietary) forward simulators at the cost of scientific reproducibility. We argue that the reproducibility of studies in computational hydrogeophysics, and therefore the advancement of the field itself, requires versatile open-source software. To this end, we present pyGIMLi - a flexible and computationally efficient framework for modeling and inversion in geophysics. The object-oriented library provides management for structured and unstructured meshes in 2D and 3D, finite-element and finite-volume solvers, various geophysical forward operators, as well as Gauss-Newton based frameworks for constrained, joint and fully-coupled inversions with flexible regularization. In a step-by-step demonstration, it is shown how the hydrogeophysical response of a saline tracer migration can be simulated. Tracer concentration data from boreholes and measured voltages at the surface are subsequently used to estimate the hydraulic conductivity distribution of the aquifer within a single reproducible Python script.
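
    Gauss-Newton frameworks of the kind described here all revolve around one regularized model update. The sketch below shows that update generically in NumPy; it is not the pyGIMLi API, and the function name and the toy linear problem are invented:

    ```python
    import numpy as np

    def gauss_newton_step(J, d_obs, d_sim, m, m_ref, lam, W):
        """One regularized Gauss-Newton update: data misfit + lam * model roughness."""
        A = J.T @ J + lam * (W.T @ W)
        b = J.T @ (d_obs - d_sim) - lam * (W.T @ W) @ (m - m_ref)
        return m + np.linalg.solve(A, b)

    rng = np.random.default_rng(7)
    J = rng.normal(size=(5, 3))                  # sensitivity of 5 data to 3 model parameters
    m_true = np.array([1.0, -0.5, 2.0])
    d_obs = J @ m_true + rng.normal(0, 0.01, 5)  # noisy synthetic observations

    m = np.zeros(3)
    for _ in range(5):
        m = gauss_newton_step(J, d_obs, J @ m, m, np.zeros(3), lam=0.1, W=np.eye(3))
    print("recovered model parameters:", m)
    ```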

  18. Diffusion-weighted MRI and quantitative biophysical modeling of hippocampal neurite loss in chronic stress.

    Directory of Open Access Journals (Sweden)

    Peter Vestergaard-Poulsen

    Full Text Available Chronic stress has detrimental effects on physiology, learning and memory and is involved in the development of anxiety and depressive disorders. Besides changes in synaptic formation and neurogenesis, chronic stress also induces dendritic remodeling in the hippocampus, amygdala and the prefrontal cortex. Investigations of dendritic remodeling during development and treatment of stress are currently limited by the invasive nature of histological and stereological methods. Here we show that high field diffusion-weighted MRI combined with quantitative biophysical modeling of the hippocampal dendritic loss in 21-day restraint-stressed rats correlates highly with former histological findings. Our study strongly indicates that diffusion-weighted MRI is sensitive to regional dendritic loss and thus a promising candidate for non-invasive studies of dendritic plasticity in chronic stress and stress-related disorders.

  19. Assessment for Improvement: Two Models for Assessing a Large Quantitative Reasoning Requirement

    Directory of Open Access Journals (Sweden)

    Mary C. Wright

    2015-03-01

    Full Text Available We present two models for assessment of a large and diverse quantitative reasoning (QR) requirement at the University of Michigan. These approaches address two key challenges in assessment: (1) dissemination of findings for curricular improvement and (2) resource constraints associated with measurement of large programs. Approaches we present for data collection include convergent validation of self-report surveys, as well as use of mixed methods and learning analytics. Strategies we present for dissemination of findings include meetings with instructors to share data and best practices, sharing of results through social media, and use of easily accessible dashboards. These assessment approaches may be of particular interest to universities with large numbers of students engaging in a QR experience, projects that involve multiple courses with diverse instructional goals, or those who wish to promote evidence-based curricular improvement.

  20. Image guided interstitial laser thermotherapy: a canine model evaluated by magnetic resonance imaging and quantitative autoradiography.

    Science.gov (United States)

    Muacevic, A; Peller, M; Ruprecht, L; Berg, D; Fend, L; Sroka, R; Reulen, H J; Reiser, M; Tonn, J Ch; Kreth, F W

    2005-02-01

    To determine the applicability and safety of a new canine model suitable for correlative magnetic resonance imaging (MRI) studies and morphological/pathophysiological examination over time after interstitial laser thermotherapy (ILTT) in brain tissue. A laser fibre (Diode Laser 830 nm) with an integrated temperature feedback system was inserted into the right frontal white matter in 18 dogs using a frameless navigation technique. MRI thermometry (phase mapping, i.e., chemical shift of the proton resonance frequency) during interstitial heating was compared to simultaneously recorded interstitial fiberoptic temperature readings on the border of the lesion. To study brain capillary function in response to ILTT over time, quantitative autoradiography was performed, investigating the unidirectional blood-to-tissue transport of carbon-14-labelled alpha-aminoisobutyric acid (transfer constant K of AIB) at 12 and 36 hours, 7 and 14 days, 4 weeks and 3 months after ILTT. All laser procedures were well tolerated; laser and temperature fibres could be adequately placed in the right frontal lobe in all animals. In 5 animals MRI-based temperature quantification correlated strongly with invasive temperature measurements. In the remaining animals the temperature fibre was located in the area of susceptibility artifacts; therefore, no temperature correlation was possible. The laser lesions consisted of a central area of calcified necrosis surrounded by an area of reactive brain tissue with increased permeability. Quantitative autoradiography indicated a thin and spherical blood-brain barrier lesion. The magnitude of K of AIB increased from 12 hours to 14 days after ILTT and decreased thereafter. The mean value of K of AIB was 19 times (2 times) that of normal white matter (cortex), respectively. ILTT causes transient, highly localised areas of increased capillary permeability surrounding the laser lesion. Phase contrast imaging for MRI thermomonitoring can currently not be used for

  1. Quantitative and predictive model of kinetic regulation by E. coli TPP riboswitches.

    Science.gov (United States)

    Guedich, Sondés; Puffer-Enders, Barbara; Baltzinger, Mireille; Hoffmann, Guillaume; Da Veiga, Cyrielle; Jossinet, Fabrice; Thore, Stéphane; Bec, Guillaume; Ennifar, Eric; Burnouf, Dominique; Dumas, Philippe

    2016-01-01

    Riboswitches are non-coding elements upstream or downstream of mRNAs that, upon binding of a specific ligand, regulate transcription and/or translation initiation in bacteria, or alternative splicing in plants and fungi. We have studied thiamine pyrophosphate (TPP) riboswitches regulating translation of thiM operon and transcription and translation of thiC operon in E. coli, and that of THIC in the plant A. thaliana. For all, we ascertained an induced-fit mechanism involving initial binding of the TPP followed by a conformational change leading to a higher-affinity complex. The experimental values obtained for all kinetic and thermodynamic parameters of TPP binding imply that the regulation by A. thaliana riboswitch is governed by mass-action law, whereas it is of kinetic nature for the two bacterial riboswitches. Kinetic regulation requires that the RNA polymerase pauses after synthesis of each riboswitch aptamer to leave time for TPP binding, but only when its concentration is sufficient. A quantitative model of regulation highlighted how the pausing time has to be linked to the kinetic rates of initial TPP binding to obtain an ON/OFF switch in the correct concentration range of TPP. We verified the existence of these pauses and the model prediction on their duration. Our analysis also led to quantitative estimates of the respective efficiency of kinetic and thermodynamic regulations, which shows that kinetically regulated riboswitches react more sharply to concentration variation of their ligand than thermodynamically regulated riboswitches. This rationalizes the interest of kinetic regulation and confirms empirical observations that were obtained by numerical simulations.
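
    The induced-fit scheme (R + TPP ⇌ loose complex → tight, switched complex) can be integrated directly to see how much riboswitch is switched within a transcriptional pause of a given length, which is the crux of the kinetic regime described above. A minimal sketch; all rate constants are invented placeholders, not the measured values:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    k1, k_1, k2, k_2 = 1e5, 0.5, 0.05, 1e-3   # assumed rates: M^-1 s^-1, then s^-1
    tpp = 1e-6                                 # TPP concentration (M), held constant

    def rhs(t, y):
        free, loose, tight = y                 # fractions of riboswitch in each state
        return [
            -k1 * free * tpp + k_1 * loose,
            k1 * free * tpp - (k_1 + k2) * loose + k_2 * tight,
            k2 * loose - k_2 * tight,
        ]

    sol = solve_ivp(rhs, (0.0, 60.0), [1.0, 0.0, 0.0], t_eval=[5.0, 20.0, 60.0])
    print("switched (tight) fraction after 5/20/60 s pauses:", sol.y[2])
    ```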

  2. Quantitative assessments of mantle flow models against seismic observations: Influence of uncertainties in mineralogical parameters

    Science.gov (United States)

    Schuberth, Bernhard S. A.

    2017-04-01

    One of the major challenges in studies of Earth's deep mantle is to bridge the gap between geophysical hypotheses and observations. The biggest dataset available to investigate the nature of mantle flow are recordings of seismic waveforms. On the other hand, numerical models of mantle convection can be simulated on a routine basis nowadays for earth-like parameters, and modern thermodynamic mineralogical models allow us to translate the predicted temperature field to seismic structures. The great benefit of the mineralogical models is that they provide the full non-linear relation between temperature and seismic velocities and thus ensure a consistent conversion in terms of magnitudes. This opens the possibility for quantitative assessments of the theoretical predictions. The often-adopted comparison between geodynamic and seismic models is unsuitable in this respect owing to the effects of damping, limited resolving power and non-uniqueness inherent to tomographic inversions. The most relevant issue, however, is related to wavefield effects that reduce the magnitude of seismic signals (e.g., traveltimes of waves), a phenomenon called wavefront healing. Over the past couple of years, we have developed an approach that takes the next step towards a quantitative assessment of geodynamic models and that enables us to test the underlying geophysical hypotheses directly against seismic observations. It is based solely on forward modelling and warrants a physically correct treatment of the seismic wave equation without theoretical approximations. Fully synthetic 3-D seismic wavefields are computed using a spectral element method for 3-D seismic structures derived from mantle flow models. This way, synthetic seismograms are generated independent of any seismic observations. Furthermore, through the wavefield simulations, it is possible to relate the magnitude of lateral temperature variations in the dynamic flow simulations directly to body-wave traveltime residuals. The

  3. A Mouse Model That Reproduces the Developmental Pathways and Site Specificity of the Cancers Associated With the Human BRCA1 Mutation Carrier State

    Directory of Open Access Journals (Sweden)

    Ying Liu

    2015-10-01

    Full Text Available Predisposition to breast and extrauterine Müllerian carcinomas in BRCA1 mutation carriers is due to a combination of cell-autonomous consequences of BRCA1 inactivation on cell cycle homeostasis superimposed on cell-nonautonomous hormonal factors magnified by the effects of BRCA1 mutations on hormonal changes associated with the menstrual cycle. We used the Müllerian inhibiting substance type 2 receptor (Mis2r) promoter and a truncated form of the Follicle stimulating hormone receptor (Fshr) promoter to introduce conditional knockouts of Brca1 and p53 not only in mouse mammary and Müllerian epithelia, but also in organs that control the estrous cycle. Sixty percent of the double mutant mice developed invasive Müllerian and mammary carcinomas. Mice carrying heterozygous mutations in Brca1 and p53 also developed invasive tumors, albeit at a lesser (30%) rate, in which the wild type alleles were no longer present due to loss of heterozygosity. While mice carrying heterozygous mutations in both genes developed mammary tumors, none of the mice carrying only a heterozygous p53 mutation developed such tumors (P < 0.0001), attesting to a role for Brca1 mutations in tumor development. This mouse model is attractive to investigate cell-nonautonomous mechanisms associated with cancer predisposition in BRCA1 mutation carriers and to investigate the merit of chemo-preventive drugs targeting such mechanisms.

  4. A quantitative model for dermal infection and oedema in BALB/c mice pinna.

    Science.gov (United States)

    Marino-Marmolejo, Erika Nahomy; Flores-Hernández, Flor Yohana; Flores-Valdez, Mario Alberto; García-Morales, Luis Felipe; González-Villegas, Ana Cecilia; Bravo-Madrigal, Jorge

    2016-12-12

    The pharmaceutical industry demands innovation for developing new molecules to improve the effectiveness and safety of therapeutic medicines. Preclinical assays are the first tests performed to evaluate new therapeutic molecules, using animal models. Several models currently exist for evaluating treatments for dermal oedema or infection; however, inflammation is most commonly induced with chemical substances rather than infectious agents, and such models require histological techniques and the interpretation of pathologies to verify the effectiveness of the therapy under assessment. This work focused on developing a quantitative model of infection and oedema in the mouse pinna. The infection was achieved with a strain of Streptococcus pyogenes inoculated into an injury induced at the auricle of BALB/c mice; the induced oedema was recorded by measuring ear thickness with a digital micrometer, and histopathological analysis was performed to verify the damage. The presence of S. pyogenes at the infection site was determined every day by culture. Our results showed that S. pyogenes can infect the mouse pinna and be recovered from the infected site for at least 4 days, and that it induces a larger oedema than the PBS-treated control for at least 7 days; the results were validated with an antibacterial and anti-inflammatory formulation of ciprofloxacin and hydrocortisone. The model we developed emulates a dermal infection and allowed us to objectively evaluate the increase or decrease of the oedema by measuring the thickness of the ear pinna, and to determine the presence of the pathogen at the infection site. We consider that the model could be useful for assessing new anti-inflammatory or antibacterial therapies for dermal infections.

  5. Validation of quantitative structure-activity relationship (QSAR) model for photosensitizer activity prediction.

    Science.gov (United States)

    Frimayanti, Neni; Yam, Mun Li; Lee, Hong Boon; Othman, Rozana; Zain, Sharifuddin M; Rahman, Noorsaadah Abd

    2011-01-01

    Photodynamic therapy is a relatively new treatment method for cancer which utilizes a combination of oxygen, a photosensitizer and light to generate reactive singlet oxygen that eradicates tumors via direct cell-killing, vasculature damage and engagement of the immune system. Most of the photosensitizers that are in clinical and pre-clinical assessment, or those already approved for clinical use, are mainly based on cyclic tetrapyrroles. In an attempt to discover new effective photosensitizers, we report the use of the quantitative structure-activity relationship (QSAR) method to develop a model that could correlate the structural features of cyclic tetrapyrrole-based compounds with their photodynamic therapy (PDT) activity. In this study, a set of 36 porphyrin derivatives was used in the model development, with 24 of these compounds in the training set and the remaining 12 compounds in the test set. The development of the QSAR model involved the use of the multiple linear regression analysis (MLRA) method. Based on the method, r2, r2(CV) and r2(prediction) values of 0.87, 0.71 and 0.70 were obtained. The QSAR model was also employed to predict the experimental compounds in an external test set comprising 20 porphyrin-based compounds with experimental IC50 values ranging from 0.39 μM to 7.04 μM. The model showed good correlative and predictive ability, with a predictive correlation coefficient (r2 prediction for the external test set) of 0.52. The developed QSAR model was used to discover some compounds in this external test set as new lead photosensitizers.
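
    As a rough illustration of the MLRA-based QSAR workflow described above, the sketch below fits a linear model on synthetic descriptor data and reports the fitted r2 together with a leave-one-out cross-validated r2(CV). The descriptors and activities are random placeholders, not the porphyrin data from the study.

```python
# Minimal sketch of an MLRA-based QSAR fit (synthetic stand-in data;
# the study used calculated structural descriptors for 36 porphyrins).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
n_train, n_desc = 24, 5                    # 24 training compounds, 5 descriptors
X = rng.normal(size=(n_train, n_desc))     # hypothetical descriptor values
w_true = np.array([1.2, -0.8, 0.5, 0.0, 0.3])
y = X @ w_true + rng.normal(scale=0.3, size=n_train)  # pIC50-like activities

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)                     # fitted r2

# Leave-one-out cross-validated r2, analogous to the reported r2(CV)
y_cv = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
r2_cv = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"r2 = {r2:.2f}, r2(CV) = {r2_cv:.2f}")
```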

  6. Validation of Quantitative Structure-Activity Relationship (QSAR) Model for Photosensitizer Activity Prediction

    Science.gov (United States)

    Frimayanti, Neni; Yam, Mun Li; Lee, Hong Boon; Othman, Rozana; Zain, Sharifuddin M.; Rahman, Noorsaadah Abd.

    2011-01-01

    Photodynamic therapy is a relatively new treatment method for cancer which utilizes a combination of oxygen, a photosensitizer and light to generate reactive singlet oxygen that eradicates tumors via direct cell-killing, vasculature damage and engagement of the immune system. Most of the photosensitizers that are in clinical and pre-clinical assessment, or those already approved for clinical use, are mainly based on cyclic tetrapyrroles. In an attempt to discover new effective photosensitizers, we report the use of the quantitative structure-activity relationship (QSAR) method to develop a model that could correlate the structural features of cyclic tetrapyrrole-based compounds with their photodynamic therapy (PDT) activity. In this study, a set of 36 porphyrin derivatives was used in the model development, with 24 of these compounds in the training set and the remaining 12 compounds in the test set. The development of the QSAR model involved the use of the multiple linear regression analysis (MLRA) method. Based on the method, r2, r2(CV) and r2(prediction) values of 0.87, 0.71 and 0.70 were obtained. The QSAR model was also employed to predict the experimental compounds in an external test set comprising 20 porphyrin-based compounds with experimental IC50 values ranging from 0.39 μM to 7.04 μM. The model showed good correlative and predictive ability, with a predictive correlation coefficient (r2 prediction for the external test set) of 0.52. The developed QSAR model was used to discover some compounds in this external test set as new lead photosensitizers. PMID:22272096

  7. Quantitative modelling of the degradation processes of cement grout. Project CEMMOD

    Energy Technology Data Exchange (ETDEWEB)

    Grandia, Fidel; Galindez, Juan-Manuel; Arcos, David; Molinero, Jorge (Amphos21 Consulting S.L., Barcelona (Spain))

    2010-05-15

    Grout cement is planned to be used in the sealing of water-conducting fractures in the deep geological storage of spent nuclear fuel waste. The integrity of such cementitious materials should be ensured over a time frame of decades to a hundred years at minimum. However, their durability must be quantified, since grout degradation may jeopardize the stability of other components in the repository due to the potential release of hyperalkaline plumes. Model prediction of cement alteration has been challenging in recent years, mainly due to the difficulty of reproducing the progressive change in composition of the Calcium-Silicate-Hydrate (CSH) compounds as the alteration proceeds. In general, the data obtained from laboratory experiments show a rather similar dependence between the pH of pore water and the Ca-Si ratio of the CSH phases: the Ca-Si ratio decreases as the CSH is progressively replaced by Si-enriched phases. An elegant and reasonable approach is the use of solid solution models, even keeping in mind that CSH phases are not crystalline solids but gels. An additional obstacle is the uncertainty in the initial composition of the grout to be considered in the calculations, because only the recipe of low-pH clinker is commonly provided by the manufacturer. The hydration process leads to the formation of new phases and, importantly, creates porosity. A number of solid solution models have been reported in the literature. Most of them assume a strongly non-ideal binary solid solution series to account for the observed changes in the Ca-Si ratios in CSH. However, it is very difficult to reproduce the degradation of the CSH over the whole range of Ca-Si compositions (commonly Ca/Si = 0.5-2.5) by considering only two end-members and fixed non-ideality parameters. Models with multiple non-ideal end-members, with interaction parameters as a function of the solid composition, can solve the problem, but these cannot be managed in the existing codes of reactive

  8. Using Modified Contour Deformable Model to Quantitatively Estimate Ultrasound Parameters for Osteoporosis Assessment

    Science.gov (United States)

    Chen, Yung-Fu; Du, Yi-Chun; Tsai, Yi-Ting; Chen, Tainsong

    Osteoporosis is a systemic skeletal disease characterized by low bone mass and micro-architectural deterioration of bone tissue, leading to bone fragility. Finding an effective method for prevention and early diagnosis of the disease is very important. Several parameters, including broadband ultrasound attenuation (BUA), speed of sound (SOS), and stiffness index (STI), have been used to measure the characteristics of bone tissues. In this paper, we propose a method, namely the modified contour deformable model (MCDM), based on the active contour model (ACM) and the active shape model (ASM), for automatically detecting the calcaneus contour in quantitative ultrasound (QUS) parametric images. The results show that the difference between the contours detected by the MCDM and the true boundary for the phantom is less than one pixel. By comparing the phantom ROIs, a significant relationship was found between contour mean and bone mineral density (BMD), with R = 0.99. The influence of selecting different ROI diameters (12, 14, 16 and 18 mm) and different region-selecting methods, including fixed region (ROI_fix), automatic circular region (ROI_cir) and calcaneal contour region (ROI_anat), was evaluated for the human subjects tested. Measurements with large ROI diameters, especially using the fixed region, result in high position errors (10-45%). The precision errors of the measured ultrasonic parameters for ROI_anat are smaller than those for ROI_fix and ROI_cir. In conclusion, ROI_anat provides more accurate measurement of ultrasonic parameters for the evaluation of osteoporosis and is useful for clinical application.

  9. Effect of arterial deprivation on growing femoral epiphysis: Quantitative magnetic resonance imaging using a piglet model

    Energy Technology Data Exchange (ETDEWEB)

    Cheon, Jung Eun; Yoo, Won Joon; Kim, In One; Kim, Woo Sun; Choi, Young Hun [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2015-06-15

    To investigate the usefulness of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and diffusion MRI for the evaluation of femoral head ischemia. Unilateral femoral head ischemia was induced by selective embolization of the medial circumflex femoral artery in 10 piglets. MRI was performed immediately (1 hour) after embolization and at 1, 2, and 4 weeks. Apparent diffusion coefficients (ADCs) were calculated for the femoral head. The estimated pharmacokinetic parameters (Kep and Ve from a two-compartment model) and semi-quantitative parameters, including peak enhancement, time-to-peak (TTP), and contrast washout, were evaluated. The epiphyseal ADC values of the ischemic hip decreased immediately (1 hour) after embolization. However, they increased rapidly at 1 week after embolization and remained elevated until 4 weeks after embolization. Perfusion MRI of ischemic hips showed decreased epiphyseal perfusion with decreased Kep immediately after embolization. Signal intensity-time curves showed delayed TTP with limited contrast washout immediately post-embolization. At 1-2 weeks after embolization, spontaneous reperfusion was observed in ischemic epiphyses. The changes in ADC (p = 0.043) and Kep (p = 0.043) between immediately (1 hour) after embolization and 1 week post-embolization were significant. Diffusion MRI and the pharmacokinetic model obtained from DCE-MRI are useful for depicting early changes of perfusion and tissue damage in this model of femoral head ischemia in skeletally immature piglets.

  10. A functional-structural model of rice linking quantitative genetic information with morphological development and physiological processes

    NARCIS (Netherlands)

    Xu, L.F.; Henke, M.; Zhu, J.; Kurth, W.; Buck-Sorlin, G.H.

    2011-01-01

    Background and Aims Although quantitative trait loci (QTL) analysis of yield-related traits for rice has developed rapidly, crop models using genotype information have been proposed only relatively recently. As a first step towards a generic genotype-phenotype model, we present here a

  11. Quantitative structure-activity relationship modeling of the toxicity of organothiophosphate pesticides to Daphnia magna and Cyprinus carpio

    NARCIS (Netherlands)

    Zvinavashe, E.; Du, T.; Griff, T.; Berg, van den J.H.J.; Soffers, A.E.M.F.; Vervoort, J.J.M.; Murk, A.J.; Rietjens, I.

    2009-01-01

    Within the REACH regulatory framework in the EU, quantitative structure-activity relationships (QSAR) models are expected to help reduce the number of animals used for experimental testing. The objective of this study was to develop QSAR models to describe the acute toxicity of organothiophosphate

  12. Analysis of Water Conflicts across Natural and Societal Boundaries: Integration of Quantitative Modeling and Qualitative Reasoning

    Science.gov (United States)

    Gao, Y.; Balaram, P.; Islam, S.

    2009-12-01

    , the knowledge generated from these studies cannot be easily generalized or transferred to other basins. Here, we present an approach that integrates quantitative and qualitative methods to study water issues and capture the contextual knowledge of water management, by combining the NSSs framework with an area of artificial intelligence called qualitative reasoning. Using the Apalachicola-Chattahoochee-Flint (ACF) River Basin dispute as an example, we demonstrate how quantitative modeling and qualitative reasoning can be integrated to examine the impact of over-abstraction of water from the river on the ecosystem and the role of governance in shaping the evolution of the ACF water dispute.

  13. Invasive growth of Saccharomyces cerevisiae depends on environmental triggers: a quantitative model.

    Science.gov (United States)

    Zupan, Jure; Raspor, Peter

    2010-04-01

    In this contribution, the influence of various physicochemical factors on Saccharomyces cerevisiae invasive growth is examined quantitatively. Agar-invasion assays are generally applied for in vitro studies of S. cerevisiae invasiveness, a phenomenon observed as a putative virulence trait in this yeast of growing clinical concern. However, the qualitative agar-invasion assays used until now strongly limit the feasibility and interpretation of analyses and therefore needed to be improved. Moreover, knowledge of the physiology of invasive growth under stress conditions related to the human alimentary tract and food is poor and should be expanded. For this purpose, the quantitative agar-invasion assay presented in our previous work was applied here to clarify, in greater detail, the significance of the stress factors controlling the adhesion and invasion of the yeast. Ten virulent and non-virulent S. cerevisiae strains were assayed at various temperatures, pH values, nutrient starvation levels, modified atmospheres, and different concentrations of NaCl, CaCl2 and preservatives. With the use of specific parameters, such as relative invasion, eight invasive growth models were hypothesized, enabling intelligible interpretation of the results. A strong preference for invasive growth (meaning high relative invasion) was observed when the strains were grown on nitrogen- and glucose-depleted media. A significant increase in invasion was also determined at temperatures typical of human fever (37-39°C). On the other hand, a strong repressive effect on invasion was found in the presence of salts, anoxia and some preservatives. Copyright 2010 John Wiley & Sons, Ltd.

  14. Evaluation of the agonist PET radioligand [¹¹C]GR103545 to image kappa opioid receptor in humans: kinetic model selection, test-retest reproducibility and receptor occupancy by the antagonist PF-04455242.

    Science.gov (United States)

    Naganawa, Mika; Jacobsen, Leslie K; Zheng, Ming-Qiang; Lin, Shu-Fei; Banerjee, Anindita; Byon, Wonkyung; Weinzimmer, David; Tomasi, Giampaolo; Nabulsi, Nabeel; Grimwood, Sarah; Badura, Lori L; Carson, Richard E; McCarthy, Timothy J; Huang, Yiyun

    2014-10-01

    Kappa opioid receptors (KOR) are implicated in several brain disorders. In this report, a first-in-human positron emission tomography (PET) study was conducted with the potent and selective KOR agonist tracer [¹¹C]GR103545 to determine an appropriate kinetic model for analysis of PET imaging data and to assess the test-retest reproducibility of model-derived binding parameters. The non-displaceable distribution volume (VND) was estimated from a blocking study with naltrexone. In addition, KOR occupancy of PF-04455242, a selective KOR antagonist that is active in preclinical models of depression, was also investigated. For determination of a kinetic model and evaluation of test-retest reproducibility, 11 subjects were scanned twice with [¹¹C]GR103545. Seven subjects were scanned before and 75 min after oral administration of naltrexone (150 mg). For the KOR occupancy study, six subjects were scanned at baseline and 1.5 h and 8 h after an oral dose of PF-04455242 (15 mg, n = 1 and 30 mg, n = 5). Metabolite-corrected arterial input functions were measured, and all scans were 150 min in duration. Regional time-activity curves (TACs) were analyzed with 1- and 2-tissue compartment models (1TC and 2TC) and the multilinear analysis (MA1) method to derive the regional volume of distribution (VT). Relative test-retest variability (TRV), absolute test-retest variability (aTRV) and the intra-class correlation coefficient (ICC) were calculated to assess test-retest reproducibility of regional VT. Occupancy plots were computed for the blocking studies to estimate occupancy and VND. The half maximal inhibitory concentration (IC50) of PF-04455242 was determined from occupancies and drug concentrations in plasma. The in vivo KD of [¹¹C]GR103545 was also estimated. Regional TACs were well described by the 2TC model and MA1. However, 2TC VT was sometimes estimated with high standard error; thus MA1 was the model of choice. Test-retest variability was ~15%, depending on the outcome measure. The blocking
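
    The occupancy and VND estimates in blocking studies of this kind are typically read off an occupancy (Lassen) plot; the sketch below illustrates that regression with invented regional VT values, followed by a crude one-site IC50 estimate from occupancy-concentration pairs. It is a generic illustration, not the authors' analysis code.

```python
# Occupancy ("Lassen") plot sketch: regress VT(baseline) - VT(blocked)
# on VT(baseline) across regions; slope = occupancy, x-intercept = VND.
# All numbers are illustrative.
import numpy as np

vt_base  = np.array([8.2, 6.9, 5.4, 4.1, 3.2])  # regional VT, baseline scan
vt_block = np.array([5.0, 4.3, 3.6, 2.9, 2.5])  # regional VT, post-drug scan

delta = vt_base - vt_block
occ, intercept = np.polyfit(vt_base, delta, 1)  # delta = occ*VT_base - occ*VND
vnd = -intercept / occ
print(f"occupancy = {occ:.2f}, VND = {vnd:.2f}")

# With occupancies measured at several plasma concentrations C, a one-site
# model occ = C / (C + IC50) gives a simple IC50 estimate:
C    = np.array([10.0, 30.0, 90.0])             # plasma concentration (a.u.)
occs = np.array([0.35, 0.55, 0.75])
ic50 = np.mean(C * (1 - occs) / occs)
print(f"IC50 ~ {ic50:.1f} (same units as C)")
```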

  15. Reproducibility in light microscopy: Maintenance, standards and SOPs.

    Science.gov (United States)

    Deagle, Rebecca C; Wee, Tse-Luen Erika; Brown, Claire M

    2017-08-01

    Light microscopy has grown to be a valuable asset in both the physical and life sciences. It is a highly quantitative method available in individual research laboratories and often centralized in core facilities. However, although quantitative microscopy is becoming a customary tool in research, it is rarely standardized. To achieve accurate quantitative microscopy data and reproducible results, three levels of standardization must be considered: (1) aspects of the microscope, (2) the sample, and (3) the detector. The accuracy of the data is only as reliable as the imaging system itself, imposing the need for routine standard performance testing. Depending on the task, some maintenance procedures should be performed once a month, some before each imaging session, and others annually. This text is intended as a resource for researchers to integrate with their own standard operating procedures to ensure the highest quality quantitative microscopy data. Copyright © 2017. Published by Elsevier Ltd.

  16. Two states are not enough: quantitative evaluation of the valence-bond intramolecular charge-transfer model and its use in predicting bond length alternation effects.

    Science.gov (United States)

    Jarowski, Peter D; Mo, Yirong

    2014-12-15

    The structural weights of the canonical resonance contributors used in the Two-state valence-bond charge-transfer model, neutral (N, R1) and ionic (VB-CT, R2), to the ground states and excited states of a series of linear dipolar intramolecular charge-transfer chromophores containing a buta-1,3-dien-1,4-diyl bridge have been computed by using the block-localized wavefunction (BLW) method at the B3LYP/6-311+G(d) level to provide the first quantitative assessment of this simple model. Ground- and excited-state analysis reveals surprisingly low ground-state structural weights for the VB-CT resonance form using either this Two-state model or an expanded Ten-state model. The VB-CT state is found to be more prominent in the excited state. Individual resonance forms were structurally optimized to understand the origins of the bond length alternation (BLA) of the bridging unit. Using a Wheland energy-based weighting scheme, the weighted average of the optimized bond lengths with the Two-state model was unable to reproduce the BLA features with values 0.04 to 0.02 Å too large compared to the fully delocalized (FD) structure (BLW: ca. -0.13 to -0.07 Å, FD: ca. -0.09 to -0.05 Å). Instead, an expanded Ten-state model fit the BLA values of the FD structure to within only 0.001 Å of FD. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Lower-order effects adjustment in quantitative traits model-based multifactor dimensionality reduction.

    Science.gov (United States)

    Mahachie John, Jestinah M; Cattaert, Tom; Lishout, François Van; Gusareva, Elena S; Steen, Kristel Van

    2012-01-01

    Identifying gene-gene interactions or gene-environment interactions in studies of human complex diseases remains a big challenge in genetic epidemiology. An additional challenge, often forgotten, is to account for important lower-order genetic effects, which may hamper the identification of genuine epistasis. If lower-order genetic effects contribute to the genetic variance of a trait, identified statistical interactions may simply be due to a signal boost of these effects. In this study, we restrict attention to quantitative traits and bi-allelic SNPs as genetic markers, and our interaction study focuses on 2-way SNP-SNP interactions. Via simulations, we assess the performance of different corrective measures for lower-order genetic effects in Model-Based Multifactor Dimensionality Reduction epistasis detection, using additive and co-dominant coding schemes. Performance is evaluated in terms of power and familywise error rate. Our simulations indicate that empirical power estimates are reduced with correction of lower-order effects, as are familywise error rates. Easy-to-use automatic SNP selection procedures, SNP selection based on "top" findings, or SNP selection based on a p-value criterion for interesting main effects result in reduced power but also almost zero false positive rates. Always accounting for main effects in the SNP-SNP pair under investigation during Model-Based Multifactor Dimensionality Reduction analysis adequately controls false positive epistasis findings. This is particularly true when adopting a co-dominant corrective coding scheme. In conclusion, automatic search procedures to identify lower-order effects to correct for during epistasis screening should be avoided. The same is true for procedures that adjust for lower-order effects prior to Model-Based Multifactor Dimensionality Reduction and involve using residuals as the new trait. We advocate using "on-the-fly" lower-order effects adjusting when screening for SNP-SNP interactions

  18. A probabilistic quantitative risk assessment model for the long-term work zone crashes.

    Science.gov (United States)

    Meng, Qiang; Weng, Jinxian; Qu, Xiaobo

    2010-11-01

    Work zones, especially long-term work zones, increase traffic conflicts and cause safety problems. Proper casualty risk assessment for a work zone is of importance for both traffic safety engineers and travelers. This paper develops a novel probabilistic quantitative risk assessment (QRA) model to evaluate casualty risk, combining the frequency and consequence of all accident scenarios triggered by long-term work zone crashes. The casualty risk is measured by individual risk and societal risk: individual risk can be interpreted as the frequency of a driver/passenger being killed or injured, and societal risk describes the relation between frequency and the number of casualties. The proposed probabilistic QRA model consists of the estimation of work zone crash frequency, an event tree, and consequence estimation models. There are seven intermediate events in the event tree: age (A), crash unit (CU), vehicle type (VT), alcohol (AL), light condition (LC), crash type (CT) and severity (S). Since the estimated probability of some intermediate events may carry large uncertainty, such a probability is characterized by a random variable. The consequence estimation model takes into account the combined effects of speed and emergency medical service response time (ERT) on the consequence of a work zone crash. Finally, a numerical example based on Southeast Michigan work zone crash data is carried out. The numerical results show a 62% decrease in individual fatality risk and a 44% reduction in individual injury risk if the mean travel speed is slowed down by 20%, and a 5% reduction in individual fatality risk and a 0.05% reduction in individual injury risk if ERT is reduced by 20%. In other words, slowing down speed is more effective than reducing ERT in mitigating casualty risk. 2010 Elsevier Ltd. All rights reserved.
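
    The event-tree calculation can be pictured as a small Monte Carlo simulation: uncertain branch probabilities are drawn from distributions and multiplied along each path to a casualty outcome. The sketch below mocks up only two of the seven intermediate events, with entirely hypothetical distributions.

```python
# Monte Carlo sketch of an event-tree risk estimate: uncertain crash
# frequency and branch probabilities propagate to a fatality-risk
# distribution. Distributions and values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
crash_freq = rng.gamma(shape=4.0, scale=2.5, size=n)  # crashes/year (uncertain)
p_severe = rng.beta(2, 18, size=n)                    # P(severe | crash), uncertain
p_fatal_given_severe = 0.12                           # fixed branch probability

fatality_risk = crash_freq * p_severe * p_fatal_given_severe
print(f"mean risk = {fatality_risk.mean():.3f} fatalities/year")
print(f"95th percentile = {np.percentile(fatality_risk, 95):.3f}")
```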

  19. Rock physics models for constraining quantitative interpretation of ultrasonic data for biofilm growth and development

    Science.gov (United States)

    Alhadhrami, Fathiya Mohammed

    This study examines the use of rock physics modeling for quantitative interpretation of seismic data in the context of microbial growth and biofilm formation in unconsolidated sediment. The impetus for this research comes from geophysical experiments by Davis et al. (2010) and Kwon and Ajo-Franklin (2012). These studies observed that microbial growth has a small effect on P-wave velocities (VP) but a large effect on seismic amplitudes, and speculated that the amplitude variations were due to a combination of rock mechanical changes from the accumulation of microbial-growth-related features such as biofilms. A more definite conclusion can be drawn by developing rock physics models that connect rock properties to seismic amplitudes. The primary objective of this work is to provide an explanation for the high amplitude attenuation due to biofilm growth. The results suggest that biofilm formation in the Davis et al. (2010) experiment exhibits two growth styles: a loadbearing style, where biofilm behaves like an additional mineral grain, and a non-loadbearing mode, where the biofilm grows into the pore spaces. In the loadbearing mode, the biofilms contribute to the stiffness of the sediments; we refer to this style as "filler." In the non-loadbearing mode, the biofilms contribute only to a change in the density of the sediments without affecting their strength; we refer to this style of microbial growth as "mushroom." Both growth styles appear to change permeability more than the moduli or the density. As a result, while VP remains relatively unchanged, the amplitudes can change significantly depending on biofilm saturation. Interpreting seismic data from biofilm growth in terms of rock physics models provides greater insight into sediment-fluid interactions. The models in turn can be used to understand microbially enhanced oil recovery and to assist in solving environmental issues such as creating bio

  20. Inference of quantitative models of bacterial promoters from time-series reporter gene data.

    Science.gov (United States)

    Stefan, Diana; Pinel, Corinne; Pinhal, Stéphane; Cinquemani, Eugenio; Geiselmann, Johannes; de Jong, Hidde

    2015-01-01

    The inference of regulatory interactions and quantitative models of gene regulation from time-series transcriptomics data has been extensively studied and applied to a range of problems in drug discovery, cancer research, and biotechnology. The application of existing methods is commonly based on implicit assumptions about the biological processes under study. First, the measurements of mRNA abundance obtained in transcriptomics experiments are taken to be representative of protein concentrations. Second, the observed changes in gene expression are assumed to be solely due to transcription factors and other specific regulators, while changes in the activity of the gene expression machinery and other global physiological effects are neglected. While convenient in practice, these assumptions are often not valid and bias the reverse engineering process. Here we systematically investigate, using a combination of models and experiments, the importance of this bias and possible corrections. We measure in real time and in vivo the activity of genes involved in the FliA-FlgM module of the E. coli motility network. From these data, we estimate protein concentrations and global physiological effects by means of kinetic models of gene expression. Our results indicate that correcting for the bias of commonly made assumptions improves the quality of the models inferred from the data. Moreover, we show by simulation that these improvements are expected to be even stronger for systems in which protein concentrations have longer half-lives and the activity of the gene expression machinery varies more strongly across conditions than in the FliA-FlgM module. The approach proposed in this study is broadly applicable when using time-series transcriptome data to learn about the structure and dynamics of regulatory networks. In the case of the FliA-FlgM module, our results demonstrate the importance of global physiological effects and the active regulation of FliA and FlgM half-lives for
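
    The correction step can be pictured with a toy kinetic model: given a measured promoter activity f(t), the protein concentration p obeys dp/dt = k f(t) - gamma p. The sketch below, with assumed rates and a synthetic f(t) rather than the authors' model, shows how a long protein half-life delays and smooths the expression signal.

```python
# Toy reconstruction of a protein profile from promoter activity via
# dp/dt = k*f(t) - gamma*p (assumed kinetics, synthetic input signal).
import numpy as np
from scipy.integrate import odeint

t = np.linspace(0, 300, 301)                 # time (min)
f = 1.0 + 0.8 * np.sin(2 * np.pi * t / 150)  # hypothetical promoter activity

k, gamma = 0.05, 0.01                        # synthesis, degradation rates (1/min)

def dpdt(p, ti):
    return k * np.interp(ti, t, f) - gamma * p

p = odeint(dpdt, 0.0, t).ravel()
# The protein peak lags the promoter-activity peak by roughly 1/gamma,
# i.e. slow turnover low-pass filters the expression dynamics.
print(f"peak of f: t = {t[np.argmax(f)]:.0f} min, peak of p: t = {t[np.argmax(p)]:.0f} min")
```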

  1. Generating quantitative models describing the sequence specificity of biological processes with the stabilized matrix method

    Directory of Open Access Journals (Sweden)

    Sette Alessandro

    2005-05-01

    Full Text Available Abstract Background Many processes in molecular biology involve the recognition of short sequences of nucleic or amino acids, such as the binding of immunogenic peptides to major histocompatibility complex (MHC) molecules. From experimental data, a model of the sequence specificity of these processes can be constructed, such as a sequence motif, a scoring matrix or an artificial neural network. The purpose of these models is two-fold. First, they can provide a summary of experimental results, allowing for a deeper understanding of the mechanisms involved in sequence recognition. Second, such models can be used to predict the experimental outcome for yet untested sequences. In the past we reported the development of a method to generate such models called the Stabilized Matrix Method (SMM). This method has been successfully applied to predicting peptide binding to MHC molecules, peptide transport by the transporter associated with antigen presentation (TAP) and proteasomal cleavage of protein sequences. Results Herein we report the implementation of the SMM algorithm as a publicly available software package. Specific features determining the type of problems the method is most appropriate for are discussed. Advantageous features of the package are: (1) the output generated is easy to interpret, (2) input and output are both quantitative, (3) specific computational strategies to handle experimental noise are built in, (4) the algorithm is designed to effectively handle bounded experimental data, (5) experimental data from randomized peptide libraries and conventional peptides can easily be combined, and (6) it is possible to incorporate pair interactions between positions of a sequence. Conclusion Making the SMM method publicly available enables bioinformaticians and experimental biologists to easily access it, to compare its performance to other prediction methods, and to extend it to other applications.
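
    In spirit, a stabilized scoring matrix is a regularized linear model over position-specific residue indicators. The sketch below is a loose analogue using ridge regression on one-hot-encoded 9-mers, not the SMM package itself; the peptides and affinities are randomly generated placeholders.

```python
# Ridge-regularized position-specific scoring matrix, as a loose analogue
# of the stabilized-matrix idea (synthetic peptides and affinities).
import numpy as np
from sklearn.linear_model import Ridge

AA = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(2)
peptides = ["".join(rng.choice(list(AA), size=9)) for _ in range(200)]
affinity = rng.normal(size=200)             # placeholder binding measurements

def one_hot(pep):
    x = np.zeros(len(pep) * len(AA))
    for i, aa in enumerate(pep):
        x[i * len(AA) + AA.index(aa)] = 1.0
    return x

X = np.array([one_hot(p) for p in peptides])
model = Ridge(alpha=10.0).fit(X, affinity)  # the penalty "stabilizes" the fit
matrix = model.coef_.reshape(9, len(AA))    # positions x amino acids
print(matrix.shape)                         # (9, 20) scoring matrix
```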

  2. How plants manage food reserves at night: quantitative models and open questions

    Directory of Open Access Journals (Sweden)

    Antonio eScialdone

    2015-03-01

    Full Text Available In order to cope with night-time darkness, plants during the day allocate part of their photosynthate for storage, often as starch. This stored reserve is then degraded at night to sustain metabolism and growth. However, night-time starch degradation must be tightly controlled, as over-rapid turnover results in premature depletion of starch before dawn, leading to starvation. Recent experiments in Arabidopsis have shown that starch degradation proceeds at a constant rate during the night and is set such that starch reserves are exhausted almost precisely at dawn. Intriguingly, this pattern is robust with the degradation rate being adjusted to compensate for unexpected changes in the time of darkness onset. While a fundamental role for the circadian clock is well established, the underlying mechanisms controlling starch degradation remain poorly characterized. Here, we discuss recent quantitative models that have been proposed to explain how plants can compute the appropriate starch degradation rate, a process that requires an effective arithmetic division calculation. We review experimental confirmation of the models, and describe aspects that require further investigation. Overall, the process of night-time starch degradation necessitates a fundamental metabolic role for the circadian clock and, more generally, highlights how cells process information in order to optimally manage their resources.

  3. How plants manage food reserves at night: quantitative models and open questions.

    Science.gov (United States)

    Scialdone, Antonio; Howard, Martin

    2015-01-01

    In order to cope with night-time darkness, plants during the day allocate part of their photosynthate for storage, often as starch. This stored reserve is then degraded at night to sustain metabolism and growth. However, night-time starch degradation must be tightly controlled, as over-rapid turnover results in premature depletion of starch before dawn, leading to starvation. Recent experiments in Arabidopsis have shown that starch degradation proceeds at a constant rate during the night and is set such that starch reserves are exhausted almost precisely at dawn. Intriguingly, this pattern is robust with the degradation rate being adjusted to compensate for unexpected changes in the time of darkness onset. While a fundamental role for the circadian clock is well-established, the underlying mechanisms controlling starch degradation remain poorly characterized. Here, we discuss recent quantitative models that have been proposed to explain how plants can compute the appropriate starch degradation rate, a process that requires an effective arithmetic division calculation. We review experimental confirmation of the models, and describe aspects that require further investigation. Overall, the process of night-time starch degradation necessitates a fundamental metabolic role for the circadian clock and, more generally, highlights how cells process information in order to optimally manage their resources.
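
    The "arithmetic division" at the heart of these models is easy to state: if S is the starch content at dusk and T the anticipated night length, degrading at the constant rate r = S/T exhausts the reserve exactly at dawn. A toy calculation:

```python
# Constant-rate starch degradation set by the division r = S/T,
# so reserves reach zero precisely at dawn (illustrative numbers).
S_dusk = 100.0        # starch at onset of darkness (arbitrary units)
T_night = 12.0        # anticipated hours of darkness

r = S_dusk / T_night  # the division the plant must effectively compute
for hour in range(int(T_night) + 1):
    print(f"t = {hour:2d} h, starch = {S_dusk - r * hour:6.2f}")
```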

  4. An ex vivo model to quantitatively analyze cell migration in tissue.

    Science.gov (United States)

    O'Leary, Conor J; Weston, Mikail; McDermott, Kieran W

    2018-01-01

    Within the developing central nervous system, the ability of cells to migrate throughout the tissue parenchyma to reach their target destination and undergo terminal differentiation is vital to normal central nervous system (CNS) development. To develop novel therapies to treat the injured CNS, it is essential that the migratory behavior of cell populations be understood. Many studies have examined the ability of individual neurons to migrate through the developing CNS, describing specific modes of migration including locomotion and somal translocation. Few studies have investigated the mass migration of large populations of neural progenitors, particularly in the developing spinal cord. Here, we describe a method to robustly analyze large numbers of migrating cells using a co-culture assay. The ex vivo tissue model promotes the survival and differentiation of co-cultured progenitor cells. Using this assay, we demonstrate that migrating neuroepithelial progenitor cells display region-specific migration patterns within the dorsal and ventral spinal cord at defined developmental time points. The technique described here is a viable ex vivo model for quantitatively analyzing cell migration and differentiation, and we demonstrate its ability to detect changes in cell migration within distinct tissue regions across tissue samples. Developmental Dynamics 247:201-211, 2018. © 2017 Wiley Periodicals, Inc.

  5. Probabilistic Quantitative Precipitation Forecasting over East China using Bayesian Model Averaging

    Science.gov (United States)

    Yang, Ai; Yuan, Huiling

    2014-05-01

    The Bayesian model averaging (BMA) method is a post-processing method that weights the predictive probability density functions (PDFs) of individual ensemble members. This study investigates the BMA method for calibrating quantitative precipitation forecasts (QPFs) from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) database. The QPFs over East Asia during summer (June-August) 2008-2011 are generated from six operational ensemble prediction systems (EPSs): ECMWF, UKMO, NCEP, CMC, JMA and CMA, as well as multi-center ensembles of their combinations. The satellite-based precipitation estimate product TRMM 3B42 V7 is used as the verification dataset. In the BMA post-processing of precipitation forecasts, the PDF matching method is first applied to bias-correct systematic errors in each forecast member, by adjusting the PDFs of the forecasts to match the PDFs of the observations. Next, a logistic regression and a two-parameter gamma distribution are used to fit the probability of rainfall occurrence and the precipitation distribution. Through these two steps, the BMA post-processing systematically bias-corrects the ensemble forecasts. The 60-70% cumulative distribution function (CDF) predictions estimate moderate precipitation well compared to the raw ensemble mean, while the 90% upper boundary of the BMA CDF predictions can be set as a threshold for extreme precipitation alerts. In general, the BMA method is well suited to multi-center ensemble post-processing, improving probabilistic QPFs (PQPFs) with better ensemble spread and reliability. KEYWORDS: Bayesian model averaging (BMA); post-processing; ensemble forecast; TIGGE
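
    A compressed sketch of the BMA idea follows: the predictive density is a weighted mixture of member-conditioned kernels, with weights and a common shape parameter fit by maximum likelihood on training pairs. It collapses the two-part occurrence/amount model of the abstract to the rainy case only, and all data are synthetic.

```python
# BMA-style mixture calibration sketch: gamma kernels centered on each
# member forecast, weights and shape fit by maximum likelihood
# (synthetic data; the operational scheme also models rain occurrence).
import numpy as np
from scipy.stats import gamma
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, k = 500, 3
members = rng.gamma(2.0, 5.0, size=(n, k))           # member QPFs (mm)
obs = members.mean(axis=1) * rng.gamma(5.0, 0.2, n)  # synthetic verifying obs

def nll(params):
    w = np.exp(params[:k]); w /= w.sum()             # softmax mixture weights
    shape = np.exp(params[k])                        # common shape parameter
    dens = sum(w[j] * gamma.pdf(obs, shape, scale=members[:, j] / shape)
               for j in range(k))                    # kernel mean = member value
    return -np.log(dens + 1e-12).sum()

res = minimize(nll, x0=np.zeros(k + 1), method="Nelder-Mead")
w = np.exp(res.x[:k]); w /= w.sum()
print("BMA weights:", np.round(w, 3))
```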

  6. Overcoming pain thresholds with multilevel models-an example using quantitative sensory testing (QST) data.

    Science.gov (United States)

    Hirschfeld, Gerrit; Blankenburg, Markus R; Süß, Moritz; Zernikow, Boris

    2015-01-01

    The assessment of somatosensory function is a cornerstone of research and clinical practice in neurology. Recent initiatives have developed novel protocols for quantitative sensory testing (QST). Application of these methods has led to intriguing findings, such as the presence of lower pain thresholds in healthy children compared to healthy adolescents. In this article, we (re-)introduce the basic concepts of signal detection theory (SDT) as a method to investigate such differences in somatosensory function in detail. SDT describes participants' responses according to two parameters, sensitivity and response bias. Sensitivity refers to individuals' ability to discriminate between painful and non-painful stimulations; response bias refers to individuals' criterion for giving a "painful" response. We describe how multilevel models can be used to estimate these parameters and to overcome central critiques of these methods. As an example, we apply these methods to data from the mechanical pain sensitivity test of the QST protocol. The results show that adolescents are more sensitive to mechanical pain and contradict the idea that younger children simply use more lenient criteria to report pain. Overall, we hope that the wider use of multilevel modeling to describe somatosensory functioning may advance neurology research and practice.
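
    For readers unfamiliar with the two SDT parameters, the worked example below computes sensitivity (d') and response bias (criterion c) from hit and false-alarm counts; multilevel models estimate the same quantities while pooling information across participants. The counts are illustrative.

```python
# Signal detection theory: d' (discriminability) and criterion c
# (response bias) from hit and false-alarm rates (illustrative counts).
from scipy.stats import norm

hits, misses = 42, 8    # responses to painful stimuli
fas, crs = 12, 38       # responses to non-painful stimuli

hit_rate = hits / (hits + misses)
fa_rate = fas / (fas + crs)

d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```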

  7. DEVELOPMENT OF MODEL FOR QUANTITATIVE EVALUATION OF DYNAMICALLY STABLE FORMS OF RIVER CHANNELS

    Directory of Open Access Journals (Sweden)

    O. V. Zenkin

    2017-01-01

    systems. The determination of regularities in the development of bed forms and of quantitative relations between their parameters is based on modeling the "right" forms of the riverbed. The research has resulted in establishing and testing a methodology of simulation modeling that allows one to identify dynamically stable forms of the riverbed.

  8. Quantitative structure activity relationship model for predicting the depletion percentage of skin allergic chemical substances of glutathione

    International Nuclear Information System (INIS)

    Si Hongzong; Wang Tao; Zhang Kejun; Duan Yunbo; Yuan Shuping; Fu Aiping; Hu Zhide

    2007-01-01

    A quantitative model was developed with gene expression programming (GEP) to predict the depletion percentage of glutathione (DPG) by compounds. Each compound was represented by several calculated structural descriptors covering constitutional, topological, geometrical, electrostatic and quantum-chemical features. The GEP method produced a nonlinear, five-descriptor quantitative model with a mean error and a correlation coefficient of 10.52 and 0.94 for the training set, and 22.80 and 0.85 for the test set, respectively. The GEP-predicted results are in good agreement with the experimental ones, and better than those of the heuristic method

  9. Automated model-based quantitative analysis of phantoms with spherical inserts in FDG PET scans.

    Science.gov (United States)

    Ulrich, Ethan J; Sunderland, John J; Smith, Brian J; Mohiuddin, Imran; Parkhurst, Jessica; Plichta, Kristin A; Buatti, John M; Beichel, Reinhard R

    2018-01-01

    Quality control plays an increasingly important role in quantitative PET imaging and is typically performed using phantoms. The purpose of this work was to develop and validate a fully automated analysis method for two common PET/CT quality assurance phantoms: the NEMA NU-2 IQ and the SNMMI/CTN oncology phantom. The algorithm was designed to utilize only the PET scan, to enable the analysis of phantoms with thin-walled inserts. We introduce a model-based method for automated analysis of phantoms with spherical inserts. Models are first constructed for each type of phantom to be analyzed. A robust insert detection algorithm uses the model to locate all inserts inside the phantom. First, candidates for inserts are detected using a scale-space detection approach. Second, candidates are given an initial label using a score-based optimization algorithm. Third, a robust model-fitting step aligns the phantom model to the initial labeling and fixes incorrect labels. Finally, the detected insert locations are refined and measurements are taken for each insert and several background regions. In addition, an approach for automated selection of NEMA and CTN phantom models is presented. The method was evaluated on a diverse set of 15 NEMA and 20 CTN phantom PET/CT scans. NEMA phantoms were filled with radioactive tracer solution at a 9.7:1 activity ratio over background, and CTN phantoms were filled at 4:1 and 2:1 activity ratios over background. For quantitative evaluation, an independent reference standard was generated by two experts using PET/CT scans of the phantoms. In addition, the automated approach was compared against manual analysis of the PET phantom scans by four experts, which represents the current clinical standard approach. The automated analysis method successfully detected and measured all inserts in all test phantom scans. It is a deterministic algorithm (zero variability), and the insert detection RMS error (i.e., bias) was 0.97, 1.12, and 1.48 mm for phantom
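
    For intuition, the scale-space candidate-detection step can be approximated with a Laplacian-of-Gaussian filter tuned to the expected sphere size; in 3-D, a blob of radius r responds maximally near sigma = r/sqrt(3). The sketch below is a simplified stand-in for the paper's detection stage, not its implementation.

```python
# LoG-based sphere detection sketch on a synthetic PET-like volume:
# a single hot sphere is found as the peak of the filter response.
import numpy as np
from scipy.ndimage import gaussian_laplace

vol = np.zeros((64, 64, 64))
zz, yy, xx = np.mgrid[:64, :64, :64]
vol[(zz - 32) ** 2 + (yy - 20) ** 2 + (xx - 40) ** 2 < 6 ** 2] = 9.7  # hot sphere

sigma = 6 / np.sqrt(3)                    # scale matched to a radius-6 sphere
response = -gaussian_laplace(vol, sigma)  # bright blobs give positive peaks
zc, yc, xc = np.unravel_index(np.argmax(response), response.shape)
print(f"detected center = ({zc}, {yc}, {xc})")  # close to (32, 20, 40)
```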

  10. Interdiffusion of the aluminum magnesium system. Quantitative analysis and numerical model; Interdiffusion des Aluminium-Magnesium-Systems. Quantitative Analyse und numerische Modellierung

    Energy Technology Data Exchange (ETDEWEB)

    Seperant, Florian

    2012-03-21

    Aluminum coatings are a promising approach to protect magnesium alloys against corrosion and thereby make them accessible to a variety of technical applications. Thermal treatment enhances the adhesion of the aluminum coating on magnesium through interdiffusion. For a deeper understanding of the diffusion process at the interface, a quantitative description of the Al-Mg system is necessary. On the basis of diffusion experiments with infinite reservoirs of aluminum and magnesium, the interdiffusion coefficients of the intermetallic phases of the Al-Mg system are calculated with the Sauer-Freise method for the first time. To resolve contradictions in the literature concerning the intrinsic diffusion coefficients, the possibility of a bifurcation of the Kirkendall plane is considered. Furthermore, a physico-chemical description of interdiffusion is provided to interpret the observed phase transitions. The developed numerical model is based on a temporally varied discretization of the space coordinate and exhibits excellent quantitative agreement with the experimentally measured concentration profiles, confirming the validity of the obtained diffusion coefficients. Moreover, the Kirkendall shift in the Al-Mg system is simulated for the first time. Systems with thin aluminum coatings on magnesium also exhibit a good correlation between simulated and experimental concentration profiles; thus, the diffusion coefficients are also valid for Al-coated systems. Hence, it is possible to derive parameters for a thermal treatment by simulation, resulting in an optimized modification of the magnesium surface for technical applications.
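
    A minimal numerical analogue of such a diffusion-couple simulation is an explicit finite-difference solution of Fick's second law. The sketch below uses a constant interdiffusion coefficient on a fixed grid, whereas the thesis model handles concentration-dependent coefficients with an adaptive discretization.

```python
# Explicit finite-difference sketch of a 1-D Al/Mg diffusion couple with
# a constant interdiffusion coefficient (illustrative parameters).
import numpy as np

nx, nt = 200, 20_000
dx, dt, D = 1e-6, 1e-3, 1e-13  # m, s, m^2/s; stable since D*dt/dx^2 << 0.5
c = np.zeros(nx)
c[:nx // 2] = 1.0              # Al mole fraction: 1 on the left, 0 on the right

for _ in range(nt):
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])

print(f"concentration at the interface = {c[nx // 2]:.3f}")  # tends toward 0.5
```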

  11. A quantitative model of intracellular growth of Legionella pneumophila in Acanthamoeba castellanii.

    Science.gov (United States)

    Moffat, J F; Tompkins, L S

    1992-01-01

    A model of intracellular growth for Legionella pneumophila in Acanthamoeba castellanii has been developed and provides a quantitative measure of survival and replication after entry. In this model, Acanthamoeba monolayers were incubated with bacteria in tissue culture plates under nutrient-limiting conditions. Gentamicin was used to kill extracellular bacteria following the period of incubation, and the number of intracellular bacteria was determined following lysis of amebae. Intracellular growth of virulent L. pneumophila and other wild-type Legionella species was observed when the assay was performed at 37 degrees C. At room temperature, none of the Legionella strains tested grew intracellularly, while an avirulent L. pneumophila strain was unable to replicate in this assay at either temperature. The effect of nutrient limitation on A. castellanii during the assay prevented multiplication of the amebae and increased the level of infection by Legionella spp. The level of infection of the amebae was directly proportional to the multiplicity of infection with bacteria; at an inoculum of 1.03 x 10(7) bacteria added to wells containing 1.10 x 10(5) amebae (multiplicity of infection of 100), approximately 4.4% of A. castellanii cells became infected. Cytochalasin D reduced the uptake of bacteria by the amebae primarily by causing amebae to lift off the culture dish, reducing the number of target hosts; methylamine also reduced the level of initial infection, yet neither inhibitor was able to prevent intracellular replication of Legionella spp. Consequently, once the bacteria entered the cell, only lowered temperature could restrict replication. This model of intracellular growth provides a one-step growth curve and should be useful to study the molecular basis of the host-parasite interaction. PMID:1729191

  12. Quantitative vertebral morphometry based on parametric modeling of vertebral bodies in 3D.

    Science.gov (United States)

    Stern, D; Njagulj, V; Likar, B; Pernuš, F; Vrtovec, T

    2013-04-01

    Quantitative vertebral morphometry (QVM) was performed by parametric modeling of vertebral bodies in three dimensions (3D). Identification of vertebral fractures in two dimensions is a challenging task due to the projective nature of radiographic images and variability in vertebral shape. By generating detailed 3D anatomical images, computed tomography (CT) enables accurate measurement of vertebral deformations and fractures. A detailed 3D representation of the vertebral body shape is obtained by automatically aligning a parametric 3D model to vertebral bodies in CT images. The parameters of the 3D model describe clinically meaningful morphometric vertebral body features, and QVM in 3D is performed by comparing the parameters to their statistical values. Thresholds and parameters that best discriminate between normal and fractured vertebral bodies are determined by statistical classification analysis. The proposed QVM in 3D was applied to 454 normal and 228 fractured vertebral bodies, yielding a classification sensitivity of 92.5% at a 7.5% false-positive rate, with corresponding accuracy of 92.5% and precision of 86.1%. The 3D shape parameters that provided the best separation between normal and fractured vertebral bodies were the vertebral body height and the inclination and concavity of both vertebral endplates. The described QVM in 3D is able to efficiently and objectively discriminate between normal and fractured vertebral bodies and identify morphological cases (wedge, (bi)concavity, or crush) and grades (1, 2, or 3) of vertebral body fractures. It may therefore be valuable for diagnosing and predicting vertebral fractures in patients who are at risk of osteoporosis.

  13. A rodent model of traumatic stress induces lasting sleep and quantitative electroencephalographic disturbances.

    Science.gov (United States)

    Nedelcovych, Michael T; Gould, Robert W; Zhan, Xiaoyan; Bubser, Michael; Gong, Xuewen; Grannan, Michael; Thompson, Analisa T; Ivarsson, Magnus; Lindsley, Craig W; Conn, P Jeffrey; Jones, Carrie K

    2015-03-18

    Hyperarousal and sleep disturbances are common, debilitating symptoms of post-traumatic stress disorder (PTSD). PTSD patients also exhibit abnormalities in quantitative electroencephalography (qEEG) power spectra during wake as well as rapid eye movement (REM) and non-REM (NREM) sleep. Selective serotonin reuptake inhibitors (SSRIs), the first-line pharmacological treatment for PTSD, provide modest remediation of the hyperarousal symptoms in PTSD patients, but have little to no effect on the sleep-wake architecture deficits. Development of novel therapeutics for these sleep-wake architecture deficits is limited by a lack of relevant animal models. Thus, the present study investigated whether single prolonged stress (SPS), a rodent model of traumatic stress, induces PTSD-like sleep-wake and qEEG spectral power abnormalities that correlate with changes in central serotonin (5-HT) and neuropeptide Y (NPY) signaling in rats. Rats were implanted with telemetric recording devices to continuously measure EEG before and after SPS treatment. A second cohort of rats was used to measure SPS-induced changes in plasma corticosterone, 5-HT utilization, and NPY expression in brain regions that comprise the neural fear circuitry. SPS caused sustained dysregulation of NREM and REM sleep, accompanied by state-dependent alterations in qEEG power spectra indicative of cortical hyperarousal. These changes corresponded with acute induction of the corticosterone receptor co-chaperone FK506-binding protein 51 and delayed reductions in 5-HT utilization and NPY expression in the amygdala. SPS represents a preclinical model of PTSD-related sleep-wake and qEEG disturbances with underlying alterations in neurotransmitter systems known to modulate both sleep-wake architecture and the neural fear circuitry.

  14. Quantitative microleakage analysis of endodontic temporary filling materials using a glucose penetration model.

    Science.gov (United States)

    Kim, Sin-Young; Ahn, Jin-Soo; Yi, Young-Ah; Lee, Yoon; Hwang, Ji-Yun; Seo, Deog-Gyu

    2015-02-01

    The purpose of this study was to analyze the sealing ability of different temporary endodontic materials over a 6-week period using a glucose penetration model. Standardized holes were formed in 48 dentin discs from human premolars. The thicknesses of the specimens were distributed evenly among 2 mm, 3 mm and 4 mm. Prepared dentin specimens were randomly assigned to six groups (n = 7), and the holes in the dentin specimens were filled with two temporary filling materials per the manufacturers' instructions, as follows: Caviton (GC Corporation, Tokyo, Japan) 2 mm, 3 mm, 4 mm and IRM (Dentsply International Inc., Milford, DE) 2 mm, 3 mm, 4 mm. The remaining specimens were used as positive and negative controls, and all specimens underwent thermocycling (1,000 cycles; 5-55°C). The sealing ability of all samples was evaluated using the leakage model for glucose. The samples were analyzed with a spectrophotometer in a quantitative glucose microleakage test over a period of 6 weeks. For statistical inference, a mixed-effects analysis was applied to analyze serial measurements over time. The Caviton groups showed less glucose penetration than the IRM groups. The Caviton 4 mm group demonstrated relatively low glucose leakage over the test period. High glucose leakage was detected throughout the test period in all IRM groups. In the glucose penetration model, the glucose leakage level increased after 1 week in the Caviton 2 mm group and after 4 weeks in the Caviton 3 mm and 4 mm groups. Temporary filling with Caviton to at least 3 mm in thickness is necessary, and temporary filling periods should not exceed 4 weeks.

  15. Model developments for quantitative estimates of the benefits of the signals on nuclear power plant availability and economics

    International Nuclear Information System (INIS)

    Seong, Poong Hyun

    1993-01-01

    A novel framework for quantitative estimates of the benefits of signals on nuclear power plant availability and economics has been developed in this work. The models developed in this work quantify how the perfect signals affect the human operator's success in restoring the power plant to the desired state when it enters undesirable transients. Also, the models quantify the economic benefits of these perfect signals. The models have been applied to the condensate feedwater system of the nuclear power plant for demonstration. (Author)

  16. A quantitative exposure model simulating human norovirus transmission during preparation of deli sandwiches.

    Science.gov (United States)

    Stals, Ambroos; Jacxsens, Liesbeth; Baert, Leen; Van Coillie, Els; Uyttendaele, Mieke

    2015-03-02

    Human noroviruses (HuNoVs) are a major cause of foodborne gastroenteritis worldwide. They are often transmitted via infected and shedding food handlers manipulating foods such as deli sandwiches. The present study aimed to simulate HuNoV transmission during the preparation of deli sandwiches in a sandwich bar. A quantitative exposure model was developed by combining the GoldSim® and @Risk® software packages. Input data were collected from the scientific literature and from a two-week observational study performed at two sandwich bars. The model included three food handlers working during a three-hour shift on a shared working surface where deli sandwiches are prepared. The model consisted of three components. The first component simulated the preparation of the deli sandwiches and contained the HuNoV reservoirs, locations within the model allowing the accumulation of HuNoV, and the action of intervention measures. The second component covered the contamination sources: (1) initially HuNoV-contaminated lettuce used on the sandwiches and (2) HuNoV originating from a shedding food handler. The third component included four possible intervention measures to reduce HuNoV transmission: hand and surface disinfection during preparation of the sandwiches, hand gloving, and hand washing after a restroom visit. A single HuNoV-shedding food handler could cause mean levels of 43±18, 81±37 and 18±7 HuNoV particles on the deli sandwiches, hands and working surfaces, respectively. Introduction of contaminated lettuce as the only source of HuNoV resulted in the presence of 6.4±0.8 and 4.3±0.4 HuNoV on the food and hand reservoirs. The inclusion of hand and surface disinfection or hand gloving as a single intervention measure was not effective in the model, as only marginal reductions of HuNoV levels were noticeable in the different reservoirs. High compliance with hand washing after a restroom visit did reduce HuNoV presence substantially on all reservoirs. The
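
    The structure of such an exposure model can be sketched as a small stochastic simulation: virus moves from a shedding handler's hands to sandwiches at an assumed per-contact transfer rate, and hand washing is applied as a log-reduction. All parameter values below are illustrative, not those of the published model.

```python
# Stochastic sketch of handler-to-sandwich HuNoV transfer with an
# optional hand-washing intervention (all parameters assumed).
import numpy as np

rng = np.random.default_rng(4)
n_runs, n_sandwiches = 10_000, 60
transfer_rate = 0.10           # fraction of hand load transferred per shift
wash_log_reduction = 2.0       # assumed log10 reduction from hand washing

def mean_load(hand_washing):
    hands = rng.lognormal(mean=5.0, sigma=1.0, size=n_runs)  # HuNoV on hands
    if hand_washing:
        hands *= 10.0 ** -wash_log_reduction
    return (hands * transfer_rate / n_sandwiches).mean()     # per sandwich

print(f"mean HuNoV per sandwich, no washing:   {mean_load(False):.1f}")
print(f"mean HuNoV per sandwich, with washing: {mean_load(True):.3f}")
```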

  17. Effect of platform, reference material, and quantification model on enumeration of Enterococcus by quantitative PCR methods

    Science.gov (United States)

    Quantitative polymerase chain reaction (qPCR) is increasingly being used for the quantitative detection of fecal indicator bacteria in beach water. QPCR allows for same-day health warnings, and its application is being considered as an option for recreational water quality testi...
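
    One common quantification model for qPCR enumeration of this kind is the log-linear standard curve; the sketch below shows that model with invented calibration values (the record is truncated before the study's own models are named).

```python
# Minimal sketch of standard-curve qPCR quantification: Cq values of known
# standards are regressed against log10 concentration, and unknowns are read
# off the fitted line. All numbers here are illustrative.
import numpy as np

std_conc = np.array([1e1, 1e2, 1e3, 1e4, 1e5])     # cell equivalents/reaction
std_cq = np.array([33.1, 29.8, 26.4, 23.0, 19.7])  # measured Cq of standards

slope, intercept = np.polyfit(np.log10(std_conc), std_cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1              # 1.0 means 100% efficiency

def quantify(cq):
    """Invert the standard curve: Cq -> estimated concentration."""
    return 10 ** ((cq - intercept) / slope)

print(f"amplification efficiency: {efficiency:.2%}")
print(f"sample at Cq 25.0 -> {quantify(25.0):.0f} cell equivalents")
```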

  18. Reproducibility in Research: Systems, Infrastructure, Culture

    Directory of Open Access Journals (Sweden)

    Tom Crick

    2017-11-01

    Full Text Available The reproduction and replication of research results has become a major issue for a number of scientific disciplines. In computer science and related computational disciplines such as systems biology, the challenges closely revolve around the ability to implement (and exploit) novel algorithms and models. Taking a new approach from the literature and applying it to a new codebase frequently requires local knowledge missing from the published manuscripts and transient project websites. Alongside this issue, benchmarking and the lack of open, transparent and fair benchmark sets present another barrier to the verification and validation of claimed results. In this paper, we outline several recommendations to address these issues, driven by specific examples from a range of scientific domains. Based on these recommendations, we propose a high-level prototype open automated platform for scientific software development which effectively abstracts specific dependencies from the individual researcher and their workstation, allowing easy sharing and reproduction of results. This new e-infrastructure for reproducible computational science offers the potential to incentivise a culture change and drive the adoption of new techniques to improve the quality and efficiency – and thus reproducibility – of scientific exploration.

  19. Multiple-Strain Approach and Probabilistic Modeling of Consumer Habits in Quantitative Microbial Risk Assessment: A Quantitative Assessment of Exposure to Staphylococcal Enterotoxin A in Raw Milk.

    Science.gov (United States)

    Crotta, Matteo; Rizzi, Rita; Varisco, Giorgio; Daminelli, Paolo; Cunico, Elena Cosciani; Luini, Mario; Graber, Hans Ulrich; Paterlini, Franco; Guitian, Javier

    2016-03-01

    Quantitative microbial risk assessment (QMRA) models are extensively applied to inform management of a broad range of food safety risks. Inevitably, QMRA modeling involves an element of simplification of the biological process of interest. Two features that are frequently simplified or disregarded are the pathogenicity of multiple strains of a single pathogen and consumer behavior at the household level. In this study, we developed a QMRA model with a multiple-strain approach and a consumer phase module (CPM) based on uncertainty distributions fitted from field data. We modeled exposure to staphylococcal enterotoxin A in raw milk in Lombardy; a specific enterotoxin production module was thus included. The model is adaptable and could be used to assess the risk related to other pathogens in raw milk as well as other staphylococcal enterotoxins. The multiple-strain approach, implemented as a multinomial process, allowed the inclusion of variability and uncertainty with regard to pathogenicity at the bacterial level. Data from 301 questionnaires submitted to raw milk consumers were used to obtain uncertainty distributions for the CPM. The distributions were modeled to be easily updatable with further data or evidence. The sources of uncertainty due to the multiple-strain approach and the CPM were identified, and their impact on the output was assessed by comparing specific scenarios to the baseline. When the distributions reflecting the uncertainty in consumer behavior were fixed to the 95th percentile, the risk of exposure increased up to 160 times. This reflects the importance of taking into consideration the diversity of consumers' habits at the household level and the impact that the lack of knowledge about variables in the CPM can have on the final QMRA estimates. The multiple-strain approach lends itself to use in other food matrices besides raw milk and allows the model to better capture the complexity of the real world and to be capable of geographical
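
    As an illustration of the multinomial multiple-strain idea described above, the sketch below draws the strain composition of a contaminating population and keeps only the toxigenic fraction; the strain prevalences, toxigenicity flags and contamination level are invented, not values from the study.

```python
# Sketch: strain breakdown of each iteration's contaminating population is a
# multinomial draw; only enterotoxin-A-producing strains contribute to risk.
import numpy as np

rng = np.random.default_rng(42)
strain_prev = np.array([0.50, 0.30, 0.20])    # hypothetical strain prevalences
produces_sea = np.array([True, False, True])  # hypothetical toxigenicity flags

cells_per_ml = 1_000   # hypothetical contamination level
iterations = 10_000

draws = rng.multinomial(cells_per_ml, strain_prev, size=iterations)
toxigenic = draws[:, produces_sea].sum(axis=1)

print(f"mean toxigenic cells/mL: {toxigenic.mean():.0f}")
print(f"95th percentile: {np.percentile(toxigenic, 95):.0f}")
```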

  20. Assessing the toxic effects of ethylene glycol ethers using Quantitative Structure Toxicity Relationship models

    International Nuclear Information System (INIS)

    Ruiz, Patricia; Mumtaz, Moiz; Gombar, Vijay

    2011-01-01

    Experimental determination of toxicity profiles consumes a great deal of time, money, and other resources. Consequently, businesses, societies, and regulators strive for reliable alternatives such as Quantitative Structure Toxicity Relationship (QSTR) models to fill gaps in toxicity profiles of compounds of concern to human health. The use of glycol ethers and their health effects have recently attracted the attention of international organizations such as the World Health Organization (WHO). The board members of Concise International Chemical Assessment Documents (CICAD) recently identified inadequate testing as well as gaps in toxicity profiles of ethylene glycol mono-n-alkyl ethers (EGEs). The CICAD board requested the ATSDR Computational Toxicology and Methods Development Laboratory to conduct QSTR assessments of certain specific toxicity endpoints for these chemicals. In order to evaluate the potential health effects of EGEs, CICAD proposed a critical QSTR analysis of the mutagenicity, carcinogenicity, and developmental effects of EGEs and other selected chemicals. We report here results of the application of QSTRs to assess rodent carcinogenicity, mutagenicity, and developmental toxicity of four EGEs: 2-methoxyethanol, 2-ethoxyethanol, 2-propoxyethanol, and 2-butoxyethanol and their metabolites. Neither mutagenicity nor carcinogenicity is indicated for the parent compounds, but these compounds are predicted to be developmental toxicants. The predicted toxicity effects were subjected to reverse QSTR (rQSTR) analysis to identify structural attributes that may be the main drivers of the developmental toxicity potential of these compounds.

  1. Quantitative assessment of bone defect healing by multidetector CT in a pig model

    International Nuclear Information System (INIS)

    Riegger, Carolin; Kroepil, Patric; Lanzman, Rotem S.; Miese, Falk R.; Antoch, Gerald; Scherer, Axel; Jungbluth, Pascal; Hakimi, Mohssen; Wild, Michael; Hakimi, Ahmad R.

    2012-01-01

    To evaluate multidetector CT volumetry in the assessment of bone defect healing in comparison to histopathological findings in an animal model. In 16 mini-pigs, a circumscribed tibial bone defect was created. Multidetector CT (MDCT) of the tibia was performed on a 64-row scanner 42 days after the operation. The extent of bone healing was estimated quantitatively by MDCT volumetry using a commercially available software programme (syngo Volume, Siemens, Germany). The volume of the entire defect (including all pixels from -100 to 3,000 HU), the nonconsolidated areas (-100 to 500 HU), and areas of osseous consolidation (500 to 3,000 HU) were assessed and the extent of consolidation was calculated. Histomorphometry served as the reference standard. The extent of osseous consolidation in MDCT volumetry ranged from 19 to 92% (mean 65.4 ± 18.5%). There was a significant correlation between histologically visible newly formed bone and the extent of osseous consolidation on MDCT volumetry (r = 0.82, P < 0.0001). A significant negative correlation was detected between osseous consolidation on MDCT and histological areas of persisting defect (r = -0.9, P < 0.0001). MDCT volumetry is a promising tool for noninvasive monitoring of bone healing, showing excellent correlation with histomorphometry. (orig.)
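
    The HU windows given in the abstract translate directly into a threshold-based volumetry; the sketch below assumes a NumPy array of HU values restricted to the defect region and an isotropic voxel size (the original study used the commercial syngo Volume tool, so this is only a conceptual re-implementation).

```python
# Sketch of HU-threshold volumetry: classify voxels of the defect region by
# attenuation and report the consolidated fraction, per the abstract's windows.
import numpy as np

def consolidation_extent(hu: np.ndarray, voxel_mm3: float):
    defect = (hu >= -100) & (hu <= 3000)      # entire defect: -100..3,000 HU
    consolidated = (hu > 500) & (hu <= 3000)  # osseous consolidation: 500..3,000 HU
    total_vol = defect.sum() * voxel_mm3
    consol_vol = consolidated.sum() * voxel_mm3
    return consol_vol / total_vol, total_vol, consol_vol

# Stand-in data; in practice this would be the segmented defect from the CT.
hu = np.random.default_rng(0).integers(-100, 3000, size=(64, 64, 64))
extent, total, consol = consolidation_extent(hu, voxel_mm3=0.3 ** 3)
print(f"osseous consolidation: {extent:.1%} of {total:.0f} mm^3 defect")
```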

  2. Advances in the molecular modeling and quantitative structure-activity relationship-based design for antihistamines.

    Science.gov (United States)

    Galvez, Jorge; Galvez-Llompart, Maria; Zanni, Riccardo; Garcia-Domenech, Ramon

    2013-03-01

    Nowadays the use of antihistamines (AH) is increasing steadily. These drugs act on a variety of pathological conditions of the organism. A number of computer-aided (in silico) approaches have been developed to discover and develop novel AH drugs. Among these methods are those based on drug-receptor docking, thermodynamics, and quantitative structure-activity relationships (QSAR). This review collates the most recent advances in the use of computational approaches for the search and characterization of novel AH drugs. Within the QSAR methods, particular attention is paid to those based on molecular topology (MT) because of their demonstrated efficacy in discovering new drugs. Collateral topics, including docking studies, thermodynamic aspects and molecular modeling, are also covered to the extent that they complement QSAR-MT. Given the importance of AHs, the search for new drugs in this field has become imperative. In this regard, the use of QSAR methods based on MT, namely QSAR-MT, has proven to be a powerful tool when the goal is discovering new hit or lead structures. It has been shown that antihistaminic activity is complex and differs across the four known receptor types (H1 to H4), and that electronic, steric and physicochemical factors determine drug activity. These factors, along with purely structural ones, can be deduced from topological and topochemical information.

  3. Quantitative assessment of bone defect healing by multidetector CT in a pig model

    Energy Technology Data Exchange (ETDEWEB)

    Riegger, Carolin; Kroepil, Patric; Lanzman, Rotem S.; Miese, Falk R.; Antoch, Gerald; Scherer, Axel [University Duesseldorf, Medical Faculty, Department of Diagnostic and Interventional Radiology, Duesseldorf (Germany); Jungbluth, Pascal; Hakimi, Mohssen; Wild, Michael [University Duesseldorf, Medical Faculty, Department of Traumatology and Hand Surgery, Duesseldorf (Germany); Hakimi, Ahmad R. [University Duesseldorf, Medical Faculty, Department of Oral Surgery, Duesseldorf (Germany)]

    2012-05-15

    To evaluate multidetector CT volumetry in the assessment of bone defect healing in comparison to histopathological findings in an animal model. In 16 mini-pigs, a circumscribed tibial bone defect was created. Multidetector CT (MDCT) of the tibia was performed on a 64-row scanner 42 days after the operation. The extent of bone healing was estimated quantitatively by MDCT volumetry using a commercially available software programme (syngo Volume, Siemens, Germany). The volume of the entire defect (including all pixels from -100 to 3,000 HU), the nonconsolidated areas (-100 to 500 HU), and areas of osseous consolidation (500 to 3,000 HU) were assessed and the extent of consolidation was calculated. Histomorphometry served as the reference standard. The extent of osseous consolidation in MDCT volumetry ranged from 19 to 92% (mean 65.4 ± 18.5%). There was a significant correlation between histologically visible newly formed bone and the extent of osseous consolidation on MDCT volumetry (r = 0.82, P < 0.0001). A significant negative correlation was detected between osseous consolidation on MDCT and histological areas of persisting defect (r = -0.9, P < 0.0001). MDCT volumetry is a promising tool for noninvasive monitoring of bone healing, showing excellent correlation with histomorphometry. (orig.)

  4. Quantitative profiling of brain lipid raft proteome in a mouse model of fragile X syndrome.

    Science.gov (United States)

    Kalinowska, Magdalena; Castillo, Catherine; Francesconi, Anna

    2015-01-01

    Fragile X Syndrome, a leading cause of inherited intellectual disability and autism, arises from transcriptional silencing of the FMR1 gene encoding an RNA-binding protein, Fragile X Mental Retardation Protein (FMRP). FMRP can regulate the expression of approximately 4% of brain transcripts through its role in regulation of mRNA transport, stability and translation, thus providing a molecular rationale for its potential pleiotropic effects on neuronal and brain circuitry function. Several intracellular signaling pathways are dysregulated in the absence of FMRP suggesting that cellular deficits may be broad and could result in homeostatic changes. Lipid rafts are specialized regions of the plasma membrane, enriched in cholesterol and glycosphingolipids, involved in regulation of intracellular signaling. Among transcripts targeted by FMRP, a subset encodes proteins involved in lipid biosynthesis and homeostasis, dysregulation of which could affect the integrity and function of lipid rafts. Using a quantitative mass spectrometry-based approach we analyzed the lipid raft proteome of Fmr1 knockout mice, an animal model of Fragile X syndrome, and identified candidate proteins that are differentially represented in Fmr1 knockout mice lipid rafts. Furthermore, network analysis of these candidate proteins reveals connectivity between them and predicts functional connectivity with genes encoding components of myelin sheath, axonal processes and growth cones. Our findings provide insight to aid identification of molecular and cellular dysfunctions arising from Fmr1 silencing and for uncovering shared pathologies between Fragile X syndrome and other autism spectrum disorders.

  5. Evaluation of tongue motor biomechanics during swallowing—From oral feeding models to quantitative sensing methods

    Directory of Open Access Journals (Sweden)

    Takahiro Ono

    2009-09-01

    Full Text Available In today's aging society, dentists are more likely to treat patients with dysphagia and are required to select an optimal treatment option based on a complete understanding of the swallowing function. Although the tongue plays an important role in mastication and swallowing, as described in the human oral feeding models developed in the 1990s, the physiological significance of tongue function has been poorly understood because of the difficulty of monitoring and analyzing it. This review summarizes recent approaches used to evaluate tongue function during swallowing quantitatively, focusing mainly on modern sensing methods such as manofluorography, sensing probes, pressure sensors installed in palatal plates and ultrasound imaging of tongue movement. This series of studies provided a basic understanding of the kinematics and biomechanics of tongue movement during swallowing in normal subjects. There have been few studies, however, on pathological changes of tongue function in dysphagic patients. Further improvements in measurement devices and technologies, and additional multidisciplinary studies, are therefore needed to establish therapeutic evidence regarding tongue movement, as well as the best prosthodontic approach for dysphagia rehabilitation.

  6. Quantitative studies of animal colour constancy: using the chicken as model

    Science.gov (United States)

    2016-01-01

    Colour constancy is the capacity of visual systems to keep colour perception constant despite changes in the illumination spectrum. Colour constancy has been tested extensively in humans and has also been described in many animals. In humans, colour constancy is often studied quantitatively, but besides humans, this has only been done for the goldfish and the honeybee. In this study, we quantified colour constancy in the chicken by training the birds in a colour discrimination task and testing them in changed illumination spectra to find the largest illumination change in which they were able to remain colour-constant. We used the receptor noise limited model for animal colour vision to quantify the illumination changes, and found that colour constancy performance depended on the difference between the colours used in the discrimination task, the training procedure and the time the chickens were allowed to adapt to a new illumination before making a choice. We analysed literature data on goldfish and honeybee colour constancy with the same method and found that chickens can compensate for larger illumination changes than both. We suggest that future studies on colour constancy in non-human animals could use a similar approach to allow for comparison between species and populations. PMID:27170714
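
    The receptor noise limited model used above to quantify illumination changes can be sketched as follows; the sketch shows the trichromatic Vorobyev-Osorio formula with illustrative Weber fraction and cone abundances (chickens are in fact tetrachromats, so this is a deliberate simplification).

```python
# Sketch of the trichromatic receptor-noise-limited (RNL) colour-distance
# model: chromatic distance in just-noticeable differences (JNDs) between two
# stimuli, given their receptor quantum catches. Parameter values illustrative.
import math

def rnl_distance(qa, qb, weber=0.1, abundance=(1.0, 2.0, 2.0)):
    # Receptor signals: log of quantum-catch ratios per channel
    f = [math.log(a / b) for a, b in zip(qa, qb)]
    # Noise per channel: Weber fraction scaled by relative receptor abundance
    e = [weber / math.sqrt(n) for n in abundance]
    num = (e[0] ** 2 * (f[1] - f[2]) ** 2
           + e[1] ** 2 * (f[0] - f[2]) ** 2
           + e[2] ** 2 * (f[0] - f[1]) ** 2)
    den = (e[0] * e[1]) ** 2 + (e[1] * e[2]) ** 2 + (e[0] * e[2]) ** 2
    return math.sqrt(num / den)

print(rnl_distance((0.8, 0.6, 0.4), (0.7, 0.6, 0.5)))  # distance in JNDs
```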

  7. Synthesis, photodynamic activity, and quantitative structure-activity relationship modelling of a series of BODIPYs.

    Science.gov (United States)

    Caruso, Enrico; Gariboldi, Marzia; Sangion, Alessandro; Gramatica, Paola; Banfi, Stefano

    2017-02-01

    Here we report the synthesis of eleven new BODIPYs (14-24) characterized by the presence of an aromatic ring at the 8 (meso) position and of iodine atoms at the pyrrolic 2,6 positions. These molecules, together with twelve BODIPYs already reported by us (1-12), represent a large panel of BODIPYs bearing different atoms or groups as substituents on the aromatic moiety. Two physico-chemical features (¹O₂ generation rate and lipophilicity), which can play a fundamental role in the outcome as photosensitizers, have been studied. The in vitro photo-induced cell-killing efficacy of the 23 photosensitizers was studied on the SKOV3 cell line, treating the cells for 24 h in the dark and then irradiating for 2 h with a green LED device (fluence 25.2 J/cm²). The cell-killing efficacy was assessed with the MTT test and compared with that of the meso-unsubstituted compound (13). In order to understand the possible effect of the substituents, a predictive quantitative structure-activity relationship (QSAR) regression model, based on theoretical holistic molecular descriptors, was developed. The results clearly indicate that the presence of an aromatic ring is fundamental for an excellent photodynamic response, whereas the electronic effects and the position of the substituents on the aromatic ring do not influence the photodynamic efficacy. Copyright © 2017 Elsevier B.V. All rights reserved.
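
    As a generic illustration of the kind of descriptor-based QSAR regression described above, a minimal sketch with invented descriptor and activity values (the paper's actual descriptors and data are not reproduced here):

```python
# Sketch: photodynamic activity regressed on molecular descriptors with a
# cross-validated linear model. All values below are invented placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# rows = compounds; columns = hypothetical descriptors (e.g. logP, 1O2 rate)
X = np.array([[2.1, 0.45], [3.0, 0.52], [1.7, 0.33],
              [2.8, 0.60], [3.4, 0.41], [2.2, 0.55]])
y = np.array([62.0, 78.0, 41.0, 85.0, 58.0, 74.0])  # % cell killing (invented)

model = LinearRegression().fit(X, y)
q2 = cross_val_score(model, X, y, cv=3, scoring="r2").mean()
print("coefficients:", model.coef_, "cross-validated r2:", round(q2, 2))
```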

  8. Quantitative models of persistence and relapse from the perspective of behavioral momentum theory: Fits and misfits.

    Science.gov (United States)

    Nevin, John A; Craig, Andrew R; Cunningham, Paul J; Podlesnik, Christopher A; Shahan, Timothy A; Sweeney, Mary M

    2017-08-01

    We review quantitative accounts of behavioral momentum theory (BMT), its application to clinical treatment, and its extension to post-intervention relapse of target behavior. We suggest that its extension can account for relapse using reinstatement and renewal models, but that its application to resurgence is flawed both conceptually and in its failure to account for recent data. We propose that the enhanced persistence of target behavior engendered by alternative reinforcers is limited to their concurrent availability within a distinctive stimulus context. However, a failure to find effects of stimulus-correlated reinforcer rates in a Pavlovian-to-Instrumental Transfer (PIT) paradigm challenges even a straightforward Pavlovian account of alternative reinforcer effects. BMT has been valuable in understanding basic research findings and in guiding clinical applications and accounting for their data, but alternatives are needed that can account more effectively for resurgence while encompassing basic data on resistance to change as well as other forms of relapse. Copyright © 2017 Elsevier B.V. All rights reserved.
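
    The core BMT persistence equation these accounts build on, log₁₀(Bx/B₀) = -x / rᵃ (x the disrupter value, r the baseline reinforcement rate, a a sensitivity parameter), can be sketched as below; the parameter values are illustrative only.

```python
# Sketch of the basic behavioral-momentum persistence equation:
# proportion of baseline responding remaining under disruption x, given
# baseline reinforcement rate r and sensitivity a. Values illustrative.
def proportion_of_baseline(x, r, a=0.5):
    return 10 ** (-x / r ** a)

# A richer reinforcement context (r=120/hr) predicts more persistence than a
# leaner one (r=30/hr) under the same disruption:
for r in (30, 120):
    print(f"r={r:>3}/hr -> {proportion_of_baseline(x=2.0, r=r):.2f} of baseline")
```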

  9. Quantitative three-dimensional modeling of zeotile through discrete electron tomography.

    Science.gov (United States)

    Bals, Sara; Batenburg, K Joost; Liang, Duoduo; Lebedev, Oleg; Van Tendeloo, Gustaaf; Aerts, Alexander; Martens, Johan A; Kirschhock, Christine E A

    2009-04-08

    Discrete electron tomography is a new approach for three-dimensional reconstruction of nanoscale objects. The technique exploits prior knowledge of the object to be reconstructed, which results in an improvement of the quality of the reconstructions. Through the combination of conventional transmission electron microscopy and discrete electron tomography with a model-based approach, quantitative structure determination becomes possible. In the present work, this approach is used to unravel the building scheme of Zeotile-4, a silica material with two levels of structural order. The layer seq