WorldWideScience

Sample records for model quantitatively reproduces

  1. Quantitative Evaluation of Ionosphere Models for Reproducing Regional TEC During Geomagnetic Storms

    Science.gov (United States)

    Shim, J. S.; Kuznetsova, M.; Rastaetter, L.; Bilitza, D.; Codrescu, M.; Coster, A. J.; Emery, B.; Foster, B.; Fuller-Rowell, T. J.; Goncharenko, L. P.; Huba, J.; Mitchell, C. N.; Ridley, A. J.; Fedrizzi, M.; Scherliess, L.; Schunk, R. W.; Sojka, J. J.; Zhu, L.

    2015-12-01

    TEC (Total Electron Content) is one of the key parameters describing ionospheric variability, which influences the accuracy of navigation and communication systems. To assess the current TEC modeling capability of ionospheric models during geomagnetic storms and to establish a baseline against which future improvement can be compared, we quantified the models' performance by comparing modeled vertical TEC values with ground-based GPS TEC measurements and Multi-Instrument Data Analysis System (MIDAS) TEC. The comparison focused on the North American and European sectors during two selected storm events: the 2006 AGU storm (14-15 Dec. 2006) and the March 2013 storm (17-19 Mar. 2013). The ionospheric models used for this study range from empirical to physics-based models, including physics-based data assimilation models. We investigated spatial and temporal variations of TEC during the storms. In addition, we considered several parameters to quantify storm impacts on TEC: TEC changes compared to quiet time, the rate of TEC change, and the maximum increase/decrease during the storms. In this presentation, we focus on preliminary results of the comparison of the models' performance in reproducing the storm-time TEC variations using these parameters and skill scores. This study has been supported by the Community Coordinated Modeling Center (CCMC) at the Goddard Space Flight Center. Model outputs and observational data used for the study will be permanently posted at the CCMC website (http://ccmc.gsfc.nasa.gov) for the space science communities to use.
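
    As a rough illustration of the skill-score comparison described above, the sketch below computes an RMSE-based skill score for modeled vertical TEC against GPS-derived TEC relative to a quiet-day reference. The function names and the hourly TEC values are hypothetical and are not taken from the study.

```python
import numpy as np

def rmse(pred, obs):
    """Root-mean-square error between modeled and observed TEC (TECU)."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    mask = np.isfinite(pred) & np.isfinite(obs)
    return np.sqrt(np.mean((pred[mask] - obs[mask]) ** 2))

def skill_score(model_tec, obs_tec, reference_tec):
    """Skill relative to a reference prediction (e.g. quiet-time values).

    1.0 = perfect, 0.0 = no better than the reference, negative = worse.
    """
    return 1.0 - rmse(model_tec, obs_tec) / rmse(reference_tec, obs_tec)

# Hypothetical hourly vertical TEC (TECU) over part of a storm day
obs       = [12, 14, 20, 35, 42, 38, 30, 22]   # GPS-derived TEC
model     = [11, 15, 18, 30, 39, 40, 28, 21]   # ionospheric model output
quiet_day = [12, 13, 15, 18, 20, 19, 17, 15]   # preceding quiet day, used as reference

print(f"skill = {skill_score(model, obs, quiet_day):.2f}")
```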

  2. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data.

    Science.gov (United States)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-07

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models are fitted by an iterative fitting (IF) method that minimizes the least-squares difference between the measured and calculated time-activity values, an approach that can suffer from overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced that fully utilizes a historical reference database to build a moderate kinetic model which deals with noisy data directly rather than trying to smooth the noise in the image. Owing to the database, the presented method is also capable of automatically adjusting the models using a multi-threaded grid parameter search. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, striking a balance between fitting the historical data and fitting the unseen target curve. The machine learning based method provides a robust, reproducible, and user-independent solution for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  3. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data

    Science.gov (United States)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-01

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models are fitted by an iterative fitting (IF) method that minimizes the least-squares difference between the measured and calculated time-activity values, an approach that can suffer from overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced that fully utilizes a historical reference database to build a moderate kinetic model which deals with noisy data directly rather than trying to smooth the noise in the image. Owing to the database, the presented method is also capable of automatically adjusting the models using a multi-threaded grid parameter search. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, striking a balance between fitting the historical data and fitting the unseen target curve. The machine learning based method provides a robust, reproducible, and user-independent solution for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  4. Reproducible quantitative proteotype data matrices for systems biology.

    Science.gov (United States)

    Röst, Hannes L; Malmström, Lars; Aebersold, Ruedi

    2015-11-05

    Historically, many mass spectrometry-based proteomic studies have aimed at compiling an inventory of protein compounds present in a biological sample, with the long-term objective of creating a proteome map of a species. However, to answer fundamental questions about the behavior of biological systems at the protein level, accurate and unbiased quantitative data are required in addition to a list of all protein components. Fueled by advances in mass spectrometry, the proteomics field has thus recently shifted focus toward the reproducible quantification of proteins across a large number of biological samples. This provides the foundation to move away from pure enumeration of identified proteins toward quantitative matrices of many proteins measured across multiple samples. It is argued here that data matrices consisting of highly reproducible, quantitative, and unbiased proteomic measurements across a high number of conditions, referred to here as quantitative proteotype maps, will become the fundamental currency in the field and provide the starting point for downstream biological analysis. Such proteotype data matrices, for example, are generated by the measurement of large patient cohorts, time series, or multiple experimental perturbations. They are expected to have a large effect on systems biology and personalized medicine approaches that investigate the dynamic behavior of biological systems across multiple perturbations, time points, and individuals.

  5. The reproducibility of quantitative measurements in lumbar magnetic resonance imaging of children from the general population

    DEFF Research Database (Denmark)

    Masharawi, Y; Kjær, Per; Bendix, T

    2008-01-01

    STUDY DESIGN: Quantitative lumbar magnetic resonance imaging (MRI) measurements in children were taken twice and analyzed for intra- and intertester reproducibility. OBJECTIVE: To evaluate the reproducibility of a variety of lumbar quantitative measurements taken from MRIs of children from the ge...

  6. Direct, quantitative clinical assessment of hand function: usefulness and reproducibility.

    Science.gov (United States)

    Goodson, Alexander; McGregor, Alison H; Douglas, Jane; Taylor, Peter

    2007-05-01

    Methods of assessing functional impairment in arthritic hands include pain assessments and disability scoring scales, which are subjective, variable over time, and fail to take account of the patients' need to adapt to deformities. The aim of this study was to evaluate measures of functional strength and joint motion in the assessment of the rheumatoid (RA) and osteoarthritic (OA) hand. Ten control subjects, ten RA and ten OA patients were recruited for the study. All underwent pain and disability scoring and functional assessment of the hand using measures of pinch/grip strength and range of joint motion (ROM). Functional assessments including ROM analyses at interphalangeal (IP), metacarpophalangeal (MCP) and wrist joints along with pinch/grip strength clearly discriminated between patient groups (RA vs. OA MCP ROM, P < 0.0001), whereas pain and disability scales could not. In the RA group there were demonstrable relationships between ROM measurements and disability (R² = 0.31) as well as disease duration (R² = 0.37). Intra-patient measures of strength were robust whereas inter-patient comparisons showed variability. In conclusion, pinch/grip strength and ROM are clinically reproducible assessments that may more accurately reflect functional impairment associated with arthritis.

  7. Multi-laboratory assessment of reproducibility, qualitative and quantitative performance of SWATH-mass spectrometry.

    Science.gov (United States)

    Collins, Ben C; Hunter, Christie L; Liu, Yansheng; Schilling, Birgit; Rosenberger, George; Bader, Samuel L; Chan, Daniel W; Gibson, Bradford W; Gingras, Anne-Claude; Held, Jason M; Hirayama-Kurogi, Mio; Hou, Guixue; Krisp, Christoph; Larsen, Brett; Lin, Liang; Liu, Siqi; Molloy, Mark P; Moritz, Robert L; Ohtsuki, Sumio; Schlapbach, Ralph; Selevsek, Nathalie; Thomas, Stefani N; Tzeng, Shin-Cheng; Zhang, Hui; Aebersold, Ruedi

    2017-08-21

    Quantitative proteomics employing mass spectrometry is an indispensable tool in life science research. Targeted proteomics has emerged as a powerful approach for reproducible quantification but is limited in the number of proteins quantified. SWATH-mass spectrometry consists of data-independent acquisition and a targeted data analysis strategy that aims to maintain the favorable quantitative characteristics (accuracy, sensitivity, and selectivity) of targeted proteomics at large scale. While previous SWATH-mass spectrometry studies have shown high intra-lab reproducibility, this has not been evaluated between labs. In this multi-laboratory evaluation study including 11 sites worldwide, we demonstrate that using SWATH-mass spectrometry data acquisition we can consistently detect and reproducibly quantify >4000 proteins from HEK293 cells. Using synthetic peptide dilution series, we show that the sensitivity, dynamic range and reproducibility established with SWATH-mass spectrometry are uniformly achieved. This study demonstrates that the acquisition of reproducible quantitative proteomics data by multiple labs is achievable, and broadly serves to increase confidence in SWATH-mass spectrometry data acquisition as a reproducible method for large-scale protein quantification. SWATH-mass spectrometry consists of a data-independent acquisition and a targeted data analysis strategy that aims to maintain the favorable quantitative characteristics on the scale of thousands of proteins. Here, using data generated by eleven groups worldwide, the authors show that SWATH-MS is capable of generating highly reproducible data across different laboratories.

  8. Inter-laboratory evaluation of instrument platforms and experimental workflows for quantitative accuracy and reproducibility assessment

    NARCIS (Netherlands)

    Percy, Andrew J.; Tamura-Wells, Jessica; Albar, Juan Pablo; Aloria, Kerman; Amirkhani, Ardeshir; Araujo, Gabriel D T; Arizmendi, Jesus M.; Blanco, Francisco J.; Canals, Francesc; Cho, Jin Young; Colomé-Calls, Núria; Corrales, Fernando J.; Domont, Gilberto; Espadas, Guadalupe; Fernandez-Puente, Patricia; Gil, Concha; Haynes, Paul A.; Hernáez, Maria Luisa; Kim, Jin Young; Kopylov, Arthur; Marcilla, Miguel; McKay, Mathew J.; Mirzaei, Mehdi; Molloy, Mark P.; Ohlund, Leanne B.; Paik, Young Ki; Paradela, Alberto; Raftery, Mark; Sabidó, Eduard; Sleno, Lekha; Wilffert, Daniel; Wolters, Justina C.; Yoo, Jong Shin; Zgoda, Victor; Parker, Carol E.; Borchers, Christoph H.

    2015-01-01

    The reproducibility of plasma protein quantitation between laboratories and between instrument types was examined in a large-scale international study involving 16 laboratories and 19 LC-MS/MS platforms, using two kits designed to evaluate instrument performance and one kit designed to evaluate the

  9. Towards reproducible descriptions of neuronal network models.

    Directory of Open Access Journals (Sweden)

    Eilen Nordlie

    2009-08-01

    Full Text Available Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing--and thinking about--complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain.

  10. Lung cancer perfusion at multi-detector row CT: reproducibility of whole tumor quantitative measurements.

    Science.gov (United States)

    Ng, Quan-Sing; Goh, Vicky; Fichte, Heinz; Klotz, Ernst; Fernie, Pat; Saunders, Michele I; Hoskin, Peter J; Padhani, Anwar R

    2006-05-01

    Institutional review board approval and informed consent were obtained for this study. The aim of the study was to prospectively assess, in patients with lung cancer, the reproducibility of a quantitative whole tumor perfusion computed tomographic (CT) technique. Paired CT studies were performed in 10 patients (eight men, two women; mean age, 66 years) with lung cancer. Whole tumor permeability and blood volume were measured, and reproducibility was evaluated by using Bland-Altman statistics. A coefficient of variation of 9.49% for permeability and 26.31% for blood volume, together with inter- and intraobserver variability ranging between 3.30% and 6.34%, indicates reliable assessment with this whole tumor technique.
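
    For readers unfamiliar with the statistics cited above, the following minimal sketch shows one common way to compute Bland-Altman limits of agreement and a within-subject coefficient of variation from paired repeat measurements. The blood volume values and function names are illustrative assumptions and do not reproduce the study data.

```python
import numpy as np

def bland_altman_limits(scan1, scan2):
    """Bland-Altman mean difference (bias) and 95% limits of agreement for paired scans."""
    a, b = np.asarray(scan1, float), np.asarray(scan2, float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

def within_subject_cv(scan1, scan2):
    """Within-subject coefficient of variation (%) from duplicate measurements."""
    a, b = np.asarray(scan1, float), np.asarray(scan2, float)
    within_sd = np.sqrt(np.mean((a - b) ** 2) / 2.0)   # per-subject SD from duplicates
    return 100.0 * within_sd / np.mean((a + b) / 2.0)

# Hypothetical blood volume values (ml/100 g) from two repeat CT perfusion studies
bv1 = [5.1, 6.8, 4.2, 7.5, 5.9, 6.1, 4.8, 5.5, 6.3, 7.0]
bv2 = [5.6, 6.1, 4.9, 7.1, 6.4, 5.7, 5.2, 5.1, 6.8, 6.5]
bias, lo, hi = bland_altman_limits(bv1, bv2)
print(f"bias = {bias:.2f}, LoA = ({lo:.2f}, {hi:.2f}), wCV = {within_subject_cv(bv1, bv2):.1f}%")
```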

  11. A Reproducible Computerized Method for Quantitation of Capillary Density using Nailfold Capillaroscopy.

    Science.gov (United States)

    Cheng, Cynthia; Lee, Chadd W; Daskalakis, Constantine

    2015-10-27

    Capillaroscopy is a non-invasive, efficient, relatively inexpensive and easy-to-learn methodology for directly visualizing the microcirculation. The capillaroscopy technique can provide insight into a patient's microvascular health, leading to a variety of potentially valuable dermatologic, ophthalmologic, rheumatologic and cardiovascular clinical applications. In addition, tumor growth may be dependent on angiogenesis, which can be quantitated by measuring microvessel density within the tumor. However, there is currently little to no standardization of techniques, and only one publication to date reports the reliability of a currently available, complex computer-based algorithm for quantitating capillaroscopy data.(1) This paper describes a new, simpler, reliable, standardized capillary counting algorithm for quantitating nailfold capillaroscopy data. A simple, reproducible computerized capillaroscopy algorithm such as this would facilitate more widespread use of the technique among researchers and clinicians. Many researchers currently analyze capillaroscopy images by hand, promoting user fatigue and subjectivity of the results. This paper describes a novel, easy-to-use automated image processing algorithm in addition to a reproducible, semi-automated counting algorithm. This algorithm enables analysis of images in minutes while reducing subjectivity; only a minimal amount of training time (in our experience, less than 1 hr) is needed to learn the technique.

  12. Inter-laboratory evaluation of instrument platforms and experimental workflows for quantitative accuracy and reproducibility assessment

    Directory of Open Access Journals (Sweden)

    Andrew J. Percy

    2015-09-01

    Full Text Available The reproducibility of plasma protein quantitation between laboratories and between instrument types was examined in a large-scale international study involving 16 laboratories and 19 LC–MS/MS platforms, using two kits designed to evaluate instrument performance and one kit designed to evaluate the entire bottom-up workflow. There was little effect of instrument type on the quality of the results, demonstrating the robustness of LC/MRM-MS with isotopically labeled standards. Technician skill was a factor, as errors in sample preparation and sub-optimal LC–MS performance were evident. This highlights the importance of proper training and routine quality control before quantitation is done on patient samples.

  13. Advanced NSCLC First Pass Perfusion at 64-slice CT: Reproducibility of Volume-based Quantitative Measurement

    Directory of Open Access Journals (Sweden)

    Jie HU

    2010-05-01

    Full Text Available Background and objective: The aim of this study is to explore the reproducibility of volume-based quantitative measurement of non-small cell lung cancer (NSCLC) perfusion at 64-slice CT. Methods: Fourteen patients with proven advanced NSCLC were enrolled in this dynamic first-pass volume-based CT perfusion (CTP) study (8×5 mm collimation), and they underwent a second scan within 24 h. According to the longest diameters, the patients were classified into ≤3 cm and >3 cm groups of 7 patients each. Intraclass correlation coefficient (ICC) and Bland-Altman statistics were used to evaluate the reproducibility of CTP imaging. Results: In both groups of advanced NSCLC, reproducibility was good for BF, BV, and PS values (ICC >0.75 for all), but not for mean transit time (MTT) values. For advanced NSCLC (≤3 cm), repeatability coefficient (RC) values for blood flow (BF), blood volume (BV), MTT and permeability surface area product (PS) were 56%, 45%, 114%, and 78%, respectively, and the 95% change intervals of the RC were -39%-53%, -29%-62%, -83%-145%, and -57%-98%, respectively. For advanced NSCLC (>3 cm), those values were 46%, 30%, 59%, and 33%, respectively, and the 95% change intervals of the RC were -48%-45%, -33%-26%, -54%-64%, and -18%-48%. Conclusion: Reproducibility was greater for tumors >3 cm than for those ≤3 cm. BF and BV could be relied upon for clinical application in monitoring antiangiogenic therapy in patients with advanced NSCLC.
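
    The intraclass correlation coefficient reported above can be computed from a subjects-by-scans table; the sketch below implements the standard two-way random-effects, single-measurement form ICC(2,1), which may differ from the exact variant used in the study. The perfusion values are invented for illustration.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    `data` is an (n subjects x k raters/scans) array.
    """
    Y = np.asarray(data, float)
    n, k = Y.shape
    mean_subj = Y.mean(axis=1)
    mean_scan = Y.mean(axis=0)
    grand = Y.mean()

    ss_total = ((Y - grand) ** 2).sum()
    ss_subj = k * ((mean_subj - grand) ** 2).sum()
    ss_scan = n * ((mean_scan - grand) ** 2).sum()
    ss_error = ss_total - ss_subj - ss_scan

    ms_subj = ss_subj / (n - 1)
    ms_scan = ss_scan / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_subj - ms_error) / (
        ms_subj + (k - 1) * ms_error + k * (ms_scan - ms_error) / n
    )

# Hypothetical blood flow values (ml/100 g/min) from two repeat CTP scans of 7 tumours
scans = np.array([
    [62, 58], [45, 49], [80, 77], [55, 60], [70, 66], [48, 45], [90, 95],
])
print(f"ICC(2,1) = {icc_2_1(scans):.2f}")
```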

  14. Reproducibility of LCA models of crude oil production.

    Science.gov (United States)

    Vafi, Kourosh; Brandt, Adam R

    2014-11-04

    Scientific models are ideally reproducible, with results that converge despite varying methods. In practice, divergence between models often remains due to varied assumptions, incompleteness, or simply because of avoidable flaws. We examine LCA greenhouse gas (GHG) emissions models to test the reproducibility of their estimates for well-to-refinery inlet gate (WTR) GHG emissions. We use the Oil Production Greenhouse gas Emissions Estimator (OPGEE), an open source engineering-based life cycle assessment (LCA) model, as the reference model for this analysis. We analyze seven previous studies based on six models. We examine the reproducibility of prior results by successive experiments that align model assumptions and boundaries. The root-mean-square error (RMSE) between results varies between ∼1 and 8 g CO2 eq/MJ LHV when model inputs are not aligned. After model alignment, RMSE generally decreases only slightly. The proprietary nature of some of the models hinders explanations for divergence between the results. Because verification of the results of LCA GHG emissions is often not possible by direct measurement, we recommend the development of open source models for use in energy policy. Such practice will lead to iterative scientific review, improvement of models, and more reliable understanding of emissions.

  15. Reproducibility Issues : Avoiding Pitfalls in Animal Inflammation Models

    NARCIS (Netherlands)

    Laman, Jon D; Kooistra, Susanne M; Clausen, Björn E; Clausen, Björn E.; Laman, Jon D.

    2017-01-01

    In light of an enhanced awareness of ethical questions and ever increasing costs when working with animals in biomedical research, there is a dedicated and sometimes fierce debate concerning the (lack of) reproducibility of animal models and their relevance for human inflammatory diseases. Despite

  16. Modeling and evaluating repeatability and reproducibility of ordinal classifications

    NARCIS (Netherlands)

    J. de Mast; W.N. van Wieringen

    2010-01-01

    This paper argues that currently available methods for the assessment of the repeatability and reproducibility of ordinal classifications are not satisfactory. The paper aims to study whether we can modify a class of models from Item Response Theory, well established for the study of the reliability

  17. Reproducibility Issues: Avoiding Pitfalls in Animal Inflammation Models.

    Science.gov (United States)

    Laman, Jon D; Kooistra, Susanne M; Clausen, Björn E

    2017-01-01

    In light of an enhanced awareness of ethical questions and ever increasing costs when working with animals in biomedical research, there is a dedicated and sometimes fierce debate concerning the (lack of) reproducibility of animal models and their relevance for human inflammatory diseases. Despite evident advancements in searching for alternatives, that is, replacing, reducing, and refining animal experiments (the three R's of Russell and Burch, 1959), understanding the complex interactions of the cells of the immune system, the nervous system and the affected tissue/organ during inflammation critically relies on in vivo models. Consequently, scientific advancement and ultimately novel therapeutic interventions depend on improving the reproducibility of animal inflammation models. As a prelude to the remaining hands-on protocols described in this volume, here, we summarize potential pitfalls of preclinical animal research and provide resources and background reading on how to avoid them.

  18. Assessment of Modeling Capability for Reproducing Storm Impacts on TEC

    Science.gov (United States)

    Shim, J. S.; Kuznetsova, M. M.; Rastaetter, L.; Bilitza, D.; Codrescu, M.; Coster, A. J.; Emery, B. A.; Foerster, M.; Foster, B.; Fuller-Rowell, T. J.; Huba, J. D.; Goncharenko, L. P.; Mannucci, A. J.; Namgaladze, A. A.; Pi, X.; Prokhorov, B. E.; Ridley, A. J.; Scherliess, L.; Schunk, R. W.; Sojka, J. J.; Zhu, L.

    2014-12-01

    During geomagnetic storms, the energy transferred from the solar wind to the magnetosphere-ionosphere system adversely affects communication and navigation systems. Quantifying storm impacts on TEC (Total Electron Content) and assessing the capability of models to reproduce those impacts are important for specifying and forecasting space weather. To quantify storm impacts on TEC, we considered several parameters: TEC changes compared to quiet time (the day before the storm), the TEC difference between 24-hour intervals, and the maximum increase/decrease during the storm. We investigated the spatial and temporal variations of these parameters during the 2006 AGU storm event (14-15 Dec. 2006) using ground-based GPS TEC measurements in eight selected 5-degree longitude sectors. Latitudinal variations were also studied in two of the eight sectors where data coverage was relatively better. We obtained modeled TEC from various ionosphere/thermosphere (IT) models. The parameters from the models were compared with each other and with the observed values. We quantified the performance of the models in reproducing the TEC variations during the storm using skill scores. This study has been supported by the Community Coordinated Modeling Center (CCMC) at the Goddard Space Flight Center. Model outputs and observational data used for the study will be permanently posted at the CCMC website (http://ccmc.gsfc.nasa.gov) for the space science communities to use.
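
    A minimal sketch of the storm-impact parameters listed above (change relative to quiet time and rate of TEC change) follows; the function name and the hourly TEC values are hypothetical, and the study may define these quantities differently.

```python
import numpy as np

def storm_tec_parameters(storm_tec, quiet_tec, dt_hours=1.0):
    """Simple storm-impact metrics from time series of vertical TEC (TECU).

    storm_tec / quiet_tec: TEC sampled at the same local times on the storm day
    and on the preceding quiet day.
    """
    storm = np.asarray(storm_tec, float)
    quiet = np.asarray(quiet_tec, float)
    d_tec = storm - quiet                     # change relative to quiet time
    rate = np.diff(storm) / dt_hours          # rate of TEC change (TECU/hour)
    return {
        "dTEC_percent_max": 100.0 * d_tec.max() / quiet[d_tec.argmax()],
        "dTEC_percent_min": 100.0 * d_tec.min() / quiet[d_tec.argmin()],
        "max_rate_TECU_per_hr": np.abs(rate).max(),
    }

# Hypothetical hourly values for one longitude sector
print(storm_tec_parameters([14, 18, 30, 44, 40, 28], [13, 15, 17, 19, 18, 16]))
```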

  19. Reproducibility and Transparency in Ocean-Climate Modeling

    Science.gov (United States)

    Hannah, N.; Adcroft, A.; Hallberg, R.; Griffies, S. M.

    2015-12-01

    Reproducibility is a cornerstone of the scientific method. Within geophysical modeling and simulation, achieving reproducibility can be difficult, especially given the complexity of numerical codes, enormous and disparate data sets, and variety of supercomputing technology. We have made progress on this problem in the context of a large project - the development of new ocean and sea ice models, MOM6 and SIS2. Here we present useful techniques and experience. We use version control not only for code but for the entire experiment working directory, including configuration (run-time parameters, component versions), input data and checksums on experiment output. This allows us to document when the solutions to experiments change, whether due to code updates or changes in input data. To avoid distributing large input datasets we provide the tools for generating these from the sources, rather than providing raw input data. Bugs can be a source of non-determinism and hence irreproducibility, e.g. reading from or branching on uninitialized memory. To expose these we routinely run system tests, using a memory debugger, multiple compilers and different machines. Additional confidence in the code comes from specialised tests, for example automated dimensional analysis and domain transformations. This has entailed adopting a code style where we deliberately restrict what a compiler can do when re-arranging mathematical expressions. In the spirit of open science, all development is in the public domain. This leads to a positive feedback, where increased transparency and reproducibility make using the model easier for external collaborators, who in turn provide valuable contributions. To facilitate users installing and running the model we provide (version controlled) digital notebooks that illustrate and record analysis of output. This has the dual role of providing a gross, platform-independent testing capability and a means to document model output and analysis.
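
    As a small illustration of the checksum-on-output idea described above (not the MOM6/SIS2 tooling itself), the sketch below records SHA-256 digests for experiment output files and later reports any file whose digest has changed. The paths, the `*.nc` file pattern and the manifest filename are assumptions.

```python
import hashlib
import json
from pathlib import Path

def checksum(path, algo="sha256", chunk=1 << 20):
    """Return the hex digest of a file, read in chunks so large outputs are fine."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def write_manifest(out_dir, manifest="checksums.json"):
    """Record a checksum for every output file in the experiment directory."""
    digests = {p.name: checksum(p) for p in sorted(Path(out_dir).glob("*.nc"))}
    Path(manifest).write_text(json.dumps(digests, indent=2))

def verify_manifest(out_dir, manifest="checksums.json"):
    """Return the files whose current checksum differs from the recorded one."""
    recorded = json.loads(Path(manifest).read_text())
    current = {name: checksum(Path(out_dir) / name) for name in recorded}
    return {name: digest for name, digest in current.items() if digest != recorded[name]}

# Usage (hypothetical paths): write_manifest("run01/"); changed = verify_manifest("run01/")
```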

  20. Reproducibility in the automated quantitative assessment of HER2/neu for breast cancer

    Directory of Open Access Journals (Sweden)

    Tyler Keay

    2013-01-01

    Full Text Available Background: With the emerging role of digital imaging in pathology and the application of automated image-based algorithms to a number of quantitative tasks, there is a need to examine factors that may affect the reproducibility of results. These factors include the imaging properties of whole slide imaging (WSI) systems and their effect on the performance of quantitative tools. This manuscript examines inter-scanner and inter-algorithm variability in the assessment of the commonly used HER2/neu tissue-based biomarker for breast cancer with emphasis on the effect of algorithm training. Materials and Methods: A total of 241 regions of interest from 64 breast cancer tissue glass slides were scanned using three different whole-slide imaging systems and were analyzed using two different automated image analysis algorithms, one with preset parameters and another incorporating a procedure for objective parameter optimization. Ground truth from a panel of seven pathologists was available from a previous study. Agreement analysis was used to compare the resulting HER2/neu scores. Results: The results of our study showed that inter-scanner agreement in the assessment of HER2/neu for breast cancer in selected fields of view, when analyzed with either of the two algorithms examined in this study, was equal to or better than the inter-observer agreement previously reported on the same set of data. Results also showed that discrepancies observed between algorithm results on data from different scanners were significantly reduced when the alternative algorithm that incorporated an objective re-training procedure was used, compared to the commercial algorithm with preset parameters. Conclusion: Our study supports the use of objective procedures for algorithm training to account for differences in image properties between WSI systems.

  1. Using a 1-D model to reproduce diurnal SST signals

    DEFF Research Database (Denmark)

    Karagali, Ioanna; Høyer, Jacob L.

    2014-01-01

    of measurement. A generally preferred approach to bridge the gap between in situ and remotely obtained measurements is through modelling of the upper ocean temperature. This ESA supported study focuses on the implementation of the 1 dimensional General Ocean Turbulence Model (GOTM), in order to resolve...... profiles, along with the selection of the coefficients for the 2-band parametrisation of light’s penetration in the water column, hold a key role in the agreement of the modelled output with observations. To improve the surface heat budget and the distribution of heat, the code was modified to include...... Institution Upper Ocean Processes Group archive. The successful implementation of the new parametrisations is verified while the model reproduces the diurnal signals seen from in situ measurements. Special focus is given to testing and validation of different set-ups using campaign data from the Atlantic...

  2. Venusian Polar Vortex reproduced by a general circulation model

    Science.gov (United States)

    Ando, Hiroki; Sugimoto, Norihiko; Takagi, Masahiro

    2016-10-01

    Unlike the polar vortices observed in the Earth, Mars and Titan atmospheres, the observed Venus polar vortex is warmer than the mid-latitudes at cloud-top levels (~65 km). This warm polar vortex is zonally surrounded by a cold latitude band located at ~60 degrees latitude, a unique feature of the Venus atmosphere called the 'cold collar' [e.g. Taylor et al. 1980; Piccioni et al. 2007]. Although these structures have been seen in numerous previous observations, their formation mechanism is still unknown. In addition, an axi-asymmetric feature is always seen in the warm polar vortex. It changes temporally and sometimes shows a hot polar dipole or S-shaped structure, as shown by numerous infrared measurements [e.g. Garate-Lopez et al. 2013; 2015]. However, its vertical structure has not been investigated. To address these problems, we performed a numerical simulation of the Venus atmospheric circulation using a general circulation model named AFES for Venus [Sugimoto et al. 2014] and reproduced these puzzling features. The reproduced structure of the atmosphere and the axi-asymmetric feature are then compared with previous observational results. In addition, a quasi-periodic zonal-mean zonal wind fluctuation is also seen in the Venus polar vortex reproduced in our model. This might explain some observational results [e.g. Luz et al. 2007] and implies that polar vacillation might also occur in the Venus atmosphere, similar to that in the Earth's polar atmosphere. We will also show some initial results on this point in this presentation.

  3. Research Spotlight: Improved model reproduces the 2003 European heat wave

    Science.gov (United States)

    Schultz, Colin

    2011-04-01

    In August 2003, record-breaking temperatures raged across much of Europe. In France, maximum temperatures of 37°C (99°F) persisted for 9 days straight, the longest such stretch since 1873. About 40,000 deaths (14,000 in France alone) were attributed to the extreme heat and low humidity. Various climate conditions must come into alignment to produce extreme weather like the 2003 heat wave, and despite a concerted effort, forecasting models have so far been unable to accurately reproduce the event—including the modern European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble modeling system for seasonal forecasts, which went into operation in 2007. (Geophysical Research Letters, doi:10.1029/2010GL046455, 2011)

  4. Reproducibility of UAV-based photogrammetric surface models

    Science.gov (United States)

    Anders, Niels; Smith, Mike; Cammeraat, Erik; Keesstra, Saskia

    2016-04-01

    Soil erosion, rapid geomorphological change and vegetation degradation are major threats to the human and natural environment in many regions. Unmanned Aerial Vehicles (UAVs) and Structure-from-Motion (SfM) photogrammetry are invaluable tools for the collection of highly detailed aerial imagery and subsequent low cost production of 3D landscapes for an assessment of landscape change. Despite the widespread use of UAVs for image acquisition in monitoring applications, the reproducibility of UAV data products has not been explored in detail. This paper investigates this reproducibility by comparing the surface models and orthophotos derived from different UAV flights that vary in flight direction and altitude. The study area is located near Lorca, Murcia, SE Spain, which is a semi-arid medium-relief locale. The area comprises terraced agricultural fields that have been abandoned for about 40 years and have suffered subsequent damage through piping and gully erosion. In this work we focused upon variation in cell size, vertical and horizontal accuracy, and horizontal positioning of recognizable landscape features. The results suggest that flight altitude has a significant impact on reconstructed point density and related cell size, whilst flight direction affects the spatial distribution of vertical accuracy. The horizontal positioning of landscape features is relatively consistent between the different flights. We conclude that UAV data products are suitable for monitoring campaigns for land cover purposes or geomorphological mapping, but special care is required when used for monitoring changes in elevation.

  5. Can a coupled meteorology–chemistry model reproduce the ...

    Science.gov (United States)

    The ability of a coupled meteorology–chemistry model, i.e., Weather Research and Forecast and Community Multiscale Air Quality (WRF-CMAQ), to reproduce the historical trend in aerosol optical depth (AOD) and clear-sky shortwave radiation (SWR) over the Northern Hemisphere has been evaluated through a comparison of 21-year simulated results with observation-derived records from 1990 to 2010. Six satellite-retrieved AOD products including AVHRR, TOMS, SeaWiFS, MISR, MODIS-Terra and MODIS-Aqua as well as long-term historical records from 11 AERONET sites were used for the comparison of AOD trends. Clear-sky SWR products derived by CERES at both the top of atmosphere (TOA) and surface as well as surface SWR data derived from seven SURFRAD sites were used for the comparison of trends in SWR. The model successfully captured increasing AOD trends along with the corresponding increased TOA SWR (upwelling) and decreased surface SWR (downwelling) in both eastern China and the northern Pacific. The model also captured declining AOD trends along with the corresponding decreased TOA SWR (upwelling) and increased surface SWR (downwelling) in the eastern US, Europe and the northern Atlantic for the period of 2000–2010. However, the model underestimated the AOD over regions with substantial natural dust aerosol contributions, such as the Sahara Desert, Arabian Desert, central Atlantic and northern Indian Ocean. Estimates of the aerosol direct radiative effect (DRE) at TOA a

  6. A reproducible nonlethal animal model for studying cyanide poisoning.

    Science.gov (United States)

    Vick, J; Marino, M T; von Bredow, J D; Kaminskis, A; Brewer, T

    2000-12-01

    Previous studies using bolus intravenous injections of sodium cyanide have been used to model the sudden exposure to high concentrations of cyanide that could occur on the battlefield. This study was designed to develop a model that would simulate the type of exposure to cyanide gas that could happen during actual low-level continuous types of exposure and then compare it with the bolus model. Cardiovascular and respiratory recordings taken from anesthetized dogs have been used previously to characterize the lethal effects of cyanide. The intravenous, bolus injection of 2.5 mg/kg sodium cyanide provides a model in which a greater than lethal concentration is attained. In contrast, our model uses a slow, intravenous infusion of cyanide to titrate each animal to its own inherent end point, which coincides with the amount of cyanide needed to induce death through respiratory arrest. In this model, therapeutic intervention can be used to restore respiration and allow for the complete recovery of the animals. After recovery, the same animal can be given a second infusion of cyanide, followed again by treatment and recovery, providing a reproducible end point. This end point can then be expressed as the total amount of cyanide per body weight (mg/kg) required to kill. In this study, the average dose of sodium cyanide among 12 animals was 1.21 mg/kg, which is approximately half the cyanide used in the bolus model. Thus, titration to respiratory arrest followed by resuscitation provides a repetitive-use animal model that can be used to test the efficacy of various forms of pretreatment and/or therapy without the loss of a single animal.

  7. A reproducible brain tumour model established from human glioblastoma biopsies

    Directory of Open Access Journals (Sweden)

    Li Xingang

    2009-12-01

    Full Text Available Abstract Background: Establishing clinically relevant animal models of glioblastoma multiforme (GBM) remains a challenge, and many commonly used cell line-based models do not recapitulate the invasive growth patterns of patient GBMs. Previously, we have reported the formation of highly invasive tumour xenografts in nude rats from human GBMs. However, implementing tumour models based on primary tissue requires that these models can be sufficiently standardised with consistently high take rates. Methods: In this work, we collected data on growth kinetics from 29 biopsies xenografted in nude rats, and characterised this model with an emphasis on neuropathological and radiological features. Results: The tumour take rate for xenografted GBM biopsies was 96% and remained close to 100% at subsequent passages in vivo, whereas only one of four lower grade tumours engrafted. Average time from transplantation to the onset of symptoms was 125 days ± 11.5 SEM. Histologically, the primary xenografts recapitulated the invasive features of the parent tumours while endothelial cell proliferations and necrosis were mostly absent. After 4-5 in vivo passages, the tumours became more vascular with necrotic areas, but also appeared more circumscribed. MRI typically revealed changes related to tumour growth several months prior to the onset of symptoms. Conclusions: In vivo passaging of patient GBM biopsies produced tumours representative of the patient tumours, with high take rates and a reproducible disease course. The model provides combinations of angiogenic and invasive phenotypes and represents a good alternative to in vitro propagated cell lines for dissecting mechanisms of brain tumour progression.

  8. Qualitative and quantitative histopathology in transitional cell carcinomas of the urinary bladder. An international investigation of intra- and interobserver reproducibility

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Sasaki, M; Fukuzawa, S

    1994-01-01

    of both qualitative and quantitative grading methods. Grading of malignancy was performed by one observer in Japan (using the World Health Organization scheme), and by two observers in Denmark (using the Bergkvist system). A "translation" between the systems, grade for grade, and kappa statistics were....... The results obtained in this study stress the need for objective, quantitative histopathologic techniques substituting qualitative, subjective methods in prognosis-related grading of malignancy....... a random, systematic sampling scheme. RESULTS: The results were compared by bivariate correlation analyses and Kendall's tau. The international interobserver reproducibility of qualitative gradings was rather poor (kappa = 0.51), especially for grade 2 tumors (kappa = 0.28). Likewise, the interobserver...

  9. Reproducibility analysis on shear wave elastography (SWE)-based quantitative assessment for skin elasticity

    Science.gov (United States)

    Sun, Yang; Ma, Chuan; Liang, XiaoLong; Wang, Run; Fu, Ying; Wang, ShuMin; Cui, LiGang; Zhang, ChunLei

    2017-01-01

    Abstract Shear Wave Elastography (SWE) is an objective and non-invasive method widely used to quantify tissue stiffness. However, there are concerns about the accuracy of skin SWE results due to the low signal-to-noise ratio (SNR) caused by subcutaneous fat, muscle and bone. This article analyzed the reproducibility of skin SWE results and thereby evaluated the suitability of SWE for diseases involving skin elasticity. Thirty volunteers (mean age: 37 ± 12 years) were selected. SWE measurements were taken on the skin of the abdomen and the middle tibia in order to assess the impact of fat, muscle and bone on SWE results. Skin in the area of the anterior and lateral tibia was marked with seven parallel lines, each line indicating a subcutaneous fat thickness from 1–7 mm. Intra-class correlation coefficients (ICC) were used to evaluate intra-observer and inter-observer reproducibility. Abdominal skin was soft with small individual differences (12.4 ± 2.7 kPa), whereas high shear moduli (25–48 kPa) were observed in the skin above the tibia and tibialis anterior muscle. When the subcutaneous fat was thicker than 3 mm, we obtained excellent intra-observer reproducibility (ICC range 0.78–0.98) and inter-observer reproducibility (ICC range 0.75–0.98). The thickness of subcutaneous fat can affect the reproducibility of skin SWE. Further work on the standardization of skin SWE is needed. PMID:28489803

  10. Establishment of reproducible osteosarcoma rat model using orthotopic implantation technique.

    Science.gov (United States)

    Yu, Zhe; Sun, Honghui; Fan, Qingyu; Long, Hua; Yang, Tongtao; Ma, Bao'an

    2009-05-01

    negligible and the procedure was simple to perform and easily reproduced. It may be a useful tool in the investigation of antiangiogenic and anticancer therapeutics. Ultrasound was found to be a highly accurate tool for tumor diagnosis, localization and measurement and may be recommended for monitoring tumor growth in this model.

  11. Reproducibility of CSF quantitative culture methods for estimating rate of clearance in cryptococcal meningitis.

    Science.gov (United States)

    Dyal, Jonathan; Akampurira, Andrew; Rhein, Joshua; Morawski, Bozena M; Kiggundu, Reuben; Nabeta, Henry W; Musubire, Abdu K; Bahr, Nathan C; Williams, Darlisha A; Bicanic, Tihana; Larsen, Robert A; Meya, David B; Boulware, David R

    2016-05-01

    Quantitative cerebrospinal fluid (CSF) cultures provide a measure of disease severity in cryptococcal meningitis. The fungal clearance rate by quantitative cultures has become a primary endpoint for phase II clinical trials. This study determined the inter-assay accuracy of three different quantitative culture methodologies. Among 91 participants with meningitis symptoms in Kampala, Uganda, during August-November 2013, 305 CSF samples were prospectively collected from patients at multiple time points during treatment. Samples were simultaneously cultured by three methods: (1) St. George's 100 mcl input volume of CSF with five 1:10 serial dilutions, (2) AIDS Clinical Trials Group (ACTG) method using 1000, 100, 10 mcl input volumes, and two 1:100 dilutions with 100 and 10 mcl input volume per dilution on seven agar plates; and (3) 10 mcl calibrated loop of undiluted and 1:100 diluted CSF (loop). Quantitative culture values did not statistically differ between St. George-ACTG methods (P = .09) but did for St. George-10 mcl loop (P < .001). Repeated measures pairwise correlation between any of the methods was high (r ≥ 0.88). For detecting sterility, the ACTG-method had the highest negative predictive value of 97% (91% St. George, 60% loop), but the ACTG-method had occasional (∼10%) difficulties in quantification due to colony clumping. For CSF clearance rate, St. George-ACTG methods did not differ overall (mean -0.05 ± 0.07 log10 CFU/ml/day; P = .14) on a group level; however, individual-level clearance varied. The St. George and ACTG quantitative CSF culture methods produced comparable but not identical results. Quantitative cultures can inform treatment management strategies.
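
    A minimal sketch of the underlying arithmetic follows: converting a plate count at a given dilution and plated volume to CFU per ml, and estimating a clearance rate as the slope of log10(CFU/ml) versus day of therapy. The colony counts, dilution and time points are hypothetical, and the three study methods differ in detail from this simplification.

```python
import numpy as np

def cfu_per_ml(colonies, dilution_factor, plated_volume_ml):
    """Convert a colony count on one plate to CFU per ml of CSF.

    colonies: colonies counted; dilution_factor: e.g. 100 for a 1:100 dilution;
    plated_volume_ml: volume of (diluted) CSF spread on the plate, in ml.
    """
    return colonies * dilution_factor / plated_volume_ml

def clearance_rate(days, cfu_ml, floor=1.0):
    """Clearance rate: slope of log10(CFU/ml) versus day of therapy."""
    log_cfu = np.log10(np.maximum(np.asarray(cfu_ml, float), floor))
    slope, _intercept = np.polyfit(np.asarray(days, float), log_cfu, 1)
    return slope  # log10 CFU/ml per day; more negative = faster clearance

# Hypothetical serial lumbar punctures on days 0, 3, 7, 14 (1:100 dilution, 0.1 ml plated)
counts = cfu_per_ml(np.array([420, 160, 35, 4]), dilution_factor=100, plated_volume_ml=0.1)
print(f"clearance rate = {clearance_rate([0, 3, 7, 14], counts):.2f} log10 CFU/ml/day")
```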

  12. Quantitative Aortic Distensibility Measurement Using CT in Patients with Abdominal Aortic Aneurysm: Reproducibility and Clinical Relevance

    Directory of Open Access Journals (Sweden)

    Yunfei Zha

    2017-01-01

    Full Text Available Purpose. To investigate the reproducibility of aortic distensibility (D) measurement using CT and assess its clinical relevance in patients with infrarenal abdominal aortic aneurysm (AAA). Methods. 54 patients with infrarenal abdominal aortic aneurysm were studied to determine their distensibility using 64-MDCT. Aortic cross-sectional area changes were determined by semiautomatic segmentation at two positions of the aorta: immediately below the lowest renal artery (level 1) and at the level of its maximal diameter (level 2). Measurement reproducibility was assessed using intraclass correlation coefficient (ICC) and Bland-Altman analyses. Stepwise multiple regression analysis was performed to assess linear associations between aortic D and anthropometric and biochemical parameters. Results. Mean distensibilities of D(level 1) = (1.05 ± 0.22) × 10⁻⁵ Pa⁻¹ and D(level 2) = (0.49 ± 0.18) × 10⁻⁵ Pa⁻¹ were found. ICC showed excellent consistency between readers at both locations: 0.92 for intraobserver and 0.89 for interobserver differences at level 1, and 0.85 and 0.79 at level 2. Multivariate analysis of all these variables showed sac distensibility to be independently related (R² = 0.68) to BMI, diastolic blood pressure, and AAA diameter. Conclusions. Aortic distensibility measurement in patients with AAA demonstrated high inter- and intraobserver agreement and may be valuable when choosing a graft of optimal dimensions before endovascular aneurysm repair of AAA.
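
    For context, cross-sectional aortic distensibility is commonly defined as D = (Amax - Amin) / (Amin × ΔP), with the pulse pressure ΔP expressed in pascals. The sketch below applies that common definition with invented numbers; it is not taken from the study itself.

```python
def aortic_distensibility(area_max_mm2, area_min_mm2, systolic_mmHg, diastolic_mmHg):
    """Commonly used cross-sectional distensibility: D = dA / (A_min * dP), in Pa^-1."""
    pulse_pressure_pa = (systolic_mmHg - diastolic_mmHg) * 133.322  # mmHg -> Pa
    return (area_max_mm2 - area_min_mm2) / (area_min_mm2 * pulse_pressure_pa)

# Hypothetical values at the infrarenal neck (level 1)
d = aortic_distensibility(area_max_mm2=465.0, area_min_mm2=440.0,
                          systolic_mmHg=140, diastolic_mmHg=80)
print(f"D = {d * 1e5:.2f} x 10^-5 Pa^-1")
```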

  13. Short- and long-term quantitation reproducibility of brain metabolites in the medial wall using proton echo planar spectroscopic imaging.

    Science.gov (United States)

    Tsai, Shang-Yueh; Lin, Yi-Ru; Wang, Woan-Chyi; Niddam, David M

    2012-11-15

    Proton echo planar spectroscopic imaging (PEPSI) is a fast magnetic resonance spectroscopic imaging (MRSI) technique that allows mapping spatial metabolite distributions in the brain. Although the medial wall of the cortex is involved in a wide range of pathological conditions, previous MRSI studies have not focused on this region. To decide the magnitude of metabolic changes to be considered significant in this region, the reproducibility of the method needs to be established. The study aims were to establish the short- and long-term reproducibility of metabolites in the right medial wall and to compare regional differences using a constant short-echo time (TE30) and TE averaging (TEavg) optimized to yield glutamatergic information. 2D sagittal PEPSI was implemented at 3T using a 32 channel head coil. Acquisitions were repeated immediately and after approximately 2 weeks to assess the coefficients of variation (COV). COVs were obtained from eight regions-of-interest (ROIs) of varying size and location. TE30 resulted in better spectral quality and similar or lower quantitation uncertainty for all metabolites except glutamate (Glu). When Glu and glutamine (Gln) were quantified together (Glx) reduced quantitation uncertainty and increased reproducibility was observed for TE30. TEavg resulted in lowered quantitation uncertainty for Glu but in less reliable quantification of several other metabolites. TEavg did not result in a systematically improved short- or long-term reproducibility for Glu. The ROI volume was a major factor influencing reproducibility. For both short- and long-term repetitions, the Glu COVs obtained with TEavg were 5-8% for the large ROIs, 12-17% for the medium sized ROIs and 16-26% for the smaller cingulate ROIs. COVs obtained with TE30 for the less specific Glx were 3-5%, 8-10% and 10-15%. COVs for N-acetyl aspartate, creatine and choline using TE30 with long-term repetition were between 2-10%. Our results show that the cost of more specific

  14. Qualitative and quantitative histopathology in transitional cell carcinomas of the urinary bladder. An international investigation of intra- and interobserver reproducibility

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Sasaki, M; Fukuzawa, S

    1994-01-01

    of both qualitative and quantitative grading methods. Grading of malignancy was performed by one observer in Japan (using the World Health Organization scheme), and by two observers in Denmark (using the Bergkvist system). A "translation" between the systems, grade for grade, and kappa statistics were....... The results obtained in this study stress the need for objective, quantitative histopathologic techniques substituting qualitative, subjective methods in prognosis-related grading of malignancy....... a random, systematic sampling scheme.RESULTS: The results were compared by bivariate correlation analyses and Kendall's tau. The international interobserver reproducibility of qualitative gradings was rather poor (kappa = 0.51), especially for grade 2 tumors (kappa = 0.28). Likewise, the interobserver...

  15. Repeatability and Reproducibility of Quantitative Corneal Shape Analysis after Orthokeratology Treatment Using Image-Pro Plus Software

    Science.gov (United States)

    Mei, Ying; Tang, Zhiping

    2016-01-01

    Purpose. To evaluate the repeatability and reproducibility of quantitative analysis of the morphological corneal changes after orthokeratology treatment using “Image-Pro Plus 6.0” software (IPP). Methods. Three sets of measurements were obtained: two sets by examiner 1, taken 5 days apart, and one set by examiner 2 on the same day. Parameters of the eccentric distance, eccentric angle, area, and roundness of the corneal treatment zone were measured using IPP. The intraclass correlation coefficient (ICC) and repetitive coefficient (COR) were used to calculate the repeatability and reproducibility of these three sets of measurements. Results. ICC analysis suggested “excellent” reliability of more than 0.885 for all variables, and COR values were less than 10% for all variables within the same examiner. ICC analysis suggested “excellent” reliability for all variables of more than 0.90, and COR values were less than 10% for all variables between different examiners. All extreme values of the eccentric distance and area of the treatment zone pointed to the same material number in three sets of measurements. Conclusions. IPP could be used to acquire the exact data of the characteristic morphological corneal changes after orthokeratology treatment with good repeatability and reproducibility. This trial is registered with trial registration number: ChiCTR-IPR-14005505.

  16. Repeatability and Reproducibility of Quantitative Corneal Shape Analysis after Orthokeratology Treatment Using Image-Pro Plus Software

    Directory of Open Access Journals (Sweden)

    Ying Mei

    2016-01-01

    Full Text Available Purpose. To evaluate the repeatability and reproducibility of quantitative analysis of the morphological corneal changes after orthokeratology treatment using “Image-Pro Plus 6.0” software (IPP). Methods. Three sets of measurements were obtained: two sets by examiner 1, taken 5 days apart, and one set by examiner 2 on the same day. Parameters of the eccentric distance, eccentric angle, area, and roundness of the corneal treatment zone were measured using IPP. The intraclass correlation coefficient (ICC) and repetitive coefficient (COR) were used to calculate the repeatability and reproducibility of these three sets of measurements. Results. ICC analysis suggested “excellent” reliability of more than 0.885 for all variables, and COR values were less than 10% for all variables within the same examiner. ICC analysis suggested “excellent” reliability for all variables of more than 0.90, and COR values were less than 10% for all variables between different examiners. All extreme values of the eccentric distance and area of the treatment zone pointed to the same material number in three sets of measurements. Conclusions. IPP could be used to acquire the exact data of the characteristic morphological corneal changes after orthokeratology treatment with good repeatability and reproducibility. This trial is registered with trial registration number: ChiCTR-IPR-14005505.

  17. Interscan reproducibility of quantitative coronary plaque volume and composition from CT coronary angiography using an automated method

    Energy Technology Data Exchange (ETDEWEB)

    Schuhbaeck, Annika; Achenbach, Stephan [University of Erlangen, Department of Cardiology, Erlangen (Germany); Dey, Damini [Cedars-Sinai Medical Center, Biomedical Imaging Research Institute, Los Angeles (United States); Otaki, Yuka; Slomka, Piotr; Berman, Daniel S. [Cedars-Sinai Medical Center, Department of Imaging and Medicine, Los Angeles (United States); Kral, Brian G.; Lai, Shenghan [Johns Hopkins University, Department of Medicine, Division of Cardiology, Baltimore (United States); Fishman, Elliott K.; Lai, Hong [Johns Hopkins University, Department of Medicine, Division of Cardiology, Baltimore (United States); Johns Hopkins University, Department of Radiology, Baltimore (United States)

    2014-09-15

    Quantitative measurements of coronary plaque volume may play a role in serial studies to determine disease progression or regression. Our aim was to evaluate the interscan reproducibility of quantitative measurements of coronary plaque volumes using a standardized automated method. Coronary dual source computed tomography angiography (CTA) was performed twice in 20 consecutive patients with known coronary artery disease within a maximum time difference of 100 days. The total plaque volume (TP), the volume of non-calcified plaque (NCP) and calcified plaque (CP) as well as the maximal remodelling index (RI) were determined using automated software. Mean TP volume was 382.3 ± 236.9 mm³ for the first and 399.0 ± 247.3 mm³ for the second examination (p = 0.47). There were also no significant differences for NCP volumes, CP volumes or RI. Interscan correlation of the plaque volumes was very good (Pearson's correlation coefficients: r = 0.92, r = 0.90 and r = 0.96 for TP, NCP and CP volumes, respectively). Automated software is a time-saving method that allows accurate assessment of coronary atherosclerotic plaque volumes in coronary CTA with high reproducibility. With this approach, serial studies appear to be possible. (orig.)

  18. Stability and reproducibility of semi-quantitative risk assessment methods within the occupational health and safety scope.

    Science.gov (United States)

    Carvalho, F; Melo, R B

    2015-01-01

    In many enterprises the semi-quantitative approach turns out to be the available and most suitable technique to perform a risk assessment. Despite its advantages, we cannot disregard the existing gap in terms of validation of this type of application. This paper reports a study of risk assessments' reliability, namely both inter-coder (reproducibility) and intra-coder (stability) reliability of the semi-quantitative approach. This study comprised 4 fundamental stages. Data collection relied on free and systematized observations and made use of video recording, documental research, analysis grids and questionnaires specifically developed for this purpose. A set of different analysts were asked to use four semi-quantitative risk assessment methods (at two different moments) to estimate and assess six risks identified in two tasks performed to produce airbags. Krippendorff's alpha coefficient (αK) was the agreement measure selected to evaluate both inter-coder and intra-coder consensus. The preliminary results revealed a generally low concordance (αK) between the risk assessment results obtained by individuals with different levels of experience or expertise. This study revealed that the use of the semi-quantitative approach should be done with caution.
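
    The agreement measure named above, Krippendorff's alpha, can be computed for interval-scale ratings as α = 1 − D_o/D_e (observed versus expected disagreement). The sketch below is a generic, self-contained implementation with invented ratings; it is not the analysis grid or software used in the study.

      import numpy as np

      def krippendorff_alpha_interval(reliability):
          """Krippendorff's alpha for interval data.
          `reliability` is a (coders x units) array; np.nan marks missing ratings."""
          reliability = np.asarray(reliability, dtype=float)
          # Keep only units rated by at least two coders (pairable values)
          units = [col[~np.isnan(col)] for col in reliability.T]
          units = [u for u in units if len(u) >= 2]
          n = sum(len(u) for u in units)

          # Observed disagreement: squared differences within each unit
          d_o = 0.0
          for u in units:
              diffs = (u[:, None] - u[None, :]) ** 2
              d_o += diffs.sum() / (len(u) - 1)
          d_o /= n

          # Expected disagreement: squared differences across all pooled values
          pooled = np.concatenate(units)
          d_e = ((pooled[:, None] - pooled[None, :]) ** 2).sum() / (n * (n - 1))

          return 1.0 - d_o / d_e

      # Hypothetical ratings: 3 analysts scoring the risk level of 6 tasks
      ratings = np.array([
          [2, 3, 5, 4, 1, 2],
          [3, 3, 4, 4, 2, np.nan],
          [2, 4, 5, 3, 1, 2],
      ])
      print(f"alpha_K = {krippendorff_alpha_interval(ratings):.3f}")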

  19. New model for datasets citation and extraction reproducibility in VAMDC

    CERN Document Server

    Zwölf, Carlo Maria; Dubernet, Marie-Lise

    2016-01-01

    In this paper we present a new paradigm for the identification of datasets extracted from the Virtual Atomic and Molecular Data Centre (VAMDC) e-science infrastructure. Such identification includes information on the origin and version of the datasets, references associated to individual data in the datasets, as well as timestamps linked to the extraction procedure. This paradigm is described through the modifications of the language used to exchange data within the VAMDC and through the services that will implement those modifications. This new paradigm should enforce traceability of datasets, favour reproducibility of datasets extraction, and facilitate the systematic citation of the authors having originally measured and/or calculated the extracted atomic and molecular data.

  20. New model for datasets citation and extraction reproducibility in VAMDC

    Science.gov (United States)

    Zwölf, Carlo Maria; Moreau, Nicolas; Dubernet, Marie-Lise

    2016-09-01

    In this paper we present a new paradigm for the identification of datasets extracted from the Virtual Atomic and Molecular Data Centre (VAMDC) e-science infrastructure. Such identification includes information on the origin and version of the datasets, references associated to individual data in the datasets, as well as timestamps linked to the extraction procedure. This paradigm is described through the modifications of the language used to exchange data within the VAMDC and through the services that will implement those modifications. This new paradigm should enforce traceability of datasets, favor reproducibility of datasets extraction, and facilitate the systematic citation of the authors having originally measured and/or calculated the extracted atomic and molecular data.

  1. Compositional and Quantitative Model Checking

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand

    2010-01-01

    This paper gives a survey of a compositional model checking methodology and its successful instantiation to the model checking of networks of finite-state, timed, hybrid and probabilistic systems with respect to suitable quantitative versions of the modal mu-calculus [Koz82]. The method is based...

  2. Reproducibility and accuracy of quantitative assessment of articular cartilage volume measurements with 3.0 tesla magnetic resonance imaging

    Institute of Scientific and Technical Information of China (English)

    XING Wei; SHENG Jing; CHEN Wen-hua; TIAN Jian-ming; ZHANG Li-rong; WANG Dong-qing

    2011-01-01

    Background Quantitative magnetic resonance imaging (qMRI) of articular cartilage represents a powerful tool in osteoarthritis research, but has so far been confined to a field strength of 1.5 T. The aim of the study was to determine the reproducibility and accuracy of qMRI assessments of the knee cartilage volume by comparing quantitative swine cartilage volumes of the sagittal (sag) multi-echo data image combination water-excitation (MEDICwe) sequence and the fast low-angle shot water-excitation (FLASHwe) sequence at 3.0-T MRI to directly measured volumes (DMV) of the surgically removed articular cartilage. Methods Test-retest MRI was acquired in 20 swine knees. Two sag FLASHwe sequences and two sag MEDICwe sequences (spatial resolution 0.4 mm × 0.4 mm × 1.0 mm, three-dimensional (3D)) were acquired at 3-T MRI in each knee. Articular cartilage volume was calculated from 3D reformations of the MRI by using a manual program. Calculated volumes were compared with DMV of the surgically removed articular cartilage. Knee joint cartilage plates were quantified in paired order. Results In the knee joint of swine, reproducibility errors (paired analysis) for cartilage volume were 2.5% to 3.2% with sag FLASHwe, and 1.6% to 3.0% with sag MEDICwe. Correlation coefficients between results obtained with qMRI and DMV ranged from 0.90 to 0.98 for cartilage volume. Systematic pairwise differences between results obtained with qMRI and DMV ranged from -1.1% to 2.8%. Random pairwise differences between results obtained with qMRI and DMV ranged from (2.9 ± 2.4)% to (6.8 ± 4.5)%. Conclusions FLASHwe and MEDICwe sequences permit highly accurate and reproducible analysis of cartilage volume in the knee joints of swine at 3-T MRI. Cartilage volume reproducibility for the MEDICwe data is slightly higher than for the FLASHwe data.

  3. Can a regional climate model reproduce observed extreme temperatures?

    Directory of Open Access Journals (Sweden)

    Peter F. Craigmile

    2013-10-01

    Full Text Available Using output from a regional Swedish climate model and observations from the Swedish synoptic observational network, we compare seasonal minimum temperatures from model output and observations using marginal extreme value modeling techniques. We make seasonal comparisons using generalized extreme value models and empirically estimate the shift in the distribution as a function of the regional climate model values, using the Doksum shift function. Spatial and temporal comparisons over south central Sweden are made by building hierarchical Bayesian generalized extreme value models for the observed minima and regional climate model output. Generally speaking, the regional model is surprisingly well calibrated for minimum temperatures. We do, however, detect a problem in the regional model's ability to reproduce minimum temperatures close to 0°C. The seasonal spatial effects are quite similar between data and regional model. The observations indicate relatively strong warming, especially in the northern region. This signal is present in the regional model, but is not as strong.
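
    As a simplified, non-Bayesian stand-in for the marginal extreme value modelling described above, one can fit a generalized extreme value (GEV) distribution to seasonal minima by negating the data (block minima of X are block maxima of −X). The temperatures, the warm bias of the synthetic "model", and the 20-season return level below are all invented for illustration.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)

      def seasonal_minima(n_seasons=30, days=90, mean=-5.0, sd=6.0):
          """Draw daily winter temperatures and keep each season's minimum."""
          daily = rng.normal(mean, sd, size=(n_seasons, days))
          return daily.min(axis=1)

      obs = seasonal_minima(mean=-5.0)
      rcm = seasonal_minima(mean=-4.0)        # a slightly warm-biased "model"

      # Fit a GEV to the negated minima (block minima of X are block maxima of -X)
      shape_o, loc_o, scale_o = stats.genextreme.fit(-obs)
      shape_m, loc_m, scale_m = stats.genextreme.fit(-rcm)

      # 20-season return level of the minimum temperature (note the sign flip back)
      rl_obs = -stats.genextreme.ppf(1 - 1 / 20, shape_o, loc_o, scale_o)
      rl_rcm = -stats.genextreme.ppf(1 - 1 / 20, shape_m, loc_m, scale_m)
      print(f"20-season return level: obs {rl_obs:.1f} degC, model {rl_rcm:.1f} degC")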

  4. In-vivo quantification of natural incipient caries lesions using the quantitative light-induced fluorescence method: a reproducibility study

    Science.gov (United States)

    Tranaeus, Sofia; Shi, Xie-Qi; Trollsas, Karin; Lindgren, Lars-Erik; Angmar-Mansson, Birgit

    2000-03-01

    A new method for detection and quantification of natural incipient caries lesions, the Quantitative Light-induced Fluorescence method (QLF), has recently been developed. The aim of this study was to test the repeatability and reproducibility of the analytical part of the method. In vivo captured images (CCD-video camera, Panasonic WV-KS 152, with an argon ion laser as light source) of 15 different incipient caries lesions on smooth surfaces were analyzed by three analysts. The images were analyzed three times in a randomized order, twice for the first reconstructed area (P1A1 and P1A2), and then once for a second one (P2A1). Three parameters were measured: lesion area (mm²), average change in fluorescence (%), and maximum change in fluorescence (%) in the lesion. Repeated-measures ANOVA was used to calculate the intra- and inter-examiner reliability. Intra-examiner reliability for all three analysts showed an intra-class correlation coefficient, R, between 0.93 and 0.99 (for the analyses with the first patch, P1A1 and P1A2, as well as between the first and the second patch, P1A1 and P2A1). Inter-examiner reliability showed an inter-class correlation coefficient, R, between 0.95 and 0.99 (for analyses P1A1, P1A2 and P2A1). It was concluded that the Quantitative Light-induced Fluorescence method showed excellent repeatability and reproducibility concerning the analytical part of the method.

  5. A model project for reproducible papers: critical temperature for the Ising model on a square lattice

    CERN Document Server

    Dolfi, M; Hehn, A; Imriška, J; Pakrouski, K; Rønnow, T F; Troyer, M; Zintchenko, I; Chirigati, F; Freire, J; Shasha, D

    2014-01-01

    In this paper we present a simple, yet typical simulation in statistical physics, consisting of large-scale Monte Carlo simulations followed by an involved statistical analysis of the results. The purpose is to provide an example publication to explore tools for writing reproducible papers. The simulation estimates the critical temperature where the Ising model on the square lattice becomes magnetic to be Tc/J = 2.26934(6), using a finite-size scaling analysis of the crossing points of Binder cumulants. We provide a virtual machine which can be used to reproduce all figures and results.
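
    The Binder-cumulant analysis mentioned above can be sketched with a short Metropolis Monte Carlo run: the cumulant U4 = 1 − ⟨m⁴⟩/(3⟨m²⟩²) of different lattice sizes crosses near the critical temperature. The lattice sizes and sweep counts below are deliberately tiny and are not those of the paper.

      import numpy as np

      rng = np.random.default_rng(1)

      def metropolis_sweep(spins, beta):
          """One Metropolis sweep over an L x L Ising lattice with periodic boundaries."""
          L = spins.shape[0]
          for _ in range(L * L):
              i, j = rng.integers(L, size=2)
              nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                    spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
              dE = 2 * spins[i, j] * nb
              if dE <= 0 or rng.random() < np.exp(-beta * dE):
                  spins[i, j] *= -1

      def binder_cumulant(L, T, n_therm=1000, n_meas=2000):
          """U4 = 1 - <m^4> / (3 <m^2>^2) from a short Metropolis run."""
          spins = rng.choice([-1, 1], size=(L, L))
          beta = 1.0 / T
          for _ in range(n_therm):
              metropolis_sweep(spins, beta)
          m2 = m4 = 0.0
          for _ in range(n_meas):
              metropolis_sweep(spins, beta)
              m = spins.mean()
              m2 += m ** 2
              m4 += m ** 4
          m2 /= n_meas
          m4 /= n_meas
          return 1.0 - m4 / (3.0 * m2 ** 2)

      # Near Tc/J ~ 2.269 the cumulants of different lattice sizes should cross
      for T in (2.1, 2.269, 2.5):
          print(T, [round(binder_cumulant(L, T), 3) for L in (8, 16)])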

  6. A structured model of video reproduces primary visual cortical organisation.

    Directory of Open Access Journals (Sweden)

    Pietro Berkes

    2009-09-01

    Full Text Available The visual system must learn to infer the presence of objects and features in the world from the images it encounters, and as such it must, either implicitly or explicitly, model the way these elements interact to create the image. Do the response properties of cells in the mammalian visual system reflect this constraint? To address this question, we constructed a probabilistic model in which the identity and attributes of simple visual elements were represented explicitly and learnt the parameters of this model from unparsed, natural video sequences. After learning, the behaviour and grouping of variables in the probabilistic model corresponded closely to functional and anatomical properties of simple and complex cells in the primary visual cortex (V1. In particular, feature identity variables were activated in a way that resembled the activity of complex cells, while feature attribute variables responded much like simple cells. Furthermore, the grouping of the attributes within the model closely parallelled the reported anatomical grouping of simple cells in cat V1. Thus, this generative model makes explicit an interpretation of complex and simple cells as elements in the segmentation of a visual scene into basic independent features, along with a parametrisation of their moment-by-moment appearances. We speculate that such a segmentation may form the initial stage of a hierarchical system that progressively separates the identity and appearance of more articulated visual elements, culminating in view-invariant object recognition.

  7. Reproducibility of immunostaining quantification and description of a new digital image processing procedure for quantitative evaluation of immunohistochemistry in pathology.

    Science.gov (United States)

    Bernardo, Vagner; Lourenço, Simone Q C; Cruz, Renato; Monteiro-Leal, Luiz H; Silva, Licínio E; Camisasca, Danielle R; Farina, Marcos; Lins, Ulysses

    2009-08-01

    Quantification of immunostaining is a widely used technique in pathology. Nonetheless, techniques that rely on human vision are prone to inter- and intraobserver variability, and they are tedious and time consuming. Digital image analysis (DIA), now available in a variety of platforms, improves quantification performance; however, the stability of these different DIA systems is largely unknown. Here, we describe a method to measure the reproducibility of DIA systems. In addition, we describe a new image-processing strategy for quantitative evaluation of immunostained tissue sections using DAB/hematoxylin-stained slides. This approach is based on image subtraction, using a blue low pass filter in the optical train, followed by digital contrast and brightness enhancement. Results showed that our DIA system yields stable counts, and that this method can be used to evaluate the performance of DIA systems. The new image-processing approach creates an image that aids both human visual observation and DIA systems in assessing immunostained slides, delivers a quantitative performance similar to that of bright field imaging, gives thresholds with smaller ranges, and allows the segmentation of strongly immunostained areas, all resulting in a higher probability of representing specific staining. We believe that our approach offers important advantages to immunostaining quantification in pathology.

  8. Reproducing Phenomenology of Peroxidation Kinetics via Model Optimization

    Science.gov (United States)

    Ruslanov, Anatole D.; Bashylau, Anton V.

    2010-06-01

    We studied mathematical modeling of lipid peroxidation using a biochemical model system of iron (II)-ascorbate-dependent lipid peroxidation of rat hepatocyte mitochondrial fractions. We found that antioxidants extracted from plants demonstrate a high intensity of peroxidation inhibition. We simplified the system of differential equations that describes the kinetics of the mathematical model to a first order equation, which can be solved analytically. Moreover, we endeavor to algorithmically and heuristically recreate the processes and construct an environment that closely resembles the corresponding natural system. Our results demonstrate that it is possible to theoretically predict both the kinetics of oxidation and the intensity of inhibition without resorting to analytical and biochemical research, which is important for cost-effective discovery and development of medical agents with antioxidant action from the medicinal plants.
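
    The first-order reduction mentioned above has the closed-form solution P(t) = P_inf (1 − e^(−kt)); a fit of this curve to synthetic peroxidation data is sketched below. The parameter values, units, and the use of scipy's curve_fit are illustrative assumptions only.

      import numpy as np
      from scipy.optimize import curve_fit

      def first_order(t, p_inf, k):
          """Closed-form solution of dP/dt = k (P_inf - P) with P(0) = 0."""
          return p_inf * (1.0 - np.exp(-k * t))

      rng = np.random.default_rng(3)
      t = np.linspace(0, 60, 25)                       # minutes
      true_p_inf, true_k = 4.2, 0.08                   # hypothetical units, 1/min
      data = first_order(t, true_p_inf, true_k) + rng.normal(0, 0.1, t.size)

      (p_inf_hat, k_hat), _ = curve_fit(first_order, t, data, p0=(3.0, 0.05))
      print(f"fitted P_inf = {p_inf_hat:.2f}, k = {k_hat:.3f} 1/min")

      # An antioxidant that inhibits peroxidation would show up as a smaller fitted k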

  9. Reproducible Infection Model for Clostridium perfringens in Broiler Chickens

    DEFF Research Database (Denmark)

    Pedersen, Karl; Friis-Holm, Lotte Bjerrum; Heuer, Ole Eske

    2008-01-01

    Experiments were carried out to establish an infection and disease model for Clostridium perfringens in broiler chickens. Previous experiments had failed to induce disease and only a transient colonization with challenge strains had been obtained. In the present study, two series of experiments w...

  10. Hydrological Modeling Reproducibility Through Data Management and Adaptors for Model Interoperability

    Science.gov (United States)

    Turner, M. A.

    2015-12-01

    Because of a lack of centralized planning and no widely-adopted standards among hydrological modeling research groups, research communities, and the data management teams meant to support research, there is chaos when it comes to data formats, spatio-temporal resolutions, ontologies, and data availability. All this makes true scientific reproducibility and collaborative integrated modeling impossible without some glue to piece it all together. Our Virtual Watershed Integrated Modeling System provides the tools and modeling framework hydrologists need to accelerate and fortify new scientific investigations by tracking provenance and providing adaptors for integrated, collaborative hydrologic modeling and data management. Under global warming trends where water resources are under increasing stress, reproducible hydrological modeling will be increasingly important to improve transparency and understanding of the scientific facts revealed through modeling. The Virtual Watershed Data Engine is capable of ingesting a wide variety of heterogeneous model inputs, outputs, model configurations, and metadata. We will demonstrate one example, starting from real-time raw weather station data packaged with station metadata. Our integrated modeling system will then create gridded input data via geostatistical methods along with error and uncertainty estimates. These gridded data are then used as input to hydrological models, all of which are available as web services wherever feasible. Models may be integrated in a data-centric way where the outputs too are tracked and used as inputs to "downstream" models. This work is part of an ongoing collaborative Tri-state (New Mexico, Nevada, Idaho) NSF EPSCoR Project, WC-WAVE, comprised of researchers from multiple universities in each of the three states. The tools produced and presented here have been developed collaboratively alongside watershed scientists to address specific modeling problems with an eye on the bigger picture of

  11. Accuracy and reproducibility of measurements on plaster models and digital models created using an intraoral scanner.

    Science.gov (United States)

    Camardella, Leonardo Tavares; Breuning, Hero; de Vasconcellos Vilella, Oswaldo

    2017-05-01

    The purpose of the present study was to evaluate the accuracy and reproducibility of measurements made on digital models created using an intraoral color scanner compared to measurements on dental plaster models. This study included impressions of 28 volunteers. Alginate impressions were used to make plaster models, and each volunteer's dentition was scanned with a TRIOS Color intraoral scanner. Two examiners performed measurements on the plaster models using a digital caliper and measured the digital models using Ortho Analyzer software. The examiners measured 52 distances, including tooth diameter and height, overjet, overbite, intercanine and intermolar distances, and the sagittal relationship. The paired t test was used to assess intra-examiner performance and measurement accuracy of the two examiners for both plaster and digital models. The level of clinically relevant differences between the measurements was evaluated according to the threshold used, and a formula was applied to calculate the chance of finding clinically relevant errors in measurements on plaster and digital models. For several parameters, statistically significant differences were found between the measurements on the two different models. However, most of these discrepancies were not considered clinically significant. The measurement of the crown height of upper central incisors had the highest measurement error for both examiners. Based on the interexaminer performance, reproducibility of the measurements was poor for some of the parameters. Overall, our findings showed that most of the measurements on digital models created using the TRIOS Color scanner and measured with Ortho Analyzer software had a clinically acceptable accuracy compared to the same measurements made with a caliper on plaster models, but the measuring method can affect the reproducibility of the measurements.

  12. Bitwise identical compiling setup: prospective for reproducibility and reliability of earth system modeling

    Directory of Open Access Journals (Sweden)

    R. Li

    2015-11-01

    Full Text Available Reproducibility and reliability are fundamental principles of scientific research. A compiling setup that includes a specific compiler version and compiler flags is an essential technical support for Earth system modeling. With the fast development of computer software and hardware, the compiling setup has to be updated frequently, which challenges the reproducibility and reliability of Earth system modeling. The existing results of a simulation using an original compiling setup may be irreproducible by a newer compiling setup because trivial round-off errors introduced by the change of compiling setup can potentially trigger significant changes in simulation results. Regarding reliability, a compiler with millions of lines of code may have bugs that are easily overlooked due to the uncertainties or unknowns in Earth system modeling. To address these challenges, this study shows that different compiling setups can achieve exactly the same (bitwise identical) results in Earth system modeling, and a set of bitwise identical compiling setups of a model can be used across different compiler versions and different compiler flags. As a result, the original results can be more easily reproduced; for example, the original results with an older compiler version can be reproduced exactly with a newer compiler version. Moreover, this study shows that new test cases can be generated based on the differences of bitwise identical compiling setups between different models, which can help detect software bugs or risks in the codes of models and compilers and finally improve the reliability of Earth system modeling.
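
    A practical (and much simplified) way to test whether two compiling setups yield bitwise identical results is to hash the output files of both runs, as sketched below; the run directories and the *.nc file pattern are hypothetical.

      import hashlib
      from pathlib import Path

      def sha256_of(path: Path) -> str:
          """Hash a (possibly large) model output file in chunks."""
          h = hashlib.sha256()
          with path.open("rb") as f:
              for chunk in iter(lambda: f.read(1 << 20), b""):
                  h.update(chunk)
          return h.hexdigest()

      def bitwise_identical(run_a: Path, run_b: Path, pattern: str = "*.nc") -> bool:
          """Compare every output file produced by two compiling setups."""
          files_a = sorted(p.name for p in run_a.glob(pattern))
          files_b = sorted(p.name for p in run_b.glob(pattern))
          if files_a != files_b:
              return False
          return all(sha256_of(run_a / n) == sha256_of(run_b / n) for n in files_a)

      # Hypothetical run directories produced with two different compiler versions:
      # bitwise_identical(Path("run_gcc9_O2"), Path("run_gcc12_O3")) -> True if reproducible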

  13. Accuracy and reproducibility of dental replica models reconstructed by different rapid prototyping techniques

    NARCIS (Netherlands)

    Hazeveld, Aletta; Huddleston Slater, James J. R.; Ren, Yijin

    INTRODUCTION: Rapid prototyping is a fast-developing technique that might play a significant role in the eventual replacement of plaster dental models. The aim of this study was to investigate the accuracy and reproducibility of physical dental models reconstructed from digital data by several rapid prototyping techniques.

  14. A Detailed Data-Driven Network Model of Prefrontal Cortex Reproduces Key Features of In Vivo Activity.

    Science.gov (United States)

    Hass, Joachim; Hertäg, Loreen; Durstewitz, Daniel

    2016-05-01

    The prefrontal cortex is centrally involved in a wide range of cognitive functions and their impairment in psychiatric disorders. Yet, the computational principles that govern the dynamics of prefrontal neural networks, and link their physiological, biochemical and anatomical properties to cognitive functions, are not well understood. Computational models can help to bridge the gap between these different levels of description, provided they are sufficiently constrained by experimental data and capable of predicting key properties of the intact cortex. Here, we present a detailed network model of the prefrontal cortex, based on a simple computationally efficient single neuron model (simpAdEx), with all parameters derived from in vitro electrophysiological and anatomical data. Without additional tuning, this model could be shown to quantitatively reproduce a wide range of measures from in vivo electrophysiological recordings, to a degree where simulated and experimentally observed activities were statistically indistinguishable. These measures include spike train statistics, membrane potential fluctuations, local field potentials, and the transmission of transient stimulus information across layers. We further demonstrate that model predictions are robust against moderate changes in key parameters, and that synaptic heterogeneity is a crucial ingredient to the quantitative reproduction of in vivo-like electrophysiological behavior. Thus, we have produced a physiologically highly valid, in a quantitative sense, yet computationally efficient PFC network model, which helped to identify key properties underlying spike time dynamics as observed in vivo, and can be harvested for in-depth investigation of the links between physiology and cognition.

  15. Voxel-level reproducibility assessment of modality independent elastography in a pre-clinical murine model

    Science.gov (United States)

    Flint, Katelyn M.; Weis, Jared A.; Yankeelov, Thomas E.; Miga, Michael I.

    2015-03-01

    Changes in tissue mechanical properties, measured non-invasively by elastography methods, have been shown to be an important diagnostic tool, particularly for cancer. Tissue elasticity information, tracked over the course of therapy, may be an important prognostic indicator of tumor response to treatment. While many elastography techniques exist, this work reports on the use of a novel form of elastography that uses image texture to reconstruct elastic property distributions in tissue (i.e., a modality independent elastography (MIE) method) within the context of a pre-clinical breast cancer system.1,2 The elasticity results have previously shown good correlation with independent mechanical testing.1 Furthermore, MIE has been successfully utilized to localize and characterize lesions in both phantom experiments and simulation experiments with clinical data.2,3 However, the reproducibility of this method has not been characterized in previous work. The goal of this study is to evaluate voxel-level reproducibility of MIE in a pre-clinical model of breast cancer. Bland-Altman analysis of co-registered repeat MIE scans in this preliminary study showed a reproducibility index of 24.7% (scaled to a percent of maximum stiffness) at the voxel level. As opposed to many reports in the magnetic resonance elastography (MRE) literature that speak to reproducibility measures of the bulk organ, these results establish MIE reproducibility at the voxel level; i.e., the reproducibility of locally-defined mechanical property measurements throughout the tumor volume.
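
    Voxel-level Bland-Altman analysis of the kind reported above can be sketched as follows; the synthetic stiffness maps, the noise level, and the definition of the reproducibility index as the 95% limit of agreement scaled to a maximum stiffness are assumptions for illustration.

      import numpy as np

      def bland_altman(scan1, scan2, scale_max=None):
          """Voxel-wise Bland-Altman statistics for two co-registered stiffness maps.
          Returns the mean difference (bias), the 95% limits of agreement and a
          reproducibility index expressed as a percentage of `scale_max`."""
          d = (scan1 - scan2).ravel()
          bias = d.mean()
          loa = 1.96 * d.std(ddof=1)
          if scale_max is None:
              scale_max = max(scan1.max(), scan2.max())
          return bias, (bias - loa, bias + loa), 100.0 * loa / scale_max

      # Synthetic repeat elasticity maps over a small "tumour" volume
      rng = np.random.default_rng(7)
      stiffness = rng.uniform(1.0, 10.0, size=(20, 20, 10))      # arbitrary kPa-like units
      repeat = stiffness + rng.normal(0, 0.6, stiffness.shape)   # measurement noise

      bias, limits, repro_index = bland_altman(stiffness, repeat, scale_max=10.0)
      print(f"bias = {bias:.2f}, 95% LoA = ({limits[0]:.2f}, {limits[1]:.2f}), "
            f"reproducibility index = {repro_index:.1f}% of max stiffness")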

  16. A standardised and reproducible model of intra-abdominal infection and abscess formation in rats

    NARCIS (Netherlands)

    Bosscha, K; Nieuwenhuijs, VB; Gooszen, AW; van Duijvenbode-Beumer, H; Visser, MR; Verweij, Willem; Akkermans, LMA

    2000-01-01

    Objective: To develop a standardised and reproducible model of intra-abdominal infection and abscess formation in rats. Design: Experimental study. Setting: University hospital, The Netherlands. Subjects: 36 adult male Wistar rats. Interventions: In 32 rats, peritonitis was produced using two differ

  17. Qualitative and quantitative histopathology in transitional cell carcinomas of the urinary bladder. An international investigation of intra- and interobserver reproducibility

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Sasaki, M; Fukuzawa, S

    1994-01-01

    BACKGROUND: Histopathologic, prognosis-related grading of malignancy by means of morphologic examination in transitional cell carcinomas of the urinary bladder (TCC) may be subject to observer variation, resulting in a reduced level of reproducibility. This may confound comparisons of treatment...... results. Using objective, unbiased stereologic techniques and ordinary histomorphometry, such problems may be solved. EXPERIMENTAL DESIGN: A study of 110 patients with papillary or solid transitional cell carcinomas of the urinary bladder in stage Ta through T4 was carried out, addressing reproducibility...

  18. A force-based model to reproduce stop-and-go waves in pedestrian dynamics

    CERN Document Server

    Chraibi, Mohcine; Schadschneider, Andreas

    2015-01-01

    Stop-and-go waves in single-file movement are a phenomenon that is observed empirically in pedestrian dynamics. It manifests itself by the co-existence of two phases: moving and stopping pedestrians. We show analytically, based on a simplified one-dimensional scenario, that under some conditions the system can have unstable homogeneous solutions. Hence, oscillations in the trajectories and instabilities emerge during simulations. To our knowledge there exists no force-based model which is collision- and oscillation-free and at the same time can reproduce phase separation. We develop a new force-based model for pedestrian dynamics able to reproduce qualitatively the phenomenon of phase separation. We investigate analytically the stability condition of the model and define regimes of parameter values where phase separation can be observed. We show by means of simulations that the predefined conditions lead in fact to the expected behavior and validate our model with respect to empirical findings.

  19. Using the mouse to model human disease: increasing validity and reproducibility

    Directory of Open Access Journals (Sweden)

    Monica J. Justice

    2016-02-01

    Full Text Available Experiments that use the mouse as a model for disease have recently come under scrutiny because of the repeated failure of data, particularly derived from preclinical studies, to be replicated or translated to humans. The usefulness of mouse models has been questioned because of irreproducibility and poor recapitulation of human conditions. Newer studies, however, point to bias in reporting results and improper data analysis as key factors that limit reproducibility and validity of preclinical mouse research. Inaccurate and incomplete descriptions of experimental conditions also contribute. Here, we provide guidance on best practice in mouse experimentation, focusing on appropriate selection and validation of the model, sources of variation and their influence on phenotypic outcomes, minimum requirements for control sets, and the importance of rigorous statistics. Our goal is to raise the standards in mouse disease modeling to enhance reproducibility, reliability and clinical translation of findings.

  20. Anatomical Reproducibility of a Head Model Molded by a Three-dimensional Printer.

    Science.gov (United States)

    Kondo, Kosuke; Nemoto, Masaaki; Masuda, Hiroyuki; Okonogi, Shinichi; Nomoto, Jun; Harada, Naoyuki; Sugo, Nobuo; Miyazaki, Chikao

    2015-01-01

    We prepared rapid prototyping models of heads with unruptured cerebral aneurysm based on image data of computed tomography angiography (CTA) using a three-dimensional (3D) printer. The objective of this study was to evaluate the anatomical reproducibility and accuracy of these models by comparison with the CTA images on a monitor. The subjects were 22 patients with unruptured cerebral aneurysm who underwent preoperative CTA. Reproducibility of the microsurgical anatomy of skull bone and arteries, the length and thickness of the main arteries, and the size of cerebral aneurysm were compared between the CTA image and rapid prototyping model. The microsurgical anatomy and arteries were favorably reproduced, apart from a few minute regions, in the rapid prototyping models. No significant difference was noted in the measured lengths of the main arteries between the CTA image and rapid prototyping model, but statistically significant errors were noted in their thickness in the models molded by the 3D printer. It was concluded that these models are useful tools for neurosurgical simulation. The thickness of the main arteries and the size of the cerebral aneurysm should be judged comprehensively, alongside other neuroimaging, in consideration of these errors.

  1. Reproducibility of Quantitative Brain Imaging Using a PET-Only and a Combined PET/MR System

    Science.gov (United States)

    Lassen, Martin L.; Muzik, Otto; Beyer, Thomas; Hacker, Marcus; Ladefoged, Claes Nøhr; Cal-González, Jacobo; Wadsak, Wolfgang; Rausch, Ivo; Langer, Oliver; Bauer, Martin

    2017-01-01

    The purpose of this study was to test the feasibility of migrating a quantitative brain imaging protocol from a positron emission tomography (PET)-only system to an integrated PET/MR system. Potential differences in both absolute radiotracer concentration as well as in the derived kinetic parameters as a function of PET system choice have been investigated. Five healthy volunteers underwent dynamic (R)-[11C]verapamil imaging on the same day using a GE-Advance (PET-only) and a Siemens Biograph mMR system (PET/MR). PET-emission data were reconstructed using a transmission-based attenuation correction (AC) map (PET-only), whereas a standard MR-DIXON as well as a low-dose CT AC map was applied to PET/MR emission data. Kinetic modeling based on arterial blood sampling was performed using a 1-tissue-2-rate constant compartment model, yielding kinetic parameters (K1 and k2) and distribution volume (VT). Differences for parametric values obtained in the PET-only and the PET/MR systems were analyzed using a 2-way Analysis of Variance (ANOVA). Comparison of DIXON-based AC (PET/MR) with emission data derived from the PET-only system revealed statistically significant average inter-system differences of −33 ± 14%. Use of the low-dose CT AC map for PET/MR resulted in slightly lower systematic differences of −16 ± 18% for K1 and −9 ± 10% for k2. The average differences in VT were −18 ± 10% and statistically significant, reflecting the discrepancy between PET/MR and PET-only imaging due to the different standard AC methods employed. Therefore, a transfer of imaging protocols from PET-only to PET/MR systems is not straightforward without application of proper correction methods. Clinical Trial Registration: www.clinicaltrialsregister.eu, identifier 2013-001724-19 PMID:28769742
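
    A minimal sketch of the 1-tissue-2-rate-constant compartment model used above is given below: the tissue curve is the arterial input convolved with K1·exp(−k2·t), and VT = K1/k2. The input function, noise level, and parameter values are invented, and the fit is far simpler than the study's full analysis.

      import numpy as np
      from scipy.optimize import curve_fit

      t = np.linspace(0, 60, 121)            # minutes, uniform grid
      dt = t[1] - t[0]
      cp = 30 * t * np.exp(-t / 1.5) + 0.5   # hypothetical arterial input function

      def one_tissue(t, k1, k2):
          """1-tissue, 2-rate-constant model: C_T = K1 * (Cp conv exp(-k2*t))."""
          irf = np.exp(-k2 * t)
          return k1 * np.convolve(cp, irf)[: t.size] * dt

      rng = np.random.default_rng(11)
      true_k1, true_k2 = 0.4, 0.12           # assumed mL/min/mL and 1/min
      tissue = one_tissue(t, true_k1, true_k2) + rng.normal(0, 0.2, t.size)

      (k1_hat, k2_hat), _ = curve_fit(one_tissue, t, tissue, p0=(0.2, 0.05))
      print(f"K1 = {k1_hat:.3f}, k2 = {k2_hat:.3f}, VT = K1/k2 = {k1_hat / k2_hat:.2f}")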

  2. Quantitative assessment of left ventricular mechanical dyssynchrony using cine cardiovascular magnetic resonance imaging: Inter-study reproducibility.

    Science.gov (United States)

    Kowallick, Johannes T; Morton, Geraint; Lamata, Pablo; Jogiya, Roy; Kutty, Shelby; Hasenfuß, Gerd; Lotz, Joachim; Chiribiri, Amedeo; Nagel, Eike; Schuster, Andreas

    2017-01-01

    Objective: To determine the inter-study reproducibility of left ventricular (LV) mechanical dyssynchrony measures based on standard cardiovascular magnetic resonance (CMR) cine images. Methods: Steady-state free precession (SSFP) LV short-axis stacks and three long-axes were acquired on the same day at three time points. Circumferential strain systolic dyssynchrony indexes (SDI), area-SDI as well as circumferential and radial uniformity ratio estimates (CURE and RURE, respectively) were derived from CMR myocardial feature-tracking (CMR-FT) based on the tracking of three SSFP short-axis planes. Furthermore, 4D-LV-analysis based on SSFP short-axis stacks and longitudinal planes was performed to quantify 4D-volume-SDI. Setting: A single-centre London teaching hospital. Participants: 16 healthy volunteers. Main outcome measures: Inter-study reproducibility between the repeated exams. Results: CURE and RURE as well as 4D-volume-SDI showed good inter-study reproducibility (coefficient of variation [CoV] 6.4%-12.9%). Circumferential strain and area-SDI showed higher variability between the repeated measurements (CoV 24.9%-37.5%). Uniformity ratio estimates showed the lowest inter-study variability (CoV 6.4%-8.5%). Conclusions: Derivation of LV mechanical dyssynchrony measures from standard cine images is feasible using CMR-FT and 4D-LV-analysis tools. Uniformity ratio estimates and 4D-volume-SDI showed good inter-study reproducibility. Their clinical value should next be explored in patients who potentially benefit from cardiac resynchronization therapy.

  3. The mathematics of cancer: integrating quantitative models.

    Science.gov (United States)

    Altrock, Philipp M; Liu, Lin L; Michor, Franziska

    2015-12-01

    Mathematical modelling approaches have become increasingly abundant in cancer research. The complexity of cancer is well suited to quantitative approaches as it provides challenges and opportunities for new developments. In turn, mathematical modelling contributes to cancer research by helping to elucidate mechanisms and by providing quantitative predictions that can be validated. The recent expansion of quantitative models addresses many questions regarding tumour initiation, progression and metastases as well as intra-tumour heterogeneity, treatment responses and resistance. Mathematical models can complement experimental and clinical studies, but also challenge current paradigms, redefine our understanding of mechanisms driving tumorigenesis and shape future research in cancer biology.

  4. Quantitative analysis of relationships between irradiation parameters and the reproducibility of cyclotron-produced (99m)Tc yields.

    Science.gov (United States)

    Tanguay, J; Hou, X; Buckley, K; Schaffer, P; Bénard, F; Ruth, T J; Celler, A

    2015-05-21

    Cyclotron production of (99m)Tc through the (100)Mo(p,2n) (99m)Tc reaction channel is actively being investigated as an alternative to reactor-based (99)Mo generation by nuclear fission of (235)U. An exciting aspect of this approach is that it can be implemented using currently-existing cyclotron infrastructure to supplement, or potentially replace, conventional (99m)Tc production methods that are based on aging and increasingly unreliable nuclear reactors. Successful implementation will require consistent production of large quantities of high-radionuclidic-purity (99m)Tc. However, variations in proton beam currents and the thickness and isotopic composition of enriched (100)Mo targets, in addition to other irradiation parameters, may degrade reproducibility of both radionuclidic purity and absolute (99m)Tc yields. The purpose of this article is to present a method for quantifying relationships between random variations in production parameters, including (100)Mo target thicknesses and proton beam currents, and reproducibility of absolute (99m)Tc yields (defined as the end of bombardment (EOB) (99m)Tc activity). Using the concepts of linear error propagation and the theory of stochastic point processes, we derive a mathematical expression that quantifies the influence of variations in various irradiation parameters on yield reproducibility, quantified in terms of the coefficient of variation of the EOB (99m)Tc activity. The utility of the developed formalism is demonstrated with an example. We show that achieving less than 20% variability in (99m)Tc yields will require highly-reproducible target thicknesses and proton currents. These results are related to the service rate which is defined as the percentage of (99m)Tc production runs that meet the minimum daily requirement of one (or many) nuclear medicine departments. For example, we show that achieving service rates of 84.0%, 97.5% and 99.9% with 20% variations in target thicknesses requires producing on average
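
    In the spirit of the linear error propagation described above, a toy calculation is sketched below: if the EOB activity is assumed to scale roughly linearly with beam current and target thickness, the yield CV follows from the component CVs, and the service rate is the probability that a (normally distributed) yield exceeds the daily minimum. The numbers and the linear-scaling assumption are illustrative only; the paper's actual expression is more detailed.

      import numpy as np
      from scipy.stats import norm

      def yield_cv(cv_current, cv_thickness, cv_other=0.0):
          """First-order error propagation assuming the EOB activity scales roughly
          linearly with beam current and target thickness (a simplifying assumption)."""
          return np.sqrt(cv_current**2 + cv_thickness**2 + cv_other**2)

      def service_rate(mean_activity, cv_activity, min_required):
          """Fraction of production runs whose EOB activity meets the daily minimum,
          assuming normally distributed yields."""
          sd = cv_activity * mean_activity
          return 1.0 - norm.cdf(min_required, loc=mean_activity, scale=sd)

      cv = yield_cv(cv_current=0.05, cv_thickness=0.20)           # 5% and 20% variations
      rate = service_rate(mean_activity=350.0, cv_activity=cv,
                          min_required=250.0)                     # hypothetical GBq values
      print(f"predicted yield CV = {cv:.1%}, service rate = {rate:.1%}")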

  5. Validation of EURO-CORDEX regional climate models in reproducing the variability of precipitation extremes in Romania

    Science.gov (United States)

    Dumitrescu, Alexandru; Busuioc, Aristita

    2016-04-01

    EURO-CORDEX is the European branch of the international CORDEX initiative that aims to provide improved regional climate change projections for Europe. The main objective of this paper is to document the performance of the individual models in reproducing the variability of precipitation extremes in Romania. Here an ensemble of three EURO-CORDEX regional climate models (RCMs) (scenario RCP4.5) is analysed and inter-compared: DMI-HIRHAM5, KNMI-RACMO2.2 and MPI-REMO. Compared to previous studies, in which RCM validation for the Romanian climate has mainly been made on the mean state and at station scale, a more quantitative approach to precipitation extremes is proposed. In this respect, to have a more reliable comparison with observation, a high-resolution daily precipitation gridded data set was used as observational reference (CLIMHYDEX project). The comparison between the RCM outputs and observed grid point values has been made by calculating three extreme precipitation indices, recommended by the Expert Team on Climate Change Detection Indices (ETCCDI), for the 1976-2005 period: R10MM, annual count of days when precipitation ≥ 10 mm; RX5DAY, annual maximum 5-day precipitation; and R95P%, precipitation fraction of annual total precipitation due to daily precipitation > 95th percentile. The RCMs' capability to reproduce the mean state for these variables, as well as the main modes of their spatial variability (given by the first three EOF patterns), is analysed. The investigation confirms the ability of the RCMs to simulate the main features of precipitation extreme variability over Romania, but some deficiencies in reproducing their regional characteristics were found (for example, overestimation of the mean state, especially over the extra-Carpathian regions). This work has been realised within the research project "Changes in climate extremes and associated impact in hydrological events in Romania" (CLIMHYDEX), code PN II-ID-2011-2-0073, financed by the Romanian
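
    The three ETCCDI indices named above can be computed from a daily precipitation series as sketched below; the synthetic rainfall, the 1 mm wet-day threshold, and the use of the wet-day total in the R95P fraction are assumptions for illustration.

      import numpy as np

      def etccdi_indices(daily_pr, base_p95):
          """R10MM, RX5DAY and R95P for one year of daily precipitation (mm/day).
          `base_p95` is the 95th percentile of wet-day precipitation from a
          reference period (here supplied by the caller)."""
          daily_pr = np.asarray(daily_pr, dtype=float)
          r10mm = int(np.sum(daily_pr >= 10.0))
          # Annual maximum 5-day running precipitation total
          rx5day = max(daily_pr[i:i + 5].sum() for i in range(len(daily_pr) - 4))
          wet = daily_pr[daily_pr >= 1.0]
          r95p = 100.0 * wet[wet > base_p95].sum() / wet.sum() if wet.sum() > 0 else 0.0
          return r10mm, rx5day, r95p

      # Synthetic year of daily precipitation for one grid point
      rng = np.random.default_rng(5)
      pr = rng.gamma(shape=0.4, scale=6.0, size=365) * (rng.random(365) < 0.45)
      p95_reference = np.percentile(pr[pr >= 1.0], 95)   # stand-in for a 1976-2005 base period

      print(etccdi_indices(pr, p95_reference))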

  6. Cellular automaton model in the fundamental diagram approach reproducing the synchronized outflow of wide moving jams

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Jun-fang, E-mail: tianhustbjtu@hotmail.com [MOE Key Laboratory for Urban Transportation Complex Systems Theory and Technology, Beijing Jiaotong University, Beijing 100044 (China); Yuan, Zhen-zhou; Jia, Bin; Fan, Hong-qiang; Wang, Tao [MOE Key Laboratory for Urban Transportation Complex Systems Theory and Technology, Beijing Jiaotong University, Beijing 100044 (China)

    2012-09-10

    Velocity effect and critical velocity are incorporated into the average space gap cellular automaton model [J.F. Tian, et al., Phys. A 391 (2012) 3129], which was able to reproduce many spatiotemporal dynamics reported by the three-phase theory except the synchronized outflow of wide moving jams. The physics of traffic breakdown has been explained. Various congested patterns induced by the on-ramp are reproduced. It is shown that the occurrence of synchronized outflow and of free outflow of wide moving jams is closely related to drivers' time delay in acceleration at the downstream jam front and to the critical velocity, respectively. -- Highlights: ► Velocity effect is added into the average space gap cellular automaton model. ► The physics of traffic breakdown has been explained. ► The probabilistic nature of traffic breakdown is simulated. ► Various congested patterns induced by the on-ramp are reproduced. ► The occurrence of synchronized outflow of jams depends on drivers' time delay.

  7. Hidden-variable models for the spin singlet: I. Non-local theories reproducing quantum mechanics

    CERN Document Server

    Di Lorenzo, Antonio

    2011-01-01

    A non-local hidden variable model reproducing the quantum mechanical probabilities for a spin singlet is presented. The non-locality is concentrated in the distribution of the hidden variables. The model otherwise satisfies both the hypothesis of outcome independence, made in the derivation of Bell inequality, and of compliance with Malus's law, made in the derivation of Leggett inequality. It is shown through the prescription of a protocol that the non-locality can be exploited to send information instantaneously provided that the hidden variables can be measured, even though they cannot be controlled.

  8. On some problems with reproducing the Standard Model fields and interactions in five-dimensional warped brane world models

    CERN Document Server

    Smolyakov, Mikhail N

    2015-01-01

    In the present paper we discuss some problems which arise when the matter, gauge and Higgs fields are allowed to propagate in the bulk of five-dimensional brane world models with a compact extra dimension and their zero Kaluza-Klein modes are supposed to exactly reproduce the Standard Model fields and their interactions.

  9. Reproducibility blues.

    Science.gov (United States)

    Pulverer, Bernd

    2015-11-12

    Research findings advance science only if they are significant, reliable and reproducible. Scientists and journals must publish robust data in a way that renders it optimally reproducible. Reproducibility has to be incentivized and supported by the research infrastructure but without dampening innovation.

  10. Current reinforcement model reproduces center-in-center vein trajectory of Physarum polycephalum.

    Science.gov (United States)

    Akita, Dai; Schenz, Daniel; Kuroda, Shigeru; Sato, Katsuhiko; Ueda, Kei-Ichi; Nakagaki, Toshiyuki

    2017-06-01

    Vein networks span the whole body of the amoeboid organism in the plasmodial slime mould Physarum polycephalum, and the network topology is rearranged within an hour in response to spatio-temporal variations of the environment. It has been reported that this tube morphogenesis is capable of solving mazes, and a mathematical model, named the 'current reinforcement rule', was proposed based on the adaptability of the veins. Although it is known that this model works well for reproducing some key characters of the organism's maze-solving behaviour, one important issue is still open: In the real organism, the thick veins tend to trace the shortest possible route by cutting the corners at the turn of corridors, following a center-in-center trajectory, but it has not yet been examined whether this feature also appears in the mathematical model, using corridors of finite width. In this report, we confirm that the mathematical model reproduces the center-in-center trajectory of veins around corners observed in the maze-solving experiment. © 2017 Japanese Society of Developmental Biologists.
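
    A generic sketch of a current reinforcement rule on a small grid graph is given below: node pressures are obtained from Kirchhoff's law, and edge conductivities are reinforced in proportion to the flux they carry while unused edges decay. The grid size, time step, exponent, and threshold for "surviving" veins are invented, and the corridor geometry of the maze experiment is not modelled.

      import numpy as np

      def current_reinforcement(height=5, width=9, steps=400, dt=0.05, gamma=1.8):
          """Minimal current-reinforcement sketch on a grid graph with unit edge lengths."""
          n = height * width
          idx = lambda r, c: r * width + c
          edges = []
          for r in range(height):
              for c in range(width):
                  if c + 1 < width:
                      edges.append((idx(r, c), idx(r, c + 1)))
                  if r + 1 < height:
                      edges.append((idx(r, c), idx(r + 1, c)))
          edges = np.array(edges)
          D = np.ones(len(edges))                       # edge conductivities
          source, sink = idx(height // 2, 0), idx(height // 2, width - 1)

          for _ in range(steps):
              # Kirchhoff: solve the weighted Laplacian for node pressures (sink grounded)
              L = np.zeros((n, n))
              for (u, v), d in zip(edges, D):
                  L[u, u] += d; L[v, v] += d; L[u, v] -= d; L[v, u] -= d
              b = np.zeros(n); b[source], b[sink] = 1.0, -1.0
              keep = np.arange(n) != sink
              p = np.zeros(n)
              p[keep] = np.linalg.solve(L[np.ix_(keep, keep)], b[keep])
              Q = D * (p[edges[:, 0]] - p[edges[:, 1]])
              # Adaptation: reinforce edges carrying flux, let unused edges decay
              D += dt * (np.abs(Q) ** gamma / (1.0 + np.abs(Q) ** gamma) - D)

          return edges[D > 0.1], D                      # surviving "veins"

      veins, D = current_reinforcement()
      print(f"{len(veins)} edges survive out of {len(D)}")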

  11. Quantitative Performance Evaluator for Proteomics (QPEP): Web-based Application for Reproducible Evaluation of Proteomics Preprocessing Methods.

    Science.gov (United States)

    Strbenac, Dario; Zhong, Ling; Raftery, Mark J; Wang, Penghao; Wilson, Susan R; Armstrong, Nicola J; Yang, Jean Y H

    2017-07-07

    Tandem mass spectrometry is one of the most popular techniques for quantitation of proteomes. There exists a large variety of options in each stage of data preprocessing that impact the bias and variance of the summarized protein-level values. Using a newly released data set satisfying a replicated Latin squares design, a diverse set of performance metrics has been developed and implemented in a web-based application, Quantitative Performance Evaluator for Proteomics (QPEP). QPEP has the flexibility to allow users to apply their own method to preprocess this data set and share the results, allowing direct and straightforward comparison of new methodologies. Application of these new metrics to three case studies highlights that (i) the summarization of peptides to proteins is robust to the choice of peptide summary used, (ii) the differences between iTRAQ labels are stronger than the differences between experimental runs, and (iii) the commercial software ProteinPilot performs equivalently well at between-sample normalization to more complicated methods developed by academics. Importantly, finding (ii) underscores the benefits of using the principles of randomization and blocking to avoid the experimental measurements being confounded by technical factors. Data are available via ProteomeXchange with identifier PXD003608.

  12. QSAR model reproducibility and applicability: a case study of rate constants of hydroxyl radical reaction models applied to polybrominated diphenyl ethers and (benzo-)triazoles.

    Science.gov (United States)

    Roy, Partha Pratim; Kovarich, Simona; Gramatica, Paola

    2011-08-01

    The crucial importance of the three central OECD principles for quantitative structure-activity relationship (QSAR) model validation is highlighted in a case study of tropospheric degradation of volatile organic compounds (VOCs) by OH, applied to two CADASTER chemical classes (PBDEs and (benzo-)triazoles). The application of any QSAR model to chemicals without experimental data largely depends on model reproducibility by the user. The reproducibility of an unambiguous algorithm (OECD Principle 2) is guaranteed by redeveloping MLR models based on both an updated version of the DRAGON software for molecular descriptor calculation and some freely available online descriptors. The Genetic Algorithm has confirmed its ability to always select the most informative descriptors independently of the input pool of variables. The ability of the GA-selected descriptors to model chemicals not used in model development is verified by three different splittings (random by response, K-ANN and K-means clustering), thus ensuring the external predictivity of the new models, independently of the training/prediction set composition (OECD Principle 5). The relevance of checking the structural applicability domain becomes very evident on comparing the predictions for CADASTER chemicals, using the new models proposed herein, with those obtained by EPI Suite. Copyright © 2011 Wiley Periodicals, Inc.

  13. [Amniocentesis trainer: development of a cheap and reproducible new training model].

    Science.gov (United States)

    Tassin, M; Cordier, A-G; Laher, G; Benachi, A; Mandelbrot, L

    2012-11-01

    Amniocentesis is the most common invasive procedure for prenatal diagnosis. It is essential to master this sampling technique prior to performing more complex ultrasound-guided interventions (cordocentesis, drain insertion). Training is a challenge because of the risks associated with the procedure, as well as the impact on the patient's anxiety. An amniocentesis simulator allows for safe training and repeated interventions, thus accelerating the learning curve, and also allows for periodic evaluation of proficiency. We present here a new, simple, and cost-effective amniotrainer model that reproduces real-life conditions, using chicken breast and condoms filled with water.

  14. Extreme Rainfall Events Over Southern Africa: Assessment of a Climate Model to Reproduce Daily Extremes

    Science.gov (United States)

    Williams, C.; Kniveton, D.; Layberry, R.

    2007-12-01

    It is increasingly accepted that any possible climate change will not only have an influence on mean climate but may also significantly alter climatic variability. This issue is of particular importance for environmentally vulnerable regions such as southern Africa. The subcontinent is considered especially vulnerable to extreme events, due to a number of factors including extensive poverty, disease and political instability. Rainfall variability and the identification of rainfall extremes is a function of scale, so high spatial and temporal resolution data are preferred to identify extreme events and accurately predict future variability. The majority of previous climate model verification studies have compared model output with observational data at monthly timescales. In this research, the assessment of a state-of-the-art climate model to simulate climate at daily timescales is carried out using satellite-derived rainfall data from the Microwave Infra-Red Algorithm (MIRA). This dataset covers the period 1993-2002 and the whole of southern Africa at a spatial resolution of 0.1 degree longitude/latitude. Once the model's ability to reproduce extremes has been assessed, idealised regions of SST anomalies are used to force the model, with the overall aim of investigating the ways in which SST anomalies influence rainfall extremes over southern Africa. In this paper, results from sensitivity testing of the UK Meteorological Office Hadley Centre's climate model's domain size are firstly presented. Then simulations of current climate from the model, operating in both regional and global mode, are compared to the MIRA dataset at daily timescales. Thirdly, the ability of the model to reproduce daily rainfall extremes will be assessed, again by a comparison with extremes from the MIRA dataset. Finally, the results from the idealised SST experiments are briefly presented, suggesting associations between rainfall extremes and both local and remote SST anomalies.

  15. A novel highly reproducible and lethal nonhuman primate model for orthopox virus infection.

    Directory of Open Access Journals (Sweden)

    Marit Kramski

    Full Text Available The intentional re-introduction of Variola virus (VARV), the agent of smallpox, into the human population is of great concern due to its bio-terroristic potential. Moreover, zoonotic infections with Cowpox virus (CPXV) and Monkeypox virus (MPXV) cause severe diseases in humans. Smallpox vaccines presently available can have severe adverse effects that are no longer acceptable. The efficacy and safety of new vaccines and antiviral drugs for use in humans can only be demonstrated in animal models. The existing nonhuman primate models, using VARV and MPXV, need very high viral doses that have to be applied intravenously or intratracheally to induce a lethal infection in macaques. To overcome these drawbacks, the infectivity and pathogenicity of a particular CPXV was evaluated in the common marmoset (Callithrix jacchus). A CPXV named calpox virus was isolated from a lethal orthopox virus (OPV) outbreak in New World monkeys. We demonstrated that marmosets infected with calpox virus, not only via the intravenous but also the intranasal route, reproducibly develop symptoms resembling smallpox in humans. Infected animals died within 1-3 days after onset of symptoms, even when very low infectious viral doses of 5×10² pfu were applied intranasally. Infectious virus was demonstrated in blood, saliva and all organs analyzed. We present the first characterization of a new OPV infection model inducing a disease in common marmosets comparable to smallpox in humans. Intranasal virus inoculation mimicking the natural route of smallpox infection led to reproducible infection. In vivo titration resulted in an MID₅₀ (minimal monkey infectious dose 50%) of 8.3×10² pfu of calpox virus, which is approximately 10,000-fold lower than MPXV and VARV doses applied in the macaque models. Therefore, the calpox virus/marmoset model is a suitable nonhuman primate model for the validation of vaccines and antiviral drugs. Furthermore, this model can help study mechanisms of OPV pathogenesis.

  16. Digital versus plaster study models: how accurate and reproducible are they?

    Science.gov (United States)

    Abizadeh, Neilufar; Moles, David R; O'Neill, Julian; Noar, Joseph H

    2012-09-01

    Objective: To compare measurements of occlusal relationships and arch dimensions taken from digital study models with those taken from plaster models. Design: Laboratory study. Setting: The Orthodontic Department, Kettering General Hospital, Kettering, UK. Methods and materials: One hundred and twelve sets of study models with a range of malocclusions and various degrees of crowding were selected. Occlusal features were measured manually with digital callipers on the plaster models. The same measurements were performed on digital images of the study models. Each method was carried out twice in order to check for intra-operator variability. The repeatability and reproducibility of the methods were assessed. Results: Statistically significant differences between the two methods were found. In 8 of the 16 occlusal features measured, the plaster measurements were more repeatable. However, those differences were not of sufficient magnitude to have clinical relevance. In addition, there were statistically significant systematic differences for 12 of the 16 occlusal features, with the plaster measurements being greater for 11 of these, indicating that the digital model scans were not a true 1:1 representation of the plaster models. Conclusions: The repeatability of digital models compared with plaster models is satisfactory for clinical applications, although this study demonstrated some systematic differences. Digital study models can therefore be considered for use as an adjunct to clinical assessment of the occlusion, but as yet may not supersede current methods for scientific purposes.

  17. Computed Tomography of the Human Pineal Gland for Study of the Sleep-Wake Rhythm: Reproducibility of a Semi-Quantitative Approach

    Energy Technology Data Exchange (ETDEWEB)

    Schmitz, S.A.; Platzek, I.; Kunz, D.; Mahlberg, R.; Wolf, K.J.; Heidenreich, J.O. [Charite - Universitaetsmedizin Berlin, Campus Benjamin Franklin, Berlin (Germany). Dept. of Radiology and Nuclear Medicine

    2006-10-15

    Purpose: To propose a semi-quantitative computed tomography (CT) protocol for determining uncalcified pineal tissue (UCPT), and to evaluate its reproducibility in modification of studies showing that the degree of calcification is a potential marker of deficient melatonin production and may prove an instability marker of circadian rhythm. Material and Methods: Twenty-two pineal gland autopsy specimens were scanned in a skull phantom with different slice thickness twice and the uncalcified tissue visually assessed using a four-point scale. The maximum gland density was measured and its inverse graded on a non-linear four-point scale. The sum of both scores was multiplied by the gland volume to yield the UCPT. The within-subject variance of UCPT was determined and compared between scans of different slice thickness. Results: The UCPT of the first measurement, in arbitrary units, was 39 ± 52.5 for 1 mm slice thickness, 44 ± 51.1 for 2 mm, 45 ± 34.8 for 4 mm, and 84 ± 58.0 for 8 mm. Significant differences of within-subject variance of UCPT were found between 1 and 4 mm, 1 and 8 mm, and 2 and 8 mm slice thicknesses (P < 0.05). Conclusion: A superior reproducibility of the semi-quantitative CT determination of UCPT was found using 1 and 2 mm slice thicknesses. These data support the use of thin slices of 1 and 2 mm. The benefit in reproducibility from thin slices has to be carefully weighed against their considerably higher radiation exposure.

  18. Fourier modeling of the BOLD response to a breath-hold task: Optimization and reproducibility.

    Science.gov (United States)

    Pinto, Joana; Jorge, João; Sousa, Inês; Vilela, Pedro; Figueiredo, Patrícia

    2016-07-15

    Cerebrovascular reactivity (CVR) reflects the capacity of blood vessels to adjust their caliber in order to maintain a steady supply of brain perfusion, and it may provide a sensitive disease biomarker. Measurement of the blood oxygen level dependent (BOLD) response to a hypercapnia-inducing breath-hold (BH) task has been frequently used to map CVR noninvasively using functional magnetic resonance imaging (fMRI). However, the best modeling approach for the accurate quantification of CVR maps remains an open issue. Here, we compare and optimize Fourier models of the BOLD response to a BH task with a preparatory inspiration, and assess the test-retest reproducibility of the associated CVR measurements, in a group of 10 healthy volunteers studied over two fMRI sessions. Linear combinations of sine-cosine pairs at the BH task frequency and its successive harmonics were added sequentially in a nested models approach, and were compared in terms of the adjusted coefficient of determination and corresponding variance explained (VE) of the BOLD signal, as well as the number of voxels exhibiting significant BOLD responses, the estimated CVR values, and their test-retest reproducibility. The brain average VE increased significantly with the Fourier model order, up to the 3rd order. However, the number of responsive voxels increased significantly only up to the 2nd order, and started to decrease from the 3rd order onwards. Moreover, no significant relative underestimation of CVR values was observed beyond the 2nd order. Hence, the 2nd order model was concluded to be the optimal choice for the studied paradigm. This model also yielded the best test-retest reproducibility results, with intra-subject coefficients of variation of 12 and 16% and an intra-class correlation coefficient of 0.74. In conclusion, our results indicate that a Fourier series set consisting of a sine-cosine pair at the BH task frequency and its two harmonics is a suitable model for BOLD-fMRI CVR measurements
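
    To make the nested Fourier modelling concrete, the sketch below builds a design matrix of sine-cosine pairs at the breath-hold frequency and its harmonics, fits it by ordinary least squares, and reports the adjusted coefficient of determination. The signal, TR and task period are hypothetical placeholders, not values taken from the study, and the plain least-squares fit stands in for whatever estimation pipeline the authors used.

        import numpy as np

        def fit_fourier_model(bold: np.ndarray, tr: float, task_period: float, order: int):
            """Fit sine-cosine pairs at the task frequency and its harmonics.

            Returns the fitted coefficients and the adjusted R^2 of the model.
            TR and task period here are placeholders, not those of the cited study.
            """
            n = bold.size
            t = np.arange(n) * tr
            f0 = 1.0 / task_period                       # fundamental (task) frequency
            cols = [np.ones(n)]                          # intercept
            for k in range(1, order + 1):                # nested harmonics 1..order
                cols.append(np.sin(2 * np.pi * k * f0 * t))
                cols.append(np.cos(2 * np.pi * k * f0 * t))
            X = np.column_stack(cols)
            beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
            resid = bold - X @ beta
            ss_res = np.sum(resid ** 2)
            ss_tot = np.sum((bold - bold.mean()) ** 2)
            p = X.shape[1] - 1                           # predictors excluding intercept
            adj_r2 = 1.0 - (ss_res / (n - p - 1)) / (ss_tot / (n - 1))
            return beta, adj_r2

        # Synthetic example: 5-minute run, TR = 2 s, 60 s breath-hold cycle, 2nd-order model
        rng = np.random.default_rng(0)
        t = np.arange(150) * 2.0
        signal = np.sin(2 * np.pi * t / 60.0) + 0.5 * rng.standard_normal(t.size)
        _, adj_r2 = fit_fourier_model(signal, tr=2.0, task_period=60.0, order=2)
        print(f"adjusted R^2 = {adj_r2:.2f}")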

  19. On the reproducibility of spatiotemporal traffic dynamics with microscopic traffic models

    CERN Document Server

    Knorr, Florian

    2012-01-01

    Traffic flow is a very prominent example of a driven non-equilibrium system. A characteristic phenomenon of traffic dynamics is the spontaneous and abrupt drop of the average velocity on a stretch of road leading to congestion. Such a traffic breakdown corresponds to a boundary-induced phase transition from free flow to congested traffic. In this paper, we study the ability of selected microscopic traffic models to reproduce a traffic breakdown, and we investigate its spatiotemporal dynamics. For our analysis, we use empirical traffic data from stationary loop detectors on a German Autobahn showing a spontaneous breakdown. We then present several methods to assess the results and compare the models with each other. In addition, we will also discuss some important modeling aspects and their impact on the resulting spatiotemporal pattern. The investigation of different downstream boundary conditions, for example, shows that the physical origin of the traffic breakdown may be artificially induced by the setup of the boundaries.

  20. On the reproducibility of spatiotemporal traffic dynamics with microscopic traffic models

    Science.gov (United States)

    Knorr, Florian; Schreckenberg, Michael

    2012-10-01

    Traffic flow is a very prominent example of a driven non-equilibrium system. A characteristic phenomenon of traffic dynamics is the spontaneous and abrupt drop of the average velocity on a stretch of road leading to congestion. Such a traffic breakdown corresponds to a boundary-induced phase transition from free flow to congested traffic. In this paper, we study the ability of selected microscopic traffic models to reproduce a traffic breakdown, and we investigate its spatiotemporal dynamics. For our analysis, we use empirical traffic data from stationary loop detectors on a German Autobahn showing a spontaneous breakdown. We then present several methods to assess the results and compare the models with each other. In addition, we will also discuss some important modeling aspects and their impact on the resulting spatiotemporal pattern. The investigation of different downstream boundary conditions, for example, shows that the physical origin of the traffic breakdown may be artificially induced by the setup of the boundaries.
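
    As one illustration of the class of microscopic models discussed here, the sketch below implements the Nagel-Schreckenberg cellular automaton, a widely used single-lane model; the abstract does not state which models were actually evaluated, and the parameters (ring length, density, dawdling probability) are illustrative only.

        import random

        def nasch_step(pos, vel, length, v_max=5, p_slow=0.3):
            """One parallel update of the Nagel-Schreckenberg single-lane traffic CA."""
            n = len(pos)
            order = sorted(range(n), key=lambda i: pos[i])
            new_pos, new_vel = pos[:], vel[:]
            for idx, i in enumerate(order):
                ahead = order[(idx + 1) % n]                   # next car downstream
                gap = (pos[ahead] - pos[i] - 1) % length       # empty cells in between
                v = min(vel[i] + 1, v_max)                     # 1) acceleration
                v = min(v, gap)                                # 2) braking
                if v > 0 and random.random() < p_slow:         # 3) random slow-down
                    v -= 1
                new_vel[i] = v
                new_pos[i] = (pos[i] + v) % length             # 4) movement
            return new_pos, new_vel

        # Illustrative run: 100 cars on a 1000-cell ring (density 0.1)
        random.seed(1)
        length, n_cars = 1000, 100
        pos = sorted(random.sample(range(length), n_cars))
        vel = [0] * n_cars
        for _ in range(500):
            pos, vel = nasch_step(pos, vel, length)
        print("mean velocity (cells per step):", sum(vel) / n_cars)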

  1. Quantitative structure - mesothelioma potency model ...

    Science.gov (United States)

    Cancer potencies of mineral and synthetic elongated particle (EP) mixtures, including asbestos fibers, are influenced by changes in fiber dose composition, bioavailability, and biodurability in combination with relevant cytotoxic dose-response relationships. A unique and comprehensive rat intra-pleural (IP) dose characterization data set with a wide variety of EP size, shape, crystallographic, chemical, and bio-durability properties facilitated extensive statistical analyses of 50 rat IP exposure test results for evaluation of alternative dose pleural mesothelioma response models. Utilizing logistic regression, maximum likelihood evaluations of thousands of alternative dose metrics based on hundreds of individual EP dimensional variations within each test sample, four major findings emerged: (1) data for simulations of short-term EP dose changes in vivo (mild acid leaching) provide superior predictions of tumor incidence compared to non-acid leached data; (2) sum of the EP surface areas (ΣSA) from these mildly acid-leached samples provides the optimum holistic dose-response model; (3) progressive removal of dose associated with very short and/or thin EPs significantly degrades resultant ΣEP or ΣSA dose-based predictive model fits, as judged by Akaike’s Information Criterion (AIC); and (4) alternative, biologically plausible model adjustments provide evidence for reduced potency of EPs with length/width (aspect) ratios 80 µm. Regar
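
    The model comparison described above rests on fitting logistic dose-response models by maximum likelihood and ranking candidate dose metrics by AIC. A generic sketch of that procedure follows, with entirely synthetic dose and tumour-incidence data; the log10 dose transform, the two-parameter model, and all numbers are assumptions for illustration and do not reproduce any of the study's EP metrics or results.

        import numpy as np
        from scipy.optimize import minimize

        def fit_logistic_dose_response(dose, tumours, animals):
            """Fit P(tumour) = 1 / (1 + exp(-(b0 + b1 * log10(dose)))) by maximum
            likelihood and return the estimates together with AIC = 2k - 2 logL.
            The constant binomial-coefficient term is omitted; it does not affect
            comparisons between dose metrics fitted to the same animals."""
            x = np.log10(dose)

            def neg_log_lik(beta):
                p = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * x)))
                p = np.clip(p, 1e-9, 1 - 1e-9)
                return -np.sum(tumours * np.log(p) + (animals - tumours) * np.log(1 - p))

            res = minimize(neg_log_lik, x0=np.zeros(2), method="Nelder-Mead")
            aic = 2 * 2 + 2 * res.fun
            return res.x, aic

        # Synthetic example: tumour incidence rising with a hypothetical dose metric
        dose = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0])
        animals = np.full(6, 30)
        tumours = np.array([1, 3, 8, 15, 22, 27])
        beta, aic = fit_logistic_dose_response(dose, tumours, animals)
        print("intercept, slope:", beta, "AIC:", aic)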

  2. Compositional and Quantitative Model Checking

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand

    2010-01-01

    on the existence of a quotient construction, allowing a property φ of a parallel system A‖B to be transformed into a sufficient and necessary quotient-property φ/A to be satisfied by the component B. Given a model checking problem involving a network P₁‖…‖Pₙ and a property φ, the method gradually moves (by...

  3. Reproducibility, reliability and validity of measurements obtained from Cecile3 digital models

    Directory of Open Access Journals (Sweden)

    Gustavo Adolfo Watanabe-Kanno

    2009-09-01

    Full Text Available The aim of this study was to determine the reproducibility, reliability and validity of measurements in digital models compared to plaster models. Fifteen pairs of plaster models were obtained from orthodontic patients with permanent dentition before treatment. These were digitized to be evaluated with the program Cécile3 v2.554.2 beta. Two examiners measured, three times each, the mesiodistal width of all teeth present, the intercanine, interpremolar and intermolar distances, and overjet and overbite. The plaster models were measured using a digital vernier calliper. Student's t-test for paired samples and the intraclass correlation coefficient (ICC) were used for statistical analysis. The ICCs of the digital models were 0.84 ± 0.15 (intra-examiner) and 0.80 ± 0.19 (inter-examiner). The average mean difference of the digital models was 0.23 ± 0.14 and 0.24 ± 0.11 for each examiner, respectively. When the two types of measurements were compared, the values obtained from the digital models were lower than those obtained from the plaster models (p < 0.05), although the differences were considered clinically insignificant (differences < 0.1 mm). The Cécile digital models are a clinically acceptable alternative for use in Orthodontics.

  4. Assessment of the reliability of reproducing two-dimensional resistivity models using an image processing technique.

    Science.gov (United States)

    Ishola, Kehinde S; Nawawi, Mohd Nm; Abdullah, Khiruddin; Sabri, Ali Idriss Aboubakar; Adiat, Kola Abdulnafiu

    2014-01-01

    This study attempts to combine the results of geophysical images obtained from three commonly used electrode configurations using an image processing technique in order to assess their capabilities to reproduce two-dimensional (2-D) resistivity models. All the inverse resistivity models were processed using the PCI Geomatica software package commonly used for remote sensing data sets. Preprocessing of the 2-D inverse models was carried out to facilitate further processing and statistical analyses. Four raster layers were created; three of these were used for the input images and the fourth was used as the output of the combined images. The data sets were merged using a basic statistical approach. Interpreted results show that all images resolved and reconstructed the essential features of the models. An assessment of the accuracy of the images for the four geologic models was performed using four criteria: the mean absolute error and mean percentage absolute error, the resistivity values of the reconstructed blocks, and their displacements from the true models. Generally, the blocks of the images of the maximum approach give the least estimated errors. Also, the displacement of the reconstructed blocks from the true blocks is the least, and the reconstructed resistivities of the blocks are closer to the true blocks, than for any other combination used. Thus, it is corroborated that when inverse resistivity models are combined, more reliable and detailed information about the geologic models is obtained than by using individual data sets.

  5. Classical signal model reproducing quantum probabilities for single and coincidence detections

    Science.gov (United States)

    Khrennikov, Andrei; Nilsson, Börje; Nordebo, Sven

    2012-05-01

    We present a simple classical (random) signal model reproducing Born's rule. The crucial point of our approach is that the presence of the detector's threshold and the calibration procedure have to be treated not as simply experimental technicalities, but as basic counterparts of the theoretical model. We call this approach the threshold signal detection model (TSD). The experiment on coincidence detection which was done by Grangier in 1986 [22] played a crucial role in the rejection of (semi-)classical field models in favour of quantum mechanics (QM): the impossibility to resolve the wave-particle duality in favour of a purely wave model. QM predicts that the relative probability of coincidence detection, the coefficient g(2)(0), is zero (for one-photon states), but in (semi-)classical models g(2)(0) >= 1. In TSD the coefficient g(2)(0) decreases as 1/εd², where εd > 0 is the detection threshold. Hence, by increasing this threshold an experimenter can make the coefficient g(2)(0) essentially less than 1. The TSD prediction can be tested experimentally in new Grangier-type experiments presenting a detailed monitoring of the dependence of the coefficient g(2)(0) on the detection threshold. Structurally our model has some similarity with the prequantum model of Grossing et al. Subquantum stochasticity is composed of two counterparts: a stationary process in the space of internal degrees of freedom and a random-walk-type motion describing the temporal dynamics.

  6. Accuracy and reproducibility of linear measurements of resin, plaster, digital and printed study-models.

    Science.gov (United States)

    Saleh, Waleed K; Ariffin, Emy; Sherriff, Martyn; Bister, Dirk

    2015-01-01

    To compare the accuracy and reproducibility of measurements of on-screen three-dimensional (3D) digital surface models captured by a 3Shape R700™ laser scanner with measurements made using a digital caliper on acrylic, plaster models or model replicas. Four sets of typodont models were used. Acrylic models, alginate impressions, plaster models and physical replicas were measured. The 3Shape R700™ laser-scanning device with 3Shape™ software was used for scans and measurements. Linear measurements were recorded for selected landmarks, on each of the physical models and on the 3D digital surface models, on ten separate occasions by a single examiner. Comparing measurements taken on the physical models, the mean difference was 0.32 mm (SD 0.15 mm). For the different methods (physical versus digital) the mean difference was 0.112 mm (SD 0.15 mm). None of the values showed a statistically significant difference (p > 0.05) between the plaster and acrylic models. The comparison of measurements on the physical models showed no significant difference. The 3Shape R700™ is a reliable device for capturing surface details of models in a digital format. When comparing measurements taken manually and digitally there was no statistically significant difference. The Objet Eden 250™ 3D prints proved to be as accurate as the original acrylic, plaster, or alginate impressions, as was shown by the accuracy of the measurements taken. This confirms that using virtual study models can be a reliable method, replacing traditional plaster models.

  7. Reproducibility of VPCT parameters in the normal pancreas: comparison of two different kinetic calculation models.

    Science.gov (United States)

    Kaufmann, Sascha; Schulze, Maximilian; Horger, Thomas; Oelker, Aenne; Nikolaou, Konstantin; Horger, Marius

    2015-09-01

    To assess the reproducibility of volume computed tomographic perfusion (VPCT) measurements in normal pancreatic tissue using two different kinetic perfusion calculation models at three different time points. Institutional ethical board approval was obtained for retrospective analysis of pancreas perfusion data sets generated by our prospective study for liver response monitoring to local therapy in patients with unresectable hepatocellular carcinoma, which was approved by the institutional review board. VPCT of the entire pancreas was performed in 41 patients (mean age, 64.8 years) using 26 consecutive volume measurements and intravenous injection of 50 mL of iodinated contrast at a flow rate of 5 mL/s. Blood volume (BV) and blood flow (BF) were calculated using two mathematical methods: maximum slope + Patlak analysis versus the deconvolution method. Pancreas perfusion was calculated using two volumes of interest. The median interval between the first and the second VPCT was 2 days, and between the second and the third VPCT 82 days. Variability was assessed with within-patient coefficients of variation (CVs) and Bland-Altman analyses. Interobserver agreement for all perfusion parameters was calculated using intraclass correlation coefficients (ICCs). BF and BV values varied widely by method of analysis, as did within-patient CVs for BF and BV at the second versus the first VPCT by 22.4%/50.4% (method 1) and 24.6%/24.0% (method 2) measured in the pancreatic head and 18.4%/62.6% (method 1) and 23.8%/28.1% (method 2) measured in the pancreatic corpus, and at the third versus the first VPCT by 21.7%/61.8% (method 1) and 25.7%/34.5% (method 2) measured also in the pancreatic head and 19.1%/66.1% (method 1) and 22.0%/31.8% (method 2) measured in the pancreatic corpus, respectively. Interobserver agreement measured with the ICC shows fair-to-good reproducibility. VPCT performed with the presented examinational protocol is reproducible and can be used for monitoring
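
    The variability statistics quoted above can be computed generically as follows. This is a minimal sketch of a within-patient coefficient of variation for paired repeated perfusion measurements, using the common root-mean-square formulation and made-up numbers; the abstract does not state the exact formula or the data used by the authors.

        import numpy as np

        def within_subject_cv(scan_a: np.ndarray, scan_b: np.ndarray) -> float:
            """Within-subject coefficient of variation (%) for paired repeated measurements,
            using the root-mean-square method: CV = sqrt(mean(sd_i^2 / mean_i^2)) * 100."""
            pairs = np.column_stack([scan_a, scan_b])
            sd = pairs.std(axis=1, ddof=1)
            mean = pairs.mean(axis=1)
            return float(np.sqrt(np.mean((sd / mean) ** 2)) * 100)

        # Hypothetical blood-flow values (ml/100 ml/min) from two VPCT examinations
        bf_first = np.array([95.0, 110.0, 88.0, 102.0, 97.0])
        bf_second = np.array([101.0, 98.0, 93.0, 111.0, 90.0])
        print(f"within-patient CV: {within_subject_cv(bf_first, bf_second):.1f}%")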

  8. Classical signal model reproducing quantum probabilities for single and coincidence detections

    CERN Document Server

    Khrennikov, Andrei; Nordebo, Sven

    2011-01-01

    We present a simple classical (random) signal model reproducing Born's rule. The crucial point of our approach is that the presence of detector's threshold and calibration procedure have to be treated not as simply experimental technicalities, but as the basic counterparts of the theoretical model. We call this approach threshold signal detection model (TSD). The experiment on coincidence detection which was done by Grangier in 1986 \\cite{Grangier} played a crucial role in rejection of (semi-)classical field models in favor of quantum mechanics (QM): impossibility to resolve the wave-particle duality in favor of a purely wave model. QM predicts that the relative probability of coincidence detection, the coefficient $g^{(2)}(0),$ is zero (for one photon states), but in (semi-)classical models $g^{(2)}(0)\\geq 1.$ In TSD the coefficient $g^{(2)}(0)$ decreases as $1/{\\cal E}_d^2,$ where ${\\cal E}_d>0$ is the detection threshold. Hence, by increasing this threshold an experimenter can make the coefficient $g^{(2)}...

  9. An analytical nonlinear model for laminate multiferroic composites reproducing the DC magnetic bias dependent magnetoelectric properties.

    Science.gov (United States)

    Lin, Lizhi; Wan, Yongping; Li, Faxin

    2012-07-01

    In this work, we propose an analytical nonlinear model for laminate multiferroic composites in which the magnetic-field-induced strain in the magnetostrictive phase is described by a standard square law taking the stress effect into account, whereas the ferroelectric phase retains a linear piezoelectric response. Furthermore, differing from previous models which assume uniform deformation, we take into account the stress attenuation and adopt non-uniform deformation along the layer thickness in both the piezoelectric and magnetostrictive phases. Analysis of this model on the L-T and L-L modes of sandwiched Terfenol-D/lead zirconate titanate/Terfenol-D composites can well reproduce the observed dc magnetic field (H(dc)) dependent magnetoelectric coefficients, which all reach their maximum at an H(dc) of about 500 Oe. The model also suggests that stress attenuation along the layer thickness in practical composites should be taken into account. Furthermore, the model indicates that a high volume fraction of the magnetostrictive phase is required to obtain giant magnetoelectric coupling, coinciding with existing models.

  10. Assessment of a climate model to reproduce rainfall variability and extremes over Southern Africa

    Science.gov (United States)

    Williams, C. J. R.; Kniveton, D. R.; Layberry, R.

    2010-01-01

    It is increasingly accepted that any possible climate change will not only have an influence on mean climate but may also significantly alter climatic variability. A change in the distribution and magnitude of extreme rainfall events (associated with changing variability), such as droughts or flooding, may have a far greater impact on human and natural systems than a changing mean. This issue is of particular importance for environmentally vulnerable regions such as southern Africa. The sub-continent is considered especially vulnerable to and ill-equipped (in terms of adaptation) for extreme events, due to a number of factors including extensive poverty, famine, disease and political instability. Rainfall variability and the identification of rainfall extremes is a function of scale, so high spatial and temporal resolution data are preferred to identify extreme events and accurately predict future variability. The majority of previous climate model verification studies have compared model output with observational data at monthly timescales. In this research, the assessment of the ability of a state-of-the-art climate model to simulate climate at daily timescales is carried out using satellite-derived rainfall data from the Microwave Infrared Rainfall Algorithm (MIRA). This dataset covers the period from 1993 to 2002 and the whole of southern Africa at a spatial resolution of 0.1° longitude/latitude. This paper concentrates primarily on the ability of the model to simulate the spatial and temporal patterns of present-day rainfall variability over southern Africa and is not intended to discuss possible future changes in climate as these have been documented elsewhere. Simulations of current climate from the UK Meteorological Office Hadley Centre's climate model, in both regional and global mode, are firstly compared to the MIRA dataset at daily timescales. Secondly, the ability of the model to reproduce daily rainfall extremes is assessed, again by a comparison with

  11. Models that include supercoiling of topological domains reproduce several known features of interphase chromosomes.

    Science.gov (United States)

    Benedetti, Fabrizio; Dorier, Julien; Burnier, Yannis; Stasiak, Andrzej

    2014-03-01

    Understanding the structure of interphase chromosomes is essential to elucidate regulatory mechanisms of gene expression. During recent years, high-throughput DNA sequencing expanded the power of chromosome conformation capture (3C) methods that provide information about reciprocal spatial proximity of chromosomal loci. Since 2012, it has been known that the entire chromatin in interphase chromosomes is organized into regions with strongly increased frequency of internal contacts. These regions, with an average size of ∼1 Mb, were named topological domains. More recent studies demonstrated the presence of unconstrained supercoiling in interphase chromosomes. Using Brownian dynamics simulations, we show here that by including supercoiling into models of topological domains one can reproduce, and thus provide possible explanations of, several experimentally observed characteristics of interphase chromosomes, such as their complex contact maps.

  12. Relative validity and reproducibility of a parent-administered semi-quantitative FFQ for assessing food intake in Danish children aged 3-9 years

    DEFF Research Database (Denmark)

    Buch-Andersen, Tine; Perez-Cueto Eulert, Federico Jose Armando; Toft, Ulla

    2016-01-01

    OBJECTIVE: To assess the relative validity and reproducibility of the semi-quantitative FFQ (SFFQ) applied in the evaluation of a community intervention study, SoL-Bornholm, for estimating food intakes. DESIGN: The reference measure was a 4 d estimated food record. The SFFQ was completed two times...... with the food records, especially for vegetables. For most intakes, the mean difference increased with increasing intake. Gross misclassification was on average higher for energy and nutrients (17 %) than for foods (8 %). Spearman correlation coefficients were significant for twelve out of fourteen intakes......, ranging from 0·29 to 0·63 for foods and from 0·12 to 0·48 for energy and nutrients. Comparing the repeated SFFQ administrations, the intakes of the first SFFQ were slightly higher than those of the second SFFQ. Gross misclassification was low for most intakes; on average 6 % for foods and 8 % for energy...

  13. A rat tail temporary static compression model reproduces different stages of intervertebral disc degeneration with decreased notochordal cell phenotype.

    Science.gov (United States)

    Hirata, Hiroaki; Yurube, Takashi; Kakutani, Kenichiro; Maeno, Koichiro; Takada, Toru; Yamamoto, Junya; Kurakawa, Takuto; Akisue, Toshihiro; Kuroda, Ryosuke; Kurosaka, Masahiro; Nishida, Kotaro

    2014-03-01

    The intervertebral disc nucleus pulposus (NP) has two phenotypically distinct cell types: notochordal cells (NCs) and non-notochordal chondrocyte-like cells. In human discs, NCs are lost during adolescence, which is also when discs begin to show degenerative signs. However, little evidence exists regarding the link between NC disappearance and the pathogenesis of disc degeneration. To clarify this, a rat tail disc degeneration model induced by static compression at 1.3 MPa for 0, 1, or 7 days was designed and assessed for up to 56 postoperative days. Radiography, MRI, and histomorphology showed degenerative disc findings in response to the compression period. Immunofluorescence displayed that the number of DAPI-positive NP cells decreased with compression; particularly, the decrease was notable in larger, vacuolated, cytokeratin-8- and galectin-3-co-positive cells, identified as NCs. The proportion of TUNEL-positive cells, which predominantly comprised non-NCs, increased with compression. Quantitative PCR demonstrated isolated mRNA up-regulation of ADAMTS-5 in the 1-day loaded group and MMP-3 in the 7-day loaded group. Aggrecan-1 and collagen type 2α-1 mRNA levels were down-regulated in both groups. This rat tail temporary static compression model, which exhibits decreased NC phenotype, increased apoptotic cell death, and imbalanced catabolic and anabolic gene expression, reproduces different stages of intervertebral disc degeneration.

  14. Diffusion-weighted magnetic resonance imaging of breast lesions: the influence of different fat-suppression techniques on quantitative measurements and their reproducibility

    Energy Technology Data Exchange (ETDEWEB)

    Muertz, P.; Tsesarskiy, M.; Kowal, A.; Traeber, F.; Willinek, W.A.; Leutner, C.C.; Schmiedel, A.; Schild, H.H. [University of Bonn, Department of Radiology, Bonn (Germany); Gieseke, J. [University of Bonn, Department of Radiology, Bonn (Germany); Philips Healthcare, Best (Netherlands)

    2014-10-15

    The aim of this study was to evaluate the influence of different fat-suppression techniques on quantitative measurements and their reproducibility when applied to diffusion-weighted imaging (DWI) of breast lesions. Twenty-five patients with different types of breast lesions were examined on a clinical 1.5-T magnetic resonance imaging (MRI) system. Two diffusion-weighted sequences with different fat-suppression methods were applied: one with spectral presaturation by inversion recovery (SPIR), and one with short-TI inversion recovery (STIR). The acquisition of both sequence variants was repeated with modified shim volume. Lesion-to-background contrast (LBC), apparent diffusion coefficients (ADC) ADC(0,1000) and ADC(50,1000), and their coefficients of variation (CV) were determined. In four patients, the image quality of DWI with SPIR was insufficient. In the other 21 patients, 46 regions of interest (ROI), including 11 malignant and 35 benign lesions, were analysed. The LBC, ADC(0,1000) and ADC(50,1000) values, which did not differ between initial and repeated measurements, were significantly higher for STIR than for SPIR. The mean CV improved from 10.8 % to 4.0 % (P = 0.0047) for LBC, from 6.3 % to 2.9 % (P = 0.0041) for ADC(0,1000), and from 6.3 % to 2.6 % (P = 0.0049) for ADC(50,1000). For STIR compared to SPIR fat suppression, improved lesion conspicuity, higher ADC values, and better measurement reproducibility were found in breast DWI. • Quality of fat suppression influences quantitative DWI breast lesion measurements. • In breast DWI, STIR fat suppression worked more reliably than SPIR. (orig.)
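
    For reference, the two-point apparent diffusion coefficients quoted above follow from the standard mono-exponential diffusion signal model (a textbook relation, not a detail specific to this study). For two b-values b1 < b2 (in s/mm²) and measured signals S(b1) and S(b2):

        \mathrm{ADC}(b_1, b_2) = \frac{\ln S(b_1) - \ln S(b_2)}{b_2 - b_1}

    so ADC(0,1000) uses the b = 0 and b = 1000 s/mm² images, while ADC(50,1000) replaces the b = 0 image with b = 50 s/mm², which reduces the influence of perfusion on the estimate.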

  15. Quantitative Modeling of Earth Surface Processes

    Science.gov (United States)

    Pelletier, Jon D.

    This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.

  16. A novel, stable and reproducible acute lung injury model induced by oleic acid in immature piglet

    Institute of Scientific and Technical Information of China (English)

    ZHU Yao-bin; LING Feng; ZHANG Yan-bo; LIU Ai-jun; LIU Dong-hai; QIAO Chen-hui; WANG Qiang; LIU Ying-long

    2011-01-01

    Background Young children are susceptible to pulmonary injury, and acute lung injury (ALI) often results in high mortality and financial costs in pediatric patients. A good ALI model will help us to gain a better understanding of the real pathophysiological picture and to evaluate novel treatment approaches to acute respiratory distress syndrome (ARDS) more accurately and liberally. This study aimed to establish a hemodynamically stable and reproducible model of ALI in piglets induced by oleic acid. Methods Six Chinese mini-piglets were used to establish ALI models with oleic acid. Hemodynamic and pulmonary function data were measured. Histopathological assessment was performed. Results Mean blood pressure, heart rate (HR), cardiac output (CO), central venous pressure (CVP) and left atrial pressure (LAP) were sharply decreased after oleic acid was given, while the mean pulmonary arterial pressure (MPAP) was increased in comparison with baseline (P < 0.05). pH, arterial partial pressure of O2 (PaO2), PaO2/inspired O2 fraction (FiO2) and lung compliance decreased, while PaCO2 and airway pressure increased in comparison with baseline (P < 0.05). The lung histology showed severe inflammation, hyaline membranes, and intra-alveolar and interstitial hemorrhage. Conclusion This experiment established a stable model which allows for a diversity of studies on early lung injury.

  17. Demography-based adaptive network model reproduces the spatial organization of human linguistic groups

    Science.gov (United States)

    Capitán, José A.; Manrubia, Susanna

    2015-12-01

    The distribution of human linguistic groups presents a number of interesting and nontrivial patterns. The distributions of the number of speakers per language and the area each group covers follow log-normal distributions, while population and area fulfill an allometric relationship. The topology of networks of spatial contacts between different linguistic groups has been recently characterized, showing atypical properties of the degree distribution and clustering, among others. Human demography, spatial conflicts, and the construction of networks of contacts between linguistic groups are mutually dependent processes. Here we introduce an adaptive network model that takes all of them into account and successfully reproduces, using only four model parameters, not only those features of linguistic groups already described in the literature, but also correlations between demographic and topological properties uncovered in this work. Besides their relevance when modeling and understanding processes related to human biogeography, our adaptive network model admits a number of generalizations that broaden its scope and make it suitable to represent interactions between agents based on population dynamics and competition for space.

  18. Exploring predictive and reproducible modeling with the single-subject FIAC dataset.

    Science.gov (United States)

    Chen, Xu; Pereira, Francisco; Lee, Wayne; Strother, Stephen; Mitchell, Tom

    2006-05-01

    Predictive modeling of functional magnetic resonance imaging (fMRI) has the potential to expand the amount of information extracted and to enhance our understanding of brain systems by predicting brain states, rather than emphasizing the standard spatial mapping. Based on the block datasets of Functional Imaging Analysis Contest (FIAC) Subject 3, we demonstrate the potential and pitfalls of predictive modeling in fMRI analysis by investigating the performance of five models (linear discriminant analysis, logistic regression, linear support vector machine, Gaussian naive Bayes, and a variant) as a function of preprocessing steps and feature selection methods. We found that: (1) independent of the model, temporal detrending and feature selection assisted in building a more accurate predictive model; (2) the linear support vector machine and logistic regression often performed better than either of the Gaussian naive Bayes models in terms of the optimal prediction accuracy; and (3) the optimal prediction accuracy obtained in a feature space using principal components was typically lower than that obtained in a voxel space, given the same model and same preprocessing. We show that due to the existence of artifacts from different sources, high prediction accuracy alone does not guarantee that a classifier is learning a pattern of brain activity that might be usefully visualized, although cross-validation methods do provide fairly unbiased estimates of true prediction accuracy. The trade-off between the prediction accuracy and the reproducibility of the spatial pattern should be carefully considered in predictive modeling of fMRI. We suggest that unless the experimental goal is brain-state classification of new scans on well-defined spatial features, prediction alone should not be used as an optimization procedure in fMRI data analysis.
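
    A compact sketch of the kind of model comparison described above is given below, using scikit-learn implementations of four of the five classifiers on synthetic data; the FIAC data, the actual preprocessing steps, and the exact model variants used in the study are not reproduced here, and feature selection is done inside the cross-validation pipeline so that it is refit in every fold.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.linear_model import LogisticRegression
        from sklearn.svm import LinearSVC
        from sklearn.naive_bayes import GaussianNB
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        # Synthetic stand-in for voxel features: 120 scans x 2000 voxels, 2 conditions
        rng = np.random.default_rng(0)
        X = rng.standard_normal((120, 2000))
        y = rng.integers(0, 2, 120)
        X[y == 1, :50] += 0.8                      # weak signal in a subset of "voxels"

        models = {
            "LDA": LinearDiscriminantAnalysis(),
            "logistic regression": LogisticRegression(max_iter=1000),
            "linear SVM": LinearSVC(max_iter=5000),
            "Gaussian naive Bayes": GaussianNB(),
        }
        for name, clf in models.items():
            pipe = make_pipeline(SelectKBest(f_classif, k=100), clf)
            acc = cross_val_score(pipe, X, y, cv=5).mean()
            print(f"{name}: cross-validated accuracy {acc:.2f}")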

  19. Quantitative system validation in model driven design

    DEFF Research Database (Denmark)

    Hermanns, Hilger; Larsen, Kim Guldstrand; Raskin, Jean-Francois;

    2010-01-01

    The European STREP project Quasimodo1 develops theory, techniques and tool components for handling quantitative constraints in model-driven development of real-time embedded systems, covering in particular real-time, hybrid and stochastic aspects. This tutorial highlights the advances made, focus...

  20. Recent trends in social systems quantitative theories and quantitative models

    CERN Document Server

    Hošková-Mayerová, Šárka; Soitu, Daniela-Tatiana; Kacprzyk, Janusz

    2017-01-01

    The papers collected in this volume focus on new perspectives on individuals, society, and science, specifically in the field of socio-economic systems. The book is the result of a scientific collaboration among experts from “Alexandru Ioan Cuza” University of Iaşi (Romania), “G. d’Annunzio” University of Chieti-Pescara (Italy), "University of Defence" of Brno (Czech Republic), and "Pablo de Olavide" University of Sevilla (Spain). The heterogeneity of the contributions presented in this volume reflects the variety and complexity of social phenomena. The book is divided in four Sections as follows. The first Section deals with recent trends in social decisions. Specifically, it aims to understand which are the driving forces of social decisions. The second Section focuses on the social and public sphere. Indeed, it is oriented on recent developments in social systems and control. Trends in quantitative theories and models are described in Section 3, where many new formal, mathematical-statistical to...

  1. A stable and reproducible human blood-brain barrier model derived from hematopoietic stem cells.

    Directory of Open Access Journals (Sweden)

    Romeo Cecchelli

    Full Text Available The human blood brain barrier (BBB) is a selective barrier formed by human brain endothelial cells (hBECs), which is important to ensure adequate neuronal function and protect the central nervous system (CNS) from disease. The development of human in vitro BBB models is thus of utmost importance for drug discovery programs related to CNS diseases. Here, we describe a method to generate a human BBB model using cord blood-derived hematopoietic stem cells. The cells were initially differentiated into ECs followed by the induction of BBB properties by co-culture with pericytes. The brain-like endothelial cells (BLECs) express tight junctions and transporters typically observed in brain endothelium and maintain expression of most in vivo BBB properties for at least 20 days. The model is very reproducible since it can be generated from stem cells isolated from different donors and in different laboratories, and could be used to predict CNS distribution of compounds in humans. Finally, we provide evidence that the Wnt/β-catenin signaling pathway mediates in part the BBB inductive properties of pericytes.

  2. Can a global model reproduce observed trends in summertime surface ozone levels?

    Directory of Open Access Journals (Sweden)

    S. Koumoutsaris

    2012-01-01

    Full Text Available Quantifying trends in surface ozone concentrations is critical for assessing pollution control strategies. Here we use observations and results from a global chemical transport model to examine the trends (1991–2005) in daily maximum 8-hour average concentrations of summertime surface ozone at rural sites in Europe and the United States. We find a decrease in observed ozone concentrations at the high end of the probability distribution at many of the sites in both regions. The model attributes these trends to a decrease in local anthropogenic ozone precursors, although the simulated decreasing trends are overestimated in comparison with the observed ones. The low end of the observed distribution shows small upward trends over Europe and the western US and downward trends in the eastern US. The model cannot reproduce these observed trends, especially over Europe and the western US. In particular, simulated changes between the low and high ends of the distributions in these two regions are not significant. Sensitivity simulations indicate that emissions from faraway source regions do not significantly affect ozone trends at either end of the distribution. This is in contrast with previously available results, which indicated that increasing ozone trends at the low percentiles may reflect an increase in ozone background associated with increasing remote sources of ozone precursors. Possible reasons for the discrepancies between observed and simulated trends are discussed.

  3. Animal models that best reproduce the clinical manifestations of human intoxication with organophosphorus compounds.

    Science.gov (United States)

    Pereira, Edna F R; Aracava, Yasco; DeTolla, Louis J; Beecham, E Jeffrey; Basinger, G William; Wakayama, Edgar J; Albuquerque, Edson X

    2014-08-01

    The translational capacity of data generated in preclinical toxicological studies is contingent upon several factors, including the appropriateness of the animal model. The primary objectives of this article are: 1) to analyze the natural history of acute and delayed signs and symptoms that develop following an acute exposure of humans to organophosphorus (OP) compounds, with an emphasis on nerve agents; 2) to identify animal models of the clinical manifestations of human exposure to OPs; and 3) to review the mechanisms that contribute to the immediate and delayed OP neurotoxicity. As discussed in this study, clinical manifestations of an acute exposure of humans to OP compounds can be faithfully reproduced in rodents and nonhuman primates. These manifestations include an acute cholinergic crisis in addition to signs of neurotoxicity that develop long after the OP exposure, particularly chronic neurologic deficits consisting of anxiety-related behavior and cognitive deficits, structural brain damage, and increased slow electroencephalographic frequencies. Because guinea pigs and nonhuman primates, like humans, have low levels of circulating carboxylesterases (the enzymes that metabolize and inactivate OP compounds), they stand out as appropriate animal models for studies of OP intoxication. These are critical points for the development of safe and effective therapeutic interventions against OP poisoning because approval of such therapies by the Food and Drug Administration is likely to rely on the Animal Efficacy Rule, which allows exclusive use of animal data as evidence of the effectiveness of a drug against pathologic conditions that cannot be ethically or feasibly tested in humans.

  4. A Semi-Analytic dynamical friction model that reproduces core stalling

    CERN Document Server

    Petts, James A; Read, Justin I

    2015-01-01

    We present a new semi-analytic model for dynamical friction based on Chandrasekhar's formalism. The key novelty is the introduction of physically motivated, radially varying, maximum and minimum impact parameters. With these, our model gives an excellent match to full N-body simulations for isotropic background density distributions, both cuspy and shallow, without any fine-tuning of the model parameters. In particular, we are able to reproduce the dramatic core-stalling effect that occurs in shallow/constant density cores, for the first time. This gives us new physical insight into the core-stalling phenomenon. We show that core stalling occurs in the limit in which the product of the Coulomb logarithm and the local fraction of stars with velocity lower than the infalling body tends to zero. For cuspy backgrounds, this occurs when the infalling mass approaches the enclosed background mass. For cored backgrounds, it occurs at larger distances from the centre, due to a combination of a rapidly increasing minim...
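
    For context, the Chandrasekhar formalism on which the above model is based gives, for a body of mass M moving with velocity v_M through an isotropic Maxwellian background of density ρ and velocity dispersion σ (a standard textbook form; the paper's key novelty lies in how the maximum and minimum impact parameters entering ln Λ vary with radius):

        \frac{d\mathbf{v}_M}{dt} = -\frac{4\pi G^2 M \rho \ln\Lambda}{v_M^3}
        \left[\operatorname{erf}(X) - \frac{2X}{\sqrt{\pi}}\, e^{-X^2}\right]\mathbf{v}_M,
        \qquad X = \frac{v_M}{\sqrt{2}\,\sigma}, \qquad \ln\Lambda = \ln\!\left(b_{\max}/b_{\min}\right).

    The bracketed factor is proportional to the fraction of background stars moving more slowly than the infalling body, which is why the drag, and hence the inspiral, stalls when the product of ln Λ and that fraction tends to zero, as stated in the abstract.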

  5. Stochastic model of financial markets reproducing scaling and memory in volatility return intervals

    Science.gov (United States)

    Gontis, V.; Havlin, S.; Kononovicius, A.; Podobnik, B.; Stanley, H. E.

    2016-11-01

    We investigate the volatility return intervals in the NYSE and FOREX markets. We explain previous empirical findings using a model based on the interacting agent hypothesis instead of the widely-used efficient market hypothesis. We derive macroscopic equations based on the microscopic herding interactions of agents and find that they are able to reproduce various stylized facts of different markets and different assets with the same set of model parameters. We show that the power-law properties and the scaling of return intervals and other financial variables have a similar origin and could be a result of a general class of non-linear stochastic differential equations derived from a master equation of an agent system that is coupled by herding interactions. Specifically, we find that this approach enables us to recover the volatility return interval statistics as well as volatility probability and spectral densities for the NYSE and FOREX markets, for different assets, and for different time-scales. We also find that the historical S&P500 monthly series exhibits the same volatility return interval properties recovered by our proposed model. Our statistical results suggest that human herding is so strong that it persists even when other evolving fluctuations perturb the financial system.
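
    The quantity analysed above, the volatility return interval, is simply the waiting time between successive exceedances of a volatility threshold. A minimal sketch of how such intervals and their scaled distribution could be extracted from a volatility series follows; the data are synthetic, not the NYSE/FOREX series used in the paper, and the agent-based model itself is not implemented here.

        import numpy as np

        def return_intervals(volatility: np.ndarray, threshold: float) -> np.ndarray:
            """Waiting times between successive exceedances of a volatility threshold."""
            exceed = np.flatnonzero(volatility > threshold)
            return np.diff(exceed)

        # Synthetic volatility proxy: absolute values of Gaussian returns
        rng = np.random.default_rng(0)
        vol = np.abs(rng.standard_normal(100_000))

        for q in (1.0, 1.5, 2.0):                       # thresholds in units of std dev
            tau = return_intervals(vol, q)
            scaled = tau / tau.mean()                   # scaling form tau / <tau>
            print(f"threshold {q}: mean interval {tau.mean():.1f}, "
                  f"P(tau/<tau> > 3) = {np.mean(scaled > 3):.3f}")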

  6. Enhancement of accuracy and reproducibility of parametric modeling for estimating abnormal intra-QRS potentials in signal-averaged electrocardiograms.

    Science.gov (United States)

    Lin, Chun-Cheng

    2008-09-01

    This work analyzes and attempts to enhance the accuracy and reproducibility of parametric modeling in the discrete cosine transform (DCT) domain for the estimation of abnormal intra-QRS potentials (AIQP) in signal-averaged electrocardiograms. One hundred sets of white noise with a flat frequency response were introduced to simulate the unpredictable, broadband AIQP when quantitatively analyzing estimation error. Further, a high-frequency AIQP parameter was defined to minimize estimation error caused by the overlap between normal QRS and AIQP in low-frequency DCT coefficients. Seventy-two patients from Taiwan were recruited for the study, comprising 30 patients with ventricular tachycardia (VT) and 42 without VT. Analytical results showed that VT patients had a significant decrease in the estimated AIQP. The global diagnostic performance (area under the receiver operating characteristic curve) of AIQP rose from 73.0% to 84.2% in lead Y, and from 58.3% to 79.1% in lead Z, when the high-frequency range fell from 100% to 80%. The combination of AIQP and ventricular late potentials further enhanced performance to 92.9% (specificity=90.5%, sensitivity=90%). Therefore, the significantly reduced AIQP in VT patients, possibly also including dominant unpredictable potentials within the normal QRS complex, may be new promising evidence of ventricular arrhythmias.
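
    A highly simplified sketch of the underlying idea, estimating an unpredictable intra-QRS component as the residual of a truncated representation in the discrete cosine transform domain, is given below. The smooth-QRS order, the high-frequency range, and the synthetic waveform are placeholders; this is not the parametric DCT model used in the study, only a crude surrogate for it.

        import numpy as np
        from scipy.fft import dct, idct

        def estimate_aiqp(qrs: np.ndarray, n_smooth: int = 20, hf_fraction: float = 0.8):
            """Crude surrogate for AIQP estimation in the DCT domain.

            The smooth 'normal' QRS is approximated by the first n_smooth DCT
            coefficients; the RMS of the residual restricted to the upper
            hf_fraction of coefficients serves as the AIQP estimate."""
            c = dct(qrs, norm="ortho")
            c_smooth = np.zeros_like(c)
            c_smooth[:n_smooth] = c[:n_smooth]
            residual = c - c_smooth
            start = int((1.0 - hf_fraction) * c.size)    # keep only the high-frequency part
            aiqp = float(np.sqrt(np.mean(residual[start:] ** 2)))
            return aiqp, idct(c_smooth, norm="ortho")

        # Synthetic QRS-like waveform with a small broadband component added
        rng = np.random.default_rng(0)
        t = np.linspace(-1, 1, 200)
        qrs = np.exp(-(t / 0.15) ** 2) + 0.02 * rng.standard_normal(t.size)
        aiqp, smooth_qrs = estimate_aiqp(qrs)
        print(f"estimated AIQP (arbitrary units): {aiqp:.4f}")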

  7. Reproducibility of summertime diurnal precipitation over northern Eurasia simulated by CMIP5 climate models

    Science.gov (United States)

    Hirota, N.; Takayabu, Y. N.

    2015-12-01

    Reproducibility of diurnal precipitation over northern Eurasia simulated by CMIP5 climate models in their historical runs was evaluated, in comparison with station data (NCDC-9813) and satellite data (GSMaP-V5). We first calculated diurnal cycles by averaging precipitation at each local solar time (LST) in June-July-August during 1981-2000 over the continent of northern Eurasia (0-180E, 45-90N). Then we examined the occurrence time of maximum precipitation and the contribution of diurnally varying precipitation to the total precipitation. The contribution of diurnal precipitation was about 21% in both NCDC-9813 and GSMaP-V5. The maximum precipitation occurred at 18 LST in NCDC-9813 but at 16 LST in GSMaP-V5, indicating some uncertainties even in the observational datasets. The diurnal contribution of the CMIP5 models varied largely, from 11% to 62%, and their timing of the precipitation maximum ranged from 11 LST to 20 LST. Interestingly, the contribution and the timing had a strong negative correlation of -0.65. The models with larger diurnal precipitation showed the precipitation maximum earlier, around noon. Next, we compared the sensitivity of precipitation to surface temperature and tropospheric humidity between 5 models with large diurnal precipitation (LDMs) and 5 models with small diurnal precipitation (SDMs). Precipitation in LDMs showed high sensitivity to surface temperature, indicating its close relationship with local instability. On the other hand, synoptic disturbances were more active in SDMs, with a dominant role of large-scale condensation, and precipitation in SDMs was more related to tropospheric moisture. Therefore, the relative importance of the local instability and the synoptic disturbances is suggested to be an important factor in determining the contribution and timing of diurnal precipitation. Acknowledgment: This study is supported by the Green Network of Excellence (GRENE) Program of the Ministry of Education, Culture, Sports, Science and Technology

  8. Commentary on the integration of model sharing and reproducibility analysis to scholarly publishing workflow in computational biomechanics.

    Science.gov (United States)

    Erdemir, Ahmet; Guess, Trent M; Halloran, Jason P; Modenese, Luca; Reinbolt, Jeffrey A; Thelen, Darryl G; Umberger, Brian R

    2016-10-01

    The overall goal of this paper is to demonstrate that dissemination of models and analyses for assessing the reproducibility of simulation results can be incorporated in the scientific review process in biomechanics. As part of a special issue on model sharing and reproducibility in the IEEE Transactions on Biomedical Engineering, two manuscripts on computational biomechanics were submitted: Rajagopal et al., IEEE Trans. Biomed. Eng., 2016 and Schmitz and Piovesan, IEEE Trans. Biomed. Eng., 2016. Models used in these studies were shared with the scientific reviewers and the public. In addition to the standard review of the manuscripts, the reviewers downloaded the models and performed simulations that reproduced results reported in the studies. There was general agreement between simulation results of the authors and those of the reviewers. Discrepancies were resolved during the necessary revisions. The manuscripts and instructions for download and simulation were updated in response to the reviewers' feedback; changes that may otherwise have been missed if explicit model sharing and simulation reproducibility analysis had not been conducted in the review process. The increased burden on the authors and the reviewers, to facilitate model sharing and to repeat simulations, was noted. When the authors of computational biomechanics studies provide access to models and data, the scientific reviewers can download and thoroughly explore the model, perform simulations, and evaluate simulation reproducibility beyond the traditional manuscript-only review process. Model sharing and reproducibility analysis in scholarly publishing will result in a more rigorous review process, which will enhance the quality of modeling and simulation studies and inform future users of computational models.

  9. Fast bootstrapping and permutation testing for assessing reproducibility and interpretability of multivariate fMRI decoding models.

    Directory of Open Access Journals (Sweden)

    Bryan R Conroy

    Full Text Available Multivariate decoding models are increasingly being applied to functional magnetic resonance imaging (fMRI) data to interpret the distributed neural activity in the human brain. These models are typically formulated to optimize an objective function that maximizes decoding accuracy. For decoding models trained on full-brain data, this can result in multiple models that yield the same classification accuracy, though some may be more reproducible than others, i.e. small changes to the training set may result in very different voxels being selected. This issue of reproducibility can be partially controlled by regularizing the decoding model. Regularization, along with the cross-validation used to estimate decoding accuracy, typically requires retraining many (often on the order of thousands) of related decoding models. In this paper we describe an approach that uses a combination of bootstrapping and permutation testing to construct both a measure of cross-validated prediction accuracy and model reproducibility of the learned brain maps. This requires re-training our classification method on many re-sampled versions of the fMRI data. Given the size of fMRI datasets, this is normally a time-consuming process. Our approach leverages an algorithm called fast simultaneous training of generalized linear models (FaSTGLZ) to create a family of classifiers in the space of accuracy vs. reproducibility. The convex hull of this family of classifiers can be used to identify a subset of Pareto optimal classifiers, with a single-optimal classifier selectable based on the relative cost of accuracy vs. reproducibility. We demonstrate our approach using full-brain analysis of elastic-net classifiers trained to discriminate stimulus type in an auditory and visual oddball event-related fMRI design. Our approach and results argue for a computational approach to fMRI decoding models in which the value of the interpretation of the decoding model ultimately depends upon optimizing a
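
    The accuracy-versus-reproducibility idea can be sketched generically: train a classifier on bootstrap resamples, measure held-out accuracy, and quantify reproducibility as the agreement between weight maps learned on different resamples. The sketch below uses a plain L2-regularized logistic classifier rather than FaSTGLZ-trained elastic nets, synthetic data rather than the oddball fMRI dataset, and pairwise weight-map correlation as an assumed reproducibility measure.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 500))              # synthetic "scans x voxels"
        y = rng.integers(0, 2, 200)
        X[y == 1, :40] += 0.7                            # weak class-dependent signal

        weights, accs = [], []
        for _ in range(50):                              # bootstrap resamples
            idx = rng.integers(0, len(y), len(y))
            oob = np.setdiff1d(np.arange(len(y)), idx)   # out-of-bag samples for accuracy
            clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
            clf.fit(X[idx], y[idx])
            accs.append(clf.score(X[oob], y[oob]))
            weights.append(clf.coef_.ravel())

        W = np.array(weights)
        corr = np.corrcoef(W)                            # correlation between weight maps
        repro = corr[np.triu_indices_from(corr, k=1)].mean()
        print(f"mean out-of-bag accuracy: {np.mean(accs):.2f}, "
              f"map reproducibility: {repro:.2f}")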

  10. Rainfall variability and extremes over southern Africa: Assessment of a climate model to reproduce daily extremes

    Science.gov (United States)

    Williams, C. J. R.; Kniveton, D. R.; Layberry, R.

    2009-04-01

    It is increasingly accepted that any possible climate change will not only have an influence on mean climate but may also significantly alter climatic variability. A change in the distribution and magnitude of extreme rainfall events (associated with changing variability), such as droughts or flooding, may have a far greater impact on human and natural systems than a changing mean. This issue is of particular importance for environmentally vulnerable regions such as southern Africa. The subcontinent is considered especially vulnerable to and ill-equipped (in terms of adaptation) for extreme events, due to a number of factors including extensive poverty, famine, disease and political instability. Rainfall variability and the identification of rainfall extremes is a function of scale, so high spatial and temporal resolution data are preferred to identify extreme events and accurately predict future variability. The majority of previous climate model verification studies have compared model output with observational data at monthly timescales. In this research, the assessment of the ability of a state-of-the-art climate model to simulate climate at daily timescales is carried out using satellite-derived rainfall data from the Microwave Infra-Red Algorithm (MIRA). This dataset covers the period from 1993-2002 and the whole of southern Africa at a spatial resolution of 0.1 degree longitude/latitude. The ability of a climate model to simulate current climate provides some indication of how much confidence can be applied to its future predictions. In this paper, simulations of current climate from the UK Meteorological Office Hadley Centre's climate model, in both regional and global mode, are firstly compared to the MIRA dataset at daily timescales. This concentrates primarily on the ability of the model to simulate the spatial and temporal patterns of rainfall variability over southern Africa. Secondly, the ability of the model to reproduce daily rainfall extremes will

  11. Quantitative model validation techniques: new insights

    CERN Document Server

    Ling, You

    2012-01-01

    This paper develops new insights into quantitative methods for the validation of computational model prediction. Four types of methods are investigated, namely classical and Bayesian hypothesis testing, a reliability-based method, and an area metric-based method. Traditional Bayesian hypothesis testing is extended based on interval hypotheses on distribution parameters and equality hypotheses on probability distributions, in order to validate models with deterministic/stochastic output for given inputs. Two types of validation experiments are considered - fully characterized (all the model/experimental inputs are measured and reported as point values) and partially characterized (some of the model/experimental inputs are not measured or are reported as intervals). Bayesian hypothesis testing can minimize the risk in model selection by properly choosing the model acceptance threshold, and its results can be used in model averaging to avoid Type I/II errors. It is shown that Bayesian interval hypothesis testing...
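
    As one concrete example of the methods listed above, the area metric compares the cumulative distribution function predicted by a stochastic model with the empirical CDF of the experimental observations. A minimal sketch with synthetic samples follows; how a validation threshold on this metric is chosen, and how the result feeds a model-acceptance decision, are application-specific and not shown.

        import numpy as np

        def area_metric(model_samples: np.ndarray, data_samples: np.ndarray) -> float:
            """Area between the model CDF and the empirical data CDF,
            approximated on a common uniform grid from Monte Carlo samples."""
            grid = np.linspace(min(model_samples.min(), data_samples.min()),
                               max(model_samples.max(), data_samples.max()), 2000)
            cdf_model = np.searchsorted(np.sort(model_samples), grid, side="right") / model_samples.size
            cdf_data = np.searchsorted(np.sort(data_samples), grid, side="right") / data_samples.size
            dx = grid[1] - grid[0]
            return float(np.sum(np.abs(cdf_model - cdf_data)) * dx)

        # Stochastic model prediction vs. a small (synthetic) experimental sample
        rng = np.random.default_rng(0)
        prediction = rng.normal(10.0, 1.0, 50_000)
        observations = rng.normal(10.4, 1.2, 30)
        print(f"area metric: {area_metric(prediction, observations):.3f}")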

  12. A novel approach to evaluating the reproducibility of a replication technique for the manufacture of electroconductive replicas for use in quantitative clinical dental wear studies.

    Science.gov (United States)

    Chadwick, R G; Mitchell, H L; Ward, S

    2004-04-01

    The assessment of the progression of tooth surface loss has until recently been limited to either the application of subjective ranking scales or visual comparison of sequential study casts. The development of quantitative measuring techniques offers the potential of greater accuracy and sensitivity. As direct intra-oral measurement is problematical, such approaches often utilize impressions of the teeth, recorded at different epochs, to construct replicas for mapping and comparison. This in vitro investigation sought to determine the reproducibility of such an approach, taking into account the total process chain. Two inlay cavities (one large, one small) were prepared in the palatal aspect of a plastic maxillary central incisor and restored with two flush-fitting inlays. A series of impressions of this tooth was recorded, using a special tray and an addition-cured light-bodied silicone impression material (President, Coltene, Switzerland), with (a) both inlays in, (b) both inlays out, (c) the large inlay out and the small inlay in, and (d) the large inlay in and the small inlay out - a total of 16 impressions. Electroconductive replicas were fabricated from these and mapped using a computer-controlled probe. Each series simulated wear of the tooth. A surface matching and difference detection algorithm was then used to compare each series of replicas and calculate the proportion of the surface undergoing simulated wear, either by a direct comparison of (a) matched to (b) or, indirectly, as the summation of the results of matches of (a) with (c) and (a) with (d). The mean proportion of the surface with wear calculated directly was 26.6% (s.d. = 0.6) and indirectly 26.1% (s.d. = 0.5). A one-way ANOVA revealed no significant difference (P > 0.05). It is concluded that determining wear by this method is highly reproducible.
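
    In essence, the difference-detection step reduces to comparing two registered height maps of the same surface and flagging the fraction of points whose height change exceeds a measurement-noise tolerance. A minimal sketch of that final step on synthetic grids follows; the surface matching (registration) itself, which the authors perform with a dedicated algorithm, is not shown, and the tolerance and map sizes are assumptions.

        import numpy as np

        def worn_fraction(baseline: np.ndarray, follow_up: np.ndarray, tolerance: float) -> float:
            """Proportion (%) of surface points showing material loss beyond a noise tolerance.

            Both inputs are height maps (in mm) already registered to a common grid.
            """
            loss = baseline - follow_up          # positive where material was removed
            return float(100.0 * np.mean(loss > tolerance))

        # Synthetic 200 x 200 height maps: a flat surface with a 0.3 mm deep "worn" patch
        rng = np.random.default_rng(0)
        before = rng.normal(0.0, 0.01, (200, 200))       # ~10 µm measurement noise
        after = before + rng.normal(0.0, 0.01, (200, 200))
        after[60:120, 80:160] -= 0.3                     # simulated wear facet
        print(f"worn surface: {worn_fraction(before, after, tolerance=0.05):.1f}%")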

  13. Building a Database for a Quantitative Model

    Science.gov (United States)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not aid in linking the Basic Events to the data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate for how the data are used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.

  14. Reliability and reproducibility of quantitative assessment of left ventricular function and volumes with 3-slice segmentation of cine steady-state free precession short axis images

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Christopher, E-mail: cnguye38@uci.edu [School of Medicine, University of California, Irvine, Orange, CA (United States); Kuoy, Edward, E-mail: ekuoy@uci.edu [School of Medicine, University of California, Irvine, Orange, CA (United States); Ruehm, Stefan, E-mail: sruehm@mednet.ucla.edu [Diagnostic Cardiovascular Imaging, University of California, Los Angeles (United States); Krishnam, Mayil, E-mail: mskrishn@uci.edu [Cardiovascular and Thoracic Imaging, Radiological Sciences, University of California, Irvine, Orange, CA (United States)

    2015-07-15

    Highlights: • Quantitative LV assessment in CMR requires contour tracing of multiple SA images. • Conventional multi-slice method for LV assessment is tedious and time-consuming. • 3-slice segmentation is comparable to multi-slice method in determining LVEF. • 3-slice method is reliable and reproducible in determining LV volumes and mass. • 3-slice method reduces post-processing time compared to multi-slice method. - Abstract: Objectives: Quantitative assessment of left ventricular (LV) functional parameters in cardiac MR requires time-consuming contour tracing across multiple short axis images. This study assesses global LV functional parameters using 3-slice segmentation on steady-state free precession (SSFP) cine short axis images and compares the results with conventional multi-slice segmentation of the LV. Methods: Data were collected from 61 patients who underwent cardiac MRI for various clinical indications. Semi-automated cardiac MR software was used to trace LV contours both at multiple slices from base to apex as well as at just 3 slices (base, mid, and apical) by two readers. Left ventricular ejection fraction (LVEF), LV volumes, and LV mass were calculated using both methods. Results: A Bland–Altman plot revealed narrow limits of agreement (−4.4% to 5.1%) between LVEF obtained by the two methods. Bland–Altman analysis showed slightly wider limits of agreement between end-diastolic volumes (−5.0 to 12.0%; −3.9 to 8.5 ml/m²), end-systolic volumes (−10.9 to 14.7%; −4.1 to 6.5 ml/m²), and LV mass (−5.2 to 12.7%; −4.8 to 10.2 g/m²) obtained by the two methods. There was a small mean difference between LV volumes and LV mass obtained using multi-slice and 3-slice segmentation. No statistically significant difference existed between the LV parameters obtained by the two readers using 3-slice segmentation (p > 0.05). Multi-slice assessment required approximately 15 min per study while 3-slice assessment required less than 5 min.
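
    For illustration, the Bland–Altman limits of agreement used above follow directly from the paired differences (bias ± 1.96 × their standard deviation); a small sketch with synthetic LVEF values, not the study's data, might look like this:

        # Sketch of a Bland-Altman style agreement check between multi-slice and
        # 3-slice LVEF estimates. Values are synthetic.
        import numpy as np

        lvef_multi = np.array([55.0, 62.1, 48.3, 60.5, 35.2, 58.7])   # %, multi-slice
        lvef_3slice = np.array([54.2, 63.0, 47.9, 61.8, 34.5, 59.3])  # %, 3-slice

        diff = lvef_3slice - lvef_multi
        bias = diff.mean()
        sd = diff.std(ddof=1)
        loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd        # 95% limits of agreement
        print(f"bias={bias:.2f}%, limits of agreement=({loa_low:.2f}%, {loa_high:.2f}%)")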

  15. Composite model to reproduce the mechanical behaviour of methane hydrate bearing soils

    Science.gov (United States)

    De la Fuente, Maria

    2016-04-01

    Methane hydrate bearing sediments (MHBS) are naturally-occurring materials containing different components in the pores that may undergo phase changes under relatively small temperature and pressure variations, for the conditions typically prevailing a few hundred meters below sea level. Their modelling needs to account for heat and mass balance equations of the different components, and several strategies already exist to combine them (e.g., Rutqvist & Moridis, 2009; Sánchez et al., 2014). These equations have to be completed by restrictions and constitutive laws reproducing the phenomenology of heat and fluid flows, phase change conditions and the mechanical response. While the formulation of the non-mechanical laws generally includes explicitly the mass fraction of methane in each phase, which allows for a natural update of parameters during phase changes, mechanical laws are, in most cases, stated for the whole solid skeleton (Uchida et al., 2012; Soga et al., 2006). In this paper, a mechanical model is proposed to describe the response of MHBS. It is based on a composite approach that allows the thermo-hydro-mechanical response of the mineral skeleton and the solid hydrate to be defined independently. The global stress-strain-temperature response of the solid phase (grains + hydrate) is then obtained by combining both responses according to an energy principle, following the work of Pinyol et al. (2007). In this way, dissociation of MH can be assessed on the basis of the stress state and temperature prevailing locally within the hydrate component. In addition, its structuring effect is naturally accounted for by the model according to the patterns of MH inclusions within the soil pores. This paper describes the fundamental hypotheses behind the model and its formulation. Its performance is assessed by comparison with laboratory data presented in the literature. An analysis of the MHBS response to several stress-temperature paths representing potential field cases is finally presented. References

  16. Quantitative magnetospheric models: results and perspectives.

    Science.gov (United States)

    Kuznetsova, M.; Hesse, M.; Gombosi, T.; Csem Team

    Global magnetospheric models are an indispensable tool that allows multi-point measurements to be put into a global context. Significant progress has been achieved in global MHD modeling of magnetosphere structure and dynamics. Medium-resolution simulations confirm the general topological picture suggested by Dungey. State-of-the-art global models with adaptive grids allow simulations with a highly resolved magnetopause and magnetotail current sheet. Advanced high-resolution models are capable of reproducing transient phenomena, such as FTEs associated with the formation of flux ropes or plasma bubbles embedded in the magnetopause, and demonstrate the generation of vortices at the magnetospheric flanks. On the other hand, there is still controversy about the global state of the magnetosphere predicted by MHD models, to the point of questioning the length of the magnetotail and the location of the reconnection sites within it. For example, for steady southward IMF driving conditions, resistive MHD simulations produce a steady configuration with an almost stationary near-Earth neutral line, while there is plenty of observational evidence of a periodic loading-unloading cycle during long periods of southward IMF. Successes and challenges in global modeling of magnetospheric dynamics will be addressed. One of the major challenges is to quantify the interaction between large-scale global magnetospheric dynamics and microphysical processes in diffusion regions near reconnection sites. Possible solutions to these controversies will be discussed.

  17. Can a stepwise steady flow computational fluid dynamics model reproduce unsteady particulate matter separation for common unit operations?

    Science.gov (United States)

    Pathapati, Subbu-Srikanth; Sansalone, John J

    2011-07-01

    Computational fluid dynamics (CFD) is emerging as a model for resolving the fate of particulate matter (PM) by unit operations subject to rainfall-runoff loadings. However, compared to steady flow CFD models, there are greater computational requirements for unsteady hydrodynamics and PM loading models. Therefore, this study examines whether integrating a stepwise steady flow CFD model can reproduce PM separation by common unit operations loaded by unsteady flow and PM loadings, thereby reducing computational effort. Utilizing monitored unit operation data from unsteady events as a metric, this study compares the two CFD modeling approaches for a hydrodynamic separator (HS), a primary clarifier (PC) tank, and a volumetric clarifying filtration system (VCF). Results indicate that while unsteady CFD models reproduce PM separation for each unit operation, stepwise steady CFD models result in significant deviations for the HS and PC models compared to monitored data, overestimating the physical size of each unit required to reproduce the monitored PM separation. In contrast, the stepwise steady flow approach reproduces PM separation by the VCF, a combined gravitational sedimentation and media filtration unit operation that provides attenuation of turbulent energy and flow velocity.

  18. A novel, recovery, and reproducible minimally invasive cardiopulmonary bypass model with lung injury in rats

    Institute of Scientific and Technical Information of China (English)

    LI Ling-ke; CHENG Wei; LIU Dong-hai; ZHANG Jing; ZHU Yao-bin; QIAO Chen-hui; ZHANG Yan-bo

    2013-01-01

    Background Cardiopulmonary bypass (CPB) has been shown to be associated with a systemic inflammatory response leading to postoperative organ dysfunction. Elucidating the underlying mechanisms and developing protective strategies for the pathophysiological consequences of CPB have been hampered by the absence of a satisfactory recovery animal model. The purpose of this study was to establish a good rat model of CPB to study the pathophysiology of potential complications. Methods Twenty adult male Sprague-Dawley rats weighing 450-560 g were randomly divided into a CPB group (n=10) and a control group (n=10). All rats were anaesthetized and mechanically ventilated. The carotid artery and jugular vein were cannulated. The blood was drained from the right atrium via the right jugular vein and transferred by a miniaturized roller pump to a hollow-fiber oxygenator and back to the rat via the left carotid artery. Priming consisted of 8 ml of homologous blood and 8 ml of colloid. The surface area of the hollow-fiber oxygenator was 0.075 m². CPB was conducted for 60 minutes at a flow rate of 100-120 ml·kg⁻¹·min⁻¹ in the CPB group. The oxygen flow/perfusion flow ratio was 0.8 to 1.0, and the mean arterial pressure remained 60-80 mmHg. Blood gas analysis, hemodynamic investigations, and lung histology were subsequently examined. Results All CPB rats recovered from the operative process without incident. Normal cardiac function after successful weaning was confirmed by electrocardiography and blood pressure measurements. Mean arterial pressure remained stable. The results of blood gas analysis at different times were within the normal range. Levels of IL-1β and TNF-α were higher in the lung tissue in the CPB group (P < 0.005). Histological examination revealed marked increases in interstitial congestion, edema, and inflammation in the CPB group. Conclusion This novel, recovery, and reproducible minimally invasive CPB model may open the field for various studies on the pathophysiological process of CPB and systemic

  19. Quantitative Models and Analysis for Reactive Systems

    DEFF Research Database (Denmark)

    Thrane, Claus

    phones and websites. Acknowledging that now more than ever, systems come in contact with the physical world, we need to revise the way we construct models and verification algorithms, to take into account the behavior of systems in the presence of approximate, or quantitative information, provided...... by the environment in which they are embedded. This thesis studies the semantics and properties of a model-based framework for re- active systems, in which models and specifications are assumed to contain quantifiable information, such as references to time or energy. Our goal is to develop a theory of approximation......, by studying how small changes to our models affect the verification results. A key source of motivation for this work can be found in The Embedded Systems Design Challenge [HS06] posed by Thomas A. Henzinger and Joseph Sifakis. It contains a call for advances in the state-of-the-art of systems verification...

  20. Can model observers be developed to reproduce radiologists' diagnostic performances? Our study says not so fast!

    Science.gov (United States)

    Lee, Juhun; Nishikawa, Robert M.; Reiser, Ingrid; Boone, John M.

    2016-03-01

    The purpose of this study was to determine radiologists' diagnostic performances on different image reconstruction algorithms that could be used to optimize image-based model observers. We included a total of 102 pathology proven breast computed tomography (CT) cases (62 malignant). An iterative image reconstruction (IIR) algorithm was used to obtain 24 reconstructions with different image appearance for each image. Using quantitative image feature analysis, three IIRs and one clinical reconstruction of 50 lesions (25 malignant) were selected for a reader study. The reconstructions spanned a range of smooth-low noise to sharp-high noise image appearance. The trained classifiers' AUCs on the above reconstructions ranged from 0.61 (for smooth reconstruction) to 0.95 (for sharp reconstruction). Six experienced MQSA radiologists read 200 cases (50 lesions times 4 reconstructions) and provided the likelihood of malignancy of each lesion. Radiologists' diagnostic performances (AUC) ranged from 0.7 to 0.89. However, there was no agreement among the six radiologists on which image appearance was the best, in terms of radiologists' having the highest diagnostic performances. Specifically, two radiologists indicated sharper image appearance was diagnostically superior, another two radiologists indicated smoother image appearance was diagnostically superior, and another two radiologists indicated all image appearances were diagnostically similar to each other. Due to the poor agreement among radiologists on the diagnostic ranking of images, it may not be possible to develop a model observer for this particular imaging task.

  1. Assessing reproducibility by the within-subject coefficient of variation with random effects models.

    Science.gov (United States)

    Quan, H; Shih, W J

    1996-12-01

    In this paper we consider the use of within-subject coefficient of variation (WCV) for assessing the reproducibility or reliability of a measurement. Application to assessing reproducibility of biochemical markers for measuring bone turnover is described and the comparison with intraclass correlation is discussed. Both maximum likelihood and moment confidence intervals of WCV are obtained through their corresponding asymptotic distributions. Normal and log-normal cases are considered. In general, WCV is preferred when the measurement scale bears intrinsic meaning and is not subject to arbitrary shifting. The intraclass correlation may be preferred when a fixed population of subjects can be well identified.
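
    As a simple illustration of the quantity discussed above, a within-subject coefficient of variation can be estimated from replicate measurements under a balanced one-way random-effects layout; the sketch below uses invented marker values, not the paper's data:

        # Sketch of a within-subject coefficient of variation (WCV) estimate from
        # replicate measurements, assuming a balanced one-way random-effects layout.
        import numpy as np

        # rows = subjects, columns = repeated measurements of a bone-turnover marker
        x = np.array([
            [12.1, 11.7, 12.4],
            [ 8.9,  9.4,  9.1],
            [15.2, 14.8, 15.6],
            [10.3, 10.9, 10.5],
        ])

        within_var = x.var(axis=1, ddof=1).mean()   # pooled within-subject variance
        grand_mean = x.mean()
        wcv = np.sqrt(within_var) / grand_mean
        print(f"WCV = {100 * wcv:.1f}%")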

  2. TU-C-12A-07: Characterization of Longitudinal Reproducibility of Quantitative Diffusion Imaging Data Acquired with Four Different Protocols Using a Phantom

    Energy Technology Data Exchange (ETDEWEB)

    Li, X [Georgia Regents University - Athens, Athens, Georgia (United States); Buzzelli, M; Randazzo, W; Yanasak, N [Georgia Regents University, Augusta, GA (Georgia)

    2014-06-15

    Purpose: To characterize and compare the longitudinal reproducibility of diffusion imaging data acquired with four different protocols using a phantom. Methods: The Diffusive Quantitative Imaging Phantom (DQIP) was constructed using fifteen cylindrical compartments within a larger compartment, filled with deionized water doped with CuSO4 and NaCl. The smaller compartments contained arrays of hexagonal or cylindrical glass capillaries of varying inner diameters, for differing restraint of water diffusion. The sensitivity of diffusion imaging metrics to signal-to-noise ratio (SNR) was probed by doping compartments with differing ratios of deuterium oxide to H2O. A cork phantom enclosure was constructed to increase thermal stability during scanning and a cork holder was made to reproduce scanner positioning. Four different DWI (diffusion-weighted imaging) and DTI (diffusion tensor imaging) protocols were assembled on a GE Excite HDx 3.0T MRI scanner to collect imaging data over 9-10 days. Data were processed with in-house software created in Matlab to obtain fractional anisotropy (FA) and apparent diffusion coefficient (ADC) values. Results: All DTI and DWI sequences showed good longitudinal stability of mean FA and ADC values per compartment, exhibiting low standard deviations of approximately 9%. A t-test was performed to compare mean FA values from the DTI clinical protocol to those of the DTI special protocol, indicating significantly different values in the majority of compartments. ANOVA performed on ADC values for all DTI and DWI sequences also showed significantly different values in a majority of compartments. Conclusion: This work has the potential for quantifying systemic variations between diffusion imaging sequences from different platforms. Characterization of DWI and DTI performance was done over four sequences with predictable results. These data suggest that the DQIP phantom may be a reliable method of monitoring day-to-day and scan-to-scan variation in
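
    For reference, the standard formulas for the mean diffusivity (ADC) and fractional anisotropy (FA) can be applied to diffusion-tensor eigenvalues as in the sketch below; the eigenvalues are illustrative, not values from the phantom study:

        # Standard FA and mean-diffusivity (ADC) formulas applied to diffusion-tensor
        # eigenvalues. The eigenvalues below are illustrative only.
        import numpy as np

        lam = np.array([1.7e-3, 0.4e-3, 0.3e-3])   # mm^2/s, tensor eigenvalues

        adc = lam.mean()                            # mean diffusivity
        fa = np.sqrt(1.5) * np.linalg.norm(lam - adc) / np.linalg.norm(lam)
        print(f"ADC = {adc:.2e} mm^2/s, FA = {fa:.2f}")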

  3. Quantitative Verification of a Force-based Model for Pedestrian Dynamics

    CERN Document Server

    Chraibi, Mohcine; Schadschneider, Andreas; Mackens, Wolfgang

    2009-01-01

    This paper introduces a spatially continuous force-based model for simulating pedestrian dynamics. The main intention of this work is the quantitative description of pedestrian movement through bottlenecks and in corridors. Measurements of flow and density at bottlenecks will be presented and compared with empirical data. Furthermore the fundamental diagram for the movement in a corridor is reproduced. The results of the proposed model show a good agreement with empirical data.

  4. Reproducible long-term disc degeneration in a large animal model

    NARCIS (Netherlands)

    Hoogendoorn, R.J.W.; Helder, M.N.; Kroeze, R.J.; Bank, R.A.; Smit, T.H.; Wuisman, P.I.J.M.

    2008-01-01

    STUDY DESIGN. Twelve goats were chemically degenerated and the development of the degenerative signs was followed for 26 weeks to evaluate the progression of the induced degeneration. The results were also compared with a previous study to determine the reproducibility. OBJECTIVES. The purpose of th

  5. Reproducing the Wechsler Intelligence Scale for Children-Fifth Edition: Factor Model Results

    Science.gov (United States)

    Beaujean, A. Alexander

    2016-01-01

    One of the ways to increase the reproducibility of research is for authors to provide a sufficient description of the data analytic procedures so that others can replicate the results. The publishers of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V) do not follow these guidelines when reporting their confirmatory factor…

  6. Quantitative Models and Analysis for Reactive Systems

    DEFF Research Database (Denmark)

    Thrane, Claus

    phones and websites. Acknowledging that now more than ever, systems come in contact with the physical world, we need to revise the way we construct models and verification algorithms, to take into account the behavior of systems in the presence of approximate, or quantitative information, provided......, allowing verification procedures to quantify judgements, on how suitable a model is for a given specification — hence mitigating the usual harsh distinction between satisfactory and non-satisfactory system designs. This information, among other things, allows us to evaluate the robustness of our framework......, by studying how small changes to our models affect the verification results. A key source of motivation for this work can be found in The Embedded Systems Design Challenge [HS06] posed by Thomas A. Henzinger and Joseph Sifakis. It contains a call for advances in the state-of-the-art of systems verification...

  7. Assessment of the potential forecasting skill of a global hydrological model in reproducing the occurrence of monthly flow extremes

    NARCIS (Netherlands)

    Candogan Yossef, N.A.N.N.; Beek, L.P.H. van; Kwadijk, J.C.J.; Bierkens, M.F.P.

    2012-01-01

    As an initial step in assessing the prospect of using global hydrological models (GHMs) for hydrological forecasting, this study investigates the skill of the GHM PCRGLOBWB in reproducing the occurrence of past extremes in monthly discharge on a global scale. Global terrestrial hydrology from 1958

  8. Can a coupled meteorology–chemistry model reproduce the historical trend in aerosol direct radiative effects over the Northern Hemisphere?

    Science.gov (United States)

    The ability of a coupled meteorology–chemistry model, i.e., Weather Research and Forecast and Community Multiscale Air Quality (WRF-CMAQ), to reproduce the historical trend in aerosol optical depth (AOD) and clear-sky shortwave radiation (SWR) over the Northern Hemisphere h...

  10. Reproducibility of scratch assays is affected by the initial degree of confluence: Experiments, modelling and model selection.

    Science.gov (United States)

    Jin, Wang; Shah, Esha T; Penington, Catherine J; McCue, Scott W; Chopin, Lisa K; Simpson, Matthew J

    2016-02-01

    Scratch assays are difficult to reproduce. Here we identify a previously overlooked source of variability which could partially explain this difficulty. We analyse a suite of scratch assays in which we vary the initial degree of confluence (initial cell density). Our results indicate that the rate of re-colonisation is very sensitive to the initial density. To quantify the relative roles of cell migration and proliferation, we calibrate the solution of the Fisher-Kolmogorov model to cell density profiles to provide estimates of the cell diffusivity, D, and the cell proliferation rate, λ. This procedure indicates that the estimates of D and λ are very sensitive to the initial density. This dependence suggests that the Fisher-Kolmogorov model does not accurately represent the details of the collective cell spreading process, since this model assumes that D and λ are constants that ought to be independent of the initial density. Since higher initial cell density leads to enhanced spreading, we also calibrate the solution of the Porous-Fisher model to the data as this model assumes that the cell flux is an increasing function of the cell density. Estimates of D and λ associated with the Porous-Fisher model are less sensitive to the initial density, suggesting that the Porous-Fisher model provides a better description of the experiments.
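
    As a rough illustration of the Fisher-Kolmogorov model referred to above, the sketch below advances du/dt = D d²u/dx² + λu(1 − u/K) with an explicit finite-difference scheme; the parameter values are arbitrary, not those calibrated in the study:

        # Minimal explicit finite-difference sketch of the Fisher-Kolmogorov model
        #   du/dt = D d2u/dx2 + lambda * u * (1 - u/K)
        # used only to illustrate the roles of D and lambda; values are arbitrary.
        import numpy as np

        D, lam, K = 500.0, 0.05, 1.7e-3        # um^2/h, 1/h, cells/um^2 (illustrative)
        dx, dt, nx, nt = 20.0, 0.1, 200, 480   # grid spacing, time step, sizes (48 h total)

        u = np.full(nx, K)                     # confluent monolayer at carrying capacity
        u[80:120] = 0.0                        # an 800-um-wide "scratch" in the middle

        for _ in range(nt):
            lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2   # diffusion term
            u = u + dt * (D * lap + lam * u * (1 - u / K))           # migration + proliferation

        print(f"density at the scratch centre after 48 h: {u[100]:.2e} (K = {K:.1e} cells/um^2)")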

  11. Some problems with reproducing the Standard Model fields and interactions in five-dimensional warped brane world models

    Science.gov (United States)

    Smolyakov, Mikhail N.; Volobuev, Igor P.

    2016-01-01

    In this paper we examine, from the purely theoretical point of view and in a model-independent way, the case when matter, gauge and Higgs fields are allowed to propagate in the bulk of five-dimensional brane world models with compact extra dimension, and the Standard Model fields and their interactions are supposed to be reproduced by the corresponding zero Kaluza-Klein modes. An unexpected result is that in order to avoid possible pathological behavior in the fermion sector, it is necessary to impose constraints on the fermion field Lagrangian. In the case when the fermion zero modes are supposed to be localized at one of the branes, these constraints imply an additional relation between the vacuum profile of the Higgs field and the form of the background metric. Moreover, this relation between the vacuum profile of the Higgs field and the form of the background metric results in the exact reproduction of the gauge boson and fermion sectors of the Standard Model by the corresponding zero mode four-dimensional effective theory in all the physically relevant cases, allowed by the absence of pathologies. Meanwhile, deviations from these conditions can lead either back to pathological behavior in the fermion sector or to a variance between the resulting zero mode four-dimensional effective theory and the Standard Model, which, depending on the model at hand, may, in principle, result in constraints putting the theory out of the reach of present-day experiments.

  12. Quantitative bioluminescence imaging of mouse tumor models.

    Science.gov (United States)

    Tseng, Jen-Chieh; Kung, Andrew L

    2015-01-05

    Bioluminescence imaging (BLI) has become an essential technique for preclinical evaluation of anticancer therapeutics and provides sensitive and quantitative measurements of tumor burden in experimental cancer models. For light generation, a vector encoding firefly luciferase is introduced into human cancer cells that are grown as tumor xenografts in immunocompromised hosts, and the enzyme substrate luciferin is injected into the host. Alternatively, the reporter gene can be expressed in genetically engineered mouse models to determine the onset and progression of disease. In addition to expression of an ectopic luciferase enzyme, bioluminescence requires oxygen and ATP, thus only viable luciferase-expressing cells or tissues are capable of producing bioluminescence signals. Here, we summarize a BLI protocol that takes advantage of advances in hardware, especially the cooled charge-coupled device camera, to enable detection of bioluminescence in living animals with high sensitivity and a large dynamic range.

  13. Quantitative assessment model for gastric cancer screening

    Institute of Scientific and Technical Information of China (English)

    Kun Chen; Wei-Ping Yu; Liang Song; Yi-Min Zhu

    2005-01-01

    AIM: To set up a mathematical model for gastric cancer screening and to evaluate its function in mass screening for gastric cancer. METHODS: A case-control study was carried out in 66 patients and 198 normal people, and the risk and protective factors for gastric cancer were determined, including heavy manual work, foods such as small yellow-fin tuna, dried small shrimps, squills and crabs, mothers suffering from gastric diseases, spouse alive, use of refrigerators and hot food, etc. According to principles and methods of probability and fuzzy mathematics, a quantitative assessment model was established as follows: first, we selected factors significant in statistics and calculated a weight coefficient for each one by two different methods; second, the population space was divided into a gastric cancer fuzzy subset and a non-gastric-cancer fuzzy subset, a mathematical model was established for each subset, and a mathematical expression of the attribute degree (AD) was obtained. RESULTS: Based on the data of 63 patients and 693 normal people, the AD of each subject was calculated. Considering sensitivity and specificity, the thresholds of the calculated AD values were set at 0.20 and 0.17, respectively. According to these thresholds, the sensitivity and specificity of the quantitative model were about 69% and 63%. Moreover, statistical testing showed that the identification outcomes of the two different calculation methods were identical (P > 0.05). CONCLUSION: The validity of this method is satisfactory. It is convenient, feasible, and economical, and can be used to determine individual and population risks of gastric cancer.
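
    Purely as a hypothetical illustration of thresholding a weighted screening score of this kind (the factors, weights, and threshold below are invented and are not those derived in the study), the decision rule might be sketched as:

        # Invented weighted attribute-degree style score; not the study's model.
        factors = {"heavy_manual_work": 1, "dried_small_shrimps": 0,
                   "mother_gastric_disease": 1, "uses_refrigerator": 1, "eats_hot_food": 0}
        weights = {"heavy_manual_work": 0.30, "dried_small_shrimps": 0.15,
                   "mother_gastric_disease": 0.25, "uses_refrigerator": -0.20,
                   "eats_hot_food": 0.10}   # negative weight = protective (hypothetical)

        ad = sum(weights[f] * present for f, present in factors.items())
        print("refer for further examination" if ad >= 0.20 else "low risk")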

  14. Elusive reproducibility.

    Science.gov (United States)

    Gori, Gio Batta

    2014-08-01

    Reproducibility remains a mirage for many biomedical studies because inherent experimental uncertainties generate idiosyncratic outcomes. The authentication and error rates of primary empirical data are often elusive, while multifactorial confounders beset experimental setups. Substantive methodological remedies are difficult to conceive, signifying that many biomedical studies yield more or less plausible results, depending on the attending uncertainties. Real life applications of those results remain problematic, with important exceptions for counterfactual field validations of strong experimental signals, notably for some vaccines and drugs, and for certain safety and occupational measures. It is argued that industrial, commercial and public policies and regulations could not ethically rely on unreliable biomedical results; rather, they should be rationally grounded on transparent cost-benefit tradeoffs.

  15. Global Quantitative Modeling of Chromatin Factor Interactions

    Science.gov (United States)

    Zhou, Jian; Troyanskaya, Olga G.

    2014-01-01

    Chromatin is the driver of gene regulation, yet understanding the molecular interactions underlying chromatin factor combinatorial patterns (or the “chromatin codes”) remains a fundamental challenge in chromatin biology. Here we developed a global modeling framework that leverages chromatin profiling data to produce a systems-level view of the macromolecular complex of chromatin. Our model utilizes maximum entropy modeling with regularization-based structure learning to statistically dissect dependencies between chromatin factors and produce an accurate probability distribution of the chromatin code. Our unsupervised quantitative model, trained on genome-wide chromatin profiles of 73 histone marks and chromatin proteins from modENCODE, enabled various data-driven inferences about chromatin profiles and interactions. We provided a highly accurate predictor of chromatin factor pairwise interactions validated by known experimental evidence, and for the first time enabled higher-order interaction prediction. Our predictions can thus help guide future experimental studies. The model can also serve as an inference engine for predicting unknown chromatin profiles: we demonstrated that with this approach we can leverage data from well-characterized cell types to help understand less-studied cell types or conditions. PMID:24675896
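
    One common, simplified stand-in for regularization-based structure learning on binary profiles is an L1-penalised logistic regression fitted for each factor against the others; the sketch below uses random toy data and is not the paper's maximum-entropy implementation:

        # Neighbourhood-selection sketch: for each factor, fit an L1-penalised
        # logistic regression on the other factors; nonzero coefficients flag
        # candidate pairwise dependencies. Data here are random toy profiles.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = (rng.random((5000, 10)) < 0.3).astype(int)   # 5000 bins x 10 factors (toy)

        couplings = np.zeros((10, 10))
        for j in range(10):
            others = np.delete(np.arange(10), j)
            clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
            clf.fit(X[:, others], X[:, j])
            couplings[j, others] = clf.coef_[0]          # nonzero -> candidate interaction

        print(np.round(couplings, 2))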

  16. Automated quantitative gait analysis in animal models of movement disorders

    Directory of Open Access Journals (Sweden)

    Vandeputte Caroline

    2010-08-01

    Abstract Background Accurate and reproducible behavioral tests in animal models are of major importance in the development and evaluation of new therapies for central nervous system disease. In this study we investigated for the first time gait parameters of rat models for Parkinson's disease (PD), Huntington's disease (HD) and stroke using the Catwalk method, a novel automated gait analysis test. Static and dynamic gait parameters were measured in all animal models, and these data were compared to readouts of established behavioral tests, such as the cylinder test in the PD and stroke rats and the rotarod test for the HD group. Results Hemiparkinsonian rats were generated by unilateral injection of the neurotoxin 6-hydroxydopamine in the striatum or in the medial forebrain bundle. For Huntington's disease, a transgenic rat model expressing a truncated huntingtin fragment with multiple CAG repeats was used. Thirdly, a stroke model was generated by a photothrombotically induced infarct in the right sensorimotor cortex. We found that multiple gait parameters were significantly altered in all three disease models compared to their respective controls. Behavioural deficits could be efficiently measured using the cylinder test in the PD and stroke animals, and in the case of the PD model, the deficits in gait essentially confirmed results obtained by the cylinder test. However, in the HD model and the stroke model the Catwalk analysis proved more sensitive than the rotarod test and also added new and more detailed information on specific gait parameters. Conclusion The automated quantitative gait analysis test may be a useful tool to study both motor impairment and recovery associated with various neurological motor disorders.

  17. Quantitative Modeling of Landscape Evolution, Treatise on Geomorphology

    NARCIS (Netherlands)

    Temme, A.J.A.M.; Schoorl, J.M.; Claessens, L.F.G.; Veldkamp, A.; Shroder, F.S.

    2013-01-01

    This chapter reviews quantitative modeling of landscape evolution – which means that not just model studies but also modeling concepts are discussed. Quantitative modeling is contrasted with conceptual or physical modeling, and four categories of model studies are presented. Procedural studies focus

  18. Whole-body skeletal imaging in mice utilizing microPET: optimization of reproducibility and applications in animal models of bone disease

    Energy Technology Data Exchange (ETDEWEB)

    Berger, Frank [The Crump Institute for Molecular Imaging, Department of Molecular and Medical Pharmacology, University of California School of Medicine, 700 Westwood Blvd., Los Angeles, CA 90095 (United States); Department of Nuclear Medicine, Ludwig-Maximilians-University, Munich (Germany); Lee, Yu-Po; Lieberman, Jay R. [Department of Orthopedic Surgery, University of California School of Medicine, Los Angeles, California (United States); Loening, Andreas M.; Chatziioannou, Arion [The Crump Institute for Molecular Imaging, Department of Molecular and Medical Pharmacology, University of California School of Medicine, 700 Westwood Blvd., Los Angeles, CA 90095 (United States); Freedland, Stephen J.; Belldegrun, Arie S. [Department of Urology, University of California School of Medicine, Los Angeles, California (United States); Leahy, Richard [University of Southern California School of Bioengineering, Los Angeles, California (United States); Sawyers, Charles L. [Department of Medicine, University of California School of Medicine, Los Angeles, California (United States); Gambhir, Sanjiv S. [The Crump Institute for Molecular Imaging, Department of Molecular and Medical Pharmacology, University of California School of Medicine, 700 Westwood Blvd., Los Angeles, CA 90095 (United States); UCLA-Jonsson Comprehensive Cancer Center and Department of Biomathematics, University of California School of Medicine, Los Angeles, California (United States)

    2002-09-01

    The aims were to optimize reproducibility and establish [¹⁸F]fluoride ion bone scanning in mice, using a dedicated small animal positron emission tomography (PET) scanner (microPET) and to correlate functional findings with anatomical imaging using computed tomography (microCAT). Optimal tracer uptake time for [¹⁸F]fluoride ion was determined by performing dynamic microPET scans. Quantitative reproducibility was measured using region of interest (ROI)-based counts normalized to (a) the injected dose, (b) integral of the heart time-activity curve, or (c) ROI over the whole skeleton. Bone lesions were repetitively imaged. Functional images were correlated with X-ray and microCAT. The plateau of [¹⁸F]fluoride uptake occurs 60 min after injection. The highest reproducibility was achieved by normalizing to an ROI over the whole skeleton, with a mean percent coefficient of variation [(SD/mean) x 100] of <15%-20%. Benign and malignant bone lesions were successfully repetitively imaged. Preliminary correlation of microPET with microCAT demonstrated the high sensitivity of microPET and the ability of microCAT to detect small osteolytic lesions. Whole-body [¹⁸F]fluoride ion bone imaging using microPET is reproducible and can be used to serially monitor normal and pathological changes to the mouse skeleton. Morphological imaging with microCAT is useful to display correlative changes in anatomy. Detailed in vivo studies of the murine skeleton in various small animal models of bone diseases should now be possible. (orig.)

  19. Impact of soil parameter and physical process on reproducibility of hydrological processes by land surface model in semiarid grassland

    Science.gov (United States)

    Miyazaki, S.; Yorozu, K.; Asanuma, J.; Kondo, M.; Saito, K.

    2014-12-01

    The land surface model (LSM) represents land-atmosphere interactions in the Earth system models used for climate change research. In this study, we evaluated the impact of soil parameters and physical processes on the reproducibility of hydrological processes by the LSM Minimal Advanced Treatments of Surface Interaction and RunOff (MATSIRO; Takata et al., 2003, GPC), forced by meteorological data observed at grassland sites in the semiarid climates of China and Mongolia. The testing of MATSIRO was carried out in offline mode over the semiarid grassland sites at Tongyu (44.42 deg. N, 122.87 deg. E, altitude: 184 m) in China, and Kherlen Bayan Ulaan (KBU; 47.21 deg. N, 108.74 deg. E, altitude: 1235 m) and Arvaikheer (46.23 deg. N, 102.82 deg. E, altitude: 1813 m) in Mongolia. Although all sites are located in semiarid grassland, the climate conditions differ among them: the annual air temperature and precipitation are 5.7 deg. C and 388 mm (Tongyu), 1.2 deg. C and 180 mm (KBU), and 0.4 deg. C and 245 mm (Arvaikheer), which allows the effect of climate conditions on model performance to be evaluated. Three kinds of experiments were carried out: a run with the default parameters (CTL), a run with observed parameters for soil physics, hydrology, and vegetation (OBS), and a run with a refined MATSIRO that includes the effect of ice on thermal parameters and of unfrozen water below freezing, using the same parameters as the OBS run (OBSr). The validation data were provided by CEOP (http://www.ceop.net/), RAISE (http://raise.suiri.tsukuba.ac.jp/), and GAME-AAN (Miyazaki et al., 2004, JGR) for Tongyu, KBU, and Arvaikheer, respectively. The net radiation, soil temperature (Ts), and latent heat flux (LE) were well reproduced by the OBS and OBSr runs. Changing the soil physical and hydraulic parameters affected the reproducibility of soil temperature (Ts) and soil moisture (SM), as well as the energy flux components, especially the sensible heat flux (H) and soil heat flux (G). The reason for the great improvement on the

  20. The quantitative modelling of human spatial habitability

    Science.gov (United States)

    Wise, J. A.

    1985-01-01

    A model for the quantitative assessment of human spatial habitability is presented in the space station context. The visual aspect assesses how interior spaces appear to the inhabitants. This aspect concerns criteria such as sensed spaciousness and the affective (emotional) connotations of settings' appearances. The kinesthetic aspect evaluates the available space in terms of its suitability to accommodate human movement patterns, as well as the postural and anthropometric changes due to microgravity. Finally, social logic concerns how the volume and geometry of available space either affirms or contravenes established social and organizational expectations for spatial arrangements. Here, the criteria include privacy, status, social power, and proxemics (the uses of space as a medium of social communication).

  1. Assessing the relative effectiveness of statistical downscaling and distribution mapping in reproducing rainfall statistics based on climate model results

    Science.gov (United States)

    Langousis, Andreas; Mamalakis, Antonios; Deidda, Roberto; Marrocu, Marino

    2016-01-01

    To improve the skill of climate models (CMs) in reproducing the statistics of daily rainfall at the basin level, two types of statistical approaches have been suggested. One is statistical correction of CM rainfall outputs based on historical series of precipitation. The other, usually referred to as statistical rainfall downscaling, is the use of stochastic models to conditionally simulate rainfall series, based on large-scale atmospheric forcing from CMs. While promising, the latter approach has attracted less attention in recent years, since the downscaling schemes developed involved complex weather identification procedures while demonstrating limited success in reproducing several statistical features of rainfall. In a recent effort, Langousis and Kaleris () developed a statistical framework for simulation of daily rainfall intensities conditional on upper-air variables, which is simpler to implement and more accurately reproduces several statistical properties of actual rainfall records. Here we study the relative performance of (a) direct statistical correction of CM rainfall outputs using nonparametric distribution mapping, and (b) the statistical downscaling scheme of Langousis and Kaleris (), in reproducing the historical rainfall statistics, including rainfall extremes, at a regional level. This is done for an intermediate-sized catchment in Italy, i.e., the Flumendosa catchment, using rainfall and atmospheric data from four CMs of the ENSEMBLES project. The obtained results are promising, since the proposed downscaling scheme is more accurate and robust in reproducing a number of historical rainfall statistics, independent of the CM used and the characteristics of the calibration period. This is particularly the case for yearly rainfall maxima.
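
    As a sketch of the nonparametric distribution mapping mentioned in (a), an empirical quantile-mapping correction can be written as follows; the rainfall series are synthetic and the function name is hypothetical:

        # Empirical quantile mapping: each model value is replaced by the observed
        # value at the same quantile of the historical distributions. Data are synthetic.
        import numpy as np

        rng = np.random.default_rng(1)
        obs_hist = rng.gamma(shape=0.8, scale=12.0, size=3000)   # observed daily rainfall, mm
        cm_hist = rng.gamma(shape=0.6, scale=9.0, size=3000)     # climate-model rainfall, mm
        cm_future = rng.gamma(shape=0.6, scale=9.5, size=1000)   # values to be corrected

        def quantile_map(x, model_ref, obs_ref):
            """Map model values onto the observed distribution via empirical quantiles."""
            q = np.searchsorted(np.sort(model_ref), x) / len(model_ref)
            return np.quantile(obs_ref, np.clip(q, 0.0, 1.0))

        corrected = quantile_map(cm_future, cm_hist, obs_hist)
        print(f"raw model mean: {cm_future.mean():.1f} mm, corrected mean: {corrected.mean():.1f} mm")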

  2. How well do CMIP5 climate models reproduce explosive cyclones in the extratropics of the Northern Hemisphere?

    Science.gov (United States)

    Seiler, C.; Zwiers, F. W.

    2016-02-01

    Extratropical explosive cyclones are rapidly intensifying low pressure systems with severe wind speeds and heavy precipitation, affecting livelihoods and infrastructure primarily in coastal and marine environments. This study evaluates how well the most recent generation of climate models reproduces extratropical explosive cyclones in the Northern Hemisphere for the period 1980-2005. An objective-feature tracking algorithm is used to identify and track cyclones from 25 climate models and three reanalysis products. Model biases are compared to biases in the sea surface temperature (SST) gradient, the polar jet stream, the Eady growth rate, and model resolution. Most models accurately reproduce the spatial distribution of explosive cyclones when compared to reanalysis data ( R = 0.94), with high frequencies along the Kuroshio Current and the Gulf Stream. Three quarters of the models however significantly underpredict explosive cyclone frequencies, by a third on average and by two thirds in the worst case. This frequency bias is significantly correlated with jet stream speed in the inter-model spread ( R ≥ 0.51), which in the Atlantic is correlated with a negative meridional SST gradient ( R = -0.56). The importance of the jet stream versus other variables considered in this study also applies to the interannual variability of explosive cyclone frequency. Furthermore, models with fewer explosive cyclones tend to underpredict the corresponding deepening rates ( R ≥ 0.88). A follow-up study will assess the impacts of climate change on explosive cyclones, and evaluate how model biases presented in this study affect the projections.

  3. A novel, comprehensive, and reproducible porcine model for determining the timing of bruises in forensic pathology

    DEFF Research Database (Denmark)

    Barington, Kristiane; Jensen, Henrik Elvang

    2016-01-01

    that resulted in bruises were inflicted on the back. In addition, 2 control pigs were included in the study. The pigs were euthanized consecutively from 1 to 10 h after the infliction of bruises. Following gross evaluation, skin, and muscle tissues were sampled for histology. Results Grossly, the bruises...... appeared uniform and identical to the tramline bruises seen in humans and pigs subjected to blunt trauma. Histologically, the number of neutrophils in the subcutis, the number of macrophages in the muscle tissue, and the localization of neutrophils and macrophages in muscle tissue showed a time...... in order to identify gross and histological parameters that may be useful in determining the age of a bruise. Methods The mechanical device was able to apply a single reproducible stroke with a plastic tube that was equivalent to being struck by a man. In each of 10 anesthetized pigs, four strokes...

  4. Assessment of the performance of numerical modeling in reproducing a replenishment of sediments in a water-worked channel

    Science.gov (United States)

    Juez, C.; Battisacco, E.; Schleiss, A. J.; Franca, M. J.

    2016-06-01

    The artificial replenishment of sediment is used as a method to re-establish sediment continuity downstream of a dam. However, the impact of this technique on the hydraulic conditions, and the resulting bed morphology, is yet to be understood. Several numerical tools for modeling sediment transport and morphology evolution have been developed in recent years and can be used for this application. These models range from 1D to 3D approaches: the former are overly simplistic for the simulation of such a complex geometry, while the latter often require a prohibitive computational effort. However, 2D models are computationally efficient and in these cases may already provide sufficiently accurate predictions of the morphology evolution caused by sediment replenishment in a river. Here, the 2D shallow water equations in combination with the Exner equation are solved by means of a weakly coupled strategy. The classical friction approach used for reproducing the channel bed roughness has been modified to take into account the morphological effect of replenishment, which causes a fining of the channel bed. Computational outcomes are compared with four sets of experimental data obtained from several replenishment configurations studied in the laboratory. The experiments differ in terms of placement volume and configuration. A set of analysis parameters is proposed for the experimental-numerical comparison, with particular attention to the spreading, covered surface, and travel distance of the placed replenishment grains. The numerical tool is reliable in reproducing the overall tendency shown by the experimental data. The effect of the fining roughness is better reproduced with the approach proposed herein. However, it is also highlighted that the sediment clusters found in the experiments are not well reproduced numerically in regions of the channel with a limited number of sediment grains.
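
    As a highly simplified illustration of the Exner bed-evolution equation mentioned above, the sketch below updates a 1D bed with (1 − p) ∂z/∂t + ∂q_s/∂x = 0 and a generic power-law transport capacity; the coefficients are arbitrary and the flow field is held fixed, unlike in the weakly coupled model of the study:

        # 1D Exner-type bed update with a generic power-law transport capacity
        # q_s = a * u**b; illustration only, with an arbitrary, frozen flow field.
        import numpy as np

        nx, dx, dt, porosity = 100, 0.1, 0.01, 0.4
        a, b = 1e-4, 3.0

        z0 = np.linspace(0.05, 0.0, nx)            # initial bed elevation (m)
        z = z0.copy()
        u = np.full(nx, 0.8)                       # depth-averaged velocity (m/s), held fixed
        u[40:60] = 1.2                             # faster flow over a replenishment deposit

        for _ in range(500):
            qs = a * u**b                          # bedload transport capacity (m^2/s)
            dqs_dx = np.gradient(qs, dx)
            z -= dt * dqs_dx / (1.0 - porosity)    # Exner update

        change = z - z0
        print(f"bed change range: {change.min():.4f} m to {change.max():.4f} m")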

  5. Reproducible ion-current-based approach for 24-plex comparison of the tissue proteomes of hibernating versus normal myocardium in swine models.

    Science.gov (United States)

    Qu, Jun; Young, Rebeccah; Page, Brian J; Shen, Xiaomeng; Tata, Nazneen; Li, Jun; Duan, Xiaotao; Fallavollita, James A; Canty, John M

    2014-05-02

    Hibernating myocardium is an adaptive response to repetitive myocardial ischemia that is clinically common, but the mechanism of adaptation is poorly understood. Here we compared the proteomes of hibernating versus normal myocardium in a porcine model with 24 biological replicates. Using the ion-current-based proteomic strategy optimized in this study to expand upon previous proteomic work, we identified differentially expressed proteins in new molecular pathways of cardiovascular interest. The methodological strategy includes efficient extraction with a detergent cocktail; a precipitation/digestion procedure with high, quantitative peptide recovery; reproducible nano-LC/MS analysis on a long, heated column packed with small particles; and quantification based on ion-current peak areas. Under the optimized conditions, high efficiency and reproducibility were achieved for each step, which enabled a reliable comparison of the 24 myocardial samples. To achieve confident discovery of differentially regulated proteins in hibernating myocardium, we used highly stringent criteria to define "quantifiable proteins". These included the filtering criteria of low peptide FDR and S/N > 10 for peptide ion currents, and each protein was quantified independently from ≥2 distinct peptides. For a broad methodological validation, the quantitative results were compared with a parallel, well-validated 2D-DIGE analysis of the same model. Excellent agreement between the two orthogonal methods was observed (R = 0.74), and the ion-current-based method quantified almost one order of magnitude more proteins. In hibernating myocardium, 225 significantly altered proteins were discovered with a low false-discovery rate (∼3%). These proteins are involved in biological processes including metabolism, apoptosis, stress response, contraction, cytoskeleton, transcription, and translation. This provides compelling evidence that hibernating myocardium adapts to chronic ischemia. The major metabolic
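
    The peptide filtering and protein-level roll-up described above can be pictured with a small, invented table: peptides with adequate signal-to-noise are kept, and a protein is quantified only if at least two distinct peptides survive the filter.

        # Toy illustration of the filtering/quantification logic; all values invented.
        import pandas as pd

        peptides = pd.DataFrame({
            "protein":   ["P1", "P1", "P1", "P2", "P2"],
            "peptide":   ["AAK", "LLTR", "VDEK", "GGYR", "SSLK"],
            "snr":       [25.0, 14.0, 8.0, 30.0, 6.0],
            "area_hib":  [1.8e6, 9.5e5, 4.0e5, 2.2e6, 1.0e5],   # hibernating myocardium
            "area_ctrl": [1.2e6, 7.0e5, 3.5e5, 2.1e6, 1.1e5],   # normal myocardium
        })

        kept = peptides[peptides["snr"] > 10]                    # S/N filter
        counts = kept.groupby("protein")["peptide"].nunique()
        quantifiable = counts[counts >= 2].index                 # >= 2 distinct peptides

        ratios = (kept[kept["protein"].isin(quantifiable)]
                  .groupby("protein")[["area_hib", "area_ctrl"]].sum())
        ratios["fold_change"] = ratios["area_hib"] / ratios["area_ctrl"]
        print(ratios)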

  6. Toward quantitative modeling of silicon phononic thermocrystals

    Energy Technology Data Exchange (ETDEWEB)

    Lacatena, V. [STMicroelectronics, 850, rue Jean Monnet, F-38926 Crolles (France); IEMN UMR CNRS 8520, Institut d'Electronique, de Microélectronique et de Nanotechnologie, Avenue Poincaré, F-59652 Villeneuve d'Ascq (France); Haras, M.; Robillard, J.-F., E-mail: jean-francois.robillard@isen.iemn.univ-lille1.fr; Dubois, E. [IEMN UMR CNRS 8520, Institut d'Electronique, de Microélectronique et de Nanotechnologie, Avenue Poincaré, F-59652 Villeneuve d'Ascq (France); Monfray, S.; Skotnicki, T. [STMicroelectronics, 850, rue Jean Monnet, F-38926 Crolles (France)

    2015-03-16

    The wealth of patterning technologies with deca-nanometer resolution brings opportunities to artificially modulate thermal transport properties. A promising example is given by the recent concepts of 'thermocrystals' or 'nanophononic crystals' that introduce regular nano-scale inclusions with a pitch between the thermal phonon mean free path and the electron mean free path. In such structures, the lattice thermal conductivity is reduced by up to two orders of magnitude with respect to its bulk value. Beyond the promise held by these materials to overcome the well-known “electron crystal-phonon glass” dilemma faced in thermoelectrics, the quantitative prediction of their thermal conductivity poses a challenge. This work paves the way toward understanding and designing silicon nanophononic membranes by means of molecular dynamics simulation. Several systems are studied in order to distinguish the shape contribution from bulk, ultra-thin membranes (8 to 15 nm), 2D phononic crystals, and finally 2D phononic membranes. After the equilibrium properties of these structures are discussed from 300 K to 400 K, the Green-Kubo methodology is used to quantify the thermal conductivity. The results account for several experimental trends and models. It is confirmed that the thin-film geometry as well as the phononic structure act towards a reduction of the thermal conductivity. The further decrease in the phononic-engineered membrane clearly demonstrates that both phenomena are cumulative. Finally, limitations of the model and further perspectives are discussed.
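
    For orientation, the Green-Kubo post-processing step integrates the heat-flux autocorrelation function; one common convention is κ = V/(3 k_B T²) ∫ ⟨j(0)·j(t)⟩ dt. The sketch below applies it to random toy data standing in for molecular dynamics output:

        # Green-Kubo post-processing sketch on a toy heat-flux series (not MD output).
        import numpy as np

        kB = 1.380649e-23          # J/K
        T, V = 300.0, 1.0e-24      # K, m^3 (illustrative membrane volume)
        dt = 1.0e-15               # s, sampling interval of the heat flux

        rng = np.random.default_rng(2)
        j = rng.normal(0.0, 1.0e9, size=(50000, 3))   # heat-flux vector (W/m^2), toy data

        def autocorr(x, max_lag):
            n = len(x)
            return np.array([np.mean(np.sum(x[:n - lag] * x[lag:], axis=1))
                             for lag in range(max_lag)])

        acf = autocorr(j, max_lag=1000)
        kappa = V / (3.0 * kB * T**2) * np.trapz(acf, dx=dt)
        print(f"kappa ~ {kappa:.3e} W/(m K)   (toy value)")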

  7. Quantitative modeling of the ionospheric response to geomagnetic activity

    Directory of Open Access Journals (Sweden)

    T. J. Fuller-Rowell

    A physical model of the coupled thermosphere and ionosphere has been used to determine the accuracy of model predictions of the ionospheric response to geomagnetic activity, and assess our understanding of the physical processes. The physical model is driven by empirical descriptions of the high-latitude electric field and auroral precipitation, as measures of the strength of the magnetospheric sources of energy and momentum to the upper atmosphere. Both sources are keyed to the time-dependent TIROS/NOAA auroral power index. The output of the model is the departure of the ionospheric F region from the normal climatological mean. A 50-day interval towards the end of 1997 has been simulated with the model for two cases. The first simulation uses only the electric fields and auroral forcing from the empirical models, and the second has an additional source of random electric field variability. In both cases, output from the physical model is compared with F-region data from ionosonde stations. Quantitative model/data comparisons have been performed to move beyond the conventional "visual" scientific assessment, in order to determine the value of the predictions for operational use. For this study, the ionosphere at two ionosonde stations has been studied in depth, one each from the northern and southern mid-latitudes. The model clearly captures the seasonal dependence in the ionospheric response to geomagnetic activity at mid-latitude, reproducing the tendency for decreased ion density in the summer hemisphere and increased densities in winter. In contrast to the "visual" success of the model, the detailed quantitative comparisons, which are necessary for space weather applications, are less impressive. The accuracy, or value, of the model has been quantified by evaluating the daily standard deviation, the root-mean-square error, and the correlation coefficient between the data and model predictions. The modeled quiet-time variability, or standard
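
    The skill metrics named above (standard deviation of the error, root-mean-square error, and correlation coefficient) reduce to a few lines; the series below are placeholders, not ionosonde data:

        # Simple model-vs-data skill metrics on placeholder series.
        import numpy as np

        observed = np.array([6.1, 5.8, 7.2, 8.0, 7.5, 6.9, 5.5, 6.4])
        modeled = np.array([5.9, 6.2, 6.8, 8.4, 7.1, 7.3, 5.2, 6.0])

        error = modeled - observed
        rmse = np.sqrt(np.mean(error**2))
        corr = np.corrcoef(observed, modeled)[0, 1]
        print(f"error sd={error.std(ddof=1):.2f}, RMSE={rmse:.2f}, r={corr:.2f}")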

  8. Quantitative comparisons of analogue models of brittle wedge dynamics

    Science.gov (United States)

    Schreurs, Guido

    2010-05-01

    Analogue model experiments are widely used to gain insights into the evolution of geological structures. In this study, we present a direct comparison of experimental results of 14 analogue modelling laboratories using prescribed set-ups. A quantitative analysis of the results will document the variability among models and will allow an appraisal of reproducibility and limits of interpretation. This has direct implications for comparisons between structures in analogue models and natural field examples. All laboratories used the same frictional analogue materials (quartz and corundum sand) and prescribed model-building techniques (sieving and levelling). Although each laboratory used its own experimental apparatus, the same type of self-adhesive foil was used to cover the base and all the walls of the experimental apparatus in order to guarantee identical boundary conditions (i.e. identical shear stresses at the base and walls). Three experimental set-ups using only brittle frictional materials were examined. In each of the three set-ups the model was shortened by a vertical wall, which moved with respect to the fixed base and the three remaining sidewalls. The minimum width of the model (dimension parallel to mobile wall) was also prescribed. In the first experimental set-up, a quartz sand wedge with a surface slope of ˜20° was pushed by a mobile wall. All models conformed to the critical taper theory, maintained a stable surface slope and did not show internal deformation. In the next two experimental set-ups, a horizontal sand pack consisting of alternating quartz sand and corundum sand layers was shortened from one side by the mobile wall. In one of the set-ups a thin rigid sheet covered part of the model base and was attached to the mobile wall (i.e. a basal velocity discontinuity distant from the mobile wall). In the other set-up a basal rigid sheet was absent and the basal velocity discontinuity was located at the mobile wall. In both types of experiments

  9. Cellular automaton model with dynamical 2D speed-gap relation reproduces empirical and experimental features of traffic flow

    CERN Document Server

    Tian, Junfang; Ma, Shoufeng; Zhu, Chenqiang; Jiang, Rui; Ding, YaoXian

    2015-01-01

    This paper proposes an improved cellular automaton traffic flow model based on the brake light model, which takes into account that the desired time gap of vehicles is remarkably larger than one second. Although the hypothetical steady state of vehicles in the deterministic limit corresponds to a unique relationship between speeds and gaps in the proposed model, the traffic states of vehicles dynamically span a two-dimensional region in the plane of speed versus gap, due to the various randomizations. It is shown that the model is able to well reproduce (i) the free flow, synchronized flow, jam as well as the transitions among the three phases; (ii) the evolution features of disturbances and the spatiotemporal patterns in a car-following platoon; (iii) the empirical time series of traffic speed obtained from NGSIM data. Therefore, we argue that a model can potentially reproduce the empirical and experimental features of traffic flow, provided that the traffic states are able to dynamically span a 2D speed-gap...
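
    For context, a minimal single-lane cellular-automaton update (a Nagel-Schreckenberg style rule set, not the brake-light model proposed in the paper) looks like the following sketch:

        # Minimal single-lane CA traffic sketch: acceleration, gap limitation,
        # random slowdown, and movement on a ring road. Parameters are illustrative.
        import numpy as np

        rng = np.random.default_rng(3)
        road_len, n_cars, v_max, p_slow, steps = 200, 40, 5, 0.2, 100

        pos = np.sort(rng.choice(road_len, n_cars, replace=False))
        vel = np.zeros(n_cars, dtype=int)

        for _ in range(steps):
            gaps = (np.roll(pos, -1) - pos - 1) % road_len                # cells to the leader
            vel = np.minimum(vel + 1, v_max)                              # acceleration
            vel = np.minimum(vel, gaps)                                   # avoid collisions
            slow = rng.random(n_cars) < p_slow
            vel = np.where(slow, np.maximum(vel - 1, 0), vel)             # random slowdown
            pos = (pos + vel) % road_len

        print(f"mean speed after {steps} steps: {vel.mean():.2f} cells/step")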

  10. An exact arithmetic toolbox for a consistent and reproducible structural analysis of metabolic network models.

    Science.gov (United States)

    Chindelevitch, Leonid; Trigg, Jason; Regev, Aviv; Berger, Bonnie

    2014-10-07

    Constraint-based models are currently the only methodology that allows the study of metabolism at the whole-genome scale. Flux balance analysis is commonly used to analyse constraint-based models. Curiously, the results of this analysis vary with the software being run, a situation that we show can be remedied by using exact rather than floating-point arithmetic. Here we introduce MONGOOSE, a toolbox for analysing the structure of constraint-based metabolic models in exact arithmetic. We apply MONGOOSE to the analysis of 98 existing metabolic network models and find that the biomass reaction is surprisingly blocked (unable to sustain non-zero flux) in nearly half of them. We propose a principled approach for unblocking these reactions and extend it to the problems of identifying essential and synthetic lethal reactions and minimal media. Our structural insights enable a systematic study of constraint-based metabolic models, yielding a deeper understanding of their possibilities and limitations.
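
    A small sketch of the exact-arithmetic idea: if all reactions are treated as reversible, a reaction is blocked exactly when its coordinate vanishes in every null-space basis vector of the stoichiometric matrix, and using rationals avoids the floating-point round-off discussed above. The toy network is invented, not one of the 98 models analysed.

        # Exact-arithmetic structural check for blocked reactions on a toy network,
        # treating all reactions as reversible (no irreversibility constraints).
        from sympy import Matrix, Rational

        # metabolites x reactions:  R1: -> A,  R2: A -> B,  R3: B -> ,  R4: A -> C (dead end)
        S = Matrix([
            [1, -1,  0, -1],   # A
            [0,  1, -1,  0],   # B
            [0,  0,  0,  1],   # C (never consumed)
        ]).applyfunc(Rational)

        null_basis = S.nullspace()
        for j in range(S.cols):
            blocked = all(v[j] == 0 for v in null_basis)
            print(f"reaction R{j + 1}: {'blocked' if blocked else 'can carry flux'}")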

  11. Modeling conflict : research methods, quantitative modeling, and lessons learned.

    Energy Technology Data Exchange (ETDEWEB)

    Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.

    2004-09-01

    This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be for inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict both in the past, and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempt at modeling conflict as a result of system level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.

  12. A novel porcine model of ataxia telangiectasia reproduces neurological features and motor deficits of human disease.

    Science.gov (United States)

    Beraldi, Rosanna; Chan, Chun-Hung; Rogers, Christopher S; Kovács, Attila D; Meyerholz, David K; Trantzas, Constantin; Lambertz, Allyn M; Darbro, Benjamin W; Weber, Krystal L; White, Katherine A M; Rheeden, Richard V; Kruer, Michael C; Dacken, Brian A; Wang, Xiao-Jun; Davis, Bryan T; Rohret, Judy A; Struzynski, Jason T; Rohret, Frank A; Weimer, Jill M; Pearce, David A

    2015-11-15

    Ataxia telangiectasia (AT) is a progressive multisystem disorder caused by mutations in the AT-mutated (ATM) gene. AT is a neurodegenerative disease primarily characterized by cerebellar degeneration in children, leading to motor impairment. The disease progresses with other clinical manifestations including oculocutaneous telangiectasia, immune disorders, and increased susceptibility to cancer and respiratory infections. Although genetic investigations and physiological models have established the linkage of ATM with AT onset, the mechanisms linking ATM to neurodegeneration remain undetermined, hindering therapeutic development. Several murine models of AT have been successfully generated and show some of the clinical manifestations of the disease; however, they do not fully recapitulate the hallmark neurological phenotype, highlighting the need for a more suitable animal model. We engineered a novel porcine model of AT to better phenocopy the disease and bridge the gap between human and current animal models. The initial characterization of AT pigs revealed early cerebellar lesions, including loss of Purkinje cells (PCs) and altered cytoarchitecture, suggesting a developmental etiology for AT that could advocate for early therapies for AT patients. In addition, similar to patients, AT pigs show growth retardation and develop motor deficit phenotypes. By using the porcine system to model human AT, we established the first animal model showing PC loss and motor features of the human disease. The novel AT pig provides new opportunities to unmask functions and roles of ATM in AT disease and in physiological conditions.

  13. Can a global model reproduce observed trends in summertime surface ozone levels?

    OpenAIRE

    S. Koumoutsaris; I. Bey

    2012-01-01

    Quantifying trends in surface ozone concentrations is critical for assessing pollution control strategies. Here we use observations and results from a global chemical transport model to examine the trends (1991–2005) in daily maximum 8-hour average concentrations in summertime surface ozone at rural sites in Europe and the United States. We find a decrease in observed ozone concentrations at the high end of the probability distribution at many of the sites in both regions. The model attribut...

  14. Qualitative vs. quantitative software process simulation modelling: conversion and comparison

    OpenAIRE

    Zhang, He; Kitchenham, Barbara; Jeffery, Ross

    2009-01-01

    peer-reviewed Software Process Simulation Modeling (SPSM) research has increased in the past two decades. However, most of these models are quantitative, which require detailed understanding and accurate measurement. Continuing our previous studies on qualitative modeling of the software process, this paper aims to investigate the structure equivalence and model conversion between quantitative and qualitative process modeling, and to compare the characteristics and performance o...

  15. Quantitative modelling of the biomechanics of the avian syrinx

    NARCIS (Netherlands)

    Elemans, C.P.H.; Larsen, O.N.; Hoffmann, M.R.; Leeuwen, van J.L.

    2003-01-01

    We review current quantitative models of the biomechanics of bird sound production. A quantitative model of the vocal apparatus was proposed by Fletcher (1988). He represented the syrinx (i.e. the portions of the trachea and bronchi with labia and membranes) as a single membrane. This membrane acts

  16. Quantitative modelling of the biomechanics of the avian syrinx

    DEFF Research Database (Denmark)

    Elemans, Coen P. H.; Larsen, Ole Næsbye; Hoffmann, Marc R.

    2003-01-01

    We review current quantitative models of the biomechanics of bird sound production. A quantitative model of the vocal apparatus was proposed by Fletcher (1988). He represented the syrinx (i.e. the portions of the trachea and bronchi with labia and membranes) as a single membrane. This membrane acts...

  17. Augmenting a Large-Scale Hydrology Model to Reproduce Groundwater Variability

    Science.gov (United States)

    Stampoulis, D.; Reager, J. T., II; Andreadis, K.; Famiglietti, J. S.

    2016-12-01

    To understand the influence of groundwater on terrestrial ecosystems and society, global assessment of groundwater temporal fluctuations is required. A water table was initialized in the Variable Infiltration Capacity (VIC) hydrologic model in a semi-realistic approach to account for groundwater variability. Global water table depth data derived from observations at nearly 2 million well sites compiled from government archives and published literature, as well as groundwater model simulations, were used to create a new soil layer of varying depth for each model grid cell. The new 4-layer version of VIC, hereafter named VIC-4L, was run with and without assimilating NASA's Gravity Recovery and Climate Experiment (GRACE) observations. The results were compared with simulations using the original VIC version (named VIC-3L) with GRACE assimilation, while all runs were compared with well data.

  18. Energy and nutrient deposition and excretion in the reproducing sow: model development and evaluation

    DEFF Research Database (Denmark)

    Hansen, A V; Strathe, A B; Theil, Peter Kappel;

    2014-01-01

    Air and nutrient emissions from swine operations raise environmental concerns. During the reproduction phase, sows consume and excrete large quantities of nutrients. The objective of this study was to develop a mathematical model to describe energy and nutrient partitioning and predict manure...... excretion and composition and methane emissions on a daily basis. The model was structured to contain gestation and lactation modules, which can be run separately or sequentially, with outputs from the gestation module used as inputs to the lactation module. In the gestating module, energy and protein...... production, and maternal growth with body tissue losses constrained within biological limits. Global sensitivity analysis showed that nonlinearity in the parameters was small. The model outputs considered were the total protein and fat deposition, average urinary and fecal N excretion, average methane...

  19. An exponent tunable network model for reproducing density driven superlinear relation

    CERN Document Server

    Qin, Yuhao; Xu, Lida; Gao, Zi-You

    2014-01-01

    Previous works have shown the universality of allometric scalings under density and total value at city level, but our understanding about the size effects of regions on them is still poor. Here, we revisit the scaling relations between gross domestic production (GDP) and population (POP) under total and density value. We first reveal that the superlinear scaling is a general feature under density value crossing different regions. The scaling exponent $\\beta$ under density value falls into the range $(1.0, 2.0]$, which unexpectedly goes beyond the range observed by Pan et al. (Nat. Commun. vol. 4, p. 1961 (2013)). To deal with the wider range, we propose a network model based on 2D lattice space with the spatial correlation factor $\\alpha$ as parameter. Numerical experiments prove that the generated scaling exponent $\\beta$ in our model is fully tunable by the spatial correlation factor $\\alpha$. We conjecture that our model provides a general platform for extensive urban and regional studies.
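
    For readers unfamiliar with how such exponents are obtained in practice, the scaling $Y \sim X^{\beta}$ is usually estimated as the slope of a straight-line fit in log-log space; the snippet below uses purely synthetic data with an assumed exponent of 1.4.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic density-like data following Y = c * X**beta with multiplicative noise.
        X = rng.uniform(1e2, 1e5, size=500)
        Y = 2.0 * X**1.4 * rng.lognormal(sigma=0.2, size=500)

        beta_hat, log_c_hat = np.polyfit(np.log(X), np.log(Y), 1)  # slope = scaling exponent
        print(f"estimated beta ~ {beta_hat:.2f}")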

  20. A simple branching model that reproduces language family and language population distributions

    Science.gov (United States)

    Schwämmle, Veit; de Oliveira, Paulo Murilo Castro

    2009-07-01

    Human history leaves fingerprints in human languages. Little is known about language evolution, and its study is of great importance. Here we construct a simple stochastic model and compare its results with statistical data on real languages. The model is based on the recent finding that language changes occur independently of the population size. We find agreement with the data when additionally assuming that languages may be distinguished by having at least one among a finite, small number of different features. This finite set is also used to define the distance between two languages, similarly to the linguistics tradition since Swadesh.

  1. An exact arithmetic toolbox for a consistent and reproducible structural analysis of metabolic network models

    National Research Council Canada - National Science Library

    Chindelevitch, Leonid; Trigg, Jason; Regev, Aviv; Berger, Bonnie

    2014-01-01

    .... Flux balance analysis is commonly used to analyse constraint-based models. Curiously, the results of this analysis vary with the software being run, a situation that we show can be remedied by using exact rather than floating-point arithmetic...

  2. Reproducible infection model for Clostridium perfringens in broiler chickens

    DEFF Research Database (Denmark)

    Pedersen, Karl; Friis-Holm, Lotte Bjerrum; Heuer, Ole Eske

    2008-01-01

    Experiments were carried out to establish an infection and disease model for Clostridium perfringens in broiler chickens. Previous experiments had failed to induce disease and only a transient colonization with challenge strains had been obtained. In the present study, two series of experiments w...

  3. Establishing a Reproducible Hypertrophic Scar following Thermal Injury: A Porcine Model

    Directory of Open Access Journals (Sweden)

    Scott J. Rapp, MD

    2015-02-01

    Conclusions: Deep partial-thickness thermal injury to the back of domestic swine produces an immature hypertrophic scar by 10 weeks following the burn, with scar thickness appearing to coincide with location along the dorsal axis. With minimal pig-to-pig variation, we describe our technique for providing a testable immature scar model.

  4. Qualitative and Quantitative Integrated Modeling for Stochastic Simulation and Optimization

    Directory of Open Access Journals (Sweden)

    Xuefeng Yan

    2013-01-01

    Full Text Available The simulation and optimization of an actual physical system are usually constructed from stochastic models, which inherently have both qualitative and quantitative characteristics. Most modeling specifications and frameworks find it difficult to describe the qualitative model directly. In order to deal with expert knowledge, uncertain reasoning, and other qualitative information, a combined qualitative and quantitative modeling specification was proposed based on a hierarchical model structure framework. The new modeling approach is based on a hierarchical model structure which includes the meta-meta model, the meta-model and the high-level model. A description logic system is defined for formal definition and verification of the new modeling specification. A stochastic defense simulation was developed to illustrate how to model the system and optimize the result. The results show that the proposed method can describe the complex system more comprehensively, and that the survival probability of the target is higher when qualitative models are introduced into the quantitative simulation.

  5. Accuracy and reproducibility of dental measurements on tomographic digital models: a systematic review and meta-analysis.

    Science.gov (United States)

    Ferreira, Jamille B; Christovam, Ilana O; Alencar, David S; da Motta, Andréa F J; Mattos, Claudia T; Cury-Saramago, Adriana

    2017-04-26

    The aim of this systematic review with meta-analysis was to assess the accuracy and reproducibility of dental measurements obtained from digital study models generated from CBCT compared with those acquired from plaster models. The electronic databases Cochrane Library, Medline (via PubMed), Scopus, VHL, Web of Science, and System for Information on Grey Literature in Europe were screened to identify articles from 1998 until February 2016. The inclusion criteria were: prospective and retrospective clinical trials in humans; validation and/or comparison articles of dental study models obtained from CBCT and plaster models; and articles that used dental linear measurements as an assessment tool. The methodological quality of the studies was carried out by Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. A meta-analysis was performed to validate all comparative measurements. The databases search identified a total of 3160 items and 554 duplicates were excluded. After reading titles and abstracts, 12 articles were selected. Five articles were included after reading in full. The methodological quality obtained through QUADAS-2 was poor to moderate. In the meta-analysis, there were statistical differences between the mesiodistal widths of mandibular incisors, maxillary canines and premolars, and overall Bolton analysis. Therefore, the measurements considered accurate were maxillary and mandibular crowding, intermolar width and mesiodistal width of maxillary incisors, mandibular canines and premolars, in both arches for molars. Digital models obtained from CBCT were not accurate for all measures assessed. The differences were clinically acceptable for all dental linear measurements, except for maxillary arch perimeter. Digital models are reproducible for all measurements when intraexaminer assessment is considered and need improvement in interexaminer evaluation.

  6. Validation and reproducibility assessment of modality independent elastography in a pre-clinical model of breast cancer

    Science.gov (United States)

    Weis, Jared A.; Kim, Dong K.; Yankeelov, Thomas E.; Miga, Michael I.

    2014-03-01

    Clinical observations have long suggested that cancer progression is accompanied by extracellular matrix remodeling and concomitant increases in mechanical stiffness. Due to the strong association of mechanics and tumor progression, there has been considerable interest in incorporating methodologies to diagnose cancer through the use of mechanical stiffness imaging biomarkers, resulting in commercially available US and MR elastography products. Extension of this approach towards monitoring longitudinal changes in mechanical properties along a course of cancer therapy may provide means for assessing early response to therapy; therefore a systematic study of the elasticity biomarker in characterizing cancer for therapeutic monitoring is needed. The elastography method we employ, modality independent elastography (MIE), can be described as a model-based inverse image-analysis method that reconstructs elasticity images using two acquired image volumes in a pre/post state of compression. In this work, we present preliminary data towards validation and reproducibility assessment of our elasticity biomarker in a pre-clinical model of breast cancer. The goal of this study is to determine the accuracy and reproducibility of MIE and therefore the magnitude of changes required to determine statistical differences during therapy. Our preliminary results suggest that the MIE method can accurately and robustly assess mechanical properties in a pre-clinical system and provide considerable enthusiasm for the extension of this technique towards monitoring therapy-induced changes to breast cancer tissue architecture.

  7. Quantitative phase-field modeling of nonisothermal solidification in dilute multicomponent alloys with arbitrary diffusivities.

    Science.gov (United States)

    Ohno, Munekazu

    2012-11-01

    A quantitative phase-field model is developed for simulating microstructural pattern formation in nonisothermal solidification in dilute multicomponent alloys with arbitrary thermal and solutal diffusivities. By performing a matched asymptotic analysis, it is shown that the present model with antitrapping current terms reproduces the free-boundary problem of interest in the thin-interface limit. Convergence of the simulation outcome with decreasing interface thickness is demonstrated for nonisothermal free dendritic growth in binary alloys and for isothermal and nonisothermal free dendritic growth in a ternary alloy.

  8. Experimental and Numerical Models of Complex Clinical Scenarios; Strategies to Improve Relevance and Reproducibility of Joint Replacement Research.

    Science.gov (United States)

    Bechtold, Joan E; Swider, Pascal; Goreham-Voss, Curtis; Soballe, Kjeld

    2016-02-01

    This research review aims to focus attention on the effect of specific surgical and host factors on implant fixation, and the importance of accounting for them in experimental and numerical models. These factors affect (a) eventual clinical applicability and (b) reproducibility of findings across research groups. Proper function and longevity of orthopedic joint replacement implants rely on secure fixation to the surrounding bone. Technology and surgical technique have improved over the last 50 years, and robust ingrowth and decades of implant survival are now routinely achieved for healthy patients and first-time (primary) implantation. Second-time (revision) implantation presents with bone loss and interfacial bone gaps in areas vital for secure mechanical fixation. Patients with medical comorbidities such as infection, smoking, congestive heart failure, kidney disease, and diabetes have a diminished healing response, poorer implant fixation, and greater revision risk. It is these more difficult clinical scenarios that require research to evaluate more advanced treatment approaches. Such treatments can include osteogenic or antimicrobial implant coatings, allo- or autogenous cellular or tissue-based approaches, local and systemic drug delivery, and surgical approaches. Regarding implant-related approaches, most experimental and numerical models do not generally impose conditions that represent mechanical instability at the implant interface, or recalcitrant healing. Many treatments will work well in forgiving settings, but fail in complex human settings with disease, bone loss, or previous surgery. Ethical considerations mandate that we justify and limit the number of animals tested, which restricts experimental permutations of treatments. Numerical models provide flexibility to evaluate multiple parameters and combinations, but generally need to employ simplifying assumptions. The objectives of this paper are (a) to highlight the importance of mechanical

  9. Evaluation of Nitinol staples for the Lapidus arthrodesis in a reproducible biomechanical model

    Directory of Open Access Journals (Sweden)

    Nicholas Alexander Russell

    2015-12-01

    Full Text Available While the Lapidus procedure is a widely accepted technique for treatment of hallux valgus, the optimal fixation method to maintain joint stability remains controversial. The purpose of this study was to evaluate the biomechanical properties of new Shape Memory Alloy staples arranged in different configurations in a repeatable 1st Tarsometatarsal arthrodesis model. Ten sawbones models of the whole foot (n=5 per group) were reconstructed using a single dorsal staple or two staples in a delta configuration. Each construct was mechanically tested in dorsal four-point bending, medial four-point bending, dorsal three-point bending and plantar cantilever bending with the staples activated at 37°C. The peak load, stiffness and plantar gapping were determined for each test. Pressure sensors were used to measure the contact force and area of the joint footprint in each group. There was a significant (p < 0.05) increase in peak load in the two-staple constructs compared to the single-staple constructs for all testing modalities. Stiffness also increased significantly in all tests except dorsal four-point bending. Pressure sensor readings showed a significantly higher contact force at time zero and contact area following loading in the two-staple constructs (p < 0.05). Both groups completely recovered any plantar gapping following unloading and restored their initial contact footprint. The biomechanical integrity and repeatability of the models was demonstrated with no construct failures due to hardware or model breakdown. Shape memory alloy staples provide fixation with the ability to dynamically apply and maintain compression across a simulated arthrodesis following a range of loading conditions.

  10. Reproducibility of the heat/capsaicin skin sensitization model in healthy volunteers

    Directory of Open Access Journals (Sweden)

    Cavallone LF

    2013-11-01

    Full Text Available Laura F Cavallone,1 Karen Frey,1 Michael C Montana,1 Jeremy Joyal,1 Karen J Regina,1 Karin L Petersen,2 Robert W Gereau IV1; 1Department of Anesthesiology, Washington University in St Louis, School of Medicine, St Louis, MO, USA; 2California Pacific Medical Center Research Institute, San Francisco, CA, USA. Introduction: Heat/capsaicin skin sensitization is a well-characterized human experimental model to induce hyperalgesia and allodynia. Using this model, gabapentin, among other drugs, was shown to significantly reduce cutaneous hyperalgesia compared to placebo. Since the larger thermal probes used in the original studies to produce heat sensitization are now commercially unavailable, we decided to assess whether previous findings could be replicated with a currently available smaller probe (heated area 9 cm2 versus 12.5–15.7 cm2). Study design and methods: After Institutional Review Board approval, 15 adult healthy volunteers participated in two study sessions, scheduled 1 week apart (Part A). In both sessions, subjects were exposed to the heat/capsaicin cutaneous sensitization model. Areas of hypersensitivity to brush stroke and von Frey (VF) filament stimulation were measured at baseline and after rekindling of skin sensitization. Another group of 15 volunteers was exposed to an identical schedule and set of sensitization procedures, but, in each session, received either gabapentin or placebo (Part B). Results: Unlike previous reports, a similar reduction of areas of hyperalgesia was observed in all groups/sessions. Fading of areas of hyperalgesia over time was observed in Part A. In Part B, there was no difference in area reduction after gabapentin compared to placebo. Conclusion: When using smaller thermal probes than originally proposed, modifications of other parameters of sensitization and/or the rekindling process may be needed to allow the heat/capsaicin sensitization protocol to be used as initially intended. Standardization and validation of

  11. geoKepler Workflow Module for Computationally Scalable and Reproducible Geoprocessing and Modeling

    Science.gov (United States)

    Cowart, C.; Block, J.; Crawl, D.; Graham, J.; Gupta, A.; Nguyen, M.; de Callafon, R.; Smarr, L.; Altintas, I.

    2015-12-01

    The NSF-funded WIFIRE project has developed an open-source, online geospatial workflow platform for unifying geoprocessing tools and models for fire and other geospatially dependent modeling applications. It is a product of WIFIRE's objective to build an end-to-end cyberinfrastructure for real-time and data-driven simulation, prediction and visualization of wildfire behavior. geoKepler includes a set of reusable GIS components, or actors, for the Kepler Scientific Workflow System (https://kepler-project.org). Actors exist for reading and writing GIS data in formats such as Shapefile, GeoJSON, KML, and using OGC web services such as WFS. The actors also allow for calling geoprocessing tools in other packages such as GDAL and GRASS. Kepler integrates functions from multiple platforms and file formats into one framework, thus enabling optimal GIS interoperability, model coupling, and scalability. Products of the GIS actors can be fed directly to models such as FARSITE and WRF. Kepler's ability to schedule and scale processes using Hadoop and Spark also makes geoprocessing ultimately extensible and computationally scalable. The reusable workflows in geoKepler can be made to run automatically when alerted by real-time environmental conditions. Here, we show breakthroughs in the speed of creating complex data for hazard assessments with this platform. We also demonstrate geoKepler workflows that use Data Assimilation to ingest real-time weather data into wildfire simulations, and data mining techniques to gain insight into environmental conditions affecting fire behavior. Existing machine learning tools and libraries such as R and MLlib are being leveraged for this purpose in Kepler, as well as Kepler's Distributed Data Parallel (DDP) capability to provide a framework for scalable processing. geoKepler workflows can be executed via an iPython notebook as a part of a Jupyter hub at UC San Diego for sharing and reporting of the scientific analysis and results from

  12. The link between the Barents Sea and ENSO events reproduced by NEMO model

    Directory of Open Access Journals (Sweden)

    V. N. Stepanov

    2012-05-01

    Full Text Available An analysis of observational data in the Barents Sea along a meridian at 33°30´ E between 70°30´ and 72°30´ N has reported a negative correlation between El Niño/La Niña-Southern Oscillation (ENSO) events and water temperature in the top 200 m: the temperature drops about 0.5 °C during warm ENSO events, while during cold ENSO events the top 200 m layer of the Barents Sea is warmer. Results from 1 and 1/4-degree global NEMO models show a similar response for the whole Barents Sea. During the strong warm ENSO event in 1997–1998, an anticyclonic atmospheric circulation settled over the Barents Sea instead of the usual cyclonic circulation. This change enhances heat losses in the Barents Sea and substantially influences the Barents Sea inflow from the North Atlantic via changes in ocean currents. Under normal conditions there is a warm current along the Scandinavian peninsula entering the Barents Sea from the North Atlantic; after the 1997–1998 event, however, this current weakened.

    During 1997–1998 the model annual mean temperature in the Barents Sea is decreased by about 0.8 °C, also resulting in a higher sea ice volume. In contrast during the cold ENSO events in 1999–2000 and 2007–2008 the model shows a lower sea ice volume, and higher annual mean temperatures in the upper layer of the Barents Sea of about 0.7 °C.

    An analysis of model data shows that the Barents Sea inflow is the main source for the variability of Barents Sea heat content, and is forced by changing pressure and winds in the North Atlantic. However, surface heat-exchange with atmosphere can also play a dominant role in the Barents Sea annual heat balance, especially for the subsequent year after ENSO events.

  13. A computational model incorporating neural stem cell dynamics reproduces glioma incidence across the lifespan in the human population.

    Directory of Open Access Journals (Sweden)

    Roman Bauer

    Full Text Available Glioma is the most common form of primary brain tumor. Demographically, the risk of occurrence increases until old age. Here we present a novel computational model to reproduce the probability of glioma incidence across the lifespan. Previous mathematical models explaining glioma incidence are framed in a rather abstract way, and do not directly relate to empirical findings. To decrease this gap between theory and experimental observations, we incorporate recent data on cellular and molecular factors underlying gliomagenesis. Since evidence implicates the adult neural stem cell as the likely cell-of-origin of glioma, we have incorporated empirically-determined estimates of neural stem cell number, cell division rate, mutation rate and oncogenic potential into our model. We demonstrate that our model yields results which match actual demographic data in the human population. In particular, this model accounts for the observed peak incidence of glioma at approximately 80 years of age, without the need to assert differential susceptibility throughout the population. Overall, our model supports the hypothesis that glioma is caused by randomly-occurring oncogenic mutations within the neural stem cell population. Based on this model, we assess the influence of the (experimentally indicated) decrease in the number of neural stem cells and increase of cell division rate during aging. Our model provides multiple testable predictions, and suggests that different temporal sequences of oncogenic mutations can lead to tumorigenesis. Finally, we conclude that four or five oncogenic mutations are sufficient for the formation of glioma.
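
    A minimal Monte Carlo sketch of the core idea described above, randomly occurring oncogenic hits accumulating in a neural stem cell pool, is given below; every parameter value is a placeholder, not an estimate taken from the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        N_CELLS = 10_000      # placeholder neural stem cell pool size
        MU = 1e-3             # placeholder oncogenic mutation probability per division
        DIV_PER_YEAR = 10     # placeholder divisions per cell per year
        K_HITS = 5            # mutations assumed sufficient for gliomagenesis
        YEARS = 100

        mutations = np.zeros(N_CELLS, dtype=int)
        transformed_fraction = []
        for year in range(YEARS):
            mutations += rng.binomial(DIV_PER_YEAR, MU, size=N_CELLS)  # new hits this year
            transformed_fraction.append(float(np.mean(mutations >= K_HITS)))

        # transformed_fraction rises steeply late in life, qualitatively mimicking the age profile.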

  14. Can a global model chemical mechanism reproduce NO, NO2, and O3 measurements above a tropical rainforest?

    Directory of Open Access Journals (Sweden)

    C. N. Hewitt

    2009-12-01

    Full Text Available A cross-platform field campaign, OP3, was conducted in the state of Sabah in Malaysian Borneo between April and July of 2008. Among the suite of observations recorded, the campaign included measurements of NOx and O3, crucial outputs of any model chemistry mechanism. We describe the measurements of these species made from both the ground site and aircraft. We examine the output from the global model p-TOMCAT at two resolutions for this location during the April campaign period. The models exhibit reasonable ability in capturing the NOx diurnal cycle, but ozone is overestimated. We use a box model containing the same chemical mechanism to explore the weaknesses in the global model and the ability of the simplified global model chemical mechanism to capture the chemistry at the rainforest site. We achieve a good fit to the data for all three species (NO, NO2, and O3), though the model is much more sensitive to changes in the treatment of physical processes than to changes in the chemical mechanism. Indeed, without some parameterization of the nighttime boundary layer-free troposphere mixing, a time-dependent box model will not reproduce the observations. The final simulation uses this mixing parameterization for NO and NO2 but not O3, as determined by the vertical structure of each species, and matches the measurements well.

  15. Reproducibility of quantitative measures of binding potential in rat striatum: A test re-test study using DTBZ dynamic PET studies

    Energy Technology Data Exchange (ETDEWEB)

    Avendaño-Estrada, A., E-mail: avilarod@uwalumni.com; Lara-Camacho, V. M., E-mail: avilarod@uwalumni.com; Ávila-García, M. C., E-mail: avilarod@uwalumni.com; Ávila- Rodríguez, M. A., E-mail: avilarod@uwalumni.com [Unidad PET, Facultad de Medicina, Universidad Nacional Autónoma de México, 04510, México, D.F. (Mexico)

    2014-11-07

    There is great interest in the study of dopamine (DA) pathways due to the increasing number of patients with illnesses related to the dopaminergic system, and molecular imaging based on Positron Emission Tomography (PET) has proven helpful for this task. Among the different radiopharmaceuticals available to study DA interaction, [11C]Dihydrotetrabenazine (DTBZ) has a high affinity for the vesicular monoamine transporter type 2 (VMAT2), and its binding potential (BP) is a marker of DA terminal integrity. This paper reports on the intersubject reproducibility of BP measurements in rat striatum with [11C]DTBZ using the Logan method.
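
    For context, Logan graphical analysis (the Logan method referred to above) estimates the distribution volume ratio from the late-time linearity of integrated activity curves. In its simplified reference-tissue form, a standard textbook expression (not taken from this abstract) is

        $\frac{\int_0^T C_T(t)\,dt}{C_T(T)} = \mathrm{DVR}\,\frac{\int_0^T C_R(t)\,dt}{C_T(T)} + b, \qquad \mathrm{BP_{ND}} = \mathrm{DVR} - 1,$

    where $C_T$ is the striatal and $C_R$ the reference-region time-activity curve, and the slope DVR is obtained by linear regression over late time frames.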

  16. The ability of a GCM-forced hydrological model to reproduce global discharge variability

    Directory of Open Access Journals (Sweden)

    F. C. Sperna Weiland

    2010-08-01

    Full Text Available Data from General Circulation Models (GCMs) are often used to investigate hydrological impacts of climate change. However GCM data are known to have large biases, especially for precipitation. In this study the usefulness of GCM data for hydrological studies, with focus on discharge variability and extremes, was tested by using bias-corrected daily climate data of the 20CM3 control experiment from a selection of twelve GCMs as input to the global hydrological model PCR-GLOBWB. Results of these runs were compared with discharge observations of the GRDC and discharges calculated from model runs based on two meteorological datasets constructed from the observation-based CRU TS2.1 and ERA-40 reanalysis. In the first dataset the CRU TS 2.1 monthly timeseries were downscaled to daily timeseries using the ERA-40 dataset (ERA6190). This dataset served as a best guess of the past climate and was used to analyze the performance of PCR-GLOBWB. The second dataset was created from the ERA-40 timeseries bias-corrected with the CRU TS 2.1 dataset using the same bias-correction method as applied to the GCM datasets (ERACLM). Through this dataset the influence of the bias-correction method was quantified. The bias-correction was limited to monthly mean values of precipitation, potential evaporation and temperature, as our focus was on the reproduction of inter- and intra-annual variability.

    After bias-correction the spread in discharge results of the GCM based runs decreased and results were similar to results of the ERA-40 based runs, especially for rivers with a strong seasonal pattern. Overall the bias-correction method resulted in a slight reduction of global runoff and the method performed less well in arid and mountainous regions. However, deviations between GCM results and GRDC statistics did decrease for Q, Q90 and IAV. After bias-correction consistency amongst
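
    A minimal sketch of a monthly-mean scaling correction of the kind described above (the study also corrected potential evaporation and temperature against the CRU TS 2.1 climatology); the function and variable names are illustrative, not those of the study.

        import numpy as np

        def monthly_scaling_correction(p_gcm_daily, month_index, p_obs_monthly_clim):
            # Scale daily GCM precipitation so that each month's climatological mean
            # matches the observed monthly mean, while preserving daily variability.
            corrected = np.array(p_gcm_daily, dtype=float)
            for m in range(1, 13):
                sel = month_index == m
                gcm_mean = corrected[sel].mean()
                factor = p_obs_monthly_clim[m - 1] / gcm_mean if gcm_mean > 0 else 1.0
                corrected[sel] *= factor
            return corrected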

  17. Validation of the 3D Skin Comet assay using full thickness skin models: transferability and reproducibility

    Directory of Open Access Journals (Sweden)

    Kerstin Reisinger

    2015-06-01

    Full Text Available The 3D Skin Comet assay was developed to improve the in vitro prediction of the genotoxic potential of dermally applied chemicals. For this purpose, a classical read-out for genotoxicity (i.e. comet formation) was combined with reconstructed 3D skin models as well-established test systems. Five laboratories (BASF, BfR (Federal Institute for Risk Assessment), Henkel, Procter & Gamble and TNO Triskilion) started to validate this assay using the Phenion® Full-Thickness (FT) Skin Model and 8 coded chemicals, with financial support by Cosmetics Europe and the German Ministry of Education & Research. There was an excellent overall predictivity of the expected genotoxicity (>90%). Four labs correctly identified all chemicals and the fifth correctly identified 80% of the chemicals. Background DNA damage was low and values for solvent (acetone) and positive (methyl methanesulfonate (MMS)) controls were comparable among labs. Inclusion of the DNA-polymerase inhibitor aphidicolin (APC) in the protocol improved the predictivity of the assay since it enabled robust detection of pro-mutagens, e.g. 7,12-dimethylbenz[a]anthracene and benzo[a]pyrene. Therefore, all negative findings are now confirmed by additional APC experiments to come to a final conclusion. Furthermore, MMC, which intercalates between DNA strands causing covalent binding, was detected with the standard protocol, in which it gave weak but statistically significant responses. Stronger responses, however, were obtained using a cross-linker-specific protocol in which MMC reduced the migration of MMS-induced DNA damage. These data support the use of the Phenion® FT in the Comet assay: no false-positives and only one false-negative finding in a single lab. Testing will continue to obtain data for 30 chemicals. Once validated, the 3D Skin Comet assay is foreseen to be used as a follow-up test for positive results from the current in vitro genotoxicity test battery.

  18. PAMELA positron and electron spectra are reproduced by 3-dimensional cosmic-ray modeling

    CERN Document Server

    Gaggero, Daniele; Maccione, Luca; Di Bernardo, Giuseppe; Evoli, Carmelo

    2013-01-01

    The PAMELA collaboration recently released the $e^+$ absolute spectrum between 1 and 300 GeV in addition to the positron fraction and $e^-$ spectrum previously measured in the same time period. We use the newly developed 3-dimensional upgrade of the DRAGON code and the charge-dependent solar modulation HelioProp code to consistently describe those data. We obtain very good fits of all data sets if an $e^+ + e^-$ hard extra component peaked at 1 TeV is added to a softer $e^-$ background and the secondary $e^\pm$ produced by the spallation of cosmic-ray proton and helium nuclei. All sources are assumed to follow a realistic spiral-arm spatial distribution. Remarkably, PAMELA data do not display any need for a charge-asymmetric extra component. Finally, plain diffusion, or low re-acceleration, propagation models which are tuned against nuclear data nicely describe PAMELA lepton data with no need to introduce a low-energy break in the proton and helium spectra.

  19. The diverse broad-band light-curves of Swift GRBs reproduced with the cannonball model

    CERN Document Server

    Dado, Shlomo; De Rújula, A

    2009-01-01

    Two radiation mechanisms, inverse Compton scattering (ICS) and synchrotron radiation (SR), suffice within the cannonball (CB) model of long gamma ray bursts (LGRBs) and X-ray flashes (XRFs) to provide a very simple and accurate description of their observed prompt emission and afterglows. Simple as they are, the two mechanisms and the burst environment generate the rich structure of the light curves at all frequencies and times. This is demonstrated for 33 selected Swift LGRBs and XRFs, which are well sampled from early time until late time and well represent the entire diversity of the broad band light curves of Swift LGRBs and XRFs. Their prompt gamma-ray and X-ray emission is dominated by ICS of glory light. During their fast decline phase, ICS is taken over by SR which dominates their broad band afterglow. The pulse shape and spectral evolution of the gamma-ray peaks and the early-time X-ray flares, and even the delayed optical `humps' in XRFs, are correctly predicted. The canonical and non-canonical X-ra...

  20. The statistics of repeating patterns of cortical activity can be reproduced by a model network of stochastic binary neurons.

    Science.gov (United States)

    Roxin, Alex; Hakim, Vincent; Brunel, Nicolas

    2008-10-15

    Calcium imaging of the spontaneous activity in cortical slices has revealed repeating spatiotemporal patterns of transitions between so-called down states and up states (Ikegaya et al., 2004). Here we fit a model network of stochastic binary neurons to data from these experiments, and in doing so reproduce the distributions of such patterns. We use two versions of this model: (1) an unconnected network in which neurons are activated as independent Poisson processes; and (2) a network with an interaction matrix, estimated from the data, representing effective interactions between the neurons. The unconnected model (model 1) is sufficient to account for the statistics of repeating patterns in 11 of the 15 datasets studied. Model 2, with interactions between neurons, is required to account for pattern statistics of the remaining four. Three of these four datasets are the ones that contain the largest number of transitions, suggesting that long datasets are in general necessary to render interactions statistically visible. We then study the topology of the matrix of interactions estimated for these four datasets. For three of the four datasets, we find sparse matrices with long-tailed degree distributions and an overrepresentation of certain network motifs. The remaining dataset exhibits a strongly interconnected, spatially localized subgroup of neurons. In all cases, we find that interactions between neurons facilitate the generation of long patterns that do not repeat exactly.
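
    A minimal sketch of "model 1" as described, neurons activated as independent Poisson-like processes with no interaction matrix; the rates and durations below are placeholders.

        import numpy as np
        from collections import Counter

        rng = np.random.default_rng(2)

        N_NEURONS, N_FRAMES = 50, 2000
        rates = rng.uniform(0.01, 0.05, size=N_NEURONS)  # placeholder per-frame activation probabilities

        # Each neuron turns on independently in each frame (no effective interactions).
        activity = rng.random((N_FRAMES, N_NEURONS)) < rates

        # Tally how often each spatial pattern of co-active neurons repeats exactly.
        pattern_counts = Counter(map(tuple, activity.astype(int)))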

  1. Isokinetic eccentric exercise as a model to induce and reproduce pathophysiological alterations related to delayed onset muscle soreness

    DEFF Research Database (Denmark)

    Lund, Henrik; Vestergaard-Poulsen, P; Kanstrup, I.L.

    1998-01-01

    Physiological alterations following unaccustomed eccentric exercise in an isokinetic dynamometer of the right m. quadriceps until exhaustion were studied, in order to create a model in which the physiological responses to physiotherapy could be measured. In experiment I (exp. I), seven selected...... parameters were measured bilaterally in 7 healthy subjects at day 0 as a control value. Then after a standardized bout of eccentric exercise the same parameters were measured daily for the following 7 d (test values). The measured parameters were: the ratio of phosphocreatine to inorganic phosphate (PCr...... (133Xenon washout technique). This was repeated in experiment II (exp. II) 6-12 months later in order to study reproducibility. In experiment III (exp. III), the normal fluctuations over 8 d of the seven parameters were measured, without intervention with eccentric exercise in 6 other subjects. All...

  2. Can a quantitative simulation of an Otto engine be accurately rendered by a simple Novikov model with heat leak?

    Science.gov (United States)

    Fischer, A.; Hoffmann, K.-H.

    2004-03-01

    In this case study a complex Otto engine simulation provides data including, but not limited to, effects from losses due to heat conduction, exhaust losses and frictional losses. This data is used as a benchmark to test whether the Novikov engine with heat leak, a simple endoreversible model, can reproduce the complex engine behavior quantitatively by an appropriate choice of model parameters. The reproduction obtained proves to be of high quality.
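
    For orientation, the endoreversible Novikov engine without the heat leak has the well-known efficiency at maximum power

        $\eta_{\mathrm{mp}} = 1 - \sqrt{T_c / T_h},$

    where $T_h$ and $T_c$ are the hot and cold reservoir temperatures; the heat leak considered in the study adds a parasitic heat flow bypassing the working fluid, and its conductance is the kind of parameter adjusted to match the simulated Otto engine.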

  3. Efficient and Reproducible Myogenic Differentiation from Human iPS Cells: Prospects for Modeling Miyoshi Myopathy In Vitro

    Science.gov (United States)

    Tanaka, Akihito; Woltjen, Knut; Miyake, Katsuya; Hotta, Akitsu; Ikeya, Makoto; Yamamoto, Takuya; Nishino, Tokiko; Shoji, Emi; Sehara-Fujisawa, Atsuko; Manabe, Yasuko; Fujii, Nobuharu; Hanaoka, Kazunori; Era, Takumi; Yamashita, Satoshi; Isobe, Ken-ichi; Kimura, En; Sakurai, Hidetoshi

    2013-01-01

    The establishment of human induced pluripotent stem cells (hiPSCs) has enabled the production of in vitro, patient-specific cell models of human disease. In vitro recreation of disease pathology from patient-derived hiPSCs depends on efficient differentiation protocols producing relevant adult cell types. However, myogenic differentiation of hiPSCs has faced obstacles, namely, low efficiency and/or poor reproducibility. Here, we report the rapid, efficient, and reproducible differentiation of hiPSCs into mature myocytes. We demonstrated that inducible expression of myogenic differentiation1 (MYOD1) in immature hiPSCs for at least 5 days drives cells along the myogenic lineage, with efficiencies reaching 70–90%. Myogenic differentiation driven by MYOD1 occurred even in immature, almost completely undifferentiated hiPSCs, without mesodermal transition. Myocytes induced in this manner reach maturity within 2 weeks of differentiation as assessed by marker gene expression and functional properties, including in vitro and in vivo cell fusion and twitching in response to electrical stimulation. Miyoshi Myopathy (MM) is a congenital distal myopathy caused by defective muscle membrane repair due to mutations in DYSFERLIN. Using our induced differentiation technique, we successfully recreated the pathological condition of MM in vitro, demonstrating defective membrane repair in hiPSC-derived myotubes from an MM patient and phenotypic rescue by expression of full-length DYSFERLIN (DYSF). These findings not only facilitate the pathological investigation of MM, but could potentially be applied in modeling of other human muscular diseases by using patient-derived hiPSCs. PMID:23626698

  4. Quantitative models for sustainable supply chain management

    DEFF Research Database (Denmark)

    Brandenburg, M.; Govindan, Kannan; Sarkis, J.

    2014-01-01

    Sustainability, the consideration of environmental factors and social aspects, in supply chain management (SCM) has become a highly relevant topic for researchers and practitioners. The application of operations research methods and related models, i.e. formal modeling, for closed-loop SCM...... and reverse logistics has been effectively reviewed in previously published research. This situation is in contrast to the understanding and review of mathematical models that focus on environmental or social factors in forward supply chains (SC), which has seen less investigation. To evaluate developments...

  5. A short-term mouse model that reproduces the immunopathological features of rhinovirus-induced exacerbation of COPD.

    Science.gov (United States)

    Singanayagam, Aran; Glanville, Nicholas; Walton, Ross P; Aniscenko, Julia; Pearson, Rebecca M; Pinkerton, James W; Horvat, Jay C; Hansbro, Philip M; Bartlett, Nathan W; Johnston, Sebastian L

    2015-08-01

    Viral exacerbations of chronic obstructive pulmonary disease (COPD), commonly caused by rhinovirus (RV) infections, are poorly controlled by current therapies. This is due to a lack of understanding of the underlying immunopathological mechanisms. Human studies have identified a number of key immune responses that are associated with RV-induced exacerbations including neutrophilic inflammation, expression of inflammatory cytokines and deficiencies in innate anti-viral interferon. Animal models of COPD exacerbation are required to determine the contribution of these responses to disease pathogenesis. We aimed to develop a short-term mouse model that reproduced the hallmark features of RV-induced exacerbation of COPD. Evaluation of complex protocols involving multiple dose elastase and lipopolysaccharide (LPS) administration combined with RV1B infection showed suppression rather than enhancement of inflammatory parameters compared with control mice infected with RV1B alone. Therefore, these approaches did not accurately model the enhanced inflammation associated with RV infection in patients with COPD compared with healthy subjects. In contrast, a single elastase treatment followed by RV infection led to heightened airway neutrophilic and lymphocytic inflammation, increased expression of tumour necrosis factor (TNF)-α, C-X-C motif chemokine 10 (CXCL10)/IP-10 (interferon γ-induced protein 10) and CCL5 [chemokine (C-C motif) ligand 5]/RANTES (regulated on activation, normal T-cell expressed and secreted), mucus hypersecretion and preliminary evidence for increased airway hyper-responsiveness compared with mice treated with elastase or RV infection alone. In summary, we have developed a new mouse model of RV-induced COPD exacerbation that mimics many of the inflammatory features of human disease. This model, in conjunction with human models of disease, will provide an essential tool for studying disease mechanisms and allow testing of novel therapies with potential to

  6. Reproducing Electric Field Observations during Magnetic Storms by means of Rigorous 3-D Modelling and Distortion Matrix Co-estimation

    Science.gov (United States)

    Püthe, Christoph; Manoj, Chandrasekharan; Kuvshinov, Alexey

    2015-04-01

    Electric fields induced in the conducting Earth during magnetic storms drive currents in power transmission grids, telecommunication lines or buried pipelines. These geomagnetically induced currents (GIC) can cause severe service disruptions. The prediction of GIC is thus of great importance for the public and industry. A key step in the prediction of the hazard to technological systems during magnetic storms is the calculation of the geoelectric field. To address this issue for mid-latitude regions, we developed a method that involves 3-D modelling of induction processes in a heterogeneous Earth and the construction of a model of the magnetospheric source. The latter is described by low-degree spherical harmonics; its temporal evolution is derived from observatory magnetic data. Time series of the electric field can be computed for every location on Earth's surface. The actual electric field, however, is known to be perturbed by galvanic effects, arising from very local near-surface heterogeneities or topography, which cannot be included in the conductivity model. Galvanic effects are commonly accounted for with a real-valued time-independent distortion matrix, which linearly relates measured and computed electric fields. Using data from various magnetic storms that occurred between 2000 and 2003, we estimated distortion matrices for observatory sites onshore and on the ocean bottom. Strong correlations between modelled and measured fields validate our method. The distortion matrix estimates prove to be reliable, as they are accurately reproduced for different magnetic storms. We further show that 3-D modelling is crucial for a correct separation of galvanic and inductive effects and a precise prediction of electric field time series during magnetic storms. Since the required computational resources are negligible, our approach is suitable for real-time prediction of GIC. For this purpose, a reliable forecast of the source field, e.g. based on data from satellites
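
    A minimal sketch of the co-estimation idea described above: a real-valued time-independent 2x2 distortion matrix relating measured and modelled horizontal electric fields can be estimated by least squares over a storm time series. The synthetic series and noise level below are illustrative only.

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic modelled horizontal E-field time series (n samples x 2 components).
        E_model = rng.normal(size=(5000, 2))

        # "Measured" field: the modelled field galvanically distorted, plus noise.
        D_true = np.array([[1.3, -0.2],
                           [0.4,  0.8]])
        E_meas = E_model @ D_true.T + 0.05 * rng.normal(size=E_model.shape)

        # Least-squares estimate of D from E_meas ~ E_model @ D.T (should recover D_true).
        D_hat_T, *_ = np.linalg.lstsq(E_model, E_meas, rcond=None)
        D_hat = D_hat_T.T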

  7. Can CFMIP2 models reproduce the leading modes of cloud vertical structure in the CALIPSO-GOCCP observations?

    Science.gov (United States)

    Wang, Fang; Yang, Song

    2017-02-01

    Using principal component (PC) analysis, three leading modes of cloud vertical structure (CVS) are revealed by the GCM-Oriented CALIPSO Cloud Product (GOCCP), i.e. tropical high, subtropical anticyclonic and extratropical cyclonic cloud modes (THCM, SACM and ECCM, respectively). THCM mainly reflects the contrast between tropical high clouds and clouds in middle/high latitudes. SACM is closely associated with middle-high clouds in tropical convective cores, few-cloud regimes in subtropical anticyclonic clouds and stratocumulus over subtropical eastern oceans. ECCM mainly corresponds to clouds along extratropical cyclonic regions. Models of phase 2 of the Cloud Feedback Model Intercomparison Project (CFMIP2) reproduce the THCM well, but SACM and ECCM are generally poorly simulated compared to GOCCP. Standardized PCs corresponding to CVS modes are generally captured, whereas original PCs (OPCs) are consistently underestimated (overestimated) for THCM (SACM and ECCM) by CFMIP2 models. The effects of CVS modes on relative cloud radiative forcing (RSCRF/RLCRF) (RSCRF being calculated at the surface while RLCRF at the top of atmosphere) are studied in terms of the principal component regression method. Results show that CFMIP2 models tend to overestimate (underestimate or simulate with the opposite sign) the RSCRF/RLCRF radiative effects (REs) of ECCM (THCM and SACM) in unit global mean OPC compared to observations. These RE biases may be attributed to two factors, one of which is underestimation (overestimation) of low/middle clouds (high clouds) (also known as stronger (weaker) REs in unit low/middle (high) clouds) in simulated global mean cloud profiles, the other being eigenvector biases in CVS modes (especially for SACM and ECCM). It is suggested that much more attention should be paid to improvement of CVS, especially cloud parameterization associated with particular physical processes (e.g. downwelling regimes with the Hadley circulation, extratropical storm tracks and others), which

  8. Can a coupled meteorology-chemistry model reproduce the historical trend in aerosol direct radiative effects over the Northern Hemisphere?

    Directory of Open Access Journals (Sweden)

    J. Xing

    2015-05-01

    Full Text Available The ability of a coupled meteorology-chemistry model, i.e., WRF-CMAQ, to reproduce the historical trend in AOD and clear-sky short-wave radiation (SWR) over the Northern Hemisphere has been evaluated through a comparison of 21-year simulated results with observation-derived records from 1990–2010. Six satellite-retrieved AOD products including AVHRR, TOMS, SeaWiFS, MISR, MODIS-terra and -aqua as well as long-term historical records from 11 AERONET sites were used for the comparison of AOD trends. Clear-sky SWR products derived by CERES at both TOA and surface as well as surface SWR data derived from seven SURFRAD sites were used for the comparison of trends in SWR. The model successfully captured increasing AOD trends along with the corresponding increased TOA SWR (upwelling) and decreased surface SWR (downwelling) in both eastern China and the northern Pacific. The model also captured declining AOD trends along with the corresponding decreased TOA SWR (upwelling) and increased surface SWR (downwelling) in the eastern US, Europe and the northern Atlantic for the period of 2000–2010. However, the model underestimated the AOD over regions with substantial natural dust aerosol contributions, such as the Sahara Desert, Arabian Desert, central Atlantic and north Indian Ocean. Estimates of the aerosol direct radiative effect (DRE) at TOA are comparable with those derived by measurements. Compared to GCMs, the model exhibits better estimates of surface aerosol direct radiative efficiency (Eτ). However, surface-DRE tends to be underestimated due to the underestimated AOD in land and dust regions. Further investigation of TOA-Eτ estimations as well as the dust module used for estimates of windblown-dust emissions is needed.

  9. A Bloch-McConnell simulator with pharmacokinetic modeling to explore accuracy and reproducibility in the measurement of hyperpolarized pyruvate

    Science.gov (United States)

    Walker, Christopher M.; Bankson, James A.

    2015-03-01

    Magnetic resonance imaging (MRI) of hyperpolarized (HP) agents has the potential to probe in-vivo metabolism with sensitivity and specificity that were not previously possible. Biological conversion of HP agents, specifically for cancer, has been shown to correlate with the presence of disease, stage and response to therapy. For such metabolic biomarkers derived from MRI of hyperpolarized agents to be clinically impactful, they need to be validated and well characterized. However, imaging of HP substrates is distinct from conventional MRI, due to the non-renewable nature of the transient HP magnetization. Moreover, due to current practical limitations in the generation and evolution of hyperpolarized agents, it is not feasible to fully characterize measurement and processing strategies experimentally. In this work we use a custom Bloch-McConnell simulator with pharmacokinetic modeling to characterize the performance of specific magnetic resonance spectroscopy sequences over a range of biological conditions. We performed numerical simulations to evaluate the effect of sequence parameters over a range of chemical conversion rates. Each simulation was analyzed repeatedly with the addition of noise in order to determine the accuracy and reproducibility of measurements. Results indicate that under both closed and perfused conditions, acquisition parameters can affect measurements in a tissue-dependent manner, suggesting that great care needs to be taken when designing studies involving hyperpolarized agents. More modeling studies will be needed to determine what effect sequence parameters have on more advanced acquisitions and processing methods.
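
    A minimal sketch of the longitudinal two-pool exchange (pyruvate to lactate) with T1 decay and RF sampling losses that such simulators build on; the full Bloch-McConnell treatment with pharmacokinetic delivery used in the study is considerably richer, and all parameter values below are placeholders.

        import numpy as np

        def expm_2x2(M):
            # Matrix exponential via eigendecomposition (adequate for this small real matrix).
            vals, vecs = np.linalg.eig(M)
            return (vecs @ np.diag(np.exp(vals)) @ np.linalg.inv(vecs)).real

        def hp_signals(kpl=0.05, t1_pyr=43.0, t1_lac=33.0, flip_deg=10.0, tr=2.0, n_exc=90):
            # Longitudinal magnetization [pyruvate, lactate] sampled by repeated small flips.
            flip = np.radians(flip_deg)
            A = np.array([[-1.0 / t1_pyr - kpl, 0.0],
                          [kpl,                -1.0 / t1_lac]])
            propagate = expm_2x2(A * tr)           # evolution over one TR (unidirectional exchange)
            mz = np.array([1.0, 0.0])
            signals = []
            for _ in range(n_exc):
                signals.append(mz * np.sin(flip))  # transverse signal at each excitation
                mz = mz * np.cos(flip)             # longitudinal magnetization lost to the pulse
                mz = propagate @ mz
            return np.array(signals)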

  10. An Effective and Reproducible Model of Ventricular Fibrillation in Crossbred Yorkshire Swine (Sus scrofa) for Use in Physiologic Research.

    Science.gov (United States)

    Burgert, James M; Johnson, Arthur D; Garcia-Blanco, Jose C; Craig, W John; O'Sullivan, Joseph C

    2015-10-01

    Transcutaneous electrical induction (TCEI) has been used to induce ventricular fibrillation (VF) in laboratory swine for physiologic and resuscitation research. Many studies do not describe the method of TCEI in detail, thus making replication by future investigators difficult. Here we describe a detailed method of electrically inducing VF that was used successfully in a prospective, experimental resuscitation study. Specifically, an electrical current was passed through the heart to induce VF in crossbred Yorkshire swine (n = 30); the current was generated by using two 22-gauge spinal needles, with one placed above and one below the heart, and three 9V batteries connected in series. VF developed in 28 of the 30 pigs (93%) within 10 s of beginning the procedure. In the remaining 2 swine, VF was induced successfully after medial redirection of the superior parasternal needle. The TCEI method is simple, reproducible, and cost-effective. TCEI may be especially valuable to researchers with limited access to funding, sophisticated equipment, or colleagues experienced in interventional cardiology techniques. The TCEI method might be most appropriate for pharmacologic studies requiring VF, VF resulting from the R-on-T phenomenon (as in prolonged QT syndrome), and VF arising from other ectopic or reentrant causes. However, the TCEI method does not accurately model the most common cause of VF, acute coronary occlusive disease. Researchers must consider the limitations of TCEI that may affect internal and external validity of collected data, when designing experiments using this model of VF.

  11. Hazard Response Modeling Uncertainty (A Quantitative Method)

    Science.gov (United States)

    1988-10-01

    ... C_p is the concentration predicted by some component or model. The variance of C_o/C_p is calculated and defined as var(Model i), where Model i could be ...

  12. The quantitative modelling of human spatial habitability

    Science.gov (United States)

    Wise, James A.

    1988-01-01

    A theoretical model for evaluating human spatial habitability (HuSH) in the proposed U.S. Space Station is developed. Optimizing the fitness of the space station environment for human occupancy will help reduce environmental stress due to long-term isolation and confinement in its small habitable volume. The development of tools that operationalize the behavioral bases of spatial volume for visual, kinesthetic, and social logic considerations is suggested. This report further calls for systematic scientific investigations of how much real and how much perceived volume people need in order to function normally and with minimal stress in space-based settings. The theoretical model presented in this report can be applied to any size or shape of interior, at any scale of consideration, from the Space Station as a whole to an individual enclosure or workstation. Using as a point of departure the Isovist model developed by Dr. Michael Benedikt of the U. of Texas, the report suggests that spatial habitability can become as amenable to careful assessment as engineering and life support concerns.

  13. A Qualitative and Quantitative Evaluation of 8 Clear Sky Models.

    Science.gov (United States)

    Bruneton, Eric

    2016-10-27

    We provide a qualitative and quantitative evaluation of 8 clear sky models used in Computer Graphics. We compare the models with each other as well as with measurements and with a reference model from the physics community. After a short summary of the physics of the problem, we present the measurements and the reference model, and how we "invert" it to get the model parameters. We then give an overview of each CG model, and detail its scope, its algorithmic complexity, and its results using the same parameters as in the reference model. We also compare the models with a perceptual study. Our quantitative results confirm that the fewer simplifications and approximations are used to solve the physical equations, the more accurate the results are. We conclude with a discussion of the advantages and drawbacks of each model, and how to further improve their accuracy.

  14. An update on the rotenone models of Parkinson's disease: their ability to reproduce the features of clinical disease and model gene-environment interactions.

    Science.gov (United States)

    Johnson, Michaela E; Bobrovskaya, Larisa

    2015-01-01

    Parkinson's disease (PD) is the second most common neurodegenerative disorder that is characterized by two major neuropathological hallmarks: the degeneration of dopaminergic neurons in the substantia nigra (SN) and the presence of Lewy bodies in the surviving SN neurons, as well as other regions of the central and peripheral nervous system. Animal models have been invaluable tools for investigating the underlying mechanisms of the pathogenesis of PD and testing new potential symptomatic, neuroprotective and neurorestorative therapies. However, the usefulness of these models is dependent on how precisely they replicate the features of clinical PD with some studies now employing combined gene-environment models to replicate more of the affected pathways. The rotenone model of PD has become of great interest following the seminal paper by the Greenamyre group in 2000 (Betarbet et al., 2000). This paper reported for the first time that systemic rotenone was able to reproduce the two pathological hallmarks of PD as well as certain parkinsonian motor deficits. Since 2000, many research groups have actively used the rotenone model worldwide. This paper will review rotenone models, focusing upon their ability to reproduce the two pathological hallmarks of PD, motor deficits, extranigral pathology and non-motor symptoms. We will also summarize the recent advances in neuroprotective therapies, focusing on those that investigated non-motor symptoms and review rotenone models used in combination with PD genetic models to investigate gene-environment interactions.

  15. A Solved Model to Show Insufficiency of Quantitative Adiabatic Condition

    Institute of Scientific and Technical Information of China (English)

    LIU Long-Jiang; LIU Yu-Zhen; TONG Dian-Min

    2009-01-01

    The adiabatic theorem is a useful tool for treating slowly evolving quantum systems, but its practical application depends on the quantitative condition expressed in terms of the Hamiltonian's eigenvalues and eigenstates, which is usually taken as a sufficient condition. Recently, the sufficiency of this condition was questioned, and several counterexamples have been reported. Here we present a new solved model to show the insufficiency of the traditional quantitative adiabatic condition.

  16. Efficient Generation and Selection of Virtual Populations in Quantitative Systems Pharmacology Models

    Science.gov (United States)

    Rieger, TR; Musante, CJ

    2016-01-01

    Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed “virtual patients.” In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations. PMID:27069777
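
    As a rough illustration of the selection idea (not the authors' algorithm), the sketch below samples candidate "virtual patients" from plausible parameter ranges and then keeps, without weighting, a subset whose model outputs follow an assumed observed distribution. The toy model, parameter bounds and target statistics are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)

        def model_output(params):
            # toy stand-in for a QSP model: maps a parameter vector to a biomarker value
            return params[0] * np.exp(-params[1])

        # 1) generate plausible virtual patients by sampling physiological parameter ranges
        candidates = rng.uniform(low=[0.5, 0.1], high=[2.0, 1.0], size=(20000, 2))
        outputs = np.array([model_output(p) for p in candidates])

        # 2) select (without weighting) a virtual population whose outputs match
        #    an observed clinical distribution, here assumed normal(0.8, 0.1)
        target_pdf = lambda y: np.exp(-0.5 * ((y - 0.8) / 0.1) ** 2)
        keep = rng.uniform(size=outputs.size) < target_pdf(outputs) / target_pdf(outputs).max()
        vpop = candidates[keep]
        print(len(vpop), "virtual patients; output mean/sd:",
              round(float(outputs[keep].mean()), 3), round(float(outputs[keep].std()), 3))

    The acceptance step plays the role of selection: over-represented regions of output space are thinned so that no single virtual patient needs a large weight to make the ensemble match the clinical statistics.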

  17. Quantitative sociodynamics stochastic methods and models of social interaction processes

    CERN Document Server

    Helbing, Dirk

    1995-01-01

    Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioural changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics but they have very often proved their explanatory power in chemistry, biology, economics and the social sciences. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces the most important concepts from nonlinear dynamics (synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches a very fundamental dynamic model is obtained which seems to open new perspectives in the social sciences. It includes many established models as special cases, e.g. the log...

  18. Quantitative Sociodynamics Stochastic Methods and Models of Social Interaction Processes

    CERN Document Server

    Helbing, Dirk

    2010-01-01

    This new edition of Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioral changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics and mathematics, but they have very often proven their explanatory power in chemistry, biology, economics and the social sciences as well. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces important concepts from nonlinear dynamics (e.g. synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches, a fundamental dynamic model is obtained, which opens new perspectives in the social sciences. It includes many established models a...

  19. Safety and Reproducibility of a Clinical Trial System Using Induced Blood Stage Plasmodium vivax Infection and Its Potential as a Model to Evaluate Malaria Transmission

    Science.gov (United States)

    Elliott, Suzanne; Sekuloski, Silvana; Sikulu, Maggy; Hugo, Leon; Khoury, David; Cromer, Deborah; Davenport, Miles; Sattabongkot, Jetsumon; Ivinson, Karen; Ockenhouse, Christian; McCarthy, James

    2016-01-01

    Background: Interventions to interrupt transmission of malaria from humans to mosquitoes represent an appealing approach to assist malaria elimination. A limitation has been the lack of systems to test the efficacy of such interventions before proceeding to efficacy trials in the field. We have previously demonstrated the feasibility of induced blood stage malaria (IBSM) infection with Plasmodium vivax. In this study, we report further validation of the IBSM model, and its evaluation for assessment of transmission of P. vivax to Anopheles stephensi mosquitoes. Methods: Six healthy subjects (three cohorts, n = 2 per cohort) were infected with P. vivax by inoculation with parasitized erythrocytes. Parasite growth was monitored by quantitative PCR, and gametocytemia by quantitative reverse transcriptase PCR (qRT-PCR) for the mRNA pvs25. Parasite multiplication rate (PMR) and size of inoculum were calculated by linear regression. Mosquito transmission studies were undertaken by direct and membrane feeding assays over 3 days prior to commencement of antimalarial treatment, and midguts of blood fed mosquitoes dissected and checked for presence of oocysts after 7–9 days. Results: The clinical course and parasitemia were consistent across cohorts, with all subjects developing mild to moderate symptoms of malaria. No serious adverse events were reported. Asymptomatic elevated liver function tests were detected in four of six subjects; these resolved without treatment. Direct feeding of mosquitoes was well tolerated. The estimated PMR was 9.9 fold per cycle. Low prevalence of mosquito infection was observed (1.8%; n = 32/1801) from both direct (4.5%; n = 20/411) and membrane (0.9%; n = 12/1360) feeds. Conclusion: The P. vivax IBSM model proved safe and reliable. The clinical course and PMR were reproducible when compared with the previous study using this model. The IBSM model presented in this report shows promise as a system to test transmission-blocking interventions.
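
    The parasite multiplication rate (PMR) and inoculum size mentioned above come from a log-linear fit of parasitemia against time. A small sketch with made-up numbers (not the study's data), assuming a 48-hour replication cycle:

        import numpy as np

        # illustrative data: day of sampling and parasites/mL from qPCR (not the study's values)
        days = np.array([4, 6, 8, 10, 12], dtype=float)
        parasitemia = np.array([80, 260, 900, 2800, 9500], dtype=float)

        # fit log10(parasitemia) vs time; PMR is the fold-change per 48-h asexual cycle
        slope, intercept = np.polyfit(days, np.log10(parasitemia), 1)
        pmr = 10 ** (slope * 2.0)              # 2-day replication cycle assumed for P. vivax
        inoculum = 10 ** intercept             # back-extrapolated parasitemia at day 0
        print(f"PMR ~ {pmr:.1f} per cycle, inoculum ~ {inoculum:.0f} parasites/mL")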

  20. A rat model of post-traumatic stress disorder reproduces the hippocampal deficits seen in the human syndrome

    Directory of Open Access Journals (Sweden)

    Sonal Goswami

    2012-06-01

    Full Text Available Despite recent progress, the causes and pathophysiology of post-traumatic stress disorder (PTSD) remain poorly understood, partly because of ethical limitations inherent to human studies. One approach to circumvent this obstacle is to study PTSD in a valid animal model of the human syndrome. In one such model, extreme and long-lasting behavioral manifestations of anxiety develop in a subset of Lewis rats after exposure to an intense predatory threat that mimics the type of life-and-death situation known to precipitate PTSD in humans. This study aimed to assess whether the hippocampus-associated deficits observed in the human syndrome are reproduced in this rodent model. Prior to predatory threat, different groups of rats were each tested on one of three object recognition memory tasks that varied in the types of contextual clues (i.e. that require the hippocampus or not) the rats could use to identify novel items. After task completion, the rats were subjected to predatory threat and, one week later, tested on the elevated plus maze. Based on their exploratory behavior in the plus maze, rats were then classified as resilient or PTSD-like and their performance on the pre-threat object recognition tasks compared. The performance of PTSD-like rats was inferior to that of resilient rats but only when subjects relied on an allocentric frame of reference to identify novel items, a process thought to be critically dependent on the hippocampus. Therefore, these results suggest that even prior to trauma, PTSD-like rats show a deficit in hippocampal-dependent functions, as reported in twin studies of human PTSD.

  1. Reproducing the organic matter model of anthropogenic dark earth of Amazonia and testing the ecotoxicity of functionalized charcoal compounds

    Directory of Open Access Journals (Sweden)

    Carolina Rodrigues Linhares

    2012-05-01

    Full Text Available The objective of this work was to obtain organic compounds similar to the ones found in the organic matter of anthropogenic dark earth of Amazonia (ADE) using a chemical functionalization procedure on activated charcoal, as well as to determine their ecotoxicity. Based on the study of the organic matter from ADE, an organic model was proposed and an attempt to reproduce it was described. Activated charcoal was oxidized with the use of sodium hypochlorite at different concentrations. Nuclear magnetic resonance was performed to verify if the spectra of the obtained products were similar to the ones of humic acids from ADE. The similarity between spectra indicated that the obtained products were polycondensed aromatic structures with carboxyl groups: a soil amendment that can contribute to soil fertility and to its sustainable use. An ecotoxicological test with Daphnia similis was performed on the more soluble fraction (fulvic acids) of the produced soil amendment. Aryl chloride was formed during the synthesis of the organic compounds from activated charcoal functionalization and partially removed through a purification process. However, it is probable that some aryl chloride remained in the final product, since the ecotoxicological test indicated that the chemically functionalized soil amendment is moderately toxic.

  2. Modeling quantitative phase image formation under tilted illuminations.

    Science.gov (United States)

    Bon, Pierre; Wattellier, Benoit; Monneret, Serge

    2012-05-15

    A generalized product-of-convolution model for simulation of quantitative phase microscopy of thick heterogeneous specimen under tilted plane-wave illumination is presented. Actual simulations are checked against a much more time-consuming commercial finite-difference time-domain method. Then modeled data are compared with experimental measurements that were made with a quadriwave lateral shearing interferometer.

  3. Improvement of the ID model for quantitative network data

    DEFF Research Database (Denmark)

    Sørensen, Peter Borgen; Damgaard, Christian Frølund; Dupont, Yoko Luise

    2015-01-01

    Many interactions are often poorly registered or even unobserved in empirical quantitative networks. Hence, the output of the statistical analyses may fail to differentiate between patterns that are statistical artefacts and those which are real characteristics of ecological networks (1). This presentation will illustrate the application of the ID method based on a data set which consists of counts of visits by 152 pollinator species to 16 plant species. The method is based on two definitions of the underlying probabilities for each combination of pollinator and plant species: (1), pi ... reproduce the high number of zero-valued cells in the data set and mimic the sampling distribution. (1) Sørensen et al., Journal of Pollination Ecology, 6(18), 2011, pp. 129-139.

  4. Generalized PSF modeling for optimized quantitation in PET imaging

    Science.gov (United States)

    Ashrafinia, Saeed; Mohy-ud-Din, Hassan; Karakatsanis, Nicolas A.; Jha, Abhinav K.; Casey, Michael E.; Kadrmas, Dan J.; Rahmim, Arman

    2017-06-01

    Point-spread function (PSF) modeling offers the ability to account for resolution degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying PSF modelled kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF

  5. Assessment of an ensemble of ocean-atmosphere coupled and uncoupled regional climate models to reproduce the climatology of Mediterranean cyclones

    Science.gov (United States)

    Flaounas, Emmanouil; Kelemen, Fanni Dora; Wernli, Heini; Gaertner, Miguel Angel; Reale, Marco; Sanchez-Gomez, Emilia; Lionello, Piero; Calmanti, Sandro; Podrascanin, Zorica; Somot, Samuel; Akhtar, Naveed; Romera, Raquel; Conte, Dario

    2016-11-01

    This study aims to assess the skill of regional climate models (RCMs) at reproducing the climatology of Mediterranean cyclones. Seven RCMs are considered, five of which were also coupled with an oceanic model. All simulations were forced at the lateral boundaries by the ERA-Interim reanalysis for a common 20-year period (1989-2008). Six different cyclone tracking methods have been applied to all twelve RCM simulations and to the ERA-Interim reanalysis in order to assess the RCMs from the perspective of different cyclone definitions. All RCMs reproduce the main areas of high cyclone occurrence in the region south of the Alps, in the Adriatic, Ionian and Aegean Seas, as well as in the areas close to Cyprus and to the Atlas Mountains. The RCMs tend to underestimate intense cyclone occurrences over the Mediterranean Sea and reproduce 24-40 % of these systems, as identified in the reanalysis. The use of grid nudging in one of the RCMs is shown to be beneficial, reproducing about 60 % of the intense cyclones and keeping better track of the seasonal cycle of intense cyclogenesis. Finally, the most intense cyclones tend to be similarly reproduced in coupled and uncoupled model simulations, suggesting that modeling atmosphere-ocean coupled processes has only a weak impact on the climatology and intensity of Mediterranean cyclones.

  6. Refining the quantitative pathway of the Pathways to Mathematics model.

    Science.gov (United States)

    Sowinski, Carla; LeFevre, Jo-Anne; Skwarchuk, Sheri-Lynn; Kamawar, Deepthi; Bisanz, Jeffrey; Smith-Chant, Brenda

    2015-03-01

    In the current study, we adopted the Pathways to Mathematics model of LeFevre et al. (2010). In this model, there are three cognitive domains--labeled as the quantitative, linguistic, and working memory pathways--that make unique contributions to children's mathematical development. We attempted to refine the quantitative pathway by combining children's (N=141 in Grades 2 and 3) subitizing, counting, and symbolic magnitude comparison skills using principal components analysis. The quantitative pathway was examined in relation to dependent numerical measures (backward counting, arithmetic fluency, calculation, and number system knowledge) and a dependent reading measure, while simultaneously accounting for linguistic and working memory skills. Analyses controlled for processing speed, parental education, and gender. We hypothesized that the quantitative, linguistic, and working memory pathways would account for unique variance in the numerical outcomes; this was the case for backward counting and arithmetic fluency. However, only the quantitative and linguistic pathways (not working memory) accounted for unique variance in calculation and number system knowledge. Not surprisingly, only the linguistic pathway accounted for unique variance in the reading measure. These findings suggest that the relative contributions of quantitative, linguistic, and working memory skills vary depending on the specific cognitive task. Copyright © 2014 Elsevier Inc. All rights reserved.
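
    A small sketch of the kind of composite scoring described, combining three measures into a single "quantitative pathway" component via principal components analysis. The data are simulated and the use of scikit-learn is an assumption for illustration only.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(5)
        n = 141
        latent = rng.normal(size=n)
        # hypothetical z-scored subitizing, counting and symbolic comparison scores
        scores = np.column_stack([latent + rng.normal(0, 0.6, n) for _ in range(3)])

        pca = PCA(n_components=1)
        quantitative_pathway = pca.fit_transform(scores).ravel()
        print("variance explained by the first component: %.2f"
              % pca.explained_variance_ratio_[0])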

  7. A computational model for histone mark propagation reproduces the distribution of heterochromatin in different human cell types.

    Science.gov (United States)

    Schwämmle, Veit; Jensen, Ole Nørregaard

    2013-01-01

    Chromatin is a highly compact and dynamic nuclear structure that consists of DNA and associated proteins. The main organizational unit is the nucleosome, which consists of a histone octamer with DNA wrapped around it. Histone proteins are implicated in the regulation of eukaryote genes and they carry numerous reversible post-translational modifications that control DNA-protein interactions and the recruitment of chromatin binding proteins. Heterochromatin, the transcriptionally inactive part of the genome, is densely packed and contains histone H3 that is methylated at Lys 9 (H3K9me). The propagation of H3K9me in nucleosomes along the DNA in chromatin is antagonized by methylation of H3 Lysine 4 (H3K4me) and acetylations of several lysines, which are related to euchromatin and active genes. We show that the related histone modifications form antagonized domains on a coarse scale. These histone marks are assumed to be initiated within distinct nucleation sites in the DNA and to propagate bi-directionally. We propose a simple computer model that simulates the distribution of heterochromatin in human chromosomes. The simulations are in agreement with previously reported experimental observations from two different human cell lines. We reproduced different types of barriers between heterochromatin and euchromatin, providing a unified model for their function. The effects of changes in the nucleation site distribution and of propagation rates were studied. The former occurs mainly with the aim of (de-)activation of single genes or gene groups and the latter has the power of controlling the transcriptional programs of entire chromosomes. Generally, the regulatory program of gene transcription is controlled by the distribution of nucleation sites along the DNA string.
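
    A toy, heavily simplified sketch of the kind of model described: marks nucleate at fixed sites and spread bi-directionally to neighbouring nucleosomes, with repressive and active marks overwriting one another. The lattice size, site positions and rates are illustrative assumptions, not the paper's parameters.

        import numpy as np

        rng = np.random.default_rng(1)
        n, steps = 2000, 5000                 # nucleosomes, simulation updates
        state = np.zeros(n, dtype=int)        # 0 unmodified, +1 H3K9me-like, -1 H3K4me/ac-like
        silent_sites = [300, 1200]            # assumed nucleation sites for repressive marks
        active_sites = [800, 1600]            # assumed nucleation sites for active marks
        p_prop, p_loss = 0.4, 0.02            # assumed propagation and turnover probabilities

        for _ in range(steps):
            i = rng.integers(n)
            if i in silent_sites:
                state[i] = 1
            elif i in active_sites:
                state[i] = -1
            elif rng.random() < p_loss:
                state[i] = 0
            elif rng.random() < p_prop:
                j = i + rng.choice([-1, 1])   # bi-directional propagation from a neighbour
                if 0 <= j < n and state[j] != 0:
                    state[i] = state[j]       # antagonistic marks overwrite each other

        print("fraction heterochromatin-like:", (state == 1).mean())

    Moving a nucleation site or changing p_prop in such a simulation is the computational analogue of the gene-level and chromosome-level control mechanisms discussed in the abstract.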

  8. Reproducibility and accuracy of linear measurements on dental models derived from cone-beam computed tomography compared with digital dental casts

    NARCIS (Netherlands)

    Waard, O. de; Rangel, F.A.; Fudalej, P.S.; Bronkhorst, E.M.; Kuijpers-Jagtman, A.M.; Breuning, K.H.

    2014-01-01

    INTRODUCTION: The aim of this study was to determine the reproducibility and accuracy of linear measurements on 2 types of dental models derived from cone-beam computed tomography (CBCT) scans: CBCT images, and Anatomodels (InVivoDental, San Jose, Calif); these were compared with digital models gene

  9. Randomised reproducing graphs

    CERN Document Server

    Jordan, Jonathan

    2011-01-01

    We introduce a model for a growing random graph based on simultaneous reproduction of the vertices. The model can be thought of as a generalisation of the reproducing graphs of Southwell and Cannings and Bonato et al to allow for a random element, and there are three parameters, $\alpha$, $\beta$ and $\gamma$, which are the probabilities of edges appearing between different types of vertices. We show that as the probabilities associated with the model vary there are a number of phase transitions, in particular concerning the degree sequence. If $(1+\alpha)(1+\gamma)>1$ then the degree of a typical vertex grows to infinity, and the proportion of vertices having any fixed degree $d$ tends to zero. We also give some results on the number of edges and on the spectral gap.

  10. Modeling Logistic Performance in Quantitative Microbial Risk Assessment

    NARCIS (Netherlands)

    Rijgersberg, H.; Tromp, S.O.; Jacxsens, L.; Uyttendaele, M.

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage ti

  11. Quantitative modelling in design and operation of food supply systems

    NARCIS (Netherlands)

    Beek, van P.

    2004-01-01

    During the last two decades food supply systems not only got interest of food technologists but also from the field of Operations Research and Management Science. Operations Research (OR) is concerned with quantitative modelling and can be used to get insight into the optimal configuration and opera

  12. Murine model of disseminated fusariosis: evaluation of the fungal burden by traditional CFU and quantitative PCR.

    Science.gov (United States)

    González, Gloria M; Márquez, Jazmín; Treviño-Rangel, Rogelio de J; Palma-Nicolás, José P; Garza-González, Elvira; Ceceñas, Luis A; Gerardo González, J

    2013-10-01

    Systemic disease is the most severe clinical form of fusariosis, and treatment remains a challenge due to the refractory response to antifungals. Treatment for murine Fusarium solani infection has been described in models that employ CFU quantitation in organs as a parameter of therapeutic efficacy. However, CFU counts do not precisely reproduce the number of cells for filamentous fungi such as F. solani. In this study, we developed a murine model of disseminated fusariosis and compared the fungal burden with two methods: CFU and quantitative PCR. ICR and BALB/c mice received an intravenous injection of 1 × 10^7 conidia of F. solani per mouse. On days 2, 5, 7, and 9, mice from each strain were killed. The spleen and kidneys of each animal were removed and evaluated by qPCR and CFU determinations. Results from the CFU assay indicated that the spleen and kidneys had almost the same fungal burden in both BALB/c and ICR mice during the days of the evaluation. In the qPCR assay, the spleen and kidney of each mouse strain had increased fungal burden in each determination throughout the entire experiment. The fungal load determined by the qPCR assay was significantly greater than that determined from CFU measurements of tissue. qPCR could be considered as a tool for quantitative evaluation of fungal burden in experimental disseminated F. solani infection.
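
    For context, qPCR-based burden estimates are typically read off a standard curve relating Ct values to known DNA quantities. A minimal sketch with hypothetical numbers (not data from this study):

        import numpy as np

        # hypothetical standard curve: Ct values measured for known log10 genome equivalents
        log_qty_std = np.array([6.0, 5.0, 4.0, 3.0, 2.0])
        ct_std = np.array([16.1, 19.5, 22.8, 26.3, 29.7])

        slope, intercept = np.polyfit(ct_std, log_qty_std, 1)

        def genome_equivalents(ct_sample):
            """Interpolate the fungal burden of an organ sample from its Ct value."""
            return 10 ** (slope * ct_sample + intercept)

        print("sample with Ct 21.4: ~%.2e genome equivalents" % genome_equivalents(21.4))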

  13. The Need for Reproducibility

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Laboratory

    2016-06-27

    The purpose of this presentation is to consider issues of reproducibility: specifically, whether bitwise reproducible computation is possible, whether computational research in DOE improves its publication process, and whether reproducible results can be achieved apart from the peer review process.

  14. A GPGPU accelerated modeling environment for quantitatively characterizing karst systems

    Science.gov (United States)

    Myre, J. M.; Covington, M. D.; Luhmann, A. J.; Saar, M. O.

    2011-12-01

    The ability to derive quantitative information on the geometry of karst aquifer systems is highly desirable. Knowing the geometric makeup of a karst aquifer system enables quantitative characterization of the system's response to hydraulic events. However, the relationship between flow path geometry and karst aquifer response is not well understood. One method to improve this understanding is the use of high speed modeling environments. High speed modeling environments offer great potential in this regard as they allow researchers to improve their understanding of the modeled karst aquifer through fast quantitative characterization. To that end, we have implemented a finite difference model using General Purpose Graphics Processing Units (GPGPUs). GPGPUs are special purpose accelerators which are capable of high speed and highly parallel computation. The GPGPU architecture is a grid-like structure, making it a natural fit for structured systems like finite difference models. To characterize the highly complex nature of karst aquifer systems, our modeling environment is designed to use an inverse method to conduct the parameter tuning. Using an inverse method reduces the total amount of parameter space needed to produce a set of parameters describing a system of good fit. Systems of good fit are determined by comparison to reference storm responses. To obtain reference storm responses we have collected data from a series of data-loggers measuring water depth, temperature, and conductivity at locations along a cave stream with a known geometry in southeastern Minnesota. By comparing the modeled response to the reference responses, the model parameters can be tuned to quantitatively characterize geometry, and thus, the response of the karst system.
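
    The inverse step can be pictured as tuning geometry parameters until a modeled response matches a reference storm response. The sketch below does this with a plain grid search over one parameter of a toy response function (no GPGPU, no real conduit physics), so every quantity in it is an illustrative assumption.

        import numpy as np

        t = np.linspace(0, 48, 200)                       # hours after storm onset

        def modeled_response(diameter):
            # toy conduit model: peak height and recession rate depend on an assumed diameter
            return (1.0 / diameter) * t * np.exp(-t * diameter / 10.0)

        reference = modeled_response(0.8) + np.random.default_rng(2).normal(0, 0.02, t.size)

        # inverse step: pick the geometry parameter whose response best matches the reference
        candidates = np.linspace(0.2, 2.0, 181)
        rmse = [np.sqrt(np.mean((modeled_response(d) - reference) ** 2)) for d in candidates]
        best = candidates[int(np.argmin(rmse))]
        print(f"best-fit diameter ~ {best:.2f} (synthetic truth 0.8)")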

  15. Reservoir Stochastic Modeling Constrained by Quantitative Geological Conceptual Patterns

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This paper discusses the principles of geologic constraints on reservoir stochastic modeling. By using the system science theory, two kinds of uncertainties, including random uncertainty and fuzzy uncertainty, are recognized. In order to improve the precision of stochastic modeling and reduce the uncertainty in realization, the fuzzy uncertainty should be stressed, and the "geological genesis-controlled modeling" is conducted under the guidance of a quantitative geological pattern. An example of the Pingqiao horizontal-well division of the Ansai Oilfield in the Ordos Basin is taken to expound the method of stochastic modeling.

  16. Lessons Learned from Quantitative Dynamical Modeling in Systems Biology

    Science.gov (United States)

    Bachmann, Julie; Matteson, Andrew; Schelke, Max; Kaschek, Daniel; Hug, Sabine; Kreutz, Clemens; Harms, Brian D.; Theis, Fabian J.; Klingmüller, Ursula; Timmer, Jens

    2013-01-01

    Due to the high complexity of biological data it is difficult to disentangle cellular processes relying only on intuitive interpretation of measurements. A Systems Biology approach that combines quantitative experimental data with dynamic mathematical modeling promises to yield deeper insights into these processes. Nevertheless, with growing complexity and increasing amount of quantitative experimental data, building realistic and reliable mathematical models can become a challenging task: the quality of experimental data has to be assessed objectively, unknown model parameters need to be estimated from the experimental data, and numerical calculations need to be precise and efficient. Here, we discuss, compare and characterize the performance of computational methods throughout the process of quantitative dynamic modeling using two previously established examples, for which quantitative, dose- and time-resolved experimental data are available. In particular, we present an approach that allows to determine the quality of experimental data in an efficient, objective and automated manner. Using this approach data generated by different measurement techniques and even in single replicates can be reliably used for mathematical modeling. For the estimation of unknown model parameters, the performance of different optimization algorithms was compared systematically. Our results show that deterministic derivative-based optimization employing the sensitivity equations in combination with a multi-start strategy based on latin hypercube sampling outperforms the other methods by orders of magnitude in accuracy and speed. Finally, we investigated transformations that yield a more efficient parameterization of the model and therefore lead to a further enhancement in optimization performance. We provide a freely available open source software package that implements the algorithms and examples compared here. PMID:24098642
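
    A compact sketch of the multi-start strategy discussed (Latin hypercube starting points feeding a derivative-based least-squares optimizer), applied to a toy exponential-decay model rather than the ODE systems with sensitivity equations used in the paper; SciPy's qmc and least_squares routines are used here as an assumed tooling choice.

        import numpy as np
        from scipy.stats import qmc
        from scipy.optimize import least_squares

        rng = np.random.default_rng(3)
        t = np.linspace(0, 10, 40)
        true = np.array([2.0, 0.5])
        data = true[0] * np.exp(-true[1] * t) + rng.normal(0, 0.05, t.size)

        residuals = lambda p: p[0] * np.exp(-p[1] * t) - data

        # multi-start: draw starting points from a Latin hypercube over the parameter bounds
        lo, hi = np.array([0.1, 0.01]), np.array([10.0, 5.0])
        starts = qmc.scale(qmc.LatinHypercube(d=2, seed=3).random(25), lo, hi)

        fits = [least_squares(residuals, x0, bounds=(lo, hi)) for x0 in starts]
        best = min(fits, key=lambda r: r.cost)
        print("best-fit parameters:", np.round(best.x, 3))

    Keeping only the lowest-cost fit among many starts is what protects against the local optima that plague single-start optimization of nonlinear models.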

  17. Lessons learned from quantitative dynamical modeling in systems biology.

    Directory of Open Access Journals (Sweden)

    Andreas Raue

    Full Text Available Due to the high complexity of biological data it is difficult to disentangle cellular processes relying only on intuitive interpretation of measurements. A Systems Biology approach that combines quantitative experimental data with dynamic mathematical modeling promises to yield deeper insights into these processes. Nevertheless, with growing complexity and increasing amount of quantitative experimental data, building realistic and reliable mathematical models can become a challenging task: the quality of experimental data has to be assessed objectively, unknown model parameters need to be estimated from the experimental data, and numerical calculations need to be precise and efficient. Here, we discuss, compare and characterize the performance of computational methods throughout the process of quantitative dynamic modeling using two previously established examples, for which quantitative, dose- and time-resolved experimental data are available. In particular, we present an approach that allows to determine the quality of experimental data in an efficient, objective and automated manner. Using this approach data generated by different measurement techniques and even in single replicates can be reliably used for mathematical modeling. For the estimation of unknown model parameters, the performance of different optimization algorithms was compared systematically. Our results show that deterministic derivative-based optimization employing the sensitivity equations in combination with a multi-start strategy based on latin hypercube sampling outperforms the other methods by orders of magnitude in accuracy and speed. Finally, we investigated transformations that yield a more efficient parameterization of the model and therefore lead to a further enhancement in optimization performance. We provide a freely available open source software package that implements the algorithms and examples compared here.

  18. Quantitative performance metrics for stratospheric-resolving chemistry-climate models

    Directory of Open Access Journals (Sweden)

    D. W. Waugh

    2008-06-01

    Full Text Available A set of performance metrics is applied to stratospheric-resolving chemistry-climate models (CCMs) to quantify their ability to reproduce key processes relevant for stratospheric ozone. The same metrics are used to assign a quantitative measure of performance ("grade") to each model-observations comparison shown in Eyring et al. (2006). A wide range of grades is obtained, both for different diagnostics applied to a single model and for the same diagnostic applied to different models, highlighting the wide range in ability of the CCMs to simulate key processes in the stratosphere. No model scores high or low on all tests, but differences in the performance of models can be seen, especially for transport processes where several models get low grades on multiple tests. The grades are used to assign relative weights to the CCM projections of 21st century total ozone. However, only small differences are found between weighted and unweighted multi-model mean total ozone projections. This study raises several issues with the grading and weighting of CCMs that need further examination, but it does provide a framework that will enable quantification of model improvements and assignment of relative weights to the model projections.
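
    A schematic of the grade-and-weight idea (not necessarily the exact metric used in the paper): each model receives a grade in [0, 1] based on its deviation from observations, and the grades are then used as relative weights on the projections. All numbers below are hypothetical.

        import numpy as np

        def grade(model_mean, obs_mean, obs_sd, n_sd=3.0):
            # generic grade: 1 for a perfect match, 0 if the model-observation
            # difference exceeds n_sd observed standard deviations (illustrative metric only)
            return float(np.clip(1.0 - abs(model_mean - obs_mean) / (n_sd * obs_sd), 0.0, 1.0))

        # hypothetical grades and 21st-century total ozone projections for three models
        grades = np.array([grade(m, 300.0, 10.0) for m in (305.0, 320.0, 298.0)])
        projections = np.array([285.0, 310.0, 292.0])

        weights = grades / grades.sum()
        print("unweighted mean:", projections.mean(), "weighted mean:", projections @ weights)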

  19. Quantitative Analysis of Polarimetric Model-Based Decomposition Methods

    Directory of Open Access Journals (Sweden)

    Qinghua Xie

    2016-11-01

    Full Text Available In this paper, we analyze the robustness of the parameter inversion provided by general polarimetric model-based decomposition methods from the perspective of a quantitative application. The general model and algorithm we have studied is the method proposed recently by Chen et al., which makes use of the complete polarimetric information and outperforms traditional decomposition methods in terms of feature extraction from land covers. Nevertheless, a quantitative analysis on the retrieved parameters from that approach suggests that further investigations are required in order to fully confirm the links between a physically-based model (i.e., approaches derived from the Freeman–Durden concept) and its outputs as intermediate products before any biophysical parameter retrieval is addressed. To this aim, we propose some modifications on the optimization algorithm employed for model inversion, including redefined boundary conditions, transformation of variables, and a different strategy for values initialization. A number of Monte Carlo simulation tests for typical scenarios are carried out and show that the parameter estimation accuracy of the proposed method is significantly increased with respect to the original implementation. Fully polarimetric airborne datasets at L-band acquired by the German Aerospace Center’s (DLR’s) experimental synthetic aperture radar (E-SAR) system were also used for testing purposes. The results show different qualitative descriptions of the same cover from six different model-based methods. According to the Bragg coefficient ratio (i.e., β), they are prone to provide wrong numerical inversion results, which could prevent any subsequent quantitative characterization of specific areas in the scene. Besides the particular improvements proposed over an existing polarimetric inversion method, this paper is aimed at pointing out the necessity of checking quantitatively the accuracy of model-based PolSAR techniques for a

  20. Development of a Three-Dimensional Hand Model Using Three-Dimensional Stereophotogrammetry: Assessment of Image Reproducibility.

    Directory of Open Access Journals (Sweden)

    Inge A Hoevenaren

    Full Text Available Using three-dimensional (3D) stereophotogrammetry, precise images and reconstructions of the human body can be produced. Over the last few years, this technique has mainly been developed in the field of maxillofacial reconstructive surgery, creating fusion images with computed tomography (CT) data for precise planning and prediction of treatment outcome. However, in hand surgery 3D stereophotogrammetry is not yet used in clinical settings. A total of 34 three-dimensional hand photographs were analyzed to investigate the reproducibility. For every individual, 3D photographs were captured at two different time points (baseline T0 and one week later T1). Using two different registration methods, the reproducibility of the methods was analyzed. Furthermore, the differences between 3D photos of men and women were compared in a distance map as a first clinical pilot testing our registration method. The absolute mean registration error for the complete hand was 1.46 mm. This reduced to an error of 0.56 mm when isolating the region to the palm of the hand. When comparing hands of both sexes, it was seen that the male hand was larger (broader base and longer fingers) than the female hand. This study shows that 3D stereophotogrammetry can produce reproducible images of the hand without harmful side effects for the patient, so proving to be a reliable method for soft tissue analysis. Its potential use in everyday practice of hand surgery needs to be further explored.

  1. Quantitative modelling in cognitive ergonomics: predicting signals passed at danger.

    Science.gov (United States)

    Moray, Neville; Groeger, John; Stanton, Neville

    2017-02-01

    This paper shows how to combine field observations, experimental data and mathematical modelling to produce quantitative explanations and predictions of complex events in human-machine interaction. As an example, we consider a major railway accident. In 1999, a commuter train passed a red signal near Ladbroke Grove, UK, into the path of an express. We use the Public Inquiry Report, 'black box' data, and accident and engineering reports to construct a case history of the accident. We show how to combine field data with mathematical modelling to estimate the probability that the driver observed and identified the state of the signals, and checked their status. Our methodology can explain the SPAD ('Signal Passed At Danger'), generate recommendations about signal design and placement and provide quantitative guidance for the design of safer railway systems' speed limits and the location of signals. Practitioner Summary: Detailed ergonomic analysis of railway signals and rail infrastructure reveals problems of signal identification at this location. A record of driver eye movements measures attention, from which a quantitative model for signal placement and permitted speeds can be derived. The paper is an example of how to combine field data, basic research and mathematical modelling to solve ergonomic design problems.

  2. A quantitative comparison of Calvin-Benson cycle models.

    Science.gov (United States)

    Arnold, Anne; Nikoloski, Zoran

    2011-12-01

    The Calvin-Benson cycle (CBC) provides the precursors for biomass synthesis necessary for plant growth. The dynamic behavior and yield of the CBC depend on the environmental conditions and regulation of the cellular state. Accurate quantitative models hold the promise of identifying the key determinants of the tightly regulated CBC function and their effects on the responses in future climates. We provide an integrative analysis of the largest compendium of existing models for photosynthetic processes. Based on the proposed ranking, our framework facilitates the discovery of best-performing models with regard to metabolomics data and of candidates for metabolic engineering.

  3. Real-time quantitative monitoring of hiPSC-based model of macular degeneration on Electric Cell-substrate Impedance Sensing microelectrodes.

    Science.gov (United States)

    Gamal, W; Borooah, S; Smith, S; Underwood, I; Srsen, V; Chandran, S; Bagnaninchi, P O; Dhillon, B

    2015-09-15

    Age-related macular degeneration (AMD) is the leading cause of blindness in the developed world. Humanized disease models are required to develop new therapies for currently incurable forms of AMD. In this work, a tissue-on-a-chip approach was developed through combining human induced pluripotent stem cells, Electric Cell-substrate Impedance Sensing (ECIS) and reproducible electrical wounding assays to model and quantitatively study AMD. Retinal Pigment Epithelium (RPE) cells generated from a patient with an inherited macular degeneration and from an unaffected sibling were used to test the model platform on which a reproducible electrical wounding assay was conducted to model RPE damage. First, a robust and reproducible real-time quantitative monitoring over a 25-day period demonstrated the establishment and maturation of RPE layers on the microelectrode arrays. A spatially controlled RPE layer damage that mimicked cell loss in AMD disease was then initiated. Post recovery, significant differences were observed between the two RPE cell lines. This model-on-a-chip is a powerful platform for translational studies with considerable potential to investigate novel therapies by enabling real-time, quantitative and reproducible patient-specific RPE cell repair studies. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
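
    One simple quantitative readout from such an impedance-based wounding assay is the time taken to recover a given fraction of the pre-wound signal. A sketch on a synthetic trace (the values and recovery shape are assumptions, not the study's data):

        import numpy as np

        # hypothetical normalized impedance trace after an electrical wound (1.0 = pre-wound level)
        hours = np.linspace(0, 24, 97)
        impedance = 1.0 - 0.9 * np.exp(-hours / 6.0)         # toy recovery curve

        def recovery_half_time(t, z, baseline=1.0):
            wounded = z[0]                                   # level immediately after wounding
            half = wounded + 0.5 * (baseline - wounded)
            idx = np.argmax(z >= half)                       # first sample at >= 50% recovery
            return t[idx]

        print("time to 50%% recovery: %.1f h" % recovery_half_time(hours, impedance))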

  4. From Peer-Reviewed to Peer-Reproduced in Scholarly Publishing: The Complementary Roles of Data Models and Workflows in Bioinformatics.

    Science.gov (United States)

    González-Beltrán, Alejandra; Li, Peter; Zhao, Jun; Avila-Garcia, Maria Susana; Roos, Marco; Thompson, Mark; van der Horst, Eelke; Kaliyaperumal, Rajaram; Luo, Ruibang; Lee, Tin-Lap; Lam, Tak-Wah; Edmunds, Scott C; Sansone, Susanna-Assunta; Rocca-Serra, Philippe

    2015-01-01

    Reproducing the results from a scientific paper can be challenging due to the absence of data and the computational tools required for their analysis. In addition, details relating to the procedures used to obtain the published results can be difficult to discern due to the use of natural language when reporting how experiments have been performed. The Investigation/Study/Assay (ISA), Nanopublications (NP), and Research Objects (RO) models are conceptual data modelling frameworks that can structure such information from scientific papers. Computational workflow platforms can also be used to reproduce analyses of data in a principled manner. We assessed the extent to which ISA, NP, and RO models, together with the Galaxy workflow system, can capture the experimental processes and reproduce the findings of a previously published paper reporting on the development of SOAPdenovo2, a de novo genome assembler. Executable workflows were developed using Galaxy, which reproduced results that were consistent with the published findings. A structured representation of the information in the SOAPdenovo2 paper was produced by combining the use of ISA, NP, and RO models. By structuring the information in the published paper using these data and scientific workflow modelling frameworks, it was possible to explicitly declare elements of experimental design, variables, and findings. The models served as guides in the curation of scientific information and this led to the identification of inconsistencies in the original published paper, thereby allowing its authors to publish corrections in the form of an erratum. SOAPdenovo2 scripts, data, and results are available through the GigaScience Database: http://dx.doi.org/10.5524/100044; the workflows are available from GigaGalaxy: http://galaxy.cbiit.cuhk.edu.hk; and the representations using the ISA, NP, and RO models are available through the SOAPdenovo2 case study website http://isa-tools.github.io/soapdenovo2/.

  5. Quantitative modelling in cognitive ergonomics: predicting signals passed at danger

    OpenAIRE

    Moray, Neville; Groeger, John; Stanton, Neville

    2016-01-01

    This paper shows how to combine field observations, experimental data, and mathematical modeling to produce quantitative explanations and predictions of complex events in human-machine interaction. As an example we consider a major railway accident. In 1999 a commuter train passed a red signal near Ladbroke Grove, UK, into the path of an express. We use the Public Inquiry Report, "black box" data, and accident and engineering reports, to construct a case history of the accident. We show how t...

  6. Quantitative models of hydrothermal fluid-mineral reaction: The Ischia case

    Science.gov (United States)

    Di Napoli, Rossella; Federico, Cinzia; Aiuppa, Alessandro; D'Antonio, Massimo; Valenza, Mariano

    2013-03-01

    The intricate pathways of fluid-mineral reactions occurring underneath active hydrothermal systems are explored in this study by applying reaction path modelling to the Ischia case study. Ischia Island, in Southern Italy, hosts a well-developed and structurally complex hydrothermal system which, because of its heterogeneity in chemical and physical properties, is an ideal test site for evaluating the potentialities/limitations of quantitative geochemical models of hydrothermal reactions. We used the EQ3/6 software package, version 7.2b, to model reaction of infiltrating waters (mixtures of meteoric water and seawater in variable proportions) with Ischia's reservoir rocks (the Mount Epomeo Green Tuff units; MEGT). The mineral assemblage and composition of such MEGT units were initially characterised by ad hoc designed optical microscopy and electron microprobe analysis, showing that phenocrysts (dominantly alkali-feldspars and plagioclase) are set in a pervasively altered (with abundant clay minerals and zeolites) groundmass. Reaction of infiltrating waters with MEGT minerals was simulated over a range of realistic (for Ischia) temperatures (95-260 °C) and CO2 fugacities (10^-0.2 to 10^0.5 bar). During the model runs, a set of secondary minerals (selected based on independent information from alteration minerals' studies) was allowed to precipitate from model solutions, when saturation was achieved. The compositional evolution of model solutions obtained in the 95-260 °C runs was finally compared with compositions of Ischia's thermal groundwaters, demonstrating an overall agreement. Our simulations, in particular, well reproduce the Mg-depleting maturation path of hydrothermal solutions, and have end-of-run model solutions whose Na-K-Mg compositions well reflect attainment of full-equilibrium conditions at run temperature. High-temperature (180-260 °C) model runs are those best matching the Na-K-Mg compositions of Ischia's most chemically mature water samples

  7. Quantitative metal magnetic memory reliability modeling for welded joints

    Science.gov (United States)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to detect welded joints. However, load levels, environmental magnetic field, and measurement noises make the MMM data dispersive and bring difficulty to quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens are tested along the longitudinal and horizontal lines by TSC-2M-8 instrument in the tensile fatigue experiments. The X-ray testing is carried out synchronously to verify the MMM results. It is found that MMM testing can detect the hidden crack earlier than X-ray testing. Moreover, the MMM gradient vector sum Kvs is sensitive to the damage degree, especially at early and hidden damage stages. Considering the dispersion of MMM data, the Kvs statistical law is investigated, which shows that Kvs obeys Gaussian distribution. So Kvs is the suitable MMM parameter to establish reliability model of welded joints. At last, the original quantitative MMM reliability model is first presented based on the improved stress strength interference theory. It is shown that the reliability degree R gradually decreases with the decreasing of the residual life ratio T, and the maximal error between prediction reliability degree R1 and verification reliability degree R2 is 9.15%. This presented method provides a novel tool of reliability testing and evaluating in practical engineering for welded joints.
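
    For orientation, the classical stress-strength interference formula for normally distributed stress and strength is R = Φ((μ_strength − μ_stress) / sqrt(σ_strength² + σ_stress²)); the paper's model is an improved version of this theory, so the sketch below only shows the textbook form with made-up numbers.

        from math import erf, sqrt

        def reliability(mu_strength, sd_strength, mu_stress, sd_stress):
            # classical stress-strength interference for normal variables:
            # R = P(strength > stress) = Phi((mu_S - mu_s) / sqrt(sd_S^2 + sd_s^2))
            z = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
            return 0.5 * (1.0 + erf(z / sqrt(2.0)))

        # hypothetical numbers: reliability decreases as the stress-like damage parameter grows
        for stress_mean in (20.0, 35.0, 50.0):
            print(stress_mean, round(reliability(60.0, 8.0, stress_mean, 6.0), 4))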

  8. Quantitative versus qualitative modeling: a complementary approach in ecosystem study.

    Science.gov (United States)

    Bondavalli, C; Favilla, S; Bodini, A

    2009-02-01

    Natural disturbance or human perturbation acts upon ecosystems by changing some dynamical parameters of one or more species. Foreseeing these modifications is necessary before embarking on an intervention: predictions may help to assess management options and define hypotheses for interventions. Models become valuable tools for studying and making predictions only when they capture types of interactions and their magnitude. Quantitative models are more precise and specific about a system, but require a large effort in model construction. Because of this, ecological systems very often remain only partially specified, and one possible approach to their description and analysis comes from qualitative modelling. Qualitative models yield predictions as directions of change in species abundance, but in complex systems these predictions are often ambiguous, being the result of opposite actions exerted on the same species by way of multiple pathways of interactions. Again, to avoid such ambiguities one needs to know the intensity of all links in the system. One way to make link magnitude explicit in a way that can be used in qualitative analysis is described in this paper and takes advantage of another type of ecosystem representation: ecological flow networks. These flow diagrams contain the structure, the relative position and the connections between the components of a system, and the quantity of matter flowing along every connection. In this paper it is shown how these ecological flow networks can be used to produce a quantitative model similar to the qualitative counterpart. Analyzed through the apparatus of loop analysis, this quantitative model yields predictions that are by no means ambiguous, solving in an elegant way the basic problem of qualitative analysis. The approach adopted in this work is still preliminary and we must be careful in its application.

  9. QuantUM: Quantitative Safety Analysis of UML Models

    Directory of Open Access Journals (Sweden)

    Florian Leitner-Fischer

    2011-07-01

    Full Text Available When developing a safety-critical system it is essential to obtain an assessment of different design alternatives. In particular, an early safety assessment of the architectural design of a system is desirable. In spite of the plethora of available formal quantitative analysis methods, it is still difficult for software and system architects to integrate these techniques into their everyday work. This is mainly due to the lack of methods that can be directly applied to architecture-level models, for instance given as UML diagrams. Also, it is necessary that the description methods used do not require a profound knowledge of formal methods. Our approach bridges this gap and improves the integration of quantitative safety analysis methods into the development process. All inputs of the analysis are specified at the level of a UML model. This model is then automatically translated into the analysis model, and the results of the analysis are consequently represented on the level of the UML model. Thus the analysis model and the formal methods used during the analysis are hidden from the user. We illustrate the usefulness of our approach using an industrial-strength case study.

  10. Quantitative magnetospheric models derived from spacecraft magnetometer data

    Science.gov (United States)

    Mead, G. D.; Fairfield, D. H.

    1973-01-01

    Quantitative models of the external magnetospheric field were derived by making least-squares fits to magnetic field measurements from four IMP satellites. The data were fit to a power series expansion in the solar magnetic coordinates and the solar wind-dipole tilt angle, and thus the models contain the effects of seasonal north-south asymmetries. The expansions are divergence-free, but unlike the usual scalar potential expansions, the models contain a nonzero curl representing currents distributed within the magnetosphere. Characteristics of four models are presented, representing different degrees of magnetic disturbance as determined by the range of Kp values. The latitude at the earth separating open polar cap field lines from field lines closing on the dayside is about 5 deg lower than that determined by previous theoretically-derived models. At times of high Kp, additional high latitude field lines are drawn back into the tail.

  11. Quantitative inverse modelling of a cylindrical object in the laboratory using ERT: An error analysis

    Science.gov (United States)

    Korteland, Suze-Anne; Heimovaara, Timo

    2015-03-01

    Electrical resistivity tomography (ERT) is a geophysical technique that can be used to obtain three-dimensional images of the bulk electrical conductivity of the subsurface. Because the electrical conductivity is strongly related to properties of the subsurface and the flow of water it has become a valuable tool for visualization in many hydrogeological and environmental applications. In recent years, ERT is increasingly being used for quantitative characterization, which requires more detailed prior information than a conventional geophysical inversion for qualitative purposes. In addition, the careful interpretation of measurement and modelling errors is critical if ERT measurements are to be used in a quantitative way. This paper explores the quantitative determination of the electrical conductivity distribution of a cylindrical object placed in a water bath in a laboratory-scale tank. Because of the sharp conductivity contrast between the object and the water, a standard geophysical inversion using a smoothness constraint could not reproduce this target accurately. Better results were obtained by using the ERT measurements to constrain a model describing the geometry of the system. The posterior probability distributions of the parameters describing the geometry were estimated with the Markov chain Monte Carlo method DREAM(ZS). Using the ERT measurements this way, accurate estimates of the parameters could be obtained. The information quality of the measurements was assessed by a detailed analysis of the errors. Even for the uncomplicated laboratory setup used in this paper, errors in the modelling of the shape and position of the electrodes and the shape of the domain could be identified. The results indicate that the ERT measurements have a high information content which can be accessed by the inclusion of prior information and the consideration of measurement and modelling errors.
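
    A minimal sketch (Python/NumPy) of estimating geometry parameters from noisy measurements with Markov chain Monte Carlo; a plain random-walk Metropolis sampler and a toy forward model stand in here for DREAM(ZS) and the full ERT forward model, purely to illustrate the workflow.

        import numpy as np

        rng = np.random.default_rng(0)

        def forward(theta, xs):
            """Toy stand-in for the ERT forward model: a smooth response that depends
            on the object's position x0 and radius r."""
            x0, r = theta
            return r * np.exp(-((xs - x0) ** 2) / (2.0 * r ** 2))

        xs = np.linspace(-1.0, 1.0, 40)
        data = forward((0.2, 0.3), xs) + rng.normal(0.0, 0.01, xs.size)  # synthetic data

        def log_post(theta):
            if not (-1.0 < theta[0] < 1.0 and 0.05 < theta[1] < 1.0):    # uniform prior box
                return -np.inf
            resid = data - forward(theta, xs)
            return -0.5 * np.sum((resid / 0.01) ** 2)                    # Gaussian likelihood

        theta = np.array([0.0, 0.5])
        lp = log_post(theta)
        samples = []
        for _ in range(20000):
            prop = theta + rng.normal(0.0, 0.02, 2)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:                      # Metropolis accept step
                theta, lp = prop, lp_prop
            samples.append(theta)
        samples = np.array(samples[5000:])
        print(samples.mean(axis=0), samples.std(axis=0))                 # posterior mean and spread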

  12. Reproducibility study of [{sup 18}F]FPP(RGD){sub 2} uptake in murine models of human tumor xenografts

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Edwin; Liu, Shuangdong; Chin, Frederick; Cheng, Zhen [Stanford University, Molecular Imaging Program at Stanford, Department of Radiology, School of Medicine, Stanford, CA (United States); Gowrishankar, Gayatri; Yaghoubi, Shahriar [Stanford University, Molecular Imaging Program at Stanford, Department of Radiology, School of Medicine, Stanford, CA (United States); Stanford University, Molecular Imaging Program at Stanford, Department of Bioengineering, School of Medicine, Stanford, CA (United States); Wedgeworth, James Patrick [Stanford University, Molecular Imaging Program at Stanford, Department of Bioengineering, School of Medicine, Stanford, CA (United States); Berndorff, Dietmar; Gekeler, Volker [Bayer Schering Pharma AG, Global Drug Discovery, Berlin (Germany); Gambhir, Sanjiv S. [Stanford University, Molecular Imaging Program at Stanford, Department of Radiology, School of Medicine, Stanford, CA (United States); Stanford University, Molecular Imaging Program at Stanford, Department of Bioengineering, School of Medicine, Stanford, CA (United States); Canary Center at Stanford for Cancer Early Detection, Nuclear Medicine, Departments of Radiology and Bioengineering, Molecular Imaging Program at Stanford, Stanford, CA (United States)

    2011-04-15

    An 18F-labeled PEGylated arginine-glycine-aspartic acid (RGD) dimer [18F]FPP(RGD)2 has been used to image tumor αvβ3 integrin levels in preclinical and clinical studies. Serial positron emission tomography (PET) studies may be useful for monitoring antiangiogenic therapy response or for drug screening; however, the reproducibility of serial scans has not been determined for this PET probe. The purpose of this study was to determine the reproducibility of the integrin αvβ3-targeted PET probe [18F]FPP(RGD)2 using small animal PET. Human HCT116 colon cancer xenografts were implanted into nude mice (n = 12) in the breast and scapular region and grown to mean diameters of 5-15 mm for approximately 2.5 weeks. A 3-min acquisition was performed on a small animal PET scanner approximately 1 h after administration of [18F]FPP(RGD)2 (1.9-3.8 MBq, 50-100 μCi) via the tail vein. A second small animal PET scan was performed approximately 6 h later after reinjection of the probe to assess for reproducibility. Images were analyzed by drawing an ellipsoidal region of interest (ROI) around the tumor xenograft activity. Percentage injected dose per gram (%ID/g) values were calculated from the mean or maximum activity in the ROIs. Coefficients of variation and differences in %ID/g values between studies from the same day were calculated to determine the reproducibility. The coefficient of variation (mean ± SD) for %IDmean/g and %IDmax/g values between [18F]FPP(RGD)2 small animal PET scans performed 6 h apart on the same day were 11.1 ± 7.6% and 10.4 ± 9.3%, respectively. The corresponding differences in %IDmean/g and %IDmax/g values between scans were -0.025 ± 0.067 and -0.039 ± 0.426. Immunofluorescence studies revealed a direct relationship between extent of αvβ3 integrin expression in tumors and tumor vasculature
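
    A minimal sketch (Python/NumPy) of the reproducibility statistics reported above: the per-tumour coefficient of variation and difference in %ID/g between two same-day scans. The ROI values below are invented placeholders.

        import numpy as np

        # Hypothetical %ID/g values (mean ROI activity) for four xenografts,
        # scan 1 in the morning and scan 2 about 6 h later.
        scan1 = np.array([2.1, 1.8, 2.5, 3.0])
        scan2 = np.array([2.3, 1.7, 2.4, 3.3])

        pairs = np.stack([scan1, scan2])
        cov = pairs.std(axis=0, ddof=1) / pairs.mean(axis=0) * 100.0   # per-tumour CoV (%)
        diff = scan2 - scan1                                           # per-tumour difference

        print(f"CoV (mean ± SD): {cov.mean():.1f} ± {cov.std(ddof=1):.1f} %")
        print(f"difference (mean ± SD): {diff.mean():.3f} ± {diff.std(ddof=1):.3f} %ID/g")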

  13. Digital clocks: simple Boolean models can quantitatively describe circadian systems.

    Science.gov (United States)

    Akman, Ozgur E; Watterson, Steven; Parton, Andrew; Binns, Nigel; Millar, Andrew J; Ghazal, Peter

    2012-09-07

    The gene networks that comprise the circadian clock modulate biological function across a range of scales, from gene expression to performance and adaptive behaviour. The clock functions by generating endogenous rhythms that can be entrained to the external 24-h day-night cycle, enabling organisms to optimally time biochemical processes relative to dawn and dusk. In recent years, computational models based on differential equations have become useful tools for dissecting and quantifying the complex regulatory relationships underlying the clock's oscillatory dynamics. However, optimizing the large parameter sets characteristic of these models places intense demands on both computational and experimental resources, limiting the scope of in silico studies. Here, we develop an approach based on Boolean logic that dramatically reduces the parametrization, making the state and parameter spaces finite and tractable. We introduce efficient methods for fitting Boolean models to molecular data, successfully demonstrating their application to synthetic time courses generated by a number of established clock models, as well as experimental expression levels measured using luciferase imaging. Our results indicate that despite their relative simplicity, logic models can (i) simulate circadian oscillations with the correct, experimentally observed phase relationships among genes and (ii) flexibly entrain to light stimuli, reproducing the complex responses to variations in daylength generated by more detailed differential equation formulations. Our work also demonstrates that logic models have sufficient predictive power to identify optimal regulatory structures from experimental data. By presenting the first Boolean models of circadian circuits together with general techniques for their optimization, we hope to establish a new framework for the systematic modelling of more complex clocks, as well as other circuits with different qualitative dynamics. In particular, we anticipate
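
    A minimal sketch (Python) of the kind of synchronous Boolean update such models are built from: a toy two-gene negative-feedback loop with a light input. It only illustrates the update scheme; the models in the paper are fitted to molecular data and are considerably more detailed.

        def step(state, light):
            """One synchronous Boolean update of a toy two-gene loop:
            gene A is switched on by light or by the absence of its repressor B;
            gene B simply follows A with a one-step delay."""
            a, b = state
            return (light or not b, a)

        state = (False, False)
        trace = []
        for hour in range(48):
            light = (hour % 24) < 12                 # 12 h light : 12 h dark cycle
            state = step(state, light)
            trace.append((hour, int(light), int(state[0]), int(state[1])))

        for row in trace[:30]:
            print(row)                               # (hour, light, A, B)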

  14. A right to reproduce?

    Science.gov (United States)

    Quigley, Muireann

    2010-10-01

    How should we conceive of a right to reproduce? And, morally speaking, what might be said to justify such a right? These are just two questions of interest that are raised by the technologies of assisted reproduction. This paper analyses the possible legitimate grounds for a right to reproduce within the two main theories of rights; interest theory and choice theory.

  15. Magni Reproducibility Example

    DEFF Research Database (Denmark)

    2016-01-01

    An example of how to use the magni.reproducibility package for storing metadata along with results from a computational experiment. The example is based on simulating the Mandelbrot set.

  16. Long-term stability, reproducibility, and statistical sensitivity of a telemetry-instrumented dog model: A 27-month longitudinal assessment.

    Science.gov (United States)

    Fryer, Ryan M; Ng, Khing Jow; Chi, Liguo; Jin, Xidong; Reinhart, Glenn A

    2015-01-01

    ICH guidelines, as well as best-practice and ethical considerations, provide strong rationale for use of telemetry-instrumented dog colonies for cardiovascular safety assessment. However, few studies have investigated the long-term stability of cardiovascular function at baseline, reproducibility in response to pharmacologic challenge, and maintenance of statistical sensitivity to define the usable life of the colony. These questions were addressed in 3 identical studies spanning 27 months and were performed in the same colony of dogs. Telemetry-instrumented dogs (n=4) received a single dose of dl-sotalol (10 mg/kg, p.o.), a β1 adrenergic and IKr blocker, or vehicle, in 3 separate studies spanning 27 months. Systemic hemodynamics, cardiovascular function, and ECG parameters were monitored for 18 h post-dose; plasma drug concentrations (Cp) were measured at 1, 3, 5, and 24 h post-dose. Baseline hemodynamic/ECG values were consistent across the 27-month study with the exception of modest age-dependent decreases in heart rate and the corresponding QT-interval. dl-Sotalol elicited highly reproducible effects in each study. Reductions in heart rate after dl-sotalol treatment ranged between -22 and -32 beats/min, and slight differences in magnitude could be ascribed to variability in dl-sotalol Cp (range = 3230-5087 ng/mL); dl-sotalol also reduced LV dP/dtmax by 13-22%. dl-Sotalol increased the slope of the PR-RR relationship, suggesting inhibition of AV conduction. Increases in the heart-rate corrected QT-interval were not significantly different across the 3 studies, and results of a power analysis demonstrated that the detection limit for QTc values was not diminished throughout the 27-month period and across a range of power assumptions despite modest, age-dependent changes in heart rate. These results demonstrate the long-term stability of a telemetry dog colony as evidenced by a stability of baseline values, consistently reproducible response to pharmacologic challenge and no

  17. Measurement of cerebral blood flow by intravenous xenon-133 technique and a mobile system. Reproducibility using the Obrist model compared to total curve analysis

    DEFF Research Database (Denmark)

    Schroeder, T; Holstein, P; Lassen, N A

    1986-01-01

    and side-to-side asymmetry. Data were analysed according to the Obrist model and the results compared with those obtained using a model correcting for the air passage artifact. Reproducibility was of the same order of magnitude as reported using stationary equipment. The side-to-side CBF asymmetry...... differences, but in low flow situations the artifact model yielded significantly more stable results. The present apparatus, equipped with 3-5 detectors covering each hemisphere, offers the opportunity of performing serial CBF measurements in situations not otherwise feasible....

  18. Asynchronous adaptive time step in quantitative cellular automata modeling

    Directory of Open Access Journals (Sweden)

    Sun Yan

    2004-06-01

    Full Text Available Abstract Background The behaviors of cells in metazoans are context dependent, thus large-scale multi-cellular modeling is often necessary, for which cellular automata are natural candidates. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and upon it, how to solve the heavy time consumption issue in simulation. Results Based on a modified, language based cellular automata system we extended that allows ordinary differential equations in models, we introduce a method implementing asynchronous adaptive time step in simulation that can considerably improve efficiency yet without a significant sacrifice of accuracy. An average speedup rate of 4–5 is achieved in the given example. Conclusions Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. Distributed and adaptive time step is a practical solution in cellular automata environment.
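
    A minimal sketch (Python) of asynchronous, per-cell adaptive time stepping in the spirit described above: each cell integrates its own ODE with a step size adapted to its local rate of change, and the cell with the earliest pending update is always advanced first. This is a generic illustration, not the authors' language-based cellular automata system.

        import heapq, math

        def rhs(x):
            return -2.0 * x                                  # toy per-cell dynamics dx/dt = -2x

        def adaptive_dt(x, dt_min=1e-3, dt_max=0.2, tol=0.05):
            # Larger steps where the state changes slowly, smaller steps where it is fast.
            return max(dt_min, min(dt_max, tol / (abs(rhs(x)) + 1e-9)))

        t_end = 2.0
        cells = {i: 1.0 + 0.5 * i for i in range(4)}         # initial states of four cells
        events = [(0.0, i) for i in cells]                   # (next update time, cell id)
        heapq.heapify(events)

        while events:
            t, i = heapq.heappop(events)                     # advance only the earliest cell
            if t >= t_end:
                continue
            dt = min(adaptive_dt(cells[i]), t_end - t)       # each cell has its own step size
            cells[i] += dt * rhs(cells[i])                   # explicit Euler step
            heapq.heappush(events, (t + dt, i))

        print({i: round(x, 4) for i, x in cells.items()})
        print({i: round((1.0 + 0.5 * i) * math.exp(-2.0 * t_end), 4) for i in range(4)})  # exact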

  19. Minimum joint space width (mJSW) of patellofemoral joint on standing ''skyline'' radiographs: test-retest reproducibility and comparison with quantitative magnetic resonance imaging (qMRI)

    Energy Technology Data Exchange (ETDEWEB)

    Simoni, Paolo; Jamali, Sanaa; Alvarez Miezentseva, Victoria [CHU de Liege, Diagnostic Imaging Departement, Domanine du Sart Tilman, Liege (Belgium); Albert, Adelin [CHU de Liege, Biostatistics Departement, Domanine du Sart Tilman, Liege (Belgium); Totterman, Saara; Schreyer, Edward; Tamez-Pena, Jose G. [Qmetrics Technologies, Rochester, NY (United States); Zobel, Bruno Beomonte [Campus Bio-Medico University, Diagnostic Imaging Departement, Rome (Italy); Gillet, Philippe [CHU de Liege, Orthopaedic surgery Department, Domanine du Sart Tilman, Liege (Belgium)

    2013-11-15

    To assess the intraobserver, interobserver, and test-retest reproducibility of minimum joint space width (mJSW) measurement of medial and lateral patellofemoral joints on standing "skyline" radiographs and to compare the mJSW of the patellofemoral joint to the mean cartilage thickness calculated by quantitative magnetic resonance imaging (qMRI). A pair of standing "skyline" radiographs of the patellofemoral joints and MRI of 55 knees of 28 volunteers (18 females, ten males; mean age 48.5 ± 16.2 years) were obtained on the same day. The mJSW of the patellofemoral joint was manually measured and the Kellgren and Lawrence grade (KLG) was independently assessed by two observers. The mJSW was compared to the mean cartilage thickness of the patellofemoral joint calculated by qMRI. The mJSW of the medial and lateral patellofemoral joint showed excellent intraobserver agreement (intraclass correlation (ICC) = 0.94 and 0.96), interobserver agreement (ICC = 0.90 and 0.95) and test-retest agreement (ICC = 0.92 and 0.96). The mJSW measured on radiographs was correlated with the mean cartilage thickness calculated by qMRI (r = 0.71, p < 0.0001 for the medial PFJ and r = 0.81, p < 0.0001 for the lateral PFJ). However, there was a lack of concordance between radiographs and qMRI for extreme values of joint width and KLG. Radiographs yielded higher joint space measures than qMRI in knees with a normal joint space, while qMRI yielded higher joint space measures than radiographs in knees with joint space narrowing and higher KLG. Standing "skyline" radiographs are a reproducible tool for measuring the mJSW of the patellofemoral joint. The mJSW of the patellofemoral joint on radiographs is correlated with, but not concordant with, qMRI measurements. (orig.)

  20. Three-feature model to reproduce the topology of citation networks and the effects from authors' visibility on their h-index

    CERN Document Server

    Amancio, Diego R; Costa, Luciano da F; 10.1016/j.joi.2012.02.005

    2013-01-01

    Various factors are believed to govern the selection of references in citation networks, but a precise, quantitative determination of their importance has remained elusive. In this paper, we show that three factors can account for the referencing pattern of citation networks for two topics, namely "graphenes" and "complex networks", thus allowing one to reproduce the topological features of the networks built with papers being the nodes and the edges established by citations. The most relevant factor was content similarity, while the other two - in-degree (i.e. citation counts) and age of publication - had varying importance depending on the topic studied. This dependence indicates that additional factors could play a role. Indeed, by intuition one should expect the reputation (or visibility) of authors and/or institutions to affect the referencing pattern, and this is only indirectly considered via the in-degree that should correlate with such reputation. Because information on reputation is not readily avai...

  1. Model for Quantitative Evaluation of Enzyme Replacement Treatment

    Directory of Open Access Journals (Sweden)

    Radeva B.

    2009-12-01

    Full Text Available Gaucher disease is the most frequent lysosomal disorder. Its enzyme replacement treatment is an advance of modern biotechnology that has been used successfully in recent years. Evaluating the optimal dose for each patient is important for both health and economic reasons: enzyme replacement is the most expensive treatment and must be administered continuously and without interruption. Since 2001, enzyme replacement therapy with Cerezyme (Genzyme) has been formally available in Bulgaria, but after some time it was interrupted for 1-2 months and the doses given to patients were not optimal. The aim of our work is to find a mathematical model for quantitative evaluation of ERT in Gaucher disease. The model was implemented in the software package "Statistika 6" using the individual data of 5-year-old children with Gaucher disease treated with Cerezyme. The output of the model allows quantitative evaluation of the individual trends in the development of the disease of each child and their correlations. On the basis of these results, we can recommend suitable changes in ERT.

  2. Quantitative modeling and data analysis of SELEX experiments

    Science.gov (United States)

    Djordjevic, Marko; Sengupta, Anirvan M.

    2006-03-01

    SELEX (systematic evolution of ligands by exponential enrichment) is an experimental procedure that allows the extraction, from an initially random pool of DNA, of those oligomers with high affinity for a given DNA-binding protein. We address what is a suitable experimental and computational procedure to infer parameters of transcription factor-DNA interaction from SELEX experiments. To answer this, we use a biophysical model of transcription factor-DNA interactions to quantitatively model SELEX. We show that a standard procedure is unsuitable for obtaining accurate interaction parameters. However, we theoretically show that a modified experiment in which chemical potential is fixed through different rounds of the experiment allows robust generation of an appropriate dataset. Based on our quantitative model, we propose a novel bioinformatic method of data analysis for such a modified experiment and apply it to extract the interaction parameters for a mammalian transcription factor CTF/NFI. From a practical point of view, our method results in a significantly improved false positive/false negative trade-off, as compared to both the standard information theory based method and a widely used empirically formulated procedure.
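
    A minimal sketch (Python/NumPy) of the selection step underlying such biophysical SELEX models: each sequence is retained with a Boltzmann-type probability set by its binding energy and a chemical potential held fixed across rounds, and the pool composition is renormalized after amplification. The energies, pool size, and chemical potential below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical binding energies (units of kT) for a random oligomer pool;
        # lower energy means tighter binding to the transcription factor.
        energies = rng.normal(0.0, 2.0, size=100_000)
        freqs = np.full(energies.size, 1.0 / energies.size)

        mu = -3.0   # chemical potential held fixed across rounds (the modified protocol)

        for rnd in range(1, 6):
            p_bound = 1.0 / (1.0 + np.exp(energies - mu))   # Boltzmann/Fermi binding probability
            freqs = freqs * p_bound
            freqs /= freqs.sum()                            # amplification renormalizes the pool
            print(f"round {rnd}: mean pool binding energy = {np.sum(freqs * energies):.2f} kT")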

  3. Quantitative Methods in Supply Chain Management Models and Algorithms

    CERN Document Server

    Christou, Ioannis T

    2012-01-01

    Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...

  4. A Team Mental Model Perspective of Pre-Quantitative Risk

    Science.gov (United States)

    Cooper, Lynne P.

    2011-01-01

    This study was conducted to better understand how teams conceptualize risk before it can be quantified, and the processes by which a team forms a shared mental model of this pre-quantitative risk. Using an extreme case, this study analyzes seven months of team meeting transcripts, covering the entire lifetime of the team. Through an analysis of team discussions, a rich and varied structural model of risk emerges that goes significantly beyond classical representations of risk as the product of a negative consequence and a probability. In addition to those two fundamental components, the team conceptualization includes the ability to influence outcomes and probabilities, networks of goals, interaction effects, and qualitative judgments about the acceptability of risk, all affected by associated uncertainties. In moving from individual to team mental models, team members employ a number of strategies to gain group recognition of risks and to resolve or accept differences.

  5. Quantitative risk assessment modeling for nonhomogeneous urban road tunnels.

    Science.gov (United States)

    Meng, Qiang; Qu, Xiaobo; Wang, Xinchang; Yuanita, Vivi; Wong, Siew Chee

    2011-03-01

    Urban road tunnels provide an increasingly cost-effective engineering solution, especially in compact cities like Singapore. For some urban road tunnels, tunnel characteristics such as tunnel configurations, geometries, provisions of tunnel electrical and mechanical systems, traffic volumes, etc. may vary from one section to another. Urban road tunnels characterized by such nonuniform parameters are referred to as nonhomogeneous urban road tunnels. In this study, a novel quantitative risk assessment (QRA) model is proposed for nonhomogeneous urban road tunnels because the existing QRA models for road tunnels are inapplicable to assess the risks in these road tunnels. This model uses a tunnel segmentation principle whereby a nonhomogeneous urban road tunnel is divided into various homogenous sections. Individual risk for road tunnel sections as well as the integrated risk indices for the entire road tunnel are defined. The article then proceeds to develop a new QRA model for each of the homogeneous sections. Compared to the existing QRA models for road tunnels, this section-based model incorporates one additional top event (toxic gases due to traffic congestion) and employs the Poisson regression method to estimate the vehicle accident frequencies of tunnel sections. This article further illustrates an aggregated QRA model for nonhomogeneous urban tunnels by integrating the section-based QRA models. Finally, a case study in Singapore is carried out.
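
    A minimal sketch (Python, assuming the statsmodels package is available) of the Poisson regression step mentioned above for estimating vehicle accident frequencies of tunnel sections; the covariates, exposure definition, and simulated counts are invented placeholders.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)

        # Hypothetical data for 50 homogeneous tunnel sections: gradient (%),
        # annual vehicle-kilometres travelled (exposure), and accident counts.
        n = 50
        grade = rng.uniform(0.0, 4.0, n)
        aadt = rng.uniform(20_000, 80_000, n)              # average annual daily traffic
        exposure = aadt * 365 * 1.2                         # section length 1.2 km
        lam = np.exp(-16.0 + 0.15 * grade) * exposure       # "true" accident rate per veh-km
        counts = rng.poisson(lam)

        X = sm.add_constant(grade)
        model = sm.GLM(counts, X, family=sm.families.Poisson(), exposure=exposure)
        res = model.fit()
        print(res.params)            # log accident rate per vehicle-km and gradient effect
        print(res.fittedvalues[:5])  # expected accident counts for the first sections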

  6. Quantitative phase-field modeling for wetting phenomena.

    Science.gov (United States)

    Badillo, Arnoldo

    2015-03-01

    A new phase-field model is developed for studying partial wetting. The introduction of a third phase representing a solid wall allows for the derivation of a new surface tension force that accounts for energy changes at the contact line. In contrast to other multi-phase-field formulations, the present model does not need the introduction of surface energies for the fluid-wall interactions. Instead, all wetting properties are included in a unique parameter known as the equilibrium contact angle θ_eq. The model requires the solution of a single elliptic phase-field equation, which, coupled to conservation laws for mass and linear momentum, admits the existence of steady and unsteady compact solutions (compactons). The representation of the wall by an additional phase field allows for the study of wetting phenomena on flat, rough, or patterned surfaces in a straightforward manner. The model contains only two free parameters: a measure of interface thickness W, and β, which is used in the definition of the mixture viscosity μ = μ_l φ_l + μ_v φ_v + β μ_l φ_w. The former controls the convergence towards the sharp interface limit and the latter the energy dissipation at the contact line. Simulations on rough surfaces show that by taking values for β higher than 1, the model can reproduce, on average, the effects of pinning events of the contact line during its dynamic motion. The model is able to capture, in good agreement with experimental observations, many physical phenomena fundamental to wetting science, such as the wetting transition on micro-structured surfaces and droplet dynamics on solid substrates.

  7. Quantitative modeling of a gene's expression from its intergenic sequence.

    Directory of Open Access Journals (Sweden)

    Md Abul Hassan Samee

    2014-03-01

    Full Text Available Modeling a gene's expression from its intergenic locus and trans-regulatory context is a fundamental goal in computational biology. Owing to the distributed nature of cis-regulatory information and the poorly understood mechanisms that integrate such information, gene locus modeling is a more challenging task than modeling individual enhancers. Here we report the first quantitative model of a gene's expression pattern as a function of its locus. We model the expression readout of a locus in two tiers: (1) combinatorial regulation by transcription factors bound to each enhancer is predicted by a thermodynamics-based model, and (2) independent contributions from multiple enhancers are linearly combined to fit the gene expression pattern. The model does not require any prior knowledge about enhancers contributing toward a gene's expression. We demonstrate that the model captures the complex multi-domain expression patterns of anterior-posterior patterning genes in the early Drosophila embryo. Altogether, we model the expression patterns of 27 genes; these include several gap genes, pair-rule genes, and anterior, posterior, trunk, and terminal genes. We find that the model-selected enhancers for each gene overlap strongly with its experimentally characterized enhancers. Our findings also suggest the presence of sequence-segments in the locus that would contribute ectopic expression patterns and hence were "shut down" by the model. We applied our model to identify the transcription factors responsible for forming the stripe boundaries of the studied genes. The resulting network of regulatory interactions exhibits a high level of agreement with known regulatory influences on the target genes. Finally, we analyzed whether and why our assumption of enhancer independence was necessary for the genes we studied. We found a deterioration of expression when binding sites in one enhancer were allowed to influence the readout of another enhancer. Thus, interference
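
    A minimal sketch (Python/NumPy) of the two-tier idea: a simplified thermodynamics-style readout is computed for each enhancer from transcription factor concentrations and binding weights, and the per-enhancer readouts are combined linearly into the locus output. The logistic occupancy function, TF profiles, and weights are assumptions for illustration, not the paper's fitted model.

        import numpy as np

        def enhancer_readout(tf_conc, weights):
            """Simplified thermodynamic readout of one enhancer: a logistic function of
            the summed (concentration x binding-weight) contributions of the activators
            and repressors acting on that enhancer."""
            return 1.0 / (1.0 + np.exp(-np.dot(tf_conc, weights)))

        # Hypothetical anterior-posterior TF concentration profiles along the embryo
        x = np.linspace(0.0, 1.0, 50)
        tf = np.stack([np.exp(-5.0 * x),          # activator graded from the anterior
                       np.exp(-5.0 * (1.0 - x)),  # activator graded from the posterior
                       np.sin(np.pi * x)])        # broad repressor

        # Tier 1: per-enhancer readouts (each enhancer sees the TFs with its own weights)
        enh1 = np.array([enhancer_readout(tf[:, i], np.array([ 4.0, -1.0, -3.0])) for i in range(x.size)])
        enh2 = np.array([enhancer_readout(tf[:, i], np.array([-1.0,  4.0, -3.0])) for i in range(x.size)])

        # Tier 2: linear combination of independent enhancer contributions
        expression = 0.7 * enh1 + 0.5 * enh2
        print(expression.round(2))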

  8. Towards Reproducibility in Computational Hydrology

    Science.gov (United States)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei

    2016-04-01

    The ability to reproduce published scientific findings is a foundational principle of scientific research. Independent observation helps to verify the legitimacy of individual findings; build upon sound observations so that we can evolve hypotheses (and models) of how catchments function; and move them from specific circumstances to more general theory. The rise of computational research has brought increased focus on the issue of reproducibility across the broader scientific literature. This is because publications based on computational research typically do not contain sufficient information to enable the results to be reproduced, and therefore verified. Given the rise of computational analysis in hydrology over the past 30 years, to what extent is reproducibility, or a lack thereof, a problem in hydrology? Whilst much hydrological code is accessible, the actual code and workflow that produced, and therefore document the provenance of, published scientific findings are rarely available. We argue that in order to advance and make more robust the process of hypothesis testing and knowledge creation within the computational hydrological community, we need to build on existing open data initiatives and adopt common standards and infrastructures to: first, make code re-useable and easy to find through consistent use of metadata; second, publish well documented workflows that combine re-useable code together with data to enable published scientific findings to be reproduced; finally, use unique persistent identifiers (e.g. DOIs) to reference re-useable and reproducible code, thereby clearly showing the provenance of published scientific findings. Whilst extra effort is required to make work reproducible, there are benefits to both the individual and the broader community in doing so, which will improve the credibility of the science in the face of the need for societies to adapt to changing hydrological environments.

  9. Reproducible Research in Speech Sciences

    Directory of Open Access Journals (Sweden)

    Kálmán Abari

    2012-11-01

    Full Text Available Reproducible research is the minimum standard of scientific claims in cases when independent replication proves to be difficult. With the special combination of available software tools, we provide a reproducibility recipe for the experimental research conducted in some fields of speech sciences. We have based our model on the triad of the R environment, the EMU-format speech database, and the executable publication. We present the use of three typesetting systems (LaTeX, Markdown, Org), with the help of a mini research.

  10. Reproducibility in Seismic Imaging

    Directory of Open Access Journals (Sweden)

    González-Verdejo O.

    2012-04-01

    Full Text Available Within the field of exploration seismology, there is interest at national level of integrating reproducibility in applied, educational and research activities related to seismic processing and imaging. This reproducibility implies the description and organization of the elements involved in numerical experiments. Thus, a researcher, teacher or student can study, verify, repeat, and modify them independently. In this work, we document and adapt reproducibility in seismic processing and imaging to spread this concept and its benefits, and to encourage the use of open source software in this area within our academic and professional environment. We present an enhanced seismic imaging example, of interest in both academic and professional environments, using Mexican seismic data. As a result of this research, we prove that it is possible to assimilate, adapt and transfer technology at low cost, using open source software and following a reproducible research scheme.

  11. Modeling Error in Quantitative Macro-Comparative Research

    Directory of Open Access Journals (Sweden)

    Salvatore J. Babones

    2015-08-01

    Full Text Available Much quantitative macro-comparative research (QMCR) relies on a common set of published data sources to answer similar research questions using a limited number of statistical tools. Since all researchers have access to much the same data, one might expect quick convergence of opinion on most topics. In reality, of course, differences of opinion abound and persist. Many of these differences can be traced, implicitly or explicitly, to the different ways researchers choose to model error in their analyses. Much careful attention has been paid in the political science literature to the error structures characteristic of time series cross-sectional (TSCE) data, but much less attention has been paid to the modeling of error in broadly cross-national research involving large panels of countries observed at limited numbers of time points. Here, and especially in the sociology literature, multilevel modeling has become a hegemonic – but often poorly understood – research tool. I argue that widely-used types of multilevel models, commonly known as fixed effects models (FEMs) and random effects models (REMs), can produce wildly spurious results when applied to trended data due to mis-specification of error. I suggest that in most commonly-encountered scenarios, difference models are more appropriate for use in QMCR.

  12. A quantitative model for integrating landscape evolution and soil formation

    Science.gov (United States)

    Vanwalleghem, T.; Stockmann, U.; Minasny, B.; McBratney, Alex B.

    2013-06-01

    Landscape evolution is closely related to soil formation. Quantitative modeling of the dynamics of soils and landscapes should therefore be integrated. This paper presents a model, named Model for Integrated Landscape Evolution and Soil Development (MILESD), which describes the interaction between pedogenetic and geomorphic processes. This mechanistic model includes the most significant soil formation processes, ranging from weathering to clay translocation, and combines these with the lateral redistribution of soil particles through erosion and deposition. The model is spatially explicit and simulates the vertical variation in soil horizon depth as well as basic soil properties such as texture and organic matter content. In addition, sediment export and its properties are recorded. This model is applied to a 6.25 km2 area in the Werrikimbe National Park, Australia, simulating soil development over a period of 60,000 years. Comparison with field observations shows how the model accurately predicts trends in total soil thickness along a catena. Soil texture and bulk density are predicted reasonably well, with errors of the order of 10%; however, field observations show a much higher organic carbon content than predicted. At the landscape scale, different scenarios with varying erosion intensity result only in small changes of landscape-averaged soil thickness, while the response of the total organic carbon stored in the system is higher. Rates of sediment export show a highly nonlinear response to soil development stage and the presence of a threshold, corresponding to the depletion of the soil reservoir, beyond which sediment export drops significantly.

  13. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction

    Directory of Open Access Journals (Sweden)

    Cobbs Gary

    2012-08-01

    Full Text Available Abstract Background Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Results Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the
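
    A minimal sketch (Python/NumPy) of the stepwise bookkeeping such models rely on: the per-cycle amplification efficiency is a saturating function of the free primer concentration and falls as primer is consumed. The saturation constant and concentrations are invented; the paper's Models 1 and 2 replace this crude efficiency term with explicit annealing-equilibrium solutions.

        import numpy as np

        def qpcr_curve(target0, primer0=2e-7, K=5e-8, cycles=40):
            """Stepwise qPCR in molar units: the per-cycle efficiency saturates with
            the free primer concentration, and each new strand consumes one primer,
            so efficiency falls in late cycles and the curve plateaus."""
            target, primer = float(target0), float(primer0)
            curve = []
            for _ in range(cycles):
                efficiency = primer / (primer + K)   # crude stand-in for the annealing equilibrium
                new = efficiency * target
                target += new
                primer = max(primer - new, 0.0)
                curve.append(target)
            return np.array(curve)

        # Two reactions identical except for the initial target concentration.
        low, high = qpcr_curve(1e-15), qpcr_curve(1e-12)
        ct = lambda curve, thr=1e-9: int(np.argmax(curve > thr))   # threshold cycle
        print("Ct(low) =", ct(low), " Ct(high) =", ct(high))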

  14. Modeling logistic performance in quantitative microbial risk assessment.

    Science.gov (United States)

    Rijgersberg, Hajo; Tromp, Seth; Jacxsens, Liesbeth; Uyttendaele, Mieke

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain with its queues (storages, shelves) and mechanisms for ordering products is usually not taken into account. As a consequence, storage times-mutually dependent in successive steps in the chain-cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of (underlying) distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety.
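
    A minimal sketch (Python, assuming the simpy discrete-event simulation package is installed) of the kind of logistic sub-model described above: a periodic ordering rule replenishes a retail shelf, customers remove products at random times, and the recorded shelf times are exactly the storage-time distribution that would feed a QMRA growth model. The rates and rules are invented placeholders.

        import random
        import simpy

        random.seed(0)
        storage_times = []

        def consumer(env, shelf):
            """Customers draw the oldest product from the shelf (FIFO) at random times."""
            while True:
                yield env.timeout(random.expovariate(1.0))       # about one purchase per hour
                arrival_time = yield shelf.get()
                storage_times.append(env.now - arrival_time)     # time spent on the shelf

        def replenishment(env, shelf, batch=10, period=24.0):
            """Periodic ordering rule: every 'period' hours a batch is delivered."""
            while True:
                for _ in range(batch):
                    yield shelf.put(env.now)                     # store each item's arrival time
                yield env.timeout(period)

        env = simpy.Environment()
        shelf = simpy.Store(env)
        env.process(replenishment(env, shelf))
        env.process(consumer(env, shelf))
        env.run(until=24.0 * 30)                                  # simulate one month

        print(f"{len(storage_times)} items sold, "
              f"mean shelf time = {sum(storage_times)/len(storage_times):.1f} h, "
              f"max = {max(storage_times):.1f} h")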

  15. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    Science.gov (United States)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  16. Quantitative model studies for interfaces in organic electronic devices

    Science.gov (United States)

    Gottfried, J. Michael

    2016-11-01

    In organic light-emitting diodes and similar devices, organic semiconductors are typically contacted by metal electrodes. Because the resulting metal/organic interfaces have a large impact on the performance of these devices, their quantitative understanding is indispensable for the further rational development of organic electronics. A study by Kröger et al (2016 New J. Phys. 18 113022) of an important single-crystal based model interface provides detailed insight into its geometric and electronic structure and delivers valuable benchmark data for computational studies. In view of the differences between typical surface-science model systems and real devices, a ‘materials gap’ is identified that needs to be addressed by future research to make the knowledge obtained from fundamental studies even more beneficial for real-world applications.

  17. Quantitative identification of technological discontinuities using simulation modeling

    CERN Document Server

    Park, Hyunseok

    2016-01-01

    The aim of this paper is to develop and test metrics to quantitatively identify technological discontinuities in a knowledge network. We developed five metrics based on innovation theories and tested them on a simulation model-based knowledge network with a hypothetically designed discontinuity. The designed discontinuity is modeled as a node which combines two different knowledge streams and whose knowledge is dominantly persistent in the knowledge network. The performances of the proposed metrics were evaluated by how well the metrics can distinguish the designed discontinuity from other nodes on the knowledge network. The simulation results show that the persistence times the number of converging main paths provides the best performance in identifying the designed discontinuity: the designed discontinuity was identified as one of the top 3 patents with 96~99% probability by Metric 5 and it is, according to the size of a domain, 12~34% better than the performance of the second best metric. Beyond the simulation ...

  19. A Quantitative Theory Model of a Photobleaching Mechanism

    Institute of Scientific and Technical Information of China (English)

    陈同生; 曾绍群; 周炜; 骆清铭

    2003-01-01

    A photobleaching model, the D-P (dye-photon interaction) and D-O (dye-oxygen oxidative reaction) photobleaching theory, is proposed. The quantitative power dependences of photobleaching rates under both one- and two-photon excitation (1PE and TPE) are obtained. This photobleaching model can be used to account for our own and other experimental results. Experimental studies of the photobleaching rates of rhodamine B under TPE in unsaturated conditions reveal that the power dependence of the photobleaching rate increases with increasing dye concentration, and that the photobleaching rate of a single molecule increases with the second power of the excitation intensity, which differs from the high-order (>3) nonlinear dependence observed for ensemble molecules.

  20. Three-dimensional surgical modelling with an open-source software protocol: study of precision and reproducibility in mandibular reconstruction with the fibula free flap.

    Science.gov (United States)

    Ganry, L; Quilichini, J; Bandini, C M; Leyder, P; Hersant, B; Meningaud, J P

    2017-08-01

    Very few surgical teams currently use totally independent and free solutions to perform three-dimensional (3D) surgical modelling for osseous free flaps in reconstructive surgery. This study assessed the precision and technical reproducibility of a 3D surgical modelling protocol using free open-source software in mandibular reconstruction with fibula free flaps and surgical guides. Precision was assessed through comparisons of the 3D surgical guide to the sterilized 3D-printed guide, determining accuracy to the millimetre level. Reproducibility was assessed in three surgical cases by volumetric comparison to the millimetre level. For the 3D surgical modelling, a difference of less than 0.1 mm was observed. Almost no deformations were observed; the precision of the 3D modelling was between 0.1 mm and 0.4 mm, and the average precision of the complete reconstructed mandible was less than 1 mm. The open-source software protocol demonstrated high accuracy without complications. However, the precision of the surgical case depends on the surgeon's 3D surgical modelling. Therefore, surgeons need training on the use of this protocol before applying it to surgical cases; this constitutes a limitation. Further studies should address the transfer of expertise. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  1. Cocaine addiction related reproducible brain regions of abnormal default-mode network functional connectivity: a group ICA study with different model orders.

    Science.gov (United States)

    Ding, Xiaoyu; Lee, Seong-Whan

    2013-08-26

    Model order selection in group independent component analysis (ICA) has a significant effect on the obtained components. This study investigated the reproducible brain regions of abnormal default-mode network (DMN) functional connectivity related with cocaine addiction through different model order settings in group ICA. Resting-state fMRI data from 24 cocaine addicts and 24 healthy controls were temporally concatenated and processed by group ICA using model orders of 10, 20, 30, 40, and 50, respectively. For each model order, the group ICA approach was repeated 100 times using the ICASSO toolbox and after clustering the obtained components, centrotype-based anterior and posterior DMN components were selected for further analysis. Individual DMN components were obtained through back-reconstruction and converted to z-score maps. A whole brain mixed effects factorial ANOVA was performed to explore the differences in resting-state DMN functional connectivity between cocaine addicts and healthy controls. The hippocampus, which showed decreased functional connectivity in cocaine addicts for all the tested model orders, might be considered as a reproducible abnormal region in DMN associated with cocaine addiction. This finding suggests that using group ICA to examine the functional connectivity of the hippocampus in the resting-state DMN may provide an additional insight potentially relevant for cocaine-related diagnoses and treatments. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  2. An infinitesimal model for quantitative trait genomic value prediction.

    Directory of Open Access Journals (Sweden)

    Zhiqiu Hu

    Full Text Available We developed a marker based infinitesimal model for quantitative trait analysis. In contrast to the classical infinitesimal model, we now have new information about the segregation of every individual locus of the entire genome. Under this new model, we propose that the genetic effect of an individual locus is a function of the genome location (a continuous quantity). The overall genetic value of an individual is the weighted integral of the genetic effect function along the genome. Numerical integration is performed to find the integral, which requires partitioning the entire genome into a finite number of bins. Each bin may contain many markers. The integral is approximated by the weighted sum of all the bin effects. We now turn the problem of marker analysis into bin analysis so that the model dimension has decreased from a virtual infinity to a finite number of bins. This new approach can efficiently handle a virtually unlimited number of markers without marker selection. The marker based infinitesimal model requires high linkage disequilibrium of all markers within a bin. For populations with low or no linkage disequilibrium, we develop an adaptive infinitesimal model. Both the original and the adaptive models are tested using simulated data as well as beef cattle data. The simulated data analysis shows that there is always an optimal number of bins at which the predictability of the bin model is much greater than that of the original marker analysis. Results of the beef cattle data analysis indicate that the bin model can increase the predictability from 10% (multiple marker analysis) to 33% (multiple bin analysis). The marker based infinitesimal model paves a way towards the solution of genetic mapping and genomic selection using whole genome sequence data.
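
    A minimal sketch (Python/NumPy) of the bin idea on simulated data: markers within a bin are kept in high linkage disequilibrium, each bin is summarized by its mean genotype, and a ridge regression over bins predicts the phenotype. The simulation settings and the ridge penalty are illustrative assumptions, not the paper's estimation procedure.

        import numpy as np

        rng = np.random.default_rng(42)
        n_ind, n_bin, m_per_bin = 200, 50, 100

        # Simulate genotypes with high linkage disequilibrium inside each bin:
        # markers in a bin are noisy copies of a shared founder genotype (0/1/2).
        founders = rng.integers(0, 3, size=(n_ind, n_bin)).astype(float)
        geno = np.repeat(founders, m_per_bin, axis=1)
        mask = rng.random(geno.shape) < 0.05                    # 5% of genotypes "recombine"
        geno[mask] = rng.integers(0, 3, size=mask.sum())

        true_eff = rng.normal(0.0, 0.02, n_bin * m_per_bin)
        pheno = geno @ true_eff + rng.normal(0.0, 1.0, n_ind)

        # Bin analysis: one predictor per bin (the mean genotype of its markers).
        bins = geno.reshape(n_ind, n_bin, m_per_bin).mean(axis=2)
        X = np.column_stack([np.ones(n_ind), bins])

        train, test = slice(0, 150), slice(150, None)
        lam = 10.0                                              # ridge penalty
        beta = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(X.shape[1]),
                               X[train].T @ pheno[train])
        r = np.corrcoef(X[test] @ beta, pheno[test])[0, 1]
        print(f"predictive correlation of the bin model in the hold-out set: {r:.2f}")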

  3. Spatial-temporal reproducibility assessment of global seasonal forecasting system version 5 model for Dam Inflow forecasting

    Science.gov (United States)

    Moon, S.; Suh, A. S.; Soohee, H.

    2016-12-01

    The GloSea5 (Global Seasonal forecasting system version 5) is provided and operated by the KMA (Korea Meteorological Administration). GloSea5 provides forecast (FCST) and hindcast (HCST) data and its horizontal resolution is about 60 km (0.83° x 0.56°) in the mid-latitudes. In order to use these data in watershed-scale water management, GloSea5 needs spatial-temporal downscaling. As such, statistical downscaling was used to correct for systematic biases of variables and to improve data reliability. HCST data are provided in ensemble format, and the highest statistical correlation (R2 = 0.60, RMSE = 88.92, NSE = 0.57) of ensemble precipitation was reported for the Yongdam Dam watershed on the #6 grid. Additionally, the original GloSea5 (600.1 mm) showed the greatest difference (-26.5%) compared to observations (816.1 mm) during the summer flood season, whereas the downscaled GloSea5 showed only a -3.1% error. Most of the underestimation corresponded to precipitation levels during the flood season, and the downscaled GloSea5 showed an important restoration of precipitation levels. According to the analysis of spatial autocorrelation using seasonal Moran's I, the spatial distribution was shown to be statistically significant. These results can reduce the uncertainty of the original GloSea5 and substantiate its spatial-temporal accuracy and validity. The spatial-temporal reproducibility assessment will play a very important role as basic data for watershed-scale water management.

  4. The Proximal Medial Sural Nerve Biopsy Model: A Standardised and Reproducible Baseline Clinical Model for the Translational Evaluation of Bioengineered Nerve Guides

    Directory of Open Access Journals (Sweden)

    Ahmet Bozkurt

    2014-01-01

    Full Text Available Autologous nerve transplantation (ANT) is the clinical gold standard for the reconstruction of peripheral nerve defects. A large number of bioengineered nerve guides have been tested under laboratory conditions as an alternative to the ANT. The step from experimental studies to the implementation of the device in the clinical setting is often substantial and the outcome is unpredictable. This is mainly linked to the heterogeneity of clinical peripheral nerve injuries, which is very different from standardized animal studies. In search of a reproducible human model for the implantation of bioengineered nerve guides, we propose the reconstruction of sural nerve defects after routine nerve biopsy as a first or baseline study. Our concept uses the medial sural nerve of patients undergoing diagnostic nerve biopsy (≥2 cm). The biopsy-induced nerve gap was immediately reconstructed by implantation of the novel microstructured nerve guide, Neuromaix, as part of an ongoing first-in-human study. Here we present (i) a detailed list of inclusion and exclusion criteria, (ii) a detailed description of the surgical procedure, and (iii) a follow-up concept with multimodal sensory evaluation techniques. The proximal medial sural nerve biopsy model can serve as a preliminary or baseline nerve lesion model. In a subsequent step, newly developed nerve guides could be tested in more unpredictable and challenging clinical peripheral nerve lesions (e.g., following trauma), which have reduced comparability due to the different nature of the injuries (e.g., site of injury and length of nerve gap).

  5. Quantitative model of the growth of floodplains by vertical accretion

    Science.gov (United States)

    Moody, J.A.; Troutman, B.M.

    2000-01-01

    A simple one-dimensional model is developed to quantitatively predict the change in elevation, over a period of decades, for vertically accreting floodplains. This unsteady model approximates the monotonic growth of a floodplain as an incremental but constant increase of net sediment deposition per flood for those floods of a partial duration series that exceed a threshold discharge corresponding to the elevation of the floodplain. Sediment deposition from each flood increases the elevation of the floodplain and consequently the magnitude of the threshold discharge resulting in a decrease in the number of floods and growth rate of the floodplain. Floodplain growth curves predicted by this model are compared to empirical growth curves based on dendrochronology and to direct field measurements at five floodplain sites. The model was used to predict the value of net sediment deposition per flood which best fits (in a least squares sense) the empirical and field measurements; these values fall within the range of independent estimates of the net sediment deposition per flood based on empirical equations. These empirical equations permit the application of the model to estimate of floodplain growth for other floodplains throughout the world which do not have detailed data of sediment deposition during individual floods. Copyright (C) 2000 John Wiley and Sons, Ltd.
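
    A minimal sketch (Python/NumPy) of the verbal model: annual flood peaks are drawn from a distribution, every flood exceeding the discharge that corresponds to the current floodplain elevation deposits a fixed increment, and the rising threshold progressively slows growth. The rating curve, flood distribution, and deposition increment are invented placeholders.

        import numpy as np

        rng = np.random.default_rng(7)

        def threshold_discharge(elevation, a=50.0, b=1.5):
            """Hypothetical rating curve: discharge needed to overtop the floodplain."""
            return a * (1.0 + elevation) ** b

        dz_per_flood = 0.02      # net sediment deposition per overtopping flood (m)
        elevation = 0.0          # floodplain height above the initial surface (m)
        history = []

        for year in range(300):
            peaks = rng.gumbel(loc=60.0, scale=25.0, size=3)   # partial-duration flood series
            n_overtopping = int(np.sum(peaks > threshold_discharge(elevation)))
            elevation += dz_per_flood * n_overtopping
            history.append(elevation)

        print(f"elevation after  50 yr: {history[49]:.2f} m")
        print(f"elevation after 300 yr: {history[-1]:.2f} m "
              f"(growth slows as the threshold discharge rises)")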

  6. Quantitative Model for Estimating Soil Erosion Rates Using 137Cs

    Institute of Scientific and Technical Information of China (English)

    YANGHAO; GHANGQING; et al.

    1998-01-01

    A quantitative model was developed to relate the amount of 137Cs loss from the soil profile to the rate of soil erosion. Following the mass balance model, the depth distribution pattern of 137Cs in the soil profile, the radioactive decay of 137Cs, the sampling year, and the difference in 137Cs fallout amount among years were taken into consideration. By introducing typical depth distribution functions of 137Cs into the model, detailed equations of the model were obtained for different soils. The model shows that the rate of soil erosion is mainly controlled by the depth distribution pattern of 137Cs, the year of sampling, and the percentage reduction in total 137Cs. The relationship between the rate of soil loss and 137Cs depletion is neither linear nor logarithmic. The depth distribution pattern of 137Cs is a major factor for estimating the rate of soil loss, and the soil erosion rate is directly related to the fraction of 137Cs content near the soil surface. The influences of the radioactive decay of 137Cs, the sampling year, and the 137Cs input fraction are comparatively small.

  7. Goal relevance as a quantitative model of human task relevance.

    Science.gov (United States)

    Tanner, James; Itti, Laurent

    2017-03-01

    The concept of relevance is used ubiquitously in everyday life. However, a general quantitative definition of relevance has been lacking, especially as pertains to quantifying the relevance of sensory observations to one's goals. We propose a theoretical definition for the information value of data observations with respect to a goal, which we call "goal relevance." We consider the probability distribution of an agent's subjective beliefs over how a goal can be achieved. When new data are observed, their goal relevance is measured as the Kullback-Leibler divergence between belief distributions before and after the observation. Theoretical predictions about the relevance of different obstacles in simulated environments agreed with the majority response of 38 human participants in 83.5% of trials, beating multiple machine-learning models. Our new definition of goal relevance is general, quantitative, explicit, and allows one to put a number onto the previously elusive notion of relevance of observations to a goal. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
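
    The definition reduces to a one-line computation once the belief distributions are in hand. The following sketch is illustrative only; the base of the logarithm and the direction of the divergence are assumptions not fixed by the abstract.

```python
import numpy as np

def goal_relevance(prior, posterior):
    """Goal relevance of an observation as the KL divergence between belief
    distributions over ways of achieving a goal, D(posterior || prior)."""
    prior = np.asarray(prior, dtype=float)
    posterior = np.asarray(posterior, dtype=float)
    mask = posterior > 0          # terms with zero posterior probability contribute nothing
    return float(np.sum(posterior[mask] * np.log2(posterior[mask] / prior[mask])))

# Beliefs over three possible routes to a goal, before and after seeing an obstacle.
prior = [1 / 3, 1 / 3, 1 / 3]
posterior = [0.7, 0.25, 0.05]     # the observation makes route 1 much more likely
print(f"goal relevance: {goal_relevance(prior, posterior):.3f} bits")
```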

  8. Can four-zero-texture mass matrix model reproduce the quark and lepton mixing angles and CP violating phases?

    CERN Document Server

    Matsuda, K; Matsuda, Koichi; Nishiura, Hiroyuki

    2006-01-01

    We reconsider a universal mass matrix model which has a seesaw-invariant structure with four-zero texture common to all quarks and leptons. The CKM quark and MNS lepton mixing matrices of the model are analyzed analytically. We show that the model can be consistent with all the experimental data of neutrino oscillation and quark mixings by tuning the free parameters of the model. It is also shown that the model predicts a relatively large value for the (1,3) element of the MNS lepton mixing matrix, |(U_{MNS})_{13}|^2 \simeq 2.6 \times 10^{-2}. Using the seesaw mechanism, we also discuss the conditions on the components of the Dirac and the right-handed Majorana neutrino mass matrices which lead to a neutrino mass matrix consistent with the experimental data.

  9. A Quantitative Model to Estimate Drug Resistance in Pathogens

    Directory of Open Access Journals (Sweden)

    Frazier N. Baker

    2016-12-01

    Full Text Available Pneumocystis pneumonia (PCP) is an opportunistic infection that occurs in humans and other mammals with debilitated immune systems. These infections are caused by fungi in the genus Pneumocystis, which are not susceptible to standard antifungal agents. Despite decades of research and drug development, the primary treatment and prophylaxis for PCP remains a combination of trimethoprim (TMP) and sulfamethoxazole (SMX) that targets two enzymes in folic acid biosynthesis, dihydrofolate reductase (DHFR) and dihydropteroate synthase (DHPS), respectively. There is growing evidence of emerging resistance by Pneumocystis jirovecii (the species that infects humans) to TMP-SMX associated with mutations in the targeted enzymes. In the present study, we report the development of an accurate quantitative model to predict changes in the binding affinity of inhibitors (Ki, IC50) to the mutated proteins. The model is based on evolutionary information and amino acid covariance analysis. Predicted changes in binding affinity upon mutations highly correlate with the experimentally measured data. While trained on Pneumocystis jirovecii DHFR/TMP data, the model shows similar or better performance when evaluated on the resistance data for a different inhibitor of PjDHFR, another drug/target pair (PjDHPS/SMX) and another organism (Staphylococcus aureus DHFR/TMP). Therefore, we anticipate that the developed prediction model will be useful in the evaluation of possible resistance of the newly sequenced variants of the pathogen and can be extended to other drug targets and organisms.

  10. Quantitative Modeling of Human-Environment Interactions in Preindustrial Time

    Science.gov (United States)

    Sommer, Philipp S.; Kaplan, Jed O.

    2017-04-01

    Quantifying human-environment interactions and anthropogenic influences on the environment prior to the Industrial Revolution is essential for understanding the current state of the earth system. This is particularly true for the terrestrial biosphere, but marine ecosystems and even climate were likely modified by human activities centuries to millennia ago. Direct observations are however very sparse in space and time, especially as one considers prehistory. Numerical models are therefore essential to produce a continuous picture of human-environment interactions in the past. Agent-based approaches, while widely applied to quantifying human influence on the environment in localized studies, are unsuitable for global spatial domains and Holocene timescales because of computational demands and large parameter uncertainty. Here we outline a new paradigm for the quantitative modeling of human-environment interactions in preindustrial time that is adapted to the global Holocene. Rather than attempting to simulate agency directly, the model is informed by a suite of characteristics describing those things about society that cannot be predicted on the basis of environment, e.g., diet, presence of agriculture, or range of animals exploited. These categorical data are combined with the properties of the physical environment in a coupled human-environment model. The model is, at its core, a dynamic global vegetation model with a module for simulating crop growth that is adapted for preindustrial agriculture. This allows us to simulate yield and calories for feeding both humans and their domesticated animals. We couple this basic caloric availability with a simple demographic model to calculate potential population, and, constrained by labor requirements and land limitations, we create scenarios of land use and land cover on a moderate-resolution grid. We further implement a feedback loop where anthropogenic activities lead to changes in the properties of the physical

  11. Assessment of a numerical model to reproduce event‐scale erosion and deposition distributions in a braided river

    Science.gov (United States)

    Measures, R.; Hicks, D. M.; Brasington, J.

    2016-01-01

    Abstract Numerical morphological modeling of braided rivers, using a physics‐based approach, is increasingly used as a technique to explore controls on river pattern and, from an applied perspective, to simulate the impact of channel modifications. This paper assesses a depth‐averaged nonuniform sediment model (Delft3D) to predict the morphodynamics of a 2.5 km long reach of the braided Rees River, New Zealand, during a single high‐flow event. Evaluation of model performance primarily focused upon using high‐resolution Digital Elevation Models (DEMs) of Difference, derived from a fusion of terrestrial laser scanning and optical empirical bathymetric mapping, to compare observed and predicted patterns of erosion and deposition and reach‐scale sediment budgets. For the calibrated model, this was supplemented with planform metrics (e.g., braiding intensity). Extensive sensitivity analysis of model functions and parameters was executed, including consideration of numerical scheme for bed load component calculations, hydraulics, bed composition, bed load transport and bed slope effects, bank erosion, and frequency of calculations. Total predicted volumes of erosion and deposition corresponded well to those observed. The difference between predicted and observed volumes of erosion was less than the factor of two that characterizes the accuracy of the Gaeuman et al. bed load transport formula. Grain size distributions were best represented using two φ intervals. For unsteady flows, results were sensitive to the morphological time scale factor. The approach of comparing observed and predicted morphological sediment budgets shows the value of using natural experiment data sets for model testing. Sensitivity results are transferable to guide Delft3D applications to other rivers. PMID:27708477

  12. Assessment of a numerical model to reproduce event-scale erosion and deposition distributions in a braided river.

    Science.gov (United States)

    Williams, R D; Measures, R; Hicks, D M; Brasington, J

    2016-08-01

    Numerical morphological modeling of braided rivers, using a physics-based approach, is increasingly used as a technique to explore controls on river pattern and, from an applied perspective, to simulate the impact of channel modifications. This paper assesses a depth-averaged nonuniform sediment model (Delft3D) to predict the morphodynamics of a 2.5 km long reach of the braided Rees River, New Zealand, during a single high-flow event. Evaluation of model performance primarily focused upon using high-resolution Digital Elevation Models (DEMs) of Difference, derived from a fusion of terrestrial laser scanning and optical empirical bathymetric mapping, to compare observed and predicted patterns of erosion and deposition and reach-scale sediment budgets. For the calibrated model, this was supplemented with planform metrics (e.g., braiding intensity). Extensive sensitivity analysis of model functions and parameters was executed, including consideration of numerical scheme for bed load component calculations, hydraulics, bed composition, bed load transport and bed slope effects, bank erosion, and frequency of calculations. Total predicted volumes of erosion and deposition corresponded well to those observed. The difference between predicted and observed volumes of erosion was less than the factor of two that characterizes the accuracy of the Gaeuman et al. bed load transport formula. Grain size distributions were best represented using two φ intervals. For unsteady flows, results were sensitive to the morphological time scale factor. The approach of comparing observed and predicted morphological sediment budgets shows the value of using natural experiment data sets for model testing. Sensitivity results are transferable to guide Delft3D applications to other rivers.
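
    The evaluation strategy above rests on DEMs of Difference and morphological sediment budgets. The sketch below is a generic illustration with hypothetical grids and a made-up detection threshold, not the authors' workflow; it shows how erosion and deposition volumes and the net change can be derived from pre- and post-event elevation grids.

```python
import numpy as np

def dod_budget(dem_pre, dem_post, cell_area=1.0, threshold=0.1):
    """Morphological sediment budget from a DEM of Difference (DoD).

    dem_pre, dem_post -- 2-D elevation arrays (m) on the same grid
    cell_area         -- grid-cell area (m2)
    threshold         -- minimum detectable change (m), to suppress survey noise
    Returns (erosion_volume, deposition_volume, net_change) in m3.
    """
    dod = dem_post - dem_pre
    dod = np.where(np.abs(dod) < threshold, 0.0, dod)   # mask changes below the detection limit
    deposition = dod[dod > 0].sum() * cell_area
    erosion = -dod[dod < 0].sum() * cell_area
    return erosion, deposition, deposition - erosion

# toy 3x3 example with 2 m x 2 m cells
pre = np.array([[1.0, 1.2, 1.1], [0.9, 1.0, 1.3], [1.1, 1.0, 0.8]])
post = np.array([[0.8, 1.2, 1.4], [0.9, 1.3, 1.3], [1.1, 0.7, 0.8]])
print(dod_budget(pre, post, cell_area=4.0))
```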

  13. Mechanics of neutrophil phagocytosis: experiments and quantitative models.

    Science.gov (United States)

    Herant, Marc; Heinrich, Volkmar; Dembo, Micah

    2006-05-01

    To quantitatively characterize the mechanical processes that drive phagocytosis, we observed the FcγR-driven engulfment of antibody-coated beads of diameters 3 μm to 11 μm by initially spherical neutrophils. In particular, the time course of cell morphology, of bead motion and of cortical tension were determined. Here, we introduce a number of mechanistic models for phagocytosis and test their validity by comparing the experimental data with finite element computations for multiple bead sizes. We find that the optimal models involve two key mechanical interactions: a repulsion or pressure between cytoskeleton and free membrane that drives protrusion, and an attraction between cytoskeleton and membrane newly adherent to the bead that flattens the cell into a thin lamella. Other models such as cytoskeletal expansion or swelling appear to be ruled out as main drivers of phagocytosis because of the characteristics of bead motion during engulfment. We finally show that the protrusive force necessary for the engulfment of large beads points towards storage of strain energy in the cytoskeleton over a large distance from the leading edge (approximately 0.5 μm), and that the flattening force can plausibly be generated by the known concentrations of unconventional myosins at the leading edge.

  14. Attempting to train a digital human model to reproduce human subject reach capabilities in an ejection seat aircraft

    NARCIS (Netherlands)

    Zehner, G.F.; Hudson, J.A.; Oudenhuijzen, A.

    2006-01-01

    From 1997 through 2002, the Air Force Research Lab and TNO Defence, Security and Safety (Business Unit Human Factors) were involved in a series of tests to quantify the accuracy of five Human Modeling Systems (HMSs) in determining accommodation limits of ejection seat aircraft. The results of these

  15. How well can a convection-permitting climate model reproduce decadal statistics of precipitation, temperature and cloud characteristics?

    Science.gov (United States)

    Brisson, Erwan; Van Weverberg, Kwinten; Demuzere, Matthias; Devis, Annemarie; Saeed, Sajjad; Stengel, Martin; van Lipzig, Nicole P. M.

    2016-11-01

    Convection-permitting climate models are promising tools for improved representation of extremes, but the number of regions for which these models have been evaluated is still rather limited for drawing robust conclusions. In addition, an integrated interpretation of near-surface characteristics (typically temperature and precipitation) together with cloud properties is limited. The objective of this paper is to comprehensively evaluate the performance of a `state-of-the-art' regional convection-permitting climate model for a mid-latitude coastal region with little orographic forcing. For this purpose, an 11-year integration with the COSMO-CLM model at Convection-Permitting Scale (CPS) using a grid spacing of 2.8 km was compared with in-situ and satellite-based observations of precipitation, temperature, cloud properties and radiation (both at the surface and the top of the atmosphere). CPS clearly improves the representation of precipitation, especially the diurnal cycle, intensity and spatial distribution of hourly precipitation. Improvements in the representation of temperature are less obvious. In fact the CPS integration overestimates both low and high temperature extremes. The underlying cause for the overestimation of high temperature extremes was attributed to deficiencies in the cloud properties: the modelled cloud fraction is only 46 % whereas a cloud fraction of 65 % was observed. Surprisingly, the effect of this deficiency was less pronounced in the radiation balance at the top of the atmosphere due to a compensating error, in particular an overestimation of the reflectivity of clouds when they are present. Overall, a better representation of convective precipitation and a very good representation of the daily cycle in different cloud types were demonstrated. However, to overcome remaining deficiencies, additional efforts are necessary to improve cloud characteristics in CPS. This will be a challenging task due to compensating deficiencies that currently

  16. Quantitative Modeling of the Alternative Pathway of the Complement System.

    Science.gov (United States)

    Zewde, Nehemiah; Gorham, Ronald D; Dorado, Angel; Morikis, Dimitrios

    2016-01-01

    The complement system is an integral part of innate immunity that detects and eliminates invading pathogens through a cascade of reactions. The destructive effects of the complement activation on host cells are inhibited through versatile regulators that are present in plasma and bound to membranes. Impairment in the capacity of these regulators to function in the proper manner results in autoimmune diseases. To better understand the delicate balance between complement activation and regulation, we have developed a comprehensive quantitative model of the alternative pathway. Our model incorporates a system of ordinary differential equations that describes the dynamics of the four steps of the alternative pathway under physiological conditions: (i) initiation (fluid phase), (ii) amplification (surfaces), (iii) termination (pathogen), and (iv) regulation (host cell and fluid phase). We have examined complement activation and regulation on different surfaces, using the cellular dimensions of a characteristic bacterium (E. coli) and host cell (human erythrocyte). In addition, we have incorporated neutrophil-secreted properdin into the model highlighting the cross talk of neutrophils with the alternative pathway in coordinating innate immunity. Our study yields a series of time-dependent response data for all alternative pathway proteins, fragments, and complexes. We demonstrate the robustness of alternative pathway on the surface of pathogens in which complement components were able to saturate the entire region in about 54 minutes, while occupying less than one percent on host cells at the same time period. Our model reveals that tight regulation of complement starts in fluid phase in which propagation of the alternative pathway was inhibited through the dismantlement of fluid phase convertases. Our model also depicts the intricate role that properdin released from neutrophils plays in initiating and propagating the alternative pathway during bacterial infection.
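
    A model of this kind is, at its core, a system of ordinary differential equations for the concentrations of complement proteins, fragments, and complexes. The toy sketch below uses three lumped species and invented rate constants, not the authors' reaction network; it only illustrates the basic structure of convertase-driven cleavage, positive feedback through deposited C3b, and a decay term standing in for regulation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative rate constants (1/min); not values from the published model
k_cat, k_form, k_decay = 0.5, 0.2, 0.1

def rhs(t, y):
    c3, c3b, conv = y
    cleave = k_cat * conv * c3        # convertase-driven cleavage of C3 into C3b
    form = k_form * c3b               # new convertase formed from deposited C3b
    decay = k_decay * conv            # regulator-mediated convertase dismantlement
    return [-cleave, cleave - form, form - decay]

sol = solve_ivp(rhs, (0.0, 60.0), y0=[1.0, 0.0, 0.01], dense_output=True)
for ti in np.linspace(0.0, 60.0, 7):
    c3, c3b, conv = sol.sol(ti)
    print(f"t={ti:5.1f} min  C3={c3:.3f}  C3b={c3b:.3f}  convertase={conv:.3f}")
```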

  17. A two-stage unsupervised learning algorithm reproduces multisensory enhancement in a neural network model of the corticotectal system.

    Science.gov (United States)

    Anastasio, Thomas J; Patton, Paul E

    2003-07-30

    Multisensory enhancement (MSE) is the augmentation of the response to sensory stimulation of one modality by stimulation of a different modality. It has been described for multisensory neurons in the deep superior colliculus (DSC) of mammals, which function to detect, and direct orienting movements toward, the sources of stimulation (targets). MSE would seem to improve the ability of DSC neurons to detect targets, but many mammalian DSC neurons are unimodal. MSE requires descending input to DSC from certain regions of parietal cortex. Paradoxically, the descending projections necessary for MSE originate from unimodal cortical neurons. MSE, and the puzzling findings associated with it, can be simulated using a model of the corticotectal system. In the model, a network of DSC units receives primary sensory input that can be augmented by modulatory cortical input. Connection weights from primary and modulatory inputs are trained in stages one (Hebb) and two (Hebb-anti-Hebb), respectively, of an unsupervised two-stage algorithm. Two-stage training causes DSC units to extract information concerning simulated targets from their inputs. It also causes the DSC to develop a mixture of unimodal and multisensory units. The percentage of DSC multisensory units is determined by the proportion of cross-modal targets and by primary input ambiguity. Multisensory DSC units develop MSE, which depends on unimodal modulatory connections. Removal of the modulatory influence greatly reduces MSE but has little effect on DSC unit responses to stimuli of a single modality. The correspondence between model and data suggests that two-stage training captures important features of self-organization in the real corticotectal system.
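
    A minimal numerical sketch of the two-stage idea is given below; it is a generic Hebbian/anti-Hebbian illustration with random inputs and invented dimensions, not the published implementation. Stage one trains the primary-input weights with a normalized Hebbian rule, and stage two trains the modulatory weights using the already-trained primary response as the postsynaptic signal, with a decay term acting as the anti-Hebbian component.

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_units, n_samples, lr = 4, 3, 2000, 0.01
X = rng.random((n_samples, n_in))              # simulated primary sensory inputs
M = rng.random((n_samples, n_in))              # simulated modulatory (cortical) inputs

W = rng.normal(scale=0.1, size=(n_units, n_in))   # primary weights
V = np.zeros((n_units, n_in))                     # modulatory weights

# Stage 1: plain Hebb with row-wise weight normalization to keep weights bounded
for x in X:
    y = W @ x
    W += lr * np.outer(y, x)
    W /= np.linalg.norm(W, axis=1, keepdims=True)

# Stage 2: Hebbian growth with co-activity, anti-Hebbian decay otherwise
for x, m in zip(X, M):
    y = W @ x                                    # postsynaptic drive from trained primary input
    V += lr * (np.outer(y, m) - y[:, None] * V)  # grows with co-activity, decays otherwise

print("primary weights:\n", np.round(W, 2))
print("modulatory weights:\n", np.round(V, 2))
```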

  18. A quantitative model of technological catch-up

    Directory of Open Access Journals (Sweden)

    Hossein Gholizadeh

    2015-02-01

    Full Text Available This article presents a quantitative model for the analysis of the technological gap. The rates of development of technological leaders and followers in nanotechnology are expressed in terms of coupled equations. On the basis of this model, the first step studies the comparative technological gap and its rate of change, so that the dynamics of the gap between leader and follower can be calculated. In the second step, we estimate the technology gap using the metafrontier approach and then test the relationship between the technology gap and the quality of the catch-up dimensions identified in the previous step. The usefulness of this approach is then demonstrated in an analysis of the technological gap in nanotechnology between Iran, the leader in the Middle East, and the world. We present the behaviors of the technological leader and followers. Finally, Iran's position is analyzed, the most effective dimensions of catch-up are identified, and suggestions are offered that could serve as a foundation for Iran's long-term policies.
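
    The article's coupled equations are not spelled out in the abstract, so the sketch below uses a generic catch-up formulation (logistic growth for the leader plus a gap-driven absorption term for the follower) purely to illustrate how such coupled rates let one track the dynamics of the gap over time; all symbols and parameter values are assumptions.

```python
# Leader L(t) grows logistically toward a capacity K; the follower F(t) grows
# logistically as well, boosted by the gap (L - F) through an absorption rate b.
a_L, a_F, b, K = 0.06, 0.04, 0.03, 100.0   # illustrative rates and capacity
L, F = 10.0, 1.0                            # initial technology levels
dt, years = 1.0, 50

for year in range(years + 1):
    if year % 10 == 0:
        print(f"year {year:2d}: leader={L:6.1f}  follower={F:6.1f}  gap={L - F:6.1f}")
    dL = a_L * L * (1 - L / K)
    dF = a_F * F * (1 - F / K) + b * (L - F) * F / K
    L, F = L + dL * dt, F + dF * dt
```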

  19. A quantitative model for assessing community dynamics of pleistocene mammals.

    Science.gov (United States)

    Lyons, S Kathleen

    2005-06-01

    Previous studies have suggested that species responded individualistically to the climate change of the last glaciation, expanding and contracting their ranges independently. Consequently, many researchers have concluded that community composition is plastic over time. Here I quantitatively assess changes in community composition over broad timescales and assess the effect of range shifts on community composition. Data on Pleistocene mammal assemblages from the FAUNMAP database were divided into four time periods (preglacial, full glacial, postglacial, and modern). Simulation analyses were designed to determine whether the degree of change in community composition is consistent with independent range shifts, given the distribution of range shifts observed. Results indicate that many of the communities examined in the United States were more similar through time than expected if individual range shifts were completely independent. However, in each time transition examined, there were areas of nonanalogue communities. I conducted sensitivity analyses to explore how the results were affected by the assumptions of the null model. Conclusions about changes in mammalian distributions and community composition are robust with respect to the assumptions of the model. Thus, whether because of biotic interactions or because of common environmental requirements, community structure through time is more complex than previously thought.

  20. A first attempt to reproduce basaltic soil chronosequences using a process-based soil profile model: implications for our understanding of soil evolution

    Science.gov (United States)

    Johnson, M.; Gloor, M.; Lloyd, J.

    2012-04-01

    Soils are complex systems which hold a wealth of information on both current and past conditions and many biogeochemical processes. The ability to model soil forming processes and predict soil properties will enable us to quantify such conditions and contribute to our understanding of long-term biogeochemical cycles, particularly the carbon cycle and plant nutrient cycles. However, attempts to confront such soil model predictions with data are rare, although increasingly more data from chronosequence studies are becoming available for such a purpose. Here we present initial results of an attempt to reproduce soil properties with a process-based soil evolution model similar to the model of Kirkby (1985, J. Soil Science). We specifically focus on the basaltic soils in both Hawaii and north Queensland, Australia. These soils are formed on a series of volcanic lava flows which provide sequences of different-aged soils, all with a relatively uniform parent material. These soil chronosequences provide a snapshot of a soil profile during different stages of development. Steep rainfall gradients in these regions also provide a system which allows us to test the model's ability to reproduce soil properties under differing climates. The mechanistic soil evolution model presented here includes the major processes of soil formation such as i) mineral weathering, ii) percolation of rainfall through the soil, iii) leaching of solutes out of the soil profile, iv) surface erosion and v) vegetation and biotic interactions. The model consists of a vertical profile and assumes simple geometry with a constantly sloping surface. The timescales of interest are on the order of tens to hundreds of thousands of years. The specific properties the model predicts are soil depth, the proportion of original elemental oxides remaining in each soil layer, pH of the soil solution, organic carbon distribution and CO2 production and concentration. The presentation will focus on a brief introduction of the

  1. Derivation of a quantitative minimal model from a detailed elementary-step mechanism supported by mathematical coupling analysis

    Science.gov (United States)

    Shaik, O. S.; Kammerer, J.; Gorecki, J.; Lebiedz, D.

    2005-12-01

    Accurate experimental data increasingly allow the development of detailed elementary-step mechanisms for complex chemical and biochemical reaction systems. Model reduction techniques are widely applied to obtain representations in lower-dimensional phase space which are more suitable for mathematical analysis, efficient numerical simulation, and model-based control tasks. Here, we exploit a recently implemented numerical algorithm for error-controlled computation of the minimum dimension required for a still accurate reduced mechanism based on automatic time scale decomposition and relaxation of fast modes. We determine species contributions to the active (slow) dynamical modes of the reaction system and exploit this information in combination with quasi-steady-state and partial-equilibrium approximations for explicit model reduction of a novel detailed chemical mechanism for the Ru-catalyzed light-sensitive Belousov-Zhabotinsky reaction. The existence of a minimum dimension of seven is demonstrated to be mandatory for the reduced model to show good quantitative consistency with the full model in numerical simulations. We derive such a maximally reduced seven-variable model from the detailed elementary-step mechanism and demonstrate that it reproduces quantitatively accurately the dynamical features of the full model within a given accuracy tolerance.

  2. Quantitative property-structural relation modeling on polymeric dielectric materials

    Science.gov (United States)

    Wu, Ke

    Nowadays, polymeric materials have attracted more and more attention in dielectric applications. But searching for a material with desired properties is still largely based on trial and error. To facilitate the development of new polymeric materials, heuristic models built using the Quantitative Structure Property Relationships (QSPR) techniques can provide reliable "working solutions". In this thesis, the application of QSPR on polymeric materials is studied from two angles: descriptors and algorithms. A novel set of descriptors, called infinite chain descriptors (ICD), are developed to encode the chemical features of pure polymers. ICD is designed to eliminate the uncertainty of polymer conformations and inconsistency of molecular representation of polymers. Models for the dielectric constant, band gap, dielectric loss tangent and glass transition temperatures of organic polymers are built with high prediction accuracy. Two new algorithms, the physics-enlightened learning method (PELM) and multi-mechanism detection, are designed to deal with two typical challenges in material QSPR. PELM is a meta-algorithm that utilizes the classic physical theory as guidance to construct the candidate learning function. It shows better out-of-domain prediction accuracy compared to the classic machine learning algorithm (support vector machine). Multi-mechanism detection is built based on a cluster-weighted mixing model similar to a Gaussian mixture model. The idea is to separate the data into subsets where each subset can be modeled by a much simpler model. The case study on glass transition temperature shows that this method can provide better overall prediction accuracy even though less data is available for each subset model. In addition, the techniques developed in this work are also applied to polymer nanocomposites (PNC). PNC are new materials with outstanding dielectric properties. As a key factor in determining the dispersion state of nanoparticles in the polymer matrix

  3. Decadal Variability Shown by the Arctic Ocean Hydrochemical Data and Reproduced by an Ice-Ocean Model

    Institute of Scientific and Technical Information of China (English)

    M. Ikeda; R. Colony; H. Yamaguchi; T. Ikeda

    2005-01-01

    The Arctic is experiencing a significant warming trend as well as a decadal oscillation. The atmospheric circulation represented by the Polar Vortex and the sea ice cover show decadal variabilities, while it has been difficult to reveal the decadal oscillation from the ocean interior. The recent distribution of Russian hydrochemical data collected from the Arctic Basin provides useful information on ocean interior variabilities. Silicate is used to provide the most valuable data for showing the boundary between the silicate-rich Pacific Water and the opposite Atlantic Water. Here, it is assumed that the silicate distribution receives minor influence from seasonal biological productivity and Siberian Rivers outflow. It shows a clear maximum around 100m depth in the Canada Basin, along with a vertical gradient below 100 m, which provides information on the vertical motion of the upper boundary of the Atlantic Water at a decadal time scale. The boundary shifts upward (downward), as realized by the silicate reduction (increase) at a fixed depth, responding to a more intense (weaker) Polar Vortex or a positive (negative) phase of the Arctic Oscillation. A coupled ice-ocean model is employed to reconstruct this decadal oscillation.

  4. Reproducibility of haemodynamical simulations in a subject-specific stented aneurysm model--a report on the Virtual Intracranial Stenting Challenge 2007.

    Science.gov (United States)

    Radaelli, A G; Augsburger, L; Cebral, J R; Ohta, M; Rüfenacht, D A; Balossino, R; Benndorf, G; Hose, D R; Marzo, A; Metcalfe, R; Mortier, P; Mut, F; Reymond, P; Socci, L; Verhegghe, B; Frangi, A F

    2008-07-19

    This paper presents the results of the Virtual Intracranial Stenting Challenge (VISC) 2007, an international initiative whose aim was to establish the reproducibility of state-of-the-art haemodynamical simulation techniques in subject-specific stented models of intracranial aneurysms (IAs). IAs are pathological dilatations of the cerebral artery walls, which are associated with high mortality and morbidity rates due to subarachnoid haemorrhage following rupture. The deployment of a stent as flow diverter has recently been indicated as a promising treatment option, which has the potential to protect the aneurysm by reducing the action of haemodynamical forces and facilitating aneurysm thrombosis. The direct assessment of changes in aneurysm haemodynamics after stent deployment is hampered by limitations in existing imaging techniques and currently requires resorting to numerical simulations. Numerical simulations also have the potential to assist in the personalized selection of an optimal stent design prior to intervention. However, from the current literature it is difficult to assess the level of technological advancement and the reproducibility of haemodynamical predictions in stented patient-specific models. The VISC 2007 initiative engaged in the development of a multicentre-controlled benchmark to analyse differences induced by diverse grid generation and computational fluid dynamics (CFD) technologies. The challenge also represented an opportunity to provide a survey of available technologies currently adopted by international teams from both academic and industrial institutions for constructing computational models of stented aneurysms. The results demonstrate the ability of current strategies in consistently quantifying the performance of three commercial intracranial stents, and contribute to reinforce the confidence in haemodynamical simulation, thus taking a step forward towards the introduction of simulation tools to support diagnostics and

  5. Accuracy and reproducibility of voxel based superimposition of cone beam computed tomography models on the anterior cranial base and the zygomatic arches.

    Directory of Open Access Journals (Sweden)

    Rania M Nada

    Full Text Available Superimposition of serial Cone Beam Computed Tomography (CBCT) scans has become a valuable tool for three dimensional (3D) assessment of treatment effects and stability. Voxel based image registration is a newly developed semi-automated technique for superimposition and comparison of two CBCT scans. The accuracy and reproducibility of CBCT superimposition on the anterior cranial base or the zygomatic arches using voxel based image registration was tested in this study. 16 pairs of 3D CBCT models were constructed from pre and post treatment CBCT scans of 16 adult dysgnathic patients. Each pair was registered on the anterior cranial base three times and on the left zygomatic arch twice. Following each superimposition, the mean absolute distances between the 2 models were calculated at 4 regions: anterior cranial base, forehead, left and right zygomatic arches. The mean distances between the models ranged from 0.2 to 0.37 mm (SD 0.08-0.16) for the anterior cranial base registration and from 0.2 to 0.45 mm (SD 0.09-0.27) for the zygomatic arch registration. The mean differences between the two registration zones ranged between 0.12 to 0.19 mm at the 4 regions. Voxel based image registration on both zones could be considered an accurate and reproducible method for CBCT superimposition. The left zygomatic arch could be used as a stable structure for the superimposition of smaller field of view CBCT scans where the anterior cranial base is not visible.
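
    The accuracy metric reported above is a mean absolute distance between registered surface models. The sketch below is a generic closest-point illustration on synthetic point clouds, not the authors' software; it shows one way such a per-region distance can be computed.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_absolute_distance(model_a, model_b):
    """Mean absolute closest-point distance between two registered 3-D surface
    models given as point clouds (N x 3 arrays, coordinates in mm)."""
    tree = cKDTree(model_b)
    dists, _ = tree.query(model_a)      # nearest neighbour in model_b for each point of model_a
    return float(np.mean(np.abs(dists)))

rng = np.random.default_rng(4)
pre = rng.random((1000, 3)) * 50.0                     # mm, toy "pre-treatment" surface patch
post = pre + rng.normal(scale=0.3, size=pre.shape)     # slightly displaced "post-treatment" surface
print(f"mean absolute distance: {mean_absolute_distance(post, pre):.2f} mm")
```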

  6. Quantitative Model for Supply Chain Visibility: Process Capability Perspective

    Directory of Open Access Journals (Sweden)

    Youngsu Lee

    2016-01-01

    Full Text Available Currently, the intensity of enterprise competition has increased as a result of a greater diversity of customer needs as well as the persistence of a long-term recession. The results of competition are becoming severe enough to determine the survival of a company. To survive global competition, each firm must focus on achieving innovation excellence and operational excellence as core competencies for sustainable competitive advantage. Supply chain management is now regarded as one of the most effective innovation initiatives to achieve operational excellence, and its importance has become ever more apparent. However, few companies effectively manage their supply chains, and the greatest difficulty is in achieving supply chain visibility. Many companies still suffer from a lack of visibility, and in spite of extensive research and the availability of modern technologies, the concepts and quantification methods to increase supply chain visibility are still ambiguous. Based on the extant research in supply chain visibility, this study proposes an extended visibility concept focusing on a process capability perspective and suggests a more quantitative model using the Z score from Six Sigma methodology to evaluate and improve the level of supply chain visibility.
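
    One way to read the "process capability perspective" is to convert the defect probability of a visibility metric into a sigma level (Z score), as in conventional Six Sigma practice. The sketch below is a generic illustration of that conversion with invented data and specification limits; the paper's specific formulation may differ.

```python
import numpy as np
from scipy.stats import norm

def sigma_level(samples, lsl=None, usl=None, shift=1.5):
    """Six-Sigma-style sigma level of a process metric (illustrative).

    The defect probability outside the spec limits is converted to a Z score;
    the conventional 1.5-sigma long-term shift is added by default."""
    mu, sd = np.mean(samples), np.std(samples, ddof=1)
    p_defect = 0.0
    if usl is not None:
        p_defect += norm.sf(usl, mu, sd)       # probability above the upper limit
    if lsl is not None:
        p_defect += norm.cdf(lsl, mu, sd)      # probability below the lower limit
    return norm.isf(p_defect) + shift

# e.g. latency of order-status updates (hours); spec: visible within 24 h
rng = np.random.default_rng(2)
latency = rng.normal(12.0, 4.0, size=500)
print(f"visibility sigma level: {sigma_level(latency, usl=24.0):.2f}")
```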

  7. Epistasis analysis for quantitative traits by functional regression model.

    Science.gov (United States)

    Zhang, Futao; Boerwinkle, Eric; Xiong, Momiao

    2014-06-01

    The critical barrier in interaction analysis for rare variants is that most traditional statistical methods for testing interactions were originally designed for testing the interaction between common variants and are difficult to apply to rare variants because of their prohibitive computational time and poor ability. The great challenges for successful detection of interactions with next-generation sequencing (NGS) data are (1) lack of methods for interaction analysis with rare variants, (2) severe multiple testing, and (3) time-consuming computations. To meet these challenges, we shift the paradigm of interaction analysis between two loci to interaction analysis between two sets of loci or genomic regions and collectively test interactions between all possible pairs of SNPs within two genomic regions. In other words, we take a genome region as a basic unit of interaction analysis and use high-dimensional data reduction and functional data analysis techniques to develop a novel functional regression model to collectively test interactions between all possible pairs of single nucleotide polymorphisms (SNPs) within two genome regions. By intensive simulations, we demonstrate that the functional regression models for interaction analysis of the quantitative trait have the correct type 1 error rates and a much better ability to detect interactions than the current pairwise interaction analysis. The proposed method was applied to exome sequence data from the NHLBI's Exome Sequencing Project (ESP) and CHARGE-S study. We discovered 27 pairs of genes showing significant interactions after applying the Bonferroni correction (P-values < 4.58 × 10(-10)) in the ESP, and 11 were replicated in the CHARGE-S study.

  8. Adaptation and Validation of QUick, Easy, New, CHEap, and Reproducible (QUENCHER) Antioxidant Capacity Assays in Model Products Obtained from Residual Wine Pomace.

    Science.gov (United States)

    Del Pino-García, Raquel; García-Lomillo, Javier; Rivero-Pérez, María D; González-SanJosé, María L; Muñiz, Pilar

    2015-08-12

    Evaluation of the total antioxidant capacity of solid matrices without extraction steps is a very interesting alternative for food researchers and also for food industries. These methodologies have been denominated QUENCHER from QUick, Easy, New, CHEap, and Reproducible assays. To demonstrate and highlight the validity of QUENCHER (Q) methods, values of Q-method validation were shown for the first time, and they were tested with products of well-known different chemical properties. Furthermore, new QUENCHER assays to measure scavenging capacity against superoxide, hydroxyl, and lipid peroxyl radicals were developed. Calibration models showed good linearity (R(2) > 0.995), proportionality and precision (CV antioxidant capacity values significantly different from those obtained with water. The dilution of samples with powdered cellulose was discouraged because possible interferences with some of the matrices analyzed may take place.

  9. Quantitative Simulation of Granular Collapse Experiments with Visco-Plastic Models

    Science.gov (United States)

    Mangeney, A.; Ionescu, I. R.; Bouchut, F.; Roche, O.

    2014-12-01

    One of the key issues in landslide modeling is to define the appropriate rheological behavior of these natural granular flows. In particular the description of the static and of the flowing states of granular media is still an open issue. This plays a crucial role in erosion/deposition processes. A first step to address this issue is to derive models able to reproduce laboratory experiments of granular flows. We propose here a mechanical and numerical model of dry granular flows that quantitatively reproduces granular column collapse over inclined planes, with rheological parameters directly derived from the laboratory experiments. We reformulate the so-called μ(I) rheology proposed by Jop et al. (2006), where I is the so-called inertial number, in the framework of Drucker-Prager plasticity with yield stress and a viscosity η(||D||, p) depending on both the pressure p and the norm of the strain rate tensor ||D||. The resulting dynamic viscosity varies from very small values near the free surface and near the front to 1.5 Pa.s within the quasi-static zone. We show that taking into account a constant mean viscosity during the flow (η = 1 Pa.s here) provides results very similar to those obtained with the variable viscosity deduced from the μ(I) rheology, while significantly reducing the computational cost. This has important implications for applications to real landslides and rock avalanches. The numerical results show that the flow is essentially located in a surface layer behind the front, while the whole granular material is flowing near the front where basal sliding occurs. The static/flowing interface changes as a function of space and time, in good agreement with experimental observations. Heterogeneities are observed within the flow with low and high pressure zones, localized small upward velocity zones and vortices near the transition between the flowing and static grains. These instabilities create 'sucking zones' and have some characteristics similar
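
    For reference, one common way of writing the reformulation described above is given below; conventions for the strain-rate norm and the factor of two vary between implementations, so this is indicative rather than the authors' exact set of equations.

```latex
\boldsymbol{\tau} = \mu(I)\, p\, \frac{\mathbf{D}}{\lVert \mathbf{D} \rVert},
\qquad
\mu(I) = \mu_s + \frac{\mu_2 - \mu_s}{1 + I_0 / I},
\qquad
I = \frac{2\,\lVert \mathbf{D} \rVert\, d}{\sqrt{p / \rho_s}},
\qquad
\eta(\lVert \mathbf{D} \rVert, p) = \frac{\mu(I)\, p}{2\,\lVert \mathbf{D} \rVert}
```

    Here d is the grain diameter, ρ_s the grain density, p the pressure, and ||D|| the norm of the strain-rate tensor; the effective viscosity η diverges as ||D|| goes to zero, which is what produces the quasi-static zones.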

  10. Compliant bipedal model with the center of pressure excursion associated with oscillatory behavior of the center of mass reproduces the human gait dynamics.

    Science.gov (United States)

    Jung, Chang Keun; Park, Sukyung

    2014-01-03

    Although the compliant bipedal model could reproduce qualitative ground reaction force (GRF) of human walking, the model with a fixed pivot showed overestimations in stance leg rotation and the ratio of horizontal to vertical GRF. The human walking data showed a continuous forward progression of the center of pressure (CoP) during the stance phase and the suspension of the CoP near the forefoot before the onset of step transition. To better describe human gait dynamics with a minimal expense of model complexity, we proposed a compliant bipedal model with the accelerated pivot which associated the CoP excursion with the oscillatory behavior of the center of mass (CoM) with the existing simulation parameter and leg stiffness. Owing to the pivot acceleration defined to emulate human CoP profile, the arrival of the CoP at the limit of the stance foot over the single stance duration initiated the step-to-step transition. The proposed model showed an improved match to walking data. As the forward motion of CoM during single stance was partly accounted by forward pivot translation, the previously overestimated rotation of the stance leg was reduced and the corresponding horizontal GRF became closer to human data. The walking solutions of the model ranged over higher speed ranges (~1.7 m/s) than those of the fixed-pivot compliant bipedal model (~1.5 m/s) and exhibited other gait parameters, such as touchdown angle, step length and step frequency, comparable to the experimental observations. The good matches between the model and experimental GRF data imply that the continuous pivot acceleration associated with CoM oscillatory behavior could serve as a useful framework of bipedal model.

  11. Reproducible research in palaeomagnetism

    Science.gov (United States)

    Lurcock, Pontus; Florindo, Fabio

    2015-04-01

    The reproducibility of research findings is attracting increasing attention across all scientific disciplines. In palaeomagnetism as elsewhere, computer-based analysis techniques are becoming more commonplace, complex, and diverse. Analyses can often be difficult to reproduce from scratch, both for the original researchers and for others seeking to build on the work. We present a palaeomagnetic plotting and analysis program designed to make reproducibility easier. Part of the problem is the divide between interactive and scripted (batch) analysis programs. An interactive desktop program with a graphical interface is a powerful tool for exploring data and iteratively refining analyses, but usually cannot operate without human interaction. This makes it impossible to re-run an analysis automatically, or to integrate it into a larger automated scientific workflow - for example, a script to generate figures and tables for a paper. In some cases the parameters of the analysis process itself are not saved explicitly, making it hard to repeat or improve the analysis even with human interaction. Conversely, non-interactive batch tools can be controlled by pre-written scripts and configuration files, allowing an analysis to be 'replayed' automatically from the raw data. However, this advantage comes at the expense of exploratory capability: iteratively improving an analysis entails a time-consuming cycle of editing scripts, running them, and viewing the output. Batch tools also tend to require more computer expertise from their users. PuffinPlot is a palaeomagnetic plotting and analysis program which aims to bridge this gap. First released in 2012, it offers both an interactive, user-friendly desktop interface and a batch scripting interface, both making use of the same core library of palaeomagnetic functions. We present new improvements to the program that help to integrate the interactive and batch approaches, allowing an analysis to be interactively explored and refined

  12. Modeling the Effect of Polychromatic Light in Quantitative Absorbance Spectroscopy

    Science.gov (United States)

    Smith, Rachel; Cantrell, Kevin

    2007-01-01

    A laboratory experiment is conducted to give the students practical experience with the principles of electronic absorbance spectroscopy. This straightforward approach creates a powerful tool for exploring many of the aspects of quantitative absorbance spectroscopy.
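
    The deviation from Beer's law that such an exercise explores can be modeled by summing the transmitted intensity over the source bandwidth. The sketch below uses a toy three-wavelength band with invented absorptivities (not values from the article); it shows how the apparent absorbance falls below the monochromatic value as concentration increases whenever the molar absorptivity varies across the band.

```python
import numpy as np

# Apparent absorbance with polychromatic light:
#   A_app = -log10( sum_i I_i * 10^(-eps_i * b * c) / sum_i I_i )
# which is linear in concentration c only if eps is constant across the band.
intensity = np.array([0.25, 0.50, 0.25])     # relative source intensities across the band
epsilon = np.array([800.0, 1000.0, 700.0])   # molar absorptivities (L mol^-1 cm^-1), illustrative
b = 1.0                                      # path length (cm)

for c in (1e-4, 5e-4, 1e-3, 2e-3):           # concentrations (mol/L)
    transmitted = np.sum(intensity * 10.0 ** (-epsilon * b * c))
    A_app = -np.log10(transmitted / intensity.sum())
    A_mono = epsilon[1] * b * c              # monochromatic reference at the band centre
    print(f"c={c:.0e} M   A_apparent={A_app:.3f}   A_monochromatic={A_mono:.3f}")
```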

  13. Can a Dusty Warm Absorber Model Reproduce the Soft X-ray Spectra of MCG-6-30-15 and Mrk 766?

    CERN Document Server

    Sako, M; Branduardi-Raymont, G; Kaastra, J S; Brinkman, A C; Page, M J; Behar, E; Paerels, F B S; Kinkhabwala, A; Liedahl, D A; Den Herder, J W A

    2003-01-01

    XMM-Newton RGS spectra of MCG-6-30-15 and Mrk 766 exhibit complex discrete structure, which was interpreted in a paper by Branduardi-Raymont et al. (2001) as evidence for the existence of relativistically broadened Lyman alpha emission from carbon, nitrogen, and oxygen, produced in the inner-most regions of an accretion disk around a Kerr black hole. This suggestion was subsequently criticized in a paper by Lee et al. (2001), who argued that for MCG-6-30-15, the Chandra HETG spectrum, which is partially overlapping the RGS in spectral coverage, is adequately fit by a dusty warm absorber model, with no relativistic line emission. We present a reanalysis of the original RGS data sets in terms of the Lee et al. (2001) model, and demonstrate that spectral models consisting of a smooth continuum with ionized and dust absorption alone cannot reproduce the RGS spectra of both objects. The original relativistic line model with warm absorption proposed by Branduardi-Raymont et al. (2001) provides a superior fit to the...

  14. Current status of the ability of the GEMS/MACC models to reproduce the tropospheric CO vertical distribution as measured by MOZAIC

    Directory of Open Access Journals (Sweden)

    N. Elguindi

    2010-10-01

    Full Text Available Vertical profiles of CO taken from the MOZAIC aircraft database are used to globally evaluate the performance of the GEMS/MACC models, including the ECMWF-Integrated Forecasting System (IFS) model coupled to the CTM MOZART-3 with 4DVAR data assimilation for the year 2004. This study provides a unique opportunity to compare the performance of three offline CTMs (MOZART-3, MOCAGE and TM5) driven by the same meteorology as well as one coupled atmosphere/CTM model run with data assimilation, enabling us to assess the potential gain brought by the combination of online transport and the 4DVAR chemical satellite data assimilation.

    First we present a global analysis of observed CO seasonal averages and interannual variability for the years 2002–2007. Results show that despite the intense boreal forest fires that occurred during the summer in Alaska and Canada, the year 2004 had comparatively lower tropospheric CO concentrations. Next we present a validation of CO estimates produced by the MACC models for 2004, including an assessment of their ability to transport pollutants originating from the Alaskan/Canadian wildfires. In general, all the models tend to underestimate CO. The coupled model and the CTMs perform best in Europe and the US, where biases range from 0 to -25% in the free troposphere and from 0 to -50% in the surface and boundary layers (BL). Using the 4DVAR technique to assimilate MOPITT V4 CO significantly reduces biases by up to 50% in most regions. However, none of the models, even the IFS-MOZART-3 coupled model with assimilation, are able to reproduce well the CO plumes originating from the Alaskan/Canadian wildfires at downwind locations in the eastern US and Europe. Sensitivity tests reveal that deficiencies in the fire emissions inventory and injection height play a role.

  15. Quantitative Analysis of Probabilistic Models of Software Product Lines with Statistical Model Checking

    DEFF Research Database (Denmark)

    ter Beek, Maurice H.; Legay, Axel; Lluch Lafuente, Alberto

    2015-01-01

    We investigate the suitability of statistical model checking techniques for analysing quantitative properties of software product line models with probabilistic aspects. For this purpose, we enrich the feature-oriented language FLAN with action rates, which specify the likelihood of exhibiting particular behaviour or of installing features at a specific moment or in a specific order. The enriched language (called PFLAN) allows us to specify models of software product lines with probabilistic configurations and behaviour, e.g. by considering a PFLAN semantics based on discrete-time Markov chains. The Maude implementation of PFLAN is combined with the distributed statistical model checker MultiVeStA to perform quantitative analyses of a simple product line case study. The presented analyses include the likelihood of certain behaviour of interest (e.g. product malfunctioning) and the expected average...

  16. Opening Reproducible Research

    Science.gov (United States)

    Nüst, Daniel; Konkol, Markus; Pebesma, Edzer; Kray, Christian; Klötgen, Stephanie; Schutzeichel, Marc; Lorenz, Jörg; Przibytzin, Holger; Kussmann, Dirk

    2016-04-01

    Open access is not only a form of publishing such that research papers become available to the large public free of charge, it also refers to a trend in science that the act of doing research becomes more open and transparent. When science transforms to open access we not only mean access to papers, research data being collected, or data being generated, but also access to the data used and the procedures carried out in the research paper. Increasingly, scientific results are generated by numerical manipulation of data that were already collected, and may involve simulation experiments that are completely carried out computationally. Reproducibility of research findings, the ability to repeat experimental procedures and confirm previously found results, is at the heart of the scientific method (Pebesma, Nüst and Bivand, 2012). As opposed to the collection of experimental data in labs or nature, computational experiments lend themselves very well for reproduction. Some of the reasons why scientists do not publish data and computational procedures that allow reproduction will be hard to change, e.g. privacy concerns in the data, fear for embarrassment or of losing a competitive advantage. Others reasons however involve technical aspects, and include the lack of standard procedures to publish such information and the lack of benefits after publishing them. We aim to resolve these two technical aspects. We propose a system that supports the evolution of scientific publications from static papers into dynamic, executable research documents. The DFG-funded experimental project Opening Reproducible Research (ORR) aims for the main aspects of open access, by improving the exchange of, by facilitating productive access to, and by simplifying reuse of research results that are published over the Internet. Central to the project is a new form for creating and providing research results, the executable research compendium (ERC), which not only enables third parties to

  17. Herd immunity and pneumococcal conjugate vaccine: a quantitative model.

    Science.gov (United States)

    Haber, Michael; Barskey, Albert; Baughman, Wendy; Barker, Lawrence; Whitney, Cynthia G; Shaw, Kate M; Orenstein, Walter; Stephens, David S

    2007-07-20

    Invasive pneumococcal disease in older children and adults declined markedly after introduction in 2000 of the pneumococcal conjugate vaccine for young children. An empirical quantitative model was developed to estimate the herd (indirect) effects on the incidence of invasive disease among persons ≥5 years of age induced by vaccination of young children with 1, 2, or ≥3 doses of the pneumococcal conjugate vaccine, Prevnar (PCV7), containing serotypes 4, 6B, 9V, 14, 18C, 19F and 23F. From 1994 to 2003, cases of invasive pneumococcal disease were prospectively identified in Georgia Health District-3 (eight metropolitan Atlanta counties) by Active Bacterial Core surveillance (ABCs). From 2000 to 2003, vaccine coverage levels of PCV7 for children aged 19-35 months in Fulton and DeKalb counties (of Atlanta) were estimated from the National Immunization Survey (NIS). Based on incidence data and the estimated average number of doses received by 15 months of age, a Poisson regression model was fit, describing the trend in invasive pneumococcal disease in groups not targeted for vaccination (i.e., adults and older children) before and after the introduction of PCV7. Highly significant declines in all the serotypes contained in PCV7 in all unvaccinated populations (5-19, 20-39, 40-64, and >64 years) from 2000 to 2003 were found under the model. No significant change in incidence was seen from 1994 to 1999, indicating rates were stable prior to vaccine introduction. Among unvaccinated persons 5+ years of age, the modeled incidence of disease caused by PCV7 serotypes as a group dropped 38.4%, 62.0%, and 76.6% for 1, 2, and 3 doses, respectively, received on average by the population of children by the time they are 15 months of age. Incidence of serotypes 14 and 23F had consistent significant declines in all unvaccinated age groups. In contrast, the herd immunity effects on vaccine-related serotype 6A incidence were inconsistent. Increasing trends of non
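
    The core of such an analysis is a Poisson regression of case counts in the non-targeted age groups on the average number of doses received by young children, with the population at risk as an offset. The sketch below fits that kind of model to synthetic data; all counts, rates, and coverage values are invented and are not the ABCs surveillance data.

```python
import numpy as np
import statsmodels.api as sm

years = np.arange(1994, 2004)
avg_doses = np.array([0, 0, 0, 0, 0, 0, 0.5, 1.2, 2.0, 2.6])   # mean doses by 15 months (invented)
population = np.full(years.size, 2_500_000)                     # persons aged 5+ at risk (invented)
rate0, effect = 20e-6, -0.35                                     # baseline rate and log-rate drop per dose
rng = np.random.default_rng(3)
cases = rng.poisson(population * rate0 * np.exp(effect * avg_doses))

X = sm.add_constant(avg_doses)
model = sm.GLM(cases, X, family=sm.families.Poisson(),
               offset=np.log(population)).fit()
print(model.params)                                    # const ~ log(rate0), slope ~ effect
print(f"estimated decline per dose: {1 - np.exp(model.params[1]):.1%}")
```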

  18. Accuracy and reproducibility of patient-specific hemodynamic models of stented intracranial aneurysms: report on the Virtual Intracranial Stenting Challenge 2011.

    Science.gov (United States)

    Cito, S; Geers, A J; Arroyo, M P; Palero, V R; Pallarés, J; Vernet, A; Blasco, J; San Román, L; Fu, W; Qiao, A; Janiga, G; Miura, Y; Ohta, M; Mendina, M; Usera, G; Frangi, A F

    2015-01-01

    Validation studies are prerequisites for computational fluid dynamics (CFD) simulations to be accepted as part of clinical decision-making. This paper reports on the 2011 edition of the Virtual Intracranial Stenting Challenge. The challenge aimed to assess the reproducibility with which research groups can simulate the velocity field in an intracranial aneurysm, both untreated and treated with five different configurations of high-porosity stents. Particle imaging velocimetry (PIV) measurements were obtained to validate the untreated velocity field. Six participants, totaling three CFD solvers, were provided with surface meshes of the vascular geometry and the deployed stent geometries, and flow rate boundary conditions for all inlets and outlets. As output, they were invited to submit an abstract to the 8th International Interdisciplinary Cerebrovascular Symposium 2011 (ICS'11), outlining their methods and giving their interpretation of the performance of each stent configuration. After the challenge, all CFD solutions were collected and analyzed. To quantitatively analyze the data, we calculated the root-mean-square error (RMSE) over uniformly distributed nodes on a plane slicing the main flow jet along its axis and normalized it with the maximum velocity on the slice of the untreated case (NRMSE). Good agreement was found between CFD and PIV with a NRMSE of 7.28%. Excellent agreement was found between CFD solutions, both untreated and treated. The maximum difference between any two groups (along a line perpendicular to the main flow jet) was 4.0 mm/s, i.e. 4.1% of the maximum velocity of the untreated case, and the average NRMSE was 0.47% (range 0.28-1.03%). In conclusion, given geometry and flow rates, research groups can accurately simulate the velocity field inside an intracranial aneurysm-as assessed by comparison with in vitro measurements-and find excellent agreement on the hemodynamic effect of different stent configurations.
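
    The agreement metric used above is straightforward to compute once both velocity fields are sampled on the same nodes. The sketch below uses toy numbers rather than the challenge data; it evaluates the RMSE over the nodes of a slice and normalizes it by the maximum velocity of the untreated case.

```python
import numpy as np

def nrmse(v_cfd, v_piv, v_ref):
    """RMSE between two velocity fields sampled at the same nodes, normalized by a
    reference velocity (e.g. the maximum velocity of the untreated case), in percent."""
    rmse = np.sqrt(np.mean((np.asarray(v_cfd) - np.asarray(v_piv)) ** 2))
    return 100.0 * rmse / v_ref

# toy example: velocity magnitudes (m/s) at nodes on a slice through the main flow jet
v_piv = np.array([0.10, 0.35, 0.62, 0.48, 0.22])
v_cfd = np.array([0.12, 0.33, 0.66, 0.45, 0.25])
print(f"NRMSE = {nrmse(v_cfd, v_piv, v_ref=v_piv.max()):.2f}%")
```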

  19. Implementation of Contraction to Electrophysiological Ventricular Myocyte Models, and Their Quantitative Characterization via Post-Extrasystolic Potentiation.

    Science.gov (United States)

    Ji, Yanyan Claire; Gray, Richard A; Fenton, Flavio H

    2015-01-01

    Heart failure (HF) affects over 5 million Americans and is characterized by impairment of cellular cardiac contractile function resulting in reduced ejection fraction in patients. Electrical stimulation such as cardiac resynchronization therapy (CRT) and cardiac contractility modulation (CCM) have shown some success in treating patients with HF. Computer simulations have the potential to help improve such therapy (e.g. suggest optimal lead placement) as well as provide insight into the underlying mechanisms which could be beneficial. However, these myocyte models require a quantitatively accurate excitation-contraction coupling such that the electrical and contraction predictions are correct. While currently there are close to a hundred models describing the detailed electrophysiology of cardiac cells, the majority of cell models do not include the equations to reproduce contractile force or they have been added ad hoc. Here we present a systematic methodology to couple first generation contraction models into electrophysiological models via intracellular calcium and then compare the resulting model predictions to experimental data. This is done by using a post-extrasystolic pacing protocol, which captures essential dynamics of contractile forces. We found that modeling the dynamic intracellular calcium buffers is necessary in order to reproduce the experimental data. Furthermore, we demonstrate that in models the mechanism of the post-extrasystolic potentiation is highly dependent on the calcium released from the Sarcoplasmic Reticulum. Overall this study provides new insights into both specific and general determinants of cellular contractile force and provides a framework for incorporating contraction into electrophysiological models, both of which will be necessary to develop reliable simulations to optimize electrical therapies for HF.

  20. Reproducing the observed energy-dependent structure of Earth's electron radiation belts during storm recovery with an event-specific diffusion model

    Science.gov (United States)

    Ripoll, J.-F.; Reeves, G. D.; Cunningham, G. S.; Loridan, V.; Denton, M.; Santolík, O.; Kurth, W. S.; Kletzing, C. A.; Turner, D. L.; Henderson, M. G.; Ukhorskiy, A. Y.

    2016-06-01

    We present dynamic simulations of energy-dependent losses in the radiation belt "slot region" and the formation of the two-belt structure for the quiet days after the 1 March storm. The simulations combine radial diffusion with a realistic scattering model, based on data-driven, spatially and temporally resolved whistler-mode hiss wave observations from the Van Allen Probes satellites. The simulations reproduce Van Allen Probes observations for all energies and L shells (2-6), including (a) the strong energy dependence of the radiation belt dynamics, (b) an energy-dependent outer boundary to the inner zone that extends to higher L shells at lower energies, and (c) an "S-shaped" energy-dependent inner boundary to the outer zone that results from the competition between diffusive radial transport and losses. We find that the characteristic energy-dependent structure of the radiation belts and slot region is dynamic and can be formed gradually in ~15 days, although the "S shape" can also be reproduced by assuming equilibrium conditions. The highest-energy electrons (E > 300 keV) of the inner region of the outer belt (L ~ 4-5) also constantly decay, demonstrating that hiss wave scattering affects the outer belt during times of extended plasmasphere. Through these simulations, we explain the full structure in energy and L shell of the belts and the slot formation by hiss scattering during storm recovery. We show the power and complexity of looking dynamically at the effects over all energies and L shells and the need for using data-driven and event-specific conditions.
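    The competition between radial transport and scattering loss described above can be illustrated with a bare-bones 1-D radial diffusion solver. The diffusion coefficient, loss lifetimes, grid and initial condition below are placeholders, not the event-specific, data-driven inputs used in the study.

    ```python
    import numpy as np

    def evolve_psd(f0, L, D_LL, tau, t_end_days, dt_days=1e-3):
        """Explicit finite-difference solution of
            df/dt = L^2 * d/dL( D_LL / L^2 * df/dL ) - f / tau
        on a uniform L grid, with fixed boundary values (toy setup)."""
        f = f0.copy()
        dL = L[1] - L[0]
        L_edge = 0.5 * (L[:-1] + L[1:])
        D_edge = 0.5 * (D_LL[:-1] + D_LL[1:])
        for _ in range(int(t_end_days / dt_days)):
            flux = D_edge / L_edge**2 * np.diff(f) / dL    # D_LL/L^2 * df/dL at cell edges
            div = np.zeros_like(f)
            div[1:-1] = L[1:-1]**2 * np.diff(flux) / dL
            f = f + dt_days * (div - f / tau)
            f[0], f[-1] = f0[0], f0[-1]                    # hold boundaries fixed
        return f

    L = np.linspace(2.0, 6.0, 81)
    D_LL = 1e-3 * (L / 4.0)**10           # placeholder radial diffusion coefficient [1/day]
    tau = np.where(L < 4.0, 5.0, 50.0)    # placeholder hiss-driven loss lifetimes [days]
    f0 = np.exp(-(L - 5.0)**2 / 0.5)      # initial, outer-belt-like phase space density
    f15 = evolve_psd(f0, L, D_LL, tau, t_end_days=15.0)
    print(f"relative PSD at L=3 after 15 days: {f15[np.argmin(np.abs(L - 3.0))]:.3e}")
    ```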

  1. Archiving Reproducible Research with R and Dataverse

    DEFF Research Database (Denmark)

    Leeper, Thomas

    2014-01-01

    Reproducible research and data archiving are increasingly important issues in research involving statistical analyses of quantitative data. This article introduces the dvn package, which allows R users to publicly archive datasets, analysis files, codebooks, and associated metadata in Dataverse Network online repositories, an open-source data archiving project sponsored by Harvard University. In this article I review the importance of data archiving in the context of reproducible research, introduce the Dataverse Network, explain the implementation of the dvn package, and provide example code for archiving and releasing data using the package.

  3. A Quantitative Model-Driven Comparison of Command Approaches in an Adversarial Process Model

    Science.gov (United States)

    2007-06-01

    12th ICCRTS, “Adapting C2 to the 21st Century”: A Quantitative Model-Driven Comparison of Command Approaches in an Adversarial Process Model. Lenahan identified metrics and techniques for adversarial C2 process modeling. We intend to further that work by developing a set of adversarial process ...

  4. Current status of the ability of the GEMS/MACC models to reproduce the tropospheric CO vertical distribution as measured by MOZAIC

    Directory of Open Access Journals (Sweden)

    N. Elguindi

    2010-04-01

    Full Text Available Vertical profiles of CO taken from the MOZAIC aircraft database are used to present (1) a global analysis of CO seasonal averages and interannual variability for the years 2002–2007 and (2) a global validation of CO estimates produced by the MACC models for 2004, including an assessment of their ability to transport pollutants originating from the Alaskan/Canadian wildfires. Seasonal averages and interannual variability from several MOZAIC sites representing different regions of the world show that CO concentrations are highest and most variable during the winter season. The inter-regional variability is significant, with concentrations increasing eastward from Europe to Japan. The impact of the intense boreal fires, particularly in Russia, during the fall of 2002 on the Northern Hemisphere CO concentrations throughout the troposphere is well represented by the MOZAIC data.

    A global validation of the GEMS/MACC GRG models, which include three stand-alone CTMs (MOZART, MOCAGE and TM5) and the coupled ECMWF Integrated Forecasting System (IFS/MOZART) model, with and without MOPITT CO data assimilation, shows that the models have a tendency to underestimate CO. The models perform best in Europe and the US, where biases range from 0 to –25% in the free troposphere and from 0 to –50% at the surface and in the boundary layer (BL). The biases are largest in the winter and during the daytime, when emissions are highest, indicating that current inventories are too low. Data assimilation is shown to reduce biases by up to 25% in some regions. The models are not able to reproduce well the CO plumes originating from the Alaskan/Canadian wildfires at downwind locations in the eastern US and Europe, not even with assimilation. Sensitivity tests reveal that this is mainly due to deficiencies in the fire emissions inventory and injection height.

  5. Reproducible research in vadose zone sciences

    Science.gov (United States)

    A significant portion of present-day soil and Earth science research is computational, involving complex data analysis pipelines, advanced mathematical and statistical models, and sophisticated computer codes. Opportunities for scientific progress are greatly diminished if reproducing and building o...

  6. ASSETS MANAGEMENT - A CONCEPTUAL MODEL DECOMPOSING VALUE FOR THE CUSTOMER AND A QUANTITATIVE MODEL

    Directory of Open Access Journals (Sweden)

    Susana Nicola

    2015-03-01

    Full Text Available In this paper we describe the application of a modeling framework, the so-called Conceptual Model Decomposing Value for the Customer (CMDVC), in a Footwear Industry case study, to ascertain the usefulness of this approach. Value networks were used to identify the participants, the tangible and intangible deliverables, the endogenous and exogenous assets, and to analyse their interactions as an indication of an adequate value proposition. The quantitative model of benefits and sacrifices, using the Fuzzy AHP method, enables the discussion of how the CMDVC can be applied and used in the enterprise environment, and provides new relevant relations between perceived benefits (PBs).

  7. A quantitative method for defining high-arched palate using the Tcof1(+/-) mutant mouse as a model.

    Science.gov (United States)

    Conley, Zachary R; Hague, Molly; Kurosaka, Hiroshi; Dixon, Jill; Dixon, Michael J; Trainor, Paul A

    2016-07-15

    The palate functions as the roof of the mouth in mammals, separating the oral and nasal cavities. Its complex embryonic development and assembly poses unique susceptibilities to intrinsic and extrinsic disruptions. Such disruptions may cause failure of the developing palatal shelves to fuse along the midline, resulting in a cleft. In other cases the palate may fuse at an arch, resulting in a vaulted oral cavity, termed high-arched palate. There are many models available for studying the pathogenesis of cleft palate but a relative paucity for high-arched palate. One condition exhibiting either cleft palate or high-arched palate is Treacher Collins syndrome, a congenital disorder characterized by numerous craniofacial anomalies. We quantitatively analyzed palatal perturbations in the Tcof1(+/-) mouse model of Treacher Collins syndrome, which phenocopies the condition in humans. We discovered that 46% of Tcof1(+/-) mutant embryos and newborn pups exhibit either soft clefts or full clefts. In addition, 17% of Tcof1(+/-) mutants were found to exhibit high-arched palate, defined as two sigma above the corresponding wild-type population mean for height- and angle-based arch measurements. Furthermore, palatal shelf length and shelf width were decreased in all Tcof1(+/-) mutant embryos and pups compared to controls. Interestingly, these phenotypes were subsequently ameliorated through genetic inhibition of p53. The results of our study therefore provide a simple, reproducible and quantitative method for investigating models of high-arched palate.

  8. Quantitative, comprehensive, analytical model for magnetic reconnection in Hall magnetohydrodynamics.

    Science.gov (United States)

    Simakov, Andrei N; Chacón, L

    2008-09-05

    Dissipation-independent, or "fast", magnetic reconnection has been observed computationally in Hall magnetohydrodynamics (MHD) and predicted analytically in electron MHD. However, a quantitative analytical theory of reconnection valid for arbitrary ion inertial lengths, d_i, has been lacking and is proposed here for the first time. The theory describes a two-dimensional reconnection diffusion region, provides expressions for reconnection rates, and derives a formal criterion for fast reconnection in terms of dissipation parameters and d_i. It also confirms the electron MHD prediction that both open and elongated diffusion regions allow fast reconnection, and reveals strong dependence of the reconnection rates on d_i.

  9. Reliability of quantifying vascular white matter brain lesions - a contribution to reproducible quantitative diagnosis; Reliabilitaet der Quantifizierung von vaskulaeren Laesionen der weissen Hirnsubstanz - ein Beitrag zur replizierbaren quantitativen Diagnostik

    Energy Technology Data Exchange (ETDEWEB)

    Hentschel, F.; Kreis, M.; Damian, M. [Abt. Neuroradiologie, ZI, Fakultaet fuer klinische Medizin Mannheim der Univ. Heidelberg (Germany); Diepers, M. [Abt. Neuroradiologie des Universitaetsklinikums Mannheim (Germany); Disque, C.; Dzialowski, I.; Kitzler, H.; Rodewald, A. [Abt. Neuroradiologie des Universitaetsklinikums Dresden (Germany); Struffert, T. [Abt. Neuroradiologie des Universitaetsklinikums Homburg (Germany); Trittmacher, S. [Abt. Neuroradiologie des Universitaetsklinikums Giessen (Germany); Wille, P.R. [Inst. fuer Neuroradiologie der Univ. Mainz (Germany); Krumm, B. [Abt. Biostatistik, ZI, Fakultaet fuer klinische Medizin Mannheim der Univ. Heidelberg (Germany)

    2005-01-01

    Purpose: microangiopathic lesions of the brain tissue correlate with the clinical diagnosis of vascular subcortical dementia. The "experience-based" evaluation is insufficient. Rating scales may contribute to reproducible quantification. Materials and methods: in MRI studies of 10 patients, 9 neuroradiologists quantified vascular white matter lesions (WMLs) at two different points in time for 12 anatomically defined regions with respect to number, size and localization (score). For 9 observers and 10 studies, 90 intra-observer differences were obtained for each of the 12 WML scores. To calculate the inter-observer reliability, rating pairs were formed. Furthermore, 360 differences were computed for each score and rating for 12 anatomically defined WML scores, and the intraclass correlation (ICC) was calculated as a measure of agreement (reliability). Results: as to the intra-observer reliability, the median of the differences was 1.5 for the entire brain as opposed to 0 for defined brain regions. The corresponding values for the inter-observer reliability were 3 and 1, respectively. The mean intra-class correlation coefficient for the 10 studies was 0.88, whereas the mean interclass correlation concerning the inter-observer reliability was 0.70, with the first and second rating being averaged. The rating of each study took about 6 minutes. Conclusion: the rating scale with high intra- and inter-observer reliability can dependably quantify WMLs and correlates with the clinical diagnosis of vascular dementia. Using a reliable rating scale, the diagnostic distinction of age-associated physiological vs. pathological size of the NMC can make a contribution to the reproducible quantifiable diagnostic evaluation of vascular brain tissue lesions within the framework of dementia diagnostics. (orig.)

  10. 76 FR 28819 - NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection...

    Science.gov (United States)

    2011-05-18

    ... COMMISSION NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection... issued for public comment a document entitled: NUREG/CR-XXXX, ``Development of Quantitative Software... development of regulatory guidance for using risk information related to digital systems in the...

  11. Photon-tissue interaction model for quantitative assessment of biological tissues

    Science.gov (United States)

    Lee, Seung Yup; Lloyd, William R.; Wilson, Robert H.; Chandra, Malavika; McKenna, Barbara; Simeone, Diane; Scheiman, James; Mycek, Mary-Ann

    2014-02-01

    In this study, we describe a direct fit photon-tissue interaction model to quantitatively analyze reflectance spectra of biological tissue samples. The model rapidly extracts biologically-relevant parameters associated with tissue optical scattering and absorption. This model was employed to analyze reflectance spectra acquired from freshly excised human pancreatic pre-cancerous tissues (intraductal papillary mucinous neoplasm (IPMN), a common precursor lesion to pancreatic cancer). Compared to previously reported models, the direct fit model improved fit accuracy and speed. Thus, these results suggest that such models could serve as real-time, quantitative tools to characterize biological tissues assessed with reflectance spectroscopy.

  12. Quantitative Reproducibility Study of CT Perfusion Parameters in Rabbits with Implanted VX-2 Lung Tumors

    Institute of Scientific and Technical Information of China (English)

    崔磊; 龚沈初; 何书; 尹剑兵; 杨巨顺; 杨红

    2011-01-01

    Purpose To prospectively evaluate the reproducibility of CT perfusion parameters in rabbits with implanted VX2 lung tumors. Materials and Methods Perfusion CT was performed twice, with a 24-hour interval, in 10 New Zealand White rabbits with implanted VX2 lung tumors. The volume, maximum diameter, blood volume (BV), blood flow (BF), time to peak (TTP), permeability surface (PS), Patlak blood volume (PBV), Patlak R square (PatRsq) and Patlak residual (PatRes) were measured, and reproducibility was evaluated using Bland-Altman statistics. Results CT perfusion parameters showed good agreement between the two perfusion examinations. Intraclass correlation coefficients (ICCs) were all greater than 0.6, and the within-subject coefficient of variation (WCV) ranged from 10.8% to 30.2%. The WCV of PS (10.8%) and PBV (12.3%) showed excellent agreement between studies, comparable to the WCV of volume (8.5%) and maximum diameter (10.0%). Conclusion The trial confirms that lung tumor perfusion CT yields good reproducibility and a range of reference values for CT perfusion parameters.
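    A minimal sketch of the repeatability statistics named above (Bland-Altman bias and limits of agreement, and a within-subject coefficient of variation) for paired test-retest measurements; the blood-flow values are invented, and the exact formulas used in the study may differ.

    ```python
    import numpy as np

    def bland_altman(x1, x2):
        """Bland-Altman bias and 95% limits of agreement for paired measurements."""
        diff = np.asarray(x1, float) - np.asarray(x2, float)
        bias = diff.mean()
        spread = 1.96 * diff.std(ddof=1)
        return bias, bias - spread, bias + spread

    def within_subject_cv(x1, x2):
        """Within-subject coefficient of variation from duplicate measurements."""
        x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
        wsv = ((x1 - x2) ** 2 / 2).mean()          # within-subject variance
        grand_mean = np.concatenate([x1, x2]).mean()
        return np.sqrt(wsv) / grand_mean

    # Hypothetical blood-flow values (ml/100 ml/min) from two scans 24 h apart.
    bf_day1 = [42.1, 55.0, 38.7, 61.2, 47.9]
    bf_day2 = [45.3, 51.8, 40.2, 58.0, 50.1]
    bias, lower, upper = bland_altman(bf_day1, bf_day2)
    wcv = within_subject_cv(bf_day1, bf_day2)
    print(f"bias={bias:.1f}, LoA=[{lower:.1f}, {upper:.1f}], WCV={wcv:.1%}")
    ```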

  13. Modular System Modeling for Quantitative Reliability Evaluation of Technical Systems

    Directory of Open Access Journals (Sweden)

    Stephan Neumann

    2016-01-01

    Full Text Available In modern times, it is necessary to offer reliable products to match the statutory directives concerning product liability and the high expectations of customers for durable devices. Furthermore, to maintain a high competitiveness, engineers need to know as accurately as possible how long their product will last and how to influence the life expectancy without expensive and time-consuming testing. As the components of a system are responsible for the system reliability, this paper introduces and evaluates calculation methods for life expectancy of common machine elements in technical systems. Subsequently, a method for the quantitative evaluation of the reliability of technical systems is proposed and applied to a heavy-duty power shift transmission.

  14. Quantitative assessment of meteorological and tropospheric Zenith Hydrostatic Delay models

    Science.gov (United States)

    Zhang, Di; Guo, Jiming; Chen, Ming; Shi, Junbo; Zhou, Lv

    2016-09-01

    Tropospheric delay has always been an important issue in GNSS/DORIS/VLBI/InSAR processing. The most commonly used empirical models for the determination of tropospheric Zenith Hydrostatic Delay (ZHD), including three meteorological models and two empirical ZHD models, are carefully analyzed in this paper. The meteorological models are UNB3m, GPT2 and GPT2w, while the ZHD models are Hopfield and Saastamoinen. By reference to in-situ meteorological measurements and ray-traced ZHD values at 91 globally distributed radiosonde sites, over a four-year period from 2010 to 2013, it is found that there is a strong correlation between the errors of model-derived values and latitude. Specifically, the Saastamoinen model shows a systematic error of about -3 mm. Therefore a modified Saastamoinen model is developed based on the "best average" refractivity constant, and is validated with radiosonde data. Among the different models, the GPT2w and the modified Saastamoinen model perform best. ZHD values derived from their combination have a mean bias of -0.1 mm and a mean RMS of 13.9 mm. Limitations of the present models are discussed and suggestions for further improvements are given.
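    For reference, the widely used Saastamoinen hydrostatic delay takes the form sketched below (standard textbook constants); the "modified" Saastamoinen model proposed in the paper adjusts the refractivity constant and is not reproduced here.

    ```python
    import math

    def saastamoinen_zhd(pressure_hpa, lat_deg, height_m):
        """Zenith Hydrostatic Delay (metres) from surface pressure (hPa),
        latitude (deg) and height (m), in the classical Saastamoinen form."""
        f = 1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg)) - 0.00000028 * height_m
        return 0.0022768 * pressure_hpa / f

    # Roughly 2.3 m of hydrostatic delay at sea level and standard pressure.
    print(f"ZHD = {saastamoinen_zhd(1013.25, 45.0, 0.0):.4f} m")
    ```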

  15. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    Science.gov (United States)

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and human health effect...

  16. Building Quantitative Hydrologic Storylines from Process-based Models for Managing Water Resources in the U.S. Under Climate-changed Futures

    Science.gov (United States)

    Arnold, J.; Gutmann, E. D.; Clark, M. P.; Nijssen, B.; Vano, J. A.; Addor, N.; Wood, A.; Newman, A. J.; Mizukami, N.; Brekke, L. D.; Rasmussen, R.; Mendoza, P. A.

    2016-12-01

    Climate change narratives for water-resource applications must represent the change signals contextualized by hydroclimatic process variability and uncertainty at multiple scales. Building narratives of plausible change includes assessing uncertainties across GCM structure, internal climate variability, climate downscaling methods, and hydrologic models. Work with this linked modeling chain has dealt mostly with GCM sampling directed separately to either model fidelity (does the model correctly reproduce the physical processes in the world?) or sensitivity (of different model responses to CO2 forcings) or diversity (of model type, structure, and complexity). This leaves unaddressed any interactions among those measures and with other components in the modeling chain used to identify water-resource vulnerabilities to specific climate threats. However, time-sensitive, real-world vulnerability studies typically cannot accommodate a full uncertainty ensemble across the whole modeling chain, so a gap has opened between current scientific knowledge and most routine applications for climate-changed hydrology. To close that gap, the US Army Corps of Engineers, the Bureau of Reclamation, and the National Center for Atmospheric Research are working on techniques to subsample uncertainties objectively across modeling chain components and to integrate results into quantitative hydrologic storylines of climate-changed futures. Importantly, these quantitative storylines are not drawn from a small sample of models or components. Rather, they stem from the more comprehensive characterization of the full uncertainty space for each component. Equally important from the perspective of water-resource practitioners, these quantitative hydrologic storylines are anchored in actual design and operations decisions potentially affected by climate change. This talk will describe part of our work characterizing variability and uncertainty across modeling chain components and their

  17. Validity and Reproducibility of a Self-Administered Semi-Quantitative Food-Frequency Questionnaire for Estimating Usual Daily Fat, Fibre, Alcohol, Caffeine and Theobromine Intakes among Belgian Post-Menopausal Women

    Directory of Open Access Journals (Sweden)

    Selin Bolca

    2009-01-01

    Full Text Available A novel food-frequency questionnaire (FFQ) was developed and validated to assess the usual daily fat, saturated, mono-unsaturated and poly-unsaturated fatty acid, fibre, alcohol, caffeine, and theobromine intakes among Belgian post-menopausal women participating in dietary intervention trials with phyto-oestrogens. The relative validity of the FFQ was estimated by comparison with 7 day (d) estimated diet records (EDR, n 64) and its reproducibility was evaluated by repeated administrations 6 weeks apart (n 79). Although the questionnaire underestimated significantly all intakes compared to the 7 d EDR, it had a good ranking ability (r 0.47-0.94; weighted κ 0.25-0.66) and it could reliably distinguish extreme intakes for all the estimated nutrients, except for saturated fatty acids. Furthermore, the correlation between repeated administrations was high (r 0.71-0.87) with a maximal misclassification of 7% (weighted κ 0.33-0.80). In conclusion, these results compare favourably with those reported by others and indicate that the FFQ is a satisfactorily reliable and valid instrument for ranking individuals within this study population.

  18. Validity and reproducibility of a self-administered semi-quantitative food-frequency questionnaire for estimating usual daily fat, fibre, alcohol, caffeine and theobromine intakes among Belgian post-menopausal women.

    Science.gov (United States)

    Bolca, Selin; Huybrechts, Inge; Verschraegen, Mia; De Henauw, Stefaan; Van de Wiele, Tom

    2009-01-01

    A novel food-frequency questionnaire (FFQ) was developed and validated to assess the usual daily fat, saturated, mono-unsaturated and poly-unsaturated fatty acid, fibre, alcohol, caffeine, and theobromine intakes among Belgian post-menopausal women participating in dietary intervention trials with phyto-oestrogens. The relative validity of the FFQ was estimated by comparison with 7 day (d) estimated diet records (EDR, n 64) and its reproducibility was evaluated by repeated administrations 6 weeks apart (n 79). Although the questionnaire underestimated significantly all intakes compared to the 7 d EDR, it had a good ranking ability (r 0.47-0.94; weighted kappa 0.25-0.66) and it could reliably distinguish extreme intakes for all the estimated nutrients, except for saturated fatty acids. Furthermore, the correlation between repeated administrations was high (r 0.71-0.87) with a maximal misclassification of 7% (weighted kappa 0.33-0.80). In conclusion, these results compare favourably with those reported by others and indicate that the FFQ is a satisfactorily reliable and valid instrument for ranking individuals within this study population.

  19. Quantitative statistical assessment of conditional models for synthetic aperture radar.

    Science.gov (United States)

    DeVore, Michael D; O'Sullivan, Joseph A

    2004-02-01

    Many applications of object recognition in the presence of pose uncertainty rely on statistical models, conditioned on pose, for observations. The image statistics of three-dimensional (3-D) objects are often assumed to belong to a family of distributions with unknown model parameters that vary with one or more continuous-valued pose parameters. Many methods for statistical model assessment, for example the tests of Kolmogorov-Smirnov and K. Pearson, require that all model parameters be fully specified or that sample sizes be large. Assessing pose-dependent models from a finite number of observations over a variety of poses can violate these requirements. However, a large number of small samples, corresponding to unique combinations of object, pose, and pixel location, are often available. We develop methods for model testing which assume a large number of small samples and apply them to the comparison of three models for synthetic aperture radar images of 3-D objects with varying pose. Each model is directly related to the Gaussian distribution and is assessed both in terms of goodness-of-fit and underlying model assumptions, such as independence, known mean, and homoscedasticity. Test results are presented in terms of the functional relationship between a given significance level and the percentage of samples that would fail a test at that level.

  20. A Quantitative Causal Model Theory of Conditional Reasoning

    Science.gov (United States)

    Fernbach, Philip M.; Erb, Christopher D.

    2013-01-01

    The authors propose and test a causal model theory of reasoning about conditional arguments with causal content. According to the theory, the acceptability of modus ponens (MP) and affirming the consequent (AC) reflect the conditional likelihood of causes and effects based on a probabilistic causal model of the scenario being judged. Acceptability…

  1. Osteolytica: An automated image analysis software package that rapidly measures cancer-induced osteolytic lesions in in vivo models with greater reproducibility compared to other commonly used methods.

    Science.gov (United States)

    Evans, H R; Karmakharm, T; Lawson, M A; Walker, R E; Harris, W; Fellows, C; Huggins, I D; Richmond, P; Chantry, A D

    2016-02-01

    Methods currently used to analyse osteolytic lesions caused by malignancies such as multiple myeloma and metastatic breast cancer vary from basic 2-D X-ray analysis to 2-D images of micro-CT datasets analysed with non-specialised image software such as ImageJ. However, these methods have significant limitations. They do not capture 3-D data, they are time-consuming and they often suffer from inter-user variability. We therefore sought to develop a rapid and reproducible method to analyse 3-D osteolytic lesions in mice with cancer-induced bone disease. To this end, we have developed Osteolytica, an image analysis software method featuring an easy to use, step-by-step interface to measure lytic bone lesions. Osteolytica utilises novel graphics card acceleration (parallel computing) and 3-D rendering to provide rapid reconstruction and analysis of osteolytic lesions. To evaluate the use of Osteolytica we analysed tibial micro-CT datasets from murine models of cancer-induced bone disease and compared the results to those obtained using a standard ImageJ analysis method. Firstly, to assess inter-user variability we deployed four independent researchers to analyse tibial datasets from the U266-NSG murine model of myeloma. Using ImageJ, inter-user variability between the bones was substantial (±19.6%), in contrast to using Osteolytica, which demonstrated minimal variability (±0.5%). Secondly, tibial datasets from U266-bearing NSG mice or BALB/c mice injected with the metastatic breast cancer cell line 4T1 were compared to tibial datasets from aged and sex-matched non-tumour control mice. Analyses by both Osteolytica and ImageJ showed significant increases in bone lesion area in tumour-bearing mice compared to control mice. These results confirm that Osteolytica performs as well as the current 2-D ImageJ osteolytic lesion analysis method. However, Osteolytica is advantageous in that it analyses over the entirety of the bone volume (as opposed to selected 2-D images), it

  2. Towards the quantitative evaluation of visual attention models.

    Science.gov (United States)

    Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K

    2015-11-01

    Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt at the organization and classification of models, but they are not sufficient to quantify which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations.

  3. Assessment of Quantitative Precipitation Forecasts from Operational NWP Models (Invited)

    Science.gov (United States)

    Sapiano, M. R.

    2010-12-01

    Previous work has shown that satellite and numerical model estimates of precipitation have complementary strengths, with satellites having greater skill at detecting convective precipitation events and model estimates having greater skill at detecting stratiform precipitation. This is due in part to the challenges associated with retrieving stratiform precipitation from satellites and the difficulty in resolving sub-grid scale processes in models. These complementary strengths can be exploited to obtain new merged satellite/model datasets, and several such datasets have been constructed using reanalysis data. Whilst reanalysis data are stable in a climate sense, they also have relatively coarse resolution compared to the satellite estimates (many of which are now commonly available at quarter degree resolution) and they necessarily use fixed forecast systems that are not state-of-the-art. An alternative to reanalysis data is to use Operational Numerical Weather Prediction (NWP) model estimates, which routinely produce precipitation with higher resolution and using the most modern techniques. Such estimates have not been combined with satellite precipitation and their relative skill has not been sufficiently assessed beyond model validation. The aim of this work is to assess the information content of the models relative to satellite estimates with the goal of improving techniques for merging these data types. To that end, several operational NWP precipitation forecasts have been compared to satellite and in situ data and their relative skill in forecasting precipitation has been assessed. In particular, the relationship between precipitation forecast skill and other model variables will be explored to see if these other model variables can be used to estimate the skill of the model at a particular time. Such relationships would provide a basis for determining weights and errors of any merged products.

  4. Quantitative Methods for Comparing Different Polyline Stream Network Models

    Energy Technology Data Exchange (ETDEWEB)

    Danny L. Anderson; Daniel P. Ames; Ping Yang

    2014-04-01

    Two techniques for exploring the relative horizontal accuracy of complex linear spatial features are described, and sample source code (pseudo code) is presented for this purpose. The first technique, relative sinuosity, is presented as a measure of the complexity or detail of a polyline network in comparison to a reference network. We term the second technique longitudinal root mean squared error (LRMSE) and present it as a means for quantitatively assessing the horizontal variance between two polyline data sets representing digitized (reference) and derived stream and river networks. Both relative sinuosity and LRMSE are shown to be suitable measures of horizontal stream network accuracy for assessing quality and variation in linear features. Both techniques have been used in two recent investigations involving the extraction of hydrographic features from LiDAR elevation data. One confirmed that, with the greatly increased resolution of LiDAR data, smaller cell sizes yielded better stream network delineations, based on sinuosity and LRMSE, when using LiDAR-derived DEMs. The other demonstrated a new method of delineating stream channels directly from LiDAR point clouds, without the intermediate step of deriving a DEM, showing that the direct delineation from LiDAR point clouds yielded an excellent and much better match, as indicated by the LRMSE.
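    A sketch of the first of the two measures, relative sinuosity, computed as the ratio of the sinuosity of a derived polyline to that of its reference polyline. The coordinate arrays and the pairing of derived-to-reference reaches are hypothetical, and the LRMSE step (point-to-line distances along the reference) is omitted for brevity.

    ```python
    import numpy as np

    def path_length(xy):
        """Total length of a polyline given as an (n, 2) array of vertices."""
        xy = np.asarray(xy, float)
        return np.sum(np.hypot(*np.diff(xy, axis=0).T))

    def sinuosity(xy):
        """Polyline length divided by the straight-line distance between endpoints."""
        xy = np.asarray(xy, float)
        straight = np.hypot(*(xy[-1] - xy[0]))
        return path_length(xy) / straight

    def relative_sinuosity(derived_xy, reference_xy):
        """>1 means the derived network is more sinuous (detailed) than the reference."""
        return sinuosity(derived_xy) / sinuosity(reference_xy)

    reference = [(0, 0), (1, 0.4), (2, -0.3), (3, 0.2), (4, 0)]
    derived = [(0, 0), (0.8, 0.5), (1.9, -0.4), (2.7, 0.4), (4, 0)]
    print(f"relative sinuosity = {relative_sinuosity(derived, reference):.3f}")
    ```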

  5. Evaluation of guidewire path reproducibility.

    Science.gov (United States)

    Schafer, Sebastian; Hoffmann, Kenneth R; Noël, Peter B; Ionita, Ciprian N; Dmochowski, Jacek

    2008-05-01

    The number of minimally invasive vascular interventions is increasing. In these interventions, a variety of devices are directed to and placed at the site of intervention. The device used in almost all of these interventions is the guidewire, acting as a monorail for all devices which are delivered to the intervention site. However, even with the guidewire in place, clinicians still experience difficulties during the interventions. As a first step toward understanding these difficulties and facilitating guidewire and device guidance, we have investigated the reproducibility of the final paths of the guidewire in vessel phantom models with respect to different factors: user, materials and geometry. Three vessel phantoms (vessel diameters approximately 4 mm) with tortuosity similar to the internal carotid artery were constructed from silicone tubing and encased in Sylgard elastomer. Several trained users repeatedly passed two guidewires of different flexibility through the phantoms under pulsatile flow conditions. After the guidewire had been placed, rotational c-arm image sequences were acquired (9 in. II mode, 0.185 mm pixel size), and the phantom and guidewire were reconstructed (512^3, 0.288 mm voxel size). The reconstructed volumes were aligned. The centerlines of the guidewire and the phantom vessel were then determined using region-growing techniques. Guidewire paths appear similar across users but not across materials. The average root mean square difference of the repeated placement was 0.17 +/- 0.02 mm (plastic-coated guidewire), 0.73 +/- 0.55 mm (steel guidewire) and 1.15 +/- 0.65 mm (steel versus plastic-coated). For a given guidewire, these results indicate that the guidewire path is relatively reproducible in shape and position.

  6. The JBEI quantitative metabolic modeling library (jQMM): a python library for modeling microbial metabolism.

    Science.gov (United States)

    Birkel, Garrett W; Ghosh, Amit; Kumar, Vinay S; Weaver, Daniel; Ando, David; Backman, Tyler W H; Arkin, Adam P; Keasling, Jay D; Martín, Héctor García

    2017-04-05

    Modeling of microbial metabolism is a topic of growing importance in biotechnology. Mathematical modeling helps provide a mechanistic understanding for the studied process, separating the main drivers from the circumstantial ones, bounding the outcomes of experiments and guiding engineering approaches. Among different modeling schemes, the quantification of intracellular metabolic fluxes (i.e. the rate of each reaction in cellular metabolism) is of particular interest for metabolic engineering because it describes how carbon and energy flow throughout the cell. In addition to flux analysis, new methods for the effective use of the ever more readily available and abundant -omics data (i.e. transcriptomics, proteomics and metabolomics) are urgently needed. The jQMM library presented here provides an open-source, Python-based framework for modeling internal metabolic fluxes and leveraging other -omics data for the scientific study of cellular metabolism and bioengineering purposes. Firstly, it presents a complete toolbox for simultaneously performing two different types of flux analysis that are typically disjoint: Flux Balance Analysis and 13C Metabolic Flux Analysis. Moreover, it introduces the capability to use 13C labeling experimental data to constrain comprehensive genome-scale models through a technique called two-scale 13C Metabolic Flux Analysis (2S-13C MFA). In addition, the library includes a demonstration of a method that uses proteomics data to produce actionable insights to increase biofuel production. Finally, the use of the jQMM library is illustrated through the addition of several Jupyter notebook demonstration files that enhance reproducibility and provide the capability to be adapted to the user's specific needs. jQMM will facilitate the design and metabolic engineering of organisms for biofuels and other chemicals, as well as investigations of cellular metabolism and leveraging -omics data. As an open source software project, we hope it
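    As a flavour of the flux balance analysis capability mentioned above, here is a self-contained toy FBA problem solved as a linear program; the three-reaction network, bounds and objective are invented and are unrelated to jQMM's actual API.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy network: A_ext -> A (v1), A -> B (v2), B -> biomass (v3).
    # Steady-state mass balance S.v = 0 for internal metabolites A and B.
    S = np.array([
        [1, -1, 0],    # A: produced by v1, consumed by v2
        [0, 1, -1],    # B: produced by v2, consumed by v3
    ])
    bounds = [(0, 10), (0, None), (0, None)]   # uptake flux v1 capped at 10

    # Maximize biomass flux v3, i.e. minimize -v3.
    res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    print("optimal fluxes v1..v3:", res.x)      # expected: [10, 10, 10]
    ```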

  7. Probabilistic Quantitative Precipitation Forecasting Using Ensemble Model Output Statistics

    CERN Document Server

    Scheuerer, Michael

    2013-01-01

    Statistical post-processing of dynamical forecast ensembles is an essential component of weather forecasting. In this article, we present a post-processing method that generates full predictive probability distributions for precipitation accumulations based on ensemble model output statistics (EMOS). We model precipitation amounts by a generalized extreme value distribution that is left-censored at zero. This distribution permits modelling precipitation on the original scale without prior transformation of the data. A closed-form expression for its continuous ranked probability score can be derived and permits computationally efficient model fitting. We discuss an extension of our approach that incorporates further statistics characterizing the spatial variability of precipitation amounts in the vicinity of the location of interest. The proposed EMOS method is applied to daily 18-h forecasts of 6-h accumulated precipitation over Germany in 2011 using the COSMO-DE ensemble prediction system operated by the Germa...
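    To make the censoring idea concrete, the sketch below builds a left-censored GEV predictive distribution with scipy and reads off the probability of no precipitation and an upper quantile. The parameter values are arbitrary (not fitted EMOS coefficients), and note that scipy's shape-parameter sign convention is opposite to the usual GEV notation.

    ```python
    from scipy.stats import genextreme

    # Hypothetical predictive GEV for one site and lead time (scipy uses c = -xi).
    gev = genextreme(c=-0.1, loc=1.5, scale=2.0)

    p_dry = gev.cdf(0.0)              # point mass assigned to exactly zero precipitation
    p_heavy = 1.0 - gev.cdf(10.0)     # probability of exceeding 10 mm / 6 h
    q90 = max(0.0, gev.ppf(0.90))     # 90th percentile of the censored distribution

    print(f"P(no precip) = {p_dry:.2f}, P(>10 mm) = {p_heavy:.3f}, q90 = {q90:.1f} mm")
    ```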

  8. Quantitative modeling of degree-degree correlation in complex networks

    CERN Document Server

    Niño, Alfonso

    2013-01-01

    This paper presents an approach to the modeling of degree-degree correlation in complex networks. Thus, a simple function, \Delta(k', k), describing specific degree-to-degree correlations is considered. The function is well suited to graphically depict assortative and disassortative variations within networks. To quantify degree correlation variations, the joint probability distribution between nodes with arbitrary degrees, P(k', k), is used. Introduction of the end-degree probability function as a basic variable allows using group theory to derive mathematical models for P(k', k). In this form, an expression, representing a family of seven models, is constructed with the needed normalization conditions. Applied to \Delta(k', k), this expression predicts a nonuniform distribution of degree correlation in networks, organized in two assortative and two disassortative zones. This structure is actually observed in a set of four modeled, technological, social, and biological networks. A regression study performed...

  9. Quantitative modeling of selective lysosomal targeting for drug design

    DEFF Research Database (Denmark)

    Trapp, Stefan; Rosania, G.; Horobin, R.W.;

    2008-01-01

    Lysosomes are acidic organelles and are involved in various diseases, the most prominent being malaria. Accumulation of molecules in the cell by diffusion from the external solution into the cytosol, lysosome and mitochondrion was calculated with the Fick–Nernst–Planck equation. The cell model considers ... This demonstrates that the cell model can be a useful tool for the design of effective lysosome-targeting drugs with minimal off-target interactions.

  10. Quantitative model analysis with diverse biological data: applications in developmental pattern formation.

    Science.gov (United States)

    Pargett, Michael; Umulis, David M

    2013-07-15

    Mathematical modeling of transcription factor and signaling networks is widely used to understand if and how a mechanism works, and to infer regulatory interactions that produce a model consistent with the observed data. Both of these approaches to modeling are informed by experimental data; however, much of the data available, or even acquirable, are not quantitative. Data that are not strictly quantitative cannot be used by classical, quantitative, model-based analyses that measure a difference between the measured observation and the model prediction for that observation. To bridge the model-to-data gap, a variety of techniques have been developed to measure model "fitness" and provide numerical values that can subsequently be used in model optimization or model inference studies. Here, we discuss a selection of traditional and novel techniques to transform data of varied quality and enable quantitative comparison with mathematical models. This review is intended both to inform the use of these model analysis methods, focused on parameter estimation, and to help guide the choice of method to use for a given study based on the type of data available. Applying techniques such as normalization or optimal scaling may significantly improve the utility of current biological data in model-based study and allow greater integration between disparate types of data. Copyright © 2013 Elsevier Inc. All rights reserved.
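    One of the simplest techniques in this family is optimal scaling of relative (arbitrary-unit) data onto model predictions before computing a fit score. The closed-form scale factor below is a standard least-squares result; the data values are invented.

    ```python
    import numpy as np

    def scaled_sse(model, data):
        """Compare a model prediction with relative data: find the scale factor s
        minimizing ||s*data - model||^2, then return the residual sum of squares and s."""
        model, data = np.asarray(model, float), np.asarray(data, float)
        s = np.dot(model, data) / np.dot(data, data)   # closed-form optimal scale
        resid = s * data - model
        return np.sum(resid**2), s

    # Model predicts absolute concentrations; data are normalized fluorescence.
    prediction = [0.2, 1.1, 2.3, 3.0, 2.1]
    fluorescence = [0.1, 0.5, 1.0, 1.3, 0.9]
    sse, scale = scaled_sse(prediction, fluorescence)
    print(f"optimal scale = {scale:.2f}, SSE = {sse:.3f}")
    ```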

  11. Evaluating quantitative and qualitative models: an application for nationwide water erosion assessment in Ethiopia

    NARCIS (Netherlands)

    Sonneveld, B.G.J.S.; Keyzer, M.A.; Stroosnijder, L.

    2011-01-01

    This paper tests the candidacy of one qualitative response model and two quantitative models for a nationwide water erosion hazard assessment in Ethiopia. After a descriptive comparison of model characteristics the study conducts a statistical comparison to evaluate the explanatory power of the mode

  13. Statistical analysis of probabilistic models of software product lines with quantitative constraints

    DEFF Research Database (Denmark)

    Beek, M.H. ter; Legay, A.; Lluch Lafuente, Alberto

    2015-01-01

    We investigate the suitability of statistical model checking for the analysis of probabilistic models of software product lines with complex quantitative constraints and advanced feature installation options. Such models are specified in the feature-oriented language QFLan, a rich process algebra...

  15. A suite of models to support the quantitative assessment of spread in pest risk analysis

    NARCIS (Netherlands)

    Robinet, C.; Kehlenbeck, H.; Werf, van der W.

    2012-01-01

    In the frame of the EU project PRATIQUE (KBBE-2007-212459 Enhancements of pest risk analysis techniques) a suite of models was developed to support the quantitative assessment of spread in pest risk analysis. This dataset contains the model codes (R language) for the four models in the suite. Three

  16. The power of a good idea: quantitative modeling of the spread of ideas from epidemiological models

    CERN Document Server

    Bettencourt, Luís M. A.; Cintrón-Arias, Ariel; Kaiser, David I.; Castillo-Chávez, Carlos

    2005-01-01

    The population dynamics underlying the diffusion of ideas hold many qualitative similarities to those involved in the spread of infections. In spite of much suggestive evidence this analogy is hardly ever quantified in useful ways. The standard benefit of modeling epidemics is the ability to estimate quantitatively population average parameters, such as interpersonal contact rates, incubation times, duration of infectious periods, etc. In most cases such quantities generalize naturally to the spread of ideas and provide a simple means of quantifying sociological and behavioral patterns. Here we apply several paradigmatic models of epidemics to empirical data on the advent and spread of Feynman diagrams through the theoretical physics communities of the USA, Japan, and the USSR in the period immediately after World War II. This test case has the advantage of having been studied historically in great detail, which allows validation of our results. We estimate the effectiveness of adoption of the idea in the thr...
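    In the simplest of those paradigmatic epidemic models, the SIR equations, "infection" corresponds to adopting the idea and "recovery" to abandoning it. A minimal sketch with arbitrary parameters (not the values estimated in the paper):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def sir(t, y, beta, gamma):
        """SIR dynamics: S susceptible, I active adopters, R former adopters."""
        S, I, R = y
        n = S + I + R
        return [-beta * S * I / n, beta * S * I / n - gamma * I, gamma * I]

    beta, gamma = 0.4, 0.05          # contact/adoption rate and abandonment rate (per month)
    sol = solve_ivp(sir, (0, 120), y0=[999, 1, 0], args=(beta, gamma), dense_output=True)
    months = np.linspace(0, 120, 5)
    print(np.round(sol.sol(months), 1))   # rows: S, I, R at 0, 30, 60, 90, 120 months
    ```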

  17. Process of quantitative evaluation of validity of rock cutting model

    Directory of Open Access Journals (Sweden)

    Jozef Futó

    2012-12-01

    Full Text Available Most complex technical systems, including the rock cutting process, are very difficult to describe mathematically due to limited human recognition abilities, depending on the achieved state of natural sciences and technology. A confrontation between the conception (model) and the real system often arises in the investigation of the rock cutting process. Identification represents determination of the system, based on its input and output in a specified system class, in a manner that obtains a system equivalent to the explored system. In the case of rock cutting, the qualities of the model derived from a conventional energy theory of rock cutting are compared to the qualities of non-standard models obtained by scanning the acoustic signal, an accompanying effect of the surroundings in the rock cutting process, via calculated characteristics of the acoustic signal. The paper focuses on optimization using the specific cutting energy and on the possibility of optimization using the accompanying acoustic signal, namely one of its characteristics, i.e. the volume of the total signal M, representing the result of the system identification.

  18. Quantitative modeling of Cerenkov light production efficiency from medical radionuclides.

    Science.gov (United States)

    Beattie, Bradley J; Thorek, Daniel L J; Schmidtlein, Charles R; Pentlow, Keith S; Humm, John L; Hielscher, Andreas H

    2012-01-01

    There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte-Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods in routine current use.
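    For orientation, the Frank-Tamm expression for the number of Cerenkov photons emitted per unit path length, integrated over a detection band with constant refractive index, can be evaluated as below. The values are illustrative only; the paper's models additionally account for the full β-particle energy spectrum and particle transport.

    ```python
    import math

    ALPHA = 1.0 / 137.036          # fine-structure constant

    def cerenkov_photons_per_mm(beta, n=1.33, lam1=400e-9, lam2=700e-9, z=1):
        """Frank-Tamm photon yield per mm of path for a particle of charge z*e and
        speed beta*c in a medium of constant refractive index n, integrated between
        wavelengths lam1 and lam2 (metres)."""
        if beta * n <= 1.0:
            return 0.0                               # below the Cerenkov threshold
        per_m = (2.0 * math.pi * ALPHA * z**2
                 * (1.0 / lam1 - 1.0 / lam2)
                 * (1.0 - 1.0 / (beta**2 * n**2)))
        return per_m * 1e-3

    # A near-relativistic beta particle in water yields on the order of 20 photons/mm.
    print(f"{cerenkov_photons_per_mm(beta=0.97):.1f} photons/mm")
    ```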

  19. Exploiting linkage disequilibrium in statistical modelling in quantitative genomics

    DEFF Research Database (Denmark)

    Wang, Lei

    Alleles at two loci are said to be in linkage disequilibrium (LD) when they are correlated or statistically dependent. Genomic prediction and gene mapping rely on the existence of LD between genetic markers and causal variants of complex traits. In the first part of the thesis, a novel method to quantify and visualize local variation in LD along chromosomes is described, and applied to characterize LD patterns at the local and genome-wide scale in three Danish pig breeds. In the second part, different ways of taking LD into account in genomic prediction models are studied. One approach is to use the recently proposed antedependence models, which treat neighbouring marker effects as correlated; another approach involves use of haplotype block information derived using the program Beagle. The overall conclusion is that taking LD information into account in genomic prediction models potentially improves...
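    For readers unfamiliar with the quantity being mapped, pairwise LD between two biallelic loci is commonly summarized by the r-squared statistic; a minimal sketch from haplotype counts (toy numbers, not the pig-breed data):

    ```python
    def r_squared(haplotype_counts):
        """r^2 LD statistic from counts of the four haplotypes AB, Ab, aB, ab."""
        counts = haplotype_counts            # dict with keys 'AB', 'Ab', 'aB', 'ab'
        n = sum(counts.values())
        p_a = (counts['AB'] + counts['Ab']) / n      # frequency of allele A at locus 1
        p_b = (counts['AB'] + counts['aB']) / n      # frequency of allele B at locus 2
        d = counts['AB'] / n - p_a * p_b             # disequilibrium coefficient D
        return d**2 / (p_a * (1 - p_a) * p_b * (1 - p_b))

    print(f"r^2 = {r_squared({'AB': 45, 'Ab': 5, 'aB': 10, 'ab': 40}):.2f}")
    ```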

  20. First principles pharmacokinetic modeling: A quantitative study on Cyclosporin

    DEFF Research Database (Denmark)

    Mošat', Andrej; Lueshen, Eric; Heitzig, Martina

    2013-01-01

    renal and hepatic clearances, elimination half-life, and mass transfer coefficients, to establish drug biodistribution dynamics in all organs and tissues. This multi-scale model satisfies first principles and conservation of mass, species and momentum. Prediction of organ drug bioaccumulation as a function of cardiac output, physiology, pathology or administration route may be possible with the proposed PBPK framework. Successful application of our model-based drug development method may lead to more efficient preclinical trials, accelerated knowledge gain from animal experiments, and shortened time-to-market...

  1. A quantitative magnetospheric model derived from spacecraft magnetometer data

    Science.gov (United States)

    Mead, G. D.; Fairfield, D. H.

    1975-01-01

    The model is derived by making least squares fits to magnetic field measurements from four Imp satellites. It includes four sets of coefficients, representing different degrees of magnetic disturbance as determined by the range of Kp values. The data are fit to a power series expansion in the solar magnetic coordinates and the solar wind-dipole tilt angle, and thus the effects of seasonal north-south asymmetries are contained. The expansion is divergence-free, but unlike the usual scalar potential expansion, the model contains a nonzero curl representing currents distributed within the magnetosphere. The latitude at the earth separating open polar cap field lines from field lines closing on the day side is about 5 deg lower than that determined by previous theoretically derived models. At times of high Kp, additional high-latitude field lines extend back into the tail. Near solstice, the separation latitude can be as low as 75 deg in the winter hemisphere. The average northward component of the external field is much smaller than that predicted by theoretical models; this finding indicates the important effects of distributed currents in the magnetosphere.

  2. Quantitative Research: A Dispute Resolution Model for FTC Advertising Regulation.

    Science.gov (United States)

    Richards, Jef I.; Preston, Ivan L.

    Noting the lack of a dispute mechanism for determining whether an advertising practice is truly deceptive without generating the costs and negative publicity produced by traditional Federal Trade Commission (FTC) procedures, this paper proposes a model based upon early termination of the issues through jointly commissioned behavioral research. The…

  3. Essays on Quantitative Marketing Models and Monte Carlo Integration Methods

    NARCIS (Netherlands)

    R.D. van Oest (Rutger)

    2005-01-01

    textabstractThe last few decades have led to an enormous increase in the availability of large detailed data sets and in the computing power needed to analyze such data. Furthermore, new models and new computing techniques have been developed to exploit both sources. All of this has allowed for addr

  4. Fuzzy Logic as a Computational Tool for Quantitative Modelling of Biological Systems with Uncertain Kinetic Data.

    Science.gov (United States)

    Bordon, Jure; Moskon, Miha; Zimic, Nikolaj; Mraz, Miha

    2015-01-01

    Quantitative modelling of biological systems has become an indispensable computational approach in the design of novel and the analysis of existing biological systems. However, kinetic data that describe the system's dynamics need to be known in order to obtain relevant results with conventional modelling techniques. These data are often hard or even impossible to obtain. Here, we present a quantitative fuzzy logic modelling approach that is able to cope with unknown kinetic data and thus produce relevant results even though kinetic data are incomplete or only vaguely defined. Moreover, the approach can be used in combination with existing state-of-the-art quantitative modelling techniques, applied only in those parts of the system where kinetic data are missing. The case study of the approach proposed here is performed on the model of the three-gene repressilator.
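    As a toy illustration of how vaguely defined kinetics can still drive a quantitative update, the sketch below uses three fuzzy rules to map a repressor concentration to a production rate via weighted (Sugeno-style) defuzzification. The membership shapes and rule outputs are invented and are not taken from the repressilator model in the paper.

    ```python
    def tri(x, a, b, c):
        """Triangular membership function with peak at b and feet at a and c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def production_rate(repressor):
        """Fuzzy rules: LOW repressor -> fast production, MEDIUM -> moderate, HIGH -> slow."""
        memberships = {
            "low": tri(repressor, -0.5, 0.0, 0.5),
            "medium": tri(repressor, 0.2, 0.5, 0.8),
            "high": tri(repressor, 0.5, 1.0, 1.5),
        }
        rule_outputs = {"low": 1.0, "medium": 0.4, "high": 0.05}   # rate, arbitrary units
        num = sum(memberships[k] * rule_outputs[k] for k in memberships)
        den = sum(memberships.values())
        return num / den if den > 0 else 0.0

    for r in (0.0, 0.3, 0.6, 1.0):
        print(f"repressor={r:.1f} -> production rate {production_rate(r):.2f}")
    ```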

  5. Modeling the Earth's radiation belts. A review of quantitative data based electron and proton models

    Science.gov (United States)

    Vette, J. I.; Teague, M. J.; Sawyer, D. M.; Chan, K. W.

    1979-01-01

    The evolution of quantitative models of the trapped radiation belts is traced to show how the knowledge of the various features has developed, or been clarified, by performing the required analysis and synthesis. The Starfish electron injection introduced problems in the time behavior of the inner zone, but this residue decayed away, and a good model of this depletion now exists. The outer zone electrons were handled statistically by a log normal distribution such that above 5 Earth radii there are no long term changes over the solar cycle. The transition region between the two zones presents the most difficulty, therefore the behavior of individual substorms as well as long term changes must be studied. The latest corrections to the electron environment based on new data are outlined. The proton models have evolved to the point where the solar cycle effect at low altitudes is included. Trends for new models are discussed; the feasibility of predicting substorm injections and solar wind high-speed streams make the modeling of individual events a topical activity.

  7. Quantitative comparisons of satellite observations and cloud models

    Science.gov (United States)

    Wang, Fang

    Microwave radiation interacts directly with precipitating particles and can therefore be used to compare microphysical properties found in models with those found in nature. Lower frequencies (minimization procedures but produce different CWP and RWP. The similarity in Tb can be attributed to comparable Total Water Path (TWP) between the two retrievals while the disagreement in the microphysics is caused by their different degrees of constraint of the cloud/rain ratio by the observations. This situation occurs frequently and takes up 46.9% in the one month 1D-Var retrievals examined. To attain better constrained cloud/rain ratios and improved retrieval quality, this study suggests the implementation of higher microwave frequency channels in the 1D-Var algorithm. Cloud Resolving Models (CRMs) offer an important pathway to interpret satellite observations of microphysical properties of storms. High frequency microwave brightness temperatures (Tbs) respond to precipitating-sized ice particles and can, therefore, be compared with simulated Tbs at the same frequencies. By clustering the Tb vectors at these frequencies, the scene can be classified into distinct microphysical regimes, in other words, cloud types. The properties for each cloud type in the simulated scene are compared to those in the observation scene to identify the discrepancies in microphysics within that cloud type. A convective storm over the Amazon observed by the Tropical Rainfall Measuring Mission (TRMM) is simulated using the Regional Atmospheric Modeling System (RAMS) in a semi-ideal setting, and four regimes are defined within the scene using cluster analysis: the 'clear sky/thin cirrus' cluster, the 'cloudy' cluster, the 'stratiform anvil' cluster and the 'convective' cluster. The relationship between Tb difference of 37 and 85 GHz and Tb at 85 GHz is found to contain important information of microphysical properties such as hydrometeor species and size distributions. Cluster

  8. An Integrated Qualitative and Quantitative Biochemical Model Learning Framework Using Evolutionary Strategy and Simulated Annealing.

    Science.gov (United States)

    Wu, Zujian; Pang, Wei; Coghill, George M

    2015-01-01

    Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by the application of a qualitative model learning approach with an evolution strategy. Kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that our proposed integrative framework is feasible to learn the relationships between biochemical reactants qualitatively and to make the model replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned from the proposed framework, biologists can further perform experimental study in wet laboratory. In this way, natural biochemical systems can be better understood.
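
    The quantitative step described above can be illustrated with a short simulated-annealing sketch that tunes a single kinetic rate so a candidate model reproduces a target time course. The decay model, target rate and cooling schedule are invented for illustration and are not taken from the paper.

```python
# Simulated annealing over one kinetic rate (toy first-order decay model).
import math, random

def simulate(k, t_points):
    """First-order decay A(t) = exp(-k t) for a candidate rate k."""
    return [math.exp(-k * t) for t in t_points]

t_points = [0.0, 0.5, 1.0, 2.0, 4.0]
target = simulate(0.8, t_points)            # stand-in for the target system's behaviour

def cost(k):
    return sum((a - b) ** 2 for a, b in zip(simulate(k, t_points), target))

random.seed(1)
k = best_k = 2.0                            # deliberately poor starting rate
temperature = 1.0
for _ in range(2000):
    candidate = abs(k + random.gauss(0.0, 0.1))          # propose a nearby positive rate
    delta = cost(candidate) - cost(k)
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        k = candidate                                    # accept downhill, or uphill with Boltzmann probability
        if cost(k) < cost(best_k):
            best_k = k
    temperature *= 0.995                                 # geometric cooling schedule
print(f"recovered rate ~ {best_k:.3f} (true value 0.8)")
```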

  9. Quantitative Risk Modeling of Fire on the International Space Station

    Science.gov (United States)

    Castillo, Theresa; Haught, Megan

    2014-01-01

    The International Space Station (ISS) Program has worked to prevent fire events and to mitigate their impacts should they occur. Hardware is designed to reduce sources of ignition, oxygen systems are designed to control leaking, flammable materials are prevented from flying to ISS whenever possible, the crew is trained in fire response, and fire response equipment improvements are sought out and funded. Fire prevention and mitigation are a top ISS Program priority - however, programmatic resources are limited; thus, risk trades are made to ensure an adequate level of safety is maintained onboard the ISS. In support of these risk trades, the ISS Probabilistic Risk Assessment (PRA) team has modeled the likelihood of fire occurring in the ISS pressurized cabin, a phenomenological event that has never before been probabilistically modeled in a microgravity environment. This paper will discuss the genesis of the ISS PRA fire model, its enhancement in collaboration with fire experts, and the results which have informed ISS programmatic decisions and will continue to be used throughout the life of the program.

  10. Quantitative modeling of Cerenkov light production efficiency from medical radionuclides.

    Directory of Open Access Journals (Sweden)

    Bradley J Beattie

    Full Text Available There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte-Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods in routine current use.
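
    The Frank-Tamm relation underlying the production-efficiency model can be sketched directly. The snippet below gives an order-of-magnitude photon yield for an electron in water with a dispersion-free refractive index; it is only a hedged back-of-the-envelope sketch, not the paper's Monte-Carlo transport model.

```python
# Frank-Tamm photon yield per unit path for an electron in water (constant n assumed).
import math

ALPHA = 1.0 / 137.036          # fine-structure constant
ME_C2 = 0.511                  # electron rest energy, MeV
N_WATER = 1.33                 # refractive index, assumed constant over the band

def beta(kinetic_mev):
    gamma = 1.0 + kinetic_mev / ME_C2
    return math.sqrt(1.0 - 1.0 / gamma ** 2)

def photons_per_mm(kinetic_mev, lam1_nm=400.0, lam2_nm=700.0):
    """Cerenkov photons per mm of path emitted in the [lam1, lam2] wavelength band."""
    b = beta(kinetic_mev)
    if b * N_WATER <= 1.0:
        return 0.0                                        # below the Cerenkov threshold
    term = 1.0 - 1.0 / (b ** 2 * N_WATER ** 2)
    per_m = 2.0 * math.pi * ALPHA * (1.0 / (lam1_nm * 1e-9) - 1.0 / (lam2_nm * 1e-9)) * term
    return per_m * 1e-3

for t in (0.1, 0.25, 0.5, 1.0, 2.0):                      # electron kinetic energies, MeV
    print(f"T = {t:4.2f} MeV -> {photons_per_mm(t):6.1f} photons/mm (400-700 nm)")
```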

  11. Software applications toward quantitative metabolic flux analysis and modeling.

    Science.gov (United States)

    Dandekar, Thomas; Fieselmann, Astrid; Majeed, Saman; Ahmed, Zeeshan

    2014-01-01

    Metabolites and their pathways are central for adaptation and survival. Metabolic modeling elucidates in silico all the possible flux pathways (flux balance analysis, FBA) and predicts the actual fluxes under a given situation, further refinement of these models is possible by including experimental isotopologue data. In this review, we initially introduce the key theoretical concepts and different analysis steps in the modeling process before comparing flux calculation and metabolite analysis programs such as C13, BioOpt, COBRA toolbox, Metatool, efmtool, FiatFlux, ReMatch, VANTED, iMAT and YANA. Their respective strengths and limitations are discussed and compared to alternative software. While data analysis of metabolites, calculation of metabolic fluxes, pathways and their condition-specific changes are all possible, we highlight the considerations that need to be taken into account before deciding on a specific software. Current challenges in the field include the computation of large-scale networks (in elementary mode analysis), regulatory interactions and detailed kinetics, and these are discussed in the light of powerful new approaches.
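
    As a concrete illustration of the flux balance analysis (FBA) step reviewed above, the following toy linear programme maximises a biomass flux subject to steady-state stoichiometric constraints. The network, bounds and objective are invented and far smaller than anything the reviewed tools handle.

```python
# Minimal FBA: maximise biomass export subject to S v = 0 and flux bounds.
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 uptake -> A, R2 A -> B, R3 A -> C, R4 B + C -> biomass, R5 biomass export
S = np.array([
    [ 1, -1, -1,  0,  0],   # metabolite A
    [ 0,  1,  0, -1,  0],   # metabolite B
    [ 0,  0,  1, -1,  0],   # metabolite C
    [ 0,  0,  0,  1, -1],   # biomass
])
bounds = [(0, 10), (0, 1000), (0, 1000), (0, 1000), (0, 1000)]   # uptake capped at 10
c = np.zeros(5)
c[4] = -1.0                                   # maximise biomass export = minimise -v5

res = linprog(c, A_eq=S, b_eq=np.zeros(4), bounds=bounds, method="highs")
print("optimal fluxes:", np.round(res.x, 3))  # biomass flux of 5 expected (B and C each need one A)
```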

  12. Production process reproducibility and product quality consistency of transient gene expression in HEK293 cells with anti-PD1 antibody as the model protein.

    Science.gov (United States)

    Ding, Kai; Han, Lei; Zong, Huifang; Chen, Junsheng; Zhang, Baohong; Zhu, Jianwei

    2017-03-01

    Demonstration of reproducibility and consistency of process and product quality is one of the most crucial issues in using transient gene expression (TGE) technology for biopharmaceutical development. In this study, we challenged the production consistency of TGE by expressing nine batches of recombinant IgG antibody in human embryonic kidney 293 cells to evaluate reproducibility including viable cell density, viability, apoptotic status, and antibody yield in cell culture supernatant. Product quality including isoelectric point, binding affinity, secondary structure, and thermal stability was assessed as well. In addition, major glycan forms of antibody from different batches of production were compared to demonstrate glycosylation consistency. Glycan compositions of the antibody harvested at different time periods were also measured to illustrate N-glycan distribution over the culture time. From the results, it has been demonstrated that different TGE batches are reproducible from lot to lot in overall cell growth, product yield, and product qualities including isoelectric point, binding affinity, secondary structure, and thermal stability. Furthermore, major N-glycan compositions are consistent among different TGE batches and conserved during cell culture time.

  13. Contextual sensitivity in scientific reproducibility.

    Science.gov (United States)

    Van Bavel, Jay J; Mende-Siedlecki, Peter; Brady, William J; Reinero, Diego A

    2016-06-07

    In recent years, scientists have paid increasing attention to reproducibility. For example, the Reproducibility Project, a large-scale replication attempt of 100 studies published in top psychology journals found that only 39% could be unambiguously reproduced. There is a growing consensus among scientists that the lack of reproducibility in psychology and other fields stems from various methodological factors, including low statistical power, researcher's degrees of freedom, and an emphasis on publishing surprising positive results. However, there is a contentious debate about the extent to which failures to reproduce certain results might also reflect contextual differences (often termed "hidden moderators") between the original research and the replication attempt. Although psychologists have found extensive evidence that contextual factors alter behavior, some have argued that context is unlikely to influence the results of direct replications precisely because these studies use the same methods as those used in the original research. To help resolve this debate, we recoded the 100 original studies from the Reproducibility Project on the extent to which the research topic of each study was contextually sensitive. Results suggested that the contextual sensitivity of the research topic was associated with replication success, even after statistically adjusting for several methodological characteristics (e.g., statistical power, effect size). The association between contextual sensitivity and replication success did not differ across psychological subdisciplines. These results suggest that researchers, replicators, and consumers should be mindful of contextual factors that might influence a psychological process. We offer several guidelines for dealing with contextual sensitivity in reproducibility.

  14. Spine curve modeling for quantitative analysis of spinal curvature.

    Science.gov (United States)

    Hay, Ori; Hershkovitz, Israel; Rivlin, Ehud

    2009-01-01

    Spine curvature and posture are important for sustaining a healthy back. An incorrect spine configuration can add strain to muscles and put stress on the spine, leading to low back pain (LBP). We propose a new method for analyzing spine curvature in 3D using CT imaging. The proposed method is based on two novel concepts: the spine curvature is derived from the spinal canal centerline, and evaluation of the curve is carried out against a model based on healthy individuals. We show results of curvature analysis for a healthy population, pathological (scoliosis) patients, and patients with nonspecific chronic LBP.
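
    The core geometric quantity, curvature along a 3D centerline, can be sketched as follows on a synthetic curve. The statistical comparison against a healthy-population model described above is not reproduced; the centerline here is invented.

```python
# Discrete curvature of a 3D centerline: kappa = |r' x r''| / |r'|^3.
import numpy as np

t = np.linspace(0, 1, 200)
# Synthetic "spinal canal centerline": a gentle S-shape along the vertical axis
centerline = np.column_stack([0.02 * np.sin(4 * np.pi * t), np.zeros_like(t), t])

d1 = np.gradient(centerline, t, axis=0)              # first derivative r'(t)
d2 = np.gradient(d1, t, axis=0)                       # second derivative r''(t)
cross = np.cross(d1, d2)
curvature = np.linalg.norm(cross, axis=1) / np.linalg.norm(d1, axis=1) ** 3

print(f"max curvature {curvature.max():.2f} per unit length "
      f"at relative height {t[curvature.argmax()]:.2f}")
```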

  15. Modeling Cancer Metastasis using Global, Quantitative and Integrative Network Biology

    DEFF Research Database (Denmark)

    Schoof, Erwin; Erler, Janine

    phosphorylation dynamics in a given biological sample. In Chapter III, we move into Integrative Network Biology, where, by combining two fundamental technologies (MS & NGS), we can obtain more in-depth insights into the links between cellular phenotype and genotype. Article 4 describes the proof...... cancer networks using Network Biology. Technologies key to this, such as Mass Spectrometry (MS), Next-Generation Sequencing (NGS) and High-Content Screening (HCS) are briefly described. In Chapter II, we cover how signaling networks and mutational data can be modeled in order to gain a better...

  16. Quantitative Model of microRNA-mRNA interaction

    Science.gov (United States)

    Noorbakhsh, Javad; Lang, Alex; Mehta, Pankaj

    2012-02-01

    MicroRNAs are short RNA sequences that regulate gene expression and protein translation by binding to mRNA. Experimental data reveals the existence of a threshold linear output of protein based on the expression level of microRNA. To understand this behavior, we propose a mathematical model of the chemical kinetics of the interaction between mRNA and microRNA. Using this model we have been able to quantify the threshold linear behavior. Furthermore, we have studied the effect of internal noise, showing the existence of an intermediary regime where the expression level of mRNA and microRNA has the same order of magnitude. In this crossover regime the mRNA translation becomes sensitive to small changes in the level of microRNA, resulting in large fluctuations in protein levels. Our work shows that chemical kinetics parameters can be quantified by studying protein fluctuations. In the future, studying protein levels and their fluctuations can provide a powerful tool to study the competing endogenous RNA hypothesis (ceRNA), in which mRNA crosstalk occurs due to competition over a limited pool of microRNAs.
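
    A generic titration sketch of the kind of kinetics described above (an assumed simplification, not the authors' exact equations): mRNA and microRNA bind and are co-degraded, which produces the threshold-linear free-mRNA output as the mRNA transcription rate is varied.

```python
# Titration model: free mRNA stays low until transcription exceeds microRNA production.
import numpy as np
from scipy.integrate import solve_ivp

GAMMA_M, GAMMA_S, K_ON, K_S = 0.1, 0.1, 1.0, 10.0    # assumed rates; K_S = microRNA production

def rhs(t, y, k_m):
    m, s = y                                          # free mRNA, free microRNA
    binding = K_ON * m * s                            # binding removes one of each
    return [k_m - GAMMA_M * m - binding,
            K_S - GAMMA_S * s - binding]

for k_m in (2.0, 5.0, 9.0, 11.0, 15.0, 20.0):          # mRNA transcription rates
    sol = solve_ivp(rhs, (0, 500), [0.0, 0.0], args=(k_m,), rtol=1e-8)
    print(f"k_m = {k_m:5.1f}  ->  free mRNA ~ {sol.y[0, -1]:8.3f}")
# Below k_m ~ K_S the free mRNA (and hence protein) stays near zero;
# above the threshold it grows roughly linearly with k_m.
```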

  17. Quantitative Genetics and Functional-Structural Plant Growth Models: Simulation of Quantitative Trait Loci Detection for Model Parameters and Application to Potential Yield Optimization

    CERN Document Server

    Letort, Veronique; Cournède, Paul-Henry; De Reffye, Philippe; Courtois, Brigitte; 10.1093/aob/mcm197

    2010-01-01

    Background and Aims: Prediction of phenotypic traits from new genotypes under untested environmental conditions is crucial to build simulations of breeding strategies to improve target traits. Although the plant response to environmental stresses is characterized by both architectural and functional plasticity, recent attempts to integrate biological knowledge into genetics models have mainly concerned specific physiological processes or crop models without architecture, and thus may prove limited when studying genotype x environment interactions. Consequently, this paper presents a simulation study introducing genetics into a functional-structural growth model, which gives access to more fundamental traits for quantitative trait loci (QTL) detection and thus to promising tools for yield optimization. Methods: The GreenLab model was selected as a reasonable choice to link growth model parameters to QTL. Virtual genes and virtual chromosomes were defined to build a simple genetic model that drove the settings ...

  18. A quantitative and dynamic model for plant stem cell regulation.

    Directory of Open Access Journals (Sweden)

    Florian Geier

    Full Text Available Plants maintain pools of totipotent stem cells throughout their entire life. These stem cells are embedded within specialized tissues called meristems, which form the growing points of the organism. The shoot apical meristem of the reference plant Arabidopsis thaliana is subdivided into several distinct domains, which execute diverse biological functions, such as tissue organization, cell proliferation and differentiation. The number of cells required for growth and organ formation changes over the course of a plant's life, while the structure of the meristem remains remarkably constant. Thus, regulatory systems must be in place, which allow for an adaptation of cell proliferation within the shoot apical meristem, while maintaining the organization at the tissue level. To advance our understanding of this dynamic tissue behavior, we measured domain sizes as well as cell division rates of the shoot apical meristem under various environmental conditions, which cause adaptations in meristem size. Based on our results we developed a mathematical model to explain the observed changes by a cell pool size dependent regulation of cell proliferation and differentiation, which is able to correctly predict CLV3 and WUS over-expression phenotypes. While the model shows stem cell homeostasis under constant growth conditions, it predicts a variation in stem cell number under changing conditions. Consistent with our experimental data this behavior is correlated with variations in cell proliferation. Therefore, we investigate different signaling mechanisms, which could stabilize stem cell number despite variations in cell proliferation. Our results shed light onto the dynamic constraints of stem cell pool maintenance in the shoot apical meristem of Arabidopsis in different environmental conditions and developmental states.

  19. Application of non-quantitative modelling in the analysis of a network warfare environment

    CSIR Research Space (South Africa)

    Veerasamy, N

    2008-07-01

    Full Text Available of the various interacting components, a model to better understand the complexity in a network warfare environment would be beneficial. Non-quantitative modelling is a useful method to better characterize the field due to the rich ideas that can be generated...

  20. Mapping quantitative trait loci in a selectively genotyped outbred population using a mixture model approach

    NARCIS (Netherlands)

    Johnson, David L.; Jansen, Ritsert C.; Arendonk, Johan A.M. van

    1999-01-01

    A mixture model approach is employed for the mapping of quantitative trait loci (QTL) for the situation where individuals, in an outbred population, are selectively genotyped. Maximum likelihood estimation of model parameters is obtained from an Expectation-Maximization (EM) algorithm facilitated by
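
    The kind of mixture-model likelihood maximised by EM in this setting can be sketched generically, reduced here to a two-component Gaussian mixture with invented data; the genetic structure of the selective-genotyping application is not modelled.

```python
# EM for a two-component Gaussian mixture with a common standard deviation.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
# Phenotypes of individuals carrying one of two unobserved QTL genotype classes
y = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(2.0, 1.0, 50)])

pi, mu1, mu2, sigma = 0.5, -1.0, 1.0, 1.0            # crude starting values
for _ in range(200):
    # E-step: posterior probability that each record comes from component 2
    w2 = pi * norm.pdf(y, mu2, sigma)
    w2 = w2 / (w2 + (1 - pi) * norm.pdf(y, mu1, sigma))
    # M-step: update mixing proportion, component means and common standard deviation
    pi = w2.mean()
    mu1 = np.sum((1 - w2) * y) / np.sum(1 - w2)
    mu2 = np.sum(w2 * y) / np.sum(w2)
    sigma = np.sqrt(np.mean((1 - w2) * (y - mu1) ** 2 + w2 * (y - mu2) ** 2))
print(f"pi = {pi:.2f}, means = ({mu1:.2f}, {mu2:.2f}), sigma = {sigma:.2f}")
```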

  1. Quantitative hardware prediction modeling for hardware/software co-design

    NARCIS (Netherlands)

    Meeuws, R.J.

    2012-01-01

    Hardware estimation is an important factor in Hardware/Software Co-design. In this dissertation, we present the Quipu Modeling Approach, a high-level quantitative prediction model for HW/SW Partitioning using statistical methods. Our approach uses linear regression between software complexity metric

  2. Evaluation of a quantitative phosphorus transport model for potential improvement of southern phosphorus indices

    Science.gov (United States)

    Due to a shortage of available phosphorus (P) loss data sets, simulated data from a quantitative P transport model could be used to evaluate a P-index. However, the model would need to accurately predict the P loss data sets that are available. The objective of this study was to compare predictions ...

  3. Development of probabilistic models for quantitative pathway analysis of plant pests introduction for the EU territory

    NARCIS (Netherlands)

    Douma, J.C.; Robinet, C.; Hemerik, L.; Mourits, M.C.M.; Roques, A.; Werf, van der W.

    2015-01-01

    The aim of this report is to provide EFSA with probabilistic models for quantitative pathway analysis of plant pest introduction for the EU territory through non-edible plant products or plants. We first provide a conceptualization of two types of pathway models. The individual based PM simulates an

  4. Hyperbolic L2-modules with Reproducing Kernels

    Institute of Scientific and Technical Information of China (English)

    David EELBODE; Frank SOMMEN

    2006-01-01

    Abstract In this paper, the Dirac operator on the Klein model for the hyperbolic space is considered. A function space containing L2-functions on the sphere S^(m-1) in R^m, which are boundary values of solutions for this operator, is defined, and it is proved that this gives rise to a Hilbert module with a reproducing kernel.

  5. Quantitative Modeling of Entangled Polymer Rheology: Experiments, Tube Models and Slip-Link Simulations

    Science.gov (United States)

    Desai, Priyanka Subhash

    Rheology properties are sensitive indicators of molecular structure and dynamics. The relationship between rheology and polymer dynamics is captured in the constitutive model, which, if accurate and robust, would greatly aid molecular design and polymer processing. This dissertation is thus focused on building accurate and quantitative constitutive models that can help predict linear and non-linear viscoelasticity. In this work, we have used a multi-pronged approach based on the tube theory, coarse-grained slip-link simulations, and advanced polymeric synthetic and characterization techniques, to confront some of the outstanding problems in entangled polymer rheology. First, we modified simple tube based constitutive equations in extensional rheology and developed functional forms to test the effect of Kuhn segment alignment on a) tube diameter enlargement and b) monomeric friction reduction between subchains. We, then, used these functional forms to model extensional viscosity data for polystyrene (PS) melts and solutions. We demonstrated that the idea of reduction in segmental friction due to Kuhn alignment is successful in explaining the qualitative difference between melts and solutions in extension as revealed by recent experiments on PS. Second, we compiled literature data and used it to develop a universal tube model parameter set and prescribed their values and uncertainties for 1,4-PBd by comparing linear viscoelastic G' and G" mastercurves for 1,4-PBds of various branching architectures. The high frequency transition region of the mastercurves superposed very well for all the 1,4-PBds irrespective of their molecular weight and architecture, indicating universality in high frequency behavior. Therefore, all three parameters of the tube model were extracted from this high frequency transition region alone. Third, we compared predictions of two versions of the tube model, Hierarchical model and BoB model against linear viscoelastic data of blends of 1,4-PBd

  6. High-response piezoelectricity modeled quantitatively near a phase boundary

    Science.gov (United States)

    Newns, Dennis M.; Kuroda, Marcelo A.; Cipcigan, Flaviu S.; Crain, Jason; Martyna, Glenn J.

    2017-01-01

    Interconversion of mechanical and electrical energy via the piezoelectric effect is fundamental to a wide range of technologies. The discovery in the 1990s of giant piezoelectric responses in certain materials has therefore opened new application spaces, but the origin of these properties remains a challenge to our understanding. A key role is played by the presence of a structural instability in these materials at compositions near the "morphotropic phase boundary" (MPB) where the crystal structure changes abruptly and the electromechanical responses are maximal. Here we formulate a simple, unified theoretical description which accounts for extreme piezoelectric response, its observation at compositions near the MPB, accompanied by ultrahigh dielectric constant and mechanical compliances with rather large anisotropies. The resulting model, based upon a Landau free energy expression, is capable of treating the important domain engineered materials and is found to be predictive while maintaining simplicity. It therefore offers a general and powerful means of accounting for the full set of signature characteristics in these functional materials including volume conserving sum rules and strong substrate clamping effects.
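
    A generic Landau free-energy expansion of the kind referred to above is shown below; the coefficients and the exact coupling terms used in the paper are not reproduced, so this is only the standard textbook form.

```latex
% Generic Landau free-energy density with a polarization-strain coupling
% (illustrative textbook form; the paper's coefficients are not reproduced):
\[
  F(P,\epsilon) \;=\; \tfrac{1}{2}\alpha P^{2} + \tfrac{1}{4}\beta P^{4}
  + \tfrac{1}{6}\gamma P^{6} - q\,\epsilon P^{2} + \tfrac{1}{2} c\,\epsilon^{2} - E P ,
\]
% where P is the polarization, \epsilon the strain, E the applied electric field,
% and the q-term is the electrostrictive coupling linking the electrical and mechanical responses.
```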

  7. A quantitative confidence signal detection model: 1. Fitting psychometric functions.

    Science.gov (United States)

    Yi, Yongwoo; Merfeld, Daniel M

    2016-04-01

    Perceptual thresholds are commonly assayed in the laboratory and clinic. When precision and accuracy are required, thresholds are quantified by fitting a psychometric function to forced-choice data. The primary shortcoming of this approach is that it typically requires 100 trials or more to yield accurate (i.e., small bias) and precise (i.e., small variance) psychometric parameter estimates. We show that confidence probability judgments combined with a model of confidence can yield psychometric parameter estimates that are markedly more precise and/or markedly more efficient than conventional methods. Specifically, both human data and simulations show that including confidence probability judgments for just 20 trials can yield psychometric parameter estimates that match the precision of those obtained from 100 trials using conventional analyses. Such an efficiency advantage would be especially beneficial for tasks (e.g., taste, smell, and vestibular assays) that require more than a few seconds for each trial, but this potential benefit could accrue for many other tasks. Copyright © 2016 the American Physiological Society.
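
    For contrast with the confidence-based approach, the sketch below fits the conventional baseline: a cumulative-Gaussian psychometric function fitted to binary forced-choice data by maximum likelihood. The stimulus levels, trial counts and true parameters are invented.

```python
# Maximum-likelihood fit of a cumulative-Gaussian psychometric function.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
true_mu, true_sigma = 0.5, 1.2                       # assumed bias and threshold (sigma)
stimuli = np.repeat(np.linspace(-4, 4, 9), 12)       # 9 levels x 12 trials each
responses = rng.random(stimuli.size) < norm.cdf((stimuli - true_mu) / true_sigma)

def negloglik(params):
    mu, log_sigma = params
    p = norm.cdf((stimuli - mu) / np.exp(log_sigma))
    p = np.clip(p, 1e-9, 1 - 1e-9)                    # guard the logarithm
    return -np.sum(responses * np.log(p) + (~responses) * np.log(1 - p))

fit = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"estimated mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f} (true {true_mu}, {true_sigma})")
```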

  8. Toward a quantitative model of metamorphic nucleation and growth

    Science.gov (United States)

    Gaidies, F.; Pattison, D. R. M.; de Capitani, C.

    2011-11-01

    The formation of metamorphic garnet during isobaric heating is simulated on the basis of the classical nucleation and reaction rate theories and Gibbs free energy dissipation in a multi-component model system. The relative influences of interfacial energy, chemical mobility at the surface of garnet clusters, heating rate and pressure on interface-controlled garnet nucleation and growth kinetics are studied. It is found that the interfacial energy controls the departure from equilibrium required to nucleate garnet if attachment and detachment processes at the surface of garnet limit the overall crystallization rate. The interfacial energy for nucleation of garnet in a metapelite of the aureole of the Nelson Batholith, BC, is estimated to range between 0.03 and 0.3 J/m^2 at a pressure of ca. 3,500 bar. This corresponds to a thermal overstep of the garnet-forming reaction of ca. 30°C. The influence of the heating rate on thermal overstepping is negligible. A significant feedback is predicted between chemical fractionation associated with garnet formation and the kinetics of nucleation and crystal growth of garnet giving rise to its lognormal-shaped crystal size distribution.
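
    The classical nucleation theory expressions that such simulations build on take the standard form below (generic textbook notation; the calibrated parameter values of the study are not reproduced):

```latex
% Standard classical nucleation theory expressions (generic form):
\[
  \Delta G^{*} \;=\; \frac{16\pi\,\sigma^{3}}{3\,(\Delta G_{v})^{2}},
  \qquad
  I \;=\; I_{0}\exp\!\left(-\frac{\Delta G^{*}}{k_{B}T}\right),
\]
% with interfacial energy \sigma, volumetric Gibbs free-energy driving force \Delta G_v,
% nucleation barrier \Delta G^*, and nucleation rate I.
```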

  9. Impact of implementation choices on quantitative predictions of cell-based computational models

    Science.gov (United States)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.

  10. Quantitative computed tomography and cranial burr holes: a model to evaluate the quality of cranial reconstruction in humans.

    Science.gov (United States)

    Worm, Paulo Valdeci; Ferreira, Nelson Pires; Ferreira, Marcelo Paglioli; Kraemer, Jorge Luiz; Lenhardt, Rene; Alves, Ronnie Peterson Marcondes; Wunderlich, Ricardo Castilho; Collares, Marcus Vinicius Martins

    2012-05-01

    Current methods to evaluate the biologic development of bone grafts in human beings do not quantify results accurately. Cranial burr holes are standardized critical bone defects, and the differences between bone powder and bone grafts have been determined in numerous experimental studies. This study evaluated quantitative computed tomography (QCT) as a method to objectively measure cranial bone density after cranial reconstruction with autografts. In each of 8 patients, 2 of 4 surgical burr holes were reconstructed with autogenous wet bone powder collected during skull trephination, and the other 2 holes, with a circular cortical bone fragment removed from the inner table of the cranial bone flap. After 12 months, the reconstructed areas and a sample of normal bone were studied using three-dimensional QCT; bone density was measured in Hounsfield units (HU). Mean (SD) bone density was 1535.89 (141) HU for normal bone (P < 0.0001), 964 (176) HU for bone fragments, and 453 (241) HU for bone powder (P < 0.001). As expected, the density of the bone fragment graft was consistently greater than that of bone powder. Results confirm the accuracy and reproducibility of QCT, already demonstrated for bone in other locations, and suggest that it is an adequate tool to evaluate cranial reconstructions. The combination of QCT and cranial burr holes is an excellent model to accurately measure the quality of new bone in cranial reconstructions and also seems to be an appropriate choice of experimental model to clinically test any cranial bone or bone substitute reconstruction.

  11. Repeatability and reproducibility of Population Viability Analysis (PVA) and the implications for threatened species management

    Directory of Open Access Journals (Sweden)

    Clare Morrison

    2016-08-01

    Full Text Available Conservation triage focuses on prioritizing species, populations or habitats based on urgency, biodiversity benefits, recovery potential as well as cost. Population Viability Analysis (PVA) is frequently used in population focused conservation prioritizations. The critical nature of many of these management decisions requires that PVA models are repeatable and reproducible to reliably rank species and/or populations quantitatively. This paper assessed the repeatability and reproducibility of a subset of previously published PVA models. We attempted to rerun baseline models from 90 publicly available PVA studies published between 2000-2012 using the two most common PVA modelling software programs, VORTEX and RAMAS-GIS. Forty percent (n = 36) failed, 50% (n = 45) were both repeatable and reproducible, and 10% (n = 9) had missing baseline models. Repeatability was not linked to taxa, IUCN category, PVA program version used, year published or the quality of publication outlet, suggesting that the problem is systemic within the discipline. Complete and systematic presentation of PVA parameters and results are needed to ensure that the scientific input into conservation planning is both robust and reliable, thereby increasing the chances of making decisions that are both beneficial and defensible. The implications for conservation triage may be far reaching if population viability models cannot be reproduced with confidence, thus undermining their intended value.

  12. A quantitative quantum chemical model of the Dewar-Knott color rule for cationic diarylmethanes

    Science.gov (United States)

    Olsen, Seth

    2012-04-01

    We document the quantitative manifestation of the Dewar-Knott color rule in a four-electron, three-orbital state-averaged complete active space self-consistent field (SA-CASSCF) model of a series of bridge-substituted cationic diarylmethanes. We show that the lowest excitation energies calculated using multireference perturbation theory based on the model are linearly correlated with the development of hole density in an orbital localized on the bridge, and the depletion of pair density in the same orbital. We quantitatively express the correlation in the form of a generalized Hammett equation.

  13. A quantitative model of human DNA base excision repair. I. mechanistic insights

    OpenAIRE

    Sokhansanj, Bahrad A.; Rodrigue, Garry R.; Fitch, J. Patrick; David M Wilson

    2002-01-01

    Base excision repair (BER) is a multistep process involving the sequential activity of several proteins that cope with spontaneous and environmentally induced mutagenic and cytotoxic DNA damage. Quantitative kinetic data on single proteins of BER have been used here to develop a mathematical model of the BER pathway. This model was then employed to evaluate mechanistic issues and to determine the sensitivity of pathway throughput to altered enzyme kinetics. Notably, the model predicts conside...

  14. A Quantitative Geochemical Target for Modeling the Formation of the Earth and Moon

    Science.gov (United States)

    Boyce, Jeremy W.; Barnes, Jessica J.; McCubbin, Francis M.

    2017-01-01

    The past decade has been one of geochemical, isotopic, and computational advances that are bringing the laboratory measurements and computational modeling neighborhoods of the Earth-Moon community to ever closer proximity. We are now, however, in a position to become even better neighbors: modelers can generate testable hypotheses for geochemists, and geochemists can provide quantitative targets for modelers. Here we present a robust example of the latter, based on Cl isotope measurements of mare basalts.

  15. Quantitative photoacoustic tomography using forward and adjoint Monte Carlo models of radiance

    CERN Document Server

    Hochuli, Roman; Arridge, Simon; Cox, Ben

    2016-01-01

    Forward and adjoint Monte Carlo (MC) models of radiance are proposed for use in model-based quantitative photoacoustic tomography. A 2D radiance MC model using a harmonic angular basis is introduced and validated against analytic solutions for the radiance in heterogeneous media. A gradient-based optimisation scheme is then used to recover 2D absorption and scattering coefficients distributions from simulated photoacoustic measurements. It is shown that the functional gradients, which are a challenge to compute efficiently using MC models, can be calculated directly from the coefficients of the harmonic angular basis used in the forward and adjoint models. This work establishes a framework for transport-based quantitative photoacoustic tomography that can fully exploit emerging highly parallel computing architectures.

  16. A method to isolate bacterial communities and characterize ecosystems from food products: Validation and utilization as a reproducible chicken meat model.

    Science.gov (United States)

    Rouger, Amélie; Remenant, Benoit; Prévost, Hervé; Zagorec, Monique

    2017-04-17

    Influenced by production and storage processes and by seasonal changes the diversity of meat products microbiota can be very variable. Because microbiotas influence meat quality and safety, characterizing and understanding their dynamics during processing and storage is important for proposing innovative and efficient storage conditions. Challenge tests are usually performed using meat from the same batch, inoculated at high levels with one or few strains. Such experiments do not reflect the true microbial situation, and the global ecosystem is not taken into account. Our purpose was to constitute live stocks of chicken meat microbiotas to create standard and reproducible ecosystems. We searched for the best method to collect contaminating bacterial communities from chicken cuts to store as frozen aliquots. We tested several methods to extract DNA of these stored communities for subsequent PCR amplification. We determined the best moment to collect bacteria in sufficient amounts during the product shelf life. Results showed that the rinsing method associated to the use of Mobio DNA extraction kit was the most reliable method to collect bacteria and obtain DNA for subsequent PCR amplification. Then, 23 different chicken meat microbiotas were collected using this procedure. Microbiota aliquots were stored at -80°C without important loss of viability. Their characterization by cultural methods confirmed the large variability (richness and abundance) of bacterial communities present on chicken cuts. Four of these bacterial communities were used to estimate their ability to regrow on meat matrices. Challenge tests performed on sterile matrices showed that these microbiotas were successfully inoculated and could overgrow the natural microbiota of chicken meat. They can therefore be used for performing reproducible challenge tests mimicking a true meat ecosystem and enabling the possibility to test the influence of various processing or storage conditions on complex meat

  17. A Novel Quantitative Analysis Model for Information System Survivability Based on Conflict Analysis

    Institute of Scientific and Technical Information of China (English)

    WANG Jian; WANG Huiqiang; ZHAO Guosheng

    2007-01-01

    This paper describes a novel quantitative analysis model for system survivability based on conflict analysis, which provides a direct view of the survivable situation. Based on the three-dimensional state space of the conflict, each player's efficiency matrix on its credible motion set can be obtained. The player with the strongest desire initiates the move, and the overall state transition matrix of the information system can then be derived. In addition, the modeling and stability analysis of the conflict can be converted into a Markov analysis process, so the resulting occurrence probabilities of each feasible situation help the players quantitatively judge the likelihood of the situations they pursue in the conflict. Compared with existing methods, which are limited to after-the-fact explanation of a system's survivable situation, the proposed model is better suited to quantitatively analyzing and forecasting the future development of system survivability. The experimental results show that the model can be effectively applied to quantitative survivability analysis and holds good prospects for practical application.

  18. Quantitative genetics model as the unifying model for defining genomic relationship and inbreeding coefficient.

    Science.gov (United States)

    Wang, Chunkao; Da, Yang

    2014-01-01

    The traditional quantitative genetics model was used as the unifying approach to derive six existing and new definitions of genomic additive and dominance relationships. The theoretical differences of these definitions were in the assumptions of equal SNP effects (equivalent to across-SNP standardization), equal SNP variances (equivalent to within-SNP standardization), and expected or sample SNP additive and dominance variances. The six definitions of genomic additive and dominance relationships on average were consistent with the pedigree relationships, but had individual genomic specificity and large variations not observed from pedigree relationships. These large variations may allow finding least related genomes even within the same family for minimizing genomic relatedness among breeding individuals. The six definitions of genomic relationships generally had similar numerical results in genomic best linear unbiased predictions of additive effects (GBLUP) and similar genomic REML (GREML) estimates of additive heritability. Predicted SNP dominance effects and GREML estimates of dominance heritability were similar within definitions assuming equal SNP effects or within definitions assuming equal SNP variance, but had differences between these two groups of definitions. We proposed a new measure of genomic inbreeding coefficient based on parental genomic co-ancestry coefficient and genomic additive correlation as a genomic approach for predicting offspring inbreeding level. This genomic inbreeding coefficient had the highest correlation with pedigree inbreeding coefficient among the four methods evaluated for calculating genomic inbreeding coefficient in a Holstein sample and a swine sample.
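
    One widely used definition of the genomic additive relationship matrix (centring genotypes by expected allele counts and scaling by the expected SNP variance) and the corresponding genomic inbreeding coefficient can be sketched as follows. The genotype matrix is random toy data, not the Holstein or swine samples of the study, and this is only one of the six definitions compared above.

```python
# Genomic additive relationship matrix and genomic inbreeding from its diagonal.
import numpy as np

rng = np.random.default_rng(42)
n_individuals, n_snps = 6, 500
genotypes = rng.integers(0, 3, size=(n_individuals, n_snps)).astype(float)  # 0/1/2 allele counts

p = genotypes.mean(axis=0) / 2.0                      # allele frequency per SNP
Z = genotypes - 2.0 * p                               # centre by expected allele count
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))           # genomic additive relationships

genomic_inbreeding = np.diag(G) - 1.0                 # F_i = G_ii - 1
print("diagonal of G:", np.round(np.diag(G), 3))
print("genomic inbreeding coefficients:", np.round(genomic_inbreeding, 3))
```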

  19. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    Science.gov (United States)

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.

    2014-04-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)^-1, cardiac output = 3, 5, 8 L min^-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5% and the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and range of techniques evaluated. This suggests that
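
    A much-simplified sketch of the idea (a one-compartment model rather than any of the three models compared in the study): a tissue curve is simulated from an assumed arterial input, then flow is recovered both by model fitting and by the qualitative maximum-upslope method. The input function, noise level and tissue parameters are invented.

```python
# One-compartment model fit vs. slope-based MBF estimate on a simulated tissue curve.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 60, 1.0)                                    # seconds, 1 s sampling
aif = 300.0 * (t / 8.0) * np.exp(1 - t / 8.0)                # assumed gamma-variate arterial input (HU)

def tissue_curve(t, flow_ml_min_g, volume=0.15):
    """One-compartment model: C_t = F * conv(AIF, exp(-(F/V) t)), dt = 1 s."""
    f = flow_ml_min_g / 60.0                                 # convert to per-second flow
    kernel = np.exp(-f / volume * t)
    return f * np.convolve(aif, kernel)[: t.size]

rng = np.random.default_rng(3)
measured = tissue_curve(t, 1.0) + rng.normal(0, 0.3, t.size)   # true MBF = 1 ml/(min g)

(mbf_fit,), _ = curve_fit(lambda tt, f: tissue_curve(tt, f), t, measured, p0=[0.5])
mbf_slope = 60.0 * np.max(np.gradient(measured, t)) / np.max(aif)   # max upslope / peak AIF
print(f"model-fit MBF    ~ {mbf_fit:.2f} ml/(min g)")
print(f"slope-method MBF ~ {mbf_slope:.2f} ml/(min g)")
# The simple slope estimate generally underestimates flow, consistent with the study above.
```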

  20. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    Science.gov (United States)

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott; Morris, Richard V.; Ehlmann, Bethany; Dyar, M. Darby

    2017-03-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the laser-induced breakdown spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element's emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple "sub-model" method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then "blending" these "sub-models" into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares (PLS) regression, is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
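
    The blending idea can be illustrated with a toy one-dimensional example using ordinary least squares instead of PLS on full LIBS spectra. The "matrix effect" bend in the signal, the composition split at 50, and the blending window are all invented for illustration.

```python
# Toy "sub-model" regression: separate low/high-range models blended via a full-range model.
import numpy as np

rng = np.random.default_rng(7)
conc = rng.uniform(0, 100, 200)                                    # "true" element concentration
signal = np.where(conc < 50, conc, 50 + 0.5 * (conc - 50))         # matrix-effect-like bend
signal = signal + rng.normal(0, 1.0, conc.size)

def fit(x, y):
    """Ordinary least squares y ~ a*x + b."""
    A = np.vstack([x, np.ones_like(x)]).T
    return np.linalg.lstsq(A, y, rcond=None)[0]

def predict(coef, x):
    return coef[0] * x + coef[1]

low, high = conc < 50, conc >= 50
coef_full = fit(signal, conc)                                      # trained on the full range
coef_low = fit(signal[low], conc[low])                             # sub-model, low compositions
coef_high = fit(signal[high], conc[high])                          # sub-model, high compositions

def blended(x):
    first_pass = predict(coef_full, x)                             # full model picks the range
    w_high = np.clip((first_pass - 40.0) / 20.0, 0.0, 1.0)          # linear blend across 40-60
    return (1 - w_high) * predict(coef_low, x) + w_high * predict(coef_high, x)

test_conc = np.array([10.0, 45.0, 55.0, 90.0])
test_sig = np.where(test_conc < 50, test_conc, 50 + 0.5 * (test_conc - 50))
print("full-model predictions:  ", np.round(predict(coef_full, test_sig), 1))
print("blended sub-model result:", np.round(blended(test_sig), 1))
```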

  1. Multi-objective intelligent coordinating optimization blending system based on qualitative and quantitative synthetic model

    Institute of Scientific and Technical Information of China (English)

    WANG Ya-lin; MA Jie; GUI Wei-hua; YANG Chun-hua; ZHANG Chuan-fu

    2006-01-01

    A multi-objective intelligent coordinating optimization strategy based on a qualitative and quantitative synthetic model for the Pb-Zn sintering blending process was proposed to obtain the optimal mixture ratio. The mechanism and neural network quantitative models for predicting compositions, and rule models for expert reasoning, were constructed based on statistical data and empirical knowledge. An expert reasoning method based on these models was proposed to solve the blending optimization problem, including multi-objective optimization for the first blending process and area optimization for the second blending process, and to determine the optimal mixture ratio that meets the requirement of intelligent coordination. The results show that the qualified rates of agglomerate Pb, Zn and S compositions are increased by 7.1%, 6.5% and 6.9%, respectively, and the fluctuation of sintering permeability is reduced by 7.0%, which effectively stabilizes the agglomerate compositions and the permeability.

  2. The evolution and extinction of the ichthyosaurs from the perspective of quantitative ecospace modelling.

    Science.gov (United States)

    Dick, Daniel G; Maxwell, Erin E

    2015-07-01

    The role of niche specialization and narrowing in the evolution and extinction of the ichthyosaurs has been widely discussed in the literature. However, previous studies have concentrated on a qualitative discussion of these variables only. Here, we use the recently developed approach of quantitative ecospace modelling to provide a high-resolution quantitative examination of the changes in dietary and ecological niche experienced by the ichthyosaurs throughout their evolution in the Mesozoic. In particular, we demonstrate that despite recent discoveries increasing our understanding of taxonomic diversity among the ichthyosaurs in the Cretaceous, when viewed from the perspective of ecospace modelling, a clear trend of ecological contraction is visible as early as the Middle Jurassic. We suggest that this ecospace redundancy, if carried through to the Late Cretaceous, could have contributed to the extinction of the ichthyosaurs. Additionally, our results suggest a novel model to explain ecospace change, termed the 'migration model'.

  3. Reproducibility of Quantitative Structural and Physiological MRI Measurements

    Science.gov (United States)

    2017-08-09

    [Abstract not recoverable from the source record: the extracted text consists of a sentence fragment on manually counting white-matter hyperintensities (WMH) using the freely available Mango software version 4.0 (RRID:SCR_009603) and fragments of a results table of cortical thickness values (e.g., left inferior and superior parietal regions) with confidence intervals and reliability ratings.]

  4. Reproducible Bioinformatics Research for Biologists

    Science.gov (United States)

    This book chapter describes the current Big Data problem in Bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools/techniques that a noncomputational researcher would need to learn to pe...

  5. Evaluation of multichannel reproduced sound

    DEFF Research Database (Denmark)

    Choisel, Sylvain; Wickelmaier, Florian Maria

    2007-01-01

    A study was conducted with the goal of quantifying auditory attributes which underlie listener preference for multichannel reproduced sound. Short musical excerpts were presented in mono, stereo and several multichannel formats to a panel of forty selected listeners. Scaling of auditory attributes...

  6. [Reproducibility of subjective refraction measurement].

    Science.gov (United States)

    Grein, H-J; Schmidt, O; Ritsche, A

    2014-11-01

    Reproducibility of subjective refraction measurement is limited by various factors. The main factors affecting reproducibility include the characteristics of the measurement method and of the subject and the examiner. This article presents the results of a study on this topic, focusing on the reproducibility of subjective refraction measurement in healthy eyes. The results of previous studies are not all presented in the same way by the respective authors and cannot be fully standardized without consulting the original scientific data. To the extent that they are comparable, the results of our study largely correspond with those of previous investigations: During repeated subjective refraction measurement, 95% of the deviation from the mean value was approximately ±0.2 D to ±0.65 D for the spherical equivalent and cylindrical power. The reproducibility of subjective refraction measurement in healthy eyes is limited, even under ideal conditions. Correct assessment of refraction results is only feasible after identifying individual variability. Several measurements are required. Refraction cannot be measured without a tolerance range. The English full-text version of this article is available at SpringerLink (under supplemental).

  7. Reproducible research in computational science.

    Science.gov (United States)

    Peng, Roger D

    2011-12-02

    Computational science has led to exciting new developments, but the nature of the work has exposed limitations in our ability to evaluate published findings. Reproducibility has the potential to serve as a minimum standard for judging scientific claims when full independent replication of a study is not possible.

  8. Universal properties of high-temperature superconductors from real-space pairing: t -J -U model and its quantitative comparison with experiment

    Science.gov (United States)

    Spałek, Józef; Zegrodnik, Michał; Kaczmarczyk, Jan

    2017-01-01

    Selected universal experimental properties of high-temperature superconducting (HTS) cuprates have been singled out in the last decade. One of the pivotal challenges in this field is the designation of a consistent interpretation framework within which we can describe quantitatively the universal features of those systems. Here we analyze in a detailed manner the principal experimental data and compare them quantitatively with the approach based on a single-band model of strongly correlated electrons supplemented with strong antiferromagnetic (super)exchange interaction (the so-called t -J -U model). The model rationale is provided by estimating its microscopic parameters on the basis of the three-band approach for the Cu-O plane. We use our original full Gutzwiller wave-function solution by going beyond the renormalized mean-field theory (RMFT) in a systematic manner. Our approach reproduces very well the observed hole doping (δ ) dependence of the kinetic-energy gain in the superconducting phase, one of the principal non-Bardeen-Cooper-Schrieffer features of the cuprates. The calculated Fermi velocity in the nodal direction is practically δ -independent and its universal value agrees very well with that determined experimentally. Also, a weak doping dependence of the Fermi wave vector leads to an almost constant value of the effective mass in a pure superconducting phase which is both observed in experiment and reproduced within our approach. An assessment of the currently used models (t -J , Hubbard) is carried out and the results of the canonical RMFT as a zeroth-order solution are provided for comparison to illustrate the necessity of the introduced higher-order contributions.
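
    For reference, a generic form of the t-J-U Hamiltonian referred to above is given below in standard notation; the parameter values estimated in the paper from the three-band Cu-O model are not reproduced here.

```latex
% Generic t-J-U Hamiltonian (standard single-band notation):
\[
  \hat{H} \;=\; -t \sum_{\langle i,j\rangle,\sigma}
      \bigl( \hat{c}^{\dagger}_{i\sigma}\hat{c}_{j\sigma} + \mathrm{h.c.} \bigr)
  \;+\; U \sum_{i} \hat{n}_{i\uparrow}\hat{n}_{i\downarrow}
  \;+\; J \sum_{\langle i,j\rangle} \hat{\mathbf{S}}_{i}\!\cdot\!\hat{\mathbf{S}}_{j},
\]
% with nearest-neighbour hopping t, on-site Coulomb repulsion U,
% and antiferromagnetic (super)exchange J.
```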

  9. Human judgment vs. quantitative models for the management of ecological resources.

    Science.gov (United States)

    Holden, Matthew H; Ellner, Stephen P

    2016-07-01

    Despite major advances in quantitative approaches to natural resource management, there has been resistance to using these tools in the actual practice of managing ecological populations. Given a managed system and a set of assumptions, translated into a model, optimization methods can be used to solve for the most cost-effective management actions. However, when the underlying assumptions are not met, such methods can potentially lead to decisions that harm the environment and economy. Managers who develop decisions based on past experience and judgment, without the aid of mathematical models, can potentially learn about the system and develop flexible management strategies. However, these strategies are often based on subjective criteria and equally invalid and often unstated assumptions. Given the drawbacks of both methods, it is unclear whether simple quantitative models improve environmental decision making over expert opinion. In this study, we explore how well students, using their experience and judgment, manage simulated fishery populations in an online computer game and compare their management outcomes to the performance of model-based decisions. We consider harvest decisions generated using four different quantitative models: (1) the model used to produce the simulated population dynamics observed in the game, with the values of all parameters known (as a control), (2) the same model, but with unknown parameter values that must be estimated during the game from observed data, (3) models that are structurally different from those used to simulate the population dynamics, and (4) a model that ignores age structure. Humans on average performed much worse than the models in cases 1-3, but in a small minority of scenarios, models produced worse outcomes than those resulting from students making decisions based on experience and judgment. When the models ignored age structure, they generated poorly performing management decisions, but still outperformed

  10. Quantitative agent based model of user behavior in an Internet discussion forum.

    Directory of Open Access Journals (Sweden)

    Pawel Sobkowicz

    Full Text Available The paper presents an agent based simulation of opinion evolution, based on a nonlinear emotion/information/opinion (E/I/O) individual dynamics, to an actual Internet discussion forum. The goal is to reproduce the results of two-year long observations and analyses of the user communication behavior and of the expressed opinions and emotions, via simulations using an agent based model. The model allowed to derive various characteristics of the forum, including the distribution of user activity and popularity (outdegree and indegree), the distribution of length of dialogs between the participants, their political sympathies and the emotional content and purpose of the comments. The parameters used in the model have intuitive meanings, and can be translated into psychological observables.

  11. Quantitative agent based model of user behavior in an Internet discussion forum.

    Science.gov (United States)

    Sobkowicz, Pawel

    2013-01-01

    The paper presents an agent based simulation of opinion evolution, based on a nonlinear emotion/information/opinion (E/I/O) individual dynamics, to an actual Internet discussion forum. The goal is to reproduce the results of two-year long observations and analyses of the user communication behavior and of the expressed opinions and emotions, via simulations using an agent based model. The model allowed to derive various characteristics of the forum, including the distribution of user activity and popularity (outdegree and indegree), the distribution of length of dialogs between the participants, their political sympathies and the emotional content and purpose of the comments. The parameters used in the model have intuitive meanings, and can be translated into psychological observables.
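
    A heavily simplified agent-based sketch in the spirit of the E/I/O dynamics described above; the update rules, thresholds and parameters are invented and are not those of the paper.

```python
# Toy forum simulation: agents carry an opinion and an emotion, and post when aroused.
import random

random.seed(0)

class Agent:
    def __init__(self):
        self.opinion = random.choice([-1, 1])        # political sympathy
        self.emotion = random.random()               # arousal in [0, 1]

    def read(self, comment_opinion):
        # Disagreement excites, agreement calms (assumed rule)
        if comment_opinion != self.opinion:
            self.emotion = min(1.0, self.emotion + 0.2)
        else:
            self.emotion = max(0.0, self.emotion - 0.05)

agents = [Agent() for _ in range(100)]
activity = [0] * len(agents)

for _ in range(5000):
    author = random.randrange(len(agents))
    if agents[author].emotion > 0.6:                 # only sufficiently aroused agents post
        activity[author] += 1
        agents[author].emotion *= 0.5                # posting releases emotion
        for reader in random.sample(agents, 10):     # a few agents read the comment
            reader.read(agents[author].opinion)

print("most active agents posted", sorted(activity)[-5:], "comments")
print("least active agents posted", sorted(activity)[:5], "comments")
# The uneven spread in activity illustrates the kind of user-activity
# distribution the model above is fitted against.
```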

  12. A Classifier Model based on the Features Quantitative Analysis for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Amir Jamshidnezhad

    2011-01-01

    Full Text Available In recent decades, computer technology has developed considerably in the use of intelligent systems for classification. The development of HCI systems depends heavily on an accurate understanding of emotions. However, facial expressions are difficult to classify with mathematical models because of their natural quality. In this paper, quantitative analysis is used in order to find the most effective feature movements between the selected facial feature points. The features are therefore extracted not only on the basis of psychological studies, but also with quantitative methods, to raise the accuracy of recognition. In this model, fuzzy logic and a genetic algorithm are also used to classify facial expressions. The genetic algorithm is a distinctive attribute of the proposed model, used for tuning the membership functions and increasing the accuracy.

  13. Review on modelling aspects in reversed-phase liquid chromatographic quantitative structure-retention relationships

    Energy Technology Data Exchange (ETDEWEB)

    Put, R. [FABI, Department of Analytical Chemistry and Pharmaceutical Technology, Pharmaceutical Institute, Vrije Universiteit Brussel (VUB), Laarbeeklaan 103, B-1090 Brussels (Belgium); Vander Heyden, Y. [FABI, Department of Analytical Chemistry and Pharmaceutical Technology, Pharmaceutical Institute, Vrije Universiteit Brussel (VUB), Laarbeeklaan 103, B-1090 Brussels (Belgium)], E-mail: yvanvdh@vub.ac.be

    2007-10-29

    In the literature an increasing interest in quantitative structure-retention relationships (QSRR) can be observed. After a short introduction on QSRR and other strategies proposed to deal with the starting-point selection problem prior to method development in reversed-phase liquid chromatography, a number of interesting papers are reviewed, dealing with QSRR models for reversed-phase liquid chromatography. The main focus of this review is on the different modelling methodologies applied and the molecular descriptors used in the QSRR approaches. Besides two semi-quantitative approaches (i.e. principal component analysis and decision trees), these methodologies include artificial neural networks, partial least squares, uninformative variable elimination partial least squares, stochastic gradient boosting for tree-based models, random forests, genetic algorithms, multivariate adaptive regression splines, and two-step multivariate adaptive regression splines.

  14. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    Science.gov (United States)

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
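
    The sub-model idea described above can be illustrated with a short sketch: separate partial least squares regressions are trained on low- and high-concentration subsets, and their predictions are blended near the split. The composition split, component count and linear blending weights below are placeholders, not the actual ChemCam calibration.

        # Illustrative sketch of the "sub-model" idea: train PLS regressions on
        # low- and high-concentration subsets and blend their predictions.  All
        # numeric choices here are placeholders, not the published calibration.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def train_submodels(spectra, conc, split=10.0, n_components=5):
            low, high = conc <= split, conc > split
            full = PLSRegression(n_components=n_components).fit(spectra, conc)
            low_m = PLSRegression(n_components=n_components).fit(spectra[low], conc[low])
            high_m = PLSRegression(n_components=n_components).fit(spectra[high], conc[high])
            return full, low_m, high_m, split

        def blended_predict(models, spectra, blend_width=5.0):
            full, low_m, high_m, split = models
            ref = full.predict(spectra).ravel()     # full-range model picks the regime
            lo = low_m.predict(spectra).ravel()
            hi = high_m.predict(spectra).ravel()
            # Linear blend of the two sub-models around the split concentration.
            w = np.clip((ref - (split - blend_width)) / (2 * blend_width), 0.0, 1.0)
            return (1 - w) * lo + w * hi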

  15. New Quantitative Structure-Activity Relationship Models Improve Predictability of Ames Mutagenicity for Aromatic Azo Compounds.

    Science.gov (United States)

    Manganelli, Serena; Benfenati, Emilio; Manganaro, Alberto; Kulkarni, Sunil; Barton-Maclaren, Tara S; Honma, Masamitsu

    2016-10-01

    Existing Quantitative Structure-Activity Relationship (QSAR) models have limited predictive capabilities for aromatic azo compounds. In this study, 2 new models were built to predict Ames mutagenicity of this class of compounds. The first one made use of descriptors based on simplified molecular input-line entry system (SMILES), calculated with the CORAL software. The second model was based on the k-nearest neighbors algorithm. The statistical quality of the predictions from single models was satisfactory. The performance further improved when the predictions from these models were combined. The prediction results from other QSAR models for mutagenicity were also evaluated. Most of the existing models were found to be good at finding toxic compounds but resulted in many false positive predictions. The 2 new models, specific to this class of compounds, avoid this problem thanks to a larger training set of related compounds and improved algorithms.

  16. Quantitative 3D investigation of Neuronal network in mouse spinal cord model

    Science.gov (United States)

    Bukreeva, I.; Campi, G.; Fratini, M.; Spanò, R.; Bucci, D.; Battaglia, G.; Giove, F.; Bravin, A.; Uccelli, A.; Venturi, C.; Mastrogiacomo, M.; Cedola, A.

    2017-01-01

    The investigation of the neuronal network in mouse spinal cord models represents the basis for the research on neurodegenerative diseases. In this framework, the quantitative analysis of the single elements in different districts is a crucial task. However, conventional 3D imaging techniques do not have enough spatial resolution and contrast to allow for a quantitative investigation of the neuronal network. Exploiting the high coherence and the high flux of synchrotron sources, X-ray Phase-Contrast multiscale-Tomography allows for the 3D investigation of the neuronal microanatomy without any aggressive sample preparation or sectioning. We investigated healthy-mouse neuronal architecture by imaging the 3D distribution of the neuronal network with a spatial resolution of 640 nm. The high quality of the obtained images enables a quantitative study of the neuronal structure on a subject-by-subject basis. We developed and applied a spatial statistical analysis on the motor neurons to obtain quantitative information on their 3D arrangement in the healthy-mouse spinal cord. Then, we compared the obtained results with a mouse model of multiple sclerosis. Our approach paves the way to the creation of a “database” for the characterization of the main features of the neuronal network for a comparative investigation of neurodegenerative diseases and therapies.

  17. Reproducibility of NIF hohlraum measurements

    Science.gov (United States)

    Moody, J. D.; Ralph, J. E.; Turnbull, D. P.; Casey, D. T.; Albert, F.; Bachmann, B. L.; Doeppner, T.; Divol, L.; Grim, G. P.; Hoover, M.; Landen, O. L.; MacGowan, B. J.; Michel, P. A.; Moore, A. S.; Pino, J. E.; Schneider, M. B.; Tipton, R. E.; Smalyuk, V. A.; Strozzi, D. J.; Widmann, K.; Hohenberger, M.

    2015-11-01

    The strategy of experimentally "tuning" the implosion in a NIF hohlraum ignition target towards increasing hot-spot pressure, areal density of compressed fuel, and neutron yield relies on a level of experimental reproducibility. We examine the reproducibility of experimental measurements for a collection of 15 identical NIF hohlraum experiments. The measurements include incident laser power, backscattered optical power, x-ray measurements, hot-electron fraction and energy, and target characteristics. We use exact statistics to set 1-sigma confidence levels on the variations in each of the measurements. Of particular interest are the backscatter and laser-induced hot-spot locations on the hohlraum wall. Hohlraum implosion designs typically include variability specifications [S. W. Haan et al., Phys. Plasmas 18, 051001 (2011)]. We describe our findings and compare with the specifications. This work was performed under the auspices of the U.S. Department of Energy by University of California, Lawrence Livermore National Laboratory under Contract W-7405-Eng-48.

  18. [Study on temperature correctional models of quantitative analysis with near infrared spectroscopy].

    Science.gov (United States)

    Zhang, Jun; Chen, Hua-cai; Chen, Xing-dan

    2005-06-01

    The effect of environment temperature on near infrared spectroscopic quantitative analysis was studied. The temperature correction model was calibrated with 45 wheat samples at different environment temperatures, with the temperature as an external variable. The constant temperature model was calibrated with 45 wheat samples at the same temperature. The predicted results of the two models for the protein contents of wheat samples at different temperatures were compared. The results showed that the mean standard error of prediction (SEP) of the temperature correction model was 0.333, but the SEP of the constant temperature (22 degrees C) model increased as the temperature difference enlarged, reaching 0.602 when that model was used at 4 degrees C. It was suggested that the temperature correction model improves the analysis precision.

  19. Quantitative Safety: Linking Proof-Based Verification with Model Checking for Probabilistic Systems

    CERN Document Server

    Ndukwu, Ukachukwu

    2009-01-01

    This paper presents a novel approach for augmenting proof-based verification with performance-style analysis of the kind employed in state-of-the-art model checking tools for probabilistic systems. Quantitative safety properties, usually specified as probabilistic system invariants and modeled in proof-based environments, are evaluated using bounded model checking techniques. Our specific contributions include the statement of a theorem that is central to model checking safety properties of proof-based systems, the establishment of a procedure, and its full implementation in a prototype system (YAGA) which readily transforms a probabilistic model specified in a proof-based environment into its equivalent verifiable PRISM model equipped with reward structures. The reward structures capture the exact interpretation of the probabilistic invariants and can reveal succinct information about the model during experimental investigations. Finally, we demonstrate the novelty of the technique on a probabilistic library cas...

  20. Comparison of Quantitative Structure-Activity Relationship Model Performances on Carboquinone Derivatives

    Directory of Open Access Journals (Sweden)

    Sorana D. Bolboaca

    2009-01-01

    Full Text Available Quantitative structure-activity relationship (QSAR) models are used to understand how the structure and activity of chemical compounds relate. In the present study, 37 carboquinone derivatives were evaluated and two different QSAR models were developed using members of the Molecular Descriptors Family (MDF) and the Molecular Descriptors Family on Vertices (MDFV). The usual parameters of regression models and the following estimators were defined and calculated in order to analyze the validity and to compare the models: Akaike's information criteria (three parameters), Schwarz (or Bayesian) information criterion, Amemiya prediction criterion, Hannan-Quinn criterion, Kubinyi function, Steiger's Z test, and Akaike's weights. The MDF and MDFV models proved to have the same estimation ability of the goodness-of-fit according to Steiger's Z test. The MDFV model proved to be the best model for the considered carboquinone derivatives according to the defined information and prediction criteria, Kubinyi function, and Akaike's weights.

  1. Modelling Activities In Kinematics Understanding quantitative relations with the contribution of qualitative reasoning

    Science.gov (United States)

    Orfanos, Stelios

    2010-01-01

    In Greek traditional teaching, a lot of significant concepts are introduced in a sequence that does not provide the students with all the information necessary to comprehend them. We consider that understanding concepts and the relations among them is greatly facilitated by the use of modelling tools, taking into account that the modelling process forces students to change their vague, imprecise ideas into explicit causal relationships. It is not uncommon to find students who are able to solve problems by using complicated relations without getting a qualitative and in-depth grip on them. Researchers have already shown that students often have formal mathematical and physical knowledge without a qualitative understanding of basic concepts and relations. The aim of this communication is to present some of the results of our investigation into modelling activities related to kinematical concepts. For this purpose, we have used ModellingSpace, an environment that was especially designed to allow students from eleven to seventeen years old to express their ideas and gradually develop them. ModellingSpace enables students to build their own models and offers the choice of observing directly simulations of real objects and/or all the other alternative forms of representation (tables of values, graphic representations and bar charts). The students, in order to answer the questions, formulate hypotheses, create models, compare their hypotheses with the representations of their models, and modify or create other models when their hypotheses do not agree with the representations. In traditional ways of teaching, students are educated to utilize formulas as the most important strategy. Several times students recall formulas in order to utilize them, without getting an in-depth understanding of them. Students commonly use the quantitative type of reasoning, since it is primarily used in teaching, although it may not be fully understood by them

  2. Polymorphic ethyl alcohol as a model system for the quantitative study of glassy behaviour

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, H.E.; Schober, H.; Gonzalez, M.A. [Institut Max von Laue - Paul Langevin (ILL), 38 - Grenoble (France); Bermejo, F.J.; Fayos, R.; Dawidowski, J. [Consejo Superior de Investigaciones Cientificas, Madrid (Spain); Ramos, M.A.; Vieira, S. [Universidad Autonoma de Madrid (Spain)

    1997-04-01

    The nearly universal transport and dynamical properties of amorphous materials or glasses are investigated. Reasonably successful phenomenological models have been developed to account for these properties as well as the behaviour near the glass-transition, but quantitative microscopic models have had limited success. One hindrance to these investigations has been the lack of a material which exhibits glass-like properties in more than one phase at a given temperature. This report presents results of neutron-scattering experiments for one such material, ordinary ethyl alcohol, which promises to be a model system for future investigations of glassy behaviour. (author). 8 refs.

  3. Quantitative explanation of circuit experiments and real traffic using the optimal velocity model

    Science.gov (United States)

    Nakayama, Akihiro; Kikuchi, Macoto; Shibata, Akihiro; Sugiyama, Yuki; Tadaki, Shin-ichi; Yukawa, Satoshi

    2016-04-01

    We have experimentally confirmed that the occurrence of a traffic jam is a dynamical phase transition (Tadaki et al 2013 New J. Phys. 15 103034, Sugiyama et al 2008 New J. Phys. 10 033001). In this study, we investigate whether the optimal velocity (OV) model can quantitatively explain the results of experiments. The occurrence and non-occurrence of jammed flow in our experiments agree with the predictions of the OV model. We also propose a scaling rule for the parameters of the model. Using this rule, we obtain critical density as a function of a single parameter. The obtained critical density is consistent with the observed values for highway traffic.
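
    For reference, the optimal velocity model referred to above evolves each car's speed toward an optimal velocity that depends on its headway, dv_i/dt = a[V(Δx_i) - v_i]. The sketch below integrates this on a circular circuit with a commonly used tanh form of V; the parameter values are illustrative, not the fitted values from the experiments.

        # Sketch of the optimal velocity (OV) model on a circuit: each car relaxes
        # toward an optimal velocity V(headway) with sensitivity a.  The tanh form
        # of V and all numerical values are illustrative only.
        import numpy as np

        def V(dx, vmax=2.0, d=2.0, w=1.0):
            """Optimal velocity as a function of headway dx (tanh form)."""
            return 0.5 * vmax * (np.tanh((dx - d) / w) + np.tanh(d / w))

        def simulate(n_cars=30, length=70.0, a=1.0, dt=0.01, steps=50000):
            x = np.linspace(0.0, length, n_cars, endpoint=False)
            x += 0.01 * np.random.randn(n_cars)            # small perturbation
            v = V(np.full(n_cars, length / n_cars))        # start near uniform flow
            for _ in range(steps):
                headway = (np.roll(x, -1) - x) % length    # periodic circuit
                v = np.maximum(v + a * (V(headway) - v) * dt, 0.0)
                x = (x + v * dt) % length
            return x, v

        x, v = simulate()
        print("velocity spread after relaxation:", v.max() - v.min())  # large spread signals a jam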

  4. Phylogenetic ANOVA: The Expression Variance and Evolution Model for Quantitative Trait Evolution.

    Science.gov (United States)

    Rohlfs, Rori V; Nielsen, Rasmus

    2015-09-01

    A number of methods have been developed for modeling the evolution of a quantitative trait on a phylogeny. These methods have received renewed interest in the context of genome-wide studies of gene expression, in which the expression levels of many genes can be modeled as quantitative traits. We here develop a new method for joint analyses of quantitative traits within- and between species, the Expression Variance and Evolution (EVE) model. The model parameterizes the ratio of population to evolutionary expression variance, facilitating a wide variety of analyses, including a test for lineage-specific shifts in expression level, and a phylogenetic ANOVA that can detect genes with increased or decreased ratios of expression divergence to diversity, analogous to the famous Hudson Kreitman Aguadé (HKA) test used to detect selection at the DNA level. We use simulations to explore the properties of these tests under a variety of circumstances and show that the phylogenetic ANOVA is more accurate than the standard ANOVA (no accounting for phylogeny) sometimes used in transcriptomics. We then apply the EVE model to a mammalian phylogeny of 15 species typed for expression levels in liver tissue. We identify genes with high expression divergence between species as candidates for expression level adaptation, and genes with high expression diversity within species as candidates for expression level conservation and/or plasticity. Using the test for lineage-specific expression shifts, we identify several candidate genes for expression level adaptation on the catarrhine and human lineages, including genes putatively related to dietary changes in humans. We compare these results to those reported previously using a model which ignores expression variance within species, uncovering important differences in performance. We demonstrate the necessity for a phylogenetic model in comparative expression studies and show the utility of the EVE model to detect expression divergence

  5. Incorporation of caffeine into a quantitative model of fatigue and sleep.

    Science.gov (United States)

    Puckeridge, M; Fulcher, B D; Phillips, A J K; Robinson, P A

    2011-03-21

    A recent physiologically based model of human sleep is extended to incorporate the effects of caffeine on sleep-wake timing and fatigue. The model includes the sleep-active neurons of the hypothalamic ventrolateral preoptic area (VLPO), the wake-active monoaminergic brainstem populations (MA), their interactions with cholinergic/orexinergic (ACh/Orx) input to MA, and circadian and homeostatic drives. We model two effects of caffeine on the brain due to competitive antagonism of adenosine (Ad): (i) a reduction in the homeostatic drive and (ii) an increase in cholinergic activity. By comparing the model output to experimental data, constraints are determined on the parameters that describe the action of caffeine on the brain. In accord with experiment, the ranges of these parameters imply significant variability in caffeine sensitivity between individuals, with caffeine's effectiveness in reducing fatigue being highly dependent on an individual's tolerance, and past caffeine and sleep history. Although there are wide individual differences in caffeine sensitivity and thus in parameter values, once the model is calibrated for an individual it can be used to make quantitative predictions for that individual. A number of applications of the model are examined, using exemplar parameter values, including: (i) quantitative estimation of the sleep loss and the delay to sleep onset after taking caffeine for various doses and times; (ii) an analysis of the system's stable states showing that the wake state during sleep deprivation is stabilized after taking caffeine; and (iii) comparing model output successfully to experimental values of subjective fatigue reported in a total sleep deprivation study examining the reduction of fatigue with caffeine. This model provides a framework for quantitatively assessing optimal strategies for using caffeine, on an individual basis, to maintain performance during sleep deprivation.

  6. Data Science Innovations That Streamline Development, Documentation, Reproducibility, and Dissemination of Models in Computational Thermodynamics: An Application of Image Processing Techniques for Rapid Computation, Parameterization and Modeling of Phase Diagrams

    Science.gov (United States)

    Ghiorso, M. S.

    2014-12-01

    Computational thermodynamics (CT) represents a collection of numerical techniques that are used to calculate quantitative results from thermodynamic theory. In the Earth sciences, CT is most often applied to estimate the equilibrium properties of solutions, to calculate phase equilibria from models of the thermodynamic properties of materials, and to approximate irreversible reaction pathways by modeling these as a series of local equilibrium steps. The thermodynamic models that underlie CT calculations relate the energy of a phase to temperature, pressure and composition. These relationships are not intuitive and they are seldom well constrained by experimental data; often, intuition must be applied to generate a robust model that satisfies the expectations of use. As a consequence of this situation, the models and databases that support CT applications in geochemistry and petrology are tedious to maintain as new data and observations arise. What is required to make the process more streamlined and responsive is a computational framework that permits the rapid generation of observable outcomes from the underlying data/model collections, and importantly, the ability to update and re-parameterize the constitutive models through direct manipulation of those outcomes. CT procedures that take models/data to the experiential reference frame of phase equilibria involve function minimization, gradient evaluation, the calculation of implicit lines, curves and surfaces, contour extraction, and other related geometrical measures. All these procedures are the mainstay of image processing analysis. Since the commercial escalation of video game technology, open source image processing libraries have emerged (e.g., VTK) that permit real time manipulation and analysis of images. These tools find immediate application to CT calculations of phase equilibria by permitting rapid calculation and real time feedback between model outcome and the underlying model parameters.

  7. The optimal hyperspectral quantitative models for chlorophyll-a of chlorella vulgaris

    Science.gov (United States)

    Cheng, Qian; Wu, Xiuju

    2009-09-01

    Chlorophyll-a of Chlorella vulgaris has been related to its spectrum. Based on hyperspectral measurements of Chlorella vulgaris, the hyperspectral characteristics of Chlorella vulgaris and the optimal hyperspectral quantitative models for estimating its chlorophyll-a (Chla) were investigated in an in situ experiment. The results showed that the optimal hyperspectral quantitative model for Chlorella vulgaris was Chla = 180.5 + 1125787(R700)' + 2.4 x 10^9 [(R700)']^2. For Chlorella vulgaris, two reflectance crests occurred around 540 nm and 700 nm, and their locations shifted toward longer wavelengths as the Chl-a concentration increased. The reflectance of Chlorella vulgaris decreases as the Chl-a concentration increases at 540 nm, but increases at 700 nm.

  8. Business Scenario Evaluation Method Using Monte Carlo Simulation on Qualitative and Quantitative Hybrid Model

    Science.gov (United States)

    Samejima, Masaki; Akiyoshi, Masanori; Mitsukuni, Koshichiro; Komoda, Norihisa

    We propose a business scenario evaluation method using a qualitative and quantitative hybrid model. In order to evaluate business factors with qualitative causal relations, we introduce statistical values based on the propagation and combination of effects of business factors by Monte Carlo simulation. In propagating an effect, we divide the range of each factor by landmarks and decide the effect on a destination node based on the divided ranges. In combining effects, we decide the effect of each arc using its contribution degree and sum all effects. Through application to practical models, it is confirmed that there are no differences, at the 5% risk rate, between results obtained from quantitative relations and results obtained by the proposed method.
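
    A minimal sketch of the propagation-and-combination step, assuming invented landmark ranges, effect magnitudes and contribution degrees, is given below; it only illustrates how Monte Carlo sampling can turn qualitative causal arcs into statistical values.

        # Minimal Monte Carlo sketch of propagating uncertain business factors
        # through qualitative causal arcs.  Landmark ranges, per-band effects and
        # contribution degrees are invented for illustration only.
        import random

        LANDMARKS = [0.0, 0.3, 0.7, 1.0]          # divide each factor's range into bands
        EFFECT_PER_BAND = [-0.2, 0.0, +0.2]       # qualitative effect assigned to each band

        def band_effect(value):
            for lo, hi, eff in zip(LANDMARKS, LANDMARKS[1:], EFFECT_PER_BAND):
                if lo <= value <= hi:
                    return eff
            return 0.0

        def simulate_scenario(n_trials=10000):
            """Combine the effects of two upstream factors on one destination node."""
            contributions = {"demand": 0.6, "cost": 0.4}   # contribution degree of each arc
            outcomes = []
            for _ in range(n_trials):
                demand, cost = random.random(), random.random()
                effect = (contributions["demand"] * band_effect(demand)
                          + contributions["cost"] * band_effect(cost))
                outcomes.append(effect)
            outcomes.sort()
            return outcomes[len(outcomes) // 2], outcomes[int(0.05 * len(outcomes))]

        median, fifth_pct = simulate_scenario()
        print("median combined effect:", median, " 5th percentile:", fifth_pct)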

  9. From classical genetics to quantitative genetics to systems biology: modeling epistasis.

    Directory of Open Access Journals (Sweden)

    David L Aylor

    2008-03-01

    Full Text Available Gene expression data has been used in lieu of phenotype in both classical and quantitative genetic settings. These two disciplines have separate approaches to measuring and interpreting epistasis, which is the interaction between alleles at different loci. We propose a framework for estimating and interpreting epistasis from a classical experiment that combines the strengths of each approach. A regression analysis step accommodates the quantitative nature of expression measurements by estimating the effect of gene deletions plus any interaction. Effects are selected by significance such that a reduced model describes each expression trait. We show how the resulting models correspond to specific hierarchical relationships between two regulator genes and a target gene. These relationships are the basic units of genetic pathways and genomic system diagrams. Our approach can be extended to analyze data from a variety of experiments, multiple loci, and multiple environments.
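
    The regression step described above can be sketched as an ordinary least-squares fit with an interaction term; the toy design matrix and measurements below are invented for illustration, and the use of plain OLS is an assumption.

        # Hedged sketch of the regression step: model an expression trait as main
        # effects of two gene deletions plus their interaction (epistasis).
        import numpy as np

        def epistasis_fit(expr, del_a, del_b):
            """expr: expression values; del_a/del_b: 0/1 deletion indicators."""
            X = np.column_stack([np.ones_like(expr), del_a, del_b, del_a * del_b])
            beta, *_ = np.linalg.lstsq(X, expr, rcond=None)
            intercept, effect_a, effect_b, interaction = beta
            return effect_a, effect_b, interaction

        # Toy genotype design: wild type, single deletions, and double deletion.
        del_a = np.array([0, 0, 1, 1, 0, 0, 1, 1])
        del_b = np.array([0, 1, 0, 1, 0, 1, 0, 1])
        expr = np.array([1.0, 1.1, 0.5, 0.9, 1.0, 1.2, 0.4, 1.0])  # made-up measurements
        a, b, ab = epistasis_fit(expr, del_a, del_b)
        print(f"deletion A effect {a:.2f}, deletion B effect {b:.2f}, interaction {ab:.2f}")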

  10. Quantitative determination of Auramine O by terahertz spectroscopy with 2DCOS-PLSR model

    Science.gov (United States)

    Zhang, Huo; Li, Zhi; Chen, Tao; Qin, Binyi

    2017-09-01

    Residues of harmful dyes such as Auramine O (AO) in herb and food products threaten human health, so fast and sensitive techniques for detecting these residues are needed. As a powerful tool for substance detection, terahertz (THz) spectroscopy was used for the quantitative determination of AO by combining it with an improved partial least-squares regression (PLSR) model in this paper. The absorbance of herbal samples with different concentrations was obtained by THz-TDS in the band between 0.2 THz and 1.6 THz. We applied two-dimensional correlation spectroscopy (2DCOS) to improve the PLSR model. This method highlighted the spectral differences between different concentrations, provided a clear criterion for the selection of the input interval, and improved the accuracy of the detection results. The experimental results indicated that the combination of THz spectroscopy and 2DCOS-PLSR is an excellent quantitative analysis method.
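
    As a sketch of how 2DCOS can feed a PLSR model, the code below computes the synchronous correlation spectrum of mean-centered dynamic spectra and keeps only the frequencies with the strongest autocorrelation before fitting the regression; the threshold and component count are illustrative, not the paper's exact interval-selection criterion.

        # Sketch of using the synchronous 2D correlation spectrum to pick
        # informative THz bands before PLS regression.  Threshold and component
        # count are illustrative choices.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def synchronous_2dcos(spectra):
            """spectra: (n_samples, n_freqs) absorbance at increasing AO concentration."""
            dynamic = spectra - spectra.mean(axis=0)             # mean-centered dynamic spectra
            return dynamic.T @ dynamic / (spectra.shape[0] - 1)  # synchronous correlation map

        def select_bands_and_fit(spectra, conc, n_components=4, quantile=0.75):
            phi = synchronous_2dcos(spectra)
            autopower = np.diag(phi)                             # diagonal = autocorrelation intensity
            keep = autopower >= np.quantile(autopower, quantile)
            model = PLSRegression(n_components=n_components).fit(spectra[:, keep], conc)
            return model, keep

        # model, keep = select_bands_and_fit(absorbance, concentrations)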

  11. A new quantitative model of ecological compensation based on ecosystem capital in Zhejiang Province, China.

    Science.gov (United States)

    Jin, Yan; Huang, Jing-feng; Peng, Dai-liang

    2009-04-01

    Ecological compensation is becoming one of the key multidisciplinary issues in the field of resources and environmental management. Considering the changing relationship between gross domestic product (GDP) and ecological capital (EC) based on remote sensing estimation, we construct a new quantitative model for estimating ecological compensation, using the county as the study unit, and determine a standard value so as to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in ecological compensation were significant among all the counties and districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile the conflicts among benefits in the economic, social, and ecological sectors.

  12. Quantitative mathematical modeling of PSA dynamics of prostate cancer patients treated with intermittent androgen suppression

    Institute of Scientific and Technical Information of China (English)

    Yoshito Hirata; Koichiro Akakura; Celestia S.Higano; Nicholas Bruchovsky; Kazuyuki Aihara

    2012-01-01

    If a mathematical model is to be used in the diagnosis, treatment, or prognosis of a disease, it must describe the inherent quantitative dynamics of the state. An ideal candidate disease is prostate cancer owing to the fact that it is characterized by an excellent biomarker, prostate-specific antigen (PSA), and also by a predictable response to treatment in the form of androgen suppression therapy. Despite a high initial response rate, the cancer will often relapse to a state of androgen independence which no longer responds to manipulations of the hormonal environment. In this paper, we present relevant background information and a quantitative mathematical model that potentially can be used in the optimal management of patients to cope with biochemical relapse as indicated by a rising PSA.

  13. Shortening the learning curve in endoscopic endonasal skull base surgery: a reproducible polymer tumor model for the trans-sphenoidal trans-tubercular approach to retro-infundibular tumors.

    Science.gov (United States)

    Berhouma, Moncef; Baidya, Nishanta B; Ismaïl, Abdelhay A; Zhang, Jun; Ammirati, Mario

    2013-09-01

    Endoscopic endonasal skull base surgery attracts an increasing number of young neurosurgeons. This recent technique requires specific technical skills for the approaches to non-pituitary tumors (expanded endoscopic endonasal surgery). Residents' busy schedules carry the risk of compromising their laboratory training by significantly limiting the time dedicated to dissections. To enhance and shorten the learning curve in expanded endoscopic endonasal skull base surgery, we propose a reproducible model based on the implantation of a polymer via an intracranial route to provide a pathological retro-infundibular expansive lesion accessible to a virgin expanded endoscopic endonasal route, avoiding the ethically debatable need for hundreds of pituitary cases in live patients before acquiring the desired skills. A polymer-based tumor model was implanted in 6 embalmed human heads via a microsurgical right fronto-temporal approach through the carotido-oculomotor cistern to mimic a retro-infundibular tumor. The tumor's position was verified by CT-scan. An endoscopic endonasal trans-sphenoidal trans-tubercular trans-planum approach was then carried out on a virgin route under neuronavigation tracking. Dissection of the tumor model from the displaced surrounding neurovascular structures reproduced the sensations and challenges of live surgery. Post-implantation CT-scan allowed pre-removal assessment of the tumor insertion and its relationships, as well as of the naso-sphenoidal anatomy, in preparation for the endoscopic approach. Training on easily reproducible retro-infundibular approaches in a context of pathologically distorted anatomy provides a unique opportunity to avoid the need for repetitive live surgeries to acquire skills for this kind of rare tumor, and may shorten the learning curve for endoscopic endonasal surgery. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Quantitative Modeling of Microbial Population Responses to Chronic Irradiation Combined with Other Stressors

    OpenAIRE

    Igor Shuryak; Ekaterina Dadachova

    2016-01-01

    Microbial population responses to combined effects of chronic irradiation and other stressors (chemical contaminants, other sub-optimal conditions) are important for ecosystem functioning and bioremediation in radionuclide-contaminated areas. Quantitative mathematical modeling can improve our understanding of these phenomena. To identify general patterns of microbial responses to multiple stressors in radioactive environments, we analyzed three data sets on: (1) bacteria isolated from soil co...

  15. Modelling Framework and the Quantitative Analysis of Distributed Energy Resources in Future Distribution Networks

    DEFF Research Database (Denmark)

    Han, Xue; Sandels, Claes; Zhu, Kun;

    2013-01-01

    , comprising distributed generation, active demand and electric vehicles. Subsequently, quantitative analysis was made on the basis of the current and envisioned DER deployment scenarios proposed for Sweden. Simulations are performed in two typical distribution network models for four seasons. The simulation … results show that in general the DER deployment brings in the possibilities to reduce the power losses and voltage drops by compensating power from the local generation and optimizing the local load profiles…

  16. Quantitative Mapping of Reversible Mitochondrial Complex I Cysteine Oxidation in a Parkinson Disease Mouse Model*

    OpenAIRE

    Danielson, Steven R.; Held, Jason M.; Oo, May; Riley, Rebeccah; Gibson, Bradford W.; Andersen, Julie K.

    2011-01-01

    Differential cysteine oxidation within mitochondrial Complex I has been quantified in an in vivo oxidative stress model of Parkinson disease. We developed a strategy that incorporates rapid and efficient immunoaffinity purification of Complex I followed by differential alkylation and quantitative detection using sensitive mass spectrometry techniques. This method allowed us to quantify the reversible cysteine oxidation status of 34 distinct cysteine residues out of a total 130 present in muri...

  17. Toxicity Mechanisms of the Food Contaminant Citrinin: Application of a Quantitative Yeast Model

    OpenAIRE

    Amparo Pascual-Ahuir; Elena Vanacloig-Pedros; Markus Proft

    2014-01-01

    Mycotoxins are important food contaminants and a serious threat for human nutrition. However, in many cases the mechanisms of toxicity for this diverse group of metabolites are poorly understood. Here we apply live cell gene expression reporters in yeast as a quantitative model to unravel the cellular defense mechanisms in response to the mycotoxin citrinin. We find that citrinin triggers a fast and dose dependent activation of stress responsive promoters such as GRE2 or SOD2. More specifical...

  18. Integration of CFD codes and advanced combustion models for quantitative burnout determination

    Energy Technology Data Exchange (ETDEWEB)

    Javier Pallares; Inmaculada Arauzo; Alan Williams [University of Zaragoza, Zaragoza (Spain). Centre of Research for Energy Resources and Consumption (CIRCE)

    2007-10-15

    CFD codes and advanced kinetics combustion models are extensively used to predict coal burnout in large utility boilers. Modelling approaches based on CFD codes can accurately solve the fluid dynamics equations involved in the problem but this is usually achieved by including simple combustion models. On the other hand, advanced kinetics combustion models can give a detailed description of the coal combustion behaviour by using a simplified description of the flow field, this usually being obtained from a zone-method approach. Both approximations correctly describe general trends in coal burnout, but fail to predict quantitative values. In this paper a new methodology which takes advantage of both approximations is described. In the first instance, CFD solutions were obtained for the combustion conditions in the furnace of the Lamarmora power plant (ASM Brescia, Italy) for a number of different conditions and for three coals. Then, these furnace conditions were used as inputs for a more detailed chemical combustion model to predict coal burnout. In this, devolatilization was modelled using a commercial macromolecular network pyrolysis model (FG-DVC). For char oxidation, an intrinsic reactivity approach including thermal annealing, ash inhibition and maceral effects was used. Results from the simulations were compared against plant experimental values, showing a reasonable agreement in trends and quantitative values. 28 refs., 4 figs., 4 tabs.

  19. A quantitative model of human DNA base excision repair. I. Mechanistic insights.

    Science.gov (United States)

    Sokhansanj, Bahrad A; Rodrigue, Garry R; Fitch, J Patrick; Wilson, David M

    2002-04-15

    Base excision repair (BER) is a multistep process involving the sequential activity of several proteins that cope with spontaneous and environmentally induced mutagenic and cytotoxic DNA damage. Quantitative kinetic data on single proteins of BER have been used here to develop a mathematical model of the BER pathway. This model was then employed to evaluate mechanistic issues and to determine the sensitivity of pathway throughput to altered enzyme kinetics. Notably, the model predicts considerably less pathway throughput than observed in experimental in vitro assays. This finding, in combination with the effects of pathway cooperativity on model throughput, supports the hypothesis of cooperation during abasic site repair and between the apurinic/apyrimidinic (AP) endonuclease, Ape1, and the 8-oxoguanine DNA glycosylase, Ogg1. The quantitative model also predicts that for 8-oxoguanine and hydrolytic AP site damage, short-patch Polbeta-mediated BER dominates, with minimal switching to the long-patch subpathway. Sensitivity analysis of the model indicates that the Polbeta-catalyzed reactions have the most control over pathway throughput, although other BER reactions contribute to pathway efficiency as well. The studies within represent a first step in a developing effort to create a predictive model for BER cellular capacity.
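
    In the same spirit as the pathway model described above, the sketch below integrates a chain of Michaelis-Menten steps (glycosylase, Ape1, Pol-beta/ligase) to follow lesions through to repaired product; all kinetic constants are placeholders rather than the published parameter values.

        # Hedged sketch of a sequential-pathway model: damaged sites flow through
        # three enzymatic steps, each treated as a Michaelis-Menten reaction.
        import numpy as np
        from scipy.integrate import solve_ivp

        # (Vmax, Km) for each step, in arbitrary units (placeholders only)
        STEPS = [(0.5, 2.0),   # glycosylase: lesion -> abasic (AP) site
                 (1.0, 1.0),   # Ape1: AP site -> nicked intermediate
                 (0.3, 1.5)]   # Pol-beta + ligase: nicked intermediate -> repaired

        def ber_rhs(t, y):
            """y = [lesion, AP site, nicked intermediate, repaired product]."""
            rates = [vmax * y[i] / (km + y[i]) for i, (vmax, km) in enumerate(STEPS)]
            dy = np.zeros_like(y)
            dy[0] = -rates[0]
            for i in (1, 2):
                dy[i] = rates[i - 1] - rates[i]
            dy[3] = rates[2]
            return dy

        sol = solve_ivp(ber_rhs, (0.0, 120.0), [10.0, 0.0, 0.0, 0.0])
        print("fraction repaired after 120 time units:", sol.y[3, -1] / 10.0)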

  20. A quantitative comparison of the TERA modeling and DFT magnetic resonance image reconstruction techniques.

    Science.gov (United States)

    Smith, M R; Nichols, S T; Constable, R T; Henkelman, R M

    1991-05-01

    The resolution of magnetic resonance images reconstructed using the discrete Fourier transform (DFT) algorithm is limited by the effective window generated by the finite data length. The transient error reconstruction approach (TERA) is an alternative reconstruction method based on autoregressive moving average (ARMA) modeling techniques. Quantitative measurements comparing the truncation artifacts present during DFT and TERA image reconstruction show that the modeling method substantially reduces these artifacts on "full" (256 X 256), "truncated" (256 X 192), and "severely truncated" (256 X 128) data sets without introducing the global amplitude distortion found in other modeling techniques. Two global measures for determining the success of modeling are suggested. Problem areas for one-dimensional modeling are examined and reasons for considering two-dimensional modeling discussed. Analysis of both medical and phantom data reconstructions are presented.

  1. Quantitative nucleation and growth kinetics of gold nanoparticles via model-assisted dynamic spectroscopic approach.

    Science.gov (United States)

    Zhou, Yao; Wang, Huixuan; Lin, Wenshuang; Lin, Liqin; Gao, Yixian; Yang, Feng; Du, Mingming; Fang, Weiping; Huang, Jiale; Sun, Daohua; Li, Qingbiao

    2013-10-01

    In the absence of quantitative experimental data and/or kinetic models that could mathematically depict the redox chemistry and the crystallization issue, the bottom-up formation kinetics of gold nanoparticles (GNPs) remain a challenge. We measured the dynamic regime of GNPs synthesized by l-ascorbic acid (representing a chemical approach) and/or foliar aqueous extract (a biogenic approach) via in situ spectroscopic characterization and established a redox-crystallization model which allows quantitative and separate parameterization of the nucleation and growth processes. The main results can be summarized as follows: (I) an efficient approach, i.e., dynamic in situ spectroscopic characterization assisted by the redox-crystallization model, was established for quantitative analysis of the overall formation kinetics of GNPs in solution; (II) formation of GNPs by the chemical and the biogenic approaches experienced a slow nucleation stage followed by a growth stage which behaved as a mixed-order reaction, and, unlike the chemical approach, the biogenic method involved heterogeneous nucleation; (III) biosynthesis of flaky GNPs was a kinetically controlled process favored by relatively slow redox chemistry; and (IV) although GNP formation consists of two aspects, namely the redox chemistry and the crystallization issue, the latter was the rate-determining event that controls the dynamic regime of the whole physicochemical process.

  2. A quantitative approach for comparing modeled biospheric carbon flux estimates across regional scales

    Directory of Open Access Journals (Sweden)

    D. N. Huntzinger

    2010-10-01

    Full Text Available Given the large differences between biospheric model estimates of regional carbon exchange, there is a need to understand and reconcile the predicted spatial variability of fluxes across models. This paper presents a set of quantitative tools that can be applied for comparing flux estimates in light of the inherent differences in model formulation. The presented methods include variogram analysis, variable selection, and geostatistical regression. These methods are evaluated in terms of their ability to assess and identify differences in spatial variability in flux estimates across North America among a small subset of models, as well as differences in the environmental drivers that appear to have the greatest control over the spatial variability of predicted fluxes. The examined models are the Simple Biosphere (SiB 3.0, Carnegie Ames Stanford Approach (CASA, and CASA coupled with the Global Fire Emissions Database (CASA GFEDv2, and the analyses are performed on model-predicted net ecosystem exchange, gross primary production, and ecosystem respiration. Variogram analysis reveals consistent seasonal differences in spatial variability among modeled fluxes at a 1°×1° spatial resolution. However, significant differences are observed in the overall magnitude of the carbon flux spatial variability across models, in both net ecosystem exchange and component fluxes. Results of the variable selection and geostatistical regression analyses suggest fundamental differences between the models in terms of the factors that control the spatial variability of predicted flux. For example, carbon flux is more strongly correlated with percent land cover in CASA GFEDv2 than in SiB or CASA. Some of these factors can be linked back to model formulation, and would have been difficult to identify simply by comparing net fluxes between models. Overall, the quantitative approach presented here provides a set of tools for comparing predicted grid-scale fluxes across
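
    The variogram analysis mentioned above can be sketched as follows: pairwise squared differences of gridded flux estimates are binned by separation distance to form an empirical semivariogram. The grid coordinates, lag bins and simplifications below are illustrative (anisotropy and variogram model fitting are omitted).

        # Sketch of an empirical semivariogram for gridded flux estimates, the
        # first of the comparison tools described above.
        import numpy as np

        def empirical_variogram(coords, flux, bin_edges):
            """coords: (n, 2) lon/lat of grid cells; flux: (n,) modeled flux values."""
            d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
            sq_diff = (flux[:, None] - flux[None, :]) ** 2
            iu = np.triu_indices(len(flux), k=1)          # count each pair once
            dists, sq = d[iu], sq_diff[iu]
            gamma = []
            for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
                mask = (dists >= lo) & (dists < hi)
                gamma.append(0.5 * sq[mask].mean() if mask.any() else np.nan)
            return np.array(gamma)

        # gamma = empirical_variogram(grid_coords, nee_estimates, np.arange(0, 21, 2))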

  3. Characterization and optimization of experimental variables within a reproducible bladder encrustation model and in vitro evaluation of the efficacy of urease inhibitors for the prevention of medical device-related encrustation.

    Science.gov (United States)

    Jones, David S; Djokic, Jasmina; Gorman, Sean P

    2006-01-01

    This study presents a reproducible, cost-effective in vitro encrustation model and, furthermore, describes the effects of components of the artificial urine and the presence of agents that modify the action of urease on encrustation on commercially available ureteral stents. The encrustation model involved the use of small-volume reactors (700 mL) containing artificial urine and employing an orbital incubator (at 37 degrees C) to ensure controlled stirring. The artificial urine contained sources of calcium and magnesium (both as chlorides), albumin and urease. Alteration of the ratio (% w/w) of calcium salt to magnesium salt affected the mass of encrustation, with the greatest encrustation noted whenever magnesium was excluded from the artificial urine. Increasing the concentration of albumin, designed to mimic the presence of protein in urine, significantly decreased the mass of both calcium and magnesium encrustation until a plateau was observed. Finally, exclusion of urease from the artificial urine significantly reduced encrustation due to the indirect effects of this enzyme on pH. Inclusion of the urease inhibitor, acetohydroxamic acid, or urease substrates (methylurea or ethylurea) into the artificial medium markedly reduced encrustation on ureteral stents. In conclusion, this study has described the design of a reproducible, cost-effective in vitro encrustation model. Encrustation was markedly reduced on biomaterials by the inclusion of agents that modify the action of urease. These agents may, therefore, offer a novel clinical approach to the control of encrustation on urological medical devices.

  4. How well do environmental archives of atmospheric mercury deposition in the Arctic reproduce rates and trends depicted by atmospheric models and measurements?

    Science.gov (United States)

    Goodsite, M E; Outridge, P M; Christensen, J H; Dastoor, A; Muir, D; Travnikov, O; Wilson, S

    2013-05-01

    This review compares the reconstruction of atmospheric Hg deposition rates and historical trends over recent decades in the Arctic, inferred from Hg profiles in natural archives such as lake and marine sediments, peat bogs and glacial firn (permanent snowpack), against those predicted by three state-of-the-art atmospheric models based on global Hg emission inventories from 1990 onwards. Model veracity was first tested against atmospheric Hg measurements. Most of the natural archive and atmospheric data came from the Canadian-Greenland sectors of the Arctic, whereas spatial coverage was poor in other regions. In general, for the Canadian-Greenland Arctic, models provided good agreement with atmospheric gaseous elemental Hg (GEM) concentrations and trends measured instrumentally. However, there are few instrumented deposition data with which to test the model estimates of Hg deposition, and these data suggest models over-estimated deposition fluxes under Arctic conditions. Reconstructed GEM data from glacial firn on Greenland Summit showed the best agreement with the known decline in global Hg emissions after about 1980, and were corroborated by archived aerosol filter data from Resolute, Nunavut. The relatively stable or slowly declining firn and model GEM trends after 1990 were also corroborated by real-time instrument measurements at Alert, Nunavut, after 1995. However, Hg fluxes and trends in northern Canadian lake sediments and a southern Greenland peat bog did not exhibit good agreement with model predictions of atmospheric deposition since 1990, the Greenland firn GEM record, direct GEM measurements, or trends in global emissions since 1980. Various explanations are proposed to account for these discrepancies between atmosphere and archives, including problems with the accuracy of archive chronologies, climate-driven changes in Hg transfer rates from air to catchments, waters and subsequently into sediments, and post-depositional diagenesis in peat bogs

  5. Quantitative Regression Models for the Prediction of Chemical Properties by an Efficient Workflow.

    Science.gov (United States)

    Yin, Yongmin; Xu, Congying; Gu, Shikai; Li, Weihua; Liu, Guixia; Tang, Yun

    2015-10-01

    Rapid safety assessment is increasingly needed for the growing number of chemicals, both in chemical industries and by regulators around the world. Traditional experimental methods can no longer meet the current demand. With the development of information technology and the growth of experimental data, in silico modeling has become a practical and rapid alternative for the assessment of chemical properties, especially for the toxicity prediction of organic chemicals. In this study, a quantitative regression workflow was built in KNIME to predict chemical properties. With this regression workflow, quantitative values of chemical properties can be obtained, which is different from binary-classification or multi-classification models that can only give qualitative results. To illustrate the usage of the workflow, two predictive models were constructed based on datasets of Tetrahymena pyriformis toxicity and aqueous solubility. The q²cv and q²test values from 5-fold cross-validation and external validation for both types of models were greater than 0.7, which implies that our models are robust and reliable, and that the workflow is very convenient and efficient in the prediction of various chemical properties. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
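
    The reported statistics can be reproduced in outline as q² = 1 - PRESS/TSS, computed once from 5-fold cross-validation predictions and once on an external test set; the random forest below is a stand-in regressor, since the actual KNIME workflow nodes are not reproduced here.

        # Sketch of the q2 statistics: q2(cv) from 5-fold cross-validation and
        # q2(test) on an external set, each as 1 - sum of squared prediction
        # errors / total sum of squares.  The regressor is a stand-in.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_predict

        def q2(y_true, y_pred, y_ref_mean):
            press = np.sum((y_true - y_pred) ** 2)
            tss = np.sum((y_true - y_ref_mean) ** 2)
            return 1.0 - press / tss

        def evaluate(X_train, y_train, X_test, y_test):
            model = RandomForestRegressor(n_estimators=200, random_state=0)
            y_cv = cross_val_predict(model, X_train, y_train, cv=5)
            q2_cv = q2(y_train, y_cv, y_train.mean())
            model.fit(X_train, y_train)
            q2_test = q2(y_test, model.predict(X_test), y_train.mean())
            return q2_cv, q2_test   # both should exceed ~0.7 for a usable model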

  6. Forward and adjoint radiance Monte Carlo models for quantitative photoacoustic imaging

    Science.gov (United States)

    Hochuli, Roman; Powell, Samuel; Arridge, Simon; Cox, Ben

    2015-03-01

    In quantitative photoacoustic imaging, the aim is to recover physiologically relevant tissue parameters such as chromophore concentrations or oxygen saturation. Obtaining accurate estimates is challenging due to the non-linear relationship between the concentrations and the photoacoustic images. Nonlinear least squares inversions designed to tackle this problem require a model of light transport, the most accurate of which is the radiative transfer equation. This paper presents a highly scalable Monte Carlo model of light transport that computes the radiance in 2D using a Fourier basis to discretise in angle. The model was validated against a 2D finite element model of the radiative transfer equation, and was used to compute gradients of an error functional with respect to the absorption and scattering coefficient. It was found that adjoint-based gradient calculations were much more robust to inherent Monte Carlo noise than a finite difference approach. Furthermore, the Fourier angular discretisation allowed very efficient gradient calculations as sums of Fourier coefficients. These advantages, along with the high parallelisability of Monte Carlo models, makes this approach an attractive candidate as a light model for quantitative inversion in photoacoustic imaging.

  7. Reproducibility of a reaming test

    DEFF Research Database (Denmark)

    Pilny, Lukas; Müller, Pavel; De Chiffre, Leonardo

    2014-01-01

    The reproducibility of a reaming test was analysed to document its applicability as a performance test for cutting fluids. Reaming tests were carried out on a drilling machine using HSS reamers. Workpiece material was an austenitic stainless steel, machined using 4.75 m•min−1 cutting speed and 0 … a built-up edge occurrence hindering a robust evaluation of cutting fluid performance, if the data evaluation is based on surface finish only. Measurements of hole geometry provide documentation to recognise systematic error distorting the performance test…

  8. Reproducibility of a reaming test

    DEFF Research Database (Denmark)

    Pilny, Lukas; Müller, Pavel; De Chiffre, Leonardo

    2012-01-01

    The reproducibility of a reaming test was analysed to document its applicability as a performance test for cutting fluids. Reaming tests were carried out on a drilling machine using HSS reamers. Workpiece material was an austenitic stainless steel, machined using 4.75 m∙min-1 cutting speed and 0 … a built-up edge occurrence hindering a robust evaluation of cutting fluid performance, if the data evaluation is based on surface finish only. Measurements of hole geometry provide documentation to recognize systematic error distorting the performance test…

  9. Curating and Preparing High-Throughput Screening Data for Quantitative Structure-Activity Relationship Modeling.

    Science.gov (United States)

    Kim, Marlene T; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao

    2016-01-01

    Publicly available bioassay data often contains errors. Curating massive bioassay data, especially high-throughput screening (HTS) data, for Quantitative Structure-Activity Relationship (QSAR) modeling requires the assistance of automated data curation tools. Using automated data curation tools is beneficial to users, especially those without prior computer skills, because many platforms have been developed and optimized based on standardized requirements. As a result, the users do not need to extensively configure the curation tool prior to the application procedure. In this chapter, a freely available automatic tool to curate and prepare HTS data for QSAR modeling purposes will be described.

  10. Theoretical Modeling and Computer Simulations for the Origins and Evolution of Reproducing Molecular Systems and Complex Systems with Many Interactive Parts

    Science.gov (United States)

    Liang, Shoudan

    2000-01-01

    Our research effort has produced nine publications in peer-reviewed journals listed at the end of this report. The work reported here is in the following areas: (1) genetic network modeling; (2) autocatalytic model of pre-biotic evolution; (3) theoretical and computational studies of strongly correlated electron systems; (4) reducing thermal oscillations in the atomic force microscope; (5) transcription termination mechanism in prokaryotic cells; and (6) the low glutamine usage in thermophiles obtained by studying completely sequenced genomes. We discuss the main accomplishments of these publications.

  11. PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool

    KAUST Repository

    AlTurki, Musab

    2011-01-01

    Statistical model checking is an attractive formal analysis method for probabilistic systems such as, for example, cyber-physical systems which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov Chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties that it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.

  12. Exploring simple, transparent, interpretable and predictive QSAR models for classification and quantitative prediction of rat toxicity of ionic liquids using OECD recommended guidelines.

    Science.gov (United States)

    Das, Rudra Narayan; Roy, Kunal; Popelier, Paul L A

    2015-11-01

    The present study explores the chemical attributes of diverse ionic liquids responsible for their cytotoxicity in a rat leukemia cell line (IPC-81) by developing predictive classification as well as regression-based mathematical models. Simple and interpretable descriptors derived from a two-dimensional representation of the chemical structures along with quantum topological molecular similarity indices have been used for model development, employing unambiguous modeling strategies that strictly obey the guidelines of the Organization for Economic Co-operation and Development (OECD) for quantitative structure-activity relationship (QSAR) analysis. The structure-toxicity relationships that emerged from both classification and regression-based models were in accordance with the findings of some previous studies. The models suggested that the cytotoxicity of ionic liquids is dependent on the cationic surfactant action, long alkyl side chains, cationic lipophilicity as well as aromaticity, the presence of a dialkylamino substituent at the 4-position of the pyridinium nucleus and a bulky anionic moiety. The models have been transparently presented in the form of equations, thus allowing their easy transferability in accordance with the OECD guidelines. The models have also been subjected to rigorous validation tests proving their predictive potential and can hence be used for designing novel and "greener" ionic liquids. The major strength of the present study lies in the use of a diverse and large dataset, use of simple reproducible descriptors and compliance with the OECD norms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Multicomponent quantitative spectroscopic analysis without reference substances based on ICA modelling.

    Science.gov (United States)

    Monakhova, Yulia B; Mushtakova, Svetlana P

    2017-05-01

    A fast and reliable spectroscopic method for multicomponent quantitative analysis of targeted compounds with overlapping signals in complex mixtures has been established. The innovative analytical approach is based on the preliminary chemometric extraction of qualitative and quantitative information from UV-vis and IR spectral profiles of a calibration system using independent component analysis (ICA). Using this quantitative model and ICA resolution results of spectral profiling of "unknown" model mixtures, the absolute analyte concentrations in multicomponent mixtures and authentic samples were then calculated without reference solutions. Good recoveries generally between 95% and 105% were obtained. The method can be applied to any spectroscopic data that obey the Beer-Lambert-Bouguer law. The proposed method was tested on analysis of vitamins and caffeine in energy drinks and aromatic hydrocarbons in motor fuel with 10% error. The results demonstrated that the proposed method is a promising tool for rapid simultaneous multicomponent analysis in the case of spectral overlap and the absence/inaccessibility of reference materials.
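
    The record above describes resolving overlapping spectra with independent component analysis and then calibrating the resolved components against known concentrations. The sketch below illustrates that general workflow on synthetic two-component spectra using scikit-learn's FastICA; the data, band positions and calibration step are invented for illustration and do not reproduce the authors' procedure.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        wavelengths = np.linspace(200, 400, 300)

        # Two synthetic "pure component" spectra (Gaussian bands) and a set of
        # calibration mixtures with known concentrations.
        pure = np.vstack([np.exp(-((wavelengths - c) / 20.0) ** 2) for c in (250, 320)])
        conc = rng.uniform(0.1, 1.0, size=(10, 2))          # known calibration concentrations
        mixtures = conc @ pure + 0.01 * rng.standard_normal((10, 300))

        # ICA resolves the mixture spectra into independent component scores.
        ica = FastICA(n_components=2, random_state=0)
        scores = ica.fit_transform(mixtures)                 # per-sample component weights

        # Calibrate: affine map from ICA scores to the known concentrations,
        # then apply it to an "unknown" mixture measured later.
        X = np.column_stack([scores, np.ones(len(scores))])
        coef, *_ = np.linalg.lstsq(X, conc, rcond=None)

        unknown = np.array([0.7, 0.3]) @ pure
        unknown_scores = ica.transform(unknown.reshape(1, -1))
        est = np.column_stack([unknown_scores, [1.0]]) @ coef
        print("estimated concentrations:", est.round(2))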

  14. Quantitative analysis of markers of podocyte injury in the rat puromycin aminonucleoside nephropathy model.

    Science.gov (United States)

    Kakimoto, Tetsuhiro; Okada, Kinya; Fujitaka, Keisuke; Nishio, Masashi; Kato, Tsuyoshi; Fukunari, Atsushi; Utsumi, Hiroyuki

    2015-02-01

    Podocytes are an essential component of the renal glomerular filtration barrier, their injury playing an early and important role in progressive renal dysfunction. This makes quantification of podocyte marker immunoreactivity important for early detection of glomerular histopathological changes. Here we have specifically applied a state-of-the-art automated computational method of glomerulus recognition, which we have recently developed, to study quantitatively podocyte markers in a model with selective podocyte injury, namely the rat puromycin aminonucleoside (PAN) nephropathy model. We also retrospectively investigated mRNA expression levels of these markers in glomeruli which were isolated from the same formalin-fixed, paraffin-embedded kidney samples by laser microdissection. Among the examined podocyte markers, the immunopositive area and mRNA expression level of both podoplanin and synaptopodin were decreased in PAN glomeruli. The immunopositive area of podocin showed a slight decrease in PAN glomeruli, while its mRNA level showed no change. We have also identified a novel podocyte injury marker β-enolase, which was increased exclusively by podocytes in PAN glomeruli, similarly to another widely used marker, desmin. Thus, we have shown the specific application of a state-of-the-art computational method and retrospective mRNA expression analysis to quantitatively study the changes of various podocyte markers. The proposed methods will open new avenues for quantitative elucidation of renal glomerular histopathology. Copyright © 2014 Elsevier GmbH. All rights reserved.

  15. Sensitive quantitative assays for tau and phospho-tau in transgenic mouse models

    Science.gov (United States)

    Acker, Christopher M.; Forest, Stefanie K.; Zinkowski, Ray; Davies, Peter; d’Abramo, Cristina

    2012-01-01

    Transgenic mouse models have been an invaluable resource in elucidating the complex roles of Aβ and tau in Alzheimer’s disease. While many laboratories rely on qualitative or semi-quantitative techniques when investigating tau pathology, we have developed four Low-Tau Sandwich ELISAs that quantitatively assess different epitopes of tau relevant to Alzheimer’s disease: total tau, pSer-202, pThr-231, and pSer-396/404. In this study, after comparing our assays to commercially available ELISAs, we demonstrate our assays’ high specificity and quantitative capabilities using brain homogenates from tau transgenic mice (htau, JNPL3) and tau KO mice. All four ELISAs show excellent specificity for mouse and human tau, with no reactivity to tau KO animals. An age-dependent increase of serum tau in both tau transgenic models was also seen. Taken together, these assays are valuable methods to quantify tau and phospho-tau levels in transgenic animals, both by examining tau levels in brain and by measuring tau as a potential serum biomarker. PMID:22727277

  16. Enhancing the Quantitative Representation of Socioeconomic Conditions in the Shared Socio-economic Pathways (SSPs) using the International Futures Model

    Science.gov (United States)

    Rothman, D. S.; Siraj, A.; Hughes, B.

    2013-12-01

    The international research community is currently in the process of developing new scenarios for climate change research. One component of these scenarios is the Shared Socio-economic Pathways (SSPs), which describe a set of possible future socioeconomic conditions. These are presented in narrative storylines with associated quantitative drivers. The core quantitative drivers include total population, average GDP per capita, educational attainment, and urbanization at the global, regional, and national levels. At the same time there have been calls, particularly by the IAV community, for the SSPs to include additional quantitative information on other key social factors, such as income inequality, governance, health, and access to key infrastructures, which are discussed in the narratives. The International Futures system (IFs), based at the Pardee Center at the University of Denver, is able to provide forecasts of many of these indicators. IFs cannot use the SSP drivers as exogenous inputs, but we are able to create development pathways that closely reproduce the core quantitative drivers defined by the different SSPs, as well as incorporating assumptions on other key driving factors described in the qualitative narratives. In this paper, we present forecasts for additional quantitative indicators based upon the implementation of the SSP development pathways in IFs. These results will be of value to many researchers.

  17. Reply to the comment of S. Rayne on "QSAR model reproducibility and applicability: A case study of rate constants of hydroxyl radical reaction models applied to polybrominated diphenyl ethers and (benzo-)triazoles".

    Science.gov (United States)

    Gramatica, Paola; Kovarich, Simona; Roy, Partha Pratim

    2013-07-30

    We appreciate the interest of Dr. Rayne on our article and we completely agree that the dataset of (benzo-)triazoles, which were screened by the hydroxyl radical reaction quantitative structure-activity relationship (QSAR) model, was not only composed of benzo-triazoles but also included some simpler triazoles (without the condensed benzene ring), such as the chemicals listed by Dr. Rayne, as well as some related heterocycles (also few not aromatic). We want to clarify that in this article (as well as in other articles in which the same dataset was screened), for conciseness, the abbreviations (B)TAZs and BTAZs were used as general (and certainly too simplified) notations meaning an extended dataset of benzo-triazoles, triazoles, and related compounds. Copyright © 2013 Wiley Periodicals, Inc.

  18. A hierarchical statistical model for estimating population properties of quantitative genes

    Directory of Open Access Journals (Sweden)

    Wu Rongling

    2002-06-01

    Full Text Available Abstract Background Earlier methods for detecting major genes responsible for a quantitative trait rely critically upon a well-structured pedigree in which the segregation pattern of genes exactly follow Mendelian inheritance laws. However, for many outcrossing species, such pedigrees are not available and genes also display population properties. Results In this paper, a hierarchical statistical model is proposed to monitor the existence of a major gene based on its segregation and transmission across two successive generations. The model is implemented with an EM algorithm to provide maximum likelihood estimates for genetic parameters of the major locus. This new method is successfully applied to identify an additive gene having a large effect on stem height growth of aspen trees. The estimates of population genetic parameters for this major gene can be generalized to the original breeding population from which the parents were sampled. A simulation study is presented to evaluate finite sample properties of the model. Conclusions A hierarchical model was derived for detecting major genes affecting a quantitative trait based on progeny tests of outcrossing species. The new model takes into account the population genetic properties of genes and is expected to enhance the accuracy, precision and power of gene detection.
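
    The abstract mentions an EM algorithm for estimating the parameters of a major gene from phenotypic data. As a rough illustration only, the sketch below runs a textbook EM fit of a two-component normal mixture on synthetic phenotypes; it is not the hierarchical two-generation model of the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        # Synthetic phenotypes: a hypothetical "major gene" splits progeny into two means.
        y = np.concatenate([rng.normal(10.0, 1.0, 150), rng.normal(14.0, 1.0, 50)])

        # EM for a two-component normal mixture (mixing proportion p, means m, common sd s).
        p, m, s = 0.5, np.array([9.0, 15.0]), 2.0
        for _ in range(200):
            # E-step: posterior probability that an observation carries the high-value genotype.
            d1 = np.exp(-0.5 * ((y - m[0]) / s) ** 2)
            d2 = np.exp(-0.5 * ((y - m[1]) / s) ** 2)
            w = p * d2 / ((1.0 - p) * d1 + p * d2)
            # M-step: update mixing proportion, component means and the common sd.
            p = w.mean()
            m = np.array([np.average(y, weights=1.0 - w), np.average(y, weights=w)])
            s = np.sqrt(np.mean((1.0 - w) * (y - m[0]) ** 2 + w * (y - m[1]) ** 2))

        print(f"mixing proportion {p:.2f}, means {np.round(m, 2)}, sd {s:.2f}")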

  19. Downscaling SSPs in the GBM Delta - Integrating Science, Modelling and Stakeholders Through Qualitative and Quantitative Scenarios

    Science.gov (United States)

    Allan, Andrew; Barbour, Emily; Salehin, Mashfiqus; Munsur Rahman, Md.; Hutton, Craig; Lazar, Attila

    2016-04-01

    A downscaled scenario development process was adopted in the context of a project seeking to understand relationships between ecosystem services and human well-being in the Ganges-Brahmaputra delta. The aim was to link the concerns and priorities of relevant stakeholders with the integrated biophysical and poverty models used in the project. A 2-stage process was used to facilitate the connection between stakeholders concerns and available modelling capacity: the first to qualitatively describe what the future might look like in 2050; the second to translate these qualitative descriptions into the quantitative form required by the numerical models. An extended, modified SSP approach was adopted, with stakeholders downscaling issues identified through interviews as being priorities for the southwest of Bangladesh. Detailed qualitative futures were produced, before modellable elements were quantified in conjunction with an expert stakeholder cadre. Stakeholder input, using the methods adopted here, allows the top-down focus of the RCPs to be aligned with the bottom-up approach needed to make the SSPs appropriate at the more local scale, and also facilitates the translation of qualitative narrative scenarios into a quantitative form that lends itself to incorporation of biophysical and socio-economic indicators. The presentation will describe the downscaling process in detail, and conclude with findings regarding the importance of stakeholder involvement (and logistical considerations), balancing model capacity with expectations and recommendations on SSP refinement at local levels.

  20. Downscaling SSPs in Bangladesh - Integrating Science, Modelling and Stakeholders Through Qualitative and Quantitative Scenarios

    Science.gov (United States)

    Allan, A.; Barbour, E.; Salehin, M.; Hutton, C.; Lázár, A. N.; Nicholls, R. J.; Rahman, M. M.

    2015-12-01

    A downscaled scenario development process was adopted in the context of a project seeking to understand relationships between ecosystem services and human well-being in the Ganges-Brahmaputra delta. The aim was to link the concerns and priorities of relevant stakeholders with the integrated biophysical and poverty models used in the project. A 2-stage process was used to facilitate the connection between stakeholders concerns and available modelling capacity: the first to qualitatively describe what the future might look like in 2050; the second to translate these qualitative descriptions into the quantitative form required by the numerical models. An extended, modified SSP approach was adopted, with stakeholders downscaling issues identified through interviews as being priorities for the southwest of Bangladesh. Detailed qualitative futures were produced, before modellable elements were quantified in conjunction with an expert stakeholder cadre. Stakeholder input, using the methods adopted here, allows the top-down focus of the RCPs to be aligned with the bottom-up approach needed to make the SSPs appropriate at the more local scale, and also facilitates the translation of qualitative narrative scenarios into a quantitative form that lends itself to incorporation of biophysical and socio-economic indicators. The presentation will describe the downscaling process in detail, and conclude with findings regarding the importance of stakeholder involvement (and logistical considerations), balancing model capacity with expectations and recommendations on SSP refinement at local levels.

  1. A quantitative model to assess Social Responsibility in Environmental Science and Technology.

    Science.gov (United States)

    Valcárcel, M; Lucena, R

    2014-01-01

    The awareness of the impact of human activities on society and the environment is known as "Social Responsibility" (SR). It has been a topic of growing interest in many enterprises since the 1950s, and its implementation/assessment is nowadays supported by international standards. There is a tendency to extend its scope of application to other areas of human activity, such as Research, Development and Innovation (R + D + I). In this paper, a model for the quantitative assessment of Social Responsibility in Environmental Science and Technology (SR EST) is described in detail. This model is based on well-established written standards such as the EFQM Excellence Model and the ISO 26000:2010 Guidance on SR. The definition of five hierarchies of indicators, the transformation of qualitative information into quantitative data, and the dual procedure of self-evaluation and external evaluation are the milestones of the proposed model, which can be applied to environmental research centres and institutions. In addition, a simplified model that facilitates its implementation is presented in the article.

  2. A training set selection strategy for a universal near-infrared quantitative model.

    Science.gov (United States)

    Jia, Yan-Hua; Liu, Xu-Ping; Feng, Yan-Chun; Hu, Chang-Qin

    2011-06-01

    The purpose of this article is to propose an empirical solution to the problem of how many clusters of complex samples should be selected to construct the training set for a universal near-infrared quantitative model based on the Naes method. The sample spectra were hierarchically classified into clusters by Ward's algorithm and Euclidean distance. If the sample spectra were classified into two clusters, one-fiftieth of the largest heterogeneity value in the cluster with the larger variation was set as the threshold to determine the total number of clusters. One sample was then randomly selected from each cluster to construct the training set, so the number of samples in the training set equaled the number of clusters. In this study, 98 batches of rifampicin capsules with API contents ranging from 50.1% to 99.4% were studied with this strategy. The root mean square errors of cross validation and prediction for the rifampicin capsule model were 2.54% and 2.31%, respectively. We then evaluated this model in terms of outlier diagnostics, accuracy, precision, and robustness. We also used the training set selection strategy to revalidate the models for cefradine capsules, roxithromycin tablets, and erythromycin ethylsuccinate tablets, and the results were satisfactory. In conclusion, all results showed that this training set selection strategy assists in the quick and accurate construction of quantitative models using near-infrared spectroscopy.
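
    A minimal sketch of the training-set selection strategy described above, assuming that the "heterogeneity value" corresponds to the within-cluster merge heights returned by Ward's linkage; the spectra are random stand-ins and the details are a loose paraphrase, not the authors' exact procedure.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        rng = np.random.default_rng(2)
        spectra = rng.standard_normal((98, 120))       # stand-in for 98 NIR spectra

        # Ward's algorithm on Euclidean distances, as described above.
        Z = linkage(spectra, method="ward", metric="euclidean")

        # Split into two clusters, take the more heterogeneous one, and set the
        # cut-off distance to 1/50 of its largest within-cluster merge height.
        two = fcluster(Z, t=2, criterion="maxclust")
        spread = [spectra[two == k].std() for k in (1, 2)]
        worst = spectra[two == (1 + int(spread[1] > spread[0]))]
        threshold = linkage(worst, method="ward")[-1, 2] / 50.0

        # Final clustering at that threshold; one random training sample per cluster.
        labels = fcluster(Z, t=threshold, criterion="distance")
        training_idx = [rng.choice(np.where(labels == k)[0]) for k in np.unique(labels)]
        print(f"{labels.max()} clusters -> {len(training_idx)} training samples")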

  3. A semi-quantitative model for risk appreciation and risk weighing

    DEFF Research Database (Denmark)

    Bos, Peter M.J.; Boon, Polly E.; van der Voet, Hilko

    2009-01-01

    Risk managers need detailed information on (1) the type of effect, (2) the size (severity) of the expected effect(s) and (3) the fraction of the population at risk to decide on well-balanced risk reduction measures. A previously developed integrated probabilistic risk assessment (IPRA) model...... provides quantitative information on these three parameters. A semi-quantitative tool is presented that combines information on these parameters into easy-readable charts that will facilitate risk evaluations of exposure situations and decisions on risk reduction measures. This tool is based on a concept...... of health impact categorization that has been successfully in force for several years within several emergency planning programs. Four health impact categories are distinguished: No-Health Impact, Low-Health Impact, Moderate-Health Impact and Severe-Health Impact. Two different charts are presented...

  4. Modelling bacterial growth in quantitative microbiological risk assessment: is it possible?

    Science.gov (United States)

    Nauta, Maarten J

    2002-03-01

    Quantitative microbiological risk assessment (QMRA), predictive modelling and HACCP may be used as tools to increase food safety and can be integrated fruitfully for many purposes. However, when QMRA is applied for public health issues like the evaluation of the status of public health, existing predictive models may not be suited to model bacterial growth. In this context, precise quantification of risks is more important than in the context of food manufacturing alone. In this paper, the modular process risk model (MPRM) is briefly introduced as a QMRA modelling framework. This framework can be used to model the transmission of pathogens through any food pathway, by assigning one of six basic processes (modules) to each of the processing steps. Bacterial growth is one of these basic processes. For QMRA, models of bacterial growth need to be expressed in terms of probability, for example to predict the probability that a critical concentration is reached within a certain amount of time. In contrast, available predictive models are developed and validated to produce point estimates of population sizes and therefore do not fit with this requirement. Recent experience from a European risk assessment project is discussed to illustrate some of the problems that may arise when predictive growth models are used in QMRA. It is suggested that a new type of predictive models needs to be developed that incorporates modelling of variability and uncertainty in growth.

  5. Simulation of the hydrodynamic conditions of the eye to better reproduce the drug release from hydrogel contact lenses: experiments and modeling.

    Science.gov (United States)

    Pimenta, A F R; Valente, A; Pereira, J M C; Pereira, J C F; Filipe, H P; Mata, J L G; Colaço, R; Saramago, B; Serro, A P

    2016-12-01

    Currently, most in vitro drug release studies for ophthalmic applications are carried out in static sink conditions. Although this procedure is simple and useful for comparative studies, it does not describe adequately the drug release kinetics in the eye, considering the small tear volume and flow rates found in vivo. In this work, a microfluidic cell was designed and used to mimic the continuous, volumetric flow rate of tear fluid and its low volume. The suitable operation of the cell, in terms of uniformity and symmetry of flux, was proved using a numerical model based on the Navier-Stokes and continuity equations. The release profile of a model system (a hydroxyethyl methacrylate-based hydrogel (HEMA/PVP) for soft contact lenses (SCLs) loaded with diclofenac) obtained with the microfluidic cell was compared with that obtained in static conditions, showing that the kinetics of release in dynamic conditions is slower. The application of the numerical model demonstrated that the designed cell can be used to simulate drug release over the whole range of human eye tear film volumes and allowed estimation of the drug concentration in the volume of liquid in direct contact with the hydrogel. Knowledge of this concentration, which is significantly different from that measured in the experimental tests during the first hours of release, is critical to predict the toxicity of the drug release system and its in vivo efficacy. In conclusion, the use of the microfluidic cell in conjunction with the numerical model should be a valuable tool to design and optimize new therapeutic drug-loaded SCLs.

  6. Accessibility and Reproducibility of Stable High-qmin Steady-State Scenarios by q-profile+βN Model Predictive Control

    Science.gov (United States)

    Schuster, E.; Wehner, W.; Holcomb, C. T.; Victor, B.; Ferron, J. R.; Luce, T. C.

    2016-10-01

    The capability of combined q-profile and βN control to enable access to and repeatability of steady-state scenarios for qmin > 1.4 discharges has been assessed in DIII-D experiments. To steer the plasma to the desired state, model predictive control (MPC) of both the q-profile and βN numerically solves successive optimization problems in real time over a receding time horizon by exploiting efficient quadratic programming techniques. A key advantage of this control approach is that it allows for explicit incorporation of state/input constraints to prevent the controller from driving the plasma outside of stability/performance limits and obtain, as closely as possible, steady state conditions. The enabler of this feedback-control approach is a control-oriented model capturing the dominant physics of the q-profile and βN responses to the available actuators. Experiments suggest that control-oriented model-based scenario planning in combination with MPC can play a crucial role in exploring stability limits of scenarios of interest. Supported by the US DOE under DE-SC0010661.

  7. Large barrier, highly uniform and reproducible Ni-Si/4H-SiC forward Schottky diode characteristics: testing the limits of Tung's model

    Science.gov (United States)

    Omar, Sabih U.; Sudarshan, Tangali S.; Rana, Tawhid A.; Song, Haizheng; Chandrashekhar, M. V. S.

    2014-07-01

    We report highly ideal (n < 1.1), uniform nickel silicide (Ni-Si)/SiC Schottky barrier (1.60-1.67 eV with a standard deviation <2.8%) diodes, fabricated on 4H-SiC epitaxial layers grown by chemical vapour deposition. The barrier height was constant over a wide epilayer doping range of 1014-1016 cm-3, apart from a slight decrease consistent with image force lowering. This remarkable uniformity was achieved by careful optimization of the annealing of the Schottky interface to minimize non-idealities that could lead to inhomogeneity. Tung's barrier inhomogeneity model was used to quantify the level of inhomogeneity in the optimized annealed diodes. The estimated ‘bulk’ barrier height (1.75 eV) was consistent with the Shockley-Mott limit for the Ni-Si/4H-SiC interface, implying an unpinned Fermi level. But the model was not useful to explain the poor ideality in unoptimized, as-deposited Schottky contacts (n = 1.6 - 2.5). We show analytically and numerically that only idealities n < 1.21 can be explained using Tung's model, irrespective of material system, indicating that the barrier height inhomogeneity is not the only cause of poor ideality in Schottky diodes. For explaining this highly non-ideal behaviour, other factors (e.g. interface traps, morphological defects, extrinsic impurities, etc) need to be considered.

  8. Linking antisocial behavior, substance use, and personality: an integrative quantitative model of the adult externalizing spectrum.

    Science.gov (United States)

    Krueger, Robert F; Markon, Kristian E; Patrick, Christopher J; Benning, Stephen D; Kramer, Mark D

    2007-11-01

    Antisocial behavior, substance use, and impulsive and aggressive personality traits often co-occur, forming a coherent spectrum of personality and psychopathology. In the current research, the authors developed a novel quantitative model of this spectrum. Over 3 waves of iterative data collection, 1,787 adult participants selected to represent a range across the externalizing spectrum provided extensive data about specific externalizing behaviors. Statistical methods such as item response theory and semiparametric factor analysis were used to model these data. The model and assessment instrument that emerged from the research shows how externalizing phenomena are organized hierarchically and cover a wide range of individual differences. The authors discuss the utility of this model for framing research on the correlates and the etiology of externalizing phenomena.

  9. Precise Quantitative Analysis of Probabilistic Business Process Model and Notation Workflows

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Sharp, Robin

    2013-01-01

    We present a framework for modeling and analysis of real-world business workflows. We present a formalized core subset of the business process modeling and notation (BPMN) and then proceed to extend this language with probabilistic nondeterministic branching and general-purpose reward annotations....... We present an algorithm for the translation of such models into Markov decision processes (MDP) expressed in the syntax of the PRISM model checker. This enables precise quantitative analysis of business processes for the following properties: transient and steady-state probabilities, the timing......, occurrence and ordering of events, reward-based properties, and best- and worst- case scenarios. We develop a simple example of medical workflow and demonstrate the utility of this analysis in accurate provisioning of drug stocks. Finally, we suggest a path to building upon these techniques to cover...

  10. Reproducibility of esophageal scintigraphy using semi-solid yoghurt

    Energy Technology Data Exchange (ETDEWEB)

    Imai, Yukinori; Kinoshita, Manabu; Asakura, Yasushi; Kakinuma, Tohru; Shimoji, Katsunori; Fujiwara, Kenji; Suzuki, Kenji; Miyamae, Tatsuya [Saitama Medical School, Moroyama (Japan)

    1999-10-01

    Esophageal scintigraphy is a non-invasive method that evaluates esophageal function quantitatively. We applied a new technique using semi-solid yoghurt, which can evaluate esophageal function in a sitting position. To evaluate the reproducibility of this method, scintigraphy was performed in 16 healthy volunteers. From the results of four swallows, excluding the first one, the mean coefficients of variation in esophageal transit time and esophageal emptying time were 12.8% and 13.4%, respectively (intraday variation). As regards the interday variation, the method also showed good reproducibility based on the results obtained on 2 separate days. (author)
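
    Reproducibility here is quantified by the coefficient of variation across repeated swallows. A trivial sketch with hypothetical transit times:

        import numpy as np

        # Hypothetical esophageal transit times (s) for swallows 2-5 of one subject.
        transit_times = np.array([6.1, 7.0, 6.5, 5.8])

        cv = transit_times.std(ddof=1) / transit_times.mean() * 100.0
        print(f"coefficient of variation: {cv:.1f}%")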

  11. Quantitative model of cell cycle arrest and cellular senescence in primary human fibroblasts.

    Directory of Open Access Journals (Sweden)

    Sascha Schäuble

    Full Text Available Primary human fibroblasts in tissue culture undergo a limited number of cell divisions before entering a non-replicative "senescent" state. At early population doublings (PD), fibroblasts are proliferation-competent, displaying exponential growth. During further cell passaging, an increasing number of cells become cell cycle arrested and finally senescent. This transition from proliferating to senescent cells is driven by a number of endogenous and exogenous stress factors. Here, we have developed a new quantitative model for the stepwise transition from proliferating human fibroblasts (P) via reversibly cell cycle arrested (C) to irreversibly arrested senescent cells (S). In this model, the transition from P to C and to S is driven by a stress function γ and a cellular stress response function F which describes the time-delayed cellular response to experimentally induced irradiation stress. The application of this model, based on senescence marker quantification at the single-cell level, allowed us to discriminate between the cellular states P, C, and S and delivered the transition rates between the P, C and S states for different human fibroblast cell types. Model-derived quantification unexpectedly revealed significant differences in the stress response of different fibroblast cell lines. Evaluating marker specificity, we found that SA-β-Gal is a good quantitative marker for cellular senescence in WI-38 and BJ cells, however much less so in MRC-5 cells. Furthermore, we found that WI-38 cells are more sensitive to stress than BJ and MRC-5 cells. Thus, the explicit separation of stress induction from the cellular stress response, and the differentiation between the three cellular states P, C and S, allows for the first time a quantitative assessment of the response of primary human fibroblasts to endogenous and exogenous stress during cellular ageing.
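
    As a rough sketch of the stepwise P -> C -> S transition described above, the following ODE system couples a hypothetical stress input gamma(t) to a first-order delayed response F and simple transition rates; all functional forms and rate constants are invented for illustration, not the published model.

        import numpy as np
        from scipy.integrate import solve_ivp

        def gamma(t):
            # Hypothetical stress input: a pulse of irradiation stress around t = 10.
            return 1.0 if 10.0 <= t <= 12.0 else 0.1

        def rhs(t, y, tau=5.0, k_pc=0.3, k_cs=0.1):
            P, C, S, F = y
            # F is a delayed (first-order lag) cellular response to the stress gamma(t).
            dF = (gamma(t) - F) / tau
            dP = -k_pc * F * P                     # proliferating -> arrested
            dC = k_pc * F * P - k_cs * F * C       # arrested -> senescent
            dS = k_cs * F * C
            return [dP, dC, dS, dF]

        sol = solve_ivp(rhs, (0.0, 60.0), [1.0, 0.0, 0.0, 0.0], max_step=0.1)
        P, C, S = sol.y[0, -1], sol.y[1, -1], sol.y[2, -1]
        print(f"final fractions  P={P:.2f}  C={C:.2f}  S={S:.2f}")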

  12. Quantitative Analysis of the Security of Software-Defined Network Controller Using Threat/Effort Model

    Directory of Open Access Journals (Sweden)

    Zehui Wu

    2017-01-01

    Full Text Available SDN-based controllers, which are responsible for the configuration and management of the network, are the core of Software-Defined Networks. Current methods, which focus on security mechanisms, use qualitative analysis to estimate the security of controllers, frequently leading to inaccurate results. In this paper, we employ a quantitative approach to overcome this shortcoming. Based on an analysis of the controller threat model, we give formal models of the APIs, the protocol interfaces, and the data items of the controller, and further provide our Threat/Effort quantitative calculation model. With the help of the Threat/Effort model, we are able to compare not only the security of different versions of the same controller but also different kinds of controllers, and we provide a basis for controller selection and secure development. We evaluated our approach on four widely used SDN-based controllers: POX, OpenDaylight, Floodlight, and Ryu. The tests, whose outcomes are similar to those of traditional qualitative analysis, demonstrate that with our approach we are able to obtain specific security values for different controllers and present more accurate results.

  13. Flow assignment model for quantitative analysis of diverting bulk freight from road to railway.

    Science.gov (United States)

    Liu, Chang; Lin, Boliang; Wang, Jiaxi; Xiao, Jie; Liu, Siqi; Wu, Jianping; Li, Jian

    2017-01-01

    Since railway transport possesses the advantages of high volume and low carbon emissions, diverting some freight from road to railway will help reduce the negative environmental impacts associated with transport. This paper develops a flow assignment model for the quantitative analysis of diverting truck freight to railway. First, a general network which considers road transportation, railway transportation, handling and transferring is established according to all the steps in the whole transportation process. Then, general cost functions are formulated that embody the factors shippers pay attention to when choosing a mode and path; these functions include the congestion cost on roads and the capacity constraints of railways and freight stations. Based on the general network and general cost functions, a user equilibrium flow assignment model is developed to simulate the flow distribution on the general network under the condition that all shippers choose their transportation mode and path independently. Since the model is nonlinear and challenging to solve, we linearize it using tangent lines that constitute an envelope curve. Finally, a numerical example is presented to test the model and to show how to make a quantitative analysis of bulk freight modal shift between road and railway.
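
    To make the user-equilibrium idea concrete, the toy sketch below splits a single origin-destination freight flow between a congested road and a rail path so that both used paths end up with equal generalized cost; the cost functions and numbers are invented and far simpler than the paper's network model.

        # Toy user-equilibrium split of one origin-destination freight flow between
        # a congested road and a rail path (made-up cost functions, not the paper's).
        total_flow = 1000.0                     # tonnes/day to be assigned

        def road_cost(x):
            return 50.0 + 0.08 * x              # congestion: cost rises with road flow

        def rail_cost(x):
            return 90.0 + 0.01 * x              # includes handling/transfer, mild congestion

        lo, hi = 0.0, total_flow                # bracket on the road flow
        for _ in range(60):                     # bisection on the cost difference
            mid = 0.5 * (lo + hi)
            if road_cost(mid) > rail_cost(total_flow - mid):
                hi = mid                        # road overloaded -> shift flow to rail
            else:
                lo = mid
        road = 0.5 * (lo + hi)
        print(f"road {road:.0f} t/d, rail {total_flow - road:.0f} t/d, "
              f"equal cost ~{road_cost(road):.1f}")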

  14. Growth mixture modeling as an exploratory analysis tool in longitudinal quantitative trait loci analysis.

    Science.gov (United States)

    Chang, Su-Wei; Choi, Seung Hoan; Li, Ke; Fleur, Rose Saint; Huang, Chengrui; Shen, Tong; Ahn, Kwangmi; Gordon, Derek; Kim, Wonkuk; Wu, Rongling; Mendell, Nancy R; Finch, Stephen J

    2009-12-15

    We examined the properties of growth mixture modeling in finding longitudinal quantitative trait loci in a genome-wide association study. Two software packages are commonly used in these analyses: Mplus and the SAS TRAJ procedure. We analyzed the 200 replicates of the simulated data with these programs using three tests: the likelihood-ratio test statistic, a direct test of genetic model coefficients, and the chi-square test classifying subjects based on the trajectory model's posterior Bayesian probability. The Mplus program was not effective in this application due to its computational demands. The distributions of these tests applied to genes not related to the trait were sensitive to departures from Hardy-Weinberg equilibrium. The likelihood-ratio test statistic was not usable in this application because its distribution was far from the expected asymptotic distributions when applied to markers with no genetic relation to the quantitative trait. The other two tests were satisfactory. Power was still substantial when we used markers near the gene rather than the gene itself. That is, growth mixture modeling may be useful in genome-wide association studies. For markers near the actual gene, there was somewhat greater power for the direct test of the coefficients and lesser power for the posterior Bayesian probability chi-square test.

  15. Experimental Research on Quantitative Inversion Models of Suspended Sediment Concentration Using Remote Sensing Technology

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Research on quantitative models of suspended sediment concentration (SSC) using remote sensing technology is very important for understanding scouring and siltation variations in harbors and water channels. Based on a laboratory study of the relationship between different suspended sediment concentrations and reflectance spectra measured synchronously, quantitative inversion models of SSC based on single factors, band ratios and sediment parameters were developed, which provides an effective method to retrieve the SSC from satellite images. Results show that b1 (430-500 nm) and b3 (670-735 nm) are the optimal wavelengths for estimating lower SSC and b4 (780-835 nm) is the optimal wavelength for estimating higher SSC. Furthermore, the band ratio B2/B3 better simulates the variation of lower SSC, while B4/B1 estimates the higher SSC accurately. The inversion models developed from sediment parameters for higher and lower SSCs also achieve relatively higher accuracy than the single-factor and band-ratio models.
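
    A minimal illustration of a band-ratio inversion model: fit SSC against a reflectance band ratio on a synthetic calibration set and apply the fit to a hypothetical pixel. The band definitions and data below are invented, not the paper's measurements.

        import numpy as np

        rng = np.random.default_rng(3)
        # Synthetic calibration set: SSC (mg/L) and matching band reflectances.
        ssc = rng.uniform(20.0, 200.0, 40)
        b2 = 0.02 + 0.0004 * ssc + 0.001 * rng.standard_normal(40)
        b3 = 0.05 + 0.0001 * ssc + 0.001 * rng.standard_normal(40)

        ratio = b2 / b3
        slope, intercept = np.polyfit(ratio, ssc, 1)    # band-ratio inversion model

        measured_ratio = 1.2                            # hypothetical satellite pixel
        print(f"SSC estimate: {slope * measured_ratio + intercept:.1f} mg/L")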

  16. Research on Quantitative Models of Electric Vehicle Charging Stations Based on Principle of Energy Equivalence

    Directory of Open Access Journals (Sweden)

    Zhenpo Wang

    2013-01-01

    Full Text Available In order to meet the matching and planning requirements for charging stations in the market application of electric vehicles (EVs), and drawing on layout theories for gas stations, a location model of charging stations is established based on electricity consumption along the roads between cities, and a quantitative model of charging stations is presented based on the conversion of oil sales in a given area. Both models rest on the principle of energy-consumption equivalence when traditional vehicles are replaced with EVs. Defined data are adopted in the example analysis of two numerical cases, and the influence of factors such as the proportion of vehicle types and EV energy consumption on charging station layout and quantity is analyzed. The results show that the quantitative model of charging stations is reasonable and feasible. The number of EVs and their energy consumption have a more significant impact on the number of charging stations than the vehicle type proportion, which provides a basis for decision making on the construction layout of charging stations in practice.

  17. Quantitative analysis of anaerobic oxidation of methane (AOM) in marine sediments: A modeling perspective

    Science.gov (United States)

    Regnier, P.; Dale, A. W.; Arndt, S.; LaRowe, D. E.; Mogollón, J.; Van Cappellen, P.

    2011-05-01

    Recent developments in the quantitative modeling of methane dynamics and anaerobic oxidation of methane (AOM) in marine sediments are critically reviewed. The first part of the review begins with a comparison of alternative kinetic models for AOM. The roles of bioenergetic limitations, intermediate compounds and biomass growth are highlighted. Next, the key transport mechanisms in multi-phase sedimentary environments affecting AOM and methane fluxes are briefly treated, while attention is also given to additional controls on methane and sulfate turnover, including organic matter mineralization, sulfur cycling and methane phase transitions. In the second part of the review, the structure, forcing functions and parameterization of published models of AOM in sediments are analyzed. The six-orders-of-magnitude range in rate constants reported for the widely used bimolecular rate law for AOM emphasizes the limited transferability of this simple kinetic model and, hence, the need for more comprehensive descriptions of the AOM reaction system. The derivation and implementation of more complete reaction models, however, are limited by the availability of observational data. In this context, we attempt to rank the relative benefits of potential experimental measurements that should help to better constrain AOM models. The last part of the review presents a compilation of reported depth-integrated AOM rates (ΣAOM). These rates reveal the extreme variability of ΣAOM in marine sediments. The model results are further used to derive quantitative relationships between ΣAOM and the magnitude of externally impressed fluid flow, as well as between ΣAOM and the depth of the sulfate-methane transition zone (SMTZ). This review contributes to an improved understanding of the global significance of the AOM process, and helps identify outstanding questions and future directions in the modeling of methane cycling and AOM in marine sediments.
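
    The review above discusses the widely used bimolecular rate law for AOM, R = k[CH4][SO4]. The sketch below integrates that rate law for a closed system with illustrative concentrations and a made-up rate constant, simply to show how the kinetic model is used; real sediment models also include transport and other reactions.

        import numpy as np
        from scipy.integrate import solve_ivp

        k = 1.0e-3          # hypothetical bimolecular rate constant, (mM yr)^-1

        def aom(t, y):
            ch4, so4 = y
            r = k * ch4 * so4                 # bimolecular AOM rate law
            return [-r, -r]                   # CH4 and SO4 consumed 1:1

        # Initial CH4 2 mM, seawater-like SO4 28 mM (illustrative values only).
        sol = solve_ivp(aom, (0.0, 5000.0), [2.0, 28.0], max_step=10.0)
        print(f"after {sol.t[-1]:.0f} yr: CH4 = {sol.y[0, -1]:.3f} mM, SO4 = {sol.y[1, -1]:.2f} mM")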

  18. Quantitative modelling and analysis of a Chinese smart grid: a stochastic model checking case study

    DEFF Research Database (Denmark)

    Yuksel, Ender; Nielson, Hanne Riis; Nielson, Flemming

    2014-01-01

    Cyber-physical systems integrate information and communication technology with the physical elements of a system, mainly for monitoring and controlling purposes. The conversion of traditional power grid into a smart grid, a fundamental example of a cyber-physical system, raises a number of issues...... that require novel methods and applications. One of the important issues in this context is the verification of certain quantitative properties of the system. In this paper, we consider a specific Chinese smart grid implementation as a case study and address the verification problem for performance and energy...

  19. A Quantitative Model of Keyhole Instability Induced Porosity in Laser Welding of Titanium Alloy

    Science.gov (United States)

    Pang, Shengyong; Chen, Weidong; Wang, Wen

    2014-06-01

    Quantitative prediction of the porosity defects in deep penetration laser welding has generally been considered as a very challenging task. In this study, a quantitative model of porosity defects induced by keyhole instability in partial penetration CO2 laser welding of a titanium alloy is proposed. The three-dimensional keyhole instability, weld pool dynamics, and pore formation are determined by direct numerical simulation, and the results are compared to prior experimental results. It is shown that the simulated keyhole depth fluctuations could represent the variation trends in the number and average size of pores for the studied process conditions. Moreover, it is found that it is possible to use the predicted keyhole depth fluctuations as a quantitative measure of the average size of porosity. The results also suggest that due to the shadowing effect of keyhole wall humps, the rapid cooling of the surface of the keyhole tip before keyhole collapse could lead to a substantial decrease in vapor pressure inside the keyhole tip, which is suggested to be the mechanism by which shielding gas enters into the porosity.

  20. Three-dimensional modeling and quantitative analysis of gap junction distributions in cardiac tissue.

    Science.gov (United States)

    Lackey, Daniel P; Carruth, Eric D; Lasher, Richard A; Boenisch, Jan; Sachse, Frank B; Hitchcock, Robert W

    2011-11-01

    Gap junctions play a fundamental role in intercellular communication in cardiac tissue. Various types of heart disease including hypertrophy and ischemia are associated with alterations of the spatial arrangement of gap junctions. Previous studies applied two-dimensional optical and electron-microscopy to visualize gap junction arrangements. In normal cardiomyocytes, gap junctions were primarily found at cell ends, but can be found also in more central regions. In this study, we extended these approaches toward three-dimensional reconstruction of gap junction distributions based on high-resolution scanning confocal microscopy and image processing. We developed methods for quantitative characterization of gap junction distributions based on analysis of intensity profiles along the principal axes of myocytes. The analyses characterized gap junction polarization at cell ends and higher-order statistical image moments of intensity profiles. The methodology was tested in rat ventricular myocardium. Our analysis yielded novel quantitative data on gap junction distributions. In particular, the analysis demonstrated that the distributions exhibit significant variability with respect to polarization, skewness, and kurtosis. We suggest that this methodology provides a quantitative alternative to current approaches based on visual inspection, with applications in particular in characterization of engineered and diseased myocardium. Furthermore, we propose that these data provide improved input for computational modeling of cardiac conduction.
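
    A small sketch of the kind of profile statistics described above: given an intensity profile along a myocyte's principal axis, compute an end-polarization fraction and higher-order moments (skewness, excess kurtosis). The profile is synthetic and the 10% end window is an assumed convention, not taken from the paper.

        import numpy as np

        # Synthetic gap-junction intensity profile along a cell's principal axis
        # (normalized position 0..1, with signal concentrated at the cell ends).
        x = np.linspace(0.0, 1.0, 200)
        profile = (np.exp(-((x - 0.05) / 0.05) ** 2)
                   + 0.8 * np.exp(-((x - 0.95) / 0.05) ** 2) + 0.05)

        # Fraction of total intensity within 10% of either cell end ("polarization").
        ends = (x < 0.1) | (x > 0.9)
        polarization = profile[ends].sum() / profile.sum()

        # Treat the normalized profile as a distribution over position to get moments.
        w = profile / profile.sum()
        mean = (w * x).sum()
        var = (w * (x - mean) ** 2).sum()
        skewness = (w * (x - mean) ** 3).sum() / var ** 1.5
        kurt = (w * (x - mean) ** 4).sum() / var ** 2 - 3.0
        print(f"polarization {polarization:.2f}, skewness {skewness:.2f}, excess kurtosis {kurt:.2f}")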

  1. A Mouse Model That Reproduces the Developmental Pathways and Site Specificity of the Cancers Associated With the Human BRCA1 Mutation Carrier State.

    Science.gov (United States)

    Liu, Ying; Yen, Hai-Yun; Austria, Theresa; Pettersson, Jonas; Peti-Peterdi, Janos; Maxson, Robert; Widschwendter, Martin; Dubeau, Louis

    2015-10-01

    Predisposition to breast and extrauterine Müllerian carcinomas in BRCA1 mutation carriers is due to a combination of cell-autonomous consequences of BRCA1 inactivation on cell cycle homeostasis superimposed on cell-nonautonomous hormonal factors magnified by the effects of BRCA1 mutations on hormonal changes associated with the menstrual cycle. We used the Müllerian inhibiting substance type 2 receptor (Mis2r) promoter and a truncated form of the Follicle stimulating hormone receptor (Fshr) promoter to introduce conditional knockouts of Brca1 and p53 not only in mouse mammary and Müllerian epithelia, but also in organs that control the estrous cycle. Sixty percent of the double mutant mice developed invasive Müllerian and mammary carcinomas. Mice carrying heterozygous mutations in Brca1 and p53 also developed invasive tumors, albeit at a lesser (30%) rate, in which the wild type alleles were no longer present due to loss of heterozygosity. While mice carrying heterozygous mutations in both genes developed mammary tumors, none of the mice carrying only a heterozygous p53 mutation developed such tumors (P < 0.0001), attesting to a role for Brca1 mutations in tumor development. This mouse model is attractive to investigate cell-nonautonomous mechanisms associated with cancer predisposition in BRCA1 mutation carriers and to investigate the merit of chemo-preventive drugs targeting such mechanisms.

  2. A Mouse Model That Reproduces the Developmental Pathways and Site Specificity of the Cancers Associated With the Human BRCA1 Mutation Carrier State

    Directory of Open Access Journals (Sweden)

    Ying Liu

    2015-10-01

    Full Text Available Predisposition to breast and extrauterine Müllerian carcinomas in BRCA1 mutation carriers is due to a combination of cell-autonomous consequences of BRCA1 inactivation on cell cycle homeostasis superimposed on cell-nonautonomous hormonal factors magnified by the effects of BRCA1 mutations on hormonal changes associated with the menstrual cycle. We used the Müllerian inhibiting substance type 2 receptor (Mis2r) promoter and a truncated form of the Follicle stimulating hormone receptor (Fshr) promoter to introduce conditional knockouts of Brca1 and p53 not only in mouse mammary and Müllerian epithelia, but also in organs that control the estrous cycle. Sixty percent of the double mutant mice developed invasive Müllerian and mammary carcinomas. Mice carrying heterozygous mutations in Brca1 and p53 also developed invasive tumors, albeit at a lesser (30%) rate, in which the wild type alleles were no longer present due to loss of heterozygosity. While mice carrying heterozygous mutations in both genes developed mammary tumors, none of the mice carrying only a heterozygous p53 mutation developed such tumors (P < 0.0001), attesting to a role for Brca1 mutations in tumor development. This mouse model is attractive to investigate cell-nonautonomous mechanisms associated with cancer predisposition in BRCA1 mutation carriers and to investigate the merit of chemo-preventive drugs targeting such mechanisms.

  3. Quantitative model for the generic 3D shape of ICMEs at 1 AU

    Science.gov (United States)

    Démoulin, P.; Janvier, M.; Masías-Meza, J. J.; Dasso, S.

    2016-10-01

    Context. Interplanetary imagers provide 2D projected views of the densest plasma parts of interplanetary coronal mass ejections (ICMEs), while in situ measurements provide magnetic field and plasma parameter measurements along the spacecraft trajectory, that is, along a 1D cut. The data therefore only give a partial view of the 3D structures of ICMEs. Aims: By studying a large number of ICMEs, crossed at different distances from their apex, we develop statistical methods to obtain a quantitative generic 3D shape of ICMEs. Methods: In a first approach we theoretically obtained the expected statistical distribution of the shock-normal orientation from assuming simple models of 3D shock shapes, including distorted profiles, and compared their compatibility with observed distributions. In a second approach we used the shock normal and the flux rope axis orientations together with the impact parameter to provide statistical information across the spacecraft trajectory. Results: The study of different 3D shock models shows that the observations are compatible with a shock that is symmetric around the Sun-apex line as well as with an asymmetry up to an aspect ratio of around 3. Moreover, flat or dipped shock surfaces near their apex can only be rare cases. Next, the sheath thickness and the ICME velocity have no global trend along the ICME front. Finally, regrouping all these new results and those of our previous articles, we provide a quantitative ICME generic 3D shape, including the global shape of the shock, the sheath, and the flux rope. Conclusions: The obtained quantitative generic ICME shape will have implications for several aims. For example, it constrains the output of typical ICME numerical simulations. It is also a base for studying the transport of high-energy solar and cosmic particles during an ICME propagation as well as for modeling and forecasting space weather conditions near Earth.

  4. Impact Assessment of Abiotic Resources in LCA: Quantitative Comparison of Selected Characterization Models

    DEFF Research Database (Denmark)

    Rørbech, Jakob Thaysen; Vadenbo, Carl; Hellweg, Stefanie

    2014-01-01

    Resources have received significant attention in recent years resulting in development of a wide range of resource depletion indicators within life cycle assessment (LCA). Understanding the differences in assessment principles used to derive these indicators and the effects on the impact assessment...... results is critical for indicator selection and interpretation of the results. Eleven resource depletion methods were evaluated quantitatively with respect to resource coverage, characterization factors (CF), impact contributions from individual resources, and total impact scores. We included 2247...... groups, according to method focus and modeling approach, to aid method selection within LCA....

  5. Multi-factor models and signal processing techniques application to quantitative finance

    CERN Document Server

    Darolles, Serges; Jay, Emmanuelle

    2013-01-01

    With recent outbreaks of multiple large-scale financial crises, amplified by interconnected risk sources, a new paradigm of fund management has emerged. This new paradigm leverages "embedded" quantitative processes and methods to provide more transparent, adaptive, reliable and easily implemented "risk assessment-based" practices.This book surveys the most widely used factor models employed within the field of financial asset pricing. Through the concrete application of evaluating risks in the hedge fund industry, the authors demonstrate that signal processing techniques are an intere

  6. An equivalent magnetic dipoles model for quantitative damage recognition of broken wire

    Institute of Scientific and Technical Information of China (English)

    TAN Ji-wen; ZHAN Wei-xia; LI Chun-jing; WEN Yan; SHU Jie

    2005-01-01

    By simplifying a wire rope magnetized to saturation into magnetic dipoles of the same magnetic field strength, an equivalent magnetic dipole model is developed and the measuring principle for recognizing broken-wire damage is presented. The relevant calculation formulas are also deduced. A composite solution method for the nonlinear optimization is given. An example is given to illustrate the use of the equivalent magnetic dipole method for quantitative damage recognition, and it demonstrates that the result of this method is consistent with the real situation, so the method is valid and practical.

  7. Model development for quantitative evaluation of proliferation resistance of nuclear fuel cycles

    Energy Technology Data Exchange (ETDEWEB)

    Ko, Won Il; Kim, Ho Dong; Yang, Myung Seung

    2000-07-01

    This study addresses the quantitative evaluation of proliferation resistance, which is an important factor in alternative nuclear fuel cycle systems. In this study, a model was developed to quantitatively evaluate the proliferation resistance of nuclear fuel cycles. The proposed models were then applied to the Korean environment as a sample study to provide better references for the determination of a future nuclear fuel cycle system in Korea. In order to quantify the proliferation resistance of the nuclear fuel cycle, the proliferation resistance index was defined in imitation of an electrical circuit with an electromotive force and various electrical resistance components. The analysis of the proliferation resistance of nuclear fuel cycles has shown that the resistance index as defined herein can be used as an international measure of the relative risk of nuclear proliferation if the motivation index is appropriately defined. It has also shown that the proposed model can include political issues as well as technical ones relevant to proliferation resistance, and can consider all facilities and activities in a specific nuclear fuel cycle (from mining to disposal). In addition, sensitivity analyses on the sample study indicate that the direct disposal option in a country with high nuclear propensity may give rise to a higher risk of nuclear proliferation than the reprocessing option in a country with low nuclear propensity.
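
    As a loose illustration of the circuit analogy described above, the sketch below treats motivation as an electromotive force and barrier components as series resistances, so that a relative proliferation "risk" behaves like a current; every quantity and category name is invented for illustration and does not reflect the report's actual index definition.

        # Circuit analogy sketch: proliferation "current" = motivation / total resistance.
        # All numbers and categories below are invented for illustration only.
        motivation = 5.0                       # "electromotive force": national motivation index

        # Series resistances of barriers along one fuel-cycle path (mining ... disposal).
        barriers = {
            "material attractiveness": 8.0,
            "safeguards/detection":    6.0,
            "technical difficulty":   10.0,
            "institutional/political": 4.0,
        }
        total_resistance = sum(barriers.values())
        proliferation_risk = motivation / total_resistance

        print(f"total resistance {total_resistance:.1f}, relative risk {proliferation_risk:.3f}")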

  8. Spectral Quantitative Analysis Model with Combining Wavelength Selection and Topology Structure Optimization

    Directory of Open Access Journals (Sweden)

    Qian Wang

    2016-01-01

    Full Text Available Spectroscopy is an efficient and widely used quantitative analysis method. In this paper, a spectral quantitative analysis model combining wavelength selection and topology structure optimization is proposed. In the proposed method, a backpropagation neural network is adopted for building the component prediction model, and the simultaneous optimization of the wavelength selection and the topology structure of the neural network is realized by nonlinear adaptive evolutionary programming (NAEP). The hybrid chromosome in the binary scheme of NAEP has three parts. The first part represents the topology structure of the neural network, the second part represents the selection of wavelengths in the spectral data, and the third part represents the mutation parameters of NAEP. Two real flue gas datasets are used in the experiments. In order to demonstrate the effectiveness of the method, partial least squares with the full spectrum, partial least squares combined with a genetic algorithm, the uninformative variable elimination method, a backpropagation neural network with the full spectrum, a backpropagation neural network combined with a genetic algorithm, and the proposed method are used for building the component prediction model. Experimental results verify that the proposed method predicts more accurately and robustly and serves as a practical spectral analysis tool.

  9. The genetic architecture of heterochrony as a quantitative trait: lessons from a computational model.

    Science.gov (United States)

    Sun, Lidan; Sang, Mengmeng; Zheng, Chenfei; Wang, Dongyang; Shi, Hexin; Liu, Kaiyue; Guo, Yanfang; Cheng, Tangren; Zhang, Qixiang; Wu, Rongling

    2017-05-30

    Heterochrony is known as a developmental change in the timing or rate of ontogenetic events across phylogenetic lineages. It is a key concept synthesizing development into ecology and evolution to explore the mechanisms of how developmental processes impact on phenotypic novelties. A number of molecular experiments using contrasting organisms in developmental timing have identified specific genes involved in heterochronic variation. Beyond these classic approaches that can only identify single genes or pathways, quantitative models derived from current next-generation sequencing data serve as a more powerful tool to precisely capture heterochronic variation and systematically map a complete set of genes that contribute to heterochronic processes. In this opinion note, we discuss a computational framework of genetic mapping that can characterize heterochronic quantitative trait loci that determine the pattern and process of development. We propose a unifying model that charts the genetic architecture of heterochrony that perceives and responds to environmental perturbations and evolves over geologic time. The new model may potentially enhance our understanding of the adaptive value of heterochrony and its evolutionary origins, providing a useful context for designing new organisms that can best use future resources. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Gene Level Meta-Analysis of Quantitative Traits by Functional Linear Models.

    Science.gov (United States)

    Fan, Ruzong; Wang, Yifan; Boehnke, Michael; Chen, Wei; Li, Yun; Ren, Haobo; Lobach, Iryna; Xiong, Momiao

    2015-08-01

    Meta-analysis of genetic data must account for differences among studies including study designs, markers genotyped, and covariates. The effects of genetic variants may differ from population to population, i.e., heterogeneity. Thus, meta-analysis of combining data of multiple studies is difficult. Novel statistical methods for meta-analysis are needed. In this article, functional linear models are developed for meta-analyses that connect genetic data to quantitative traits, adjusting for covariates. The models can be used to analyze rare variants, common variants, or a combination of the two. Both likelihood-ratio test (LRT) and F-distributed statistics are introduced to test association between quantitative traits and multiple variants in one genetic region. Extensive simulations are performed to evaluate empirical type I error rates and power performance of the proposed tests. The proposed LRT and F-distributed statistics control the type I error very well and have higher power than the existing methods of the meta-analysis sequence kernel association test (MetaSKAT). We analyze four blood lipid levels in data from a meta-analysis of eight European studies. The proposed methods detect more significant associations than MetaSKAT and the P-values of the proposed LRT and F-distributed statistics are usually much smaller than those of MetaSKAT. The functional linear models and related test statistics can be useful in whole-genome and whole-exome association studies.

  11. WOMBAT: a tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML).

    Science.gov (United States)

    Meyer, Karin

    2007-11-01

    WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models is accommodated, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs suitable for large-scale analyses. Use of WOMBAT is illustrated with a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html.
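
    For reference, the class of model such REML packages fit can be written in the standard mixed-model form below; the notation is generic and is not taken from the WOMBAT manual.

      \[
      \mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\mathbf{u} + \mathbf{e},
      \qquad \mathbf{u} \sim N(\mathbf{0}, \mathbf{G}),
      \qquad \mathbf{e} \sim N(\mathbf{0}, \mathbf{R}),
      \]
      \[
      \mathbf{V} = \mathbf{Z}\mathbf{G}\mathbf{Z}^{\top} + \mathbf{R},
      \qquad
      \ell_{\mathrm{REML}}(\boldsymbol{\theta}) =
      -\tfrac{1}{2}\left( \log|\mathbf{V}| + \log|\mathbf{X}^{\top}\mathbf{V}^{-1}\mathbf{X}|
      + \mathbf{y}^{\top}\mathbf{P}\mathbf{y} \right) + \text{const},
      \]
      \[
      \mathbf{P} = \mathbf{V}^{-1} - \mathbf{V}^{-1}\mathbf{X}
      \left(\mathbf{X}^{\top}\mathbf{V}^{-1}\mathbf{X}\right)^{-1}\mathbf{X}^{\top}\mathbf{V}^{-1}.
      \]

    Restricted maximum likelihood maximizes \(\ell_{\mathrm{REML}}\) over the covariance parameters \(\boldsymbol{\theta}\) in \(\mathbf{G}\) and \(\mathbf{R}\), from which the genetic parameters (e.g., heritabilities and genetic correlations) are derived.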

  12. Modeling of microfluidic microbial fuel cells using quantitative bacterial transport parameters

    Science.gov (United States)

    Mardanpour, Mohammad Mahdi; Yaghmaei, Soheila; Kalantar, Mohammad

    2017-02-01

    The objective of the present study is the dynamic modeling of bioelectrochemical processes and the improvement of previous models using quantitative data on bacterial transport parameters. The main deficiency of previous MFC models with respect to the spatial distribution of biocatalysts is the assumption of an initial distribution of attached/suspended bacteria on the electrode or in the anolyte bulk, which is the foundation for biofilm formation. To remedy this shortcoming, chemotactic motility is quantified numerically in order to capture how suspended microorganisms distribute in the anolyte and/or attach to the anode surface to extend the biofilm. The spatial and temporal distributions of the bacteria, as well as the dynamic behavior of the anolyte and biofilm, are simulated. The performance of the microfluidic MFC as a chemotaxis assay is assessed by analyzing bacterial activity, substrate variation, the bioelectricity production rate, and the influence of the external resistance on the features of the biofilm and anolyte.
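
    A minimal one-dimensional sketch of the chemotactic transport idea is given below: suspended cells diffuse and drift up an attractant gradient toward the anode, with a crude attachment sink at the wall. The Keller-Segel-type formulation, the geometry, and every parameter value are illustrative assumptions rather than the model or data of the study.

      # 1D diffusion + chemotaxis toward the anode at x = 0; explicit finite
      # differences. All values below are illustrative, not from the paper.
      import numpy as np

      D, chi = 1e-10, 2e-10        # diffusion and chemotactic coefficients (m^2/s), assumed
      L, nx = 1e-3, 101            # channel depth (m) and number of grid points
      dx = L / (nx - 1)
      dt = 0.2 * dx**2 / D         # stable explicit time step
      x = np.linspace(0, L, nx)

      b = np.full(nx, 1e12)        # suspended cell density (cells/m^3), uniform start (assumed)
      S = 1.0 - x / L              # attractant profile, highest at the anode (assumed)
      dSdx = np.gradient(S, dx)

      for _ in range(2000):
          flux = -D * np.gradient(b, dx) + chi * b * dSdx      # diffusive + chemotactic flux
          b[1:-1] -= dt * (flux[2:] - flux[:-2]) / (2 * dx)    # explicit update in the interior
          b[0] *= 1.0 - 1e-3                                   # crude attachment sink at the anode
          b[-1] = b[-2]                                        # no-flux far boundary

      print(f"cells near the anode after {2000 * dt:.0f} s: {b[0]:.3e} per m^3")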

  13. Characterizing Pairwise Social Relationships Quantitatively: Interest-Oriented Mobility Modeling for Human Contacts in Delay Tolerant Networks

    Directory of Open Access Journals (Sweden)

    Jiaxu Chen

    2013-01-01

    Full Text Available Human mobility modeling has increasingly drawn the attention of researchers working on wireless mobile networks such as delay tolerant networks (DTNs) in the last few years. So far, a number of human mobility models have been proposed to reproduce people's social relationships, which strongly affect their daily movement behaviors. However, most of these models operate at the granularity of communities. This paper presents the interest-oriented human contacts (IHC) mobility model, which can reproduce social relationships at a pairwise granularity. In addition, IHC provides two methods to generate input parameters (interest vectors) based on the social interaction matrix of target scenarios. By comparing synthetic data generated by IHC with three different real traces, we validate the model as a good approximation of human mobility. Exhaustive experiments are also conducted to show that IHC predicts the performance of routing protocols well.
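
    One way to picture the pairwise granularity is sketched below: per-node interest vectors are fitted so that their inner products approximate a target social interaction matrix, and pairwise contact probabilities are then derived from interest similarity. The factorization used here is only an illustration and is not claimed to be either of the two generation methods that IHC provides.

      # Toy sketch: interest vectors from a social interaction matrix, then
      # pairwise contact probabilities. Illustrative only, not the IHC algorithm.
      import numpy as np

      rng = np.random.default_rng(2)

      def interest_vectors(M, k=3, iters=1000, lr=0.01):
          """Fit non-negative vectors v_i (n x k) so that v_i . v_j ~ M_ij."""
          n = M.shape[0]
          V = rng.random((n, k))
          for _ in range(iters):
              grad = (V @ V.T - M) @ V                 # gradient of ||VV' - M||^2 / 4
              V = np.clip(V - lr * grad, 0.0, None)    # projected gradient step
          return V

      def contact_probability(V, scale=0.5):
          """Per-timeslot meeting probability from pairwise interest similarity."""
          sim = V @ V.T
          np.fill_diagonal(sim, 0.0)
          return scale * sim / sim.max()

      # Target scenario: two loose communities plus one strong cross-community tie.
      M = np.full((6, 6), 0.1)
      M[:3, :3] = M[3:, 3:] = 0.6
      M[0, 5] = M[5, 0] = 0.9
      np.fill_diagonal(M, 1.0)

      V = interest_vectors(M)
      print(np.round(contact_probability(V), 2))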

  14. A Mathematical Calculation Model Using Biomarkers to Quantitatively Determine the Relative Source Proportion of Mixed Oils

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    It is difficult to identify the source(s) of mixed oils derived from multiple source rocks, and in particular the relative contribution of each source rock. Artificial mixing experiments using typical crude oils and ratios of different biomarkers show that the relative contributions change non-linearly when two oils with different biomarker concentrations are mixed. This may lead to incorrect conclusions if biomarker ratios and a simple binary linear equation are used to calculate the contribution of each end-member to the mixed oil. The changes of biomarker ratios with the mixing proportion of end-member oils are more complex in the ternary mixing model than in the binary mixing model. When four or more oils mix, the contribution of each end-member oil to the mixed oil cannot be calculated from biomarker ratios with a simple formula. Artificial mixing experiments on typical oils reveal that the absolute concentrations of biomarkers in the mixed oil vary linearly with the mixing proportion of each end-member, and mathematical inference verifies this linearity. Mathematical calculation methods that use the absolute concentrations or the ratios of biomarkers to quantitatively determine the proportion of each end-member in mixed oils are deduced from the results of the artificial experiments and by theoretical inference. The ratio of two biomarker compounds changes as a hyperbola with the mixing proportion in the binary mixing model, as a hyperboloid in the ternary mixing model, and as a hypersurface when more than three end-members are mixed. The mixing proportion of each end-member can be quantitatively determined with these mathematical models using the absolute concentrations and the ratios of biomarkers. The mathematical calculation model is more economical, convenient, accurate and reliable than conventional artificial mixing methods.
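
    Because the absolute concentrations mix linearly, the proportion of each end-member can be recovered with a constrained least-squares fit, as in the small sketch below; the concentration values are made up for illustration.

      # Linear unmixing of end-member proportions from absolute biomarker
      # concentrations; numbers are hypothetical.
      import numpy as np
      from scipy.optimize import nnls

      def unmix(end_members, mixed, weight=1e3):
          """end_members: (n_biomarkers x n_oils) concentrations; mixed: (n_biomarkers,)."""
          A = np.vstack([end_members, weight * np.ones(end_members.shape[1])])
          b = np.append(mixed, weight)                 # heavily weighted sum-to-one row
          fractions, _ = nnls(A, b)                    # non-negative least squares
          return fractions / fractions.sum()

      # Three hypothetical end-member oils characterized by four biomarkers (ppm).
      end_members = np.array([[120.,  40., 300.],
                              [ 15.,  90.,  10.],
                              [200., 220.,  55.],
                              [ 30.,   5.,  80.]])
      true_f = np.array([0.5, 0.2, 0.3])
      mixed = end_members @ true_f                     # a synthetic mixed oil
      print(unmix(end_members, mixed))                 # recovers ~[0.5, 0.2, 0.3]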

  15. Quantitative Hydraulic Models Of Early Land Plants Provide Insight Into Middle Paleozoic Terrestrial Paleoenvironmental Conditions

    Science.gov (United States)

    Wilson, J. P.; Fischer, W. W.

    2010-12-01

    Fossil plants provide useful proxies of Earth’s climate because plants are closely connected, through physiology and morphology, to the environments in which they lived. Recent advances in quantitative hydraulic models of plant water transport provide new insight into the history of climate by allowing fossils to speak directly to environmental conditions based on preserved internal anatomy. We report results of a quantitative hydraulic model applied to one of the earliest terrestrial plants preserved in three dimensions, the ~396 million-year-old vascular plant Asteroxylon mackiei. This model combines equations describing the rate of fluid flow through plant tissues with detailed observations of plant anatomy, allowing quantitative estimates of two critical aspects of plant function. First and foremost, results from these models quantify the supply of water to evaporative surfaces; second, results describe the ability of plant vascular systems to resist tensile damage from extreme environmental events, such as drought or frost. This approach permits quantitative comparisons of functional aspects of Asteroxylon with other extinct and extant plants, informs the quality of plant-based environmental proxies, and provides concrete data that can be input into climate models. Results indicate that despite their small size, water transport cells in Asteroxylon could supply a large volume of water to the plant's leaves, even greater than cells from some later-evolved seed plants. The smallest Asteroxylon tracheids have conductivities exceeding 0.015 m^2 / MPa * s, whereas Paleozoic conifer tracheids do not reach this threshold until they are three times wider. However, this increase in conductivity came at the cost of little to no adaptation for transport safety, placing the plant’s vegetative organs in jeopardy during drought events. Analysis of the thickness-to-span ratio of Asteroxylon’s tracheids suggests that environmental conditions of reduced relative
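
    The two quantities discussed here, lumen conductivity and the thickness-to-span safety ratio, can be estimated from conduit dimensions with the Hagen-Poiseuille relation, as in the rough sketch below; the dimensions used are illustrative rather than measurements of Asteroxylon.

      # Back-of-the-envelope conduit hydraulics; dimensions are illustrative.
      ETA = 1.002e-9        # viscosity of water at 20 C, in MPa*s

      def lumen_conductivity(diameter_m):
          """Area-specific conductivity of a circular conduit, m^2 / (MPa s)."""
          return diameter_m**2 / (32.0 * ETA)          # Hagen-Poiseuille per unit lumen area

      def safety_ratio(wall_thickness_m, lumen_span_m):
          """Thickness-to-span ratio squared, a proxy for resistance to implosion."""
          return (wall_thickness_m / lumen_span_m) ** 2

      # A narrow early-tracheid-like conduit versus a wider, conifer-like one.
      for name, d, t in [("narrow tracheid", 20e-6, 2e-6), ("wide tracheid", 60e-6, 3e-6)]:
          print(name, f"k = {lumen_conductivity(d):.3e} m^2/(MPa s),",
                f"(t/b)^2 = {safety_ratio(t, d):.4f}")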

  16. Toxicity challenges in environmental chemicals: Prediction of human plasma protein binding through quantitative structure-activity relationship (QSAR) models

    Science.gov (United States)

    The present study explores the merit of utilizing available pharmaceutical data to construct a quantitative structure-activity relationship (QSAR) for prediction of the fraction of a chemical unbound to plasma protein (Fub) in environmentally relevant compounds. Independent model...
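
    A generic version of this workflow reads as follows: compute molecular descriptors for a training set with measured unbound fractions and fit a regression model. The descriptor set, the random-forest learner, and the synthetic data in the sketch below are placeholders rather than the study's actual choices.

      # Generic QSAR-style regression from descriptors to the unbound fraction (Fub);
      # descriptors and target values are synthetic placeholders.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      n_compounds = 200
      descriptors = rng.normal(size=(n_compounds, 5))   # e.g. logP, MW, PSA, HBD, HBA (assumed)
      fub = 1.0 / (1.0 + np.exp(2.0 * descriptors[:, 0] - 0.5 * descriptors[:, 2]
                                + rng.normal(scale=0.3, size=n_compounds)))  # synthetic Fub in (0, 1)

      model = RandomForestRegressor(n_estimators=200, random_state=0)
      print("5-fold CV R^2:", cross_val_score(model, descriptors, fub, cv=5).mean())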

  17. Modeling Morphogenesis in silico and in vitro: Towards Quantitative, Predictive, Cell-based Modeling

    NARCIS (Netherlands)

    R.M.H. Merks (Roeland); P. Koolwijk

    2009-01-01

    Cell-based, mathematical models help make sense of morphogenesis—i.e. cells organizing into shape and pattern—by capturing cell behavior in simple, purely descriptive models. Cell-based models then predict the tissue-level patterns the cells produce collectively. The first

  18. MOLNs: A cloud platform for interactive, reproducible, and scalable spatial stochastic computational experiments in systems biology using PyURDME

    Science.gov (United States)

    Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas

    2017-01-01

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948

  19. Quantitative computational models of molecular self-assembly in systems biology

    Science.gov (United States)

    Thomas, Marcus; Schwartz, Russell

    2017-06-01

    Molecular self-assembly is the dominant form of chemical reaction in living systems, yet efforts at systems biology modeling are only beginning to appreciate the need for and challenges to accurate quantitative modeling of self-assembly. Self-assembly reactions are essential to nearly every important process in cell and molecular biology and handling them is thus a necessary step in building comprehensive models of complex cellular systems. They present exceptional challenges, however, to standard methods for simulating complex systems. While the general systems biology world is just beginning to deal with these challenges, there is an extensive literature dealing with them for more specialized self-assembly modeling. This review will examine the challenges of self-assembly modeling, nascent efforts to deal with these challenges in the systems modeling community, and some of the solutions offered in prior work on self-assembly specifically. The review concludes with some consideration of the likely role of self-assembly in the future of complex biological system models more generally.

  20. Quantitative Validation of a Human Body Finite Element Model Using Rigid Body Impacts.

    Science.gov (United States)

    Vavalle, Nicholas A; Davis, Matthew L; Stitzel, Joel D; Gayzik, F Scott

    2015-09-01

    Validation is a critical step in finite element model (FEM) development. This study focuses on the validation of the Global Human Body Models Consortium full body average male occupant FEM in five localized loading regimes: a chest impact, a shoulder impact, a thoracoabdominal impact, an abdominal impact, and a pelvic impact. Force and deflection outputs from the model were compared to experimental traces and corridors scaled to the 50th percentile male. Predicted fractures and injury severity measures were compared to evaluate the model's injury prediction capabilities. The methods of ISO/TS 18571 were used to quantitatively assess the fit of model outputs to experimental force and deflection traces. The model produced peak chest, shoulder, thoracoabdominal, abdominal, and pelvis forces of 4.8, 3.3, 4.5, 5.1, and 13.0 kN compared to 4.3, 3.2, 4.0, 4.0, and 10.3 kN in the experiments, respectively. The model predicted rib and pelvic fractures related to Abbreviated I
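
    The flavor of such an objective trace comparison can be illustrated with a simple magnitude-and-phase score between a simulated and an experimental force pulse, as in the sketch below; this stand-in is not the ISO/TS 18571 rating itself, whose components and weightings are defined in the standard.

      # Simplified comparison of a simulated and an experimental force trace:
      # relative magnitude error plus a cross-correlation phase shift.
      import numpy as np

      def compare_traces(sim, exp, dt):
          sim, exp = np.asarray(sim, float), np.asarray(exp, float)
          mag_error = np.linalg.norm(sim - exp) / np.linalg.norm(exp)        # relative RMS error
          lags = np.arange(-len(exp) + 1, len(exp))
          shift = lags[np.argmax(np.correlate(sim, exp, mode="full"))] * dt  # phase shift (s)
          return mag_error, shift

      # Toy half-sine impact pulses: the "model" peaks slightly later and higher.
      t = np.linspace(0.0, 0.05, 501)
      exp = 4.3e3 * np.sin(np.pi * t / 0.05)
      sim = 4.8e3 * np.sin(np.pi * np.clip(t - 0.002, 0.0, 0.05) / 0.05)
      err, lag = compare_traces(sim, exp, t[1] - t[0])
      print(f"relative magnitude error = {err:.2f}, phase shift = {lag * 1e3:.1f} ms")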